\begin{document}
\title{Symmetric Tensor Nuclear Norms}
\author{Jiawang Nie} \address{Department of Mathematics, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA, USA, 92093.} \email{[email protected]}
\subjclass[2010]{15A69, 65K05, 90C22}
\keywords{symmetric tensor, nuclear norm, nuclear decomposition, moment optimization, Lasserre relaxation}
\begin{abstract} This paper studies nuclear norms of symmetric tensors. As recently shown by Friedland and Lim, the nuclear norm of a symmetric tensor can be achieved at a symmetric decomposition. We discuss how to compute symmetric tensor nuclear norms, depending on the tensor order and the ground field. Lasserre relaxations are proposed for the computation. The theoretical properties of the relaxations are studied. For symmetric tensors, we can compute their nuclear norms, as well as the nuclear decompositions. The proposed methods can be extended to nonsymmetric tensors. \end{abstract}
\maketitle
\section{Introduction}
Let $\mathbb{F}$ be a field (either the real field $\mathbb{R}$ or the complex field $\mathbb{C}$). Let $\mathbb{F}^{n_1 \times \cdots \times n_m}$ be the space of tensors of order $m$ and dimension $(n_1, \ldots, n_m)$. Each tensor in $\mathbb{F}^{n_1 \times \cdots \times n_m}$ can be represented by an $m$-dimensional hypermatrix (or array) \[ \mathcal{A} = ( \mathcal{A}_{i_1 \ldots i_m} ) \] with each entry $\mathcal{A}_{i_1 \ldots i_m} \in \mathbb{F}$ and $1 \leq i_1 \leq n_1, \ldots, 1 \leq i_m \leq n_m$. For two tensors $\mathcal{A}, \mathcal{B} \in \mathbb{F}^{n_1 \times \cdots \times n_m}$, their {\it Hermitian inner product} is defined as \begin{equation} \label{inn:<A,B>} \mathcal{A} \bullet \mathcal{B} := \sum_{1 \leq i_j \leq n_j, j=1,\ldots,m } \mathcal{A}_{i_1 \ldots i_m} \bar{\mathcal{B}}_{i_1 \ldots i_m}. \end{equation} (The bar denotes the complex conjugate.) This induces the {\it Hilbert-Schmidt norm}
\begin{equation} \label{HSnm:||A||}
\| \mathcal{A} \| \, := \, \sqrt{ \mathcal{A} \bullet \mathcal{A} }. \end{equation} For vectors $x^{(1)} \in \mathbb{F}^{n_1}$, $\ldots$, $x^{(m)} \in \mathbb{F}^{n_m}$, $x^{(1)} \otimes \cdots \otimes x^{(m)}$ denotes their standard tensor product, i.e., \[ (x^{(1)} \otimes \cdots \otimes x^{(m)})_{i_1 \ldots i_m} = (x^{(1)})_{i_1} \cdots (x^{(m)})_{i_m}. \] The {\it spectral norm} of $\mathcal{A}$, depending on the field $\mathbb{F}$, is defined as
\begin{equation} \label{spc||A||:nsm}
\| \mathcal{A} \|_{\sigma,\mathbb{F}} := \max \{
| \mathcal{A} \bullet x^{(1)} \otimes \cdots \otimes x^{(m)} | : \,
\| x^{(j)} \| =1, x^{(j)} \in \mathbb{F}^{n_j} \}.
\end{equation} In the above, $\| \cdot \|$ denotes the standard Euclidean vector norm. The {\it nuclear norm} of $\mathcal{A}$, also depending on $\mathbb{F}$, is defined as
\begin{equation} \label{nuc||A||:nsy}
\| \mathcal{A} \|_{\ast,\mathbb{F}} := \min \left\{ \sum_{i=1}^r |\lambda_i|
\left| \begin{array}{c} \mathcal{A} = \sum_{i=1}^r \lambda_i v^{(i,1)} \otimes \cdots \otimes v^{(i,m)}, \\
\| v^{(i,j)} \| = 1, v^{(i,j)} \in \mathbb{F}^{n_j} \end{array}\right. \right \}.
\end{equation} The spectral norm $\| \cdot \|_{\sigma,\mathbb{F}}$
is dual to the nuclear norm $\| \cdot \|_{\ast,\mathbb{F}}$ (cf.~\cite{FriLim14b}): \[
\| \mathcal{A} \|_{\sigma,\mathbb{F}} = \max \{
| \mathcal{A} \bullet \mc{X} |: \, \| \mc{X} \|_{\ast,\mathbb{F}} = 1 \}, \] \[
\| \mathcal{A} \|_{\ast,\mathbb{F}} = \max \{
| \mathcal{A} \bullet \mc{Y} |: \, \| \mc{Y} \|_{\sigma,\mathbb{F}} = 1 \}. \] Spectral and nuclear tensor norms have important applications, e.g., in signal processing and blind identification (\cite{LimCom10,LimCom14}), tensor completion and recovery (\cite{MHWG,YuaZha15}), and low-rank tensor approximations (\cite{FriOtt13,NW14,ZLQ12}).
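For the matrix case $m=2$, these definitions recover the classical matrix norms: $\| \mathcal{A} \|_{\sigma,\mathbb{F}}$ is the largest singular value of $\mathcal{A}$, while $\| \mathcal{A} \|_{\ast,\mathbb{F}}$ is the sum of its singular values, i.e., the matrix spectral and nuclear (trace) norms.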
When the order $m>2$, the computation of spectral and nuclear norms is NP-hard (\cite{FriLim14b,FriLim16a,HL13}). In \cite{Der13}, the nuclear norms of several interesting tensors were studied. We refer to \cite{Land12,Lim13} for tensor theory and applications.
This paper focuses on nuclear norms of symmetric tensors. Recall that a tensor $\mathcal{A} \in \mathbb{F}^{n_1 \times \cdots \times n_m}$ is symmetric if $n_1 = \cdots = n_m$ and \[ \mathcal{A}_{i_1 \ldots i_m} = \mathcal{A}_{j_1 \ldots j_m} \] whenever $(i_1, \ldots, i_m)$ is a permutation of $(j_1, \ldots, j_m)$. Let $\mt{S}^m( \mathbb{F}^n)$ be the space of all $n$-dimensional symmetric tensors of order $m$ and over the field $\mathbb{F}$. For convenience, denote the symmetric tensor power \[ x^{\otimes m} \, := \, x \otimes \cdots \otimes x \quad (\mbox{$x$ is repeated $m$ times}). \] For a symmetric tensor $\mathcal{A} \in \mt{S}^m( \mathbb{F}^n)$, its spectral and nuclear norms can be simplified as (for $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$)
\begin{equation} \label{||A||sig:sym}
\| \mathcal{A} \|_{\sigma,\mathbb{F}} = \max \{
| \mathcal{A} \bullet x^{\otimes m} | : \,
\| x \| =1, x \in \mathbb{F}^n \}, \end{equation}
\begin{equation} \label{ast||A||:sym}
\| \mathcal{A} \|_{\ast,\mathbb{F}} = \min \left\{ \sum_{i=1}^r |\lambda_i| : \, \mathcal{A} = \sum_{i=1}^r \lambda_i (v_i)^{\otimes m},
\| v_i \| = 1, v_i \in \mathbb{F}^n, \lambda_i \in \mathbb{F} \right \}.
\end{equation} The equality \reff{||A||sig:sym} can be found in Banach \cite{banach}, Friedland \cite{Fri13}, Friedland and Ottaviani \cite{FriOtt13}, and Zhang et al. \cite{ZLQ12}. The equality~\reff{ast||A||:sym} was recently proved by Friedland and Lim \cite{FriLim14b}. In \reff{ast||A||:sym}, the decomposition of $\mathcal{A}$, for which the minimum is achieved, is called a {\it nuclear decomposition} as in \cite{FriLim14b}. When $\mathcal{A}$ is a real tensor, \[
\| \mathcal{A} \|_{\sigma,\mathbb{R}} \leq \| \mathcal{A} \|_{\sigma,\mathbb{C}}, \qquad
\| \mathcal{A} \|_{\ast,\mathbb{R}} \geq \| \mathcal{A} \|_{\ast,\mathbb{C}}. \] Both inequalities can be strict. Explicit examples can be found in \cite{FriLim14b} and in \S\ref{sc:num} of this paper.
The computation of tensor nuclear norms can be formulated as a moment optimization problem. When $\mathcal{A}$ is a real cubic symmetric tensor (i.e., $m=3$), Tang and Shah \cite{TanSha15} pointed out that the real nuclear norm $\| \mathcal{A} \|_{\ast,\mathbb{R}}$ is equal to the optimal value of the moment optimization problem \begin{equation} \label{muopt:A:m=3} \min \quad \int_S 1 \mt{d} \mu \quad s.t. \quad \mc{A} = \int_S x \otimes x \otimes x \mt{d} \mu \end{equation} where $\mu$ is a Borel measure variable whose support is contained in the unit sphere \begin{equation} \label{usph:S}
S \, := \, \{ x\in \mathbb{R}^n \mid \, \| x \| = 1 \}. \end{equation} The equality constraint in \reff{muopt:A:m=3} gives cubic moments of $\mu$, while the objective is the total mass of $\mu$. Lasserre's hierarchy of semidefinite relaxations \cite{Lasserre01,Lasserre08}
can be applied to solve \reff{muopt:A:m=3}, as proposed in \cite{TanSha15}. This gives a sequence of lower bounds, say, $\{ \rho_k \}$, for the real nuclear norm $\| \mathcal{A} \|_{\ast, \mathbb{R}}$. It can be shown that
$\rho_k \to \| \mathcal{A} \|_{\ast, \mathbb{R}}$ as $k \to \infty$. However, in computational practice, it is very difficult to check the convergence: how do we detect whether $\rho_k$ is equal to, or close to, $\| \mathcal{A} \|_{\ast, \mathbb{R}}$? When the convergence occurs, how can we get a nuclear decomposition? To the best of the author's knowledge, there was little prior work on these two questions. The major difficulty is that the flat extension condition (cf.~\cite{CurtoF,Fialkow,Helton}), which is often used for solving moment problems, is usually not satisfied when solving \reff{muopt:A:m=3}. Consequently, the exact value of the nuclear norm is often not known, although in theory it can be approximated arbitrarily closely. Moreover, when the order $m$ is even, or the field $\mathbb{F}=\mathbb{C}$, the nuclear norm $\| \mathcal{A} \|_{\ast, \mathbb{F}}$ is no longer equal to the optimal value of \reff{muopt:A:m=3}.
In this paper, we propose methods for computing nuclear norms of symmetric tensors, for both odd and even orders, over both the real and complex fields. We give detailed theoretical analysis and computational implementation. \begin{itemize}
\item When the order $m$ is odd and $\mathbb{F} = \mathbb{R}$, the nuclear norm $\| \mathcal{A} \|_{\ast, \mathbb{R}}$ equals the optimal value of \reff{muopt:A:m=3}, as shown in \cite{TanSha15}.
\item When the order $m$ is even and $\mathbb{F} = \mathbb{R}$, the nuclear norm
$\| \mathcal{A} \|_{\ast, \mathbb{R}}$ is no longer equal to the optimal value of
\reff{muopt:A:m=3}. We construct a new moment optimization problem whose optimal value equals $\| \mathcal{A} \|_{\ast, \mathbb{R}}$.
\item When $\mathbb{F} = \mathbb{C}$, we construct a new moment optimization problem whose optimal value equals $\| \mathcal{A} \|_{\ast, \mathbb{C}}$, for both even and odd orders.
\end{itemize} Lasserre relaxations in \cite{Lasserre01,Lasserre08}
are efficient for solving these moment optimization problems. We obtain a sequence of lower bounds for the nuclear norm $\| \mathcal{A} \|_{\ast, \mathbb{F}}$, denoted as $\{ \| \mathcal{A} \|_{k\ast, \mathbb{F}} \}_{k=1}^{\infty}$. (The integer $k$ is called the relaxation order.) We prove the asymptotic convergence
$\| \mathcal{A} \|_{k\ast, \mathbb{F}} \to \| \mathcal{A} \|_{\ast, \mathbb{F}}$
as the relaxation order $k \to \infty$. In computational practice, finite convergence often occurs, i.e., $\| \mathcal{A} \|_{k\ast, \mathbb{F}} = \| \mathcal{A} \|_{\ast, \mathbb{F}}$ for some $k$. We show how to detect this equality and how to compute nuclear decompositions. This can be done by solving a truncated moment problem. We also prove conditions that guarantee finite convergence.
The paper is organized as follows. Some preliminary results are given in Section~\ref{sc:prelim}. Section~\ref{sc:Real:oddm} discusses nuclear norms when the order $m$ is odd and $\mathbb{F} = \mathbb{R}$. Section~\ref{sc:Real:meven} discusses nuclear norms when $m$ is even and $\mathbb{F} = \mathbb{R}$. Section~\ref{sc:cpx} discusses nuclear norms when the field $\mathbb{F} = \mathbb{C}$. The numerical experiments are given in Section~\ref{sc:num}. The extensions to nonsymmetric tensors are given in Section~\ref{sc:exten}.
\section{Preliminaries} \label{sc:prelim}
\subsection*{Notation}\, The symbol $\mathbb{N}$ (resp., $\mathbb{R}$, $\mathbb{C}$) denotes the set of nonnegative integers (resp., real, complex numbers).
For $x=(x_1,\ldots,x_n)$ and $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}^n$, denote \[ x^\alpha := x_1^{\alpha_1}\cdots x_n^{\alpha_n}, \quad
|\alpha| := \alpha_1 + \cdots + \alpha_n. \] For a degree $d>0$, denote the sets of monomial powers \begin{equation} \label{N[0d]} \left\{\begin{array}{rcl}
\mathbb{N}^n_{[0,d]} &:=& \{ \alpha \in \mathbb{N}^n:\, 0\leq |\alpha| \leq d \}, \\
\mathbb{N}^n_{\{d\}} &:=& \{ \alpha \in \mathbb{N}^n:\, |\alpha| = d \}, \quad \mathbb{N}^n_{\{0,d\}} := \mathbb{N}^n_{\{d\}} \cup \{ 0 \}. \end{array} \right. \end{equation} Denote the vector of monomials: \[
[x]_{0,m} := ( x^\alpha )_{ \alpha \in \mathbb{N}^n_{ \{0,m\} } }. \] The symbol $\mathbb{R}[x] := \mathbb{R}[x_1,\ldots,x_n]$ denotes the ring of polynomials in $x:=(x_1,\ldots,x_n)$ with real coefficients, while $\mathbb{R}[x]_d$ denotes the set of polynomials in $\mathbb{R}[x]$ with degrees at most $d$. We use $\mathbb{R}[x]_d^{hom}$ to denote the set of homogeneous polynomials in $\mathbb{R}[x]$ of degree $d$. For the complex field $\mathbb{C}$, the sets $\mathbb{C}[x]$ and $\mathbb{C}[x]_d$ are defined similarly. The symbol $\deg(p)$ denotes the total degree of a polynomial $p$.
For $t\in \mathbb{R}$, $\lceil t\rceil$ (resp., $\lfloor t\rfloor$) denotes the smallest integer not smaller (resp., the largest integer not bigger) than $t$.
For a matrix $A$, $A^T$ denotes its transpose. For a symmetric matrix $X$, $X\succeq 0$ (resp., $X\succ 0$) means $X$ is positive semidefinite (resp., positive definite).
The symbol $e_i$ denotes the $i$th standard unit vector, and $e$ is the vector of all ones.
In the following, we review some basics in polynomial optimization and moment problems. We refer to \cite{Lasserre09,Lasserre15,Laurent} for details. A polynomial $p \in \mathbb{R}[x]$ is said to be a sum of squares (SOS) if $p = p_1^2+\cdots+ p_k^2$ for some $p_1,\ldots, p_k \in \mathbb{R}[x]$. The set of all SOS polynomials in $x$ is denoted as $\Sigma[x]$. For a degree $d$, denote the truncation \[ \Sigma[x]_d := \Sigma[x] \cap \mathbb{R}[x]_d. \] For a tuple $g=(g_1,\ldots,g_t)$ of polynomials, its {\it quadratic module} is the set \[ \mbox{Qmod}(g):= \Sigma[x] + g_1 \cdot \Sigma[x] + \cdots + g_t \cdot \Sigma[x]. \] The $k$th truncation of $\mbox{Qmod}(g)$ is the set \begin{equation} \label{Qk(g)} \mbox{Qmod}(g)_k := \Sigma[x]_{k} + g_1 \cdot \Sigma[x]_{d_1} + \cdots + g_t \cdot \Sigma[x]_{d_t} \end{equation} where each $d_i = k - \deg(g_i)$. Note that \[ \mbox{Qmod}(g)= \bigcup_{k\in \mathbb{N}} \mbox{Qmod}(g)_k. \] For a tuple $h=(h_1,\ldots,h_s)$ of polynomials, the ideal it generates is the set \[ \mbox{Ideal}(h) := h_1 \cdot \mathbb{R}[x] + \cdots + h_s \cdot \mathbb{R}[x]. \] The $k$th {\it truncation} of $\mbox{Ideal}(h)$ is the set \begin{equation} \label{Ik(h)} \mbox{Ideal}(h)_{k} \, := \, h_1 \cdot \mathbb{R}[x]_{k-\deg(h_1)} + \cdots + h_s \cdot \mathbb{R}[x]_{k-\deg(h_s)}. \end{equation} Clearly, $\mbox{Ideal}(h)=\bigcup_{k\in \mathbb{N}} \mbox{Ideal}(h)_{k}$.
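For instance, for $n=1$ and the single constraint tuple $g=(1-x_1^2)$, the truncation \reff{Qk(g)} reads $\mbox{Qmod}(g)_2 = \Sigma[x]_2 + (1-x_1^2)\cdot \Sigma[x]_0$, where $\Sigma[x]_0$ consists of the nonnegative constants.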
Let $g,h$ be as above. Consider the set \[ K = \{ x \in \mathbb{R}^n:\, h(x) = 0, \, g(x) \geq 0 \}. \] Clearly, if $f \in \mbox{Ideal}(h)+\mbox{Qmod}(g)$, then $f\geq 0$ on the set $K$. The reverse is also true under certain conditions. The set $\mbox{Ideal}(h)+\mbox{Qmod}(g)$ is said to be {\it archimedean} if
$N-\|x\|^2\in \mbox{Ideal}(h)+\mbox{Qmod}(g)$ for some scalar $N>0$.
\begin{theorem} \label{thm:Put} (\cite{Putinar}) Let $h,g,K$ be as above. Assume $\mbox{Ideal}(h)+\mbox{Qmod}(g)$ is archimedean. If a polynomial $f > 0$ on $K$, then $f \in \mbox{Ideal}(h)+\mbox{Qmod}(g)$. \end{theorem}
The above theorem is called Putinar's Positivstellensatz in the literature. Interestingly, when $f \geq 0$ on $K$, we also have $f \in \mbox{Ideal}(h)+\mbox{Qmod}(g)$, under general optimality conditions (cf.~\cite{opcd}).
\iffalse
\begin{lemma} \label{agi:sos} (i) When $m=2m_0$ is even, we have \[ \left( \frac{1}{ \sqrt{m} } \right)^{m} - t_1 t_2 \cdots t_m \in \mbox{Ideal}(1-t_1^2-\cdots-t_m^2)_{2m_0} + \Sigma[t_1, \ldots, t_m]_{2m_0}. \] (ii) When $m=2m_0-1$ is odd, we have \[ \left( \frac{1}{\sqrt{m}} \right)^{ m } - t_1 t_2 \cdots t_{m} \in \mbox{Ideal}(1-t_1^2-\cdots-t_m^2)_{2m_0} + \Sigma[t_1, \ldots, t_m]_{2m_0}. \]
(iii) For all $\alpha$ with $| \alpha | \leq 2m_0$, it holds that \[
1 - x^\alpha \in \mbox{Ideal}(1-\|x\|^2)_{2m_0} + \Sigma[x]_{2m_0}. \] \end{lemma} \begin{proof} (i) When $m = 2m_0$ is even, it is known that \[ \frac{1}{m}( t_1^m + \cdots + t_m^m) - t_1 \cdots t_m \in \Sigma[t]_m \] \[ \frac{1}{m}( (t_1^2+\cdots+t_m^2)^{m_0} - t_1^m - \cdots - t_m^m) \in \Sigma[t]_m \]
\[ 1 - t = \half (1-t)^2 + \half ( 1 - t^2 ), \] \[ 1- t_1t_2 = \half (1-t_1^2-t_2^2) + \frac{1}{4} ( (1-t_1+t_2)^2 + (1 + t_1 - t_2)^2 ) \]
For every even $m = 2m_0$, \[ AGD(t):= \left( \frac{t_1^2+ \cdots + t_m^2}{m} \right)^{m_0} - t_1 t_2 \cdots t_m \in \Sigma[t_1,\ldots,t_m]_m. \] \[ AGD(t) \equiv \left( \frac{1}{m} \right)^{m_0} - t_1 t_2 \cdots t_m \mbox { on } \, S. \] \[ \left( \frac{1}{ 2m_0 } \right)^{m_0} - t_1 t_2 \cdots t_m \quad \equiv \quad SOS \mbox{ mod } 1-t_1^2 - \cdots - t_m^2. \] Choose $\gamma$ satisfying \[ \left( \frac{1}{ 2m_0 } \right)^{m_0} \cdot \frac{ \sqrt{1-\gamma^2}^{2m_0-1} }{ \gamma } = \left( \frac{1}{ 2m_0 -1 } \right)^{(2m_0-1)/2} \] If we do the replacing \[ (t_1, \ldots, t_{2m_0-1}, t_{2m_0} ) \rightarrow (\sqrt{1-\gamma^2} t_1, \ldots, \sqrt{1-\gamma^2} t_{2m_0-1}, \gamma) \] then we can get \[ \left( \frac{1}{ 2m_0-1} \right)^{ (2m_0-1)/2 } - t_1 t_2 \cdots t_{2m_0-1} \quad \equiv \quad SOS \mbox{ mod } 1-t_1^2 - \cdots - t_{2m_0-1}^2. \] \end{proof}
\begin{prop} If the vanishing ideal of $K$ is still $\mbox{Ideal}(h)$ and $h$ is a singleton, then $\mbox{Ideal}(h)_{2k} + \mbox{Qmod}(g)_{2k}$ is closed, for each $k \in \mathbb{N}$. \end{prop}
\fi
Let $\mathbb{R}^{\mathbb{N}_{[0,d]}^n}$ be the space of multi-sequences indexed by $\alpha \in \mathbb{N}^n_{[0,d]}$ (see the notation \reff{N[0d]}). A vector in $\mathbb{R}^{\mathbb{N}_{[0,d]}^n}$ is called a {\it truncated multi-sequence} (tms) of degree $d$. Every $z \in \mathbb{R}^{\mathbb{N}_{[0,d]}^n}$ can be labelled as \[
z \, = \, (z_\alpha)_{ \alpha \in \mathbb{N}_{[0,d]}^n }. \] For $m\leq d$ and $z\in \mathbb{R}^{\mathbb{N}^{n}_{[0,d]}}$, denote the truncation:
\begin{equation} \label{trun:z|0,m}
z|_{ \{0,m\} } \, := \, (z_{\alpha})_{ \alpha \in \mathbb{N}^n_{ \{0,m\} } }. \end{equation} For $ p = \sum_{ \alpha \in \mathbb{N}_{[0,d]}^n } p_\alpha x^\alpha \in \mathbb{R}[x]_d$ and $z \in \mathbb{R}^{\mathbb{N}_{[0,d]}^n}$, define the product \begin{equation} \label{df:<p,y>} \langle p, z \rangle \, := \, \sum_{\alpha \in \mathbb{N}_{[0,d]}^n } p_\alpha z_\alpha. \end{equation} In the above, each $p_\alpha$ is a coefficient. For a polynomial $q \in \mathbb{R}[x]_{2k}$ and a tms $z \in \mathbb{R}^{ \mathbb{N}^n_{[0,2k]} }$, the product $\langle q p_1p_2, z \rangle$ is a bilinear form in the coefficients of $p_1$ and $p_2$. The $k$th {\it localizing matrix} of $q$, generated by a tms $z \in \mathbb{R}^{\mathbb{N}^n_{[0,2k]}}$, is the symmetric matrix $L_q^{(k)}(z)$ such that \begin{equation} \label{locM:q} \langle q p_1p_2, z \rangle \, = \, vec(p_1)^T \Big( L_q^{(k)}(z) \Big) vec(p_2) \end{equation} for all $p_1,p_2 \in \mathbb{R}[x]$ with $\deg(p_1), \deg(p_2) \leq k - \lceil \deg(q)/2 \rceil$. In the above, $vec(p_i)$ denotes the coefficient vector of $p_i$. When $q = 1$ (the constant one polynomial), $L_q^{(k)}(z)$ reduces to the so-called {\it moment matrix}, denoted as \begin{equation} \label{moment:mat} M_k(z):= L_{1}^{(k)}(z). \end{equation} We refer to \cite{CurtoF,Helton} for localizing and moment matrices.
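For instance, with $n=2$ and $k=1$, ordering the monomials of degree at most $1$ as $1, x_1, x_2$, we have \[ M_1(z) = \left( \begin{array}{ccc} z_{(0,0)} & z_{(1,0)} & z_{(0,1)} \\ z_{(1,0)} & z_{(2,0)} & z_{(1,1)} \\ z_{(0,1)} & z_{(1,1)} & z_{(0,2)} \end{array} \right), \qquad L^{(1)}_{1-\|x\|^2}(z) = \big( z_{(0,0)} - z_{(2,0)} - z_{(0,2)} \big). \]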
\iffalse
When $q=(q_1,\ldots,q_r)$ is a tuple of polynomials, then we denote \begin{equation} \label{mat:locliz} L_q^{(k)} (y) \, := \, \mbf{diag} \left(L_{q_1}^{(k)} (y),\ldots, L_{q_r}^{(k)} (y)\right). \end{equation} (The $\mbf{diag} $ denotes the corresponding block diagonal matrix.)
Let $h = (h_1, \ldots, h_s)$ and $g=(g_1,\ldots, g_t)$ be two polynomial tuples in the above. For each $k$, the dual cone of $\mbox{Ideal}(h)_{2k} + \mbox{Qmod}(g)_{2k}$ is \[ \mbox{Mom}(K)_{2k} \, := \, \left\{ z \in \mathbb{R}^{ \mathbb{N}^n_{2k} } : L_{h}^{(k)}(z) = 0, \, M_k(z) \succeq 0, L_{g}^{(k)}(z) \succeq 0 \right\}. \] That is, for every $f \in \mbox{Ideal}(h)_{2k} + \mbox{Qmod}(g)_{2k}$ and $ z \in \mbox{Mom}(K)_{2k}$, we always have \[ \langle f, z \rangle \geq 0. \]
\fi
\section{Odd order tensors with $\mathbb{F} = \mathbb{R}$} \label{sc:Real:oddm}
Assume the field $\mathbb{F} = \mathbb{R}$ and the order $m$ is odd. We discuss how to compute the real nuclear norm
$\| \mathcal{A} \|_{\ast,\mathbb{R}}$ of a tensor $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$. Note that $\lambda_i (v_i)^{\otimes m} = (-\lambda_i) (-v_i)^{\otimes m}$ when $m$ is odd. Hence, in the decomposition of $\mathcal{A}$ as in \reff{ast||A||:sym}, one can assume without loss of generality that $\lambda_i \geq 0$, so \begin{equation} \label{nun:As*:odd}
\| \mathcal{A} \|_{\ast,\mathbb{R}} = \min \left\{ \sum_{i=1}^r \lambda_i : \, \mathcal{A} = \sum_{i=1}^r \lambda_i (v_i)^{\otimes m}, \, \lambda_i \geq 0,
\| v_i \| = 1, v_i \in \mathbb{R}^n \right \}. \end{equation}
\iffalse
Let $\delta_v$ denote the Dirac measure at $v$ and $S$ be the unit sphere \[
S = \{ x \in \mathbb{R}^n: \, \| x \| = 1 \}. \] For the decomposition of $\mathcal{A}$ as in \reff{nun:As*:odd}, let \[ \mu = \lambda_1 \delta_{v_1} + \cdots + \lambda_r \delta_{v_r}. \] Then, $\mu$ is a Borel measure supported on $S$ and satisfies \[
\mathcal{A} = \int x^{\otimes m} \mt{d} \mu, \quad
\sum_{i=1}^m \lambda_i = \int 1 \mt{d} \mu. \] For every $\nu \in \mathscr{B}(S)$ satisfying $\mathcal{A} = \int x^{\otimes m} \mt{d} \nu$, there always exist $c_1, \ldots, c_N>0$ and $u_1, \ldots, u_N \in S$ such that \[ \mathcal{A} = \sum_{i=1}^N c_i (u_i)^{\otimes m}, \quad
\sum_{i=1}^N c_i = \int 1 \mt{d} \mu. \] The above is implied by Proposition~3.3 of \cite{ATKMP}.
\fi
Let $\mathscr{B}(S)$ be the set of Borel measures supported on the unit sphere $S$ as in \reff{usph:S}. As pointed out in \cite{TanSha15},
$\| \mathcal{A} \|_{\ast,\mathbb{R}}$ equals the optimal value of \begin{equation} \label{BorOpt:R:oddm} \left\{\begin{array}{rl}
\min & \int 1 \mt{d} \mu \\ s.t. & \mathcal{A} = \int x^{\otimes m} \mt{d} \mu, \,\,
\mu \in \mathscr{B}(S). \end{array} \right. \end{equation} Let $\mbf{a} \in \mathbb{R}^{ \mathbb{N}^n_{ \{m\} } }$ be the vector of tensor entries of $\mathcal{A}$ such that \begin{equation} \label{a=A:af} \mbf{a}_\alpha = \mathcal{A}_{i_1\ldots i_m} \quad \mbox{ if } \quad x^\alpha = x_{i_1} \cdots x_{i_m}. \end{equation} The equality constraint in \reff{BorOpt:R:oddm} is equivalent to that \[ \mbf{a}_\alpha = \int x^\alpha \mt{d} \mu \quad (\alpha \in \mathbb{N}^n_{ \{m\} }). \] Define the cone of moments \begin{equation} \label{df:scrR:0+m} \mathscr{R}_{ \{0,m\} } := \left\{ y \in \mathbb{R}^{ \mathbb{N}^n_{ \{0,m\} } }
\left| \begin{array}{c}
\exists \mu \in \mathscr{B}(S) \quad s.t. \\ \, y_\alpha = \int x^\alpha \mt{d} \mu \,\, \forall \,\, \alpha \in \mathbb{N}^n_{ \{0,m\} } \end{array} \right. \right\}. \end{equation} The cone $\mathscr{R}_{ \{0,m\} }$ is closed, convex, and has nonempty interior \cite[Prop.~3.2]{LMOPT}. The optimization problem \reff{BorOpt:R:oddm} is equivalent to \begin{equation} \label{miny0:Rm(S):odd} \left\{\begin{array}{rl} \min & (y)_0 \\ s.t. & (y)_\alpha = \mbf{a}_\alpha \,\, \big( \alpha \in \mathbb{N}^n_{ \{m\} } \big), \\
& y \in \mathscr{R}_{ \{0,m\} }. \end{array} \right. \end{equation}
\subsection{An algorithm}
The cone $\mathscr{R}_{ \{0,m\} }$ can be approximated by semidefinite relaxations. Denote the cones \begin{align} \label{scr(S):2k} \mathscr{S}^{2k} & := \left\{
z \in \left. \mathbb{R}^{ \mathbb{N}^n_{ [0,2k] } } \right|
M_k(z) \succeq 0, \, L^{(k)}_{1-\|x\|^2}(z) = 0 \right\}, \\ \label{scr(S):2k:0+m} \mathscr{S}^{2k}_{ \{0,m\} } & := \left\{
y \in \left. \mathbb{R}^{ \mathbb{N}^n_{ \{0,m\} } } \right|
\exists \, z \in \mathscr{S}^{2k}, \, \, y = z|_{ \{0,m \} } \right\}. \end{align} It can be shown that (cf.~\cite[Prop.~3.3]{LMOPT}) \begin{equation} \label{SDr:R0,m:odd} \mathscr{R}_{ \{0,m\} } = \bigcap_{k \geq m/2 } \mathscr{S}^{2k}_{ \{0,m\} }. \end{equation} This leads to the hierarchy of semidefinite relaxations \begin{equation} \label{min(y)0:mom:k} \left\{ \begin{array}{rl}
\| \mathcal{A} \|_{k\ast,\mathbb{R}} \, := \, \min\limits_{ z } & (z)_0 \\ s.t. & (z)_\alpha = \mbf{a}_\alpha \,\,(\alpha \in \mathbb{N}^n_{ \{m\} } ), \\
& z \in \mathscr{S}^{2k}, \end{array} \right. \end{equation} for the relaxation orders $k = m_0, m_0 +1, \ldots$, where $m_0:=\lceil m/2 \rceil$. Since $\mathscr{R}_{ \{0,m\} } \subseteq \mathscr{S}^{2k+2}_{ \{0,m\} } \subseteq \mathscr{S}^{2k}_{ \{0,m\} }$ for all $k$, we have the monotonicity relationship \begin{equation} \label{mrel:rhok:om}
\| \mathcal{A} \|_{m_0\ast,\mathbb{R}}
\leq \cdots \leq \| \mathcal{A} \|_{k\ast,\mathbb{R}} \leq \cdots \leq
\| \mathcal{A} \|_{\ast,\mathbb{R}}. \end{equation}
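For illustration, the relaxation \reff{min(y)0:mom:k} can be assembled directly with off-the-shelf semidefinite programming software. The following is a minimal sketch (not the implementation used for the experiments in \S\ref{sc:num}) that sets up \reff{min(y)0:mom:k} for a small instance with $n=2$, $m=3$, $k=m_0=2$, written in Python with the modeling package {\tt cvxpy} and the solver SCS, both assumed to be installed; the tensor entries in {\tt a\_vec} are arbitrary sample data.
\begin{verbatim}
import itertools
import cvxpy as cp

n, m, k = 2, 3, 2                      # dimension, (odd) order, relaxation order

def exponents(d):
    # all exponents alpha in N^n with |alpha| <= d
    return [a for a in itertools.product(range(d + 1), repeat=n) if sum(a) <= d]

def add(a, b):
    return tuple(s + t for s, t in zip(a, b))

# truncated multi-sequence z = (z_alpha), |alpha| <= 2k, as scalar variables
z = {a: cp.Variable() for a in exponents(2 * k)}

# moment matrix M_k(z): entry (alpha, beta) is z_{alpha+beta}
Mk = cp.bmat([[z[add(a, b)] for b in exponents(k)] for a in exponents(k)])

# localizing matrix of 1 - ||x||^2:
# entry (alpha, beta) is z_{alpha+beta} - sum_j z_{alpha+beta+2e_j}
def loc(a, b):
    s = add(a, b)
    return z[s] - sum(z[add(s, tuple(2 * (j == t) for t in range(n)))]
                      for j in range(n))
Lsph = cp.bmat([[loc(a, b) for b in exponents(k - 1)] for a in exponents(k - 1)])

# entries a_alpha of a sample tensor A in S^3(R^2) (arbitrary test data)
a_vec = {(3, 0): 1.0, (2, 1): 0.5, (1, 2): 0.5, (0, 3): 1.0}

cons = [Mk >> 0, Lsph == 0]
cons += [z[alpha] == val for alpha, val in a_vec.items()]

prob = cp.Problem(cp.Minimize(z[(0,) * n]), cons)
prob.solve(solver=cp.SCS)
print("lower bound for the real nuclear norm:", prob.value)
\end{verbatim}
Checking whether the computed truncation $y^k$ belongs to $\mathscr{R}_{ \{0,m\} }$ (Step~2 of the algorithm below) requires the separate procedure of \cite{ATKMP} and is not included in this sketch.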
\begin{alg} \label{alg:R:odd} Given a tensor $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$ with odd $m$, let $k = m_0$ and do: \begin{itemize}
\item [Step 1] Solve the semidefinite relaxation \reff{min(y)0:mom:k}, for an optimizer $z^k$.
\item [Step 2] Let $y^k := z^k|_{ \{0,m\} }$
(see \reff{trun:z|0,m} for the truncation). Check whether $y^k \in \mathscr{R}_{ \{0,m\} }$ or not. If yes, then $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{k\ast,\mathbb{R}} $ and go to Step~3; otherwise, let $k :=k+1$ and go to Step~1.
\item [Step 3] Compute the decomposition of $y^k$ as \[ y^k = \lambda_1 [v_1]_{0,m} + \cdots + \lambda_r [v_r]_{0,m} \] with all $\lambda_i >0, v_i \in S$. This gives the nuclear decomposition \[ \mathcal{A} = \lambda_1 (v_1)^{\otimes m} + \cdots + \lambda_r (v_r)^{\otimes m} \]
such that $\lambda_1 + \cdots + \lambda_r = \| \mathcal{A} \|_{\ast,\mathbb{R}}$.
\end{itemize}
\end{alg}
In the above, the method in \cite{ATKMP} can be applied to check whether $y^k \in \mathscr{R}_{ \{0,m\} }$ or not. If yes, a nuclear decomposition can also be obtained. This requires solving a moment optimization problem whose objective is randomly generated.
\subsection{Convergence properties}
The dual cone of the set $\mathscr{R}_{ \{0,m\} }$ is \begin{equation} \mathscr{P}(S)_{0,m} := \{ t + q \mid \, t \in \mathbb{R}, \, q \in \mathbb{R}[x]_m^{hom}, \, t + q \geq 0 \, \mbox{ on } S \}. \end{equation} So, the dual optimization problem of \reff{miny0:Rm(S):odd} is \begin{equation} \label{max<p,A>:1-p>=0} \max \limits_{p \in \mathbb{R}[x]_m^{hom} } \quad \langle p, \mbf{a} \rangle \quad s.t. \quad 1 - p \in \mathscr{P}(S)_{0,m}. \end{equation}
\begin{lemma} \label{nng:opval:odm} Let $\mbf{a}$ be the vector as in \reff{a=A:af}. Then, both \reff{miny0:Rm(S):odd} and \reff{max<p,A>:1-p>=0} achieve the same optimal value, which equals the nuclear norm
$\| \mathcal{A} \|_{\ast,\mathbb{R}}$. \end{lemma} \begin{proof} Clearly, $p=0$ (the zero polynomial) is an interior point of \reff{max<p,A>:1-p>=0}. By the linear conic duality theory~\cite[\S2.4]{BTN}, \reff{miny0:Rm(S):odd} and \reff{max<p,A>:1-p>=0}
have the same optimal value which is $\| \mathcal{A} \|_{\ast,\mathbb{R}}$, and \reff{miny0:Rm(S):odd} achieves it. The feasible set of \reff{max<p,A>:1-p>=0} is compact. This is because $|p| \leq 1$ on the unit sphere and $p$ is a form of degree $m$. So, \reff{max<p,A>:1-p>=0} also achieves its optimal value, which equals $\| \mathcal{A} \|_{\ast,\mathbb{R}}$. \end{proof}
Denote the nonnegative polynomial cones: \begin{equation} \label{Qk:oddm}
Q_k \, := \, \mbox{Ideal}(1-\|x\|^2)_{2k} + \Sigma[x]_{2k}, \quad Q := \bigcup_{k\geq 1} Q_k. \end{equation} The cones $Q_k$ and $\mathscr{S}^{2k}$ are dual to each other (cf.~\cite{LMOPT}). So, the dual optimization problem of \reff{min(y)0:mom:k} is \begin{equation} \label{m<pf>:Qk:oddm} \max \limits_{p \in \mathbb{R}[x]_m^{hom} } \quad
\langle p, \mbf{a} \rangle \quad s.t. \quad 1 - p \in Q_k. \end{equation} Some properties of Lasserre relaxations were mentioned in \cite{TanSha15}. For completeness, we present these properties with more details and rigorous proofs.
\begin{lemma} \label{ach:opv:k:om} Let $\mbf{a}$ be the vector as in \reff{a=A:af}. Then, both \reff{min(y)0:mom:k} and \reff{m<pf>:Qk:oddm} achieve the same optimal value, which equals
$\| \mathcal{A} \|_{k\ast,\mathbb{R}}$. Moreover, for each $k\geq m_0$,
$\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ is a norm function in $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$. \end{lemma} \begin{proof} For each $k \geq m_0$, $p=0$ is an interior point of \reff{m<pf>:Qk:oddm}. So, \reff{min(y)0:mom:k} and \reff{m<pf>:Qk:oddm}
have the same optimal value $\| \mathcal{A} \|_{k\ast,\mathbb{R}}$, and \reff{min(y)0:mom:k} achieves it (cf.~\cite[\S2.4]{BTN}). The set $Q_k$ is closed, which follows from Theorem~3.1 of \cite{Marsh03} (also see Theorem~3.35 of \cite{Laurent}). When $1-p \in Q_k$, we have $p \leq 1$ on the unit sphere $S$; since $m$ is odd and $p$ is a form of degree $m$, $p(-x)=-p(x)$, so $|p| \leq 1$ on $S$. Hence, the feasible set of \reff{m<pf>:Qk:oddm} is compact, and it also achieves its optimal value. In the following, we prove that
$\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ is a norm function in $\mathcal{A}$. \begin{itemize}
\item [1)] Because $M_k(z) \succeq 0$, $(z)_0 \geq 0$. So $\| \mathcal{A} \|_{k\ast,\mathbb{R}} \geq 0$ for all $\mathcal{A}$.
\item [2)] Let $z^*$ be an optimizer such that
$\| \mathcal{A} \|_{k\ast,\mathbb{R}} = (z^*)_0$. If $\| \mathcal{A} \|_{k\ast,\mathbb{R}} = 0$, then $(z^*)_0=0$ and $z^*=0$, because $M_k(z^*)\succeq 0$ and
$L_{1-\|x\|^2}^{(k)}(z^*)=0$. So, $\mbf{a}=0$ and $\mathcal{A}$ must be the zero tensor.
\item [3)] First, we show that
$\| -\mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{k\ast,\mathbb{R}}$. For $z \in \mathbb{R}^{ \mathbb{N}^n_{[0,2k]}}$, define $s(z) \in \mathbb{R}^{ \mathbb{N}^n_{[0,2k]}}$ such that \[
( s(z) )_\alpha = (-1)^{|\alpha|} (z)_\alpha, \quad
\forall \, \alpha \in \mathbb{N}^n_{[0,2k]}. \] One can verify that ($\mbf{1}$ denotes the vector of all ones) \[ M_k( s(z) ) = \mbox{diag}( [-\mbf{1}]_k ) M_k( z ) \mbox{diag}( [-\mbf{1}]_k ), \] \[
L_{1-\|x\|^2}^{(k)}( s(z) ) =
\mbox{diag}( [-\mbf{1}]_{k-1} ) L_{1-\|x\|^2}^{(k)}( z ) \mbox{diag}( [-\mbf{1}]_{k-1} ). \] Thus, $z$ is feasible for \reff{min(y)0:mom:k} with tensor $\mathcal{A}$ if and only if $s(z)$ is feasible for \reff{min(y)0:mom:k} with tensor $-\mathcal{A}$. Since $(z)_0 = (s(z))_0$, we get
$\| -\mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{k\ast,\mathbb{R}}$.
Second, we show that
$\| t\mathcal{A} \|_{k\ast,\mathbb{R}} = t \| \mathcal{A} \|_{k\ast,\mathbb{R}}$ for all $t>0$. For $t>0$, $z$ is feasible for \reff{min(y)0:mom:k} with tensor $\mathcal{A}$
if and only if $tz$ is feasible for \reff{min(y)0:mom:k} with tensor $t\mathcal{A}$. Note that $t (z)_0 = (tz)_0$ for $t>0$. So, $\| t\mathcal{A} \|_{k\ast,\mathbb{R}} = t \| \mathcal{A} \|_{k\ast,\mathbb{R}}$ for $t>0$.
The above two cases imply that \[
\| t \mathcal{A} \|_{k\ast,\mathbb{R}} =|t| \cdot \| \mathcal{A} \|_{k\ast,\mathbb{R}} \, \quad \forall \, \mathcal{A} \in \mt{S}^m( \mathbb{R}^n), \, \, \forall \, t \in \mathbb{R}. \]
\item [4)] The feasible set of \reff{min(y)0:mom:k} is a convex set in $(z,\mathcal{A})$. Its objective is a linear function in $z$. By the result in \cite[\S3.2.5]{BVbook},
$\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ is a convex function in $\mathcal{A}$, so $
\| \mathcal{A} + \mathcal{B} \|_{k\ast,\mathbb{R}} \leq \| \mathcal{A} \|_{k\ast,\mathbb{R}} +
\| \mathcal{B} \|_{k\ast,\mathbb{R}} $ for all $\mathcal{A},\mathcal{B}$.
\end{itemize} \end{proof}
The convergence of Algorithm~\ref{alg:R:odd} is summarized as follows.
\begin{theorem} \label{thm:cvg:oddm}
Let $\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ be the optimal value of \reff{min(y)0:mom:k}. For all $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$, Algorithm~\ref{alg:R:odd} has the following properties: \begin{itemize} \item [(i)]
$\lim\limits_{k\to\infty} \| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$.
\item [(ii)] Let $p^*$ be an optimizer of \reff{max<p,A>:1-p>=0}. If $1-p^* \in Q$, then
$\| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$ for all $k$ sufficiently big.
\item [(iii)] If $y^k \in \mathscr{R}_{ \{0,m\} }$ for some order $k$, then $\| \mathcal{A} \|_{\ast,\mathbb{R}} = \| \mathcal{A} \|_{k\ast,\mathbb{R}}$.
\item [(iv)] The sequence $\{ y^k \}_{k=m_0}^{\infty}$ converges to a point in $\mathscr{R}_{ \{0,m\} }$.
\end{itemize} \end{theorem} \begin{proof} (i) By Lemma~\ref{nng:opval:odm}, for every $\epsilon>0$, there exists $p_1 \in \mathbb{R}[x]_m^{hom}$ such that \[ 1 - p_1 >0 \mbox{ on } S, \qquad
\langle p_1, \mbf{a} \rangle \geq \| \mathcal{A} \|_{\ast,\mathbb{R}} - \epsilon. \] By Theorem~\ref{thm:Put}, there exists $k_1$ such that $
1 - p_1 \in Q_{k_1}. $ By Lemma~\ref{ach:opv:k:om}, we get \[
\| \mathcal{A} \|_{k_1\ast,\mathbb{R}} \geq \| \mathcal{A} \|_{\ast,\mathbb{R}} - \epsilon. \] The monotonicity relation \reff{mrel:rhok:om} and the above imply that \[
\| \mathcal{A} \|_{\ast,\mathbb{R}} \geq \lim\limits_{k\to\infty} \| \mathcal{A} \|_{k\ast,\mathbb{R}}
\geq \| \mathcal{A} \|_{\ast,\mathbb{R}} - \epsilon. \] Since $\epsilon>0$ can be arbitrarily small, the item (i) follows directly.
(ii) If $1-p^* \in Q$, then $1-p^* \in Q_{k_2}$ for some $k_2$. By Lemma~\ref{nng:opval:odm}, we know \[
\| \mathcal{A} \|_{\ast,\mathbb{R}} = \langle p^*, \mbf{a} \rangle
\leq \| \mathcal{A} \|_{k_2\ast,\mathbb{R}} . \] Then, \reff{mrel:rhok:om} implies that
$\| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$ for all $k \geq k_2$.
(iii) If $y^k \in \mathscr{R}_{ \{0,m\} }$ for some $k$, then $\| \mathcal{A} \|_{k\ast,\mathbb{R}} \geq \| \mathcal{A} \|_{\ast,\mathbb{R}}$, by Lemmas~\ref{nng:opval:odm} and \ref{ach:opv:k:om}. Then, the equality $\| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$ follows from \reff{mrel:rhok:om}.
(iv) Note the relations \[
(y^k)_0 = \| \mathcal{A} \|_{k\ast,\mathbb{R}}, \quad (y^k)_\alpha = \mbf{a}_{ \alpha } \quad (\forall \, \alpha \in \mathbb{N}^n_{ \{ m \} } ). \]
Since $\| \mathcal{A} \|_{k\ast,\mathbb{R}} \to \| \mathcal{A} \|_{\ast,\mathbb{R}}$, we know the limit $y^*$ of the sequence $\{ y^k \}$ must exist. For all $k\geq m/2$, we have $y^k \in \mathscr{S}^{2k}_{ \{0,m\} }$. The distance between $\mathscr{S}^{2k}_{ \{0,m\} }$ and $\mathscr{R}_{ \{0,m\} }$ tends to zero as $k\to \infty$ (cf.~\cite[Prop.~3.4]{LMOPT}), so $y^* \in \mathscr{R}_{ \{0,m\} }$. It can also be implied by the equality \reff{SDr:R0,m:odd}. \end{proof}
In Theorem~\ref{thm:cvg:oddm}(ii), we always have $1-p^*\geq 0$ on $S$. Under some general conditions, we further have $1-p^* \in Q$, as shown in \cite{opcd}. Thus, Algorithm~\ref{alg:R:odd} usually has finite convergence, which is confirmed by numerical experiments in \S\ref{sc:num}.
\section{Even order tensors with $\mathbb{F} = \mathbb{R}$} \label{sc:Real:meven}
Assume the order $m$ is even and the field $\mathbb{F} = \mathbb{R}$. For a symmetric tensor $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$, the sign of $\lambda_i$ in \reff{ast||A||:sym} cannot be generally assumed to be positive. However, we can always decompose $\mathcal{A}$ as ($\mbf{1}$ is the vector of all ones) \begin{equation} \label{dcF:v+v-:lmd>=0} \left\{ \begin{array}{c} \mathcal{A} = \sum_{i=1}^{r_1} \lambda_i^+ (v_i^+)^{\otimes m} - \sum_{i=1}^{r_2} \lambda_i^- (v_i^-)^{\otimes m}, \\
\lambda_i^+ \geq 0, \, \| v_i^+ \| = 1, \, \mathbf{1}^Tv_i^+ \geq 0, \, v_i^+ \in \mathbb{R}^n,\\
\lambda_i^- \geq 0, \, \| v_i^- \| = 1, \, \mathbf{1}^Tv_i^- \geq 0, \, v_i^- \in \mathbb{R}^n. \end{array} \right. \end{equation} (Since $m$ is even, $(-v)^{\otimes m} = v^{\otimes m}$, so each $v_i^{\pm}$ can be replaced by $-v_i^{\pm}$ if necessary to ensure $\mathbf{1}^Tv_i^{\pm} \geq 0$.) Let $\mathscr{B}(S^+)$ be the set of Borel measures supported in the half unit sphere \begin{equation} \label{set:S+}
S^+ := \{ x\in \mathbb{R}^n \, \mid \, \|x\|=1, \mathbf{1}^Tx \geq 0 \}. \end{equation} Clearly, the weighted Dirac measures \[ \mu^+ := \sum_{i=1}^{r_1} \lambda_i^+ \delta_{v_i^+}, \quad \mu^- := \sum_{i=1}^{r_2} \lambda_i^- \delta_{v_i^-} \] belong to $\mathscr{B}(S^+)$.
The decomposition \reff{dcF:v+v-:lmd>=0} is equivalent to \begin{equation} \label{dcpA:mu+mu-} \mathcal{A} = \int x^{\otimes m} \mt{d} \mu^+
- \int x^{\otimes m} \mt{d} \mu^-. \end{equation} Conversely, if there exist $\mu^+, \mu^- \in \mathscr{B}(S^+)$ satisfying \reff{dcpA:mu+mu-}, then $\mathcal{A}$ has a decomposition as in \reff{dcF:v+v-:lmd>=0} (cf.~\cite[Prop.~3.3]{ATKMP}). Therefore, the nuclear norm
$\| \mathcal{A} \|_{\ast,\mathbb{R}}$ equals the optimal value of the problem \begin{equation} \label{nnF:opt:mu+-} \left\{\begin{array}{rl} \min & \int 1 \mt{d} \mu^+ + \int 1 \mt{d} \mu^-\\ s.t. & \mathcal{A} = \int x^{\otimes m} \mt{d} \mu^+ - \int x^{\otimes m} \mt{d} \mu^-, \\ & \mu^+, \mu^- \in \mathscr{B}(S^+). \end{array}\right. \end{equation} Let $\mbf{a} \in \mathbb{R}^{ \mathbb{N}^n_{ \{m\} } }$ be the vector such that \begin{equation} \label{mbf(a):evm}
\mbf{a}_\alpha \, = \, \mathcal{A}_{i_1\cdots i_m}
\quad \mbox{ if } \quad x^\alpha = x_{i_1}\cdots x_{i_m}. \end{equation} Denote the cone of moments \begin{equation} \label{scr(R)+:0+m} \mathscr{R}^+_{ \{0,m\} } := \left\{ y \in \mathbb{R}^{ \mathbb{N}^n_{ \{0, m\} } }
\left| \begin{array}{c}
\exists \mu \in \mathscr{B}(S^+) \, \mbox{ such that } \\
y_\alpha = \int x^\alpha \mt{d} \mu \, \mbox{ for } \, \alpha \in \mathbb{N}^n_{ \{0, m\} } \end{array} \right. \right\}. \end{equation} Then, \reff{nnF:opt:mu+-} is equivalent to \begin{equation} \label{miny0:Rm(S)} \left\{ \begin{array}{rl} \min & (y^+)_0 + (y^-)_0 \\ s.t. & (y^+)_\alpha - (y^-)_\alpha = \mbf{a}_\alpha \,\, ( \alpha \in \mathbb{N}^n_{\{m\} } ), \\
& y^+, y^- \in \mathscr{R}^+_{ \{0,m\} }. \end{array} \right. \end{equation}
\subsection{An algorithm}
The cone $\mathscr{R}^+_{ \{0,m\} }$ can be approximated by semidefinite relaxations.
Denote the cones \begin{align} \mathscr{S}^{+,2k} & := \left\{
z \in \left. \mathbb{R}^{ \mathbb{N}^n_{ [0, 2k] } } \right| M_k(z) \succeq 0, \, L^{(k)}_{\mbf{1}^Tx}(z) \succeq 0,
\, L^{(k)}_{1-\|x\|^2}(z)=0 \right\}, \\ \mathscr{S}^{+,2k}_{ \{0,m\} } & := \left\{
y \in \left. \mathbb{R}^{ \mathbb{N}^n_{ \{0, m\} } } \right|
\exists \, z \in \mathscr{S}^{+,2k}, \, \, y = z|_{ \{0,m \} } \right\}. \end{align} Note that $\mathscr{S}^{+,2k}_{ \{0,m\} }$ is a projection of $\mathscr{S}^{+,2k}$ and $ \mathscr{R}^{+}_{ \{0,m\} } \subseteq \mathscr{S}^{+,2k}_{ \{0,m\} } $ for all $k$. As shown in \cite{LMOPT}, it holds that \begin{equation} \label{SDr:R0,m:S+} \mathscr{R}^+_{ \{0,m\} } = \bigcap_{k \geq m/2 } \mathscr{S}^{+,2k}_{ \{0,m\} }. \end{equation} So, we get the hierarchy of semidefinite relaxations for solving \reff{miny0:Rm(S)}: \begin{equation} \label{rho(k):m:even} \left\{ \begin{array}{rl}
\| \mathcal{A} \|_{k\ast,\mathbb{R}} \, := \, \min\limits_{z^+, z^-} & (z^+)_0 + (z^-)_0 \\ s.t. & (z^+)_\alpha - (z^-)_\alpha = \mbf{a}_\alpha \, ( \alpha \in \mathbb{N}^n_{\{m\} } ), \\
& z^+, z^- \in \mathscr{S}^{+,2k}, \end{array} \right. \end{equation} for $k= m_0, m_0+1, \ldots$ ($m_0 = \lceil m/2 \rceil$). Similar to \reff{mrel:rhok:om}, we also have the monotonicity relationship \begin{equation} \label{rhok:mcr:evm}
\| \mathcal{A} \|_{m_0\ast,\mathbb{R}} \leq \cdots
\leq \| \mathcal{A} \|_{k\ast,\mathbb{R}} \leq \cdots \leq \| \mathcal{A} \|_{\ast,\mathbb{R}}. \end{equation}
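Analogously to the odd case, the relaxation \reff{rho(k):m:even} can be assembled with two truncated multi-sequences. The following minimal sketch (again only an illustration, not the implementation used in \S\ref{sc:num}) sets up \reff{rho(k):m:even} for $n=2$, $m=4$, $k=m_0=2$ in Python with {\tt cvxpy}, assuming an SDP solver such as SCS is installed; the entries in {\tt a\_vec} are arbitrary sample data.
\begin{verbatim}
import itertools
import cvxpy as cp

n, m, k = 2, 4, 2                      # dimension, (even) order, k = m/2

def exponents(d):
    return [a for a in itertools.product(range(d + 1), repeat=n) if sum(a) <= d]

def add(a, b):
    return tuple(s + t for s, t in zip(a, b))

def unit(j, c=1):
    # c * e_j as an exponent tuple
    return tuple(c * (j == t) for t in range(n))

def membership(z):
    # constraints defining z in S^{+,2k}: moment matrix PSD, localizing
    # matrix of 1 - ||x||^2 equal to zero, localizing matrix of 1^T x PSD
    Mk = cp.bmat([[z[add(a, b)] for b in exponents(k)] for a in exponents(k)])
    Lsph = cp.bmat([[z[add(a, b)]
                     - sum(z[add(add(a, b), unit(j, 2))] for j in range(n))
                     for b in exponents(k - 1)] for a in exponents(k - 1)])
    Lhalf = cp.bmat([[sum(z[add(add(a, b), unit(j))] for j in range(n))
                      for b in exponents(k - 1)] for a in exponents(k - 1)])
    return [Mk >> 0, Lsph == 0, Lhalf >> 0]

zp = {a: cp.Variable() for a in exponents(2 * k)}   # z^+
zm = {a: cp.Variable() for a in exponents(2 * k)}   # z^-

# entries a_alpha of a sample tensor in S^4(R^2) (arbitrary test data)
a_vec = {(4, 0): 1.0, (3, 1): 0.0, (2, 2): -0.5, (1, 3): 0.0, (0, 4): 1.0}

cons = membership(zp) + membership(zm)
cons += [zp[al] - zm[al] == val for al, val in a_vec.items()]

prob = cp.Problem(cp.Minimize(zp[(0,) * n] + zm[(0,) * n]), cons)
prob.solve(solver=cp.SCS)
print("lower bound for the real nuclear norm:", prob.value)
\end{verbatim}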
\begin{alg} \label{alg:even:m} For a given tensor $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$, let $k = m_0$ and do: \begin{itemize}
\item [Step 1] Solve the semidefinite relaxation \reff{rho(k):m:even}, for an optimizer $(z^{+,k}, z^{-,k})$.
\item [Step 2] Let $y^{+,k} := z^{+,k}\big|_{ \{0,m\} }$,
$y^{-,k} := z^{-,k}\big|_{ \{0,m\} }$
(see \reff{trun:z|0,m} for the truncation). Check whether $y^{+,k}, y^{-,k} \in \mathscr{R}^+_{ \{0,m\} }$ or not. If they both belong, then
$\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{k\ast,\mathbb{R}}$ and go to Step~3; otherwise, let $k :=k+1$ and go to Step~1.
\item [Step 3] Compute the decompositions of $y^{+,k}, y^{-,k}$ as \[ y^{+,k} = \sum_{i=1}^{r_1} \lambda_i^+ [v_i^+]_{0,m}, \quad y^{-,k} = \sum_{i=1}^{r_2} \lambda_i^- [v_i^-]_{0,m}, \] with all $\lambda^+_i >0, \lambda_i^->0$ and $v_i^+, v_i^- \in S^+$. The above gives the nuclear decomposition: \[ \mathcal{A} = \sum_{i=1}^{r_1} \lambda_i^+ (v_i^+)^{\otimes m} - \sum_{i=1}^{r_2} \lambda_i^- (v_i^-)^{\otimes m} \] such that
$\sum_{i=1}^{r_1} \lambda_i^+ + \sum_{i=1}^{r_2} \lambda_i^- = \| \mathcal{A} \|_{\ast,\mathbb{R}}$.
\end{itemize}
\end{alg}
In the above, the method in \cite{ATKMP} can be applied to check if $y^{+,k}, y^{-,k} \in \mathscr{R}^+_{ \{0,m\} }$ or not. If yes, a nuclear decomposition can also be obtained. In Step~3, it is possible that $r_1=0$ or $r_2 = 0$, in which case the corresponding $y^{+,k}$ or $y^{-,k}$
is the vector of all zeros. Note that Algorithm~\ref{alg:even:m} can also be applied to compute $\| \mathcal{A} \|_{\ast,\mathbb{R}}$ even if the order $m$ is odd.
\subsection{Convergence properties}
The dual cone of the set $\mathscr{R}^+_{ \{0,m\} }$ is \[ \mathscr{P}(S^+)_{0,m} \,:= \, \{ t + p \mid t \in \mathbb{R}, \, p \in \mathbb{R}[x]_m^{hom}, \, t + p \geq 0 \mbox{ on } S^+ \}. \] So, the dual optimization problem of \reff{miny0:Rm(S)} is
\begin{equation} \label{max<p,f>:1>=|p|}
\max \limits_{ p \in \mathbb{R}[x]_m^{hom} } \quad \langle p, \mbf{a} \rangle \quad s.t. \quad 1 \pm p \in \mathscr{P}(S^+)_{0,m}. \end{equation}
\begin{lemma} \label{mnng:val=:evm}
Let $\mbf{a}$ be the vector as in \reff{mbf(a):evm}. Then, both \reff{miny0:Rm(S)} and \reff{max<p,f>:1>=|p|}
achieve the same optimal value which equals $\| \mathcal{A} \|_{\ast, \mathbb{R}}$. \end{lemma} \begin{proof} The feasible set of \reff{miny0:Rm(S)} is always nonempty, say, $(\hat{y}^+, \hat{y}^-)$ is a feasible pair. Let $\xi$ be an interior point of $\mathscr{R}^+_{ \{0,m\} }$. Then $\hat{y}^+ +\xi, \hat{y}^- + \xi$ are both interior points of
$\mathscr{R}^+_{ \{0,m\} }$. The zero polynomial $p=0$ is an interior point of \reff{max<p,f>:1>=|p|}. By the linear conic duality theory \cite[\S2.4]{BTN}, the optimal values of \reff{miny0:Rm(S)} and \reff{max<p,f>:1>=|p|} are equal, and they both achieve it. The optimal value of \reff{miny0:Rm(S)}
is $\| \mathcal{A} \|_{\ast, \mathbb{R}}$, so it is also the optimal value of \reff{max<p,f>:1>=|p|}. \end{proof}
Next, we study the properties of the relaxation~\reff{rho(k):m:even}. Denote the cones of nonnegative polynomials: \begin{equation} \label{Qk+:evm}
Q_k^+ \, := \, \mbox{Ideal}(1-\| x \|^2)_{2k} + \mbox{Qmod}(\mathbf{1}^Tx)_{2k}, \quad Q^+ \, := \, \bigcup_{ k \geq 1 } Q_k^+. \end{equation} The cones $Q_k^+$ and $\mathscr{S}^{+,2k}$ are dual to each other (cf.~\cite{LMOPT}), so the dual optimization problem of \reff{rho(k):m:even} is \begin{equation} \label{mx<p,f>:1+-p:Qk}
\max \limits_{ p \in \mathbb{R}[x]_m^{hom} } \quad \langle p, \mbf{a} \rangle \quad s.t. \quad 1 \pm p \in Q_k^+. \end{equation}
\begin{lemma} \label{achval:Qk:evm} Let $\mbf{a}$ be the vector of entries of $\mathcal{A}$ as in \reff{mbf(a):evm}. For each $k \geq m_0$, both \reff{rho(k):m:even} and \reff{mx<p,f>:1+-p:Qk}
achieve the same optimal value $\| \mathcal{A} \|_{k\ast,\mathbb{R}}$. Moreover, $\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ is a norm function in $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$. \end{lemma} \begin{proof} The zero form $p=0$ is an interior point of \reff{mx<p,f>:1+-p:Qk}, for all $k \geq m_0$. By the linear conic duality theory \cite[\S2.4]{BTN}, \reff{rho(k):m:even} and \reff{mx<p,f>:1+-p:Qk} have the same optimal value and
\reff{rho(k):m:even} achieves it. The vanishing ideal of $S^+$ is $\mbox{Ideal}(1-\|x\|^2)$, so the set $Q_k^+$ is closed (cf.~\cite[Theorem~3.35]{Laurent} or \cite[Theorem~3.1]{Marsh03}). When $p$ is feasible for \reff{mx<p,f>:1+-p:Qk},
$|p| \leq 1$ on the half sphere $S^+$; since $m$ is even and $p$ is a form of degree $m$, $p(-x)=p(x)$, so $|p| \leq 1$ on the whole unit sphere $S$. So, the feasible set of \reff{mx<p,f>:1+-p:Qk} is compact, and it also achieves its optimal value. As in the proof of Lemma~\ref{ach:opv:k:om}, we can similarly prove that
$\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ is a norm function in $\mathcal{A}$, as follows: \begin{itemize}
\item [1)] Because $(z^+)_0 \geq 0$, $(z^-)_0 \geq 0$, we must have $\| \mathcal{A} \|_{k\ast,\mathbb{R}} \geq 0$ for all $\mathcal{A}$.
\item [2)] Let $(z^{+*}, z^{-*})$ be such that
$\| \mathcal{A} \|_{k\ast,\mathbb{R}} = (z^{+*})_0 + (z^{-*})_0$. If $\| \mathcal{A} \|_{k\ast,\mathbb{R}} = 0$, then $(z^{+*})_0 = (z^{-*})_0=0$, and hence $z^{+*}=z^{-*}=0$. So, $\mathcal{A}$ must be the zero tensor.
\item [3)] One can check that $(z^+,z^-)$ is feasible for \reff{rho(k):m:even} with tensor $\mathcal{A}$ if and only if the swapped pair $(z^-,z^+)$ is feasible for \reff{rho(k):m:even} with tensor $-\mathcal{A}$, and the objective value is unchanged under the swap. This implies that $\| -\mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{k\ast,\mathbb{R}}$. Similarly, one can show that
$\| t\mathcal{A} \|_{k\ast,\mathbb{R}} = t \| \mathcal{A} \|_{k\ast,\mathbb{R}}$ for $t>0$. Therefore,
$\| t \mathcal{A} \|_{k\ast,\mathbb{R}} =|t| \cdot \| \mathcal{A} \|_{k\ast,\mathbb{R}}$ for all $\mathcal{A}$ and for all $t \in \mathbb{R}$.
\item [4)] For all tensors $\mathcal{A}, \mathcal{B}$, the triangle inequality $
\| \mathcal{A} + \mathcal{B} \|_{k\ast,\mathbb{R}} \leq \| \mathcal{A} \|_{k\ast,\mathbb{R}} +
\| \mathcal{B} \|_{k\ast,\mathbb{R}} $ follows from the fact that the feasible set of \reff{rho(k):m:even} is a convex set in $(z,\mathcal{A})$ and its objective is linear in $z$.
\end{itemize}
\end{proof}
The convergence properties of Algorithm~\ref{alg:even:m} are as follows.
\begin{theorem} \label{thm:cvg:evm}
Let $\| \mathcal{A} \|_{k\ast,\mathbb{R}}$ be the optimal value of \reff{rho(k):m:even}. For all $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$, Algorithm~\ref{alg:even:m} has the following properties: \begin{itemize} \item [(i)]
$\lim\limits_{k\to\infty} \| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$.
\item [(ii)] Let $p^*$ be an optimizer of \reff{max<p,f>:1>=|p|}. If $1 \pm p^* \in Q^+$, then
$\| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$ for all $k$ sufficiently big.
\item [(iii)] If $y^{+,k}, y^{-,k} \in \mathscr{R}^+_{ \{0,m\} }$ for some order $k$, then $\| \mathcal{A} \|_{k\ast,\mathbb{R}} = \| \mathcal{A} \|_{\ast,\mathbb{R}}$.
\item [(iv)] The sequence $\{ (y^{+,k}, y^{-,k}) \}_{k=m_0}^\infty$ is bounded, and for each of its accumulation points $(\hat{y}^+, \hat{y}^-)$, we must have \[ \hat{y}^+, \hat{y}^- \in \mathscr{R}^{+}_{ \{0,m\} }, \quad
( \hat{y}^+)_0 + ( \hat{y}^- )_0 = \| \mathcal{A} \|_{\ast,\mathbb{R}}. \] Moreover, if the nuclear decomposition of $\mathcal{A}$ over $\mathbb{R}$ is unique, then $(y^{+,k}, y^{-,k})$ converges to the pair $(\hat{y}^+, \hat{y}^-)$ as above. \end{itemize}
\end{theorem} \begin{proof} (i) By Lemma~\ref{mnng:val=:evm}, for every $\epsilon>0$, there exists $p_1 \in \mathbb{R}[x]_m^{hom}$ such that \[ 1 \pm p_1 >0 \mbox{ on } S, \qquad
\langle p_1, \mbf{a} \rangle \geq \| \mathcal{A} \|_{\ast,\mathbb{R}} - \epsilon. \] By Theorem~\ref{thm:Put}, there exists $k_1$ such that $
1 \pm p_1 \in Q_{k_1}^+ . $ By Lemma~\ref{achval:Qk:evm}, we can get \[
\| \mathcal{A} \|_{k_1\ast,\mathbb{R}} \geq \| \mathcal{A} \|_{\ast,\mathbb{R}} - \epsilon. \] The relation \reff{rhok:mcr:evm} and the above imply that \[
\| \mathcal{A} \|_{\ast,\mathbb{R}} \geq \lim\limits_{k\to\infty} \| \mathcal{A} \|_{k\ast,\mathbb{R}}
\geq \| \mathcal{A} \|_{\ast,\mathbb{R}} - \epsilon. \] Item (i) follows since $\epsilon>0$ can be arbitrarily small.
(ii)-(iii): The proof is the same as for Theorem~\ref{thm:cvg:oddm} (ii)-(iii), by using Lemmas~\ref{mnng:val=:evm} and \ref{achval:Qk:evm}.
(iv) Note that $y^{+,k} = z^{+,k}\big|_{ \{0,m\} }$,
$y^{-,k} = z^{-,k}\big|_{ \{0,m\} }$, \[ M_k( z^{+,k} ) \succeq 0, \quad M_k( z^{-,k} ) \succeq 0, \]
and $(y^{+,k})_0 + (y^{-,k})_0 = \| \mathcal{A} \|_{k\ast,\mathbb{R}} \leq
\| \mathcal{A} \|_{\ast,\mathbb{R}}$ for all $k$. From the condition \[
L^{(k)}_{1-\|x\|^2} ( z^{+,k} ) =
L^{(k)}_{1-\|x\|^2} ( z^{-,k} ) = 0, \] one can see that the sequence of diagonal entries of $M_k( z^{+,k} ),\, M_k( z^{-,k} )$ is bounded. Then, we can show that the sequence $\{ (z^{+,k}, z^{-,k} ) \}$ is bounded. This implies that $\{ (y^{+,k}, y^{-,k}) \}_{k=m_0}^\infty$ is also bounded. When $(\hat{y}^+, \hat{y}^-)$ is one of its accumulation points, we can get $
( \hat{y}^+)_0 + ( \hat{y}^- )_0 = \| \mathcal{A} \|_{\ast,\mathbb{R}} $ by evaluating the limit. Note that $y^{+,k}, y^{-,k} \in \mathscr{S}^{+,2k}_{ \{0,m\} }$ for all $k$. The distance between $\mathscr{S}^{+,2k}_{ \{0,m\} }$ and $\mathscr{R}^+_{ \{0,m\} }$ tends to zero as $k\to \infty$ (cf.~\cite[Prop.~3.4]{LMOPT}), so we have $\hat{y}^+, \hat{y}^- \in \mathscr{R}^+_{ \{0,m\} }$. It can also be implied by \reff{SDr:R0,m:S+}. Next, write down the decompositions: \[ \hat{y}^+ = \sum_{i=1}^{r_1} \lambda_i^+ [v_i^+]_{0,m}, \quad \hat{y}^- = \sum_{i=1}^{r_2} \lambda_i^- [v_i^-]_{0,m}, \] with all $\lambda_i^+ \geq 0$, $\lambda_i^- \geq 0$, and $v_i^+, v_i^- \in S^+$. They give the real nuclear decomposition $\mathcal{A} = \mathcal{A}_1 - \mathcal{A}_2$, with \[ \mathcal{A}_1 = \sum_{i=1}^{r_1} \lambda_i^+ (v_i^+)^{\otimes m}, \quad \mathcal{A}_2 = \sum_{i=1}^{r_2} \lambda_i^- (v_i^-)^{\otimes m}. \] When the nuclear decomposition of $\mathcal{A}$ is unique, the decompositions of $\mathcal{A}_1, \mathcal{A}_2$ are also unique. So, the accumulation point $(\hat{y}^+, \hat{y}^-)$ is unique and $(y^{+,k}, y^{-,k})$ must converge to it as $k \to \infty$. \end{proof}
In Theorem~\ref{thm:cvg:evm}(ii), we always have $1 \pm p^* \geq 0$ on $S^+$. Under some general optimality conditions, it holds that $1 \pm p^* \in Q^+$. So, Algorithm~\ref{alg:even:m} generally has finite convergence. This is confirmed by numerical experiments in \S\ref{sc:num}.
\section{Nuclear norms with $\mathbb{F} = \mathbb{C}$} \label{sc:cpx}
When the ground field $\mathbb{F} = \mathbb{C}$, the nuclear norm of $\mathcal{A} \in \mt{S}^m(\mathbb{C}^n)$ is \begin{equation} \label{A:s*:C}
\| \mathcal{A} \|_{\ast,\mathbb{C}} = \min \left\{ \sum_{i=1}^r |\lambda_i| : \, \mathcal{A} = \sum_{i=1}^r \lambda_i (w_i)^{\otimes m},
\| w_i \| = 1, w_i \in \mathbb{C}^n \right \}.
\end{equation} First, we formulate an optimization problem for computing $\| \mathcal{A} \|_{\ast,\mathbb{C}}$.
\begin{lemma} \label{lm:opt:cnuc}
For all $\mathcal{A} \in \mt{S}^m(\mathbb{C}^n)$, $\| \mathcal{A} \|_{\ast,\mathbb{C}}$ equals the optimal value of \begin{equation} \label{cnn:F=ui+vi} \left\{ \begin{array}{rl}
\min & \sum_{i=1}^r \lambda_i \\ s.t. & \mathcal{A} = \sum_{i=1}^r \lambda_i (u_i+\sqrt{-1}v_i)^{\otimes m}, \\
& \lambda_i \geq 0, \, \| u_i \|^2 + \| v_i \|^2 =1, \, u_i, v_i \in \mathbb{R}^n, \\
& \mbf{1}^T v_i \geq 0, \, \sin(\frac{2\pi}{m})\mbf{1}^T u_i - \cos(\frac{2\pi}{m}) \mbf{1}^T v_i \geq 0. \end{array} \right. \end{equation} In the above, $\mbf{1}$ is the vector of all ones. \end{lemma} \begin{proof} The decomposition of $\mathcal{A}$ as in \reff{A:s*:C} is equivalent to \[ \mathcal{A} = \sum_{i=1}^r \lambda_i (\tau_i w_i)^{\otimes m}, \] for all unitary $\tau_i \in \mathbb{C}$ with $\tau_i^m = 1$. Write \[ w_i = u_i + \sqrt{-1} v_i, \quad u_i, v_i \in \mathbb{R}^n, \] then \[ \begin{array}{rcl} \mbf{1}^T w_i &=& (\mbf{1}^T u_i) + \sqrt{-1} (\mbf{1}^T v_i), \\ \mbf{1}^T (\tau_i w_i ) &=& \tau_i \Big( (\mbf{1}^T u_i) + \sqrt{-1} (\mbf{1}^T v_i) \Big). \end{array} \] Write $\mbf{1}^T w_i = r e^{\sqrt{-1} \theta }$ with $r\geq 0$, $0 \leq \theta < 2\pi$. There always exists $k\in \{0,1,\ldots,m-1\}$ such that \[ 0 \leq \theta - 2k\pi/ m < 2\pi/m. \] If we choose $\tau_i = e^{ - 2k\pi \sqrt{-1} / m }$, then \[ \mbf{1}^T (\tau_i w_i) = r e^{\sqrt{-1} \theta_1 }, \quad 0 \leq \theta_1 < 2\pi/m. \] Therefore, without loss of generality, we can assume $\mbf{1}^T w_i = r e^{\sqrt{-1} \theta }$, with $0 \leq \theta < 2\pi/m$, in \reff{A:s*:C}. This means that ($\mbf{Im}$ denotes the imaginary part) \[ \mbf{Im}( \mbf{1}^T w_i ) \geq 0, \quad \mbf{Im}( e^{ \frac{- 2\pi}{m} \sqrt{-1} } \mbf{1}^T w_i ) \leq 0, \] which are equivalent to the conditions \[ \mbf{1}^T v_i \geq 0, \quad \sin(2\pi/m)\mbf{1}^T u_i - \cos(2\pi/m) \mbf{1}^T v_i \geq 0. \] Then, the lemma follows from \reff{A:s*:C}. \end{proof}
A complex vector in $\mathbb{C}^n$ can be represented by a $2n$-dimensional real vector. Let $x = ( x^{re}, x^{im} )$ with \[ x^{re} =(x_1,\ldots, x_n), \quad x^{im} = (x_{n+1}, \ldots, x_{2n}). \] Denote the set \begin{equation} \label{set:Sc} S^c := \left\{ x=(x^{re}, x^{im})
\left| \begin{array}{c}
\| x^{re} \|^2 + \| x^{im} \|^2 =1, \, \, x^{re}, x^{im} \in \mathbb{R}^n, \\ \mbf{1}^T x^{im} \geq 0, \, \sin( \frac{2\pi}{m} )\mbf{1}^T x^{re} -\cos(\frac{2\pi}{m}) \mbf{1}^T x^{im} \geq 0 \end{array} \right. \right\}. \end{equation} For the decomposition of $\mathcal{A}$ as in \reff{cnn:F=ui+vi}, the weighted Dirac measure \[ \mu := \lambda_1 \delta_{(u_1,v_1)} + \cdots + \lambda_r \delta_{(u_r,v_r)} \] belongs to $\mathscr{B}(S^c)$, the set of Borel measures supported on $S^c$. It satisfies \begin{equation} \label{F=int:dmu:C}
\mathcal{A} = \int (x^{re} + \sqrt{-1} x^{im} )^{\otimes m} \mt{d} \mu. \end{equation} Note that $ \lambda_1 + \cdots + \lambda_r = \int 1 \mt{d} \mu. $
Conversely, for every $\mu \in \mathscr{B}(S^c)$ satisfying \reff{F=int:dmu:C}, we can always get a decomposition of $\mathcal{A}$ as in \reff{cnn:F=ui+vi}. This follows from \cite[Prop.~3.3]{ATKMP}. By Lemma~\ref{lm:opt:cnuc}, $\| \mathcal{A} \|_{\ast,\mathbb{C}}$ equals the optimal value of \begin{equation} \label{cnn:F=int:xom:C} \left\{\begin{array}{rl} \min & \int 1 \mt{d} \mu \\ s.t. & \mathcal{A} = \int (x^{re} + \sqrt{-1} x^{im} )^{\otimes m} \mt{d} \mu, \\ & \mu \in \mathscr{B}(S^c). \end{array}\right. \end{equation} Note that $S^c = \{ x \in \mathbb{R}^{2n}: \, h(x) = 0, g_1(x) \geq 0, g_2(x) \geq 0 \}$ where \begin{equation} \label{df:h:g:C} h := x^Tx -1, \,\, g_1 := \mbf{1}^T x^{im}, \,\, g_2 := \sin( \frac{2\pi}{m} )\mbf{1}^T x^{re} - \cos( \frac{2\pi}{m} ) \mbf{1}^T x^{im}. \end{equation} Let $\mbf{a}^{re}, \mbf{a}^{im} \in \mathbb{R}^{ \mathbb{N}^n_{ \{m\} } }$ be the real vectors such that \begin{equation} \label{F=fre+fim} \mbf{a}^{re}_\alpha + \sqrt{-1} \mbf{a}^{im}_\alpha = \mathcal{A}_{i_1 \ldots i_m} \quad \mbox{ if } \quad x^\alpha = x_{i_1}\cdots x_{i_m}. \end{equation} For each $\alpha = (\alpha_1, \ldots, \alpha_n) \in \mathbb{N}^{n}_{ \{m\} }$, expand the product \begin{equation} \label{Raf:Taf} (x_1+\sqrt{-1}\, x_{n+1})^{\alpha_1} \cdots (x_n + \sqrt{-1}\,x_{2n})^{\alpha_n} =R_\alpha(x) + \sqrt{-1}\, T_\alpha(x), \end{equation} for real polynomials $R_\alpha, T_\alpha \in \mathbb{R}[x]:=\mathbb{R}[x_1,\ldots,x_{2n}]$. Then, \[ \int (x_1+\sqrt{-1}\, x_{n+1})^{\alpha_1} \cdots (x_n + \sqrt{-1}\,x_{2n})^{\alpha_n} \mt{d} \mu \] \[ =\int R_\alpha(x) \mt{d} \mu + \sqrt{-1}\int T_\alpha(x) \mt{d} \mu. \] Hence, \reff{cnn:F=int:xom:C} is equivalent to \begin{equation} \label{cnnopt:F=R+T:mu(C)} \left\{ \begin{array}{rl}
\min & \int 1 \mt{d} \mu \\ s.t. & \mbf{a}^{re}_\alpha = \int R_\alpha(x) \mt{d} \mu \,\, (\alpha \in \mathbb{N}^{n}_{ \{m\} }), \\ & \mbf{a}^{im}_\alpha = \int T_\alpha(x) \mt{d} \mu \,\, (\alpha \in \mathbb{N}^{n}_{ \{m\} }), \\ & \mu \in \mathscr{B}(S^c). \end{array} \right. \end{equation} To solve \reff{cnnopt:F=R+T:mu(C)}, we can replace $\mu$ by the vector of its moments. Denote the moment cone \begin{equation} \label{scrR(C):0+m} \mathscr{R}^{c}_{ \{0,m\} } := \left\{ y \in \mathbb{R}^{ \mathbb{N}^{2n}_{ \{0,m\} } }
\left| \begin{array}{c}
\exists \mu \in \mathscr{B}(S^c) \, \mbox{ such that } \\ y_\beta = \int x^\beta \mt{d} \mu\,\,\mbox{ for } \, \beta \in \mathbb{N}^{2n}_{ \{0,m\} } \end{array} \right. \right\}. \end{equation} So, \reff{cnnopt:F=R+T:mu(C)} is equivalent to the optimization problem \begin{equation} \label{cnn:F=R+T:y(C)} \left\{ \begin{array}{rl}
\min & (y)_0 \\ s.t. & \langle R_\alpha, y \rangle = \mbf{a}^{re}_\alpha \,\, (\alpha \in \mathbb{N}^{n}_{ \{m\} }), \\ & \langle T_\alpha, y \rangle = \mbf{a}^{im}_\alpha \,\, (\alpha \in \mathbb{N}^{n}_{ \{m\} }), \\ & y \in \mathscr{R}^{c}_{ \{0, m\} }. \end{array} \right. \end{equation}
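For instance, for $n=1$ and $m=2$, the only exponent in \reff{Raf:Taf} is $\alpha = (2)$, and $(x_1 + \sqrt{-1}\, x_2)^2 = (x_1^2 - x_2^2) + \sqrt{-1}\,(2x_1x_2)$, so that $R_{(2)}(x) = x_1^2 - x_2^2$ and $T_{(2)}(x) = 2x_1x_2$.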
\subsection{An algorithm}
The cone $\mathscr{R}^{c}_{ \{0, m\} }$ can be approximated by semidefinite relaxations. For $h, g_1, g_2$ as in \reff{df:h:g:C}, denote the cones \begin{align} \label{scr(S):C:2k} \mathscr{S}^{c, 2k} & := \left\{ z \in \mathbb{R}^{ \mathbb{N}^{2n}_{ [0,2k] } }
\left| \begin{array}{c} M_k(z) \succeq 0, L^{(k)}_{h}(z) = 0, \\
L^{(k)}_{g_1}(z) \succeq 0, L^{(k)}_{g_2}(z) \succeq 0 \end{array} \right. \right\}, \\ \label{scr(S):C:2k:0+m} \mathscr{S}^{c,2k}_{ \{0,m\} } & := \left\{
y \in \left. \mathbb{R}^{ \mathbb{N}^{2n}_{ \{0,m\} } } \right|
\exists \, z \in \mathscr{S}^{c,2k}, \, \, y = z|_{ \{0,m \} } \right\}. \end{align} Clearly, $\mathscr{S}^{c,2k}_{ \{0,m\} }$ is a projection of $\mathscr{S}^{c,2k}$. For all $k\geq m/2$, we have \[ \mathscr{R}^{c}_{ \{0,m\} } \subseteq \mathscr{S}^{c,2k+2}_{ \{0,m\} } \subseteq \mathscr{S}^{c,2k}_{ \{0,m\} }. \] Indeed, it holds that (cf.~\cite[Prop.~3.3]{LMOPT}) \begin{equation} \label{sdr:R:Sc} \mathscr{R}^{c}_{ \{0,m\} } = \bigcap_{k \geq m/2 } \mathscr{S}^{c, 2k}_{ \{0,m\} }. \end{equation} This produces the hierarchy of semidefinite relaxations \begin{equation} \label{cnn:mom:z(C)} \left\{ \begin{array}{rl}
\| \mathcal{A} \|_{k\ast,\mathbb{C}} \, := \, \min & (z)_0 \\ s.t. & \langle R_\alpha, z \rangle = \mbf{a}^{re}_\alpha \,\, (\alpha \in \mathbb{N}^{n}_{ \{m\} }), \\ & \langle T_\alpha, z \rangle = \mbf{a}^{im}_\alpha \,\, (\alpha \in \mathbb{N}^{n}_{ \{m\} }), \\ & z \in \mathscr{S}^{c,2k}, \end{array} \right. \end{equation} for $k=m_0, m_0+1, \ldots$ ($m_0=\lceil m/2 \rceil$). Like \reff{rhok:mcr:evm}, we also have \begin{equation} \label{rl:rhok:cpx}
\| \mathcal{A} \|_{m_0\ast,\mathbb{C}} \leq \cdots \leq
\| \mathcal{A} \|_{k\ast,\mathbb{C}} \leq \cdots \leq \| \mathcal{A} \|_{\ast,\mathbb{C}}. \end{equation}
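The size of the moment matrix $M_k(z)$ gives a rough indication of the cost of solving \reff{cnn:mom:z(C)} at the relaxation order $k$: it is indexed by the monomials in the $2n$ variables of degree at most $k$, so its side length is $\binom{2n+k}{k}$. This count can be checked with one line of MATLAB (shown only as a size indicator, not as part of the relaxation itself):
{\tiny
\begin{verbatim}
% side length of the moment matrix M_k(z) in 2*n variables, degree <= k
n = 3; m = 3; k = ceil(m/2);       % the initial relaxation order m0
nchoosek(2*n + k, k)               % returns 28 for n = 3, k = 2
\end{verbatim}
}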
\begin{alg} \label{alg:cnn:cpx} For a given tensor $\mathcal{A} \in \mt{S}^m(\mathbb{C}^n)$, let $k = m_0$ and do: \begin{itemize}
\item [Step 1] Solve the semidefinite relaxation \reff{cnn:mom:z(C)}, for an optimizer $z^{k}$.
\item [Step 2] Let $y^{k} := z^{k}\big|_{ \{0,m\} }$
(see \reff{trun:z|0,m} for the truncation). Check whether or not $y^{k} \in \mathscr{R}^{c}_{ \{0,m\} }$. If yes, then
$\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{k\ast,\mathbb{C}}$ and go to Step~3; otherwise, let $k :=k+1$ and go to Step~1.
\item [Step 3] Compute the decompositions of $y^{k}$ as \[ y^{k} = \lambda_1 [(u_1,v_1)]_{0,m} + \cdots + \lambda_r [(u_r,v_r)]_{0,m} \] with all $\lambda_i >0, (u_i,v_i) \in S^c$. This gives the nuclear decomposition \[ \mathcal{A} = \lambda_1 (u_1 + \sqrt{-1}\, v_1)^{\otimes m} + \cdots + \lambda_r (u_r + \sqrt{-1}\, v_r)^{\otimes m} \] such that
$\sum_{i=1}^{r} \lambda_i = \| \mathcal{A} \|_{\ast,\mathbb{C}}$.
\end{itemize}
\end{alg}
In the above, the method in \cite{ATKMP} can be used to check whether or not $y^{k} \in \mathscr{R}^c_{ \{0,m\} }$. If yes, we can also get a nuclear decomposition. This requires solving a moment optimization problem whose objective is randomly generated.
\subsection{Convergence properties}
Denote the real polynomial vectors: \[ R(x):= (R_\alpha(x))_{ \alpha \in \mathbb{N}^{n}_{ \{m\} } }, \quad T(x):= (T_\alpha(x) )_{ \alpha \in \mathbb{N}^{n}_{ \{m\} } }, \] where $R_\alpha, T_\alpha$ are as in \reff{Raf:Taf}. Both vectors have length $D := \binom{n+m-1}{m}$, the cardinality of $\mathbb{N}^{n}_{ \{m\} }$. Denote \begin{equation} \mathscr{P}(S^c)_{0,m} \, := \, \{ t + p \, \mid \, t \in \mathbb{R}, \, p\in \mathbb{R}[x]_m^{hom}, \, t+p \geq 0 \mbox{ on } \, S^c \}. \end{equation} The cones $\mathscr{R}^c_{\{0,m\}}$ and $\mathscr{P}(S^c)_{0,m}$ are dual to each other \cite{LMOPT}, so the dual optimization problem of \reff{cnn:F=R+T:y(C)} is \begin{equation} \label{mx<p,f>:p1p2:P(C)} \left\{ \begin{array}{rl}
\max\limits_{ p_1, p_2 \in \mathbb{R}^D } &
p_1^T \mbf{a}^{re} + p_2^T \mbf{a}^{im} \\ s.t. \quad & 1 - p_1^T R(x) - p_2^T T(x) \in \mathscr{P}(S^c)_{0,m}. \end{array} \right. \end{equation}
\begin{lemma} \label{monng:dual:Cpx} Let $\mbf{a}^{re}, \mbf{a}^{im}$ be as in \reff{F=fre+fim}. Then, both \reff{cnn:F=R+T:y(C)} and \reff{mx<p,f>:p1p2:P(C)} achieve the same optimal value, which equals
$\| \mathcal{A} \|_{\ast,\mathbb{C}}$. \end{lemma} \begin{proof} The origin is an interior point of the feasible set of \reff{mx<p,f>:p1p2:P(C)}. So, \reff{cnn:F=R+T:y(C)} and \reff{mx<p,f>:p1p2:P(C)} have the same optimal value, and \reff{cnn:F=R+T:y(C)} achieves it (cf.~\cite[\S2.4]{BTN}). Next, we prove that \reff{mx<p,f>:p1p2:P(C)} also achieves its optimal value. Let \[ w = x^{re} + \sqrt{-1} x^{im}, \quad q_\alpha = (p_1)_\alpha - \sqrt{-1} (p_2)_\alpha, \] and define \[ \begin{array}{rcl}
q(w) &=& \sum_{ |\alpha| =m} q_\alpha \big( R_\alpha(x) + \sqrt{-1} T_\alpha(x) \big) \\
& = & \sum_{ |\alpha| =m} q_\alpha w^\alpha . \end{array} \] Clearly, $q(w)$ is a form of degree $m$ and in $w \in \mathbb{C}^n$, and ($\mbf{Re}$ denotes the real part) \[ \mbf{Re} \,\, q(w) = p_1^T R(x) + p_2^T T(x). \] Let $ B=\{ x^{re} + \sqrt{-1} x^{im}: \, (x^{re}, x^{im} ) \in S^c \}, $
which is a subset of the complex unit sphere $\| w \| =1$. When $(p_1, p_2)$ is feasible for \reff{mx<p,f>:p1p2:P(C)}, the polynomial \[ p(x) \, := \, 1 - p_1^T R(x) - p_2^T T(x) \] is nonnegative on $S^c$. So, $ \mbf{Re}\,\, q(w) \leq 1 $ for all $w \in B.$
For all $w \in \mathbb{C}^n$ with $\|w\|=1$, there exist $\tau \in \mathbb{C}$ with $\tau^m=1$ and $a \in B$ such that $w=\tau a$. This is shown in the proof of Lemma~\ref{lm:opt:cnuc}, so \[ \mbf{Re}\,\, q(w) = \mbf{Re}\,\, q(\tau a) = \mbf{Re}\,\, q( a) \leq 1. \] The above is true for all unit complex vectors $w$, hence \[
\mbf{Re}\,\, q(w) \leq 1 \quad \forall w \in \mathbb{C}^n: \, \|w\|=1. \] Because $q(w)$ is homogeneous in $w$, the above implies that \[
|q(w)| \leq 1 \quad \forall w \in \mathbb{C}^n: \, \|w\|=1. \]
\iffalse
So, there exists $\epsilon >0$ such that \[
1 \geq \int_{ \| w \| = 1 } |q(w)|^2 d w
= vec(q)^h \Big( \int_{ \| w \| = 1 } [w]_m[w]_m^h d w \Big) vec(q)
\geq \epsilon \|q\|^2 \]
\fi
Since the space of degree-$m$ forms in $w$ is finite dimensional and $\max_{\|w\|=1} |q(w)|$ is a norm on it, there exists $M >0$ such that
$ \|vec(q) \| \leq M $
for all $q$ satisfying the above. Since $\| vec(q) \|^2 = \| vec(p_1) \|^2 + \| vec(p_2) \|^2$, the feasible set of \reff{mx<p,f>:p1p2:P(C)} is compact. So, \reff{mx<p,f>:p1p2:P(C)} must achieve its optimal value. \end{proof}
Next, we study the properties of the relaxation \reff{cnn:mom:z(C)}. For $h,g:=(g_1,g_2)$ as in \reff{df:h:g:C}, denote the cones of polynomials \begin{equation} \label{Qkc:cpx} Q_k^c \, := \, \mbox{Ideal}(h)_{2k} + \mbox{Qmod}_{2k}(g), \quad Q^c \, := \, \bigcup_{k\geq 1} Q_k^c. \end{equation} The cones $Q_k^c$ and $\mathscr{S}^{c,2k}$ are dual to each other \cite{LMOPT}, so the dual optimization problem of \reff{cnn:mom:z(C)} is \begin{equation} \label{mx<p1+p2,f>:SOS:Q(C)} \left\{ \begin{array}{rl}
\max\limits_{ p_1, p_2 \in \mathbb{R}^D } &
p_1^T \mbf{a}^{re} + p_2^T \mbf{a}^{im} \\ s.t. & 1 - p_1^T R(x) - p_2^T T(x) \in Q_k^c. \end{array} \right. \end{equation}
\begin{lemma} \label{lm:sos-mom:QkSc} Let $\mbf{a}^{re}, \mbf{a}^{im}$ be the real vectors as in \reff{F=fre+fim}. Then, for each $k\geq m_0$, both \reff{cnn:mom:z(C)} and \reff{mx<p1+p2,f>:SOS:Q(C)}
achieve the same optimal value, which equals $\| \mathcal{A} \|_{k\ast,\mathbb{C}}$. Moreover, $\| \mathcal{A} \|_{k\ast,\mathbb{C}}$ is a norm function in $\mathcal{A} \in \mt{S}^m(\mathbb{C}^n)$. \end{lemma} \begin{proof} The proof is almost the same as for Lemmas~\ref{ach:opv:k:om} and \ref{achval:Qk:evm}. For each $k\geq m_0$, the origin is an interior point of the feasible set of \reff{mx<p1+p2,f>:SOS:Q(C)}. The vanishing ideal of $S^c$ is $\mbox{Ideal}(h)$, so the set $Q_k^c$ is closed, as implied by Theorem~3.35 of \cite{Laurent} or Theorem~3.1 of \cite{Marsh03}. In the proof of Lemma~\ref{monng:dual:Cpx}, we showed that the feasible set of \reff{mx<p,f>:p1p2:P(C)} is compact. Since $Q_k^c \subseteq \mathscr{P}(S^c)_{0,m}$, the feasible set of \reff{mx<p1+p2,f>:SOS:Q(C)} is also compact. By the linear conic duality theory \cite[\S2.4]{BTN}, both \reff{cnn:mom:z(C)} and \reff{mx<p1+p2,f>:SOS:Q(C)} achieve the same optimal value. The proof that
$\| \mathcal{A} \|_{k\ast,\mathbb{C}}$ is a norm function in $\mathcal{A}$ is almost the same as for Lemmas~\ref{ach:opv:k:om} and \ref{achval:Qk:evm}, so we omit it here. \end{proof}
The convergence properties of Algorithm~\ref{alg:cnn:cpx} are as follows.
\begin{theorem} \label{thm:cvg:cpx}
Let $\| \mathcal{A} \|_{k\ast,\mathbb{C}}$ be the optimal value of \reff{cnn:mom:z(C)}. For all $\mathcal{A} \in \mt{S}^m(\mathbb{C}^n)$, Algorithm~\ref{alg:cnn:cpx} has the following properties: \begin{itemize} \item [(i)]
$\lim\limits_{k\to\infty} \| \mathcal{A} \|_{k\ast,\mathbb{C}}
= \| \mathcal{A} \|_{\ast,\mathbb{C}}$.
\item [(ii)] Let $(p_1^*, p_2^*)$ be an optimal pair for \reff{mx<p,f>:p1p2:P(C)}. If \[ 1-(p_1^*)^TR(x) -(p_2^*)^T T(x) \in Q^c, \]
then $\| \mathcal{A} \|_{k\ast,\mathbb{C}} = \| \mathcal{A} \|_{\ast, \mathbb{C}}$ for all $k$ sufficiently big.
\item [(iii)] If $y^{k} \in \mathscr{R}^{c}_{ \{0,m\} }$ for some order $k$, then $\| \mathcal{A} \|_{k\ast,\mathbb{C}} = \| \mathcal{A} \|_{\ast,\mathbb{C}}$.
\item [(iv)] The sequence $\{ y^{k} \}_{k=m_0}^\infty$ is bounded, and each of its accumulation points belongs to $\mathscr{R}^{c}_{ \{0,m\} }$. Moreover, if the nuclear decomposition of $\mathcal{A}$ over $\mathbb{C}$ is unique, then $y^{k}$ converges to a point in $\mathscr{R}^{c}_{ \{0,m\} }$ as $k \to \infty$.
\end{itemize} \end{theorem} \begin{proof} (i) By Lemma~\ref{monng:dual:Cpx}, for every $\epsilon>0$, there exist $s_1, s_2 \in \mathbb{R}^D$ such that \[ 1 - (s_1)^TR -(s_2)^T T>0 \mbox{ on } S^c, \quad \langle s_1, \mbf{a}^{re} \rangle + \langle s_2, \mbf{a}^{im} \rangle
\geq \| \mathcal{A} \|_{\ast,\mathbb{C}} - \epsilon. \] By Theorem~\ref{thm:Put}, there exists $k_1$ such that \[ 1 - (s_1)^TR -(s_2)^T T \in Q_{k_1}^c. \] By Lemma~\ref{lm:sos-mom:QkSc}, we can get $
\| \mathcal{A} \|_{k_1\ast,\mathbb{C}} \geq \| \mathcal{A} \|_{\ast,\mathbb{C}} - \epsilon. $ The monotonicity relation \reff{rl:rhok:cpx} and the above imply that \[
\| \mathcal{A} \|_{\ast,\mathbb{C}} \geq \lim\limits_{k\to\infty} \| \mathcal{A} \|_{k\ast,\mathbb{C}}
\geq \| \mathcal{A} \|_{\ast,\mathbb{C}} - \epsilon. \] Since $\epsilon>0$ can be arbitrarily small, item (i) follows.
(ii) If $1-(p_1^*)^T R -(p_2^*)^T T \in Q^c$, then $1-(p_1^*)^T R -(p_2^*)^T T \in Q_{k_2}^c$ for some $k_2 \in \mathbb{N}$. By Lemmas~\ref{monng:dual:Cpx} and \ref{lm:sos-mom:QkSc}, we know that \[
\| \mathcal{A} \|_{\ast,\mathbb{C}} = \langle p_1^*, \mbf{a}^{re} \rangle +
\langle p_2^*, \mbf{a}^{im} \rangle \leq \| \mathcal{A} \|_{k_2\ast,\mathbb{C}}. \] Then, \reff{rl:rhok:cpx} implies that
$\| \mathcal{A} \|_{k\ast,\mathbb{C}} = \| \mathcal{A} \|_{\ast,\mathbb{C}}$ for all $k \geq k_2$.
(iii) If $y^k \in \mathscr{R}^c_{ \{0,m\} }$ for some order $k$, then $\| \mathcal{A} \|_{k\ast,\mathbb{C}} \geq \| \mathcal{A} \|_{\ast,\mathbb{C}}$, by Lemma~\ref{monng:dual:Cpx}. The equality
$\| \mathcal{A} \|_{k\ast,\mathbb{C}} = \| \mathcal{A} \|_{\ast,\mathbb{C}}$ follows from \reff{rl:rhok:cpx}.
(iv) Note that $ (z^k)_0 = (y^k)_0 = \| \mathcal{A} \|_{k\ast,\mathbb{C}}$ for all $k$. The condition $L_{h}^{(k)}(z^k)=0$, with $h = x^Tx-1$ as in \reff{df:h:g:C}, implies that \[ (z^k)_{2e_1 + 2\beta} + \cdots + (z^k)_{2e_{2n} + 2\beta} = (z^k)_{2\beta} \] for all $\beta \in \mathbb{N}^{2n}_{[0,k-1]}$. The summands above are diagonal entries of $M_k(z^k) \succeq 0$, so they are nonnegative. By induction on $|\beta|$, one can then show that \[ (z^k)_{2\beta} \leq (z^k)_0 \quad \forall \, \beta \in \mathbb{N}^{2n}_{[0,k]}. \] Every $\beta \in \mathbb{N}^{2n}_{ \{0,m\} }$ can be written as $\beta = \gamma + \delta$ with $\gamma, \delta \in \mathbb{N}^{2n}_{[0,k]}$, since $m \leq 2k$. Since $y^k$ is a truncation of $z^k$ and $M_k( z^k ) \succeq 0$, we get \[
|(y^k)_{\beta}|^2 = |(z^k)_{\gamma+\delta}|^2 \leq (z^k)_{2\gamma} (z^k)_{2\delta}
\leq |(z^k)_{0} |^2 = |(y^k)_{0} |^2 = \| \mathcal{A} \|_{k\ast, \mathbb{C}}^2 \leq \| \mathcal{A} \|_{\ast, \mathbb{C}}^2. \] The last inequality is due to \reff{rl:rhok:cpx}. This shows that the sequence $\{ y^k \}$ is bounded. For all $k\geq m/2$, it holds that $y^k \in \mathscr{S}^{c,2k}_{ \{0,m\} }$. The distance between $\mathscr{S}^{c,2k}_{ \{0,m\} }$ and $\mathscr{R}^c_{ \{0,m\} }$ tends to zero as $k\to \infty$ (cf.~\cite[Prop.~3.4]{LMOPT}). Therefore, every accumulation point $\hat{y}$ of the sequence $\{ y^k \}$ belongs to $\mathscr{R}^{c}_{ \{0,m\} }$. This also follows from \reff{sdr:R:Sc}. So, $ \hat{y} = \sum_{i=1}^r \lambda_i [(u_i, v_i)]_{0,m}, $ with $\lambda_i >0$ and $(u_i, v_i) \in S^c$. The feasibility condition in \reff{cnn:mom:z(C)} and the relation \reff{Raf:Taf} imply that \[ \mathcal{A} = \sum_{i=1}^r \lambda_i (u_i + \sqrt{-1} v_i)^{\otimes m}. \] When the nuclear decomposition of $\mathcal{A}$ is unique, $\lambda_i$ and $(u_i, v_i) \in S^c$ are also uniquely determined. So, the accumulation point $\hat{y}$ is unique and $y^k$ converges to a point in $\mathscr{R}^{c}_{ \{0,m\} }$ as $k \to \infty$. \end{proof}
\section{Numerical examples} \label{sc:num}
This section presents numerical experiments for nuclear norms of symmetric tensors. The computation is implemented in MATLAB R2012a, on a Lenovo laptop with 16.0G RAM. Algorithm~\ref{alg:R:odd} is applied for real nuclear norms of real odd order tensors, Algorithm~\ref{alg:even:m} is for real nuclear norms of real even order tensors, while Algorithm~\ref{alg:cnn:cpx} is for complex nuclear norms of all tensors. These algorithms can be implemented in the software {\tt Gloptipoly~3} \cite{Gloptipoly} by calling the semidefinite programming package {\tt SeDuMi} \cite{sedumi}.
Since our methods are numerical, we display only four decimal digits of the computational results. For a nuclear decomposition $\mathcal{A} = (u_1)^{\otimes m} + \cdots + (u_r)^{\otimes m}$, we display the vectors $u_1, \ldots, u_r$ column by column, from left to right. If one row block is not wide enough, we continue the display below, with the blocks separated by a blank row.
Recall that $e$ is the vector of all ones, and $e_i$ denotes the $i$th standard unit vector (i.e., the vector whose $i$th entry is one and all others are zeros). We begin with some tensor examples from Friedland and Lim~\cite{FriLim14b}.
\begin{exm} (\cite{FriLim14b}) (i) Consider the tensor in $\mt{S}^3(\mathbb{R}^2)$ such that \[ \mathcal{A} = \frac{1}{\sqrt{3}}\big( e_1 \otimes e_1 \otimes e_2 + e_1 \otimes e_2 \otimes e_1 + e_2 \otimes e_1 \otimes e_1 \big). \] We got
$ \| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} = \sqrt{3}$ and
$ \| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} = 3/2$. It took about $1$ second. The real nuclear decomposition $\mathcal{A} = \sum_{i=1}^3 (u_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
0.0000 -0.7937 0.7937
-0.5774 0.4582 0.4582 \end{verbatim} \noindent}and the complex nuclear decomposition $\mathcal{A} = \sum_{i=1}^3 (w_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
-0.5873 + 0.2740i 0.4582 - 0.4583i 0.6456 - 0.0566i
0.2944 + 0.3511i -0.0002 + 0.4582i 0.4513 + 0.0797i \end{verbatim}}
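The displayed columns can be checked with a few lines of plain MATLAB (only built-in functions are used; the entries below are the four-digit values listed above, so the reconstruction matches $\mathcal{A}$ only up to rounding):
{\tiny
\begin{verbatim}
% rebuild sum_i u_i^{otimes 3} from the displayed real columns
U  = [ 0.0000  -0.7937   0.7937;
      -0.5774   0.4582   0.4582];
vA = zeros(8,1);
for i = 1:3
    u  = U(:,i);
    vA = vA + kron(kron(u,u),u);        % add the rank-1 term u^{otimes 3}
end
mA = reshape(vA, 2,2,2);       % entries (1,1,2),(1,2,1),(2,1,1) are about 0.5774
nn = sum(sqrt(sum(U.^2,1)).^3);         % sum of ||u_i||^3, about 1.7321 = sqrt(3)
\end{verbatim}
}
Summing $\|w_i\|^3$ over the displayed complex columns in the same way gives about $3/2$.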
\iffalse
clear all, n = 2; m = 3; mA = zeros(2,2,2); mA(1,1,2) = 1/sqrt(3); mA(1,2,1) = 1/sqrt(3); mA(2,1,1) = 1/sqrt(3);
tstart = tic; [rnn, rU] = realSymTNN_oddord( mA, n, 3 ); [cnn, cU] = complexSymTNN(mA, n, 3); comptime = toc(tstart),
>> format long >> rnn
rnn =
1.732050803942378
>> rU
rU =
0.000000555964861 -0.793700534502393 0.793700534502393
-0.577350256797167 0.458243329880489 0.458243075113373
>> cnn
cnn =
1.499999982628005
>> cU
cU =
-0.587287849546880 + 0.273982924314108i 0.458162359491522 - 0.458324026766759i 0.645577760792282 - 0.056595473893863i
0.294429225664688 + 0.351138558102111i -0.000161652232510 + 0.458243199348050i 0.451253377220938 + 0.079732286401430i
>>
\fi
\noindent (ii) Consider the tensor in $\mt{S}^3(\mathbb{R}^2)$ such that \[ \mathcal{A} = \frac{1}{2}\big( e_1 \otimes e_1 \otimes e_2 + e_1 \otimes e_2 \otimes e_1 + e_2 \otimes e_1 \otimes e_1 - e_2 \otimes e_2 \otimes e_2 \big). \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} = 2$
and $\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} = \sqrt{2}$. It took about $1$ second.
\iffalse
clear all, n = 2; m = 3; mA = zeros(2,2,2); mA(1,1,2) = 1/2; mA(1,2,1) = 1/2; mA(2,1,1) = 1/2; mA(2,2,2) = -1/2;
tstart = tic; [rnn, rU] = realSymTNN_oddord( mA, n, 3 ); [cnn, cU] = complexSymTNN(mA, n, 3); comptime = toc(tstart),
\fi
The real nuclear decomposition $\mathcal{A} = \sum_{i=1}^3 (u_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
0.0000 -0.7565 0.7565
-0.8736 0.4368 0.4368 \end{verbatim} \noindent}while the complex nuclear decomposition $\mathcal{A} = \sum_{i=1}^2 (w_i)^{\otimes 3}$ is{\tiny \begin{verbatim}
0.5456 - 0.3150i -0.5456 + 0.3150i
0.3150 + 0.5456i 0.3150 + 0.5456i \end{verbatim} \noindent}The nuclear norms are the same as in \cite{FriLim14b}. \qed \end{exm}
Next, we see some tensors of order four.
\begin{exm} (i) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^3)$ such that \[ \mathcal{A} = e^{\otimes 4} - e_1^{\otimes 4} - e_2^{\otimes 4} - e_3^{\otimes 4}. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} =12$
and $\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{3\ast, \mathbb{C}} \approx 11.8960$. It took about $15$ seconds. The real nuclear decomposition is the same as above. The complex nuclear decomposition $\mathcal{A} = \sum_{i=1}^{9} (w_i)^{\otimes 4}$ is {\tiny \begin{verbatim}
-0.1152 + 0.0714i 0.0332 - 0.1001i 0.0479 - 0.1059i -0.1102 + 0.0539i -0.0845 + 0.8376i
0.5316 + 0.5596i 0.6145 + 0.5968i 0.0519 - 0.1094i -0.1068 + 0.0498i 0.0823 + 0.8373i
-0.1186 + 0.0752i 0.0288 - 0.0968i 0.5905 + 0.5684i 0.5654 + 0.5880i 0.0022 + 0.8307i
0.1285 + 0.7122i 0.0163 + 0.6866i 0.5921 + 0.6105i 0.5648 + 0.5379i
-0.0124 + 0.6932i -0.1320 + 0.7066i -0.1017 + 0.0364i 0.0674 - 0.1137i
-0.1158 + 0.7086i 0.1153 + 0.7018i -0.0984 + 0.0319i 0.0711 - 0.1172i \end{verbatim} }
\iffalse
clear all, n = 3; m = 4; mA = ones(n,n,n,n); for i1 = 1:n,
mA(i1,i1,i1,i1) = 0; end tstart = tic; [rnn, Uplus, Umius] = realSymTNN_evenord(mA, n, m); Uplus, Umius, [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
>> cnn
cnn =
11.895993912996527
>> cU
cU =
Columns 1 through 3
-0.115174600616283 + 0.071428382835663i 0.033213200110508 - 0.100060849335398i 0.047920889395148 - 0.105920978025123i
0.531636042305604 + 0.559611550252256i 0.614469244504901 + 0.596806787942288i 0.051932806175567 - 0.109387896269053i
-0.118632700128192 + 0.075202332189552i 0.028767821199554 - 0.096790846010909i 0.590491538355697 + 0.568364440024314i
Columns 4 through 6
-0.110221381121175 + 0.053869193418244i -0.084541488483467 + 0.837617307181343i 0.128483030449396 + 0.712150702088357i
-0.106788297876521 + 0.049829839204023i 0.082327436199197 + 0.837270047616982i -0.012368871169410 + 0.693236785275987i
0.565378326675049 + 0.587973800004984i 0.002195381021213 + 0.830711624216886i -0.115799708830957 + 0.708627216968966i
Columns 7 through 9
0.016270231115935 + 0.686551874449371i 0.592131718995424 + 0.610519779558299i 0.564836737037199 + 0.537911341441403i
-0.132005267871603 + 0.706550537668112i -0.101708194598306 + 0.036360718737297i 0.067392607211199 - 0.113734565458594i
0.115307440995952 + 0.701818183972137i -0.098391117644016 + 0.031915535465032i 0.071076799821348 - 0.117154420501664i
>>
\fi
\noindent (ii) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^3)$ such that \[ \mathcal{A} = (e_1 + e_2 )^{\otimes 4} + (e_1 + e_3)^{\otimes 4} - ( e_2 + e_3)^{\otimes 4}. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} =
\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} = 12$. It took about $12$ seconds. The real and complex nuclear decompositions are the same as above.
\iffalse
clear all, n = 3; m = 4; u1 = [1 1 0]'; u2 = [1 0 1]'; u3 = [0 1 1]'; vA = kron( kron( kron(u1,u1),u1), u1) + ... kron( kron( kron(u2,u2),u2), u2) - ... kron( kron( kron(u3,u3),u3), u3); mA = reshape(vA,n,n,n,n); tstart = tic; [rnn, Uplus, Umius] = realSymTNN_evenord(mA, n, m); rU = [Uplus, Umius], [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
\fi
\noindent (iii) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^3)$ such that \[ \mathcal{A} = (e_1 + e_2 - e_3)^{\otimes 4} + (e_1 - e_2 + e_3)^{\otimes 4} + (-e_1 + e_2 + e_3)^{\otimes 4} - (e)^{\otimes 4}. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} =
\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} = 36$. It took about $7$ seconds. The real and complex nuclear decompositions are the same as above.
\iffalse
clear all, n = 3; m = 4; u1 = [1 1 -1]'; u2 = [1 -1 1]'; u3 = [-1 1 1]'; u4 = [1 1 1]'; vA = kron( kron( kron(u1,u1),u1), u1) + ... kron( kron( kron(u2,u2),u2), u2) +... kron( kron( kron(u3,u3),u3), u3) - ... kron( kron( kron(u4,u4),u4), u4) ; mA = reshape(vA,n,n,n,n); tstart = tic; [rnn, Uplus, Umius] = realSymTNN_evenord(mA, n, m); Uplus, Umius, [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
\fi
\qed \end{exm}
The following are some examples of complex-valued tensors.
\begin{exm} (i) Consider the tensor $\mathcal{A} \in \mt{S}^3(\mathbb{C}^3)$ such that \[ \mathcal{A}_{i_1 i_2 i_3} = ( \sqrt{-1} )^{i_1 i_2 i_3 }. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} \approx 8.8759$. It took about $1$ second. The nuclear decomposition $\mathcal{A} = \sum_{i=1}^5 (w_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
0.6024 + 0.3478i -0.6019 + 0.3475i 0.0000 - 0.6894i 0.6262 + 0.3615i -0.0000 + 0.7230i
0.0000 - 0.0000i 0.0000 - 0.0000i 0.0000 - 0.0000i -0.7913 + 0.6476i 0.9565 - 0.3615i
-0.6024 - 0.3478i 0.6019 - 0.3475i -0.0000 + 0.6894i 0.6262 + 0.3615i 0.0000 + 0.7230i \end{verbatim}}
\iffalse
clear all, n = 3; m = 3; mA = ones(n,n,n); for i1 = 1:n, for i2=1:n, for i3=1:n,
mA(i1,i2,i3) = ( sqrt(-1) )^(i1*i2*i3); end, end, end tstart = tic; [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
>> cnn
cnn =
8.875858749110593
>> cU
cU =
Columns 1 through 3
0.602431591889852 + 0.347814245580094i -0.601924213495638 + 0.347521562236787i 0.000000221280415 - 0.689378867639660i
0.000000085086027 - 0.000000042591146i 0.000000045095532 - 0.000000094321624i 0.000000020761652 - 0.000000034291767i
-0.602431619374133 - 0.347814191431550i 0.601924322643719 - 0.347521476523454i -0.000000152091938 + 0.689378818450454i
Columns 4 through 5
0.626153710306847 + 0.361510013199741i -0.000000000000007 + 0.723020026399476i
-0.791309650846522 + 0.647568493386280i 0.956465591386194 - 0.361510013199742i
0.626153710306850 + 0.361510013199743i 0.000000000000006 + 0.723020026399492i
>>
\fi
\noindent (ii) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{C}^3)$ such that \[ \mathcal{A}_{i_1 i_2 i_3 i_4} = ( \sqrt{-1} )^{i_1}+ ( -1 )^{i_2}+ ( -\sqrt{-1} )^{i_3}+ ( 1 )^{i_4}. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{C}} =
\| \mathcal{A} \|_{3\ast, \mathbb{C}} \approx 26.9569$. It took about $17$ seconds. The nuclear decomposition $\mathcal{A} = \sum_{i=1}^7 (w_i)^{\otimes 4}$ is {\tiny \begin{verbatim}
0.6274 + 0.5703i -0.7090 + 0.2840i 0.2324 - 1.0695i -0.1256 + 0.3158i
0.6275 + 0.5699i 0.5471 + 0.1352i 0.2938 + 0.5936i 0.2671 + 0.1962i
-0.5940 - 0.6378i 0.5286 + 0.1184i 0.2875 + 0.6043i 0.1922 + 0.2772i
-0.1432 + 0.8472i 0.0955 + 0.9276i 0.7074 + 0.9184i
-0.1440 + 0.8475i 0.3732 + 0.7080i -0.0292 + 0.6956i
0.9490 + 0.2131i 0.4154 + 0.7460i -0.0031 + 0.6971i \end{verbatim} }
\iffalse
clear all, n = 3; m = 4; mA = ones(n,n,n,n); for i1 = 1:n, for i2=1:n, for i3=1:n, for i4=1:n,
mA(i1,i2,i3,i4) = ( sqrt(-1) )^i1+ ( -1 )^i2+ ( -sqrt(-1) )^i3+ ( 1 )^i4; end, end, end, end, tstart = tic; [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
>> cnn
cnn =
26.956855245362373
>> cU
cU =
Columns 1 through 2
0.627413725523020 + 0.570335764617585i -0.709005559020226 + 0.283958235693073i
0.627496018738079 + 0.569935236004923i 0.547099317874804 + 0.135200350562756i
-0.593990358453649 - 0.637797566857796i 0.528566834483104 + 0.118416377092749i
Columns 3 through 4
0.232405804810743 - 1.069457421785088i -0.125559030340978 + 0.315761888856084i
0.293758482044403 + 0.593632468839868i 0.267142784659590 + 0.196219686246286i
0.287480546792624 + 0.604327344467453i 0.192217022431981 + 0.277218545308141i
Columns 5 through 6
-0.143245990442636 + 0.847207059642828i 0.095539909891883 + 0.927624121027389i
-0.143974169192022 + 0.847526905836173i 0.373214596058091 + 0.708049267043455i
0.949022713005449 + 0.213114043618299i 0.415447700250477 + 0.745963411261063i
Column 7
0.707354017887271 + 0.918357311147553i
-0.029155948296665 + 0.695625831162718i
-0.003083053071200 + 0.697102531383811i
>>
\fi
\noindent (iii) Consider the tensor $\mathcal{A} \in \mt{S}^5(\mathbb{C}^3)$ such that \[ \mathcal{A}_{i_1 i_2 i_3 i_4 i_5} = (\sqrt{-1})^{ i_1 i_2 i_3 i_4 i_5} + (-\sqrt{-1})^{ i_1 i_2 i_3 i_4 i_5} . \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{C}} =
\| \mathcal{A} \|_{2\ast, \mathbb{C}} \approx 49.5626$. It took about $4.7$ seconds. The nuclear decomposition $\mathcal{A} = \sum_{i=1}^6 (w_i)^{\otimes 5}$ is {\tiny \begin{verbatim}
0.2711 + 0.8335i 0.8651 + 0.0003i 0.6741 + 0.6909i
-0.2255 - 0.6963i -0.7224 + 0.0008i 0.7712 - 0.6030i
0.2711 + 0.8335i 0.8651 + 0.0003i 0.6741 + 0.6909i
0.5542 + 0.0006i 0.8654 + 0.4276i 0.1743 + 0.5348i
1.1499 - 0.0004i -0.3352 + 0.9198i 0.3603 + 1.1102i
0.5542 + 0.0006i 0.8654 + 0.4276i 0.1743 + 0.5348i \end{verbatim} }
\iffalse
clear all, n = 3; m = 5; mA = zeros(n,n,n,n,n); for i1 = 1:n, for i2=1:n, for i3=1:n, for i4=1:n, for i5=1:n,
mA(i1,i2,i3,i4,i5) = ( sqrt(-1) )^(i1*i2*i3*i4*i5) + ( -sqrt(-1) )^(i1*i2*i3*i4*i5) ; end, end, end, end, end tstart = tic; [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
>> cnn
cnn =
49.562556630960479
>> cU
cU =
Columns 1 through 3
0.271140337184212 + 0.833509268152705i 0.865061086850895 + 0.000314180551707i 0.674140087300118 + 0.690864488717166i
-0.225476631817319 - 0.696330456631553i -0.722373147229996 + 0.000759410285282i 0.771201706748418 - 0.602959299480600i
0.271140327120179 + 0.833509274082346i 0.865061084197509 + 0.000314168529791i 0.674140087306555 + 0.690864488702736i
Columns 4 through 6
0.554158049319640 + 0.000565494098229i 0.865373853423632 + 0.427643007843231i 0.174349233975872 + 0.534776663030805i
1.149921645508792 - 0.000438305912550i -0.335161088240215 + 0.919771502657460i 0.360272742319135 + 1.110195454815789i
0.554158051805277 + 0.000565505353600i 0.865373853417446 + 0.427643007857771i 0.174349243334416 + 0.534776657522139i
>>
\fi
\noindent (iv) Consider the tensor $\mathcal{A} \in \mt{S}^6(\mathbb{C}^3)$ such that \[ \mathcal{A}_{i_1 i_2 i_3 i_4 i_5 i_6} = ( 1 + \sqrt{-1} )^{i_1+\cdots+i_6-6} + (1-\sqrt{-1})^{i_1+\cdots+i_6-6}. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{C}} =
\| \mathcal{A} \|_{2\ast, \mathbb{C}} = 686 $. It took about $4.8$ seconds. The nuclear decomposition is \[ \mathcal{A} = \begin{bmatrix} 1 \\ 1 - \sqrt{-1} \\ - 2\sqrt{-1} \end{bmatrix}^{\otimes 6} +
\begin{bmatrix} 1 \\ 1 + \sqrt{-1} \\ 2\sqrt{-1} \end{bmatrix}^{\otimes 6}. \]
\iffalse
clear all, n = 3; m = 6; mA = zeros(n,n,n,n,n,n); for i1 = 1:n, for i2=1:n, for i3=1:n, for i4=1:n, for i5=1:n, for i6 = 1:n
mA(i1,i2,i3,i4,i5,i6) = ( 1 + sqrt(-1) )^(i1+i2+i3+i4+i5+i6-6) + ...
( 1-sqrt(-1) )^(i1+i2+i3+i4+i5+i6-6); end, end, end, end, end, end tstart = tic; [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
\fi
\qed \end{exm}
\begin{exm} \label{exm:A=i1+i2+i3} Consider the tensor $\mathcal{A} \in \mt{S}^3(\mathbb{R}^n)$ such that \[ \mathcal{A}_{i_1 i_2 i_3} = i_1 + i_2 + i_3. \] For a range of values of $n$, the real and complex nuclear norms
$\| \mathcal{A} \|_{\ast, \mathbb{R}}$ and $\| \mathcal{A} \|_{\ast,\mathbb{C}}$ are reported in Table~\ref{tab:A=i1+i2+i3}. We list the order $k$ for which
$\| \mathcal{A} \|_{\ast,\mathbb{F}} = \| \mathcal{A} \|_{k\ast,\mathbb{F}}$ and the length of the nuclear decomposition, as well as the consumed time (in seconds).
\begin{table}[htb] \caption{Nuclear norms of the tensor in Example~\ref{exm:A=i1+i2+i3}.} \label{tab:A=i1+i2+i3}
\begin{tabular}{|c|c|r|c|c|r|} \hline
$n$ & $\mathbb{F}$ & $\| \mathcal{A} \|_{\ast, \mathbb{F}}$ & $k$ & {\tt length} & {\tt time} \\ \hline $2$ & $\mathbb{R}$ & $13.4164$ & $2$ & $3$ & 0.81 \\ \hline $2$ & $\mathbb{C}$ & $13.2114$ & $2$ & $3$ & 1.31 \\ \hline $3$ & $\mathbb{R}$ & $33.6749$ & $2$ & $3$ & 0.90 \\ \hline $3$ & $\mathbb{C}$ & $32.9505$ & $2$ & $3$ & 1.92 \\ \hline $4$ & $\mathbb{R}$ & $65.7267$ & $2$ & $3$ & 0.93 \\ \hline $4$ & $\mathbb{C}$ & $64.0886$ & $2$ & $3$ & 3.73 \\ \hline $5$ & $\mathbb{R}$ & $111.2430$ & $2$ & $3$ & 0.97 \\ \hline $6$ & $\mathbb{R}$ & $171.7091$ & $2$ & $3$ & 1.08 \\ \hline $7$ & $\mathbb{R}$ & $248.4754$ & $2$ & $3$ & 1.23 \\ \hline $8$ & $\mathbb{R}$ & $342.7886$ & $2$ & $3$ & 1.67 \\ \hline $9$ & $\mathbb{R}$ & $455.8125$ & $2$ & $3$ & 2.45 \\ \hline $10$ & $\mathbb{R}$ & $588.6425$ & $2$ & $3$ & 2.66 \\ \hline \end{tabular} \end{table} For neatness, we only display nuclear decompositions for $n=3$. The real nuclear decomposition $\mathcal{A} = \sum_{i=1}^3 (u_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
0.3689 -0.8633 1.5317
-0.1899 -0.3318 1.8215
-0.7487 0.1996 2.1113 \end{verbatim} \noindent}while the complex nuclear decomposition $\mathcal{A} = \sum_{i=1}^3 (w_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
0.0851 + 0.6890i 1.4795 - 0.0001i 0.5552 + 0.4168i
-0.2504 + 0.5318i 1.7763 + 0.0001i 0.5878 + 0.0477i
-0.5858 + 0.3745i 2.0730 + 0.0004i 0.6203 - 0.3214i \end{verbatim}}
\iffalse
clear all, n = 4; m = 3; for i1 = 1:n, for i2 = 1: n, for i3 = 1:n
mA(i1,i2,i3) = i1 + i2 + i3; end, end, end tstart = tic; [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
tstart = tic; [rnn, rU] = realSymTNN_oddord( mA, n, 3 ); comptime = toc(tstart),
>> format long >> rnn
rnn =
33.674916504195721
>> rU
rU =
-0.863294021131362 0.368852417437287 1.531654646345411
-0.331847750011753 -0.189901971459606 1.821490590969886
0.199598521107856 -0.748656360356497 2.111326535594361
>> cnn
cnn =
32.950548256267723
>> cU
cU =
0.085065506220401 + 0.688997382205001i 1.479535389450888 - 0.000146861594190i 0.555241772691097 + 0.416809354048874i
-0.250374102419288 + 0.531766909018544i 1.776257580640192 + 0.000110443616904i 0.587763701681968 + 0.047693167658844i
-0.585813711058973 + 0.374536435832085i 2.072979771829496 + 0.000367748827999i 0.620285630672835 - 0.321423018731185i
>>
\fi
\qed \end{exm}
\begin{exm} \label{exm:cos(1/i1+c+1/i4)} Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^n)$ such that \[ \mathcal{A}_{i_1 i_2 i_3 i_4} = \cos \left( \frac{1}{i_1}+\frac{1}{i_2}+\frac{1}{i_3}+\frac{1}{i_4} \right). \] The nuclear norms, the order $k$ for which
$\| \mathcal{A} \|_{\ast, \mathbb{F}} = \| \mathcal{A} \|_{k\ast, \mathbb{F}}$, the lengths of the nuclear decompositions, and the consumed time (in seconds) are displayed in Table~\ref{tab:cos(1/i1+c+1/i4)} for a range of values of $n$. \begin{table}[htb] \caption{Nuclear norms of the tensor in Example~\ref{exm:cos(1/i1+c+1/i4)}.} \label{tab:cos(1/i1+c+1/i4)}
\begin{tabular}{|c|c|r|c|c|r|} \hline
$n$ & $\mathbb{F}$ & $\| \mathcal{A} \|_{\ast, \mathbb{F}}$ & $k$ & {\tt length} & {\tt time} \\ \hline $2$ & $\mathbb{R}$ & $4.9001$ & 2 & 4 & 0.93 \\ \hline $2$ & $\mathbb{C}$ & $3.9911 $ & 2 & 4 & 2.04 \\ \hline $3$ & $\mathbb{R}$ & $ 10.7246$ & 2 & 4 & 1.02 \\ \hline $3$ & $\mathbb{C}$ & $ 8.1627 $ & 2 & 4 & 10.05 \\ \hline $4$ & $\mathbb{R}$ & $ 18.0100 $ & 2 & 4 & 1.14 \\ \hline $4$ & $\mathbb{C}$ & $13.1108$ & 2 & 4 & 131.13 \\ \hline $5$ & $\mathbb{R}$ & $26.9770 $ & 2 & 4 & 1.33 \\ \hline $6$ & $\mathbb{R}$ & $37.8395 $ & 2 & 4 & 1.86 \\ \hline $7$ & $\mathbb{R}$ & $50.7373 $ & 2 & 4 & 2.78 \\ \hline $8$ & $\mathbb{R}$ & $65.7485 $ & 2 & 4 & 4.75 \\ \hline $9$ & $\mathbb{R}$ & $ 82.9121 $ & 2 & 4 & 9.30 \\ \hline $10$ & $\mathbb{R}$ & $102.2442 $ & 2 & 4 & 21.49 \\ \hline \end{tabular} \end{table} For neatness, we only display nuclear decompositions for $n=3$. The real nuclear decomposition is $\mathcal{A} = (u_1)^{\otimes 4} + (u_2)^{\otimes 4} -(u_3)^{\otimes 4} -(u_4)^{\otimes 4} $, where $u_1,u_2,u_3,u_4$ are respectively given as {\tiny \begin{verbatim}
-0.0261 0.9989 -0.6615 1.0988
0.7131 0.1816 0.2850 0.9044
0.9287 -0.1114 0.5965 0.7863 \end{verbatim} \noindent}The complex nuclear decomposition $\mathcal{A} = \sum_{i=1}^4 (w_i)^{\otimes 4}$ is {\tiny \begin{verbatim}
-0.0001 - 0.1505i 0.2673 + 0.7374i 0.7395 + 0.2678i 0.6659 + 0.6676i
-0.0010 + 0.3845i 0.5967 + 0.3270i 0.3287 + 0.5959i 0.5021 + 0.5032i
-0.0013 + 0.5481i 0.6771 + 0.1666i 0.1680 + 0.6759i 0.4172 + 0.4181i \end{verbatim} }
\iffalse
clear all, n = 10; m = 4; for i1 = 1:n, for i2 = 1: n, for i3 = 1:n, for i4=1:n
mA(i1,i2,i3, i4) = cos( 1/i1 + 1/i2 + 1/i3 + 1/i4 ); end, end, end, end tstart = tic; [rnn, Uplus, Umius] = realSymTNN_evenord( mA, n, m); comptime = toc(tstart),
tstart = tic; [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
>> format long >> rnn
rnn =
10.724597486963821
>> Uplus
Uplus =
-0.026107937619034 0.998934991636464
0.713057763543217 0.181642840060381
0.928746090580627 -0.111377055220947
>> Umius
Umius =
-0.661490088413522 1.098838149809734
0.285022295246667 0.904445244161288
0.596521889296131 0.786334055351988
>> cnn
cnn =
8.162693927903284
>> cU
cU =
Columns 1 through 2
-0.000075328889850 - 0.150537920519551i 0.267265257788763 + 0.737398324231674i
-0.001007443904575 + 0.384534721967229i 0.596673208961289 + 0.326986302061970i
-0.001273349181912 + 0.548069090977604i 0.677115413995324 + 0.166588799588037i
Columns 3 through 4
0.739458289446112 + 0.267829393405554i 0.665942622203394 + 0.667628467061621i
0.328666402078927 + 0.595860395371791i 0.502133456237029 + 0.503234320944953i
0.168043004251591 + 0.675871827143138i 0.417221955699565 + 0.418058511781897i
>>
\fi
\qed \end{exm}
For a tensor $\mathcal{A} \in \mt{S}^m( \mathbb{C}^n )$, define \begin{equation} \label{df:A(x)} \mathcal{A}(x) := \sum_{i_1,\ldots, i_m = 1}^n \mathcal{A}_{i_1 \ldots i_m} \cdot x_{i_1} \cdots x_{i_m}. \end{equation} Clearly, $\mathcal{A}(x)$ is a homogeneous polynomial in $x:=(x_1,\ldots, x_n)$ of degree $m$. There is a bijection between the symmetric tensor space $\mt{S}^m( \mathbb{C}^n )$ and the space of homogeneous polynomials of degree $m$ (cf.~\cite{GPSTD,OedOtt13}). So, we can equivalently display $\mathcal{A}$ by showing the polynomial $\mathcal{A}(x)$. Moreover, the decomposition $\mathcal{A} = \sum_{i=1}^r \pm (u_i)^{\otimes m}$ is equivalent to $\mathcal{A}(x) = \sum_{i=1}^r \pm (u_i^Tx)^m$. Thus, we can also display a nuclear decomposition by writing $\mathcal{A}(x)$ as a sum of powers of linear forms.
\begin{exm}\label{exm:poly:A(x)} (i) Consider the tensor $\mathcal{A} \in \mt{S}^3(\mathbb{R}^3)$ such that \[ \mathcal{A}(x) = x_1x_2x_3. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} =
\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} = \sqrt{3}/2$.
The real nuclear decomposition of $\mathcal{A}$ is given as
\[ \begin{array}{rl} \mathcal{A}(x) = &\frac{1}{24} \Big( (-x_1-x_2+x_3)^3 + (-x_1+x_2-x_3)^3 \\ & \qquad + (x_1-x_2-x_3)^3 + (x_1+x_2+x_3)^3 \, \Big). \end{array} \] The above also serves as a complex nuclear decomposition.
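One can verify the value $\sqrt{3}/2$ directly from this decomposition: each of the four vectors $(\pm 1, \pm 1, \pm 1)$ above has Euclidean norm $\sqrt{3}$, and $\frac{1}{24}(v^Tx)^3 = \frac{\|v\|^3}{24}\big( (v/\|v\|)^Tx \big)^3$, so each term carries the weight $(\sqrt{3})^3/24 = \sqrt{3}/8$ and
\[
4 \cdot \frac{(\sqrt{3})^3}{24} \, = \, \frac{\sqrt{3}}{2},
\]
matching the computed nuclear norm.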
\iffalse
clear all, n = 3; m = 3; mA = zeros(n,n,n); for i1 = 1:n, for i2 = 1: n, for i3 = 1:n,
if (i1~=i2 & i1~=i3 & i2~=i3 )
mA(i1,i2,i3) = 1/6;
end end, end, end, tstart = tic; [rnn, rU] = realSymTNN_oddord(mA, n, m); [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
syms x_1 x_2 x_3, Ax = ( (-x_1-x_2+x_3)^3 + (-x_1+x_2-x_3)^3 + (x_1-x_2-x_3)^3 + (x_1+x_2+x_3)^3 )/24; expand(Ax),
\fi
\noindent (ii) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^4)$ such that \[ \mathcal{A}(x) = x_1x_2x_3x_4. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast,\mathbb{R}} =
\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast,\mathbb{C}} = 2/3$. The real nuclear decomposition is given as
\[ \begin{array}{rl} \mathcal{A}(x)\,\, = & \frac{1}{192} \Big( (-x_1-x_2+x_3+x_4)^4 + (-x_1+x_2-x_3+x_4)^4 + \\ & \qquad \,(-x_1+x_2+x_3-x_4)^4 + (+x_1+x_2+x_3+x_4)^4 - \\ & \qquad \,(-x_1+x_2+x_3+x_4)^4 -(x_1-x_2+x_3+x_4)^4 - \\ & \qquad \,(x_1+x_2-x_3+x_4)^4 -(x_1+x_2+x_3-x_4)^4 \, \Big). \end{array} \] The above also serves as a complex nuclear decomposition.
\iffalse
clear all, n = 4; m = 4; tA = zeros(n,n,n,n); for i1 = 1:n, for i2 = 1: n, for i3 = 1:n, for i4=1:n
if (i1~=i2 & i1~=i3 & i1~=i4 & i2~=i3 & i2~=i4 & i3~=i4 )
mA(i1,i2,i3, i4) = 1/24;
end end, end, end, end tstart = tic; [rnn, Uplus, Umius] = realSymTNN_evenord( mA, n, m); [cnn, cU] = complexSymTNN(mA, n, m); comptime = toc(tstart),
syms x_1 x_2 x_3 x_4 Ax = ( (-x_1-x_2+x_3+x_4)^4 + (-x_1+x_2-x_3+x_4)^4 + (-x_1+x_2+x_3-x_4)^4 + (+x_1+x_2+x_3+x_4)^4 ... - (-x_1+x_2+x_3+x_4)^4 -(x_1-x_2+x_3+x_4)^4 - (x_1+x_2-x_3+x_4)^4 -(x_1+x_2+x_3-x_4)^4 )/192; expand(Ax),
\fi
\noindent (iii) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^2)$ such that \[ \mathcal{A}(x) = x_1^2 x_2^2. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} = 1$. The real nuclear decomposition is \[ \mathcal{A}(x) = \frac{1}{12} (-x_1+x_2)^4 + \frac{1}{12} (x_1+x_2)^4 -\frac{1}{6} x_1^4 -\frac{1}{6} x_2^4. \] The complex nuclear norm
$\| \mathcal{A} \|_{\ast,\mathbb{C}} = \| \mathcal{A} \|_{2\ast,\mathbb{C}} = 2/3$, and the decomposition is \[ \mathcal{A}(x) = \frac{1}{24} \Big( (x_1-x_2)^4 + (x_1+x_2)^4 - (x_1+\sqrt{-1}x_2)^4 - (x_1-\sqrt{-1}x_2)^4 \Big). \]
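This identity can be verified by a direct expansion:
\[
(x_1-x_2)^4 + (x_1+x_2)^4 = 2x_1^4 + 12 x_1^2x_2^2 + 2x_2^4,
\]
\[
(x_1+\sqrt{-1}x_2)^4 + (x_1-\sqrt{-1}x_2)^4 = 2x_1^4 - 12 x_1^2x_2^2 + 2x_2^4,
\]
so the difference of the two sums equals $24\, x_1^2 x_2^2$. Each of the four vectors has norm $\sqrt{2}$, so the weights sum to $4 \cdot (\sqrt{2})^4/24 = 2/3$, matching the complex nuclear norm above.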
\iffalse
clear all, n = 2; m = 4; mA = zeros(n,n,n,n); i1=1; i2=2; mA(i1,i1,i2,i2) = 1/6; mA(i1,i2,i1,i2) = 1/6; mA(i1,i2,i2,i1) = 1/6; mA(i2,i1,i1,i2) = 1/6; mA(i2,i1,i2,i1) = 1/6; mA(i2,i2,i1,i1) = 1/6; [rnn, Uplus, Umius] = realSymTNN_evenord( mA, n, m); Uplus, Umius,
[cnn, cU] = complexSymTNN(mA, n, m);
syms x_1 x_2 Ax = ( (x_1-x_2)^4 + (x_1+x_2)^4 - (x_1+sqrt(-1)*x_2)^4 - (x_1-sqrt(-1)*x_2)^4 )/24; expand( Ax ),
\fi
\noindent (iv) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^3)$ such that \[ \mathcal{A}(x) = x_1^2 x_2^2 + x_2^2 x_3^2 + x_1^2 x_3^2. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} = 2$. The real nuclear decomposition is \[ \begin{array}{rl} \mathcal{A}(x) = & \frac{1}{24} \Big( (x_1-x_2+x_3)^4 + (x_1+x_2-x_3)^4 + (-x_1+x_2+x_3)^4 \\ & \qquad + (x_1+ x_2+x_3)^4 \Big) - \frac{1}{6} \Big( x_1^4 + x_2^4 + x_3^4 \Big). \end{array} \] The complex nuclear norm
$\| \mathcal{A} \|_{\ast,\mathbb{C}} = \| \mathcal{A} \|_{2\ast,\mathbb{C}} = 5/3$, and the decomposition is \[ \begin{array}{rl} \mathcal{A}(x) \,\, = & \frac{1}{36} \Big(
(-x_1+x_2+x_3)^4+ (x_1-x_2+x_3)^4+ (x_1+x_2-x_3)^4+ (x_1+x_2+x_3)^4 \\ & \qquad - (x_1-\sqrt{-1} x_3)^4 - (x_2-\sqrt{-1} x_3)^4 - (x_1-\sqrt{-1} x_2)^4\\ & \qquad - (x_1+\sqrt{-1} x_3)^4 - (x_2+\sqrt{-1} x_3)^4 - (x_1+\sqrt{-1} x_2)^4
\Big). \end{array} \]
\iffalse
clear all, n = 3; m = 4; mA = zeros(n,n,n,n); for i1 = 1:n, for i2 = i1+1: n,
mA(i1,i1,i2,i2) = 1/6; mA(i1,i2,i1,i2) = 1/6; mA(i1,i2,i2,i1) = 1/6;
mA(i2,i1,i1,i2) = 1/6; mA(i2,i1,i2,i1) = 1/6; mA(i2,i2,i1,i1) = 1/6; end, end [rnn, Uplus, Umius] = realSymTNN_evenord( mA, n, m); Uplus, Umius,
[cnn, cU] = complexSymTNN(mA, n, m);
syms x_1 x_2 x_3 Ax = ( (-x_1+x_2+x_3)^4+ (x_1-x_2+x_3)^4+(x_1+x_2-x_3)^4+ (x_1+x_2+x_3)^4 )/24 ...
- ( x_1^4 + x_2^4 + x_3^4 )/6; expand(Ax),
syms x_1 x_2 x_3 Ax = ( (-x_1+x_2+x_3)^4+ (x_1-x_2+x_3)^4+(x_1+x_2-x_3)^4+ (x_1+x_2+x_3)^4 ... - (x_1+sqrt(-1)*x_3)^4 - (x_2-sqrt(-1)*x_3)^4 - (x_1-sqrt(-1)*x_2)^4 ...
- (x_2+sqrt(-1)*x_3)^4 - (x_1-sqrt(-1)*x_3)^4 - (x_1+sqrt(-1)*x_2)^4 )/36; expand(Ax),
\fi
\noindent (v) Consider the tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^3)$ such that \[ \mathcal{A}(x) = (x_1^2 + x_2^2 + x_3^2)^2. \]
We got $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{2\ast, \mathbb{R}} =
\| \mathcal{A} \|_{\ast, \mathbb{C}} = \| \mathcal{A} \|_{2\ast, \mathbb{C}} = 5$, with the nuclear decomposition \[ \begin{array}{rl} \mathcal{A}(x) = & \frac{1}{12} \Big( (x_1-x_2+x_3)^4 + (x_1+x_2-x_3)^4 + (-x_1+x_2+x_3)^4 \\ & \qquad + (x_1+ x_2+x_3)^4 \Big) + \frac{2}{3} \Big( x_1^4 + x_2^4 + x_3^4 \Big). \end{array} \]
\iffalse
clear all, n = 3; m = 4; mA = zeros(n,n,n,n); for i1 = 1:n,
mA(i1,i1,i1,i1) = 1;
for i2 = i1+1: n,
mA(i1,i1,i2,i2) = 1/3; mA(i1,i2,i1,i2) = 1/3; mA(i1,i2,i2,i1) = 1/3;
mA(i2,i1,i1,i2) = 1/3; mA(i2,i1,i2,i1) = 1/3; mA(i2,i2,i1,i1) = 1/3; end, end [rnn, Uplus, Umius] = realSymTNN_evenord( mA, n, m); Uplus, Umius,
[cnn, cU] = complexSymTNN(mA, n, m);
syms x_1 x_2 x_3 Ax = ( (-x_1+x_2+x_3)^4+ (x_1-x_2+x_3)^4+(x_1+x_2-x_3)^4+ (x_1+x_2+x_3)^4 )/ 12 ... +(x_1^4+x_2^4+x_3^4)*2/3; expand(Ax),
\fi
The above also serves as a complex nuclear decomposition. \qed \end{exm}
\begin{exm} \label{sym:abc} For $a,b,c \in \mathbb{R}^n$, the symmetrization of the rank-$1$ nonsymmetric tensor $a\otimes b \otimes c$ is \[ \begin{array}{r} \mbf{sym}(a \otimes b \otimes c) := \frac{1}{6} (a \otimes b \otimes c + a \otimes c \otimes b + b \otimes a \otimes c \quad \, \\
+ b \otimes c \otimes a + c \otimes a \otimes b + c \otimes b \otimes a \, ). \end{array} \]
One wonders whether $\| \mbf{sym}(a \otimes b \otimes c) \|_{\ast, \mathbb{R}}
= \|a\| \cdot \|b\| \cdot \|c\|$ or not. Indeed, this is usually not true. Typically, we have the inequalities \[
\| \mbf{sym}(a \otimes b \otimes c) \|_{\ast, \mathbb{C}}
< \| \mbf{sym}(a \otimes b \otimes c) \|_{\ast, \mathbb{R}}
< \|a\| \cdot \|b\| \cdot \|c\|. \] For instance, consider the following tensor in $\mt{S}^3(\mathbb{R}^3)$ \[ \mathcal{A} = \mbf{sym}( e_1 \otimes (e_1+e_2) \otimes (e_1+e_2+e_3) ). \] We can compute that $
\| \mathcal{A} \|_{\ast, \mathbb{C}} \approx 2.2276, $ $
\| \mathcal{A} \|_{\ast, \mathbb{R}} \approx 2.4190, $ but \[
\| e_1 \| \cdot \| e_1+e_2 \| \cdot \| e_1+e_2+e_3 \| = \sqrt{6} \approx 2.4495. \] The computed real nuclear decomposition $\mathcal{A} = \sum_{i=1}^5 (u_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
-0.2896 0.0149 -0.4750 -0.0947 1.0423
0.1803 -0.5617 0.1042 -0.2944 0.5806
-0.2891 -0.3132 0.3114 0.1865 0.2630 \end{verbatim} \noindent}The computed complex nuclear decomposition $\mathcal{A} = \sum_{i=1}^8 (w_i)^{\otimes 3}$ is {\tiny \begin{verbatim}
-0.2795 + 0.2861i -0.2882 + 0.2378i -0.4808 + 0.8488i -0.2489 + 0.0952i
0.1094 + 0.2837i -0.0298 + 0.2075i -0.2776 + 0.4375i 0.1376 + 0.3928i
-0.2009 + 0.0696i 0.2301 + 0.1457i -0.1146 + 0.2252i 0.1359 + 0.1297i
0.1212 + 0.0018i 0.3286 - 0.1764i 0.2244 - 0.1049i 0.2196 - 0.1357i
0.0304 - 0.0450i 0.2410 + 0.1640i 0.1103 + 0.2279i 0.2235 + 0.2601i
0.0268 + 0.0974i -0.0096 + 0.2903i 0.1464 - 0.1210i 0.0750 + 0.1106i \end{verbatim}}
\iffalse
clear all, n = 3; m = 3; q1 = [1 0 0]'; q2 = [1 1 0]'; q3 = [1 1 1]'; qtpow1 = kron( kron(q1,q2),q3); qtpow2 = kron( kron(q1,q3),q2); qtpow3 = kron( kron(q2,q1),q3); qtpow4 = kron( kron(q2,q3),q1); qtpow5 = kron( kron(q3,q1),q2); qtpow6 = kron( kron(q3,q2),q1); vA = qtpow1+qtpow2+qtpow3+qtpow4+qtpow5+qtpow6; mA = reshape(vA,n,n,n)/6;
[rnn, rU] = realSymTNN_oddord(mA, n, m); rnn-norm(q1)*norm(q2)*norm(q3), [cnn, cU] = complexSymTNN(mA, n, m); cnn-rnn,
>> rnn
rnn =
2.418982395764433
>> rU
rU =
-0.289583035109710 0.014912776264726 -0.475048352771981 -0.094695708171714 1.042297191694308
0.180323854446316 -0.561714824229171 0.104169452279075 -0.294424303150289 0.580644065854830
-0.289058295432570 -0.313158506188813 0.311379640841424 0.186476582853557 0.262983944716152
>> cnn
cnn =
2.227570862249642
>> cU
cU =
Columns 1 through 2
-0.279545192682690 + 0.286105753565567i -0.288182067639076 + 0.237775374712514i
0.109393447994599 + 0.283709936912754i -0.029787185593623 + 0.207472625658106i
-0.200946626019464 + 0.069634209213420i 0.230087412610016 + 0.145666559025095i
Columns 3 through 4
-0.480781016348366 + 0.848771410423092i -0.248909419369784 + 0.095242981544154i
-0.277575808413930 + 0.437534932663802i 0.137594553780137 + 0.392843577410099i
-0.114586046416108 + 0.225243820218457i 0.135910837919536 + 0.129716950059332i
Columns 5 through 6
0.121205668458481 + 0.001837471385231i 0.328618986574105 - 0.176382133495463i
0.030431909412909 - 0.044959173354886i 0.240979124504812 + 0.164035853878743i
0.026799324104151 + 0.097409630836931i -0.009642449509762 + 0.290308192348264i
Columns 7 through 8
0.224419728399201 - 0.104922346568722i 0.219610276969088 - 0.135685569377280i
0.110259143944795 + 0.227916400669288i 0.223454672350158 + 0.260077222386059i
0.146364208285751 - 0.120986518731285i 0.074990826125414 + 0.110610899557311i
>>
\fi
\noindent However, if $a=b=c$, then
$\| \mbf{sym}(a\otimes b\otimes c) \|_{\ast, \mathbb{F}} = \| a \|^3$ for $\mathbb{F} = \mathbb{R}, \mathbb{C}$. \qed \end{exm}
\iffalse
\begin{exm} Consider the tensors given as \begin{equation} \label{orth:ten} \mathcal{A} = \lambda_1 (q_1)^{\otimes m} + \cdots + \lambda_r (q_r)^{\otimes m}, \end{equation} where $q_1, \ldots, q_r \in \mathbb{C}^n$ are orthonormal to each other, and $\lambda_1,\ldots, \lambda_r$ are nonzero complex scalars. Such a tensor is called an {\it orthogonal tensor}. For all orthogonal tensors as in \reff{orth:ten}, we have \begin{equation} \label{tnn:orth}
\| \mathcal{A} \|_{\ast, \mathbb{C}} = |\lambda_1| + \cdots + |\lambda_r|. \end{equation} Clearly,
$\| \mathcal{A} \|_{\ast, \mathbb{C}} \leq \sum_{i=1}^r | \lambda_i |$. The reverse can be seen as follows. Let $\mathcal{B}$ be the tensor such that $\mathcal{B}(x) = \sum_{j=1}^r
\frac{\lambda_r}{|\lambda_r|} (q_r^Tx)^m$. Then, $\| \mathcal{B} \|_{\sigma, \mathbb{C}} = 1$, and by duality relation, \[
\| \mathcal{A} \|_{\ast, \mathbb{C}} \geq \mathcal{A} \bullet \mathcal{B} = \sum_{i=1}^r \lambda_i \left( \sum_{j=1}^r
\frac{\bar\lambda_j}{|\lambda_j|} (q_j^Hq_i)^m \right)
=\sum_{i=1}^r |\lambda_i|. \] (The superscript $^H$ denotes the conjugate transpose.) So, \reff{tnn:orth} is true. For instance, for the following orthogonal tensor \[ \mathcal{A} = \begin{bmatrix} \sqrt{-1} \\ -1 \\ 1 \end{bmatrix}^{\otimes 3} +
\begin{bmatrix} 1 \\ \sqrt{-1} \\ 2 \sqrt{-1} \end{bmatrix}^{\otimes 3} +
\begin{bmatrix} 1 \\ -\sqrt{-1} \\ 0 \end{bmatrix}^{\otimes 3}, \] Algorithm~\ref{alg:cnn:cpx} confirmed that
$\| \mathcal{A} \|_{\ast, \mathbb{C}} = 3\sqrt{3}+6\sqrt{6}+2\sqrt{2} \approx 22.7215$.
\iffalse
clear all, n = 3; m = 3; ii = sqrt(-1); P = [ii -1 1; 1 ii 2*ii; 1 -ii 0]; vA = zeros(n^m,1); for k = 1 : size(P,1)
q = P(k, :).';
qtpow = kron( kron(q,q),q);
vA = vA + qtpow; end mA = reshape(vA, n,n,n); [cnn, cU] = complexSymTNN(mA, n, 3); cU,
cnn - sqrt(3)^3 - sqrt(6)^3 - sqrt(2)^3,
\fi
When all $\lambda_i$ and $q_i$ are real, by the same argument, one can also show that
$\| \mathcal{A} \|_{\ast, \mathbb{R}} = \sum_{i=1}^r | \lambda_i |$. The equality \reff{tnn:orth} can also be proved by applying Lemma~4.1 of \cite{FriLim14b}. \qed \end{exm}
\fi
\begin{exm} (Sum of Even Powers) For an even order $m$, consider the tensors $\mathcal{A} \in \mt{S}^m(\mathbb{R}^n)$ of the form \begin{equation} \label{SOEP} \mathcal{A} = (a_1)^{\otimes m} + \cdots + (a_r)^{\otimes m}, \end{equation} with $a_1, \ldots, a_r \in \mathbb{R}^n$. Such a tensor is called a sum of even powers (SOEP). Interestingly, for all SOEP tensors as in \reff{SOEP}, we have \begin{equation} \label{tnn:spow}
\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{\ast, \mathbb{C}} = \| a_1 \|^{m} + \cdots + \| a_r \|^{m}. \end{equation} Clearly,
$\| \mathcal{A} \|_{\ast, \mathbb{C}} \leq \| \mathcal{A} \|_{\ast, \mathbb{R}}
\leq \sum_{i=1}^r \| a_i \|^{m}.$
The reverse inequalities are actually also true. Let $\mathcal{B}$ be the tensor such that $\mathcal{B}(x) = (x^Tx)^{m/2}$. Then, $\| \mathcal{B} \|_{\sigma, \mathbb{C}} = 1$. By the duality relation, we get \[
\| \mathcal{A} \|_{\ast, \mathbb{C}} \geq \mathcal{A} \bullet \mathcal{B} = \sum_{i=1}^r \mathcal{B}(a_i) =
\sum_{i=1}^r \| a_i \|^m. \] So, \reff{tnn:spow} is true. It can also be proved by applying Lemma~4.1 of \cite{FriLim14b}. Moreover, every real nuclear decomposition of an SOEP tensor must also be in the SOEP form. This can be shown as follows. Suppose $\mathcal{A} = \mathcal{A}_1 - \mathcal{A}_2$
is a real nuclear decomposition, with $\mathcal{A}_1, \mathcal{A}_2$ being SOEP tensors such that $\| \mathcal{A} \|_{\ast, \mathbb{R}} =
\| \mathcal{A}_1 \|_{\ast, \mathbb{R}} + \| \mathcal{A}_2 \|_{\ast, \mathbb{R}}$. Then, $\mathcal{A}_1 = \mathcal{A} + \mathcal{A}_2$ and $
\| \mathcal{A}_1 \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{\ast, \mathbb{R}} + \| \mathcal{A}_2 \|_{\ast, \mathbb{R}} $
by \reff{tnn:spow}. Substituting this into $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A}_1 \|_{\ast, \mathbb{R}} + \| \mathcal{A}_2 \|_{\ast, \mathbb{R}}$ gives $\| \mathcal{A}_2 \|_{\ast, \mathbb{R}}=0$, hence $\mathcal{A}_2=0$. This shows that the real nuclear decomposition is also SOEP.
\iffalse
For instance, we consider the following SOEP tensor \[ \mathcal{A} = \frac{1}{6} \sum_{q \in \mbf{perm}([1, 2, 3])} q^{\otimes 4}, \]
where $\mbf{perm}$ denotes the set of all permutations. By \reff{tnn:spow}, we know $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{\ast, \mathbb{C}} = 196$. The computation by Algorithms~\ref{alg:even:m} and \ref{alg:cnn:cpx} confirms this fact. Interestingly, we get the real nuclear decomposition $\mathcal{A} = \sum_{i=1}^5 (u_i)^{\otimes 4}$ given as {\tiny \begin{verbatim}
2.0370 1.2427 1.8646 0.5793 0.9637
0.7046 0.7212 1.5626 1.5895 2.1095
1.2708 2.0485 0.5852 1.8436 0.9392 \end{verbatim} \noindent}which whose length is one less than six.
clear all, n = 3; m = 4; P = perms( [1:n] ); vA = zeros(n^m,1); for k = 1 : size(P,1)
q = P(k, :)';
qtpow = kron( kron( kron(q,q),q), q);
vA = vA + qtpow; end
vA = vA/size(P,1); mA = reshape(vA,n,n,n,n); [rnn, Uplus, Umius] = realSymTNN_evenord(mA, n, m); Uplus, Umius, rnn-norm(q)^m,
[cnn, cU] = complexSymTNN(mA, n, m); cnn-norm(q)^m,
\fi
For instance, consider the following SOEP tensor $\mathcal{A} \in \mt{S}^4(\mathbb{R}^3)$ \[ \mathcal{A} = \sum_{i=1}^3 \left( (e + e_i)^{\otimes 4} + (e - e_i)^{\otimes 4} \right). \] Algorithms~\ref{alg:even:m} and \ref{alg:cnn:cpx}
confirmed that $\| \mathcal{A} \|_{\ast, \mathbb{R}} = \| \mathcal{A} \|_{\ast, \mathbb{C}} = 120$.
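The value $120$ can be reproduced from \reff{tnn:spow} with two lines of plain MATLAB (the rows of {\tt P} below are the six vectors $e+e_i$ and $e-e_i$):
{\tiny
\begin{verbatim}
% sum of ||a_i||^4 over the six vectors e+e_i, e-e_i (n = 3)
P  = [2 1 1; 1 2 1; 1 1 2; 0 1 1; 1 0 1; 1 1 0];
nn = sum( sum(P.^2, 2).^2 )        % equals 120
\end{verbatim}
}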
\iffalse
clear all, n = 3; m = 4; P = [1 1 0; 1 0 1; 0 1 1; 2 1 1; 1 2 1; 1 1 2]; vA = zeros(n^m,1); for k = 1 : size(P,1)
q = P(k, :)';
qtpow = kron( kron( kron(q,q),q), q);
vA = vA + qtpow; end
mA = reshape(vA,n,n,n,n); [rnn, Uplus, Umius] = realSymTNN_evenord(mA, n, m); Uplus, Umius, [cnn, cU] = complexSymTNN(mA, n, m); cU,
\fi
\qed \end{exm}
\section{Extensions to nonsymmetric tensors} \label{sc:exten}
The methods in this paper can be naturally extended to nonsymmetric tensors. A similar discussion was made in \cite{TanSha15}. For convenience, we show how to do this for a nonsymmetric cubic tensor $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$. Clearly, its real nuclear norm can be computed as \begin{equation} \label{nsymA:nn3}
\| \mathcal{A} \|_{\ast,\mathbb{R}} = \min \left\{ \sum_{i=1}^r \lambda_i
\left| \begin{array}{c} \mathcal{A} = \Sigma_{i=1}^r \lambda_i v^{(i,1)} \otimes v^{(i,2)} \otimes v^{(i,3)}, \\
\lambda_i \geq 0, \, \| v^{(i,j)} \| = 1, \, v^{(i,j)} \in \mathbb{R}^{n_j} \end{array}\right. \right \}. \end{equation} One can similarly show that
$\| \mathcal{A} \|_{\ast,\mathbb{R}}$ is equal to the minimum value of the optimization problem \begin{equation} \label{mOp:nsyA:m=3} \left\{\begin{array}{rl}
\min & \int 1 \mt{d} \mu \\ s.t. & \mathcal{A} = \int x^{(1)} \otimes x^{(2)} \otimes x^{(3)} \mt{d} \mu, \,\,
\mu \in \mathscr{B}(T). \end{array} \right. \end{equation} In the above, the variables $x^{(j)} \in \mathbb{R}^{n_j}$, and $\mathscr{B}(T)$ is the set of Borel measures supported on the set \[ T := \big\{ (x^{(1)}, x^{(2)}, x^{(3)}):
\| x^{(1)} \| = \| x^{(2)} \| = \| x^{(3)} \| = 1 \big \}. \] Similarly, we can define the cone of moments (denote $[n] :=\{1,\ldots,n\}$) \begin{equation} \mathscr{R}_{ \{0, 3\} }^{n_1,n_2,n_3} := \left\{ y \in \mathbb{R}^{ 1 + n_1n_2n_3 }
\left| \begin{array}{c}
\exists \mu \in \mathscr{B}(T) \quad s.t. \\ \, (y)_{ijk} = \int (x^{(1)})_i (x^{(2)})_j (x^{(3)})_k \mt{d} \mu \\ i \in [n_1],\, j \in [n_2],\, k \in [n_3], \\ \quad \mbox{ or } \quad i=j=k=0 \end{array} \right. \right\}. \end{equation} One can show that \reff{mOp:nsyA:m=3} is equivalent to \begin{equation} \label{yOpt:ns:m=3} \left\{\begin{array}{rl} \min & (y)_{000} \\ s.t. & (y)_{ijk} = \mathcal{A}_{ijk} \, \, (\forall i,j,k), \\
& y \in \mathscr{R}_{ \{0, 3\} }^{n_1,n_2,n_3}. \end{array} \right. \end{equation} A similar version of Algorithm~\ref{alg:R:odd} can be applied to solve \reff{yOpt:ns:m=3}.
\begin{exm} Consider the nonsymmetric tensor $\mathcal{A} \in \mathbb{R}^{2 \times 2 \times 2}$ such that \[ \mathcal{A}_{ijk} = i -j -k. \]
By solving \reff{yOpt:ns:m=3}, we get $\| \mathcal{A} \|_{\ast, \mathbb{R}} = 6.0000$. A real nuclear decomposition of $\mathcal{A}$ is given as {\tiny \[ \left[ \begin{array}{r} 1.4363 \\ 0.3140 \end{array} \right] \otimes \left[ \begin{array}{r} -0.9146 \\ -1.1516 \end{array} \right] \otimes \left[ \begin{array}{r} 0.9296 \\ 1.1394 \end{array} \right] + \left[ \begin{array}{r} 0.7375 \\ 1.0525 \end{array} \right] \otimes \left[ \begin{array}{r} -0.2430 \\ -1.2616 \end{array} \right] \otimes \left[ \begin{array}{r} 0.5250 \\ 1.1727 \end{array} \right] \] \[ +\left[ \begin{array}{r} 0.5484 \\ 0.6978 \end{array} \right] \otimes \left[ \begin{array}{r} 0.8846 \\ 0.0732 \end{array} \right] \otimes \left[ \begin{array}{r} 0.6501 \\ -0.6040 \end{array} \right]. \] }
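The displayed factors can be checked with plain MATLAB (only built-in functions are used; the columns of {\tt V1}, {\tt V2}, {\tt V3} below are the rounded factors listed above, in the same order):
{\tiny
\begin{verbatim}
% rebuild the three rank-1 terms and compare with A_{ijk} = i-j-k
V1 = [ 1.4363  0.7375  0.5484;  0.3140  1.0525  0.6978];   % first factors
V2 = [-0.9146 -0.2430  0.8846; -1.1516 -1.2616  0.0732];   % second factors
V3 = [ 0.9296  0.5250  0.6501;  1.1394  1.1727 -0.6040];   % third factors
vA = zeros(8,1);
for i = 1:3      % column-major vectorization of v1 x v2 x v3
    vA = vA + kron( kron(V3(:,i), V2(:,i)), V1(:,i) );
end
mA = reshape(vA, 2,2,2);     % entries agree with i-j-k up to rounding
nn = sum( sqrt(sum(V1.^2,1)) .* sqrt(sum(V2.^2,1)) .* sqrt(sum(V3.^2,1)) )  % about 6
\end{verbatim}
}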
\iffalse
clear all, n=2; m=3; mA = zeros(2,2,2); for i1 = 1:n, for i2=1:n, for i3=1:n
mA(i1,i2,i3) = i1-i2-i3; end, end, end
[rnn, rU] = realNonSymTNN(mA, m);
\begin{verbatim}
1.4363 0.7375 0.5484
0.3140 1.0525 0.6978
-0.9146 -0.2430 0.8846
-1.1516 -1.2616 0.0732
0.9296 0.5250 0.6501
1.1394 1.1727 -0.6040 \end{verbatim}
>> ru =[ ...
1.4363 0.7375 0.5484; ...
0.3140 1.0525 0.6978; ...
-0.9146 -0.2430 0.8846; ...
-1.1516 -1.2616 0.0732; ...
0.9296 0.5250 0.6501; ...
1.1394 1.1727 -0.6040],
>> rnn
rnn =
5.999999975510385
>> rU
rU =
1.436316666199095 0.737521396054032 0.548358080703983
0.313978273063560 1.052477238453717 0.697752280250228
-0.914607089056276 -0.243044917690185 0.884628171632327
-1.151550033757933 -1.261626190480282 0.073241751636554
0.929641277340014 0.524989788777880 0.650065935443151
1.139401033519433 1.172714503976489 -0.604004245181176
>>
\fi
\end{exm}
When $\mathbb{F} = \mathbb{C}$, we can similarly compute the complex nuclear norm $\| \mathcal{A} \|_{\ast,\mathbb{C}}$, by considering each $x^{(j)}$ as a complex variable. For nonsymmetric tensors, it is usually much harder to compute the nuclear norm
$\| \mathcal{A} \|_{\ast,\mathbb{R}}$ or $\| \mathcal{A} \|_{\ast,\mathbb{C}}$. This is because the tuple of variables $(x^{(1)}, \ldots, x^{(m)})$ has a much higher total dimension than in the symmetric case, which makes moment optimization problems like \reff{yOpt:ns:m=3} very difficult to solve.
\noindent {\bf Acknowledgement} The author would like to thank Shmuel Friedland and Lek-Heng Lim for comments on tensor nuclear norms. The research was partially supported by the NSF grant DMS-1417985.
\end{document}
\begin{document}
\title{On abelian group actions with TNI-centralizers} \author{G\"{u}l\.{I}n Ercan$^{*}$} \address{G\"{u}l\.{I}n Ercan, Department of Mathematics, Middle East echnical University, Ankara, Turkey} \email{[email protected]} \thanks{$^{*}$Corresponding author} \thanks{This work has been supported by the Research Project T\"{U}B\.{I}TAK 114F223.} \author{\.{I}sma\.{I}l \c{S}. G\"{u}lo\u{g}lu} \address{\.{I}sma\.{I}l \c{S}. G\"{u}lo\u{g}lu, Department of Mathematics, Do \u{g}u\c{s} University, Istanbul, Turkey} \email{[email protected]} \subjclass[2000]{20D10, 20D15, 20D45} \keywords{tni-subgroup, automorphism, centralizer, Fitting length} \maketitle
\begin{abstract} A subgroup $H$ of a group $G$ is said to be a TNI-subgroup if $N_{G}(H)\cap H^g=1$ for any $g\in G\,\backslash \,N_{G}(H).$ Let $A$ be an abelian group acting coprimely on the finite group $G$ by automorphisms in such a way that $C_G(A)=\{g\in G : g^a=g $\, for all $a\in A\}$ is a solvable TNI-subgroup of $G$. We prove that $G$ is a solvable group with Fitting length $h(G)$ is at most $h(C_G(A))+\ell(A)$. In particular $h(G)\leq \ell(A)+3$ whenever $C_G(A)$ is nonnormal. Here, $h(G)$ is the Fitting length of $G$ and $\ell(A)$ is the number of primes dividing $A$ counted with multiplicities. \end{abstract}
\section{introduction} Throughout the paper all groups are finite and $h(G)$ denotes the Fitting length of the group $G$. A subgroup $H$ of a group $G$ is said to be a TNI-subgroup if $N_{G}(H)\cap H^g=1$ for any $g\in G\,\backslash \,N_{G}(H).$ In particular, every normal subgroup is a $TNI$-subgroup. In \cite{EG}, we studied the consequences of the action of a group $A$ on the group $G$ in case where $C_G(A)$ is a $TNI$-subgroup, and obtained the following two results:\\
\textbf{Theorem A.} \textit{ Let $A$ be a group that acts coprimely on the group $G$ by automorphisms. If $C_G(A)$ is a solvable $TNI$-subgroup of $G$, then $G$ is solvable.}\\
\textbf{Theorem B.} \textit{Let $A$ be a coprime automorphism of prime order of a finite solvable group $G$ such that $C_G(A)$ is a $TNI$-subgroup. Then $h(G)\leq h(C_G(A))+1.$ In particular, $h(G)\leq 4$ when $C_G(A)$ is nonnormal.}\\
In the present paper we extend Theorem B to the case where $A$ is abelian, namely we prove\\
\textbf{Theorem.} \,\,\textit{ Let $A$ be an abelian group acting coprimely on the finite group $G$ in such away that $C_G(A)$ is a solvable TNI-subgroup of $G$. Then $G$ is a solvable group with $h(G)\leq h(C_G(A))+\ell(A)$ where $\ell(A)$ is the number of primes dividing $A$ counted with multiplicities. In particular $h(G)\leq \ell(A)+3$ whenever $C_G(A)$ is nonnormal.}\\
This is achieved by applying the same techniques used in \cite{Tur1} in order to prove that $h(G)\leq \ell(A)$ if $A$ acts coprimely and fixed point freely on the group $G$ and for every proper subgroup $D$ and every $D$-invariant section $S$ of $G$ such that $D$ acts irreducibly on $S$, there is $v\in S$ with $C_D(S)=C_D(v)$, that is, $A$ acts with regular orbitd on $G$. In the light of this result we asked whether the conclusion of Theorem B is true if $A$ is not necessarily abelian, but acts with regular orbits on $G$. The main difficulty forcing us to study under the assumption that $A$ is abelian arises from the fact that a homomorphic image of a $TNI$-subgroup need not be a $TNI$-subgroup.
\section{proof of the theorem}
The group $G$ is solvable by Theorem A in \cite{EG}. It remains to show that $h(G)\leq h(C_G(A))+\ell(A).$ Suppose false and let\, $G,A$ be a counterexample with $|G|$ minimum.\\
Suppose that $C_G(A)$ is normal in $G$. Then the fixed point free action of $A$ on the group $G/C_G(A)$ yields that $h(G/C_G(A))\leq \ell(A)$ by the main theorem of \cite{Tur1}. So $h(G)\leq h(C_G(A))+\ell(A)$, a contradiction. Therefore by Theorem 2.2 in \cite{EG} we may assume that\\
\textit{$(1)$ $C_G(A)$ is a nonnormal subgroup of $G$ acting Frobeniusly on a section $M/N$ of $G$.}\\
We also have\\
\textit{$(2)$ There is an $A$-tower $\hat{P}_i, i=1,\ldots ,t$ where $t=h(G)$ satisfying the following conditions (see \cite{Tur2})}:
\textit{(a)} $\hat{P}_{i}$\textit{\ is an $A$-invariant }$p_{i} $\textit{-subgroup, }$p_{i}$\textit{\ is a prime, }$p_{i}\neq p_{i+1},$ \newline \textit{\ for }$i=1,\ldots ,t-1$\textit{;}
\textit{(b)} $\hat{P}_{i}\leq N_{G}(\hat{P}_{j})$\textit{\ whenever }$i\leq j $\textit{;}
\textit{(c)} $P_{t}=\hat{P}_{t}$\textit{\ and }$P_{i}=\hat{P}_{i}/C_{\hat{P} _{i}}(P_{i+1})$ \textit{\ for }$i=1,\ldots ,t-1$ \newline \textit{\ and }$P_{i}\neq1$ \textit{\ for }$i=1,\ldots ,t$\textit{;}
\textit{(d)} $\Phi(\Phi(P_{i}))=1$\textit{, }$\Phi(P_{i})\leq Z(P_{i})$ \textit{, and exp}$(P_{i})=p_{i}$\textit{\ when }$p_{i}$\textit{\ is odd } \newline \textit{\ for} $i=1,\ldots ,t$ \textit{;}
\textit{(e)} $[\Phi(P_{i}),\hat{P}_{i-1}]=1$\textit{\ and }$[P_{i},P_{i-1}]=P_{i} $\textit{\ for }$i=1,\ldots ,t$\textit{;}
\textit{{(f)} If $S\leq \hat{P}_{i}$ for some $i$, $S$ is normalized by $\hat{P}_{i-1}\ldots \hat{P}_{1}A$ and its image in ${P}_{i}$ is not contained in $\Phi(P_{i})$, then $S=\hat{P}_{i}$.}\\
By Lemma 2.1 in \cite{EG} the group $\prod_{i=1}^{t}{\hat{P}_{i}}$ is of Fitting length $t$ and it satisfies the hypothesis of the theorem. It follows now by induction that \\
\textit{$(3)$ $G=\prod_{i=1}^{t}{\hat{P}_{i}}$.}\\
Suppose that $C_{\hat{P}_{t}}(A)\ne 1$. Then we have $M/N=[M,C_{\hat{P}_{t}}(A)]N/N$ due to the Frobenius action of $C_{\hat{P}_{t}}(A)$ on $M/N.$ It follows that $M/N\leq \hat{P}_{t}N/N\cap M/N=1$ as $\hat{P}_{t}\lhd G$ and $p_{t}$ is coprime to $|M/N|$. This contradiction shows that\\
\textit{$(4)$ $C_{\hat{P}_{t}}(A)=1$.}\\
Set $H=\prod_{i=1}^{t-1}{\hat{P}_{i}}$. Pick a nontrivial subgroup $C$ of $A$. Set $S=[\hat{P}_{t-1},C]^{H}$. Clearly $S\leq \hat{P}_{t-1}$ is normalized by $\hat{P}_{t-1}\ldots \hat{P}_{1}A$. Now the image of $S$ in $P_{t-1}$ is $[{P}_{t-1},C]^{H}$. Suppose that $[{P}_{t-1},C]^{H}$ is contained in $\Phi({P}_{t-1})$. It follows that $[{P}_{t-1},C]\leq \Phi({P}_{t-1})$ and so $[{P}_{t-1},C]=1$ due to coprimeness. By the three subgroup lemma $[{P}_{t-2},C,{P}_{t-1}]=1$ whence $[{P}_{t-2},C]=1$. Repeating the same argument one gets $[{P}_{i},C]=1$ for each $i< t$. Now the group $X=\prod_{i=1}^{t-1}{C_{\hat{P}_{i}}}(C)$ is of Fitting length $t-1$ on which $A/C$ acts in such a way that $C_X(A/C)$ is a TNI-subgroup. By induction we get $t-1\leq h(C_X(A))+\ell(A/C)$. It then follows that $t\leq h(C_G(A))+\ell(A)$. This contradiction shows that $[{P}_{t-1},C]^{H}$ is not contained in $\Phi({P}_{t-1})$. By $(2)$ part $(f)$ we have $S=\hat{P}_{t-1}$. This shows that $[\hat{P}_{t-1},C]^{\hat{P}_{t-1}\ldots \hat{P}_{1}}={\hat{P}_{t-1}}$ for every subgroup $C$ of $A$ with $\ell(C)\geq 1$.\\
Next let $D\leq A$ with $\ell(D)\geq 2$ and $Y=\prod_{i=1}^{t-2}{\hat{P}_{i}}$. Set $T=[\hat{P}_{t-2},D]^{Y}$. Clearly $T$ is $YA$-invariant. If the image $[{P}_{t-2},D]^{Y}$ of $T$ in $P_{t-2}$ is contained in $\Phi({P}_{t-2})$, then we can show by an argument similar to the one in the paragraph above that $[{P}_{i},D]=1$ for each $i< t-1$. Then $Z=\prod_{i=1}^{t-2}{C_{\hat{P}_{i}}}(D)$ is a group of Fitting length $t-2$ on which $A/D$ acts in such a way that $C_Z(A/D)$ is a TNI-subgroup. It follows by induction that $t\leq h(C_G(A))+\ell(A)$, which is not the case. Therefore $T=\hat{P}_{t-2}$ by $(2)$ part $(f)$. Thus we have \\
\textit{$(5)$ $[\hat{P}_{t-1},C]^{\hat{P}_{t-1}\ldots \hat{P}_{1}}={\hat{P}_{t-1}}$ for every subgroup $C$ of $A$ with $\ell(C)\geq 1$ and $[\hat{P}_{t-2},D]^{\hat{P}_{t-2}\ldots \hat{P}_{1}}= \hat{P}_{t-2}$ for every subgroup $D$ of $A$ with $\ell(D)\geq 2$. }\\
Let now $S$ be an $H$-homogeneous component of the irreducible $HA$-module $V={P}_{t}/\Phi({P}_{t})$. Notice that $\hat{P}_{t-1}$ acts nontrivially on each $H$-homogeneous component of $V.$ Set $B=N_A(S)$. Then $S$ is an irreducible $HB$-module such that $C_S(B)=0$. By the Fong-Swan theorem\\
\textit{$(6)$ We may take an irreducible $\mathbb{C}HB$-module $M$ such that $M|_H$ is homogeneous, $Ker_H(M)=Ker_H(S)$ and $C_{M}(B)=0.$ Among all pairs $(M_{\alpha}, C_{\alpha})$ such that $1\ne C_{\alpha}\leq B$, $M_{\alpha}$ is an irreducible $HC_{\alpha}$ submodule of $M|_{HC_{\alpha}}$ and $C_ {M_{\alpha}}(C_{\alpha})=0$, choose $(M_1,C)$ with $|C|$ minimum. Then $C_{M_1}(C_0)\ne 0$ for $1\ne C_0 < C$ and $Ker_H(M)= Ker_H(M_1).$ }\\
Set $\bar{H}=H/Ker_H(M).$ Suppose that $\bar{U}\ne 1$ is an abelian subgroup of $\bar{H}$ and that $\bar{U}\lhd \bar{H}C.$ Let $U$ be the preimage in $H$ of $\bar{U}.$ Since $M_1|_H$ is homogeneous, by Glauberman's lemma there is a homogeneous component $N_1$ of $M_1|_U$ such that $C\leq N_{HC}(N_1)$. Set $H_1=N_H(N_1).$ Then we have $H_1C=N_{HC}(N_1)$. Now $[U,C]\leq Ker_H(N_1)$. By Proposition 4.1 in \cite{Tur4}, $H=({\bigcap_{x\in HC}{H_1}^x})C_H(C)$. It follows that $M_1=\Sigma (N_1)^{x}$ for $x\in C_H(C).$ Notice that $[U,C]\leq Ker_H(N_1).$ Thus $[U,C]=[U,C]^{x}\leq Ker_H(N_1)^{x}$ and so $[U,C]\leq Ker_H(M_1)$. Then we have\\
\textit{$(7)$ If $\bar{U}$ is an abelian subgroup of $\bar{H}$ such that $\bar{U}\lhd \bar{H}C$ where $\bar{H}=H/Ker_H(M)$ then $[\bar{U},C]=1.$}\\
By $(5)$ we have $[\hat{P}_{t-1},C]^{H}C_{\hat{P}_{t-1}}({P}_{t})=\hat{P}_{t-1}$ and hence $[\hat{P}_{t-1},C]\not \leq Ker(M)$. Therefore by $(7)$, $\hat{P}_{t-1}Ker_H(M)/Ker_H(M)$ is nonabelian. As $P_1$ is elementary abelian we have $t>2.$ Set $\Phi/Ker_H(M)=\Phi(\hat{P}_{t-1}Ker_H(M)/Ker_H(M))$ and let $H_1=C_{\hat{P}_{t-2}\ldots \hat{P}_{1}}(\bar{\Phi})$. By $(7)$, $[\bar{\Phi},C]=1.$ Now $H_1C=C_{\hat{P}_{t-2}\ldots \hat{P}_{1}C}(\bar{\Phi})\lhd \hat{P}_{t-2}\ldots \hat{P}_{1}C.$ Hence $[\hat{P}_{t-2}\ldots \hat{P}_{1},C]\leq H_1$. By the coprimeness we have \\
\textit{$(8)$ $t>2$ and $H_1C_{\hat{P}_{t-2}\ldots \hat{P}_{1}}(C)=\hat{P}_{t-2}\ldots \hat{P}_{1}$ where $\bar{\Phi}=\Phi(\hat{P}_{t-1}Ker(M)/Ker(M))$ and $H_1=C_{\hat{P}_{t-2}\ldots \hat{P}_{1}}(\bar{\Phi})$.}\\
We have $\hat{P}_{t-2}\leq H_1$ by $(2)$ part $(e)$. Set $Q=\hat{P}_{t-2}$. Let $D\leq C$ be such that $\ell(D)\geq 2.$ Then by $(8)$, $[Q,D]^{H_1}=[Q,D]^{\hat{P}_{t-2}\ldots \hat{P}_{1}}.$ By $(5)$ we have\\
\textit{$(9)$ If $D\leq C$ with $\ell(D)\geq 2$ then $Q=[Q,D]^{H_1}$ where $Q=\hat{P}_{t-2}$.}\\
Let $N$ be a homogeneous component of $M_1|_S$. Then $N$ is normalized by $\hat{P}_{t-1}H_1C$ since $H_1C=C_{\hat{P}_{t-2}\ldots \hat{P}_{1}C}(\bar{\Phi}).$ Set $P_0=\hat{P}_{t-1}/\hat{P}_{t-1}\cap Ker_H(N)$. Then $N$ is a $P_0H_1C$-module. Notice that $H_1C$ centralizes $\Phi(P_0)$ and hence $N|_{\Phi(P_0)}$ is homogeneous. Then $\Phi(P_0)$
is cyclic. We also have that $\Phi(P_0)$ is elementary abelian by $(2)$ part $(d)$ and hence $|\Phi(P_0)|\leq p.$ Recall that ${(\hat{P}_{t-1})'}\not \leq Ker_{H}(M_1)$ and hence $P_0$ is nonabelian. Thus we have ${P_0}'=\Phi(P_0)$. Note also that $P_0/Z(P_0)$ is elementary abelian. As $P_{t-1}/\Phi(P_{t-1})$ is an irreducible $\hat{P}_{t-2}\ldots \hat{P}_{1}A$-module, it is completely reducible as an $H_1$-module because $H_1$ is subnormal in $\hat{P}_{t-2}\ldots \hat{P}_{1}A$. It follows that $P_0/\Phi(P_0)$ is $H_1$-completely reducible. Then by Maschke's theorem it is $H_1C$-completely reducible.
Suppose that $1\ne D\leq C$. Then $[P_0,D]\leq[P_0,D]^{P_0H_1}\lhd P_0H_1C_{\hat{P}_{t-2}\ldots \hat{P}_{1}}(C).$ By $(8)$, $P_0H_1C_{\hat{P}_{t-2}\ldots \hat{P}_{1}}(C)=P_0\hat{P}_{t-2}\ldots \hat{P}_{1}$ and hence by $(5)$ we get $P_0=[P_0,D]^{H_1}$. Now we apply Theorem 1.1 in \cite{Tur3} by letting $G=P_0H_1$, $A=C$, $P=P_0$, $Q=\hat{P}_{t-2}$ and $\chi$ as the character afforded by $N$. This leads to $C_N(C)\ne 0,$ which is a contradiction completing the proof.
\end{document}
\begin{document}
\title*{Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation} \author{Rio Yokota, Huda Ibeid, David Keyes} \institute{Rio Yokota \at Tokyo Institute of Technology, 2-12-1 O-okayama Meguro-ku, Tokyo, Japan, \email{[email protected]} \and Huda Ibeid, David Keyes \at King Abdullah University of Science and Technology, 4700 KAUST, Thuwal, Saudi Arabia, \email{[email protected],[email protected]}} \maketitle
\abstract{There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is two-fold: to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.}
\section{Introduction} The fast multipole method (FMM) was originally developed as an algorithm to bring down the $\mathcal{O}(N^2)$ complexity of the direct $N$-body problem to $\mathcal{O}(N)$ by approximating the hierarchically decomposed far field with multipole/local expansions. In its original form, the applicability of FMM is limited to problems that have a Green's function solution, for which the multipole/local expansions can be calculated analytically. Its use is also limited to matrix-vector multiplication, in contrast to the algebraic variants that can perform matrix-matrix multiplication and factorizations. However, these restrictions no longer apply to the FMM since the kernel-independent FMM \cite{Ying2004} does not require a Green's function, and the inverse FMM \cite{Ambikasaran2014} can be used as the inverse operator instead of the forward mat-vec. Therefore, the FMM can be used for a wide range of scientific applications, which can be broadly classified into elliptic partial differential equations (PDE) and kernel summation. The integral forms of elliptic PDEs can be further categorized into boundary integrals for homogeneous problems, discrete volume integrals, and continuous volume integrals.
Scientific applications of FMM for boundary integrals include acoustics \cite{Wolf2011a, Hao2015}, biomolecular electrostatics \cite{Yokota2011}, electromagnetics \cite{Darve2004a, Gimbutas2013}, fluid dynamics for Euler \cite{Willis2005} and Stokes \cite{Rahimian2010} flows, geomechanics \cite{Verde2015}, and seismology \cite{Chaillat2008, Wilkes2015}. Application areas of FMM for discrete volume integrals are astrophysics \cite{Bedorf2014}, Brownian dynamics \cite{Liang2013}, classical molecular dynamics \cite{Ohno2014}, density functional theory \cite{Shao2001}, vortex dynamics \cite{Yokota2013}, and force directed graph layout\cite{Yunis2012}. FMM for continuous volume integrals have been used to solve Schr\"odinger \cite{Zhao2007} and Stokes \cite{Malhotra2014} equations. More generalized forms of FMM can be used as fast kernel summation for Bayesian inversion \cite{Ambikasaran2013a}, Kalman filtering \cite{Li2014}, Machine learning \cite{Gray2001,Lee2012}, and radial basis function interpolation \cite{Gumerov2007}.
All of these applications have in common the key feature that they are global problems where the calculation at every location depends on the values everywhere else. Elliptic PDEs that represent a state of equilibrium, many iterations with global inner products for their solution, dense matrices in boundary integral problems, all-to-all interaction in $N$-body problems, and kernel summations with global support are all different manifestations of the same source of global data dependency. Due to this global data dependency, their concurrent execution on future computer architectures with heterogeneous and deep memory hierarchy is one of the main challenges of exascale computing. For global problems that require uniform resolution, FFT is often the method of choice, despite its suboptimal communication costs. The methods we describe here have an advantage for global problems that require non-uniform resolution. For such non-uniform global problems multigrid methods are known to do quite well. Whether the reduced synchronization and increased arithmetic intensity of the FMM will become advantageous compared to multigrid on future architectures is something that is yet to be determined.
Many of the original FMM researchers have now moved on to develop algebraic variants of FMM, such as $\mathcal{H}$-matrix \cite{Hackbusch1999}, $\mathcal{H}^2$-matrix \cite{Hackbusch2000a}, hierarchically semi-separable (HSS) \cite{Chandrasekaran2006}, hierarchically block-separable (HBS) \cite{Martinsson2005}, and hierarchically off-diagonal low-rank (HODLR) \cite{Ambikasaran2013} matrices. The differences between these methods are concisely summarized by Ambikasaran \& Darve \cite{Ambikasaran2014}. These algebraic generalizations of the FMM can perform addition, multiplication, and even factorization of dense matrices with near linear complexity. This transition from analytic to algebraic did not happen suddenly, and semi-analytic variants were developed along the way \cite{Ying2004, Fong2009}. Optimization techniques for the FMM, such as compressed translation operators and their precomputation, also fall somewhere between the analytic and algebraic extremes.
The spectrum that spans purely analytic and purely algebraic forms of these hierarchical low-rank approximation methods, represents the tradeoff between computation (Flops) and memory (Bytes). The purely analytic FMM is a matrix-free $\mathcal{H}^2$-matrix-vector product, and due to its matrix-free nature it has very high arithmetic intensity (Flop/Byte) \cite{Barba2013}. On the other end we have the purely algebraic methods, which precompute and store the entire hierarchical matrix. This results in more storage and more data movement, both vertically and horizontally in the memory hierarchy. When the cost of data movement increases faster than arithmetic operations on future architectures, the methods that compute more to store/move less will become advantageous. Therefore, it is important to consider the whole spectrum of hierarchical low-rank approximation methods, and choose the appropriate method for a given pair of application and architecture.
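As a rough, back-of-envelope illustration of this tradeoff (the sketch below is not taken from the benchmarks in this article, and the figure of 20 Flops per kernel evaluation is only an assumed ballpark), one can compare the arithmetic intensity of a stored dense matrix-vector product with that of a matrix-free direct evaluation of the same $N$-body sum:
\begin{verbatim}
# Back-of-envelope arithmetic intensity (Flop/Byte), double precision.
N = 10**5
flops_matvec = 2 * N**2              # one multiply-add per stored matrix entry
bytes_matvec = 8 * N**2              # every stored entry is read at least once
flops_kernel = 20 * N**2             # assumed cost of evaluating 1/r on the fly
bytes_kernel = 8 * (3*N + 3*N + N + N)   # coordinates, charges, and potentials
print(flops_matvec / bytes_matvec)   # ~0.25: memory bound
print(flops_kernel / bytes_kernel)   # grows linearly with N: compute bound
\end{verbatim}
Hierarchical methods reduce the $N^2$ Flops to (near) linear, but the contrast between regenerating the operator on the fly and storing it persists.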
There have been few attempts to quantitatively investigate the tradeoff between the analytic and algebraic hierarchical low-rank approximation methods. Previously, the applicability of the analytic variants was limited to problems with Green's functions, and they could only be used for matrix-vector products, not to solve the matrix. With the advent of the kernel-independent FMM (KIFMM) \cite{Ying2004} and inverse FMM (IFMM) \cite{Ambikasaran2014}, these restrictions no longer apply to the analytic variants. Furthermore, the common argument for using the algebraic variants, namely that they can operate directly on the matrix without the need to pass geometric information, is not very convincing. Major libraries like PETSc offer interfaces to insert one’s own matrix-free preconditioner as a function, and passing geometric information is something that users are willing to do if the result is increased performance. Therefore, there is no strong reason from the user’s perspective to be monolithically inclined to use the algebraic variants. It is rather a matter of choosing the method with the right balance between its analytic (Flops) and algebraic (Bytes) features.
The topic of investigating the tradeoff between analytic and algebraic hierarchical low-rank approximation methods is too broad to cover in a page-constrained article. In the present work, we limit our investigation to the compute-memory tradeoff in a comparison between FMM and HSS for Laplace and Helmholtz kernels. We also investigate the use of FMM as a preconditioner for iterative solutions to the Laplace and Helmholtz problems with finite elements, for which we compare with geometric and algebraic multigrid methods.
\begin{figure}
\caption{The compute-memory tradeoff between the analytic and algebraic hierarchical low-rank approximation methods. Various techniques lie between the analytic and algebraic extremes.}
\label{fig:tradeoff}
\end{figure}
\section{Hierarchical Low-Rank Approximation: Analytic or Algebraic?} In this section we review the full spectrum of hierarchical low-rank approximations starting from the analytic side and proceeding to the algebraic side. The spectrum is depicted in Fig. \ref{fig:tradeoff}, where various techniques lie between the analytic and algebraic extremes. One can choose the appropriate method for a given architecture to achieve the best performance.
\subsection{Analytic Low-Rank Approximation} On the analytic end of the spectrum, we have classical methods such as the Treecode \cite{Barnes1986}, FMM \cite{Appel1985,Greengard1987}, and panel clustering methods \cite{Hackbusch1989}. These methods have extremely high arithmetic intensity (Flop/Byte) due to their matrix-free nature, and are compute-bound on most modern architectures. One important fact is that these are not brute-force methods that do unnecessary Flops; they are (near) linear complexity methods that perform only useful Flops, and yet they remain compute-bound. This is very different from achieving high Flop counts on dense matrix-matrix multiplication or LU decomposition that have $\mathcal{O}(N^3)$ complexity. The methods we describe in this section can approximate the same dense linear algebra calculation in $\mathcal{O}(N)$ or $\mathcal{O}(N\log N)$ time.
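To make the evaluation pattern of these methods concrete, the following is a minimal one-dimensional treecode sketch with a monopole approximation and a standard opening criterion; it is only an illustration of the matrix-free idea and does not describe any of the cited implementations.
\begin{verbatim}
import numpy as np

def build_tree(idx, x, xmin, xmax, leaf=8):
    """Recursively bisect [xmin, xmax]; each cell keeps its particle count
    (total charge for unit charges) and center of mass."""
    node = {"idx": idx, "size": xmax - xmin, "children": [],
            "mass": len(idx),
            "com": x[idx].mean() if len(idx) else 0.5 * (xmin + xmax)}
    if len(idx) > leaf:
        mid = 0.5 * (xmin + xmax)
        node["children"] = [build_tree(idx[x[idx] <= mid], x, xmin, mid, leaf),
                            build_tree(idx[x[idx] > mid], x, mid, xmax, leaf)]
    return node

def potential(node, x, xi, theta=0.5, eps=1e-12):
    """Approximate sum_j 1/|xi - x_j|, opening a cell only when it is close."""
    if node["mass"] == 0:
        return 0.0
    d = abs(xi - node["com"])
    if node["children"] and node["size"] >= theta * d:
        return sum(potential(c, x, xi, theta, eps) for c in node["children"])
    if not node["children"]:                 # leaf: direct near-field sum
        r = np.abs(xi - x[node["idx"]])
        return float(np.sum(1.0 / r[r > eps]))
    return node["mass"] / d                  # far cell: monopole approximation

rng = np.random.default_rng(0)
x = rng.random(2000)
root = build_tree(np.arange(x.size), x, 0.0, 1.0)
phi = [potential(root, x, xi) for xi in x]   # O(N log N) instead of O(N^2)
\end{verbatim}
Nothing that scales with $N^2$ is ever stored in this sketch; everything is recomputed on the fly, which is the source of the high arithmetic intensity discussed above.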
As an example of the absolute performance of the analytic variants, we refer to the Treecode implementation -- \texttt{Bonsai}, which scales to the full node count of Titan, using 18,600 GPUs to achieve 24.77 PFlops \cite{Bedorf2014}. Bonsai's performance comes not only from its matrix-free nature, but also from domain-specific optimizations such as hardcoded quadrupoles and the assumption that all charges are positive. Therefore, this kind of performance cannot be transferred to other applications that require higher accuracy. However, viewing these methods as a preconditioner instead of a direct solver significantly reduces the accuracy requirements \cite{Ibeid2016,Aminfar2016a}.
\subsection{Fast Translation Operators} A large part of the calculation time of FMM is spent on the translation of multipole expansions to local expansions (or their equivalent charges). Therefore, much work has focused on developing fast translation operators to accelerate this part of the FMM. Rotation of spherical harmonics \cite{White1996}, block FFT \cite{Elliott1996}, and planewaves \cite{Greengard1997} are analytic options for fast translation operators.
These translation operators are applied to a pair of boxes in the FMM tree structure that satisfy a certain proximity threshold. This proximity is usually defined as the parent's neighbors' children that are non-neighbors. This produces a list of boxes that are far enough that the multipole/local expansion converges, but are close enough that the expansion does not converge for their parents. Such an interaction list can contain up to $6^3-3^3=189$ source boxes for each target box. Out of these 189 boxes, the ones that are further from the target box can perform the translation operation using their parent box as the source without loss of accuracy. There are a few variants of these techniques that reduce the interaction list size, such as the level-skip M2L method \cite{Wang2015} and the 8,4,2-box method \cite{Wilkes2015}. There are also methods that use the dual tree traversal along with the multipole acceptance criterion to construct optimal interaction lists \cite{Dehnen2002}, which automates the process of finding the optimal interaction list size.
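The standard list can be generated mechanically from the box indices. The short sketch below, which assumes a uniform octree with boxes indexed by integer triples at each level and is given only as an illustration, reproduces the $6^3-3^3=189$ count for an interior box.
\begin{verbatim}
import itertools

def neighbors(box, level):
    """Boxes within one cell of `box` (including `box` itself) at this level."""
    n = 2**level
    near = set()
    for shift in itertools.product((-1, 0, 1), repeat=3):
        cand = tuple(b + s for b, s in zip(box, shift))
        if all(0 <= c < n for c in cand):
            near.add(cand)
    return near

def interaction_list(box, level):
    """Children of the parent's neighbors that are not neighbors of `box`."""
    parent = tuple(b // 2 for b in box)
    candidates = set()
    for p in neighbors(parent, level - 1):
        for child in itertools.product((0, 1), repeat=3):
            candidates.add(tuple(2 * pc + c for pc, c in zip(p, child)))
    return candidates - neighbors(box, level)

print(len(interaction_list((4, 4, 4), 3)))   # 189 for an interior box
\end{verbatim}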
Another technique to accelerate the translation operators is the use of variable expansion order, as proposed in the very fast multipole method (VFMM) \cite{Petersen1994}, Gaussian VFMM \cite{Burant1996}, optimal parameter FMM \cite{Choi2001}, and error controlled FMM \cite{Dachsel2010}. There are two main reasons why spatially varying the expansion order in the translation operators is beneficial. One is because not all boxes in the interaction list are of equal distance, and the boxes that are further from each other can afford to use lower expansion order, while retaining the accuracy. The other reason is because some parts of the domain may have smaller values, and the contribution from that part can afford to use lower expansion order without sacrificing the overall accuracy.
The translation operators can be stored as matrices that operate on the vector of expansion coefficients. Therefore, singular value decomposition (SVD) can be used to compress this matrix \cite{Gimbutas2002} and BLAS can be used to maximize the cache utilization \cite{Fortin2005}. Some methods use a combination of these techniques like Chebychev with SVD \cite{Fong2009} and planewave with adaptive cross approximation (ACA) and SVD \cite{Hesford2011}. The use of SVD is a systematic and optimal way of achieving what the variable expansion order techniques in the previous paragraph were trying to do manually. Precomputing these translation matrices and storing them is a typical optimization technique in many FMM implementations \cite{Malhotra2015}.
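The effect of such a compression is easy to reproduce on a toy example. The sketch below, which uses a $1/r$ kernel and an arbitrary tolerance and is unrelated to any particular FMM code, builds the dense interaction block between two well-separated clusters and truncates its SVD.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
src = rng.random((200, 3))                       # source cluster in [0,1]^3
trg = rng.random((200, 3)) + [4.0, 0.0, 0.0]     # well-separated target cluster
K = 1.0 / np.linalg.norm(trg[:, None, :] - src[None, :, :], axis=-1)

U, s, Vt = np.linalg.svd(K, full_matrices=False)
k = int(np.sum(s > 1e-6 * s[0]))                 # numerical rank at tolerance 1e-6
K_low = (U[:, :k] * s[:k]) @ Vt[:k, :]           # compressed operator
print(k, np.linalg.norm(K - K_low) / np.linalg.norm(K))
\end{verbatim}
The resulting rank is far smaller than the block size, which is exactly what the SVD-compressed translation operators exploit.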
One important connection to make here is that these matrices for the translation operators are precisely what $\mathcal{H}^2$-matrices and $HSS$ matrices store in the off-diagonal blocks after compression. One can think of FMM as a method that has the analytical form to generate these small matrices in the off-diagonal blocks, without relying on numerical low-rank approximation methods. To complete this analogy, we point out that the dense diagonal blocks in $\mathcal{H}^2$-matrices and $HSS$ matrices are simply storing the direct operator (Green's function) in FMM. Noticing this equivalence leads to many possibilities of hybridization among the analytic and algebraic variants. Possibly the most profound is the following. Those who are familiar with FMM know that translation operators for boxes with the same relative positioning are identical. This suggests that many of the entries in the off-diagonal blocks of $\mathcal{H}^2$-matrices and $HSS$ matrices are identical. For matrices that are generated from a mesh that has a regular structure, even the diagonal blocks would be identical, which is what happens in FMMs for continuous volume integrals \cite{Malhotra2015}. This leads to $\mathcal{O}(1)$ storage for the matrix entries at every level of the hierarchy, so the total storage cost of these hierarchical matrices could be reduced to $\mathcal{O}(\log N)$ if the identical entries are not stored redundantly. This aspect is currently underutilized in the algebraic variants, but seems obvious from the analytic side. By making use of the translational invariance and rotational symmetry of the interaction list one can reduce the amount of storage even further \cite{Coulaud2008,Darve2011,Takahashi2012}. This also results in blocking techniques for better cache utilization.
\subsection{Semi-analytical FMM} The methods described in the previous subsection all require the existence of an analytical form of the multipole/local translation operator, which is kernel dependent. There is a class of methods that remove this restriction by using equivalent charges instead of multipole expansions \cite{Anderson1992,Berman1995,Makino1999}. A well-known implementation of this method is the kernel independent FMM (KIFMM) code \cite{Ying2004}. There are also variants that use Chebychev polynomials \cite{Dutt1996}, and a representative implementation of this is the Black-box FMM \cite{Fong2009}. As the names of these codes suggest, these variants of the FMM have reduced requirements for the information that has to be provided by the user. The translation operators are kernel-independent, which frees the user from the most difficult task of having to provide an analytical form of the translation operators. For example, if one wants to calculate the Mat\'{e}rn function for covariance matrices, or multiquadrics for radial basis function interpolation, one simply needs to provide these functions and the location of the points and the FMM will handle the rest. It is important to note that these methods are not entirely kernel independent or black-box because the user still needs to provide the kernel dependent analytic form of the original equation they wish to calculate. Using the vocabulary of the algebraic variants, one could say that these analytical expressions for the hierarchical matrices are kernel independent only for the off-diagonal blocks, and for the diagonal blocks the analytical form is kernel dependent.
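In practice the user-supplied ingredient is just such a kernel routine together with the point coordinates. For example (the function below is only an illustration and is not tied to any particular library interface), a Mat\'{e}rn-3/2 covariance kernel can be written as
\begin{verbatim}
import numpy as np

def matern32(x, y, ell=0.5):
    """Matern-3/2 covariance between points x and y with length scale ell."""
    r = np.linalg.norm(np.asarray(x) - np.asarray(y)) / ell
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)
\end{verbatim}
and a kernel-independent code would construct the far-field representations from such a callable and the source/target locations on its own.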
FMM for continuous volume integrals \cite{Ethridge2001} also has important features when considering the analytic-algebraic tradeoff. The volume integrals are often combined with boundary integrals, as well \cite{Ying2006}. One can think of these methods as an FMM that includes the discretization process \cite{Langston2011}. Unlike the FMM for discrete particles, these methods have the ability to impose regular underlying geometry. This enables the use of precomputation of the direct interaction matrix in the analytic variants \cite{Malhotra2015}, and reduces the storage requirements of the dense diagonal blocks in the algebraic variants.
\begin{table}[b] \caption{Categorization of algebraic low-rank approximation methods.} \begin{tabular}{p{3cm}p{2.5cm}p{3cm}p{2cm}} \hline\noalign{\smallskip} Method & Hierarchical & Weak admissibility & Nested basis \\ \noalign{\smallskip}\svhline\noalign{\smallskip} $\mathcal{H}$-matrix \cite{Hackbusch1999} & yes & maybe & no \\ $\mathcal{H}^2$-matrix \cite{Hackbusch2000a} & yes & maybe & yes \\ HODLR \cite{Ambikasaran2013} & yes & yes & no \\ HSS \cite{Chandrasekaran2006} / HBS \cite{Martinsson2005} & yes & yes & yes \\ BLR \cite{Amestoy2015} & no & yes & no \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular} \label{tab:algebraic} \end{table}
\subsection{Algebraic Low-Rank Approximation} There are many variants of algebraic low-rank approximation methods. They can be categorized based on whether they are hierarchical, whether they use weak admissibility, or if the basis is nested, as shown in Table \ref{tab:algebraic}. For the definition of admissibility see \cite{Grasedyck2003}. Starting from the top, $\mathcal{H}$-matrices \cite{Hackbusch1999,Bebendorf2008} are hierarchical, usually use standard or strong admissibility, and do not use a nested basis. The analytic counterpart of the $\mathcal{H}$-matrix is the Treecode. The $\mathcal{H}^2$-matrices \cite{Hackbusch2000a,Borm2009} are also hierarchical and use standard or strong admissibility, but unlike $\mathcal{H}$-matrices use a nested basis. This brings the complexity down from $\mathcal{O}(N\log N)$ to $\mathcal{O}(N)$. The analytic counterpart of the $\mathcal{H}^2$-matrix is the FMM. The next three entries in Table \ref{tab:algebraic} do not have analytic counterparts because analytic low-rank approximations do not converge under weak admissibility conditions. Hierarchical off-diagonal low-rank (HODLR) matrices \cite{Ambikasaran2013,Aminfar2016} are basically $\mathcal{H}$-matrices with weak admissibility conditions. Similarly, hierarchically semi-separable (HSS) \cite{Chandrasekaran2006, Xia2010} and hierarchically block-separable (HBS) \cite{Martinsson2005} matrices are $\mathcal{H}^2$-matrices with weak admissibility conditions. The block low-rank (BLR) matrices \cite{Amestoy2015} are a non-hierarchical version of the HODLR, with just the bottom level. A summary of implementations and their characteristics is presented in \cite{Rouet2015}.
For methods that do not have weak admissibility, it is common to use geometrical information to calculate the standard/strong admissibility condition. This dependence on the geometry of the algebraic variants is not ideal. There have been various proposals for algebraic clustering methods \cite{LeBorne2006,Oliveira2007,Grasedyck2008}. This problem requires even more advanced solutions for high-dimensional problems \cite{March2015a}. Stronger admissibility is also a problem for parallelization since it results in more communication. There have been studies on how to partition hierarchical matrices on distributed memory \cite{Izadi2012}. There are also methods to reduce the amount of memory consumption during the construction of HSS matrices \cite{Lessel2015}.
The categorization in Table \ref{tab:algebraic} is for the hierarchical matrix structure, and any low-rank approximation method can be used with each of them during the compression phase. The singular value decomposition is the most na\"{i}ve and expensive way to calculate a low-rank approximation. QR or LU decompositions can be used to find the numerical rank by using appropriate pivoting. Rank-revealing QR \cite{Chan1987} has been proposed along with efficient pivoting strategies \cite{Hong1992, Chandrasekaran1994, Gu1996}. Rank-revealing LU \cite{Chan1984} also requires efficient pivoting strategies \cite{Hwang1992,Hwang1997,Miranian2003}. Rank-revealing LU is typically faster than rank-revealing QR \cite{Pan2000}. There are other methods like the pseudo-skeletal method \cite{Goreinov1997} and adaptive cross approximation (ACA) \cite{Bebendorf2000,Bebendorf2003}, which do not yield the optimal low-rank factorizations but have a much lower cost. ACA has a better pivoting strategy than pseudo-skeletal methods, but can still fail because of bad pivots \cite{Borm2003}. The hybrid cross approximation (HCA) \cite{Borm2005} has the same proven convergence as standard interpolation but also the same efficiency as ACA. Yet another class of low-rank approximation is the interpolative decomposition (ID) \cite{Cheng2005a, Martinsson2005}, where a few of its columns are used to form a well-conditioned basis for the remaining columns. ID can be combined with randomized methods \cite{Liberty2007}, which has much lower complexity. For a nice review on these randomized methods see \cite{Halko2011}.
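To fix ideas about the randomized end of this toolbox, the following is a minimal randomized SVD sketch in the spirit of \cite{Halko2011}, with plain range sampling, oversampling, and no power iteration; it is an illustration rather than a production implementation.
\begin{verbatim}
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    """Rank-k SVD approximation via randomized range sampling (oversampling p)."""
    rng = np.random.default_rng(seed)
    Y = A @ rng.standard_normal((A.shape[1], k + p))   # sample the range of A
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis of the sample
    B = Q.T @ A                                        # small (k+p)-row problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]

# Usage: U, s, Vt = randomized_svd(K, 20) for any dense block K.
\end{verbatim}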
\section{Low-Rank Approximation for Factorization} \subsection{Sparse Matrix Factorization} Hierarchical low-rank approximation methods can be used as direct solvers with controllable accuracy. This makes them useful as preconditioners within a Krylov subspace method, which in turn reduces the accuracy requirements of the low-rank approximation. High-accuracy, completely algebraic methods are demanding in terms of memory consumption and amount of communication, so they will not be the optimal choice if they are not the only option for that problem. It is worth noting that these methods are for solving dense matrices, and for sparse matrices it is best to combine them with multifrontal methods and use them to compress the Schur complements \cite{Xia2009}. They should never be used to compress a sparse matrix or its inverse directly, even if the inverse of a sparse matrix is dense. There are various methods to reduce fill-in during the factorization so as not to make the inverse dense, and not using these methods first will take a toll on the asymptotic constant \cite{Grasedyck2009}, even though the asymptotic complexity may still be optimal.
Ultimately, minimizing fill-in and minimizing off-diagonal rank should not be conflicting objectives. The former depends on the connectivity and the latter depends on the distance in the underlying geometry. In most applications, points that are closer to each other are connected (or interact) more densely, so reordering according to distance should produce a near-optimal ordering for the connectivity as well. The same can be said about minimizing communication for the parallel implementation of these methods. Mapping the 3-D connectivity/distance to a 1-D locality in the memory space (or matrix column/row) is what we are ultimately trying to achieve.
There are various ways to minimize the fill-in and compress the dense blocks during factorization. These dense blocks (Schur complements) are an algebraic form of the Green's function \cite{Xia2014}, and have the same low-rank properties \cite{Chandrasekaran2010} stemming from the fact that some of the boundary points in the underlying geometry are distant from each other. Formulating a boundary integral equation is the analytical way of arriving to the same dense matrix. From an algebraic point of view, the sparse matrix for the volume turns into a dense matrix for the boundary, through the process of trying to minimize fill-in. Considering the minimization of fill-in and the compression of the dense matrices in separate phases leads to methods like HSS + multifrontal \cite{Xia2009,Xia2010,Xia2013}.
\subsection{Dense Matrix Factorization} The methods in the previous subsection are direct solvers/preconditioners for sparse matrices. As we have mentioned, there is an analogy between minimizing fill-in in sparse matrices by looking at the connectivity, and minimizing the rank of off-diagonal blocks of dense matrices by looking at the distance. Using this analogy, the same concept as nested dissection for sparse matrices can be applied to dense matrices. This leads to methods like the recursive skeletonization \cite{Ho2012}, or hierarchical Poincare-Steklov (HPS) \cite{Martinsson2015,Gillman2015}. HPS is like a bottom-up version of what nested dissection and recursive skeletonization do top-down. For high contrast coefficient problems, it makes sense to construct the domain dissection bottom-up, to align the bisectors with the coefficient jumps. There are also other methods that rely on a similar concept \cite{Yarvin1999,Greengard2009,Kong2011,Bremer2012a}. Furthermore, since many of these methods use weak admissibility with growing ranks for 3-D problems, it is useful to have nested hierarchical decompositions, which is like a nested dimension reduction. In this respect, the recursive skeletonization has been extended to hierarchical interpolative factorization (HIF) \cite{Ho2015}, the HSS has been extended to HSS2D \cite{Xia2014}. There is also a combination of HSS and Skeletonization \cite{Corona2015}. There are methods that use this nested dimension reduction concept without the low-rank approximation \cite{Henon2006} in the context of domain decomposition for incomplete LU factorization. One method that does not use weak admissibility is the inverse FMM \cite{Ambikasaran2014}, which makes it applicable to 3-D problems in $\mathcal{O}(N)$ without nested dimension reduction.
\section{Experimental Results} \subsection{FMM vs. HSS} There have been very few comparisons between the analytic and algebraic hierarchical low-rank approximation methods. From a high performance computing perspective, the practical performance of highly optimized implementations of these various methods is of great interest. There have been many efforts to develop new methods in this area, which has resulted in a large number of similar methods with different names without a clear overall picture of their relative performance on modern HPC architectures. The trend in architecture, where arithmetic operations are becoming cheap compared to data movement, is something that must be considered carefully when predicting which method will perform better on computers of the future.
We acknowledge that the comparisons we present here are far from complete, and many more comparisons between all the different methods are needed in order to achieve our long-term objective. The limitation actually comes from the lack of highly optimized implementations of these methods that are openly available to us at the moment.
In the present work we start by comparing exaFMM -- a highly optimized implementation of FMM, with STRUMPACK -- a highly optimized implementation of HSS. We select the 2D and 3D Laplace equation on uniform lattices as test cases. For HSS we directly construct the compressed matrix by calling the Green's function in the randomized low-rank approximation routine. We perform the matrix-vector multiplication using the FMM and HSS, and measure the time for the compression/precalculation and application of the matrix-vector multiplication. We also measure the peak memory consumption of both methods.
\begin{figure}
\caption{Elapsed time for the matrix-vector multiplication using FMM and HSS for different problem sizes.}
\label{fig:fmm_hss}
\end{figure}
The elapsed time for the FMM and HSS for different problem sizes is shown in Fig. \ref{fig:fmm_hss}. In order to isolate the effect of the thread scalability of the two methods, these runs are performed on a single core of a 12-core Ivy Bridge (E5-2695 v2). For the 2D Laplace equation, the FMM shows some overhead for small $N$, but is about 3 orders of magnitude faster than HSS for larger problems. For the 3D Laplace equation, the FMM is about 2 orders of magnitude faster than HSS for smaller $N$, but HSS exhibits non-optimal behavior for large $N$ because the rank keeps growing.
The large difference in the computational time is actually coming from the heavy computation in the sampling phase and compression phase of the HSS. In Fig. \ref{fig:breakdown}, we show the percentage of the computation time of HSS for different problem sizes $N$. ``Sample'' is the sampling time, ``Compress'' is the compression time, and ``Mat-Vec'' is the matrix-vector multiplication time. We can see that the sampling is taking longer and longer as the problem size increases. This is because the rank $k$ increases with the problem size $N$, and both sampling and compression time increase with $k$ and $N$.
\begin{figure}
\caption{Percentage of the computation time of HSS for different problem sizes.}
\label{fig:breakdown}
\end{figure}
\begin{figure}
\caption{Peak memory usage of FMM and HSS for the 3D Laplace equation.}
\label{fig:memory}
\end{figure}
The peak memory usage of FMM and HSS is shown in Fig. \ref{fig:memory} for the 3D Laplace equation. We see that the FMM has strictly $\mathcal{O}(N)$ storage requirements, but since the rank in the HSS grows for 3D kernels it does not show the ideal $\mathcal{O}(N\log N)$ behavior. The disadvantage of HSS is two-fold. First of all, its algebraic nature requires it to store the compressed matrix, whereas the FMM is analytic and therefore matrix-free. Secondly, the weak admissibility causes the rank to grow for 3D problems, and with that the memory consumption grows at a suboptimal complexity.
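This rank growth is easy to reproduce on a toy version of the same setting. The sketch below, which uses small lattices, a $1/r$ kernel, and an arbitrary tolerance and does not reproduce the STRUMPACK benchmark itself, measures the numerical rank of the off-diagonal block coupling two adjacent halves of a 3D lattice, which is precisely the kind of block that weak admissibility forces one to compress.
\begin{verbatim}
import numpy as np, itertools

def numerical_rank(K, tol=1e-6):
    s = np.linalg.svd(K, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

for n in (6, 8, 10):                                  # n^3 lattice points
    pts = np.array(list(itertools.product(range(n), repeat=3)), dtype=float)
    left, right = pts[pts[:, 0] < n // 2], pts[pts[:, 0] >= n // 2]
    K = 1.0 / np.linalg.norm(left[:, None] - right[None, :], axis=-1)
    print(n**3, numerical_rank(K))                    # the rank keeps growing with N
\end{verbatim}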
\subsection{FMM vs. Multigrid} If we are to use the FMM as a matrix-free $\mathcal{O}(N)$ preconditioner based on hierarchical low-rank approximation, the natural question to ask is ``How does it compare against multigrid?'', which is a much more popular matrix-free $\mathcal{O}(N)$ preconditioner for solving elliptic PDEs. We perform a benchmark test similar to the one in the previous subsection, for the Laplace equation and Helmholtz equation on a 3D cubic lattice $[-1,1]^3$, but for this case we impose Dirichlet boundary conditions at the faces of the domain. The preconditioners are used inside a Krylov subspace solver.
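The way such a preconditioner enters the solver can be sketched as follows; this is a minimal illustration on a 1-D Poisson model problem in which a Jacobi sweep stands in for the low-accuracy FMM or multigrid V-cycle used in the actual experiments, and it does not reproduce the benchmark setup.
\begin{verbatim}
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n = 200
h = 1.0 / (n + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2   # 1-D Poisson
b = np.ones(n)

d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: r / d, dtype=float)  # matrix-free M ~ A^{-1}

x, info = cg(A, b, M=M)                       # preconditioned Krylov solve
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
\end{verbatim}
Replacing the Jacobi sweep with a low-accuracy FMM solve or a multigrid V-cycle changes only the \texttt{matvec} that defines \texttt{M}.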
\begin{figure}
\caption{Convergence rate of the FMM and Multigrid preconditioners for the Laplace equation on a $[-1,1]^3$ lattice with spacing $h=2^{-5}$.}
\label{fig:laplace}
\end{figure}
\begin{figure}
\caption{Convergence rate of the FMM and Multigrid preconditioners for the Helmholtz equation on a $[-1,1]^3$ lattice with spacing $h=2^{-5}$ and wave number $\kappa=7$.}
\label{fig:helmholtz}
\end{figure}
The convergence rate of the FMM and Multigrid preconditioners for the Laplace equation is shown in Fig. \ref{fig:laplace}, for a grid spacing of $h=2^{-5}$. ``AMG'' is algebraic multigrid, ``GMG'' is geometric multigrid, and ``Inc Chol'' is incomplete Cholesky. The $\epsilon$ value represents the accuracy of the FMM. We see that the FMM preconditioner has comparable convergence to the algebraic and geometric multigrid methods. Even for a very low-accuracy FMM with $\epsilon=10^{-2}$, the convergence rate is much better than that of the incomplete Cholesky. We refer to the work by Ibeid \textit{et al.} \cite{Ibeid2016} for more detailed comparisons between FMM and Multigrid.
A similar plot is shown for the Helmholtz equation with grid spacing of $h=2^{-5}$ and wave number $\kappa=7$ in Fig. \ref{fig:helmholtz}. The nomenclature of the legend is identical to that of Fig. \ref{fig:laplace}. In this case, we see a larger difference between the convergence rate of FMM and Multigrid. Even the FMM with the worst accuracy does better than the multigrid. We have also confirmed that the FMM preconditioner has a convergence rate that is independent of the problem size, up to moderate wave numbers of $\kappa$.
\section{Conclusions and Outlook} We have shown the contrast between the analytical and algebraic hierarchical low-rank approximations, by reviewing the contributions over the years and placing them along the analytical-algebraic spectrum. The relation between Treecode, FMM, KIFMM, black-box FMM, $\mathcal{H}$-matrix, $\mathcal{H}^2$-matrix, HODLR, HSS, HBS, and BLR was explained from the perspective of compute-memory tradeoff. This bird's-eye view of the entire hierarchical low-rank approximation landscape, from analytical to algebraic, allows us to place ideas like precomputation of FMM translation matrices and relate them to storage reduction techniques for the algebraic variants.
Some important findings from this cross-disciplinary literature review are: \begin{itemize} \item Translational invariance of the FMM operators suggests that $\mathcal{H}^2$-matrices (and the like) have mostly duplicate entries, which many implementations currently store redundantly. \item The analytical variants can now perform factorization and are kernel independent, so the decision to use the algebraic variants at the cost of consuming more memory should be made carefully. \item The kernel-independent variants of FMM can be used as a matrix-free $\mathcal{O}(N)$ compression technique. \item The use of SVD to compress the FMM translation matrices makes the work on variable expansion order and its error-optimized variants redundant. \item The hierarchical compression should not be applied directly to the inverse or factorizations of sparse matrices just because they fill in. One must first try to minimize fill-in, and then compress only the dense blocks that cannot be avoided. \end{itemize}
The comparison benchmarks between FMM and HSS are still preliminary tests for a very simple case. However, they clearly demonstrate the magnitude of the difference that lies between the various hierarchical low-rank approximation methods. The comparison between FMM and multigrid is also a very simple test case, but it reveals the previously unquantified convergence properties of low-accuracy FMM as a preconditioner. Of course, for such simple problems the FMM can give the exact solution in finite arithmetic and therefore solve the problem in a single iteration. The interesting point here is not the fact that it can be used as a preconditioner, but the practical performance of the low-accuracy FMM being significantly faster than the high accuracy FMM, even if it requires a few iterations.
There is much more that can be done if all of these complicated hierarchical low-rank approximation methods could somehow be made easier to code. We believe a modular view of these methods will help the developers through separation of concerns. Instead of everyone coding a slightly different version of the whole thing, we could each choose a module to focus on that fits our research interests, and contribute to a larger and more sustainable ecosystem. A few ideas to facilitate the transition to such a community effort are: \begin{enumerate} \item Create a common benchmark (mini app) for each of the modules. \item Gradually propagate standards in the community, starting from the major codes. \item Develop a common interface between the hierarchical structure and inner kernels. \item Do not try to unify code, just have a standard with a common API (like MPI). \end{enumerate}
\begin{thebibliography}{100}
\bibitem{Ambikasaran2013} S.~Ambikasaran and E.~Darve. \newblock An {O(NlogN)} fast direct solver for partial hierarchically
semi-separable matrices. \newblock {\em Journal of Scientific Computing}, 57:477--501, 2013.
\bibitem{Ambikasaran2014} S.~Ambikasaran and E.~Darve. \newblock The inverse fast multipole method. \newblock {\em arXiv:1407.1572v1}, 2014.
\bibitem{Ambikasaran2013a} S.~Ambikasaran, J.-Y. Li, P.~K. Kitanidis, and E.~Darve. \newblock Large-scale stochastic linear inversion using hierarchical matrices. \newblock {\em Computational Geosciences}, 17(6):913--927, 2013.
\bibitem{Amestoy2015} P.~Amestoy, C.~Ashcraft, O.~Boiteau, A.~Buttari, J.-Y. L'Excellent, and
C.~Weisbecker. \newblock Improving multifrontal methods by means of block low-rank
representations. \newblock {\em SIAM Journal on Scientific Computing}, 37(3):A1451--A1474, 2015.
\bibitem{Aminfar2016} A.~Aminfar, S.~Ambikasaran, and E.~Darve. \newblock A fast block low-rank dense solver with applications to
finite-element matrices. \newblock {\em Journal of Computational Physics}, 304:170--188, 2016.
\bibitem{Aminfar2016a} A.~Aminfar and E.~Darve. \newblock A fast, memory efficient and robust sparse preconditioner based on a
multifrontal approach with applications to finite-element matrices. \newblock {\em International Journal for Numerical Methods in Engineering},
accepted, 2016.
\bibitem{Anderson1992} C.~R. Anderson. \newblock An implementation of the fast multipole method without multipoles. \newblock {\em SIAM Journal on Scientific and Statistical Computing},
13(4):923--947, 1992.
\bibitem{Appel1985} A.~W. Appel. \newblock An efficient program for many-body simulation. \newblock {\em SIAM Journal on Scientific and Statistical Computing},
6(1):85--103, 1985.
\bibitem{Barba2013} L.~A. Barba and R.~Yokota. \newblock How will the fast multipole method fare in the exascale era? \newblock {\em SIAM News}, 46(6):1--3, 2013.
\bibitem{Barnes1986} J.~Barnes and P.~Hut. \newblock {O(NlogN)} force-calculation algorithm. \newblock {\em Nature}, 324:446--449, 1986.
\bibitem{Bebendorf2000} M.~Bebendorf. \newblock Approximation of boundary element matrices. \newblock {\em Numerische Mathematik}, 86:565--589, 2000.
\bibitem{Bebendorf2008} M.~Bebendorf. \newblock {\em Hierarchical Matrices}, volume~63 of {\em Lecture Notes in
Computational Science and Engineering}. \newblock Springer, 2008.
\bibitem{Bebendorf2003} M.~Bebendorf and S.~Rjasanow. \newblock Adaptive low-rank approximation of collocation matrices. \newblock {\em Computing}, 70:1--24, 2003.
\bibitem{Bedorf2014} J.~B{\'e}dorf, E.~Gaburov, M.~S. Fujii, K.~Nitadori, T.~Ishiyama, and
S.~Portegies~Zwart. \newblock 24.77 {P}flops on a gravitational tree-code to simulate the milky way
galaxy with 18600 {GPU}s. \newblock In {\em Proceedings of the 2014 ACM/IEEE International Conference for
High Performance Computing, Networking, Storage and Analysis}, pages 1--12,
2014.
\bibitem{Berman1995} C.~L. Berman. \newblock Grid-multipole calculations. \newblock {\em SIAM Journal on Scientific Computing}, 16(5):1082--1091, 1995.
\bibitem{Borm2009} S.~B{\"o}rm. \newblock Construction of data-sparse $h^2$-matrices by hierarchical
compression. \newblock {\em SIAM Journal on Scientific Computing}, 31(3):1820--1839, 2009.
\bibitem{Borm2005} S.~B{\"o}rm and L.~Grasedyck. \newblock Hybrid cross approximation of integral operators. \newblock {\em Numerische Mathematik}, 101:221--249, 2005.
\bibitem{Borm2003} S.~B{\"o}rm, L.~Grasedyck, and W.~Hackbusch. \newblock Introduction to hierarchical matrices with applications. \newblock {\em Engineering Analysis with Boundary Elements}, 27:405--422, 2003.
\bibitem{Bremer2012a} J.~Bremer. \newblock A fast direct solver for the integral equations of scattering theory
on planar curves with corners. \newblock {\em Journal of Computational Physics}, 231:1879--1899, 2012.
\bibitem{Burant1996} J.~C. Burant, M.~C. Strain, G.~E. Scuseria, and M.~J. Frisch. \newblock Analytic energy gradients for the {G}aussian very fast multipole
method ({GvFMM}). \newblock {\em Chemical Physics Letters}, 248:43--49, 1996.
\bibitem{Chaillat2008} S.~Chaillat, M.~Bonnet, and J.-F. Semblat. \newblock A multi-level fast multipole {BEM} for 3-{D} elastodynamics in the
frequency domain. \newblock {\em Computer Methods in Applied Mechanics and Engineering},
197:4233--4249, 2008.
\bibitem{Chan1984} T.~F. Chan. \newblock On the existence and computation of {LU}-factorizations with small
pivots. \newblock {\em Mathematics of Computation}, 42(166):535--547, 1984.
\bibitem{Chan1987} T.~F. Chan. \newblock Rank revealing {QR} factorizations. \newblock {\em Linear Algebra and its Applications}, 88/89:67--82, 1987.
\bibitem{Chandrasekaran2006} S.~Chandrasekaran, P.~Dewilde, M.~Gu, W.~Lyons, and T.~Pals. \newblock A fast solver for {HSS} representations via sparse matrices. \newblock {\em SIAM Journal on Matrix Analysis and Applications}, 29(1):67--81,
2006.
\bibitem{Chandrasekaran2010} S.~Chandrasekaran, P.~Dewilde, M.~Gu, and N.~Somasunderam. \newblock On the numerical rank of the off-diagonal blocks of {S}chur
complements of discretized elliptic {PDE}s. \newblock {\em SIAM Journal on Matrix Analysis and Applications},
31(5):2261--2290, 2010.
\bibitem{Chandrasekaran1994} S.~Chandrasekaran and I.~C.~F. Ipsen. \newblock On rank-revealing factorizations. \newblock {\em SIAM Journal on Matrix Analysis and Applications},
15(2):592--622, 1994.
\bibitem{Cheng2005a} H.~Cheng, Z.~Gimbutas, P.~G. Martinsson, and V.~Rokhlin. \newblock On the compression of low rank matrices. \newblock {\em SIAM Journal on Scientific Computing}, 26(4):1389--1404, 2005.
\bibitem{Choi2001} C.~H. Choi, K.~Ruedenberg, and M.~S. Gordon. \newblock New parallel optimal-parameter fast multipole method ({OPFMM}). \newblock {\em Journal of Computational Chemistry}, 22(13):1484--1501, 2001.
\bibitem{Corona2015} E.~Corona, P.~G. Martinsson, and D.~Zorin. \newblock An {O(N)} direct solver for integral equations on the plane. \newblock {\em Applied and Computational Harmonic Analysis}, 38:284--317, 2015.
\bibitem{Coulaud2008} O.~Coulaud, P.~Fortin, and J.~Roman. \newblock High performance {BLAS} formulation of the multipole-to-local
operator in the fast multipole method. \newblock {\em Journal of Computational Physics}, 227:1836--1862, 2008.
\bibitem{Dachsel2010} H.~Dachsel. \newblock Corrected article: ``an error-controlled fast multipole method". \newblock {\em The Journal of Chemical Physics}, 132:119901, 2010.
\bibitem{Darve2011} E.~Darve, C.~Cecka, and T.~Takahashi. \newblock The fast multipole method on parallel clusters, multicore processors,
and graphics processing units. \newblock {\em Comptes Rendus Mecanique}, 339:185--193, 2011.
\bibitem{Darve2004a} E.~Darve and P.~Hav{\'e}. \newblock A fast multipole method for {M}axwell equations stable at all
frequencies. \newblock {\em Philosophical Transactions of the Royal Society of London A},
362:603--628, 2004.
\bibitem{Dehnen2002} W.~Dehnen. \newblock A hierarchical {O(N)} force calculation algorithm. \newblock {\em Journal of Computational Physics}, 179(1):27--42, 2002.
\bibitem{Dutt1996} A.~Dutt, M.~Gu, and V.~Rokhlin. \newblock Fast algorithms for polynomial interpolation, integration, and
differentiation. \newblock {\em SIAM Journal on Numerical Analysis}, 33(5):1689--1711, 1996.
\bibitem{Elliott1996} W.~D. Elliott and J.~A. Board. \newblock Fast {F}ourier transform accelerated fast multipole algorithm. \newblock {\em SIAM Journal on Scientific Computing}, 17(2):398--415, 1996.
\bibitem{Ethridge2001} F.~Ethridge and L.~Greengard. \newblock A new fast-multipole accelerated {P}oisson solver in two dimensions. \newblock {\em SIAM Journal on Scientific Computing}, 23(3):741--760, 2001.
\bibitem{Fong2009} W.~Fong and E.~Darve. \newblock The black-box fast multipole method. \newblock {\em Journal of Computational Physics}, 228:8712--8725, 2009.
\bibitem{Fortin2005} P.~Fortin. \newblock Multipole-to-local operator in the fast multipole method:
{C}omparison of {FFT}, rotations and {BLAS} improvements. \newblock Technical Report RR-5752, Rapports de recherche, et theses de
l'Inria, 2005.
\bibitem{Gillman2015} A.~Gillman, A.~Barnett, and P.~G. Martinsson. \newblock A spectrally accurate direct solution technique for frequency-domain
scattering problems with variable media. \newblock {\em BIT Numerical Mathematics}, 55:141--170, 2015.
\bibitem{Gimbutas2013} Z.~Gimbutas and L.~Greengard. \newblock Fast multi-particle scattering: A hybrid solver for the {M}axwell
equations in microstructured materials. \newblock {\em Journal of Computational Physics}, 232:22--32, 2013.
\bibitem{Gimbutas2002} Z.~Gimbutas and V.~Rokhlin. \newblock A generalized fast multipole method for nonoscillatory kernels. \newblock {\em SIAM Journal on Scientific Computing}, 24(3):796--817, 2002.
\bibitem{Goreinov1997} S.~A. Goreinov, E.~E. Tyrtyshnikov, and N.~L. Zamarashkin. \newblock A theory of pseudoskeleton approximations. \newblock {\em Linear Algebra and its Applications}, 261(1-3):1--21, 1997.
\bibitem{Grasedyck2003} L.~Grasedyck and W.~Hackbusch. \newblock Construction and arithmetics of {H}-matrices. \newblock {\em Computing}, 70:295--334, 2003.
\bibitem{Grasedyck2008} L.~Grasedyck, R.~Kriemann, and S.~Le~Borne. \newblock Parallel black box {H-LU} preconditioning for elliptic boundary value
problems. \newblock {\em Computing and Visualization in Science}, 11:273--291, 2008.
\bibitem{Grasedyck2009} L.~Grasedyck, R.~Kriemann, and S.~Le~Borne. \newblock Domain decomposition based {H-LU} preconditioning. \newblock {\em Numerische Mathematik}, 112:565--600, 2009.
\bibitem{Gray2001} A.~G. Gray and A.~W. Moore. \newblock {N}-body problems in statistical learning. \newblock In T.~K. Leen, T.~G. Dietterich, and V.~Tresp, editors, {\em Advances
in Neural Information Processing Systems}, volume~13, pages 521---527. MIT
Press, 2001.
\bibitem{Greengard2009} L.~Greengard, D.~Gueyffier, P.~G. Martinsson, and V.~Rokhlin. \newblock Fast direct solvers for integral equations in complex three
dimensional domains. \newblock {\em Acta Numerica}, 18:243--275, 2009.
\bibitem{Greengard1987} L.~Greengard and V.~Rokhlin. \newblock A fast algorithm for particle simulations. \newblock {\em Journal of Computational Physics}, 73(2):325--348, 1987.
\bibitem{Greengard1997} L.~Greengard and V.~Rokhlin. \newblock A new version of the fast multipole method for the {L}aplace equation
in three dimensions. \newblock {\em Acta Numerica}, 6:229--269, 1997.
\bibitem{Gu1996} M.~Gu and S.~C. Eisenstat. \newblock Efficient algorithms for computing a strong rank-revealing {QR}
factorization. \newblock {\em SIAM Journal on Scientific Computing}, 17(4):848--869, 1996.
\bibitem{Gumerov2007} N.~A. Gumerov and R.~Duraiswami. \newblock Fast radial basis function interpolation via preconditioned {K}rylov
iteration. \newblock {\em SIAM Journal on Scientific Computing}, 29(5):1876--1899, 2007.
\bibitem{Hackbusch1999} W.~Hackbusch. \newblock A sparse matrix arithmetic based on {H}-matrices, part {I}:
{I}ntroduction to {H}-matrices. \newblock {\em Computing}, 62:89--108, 1999.
\bibitem{Hackbusch2000a} W.~Hackbusch, B.~Khoromskij, and S.~A. Sauter. \newblock On $h^2$-matrices. \newblock In H.~Bungartz, R.~Hoppe, and C.~Zenger, editors, {\em Lectures on
Applied Mathematics}. Springer-Verlag, 2000.
\bibitem{Hackbusch1989} W.~Hackbusch and Z.~P. Nowak. \newblock On the fast matrix multiplication in the boundary element method by
panel clustering. \newblock {\em Numerische Mathematik}, 54:463--491, 1989.
\bibitem{Halko2011} N.~Halko, P.~G. Martinsson, and J.~A. Tropp. \newblock Finding structure with randomness: Probabilistic algorithms for
constructing approximate matrix decompositions. \newblock {\em SIAM Review}, 53(2):217--288, 2011.
\bibitem{Hao2015} S.~Hao, P.~G. Martinsson, and P.~Young. \newblock An efficient and highly accurate solver for multi-body acoustic
scattering problems involving rotationally symmetric scatterers. \newblock {\em Computers and Mathematics with Applications}, 69:304--318, 2015.
\bibitem{Henon2006} P.~H{\'e}non and Y.~Saad. \newblock A parallel multistage {ILU} factorization based on a hierarchical
graph decomposition. \newblock {\em SIAM Journal on Scientific Computing}, 28(6):2266--2293, 2006.
\bibitem{Hesford2011} A.~J. Hesford and R.~C. Waag. \newblock Reduced-rank approximations to the far-field transform in the gridded
fast multipole method. \newblock {\em Journal of Computational Physics}, 230:3656--3667, 2011.
\bibitem{Ho2012} K.~L. Ho and L.~Greengard. \newblock A fast direct solver for structured linear systems by recursive
skeletonization. \newblock {\em SIAM Journal on Scientific Computing}, 34(5):A2507--A2532, 2012.
\bibitem{Ho2015} K.~L. Ho and L.~Ying. \newblock Hierarchical interpolative factorization for elliptic operators:
Integral equations. \newblock {\em arXiv:1307.2666}, 2015.
\bibitem{Hong1992} Y.~P. Hong and C.~T. Pan. \newblock Rank-revealing {QR} factorizations and the singular value
decomposition. \newblock {\em Mathematics of Computation}, 58(197):213--232, 1992.
\bibitem{Hwang1997} T.-M. Hwang, W.-W. Lin, and D.~Pierce. \newblock Improved bound for rank revealing {LU} factorizations. \newblock {\em Linear Algebra and its Applications}, 261(1):173--186, 1997.
\bibitem{Hwang1992} T.-M. Hwang, W.-W. Lin, and E.~K. Yang. \newblock Rank revealing {LU} factorizations. \newblock {\em Linear Algebra and its Applications}, 175:115--141, 1992.
\bibitem{Ibeid2016} H.~Ibeid, R.~Yokota, J.~Pestana, and D.~Keyes. \newblock Fast multipole preconditioners for sparse matrices arising from
elliptic equations. \newblock {\em arXiv:1308.3339}, 2016.
\bibitem{Izadi2012} M.~Izadi. \newblock {\em Hierarchical Matrix Techniques on Massively Parallel Computers}. \newblock PhD thesis, Universitat Leipzig, 2012.
\bibitem{Kong2011} W.~Y. Kong, J.~Bremer, and V.~Rokhlin. \newblock An adaptive fast direct solver for boundary integral equations in two
dimensions. \newblock {\em Applied and Computational Harmonic Analysis}, 31:346--369, 2011.
\bibitem{Langston2011} H.~Langston, L.~Greengard, and D.~Zorin. \newblock A free-space adaptive {FMM}-based {PDE} solver in three dimensions. \newblock {\em Communications in Applied Mathematics and Computational
Science}, 6(1):79--122, 2011.
\bibitem{LeBorne2006} S.~Le~Borne. \newblock Multilevel hierarchical matrices. \newblock {\em SIAM Journal on Matrix Analysis and Applications},
28(3):871--889, 2006.
\bibitem{Lee2012} D.~Lee, R.~Vuduc, and A.~G. Gray. \newblock A distributed kernel summation framework for general-dimension
machine learning. \newblock In {\em Proceedings of the 2012 SIAM International Conference on Data
Mining}, 2012.
\bibitem{Lessel2015} K.~Lessel, M.~Hartman, and S.~Chandrasekaran. \newblock A fast memory efficient construction algorithm for hierarchically
semi-separable representations. \newblock {\em http://scg.ece.ucsb.edu/publications/MemoryEfficientHSS.pdf},
2015.
\bibitem{Li2014} J.-Y. Li, S.~Ambikasaran, E.~F. Darve, and P.~K. Kitanidis. \newblock A {K}alman filter powered by $h^2$-matrices for quasi-continuous data
assimilation problems. \newblock {\em Water Resources Research}, 50:3734--3749, 2014.
\bibitem{Liang2013} Z.~Liang, Z.~Gimbutas, L.~Greengard, J.~Huang, and S.~Jiang. \newblock A fast multipole method for the {R}otne-{P}rager-{Y}amakawa tensor
and its applications. \newblock {\em Journal of Computational Physics}, 234:133--139, 2013.
\bibitem{Liberty2007} E.~Liberty, F.~Woolfe, P.~G. Martinsson, V.~Rokhlin, and M.~Tygert. \newblock Randomized algorithms for the low-rank approximation of matrices. \newblock {\em PNAS}, 104(51):20167--20172, 2007.
\bibitem{Makino1999} J.~Makino. \newblock Yet another fast multipole method without multipoles --
{P}seudoparticle multipole method. \newblock {\em Journal of Computational Physics}, 151(2):910--920, 1999.
\bibitem{Malhotra2015} D.~Malhotra and G.~Biros. \newblock {PVFMM}: A parallel kernel independent {FMM} for particle and volume
potentials. \newblock {\em Communications in Computational Physics}, 18(3):808--830, 2015.
\bibitem{Malhotra2014} D.~Malhotra, A.~Gholami, and G.~Biros. \newblock A volume integral equation stokes solver for problems with variable
coefficients. \newblock In {\em Proceedings of the 2014 ACM/IEEE International Conference for
High Performance Computing, Networking, Storage and Analysis}, pages 1--11,
2014.
\bibitem{March2015a} W.~B. March, B.~Xiao, and G.~Biros. \newblock {ASKIT}: Approximate skeletonization kernel-independent treecode in
high dimensions. \newblock {\em SIAM Journal on Scientific Computing}, 37(2):A1089--A1110, 2015.
\bibitem{Martinsson2015} P.~G. Martinsson. \newblock The hierarchical {P}oincar{\'e}-{S}teklov ({HPS}) solver for elliptic
{PDE}s: A tutorial. \newblock {\em arXiv:1506.01308}, 2015.
\bibitem{Martinsson2005} P.~G. Martinsson and V.~Rokhlin. \newblock A fast direct solver for boundary integral equations in two
dimensions. \newblock {\em Journal of Computational Physics}, 205:1--23, 2005.
\bibitem{Miranian2003} L.~Miranian and M.~Gu. \newblock Strong rank revealing {LU} factorizations. \newblock {\em Linear Algebra and its Applications}, 367:1--16, 2003.
\bibitem{Ohno2014} Y.~Ohno, R.~Yokota, H.~Koyama, G.~Morimoto, A.~Hasegawa, G.~Masumoto,
N.~Okimoto, Y.~Hirano, H.~Ibeid, T.~Narumi, and M.~Taiji. \newblock Petascale molecular dynamics simulation using the fast multipole
method on k computer. \newblock {\em Computer Physics Communications}, 185:2575--2585, 2014.
\bibitem{Oliveira2007} S.~Oliveira and Y.~F. \newblock An algebraic approcah for {H}-matrix preconditioners. \newblock {\em Computing}, 80:169--188, 2007.
\bibitem{Pan2000} C.~T. Pan. \newblock On the existence and computation of rank-revealing {LU}
factorizations. \newblock {\em Linear Algebra and its Applications}, 316:199--222, 2000.
\bibitem{Petersen1994} H.~G. Petersen, D.~Soelvason, J.~W. Perram, and E.~R. Smith. \newblock The very fast multipole method. \newblock {\em The Journal of Chemical Physics}, 101(10):8870--8876, 1994.
\bibitem{Rahimian2010} A.~Rahimian, I.~Lashuk, K.~Veerapaneni, A.~Chandramowlishwaran, D.~Malhotra,
L.~Moon, R.~Sampath, A.~Shringarpure, J.~Vetter, R.~Vuduc, D.~Zorin, and
G.~Biros. \newblock Petascale direct numerical simulation of blood flow on 200k cores and
heterogeneous architectures. \newblock In {\em Proceedings of the 2010 ACM/IEEE International Conference for
High Performance Computing, Networking, Storage and Analysis}, SC '10, 2010.
\bibitem{Rouet2015} F.-H. Rouet, X.-S. Li, P.~Ghysels, and A.~Napov. \newblock A distributed-memory package for dense hierarchically semi-separable
matrix computations using randomization. \newblock {\em arXiv:1503.05464}, 2015.
\bibitem{Shao2001} Y.~Shao, C.~A. White, and M.~Head-Gordon. \newblock Efficient evaluation of the {C}oulomb force in density-functional
theory calculations. \newblock {\em The Journal of Chemical Physics}, 114(15):6572--6577, 2001.
\bibitem{Takahashi2012} T.~Takahashi, C.~Cecka, W.~Fong, and E.~Darve. \newblock Optimizing the multipole-to-local operator in the fast multipole
method for graphical processing units. \newblock {\em International Journal for Numerical Methods in Engineering},
89:105--133, 2012.
\bibitem{Verde2015} A.~Verde and A.~Ghassemi. \newblock Fast multipole displacement discontinuity method ({FM-DDM}) for
geomechanics reservoir simulations. \newblock {\em International Journal for Numerical and Analytical Methods in
Geomechanics}, 39(18):1953--1974, 2015.
\bibitem{Wang2015} Y.~Wang, Q.~Wang, X.~Deng, Z.~Xia, J.~Yan, and H.~Xu. \newblock Graphics processing unit ({GPU}) accelerated fast multipole {BEM}
with level-skip {M2L} for {3D} elasticity problems. \newblock {\em Advances in Engineering Software}, 82:105--118, 2015.
\bibitem{White1996} C.~A. White and M.~Head-Gordon. \newblock Rotating around the quartic angular momentum barrier in fast
multipole method calculations. \newblock {\em The Journal of Chemical Physics}, 105(12):5061--5067, 1996.
\bibitem{Wilkes2015} D.~R. Wilkes and A.~J. Duncan. \newblock A low frequency elastodynamic fast multipole boundary element method
in three dimensions. \newblock {\em Computational Mechanics}, 56:829--848, 2015.
\bibitem{Willis2005} D.~Willis, J.~Peraire, and J.~White. \newblock {FastAero} -- a precorrected {FFT}-fast multipole tree steady and
unsteady potential flow solver. \newblock {\em http://hdl.handle.net/1721.1/7378}, 2005.
\bibitem{Wolf2011a} W.~R. Wolf and S.~K. Lele. \newblock Aeroacoustic integrals accelerated by fast multipole method. \newblock {\em AIAA Journal}, 49(7):1466--1477, 2011.
\bibitem{Xia2013} J.~Xia. \newblock Randomized sparse direct solvers. \newblock {\em SIAM Journal on Matrix Analysis and Applications},
34(1):197--227, 2013.
\bibitem{Xia2014} J.~Xia. \newblock {O(N)} complexity randomized 3{D} direct solver with {HSS2D}
structure. \newblock Proceedings of the Project Review, Geo-Mathematical Imaging Group
317--325, Purdue University, 2014.
\bibitem{Xia2009} J.~Xia, S.~Chandrasekaran, M.~Gu, and X.~S. Li. \newblock Superfast multifrontal method for large structured linear systems of
equations. \newblock {\em SIAM Journal on Matrix Analysis and Applications},
31(3):1382--1411, 2009.
\bibitem{Xia2010} J.~Xia, S.~Chandrasekaran, M.~Gu, and X.~S. Li. \newblock Fast algorithms for hierarchically semiseperable matrices. \newblock {\em Numerical Linear Algebra with Applications}, 17:953--976, 2010.
\bibitem{Yarvin1999} N.~Yarvin and V.~Rokhlin. \newblock An improved fast multipole algorithm for potential fields on the
line. \newblock {\em SIAM Journal on Numerical Analysis}, 36(2):629--666, 1999.
\bibitem{Ying2004} L.~Ying, G.~Biros, and D.~Zorin. \newblock A kernel-independent adaptive fast multipole algorithm in two and
three dimensions. \newblock {\em Journal of Computational Physics}, 196(2):591--626, 2004.
\bibitem{Ying2006} L.~Ying, G.~Biros, and D.~Zorin. \newblock A high-order 3{D} boundary integral equation solver for elliptic
{PDE}s in smooth domains. \newblock {\em Journal of Computational Physics}, 219:247--275, 2006.
\bibitem{Yokota2011} R.~Yokota, J.~P. Bardhan, M.~G. Knepley, L.~A. Barba, and T.~Hamada. \newblock Biomolecular electrostatics using a fast multipole {BEM} on up to 512
{GPU}s and a billion unknowns. \newblock {\em Computer Physics Communications}, 182:1272--1283, 2011.
\bibitem{Yokota2013} R.~Yokota, T.~Narumi, K.~Yasuoka, and L.~A. Barba. \newblock Petascale turbulence simulation using a highly parallel fast
multipole method on {GPU}s. \newblock {\em Computer Physics Communications}, 184:445--455, 2013.
\bibitem{Yunis2012} E.~Yunis, R.~Yokota, and A.~Ahmadia. \newblock Scalable force directed graph layout algorithms using fast multipole
methods. \newblock In {\em The 11th International Symposium on Parallel and Distributed
Computing}, Munich, Germany, June 2012.
\bibitem{Zhao2007} Z.~Zhao, N.~Kovvali, W.~Lin, C.-H. Ahn, L.~Couchman, and L.~Carin. \newblock Volumetric fast multipole method for modeling {S}chr{\"o}dinger's
equation. \newblock {\em Journal of Computational Physics}, 224:941--955, 2007.
\end{thebibliography}
\end{document}
\begin{document}
\title[Even circuits]{Even circuits of prescribed clockwise parity}
\newbox\Adr \setbox\Adr\vbox{ \centerline{ Ilse Fischer} \centerline{Universit\"at Klagenfurt, A-9020 Klagenfurt, Austria} \centerline{{\tt [email protected]}}
\centerline{and}
\centerline{ C.H.C. Little} \centerline{Massey University, Palmerston North, New Zealand} \centerline{{\tt [email protected]}} }
\author[Ilse Fischer and C.H.C. Little]{\box\Adr}
\maketitle
\begin{abstract} We show that a graph has an orientation under which every circuit of even length is clockwise odd if and only if the graph contains no subgraph which is, after the contraction of at most one circuit of odd length, an even subdivision of $K_{2,3}$. In fact we give a more general characterisation of graphs that have an orientation under which every even circuit has a prescribed clockwise parity. This problem was motivated by the study of {\it Pfaffian} graphs, which are the graphs that have an orientation under which every alternating circuit is clockwise odd. Their significance is that they are precisely the graphs to which Kasteleyn's powerful method \cite{kasteleyn} for enumerating perfect matchings may be applied. \end{abstract}
\section{Introduction}
Consider the three (even) circuits in $K_{2,3}$, where the clockwise parity of a circuit of even length is defined as the parity of the number of its edges that are directed in agreement with a specified sense of the circuit. Is it possible to find an orientation under which all these circuits are clockwise odd? However $K_{2,3}$ is oriented, one observes that the total number of clockwise even circuits is odd, and therefore no such orientation exists. In this paper we present a characterisation, in terms of forbidden subgraphs, of the graphs that have an orientation under which every even circuit is clockwise odd. It will turn out that the non-existence of such an orientation can in a sense always be put down to an even subdivision of $K_{2,3}$. (See Corollary~\ref{even}.)
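This parity obstruction is also easy to confirm by brute force. The following minimal Python sketch encodes an orientation of $K_{2,3}$ by one bit per edge (a $1$ meaning that the edge is directed from the side $\{a,b\}$ towards the side $\{1,2,3\}$) and traverses each of the three $4$-circuits in the sense $a\to i\to b\to j\to a$; the helper name \texttt{clockwise\_even\_count} is purely illustrative.
\begin{verbatim}
from itertools import product, combinations

# Edges of K_{2,3} between {a,b} and {1,2,3}; bit 1 = directed towards i.
edges = [('a', 1), ('a', 2), ('a', 3), ('b', 1), ('b', 2), ('b', 3)]

def clockwise_even_count(bits):
    # Count clockwise even circuits among the three 4-circuits a-i-b-j-a.
    o = dict(zip(edges, bits))
    count = 0
    for i, j in combinations([1, 2, 3], 2):
        # Traverse a -> i -> b -> j -> a and count edges agreeing with it.
        agree = o[('a', i)] + (1 - o[('b', i)]) + o[('b', j)] + (1 - o[('a', j)])
        if agree % 2 == 0:
            count += 1
    return count

# Every one of the 2^6 orientations gives an odd number of clockwise even circuits.
assert all(clockwise_even_count(b) % 2 == 1 for b in product((0, 1), repeat=6))
\end{verbatim}
Since each edge of $K_{2,3}$ lies on exactly two of the three even circuits, reversing any single edge changes the clockwise parity of exactly two of them; the number of clockwise even circuits therefore has the same parity under every orientation, and the check above confirms that this parity is odd.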
We were motivated to study this problem by our work on a characterisation of Pfaffian graphs. A {\it Pfaffian orientation} of a graph is an orientation under which every alternating circuit is clockwise odd, an alternating circuit being a circuit which is the symmetric difference of two perfect matchings. A {\it Pfaffian} graph is a graph that admits a Pfaffian orientation. In \cite{kasteleyn} Kasteleyn introduced a remarkable method for enumerating perfect matchings in Pfaffian graphs, reducing the enumeration to the evaluation of the determinant of the skew adjacency matrix of the Pfaffian directed graph. He has shown that all planar graphs are Pfaffian. However a general characterisation of Pfaffian graphs is still not known. For research in this direction see \cite{Li1,Li2,LiReFi,FiLi,RoSeTh}.
Our characterisation of the graphs that admit an orientation under which every even circuit is clockwise odd will be an easy consequence of our main theorem, which gives a more general characterisation of the graphs that have an orientation under which every even circuit has a prescribed (not necessarily odd) clockwise parity. Before we are able to state our main theorem we need some definitions.
\begin{defi} Let $G$ be a graph and $J$ an assignment of clockwise parities to the even circuits of $G$. An orientation of $G$ is said to be $J$-compatible if every even circuit of $G$ has the clockwise parity prescribed by $J$. Otherwise the orientation is $J$-incompatible. The graph $G$ is said to be $J$-compatible if $G$ admits a $J$-compatible orientation, and $J$-incompatible otherwise. \end{defi}
Our main theorem (Theorem~\ref{main}) characterises $J$-compatible graphs in terms of forbidden subgraphs. Before we are able to formulate it, we have to introduce two relevant graph operations. To this end we need the following fundamental definition and lemma.
\begin{defi} \label{intractabledefinition} Let $G$ be a graph and $J$ an assignment of clockwise parities to the even circuits of $G$. A set $\mathcal S$ of even circuits in $G$ is said to be {\it $J$-intractable} if the symmetric difference of the circuits in $\mathcal S$ is empty and the parity of the number of clockwise even circuits in $\mathcal S$ with respect to an orientation is unequal to the parity of the number of clockwise even circuits in $\mathcal S$ with respect to the assignment $J$. \end{defi}
Observe that the parity of the number of clockwise even circuits with respect to an orientation in a $J$-intractable set ${\mathcal S}$ does not depend on the orientation: since the symmetric difference of the circuits in ${\mathcal S}$ is empty, every edge lies in an even number of circuits of ${\mathcal S}$, and so the reorientation of a single edge changes the clockwise parity of an even number of circuits in ${\mathcal S}$.
\begin{lem} \label{intractable}
Let $G$ be a graph and $J$ an assignment of clockwise parities to the even circuits of $G$. Then $G$ is $J$-incompatible if and only if $G$ contains a $J$-intractable set of even circuits. \end{lem}
{\it Proof.} The fact that the existence of a $J$-intractable set implies that $G$ is $J$-incompatible follows from the remark preceding the lemma.
Suppose that $G$ is $J$-incompatible and orient $G$ arbitrarily. The existence of a $J$-compatible orientation of $G$ is equivalent to the solvability of a certain system of linear equations over the field $\mathbb{F}_2$. In these equations the variables correspond to the edges of the even circuits of $G$. For every even circuit $C$ there is a corresponding equation in which the sum of the variables corresponding to the edges of $C$ is $1$ if and only if the clockwise parity of $C$ is not that prescribed by $J$. A solution of this system is an assignment of zeros and ones to the edges of the even circuits of $G$. A $J$-compatible orientation of $G$ can be obtained from the fixed orientation by reorienting precisely those edges to which the solution assigns a $1$. The lemma now follows from the solvability criteria for systems of linear equations. \qed
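The argument in the proof is effective. As a minimal sketch (assuming that the even circuits of $G$ are given as lists of edge indices and that \texttt{mismatch[k]} records whether the $k$-th circuit fails, under the fixed initial orientation, to have the clockwise parity prescribed by $J$; the function name \texttt{j\_compatible} and the bitmask encoding are illustrative choices), the following Python code decides the solvability of the corresponding linear system over $\mathbb{F}_2$ by Gaussian elimination:
\begin{verbatim}
def j_compatible(num_edges, circuits, mismatch):
    # One augmented row per even circuit, stored as a bitmask;
    # bit number num_edges carries the right-hand side of the equation.
    rows = [sum(1 << e for e in set(C)) | ((b & 1) << num_edges)
            for C, b in zip(circuits, mismatch)]
    # Gaussian elimination over F_2: choose a pivot row for each edge
    # column, clear that column in the other rows, drop the pivot row.
    for col in range(num_edges):
        piv_idx = next((i for i, r in enumerate(rows) if (r >> col) & 1), None)
        if piv_idx is None:
            continue
        piv = rows.pop(piv_idx)
        rows = [r ^ piv if (r >> col) & 1 else r for r in rows]
    # The system is solvable (G is J-compatible) iff no row reduced to 0 = 1.
    return all(r != 1 << num_edges for r in rows)
\end{verbatim}
As in the proof, reorienting precisely the edges selected by any solution of this system turns the initial orientation into a $J$-compatible one.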
Let $G$ be a graph and let $H$ be a graph obtained from $G$ by the contraction of the two edges $e$ and $f$ incident on some vertex $v$ in $G$ of degree $2$. Thus $EH = EG - \{e,f\}$. We may describe $G$ as an {\it even vertex splitting} of $H$. (See Figure~\ref{split}.) Any even circuit $C_H$ in $H$ is the intersection with $EH$ of a unique even circuit $C$ in $G$. To any assignment $J$ of clockwise parities to the even circuits in $G$ there corresponds an assignment $J_H$ of clockwise parities to the even circuits in $H$ so that any even circuit $C_H$ in $H$ is assigned the same clockwise parity as $C$ in $G$. We say that $J_H$ is {\it induced} by $J$. If either $e$ or $f$ is incident on a vertex of degree $2$ other than $v$, then it is also true that the intersection with $EH$ of any even circuit $C$ in $G$ yields an even circuit $C_H$ in $H$. In this case any assignment $J_H$ of clockwise parities to the even circuits in $H$ corresponds to a unique assignment $J$ of clockwise parities to the even circuits in $G$ so that $J_H$ is the assignment induced by $J$. We then say that $J$ is also {\it induced} by $J_H$.
\begin{figure}
\caption{Split vertices to obtain an even vertex splitting.}
\label{split}
\end{figure}
Similarly let $H$ be obtained from $G$ by contracting a circuit $A$ of odd length. Thus $EH = EG - A$. Any even circuit $C_H$ in $H$ is the intersection with $EH$ of a unique even circuit $C$ in $G$: we have $C \cap EH = C_H$ and if $C \ne C_H$ then $C \cap A$ is the path of even length in $A$ joining the ends of the path $C_H$ in $G$. To any assignment $J$ of clockwise parities to the even circuits in $G$ there corresponds an assignment $J_H$ of clockwise parities to the even circuits in $H$ so that any even circuit $C_H$ in $H$ is assigned the same clockwise parity as $C$ in $G$. We say that $J_H$ is {\it induced} by $J$.
In the following lemma we summarise some basic facts:
\begin{lem} \label{basic}
Let $G$ be a graph and $J$ an assignment of clockwise parities to the even circuits of $G$.
(1) Let $H$ be a subgraph of $G$ and $J_H$ the restriction of $J$ to the even circuits of $H$. If $G$ is $J$-compatible then $H$ is $J_H$-compatible.
(2) Let $H$ be obtained from $G$ by contracting the two edges incident on a vertex of degree $2$. The assignment $J$ induces an assignment $J_H$ of clockwise parities to the even circuits in $H$. If $G$ is $J$-compatible then $H$ is $J_{H}$-compatible. If either of the two contracted edges is incident on another vertex of degree $2$ then $G$ is $J$-compatible if and only if $H$ is $J_H$-compatible.
(3) Let $H$ be obtained from $G$ by contracting a circuit of odd length. The assignment $J$ induces an assignment $J_H$ of clockwise parities to the even circuits in $H$. If $G$ is $J$-compatible then $H$ is $J_H$-compatible. \end{lem}
{\it Proof.} (2) Every $J_H$-intractable set of even circuits in $H$ induces a $J$-intractable set of even circuits in $G$. If either of the two contracted edges is incident on another vertex of degree two then every $J$-intractable set of even circuits in $G$ also induces a $J_H$-intractable set of even circuits in $H$.
(3) Every $J_H$-intractable set of even circuits in $H$ induces a $J$-intractable set of even circuits in $G$. (Note that if the symmetric difference of some even circuits in $H$ is empty, then the symmetric difference of the corresponding even circuits in $G$ is empty as well, for it is obvious that this symmetric difference is both an even cycle and a subset of the odd circuit $EG - EH$.) \qed
In the following three paragraphs we introduce the minimal $J$-incompatible graphs which we need in the formulation of our main theorem. We say that an assignment $J$ is {\it odd} or {\it even} if it assigns, respectively, an odd or an even clockwise parity to every even circuit.
Let $O_1=K_{2,3}$ and let $O_2$ be the graph we obtain from $K_4$ by subdividing once all edges incident on one fixed vertex. (See Figure~\ref{g1}.) Observe that $O_1$ and $O_2$ are $J$-incompatible with respect to the odd assignment $J$. In fact $O_{1}$ and $O_{2}$ are $J$-incompatible precisely for those assignments $J$ that prescribe an even number of even circuits of these graphs to be of even clockwise parity. For these assignments Lemma~\ref{basic}(3) shows that the $J$-incompatibility of $O_2$ can be attributed to the fact that $O_1$ is $J$-incompatible, since the contraction of the triangle in $O_2$ gives $O_1$.
Let $E_1$ be the graph consisting of two vertices and three edges joining them, let $E_2=K_4$ and let $E_3$ be the graph we obtain from $K_4$ by subdividing each edge in a fixed even circuit once. (See Figure~\ref{g1}.) Then $E_1$, $E_2$ and $E_3$ are $J$-incompatible with respect to the even assignment $J$. More generally $E_{1}$, $E_{2}$ and $E_{3}$ are $J$-incompatible precisely for those assignments $J$ that prescribe an odd number of even circuits to be of even clockwise parity. Again by Lemma~\ref{basic}(3) the fact that $E_2$ is $J$-incompatible can be put down to the fact that $E_1$ is $J$-incompatible, since the contraction of a triangle in $E_2$ gives $E_1$.
Consider the four graphs $\Delta_1, \Delta_2, \Delta_3, \Delta_4$ in Figure~\ref{3t}. Note that the last three are obtained from $\Delta_1$ by contracting edges. Each of these four graphs has exactly four even circuits and is $J$-incompatible if and only if $J$ prescribes an odd number of them to be clockwise even. This observation follows from Lemma~\ref{intractable} because in each of these graphs the set of all even circuits is the only non-empty dependent set of even circuits with respect to symmetric difference.
\begin{figure}
\caption{The graphs $\Delta_1, \Delta_2, \Delta_3, \Delta_4$.}
\label{3t}
\end{figure}
Let $G$ be a graph and let $G_0, G_1, \ldots, G_k$ be graphs such that $G_0 = G$ and, for each $i > 0$, the graph $G_i$ is an even vertex splitting of $G_{i-1}$. Then $G_k$ is said to be an {\it even splitting} of $G$. There is a special case in which, for each $i > 0$, $G_i$ can be obtained from $G_{i-1}$ by subdividing an edge twice. In this case we describe $G_k$ as an {\it even subdivision} of $G$. If no vertex of $G$ is of degree greater than $3$, then each even splitting of $G$ is also an even subdivision of $G$. If $H$ is an even splitting of $G$ and $J_H$ is an assignment of clockwise parities to the even circuits in $H$, then we may apply the definition of an induced assignment inductively to obtain an assignment $J$ of clockwise parities to the even circuits in $G$. This assignment is also said to be {\it induced} by $J_H$. By applying Lemma~\ref{basic}(2) inductively we find that if $H$ is $J_H$-compatible then $G$ is $J$-compatible. Thus if $G$ is $J$-incompatible then $H$ is $J_H$-incompatible. The converse also holds if $H$ is an even subdivision of $G$.
Now we formulate our main theorem.
\begin{theo} \label{main} Let $G$ be a graph and $J$ an assignment of clockwise parities to the even circuits of $G$. Then $G$ is $J$-incompatible if and only if $G$ contains an even splitting $H$ of one of $O_1$, $O_2$, $E_1$, $E_2$, $E_3$, $\Delta_1$, $\Delta_2$, $\Delta_3$, $\Delta_4$ with the following property: if $H$ is an even splitting of $O_i$ for some $i$ then $J$ prescribes an even number of clockwise even circuits to the three even circuits of $H$, if $H$ is an even splitting of $E_i$ for some $i$ then $J$ prescribes an odd number of clockwise even circuits to the three even circuits of $H$ and if $H$ is an even splitting of $\Delta_i$ for some $i$ then $J$ prescribes an odd number of clockwise even circuits to the four even circuits of $H$. \end{theo}
The ``if'' direction in the theorem is obvious by Lemma~\ref{intractable} and Lemma~\ref{basic}. We obtain the following immediate corollaries.
\begin{cor} \label{even} A graph $G$ does not admit an orientation under which every even circuit is clockwise odd if and only if it contains a subgraph which is, after the contraction of at most one odd circuit, an even subdivision of $K_{2,3}$. \end{cor}
{\it Proof.} For each even splitting of $E_1$, $E_2$, $E_3$, $\Delta_1$, $\Delta_2$, $\Delta_3$ or $\Delta_4$ contained in $G$ the odd assignment prescribes an even number of clockwise even circuits to its set of even circuits. \qed
\begin{cor} A graph $G$ does not admit an orientation under which every even circuit is clockwise even if and only if it contains a subgraph which is, after the contraction of at most one odd circuit, an even subdivision of $E_1$ or $E_3$. \end{cor}
{\it Proof.} For each even splitting of $O_1$ or $O_2$ contained in $G$ the even assignment prescribes an odd number of clockwise even circuits to its set of even circuits, for both graphs have three even circuits. Moreover for each even splitting of $\Delta_1$, $\Delta_2$, $\Delta_3$ or $\Delta_4$ contained in $G$ the even assignment prescribes an even number of clockwise even circuits to its set of even circuits, for these graphs each have four even circuits. \qed
\section{An arc decomposition theorem}
Circuits, non-empty paths and, more generally, subgraphs without isolated vertices are determined by their edge sets and are therefore identified with them in this paper. If $u$ and $v$ are vertices of a path $P$, then $P[u,v]$ denotes the subpath of $P$ that joins $u$ and $v$. If $G$ is a graph and $V'$ is a subset of the vertex set $VG$ of $G$ then $G[V']$ denotes the subgraph of $G$ spanned by $V'$. Similarly if $E'$ is a subset of the edge set $EG$ of $G$ then $G[E']$ denotes the subgraph of $G$ spanned by $E'$.
Let $H_1$ and $H_2$ be two sets of edges in $G$. An $H_1 \overline{H_2}$-arc is a path in $H_1$ which joins two distinct vertices in $VG[H_2]$ but does not have an inner vertex in $VG[H_2]$. A $G \overline{H_2}$-arc is also called an $\overline{H_2}$-arc.
\begin{defi} A graph $G$ without isolated vertices is said to be {\it even-circuit-connected} if for every bipartition $\{H_{1},H_{2}\}$ of $EG$ there exists an even circuit $C$ which meets $H_{1}$ and $H_{2}$. \end{defi}
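For example, a graph whose edge set is a single circuit of even length is even-circuit-connected; such graphs serve as the starting point $G_{0}$ of the arc decompositions introduced below.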
Every even-circuit-connected graph is $2$-connected. Indeed, suppose there exists a vertex $v$ such that $G-\{v\}$ is disconnected. Let $H_{1},H_{2},\dots,H_{k}$ be the components of $G-\{v\}$ and let $H'_{i} = G[VH_{i} \cup \{v\}]$ for each $i$. Then for every circuit $C$ of $G$ there exists an $i$, $1 \le i \le k$, with $C \subseteq EH_{i}'$. Hence no even circuit meets both parts of the bipartition $\{EH'_{1}, EG - EH'_{1}\}$ of $EG$, contradicting the even-circuit-connectedness of $G$.
First we prove a decomposition theorem on even-circuit-connected graphs. Note that in our characterisation of $J$-incompatible graphs in terms of forbidden subgraphs, even-circuit-connected graphs are the only graphs of interest since every $J$-incompatible graph that is minimal with respect to edge deletion is clearly even-circuit-connected.
Let $H$ be a subgraph of $G$ and $C$ an even circuit in $G$ which includes $EG - EH$ and meets $EH$. If there are $n$ $C\overline{H}$-arcs, then $G$ is said to be obtained from $H$ by an {\it $n$-arc adjunction}. An {\it arc decomposition} of an even-circuit-connected graph $G$ is a sequence $G_{0}, G_{1}, \dots, G_{k}$ of even-circuit-connected subgraphs of $G$ such that $EG_{0}$ is an even circuit, $G_{k} = G$ and, for every $i > 0$, $G_{i}$ is obtained from $G_{i-1}$ by an $n$-arc adjunction with $n=1$ or $n=2$. Moreover we assume that, for each $i$, every even circuit in $G_{i}$ which meets $EG_{i} - EG_{i-1}$ contains $EG_{i} - EG_{i-1}$. We shall show that every even-circuit-connected graph has an arc decomposition. For this purpose we need the following version of Menger's theorem.
\begin{theo} \label{menger} \cite{Ha} Let $S$ and $T$ be sets of at least $n$ vertices in an $n$-connected graph $G$. Then there are $n$ vertex disjoint paths joining vertices in $S$ to vertices in $T$ such that no inner vertex of these paths is in $S \cup T$. \end{theo}
\begin{lem} \label{decomp} Let $H$ be a non-empty proper even-circuit-connected subgraph of an even-circuit-connected graph $G$. Then $G$ has an even circuit $C$ that meets $EH$, admits just one or two $C \overline{H}$-arcs and has the property that $G[H \cup C]$ is even-circuit-connected. Moreover, if $G$ is bipartite or $H$ is not, then $C$ may be chosen to admit just one $C \overline{H}$-arc. \end{lem}
{\it Proof.} Suppose first that $G$ is bipartite. By hypothesis there is an edge $e$ in $EG - EH$. By the $2$-connectedness of $G$ and Theorem~\ref{menger} there are vertex disjoint paths $P$ and $Q$ in $EG - EH$ joining the ends of $e$ to two distinct vertices $u$ and $v$, respectively, in $VH$ such that neither $P$ nor $Q$ has an inner vertex in $VH$. Since $H$ is even-circuit-connected and therefore connected, a path $R$ in $H$ joins $u$ to $v$. Thus $P \cup \{e\} \cup Q \cup R$ is a circuit $C$ in $G$ meeting $EH$ (since $u \not= v$) and having $P \cup \{e\} \cup Q$ as its unique $C \overline{H}$-arc. Moreover $C$ is even since $G$ is bipartite.
Suppose therefore that $G$ is not bipartite. Again we may construct the circuit $C$ as in the previous case, and the proof is complete if $C$ is even. Suppose therefore that $C$ cannot be chosen to be even. Since $G$ is even-circuit-connected there exists an even circuit $D$ which meets $EH$ and $EG - EH$. Let $S$ and $T$ be two distinct $D \overline{H}$-arcs, joining $w$ to $x$ and $y$ to $z$, respectively. The fact that $D$ meets $EH$ implies $w \not= x$, $y \not=z$ and $\{w, x\} \not= \{y,z\}$. Let $U$ be a path in $H$ joining $w$ to $x$. Since $H$ is 2-connected there exist two vertex disjoint paths $V$ and $W$ in $H$ joining $y$ and $z$, respectively, to distinct vertices of $U$ and such that neither has an inner vertex in $VU$. Let $s$ and $t$ be the ends of $V$ and $W$, respectively, in $VU$. By assumption $S \cup U$ is an odd circuit and therefore it includes a path $X$, joining $s$ to $t$, such that $$
|X| \equiv |T| + |V| + |W| \qquad (\text{mod} \hspace{2mm} 2). $$ Then $T \cup V \cup W \cup X$ is a circuit $C$ of even length, and the only $C\overline{H}$-arcs are $T$ and possibly $S$.
Finally suppose that $H$ is not bipartite. Therefore $H$ has an odd circuit $O$. Since $H$ is even-circuit-connected and therefore $2$-connected, there are vertex disjoint paths $M$ and $N$ in $H$ joining $w$ and $x$, respectively, to distinct vertices $p$ and $q$ in $VO$ but having no inner vertex in $VO$. Since $O$ is odd, it includes a path $Y$ joining $p$ and $q$ such that $$
|Y| \equiv |M| + |N| + |S| \qquad (\text{mod} \hspace{2mm} 2). $$ Then $M \cup N \cup S \cup Y$ is an even circuit $C$ in $G$, and $S$ is the unique $C \overline{H}$-arc.
It remains to show that $G[H \cup C]=:H'$ is even-circuit-connected. Suppose that $\{K_{1},K_{2}\}$ is a bipartition of $EH'$. If $K_{1} \supseteq EH$ and $K_{2} \subseteq EH' - EH$ then $C$ is an even circuit which meets $K_{1}$ and $K_{2}$. Thus we may assume that $K_{l} \cap EH =:K'_{l} \not= \emptyset$ for $l=1,2$. By the assumption that $H$ is even-circuit-connected there exists an even circuit in $H$ which meets $K'_{1}$ and $K'_{2}$, and therefore $K_{1}$ and $K_{2}$. \qed
Lemma~\ref{decomp} shows that every even-circuit-connected graph $G$ has an arc decomposition $G_{0},G_{1},\dots,G_{n}$ with at most one $2$-arc adjunction. The single $2$-arc adjunction is necessary if and only if $G$ is not bipartite. In this case the arc decomposition can be chosen so that $G_{1}$ is obtained from $G_{0}$ by a $2$-arc adjunction as we see in the following theorem.
\begin{theo} \label{optimal} An even-circuit-connected graph $G$ has an arc decomposition $G_{0}, G_{1}, \dots, G_{k}$ such that $G_{i}$ is obtained from $G_{i-1}$ by a single arc adjunction for all $i>1$. \end{theo}
{\it Proof.} By Lemma~\ref{decomp} let $H_{0}, H_{1}, \dots, H_{n}$ be an arc decomposition of $G$ such that $H_{i}$ is obtained from $H_{i-1}$ by a 2-arc adjunction for some $i>1$ and $H_{j}$ is obtained from $H_{j-1}$ by a single arc adjunction for all $j \not= i$. Let $EH_{i} - EH_{i-1} = P \cup Q$, where $P$ and $Q$ are the two $H_{i} \overline{H_{{i-1}}}$-arcs. Let $P$ join $w$ to $x$ and $Q$ join $y$ to $z$. We distinguish cases according to whether $w,x,y,z$ are distinct.
{\it Case~1.} Suppose that $w,x,y,z$ are distinct. Since $H_{i-1}$ is 2-connected, we may assume by Theorem~\ref{menger}, the symmetry of $w$ and $x$, and the symmetry of $y$ and $z$ that $H_{i-1}$ has vertex disjoint paths $R$ joining $w$ to $y$ and $S$ joining $x$ to $z$. Similarly there are two vertex disjoint paths $T$ and $U$ in $H_{i-1}$ joining vertices in $VR$ to vertices in $VS$ but having no inner vertex in $VR \cup VS$. Let $T$ join vertex $a$ in $VR$ to vertex $b$ in $VS$ and let $U$ join vertex $c$ in $VR$ to vertex $d$ in $VS$. Without loss of generality we may assume that $c \in VR[a,y]$. Then $R[c,a] \cup T \cup S[b,d] \cup U$ is an even circuit $C$, since $H_{i-1}$ is bipartite. Note also that $P \cup R[w,a] \cup T \cup S[b,x]$ and $Q \cup R[y,c] \cup U \cup S[d,z]$ are odd circuits $A$ and $B$, respectively, for neither $G[H_{i-1} \cup P]$ nor $G[H_{i-1} \cup Q]$ is an even-circuit-connected graph.
Let $G_{0}: = G[C]$ and $D = P \cup R \cup Q \cup S$. Then $D$ is an even circuit since $D = A + B + C$. Furthermore observe that $G_{1}: = G[C \cup D]$ is a non-bipartite even-circuit-connected graph obtained from $G_{0}$ by a $2$-arc adjunction. Thus the assertion follows from Lemma~\ref{decomp}.
{\it Case~2.} In the remaining case observe that $w \not= x$, $y \not= z$ and $\{w,x\} \not= \{y,z\}$ for there exists an even circuit which includes $P \cup Q$ and meets $EH_{i-1}$. Thus we may assume that $x=y$
and $|\{w,x,z\}| = 3$ without loss of generality. If there are edges $e$ and $f$ in $EH_{i-1}$ joining $x$ to $w$ and $z$ respectively, then set $R=\{e\}$ and $S=\{f\}$. Otherwise, since $H_{i-1}$ is 2-connected, $x$ is joined in $H_{i-1}$ by an edge $g$ to a vertex $v$ not in $\{w,x,z\}$. Without loss of generality we assume that there are vertex disjoint paths $R'$ and $S$ joining $v$ to $w$ and $x$ to $z$, respectively. Set $R= R' \cup \{g\}$. By the $2$-connectedness of $H_{i-1}$ there exists a path $T$ in $H_{i-1} - \{x\}$ joining a vertex $a$ in $VR$ to a vertex $b$ in $VS$ but having no inner vertex in $VR \cup VS$. Then $C:=R[a,x] \cup S[x,b] \cup T$ is an even circuit for $H_{i-1}$ is bipartite. Set $$ D=P \cup R[w,a] \cup T \cup S[b,z] \cup Q. $$ Observe that $D$ is an even circuit since $D= C + P + R + Q + S$ and $P+R$ and $Q + S$ are odd circuits. Finally set $G_{0} = G [C]$ and $G_{1} = G[C \cup D]$. Then $G_{1}$ is a non-bipartite even-circuit-connected graph which can be obtained from $G_{0}$ by a 2-arc adjunction. Again the assertion follows from Lemma~\ref{decomp}. \qed
\begin{rem} In the previous proof $G_{1}$ is an even subdivision of one of the graphs in Figure~\ref{g1}. \end{rem}
\begin{figure}
\caption{$G_{1}$ in Theorem~\ref{optimal} is an even subdivision of one of these
graphs.}
\label{g1}
\end{figure}
\section{Proof of Theorem~\ref{main}}
For the rest of the paper let $G$ be a graph and $J$ an assignment of clockwise parities to the even circuits of $G$. Assume that $G$ is minimally $J$-incompatible with respect to the deletion of an edge. Let $G_{0}, G_{1}, \dots, G_{k}$ be an arc decomposition of $G$, where $G_{i}$ is obtained from $G_{i-1}$ by a single arc adjunction for $i >1$ and $G_{1}$ is isomorphic to an even subdivision of $O_{1}$, $E_{1}$ or one of the graphs in Figure~\ref{g1}. Since all possibilities for $G_{1}$ are either in the list of graphs in Theorem~\ref{main} or $J'$-compatible with respect to any assignment $J'$, we may assume that $k >1$. Let $H:=G_{k-1}$ and let $P$ be the unique $\overline{H}$-arc. Fix a $J$-compatible orientation of $H$ and extend it to an orientation of $G$ arbitrarily. Since $G$ is $J$-incompatible there exist two even circuits $A$ and $B$ including $P$ such that $A$ does not have the clockwise parity prescribed by $J$ but $B$ does. The following lemma shows that the even circuits $A$ and $B$ can be chosen with these properties so that $G[A \cup B]$ is fairly simple.
\begin{lem} \label{AB} The even circuits $A$ and $B$ can be chosen so that $G[A \cup B]$ is isomorphic to an even subdivision of $O_{1}$, $O_{2}$, $E_{1}$, $E_{2}$, $E_{3}$, $A_1$, $A_2$, $A_3$, $A_4$ or $A_5$. (See Figure~\ref{g1}.) \end{lem}
{\it Proof.} We assume that $A$ and $B$ have been chosen with the properties above so that $A \cup B$ is minimal.
Let $Q$ be the first $\overline{A} B$-arc we reach when traversing $B$ in a particular direction starting at $P$ and let $R$ be the first $\overline{A} B$-arc we reach when traversing $B$ in the opposite direction, again starting at $P$. If there exists an even circuit in $A \cup Q$ which includes $P \cup Q$ or an even circuit in $A \cup R$ which includes $P \cup R$ then let $B'$ be this even circuit. Otherwise there exists an even circuit $B'$ in $A \cup Q \cup R$ which includes $P \cup Q \cup R$.
First we show that $B'$ has the clockwise parity prescribed by $J$. Suppose the contrary. The minimality of $A \cup B$ implies $A \cup B = B' \cup B$ and therefore $A + B' \subseteq B$. Furthermore $A + B'$ is non-empty and the union of circuits. Therefore $A + B' = B$, a contradiction to $P \subseteq B$.
If there is a unique $\overline{A} B'$-arc then $G[A \cup B']$ is isomorphic to an even subdivision of either $O_{1}$ or $E_{1}$. Otherwise $G[A \cup B']$ is isomorphic to an even subdivision of $O_{2}$, $E_{2}$, $E_{3}$, $A_1$, $A_2$, $A_3$, $A_4$ or $A_5$. \qed
If $G[A \cup B]$ is an even subdivision of $O_{1}$, $O_{2}$, $E_{1}$, $E_{2}$ or $E_{3}$ then we have proved Theorem~\ref{main}, for in these cases $A + B$ is an even circuit with the clockwise parity prescribed by $J$ since $A+B \subseteq H$. Thus we may assume that $G[A \cup B]$ is an even subdivision of $A_1$, $A_2$, $A_3$, $A_4$ or $A_5$.
The symmetric difference $A + B$ is the union of two edge disjoint odd circuits $U$ and $W$ in $H$. Since $H$ is even-circuit-connected and therefore $2$-connected there exist vertex disjoint paths $S$ and $T$ in $H$ which join vertices of $U$ to vertices of
$W$ but have no inner vertex in $VU \cup VW$. Note that $|VU \cap VW| \le 1$ and if $|VU \cap VW|=1$ we may choose $S$ and $T$ so that $VS = VU \cap VW$ and $VT \cap VU \cap VW = \emptyset$. Note that $G[S \cup T \cup U \cup W]$ contains exactly two even circuits $C$ and $D$, and that $C + D = U + W$. In the following lemma we show that $G$ is spanned by the even circuits $A$ and $B$ and the paths $S$ and $T$. (See Figure~\ref{ABST}.)
\begin{figure}
\caption{$G$ is generated by the even circuits $A$ and $B$ and the two paths
$S$ and $T$. The paths $S$ and $T$ are dotted because they may intersect $X$ and $Y$.}
\label{ABST}
\end{figure}
\begin{lem} \label{key} $G=G[A \cup B \cup S \cup T]$. \end{lem}
{\it Proof.} The set $\{A,B,C,D\}$ of even circuits is $J$-intractable, for $C$ and $D$ both have the clockwise parity prescribed by $J$ since $C \cup D \subseteq H$. The assertion follows by the minimality of $G$. \qed
The set $(A \cup B) - (U \cup W)$ is the union of two vertex disjoint paths $X$ and $Y$ if $VU \cap VW = \emptyset$. In this case let $X$ join vertex $w$ in $U$ to vertex $y$ in $W$ and let $Y$ join vertex $x$ in $U$ to vertex $z$ in $W$. If $VU \cap VW \not= \emptyset$ then let $X$ be the unique path in $(A \cup B) - (U \cup W)$ and $VY = VU \cap VW$. Again we let $X$ join vertex $w$ in $U$ to vertex $y$ in $W$, but we also write $VU \cap VW = \{x\}$ and $z = x$ in this case.
(See Figure~\ref{ABST}.)
In our next lemma we show that it is impossible that the first $\overline{A \cup B}$-arcs of $S$ and $T$, if we traverse the paths from their vertex in $U$ to their vertex in $W$, both join vertices in $VX$ to vertices in $VY$. (See Figure~\ref{ABSITI}.)
\begin{figure}
\caption{Situation in Lemma~\ref{2}.}
\label{ABSITI}
\end{figure}
\begin{lem} \label{2} There exist no vertex disjoint $\overline{A \cup B}$-arcs $S'$ and $T'$ such that $S'$ joins a vertex $a$ in $VX$ to a vertex $b$ in $VY$ and $T'$ joins a vertex $c$ in $VY[x,b] - \{b\}$ to a vertex $d$ in $VX[a,y]-\{a\}$. \end{lem}
{\it Proof.} Clearly the existence of these arcs is impossible if $VU \cap VW \not= \emptyset$, since $VU \cap VW \not= \emptyset$ implies $VY = \{x\}$.
Suppose therefore that $VU \cap VW = \emptyset$, and that $S'$ and $T'$ exist. Let $E$ and $F$ be the two even circuits in
\[ U \cup W \cup X[w,a] \cup X[d,y] \cup Y[x,c] \cup Y[b,z] \cup S' \cup T'. \] Thus $E + F = U + W = C + D$. If it is not possible to orient $P$ so that $E$ and $F$ have the clockwise parity prescribed by $J$ then $\{C,D,E,F\}$ is an intractable set of circuits. The union of these circuits does not include $X[a,d] \cup Y[c,b]$, for otherwise $S \cup T$ would include the circuit $Z = S' \cup T' \cup X[a,d] \cup Y[c,b]$ and this is impossible since $S$ and $T$ are vertex disjoint paths. We now have a contradiction to the minimality of $G$.
Therefore it is possible to orient $P$ so that $E$ and $F$ have the clockwise parity prescribed by $J$ and still, by the symmetry of $A$ and $B$, we may assume that $A$ does not have the prescribed clockwise parity but $B$ does. Consequently $\{A,B,E,F\}$ is an intractable set of even circuits. By the minimality of $G$ this implies that $G=G[A \cup B \cup S' \cup T']$.
Suppose that $Z$ is an even circuit. Without loss of generality we may assume that $A+E=Z$, so that $B+F=Z$. If $Z$ has the clockwise parity prescribed by $J$ then $\{A,Z,E\}$ is a $J$-intractable set of even circuits; otherwise $\{B,Z,F\}$ is a $J$-intractable set of even circuits. This is a contradiction to the minimality of $G$ for neither $U \cup W \subseteq A \cup E$ nor $U \cup W \subseteq B \cup F$. We conclude that the circuit $Z$ is odd.
Let $M$ be the unique even circuit that contains $T'$ and is edge disjoint with $S' \cup W$, let $I$ be the unique even circuit that contains $T'$ and is edge-disjoint with $S' \cup U$, let $K$ be the unique even circuit that contains $S'$ and is edge-disjoint with $T' \cup W$ and, finally, let $L$ be the unique even circuit that contains $S'$ and is edge-disjoint with $T' \cup U$. Note that $M + K \ne Z$ since $Z$ is odd. Similarly $I + L \ne Z$. Therefore, since $M+K+I+L \subseteq U \cup W$ and $M+K+L+I$ is an even cycle, we have $M+K+L+I = U \cup W$, for if $M + K + L + I = \emptyset$ then $M + K = I + L = Z$.
Consequently we may assume that $A=M+I$ and $E=M+L$, so that $B=K+L$ and $F=I+K$. None of $\{A,M,I\}$, $\{B,K,L\}$, $\{E,M,L\}$, $\{F,I,K\}$ is a $J$-intractable set of even circuits by the minimality of $G$ for $S' \not\subseteq M \cup I$, $T' \not\subseteq K \cup L$, $Y[c,b] \not\subseteq M \cup L$ and $X[a,d] \not\subseteq I \cup K$. Therefore, since $\{A,B,E,F\}$ is a $J$-intractable set of even circuits but $\{A,M,I\}$ is not, $\{M,I,B,E,F\}= \{A,B,E,F\} + \{A,M,I\}$ is a $J$-intractable set of even circuits. It follows that $\{M,I,K,L,E,F\}$ is a $J$-intractable set of even circuits since $\{B,K,L\}$ is not, and therefore either $\{E,M,L\}$ or $\{F,I,K\}$ must be a $J$-intractable set of even circuits, a contradiction. \qed
Observe that if $S$, respectively $T$, does not have an $\overline{A \cup B}$-arc and is therefore equal to either $X$ or $Y$ then $T$, respectively $S$, has an $\overline{A \cup B}$-arc. Since $S$ and $T$ are vertex disjoint this arc does not join a vertex in $VX$ to vertex in $VY$. This fact and Lemma~\ref{2} imply that, if we traverse $S$ and $T$ from their vertex of $U$ to their vertex of $W$, then either the first $\overline{A \cup B}$-arc of $S$ or the first $\overline{A \cup B}$-arc of $T$ does not join a vertex in $VX$ to a vertex in $VY$. Let $R$ denote this arc for the rest of the paper. The next lemma shows that one end of $R$ is in $VX \cup VY$.
\begin{lem} There is no $\overline{A \cup B}$-arc that joins a vertex in $VU - \{w,x\}$ to a vertex in $VW - \{y,z\}$. \end{lem}
{\it Proof.} First consider the case that $VU \cap VW = \emptyset$. Let $Q$ be an $\overline{A \cup B}$-arc that joins a vertex in $VU - \{w,x\}$ to a vertex in $VW - \{y,z\}$.
Let $E$ and $F$ be the two even circuits in $U \cup W \cup Q \cup Y$, where we assume without loss of generality that $P \subseteq X$. Since $E \cup F \subseteq H$, $E$ and $F$ are of the prescribed clockwise parity and $\{A , B , E , F\}$ is a $J$-intractable set of even circuits. Therefore $G=G[A \cup B \cup Q]$. Observe that if $G[A \cup E]=G$ or $G[B \cup F] = G$ then $G[A \cup F] \not= G$ and $G[B \cup E] \not= G$. By the symmetry of $E$ and $F$ we therefore assume that $G[A \cup E] \not= G$ and $G[B \cup F] \not= G$.
The symmetric difference $A + E$ is an even circuit in $U \cup W \cup Q \cup X$. Depending on the clockwise parity of $A+E$ either $\{A,E,A+E\}$ or $\{A+E,B,F\}$ is a $J$-intractable set of even circuits and therefore either $G=G[A \cup E]$ or $G=G[B \cup F]$, a contradiction.
Now we consider the case that $VU \cap VW \not= \emptyset$ and, therefore, $x=z$ and $P \subseteq X$. Let $E$ and $F$ be the two even circuits in $U \cup W \cup Q$. Since $E \cup F \subseteq H$, $E$ and $F$ have the clockwise parity prescribed by $J$ and thus $\{A,B,E,F\}$ is a $J$-intractable set of circuits. There exists at least one even circuit $M$ that includes $Q$ and $X$. Moreover either $A+E=M$ or $A+F=M$. Without loss of generality let $A+E=M$. Then either $\{A,E,M\}$ or $\{B,F,M\}$ is a $J$-intractable set of circuits, which is a contradiction to the minimality of $G$ for $U \cup W \not\subseteq A \cup E$ and $U \cup W \not\subseteq B \cup F$. \qed
In the following lemma we show that both ends of $R$ are either in $VX$ or in $VY$.
\begin{figure}
\caption{Situation in Lemma~\ref{f1}.}
\label{1}
\end{figure}
\begin{lem} \label{f1} There is no $\overline{A \cup B}$-arc that joins a vertex in $(VU \cup VW) - \{w,x,y,z\}$ to a vertex in $VX \cup VY$. \end{lem}
{\it Proof.} Without loss of generality we assume that $Q$ is an $\overline{A \cup B}$-arc that joins a vertex in $VU - \{w,x\}$ to a vertex $a$ in $VX - \{w\}$. (See Figure~\ref{1}.)
Let $E$ and $F$ be the two even circuits in $U \cup W \cup Q \cup X[a,y] \cup Y$. If it is not possible to orient $P$ so that $E$ and $F$ have the clockwise parity prescribed by $J$ then $\{C,D,E,F\}$ would be a $J$-intractable set of circuits. This conclusion would be a contradiction to the minimality of $G$: $X[w,a]$ is not contained in $C \cup D \cup E \cup F$ since it cannot be contained in $S \cup T$ because $S$ and $T$ are vertex disjoint.
Therefore we may assume that $E$ and $F$ have the prescribed clockwise parity and $\{A,B,E,F\}$ is a $J$-intractable set of even circuits. Either $A+E$ or $A+F$ is equal to the unique even circuit in $U \cup Q \cup X$. Without loss of generality we assume that $A+E$ is equal to this even circuit. Then either $\{A,E,A+E\}$ or $\{A+E, B,F\}$ is a $J$-intractable set of even circuits, which is a contradiction to the minimality of $G$ for neither $W \subseteq A \cup E$ nor $W \subseteq B \cup F$. \qed
Thus $R$ joins either two vertices in $VX$ or two vertices in $VY$, as in Figure~\ref{ABQ}. In the following lemma we show that $G$ is generated by the even circuits $A$ and $B$ and by the arc $R$ and that $G$ is an even splitting of $\Delta_1$, $\Delta_2$, $\Delta_3$ or $\Delta_4$.
\begin{figure}
\caption{Situation in Lemma~\ref{final}}
\label{ABQ}
\end{figure}
\begin{lem} \label{final} If there is an $\overline{A \cup B}$-arc that joins two vertices in $VX$ or two vertices in $VY$, then $G$ is an even splitting of $\Delta_1$, $\Delta_2$, $\Delta_3$ or $\Delta_4$ and $J$ is an assignment which prescribes an odd number of clockwise even circuits to the even circuits of $G$. \end{lem}
{\it Proof.} Let $Q$ be an $\overline{A \cup B}$-arc which joins two vertices $a$ and $b$ in $VX$. We assume that $a \in VX[w,b]$. (See Figure~\ref{ABQ}.)
First we show that $G=G[A \cup B \cup Q]$. Let $E$ and $F$ be the two even circuits in $U \cup W \cup X[w,a] \cup Q \cup X[b,y] \cup Y $. As in the proofs of the previous lemmas we may orient $P$ such that $E$ and $F$ have the prescribed clockwise parity. Otherwise $G=G[C \cup D \cup E \cup F]$ and we reach a contradiction: the fact that $S$ and $T$ are vertex disjoint implies that $Q \cup X[a,b] \not\subseteq S \cup T$, so that $X[a,b] \not\subseteq C \cup D \cup E \cup F$. We conclude that $\{A,B,E,F\}$ is a $J$-intractable set of even circuits.
Suppose $Q \cup X[a,b]$ is an even circuit $M$. Then either $A+E=M$ or $A+F=M$, and without loss of generality we assume $A+E=M$. Consequently, either $\{A,E,M\}$ or $\{B,F,M\}$ is a $J$-intractable set of even circuits. We now have a contradiction since neither $U \cup W \subseteq A \cup E$ nor $U \cup W \subseteq B \cup F$.
Thus $Q \cup X[a,b]$ is odd and $G$ is an even splitting of $\Delta_1$, $\Delta_2$, $\Delta_3$ or $\Delta_4$. \qed
\end{document}
\begin{document}
\maketitle
\begin{abstract} Computable reducibility is a well-established notion that allows one to compare the complexity of various equivalence relations over the natural numbers. We generalize computable reducibility by introducing degree spectra of reducibility and bi-reducibility. These spectra provide a natural way of measuring the complexity of reductions between equivalence relations. We prove that any upward closed collection of Turing degrees with a countable basis can be realised as a reducibility spectrum or as a bi-reducibility spectrum. We also show that there is a reducibility spectrum of computably enumerable equivalence relations with no countable basis and a reducibility spectrum of computably enumerable equivalence relations which is downward dense and thus has no basis. \end{abstract}
\section{Introduction}
Computable reducibility is a long-standing notion that has proven to be fruitful for ranking the complexity of equivalence relations over the set $\omega$ of natural numbers. The following definition is the relativised version of the one commonly considered in the literature.
\begin{definition} Let $R$ and $S$ be two equivalence relations on $\omega$, and let $\mathbf{d}$ be a Turing degree. $R$ is \emph{$\mathbf{d}$-computably reducible} to $S$ (notation: $R \leq_{\mathbf{d}} S$), if there is a $\mathbf{d}$-computable function $f$ such that, for all natural numbers $x,y$, the following holds \[ x R y \Leftrightarrow f(x)Sf(y). \] If $R \leq_{\mathbf{d}} S$ and $S \leq_{\mathbf{d}} R$, we write $R\equiv_{\mathbf{d}} S$, and we say that $R$ and $S$ are \emph{bi-reducible by $\mathbf{d}$}. \end{definition} The case $\mathbf{d}=\mathbf{0}$ has been thoroughly explored. The standard underlying intuition is that, if $R\leq_{\mathbf{0}} S$ via some $f$, then $S$ is at least as complex as $R$,
since all that is needed to decide whether $x$ and $y$ are $R$-equivalent is to know if $f(x)$ and $f(y)$ are $S$-equivalent. A main benefit of computable reducibility is that it provides a single formal setting for classifying countable equivalence relations, even if they arise from very different contexts. For instance, Miller III~\cite{miller1971group} showed that there is a finitely presented group such that all computably enumerable equivalence relations are computably reducible to its word problem. As another example (this one concerning relations that are not even hyperarithmetical), Fokina, S.~Friedman, Harizanov, Knight, McCoy, and Montalb\'an~\cite{Fokina:12b} proved that the isomorphism relation on each of several classes of computable structures (e.g., graphs, trees, torsion abelian groups, fields of characteristic $0$ or $p$, linear orderings) is complete among $\Sigma^1_1$ equivalence relations.
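For a concrete toy example, let $R$ be congruence modulo $2$ and $S$ congruence modulo $4$ on $\omega$. The computable function $f(x)=2x$ satisfies $x R y \Leftrightarrow 2x \equiv 2y \pmod{4} \Leftrightarrow f(x) S f(y)$, so $R \leq_{\mathbf{0}} S$: no oracle is needed to carry out this reduction.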
More generally, researchers studied computable reducibility for decades, and approached it from several different perspectives, often unveiling significant connections with other fields, such as descriptive set theory and proof theory.
Ershov~\cite{Ershov:77} initiated this research program
in a category-theoretic fashion, while dealing with the theory of numberings
(see~\cite{Ershov:survey} for a survey in English). Following Ershov, one can define the category of equivalence relations on $\omega$, in which a morphism from $R$ to $S$ is a function $\mu : \omega/{_R} \rightarrow \omega/{_S}$ such that there is a computable function $f$ with $\mu([x]_{R}) = [f(x)]_{S}$. So morphisms are induced by computable functions $f$ such that $x R y \Rightarrow f(x) S f(y)$, and thus $R\leq_{\mathbf{0}} S$ holds if and only if there is a monomorphism from $R$ to $S$.
Scholars continued Ershov's work by pursuing different goals, such as studying provable equivalence of formal systems (see, e.g.,~\cite{Visser:80,Montagna:82,bernardi1981,Bernardi:83}). This proof-theoretic motivation explains why they focused on the $\Sigma^0_1$ case: the set of theorems of any computably axiomatizable theory is obviously a computably enumerable set. In the Russian literature, c.e.\ equivalence relations are often called \emph{positive}, but
as in~\cite{Gao:01} and many other contributions, we adopt the acronym \emph{ceers} for referring to them. The interested reader can see Andrews, Badaev, and Sorbi~\cite{andrews2017survey} for a nice and up-to-date survey on ceers, with a special focus on \emph{universal} ceers, i.e., ceers to which all others can be computably reduced.
Computable reducibility may also be regarded as the computable analogue of \emph{Borel reducibility}, a central object of study of modern descriptive set theory. Introduced by H.~Friedman and Stanley~\cite{friedman1989borel}, the notion of Borel reducibility allows one to compare the complexity of equivalence relations on Polish spaces, such as the Cantor space $2^\omega$ (see~\cite{gao2008invariant,kanoveui2008borel}). This is particularly meaningful for calculating the complexity of different classification problems, i.e., problems associated with the task of characterizing some collection of mathematical objects in terms of invariants (up to isomorphism, or some other nice equivalence relation expressing structural resemblance). Calvert, Cummins, Knight, and S.~Miller~\cite{calvert2004comparing} introduced an effective version of this study, by considering effective transformations between classes of structures. Another possible approach is that of regarding computable reducibility itself as a computable counterpart of Borel reducibility, where the former naturally applies to equivalence relations with domain $\omega$ and the latter refers to equivalence relations on $2^\omega$ (or similar spaces). This interpretation appears, e.g., in~\cite{Gao:01,Coskey:12,fokina2010effective,miller2016finitary}. In particular, Coskey, Hamkins, and R.~Miller~\cite{Coskey:12} investigated equivalence relations on (indices of) c.e.\ sets, or on families of c.e.\ sets, that mirror classical combinatorial equivalence relations of crucial importance for Borel theory.
\subsection{Reducibility and bi-reducibility spectra} Our motivating question is the following: \emph{given two arbitrary equivalence relations $R$ and $S$, how much information is needed to compute possible ways of reducing $R$ to $S$?}
As our main tool, we introduce the following spectra of Turing degrees, that stand in analogy with many similar notions from computable structure theory.
\begin{definition} Let $(R, S)$ be a pair of equivalence relations. The \emph{degree spectrum of reducibility} of $(R,S)$ (or, the \emph{reducibility spectrum} of $(R,S)$) is the following set of Turing degrees \[
\Spec_{\Rightarrow}(R, S)=\set{\mathbf{d} \mbox{ $|$ $R \leq_{\mathbf{d}} S$} }. \]
Similarly, we define the \emph{degree spectrum of bi-reducibility} of $(R,S)$ (or, the \emph{bi-reducibility spectrum} of $(R,S)$) as \[
\Spec_{\Leftrightarrow}(R, S)=\set{\mathbf{d} \mbox{ $|$} R\equiv_\mathbf{d} S}. \] \end{definition}
\begin{definition} The \emph{degree of reducibility} of $(R,S)$ is the least degree of $\Spec_{\Rightarrow}(R, S)$ if any such degree exists. The \emph{degree of bi-reducibility} of $(R,S)$ is defined similarly. \end{definition}
Different degree spectra have been considered in the literature. The \emph{isomorphism spectrum} of a structure $\mathcal{A}$ (in symbols: $\Spec_{\cong}(\mathcal{A})$) is classically defined as the collection of all Turing degrees that compute a copy of $\mathcal{A}$ and is the most common way to measure the computational complexity of $\mathcal{A}$. Isomorphism spectra have been widely investigated (see, e.g.,~\cite{knight_degrees_1986, kalimullin2008almost, andrews2016complements, harizanov2007spectra}; they are also called ``degree spectra'' or ``spectra of structures'': our terminology emphasizes the difference with the other spectra discussed below). In recent years researchers considered also alternative degree spectra, such as \emph{theory spectra}~\cite{andrews_spectra_2015}, \emph{$\Sigma_n$-spectra}~\cite{fokina_degree_2016}, \emph{bi-embeddability spectra}~\cite{fokina_bi-embeddability_2016}, and \emph{elementary bi-embeddability spectra}~\cite{rossegger_elementary_2018}. The notion of \emph{computable categoricity} gives rise to the \emph{categoricity spectrum}, that measures how difficult it is to compute isomorphisms between computable copies of a given structure (see~\cite{fokina2010degrees}, where the notion was introduced). Our perspective is to some extent close to the latter spectra, since we similarly fix the structures involved and then analyse the information needed to witness a possible reduction between them.
The main problem when dealing with a given class of spectra is that of characterizing which kind of information they can encode, in terms of the classes of Turing degrees that can be realised by them. Even if $\mathcal{A}$ is a familiar structure, $\Spec_{\cong}(\mathcal{A})$ (or some other possible spectrum of $\mathcal{A}$) might be very complicated: for example, the isomorphism spectrum of a linear order $L$ is a \emph{cone} (i.e., $\Spec_{\cong}(L)=\set{\mathbf{d}: \mathbf{c}\leq \mathbf{d}}$, for some $\mathbf{c}$) if and only if $L$ is computably presentable, as proved by Richter~\cite{richter_degrees_1981}.
In the present paper we aim to compare reducibility and bi-reducibility spectra with other spectra considered in the literature.
\begin{definition} Let $\mathbf{\mathcal{A}}$ be a set of Turing degrees. A set of Turing degrees $\mathbf{\mathcal{B}}$ is a \emph{basis} of $\mathbf{\mathcal{A}}$ if \begin{enumerate} \item all elements of $\mathbf{\mathcal{B}}$ are Turing-incomparable, \item and $\mathbf{\mathcal{A}}=\set{\mathbf{d}: (\exists \mathbf{b} \in \mathbf{\mathcal{B}})(\mathbf{b}\leq \mathbf{d})}$. \end{enumerate} \end{definition}
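For instance (these are just simple illustrations of the definition), a single degree $\mathbf{c}$ is a basis of the cone $\set{\mathbf{d}: \mathbf{c}\leq \mathbf{d}}$, and, more generally, any antichain of Turing degrees is a basis of its own upward closure.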
In Section $2$ we prove that any upward closed set of Turing degrees with a countable basis can be realised as a reducibility spectrum. In Section $3$ we show that there is a reducibility spectrum with no countable basis, while in Section $4$ we show that there is a reducibility spectrum having no basis at all. In Section $5$ we partially extend these results to the case of bi-reducibility spectra.
\subsection{Notation and terminology} For all $X\subseteq \omega$, we denote $\omega \smallsetminus X$ by $\overline{X}$. All our equivalence relations have domain $\omega$. We denote by $[x]_R$ the $R$-equivalence class of any given $x$. We denote by $c_R$ the number of equivalence classes of $R$. An equivalence relation $R$ is $n$-\emph{bounded} if all its equivalence classes have size at most $n$; $R$ is \emph{bounded} if it is $n$-bounded for some natural number $n$. The following basic equivalence relations will appear many times:
\begin{itemize} \item $\texttt{Id$_1$}$ is the equivalence relation consisting of just one equivalence class, i.e., $x\texttt{Id$_1$} y$, for all $x,y$; \item $\texttt{Id}$ is the equivalence relation consisting of all singletons, i.e., $x\texttt{Id} y$ if and only if $x=y$. \end{itemize}
Our computability theoretic notions are standard, and as in~\cite{Soare:Book}. In particular, recall how the principal function of a given infinite set $A$ is defined. Let $a_0<a_1<\ldots$ be the ascending sequence of all elements of $A$. \emph{The principal function} of $A$ is the following injective function: $p_A(x)=a_x$. It is immediate that $p_A$ is computable in $A$.
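For instance, if $A=\set{3,5,8,13,\ldots}$ (listed in increasing order), then $p_A(0)=3$, $p_A(1)=5$, $p_A(2)=8$, and so on; to compute $p_A(x)$ it suffices to search, using $A$ as an oracle, for the $(x+1)$st element of $A$.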
\section{Classes of Turing degrees realised by reducibility spectra} For the sake of exposition, we begin by focusing on reducibility spectra. The discussion about bi-reducibility spectra is postponed to the last section.
It is easy to see from the definition of reducibility spectra that they are either upwards closed or empty.
\begin{proposition}\label{fact:upward-density} Let $(R,S)$ be any pair of equivalence relations. Then $\Spec_{\Rightarrow}(R, S)$ is either empty or upward closed and, if empty, then $c_R > c_S$. \end{proposition}
\begin{proof} If $c_R >c_S$, then obviously there can be no reduction from $R$ into $S$, since any function would necessarily map distinct equivalence classes of $R$ into one single class of $S$.
Next, assume $c_R \leq c_S$. Let $\mathbf{a}=\deg(R\oplus S)$, and define $f$ to be the $\mathbf{a}$-computable function such that $f(0)=0$ and
\[ f(x+1)= \begin{cases} f(y) &\text{if $y$ is the least number $\leq x$ such that $y\in [x+1]_R$,}\\ \mu z[(\forall y \leq x)(z\notin[f(y)]_S)] &\text{otherwise}. \end{cases} \]
The hypothesis that $c_R \leq c_S$ holds guarantees that any fresh equivalence class of $R$ can be mapped into a fresh equivalence class of $S$. Thus, $f$ is well-defined and $R\leq_{\mathbf{a}}S$ via $f$. This proves that $\Spec_{\Rightarrow}(R, S)$ is not empty. For the upward closure just notice that if $R$ $\mathbf{a}$-computably reduces to $S$, then the reduction holds also for any $\mathbf{d}$ such that $\mathbf{d}\geq \mathbf{a}$. \end{proof}
\begin{remark} To avoid empty spectra, in what follows we assume that all our equivalence relations have infinitely many equivalence classes. \end{remark}
Although our analysis of reducibility spectra will focus mainly on positive results, we begin with a negative one: standard set-theoretic considerations show that reducibility spectra do not coincide with the class of all upward closed collections of Turing degrees.
\begin{proposition} There exists an upward closed collection of Turing degrees that can not be realised as a reducibility spectrum. \end{proposition}
\begin{proof} On the one hand, there are $2^{\aleph_0}$ many reducibility spectra. This follows immediately from the fact that any such spectrum is associated with a pair of equivalence relations with domain $\omega$. On the other hand, there are $2^{2^{\aleph_0}}$ many upward closed collections of Turing degrees. To see this, recall that there are $2^{\aleph_0}$ minimal degrees with respect to Turing reducibility, and notice that any set of minimal degrees forms the basis of an upward closed collection of degrees, with distinct sets of minimal degrees generating distinct collections. \end{proof}
Proposition~\ref{fact:upward-density} implies that any pair $(R,S)$ has a degree of reducibility if and only if its reducibility spectrum is a cone. The next result says that, for any Turing degree~$\mathbf{d}$, there is a pair of equivalence relations $(R,S)$ that encodes $\mathbf{d}$ as its degree of reducibility. We begin by introducing a convenient way of coding sets of numbers by equivalence relations.
\begin{definition} Let $A_0,\ldots,A_{n-1}$ be a collection of $n$ pairwise disjoint sets of numbers. An equivalence relation $R_{A_0,\ldots,A_{n-1}}$ is \emph{generated} by $A_0,\ldots,A_{n-1}$ if \[ xR_{A_0,\ldots,A_{n-1}}y \Leftrightarrow x=y \vee (\exists i < n)(x,y \in A_i ). \] \end{definition}
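For instance, if $E$ is the set of even numbers, then the $1$-dimensional equivalence relation $R_E$ has exactly one infinite equivalence class, consisting of all even numbers, while every odd number forms a singleton.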
This way of representing sets by equivalence relations is common in the literature (see, for instance, the characterization of \emph{set-induced} degrees of equivalence relations in Ng and Yu~\cite{ngyu}). Gao and Gerdes~\cite{Gao:01} call equivalence relations generated by $n$ sets \emph{$n$-dimensional}. We adopt their terminology when convenient. A special reason of interest for $1$-dimensional ceers is due to the fact that the interval $[\mathbf{0_1},\mathbf{0'_1}]$ of the $1$-degrees is embeddable into the degree structure generated by computable reducibility on ceers (computably enumerable equivalence relations). As a corollary, the first-order theory of ceers is undecidable, as is shown in~\cite{Andrews:14}.
The $1$-dimensional equivalence relations can easily encode any given Turing degree as a degree of reducibility.
\begin{proposition}\label{fact:onecone} For any Turing degree $\mathbf{a}$, there is a pair of equivalence relations $(R,S)$, such that $\Spec_{\Rightarrow}(R,S)=\set{\mathbf{d}: \mathbf{a} \leq \mathbf{d}}$. \end{proposition}
\begin{proof} Let $A\subseteq \omega$ be co-infinite and such that $\deg(A)=\mathbf{a}$. Let $a$ be an element of $A$. Consider $(R_A, \texttt{Id})$. We define $h$ as the following $\mathbf{a}$-computable function, for all~$x$, \[ h(x)=\begin{cases} a &\text{ if $x\in A$},\\ x &\text{ otherwise}. \end{cases} \]
We have that $h$ $\mathbf{a}$-computably reduces $R_A$ to $\texttt{Id}$. Thus, $\mathbf{a}\in \Spec_{\Rightarrow}(R_A,\texttt{Id})$. Now, suppose that $f$ $\mathbf{d}$-reduces $R_A$ to $\texttt{Id}$. It follows that there is some $z$ such that, for all $x$, $f(x)=z$ if and only if $x \in A$. This means that $f$ computes $A$, and therefore $\mathbf{d}\geq \mathbf{a}$. So, $\Spec_{\Rightarrow}(R_A, \texttt{Id})$ is the cone above $\mathbf{a}$. \end{proof}
The following question naturally arises: does every pair of equivalence relations have a degree of reducibility? We answer this question negatively. In fact, much more can be proved.
It is a well-known fact that the isomorphism spectrum of a structure cannot be the union of finitely or countably many cones of Turing degrees, unless it is itself a cone (see Soskov~\cite{Soskov}). This is not true for theory spectra: Andrews and Miller~\cite{andrews_spectra_2015} showed that there is a theory $T$ whose spectrum coincides with the union of two cones. The same holds for $\Sigma_n$-spectra, with $n\geq 2$, as proved by Fokina, Semukhin, and Turetsky~\cite{fokina_degree_2016}. In the following theorem, we show that reducibility spectra can be the union of $n$ cones, for all $n$. In fact, given any countable set of Turing degrees $\mathbf{\mathcal{B}}$, there is a reducibility spectrum that coincides with the upward closure of $\mathbf{\mathcal{B}}$. A similar result holds for bi-reducibility spectra, as discussed in the final section.
\begin{thm}\label{thm:characterization} Any upward closed collection of Turing degrees with a countable basis can be realised as a reducibility spectrum. \end{thm}
Before proving the theorem, let us recall the notion of introreducible set.
\begin{definition}[see Jockusch~\cite{Jockusch:68}] An infinite set $A\subseteq \omega$ is \emph{introreducible} if it is computable in any of its infinite subsets, i.e., for all infinite $B\subseteq A$, we have $B\geq_T A$. \end{definition}
It is well known that any Turing degree $\mathbf{d}$ contains an introreducible set. To prove it, from any infinite $A\in \mathbf{d}$ build a Turing equivalent set $B$ as follows: for each finite initial segment $\sigma$ of $\chi_A$, put (the code of) $\sigma$ in $B$. It is not difficult to see that from any infinite subset of $B$ we can extract arbitrarily long initial segments of $\chi_A$, and thus compute $B$. A nice consequence of the fact that $B$ consists of initial segments is that, if some function $f$ enumerates an infinite subset of $B$, then $f$ computes $B$. Since we make use of the latter property several times throughout the rest of the paper, it is convenient to dignify it as a lemma.
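For concreteness, if $\chi_A$ begins with $0,1,1,0,\ldots$, then $B$ contains the codes of the strings $0$, $01$, $011$, $0110$, and so on; any infinite subset of $B$ contains codes of arbitrarily long such strings, each of which is an initial segment of $\chi_A$.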
\begin{lemma}\label{lem:initial} Let $B$ be introreducible and let $f$ be a function. If there is an infinite $Y\subseteq B$ such that $Y$ is c.e.\ in $f$, then $f\geq_T B$. \end{lemma}
\begin{proof} Every infinite c.e.\ set contains an infinite computable set. Relativising this, we see that there is a $Z\subseteq Y$ computable in $f$ so that $Z$ is infinite. Since $B$ is introreducible, $Z$ computes $B$ and thus $f\geq_T B$. \end{proof}
We can now prove the theorem.
\begin{proof}[Proof of Theorem~\ref{thm:characterization}] Let $\mathbf{\mathcal{A}}$ be an upward closed collection of Turing degrees having basis $\mathbf{\mathcal{B}}$. First, assume $\mathbf{\mathcal{B}}$ to be infinite. For any $\mathbf{b}_i \in \mathbf{\mathcal{B}}$, denote by $B_i$ an introreducible set that belongs to $\mathbf{b}_i$. Next, let \[ X=\set{ \langle p_{B_0}(i),p_{B_i}(x)\rangle : i,x \in \omega}, \]
where $p_{B_0}$ (resp.\ $p_{B_i}$) is the principal function of $B_0$ ($B_i$).
We shall think of $X$ as consisting of countably many columns such that its $i$th column encodes the set $B_i$, indexed by the $i$th element of $B_0$.
We claim that $\Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})=\mathbf{\mathcal{A}}$.
We first show that $\mathbf{\mathcal{A}}\subseteq \Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})$. For $j\in \omega$, consider the function \[ h_j(x)=\langle p_{B_0}(j), p_{B_j}(x)\rangle, \]
We have that, for all $j$, $\texttt{Id}$ is $\mathbf{b}_j$-computably reducible to $R_{\overline{X}}$ via $h_j$, since $h_j$ injectively maps singletons of $\texttt{Id}$ into the $j$th column of $X$, and therefore into singletons of $R_{\overline{X}}$. Thus $\mathbf{\mathcal{B}}\subseteq \Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})$ and then, since $\Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})$ is upward closed, $\mathbf{\mathcal{A}} \subseteq \Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})$.
Now we have to prove that $\Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})\subseteq \mathbf{\mathcal{A}}$. Suppose that $\texttt{Id}\leq_{\mathbf{d}}R_{\overline{X}}$ via some $f$. We want to show that there is $k$ such that $f$ computes $B_k$, hence witnessing that $\mathbf{d}\in \mathbf{\mathcal{A}}$. Notice that the range of $f$ is all contained, with the exception of at most one element, in $X$. Consider two cases.
\begin{enumerate} \item Suppose that there is $k$ such that $f$ enumerates an infinite subset $Y$ of the $k$th column of $X$. If so, then the set $\set{p_{B_k}(x) : \langle p_{B_0}(k),p_{B_k}(x)\rangle \in Y}$ is an infinite subset of $B_k$ which is c.e.\ in $f$. By Lemma~\ref{lem:initial}, this means that $f$ computes $B_k$, and so $\mathbf{ d} \geq \mathbf{b}_k$. \item If $f$ enumerates only finitely many elements for each column, then it must be the case that $f$ picks infinitely many columns, i.e., the set \[ Y=\set{p_{B_0}(k) : (\exists y)(\langle p_{B_0}(k),y\rangle \in \range(f))} \] must be infinite. Since $Y\subseteq B_0$ and $B_0$ is introreducible, by Lemma~\ref{lem:initial} we obtain that $\mathbf{d}\geq \mathbf{b}_0$. \end{enumerate} Thus, we have that $\Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})\subseteq \mathbf{\mathcal{A}}$. So we conclude that $\Spec_{\Rightarrow}(\texttt{Id}, R_{\overline{X}})=$~$\mathbf{\mathcal{A}}$.
If $\mathbf{\mathcal{B}}=\set{\mathbf{b}_0,\ldots,\mathbf{b}_n}$ is finite the proof is essentially the same. One can apply the above construction as follows: when $X$ is to be constructed, use the first $n+1$ columns of $X$ to encode $B_i\in \mathbf{b}_i$, for $i \leq n$, and then encode $B_0$ into all remaining columns. \end{proof}
\section{Reducibility spectra that contain $\Pi^0_1$-classes} In the previous section, we proved that reducibility spectra are rather expressive: the union of any countable collection of cones can be realised as a reducibility spectrum. In this section, we go one step further. We show that there is a reducibility spectrum that contains (the degrees of the members of) a special $\Pi^0_1$-class, i.e., a $\Pi^0_1$-class with no computable member. In doing so, we continue our analysis of reducibility spectra of the form $\Spec_{\Rightarrow}(\texttt{Id}, R)$, focusing this time on the case when $R$ is a ceer.
The next definition appears in Andrews and Sorbi~\cite{andrewssorbi2}.
\begin{definition}[Andrews and Sorbi~\cite{andrewssorbi2}] A ceer $R$ is \emph{light} if $\texttt{Id} \leq R$. Otherwise, if $R$ has infinitely many equivalence classes, it is \emph{dark}. \end{definition}
Dark ceers exist. As an easy example, consider a $1$-dimensional ceer $R_S$ where $S$ is a simple set. If $\texttt{Id}\leq R_S$ via some computable $f$, then $f$ would be injective and, since $S$ forms a single $R_S$-class, at most one element of $\range(f)$ could lie in $S$; hence $\range(f)\smallsetminus S$ would be an infinite c.e.\ subset of $\overline{S}$, contradicting the fact that the latter is immune. More generally, the distinction between light and dark ceers reflects a fundamental distinction about how much information we can effectively extract from a given ceer: light ceers are those for which there exists some computable listing $(y_i)_{i\in\omega}$ of pairwise nonequivalent numbers, while for dark ceers no such listing is possible (see~\cite{andrewssorbi2} for an extensive study of light and dark ceers and how they behave with respect to the existence of joins and meets of ceers).
For our present interests, it is immediate to observe that $R$ is light if and only if $\Spec_\Rightarrow(\texttt{Id},R)$ is the cone above $\mathbf{0}$. Hence, we shall turn to dark ceers for nontrivial spectra. In particular, we want to investigate how complicated $\Spec_\Rightarrow(\texttt{Id},R)$ can be if $R$ is dark. Our strategy is to consider the class of all partial transversals of $R$.
\begin{definition} Let $R$ be an equivalence relation. A set $A\subseteq \omega$ is a \emph{partial transversal} of $R$ if $x,y\in A$ implies $\neg(xRy)$, for $x\neq y$. Denote by $P(R)$ the class of all partial transversals of $R$. \end{definition}
Let us stress that we think of transversals of $R$ as sets and not functions. Hence, according to the last definition, $P(R)$ is a subset of Cantor space instead of a subset of Baire space. Our choice is motivated by the fact that we want to make use of Theorem \ref{thm:treeminimal} below, which holds for computably bounded $\Pi^0_1$-classes.
\begin{proposition}\label{prop:tranversaldegree} For any equivalence relation $R$, \[ \mathbf{d}\in \Spec_{\Rightarrow}(\emph{\texttt{Id}},R) \mbox{ if and only if $\mathbf{d}$ computes some infinite $A\in P(R)$}. \] \end{proposition}
\begin{proof} $(\Rightarrow)$: If $\mathbf{d}\in \Spec_{\Rightarrow}(\texttt{Id},R)$, then there is a $\mathbf{d}$-computable function $f$ such that $\range(f)\in P(R)$. Such an $f$ is necessarily injective, so $\range(f)$ is an infinite set which is c.e.\ in $\mathbf{d}$, and hence it contains an infinite $\mathbf{d}$-computable subset, which is still a partial transversal of $R$.
$(\Leftarrow)$: If $A$ is a $\mathbf{d}$-computable infinite partial transversal of $R$, then $\texttt{Id} \leq R$ via $p_A$, and therefore $\mathbf{d}\in \Spec_{\Rightarrow}(\texttt{Id},R)$. \end{proof}
The transversals of a given ceer $R$ obviously form a $\Pi^0_1$-class of functions. Similarly, $P(R)$ forms a $\Pi^0_1$-class of sets.
\begin{lemma}\label{lem:tree} If $R$ is a ceer, then $P(R)$ is a $\Pi^0_1$-class. \end{lemma}
\begin{proof} We construct a binary computable tree $T_R$ such that $[T_R]$ (i.e., the infinite branches through $T_R$) coincides with $P(R)$. The idea is to freeze a node $v$ of $T_R$ whenever we witness that a branch passing through it fails to encode a partial transversal of $R$. This happens if the path from the root to $v$ picks numbers from the same equivalence class.
More formally, for any $\sigma \in 2^{<\omega}$ of length $s$, let $\sigma$ be in $T_R$ if and only if the following holds (where $R=\bigcup_{s\in\omega}R_s$ is a fixed computable enumeration of $R$) \[ (\forall x,y< s) [(x\neq y \wedge \sigma(x)=\sigma(y)=1) \Rightarrow \neg(x R_s y)]. \]
$T_R$ so defined is obviously a computable tree and, from the definition, it is clear that $[T_R]$ coincides with $P(R)$. \end{proof}
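For instance, if the approximation to $R$ reveals by stage $6$ that $2 R_6 5$, then no string $\sigma$ of length at least $6$ with $\sigma(2)=\sigma(5)=1$ belongs to $T_R$, and hence no infinite branch of $T_R$ codes a set containing both $2$ and $5$, as required.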
Our goal is now to apply the following classical theorem due to Jockusch and Soare~\cite{jockusch1972} to the case of reducibility spectra.
\begin{thm}[Jockusch and Soare~\cite{jockusch1972}]\label{thm:treeminimal} Let $\mathcal{C}$ be a nonempty special $\Pi^0_1$-class. Then $\mathcal{C}$ contains $2^{\aleph_0}$ elements, any two of which form a minimal pair. \end{thm}
The main obstruction is that $P(R)$ also includes finite transversals. This means that, for any given $R$, we have that $\mathbf{0}\in \set{\deg(A) : A \in P(R)}$, which makes the latter set useless for the analysis of $\Spec_\Rightarrow(\texttt{Id},R)$. To overcome this problem, we consider ceers of the following kind.
\begin{lemma}\label{prop:darknonhyper} There is a dark ceer $R$ that has no infinite equivalence classes and such that $P(R)$ contains an infinite nonhyperimmune element. \end{lemma}
\begin{proof} We construct $R$ by stages, i.e., $R=\bigcup_{s\in \omega} R_s$. We design the construction to meet the following requirements in such a way that $P(R)$ contains an infinite nonhyperimmune element \[ \mathcal{P}_e : \mbox{ if $W_e$ is infinite, then $W_e \notin P(R)$}, \] \[ \mathcal{N}_e: \mbox{ $[e]_R$ is finite}. \]
The priority ranking of the requirements is the following \[ \mathcal{P}_0 > \mathcal{N}_0 > \dots > \mathcal{P}_e > \mathcal{N}_e > \cdots \]
Notice that if all $\mathcal{P}$-requirements are met, then $R$ is necessarily dark. Otherwise, there would be an injective computable function $f$ such that $\range(f)\in P(R)$, and this would contradict any requirement $\mathcal{P}_e$ with $W_e=\range(f)$. On the other hand, if all $\mathcal{N}$-requirements are met, then all equivalence classes of $R$ are obviously finite.
\subsection*{Construction} Let us set some terminology. We \emph{collapse} two equivalence classes $[x]_R$ and $[y]_R$ by adding into $R$ the pairs needed to obtain $[x]_R=[y]_R$. At any given stage, a $\mathcal{P}$-requirement is either in \emph{stand-by} or \emph{settled}. Moreover, if some action designed to deal with a given requirement $\mathcal{P}_e$ has the effect of expanding $[i]_R$, then we say that $\mathcal{P}_e$ \emph{disturbs} $\mathcal{N}_i$.
\noindent \emph{Stage $0$}: $R=\set{(x,x) : x \in \omega}$. Put all $\mathcal{P}$-requirements in stand-by.
\noindent \emph{Stage $s+1=\langle e, n\rangle$}: Deal with $\mathcal{P}_e$. If $\mathcal{P}_e$ is in stand-by and there are distinct $x,y\in W_{e,s}$ with $\min([x]_{R_s}\cup [y]_{R_s})\geq 3e$, then do the following:
\begin{enumerate} \item Collapse $[x]_{R_s}$ and $[y]_{R_s}$ and call $(x,y)$ the pair of \emph{witnesses} of $\mathcal{P}_e$; \item Set $\mathcal{P}_e$ as settled. \end{enumerate}
Otherwise, do nothing.
\subsection*{Verification} The verification is based on the following claims.
\begin{claim}\label{claim:nrequi} All $\mathcal{N}$-requirements are satisfied, i.e., $R$ has no infinite equivalence classes. \end{claim}
\begin{proof} Towards a contradiction, assume that there is a least requirement $\mathcal{N}_e$ that is not satisfied. It follows from the construction that any $\mathcal{N}$-requirement can be disturbed only by $\mathcal{P}$-requirements with higher priority (in fact, it is immediate to see that if $\mathcal{P}_e$ disturbs $\mathcal{N}_i$ then $e\leq \left \lfloor{\frac{i}{3}}\right \rfloor$). Let $s$ be a stage such that no $\mathcal{P}$-requirement with priority higher than $\mathcal{N}_e$ acts after $s$. Such $s$ must exist since each $\mathcal{P}$-requirement acts at most once. We have that $[e]_R$ will never be expanded after stage $s$, since no $\mathcal{P}$-requirement with lower priority is allowed to disturb it, and thus eventually remains finite. \end{proof}
An immediate consequence of the last claim is that $R$ has infinitely many classes.
\begin{claim} All $\mathcal{P}$-requirements are satisfied, i.e., $R$ is dark. \end{claim>
\begin{proof} Towards a contradiction, assume that there is a least requirement $\mathcal{P}_e$ that is not satisfied. By Claim~\ref{claim:nrequi}, we know that there must be a stage $s$ such that all $\mathcal{N}$-requirements with priority higher than $\mathcal{P}_e$ are never disturbed after $s$. This means that there exists a least $k$ such that $\bigcup_{i<e}[i]_R\subseteq \set{x : 0 \leq x < k}$. Therefore, since $W_e$ is infinite and, by Claim~\ref{claim:nrequi}, all equivalence classes of $R$ are finite, we have that there is a stage $t+1=\langle e,n\rangle>s$ such that there are distinct $x,y \in W_{e,t}$ with $\min([x]_{R_t}\cup [y]_{R_t})>\max \set{k,3e}$. Moreover, $\mathcal{P}_e$ is still in stand-by at that stage, since otherwise it would have already acted and been satisfied. So, at stage $t+1$ the construction collapses $[x]_{R_t}$ and $[y]_{R_t}$. But this action excludes $W_e$ from the partial transversals of $R$. That is to say, we obtain $W_e\notin P(R)$, which contradicts the initial hypothesis that $\mathcal{P}_e$ was not satisfied.
\begin{claim} $P(R)$ contains an infinite nonhyperimmune member. \end{claim}
\begin{proof} Let $Z$ be the set of numbers that are not witnesses of any $\mathcal{P}$-requirement.
These elements correspond to the singletons of $R$, as a given equivalence class $[x]_R$ has size bigger than $1$ if and only if there is a $\mathcal{P}$-requirement that picks $x$ as a witness. Furthermore, notice that for all $k$ the set $\set{x : 0\leq x \leq 3k}$ contains at least $k$ singletons of $R$. This is because, by construction, if $\mathcal{P}_e$ chooses a pair of witnesses $(x,y)$, then $\min(x,y)\geq 3e$. We use this fact to build a strong array containing an infinite partial transversal as follows. Without loss of generality, assume that $|[0]_R|=1$. Let $f$ be a computable function defined by recursion \[ D_{f(0)}=\set{0}, \] \[ D_{f(i+1)}=\set{x: \max(D_{f(i)})<x\leq 3(\max(D_{f(i)})+1)}. \]
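For concreteness, the first members of this array are $D_{f(0)}=\set{0}$, $D_{f(1)}=\set{1,2,3}$, $D_{f(2)}=\set{4,\ldots,12}$ and $D_{f(3)}=\set{13,\ldots,39}$; in general, $\max(D_{f(i)})=(3^{i+1}-3)/2$.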
It is immediate to notice that the so defined sets are pairwise disjoint. Recall that for all $k$ the set $\set{x : 0\leq x\leq 3(k+1)}$ contains at least $k+1$ singletons of $R$. From this fact, we obtain that, for all $i$, there exists $y$ such that $|[y]_R|=1$ and \begin{equation} y \in D_{f(i+1)}=\set{x : 0\leq x\leq 3(\max(D_{f(i)})+1)}\smallsetminus \set{x : 0\leq x\leq \max(D_{f(i)})}. \end{equation}
Consider now the set $A=\set{x : |[x]_R|=1 \wedge (\exists i)(x \in D_{f(i)})}$. It is obvious that $A\in P(R)$, because $A$ contains only singletons of $R$. From $(1)$ above, it follows that $A \cap D_{f(i)} \neq \emptyset$ for all $i$. Thus, $A\in P(R)$ is infinite and nonhyperimmune. \end{proof} \end{proof}
\begin{thm}\label{thm:nocount} There is a reducibility spectrum (not containing $\mathbf{0}$) that contains a special $\Pi^0_1$-class. \end{thm}
\begin{proof} Let $R$ be as in Lemma~\ref{prop:darknonhyper} and let $A\in P(R)$ be an infinite nonhyperimmune set. Let $\set{D_{f(i)}}_{i \in \omega}$ be a strong array witnessing the nonhyperimmunity of $A$. From $R$ we construct the tree $T_R$ as in the proof of Lemma~\ref{lem:tree}. Next, we computably build a subtree $T$ of $T_R$ by allowing in $T$ only the elements of $P(R)$ that meet every $D_{f(i)}$. That is to say, we freeze a node $v$ of $T$ if we see that the string coding $v$ already fails to intersect some $D_{f(i)}$ lying entirely below its length. More formally, for any $\sigma \in 2^{<\omega}$ of length $s$, let $\sigma$ be in $T$ if and only if $\sigma \in T_R$ and \[ (\forall i\leq s)\big(\max(D_{f(i)}) < s \Rightarrow (\exists x) (x \in D_{f(i)} \wedge \sigma(x)=1)\big). \]
$T$ is obviously computable, because $T_R$ is computable and in order to establish whether some $\sigma\in T$ we have to check only finitely many finite sets. Moreover, $[T]$ is special, i.e., it has no computable member. This follows from the following facts: $T$ is a subtree of $T_R$; all infinite elements of $P(R)$ are immune (since $R$ is dark); and any member of $[T]$ is (the characteristic function of) an infinite set, since no finite set can intersect infinitely many of the sets $D_{f(n)}$. Note also that $[T]$ is nonempty, since $A\in[T]$. Finally, by Proposition~\ref{prop:tranversaldegree}, the degree of every member of $[T]$ belongs to $\Spec_\Rightarrow(\texttt{Id},R)$, while $\mathbf{0}\notin\Spec_\Rightarrow(\texttt{Id},R)$ since $R$ is dark.
\end{proof}
Now, let $R$ be the ceer constructed in the proof of Theorem~\ref{thm:nocount} and assume that $\Spec_\Rightarrow(\Id,R)$ has a countable basis. By Theorem~\ref{thm:treeminimal}, $\Spec_\Rightarrow(\Id,R)$ contains continuum many degrees that pairwise form minimal pairs. Since the basis is countable, some element of the basis must lie Turing below two such degrees, and thus this element must be $\mathbf 0$; but then $\mathbf 0\in\Spec_\Rightarrow(\Id,R)$, contradicting the fact that $R$ is dark. We have just proven the following. \begin{corollary}\label{cor:nocount}
There is a reducibility spectrum with no countable basis. \end{corollary} In the next section we will prove an even stronger result: that there are reducibility spectra with no basis at all.
To conclude the special focus on spectra of the form $\Spec_\Rightarrow(\Id,R)$, it is worth noticing that in the above proofs we never used the fact that all equivalence classes of $R$ are finite. In fact, it can be shown that the same result holds also for (properly constructed) equivalence relations with infinite equivalence classes. Yet, our choice makes the result sharp, in the sense of the following proposition.
\begin{proposition} If $R$ is a bounded ceer, then $\Spec_\Rightarrow(\emph{\texttt{Id}},R)$ has a countable basis; in fact, $\mathbf{0}\in \Spec_\Rightarrow(\emph{\texttt{Id}},R)$. \end{proposition}
\begin{proof}
Let $k$ be the largest number such that $R$ has infinitely many equivalence classes of size $k$ (such a $k$ exists, since $R$ is bounded and has infinitely many classes). Let $Y=\set{y : |[y]_R|>k}$. $Y$ is obviously finite, and we fix it non-uniformly. We can then easily construct a c.e.\ partial transversal $A$ of $R$: whenever we witness, at some stage $s$, that $|[x]_{R_s}|=k$ for some $x\notin Y$, we put the least element of $[x]_{R_s}$ into $A$. Note that for $x\notin Y$ we have $|[x]_R|\leq k$, so once $[x]_{R_s}$ reaches size $k$ it coincides with $[x]_R$; hence $A$ is an infinite c.e.\ partial transversal of $R$. Then, $\texttt{Id} \leq_{\mathbf{0}} R$ via any injective computable function $f$ with $\range(f)=A$. \end{proof}
\section{A reducibility spectrum with no basis} In this section we complete the picture about the complexity of reducibility spectra. Having shown that these spectra can lack a countable basis, it is natural to ask whether they all have a basis. Notice that this question has not already been answered by Corollary~\ref{cor:nocount}, since the spectrum considered in the proof might have a basis which is uncountable. Here, we directly construct a reducibility spectrum with no basis at all. As in the previous section, we prove that the result already holds for ceers. The idea for building the desired spectrum is to make it downward dense while at the same time excluding $\mathbf{0}$ from it. More precisely, we aim to build a pair of ceers $(R,S)$ such that
\begin{enumerate} \item $\mathbf{0} \notin \Spec_\Rightarrow(R,S)$, \item if $\mathbf{d} \in \Spec_\Rightarrow(R,S)$, then $\set{ \mathbf{c}:\mathbf{0} <\mathbf{c}< \mathbf{d}} \cap \Spec_\Rightarrow(R,S) \neq \emptyset$. \end{enumerate}
As is clear, a spectrum that satisfies $(1)$ and $(2)$ cannot have a basis: every element of a basis is a minimal element of the spectrum, while $(1)$ and $(2)$ together imply that the spectrum has no minimal elements. The construction of $R$ and $S$ is based on the following notion due to Cleave~\cite{Cleave:61}.
\begin{definition}[Cleave~\cite{Cleave:61}] A sequence of pairwise disjoint c.e.\ sets $E_0,\ldots,E_{n-1}$ is \emph{creative} if there is a computable function $p$ such that, if $W_{i_0},\ldots,W_{i_{n-1}}$ is a sequence of $n$ pairwise disjoint c.e.\ sets such that $W_{i_k} \cap E_{k}= \emptyset$, for all $k<n$, then \[ p(i_0,i_1,\ldots,i_{n-1})\in \overline{\bigcup_{0\leq k< n}(E_k \cup W_{i_k})}. \] \end{definition}
Creative sequences generalise effective inseparability from pairs of c.e.\ sets to sequences of them. The underlying intuition of the latter definition is that no set of a creative sequence can be effectively separated from the others.
Effective inseparability plays a crucial role in the theory of ceers. For example, Andrews, Lempp, J.~Miller, Ng, San Mauro, and Sorbi~\cite{Andrews:14} showed that \emph{uniformly effectively inseparable} ceers (i.e., ceers whose equivalence classes are pairwise effectively inseparable, and such that this effective inseparability can be witnessed uniformly)
are universal. This fact subsumes all known results of universality for ceers and stands in analogy with the following classical results: any pair of effectively inseparable sets is $1$-complete, as proven by Smullyan~\cite{smullyan1961theory}, and any creative sequence is $1$-complete, as proved by Cleave~\cite{Cleave:61}. Cleave's result, in particular, means that, if $E_0,\ldots, E_{n-1}$ is a creative sequence, then for all sequences of pairwise disjoint c.e.\ sets $G_0,\ldots,G_{m-1}$ with $m\leq n$, there is an injective computable function $h$ such that \[ \mbox{if }x \in G_k \mbox{ for some $k< m$}, \mbox{ then } h(x) \in E_k; \] \[ \mbox{if } x \notin \bigcup_{k<m} G_k, \mbox{ then } h(x) \notin \bigcup_{k < n} E_k. \]
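For instance (this is just the case $m=n=2$ of the statement above), given a creative sequence $E_0,E_1$ of length $2$ and any pair of disjoint c.e.\ sets $G_0,G_1$, there is an injective computable $h$ mapping $G_0$ into $E_0$, $G_1$ into $E_1$, and the complement of $G_0\cup G_1$ outside of $E_0\cup E_1$: this is the familiar notion of a reduction between pairs of disjoint c.e.\ sets.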
As an immediate corollary of the last fact we obtain the following.
\begin{proposition}\label{prop:univ-crea} For all $n$, there is an $n$-dimensional ceer $R_{E_0,\ldots,E_{n-1}}$ such that, for any $m$-dimensional ceer $R_{G_0,\ldots,G_{m-1}}$ with $m\leq n$, $R_{G_0,\ldots,G_{m-1}}\leq_{\mathbf{0}} R_{E_0,\ldots,E_{n-1}}$. \end{proposition}
\begin{proof} Just choose the sequence $E_0,\ldots,E_{n-1}$ to be creative. Cleave's result guarantees that, given any sequence ${G_0,\ldots,G_{m-1}}$ with $m\leq n$, there is a $1$-reduction $h$ from ${G_0,\ldots,G_{m-1}}$ into $E_0,\ldots,E_{n-1}$. Since $h$ is injective, $h$ will also induce a computable reduction from $R_{G_0,\ldots,G_{m-1}}$ into $R_{E_0,\ldots,E_{n-1}}$. \end{proof}
From now on, we denote by $\set{U_i}_{i\in \omega}$ a uniformly c.e.\ sequence of ceers such that, for all $i$, $U_i$ is generated by a creative sequence of length $i+1$. As an example of such a sequence the reader can think of each $U_i$ as $R_{K_0,\ldots,K_i}$, where $K_j=\set{x: \phi_x(x)\downarrow=j}$ for $j \leq i$.
For the next lemma, recall that $B$ and $C$ \emph{split} $A$ (notation: $A= B\sqcup C$) if $B\cap C=\emptyset$ and $A=B\cup C$.
\begin{lemma}\label{lem:descend} Let $A$ be a noncomputable c.e.\ set. There are computable functions $f,g$ such that $W_{f(0)}=A$, $W_{g(0)}=\emptyset$, and, for all $i$, \begin{enumerate} \item $W_{f(i+1)}\sqcup W_{g(i+1)}=W_{f(i)}$, \item and $W_{f(i)}>_T W_{f(i+1)}>_T \mathbf{0}$. \end{enumerate} \end{lemma}
\begin{proof} Given any noncomputable c.e.\ set $A$, Sacks Splitting theorem~\cite[Theorem 7.5.1.]{Soare:Book} allows to construct a splitting $B \sqcup C$ of $A$ into noncomputable c.e.\ sets such that both $B$ and $C$ avoid the cone above $A$. It is enough to apply the theorem infinitely many times to obtain $f$ and $g$. Indeed, suppose we have defined $W_{f(i)}$ and $W_{g(i)}$. By applying Sacks' construction to $W_{f(i)}$, one obtains a splitting $B_i\sqcup C_i$ of $W_{f(i)}$. Then, we define $f(i+1)$ and $g(i+1)$ equal (respectively) to an index of $B_i$ and an index of $C_i$, where these indices are uniformly obtained by the $s$-$m$-$n$ theorem. \end{proof}
\begin{definition} The \emph{cylinder} of a (possibly infinite) family $\mathcal{F}=\set{A_0,A_1,\ldots}$ of sets is the equivalence relation $R$ defined by \[
\langle i, x\rangle R \langle j, y\rangle \Leftrightarrow i=j \wedge [ x=y \vee (i < |\mathcal{F}| \wedge x,y \in A_i)]. \] \end{definition}
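For instance, the cylinder of the one-set family $\set{A}$ identifies $\langle 0,x\rangle$ and $\langle 0,y\rangle$ exactly when $x=y$ or $x,y\in A$, while every number of the form $\langle i,x\rangle$ with $i>0$ forms a singleton: its $0$th column is a copy of $R_A$ and all other columns are copies of $\texttt{Id}$.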
\begin{thm} There is a reducibility spectrum of ceers with no basis. \end{thm}
\begin{proof}
We build equivalence relations $R$ and $S$ in columns, in such a way that $\Spec_\Rightarrow(R,S)$ has no basis. To do so, let $B$ be a noncomputable co-c.e.\ introreducible set with $0\notin B$ (Lachlan proved that such $B$ must be hyperimmune, see the addendum at the end of Jockusch~\cite{Jockusch:68}). We define $R$ as the cylinder of the family $\set{W_{f(i)}}_{i\in \omega}$, where $f$ is as in Lemma~\ref{lem:descend} and $W_{f(0)}=\overline{B}$, i.e., \[ \langle i, x\rangle R \langle j, y\rangle \Leftrightarrow i=j \wedge (x=y \vee x,y \in W_{f(i)}). \]
We define $S$ as follows. We keep the $0$th column of $S$ isomorphic to $\texttt{Id}$, and then we encode the cylinder of $\set{U_i}_{i\geq 0}$ into the remaining columns of $S$, with the further condition of making its $i$th column isomorphic to $\texttt{Id$_1$}$ if we witness that $i$ enters in $\overline{B}$. More formally, $S$ is the following equivalence relation \[ \langle i, x\rangle S \langle j, y\rangle \Leftrightarrow \langle i, x \rangle =\langle j,y\rangle \vee (i=j \wedge i>0 \wedge (x U_{i-1} y \vee i \in \overline{B})). \]
The relations $R$ and $S$ defined in this way are obviously ceers. Denote by $\mathbf{\mathcal{A}}$ the upward closure of $\set{\deg(W_{f(i)}): i \in \omega}$. We claim that $\Spec_{\Rightarrow}(R,S)=\mathbf{\mathcal{A}}$.
On the one hand, let $\mathbf{d} \in \mathbf{\mathcal{A}}$ and let $n$ be the least number such that $\mathbf{d}\geq \deg(W_{f(n)})$. By construction, any column of $S$ is either finite dimensional or isomorphic to $\texttt{Id$_1$}$. Moreover, since $B$ is infinite, there are infinitely many columns of $S$ that we never collapse to $\texttt{Id$_1$}$. Let $k$ be the least number such that the $k$th column of $S$ is $m$-dimensional with $m>n$. Denote by $C$ the cylinder of the family $\set{W_{f(0)},\ldots,W_{f(n)}}$. Since $C$ is $(n+1)$-dimensional, there must be a computable function $r$ that reduces $C$ to $U_{k-1}$, the ceer encoded in the $k$th column of $S$. Indeed, $U_{k-1}$ is $m$-dimensional and, by Proposition~\ref{prop:univ-crea}, it is universal with respect to all ceers of dimension $\leq m$. Consider now the following function
\[ p(\langle i,x\rangle)= \begin{cases} \langle k, r(\langle i, x \rangle)\rangle &\mbox{ if $i \leq n$}, \\ \langle 0, \langle i, 0\rangle \rangle &\mbox{ if $i > n$ and $x \in W_{f(i)}$}, \\ \langle 0,\langle i, x+1\rangle \rangle &\mbox{ if $i > n$ and $x \notin W_{f(i)}$}. \end{cases} \]
We want to show that $R\leq_{\mathbf{d}}S$ via $p$. First, notice that $p$ is $\mathbf{d}$-computable. Indeed, the uniformity of Sacks' Splitting guarantees that, for all $m$, $W_{f(m+1)}$ is uniformly reducible to $W_{f(m)}$. Hence, $\mathbf{d}$ can compute any $W_{f(i)}$ with $i\geq n$. Moreover, it is not difficult to see that $p$ reduces $R$ into $S$, by mapping the first $n+1$ columns of $R$ into the $k$th column of $S$ and all remaining columns of $R$ into singletons of the $0$th column of $S$ (the latter being isomorphic to \texttt{Id}). This proves that $\mathbf{\mathcal{A}} \subseteq \Spec_{\Rightarrow}(R,S)$.
For the other inclusion, assume that $R\leq_{\mathbf{d}} S$ via some $h$. We distinguish three cases:
\begin{enumerate} \item $h$ maps a noncomputable equivalence class of $R$ into a singleton of $S$, i.e., there exists $z$ such that, for some $k$, $h^{-1}(z)=\set{\langle k, y \rangle: y \in W_{f(k)}}$. If so, we obtain that $h$ computes $W_{f(k)}$, i.e., $\mathbf{d} \geq \deg(W_{f(k)})$; \item $h$ maps a noncomputable equivalence class of $R$ into a collapsed column of $S$, i.e., there exists $k$ such that, for some $b \in \overline{B}$, $\set{\langle k, y \rangle: y \in W_{f(k)}}\subseteq \set{\langle b, x \rangle : x \in \omega } $. If so, since the $b$th column of $S$ is isomorphic to $\texttt{Id$_1$}$, we obtain again that $h$ computes $W_{f(k)}$, i.e., $\mathbf{d} \geq \deg(W_{f(k)})$; \item $h$ maps all noncomputable equivalence classes of $R$ into noncomputable equivalence classes of $S$. If so, we claim that $h$ enumerates an infinite subset of $B$: choose in a c.e.\ way a witness from each noncomputable equivalence class of $R$ and then list the indices of the column into which $h$ map such witnesses. More formally, let $(y_i)_{i \in \omega}$ be an infinite c.e.\ sequence of numbers such that, for all $i$, $y_i\in W_{f(i)}$, and let $Y=\set {\langle k, y_k\rangle: k \in \omega}$. Notice that the set $Y^*=\set{j : (\exists x)(\langle j, x\rangle \in h[Y])}$ must be infinite. Otherwise, since each column of $S$ is finitely dimensional, $h$ would map some noncomputable equivalence class of $R$ into a singleton of $S$, and therefore we would be in Case $(1)$. Moreover, $Y^*\subseteq B$. Otherwise, $h$ would map some noncomputable equivalence class of $R$ into a collapsed column of $S$, and therefore we would be in Case $(2)$. Thus, $Y^*$ is an infinite subset of $B$ which is c.e.\ in $h$. Since $B$ is introreducible, by Lemma~\ref{lem:initial} we obtain $h\geq_T B$. This proves that $\mathbf{d} \geq \deg(B)=\deg(W_{f(0)})$. \end{enumerate}
In all three cases $\mathbf{d}\geq \deg(W_{f(i)})$ for some $i$. Therefore, $ \Spec_{\Rightarrow}(R,S) \subseteq$~$\mathbf{\mathcal{A}}$.
Hence, we proved that $\Spec_\Rightarrow(R,S)=\mathbf{\mathcal{A}}$. To conclude that $\Spec_\Rightarrow(R,S)$ has no basis, it is enough to recall that $\deg(W_{f(0)})>\deg(W_{f(1)})>\ldots$ is an infinite descending sequence of c.e.\ degrees not containing $\mathbf{0}$: hence every element of $\mathbf{\mathcal{A}}$ strictly bounds another element of $\mathbf{\mathcal{A}}$, so $\mathbf{\mathcal{A}}$ has no minimal elements and therefore no basis. \end{proof}
\section{Bi-reducibility spectra}
We now turn our attention to bi-reducibility spectra. By definition, any bi-reducibility spectrum is the intersection of two reducibility spectra. Indeed, for any pair of equivalence relations $(R,S)$, the following holds \[ \Spec_{\Leftrightarrow}(R, S)=\Spec_{\Rightarrow}(R,S) \cap \Spec_{\Rightarrow}(S, R). \]
It follows immediately that any bi-reducibility spectrum of equivalence relations with infinitely many equivalence classes is upward closed. It is not difficult to see that all Turing degrees are degrees of bi-reducibility. But in fact, much more is true. In this section we obtain a natural companion of Theorem~\ref{thm:characterization} for bi-reducibility spectra, by proving that the latter realise any upward closed collection of Turing degrees with a countable basis. Moreover, we show that the result still holds if we limit our attention to equivalence relations with no infinite equivalence classes.
Bi-reducibility spectra are harder to deal with than reducibility spectra. This is because, while encoding or forbidding a given reduction, one has also to control backwards reductions. This explains why the next proof is more delicate than that of Theorem~\ref{thm:characterization}.
\begin{thm}\label{thm:bi-spectra} Let $\mathbf{\mathcal{A}}$ be an upward closed collection of Turing degrees with countable basis $\mathbf{\mathcal{B}}$. There is a pair of equivalence relations $(R,S)$ with no infinite equivalence classes such that $\Spec_{\Leftrightarrow}(R,S)=\mathbf{\mathcal{A}}$. \end{thm}
\begin{proof} We prove the theorem for the case in which the basis $\mathbf{\mathcal{B}}=\set{\mathbf{b}_0,\mathbf{b}_1,\ldots}$ is infinite, the finite case being a simpler variation of the following argument. For any $\mathbf{b}_i\in \mathbf{\mathcal{B}}$, let $B_i\in \mathbf{b}_i$ be an introreducible set such that $\set{0,1}\cap B_i=\emptyset$. Our strategy is to use $B_0$ and $B_1$ to encode the information provided by the other introreducible sets. In doing so, it is convenient to introduce some notation. First, similarly to the definition of $X$ in the proof of Theorem~\ref{thm:characterization}, denote by $f_i$ the following $\mathbf{b}_i$-computable function, for all $i$, \[ f_i(x)=\langle p_{B_0}(i), p_{B_i}(x)\rangle. \]
It is immediate to see that the ranges of the functions so defined are pairwise disjoint. Secondly, if $X$ is a finite set with canonical index $z$ (i.e., $X=D_z$), denote $\langle 0, p_{B_1}(z)\rangle$ by $\ulcorner X \urcorner$. In what follows, we will use the following observation several times: for any finite set $X$, $\ulcorner X \urcorner \notin \bigcup_{k \in \omega} \range(f_k)$; indeed, the first coordinate of $\ulcorner X \urcorner$ is $0$, while every element of $\range(f_k)$ has first coordinate $p_{B_0}(k)\in B_0$, and $0\notin B_0$.
To define the desired pair of equivalence relations $(R,S)$, we start by considering the following sequence of families of finite sets \[ \mathcal{C}_0=\set{\set{\langle 0,0\rangle,\langle 0,1\rangle}}, \] \[ \mathcal{C}_{n+1}=\set{ f_i[X] \cup \set{\ulcorner X \urcorner}: X \in \mathcal{C}_n \wedge i\in\omega}, \]
Let $\mathcal{C}=\bigcup_{k\in\omega}\mathcal{C}_{k}$.
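Spelling out the first step of the recursion: if $X=\set{\langle 0,0\rangle,\langle 0,1\rangle}$ denotes the unique member of $\mathcal{C}_0$, then the members of $\mathcal{C}_1$ are exactly the sets \[ f_i[X] \cup \set{\ulcorner X \urcorner}=\set{\langle p_{B_0}(i),p_{B_i}(\langle 0,0\rangle)\rangle,\ \langle p_{B_0}(i),p_{B_i}(\langle 0,1\rangle)\rangle,\ \ulcorner X \urcorner}, \quad i\in\omega, \] each of which has size $3$.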
\begin{claim} $\mathcal{C}$ satisfies the following properties.
\begin{enumerate}
\item If $X \in \mathcal{C}_{n}$, then $|X|=n+2$. \item If $X \in \mathcal{C}_n$, $Y \in \mathcal{C}_m$, and $X\neq Y$, then $X \cap Y=\emptyset$. \end{enumerate} \end{claim}
\begin{proof}
$(1)$ By induction we prove that, for all $n$, any two elements of $\mathcal{C}_n$ have the same size. This is trivially true for $\mathcal{C}_0$. Towards a contradiction, let $n$ be the least number such that there exists $\set{X,Y} \subseteq \mathcal{C}_n$ with $|X|\neq |Y|$. By construction, there must be $\set{X_0,Y_0}\subseteq \mathcal{C}_{n-1}$ such that $X=f_i[X_0] \cup \set{\ulcorner X_0 \urcorner}$, for some $f_i$, and $Y =f_j[Y_0] \cup \set{\ulcorner Y_0 \urcorner}$, for some $f_j$. Since $f_i$ and $f_j$ are both injective and $n$ is chosen to be minimal, we have that $|f_i[X_0]|=|f_j[Y_0]|$. It follows that $|X_0|\leq |X| \leq |X_0|+1$, and $|X|<|X_0|+1$ can hold only if there is $z\in X_0$ such that $\ulcorner X_0 \urcorner= f_i(z)$. But the latter equality is impossible, since $\ulcorner X_0 \urcorner\notin (\bigcup_{k\in\omega}\range(f_k))$. Therefore, we have $|X|=|X_0|+1$. By reasoning in a similar way, it can be shown that $|Y|=|Y_0|+1$. So, any two elements of $\mathcal{C}_{n}$ have the same size, and in fact they all have size $|A|+1$, for all $A \in \mathcal{C}_{n-1}$; since the unique member of $\mathcal{C}_0$ has size $2$, it follows that every member of $\mathcal{C}_n$ has size $n+2$.
$(2)$
Towards a contradiction, assume that $(n,m)$ is the least pair for which there exist distinct $X\in \mathcal{C}_n$ and $Y \in \mathcal{C}_m$ such that $z \in X \cap Y$, for some $z$.
First, note that from the fact that $\set{0,1}\cap B_i=\emptyset$ for all $i$, we obtain that for any finite $X$ the following holds \[ \set{\langle 0,0\rangle,\langle 0,1\rangle} \cap \left(\bigcup_{k\in\omega}\range(f_k) \cup \set{\ulcorner X\urcorner}\right) =\emptyset. \]
Thus, $0\notin \set{n,m}$. By construction, we have that there is a unique pair of functions $(f_i,f_j)$ such that $X=f_i[X_0]\cup\set{\ulcorner X_0\urcorner}$ and $Y=f_j[Y_0]\cup\set{\ulcorner Y_0\urcorner}$, with $X_0 \in \mathcal{C}_{n-1}$ and $Y_0 \in \mathcal{C}_{m-1}$. We have that $z\notin \set{\ulcorner X_0\urcorner, \ulcorner Y_0\urcorner}$. This is because
\begin{enumerate} \item $\ulcorner X_0\urcorner \neq \ulcorner Y_0\urcorner$, \item and $\set{\ulcorner X_0\urcorner, \ulcorner Y_0\urcorner} \cap (\bigcup_{k\in\omega} \range(f_k))=\emptyset$. \end{enumerate}
Therefore, it must be that $z\in f_i[X_0]\cap f_j[Y_0]$. If $i \neq j$ we immediately obtain a contradiction, since we know that $\range(f_i)\cap \range(f_j)=\emptyset$. Hence, $f_i=f_j$ and $f_i^{-1}(z)=f_j^{-1}(z)$ must be in $X_0\cap Y_0$. But this would imply that $\mathcal{C}_{n-1}$ and $\mathcal{C}_{m-1}$ overlap, contradicting the minimality of the pair $(n,m)$. \end{proof}
Let $R$ and $S$ be the equivalence relations generated respectively by $\bigcup_{k\in\omega}\mathcal{C}_{2k}$ and $\bigcup_{k\in\omega}\mathcal{C}_{2k+1}$, i.e., \[ xR y \Leftrightarrow x=y \vee (\exists Z \in \bigcup_{k\in\omega}\mathcal{C}_{2k})(x,y \in Z) \]
and \[ xS y \Leftrightarrow x=y \vee (\exists Z \in \bigcup_{k\in\omega}\mathcal{C}_{2k+1})(x,y \in Z). \]
Item $(2)$ of the last claim ensures that the two equivalence relations are well-defined, by guaranteeing that the generating sets are pairwise disjoint, so that each of them forms a single equivalence class. Moreover, as a consequence of item $(1)$ of the last claim, we obtain that all equivalence classes of $R$ and $S$ are finite, but they have arbitrarily large size: any equivalence class of $R$ is either a singleton or has even size; all equivalence classes of $S$ have odd size. We claim that $\Spec_{\Leftrightarrow}(R,S)=\mathbf{\mathcal{A}}$.
\begin{claim}\label{lem:twocones1} $\mathbf{\mathcal{A}}\subseteq\Spec_{\Leftrightarrow}(R,S)$. \end{claim}
\begin{proof} Since any bi-reducibility spectrum is upward closed, it is enough to prove that $\mathbf{\mathcal{B}}\subseteq\Spec_{\Leftrightarrow}(R,S)$. Let $\mathbf{b}_i\in \mathbf{\mathcal{B}}$. We show that $R\leq_{\mathbf{b}_i}S$ via $f_i$. If $xR y$, with $x\neq y$, then there is $Z \in \mathcal{C}_{2k}$, for some $k$, such that $x,y\in Z$. It follows that $f_i[Z]\cup \set{\ulcorner Z \urcorner}\in \mathcal{C}_{2k+1}$. Since $\bigcup_{k\in\omega}\mathcal{C}_{2k+1}$ generates $S$, we obtain that $f_i[Z]\cup \set{\ulcorner Z \urcorner}$ forms an equivalence class of $S$ which contains $f_i[Z]$, and in particular $f_i(x)$ and $f_i(y)$. Hence, $f_i(x)S f_i(y)$ holds.
On the other hand, assume $\neg(xRy)$ and, towards a contradiction, suppose that $f_i(x) S f_i(y)$. By construction of $S$, this implies that there is $Z \in \mathcal{C}_{2k}$, for some $k$, such that $\set{f_i(x),f_i(y)}\subseteq f_i[Z] \cup \set{\ulcorner Z\urcorner}$. Since $\ulcorner Z \urcorner\notin \range(f_i)$, it follows that $\set{f_i(x),f_i(y)}\subseteq f_i[Z]$, and therefore $\set{x,y}\subseteq Z$. By construction of $R$ this would imply $x R y$, contradicting our assumption.
We proved that $\mathbf{\mathcal{B}}\subseteq \Spec_\Rightarrow(R,S)$. By a similar argument, it can be shown that, for all $i$, $S \leq_{\mathbf{b}_i} R$ via $f_i$. Thus, $\mathbf{\mathcal{B}}\subseteq \Spec_\Rightarrow(S,R)$. Since $\Spec_\Leftrightarrow(R,S)$ coincides with $\Spec_\Rightarrow(R,S)\cap \Spec_\Rightarrow(S,R)$, we conclude that $\mathbf{\mathcal{A}}\subseteq\Spec_{\Leftrightarrow}(R,S)$. \end{proof}
It remains to show that $\Spec_{\Leftrightarrow}{(R,S)} \subseteq \mathbf{\mathcal{A}}$. We say that a total function $f$ is \emph{eventually injective} if there is $n$ such that $f$ restricted to $x>n$ is injective. Let $\mathbf{d}\in \Spec_{\Leftrightarrow}{(R,S)}$. Assume that $R\leq_{\mathbf{d}} S$ via some function $s$ and $S \leq_{\mathbf{d}} R$ via some function $t$.
\begin{claim}\label{claim:inj}
There exists an infinite set $A$, computable in $\mathbf{d}$, such that if $z \in A$ then $|[z]_R|>1$. \end{claim}
\begin{proof} We distinguish two cases. If $s$ (resp.\ $t$) is not eventually injective, let $A=\set{x_0, x_1,\ldots}$ be an infinite set such that, for all $k$, $s(x_{2k})=s(x_{2k+1})$ ($t(x_{2k})=t(x_{2k+1})$). If $s$ and $t$ are both eventually injective,
define the following sequence, for all $x$, \[ x_0= x \] \[ x_{n+1}=\begin{cases} s(x_n) &\text{ if $n$ is even,} \\
t(x_n)&\text{ if $n$ is odd,} \end{cases} \]
and the following function \[ h_{x}(n)=\begin{cases}
|[x_n]_R| &\text{ if $n$ is even,} \\
|[x_n]_S| &\text{ if $n$ is odd.} \end{cases} \]
We claim that there exists $x$ and $m$ such that $h_x$ restricted to $y>m$ is strictly increasing. Otherwise, $s$ would map infinitely many equivalence classes of $R$ into classes of smaller size of $S$; or, vice versa, $t$ would map infinitely many equivalence classes of $S$ into classes of smaller size of $R$. In both cases, this contradicts the assumption that $s$ and $t$ are eventually injective.
Thus, let $z$ and $k$ be such that $h_z$ restricted to $y>2k$ is strictly increasing and $h_z(2k)>1$. We have that $A=\set{z_{2j}: j \geq k}$ is a partial transversal of $R$ and each element of $A$ is in a class of size larger than $1$. \end{proof}
From the fact that $A$ intersects no singleton of $R$, it follows that any element of $A$, with the possible exception of the two numbers $\langle 0,0\rangle$ and $\langle 0,1\rangle$, is either of the form $\langle 0, p_{B_1}(k)\rangle$, for some $k$, or of the form $\langle p_{B_0}(i),p_{B_i}(y)\rangle$, for some $i$ and $y$: indeed, any other number is necessarily a singleton in $R$. Since $A$ is infinite, we may ignore these two exceptions and distinguish three cases.
\begin{enumerate} \item The set $Y=\set{p_{B_1}(k):\langle 0, p_{B_1}(k)\rangle \in A}$ is infinite: If so, we can reason in a familiar way. $Y\subseteq B_1$ is c.e.\ in $\mathbf{d}$. By Lemma~\ref{lem:initial}, we obtain that $\mathbf{d}$ computes $B_1$, and therefore $\mathbf{d} \geq \mathbf{b}_1$; \item There is $j$ such that the set $Y_j=\set{p_{B_j}(k):\langle p_{B_0}(j), p_{B_j}(k)\rangle \in A}$ is infinite: If so, $Y_j\subseteq B_j$ is c.e.\ in $\mathbf{d}$. By Lemma~\ref{lem:initial}, we obtain that $\mathbf{d}$ computes $B_j$, and therefore $\mathbf{d} \geq \mathbf{b}_j$; \item The set $Y^*=\set{p_{B_0}(k):(\exists z)(\langle p_{B_0}(k), z\rangle \in A)}$ is infinite: If so, $Y^*\subseteq B_0$ is c.e.\ in $\mathbf{d}$. By Lemma~\ref{lem:initial}, we obtain that $\mathbf{d}$ computes $B_0$, and therefore $\mathbf{d}\geq \mathbf{b}_0$. \end{enumerate}
Therefore, in each case $\mathbf{d}\geq \mathbf{b}_j$ for some $j$, which means that $\mathbf{d}\in \mathbf{\mathcal{A}}$ and $\Spec_{\Leftrightarrow}(R,S)\subseteq \mathbf{\mathcal{A}}$. By recalling Claim~\ref{lem:twocones1}, we conclude that $\Spec_{\Leftrightarrow}(R,S)= \mathbf{\mathcal{A}}$. \end{proof}
\end{document}
\begin{document}
\begin{abstract} In this paper, we consider Bayesian variable selection problem of linear regression model with global-local shrinkage priors on the regression coefficients. We propose a variable selection procedure that select a variable if the ratio of the posterior mean to the ordinary least square estimate of the corresponding coefficient is greater than $1/2$. Under the assumption of orthogonal designs, we show that if the local parameters have polynomial-tailed priors, our proposed method enjoys the oracle property in the sense that it can achieve variable selection consistency and optimal estimation rate at the same time. However, if, instead, an exponential-tailed prior is used for the local parameters, the proposed method does not have the oracle property. \end{abstract} \maketitle
\section{Introduction} The objective of this article is simultaneous variable selection and estimation in linear regression models under global-local shrinkage priors and a suitable thresholding. Selection of the best available model among a set of candidate models is extremely useful for most statistical applications. The problem often reduces to the choice of a subset of variables from all predictive variables in a regression setting. Linear regression models continue to occupy a prominent place in variable selection problems due to their interpretability as well as analytical tractability. Throughout this paper, we consider classical linear regression models with response vector $\mathbf{Y} = (y_1, \ldots, y_n)$ and a set of predictors $\mathbf{x}_1, \ldots, \mathbf{x}_p$. The target is to fit a model of the form \begin{equation} \mathbf{Y} = \sum_{i=1}^{p} \mathbf{x}_i\beta_i + \boldsymbol{\epsilon} = \mathbf{X}\boldsymbol{\beta} +\boldsymbol{\epsilon}, \end{equation} where $\mathbf{X} = (\mathbf{x}_1, \ldots, \mathbf{x}_p)$, $\boldsymbol{\beta} = (\beta_1, \ldots, \beta_p)^T$, and $\boldsymbol\epsilon \sim N(\mathbf{0}_n, \sigma^2\mathbf{I}_n)$. The goal of variable selection is to pick only a subset of predictors that are relevant for predicting a given response.
Historically, penalized regression methods have been very successful for variable selection. In its general form, the problem reduces to minimization of the objective function \begin{equation}\label{eq:obj_fun}
f(\boldsymbol{\beta}) = \| \mathbf{Y} - \mathbf{X}\boldsymbol{\beta} \|^2 + \lambda \sum_{i=1}^p u(\beta_i), \end{equation}
where $\lambda$ is the penalty parameter. The choice $u(z) = z^2$ leads to the ridge \citep{marquardt1975ridge} estimator, while $u(z) = | z |$ leads to the lasso \citep{tibshirani1996regression} estimator of $\boldsymbol{\beta}$. One of the advantages of the lasso estimator is that it can produce exact zero estimates for some of the regression coefficients. Despite this distinctive feature, the lasso method has some limitations in its original form. \citet{zou2006adaptive} showed that lasso estimators could not achieve consistent variable selection and optimal estimation rate at the same time. He proposed instead the adaptive lasso which heavily penalized zero coefficients and moderately penalized large coefficients using data dependent weights for different coefficients. Specifically, the adaptive lasso estimates are found as \begin{equation}
\hat{\boldsymbol{\beta}}^{\textrm{adap}} = \underset{\boldsymbol{\beta}}{\operatorname{argmin}} \left\{ \|\mathbf{Y} - \mathbf{X}\boldsymbol{\beta} \|^2 + \lambda \sum_{i=1}^p | \beta_i | / | \hat{\beta}_i |^{\gamma} \right\} \end{equation} for some $\gamma > 0$ where $\hat{\beta}_i$ is the least squares estimator of $\beta_i$. The adaptive lasso enjoys the oracle property in the sense that it achieves simultaneously variable selection consistency and asymptotic normality with $\sqrt{n}$ convergence rate. One important feature, implicit in the proof of Theorem 2 of \cite{zou2006adaptive}, is that $\hat{\beta}_i^{\mathrm{adap}} / \hat{\beta}_i \overset{P}{\rightarrow} 0$ or $1$ according as the true coefficient value equals or is different from zero. This property is the main motivation for us to develop a new thresholding method in a Bayesian context.
The history of Bayesian variable selection goes a long way back, starting with \citet{mitchell1988bayesian}, where a spike-and-slab prior was used for the coefficients. The spike part of their prior placed probability mass at zero to exclude irrelevant variables, while the slab part used a uniform distribution with a large symmetric range to include the important variables. Since then, many priors proposed for variable selection possess the spike-and-slab feature, although they differ in the choice of the distributions for the two parts. \citet{george1993variable} proposed stochastic search variable selection (SSVS), which assumed $\beta_i$ to be a mixture of two normal distributions with different variances: the spike part is the one with a smaller variance, while the slab part is the one with a much larger variance. In contrast, \citet{geweke1996variable} used a point mass at 0 for the spike part and a normal distribution for the slab part. Another example is \citet{narisetty2014bayesian}. \citet{xu2015bayesian} also considered spike-and-slab priors, but used median thresholding to select variables in the group lasso framework.
An alternative approach to Bayesian variable selection is to use shrinkage priors for the regression coefficients. An early example of this approach is the Bayesian lasso introduced by \citet{park2008bayesian}. They performed a full Bayesian analysis analogous to the lasso, interpreting $\| \mathbf{Y} - \mathbf{X}\boldsymbol{\beta} \|^2$ in \eqref{eq:obj_fun} as a multiple of the negative log-likelihood and the penalty function $\lambda \sum_{i=1}^p |\beta_i|$ as the negative logarithm of a double-exponential prior for $\beta_i$. Following a similar idea, \citet{li2010bayesian} proposed the Bayesian elastic net.
Unlike the spike-and-slab priors, shrinkage priors cannot naturally produce exact zero estimates of the regression coefficients with positive probability. Thus a critical question to answer when using shrinkage priors for variable selection is how to actually select relevant variables. \citet{li2010bayesian} presented the credible interval criterion, which selects predictor $\mathbf{x}_i$ if the credible interval of $\beta_i$ does not cover 0. A criterion called the scaled neighborhood criterion is also considered in \citet{li2010bayesian}; it selects predictor $\mathbf{x}_j$ if the posterior probability of $\beta_j$ belonging to $[-\sqrt{\mathrm{Var}(\beta_j \, |\, \mathbf{Y})}, \sqrt{\mathrm{Var}(\beta_j \, | \, \mathbf{Y})}]$ is less than a certain threshold. These authors did not address the issue of any oracle property of their procedures. \citet{bondell2012consistent} employed conjugate normal priors for $\beta_i$ and used sparse solutions within posterior credible regions to perform selection. They gave a theoretical proof of variable selection consistency of their method. Recently, \citet{hahn2015decoupling} proposed selecting variables by minimizing the ``decoupled shrinkage and selection'' loss function after finding the posterior mean of $\boldsymbol{\beta}$. However, a surrogate optimization problem has to be used since the original one is intractable in the presence of a moderate to large number of predictors.
In this paper, we consider the problem of variable selection and estimation using global-local shrinkage priors. The prior of \citet{park2008bayesian} arises as a special case. Specifically, we assume the prior distribution of $\beta_i$ is a scale mixture of normals:
$$\beta_i \, | \, \gamma_i \overset{ind}{\sim} \mbox{N}(0, \sigma^2\gamma_i\tau), ~ \gamma_i \overset{ind}{\sim} \pi(\gamma_i).$$ These priors approximate the spike-and-slab priors, but are symmetric, unimodal and absolutely continuous. They place significant probability mass around zero and have heavy tails to signify the inclusion of relevant variables. The local parameters $\gamma_i$ control the degree of shrinkage of each individual $\beta_i$, while the global parameter $\tau$ causes an overall shrinkage. We give a list of such priors in a later section. The list includes not only the now famous horseshoe prior of \citet{carvalho2010horseshoe}, but several other priors considered, for example, in \citet{griffin2010inference,griffin2011bayesian}, \citet{polson2010shrink,polson2012half} and \citet{armagan2011generalized,ADL2012}. We also find it convenient to classify the priors $\pi(\gamma_i)$ into two subclasses: those with exponential tails and those with polynomial tails. We propose a thresholding procedure to select relevant variables in the model. It turns out that the theoretical properties of our proposed method are closely related to the tails of $\pi(\gamma_i)$. As we will show in the subsequent sections, if polynomial-tailed priors are used, the proposed method attains the oracle property for a certain choice of $\tau$, in the same sense as the adaptive lasso. In contrast, the exponential-tailed priors, while attaining variable selection consistency for some choice of $\tau$, fail to attain asymptotic normality at the $\sqrt{n}$ rate.
The outline of the remaining sections is as follows. The general class of shrinkage priors and the thresholding procedure are described in Section 2. In Section 3, we present the theoretical properties of the proposed method for orthogonal designs. Section 4 contains some simulation results. Some final remarks are made in Section 5. The technical proofs of the results are deferred to the Appendix.
We want to highlight some of the findings of our paper and also compare and contrast them with some of the other variable selection procedures. The thresholding approach used in our paper attains simultaneous variable selection and estimation with exact zero estimates of some of the regression coefficients, similar to the original lasso. While variable selection consistency has been addressed in a large number of papers, including situations where $p\gg n$, the asymptotic normality of the vector of non-zero regression coefficients, to our knowledge, has not been considered earlier in this generality. Moreover, although the exponential-tailed priors, including those of \citet{park2008bayesian}, have been addressed quite frequently in the literature, their asymptotic non-optimality, as pointed out here, has not been addressed before. Finally, despite the fact that the oracle properties are proved only for orthogonal designs, the proposed selection mechanism works for non-orthogonal designs as well, as shown in the simulations of Section 4.
\section{Global-local shrinkage priors and the proposed method} For clarity, we reiterate the model considered in this article: \begin{align} \tag{M1}\label{eq:model1} \mathbf{Y} &\sim N(\mathbf{X}\boldsymbol\beta, \sigma^2\mathbf{I}_n),\\
\tag{M2}\label{eq:model2} \beta_i | \gamma_i & \overset{ind}{\sim} N(0, \sigma^2\gamma_i \tau),~ i= 1, \ldots, p,\\ \gamma_i & \overset{ind}{\sim} \pi(\gamma_i),~ i = 1, \ldots, p.\tag{M3}\label{eq:model3} \end{align} Throughout this article, we assume $p = p_n \leq n$.
Many priors in the Bayesian literature can be expressed in the form of a scale mixture of normals, as in \eqref{eq:model2} and \eqref{eq:model3}. Table \ref{table:priors}, given later in Section 3, presents a list of such priors (of $\beta_i$) and the corresponding form of $\pi(\gamma_i)$. By employing two levels of parameters to express the variances in \eqref{eq:model2}, the global-local shrinkage priors assign large probabilities around zero while assigning non-trivial probabilities to values far from zero. The global parameter $\tau$ tends to shrink all $\beta_i$'s towards zero. At the same time, the local parameters $\gamma_i$ control the degree of shrinkage of each individual $\beta_i$. If $\pi(\gamma_i)$ is appropriately heavy tailed, the coefficients of important variables can be left almost unshrunk.
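As a concrete illustration of \eqref{eq:model2} and \eqref{eq:model3}, if $\pi(\gamma_i) = b\exp(-b\gamma_i)$, then by the standard scale-mixture representation of the Laplace distribution the marginal prior of $\beta_i$ is double-exponential,
$$\int_0^\infty \frac{1}{\sqrt{2\pi\sigma^2\tau\gamma_i}} \exp\left(-\frac{\beta_i^2}{2\sigma^2\tau\gamma_i}\right) b e^{-b\gamma_i}\, d\gamma_i = \frac{\sqrt{2b}}{2\sigma\sqrt{\tau}}\exp\left(-\frac{\sqrt{2b}\,|\beta_i|}{\sigma\sqrt{\tau}}\right),$$
recovering, up to the parametrization, the double-exponential prior used in the Bayesian lasso of \citet{park2008bayesian}.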
In the same spirit as \citet{park2008bayesian}, placing a prior on $\beta_i$ is closely related to adding a penalty term in $\beta_i$ to the ordinary least squares objective function, so the properties of penalized regression estimators can shed light on the features of the Bayesian estimator of $\beta_i$. The proof of Theorem 2 in \citet{zou2006adaptive} implies that under mild conditions, \begin{align} \frac{\hat{\beta}_{i}^{adap}}{\hat{\beta}_{i}}\stackrel{p}{\rightarrow}\begin{cases} 0, & \text{when }\beta_{i}^{0}=0,\\ 1, & \text{when }\beta_{i}^{0}\neq0, \end{cases} \label{eq:ratio_converge} \end{align} where $\beta_i^0$ is the true value of $\beta_i$ and $\hat{\beta}_{i}$ is the ordinary least squares estimator of $\beta_{i}$. This indicates that the adaptive lasso estimator for the coefficient of an irrelevant variable converges to zero faster than the least squares estimator. In fact, \eqref{eq:ratio_converge} still holds if the adaptive lasso estimator is replaced by any penalized regression estimator that has the oracle property as defined in \citet{zou2006adaptive}. Many of these estimators can also be interpreted as posterior modes under priors specified as in \eqref{eq:model1}-\eqref{eq:model3}. Due to the asymptotic closeness of posterior means and posterior modes under such priors, one can threshold the ratio of the posterior mean to the least squares estimator to obtain an oracle variable selection procedure even though the posterior mean is not sparse. Motivated by this, we propose to select predictor $\mathbf{x}_{i}$ if \begin{equation} \abs{\frac{\hat{\beta}_{i}^{PM}}{\hat{\beta}_{i}}}>\frac{1}{2},\label{eq:Criterion} \end{equation} where $\hat{\beta}_{i}^{PM}$ is the posterior mean of $\beta_{i}$ under a given shrinkage prior. We refer to this procedure as Half Thresholding (HT) and define the HT estimator of $\beta_i$ as $$ \hat{\beta}_i^{HT}=\hat{\beta}_i^{PM}I\left(\abs{\frac{\hat{\beta}_{i}^{PM}}{\hat{\beta}_{i}}}>\frac{1}{2}\right). $$
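For concreteness, once the posterior means $\hat{\beta}_i^{PM}$ and the least squares estimates $\hat{\beta}_i$ are available, the selection rule \eqref{eq:Criterion} can be implemented in a few lines. The following Python sketch is purely illustrative; the function and array names are our own and are not part of the formal development.
\begin{verbatim}
import numpy as np

def half_threshold(beta_pm, beta_ols):
    # beta_pm : posterior means of the regression coefficients
    # beta_ols: ordinary least squares estimates (assumed nonzero)
    ratio = np.abs(beta_pm / beta_ols)        # |beta_i^PM / beta_i|
    selected = ratio > 0.5                    # HT selection rule
    beta_ht = np.where(selected, beta_pm, 0.0)
    return beta_ht, selected
\end{verbatim}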
Our proposed HT procedure is simple and easy to implement. Once the posterior mean and the ordinary least square estimate of $\boldsymbol{\beta}$ are obtained, variable selection can be performed without any extra optimization step as required for example in \citet{bondell2012consistent} and \citet{hahn2015decoupling}. Besides its simplicity, as we will show in the next section, the HT procedure enjoys oracle properties for orthogonal designs if the global parameter $\tau$ and the prior of $\gamma_i$ are chosen appropriately.
\section{Theoretical Results} We consider two general types of priors $\pi(\gamma_i)$ given by \begin{align} \pi\left(\gamma_{i}\right)&=\gamma_{i}^{-a-1}L\left(\gamma_{i}\right),~a>0,\tag{P}\label{eq:polynomial}\\ \pi\left(\gamma_{i}\right)&=\exp\left(-b\gamma_{i}\right)L\left(\gamma_{i}\right),~b>0,\tag{E}\label{eq:exponential} \end{align} where $L\left(\cdot\right)$ is a nonnegative slowly varying function in Karamata's sense \citep[p.~6]{bingham1987regular} defined on $\left(0,\infty\right)$. We will call the priors of the form \eqref{eq:polynomial} and \eqref{eq:exponential} polynomial-tailed priors and exponential-tailed priors, respectively. As we will show later in this section, the theoretical performance of the HT method is closely related to the tails of the prior of $\gamma_i$. Table \ref{table:priors} provides a list of commonly used scale mixture priors of $\beta_i$ and the corresponding form of $\pi(\gamma_i)$. The Class column indicates which class each prior belongs to, and the corresponding form of $L$ is given in the last column. The half-hyperbolic and positive logistic distributions are included in the list as examples of exponential-tailed distributions other than the exponential distribution, although they have not been used as priors in the literature.
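For example, the horseshoe prior corresponds to $\pi(\gamma_i) \propto \gamma_i^{-1/2}(1+\gamma_i)^{-1} = \gamma_i^{-3/2}\cdot\frac{\gamma_i}{1+\gamma_i}$, which is of the form \eqref{eq:polynomial} with $a = 1/2$ and slowly varying part $L(\gamma_i) \propto \gamma_i/(1+\gamma_i)$, matching the corresponding row of Table \ref{table:priors}.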
\renewcommand{\arraystretch}{2} \begin{table}[ht] \scriptsize \centering \begin{tabular}{cccc}
\toprule Prior & $\pi(\gamma_i)/C$ & Class & $L(\gamma_i)/C$ \\
\midrule
Double-exponential & $\exp\{-b\gamma_i\}$ & E & 1 \\ Half-hyperbolic & $\exp\left\{-b\sqrt{1+\gamma_i^2}\right\}$ & E & $\exp\left\{b\gamma_i-b\sqrt{1+\gamma_i^2}\right\}$ \\ Positive Logistic & $\exp(b\gamma_i)\{1+\exp(b\gamma_i)\}^{-2}$ & E & $\exp(2b\gamma_i)\{1+\exp(b\gamma_i)\}^{-2}$\\ Student's T & $\gamma_i^{-a-1}\exp(-{a}/{\gamma_i})$ & P & $\exp(-a/\gamma_i)$ \\ Horseshoe & $\gamma_i^{-1/2}(1+\gamma_i)^{-1}$ & P & $\gamma_i / (1 + \gamma_i)$ \\ Horseshoe+ & $ \gamma_i^{-1/2}(\gamma_i-1)^{-1}\log(\gamma_i)$ & P & $\gamma_i (\gamma_i - 1)^{-1}\log(\gamma_i)$\\ NEG & $ \left(1+\gamma_i\right)^{-1-a} $ & P & $\{\gamma_i/(1+\gamma_i)\}^{a+1}$ \\ TPBN & $\gamma_i^{u-1}(1+\gamma_i)^{-a-u}$ & P & $\{\gamma_i/(1+\gamma_i)\}^{a+u}$ \\ GDP & $\int_0^\infty \frac{\lambda^2}{2} \exp\left(-\frac{\lambda^2\gamma_i}{2}\right)\lambda^{2a-1}\exp(-\eta\lambda)d\lambda$ & P & $\int_0^{\infty} t^{a} \exp(-t-\eta\sqrt{2t/\gamma_i})dt$ \\ HIB & {$\gamma_i^{u-1}(1+\gamma_i)^{-(a+u)}\exp\left\{-\frac{s}{1+\gamma_i}\right\}\left\{\phi^2+\frac{1-\phi^2}{1+\gamma_i}\right\}^{-1}$} & P & {$\{\gamma_i/(1+\gamma_i)\}^{a+u}\exp\left\{-\frac{s}{1+\gamma_i}\right\}\left\{\phi^2+\frac{1-\phi^2}{1+\gamma_i}\right\}^{-1}$}\\
\bottomrule \end{tabular}
\caption{A list of scale mixture of normals shrinkage priors of $\beta_i$. $C$ is a generic constant. NEG: normal exponential gamma priors \citep{GB2005}, TPBN: three parameter beta normal priors \citep{armagan2011generalized}, GDP: generalized double Pareto priors \citep{ADL2012}, HIB: hypergeometric inverted beta priors \citep{polson2012half}.} \label{table:priors} \end{table} \renewcommand{\arraystretch}{1}
In this section, we will assume the design matrix is orthogonal, that is, $\mathbf{X}^T\mathbf{X}=n\mathbf{I}_p$. With this assumption, $$E(\beta_i \, | \, \gamma_i, \tau, \mathbf{Y}) = \frac{n\tau\gamma_i}{n\tau\gamma_i + 1} \hat{\beta}_i = (1-s_i)\hat{\beta}_i,$$ where $s_i = 1/(1+ n\tau\gamma_i)$ is the shrinkage factor. By the law of iterated expectations, $$\hat{\beta}_i^{\text{PM}} = E(\beta_i \, | \, \mathbf{Y}) = (1-E(s_i \, | \, \mathbf{Y}))\hat{\beta}_i.$$ Therefore, under the orthogonal design assumption, the selection criterion \eqref{eq:Criterion} of the proposed method simplifies to \begin{equation} \label{eq:Criterion_simple}
1-E(s_i \, | \, \mathbf{Y}) > 1/2. \end{equation} A similar procedure was considered by \citet{ghosh2015asymptotic} in the multiple testing context.
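For completeness, the identity $E(\beta_i \,|\, \gamma_i, \tau, \mathbf{Y}) = (1-s_i)\hat{\beta}_i$ used above follows from normal--normal conjugacy: under $\mathbf{X}^T\mathbf{X} = n\mathbf{I}_p$ we have $\hat{\beta}_i \,|\, \beta_i \sim N(\beta_i, \sigma^2/n)$, so that
$$\beta_i \,|\, \gamma_i, \tau, \mathbf{Y} \sim N\left( \frac{n/\sigma^2}{n/\sigma^2 + 1/(\sigma^2\tau\gamma_i)}\,\hat{\beta}_i,\ \frac{1}{n/\sigma^2 + 1/(\sigma^2\tau\gamma_i)} \right), \qquad \frac{n/\sigma^2}{n/\sigma^2 + 1/(\sigma^2\tau\gamma_i)} = \frac{n\tau\gamma_i}{1+n\tau\gamma_i}.$$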
Following \citet{fan2001variable} and \citet{zou2006adaptive}, we say a variable selection procedure is oracle if it achieves both variable selection consistency and the optimal estimation rate. Let $\mathcal{A} = \{j: \beta_j^0 \neq 0\}$ and $\mathcal{A}_n = \{j: \hat{\beta}_j^{HT} \neq 0\}$. Variable selection consistency means $$\lim_{n \rightarrow \infty}P(\mathcal{A}_n = \mathcal{A})=1,$$ while the optimal estimation rate means $$\sqrt{n}(\hat{\boldsymbol{\beta}}_{\mathcal{A}}^{{HT}}-\boldsymbol{\beta}_{\mathcal{A}}^0) \overset{d}{\rightarrow} N(\mathbf{0}, \sigma^2\mathbf{I}_{p_0}),~\text{as}~n\rightarrow\infty,$$ where $p_0$ is the cardinality of $\mathcal{A}$, which does not depend on $n$.
Another thing to clarify before the presentation of our theoretical results is the treatment of the global parameter $\tau$. In \citet{datta2013asymptotic} and part of the results of \citet{ghosh2015asymptotic}, $\tau$ was treated as a tuning parameter. \citet{carvalho2010horseshoe} considered a full Bayesian treatment and a half-Cauchy prior for the global parameter. \citet{ghosh2015asymptotic} also provided some results when an empirical Bayes estimate of the global parameter is used. In this article, we treat $\tau$ as a tuning parameter or assume a hyper-prior for it. To distinguish the two treatments, we will write $\tau$ as $\tau_n$ when it is a tuning parameter.
\subsection{Properties of shrinkage factors}
By \eqref{eq:Criterion_simple}, the HT procedure is closely related to the shrinkage factor $s_i$, so we present its properties first.
\begin{proposition}\label{prop:s_consistency} Suppose the prior of $\gamma_i$ is proper. For $i \not\in \mathcal{A}$, if $n\tau_{n}\rightarrow0$, as $n\rightarrow\infty$, then
$E(1 - s_{i} \, | \, \tau_n, \mathbf{Y}) \overset{p}{\rightarrow} 0$ as $n\rightarrow\infty$. For $i \in \mathcal{A}$, \begin{enumerate}
\item if $\gamma_i$ has a polynomial-tailed prior described in \eqref{eq:polynomial} and $n\tau_n \rightarrow 0$, $\log(\tau_n)/n \rightarrow 0$ as $n\rightarrow\infty$, then $E(1-s_i\, |\, \tau_n, \mathbf{Y}) \overset{p}{\rightarrow} 1$, as $n \rightarrow \infty$.
\item if $\gamma_i$ has an exponential-tailed prior described in \eqref{eq:exponential} and $n\tau_n \rightarrow 0$ and $n^2\tau_n \rightarrow \infty$ as $n\rightarrow\infty$, then $E(1 - s_i \, |\, \tau_n, \mathbf{Y}) \overset{p}{\rightarrow} 1$, as $n\rightarrow \infty$. \end{enumerate} \end{proposition}
Proposition \ref{prop:s_consistency} shows that, regardless of the choice of the prior of $\gamma_i$ within the given class, the HT procedure can identify an irrelevant variable correctly if $\tau_n$ goes to zero at a rate faster than $n^{-1}$. On the other hand, $\tau_n$ should not converge to zero too fast, in order to avoid overshrinkage and to correctly identify relevant variables. The conditions $\log(\tau_n)/n \rightarrow 0$ and $n^2\tau_n \rightarrow \infty$ serve this purpose for polynomial-tailed priors and exponential-tailed priors, respectively. Given $n\tau_n \rightarrow 0$, the condition $n^2 \tau_n \rightarrow \infty$ is more stringent than $\log(\tau_n)/n \rightarrow 0$. Intuitively, this has to be the case since exponential tails are lighter. To guarantee that the coefficients of important variables are not overly shrunk, the global parameter should decay at a slower rate to compensate for the amount of shrinkage induced by the exponential-tailed local parameters.
\subsection{Polynomial-tailed priors} Before presenting the main results for the HT procedure with polynomial-tailed priors, we introduce an assumption that will be frequently used in the rest of the section. We say a sequence of positive real numbers $\{t_n\}_{n = 1}^\infty$ satisfies the poly-$a$ condition if there exists $\epsilon \in (0, a)$ such that \begin{equation*} p_n(nt_n)^\epsilon \rightarrow 0~\mbox{and}~\log ( t_n )/ \sqrt{n} \rightarrow 0,~\mbox{as}~n \rightarrow \infty. \end{equation*} For instance, $\tau_n = n^{-1-\frac{2}{a}}$ satisfies the poly-$a$ condition since $p_n \leq n$, as verified below. If $p$ does not vary with $n$, the condition simplifies to $nt_n \rightarrow 0$ and $\log (t_n) / \sqrt{n} \rightarrow 0$.
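Indeed, for $\tau_n = n^{-1-2/a}$, choose any $\epsilon \in (a/2, a)$; then
$$p_n(n\tau_n)^\epsilon \leq n \cdot n^{-2\epsilon/a} = n^{1-2\epsilon/a} \rightarrow 0 \quad \mbox{and} \quad \frac{\log(\tau_n)}{\sqrt{n}} = -\left(1+\frac{2}{a}\right)\frac{\log n}{\sqrt{n}} \rightarrow 0,$$
so both requirements of the poly-$a$ condition hold.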
\begin{theorem}\label{thm:tuning_poly} Suppose a proper polynomial-tailed prior of the form \eqref{eq:polynomial} is assumed for $\gamma_i, i=1, \ldots, p$ with $0<a<1$. If $\{\tau_n\}_{n=1}^\infty$ satisfies the poly-$a$ condition, then the HT procedure is oracle. \end{theorem}
As Theorem \ref{thm:tuning_poly} demonstrates, if $\tau_n$ is chosen to decay to zero at an appropriate rate, the HT procedure has the oracle property. This suggests that if a hyperprior $\pi_n^{\tau}$ of $\tau$ concentrates most of its probability mass in an interval whose end points satisfy the poly-$a$ condition, then the HT procedure should still enjoy the oracle property. With this observation, we have the following result. \begin{corollary}\label{thm:prior_poly} Suppose that a proper polynomial-tailed prior of the form \eqref{eq:polynomial} is assumed for $\gamma_i, i=1, \ldots, p$ with $0<a<1$. We also place a prior $\pi_n^{\tau}$ with support $(\xi_n, \psi_n)$ on $\tau$. If both $\{\xi_n\}_{n=1}^\infty$ and $\{\psi_n\}_{n=1}^\infty$ satisfy the poly-$a$ condition, then the HT procedure is oracle. \end{corollary}
\subsection{Exponential-tailed priors} Now we examine the properties of the HT procedure when exponential-tailed priors are assumed for the local parameters $\gamma_i$. \begin{theorem}\label{thm:tuning_exp_vs} Suppose a proper exponential-tailed prior of the form \eqref{eq:exponential} is assumed for $\gamma_i, i=1, \ldots, p$ and $\int_0^\infty \gamma_i \pi(\gamma_i) d\gamma_i < \infty$. If $n\tau_n \rightarrow 0$, $n^2\tau_n \rightarrow \infty$ and $\frac{p_n n \tau_n}{\sqrt{-\log(n\tau_n)}} \rightarrow 0$, as $n\rightarrow \infty$, then the HT procedure achieves variable selection consistency. \end{theorem}
\noindent
{\bf Remark.} Let $\tau_n = \log\log n/n^2$. It satisfies the conditions in Theorem \ref{thm:tuning_exp_vs}, as verified below. Similar to what we mentioned in Section 3.1, these conditions are more stringent than the poly-$a$ condition since exponential tails are lighter than polynomial ones. If $p$ does not depend on $n$, the conditions simplify to $n\tau_n \rightarrow 0$ and $n^2\tau_n \rightarrow \infty$, as $n \rightarrow \infty$. If $\pi(\gamma_i) = \exp(-\gamma_i)$, then the marginal prior of $\beta_i$ is proportional to $\exp(-|\beta_i|/\sqrt{\tau_n})$. Thus $1/\sqrt{\tau_n}$ corresponds to the penalty parameter $\lambda_n$ in the lasso estimator. The two conditions on $\tau_n$, assuming fixed $p$, translate to $\lambda_n/\sqrt{n} \rightarrow \infty$ and $\lambda_n/n \rightarrow 0$, which is a sufficient condition for the lasso estimator to be model selection consistent assuming the irrepresentable condition \citep{zou2006adaptive}.
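Indeed, for $\tau_n = \log\log n/n^2$ and $p_n \leq n$,
$$n\tau_n = \frac{\log\log n}{n} \rightarrow 0, \quad n^2\tau_n = \log\log n \rightarrow \infty, \quad \frac{p_n n\tau_n}{\sqrt{-\log(n\tau_n)}} \leq \frac{\log\log n}{\sqrt{\log n - \log\log\log n}} \rightarrow 0,$$
so all three conditions of Theorem \ref{thm:tuning_exp_vs} hold.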
\begin{theorem}\label{thm:tuning_exp_an} Suppose a proper exponential-tailed prior of the form \eqref{eq:exponential} is assumed for $\gamma_i, i=1, \ldots, p$, and there exist $0 < m \leq M < \infty$ such that $m < L(t) < M$ for all $t \in (0, \infty)$. If $n\tau_n \rightarrow 0$ and $n^2\tau_n \rightarrow \infty$, then for $i \in \mathcal{A}$, with probability 1, $$\frac{m}{M}S^{(i)}_n \leq n\sqrt{\tau_{n}}\left(\hat{\beta}_{i}^{HT}-\beta_{i}^{0}\right) \leq \frac{M}{m}S^{(i)}_n,$$ where $\{S^{(i)}_n, n\geq 1\}$ are sequences of random variables with $S^{(i)}_n \overset{p}{\rightarrow} -\sqrt{2b}\sigma \sign(\beta_i^0)$. \end{theorem} \noindent {\bf Remark.} Theorem \ref{thm:tuning_exp_vs} shows that, with exponential-tailed priors on the local parameters, the HT procedure can achieve variable selection consistency when $\tau_n$ vanishes at a certain rate. However, Theorem \ref{thm:tuning_exp_an} tells us that the procedure cannot achieve the optimal estimation rate with $\tau_n$ decaying at this rate. The boundedness condition in Theorem \ref{thm:tuning_exp_an} looks restrictive, but all three exponential-tailed distributions listed in Table \ref{table:priors} satisfy this condition. If $\beta_i$ has a double-exponential prior as in \citet{park2008bayesian}, then $L(t) = 1$ for all $t > 0$. In this case, we have $$n\sqrt{\tau_{n}}\left(\hat{\beta}_{i}^{HT}-\beta_{i}^{0}\right)\overset{p}{\rightarrow} -\sqrt{2b}\sigma \sign(\beta_i^0).$$
Next we will show that the HT procedure with exponential-tailed priors does not have the oracle property for other choices of $\tau_n$ either. \begin{proposition}\label{prop:not_vs} If the prior of $\gamma_i$, $\pi(\gamma_i)$, satisfies the condition \begin{equation}\label{eq:tuning_exp_cond1} \int_0^{\infty} \gamma_i^{-1/2}\pi(\gamma_i)d\gamma_i < \infty, \textrm{ for } i=1,\ldots,p, \end{equation} then the HT procedure cannot achieve variable selection consistency when $n\tau_n \rightarrow c \in (0, \infty]$ as $n \rightarrow \infty$. \end{proposition}
\noindent {\bf Remark.} Proposition \ref{prop:not_vs} holds for both polynomial-tailed and exponential-tailed priors. The finite integral condition is not very restrictive for exponential-tailed priors of $\gamma_i$. In fact, the three exponential-tailed distributions in Table \ref{table:priors} satisfy the condition. However, it excludes the horseshoe prior and some other priors in the polynomial-tailed class.
\begin{proposition}\label{prop:not_an} Suppose $\pi(\gamma_i)$ is a proper exponential-tailed prior of the form \eqref{eq:exponential} and there exist $0 < m \leq M < \infty$ such that \begin{equation}\label{eq:tuning_exp_cond2} m < L(t) < M\textrm{ for all }t \in (0, \infty). \end{equation} If $n\tau_n \rightarrow 0$ as $n \rightarrow \infty$, the HT procedure cannot achieve the optimal estimation rate. \end{proposition} \noindent {\bf Remark.} The proof of Proposition \ref{prop:not_an} implies that the HT procedure over-shrinks the nonzero coefficients, so that the convergence rate is slower than $n^{1/2}$.
Combining the above propositions, we have the following theorem: \begin{theorem}\label{thm:tuning_exp} If $\pi(\gamma_i)$ is a proper exponential-tailed prior of the form \eqref{eq:exponential} and it satisfies conditions \eqref{eq:tuning_exp_cond1} and \eqref{eq:tuning_exp_cond2}, then the HT procedure does not have the oracle property for any choice of $\tau_n$. \end{theorem}
\noindent {\bf Remark.} As a special case, Theorem \ref{thm:tuning_exp} implies that the HT procedure lacks the oracle property if the prior introduced in \citet{park2008bayesian} is used.
\section{Simulation Results} In this section we apply the HT technique to the TPBN priors and the double-exponential (DE) priors, and compare them with the lasso, the adaptive lasso, and the least squares estimator in terms of prediction performance and variable selection accuracy, when applicable. TPBN priors are used with three different sets of hyper-parameters: $a=0.5,u=0.5$, which gives the horseshoe prior; $a=0.5,u=0.1$, which places more probability mass around 0 than the horseshoe; and $a=0.5,u=1$, which results in the normal-exponential-gamma (NEG) prior. We denote these three priors as TPBN-HS, TPBN-0.1 and TPBN-NEG, respectively. We generate data from $\mathbf{Y}=\mathbf{X}{\boldsymbol{\beta}}+\boldsymbol{\epsilon}$, where $\boldsymbol{\epsilon}\sim{N}_{n}\left(0,\sigma^{2}\mathbf{I}_{n}\right)$. To compare the prediction performance, we report the relative prediction error (RPE) $E\left[\left(\hat{y}-\mathbf{x}^{T}{\boldsymbol{\beta}}\right)^{2}\right]/\sigma^{2}$. To measure the variable selection accuracy, we use the misclassification error, which is the proportion of variables incorrectly identified.
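To fix ideas, the two evaluation criteria can be computed as in the schematic Python functions below (illustrative only; the function names and the use of a held-out test design are our own choices):
\begin{verbatim}
import numpy as np

def relative_prediction_error(X_test, beta_hat, beta_true, sigma):
    # Monte Carlo estimate of RPE = E[(yhat - x'beta)^2] / sigma^2
    pred_err = np.mean((X_test @ beta_hat - X_test @ beta_true) ** 2)
    return pred_err / sigma ** 2

def misclassification_error(beta_hat, beta_true):
    # proportion of variables whose selection status is wrong
    return np.mean((beta_hat != 0) != (beta_true != 0))
\end{verbatim}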
We use the LARS algorithm \citep{efron2004least} to fit both the lasso and the adaptive lasso. The penalty parameter $\lambda$ is chosen by 5-fold cross-validation. The parameter $\gamma$ in the adaptive lasso is fixed at one, so that the adaptive weights are the reciprocals of the absolute least squares estimates. The Bayesian lasso and TPBN methods are fit by Markov chain Monte Carlo using the \texttt{rstan} package \citep{stan_development_team_stan:_2014}. The first example below was used in \citet{zou2006adaptive}.
Example 1 (A few large effects and a few 0s) We let \[ {\boldsymbol{\beta}}=\left(3,1.5,0,0,2,0,0,0\right), \] the columns of $\mathbf{X}$ are normal vectors, and the pairwise correlation between $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ is 0.5 for $i\neq j$. We let $\sigma=1,3,5$ and $n=20,50,80$ to compare the performance for varying signal-to-noise ratios and sample sizes.
Example 2 (A few large effects, a few small effects, and a few 0s) We let \[ \boldsymbol{\beta}=\left(3,1.5,0.1,0.01,2,0,0,0\right), \] and the rest of the setup is the same as in Example 1.
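The data sets for both examples can be generated as in the following sketch (the unit variances of the predictors and the function name are our own illustrative choices):
\begin{verbatim}
import numpy as np

def generate_data(n, beta, sigma, rho=0.5, rng=None):
    # predictors are jointly normal with pairwise correlation rho;
    # the response is Y = X beta + N(0, sigma^2 I_n)
    rng = np.random.default_rng() if rng is None else rng
    p = len(beta)
    cov = rho * np.ones((p, p)) + (1.0 - rho) * np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    Y = X @ beta + sigma * rng.standard_normal(n)
    return X, Y

beta1 = np.array([3, 1.5, 0, 0, 2, 0, 0, 0])       # Example 1
beta2 = np.array([3, 1.5, 0.1, 0.01, 2, 0, 0, 0])  # Example 2
X, Y = generate_data(n=50, beta=beta1, sigma=3)
\end{verbatim}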
For each example, we generate 50 training and corresponding testing data sets. Each model is fit on the training set and the RPE is computed on the test set. We summarize the RPE results in Table \ref{table:RPE} and use the asterisk to denote the model with the smallest RPE. A couple of observations can be made from Table \ref{table:RPE}: \begin{itemize}
\item In general, the HT methods with TPBN priors are the best in prediction for both examples, especially with $u=0.1$ which gives the lowest RPE in 5 out of 9 cases.
\item A smaller $u$ works better when the signal to noise ratio is low and the sample size is big, while a larger $u$ results in better prediction performance when the signal to noise ratio is high.
\item TPBN-0.1 predicts better than the adaptive lasso in almost all cases for both examples.
\item The performance of the HT method with DE priors is comparable to that of the lasso in terms of prediction. \end{itemize}
The mean misclassification errors are summarized in Table \ref{table:mis}. We also give the number of predictors chosen by each method in each setting in Table \ref{table:number_predictors}. TPBN-0.1 is the best model for variable selection in general: it has a slightly lower misclassification error than the adaptive lasso and tends to be more parsimonious. The TPBN priors with larger $u$, including the horseshoe prior, select significantly more variables and thus result in a much higher misclassification error, since they place less probability mass around 0. The DE prior also tends to select most of the variables and does not perform well for variable selection, which agrees with our theoretical results. Finally, the lasso, which does not satisfy the oracle property, is not as accurate as the adaptive lasso and TPBN-0.1.
\begin{table}[htb] \centering \caption{Median relative prediction error (RPE) for seven methods in two simulation examples, based on 50 replications.} \label{table:RPE} \small \begin{tabular}{clllllllllll} \hline
& \multicolumn{3}{c}{$n=20$} & \phantom{0}& \multicolumn{3}{c}{$n=50$} & \phantom{0} & \multicolumn{3}{c}{$n=80$} \\
\hline
& $\sigma=1$ & $\sigma=3$ & $\sigma=5$ && $\sigma=1$ & $\sigma=3$ & $\sigma=5$ && $\sigma=1$ & $\sigma=3$ & $\sigma=5$ \\ \hline Example\text{ }1\\ LS & 1.74 & 1.78 & 1.75 & & 1.20 & 1.20 & 1.21 & & 1.13 & 1.13 & 1.13 \\
lasso & 1.54 & 1.50 & 1.51 & & 1.16 & 1.14 & 1.15 & & 1.12 & 1.11 & 1.10 \\
adap lasso & 1.96 & 1.72 & 1.60 & & 1.18 & 1.18 & 1.17 & & 1.15 & 1.10 & 1.13 \\
DE & 1.59 & 1.40 & 1.42* & & 1.18 & 1.14 & 1.16 & & 1.12 & 1.11 & 1.10 \\
TPBN-HS & 1.52 & 1.38* & 1.46 & & 1.14 & 1.12 & 1.13* & & 1.11 & 1.08 & 1.08* \\
TPBN-0.1 & 1.36* & 1.48 & 1.58 & & 1.10* & 1.11* & 1.16 & & 1.07* & 1.07* & 1.09 \\
TPBN-NEG & 1.60 & 1.38* & 1.45 & & 1.17 & 1.14 & 1.14 & & 1.12 & 1.10 & 1.10 \\
\hline Example\text{ }2\\ LS & 1.60 & 1.62 & 1.90 & & 1.22 & 1.22 & 1.24 & & 1.09 & 1.11 & 1.12 \\
lasso & 1.56 & 1.51 & 1.42* & & 1.17 & 1.16 & 1.18 & & 1.09 & 1.10 & 1.09* \\
adap lasso & 1.61 & 1.99 & 1.67 & & 1.20 & 1.21 & 1.19 & & 1.08 & 1.11 & 1.12 \\
DE & 1.52 & 1.53 & 1.47 & & 1.19 & 1.16 & 1.16 & & 1.09 & 1.10 & 1.09* \\
TPBN-HS & 1.38 & 1.46* & 1.49 & & 1.16 & 1.12 & 1.15* & & 1.08 & 1.08 & 1.09* \\
TPBN-0.1 & 1.34* & 1.73 & 1.57 & & 1.13* & 1.11* & 1.21 & & 1.07* & 1.07* & 1.11 \\
TPBN-NEG & 1.47 & 1.50 & 1.42* & & 1.19 & 1.15 & 1.15* & & 1.09 & 1.09 & 1.09* \\
\hline
\end{tabular} \end{table} \begin{table}[ht] \centering \caption{Mean misclassification error based on 50 replications.} \label{table:mis} \small \begin{tabular}{cccccccccccc} \hline
& \multicolumn{3}{c}{$n=20$} & \phantom{0}& \multicolumn{3}{c}{$n=50$} & \phantom{0} & \multicolumn{3}{c}{$n=80$} \\
\hline
& $\sigma=1$ & $\sigma=3$ & $\sigma=5$ && $\sigma=1$ & $\sigma=3$ & $\sigma=5$ && $\sigma=1$ & $\sigma=3$ & $\sigma=5$ \\
\hline
Example\text{ }1\\ lasso & 0.30 & 0.26 & 0.24 & & 0.31 & 0.25 & 0.28 & & 0.37 & 0.30 & 0.32 \\
adap lasso & 0.16 & 0.19 & 0.28 & & 0.19 & 0.16 & 0.17 & & 0.17 & 0.18 & 0.16 \\
DE & 0.53 & 0.37 & 0.34 & & 0.60 & 0.47 & 0.43 & & 0.59 & 0.56 & 0.49 \\
TPBN-HS & 0.41 & 0.25 & 0.27 & & 0.51 & 0.27 & 0.25 & & 0.55 & 0.37 & 0.32 \\
TPBN-0.1 & 0.12 & 0.19 & 0.28 & & 0.12 & 0.09 & 0.18 & & 0.10 & 0.08 & 0.16 \\
TPBN-NEG & 0.52 & 0.33 & 0.30 & & 0.60 & 0.47 & 0.39 & & 0.59 & 0.57 & 0.47 \\
\hline
Example\text{ }2\\
lasso & 0.41 & 0.30 & 0.37 & & 0.37 & 0.40 & 0.39 & & 0.35 & 0.37 & 0.33 \\
adap lasso & 0.26 & 0.27 & 0.33 & & 0.28 & 0.27 & 0.29 & & 0.25 & 0.24 & 0.26 \\
DE & 0.57 & 0.44 & 0.43 & & 0.60 & 0.50 & 0.45 & & 0.61 & 0.60 & 0.50 \\
TPBN-HS & 0.46 & 0.35 & 0.36 & & 0.53 & 0.36 & 0.36 & & 0.55 & 0.49 & 0.37 \\
TPBN-0.1 & 0.24 & 0.29 & 0.35 & & 0.24 & 0.19 & 0.28 & & 0.22 & 0.25 & 0.27 \\
TPBN-NEG & 0.57 & 0.43 & 0.40 & & 0.61 & 0.51 & 0.42 & & 0.61 & 0.60 & 0.48 \\ \hline \end{tabular} \end{table} \begin{table}[ht]
\centering \caption{Mean number of predictors selected for six methods in two examples, based on 50 replications.} \label{table:number_predictors} \small \begin{tabular}{cccccccccccc}
\hline
& \multicolumn{3}{c}{$n=20$} & \phantom{0}& \multicolumn{3}{c}{$n=50$} & \phantom{0} & \multicolumn{3}{c}{$n=80$} \\
& $\sigma=1$ & $\sigma=3$ & $\sigma=5$ && $\sigma=1$ & $\sigma=3$ & $\sigma=5$ && $\sigma=1$ & $\sigma=3$ & $\sigma=5$ \\ \hline
Example\text{ }1\\ lasso & 5.40 & 4.94 & 3.72 & & 5.48 & 5.00 & 5.02 & & 5.94 & 5.36 & 5.52 \\
adap lasso & 3.92 & 3.42 & 2.62 & & 4.24 & 3.82 & 3.50 & & 4.20 & 4.24 & 3.90 \\
DE & 7.20 & 5.62 & 4.72 & & 7.80 & 6.78 & 6.38 & & 7.74 & 7.44 & 6.96 \\
TPBN-HS & 6.28 & 4.56 & 3.54 & & 7.08 & 5.18 & 4.74 & & 7.40 & 5.98 & 5.56 \\
TPBN-0.1 & 3.98 & 3.20 & 2.22 & & 4.00 & 3.44 & 3.06 & & 3.80 & 3.60 & 3.56 \\
TPBN-NEG & 7.18 & 5.38 & 4.52 & & 7.80 & 6.80 & 6.06 & & 7.76 & 7.58 & 6.78 \\
\hline
Example\text{ }2\\ lasso & 4.67 & 4.00 & 3.79 & & 5.10 & 5.13 & 4.70 & & 4.85 & 4.58 & 4.14 \\
adap lasso & 2.81 & 2.15 & 2.22 & & 3.26 & 3.06 & 3.18 & & 3.17 & 2.88 & 2.65 \\
DE & 7.12 & 5.66 & 4.30 & & 7.71 & 6.57 & 5.64 & & 7.80 & 7.36 & 6.31 \\
TPBN-HS & 5.86 & 4.28 & 2.90 & & 6.82 & 4.55 & 4.22 & & 7.13 & 5.84 & 4.46 \\
TPBN-0.1 & 3.06 & 2.68 & 1.81 & & 2.99 & 2.27 & 2.35 & & 2.91 & 2.90 & 2.81 \\
TPBN-NEG & 7.15 & 5.50 & 4.11 & & 7.75 & 6.48 & 5.24 & & 7.80 & 7.45 & 6.08 \\ \hline \end{tabular}
\end{table}
\section{Discussion} In this paper, we consider the Bayesian variable selection problem for linear regression models with global-local shrinkage priors on the regression coefficients. Our proposed variable selection procedure selects a variable if the ratio of the posterior mean to the ordinary least squares estimate of the corresponding coefficient is greater than $1/2$. With orthogonal design matrices, we show that if the local parameters have polynomial-tailed priors, our proposed method is oracle in the sense that it achieves variable selection consistency and the optimal estimation rate at the same time. If, instead, an exponential-tailed prior is used for the local parameters, the proposed method does not have the oracle property.
Although the theoretical results are obtained only under the assumption of orthogonal designs, the simulation study shows that the performance of our method remains similar when it is applied to design matrices with moderate correlation.
Because the proposed method uses the ordinary least squares estimate, we only consider the situation where the sample size is greater than the number of predictors in the model. In the case that $p > n$, the inverse of $\mathbf{X}^T\mathbf{X}$ does not exist. One possible way to get around this is to use a generalized inverse, say the Moore-Penrose generalized inverse. Since the generalized inverse is not unique, it is critical to examine whether different choices produce consistent selection results and whether a particular choice of generalized inverse outperforms others.
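As an illustration of this possibility, the least squares step in our procedure could be replaced by the minimum-norm solution based on the Moore--Penrose inverse, as in the schematic sketch below (a possible extension whose properties we have not studied here; the function name is ours):
\begin{verbatim}
import numpy as np

def ols_pseudoinverse(X, Y):
    # minimum-norm least squares solution via the Moore-Penrose
    # generalized inverse; coincides with OLS when X has full column rank
    return np.linalg.pinv(X) @ Y
\end{verbatim}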
\appendix \section{Technical proofs of results in Section 3}
\begin{lemma}[Properties of slowly varying functions]\label{lemma:svf} If $L$ is a slowly varying function, then \begin{enumerate}[{(i)}] \item $L^\alpha$ is slowly varying for all $\alpha \in \mathbb{R}$.\label{eq:svf_1} \item ${\log L(x)}/{\log x} \rightarrow 0$, as $x\rightarrow \infty$.\label{eq:svf_2} \item $x^{-\alpha} L(x) \rightarrow 0$ and $x^\alpha L(x) \rightarrow \infty$, as $x \rightarrow \infty$ for all $\alpha > 0$.\label{eq:svf_3} \item for $\alpha < -1$, $-\frac{\int_x^\infty t^\alpha L(t)\, dt}{x^{\alpha+1}L(x) /(\alpha+1)} \rightarrow 1$, as $x \rightarrow \infty$.\label{eq:svf_4} \item there exists $A_0 > 0$ such that for $\alpha > -1$, $\frac{\int_{A_0}^x t^\alpha L(t)\, dt}{x^{\alpha+1}L(x) /(\alpha+1)} \rightarrow 1$, as $x \rightarrow \infty$.\label{eq:svf_5} \end{enumerate} \end{lemma}
\begin{proof} See Propositions 1.3.6, 1.5.8 and 1.5.10 of \citet{bingham1987regular}. \end{proof}
\begin{lemma} \label{lemma:noise_2} Suppose $n\tau_n \rightarrow 0$, as $n \rightarrow \infty$. \begin{enumerate} \item If $\gamma_i$ has a proper polynomial-tailed prior described in \eqref{eq:polynomial} with $0 < a < 1$, then there exists $A_0 > 1$ such that
$$E(1 - s_i \, | \, \tau_n, \mathbf{Y}) \leq \frac{A_0 (n\tau_n)^a}{a(1-a)} L\left(\frac{1}{n\tau_n}\right)\exp\left(\frac{n\hat{\beta}_i^2}{2\sigma^2}\right)(1+o(1)).$$ \item If $\gamma_i$ has a proper exponential-tailed prior described in \eqref{eq:exponential} and $C = \int_0^{\infty} \gamma_i \pi(\gamma_i) d\gamma_i < \infty$, then
$$E(1 - s_i \, | \, \tau_n, \mathbf{Y}) \leq Cn\tau_n \exp\left(\frac{n\hat{\beta}_i^2}{2\sigma^2}\right)(1 + o(1)).$$ \end{enumerate} The $o(1)$ terms in both cases do not depend on $i$. \end{lemma}
\begin{proof} By the Lebesgue dominated convergence theorem, \begin{align*}
E(1-s_{i} \, | \, \tau_n, \mathbf{Y}) &= \frac{\int_{0}^{\infty}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}}\\ & \leq \frac{\int_{0}^{\infty}\left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\}\\ & = \int_{0}^{\infty}\left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\} (1+o(1)). \end{align*}
We now consider the case that $\gamma_i$ has a prior from the polynomial class. By property \eqref{eq:svf_5} in Lemma \ref{lemma:svf}, there exists $A_0 \geq 1$ such that $\frac{\int_{A_0}^x t^{-a}L(t)\,dt}{x^{1-a}L(x)} \rightarrow \frac{1}{1-a}$ as $x \rightarrow \infty$. Therefore, \begin{align*} & \int_{A_0}^{\frac{A_0}{n\tau_n}} \left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\gamma_i^{-a-1}L(\gamma_i)d\gamma_i \\ \leq &~ n\tau_n \int_{A_0}^{\frac{A_0}{n\tau_n}} \gamma_i^{-a} L(\gamma_i) d\gamma_i = \frac{n\tau_n}{1-a}\left(\frac{A_0}{n\tau_n}\right)^{1-a}L\left(\frac{A_0}{n\tau_n}\right)(1+o(1)) \leq ~\frac{A_0}{1-a}(n\tau_n)^a L\left(\frac{1}{n\tau_n}\right)(1+o(1)). \end{align*} Also, $$\int_0^{A_0} \left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\gamma_i^{-a-1}L(\gamma_i)d\gamma_i \leq A_0 n\tau_n \int_0^{\infty} \gamma_i^{-a-1}L(\gamma_i)d\gamma_i = A_0 n \tau_n,$$ and \begin{align*} &\int_{\frac{A_0}{n\tau_n}}^{\infty} \left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\gamma_i^{-a-1}L(\gamma_i)d\gamma_i \\ \leq & ~ \int_{\frac{A_0}{n\tau_n}}^{\infty} \gamma_i^{-a-1} L(\gamma_i) d\gamma_i = \frac{1}{a}\left(\frac{A_0}{n\tau_n}\right)^{-a} L\left(\frac{1}{n\tau_n}\right)(1+o(1)) \leq ~ \frac{A_0}{a}(n\tau_n)^a L\left(\frac{1}{n\tau_n}\right) (1+o(1)). \end{align*} Hence, \begin{align*} &\int_{0}^{\infty}\left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\gamma_i^{-a-1}L(\gamma_i)d\gamma_{i}\\ \leq &~ \frac{A_0 (n\tau_n)^a}{a(1-a)} L\left(\frac{1}{n\tau_n}\right) \left[\frac{(n\tau_n)^{1-a}a(1-a)}{L(1/(n\tau_n))} + a(1+o(1)) + (1-a)(1+o(1))\right]\\ = & ~ \frac{A_0(n\tau_n)^a}{a(1-a)} L\left(\frac{1}{n\tau_n}\right)(1 + o(1)). \end{align*}
If $\gamma_i$ has a prior in the exponential-tailed class and $\int_0^\infty \gamma_i \pi(\gamma_i) d\gamma_i < \infty$, then \begin{align*} \int_{0}^\infty n\tau_n\gamma_i(1+n\tau_n\gamma_i)^{-3/2} \pi(\gamma_i) d\gamma_i \leq n\tau_n \int_0^{\infty}\gamma_i \pi(\gamma_i)d\gamma_i = Cn\tau_n. \end{align*}
\end{proof}
\begin{lemma}\label{lemma:signal_2} Suppose $n\tau_n \rightarrow 0$ as $n \rightarrow \infty$ and $\eta$, $q$ are arbitrary constants in $(0,1)$. \begin{enumerate} \item If $\gamma_i$ has a proper polynomial-tailed prior described in \eqref{eq:polynomial}, then
$$P\left( s_i > \eta \, | \, \tau_n, \mathbf{Y}\right) \leq \frac{(a+\frac{1}{2})(\eta q)^{-a-\frac{1}{2}}(1-\eta q)^a}{(n\tau_n)^{a}L\left(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)\right)} \exp\left\{-\frac{n\hat{\beta}_i^2}{2\sigma^2}\eta(1-q)\right\} (1+o(1)).$$ \item If $\gamma_i$ has a proper exponential-tailed prior described in \eqref{eq:exponential}, then for sufficiently large $n$ (not depending on $i$),
$$P\left(s_{i}>\eta \, | \, \tau_n, \mathbf{Y}\right)\leq 2b\left(\frac{n\tau_n}{1-\eta q}\right)^\frac{1}{2}\exp\left\{ \frac{2b}{n\tau_n}\left(\frac{1}{\eta q}-1\right)\right\} \exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\eta\left(1-q\right)\right\}.$$ \end{enumerate} \end{lemma}
\begin{proof} For any $\eta,q\in\left(0,1\right)$, \begin{align}
P\left(s_{i}>\eta \, |\, \tau_n, \mathbf{Y}\right) & = P\left(\gamma_{i}<\frac{1}{n\tau_{n}}\left(\frac{1}{\eta}-1 \right)\,\middle | \, \tau_n, \mathbf{Y}\right)\nonumber\\
& \leq \frac{\int_{0}^{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta}-1\right)}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\cdot\frac{1}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\cdot\frac{1}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}} \nonumber \\
& \leq \frac{\int_{0}^{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta}-1\right)}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\eta(1-q)\right\}.
\label{eq:RHS1} \end{align} The numerator of the first factor in \eqref{eq:RHS1} is bounded by 1. For the denominator (denoted by $D$), we discuss the two types of priors separately.
First consider the case that $\gamma_i$ has a proper polynomial-tailed prior. By property \eqref{eq:svf_4} of Lemma \ref{lemma:svf}, $$\frac{\left(\frac{1}{n\tau_n}(\frac{1}{\eta q} - 1)\right)^{-a-\frac{1}{2}}L\left(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)\right)}{\int_{\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)}^{\infty}\gamma_i^{-a-\frac{3}{2}} L(\gamma_i)d\gamma_i} \rightarrow a+\frac{1}{2},~\text{as}~n \rightarrow \infty.$$ Hence \begin{align*} D & \geq \left(\frac{1-\eta q}{n\tau_n}\right)^{\frac{1}{2}} \frac{L\left(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)\right)}{(a+\frac{1}{2})\left(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)\right)^{a+\frac{1}{2}}}(1+o(1))\\ & = \frac{(n\tau_n)^a}{a+1/2} (\eta q)^{a+\frac{1}{2}}(1-\eta q)^{-a} L\left(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)\right)(1+o(1)). \end{align*}
If $\gamma_i$ has a proper exponential-tailed prior, \begin{align*} D & = \int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ -b\gamma_{i}\right\} L\left(\gamma_{i}\right)d\gamma_{i}\\
& = \int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty}\left(\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right)^{\frac{1}{2}}\left(n\tau_{n}\right)^{-\frac{1}{2}}\gamma_{i}^{-\frac{1}{2}}\exp\left\{ -b\gamma_{i}\right\} L\left(\gamma_{i}\right)d\gamma_{i}\\
& \geq \int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty}\left(1-\eta q\right)^{\frac{1}{2}}\left(n\tau_{n}\right)^{-\frac{1}{2}}\gamma_{i}^{-\frac{1}{2}}\exp\left\{ -b\gamma_{i}\right\} L\left(\gamma_{i}\right)d\gamma_{i}\\
& = \int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty}\left(1-\eta q\right)^{\frac{1}{2}}\left(n\tau_{n}\right)^{-\frac{1}{2}}\exp\left\{ -2b\gamma_{i}\right\} \left(\exp\left\{ b\gamma_{i}\right\}\gamma_{i}^{-1}\right)\left(\gamma_{i}^{1/2}L\left(\gamma_{i}\right)\right)d\gamma_{i} \end{align*} Since $\exp(b\gamma_i)\gamma_i^{-1} \rightarrow \infty$ and $\gamma_i^{1/2}L(\gamma_i) \rightarrow \infty$ as $\gamma_i\rightarrow \infty$, for sufficiently large $n$, \begin{align*} D &\geq \int_{\frac{1}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)}^{\infty} \left(\frac{1-\eta q}{n\tau_{n}}\right)^{\frac{1}{2}}\exp\left\{ -2b\gamma_{i}\right\} d\gamma_{i} = \frac{1}{2b}\left(\frac{1-\eta q}{n\tau_n}\right)^{\frac{1}{2}}\exp\left\{ -\frac{2b}{n\tau_{n}}\left(\frac{1}{\eta q}-1\right)\right\}. \end{align*} \end{proof}
\begin{proof}[{\bf Proof of Proposition \ref{prop:s_consistency}}] It is clear that \begin{align}
E\left(1-s_{i}|\mathbf{Y}\right) & = \frac{\int_{0}^{\infty}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}} \nonumber\\ & \leq \frac{\int_{0}^{\infty}\left(n\tau_{n}\gamma_{i}\right)\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{3}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\}.\label{eq:1} \end{align}
By the Lebesgue dominated convergence theorem, the numerator and denominator in \eqref{eq:1} converge to 0 and 1, respectively, as $n\rightarrow \infty$. If $i \not\in \mathcal{A}$, then $n\hat\beta_i^2 = O_p(1)$. Therefore, $E(1-s_i \, | \, \tau_n, \mathbf{Y}) \overset{p}{\rightarrow} 0$, as $n \rightarrow \infty$, by Slutsky's theorem.
For any $0 <\epsilon \leq 1$,
$E(s_i \, | \, \tau_n, \mathbf{Y}) = \int_{0}^{\epsilon/2} s_i \,p(s_i \, | \, \tau_n, \mathbf{Y}) ds_i + \int_{\epsilon/2}^{1} s_i \,p(s_i \, | \, \tau_n, \mathbf{Y}) ds_i \leq {\epsilon}/{2} + P(s_i > \epsilon/2 \, | \, \tau_n, \mathbf{Y}).$ Thus
$P\left(E(s_i \, | \, \tau_n, \mathbf{Y}) \geq \epsilon\right) \leq P\left(P(s_i > {\epsilon}/{2} \, | \, \tau_n, \mathbf{Y}) \geq {\epsilon}/{2}\right).$ If $\gamma_i$ has a polynomial-tailed prior, using the first part of Lemma \ref{lemma:signal_2} with $\eta=\epsilon/2$, the above inequality yields \begin{align*}
P(E(s_i \, | \, \tau_n, \mathbf{Y}) \geq \epsilon) &\leq P\left( \frac{(a+\frac{1}{2})(\eta q)^{-a-\frac{1}{2}}(1-\eta q)^a}{(n\tau_n)^{a}L\left(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1)\right)} \exp\left\{-\frac{n\hat{\beta}_i^2}{2\sigma^2}\eta(1-q)\right\} > \epsilon/2\right) (1+o(1))\\ & = P\left(\hat{\beta}_i^2 < \frac{2\sigma^2}{\eta q}\left[\frac{c_1}{n} - \frac{a\log(n\tau_n)}{n}\left\{1+\frac{\log L(\frac{1}{n\tau_n}(\frac{1}{\eta q}-1))}{a\log(n\tau_n)}\right\}\right]\right)(1+o(1)), \end{align*}
where $c_1$ is a constant that does not depend on $n$. By property \eqref{eq:svf_2} in Lemma \ref{lemma:svf} and our assumptions, the terms in the bracket converge to zero as $n \rightarrow \infty$. Since $\hat{\beta}_i \overset{p}{\rightarrow} \beta_i^{0} \neq 0$, we have $P(E(s_i \, | \, \tau_n, \mathbf{Y}) \geq \epsilon) \rightarrow 0$, as $n\rightarrow \infty$.
If $\gamma_i$ has an exponential-tailed prior, by the second part of Lemma \ref{lemma:signal_2}, the assumption that $n\tau_{n}\rightarrow0$, $n^{2}\tau_{n}\rightarrow\infty$ as
$n\rightarrow\infty$ implies that $P\left(s_{i}>\eta \, | \, \tau_n, \mathbf{Y}\right)\stackrel{p}{\rightarrow}0$
for any $\eta>0$. Therefore $P(P(s_i > {\epsilon}/{2} \, | \, \tau_n, \mathbf{Y}) \geq {\epsilon}/{2}) \rightarrow 0$ and hence $P(E(s_i \, | \, \tau_n, \mathbf{Y}) \geq \epsilon) \rightarrow 0$, as $n \rightarrow \infty$. \end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{thm:tuning_poly}}] We first prove the variable selection consistency part. It is clear that \begin{equation*} P(\mathcal{A}_n\neq \mathcal{A}) \leq
\sum_{i\,\not\in\, \mathcal{A}} P\left(E(1 - s_i \, |\, \tau_n, \mathbf{Y}) \geq \frac{1}{2}\right) + \sum_{i \, \in \, \mathcal{A}}P\left(E(1-s_i \, | \, \tau_n, \mathbf{Y}) < \frac{1}{2}\right). \end{equation*}
Since $p_0 = |\mathcal{A}|$ does not depend on $n$, by Proposition \ref{prop:s_consistency}, the second term on the right-hand side of the above inequality goes to zero as $n\rightarrow \infty$. If $i \not\in \mathcal{A}$, by Lemma \ref{lemma:noise_2} and the fact that $\sqrt{n}\hat{\beta}_i/\sigma$ has a standard normal distribution,
$$P\left(E(1 - s_i \, |\, \tau_n, \mathbf{Y}) > \frac{1}{2}\right) \leq P\left(\exp\left(\frac{n\hat{\beta}_i^2}{2\sigma^2}\right)\frac{A_0(n\tau_n)^a}{a(1-a)}L\left(\frac{1}{n\tau_n}\right)\xi_n > 1/2\right) = 2\left[1 - \Phi (\sqrt{M_n})\right],$$ where $\xi_n$, not depending on $i$, is a generic term that converges to 1 as $n \rightarrow \infty$, and $M_n = 2\log\left(\frac{C}{(n\tau_n)^aL(1/(n\tau_n))\xi_n}\right)$ with $C$ being a generic constant. Noticing that the right-hand side of the above inequality does not depend on $i$, $\sum_{i\,\not\in\, \mathcal{A}} P\left(E(1 - s_i \, |\, \tau_n, \mathbf{Y}) > \frac{1}{2}\right) \leq p_n P\left(E(1 - s_i \, |\, \tau_n, \mathbf{Y}) > \frac{1}{2}\right)$. Therefore, the proof of the variable selection consistency part will be complete if we can show that $2p_n\left[1 - \Phi (\sqrt{M_n})\right]$ converges to zero as $n \rightarrow \infty$. In fact, by property \eqref{eq:svf_3} in Lemma \ref{lemma:svf}, $M_n \rightarrow \infty$, so, using the inequality $1 - \Phi(x) \leq \phi(x)/x$ for $x > 0$, \begin{equation*} 2p_n\left[1 - \Phi (\sqrt{M_n})\right] \leq \frac{2p_n\phi(\sqrt{M_n})}{\sqrt{M_n}} =Cp_n(n\tau_n)^\epsilon \frac{(n\tau_n)^{a-\epsilon}L(1/(n\tau_n))}{\sqrt{\log(1/(n\tau_n))}}(1 + o(1)). \end{equation*} Again by property \eqref{eq:svf_3} in Lemma \ref{lemma:svf}, $(n\tau_n)^{a-\epsilon} L(1/(n\tau_n))\rightarrow 0$ as $n\rightarrow \infty$. Therefore $2p_n\left[1 - \Phi (\sqrt{M_n})\right] \rightarrow 0$ if $p_n(n\tau_n)^\epsilon \rightarrow 0$, as $n\rightarrow \infty$.
Now we show the asymptotic normality part. For any $i\in\mathcal{A}$, we have $\hat{\beta}_{i}\overset{p}{\rightarrow}\beta_{i}^{0}\neq0$, $\sqrt{n}\left(\hat{\beta}_{i}-\beta_{i}^{0}\right)\stackrel{d}{\rightarrow}N\left(0,\sigma^{2}\right)$ and $$
\sqrt{n}\left(\hat{\beta}_{i}^{HT}-\beta_{i}^{0}\right)=\sqrt{n}\left(\hat{\beta}_{i}-\beta_{i}^{0}\right)-\sqrt{n}E(s_{i} \, | \, \tau_n, \mathbf{Y}) \hat{\beta}_{i}-\sqrt{n}\hat{\beta}_i^{PM}I\left(E(1-s_i \, | \, \tau_n, \mathbf{Y})\leq {1}/{2}\right). $$
Since the third term on the right-hand side converges to zero in probability by Proposition \ref{prop:s_consistency}, the proof of the asymptotic normality part will be complete if we can show that $\sqrt{n}E(s_i \, | \, \tau_n, \mathbf{Y})$ converges to zero in probability. In fact, for any $\epsilon > 0$, by arguments similar to those in the proof of Proposition \ref{prop:s_consistency},
$$P( \sqrt{n}E(s_i \, | \, \tau_n, \mathbf{Y}) \geq \epsilon) \leq P(P(s_i \geq \epsilon/(2\sqrt{n}) \,|\, \tau_n, \mathbf{Y}) > \epsilon/(2\sqrt{n})).$$ In Lemma \ref{lemma:signal_2}, let $\eta = \eta_n = {\epsilon}/({2\sqrt{n}})$. Then \begin{align*}
P\left(P\left(s_i \geq \frac{\epsilon}{2\sqrt{n}} \, \middle | \, \tau_n, \mathbf{Y}\right)>\frac{\epsilon}{2\sqrt{n}}\right) &\leq P\left(\frac{\left(a+\frac{1}{2}\right)\left(1-\frac{\epsilon q}{2\sqrt{n}}\right)^a\exp\left(-\frac{n\hat{\beta}_i^2\epsilon(1-q)}{4\sigma^2\sqrt{n}}\right)}{(n\tau_n)^a\left(\frac{\epsilon q}{2\sqrt{n}}\right)^{a+\frac{1}{2}} L\left(\frac{1}{n\tau_n}\left(\frac{2\sqrt{n}}{\epsilon q}-1\right)\right)} > \frac{\epsilon}{2\sqrt{n}}\right)(1+o(1)) \\ & = P(\hat{\beta}_i^2 < c_n)(1+o(1)), \end{align*}
where $c_n = d_2 n^{-1/2}\left\{ \log\left(d_1n^{3/4}\right) + a\log\left(\frac{1}{n\tau_n}\left(\frac{2\sqrt{n}}{\epsilon q}-1\right)\right)\left[1-\frac{\log L\left(\frac{1}{n\tau_n}\left(\frac{2\sqrt{n}}{\epsilon q}-1\right)\right)}{a\log \left(\frac{1}{n\tau_n}\left(\frac{2\sqrt{n}}{\epsilon q}-1\right)\right)}\right] \right\}$. Since $c_n \rightarrow 0$ and $\hat{\beta}_i \overset{p}{\rightarrow}\beta_i^0 \neq 0$, we have $P(E(s_i \, | \, \tau_n, \mathbf{Y}) \geq \epsilon/\sqrt{n}) \rightarrow 0$. \end{proof}
\begin{proof}[{\bf Proof of Corollary \ref{thm:prior_poly}}] Since $s_i = (1+n\tau\gamma_i)^{-1}$ is a decreasing function in $\tau$,
$$E(1-s_i \, | \, \mathbf{Y}) = \int_{\xi_n}^{\psi_n} E(1-s_i \, | \, \tau, \mathbf{Y}) \pi_n^{\tau}(\tau) d\tau \leq E(1-s_i \, | \, \tau = \psi_n, \mathbf{Y}).$$
Similarly, $$P(s_i < \eta \, |\, \mathbf{Y}) \leq P(s_i < \eta \, | \, \tau= \psi_n, \mathbf{Y}),$$ and $$P(s_i > \eta \, |\, \mathbf{Y}) \leq P(s_i > \eta \, | \, \tau= \xi_n, \mathbf{Y}).$$ The rest of the proof follows the proof of Theorem \ref{thm:tuning_poly}. \end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{thm:tuning_exp_vs}}] Similar to the proof of the variable selection consistency part of Theorem \ref{thm:tuning_poly}, the proof will be complete if we can show $\sum_{i\,\not\in\, \mathcal{A}} P\left(E(1 - s_i \, | \, \tau_n, \mathbf{Y}) > 1/2\right) \rightarrow 0$, as $n \rightarrow \infty$. By Lemma \ref{lemma:noise_2},
$$P\left(E(1 - s_i \, | \, \tau_n, \mathbf{Y}) > \frac{1}{2}\right) \leq P\left(C\exp\left(\frac{n\hat{\beta}_i^2}{2\sigma^2}\right)n\tau_n\xi_n' > \frac{1}{2}\right) = P\left(n\hat\beta_i^2/\sigma^2 > M_n'\right) = 2\left[1 - \Phi(\sqrt{M_n'})\right],$$ where $\xi_n'$, not depending on $i$, is a generic term that converges to 1 as $n\rightarrow \infty$ and $M_n' = -2\log(2Cn\tau_n\xi_n')$. If $n\tau_n \rightarrow 0$, then $M_n' \rightarrow \infty$. Hence, \begin{align*}
\sum_{i\,\not\in\, \mathcal{A}} P\left(E(1 - s_i \, | \, \tau_n, \mathbf{Y}) > 1/2\right) \leq 2p_n\left[1 - \Phi(\sqrt{M_n'})\right] \sim \frac{2p_n\phi(\sqrt{M_n'})}{\sqrt{M_n'}} = \frac{2Cp_nn\tau_n\xi_n'}{\sqrt{-\pi\log(2Cn\tau_n\xi_n')}} \rightarrow 0, \end{align*} if $\frac{p_n n\tau_n}{\sqrt{\log(n\tau_n)}} \rightarrow 0$, as $n\rightarrow \infty$. \end{proof}
\begin{proof}[{\bf Proof of Proposition \ref{prop:not_vs}}] Notice that \begin{align*}
E\left(1-s_{i} \, | \, \tau_n, \mathbf{Y}\right) &= \frac{\int_{0}^{\infty}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(1+n\tau_{n}\gamma_{i}\right)^{-\frac{1}{2}}\exp\left\{ \frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\frac{n\tau_{n}\gamma_{i}}{1+n\tau_{n}\gamma_{i}}\right\} \pi\left(\gamma_{i}\right)d\gamma_{i}} \\ & \geq \frac{\int_{0}^{\infty}\gamma_{i}\left(\frac{1}{n\tau_{n}}+\gamma_{i}\right)^{-\frac{3}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(\frac{1}{n\tau_{n}}+\gamma_{i}\right)^{-\frac{1}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\}. \end{align*} Let $h_{n}=\frac{\int_{0}^{\infty}\gamma_{i}\left(\frac{1}{n\tau_{n}}+\gamma_{i}\right)^{-\frac{3}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}{\int_{0}^{\infty}\left(\frac{1}{n\tau_{n}}+\gamma_{i}\right)^{-\frac{1}{2}}\pi\left(\gamma_{i}\right)d\gamma_{i}}$. If $n\tau_n \rightarrow c \in (0, \infty]$ and $\int_0^{\infty} \gamma_i^{-\frac{1}{2}}\pi(\gamma_i)d\gamma_i < \infty$, then, by applying the Lebesgue dominated convergence theorem to both the numerator and the denominator of $h_n$, $h_n$ converges to some positive constant that depends on $c$ and $\pi(\cdot)$ as $n\rightarrow \infty$. Then, for any $i\notin \mathcal{A}$,
$$P(\mathcal{A}_n = \mathcal{A}) \leq P\left(E(1 - s_i \, | \, \tau_n, \mathbf{Y}) \leq 1/2\right)\leq P\left(h_{n}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\} <\frac{1}{2}\right).$$ Note that $h_{n}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\} $ converges in distribution to a random variable $Z$ with support contained in $(0,1)$, so $$P\left(h_{n}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}\right\} <\frac{1}{2}\right)\rightarrow P\left(Z<\frac{1}{2}\right)<1.$$ Thus the HT procedure does not achieve variable selection consistency. \end{proof}
\begin{proof}[{\bf Proof of Proposition \ref{prop:not_an}}] Similar to the proof of Theorem \ref{thm:tuning_poly}, \begin{equation*}
\sqrt{n}\left(\hat{\beta}_{i}^{HT}-\beta_{i}^{0}\right) = \sqrt{n}\left(\hat{\beta}_{i}-\beta_{i}^{0}\right) - \sqrt{n}E(s_{i} \, | \, \tau_n, \mathbf{Y})\hat{\beta}_{i} - \sqrt{n}\hat{\beta}_i^{PM}I(E(1-s_i \, | \, \tau_n, \mathbf{Y}) \leq 1/2). \end{equation*} For $i \in \mathcal{A}$, the third term on the right-hand side converges to zero in probability. The first term has a normal distribution with mean 0 and variance $\sigma^2$. The posterior density function of $s_{i}$ is
$$p\left(s_{i}\, | \, \tau_n, \mathbf{Y}\right)\propto s_{i}^{-\frac{3}{2}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}s_{i}-\frac{b}{n\tau_{n}s_{i}}\right\}L\left(\frac{1}{n\tau_n}\left(\frac{1}{s_i}-1\right)\right) ,~0\leq s_{i}\leq1.$$
It is obvious that $\frac{m}{M}\tilde{S}^{(i)}_n \leq E(s_i \, | \, \tau_n, \mathbf{Y}) \leq \frac{M}{m}\tilde{S}^{(i)}_n$ almost surely, where $$\tilde{S}_n^{(i)}=\frac{\int_0^1 s_{i}^{-\frac{1}{2}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}s_{i}-\frac{b}{n\tau_{n}s_{i}}\right\}ds_i}{\int_0^1 s_{i}^{-\frac{3}{2}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}s_{i}-\frac{b}{n\tau_{n}s_{i}}\right\}ds_i}.$$ Notice that $$s_{i}^{-\frac{3}{2}}\exp\left\{ -\frac{n\hat{\beta}_{i}^{2}}{2\sigma^{2}}s_{i}-\frac{b}{n\tau_{n}s_{i}}\right\}I\left(0<s_{i}<\infty\right) = \left(\frac{\lambda_D}{2\pi}\right)^{-\frac{1}{2}}\exp\left(-\frac{\lambda_D}{\mu_D}\right)f(s_i;\lambda_D,\mu_D),$$ where $\lambda_D=2b/(n\tau_{n})$, $\mu_D=\frac{\sqrt{2b}\sigma}{\abs{\hat{\beta}_{i}}}\left(n^{2}\tau_{n}\right)^{-\frac{1}{2}}$ and $f(x;\lambda, \mu) = \left(\frac{\lambda}{2\pi x^3}\right)^{\frac{1}{2}}\exp\left\{\frac{-\lambda\left(x-\mu\right)^2}{2\mu^2 x}\right\}I\left(0<x<\infty\right)$ is the probability density function of an Inverse Gaussian (IG) distribution with mean $\mu$ and shape parameter $\lambda$. According to \citet{shuster1968inverse}, the cdf of IG$(\mu,\lambda)$ can be expressed as $$F(x;\lambda,\mu) = \Phi\left(\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}-1\right)\right) + \exp\left(\frac{2\lambda}{\mu}\right)\Phi\left(-\sqrt{\frac{\lambda}{x}}\left(\frac{x}{\mu}+1\right)\right).$$ Thus the denominator of $\tilde{S}_n^{(i)}$ can be written as $\left(\frac{\lambda_D}{2\pi}\right)^{-\frac{1}{2}}\exp\left(-\frac{\lambda_D}{\mu_D}\right)F(1;\lambda_D, \mu_D)$. With the transformation $s_i = 1/t_i$ and arguments similar to those for the denominator, the numerator of $\tilde{S}_n^{(i)}$ can be expressed as $\left(\frac{\lambda_N}{2\pi}\right)^{-\frac{1}{2}}\exp\left(-\frac{\lambda_N}{\mu_N}\right)\left\{1 - F(1;\lambda_N, \mu_N)\right\}$, where $\lambda_N = n\hat\beta_i^2/\sigma^2$ and $\mu_N = \frac{n\sqrt{\tau_n}\abs{\hat\beta_i}}{\sqrt{2b}\sigma}$. By some simple calculations, \begin{equation*} \tilde{S}_n^{(i)} = \frac{\sqrt{2b}\sigma\left\{\Phi(b_n) - \exp(c_n)\Phi(-d_n)\right\}}{n\sqrt{\tau_n}\abs{\hat\beta_i}\left\{\Phi(b_n) + \exp(c_n)\Phi(-d_n)\right\}}, \end{equation*} where $b_n = \sqrt{\frac{2b}{n\tau_n}}\left(\frac{n\sqrt{\tau_n}\abs{\hat\beta_i}}{\sqrt{2b}\sigma}-1\right)$, $c_n = \frac{2\sqrt{2b}\abs{\hat\beta_i}}{\sqrt{\tau_n}\sigma}$, and $d_n = \sqrt{\frac{2b}{n\tau_n}}\left(\frac{n\sqrt{\tau_n}\abs{\hat\beta_i}}{\sqrt{2b}\sigma}+1\right)$.
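To verify the identification of $(\lambda_D,\mu_D)$, $(\lambda_N,\mu_N)$ and the last display, note that expanding the exponent of the inverse Gaussian density gives $$-\frac{\lambda(x-\mu)^2}{2\mu^2 x} = -\frac{\lambda}{2\mu^2}x + \frac{\lambda}{\mu} - \frac{\lambda}{2x},$$ so matching $\lambda/(2x)$ with $b/(n\tau_n x)$ and $\lambda/(2\mu^2)$ with $n\hat{\beta}_i^2/(2\sigma^2)$ yields $\lambda_D$ and $\mu_D$, while the same matching after the substitution $s_i = 1/t_i$ yields $\lambda_N$ and $\mu_N$. Moreover, $b_n = \sqrt{\lambda_D}(1/\mu_D - 1)$, $d_n = \sqrt{\lambda_D}(1/\mu_D + 1)$, $c_n = 2\lambda_D/\mu_D = 2\lambda_N/\mu_N$, $\sqrt{\lambda_N}(1/\mu_N - 1) = -b_n$ and $\sqrt{\lambda_N}(1/\mu_N + 1) = d_n$, so that $F(1;\lambda_D,\mu_D) = \Phi(b_n) + \exp(c_n)\Phi(-d_n)$ and $1 - F(1;\lambda_N,\mu_N) = \Phi(b_n) - \exp(c_n)\Phi(-d_n)$.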
If $n^2\tau_n \rightarrow \infty$ as $n\rightarrow \infty$, $b_n \overset{p}{\rightarrow} +\infty$ and thus $\Phi(b_n) \overset{p}{\rightarrow} 1$. Combining this with the fact that $\Phi(b_n) + \exp(c_n)\Phi(-d_n)$ is in $[0,1]$ since it equals $F(1;\lambda_D,\mu_D)$, we have $\exp(c_n)\Phi(-d_n) \overset{p}{\rightarrow} 0$. As a result, $\sqrt{n}\tilde{S}_n^{(i)} \overset{p}{\rightarrow} \infty$.
If $n^2\tau_n\rightarrow c \in (0,\infty)$ as $n \rightarrow \infty$, the limit of $b_n$ can be $+\infty$, $-\infty$, or some constant $r$. We will discuss the three cases separately. If $b_n \overset{p}{\rightarrow} +\infty$, by similar arguments as in the case $n^2\tau_n\rightarrow \infty$, we have $\sqrt{n}\tilde{S}_n^{(i)} \overset{p}{\rightarrow} \infty$. If $b_n \overset{p}{\rightarrow} -\infty$, \begin{equation*} \frac{\exp(c_n)\Phi(-d_n)}{\Phi(b_n)} \sim \frac{\exp(c_n)\phi(d_n)/d_n}{\phi(-b_n)/(-b_n)} = \frac{\sqrt{2b}\sigma-n\sqrt{\tau_n}\abs{\hat\beta_i}}{\sqrt{2b}\sigma+n\sqrt{\tau_n}\abs{\hat\beta_i}} \overset{p}{\rightarrow} \frac{\sqrt{2b}\sigma-\sqrt{c}\abs{\beta_i^0}}{\sqrt{2b}\sigma+\sqrt{c}\abs{\beta_i^0}}. \end{equation*} Therefore, $\tilde{S}_n^{(i)} \overset{p}{\rightarrow} 1$ and $\sqrt{n}\tilde{S}_n^{(i)} \overset{p}{\rightarrow} \infty$. If $b_n \overset{p}{\rightarrow} r$, since $c_n - \frac{1}{2}d_n^2 + \frac{1}{2}b_n^2 = 0$, we have $c_n - \frac{1}{2}d_n^2 \overset{p}{\rightarrow} -\frac{1}{2}r^2$. Therefore, $\exp(c_n)\Phi(-d_n) \sim \exp(c_n)\phi(d_n)/d_n \overset{p}{\rightarrow} 0$ and $\sqrt{n}\tilde{S}_n^{(i)} \overset{p}{\rightarrow} \infty.$
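For completeness, the asymptotic comparison used in the case $b_n \overset{p}{\rightarrow} -\infty$ above can be traced as follows; this is only a sketch, relying on the standard Mills-ratio asymptotic $\Phi(-x)\sim\phi(x)/x$ as $x\rightarrow\infty$ and on the identity $c_n = \frac{1}{2}(d_n^2-b_n^2)$ noted above: $$\frac{\exp(c_n)\Phi(-d_n)}{\Phi(b_n)} \sim \frac{\exp(c_n)\phi(d_n)/d_n}{\phi(-b_n)/(-b_n)} = \exp\left(c_n-\tfrac{1}{2}d_n^2+\tfrac{1}{2}b_n^2\right)\frac{-b_n}{d_n} = \frac{1-\mu_N}{1+\mu_N} = \frac{\sqrt{2b}\sigma-n\sqrt{\tau_n}\abs{\hat\beta_i}}{\sqrt{2b}\sigma+n\sqrt{\tau_n}\abs{\hat\beta_i}},$$ with $\mu_N = n\sqrt{\tau_n}\abs{\hat\beta_i}/(\sqrt{2b}\sigma)$ as before.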
If $n^2\tau_n \rightarrow 0$, $b_n \overset{p}{\rightarrow} -\infty$. By the famous inequality $\frac{x^2}{1+x^2} \leq \frac{x(1-\Phi(x))}{\phi(x)} \leq 1$, \begin{equation*}
\tilde{S}_n^{(i)} \geq \frac{\sqrt{2b}\sigma}{n\sqrt{\tau_n}|\hat\beta_i|} \frac{\frac{b_n^2}{1+b_n^2}\frac{\phi(b_n)}{b_n} + \exp(c_n)\frac{\phi(d_n)}{d_n}}{\frac{\phi(b_n)}{b_n} - \exp(c_n)\frac{\phi(d_n)}{d_n}}\\
=1 - \frac{1}{2}\frac{1+\frac{\sqrt{2b}\sigma}{n\sqrt{\tau_n}|\hat\beta_i|}}{ 1 + \frac{2b}{n\tau_n}\left(\frac{n\sqrt{\tau_n}|\hat\beta_i|}{\sqrt{2b}\sigma}-1\right)^2} \overset{p}{\rightarrow} 1. \end{equation*}
In all cases, $\sqrt{n}\tilde{S}_n^{(i)} \overset{p}{\rightarrow} \infty$, as $n \rightarrow \infty$. Thus $\sqrt{n} E(s_i \, | \, \tau_n, \mathbf{Y}) \overset{p}{\rightarrow} \infty$ and $\sqrt{n}(\hat\beta_i^{PM} - \beta_i^0) \not\rightarrow \mbox{N}(0, \sigma^2)$. \end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{thm:tuning_exp_an}}] Following the proof of Proposition \ref{prop:not_an},
$n\sqrt{\tau_n}\tilde{S}_n^{(i)} \overset{p}{\rightarrow} \sqrt{2b}\sigma/|\beta_i^0|$. This completes the proof. \end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{thm:tuning_exp}}] Theorem \ref{thm:tuning_exp} follows directly from Propositions \ref{prop:not_vs} and \ref{prop:not_an}. \end{proof}
\end{document}
\begin{document}
\setcounter{section}{-1} \title{Seshadri constants at very general points} \author{Michael Nakamaye} \maketitle
\section{Introduction} \nonmbthm The goal of this paper is to study the Seshadri constant of an ample line bundle $A$ at a very general point $\eta$ of a smooth projective variety $X$. \begin{definition} Suppose $X$ is a smooth projective variety, $x \in X$, and $A$ an ample line bundle on $X$. Then we define the Seshadri constant of $A$ at $x$ by $$ \epsilon(x,A) = \inf_{C \ni x}\frac{c_1(A) \cap C}{{\rm mult}_x(C)}; $$ here the infimum runs over all integral curves $C \subset X$ passing through $x$. \label{d1} \end{definition} Equivalently, if $\pi: Y \rightarrow X$ denotes the blow--up of $X$ at $x$ with exceptional divisor $E$, then $$
\epsilon(x,A) = \sup \{r \in {\bf Q}^+ \,|\, \pi^\ast(A)(-rE) \,\, \mbox{is ample}\}. $$ The Seshadri constant $\epsilon(x,A)$ measures how many jets $nA$ separates at $x$ asymptotically as $n \rightarrow \infty$. In the case when $X$ is a surface, it is known \cite{el} that $\epsilon(\eta,A) \geq 1$. Meanwhile, Ein, K\"uchle, and Lazarsfeld \cite{ekl} have established a lower bound in arbitrary dimension $$ \epsilon(\eta,A) \geq \frac{1}{\dim X}. $$ The factor of $\dim X$ appearing in the general result is a function of the ``gap argument'' used in the proof. The same gap argument is also responsible for the presumably extra factor of $\dim X$ in the known results for global generation of adjoint bundles (see \cite{S} for example).
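As a simple illustration of the definition (a standard example, not needed in what follows): if $X = {\bf P}^n$ and $A = {\cal O}_{{\bf P}^n}(1)$, then for every point $x \in X$ $$ \epsilon(x,A) = \inf_{C \ni x}\frac{\deg (C)}{{\rm mult}_x(C)} = 1, $$ since a line through $x$ realizes the value $1$, while $\deg (C) \geq {\rm mult}_x(C)$ for every integral curve $C \subset {\bf P}^n$.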
Our goal in this paper is to make some progress toward obtaining better lower bounds for $\epsilon(\eta,A)$. Ultimately, however, one would like a bound which does not depend on the dimension of $X$ and so from our point of view the interest of this work lies more in the methods used than in the explicit results. The basic idea employed here, namely that a singular Seshadri exceptional subvariety influences the dimension count for sections with specified jets, was already presented in \cite{N}. The counting is difficult and we have tried to find a compromise between computational complexity versus obtaining the best possible results. Thus we have counted very carefully in the three--fold case and less so in the higher dimensional case.
In order to clarify our basic strategy we will go through the argument completely in the three--fold case to obtain:
\begin{theorem} Suppose $X$ is a smooth three--fold and $\eta \in X$ a very general point. Then for any ample line bundle $A$ on $X$ we have $$ \epsilon(\eta,A) \geq \frac{1}{2}. $$ \label{t} \end{theorem}
The proof of Theorem \ref{t} will use the work of Ein and Lazarsfeld \cite{el} on surfaces as well as the uniform bounds on symbolic powers obtained in \cite{els}. We will then establish the following slight quantitative improvement of the main result in \cite{ekl} for arbitrary dimension:
\begin{theorem} Suppose $X$ is a projective variety of dimension $d \geq 4$ and $A$ an ample line bundle on $X$. Then for a very general point $\eta \in X$ we have the lower bound $$ \epsilon(\eta,A) > \frac{3d+1}{3d^2}. $$ \label{main} \end{theorem}
The proofs of Theorem \ref{t} and Theorem \ref{main} follow \cite{ekl} line for line, our only innovation coming in counting jets. The fundamental observation is that if $\epsilon(\eta,A)$ is small this puts restrictions on $h^0\left(X,nA \otimes m_\eta^{n\alpha}/m_\eta^{n\alpha+1}\right)$ for various values of $\alpha$. The smaller $\epsilon(\eta,A)$ becomes the greater these restrictions are. In \cite{ekl} the lower bound on the multiplicity which can be imposed at $\eta$ comes from the Riemann--Roch theorem. We improve upon this here by considering the above mentioned obstructions and this translates into a better lower bound for $\epsilon(\eta,A)$. The main difficulty in establishing the lower bound $\epsilon(\eta,A) \geq \frac{1}{d-1}$ in general is that the set of points where the bound $\epsilon(\eta,A) \geq \frac{1}{d}$ from \cite{ekl} fails may contain divisors once $d \geq 3$. The reader only interested in the method employed and not the details of counting should skip to \S 2 after going through Lemma \ref{ml}.
Finally, we would like to point out the similarity between the counting methods used here and those employed by Faltings and W\"ustholz \cite{fw} to reprove and extend the Schmidt subspace theorem. In particular, the measure theoretic aspect of \cite{fw} is very closely related to our counting of jets. The asymptotic dimensions $h^0\left(X,nA \otimes m_\eta^{n\alpha}/m_\eta^{n\alpha+1}\right)$, as $n \rightarrow \infty$, can be used to define a measure $\mu$ on $[0,m(A)]$, with $m(A)$ defined in \S 2. Here $\mu((a,b))$ would measure the asymptotic cost of raising the multiplicity at $\eta$ from $a$ to $b$.
\noindent {\bf Notation and Conventions}
\begin{itemize} \item All varieties considered will be defined over the complex numbers
${\bf C}$. \item A point $x$ of an irreducible variety $X$ will be called very general if
$x$ belongs to the complement of countably many closed, proper
subvarieties. \item If $x \in X$ then $m_x \subset {\cal O}_X$ is the maximal ideal sheaf of $x$. \item Suppose $V \subset X$ is an irreducible subvariety and $s \in H^0(X,L)$
then ${\rm ord}_V(s)$ is the order of vanishing of $s$ along $V$. \item If $L$ is a line bundle ${\rm BS}(L)$ denotes the stable base locus of
$L$, that is $$ {\rm BS}(L) = \{x \in X: s(x) = 0 \,\,\,\mbox{for all} \,\,s \in H^0(X,nL)\,\,\,
\mbox{for all} \,\, n > 0 \}. $$ \item If $s \in H^0(X,L)$ then $Z(s)\subset X$ denotes the zero scheme of the section $s$. \item If $\alpha \in {\bf R}$ then $\lfloor \alpha \rfloor$ denotes the
round--down of $\alpha$, that is the largest integer less than or equal to $\alpha$. Similarly, $\lceil \alpha \rceil$ denotes the round--up of $\alpha$, the smallest integer greater than or equal to $\alpha$. \end{itemize}
\section{The Three--fold Case}
Before proving Theorem \ref{t} we review the strategy of \cite{ekl}. Suppose then that $X$ is a smooth projective variety of dimension $d$ and $A$ an ample line bundle on $X$. Furthermore, let $\eta$ be a very general point of $X$. Ein, K\"uchle, and Lazarsfeld study the linear series $$
\left|kA \otimes m_\eta^{k\alpha} \right| $$ for various values of $\alpha$.
Roughly speaking, the argument of \cite{ekl} goes as follows. Suppose that $\epsilon(\eta,A) < \frac{1}{d}$ and let $C_\eta$ be a curve with \rn \begin{eqnarray} \frac{A \cdot C_\eta}{{\rm mult}_\eta(C_\eta)} < \frac{1}{d}. \label{h} \end{eqnarray} Moreover, we assume that $C_\eta$ is chosen from a flat family ${\cal F} \subset X \times T$ defined over a smooth affine variety $T$ of dimension $d$ with a quasi--finite map $\phi: T \rightarrow X$ where $$ \frac{A \cdot C_t}{{\rm mult}_{\phi(t)}(C_t)} \, < \, \frac{1}{d}, \,\,\,\,\forall t \in T. $$ Consider, then, for $k$ sufficiently divisible, the linear series \rn \begin{eqnarray}
\left|kA \otimes m_\eta^{k\alpha} \right|, \,\,\,k\alpha \in {\bf Z}. \label{fff1} \end{eqnarray} If $k\alpha > k\epsilon(\eta,A)$ then by \ref{h} the curve $C_\eta$ is in the base locus of this linear series. Using the fact that $C_\eta$ moves in the family ${\cal F}$ in order to ``differentiate in the parameter direction'',
Ein, K\"uchle, and Lazarsfeld then show that any divisor $D \in \left|kA \otimes m_\eta^{k\alpha}
\right|$ vanishes along $C_\eta$ to order at least $k\alpha - k\epsilon(\eta,A)$.
In particular, taking $\alpha > 2\epsilon(\eta,A)$ in \ref{fff1}, we see that if $D \in \left|kA \otimes m_\eta^{k\alpha} \right|$ then $D$ vanishes to order greater than $k\epsilon(\eta,A)$ along $C_\eta$ and hence vanishes along all curves $C_t \in {\cal F}$ with $\phi(t) \in C_\eta$. The next step in the argument is to show that a subfamily of the curves $\{C_t\}_{\phi(t) \in C_\eta}$, defined over a constructible subset $W \subset \phi^{-1}(C_\eta)$, sweeps out an irreducible surface $S_\eta \subset X$. The argument is then iterated and the base locus of $$
\left|kA \otimes m_\eta^{k\alpha} \right|, \,\,\, \alpha > r\epsilon(\eta,A) $$ is shown to contain an irreducible
subvariety of dimension $r$, swept out by an $r-1$ dimensional subfamily of ${\cal F}$. After iterating this argument $d$ times, a contradiction is reached because the linear series $$
\left|kA \otimes m_\eta^{kd\epsilon(\eta,A)+1} \right| $$ is forced to be empty but by hypothesis $d\epsilon(\eta,A) < 1$ and a simple argument using the Riemann--Roch theorem yields the desired contradiction.
Fundamental to the argument of \cite{ekl} is the following ``differentiation'' result: \next \begin{lemma}[\cite{ekl} Proposition 2.3] Suppose $\eta \in X$ is a very general point and let $W \subset X$ be an irreducible subvariety. Let $\pi: Y \rightarrow X$ be the blow--up of $\eta$ with exceptional divisor $E$ and let $\tilde{W}$ be the strict transform of $W$ in $Y$. Write $$ \alpha(W) = \inf\{\beta \in {\bf Q} \,|\, \tilde{W} \subset {\rm BS}(\pi^\ast A(-\beta E))\}. $$ Suppose $\gamma > \alpha(W)$ and $0 \neq s \in H^0\left(X,nA \otimes m_\eta^{n\gamma}\right)$. Then $$ {\rm ord}_W(s) \geq n\gamma - \lfloor \alpha(W)n + 1 \rfloor. $$ \label{ml} \end{lemma}
\noindent We have stated Lemma \ref{ml} in the form in which it will be used. To obtain this version from \cite{ekl} Proposition 2.3, let $\Gamma \subset
X \times T$ be the graph of $\phi: T \rightarrow X$. Let $p_1: X \times T \rightarrow X$ and $p_2: X \times T \rightarrow T$ denote the projections to the first and second factors respectively. Consider $$ {\rm BS} \left(p_1^\ast(kA)\otimes {\cal I}_\Gamma^{\lfloor k\alpha(W) + 1\rfloor}\right). $$ By hypothesis for all $k > 0$ these subschemes contain an irreducible component $Z_k \subset X \times T$ so that $Z_k \cap p_2^{-1}(t) \supset W$ for any $t$ with $\phi(t) = \eta$. As $k \rightarrow \infty$ one obtains a fixed subscheme $Z \subset X \times T$ with $W \subset Z_t$ where $Z_t = Z \cap p_2^{-1}(t)$. According to \cite{ekl} Proposition 2.3, any section $\sigma \in H^0\left(X \times T,p_1^\ast(kA) \otimes {\cal I}_\Gamma^{k\gamma}\right)$ must vanish to order at least $k\gamma - \lfloor k\alpha(W)+1\rfloor$ along $Z$. Indeed, if not, after differentiating $k\gamma -\lfloor k\alpha(W) + 1 \rfloor$ times we obtain $\sigma^\prime \in H^0\left(X \times T,p_1^\ast(kA) \otimes {\cal I}_\Gamma^{\lfloor k\alpha(W) +1 \rfloor}\right)$ not vanishing along $Z$ and this is a contradiction. Lemma \ref{ml} is the translation of this statement for the family $Z$ to the fibres $Z_t$. Note that when applying Lemma \ref{ml} we will often assume, for simplicity, that $$ {\rm ord}_W(s) \geq n(\gamma - \alpha(W)): $$ indeed, for the asymptotic estimate on jets, the round--down and 1 are irrelevant.
\noindent {\bf Proof of Theorem \ref{t}} Suppose that Theorem \ref{t} were false and $$ \epsilon(\eta,A) < \frac{1}{2}. $$ Thus through a very general point $\eta \in X$ there is a curve $C_\eta$ with $A\cdot C_\eta/{\rm mult}_\eta(C_\eta) < \frac{1}{2}$. Choosing a suitable family of such curves ${\cal F} \subset X \times T$ as above, we claim that there is an open set $U \subset X$ such that for each $x \in U$ there is an irreducible curve $C_x$ satisfying $A \cdot C_x = p$ and ${\rm mult}_x(C_x) = q$ with $p/q < 1/2$. If $\epsilon(\eta,A) = p/q$ and there is a curve $C_\eta$ through $\eta$ with ${\rm mult}_\eta(C_\eta) = q$ and $A\cdot C_\eta = p$ then this is satisfied. If there were a Seshadri exceptional surface $S$ at $\eta$, that is a surface with $\frac{\deg_A(S)}{{\rm mult}_\eta(S)} = \epsilon(\eta,A)$, then an immediate contradiction is obtained using Lemma \ref{ml}: choose $2p/q < \gamma < 1$ and $0 \neq \sigma \in H^0\left(X,nA \otimes m_\eta^{n\gamma}/m_\eta^{n\gamma +1}\right)$. According to Lemma \ref{ml}, ${\rm ord}_{S}(\sigma) \geq n\gamma - \lfloor pn/q + 1\rfloor$. Since ${\rm mult}_\eta(S) \geq 2$ this is not possible for $n \gg 0$. Since there must either be a Seshadri exceptional curve or a Seshadri exceptional surface when $\epsilon(\eta,A) < 1$, we must have a Seshadri exceptional curve $C_\eta$ through $\eta$ as desired.
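To spell out the contradiction in the surface case above (a short sketch; it uses only the fact that the multiplicity of an effective divisor at a point is at least the coefficient of any component through that point times the multiplicity of that component there): writing $D = Z(\sigma)$, we have ${\rm mult}_\eta(D) = n\gamma$ while $D - {\rm ord}_{S}(\sigma)S$ is effective, so $$ n\gamma = {\rm mult}_\eta(D) \geq {\rm ord}_{S}(\sigma)\,{\rm mult}_\eta(S) \geq 2\left(n\gamma - \lfloor pn/q + 1\rfloor\right), $$ which is impossible for $n \gg 0$ since $\gamma > 2p/q$.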
The goal of the proof is to estimate \rn \begin{eqnarray} \lim_{n \rightarrow \infty} \frac{h^0\left(X, nA \otimes m_\eta^{\frac{3pn}{q}}\right)}{n^3}. \label{y0} \end{eqnarray} We will show that this limit is positive and
then we have a contradiction from \cite{ekl} which shows that the linear series $\left|nA \otimes m_\eta^{\frac{(3p+\alpha)n}{q}}\right|$ is empty for any $\alpha > 0$. Let $\pi: Y \rightarrow X$ be the blow--up of $X$ at $\eta$ with exceptional divisor $E$. Choose a rational number $\alpha$ and a large positive integer $n$ with $n\alpha \in {\bf Z}$. Then we have \rn \begin{eqnarray} h^0(X,nA) &-& h^0\left(X,nA \otimes m_\eta^{\alpha n} \right) \nonumber \\
& = & \sum_{k=0}^{\alpha n - 1} \left(h^0\left(X,nA \otimes m_\eta^{k} \right) -
h^0\left(X,nA \otimes m_\eta^{k+1} \right) \right) \nonumber \\ & = & \sum_{k=0}^{\alpha n - 1} \left(h^0\left(Y,\pi^\ast(nA)(-kE)\right) - h^0\left(Y,\pi^\ast(nA)(-(k+1)E) \right) \right). \label{ff1} \end{eqnarray} We have $E \simeq {\bf P}^{2}$ and using \ref{ff1} and the exact sequence \begin{eqnarray*} 0 \rightarrow H^0\left(Y,\pi^\ast(nA)(-(k+1)E) \right) \rightarrow H^0\left(Y,\pi^\ast(nA)(-kE)\right) \rightarrow H^0\left(E,\pi^\ast(nA)(-kE)\right) \end{eqnarray*} we find \rn \begin{eqnarray} h^0(X,nA) - h^0\left(X,nA \otimes m_\eta^{\alpha n} \right) = \sum_{k=0}^{\alpha n - 1} h^0_Y\left({\bf P}^{2}, {\cal O}(k)\right) \label{ff2} \end{eqnarray} where $h^0_Y\left({\bf P}^{2}, {\cal O}(k)\right)$ denotes the dimension of the subspace of $H^0\left({\bf P}^{2}, {\cal O}(k)\right)$ coming via restriction from $H^0\left(Y,\pi^\ast(nA)(-kE)\right)$. Our goal, then, is for each value of $k$, to bound $h^0_Y\left({\bf P}^2,{\cal O}(k)\right)$ from above.
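For orientation we record the elementary asymptotic behind the cubic terms appearing below (a standard computation, using $h^0\left({\bf P}^{2}, {\cal O}(k)\right) = \left( \begin{array}{c} k+2 \\ 2 \end{array} \right)$): for any fixed rational $\alpha > 0$, $$ \sum_{k=0}^{\alpha n - 1} \left( \begin{array}{c} k+2 \\ 2 \end{array} \right) = \left( \begin{array}{c} \alpha n + 2 \\ 3 \end{array} \right) = \frac{(\alpha n)^3}{6} + O(n^2), $$ so if $|nA|$ generated all $k$--jets at $\eta$ for every $0 \leq k < \alpha n$ then the right hand side of \ref{ff2} would be $\frac{(\alpha n)^3}{6} + O(n^2)$.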
We next define critical numbers where the base locus of $\left|kA \otimes m_\eta^{k\alpha}\right|$ is forced to jump for numerical reasons: \begin{eqnarray*} \alpha_1 = \frac{p}{q},\\ \alpha_3 = \frac{2p}{q}. \end{eqnarray*} There is also a more subtle jumping value between $\alpha_1$ and $\alpha_3$, at least for $q$ sufficiently large, for which we require an extra definition. Let $Z \subset {\bf P}(T_\eta(X)) = {\bf P}^2$ denote the zero--dimensional subscheme of degree $q$ given by $T_\eta(C_\eta)$. Then one can define a Seshadri constant associated to $Z$ as follows. Suppose $\psi: Y \rightarrow {\bf P}^2$ is a birational map with $Y$ smooth and $\psi^{-1}({\cal I}_Z) = {\cal O}_Y(-E)$. Then $$ \epsilon(Z,{\cal O}(1)) = \sup_r\{r\in {\bf Q}^+: \psi^\ast({\cal O}(1))(-rE)\,\, \mbox{is nef}\}. $$ Then, as we will see below,
the base locus of $\left|kA \otimes m_\eta^{k\alpha}\right|$ is forced to contain a surface as soon as $\alpha > \alpha_2$ where $\alpha_2$ satisfies $$ \frac{\alpha_2 - p/q}{2\alpha_2} = \epsilon \left(T_\eta(C_\eta),{\cal O}_{{\bf P}(T_\eta(X))}(1)\right). $$ Note that a surface could enter the base locus of
$\left|kA \otimes m_\eta^{k\alpha}\right|$ for $\alpha < \alpha_2$. The numbers we chose are the ``worst case scenario,'' the case where the linear series $|kA|$ generates the {\em most possible} jets at $\eta$. If it generates fewer jets, the numbers in the argument only improve. We note here that the reader not interested in the counting details can skip the analysis involving $\alpha_2$. Indeed, when we prove Theorem \ref{t} below there is enough room in the estimates so that the key result is Lemma \ref{seshmult} which applies to the jet analysis once we have exceeded $\alpha_3$. We included a more complete analysis both in order to reveal the subtleties involved in counting and because in other cases the more detailed analysis may be required.
By Lemma \ref{ml} we know that any section of $\left|kA \otimes m_\eta^{k\beta}\right|$ for $\beta > \alpha_1$ must vanish along $C_\eta$ to multiplicity at least
$k(\beta - \alpha_1)$. Once $\beta > \alpha_2$ we claim that the base locus of $\left|kA \otimes m_\eta^{k\beta}\right|$ must contain a surface $S$ which passes through $\eta$. Indeed, if not, then choose
$s_1,s_2 \in \left|kA \otimes m_\eta^{k\beta}\right|$ so that $T_\eta(Z(s_1))$ and $T_\eta(Z(s_2))$ meet properly inside $T_\eta(X)$.
By Lemma \ref{ml} we have ${\rm mult}_{C_\eta}(s_1) \geq k(\beta - \alpha_1)$ and ${\rm mult}_{C_\eta}(s_2) \geq k(\beta - \alpha_1)$. By Theorem A of \cite{els}, we have $f_1,f_2 \in {\cal I}_{C_\eta}^{\lfloor k(\beta - \alpha_1)/2 \rfloor}$ where ${\cal I}_{C_\eta}$ is the ideal sheaf of $C_\eta$ and $f_1$ and $f_2$ are local equations for $s_1$ and $s_2$.
Let $\pi: Y \rightarrow X$ be the blow--up of $X$ at $\eta$ with exceptional divisor $E$. Let $D_1 = \pi^\ast(Z(s_1))(-k\beta E)|E$ and
$D_2 = \pi^\ast(Z(s_2))(-k\beta E)|E$. We have $E \simeq {\bf P}^2$ and $D_1,D_2$ are curves of degree $k\beta$ meeting properly along $T_\eta(C_\eta)$, each with multiplicity at least $\lfloor k(\beta - \alpha_1)/2 \rfloor$ along $T_\eta(C_\eta)$. Considering the pencil of divisors spanned by $D_1$ and $D_2$ shows that $\psi^\ast({\cal O}(k\beta))(-\lfloor k(\beta - \alpha_1)/2 \rfloor E)$ is nef where $\psi: Y \rightarrow {\bf P}^2$ is a resolution of ${\cal I}_Z$ as above. It follows that $$ \epsilon(Z,{\cal O}(1)) \geq \frac{\lfloor k(\beta -\alpha_1)/2 \rfloor}{k\beta} > \frac{\alpha_2 - p/q}{2\alpha_2}, $$ contradicting the definition of $\alpha_2$. Using Lemma \ref{ml} again, we conclude that there is a surface $S \subset X$ such that for $\beta > \alpha_2$ any divisor
$D \in \left|kA \otimes m_\eta^{k\beta}\right|$ must vanish along $S$ to order
at least $k(\beta - \alpha_2)$. Finally, let $S_\eta$ be the surface swept out by $\{C_x\}_{x \in Z}$ with $Z \subset \phi^{-1}(C_\eta)$ the constructible subset considered above. By Lemma \ref{ml} any divisor
$D \in \left|kA \otimes m_\eta^{k\beta}\right|$ must vanish along $S_\eta$ to order at least $k(\beta - \frac{2p}{q})$.
We are now prepared to bound $h^0_Y({\bf P}^2, {\cal O}(k))$ from above, using the information about the order of vanishing of each section of $H^0_Y({\bf P}^2, {\cal O}(k))$ along $T_\eta(C_\eta)$, $T_\eta(S)$, and $T_\eta(S_\eta)$. We will divide the estimate up into four cases \begin{eqnarray*} 0 \leq k \leq n\alpha_1, \\ n\alpha_1 < k \leq n\alpha_2, \\ n\alpha_2 <k \leq n\alpha_3,\\ n\alpha_3 < k \leq \frac{3np}{q}. \end{eqnarray*} We assume for simplicity that $\alpha_i \in {\bf Q}$ and $n\alpha_i \in {\bf Z}$. For those $\alpha_i$ which are irrational, it suffices in the argument below to replace $n\alpha_i$ by $\lfloor n\alpha_i \rfloor$. Note also that if $\alpha_2 > \alpha_3$, one simply eliminates the third interval, replacing $\alpha_2$ by $\alpha_3$ in the second interval.
For small values of $k$
one expects $|nA|$ to generate all $k$--jets and the estimate is \rn \begin{eqnarray} h^0_Y\left({\bf P}^2,{\cal O}(k)\right) \leq \left( \begin{array}{c} k+2 \\ 2 \end{array} \right), \,\,\, 0 \leq k \leq n\alpha_1. \label{est1} \end{eqnarray} Next, for $n\alpha_1 < k \leq n\alpha_2$
any section $\sigma \in H^0_Y\left({\bf P}^2,{\cal O}(k)\right)$ vanishes to order at least $\lfloor(k-n\alpha_1)/2\rfloor$ along $T_\eta(C_\eta) \subset {\bf P}(T_\eta(X))$ giving the estimate \rn \begin{eqnarray} h^0_Y\left({\bf P}^2,{\cal O}(k)\right) \leq \left( \begin{array}{c} k+2\\ 2 \end{array} \right) -q\left( \begin{array}{c} \lfloor (k-n\alpha_1)/2 \rfloor + 1 \\ 2 \end{array} \right) + o(k^2), \,\,\, n\alpha_1 < k \leq n\alpha_2. \label{est2} \end{eqnarray} This is established in Lemma \ref{nonsep} below.
Next suppose $n\alpha_2 < k \leq n\alpha_3$. Let $\sigma \in H^0\left(X,nA \otimes m_\eta^{k}\right)$. For $n$ suitably divisible, write $$ Z(\sigma) = aS + S^\prime, \,\,\,\mbox{with}\,\,\,{\rm mult}_\eta(S^\prime) = n\alpha_2: $$ this is possible since $\sigma$ must vanish to order at least $k-n\alpha_2$ along the surface $S$. Let $$ \rho: H^0\left(X,nA(-aS) \otimes m_\eta^{n\alpha_2}\right) \rightarrow H^0({\bf P}^2,{\cal O}(n\alpha_2)) $$ be the restriction homomorphism. Then by definition we have $$ h^0_Y\left({\bf P}^2,{\cal O}(k)\right) = \dim({\rm Image}(\rho)). $$ Using the construction in \cite{ekl} 3.8 we see that there exists an irreducible subvariety $V \subset X \times T$ such that $S = V \cap (X \times \{t\})$ for some $t$ with $\phi(t) = \eta$. In particular, since $\eta$ is a very general point it follows that there is a surface $S^{\prime\prime}$ algebraically equivalent to $S$, not containing $\eta$, namely $S^{\prime\prime} = V \cap (X \times \{\xi\})$ for a general point $\xi \in T$. Choose $r$ sufficiently large so that $rA + b(S-S^{\prime\prime})$
is very ample for all $b > 0$. Choose $D \in |rA+a(S-S^{\prime\prime})|$ so that $D$ does not contain $\eta$ and let $F = D+aS^{\prime\prime}$. Then tensoring by $F$ gives an injection $$ \rho_F: H^0\left(X,nA(-aS) \otimes m_\eta^{n\alpha_2}\right) \rightarrow H^0\left(X,(n+r)A \otimes m_\eta^{n\alpha_2} \right) $$ which preserves multiplicity at $\eta$. We conclude that \rn \begin{eqnarray} h^0_Y\left({\bf P}^2,{\cal O}(k)\right) \leq h^0\left(X,(n+r)A \otimes m_\eta^{n\alpha_2}/ m_\eta^{n\alpha_2+1}\right), \,\,\,n\alpha_2 < k \leq n\alpha_3. \label{est3} \end{eqnarray}
Finally suppose $n\alpha_3 < k \leq \frac{3pn}{q}$. Suppose that ${\rm mult}_\eta(\sigma) = k$, $\sigma \in H^0\left(X,nA\right)$. We know from Lemma \ref{ml} that ${\rm mult}_{S_\eta}(\sigma) \geq k - n\alpha_3$. Since, according to Lemma \ref{seshmult} below ${\rm mult}_{C_\eta}(S_\eta) \geq 3$, we can write $$ Z(\sigma) = aS_\eta + S^\prime $$ with ${\rm mult}_\eta(S^\prime) = k - 3(k -n\alpha_3)$. Arguing as in the previous case then gives \rn \begin{eqnarray} h^0_Y\left({\bf P}^2,{\cal O}(k)\right) \leq h^0\left(X,(n+r)A \otimes m_\eta^{3n\alpha_3 - 2k}/ m_\eta^{3n\alpha_3-2k+1}\right), \,\,\,n\alpha_3 < k \leq \frac{3pn}{q}. \label{est4} \end{eqnarray}
We are now prepared to evaluate the limit \ref{y0} using \ref{ff2}. We assume to begin with that $q \geq 5$. In particular this means that $\epsilon \left(T_\eta(C_\eta), {\cal O}_{{\bf P}(T_\eta(X))}(1)\right) < 1/2$ and this guarantees that $\alpha_2 < \alpha_3$. We divide the sum into four ranges of $k$ determined by our critical numbers $\alpha_1,\alpha_2,\alpha_3$. First, by \ref{est1} $$ \lim_{n \rightarrow \infty} \frac{\displaystyle \sum_{k=0}^{n\alpha_1} h^0_Y({\bf P}^2,{\cal O}(k))}{n^3} \leq \frac{\alpha_1^3}{6}. $$ Next, using \ref{est2} we see that $$ \lim_{n \rightarrow \infty} \frac{\displaystyle \sum_{k=n\alpha_1+1}^{n\alpha_2} h^0_Y({\bf P}^2,{\cal O}(k))}{n^3} \leq \frac{\alpha_2^3-\alpha_1^3}{6} - \frac{q(\alpha_2 -\alpha_1)^3}{24}. $$ Using \ref{est2} and \ref{est3} gives \begin{eqnarray*} \lim_{n \rightarrow \infty} \frac{\displaystyle \sum_{k=n\alpha_2+1}^{n\alpha_3} h^0_Y({\bf P}^2,{\cal O}(k))}{n^3} & \leq & \lim_{n \rightarrow \infty} \frac{\displaystyle \sum_{k=n\alpha_2+1}^{n\alpha_3} h^0\left(X,(n+r)A \otimes m_\eta^{(n+r)(\frac{n\alpha_2}{n+r})}/m_\eta^{(n+r)(\frac{n\alpha_2}{n+r})+1} \right)}{n^3} \\ & \leq & (\alpha_3-\alpha_2)\left(\frac{\alpha_2^2}{2} - \frac{q(\alpha_2-\alpha_1)^2}{8}\right). \end{eqnarray*} Finally, for $\sum_{k=n\alpha_3+1}^{3np/q} h^0_Y({\bf P}^2,{\cal O}(k))$, using \ref{est4} and arguing as in the above case we can remove the $r$ which is fixed. But the sum $$ \sum_{k=n\alpha_3+1}^{3np/q} h^0\left(X,nA \otimes m_\eta^{3n\alpha_3 - 2k}/ m_\eta^{3n\alpha_3-2k+1}\right) $$ is simply every other term of the sum $\sum_{k=0}^{n\alpha_3}h^0_Y({\bf P}^2,{\cal O}(k))$ and since our upper bound for $h^0_Y({\bf P}^2,{\cal O}(k))$ varies as a piecewise polynomial this gives $$ \lim_{n\rightarrow \infty} \frac{\displaystyle \sum_{k=n\alpha_3+1}^{3np/q} h^0\left({\bf P}^2,{\cal O}(k)\right)}{n^3} \leq \frac{1}{2}\left(\frac{\alpha_2^3}{6} - \frac{q(\alpha_2 -\alpha_1)^3}{24} +(\alpha_3-\alpha_2)\left(\frac{\alpha_2^2}{2} - q\frac{(\alpha_2-\alpha_1)^2}{8}\right) \right). $$ Combining all of the above estimates gives \rn \begin{eqnarray} \lim_{n\rightarrow \infty} \frac{\displaystyle \sum_{k=0}^{3np/q} h^0\left({\bf P}^2,{\cal O}(k)\right)}{n^3} &\leq& \frac{3}{2}\left(\frac{\alpha_2^3}{6} - \frac{q(\alpha_2 -\alpha_1)^3}{24} +(\alpha_3-\alpha_2)\left(\frac{\alpha_2^2}{2} - q\frac{(\alpha_2-\alpha_1)^2}{8}\right) \right) \nonumber \\ &\leq& \frac{3}{2}\left(\frac{\alpha_2^3}{6} +(\alpha_3-\alpha_2)\left(\frac{\alpha_2^2}{2}\right)\right). \label{best} \end{eqnarray} Note that in the last inequality we have omitted two of the negative or defect terms which were obtained by the detailed analysis above. The reason for this is that for $q$ sufficiently large, the estimate \ref{best} turns out to be sufficient to establish Theorem \ref{t} while small values of $q$ can be dealt with by hand. We included all of the counting details nonetheless as this is the technical heart of this paper.
In order to compute the upper bound in \ref{best}, we need to know the value of $\alpha_2$. The Seshadri constant $\epsilon\left(T_\eta(C_\eta),{\cal O}_{{\bf P}(T_\eta(X))}(1)\right)$ is, however, very difficult to compute and so we look at the worst case scenario. In particular, the bound in \ref{est2} increases until $k = \frac{np}{q-4} + O(1)$ and then decreases. The bound in \ref{est3} then repeats the last value for the bound in \ref{est2} and then when one reaches \ref{est4} the values start to decrease.
The $O(1)$ term will have no effect on the asymptotic estimate and thus the worst case to consider is $\alpha_2 = \frac{p}{q-4}$. With this value of $\alpha_2$ we need to assume that $q \geq 9$ in order
to guarantee that $\alpha_2 < \alpha_3$. We find then, using \ref{best}, \begin{eqnarray*} \lim_{n\rightarrow \infty} \frac{\displaystyle \sum_{k=0}^{3np/q} h^0\left({\bf P}^2,{\cal O}(k)\right)}{n^3} &\leq &\frac{3}{2}\left(\frac{p^3}{6(q-4)^3} + \frac{p(q-8)}{q(q-4)}\left( \frac{p^2}{2(q-4)^2} \right)\right) \\ &=& \frac{1}{6}\left(\frac{3p^3}{2(q-4)^3} + \frac{9p^3(q-8)}{2q(q-4)^3}\right) \end{eqnarray*} One checks that when $q \geq 10$ and $p/q < 1/2$ then $$ \frac{1}{6}\left(\frac{3p^3}{2(q-4)^3} + \frac{9p^3(q-8)}{2q(q-4)^3}\right) < \frac{1}{6}. $$ It follows from \ref{ff2} that when $q \geq 10$ $$ \lim_{n\rightarrow \infty} \frac{h^0\left(X,nA \otimes m_\eta^{\frac{3pn}{q}}\right)}{n^3} > 0 $$ and this concludes the proof of Theorem \ref{t} when $q \geq 10$.
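As a numerical illustration of the last inequality (a simple check, not needed for the argument): taking $p=4$ and $q=10$, so that $p/q = 2/5 < 1/2$, we get $$ \frac{3p^3}{2(q-4)^3} + \frac{9p^3(q-8)}{2q(q-4)^3} = \frac{192}{432} + \frac{1152}{4320} = \frac{4}{9} + \frac{4}{15} = \frac{32}{45} < 1, $$ so the displayed quantity is indeed less than $\frac{1}{6}$.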
If $q < 10$ then there are only four possibilities which are not eliminated by \cite{ekl}, namely $p/q = 2/5, p/q = 3/7, p/q = 3/8,$ and $p/q = 4/9$. We outline here how to eliminate the cases $p/q = 3/7$ and $p/q = 4/9$
which are the most difficult of the four. The counting here goes as follows. For $0 \leq k \leq np/q$ we use \ref{est1}. For $np/q < k \leq 2np/q$, we use the estimate in \ref{f3} below. Finally for $2np/q < k \leq 3np/q$, we use \ref{est4}. This gives, in the case where $p/q = 3/7$, $$ \lim_{n\rightarrow \infty} \frac{\displaystyle \sum_{k=0}^{3np/q} h^0\left({\bf P}^2,{\cal O}(k)\right)}{n^3} \leq \frac{1}{6}\left(\frac{567}{686}\right). $$ For $p/q = 4/9$ we find $$ \lim_{n\rightarrow \infty} \frac{\displaystyle \sum_{k=0}^{3np/q} h^0\left({\bf P}^2,{\cal O}(k)\right)}{n^3} \leq \frac{1}{6}\left(\frac{224}{243}\right). $$
\begin{lemma} Suppose $C_\eta$ satisfies $$ \frac{A \cdot C_\eta}{{\rm mult}_\eta(C_\eta)} = \frac{p}{q} < \frac{1}{2}. $$ Let $S_\eta$ be the surface swept out by $\{C_x\}_{x \in Z \subset \phi^{-1}(C_\eta)}$. Then $$ {\rm mult}_{C_\eta}(S_\eta) \geq 3. $$ \label{seshmult} \end{lemma}
\noindent {\bf Proof of Lemma \ref{seshmult}} To see why $S_\eta$ must be singular along $C_\eta$ note that for a general point $\xi \in \phi(Z) \subset C_\eta$ there is a curve $C_\xi \subset S_\eta$ such that $$ \frac{A \cdot C_\xi}{{\rm mult}_\xi(C_\xi)} = \frac{p}{q} < \frac{1}{2}. $$
If $S_\eta$ were smooth at a general point $\xi \in C_\eta$ then it would follow that $\epsilon(\xi,A|S_\eta) < \frac{1}{2}$ and this is impossible since Ein and Lazarsfeld \cite{el} have established that on a smooth surface the set of points where the Seshadri constant can be less than one is at most countable. In order to refine this argument, let $\pi: X^\prime \rightarrow X$ be an embedded resolution of $S_\eta$. For $\xi \in C_\eta$ general let $\tilde{C}_\xi$ be the strict transform of $C_\xi$ in $X^\prime$ and write $$ \pi^{-1}(\xi) \cap \tilde{C}_\xi = \{x_1,\ldots,x_r\}. $$ Suppose moreover that $\psi: C \rightarrow \tilde{C}_\xi$ is a desingularization with $\psi^{-1}(\pi^{-1}(\xi)) = \left\{ y_1,\ldots,y_s \right\}$.
Choose a linear series $|D|$ on $X$ with $D$ sufficiently positive so that if
$E \in |D|$ is a general member through $\xi$ then $i(x_j,\tilde{C}_\xi \cdot \tilde{E}:X^\prime) = {\rm mult}_{x_j}(\tilde{C}_\xi)$ for $1 \leq j \leq r$: this is possible by \cite{F} 12.4.5. Then by \cite{F} 12.4.5 and 7.1.17 we have $$ q = i(\xi,C_\xi \cdot D:X) = \sum_{j=1}^s {\rm ord}_{y_j}(\psi^\ast(\pi^\ast(D))) = \sum_{i=1}^r {\rm mult}_{x_i}\tilde{C}_\xi. $$ Now we have $s = {\rm mult}_{C_\eta}(S_\eta)$ and thus if $s = 2$ we find that for $i = 1$ or $i = 2$ $$ \frac{\pi^\ast(A) \cdot \tilde{C}_\xi}{{\rm mult}_{x_i}(\tilde{C}_\xi)} < 1. $$ Let $\tilde{S} \subset X^\prime$ be the resolution of $S_\eta$. The curves $\tilde{C}_\xi$ move in a one parameter family along the surface $\tilde{S}$ and thus $\{x \in \tilde{S}: \epsilon(x,\pi^\ast(A)) < 1\}$ is not countable, violating the main result of \cite{el}. Note that in \cite{el} Ein and Lazarsfeld state the main result for ample line bundles but the proof holds unchanged for a big and nef line bundle. Indeed, the only point where \cite{el} uses ampleness is to show that curves with bounded degree relative to the appropriate ample bundle $A$ move in finitely many families but this also holds more generally when $A$ is big and nef.
\begin{lemma} Suppose ${\rm mult}_\eta(C_\eta) = q$ and $n\alpha_1 <k \leq n\alpha_2$. Then $$ h^0_Y\left({\bf P}^2,{\cal O}(k)\right) \leq \left( \begin{array}{c} k+2\\ 2 \end{array} \right) -q\left( \begin{array}{c} \lfloor (k-n\alpha_1 + 1)/2\rfloor \\ 2 \end{array} \right) + o(k^2), \,\,\, n\alpha_1 < k \leq n\alpha_2. $$ \label{nonsep} \end{lemma}
\noindent {\bf Proof of Lemma \ref{nonsep}} Let $A = {\cal O}_{{\bf P}^2}(1)$ where ${\bf P}^2 = {\bf P}(T_\eta(X))$ and let $Z \subset {\bf P}^2$ be the projectivized tangent cone of $C_\eta$ at $\eta$. By definition of $\epsilon(Z,A)$, given $\delta > 0$ so that $\epsilon(Z,A) - \delta \in {\bf Q}$, for all $n > 0$ sufficiently large and divisible the evaluation map $$ H^0({\bf P}^2,{\cal O}(n)) \rightarrow H^0\left({\bf P}^2,{\cal O}(n) \otimes {\cal O}_{{\bf P}^2}/{\cal I}_Z^{n(\epsilon(Z,A) - \delta)}\right) $$ is surjective. According to \cite{F} Example 4.3.4 $$ \ell({\cal O}_{{\bf P}^2}/{\cal I}_Z^{r}) = \frac{qr^2}{2} + O(r). $$ In particular, for $n\alpha_1 < k \leq n\alpha_2$ we see that Lemma \ref{nonsep} holds since any section of $H^0_Y\left({\bf P}^2,{\cal O}(k)\right)$ vanishes to order at least $\lfloor(k - n\alpha_1)/2\rfloor$ along $Z$.
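As a sanity check on the length computation above (a standard special case): if $Z$ consists of $q$ distinct reduced points of ${\bf P}^2$ then, locally at each point, ${\cal I}_Z^{r}$ is the $r$--th power of the maximal ideal, so $$ \ell({\cal O}_{{\bf P}^2}/{\cal I}_Z^{r}) = q\left( \begin{array}{c} r+1 \\ 2 \end{array} \right) = \frac{qr^2}{2} + O(r), $$ in agreement with the displayed formula.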
\section{Counting jets in the general case}
In addition to the computational complexity involved in estimating $h^0\left(X,nA \otimes m_\eta^{k}\right)$ for different values of $k$, the central difficulty in the higher dimensional case is Lemma \ref{seshmult} which uses the fact that on a surface $X$ the set $\{x\in X:\epsilon(x,A) < 1\}$ is countable for an ample line bundle $A$. In particular, it is critical for Lemma \ref{seshmult} that this set contains no divisor and this is not known in higher dimension. In order to prove Theorem \ref{main}, we begin by recalling a key definition from \cite{N}. \begin{definition}
For an ample ${\bf Q}$--divisor $A$ on a smooth projective variety $X$ and a very general point $\eta \in X$ we let $$
m(A) = \sup\left\{{\rm mult}_\eta(D) \,\left|\, D \equiv A, \,\,D \in \,\,{\rm Div}(X) \otimes {\bf Q} \,\,\mbox{effective}\right.\right\}: $$ here $\equiv$ denotes numerical equivalence. \label{d2} \end{definition}
\noindent The importance of Definition \ref{d2} lies in the following simple result:
\begin{lemma} Suppose $X$ is a projective variety of dimension $d$ and $A$ an ample line bundle on $X$. If $$ \epsilon(\eta,A) < \frac{m(A)}{d} $$ then given $\delta > 0$ there exists an irreducible proper subvariety $Y \subset X$, of dimension at least one, such that $$
\epsilon(\xi,A|Y) < \epsilon(\eta, A) + \delta: $$
here $\xi$ is a very general point of $Y$ and $A|Y$ is the restriction of $A$ to $Y$. \label{l0} \end{lemma}
\noindent {\bf Proof of Lemma \ref{l0}} Suppose, to the contrary, that for some $\delta > 0$ $$
\epsilon(\xi,A|Y) \geq \epsilon(\eta,A) + \delta $$ for every irreducible proper subvariety $Y \subset X$ of dimension at least one. As above, following \cite{ekl} 3.4, given $\epsilon > 0$ there is a family of curves ${\cal F} \subset X \times T$ with
\rn \begin{eqnarray} \alpha = \frac{A \cdot C_t}{{\rm mult}_{\phi(t)}(C_t)} < \epsilon(\eta,A) + \epsilon, \,\,\,t \in T. \label{tg} \end{eqnarray}
This gives a chain of subvarieties \rn \begin{eqnarray} V_1 \subset V_2 \subset \cdots \subset V_d \label{l1} \end{eqnarray} where $V_1 = C_\eta$ for a very general point $\eta$ and $V_{i+1} = C(V_i)$ in the notation of \cite{ekl} Lemma 3.5.1: in particular, $V_{i+1}$ is obtained by adjoining curves in the family ${\cal F}$. By \cite{ekl} Lemma 3.5.1 each $V_i$ is irreducible.
According to Lemma \ref{ml} any section $$ s \in H^0\left(X, nA \otimes m_\eta^{2n\alpha + 2}\right) $$ vanishes along $C_\eta$ to order at least $n\alpha + 1$ and hence vanishes along $V_2$. Proceeding inductively using Lemma \ref{ml} we find that if $s \in H^0\left(X, nA \otimes m_\eta^{dn\alpha + d}\right)$ then \rn \begin{eqnarray}
s|V_d = 0. \label{uv} \end{eqnarray} By hypothesis, $m(A) > d\epsilon(\eta,A)$ and thus, shrinking $\epsilon$ in \ref{tg} if necessary, we can assume that $s$ is not identically zero in \ref{uv} and thus $\dim (V_d) \leq d-1$. It follows from \ref{l1} that for some $1 \leq r \leq d-1$ we must have $$ V_r = V_{r+1}. $$ In particular for a general, hence smooth, point $\xi \in V_r$, we find a curve $C_\xi \subset V_r$ with $$ \frac{A \cdot C_\xi}{{\rm mult}_\xi(C_\xi)} = \alpha. $$ Hence $$
\epsilon(\xi,A|V_r) \leq \alpha < \epsilon(\eta,A) + \delta. $$ Thus we can take $Y = V_r$ and this proves Lemma \ref{l0}.
We will derive Theorem \ref{main} from Lemma \ref{l0} and the following result. \begin{lemma} Suppose $X$ is a projective variety of dimension $d \geq 4$ and $A$ an ample line bundle on $X$. Then either
$$ m(A) > 1 + \frac{1}{3d} $$ or $$ \epsilon(\eta,A) > \frac{1}{d} + \frac{1}{3d^2}. $$ \label{l2} \end{lemma}
\noindent {\bf Proof of Lemma \ref{l2}} The proof of Lemma \ref{l2} follows closely the method of \S 1 though the counting is much simpler. Let $\pi: Y \rightarrow X$ be the blow--up of $X$ at $\eta$ with exceptional divisor $E \simeq {\bf P}^{d-1}$. Choose a rational number $\alpha$, and a large positive integer $n$ so that $n\alpha \in {\bf Z}$. Then we have, with the same notation
as above \rn \begin{eqnarray} h^0(X,nA) - h^0\left(X,nA \otimes m_\eta^{\alpha n} \right) = \sum_{k=0}^{\alpha n - 1} h^0_Y\left({\bf P}^{d-1}, {\cal O}(k)\right) \label{f2} \end{eqnarray} where, as above,
$h^0_Y\left({\bf P}^{d-1}, {\cal O}(k)\right)$ denotes the dimension of the subspace of $H^0\left({\bf P}^{d-1}, {\cal O}(k)\right)$ coming via restriction from $H^0\left(Y,\pi^\ast(nA)(-kE)\right)$.
Suppose that $x \in {\bf P}^{d-1} = {\bf P}(T_\eta(X))$ is a tangent direction to $C_\eta$ at $\eta$. Then for $k > \epsilon(\eta,A)n$, Lemma \ref{ml} implies \rn \begin{eqnarray} h^0_Y\left({\bf P}^{d-1}, {\cal O}(k)\right) \leq h^0\left({\bf P}^{d-1}, {\cal O}(k) \otimes m_x^{\lceil k-\epsilon(\eta,A)n -1\rceil} \right): \label{f3} \end{eqnarray} Combining \ref{f2} and \ref{f3} and taking the limit as $n \rightarrow \infty$ we find \begin{eqnarray} \lim_{n \rightarrow \infty} \frac{h^0(X,nA)- h^0(X,nA \otimes m_\eta^{\alpha n})}{n^d} &\leq& \frac{1}{(d-1)!}\int_0^\alpha \left(x^{d-1} - {\rm max}\left\{0,(x-\epsilon(\eta,A))^{d-1}\right\}\right)dx \nonumber \\ &=& \frac{\alpha^d - (\alpha - \epsilon(\eta,A))^d}{d!}. \rn \label{f4} \end{eqnarray} According to \ref{f4}, if $$ \alpha^d - (\alpha - \epsilon(\eta,A))^d < 1 $$ then $\lim_{n \rightarrow \infty} \frac{h^0\left(X,nA \otimes m_\eta^{\alpha n}\right)}{n^{d}} > 0$ and hence $m(A) > \alpha$. Suppose then that $\epsilon(\eta,A) \leq \frac{1}{d}+ \frac{1}{3d^2}$ and let $\alpha = 1 + \frac{1}{3d}$. Then we find $$ \lim_{d \rightarrow \infty}\left( \left(\frac{3d+1}{3d}\right)^d - \left(\frac{3d+1}{3d} - \frac{3d+1}{3d^2}\right)^d\right) = e^{1/3} - e^{-2/3} < 0.9. $$ Thus we see that for all $d$ sufficiently large $m(A) > 1+ \frac{1}{3d}$ if $\epsilon(\eta,A) \leq \frac{1}{d} + \frac{1}{3d^2}$. Elementary calculus suffices to show that this also holds for all $d \geq 4$ and this establishes Lemma \ref{l2}.
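To illustrate the last claim in the boundary case $d = 4$ (a direct numerical check of $\alpha^d - (\alpha - \epsilon(\eta,A))^d$ at the worst admissible value $\epsilon(\eta,A) = \frac{1}{d}+\frac{1}{3d^2}$; a smaller value of $\epsilon(\eta,A)$ only decreases this quantity): here $\alpha = \frac{13}{12}$ and $\alpha - \epsilon(\eta,A) = \frac{13}{12} - \frac{13}{48} = \frac{13}{16}$, so $$ \left(\frac{13}{12}\right)^4 - \left(\frac{13}{16}\right)^4 = \frac{28561}{20736} - \frac{28561}{65536} \approx 1.3774 - 0.4358 = 0.9416 < 1, $$ and hence $m(A) > 1 + \frac{1}{12}$ whenever $\epsilon(\eta,A) \leq \frac{1}{4}+\frac{1}{48}$.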
\noindent {\bf Proof of Theorem \ref{main}} Suppose to the contrary that \rn \begin{eqnarray} \epsilon(\eta,A) \leq \frac{1}{d} + \frac{1}{3d^2}. \label{g1} \end{eqnarray} By Lemma \ref{l2} $m(A) > 1 + \frac{1}{3d}$ and thus $$ \epsilon(\eta,A) < \frac{m(A)}{d}. $$ By Lemma \ref{l0}, given $\delta > 0$ there is a proper subvariety $Y \subset X$ such that for a very general point $\xi \in Y$ $$
\epsilon(\xi,A|Y) \leq \frac{1}{d}+ \frac{1}{3d^2} + \delta. $$
But by the main theorem of \cite{ekl}, we know that $\epsilon(\xi,A|Y) \geq \frac{1}{\dim(Y)} \geq \frac{1}{d-1}$ and this is a contradiction for $\delta$ sufficiently small.
Note that above in \ref{f3} we have not counted carefully: in particular, the curve $C_\eta$ is singular at $\eta$ and thus its projectivized tangent cone in ${\bf P}(T_\eta(X))$ is more than a single reduced point. Thus the counting can be improved considerably here, but we were unable to obtain a significant quantitative improvement in the final result by doing so.
\begin{tabbing} Department of Mathematics and Statistics\\ University of New Mexico\\ Albuquerque, New Mexico 87131\\ {\em Electronic mail:} [email protected] \end{tabbing}
\end{document}
publisher={Springer-Verlag},
place={Berlin},
date={2003},
pages={xiv+769},
isbn={3-540-44085-2},
review={\MR{1940513 (2004g:03071)}}, }
\bib{MR560220}{article}{
author={Jech, T.},
author={Magidor, M.},
author={Mitchell, W.},
author={Prikry, K.},
title={Precipitous ideals},
journal={J. Symbolic Logic},
volume={45},
date={1980},
number={1},
pages={1--8},
issn={0022-4812},
review={\MR{560220 (81h:03097)}}, }
\bib{JensenNotesNonoverlap}{misc}{
author={Jensen, R. B.},
title={Nonoverlapping Extenders},
status={handwritten notes},
place={Oxford} }
\bib{MR2499432}{article}{
author={Jensen, Ronald},
author={Schimmerling, Ernest},
author={Schindler, Ralf},
author={Steel, John},
title={Stacking mice},
journal={J. Symbolic Logic},
volume={74},
date={2009},
number={1},
pages={315--335},
issn={0022-4812},
review={\MR{2499432 (2010d:03087)}}, }
\bib{kwm}{article}{
author={Jensen, R. B.},
author={Steel, J. R.},
title={$\mbfK$ without a measurable, unpublished} }
\bib{MR1994835}{book}{
author={Kanamori, Akihiro},
title={The higher infinite},
series={Springer Monographs in Mathematics},
edition={2},
note={Large cardinals in set theory from their beginnings},
publisher={Springer-Verlag},
place={Berlin},
date={2003},
pages={xxii+536},
isbn={3-540-00384-3},
review={\MR{1994835 (2004f:03092)}}, }
\bib{MR0480041}{article}{
author={Kanamori, A.},
title={Weakly normal filters and irregular ultrafilters},
journal={Trans. Amer. Math. Soc.},
volume={220},
date={1976},
pages={393--399},
issn={0002-9947},
review={\MR{0480041 (58 \#240)}}, }
\bib{MR2371211}{article}{
author={Ketchersid, Richard},
author={Larson, Paul},
author={Zapletal, Jind{\v{r}}ich},
title={Increasing $\delta_2^1$ and Namba-style forcing},
journal={J. Symbolic Logic},
volume={72},
date={2007},
number={4},
pages={1372--1378},
issn={0022-4812},
review={\MR{2371211 (2008i:03058)}},
doi={10.2178/jsl/1203350792}, }
\bib{MR0419236}{article}{
author={Ketonen, Jussi},
title={Nonregular ultrafilters and large cardinals},
journal={Trans. Amer. Math. Soc.},
volume={224},
date={1976},
pages={61--73},
issn={0002-9947},
review={\MR{0419236 (54 \#7260)}}, }
\bib{MR2674000}{article}{
author={Krueger, John},
title={Internal approachability and reflection},
journal={J. Math. Log.},
volume={8},
date={2008},
number={1},
pages={23--39},
issn={0219-0613},
review={\MR{2674000 (2011i:03046)}},
doi={10.1142/S0219061308000701}, }
\bib{MR2332607}{article}{
author={Krueger, John},
title={Internally club and approachable},
journal={Adv. Math.},
volume={213},
date={2007},
number={2},
pages={734--740},
issn={0001-8708},
review={\MR{2332607 (2008e:03079)}}, }
\bib{Krueger_WRP}{article}{
author={Krueger, John},
title={On the weak reflection principle},
journal={to appear in the Transactions of the American Mathematical Society}, }
\bib{MR597342}{book}{
author={Kunen, Kenneth},
title={Set theory},
series={Studies in Logic and the Foundations of Mathematics},
volume={102},
note={An introduction to independence proofs},
publisher={North-Holland Publishing Co.},
place={Amsterdam},
date={1980},
pages={xvi+313},
isbn={0-444-85401-0},
review={\MR{597342 (82f:03001)}}, }
\bib{MR2030084}{article}{
author={Larson, Paul},
author={Shelah, Saharon},
title={Bounding by canonical functions, with CH},
journal={J. Math. Log.},
volume={3},
date={2003},
number={2},
pages={193--215},
issn={0219-0613},
review={\MR{2030084 (2005f:03080)}}, }
\bib{MR1782117}{article}{
author={Larson, Paul},
title={Separating stationary reflection principles},
journal={J. Symbolic Logic},
volume={65},
date={2000},
number={1},
pages={247--258},
issn={0022-4812},
review={\MR{1782117 (2001k:03094)}},
doi={10.2307/2586534}, }
\bib{MR2069032}{book}{
author={Larson, Paul B.},
title={The stationary tower},
series={University Lecture Series},
volume={32},
note={Notes on a course by W. Hugh Woodin},
publisher={American Mathematical Society},
place={Providence, RI},
date={2004},
pages={x+132},
isbn={0-8218-3604-8},
review={\MR{2069032 (2005e:03001)}}, }
\bib{MR0472529}{article}{
author={Laver, Richard},
title={Making the supercompactness of $\kappa $ indestructible under
$\kappa $-directed closed forcing},
journal={Israel J. Math.},
volume={29},
date={1978},
number={4},
pages={385--388},
issn={0021-2172},
review={\MR{0472529 (57 \#12226)}}, }
\bib{MR1317621}{article}{
author={Laver, Richard},
title={On the algebra of elementary embeddings of a rank into itself},
journal={Adv. Math.},
volume={110},
date={1995},
number={2},
pages={334--346},
issn={0001-8708},
review={\MR{1317621 (96c:03098)}},
doi={10.1006/aima.1995.1014}, }
\bib{MR694265}{article}{
author={Laver, Richard},
title={Saturated ideals and nonregular ultrafilters},
conference={
title={Patras Logic Symposion},
address={Patras},
date={1980},
},
book={
series={Stud. Logic Found. Math.},
volume={109},
publisher={North-Holland},
place={Amsterdam},
},
date={1982},
pages={297--305},
review={\MR{694265 (85e:03110)}}, }
\bib{MR1149623}{article}{
author={Laver, Richard},
title={The left distributive law and the freeness of an algebra of
elementary embeddings},
journal={Adv. Math.},
volume={91},
date={1992},
number={2},
pages={209--231},
issn={0001-8708},
review={\MR{1149623 (93b:08008)}},
doi={10.1016/0001-8708(92)90016-E}, }
\bib{MR2691107}{book}{
author={Law, David Richard},
title={An abstract condensation property},
note={Thesis (Ph.D.)--California Institute of Technology},
publisher={ProQuest LLC, Ann Arbor, MI},
date={1994},
pages={40},
review={\MR{2691107}}, }
\bib{MR0429566}{article}{
author={Magidor, Menachem},
title={How large is the first strongly compact cardinal? or A study on
identity crises},
journal={Ann. Math. Logic},
volume={10},
date={1976},
number={1},
pages={33--57},
issn={0168-0072},
review={\MR{0429566 (55 \#2578)}}, }
\bib{MR0327518}{article}{
author={Magidor, M.},
title={Combinatorial characterization of supercompact cardinals},
journal={Proc. Amer. Math. Soc.},
volume={42},
date={1974},
pages={279--285},
issn={0002-9939},
review={\MR{0327518 (48 \#5860)}}, }
\bib{MR0295904}{article}{
author={Magidor, M.},
title={On the role of supercompact and extendible cardinals in logic},
journal={Israel J. Math.},
volume={10},
date={1971},
pages={147--157},
issn={0021-2172},
review={\MR{0295904 (45 \#4966)}}, }
\bib{MR683153}{article}{
author={Magidor, Menachem},
title={Reflecting stationary sets},
journal={J. Symbolic Logic},
volume={47},
date={1982},
number={4},
pages={755--771 (1983)},
issn={0022-4812},
review={\MR{683153 (84f:03046)}}, }
\bib{MR0357121}{article}{
author={Menas, Telis K.},
title={On strong compactness and supercompactness},
journal={Ann. Math. Logic},
volume={7},
date={1974/75},
pages={327--359},
issn={0168-0072},
review={\MR{0357121 (50 \#9589)}}, }
\bib{MR673794}{article}{
author={Mitchell, William},
title={How weak is a closed unbounded ultrafilter?},
conference={
title={Logic Colloquium '80},
address={Prague},
date={1980},
},
book={
series={Stud. Logic Foundations Math.},
volume={108},
publisher={North-Holland},
place={Amsterdam},
},
date={1982},
pages={209--230},
review={\MR{673794 (84f:03047)}}, }
\bib{MR0313057}{article}{
author={Mitchell, William},
title={Aronszajn trees and the independence of the transfer property},
journal={Ann. Math. Logic},
volume={5},
date={1972/73},
pages={21--46},
issn={0168-0072},
review={\MR{0313057 (47 \#1612)}}, }
\bib{MR869398}{article}{
author={Mitchell, W.},
title={Applications of the covering lemma for sequences of measures},
journal={Trans. Amer. Math. Soc.},
volume={299},
date={1987},
number={1},
pages={41--58},
issn={0002-9947},
review={\MR{869398 (88a:03122)}}, }
\bib{MR2279659}{article}{
author={Mitchell, William J.},
title={On the Hamkins approximation property},
journal={Ann. Pure Appl. Logic},
volume={144},
date={2006},
number={1-3},
pages={126--129},
issn={0168-0072},
review={\MR{2279659 (2007k:03130)}},
doi={10.1016/j.apal.2006.05.005}, }
\bib{MR2452816}{article}{
author={Mitchell, William J.},
title={$I[\omega_2]$ can be the nonstationary ideal on ${\rm
Cof}(\omega_1)$},
journal={Trans. Amer. Math. Soc.},
volume={361},
date={2009},
number={2},
pages={561--601},
issn={0002-9947},
review={\MR{2452816 (2009m:03081)}},
doi={10.1090/S0002-9947-08-04664-3}, }
\bib{MR1359965}{article}{
author={Mitchell, W. J.},
author={Schimmerling, E.},
title={Weak covering without countable closure},
journal={Math. Res. Lett.},
volume={2},
date={1995},
number={5},
pages={595--609},
issn={1073-2780},
review={\MR{1359965 (96k:03123)}}, }
\bib{MR2199228}{article}{
author={Moore, Justin Tatch},
title={A five element basis for the uncountable linear orders},
journal={Ann. of Math. (2)},
volume={163},
date={2006},
number={2},
pages={669--688},
issn={0003-486X},
review={\MR{2199228 (2007d:03085)}},
doi={10.4007/annals.2006.163.669}, }
\bib{MR2341316}{article}{
author={Murillo, Gabriel},
author={Nelson, Sam},
author={Thompson, Anthony},
title={Matrices and finite Alexander quandles},
journal={J. Knot Theory Ramifications},
volume={16},
date={2007},
number={6},
pages={769--778},
issn={0218-2165},
review={\MR{2341316 (2008d:57018)}},
doi={10.1142/S0218216507005488}, }
\bib{NeemanPFA}{article}{
author={Neeman, Itay},
title={Forcing with sequences of models of two types (preprint)}, }
\bib{MR670992}{article}{
author={Radin, Lon Berk},
title={Adding closed cofinal sequences to large cardinals},
journal={Ann. Math. Logic},
volume={22},
date={1982},
number={3},
pages={243--261},
issn={0003-4843},
review={\MR{670992 (83m:03062)}}, }
\bib{MR1638250}{article}{
author={Schimmerling, E.},
author={Steel, J. R.},
title={The maximality of the core model},
journal={Trans. Amer. Math. Soc.},
volume={351},
date={1999},
number={8},
pages={3119--3141},
issn={0002-9947},
review={\MR{1638250 (99m:03104)}},
doi={10.1090/S0002-9947-99-02411-3}, }
\bib{MR2081183}{article}{
author={Schimmerling, Ernest},
author={Zeman, Martin},
title={Characterization of $\square\sb \kappa$ in core models},
journal={J. Math. Log.},
volume={4},
date={2004},
number={1},
pages={1--72},
issn={0219-0613},
review={\MR{2081183 (2005f:03067)}}, }
\bib{MR1469095}{article}{
author={Schindler, Ralf-Dieter},
title={On a Chang conjecture},
journal={Israel J. Math.},
volume={99},
date={1997},
pages={221--230},
issn={0021-2172},
review={\MR{1469095 (99d:03033)}}, }
\bib{SchindlerWoodinAxiom}{article}{
author={Claverie, Benjamin},
author={Schindler, Ralf-Dieter},
journal={to appear in Journal of Symbolic Logic},
title={Woodin's axiom (*), bounded forcing axioms, and precipitous ideals on $\omega_1$}, }
\bib{SchindlerCC2}{article}{
author={Schindler, Ralf-Dieter},
title={On a Chang conjecture. II},
journal={Arch. Math. Logic},
volume={37},
date={1998},
number={4},
pages={215--220},
issn={0933-5846},
review={\MR{1635555 (2000a:03056)}}, }
\bib{MR1900905}{article}{
author={Schindler, Ralf-Dieter},
title={The core model for almost linear iterations},
journal={Ann. Pure Appl. Logic},
volume={116},
date={2002},
number={1-3},
pages={205--272},
issn={0168-0072},
review={\MR{1900905 (2003d:03086)}}, }
\bib{MR768264}{article}{
author={Shelah, Saharon},
title={Can you take Solovay's inaccessible away?},
journal={Israel J. Math.},
volume={48},
date={1984},
number={1},
pages={1--47},
issn={0021-2172},
review={\MR{768264 (86g:03082a)}},
doi={10.1007/BF02760522}, }
\bib{MR1623206}{book}{
author={Shelah, Saharon},
title={Proper and improper forcing},
series={Perspectives in Mathematical Logic},
edition={2},
publisher={Springer-Verlag},
place={Berlin},
date={1998},
pages={xlviii+1020},
isbn={3-540-51700-6},
review={\MR{1623206 (98m:03002)}}, }
\bib{MR890443}{article}{
author={Shelah, Saharon},
title={Semiproper forcing axiom implies Martin maximum but not ${\rm
PFA}^+$},
journal={J. Symbolic Logic},
volume={52},
date={1987},
number={2},
pages={360--367},
issn={0022-4812},
review={\MR{890443 (89g:03072)}},
doi={10.2307/2274385}, }
\bib{MR1738689}{article}{
author={Shioya, Masahiro},
title={Splitting $\scr P_\kappa\lambda$ into maximally many stationary
sets},
journal={Israel J. Math.},
volume={114},
date={1999},
pages={347--357},
issn={0021-2172},
review={\MR{1738689 (2001i:03100)}},
doi={10.1007/BF02785587}, }
\bib{MR1480175}{book}{
author={Steel, John R.},
title={The core model iterability problem},
series={Lecture Notes in Logic},
volume={8},
publisher={Springer-Verlag},
place={Berlin},
date={1996},
pages={iv+112},
isbn={3-540-61938-0},
review={\MR{1480175 (99k:03043)}}, }
\bib{MR2194247}{article}{
author={Steel, John R.},
title={PFA implies ${\rm AD}^{L(\Bbb R)}$},
journal={J. Symbolic Logic},
volume={70},
date={2005},
number={4},
pages={1255--1296},
issn={0022-4812},
review={\MR{2194247 (2008b:03069)}},
doi={10.2178/jsl/1129642125}, }
\bib{Tall_TopoRef}{article}{
author={Tall, Franklin D.},
title={Reflection of Topological properties to $\aleph_1$ (in Open problems in topology. II)},
date={2007}, }
\bib{MR980949}{book}{
author={Todor{\v{c}}evi{\'c}, Stevo},
title={Partition problems in topology},
series={Contemporary Mathematics},
volume={84},
publisher={American Mathematical Society},
place={Providence, RI},
date={1989},
pages={xii+116},
isbn={0-8218-5091-1},
review={\MR{980949 (90d:04001)}}, }
\bib{Todorcevic_StrongReflections}{article}{
author={Todor{\v{c}}evi{\'c}, Stevo},
title={Strong Reflections (unpublished notes)},
date={1987}, }
\bib{MR2610755}{article}{
author={Finkel, Olivier},
author={Todor{\v{c}}evi{\'c}, Stevo},
title={The isomorphism relation between tree-automatic structures},
journal={Cent. Eur. J. Math.},
volume={8},
date={2010},
number={2},
pages={299--313},
issn={1895-1074},
review={\MR{2610755}},
doi={10.2478/s11533-010-0014-7}, }
\bib{MR1174395}{article}{
author={Veli{\v{c}}kovi{\'c}, Boban},
title={Forcing axioms and stationary sets},
journal={Adv. Math.},
volume={94},
date={1992},
number={2},
pages={256--284},
issn={0001-8708},
review={\MR{1174395 (93k:03045)}},
doi={10.1016/0001-8708(92)90038-M}, }
\bib{VV_ProperForcingRemastered}{article}{
author={Velickovic, Boban},
author={Venturi, Giorgio},
title={Proper Forcing Remastered (preprint)} }
\bib{Viale_GuessingModel}{article}{
author={Viale, Matteo},
title={Guessing models and generalized {L}aver diamond}, JOURNAL = {Ann. Pure Appl. Logic},
FJOURNAL = {Annals of Pure and Applied Logic},
VOLUME = {163},
YEAR = {2012},
NUMBER = {11},
PAGES = {1660--1678},
ISSN = {0168-0072},
CODEN = {APALD7},
MRCLASS = {Preliminary Data},
MRNUMBER = {2959666},
DOI = {10.1016/j.apal.2011.12.015},
URL = {http://dx.doi.org/10.1016/j.apal.2011.12.015} }
\bib{VW_ISP}{article}{ author={Viale, Matteo},
author={Wei{\ss}, Christoph},
title={On the consistency strength of the proper forcing axiom},
journal={Adv. Math.},
volume={228},
date={2011},
number={5},
pages={2672--2687},
issn={0001-8708},
review={\MR{2838054 (2012m:03131)}},
doi={10.1016/j.aim.2011.07.016}, }
\bib{VickersThesis}{thesis}{
author={Vickers, J.},
type={Ph.D. dissertation},
address={University of Bristol},
date={1994} }
\bib{MR1773781}{article}{
author={Vickers, J.},
author={Welch, P. D.},
title={On successors of J\'onsson cardinals},
journal={Arch. Math. Logic},
volume={39},
date={2000},
number={6},
pages={465--473},
issn={0933-5846},
review={\MR{1773781 (2001h:03100)}},
doi={10.1007/s001530050159}, }
\bib{WeissThesis}{thesis}{
author={Wei\ss, Christoph},
type={Ph.D. dissertation},
address={Ludwig Maximilians Universit\"at M\"unchen},
date={2010} }
\bib{Weiss_CombEssence}{article}{
author={Wei{\ss}, Christoph},
title={The combinatorial essence of supercompactness},
journal={Ann. Pure Appl. Logic},
volume={163},
date={2012},
number={11},
pages={1710--1717},
issn={0168-0072},
review={\MR{2959668}},
doi={10.1016/j.apal.2011.12.017}, }
\bib{MR1713438}{book}{
author={Woodin, W. Hugh},
title={The axiom of determinacy, forcing axioms, and the nonstationary
ideal},
series={de Gruyter Series in Logic and its Applications},
volume={1},
publisher={Walter de Gruyter \& Co.},
place={Berlin},
date={1999},
pages={vi+934},
isbn={3-11-015708-X},
review={\MR{1713438 (2001e:03001)}}, }
\bib{ZemanBook}{book}{
author={Zeman, Martin},
title={Inner models and large cardinals},
series={de Gruyter Series in Logic and its Applications},
volume={5},
publisher={Walter de Gruyter \& Co.},
place={Berlin},
date={2002},
pages={xii+369},
isbn={3-11-016368-3},
review={\MR{1876087 (2003a:03004)}}, }
\bib{ZemanThesis}{thesis}{
author={Zeman, Martin},
type={Ph.D. dissertation},
address={HU Berlin},
date={1997} }
\bib{ZemanGlobalSquarePreprint}{article}{
author={Zeman, Martin},
status={Preprint},
title={Global Square} }
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title[Harnack inequality for subelliptic operators]
{The Strong Maximum Principle and \\ the Harnack inequality for a class of\\
hypoelliptic divergence-form operators}
\author{Erika Battaglia}
\address{Dipartimento di Matematica,
Universit\`{a} degli Studi di Bologna\\
Piazza di Porta San Donato, 5 - 40126 Bologna, Italy\\
Fax.: +39-51-2094490.}
\email{[email protected]}
\author{Stefano Biagi}
\address{Dipartimento di Matematica,
Universit\`{a} degli Studi di Bologna\\
Piazza di Porta San Donato, 5 - 40126 Bologna, Italy\\
Fax.: +39-51-2094490.}
\email{[email protected]}
\author{Andrea Bonfiglioli}
\address{Dipartimento di Matematica,
Universit\`{a} degli Studi di Bologna\\
Piazza di Porta San Donato, 5 - 40126 Bologna, Italy\\
Tel.: +39-51-2094498, Fax.: +39-51-2094490.}
\email{[email protected]}
\begin{abstract}
In this paper we consider a class
of hypoelliptic second-order partial differential operators $\mathcal{L}$
in divergence form on $\mathbb{R}^N$, arising from CR geometry and Lie group theory, and we prove
the Strong and Weak Maximum Principles and the Harnack Inequality for $\mathcal{L}$.
The involved operators are not assumed to belong to the H\"ormander hypoellipticity class,
nor to satisfy subelliptic estimates, nor Muckenhoupt-type estimates on the degeneracy of the
second order part; indeed our results
hold true in the infinitely-degenerate case and for operators which are not
necessarily sums of squares.
We use a Control Theory result on hypoellipticity in order to
recover meaningful geometric information on connectivity and maxima propagation, yet
in the absence of any H\"ormander condition.
For operators $\mathcal{L}$ with $C^\omega$ coefficients, this control-theoretic result
will also imply a Unique Continuation property for the $\mathcal{L}$-harmonic functions.
The (Strong) Harnack Inequality is obtained via the Weak Harnack Inequality
by means of a Potential Theory argument, and by a crucial use of the Strong Maximum Principle
and the solvability of the Dirichlet problem for $\mathcal{L}$ on a basis of the
Euclidean topology. \end{abstract}
\keywords{Degenerate-elliptic operators; Maximum principles; Harnack inequality; Unique continuation; Divergence form operators.}
\subjclass[2010]{Primary: 35B50, 35B45, 35H20; Secondary: 35J25, 35J70, 35R03}
\maketitle
\section{Introduction and main results}\label{sec:introductionAB}
Throughout the paper, we shall be concerned with linear second order
partial differential operators (PDOs, in the sequel), possibly degenerate-elliptic, of the form \begin{equation}\label{mainLL}
\mathcal{L}:=\frac{1}{V(x)}\sum_{i,j=1}^N\frac{\partial}{\partial {x_i}}\Big(V(x)\,a_{i,j}(x)\,\frac{\partial}{\partial {x_j}}\Big),\qquad
x\in\mathbb{R}^N, \end{equation}
where $V$ is a $C^\infty$
positive function on $\mathbb{R}^N$, the matrix $A(x):=(a_{i,j}(x))_{i,j}$
is symmetric and \emph{positive semi-definite} at every point
$x\in\mathbb{R}^N$, and it has real-valued $C^\infty$ entries. In particular, $\mathcal{L}$ is formally self-adjoint
on $L^2(\mathbb{R}^N,\mathrm{d}\nu)$ with respect to the measure $\mathrm{d}\nu(x)=V(x)\,\mathrm{d} x$, which clarifies the r\^ole of $V$.
We tacitly understand these structural assumptions on $\mathcal{L}$ throughout.
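By expanding the divergence in \eqref{mainLL}, one sees that $\mathcal{L}$ can also be written in the non-divergence form
$$\mathcal{L} u=\sum_{i,j=1}^N a_{i,j}(x)\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}
+\sum_{j=1}^N\bigg(\sum_{i=1}^N\Big(\frac{\partial a_{i,j}}{\partial x_i}(x)+a_{i,j}(x)\,\frac{\partial \log V}{\partial x_i}(x)\Big)\bigg)\frac{\partial u}{\partial x_j},$$
so that the second order part of $\mathcal{L}$ only depends on $A(x)$, while the density $V$ enters the first order terms only.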
The literature on divergence-form operators like \eqref{mainLL} in
the strictly-elliptic case is so vast that we make no attempt to collect the related references.
Instead, we mention some papers (relevant for the topics of the present paper) in the degenerate case.
Degenerate-elliptic operators of the form \eqref{mainLL} were
extensively studied by Jerison and S\'anchez-Calle in the paper \cite{JerisonSanchez-Calle}
(under a suitable subelliptic assumption), where it is also described how these PDOs naturally intervene in the study of function theory
of several complex variables and CR Geometry (see also \cite{FollandStein, Kohn65, RothschildStein}).
Prototypes for the PDOs
\eqref{mainLL} also arise in the theory of sub-Laplace operators on real Lie groups
(e.g., for Carnot groups, \cite{BLUlibro}), as well as in Riemannian Geometry (e.g., the
Laplace-Beltrami operator has the form ${\sqrt {|g|^{-1}}}\sum
\partial_i (\sqrt{|g|} g^{ij} \partial_j )$).
Re\-gu\-la\-ri\-ty issues for degenerate-elliptic divergence-form operators
comprising the Harnack Ine\-qua\-li\-ty and the Maximum Principles (to which this paper is devoted)
trace back to the 80's, with the deep investigations by:
Fabes, Kenig, Serapioni \cite{FabesKenigSerapioni};
Fabes, Jerison, Kenig \cite{FabesJerisonKenig, FabesKenigJerison};
Gutiérrez \cite{Gutierrez}.
In these papers, operators as in \eqref{mainLL} are considered (with $V\equiv 1$) with low regularity assumptions on the coefficients,
under the hypothesis that the degeneracy of $A(x)$ be controlled on both sides by some Muckenhoupt weight.
Recent investigations on the Harnack inequality for variational operators, comprising \eqref{mainLL} as a special case,
also assume Muckenhoupt weights on the degeneracy; see \cite{DeCiccoVivaldi, Zamboni}.
Very recently, a systematic study of the Potential Theory for the harmonic/subharmonic functions related to operators $\mathcal{L}$
as in \eqref{mainLL}
has been carried out in the series of papers \cite{AbbondanzaBonfiglioli, BattagliaBonfiglioli, BonfLancJEMS, BonfLancTommas},
under the assumption that $\mathcal{L}$ possesses a (smooth) global positive fundamental solution.
We remark that in the present paper we do not require $\mathcal{L}$ to be a H\"ormander operator,
our results holding true in the infinitely-degenerate case as well, nor do we make
any assumption of subellipticity or Muckenhoupt-weighted degeneracy
(see Example \ref{exa.Lie2}); furthermore, we do not assume the existence of a global
fundamental solution for $\mathcal{L}$. Hence our results are not contained in any of the
aforementioned papers.
We now describe the main results of this paper concerning $\mathcal{L}$, namely the \emph{Strong Maximum Principle}
and the \emph{Harnack Inequality} for $\mathcal{L}$; gradually as we need to specify them,
we introduce the three assumptions under which our theorems are proven.
As we shall see in a moment, the main hypothesis is a hypoellipticity assumption.
In obtaining our main results we are much indebted to the ideas in the pioneering paper
by Bony, \cite{Bony}, where H\"ormander operators are considered. The main novelty of our
framework is that we must do without the geometric information encoded in H\"ormander's Rank Condition:
the latter
implies a connectivity/propagation property (leading to the Strong Maximum Principle),
as well as hypoellipticity, due to H\"ormander's well-known theorem \cite{Hormander}.
In our setting, the approach is somewhat reversed: hypoellipticity is the main assumption,
and we need to derive from it some appropriate connectivity and propagation features, even in the absence
of a maximal rank condition. This will be made possible by exploiting a Control Theory result by Amano \cite{Amano} on hypoelliptic PDOs, as we shall
describe in detail. Once the Strong Maximum Principle is established, the path to the (Strong) Harnack Inequality
is traced in \cite{Bony}: we pass through the solvability of the Dirichlet problem, the relevant Green kernel and a
Weak Harnack Inequality. Finally, the gap between the Weak and Strong Harnack Inequalities is filled by
an abstract Potential Theory result, due to Mokobodzki and Brelot, \cite{Brelot}.
In order to describe our results more closely, we first fix some notation and definition: we say that a linear second order PDO on $\mathbb{R}^N$ \begin{equation}\label{Lgenerale}
L:=\sum_{i,j=1}^N\alpha_{i,j}(x)\frac{\partial^2}{\partial x_i\partial x_j}+
\sum_{i=1}^N \beta_i(x)\frac{\partial}{\partial x_i}+\gamma(x) \end{equation}
is \emph{non-totally degenerate} at a point $x\in\mathbb{R}^N$ if the matrix $(\alpha_{i,j}(x))_{i,j}$ (which will be referred to
as the principal matrix of $L$)
is non-vanishing. We observe that the principal matrix of an operator
$\mathcal{L}$ of the form \eqref{mainLL} is precisely $A(x)=(a_{i,j}(x))_{i,j}$.
We also recall that $L$ is said to be
($C^\infty$-)hypoelliptic in an open set $\Omega\subseteq\mathbb{R}^N$ if, for every $u\in \mathcal{D}'(\Omega)$,
every open set $U\subseteq \Omega$ and every $f\in C^\infty(U,\mathbb{R})$, the equation $L u =f$ in $U$
implies that $u$ is (a function-type distribution associated with) a
$C^\infty$ function on $U$.
In the sequel, if $\Omega\subseteq\mathbb{R}^N$ is open, we say that $u$
is \emph{$L$-harmonic} (resp., \emph{$L$-subharmonic})
in $\Omega$ if $u\in C^2(\Omega,\mathbb{R})$ and $L u=0$ (resp., $L u\geq 0$) in $\Omega$.
The set of the $L$-harmonic functions in $\Omega$ will be denoted by $\mathcal{H}_{L}(\Omega)$.
We observe that, if $L$ is hypoelliptic on every open subset of $\mathbb{R}^N$, then
$\mathcal{H}_{L}(\Omega)\subset C^\infty(\Omega,\mathbb{R})$; under this hypoellipticity assumption,
$\mathcal{H}_{L}(\Omega)$ has important topological properties, which will be crucially used in the sequel (Remark \ref{rem.topolocinfyt}).
In order to introduce our first main result we assume the following hypotheses on $\mathcal{L}$:
\begin{description}
\item[(NTD)]
$\mathcal{L}$ is \emph{non-totally degenerate at every point of $\mathbb{R}^N$}, or equivalently (recalling that $A(x)$ is symmetric
and positive semi-definite), \begin{equation}\label{NTD}
\textrm{trace}(A(x))> 0,\quad \text{for every $x\in \mathbb{R}^N$.} \end{equation}
\item[(HY)]
$\mathcal{L}$ is $C^\infty$-\emph{hypoelliptic in every open subset of $\mathbb{R}^N$}.
\end{description}
Under these two assumptions we shall prove the \emph{Strong Maximum Principle for $\mathcal{L}$}.
Condition (NTD), if compared with the above mentioned Muckenhoupt-type weights on the degeneracies of $A(x)$,
does not allow a \emph{simultaneous} vanishing of the eigenvalues of $A(x)$, but it has the advantage
of permitting a very fast vanishing of the smallest eigenvalue (see Example
\ref{exa.Lie2}) together with a very fast growth of the largest one (see Example
\ref{exa.Lie}); both phenomena can happen at an exponential rate (e.g.,
like $e^{-1/x^2}$ as $x\to 0$ in the first case, and like
$e^{x}$ as $x\to \infty$ in the second case), which is not allowed when Muckenhoupt weights are involved.
Meaningful examples of operators satisfying hypotheses (NTD) and (HY), providing prototype PDOs to which our theory applies and a motivation for our
investigation, are now described
in the following two examples. \begin{example}\label{exa.Lie}
The following PDOs satisfy the assumptions (NTD) and (HY).
(a.)\,\, If $\mathbb{R}^N$ is equipped with a Lie group structure $\mathbb{G}=(\mathbb{R}^N,*)$,
and if we fix a set $X:=\{X_1,\ldots,X_m\}$ of Lie-generators for the Lie algebra $\mathfrak{g}$ of $\mathbb{G}$
(this means that the smallest Lie algebra containing $X$ is equal to $\mathfrak{g}$), then a direct computation shows that
\begin{equation}\label{subla}
\mathcal{L}_X:=-\sum_{j=1}^m X_j^*\, X_j
\end{equation}
is of the form \eqref{mainLL}, where $V(x)$ is the density of the Haar measure $\nu$ on $\mathbb{G}$,
and $(a_{i,j})_{i,j}$ is equal to $S\,S^T$, where $S$ is the $N\times m$ matrix whose columns
are given by the coefficients of the vector fields $X_1,\ldots,X_m$;
here $X_j^*$ denotes the (formal) adjoint of $X_j$ in the Hilbert space $L^2(\mathbb{R}^N,\mathrm{d}\nu)$.
Most importantly, $\mathcal{L}_X$ in \eqref{subla} satisfies the assumptions (NTD) and (HY) above. Indeed:
\begin{itemize}
\item The non-total-degeneracy is a consequence of $X$ being a set of Lie-generators of $\mathfrak{g}$.
\item
$\mathcal{L}_X$ is a H\"ormander operator, of the form
$\sum_{j=1}^m X_j^2+X_0$,
where $X_0$ is a linear combination (with smooth coefficients) of $X_1,\ldots,X_m$.
Therefore $\mathcal{L}_X$ is hypoelliptic due to H\"ormander's Hypoellipticity Theorem, \cite{Hormander},
jointly with the cited fact that $X$ is a set of Lie-generators of $\mathfrak{g}$.
\end{itemize}
The density $V$ need not be identically $1$, as shown for example by the Lie group $(\mathbb{R}^2,*)$, where
$$(x_1,x_2)*(y_1,y_2)=(x_1+y_1e^{x_2},x_2+y_2), $$
since in this case $V(x)=e^{-x_2}$. The left-invariant PDO associated with the set of generators
$X=\{e^{x_2}\frac{\partial}{\partial x_1},\frac{\partial}{\partial x_2}\}$ has fast-growing coefficients:
$$\mathcal{L}_X=e^{2x_2}\frac{\partial^2}{\partial x_1^2}+\frac{\partial^2}{\partial x_2^2}-\frac{\partial}{\partial x_2}. $$
Note that the eigenvalues of the principal matrix of $\mathcal{L}_X$ are $e^{2x_2}$ and $1$, so that the largest eigenvalue
cannot be controlled (for $x_2>0$) by any integrable weight.
(b.)\,\, More generally (arguing as above), if $X=\{X_1,\ldots,X_m\}$ is a family of smooth vector fields in $\mathbb{R}^N$
satisfying H\"ormander's Rank Condition, if $\mathrm{d}\nu(x)=V(x)\,\mathrm{d} x$ is the Radon measure associated
with any positive smooth density $V$ on $\mathbb{R}^N$, then the operator
$-\sum_{j=1}^m X_j^*\, X_j$
is of the form \eqref{mainLL} and it satisfies (NTD) and (HY).
Here $X_j^*$ denotes the formal adjoint of $X_j$ in $L^2(\mathbb{R}^N,\mathrm{d}\nu)$.
As already observed, PDOs of this form naturally arise in CR Geometry and in the function theory of
several complex variables (see \cite{JerisonSanchez-Calle}). \end{example}
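Referring to part (a.) above, a direct computation confirms the divergence-form structure of $\mathcal{L}_X$ in the two-dimensional example: with $V(x)=e^{-x_2}$ and $A(x)=S\,S^T=\mathrm{diag}\,(e^{2x_2},1)$, one has
$$\frac{1}{e^{-x_2}}\bigg(\frac{\partial}{\partial x_1}\Big(e^{-x_2}\,e^{2x_2}\,\frac{\partial u}{\partial x_1}\Big)
+\frac{\partial}{\partial x_2}\Big(e^{-x_2}\,\frac{\partial u}{\partial x_2}\Big)\bigg)
=e^{2x_2}\,\frac{\partial^2 u}{\partial x_1^2}+\frac{\partial^2 u}{\partial x_2^2}-\frac{\partial u}{\partial x_2}
=\mathcal{L}_X u.$$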
The above examples show that geometrically meaningful
PDOs belonging to the class of our concern actually fall in the hypoellipticity class
of the H\"ormander operators. Nonetheless, hypotheses (NTD) and (HY) are
general enough to comprise \emph{non-H\"ormander} and \emph{non-subelliptic} PDOs, as is shown in the next example.
Applications to this kind of \emph{infinitely-degenerate} PDOs
also furnish one of the main motivations for our study. \begin{example}\label{exa.Lie2}
Let us consider the class of operators in $\mathbb{R}^2$ defined by \begin{subequations} \begin{equation}\label{fediiOP}
\mathcal{L}_a=\frac{\partial^2}{\partial x_1^2}+\Big(a(x_1)\,\frac{\partial}{\partial x_2}\Big)^2, \end{equation}
with $a\in C^\infty(\mathbb{R},\mathbb{R})$, $a$ even, nonnegative, nondecreasing on $[0,\infty)$ and vanishing only at $0$. Then
$\mathcal{L}_a$ satisfies (NTD) (obviously) and (HY), thanks to a result
by Fedi$\breve{\textrm{\i}}$, \cite{Fedii}. Note that $\mathcal{L}_a$ does not satisfy H\"ormander's Rank Condition
at $x_1=0$ if all the derivatives of $a$ vanish at $0$, as for
$a(x_1)=\exp(-1/x_1^2)$. Other examples of operators satisfying our assumptions
(NTD) and (HY) but failing to be H\"ormander operators
can be found, e.g., in the following papers: Bell and Mohammed \cite{BellMohammed}; Christ \cite[Section 1]{Christ};
Kohn \cite{Kohn}; Kusuoka and Stroock \cite[Theorem 8.41]{KusuokaStroock};
Morimoto \cite{Morimoto}. Explicit examples are, for instance,
\begin{align}
&\frac{\partial^2}{\partial x_1^2}+\Big(\exp(-1/|x_1|)\,\frac{\partial}{\partial x_2}\Big)^2+ \Big(\exp(-1/|x_1|)\,\frac{\partial}{\partial x_3}\Big)^2&&\quad \text{in $\mathbb{R}^3$,}\label{christOP}\\
&\frac{\partial^2}{\partial x_1^2}+\Big(\exp(-1/\sqrt{|x_1|})\,\frac{\partial}{\partial x_2}\Big)^2+ \frac{\partial^2}{\partial x_3^2}&&\quad \text{in $\mathbb{R}^3$,}\label{kusuokastroockOP}\\
&\frac{\partial^2}{\partial x_2^2}+\Big(x_2\,\frac{\partial}{\partial x_1}\Big)^2+
\frac{\partial^2}{\partial x_4^2}+\Big(\exp(-1/\sqrt[3]{|x_1|})\,\frac{\partial}{\partial x_3}\Big)^2&&\quad \text{in $\mathbb{R}^4$}\label{morimotoOP}.
\end{align} \end{subequations}
For the hypoellipticity of \eqref{christOP} see \cite{Christ}; for \eqref{kusuokastroockOP} see \cite{KusuokaStroock}; for \eqref{morimotoOP} see \cite{Morimoto}.
Later on, in proving the Harnack Inequality, we shall add another hypothesis to (NTD) and (HY) and, as we shall show,
the operators from \eqref{fediiOP} to \eqref{morimotoOP} (and those in Example \ref{exa.Lie}) will fulfil this assumption as well. Hence the main results of this paper
(except for the Unique Continuation result in Section \ref{sec:Harnack_analytic}, proved for operators with $C^\omega$ coefficients) fully apply to
these PDOs.
Moreover, since the PDOs \eqref{fediiOP}-to-\eqref{morimotoOP} \emph{are not subelliptic} (see Remark \ref{rem.equivHYeps}),
they do not fall in the class considered by Jerison and S\'anchez-Calle in \cite{JerisonSanchez-Calle}.
Finally, note that the smallest eigenvalue in all the above examples vanishes
very quickly (like $\exp(-1/|x|^\alpha)$ for $x\to 0$, with positive $\alpha$) and it cannot be bounded
from below by any weight $w(x)$ with locally integrable reciprocal function.
\end{example}
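Concerning \eqref{fediiOP}, note for instance that, since $a$ does not depend on $x_2$, one can rewrite
$$\mathcal{L}_a=\frac{\partial^2}{\partial x_1^2}+a(x_1)^2\,\frac{\partial^2}{\partial x_2^2},$$
so that $\mathcal{L}_a$ is of the form \eqref{mainLL} with $V\equiv 1$ and $A(x)=\mathrm{diag}\,(1,a(x_1)^2)$; in particular
$\mathrm{trace}(A(x))=1+a(x_1)^2\geq 1$, whence (NTD) in \eqref{NTD} is indeed fulfilled.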
Our first main result under conditions (NTD) and (HY) is the following one. \begin{theorem}[\textbf{Strong Maximum Principle for $\mathcal{L}$}]\label{th:SMP}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})_{i,j}\geq 0$,
and that it satisfies \emph{(NTD)} and \emph{(HY)}. Let
$\Omega\subseteq\mathbb{R}^N$ be a connected open set. Then,
the following facts hold. \begin{enumerate}
\item[\emph{(1)}] Any function
$u\in C^2(\Omega,\mathbb{R})$ satisfying $\mathcal{L} u\geq 0$ on $\Omega$
and attaining a maximum in $\Omega$ is constant
throughout $\Omega$.
\item[\emph{(2)}]
If $c\in C^\infty(\mathbb{R}^N,\mathbb{R})$ is nonnegative on $\mathbb{R}^N$, and if we set
\begin{equation}\label{LC}
\mathcal{L}_c:=\mathcal{L}-c,
\end{equation}
then any function
$u\in C^2(\Omega,\mathbb{R})$ satisfying $\mathcal{L}_c u\geq 0$ on $\Omega$ and
attaining a nonnegative maximum in $\Omega$ is constant
throughout $\Omega$. \end{enumerate} \end{theorem}
\noindent The r\^ole of the nonnegativity of the zero-order term $c$ in the above statement (2)
in obtaining Strong Maximum Principles is well-known (see e.g., Pucci and Serrin \cite{PucciSerrin}). \begin{remark}\label{rem.WMPvale}
(a.)\,\, Obviously, the Strong Maximum Principle (SMP, for short) in Theorem \ref{th:SMP} will immediately provide
the \emph{Weak} Maximum Principle (WMP, for short) for operators $\mathcal{L}$ and $\mathcal{L}-c$, for any nonnegative zero-order term $c$ (and any bounded
open set $\Omega$), see
Corollary \ref{th.WMPPP} for the precise statement.
(b.)\,\, We will show that, in order to obtain the SMP and WMP for $\mathcal{L}-c$,
it is also sufficient to replace the hypothesis on the hypoellipticity of $\mathcal{L}$ with the (more natural hypothesis of the) hypoellipticity of $\mathcal{L}-c$,
still under assumption (NTD) and the divergence-form structure of $\mathcal{L}$; see Remark \ref{PMFancheconc2} for the precise result. \end{remark}
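As for the requirement, in statement (2) of Theorem \ref{th:SMP}, that the maximum be \emph{nonnegative}, a simple one-dimensional example shows that it cannot be dropped: if $N=1$, $\mathcal{L}=\mathrm{d}^2/\mathrm{d} x^2$ (that is, $V\equiv 1$ and $A\equiv 1$) and $c\equiv 1$, then the non-constant function
$$u(x)=-\cosh x\quad\text{satisfies}\quad \mathcal{L}_c\,u=u''-u=0\ \text{ on $\mathbb{R}$,}$$
and $u$ attains its (negative) maximum $-1$ at $x=0$.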
Our proof of the SMP in Theorem \ref{th:SMP} follows a rather classical scheme, in that it rests
on a Hopf Lemma for $\mathcal{L}$ (see Lemma \ref{lem_Hopf}). However, the passage
from the Hopf Lemma to the SMP is, in general, non-trivial and the same is true in our framework.
For example, in the paper \cite{Bony} by Bony, where
H\"ormander operators are considered, this passage is accomplished by means of a maximum propagation
principle, crucially based on H\"ormander's Rank Condition, the latter ensuring a connectivity property
(the so-called \emph{Chow's Connectivity Theorem} for H\"ormander vector fields).
The novelty in our setting is that, since hypotheses (NTD) and (HY)
do \emph{not} necessarily imply that $\mathcal{L}$ is a H\"ormander operator (see for instance Example
\ref{exa.Lie2}), we have to make up for a lack of geometric information.
Due to this main novelty, we describe more closely our argument in deriving the SMP.
As anticipated, we are able to make up for the lack of H\"ormander's Rank Condition by using
a notable control-theoretic property (seemingly long-forgotten in the PDE literature), encoded in the hypoellipticity assumption (HY), proved by Amano in
\cite{Amano}: indeed,
thanks to the hypothesis (NTD), we are entitled to use
\cite[Theorem 2]{Amano} which states that
(HY) ensures the \emph{controllability} of the ODE system \begin{equation*}
\dot \gamma= \xi_0 X_0(\gamma)+\sum_{i=1}^N \xi_i X_i(\gamma),\qquad (\xi_0,\xi_1,\ldots,\xi_N)\in \mathbb{R}^{1+N}, \end{equation*}
on every open and connected subset of $\mathbb{R}^N$. Here $X_1,\ldots,X_N$
denote the vector fields associated with the rows of the principal matrix of $\mathcal{L}$, whereas $X_0$
is the drift vector field obtained by writing $\mathcal{L}$ (this being always possible) in the form
$$\mathcal{L} u= \sum_{i=1}^N \frac{\partial}{\partial x_i}( X_iu)+ X_0u.$$
By definition of a controllable system, Amano's controllability result provides another
geometric \emph{con\-nec\-ti\-vi\-ty property} (a substitute for Chow's Theorem): any pair of points
can be joined by a continuous path which is piece-wise an
integral curve of some vector field $Y$ be\-long\-ing to $\textrm{span}_\mathbb{R}\{X_0,X_1,\ldots,X_N\}$.
The SMP will then follow if we show that there is a pro\-pa\-ga\-tion
of the maximum of any $\mathcal{L}$-subharmonic function $u$ along all integral curves $\gamma_Y$ of
every $Y\in \textrm{span}_\mathbb{R}\{X_0,X_1,\ldots,X_N\}$.
In other words, we need to show that if the set $F(u)$
of the maximum points of $u$ intersects any such $\gamma_Y$, then $\gamma_Y$
is wholly contained in $F(u)$: briefly, if this happens we say that $F(u)$
is $Y$-invariant. In its turn, this $Y$-invariance property
can be characterized (see Bony, \cite[\S 2]{Bony})
in terms of a tangentiality property of $Y$ with respect to $F(u)$
(the reader is referred to Section \ref{sec:SMP} below for this notion of
tangentiality).
Now, the self-adjoint structure of our PDO $\mathcal{L}$ in \eqref{mainLL}
ensures that $X_0$ is a linear combination with smooth coefficients of
$X_1,\ldots,X_N$. Hence, by the very definition of tangentiality
(see e.g., \eqref{daprovSMP1primaa}),
the tangentiality of $X_0$ w.r.t.\,$F(u)$
will be inherited from the tangentiality of $X_1,\ldots,X_N$ w.r.t.\,$F(u)$.
By means of the above argument of controllability/propagation, this allows us
to reduce the proof of the SMP to showing that
any of the vector fields $X_1,\ldots,X_N$ is tangent to $F(u)$.
Luckily, this tangentiality is a consequence of the choice of $X_1,\ldots,X_N$ as deriving from the rows
of the principal matrix of $\mathcal{L}$, together with the Hopf-type Lemma \ref{lem_Hopf}
for $\mathcal{L}$. This argument is provided, in all detail, in Section \ref{sec:SMP}.
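For the reader's convenience, we write down explicitly the vector fields involved in the above argument: from \eqref{mainLL} one directly computes
$$X_i=\sum_{j=1}^N a_{i,j}(x)\,\frac{\partial}{\partial x_j}\quad (i=1,\ldots,N),
\qquad
X_0=\sum_{i=1}^N \frac{\partial \log V}{\partial x_i}(x)\,X_i,$$
which also makes apparent that $X_0$ is a linear combination, with $C^\infty$ coefficients, of $X_1,\ldots,X_N$.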
The use of the above ideas, plus the classical Holmgren Theorem, will allow us to prove that,
when $\mathcal{L}$ has real-analytic coefficients, a \emph{Unique Continuation} result holds true for $\mathcal{L}$:
any $\mathcal{L}$-harmonic function defined on a connected open set $U$ which
vanishes on some non-void open subset is necessarily null on the whole of $U$ (see Theorem
\ref{th:Uniquecont}). We observe that the $C^\omega$ assumption is satisfied, for example,
if $\mathcal{L}$ is a left invariant operator on a Lie group (e.g., a sub-Laplacian on a Carnot group, as in
\cite{BLUlibro}), since, as is well known, any Lie group can be endowed with a compatible $C^\omega$ structure.
\begin{remark}
We explicitly remark that, as it is proved by Amano in \cite[Theorem 1]{Amano}, the above
controllability property ensures the validity of the H\"ormander Rank Condition only
on an open \emph{dense} subset of $\mathbb{R}^N$
which may fail to coincide with the whole of $\mathbb{R}^N$. This possible failure
of the H\"ormander Rank Condition is clearly exhibited in
Example \ref{exa.Lie2} (of non-H\"ormander operators which nonetheless satisfy our assumptions
(NTD) and (HY), and hence the SMP).
To the best of our knowledge, Amano's controllability result for hypoelliptic
non-totally-degenerate operators has been long forgotten in the literature; only recently,
it has been used by the third-named author and B.\,Abbondanza \cite{AbbondanzaBonfiglioli}
in studying the Dirichlet problem for $\mathcal{L}$,
and in obtaining Potential Theoretic
results for the harmonic sheaf related to $\mathcal{L}$. \end{remark}
In order to give the second main result of the paper (namely, the \emph{Harnack
Inequality} for $\mathcal{L}$), we shall need a further assumption, very similar to (HY)
(and, indeed, equivalent to it in many important cases), together with some technical results
on the solvability of the Dirichlet problem related to $\mathcal{L}$. Our next assumption is the following one:
\begin{description}
\item[$\textrm{(HY)}_\varepsilon $]
\emph{There exists $\varepsilon >0$ such that
$\mathcal{L}-\varepsilon $ is $C^\infty$-hypoelliptic in every open subset of $\mathbb{R}^N$}.
\end{description}
For operators $\mathcal{L}$ satisfying hypotheses (NTD), (HY) and (HY)$_\varepsilon $ we are able to prove the
Harnack Inequality (see Theorem \ref{lem.crustimabassoTHEOFORTE}).
We postpone the description of the relationship between assumptions (HY) and $\textrm{(HY)}_\varepsilon $
(and their actual equivalence for large classes of operators: for subelliptic PDOs, for instance)
to Remark \ref{rem.equivHYeps} below.
Instead, we anticipate the r\^ole of the perturbation $\mathcal{L}-\varepsilon $
of the operator $\mathcal{L}$: this is motivated by a crucial comparison argument
(which we generalize to our setting), due to Bony \cite[Proposition 7.1, p.298]{Bony},
giving the lower bound \begin{equation}\label{lowboundkeps}
u(x_0)\geq \varepsilon \int_{\Omega} u(y)\,k_\varepsilon (x_0,y)\,V(y)\,\mathrm{d} y\qquad \forall\,x_0\in\Omega, \end{equation}
for every nonnegative $\mathcal{L}$-harmonic function $u$ on the open set $\Omega$ which possesses
a Green kernel $k_{\varepsilon }(x,y)$ relative to the perturbed operator $\mathcal{L}-\varepsilon $
(see Theorem \ref{th.greeniani} for the notion of a Green kernel, and see
Lemma \ref{lem.crustimabasso} for the proof of \eqref{lowboundkeps}).
This lower bound, plus some topological facts on hypoellipticity, is the key ingredient for a \emph{Weak} Harnack Inequality
related to $\mathcal{L}$, as we shall explain shortly.
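Roughly speaking, when $\Omega$ is as in Lemma \ref{th.localDiri} below and $u\in C(\overline{\Omega},\mathbb{R})$, the bound \eqref{lowboundkeps} can be obtained by a comparison argument: the function
$$w(x):=\varepsilon \int_{\Omega} u(y)\,k_\varepsilon (x,y)\,V(y)\,\mathrm{d} y$$
solves $(\mathcal{L}-\varepsilon )w=-\varepsilon \,u$ in $\Omega$ and vanishes on $\partial\Omega$ (see Definition \ref{defi.Green} and Theorem \ref{th.greeniani} below), whereas $u$ itself solves $(\mathcal{L}-\varepsilon )u=-\varepsilon \,u$ in $\Omega$ and is nonnegative on $\partial\Omega$; the Weak Maximum Principle for $\mathcal{L}-\varepsilon $ (see Remark \ref{rem.WMPvale}) applied to $w-u$ then gives $w\leq u$ on $\Omega$, which is \eqref{lowboundkeps}.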
Some remarks on assumption (HY)$_\varepsilon $ are now in order. \begin{remark}\label{rem.equivHYeps}
Hypothesis (HY)$_\varepsilon $ is implicit in hypothesis (HY) for notable classes of operators, whence
our assumptions for the validity of the Harnack Inequality for $\mathcal{L}$ reduce to (NTD) and (HY) solely: namely,
\emph{(HY) implies (HY)$_\varepsilon $ in the following cases:}
\begin{itemize}
\item for H\"ormander operators, and, more generally, for second order \emph{subelliptic} operators
(in the usual sense of fulfilling a subelliptic estimate,
see e.g., \cite{JerisonSanchez-Calle, Kohn});
indeed, any operator $L$ in these
classes of PDOs is hypoelliptic (see H\"ormander \cite{Hormander}, Kohn and Nirenberg \cite{KohnNirenberg}),
and $L$ still belongs to these classes after the addition of a smooth zero-order term;
\item for operators with \emph{real-analytic coefficients.}
Indeed, in the $C^\omega$ case, one can apply known results by Ole\u{\i}nik and Radkevi\v{c} ensuring that,
for a general $C^\omega$ operator $L$ as in \eqref{Lgenerale}, hypoellipticity is equivalent
to the verification of H\"ormander's Rank Condition for the vector fields $X_0,X_1,\ldots,X_N$ obtained by
rewriting $L$ as $\sum_{i=1}^N \partial_i(X_i)+X_0+\gamma$; this
condition is clearly invariant under any change of the zero-order term $\gamma$ of $L$ so that
(HY) and (HY)$_\varepsilon $ are indeed equivalent.
\end{itemize}
The problem of establishing, in general, whether (HY) implies (HY)$_\varepsilon $ seems non-trivial
and it is postponed to future investigations.\footnote{It appears that having some quantitative information on the loss of derivatives
may help in facing this question (personal communication by A. Parmeggiani).} In this regard we recall that, for example,
in the complex coefficient case the presence of a zero-order term (even a small $\varepsilon$)
may drastically alter hypoellipticity (see for instance the example given by Stein in \cite{Stein}).
We explicitly remark that the operators
\eqref{fediiOP}-to-\eqref{morimotoOP} are \emph{not} subelliptic (nor $C^\omega$), yet they satisfy
hypotheses (NTD), (HY) and (HY)$_\varepsilon $. The lack of subellipticity is a consequence of the
characterization of the subelliptic PDOs due to Fefferman and Phong \cite{FeffermanPhong2, FeffermanPhong}
(see also \cite[Prop.1.3]{Kohn} or \cite[Th.2.1 and Prop.2.1]{JerisonSanchez-Calle},
jointly with the presence of a coefficient with a zero of infinite order
in \eqref{fediiOP}-to-\eqref{morimotoOP}). The second assertion concerning the verification
of (HY)$_\varepsilon $ (the other hypotheses being already discussed) derives from the following result by Kohn, \cite{Kohn}:
any operator of the form
$$L_1+\lambda(x)\,L_2\quad \text{in $\mathbb{R}^n_x\times \mathbb{R}^m_y$}$$
is hypoelliptic, where $\lambda\in C^\infty(\mathbb{R}_x)$, $\lambda\geq 0$ has
a zero of infinite order at $0$
(and no other zeroes of infinite order), and $L_1$ (operating in $x\in\mathbb{R}^n$)
and $L_2$ (operating in $y\in \mathbb{R}^m$) are general second order PDOs
(as in \eqref{Lgenerale}) with smooth coefficients and they are assumed to be subelliptic.
It is straightforward to recognize that by subtracting $\varepsilon $ from any PDO in
\eqref{fediiOP}-to-\eqref{morimotoOP} we get an operator of the form
$(L_1-\varepsilon )+\lambda(x)\,L_2$, where $\lambda$ has the required features, $L_2$ is uniformly elliptic (indeed, a classical Laplacian in all the
examples), and $L_1-\varepsilon $ is
a uniformly elliptic operator (cases \eqref{fediiOP}-to-\eqref{kusuokastroockOP})
or it is a H\"ormander operator (case \eqref{morimotoOP}). \end{remark}
Before describing the approach to the Harnack Inequality, inspired by the ideas in \cite{Bony}, we
state the main technical tools needed concerning the solvability of the Dirichlet problem
for $\mathcal{L}$ and for the perturbed operator $\mathcal{L}-\varepsilon$. \begin{lemma}\label{th.localDiri}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$,
and that $\mathcal{L}$ satisfies \emph{(NTD)}.
Let $\varepsilon \geq 0$ be fixed (the case $\varepsilon =0$ being admissible).
We set $\mathcal{L}_\varepsilon :=\mathcal{L}-\varepsilon $ and we assume that $\mathcal{L}_\varepsilon $ is hypoelliptic on every open
subset of $\mathbb{R}^N$.
Then, there exists a basis for the Euclidean topology of $\mathbb{R}^N$, independent of $\varepsilon $,
made of open and connected sets
$\Omega$ (with Lipschitz boundary) with the following properties:
for every continuous function $f$ on $\overline{\Omega}$ and for every continuous
function $\varphi$ on $\partial\Omega$, there exists one and only one solution
$u\in C(\overline{\Omega},\mathbb{R})$ of the Dirichlet problem
\begin{gather}\label{DIRIEQ}
\left\{
\begin{array}{ll}
\mathcal{L}_\varepsilon u=-f & \hbox{on $\Omega$\quad (in the weak sense of distributions),} \\
u=\varphi & \hbox{on $\partial\Omega$\quad (point-wise).}
\end{array}
\right.
\end{gather}
Furthermore, if $f,\varphi\geq 0$ then $u\geq 0$ as well. Finally, if $f$ belongs to $C^\infty(\Omega,\mathbb{R})
\cap C(\overline{\Omega},\mathbb{R})$, then the same is true of $u$, and $u$ is a classical solution of \eqref{DIRIEQ}. \end{lemma}
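We recall that the first equation in \eqref{DIRIEQ} means that
$$\int_\Omega u\,\mathcal{L}_\varepsilon ^{*}\varphi\,\mathrm{d} x=-\int_\Omega f\,\varphi\,\mathrm{d} x
\qquad\text{for every $\varphi\in C_0^\infty(\Omega,\mathbb{R})$,}$$
where $\mathcal{L}_\varepsilon ^{*}$ denotes the classical (formal) adjoint of $\mathcal{L}_\varepsilon $; equivalently, by the formal self-adjointness of $\mathcal{L}_\varepsilon $ with respect to the measure $V(x)\,\mathrm{d} x$ (see \eqref{autoagg} below), this amounts to requiring that $\int_\Omega u\,\mathcal{L}_\varepsilon \psi\,V(x)\,\mathrm{d} x=-\int_\Omega f\,\psi\,V(x)\,\mathrm{d} x$ for every $\psi\in C_0^\infty(\Omega,\mathbb{R})$.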
We prove this result for a considerably larger
class of operators than the $\mathcal{L}_\varepsilon $ above; see Theorem \ref{th.localDiriMIGLIO}.
We adapt to our context the well-established techniques in \cite[Section 5]{Bony} used for H\"ormander operators.
These techniques are perfectly suited to our more general case, since
they only rely on hypoellipticity and on the Weak Maximum Principle.
Since the proof presents no further difficulties, it is provided in the Appendix, for the sake of completeness only.
With the existence of the weak solution of the Dirichlet problem for $\mathcal{L}_\varepsilon $ on a bounded open set
$\Omega$, we can define the associated Green operator as usual: \begin{definition}[\textbf{Green operator and Green measure}]\label{defi.Green}
Let $\varepsilon \geq 0$ be fixed, and let $\mathcal{L}_\varepsilon $ and $\Omega$ satisfy, respectively,
the hypothesis and the thesis of Lemma \ref{th.localDiri}. We consider the operator (depending on $\mathcal{L}_\varepsilon $ and $\Omega$; we avoid
keeping track of the dependency on $\Omega$ in the notation)
\begin{equation}\label{greenoperator}
G_\varepsilon : C(\overline{\Omega},\mathbb{R})\longrightarrow C(\overline{\Omega},\mathbb{R})
\end{equation}
mapping $f\in C(\overline{\Omega},\mathbb{R})$ into the function $G_\varepsilon (f)$
which is the unique distributional solution $u$ in $C(\overline{\Omega},\mathbb{R})$
of the Dirichlet problem
\begin{gather}\label{DIRIEQGreen}
\left\{
\begin{array}{ll}
\mathcal{L}_\varepsilon u=-f & \hbox{on $\Omega$\quad (in the weak sense of distributions),} \\
u=0 & \hbox{on $\partial\Omega$\quad (point-wise).}
\end{array}
\right.
\end{gather}
We call $G_\varepsilon $ \emph{the Green operator related to $\mathcal{L}_\varepsilon $ and to the open set $\Omega$}.
By the Riesz Representation Theorem (which is applicable thanks to the monotonicity pro\-per\-ties in
Lemma \ref{th.localDiri} with respect to the function $f$), for every $x\in\overline{\Omega}$ there exists
a (nonnegative) Radon measure $\lambda_{x,\varepsilon }$ on $\overline{\Omega}$ such that
\begin{gather}\label{DIRIEQkernel}
G_\varepsilon (f)(x)=\int_{\overline{\Omega}} f(y)\,\mathrm{d}\lambda_{x,\varepsilon }(y),\quad \text{for
every $f \in C(\overline{\Omega},\mathbb{R})$.}
\end{gather}
We call $\lambda_{x,\varepsilon }$ \emph{the Green measure related to $\mathcal{L}_\varepsilon $ (to the open set $\Omega$ and to the point $x$)}. \end{definition}
Let $\mathcal{L}$ be as in \eqref{mainLL}; in the rest of the paper, we set once and for all \begin{equation}\label{VVmu}
\mathrm{d} \nu(x):=V(x)\,\mathrm{d} x, \end{equation}
that is, $\nu$ is the (Radon) measure on $\mathbb{R}^N$ associated with the (positive)
density $V$ in \eqref{mainLL}, absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^N$. It is clear that the
measure $\nu$ plays the following key r\^ole:
\begin{equation}\label{autoagg}
\int \varphi\,\mathcal{L}\psi\,\mathrm{d}\nu=\int \psi\,\mathcal{L}\varphi\,\mathrm{d}\nu,\quad\text{
for every $\varphi,\psi\in C_0^\infty(\mathbb{R}^N,\mathbb{R})$,}
\end{equation}
thus making $\mathcal{L}$ (formally) self-adjoint in the space $L^2(\mathbb{R}^N,\mathrm{d}\nu)$.
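Identity \eqref{autoagg} follows from a plain integration by parts: for every $\varphi,\psi\in C_0^\infty(\mathbb{R}^N,\mathbb{R})$,
$$\int \varphi\,\mathcal{L}\psi\,\mathrm{d}\nu
=\int \varphi\,\sum_{i,j=1}^N\frac{\partial}{\partial x_i}\Big(V\,a_{i,j}\,\frac{\partial \psi}{\partial x_j}\Big)\,\mathrm{d} x
=-\int \sum_{i,j=1}^N V\,a_{i,j}\,\frac{\partial \varphi}{\partial x_i}\,\frac{\partial \psi}{\partial x_j}\,\mathrm{d} x,$$
and the last integral is symmetric in $\varphi$ and $\psi$, since the matrix $A(x)$ is symmetric.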
We observe that (in general) our operators $\mathcal{L}$ in \eqref{mainLL} are not
\emph{classically} self-adjoint; indeed the classical adjoint operator $\mathcal{L}^*$ of $\mathcal{L}$ is related to $\mathcal{L}$
by the following identity (a consequence of \eqref{autoagg})
\begin{equation}\label{autoaggBISSS}
\mathcal{L}^* u=V\,\mathcal{L}(u/V),\quad \text{for every $u$ of class $C^2$.}
\end{equation}
The possibility of dealing with non-identically $1$ densities $V$
(as in the case of Lie groups, see Example \ref{exa.Lie}-(a))
makes it more convenient to decompose the Green measure $\lambda_{x,\varepsilon }$
with respect to $\nu$ in \eqref{VVmu}, rather than w.r.t.\,Lebesgue measure.
Hence we prove the following: \begin{theorem}[\textbf{Green kernel}]\label{th.greeniani}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$,
and that $\mathcal{L}$ satisfies \emph{(NTD)}.
Let $\varepsilon \geq 0$ be fixed.
We set $\mathcal{L}_\varepsilon :=\mathcal{L}-\varepsilon $ and we assume that $\mathcal{L}_\varepsilon $ is hypoelliptic on every open
subset of $\mathbb{R}^N$.
Let $\Omega$ be an open set as in Lemma \ref{th.localDiri}.
If $G_\varepsilon $ and $\lambda_{x,\varepsilon }$ are as in Definition \ref{defi.Green},
there exists a function $k_\varepsilon :\Omega\times \Omega\to \mathbb{R}$, smooth and positive
outside the diagonal of $\Omega\times \Omega$, such that the following representation holds true: \begin{equation}\label{gprororfondam}
G_\varepsilon (f)(x)=
\int_\Omega f(y)\,k_\varepsilon (x,y)\,\mathrm{d}\nu(y), \quad
\text{for every $x\in\Omega$,} \end{equation}
and for every $f \in C(\overline{\Omega},\mathbb{R})$.
We call $k_\varepsilon $ \emph{the Green kernel related to $\mathcal{L}_\varepsilon $ and to the open set $\Omega$}.
Furthermore, we have the following properties: \begin{enumerate}
\item[\emph{(i)}] Symmetry of the Green kernel: \begin{equation}\label{gprororfondam2}
k_\varepsilon (x,y)=k_\varepsilon (y,x) \quad
\text{for every $x,y\in\Omega$.} \end{equation}
\item[\emph{(ii)}]
For every fixed $x\in \Omega$, the function $k_\varepsilon (x,\cdot)$
is $\mathcal{L}_\varepsilon $-harmonic in $\Omega\setminus\{x\}$; moreover
$G_\varepsilon (\mathcal{L}_\varepsilon \varphi)=-\varphi=\mathcal{L}_\varepsilon (G_\varepsilon (\varphi))$ for any $\varphi\in C_0^\infty (\Omega,\mathbb{R})$, that is \begin{gather}\label{gprororfondam2PROPR2} \begin{split}
-\varphi(x)&=\int_\Omega \mathcal{L}_\varepsilon \varphi(y)\,k_\varepsilon (x,y)\,\mathrm{d}\nu(y) \\
&=\mathcal{L}_\varepsilon \Big(\int_\Omega \varphi(y)\,k_\varepsilon (x,y)\,\mathrm{d}\nu(y)\Big),
\qquad
\text{for every $\varphi\in C_0^\infty (\Omega,\mathbb{R})$.} \end{split} \end{gather}
\item[\emph{(iii)}]
For every fixed $x\in \Omega$, one has \begin{equation}\label{gprororfondam2PROPR23}
\lim_{y\to y_0} k_\varepsilon (x,y)=0\quad \text{for any $y_0\in\partial\Omega$.} \end{equation}
\item[\emph{(iv)}]
For every fixed $x\in \Omega$, the functions
$k_\varepsilon (x,\cdot)=k_\varepsilon (\cdot,x)$ are in $L^1(\Omega)$, and $k_\varepsilon \in L^1(\Omega\times \Omega)$. \end{enumerate} \end{theorem} \noindent The key ingredients in the proof of the above results are the following facts:
\begin{itemize}
\item the hypoellipticity of $\mathcal{L}_\varepsilon $ (as assumed in the hypothesis)
which will imply the hypoellipticity of
the \emph{classical} adjoint of $\mathcal{L}_\varepsilon $ (see Remark \ref{rem.hyopoLadj});
\item the $C^\infty$-topology on the space of the $\mathcal{L}_\varepsilon $-harmonic functions
is the same as the $L^1_{\textrm{loc}}$-topology,
another consequence of the hypoellipticity of $\mathcal{L}_\varepsilon $ (Remark \ref{rem.topolocinfyt});
\item the fact that $\mathcal{L}$ is self-adjoint on $L^2(\mathbb{R}^N,\mathrm{d}\nu)$ (see \eqref{autoagg}) so that the same is true of $\mathcal{L}_\varepsilon $
(this will be crucial in proving the symmetry of the Green kernel);
\item the Strong Maximum Principle for the perturbed operator $\mathcal{L}_\varepsilon =\mathcal{L}-\varepsilon $, which we obtain as a consequence of
our previous Strong Maximum Principle for $\mathcal{L}$ in Theorem \ref{th:SMP} (see precisely Remark \ref{PMFancheconc}, where
\emph{nonnegative} maxima are considered): this is a key step for the proof of the \emph{positivity} of $k_\varepsilon $;
\item the Schwartz Kernel Theorem (used for the regularity of the Green kernel).
\end{itemize}
The difference with respect to the analogous result given in the framework of H\"ormander operators in \cite[Théorème 6.1]{Bony}
is the introduction of the relevant measure $\nu $ in the integral representation \eqref{gprororfondam};
indeed, the symmetry property \eqref{gprororfondam2} of the kernel $k_\varepsilon $ is connected with
the identity \eqref{autoagg}, which is not true (in general) if
we consider Lebesgue measure instead of $\nu$.
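To illustrate this point with a minimal computation (the notation $g_\varepsilon $ is used only here): if one rewrites the representation \eqref{gprororfondam} with respect to the Lebesgue measure, say $G_\varepsilon (f)(x)=\int_\Omega f(y)\,g_\varepsilon (x,y)\,\mathrm{d} y$ with $g_\varepsilon (x,y):=k_\varepsilon (x,y)\,V(y)$, then the symmetry \eqref{gprororfondam2} of $k_\varepsilon $ translates into
\begin{equation*}
V(x)\,g_\varepsilon (x,y)=V(y)\,g_\varepsilon (y,x),\qquad \text{for every $x,y\in\Omega$,}
\end{equation*}
so that the Lebesgue-kernel $g_\varepsilon $ is symmetric (in general) only when the density $V$ is constant on $\Omega$.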
We are now ready to give the second main result of the paper: \begin{theorem}[\textbf{Strong Harnack Inequality}]\label{lem.crustimabassoTHEOFORTE}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$, and
suppose it satisfies hypotheses \emph{(NTD)}, \emph{(HY)}
and \emph{(HY)$_\varepsilon $}.
Then, for every connected open set $O\subseteq \mathbb{R}^N$ and every
compact subset $K$ of $O$,
there exists a constant $M=M(\mathcal{L},O,K)\geq 1$ such that \begin{equation}\label{HarnackdeboleEQ1forte}
\sup_{ K} u \leq M\,\inf_K u, \end{equation}
for every nonnegative $\mathcal{L}$-harmonic function $u$ in $O$.
If $\mathcal{L}$ is subelliptic or if it has $C^\omega$ coefficients, then
assumption \emph{(HY)$_\varepsilon $} can be dropped. \end{theorem}
\noindent The last assertion follows from Remark \ref{rem.equivHYeps}.
We now present the spine of the proof of Theorem \ref{lem.crustimabassoTHEOFORTE}.
The main step towards the Strong Harnack Inequality
is the following Theorem \ref{th.mokobre} from Potential Theory.
A proof of a more general abstract version of this useful result,
in the framework of axiomatic harmonic spaces,
can be found in the survey notes \cite[pp.20--24]{Brelot} by Brelot,
where this theorem is attributed to G. Mokobodzki.
(See also a further improvement to harmonic spaces
which are not necessarily second-countable, by Loeb and Walsh, \cite{LoebWalsh}).
Instead of appealing to an abstract Potential-Theoretic statement,
we prefer to formulate the result under the following more specific form. \begin{theorem}\label{th.mokobre}
Let $L$ be a second order linear PDO in $\mathbb{R}^N$ with smooth coefficients. Suppose the following conditions are satisfied. \begin{description}
\item[(Regularity)] There exists a basis $\mathcal{B}$ for the Euclidean topology of $\mathbb{R}^N$
(consisting of bounded open sets)
such that, for every $\Omega\in \mathcal{B}\setminus\{\varnothing\}$
and for every $\varphi\in C(\partial\Omega,\mathbb{R})$, there exists a unique
$L$-harmonic function $H_\varphi^\Omega\in C^2(\Omega)\cap C(\overline{\Omega})$
solving the Dirichlet problem
$$\left\{
\begin{array}{ll}
L u=0 & \hbox{in $\Omega$} \\
u=\varphi & \hbox{on $\partial\Omega$,}
\end{array}
\right.
$$
and satisfying $H_\varphi^\Omega\geq 0$ whenever $\varphi\geq0$.
\item[(Weak Harnack Inequality)]
For every connected open set $O\subseteq \mathbb{R}^N$, every
compact subset $K$ of $O$ and every $y_0\in O$,
there exists a constant $C(y_0)=C(L,O,K,y_0)>0$ such that \begin{equation*}
\sup_{ K} u \leq C(y_0)\,u(y_0), \end{equation*}
for every nonnegative $L$-harmonic function $u$ in $O$. \end{description}
Then, the following \emph{Strong Harnack Inequality} for $L$ holds:
for every connected open set $O$ and every
compact subset $K$ of $O$
there exists a constant $M=M(L,O,K)\geq 1$ such that \begin{equation}\label{SHIvera}
\sup_{K} u \leq M\,\inf_{K} u, \end{equation}
for every nonnegative $L$-harmonic function $u$ in $O$. \end{theorem}
See also Remark \ref{rem.precisiamo}
for some equivalent assumptions that can replace
the above (Weak Harnack Inequality) to get the Strong
Harnack Inequality.
The proof of Theorem \ref{th.mokobre} is given in Section \ref{sec:Harnackvera},
starting from a result by Mokobodzki and Brelot in \cite[Chapter I]{Brelot}:
in the latter it is shown that if the axioms (Regularity) and (Weak Harnack Inequality) are fulfilled then,
for any connected open set $O\subseteq\mathbb{R}^N$ and any
$x_0\in O$, the set
\begin{equation}\label{equicontinuosa}
\Phi_{x_0}:=\Big\{h\in \mathcal{H}_L(O)\,:\,h\geq 0,\quad h(x_0)=1\Big\}
\end{equation}
is equicontinuous at $x_0$. The proof of this fact rests on
some deep results of Functional Analysis concerning the family
of the so-called harmonic measures $\{\mu^\Omega_x\}_{x\in \Omega}$
related to $L$ (and to a regular set $\Omega$ for the Dirichlet problem), jointly with some basic properties of the harmonic sheaf
associated with the operator $L$.
As observed by Bony in \cite[Remarque 7.1, p.300]{Bony},
the Strong Harnack Inequality classically relies on two-sided estimates of the ratios
$h(x_1,\cdot)/h(x_2,\cdot)$, where $h(x,y)$ is the relevant Poisson kernel; these estimates were unavailable in the
setting considered in \cite{Bony}, as they are (to the best of our knowledge) in our setting too.
However, like in \cite{Bony}, the unavailability of these estimates can be overcome
by the use of the Green kernel for the perturbed operator $\mathcal{L}-\varepsilon $ and by the Strong Maximum Principle, as they jointly lead
to the Weak Harnack Inequality.
It is interesting to observe that, once the Weak Harnack Inequality
is available, the equicontinuity of \eqref{equicontinuosa} (an equivalent version of the Strong Harnack Inequality)
is derived by Mokobodzki and Brelot by the comparison (in the sense of measures) $\mu^\Omega_{x_1}\leq M\,\mu^\Omega_{x_2}$ for harmonic measures:
this comparison seems to be the core substitute for the mentioned pointwise estimates with Poisson kernels centered at different points $x_1,x_2$.
Due to Theorem \ref{th.mokobre}, the focus is now shifted from the Strong Harnack Inequality to the Weak Harnack Inequality, which is easier to establish.
As already anticipated, the latter is based on the lower bound
\eqref{lowboundkeps} as we now briefly describe.
First, we remark that the proof of \eqref{lowboundkeps} is a two-line comparison argument:
it suffices to apply $\mathcal{L}-\varepsilon $ on both sides of \eqref{lowboundkeps} to see that they produce the same result, namely $-\varepsilon \,u$;
then one uses the Weak Maximum Principle, since the right-hand side is null on $\partial\Omega$ whereas the left-hand side
is nonnegative. Secondly, with inequality \eqref{lowboundkeps} at hand and the \emph{strict positivity} of $k_\varepsilon $ (a consequence
of the SMP), it is not difficult to prove that $u(x_0)$ dominates the $L^1_\textrm{loc}$-norm of $u$, on suitable compact sets.
Then, due to the equivalence of the $L^1_{\textrm{loc}}$ and $C^\infty$ topologies
on the space of the $\mathcal{L}$-harmonic functions (this fact deriving from (HY)), one can infer the following: \begin{theorem}[Weak Harnack inequality for derivatives]\label{lem.crustimabassoTHEO}
Let $\mathcal{L}$ satisfy \emph{(NTD)}, \emph{(HY)}
and \emph{(HY)}$_\varepsilon $.
Then, for every connected open set $O\subseteq \mathbb{R}^N$, every
compact subset $K$ of $O$, every $m\in \mathbb{N}\cup\{0\}$ and every $y_0\in O$,
there exists a positive $C(y_0)=C(\mathcal{L},\varepsilon ,O,K,m,y_0)$ such that \begin{equation}\label{HarnackdeboleEQ1}
\sum_{|\alpha|\leq m}
\sup_{x\in K} \Big| \frac{\partial^\alpha u(x)}{\partial x^\alpha}\Big| \leq C(y_0)\,u(y_0), \end{equation}
for every nonnegative $\mathcal{L}$-harmonic function $u$ in $O$. \end{theorem}
We remark that topological properties similar to those
mentioned above for the space of the $\mathcal{L}$-harmonic functions are also valid when $\mathcal{L}$ in \eqref{mainLL}
is \emph{not necessarily hypoelliptic}, provided that it possesses a
positive global fundamental solution: see e.g., \cite{BattagliaBonfiglioli} by
the first and third named authors, where Montel-type results are proved (in the sense of \cite{Montel}), jointly with the equivalence of the topologies
induced on $\mathcal{H}_\mathcal{L}(\Omega)$ by $L^1_{\textrm{loc}}$ and by $L^\infty_{\textrm{loc}}$, under no hypoellipticity assumptions.
\textbf{Acknowledgements.}
We wish to thank Alberto Parmeggiani for many helpful discussions on hypoellipticity,
leading to an improvement of the manuscript.
\section{The Strong Maximum Principle for $\mathcal{L}$}\label{sec:SMP}
The aim of this section is to prove the Strong Maximum Principle for $\mathcal{L}$
in Theorem \ref{th:SMP}. Clearly, a fundamental step is played by a suitable Hopf-type lemma, furnished
in Lemma \ref{lem_Hopf}.
(For a recent interesting survey on maximum principles and Hopf-type results for
uniformly elliptic operators, see L\'opez-G\'omez \cite{lopez-Gomez}.)
First the relevant definition and notation: given an open set $\Omega\subseteq \mathbb{R}^N$
and a relatively closed set $F$ in $\Omega$, we say that $\nu$ is \emph{externally orthogonal to $F$ at $y$}, and we write
\begin{equation}\label{estnorm}
\textrm{$\nu\bot\,F$ at $y$,}
\end{equation}
if: $y\in \Omega \cap \partial F$; $\nu\in \mathbb{R}^N\setminus\{0\}$; $\overline{B(y+\nu,|\nu|)}$
is contained in $\Omega$ and it intersects $F$ only at $y$. Here and throughout $B(x_0,r)$ is the Euclidean
ball in $\mathbb{R}^N$ of centre $x_0$ and radius $r> 0$; moreover $|\cdot|$ will denote the Euclidean norm on $\mathbb{R}^N$.
The notation \eqref{estnorm} does not explicitly refer to externality,
but this will not create any confusion in the sequel. It is not difficult to recognize that if $\Omega$ is connected and if $F$
is a non-empty proper (relatively closed) subset of $\Omega$, then there always exist couples $(y,\nu)$
such that $\nu\bot\,F$ at $y$.
Finally, throughout the paper we write $\partial_i$ for $\dfrac{\partial}{\partial x_i}$. \begin{lemma}[\textbf{of Hopf-type for $\mathcal{L}$}]\label{lem_Hopf}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL} with $C^1$ coefficients $V>0$ and $a_{i,j}$,
and let us set $A(x):=(a_{i,j}(x))_{i,j}$.
(We recall that $A(x)\geq 0$ for every $x\in\mathbb{R}^N$.)
Let
$\Omega\subseteq\mathbb{R}^N$ be a connected open set. Then,
the following facts hold. \begin{enumerate}
\item[\emph{(1)}] Let $u\in C^2(\Omega,\mathbb{R})$ be such that $\mathcal{L} u\geq 0$ on $\Omega$;
let us suppose that
\begin{equation}\label{Hopf.ortFUU}
F(u):=\Big\{x\in \Omega :\,u(x)=\max_\Omega u \Big\}
\end{equation}
is a proper subset of $\Omega$.
Then
\begin{equation}\label{Hopf.ort}
\langle A(y)\nu,\nu\rangle=0\quad
\text{whenever $\nu\bot\,F(u)$ at $y$}.
\end{equation}
\item[\emph{(2)}]
Suppose $c\in C(\mathbb{R}^N,\mathbb{R})$ is nonnegative on $\mathbb{R}^N$, and let us set
$\mathcal{L}_c:=\mathcal{L}-c$. Let $u\in C^2(\Omega,\mathbb{R})$ be such that $\mathcal{L}_c u\geq 0$ on $\Omega$;
let us suppose that $F(u)$ in \eqref{Hopf.ortFUU}
is a proper subset of $\Omega$ and that $\max_\Omega u\geq 0$.
Then \eqref{Hopf.ort} holds true. \end{enumerate} \end{lemma} \begin{proof}
We begin by proving part (1) in the statement of the Lemma, from which we also inherit
the notation and hypotheses on $u$ and $F(u)$. Notice that the assumption ensures that
$M:=\max_\Omega u\in \mathbb{R}$.
To this aim, let us assume by contradiction that
\begin{equation} \label{lem_Hopf.EQ1}
\langle A(y)\nu,\nu\rangle > 0, \quad \text{for some $\nu\bot\,F(u)$ at $y$.}
\end{equation}
We define a smooth function $w: \mathbb{R}^N \longrightarrow \mathbb{R}$ as follows
$$w(x) := e^{-\lambda|x - (y+\nu)|^2} - e^{-\lambda|\nu|^2},$$
where $\lambda$ is a positive real number to be chosen in a moment.
We set $b_j:=\sum_{i=1}^N\partial_i(V\,a_{i,j})/V$, so that
$\mathcal{L}=\sum_{i,j}a_{i,j}\partial_{i,j}+\sum_j b_j\partial_j$.
A simple computation shows that
\begin{equation} \label{lem_Hopf.EQ1ccc}
\mathcal{L} w(y) = \lambda^2e^{-\lambda|\nu|^2}\left(4\langle A(y)\nu,\nu\rangle
- \frac{2}{\lambda}\sum_{j = 1}^N\big(a_{j,j}(y) - b_j(y)\nu_j\big)\right),
\end{equation}
and thus, by \eqref{lem_Hopf.EQ1}, it is possible to choose $\lambda > 0$
in such a way that $\mathcal{L} w(y) > 0.$
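For completeness, here is a minimal sketch of the computation behind \eqref{lem_Hopf.EQ1ccc} (with the shorthand $z:=x-(y+\nu)$, used only here, and $\delta_{i,j}$ the Kronecker symbol): for every $x\in\mathbb{R}^N$,
\begin{equation*}
\partial_j w(x)=-2\lambda\,z_j\,e^{-\lambda|z|^2},\qquad
\partial_{i,j} w(x)=\big(4\lambda^2\,z_i\,z_j-2\lambda\,\delta_{i,j}\big)\,e^{-\lambda|z|^2};
\end{equation*}
evaluating at $x=y$ (where $z=-\nu$) and inserting these expressions in
$\mathcal{L} w=\sum_{i,j}a_{i,j}\,\partial_{i,j}w+\sum_j b_j\,\partial_j w$
one obtains precisely \eqref{lem_Hopf.EQ1ccc}.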
By the continuity of $\mathcal{L} w$, we can then find a positive real number
$\delta$ such that $V:= B(y,\delta)$ is compactly contained in
$\Omega$ and $\mathcal{L} w > 0$ on $V$.
We now define, for $\varepsilon > 0$,
a function $v_{\varepsilon }: \overline{V}\to\mathbb{R}$ by setting
$v_{\varepsilon }:= u + \varepsilon \,w$.
Clearly, $v_{\varepsilon } \in C^2(V,\mathbb{R})\cap C(\overline{V},\mathbb{R})$, and we claim that
the maximum of $v_\varepsilon $ on $\overline{V}$ is attained in $V$.
Indeed, let us consider the splitting of $\partial V$ given by the two sets
$K_1 := \partial V\cap \overline{B(y+\nu,|\nu|)}$ and $K_2:=\partial V\setminus K_1.$
For every $x \in K_2$, one has $w(x)<0$ (since $x$ lies outside the closed ball $\overline{B(y+\nu,|\nu|)}$), whence
$$v_\varepsilon (x) = u(x) + \varepsilon w(x) < u(x) \leq M.$$
On the other hand, for all $x \in K_1$,
we have
$$v_\varepsilon (x) \leq \max_{K_1}u + \varepsilon \max_{K_1}w,$$
and since $\max_{K_1}u < M$
(observe that $u<M$ outside $F(u)$ and that
$K_1$ is a compact set contained in $\Omega \setminus F(u)$),
it is possible to choose $\varepsilon >0$ so small that
$v_\varepsilon < M$ on $K_1$.
By gathering together these facts we see that,
for every $x\in \partial V$ (note that $y \in F(u)$ and $w(y) = 0$)
$$v_\varepsilon (x) < M = u(y) = v_\varepsilon (y) \leq \max_{\overline{V}}v_\varepsilon , $$
and this proves the claim.
Since $\mathcal{L} v_\varepsilon =\mathcal{L} u+\varepsilon \,\mathcal{L} w\geq \varepsilon \,\mathcal{L} w$ (and the latter is $>0$ on $V$), the function $v_\varepsilon $ is
\emph{strictly} $\mathcal{L}$-subharmonic on $V$, that is, $\mathcal{L} v_\varepsilon > 0$ on $V$, and it admits
a maximum point in the open set $V$, say $p_0$.
Then we have (recall that $A(p_0)\geq 0$ and notice that
$\nabla v_\varepsilon (p_0)=0$ and $H(p_0):=(\partial_{i,j} {v_\varepsilon }(p_0))_{i,j} \leq 0$)
\begin{align}\label{fine.hopf}
0<\mathcal{L} v_\varepsilon (p_0) =
\sum_{i,j}a_{i,j}(p_0)\partial_{i,j}v_\varepsilon (p_0)=
\text{trace}\big(A(p_0)\cdot H(p_0)\big)
\leq 0,
\end{align}
which is clearly a contradiction.
Part (2) in the statement of the Lemma can be proved in a totally analogous way:
we replace $\mathcal{L}$ with $\mathcal{L}_c$ and we
notice that $w(y)=0$ so that $\mathcal{L}_c w(y)=\mathcal{L} w(y)$, and
\eqref{lem_Hopf.EQ1ccc} is left unchanged.
Arguing as above, we let again $p_0\in V$ be such that $v_\varepsilon (p_0)=\max_{\overline{V}} v_\varepsilon $.
This gives $v_\varepsilon (p_0)\geq v_\varepsilon (y)=u(y)=M$. Hence \eqref{fine.hopf} becomes
\begin{align*}
0<\mathcal{L}_c v_\varepsilon (p_0) =
\text{trace}\big(A(p_0)\cdot H(p_0)\big) -c(p_0)\,v_\varepsilon (p_0)
\leq -c(p_0)\,M,
\end{align*}
where in the last inequality we used the assumption $c\geq0$ and the fact that
$v_\varepsilon (p_0)\geq M$.
By the assumption $M\geq 0$ (and again by the assumption on the sign of $c$), we have
$-c(p_0)\,M\leq 0$, and we obtain another contradiction. \end{proof}
We are now in a position to provide the \begin{proof}[Proof (of Theorem \ref{th:SMP})]
Let $\mathcal{L}$ be as in the statement of Theorem \ref{th:SMP};
suppose that $\Omega\subseteq\mathbb{R}^N$ is a connected open set
and that $u\in C^2(\Omega,\mathbb{R})$ satisfies $\mathcal{L} u\geq 0$ on $\Omega$
and $u$ attains a maximum in $\Omega$. We set
$$F(u):=\Big\{x\in \Omega : \, u(x)=\max_\Omega u \Big\}.$$
By assumption $F(u)\neq \varnothing$, say $\xi\in F(u)$. We show that $F(u)=\Omega$.
To this aim, let us rewrite $\mathcal{L}$ as follows:
$$\mathcal{L}=
\frac{1}{V}\sum_{i,j} \partial_i \Big(V\,a_{i,j}\,\partial_j\Big)=
\frac{1}{V} \sum_{i,j} V\,\partial_i( a_{i,j}\partial_j)+
\sum_{i,j} \frac{\partial_i V}{V}\, a_{i,j}\partial_j=
\sum_{i,j} \partial_i( a_{i,j}\partial_j)+
\sum_{j} b_j\,\partial_j,
$$
where $b_j:=\frac{1}{V}\sum_{i=1}^N\partial_i V\, a_{i,j}$ (for $j=1,\ldots,N$).
Let us consider the vector fields
\begin{align}\label{CampiXX}
X_i:= \sum_{j=1}^N a_{i,j}\,\partial_j,\quad i=1,\ldots,N,\qquad
X_0:=\sum_{j=1}^N b_j\,\partial_j.
\end{align}
We explicitly remark the following useful fact: $X_0$
is a linear combination (with smooth coefficients) of $X_1,\ldots,X_N$; indeed \begin{equation}\label{X0lincopmbxi}
X_0=\sum_{j=1}^N b_j\,\partial_j=\sum_{j=1}^N \frac{1}{V}\sum_{i=1}^N\partial_i V\, a_{i,j}\,\partial_j
=\sum_{i=1}^N\frac{\partial_i V}{V}\,\sum_{j=1}^N a_{i,j}\,\partial_j =\sum_{i=1}^N\frac{\partial_i V}{V}\,X_i. \end{equation}
Summing up, we have written $\mathcal{L}$ as follows
$$\mathcal{L} u =\sum_{i=1}^N\partial_i(X_iu)+\sum_{i=1}^N\frac{\partial_i V}{V}\,X_iu,\quad \forall\,\,u\in C^2. $$
Thanks to the assumption (NTD) of non-total degeneracy of $\mathcal{L}$
and due to the smoothness of its coefficients, we are entitled to use
a notable result \cite[Theorem 2]{Amano} by Amano, which states that
the hypoellipticity assumption (HY) ensures the controllability of the ODE system \begin{equation}\label{X0lincopmbxiamano}
\dot \gamma= \xi_0 X_0(\gamma)+\sum_{i=1}^N \xi_i\,X_i(\gamma),\quad (\xi_0,\xi_1,\ldots,\xi_N)\in \mathbb{R}^{1+N}, \end{equation}
on every open and connected subset of $\mathbb{R}^N$
(see e.g., \cite[Chapter 3]{Jurdjevic} for the notion of controllability).
Since $\Omega$ is open and connected, this implies that
any point of $\Omega$ can be joined to $\xi$ by a continuous curve $\gamma$
contained in $\Omega$ which is piecewise an integral curve of
a vector field belonging to $\mathcal{V}:=\textrm{span}_\mathbb{R}\{X_0,X_1,\ldots,X_N\}$.
It then suffices to prove that if $\gamma$ is
an integral curve of a vector field $X\in \mathcal{V}$ starting at a point of $F(u)$ (which is non-empty), then
$\gamma(t)$ remains in $F(u)$ for every admissible time $t$. In this case
we say that $F(u)$ is \emph{$X$-invariant}.
By a result of Bony, \cite[Théorème 2.1]{Bony}, the $X$-invariance of $F(u)$
is equivalent to the tangentiality of $X$ to $F(u)$: this latter condition means that \begin{equation}\label{daprovSMP1primaa}
\langle X(y),\nu\rangle =0\quad \text{whenever $\nu\bot\,F(u)$ at $y$}. \end{equation}
Hence, by all the above arguments, the proof of the SMP is complete if we show that
\eqref{daprovSMP1primaa} is fulfilled by any $X\in \mathcal{V}$.
Since $X$ is a linear combination of $X_0,X_1,\ldots,X_N$ and due to \eqref{X0lincopmbxi}, it suffices to prove
this identity when $X$ is replaced by any element of $\{X_1,\ldots,X_N\}$.
Due to identity \eqref{Hopf.ort} in the Hopf-type Lemma \ref{lem_Hopf}, it is therefore
sufficient to show that for every $i\in\{1,\ldots,N\}$ and every $x\in \mathbb{R}^N$, there exists
$\lambda_i(x)>0$ such that
\begin{equation}\label{daprovSMP1}
\langle X_i(x),\nu\rangle^2 \leq \lambda_i(x)\,\langle A(x)\nu,\nu\rangle\quad
\text{for every $\nu\in\mathbb{R}^N$}.
\end{equation}
Indeed, \eqref{daprovSMP1} together with \eqref{Hopf.ort} implies that
the left-hand side of \eqref{daprovSMP1} is null whenever $\nu\bot F(u)$ at $y$, which is precisely
\eqref{daprovSMP1primaa} for $X\in\{X_1,\ldots,X_N\}$.
Due to the very definition of $X_i$, inequality \eqref{daprovSMP1} boils down to proving that, given
a real symmetric positive semidefinite matrix $A=(a_{i,j})$, for every $i$ there exists $\lambda_i>0$ such that
$$\Big(\sum_{j}a_{i,j}\nu_j\Big)^2\leq \lambda_i\,\sum_{h,k} a_{h,k}\,\nu_h\,\nu_k
\quad \text{for every $\nu\in\mathbb{R}^N$},$$
which is a consequence of the Cauchy-Schwarz inequality and the characterization
(for a symmetric $A\geq 0$)
of $\textrm{ker}(A)$ as $\{x\in\mathbb{R}^N:\langle Ax,x\rangle=0\}$.
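For the reader's convenience, here is a minimal sketch of this argument (where $e_i$ denotes the $i$-th vector of the canonical basis of $\mathbb{R}^N$, a notation used only here): since $A\geq 0$ is symmetric, $(\xi,\eta)\mapsto \langle A\xi,\eta\rangle$ is a positive semidefinite symmetric bilinear form, and the Cauchy-Schwarz inequality gives
\begin{equation*}
\Big(\sum_{j}a_{i,j}\nu_j\Big)^2=\langle A\nu,e_i\rangle^2\leq
\langle A\nu,\nu\rangle\,\langle Ae_i,e_i\rangle=a_{i,i}\,\langle A\nu,\nu\rangle;
\end{equation*}
hence $\lambda_i=a_{i,i}$ does the job when $a_{i,i}>0$, whereas, if $a_{i,i}=\langle Ae_i,e_i\rangle=0$, the mentioned characterization of $\textrm{ker}(A)$ gives $Ae_i=0$, so that the left-hand side vanishes and any $\lambda_i>0$ works.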
This proves part (1) of Theorem \ref{th:SMP}. As for part (2), let
$c$ be smooth and nonnegative on $\mathbb{R}^N$, and let us set $\mathcal{L}_c:=\mathcal{L}-c$.
Suppose $u\in C^2(\Omega,\mathbb{R})$ satisfies $\mathcal{L}_c u\geq 0$ on $\Omega$
and that it attains a nonnegative maximum in $\Omega$. For $F(u)\neq \varnothing$ as above, we show again that
$F(u)=\Omega$. The hypoellipticity and non-total degeneracy of $\mathcal{L}$
ensure again (by Amano's cited result for $\mathcal{L}$) the controllability of
system \eqref{X0lincopmbxiamano}. This again grants a connectivity property of $\Omega$
by means of continuous curves, piecewise integral curves of elements in the above vector space $\mathcal{V}$.
By Bony's quoted result on invariance/tangentiality, the needed identity $F(u)=\Omega$ follows if we show again that
\eqref{daprovSMP1primaa} is fulfilled when $X$ is replaced by $X_i$, for $i=1,\ldots,N$ (the case $i=0$ deriving as above from
\eqref{X0lincopmbxi}).
Now, by part (2) of Lemma \ref{lem_Hopf},
we have at our disposal a Hopf-type Lemma for operators
of the form $\mathcal{L}_c$, and for functions $u$ such that $\mathcal{L}_c u\geq 0$
and attaining a \emph{nonnegative} maximum. In other words, we know that
\eqref{Hopf.ort} holds true, again as in the previous case (1). The validity of \eqref{daprovSMP1} allows us to end the proof,
as in the previous part. \end{proof}
A close inspection of the above proof shows that we have indeed
demonstrated the following result as well (replacing the hypothesis of hypoellipticity
of $\mathcal{L}$ by that of $\mathcal{L}-c$), since Amano's results on hypoellipticity/controllability
are independent of the presence of a zero-order term: \begin{remark}\label{PMFancheconc}
\emph{Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$,
and that it satisfies \emph{(NTD)}.
Let $c\in C^\infty(\mathbb{R}^N,\mathbb{R})$ be nonnegative and
suppose that the operator $\mathcal{L}_c:=\mathcal{L}-c$ is hypoelliptic
on every open subset of $\mathbb{R}^N$.}
\emph{If
$\Omega\subseteq\mathbb{R}^N$ is a connected open set, then any function
$u\in C^2(\Omega,\mathbb{R})$ satisfying $\mathcal{L}_c u\geq 0$ on $\Omega$ and
attaining a nonnegative maximum in $\Omega$ is constant
throughout $\Omega$.} \end{remark}
As a Corollary of Theorem \ref{th:SMP} we immediately get the following result. \begin{corollary}[\textbf{Weak Maximum Principle for $\mathcal{L}$}]\label{th.WMPPP}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$,
and that it satisfies \emph{(NTD)} and \emph{(HY)}. Suppose also that
$c\in C^\infty(\mathbb{R}^N,\mathbb{R})$ is nonnegative on $\mathbb{R}^N$ (the case $c\equiv 0$ is allowed),
and let us set $\mathcal{L}_c:=\mathcal{L}-c$.
Then,
$\mathcal{L}_c$ satisfies the \emph{Weak Maximum Principle}
on every bounded open set
$\Omega\subseteq\mathbb{R}^N$, that is: \begin{equation}\label{WMP}
\left\{
\begin{array}{ll}
u\in C^2(\Omega,\mathbb{R})\\
\mathcal{L}_c u\geq 0\,\,\text{on $\Omega$} \\
\limsup\limits_{x\to x_0} u(x)\leq 0\,\,\text{for every $x_0\in\partial\Omega$}
\end{array}
\right.
\qquad\Longrightarrow\qquad
u\leq 0\,\,\text{on $\Omega$.} \end{equation}
As a consequence, if $\Omega\subseteq\mathbb{R}^N$ is bounded, and if
$u\in C^2(\Omega)\cap C(\overline{\Omega})$ is nonnegative and
such that $\mathcal{L}_c u\geq 0$ on $\Omega$, then one has
$ \sup_{\overline{\Omega}} u=\sup_{\partial\Omega} u. $ \end{corollary} \begin{proof}
Suppose that the open set $\Omega\subset\mathbb{R}^N$ is bounded and $u$ is as in the left-hand side of
\eqref{WMP}. Let $x_0\in \overline{\Omega}$ be such that \begin{equation}\label{weier.max}
\limsup_{x\to x_0} u(x)=\sup_{\Omega}u. \end{equation}
If $x_0\in\partial\Omega$, then \eqref{WMP} ensures that $\limsup_{x\to x_0} u(x)\leq 0$, so that (due to
\eqref{weier.max}) $\sup_\Omega u\leq0$, proving the right-hand side of \eqref{WMP}.
If $x_0\in\Omega$, then \eqref{weier.max} gives $u(x_0)=\max_\Omega u$.
If $u(x_0)<0$, we conclude as above that $\max_\Omega u=u(x_0)<0$. If $u(x_0)\geq 0$,
we consider the connected component $\Omega_0$ of $\Omega$ containing $x_0$;
since $u(x_0)=\max_\Omega u\geq 0$ is, in particular, a nonnegative maximum of $u$ on $\Omega_0$
attained at the interior point $x_0$,
part (2) of the Strong Maximum Principle in Theorem \ref{th:SMP}
ensures that $u\equiv u(x_0)$ on $\Omega_0$. Let us take any $\xi_0\in \partial \Omega_0$; we have
$$\max_{\Omega}u=u(x_0)=\limsup_{\Omega_0\ni x\to \xi_0}u(x)\leq
\limsup_{\Omega\ni x\to \xi_0}u(x)\leq 0, $$
where the last inequality follows from $\partial \Omega_0\subseteq \partial\Omega$
and from the assumption in \eqref{WMP}.
We remark that when $c\equiv 0$ the proof is slightly simpler, as an interior maximum of
$u$ propagates up to the boundary, regardless of the sign of this maximum.
Finally we prove the last assertion of the corollary.
Let $\Omega\subseteq\mathbb{R}^N$ be bounded and let $u\in C^2(\Omega)\cap C(\overline{\Omega})$
be nonnegative on $\overline{\Omega}$ and satisfying $\mathcal{L}_c u\geq 0$ on $\Omega$; then we set $M:=\sup_{\partial\Omega} u$
and we observe that $M\geq 0$ since this is true of $u$.
We have (recall that $c\geq 0$)
$$\mathcal{L}_c(u-M)=\mathcal{L}_c u-\mathcal{L}_c M\geq -\mathcal{L}_c M=-(\mathcal{L} -c)M=cM\geq 0. $$
Since (by definition of $M$) we have $u-M\leq 0$ on $\partial\Omega$ (and $u-M$ is continuous up to $\partial\Omega$),
we can apply \eqref{WMP} to get $u-M\leq 0$, that is $u\leq \sup_{\partial\Omega} u$ on $\Omega$.
This clearly proves the needed $ \sup_{\overline{\Omega}} u=\sup_{\partial\Omega} u $. \end{proof}
Arguing as in the previous proof (and exploiting Remark \ref{PMFancheconc}
instead of Theorem \ref{th:SMP}-(2)) we also get the following result,
where we alternatively replace the hypothesis of hypoellipticity
of $\mathcal{L}$ by that of $\mathcal{L}-c$: \begin{remark}\label{PMFancheconc2}
\emph{Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$,
and that it satisfies \emph{(NTD)}.
Let $c\in C^\infty(\mathbb{R}^N,\mathbb{R})$ be nonnegative and
suppose that the operator $\mathcal{L}_c:=\mathcal{L}-c$ is hypoelliptic
on every open subset of $\mathbb{R}^N$.}
\emph{Then $\mathcal{L}_c$ satisfies the Weak Maximum Principle
on every bounded open set $\Omega\subseteq\mathbb{R}^N$.}
\emph{As a consequence, if $\Omega\subseteq\mathbb{R}^N$ is bounded, and if
$u\in C^2(\Omega)\cap C(\overline{\Omega})$ is nonnegative and
such that $\mathcal{L}_c u\geq 0$ on $\Omega$, then one has}
$ \sup_{\overline{\Omega}} u=\sup_{\partial\Omega} u. $ \end{remark}
\section{Analytic coefficients: A Unique Continuation result for $\mathcal{L}$}\label{sec:Harnack_analytic}
In this short section, by means of the ideas of controllability/propagation introduced in the previous section,
we prove the following result. \begin{theorem}[\textbf{Unique Continuation for $\mathcal{L}$}]\label{th:Uniquecont}
Suppose that $\mathcal{L}$ is an operator of the form
\eqref{mainLL} satisfying assumptions \emph{(NTD)} and \emph{(HY)}.
Suppose that $\mathcal{L}$ has $C^\omega$ coefficients $a_{i,j}$ and $V$.
Let
$\Omega\subseteq\mathbb{R}^N$ be a connected open set.
Then any $\mathcal{L}$-harmonic function
on $\Omega$ vanishing on some non-empty open subset of $\Omega$ is identically zero on $\Omega$. \end{theorem} \begin{proof}
Let $u\in \mathcal{H}_\mathcal{L}(\Omega)$ be vanishing on the open set $U\subseteq \Omega$, $U\neq\varnothing$.
Let $F\subseteq \Omega$ be the support of $u$.
We argue by contradiction, by assuming that $F\neq \varnothing$.
Let us fix any $y\in \partial F\cap \Omega$
and any $\nu\in \mathbb{R}^N\setminus\{0\}$ such that $\nu \bot F$ at $y$ (see the notion of exterior
orthogonality at the beginning of Section \ref{sec:SMP}; the assumption $F\neq \varnothing$
ensures the existence of such a couple $(y,\nu)$). We consider the Euclidean open ball
$B:=B(y+\nu,|\nu|)$ which is completely contained in $\Omega\setminus F$, so that $u\equiv 0$ on $B$.
We observe that $B$
is the sub-level set $\{f(x)<0\}$, where
$$f(x)=|x-(y+\nu)|^2-|\nu|^2. $$
There are only two cases:
(a) The boundary of $B$ is \emph{non-characteristic} for $\mathcal{L}$ at $y$, that is,
$\langle A(y)\,\nabla f(y),\nabla f(y)\rangle \neq 0$.
Due to the $C^\omega$ assumption we are allowed to use the classical Holmgren's Theorem
(see e.g., \cite[Theorem 8.6.5]{HormanderLibro}),
ensuring that
$u$ vanishes in a neighborhood of $y$, so that $y\in\Omega\setminus F$.
Since $F$ is relatively closed in $\Omega$, this is in contradiction with $y\in \partial F\cap \Omega$.
Hence it is true that:
(b) The boundary of $B$ is \emph{characteristic} for $\mathcal{L}$ at $y$, that is,
$\langle A(y)\,\nabla f(y),\nabla f(y)\rangle = 0$.
Since $\nabla f(x)=2\,(x-y-\nu)$, this condition boils down to
$\langle A(y)\nu,\nu\rangle=0$.
Let $X_0,X_1,\ldots,X_N$ be the vector fields introduced
in \eqref{CampiXX}.
The same Linear Algebra argument
leading to \eqref{daprovSMP1} shows that
$\langle A(y)\nu,\nu\rangle=0$ implies
$$\langle X_i(y),\nu\rangle=0\quad \text{for every $i=1,\ldots,N$.} $$
Identity \eqref{X0lincopmbxi} guarantees that the same holds for $i=0$ as well.
Therefore, one has $\langle X(y),\nu\rangle=0$ for every $X\in \mathcal{V}:=\textrm{span}\{X_0,X_1,\ldots,X_N\}$.
Arguing as in Section \ref{sec:SMP}, by means of the result by Bony \cite[Théorème 2.1]{Bony},
this geometric condition (holding true for arbitrary $\nu \bot F$ at $y$) implies that the closed set $F$ is $X$-invariant
for any $X\in \mathcal{V}$ (that is $F$ contains the trajectories of the integral curves
of $X$ touching $F$).
On the other hand, the hypoellipticity assumption (HY) on $\mathcal{L}$ ensures
(due to the recalled result by Amano \cite[Theorem 2]{Amano}) that
any pair of points of the connected open set $\Omega$ can be joined by a continuous curve which
is piecewise an integral curve of some vector fields $X$ in $\mathcal{V}$. Gathering together all the mentioned
results, the fact that $F\neq \varnothing$ implies that any point of
$\Omega$ belongs to $F$, contradicting the assumption that $U\subseteq \Omega\setminus F$. \end{proof}
\section{The Green function and the Green kernel for $\mathcal{L}-\varepsilon $}\label{sec:Green}
The aim of this section is to prove
Theorem \ref{th.greeniani}. In the first part of the proof
(Steps I--III) we follow the classical scheme by Bony (see \cite[Théorème 6.1]{Bony}),
hence we skip many details; it is instead in Step IV that a slight difference is presented,
in that we exploit the measure $\mathrm{d}\nu(x)=V(x)\,\mathrm{d} x$ in order to obtain the symmetry
property of the Green kernel even when our operator $\mathcal{L}$ is not (classically) self-adjoint.
The problem of the behavior of the Green kernel along the diagonal is more subtle, as it is
shown by Fabes, Jerison and Kenig in \cite{FabesJerisonKenig} who proved that, for divergence-form
operators as in \eqref{mainLL} (when $V\equiv 1$ and, roughly put, when the degeneracy of $A(x)$
is controlled by a suitable weight) the limit of the
Green kernel along the diagonal need not be infinite; we plan to investigate this behavior in a future study,
since our assumption (NTD) prevents the existence of any vanishing Muckenhoupt-type weight.
Throughout this section, we fix an operator $\mathcal{L}$ of the form
\eqref{mainLL}, with $C^\infty$ coefficients $V>0$ and $(a_{i,j})\geq 0$,
and we assume that $\mathcal{L}$ satisfies (NTD).
Moreover, we also fix $\varepsilon \geq 0$ (note that the case $\varepsilon =0$ is allowed)
and we set $\mathcal{L}_\varepsilon :=\mathcal{L}-\varepsilon $; we assume that $\mathcal{L}_\varepsilon $ is hypoelliptic on every open
subset of $\mathbb{R}^N$. Finally, $\Omega$ is a fixed open set as
in Lemma \ref{th.localDiri}, such that the Dirichlet problem
\eqref{DIRIEQ} is (uniquely) solvable.
From Lemma \ref{th.localDiri}, we know that there exists a monotone operator $G_\varepsilon $
(which we called the Green operator related to $\mathcal{L}_\varepsilon $ and $\Omega$); since $\varepsilon \geq0$ is fixed, throughout this section
we drop the subscript $\varepsilon $ in $G_\varepsilon ,k_\varepsilon ,\lambda_{x,\varepsilon }$ and we simply write $G,k,\lambda_x$. Hence we are given the monotone operator \begin{equation*}
G: C(\overline{\Omega},\mathbb{R})\longrightarrow C(\overline{\Omega},\mathbb{R}) \end{equation*}
mapping $f\in C(\overline{\Omega},\mathbb{R})$ into the unique function $G(f)\in
C(\overline{\Omega},\mathbb{R})$
satisfying
\begin{gather}\label{DIRIEQGreenBIS}
\left\{
\begin{array}{ll}
\mathcal{L}_\varepsilon (G(f))=-f & \hbox{on $\Omega$\quad (in the weak sense of distributions),} \\
G(f)=0 & \hbox{on $\partial\Omega$\quad (point-wise).}
\end{array}
\right.
\end{gather}
We also know that the (Riesz) representation \begin{gather}\label{DIRIEQkernelBIS}
G(f)(x)=\int_{\overline{\Omega}} f(y)\,\mathrm{d}\lambda_x(y)\quad \text{for
every $f \in C(\overline{\Omega},\mathbb{R})$ and every $x\in\overline{\Omega}$}
\end{gather}
holds true, with a unique Radon measure
$\lambda_x$ defined on $\overline{\Omega}$ (which we called
the Green measure related to $\mathcal{L}_\varepsilon $, $\Omega$ and $x$).
Finally, we set $\mathrm{d} \nu(x):=V(x)\,\mathrm{d} x$
and we observe that (as in \eqref{autoagg})
\begin{equation}\label{autoaggBIS}
\int \varphi\,\mathcal{L}_\varepsilon \psi\,\mathrm{d}\nu=\int \psi\,\mathcal{L}_\varepsilon \varphi\,\mathrm{d}\nu,\quad\text{
for every $\varphi,\psi\in C_0^\infty(\mathbb{R}^N,\mathbb{R})$.}
\end{equation}
\textsc{Step I.}
We fix $x\in \Omega$.
We begin by proving that $\lambda_x$
is absolutely continuous with respect to the Lebesgue measure on $\overline{\Omega}$.
To this end, let $\varphi\in C_0^\infty(\Omega,\mathbb{R})$;
by \eqref{DIRIEQGreenBIS} it is clear that $G(\mathcal{L}_\varepsilon \varphi)=-\varphi$, so that (see
\eqref{DIRIEQkernelBIS}) \begin{gather*}
-\varphi(x)=\int_{\overline{\Omega}} \mathcal{L}_\varepsilon \varphi(y)\,\mathrm{d}\lambda_x(y),\quad \text{for
every $\varphi \in C_0^\infty(\overline{\Omega},\mathbb{R})$.}
\end{gather*}
If we consider $\lambda_x$ as a distribution on $\Omega$ in the standard way, this identity
boils down to \begin{equation}\label{DIRIIII}
(\mathcal{L}_\varepsilon )^* \lambda_x=-\textrm{Dir}_x\quad \text{in $\mathcal{D}'(\Omega)$}, \end{equation}
where $\textrm{Dir}_x$ denotes the Dirac mass at $x$, and $(\mathcal{L}_\varepsilon )^*$
is the classical adjoint operator of $\mathcal{L}_\varepsilon $. It is worth observing
that, in general, $(\mathcal{L}_\varepsilon )^*$ is neither equal to $\mathcal{L}_\varepsilon $ nor of the form $\widetilde{\mathcal{L}}-\varepsilon $
for any divergence-form operator $\widetilde{\mathcal{L}}$ as in \eqref{mainLL}.
However, the following crucial property of $(\mathcal{L}_\varepsilon )^*$ is fulfilled: \begin{remark}\label{rem.hyopoLadj}
\emph{The operator $(\mathcal{L}_\varepsilon )^*$ is hypoelliptic on every open subset of $\mathbb{R}^N$.}
Indeed, let $U\subseteq W$ be open sets and let $u\in \mathcal{D}'(W)$ be
such that $(\mathcal{L}_\varepsilon )^* u=h$ in $\mathcal{D}'(U)$, where $h\in C^\infty(U,\mathbb{R})$. This gives the following chain of identities
(here $\psi\in C_0^\infty(U,\mathbb{R})$ is arbitrary) \begin{align*}
\int h\,\psi&= \langle u,\mathcal{L}_\varepsilon \psi\rangle= \langle u,\mathcal{L}\psi-\varepsilon \psi\rangle
\stackrel{\eqref{autoaggBISSS}}{=}
\Big\langle u,\frac{\mathcal{L}^*(V\psi)}{V}-\varepsilon \psi\Big\rangle\\ & =
\Big\langle \frac{u}{V},\mathcal{L}^*(V\psi)-\varepsilon \psi\,V\Big\rangle =\Big\langle \frac{u}{V},(\mathcal{L}_\varepsilon )^*(V\psi)\Big\rangle. \end{align*}
If we write
$\int h\,\psi=\int \frac{h}{V}\,(\psi\,V)$, and if we observe that
$C_0^\infty(U,\mathbb{R})=\{\psi\,V\,:\,\psi\in C_0^\infty(U,\mathbb{R})\}$,
the above computation shows that $\mathcal{L}_\varepsilon (u/V)=h/V$ in $\mathcal{D}'(U)$.
The hypoellipticity of $\mathcal{L}_\varepsilon $ now gives $u/V\in C^\infty(U,\mathbb{R})$ whence
$u\in C^\infty(U,\mathbb{R})$, as $V$ is smooth and positive.\qed \end{remark}
Identity \eqref{DIRIIII} gives in particular
$(\mathcal{L}_\varepsilon )^* \lambda_x= 0$ in $\mathcal{D}'(\Omega\setminus\{x\})$;
thanks to Remark \ref{rem.hyopoLadj},
this ensures the existence of $g_x\in C^\infty(\Omega\setminus\{x\},\mathbb{R})$
such that the distribution $\lambda_x$ restricted to
$\Omega\setminus\{x\}$ is the function-type distribution associated
with the function $g_x$; equivalently \begin{equation}\label{cerntarel16volte}
\int \varphi(y)\,\mathrm{d}\lambda_x(y) = \int \varphi(y)\,g_x(y)\,\mathrm{d} y,\quad \text{for every $\varphi\in C_0^\infty (\Omega\setminus\{x\},\mathbb{R})$.} \end{equation}
Clearly $g_x\geq 0$ on $\Omega\setminus\{x\}$
and $(\mathcal{L}_\varepsilon )^* g_x=0$ in $\Omega\setminus\{x\}$.
This temporarily proves that $\lambda_x$ coincides with
$g_x(y)\,\mathrm{d} y$ on $\Omega\setminus\{x\}$. We claim that this is also true
throughout $\Omega$. This will follow if we show that $C:=\lambda_x(\{x\})=0$.
Clearly, by the definition of $C$, on $\Omega$ we have
$$\lambda_x=C\,\textrm{Dir}_x+(\lambda_x)|_{\Omega\setminus\{x\}}=
C\,\textrm{Dir}_x+g_x(y)\,\mathrm{d} y.$$
Treating this as an identity between distributions on $\Omega$,
we apply the operator $(\mathcal{L}_\varepsilon )^*$ to get
$$C\,(\mathcal{L}_\varepsilon )^*\textrm{Dir}_x= -\textrm{Dir}_x-
(\mathcal{L}_\varepsilon )^*(g_x(y)\,\mathrm{d} y).$$
Here we used \eqref{DIRIIII}.
We now proceed as follows: \begin{enumerate}[-]
\item we multiply both sides by a $C^\infty$ function $\chi$
compactly supported in $\Omega$ and $\chi\equiv 1$ near $x$;
\item we compute the Fourier transform of the tempered distributions
obtained as above;
\item on the left-hand side we obtain a function-type distribution
associated with function
$$y\mapsto C\,e^{-i\langle x,y\rangle}\Big(-\sum_{i,j} a_{i,j}(x)\,y_iy_j
+\{\text{polynomial in $y$ of degree $\leq 1$}\}\Big),$$
where $(a_{i,j})$ is the principal matrix of $\mathcal{L}$;
\item on the right-hand side
we obtain a function-type distribution
associated with a function which is the sum of $y\mapsto -e^{-i\langle x,y\rangle}$
with a function of the form
$$y \mapsto -\sum_{i,j} \alpha_{i,j}(x,y)\,y_iy_j
+\{\text{polynomial in $y$ of degree $\leq 1$}\},$$
where
$$\alpha_{i,j}(x,y)=-\int g_x(\xi)\,\chi(\xi)\,a_{i,j}(\xi)\,
e^{-i\langle \xi,y\rangle}\,\mathrm{d} \xi.$$
By the Riemann-Lebesgue Theorem one has $\alpha_{i,j}(x,y)\longrightarrow 0$ as $|y|\to \infty$.
This implies that $C=0$, since at least one of the entries of
$(a_{i,j}(x))$ is non-vanishing, due to the (NTD) hypothesis on $\mathcal{L}$. \end{enumerate} We have therefore proved that,
for any $x\in\Omega$, \begin{equation}\label{suomegamiusss}
\text{$\mathrm{d}\lambda_x(y)=g_x(y)\,\mathrm{d} y$ on $\Omega$.} \end{equation}
Since $\lambda_x$ is a finite measure (recalling that $\overline{\Omega}$ is compact),
from \eqref{suomegamiusss} we get $g_x\in L^1(\Omega)$ for every
$x\in \Omega$.
\textsc{Step II.}
We next show that $\lambda_x(\partial \Omega)=0$ for any $x\in\overline{\Omega}$.
For small $\delta>0$, we let $D_\delta$ denote the closed $\delta$-neighborhood
of $\partial\Omega$, that is, the set of points in $\mathbb{R}^N$ having distance from $\partial\Omega$
less than or equal to $\delta$; we then choose a function $F\in C(\mathbb{R}^N,[0,1])$ which is identically
$1$ on $\partial\Omega$ and is supported in the interior of $D_\delta$.
We denote by $f$ the restriction of $F$ to $\overline{\Omega}$.
From \eqref{DIRIEQkernelBIS} we have \begin{gather}\label{DIRIEQkernelBISiii}
0\leq G(f)(x)=\int_{\overline{\Omega}} f(y)\,\mathrm{d}\lambda_x(y)
\leq \int_{\overline{\Omega}} \mathrm{d}\lambda_x(y)=G(1)(x), \quad \text{for every $x\in \overline{\Omega}$.}
\end{gather}
For any $x\in \overline{\Omega}$ we have \begin{align*}
\lambda_x(\partial\Omega)&=\int_{\partial\Omega}\mathrm{d}\lambda_x(y)=
\int_{\partial\Omega}f(y)\,\mathrm{d}\lambda_x(y)\leq \int_{\overline{\Omega}} f(y)\,\mathrm{d}\lambda_x(y)=
G(f)(x)\\
&\leq \sup_{\overline{\Omega}} G(f)
=\max\bigg\{\sup_{\overline{\Omega}\cap {D_\delta}} G(f),
\sup_{\overline{\Omega}\setminus D_\delta} G(f)\bigg\}
=:\max\{\textrm{I},\textrm{II}\}. \end{align*}
We claim that $\textrm{I}$ and $\textrm{II}$ in the above right-hand side are
bounded from above by $\sup_{\overline{\Omega}\cap D_\delta} G(1)$. This is true of $\textrm{I}$, due
to \eqref{DIRIEQkernelBISiii}; as for $\textrm{II}$ we invoke
the last assertion in Remark \ref{PMFancheconc2} applied to: \begin{enumerate}[-]
\item the hypoelliptic operator $\mathcal{L}_\varepsilon =\mathcal{L}-\varepsilon $,
\item the bounded open set $\Omega_1:=\overline{\Omega}\setminus D_\delta$,
\item the nonnegative function $G(f)$, which satisfies $\mathcal{L}_\varepsilon G(f)=-f=0$ on $\Omega_1$ both weakly
and strongly due to the hypoellipticity of $\mathcal{L}_\varepsilon $. \end{enumerate}
The mentioned Remark \ref{PMFancheconc2} then ensures that the values of $G(f)$ on
$\overline{\Omega}\setminus D_\delta$ are bounded from above by the supremum
of $G(f)$ on the boundary of this set, which is contained in $\overline{\Omega}\cap D_\delta$; hence $\textrm{II}\leq \textrm{I}$.
Summing up, \begin{align*}
\lambda_x(\partial\Omega)&\leq
\max\{\textrm{I},\textrm{II}\}\leq
\sup_{\overline{\Omega}\cap D_\delta} G(1). \end{align*}
As $\delta$ goes to $0$, the right-hand side tends to
$\sup_{\partial \Omega} G(1)=0$ by \eqref{DIRIEQGreenBIS}. This gives the desired
$\lambda_x(\partial\Omega)=0$, for any $x\in \overline{\Omega}$.
By collecting together \eqref{suomegamiusss} and $\lambda_x(\partial\Omega)=0$, we infer that (for
every $f \in C(\overline{\Omega},\mathbb{R})$ and $x\in {\Omega}$) \begin{gather*}
G(f)(x)\stackrel{\eqref{DIRIEQkernelBIS}}{=}
\int_{\overline{\Omega}} f(y)\,\mathrm{d}\lambda_x(y)
=
\int_{\Omega} f(y)\,\mathrm{d}\lambda_x(y)
\stackrel{\eqref{suomegamiusss}}{=}
\int_{\Omega} f(y)\,g_x(y)\,\mathrm{d} y.
\end{gather*}
This proves the identity \begin{gather}\label{DIRIEQkernelBISTERQ}
G(f)(x)=\int_{\Omega} f(y)\,g_x(y)\,\mathrm{d} y,\quad \text{for
every $f \in C(\overline{\Omega},\mathbb{R})$ and every $x\in \Omega$.} \end{gather}
If $\varphi\in C_0^\infty(\Omega,\mathbb{R})$, since we know that $G(\mathcal{L}_\varepsilon \varphi)=-\varphi$, we get \begin{gather}\label{DIRIEQkernelBISTERQuater}
-\varphi(x)=\int_{\Omega} \mathcal{L}_\varepsilon \varphi(y)\,g_x(y)\,\mathrm{d} y,\quad \text{for
every $x\in \Omega$.} \end{gather}
This is equivalent to \begin{gather}\label{DIRIEQkernelBISTERQuater55}
(\mathcal{L}_\varepsilon )^* g_x=-\text{Dir}_x\quad \text{for every $x\in\Omega$.} \end{gather}
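Indeed (a minimal sketch of this equivalence): for every $\varphi\in C_0^\infty(\Omega,\mathbb{R})$, by the very definition of the classical adjoint,
\begin{equation*}
\big\langle (\mathcal{L}_\varepsilon )^* g_x,\varphi\big\rangle=
\int_\Omega g_x(y)\,\mathcal{L}_\varepsilon \varphi(y)\,\mathrm{d} y
\stackrel{\eqref{DIRIEQkernelBISTERQuater}}{=}-\varphi(x)=
\langle -\textrm{Dir}_x,\varphi\rangle.
\end{equation*}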
\textsc{Step III.}
If $g_x$ is as in Step I,
we are ready to set \begin{equation*}
g:\Omega\times\Omega \longrightarrow [0,\infty],\qquad
g(x,y):=\left\{
\begin{array}{ll}
g_x(y) & \hbox{if $x\neq y$} \\
\infty & \hbox{if $x=y$.}
\end{array}
\right. \end{equation*}
Hence the representation
\eqref{DIRIEQkernelBISTERQ}
becomes \begin{gather}\label{quasimoltk}
G(f)(x)=\int_{\Omega} f(y)\,g(x,y)\,\mathrm{d} y,\quad \text{for
every $f \in C(\overline{\Omega},\mathbb{R})$ and every $x\in \Omega$.} \end{gather}
We aim to prove that $g$ is smooth outside the diagonal of $\Omega\times\Omega$. \begin{remark}\label{rem.topolocinfyt}
Let $O$ be any open subset of $\mathbb{R}^N$.
\emph{The hypoellipticity of a general PDO $L$ as in \eqref{Lgenerale} ensures the equality of the topologies
on $\mathcal{H}_{L}(O)$ inherited from the Fréchet spaces $C^\infty(O)$
and $L^1_{\mathrm{loc}}(O)$.}
Indeed, let $\mathcal{X}$ and $\mathcal{Y}$ denote the space
$\mathcal{H}_{L}(O)$ endowed, respectively, with the topologies inherited from $C^\infty(O)$
and from $L^1_{\mathrm{loc}}(O)$.
Then $\mathcal{X}$ and $\mathcal{Y}$ are Fréchet spaces: indeed, if a sequence $u_n\in \mathcal{H}_{L}(O)$
converges to some $u$ uniformly on the compact sets of $O$ or, more generally,
in $L^1_{\textrm{loc}}(O)$, then
$$0=\int u_n\,L^*\varphi \xrightarrow{n\to\infty} \int u\, L^*\varphi,\qquad
\forall\,\,\varphi \in C_0^\infty(O,\mathbb{R}),$$
so that $Lu=0$ in the weak sense of distributions; by the hypoellipticity of $L$, $u$ belongs to
$\mathcal{H}_{L}(O)$, which is therefore a closed subspace of $C^\infty(O)$ and of $L^1_{\mathrm{loc}}(O)$.
Now, the identity map $\iota:\mathcal{X}\to \mathcal{Y}$ is
trivially linear, bijective and continuous, whence, by
the Open Mapping Theorem, $\iota$ is a homeomorphism, and the mentioned topologies coincide.\qed \end{remark}
We next resume our main proof. The set $\{g_x\}_{x\in\Omega}$ is
bounded in $L^1(\Omega)$, since
$$0\leq \int_\Omega g_x(y)\,\mathrm{d} y= G(1)(x)\leq \max_{\overline{\Omega}} G(1).$$
A fortiori, the set $\{g_x\}_{x\in\Omega}$ is also bounded
in the topological vector space $L^1_{\mathrm{loc}}(\Omega)$.
We next fix two disjoint open sets $U,W$
with closures contained in $\Omega$.
The family of the restrictions
$$\mathcal{G}:=\Big\{(g_x)\big|_U\Big\}_{x\in W} $$
is contained in the space of the $(\mathcal{L}_\varepsilon )^*$-harmonic functions
on $U$.
By Remark \ref{rem.topolocinfyt} (which applies to $(\mathcal{L}_\varepsilon )^*$ thanks to Remark \ref{rem.hyopoLadj}),
the set $\mathcal{G}$ is also bounded in the
topological vector space
$$\mathcal{H}_{\displaystyle (\mathcal{L}_\varepsilon )^*}(U),\quad \text{endowed with the $C^\infty$-topology.}$$
This means that, for every
compact set $K\subset U$ and for every $m\in \mathbb{N}$, there exists
a constant $C(K,m)>0$ such that \begin{equation}\label{staellatoplo}
\sup_{|\alpha|\leq m}\,\,\sup_{y\in K}
\bigg| \Big(\frac{\partial}{\partial y}\Big)^\alpha g(x,y) \bigg| \leq C(K,m), \quad \text{uniformly for $x\in W$.} \end{equation}
Following Bony \cite[Section 6]{Bony}, we introduce the operator
$F$ transforming any distribution $T$ compactly supported in $U$ into the function
on $W$ defined by
$$F(T):W\longrightarrow \mathbb{R},\qquad F(T)(x):=\langle T, g_x\rangle\quad (x\in W). $$
The definition is well-posed since $g_x\in C^\infty (U,\mathbb{R})$ (and $T$ is compactly
supported in $U$). We claim that $F(T)\in C^\infty(W,\mathbb{R})$. Once this is proved,
by the Schwartz Kernel Theorem (see e.g., \cite[Section 11]{Dieudonne} or
\cite[Chapter 50]{Treves}),
we can conclude that $g(x,y)$ is smooth on $W\times U$.
By the arbitrariness of the disjoint open sets $U,W$ this proves that
$g(x,y)$ is smooth out of the diagonal of $\Omega\times \Omega$, as desired.
As for the proof of the claimed $F(T)\in C^\infty(W,\mathbb{R})$, we can take (say, by some appropriate convolution)
a sequence of continuous functions $f_n$, supported in $U$, converging
to $T$ in the weak sense of distributions; due to the compactness of the supports (of the $f_n$ and of $T$),
$$\lim_{n\to \infty}\int_U f_n\,\varphi=\langle T,\varphi\rangle,\quad \text{for every $\varphi\in C^\infty(U,\mathbb{R})$.} $$
We are hence entitled to take $\varphi=g_x$ (for any fixed $x\in W$). From
\eqref{quasimoltk} we get \begin{equation}\label{numerata}
\lim_{n\to \infty} G(f_n)(x)=\langle T, g_x\rangle=F(T)(x),\quad \text{for any $x\in W$.} \end{equation}
We now prove that $F(T)\in L^\infty(W)$; this follows from the next calculation (here
$C>0$ and $m\in \mathbb{N}$ are constants depending on $T$, and $K_T$ is a compact subset of $U$ containing the support of $T$) \begin{align*}
\|F(T)\|_{L^\infty}=\sup_{x\in W} |\langle T, g_x\rangle|\leq
\sup_{x\in W} C\sum_{|\alpha|\leq m}
\sup_{y\in K_T}\bigg| \Big(\frac{\partial}{\partial y}\Big)^\alpha g(x,y) \bigg| \stackrel{\eqref{staellatoplo}}{\leq } \widetilde{C}(K_T,m)<\infty. \end{align*}
We finally prove that $\mathcal{L}_\varepsilon (F(T))=0$ in the weak sense of distributions on $W$;
by the hypoellipticity of $\mathcal{L}_\varepsilon $ this will yield the smoothness of $F(T)$ on $W$.
We aim to show that,
$$\int_W F(T)(x)\,(\mathcal{L}_\varepsilon )^*\varphi(x)\,\mathrm{d} x=0\qquad \text{for any $\varphi\in C_0^\infty (W)$}. $$
Now, the left-hand side is (by \eqref{numerata})
$$\int \lim_{n\to \infty} G(f_n)(x)\,(\mathcal{L}_\varepsilon )^*\varphi(x)\,\mathrm{d} x.$$
If a dominated convergence can be applied, this is equal to
$$\lim_{n\to \infty} \int_W G(f_n)(x)\,(\mathcal{L}_\varepsilon )^*\varphi(x)\,\mathrm{d} x
\stackrel{\eqref{DIRIEQGreenBIS}}{=}
-\lim_{n\to \infty} \int_W f_n(x)\,\varphi(x)\,\mathrm{d} x=0, $$
the last equality descending from the fact that the $f_n$
are supported in $U$ for every $n$.
We are then left with showing that the dominated convergence
is fulfilled: this is a consequence of \eqref{staellatoplo}, of the boundedness of $F(T)$ on $W$, and of
the fact that the convergence in \eqref{numerata} is indeed uniform w.r.t.\,$x\in W$
(a general result of distribution theory: a sequence of distributions, supported in a fixed compact set and converging in the weak sense of
distributions, converges uniformly on the bounded subsets of $C^\infty$; by \eqref{staellatoplo}, the family $\{g_x\}_{x\in W}$ is such a bounded subset of $C^\infty(U)$).
\textsc{Step IV.}
We are finally ready to introduce our kernel \begin{equation}\label{kkkkkkkkk}
k:\Omega\times\Omega \longrightarrow [0,\infty],\qquad
k(x,y):=\frac{g(x,y)}{V(y)}. \end{equation}
Clearly, from \eqref{quasimoltk} and \eqref{autoagg} we immediately have \begin{gather}\label{quasimoltkkkkkk}
G(f)(x)=\int_{\Omega} f(y)\,k(x,y)\,\mathrm{d} \nu(y),\quad \text{for
every $f \in C(\overline{\Omega},\mathbb{R})$ and every $x\in \Omega$.} \end{gather}
This gives the representation \eqref{gprororfondam} whilst
\eqref{gprororfondam2PROPR2}
follows from \eqref{DIRIEQkernelBISTERQuater}.
The integrability of $k(x,\cdot)$ in $\Omega$ is a consequence of
$g_x\in L^1(\Omega)$ (and the positivity of the continuous function $V$ on $\mathbb{R}^N$).
Moreover, $k$ is smooth on $\Omega\times \Omega$ deprived of the diagonal by Step III.
Also, the nonnegative function $k$ is integrable on $\Omega\times \Omega$ as this computation shows:
$$0\leq \int_{\Omega\times \Omega} k(x,y)\, \mathrm{d} x \mathrm{d} y=
\int_{\Omega} \Big(\int_{\Omega} \frac{1}{V(y)}\,k(x,y)\,\mathrm{d}\nu(y)\Big) \mathrm{d} x
\stackrel{\eqref{quasimoltkkkkkk}}{=} \int_{\Omega} G(1/V)(x)\,\mathrm{d} x<\infty, $$
the last inequality following from the continuity of $G(1/V)$ on the compact set
$\overline{\Omega}$.
For fixed $x\in \Omega$, the $\mathcal{L}_\varepsilon $-harmonicity of the function $k(x,\cdot)$
in $\Omega\setminus\{x\}$ is a consequence of the following computation
$$0\stackrel{\eqref{DIRIEQkernelBISTERQuater55}}{=}(\mathcal{L}_\varepsilon )^* g_x
\stackrel{\eqref{autoaggBISSS}}{=} V\,\mathcal{L}_\varepsilon \Big(\frac{g_x}{V}\Big)
\stackrel{\eqref{kkkkkkkkk}}{=}
V\,\mathcal{L}_\varepsilon (k(x,\cdot)).$$
The fact that $V$ is positive then gives
$\mathcal{L}_\varepsilon (k(x,\cdot))=0$ in $\Omega\setminus\{x\}$.
From the SMP for $\mathcal{L}_\varepsilon =\mathcal{L}-\varepsilon $ in Remark \ref{PMFancheconc},
we deduce that the nonnegative function
$k(x,\cdot)$ (which is $\mathcal{L}_\varepsilon $-harmonic in $\Omega\setminus\{x\}$)
cannot attain the (minimal) value $0$; therefore
$k(x,\cdot)>0$ on the connected open set $\Omega\setminus\{x\}$.
A crucial step consists in proving the symmetry
property \eqref{gprororfondam2}.
We take any nonnegative $\varphi\in C_0^\infty(\Omega,\mathbb{R})$ and
we set (note the reverse order of $x$ and $y$, if compared to $G(\varphi)$)
$$\Phi(x)=\int_\Omega \varphi(y)\,k(y,x)\,\mathrm{d}\nu(y),\qquad x\in\Omega. $$
We claim that $\Phi\geq G(\varphi)$ on $\Omega$; once the claim is proved,
from \eqref{quasimoltkkkkkk}
we infer that
$$\int_{\Omega} \varphi(y)\,k(x,y)\,\mathrm{d} \nu(y) \leq \int_\Omega \varphi(y)\,k(y,x)\,\mathrm{d}\nu(y),\qquad x\in\Omega. $$
The arbitrariness of $\varphi$ will then give $k(x,y)\leq k(y,x)$
(recalling that $\mathrm{d}\nu=V(y)\,\mathrm{d} y$ with positive $V$) for every $y\in\Omega$;
since $x,y\in\Omega$ are arbitrary, we get $k(x,y)=k(y,x)$
on $\Omega\times \Omega$. We prove the claim. We observe that
$\Phi$ is continuous on $\Omega$ and that $\mathcal{L}_\varepsilon \Phi=-\varphi$
in $\mathcal{D}'(\Omega)$, as the following computation shows
($\psi\in C_0^\infty(\Omega,\mathbb{R})$ is arbitrary): \begin{align*}
&\int_\Omega \Phi(x)\,(\mathcal{L}_\varepsilon )^*\psi(x)\,\mathrm{d} x=
\int_\Omega \varphi(y)\,\Big(
\int_\Omega
k(y,x)\,
\,(\mathcal{L}_\varepsilon )^*\psi(x)\,\mathrm{d} x
\Big)
\mathrm{d}\nu(y)\\
&\,\,\,\,=
\int_\Omega \varphi(y)\,\Big(
\int_\Omega
k(y,x)\,
\,\frac{(\mathcal{L}_\varepsilon )^*\psi(x)}{V(x)}\,\mathrm{d} \nu(x)
\Big)
\mathrm{d}\nu(y)\\
&\stackrel{\eqref{autoaggBISSS}}{=}
\int_\Omega \varphi(y)\,\Big(
\int_\Omega
k(y,x)\,
\,\mathcal{L}_\varepsilon \Big(\frac{\psi(x)}{V(x)}\Big)\,\mathrm{d} \nu(x)
\Big)
\mathrm{d}\nu(y)\\
&\stackrel{\eqref{gprororfondam2PROPR2}}{=}
- \int_\Omega \varphi(y)\,\frac{\psi(y)}{V(y)}\,
\mathrm{d}\nu(y)=
- \int_\Omega
\varphi(y)\,\psi(y)\,\mathrm{d} y. \end{align*}
From the hypoellipticity of $\mathcal{L}_\varepsilon $ we get
$\Phi\in C^\infty(\Omega,\mathbb{R})$ and
$\mathcal{L}_\varepsilon \Phi =-\varphi$ point-wise.
We now apply the WMP in Remark \ref{PMFancheconc2} to
the operator $\mathcal{L}_\varepsilon =\mathcal{L}-\varepsilon $
and to the function
$G(\varphi)-\Phi$: this function is smooth and $\mathcal{L}_\varepsilon $-harmonic on $\Omega$,
and $G(\varphi)-\Phi\leq G(\varphi)$ on $\Omega$ (since $\Phi$ is nonnegative), so that
$$\limsup_{x\to x_0}(G(\varphi)-\Phi)(x)\leq \limsup_{x\to x_0} G(\varphi)(x)=0\quad \text{for every $x_0\in\partial\Omega$}.$$
Therefore $G(\varphi)-\Phi\leq 0$ on $\Omega$ as claimed.
We finally prove \eqref{gprororfondam2PROPR23}. Due to the symmetry property of $k$,
\eqref{gprororfondam2PROPR23} will follow if we show that, given $x_0\in \Omega$
and $y_0\in\partial\Omega$, one has \begin{equation}\label{invertitag}
\lim_{n\to \infty} k(y_n,x_0)=0, \end{equation}
for every sequence $y_n$ in $\Omega$ converging to $y_0$.
To this end, we fix an open set $\Omega'$ containing $x_0$ and with closure contained in $\Omega$, and
it is non-restrictive to suppose that $y_n\notin \Omega'$ for every $n$.
The functions
$$k_n:\Omega'\longrightarrow \mathbb{R},\qquad k_n(x):=k(y_n,x),\quad x\in\Omega'$$
are smooth and $\mathcal{L}_\varepsilon $-harmonic in $\Omega'$.
We also have $k_n\longrightarrow 0$ in $L^1(\Omega')$, as it follows from \begin{align*}
0&\leq \int_{\Omega'} k_n(x)\,\mathrm{d} x\leq \int_\Omega k(y_n,x)\,\mathrm{d} x=
\int_\Omega \frac{g(y_n,x)}{V(x)}\,\mathrm{d} x\\
& \leq \sup_{\Omega} \frac{1}{V}\,
\int_\Omega g(y_n,x)\,\mathrm{d} x=
\sup_{\Omega} \frac{1}{V}\, G(1)(y_n)\xrightarrow{n\to\infty} 0. \end{align*}
From Remark \ref{rem.topolocinfyt} we get that
$k_n\longrightarrow 0$ in the Fréchet space $\mathcal{H}_{\mathcal{L}_\varepsilon }(\Omega')$
with the $C^\infty$-topology, so that
$k_n\longrightarrow 0$ uniformly on the compact sets of $\Omega'$ and in particular
point-wise on $\Omega'$.\qed
\section{The Harnack inequality}\label{sec:Harnackvera}
We begin by proving the next crucial lemma. This is the first time that, broadly
speaking, the PDOs $\mathcal{L}$ and the perturbed $\mathcal{L}-\varepsilon $ clearly interact. \begin{lemma}\label{lem.crustimabasso}
Let $\mathcal{L}$ be as in \eqref{mainLL} and let it satisfy \emph{(NTD)} and \emph{(HY)$_\varepsilon $}.
Let $\Omega$ be an open set in $\mathbb{R}^N$ as in the thesis
of Lemma \ref{th.localDiri}, and let $\Omega'$ be an open set containing
$\overline{\Omega}$.
Finally, we denote by $k_\varepsilon $ the Green kernel related to $\mathcal{L}_\varepsilon $ and to the set $\Omega$
(as in Theorem \ref{th.greeniani}).
Then we have the estimate \begin{equation}\label{crustimabasso.EQ1}
u(x)\geq \varepsilon \int_{\Omega} u(y)\,k_\varepsilon (x,y)\,\mathrm{d}\nu(y),\quad \forall\,x\in\Omega, \end{equation}
holding true for every smooth nonnegative $\mathcal{L}$-harmonic function $u$ in $\Omega'$. \end{lemma} \begin{proof}
We consider the function $v(x)=\int_{\Omega} u(y)\,k_\varepsilon (x,y)\,\mathrm{d}\nu(y)$ on
$\Omega$. From \eqref{gprororfondam} (and the definition of Green operator)
we know that $v=G_\varepsilon (u)$, where $G_\varepsilon $ is the Green operator related to $\mathcal{L}_\varepsilon $
(and to the open set $\Omega$); moreover, since $u$ is smooth (by assumption) on $\overline{\Omega}$,
we know from Lemma \ref{th.localDiri} (and the hypoellipticity of $\mathcal{L}_\varepsilon $)
that $v\in C^\infty(\Omega)\cap C(\overline{\Omega})$ is the solution of
\begin{gather}\label{DIRIEQHARNACK}
\left\{
\begin{array}{ll}
\mathcal{L}_\varepsilon v=-u & \hbox{on $\Omega$}, \\
v=0 & \hbox{on $\partial\Omega$.}
\end{array}
\right.
\end{gather}
This gives $\mathcal{L}_\varepsilon (\varepsilon \,v-u)=-\varepsilon \,u-(\mathcal{L}-\varepsilon )u=-\varepsilon \,u+\varepsilon \,u=0$ on $\Omega$;
moreover, on $\partial\Omega$, $\varepsilon \,v-u=-u\leq 0$, by the nonnegativity of $u$.
By the WMP in Remark \ref{PMFancheconc2}, we get $\varepsilon \,v-u\leq 0$
on $\Omega$ which is equivalent to
\eqref{crustimabasso.EQ1}. \end{proof}
We are ready for the proof of the Weak Harnack Inequality (for higher order
derivatives).
\begin{proof}[Proof (of the Weak Harnack Inequality for derivatives, Theorem \ref{lem.crustimabassoTHEO})]
We distinguish two ca\-ses: $y_0\notin K$ and $y_0\in K$.
The second case can be reduced to the former. Indeed,
let us assume we have already proved the theorem
in the former case, and let $y_0\in K$.
If we take any $y_0'\in O\setminus K$, and we
consider the inequality
$$ u(y_0') \leq C'\,u(y_0),$$
resulting from
\eqref{HarnackdeboleEQ1} by considering $m=0$ and the compact set $\{y_0'\}$, we get \begin{equation*}
\sum_{|\alpha|\leq m}
\sup_{x\in K} \Big| \frac{\partial^\alpha u(x)}{\partial x^\alpha}\Big|
\stackrel{\eqref{HarnackdeboleEQ1}}{\leq} C\,u(y_0')
\leq C\,C'\,u(y_0). \end{equation*}
We are therefore entitled to assume that $y_0\notin K$.
By means of a classical argument (using a chain
of suitable small open sets $\{\Omega_n\}_{n=1}^p$
covering a connected compact set
containing $K\cup\{y_0\}$), it is not restrictive to assume that $K\cup\{y_0\}\subset \Omega\subset
\overline{\Omega}\subset O$, where $\Omega$ is
one of the basis open sets constructed in Lemma
\ref{th.localDiri}.
Let $x_0\in K$ be arbitrarily fixed.
The function $k_\varepsilon (x_0,\cdot)$ (the Green kernel related to $\mathcal{L}_\varepsilon $ and $\Omega$)
is \emph{strictly positive} in $\Omega\setminus\{x_0\}$ (this is a consequence
of the SMP applied to the $\mathcal{L}_\varepsilon $-harmonic function $k_\varepsilon (x_0,\cdot)$; see
Theorem \ref{th.greeniani}). In particular, since $y_0\notin K$, we infer that
$k_\varepsilon (x_0,y_0)>0$. Hence, there exist a neighborhood $W$ of $x_0$
(contained in $\Omega$)
and a constant $\mathbf{c}=\mathbf{c}(\varepsilon ,y_0,x_0)>0$ such that \begin{equation}\label{HarnackdeboleEQ2}
\inf_{z\in W} k_\varepsilon (z,y_0)\geq \mathbf{c}>0. \end{equation}
Our assumptions allow us to apply
Lemma \ref{lem.crustimabasso}: hence, for every nonnegative
$u\in \mathcal{H}_\mathcal{L}(O)$, we have the following chain of inequalities \begin{align*}
u(y_0) &\stackrel{\eqref{crustimabasso.EQ1}}{\geq}
\varepsilon \int_{\Omega} u(z)\,k_\varepsilon (y_0,z)\,\mathrm{d}\nu(z)
\geq
\varepsilon \int_{W} u(z)\,k_\varepsilon (y_0,z)\,\mathrm{d}\nu(z)\\
&\stackrel{\eqref{gprororfondam2}}{=}
\varepsilon \int_{W} u(z)\,k_\varepsilon (z,y_0)\,\mathrm{d}\nu(z)
\stackrel{\eqref{HarnackdeboleEQ2}}{\geq}
\varepsilon \,\mathbf{c} \int_{W} u(z)\,\mathrm{d}\nu(z)\geq
\varepsilon \,\mathbf{c}\,\inf_W V\, \int_{W} u(z)\,\mathrm{d} z. \end{align*}
Summing up, for every $x_0\in K$ there exist
a neighborhood $W$ of $x_0$ and
a constant $\mathbf{c}_1>0$ (also depending on $x_0$
but independent of $u$) such that \begin{equation}\label{HarnackdeboleEQ3}
u(y_0) \geq
\mathbf{c}_1\int_{W} u(z)\,\mathrm{d} z, \end{equation}
for every nonnegative $u\in \mathcal{H}_\mathcal{L}(O)$.
Next, from
Remark \ref{rem.topolocinfyt}, we know that the
hypothesis (HY) for $\mathcal{L}$ ensures the equality of the topologies
on $\mathcal{H}_{\mathcal{L}}(W)$ inherited by the Fréchet spaces $C^\infty(W)$
and $L^1_{\mathrm{loc}}(W)$.
In particular, for any chosen open neighborhood
$U$ of $x_0$ (with $\overline{U}\subset W$) there exists
a positive constant $\mathbf{c}_2=\mathbf{c}_2(U,W,m)$ such that
\begin{equation}\label{HarnackdeboleEQ4}
\sum_{|\alpha|\leq m}
\sup_{x\in U} \Big| \frac{\partial^\alpha u(x)}{\partial x^\alpha}\Big|
\leq \mathbf{c}_2\,\int_{W} u(z)\,\mathrm{d} z, \end{equation}
for every nonnegative $u\in \mathcal{H}_\mathcal{L}(O)$.
Gathering together \eqref{HarnackdeboleEQ3}
and \eqref{HarnackdeboleEQ4}, we infer that,
for every $x_0\in K$ there exist
a neighborhood $U$ of $x_0$ and
a constant $\mathbf{c}_3>0$ (again depending on $x_0$
but independent of $u$) such that \begin{equation*}
u(y_0) \geq
\mathbf{c}_3
\sum_{|\alpha|\leq m}
\sup_{x\in U} \Big| \frac{\partial^\alpha u(x)}{\partial x^\alpha}\Big|, \end{equation*}
for every nonnegative $u\in \mathcal{H}_\mathcal{L}(O)$.
The compactness of $K$, combined with a covering argument, allows us to derive
\eqref{HarnackdeboleEQ1} from the latter inequality. \end{proof}
We now present a proof of Theorem \ref{th.mokobre}, crucially based on \cite[Chapter I]{Brelot}. \begin{proof}[Proof (of Theorem \ref{th.mokobre})]
As anticipated in the Introduction, the proof is based in an essential way on the ideas
by Mokobodzki-Brelot in \cite[Chapter I]{Brelot}, ensuring the equivalence
of the Strong Harnack Inequality with a series of properties comprising
the Weak Harnack Inequality, provided some assumptions are fulfilled.
We provide some details in order to orient the reader through these equivalent properties.
We denote by $\mathcal{H}_L$ the harmonic sheaf on $\mathbb{R}^N$ defined by
$O\mapsto \mathcal{H}_L(O)$ (here $O\subseteq\mathbb{R}^N$ is any open set).
Under the assumptions of (Regularity) and (Weak Harnack Inequality),
Brelot proves that (see \cite[pp.22--24]{Brelot}),
for any connected open set $O\subseteq\mathbb{R}^N$, and any
$x_0\in O$, the set
\begin{equation}\label{fix0}
\Phi_{x_0}:=\Big\{h\in \mathcal{H}_L(O)\,:\,h\geq 0,\quad h(x_0)=1\Big\}
\end{equation}
is equicontinuous at $x_0$. The proof of this fact rests on
some results of Functional Analysis related to the family
of the so-called harmonic measures $\{\mu^\Omega_x\}_{x\in \partial\Omega}$
associated with $L$
(and on basic properties of the harmonic sheaf $\mathcal{H}_L$).
Next, we show how to prove \eqref{SHIvera} starting from
the equicontinuity of $\Phi_{x_0}$ at $x_0$. Indeed,
let $K\subset O$, where $K$ is compact and $O$ is an open and connected subset
of $\mathbb{R}^N$. By possibly enlarging $K$, we can suppose that $K$ is connected
as well. Let $u\in \mathcal{H}_L(O)$ be nonnegative.
If $u\equiv 0$ then \eqref{SHIvera} is trivial;
if $u$ is not identically zero then
(from the Weak Harnack Inequality) one has
$u>0$ on $O$.
For every $x\in K$, the equicontinuity of $\Phi_x$ ensures the existence
of $\delta(x)>0$ such that (with the choice $h=u/u(x)$ in \eqref{fix0}) \begin{equation}\label{fix0EQ1}
\frac{1}{2}\,u(x)\leq u(\xi)\leq \frac{3}{2}\,u(x),\quad
\text{for all $\xi\in B_x:=B(x,\delta(x))$.} \end{equation}
From the open cover $\{B_x\}_{x\in K}$ we can extract a finite
subcover $B_{x_1},\ldots,B_{x_p}$ of $K$. It is also non-restrictive
(since $K$ is connected)
to assume that the elements of this subcover are chosen in such a way that
$$B_{x_1}\cap B_{x_2}\neq \varnothing,\quad
(B_{x_1}\cup B_{x_2})\cap B_{x_3}\neq \varnothing,\quad\ldots\quad
(B_{x_1}\cup\cdots \cup B_{x_{p-1}})\cap B_{x_p}\neq \varnothing. $$
From \eqref{fix0EQ1} it follows
\eqref{SHIvera} with $K$ replaced by $B_{x_1}$
(with $M=3$); since $B_{x_1}$ intersects $B_{x_2}$, one can use
again \eqref{fix0EQ1} in order to prove
\eqref{SHIvera} with $K$ replaced by $B_{x_1}\cup B_{x_2}$
(with $M=3^2$); by proceeding in an inductive way, one can prove
\eqref{SHIvera} with $K$ replaced by $B_{x_1}\cup\cdots\cup B_{x_p}$ (and $M=3^p$), and this finally proves
\eqref{SHIvera}, since $B_{x_1}\cup\cdots\cup B_{x_p}$ covers $K$. \end{proof} \begin{remark}\label{rem.precisiamo}
Following Brelot \cite[pp.14--17]{Brelot},
and assuming that the axiom (Regularity) in
Theorem \ref{th.mokobre} holds true,
the axiom (Weak Harnack Inequality) can be replaced by any of the following
equivalent assumptions (see also Constantinescu and Cornea \cite{ConstantinescuCornea}): \begin{description}
\item[(Brelot Axiom)]
For every connected open set $O\subseteq \mathbb{R}^N$,
if $\mathcal{F}$ is an up-directed\footnote{$\mathcal{F}$
is said to be up-directed if for any $u,v\in \mathcal{F}$ there exists $w\in \mathcal{F}$
such that $\max\{u,v\}\leq w$.}
family of $L$-harmonic functions in $O$,
then $\sup\limits_{u\in \mathcal{F}}u$ is either $+\infty$
or it is $L$-harmonic in $O$.
\item[(Harnack Principle)]
For every connected open set $O\subseteq \mathbb{R}^N$,
if $\{u_n\}_n$ is a non-de\-crea\-sing sequence
of $L$-harmonic functions in $O$,
then $\lim\limits_{n\to \infty} u_n$ is either $+\infty$
or it is an $L$-harmonic function in $O$. \end{description} \end{remark}
\noindent We are ready to derive the main result of this section: thanks to all our preliminary results,
the proof now takes only a few lines. \begin{proof}[Proof (of Harnack Inequality, Theorem \ref{lem.crustimabassoTHEOFORTE})]
Due to Theorem \ref{th.mokobre},
it suffices to prove that our operator $\mathcal{L}$ as in the statement of Theorem \ref{lem.crustimabassoTHEOFORTE}
satisfies the properties named (Regularity) and (Weak Harnack Inequality)
in Theorem \ref{th.mokobre}:
the former is a consequence of Lemma \ref{th.localDiri}
(with $f=0$), whilst the latter
follows from Theorem \ref{lem.crustimabassoTHEO}. \end{proof}
\section{Appendix: The Dirichlet problem for $\mathcal{L}$}\label{sec:Dirichlet}
The aim of this appendix is to prove Lemma \ref{th.localDiri}
under the more general form given in Theorem \ref{th.localDiriMIGLIO}:
our slightly more general framework (we deal with general
hypoelliptic operators which are non-totally degenerate at every point),
compared to the one considered by Bony in \cite{Bony} (where
H\"ormander operators are concerned), does not present many more difficulties than the setting of \cite[Section 5]{Bony},
and the proof is given mainly for the sake of completeness.
\begin{theorem}\label{th.localDiriMIGLIO}
Suppose that $L$ is an operator on $\mathbb{R}^N$ of the form \begin{equation}\label{mainLLMIGLIO}
L=\sum_{i,j=1}^N\alpha_{i,j}\frac{\partial^2}{\partial x_i\partial x_j}+
\sum_{i=1}^N \beta_i\frac{\partial}{\partial x_i}+\gamma, \end{equation}
with $\alpha_{i,j},\beta_i,\gamma\in C^\infty (\mathbb{R}^N,\mathbb{R})$,
with $(\alpha_{i,j})$ symmetric and
positive semi-definite. We assume that $L$ is non-totally
degenerate at every $x\in\mathbb{R}^N$ and that $L$ is $C^\infty$-hypoelliptic in every open set.
Then there exists a basis for the Euclidean topology of $\mathbb{R}^N$ made of open sets
$\Omega$ with the following properties:
for every continuous function $f$ on $\overline{\Omega}$ and for every continuous
function $\varphi$ on $\partial\Omega$, there exists one and only one solution
$u\in C(\overline{\Omega},\mathbb{R})$ of the Dirichlet problem
\begin{gather}\label{DIRIEQMIGLIO}
\left\{
\begin{array}{ll}
L u=-f & \hbox{on $\Omega$ (in the weak sense of distributions),} \\
u=\varphi & \hbox{on $\partial\Omega$ (point-wise).}
\end{array}
\right.
\end{gather}
Furthermore, if $f,\varphi\geq 0$ then $u\geq 0$ as well. Moreover, if $f$ belongs to $C^\infty(\Omega,\mathbb{R})
\cap C(\overline{\Omega},\mathbb{R})$, then the same is true of $u$, and $u$ is a classical solution of \eqref{DIRIEQMIGLIO}.
Finally, if the zero-order term $\gamma$ of $L$ is non-positive on $\mathbb{R}^N$, the
above basis $\{\Omega\}$ does not depend on $\gamma$; if $\gamma<0$,
the basis $\{\Omega\}$ only depends on the principal matrix $(\alpha_{i,j})$
of $L$. \end{theorem}
The key step is to construct a basis for the Euclidean topology of $\mathbb{R}^N$ as follows: \begin{lemma}\label{lem.regol}
Let $A(x)=(a_{i,j}(x))$ be a matrix with real-valued continuous entries on $\mathbb{R}^N$,
which is symmetric, positive semi-definite and non-vanishing at a point $x_0\in\mathbb{R}^N$.
Then, there exists a basis of connected open neighborhoods $\mathcal{B}_{x_0}$ of $x_0$
such that any $\Omega\in \mathcal{B}_{x_0}$ satisfies the following property:
for every $y\in \partial\Omega$ there exists $\nu\in \mathbb{R}^N\setminus\{0\}$
such that $\overline{B(y+\nu,|\nu|)}$ intersects $\overline{\Omega}$ at $y$ only, and such that
\begin{equation}\label{normaleest}
\langle A(y)\,\nu , \nu \rangle >0.
\end{equation} \end{lemma} \begin{proof}
By the assumptions on $A(x_0)$ there exists a unit vector $h_0$
such that
\begin{equation}\label{normaleestEQ1}
\langle A(x_0) h_0, h_0\rangle >0.
\end{equation}
Following the idea of Bony \cite{Bony},
we choose the neighborhood basis $\mathcal{B}_{x_0}=\{\Omega(\varepsilon )\}$ as follows:
$$\Omega(\varepsilon ):=B(x_0+\varepsilon ^{-1}\,h_0,\varepsilon ^{-1}+\varepsilon ^2)\cap B(x_0 - \varepsilon ^{-1}\,h_0, \varepsilon ^{-1}+\varepsilon ^2) .$$
It suffices to show that there exists $\overline{\varepsilon }>0$ such that
every $\Omega(\varepsilon )$ with $0<\varepsilon \leq \overline{\varepsilon }$
satisfies the requirement of the lemma. Now,
the set $\Omega(\varepsilon )$ (which is
trivially an open neighborhood of $x_0$) shrinks to $\{x_0\}$
as $\varepsilon $ shrinks to $0$: indeed, by the parallelogram identity,
every $x\in\Omega(\varepsilon )$ satisfies
$|x-x_0|^2\leq (\varepsilon ^{-1}+\varepsilon ^2)^2-\varepsilon ^{-2}=2\,\varepsilon +\varepsilon ^4$. Moreover, every
$y\in \partial \Omega(\varepsilon )$ belongs to at least one of the
spheres $\partial B(x_0\pm\varepsilon ^{-1}\,h_0,\varepsilon ^{-1}+\varepsilon ^2)$; accordingly,
we choose
$$\nu=\nu_\varepsilon (y):=\frac{y-(x_0\pm\varepsilon ^{-1}\,h_0)}{\varepsilon ^{-1}+\varepsilon ^2}$$
to get the geometric condition $\overline{B(y+\nu,|\nu|)}\cap \overline{\Omega(\varepsilon )}=\{y\}$.
It obviously holds that $\nu_\varepsilon (y)$ tends to $\mp h_0$ as $\varepsilon \to 0$
(uniformly for $y\in\partial\Omega(\varepsilon )$), and the sign is immaterial since $\nu$ enters
\eqref{normaleest} quadratically; hence \eqref{normaleest} follows from \eqref{normaleestEQ1}
by continuity arguments, for any $0<\varepsilon \leq \overline{\varepsilon }$, with $\overline{\varepsilon }$
conveniently small. \end{proof}
We proceed with the proof of
Theorem \ref{th.localDiriMIGLIO} by constructing, for any given $x_0\in\mathbb{R}^N$,
a basis of neighborhoods of $x_0$ as required.
The crucial step is to reduce $L$ to some equivalent operator $\widetilde{L}$
with zero-order term $\widetilde{L}(1)$ which is strictly negative around $x_0$.
We observe that this procedure is not necessary if $\gamma=L(1)$ is already known to be negative on $\mathbb{R}^N$.
In general, we let
$$\widetilde{L} u:=w\,L(w\,u),\quad \text{where $w(x)=1-M\,|x-x_0|^2$},$$
with $M\gg 1$ to be chosen.
Let us denote by $B(x_0)$ the Euclidean
ball of centre $x_0$ and radius $1/\sqrt M$.
It is readily seen that the second order parts of $L$ and $\widetilde{L}$
are equal, modulo the factor $w^2$.
This shows that $\widetilde{L}$ is non-totally degenerate at any point of
$B(x_0)$ and that the principal matrix of $\widetilde{L}$ is
symmetric and positive semi-definite at any point of $B(x_0)$.
Since \begin{align*}
\widetilde{L}(1)(x)&=w^2(x)\,\gamma(x)-2M\textrm{trace}(A(x))-2M\sum_{i=1}^N \beta_i(x)\,(x-x_0)_i, \end{align*}
if we choose $M$ so large that $M>\gamma(x_0)/(2\,\textrm{trace}(A(x_0)))$
(we recall that $\textrm{trace}(A(x))>0$ at any $x$ since $L$ is non-totally degenerate
at any point), then $\widetilde{L}(1)(x_0)<0$. By continuity,
there exists $r>0$ small enough such that $B'(x_0):=B(x_0,r)\subseteq B(x_0)$
and such that $\widetilde{L}(1)<0$ on the closure of $B'(x_0)$.
We explicitly remark (and this will prove the final statement of the theorem)
that the condition $\gamma\leq 0$ allows us to take $M=1$ for all $x_0$
and to use the bound \begin{align*}
\widetilde{L}(1)(x)&\leq -2\textrm{trace}(A(x))-2\sum_{i=1}^N \beta_i(x)\,(x-x_0)_i, \end{align*}
in order to choose $r$ independently of $\gamma$. \begin{remark}\label{rem.WMPsullabase}
Classical arguments (see \cite{lanco_maxprinc}) show that, due to the
strict negativity of $\widetilde{L}(1)$ on $B'(x_0)$, the operator $\widetilde{L}$
satisfies the Weak Maximum Principle on every open subset of $B'(x_0)$, that is: \begin{equation}\label{WMPB'}
\left\{
\begin{array}{ll}
\Omega\subset B'(x_0),\,\,u\in C^2(\Omega,\mathbb{R})\\
\widetilde{L} u\geq 0\,\,\text{on $\Omega$} \\
\limsup\limits_{x\to y} u(x)\leq 0\,\,\text{for every $y\in\partial\Omega$}
\end{array}
\right.
\qquad\Longrightarrow\qquad
u\leq 0\,\,\text{on $\Omega$.} \end{equation} \end{remark}
The rest of the proof consists
in demonstrating the following statement: \begin{description}
\item[(S)]
\emph{there exists a basis ${\mathcal{B}}_{x_0}$ of neighborhoods $\Omega$ of $x_0$ all contained in $B'(x_0)$
with the properties required in
Theorem \ref{th.localDiriMIGLIO} relative to $\widetilde{L}$ (in place of $L$)}. \end{description}
Once this is proved,
given any $\Omega\in {\mathcal{B}}_{x_0}$, any $f\in C(\overline{\Omega},\mathbb{R})$ and any
$\varphi\in C(\partial\Omega,\mathbb{R})$, we obtain the solution $\widetilde{u}$ of the problem
\begin{gather}\label{problema.tildato}
\left\{
\begin{array}{ll}
\widetilde{L} \widetilde{u}=-w\,f & \hbox{on $\Omega$ (in the weak sense of distributions),} \\
\widetilde{u}=\varphi/w & \hbox{on $\partial\Omega$ (point-wise);}
\end{array}
\right.
\end{gather}
then we set $u:=w\,\widetilde{u}$, and a simple verification shows that
$u$ solves \eqref{DIRIEQMIGLIO}, so that existence is proved.
As for uniqueness, it suffices to observe that for any fixed $\Omega\in{\mathcal{B}}_{x_0}$,
to any solution $u$ of
\eqref{DIRIEQMIGLIO} on $\Omega$, there corresponds a solution
$\widetilde{u}=u/w$ of \eqref{problema.tildato} (which is unique, as claimed in (S)).
Finally all the other requirements
on $u$ in the statement of Theorem \ref{th.localDiriMIGLIO}
are satisfied, since $w$ is positive and smooth on $\Omega\subseteq B(x_0)$. \begin{remark}\label{rem-ipoLP}
\emph{We remark that the operator $\widetilde{L}$ is $C^\infty$-hypoelliptic on every open subset of
$B(x_0)$.}
Indeed, for any open sets $V,V'$ such that $V\subseteq V'\subseteq B(x_0)$,
a distribution $u\in \mathcal{D}'(V')$
such that $\widetilde{L}u =f\in C^\infty(V,\mathbb{R})$ satisfies
$L(w\,u)=f/w\in C^\infty(V,\mathbb{R})$; thus, by the hypoellipticity of
$L$, we infer that $w\,u\in C^\infty(V,\mathbb{R})$ so that $u\in C^\infty(V,\mathbb{R})$
(recalling that $w\neq 0$ on $B(x_0)$). \end{remark}
We are then left to prove statement (S).
From now on we choose a neighborhood basis $\mathcal{B}_{x_0}$ of $x_0$ consisting of open sets
(contained in $B'(x_0)$)
as in Lemma \ref{lem.regol} relative to the principal matrix $\widetilde{A}$ of the operator $\widetilde{L}$
(the matrix $\widetilde{A}(x_0)$ is symmetric, positive semi-definite and non vanishing, as already discussed).
We will show that any $\Omega\in \mathcal{B}_{x_0}$ has the requirements in statement (S).
For the uniqueness part, it suffices to use in a standard way
the WMP in Remark \ref{rem.WMPsullabase} jointly with the
hypoellipticity condition in Remark \ref{rem-ipoLP}.
As for existence, we split the proof into several steps and, to simplify the notation, we write
$P$ instead of $\widetilde{L}$.
(I): \emph{$f$ smooth and $\varphi\equiv 0$}.
We fix $\Omega$ as above, $f\in C^\infty(\Omega,\mathbb{R})\cap C(\overline{\Omega},\mathbb{R})$
and $\varphi\equiv 0$. We use a standard elliptic approximation argument.
For every $n\in \mathbb{N}$ we set $$P_n:=P+\frac{1}{n}\sum_{j=1}^N \Big(\frac{\partial}
{\partial x_j}\Big)^2.$$ We observe that: \begin{itemize}
\item[-] $P_n$ is uniformly elliptic on $\mathbb{R}^N$;
\item[-] the zero-order term $P_n(1)=P(1)\,\,(=\widetilde{L}(1))$ is (strictly) negative on $\Omega$;
\item[-] $\Omega$ satisfies an exterior ball condition, due to Lemma \ref{lem.regol};
\item[-] $f\in C^\infty(\Omega,\mathbb{R})$. \end{itemize}
These conditions imply the existence (see e.g., Gilbarg and Trudinger \cite{GilbargTrudinger}) of a classical solution $u_n\in C^\infty(\Omega,\mathbb{R})\cap C(\overline{\Omega},\mathbb{R})$
of the Dirichlet problem \begin{gather*}
\left\{
\begin{array}{ll}
P_n u_n=-f & \hbox{on $\Omega$} \\
u_n=0 & \hbox{on $\partial\Omega$.}
\end{array}
\right.
\end{gather*}
Let $c_0>0$ be such that $P(1)<-c_0$ on the closure of $B'(x_0)$.
With this choice, we observe that (setting $\|f\|_\infty=\sup_{\overline{\Omega}}|f|$)
$$
\left\{
\begin{array}{ll}
P_n \Big(\pm u_n-\dfrac{\|f\|_\infty}{c_0}\Big)=\mp f-\dfrac{\|f\|_\infty}{c_0}\,P(1)\geq
\mp f+ \dfrac{\|f\|_\infty}{c_0}\,c_0\geq 0 & \hbox{on $\Omega$} \\
\pm u_n-\dfrac{\|f\|_\infty}{c_0}=-\dfrac{\|f\|_\infty}{c_0}\leq 0 & \hbox{on $\partial\Omega$.}
\end{array}
\right.$$
Arguing as in Remark \ref{rem.WMPsullabase}, the Weak Maximum Principle for $P_n$
proves that \begin{equation}\label{stimaunnnn}
\|u_n\|_\infty= \sup_{x\in \overline{\Omega}} |u_n(x)|\leq \dfrac{\|f\|_\infty}{c_0}\quad \text{uniformly for every $n\in \mathbb{N}$}. \end{equation}
This provides us with a subsequence of $u_n$ (still denoted by $u_n$)
and a function $u\in L^\infty(\Omega)$ such that $u_n$ tends to $u$
in the weak$^*$ topology, that is \begin{equation}\label{stimaunnnnEQ1}
\lim_{n\to \infty} \int_\Omega u_n\,h= \int_\Omega u\,h,\quad \text{for all $h\in L^1(\Omega)$.} \end{equation}
Moreover one knows that \begin{equation}\label{stimaunnnnEQ1bis}
\|u\|_{L^\infty(U) }\leq \limsup_{n\to \infty} \|u_n\|_{L^\infty(U) },\quad \text{for all $U\subseteq \Omega$.} \end{equation}
From \eqref{stimaunnnnEQ1} it easily follows that \begin{equation*}
\int_\Omega u\,P^*\psi= -\int_\Omega f\,\psi,\quad \text{for all $\psi\in C_0^\infty(\Omega,\mathbb{R})$.} \end{equation*}
This means that $Pu=-f$ in the weak sense of distributions.
As $P$ is hypoelliptic on every open set (Remark \ref{rem-ipoLP}),
we infer that $u$ can be modified on a null set in such a way that
$u\in C^\infty(\Omega,\mathbb{R})$. Thus $P u=-f$ in the classical sense on $\Omega$.
We aim to prove that $u$ can be continuously prolonged to $0$ on $\partial\Omega$.
To this end, given any $y\in \partial\Omega$, in view of
Lemma \ref{lem.regol} (and the choice of $\Omega$),
there exists $\nu\in \mathbb{R}^N\setminus\{0\}$
such that $\overline{B(y+\nu,|\nu|)}$ intersects $\overline{\Omega}$ at $y$ only,
and such that (see \eqref{normaleest})
\begin{equation}\label{normaleestBBO}
\langle \widetilde{A}(y)\,\nu , \nu \rangle >0.
\end{equation}
As in the Hopf-type Lemma \ref{lem_Hopf}, we consider the function
$$w(x) := e^{-\lambda|x - (y+\nu)|^2} - e^{-\lambda|\nu|^2},$$
where $\lambda$ is a positive real number to be chosen in a moment.
For every $n$ and for every $x$ one has \begin{gather}\label{stimaintornidel} \begin{split}
P_n w(x) &= Pw(x)+\frac{1}{n}\,e^{-\lambda|x - (y+\nu)|^2}\Big(4\lambda^2|x-(y+\nu)|^2
-2\lambda N\Big)\\
&\geq
Pw(x)-2\lambda N e^{-\lambda|x - (y+\nu)|^2}. \end{split} \end{gather}
If we set $P=\sum_{i,j}\widetilde{a}_{i,j}\partial_{i,j}+\sum_j \widetilde{b}_j\partial_j+\widetilde{c}$,
a simple computation (similar to \eqref{lem_Hopf.EQ1ccc}) shows that \begin{align*}
&\Big( Pw(x)-2\lambda N e^{-\lambda|x - (y+\nu)|^2}\Big)\Big|_{x=y}\\
&=
e^{-\lambda|\nu|^2}\bigg(4\lambda^2
\langle\widetilde{A}(y)\nu,\nu\rangle
- 2\lambda\sum_{j = 1}^N\big(\widetilde{a}_{j,j}(y) - \widetilde{b}_j(y)\nu_j\big)
-2\,\lambda\,N\bigg).
\end{align*}
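For the reader's convenience, we spell out the computation behind the last display. A direct differentiation gives
$\partial_j w(x) = -2\lambda\,(x-(y+\nu))_j\,e^{-\lambda|x-(y+\nu)|^2}$ and
$\partial^2_{i,j} w(x) = \big(4\lambda^2 (x-(y+\nu))_i (x-(y+\nu))_j - 2\lambda\,\delta_{i,j}\big)\,e^{-\lambda|x-(y+\nu)|^2}$, whence
\begin{align*}
P w(x) &= e^{-\lambda|x-(y+\nu)|^2}\bigg( 4\lambda^2\,\big\langle \widetilde{A}(x)\,(x-(y+\nu)),\, x-(y+\nu) \big\rangle\\
&\qquad - 2\lambda\sum_{j=1}^N \big(\widetilde{a}_{j,j}(x) + \widetilde{b}_j(x)\,(x-(y+\nu))_j\big) \bigg) + \widetilde{c}(x)\,w(x).
\end{align*}
Evaluating at $x = y$, where $x-(y+\nu) = -\nu$ and $w(y)=0$, yields the right-hand side displayed above.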
Thanks to \eqref{normaleestBBO}, there exists $\lambda \gg 1$
such that the above right-hand side is strictly positive.
Therefore, due to \eqref{stimaintornidel}
there exist $\varepsilon >0$ and an open ball $V=B(y,\delta)$ (with $\varepsilon $ and $\delta$ \emph{independent of} $n$)
such that \begin{equation}\label{doppiastimadel}
P_nw(x)\geq \varepsilon \quad \text{for every $x\in V$ and every $n\in\mathbb{N}$.} \end{equation}
We now apply the Weak Maximum Principle for the operator $P_n$
on the open set $\Omega\cap V$ to the functions $M\,w\pm u_n$, where
$M\gg 1$ is chosen as follows. First we have
$$P_n (M\,w\pm u_n)=M\,P_n w\pm P_n u_n=M\,P_n w \mp f\geq M\,\varepsilon \mp f\geq M\,\varepsilon -\|f\|_\infty,\quad
\text{in $\Omega\cap V$}. $$
Consequently we first choose $M>\|f\|_\infty/\varepsilon $. Then we study the behavior of $M\,w\pm u_n$
on $$\partial(\Omega\cap V)=[V\cap\partial \Omega] \cup [\overline{\Omega}\cap \partial V]=:\Gamma_1\cup \Gamma_2.$$
Firstly, on $\Gamma_1$ we have $M\,w\pm u_n=M\,w\leq 0$ since $\Gamma_1\subseteq \mathbb{R}^N\setminus B(y+\nu,|\nu|)$.
Secondly, on $\Gamma_2$,
$$M\,w\pm u_n \leq M\,\max_{\Gamma_2}w+\|u_n\|_\infty
\stackrel{\eqref{stimaunnnn}}{\leq }
M\,\max_{\Gamma_2}w+\dfrac{\|f\|_\infty}{c_0}.$$
Since $\Gamma_2$ is a compact set on which $w$ is strictly negative, we have
$\max_{\Gamma_2}w<0$ and the further choice
$M\geq - \|f\|_\infty/(c_0\max_{\Gamma_2}w)$ yields
$M\,w\pm u_n\leq 0$ on $\Gamma_2$. Summing up,
$$\left\{
\begin{array}{ll}
P_n (M\,w\pm u_n)\geq 0 & \hbox{on $\Omega\cap V$} \\
M\,w\pm u_n \leq 0 & \hbox{on $\partial(\Omega\cap V)$.}
\end{array}
\right.$$
The Weak Maximum Principle yields $M\,w\pm u_n\leq 0$ on $\Omega\cap V$, that is
(since $w<0$ on $\Omega$)
$$ |u_n(x)|\leq M\,|w(x)|\quad \text {for every $x\in \Omega\cap V$ and for every $n\in \mathbb{N}$.}$$
Since $w(y)=0$, for every $\sigma>0$ there exists an open neighborhood $W\subset V$ of $y$
such that $\|w\|_{L^\infty(W)}<\sigma$; the above inequality then gives
$\|u_n\|_{L^\infty(W\cap \Omega)}\leq M\,\sigma$. Jointly with
\eqref{stimaunnnnEQ1bis} we deduce that
$\|u\|_{L^\infty(W\cap \Omega)}\leq M\,\sigma$, so that $\lim_{\Omega\ni x\to y}u(x)=0$.
From the arbitrariness of $y$, we obtain that $u$ can be continuously prolonged to $0$ on $\partial\Omega$.
In order to complete the proof of (S), we are left to show that
if $f\in C^\infty(\Omega,\mathbb{R})\cap C(\overline{\Omega},\mathbb{R})$
is nonnegative, then the unique solution $u\in C(\overline{\Omega},\mathbb{R})$ of \begin{gather*}
\left\{
\begin{array}{ll}
P u=-f & \hbox{on $\Omega$ (in the weak sense of distributions)} \\
u=0 & \hbox{on $\partial\Omega$ (point-wise)}
\end{array}
\right.
\end{gather*}
is nonnegative as well. From the hypoellipticity of $P$
(see Remark \ref{rem-ipoLP}), we already know that
$u\in C^\infty(\Omega,\mathbb{R})$, and we can apply the WMP to $-u$
(see Remark \ref{rem.WMPsullabase})
to get $-u\leq 0$.
(II): \emph{$f$ and $\varphi$ smooth}.
We fix $\Omega$ as above, $f\in C^\infty(\Omega,\mathbb{R})\cap C(\overline{\Omega},\mathbb{R})$,
and we let $\varphi$ be the restriction to $\partial\Omega$ of some function
$\Phi$ which is smooth on an open neighborhood of $\overline{\Omega}$.
As in Step (I), we consider the unique solution $v\in C^\infty(\Omega,\mathbb{R})\cap C(\overline{\Omega},\mathbb{R})$ of \begin{gather*}
\left\{
\begin{array}{ll}
P v=-f-P\Phi & \hbox{on $\Omega$} \\
v=0 & \hbox{on $\partial\Omega$,}
\end{array}
\right.
\end{gather*}
and we observe that $u=v+\Phi$ is the (unique) classical solution of \begin{gather*}
\left\{
\begin{array}{ll}
P u=-f & \hbox{on $\Omega$} \\
u=\Phi|_{\partial\Omega}=\varphi & \hbox{on $\partial\Omega$.}
\end{array}
\right.
\end{gather*}
If furthermore $f,\varphi\geq 0$, the nonnegativity of $u$ is a consequence of the
WMP as in Step (I).
(III): \emph{$f$ and $\varphi$ continuous}.
Finally we consider
$f\in C(\overline{\Omega},\mathbb{R})$ and
$\varphi\in C(\partial\Omega,\mathbb{R})$.
By the Stone-Weierstrass Theorem, there
exist polynomial functions $f_n,\varphi_n$
uniformly converging to $f,\varphi$ respectively on $\overline{\Omega},\partial\Omega$ as $n\to\infty$. As in Step (II), for every $n\in\mathbb{N}$ we consider the
unique classical solution $u_n$ of
\begin{gather*}
\left\{
\begin{array}{ll}
P u_n=-f_n & \hbox{on $\Omega$} \\
u_n=\varphi_n & \hbox{on $\partial\Omega$.}
\end{array}
\right.
\end{gather*}
From the fact that
$-c_0:=\max_{\overline{\Omega}} P(1)<0$, we
can argue as in Step (I), obtaining the estimate
$$\|u_n-u_m\|_{C(\overline{\Omega})}
\leq \max\bigg\{\frac{1}{c_0}\,\|f_n-f_m\|_{C(\overline{\Omega})},
\|\varphi_n-\varphi_m\|_{C({\partial\Omega})}\bigg\}. $$
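The estimate can be obtained as follows (we sketch the argument, with the notation of Step (I)): setting
$c:=\max\big\{\tfrac{1}{c_0}\,\|f_n-f_m\|_{C(\overline{\Omega})}, \|\varphi_n-\varphi_m\|_{C(\partial\Omega)}\big\}$, on $\Omega$ one has
$$ P\big(\pm(u_n-u_m)-c\big) = \mp(f_n-f_m) - c\,P(1) \geq -\|f_n-f_m\|_{C(\overline{\Omega})} + c\,c_0 \geq 0,$$
while $\pm(u_n-u_m)-c = \pm(\varphi_n-\varphi_m)-c \leq 0$ on $\partial\Omega$; the WMP in Remark \ref{rem.WMPsullabase} then gives $|u_n-u_m|\leq c$ on $\overline{\Omega}$.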
This proves that there exists
the uniform limit $u:=\lim_{n\to\infty} u_n$ in $C(\overline{\Omega},\mathbb{R})$.
Clearly one has: $u=\varphi$ point-wise on $\partial\Omega$ and
$Pu=-f$ in the weak sense of distributions on $\Omega$.
From the hypoellipticity of $P$ (Remark \ref{rem-ipoLP})
we infer that $f$ smooth implies $u$ smooth. Finally,
suppose that $f,\varphi\geq 0$.
By the Tietze Extension Theorem, we prolong $f$ out of $\overline{\Omega}$
to a \emph{continuous} function $F$ on $\mathbb{R}^N$;
we consider a mollifying sequence $F_n\in C^\infty(\mathbb{R}^N,\mathbb{R})$
uniformly converging to $F$ on the compact sets of $\mathbb{R}^N$.
Since mollification preserves the sign, the fact that
$F|_{\overline{\Omega}}\equiv f\geq 0$ on $\overline{\Omega}$ gives that
$F_n\geq 0$ on $\overline{\Omega}$.
As above in this Step, we solve the problem
\begin{gather*}
\left\{
\begin{array}{ll}
P U_n=-F_n & \hbox{on $\Omega$} \\
U_n=\varphi & \hbox{on $\partial\Omega$,}
\end{array}
\right. \qquad \text{with}\quad U_n\in C^\infty(\Omega,\mathbb{R})\cap C(\overline{\Omega},\mathbb{R}),
\end{gather*}
and we get that $U_n$ uniformly converges on $\overline{\Omega}$ to the unique
continuous solution $u$ of
\begin{gather*}
\left\{
\begin{array}{ll}
P u=-f & \hbox{in $\mathcal{D}'(\Omega)$} \\
u=\varphi & \hbox{on $\partial\Omega$.}
\end{array}
\right.
\end{gather*}
From the WMP for $-U_n$ (recalling that $F_n\geq 0$ and $\varphi\geq0$), we derive
$U_n\geq 0$ on $\overline{\Omega}$; this gives $u(x)=\lim_{n\to \infty} U_n(x)\geq 0$
for all $x\in \overline{\Omega}$. This completes the proof.\qed
\end{document}
\begin{document}
\title{A Canonical Semi-Deterministic Transducer}
\begin{abstract} We prove the existence of a canonical form for semi-deterministic transducers with sets of pairwise incomparable output strings. Based on this, we develop an algorithm which learns semi-deterministic transducers given access to translation queries. We also prove that there is no learning algorithm for semi-deterministic transducers that uses only domain knowledge.
\end{abstract}
\section{Introduction}
Transducers, introduced by \cite{nivat68}, are a type of abstract machine which defines a relation between two formal languages. As such, they are interpreted as modeling translation in any context where formal languages are applicable. We provide no background on formal languages in this paper; an overview of the subject can be found in \cite{bers79} and \cite{saka09}. Alternatively, transducers can be viewed as a generalization of finite state machines. This view was introduced by Mohri, who uses transducers in the context of natural language processing \cite{mohr97,mohr00a} and \cite{mohr00b}.
A fundamental task when studying the theory of transducers is to look for classes of transducers that can be learned given access to some form of data. If a class of transducers, $\mathscr C$, is found to be learnable, then a predictive model can be produced in any application where a translation from the class $\mathscr C$ is in use. The significance of transducers, specifically expanding the range of the learnable classes, is clear from the scope of applications of transducers. Among many others, some well known applications are in the fields of morphology and phonology \cite{roar07}, machine translation \cite{amen01a,casa04,clar01b}, web wrappers \cite{carm05}, speech \cite{mohr97} and pattern recognition \cite{bern06}. In each of these cases, different classes of transducers are examined with characteristics suitable to the application. Distinguishing characteristics of different classes include determinism properties, the use of probabilites or weights, as well as details of the types of transitions that are permitted.
\subsection{Transducer learning}
An important step in the theory of transducers was the development of the algorithm \textsc{Ostia}. Introduced in \cite{onci93}, \textsc{Ostia} was designed for language comprehension tasks \cite{cast93}. A number of elaborations on the original algorithm have since arisen, many of them aimed at trying to circumvent the restriction to total functions that limited \textsc{Ostia}. Typically, these attempts involved adding some new source of information. For example, \textsc{Ostia}-N uses negative (input) examples and \textsc{Ostia}-D supposes the algorithm has some knowledge of the domain of the function \cite{onci96}. Similar ideas were explored later by \cite{kerm02a} and \cite{cost04}. An application of \textsc{Ostia} for active learning is presented in \cite{vila96}. Using dictionaries and word alignments has been tested by \cite{vila00}. A demonstrated practical success of \textsc{Ostia} came in 2006. The Tenjinno competition \cite{star06} was won by \cite{clar06a} using an \textsc{Ostia} inspired algorithm.
\subsection{Towards nondeterminism with transducers}
Non-deterministic transducers pose numerous complex questions -- even parsing becomes a difficult problem \cite{casa99,casa00a}. Interest in non-deterministic models remains, however, as the limitations of subsequential transducers make them unacceptable for most applications. The first lifting of these constraints was proposed by \cite{alla02}. They proposed a model in which the final states may have multiple outputs. In his PhD thesis, Akram introduced a notion of semi-determinism \cite{akra13} that strikes a balance between complete non-determinism and the very restrictive subsequential class. He provided an example witnessing that semi-deterministic transducers are a proper generalization of deterministic transducers, but did not pursue the topic further, focusing instead on probabilistic subsequential transducers. We examine an equivalent formulation of Akram's semi-determinism based on methods of mathematical logic. In particular, by viewing the definition from a higher level of the ranked universe, we convert what would be a general relation into a well-defined function. \cite{kunen80} provides an overview of a number of important topics in set theory including the ranked and definable universes. More recent developments in set theory are presented in \cite{jech2003set}.
A significant obstacle in learning non-deterministic transducers is the fact that an absence of information cannot be interpreted. One approach to overcoming this problem is to use probabilities. We eschew the probabilistic approach in favor of a collection of methods that have their antecedents in Beros's earlier work distinguishing learning models \cite{beros2013} and determining the arithmetic complexity of learning models \cite{berosND}.
An earlier version of this work was presented at the International Conference on Grammatical Inference \cite{beros-delahiguera-2014}. In this version, we provide more of the algorithms involved in learning semi-deterministic transducers and prove that the algorithms converge. We also establish the relationship between semi-deterministic transducers and two other natural extensions of deterministic transducers and the bi-languages they generate, specifically $p$-subsequential transducers and finitary, finite-state, and bounded relations (definitions of these terms are provided in Section \ref{other-non-det-section}). Finally, we show that semi-deterministic transducers and the associated bi-languages fail two closure properties: closure under composition and closure under bi-language reversal.
\section{Notation}
We make use of the following common notation in the course of this paper. Throughout, the symbols $x$, $y$ and $z$ denote strings, while $a$ and $b$ denote elements of a given alphabet. We shall use the standard notation $\lambda$ for the empty string.
\begin{itemize} \item The concatenation of two strings, $x$ and $y$, is denoted by $xy$. We write $x \prec y$ if there is a string $z \neq \lambda$ such that $y = xz$. We write $x \preceq y$ if $x \prec y$ or $x = y$. This order is called the prefix order.
\item For a set of strings, $S$, $T[S] = \{ x : (\exists y\in S)\big( x \preceq y \big) \}$ is the prefix closure of $S$.
\item A tree is a set of strings, $S$, such that $T[S] = S$. $S'$ is a subtree of $S$ if both $S$ and $S'$ are trees and $S'$ is contained in $S$. A strict subtree is a subtree that is not equal to the containing tree.
\item $\mathscr P(X) = \{ Y: Y\subseteq X \}$ and $\mathscr P^*(X) = \{ Y: Y\subseteq X \wedge |Y|<\infty \}$.
\item We will use elements of $\mathbb N$ both as numbers and as sets. In particular, we use the following inductive definition: $0 = \emptyset$ and, given $0, \ldots , n$, we define $n+1 = \{0, \ldots , n\}$.
\item Following the notation of set theory, the string $x = a_0 \ldots a_n$ is a function with domain $n+1$. Thus, $x \upto k = a_0 \ldots a_{k-1}$ for $k \leq n+1$. $|x|$ is the length of $x$ and $x^-$ is the truncation $x \upto (|x| - 1)$. Note that the last element of $x$ is $x(|x|-1)$ and the last element of $x^-$ is $x(|x|-2)$.
\item Again, drawing on set theory terminology, we call two functions, $f$ and $g$, compatible if $(\forall x \in \mbox{dom}(f) \cap \mbox{dom}(g))(f(x) = g(x))$.
\item We write $x\parallel y$ if $x = y$, $x \prec y$ or $x \succ y$ and say $x$ and $y$ are comparable. Otherwise, we write $x \perp y$ and say that $x$ and $y$ are incomparable.
\item By $<_{lex}$ and $<_{llex}$ we denote the lexicographic and length-lexicographic orders, respectively.
\item For an alphabet $\Sigma$, $\Sigma^*$ is the set of all finite strings over $\Sigma$. A tree over $\Sigma$ is a tree whose members are members of $\Sigma^*$, where the ordering of the tree is consistent with the prefix order on $\Sigma^*$ and the tree is prefix closed.
\item We reserve a distinguished character, \#, which we exclude from all alphabets under consideration and we will use \# to indicate the end of a word. We will write $x \#$ when we append the \# character to $x$. \end{itemize}
\section{Bi-Languages and Transducers}\label{bilang-trans-section}
Bi-languages are the fundamental objects of study. They capture the semantic correspondence between two languages. In principle, this correspondence does not specify any ordering of the two languages, but translation is always done from one language \emph{to} another language. As such, we refer to the input and the output languages of a bi-language. For notational simplicity, in everything that follows $\Sigma$ is the alphabet for input languages and $\Omega$ is the alphabet for output languages. Using this notation, the input language is a subset of $\Sigma^*$ and the output language is a subset of $\Omega^*$. We now present the standard definition of a bi-language.
\begin{Definition} Consider two languages, $L \subseteq \Sigma^*$ and $K \subseteq \Omega^*$. A \emph{bi-language from $L$ to $K$} is a subset of $L \times K$ with domain $L$. \end{Definition}
For our purposes, we wish to indicate the direction of translation and to aggregate all translations of a single string. To this end, in the remainder of this paper, we will use the following equivalent definition of a bi-language.
\begin{Definition}\label{bi-L} Consider two languages, $L \subseteq \Sigma^*$ and $K \subseteq \Omega^*$. A \emph{bi-language from $L$ to $K$} is a function $f: L \rightarrow \mathscr P(K)$. $L$ is said to be the \emph{input language} and $K$ the \emph{output language} of $f$. When defined without reference to a specific output language, a bi-language is simply a function $f:L \rightarrow \mathscr P(\Omega^*)$. If $f$ and $g$ are two bi-languages, then $f$ is a \emph{sub bi-language} of $g$ if $\mbox{dom}(f) \subseteq \mbox{dom}(g)$ and for all $x \in \mbox{dom}(f)$, $f(x) \subseteq g(x)$. A finite subset $\mathcal D$ of $L \times K$ is \emph{consistent with $f$} if for every $\langle x,X \rangle \in \mathcal D$, $X \in f(x)$. \end{Definition}
Note that for a bi-language $f$ from $L$ to $K$, we do not require that $\bigcup_{x \in L} f(x) = K$. We are interested in languages whose generating syntax is some form of transducer.
\begin{Definition}\label{transducer} A transducer $G$ is a tuple $\langle \mbox{\sc{states}}[G],I,\Sigma,\Omega,E \rangle$. \begin{enumerate} \item $\mbox{\sc{states}}[G]$ is a finite set of states. $I \subseteq \mbox{\sc{states}}[G]$ is the set of \emph{initial states}. \item $\Sigma$ and $\Omega$ are the \emph{input alphabet} and \emph{output alphabet}, respectively -- finite sets of characters which do not contain the reserved symbol \#. \item $E \subseteq \mbox{\sc{states}}[G] \times \mbox{\sc{states}}[G] \times (\Sigma^*\cup\{\#\}) \times \mathscr P^*(\Omega^*)$ is a finite relation called the \emph{transition relation}. An element $e \in E$ is called a \emph{transition} with $e = \langle start(e), end(e), input(e), output(e) \rangle$. If $input(e) = \#$, then $e$ is called a \emph{\#-transition}. \end{enumerate}
A transducer is said to \emph{generate} or \emph{induce} the bi-language which consists of all pairs of strings $\langle x,Y \rangle \in \Sigma^* \times \Omega^*$ such that: \begin{enumerate} \item $(\exists x_0, \ldots , x_n \in \Sigma^*)(x = x_0 \ldots x_n)$, \item $(\exists e_0, \ldots , e_{n+1} \in E)(\exists q \in I)\Big((\forall i \in \{ 0, \ldots , n \})\big(x_i = input(e_i) \wedge end(e_i) = start(e_{i+1})\big) \wedge start(e_0) = q \wedge input(e_{n+1}) = \#\Big)$ and \item there are $Y_i \in output(e_i)$ for $i \leq n+1$ such that $Y = Y_0Y_1 \cdots Y_{n+1}$. \end{enumerate} \end{Definition}
This paper addresses \emph{semi-deterministic bi-languages} which are bi-languages generated by \emph{semi-deterministic transducers}. These were defined in \cite{akra13}. We use an equivalent formulation.
\begin{Definition} A \emph{semi-deterministic transducer (SDT)} is a transducer with a unique initial state such that \allowbreak
\begin{enumerate}
\item $input(e)\in \Sigma \cup \{\#\}$ for every transition $e$,
\item given a state, $q$, and $a\in \Sigma$, there is at most one transition, $e$, with $start(e) = q$ and $input(e) = a$ and
\item given a transition, $e$, $output(e)$ is a finite set of pairwise incomparable strings in $\Omega^*$ (i.e., $output(e) \in \mathscr P^*(\Omega^*) \wedge (\forall X,Y \in output(e))\big( X \perp Y \big)$). \end{enumerate}
A \emph{semi-deterministic bi-language} (SDBL) is a bi-language that can be generated by an SDT. \end{Definition}
Two useful properties of SDTs follow from the definition. First, if $e\in E$ and $\lambda \in output(e)$, then $output(e) = \{\lambda\}$. Second, although there may be multiple translations of a single string, every input string follows a \emph{unique path} through an SDT. The precise meaning of this is made clear in the next definition. We must also note that, while SDBLs can be infinite, the image of any member or finite subset of $L$ is finite. Thus, an SDBL is a function $f: L \rightarrow \mathscr P^*(\Omega^*)$.
\begin{Definition}\label{paths-def} Let $G$ be an SDT with input language $L$. A \emph{path through $G$} is a string $e_0 \ldots e_k\in E^*$, where $E$ is the set of transitions, such that $start(e_{i+1}) = end(e_i)$ for $i<k$. $G[p]$ is the collection of all outputs of $G$ that can result from following path $p$. $p_x$ is the unique path through $G$, $e_0 \ldots e_k \in E^*$, defined by $x\in \Sigma^*$ such that $start(e_0)$ is the unique initial state of $G$, if such a path exists. We denote the final state of the path $p_x$ by $q_x$. \end{Definition}
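To make the preceding definitions concrete, we record a small Python sketch that stores an SDT and evaluates the bi-language it generates on one input string by following the unique path $p_x$. The sketch is only an illustration, and its conventions are ours rather than part of the definition: states are integers with $0$ the unique initial state, each state is assumed to carry at most one \#-transition, and the names \texttt{SDT} and \texttt{translate} are invented for this example.
\begin{verbatim}
from itertools import product

class SDT:
    """Sketch of a semi-deterministic transducer.

    transitions: dict mapping (state, input_symbol) -> (next_state, output_set)
    end_outputs: dict mapping state -> output_set (the #-transitions)
    Every output_set is a finite set of pairwise incomparable strings,
    as required by the definition of an SDT.
    """

    def __init__(self, transitions, end_outputs):
        self.transitions = transitions
        self.end_outputs = end_outputs

    def translate(self, x):
        """Return f(x): every output obtainable along the unique path of x."""
        state, outputs = 0, []
        for a in x:
            if (state, a) not in self.transitions:
                return set()          # x is outside the input language
            state, out = self.transitions[(state, a)]
            outputs.append(out)
        if state not in self.end_outputs:
            return set()              # no #-transition leaves q_x
        outputs.append(self.end_outputs[state])
        # pick one string Y_i from each output set along the path and concatenate
        return {"".join(choice) for choice in product(*outputs)}
\end{verbatim}
For instance, with \texttt{transitions = \{(0, 'a'): (1, \{'x', 'yy'\})\}} and \texttt{end\_outputs = \{1: \{'z'\}\}}, the call \texttt{translate('a')} returns \texttt{\{'xz', 'yyz'\}}.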
\section{Ordering maximal antichains}
When parsing sets of strings, we will often use the following operations.
\begin{Definition}\label{*-1-def} Let $S$ and $P$ be two sets of strings. \begin{itemize} \item $P * S = \{x y : x\in P \wedge y\in S\}$.
\item $P^{-1}S = \{ y : (\exists x\in P)\big( x y \in S \big) \}$. \end{itemize} For notational simplicity, we define $x^{-1}S = \{x\}^{-1}S$, $P^{-1}x = P^{-1}\{x\}$, $x*S = \{x\}*S$ and $P*x = P*\{x\}$ for a string $x$. \end{Definition}
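These two operations admit a direct Python rendering, which we record here because the later sketches rely on them. It is a minimal transcription of Definition \ref{*-1-def}, with strings modelled as Python strings and sets of strings as Python sets; the names \texttt{star} and \texttt{inv} are ours.
\begin{verbatim}
def star(P, S):
    """P * S = { xy : x in P, y in S }."""
    return {x + y for x in P for y in S}

def inv(P, S):
    """P^{-1} S = { y : there is x in P with xy in S }."""
    return {s[len(x):] for s in S for x in P if s.startswith(x)}
\end{verbatim}
For example, \texttt{star(\{'a'\}, \{'a', 'b'\})} is \texttt{\{'aa', 'ab'\}} and \texttt{inv(\{'a', 'b'\}, \{'aa', 'ab', 'ba'\})} is \texttt{\{'a', 'b'\}}.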
\begin{Proposition} $*$ is associative, but is not commutative. \end{Proposition}
\begin{proof} Associativity follows from the associativity of concatenation. To see that $*$ is not commutative, consider $A=\{ a \}$ and $B=\{ a,b \}$. $A*B=\{ aa,ab \}$ and $B*A=\{ aa,ba \}$. \end{proof}
The following definitions and results pertain to sets of strings and trees over finite alphabets.
\begin{Definition}\label{anti-def} Given a set of strings, $S$, we call $P \subseteq T[S]$ a \emph{maximal antichain of $S$} if $(\forall x,y\in P)\big(x\perp y \vee x = y\big)$ and $(\forall x\in S)(\exists y\in P)(y \parallel x)$. $P$ is a \emph{valid antichain of $S$} if $P$ is a maximal antichain of $S$ and $(\forall x,y \in P)\big(x^{-1}T[S] = y^{-1}T[S]\big)$. We define, $\vac(S) = \{P:P \mbox{ is a valid antichain of $S$}\}$. \end{Definition}
\begin{Example} Consider the following set of strings over the alphabet $\{a,b\}$: $$S = \{ a^5,a^4b,a^2ba,a^2b^2,ba^4,ba^3b,baba,bab^2,b^2a^3,b^2a^2b,b^3a,b^4 \}.$$
Graphically, we can represent $S$ as a tree where branching left indicates an $a$ and branching right indicates a $b$. In the picture below to the right, we highlight the four valid antichains of $S$: $P_0 = \{ \lambda \}$, $P_1 = \{ a^2,ba,b^2 \}$, $P_2 = \{ a^4,a^2b,ba^3,bab,b^2a^2,b^3 \}$ and $P_3 = S$. Note that $S$ is itself a valid antichain of $S$ simply because it contains no comparable strings. The members of the four valid antichains are connected via dotted lines in the right picture ($P_0$ has only one member and therefore includes no dotted lines). For reference, a maximal antichain that is not valid is included in the picture on the left and its members are joined with a dotted line.
\begin{figure}
\caption{On the left, a maximal antichain that is not valid; on the right, all the valid antichains.}
\end{figure}
In the next figure, we focus on the valid antichain $P_1$.
\begin{figure}
\caption{The identical subtrees below the elements of the valid antichain $P_1$.}
\end{figure}
Observe that the portions of the tree below each of $a^2$, $ba$ and $b^2$ are identical; the terminal nodes of all three sub-trees are $\{ a^3,a^2b,ab,b^2 \}$. It is this equivalence of suffixes that makes $P_1$ a valid antichain. \end{Example}
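For small sets such as the one above, the valid antichains can be computed mechanically. The brute-force sketch below (the function names are ours) groups the prefixes in $T[S]$ by their suffix trees $x^{-1}T[S]$ and keeps the groups that cover $S$; that this produces exactly $\vac(S)$ relies on the observation, also used in the proof of Theorem \ref{ac-linear} below, that two comparable prefixes with equal (finite) suffix trees must coincide. It is an illustration only, not an efficient algorithm.
\begin{verbatim}
def prefix_closure(S):
    """T[S]: the set of all prefixes of members of S."""
    return {s[:i] for s in S for i in range(len(s) + 1)}

def comparable(x, y):
    return x.startswith(y) or y.startswith(x)

def valid_antichains(S):
    """Return vac(S) as a list of sets of strings."""
    T = prefix_closure(S)
    groups = {}
    for x in T:
        # the suffix tree x^{-1} T[S], used as the grouping key
        key = frozenset(t[len(x):] for t in T if t.startswith(x))
        groups.setdefault(key, set()).add(x)
    # a group is a valid antichain exactly when it covers S
    return [P for P in groups.values()
            if all(any(comparable(s, p) for p in P) for s in S)]
\end{verbatim}
Applied to the set $S$ of the example, this procedure recovers the four antichains $P_0,\ldots,P_3$ depicted above.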
The concept of equivalence we have developed closely parallels that of Nerode equivalence \cite{nerode} in which two strings in a language are equivalent if there is no extension in the language that distinguishes the two strings.
It is interesting to note that the valid antichains in the above example have a natural linear ordering. As we shall see in Theorem \ref{ac-linear}, this is not an artifact of the particular example, but is true of any finite set $S$.
\begin{Proposition}\label{prefix-of-prefix} Suppose that $P$ is a valid antichain of a set of strings $S$ and $Q$ is a valid antichain of $P$, then $Q$ is a valid antichain of~$S$. \end{Proposition}
\begin{proof}
Let $P$ be a valid antichain of a set of strings $S$ and let $Q$ be a valid antichain of $P$. Every member of $T[S]$ is either a prefix or an extension of a member of $P$. Since $P$ consists of incomparable strings, each member of $P$ has a member of $Q$ as a prefix. Thus, $Q$ is a maximal antichain of $S$. To see that $Q$ is a valid antichain, observe that if $x,y \in Q$, then $x^{-1}T[P] = y^{-1}T[P]$. Since $z^{-1}T[S] = w^{-1}T[S]$ for all $z,w \in P$, $x^{-1}T[S] = y^{-1}T[S]$, thus $Q$ is a valid antichain. \end{proof}
\begin{Definition} For $P$ and $Q$, sets of strings over some common alphabet, we say that $P <_{ac} Q$ ($P$ is ``antichain less than" $Q$) if either \begin{itemize}
\item $|P| < |Q|$, or
\item $|P| = |Q|$ and, for all $x\in P$ and $y\in Q$, if $x \parallel y$, then $x\prec y$. \end{itemize} \end{Definition}
We will use valid antichains to parse a set of strings as one would parse a single string into a prefix and suffix. The validity of an antichain ensures that the corresponding suffix set is well-defined.
\begin{Proposition}\label{p*ps} Let $S$ be a finite set of incomparable strings. If $P$ is a valid antichain of $S$, then $P * (P^{-1}S) = S$. \end{Proposition}
\begin{proof} Observe that, if $P$ is a valid antichain of $S$, then $T[P^{-1}S] = x^{-1}T[S]$ for all $x \in P$. \end{proof}
The antichain ordering ($<_{ac}$) has particularly nice properties when applied to $\vac(S)$, where $S$ is a finite set of strings.
\begin{Proposition}\label{comparable} If $P$ and $Q$ are maximal antichains of the same finite set of strings, then there is a relation $R \subseteq P \times Q$ such that \begin{itemize} \item $\mbox{dom}(R) = P$, \item $\mbox{ran}(R) = Q$, \item $xRy \leftrightarrow x \parallel y$. \end{itemize}
Furthermore, if $|P| = |Q|$ and $P \parallel_{ac} Q$, then $R$ is a well-defined and bijective function. \end{Proposition}
\begin{proof}
Define $R = \{ \langle x,y \rangle : x \in P \wedge y \in Q \wedge x \parallel y \}$. Since $P$ and $Q$ are maximal antichains, for each $x \in P$ there is $y \in Q$ such that $x \parallel y$ hence, $\mbox{dom}(R) \supseteq P$. Similarly, for each $y \in Q$ there is an $x \in P$ such that $x \parallel y$ thus, $\mbox{ran}(R) \supseteq Q$. By the definition of $R$, $\mbox{dom}(R) \subseteq P$, $\mbox{ran}(R) \subseteq Q$ and $xRy \leftrightarrow x \parallel y$. If $|P| = |Q|$ and $P \parallel_{ac} Q$, then for each $x \in P$ there is a unique comparable $y \in Q$ and vice versa. Consequently, $R$ is well-defined and bijective in this case. \end{proof}
\begin{Theorem}\label{ac-linear} If $S$ is a finite set of strings, then $\Big(\vac(S), <_{ac}\Big)$ is a finite linear order. \end{Theorem}
\begin{proof}
Consider a finite set of strings, $S$, and let $T=T[S]$. We begin by fixing $P,Q \in \vac(S)$. We may assume that $|P| = |Q|$; if $|P| \neq |Q|$, then $P <_{ac} Q$ or $Q <_{ac} P$. We pick an element $x\in P$ and observe that, by Proposition \ref{comparable}, there is a $y\in Q$ such that $x \parallel y$.\par
Suppose that $x=y$ and let $x'$ be any other member of $P$. By Proposition \ref{comparable}, there is a $y' \in Q$ such that $x' \parallel y'$. Since $P$ and $Q$ are valid antichains and $x=y$, $x'^{-1}T=x^{-1}T=y^{-1}T=y'^{-1}T$. Given that $x'\parallel y'$, $T$ is finite and $x'^{-1}T = y'^{-1}T$ we conclude that $x' = y'$. Now assume $x\prec y$. In the case $y \prec x$ simply exchange the roles of $x$ and $y$. As above, we pick $x'\in P$ and any comparable element $y'\in Q$. Clearly $y^{-1}T$ is a strict subtree of $x^{-1}T$ and hence, $y'^{-1}T$ is a strict subtree of $x'^{-1}T$. We conclude that~$x' \prec y'$.\par
We have shown that any two members of $\vac(S)$ are comparable. The remaining order properties follow immediately from the definitions. \end{proof}
While the proof of Theorem \ref{ac-linear} is quite simple, we highlight it as a theorem because it is the critical result for the applications of valid antichains that follow. Note that $<_{ac}$ may not be a linear order on an arbitrary collection of maximal antichains.
\begin{Corollary} Let $S_0,S_1,S_2, \ldots$ be a sequence of finite sets. $\bigcap_{i\in \mathbb N} \vac(S_i)$ is linearly ordered under $<_{ac}$. \end{Corollary}
\begin{proof} Any subset of a linear order is a linear order. Since $\bigcap_{i\in \mathbb N} \vac(S_i) \subseteq \vac(S_0)$, the claim follows. \end{proof}
\begin{Definition} Given a set of strings, $S$, a finite sequence of sets of strings, $P_0, \ldots , P_n$, is a \emph{factorization of $S$} if $S = P_0 * \cdots * P_n$ and $P_i \neq \{ \lambda \}$ for $i \leq n$. Such a factorization is said to be \emph{maximal} if, for each $i\in \mathbb N$, $\vac(P_i) = \{ \{ \lambda \}, P_i \}$. \end{Definition}
Note that having $\vac(P_i) = \{ \{ \lambda \}, P_i \}$ for each factor, $P_i$, in a factorization is equivalent to having $P_{i+1}$ be the $<_{ac}$-least non-trivial valid antichain of $P_i^{-1}\cdots P_0^{-1} S$.
\begin{Example}\label{factorizeEx} We consider the following set of strings: \begin{align*} S = \{ &a^5,a^4b,a^3ba^2,a^3bab,a^3b^2a,a^3b^3,aba^2,abab,ab^2a^2,ab^2ab,ab^3a,ab^4,ba^4,\\ &ba^3b,ba^2ba^2,ba^2bab,ba^2b^2a,ba^2b^3,b^2a^2,b^2ab,b^3a^2,b^3ab,b^4a^2,b^4ab,b^5a,b^6 \}. \end{align*}
In the figure below, we display the tree, $T[S]$, as well as the $<_{ac}$-least non-trivial valid antichain, $P_0 = \{ a,b \}$.
\begin{figure}
\caption{A set of strings and its $<_{ac}$-least valid antichain.}
\end{figure}
The corresponding set of suffixes is $P_0^{-1}S = \{ a^4,a^3b,a^2ba^2,a^2bab,a^2b^2a,a^2b^3,ba^2,bab,\allowbreak b^2a^2,\allowbreak b^2ab,\allowbreak b^3a,\allowbreak b^4 \}$. Iterating, we find the next factor is $P_1 = \{ a^2,b \}$ and its set of suffixes is $(P_0*P_1)^{-1}S = \{ a^2,ab,ba^2,bab,b^2a,b^3 \}$.
\begin{figure}
\caption{$P_1$ is the $<_{ac}$-least non-trivial valid antichain of $P_0^{-1}S$ and $P_2$ is the $<_{ac}$-least non-trivial valid antichain of $(P_0 * P_1)^{-1}S$.}
\end{figure}
We next pick $P_2 = \{ a,ba,b^2 \}$. Once we factor out $P_2$, all that remains is $\{ a,b \}$. The only antichains of $\{ a,b \}$ are $\{ \lambda \}$ and $\{ a,b \}$, both of which are valid antichains. We pick the final factor to be $P_3 = \{ a,b \}$ and conclude that $P_0*P_1*P_2*P_3$ is a maximal factorization of $S$. \end{Example}
\begin{Corollary}\label{uniqueFact} Up to possible reordering of commutative terms, every finite set of incomparable strings has a unique maximal factorization. \end{Corollary}
\begin{proof}
Let $S$ be a finite set of incomparable strings. We will apply the iterative process illustrated in Example \ref{factorizeEx} to $S$. Define $P_0$ to be the $<_{ac}$-least non-trivial valid antichain of $S$. If $P_{0} = S$, then the process is complete. By Theorem \ref{ac-linear}, the choice of $P_0$ is unique. Suppose we have defined $P_0, P_1, \ldots, P_n$. Let $S_n = P_n^{-1} \cdots P_0^{-1} S$. To be explicit, $S_n = P_n^{-1} (P_{n-1}^{-1} ( \cdots (P_0^{-1} S)))$. Define $P_{n+1}$ to be the $<_{ac}$-least non-trivial valid antichain of $S_n$. As before, the choice is unique. If $P_{n+1} = S_n$, then the process is complete. Otherwise, we proceed to the next iteration.
Since $\vac(S)$ is finite, the process must terminate. The uniqueness of the factorization follows from the uniqueness of the choices made at each stage of the process.
\end{proof}
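The iterative process used in this proof can be turned into a short procedure. The sketch below (again with names of our choosing) reuses \texttt{star}, \texttt{inv} and \texttt{valid\_antichains} from the earlier sketches; it orders the candidate antichains by cardinality and then by total length, which agrees with $<_{ac}$ on $\vac(S)$ because, for two distinct valid antichains of equal cardinality, each element of the smaller one is a proper prefix of the corresponding element of the larger one. It is meant for small finite sets of pairwise incomparable strings, not as an optimised algorithm.
\begin{verbatim}
def least_nontrivial_vac(S):
    """The <_ac-least valid antichain of S different from {lambda}."""
    candidates = [P for P in valid_antichains(S) if P != {""}]
    return min(candidates, key=lambda P: (len(P), sum(map(len, P))))

def maximal_factorization(S):
    """The canonical maximal factorization of a finite set of
    pairwise incomparable strings, as the list of factors P_0, ..., P_n."""
    factors, rest = [], set(S)
    while rest != {""}:
        P = least_nontrivial_vac(rest)
        factors.append(P)
        rest = inv(P, rest)   # peel the factor off: rest becomes P^{-1} rest
    return factors
\end{verbatim}
On the set of Example \ref{factorizeEx}, this reproduces the factors $P_0,\ldots,P_3$ computed there.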
Observe that the iterative process described above specifies a unique order for the terms of the unique maximal factorization. When the terms are listed in the order specified by this process, we will say that the factorization is in \emph{canonical order}.
\section{Semi-Deterministic Bi-Languages}
In this section, we prove the existence of a canonical SDT for every SDBL. Determining the canonical SDT for an SDBL is done in two phases. First, a ``maximal'' function on prefixes of the input language is found. Finding such a maximal function is analogous to the onwarding performed in algorithms such as OSTIA and can be loosely described as the process of moving decisions earlier in the translation process. Second, subsets of the domain on which the function has identical outputs are conflated in a largely standard merging process. Merging produces a finite-order equivalence relation on $T[L]$. Using this equivalence relation, we can define the canonical SDT.
\subsection{Semi-Deterministic Functions}\label{onwarding-section}
\begin{Definition} Let $f$ be an SDBL over $L$. $F: T[L] \rightarrow \mathscr P^*(\Omega^*)$ is a \emph{semi-deterministic function (SDF) of $f$} if, for $x\in L$, $f(x) = F(x\upto 1)*F(x\upto 2)* \cdots *F(x)*F(x\#)$. We define $\Pi F(x) = F(x\upto 1)*F(x\upto 2)*\cdots *F(x)$. If $F$ and $F'$ are SDFs of $f$, we say that $F \leq_{sdf} F'$ if $\Pi F(x)$ is a valid antichain of $\Pi F'(x)$ for all $x$. The SDF \emph{induced} by $f$ is the SDF, $F$, such that $F(x) = \{ \lambda \}$ for all $x\in T[L]$ and $F(x\#) = f(x)$ for all $x\in L$. \end{Definition}
\begin{Example}\label{SDFexample} Suppose that $A,B,C \subseteq \Omega^*$ are finite, non-empty and not equal to $\{\lambda\}$. Let $\Sigma = \{a\}$ be the input alphabet. Define an SDBL, $f$, over $L = \{ a^2 \}$ by $f(a^2) = A*B*C$. We define two incomparable SDFs of $f$ as follows. The first SDF: $F(\lambda) = \{ \lambda \}, F(a) = A*B, F(a^2) = \{ \lambda \}$ and $F(a^2\#) = C$. The second SDF: $F'(\lambda) = \{ \lambda \}, F'(a) = A, F'(a^2) = B*C$ and $F'(a^2\#) = \{ \lambda \}$. Since $\Pi F(a)$ is not a valid antichain of $\Pi F'(a)$, $F \not \leq_{sdf} F'$. Likewise, since $\Pi F'(a^2)$ is not a valid antichain of $\Pi F(a^2)$, $F' \not \leq_{sdf} F$. \end{Example}
Example \ref{SDFexample} demonstrates that $\leq_{sdf}$ is not a linear ordering of the SDFs of a fixed SDBL. Nonetheless, there is a $\leq_{sdf}$-maximum SDF of $f$.
\begin{Theorem}\label{maxSDF} If $f$ is an SDBL over $L$, then there is a $\leq_{sdf}$-maximum SDF of $f$. \end{Theorem}
\begin{proof}
For $x \in T[L]$, let $S$ be the collection of all members of $L$ that extend $x$ and let $x_0$ be the $<_{llex}$-least member of $S$. By Corollary \ref{uniqueFact}, for every $y \in S$ there is a unique maximal factorization of $f(y)$. Let $P_0*\cdots * P_n$ denote the unique maximal factorization of $f(x_0)$. Let $P_0 * \cdots * P_i$ be the longest common initial segment of all factorizations of members of $\{f(x) : x\in S\}$ when the terms of the factorizations are listed in canonical order. We define $P^x$ to be the product of this longest common factorization.
We define $F_m(\lambda) = \{ \lambda \}$ and define $F_m$ inductively on the members of $T[L]$ in $<_{llex}$-order as follows. Suppose we are considering $x \in T[L]$ and $F_m$ has already been defined on all $<_{llex}$-lesser members of $T[L]$. We define $F_m(x) = (\Pi F_m(x^-))^{-1}P^x$. If $y\in L$ and $F_m(y)$ is defined, we set $F_m(y\#) = (\Pi F_m(y))^{-1}f(y)$.
If $x \prec y$, then $\Pi F_m(x)$ is a valid antichain of $F_m(y)$ and $(F_m(y))^{-1}f(y)$ is well-defined. Consequently, $F_m$ is a well defined function with domain $T[L]$. If $F$ is any SDF of $f$ and $x$ is an arbitrary member of $T[L]$, then $\Pi F(x), \Pi F_m(x) \in \vac(f(x_0))$, where $x_0$ is the $<_{llex}$-least extension of $x$ in $L$. By Theorem \ref{ac-linear}, for any $x \in T[L]$, $\Pi F(x)$ and $\Pi F_m(x)$ are $<_{ac}$-comparable. Furthermore, $\Pi F(x), \Pi F_m(x) \in \vac(f(y))$ for all $y\succ x$. Given the construction of $F_m$, if $F_m(x) <_{ac} F(x)$, then there must be a $y \in L$ such that $x \prec y$ and $\Pi F(x) \not\in \vac(f(y))$ -- which is not possible. Thus, $F_m$ is a $\leq_{sdf}$-maximum SDF of $f$. \end{proof}
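Given the maximal factorizations of the sets $f(y)$ for $y \in S$ (each listed in canonical order, e.g., as produced by the factorization sketch of the previous section), the construction of $P^x$ in the proof above reduces to extracting the longest common initial segment. A hypothetical Python sketch, reusing \texttt{product} from that earlier sketch, is the following.
\begin{verbatim}
# Longest common initial segment of a family of maximal
# factorizations (lists of string sets), returned as its product P^x.
def common_initial_segment(factorizations):
    common = []
    for factors in zip(*factorizations):
        if all(P == factors[0] for P in factors):
            common.append(factors[0])
        else:
            break
    result = {""}                      # empty product = {lambda}
    for P in common:
        result = product(result, P)
    return result
\end{verbatim}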
\begin{Definition} Let $f$ be an SDBL with maximal SDF $F$. For $x \in \mbox{dom}(F)$ and $F'$ an SDF of $f$, we say that $F'$ is \emph{onward at $x$} if for all $y \in \mbox{dom}(F)$, $y \succeq x$ implies that $F'(y) = F(y)$. If $F'$ is onward at $\lambda$, then we say that $F'$ is \emph{onward}. \end{Definition}
In Section \ref{transQueries}, we use the concept of onwarding to build the maximal SDF from data.
\subsection{Merging}
The second phase of building a canonical form for SDTs is to define an equivalence relation on the domain of a maximum SDF. This means identifying which paths lead to the same state.
\begin{Definition}\label{futureOne} Let $F$ be an SDF of $f$ over $L$ and $x\in T[L]$. We define $\mbox{\sc{future}}_F[x] : x^{-1}T[L] \rightarrow R$, where $R$ is the range of $F$, such that $\mbox{\sc{future}}_F[x](y) = F(xy)$. If $x,y \in \mbox{dom}(F)$, we say that $x \equiv y$ if $\mbox{\sc{future}}_F[x] = \mbox{\sc{future}}_F[y]$. Given $x$, we define $\lllesdf{x}$ to be the $<_{llex}$-least element of $\mbox{dom}(F)$ that is equivalent to $x$. \end{Definition}
\begin{Proposition}\label{mergeProp}$ $ \begin{enumerate} \item \label{mergePropA} $\equiv$ is an equivalence relation on the domain of an SDF.
\item \label{mergePropB} If $x \equiv y$ and $xz,yz \in T[L]$, then $xz \equiv yz$.
\item \label{finiteEquivClasses} If $F$ is an SDF of $f$ over $L$, then there are only a finite number of $\equiv$-equivalence classes on the domain of $F$. \end{enumerate} \end{Proposition}
\begin{proof}
Part \ref{mergePropA} follows from the fact that equality is an equivalence relation. Part \ref{mergePropB} follows from the definition of $\equiv$. To prove part \ref{finiteEquivClasses}, let $G$ be an SDT that generates $f$ and let $q_x$ be a state of $G$ which can be reached by the input string $x\in T[L]$. For any $y \in T[L]$, if $p_y$ leads to $q_x$, then $x \equiv y$ as their futures are the same. Thus, $\equiv$ induces an equivalence relation on (hence, a partition of) the states of $G$. Since there is at least one state in each equivalence class, the fact that $|\mbox{\sc{states}}[G]| < \infty$ implies that there are only finitely many equivalence classes. \end{proof}
\begin{Lemma}\label{boundedFuture} Let $F$ be an SDF of $f$ over $L$. There is an $n$ such that for all $x,y \in T[L]$, $x \equiv y$ if and only if $\mbox{\sc{future}}[x]\upto x\Sigma^n = \mbox{\sc{future}}[y]\upto y\Sigma^n$. \end{Lemma}
\begin{proof} The proof follows immediately from Proposition \ref{mergeProp}, part \ref{finiteEquivClasses}. Since there are only a finite number of possible futures, there is a finite portion of each that uniquely identifies it. Let $n$ be the maximum depth of the paths required to obtain the identifying portion of each future. We have obtained the desired $n$. \end{proof}
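In the finite situations that arise in our examples, the equivalence of Definition \ref{futureOne} can be computed directly by comparing futures; in general, Lemma \ref{boundedFuture} allows the comparison to be restricted to futures truncated at depth $n$. The following Python sketch (our own, hypothetical representation: $F$ is a dictionary on a finite portion of $T[L]$) groups the domain into $\equiv$-classes.
\begin{verbatim}
# future_F[x] as a dict on the relative domain, and the induced
# equivalence classes; F maps input prefixes to output sets.
def future(F, x):
    return {y[len(x):]: frozenset(F[y])
            for y in F if y.startswith(x)}

def equivalence_classes(F):
    classes = {}
    for x in F:
        signature = frozenset(future(F, x).items())
        classes.setdefault(signature, []).append(x)
    return list(classes.values())
\end{verbatim}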
We can think of the identifying bounded future of an equivalence class as a sort of signature, an analogue of the famous locking sequence for Gold style learning \cite{blum-blum}.
The maximum SDF and the equivalence relation on its domain depend only on the underlying SDBL. Thus, we have defined a machine-independent canonical form. As a footnote, we demonstrate here how to produce an SDT from the canonical form which is unique up to isomorphism.
\begin{Definition}\label{canonical-form-def} Let $f$ be an SDBL, let $F_m$ be the maximum SDF for $f$ and let $\equiv$ be the equivalence relation on the domain of $F_m$. Define a finite state machine, $G_f$, as follows: \begin{itemize} \item $\mbox{\sc{states}}[G_f] = \{ r_{\lllesdf{x}} : x\in T[L] \}$ (in other words, a set of blank states indexed by $\{ \lllesdf{x} : x \in T[L] \}$).
\item The initial state is $r_{\lambda}$.
\item $E_{G_f} = \{ \langle r_{\lllesdf{x^-}},r_{\lllesdf{x}},x(|x|-1),F_m(x) \rangle : x\in T[L] \}\cup \{ \langle r_{\lllesdf{x}},r_{\lambda},\#,F_m(x\#) \rangle : x\in L \}$ \end{itemize} We call $G_f$ the \emph{canonical SDT for $f$}. \end{Definition}
As noted prior to the definition, the maximum SDF depends only on the SDBL. Thus, we are justified in calling the above SDT a canonical SDT. Although $L$ and $T[L]$ may be infinite sets, the set of transitions, $E_{G_f}$, and the set of states, $\mbox{\sc{states}}[G_f]$, are finite by Proposition \ref{mergeProp}. Also, observe that the method of defining an SDT from an SDF described in Definition \ref{canonical-form-def} can be used to define a unique SDT from any SDF. Since every SDT also defines a unique SDF, there is a bijection between SDFs and SDTs for a given SDBL.
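To illustrate Definition \ref{canonical-form-def}, the following Python sketch (hypothetical names; \texttt{TL} and \texttt{L} are assumed finite here purely for illustration, \texttt{F} is the maximum SDF given as a dictionary, and \texttt{rep(x)} returns $\lllesdf{x}$) assembles the states and transitions of $G_f$.
\begin{verbatim}
# Assembling the canonical SDT from the maximum SDF and the
# representative function rep (the <_llex-least equivalent prefix).
def canonical_sdt(F, TL, L, rep):
    states = {rep(x) for x in TL}
    initial = rep("")                           # r_lambda
    edges = set()
    for x in TL:
        if x:                                   # skip the empty prefix
            edges.add((rep(x[:-1]), rep(x), x[-1], frozenset(F[x])))
    for y in L:
        edges.add((rep(y), initial, "#", frozenset(F[y + "#"])))
    return states, initial, edges
\end{verbatim}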
\begin{Theorem} Let $f$ be an SDBL. $G_f$ is an SDT that generates $f$. \end{Theorem}
\begin{proof} Clearly, $G_f$ is a finite state transducer. If $P_0, \cdots , P_n$ are sets of incomparable strings, then $S = P_0 * \cdots * P_n$ also consists of incomparable strings. To see this, suppose $x = x_0 \cdots x_n$ and $y = y_0 \cdots y_n$ are such that $x \prec y$ and $x_i,y_i \in P_i$ for all $i \leq n$. If $i$ is least such that $x_i \neq y_i$, then $x_i$ and $y_i$ are comparable, so $P_i$ contains two comparable strings -- a contradiction. Thus, the outputs of all transitions of $G_f$ consist of incomparable strings, as they are factors of the elements of the range of $f$.
We must show that $G_f$ generates $f$. $G_f$ and $f$ have the same domain. Let $F_m$ be the maximal SDF of $f$. If $x \in T[L]$, then $G_f[p_x] = \Pi F_m(x)$, thus, $G_f$ generates $f$. \end{proof}
\subsection{An Example}
To illustrate the canonical form that we have now defined, we exhibit a transducer not in canonical form together with its canonical form.
\begin{figure}
\caption{An SDBL not in canonical form (left) and in canonical form (right).}
\end{figure}
\section{The learning models}
There are two principal learning models in grammatical inference: identification in the limit \cite{gold67} and PAC-learning \cite{vali84}. Each of these models admits variants depending on what additional sources of information are provided. In order to learn semi-deterministic transducers, we use queries \cite{angl87a} as an additional resource. These queries are very limited; the oracle will be interrogated about a possible translation pair and will return either \textsc{true} or \textsc{false}.
\begin{Definition} Let $f$ be a bi-language. The translation query $[x,Y]_f$ returns $\textsc{true}$ if $Y\in f(x)$ and $\textsc{false}$ otherwise. We call this oracle $[f]$. Where it is clear from context, we will write $[x,Y]$ instead of $[x,Y]_f$. \end{Definition}
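A translation-query oracle is nothing more than a membership test on the graph of $f$; a minimal sketch (assuming, purely for illustration, that the graph is available as a finite set of pairs) is the following.
\begin{verbatim}
# Hypothetical translation-query oracle [x,Y]_f as a closure over
# the graph of f, given as a set of (input, translation) pairs.
def make_translation_oracle(graph_of_f):
    def query(x, Y):
        return (x, Y) in graph_of_f
    return query
\end{verbatim}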
Equivalently, the oracle answers membership queries about the graph of the bi-language. We also prove that learning is not possible without queries. The precise definition of learning we use is adapted from the one used in \cite{higu97}:
\begin{Definition}
An algorithm, $A$, \emph{polynomially identifies in the limit with translation queries} a class of transducers, $\mathscr C$, if for any $G \in \mathscr C$ there is a set, $CS_G$, such that on any $\mathcal D \supseteq CS_G$ contained in the bi-language induced by $G$, $A$ outputs a $G'$ equivalent to $G$. The algorithm must converge within a polynomial amount of time in $|\mathcal D|$ and $|G|$; $|CS_G|$ must be polynomial in $|G|$. $|G|$, $|\mathcal D|$ and $|CS_G|$ denote the number of bits required to encode the objects $G$, $\mathcal D$ and $CS_G$, respectively. \end{Definition}
Note that in the above definition the number of calls to the oracle is also bounded by the overall complexity of the algorithm and is therefore polynomial in the size of the sample.
For Theorem \ref{not-learnable-thm}, we use a different model of learning: identification in the limit from positive data. We give the definition below.
\begin{Definition} An algorithm, $A$, \emph{identifies in the limit from positive data} a class of transducers, $\mathscr C$, if for any $G \in \mathscr C$ and any infinite enumeration of the bi-language induced by $G$, the algorithm $A$ outputs a finite number of distinct transducers on the initial segments of the enumeration. The only transducer that is output infinitely many times must be equivalent to $G$. \end{Definition}
\section{SDBLs are not learnable}
We assume domain knowledge (i.e., access to the characteristic function of the input language). In the proof of the following theorem, we encode a standard example of a ``topological'' failure of identification in the limit. In particular, we encode the family $\mathcal H = \{\mathbb N\} \cup \{A \subseteq \mathbb N : |A|<\infty\}$ into a sequence of SDTs.
\begin{Definition} Let $f$ be a bi-language. We define $DK_f$ to be the oracle that, when asked about $x$, returns a boolean value $DK_f(x)$. If $DK_f(x) = \textsc{true}$, then $x$ is in the input language of $f$ (in other words, the domain of $f$). Otherwise, $x$ is not in the input language of $f$. An algorithm which has access to $DK_f$ is said to have domain knowledge about $f$. \end{Definition}
\begin{Theorem}\label{not-learnable-thm} There is a collection of SDBLs, $\mathcal C$, such that no algorithm can identify $\mathcal C$ in the limit from positive data, even given domain knowledge of each member of $\mathcal C$. \end{Theorem}
\begin{proof}
To avoid degenerate cases, we assume the output alphabet has at least two characters, $A$ and $B$, and the input alphabet has at least one character, $a$. We exhibit a sequence of SDTs, $\{G_i\}_{i\in\mathbb N}$, such that no program can successfully learn every member of the sequence. In the following graphical representation of $\{G_i\}_{i\in\mathbb N}$ we omit the \#-transitions, instead indicating terminal nodes with a double border.
\begin{figure}
\caption{A sequence of SDTs that cannot be identified in the limit from positive data. Transitions are labelled with the input string they read and the set of possible output strings; for example, a transition $e$ labelled with $a:A,B$ has the property that $input(e) = a$ and $output(e) = \{A,B\}$.}
\end{figure}
Let $f_i$ be the SDBL generated by the SDT $G_i$. Fix any learning algorithm and let $M$ be the function such that, given data $\mathcal D$, the hypothesis made by the learning algorithm is $M(\mathcal D)$. We inductively define an enumeration of a bi-language generated by some member of the sequence, $\{G_i\}_{i\in\mathbb N}$. Define $X_i = \langle a^i,A^i \rangle \langle a^i,B^i \rangle$ and $X_i^j = \langle a^j,A^j \rangle \langle a^{j+1},A^{j+1} \rangle \cdots \langle a^{j+i},A^{j+i} \rangle$. Let $n_1$ be least such that $M(X_1 X_{n_1}^1)$ codes $G_1$. If no such $n_1$ exists, then there is an enumeration of $f_1$ which the chosen algorithm fails to identify. Thus, without loss of generality, we may assume such an $n_1$ exists. Similarly, we pick $n_2$ to be least such that $M(X_1 X_{n_1}^1 X_2 X_{n_2}^2)$ codes $G_2$. Proceeding in this fashion, either we reach a stage where some $n_k$ cannot be found and the algorithm has failed to learn $f_k$ or we have built an enumeration of the bi-language generated by $G_0$ on which the algorithm changes its hypothesis an infinite number of times. In either case, learning has failed. $\mathcal C = \{ f_i : i\in \mathbb N \}$ is the desired collection of SDBLs. \end{proof}
\section{Learning with translation queries}\label{transQueries}
In the remainder of the paper, we exhibit an algorithm that can learn any SDBL, $f$, in the limit, provided the algorithm has access to the oracles $DK_f$ and $[f]$. We present the algorithms that witness the learnability of SDBLs and summarize the result in Theorem \ref{learnableThm}.
\subsection{The characteristic sample}\label{sec-char-sample}
The characteristic sample must contain sufficient data to unambiguously perform two operations: onwarding and merging. Throughout this section $f$ is an SDBL over $L$ and $G$ is the canonical SDT that generates $f$. We define $\hat x$ to be the $<_{llex}$-least member of $L$ that extends $x$. We now proceed to define the characteristic sample for $f$, denoted $CS_f$. We will make extensive use of $p_x$, $q_x$ and $G[p_x]$ in this section (see Definition \ref{paths-def}).
The first component of the characteristic sample provides the data required to recognize which maximal antichains of a set of translations are not valid. In order to illustrate the concept, consider $f(a\#)$, the translations along a path involving only one non-\# transition. Let $X$ be the $<_{llex}$-least member of $f(a\#)$. Every maximal antichain of $f(a\#)$ contains a prefix of $X$ and every prefix of $X$ is a member of at most one element of $\vac(f(a\#))$. If $X_0$ is a prefix of $X$ that is not in a valid antichain, then there is a $Z \in f(a\#)$ such that for any $Z_0 \prec Z$, either
\begin{enumerate} \item there is a $Z_1$ such that $Z_0 Z_1 \in f(a\#)$ and $X_0 Z_1 \not\in f(a\#)$, or
\item there is a $X_1$ such that $X_0 X_1 \in f(a\#)$ and $Z_0 X_1 \not\in f(a\#)$. \end{enumerate}
In other words, $X_0$ and $Z_0$ have different futures. Thus, for each prefix which is not an element of a valid antichain, there is a translation pair that witnesses this fact. The following figure illustrates the two cases with the possible witnessing strings marked by dashed lines.
\begin{figure}
\caption{Two ways in which different futures might be witnessed. In both cases, it is easy to verify that the futures are different using translation queries.}
\end{figure}
To describe the required information in the general case, let $x_0, \ldots , x_k$ enumerate the minimal paths to each of the states of $G$. Let $x_0, \ldots , x_n$ enumerate $x_0, \ldots x_k$ together with all possible one-step extensions of the paths $x_0, \ldots , x_k$. Note that $n$ is bounded by $|\mbox{\sc{states}}[G]| + |\mbox{\sc{states}}[G]||E|$, where $E$ is the transition relation for $G$. Fix $i \leq n$. If $|x_i| > 0$, let $P$ be the $<_{ac}$-greatest antichain that is a member of $\vac(f(x_i^- y))$ for all strings $y$ such that $x_i^- y \in L$; if $|x_i| = 0$, define $P = \{ \lambda \}$. Define $X$ to be the $<_{llex}$-least member of $P^{-1} f(\hat{x_i})$. For each $X_0 \prec X$ that is not a member of a valid antichain of $P^{-1} f(\hat{x_i})$, there is a $Y \in P^{-1} f(\hat{x_i})$ no prefix of which has the same future in $P^{-1} f(\hat{x_i})$ as $X_0$ and there is a translation in $f(\hat{x_i})$ witnessing the different futures. We denote the set of such witnessing translation pairs, one for each prefix of $X$ not in a valid antichain, by $S_i$. Let $Z$ be the $<_{llex}$-least member of $P$. Let $N_0(x_i) = \{ \langle \hat{x_i}, ZX \rangle \} \cup S_i$ and define $N_0(f) = \bigcup_{i\leq n} N_0(x_i)$. Observe that $N_0(f)$ is polynomial in the size of $G$.
Consider $x\in T[L]$. Let $\vac = \bigcap_{x \prec y \in L} \vac(f(y))$. For each $P \in \vac(f(x)) \setminus \vac$, observe that there is an example that witnesses the fact that $P$ is not in $\vac$. Such examples demonstrate violations of either the maximality or the validity of the given antichain. In either case, the witness is a single element of the graph of $f$ (a paired string and translation). Since $\vac(f(x))$ is finite, the number of examples needed to eliminate all incorrect maximal antichains is also finite. We define $N_1(x)$ to be the set which consists of exactly one example for each member of $\vac(f(x)) \setminus \vac$. For the sake of a unique definition, we assume that we always choose the $<_{llex}$-least example -- although this is not essential. We can now define the second component of $CS_f$: $N_1(f) = \bigcup_{q\in \mbox{\sc{states}}[G]} N_1(\hat{x_q})$.
$N_0$ and $N_1$ are required to perform onwarding correctly. In order to perform merges, we must include enough data to identify the equivalence classes of states whose futures are the same. There are two ways in which the futures may differ: \begin{enumerate} \item there is a string, $z$, such that $xz \in L$, but $yz \not\in L$ or
\item for $X \in G[p_x]$ and $Y \in G[p_y]$, there are $z$ and $Z$ such that $XZ \in G[p_{xz}]$, but $YZ \not\in G[p_{yz}]$. \end{enumerate}
For each member of $\mbox{\sc{states}}[G]$ there is a finite collection of examples which uniquely identify the state. Let $N_2(q_x)$ be a canonically chosen collection of such examples for $q_x$. Let $e$ be a transition and $\hat{p}$ be the $<_{llex}$-least path starting at the initial state, ending with a \#-transition and including $e$. Define $N_2^*(e)$ to be the set of those translations of $\hat{p}$ each of which uses a different output of the transition $e$ and is $<_{llex}$-least amongst the translations of $\hat{p}$ that use that output. $|N_2^*(e)| = |output(e)|$. We define the final component of $CS_f$ as follows. \[ N_2(f) = \bigcup_{x \in W} N_2(q_x) \cup \bigcup_{e \in E_G} N_2^*(e), \] where $W$ consists of the minimal paths to each state of $G$ as well as all paths that are immediate extensions of those paths. \begin{Definition} For an SDBL, $f$, we define the characteristic sample of $f$, $CS_f = N_0(f) \cup N_1(f) \cup N_2(f)$. \end{Definition}
\subsection{Algorithms}
In all the algorithms that follow, loops over prefixes of a string will proceed in order of increasing length. Also, when a subroutine returns multiple outputs (e.g., returns all the elements of an array) we assume that an appropriate loop is executed to load the returned values into the selected variables in the main program.
\subsubsection{Initializing the transducer}
\begin{Definition} Given a string $x$ over the input alphabet of an SDT $G$, we say that $G$ is \emph{tree-like below $x$} if every path which begins at $q_x$ ends at a state which is the end state of exactly one transition. These states are called the \emph{states below $x$}. $G$ is said to be \emph{tree-like} if it is tree-like below its unique initial state. \end{Definition}
Consider a dataset, $\mathcal D$. We define an initial transducer by creating a state for every member of $T[\mbox{dom}(\mathcal D)]$. A tree-like transducer is produced where all transitions output only $\lambda$ except for the \#-transitions at members of $\mbox{dom}(\mathcal D)$. All outputs in the dataset are assigned to the \#-transitions.
\begin{algorithm}[H]\label{alg-initial} \caption{Forming the initial tree-like transducer (INITIAL)} \KwData{A finite collection of translation pairs, $\mathcal D$.} \KwResult{A tree-like SDT, $G_{\mathcal D}$.} \For{$\langle x, X \rangle \in \mathcal D$}{
$\mbox{\sc{states}}[G_{\mathcal D}] \cup \{r_x\} \rightarrow \mbox{\sc{states}}[G_{\mathcal D}]$ \\
$E_{G_{\mathcal D}} \cup \{ e_x^{\#} = \langle r_x, r_{\lambda}, \#, X \rangle \} \rightarrow E_{G_{\mathcal D}}$\\
\If {$x \neq \lambda$}{
\For{$\lambda \prec y \preceq x$}{
$\mbox{\sc{states}}[G_{\mathcal D}] \cup \{r_y\} \rightarrow \mbox{\sc{states}}[G_{\mathcal D}]$ \\
$E_{G_{\mathcal D}} \cup \{ e_y = \langle r_{y^-}, r_y, y(|y|-1), \lambda \rangle \} \rightarrow E_{G_{\mathcal D}}$ \\
}
} } \Return $G_{\mathcal D}$ \end{algorithm}
The transducer that results from a run of Algorithm \ref{alg-initial} recognizes the translations in $\mathcal D$ and no other translations.
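For concreteness, a possible Python rendering of Algorithm \ref{alg-initial} is sketched below. The dictionary-based representation is ours and purely illustrative: states are the prefixes of the input strings occurring in $\mathcal D$, and \texttt{edges} maps a pair (start state, input symbol) to a pair (end state, output set); all outputs from the dataset are collected on the \#-transitions, whose end state is $r_\lambda$ (the empty string).
\begin{verbatim}
# Hypothetical rendering of INITIAL; "" stands for lambda.
def initial_transducer(D):
    states, edges = {""}, {}
    for x, X in D:
        states.add(x)
        edges.setdefault((x, "#"), ("", set()))[1].add(X)
        for i in range(1, len(x) + 1):          # non-empty prefixes of x
            y = x[:i]
            states.add(y)
            edges.setdefault((y[:-1], y[-1]), (y, {""}))  # output {lambda}
    return states, edges
\end{verbatim}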
\subsubsection{Generating an array of all valid antichains}\label{genVacSection}
In order to simplify the presentation of the algorithms, we will not include the algorithms for several simple functions. In particular, we will assume that $LEXORDER(A)$ takes an array, $A$, as an input and returns an array with the same contents as $A$, but in lexicographic order. $LLEXORDER(A)$ performs the same function, but for the $<_{llex}$-ordering. $LEXLEAST$ and $LLEXLEAST$ will be applied to sets and arrays and will return the $<_{lex}$- and $<_{llex}$-least member, respectively. For sets of strings $P$ and $S$, we will use the operations $P^{-1}S$ and $P*S$ as built-in arithmetic operations. Given an input string, $x$, output strings, $Z$ and $W$, and a set of translation pairs, $\mathcal D$, the function $COMPARE(x,Z,W,\mathcal D)$ returns \textsc{true} if, for every $\langle x,ZR \rangle, \langle x,WS \rangle \in \mathcal D$, the queries $[x,WR]_f$ and $[x,ZS]_f$ return values of \textsc{true}. Otherwise, $COMPARE(x,Z,W,\mathcal D)$ returns \textsc{false}. Applying the same notation used above, if $x$ is an input string, then $\hat{x}$ is the $<_{llex}$-least member of $L$ extending $x$. Using these functions, we define an algorithm to create a list of all valid antichains when considering the tree of outputs of a single input string.
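Among these helper functions, $COMPARE$ is the one that drives the translation queries; a possible Python sketch (our own names; \texttt{query} is the oracle $[\cdot,\cdot]_f$) is given below. Algorithm \ref{VAC-alg} below organizes the calls to this test.
\begin{verbatim}
# COMPARE(x, Z, W, D): for every <x, ZR> and <x, WS> in D, the
# queries [x, WR]_f and [x, ZS]_f must return true.
def compare(x, Z, W, D, query):
    for u, T in D:
        if u != x:
            continue
        if T.startswith(Z) and not query(x, W + T[len(Z):]):
            return False
        if T.startswith(W) and not query(x, Z + T[len(W):]):
            return False
    return True
\end{verbatim}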
\begin{algorithm}[H]\label{VAC-alg} \caption{List the valid antichains (VAC)} \KwData{A finite collection of translation pairs, $\mathcal D$; $x \in L$; $X_{\ell}$, the current least translation prefix for $x$.} \KwResult{An array, $A$, of all maximal antichains of the translations of $x$ in $\mathcal D$ which extend $X_\ell$ and are not provably invalid.}
$X_{\ell}^{-1}\{ Y : Y \succ X_{\ell} \wedge \langle x,Y \rangle \in \mathcal D \} \rightarrow T$\\ $LLEXLEAST(T) \rightarrow Z$\\ \For{$W \prec Z$}{
$W \rightarrow AC[0]$\\
\For{$R\in T \wedge R \neq Z$}{
\For{$V \prec R$}{
$COMPARE(x,X_{\ell}W,X_{\ell}V,\mathcal D) \rightarrow status$\\
\If{$status = \textsc{true}$}{
$V \rightarrow AC[|AC|]$\\
\textbf{break}\\
}
}
\If{$status = \textsc{false}$}{
\textbf{break}\\
}
}
\If{$status = \textsc{true}$}{
$AC \rightarrow A[|A|]$\\
} } \Return{$A$}\\
\end{algorithm}
\begin{wrapfigure}{r}{0.4\textwidth} \centering \resizebox{5cm}{!}{ \xy
(0,0)*+[o]=<8pt>\hbox{\textbullet}*\frm{oo}="0-0";
(-6,0)*+[o]=<2pt>\hbox{$X_{\ell}$}="13-4-caption";
(16,-8)*+[o]=<2pt>\hbox{}*\frm{oo}="0-1"; (-16,-8)*+[o]=<2pt>\hbox{}*\frm{oo}="1-1"; (24,-16)*+[o]=<2pt>\hbox{}*\frm{oo}="0-2"; (8,-16)*+[o]=<2pt>\hbox{}*\frm{oo}="1-2"; (-24,-16)*+[o]=<2pt>\hbox{}*\frm{oo}="3-2"; (28,-24)*+[o]=<2pt>\hbox{}*\frm{oo}="0-3"; (20,-24)*+[o]=<2pt>\hbox{}*\frm{oo}="1-3"; (12,-24)*+[o]=<2pt>\hbox{}*\frm{oo}="2-3"; (4,-24)*+[o]=<2pt>\hbox{}*\frm{oo}="3-3"; (-20,-24)*+[o]=<2pt>\hbox{}*\frm{oo}="6-3"; (-28,-24)*+[o]=<2pt>\hbox{}*\frm{oo}="7-3"; (30,-32)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="0-4"; (26,-32)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="1-4"; (18,-32)*+[o]=<2pt>\hbox{}*\frm{oo}="3-4"; (14,-32)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="4-4"; (10,-32)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="5-4"; (2,-32)*+[o]=<2pt>\hbox{}*\frm{oo}="7-4"; (-18,-32)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="12-4"; (-22,-32)*+[o]=<8pt>\hbox{\textbullet}*\frm{oo}="13-4";
(-22,-36)*+[o]=<2pt>\hbox{$Z$}="13-4-caption";
(-30,-32)*+[o]=<2pt>\hbox{}*\frm{oo}="15-4"; (19,-40)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="6-5"; (17,-40)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="7-5"; (3,-40)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="14-5"; (1,-40)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="15-5"; (-29,-40)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="30-5"; (-31,-40)*+[o]=<2pt>\hbox{\textbullet}*\frm{oo}="31-5"; "0-0";"0-1"**\dir{.}; "0-0";"1-1"**\dir{-}; "0-1";"0-2"**\dir{.}; "0-1";"1-2"**\dir{.}; "1-1";"3-2"**\dir{-}; "0-2";"0-3"**\dir{.}; "0-2";"1-3"**\dir{.}; "1-2";"2-3"**\dir{.}; "1-2";"3-3"**\dir{.}; "3-2";"6-3"**\dir{-}; "3-2";"7-3"**\dir{.}; "0-3";"0-4"**\dir{.}; "0-3";"1-4"**\dir{.}; "1-3";"3-4"**\dir{.}; "2-3";"4-4"**\dir{.}; "2-3";"5-4"**\dir{.}; "3-3";"7-4"**\dir{.}; "6-3";"12-4"**\dir{.}; "6-3";"13-4"**\dir{-}; "7-3";"15-4"**\dir{.}; "3-4";"6-5"**\dir{.}; "3-4";"7-5"**\dir{.}; "7-4";"14-5"**\dir{.}; "7-4";"15-5"**\dir{.}; "15-4";"30-5"**\dir{.}; "15-4";"31-5"**\dir{.};
"0-2";"1-2"**\dir{--}; "1-2";"3-2"**\dir{--};
"0-3";"3-4"**\dir{--}; "3-4";"2-3"**\dir{--}; "2-3";"7-4"**\dir{--}; "7-4";"6-3"**\dir{--}; "6-3";"15-4"**\dir{--};
"0-4";"1-4"**\dir{--}; "1-4";"6-5"**\dir{--}; "6-5";"7-5"**\dir{--}; "7-5";"4-4"**\dir{--}; "4-4";"5-4"**\dir{--}; "5-4";"14-5"**\dir{--}; "14-5";"15-5"**\dir{--}; "15-5";"12-4"**\dir{--}; "12-4";"13-4"**\dir{--}; "13-4";"30-5"**\dir{--}; "30-5";"31-5"**\dir{--};
\endxy } \caption{$X_{\ell}$ is the least translation prefix and $Z$ is the least translation} \end{wrapfigure}
One of the inputs of Algorithm \ref{VAC-alg} is the ``current least translation prefix of $x$''. The current translation prefix will converge to the $<_{llex}$-least output string generated along the unique path corresponding to $x$. $X_{\ell}$ provides a canonical output prefix for testing outputs using translation queries. The first step of Algorithm \ref{VAC-alg} restricts $\mathcal D$ to the tree of translation pairs whose second component extends the least translation prefix. Every antichain of the tree must contain a prefix of the $<_{llex}$-least member of the tree. Because of the linear ordering of the valid antichains (see Theorem \ref{ac-linear}), there is at most one valid antichain for each prefix of the least member of the tree. COMPARE is used to look for matching nodes to form valid antichains. As can be seen in the figure, all valid antichains include prefixes of the $<_{llex}$-least member and no two valid antichains contain the same prefix. This provides both a bound on the number of valid antichains and a convenient method to search for the valid antichains.
We formalize the above intuition in the proof of the following lemma.
\begin{Lemma} Let $\mathcal D$ be a finite set consistent with an SDBL $f$ over $L$ with canonical transducer $G$ and $x \in L$. Suppose $CS_f \subseteq \mathcal D$, $x$ is $<_{llex}$-least among $y \in L$ such that $q_y = q_x$, $X_{\ell}$ is the least translation prefix of $x$ and $X$ is the $<_{llex}$-least member of $f(x)$. Given inputs $\mathcal D$, $x$ and $X_{\ell}$ and given access to translation queries about $f$, Algorithm \ref{VAC-alg} outputs an array of antichains $A$ such that if $V$ is the set of valid antichains of translations of $x$ which extend $X_{\ell}$, then each antichain in $A$ is extended by an antichain in $V$ and each antichain in $V$ contains a unique antichain in $A$. Furthermore, $A$ contains the unique antichain which is a valid antichain of all translations of $y$ that extend $X_{\ell}$ for all $y \in L$ such that $y \succeq x$. \end{Lemma}
\begin{proof} Since $x$ is $<_{llex}$-least such that $p_x$ is a path to the state at which $p_x$ terminates, $CS_f$ (hence, $\mathcal D$ also) contains $\langle x,X \rangle$. Furthermore, for each prefix $Y$ such that $X_{\ell} \preceq Y \preceq X$, if $Y$ is not a member of a valid antichain, then $\mathcal D$ contains witnessing strings so that this can be determined using translation queries (this is the content of $N_0(x)$ defined in Section \ref{sec-char-sample}). Algorithm \ref{VAC-alg} performs exactly those translation queries necessary to determine that $Y$ is not a member of a valid antichain. Thus, the array of antichains that the algorithm returns will correctly exclude all subsets of maximal antichains that contain such a $Y$.
We now show that each antichain in the output must be a subset of a valid antichain. For the sake of a contradiction, suppose an antichain in $A$ contains $Y$ and $Z$ where $X_{\ell} \preceq Y \preceq X$ and $Y$ is a member of a valid antichain, but the unique member of $V$ that contains $Y$ does not contain $Z$. By the definition of the $N_0$ component of $CS_f$, $\mathcal D$ will contain a string such that when $COMPARE$ is run, $Y$ and $Z$ will be flagged as not members of the same valid antichain.
Let $F$ be the $<_{sdf}$-maximum SDF for $f$. By the definition of $N_2(f)$, for each $Z \in F(x)$ there must be a $Y$ such that $\langle \hat{x},Y \rangle \in \mathcal D$ and $X_{\ell}Z \preceq Y$. Consequently, $A$ will contain $\{ X_{\ell} \} * F(x)$, which is the unique antichain which is a valid antichain of the extensions of $X_{\ell}$ in $f(y)$ for all $y \in L$ such that $x \preceq y$. \end{proof}
\subsubsection{Performing onwarding on a single node}
The next algorithm takes an array of antichains and produces the $<_{ac}$-greatest antichain that appears to be a valid antichain of all trees of outputs on inputs extending~$x$. As the data may still be incomplete, testing the validity for other trees is done using translation queries.
\begin{algorithm}[H]\label{TEST-alg} \caption{Testing an array of antichains against a dataset (TESTVPS)} \KwData{A string, $x$, over the input alphabet; an array, $A$, of antichains for the output tree of input $\hat{x}$; a collection of translation pairs, $\mathcal D$.} \KwResult{The $<_{ac}$-greatest member of the array, $A$, for which there is no evidence in $\mathcal D$ that the selected antichain is not valid for all output trees in the future of $x$.}
\For{$i = |A|-1; i \geq 0; i--$}{
$\mbox{`not valid'} \rightarrow status$\\
\For{$\langle xy,Z \rangle \in \mathcal D$}{
\For{$R\in A[i]$}{
\If{$R \prec Z$}{
$R^{-1}Z \rightarrow W$\\
$\mbox{`valid'} \rightarrow status$\\
\For{$Q\in A[i]$}{
\If{$[xy,QW]_f = \textsc{false}$}{
$\mbox{`not valid'} \rightarrow status$\\
\textbf{break}\\
}
}
\If{$status = \mbox{`not valid'}$}{
\textbf{break}\\
}
}
}
}
\If{$status = \mbox{`valid'}$}{
\Return{$A[i]$}\\
} }
\end{algorithm}
Observe that there will always be a valid antichain that causes the above algorithm to terminate; if there is no other, then it will terminate on $\{ \lambda \}$. In the following algorithm, we use \textsc{null} to test for the existence of an optional argument.
\begin{algorithm}[H]\label{ONWARD-alg} \caption{Onwarding a tree-like portion of a transducer (ONWARD)} \KwData{A string $x$; a transducer, $G$, which is tree-like below a string, $x$; $X_\ell$, the current least translation prefix for $x$; a collection of translation pairs, $\mathcal D$; a set of strings, $S$ (optional).} \KwResult{A transducer that differs from $G$ only on transitions whose end state is $q_x$ or a state below $x$.} $S \rightarrow P$\\ \If{$P = \textsc{null}$}{
$VAC(\mathcal D,x,X_\ell) \rightarrow A$\\
$TESTVPS(x,A,\mathcal D) \rightarrow P$\\ }
$output(e_x) * P \rightarrow output(e_x)$\\
\For{$y \in \mbox{dom}(\mathcal D) \wedge x \prec y$}{
$P^{-1} output(e_y) \rightarrow output(e_y)$\\ }
\end{algorithm}
The purpose of Algorithm \ref{ONWARD-alg} is to advance as much translation as possible in a tree-like portion of a transducer.
\subsubsection{Merging states}
Following conventions presented in \cite{cdlh-book}, we will label states during the learning process as \textsc{red} states if it is not possible to merge them with any $<_{llex}$-lesser state. Initially, only the initial state, $q_{\lambda}$, is a \textsc{red} state. We proceed through the states in $<_{llex}$-order. When a new state is found that cannot be merged with any \textsc{red} state, it becomes a new \textsc{red} state.
The next algorithm we present merges two states if there is no evidence that the underlying transducer behaves differently on extensions of the inputs of the two states. In this operation, we assume that the first argument is a \textsc{red} state, the second argument is not, and that onwarding has already been performed for both states. In order to present the algorithm succinctly, we define a function similar to $COMPARE$ from Section \ref{genVacSection}. Define $FUTURE(x,y,G,\mathcal D) = \textsc{true}$ if \begin{align*} \big(\forall X \in G[p_x] \cap \mbox{ran}(\mathcal D), &Y \in G[p_y] \cap \mbox{ran}(\mathcal D), \langle z,Z \rangle \in \mathcal D\big)\Bigg( \\ &(x \preceq z \wedge X \preceq Z \rightarrow [y(x^{-1}z),Y_0 (X^{-1}Z)]_f = \textsc{true}) \\ &\wedge (y \preceq z \wedge Y \preceq Z \rightarrow [x(y^{-1}z),X_0 (Y^{-1}Z)]_f = \textsc{true}) \Bigg), \end{align*}
where $X_0 = LLEXLEAST(G[p_x])$ and $Y_0 = LLEXLEAST(G[p_y])$. Otherwise, $FUTURE(x,y,G,\mathcal D) = \textsc{false}$. Note that finding $LLEXLEAST(G[p_x])$ does not require enumerating all elements of $G[p_x]$, the number of which could be exponential in the length of $x$. To determine $LLEXLEAST(G[p_x])$, one need only find the least element of each set of translations along the path $p_x$.
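A possible Python sketch of the $FUTURE$ test (hypothetical names: \texttt{Gx} and \texttt{Gy} stand for the data-restricted sets $G[p_x] \cap \mbox{ran}(\mathcal D)$ and $G[p_y] \cap \mbox{ran}(\mathcal D)$, and \texttt{query} is the translation-query oracle) is the following; Algorithm \ref{merge-alg} then uses this test to decide whether to merge two states.
\begin{verbatim}
def llex(s):
    return (len(s), s)                 # length-lexicographic key

def future_agree(x, y, Gx, Gy, D, query):
    X0, Y0 = min(Gx, key=llex), min(Gy, key=llex)
    for z, Z in D:
        if z.startswith(x):
            for X in Gx:
                if Z.startswith(X) and \
                   not query(y + z[len(x):], Y0 + Z[len(X):]):
                    return False
        if z.startswith(y):
            for Y in Gy:
                if Z.startswith(Y) and \
                   not query(x + z[len(y):], X0 + Z[len(Y):]):
                    return False
    return True
\end{verbatim}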
\begin{algorithm}[H]\label{merge-alg} \caption{MERGE} \KwData{A \textsc{red} state, $q_x$; a non-\textsc{red} state, $q_y$; a transducer, $G$, that is tree-like below $q_y$; a collection of translation pairs, $\mathcal D$.} \KwResult{A transducer; a boolean value of \textsc{true} if the two states have been merged and \textsc{false} otherwise.}
$FUTURE(x,y,G,\mathcal D) \rightarrow status$\\ \If{$status = \textsc{true}$}{
$q_x \rightarrow end(e_y)$\\
$\mbox{\sc{states}}[G] \setminus\{ q_y \} \rightarrow \mbox{\sc{states}}[G]$\\
\For{$z\in \mbox{dom}(\mathcal D) \wedge z \succ y$}{
\For{$y \prec w \preceq z$}{
\If{$q_{x(y^{-1}w)} \in \mbox{\sc{states}}[G]$}{
$\mbox{\sc{states}}[G] \setminus\{ q_{w} \} \rightarrow \mbox{\sc{states}}[G]$\\
}
$q_{x(y^{-1}w)} \rightarrow start(e_{w})$\\
$q_{x(y^{-1}w) input(e_{w})} \rightarrow end(e_{w})$\\
}
}
\Return{$\langle G, \textsc{true} \rangle$} } \Else{
\Return{$\langle G, \textsc{false} \rangle$} }
\end{algorithm}
If $G$ is a transducer generated from a dataset, it is likely that $G$ will include non-equivalent states for which there is no evidence in their futures to distinguish them. Ultimately, this will not be an obstacle to learning because if the characteristic sample has appeared, there will be enough data to distinguish earlier states that will be processed first.
\subsubsection{The learning algorithm}\label{learnAlgSDT}
Our final algorithm combines onwarding and merging into a single process. We proceed through the states of the initial transducer in $<_{llex}$-order, first onwarding and then attempting to merge with lesser states. If a state cannot be merged with any lesser state, it is fixed and will not subsequently be changed. The fact that such states are fixed is recorded by their membership in a set $\textsc{red}$.
\begin{algorithm}[H]\label{learningAlg} \caption{Learning an SDT} \KwData{A collection of translation pairs, $\mathcal D$.} \KwResult{A transducer.}
$INITIAL(\mathcal D) \rightarrow G$\\ $LLEXORDER(\mbox{\sc{states}}[G]) \rightarrow S$\\ $q_{\lambda} \rightarrow \textsc{red}[0]$\\ $1 \rightarrow i$\\ \For{$q_x \in S$}{
\If{$q_x \in \textsc{red} \vee q_{x^-} \not\in \textsc{red}$}{
\textbf{continue}\\
}
\Else{
$ONWARD(x,G,LLEXLEAST(G[p_x]),\mathcal D) \rightarrow G$\\
\For{$q_y \in \textsc{red}$}{
$MERGE(q_y,q_x,G,\mathcal D) \rightarrow \langle G,status \rangle$\\
\If{$status = \textsc{true}$}{
\textbf{break}\\
}
}
\If{$status = \textsc{false}$}{
$q_x \rightarrow \textsc{red}[i]$\\
$i++$\\
}
} }
\end{algorithm}
\begin{Lemma}\label{red-paths} Let $f$ be an SDBL with canonical SDT $G$ and let $\mathcal D$ be a finite set consistent with $f$ which contains $CS_f$. At every stage during the execution of Algorithm \ref{learningAlg} with input $\mathcal D$ and given access to translation queries about $f$, if $G'$ is the SDT constructed so far and $p_x$ is a path through $G'$ that exclusively involves \textsc{red} states, then $G[p_x] = G'[p_x]$. Furthermore, if $x$ and $y$ are strings such that their unique paths $p_x$ and $p_y$ terminate at different \textsc{red} states of $G'$, then the states $q_x$ and $q_y$ of $G$ are distinct. \end{Lemma}
\begin{proof} Let $G_0$ be the SDT that results from Algorithm \ref{alg-initial} and let $F$ be the $<_{sdf}$-maximum SDF for $f$. We prove the lemma by induction. Initially, the only \textsc{red} state is the initial state and the lemma holds trivially. Now suppose that the lemma holds for $G'$ at the beginning of an iteration of the main for-loop in Algorithm \ref{learningAlg}. Let $G''$ be the result of executing the next iteration of the for-loop.
If no new \textsc{red} states have been added, then a previously non-\textsc{red} state, $q_1$, had Algorithm \ref{ONWARD-alg} applied to it and was merged with a \textsc{red} state, $q_0$. Let $x$ be the $<_{llex}$-least string such that the path $p_x$ in $G'$ ends at $q_x = q_1$. Since $q_1$ was not a \textsc{red} state, $G'$ must have been tree-like below $x$ and, because of the induction hypothesis, the correct least translation prefix for $x$ will have been used by Algorithm \ref{ONWARD-alg}. Furthermore, since the unique state in $G'$ with a transition to $q_1$ is a \textsc{red} state, by the definition of $N_0(f)$, $\mathcal D$ contains examples to guarantee that Algorithm \ref{VAC-alg} correctly identifies valid antichains. Thus, Algorithm \ref{ONWARD-alg} must have identified the unique $<_{ac}$-greatest antichain which is a valid antichain of $f(y)$ for all $y \in L$ such that $y \succeq x$. By the induction hypothesis, $CS_f \subseteq \mathcal D$ must contain examples that uniquely identify the future of $q_0$. Since precisely those examples from $\mathcal D$ will be tested for $q_1$ using translation queries when Algorithm \ref{merge-alg} is run, the fact that $q_0$ and $q_1$ were merged implies that $q_0 \equiv q_1$. Consequently, all paths that only visit \textsc{red} states of $G''$ satisfy the lemma. Since no new \textsc{red} states were introduced, the second conclusion in the statement of the lemma follows automatically from the induction hypothesis.
Now suppose that a new \textsc{red} state, $q_1$, is selected. Since the state was marked as a \textsc{red} state, no $<_{llex}$-lesser state is equivalent, meaning that if $x$ is $<_{llex}$-least such that $p_x$ in $G'$ ends at $q_1$, then $x$ defines the $<_{llex}$-least path to some state of $G$. Consequently, $CS_f \subseteq \mathcal D$ contains examples to guarantee that Algorithm \ref{ONWARD-alg} identifies the correct antichain, $F(x)$. Since $q_1$ was not merged with any existing \textsc{red} state, there must be examples in $\mathcal D$ that distinguish $q_1$ from each of the \textsc{red} states. Thus, for each such \textsc{red} state, $q$, we know that $q_1 \not\equiv q$. Since the for-loop in Algorithm \ref{learningAlg} proceeds through the states in $<_{llex}$-order, this means that if $x$ is $<_{llex}$-least such that the path $p_x$ in $G'$ ends at $q_1$, then $x$ is $<_{llex}$-least such that the state $q_x$ in $G$ has the same future as $q_1$. We may conclude, therefore, that $CS_f \subseteq \mathcal D$ contains examples that uniquely identify the future of $q_1$. Finally, to prove that $G''[p_z] = G[p_z]$ for any $z$ such that $p_z$ only visits \textsc{red} states of $G''$, we need only consider paths that end at $q_1$ and do not visit $q_1$ at any other point. There is only one state, $q$, in $G''$ that has a transition to $q_1$ (because $G''$ is tree-like below $q_1$) and that state is a \textsc{red} state in both $G'$ and $G''$. Thus, $G[p_{z^-}] = G'[p_{z^-}] = G''[p_{z^-}]$. The transition from $q$ to $q_1$ with input $z(|z|-1)$ has output $F(x)$, thus $G''[p_z] = G''[p_{z^-}] * F(x) = G[p_z]$. \end{proof}
\subsection{Learnability of SDBLs}
\begin{Theorem}\label{learnableThm} The class of SDBLs is polynomially identifiable in the limit with translation queries. \end{Theorem}
\begin{proof} Let $f$ be an SDBL with canonical SDT $G$ and let $\mathcal D$ be a collection of translation pairs consistent with $f$ and containing $CS_f$. We apply Algorithm \ref{learningAlg} to learn $f$ from $\mathcal D$. To prove that Algorithm \ref{learningAlg} identifies $f$ in the limit in polynomial time, we must verify three claims. First, we must show that the size of the chosen characteristic sample is polynomial in the size of the canonical transducer of the target. Second, we must show that the algorithm terminates within a number of steps that is polynomial in the size of the canonical transducer of the target and in the size of the given data. Third, we must show the SDT produced by Algorithm \ref{learningAlg} generates $f$.
The first claim is easy. As noted in the section in which $CS_f$ was defined, $N_0(f), N_1(f)$ and $N_2(f)$ are all polynomial in the size of $G$.
An inspection of the algorithms shows that they converge in polynomial time and that only a polynomial number of translation queries are made. We conclude that the second claim is true.
Finally, we prove the third claim. Algorithm \ref{learningAlg} terminates at the point when every state in the SDT generated by Algorithm \ref{alg-initial} has been either marked as a \textsc{red} state or merged with another state. Suppose that $G'$ is the output of Algorithm \ref{learningAlg}. When the algorithm terminates, every state is a \textsc{red} state. Consequently, for any $x$ for which there is a path $p_x$ in $G'$, by Lemma \ref{red-paths} $G[p_x] = G'[p_x]$. Thus, we need only prove that for all $x$, if $x$ defines a path through $G$ then $x$ defines a path through $G'$. By the definition of $N_0(f)$, every transition in $G$ is used at least once by translations in $CS_f$. Fix $x$ which defines a path through $G$ and let $xa$ be an extension by the single character $a$. If $xa$ also defines a path through $G$, then $G$ has a transition $e$ such that $start(e)$ is the final state of the path defined by $x$ and $input(e) = a$. Let $G_0$ be the SDT generated by Algorithm \ref{alg-initial} and let $\langle z_0az_1,Z_0YZ_1 \rangle$ be a translation pair in $CS_f$ such that the path through $G$ defined by $z_0az_1$ uses $e$ when translating the character $a$. Let $q_{z_0}$ and $q_{z_0a}$ be the states of $G_0$ corresponding to the initial segments $z_0$ and $z_0a$ of $z_0az_1$. Let $x_0$ be the $<_{llex}$-least string whose path through $G$ ends at the final state of the path defined by $x$. Let $q_{x_0}$ be the state of $G_0$ corresponding to $x_0$. By the definition of $N_2(f)$, $CS_f$ contains examples that uniquely identify the future of $q_{x_0}$. When $FUTURE(x_0,z_0a,G'',\mathcal D)$ is run at some point during the execution of Algorithm \ref{learningAlg} (where $G''$ is the current form of the transducer under construction), the two strings will be recognized as having the same futures and will be merged. Thus, if we assume that the paths through $G'$ defined by $x$ and $x_0$ are the same, then $xa$ defines a path through $G'$. By induction on the length of $x$, we have shown that every string that defines a path through $G$ also defines a path through $G'$.
\end{proof}
\section{Related Results}
We establish the relationship between SDTs and two other classes of transducers that are not entirely deterministic: p-subsequential transducers and transducers that recognize $R_{ffb}$ relations. We also look at some properties of SDBLs; specifically, we show that SDBLs are not closed under composition or reversal.
\subsection{Other Forms of Non-Determinism}\label{other-non-det-section} The following definition is adapted from \cite{alla02}.
\begin{Definition}
A transducer is said to be \emph{$p$-subsequential} for some $p \in \mathbb N$ if for every transition $e$ in the transition relation, $|output(e)| = 1$ unless $input(e) = \#$, in which case $|output(e)| \leq p$. We say that a bi-language is $p$-subsequential if it is generated by a $p$-subsequential transducer. \end{Definition}
\begin{Definition} \cite{saka08}
A binary relation, $R$, is \emph{finitary} if for every $x \in \mbox{dom}(R)$ the set $\{ y : \langle x,y \rangle \in R \}$ is finite. If there is a number $n \in \mathbb N$ such that $|\{ y : \langle x,y \rangle \in R \}| \leq n$ for all $x$, then the relation is said to be \emph{bounded}. Such a relation is \emph{finite-state} if there is a transducer that generates $R$. We say that a relation is $R_{ffb}$ if it is finitary, finite-state and bounded. We say that a transducer is $R_{ffb}$ if the bi-language it generates is $R_{ffb}$. \end{Definition}
\begin{Proposition} Every $p$-subsequential bi-language is $R_{ffb}$. \end{Proposition}
\begin{proof} A $p$-subsequential bi-language is finitary and bounded as every input string has at most $p$ distinct translations. It is also clearly finite-state as it is generated by a $p$-subsequential transducer. Thus, every $p$-subsequential bi-language is also $R_{ffb}$. \end{proof}
\begin{Proposition}\label{psub-not-sdbl} There is a $p$-subsequential bi-language which is not an SDBL. \end{Proposition}
\begin{proof} If $f$ is an SDBL, $x \in \mbox{dom}(f)$ and $X,Y \in f(x)$, then either $X = Y$ or $X$ and $Y$ are incomparable. Thus, $f: \{a\} \rightarrow \{\{A,AA\}\}$ such that $f(a) = \{A,AA\}$ is not an SDBL. It is, however, $p$-subsequential. \end{proof}
\begin{Proposition} There is an SDBL which is neither $p$-subsequential nor $R_{ffb}$. \end{Proposition}
\begin{proof} Let $G$ be a transducer with a single state, which is a terminal state, and one transition which starts and ends at the unique state, has input $a$ and output $\{A,B\}$. $G$ is shown in Figure \ref{sdt-ffb}.
\begin{figure}
\caption{An SDT which generates an SDBL which is neither $R_{ffb}$ nor $p$-subsequential.}
\label{sdt-ffb}
\end{figure}
The set of translation pairs recognized by $G$ is $\{ \langle a^n,X \rangle : n \in \mathbb N \wedge X \in \{A,B\}^n \}$, which is not a bounded relation as $a^n$ is in the domain for every $n$ and has $2^n$ translations. Since it is not bounded, it is neither $R_{ffb}$ nor $p$-subsequential. \end{proof}
\begin{Proposition} There is an $R_{ffb}$ bi-language which is neither an SDBL nor $p$-subsequential. \end{Proposition}
\begin{proof} Let $G$ be the transducer in the following figure.
\begin{figure}
\caption{An $R_{ffb}$ bi-language which is not an SDBL.}
\label{diamond-1}
\end{figure}
The set of translation pairs recognized by $G$ is $S = \{ \langle a^{n+2},B^{n+2} \rangle : n \in \mathbb N \} \cup \{ \langle a^{n+2},AB^nA \rangle : n \in \mathbb N \}$. Suppose $G'$ is an SDT that recognizes $S$. For every $n$, the input string $a^{n+2}$ has exactly two translations: $B^{n+2}$ and $AB^nA$. Since $G'$ has finitely many states, the first transition on the path through $G'$ defined by $a^{n+2}$ with an output other than $\{\lambda\}$ must have $\{AX,BY\}$ as a subset of its output set for some strings $X$ and $Y$. Furthermore, the last transition on the path with output other than $\{\lambda\}$ must have $\{ZA,WB\}$ as a subset of its output set for some strings $Z$ and $W$. This implies that $a^{n+2}$ should have at least four distinct translations, which is a contradiction.
Similarly, suppose $S$ is recognized by a $p$-subsequential transducer $G_p$. Since the two translations of $a^{n+2}$ differ at their first character, it must be that for every path through $G_p$ all translation occurs at the final transition (since only terminal transitions may have more than one output). Given this observation, $G_p$ must have infinitely many states -- one for each string $a^{n+2}$. This is a contradiction. \end{proof}
\subsection{Composition and Reversal of SDBLs}
\begin{Definition} Let $f$ and $g$ be two bi-languages. We define the \emph{composition of $f$ and $g$} to be the bi-language $h$ such that $\mbox{dom}(h) = \mbox{dom}(f)$ and for every $x \in \mbox{dom}(f)$, $h(x) = \bigcup_{X \in f(x)} g(X)$. We define the \emph{reversal of $f$} to be the bi-language $k$ with domain $S = \bigcup_{x \in \mbox{dom}(f)}f(x)$ such that $k(X) = \{ x \in \mbox{dom}(f) : X \in f(x) \}$. \end{Definition}
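For finite bi-languages, both operations are straightforward to compute; the following Python sketch (our own representation: a bi-language is a dictionary mapping each input string to its set of translations, and the composition assumes that every translation of $f$ lies in the domain of $g$) mirrors the definition.
\begin{verbatim}
def compose(f, g):
    """h(x) = union of g(X) over X in f(x); dom(h) = dom(f)."""
    return {x: set().union(*(g[X] for X in f[x])) for x in f}

def reverse(f):
    k = {}
    for x, outs in f.items():
        for X in outs:
            k.setdefault(X, set()).add(x)
    return k
\end{verbatim}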
\begin{Proposition} There are SDBLs whose composition is not an SDBL. \end{Proposition}
\begin{proof} Let $G_f$ and $G_g$ be the following two SDTs and let $f$ and $g$ be the SDBLs generated by them.
\begin{figure}
\caption{Two SDTs whose composition is not an SDBL. On the left, $G_f$; on the right, $G_g$.}
\end{figure}
Observe that if $h$ is the composition of $f$ and $g$, then $h(a) = \{ 0,01 \}$. Since $0$ is a prefix of $01$, $h$ cannot be an SDBL. \end{proof}
\begin{Proposition} There is an SDBL whose reversal is not an SDBL. \end{Proposition}
\begin{proof} Let $G$ be the following SDT and let $f$ be the SDBL generated by $G$.
\begin{figure}
\caption{An SDT which generates an SDBL whose reversal is not an SDBL.}
\end{figure}
For each $n$, the reversal of $f$ has two translations of $A^{n+2}$, $ba^nb$ and $ca^nc$. From this observation we see that the reversal of $f$ cannot be an SDBL for exactly the same reason that the bi-language generated by the transducer in Figure \ref{diamond-1} cannot be an SDBL. \end{proof}
\section{Conclusion} We have presented a novel algorithm that learns a powerful class of transducers with the help of reasonable queries. A probabilistic version of these transducers was defined in \cite{akra13}. We are unaware of any results involving this version. As both probabilities and translation queries can serve the purpose of answering questions about translation pairs not present in the given data, it seems possible that probabilistic transducers could be learned without translation queries, with statistical analysis taking the role of translation queries.
The learnability of SDBLs represents a significant advance in our ability to identify underlying structure in the challenging situation where the structure is non-deterministic. While it does not strictly expand existing models (as can be seen from Proposition \ref{psub-not-sdbl}), it does contribute an enormous class of new bi-languages which are beyond the scope of deterministic transducers.
\end{document}
\begin{document}
\title{Integrated Modeling and Verification of Real-Time Systems through Multiple Paradigms}
\begin{abstract} Complex systems typically have many different parts and facets, with different characteristics. In a multi-paradigm approach to modeling, formalisms with different natures are used in combination to describe complementary parts and aspects of the system. This can have a beneficial impact on the modeling activity, as different paradigms can be better suited to describe different aspects of the system. While each paradigm provides a different view on the many facets of the system, it is of paramount importance that a coherent comprehensive model emerges from the combination of the various partial descriptions. In this paper we present a technique to model different aspects of the same system with different formalisms, while keeping the various models tightly integrated with one another.
In addition, our approach leverages the flexibility provided by a bounded satisfiability checker to encode the verification problem of the integrated model in the propositional satisfiability (SAT) problem; this allows users to carry out formal verification activities both on the whole model and on parts thereof. The effectiveness of the approach is illustrated through the example of a monitoring system.
{\bf Keywords:} Metric temporal logic, timed Petri nets, timed automata,
discretization, dense time, bounded model checking. \end{abstract}
\section{Introduction}\label{sec:intro}
Modeling paradigms come in many different flavors: graphical or textual; executable or not; formal, informal, or semi-formal; more or less abstract; with different levels of expressiveness, naturalness, conciseness, etc. Notations for the design of real-time systems, in addition, include a notion of time, whose characteristics add a further element of differentiation \cite{HM96}.
A common broad categorization of modeling notations distinguishes between \emph{operational} and \emph{descriptive} paradigms \cite{FMMR09}. Operational notations --- such as Statecharts, finite state automata, or Petri nets --- represent systems through the notions of \emph{state} and \emph{transition} (or event); system behavior consists in evolutions from state to state, triggered by event occurrences. On the other hand, descriptive paradigms --- such as temporal logics, descriptive logics, or algebraic formalisms --- model systems by declaring their fundamental \emph{properties}.
The distinction between operational and descriptive models is, like with most classifications, neither rigid nor sharp. None\-the\-less, it is often useful in practice to guide the developer in the choice of notation based on what is being modeled and what are the ultimate goals (and requirements) of the modeling endeavor. In fact, operational and descriptive notations have different --- and often complementary --- strengths and weaknesses.
Operational models, for instance, are often easier to understand by experts of domains other than computer science (mechanical engineers, control engineers, etc.), which makes them a good design vehicle in the development of complex systems involving components of many different natures. Also, once an operational model has been built, it is typically straightforward to execute, simulate, animate, or test it. On the other hand, descriptive notations are the most natural choice when writing partial models of systems, because one can build the description \emph{incrementally} by listing the (partial) known properties one at a time. For similar reasons, descriptive models are often excellent languages to document the \emph{requirements} of a system: the requirements elicitation process is usually an incremental trial-and-error activity, and thus it benefits greatly from notations which allow cumulative development.
When modeling timed systems, in addition, the choice of the time domain is a crucial one, and it can have a significant impact on the features of the model \cite{FMMR09}. For example, a dense time model is typically needed to represent true asynchrony. Discrete time, instead, is usually more amenable to automated verification, and is at the basis of a number of quite mature techniques and tools that can be deployed in practice to verify systems.
In this paper we present a technique to model different aspects of the same system with different formalisms, while keeping the various models tightly integrated with one another. In this approach, modelers can pick their preferred modeling technique and modeling paradigm (e.g., operational or descriptive, continuous or discrete) depending on the particular facet or component of the system to be described. Integration of the separate snippets in a unique model is made possible by providing a common formal semantics to the different formalisms involved. Finally, our approach leverages the flexibility provided by a bounded satisfiability checker to encode the verification problem of the integrated model in the propositional satisfiability (SAT) problem; this allows users to carry out formal verification activities both on the whole model and on parts thereof.
The technique presented in this paper hinges on Metric Temporal Logic (MTL) to provide a common semantic foundation to the integrated formalisms, and on the results presented in \cite{FR06} to integrate continuous- and discrete-time MTL fragments into a unique formal description. Operational formalisms can then be introduced in the framework by providing suitable MTL formalizations, which can then be discretized as well according to the same technique. While this idea is straightforward in principle, putting it into practice is challenging for several basic reasons. First, in order to have full discrete-time decidability we have to limit ourselves to \emph{propositional} MTL \cite{AH93}; its relatively limited expressive power makes it arduous to formalize completely the behavior of operational models (some technical facts, briefly described in Section \ref{sec:background}, justify this intuition). Second, even if we used a more expressive first-order temporal-logic language, formalizing the semantics of ``graphical'' operational formalisms is usually tricky as several semantic subtleties that are ``implicit'' in the original model must be properly understood and resolved when translating them into a logic language. See for instance extensive discussions of such subtleties in \cite{FMM94} for timed Petri nets and in \cite{vdB94} for Statecharts. Third, not any MTL axiomatization is amenable to the discretization techniques of \cite{FPR08-FM08}, as syntactically different MTL descriptions yielding the same underlying semantics provide discretizations of wildly different ``qualities''. Indeed, experience showed that the most ``natural'' axiomatizations of operational formalisms require substantial rewriting in order to work reasonably well under the discretization framework. Crafting suitable MTL descriptions has proved demanding, delicate, and crucially dependent on the features of the operational formalism at hand. In this respect, our previous work \cite{FPR08-ICFEM08} focused on a variant of Timed Automata (TA) --- a typical ``synchronous'' operational formalism. The formalization of intrinsically asynchronous components --- such as those that sit at the boundary between the system and its environment --- demands however the availability of a formalism that is both operational and ``asynchronous''. To this end, the present paper develops an axiomatization of Timed Petri Nets (TPN), an ``asynchronous'' operational formalism, integrates all three formalisms (MTL, TA, and TPN) into a unique framework, and evaluates an implementation of the framework on a monitoring system example.
The paper is structured as follows. Section \ref{sec:relatedwork} briefly discusses some works that are related to the approach and technique presented in this article. Section \ref{sec:background} introduces the relevant results on which the modeling and verification approach presented in this paper are based; more precisely, the section introduces MTL, timed automata and their MTL-based semantics, and the discretization technique for continuous-time MTL formulas. Section \ref{sec:TPN} presents the (continuous-time) MTL semantics of timed Petri nets and uses it to derive a discretized version of timed Petri nets that can be input to verification engines for discrete-time MTL (e.g., $\integers$ot). Section \ref{sec:casestudy} shows how the various formalisms can be used to describe, and then combine together in a unique model, different aspects and parts of the same system; in addition, it reports on some verification tests carried out on the modeled system. Finally, Section \ref{sec:conclusion} concludes and outlines some future works in this line of research.
\subsection{Related work} \label{sec:relatedwork}
Combining different modeling paradigms in a single framework for verification purposes is not a novel concept. In fact, there is a rich literature on du\-al-lan\-guage approaches, which combine an operational formalism and a descriptive formalism into one analysis framework \cite{FMMR09}. The operational notation is used to describe the system dynamics, whereas the properties to be checked are expressed through the descriptive notation. Model-checking techniques \cite{CGP00} are a widely-used example of a dual-language approach to formal verification. Dual-language frameworks, however, usually adopt a rigid stance, in that one formalism is used to describe the system, while another is used for the properties to be verified. In this work we propose a flexible framework in which different paradigms can be mixed for different design purposes: system modeling, property specification and also verification.
Modeling using different paradigms is a staple of UML \cite{UML}. In fact, the UML modeling language is actually a blend of different notations (message sequence charts, Statecharts, OCL formulas, etc.) with different characteristics. The UML framework provides means to describe the same (software) systems from different, possibly complementary, perspectives. However, the standard language is devoid of mechanisms to guarantee that an \emph{integrated} global view emerges from the various documents or that, in other words, the union of the different views yields a precise, coherent model.
Some work has been devoted to the (structural) transformation between models to re-use verification techniques for different paradigms and to achieve a unified semantics, similarly to the approach of this paper. Cassez and Roux \cite{CR06} provide a structural translation of TPN into TA that allows one to piggy-back the efficient model-checking tools for TA. Our approach is complementary to \cite{CR06} and similar works\footnote{See the related work section of \cite{CR06} for more examples of transformational approaches.} in several ways. First, our transformations are targeted to a discretization framework: on the one hand, this allows a more lightweight verification process as well as the inclusion of discrete-time components within the global model; on the other hand, discretization introduces incompleteness that might reduce its effectiveness. Second, we leverage a descriptive notation (MTL) rather than an operational one. This allows the seamless integration of operational and descriptive components, whereas the transformation of \cite{CR06} stays within the model-checking paradigm, where the system is modeled within the operational domain and the verified properties are modeled with a descriptive notation. Also, state-of-the-art tools for model checking of TA (and formalisms of similar expressive power) do not support full real-time temporal logics (such as TCTL) but only a subset of significantly reduced expressive power. We claim that the model and properties we consider in the example of Section \ref{sec:casestudy} are rather sophisticated and deep, even after weighing in the inherent limitations of our verification technique.
For the sake of brevity, we omit in this report a description of related works on the discretization of continuous-time models. The interested reader can refer to \cite{FPR08-FM08} for a discussion of this topic.
\section{Background}\label{sec:background}
\subsection{Continuous- and discrete-time real-time behaviors}
We represent the concept of \emph{trace} (or \emph{run}) of some real-time system through the notion of \emph{behavior}. Given a time domain $\timedomain$ and a finite set $\mathcal{P}$ of atomic propositions, a behavior $b$ is a mapping $b: \timedomain \rightarrow 2^\mathcal{P}$ which associates with every time instant $t \in \timedomain$ the set $b(t)$ of propositions that hold at $t$. $\mathcal{B}_{\timedomain}$ denotes the set of all behaviors over $\timedomain$ (for an implicit fixed set of propositions). $t \in \timedomain$ is a \emph{transition point} for behavior $b$ iff $t$ is a discontinuity point of the mapping $b$. Depending on whether $\timedomain$ is a discrete, dense, or continuous set, we call a behavior over $\timedomain$ discrete-, dense-, or continuous-time respectively. In this report, we assume the natural numbers $\naturals$ as discrete time domain and the nonnegative real numbers $\reals_{\geq 0}$ as continuous (and dense) time domain.
\paragraph{Non-Zeno and non-Berkeley.} Over continuous-time domains, it is customary to consider only physically meaningful behaviors, namely those respecting the so-called non-Zeno property. A continuous-time behavior $b$ is non-Zeno if the sequence of transition points of $b$ has no accumulation points. For a non-Zeno behavior $b$, the values to the left and to the right of any transition point $t > 0$ are well defined; we denote them by $b^-(t)$ and $b^+(t)$, respectively. When a proposition $p \in \mathcal{P}$ is such that $p \in b^-(t) \Leftrightarrow p \not\in b^+(t)$ (i.e., $p$ switches its truth value about $t$), we say that $p$ is ``triggered'' at $t$.
In order to ensure reducibility between continuous and discrete time, we consider non-Zeno behaviors with a stronger constraint, called \emph{non-Berkeleyness}. A continuous-time behavior $b$ is non-Berkeley for some positive constant $\delta \in \reals_{> 0}$ if, for all $t \in \timedomain$, there exists a closed interval $[u, u+\delta]$ of size $\delta$ such that $t \in [u, u+\delta]$ and $b$ is constant throughout $[u, u+\delta]$. Notice that a non-Berkeley behavior (for any $\delta$) is non-Zeno \emph{a fortiori}. The set of all non-Berkeley continuous-time behaviors for $\delta > 0$ is denoted by $\mathcal{B}_{\chi}^\delta \subset \mathcal{B}_{\reals_{\geq0}}$. In the following we always assume behaviors to be non-Berkeley, unless explicitly stated otherwise.
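To make the non-Berkeleyness requirement concrete, the following purely illustrative Python sketch (not part of the formal framework) checks it for a finitely represented behavior. It assumes the behavior is piecewise constant and right-continuous, and that it is given through its sorted list of transition points; under these assumptions, non-Berkeleyness for $\delta$ amounts to requiring that consecutive transition points (and the origin) are more than $\delta$ apart.
\begin{verbatim}
def is_non_berkeley(transition_points, delta):
    """Non-Berkeleyness check for a piecewise-constant, right-continuous
    behavior given by its sorted transition points (all strictly positive).
    The behavior is constant on [t_i, t_{i+1}), so every instant lies in a
    closed interval of length delta on which the behavior is constant iff
    consecutive transition points (and the origin) are more than delta apart."""
    points = [0.0] + sorted(transition_points)
    return all(b - a > delta for a, b in zip(points, points[1:]))

# Transitions at 2.5 and 2.9 are too close for delta = 0.5.
print(is_non_berkeley([1.0, 2.5, 2.9], delta=0.5))  # False
print(is_non_berkeley([1.0, 2.5, 3.1], delta=0.5))  # True
\end{verbatim}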
\paragraph{Syntax and semantics.} From a purely semantic point of view, one can consider the model of a (real-time) system simply as a set of behaviors \cite{AH92b,FMMR07-TR2007-22} over some time domain $\timedomain$ and sets of propositions. In practice, however, systems are modeled through some suitable notation: in this paper we consider a mixture of MTL formulas \cite{Koy90,AH93}, TA \cite{AD94,AFH96}, and TPN \cite{CMS99}. Given an MTL formula, a TA, or a TPN $\mu$, and a behavior $b$, $b \models \mu$ denotes that $b$ represents a system evolution which satisfies all the constraints imposed by $\mu$. If $b \models \mu$ for some $b \in \mathcal{B}_{\timedomain}$, $\mu$ is called $\timedomain$-satisfiable; if $b \models \mu$ for all $b \in \mathcal{B}_{\timedomain}$, $\mu$ is called $\timedomain$-valid. Similarly, if $b \models \mu$ for some $b \in \mathcal{B}_{\chi}^\delta$, $\mu$ is called $\chi^\delta$-satisfiable; if $b \models \mu$ for all $b \in \mathcal{B}_{\chi}^\delta$, $\mu$ is called $\chi^\delta$-valid.
\subsection{Descriptive notation: Metric Temporal Logic} Let $\mathcal{P}$ be a finite (non-empty) set of atomic propositions and $\mathcal{J}$ be the set of all (possibly unbounded) intervals of the time domain $\timedomain$ with rational endpoints. We abbreviate intervals with pseudo-arithmetic expressions, such as $=d$, $<d$, $\geq d$, for $[d,d]$, $(0,d)$, and $[d, +\infty)$, respectively.
\paragraph{MTL syntax.} The following grammar defines the syntax of (propositional) MTL, where $I \in \mathcal{J}$ and $\mathsf{p} \in \mathcal{P}$. \begin{equation*}
\phi ::= \mathsf{p} \mid \neg \phi \mid \phi_1 \wedge \phi_2 \mid
\untilMTL{I}{\phi_1, \phi_2} \mid \sinceMTL{I}{\phi_1, \phi_2} \end{equation*}
The basic temporal operator of MTL is the \emph{bounded until} $\untilMTL{I}{\phi_1, \phi_2}$ (and its past counterpart, the \emph{bounded since} $\sinceMTL{I}{}$), which says that $\phi_1$ holds until $\phi_2$ holds, with the additional constraint that $\phi_2$ must hold within interval $I$. Throughout the paper we omit the explicit treatment of past operators (i.e., $\sinceMTL{I}{}$ and derived ones), as their treatment can be trivially derived from that of the corresponding future operators.
\paragraph{MTL semantics.} MTL semantics is defined over behaviors, parametrically with respect to the choice of the time domain $\timedomain$. The semantics of Boolean connectives is the usual one; in particular, the definition of the \emph{until} operator is as follows: \\ \begin{tabular}{l c l}
$b(t) \modelstime{\timedomain} \untilMTL{I}{\phi_1, \phi_2}$ & \ \ \ iff\ \ \ &
there exists $d \in I$ such that: $b(t+d) \modelstime{\timedomain} \phi_2$ \\
& & and, for all $u \in [0, d]$ it is $b(t+u) \modelstime{\timedomain} \phi_1$ \\
$b \modelstime{\timedomain} \phi$ & \ \ \ iff\ \ \ & for all $t \in \timedomain$: $b(t) \modelstime{\timedomain} \phi$ \end{tabular}
We remark that a global satisfiability semantics is assumed, i.e., the satisfiability of formulas is implicitly evaluated over \emph{all} time instants in the time domain. This permits the direct and natural expression of most common real-time specifications (e.g., time-bounded response, time-bounded invariance, etc.) without resorting to nesting of temporal operators.
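As a concrete illustration of the discrete-time case, the following Python sketch (purely illustrative, and restricted to bounded integer intervals) evaluates formulas built from atomic propositions, negation, conjunction, and the bounded \emph{until} over a finite prefix of a discrete-time behavior, represented as a list of sets of propositions. Instants beyond the prefix are simply ignored, which is an assumption of the sketch rather than part of the MTL semantics; global satisfiability is checked by evaluating the formula at every instant of the prefix.
\begin{verbatim}
def holds(behavior, t, phi):
    """Evaluate an MTL formula at instant t of a finite discrete-time behavior
    (a list of sets of atomic propositions).  Formulas are nested tuples:
    ('p', name), ('not', f), ('and', f, g), ('until', (a, b), f, g),
    where [a, b] is a bounded integer interval."""
    kind = phi[0]
    if kind == 'p':
        return phi[1] in behavior[t]
    if kind == 'not':
        return not holds(behavior, t, phi[1])
    if kind == 'and':
        return holds(behavior, t, phi[1]) and holds(behavior, t, phi[2])
    if kind == 'until':
        (a, b), f, g = phi[1], phi[2], phi[3]
        for d in range(a, min(b, len(behavior) - 1 - t) + 1):
            if holds(behavior, t + d, g) and \
               all(holds(behavior, t + u, f) for u in range(0, d + 1)):
                return True
        return False
    raise ValueError('unknown formula: %r' % (phi,))

def globally_satisfies(behavior, phi):
    """Global satisfiability semantics: phi must hold at every instant."""
    return all(holds(behavior, t, phi) for t in range(len(behavior)))

# p holds until q, with q occurring within at most 2 time units.
trace = [{'p'}, {'p'}, {'p', 'q'}, set()]
print(holds(trace, 0, ('until', (0, 2), ('p', 'p'), ('p', 'q'))))  # True
print(globally_satisfies(trace, ('p', 'p')))  # False: p fails at the last instant
\end{verbatim}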
\paragraph{Granularity.} For an MTL formula $\phi$, let $\mathcal{J}_{\phi}$ be the set of all non-null, finite interval bounds appearing in $\phi$. Then, $\mathcal{D}_\phi$ is the set of positive values $\delta$ such that every interval bound in $\mathcal{J}_{\phi}$ is an integer when divided by $\delta$.
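For instance, the membership test $\delta \in \mathcal{D}_\phi$ can be phrased as a divisibility check on the bounds in $\mathcal{J}_\phi$; the following small Python sketch (purely illustrative) performs it with exact rational arithmetic.
\begin{verbatim}
from fractions import Fraction

def in_granularity_set(delta, interval_bounds):
    """Return True iff delta belongs to D_phi, i.e., every non-null, finite
    interval bound of the formula is an integer when divided by delta.
    Bounds and delta are exact rationals to avoid rounding issues."""
    delta = Fraction(delta)
    return delta > 0 and all(Fraction(b) % delta == 0 for b in interval_bounds)

# A formula with bounds 3/2 and 6 admits delta = 1/2 but not delta = 1.
print(in_granularity_set(Fraction(1, 2), [Fraction(3, 2), 6]))  # True
print(in_granularity_set(1, [Fraction(3, 2), 6]))               # False
\end{verbatim}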
\subsubsection{Derived (temporal) operators.} It is customary to introduce a number of derived (temporal) operators, to be used as shorthands in writing specification formulas. We assume a number of standard abbreviations such as $\bot, \top, \vee, \Rightarrow, \Leftrightarrow$; when $I = (0, \infty)$, we drop the subscript interval in temporal operators. All other derived operators used in this paper are listed in Table \ref{tab:mtl-derived} ($\delta \in \reals_{> 0}$ is a parameter used in the discretization techniques, discussed shortly). In the following we describe briefly and informally the purpose of such derived operators, focusing on future ones (the meaning of the corresponding past operators is easily derivable).
\begin{itemize}
\item For propositions in the set $\{ \gamma(y) \mid y \in Y \}$, $\bigodot_{x \in X \subseteq Y} \gamma(x)$ states that $\gamma(x)$ holds for all $x$ in $X$ and $\gamma(y)$ does not hold for any $y$ in the complement set $Y \setminus X$.
\item A few common derived temporal operators such as $\relMTL{I}{}, \diamondMTL{I}{}, \boxMTL{I}{}$ are defined with the usual meaning: $\relMTL{I}{}$ (\emph{release}) is the dual of the \emph{until} operator; $\diamondMTL{I}{\phi}$ means that $\phi$ happens within time interval $I$ in the future; $\boxMTL{I}{\phi}$ means that $\phi$ holds throughout the whole interval $I$ in the future.
\item $\nowonstrMTL{\phi}$ and $\nowonMTL{\phi}$ are useful over continuous time only, and describe $\phi$ holding throughout some unspecified non-empty interval in the strict future; more precisely, if $t$ is the current instant, there exists some $t' > t$ such that $\phi$ holds over $\langle t, t')$, where the interval is left-open for $\nowonstrMTL{}$ and left-closed for $\nowonMTL{}$.
\item $\becomesMTL{}$ and $\becomesOMTL{}$ describe different types of \emph{transitions}. Namely, $\becomesMTL{\phi_1, \phi_2}$ describes a switch from $\phi_1$ to $\phi_2$, irrespective of which value holds at the current instant, whereas $\becomesOMTL{\phi_1, \phi_2}$ describes a switch from $\phi_1$ to $\phi_2$ such that $\phi_1$ holds at the current instant and $\phi_2$ will hold in the immediate future. Note that if $\becomesMTL{\phi_1, \phi_2}$ holds at some instant $t$, $\becomesOMTL{\phi_1, \phi_2}$ holds over $(t-\delta, t)$.
\item $\becomesMTL{\phi}, \becomesOMTL{\phi}$ are shorthands for transitions of a single item; correspondingly the $\triggerLMTL{}, \triggerMTL{}, \triggerOMTL{}$ ``trigger'' operators are introduced: $\triggerLMTL{\phi}$ denotes a transition of $\phi$ from false to true or \emph{vice versa}, whereas $\triggerMTL{\phi}$ describes a similar transition where the value of $\phi$ at the current instant is unspecified.
$\triggerOMTL{\phi' \leadsto \phi}$ describes a more complex transition of $\phi$, one which is ``triggered'' by the auxiliary proposition $\phi'$.
\item It is also convenient to introduce the ``dual'' operators $\triggerNLMTL{}, \triggerNOMTL{}$ which describe ``non-transitions'' of their argument. For instance, $\triggerNLMTL{\phi}$ says that the truth value of $\phi$ (whatever it is) does not change from the current instant to the immediate future.
\item Finally, $\Alw{\phi}$ expresses the invariance of $\phi$. Since $b \modelstime{\timedomain} \Alw{\phi}$ iff $b \modelstime{\timedomain} \phi$, for any behavior $b$, $\Alw{\phi}$ can be expressed without nesting if $\phi$ is flat, through the global satisfiability semantics introduced beforehand. \end{itemize}
\begin{table}[!htb] \begin{scriptsize} \begin{center}
\begin{tabular}{|c @{$\quad \equiv \quad$} c|}
\hline
\textsc{Operator} & \textsc{Definition} \\
\hline
$\bigodot_{x \in X \subseteq Y} \gamma(x)$ & $\bigwedge_{x \in X} \gamma(x) \:\wedge\: \bigwedge_{y \in Y \setminus X} \neg \gamma(y)$ \\
\hline
$\relMTL{I}{\phi_1, \phi_2}$ & $\neg \untilMTL{I}{\neg \phi_1, \neg \phi_2}$ \\
$\redMTL{I}{\phi_1, \phi_2}$ & $\neg \sinceMTL{I}{\neg \phi_1, \neg \phi_2}$ \\
$\diamondMTL{I}{\phi}$ & $\untilMTL{I}{\top, \phi}$ \\
$\diamondPMTL{I}{\phi}$ & $\sinceMTL{I}{\top, \phi}$ \\
$\boxMTL{I}{\phi}$ & $\relMTL{I}{\bot, \phi}$ \\
$\boxPMTL{I}{\phi}$ & $\redMTL{I}{\bot, \phi}$ \\
\hline
$\nowonstrMTL{\phi}$ & $\untilMTL{(0, +\infty)}{\phi, \top} \vee (\neg \phi \wedge \relMTL{(0, +\infty)}{\phi, \bot})$ \\
$\uptonowstrMTL{\phi}$ & $\sinceMTL{(0, +\infty)}{\phi, \top} \vee (\neg \phi \wedge \redMTL{(0, +\infty)}{\phi, \bot})$ \\
$\nowonMTL{\phi}$ & $\phi \wedge \nowonstrMTL{\phi}$ \\
$\uptonowMTL{\phi}$ & $\phi \wedge \uptonowstrMTL{\phi}$ \\
\hline
$\becomesMTL{\phi_1, \phi_2}$ & $\begin{cases}
\uptonowstrMTL{\phi_1} \wedge \left( \phi_2 \vee \nowonstrMTL{\phi_2} \right) &
\text{if } \timedomain = \reals_{\geq 0} \\
\diamondPMTL{=1}{\phi_1} \wedge \diamondMTL{[0, 1]}{\phi_2} &
\text{if } \timedomain = \naturals
\end{cases}$ \\
$\becomesOMTL{\phi_1, \phi_2}$ & $\begin{cases}
\phi_1 \wedge \diamondMTL{=\delta}{\phi_2} &
\text{if } \timedomain = \reals_{\geq 0} \\
\phi_1 \wedge \diamondMTL{=1}{\phi_2} &
\text{if } \timedomain = \naturals
\end{cases}$ \\
\hline
$\becomesMTL{\phi}$ & $\becomesMTL{\neg \phi, \phi}$ \\
$\becomesOMTL{\phi}$ & $\becomesOMTL{\neg \phi, \phi}$ \\
$\triggerLMTL{\phi}$ & $\becomesLMTL{\phi} \vee \becomesLMTL{\neg \phi}$ \\
$\triggerMTL{\phi}$ & $\becomesMTL{\phi} \vee \becomesMTL{\neg \phi}$ \\
$\triggerOMTL{\phi' \leadsto \phi}$ & $\begin{cases}
\uptonowstrMTL{\neg \phi} \wedge \boxMTL{=\delta}{\phi' \Rightarrow \phi} \:\vee\: \uptonowstrMTL{\phi} \wedge \boxMTL{=\delta}{\phi' \Rightarrow \neg \phi} & \text{if } \timedomain = \reals_{\geq 0} \\
\boxPMTL{[0,1]}{\neg \phi} \wedge \boxMTL{[0,2]}{\phi' \Rightarrow \phi} \vee \boxPMTL{[0,1]}{\phi} \wedge \boxMTL{[0,2]}{\phi' \Rightarrow \neg \phi} & \text{if } \timedomain = \naturals
\end{cases}$ \\
\hline
$\triggerNMTL{\phi}$ & $\becomesMTL{\phi, \phi}$ \\
$\triggerNLMTL{\phi}$ & $\becomesLMTL{\phi, \phi} \vee \becomesLMTL{\neg \phi, \neg\phi}$ \\
$\triggerNOMTL{\phi' \leadsto \phi}$ & $\begin{cases}
\uptonowstrMTL{\phi} \wedge \boxMTL{=\delta}{\phi' \Rightarrow \phi} \:\vee\: \uptonowstrMTL{\neg \phi} \wedge \boxMTL{=\delta}{\phi' \Rightarrow \neg \phi} & \text{if } \timedomain = \reals_{\geq 0} \\
\boxPMTL{[0,1]}{\phi} \wedge \boxMTL{[0,2]}{\phi' \Rightarrow \phi} \vee \boxPMTL{[0,1]}{\neg\phi} \wedge \boxMTL{[0,2]}{\phi' \Rightarrow \neg \phi} & \text{if } \timedomain = \naturals
\end{cases}$ \\
\hline
$\Alw{\phi}$ & $\phi \wedge \boxMTL{(0, +\infty)}{\phi} \wedge \boxPMTL{(0, +\infty)}{\phi}$ \\
\hline
\end{tabular}
\caption{MTL derived temporal operators}
\label{tab:mtl-derived} \end{center} \end{scriptsize} \end{table}
\subsection{Operational notations: Timed Automata and Timed Petri Nets} \label{sec:timedautomata} For lack of space, we omit a formal presentation of TA, which were introduced in the framework in previous work \cite{FPR08-ICFEM08}, and focus on MTL and TPN in the following. Section \ref{sec:casestudy} will nonetheless informally illustrate the syntax and semantics of TA on an example, with a level of detail sufficient to understand their role within the framework.
\paragraph{Timed Petri nets syntax.} A \emph{Timed Petri Net} (TPN) is a tuple $N = \langle P, T, F, M_0, \alpha, \beta \rangle$: \begin{itemize} \item $P$ is a finite set of \emph{places}; \item $T$ is a finite set of \emph{transitions}; \item $F \subseteq (P \times T) \cup (T \times P)$ is the \emph{flow relation}; \item $M_0: P \rightarrow \naturals$ is the \emph{initial marking}; \item $\alpha: T \rightarrow \rationals_{\geq 0}$ gives the \emph{earliest firing times} of transitions; and \item $\beta: T \rightarrow \rationals_{\geq 0} \cup \{\infty\}$ gives the \emph{latest firing times} of transitions. \end{itemize}
In general, a mapping $M: P \rightarrow \naturals$ is called a \emph{marking} of $N$. Given $a \in P \cup T$, let $\pre{a} = \{ b \mid b F a \}$ and $\post{a} = \{b \mid a F b \}$ denote the preset and postset of $a$, respectively. We assume that every node $a \in P \cup T$ has a nonempty preset or a nonempty postset (or both); this is clearly without loss of generality.
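For illustration purposes only, a TPN and its preset/postset operations can be encoded as the following Python data structure; the class and field names are of course not part of the formal definition above, and the small example net at the end is hypothetical.
\begin{verbatim}
from dataclasses import dataclass
from fractions import Fraction
from typing import Dict, Set, Tuple

@dataclass
class TPN:
    """A Timed Petri Net N = <P, T, F, M0, alpha, beta> (illustrative sketch)."""
    places: Set[str]
    transitions: Set[str]
    flow: Set[Tuple[str, str]]       # pairs (p, u) or (u, p)
    initial_marking: Dict[str, int]  # M0 : P -> N
    alpha: Dict[str, Fraction]       # earliest firing times
    beta: Dict[str, Fraction]        # latest firing times (float('inf') allowed)

    def preset(self, a):
        """Preset of node a: { b | b F a }."""
        return {b for (b, c) in self.flow if c == a}

    def postset(self, a):
        """Postset of node a: { c | a F c }."""
        return {c for (b, c) in self.flow if b == a}

# A tiny example net: p1 --u--> p2, with u fireable between 1 and 3 time units.
net = TPN(
    places={'p1', 'p2'}, transitions={'u'},
    flow={('p1', 'u'), ('u', 'p2')},
    initial_marking={'p1': 1, 'p2': 0},
    alpha={'u': Fraction(1)}, beta={'u': Fraction(3)},
)
assert net.preset('u') == {'p1'} and net.postset('u') == {'p2'}
\end{verbatim}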
\paragraph{Timed Petri nets semantics.} The semantics of TPN is usually given as sequences of transition firings and place markings; see \cite{CMS99} for formal definitions. Correspondingly, a TPN is called \emph{$k$-safe} for $k \in \naturals$ iff for every reachable marking $M$ it is $M(p) \leq k$ for all $p \in P$. A TPN that is $k$-safe for some $k \in \naturals$ is called \emph{bounded}.
In this report we assume \emph{1-safe TPN}. This allows a simplified description of the semantics, where any marking is completely described by a set $M \subseteq P$ of places such that a place is marked iff it is in $M$. We remark, however, that extending the presentation to generic \emph{bounded} TPN would be routine. On the other hand, unbounded TPN would not be discretizable according to the notion of Section \ref{sec:discretization}, hence they would fit only in a different framework. To further simplify the presentation, we assume non-Berkeley behaviors for some generic $\delta > 0$ in presenting the semantics; correspondingly we do not have to consider zero-time transitions as every enabled transition is enabled for at least $\delta$ time units.
The continuous-time semantics of a 1-safe TPN $N = \langle P, T, F, M_0, \alpha, \beta \rangle$ can be conveniently introduced for behaviors over propositions in $\mathcal{P} = \mu \,\cup\, \epsilon \,\cup\, \tau = \{ \mu(p), \epsilon(p) \mid p \in P\} \cup \{ \tau(t) \mid t \in T\}$ as follows. Intuitively, at any time $t$ over a behavior $b$, $\mu(p) \in b(t)$ denotes that place $p$ is marked; $\tau(u)$ being triggered at $t$ denotes that transition $u$ fires at $t$; and $\epsilon(p)$ being triggered at $t$ denotes that place $p$ undergoes a ``zero-time unmarking'', as it will be defined shortly.\footnote{The dual ``zero-time markings'' do not occur over non-Berkeley behaviors as a consequence of zero-time transitions not occurring.} Then, $b$ is a \emph{run} of TPN $N$, and we write $b \modelstime{\reals_{\geq 0}} N$, iff the following conditions hold: \begin{itemize}
\item \emph{Initialization}: $b(0) = \epsilon \cup \tau$, and there exists a transition instant $t_{\mathrm{start}} > 0$\footnote{In the following, we will assume that $t_{\mathrm{start}} \in [0,2\delta]$ for the discretization parameter $\delta > 0$.} such that: $b(t) = b(0)$ for all $0 \leq t < t_{\mathrm{start}}$ and $b^+(t_{\mathrm{start}}) = \epsilon \cup \tau \cup \bigcup_{p \in M_0} \mu(p)$ (i.e., the places in the initial marking become marked at $t_{\mathrm{start}}$).
\item \emph{Marking}: for all instants $u > t_{\mathrm{start}}$ such that $\mu(p) \not\in b^-(u)$ and $\mu(p) \in b^+(u)$ we say that $p$ becomes marked. Correspondingly, there exists a transition $t \in \pre{p}$ such that: (i) $\tau(t)$ is triggered at $u$, (ii) for no other transition $t' \in \pre{p}$ (other than $t$ itself) $\tau(t')$ is triggered at $u$, and (iii) for no transition $t \in \post{p}$ $\tau(t)$ is triggered at $u$.
\item \emph{Unmarking}: for all instants $u > t_{\mathrm{start}}$ such that $\mu(p) \in b^-(u)$ and $\mu(p) \not\in b^+(u)$ we say that $p$ becomes unmarked. Correspondingly, there exists a transition $t \in \post{p}$ such that: (i) $\tau(t)$ is triggered at $u$, (ii) for no other transition $t' \in \post{p}$ (other than $t$ itself) $\tau(t')$ is triggered at $u$, and (iii) for no transition $t \in \pre{p}$ $\tau(t)$ is triggered at $u$.
\item \emph{Enabling}: for all instants $u > t_{\mathrm{start}}$ such that $\tau(t)$ is triggered at $u$, all places $p \in \pre{t}$ must have been marked continuously over $(u-\alpha(t), u)$ without any zero-time unmarkings of the same places occurring.
\item \emph{Bound}: for all instants $u > t_{\mathrm{start}}$ such that $\tau(t)$ has not been triggered anywhere over $(u-\beta(t), u)$ and all places $p\in\pre{t}$ have been marked continuously, one of the following must occur: (i) one of such $p$'s becomes unmarked at $u$, (ii) $\tau(t)$ is triggered at $u$, or (iii) all such $p$'s are still marked ``now on'' and some $p \in \pre{t}$ undergoes a zero-time unmarking (i.e., $\epsilon(p)$ is triggered at $u$).
\item \emph{Effect}: for all instants $u > t_{\mathrm{start}}$ such that $\tau(t)$ is triggered at $u$, any place $p\in\pre{t}$ becomes unmarked or undergoes a zero-time unmarking, and any place $p\in\post{t}$ becomes marked or undergoes a zero-time unmarking.
\item \emph{Zero-time unmarking}: for all instants $u > t_{\mathrm{start}}$ such that $\epsilon(p)$ is triggered at $u$ we say that $p$ undergoes a zero-time unmarking.
Correspondingly, there exist transitions $t_a \in \pre{p}$ and $t_b \in \post{p}$ such that $\tau(t_a)$ is triggered, $\tau(t_b)$ is triggered, and for no other transition $t' \in \pre{p} \cup \post{p}$ (other than $t_a,t_b$) $\tau(t')$ is triggered. \end{itemize}
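The conditions above are phrased over dense-time behaviors. As a rough operational intuition only, the following Python sketch checks a timed firing sequence of a 1-safe TPN against the enabling (earliest firing time) and bound (latest firing time, strong time semantics) conditions, under strong simplifying assumptions: interleaving semantics, no simultaneous firings, and no zero-time unmarkings. It reuses the illustrative \texttt{TPN} encoding and the example net sketched above.
\begin{verbatim}
def check_timed_run(net, firings, end_time):
    """Check a timed firing sequence [(time, transition), ...], observed up to
    end_time, of a 1-safe TPN against the enabling and bound conditions above
    (illustrative sketch only; interleaving semantics, no simultaneous firings,
    no zero-time unmarkings)."""
    marking = {p for p, k in net.initial_marking.items() if k > 0}
    enabled_since = {u: 0.0 for u in net.transitions if net.preset(u) <= marking}

    def bound_respected(now):
        # Strong time semantics: no enabled transition may overshoot its beta.
        return all(now - since <= net.beta[u] for u, since in enabled_since.items())

    for time, fired in sorted(firings):
        if not bound_respected(time):
            return False
        # Enabling: the fired transition must have been continuously enabled
        # for at least alpha time units.
        if fired not in enabled_since or time - enabled_since[fired] < net.alpha[fired]:
            return False
        # Effect: consume the preset and produce the postset (1-safety assumed).
        marking = (marking - net.preset(fired)) | net.postset(fired)
        # Transitions that stay enabled keep their clock; others (re)start now.
        enabled_since = {
            u: (enabled_since[u] if u in enabled_since and u != fired else time)
            for u in net.transitions if net.preset(u) <= marking
        }
    return bound_respected(end_time)

# The transition 'u' of the small net above may fire only within [1, 3].
print(check_timed_run(net, [(2.0, 'u')], end_time=5.0))  # True
print(check_timed_run(net, [], end_time=5.0))            # False: 'u' overshoots beta
\end{verbatim}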
\subsection{Discrete-time approximations of continuous-time specifications} \label{sec:discretization}
This section provides an overview of the results in \cite{FPR08-FM08} that will be used as a basis for the technique of this paper. The technique of \cite{FPR08-FM08} is based on two approximation functions for MTL formulas, called under- and over-approximation. The under-approximation function $\mathrm{\Omega}_{\delta}$ maps continuous-time MTL formulas to discrete-time formulas such that the non-validity of the latter implies the non-validity of the former, over behaviors in $\mathcal{B}_{\chi}^\delta$; in other words $\mathrm{\Omega}_{\delta}$ preserves validity from continuous to discrete time. The over-approximation function $\mathrm{O}_{\delta}$ maps continuous-time MTL formulas to discrete-time MTL formulas such that the validity of the latter implies the validity of the former, over behaviors in $\mathcal{B}_{\chi}^\delta$. We have the following fundamental verification result, which constitutes the basis of the whole verification framework in the paper.
\begin{proposition}[Approximations \cite{FPR08-FM08}] \label{prop:approximations}
For any MTL formulas $\phi_1, \phi_2$, and for any $\delta \in \mathcal{D}_{\phi_1, \phi_2}$:
(1) if $\Alw{{\underap{\phi_1}}} \Rightarrow \Alw{\overap{\phi_2}}$ is $\naturals$-valid,
then $\Alw{\phi_1} \Rightarrow \Alw{\phi_2}$ is $\chi^\delta$-valid; and (2) if $\Alw{\overap{\phi_1}} \Rightarrow \Alw{\underap{\phi_2}}$ is not $\naturals$-valid,
then $\Alw{\phi_1} \Rightarrow \Alw{\phi_2}$ is not $\chi^\delta$-valid. \end{proposition}
Proposition \ref{prop:approximations} suggests the following verification approach for MTL. Assume first a system modeled as an (arbitrarily complex) MTL formula $\phi^{\mathsf{sys}}$; in order to verify whether another MTL formula $\phi^{\mathsf{prop}}$ holds for all runs of the system, we should check the \emph{validity} of the derived MTL formula $\Alw{\phi^{\mathsf{sys}}} \Rightarrow \Alw{\phi^{\mathsf{prop}}}$, which postulates that every run of the system also satisfies the property. Over continuous time, we build the two discrete-time formulas of Proposition \ref{prop:approximations} and infer the validity of the continuous-time formula from the results of the discrete-time validity checks. The technique is incomplete: in particular, when approximation (1) is not valid and approximation (2) is valid, nothing can be inferred about the validity of the property in the original system over continuous time.
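The overall schema can be summarized by the following Python sketch, where \texttt{under\_approx}, \texttt{over\_approx}, and \texttt{is\_valid\_discrete} are hypothetical placeholders for, respectively, the two approximation functions of \cite{FPR08-FM08} and a discrete-time validity checker (e.g., one built on top of a bounded satisfiability checker). The sketch only illustrates how the two checks of Proposition~\ref{prop:approximations} are combined; it is not an actual tool interface.
\begin{verbatim}
def verify(phi_sys, phi_prop, under_approx, over_approx, is_valid_discrete):
    """Two-sided verification schema suggested by the Approximations result
    (hypothetical helpers: under_approx/over_approx map a continuous-time MTL
    formula to a discrete-time one; is_valid_discrete decides N-validity)."""
    def alw(f):             # Alw(f), in whatever syntax the checker expects
        return ('alw', f)
    def implies(a, b):
        return ('implies', a, b)

    # (1) validity of Alw(under(sys)) => Alw(over(prop)) proves the property.
    if is_valid_discrete(implies(alw(under_approx(phi_sys)),
                                 alw(over_approx(phi_prop)))):
        return 'property holds over non-Berkeley continuous time'
    # (2) non-validity of Alw(over(sys)) => Alw(under(prop)) disproves it.
    if not is_valid_discrete(implies(alw(over_approx(phi_sys)),
                                     alw(under_approx(phi_prop)))):
        return 'property violated over non-Berkeley continuous time'
    # Otherwise the (incomplete) technique gives no verdict.
    return 'inconclusive'
\end{verbatim}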
Consider now another notation $\mathcal{N}$ (e.g., TA or TPN); if we can characterize the con\-tin\-u\-ous-time semantics of any system described with $\mathcal{N}$ by means of a set of MTL formulas, we can reduce the (continuous-time) verification problem for $\mathcal{N}$ to the (con\-tin\-u\-ous-time) verification problem for MTL, and solve the latter as outlined in the previous paragraph.
There are, however, several practical hurdles that make this approach far from straightforward. First, the application of the over- and under-approximations of \cite{FPR08-FM08} requires MTL formulas written in a particular form which does not nest temporal operators. Although in principle every formula can be transformed into the required form (possibly with the addition of a finite number of fresh propositional variables), not every transformation is effective: it turns out that semantically equivalent continuous-time formulas can yield approximated discrete-time formulas of dramatically different efficacy and completeness. The axiomatization of operational formalisms (such as TA and TPN) is all the more delicate, and it requires different sets of axioms according to whether they will undergo under- or over-approximation. However, all the different axiomatizations will be shown to be continuous-time equivalent, hence the intended semantics is captured correctly in all situations. The practical application of the MTL verification technique will use the ``best'' set of axioms in every case.
\section{Discretizable MTL Axiomatizations of TPN}\label{sec:TPN} It is not too hard to devise a general, continuous-time axiomatization of the semantics of a non-trivial subclass of TPN. However, for reasons similar to those discussed in \cite{FPR08-ICFEM08} for the TA axiomatization, such an axiomatization yields a poor discretized counterpart when the technique of Section \ref{sec:discretization} is applied. Therefore, this section describes three equivalent (for non-Berkeley behaviors) continuous-time axiomatizations of the semantics of TPN (as introduced in Section \ref{sec:timedautomata}): a generic one (Section \ref{sec:generic}), one that works best for discrete-time under-approximation (Section \ref{sec:4underap}), and one that works best for discrete-time over-approximation (Section \ref{sec:4overap}). Sections \ref{sec:UA} and \ref{sec:OA} produce, respectively, the corresponding discrete-time formulas that will be used in the verification problem. Throughout this section, assume a TPN $N = \langle P, T, F, M_0, \alpha, \beta \rangle$ and the set of propositions $\mathcal{P} = \mu \cup \epsilon \cup \tau$ as in the definition of the TPN semantics (Section \ref{sec:timedautomata}). The axiomatization of TPN presented in this paper imposes that, in every marking, a place can contain at most one token. As a consequence, it captures all evolutions of any TPN that is 1-safe; however, it is also capable of describing, for a TPN that is not 1-safe (i.e., one with reachable markings in which at least one place contains more than one token), the sequences of markings in which every place has at most one token. For 1-safe TPN (either by construction or by imposition), any marking $M$ is completely described by the subset of places that are marked in $M$, which simplifies their formalization. We remark, however, that extending the axiomatization to include generic \emph{bounded} TPN would be routine.
\subsection{Generic axiomatization} \label{sec:generic}
The continuous-time semantics of a 1-safe TPN $N = \langle P, T, F, M_0, \alpha, \beta \rangle$ can be described through the set of propositions $\mathcal{P} = \mu \,\cup\, \epsilon \,\cup\, \tau$, where $ \mu = \{ \MU{p} \mid p \in P\}$, $ \epsilon = \{ \EPSILON{p} \mid p \in P\}$ and $\tau = \{ \TAU{u} \mid u \in T\}$. Intuitively, at any time $t$ in a behavior $b$, $\MU{p} \in b(t)$ denotes that place $p$ is marked; $\TAU{u}$ being ``triggered'' (see Section \ref{sec:background}) at $t$ denotes that transition $u$ fires at $t$; and $\EPSILON{p}$ being triggered at $t$ denotes that place $p$ undergoes a ``zero-time unmarking'', that is, $p$ is both unmarked and marked at the same instant (hence does not change the number of contained tokens), as it will be defined shortly.\footnote{The dual ``zero-time markings'' (in which a place $p$ is both marked and unmarked at the same instant, and hence remains empty) do not occur over non-Berkeley behaviors since, over these behaviors, transitions cannot fire in the same instant in which they are enabled.} Then, $b$ is a \emph{run} of TPN $N$, and we write $b \modelstime{\reals_{\geq 0}} N$, iff the conditions listed below hold.
\subsubsection{Places} Marking and unmarking of a place $p \in P$ are described by linking transitions of $\MU{p}$ to transitions of $\TAU{u}$ for transitions $u$ in the preset and postset of $p$. The trigger operator $\triggerMTL{}$ (matching $\becomesMTL{}$) is used for $\TAU{u}$, since the actual truth value of $\TAU{u}$ after the transition is irrelevant: all that matters is that a transition occurs.
\paragraph{Marking:} For all instants $t$ such that $\MU{p}$ becomes true at $t$ we say that $p$ becomes marked. Correspondingly, there exists a transition $u \in \pre{p}$ such that: (i) $\TAU{u}$ is triggered at $t$, (ii) for no other transition $u' \in \pre{p}$ (other than $u$ itself) $\TAU{u'}$ is triggered at $t$, and (iii) for no transition $u'' \in \post{p}$ $\TAU{u''}$ is triggered at $t$. This corresponds to the following axioms.
\begin{align}
\label{ax:u2m_i}
p \in M_0 &: \;
\becomesMTL{\MU{p}} \;\Rightarrow\;
\left(
\begin{array}{c}
\bigvee_{u \in \pre{p}} \left(
\triggerMTL{\TAU{u}}
\wedge
\bigwedge_{u'\neq u \in \pre{p}} \triggerNMTL{\TAU{u'}} \right)
\wedge
\bigwedge_{u \in \post{p}} \triggerNMTL{\TAU{u}}
\\ \vee \\
\boxPMTL{(0, \infty)}{\neg\MU{p}}
\end{array} \right) \\
\label{ax:u2m}
p \notin M_0 &: \;
\becomesMTL{\MU{p}} \;\Rightarrow\;
\bigvee_{u \in \pre{p}}
\left(
\triggerMTL{\TAU{u}}
\wedge
\bigwedge_{u'\neq u \in \pre{p}} \triggerNMTL{\TAU{u'}}
\right)
\wedge
\bigwedge_{u \in \post{p}} \triggerNMTL{\TAU{u}} \end{align}
\paragraph{Unmarking:} For all instants $t$ such that $\MU{p}$ becomes false at $t$ we say that $p$ becomes unmarked. Correspondingly, there exists a transition $u \in \post{p}$ such that: (i) $\TAU{u}$ is triggered at $t$, (ii) for no other transition $u' \in \post{p}$ (other than $u$ itself) $\TAU{u'}$ is triggered at $t$, and (iii) for no transition $u'' \in \pre{p}$ $\TAU{u''}$ is triggered at $t$.
\begin{equation} \label{ax:m2u}
\becomesMTL{\neg\MU{p}} \quad\Rightarrow\quad
\bigvee_{u \in \post{p}}
\left(
\triggerMTL{\TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \post{p}} \triggerNMTL{\TAU{u'}}
\right) \wedge
\bigwedge_{u \in \pre{p}} \triggerNMTL{\TAU{u}} \end{equation}
\subsubsection{Transitions} The lower and upper bounds on the firing of transition $u$ are specified by necessary and sufficient conditions, respectively, on transitions of proposition $\TAU{u}$. Earliest and latest firing times are introduced through MTL real-time constraints. A non-firing transition $u$ stays enabled as long as $\MU{p}$ (for $p$ in $u$'s preset) holds continuously.
\paragraph{Enabling:} For all instants $t$ such that $\TAU{u}$ is triggered at $t$, all places $p \in \pre{u}$ must have been marked continuously over $(t-\ALPHA{u}, t)$ without any zero-time unmarkings of the same places occurring.
\begin{equation} \label{ax:enabling}
\triggerMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left(
\begin{array}{c}
\uptonowstrMTL{\MU{p} \wedge \EPSILON{p}} \wedge \boxPMTL{(0,\ALPHA{u})}{ \MU{p} \wedge \EPSILON{p}}
\\ \vee \\
\uptonowstrMTL{\MU{p} \wedge \neg \EPSILON{p}} \wedge \boxPMTL{(0,\ALPHA{u})}{ \MU{p} \wedge \neg\EPSILON{p}}
\end{array}
\right) \end{equation}
\paragraph{Bound:} For all instants $t$ such that $\TAU{u}$ has not been triggered anywhere over $(t-\BETA{u}, t)$ and all places $p\in\pre{u}$ have been marked continuously, one of the following must occur: (i) one of such $p$'s becomes unmarked at $t$, (ii) $\TAU{u}$ is triggered at $t$, or (iii) all such $p$'s are still marked in $b^+(t)$ and some $p \in \pre{u}$ undergoes a zero-time unmarking (i.e., $\EPSILON{p}$ is triggered at $t$).
This is formalized by introducing two axioms for each transition $u \in T$.
\begin{scriptsize} \begin{equation} \label{ax:boundP}
\boxPMTL{(0,\BETA{u})}{\TAU{u} \wedge \bigwedge_{p \in \pre{u}} \MU{p}}
\quad\Rightarrow\quad
\left(
\begin{array}{c}
\bigvee_{p \in \pre{u}}(\neg\MU{p} \vee \nowonstrMTL{\neg\MU{p}})
\\ \vee \\
\begin{array}{c}
\bigvee_{p \in \pre{u}}
\left( \begin{array}{c}
\boxPMTL{(0,\BETA{u})}{\EPSILON{p}} \ \Rightarrow\ \neg\EPSILON{p} \vee \nowonstrMTL{\neg\EPSILON{p}}
\\ \wedge \\
\boxPMTL{(0,\BETA{u})}{\neg\EPSILON{p}} \ \Rightarrow\ \EPSILON{p} \vee \nowonstrMTL{\EPSILON{p}}
\end{array}
\right) \end{array}
\\ \vee \\
\neg \TAU{u}
\vee
\nowonstrMTL{\neg \TAU{u}}
\end{array} \right) \end{equation} \end{scriptsize}
\begin{scriptsize} \begin{equation} \label{ax:boundN}
\boxPMTL{(0,\BETA{u})}{\neg \TAU{u} \wedge \bigwedge_{p \in \pre{u}} \MU{p}}
\ \Rightarrow\
\left(
\begin{array}{c}
\bigvee_{p \in \pre{u}}(\neg\MU{p} \vee \nowonstrMTL{\neg\MU{p}})
\\ \vee \\
\begin{array}{c}
\bigvee_{p \in \pre{u}}
\left( \begin{array}{c}
\boxPMTL{(0,\BETA{u})}{\EPSILON{p}} \ \Rightarrow\ \neg\EPSILON{p} \vee \nowonstrMTL{\neg\EPSILON{p}}
\\ \wedge \\
\boxPMTL{(0,\BETA{u})}{\neg\EPSILON{p}} \ \Rightarrow\ \EPSILON{p} \vee \nowonstrMTL{\EPSILON{p}}
\end{array}
\right) \end{array}
\\ \vee \\
\TAU{u}
\vee
\nowonstrMTL{\TAU{u}}
\end{array} \right) \end{equation} \end{scriptsize}
Axioms (\ref{ax:boundP}--\ref{ax:boundN}) impose a so-called ``strong time semantics'' to the TPN model \cite{FMM94}. This is a departure from the notion of TA formalized in \cite{FPR08-ICFEM08}, for which the axioms impose what is in fact a weak time semantics \cite{FMMR09}.
\paragraph{Effect:} For all instants $t$ such that $\TAU{u}$ is triggered at $t$, every place $p\in\pre{u}$ either becomes unmarked or undergoes a zero-time unmarking, and every place $p\in\post{u}$ either becomes marked or undergoes a zero-time unmarking.
\begin{equation} \label{ax:trans2}
\triggerMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}\left( \begin{array}{c}
\becomesMTL{\neg \MU{p}} \vee \triggerMTL{\EPSILON{p}}
\end{array} \right)
\;\wedge\;
\bigwedge_{p \in \post{u}}\left( \begin{array}{c}
\becomesMTL{\MU{p}} \vee \triggerMTL{\EPSILON{p}}
\end{array} \right) \end{equation}
\subsubsection{Zero-time unmarking} For all instants $t$ such that $\EPSILON{p}$ is triggered at $t$ we say that $p$ undergoes a zero-time unmarking. Correspondingly, there exist transitions $u_a \in \pre{p}$ and $u_b \in \post{p}$ such that $\TAU{u_a}$ is triggered, $\TAU{u_b}$ is triggered, and for no other transition $u' \in \pre{p} \cup \post{p}$ (other than $u_a,u_b$) $\TAU{u'}$ is triggered.
\begin{equation} \label{ax:unm}
\triggerMTL{\EPSILON{p}} \quad\Rightarrow\quad
\bigvee_{\substack{u_a \in \pre{p} \\ u_b \in \post{p}}}
\left( \begin{array}{c}
\triggerMTL{\TAU{u_a}}
\;\wedge\;
\bigwedge_{u'\neq u_a \in \pre{p}} \triggerNMTL{\TAU{u'}}
\\ \wedge \\
\triggerMTL{\TAU{u_b}}
\;\wedge\;
\bigwedge_{u'\neq u_b \in \post{p}} \triggerNMTL{\TAU{u'}}
\end{array} \right)
\end{equation}
\subsubsection{Initialization} $b(0) = \epsilon \cup \tau$, and there exists a transition instant $t_{\mathrm{start}} > 0$
such that: $b(t) = b(0)$ for all $0 \leq t < t_{\mathrm{start}}$ and $b^+(t_{\mathrm{start}}) = \epsilon \cup \tau \cup \bigcup_{p \in M_0} \MU{p}$ (i.e., the places in the initial marking become marked at $t_{\mathrm{start}}$).
This is captured by the following axiom: \begin{equation} \label{ax:init}
\text{at $0$: }
\quad \bigwedge_{p \in P}\neg\MU{p}
\wedge \diamondMTL{[0,2\delta]}{\bigwedge_{p \in M_0} \MU{p}}
\ \wedge \ \nowonMTL{\bigwedge_{p \in P} \EPSILON{p} \wedge \bigwedge_{u \in T} \TAU{u}} \end{equation}
Finally, given a TPN $N$, the MTL formula $\psi_N$ formalizing $N$ is the conjunction of axioms \fsrf{ax:u2m_i}{ax:init}, instantiated for each place and transition of $N$.
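Operationally, building $\psi_N$ amounts to instantiating each axiom schema once per place or transition and conjoining the results. The following Python sketch (reusing the illustrative \texttt{TPN} encoding of Section~\ref{sec:timedautomata}, and with \texttt{axiom\_templates} as a hypothetical dictionary of formula-producing functions, here returning plain strings) shows the bookkeeping involved, without committing to any concrete syntax for the generated formulas.
\begin{verbatim}
def tpn_to_mtl(net, axiom_templates):
    """Build psi_N as the conjunction of the axioms above, instantiated for
    each place and transition of N (illustrative sketch; axiom_templates is a
    hypothetical mapping from axiom names to functions returning one formula,
    here simply a string)."""
    conjuncts = []
    for p in sorted(net.places):
        if net.initial_marking.get(p, 0) > 0:
            conjuncts.append(axiom_templates['marking_initial'](net, p))  # p in M0
        else:
            conjuncts.append(axiom_templates['marking'](net, p))          # p not in M0
        conjuncts.append(axiom_templates['unmarking'](net, p))
        conjuncts.append(axiom_templates['zero_time_unmarking'](net, p))
    for u in sorted(net.transitions):
        for name in ('enabling', 'bound_pos', 'bound_neg', 'effect'):
            conjuncts.append(axiom_templates[name](net, u))
    conjuncts.append(axiom_templates['initialization'](net))
    return ' & '.join('(' + c + ')' for c in conjuncts)
\end{verbatim}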
\subsection{Axiomatization for under-approximation} \label{sec:4underap} As also discussed in \cite{FPR08-ICFEM08}, operator $\becomesMTL{}$ yields very weak under-approximations when used on the left-hand side of implications. It turns out that the under-approximation of $\becomesMTL{\phi_1,\phi_2}$ is the discrete-time formula $\boxPMTL{[0,1]}{\phi_1} \wedge \phi_2$. For a proposition $x$, $\becomesMTL{x}$ is then the unsatisfiable formula $\boxPMTL{[0,1]}{\neg x} \wedge x$; correspondingly, all implications with such formulas as antecedents are trivially true and do not constrain the discrete-time system in any way.
The approximations can be significantly improved by using the more constraining $\becomesLMTL{}$ in place of $\becomesMTL{}$. One can check that the under-approximation of $\becomesLMTL{x}$ is $\becomesLMTL{x}$ itself, which describes a discrete-time transition with $\neg x$ holding at the current instant and $x$ holding at the next instant. Correspondingly, all instances of $\becomesMTL{}$ are changed into instances of $\becomesLMTL{}$ in \fsrf{ax:u2m}{ax:init} yielding \fsrf{ax:u2m4UA}{ax:init4UA}.
\subsubsection{Places}
\begin{gather}
p \in M_0:
\becomesLMTL{\MU{p}} \Rightarrow
\left(
\begin{array}{c}
\bigvee_{u \in \pre{p}}
\left(
\triggerLMTL{\TAU{u}}
\wedge
\bigwedge_{u'\neq u \in \pre{p}} \triggerNLMTL{\TAU{u'}}
\right) \wedge
\bigwedge_{u \in \post{p}} \triggerNLMTL{\TAU{u}}
\\\vee\\
\boxPMTL{[0, \infty)}{\neg\MU{p}}
\end{array} \right)
\label{ax:u2m4UA_i} \\
p \notin M_0:
\becomesLMTL{\MU{p}} \quad\Rightarrow\quad
\begin{array}{c}
\bigvee_{u \in \pre{p}}
\left(
\triggerLMTL{\TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \pre{p}} \triggerNLMTL{\TAU{u'}}
\right) \wedge
\bigwedge_{u \in \post{p}} \triggerNLMTL{\TAU{u}}
\end{array}
\label{ax:u2m4UA} \\
\becomesLMTL{\neg\MU{p}} \quad\Rightarrow\quad
\bigvee_{u \in \post{p}}
\left(
\triggerLMTL{\TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \post{p}} \triggerNLMTL{\TAU{u'}}
\right) \wedge
\bigwedge_{u \in \pre{p}} \triggerNLMTL{\TAU{u}}
\label{ax:m2u4UA} \end{gather}
\subsubsection{Transitions}
\begin{gather}
\triggerLMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left(
\begin{array}{c}
\MU{p} \wedge \EPSILON{p} \wedge \boxPMTL{(0,\ALPHA{u} -\delta)}{ \MU{p} \wedge \EPSILON{p}}
\\ \vee \\
\MU{p} \wedge \neg \EPSILON{p} \wedge \boxPMTL{(0,\ALPHA{u} -\delta)}{ \MU{p} \wedge \neg\EPSILON{p}}
\end{array}
\right)
\label{ax:enabling4UA} \\
\text{Same as }\frf{ax:boundP} \label{ax:boundP4UA} \\
\text{Same as }\frf{ax:boundN} \label{ax:boundN4UA} \\
\triggerLMTL{\TAU{u}} \quad\Rightarrow\quad \bigwedge_{p \in \pre{u}}\left( \begin{array}{c}
\becomesLMTL{\neg \MU{p}} \vee \triggerLMTL{\EPSILON{p}}
\end{array} \right) \;\wedge\; \bigwedge_{p \in \post{u}}\left( \begin{array}{c}
\becomesLMTL{\MU{p}} \vee \triggerLMTL{\EPSILON{p}}
\end{array} \right) \label{ax:trans24UA} \end{gather}
\subsubsection{Zero-time unmarking} \begin{gather}
\triggerLMTL{\EPSILON{p}} \quad\Rightarrow\quad
\bigvee_{\substack{u_a \in \pre{p} \\ u_b \in \post{p}}}
\left( \begin{array}{c}
\triggerLMTL{\TAU{u_a}}
\;\wedge\;
\bigwedge_{u'\neq u_a \in \pre{p}} \triggerNLMTL{\TAU{u'}}
\\ \wedge \\
\triggerLMTL{\TAU{u_b}}
\;\wedge\;
\bigwedge_{u'\neq u_b \in \post{p}} \triggerNLMTL{\TAU{u'}}
\end{array} \right) \label{ax:unm4UA} \end{gather}
\subsubsection{Initialization} \begin{gather}
\boxPMTL{[\delta, \infty)}{\perp}
\quad \Rightarrow \quad
\bigwedge_{p \in P}\neg\MU{p}
\wedge \diamondMTL{[0,2\delta]}{\bigwedge_{p \in M_0} \MU{p}}
\ \wedge \ \nowonMTL{\bigwedge_{p \in P} \EPSILON{p} \wedge \bigwedge_{u \in T} \TAU{u}} \label{ax:init4UA} \end{gather}
It can be shown that \fsrf{ax:u2m_i}{ax:init} are equivalent to \fsrf{ax:u2m4UA_i}{ax:init4UA} over behaviors that are non-Berkeley for $\delta$. For instance, consider \frf{ax:u2m} and \frf{ax:u2m4UA}. In order to show that \frf{ax:u2m} implies \frf{ax:u2m4UA}, let \frf{ax:u2m} and $\becomesLMTL{\MU{p}}$ hold at the current time instant $z$. $\becomesLMTL{\MU{p}}$ implies that there exists a $z' \in [z,z+\delta]$ where $\MU{p}$ shifts from false to true. \frf{ax:u2m} evaluated at $z'$ entails (among other things) that $\triggerMTL{\TAU{u}}$ holds at $z'$ for some $u \in \pre{p}$; that is, $\TAU{u}$ is triggered at $z'$. Without loss of generality, assume that $\TAU{u}$ is false before $z'$ and is true after it. The non-Berkeleyness assumption allows us to strengthen this fact, so that $\TAU{u}$ is false at $z$ as well and is true until $z+\delta$, because $z' \in [z, z+\delta]$. Hence $\triggerLMTL{\TAU{u}}$ holds at $z$. The rest of the implication is proved similarly. The proof of the converse implication that \frf{ax:u2m4UA} implies \frf{ax:u2m} also relies on the non-Berkeleyness assumption, which guarantees that there is exactly one transition of $\MU{p}$ over $[z,z+\delta]$ as a consequence of $\becomesLMTL{\MU{p}}$ holding at $z$. We omit the details of the proof, which are however along the same lines.
\subsection{Under-approximation} \label{sec:UA} The under-approximations of \fsrf{ax:u2m4UA_i}{ax:init4UA} are reported as formulas \fsrf{ax:u2m-UA_i}{ax:init-UA}. Notice the lower- and upper-bound relaxations in \fsrf{ax:enabling-UA}{ax:boundN-UA}, in accordance with the notion of under-ap\-prox\-i\-ma\-tion.
\subsubsection{Places}
\begin{gather}
\text{Syntactically the same as in }\frf{ax:u2m4UA_i}
\label{ax:u2m-UA_i} \\
\text{Syntactically the same as in }\frf{ax:u2m4UA}
\label{ax:u2m-UA} \\
\text{Syntactically the same as in }\frf{ax:m2u4UA}
\label{ax:m2u-UA} \end{gather}
\subsubsection{Transitions}
\begin{scriptsize} \begin{gather}
\triggerLMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left(
\begin{array}{c}
\MU{p} \wedge \EPSILON{p} \wedge \boxPMTL{[1,\ALPHA{u}/\delta -2]}{ \MU{p} \wedge \EPSILON{p}}
\\ \vee \\
\MU{p} \wedge \neg \EPSILON{p} \wedge \boxPMTL{[1,\ALPHA{u}/\delta -2]}{ \MU{p} \wedge \neg\EPSILON{p}}
\end{array}
\right)
\label{ax:enabling-UA} \\
\boxPMTL{[0,\BETA{u}/\delta]}{\TAU{u} \wedge \bigwedge_{p \in \pre{u}} \MU{p}}
\quad\Rightarrow\quad
\left(
\begin{array}{c}
\bigvee_{p \in \pre{u}}\diamondMTL{=1}{\neg\MU{p}}
\\ \vee \\
\bigvee_{p \in \pre{u}}
\left( \begin{array}{c}
\boxPMTL{[0,\BETA{u}/\delta]}{\EPSILON{p}} \ \Rightarrow\ \diamondMTL{=1}{\neg\EPSILON{p}}
\\ \wedge \\
\boxPMTL{[0,\BETA{u}/\delta]}{\neg\EPSILON{p}} \ \Rightarrow\ \diamondMTL{=1}{\EPSILON{p}}
\end{array}
\right)
\\ \vee \\
\diamondMTL{=1}{\neg \TAU{u}}
\end{array} \right) \label{ax:boundP-UA} \\
\boxPMTL{[0,\BETA{u}/\delta]}{\neg\TAU{u} \wedge \bigwedge_{p \in \pre{u}} \MU{p}}
\quad\Rightarrow\quad
\left(
\begin{array}{c}
\bigvee_{p \in \pre{u}}\diamondMTL{=1}{\neg\MU{p}}
\\ \vee \\
\bigvee_{p \in \pre{u}}
\left( \begin{array}{c}
\boxPMTL{[0,\BETA{u}/\delta]}{\EPSILON{p}} \ \Rightarrow\ \diamondMTL{=1}{\neg\EPSILON{p}}
\\ \wedge \\
\boxPMTL{[0,\BETA{u}/\delta]}{\neg\EPSILON{p}} \ \Rightarrow\ \diamondMTL{=1}{\EPSILON{p}}
\end{array}
\right)
\\ \vee \\
\diamondMTL{=1}{\TAU{u}}
\end{array} \right) \label{ax:boundN-UA} \\
\text{Syntactically the same as in }\frf{ax:trans24UA}
\label{ax:trans2-UA} \end{gather} \end{scriptsize}
The straightforward under-approximation of \frf{ax:boundP4UA} and \frf{ax:boundN4UA} yields formulas which have been re-arranged to eliminate redundant terms. In fact, the time bound $(0, \BETA{u})$ in the antecedent becomes $[0, \BETA{u}/\delta]$ when under-approximated. Hence, formulas such as $\boxPMTL{(0,\BETA{u})}{\gamma} \Rightarrow \neg\gamma \vee \nowonMTL{\neg\gamma}$ are under-approximated as $\boxPMTL{[0,\BETA{u}/\delta]}{\gamma} \Rightarrow \neg\gamma \vee \diamondMTL{[0,1]}{\neg\gamma}$. However, $\neg\gamma$ never holds at the current instant because it would contradict the antecedent. Correspondingly, such formulas can be simplified to $\boxPMTL{[0,\BETA{u}/\delta]}{\gamma} \Rightarrow \diamondMTL{=1}{\neg\gamma}$.
\subsubsection{Zero-time unmarking} \begin{gather}
\text{Syntactically the same as in \frf{ax:unm4UA}}
\label{ax:unm-UA} \end{gather}
\subsubsection{Initialization} \begin{gather}
\text{at $0$: }
\quad \bigwedge_{p \in P}\neg\MU{p}
\wedge \diamondMTL{[1,2]}{\bigwedge_{p \in M_0} \MU{p}}
\ \wedge \ \bigwedge_{p \in P} \EPSILON{p} \wedge \bigwedge_{u \in T} \TAU{u} \label{ax:init-UA} \end{gather}
\subsection{Axiomatization for over-approximation} \label{sec:4overap} The continuous-time operator $\becomesMTL{}$ becomes\footnote{After some semantic-preserving simplifications.} the discrete-time operator $\becomesLMTL{}$ under over-ap\-prox\-i\-ma\-tion when it occurs on the left-hand side of implications, hence it is suitable to describe antecedents of transitions that will be over-approximated. However, the over-approximation of the same operator takes a different form on the right-hand side of implications. In such cases, the over-approximation of formulas such as $\becomesMTL{x}$ is $\boxPMTL{[0,1]}{\neg x} \wedge \boxMTL{[0,1]}{x}$, which is clearly unsatisfiable. Correspondingly, the resulting over-approximated formulas would be satisfiable only when their antecedents are false, i.e., when no transition ever occurs.
After careful experimentation, we found that a workaround to this problem should exploit a weakening of the $\becomesMTL{}$ operators that occur in consequent formulas. Let us illustrate the idea as simply as possible for two propositions $x,y$ and the formula $\becomesMTL{x} \Rightarrow \becomesMTL{y}$: every transition of $x$ occurs concurrently with a transition of $y$. The formula is relaxed into the weaker $\becomesMTL{x} \Rightarrow \uptonowstrMTL{\neg y} \wedge \boxMTL{=\delta}{x \Rightarrow y}$: every transition of $x$ also triggers a transition of $y$ sometime in the future, as long as $x$ still holds $\delta$ time units in the future. The new formula is essentially equivalent to the original one for non-Berkeley behaviors for the following reasons. First, $x$ must still hold $\delta$ time units in the future, because its behavior is non-Berkeley for $\delta$; hence $y$ holds as well there and must transition somewhere over the interval $(0,\delta)$ from the current instant. In addition, the transition of $y$ cannot occur asynchronously to the transition of $x$; otherwise two distinct transitions would occur within $\delta$ time units, against the non-Berkeleyness assumption. In all, the two formulations are equivalent over non-Berkeley continuous time. Correspondingly, the $\triggerOMTL{}$ operator is introduced and used in the right-hand side of implications in the following continuous-time formulas \fsrf{ax:u2m4OA_i}{ax:init4OA}.
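For reference, instantiating the definition of $\triggerOMTL{}$ from Table~\ref{tab:mtl-derived} with $\phi' = x$ and $\phi = y$ makes the connection with the relaxation above explicit (the second disjunct symmetrically covers falling transitions of $y$):
\begin{equation*}
\becomesMTL{x} \;\Rightarrow\; \triggerOMTL{x \leadsto y},
\qquad \text{where} \qquad
\triggerOMTL{x \leadsto y} \;\equiv\;
\uptonowstrMTL{\neg y} \wedge \boxMTL{=\delta}{x \Rightarrow y}
\;\vee\;
\uptonowstrMTL{y} \wedge \boxMTL{=\delta}{x \Rightarrow \neg y}.
\end{equation*}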
\subsubsection{Places}
\begin{scriptsize} \begin{gather}
p \in M_0:
\becomesMTL{\MU{p}} \ \Rightarrow\
\left(
\begin{array}{c}
\bigvee_{u \in \pre{p}}
\left(
\triggerOMTL{\MU{p} \leadsto \TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \pre{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u'}}
\right)
\\ \wedge \\
\bigwedge_{u \in \post{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u}}
\\ \vee \\
\boxPMTL{[\delta, \infty)}{\neg\MU{p}}
\end{array} \right)
\label{ax:u2m4OA_i} \\
p \notin M_0:
\becomesMTL{\MU{p}} \ \Rightarrow\
\left( \begin{array}{c}
\bigvee_{u \in \pre{p}}
\left(
\triggerOMTL{\MU{p} \leadsto \TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \pre{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u'}}
\right)
\\ \wedge \\
\bigwedge_{u \in \post{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u}}
\end{array} \right)
\label{ax:u2m4OA} \\
\becomesMTL{\neg\MU{p}} \quad\Rightarrow\quad
\bigvee_{u \in \post{p}}
\left(
\triggerOMTL{\neg\MU{p} \leadsto \TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \post{p}} \triggerNOMTL{\neg\MU{p} \leadsto \TAU{u'}}
\right) \wedge
\bigwedge_{u \in \pre{p}} \triggerNOMTL{\neg\MU{p} \leadsto \TAU{u}}
\label{ax:m2u4OA} \end{gather} \end{scriptsize}
\subsubsection{Transitions}
\begin{scriptsize} \begin{gather}
\triggerMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left(
\begin{array}{c}
\uptonowstrMTL{\MU{p} \wedge \EPSILON{p}} \wedge \boxPMTL{[\delta,\ALPHA{u})}{ \MU{p} \wedge \EPSILON{p}}
\\ \vee \\
\uptonowstrMTL{\MU{p} \wedge \neg \EPSILON{p}} \wedge \boxPMTL{[\delta,\ALPHA{u})}{ \MU{p} \wedge \neg\EPSILON{p}}
\end{array}
\right)
\label{ax:enabling4OA} \\
\text{Same as }\frf{ax:boundP} \label{ax:boundP4OA} \\
\text{Same as }\frf{ax:boundN} \label{ax:boundN4OA} \\
\becomesMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\uptonowstrMTL{\MU{p}} \\\wedge\\ \boxMTL{=\delta}{\TAU{u} \Rightarrow \neg \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right)
\;\wedge\;
\bigwedge_{p \in \post{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\uptonowstrMTL{\neg \MU{p}} \\\wedge\\ \boxMTL{=\delta}{\TAU{u} \Rightarrow \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right)
\nonumber \\
\becomesMTL{\neg\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\uptonowstrMTL{\MU{p}} \\\wedge\\ \boxMTL{=\delta}{\neg\TAU{u} \Rightarrow \neg \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\neg\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right)
\;\wedge\;
\bigwedge_{p \in \post{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\uptonowstrMTL{\neg \MU{p}} \\\wedge\\ \boxMTL{=\delta}{\neg\TAU{u} \Rightarrow \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\neg\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right) \label{ax:trans24OA} \end{gather} \end{scriptsize}
\subsubsection{Zero-time unmarking}
\begin{scriptsize} \begin{gather}
\becomesMTL{\EPSILON{p}} \quad\Rightarrow\quad
\bigvee_{\substack{u_a \in \pre{p} \\ u_b \in \post{p}}}
\left( \begin{array}{c}
\triggerOMTL{\EPSILON{p} \leadsto \TAU{u_a}}
\;\wedge\;
\bigwedge_{u'\neq u_a \in \pre{p}} \triggerNOMTL{\EPSILON{p} \leadsto \TAU{u'}}
\\ \wedge \\
\triggerOMTL{\EPSILON{p} \leadsto \TAU{u_b}}
\;\wedge\;
\bigwedge_{u'\neq u_b \in \post{p}} \triggerNOMTL{\EPSILON{p} \leadsto \TAU{u'}}
\end{array} \right) \nonumber \\
\becomesMTL{\neg \EPSILON{p}} \quad\Rightarrow\quad
\bigvee_{\substack{u_a \in \pre{p} \\ u_b \in \post{p}}}
\left( \begin{array}{c}
\triggerOMTL{\neg \EPSILON{p} \leadsto \TAU{u_a}}
\;\wedge\;
\bigwedge_{u'\neq u_a \in \pre{p}} \triggerNOMTL{\neg \EPSILON{p} \leadsto \TAU{u'}}
\\ \wedge \\
\triggerOMTL{\neg \EPSILON{p} \leadsto \TAU{u_b}}
\;\wedge\;
\bigwedge_{u'\neq u_b \in \post{p}} \triggerNOMTL{\neg \EPSILON{p} \leadsto \TAU{u'}}
\end{array} \right) \label{ax:unm4OA} \end{gather} \end{scriptsize}
\subsubsection{Initialization} \begin{gather}
\boxPMTL{(0, \infty)}{\perp}
\quad\Rightarrow\quad
\bigwedge_{p \in P}\neg\MU{p}
\wedge \diamondMTL{[0,2\delta]}{\bigwedge_{p \in M_0} \MU{p}}
\ \wedge \ \nowonMTL{\bigwedge_{p \in P} \EPSILON{p} \wedge \bigwedge_{u \in T} \TAU{u}} \label{ax:init4OA} \end{gather}
The observations that have been introduced at the beginning of this section can be leveraged to provide a rigorous proof that \fsrf{ax:u2m4OA_i}{ax:init4OA} are equivalent to the original \fsrf{ax:u2m_i}{ax:init} over non-Berkeley continuous time. We omit the details for brevity.
\subsection{Over-approximation} \label{sec:OA} The over-approximations of \fsrf{ax:u2m4OA_i}{ax:init4OA} are reported as formulas \fsrf{ax:u2m-OA_i}{ax:init-OA}. Notice the lower- and upper-bound relaxations in \fsrf{ax:enabling-OA}{ax:boundN-OA}, in accordance with the notion of over-ap\-prox\-i\-ma\-tion.
\subsubsection{Places}
\begin{scriptsize} \begin{gather}
p \in M_0:
\becomesLMTL{\MU{p}} \ \Rightarrow\
\left( \begin{array}{c}
\bigvee_{u \in \pre{p}}
\left(
\triggerOMTL{\MU{p} \leadsto \TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \pre{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u'}}
\right)
\\ \wedge \\
\bigwedge_{u \in \post{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u}}
\\ \vee \\
\boxPMTL{[\delta, \infty)}{\neg\MU{p}}
\end{array} \right)
\label{ax:u2m-OA_i} \\
p \notin M_0:
\becomesLMTL{\MU{p}} \ \Rightarrow\
\left( \begin{array}{c}
\bigvee_{u \in \pre{p}}
\left(
\triggerOMTL{\MU{p} \leadsto \TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \pre{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u'}}
\right)
\\ \wedge \\
\bigwedge_{u \in \post{p}} \triggerNOMTL{\MU{p} \leadsto \TAU{u}}
\end{array} \right)
\label{ax:u2m-OA} \\
\becomesLMTL{\neg\MU{p}} \quad\Rightarrow\quad
\bigvee_{u \in \post{p}}
\left(
\triggerOMTL{\neg\MU{p} \leadsto \TAU{u}}
\;\wedge\;
\bigwedge_{u'\neq u \in \post{p}} \triggerNOMTL{\neg\MU{p} \leadsto \TAU{u'}}
\right) \wedge
\bigwedge_{u \in \pre{p}} \triggerNOMTL{\neg\MU{p} \leadsto \TAU{u}}
\label{ax:m2u-OA} \end{gather} \end{scriptsize}
\subsubsection{Transitions}
\begin{scriptsize} \begin{gather}
\triggerMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left(
\begin{array}{c}
\boxPMTL{[0,\ALPHA{u}/\delta+1]}{\MU{p} \wedge \EPSILON{p}}
\\ \vee \\
\boxPMTL{[0,\ALPHA{u}/\delta+1]}{\MU{p} \wedge \neg\EPSILON{p}}
\end{array}
\right)
\label{ax:enabling-OA} \\
\boxPMTL{[1,\BETA{u}/\delta-1]}{\TAU{u} \wedge \bigwedge_{p \in \pre{u}} \MU{p}}
\Rightarrow
\left(
\begin{array}{c}
\bigvee_{p \in \pre{u}}(\neg\MU{p} \vee \boxMTL{[0,1]}{\neg\MU{p}})
\\ \vee \\
\bigvee_{p \in \pre{u}}
\left( \begin{array}{c}
\boxPMTL{[1,\BETA{u}/\delta-1]}{\EPSILON{p}} \ \Rightarrow\ \neg\EPSILON{p} \vee \boxMTL{[0,1]}{\neg\EPSILON{p}}
\\ \wedge \\
\boxPMTL{[0,\BETA{u}/\delta-1]}{\neg\EPSILON{p}} \ \Rightarrow\ \EPSILON{p} \vee \diamondMTL{[0,1]}{\EPSILON{p}}
\end{array}
\right)
\\ \vee \\
\neg \TAU{u}
\end{array} \right) \label{ax:boundP-OA} \\
\boxPMTL{[1,\BETA{u}/\delta-1]}{\neg\TAU{u} \wedge \bigwedge_{p \in \pre{u}} \MU{p}}
\Rightarrow
\left(
\begin{array}{c}
\bigvee_{p \in \pre{u}}\boxMTL{[0,1]}{\neg\MU{p}}
\\ \vee \\
\bigvee_{p \in \pre{u}}
\left( \begin{array}{c}
\boxPMTL{[1,\BETA{u}/\delta-1]}{\EPSILON{p}} \ \Rightarrow\ \neg\EPSILON{p} \vee \boxMTL{[0,1]}{\neg\EPSILON{p}}
\\ \wedge \\
\boxPMTL{[0,\BETA{u}/\delta-1]}{\neg\EPSILON{p}} \ \Rightarrow\ \EPSILON{p} \vee \diamondMTL{[0,1]}{\EPSILON{p}}
\end{array}
\right)
\\ \vee \\
\TAU{u}
\end{array} \right) \label{ax:boundN-OA} \\
\becomesLMTL{\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\boxPMTL{[0,1]}{\MU{p}} \\\wedge\\ \boxMTL{[0,2]}{\TAU{u} \Rightarrow \neg \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right)
\;\wedge\;
\bigwedge_{p \in \post{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\boxPMTL{[0,1]}{\neg \MU{p}} \\\wedge\\ \boxMTL{[0,2]}{\TAU{u} \Rightarrow \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right)
\nonumber \\
\becomesLMTL{\neg\TAU{u}} \quad\Rightarrow\quad
\bigwedge_{p \in \pre{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\boxPMTL{[0,1]}{\MU{p}} \\\wedge\\ \boxMTL{[0,2]}{\neg\TAU{u} \Rightarrow \neg \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\neg\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right)
\;\wedge\;
\bigwedge_{p \in \post{u}}
\left( \begin{array}{c}
\left( \begin{array}{c}
\boxPMTL{[0,1]}{\neg \MU{p}} \\\wedge\\ \boxMTL{[0,2]}{\neg\TAU{u} \Rightarrow \MU{p}}
\end{array}\right)
\\ \vee \\
\triggerOMTL{\neg\TAU{u} \leadsto \EPSILON{p}}
\end{array} \right) \label{ax:trans2-OA} \end{gather} \end{scriptsize}
As with the under-approximation, the formulas have been conveniently simplified: the term $\uptonowstrMTL{\MU{p} \wedge \EPSILON{p}}$ in the consequent of \frf{ax:enabling4OA} is over-approximated by \linebreak $\boxPMTL{[0,1]}{\MU{p} \wedge \EPSILON{p}}$, which is subsumed by the other term $\boxPMTL{[0, \ALPHA{u}/ \delta +1 ]}{\MU{p} \wedge \EPSILON{p}}$ in the over-approximation (indeed, $\ALPHA{u}/\delta+1 \geq 2$). Subformulas $\neg \TAU{u} \vee \boxMTL{[0,1]}{\neg\TAU{u}}$ and $\TAU{u} \vee \boxMTL{[0,1]}{\TAU{u}}$ in the over-approximations \frf{ax:boundP-OA} and \frf{ax:boundN-OA}, respectively, can also be simplified. In fact, \frf{ax:enabling-OA} enforces marking and no zero-time unmarking for at least 3 time units whenever $\tau_u$ is triggered; hence $\MU{p}$ cannot be triggered over $[0,1]$, so that the terms $\boxMTL{[0,1]}{\neg\TAU{u}}$ and $\boxMTL{[0,1]}{\TAU{u}}$ are redundant.
\subsubsection{Zero-time unmarking}
\begin{scriptsize} \begin{gather}
\becomesLMTL{\EPSILON{p}} \quad\Rightarrow\quad
\bigvee_{\substack{u_a \in \pre{p} \\ u_b \in \post{p}}}
\left( \begin{array}{c}
\triggerOMTL{\EPSILON{p} \leadsto \TAU{u_a}}
\;\wedge\;
\bigwedge_{u'\neq u_a \in \pre{p}} \triggerNOMTL{\EPSILON{p} \leadsto \TAU{u'}}
\\ \wedge \\
\triggerOMTL{\EPSILON{p} \leadsto \TAU{u_b}}
\;\wedge\;
\bigwedge_{u'\neq u_b \in \post{p}} \triggerNOMTL{\EPSILON{p} \leadsto \TAU{u'}}
\end{array} \right) \nonumber \\
\becomesLMTL{\neg \EPSILON{p}} \quad\Rightarrow\quad
\bigvee_{\substack{u_a \in \pre{p} \\ u_b \in \post{p}}}
\left( \begin{array}{c}
\triggerOMTL{\neg \EPSILON{p} \leadsto \TAU{u_a}}
\;\wedge\;
\bigwedge_{u'\neq u_a \in \pre{p}} \triggerNOMTL{\neg \EPSILON{p} \leadsto \TAU{u'}}
\\ \wedge \\
\triggerOMTL{\neg \EPSILON{p} \leadsto \TAU{u_b}}
\;\wedge\;
\bigwedge_{u'\neq u_b \in \post{p}} \triggerNOMTL{\neg \EPSILON{p} \leadsto \TAU{u'}}
\end{array} \right) \label{ax:unm-OA} \end{gather} \end{scriptsize}
\subsubsection{Initialization} \begin{gather}
\text{at $0$: }
\quad \bigwedge_{p \in P}\neg\MU{p}
\wedge \diamondMTL{=1}{\bigwedge_{p \in M_0} \MU{p}}
\ \wedge \ \boxMTL{[0,1]}{\bigwedge_{p \in P} \EPSILON{p} \wedge \bigwedge_{u \in T} \TAU{u}} \label{ax:init-OA} \end{gather}
\subsection{Quality of discrete-time approximations} Proposition \ref{prop:approximations} guarantees that under-ap\-prox\-i\-ma\-tions preserve validity and over-ap\-prox\-i\-ma\-tions preserve counterexamples. It does not say anything about the \emph{quality} (or completeness) of such approximations; in particular an under-approximation can preserve validity trivially by being contradictory (i.e., inconsistent), and an over-approximation can preserve counterexamples trivially by being identically valid.
In order to make sure this is not the case, let us introduce a set of constraints that guarantees no degenerate behaviors are modeled in the approximations. Consider formulas involving metric intervals, namely \fsrf{ax:enabling-UA}{ax:boundN-UA} for the under-approximations and \fsrf{ax:enabling-OA}{ax:boundN-OA} for the over-approximation. We should check that, for every transition $u$ with dense-time firing interval $[\ALPHA{u}, \BETA{u}]$: \begin{itemize}
\item \emph{non-emptiness.} Metric intervals are non-empty; that is, $\ALPHA{u} \geq 3\delta$ from the under-approximation and $\ALPHA{u} \geq -\delta$, $\BETA{u} \geq 2\delta$ from the over-approximation.
\item \emph{consistency.} The minimum enabling interval (defined in \frf{ax:enabling-UA} and \frf{ax:enabling-OA} for under- and over-approximation respectively) is smaller than the maximum enabling interval (defined in \fsrf{ax:boundP-UA}{ax:boundN-UA} and \fsrf{ax:boundP-OA}{ax:boundN-OA} for under- and over-approximation respectively).
Correspondingly, we have the constraints $\BETA{u} \geq \ALPHA{u} - 2\delta$ from the under-approximation and $\BETA{u} \geq \ALPHA{u} + 2\delta$ from the over-approximation. \end{itemize}
The constraints can be summarized as $\ALPHA{u} \geq 3\delta$ and $\BETA{u} \geq \ALPHA{u} + 2\delta$. In our examples, we will consider only non-degenerate TPN satisfying these constraints.
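As a quick sanity check before attempting verification, these constraints can be tested mechanically. The following minimal sketch (a hypothetical helper with illustrative values, not part of the $\integers$ot{} toolchain) makes the check explicit.
\begin{verbatim}
# Minimal sketch: check the non-degeneracy constraints derived above,
# namely alpha >= 3*delta and beta >= alpha + 2*delta, for a list of
# transitions given as (alpha, beta) firing intervals.

def non_degenerate(transitions, delta):
    return all(alpha >= 3 * delta and beta >= alpha + 2 * delta
               for (alpha, beta) in transitions)

# Illustrative values only: a transition with interval [15, 30] and
# sampling period delta = 3 satisfies both constraints.
print(non_degenerate([(15, 30)], delta=3))  # prints: True
\end{verbatim}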
\section{Multi-Paradigm Modeling and Verification at Work}\label{sec:casestudy}
The multi-paradigm modeling technique presented in this paper is supported by the $\integers$ot{} bounded satisfiability checker \cite{zot,PMS07}. More precisely, we exploited the flexibility provided by the SAT-based approach pursued by $\integers$ot{}, and implemented several separate plugins to deal with the various allowed formalisms. In particular, the tool now includes plugins capable of dealing with dense-time MTL formulas \cite{FPR08-FM08}, with timed automata \cite{FPR08-ICFEM08}, and with timed Petri nets (using the formalization presented in Section \ref{sec:TPN}). In addition, $\integers$ot{} is natively capable of accepting discrete-time MTL formulas as its input language. The plugins provide primitives through which the user can define the system to be analyzed as a mixture of timed automata, dense- and discrete-time MTL formulas, and timed Petri nets. The properties to be verified for the system can also be described as a combination of fragments written using the aforementioned formal languages, though they are usually formalized through MTL formulas (using either dense or discrete time).
The tool then automatically builds, for the dense-time fragments of the system and of the property to be analyzed, the two discrete-time approximation formulas of Proposition \ref{prop:approximations}. These formulas, possibly in conjunction with MTL formulas natively written using a discrete notion of time, are checked for validity over time $\naturals$; the results of the validity check allow one to infer the validity of the integrated model, according to Proposition \ref{prop:approximations}.
The multi-paradigm verification process in $\integers$ot{} consists of three sequential phases. First, the discrete-time MTL formulas of Proposition \ref{prop:approximations} are built and are translated into a propositional satisfiability (SAT) problem. Second, the SAT instance (possibly including MTL formulas directly written using a discrete notion of time) is put into conjunctive normal form (CNF), a standard input format for SAT solvers. Third, the CNF formula is fed to a SAT solving engine (such as MiniSat, zChaff, or MiraXT).
\subsection{An Example of Multi-paradigm Modeling and Verification} We demonstrate how the modeling and verification technique presented in this paper works in practice through an example consisting of a fragment of a realistic monitoring system, which could be part of a larger supervision and control system.
The monitoring subsystem is composed of three identical sensors, a middle component that is in charge of acquiring and pre-processing the data from the sensors, and a data management component that further elaborates the data (e.g., to select appropriate control actions).
For reasons of dependability (by redundancy), the three sensors measure the same quantity (whose nature is of no relevance in this example). Each of them independently senses the measured quantity at a rate that is in general aperiodic; however, while the acquisition rate can vary, the time between consecutive acquisitions must always be no less than $T/2$ and no more than $T$ time units. Each sensor keeps track of only the last measurement, hence every new sensed value replaces the one stored by the sensor.
The data acquisition component retrieves data from the three sensors in a ``pull'' fashion. More precisely, when all three sensors have a fresh measurement available, with a delay of at least $T/10$ units, but of no more than $T/5$ time units, the data acquisition component collects the three values from the sensors (which then become stale, as they have been acquired). After having retrieved the three measurements, the component processes them (e.g., it computes a derived measurement as the average of the sensed values); the process takes between $T/5$ and $T/2$ time units.
After having computed the derived measurement, the data acquisition component sends it to the data manager, this time using a ``push'' policy which requires an acknowledgement of the data reception by the latter. The data acquisition component tries to send data to the data manager at most twice. If both attempts at data transmission fail (for example because a timeout for the reception acknowledgement by the data manager expires, or because the latter signals a reception error), the data transmission terminates with an error.
First, we model the mechanism through which the three sensors collect data from the field and the data acquisition component retrieves them for the pre-processing phase. This fragment of the model is described through a timed Petri net, and is depicted in Figure \ref{fig:TPN}.
\begin{figure}
\caption{Fragment of monitoring system modeled through a timed Petri net.}
\label{fig:TPN}
\end{figure}
In a multi-paradigm framework, the choice of one notation over another often includes a certain degree of arbitrariness. In this case, however, we chose to model the data acquisition part of the system through a TPN since we felt that the inherent asynchrony with which the three sensors collect data from the field was naturally matched by the asynchronous nature of a TPN and its tokens \cite{FMMR09}. While it is undeniable that different modelers might have made different choices, we maintain that TPN are well-suited (although not necessarily indispensable) in this case.
A further fragment of the formal model of the monitoring system is shown in Figure \ref{fig:TA}. It represents, through the formalism of timed automata presented in \cite{FPR08-ICFEM08}, the transmission protocol that the data acquisition component uses to send refined values to the data manager.\footnote{As remarked in \cite{FPR08-ICFEM08}, since, in our formalization, the definition of clock constraints forbids the introduction of exact constraints such as $A = T_2$, such constraints represent a shorthand for the valid clock constraint $T_2 \leq A < T_2 + \delta$.}
\begin{figure}
\caption{Fragment of data acquisition system modeled through a timed automaton.}
\label{fig:TA}
\end{figure}
For this second fragment of the system, the formalism of timed automata was chosen, with a certain degree of arbitrariness, because it was deemed capable of representing the timing constraints on the protocol in a more natural way, especially for what concerns the constraint on the overall duration of the process.
Finally, MTL formulas are added to ``bridge the gap'' between the fragments shown in Figures \ref{fig:TPN} and \ref{fig:TA}. This is achieved by the two following formulas, which state, respectively, that the transmission procedure can begin only if a pre-processed measurement value has been produced by the data acquisition component in the last $T$ time units (\ref{eq:CNtrans}), and that, if the system is not in the middle of a data transmission (i.e., it is $\tait{idle}$) and a new datum is being processed, then a transmission will start within $T/2$ time units, due to the upper bound of the \textit{process\_d} transition (\ref{eq:CStrans}).
\begin{eqnarray}
\tait{try} \quad\Rightarrow\quad \diamondPMTL{(0,T/2]}{\tait{data\_retrieved}} \label{eq:CNtrans}\\
\tait{data\_retrieved} \land \tait{idle} \quad\Rightarrow\quad \diamondMTL{(0,T/2]}{\tait{try}} \label{eq:CStrans} \end{eqnarray}
Notice that the automata of Figures \ref{fig:TPN} and \ref{fig:TA} are defined, as per the formalizations of \cite{FPR08-ICFEM08} and of Section \ref{sec:TPN}, over a continuous notion of time.
This choice for the time domain of these two system fragments is justified by the fact that they deal with parts of the system interacting with physical elements (measured quantities, transmission channel), for which a continuous time seems better suited.
Formulas (\ref{eq:CNtrans}) and (\ref{eq:CStrans}), instead, describe a software synchronization mechanism within the application. As a consequence, discrete time is more suitable to describe this part of the system, hence formulas (\ref{eq:CNtrans}) and (\ref{eq:CStrans}) are to be interpreted accordingly.
Finally, the model of the system to be verified is built by conjoining the discrete-time approximations for the fragments of Figures \ref{fig:TPN}-\ref{fig:TA} and the discrete-time MTL formulas (\ref{eq:CNtrans})-(\ref{eq:CStrans}). More precisely, if $\psi^{\mathrm{\Omega}_{\delta}}_{N}$ and $\psi^{\mathrm{O}_{\delta}}_{N}$ are the continuous-time MTL formulas capturing the semantics of the net of Figure \ref{fig:TPN} (see Section \ref{sec:TPN}), $\psi^{\mathrm{\Omega}_{\delta}}_{A}$, $\psi^{\mathrm{O}_{\delta}}_{A}$ are the continuous-time MTL formulas for the automaton of Figure \ref{fig:TA}, $\psi_{L}$ is the discrete-time formula $\psi_{L} = (\ref{eq:CNtrans}) \land (\ref{eq:CStrans})$, and $\phi^{\mathsf{prop}}$ is the continuous-time property to be checked for the system, then we have:
\begin{footnotesize} $$ \begin{array}{l}
\phi^+ = \Alw{\underap{\psi^{\mathrm{\Omega}_{\delta}}_{N}} \land \underap{\psi^{\mathrm{\Omega}_{\delta}}_{A}} \land \psi_{L}} \Rightarrow \Alw{\overap{\phi^{\mathsf{prop}}}} \\
\phi^- = \Alw{\overap{\psi^{\mathrm{O}_{\delta}}_{N}} \land \overap{\psi^{\mathrm{O}_{\delta}}_{A}} \land \psi_{L}} \Rightarrow \Alw{\underap{\phi^{\mathsf{prop}}}} \end{array} $$ \end{footnotesize}
Note that formula $\psi_L$, which is to be interpreted over discrete time, must not be approximated. Then, if $\phi^+$ is $\naturals$-valid, we can draw some interesting conclusions.
First, if one implements a continuous-time system that does not vary faster than the sampling time $\delta$ (i.e., whose behaviors are in $\mathcal{B}_{\chi}^\delta$), which satisfies $\psi_N$, $\psi_A$, and a continuous-time MTL formula $\psi'$ such that $\underap{\psi'} = \psi_L$, then property $\phi^{\mathsf{prop}}$ holds for this system.
It can be shown that, for any continuous-time MTL formula $\phi$, the set of behaviors satisfying $\overap{\phi}$ is a subset of those satisfying $\underap{\phi}$ (i.e., $\{b \mid b \modelstime{\naturals} \overap{\phi}\} \subseteq \{b \mid b \modelstime{\naturals} \underap{\phi}\}$). In addition, given a discrete-time behavior $b$ that satisfies $\overap{\phi}$, from \cite[Lemma 3]{FPR08-FM08} we have that any continuous-time non-Berkeley behavior $b'$ for which $b$ is a sampling satisfies $\phi$. Then, any way one reconstructs a continuous-time non-Berkeley behavior $b'$ from a discrete-time one that satisfies $\overap{\phi}$, $b'$ satisfies $\phi$. This leads us to conclude that, if one builds a discrete-time system (e.g., a piece of software) which implements --- that is, satisfies --- $\overap{\psi^{\mathrm{O}_{\delta}}_{N}}$, $\overap{\psi^{\mathrm{O}_{\delta}}_{A}}$, $\psi_L$, this satisfies discrete-time property $\overap{\phi^{\mathsf{prop}}}$; in addition, any way one uses a discrete-time behavior of this system to reconstruct a continuous-time, non-Berkeley behavior, the latter satisfies $\psi_N$, $\psi_A$, and $\phi^{\mathsf{prop}}$.
Finally, if $\phi^-$ is not $\naturals$-valid, a discrete-time system implementing $\overap{\psi^{\mathrm{O}_{\delta}}_{N}}$, $\overap{\psi^{\mathrm{O}_{\delta}}_{A}}$, $\psi_L$ violates property $\underap{\phi^{\mathsf{prop}}}$.
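Schematically, the way the outcomes of the two checks are combined follows the reasoning above; the sketch below is a restatement of that reasoning as a hypothetical decision procedure, not an excerpt of the tool.
\begin{verbatim}
# Combine the N-validity checks of phi+ and phi-, as discussed above.

def verdict(phi_plus_valid, phi_minus_valid):
    if phi_plus_valid:
        return "property holds"      # validity is preserved by phi+
    if not phi_minus_valid:
        return "property violated"   # counterexamples are preserved by phi-
    return "inconclusive"            # incompleteness region

# Example: phi+ not valid and phi- valid yields an inconclusive answer.
print(verdict(False, True))  # prints: inconclusive
\end{verbatim}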
\paragraph{Verification.} We used the system model presented above to check a number of properties to validate the effectiveness of our approach. Table \ref{tab:exp_res} shows the results and the duration of the tests. More precisely, for each test the table reports: the checked property; the values of the timing parameters in the model (i.e., $T_1, T_2, T_3$, $T$); the temporal bound $k$ of the time domain (as $\integers$ot{} is a bounded satisfiability checker, it considers all the behaviors with period $\le k$); the total amount of time to perform each phase of the verification, namely formula building (including transformation into conjunctive normal form), and propositional satisfiability checking; the results of the tests; the size (in millions of clauses) of the formula fed to the SAT-solver.\footnote{The verification tool and the complete model used for verification can be found at \texttt{http://home.dei.polimi.it/pradella}. Tests have been performed on a PC equipped with two Intel Xeon E5335 Quad-Core processors at 2 GHz, 16 GB of RAM, and GNU/Linux (kernel 2.6.29), using a single core for each test. $\integers$ot{} used the SAT-solver MiniSat 2.} Tests were performed by instantiating the parameters with different values to get an idea of how the performance of the verification algorithm is affected, both in terms of time to complete the verification and of whether the verification attempt is conclusive. In addition, the timed interaction between the data acquisition and monitoring subsystems is quite subtle and the properties under verification hold in every run of the system only for certain combinations of parameter values. Automated verification allowed us to investigate this fact in some detail.
First, we checked some properties concerning the liveness of the data collection by a sensor $X$ (with $X \in \{1, 2, 3\}$). More precisely, we analyzed whether property (\ref{p2a}) holds for the model.\footnote{Recall that all properties to be proved are implicitly closed with the $\Alw{}$ operator.}
\begin{equation} \label{p2a}
\begin{array}{c}
\tait{replaceX} \land \tait{new\_dX} \Rightarrow \\\diamondMTL{(0,T+\delta]}{\tait{replaceX} \land \neg\tait{new\_dX} \lor \neg\tait{replaceX} \land \tait{new\_dX}} \\ \land \\
\tait{replaceX} \land \neg\tait{new\_dX} \Rightarrow \\\diamondMTL{(0,T+\delta]}{\tait{replaceX} \land \tait{new\_dX} \lor \neg\tait{replaceX} \land \neg\tait{new\_dX}}\\ \land \\
\neg\tait{replaceX} \land \tait{new\_dX} \Rightarrow \\\diamondMTL{(0,T+\delta]}{\neg\tait{replaceX} \land \neg\tait{new\_dX} \lor \tait{replaceX} \land \tait{new\_dX}}\\ \land \\
\neg\tait{replaceX} \land \neg\tait{new\_dX} \Rightarrow \\\diamondMTL{(0,T+\delta]}{\neg\tait{replaceX} \land \tait{new\_dX} \lor \tait{replaceX} \land \neg\tait{new\_dX}}
\end{array}
\end{equation}
Formula (\ref{p2a}) states that triggering events of $\tait{replaceX}$ and $\tait{new\_dX}$ transitions must occur within $T+\delta$ (with $\delta$ the sampling period) time instants in the future, i.e., that either $\tait{replaceX}$ or $\tait{new\_dX}$ must change value within the next $T+\delta$ time instants. The property does not hold in general, since a firing of transition $\tait{retrieve\_d}$ would reset the time counters for transitions $\tait{replaceX}$ and $\tait{new\_dX}$. This fact can be pointed out by checking $\phi^-$, with $\phi^{\mathsf{prop}} = (\ref{p2a})$, which is not $\naturals$-valid, as shown in Table \ref{tab:exp_res}.
Under the additional hypothesis that transition $\tait{retrieve\_d}$ does not fire along $(0,T+\delta]$, however, \frf{p2a} can be shown to hold. More precisely, if (\ref{p2a}) is rewritten, as shown in formula (\ref{p2b}), by adding to the antecedents the condition that predicate $\tait{retrieve\_d}$ does not change in $(0,T+\delta]$ (i.e., transition $\tait{retrieve\_d}$ does not fire in that interval), then the new $\phi^+$ is $\naturals$-valid (as Table \ref{tab:exp_res} shows), hence (\ref{p2b}) holds for the system. \begin{equation} \label{p2b}
\begin{array}{c}
\boxMTL{(0,T+\delta]}{\tait{retrieve\_d}} \land \tait{replaceX} \land \tait{new\_dX} \Rightarrow \\
\diamondMTL{(0,T+\delta]}{\tait{replaceX} \land \neg\tait{new\_dX} \lor \neg\tait{replaceX} \land \tait{new\_dX}} \\
\wedge \dots \wedge \\
\boxMTL{(0,T+\delta]}{\tait{retrieve\_d}} \land \neg\tait{replaceX} \land \neg\tait{new\_dX} \Rightarrow \\
\diamondMTL{(0,T+\delta]}{\neg\tait{replaceX} \land \tait{new\_dX} \lor \tait{replaceX} \land \neg\tait{new\_dX}} \\
\bigvee \\
\boxMTL{(0,T+\delta]}{\neg \tait{retrieve\_d}} \land \tait{replaceX} \land \neg\tait{new\_dX} \Rightarrow \\
\diamondMTL{(0,T+\delta]}{\tait{replaceX} \land \tait{new\_dX} \lor \neg\tait{replaceX} \land \neg\tait{new\_dX}} \\
\wedge \dots \wedge \\
\boxMTL{(0,T+\delta]}{\neg\tait{retrieve\_d}} \land \neg\tait{replaceX} \land \neg\tait{new\_dX} \Rightarrow \\
\diamondMTL{(0,T+\delta]}{\neg\tait{replaceX} \land \tait{new\_dX} \lor \tait{replaceX} \land \neg\tait{new\_dX}}
\end{array}
\end{equation}
Another liveness property is formalized by formula (\ref{p1a}), which states that a datum is retrieved (i.e., place $\tait{data\_retrieved}$ is marked) at least once every $\frac{3T}{2}$ time units.
\begin{equation}\label{p1a}
\diamondMTL{(0,\frac{3T}{2}]}{\tait{data\_retrieved}} \end{equation}
Property (\ref{p1a}) cannot be established with our verification technique as it falls in the incompleteness region (i.e., $\phi^+$ is not valid and $\phi^-$ is valid, as Table \ref{tab:exp_res} shows); from the automated check we cannot draw a definitive conclusion on the validity of the property for the system. If, however, the temporal bound of formula (\ref{p1a}) is slightly relaxed as in formula (\ref{p1c}), not only is the verification conclusive, but it also shows that the property in fact holds for the system.
\begin{equation}\label{p1c}
\diamondMTL{(0,2T]}{\tait{data\_retrieved}} \end{equation}
Verification also shows that the original formula ($\ref{p1a}$) holds if the bound on transitions $\tait{replaceX}$ of the TPN is changed to $[\frac{4T}{5}, T]$ (property (\ref{p1a}') in Table \ref{tab:exp_res}).
Formula (\ref{p3}) expresses the maximum delay between sensor collection and data sending. More precisely, if each sensor has provided a measurement and transition $\tait{retrieve\_d}$ fires, then the timed automaton will enter state $\tait{try}$ within $T$ instants. The validity of this formula would allow us to check that the two parts of the system modeled by the TPN and by the TA are correctly ``bridged'' by axioms (\ref{eq:CNtrans}) and (\ref{eq:CStrans}). As Table \ref{tab:exp_res} shows, property (\ref{p3}) does not hold; this occurs because, when place $\tait{data\_retrieved}$ is marked, the TA might not be in state $\tait{idle}$.
\begin{equation}\label{p3}
\tait{data\_retrieved} \Rightarrow \diamondMTL{(0, T]}{\tait{try}} \end{equation}
Axiom (\ref{eq:CStrans}) states that a $\tait{try}$ state is entered within $T/2$ if $\tait{data\_retrieved}$ holds when $\tait{idle}$ holds. A deeper analysis of the timing constraints suggests that this condition depends on the maximum transmission time $T_3$ of the TA, which defines the maximum delay between two consecutive occurrences of $\tait{idle}$. If the system is in $\tait{data\_retrieved}$ and not in $\tait{idle}$, then the next $\tait{idle}$ state will be within $T_3$ instants in the future; moreover, $\tait{data\_retrieved}$ will be unmarked within $T/2$. This suggests that the following property (\ref{p3a}) is valid:
\begin{equation}\label{p3a}
\boxMTL{(0,T_3]}{\tait{data\_retrieved}} \Rightarrow \diamondMTL{(0, T]}{\tait{try}} \end{equation}
This property also falls in the incompleteness region of the verification technique. However, the following slight relaxation of formula (\ref{p3a}) can be proved to hold for the system:
\begin{equation}\label{p3b}
\boxMTL{(0,T_3+\delta]}{\tait{data\_retrieved}} \Rightarrow \diamondMTL{(0, T]}{\tait{try}} \end{equation}
\begin{table}[!htb] \begin{scriptsize} \begin{center}
\begin{tabular}{|c | c c c c c | c c c c c|}
\hline
\textsc{Pr} & \textsc{$T_1$} & \textsc{$T_2$} & \textsc{$T_3$} & \textsc{$T$} & \textsc{k} & \textsc{Pre} (min.) & \textsc{CNF} (hrs.) & \textsc{SAT} (hrs.) & \textsc{$\naturals$-valid} & \textsc{\# Cl}$ \cdot 10^6$ \\
\hline
\ref{p2a}: $\phi^+$ & 3 & 6 & 18 & 30 & 90 & 1.9877 & 1.854 & 2.2322 & $\perp$ & 12.4148\\
\ref{p2a}: $\phi^-$ & 3 & 6 & 18 & 30 & 90 & 3.0743 & 6.2533 & 5.3518 & $\perp$ & 21.306\\
\ref{p2a}: $\phi^+$ & 3 & 9 & 36 & 30 & 90 & 2.425 & 2.5699 & 2.5368 & $\perp$ & 12.7411\\
\ref{p2a}: $\phi^-$ & 3 & 9 & 36 & 30 & 90 & 3.3372 & 6.2202 & 5.0851 & $\perp$ & 21.6323\\
\ref{p2a}: $\phi^+$ & 3 & 12 & 48 & 30 & 120 & 3.2059 & 3.6904 & 6.8226 & $\perp$ & 17.2833\\
\ref{p2a}: $\phi^-$ & 3 & 12 & 48 & 30 & 120 & 4.5452 & 10.439 & 9.2688 & $\perp$ & 29.117\\
\hline
\ref{p2b}: $\phi^+$ & 3 & 6 & 18 & 30 & 90 & 2.1074 & 1.9171 & 0.8101 & $\top$ & 12.8512\\
\ref{p2b}: $\phi^-$ & 3 & 6 & 18 & 30 & 90 & 3.1059 & 5.7381 & 3.017 & $\top$ & 21.7514\\
\ref{p2b}: $\phi^+$ & 3 & 9 & 36 & 30 & 90 & 2.6346 & 2.7726 & 0.9741 & $\top$ & 13.1775\\
\ref{p2b}: $\phi^-$ & 3 & 9 & 36 & 30 & 90 & 3.5125 & 6.4452 & 3.5557 & $\top$ & 22.0778\\
\ref{p2b}: $\phi^+$ & 3 & 12 & 48 & 30 & 120 & 3.6731 & 4.379 & 2.0955 & $\top$ & 17.8641\\
\ref{p2b}: $\phi^-$ & 3 & 12 & 48 & 30 & 120 & 5.1492 & 11.0093 & 5.0007 & $\top$ & 29.7098\\
\hline
\ref{p1a}: $\phi^+$ & 3 & 6 & 18 & 30 & 90 & 1.8887 & 1.7376 & 3.1524 & $\perp$ & 12.0598\\
\ref{p1a}: $\phi^-$ & 3 & 6 & 18 & 30 & 90 & 2.9094 & 6.0154 & 3.427 & $\top$ & 20.931\\
\ref{p1a}: $\phi^+$ & 3 & 9 & 36 & 30 & 90 & 2.2002 & 2.3232 & 2.4845 & $\perp$ & 12.3862\\
\ref{p1a}: $\phi^-$ & 3 & 9 & 36 & 30 & 90 & 3.1067 & 5.8341 & 4.6997 & $\top$ & 21.2573\\
\ref{p1a}: $\phi^+$ & 3 & 12 & 48 & 30 & 120 & 3.4446 & 4.1686 & 8.8680 & $\perp$ & 16.8108\\
\ref{p1a}: $\phi^-$ & 3 & 12 & 48 & 30 & 120 & 4.1621 & 9.9533 & 13.1718 & $\top$ & 28.6179\\
\hline
\ref{p1c}: $\phi^+$ & 3 & 6 & 18 & 30 & 90 & 2.0715 & 1.6828 & 1.2976 & $\top$ & 12.1584\\
\ref{p1c}: $\phi^-$ & 3 & 6 & 18 & 30 & 90 & 3.0536 & 5.3665 & 3.9414 & $\top$ & 21.0296\\
\ref{p1c}: $\phi^+$ & 3 & 9 & 36 & 30 & 90 & 2.8152 & 2.2134 & 1.7645 & $\top$ & 12.4848\\
\ref{p1c}: $\phi^-$ & 3 & 9 & 36 & 30 & 90 & 3.7314 & 6.1665 & 3.6802 & $\top$ & 21.3559\\
\ref{p1c}: $\phi^+$ & 3 & 12 & 48 & 30 & 120 & 3.9268 & 4.5246 & 9.3435 & $\perp$ & 16.9421\\
\ref{p1c}: $\phi^-$ & 3 & 12 & 48 & 30 & 120 & 4.8244 & 9.7484 & 14.8257 & $\top$ & 28.7491\\
\hline
\ref{p1a}': $\phi^+$ & 3 & 6 & 18 & 30 & 90 & 2.2399 & 2.3971 & 4.0335 & $\top$ & 12.8097\\
\ref{p1a}': $\phi^-$ & 3 & 6 & 18 & 30 & 90 & 3.3884 & 5.5905 & 4.5752 & $\top$ & 21.6645\\
\ref{p1a}': $\phi^+$ & 3 & 9 & 36 & 30 & 90 & 2.4788 & 2.2978 & 4.8259 & $\top$ & 13.136\\
\ref{p1a}': $\phi^-$ & 3 & 9 & 36 & 30 & 90 & 3.8369 & 7.3132 & 0.0036 & $\top$ & 21.9909\\
\ref{p1a}': $\phi^+$ & 3 & 12 & 48 & 30 & 120 & 4.7220 & 5.0607 & 13.3136 & $\perp$ & 17.8088\\
\ref{p1a}': $\phi^-$ & 3 & 12 & 48 & 30 & 120 & 4.8557 & 9.7088 & 8.4951 & $\top$ & 29.5942\\
\hline
\ref{p3}: $\phi^+$ & 3 & 6 & 12 & 30 & 75 & 1.5108 & 1.0502 & 0.4716 & $\perp$ & 9.91056\\
\ref{p3}: $\phi^-$ & 3 & 6 & 12 & 30 & 75 & 2.1418 & 3.1694 & 1.4723 & $\perp$ & 17.3177\\
\ref{p3}: $\phi^+$ & 3 & 3 & 15 & 30 & 75 & 1.5199 & 1.0564 & 0.4703 & $\perp$ & 9.87584\\
\ref{p3}: $\phi^-$ & 3 & 3 & 15 & 30 & 75 & 2.1586 & 3.1764 & 1.4473 & $\perp$ & 17.2837\\
\ref{p3}: $\phi^+$ & 3 & 6 & 18 & 30 & 75 & 1.5458 & 1.0706 & 0.5673 & $\perp$ & 9.97865\\
\ref{p3}: $\phi^-$ & 3 & 6 & 18 & 30 & 75 & 2.1978 & 3.2174 & 1.4323 & $\perp$ & 17.3858\\
\hline
\ref{p3a}: $\phi^+$ & 3 & 6 & 12 & 30 & 75 & 1.6018 & 1.1108 & 0.8844 & $\perp$ & 9.97312\\
\ref{p3a}: $\phi^-$ & 3 & 6 & 12 & 30 & 75 & 2.2909 & 3.3455 & 2.1095 & $\top$ & 17.3841\\
\ref{p3a}: $\phi^+$ & 3 & 3 & 15 & 30 & 75 & 1.6734 & 1.1945 & 0.6418 & $\perp$ & 9.95542\\
\ref{p3a}: $\phi^-$ & 3 & 3 & 15 & 30 & 75 & 2.1638 & 3.2626 & 1.5792 & $\top$ & 17.3671\\
\ref{p3a}: $\phi^+$ & 3 & 6 & 18 & 30 & 75 & 1.7031 & 1.2210 & 0.9653 & $\top$ & 10.0752\\
\ref{p3a}: $\phi^-$ & 3 & 6 & 18 & 30 & 75 & 2.48 & 3.3642 & 1.1761 & $\top$ & 17.4862\\
\hline
\ref{p3b}: $\phi^+$ & 3 & 6 & 12 & 30 & 75 & 1.578 & 1.0879 & 1.2972 & $\top$ & 9.97879\\
\ref{p3b}: $\phi^-$ & 3 & 6 & 12 & 30 & 75 & 2.3035 & 3.2128 & 1.6002 & $\top$ & 17.3898\\
\ref{p3b}: $\phi^+$ & 3 & 3 & 15 & 30 & 75 & 1.6465 & 1.0986 & 0.7740 & $\perp$ & 9.96109\\
\ref{p3b}: $\phi^-$ & 3 & 3 & 15 & 30 & 75 & 2.1604 & 3.1919 & 1.1408 & $\top$ & 17.3727\\
\ref{p3b}: $\phi^+$ & 3 & 6 & 18 & 30 & 75 & 1.6220 & 1.1249 & 0.8240 & $\top$ & 10.0809\\
\ref{p3b}: $\phi^-$ & 3 & 6 & 18 & 30 & 75 & 2.2892 & 3.2682 & 1.1178 & $\top$ & 17.4919\\
\hline
\end{tabular} \caption{Checking properties of the data monitoring system.} \label{tab:exp_res} \end{center} \end{scriptsize} \end{table}
\section{Discussion and Conclusion}\label{sec:conclusion}
In this paper we presented a technique to formally model and verify systems using different paradigms for different system parts. The technique hinges on MTL axiomatizations of the different modeling notations, which provide a common formal ground for the various modeling languages, on which fully-automated verification techniques are built. We provided an MTL axiomatization of a subset of TPN, a typical asynchronous operational formalism, and showed how models could be built by formally combining TPN and TA (a classic synchronous operational notation, for which an axiomatization has been provided in \cite{FPR08-ICFEM08}). In addition, we showed how the approach allows users to integrate into the same model parts described through a continuous notion of time, and parts described through a discrete notion of time.
Practical verification of systems modeled through the multi-paradigm approach is possible through the $\integers$ot{} bounded satisfiability checker, for which plugins supporting the various axiomatized notations have been built.
The technique has been validated on a non-trivial example of a data monitoring system. The experimental results show the feasibility of the approach, through which we have been able to investigate the validity (or, in some cases, the non validity) of some properties of the system. As described in Section \ref{sec:casestudy}, the verification phase has provided useful insights on the mechanisms and on the timing features of the modeled system, which led us to re-evaluate some of our initial beliefs on the system properties.
It is clear from our experiments that, unsurprisingly, the technique suffers from two main drawbacks: the incompleteness of the verification approach by discretization evidenced in \cite{FPR08-FM08}, which prevented us, in some cases, from getting conclusive answers on some analyzed properties; and the computational complexity of our method, which is based on the direct translation of TPN and TA into MTL formulas, approximated into discrete ones, and then encoded into SAT. This makes proofs considerably lengthier as the size of the domains, and especially of the temporal one, increases, as evidenced by Table \ref{tab:exp_res}. Nevertheless, we maintain that the results we obtained are promising, and show the applicability of the technique on non-trivial systems. This claim is supported, on the one hand, by the sophistication of the properties we have been able to prove (or disprove), which justifies the inevitably high computational cost of verification over continuous real time. On the other hand, while incompleteness is a hurdle to the full applicability of the technique, in practice it can be mitigated quite well, usually by slightly relaxing the real-time requirements under verification in a way that does not alter the gist of what is being verified.
In our future research on this topic we plan to address the two main drawbacks evidenced above. First, we will work on extending the verification technique to expand its range of applicability and reduce its region of incompleteness. Also, we will study more efficient implementations for the $\integers$ot{} plugins through which the various modeling notations are added to the framework: we believe that more direct (and therefore more compact, in both the number of literals and the number of clauses) encodings into SAT of the TPN and TA axiomatizations should significantly improve the efficiency of the tool.
In particular, we have not yet tackled the problem of optimizing the encodings of the TPN and TA axiomatizations into the SAT problem. We expect that significant improvements on the duration of the proofs can be gained through optimized encodings that reduce, on the one hand, the time needed to put formulas in the conjunctive normal form that is required as input by SAT solvers, and, on the other hand, the number of literals required to represent TPN and TA as SAT problems.
\end{document}
\begin{document}
\begin{abstract}
From the polynomial approach to the definition of opetopes of Kock et al.,
we derive a category of opetopes, and show that its set-valued presheaves,
or opetopic sets, are equivalent to many-to-one polygraphs. As an immediate
corollary, we establish that opetopic sets are equivalent to multitopic
sets, introduced and studied by Harnik et al., and we also address an open
question of Henry. \end{abstract}
\title{The equivalence between many-to-one polygraphs and opetopic sets}
\begin{small}
\tableofcontents \end{small}
\section{Introduction}
Opetopes were originally introduced by Baez and Dolan in \cite{Baez1998} as an algebraic structure to describe compositions and coherence laws in weak higher dimensional categories. They differ from other shapes (such as globular or simplicial) by their (higher) tree structure, giving them the informal designation of ``many-to-one''. Pastings of opetopes give rise to opetopes of higher dimension (this is in fact how they are defined!), and the analogy between opetopes and cells in a free higher category starts to emerge. On the other hand, polygraphs (also called computads) are higher dimensional directed graphs used to generate free higher categories by specifying generators and the way they may be pasted together (by means of sources and targets).
In this paper, we relate opetopes and polygraphs in a direct way. Namely, we define a category $\bbOO$ whose objects are opetopes, in such a way that the category of its $\Set$-valued presheaves, or opetopic sets, is equivalent to the category of many-to-one polygraphs. This equivalence was already known from \cite{Harnik2002, Harnik2008, Hermida2000}; however, the proof is very indirect. The recent work of Henry \cite{Henry2019} showed the category of many-to-one polygraphs (among many others) to be a presheaf category, but left open the equivalence between ``opetopic plexes'' (serving as shapes for many-to-one polygraphs in his paper) and opetopes. We establish this equivalence in the present work.
The notion of multitope \cite{Hermida2002, Harnik2008} is related to that of opetope, and has been developed based on similar motivations. However the approaches used are different: \emph{ope}topes are based on \emph{ope}rads (specifically, $T_n$-operads, where $T_n$ is a certain sequence of cartesian monads) \cite{Leinster2004}, while \emph{multi}topes are based on (symmetric) \emph{multi}categories. It is known that multitopic sets are equivalent to many-to-one polygraphs \cite{Harnik2008, Harnik2002}, and in particular our present contribution reasserts the equivalence between multitopic and opetopic sets.
\subsection*{Plan}
We begin by recalling elements of the theory of polynomial functors and polynomial monads in \cref{sec:polynomial-functors-and-monads}. This formalism is at the basis of our chosen approach to opetopes, which we present in \cref{sec:opetopes}. In \cref{sec:polygraphs}, we review some basic polygraph theory, and pay special attention to those polygraphs that are many-to-one. Finally, we state and prove the equivalence between opetopic sets and many-to-one polygraphs in \cref{sec:equialence}.
\subsection*{Acknowledgments}
I would like to thank my PhD advisors, Pierre-Louis Curien and Samuel Mimram, for their kind attention and guidance. This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska--Curie grant agreement No 665850.
\section{Polynomial functors and polynomial monads} \label{sec:polynomial-functors-and-monads}
We survey elements of the theory of polynomial functors, trees, and monads. For more comprehensive references, see \cite{Kock2011, Gambino2013}.
\subsection{Polynomial functors}
\begin{definition}
[{Polynomial functor \cite[paragraph 1.4]{Gambino2013}}]
\label{def:polynomial-functor}
A \emph{polynomial functor $P$} is a diagram in
$\Set$ of the form
\begin{equation}
\label{eq:polynomial-functor}
\polynomialfunctor{I}{E}{B}{J.}{s}{p}{t}
\end{equation}
We say that $P$ is a \emph{polynomial endofunctor}
if $I = J$. In this case, we also say that
$P$ is a \emph{polynomial functor over $I$}. We say that $P$ is
\emph{finitary} if the fibres of $p : E
\longrightarrow B$ are finite sets. We will always assume polynomial
functors and endofunctors to be finitary.
We use the following terminology for a polynomial functor $P$ as in
\cref{eq:polynomial-functor}, which is motivated by the intuition that a
polynomial functor encodes a multi-sorted signature of function symbols.
The elements of $B$ are called the \emph{nodes} or
\emph{operations} of $P$, and for every node $b$, the
elements of the fibre $E (b) \eqdef p^{-1} (b)$ are called the
\emph{inputs} of $b$. The elements of $I$ are called the
\emph{input colors} or \emph{input
sorts} of $P$, and the elements of $J$ are \emph{output
colors} or \emph{output sorts}. For every input $e$ of a
node $b$, we denote its color by $s_e (b) \eqdef s (e)$.
\[
\tikzinput{polynomial-functors}{operation}
\] \end{definition}
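Although the explicit formula will not be needed in the sequel, recall from the references cited above that a diagram as in \cref{eq:polynomial-functor} encodes the functor $\Set / I \longrightarrow \Set / J$ given by
\[
  (X_i)_{i \in I} \longmapsto
  \Bigg( \sum_{b \in B, \, t (b) = j} \ \prod_{e \in E (b)} X_{s (e)} \Bigg)_{j \in J} ,
\]
whence the terminology ``polynomial''.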
\begin{definition}
[{Morphism of polynomial functor}]
\label{def:polynomial-endofunctor-morphism}
A morphism $f$ from a polynomial functor $P$ over $I$ (on the first row) to
a polynomial functor $P'$ over $I'$ (on the second row) is a commutative
diagram of the form
\[
\begin{tikzcd}
I
\ar[d, "f_0"'] &
E
\pullbackcorner \ar[r, "p"] \ar[d, "f_2" left]
\ar[l, "s" above] &
B
\ar[r, "t"] \ar[d, "f_1" left] &
I
\ar[d, "f_0" left] \\
I' &
E'
\ar[r, "p'"] \ar[l, "s'" above] &
B'
\ar[r, "t'"] &
I'
\end{tikzcd}
\]
where the middle square is cartesian (i.e. is a pullback square). If $P$
and $P'$ are both polynomial functors over $I$, then a morphism from $P$ to
$P'$ \emph{over $I$} is a commutative diagram as above, but where $f_0$ is
required to be the identity \cite[paragraph 0.1.3]{Kock2011} \cite[section
2.5]{Kock2010}. Let $\PolyEnd$ denote the
category of polynomial functors and morphisms of polynomial functors, and
$\PolyEnd (I)$ the category of polynomial
functors over $I$ and morphisms of polynomial functors over $I$. \end{definition}
\subsection{Trees} \label{sec:trees}
The combinatorial notion of a tree fits nicely in the framework of polynomial functors. We now state the definition of a polynomial tree, and refer the reader to \cite[section 1.0.3]{Kock2011} for more details about the intuition behind it.
\begin{definition}
[{Polynomial tree \cite[section 1.0.3]{Kock2011}}]
\label{def:polynomial-tree}
A polynomial functor $T$ given by
\[
\polynomialfunctor{T_0}{T_2}{T_1}{T_0}{s}{p}{t}
\]
is a \emph{polynomial tree} (or just
\emph{tree}) if
\begin{enumerate}
\item the sets $T_0$, $T_1$ and $T_2$ are finite (in particular, each
node has finitely many inputs); by convention we assume $T_0 \neq
\emptyset$;
\item the map $t$ is injective;
\item the map $s$ is injective, and the complement of its image $T_0 -
\im s$ has a single element, called the \emph{root};
\item let $T_0 = T_2 + \{ r \}$, with $r$ the root, and define the
\emph{walk-to-root} function $\sigma$ by
$\sigma (r) = r$, and otherwise $\sigma (e) = t p (e)$; then we ask
that for all $x \in T_0$, there exists $k \in \bbNN$ such that
$\sigma^k (x) = r$.
\end{enumerate}
We call the colors of a tree its \emph{edges} and the inputs
of a node the \emph{input edges} of that node.
Let $\Tree$ be the full
subcategory of $\PolyEnd$ whose objects are trees. Note that it is the
category of \emph{symmetric} or \emph{non-planar} trees (the automorphism
group of a tree is in general non-trivial) and that its morphisms
correspond to inclusions of non-planar subtrees. An \emph{elementary tree}
is a tree with at most one node, and we
write $\Tree_{\mathrm{elem}}$ for the full
subcategory of $\Tree$ spanned by elementary trees. \end{definition}
\begin{definition}
[$P$-tree]
\label{def:p-tree}
For $P \in \PolyEnd$, the category $\tree P$ of
\emph{$P$-trees} is the slice $\Tree / P$. If $f : P
\longrightarrow Q$ is a morphism of polynomial functors, then it induces a
natural functor $f_* : \tree P \longrightarrow \tree Q$ by postcomposition. \end{definition}
\begin{notation}
\label{not:p-tree}
A $P$-tree $T \in \tree P$ is a morphism from a polynomial tree, which we
shall denote by $\underlyingtree{T}$, to $P$, as in $T : \underlyingtree{T}
\longrightarrow P$. We point out that $\underlyingtree{T}_1$ is the set of
nodes of the $P$-tree $T$, while $T_1 : \underlyingtree{T}_1
\longrightarrow P_1$ provides a \emph{decoration} of the nodes of
$\underlyingtree{T}$ by operations of $P$, and likewise for edges. \end{notation}
\begin{definition}
[Address]
\label{def:address}
Let $T \in \Tree$ be a polynomial tree and $\sigma$ be its walk-to-root
function (\cref{def:polynomial-tree}). We define the \emph{address}
function $\addr$on edges inductively as
follows:
\begin{enumerate}
\item if $r$ is the root edge, let $\addr r \eqdef []$,
\item if $i \in T_0 - \{ r \}$ and if $\addr \sigma (i) = [x]$, define
$\addr i \eqdef [x e]$, where $e \in T_2$ is the unique element such
that $s(e) = i$.
\end{enumerate}
Thus an address is a sequence of elements of $T_2$, enclosed by brackets.
The address of a node $b \in T_1$ is simply $\addr b \eqdef \addr t (b)$.
Note that this function is injective since $t$ is. Let
$T^\bullet$ denote its image,
the set of \emph{node addresses} of $T$, and let
$T^\medvert$ be the set of
addresses of leaf edges, i.e. those edges not in the
image of $t$.
Assume now that $U : \underlyingtree{U} \longrightarrow P$ is a $P$-tree.
If $b \in \underlyingtree{U}_1$ has address $\addr b = [p]$, write
$\src_{[p]} U \eqdef U_1 (b)$. For convenience, we let $U^\bullet
\eqdef \underlyingtree{U}^\bullet$, and $U^\medvert \eqdef
\underlyingtree{U}^\medvert$. \end{definition}
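To illustrate this bookkeeping, here is a small worked example (the edge names are of course arbitrary).
\begin{example}
  Let $T$ be the polynomial tree having a node $a$ whose output edge is the
  root $r$ and whose inputs are $e_1$ and $e_2$, and a node $b$ whose output
  edge is $e_2$ and whose only input is $f$, the edges $e_1$ and $f$ being
  leaves. Then $\addr r = []$, $\addr e_1 = [e_1]$, $\addr e_2 = [e_2]$, and
  $\addr f = [e_2 f]$, so that $\addr a = \addr r = []$ and $\addr b = \addr
  e_2 = [e_2]$. Consequently, $T^\bullet = \left\{ [], [e_2] \right\}$ and
  $T^\medvert = \left\{ [e_1], [e_2 f] \right\}$.
\end{example}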
\begin{remark}
The formalism of addresses is a useful bookkeeping syntax for the
operations of grafting and substitution on trees. The syntax of addresses
will extend to the category of opetopes and will allow us to give a precise
description of the composition of morphisms in the category of opetopes
(see \cref{def:o}) as well as certain constructions on opetopic
sets. \end{remark}
\begin{notation}
\label{not:marked-tree}
We denote by $\treewithleaf P$ the set of $P$-trees with a marked leaf, i.e.
endowed with the address of one of its leaves. Similarly, we denote by
$\treewithnode P$ the set of $P$-trees with a marked node. \end{notation}
\begin{definition}
[Elementary $P$-trees]
\label{def:elementary-p-trees}
Let $P$ be a polynomial endofunctor as in equation
\cref{eq:polynomial-functor}. For $i \in I$, define $\itree{i} \in \tr
P$ as having underlying tree
\begin{equation}
\label{eq:polynomial-functor:itree}
\polynomialfunctor
{\{i\}}{\emptyset}{\emptyset}{\{i \},}
{}{}{}
\end{equation}
along with the obvious morphism to $P$, that which maps $i$ to $i \in I$.
This corresponds to a tree with no nodes and a unique edge, decorated by
$i$. Define $\ytree{b} \in \tr P$, the
\emph{corolla} at $b$, as having underlying tree
\begin{equation}
\label{eq:polynomial-functor:ytree}
\polynomialfunctor
{s(E(b)) + \{*\}}{E (b)}{\{b \}}{s(E(b)) + \{*\},}
{s}{}{}
\end{equation}
where the right map sends $b$ to $*$, and where the morphism $\ytree{b}
\longrightarrow P$ is the identity on $s (E (b)) \subseteq I$, maps $*$ to
$t (b) \in I$, is the identity on $E (b) \subseteq E$, and maps $b$ to $b
\in B$. This corresponds to a $P$-tree with a unique node, decorated by
$b$. Observe that for $T \in \tr P$, giving a morphism $\itree{i}
\longrightarrow T$ is equivalent to specifying the address $[p]$ of an
edge of $T$ decorated by $i$. Likewise, morphisms of the form
$\ytree{b} \longrightarrow T$ are in bijection with addresses of nodes
of $T$ decorated by $b$. \end{definition}
\begin{remark}
\label{rem:elementary-p-trees:addresses}
Let $P$ be a polynomial endofunctor as in \cref{eq:polynomial-functor}.
\begin{enumerate}
\item Let $i \in I$ be a color of $P$. Since $\itree{i}$ does not have
any nodes, the set $\itree{i}^\bullet$ of its node addresses is
empty. On the other hand, the set of its leaf addresses is
$\itree{i}^\medvert = \left\{ [] \right\}$, since the unique leaf is
the root edge.
\item Let $b \in B$ be an operation of $P$. Then $\ytree{b}^\bullet
= \left\{ [] \right\}$ since the only node is the one above the root edge.
For leaves, we have $\ytree{b}^\medvert = \left\{ [e] \mid e \in E
(b) \right\}$.
\end{enumerate} \end{remark}
\begin{definition}
[Grafting]
\label{def:grafting}
For $S, T \in \tr P$, $[l] \in S^\medvert$ such that the
leaf of $S$ at $[l]$ and the root edge of $T$ are decorated by the
same $i \in I$, define the \emph{grafting} $S
\operatornamewithlimits{\circ}_{[l]} T$ of
$S$ and $T$ on $[l]$ by the following pushout (in $\tree P$):
\begin{equation}
\label{eq:polynomial-functor:grafting}
\pushoutdiagram
{\itree{i}}{T}{S}{S \operatornamewithlimits{\circ}_{[l]} T .}
{[]}{[l]}{}{}
\end{equation}
Note that if $S$ (resp. $T$) is a trivial tree, then $S \operatornamewithlimits{\circ}_{[l]} T = T$
(resp. $S$). We assume, by convention, that the grafting operator $\circ$
associates to the right. In particular $\itree{i} \operatornamewithlimits{\circ}_{[]} T = T$ and $S
\operatornamewithlimits{\circ}_{[l]} \itree{i} = S$. \end{definition}
\begin{lemma}
\label{lemma:grafting-addresses}
For $S, T \in \tr P$, $[l] \in S^\medvert$ such that the grafting $S
\operatornamewithlimits{\circ}_{[l]} T$ is defined, we have
\[
(S \operatornamewithlimits{\circ}_{[l]} T)^\bullet \:=\: S^\bullet +
\left\{ [lp] \mid [p] \in T^\bullet \right\} ,
\qquad\qquad
(S \operatornamewithlimits{\circ}_{[l]} T)^\medvert \:=\: S^\medvert - \left\{ [l] \right\} +
\left\{ [lp] \mid [p] \in T^\medvert \right\} .
\] \end{lemma}
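For instance, combining the lemma with \cref{rem:elementary-p-trees:addresses}: if $b, c \in B$ and $e \in E (b)$ are such that the grafting $\ytree{b} \operatornamewithlimits{\circ}_{[e]} \ytree{c}$ is defined (i.e. $s (e) = t (c)$), then
\[
  \left( \ytree{b} \operatornamewithlimits{\circ}_{[e]} \ytree{c} \right)^\bullet
  = \left\{ [], [e] \right\} ,
  \qquad\qquad
  \left( \ytree{b} \operatornamewithlimits{\circ}_{[e]} \ytree{c} \right)^\medvert
  = \left\{ [e'] \mid e' \in E (b), e' \neq e \right\}
  + \left\{ [e f] \mid f \in E (c) \right\} .
\]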
\begin{notation}
[Total grafting]
\label{not:total-grafting}
Let $T, U_1, \ldots, U_k \in \tr P$, write $T^\medvert = \left\{ [l_1],
\ldots, [l_k] \right\}$, and assume the grafting $T \operatornamewithlimits{\circ}_{[l_i]} U_i$ is
defined for all $i$. Then the \emph{total grafting} will be denoted
concisely by
\begin{equation}
\label{eq:big-grafting}
T \biggraft_{[l_i]} U_i
= ( \cdots
(T \operatornamewithlimits{\circ}_{[l_1]} U_1) \operatornamewithlimits{\circ}_{[l_2]} U_2
\cdots ) \operatornamewithlimits{\circ}_{[l_k]} U_k .
\end{equation}
It is easy to see that the result does not depend on the order in which the
graftings are performed. \end{notation}
\begin{proposition}
[{\cite[proposition 1.1.21]{Kock2011}}]
\label{prop:polynomial-functor:trees-are-graftings}
Every $P$-tree is either of the form $\itree{i}$, for some $i \in I$, or
obtained by iterated graftings of corollas (i.e. $P$-trees of the form
$\ytree{b}$ for $b \in B$). \end{proposition} \begin{proof}
This can easily be proved by induction on the number of nodes. \end{proof}
\begin{remark}
\label{rem:tree-contexts}
As a consequence of \cite[proposition 1.1.3]{Kock2011}, a morphism $T
\longrightarrow S$ of $P$-trees exhibits $T$ as a subtree of $S$ as in
\[
S \:=\: U \operatornamewithlimits{\circ}_{[p]} T \biggraft_{[l]} V_{[l]}
\]
where $U$ is spanned by all the edges of $S$ that are either descendant of
the root edge of $T$, or incomparable to it \cite[paragraphs 1.0.7 and
1.1.11]{Kock2011}, and where $[l]$ ranges over $T^\medvert$. Conversely,
any such decomposition of $S$ induces a morphism $T \longrightarrow S$. \end{remark}
\subsection{Polynomial monads} \label{sec:polynomial:monads}
\begin{definition}
[Polynomial monad]
\label{def:polynomial-monad}
A \emph{polynomial monad over $I$} is a monoid in
$\PolyEnd(I)$. Note that a polynomial monad over $I$ is thus necessarily a
cartesian monad on $\Set/I$.\footnote{We recall that a monad is
\emph{cartesian} if its endofunctor preserves
pullbacks and its unit and multiplication are cartesian natural
transformations.} Let $\PolyMnd(I)$ be the category of
monoids in $\PolyEnd(I)$. That is, $\PolyMnd(I)$ is the category of
polynomial monads over $I$ and morphisms of polynomial functors over $I$
that are also monad morphisms. \end{definition}
\begin{definition}
[$(-)^\star$ construction]
\label{def:star-construction}
Given a polynomial endofunctor $P$ as in \cref{eq:polynomial-functor},
we define a new polynomial endofunctor $P^\star$ as
\begin{equation}
\label{eq:free-monad}
\polynomialfunctor
{I}{\treewithleaf P}{\tree P}{I}
{s}{p}{t}
\end{equation}
where $s$ maps a $P$-tree with a marked leaf to the decoration of that
leaf, $p$ forgets the marking, and $t$ maps a $P$-tree to the decoration of
its root. Remark that for $T \in \tree P$ we have $p^{-1} (T) \cong
T^\medvert$. Clearly, there is an inclusion $P \longrightarrow P^\star$,
mapping $b \in B$ to $\ytree{b} \in \tree P$, and $e \in E (b)$ to $[e] \in
\ytree{b}^\medvert$ (see \cref{rem:elementary-p-trees:addresses}). \end{definition}
\begin{theorem}
[{\cite[section 1.2.7]{Kock2011}, \cite[sections 2.7 to 2.9]{Kock2010}}]
\label{th:polynomial-monads-star-algebras}
The polynomial functor $P^\star$ has a canonical structure of a polynomial
monad. Furthermore, the functor $(-)^\star$ is left adjoint to the
forgetful functor $\PolyMnd (I) \longrightarrow \PolyEnd (I)$, and the
adjunction is monadic. \end{theorem}
\begin{definition}
[Target and readdressing map]
\label{def:readdressing}
We abuse notation and let $(-)^\star$ denote the associated monad on
$\PolyEnd (I)$. Let $M$ be a polynomial monad as in
\[
\polynomialfunctor{I}{E}{B}{I.}{s}{p}{t}
\]
By \cref{th:polynomial-monads-star-algebras}, $M$ is a $(-)^\star$-algebra,
and we will write its structure map $M^\star \longrightarrow M$ as
\begin{equation}
\label{eq:polynomial-monad-structure-map}
\begin{tikzcd}
I
\ar[d, equal] &
\treewithleaf M
\pullbackcorner
\ar[l] \ar[d, "\readdress"] \ar[r] &
\tree M
\ar[r] \ar[d, "\tgt"] &
I
\ar[d, equal]
\\
I &
E
\ar[l] \ar[r] &
B \ar[r] &
I.
\end{tikzcd}
\end{equation}
For $T \in \tree M$, we call $\readdress_T : T^\medvert \xto{\cong} E
(\tgt T)$ the \emph{readdressing} function of $T$, and
$\tgt T \in B$ is called the \emph{target} of $T$. If we think of an
element $b \in B$ as the corolla $\ytree{b}$, then the target map $\tgt$
``contracts'' a tree to a corolla, and since the middle square is a
pullback, the number of leaves is preserved. The map $\readdress_T$
establishes a decoration-preserving correspondence between the set
$T^\medvert$ of leaf addresses of a tree $T$ and the elements of $E
(\tgt T)$. \end{definition}
\begin{definition}
[Baez--Dolan $(-)^+$ construction]
\label{def:baez-dolan-construction}
Let $M$ be a polynomial monad as in \cref{eq:polynomial-functor}, and
define its \emph{Baez--Dolan construction}
$M^+$ to be
\begin{equation}
\label{eq:polynomial-functor:+}
\polynomialfunctor{B}{\treewithnode M}{\tree M}{B}{\src}{p}{\tgt}
\end{equation}
where $\src$ maps an $M$-tree with a marked node to the label of that node,
$p$ forgets the marking, and $\tgt$ is the target map of
\cref{def:readdressing}. If $T \in \tree M$, remark that $p^{-1} T
= T^\bullet$ is the set of node addresses of $T$. If $[p] \in
T^\bullet$, then $\src [p] \eqdef \src_{[p]} T$. \end{definition}
\begin{theorem}
[{\cite[section 3.2]{Kock2010}}]
\label{th:polynomial-functor:+:is-monad}
If $M$ a polynomial monad, then $M^+$ has a canonical structure of a
polynomial monad. \end{theorem}
\begin{remark}
[Nested addresses]
\label{rem:nested-addresses}
Let $M$ be a polynomial monad, and $T \in \tree M^+$. Then the nodes of
$T$ are decorated in $M$-trees, and its edges by operations of $M$.
Assume that $U \in \tree M$ decorates some node of $T$, say $U
= \src_{[p]} T$ for some node address $[p] \in T^\bullet$.
\begin{enumerate}
\item The input edges of that node are in bijection with
$U^\bullet$. In particular, the address of those input edges
are of the form $[p[q]]$, where $[q]$ ranges over $U^\bullet$.
This really motivates enclosing addresses in brackets.
\item On the other hand, the output edge of that node is decorated by
$\tgt U$ (where $\tgt$ is defined in \cref{def:readdressing}).
\end{enumerate} \end{remark}
\begin{notation}
\label{not:edg}
Let $M$ be a polynomial monad, and $T \in \tree M^+$. For $[a]$ the address
of an edge of $T$, let $\edg_{[a]} T$ be the operation
of $M$ decorating that edge. Explicitly, if $[a] = []$, then $\edg_{[]} T
\eqdef \tgt \src_{[]} T$. Otherwise, $[a] = [p[q]]$ for some $[p] \in
T^\bullet$ (the node below the edge) and $[q] \in (\src_{[p]}
T)^\bullet$, and let $\edg_{[p[q]]} T \eqdef \src_{[q]} \src_{[p]} T$. \end{notation}
\section{Opetopes} \label{sec:opetopes}
\subsection{Definition} \label{sec:opetopes:definition}
\begin{definition}
[The $\frakZZ^n$ monad]
\label{def:zn}
Let $\frakZZ^0$ be the identity polynomial monad on $\Set$, as depicted
on the left below, and let $\frakZZ^n \eqdef
(\frakZZ^{n-1})^+$. Write $\frakZZ^n$ as on the
right:
\begin{equation}
\label{eq:zn}
\polynomialfunctor
{\{* \}}{\{* \}}{\{* \}}{\{* \},}
{}{}{}
\qquad\qquad
\polynomialfunctor
{\bbOO_n}{E_{n+1}}{\bbOO_{n+1}}{\bbOO_n .}
{\src}{p}{\tgt}
\end{equation} \end{definition}
\begin{definition}
[Opetope]
\label{def:opetope}
An \emph{$n$-dimensional opetope} (or \emph{$n$-opetope} for short) $\omega$
is simply an element of $\bbOO_n$, and we write $\dim \omega = n$. If $n
\geq 2$, then $n$-opetopes are exactly the $\frakZZ^{n-2}$-trees. In
this case, an opetope $\omega \in \bbOO_n$ is called \emph{degenerate}
if its underlying tree has no nodes (and thus
consists of a unique edge), so that $\omega = \itree{\phi}$ for some $\phi
\in \bbOO_{n-2}$. We say that $\omega$ is an \emph{endotope}
if its underlying tree has exactly one node, i.e. $\omega
= \ytree{\psi}$ for some $\psi \in \bbOO_{n-1}$.
Following \cref{eq:polynomial-monad-structure-map}, for $n \geq 2$ and
$\omega \in \bbOO_n$, the structure of polynomial monad
$(\frakZZ^{n-2})^\star \longrightarrow \frakZZ^{n-2}$ gives a
bijection $\readdress_\omega : \omega^\medvert \longrightarrow (\tgt
\omega)^\bullet$ between the leaves of $\omega$ and the nodes of $\tgt
\omega$, preserving the decoration by $(n-2)$-opetopes. \end{definition}
\begin{example}
\label{ex:opetopes}
\begin{enumerate}
\item The unique $0$-opetope is denoted $\filledlozenge$ and called the
\emph{point}.
\item The unique $1$-opetope is denoted $\filledsquare$ and called the
\emph{arrow}.
\item If $n \geq 2$, then $\omega \in \bbOO_n$ is a
$\frakZZ^{n-2}$-tree, i.e. a tree whose nodes are labeled in
$(n-1)$-opetopes, and edges are labeled in $(n-2)$-opetopes. In
particular, $2$-opetopes are $\frakZZ^0$-trees, i.e. linear trees,
and thus in bijection with $\bbNN$. We will refer to them as
\emph{opetopic integers}, and write
$\optInt{n}$ for the
$2$-opetope having exactly $n$ nodes.
\item A $3$-opetope is a $\frakZZ^1$-tree, i.e. a planar tree.
\item A $4$-opetope is a $\frakZZ^2$-tree. Unfolding definitions,
if $\omega : \underlyingtree{\omega} \longrightarrow \frakZZ^2$,
then nodes of $\omega$ are decorated by elements of $\bbOO_3$, i.e.
planar trees. Further, if $x \in \underlyingtree{\omega}_1$ is a node
of $\omega$, then $\omega_2$ exhibits a bijection between the input
edges of $x$ and the nodes of $\omega_1 (x) \in \bbOO_3$.
\end{enumerate} \end{example}
\subsection{The category of opetopes}
Akin to the work of Cheng \cite{Cheng2003}, we define a category of opetopes by means of generators and relations. The difference from the aforementioned reference is our use of polynomial opetopes (also equivalent to Leinster's definition \cite{Leinster2004, Kock2010}), while Cheng uses an approach by multicategorical slicing, yielding ``symmetric'' opetopes.
\begin{lemma}
[Opetopic identities]
\label{lemma:opetopic-identities}
Let $\omega \in \bbOO_n$ with $n \geq 2$.
\begin{enumerate}
\item (Inner edge) For an inner edge $[p[q]] \in \omega^\bullet$
(the fact that $\omega$ has an inner edge implies that it is non
degenerate), we have $\tgt \src_{[p[q]]} \omega = \src_{[q]} \src_{[p]}
\omega$.
\item (Globularity 1) If $\omega$ is non degenerate, we have $\tgt
\src_{[]} \omega = \tgt \tgt \omega$.
\item (Globularity 2) If $\omega$ is non degenerate, and $[p[q]] \in
\omega^\medvert$, we have $\src_{[q]} \src_{[p]} \omega =
\src_{\readdress_\omega [p[q]]} \tgt \omega$.
\item (Degeneracy) If $\omega$ is degenerate, we have $\src_{[]} \tgt
\omega = \tgt \tgt \omega$.
\end{enumerate} \end{lemma} \begin{proof}
\begin{enumerate}
\item (Inner edge) By definition of a $\frakZZ^{n-2}$-tree.
\item (Globularity 1 and 2) By
\cref{th:polynomial-monads-star-algebras}, the monad structure on
$\frakZZ^{n-2}$ amounts to a structure map
$(\frakZZ^{n-2})^\star \longrightarrow \frakZZ^{n-2}$, which,
taking the notations of \cref{def:readdressing}, is written as
\[
\begin{tikzcd}
\bbOO_{n-2}
\ar[d, equal] &
\treewithleaf \frakZZ^{n-2}
\pullbackcorner
\ar[r, "p"]
\ar[d, "\readdress" left]
\ar[l, "\edg" above] &
\tree \frakZZ^{n-2}
\ar[r, "\edg_{[]}"]
\ar[d, "\tgt" left] &
\bbOO_{n-2}
\ar[d, equal] \\
\bbOO_{n-2} &
\bbOO^\bullet_{n-1}
\ar[r, "p"]
\ar[l, "\src" above] &
\bbOO_{n-1}
\ar[r, "\tgt"] &
\bbOO_{n-2} .
\end{tikzcd}
\]
The claims follow from the commutativity of the right and left square
respectively.
\item (Degeneracy) Let $\omega = \itree{\phi}$, for $\phi \in
\bbOO_{n-2}$. Then $\tgt \omega = \ytree{\phi}$, so that $\tgt \tgt \omega
= \tgt \ytree{\phi} = \phi$, and
clearly, $\phi = \src_{[]} \ytree{\phi} = \src_{[]} \tgt \omega$.
\qedhere
\end{enumerate} \end{proof}
\begin{definition}
[The category $\bbOO$ of opetopes]
\label{def:o}
With the identities of \cref{lemma:opetopic-identities}, we define the
category $\bbOO$ of opetopes by generators and relations as follows.
\begin{enumerate}
\item (Object) We set $\ob \bbOO = \sum_{n \in \bbNN} \bbOO_n$.
\item (Generating morphism) Let $\omega \in \bbOO_n$ with $n \geq 1$.
We introduce a generator $\tgt : \tgt \omega \longrightarrow
\omega$, called the \emph{target
embedding}. If $[p] \in \omega^\bullet$,
then we introduce a generator $\src_{[p]} : \src_{[p]} \omega
\longrightarrow \omega$,
called a \emph{source embedding}. An
\emph{elementary face embedding} is
either a source or the target embedding.\footnote{Note that if $n \geq 2$
and $[p]$ is an edge address of $\omega$, then the edge decoration
$\edg_{[p]} \omega$ of \cref{not:edg} also translates to a morphism $\edg_{[p]} :
\edg_{[p]} \omega \longrightarrow \omega$.}
\item (Relation) We impose 4 relations described by the following
commutative squares, which just enforce the identities of
\cref{lemma:opetopic-identities}. Let $\omega \in \bbOO_n$ with $n \geq
2$.
\begin{itemize}
\item \condition{Inner} For
$[p[q]] \in \omega^\bullet$ (forcing $\omega$ to be non
degenerate), the following square must commute:
\[
\squarediagram
{\src_{[q]} \src_{[p]} \omega}{\src_{[p]} \omega}
{\src_{[p[q]]}\omega}{\omega}
{\src_{[q]}}{\tgt}{\src_{[p]}}{\src_{[p[q]]}}
\]
\item \condition{Glob1} If
$\omega$ is non degenerate, the following square must commute:
\[
\squarediagram
{\tgt \tgt \omega}{\tgt \omega}{\src_{[]} \omega}{\omega .}
{\tgt}{\tgt}{\tgt}{\src_{[]}}
\]
\item \condition{Glob2} If
$\omega$ is non degenerate, and for $[p [q]] \in
\omega^\medvert$, the following square must commute:
\[
\diagramsize{2}{4}
\squarediagram
{\src_{\readdress_\omega [p [q]]} \tgt \omega}{\tgt \omega}
{\src_{[p]} \omega}{\omega .}
{\src_{\readdress_\omega [p[q]]}}{\src_{[q]}}{\tgt}
{\src_{[p]}}
\]
\item \condition{Degen} If
$\omega$ is degenerate, the following square must commute:
\[
\squarediagram
{\tgt \tgt \omega}{\tgt \omega}{\tgt \omega}{\omega .}
{\tgt}{\src_{[]}}{\tgt}{\tgt}
\]
\end{itemize}
\end{enumerate} \end{definition}
\begin{remark}
\label{rem:o}
Let us explain this definition a little more. Opetopes are trees whose
nodes (and edges) are decorated by opetopes. The decoration is now
interpreted as a geometrical feature, namely as an embedding of a lower
dimensional opetope. Further, the target of an opetope, while not an
intrinsic data, is also represented as an embedding. The relations can be
understood as follows.
\begin{itemize}
\item \condition{Inner} The inner edge at $[p[q]] \in
\omega^\bullet$ is decorated by the target of the decoration of the
node ``above'' it (here $\src_{[p[q]]} \omega$), and by the
$[q]$-source of the decoration of the node ``below'' it (here $\src_{[p]} \omega$). By
construction, those two decorations match, and this relation makes the
two corresponding embeddings $\src_{[q]} \src_{[p]} \omega
\longrightarrow \omega$ match as well. On the left is an informal
diagram about $\omega$ as a tree (reversed gray triangle), and on the
right is an example of a pasting diagram represented by an opetope, with
the relevant features of the \condition{Inner} relation colored or
thickened.
\[
\tikzinput[.9]{optids-informal}{inner}
\qquad\qquad
\tikzinput[.9]{optids-informal}{inner.ps}
\]
\item \condition{Glob1-2} If we consider the underlying tree of
$\omega$ as its ``geometrical source'', and the corolla $\ytree{\tgt
\omega}$ as its ``geometrical target'', then they should be parallel.
The relation \condition{Glob1} expresses this idea by ``gluing'' the
root edges of $\omega$ and $\ytree{\tgt \omega}$ together, while
\condition{Glob2} glues the leaves according to $\readdress_\omega$.
\[
\tikzinput[.9]{optids-informal}{glob1}
\qquad\qquad
\tikzinput[.9]{optids-informal}{glob1.ps}
\]
\[
\tikzinput[.9]{optids-informal}{glob2}
\qquad\qquad
\tikzinput[.9]{optids-informal}{glob2.ps}
\]
\item \condition{Degen} If $\omega$ is a degenerate opetope, depicted
as on the right, then its target $\tgt \omega$ should be a ``loop'', i.e.
the only source of $\tgt \omega$ and its target should be glued together.
\[
\tikzinput{optids-informal}{degen}
\qquad\qquad\qquad\qquad\qquad\qquad
\tikzinput{optids-informal}{degen.ps}
\]
\end{itemize} \end{remark}
\begin{notation}
\label{not:subcategories-of-o}
For $n \in \bbNN$, let $\bbOO_{\leq n}$ be the full subcategory of $\bbOO$
spanned by opetopes of dimension at most $n$. \end{notation}
\section{Opetopic sets and many-to-one polygraphs} \label{sec:polygraphs}
Polygraphs were originally introduced by Street \cite[section 2]{Street1976} under the name of \emph{computad}. They are to (strict) $\omega$-categories what graphs are to $1$-categories: a combinatorial device that freely generates them. However, unlike graphs, the category $\Pol$ of polygraphs fails to be a presheaf category \cite{Carboni2004} \cite{Makkai2008} \cite{Cheng2013}. The obstruction is an unpleasant corollary of the \emph{exchange law}: if $f$ and $g$ are endomorphisms of an identity cell, then $fg = gf$. \begin{align*}
\tikzinput[.8]{polygraph}{1}
&\quad\longleadsto\quad \tikzinput[.8]{polygraph}{2}
\quad\longleadsto\quad \tikzinput[.8]{polygraph}{3} \\
&\quad\longleadsto\quad \tikzinput[.8]{polygraph}{4}
\quad\longleadsto\quad \tikzinput[.8]{polygraph}{5} \end{align*} Nonetheless, the recent work of Henry \cite{Henry2019} showed that many subcategories of $\Pol$ are presheaf categories. Among them is the category $\Pol^\MTO$ of \emph{many-to-one} polygraphs, in which the targets (or codomains) of generating cells are themselves generating cells. In this section, we relate opetopes and many-to-one polygraphs in a formal way. Namely, we construct an equivalence of categories $\polyreal{-} : {\Psh\bbOO} \longrightarrow \Pol^\MTO$, called \emph{polygraphic realization}.
Recall from \cite[theorem 1]{Borceux1986} that if $\catAA$ and $\catBB$ are two Cauchy-complete categories such that $\Psh\catAA \simeq \Psh\catBB$, then $\catAA \simeq \catBB$ (see \cref{th:cauchy-complete} for more details). In particular, any Cauchy-complete category $\catAA$ that acts as a ``shape theory for many-to-one polygraphs'', i.e. such that $\Psh\catAA \simeq \Pol^\MTO$, is equivalent to $\bbOO$ (since $\bbOO$ does not have any non-identity endomorphisms, it is Cauchy-complete). This shows that the geometrical intuition behind the definition of $\bbOO$ (\cref{rem:o}) is essentially the unique way to faithfully implement the combinatorics of pasting diagrams.
The fact that many-to-one polygraphs are equivalent to opetopic sets was already known from \cite{Hermida2000} \cite{Harnik2002} \cite{Cheng2004} \cite{Harnik2008}; however, the proof there is indirect and spread over multiple articles. The formalism we have developed so far allows us to establish this result directly.
\subsection{Strict higher categories}
\begin{definition}
[$\omega$-category]
\label{def:omega-category}
An \emph{$\omega$-category} $\catCC$ (also called
\emph{strict $\infty$-category}) is the
datum of a diagram of sets
\[
\begin{tikzcd}
\catCC_0
\ar[r, bend right, "\id" below]
&
\catCC_1
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "\id" below]
&
\cdots
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "\id" below]
&
\catCC_n
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "\id" below]
&
\cdots
\ar[l, "{\src, \tgt}" above]
\end{tikzcd}
\]
with \emph{composition maps} $\operatornamewithlimits{\circ}_k : \catCC_{n, k} \longrightarrow
\catCC_n$, where $k < n$ and $\catCC_{n, k}$ is the pullback
\[
\pullbackdiagram
{\catCC_{n, k}}{\catCC_n}{\catCC_n}{\catCC_k,}
{}{}{\tgt^{n-k}}{\src^{n-k}}
\]
such that the following conditions hold:
\begin{enumerate}
\item for all $k < n$, the diagram
\[
\begin{tikzcd} [column sep = 5em]
\catCC_k
\ar[r, bend right, "\id^{n-k}" below]
&
\catCC_n
\ar[l, "{\src^{n-k}, \tgt^{n-k}}" above]
\end{tikzcd}
\]
with the composition map $\operatornamewithlimits{\circ}_k : \catCC_{n, k} \longrightarrow
\catCC_n$ is a $1$-category;
\item for all $l < k < n$, the diagram
\[
\begin{tikzcd} [column sep = 5em]
\catCC_l
\ar[r, bend right, "\id^{k-l}" below]
&
\catCC_k
\ar[l, "{\src^{k-l}, \tgt^{k-l}}" above]
\ar[r, bend right, "\id^{n-k}" below]
&
\catCC_n
\ar[l, "{\src^{n-k}, \tgt^{n-k}}" above]
\end{tikzcd}
\]
with the composition maps $\operatornamewithlimits{\circ}_l : \catCC_{k, l} \longrightarrow
\catCC_k$, $\operatornamewithlimits{\circ}_l : \catCC_{n, l} \longrightarrow \catCC_n$ and
$\operatornamewithlimits{\circ}_k : \catCC_{n, k} \longrightarrow \catCC_n$ is a strict
$2$-category.
\end{enumerate}
The maps $\src$ and $\tgt$ are called \emph{source} and \emph{target} maps,
respectively, and if $x \in \catCC_k$, then $\id_x \eqdef \id (x)$ is the
\emph{identity cell of $x$}. Note that by definition, the following
equalities hold:
\[
\src \src x \:=\: \src \tgt x,
\qquad\qquad
\tgt \src x \:=\: \tgt \tgt x,
\qquad\qquad
\src \id_x \:=\: x \:=\: \tgt \id_x.
\]
The first two are called the \emph{globular identities}. Still by definition, for $0 \leq l < k < n$ and $w, x, y, z
\in \catCC_n$ the following \emph{exchange law} holds:
\[
(w \operatornamewithlimits{\circ}_k x) \operatornamewithlimits{\circ}_l (y \operatornamewithlimits{\circ}_k z)
\:=\: (w \operatornamewithlimits{\circ}_l y) \operatornamewithlimits{\circ}_k (x \operatornamewithlimits{\circ}_l z) ,
\]
assuming both sides are well-defined. Note that a strict $n$-category
$\catCC$ may be viewed as an $\omega$-category where $\catCC_m$ only has
identities for all $m > n$. Given an $\omega$-category $\catCC$, we write
$\catCC_{\leq n}$ for the underlying strict
$n$-category
\[
\begin{tikzcd}
\catCC_0
\ar[r, bend right, "\id" below]
&
\catCC_1
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "\id" below]
&
\cdots
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "\id" below]
&
\catCC_n .
\ar[l, "{\src, \tgt}" above]
\end{tikzcd}
\]
An \emph{$\omega$-functor} $f : \catBB
\longrightarrow \catCC$ between two $\omega$-categories is just a sequence
of maps $f_n : \catBB_n \longrightarrow \catCC_n$ that induces an
$n$-functor $f_{\leq n} : \catBB_{\leq n} \longrightarrow \catCC_{\leq n}$
for all $n \in \bbNN$. If the context is clear, we simply write $f$ for
$f_n : \catBB_n \longrightarrow \catCC_n$. \end{definition}
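To spell out the exchange-law corollary mentioned at the beginning of this
section (the classical Eckmann--Hilton argument), here is a sketch. Let $n
\geq 2$, let $b \in \catCC_{n-2}$, write $e \eqdef \id_{\id_b}$, and let $f,
g \in \catCC_n$ be endomorphisms of the identity cell $\id_b$, i.e. $\src f =
\tgt f = \src g = \tgt g = \id_b$. The unit laws for $\operatornamewithlimits{\circ}_{n-1}$ and
$\operatornamewithlimits{\circ}_{n-2}$, together with the exchange law (applied with $l = n-2$ and $k =
n-1$), give
\begin{align*}
f \operatornamewithlimits{\circ}_{n-2} g
&\:=\: (f \operatornamewithlimits{\circ}_{n-1} e) \operatornamewithlimits{\circ}_{n-2} (e \operatornamewithlimits{\circ}_{n-1} g)
\:=\: (f \operatornamewithlimits{\circ}_{n-2} e) \operatornamewithlimits{\circ}_{n-1} (e \operatornamewithlimits{\circ}_{n-2} g)
\:=\: f \operatornamewithlimits{\circ}_{n-1} g , \\
f \operatornamewithlimits{\circ}_{n-2} g
&\:=\: (e \operatornamewithlimits{\circ}_{n-1} f) \operatornamewithlimits{\circ}_{n-2} (g \operatornamewithlimits{\circ}_{n-1} e)
\:=\: (e \operatornamewithlimits{\circ}_{n-2} g) \operatornamewithlimits{\circ}_{n-1} (f \operatornamewithlimits{\circ}_{n-2} e)
\:=\: g \operatornamewithlimits{\circ}_{n-1} f ,
\end{align*}
so that $f \operatornamewithlimits{\circ}_{n-1} g = g \operatornamewithlimits{\circ}_{n-1} f$ (and both agree with $f
\operatornamewithlimits{\circ}_{n-2} g$): endomorphisms of an identity cell commute.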
\begin{definition}
[{Parallel cells \cite{Metayer2003}}]
\label{def:parallel-cells-hcat}
Let $\catDD$ be a strict $\omega$-category, $n \in \bbNN$. Two $n$-cells
$x, y \in \catDD_n$ are \emph{parallel}, denoted by $x
\parallel y$, if $\src x = \src y$
and $\tgt x = \tgt y$. By convention, $0$-cells are pairwise parallel. \end{definition}
\begin{definition}
[Cellular extension] Let $\catDD$ be a strict $(n-1)$-category. A
\emph{cellular extension} of $\catDD$ consists
of a set $X$ and two maps $\src, \tgt : X \longrightarrow \catDD_{n-1}$
such that the \emph{globular identities} hold,
i.e. for all $x \in X$, we have $\src x \parallel \tgt x$. We also denote
such a cellular extension by
\[
\catDD \xot{\src, \tgt} X .
\] \end{definition}
\begin{definition}
[Free $n$-category] Let $\catDD$ be a strict $(n-1)$-category and $\catDD
\xot{\src, \tgt} X$ be a cellular extension of $\catDD$. The \emph{free
strict $n$-category} generated by this cellular extension is the strict
$n$-category $\catDD [X]$ such that
\begin{enumerate}
\item as strict $(n-1)$-categories, $\catDD = \catDD [X]_{\leq n-1}$;
\item there is an inclusion $X \longhookrightarrow \catDD [X]_n$, and
the following diagrams commute:
\[
\diagramarrows{<-}{c->}{<-}{}
\triangleDRdiagram
{X}{\catDD_{n-1}}{\catDD [X]_n,}
{\src}{}{\src}
\qquad\qquad
\triangleDRdiagram
{X}{\catDD_{n-1}}{\catDD [X]_n;}
{\tgt}{}{\tgt}
\diagramarrows{}{}{}{}
\]
\item if $\catEE$ is a strict $n$-category, $f : \catDD \longrightarrow
\catEE_{\leq n-1}$ is an $(n-1)$-functor, and $f_n : X \longrightarrow
\catEE_n$ is a map such that for all $x \in X$, $f (\src x) = \src f_n
(x)$ and $f (\tgt x) = \tgt f_n (x)$, then $f$ and $f_n$ extend
uniquely to an $n$-functor $\catDD [X] \longrightarrow \catEE$.
\end{enumerate}
The free extension $\catDD [X]$ always exists and is unique up to
isomorphism, see \cite[section 1]{Harnik2008}. \end{definition}
For the rest of this section, let $\catDD$ be a strict $(n-1)$-category, $\catDD \xot{\src, \tgt} X$ be a cellular extension of $\catDD$, and $\catEE \eqdef \catDD [X]$ be the $n$-category freely generated by the cellular extension.
\begin{definition}
[Counting function]
\label{def:counting-function}
Define an $n$-category $\bbNN^{(n)}$ by
\[
\begin{tikzcd}
\{ 0 \}
\ar[r, bend right, "0" below]
&
\{ 0 \}
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "0" below]
&
\phantom{\{} \cdots \phantom{\}}
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "0" below]
&
\{ 0 \}
\ar[l, "{\src, \tgt}" above]
\ar[r, bend right, "0" below]
&
\bbNN,
\ar[l, "{\src, \tgt}" above]
\end{tikzcd}
\]
where all compositions correspond to the addition of integers. For $x \in
X$, define a \emph{counting function} $\#_x : X \longrightarrow \bbNN$
that maps $x$ to $1$, and all other elements of $X$ to $0$. This extends to
an $n$-functor $\#_x : \catEE \longrightarrow \bbNN^{(n)}$. Similarly, let $\# :
X \longrightarrow \bbNN$ be the map sending all elements to $1$, and extend
it as $\# : \catEE \longrightarrow \bbNN^{(n)}$. \end{definition}
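As a small illustration (with hypothetical generator names): take $n = 1$,
let $\catDD$ be the $0$-category with a single object $\ast$, and let $X =
\left\{ e_1, e_2 \right\}$ consist of two loops on $\ast$. Then, in $\catEE =
\catDD [X]$,
\[
\#_{e_1} (e_2 \operatornamewithlimits{\circ}_0 e_1 \operatornamewithlimits{\circ}_0 e_2) \:=\: 1 ,
\qquad
\#_{e_2} (e_2 \operatornamewithlimits{\circ}_0 e_1 \operatornamewithlimits{\circ}_0 e_2) \:=\: 2 ,
\qquad
\# (e_2 \operatornamewithlimits{\circ}_0 e_1 \operatornamewithlimits{\circ}_0 e_2) \:=\: 3 ,
\]
while $\# \id_\ast = 0$, since the extension sends identity cells to $0$.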
\begin{definition}
[{Context \cite[definition 2.1.1]{Guiraud2009}}]
\label{def:polygraph-context}
Consider another cellular extension
\[
\catDD \xot{\src, \tgt} (X + \left\{ \Box \right\})
\]
of $\catDD$, where $\src \Box$ and $\tgt \Box$ are chosen
arbitrarily. An \emph{$n$-context} of $\catEE$ is a cell $C
\in \catDD [X + \left\{\Box \right\}]_n$ such that $\#_\Box C
= 1$. One may think of $C$ as a cell of $\catEE_n$ with a ``hole'', and we
sometimes write $C = C[\Box]$. If $u \in \catEE_n$ is parallel to
$\Box$ in $\catDD [X + \left\{ \Box \right\}]$, let $C [u]$ be
$C [\Box]$ where $\Box$ has been replaced by $u$. \end{definition}
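For instance (a minimal sketch with hypothetical generator names): take $n =
1$, let $\catDD$ be a $0$-category with objects $a, b, c, d$, and let $X$
contain $1$-generators $f : a \longrightarrow b$, $g : b \longrightarrow c$
and $h : c \longrightarrow d$. If $\Box$ is given source $b$ and target $c$,
then
\[
C [\Box] \:\eqdef\: h \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 f
\]
is a $1$-context (it contains exactly one occurrence of $\Box$), and $C [g] =
h \operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 f$.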
\begin{definition}
[{Category of contexts \cite[definition 2.1.2]{Guiraud2009}}]
\label{def:category-of-contexts}
The category $\Ctx_n \catEE$ of
\emph{$n$-contexts} of $\catEE$ has objects the $n$-cells of $\catEE$, and
a morphism $C : x \longrightarrow y$ is an $n$-context $C = C [\Box]$
such that $C [x] = y$. If $D : y \longrightarrow z$ is another context,
then the composite of $C$ and $D$ is $DC \eqdef D [C [\Box]] : x
\longrightarrow z$, as indeed, $D [C [x]] = D [y] = z$. \end{definition}
\begin{definition}
[Primitive context]
\label{def:primitive-context}
A context is \emph{primitive} over a cell $y \in
\catEE_n$ if it is of the form $C : x \longrightarrow y$ with $x \in X$. \end{definition}
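Continuing with hypothetical $1$-generators $f : a \longrightarrow b$, $g : b
\longrightarrow c$ and $h : c \longrightarrow d$ as above, the primitive
contexts over the cell $y \eqdef h \operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 f$ are exactly
\[
h \operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 \Box : f \longrightarrow y ,
\qquad
h \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 f : g \longrightarrow y ,
\qquad
\Box \operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 f : h \longrightarrow y ,
\]
one for each occurrence of a generator in $y$ (with $\Box$ given the
appropriate source and target in each case).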
\subsection{Many-to-one polygraphs}
\begin{definition}
[{Polygraph \cite[definition 7.1]{Harnik2008}}]
\label{def:polygraph}
A \emph{polygraph} (also called a
\emph{computad}) $\catPP$ consists of a
small $\omega$-category $\catCC$ and sets $\catPP_n \subseteq \catCC_n$ for
all $n \in \bbNN$, such that $\catPP_0 = \catCC_0$, and such that
$\catCC_{\leq n+1} = \catCC_{\leq n} [\catPP_{n+1}]$, i.e. the underlying
$(n+1)$-category of $\catCC$ is freely generated by $\catPP_{n+1}$ over its
underlying $n$-category. We usually write $\catPP^*$ instead of $\catCC$. A
polygraph $\catPP$ is an \emph{$n$-polygraph}
if $\catPP_k =
\emptyset$ whenever $k > n$. A \emph{morphism of polygraphs}
is an $\omega$-functor mapping generators to
generators. Let $\Pol$ be the category
of polygraphs and morphisms between them. \end{definition}
\begin{example}
A $1$-polygraph $\catPP$ is simply a free $1$-category generated by the
graph $\catPP_0 \xot{\src, \tgt} \catPP_1$. \end{example}
\begin{proposition}
\label{prop:polygraphs-cocomplete}
The category $\Pol$ is cocomplete. If $F : \calJJ \longrightarrow \Pol$ is
a diagram, and $n \in \bbNN$, then $(\colim_{i \in \calJJ} Fi)_n \cong
\colim_{i \in \calJJ} (Fi)_n$. \end{proposition}
\begin{notation}
If $\catPP \in \Pol$, we write $\Ctx_n \catPP$ instead of $\Ctx_n
\catPP_{\leq n-1} [\catPP_n]$ (see \cref{def:category-of-contexts}). \end{notation}
\begin{proposition}
[{\cite[proposition 2.1.3]{Guiraud2009}}]
\label{prop:Guiraud2009:2.1.3}
Let $\catPP$ be a polygraph, and $C \in \Ctx_n \catPP$. Then $C$ decomposes
as
\[
C \:=\: d_n \operatornamewithlimits{\circ}_{n-1} (
d_{n-1} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
d_1 \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 e_1
) \: \cdots \operatornamewithlimits{\circ}_{n-2} e_{n-1}
) \operatornamewithlimits{\circ}_{n-1} e_n ,
\]
where $d_n, e_n \in \catPP^*_n$, and for $1 \leq i < n$, $d_i$ and $e_i$
are identities of $i$-cells. \end{proposition}
\begin{definition}
[{Whisker \cite[paragraph 2.1.4]{Guiraud2009}}]
\label{def:whisker}
Let $\catPP$ be a polygraph. An $n$-\emph{whisker} of
$\catPP$ is an $n$-context of the form
$
d_{n-1} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
d_1 \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 e_1
) \: \cdots \operatornamewithlimits{\circ}_{n-2} e_{n-1}
$,
where for $1 \leq i \leq n-1$, $d_i$ and $e_i$ are identities of $i$-cells. \end{definition}
\begin{remark}
\label{rem:context-to-whisker}
If $C$ is an $(n-1)$-context, then by \cref{prop:Guiraud2009:2.1.3}, it
decomposes as on the left, and induces an $n$-whisker on the right
\[
C [\Box] \:=\: d_{n-1} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
d_1 \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 e_1
) \: \cdots \operatornamewithlimits{\circ}_{n-2} e_{n-1} ,
\qquad\qquad
\id_{d_{n-1}} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
\id_{d_1} \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 \id_{e_1}
) \: \cdots \operatornamewithlimits{\circ}_{n-2} \id_{e_{n-1}} ,
\]
which we shall also denote by $C$. \end{remark}
\begin{proposition}
[{\cite[proposition 2.1.5]{Guiraud2009}}]
\label{prop:Guiraud2009:2.1.5}
Let $\catPP$ be a polygraph, $u \in \catPP^*_n$, $k \eqdef \# u$, and
assume $k \geq 1$. Then $u$ decomposes as
\[
u \:=\: C_1 [x_1] \operatornamewithlimits{\circ}_{n-1} C_2 [x_2] \operatornamewithlimits{\circ}_{n-1} \cdots
\operatornamewithlimits{\circ}_{n-1} C_k [x_k] ,
\]
where all of the $C_i$'s are $n$-whiskers, and $x_1, \ldots, x_k \in
\catPP_n$. \end{proposition}
\begin{definition}
[{Partial composition \cite[definition 3.8]{Harnik2008}}]
\label{def:partial-composition}
Let $\catPP$ be a polygraph, $x, y \in \catPP_n^*$ be $n$-cells, and $C :
\tgt y \longrightarrow \src x$ be a context. The \emph{partial composition}
(called \emph{placed composition} in \cite[definition 3.8]{Harnik2008}) $x
\operatornamewithlimits{\circ}_C y$ is defined as
$
x \operatornamewithlimits{\circ}_C y \:\eqdef\: x \operatornamewithlimits{\circ}_{n-1} C [y] ,
$
where the notation $C[y]$ follows \cref{rem:context-to-whisker}. \end{definition}
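As a sanity check, if $C = \Box$ is the trivial context (so that $\tgt y =
\src x$), then $\Box [y] = y$ and
\[
x \operatornamewithlimits{\circ}_\Box y \:=\: x \operatornamewithlimits{\circ}_{n-1} \Box [y] \:=\: x \operatornamewithlimits{\circ}_{n-1} y ;
\]
in general, the whisker $C [y]$ pads $y$ with identities of lower-dimensional
cells so that it becomes $\operatornamewithlimits{\circ}_{n-1}$-composable with $x$.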
\begin{lemma}
\label{lemma:partial-composition}
With $x$, $y$, and $C$ as in \cref{def:partial-composition}, we have $\src
( x \operatornamewithlimits{\circ}_C y ) = C [\src y]$ and $\tgt ( x \operatornamewithlimits{\circ}_C y ) = \tgt x$. \end{lemma} \begin{proof}
We have $\tgt (x \operatornamewithlimits{\circ}_C y) = \tgt (x \operatornamewithlimits{\circ}_{n-1} C [y]) = \tgt x$. On
the other hand, $\src (x \operatornamewithlimits{\circ}_C y) = \src C [y]$. By
\cref{prop:Guiraud2009:2.1.3,rem:context-to-whisker}, $C [y]$ decomposes as
\[
C [y] \:=\:
\id_{d_{n-1}} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
\id_{d_1} \operatornamewithlimits{\circ}_0 y \operatornamewithlimits{\circ}_0 \id_{e_1}
) \: \cdots \operatornamewithlimits{\circ}_{n-2} \id_{e_{n-1}} ,
\]
where for $1 \leq k \leq n-1$, $d_k$ and $e_k$ are identities of $k$-cells.
Thus,
\[
\src C [y]
\:=\: d_{n-1} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
d_1 \operatornamewithlimits{\circ}_0 (\src y) \operatornamewithlimits{\circ}_0 e_1
) \: \cdots \operatornamewithlimits{\circ}_{n-2} e_{n-1}
\:=\: C [\src y] .
\] \end{proof}
\begin{definition}
[{Many-to-one polygraph \cite[definition 7.4]{Harnik2008}}]
\label{def:many-to-one-polygraph}
Let $\catPP \in \Pol$ be a polygraph. For $n \geq 1$, an $n$-cell $x \in
\catPP^*_n$ is said to be \emph{many-to-one} if $\tgt x \in \catPP_{n-1}$, and we
write $\catPP^\mathrm{mto}_n$ for the set of many-to-one $n$-cells of $\catPP$. By
convention, all $0$-cells are many-to-one. In turn, the polygraph $\catPP$
is called \emph{many-to-one} (or \emph{opetopic}) if all its generators are
many-to-one. Let $\Pol^\MTO$ be the full subcategory of $\Pol$ spanned by
many-to-one polygraphs. \end{definition}
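For instance (a small sketch with hypothetical generators), let $\catPP$ be a
$2$-polygraph with $1$-generators $f, g, h$ such that $g \operatornamewithlimits{\circ}_0 f$ is
defined and parallel to $h$. Then, for $2$-cells $\alpha, \beta \in
\catPP^*_2$,
\[
\tgt \alpha \:=\: h \in \catPP_1
\;\Longrightarrow\; \alpha \in \catPP^\mathrm{mto}_2 ,
\qquad\qquad
\tgt \beta \:=\: g \operatornamewithlimits{\circ}_0 f \notin \catPP_1
\;\Longrightarrow\; \beta \notin \catPP^\mathrm{mto}_2 ,
\]
since a composite of two $1$-generators is never itself a generator.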
\begin{lemma}
\label{lemma:whisker-mto}
Let $\catPP \in \Pol^\MTO$ and $u \in \catPP^*_n$ be such that $\# u \geq 1$. By
\cref{prop:Guiraud2009:2.1.5}, it decomposes as
\[
u \:=\: C_1 [x_1] \operatornamewithlimits{\circ}_{n-1} C_2 [x_2] \operatornamewithlimits{\circ}_{n-1} \cdots
\operatornamewithlimits{\circ}_{n-1} C_k [x_k] ,
\]
where $k \eqdef \# u$, where all of the $C_i$'s are $n$-whiskers, and
$x_1, \ldots, x_k \in \catPP_n$. Then $u$ is a many-to-one cell if and only
if $C_1 = \Box$, i.e. if $C_1$ is the trivial context. \end{lemma} \begin{proof}
First, note that $\tgt u = \tgt C_1 [x_1]$. Write $C_1$ as
\[
C_1 [\Box] \:=\: d_{n-1} \operatornamewithlimits{\circ}_{n-2} \cdots \: (
d_1 \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 e_1
) \: \cdots \operatornamewithlimits{\circ}_{n-2} e_{n-1} ,
\]
where for $1 \leq i \leq n-1$, $d_i$ and $e_i$ are identities of $i$-cells
(see \cref{def:whisker}). We have
\[
\tgt u
\:=\: \tgt C_1 [x_1]
\:=\: (\tgt d_{n-1}) \operatornamewithlimits{\circ}_{n-2} \cdots \: (
(\tgt d_1) \operatornamewithlimits{\circ}_0 (\tgt x_1) \operatornamewithlimits{\circ}_0 (\tgt e_1)
) \: \cdots \operatornamewithlimits{\circ}_{n-2} (\tgt e_{n-1}) .
\]
Thus, $\tgt u$ is a generator if and only if $\tgt d_i$ and $\tgt e_i$ are
identities, for all $1 \leq i \leq n-1$. In this case, $d_i$ and $e_i$ are
identity cells of $(i-1)$-cells, thus $C_1 = \Box$. Conversely, if
$C_1 = \Box$, then $\tgt u = \tgt x_1$ is a generator since $\catPP$
is a many-to-one polygraph, thus $u$ is a many-to-one cell. \end{proof}
The following result is a polygraphic analogue of \cref{prop:polynomial-functor:trees-are-graftings}.
\begin{proposition}
\label{prop:mto-induction}
A many-to-one $n$-cell $u$ of a polygraph $\catPP \in \Pol^\MTO$ is of one of the following forms:
\begin{enumerate}
\item $\id_a$ for $a \in \catPP_{n-1}$,
\item $x \in \catPP_n$,
\item $v \operatornamewithlimits{\circ}_C x = v \operatornamewithlimits{\circ}_{n-1} C [x]$, for some $v \in
\catPP^\mathrm{mto}_n$ with $1 \leq \# v < \# u$, $x \in \catPP_n$, and
$C : \tgt x \longrightarrow \src v$.
\end{enumerate} \end{proposition} \begin{proof}
If $\# u = 0$, then $u = \id_a$ for some $a
\in \catPP^*_{n-1}$. Further, $a = \tgt u \in \catPP_{n-1}$. If $\# u = 1$, then
by \cref{lemma:whisker-mto}, $u$ is necessarily a generator. If $\# u =
k \geq 2$, then by \cref{prop:Guiraud2009:2.1.5}, $u$ decomposes as
\[
u \:=\: C_1 [x_1] \operatornamewithlimits{\circ}_{n-1} C_2 [x_2] \operatornamewithlimits{\circ}_{n-1} \cdots
\operatornamewithlimits{\circ}_{n-1} C_k [x_k] ,
\]
where all of the $C_i$'s are $n$-whiskers, and $x_1, \ldots, x_k \in
\catPP_n$. Let
\[
v \:\eqdef\:
C_1 [x_1] \operatornamewithlimits{\circ}_{n-1} C_2 [x_2] \operatornamewithlimits{\circ}_{n-1} \cdots
\operatornamewithlimits{\circ}_{n-1} C_{k-1} [x_{k-1}] .
\]
Then $C_k$ is a context $\tgt x_k \longrightarrow \src v$, and $u = v
\operatornamewithlimits{\circ}_{C_k} x_k$. By \cref{lemma:whisker-mto}, and since $u$ is
many-to-one, $C_1 = \Box$. By \cref{lemma:whisker-mto} again, $v$ is
many-to-one, and $\# v = k-1 \geq 1$, finishing the proof. \end{proof}
\begin{notation}
For $\catPP \in \Pol^\MTO$, let $\Ctx_n^\mathrm{mto} \catPP$ be the full subcategory
of $\Ctx_n \catPP$ spanned by many-to-one $n$-cells. In other words, an
$n$-context $C : u \longrightarrow v$ is in $\Ctx_n^\mathrm{mto} \catPP$ if $u, v
\in \catPP^\mathrm{mto}_n$. Necessarily, such a context is itself a many-to-one
cell, as $\tgt C [\Box] = \tgt C [u] = \tgt v$ is a generator. \end{notation}
\begin{definition}
\label{def:terminal-mtop}
Define a polygraph $\catTT \in \Pol^\MTO$ by $\catTT_0
\eqdef \{ \filledlozenge \}$, $\catTT_{n+1} \eqdef \left\{ (u, v) \in
\catTT_n^\mathrm{mto} \times \catTT_n \mid u \parallel v \right\}$ (see
\cref{def:many-to-one-polygraph,def:parallel-cells-hcat}), with $\src (u,
v) \eqdef u$ and $\tgt (u, v) \eqdef v$. \end{definition}
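To unfold this definition in low dimensions: $\catTT_1 = \left\{ (\filledlozenge,
\filledlozenge) \right\}$ is a singleton, and we write $\filledsquare \eqdef (\filledlozenge,
\filledlozenge)$ for its element. Every $1$-cell of $\catTT$ is many-to-one (its
target is the $0$-generator $\filledlozenge$) and any two $1$-cells are parallel, so
that
\[
\catTT_2
\:=\: \left\{ (\id_\filledlozenge, \filledsquare),\
(\filledsquare, \filledsquare),\
(\filledsquare \operatornamewithlimits{\circ}_0 \filledsquare, \filledsquare),\
(\filledsquare \operatornamewithlimits{\circ}_0 \filledsquare \operatornamewithlimits{\circ}_0 \filledsquare, \filledsquare),\
\ldots \right\}
\:\cong\: \bbNN ,
\]
which mirrors the opetopic integers of \cref{ex:opetopes}.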
\begin{proposition}
\label{prop:terminal-mtop}
The polygraph $\catTT$ is terminal in $\Pol^\MTO$. \end{proposition} \begin{proof}
For $\catPP \in \Pol^\MTO$, we show that there exists a unique morphism
$f : \catPP \longrightarrow \catTT$.
\begin{itemize}
\item (Existence) If $x \in \catPP_0$, let $f (x) \eqdef \filledlozenge$,
and if $x \in \catPP_n$ with $n \geq 1$, let $f (x) = (f (\src x), f
(\tgt x))$. The source and target compatibility is trivial.
\item (Uniqueness) Consider $g : \catPP \longrightarrow \catTT$.
Necessarily $g_0 = f_0$ as $\catTT_0$ is a singleton. Let $x \in
\catPP_n$ with $n \geq 1$. We have
\begin{equation*}
f_n (x)
\:=\: (f_{n-1} (\src x), f_{n-1} (\tgt x))
\:=\: (g_{n-1} (\src x), g_{n-1} (\tgt x))
\:=\: (\src g_n (x), \tgt g_n (x))
\:=\: g_n (x) .
\end{equation*}
Therefore, $f = g$.
\qedhere
\end{itemize} \end{proof}
\begin{notation}
\label{not:terminal-map}
If $\catPP$ is a many-to-one polygraph, we write $\shriek : \catPP
\longrightarrow \catTT$ for the terminal map. \end{notation}
\begin{definition}
\label{def:effective-and-familially-representable}
\begin{enumerate}
\item An \emph{effective category} is a
category $\catCC$ equipped with a functor $F : \catCC \longrightarrow
\Set$. For example:
\begin{enumerate}
\item if $\catAA$ is a small category, then $\Psh\catAA$ (or any
subcategory thereof) is naturally an effective category with the
functor $\Psh\catAA \longrightarrow \Set$ mapping a presheaf $X$ to
$\sum_{a \in \catAA} X_a$;
\item $\Pol$ (or any subcategory thereof) is an effective category,
where the functor $\Pol \longrightarrow \Set$ maps a polygraph
$\catPP$ to $\sum_{n \in \bbNN} \catPP_n$.
\end{enumerate}
\item A category $\catCC$ is an \emph{effective presheaf category} if
it is effective, and equivalent, as an effective category\footnote{i.e.
the equivalence functor commutes with the equipped functors to $\Set$.},
to a presheaf category.
\item A functor $F : \catCC \longrightarrow \Set$ is \emph{familially
representable} \cite[definition 2.4]{Carboni1995} if $F \:\cong\: \sum_{i
\in I} \catCC (c_i, -)$ for some family $\left\{ c_i \mid i \in I
\right\}$ of objects of $\catCC$.
\end{enumerate} \end{definition}
\begin{theorem}
\label{th:Henry2019}
\begin{enumerate}
\item \cite[corollary 2.4.9]{Henry2019} The category $\Pol^\MTO$ is a
\emph{good class of polygraphs} \cite[definition 2.2.2]{Henry2019}, and
in particular
\begin{enumerate}
\item it is an effective presheaf category;
\item for all $n \in \bbNN$, the functor $(-)^*_n : \Pol^\MTO
\longrightarrow \Set$ that maps a polygraph $\catPP \in \Pol^\MTO$ to
its set $\catPP_n^*$ of $n$-cells is familially representable, and
the representing objects are called the \emph{opetopic
$n$-polyplexes} (or just \emph{$n$-polyplexes}):
\[
\catPP_n^* \:\cong\: \sum_{
\substack{\omega \text{ is an opetopic} \\
n \text{-polyplex}}
} \Pol^\MTO (\omega, \catPP) .
\]
\end{enumerate}
\item \cite[proposition 2.2.6]{Henry2019} The opetopic $n$-polyplexes
are in bijective correspondence with $\catTT^*_n$. Isomorphic
polyplexes are equal. If $u \in \catTT^*_n$, let $\uU$ be the
associated polyplex (refer to \cite[section 2.3]{Henry2019} for the
precise construction). The isomorphism above can be reformulated as
\[
\triangleURdiagram
{\catPP_n^*}{\sum_{u \in \catTT^*_n} \Pol^\MTO (\uU, \catPP)}
{\catTT_n^* ,}
{\cong}{\shriek}{}
\]
where the vertical morphism maps an element of the component of the sum
indexed by $u \in \catTT^*_n$ to $u$. In other words, a cell $v \in \catPP^*_n$
corresponds to a unique morphism of the form $\uU \longrightarrow
\catPP$, and $u = \shriek v$ (see \cref{not:terminal-map}).
\item \cite[lemma 2.4.4, corollary 2.3.13]{Henry2019} Let $0 \leq k <
n$, $a \in \catTT^*_k$, and $u, v \in \catTT^*_n$ be such that
$\tgt^{n-k} v = a = \src^{n-k} u$. Then we have natural maps
$\src^{n-k} : \uA \longrightarrow \uU$ and $\tgt^{n-k} : \uA
\longrightarrow \uV$, and $\underline{u \operatornamewithlimits{\circ}_k v}$ is obtained as the
pushout
\[
\pushoutdiagram
{\uA}{\uV}{\uU}{\underline{u \operatornamewithlimits{\circ}_k v} .}
{\tgt^{n-k}}{\src^{n-k}}{\iota_v}{\iota_u}
\]
Furthermore, the maps $\iota_u$ and $\iota_v$ are injective on $(n-1)$-
and $n$-cells.
\end{enumerate} \end{theorem}
\begin{example}
\label{ex:polyplex}
By \cref{def:terminal-mtop}, $\catTT$ has a unique $0$-cell $\filledlozenge$, and
the corresponding polyplex $\underline{\filledlozenge}$ is simply the polygraph
with a single $0$-generator. Indeed, $\Pol^\MTO (\underline{\filledlozenge}, -)$
maps a polygraph $\catPP \in \Pol^\MTO$ to its set of $0$-cells $\catPP_0$.
If we write $\filledsquare \eqdef (\filledlozenge, \filledlozenge)$ for the unique
$1$-generator of $\catTT$, then
\[
\catTT^*_1 \:=\: \left\{
\:\:
\id_{\filledlozenge}, \:\:
\filledsquare, \:\:
\filledsquare \operatornamewithlimits{\circ}_0 \filledsquare, \:\:
\filledsquare \operatornamewithlimits{\circ}_0 \filledsquare \operatornamewithlimits{\circ}_0 \filledsquare, \:\:
\filledsquare \operatornamewithlimits{\circ}_0 \filledsquare \operatornamewithlimits{\circ}_0 \filledsquare \operatornamewithlimits{\circ}_0 \filledsquare, \:\:
\ldots \:\:
\right\} .
\]
Write $l_0 \eqdef \id_{\filledlozenge}$, and $l_k$ for the composite $\filledsquare
\operatornamewithlimits{\circ}_0 \cdots \operatornamewithlimits{\circ}_0 \filledsquare$ of $k$ instances of $\filledsquare$. Then the
polyplex $\underline{l_0}$ is simply $\underline{\filledlozenge}$, and
$\underline{l_k}$ spans the free category on the linear graph with $k$
edges
$
\bullet \longrightarrow
\bullet \longrightarrow
\cdots \longrightarrow \bullet
$.
Indeed, let $\catPP \in \Pol^\MTO$. Then a $1$-cell $u$ of $\catPP$ is either
\begin{enumerate}
\item an identity of a $0$-cell;
\item a composite of $k \geq 1$ composable $1$-generators of $\catPP$.
\end{enumerate}
If $u = \id_a$ for some $a \in \catPP_0$, then it is uniquely identified
(as a $1$-cell) by a morphism $\underline{l_0} \longrightarrow \catPP$
mapping the unique $0$-cell of $\underline{l_0}$ to $a$. If $u$ is a
composite of $k$ generators, then it is uniquely identified (as a $1$-cell) by a
morphism $\underline{l_k} \longrightarrow \catPP$. In conclusion,
\[
\catPP^*_1
\:\cong\: \sum_{k \in \bbNN} \Pol^\MTO (\underline{l_k}, \catPP)
\:=\: \sum_{v \in \catTT^*_1} \Pol^\MTO (\uV, \catPP) .
\] \end{example}
\begin{remark}
If $\uU$ is an $n$-polyplex, then under the isomorphism of
\cref{th:Henry2019} (2), the identity morphism $\uU \longrightarrow \uU$
corresponds to an $n$-cell of $\uU$, which we call its \emph{fundamental
cell}, and following \cite{Henry2019}, denote by $u$. If $\catPP$ is a
many-to-one polygraph, $v \in \catPP^*_n$, and $u = \shriek v$, then the
map $f : \uU \longrightarrow \catPP$ corresponding to $v$ maps the
fundamental cell $u$ to $v$. Indeed, by naturality of the isomorphism of
\cref{th:Henry2019} (2), the diagram on the left commutes, which results in
the mappings displayed on the right:
\[
\squarediagram
{\uU^*_n}{\sum_{w \in \catTT^*_n} \Pol^\MTO (\uW, \uU)}
{\catPP^*_n}{\sum_{w \in \catTT^*_n} \Pol^\MTO (\uW, \catPP)}
{\cong}{f}{f \circ -}{\cong}
\qquad\qquad
\begin{tikzcd}
u
\ar[r, mapsto]
&
\id_{\uU}
\ar[d, mapsto]
\\
v
\ar[r, mapsto]
&
f.
\end{tikzcd}
\]
Necessarily, $f (u) = v$. \end{remark}
\begin{lemma}
[{\cite[lemma 2.4.5]{Henry2019}}]
\label{lemma:Henry2019:2.4.5}
Let $n \geq 1$, $\uU$ be an $n$-polyplex, and $u$ be its fundamental cell.
For $a \in \uU_{n-1}$, exactly one of the following two possibilities
is true:
\begin{enumerate}
\item $a$ occurs in $\src u$;
\item $a$ is the target of an $n$-generator of $\uU$.
\end{enumerate}
Furthermore, in the second case, the $n$-generator in question is unique. \end{lemma}
\begin{lemma}
[{\cite[remark 2.2.9, corollary 2.2.13]{Henry2019}}]
\label{lemma:cartesian-compositions}
Let $f : \catPP \longrightarrow \catQQ$ be a morphism of many-to-one
polygraphs, and recall from \cref{def:omega-category} that $\catPP^*_{n, k}
= \catPP^*_n \times_{\catPP^*_k} \catPP^*_n$. The following square is
cartesian
\[
\squarediagram
{\catPP^*_{n, k}}{\catPP^*_n}{\catQQ^*_{n, k}}{\catQQ^*_n .}
{\operatornamewithlimits{\circ}_k}{f}{f}{\operatornamewithlimits{\circ}_k}
\]
Consequently, for $u_1, u_2, v_1, v_2 \in \catPP^*_n$, if $u_1 \operatornamewithlimits{\circ}_k v_1
= u_2 \operatornamewithlimits{\circ}_k v_2$, $f (u_1) = f(u_2)$, and $f (v_1) = f(v_2)$, then $u_1
= u_2$ and $v_1 = v_2$. \end{lemma} \begin{proof} [Proof (sketch)]
Let us first consider the case $\catQQ = \catTT$, and let $u, v \in
\catPP_n^*$ be such that $\tgt^{n-k} v = \src^{n-k} u = a$. In particular,
the pair $(u, v)$ is in $\catPP^*_{n, k}$. We have a series of correspondences
\begin{prooftree}
\AxiomC{A tuple $(u, v) \in \catPP^*_{n, k}$}
\RightLabel{\cref{th:Henry2019} (2)}
\UnaryInfC{A map $
\underline{\shriek u} \coprod_{\underline{\shriek a}}
\underline{\shriek v}
\longrightarrow \catPP
$}
\RightLabel{\cref{th:Henry2019} (3)}
\UnaryInfC{A map $
\underline{\shriek (u \operatornamewithlimits{\circ}_k v)} \longrightarrow \catPP
$ with a decomposition of $\shriek (u \operatornamewithlimits{\circ}_k v)$ as $x \operatornamewithlimits{\circ}_k y$}
\RightLabel{\cref{th:Henry2019} (2)}
\UnaryInfC{An element $u \operatornamewithlimits{\circ}_k v \in \catPP^*_n$ with a
decomposition of $\shriek (u \operatornamewithlimits{\circ}_k v)$ as $x \operatornamewithlimits{\circ}_k y$.}
\end{prooftree}
In other words, the following square is a pullback
\[
\squarediagram
{\catPP^*_{n, k}}{\catPP^*_n}{\catTT^*_{n, k}}{\catTT^*_n .}
{\operatornamewithlimits{\circ}_k}{\shriek}{\shriek}{\operatornamewithlimits{\circ}_k}
\]
For the general case, note that in the following diagram, the lower and
outer squares are cartesian, and by the pasting lemma, so is the upper one:
\[
\begin{tikzcd}
\catPP^*_{n, k}
\ar[r, "\operatornamewithlimits{\circ}_k"] \ar[d, "f" right]
\ar[dd, bend right, "\shriek" left] &
\catPP^*_n
\ar[d, "f" left] \ar[dd, bend left, "\shriek" right] \\
\catQQ^*_{n, k}
\pullbackcorner
\ar[r, "\operatornamewithlimits{\circ}_k"] \ar[d, "\shriek" right] &
\catQQ^*_n
\ar[d, "\shriek" left] \\
\catTT^*_{n, k}
\ar[r, "\operatornamewithlimits{\circ}_k"] &
\catTT^*_n .
\end{tikzcd}
\] \end{proof}
Let $\catPP \in \Pol^\MTO$ and $v \in \catPP_n^\mathrm{mto}$. Then $v$ is a composition of $n$-generators of $\catPP$, which are many-to-one, so intuitively, $v$ is a ``tree of $n$-generators''. In this section, we make this idea formal. We first define a polynomial functor $\nabla_n \catPP$ whose operations are the $n$-generators of $\catPP$ (\cref{def:nabla-construction}), and then construct the \emph{composition} $\composition{T} \in \catPP_n^\mathrm{mto}$ of a $\nabla_n \catPP$-tree $T$. In \cref{proposition:composition-map-bijective}, we show that this construction is bijective.
\begin{definition}
[The $\nabla$ construction]
\label{def:nabla-construction}
For $\catPP \in \Pol^\MTO$ and $n \geq 1$, let $\nabla_n \catPP$
be the following polynomial endofunctor:
\[
\polynomialfunctor
{\catPP_{n-1}}{\catPP^\bullet_n}{\catPP_n}{\catPP_{n-1},}
{\src}{p}{\tgt}
\]
where for $x \in \catPP_n$, the fiber $\catPP^\bullet_n (x)$ is the set
of primitive contexts over $\src x$, and for $C : a \longrightarrow \src x$
in $\catPP^\bullet_n (x)$, $\src (C) \eqdef a$, $p (C) \eqdef x$, and
$\tgt$ is the target map of $\catPP$. \end{definition}
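For instance (hypothetical generators), if $x \in \catPP_2$ has $\src x = h
\operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 f$ with $f, g, h \in \catPP_1$, then
\[
\catPP^\bullet_2 (x) \:=\: \left\{
h \operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 \Box ,\
h \operatornamewithlimits{\circ}_0 \Box \operatornamewithlimits{\circ}_0 f ,\
\Box \operatornamewithlimits{\circ}_0 g \operatornamewithlimits{\circ}_0 f
\right\} ,
\]
with $\src$ sending these three primitive contexts to $f$, $g$ and $h$
respectively, and $p$ sending each of them to $x$.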
\begin{lemma}
\label{lemma:primitive-context-of-mto-cells}
Let $\catPP, \catQQ \in \Pol^\MTO$, and $f : \catPP \longrightarrow \catQQ$.
For $v \in \catPP_n^\mathrm{mto}$, write $E (v)$ for the set of primitive contexts
over $v$, and likewise for many-to-one cells of $\catQQ$. Then $f$ induces
a bijection $E(v) \longrightarrow E(f (v))$. \end{lemma} \begin{proof}
We proceed by induction on $v$ (see \cref{prop:mto-induction}).
\begin{enumerate}
\item If $v$ is an identity (resp. a generator), then so is $f (v)$,
thus $E (v)$ and $E (f (v))$ are both empty (resp. singletons).
Trivially, $f : E (v) \longrightarrow E (f (v))$ is a bijection.
\item Assume that $v$ decomposes as $v = w \operatornamewithlimits{\circ}_C x$ with $\# w
\geq 1$ and $x \in \catPP_n$, and $C : \tgt x \longrightarrow \src w$.
Then a primitive context over $v$ is either $w \operatornamewithlimits{\circ}_C \Box$ or
of the form $D [\Box] \operatornamewithlimits{\circ}_C x$ for $D \in E(w)$, and $E (v)
\cong 1 + E (w)$. Likewise, $E (f (v)) \cong 1 + E (f (w))$, and it is
straightforward to check that $f : E (v) \longrightarrow E (f (v))$ is
indeed a bijection.
\qedhere
\end{enumerate} \end{proof}
\begin{proposition}
\label{prop:nabla-construction-induced-morphism}
Let $f : \catPP \longrightarrow \catQQ$ be a morphism of many-to-one
polygraphs. For all $n \geq 1$, it induces a morphism of polynomial
functors $\nabla_n f : \nabla_n \catPP \longrightarrow \nabla_n \catQQ$,
where $(\nabla_n f)_1 = f_n : \catPP_n \longrightarrow \catQQ_n$. \end{proposition} \begin{proof}
Consider
\[
\begin{tikzcd}
\catPP_{n-1}
\ar[d, "f"'] &
\catPP^\bullet_n
\ar[r, "p"]
\ar[d, "f^\bullet" left]
\ar[l, "\src" above] &
\catPP_n
\ar[r, "\tgt"]
\ar[d, "f" left] &
\catPP_{n-1}
\ar[d, "f" left] \\
\catQQ_{n-1} &
\catQQ^\bullet_n
\ar[r, "p"]
\ar[l, "\src" above] &
\catQQ_n
\ar[r, "\tgt"] &
\catQQ_{n-1}
\end{tikzcd}
\]
where $f^\bullet_n$ maps a context $C : a \longrightarrow \src x$ to $f
(C) : f (a) \longrightarrow f (\src x)$. Clearly, all squares commute, and
by \cref{lemma:primitive-context-of-mto-cells}, the middle one is
cartesian. \end{proof}
\begin{definition}
[Composition]
\label{def:composition}
We define the \emph{composition} operation
$\composition{(-)} : \tree \nabla_n \catPP \longrightarrow
\catPP_n^\mathrm{mto}$. At
the same time, we establish a bijection between $T^\medvert$ and the
primitive contexts over $\src T^\circ$, where $T \in \tree \nabla_n
\catPP$.
\begin{enumerate}
\item If $a \in \catPP_{n-1}$, then $(\itree{a})^\circ \eqdef \id_a$.
Note that the only primitive context over $\src \id_a$ is $\Box :
a \longrightarrow a$, and let $C_{[]} \eqdef \Box$.
\item If $x \in \catPP_n$, then $(\ytree{x})^\circ \eqdef x$. Note that
by definition of $\nabla_n \catPP$ (\cref{def:nabla-construction}) we
have $\ytree{x}^\medvert = \left\{ [D] \mid D \in x^\bullet
\right\}$ (see \cref{rem:elementary-p-trees:addresses}), and let
$C_{[D]} \eqdef D$.
\item Consider a tree of the form $S = T \operatornamewithlimits{\circ}_{[l]} \ytree{x}$, with
$T \in \tree \nabla_n \catPP$ having at least one node, $[l] \in
T^\medvert$ and $x \in \catPP_n$. By induction, the leaf $[l]$
corresponds to a primitive context $C_{[l]} : a \longrightarrow \src
T^\circ$, and moreover, $a = \tgt x$. Let $S^\circ \eqdef T^\circ
\operatornamewithlimits{\circ}_{C_{[l]}} x$. Let $[l'] \in S^\medvert$. If $[l']$ is of the
form $[l D]$, for some $[D] \in \ytree{x}^\medvert$, let $C_{[l']}
\eqdef C_{[l]} [D]$. Otherwise, $[l']$ is a leaf of $T$, and so
$C_{[l']}$ is already defined.
If $S = T' \operatornamewithlimits{\circ}_{[l']} \ytree{x'}$ is another decomposition of $S$,
then the fact that $T^\circ \operatornamewithlimits{\circ}_{C_{[l]}} x = (T')^\circ
\operatornamewithlimits{\circ}_{C_{[l']}} x'$ follows from the exchange law.
\end{enumerate} \end{definition}
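As a minimal instance of item (3): for $x, y \in \catPP_n$ and $D \in
y^\bullet$ with $\src D = \tgt x$, grafting the corolla of $x$ onto the leaf
$[D]$ of the corolla of $y$ recovers the partial composition of
\cref{def:partial-composition}:
\[
\composition{\left( \ytree{y} \operatornamewithlimits{\circ}_{[D]} \ytree{x} \right)}
\:=\: y \operatornamewithlimits{\circ}_D x .
\]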
\begin{definition}
[Composition tree]
\label{def:composition-tree}
A \emph{composition tree} is simply a $\nabla_n
\catPP$-tree. If $v \in \catPP_n^\mathrm{mto}$ and $T$ is a composition tree such
that $\composition{T} = v$, then we say that $T$ is a composition tree of
$v$. \end{definition}
The following result generalizes \cref{lemma:cartesian-compositions}.
\begin{proposition}
\label{prop:cartesian-compositions}
Let $f : \catPP \longrightarrow \catQQ$ a morphism of many-to-one
polygraphs. The following square is cartesian
\[
\squarediagram
{\tree \nabla_n \catPP}{\catPP_n^\mathrm{mto}}
{\tree \nabla_n \catQQ}{\catQQ_n^\mathrm{mto}.}
{\composition{(-)}}{f}{f}{\composition{(-)}}
\] \end{proposition} \begin{proof}
This amounts to showing that if $v \in \catPP_n^\mathrm{mto}$, then $f$ establishes
a bijective correspondence between the composition trees of $v$ and those of $f
(v)$. This is clear if $\# v \leq 1$, so let us assume $\# v \geq 2$,
and let $T$ be a composition tree of $f(v)$. Then $T$ decomposes as
\[
T \:=\: \ytree{a} \biggraft_{[C]} T_C ,
\]
where $a \in \catQQ_n$, $C$ ranges over the primitive contexts over $\src
a$, and $T_C \in \tree \nabla_n \catQQ$. Then, by
\cref{lemma:cartesian-compositions}, there exists a unique $b \in \catPP_n$
such that $f(b) = a$, and unique $v_C$'s such that $f(v_C) =
\composition{T_C}$. Further, $f$ exhibits a bijection between the primitive
contexts over $\src a$ and over $\src b$. Note that $\# v_C < \# v$, so by
induction, there exists a unique $S_C \in \tree \nabla_n \catPP$ such that
$f (S_C) = T_C$. Finally,
\[
S \:\eqdef\: \ytree{b} \biggraft_{[D]} S_{f(D)}
\]
is the unique tree such that $f(S) = T$. \end{proof}
\begin{lemma}
\label{lemma:composition-tree:fundamental-cell}
Let $\uU$ be an $n$-polyplex. The fundamental cell $u \in \uU^*_n$ has at most one
composition tree. \end{lemma} \begin{proof}
Assume that $S$ and $T$ are two composition trees of $u$. Necessarily,
$\edg_{[]} S = \tgt u = \edg_{[]} T$, i.e. the root edges of $S$ and $T$
are both decorated by the target of $u$ (recall the notation from
\cref{not:edg,def:o}). By \cref{lemma:Henry2019:2.4.5}, exactly one of the
following two possibilities is true.
\begin{enumerate}
\item If $\tgt u$ occurs in $\src u$, then $u$ is an identity cell, and
$S = \itree{\tgt u} = T$. We are done.
\item Otherwise $\tgt u$ is the target of a unique $n$-generator of
$\uU$, say $x$. In particular, $\src_{[]} S = x = \src_{[]} T$, i.e.
the root nodes of $S$ and $T$ are decorated by $x$. Let $a$ be an
$(n-1)$-generator occurring in $\src x$. It decorates an input edge $e$
(resp. $e'$) of the root node of $S$ (resp. $T$). If $a$ occurs in
$\src u$, then by \cref{lemma:Henry2019:2.4.5} again, there is no
$n$-generator whose target is $a$. So in $S$ (resp. $T$), there cannot
be a node above $e$ (resp. $e'$), i.e. $e$ (resp. $e'$) is a leaf.
Otherwise, $a$ is the target of a unique $n$-generator $y$, and
necessarily, the nodes above $e$ and $e'$, namely $\src_{[e]} S$ and
$\src_{[e']} T$ are decorated by $y$. Applying
\cref{lemma:Henry2019:2.4.5} repeatedly, we show that $S = T$.
\qedhere
\end{enumerate} \end{proof}
\begin{proposition}
\label{proposition:composition-map-bijective}
The composition map $\composition{(-)} : \tree \nabla_n \catPP
\longrightarrow \catPP_n^\mathrm{mto}$ is a bijection. \end{proposition} \begin{proof}
\begin{itemize}
\item (Surjectivity) This clearly holds if $n \leq 1$. Assume $n \geq
2$ and let $v \in \catPP^\mathrm{mto}_n$. If $v = \id_a$ for some $a \in
\catPP_{n-1}$, then $v = \composition{\itree{a}}$. If $v \in \catPP_n$,
then $v = \composition{\ytree{v}}$. Otherwise, by
\cref{prop:mto-induction}, $v$ decomposes as $w \operatornamewithlimits{\circ}_C x$, where $w
\in \catPP_n^\mathrm{mto}$, $x \in \catPP_n$, and $C : \tgt x \longrightarrow
\src w$ is a context. By induction, there exists $T \in \tree \nabla_n
\catPP$ such that $\composition{T} = w$. By construction, $C$
corresponds to a unique leaf address $[l] \in T^\medvert$. Finally,
$v = \composition{\left( T \operatornamewithlimits{\circ}_{[l]} \ytree{x} \right)}$.
\item (Injectivity) Let $v \in \catPP_n^\mathrm{mto}$, $u \eqdef \shriek v$,
and $f : \uU \longrightarrow \catPP$ be the map associated with $v$
(see \cref{th:Henry2019} (2)). By \cref{prop:cartesian-compositions},
the following square is cartesian
\[
\pullbackdiagram
{\tree \nabla_n \uU}{\uU_n^\mathrm{mto}}
{\tree \nabla_n \catPP}{\catPP_n^\mathrm{mto} ,}
{\composition{(-)}}{f}{f}{\composition{(-)}}
\]
and in particular, since $f$ maps the fundamental cell $u$ to $v$, it
induces a bijection between the composition trees of $u \in \uU_n^\mathrm{mto}$
and $v \in \catPP_n^\mathrm{mto}$. By
\cref{lemma:composition-tree:fundamental-cell} and surjectivity proved
above, $u$ has exactly one composition tree, and thus, so does $v$.
\qedhere
\end{itemize} \end{proof}
\begin{notation}
Let the \emph{composition tree} operation $\ct : \catPP_n^\mathrm{mto}
\longrightarrow \tree \nabla_n \catPP$ be the inverse of the composition
operation $\composition{(-)}$ of \cref{def:composition}. \end{notation}
\begin{corollary}
\label{coroll:mto-decomposition}
Let $\catPP \in \Pol^\MTO$ and $v \in \catPP^\mathrm{mto}_n$ with $\# v \geq 1$.
Then $v$ uniquely decomposes as
\[
v \:=\: x \biggraft_C v_C
\]
where $x \in \catPP_n$, $C$ ranges over the primitive contexts over $\src
x$, and $v_C \in \catPP^\mathrm{mto}_n$. \end{corollary} \begin{proof}
The composition tree of $v$ decomposes uniquely as
\[
\ct v \:=\: \ytree{x} \biggraft_{[C]} T_C ,
\]
and applying back $\composition{(-)}$ gives the desired decomposition of
$v$. \end{proof}
\begin{corollary}
\label{coroll:mto-decomposition-context}
Let $\catPP \in \Pol^\MTO$. A many-to-one context $C \in \Ctx^\mathrm{mto}_n \catPP$
decomposes uniquely as
\[
C
\:=\: C [\Box]
\:=\: x \operatornamewithlimits{\circ}_D \Box \biggraft_E v_E
\]
where $x, v_E \in \catPP^\mathrm{mto}_n$. \end{corollary} \begin{proof}
We proceed by induction on $\# C$. Since $\#_\Box C = 1$ (i.e.
$C$ has only one occurrence of $\Box$, see
\cref{def:counting-function}), we have $\# C \geq 1$. By
\cref{coroll:mto-decomposition}, $C$ uniquely decomposes as
\[
C \:=\: y \biggraft_F u_F .
\]
One of the following two possibilities is true.
\begin{enumerate}
\item If $y = \Box$, then consider
\[
C \:=\: \id_{\tgt \Box} \operatornamewithlimits{\circ}_\boxdot \Box
\biggraft_F u_F ,
\]
where $\boxdot$ is the trivial context $\tgt \Box \longrightarrow
\tgt \Box$.
\item Otherwise, there exists a unique $F \in y^\bullet$ such that
$\#_\Box u_F = 1$. By induction, $u_F$ decomposes as on the
left, and consider the decomposition of $C$ as on the right:
\[
u_F \:=\: z \operatornamewithlimits{\circ}_G \Box \biggraft_H w_H ,
\qquad\qquad
C \:=\: \left( y \operatornamewithlimits{\circ}_F z \biggraft_{G \neq F} u_G \right)
\operatornamewithlimits{\circ}_F \Box \biggraft_H w_H .
\]
\qedhere
\end{enumerate} \end{proof}
\begin{remark}
Given a many-to-one cell as on the left below,
\cref{coroll:mto-decomposition} decomposes it as on the right.
\[
\tikzinput[.8]{opetope-composition}{cell}
\qquad \longleadsto \qquad
\tikzinput[.8]{opetope-composition}{decomposition-mto}
\]
Given a context as on the left below,
\cref{coroll:mto-decomposition-context} detaches $\Box$ from the cell
``below'' and the cells ``above'' it:
\[
\tikzinput[.8]{opetope-composition}{cell-ctx}
\qquad \longleadsto \qquad
\tikzinput[.8]{opetope-composition}{decomposition-isolation}
\] \end{remark}
\begin{remark}
\label{rem:mto-contexts}
Let $C : x \longrightarrow y$ be an $n$-context of $\catPP \in \Pol^\MTO$
between many-to-one cells. In particular, $C$ is a many-to-one cell in the
extended category $\catQQ$ of \cref{def:polygraph-context}, so by virtue of
\cref{coroll:mto-decomposition-context}, it uniquely decomposes as
\[
C \:=\: z \operatornamewithlimits{\circ}_D \Box \biggraft_E v_E ,
\]
where $z, v_E \in \catPP^*_n$. Since $C [x] = y$, we have
\[
y \:=\: z \operatornamewithlimits{\circ}_D x \biggraft_E v_E .
\]
Conversely, any decomposition of $y$ of the form above induces a context $C
: x \longrightarrow y$. In particular, the number of primitive contexts
over $y$ is $\# y$, and $\nabla_n \catPP$ is finitary. \end{remark}
\begin{definition}
We now extend $\composition{(-)}$ and $\ct$ to functors between $\tree
\nabla_n \catPP$ and $\Ctx_n^\mathrm{mto} \catPP$. On objects, they are respectively
defined in \cref{def:composition,def:composition-tree}.
\begin{enumerate}
\item Let $f : T \longrightarrow S$ be a morphism in $\tree \nabla_n
\catPP$. It corresponds to a decomposition of $S$ as on the left, and
let $f^\circ$ be the context $T^\circ \longrightarrow S^\circ$ on the
right:
\[
S \:=\: U \operatornamewithlimits{\circ}_{[p]} T \biggraft_{[l]} V_{[l]} ,
\qquad\qquad
f^\circ \:\eqdef\: U^\circ \operatornamewithlimits{\circ}_{C_{[p]}} \Box
\biggraft_{C_{[l]}} V_{[l]}^\circ ,
\]
where $[l]$ ranges over $T^\medvert$.
\item Let $C : x \longrightarrow y$ be an $n$-context. By
\cref{coroll:mto-decomposition-context}, it decomposes uniquely as on
the left, and let $\ct C$ correspond to the decomposition of $\ct y$ on
the right:
\[
C [\Box] \:=\: z \operatornamewithlimits{\circ}_{D} \Box \biggraft_{E} t_E,
\qquad\qquad
\ct y
\:=\: (\ct z) \operatornamewithlimits{\circ}_{[l_D]} (\ct x) \biggraft_{[l_E]} (\ct t_E) ,
\]
where $E$ ranges over all primitive contexts over $\src x$.
\end{enumerate} \end{definition}
One readily checks the following:
\begin{proposition}
[Composition tree duality]
\label{prop:composition-tree-duality}
The functors $(-)^\circ$ and $\ct$ are mutually inverse isomorphisms of
categories. \end{proposition}
\begin{corollary}
\label{coroll:primitive-context-tree}
For $n \geq 2$ and $x \in \catPP_n$, the functor $\ct$ induces a natural
bijection
\[
\catPP^\bullet_n (x)
\:\cong\:
\sum_{a \in \catPP_{n-1}}
(\tr \nabla_{n-1} \catPP) (\ytree{a}, \ct \src x) .
\] \end{corollary} \begin{proof}
Direct consequence of \cref{prop:composition-tree-duality}. \end{proof}
\begin{notation}
\label{not:source-of-a-generator}
If $x \in \catPP_n$ and $[p] \in (\ct \src x)^\bullet$, then we write
$\src_{[p]} x$ instead of $\src_{[p]} \ct \src x \in \catPP_{n-1}$. \end{notation}
\section{The equivalence} \label{sec:equialence}
We now aim to prove that the category of opetopic sets, i.e. $\Set$-presheaves over the category $\bbOO$ defined previously, is equivalent to the category of many-to-one polygraphs $\Pol^\MTO$. We achieve this by first constructing the \emph{polygraphic realization} functor $\polyreal{-} : \bbOO \longrightarrow \Pol^\MTO$. This functor ``realizes'' an opetope as a polygraph that freely implements all its tree structure by means of adequately chosen generators in each dimension. Secondly, we consider the left Kan extension $\polyreal{-} : {\Psh\bbOO} \longrightarrow \Pol^\MTO$ along the Yoneda embedding. This functor has a right adjoint, the ``opetopic nerve'' $N : \Pol^\MTO \longrightarrow {\Psh\bbOO}$, and we prove this adjunction to be an adjoint equivalence. This is done using the \emph{shape function}, defined in \cref{sec:shape}, which to any generator $x$ of a many-to-one polygraph $\catPP$ associates an opetope $\shape{x}$ along with a canonical morphism $\tildX : \polyreal{\shape{x}} \longrightarrow \catPP$.
\subsection{Polygraphic realization} \label{sec:polygraphic-realization}
An opetope $\omega \in \bbOO_n$, with $n \geq 1$, has one target $\tgt \omega$, and sources $\src_{[p]} \omega$ laid out in a tree. If the sources $\src_{[p]} \omega$ happened to be generators in some polygraph, then that tree would describe a way to compose them. With this in mind, we define a many-to-one $n$-polygraph $\polyreal{\omega}$, whose generators are essentially iterated faces (i.e. sources or targets) of $\omega$ (hypothesis \condition{PR1} below). Moreover, $\polyreal{\omega}$ will be ``maximally unfolded'' (or ``free''), in that two (iterated) faces that are the same opetope, but located at different addresses, will correspond to distinct generators.
The rest of this subsection is devoted to the inductive definition of the realization functor $\polyreal{-} : \bbOO \longrightarrow \Pol^\MTO$ together with its \emph{boundary} $\partial \polyreal{-}$. We bootstrap the process with \cref{def:polygraphic-realization-low-dimension} and state our induction hypotheses in \ref{ass:polygraphic-realization}.
\begin{definition}
[Low dimensional cases]
\label{def:polygraphic-realization-low-dimension}
For $\filledlozenge$ the unique $0$-opetope, let $\partial \polyreal{\filledlozenge}$
be the empty polygraph, and $\polyreal{\filledlozenge}$ be the polygraph with a
unique generator in dimension $0$, which we denote by $\filledlozenge$. For
$\filledsquare$ the unique $1$-opetope, let $\partial \polyreal{\filledsquare} \eqdef
\polyreal{\filledlozenge} + \polyreal{\filledlozenge}$, and let $\polyreal{\filledsquare}$ be
induced by the cellular extension
\[
\partial \polyreal{\filledsquare} \xot{\src, \tgt} \{ \filledsquare \} ,
\]
where $\src$ and $\tgt$ map $\filledsquare$ to distinct $0$-generators. There are
obvious functors $\polyreal{\src_{[]}}, \polyreal{\tgt} :
\polyreal{\filledlozenge} \longrightarrow \polyreal{\filledsquare}$, mapping $\filledlozenge$
to $\src \filledsquare$ and $\tgt \filledsquare$, respectively. \end{definition}
\begin{definition} [Dimension $2$]
For the reader's convenience, we construct $\partial \polyreal{\optInt{k}}$
and $\polyreal{\optInt{k}}$ for every opetopic integer $\optInt{k}$,
although this case already falls under the inductive definition (see
\cref{def:polygraphic-realization:inductive-boundary,def:polygraphic-realization:inductive}).
\begin{enumerate}
\item Let $\partial \polyreal{\optInt{0}}$ be the $1$-polygraph given
by the following coequalizer:
\[
\coeqdiagram
{\polyreal{\filledlozenge}}{\polyreal{\filledsquare}}{\partial \polyreal{\optInt{0}}.}
{\src_{[]}}{\tgt}{}
\]
In other words, $\partial \polyreal{\optInt{0}}$ has one object $x$ and
one generating endomorphism $f : x \longrightarrow x$. The
$2$-polygraph $\polyreal{\optInt{0}}$ is obtained by adjoining a
generating $2$-cell $\alpha : \id_x \longrightarrow f$ to $\partial
\polyreal{\optInt{0}}$.
\item Let $k \geq 1$, and consider the $1$-polygraph
\[
\catPP \:\eqdef\: \left(
\polyreal{\filledsquare}
\coprod_{\polyreal{\filledlozenge}} \polyreal{\filledsquare}
\coprod_{\polyreal{\filledlozenge}} \polyreal{\filledsquare}
\coprod_{\polyreal{\filledlozenge}} \cdots
\coprod_{\polyreal{\filledlozenge}} \polyreal{\filledsquare}
\right) ,
\]
where there are $k$ instances of $\polyreal{\filledsquare}$. In other words,
$\catPP$ is generated by a chain of $k$ composable $1$-cells, which we
denote by
\[
x_0 \xto{f_1} x_1 \xto{f_2} x_2 \xto{f_3} \cdots
\xto{f_k} x_k .
\]
Alternatively, $\catPP$ is the polyplex $\underline{l_k}$ of
\cref{ex:polyplex}. Let $\partial \polyreal{\optInt{k}}$ be the obvious
pushout
\[
\diagramsize{2}{3}
\pushoutdiagram
{\partial \polyreal{\filledsquare}}{\catPP}{\polyreal{\filledsquare}}
{\partial \polyreal{\optInt{k}},}
{(x_0, x_k)}{}{}{}
\]
i.e., $\partial \polyreal{\optInt{k}}$ is $\catPP$ with an additional
generating $1$-cell $g : x_0 \longrightarrow x_k$. Finally,
$\polyreal{\optInt{k}}$ is obtained from $\partial
\polyreal{\optInt{k}}$ by adjoining a generating $2$-cell $\alpha : f_k
\cdots f_2 f_1 \longrightarrow g$.
\end{enumerate}
In both cases, there is a bijective correspondence between the generating
cells of $\polyreal{\optInt{k}}$ and the objects of $\bbOO / \optInt{k}$,
and the obvious inclusions induce functors
\[
\partial \polyreal{-}, \polyreal{-} : \bbOO_{\leq 2}
\longrightarrow \Pol^\MTO .
\] \end{definition}
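For instance, for $k = 2$, the polygraph $\polyreal{\optInt{2}}$ has exactly seven generators: the three $0$-generators $x_0, x_1, x_2$, the three $1$-generators $f_1, f_2, g$, and the unique $2$-generator $\alpha : f_2 f_1 \longrightarrow g$. Under the correspondence above, these match the seven objects of $\bbOO / \optInt{2}$, namely the three $0$-face embeddings, the two source and one target $1$-face embeddings, and the identity of $\optInt{2}$.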
Let $n \geq 2$ and assume by induction that $\partial \polyreal{-}$ and $\polyreal{-}$ are defined on $\bbOO_{< n}$. Assume further that the following induction hypotheses hold (they are easily verified for $n = 2$).
\begin{assumptions}
\label{ass:polygraphic-realization}
For all $\psi \in \bbOO_k$ with $k < n$, the following hold:
\begin{itemize}
\item \condition{PR1} for all $j \in
\bbNN$, the set $\polyreal{\psi}_j$ of $j$-generators of
$\polyreal{\psi}$ is in bijection with the set of objects of the slice
$\bbOO_j / \psi$, i.e. of the form $\left( \phi \xto{\sfA} \psi
\right)$ for $\phi \in \bbOO_j$ and $\sfA : \phi \longrightarrow \psi$
a morphism in $\bbOO$;
\item \condition{PR2} for $\left( \phi
\xto{\sfA} \psi \right)$ a generator of $\polyreal{\psi}$, its target
is $\left( \tgt \phi \xto{\tgt} \phi \xto{\sfA} \psi \right)$;
\item \condition{PR3} for $l \leq k$, and
for $\left( \phi \xto{\sfA} \psi \right)$ an $l$-generator of
$\polyreal{\psi}$, the composition tree of its source $\ct \src
\left(\phi \xto{\sfA} \psi \right) \in \tree \nabla_{l - 1}
\polyreal{\psi}$ is
\begin{align*}
\underlyingtree{\phi} &\longrightarrow \nabla_{l - 1} \polyreal{\psi} \\
[p] &\longmapsto \left(
\src_{[p]} \phi \xto{\src_{[p]}} \phi \xto{\sfA} \psi
\right) .
\end{align*}
Recall that by \cref{prop:composition-tree-duality}, this completely
determines $\src \left(\phi \xto{\sfA} \psi \right) \in
\polyreal{\psi}_{l-1}^*$.
\end{itemize} \end{assumptions}
We now define $\partial \polyreal{\omega}$ and $\polyreal{\omega}$ when $\omega \in \bbOO_n$. Defining the former is easy, and done in \cref{def:polygraphic-realization:inductive-boundary}. The latter is defined in \cref{def:polygraphic-realization:inductive} as generated by a cellular extension \[
\partial \polyreal{\omega} \xot{\src, \tgt} \left\{ \omega \right\} \] of $\partial \polyreal{\omega}$, where the target and source of the new generator are given by \condition{PR2} and \condition{PR3}. Lastly, we check the inductive hypotheses in \cref{prop:polygraphic-realization:inductive}.
\begin{definition}
[Inductive step for $\partial \polyreal{-}$]
\label{def:polygraphic-realization:inductive-boundary}
For $\omega \in \bbOO_n$, let $\partial \polyreal{\omega}$ be the following
many-to-one $(n-1)$-polygraph:
\[
\partial \polyreal{\omega}
\:\eqdef\:
\colim_{\substack{\sfA : \psi \rightarrow \omega \\ \dim \psi < n}}
\polyreal{\psi} .
\]
For $\sfA : \psi \longrightarrow \omega$ in $\bbOO_{< n} / \omega$, this
colimit comes with a corresponding coprojection $\polyreal{\sfA} :
\polyreal{\psi} \longhookrightarrow \partial \polyreal{\omega}$. \end{definition}
\begin{remark}
\label{rem:polygraphic-realization:inductive-boundary}
Let $0 \leq k < n$. By \condition{PR1}, the set of $k$-generators of
$\partial \polyreal{\omega}$ is $\bbOO_k / \omega$. \end{remark}
\begin{lemma}
\label{lemma:polygraphic-realization:generators-of-boundary}
For $\omega \in \bbOO_n$, and $j < n$, the set $\partial
\polyreal{\omega}_j$ of $j$-generators of $\partial \polyreal{\omega}$ is
the slice $\bbOO_j / \omega$. \end{lemma} \begin{proof}
Follows from the induction hypothesis \condition{PR1} and
\cref{prop:polygraphs-cocomplete}. \end{proof}
\begin{corollary}
\label{lemma:polygraphic-realization:nabla-boundary}
For $\omega \in \bbOO_n$ and $1 \leq k < n$, the polynomial functor
$\nabla_k \partial \polyreal{\omega}$ is described as follows:
\[
\polynomialfunctor
{\bbOO_{k-1} / \omega}{E}{\bbOO_k / \omega}
{\bbOO_{k-1} / \omega}
{\src}{p}{\tgt}
\]
where for $\left( \psi \xto{\sfA} \omega \right) \in \bbOO_k / \omega$,
\begin{enumerate}
\item the fiber $E \left( \psi \xto{\sfA} \omega \right)$ is simply
$\psi^\bullet$;
\item for $[p] \in E \left( \psi \xto{\sfA} \omega \right) \cong
\psi^\bullet$, we have $\src [p] = \left( \src_{[p]} \psi
\xto{\src_{[p]}} \psi \xto{\sfA} \omega \right)$;
\item $\tgt \left( \psi \xto{\sfA} \omega \right) = \left( \tgt \psi
\xto{\tgt} \psi \xto{\sfA} \omega \right)$.
\end{enumerate} \end{corollary} \begin{proof}
Direct consequence of
\cref{lemma:polygraphic-realization:generators-of-boundary} and
\condition{PR1}, \condition{PR2}, and \condition{PR3}. \end{proof}
\begin{definition}
\label{def:polygraphic-realization:nabla-boundary:forgetful}
For $\omega \in \bbOO_n$ and $1 \leq k < n$, we have a morphism $u :
\nabla_k \partial \polyreal{\omega} \longrightarrow \frakZZ^{k-1}$
\[
\begin{tikzcd}
\bbOO_{k-1} / \omega
\ar[d, "u_0"'] &
E
\ar[r, "p"]
\ar[d, "u_2" left]
\ar[l, "\src" above] &
\bbOO_k / \omega
\ar[r, "\tgt"]
\ar[d, "u_1" left] &
\bbOO_{k-1} / \omega
\ar[d, "u_0" left] \\
\bbOO_{k-1} &
\bbOO_k^\bullet
\ar[r, "p"]
\ar[l, "\src" above] &
\bbOO_k
\ar[r, "\tgt"] &
\bbOO_{k-1}
\end{tikzcd}
\]
induced by the forgetful maps $\bbOO_{k-1} / \omega \longrightarrow
\bbOO_{k-1}$ and $\bbOO_k / \omega \longrightarrow \bbOO_k$. \end{definition}
\begin{lemma}
\label{lemma:polygraphic-realization:lift}
Let $\omega \in \bbOO_n = \tree \frakZZ^{n-2}$. The map $\omega :
\underlyingtree{\omega} \longrightarrow \frakZZ^{n-2}$ factors through
$u : \nabla_{n-1} \partial \polyreal{\omega} \longrightarrow
\frakZZ^{n-2}$
(\cref{def:polygraphic-realization:nabla-boundary:forgetful}):
\[
\triangleDRdiagram
{\nabla_{n-1} \partial \polyreal{\omega}}{\underlyingtree{\omega}}
{\frakZZ^{n-2} .}
{\baromega}{u}{\omega}
\] \end{lemma} \begin{proof}
Let $\baromega$ map a node $[p] \in \omega^\bullet$ to the cell
$\left(\src_{[p]} \omega \xto{\src_{[p]}} \omega \right) \in \bbOO_{n-1} /
\omega$, and map an edge $[l]$ to the cell $\left( \edg_{[l]} \omega
\xto{\edg_{[l]}} \omega \right) \in \bbOO_{n-2} / \omega$ (recall the
notation from \cref{not:edg,def:o}). \end{proof}
\begin{proposition}
\label{prop:polygraphic-realization:parallel}
On the one hand, consider the $\nabla_{n-1} \partial
\polyreal{\omega}$-tree $\baromega$ of
\cref{lemma:polygraphic-realization:lift}, and on the other hand, recall
from \cref{rem:polygraphic-realization:inductive-boundary} that there is an
$(n-1)$-generator $\left( \tgt \omega \xto{\tgt} \omega \right)$ of
$\partial \polyreal{\omega}$ corresponding to the target embedding of
$\omega$. Then, in $\partial \polyreal{\omega}$, the composite
$\baromega^\circ$ (\cref{def:composition}) and the generator $\left( \tgt
\omega \xto{\tgt} \omega \right)$ are parallel. \end{proposition} \begin{proof}
If $\omega$ is degenerate, say $\omega = \itree{\phi}$ for some $\phi \in
\bbOO_{n-2}$, then $\composition{\baromega} = \id_{( \phi \xto{\tgt \tgt}
\omega )}$, while $\left( \tgt \omega
\xto{\tgt} \omega \right) = \left( \ytree{\phi}
\xto{\tgt} \omega \right)$. By \condition{Degen}, those two cells are
parallel.
For the rest of the proof, we assume that $\omega$ is not degenerate.
First, we have
\[
\tgt \composition{\baromega}
\:=\: \tgt \src_{[]} \baromega
\:=\: \tgt \left( \tgt \src_{[]} \omega
\xto{\tgt \src_{[]}} \omega \right)
\:=\: \left( \tgt \tgt \omega \xto{\tgt \tgt} \omega \right)
\:=\: \tgt \left( \tgt \omega \xto{\tgt} \omega \right) .
\]
Then, in order to show that $\src \composition{\baromega} = \src (\tgt
\omega \xto{\tgt} \omega)$, we show that the $(n-2)$-generators occurring
on both sides are the same, and that the way to compose them is unique.
\begin{enumerate}
\item Generators in $\src \composition{\baromega}$ are of the form
$\left( \phi \xto{\src_{[q]}} \psi \xto{\src_{[p]}} \omega \right)$, for $[p [q]] \in
\omega^\medvert$. By \condition{Glob2}, those are equal to
$\left(\phi \xto{\readdress_\omega [p[q]]} \tgt \omega \xto{\tgt}
\omega \right)$, which are exactly the generators in the cell $\src
\left( \tgt \omega \xto{\tgt} \omega \right)$.
\item To show that there is a unique way to compose all the
$(n-2)$-generators of the form $(\phi \xto{\src_{[q]}} \psi
\xto{\src_{[p]}} \omega)$, where $[p[q]]$ ranges over
$\omega^\medvert$, it is enough to show that no two have the same
target. Assume $(\phi_i \xto{\src_{[q_i]}} \psi_i \xto{\src_{[p_i]}}
\omega)$, with $i = 1, 2$, are $(n-2)$-generators occurring in $\src
\composition{\baromega}$ with the same target. Consider the following
diagram:
\[
\begin{tikzcd} [column sep = large]
&
\phi_1
\ar[r, "\src_{[q_1]}"]
\ar[dr, "\src_{[r_1]}"'] &
\psi_1
\ar[dr, "\src_{[p_1]}"] \\
\rho
\ar[ur, "\tgt"]
\ar[dr, "\tgt"'] &
&
\tgt \omega
\ar[r, "\tgt"] &
\omega \\
&
\phi_2
\ar[r, "\src_{[q_2]}"]
\ar[ur, "\src_{[r_2]}"] &
\psi_2
\ar[ur, "\src_{[p_2]}"']
\end{tikzcd}
\]
where $[r_i] \eqdef \readdress_\omega [p_i [q_i]] \in \tgt
\omega^\bullet$. The outer hexagon commutes by assumption, the two
squares on the right are instances of \condition{Glob2}, and the left
square commutes as $\tgt : \tgt \omega \longrightarrow \omega$ is a
monomorphism, since $\omega$ is non-degenerate. By inspection of the
opetopic identities (see \cref{def:o}), the only way for the left
square to commute is the trivial way, i.e. $[r_1] = [r_2]$. Since
$\readdress_\omega$ is a bijection, we have $[p_1 [q_1]] = [p_2
[q_2]]$, thus $[p_1] = [p_2]$ and $[q_1] = [q_2]$.
\qedhere
\end{enumerate} \end{proof}
\begin{definition}
[Inductive step for $\polyreal{-}$]
\label{def:polygraphic-realization:inductive}
For $\omega \in \bbOO_n$, let $\polyreal{\omega}$ be the cellular extension
\[
\partial \polyreal{\omega} \xot{\src, \tgt} \left\{ \omega \right\} ,
\]
where $\tgt$ maps $\omega$ to the $(n-1)$-generator $\left( \tgt \omega
\xto{\tgt} \omega \right)$, and where the composition tree of $\src \omega$
is $\baromega$ (\cref{lemma:polygraphic-realization:lift}). For
consistency, we also write $\left( \omega \xto{\id} \omega \right)$ for the
unique $n$-generator of $\polyreal{\omega}$. This is well-defined by
\cref{prop:polygraphic-realization:parallel}, and gives a functor
$\polyreal{-} : \bbOO_{\leq n} \longrightarrow \Pol^\MTO$. \end{definition}
\begin{proposition}
\label{prop:polygraphic-realization:inductive}
For $\omega \in \bbOO_n$, the polygraphs $\partial \polyreal{\omega}$
(\cref{def:polygraphic-realization:inductive-boundary}) and
$\polyreal{\omega}$ (\cref{def:polygraphic-realization:inductive}) satisfy
the \cref{ass:polygraphic-realization}. \end{proposition} \begin{proof}
\begin{itemize}
\item \condition{PR1} For $j < n$, by
\cref{lemma:polygraphic-realization:generators-of-boundary}, we already
have $\polyreal{\omega}_j = \partial \polyreal{\omega}_j = \bbOO_j /
\omega$. In dimension $n$, the only element of $\bbOO_n / \omega$ is
$\id : \omega \longrightarrow \omega$, which corresponds to the unique
$n$-generator of $\polyreal{\omega}$. If $j > n$, then both $\bbOO_j /
\omega$ and $\polyreal{\omega}_j$ are empty.
\item \condition{PR2} and \condition{PR3} By definition, those
hypotheses hold for the unique $n$-generator $\left( \omega \xto{\id}
\omega \right)$ of $\polyreal{\omega}$. By induction, they also hold on
the other generators.
\qedhere
\end{itemize} \end{proof}
To conclude, we have defined a functor $\polyreal{-} : \bbOO \longrightarrow \Pol^\MTO$ which satisfies the \cref{ass:polygraphic-realization} for all $n \in \bbNN$.
\subsection{The shape function} \label{sec:shape}
In \cref{sec:equivalence}, we will define an adjunction $
\polyreal{-} : {\Psh\bbOO} \adjunction \Pol^\MTO : N $, where $\polyreal{-} : {\Psh\bbOO} \longrightarrow \Pol^\MTO$ is the left Kan extension of the polygraphic realization defined in \cref{sec:polygraphic-realization}, and $N$ its associated nerve. In order to show that this is an adjoint \emph{equivalence}, we need a good understanding of $N$. A crucial step is \cref{th:shape}, which establishes a bijection $\catPP_n \longrightarrow \sum_{\omega \in \bbOO_n} (N \catPP)_\omega$, where $\catPP \in \Pol^\MTO$ and $n \in \bbNN$. Furthermore, we show that this bijection is \emph{over $\bbOO_n$}: \[
\triangleURdiagram
{\catPP_n}{\sum_{\omega \in \bbOO_n} (N \catPP)_\omega}{\bbOO_n,}
{\cong}{\shape{(-)}}{} \] where the vertical arrow maps an element in the $\omega \in \bbOO_n$ component to $\omega$, and where $\shape{(-)}$ is the \emph{shape function} which we define in this section.
Here is a sketch of the definition of $\shape{(-)} : \catPP_n \longrightarrow \bbOO_n$. The cases $n = 0, 1$ are trivial, since there is a unique $0$-opetope and a unique $1$-opetope. Assume $n \geq 2$, and take $x \in \catPP_n$. Then the composition tree of $\src x$ is a tree whose nodes are decorated by $(n-1)$-generators, and whose edges are decorated by $(n-2)$-generators. Replacing these generators by their inductively defined shapes, we obtain a tree whose nodes are decorated by $(n-1)$-opetopes and whose edges are decorated by $(n-2)$-opetopes, i.e. an $n$-opetope, which we shall denote by $\shape{x}$. \[
\tikzinput[.7]{shape}{} \] The fact that $\shape{x}$ corresponds to the intuitive notion of ``shape'' of $x$ is justified by \cref{th:shape}.
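For instance, if $x \in \catPP_2$ is a $2$-generator whose source is a composite of $k \geq 1$ generating $1$-cells, then $\ct \src x$ is a linear tree with $k$ nodes, each decorated by a $1$-generator; since every $1$-generator has shape $\filledsquare$, the resulting tree is the opetopic integer $\optInt{k}$, so that $\shape{x} = \optInt{k}$.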
\begin{lemma}
\label{lemma:terminal-mtop:parallel-generators}
If $x, y \in \catTT_n$ are two parallel generators, then they are equal. \end{lemma} \begin{proof}
This is trivial if $n = 0$, as $\catTT$ only has one $0$-generator. If $n
\geq 1$, note that $x = (\src x, \tgt x) = (\src y, \tgt y) = y$. \end{proof}
\begin{proposition}
\label{prop:terminal-mtop:opetopes}
For $x \in \catTT_n$ there exists a unique $\shape{x} \in \bbOO_n$ such
that the terminal morphism $\shriek : \polyreal{\shape{x}} \longrightarrow
\catTT$ maps $\shape{x}$ (the unique $n$-generator of
$\polyreal{\shape{x}}$) to $x$. In particular, the map $\shape{(-)} :
\catTT_n \longrightarrow \bbOO_n$ is a bijection. \end{proposition} \begin{proof}
\begin{itemize}
\item (Uniqueness) Assume that there exist two distinct opetopes
$\omega, \omega' \in \bbOO_k$ such that $\shriek \omega = \shriek
\omega'$, with $k$ minimal for this property. Then necessarily, $k \geq
2$. On the one hand, we have $\underlyingtree{\omega} =
\underlyingtree{\ct \src \omega} = \underlyingtree{\ct \src \omega'} =
\underlyingtree{\omega'}$. On the other hand, for $[p] \in
\omega^\bullet = (\omega')^\bullet$, we have
\begin{align*}
\shriek \src_{[p]} \omega
&\:=\: \shriek \src_{[p]} \omega
& \text{since $\shriek$ is also }
\polyreal{\src_{[p]} \omega} \hookrightarrow \polyreal{\omega} \rightarrow \catTT \\
&\:=\: \src_{[p]} \shriek \omega
& \text{since $\shriek$ is a morphism of polygraphs} \\
&\:=\: \src_{[p]} \shriek \omega'
& \text{by assumption} \\
&\:=\: \shriek \src_{[p]} \omega'
& \text{since $\shriek$ is a morphism of polygraphs} \\
&\:=\: \shriek \src_{[p]} \omega'
& \text{since $\shriek$ is also }
\polyreal{\src_{[p]} \omega'} \hookrightarrow \polyreal{\omega'}
\rightarrow \catTT,
\end{align*}
and by minimality of $k$, we have $\src_{[p]} \omega = \src_{[p]}
\omega'$, for any address $[p]$. Consequently, $\omega = \omega'$, a
contradiction.
\item (Existence) The cases $n = 0, 1$ are trivial, so assume $n \geq
2$, and that by induction, the result holds for all $k < n$, i.e. that
for $g \in \catTT_k$, there is a unique opetope $\shape{g} \in \bbOO_k$
such that $\shriek (\shape{g}) = g$. In particular the following
two triangles commute:
\[
\diagramsize{2}{3}
\triangleURdiagram
{\polyreal{\src_{[p]} \shape{g}}}{\polyreal{\shape{g}}}{\catTT,}
{\polyreal{\src_{[p]}}}
{\shriek}{\shriek}
\qquad\qquad
\triangleURdiagram
{\polyreal{\tgt \shape{g}}}{\polyreal{\shape{g}}}{\catTT ,}
{\polyreal{\tgt}}{\shriek}{\shriek}
\]
where $[p] \in (\shape{g})^\bullet$. Consequently,
$\shape{(\src_{[p]} g)} = \src_{[p]} (\shape{g})$ and $\shape{(\tgt g)}
= \tgt (\shape{g})$, and the following displays an isomorphism
$\nabla_{n-1} \catTT \longrightarrow \frakZZ^{n-2}$:
\[
\begin{tikzcd}
\catTT_{n-2}
\ar[d, "\shape{(-)}"'] &
\catTT_{n-1}^\bullet
\ar[r, "p"]
\ar[d]
\ar[l, "\src" above] &
\catTT_{n-1}
\ar[r, "\tgt"]
\ar[d, "\shape{(-)}" left] &
\catTT_{n-2}
\ar[d, "\shape{(-)}" left] \\
\bbOO_{n-2} &
\bbOO_{n-1}^\bullet
\ar[r, "p"]
\ar[l, "\src" above] &
\bbOO_{n-1}
\ar[r, "\tgt"] &
\bbOO_{n-2} .
\end{tikzcd}
\]
Hence, the composite $\underlyingtree{\ct \src x} \xto{\ct \src x}
\nabla_{n-1} \catTT \xto{\shape{(-)}} \frakZZ^{n-2}$ defines an
$n$-opetope $\shape{x}$ with $\underlyingtree{\shape{x}} =
\underlyingtree{\ct \src x}$. We claim that $\shriek (\shape{x}) = x$. We
first show that $\shriek \src (\shape{x}) = \src x$. We have
\begin{align*}
\underlyingtree{\ct \src x}
&\:=\: \underlyingtree{\shape{x}}
& \text{by definition} \\
&\:=\: \underlyingtree{\ct \src (\shape{x})}
& \text{by \condition{PR3}} \\
&\:=\: \underlyingtree{\ct \shriek \src (\shape{x})}
& \text{since $\shriek$ is a morphism of polygraphs.}
\end{align*}
Then, for any address $[p]$ in $\underlyingtree{\ct \src x}$, we have
\begin{align*}
\src_{[p]} x
&\:=\: \shriek (\shape{(\src_{[p]} x)})
& \text{by induction} \\
&\:=\: \shriek \src_{[p]} (\shape{x})
& \text{by definition of $\shape{x}$} \\
&\:=\: \src_{[p]} \shriek (\shape{x})
& \text{since $\shriek$ is a morphism of polygraphs,}
\end{align*}
and therefore, by \cref{prop:composition-tree-duality}, $\src x = \src
\shriek (\shape{x})$. Next,
\begin{align*}
\tgt \shriek (\shape{x})
&\:=\: \shriek \tgt (\shape{x})
& \text{by induction} \\
&\:\parallel\: \shriek \src (\shape{x})
& \text{since } \src (\shape{x}) \parallel \tgt (\shape{x}) \\
&\:=\: \src x
& \text{showed above} \\
&\:\parallel\: \tgt x,
\end{align*}
and therefore, $\tgt \shriek (\shape{x}) = \tgt x$. Finally, $\shriek
(\shape{x}) \parallel x$, and by
\cref{lemma:terminal-mtop:parallel-generators}, $\shriek (\shape{x}) = x$.
\qedhere
\end{itemize} \end{proof}
\begin{notation}
In the light of \cref{prop:terminal-mtop:opetopes}, we identify $\catTT_n$
with $\bbOO_n$. This identification is compatible with faces, i.e.
$\src_{[p]}$ and $\tgt$. Further, $\shriek : \polyreal{\omega}
\longrightarrow \catTT$ maps a generator $(\phi \rightarrow \omega)$ to
$\phi$. \end{notation}
\begin{notation}
\label{not:polygraphes-opetopes}
For $\catPP \in \Pol^\MTO$ and $\omega \in \bbOO_n$, let $\catPP_\omega
\eqdef \left\{x \in \catPP_n \mid \shape{x} = \omega \right\}$. If $f :
\catPP \longrightarrow \catQQ$ is a morphism of polygraphs, then it
restricts and corestricts as a map $f : \catPP_\omega \longrightarrow
\catQQ_\omega$. \end{notation}
\begin{theorem}
\label{th:shape}
For $\catPP \in \Pol^\MTO$ and $x \in \catPP_n$, there exists a unique
pair\footnote{In \cite[proposition 2.2.3 (2)]{Henry2019}, $\shape{x}$ is
written $\uX$ and called the \emph{universal cell} (or \emph{top cell}) of
$x$.}
\[
\left( \shape{x}, \polyreal{\shape{x}} \xto{\tildX} \catPP \right)
\:\in\: \polyreal{-} / \catPP
\]
such that $\tildX_n (\shape{x}) = x$. Further, the \emph{shape function}
$\shape{(-)} : \catPP_n \longrightarrow \bbOO_n$ maps an $n$-generator $x$
to $\shape{x} = \shriek x$, and the map
\begin{equation}
\label{eq:shape-tilde}
\widetilde{(-)} : \catPP_\omega
\longrightarrow \Pol^\MTO (\polyreal{\omega}, \catPP)
\end{equation}
is a bijection\footnote{In other words, the functor $\Pol^\MTO
\longrightarrow \Set$ that maps a polygraph $\catPP$ to $\catPP_n$ is
\emph{familially representable}
(\cref{def:effective-and-familially-representable}) with
$\left\{\polyreal{\omega} \mid \omega \in \bbOO_n \right\}$ as representing
family.}. \end{theorem} \begin{proof}
\begin{itemize}
\item (Uniqueness) Assume $\polyreal{\omega} \xto{f} \catPP \xot{f'}
\polyreal{\omega'}$ are different morphisms such that $f (\omega) = x =
f' (\omega')$. Then $\shriek \omega = \shriek f (\omega) = \shriek f' (\omega') =
\shriek \omega'$, and by \cref{prop:terminal-mtop:opetopes}, $\omega =
\omega'$. Let $\left(\phi \xto{\sfA} \omega \right) \in
\polyreal{\omega}_k$ be such that $f
\left( \phi \xto{\sfA} \omega \right) \neq f' \left(\phi \xto{\sfA}
\omega \right)$, with $k$ maximal for this property. Then $k < n$
(since by assumption $f (\omega) = x = f' (\omega')$), and $\sfA$
factorizes as $\left(\phi \xto{\sfJ} \psi \xto{\sfB} \omega \right)$,
where $\sfJ$ is a face embedding, i.e. either $\tgt$ or $\src_{[p]}$
for some $[p] \in \psi^\bullet$. Then by assumption,
\begin{align*}
f \left( \phi \xto{\sfA} \omega \right)
&\:=\: \sfJ f \left(
\psi \xto{\sfB} \omega \right) \\
&\:=\: \sfJ f' \left(
\psi \xto{\sfB} \omega \right)
& \text{by maximality of $k$} \\
&\:=\: f' \left( \phi \xto{\sfA} \omega \right) ,
\end{align*}
a contradiction.
\item (Existence) The cases $n = 0, 1$ are trivial, so assume $n \geq
2$, and that by induction, the result holds for all $k < n$. Let
$\shape{x} \eqdef \shriek x \in \bbOO_n$. We wish to construct a morphism
$\polyreal{\shape{x}} \xto{\tildX} \catPP$ having $x$ in its image. For
$\left(\psi \xto{\sfJ} \shape{x} \right)$ a face of $\shape{x}$ (i.e.
$\tgt$ or $\src_{[p]}$ for some $[p] \in (\shape{x})^\bullet$), we
have $\shape{(\operatorname{\sfJ} x)} = \psi$, so that by induction,
there exists a morphism $\polyreal{\psi}
\xto{\widetilde{\operatorname{\sfJ} x}} \catPP$ having
$\operatorname{\sfJ} x$ in its image, providing a commutative square
\[
\squarediagram
{\polyreal{\psi}}{\catPP}{\polyreal{\shape{x}}}{\catTT .}
{\widetilde{\sfJ x}}{\polyreal{\sfJ}}{\shriek}{\shriek}
\]
To alleviate upcoming notations, write $\bar{\sfJ} \eqdef
\widetilde{\operatorname{\sfJ} x} : \polyreal{\psi} \longrightarrow
\catPP$. Let $\left( \phi \xto{\sfA} \shape{x} \right) \in \bbOO_{< n}
/ \shape{x}$. If $\sfA$ is a face embedding, define $\bar{\sfA}$ as
before. If not, then it factors through a face embedding $\sfB$ of
$\shape{x}$ as $\sfA = \left( \phi \xto{\sfJ} \psi \xto{\sfB} \shape{x} \right)$, and let
$\bar{\sfA} \eqdef \bar{\sfB} \cdot \polyreal{\sfJ}$. Then the left
square commutes, and passing to the colimit over $\bbOO_{< n} /
\shape{x}$, we obtain the right square:
\[
\squarediagram
{\polyreal{\phi}}{\catPP}{\polyreal{\shape{x}}}{\catTT ,}
{\bar{\sfA}}{\polyreal{\sfA}}{\shriek}{\shriek}
\qquad\qquad
\squarediagram
{\partial \polyreal{\shape{x}}}{\catPP}{\polyreal{\shape{x}}}{\catTT .}
{f}{}{\shriek}{\shriek}
\]
We want a diagonal filler of the right square. Since
$\polyreal{\shape{x}}$ is a one-generator cellular extension of
$\partial \polyreal{\shape{x}}$
(\cref{def:polygraphic-realization:inductive}), it is enough to check
that $f \src \shape{x} = \src x$, and $f \tgt \shape{x} = \tgt x$. The
latter is clear, as $f$ extends $\bar{\tgt} : \polyreal{\tgt
\shape{x}} \longrightarrow \catPP$, and $f \tgt \shape{x} = \bar{\tgt}
\tgt \shape{x} = \tgt x$ by definition. We now proceed to prove the
former. First, $\underlyingtree{\ct \src \shape{x}} =
\underlyingtree{\ct \src x}$ since both are mapped to the same element
of $\catTT_n$. Then, for $[p]$ a node address of $\ct \src \shape{x}$,
we have $f \src_{[p]} \shape{x} = \overline{\src_{[p]}} \src_{[p]}
\shape{x} = \src_{[p]} x$. Hence $f \src \shape{x} = \src x$.
\qedhere
\end{itemize} \end{proof}
\subsection{The adjoint equivalence} \label{sec:equivalence}
\begin{definition}
[Polygraphic realization-nerve adjunction]
\label{def:polygraphic-realization-nerve}
The polygraphic realization functor $\polyreal{-} : \bbOO \longrightarrow
\Pol^\MTO$ extends to a left adjoint
\[
\polyreal{-} : {\Psh\bbOO} \adjunction \Pol^\MTO : N ,
\]
by left Kan extension of $\polyreal{-} : \bbOO \longrightarrow \Pol^\MTO$
along the Yoneda embedding $\yoneda : \bbOO \longrightarrow {\Psh\bbOO}$.
Explicitly, the polygraphic realization of an opetopic set $X \in {\Psh\bbOO}$
can be computed with the coend on the left, while the \emph{polygraphic
nerve} $N \catPP$ of a polygraph $\catPP \in
\Pol^\MTO$ is given on the right:
\[
\polyreal{X}
\:=\: \int^{\omega \in \bbOO} X_\omega \times \polyreal{\omega} ,
\qquad\qquad
N \catPP
\:=\: \Pol^\MTO (\polyreal{-}, \catPP) : \bbOO^\mathrm{op} \longrightarrow \Set .
\] \end{definition}
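Note that this extension is compatible with \cref{sec:polygraphic-realization}: for a representable presheaf $X = \bbOO (-, \psi)$, the co-Yoneda lemma gives
\[
\polyreal{\yoneda \psi}
\:=\: \int^{\omega \in \bbOO} \bbOO (\omega, \psi) \times \polyreal{\omega}
\:\cong\: \polyreal{\psi} ,
\]
so the left adjoint indeed extends the polygraphic realization on representables.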
\begin{theorem}
\label{th:opetopic-sets-mto-polygraphs}
The unit and counit are natural isomorphisms. Consequently, the polygraphic
realization-nerve adjunction of \cref{def:polygraphic-realization-nerve} is
an adjoint equivalence between ${\Psh\bbOO}$ and $\Pol^\MTO$. \end{theorem} \begin{proof}
Let $X \in {\Psh\bbOO}$ and $\catPP \in \Pol^\MTO$. Note that with the nerve
functor of \cref{def:polygraphic-realization-nerve}, the bijection of
\cref{eq:shape-tilde} becomes
$
\widetilde{(-)} : \catPP_\omega \longrightarrow (N \catPP)_\omega
$.
An $n$-generator $x \in \catPP_n$ then corresponds to a cell $\tildX \in (N
\catPP)_\omega$, where $\omega \eqdef \shape{x}$; in other words, the shape
function partitions the set of $n$-generators to form an opetopic set $N
\catPP$. In the converse direction, for $X \in {\Psh\bbOO}$,
\begin{align*}
\polyreal{X}_n
&\:=\: \int^{\omega \in \bbOO} X_\omega \times \polyreal{\omega}_n
& \text{by \cref{def:polygraphic-realization-nerve}} \\
&\:\cong\: \int^{\omega \in \bbOO} X_\omega \times \bbOO_n / \omega
& \text{see \cref{ass:polygraphic-realization}, \condition{PR1}} \\
&\:\cong\: \int^{\omega \in \bbOO}
X_\omega \times \sum_{\psi \in \bbOO_n} \bbOO (\psi, \omega) \\
&\:\cong\: \sum_{\psi \in \bbOO_n} \int^{\omega \in \bbOO}
X_\omega \times \bbOO (\psi, \omega) \\
&\:\cong\: \sum_{\psi \in \bbOO_n} X_\psi ,
\end{align*}
so the set of $n$-generators of $\polyreal{X}$ is the set of $n$-cells of
$X$. In particular, $\polyreal{X}_\omega \cong X_\omega$. Finally, the unit
$\eta : X \longrightarrow N \polyreal{X}$ and counit $\epsilon :
\polyreal{N \catPP} \longrightarrow \catPP$ are the following composites
\[
X_\omega
\xto{\cong} \polyreal{X}_\omega
\xto{\widetilde{(-)}} \left( N \polyreal{X} \right)_\omega ,
\]
\[
\polyreal{N \catPP}_n
\xto{=} \sum_{\omega \in \bbOO_n} \polyreal{N \catPP}_\omega
\xto{\cong} \sum_{\omega \in \bbOO_n} (N \catPP)_\omega
\xto{\widetilde{(-)}^{-1}} \sum_{\omega \in \bbOO_n} \catPP_\omega
\xto{=} \catPP_n ,
\]
which by definition are isomorphisms. \end{proof}
Many-to-one polygraphs have been the subject of other work \cite{Harnik2002,Harnik2008}, and were proved to be equivalent to the notion of \emph{multitopic sets}. This, together with \cref{th:opetopic-sets-mto-polygraphs}, proves the following:
\begin{corollary}
The category ${\Psh\bbOO}$ of opetopic sets is equivalent to the category of
multitopic sets. \end{corollary}
An \emph{opetopic plex} is an opetopic polyplex of the form $\uU$, where $u \in \catTT_n$ (as opposed to $\catTT^*_n$). In \cite[corollary 2.4.9 and remark 2.5.1]{Henry2019}, Henry shows that $\Pol^\MTO$ is a presheaf category over some category $\bbOO\mathrm{plex}$ of opetopic plexes, and asks whether they are the same as opetopes. We now answer this question positively.
\begin{definition}
[Cauchy-complete category]
\label{def:cauchy-complete}
An idempotent morphism $e : a \longrightarrow a$ \emph{splits} if it
decomposes as $e = i r$ with $r i = \id_a$. A category is
\emph{Cauchy-complete} if all its idempotent morphisms split. \end{definition}
\begin{theorem}
[{\cite[theorem 1]{Borceux1986}}]
\label{th:cauchy-complete}
Let $\catAA$ and $\catBB$ be Cauchy-complete categories. An equivalence of
categories $\Psh\catAA \xto{\simeq} \Psh\catBB$ restricts and corestricts
to the representable presheaves, or in other words, to an equivalence
$\catAA \xto{\simeq} \catBB$. \end{theorem}
\begin{corollary}
\label{coroll:opt-oplex}
The category $\bbOO\mathrm{plex}$ of opetopic plexes is equivalent to
$\bbOO$. \end{corollary} \begin{proof}
By definition, $\bbOO$ is a directed category, and by \cite[proposition
2.2.3 (4)]{Henry2019}, so is $\bbOO\mathrm{plex}$. In particular, they are
both Cauchy-complete. On the other hand, ${\Psh\bbOO} \simeq \Pol^\MTO \simeq
\Psh{\bbOO\mathrm{plex}}$, and we conclude using \cref{th:cauchy-complete}. \end{proof}
In \cite{Palm2004}, Palm studies another approach to weak higher-dimensional categories, based on \emph{dendrotopic sets}, and shows that dendrotopic sets are equivalent to many-to-one polygraphs. Therefore,
\begin{corollary}
\label{coroll:opt-dendro}
Opetopic sets are equivalent to dendrotopic sets. \end{corollary}
\section{Conclusion}
To conclude, here is a diagram of the various approaches to the notion of pasting diagrams and ``many-to-one shapes'', and how they are related throughout the literature and in this paper:
\end{document}
\begin{document}
\flushbottom
\title{The effect of oscillator and dipole-dipole interaction on multiple optomechanically induced transparency in cavity optomechanical system}
\thispagestyle{empty}
In atomic systems, electromagnetically induced transparency (EIT)\cite{Harris1990,Boller,Fleischhauer} is induced by quantum interference effects or Fano interactions\cite{Fano} arising from coherently driving the atomic wavepacket with an external control laser field. Optomechanically induced transparency (OMIT), a phenomenon analogous to EIT, was first predicted theoretically\cite{Agarwal,Huang} and then verified experimentally\cite{Weis,Safavi} in a cavity optomechanical system; it is caused by the destructive quantum interference between different pathways of the internal fields. More recently, the study of OMIT has attracted much attention, for instance in connection with single-photon routers\cite{Agarwal1}, ultraslow light propagation\cite{Teufel}, quantum ground state cooling\cite{Liu}, precision measurement\cite{Zhang}, Brillouin-scattering-induced transparency and non-reciprocal light storage\cite{Dong,Kim}, optomechanically induced amplification\cite{Yan}, effective mass sensing\cite{Gao}, control of photon propagation in lossless media\cite{LHe}, optomechanically induced stochastic resonance\cite{Monifi}, and chaos transfer and parity-time-symmetric microresonators\cite{Jing}. In addition, tunable EIT and absorption\cite{HIan}, polariton states\cite{xgu} and the transition from blockade to transparency\cite{yxliu} in a circuit-QED system have also been studied. On the other hand, studies of OMIT have been extended to double and multiple optomechanically induced transparency\cite{zmzhang} by integrating more optical or mechanical modes. It has been reported that multiple OMIT windows may occur in atomic-media-assisted optomechanical systems\cite{Xiao,Wang,Akram}, multi-resonator optomechanical systems\cite{Huang2014}, optomechanical systems with $N$ membranes\cite{Huang1403}, two coupled optomechanical systems\cite{Sohail2017} and multi-cavity optomechanical systems\cite{Sohail}. In particular, the multi-OMIT phenomenon has many practical applications in multi-channel optical communication and quantum information processing, which motivates further investigation of such OMIT.
Currently, hybrid cavity optomechanical systems containing atoms have attracted much attention. The additional control over the atomic degrees of freedom can lead to rich physics resulting from the enhanced nonlinearities and strengthened coupling, and can also provide a coherent optically controlled method to change the width of the transparency window\cite{Ian,Akram,Xiao}, the multistability of OMIT\cite{chang} and the switch from single to double and multiple OMIT windows\cite{Sohail}. On the other hand, there has been great interest for many years in studying the phenomenon of EIT in interacting Rydberg atom systems, owing to the strong long-range dipole-dipole interactions (DDI) or van der Waals interactions and the long radiative lifetimes\cite{Fleischhauer}. Based on the essential blockade effect arising from DDI, some novel behaviors in EIT have been revealed, such as transmission reduction\cite{Petrosyan}, nonlocal propagation and enhanced absorption\cite{Li}, nonlocal Rydberg EIT\cite{Wu}, nonlinear Rydberg EIT\cite{Liu2014}, and dipolar-exchange-induced transparency\cite{Petrosyan2017}. Furthermore, optomechanical cavity systems assisted by Rydberg atomic ensembles have been proposed to investigate state transfer, sympathetic cooling and non-classical state preparation\cite{Guerlin,Carmele}. It has also been found that an all-optical transistor can be manipulated by controlling the Rydberg excitation\cite{Liu1}. Even though much meaningful research on EIT based on Rydberg atoms has been conducted, further studies of OMIT assisted by DDI Rydberg atoms are still expected.
Motivated by the remarkable developments and potential applications of OMIT mentioned above, in the present work we study multiple OMIT in a multi-cavity optomechanical system (MCOS) assisted by a pair of DDI Rydberg atoms driven by two coupling fields. Different from previous studies, we focus on a multi-cavity optomechanical system composed of $N$ optical modes and $N$ mechanical modes. The Heisenberg-Langevin equations for the hybrid MCOS are solved, and the in-phase and out-of-phase quadratures of the output field are obtained from the input-output theory to determine the effects of the odd- and even-labelled oscillators and of the DDI on the multi-OMIT. It is found that the multi-OMIT and the Fano resonance can be controlled by the DDI.
The paper is organized as follows: In Sec. \uppercase\expandafter{\romannumeral2}, we introduce the multi-cavity optomechanical system and the Hamiltonian of our system, and Sec. \uppercase\expandafter{\romannumeral3} is devoted to obtaining
the Langevin equations of the system and the output field based on the input-output theory. The effects of the mechanical oscillators and DDI on OMIT are discussed in Sec. \uppercase\expandafter{\romannumeral4}. Finally, the conclusions are summarized in Sec. \uppercase\expandafter{\romannumeral5}.
\section*{{\protect\LARGE \textbf{Results}}}
\subsection{Theoretical model and Hamiltonian.}
The $1D$ MCOS under consideration is shown in Fig.~\ref{fig1}. The $N$th cavity of the optomechanical cavity array is coherently driven by a strong control field of frequency $\omega_c$ and a weak probe laser field of frequency $\omega_p$.
The $N$ optomechanical cavities are labelled $1, 2, \cdots, N$. The frequencies of the $j$th cavity and the $j$th mechanical oscillator are denoted by $\omega_j$ and $\omega_{mj}$, respectively. The coupling strength between the $j$th cavity and the $j$th mechanical oscillator is $g_{mj}$, and $g_n$ is the hopping rate between the $n$th and $(n+1)$th cavities $(n\not=N)$. In addition, a pair of DDI-coupled ladder-type three-level Rydberg atoms is placed in the $i$th cavity. The Rydberg atoms of our system may be chosen as Cesium (Cs) atoms: the fine-structure states $|6S_{1/2}, F=4\rangle$ and $|6P_{3/2}, F^{\prime}=5\rangle$ can be regarded as the ground state $|g\rangle$ and the intermediate state $|e\rangle$, respectively, while the corresponding Rydberg state $|r\rangle$ is taken to be $^{70}S_{1/2}$ \cite{Goban}. As for the first Rydberg atom, the control field of frequency $\omega_c$ is coupled to the $|e\rangle \leftrightarrow |r\rangle$ transition with a Rabi frequency $\Omega$ and a frequency detuning $\Delta_r$. The $i$th cavity field drives the $|g\rangle \leftrightarrow |e\rangle$ transition with strength $g$ and frequency detuning $\Delta_e$. The second Rydberg atom is assumed to be excited to the Rydberg state and coupled with the first Rydberg atom by DDI in the $i$th cavity ($1\le i\le N$), owing to the long lifetime ($\tau\geq100\,\mu s$) of the Rydberg state. As explained in Refs.\cite{Peyronel,Beguin,Neuzner}, this configuration is experimentally feasible when the blockade radius is smaller than the interatomic distance of the pair of Rydberg atoms; then both atoms can be excited to the Rydberg state simultaneously and their interaction is of the van der Waals (DDI) type.
\begin{figure}
\caption{Schematic diagram of the multi-cavity optomechanical system. (a) $N$ cavities connect through hopping rates $g_n$. A pair of Rydberg atoms are put into the $i$th cavity.
(b) The pair of ladder-type three-level Rydberg atoms interact with each other, and one of the Rydberg atoms is excited to the Rydberg state during the interaction process. $g$ and $\Delta_e$ are the coupling strength and the frequency detuning of the transition $|g\rangle \leftrightarrow |e\rangle$, respectively. $\Omega$ and $\Delta_r$ are the Rabi frequency and the frequency detuning of the transition $|e\rangle \leftrightarrow |r\rangle$, respectively. In addition, $V(R)$ is the DDI strength between the two Rydberg atoms.}
\label{fig1}
\end{figure}
The total Hamiltonian $H$ of the hybrid cavity optomechanical system in the rotating-wave frame can be written as \begin{equation} H = H_c + H_m + H_a +H_{in}+ H_{int},\label{H1} \end{equation} where the first four terms describe the Hamiltonians of the optical cavities, the mechanical oscillators, the two Rydberg atoms and the input fields, with the following expressions: \begin{align} \begin{split} H_c =& \sum_{j=1}^{N}\Delta_{j}c^{\dagger}_{j}c_{j},\\ H_m =& \sum_{j=1}^{N}\omega_{mj}b_j^{\dagger}b_j,\\ H_a =& \Delta_{e}\sigma_{ee}^{(1)} +(\Delta_{e}+\Delta_{r})\sigma^{(1)}_{rr}+ \omega_{rg}\sigma^{(2)}_{rr},\\ H_{in} =& i\varepsilon_{c}(c_{_{N}}^{\dagger}-c_{_{N}})+i\varepsilon_{p}(c_{_{N}}^{\dagger}e^{-i\Delta t}-c_{_{N}} e^{i\Delta t}). \end{split} \label{H2} \end{align}
The optical modes are described by the annihilation (creation) operators $c_j$ ($c_{j}^\dagger$) of the $j$th cavity field, and $b_j^\dagger$ ($b_j$) is the creation (annihilation) operator of the $j$th mechanical resonator. $\Delta_j= \omega_j-\omega_c$ is the detuning of the $j$th cavity field from the control field, and $\Delta=\omega_p-\omega_c$ represents the detuning between the probe field and the control field. $\Delta_{e} = \omega_{eg}-\omega_j$, $\Delta_{r} = \omega_{re}-\omega_p$, and $\omega_{\mu \nu}$ represents the frequency of the atomic transition between the level $|\mu\rangle$ and level
$|\nu\rangle (\mu,\nu=g,e,r)$. $\sigma_{\mu\nu}^{(k)}\equiv|\mu\rangle_{kk}\langle\nu|$ is the projection $(\mu=\nu)$ or
transition $(\mu\ne\nu)$ operator of the $kth$ ($k=1,2$) Rydberg atom. Moreover, the Hamiltonian of the input fields includes the Hamiltonian of the control field and probe field. $\varepsilon_c$ is the control field amplitude and $\varepsilon_p$ is the probe field amplitude.
The last term of Eq. (\ref{H1}) describes the system's interaction Hamiltonian, \begin{align} H_{int}=& \sum_{n=1}^{N-1}g_{n}(c_{n+1}^{\dagger}c_{n}+c_{n+1}c_{n}^{\dagger})- \sum_{j=1}^{N}g_{mj}(c_{j}^{\dagger}c_{_j})(b_j^{\dagger}+b_j)+(\Omega \sigma_{er}^{(1)}+g c_i\sigma_{eg}^{(1)} +H.c)+V(R)\sigma_{rr}^{{(1)}}\sigma_{rr}^{{(2)}}. \label{H3} \end{align} In Eq. (\ref{H3}), the first term corresponds to the hopping between adjacent cavities, with $g_n$ the intercavity tunneling strength. The second term describes the interaction between the $j$th cavity and the $j$th mechanical oscillator via the radiation pressure, with $g_{mj}$ the coupling strength. The third term describes the interaction of the first Rydberg atom with the control field and with the $i$th cavity field. The last term describes the DDI between the two Rydberg atoms, with strength $V(R)$, where $R$ is the distance between the two Rydberg atoms, which can be controlled over different ranges by separate optical traps\cite{Beguin}.
\subsection{The dynamical equation.} The Heisenberg-Langevin equations for the operators can be obtained from the Hamiltonian (\ref{H1}). Using the factorization assumption (mean-field approximation), viz. $\langle QC\rangle=\langle Q\rangle\langle C\rangle$\cite{Agarwal,Akram2015}, the equations for the mean values of the operators are given by
\begin{align}
\begin{split}
\langle\dot{c}_{_N}\rangle =&-(\kappa_{_N}+i\tilde\Delta_{_N})\langle c_{_N}\rangle-ig_{_{N-1}}\langle c_{_{N-1}}\rangle+\varepsilon_c+\varepsilon_pe^{-i\Delta t}
+ ig_{mN}\langle c_N\rangle(\langle b_N^{\dagger}\rangle+\langle b_N\rangle),\\
\langle\dot{c}_n\rangle =&-(\kappa_n+i\tilde\Delta_n)\langle c_n\rangle-i(g_{n-1}\langle c_{n-1}\rangle+g_{n}\langle
c_{n+1}\rangle)+ ig_{mn}\langle c_n\rangle(\langle b_n^{\dagger}\rangle+\langle b_n\rangle),n\neq1,i,N,\\
\langle\dot{c}_1\rangle =&-(\kappa_1+i\tilde{\Delta}_1)\langle c_1\rangle-ig_{1}\langle c_{2}\rangle+ig_{m1}\langle c_1\rangle(\langle b_1^{\dagger}\rangle+\langle b_1\rangle),\\
\langle\dot{b}_j\rangle =&-(\gamma_{mj}+i\omega_{mj})\langle b_j\rangle+ig_{mj}|\langle c_{j}\rangle|^2,\\ \langle\dot{\sigma}_{ge}\rangle =& -(\gamma_{ge}+i\Delta_{e})\langle\sigma_{ge}\rangle+ig(\langle\sigma_{ee}\rangle-\langle\sigma_{gg}\rangle)\langle c_{i}\rangle-i\Omega \langle{\sigma}_{gr}\rangle,\\
\langle\dot{\sigma}_{gr}\rangle=&-(\gamma_{gr}+iS+i\Delta_{r})\langle{\sigma}_{gr}\rangle+ig\langle{\sigma}_{er}\rangle\langle c_i\rangle-i\Omega\langle\sigma_{ge}\rangle,\\ \langle\dot{\sigma}_{er}\rangle =&-(\gamma_{er}+i\Delta_{r}+iS-i\Delta_{e})\langle{\sigma}_{er}\rangle+ig\langle{\sigma}_{gr}\rangle\langle c_i\rangle +i\Omega(\langle\sigma_{rr}\rangle-\langle\sigma_{ee}\rangle), \end{split} \label{H4} \end{align} For two Rydberg atoms trapped in the $ith$ cavity case \begin{align} \langle\dot{c}_i\rangle =&-(\kappa_i+i\tilde\Delta_i)\langle c_i\rangle-i(g_{i-1}\langle c_{i-1}\rangle+g_{i}\langle c_{i+1}\rangle)
-ig\langle\sigma_{ge}\rangle +ig_{mi}\langle c_i\rangle(\langle b_i^{\dagger}\rangle+\langle b_i\rangle),i\neq1,N. \label{H5}
\end{align} If the two Rydberg atoms are confined in the first cavity, Eq.~(\ref{H5}) should be replaced by
\begin{align} \langle\dot{c}_1\rangle =&-(\kappa_1+i\tilde{\Delta}_1)\langle c_1\rangle-ig_{1}\langle c_{2}\rangle+ig_{m1}\langle c_1\rangle(\langle b_1^{\dagger}\rangle+\langle b_1\rangle)-ig\langle\sigma_{ge}\rangle. \label{H6}
\end{align} If one puts the Rydberg atoms into the $Nth$ cavity, Eq.~(\ref{H5}) should be replaced by
\begin{align}
\langle\dot{c}_{_N}\rangle =&-(\kappa_{_N}+i\tilde\Delta_{_N})\langle c_{_N}\rangle-ig_{_{N-1}}\langle c_{_{N-1}}\rangle+\varepsilon_c+\varepsilon_pe^{-i\Delta t} -ig\langle\sigma_{ge}\rangle+ig_{mN}\langle c_N\rangle(\langle b_N^{\dagger}\rangle+\langle b_N\rangle), \label{H7}
\end{align} where $\kappa_j$ and $\gamma_{mj}$ are introduced phenomenologically to denote the dissipation rate of the $j$th cavity and the decay rate of the $j$th mechanical oscillator, respectively. We set $S=V(R)$ with $\bar\sigma_{rr}^{(2)}=1$, since the second Rydberg atom is assumed to be excited to the Rydberg state during the interaction process with the first Rydberg atom. $\gamma_{\mu\nu}$ $(\mu,\nu=g,e,r)$
is the decay rate of the transition between the level $|\mu\rangle$
and the level $|\nu\rangle$. In addition, $\tilde\Delta_{j}=\Delta_{j}-g_{mj}\bar\lambda_j$; the general form of $\bar\lambda_j$ will be given in the following. We now seek the steady-state solutions, which are exact in the control field parameter $\varepsilon_c$ and correct to first order in the probe field parameter $\varepsilon_p$. As the probe field is much weaker than the control field, the average value of the operator ${O}$ can be approximately written using the ansatz\cite{Boyd}
\begin{equation}
\langle{O}\rangle\!=\!\bar{O}+\delta O(t)=\bar{O}+O_{-} e^{-i\Delta t}+O_{+} e^{i\Delta t}. \label{H8}
\end{equation} where $\bar{O}$ describes the steady-state value of the operator ${O}$ governed by the control field, while $\delta O(t)$ is proportional to the weak probe field and gives rise to the Stokes and anti-Stokes scattering of light from the strong control field. Substituting Eq.~(\ref{H8}) into Eqs.~(\ref{H4})-(\ref{H7}), one can obtain the steady-state solutions of the Heisenberg-Langevin equations. Because $\bar O$ is independent of time, while $\delta O(t)$, of the same order as $\varepsilon_p$, depends on time but remains much smaller than $\bar O$, one can separate the equations into two parts: a time-independent part and a time-dependent one. Assuming that the cavity optomechanical system\cite{LiaoJQ,LiaoJQ1,LiaoJQ2,LiaoJQ3,LiaoJQ4} operates in the resolved sideband regime, i.e., $\kappa_j\ll\omega_{mj}$, the Stokes part (the lower, off-resonant sideband) can be ignored, i.e., $O_+\approx 0$ in Eq.~(\ref{H8}), and only the anti-Stokes scattering survives in the hybrid system. Thus, all elements of $O_-$ can be obtained as follows using the above ansatz, \begin{align} \begin{split}
0 =&-(\kappa_{_N}-ix_N)c_{_N,-}-ig_{_{N-1}} c_{_{N-1,-}}+\varepsilon_p+iG_{mN} b_{N,-},\\
0 =&-(\kappa_n-ix_{n})c_{n,-}-i(g_{n-1} c_{n-1,-}+g_{n}c_{n+1,-})+ iG_{mn} b_{n,-},n\ne 1,i,N,\\
0 =&-(\kappa_i-ix_i) c_{i,-} -i(g_{i-1}c_{i-1,-} +g_{i}c_{i+1,-} )+ iG_{mi} b_{i,-}-ig\sigma_{ge,-},i\ne 1,N,\\
0 =&-(\gamma_{ge}-ix)\sigma_{ge,-} +ig(\bar\sigma_{ee} -\bar\sigma_{gg} )c_{i,-} -i\Omega\sigma_{gr,-},\\
0 =&-(\gamma_{gr}-ix_{gr})\sigma_{gr,-}+ig(\sigma_{er,-} \bar{c}_i+\bar{\sigma}_{er}c_{i,-} )-i\Omega\sigma_{ge,-},\\
0 =&-(\gamma_{er}-ix_{er})\sigma_{er,-} +ig(\sigma_{gr,-}\bar{c}_i +\bar{\sigma}_{gr}c_{i,-})+ i\Omega(\bar\sigma_{rr} -\bar\sigma_{ee} ),\\
0 =&-(\kappa_1-ix_1)c_{1,-} -ig_{1}c_{2,-} +iG_{m1} b_{1,-},\\
0 =&-(\gamma_{mj}-ix_j)b_{j,-} +iG_{mj}^* c_{j,-},\\ \end{split} \label{H9} \end{align} As we provide the equations in the resolved sideband regime, the detuning parameters are set as $\tilde\Delta_{j}=\Delta_{j}=\Delta_{r}=\Delta_{e}=\omega_{mj}$, with $x_{er}=\Delta-\Delta_r-S$ and $x_{gr}=\Delta-\Delta_r-\Delta_e-S$. $x_j=\Delta-\omega_{mj}$ is the detuning from the center line of the sideband. $G_{mj}^*=g_{mj}\bar c_j^*$ and $G_{mj}=g_{mj}\bar c_j$ describe the effective optomechanical coupling rate of the $jth$ cavity and they are equal. By solving the equations for $\bar{O}$ of the mechanical oscillators, one can obtain
\begin{equation}
\bar\lambda_j\equiv \bar{b}_j+\bar{b}_j^*=\frac{2\omega_{mj} g_{mj}|\bar c_j|^2}{\gamma_{mj}^2+\omega_{mj}^2}.\label{H10}
\end{equation}
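Explicitly, the time-independent part of the equation for $\langle\dot{b}_j\rangle$ in Eq.~(\ref{H4}) reads $0 = -(\gamma_{mj}+i\omega_{mj})\bar{b}_j + i g_{mj}|\bar{c}_j|^2$, so that
\[
\bar{b}_j = \frac{i g_{mj}|\bar{c}_j|^2}{\gamma_{mj}+i\omega_{mj}},
\qquad
\bar{b}_j + \bar{b}_j^{*}
= i g_{mj}|\bar{c}_j|^2 \left( \frac{1}{\gamma_{mj}+i\omega_{mj}} - \frac{1}{\gamma_{mj}-i\omega_{mj}} \right)
= \frac{2\omega_{mj} g_{mj}|\bar{c}_j|^2}{\gamma_{mj}^2+\omega_{mj}^2},
\]
which is Eq.~(\ref{H10}). In the same way, collecting the $e^{-i\Delta t}$ components of that equation and setting $O_+\approx 0$ yields the last line of Eq.~(\ref{H9}).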
\subsection{The output field.}
The response of the system can be detected by the output field at the probe frequency, which can be expressed as follows via the standard input-output theory of the cavity\cite{Walls}, \begin{equation}
\varepsilon_{out,p}e^{-i\Delta t}+\varepsilon_p e^{-i\Delta t}+\varepsilon_c =2\kappa_{_{N}}\langle
c_{_N}\rangle.\label{H11} \end{equation} Therefore, one can express the total output field as \begin{equation}
\varepsilon_T=\frac{\varepsilon_{out,p}}{\varepsilon_p}+1=\frac{2\kappa_{_N}
c_{_{N,-}}}{\varepsilon_p}=\chi_p+i\tilde{\chi}_p.\label{H12} \end{equation} Here, $\chi_p=Re(\varepsilon_T)$ and $\tilde{\chi}_p=Im(\varepsilon_T)$ denote the in-phase and out-of-phase quadratures of the output field, associated with the absorption and dispersion, respectively. OMIT is the phenomenon of simultaneously vanishing absorption and dispersion. These two quadratures of the output field can be measured via the homodyne technique\cite{Walls}. Using Eq.~(\ref{H9}), $c_{N,-}$ can be easily obtained, and the output field $\varepsilon_T$ is then given in the continued-fraction form \begin{eqnarray} \varepsilon_T\!=\!2\kappa_{_N}c_{_{N,-}}/\varepsilon_p\!=\!\frac{2\kappa_{_N}}{B_N+\frac{g^2_{_{N-1}}}{B_{N-1}+\frac{g^2_{_{N-2}}}{\frac{\scriptstyle\ddots}{ B_i+A+\frac{g_{i-1}^{2}}{\frac{\scriptstyle\ddots}{{B_2+\frac{g^2_{1}}{B_1}}}}}}}},\label{H13} \end{eqnarray}
where $B_j=\kappa_{j}-ix_j+\frac{|G_{mj}|^2}{\gamma_{mj}-ix_j}$ $(j=1,...,N)$. In the above equation, the first line of the denominator describes two cavities with decay rates $\kappa_{N}$ and $\kappa_{N-1}$ coupled through the coupling strength $g_{N-1}$. The second line of the denominator describes the interaction of the two cavities with decay rates $\kappa_{N-1}$ and $\kappa_{N-2}$, coupled with strength $g_{N-2}$, and so on. It is obvious that each line of the denominator contains an interaction term, denoted by an effective coupling $G_{mj}$, between the mechanical oscillator and the cavity. Analytically, we note that when $G_{mj}=0$, the mechanical oscillator is not coupled with the $j$th cavity. Moreover, the extra term $A$ in the $B_i$ line represents the interaction of the cavity field with the pair of Rydberg atoms including the DDI; its general form is shown in Eq.~(\ref{H14}) below, with $Q=(\gamma_{gr}+i\Delta_{r}+iS)(\gamma_{ge}+i\Delta_{e})+\Omega^2$, $P=i(\Delta_r+S-\Delta_e)+\gamma_{er}+\frac{G_e^2(\gamma_{ge}+i\Delta_{e})}{(\gamma_{gr}+i\Delta_{r}+iS)(\gamma_{ge}+i\Delta_{e})+\Omega^2}$, and $G_{e}=g\bar{c}_{i}$ the effective coupling strength between the Rydberg atom and the cavity field. Certainly, when one traps the atoms in the first cavity, this term will appear in the last line. If the Rydberg atoms are localized in the $N$th cavity, it will emerge in the first line of the denominator. \begin{equation}
A\!=\!\frac{[g^2(\gamma_{gr}-ix_{gr}+\frac{G_e^2}{\gamma_{er}-ix_{er}})+\frac{(g\Omega G_{e})^2}{PQ}](\bar\sigma_{rr}+2\bar\sigma_{gg}-1) -\frac{(g\Omega)^2}{P}(2\bar\sigma_{rr}+\bar\sigma_{gg}-1)} {(\gamma_e-ix_i)(\gamma_{rg}-ix_{rg}+\frac{G_e^2}{\gamma_{er}-ix_{er}})+\Omega^2}.\label{H14} \end{equation}
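For instance, for $N=2$ with the pair of Rydberg atoms trapped in the first cavity, Eq.~(\ref{H13}) reduces to $\varepsilon_T = 2\kappa_{2}/\left[B_2 + g_{1}^{2}/(B_1 + A)\right]$, and setting $A=0$ recovers the result for two coupled optomechanical cavities without atoms.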
From Eq.~(\ref{H14}), it can be found that the output field depends on $\bar c_{j}$ of the $j$th cavity and on the populations $\bar\sigma_{gg}$ $(\bar\sigma_{rr})$ of the ground (Rydberg) state, which can be determined by solving the time-independent parts of Eqs.~(\ref{H4})-(\ref{H7}) for all $\bar O$. Note that there are four kinds of direct interactions in the system: the coupling between adjacent cavities, the interaction between the cavities and the oscillators, the interaction of the cavities with the Rydberg atoms, and the DDI between the Rydberg atoms. These make the expressions of $\bar c_{j}$, $\bar\sigma_{gg}$ and $\bar\sigma_{rr}$ very complicated, so it is difficult to give their concrete forms. Fortunately, the values of $\bar c_{j}$ only affect the width of the OMIT windows\cite{Sohail}. When one focuses on the number of OMIT windows in the numerical computation, $G_{mj}$ and $G_e$ can therefore be assigned any reasonable and convenient values. By the same argument, we also assume the averages $\bar\sigma_{gg}= 1$ and $\bar\sigma_{rr}= 0$. Besides, in order to obtain as many OMIT windows as possible, the system is taken to work in the weakly dissipative regime, i.e., $g_j \geq \kappa_N\gg\kappa_j,\gamma_{mj/gr/er/ge}$.
Without loss of generality, the parameters of the system are chosen as follows. For the mechanical oscillators, $\gamma_{m1}=\gamma_{m2}=\dots=\gamma_{mN}$; for the effective optomechanical rates, $G_{m1}=G_{m2}=\dots=G_{mN}$; the cavity decay rates are $\kappa_1=\kappa_2=\dots=\kappa_{N-1}$; the tunneling parameters are set as $g_1=g_2=\dots=g_{N-1}=\kappa_N$; and the frequencies of the mechanical oscillators are $\omega_{m1}=\omega_{m2}=\dots=\omega_{mN}$. Therefore, the detunings from the center line of the sidebands are the same, $x_1=\dots=x_N\equiv x$. \\
\subsection{Without Rydberg atoms.}
In this section, we first focus on the multiple OMIT phenomenon that emerges from the interaction between the cavity fields and the mechanical oscillators, without the Rydberg atoms. The parameters are $\omega_{mN}/g_{mj}=20$, $\gamma_{mN}/g_{mj}=0.001$, $\kappa_{N-1}/g_{mj}=0.002$, $\kappa_{N}/g_{mj}=2$, and we assume $G_{mN}/g_{mj}=\bar c_j=1$. The optomechanical coupling parameter $g_{mj}=1$ kHz is based on a realistic cavity optomechanical system\cite{Weis}. For simplicity, the following absorption analysis of the output field is restricted to a hybrid system with four cavities. The generalization to a larger number of cavities can be made by the same method, based on Eqs.~(\ref{H9})-(\ref{H14}).
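As a minimal numerical sketch, assuming the parameter values listed above (in units of $g_{mj}$) and $A=0$, the continued fraction of Eq.~(\ref{H13}) can be evaluated recursively in Python; the \texttt{coupled} flags select which cavities carry a mechanical oscillator, as in the panels of Fig.~\ref{fig2}.
\begin{verbatim}
# Minimal sketch (assumed parameters, in units of g_mj), not the authors' code:
# recursive evaluation of the continued fraction of Eq. (13) with A = 0.
import numpy as np

N = 4                    # number of cavities
kappa_N = 2.0            # decay rate of the N-th (driven) cavity
kappa_j = 0.002          # decay rate of the other cavities
gamma_m = 0.001          # mechanical decay rate
G_m = 1.0                # effective optomechanical coupling |G_mj|
g_hop = kappa_N          # tunneling g_1 = ... = g_{N-1} = kappa_N

def epsilon_T(x, coupled):
    """Output field at common detuning x; coupled[j] tells whether the
    (j+1)-th cavity is coupled to its mechanical oscillator."""
    def B(j):
        kappa = kappa_N if j == N - 1 else kappa_j
        b = kappa - 1j * x
        if coupled[j]:
            b += G_m**2 / (gamma_m - 1j * x)
        return b
    D = B(0)                        # innermost line of the denominator: B_1
    for j in range(1, N):
        D = B(j) + g_hop**2 / D     # B_j + g_{j-1}^2 / D_{j-1}
    return 2 * kappa_N / D

x_grid = np.linspace(-3, 3, 2001) * kappa_N
# e.g. all four oscillators coupled, as in Fig. 2(d); Re gives the absorption
absorption = [epsilon_T(x, coupled=[True] * N).real for x in x_grid]
\end{verbatim}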
\begin{figure}
\caption{The absorption $Re(\varepsilon_T)$ as a function of $x/\kappa_4$ for four cavities. The subplot (a) corresponds to one mechanical oscillator coupled to cavity $1$, the subplot (b) describes two mechanical oscillators coupled to cavity $1$ and $2$, respectively. The subplot (c) shows three mechanical oscillators coupled to cavity $1$, $2$ and $3$. The subplot (d) illustrates four mechanical oscillators coupled to cavity $1$, $2$, $3$ and $4$.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Energy level structure of the multi-cavity optomechanical system coupled with multi-oscillator. The number state of photons and phonons are denoted by $n_j$ and $m_j$. The tunneling parameter between $|n_1,...,n_N;m_1,m_2,...m_N\rangle$ and $|n_1,n_{2}\!+\!1,...n_N;m_1,m_2,...m_N\rangle$ is $g_i$, the coupling strength between $|n_1,...,n_{i}\!+\!1,...n_N;m_1,...,m_i,...m_N\rangle$ and $|n_1,...,n_{i},...n_N;m_1,...,m_i\!+\!1,...m_N\rangle$ is $G_{mj}$.}
\label{fig3}
\end{figure}
Firstly, Fig.~\ref{fig2} illustrates the absorption $Re(\varepsilon_T)$ of the output field as a function of $x/\kappa_N$ for four cavities. In detail, Fig.~\ref{fig2}(a) describes only one mechanical oscillator coupled to the first cavity. The case of mechanical oscillators coupled to the first and second cavities is shown in Fig.~\ref{fig2}(b). Fig.~\ref{fig2}(c) corresponds to three mechanical oscillators coupled to cavities $1$, $2$ and $3$, respectively. Fig.~\ref{fig2}(d) depicts four mechanical oscillators coupled to the four cavities. The dips of the absorption line correspond to the transparency windows of the output field. From Fig.~\ref{fig2}, it can be found that the number of transparency windows increases by one with each additional mechanical oscillator, which is determined by the continued-fraction denominator of Eq.~(\ref{H13}) through the appearance of the coupling parameters $g_{N-1}$ and $G_{mN}$ in the denominators. When the hybrid system has $N$ cavities coupled with $N$ mechanical oscillators one by one, without considering the effects of the outside environment, the total number of transparency windows becomes $2N\!-\!1$. Thus, the MCOS becomes transparent to the probe field at $2N\!-\!1$ different frequencies, which arise from the destructive interference between the input probe field and the anti-Stokes fields generated by the coupling of the cavity fields within the multiple cavities and by the interactions between the cavity fields and the mechanical oscillators. However, when $N$ becomes large and each cavity couples with its bath, numerical results show that the multiple transparency windows of this system become more and more opaque. Therefore, we are only concerned with small systems $(N<10)$ in realistic experiments. The origin of the multiple OMIT windows can be explained by the quantum interference between different energy-level pathways, and the energy-level configuration of the hybrid system consisting of $N$ cavities coupled with $N$ mechanical oscillators is presented in Fig.~\ref{fig3}. The excitation pathway of the probe field interferes with the different coupling pathways $G_{mj}$ $(j=1,...,N)$ of the control field and the tunneling pathways $g_i$ $(i=1,...,N-1)$. Therefore, the total number of quantum interference pathways is $2N-1$ for $N$ cavities and $N$ mechanical oscillators. In addition, those pathways of destructive quantum interference are formed via the optomechanical interaction and the tunneling, which lead to $2N\!-\!1$ transparency frequencies of the output field, determined by the condition $\varepsilon_T\!\approx\!0$ at the extremum points.
To further explore the characteristics of the OMIT arising from the interaction of the mechanical oscillators, we plot the absorption $Re(\varepsilon_T)$ of the output field as a function of $x/\kappa_N$ for the cases of one and two coupled oscillators. The case without mechanical oscillator coupling is also shown for comparison in Fig.~\ref{fig4}. Due to the destructive interference between the pathways of the mechanical oscillator and the cavity field, a new transparency window appears when the first cavity is coupled to a mechanical oscillator, as shown in Figs.~\ref{fig2}(a) and~\ref{fig4}(a). However, comparing Fig.~\ref{fig4}(b) with~\ref{fig4}(a), it can be found that the third-labelled mechanical oscillator merely broadens the central absorptive peak. On the other hand, Fig.~\ref{fig4}(c) describes the coupling between a mechanical oscillator and the $2nd$ cavity, and Fig.~\ref{fig4}(d) describes mechanical oscillators interacting with the $2nd$ and $4th$ cavities, respectively. Compared with Fig.~\ref{fig4}(a), it can be found that the even-labelled mechanical oscillators do not change the number of transparency windows in either case; they only broaden the central absorptive dip compared to the case without mechanical oscillator coupling. Note that, although all the mechanical oscillators are identical, they can still lead to different quantum interference pathways.
\begin{figure}
\caption{The absorption $Re(\varepsilon_T)$ as a function of $x/\kappa_{4}$ for four cavities. The subplot (a) corresponds to no mechanical oscillators coupled to the cavities, while the subplot (b) describes two mechanical oscillators coupled to cavity $1$ and $3$, respectively. The subplot (c) shows one mechanical oscillator coupled to cavity $2$, and the subplot (d) illustrates two mechanical oscillators coupled to the $2nd$ and $4th$ cavity, respectively.}
\label{fig4}
\end{figure}
The numerical calculation shows that, if one enlarges the number of cavities and of odd- (even-)labelled mechanical oscillators, the results are similar to those mentioned above. In detail, for the odd-labelled case, the number of transparency windows increases only by one compared with the case without mechanical oscillator coupling, no matter how many mechanical oscillators are coupled to the cavities, and additional odd-labelled mechanical oscillators only slightly change the width of the central absorptive peak. For the even-labelled case, additional oscillators only alter the width of the central absorptive peak or dip. These behaviors can be analyzed from Eq. (\ref{H13}). Without coupled mechanical oscillators, the equation $\varepsilon_T\approx0$ has $N-1$ different roots at the extremum points. When an odd- (even-)labelled oscillator is coupled to its cavity, $\varepsilon_T\approx0$ has at most $N$ ($N-1$) different roots. Furthermore, when only odd- or even-labelled oscillators are coupled to the cavities, we also find that increasing the effective optomechanical rate $G_{mN}$ remarkably broadens the central absorptive peak or dip. For the broadened central absorptive dip, the destructive interference is weakened as the dip of the output field grows, and the consequent EIT-Autler-Townes-splitting (ATS) crossover, or ATS itself\cite{Autler}, can occur. Due to the splitting of energy levels resulting from the strong field-driven interactions, distinguishing OMIT or EIT from ATS has been investigated in detail in the toroidal microcavity system\cite{BPeng} and the circuit quantum electrodynamics system\cite{QCLiu,HCSun}.
\begin{figure}
\caption{ The absorption $Re(\varepsilon_T)$ as a function of $x/\kappa_4$. Figs. 5(a)-5(d) illustrate the cases of two Rydberg atoms trapped in the $1st$ cavity coupled with the mechanical oscillator, and correspond to DDI strengths $V(R)/g_{mj}=(0,2,4,6,10,30)$, respectively.}
\label{fig5}
\caption{ The absorption $Re(\varepsilon_T)$ as a function of $x/\kappa_4$. Figs. $6(a)$-$6(d)$ describe the cases of two Rydberg atoms trapped in the $2nd$ cavity coupled with a mechanical oscillator, and correspond to DDI with different strengths $V(R)/g_{mj}=(0,2,4,6,10,30)$, respectively.}
\label{fig6}
\end{figure}
\begin{figure}
\caption{ Real part $Re(\varepsilon_T)$ as a function of $x/\kappa_4$. Figs. $7(a)$-$7(d)$ illustrate the cases of two Rydberg atoms trapped in the $1st$ cavity while the mechanical oscillators are not coupled to the cavities, corresponding to DDI strengths $V(R)/g_{mj}=(0,2,4,30)$, respectively.}
\label{fig7}
\caption{ Real part $Re(\varepsilon_T)$ as a function of $x/\kappa_4$. Figs. $8(a)$-$8(d)$ describe the cases of two Rydberg atoms trapped in the $2nd$ cavity while the mechanical oscillators are not coupled to the cavities, corresponding to DDI strengths $V(R)/g_{mj}=(0,2,4,30)$, respectively.}
\label{fig8}
\end{figure}
\subsection{With Rydberg atoms.} In the preceding section, we considered the variation of the multi-OMIT without the Rydberg atoms. Now, we shall investigate the multi-OMIT in the present system in which two Rydberg atoms are trapped in the $ith$ $(i=1,...,N)$ cavity and interact with the cavity field, and explore the effects of DDI on the OMIT. The parameters are $\gamma_{rr}/g_{mj}=\gamma_{gr}/g_{mj}=\gamma_{ee}/g_{mj}=\gamma_{er}/g_{mj}=0.001$ and $\Omega/g_{mj}=g/g_{mj}=1$. The other parameters are the same as in the previous section. In order to simplify the model and highlight the effect of the Rydberg atoms in the $ith$ cavity, we only consider one mechanical oscillator interacting with the $ith$ cavity, since the others do not directly affect the behavior of the Rydberg atoms in principle.
In general, the maximal DDI strength is of the order of gigahertz\cite{Li2005}. Figs.~\ref{fig5}(a)-(d) describe the case of four cavities in which one mechanical oscillator interacts with the $1st$ cavity and the Rydberg atoms are trapped in the same cavity, for different DDI strengths. In Fig.~\ref{fig5}(a), when the DDI strength is zero, one can find that two extra symmetric transparency windows (extra resonances) appear on both sides of the central absorptive peak compared to the case without Rydberg atoms [see Fig.~\ref{fig2}(a)]. One can also find that the positions of the two extra resonances move to the right with the increase of the DDI strength, as shown in Figs.~\ref{fig5}(b)-(d), but the position of the left extra resonance moves more slowly than that of the right one. In Fig.~\ref{fig6}, one mechanical oscillator and two Rydberg atoms coupled to the $2nd$ cavity are considered. The variation tendencies of the two extra resonances are the same as those in Fig.~\ref{fig5}; however, the widths, positions and amplitudes of the two extra resonances are different. When the Rydberg atoms are trapped in the $3rd$ or $4th$ cavity, numerical results show the same variation tendencies of the two extra resonances as in Figs.~\ref{fig5} and~\ref{fig6}, respectively, with only small differences in the widths and amplitudes of the two extra resonances.
In addition, the amplitudes of the two extra resonances become smaller and evolve into Fano resonances as the DDI strength increases. As the DDI strength increases further, the left extra resonance approaches the central absorptive dip and then both extra resonances die out. Comparing Fig.~\ref{fig5}(d) (\ref{fig6}(d)) with Fig.~\ref{fig2}(a) (\ref{fig4}(c)), we find that when the DDI strength is large, the DDI only affects the width of the central absorptive dip or peak. Therefore, a large DDI strength between the Rydberg atoms has only a slight influence on the output field. On the other hand, from Eq.~(\ref{H13}), one can find that the DDI strength adjusts the effective detunings $x_{gr}$ and $x_{er}$, which makes the OMIT sensitive to the DDI strength. It is known that, as the effective detuning changes, the extra OMIT windows can shift and acquire a Fano line shape\cite{Qu}. Then the extra narrow OMIT window, an analogue of EIT, evolves into a Fano resonance in the output field of the hybrid optomechanical system as the DDI strength between the two Rydberg atoms increases.
In Figs.~\ref{fig5} and~\ref{fig6}, we discussed the influence of both the DDI strength and the mechanical oscillator coupling on the absorption of the output field, whereas only the DDI strength is considered in Figs.~\ref{fig7} and~\ref{fig8}. Comparing Figs.~\ref{fig5} (\ref{fig6}) with Figs.~\ref{fig7} (\ref{fig8}), we find that the output field behaves in the same way, apart from slight differences in the position and width of the transparency windows in the cases without mechanical oscillator coupling. In detail, there are two additional transparency windows for weak DDI strength. As $V(R)$ becomes larger and larger, the two extra windows shift and become Fano resonances, until the right extra resonance in the absorption profile gradually disappears and the left extra resonance approaches the central absorptive peak. Note that, in the regime of large DDI strength, the system reduces to a coupled-cavity system assisted by a two-level atom\cite{Sohail}, because the influence of the coupled Rydberg atoms resembles that of a mechanical oscillator, as mentioned above. If the positions of the atoms are different, different numbers of transparency windows appear, as shown in Figs.~\ref{fig7}(d) and~\ref{fig8}(d). This result may be explained as follows. When the DDI strength between the Rydberg atoms is relatively weak, the second excited Rydberg atom obviously does not shift the levels of the first one. The system can then be regarded as coupled cavities interacting with both a mechanical resonator and ladder-type Rydberg atoms. Due to the transitions $|g\rangle \leftrightarrow |e\rangle$ and $|e\rangle \leftrightarrow |r\rangle$ of the Rydberg atom in the hybrid system, additional interference pathways appear, and two additional OMIT windows are observed in the absorption profile. With the increase of the DDI strength, the Rydberg blockade suppresses the excitation of the first atom and the OMIT condition is no longer fulfilled for it. The first atom then acts as a two-level atom which couples resonantly to the probe field.
\section*{{\protect\LARGE \textbf{Conclusion and Discussion}}}
In summary, we have studied the OMIT of the MCOS. For the case without Rydberg atoms trapped in the cavities, we have demonstrated that the MCOS generates $2N\!-\!1$ $(N<10)$ OMIT windows in the output field when $N$ cavities interact with $N$ mechanical oscillators, respectively. However, the odd- and even-labelled oscillators lead to different effects: if only odd-labelled oscillators are present, just one extra OMIT window emerges in the absorption profile through quantum interference, whereas additional even-labelled mechanical oscillators merely broaden the central absorptive dip or peak. Under these circumstances, the corresponding transparency window can change from OMIT to ATS by increasing the effective optomechanical rate. On the other hand, when two Rydberg atoms are trapped in the $ith$ cavity with weak DDI and the cavity is coupled with a mechanical oscillator, two extra OMIT windows can be observed. In addition, the two extra OMIT windows gradually move to the far off-resonance regime as the DDI strength increases; the right extra resonance moves faster, and it vanishes at large DDI strength. Furthermore, Fano resonances also appear as the DDI strength changes.
In experiment, one possible scheme is the toroidal microcavity-tapered optical fiber system coupled with Rydberg atoms. Firstly, the effect of OMIT in a single optical nanofiber-based photonic crystal optomechanical cavity has been engineered in experiments\cite{Huangjy,BPeng}. Furthermore, a two-color optical dipole trap has been realized by using red- and blue-detuned evanescent light fields near the optical nanofiber. This method allows the Rydberg atoms to be prepared a few hundred nanometers from the nanofiber surface and coupled with the $ith$ photonic crystal cavity\cite{Vetsch,Goban}. In addition, a series of nanofibers acting as a 1D coupled-cavity array has been realized experimentally\cite{Notomi}, and this has been extended to lattices of coupled resonators with Rydberg atoms\cite{ZhangY}. Therefore, combining the above experiments, the multi-cavity optomechanical system with two Rydberg atoms trapped in one cavity may be realizable with present-day or near-term technology.
\subsection*{Acknowledgments}
This work was supported by the National Natural Science Foundation of China under grants Nos. 11274148 and 11434015, the National Key R$\&$D Program of China under grants Nos. 2016YFA0301500, and SPRPCAS under grants No. XDB01020300, XDB21030300.
\subsection*{Author contributions}
J.-L.M., L.T., Q.L., H.-Q.G., and W.-M.L. conceived the idea. J.-L.M. performed the theoretical as well as the numerical calculations. J.-L.M. and L.T. interpreted physics and wrote the manuscript. All of the authors reviewed the manuscript.
\subsection*{Additional Information} \textbf{ Competing financial interests:} The authors declare that they have no competing interests.
\end{document}
\begin{document}
\title{Quantum computers can search rapidly by using almost any selective transformations} \author{Avatar Tulsi\\
{\small Department of Physics, Indian Institute of Science, Bangalore-560012, India}} \email{[email protected]}
\begin{abstract} The search problem is to find a state satisfying certain properties out of a given set. Grover's algorithm drives a quantum computer from a prepared initial state to the target state and solves the problem quadratically faster than a classical computer. The algorithm uses selective transformations to distinguish the initial state and the target state from other states. It does not succeed unless the selective transformations are very close to phase-inversions. Here we show a way to go beyond this limitation. An important application lies in quantum error-correction, where the errors can cause the selective transformations to deviate from phase-inversions. The algorithms presented here are robust to errors as long as the errors are reproducible and reversible. This particular class of systematic errors often arises from imperfections in the apparatus setup. Hence our algorithms offer a significant flexibility in the physical implementation of quantum search. \end{abstract} \pacs{03.67.Ac, 03.67.Lx, 03.67.Pp}
\maketitle
\section{Introduction}
Suppose we have a set of $N$ items, $j=0,1,2,\ldots N-1$, and a binary function $f(j)$ which is $1$ if $j$ satisfies certain properties (e.g. if it is a solution to a certain computational problem) and $0$ otherwise. Let $T$ be the set of $M$ items for which $f(j)=1$, i.e. $T=\{j|f(j)=1\}$ and $|T|=M$. Consider the situation when the items are not sorted according to any property, but $f(j)$ can be computed by querying an oracle that outputs $f(j)$ for any input $j$. The search problem is to find an element of $T$ (i.e. a solution) using the minimum number of oracle queries. The best classical algorithm for this problem is to randomly pick an item $j$, use an oracle query to check whether $j\in T$, and then repeat the process till a solution is found. On average, it takes $O(N/M)$ oracle queries to succeed, since $M/N$ is the probability that the picked item is a solution.
In a quantum setting, Grover's search algorithm~\cite{grover1} provides a much faster way. The $N$ items are encoded as basis states $|j\rangle$ of an $N$-dimensional Hilbert space, which can be realized using $n=\log_{2}N$ qubits (without loss of generality, we assume $N$ to be a power of $2$). The initial unbiased state is chosen as the equal superposition state, $(1/\sqrt{N})\sum_{j}|j\rangle$, generated by applying the Walsh-Hadamard transformation $W$ on $|0\rangle$. The target state $|t\rangle$ can be any normalised state $\sum_{j\in T}a_{j}|j\rangle$ within the target subspace, since measuring $|t\rangle$ will always give a solution. Grover's algorithm obtains $|t\rangle$ by applying $O(\sqrt{N/M})$ iterations of the operator $\mathcal{G}=WI_{0}WI_{t}$ on $W|0\rangle$. Here $I_{t}=\sum_{j}(-1)^{\delta_{jt}}|j\rangle\langle j|$ and $I_{0}=\sum_{j}(-1)^{\delta_{j0}}|j\rangle\langle j|$ are the selective phase-inversions of $|t\rangle$ and $|0\rangle$ states respectively. Grover's algorithm thus provides a quadratic speedup over the classical algorithm, as each iteration of $\mathcal{G}$ uses one oracle query to implement $I_{t}$.
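As a quick numerical check of these counts, the following self-contained NumPy sketch (an illustration with a hypothetical instance and solution set, not part of the analysis) simulates Grover's iteration and confirms that the success probability peaks after about $(\pi/4)\sqrt{N/M}$ oracle queries.
\begin{verbatim}
import numpy as np

n = 8; N = 2**n                    # hypothetical instance: 8 qubits, N = 256 items
targets = {3, 100, 200}            # hypothetical solution set T, so M = 3
M = len(targets)

s = np.full(N, 1/np.sqrt(N))       # equal superposition W|0>
oracle = np.array([-1.0 if j in targets else 1.0 for j in range(N)])

def grover_step(v):
    v = oracle * v                 # I_t: selective phase-inversion of the targets
    return 2*s*np.vdot(s, v) - v   # W I_0 W, up to an irrelevant global sign

psi, probs = s.astype(complex), []
for _ in range(12):                # scan a little past the first maximum
    psi = grover_step(psi)
    probs.append(sum(abs(psi[j])**2 for j in targets))

print("success probability %.4f after %d queries" % (max(probs), np.argmax(probs) + 1))
print("predicted optimum (pi/4)*sqrt(N/M) = %.1f queries" % (np.pi/4*np.sqrt(N/M)))
\end{verbatim}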
Grover showed that his algorithm works even if the Walsh-Hadamard transform $W$ is replaced by \emph{almost} any unitary operator $U$~\cite{grover2}. In this case, the initial state $U|0\rangle$ is a general (not necessarily equal) superposition of the basis states. The operator $\overline{\mathcal G}=UI_{0}U^{\dagger}I_{t}$ is iteratively applied to $U|0\rangle$, and the target state $|t\rangle$ is obtained after $O(1/\alpha_{U})$ iterations, where $\alpha_{U}=\sqrt{\sum_{j\in T}|U_{j0}|^{2}}$ is the projection of $U|0\rangle$ on the target subspace (for $U=W$, $\alpha_{W}=\sqrt{M/N}$). As the probability of getting a target state upon measuring $U|0\rangle$ is $\alpha_{U}^{2}$, the target state can be obtained classically by $O(1/\alpha_{U}^{2})$ preparations of $U|0\rangle$ and subsequent projective measurements. Hence, the quantum algorithm provides a quadratic speedup over this simple scheme by doing the same job in $O(1/\alpha_{U})$ steps. This generalization is known as quantum amplitude amplification~\cite{grover2,qaa}, and forms the backbone of many other quantum algorithms. It has an important application when in a physical implementation $W$ gets replaced by $U$ due to some unavoidable error. The algorithm succeeds as long as $U$ and $U^{\dagger}$ can be consistently implemented even when we do not know their precise form, making it intrinsically robust against certain types of errors. In the case of quantum search, provided $\alpha_{U}\not\ll\alpha_{W}$, there is not much of a slowdown and hence almost any transformation is good enough.
Quantum amplitude amplification often fails, however, when the selective phase-inversions $\{I_{t}, I_{0}\}$ are replaced by other selective transformations, say $\{S_{t},S_{0}\}$. Consider the simple case when $S_{t}=R_{t}^{\phi}=\sum_{j}e^{i\phi\delta_{jt}}|j\rangle\langle j|$ and $S_{0}=R_{0}^{\varphi}=\sum_{j}e^{i\varphi\delta_{j0}}|j\rangle\langle j|$ are the selective phase rotations of $|t\rangle$ and $|0\rangle$ states by angles $\phi$ and $\varphi$ respectively. The well-known phase matching condition~\cite{long1,hoyer1} demands $|\phi-\varphi| \ll \alpha_{U}$ for quantum amplitude amplification to succeed. This is a very strict condition for $\alpha_{U} \ll 1$, while the quadratic speedup is not of much use for large $\alpha_{U}$. In fact, systematic phase mismatching (i.e. $|\phi-\varphi| \not\ll \alpha_{U}$) is known to be the dominant gate imperfection in implementing quantum amplitude amplification, posing an intrinsic limitation to the size of the database that can be searched~\cite{long2}.
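The severity of this condition is easy to observe numerically. The sketch below (again an illustration, with a random unitary $U$ and hypothetical mismatched angles) iterates $\widetilde{\mathcal G}=UR_{0}^{\varphi}U^{\dagger}R_{t}^{\phi}$ with $|\phi-\varphi|\gg\alpha_{U}$; the target probability stays of the order of $\alpha_{U}^{2}$ and never approaches one.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, t = 256, 7                                # hypothetical search space, single target
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))[0]

phi, varphi = 0.5*np.pi, 0.9*np.pi           # mismatched rotation angles
R_t = np.ones(N, complex); R_t[t] = np.exp(1j*phi)
R_0 = np.ones(N, complex); R_0[0] = np.exp(1j*varphi)

psi, best = U[:, 0].copy(), 0.0              # start from U|0>
for _ in range(500):
    psi = U @ (R_0 * (U.conj().T @ (R_t * psi)))   # one step of U R_0 U^dag R_t
    best = max(best, abs(psi[t])**2)

print("alpha_U^2 = %.4f" % abs(U[t, 0])**2)
print("best target probability over 500 iterations = %.4f" % best)
\end{verbatim}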
In this work, we show that a successful quantum search can be obtained with almost any selective transformations $\{S_{t},S_{0}\}$, provided their inverse transformations $\{S_{t}^{\dagger},S_{0}^{\dagger}\}$ are also available. This is useful in situations where the errors are \emph{reproducible} (i.e. every time we ask for the transformation $\mathcal{A}$ the system implements the transformation $\mathcal{B}$) as well as \emph{reversible} (i.e. whenever we ask for the transformation $\mathcal{A}^{\dagger}$ the system implements the transformation $\mathcal{B}^{\dagger}$). For instance, such systematic errors arise when there is incorrect calibration of the instrumentation. In the following, we present two algorithms in this category, one iterative and the other recursive.
In section II, we consider the case of diagonal selective transformations, which rotate the phases of the desired states by \emph{any} amount (unlike the selective phase-inversions that change the phase by $\pi$) but leave all the other (non-desired) states unchanged. We then construct an operator which yields a successful quantum search algorithm when iterated on the initial state, and we show the algorithm to be optimal up to a constant factor. This iterative algorithm does not work in the case when the diagonal selective transformations also perturb the non-desired states. In section III, we design a recursive quantum search algorithm for such transformations provided they are not too far off from the selective phase-inversions. The algorithm requires $O(1/\alpha_{U}^{1+O(\Delta_{t}^{2},\Delta_{0}^{2})})$ queries, where $\Delta_{t}=\|S_{t}-I_{t}\|$ and $\Delta_{0}=\|S_{0}-I_{0}\|$ are the distances of the selective transformations from the corresponding selective phase-inversions, assumed to be small. It is straightforward to extend the above two algorithms to situations where the selective transformations are non-diagonal. We describe that in section IV, together with possible applications of our algorithms to quantum error correction, quantum workspace errors and bounded-error quantum search.
\section{Iterative algorithm}
Consider those selective transformations $\{S_{t},S_{0}\}$ which rotate the phases of the desired states by arbitrary angles but leave all the other states unchanged. In case of $|0\rangle$, there is only one desired state and $S_{0}=R_{0}^{\varphi}=I-(1-e^{i\varphi})|0\rangle\langle 0|$. In case of $|t\rangle$, there can be multiple target states and the rotation phase can be different for different target states, so $S_{t}=R_{t}=\sum_{j}e^{i\phi_{j}\delta_{jt}}|j\rangle \langle j|$. If we iteratively apply the generalized quantum amplitude amplification operator $\widetilde{\mathcal G}=UR_{0}^{\varphi}U^{\dagger}R_{t}$ on the initial state $U|0\rangle$, we will not succeed in getting a target state unless the phase matching condition is satisfied.
Instead, we iteratively apply a different operator, $\mathcal{T} = UR_{0}^{-\varphi}U^{\dagger}R_{t}^{\dagger}UR_{0}^{\varphi}U^{\dagger}R_{t}$, on the initial state $U|0\rangle$. It uses two oracle queries, one for $R_{t}$ and another for $R_{t}^{\dagger}$. It also uses $R_{0}^{\varphi\dagger}=R_{0}^{-\varphi}$ along with $R_{0}^{\varphi}$. Thus, unlike $\widetilde{\mathcal G}$, it makes explicit use of the inverse transformations $\{R_{t}^{\dagger},R_{0}^{\dagger}\}$. Observe that $\mathcal{T}$ is a product of two selective phase rotations: $UR_{0}^{-\varphi}U^{\dagger}$ is a rotation by $-\varphi$ of the state $U|0\rangle$, and $R_{t}^{\dagger}UR_{0}^{\varphi}U^{\dagger}R_{t}$ is a rotation by $\varphi$ of the state $R_{t}^{\dagger}U|0\rangle$. We therefore have \begin{equation} \mathcal{T}=UR_{0}^{-\varphi}U^{\dagger}R_{\sigma}^{\varphi},~~
|\sigma\rangle \equiv R_{t}^{\dagger}U|0\rangle. \label{Tfirstdefine} \end{equation}
Let $|\tau\rangle$ be a state orthogonal to $|\sigma\rangle$ in the two-dimensional subspace spanned by $U|0\rangle$ and $|\sigma\rangle$, such that up to an overall phase \begin{equation}
U|0\rangle=\cos\theta|\sigma\rangle+\sin\theta|\tau\rangle
=\cos\theta R_{t}^{\dagger}U|0\rangle+\sin\theta|\tau\rangle. \label{U0expansion} \end{equation}
For a general vector $|\psi(a,b)\rangle=a |\sigma\rangle+b |\tau\rangle$ in this subspace, we have \begin{equation}
\mathcal{T}|\psi(a,b)\rangle=UR_{0}^{-\varphi}U^{\dagger}R_{\sigma}^{\varphi}|\psi(a,b)\rangle = UR_{0}^{-\varphi}U^{\dagger}|\psi(a e^{i\varphi},b)\rangle. \label{subspacepreserve1} \end{equation}
As $UR_{0}^{-\varphi}U^{\dagger}|\psi\rangle=|\psi\rangle-zU|0\rangle$ with $z = (1-e^{-i\varphi})\langle 0|U^{\dagger}|\psi\rangle$, we have \begin{equation}
\mathcal{T}|\psi(a,b)\rangle=|\psi(a e^{i\varphi}-z\cos\theta,b-z\sin\theta)\rangle. \label{subspacepreserve2} \end{equation} Hence, $\mathcal{T}$ preserves this two-dimensional subspace. For any vector within this subspace, we can also write $R_{\sigma}^{\varphi}=e^{i\varphi}R_{\tau}^{-\varphi}$, and so $\mathcal{T}$ is equivalent to $UR_{0}^{-\varphi}U^{\dagger}R_{\tau}^{-\varphi}$ up to an overall phase, i.e. \begin{equation} \mathcal{T} \cong UR_{0}^{-\varphi}U^{\dagger}R_{\tau}^{-\varphi} \label{Tseconddefine} \end{equation}
The above operator is a special case of the generalized quantum amplitude amplification operator with $|\tau\rangle$ as the effective target state. It satisfies the phase matching condition by construction. One may worry that the phase matching condition is not satisfied in the form $\mathcal{T}=UR_{0}^{-\varphi}U^{\dagger}R_{\sigma}^{\varphi}$, since $\varphi \neq -\varphi$ in general. But the phase matching condition was derived assuming $\alpha_{U}\ll 1$, and it cannot be used with $|\sigma\rangle$ as the effective target state because $\langle\sigma|U|0\rangle=\langle 0|U^{\dagger}R_{t}U|0\rangle$ is close to $1$. That is why we converted $R_{\sigma}^{\varphi}$ to $R_{\tau}^{-\varphi}$.
Now applying $\mathcal{T}$ on the initial state $U|0\rangle$ rotates it towards the state $|\tau\rangle$ by an angle $2\theta\sin\frac{\varphi}{2}$~\cite{long3}. After $n$ iterations of $\mathcal{T}$, we get \begin{equation}
\mathcal{T}^{n}U|0\rangle=\cos\theta_{n}|\sigma\rangle+\sin\theta_{n}|\tau\rangle,~ \theta_{n}=\theta\left(1+2n\sin\frac{\varphi}{2}\right). \label{TnU0} \end{equation}
For $n=\lfloor\pi/(4\theta\sin\frac{\varphi}{2})\rfloor$, $\theta_{n}$ is close to $\pi/2$ and $\mathcal{T}^{n}U|0\rangle$ is close to $|\tau\rangle$. Further iterations of $\mathcal{T}$ rotate the state away from $|\tau\rangle$, displaying a cyclic motion in the two-dimensional subspace as in case of Grover's algorithm.
To understand the significance of the state $|\tau\rangle$, we use the expansions $U|0\rangle=\sum_{j}U_{j0}|j\rangle$ and $R_{t}^{\dagger}U|0\rangle = \sum_{j}U_{j0}e^{-i\phi_{j}\delta_{jt}}|j\rangle$ in Eq. (\ref{U0expansion}), and obtain \begin{equation}
|\tau\rangle = \frac{1}{\sin\theta}\sum_{j}U_{j0}\left(1-\cos\theta e^{-i\phi_{j}\delta_{jt}}\right)|j\rangle. \label{tauexpansion} \end{equation}
Here $\cos\theta =|\langle 0|U^{\dagger}R_{t}U|0\rangle| =\left|\sum_{j}|U_{j0}|^{2}e^{i\phi_{j}\delta_{jt}}\right|$, and since $\sum_{j\in T}|U_{j0}|^{2}=\alpha_{U}^{2}$, we have the bound $\cos\theta \geq 1-2\alpha_{U}^{2}$ or $|\theta| \leq 2\alpha_{U}$. Hence, $\langle j|\tau\rangle=U_{j0}O(\alpha_{U})$ for $j\not\in T$, and the projection of $|\tau\rangle$ on the non-target subspace is $\sqrt{\sum_{j\not\in T}|\langle j|\tau\rangle|^{2}}=O(\alpha_{U})$. This projection is very small, which makes $|\tau\rangle$ almost a state in the target subspace, $|\langle t|\tau\rangle|=1-O(\alpha_{U}^{2})\approx 1$.
The number of queries needed to reach the state $|\tau\rangle$ is twice the number of iterations of $\mathcal{T}$, as each iteration uses two queries. We therefore have $Q = \pi/(2\theta\sin\frac{\varphi}{2})$. The normalization condition for Eq. (\ref{tauexpansion}), $|\langle \tau|\tau\rangle|=1$, gives \begin{equation}
\sin^{2}\theta = (1-\cos\theta)^{2} + \sum_{j\in T}4|U_{j0}|^{2}\cos\theta\sin^{2}\frac{\phi_{j}}{2}. \end{equation}
For small $\theta$, this yields $\theta=\sqrt{\sum_{j\in T}4|U_{j0}|^{2}\sin^{2}\frac{\phi_{j}}{2}}$, and \begin{equation}
Q=\frac{\pi}{4\sin\frac{\varphi}{2}\sqrt{\sum_{j\in T}|U_{j0}|^{2}\sin^{2}\frac{\phi_{j}}{2}}}. \end{equation}
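These expressions are easy to test numerically. The sketch below (an illustration with the same hypothetical mismatched angles as before and a random unitary $U$) iterates $\mathcal{T}$ and tracks the target probability, which climbs close to one near the predicted iteration count $\lfloor\pi/(4\theta\sin\frac{\varphi}{2})\rfloor$, in sharp contrast to the behaviour of $\widetilde{\mathcal G}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, t = 256, 7                               # hypothetical search space, single target
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))[0]

phi, varphi = 0.5*np.pi, 0.9*np.pi          # mismatched rotation angles
R_t = np.ones(N, complex); R_t[t] = np.exp(1j*phi)
R_0 = np.ones(N, complex); R_0[0] = np.exp(1j*varphi)

def rot(d, v):                              # apply U diag(d) U^dagger to v
    return U @ (d * (U.conj().T @ v))

theta = 2*abs(U[t, 0])*np.sin(phi/2)        # rotation angle for a single target
n_pred = int(np.pi/(4*theta*np.sin(varphi/2)))   # each iteration uses two queries

psi, probs = U[:, 0].copy(), []
for _ in range(2*n_pred):
    psi = rot(R_0, R_t*psi)                 # U R_0^{varphi} U^dag R_t
    psi = rot(R_0.conj(), R_t.conj()*psi)   # U R_0^{-varphi} U^dag R_t^dag
    probs.append(abs(psi[t])**2)

print("predicted number of iterations: %d" % n_pred)
print("best target probability %.3f at iteration %d" % (max(probs), np.argmax(probs)+1))
\end{verbatim}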
For later reference, we point out that the state $|\tau\rangle$ is close to a target state only because $\theta=O(\alpha_{U})$. That is true for the $R_{t}$ transformations which act only within the target subspace, but may not be true for general selective transformations $S_{t}$ which also perturb non-target states. More generally, Eq. (\ref{tauexpansion}) provides $|\langle t|\tau\rangle|=O(\frac{\alpha_{U}}{\theta})$, and the iterative algorithm can amplify the projection on the target subspace by a maximum factor of $1/\theta$. That may be too small for a general selective transformation to reach a target state. In section III, we use the idea of recursion to overcome this limitation.
\subsection{Comparison with Grover's algorithm}
When $\{S_{t},S_{0}\}=\{I_{t},I_{0}\}$, i.e. when $\varphi=\phi_{j}=\pi$, the operator $\mathcal{T}$ is simply $2$ steps of the quantum amplitude amplification algorithm. To demonstrate the difference between $\mathcal{T}$ and $\widetilde{\mathcal G}^{2}$ for general $\{S_{t},S_{0}\}$, consider the situation where $S_{t}=R_{t}^{\phi}$, i.e. rotation angle $\phi$ is the same for all target states. Then $\mathcal{T}=UR_{0}^{-\varphi}U^{\dagger}R_{t}^{-\phi}UR_{0}^{\varphi}U^{\dagger}R_{t}^{\phi}$ while $\widetilde{\mathcal G}^{2}=UR_{0}^{\varphi}U^{\dagger}R_{t}^{\phi}UR_{0}^{\varphi}U^{\dagger}R_{t}^{\phi}$. The quantum amplitude amplification algorithm succeeds in this case only if the phase-matching condition is satisfied, $|\phi-\varphi|\ll \alpha_{U}$~\cite{long1}. On the other hand, there is no such restriction on our algorithm which succeeds using $\pi/(4\alpha_{U}\sin\frac{\varphi}{2}\sin\frac{\phi}{2})$ queries. As Grover's optimal algorithm takes $\pi/(4\alpha_{U})$ queries, the slowdown is only by the constant factor $1/(\sin\frac{\varphi}{2}\sin\frac{\phi}{2})$. As long as $\phi,\varphi$ are not very small, not much is lost, and hence almost any selective transformation can be used for quantum search.
This particular case has been experimentally verified on an NMR quantum information processor~\cite{avik}, which compares the performances of Grover's and our algorithm for small $\alpha_{U}$ and $\phi\neq\varphi$. The experimental data confirms the theoretical prediction that our algorithm succeeds in getting the target state while Grover's algorithm does not.
In a more general case, the rotation angle $\phi_{j}$ can be different for different target states and $S_{t}=R_{t}=\sum_{j}e^{i\phi_{j}\delta_{jt}}|j\rangle \langle j|$. It is shown in an upcoming paper~\cite{tulsi} that in this case, iterating the operator $\widetilde{\mathcal{G}} = UR_{0}^{\varphi}U^{\dagger}R_{t}$ amplifies only those target states which satisfy the phase-matching condition i.e. for which $|\phi_{j}-\varphi|\ll |U_{j0}|$. (If there are no such target states then iterating $\widetilde{\mathcal{G}}$ will not succeed in getting a target state.) The target state is obtained after $O(1/\alpha_{U}')$ iterations where $\alpha_{U}' = \sqrt{\sum_{j:|\phi_{j}-\varphi|\ll |U_{j0}|}|U_{j0}|^{2}} \leq \alpha_{U}$, and the algorithm suffers a slowdown by a factor $\alpha_{U}/\alpha_{U}'$. There is no such restriction on our algorithm and the full amplitude along the target states can be utilised, irrespective of any phase-matching condition.
Quantum amplitude amplification is often described as a rotation in the two-dimensional space spanned by the initial state $U|0\rangle$ and the target state $|t\rangle$. Here, we have provided a new insight suggesting that it is better to interpret quantum search as a rotation in the two-dimensional space spanned by the initial state $U|0\rangle$ and the oracle-modified initial state $R_{t}^{\dagger}U|0\rangle$. $\mathcal{T}$ is then the fundamental unit of quantum search rather than $\widetilde{\mathcal G}$. The advantage of the operator $\mathcal{T}$ is that it uses the selective transformations and their inverses in such a way that the phase-matching condition is \emph{effectively} satisfied to produce a successful quantum search.
\subsection{New search Hamiltonian}
Grover's algorithm is a digital algorithm in the sense that it uses a discrete set of unitary operators and applies them sequentially on the initial state to reach the target state. Farhi and Gutmann developed an analog version of the algorithm~\cite{farhi}, which shows that any initial state, when evolved under a particular search Hamiltonian for a certain amount of time, will evolve to the target state. Their search Hamiltonian is given by $\mathcal{H}_{FG}=\mathcal{H}_{U|0\rangle}+\mathcal{H}_{|t\rangle}$, where $\mathcal{H}_{U|0\rangle}=I-U|0\rangle \langle 0|U^{\dagger}$ and $\mathcal{H}_{|t\rangle}=I-|t\rangle \langle t|$ are projector Hamiltonians. More general search Hamiltonians have been presented subsequently by Fenner~\cite{fenner}, and by Bae and Kwon~\cite{baekwon}.
The algorithm developed above suggests a new search Hamiltonian \begin{equation}
\mathcal{H}_{\rm new}=\mathcal{H}_{U|0\rangle}+\mathcal{H}_{R_{t}^{\dagger}U|0\rangle}=\mathcal{H}_{U|0\rangle}+R_{t}^{\dagger}\mathcal{H}_{U|0\rangle}R_{t}. \end{equation}
The second term of $\mathcal{H}_{\rm new}$ is just the first term but in a basis rotated by the oracle transformation $R_{t}$. $\mathcal{H}_{\rm new}$ can be analysed the same way as was done by Farhi and Gutmann, in the two-dimensional subspace spanned by $U|0\rangle$ and $R_{t}^{\dagger}U|0\rangle$. When evolved using $\mathcal{H}_{\rm new}$ for a certain amount of time, the initial state becomes the state $|\tau\rangle$, which is very close to a target state as shown.
$\mathcal{H}_{\rm new}$ has certain physical implementation advantages over $\mathcal{H}_{FG}$. Consider the situation when implementation errors perturb $\mathcal{H}_{FG}$ to $(1-s)\mathcal{H}_{U|0\rangle}+(1+s)\mathcal{H}_{|t\rangle}$, i.e. one term is enhanced while the other gets reduced. Analysing this perturbed Hamiltonian as is done in Ref.~\cite{roland}, it is easy to see that one reaches a target state only if $|s|<O(\alpha_{U})$. This is analogous to the phase-matching condition, and as $\alpha_{U}\ll 1$, it is a strict condition. There is no such restriction, however, on the new search Hamiltonian as it is the sum of the same term in two different bases. For example, calibration errors remain \emph{effectively} the same for both terms, making $\mathcal{H}_{\rm new}$ robust with $s=0$.
\section{Recursive algorithm}
In this section, we consider those diagonal selective transformations $\{S_{t},S_{0}\}$ which may also perturb the non-desired states, unlike the transformations $\{R_{t},R_{0}\}$ discussed in the previous section, which leave them unperturbed. We assume the perturbations to be small, i.e. $\|S_{t}-I_{t}\|=\Delta_{t}$ and $\|S_{0}-I_{0}\|=\Delta_{0}$ where $\Delta_{t}$ and $\Delta_{0}$ are small. More explicitly, \begin{eqnarray}
S_{t}=\sum_{j}e^{i\phi_{j}}|j\rangle \langle j|\ ,\ \phi_{j}=\pi\delta_{jt}+\epsilon_{j}\ ,\ |\epsilon_{j}|\leq \Delta_{t}; \nonumber \\
S_{0}=\sum_{j}e^{i\varphi_{j}}|j\rangle \langle j|\ ,\ \varphi_{j}=\pi\delta_{j0}+\mu_{j}\ ,\ |\mu_{j}|\leq \Delta_{0}. \label{S0Stexpansion} \end{eqnarray}
For such transformations, the iteration of the operator $US_{0}U^{\dagger}S_{t}$ on the initial state $U|0\rangle$ may not give us a target state. As $O(1/\alpha_{U})$ iterations of $UI_{0}U^{\dagger}I_{t}$ on $U|0\rangle$ give us a target state, it is easy to see that as long as $\{\Delta_{t},\Delta_{0}\}<O(\alpha_{U})$, iterating the operator $US_{0}U^{\dagger}S_{t}$ on $U|0\rangle$ will also bring us close to a target state. But when $\{\Delta_{t},\Delta_{0}\}\geq O(\alpha_{U})$, the phase-matching condition required for a successful quantum search may not be satisfied. Another way to see this is to analyse the eigenspectrum of $US_{0}U^{\dagger}S_{t}$. Its distance from $UI_{0}U^{\dagger}I_{t}$ is $O(\Delta_{t},\Delta_{0})$. The two eigenvalues of $UI_{0}U^{\dagger}I_{t}$ relevant for quantum search are separated by $O(\alpha_{U})$. A perturbation greater than $O(\alpha_{U})$ will in general shift them too much to maintain a successful quantum search. Note again that we are considering $\alpha_{U}\ll 1$, which makes the iterative quantum amplitude amplification very sensitive to small errors.
The recursive quantum search algorithm is defined, at the $m^{th}$ level, by the relation \begin{equation}
U_{m}|0\rangle = U_{m-1}S_{0}U_{m-1}^{\dagger}S_{t}U_{m-1}|0\rangle, \end{equation}
with $U_{0}\equiv U$. At the first level, $U_{1}|0\rangle=(US_{0}U^{\dagger}S_{t})U|0\rangle$ is a simple generalization of the quantum amplitude amplification step $\overline{\mathcal G}U|0\rangle$. But at higher levels, the operators $U_{m}$ involve $\{S_{t},S_{0}\}$ as well as $\{S_{t}^{\dagger},S_{0}^{\dagger}\}$, and cannot be expressed using repeated iterations of a single operator like $\overline{\mathcal G}$. (For instance, $U_{2} = U_{1}S_{0}U_{1}^{\dagger}S_{t}U_{1} = US_{0}U^{\dagger}S_{t}US_{0}U^{\dagger}S_{t}^{\dagger}US_{0}^{\dagger}U^{\dagger}S_{t}US_{0}U^{\dagger}S_{t}U$ involves operators $S$ and $S^{\dagger}$ in a non-periodic pattern.) The idea of recursive quantum search is not new. It has been used by Hoyer {\it et al.}~\cite{hoyerbound} and by Grover~\cite{private} for specific error models, as discussed in the next section. What is new here is the demonstration that recursion works even for general errors.
The number of queries used at the $m^{th}$ level of recursion is determined by the relation $q_{m}=3q_{m-1}+1$, since $q_{m-1}$ is the number of queries used by $U_{m-1}$ and $S_{t}$ needs one extra query. Using the fact that $q_{0}=0$ (as implementing $U$ does not need any query), we get \begin{equation} q_{m}=\frac{3^{m}-1}{2}=\Theta(3^{m}). \end{equation} The recursive algorithm increases the number of queries in a geometric progression with the level number, a factor of $3$ in the present case. On the other hand, the iterative algorithm increases the number of queries in an arithmetic progression with the iteration number, a step of $2$ in the algorithm of the previous section. We will see that the larger jumps in the allowed number of queries for the recursive algorithm are not a major disadvantage, because the total number of queries needed to obtain the target state remains about the same. (The worst case overhead is a tolerable factor of $3$ in the number of queries.)
At the first level, the initial state $U|0\rangle$ evolves to $U_{1}|0\rangle$, whose projection on the target subspace is $\alpha_{U_{1}}=\sqrt{\sum_{j\in T}|(U_{1})_{j0}|^{2}}$. In recursive quantum search, what matters is the amplification factor \begin{equation}
\kappa =\frac{\alpha_{U_{1}}}{\alpha_{U}} = \sqrt{\frac{\sum_{j\in T}|(U_{1})_{j0}|^{2}}{\sum_{j\in T}|U_{j0}|^{2}}}, \label{kappaequation} \end{equation} and the target state can be obtained using $O(1/\alpha_{U}^{\log_{\kappa}3})$ queries. To get the nearly optimal algorithm, the amplification factor $\kappa$ should be as close to $3$ (the number of $U$-type operators used by $U_{1}$) as possible. We will show that for small $\{\Delta_{t},\Delta_{0}\}$, $\kappa$ is indeed close to $3$, and the performance of recursive algorithm is close to the optimal algorithm that takes $O(1/\alpha_{U})$ queries.
We estimate $\kappa$ by estimating the ratio $\rho_{j} = |(U_{1})_{j0}/U_{j0}|$ for $j\in T$. In terms of $\rho_{j}$, we have \begin{equation}
\kappa = \sqrt{\frac{\sum_{j\in T}\rho_{j}^{2}|U_{j0}|^{2}}{\sum_{j\in T}|U_{j0}|^{2}}}. \label{kapparho} \end{equation} Clearly if $\rho_{j}$ is close to $3$ for each $j\in T$, then $\kappa$ is also close to $3$. To find $\rho_{j}$, let \begin{equation}
|\psi\rangle = S_{t}U|0\rangle = \sum_{j}U_{j0}e^{i\phi_{j}}|j\rangle, \label{psidefinition} \end{equation}
so that $U_{1}|0\rangle = US_{0}U^{\dagger}|\psi\rangle$. We decompose $S_{0}$ as $S_{0}=S_{0}'\cdot R_{0}^{\varphi_{0}}$, where $S_{0}'=|0\rangle\langle 0|+\sum_{j\neq 0}e^{i\varphi_{j}}|j\rangle \langle j|$ leaves the $|0\rangle$ state unchanged but acts like $S_{0}$ on all the other states, and $R_{0}^{\varphi_{0}}$ is a selective phase-rotation of the $|0\rangle$ state. We have $U_{1}|0\rangle=US_{0}'U^{\dagger}UR_{0}^{\varphi_{0}}U^{\dagger}|\psi\rangle$. With $UR_{0}^{\varphi_{0}}U^{\dagger}|\psi\rangle =|\psi\rangle-(1-e^{i\varphi_{0}})\langle 0|U^{\dagger}|\psi\rangle U|0\rangle$ and $1-e^{i\varphi_{0}}= 2e^{i\mu_{0}/2}\cos\frac{\mu_{0}}{2}$, we get \begin{equation}
U_{1}|0\rangle = US_{0}'U^{\dagger}|\psi'\rangle, \label{U10} \end{equation} where \begin{equation}
|\psi'\rangle=\sum_{j}U_{j0}\left(e^{i\phi_{j}}-2e^{i\mu_{0}/2}\cos\frac{\mu_{0}}{2}\beta\right)|j\rangle. \label{psiprimefirst} \end{equation}
Here $\beta = \langle 0|U^{\dagger}|\psi\rangle=\sum_{j}|U_{j0}|^{2}e^{i\phi_{j}}$. As $\phi_{j}=\pi\delta_{jt}+\epsilon_{j}$, the bound $|\epsilon_{j}|\leq \Delta_{t}$ gives \begin{equation}
(1- Re(\beta)) \leq 0.5\Delta_{t}^{2}+2\alpha_{U}^{2}\ ,\ |Im(\beta)|\leq \Delta_{t}. \label{betanonprimebound} \end{equation}
Since $\Delta_{t}^{2}$ and $\alpha_{U}^{2}$ are small, we can write $\beta=|\beta|e^{i\xi}$, where $|\xi| \leq \Delta_{t}$. Then \begin{equation}
|\psi'\rangle = \sum_{j}U_{j0}e^{i\phi_{j}}\left[1-2(-1)^{\delta_{jt}}\beta'e^{i\xi_{j}'}\right]|j\rangle, \label{psiprimesecond} \end{equation}
where $\beta' = \cos\frac{\mu_{0}}{2}|\beta|$ and $\xi_{j}' = \xi-\epsilon_{j}+\frac{\mu_{0}}{2}$. The bounds on $\beta,\xi,\mu_{0}$ and $\epsilon_{j}$ give \begin{eqnarray} (1-\beta')&\leq& 0.5\Delta_{t}^{2}+0.125\Delta_{0}^{2}+2\alpha_{U}^{2}, \nonumber \\
|\xi_{j}'| &\leq &2\Delta_{t}+0.5\Delta_{0}. \label{betaxiprimebound} \end{eqnarray}
Using Eq. (\ref{psiprimesecond}), we get $|\langle j|\psi'\rangle /U_{j0}|_{j\in T}=|1+2\beta'e^{i\xi_{j}'}|$. The bounds on $\beta'$ and $\xi_{j}'$ then yield \begin{equation}
\left(3 - \left|\frac{\langle j|\psi'\rangle}{U_{j0}}\right|\right)_{j\in T} \leq \frac{7}{3}\Delta_{t}^{2}+\frac{2}{3}\Delta_{t}\Delta_{0}+\frac{1}{3}\Delta_{0}^{2}+4\alpha_{U}^{2}. \label{ratiobound} \end{equation}
\textbf{Special Case:} Consider the situation $S_{0}=R_{0}^{\varphi_{0}}$, i.e. $S_{0}'=I$. In this case, $U_{1}|0\rangle =|\psi'\rangle$, and we have $\rho_{j} = |\langle j|\psi'\rangle/U_{j0}|$ which obeys the bound (\ref{ratiobound}) for $j\in T$. Using Eq. (\ref{kapparho}), we get \begin{equation} (3-\kappa) \leq \frac{7}{3}\Delta_{t}^{2}+\frac{2}{3}\Delta_{t}\Delta_{0}+\frac{1}{3}\Delta_{0}^{2}+4\alpha_{U}^{2}. \end{equation} Thus the projection on the target subspace is amplified by a factor close to $3$ as $\Delta_{t},\Delta_{0}$ and $\alpha_{U}$ are small quantities. The main idea behind recursion is to note that the above analysis holds for any unitary operator $U$, and hence it also holds for $U_{1}$ which is a unitary operator. Therefore, $U_{2}=U_{1}S_{0}U_{1}^{\dagger}S_{t}U_{1}$ will obey \begin{equation} (3-\kappa_{2}) = \left(3-\frac{\alpha_{U_{2}}}{\alpha_{U_{1}}}\right)\leq \frac{7}{3}\Delta_{t}^{2}+\frac{2}{3}\Delta_{t}\Delta_{0}+\frac{1}{3}\Delta_{0}^{2}+4\alpha_{U_{1}}^{2}, \end{equation}
where $\alpha_{U_{2}} = \sqrt{\sum_{j\in T}|\langle j|U_{2}|0\rangle|^{2}}$. Thus the projection on the target subspace is amplified again by a factor close to $3$, making the total amplification close to $3^{2}=9$. Continuing the process, the $m^{th}$ level of recursion gives $\alpha_{U_{m}}=\prod_{l=1}^{m}\kappa_{l}\alpha_{U}$, where $(3-\kappa_{l})\leq \frac{7}{3}\Delta_{t}^{2}+\frac{2}{3}\Delta_{t}\Delta_{0}+\frac{1}{3}\Delta_{0}^{2}+4\alpha_{U_{l-1}}^{2}$. As long as $\alpha_{U_{m}}^{2}\ll 1$, the complete amplification factor obeys $3^{m}\geq\prod_{l=1}^{m}\kappa_{l}\geq \overline{\kappa}^{m}$, where \begin{equation} \overline{\kappa} \approx 3-\frac{7}{3}\Delta_{t}^{2}-\frac{2}{3}\Delta_{t}\Delta_{0}-\frac{1}{3}\Delta_{0}^{2}. \end{equation}
This analysis shows that $m$ levels of recursion can be used for amplifying the projection on target subspace to at least $\alpha_{U_{m}}=O(\overline{\kappa}^{m}\alpha_{U})$. We can always choose $m$ such that the condition $\alpha_{U_{m}}^{2}=c \ll 1$ is satisfied, and then repeat the algorithm $c^{-1}$ times to get a target state. The number of queries required by the algorithm to get a target state is, therefore, at most $q_{m}=O(3^{\log_{\overline{\kappa}}(1/\alpha_{U})})=O(1/\alpha_{U}^{\log_{\overline{\kappa}}3})$. In other words, the query complexity of the algorithm is $O(\alpha_{U}^{-(1+p)})$, with \begin{equation} 0\leq p=\left(\frac{\log 3}{\log\overline{\kappa}}-1\right)\leq 0.71\Delta_{t}^{2}+0.20\Delta_{t}\Delta_{0}+0.10\Delta_{0}^{2}. \end{equation}
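A small numerical experiment illustrates this special case. In the sketch below (an illustration; $U$ is taken to be the unitary discrete Fourier transform, so that $|U_{j0}|=1/\sqrt{N}$, and the error sizes are hypothetical), $U_{m}|0\rangle$ is built recursively from noisy $\{S_{t},S_{0}\}$ and their inverses. The per-level amplification of the target amplitude stays close to $3$ while $\alpha_{U_{m}}$ remains small, and the oracle-call counter reproduces $q_{m}=(3^{m}-1)/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, t = 2**14, 123              # U = unitary DFT, so alpha_U = 1/sqrt(N); target t is hypothetical
Delta = 0.2                    # size of the phase errors (radians)

eps = rng.uniform(-Delta, Delta, N)
S_t = np.exp(1j*(np.pi*(np.arange(N) == t) + eps))       # noisy selective "inversion" of |t>
S_0 = np.where(np.arange(N) == 0,
               np.exp(1j*(np.pi + rng.uniform(-Delta, Delta))), 1.0)  # special case S_0 = R_0

U  = lambda v: np.fft.fft(v)  / np.sqrt(N)
Ud = lambda v: np.fft.ifft(v) * np.sqrt(N)
queries = 0

def apply(m, v):               # U_m v = U_{m-1} S_0 U_{m-1}^dag S_t U_{m-1} v
    global queries
    if m == 0: return U(v)
    v = apply(m-1, v); queries += 1; v = S_t*v
    v = apply_dag(m-1, v);           v = S_0*v
    return apply(m-1, v)

def apply_dag(m, v):           # U_m^dagger v
    global queries
    if m == 0: return Ud(v)
    v = apply_dag(m-1, v);           v = S_0.conj()*v
    v = apply(m-1, v); queries += 1; v = S_t.conj()*v
    return apply_dag(m-1, v)

e0 = np.zeros(N, complex); e0[0] = 1.0
alpha_prev = 1/np.sqrt(N)
for m in range(1, 5):
    queries = 0
    alpha_m = abs(apply(m, e0)[t])
    print("level %d: alpha = %.4f, amplification %.2f, queries %d (expected %d)"
          % (m, alpha_m, alpha_m/alpha_prev, queries, (3**m - 1)//2))
    alpha_prev = alpha_m
\end{verbatim}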
\textbf{General Case}: For more general $S_{0}$ transformations, the state $U_{1}|0\rangle = US_{0}'U^{\dagger}|\psi'\rangle$ is not equal to $|\psi'\rangle$. $S_{0}'$ is close to identity, however, and $\|US_{0}'U^{\dagger}-I\|=\|S_{0}'-I\|=\Delta_{0}$. Up to a phase factor, we have \begin{equation}
\langle j|US_{0}'U^{\dagger}=\cos\gamma_{j}\langle j|+\sin\gamma_{j}\langle x_{j}|, \label{xrangledefine} \end{equation}
where $|x_{j}\rangle$ is a normalized vector orthogonal to $|j\rangle$. As $\|US_{0}'U^{\dagger}-I\|=\Delta_{0}$, we have the bound $|\gamma_{j}|\leq \Delta_{0}$ so that $\sin\gamma_{j} \approx \gamma_{j}$. Now \begin{equation}
(U_{1})_{j0} = \langle j|US_{0}'U^{\dagger}|\psi'\rangle = \cos\gamma_{j}\langle j|\psi'\rangle+\gamma_{j}\langle x_{j}|\psi'\rangle.
Using Eq. (\ref{psiprimesecond}) for $|\psi'\rangle$, we find the ratio $\rho_{j}$ to be \begin{equation}
\rho_{j\in T} = \left|\frac{(U_{1})_{j0}}{U_{j0}}\right|_{j\in T} =\left|\cos\gamma_{j}e^{i\phi_{j}}(1+2\beta'e^{i\xi_{j}'})+\gamma_{j}\frac{\langle x_{j}|\psi'\rangle}{U_{j0}}\right| . \label{kappageneralfirst} \end{equation}
As $|\psi'\rangle = UR_{0}^{\varphi_{0}}U^{\dagger}|\psi\rangle$, we have $|\langle\psi'|U|0\rangle| = |\langle\psi|U|0\rangle| = |\beta|$. Hence, up to a phase factor, \begin{equation}
|\psi'\rangle = \beta U|0\rangle + \overline{\beta}|y\rangle\ ,\ \overline{\beta} =\sqrt{1-|\beta|^{2}}, \end{equation}
where $|y\rangle$ is a normalised vector orthogonal to $U|0\rangle$. The bound on $\beta$ (\ref{betanonprimebound}) implies $\overline{\beta} \leq \sqrt{\Delta_{t}^{2}+4\alpha_{U}^{2}}$. Eq. (\ref{kappageneralfirst}) then reduces to \begin{eqnarray}
\rho_{j\in T} &=&|C_{1j}+C_{2j}+C_{3j}|, \nonumber \\ C_{1j}&=& \cos\gamma_{j}e^{i\phi_{j}}(1+2\beta'e^{i\xi_{j}'}), \nonumber \\
C_{2j}&=& \gamma_{j} \beta \frac{\langle x_{j}|U|0\rangle}{U_{j0}}, \nonumber \\
C_{3j}&=&\gamma_{j} \overline{\beta} \frac{\langle x_{j}|y\rangle}{U_{j0}}. \label{kappageneralsecond} \end{eqnarray}
Since $\cos\gamma_{j} = 1-O(\Delta_{0}^{2})$ and $1+2\beta'e^{i\xi_{j}'} = 3-O(\Delta_{t}^{2},\Delta_{0}^{2},\Delta_{t}\Delta_{0})$ (as proved earlier), we have $|C_{1j}| \approx 3$ for small $\{\Delta_{t},\Delta_{0}\}$. Using the definition (\ref{xrangledefine}) of $\langle x_{j}|$ and the bound $\gamma_{j} \leq \Delta_{0}$, \begin{equation}
\langle x_{j}|U|0\rangle = U_{j0}\frac{1-\cos\gamma_{j}}{\gamma_{j}} = U_{j0}O(\Delta_{0}), \end{equation}
which makes $C_{2j} = O(\Delta_{0}^{2})$ and $|C_{1j}+C_{2j}| = 3-O(\Delta_{t}^{2},\Delta_{0}^{2},\Delta_{t}\Delta_{0})$. The ratio $\rho_{j\in T}$ will then be close to $3$ iff \begin{equation}
|C_{3j}| = \gamma_{j} \overline{\beta} \left|\frac{\langle x_{j}|y\rangle}{U_{j0}}\right| \ll 3. \end{equation}
By their definitions, the vectors $|x_{j}\rangle$ and $|y\rangle$ depend upon the eigenvalues of $S_{0}$ and $S_{t}$ respectively. In most cases, the eigenvalues of these two different operators are uncorrelated (in case they are correlated, we need to randomize one of them by random operations), and hence $|x\rangle$ and $|y\rangle$ are two relatively random unit vectors in the $N$-dimensional Hilbert space. So the expectation value of their inner product $|\langle x|y\rangle|$ is $1/\sqrt{N}$, and the above condition translates to \begin{equation}
\frac{\gamma_{j}\overline{\beta}}{\sqrt{N}} \ll 3|U_{j0}|, \label{recursivecondition} \end{equation} As long as this condition is satisfied for all $j\in T$, the ratio $\rho_{j\in T}$ and the amplification factor $\kappa$ are close to $3$. More precisely, \begin{equation} \kappa = 3-O(\Delta_{t}^{2},\Delta_{0}^{2},\Delta_{t}\Delta_{0}). \end{equation} If this condition is not satisfied for a particular target state $j$, then the amplitude along it will not be amplified by the recursive algorithm as if it were a non-target state.
The condition (\ref{recursivecondition}) is only a sufficient, not necessary, condition for $\kappa$ to be close to $3$. If it is satisfied for the first level of recursion then it is automatically satisfied for higher levels as $|U_{j0}|<|(U_{l})_{j0}|$ for any $l$. Also, even if this condition is not satisfied then amplification may still be possible by a factor greater than $1$, but not close to $3$. Note that if $\gamma_{j}$ or $\overline{\beta}$ is $O(U_{j0})$ then the condition is satisfied. It can be shown that this is the case when either of $S_{t}$ or $S_{0}$ becomes a selective phase-rotation $R_{t}$ or $R_{0}$ (the special case discussed earlier corresponds to $S_{0}=R_{0}$). Also, the condition is always satisfied for $U=W$ as $W_{j0} = 1/\sqrt{N}$ and $\gamma_{j}\overline{\beta} \leq \Delta_{0}\sqrt{\Delta_{t}^{2}+4\alpha_{U}^{2}}\ll 1$.
\subsection{Comparison with Grover's algorithm}
When $\{S_{t},S_{0}\}=\{I_{t},I_{0}\}$, the recursive algorithm reduces to the iterative Grover's algorithm and the optimal query complexity of $O(1/\alpha_{U})$ is achieved. The state at $m^{th}$ level of recursion $U_{m}|0\rangle$ is nothing but $(3^{m}-1)/2$ applications of $UI_{0}U^{\dagger}I_{t}$ on the initial state $U|0\rangle$. Explicitly, with $I_{t}^{\dagger}=I_{t}$ and $I_{0}^{\dagger}=I_{0}$, \begin{eqnarray*} U_{m+1} &=&(UI_{0}U^{\dagger}I_{t})^{q_{m}}UI_{0}U^{\dagger}(I_{t}^{\dagger}UI_{0}^{\dagger}U^{\dagger})^{q_{m}}I_{t}(UI_{0}U^{\dagger}I_{t})^{q_{m}}U \\
&=&(UI_{0}U^{\dagger}I_{t})^{q_{m}}UI_{0}U^{\dagger}(I_{t}UI_{0}U^{\dagger})^{q_{m}}I_{t}(UI_{0}U^{\dagger}I_{t})^{q_{m}}U \\
&=& (UI_{0}U^{\dagger}I_{t})^{3q_{m}+1}U. \end{eqnarray*} With $q_{0}=0$, $U_{m}=(UI_{0}U^{\dagger}I_{t})^{(3^{m}-1)/2}U$ is just quantum amplitude amplification, except for the jumps in the number of queries.
In recursive quantum search, we are interested in the amplification factor $\kappa = \alpha_{U_{1}}/\alpha_{U}$ of the projection on the target subspace, achieved by applying $US_{0}U^{\dagger}S_{t}$ to $U|0\rangle$. The detailed eigenspectrum of $US_{0}U^{\dagger}S_{t}$ is not of much relevance, since what matters is only one (rather than multiple) application of $US_{0}U^{\dagger}S_{t}$. In general, the state $US_{0}U^{\dagger}S_{t}U|0\rangle= UI_{0}U^{\dagger}I_{t}U|0\rangle + |\Delta\rangle$, where $|\Delta\rangle$ has norm $O(\Delta_{t},\Delta_{0})$. $\kappa$ is certainly close to $3$ when $\{\Delta_{t},\Delta_{0}\}\ll O(\alpha_{U})$. What we have shown above is that even when $\{\Delta_{t},\Delta_{0}\} \not\ll O(\alpha_{U})$, $\kappa$ can be close to $3$. That is because what matters for $\kappa$ is not the norm of $|\Delta\rangle$ but its projection on the target subspace, which can be small compared to $\alpha_{U}$ even when $\{\Delta_{t},\Delta_{0}\}\not\ll O(\alpha_{U})$.
The recursive algorithm needs $O(1/\alpha_{U}^{1+O(\Delta^{2})})$ queries, with $\Delta = O(\Delta_{t},\Delta_{0})$ characterizing the size of the errors. The increase in query complexity, due to nonzero $\Delta$, is only a constant factor provided $\Delta = O(\sqrt{-1/\log\alpha_{U}})$. This is a much better performance than the quantum amplitude amplification algorithm, which needs $\Delta = O(\alpha_{U})$ for success. Furthermore, the recursive algorithm can succeed even for larger $\Delta$ at the cost of more queries.
\section{Discussion}
Finally we consider the situation when $\{I_{0},I_{t}\}$ are replaced by non-diagonal operators $\{P,Q\}$. The iterative algorithm then evaluates $(UP^{\dagger}U^{\dagger}Q^{\dagger}UPU^{\dagger}Q)^{n} U|0\rangle$. Using diagonal decompositions of $\{P,Q\}$, i.e. $P=E_{P}S_{0}E_{P}^{\dagger}$ and $Q=E_{Q}S_{t}E_{Q}^{\dagger}$ with $S_{0}$ and $S_{t}$ diagonal, that becomes $E_{Q}(VS_{0}^{\dagger}V^{\dagger}S_{t}^{\dagger}VS_{0}V^{\dagger}S_{t})^{n} VE_{P}^{\dagger}|0\rangle$, where $V=E_{Q}^{\dagger}UE_{P}$. The algorithm therefore converges to the target state in $O(1/\alpha_{V})$ steps, provided $\{S_{t},S_{0}\}$ satisfy conditions for successful quantum search and $(E_{P})_{00},(E_{Q})_{tt}$ are close to $1$. The condition $(E_{P})_{00},(E_{Q})_{tt} \approx 1$ is important for any search algorithm, because only then we can rightfully call the transformations \emph{selective}, performing an operation on the intended state and leaving the other states alone. Thus, as long as $V_{t0}\not\ll U_{t0}$, there is no significant slowdown in quantum search.
Similarly, the recursive algorithm evaluates $U_{m}|0\rangle = E_{Q}V_{m}E_{P}^{\dagger}|0\rangle$ at the $m^{th}$ level, with \begin{equation} V_{m} = V_{m-1}S_{0}V_{m-1}^{\dagger}S_{t}V_{m-1}. \end{equation} As before, the algorithm succeeds, provided $\{S_{t},S_{0}\}$ satisfy conditions for successful quantum search and $(E_{P})_{00},(E_{Q})_{tt}$ are close to $1$.
Next we point out a few applications of our algorithms.
\textbf{(1) Correction of Certain Systematic Errors:} Quantum amplitude amplification is a repetitive application of the operator $\overline{\mathcal{G}}=UI_{0}U^{\dagger}I_{t}$. Small errors in $\overline{\mathcal{G}}$ may accumulate over iterations to produce a large deviation at the end, causing the algorithm to fail. Completely random errors have to be protected against, using the techniques of quantum error correction and fault-tolerant quantum computation~\cite{qec}. That adds redundancy to the quantum states and gates, i.e. extra resources, to overcome small errors. For errors exhibiting specific structures, however, it is worthwhile to investigate whether the dependence on quantum error-correction can be reduced by designing quantum algorithms that are intrinsically robust to these errors.
In this paper, we have studied a particular class of systematic errors, those that are perfectly \emph{reproducible} and \emph{reversible}. For an imperfect apparatus in this category, we have presented two algorithms that exploit the structure of the errors and succeed in quantum search while the standard quantum search fails. These types of errors are not uncommon, e.g. errors arising from imperfect pulse calibration and offset effects in NMR systems~\cite{avik}. Thus our algorithms offer a significant flexibility in the physical implementation of quantum search.
\textbf{(2) Handling Errors in Workspace:}
The $I_{t}$ transformation used in quantum search is implemented using an oracle. A typical implementation uses an ancilla qubit initialized to the $\frac{|0\rangle-|1\rangle}{\sqrt{2}}$ state, and a C-NOT gate applied to it, controlled by the Boolean function $f(j)$. In general, $f(j)$ has to be computed using the techniques of reversible computation, and has to be uncomputed afterwards to ensure reversibility. Inevitably, we need to couple our search-space to an ancilla workspace to implement $I_{t}$, and the two get entangled. For a perfect algorithm, the workspace returns to its initial state at the end of the algorithm, and the search-space and the workspace get disentangled. But when there are errors, the workspace may not exactly return to its initial state, leaving some entanglement between the search-space and the workspace at the end. That deteriorates the performance of quantum search, and our algorithms come to the rescue in such cases.
Let $\widehat{\mathcal{H}}=\mathcal{H}_{s}\otimes \mathcal{H}_{w}$ be the joint Hilbert space of the search-space and the workspace. The perfect oracle is $I_{t}= \sum_{j}\left(|j\rangle\langle j|_{f(j)=1}\otimes (-I)+|j\rangle\langle j|_{f(j)=0}\otimes I\right)$. In case of imperfect oracles, it may become $Q= \sum_{j}\left(|j\rangle\langle j|_{f(j)=1}\otimes A+|j\rangle\langle j|_{f(j)=0}\otimes B\right)$, where $A,B$ are unitary operators. First consider the case $B=I$, i.e. the workspace remains unaltered for $f(j)=0$. With the diagonal decomposition $A=E_{Q}S_{t}E_{Q}^{\dagger}$, we have $Q= \sum_{j}\big(|j\rangle\langle j|_{f(j)=1}\otimes E_{Q}S_{t}E_{Q}^{\dagger} +|j\rangle\langle j|_{f(j)=0}\otimes I\big)$. That is equivalent to $R_{t}$ of section II, performing a selective phase-rotation by $\phi_{k}$ of the effective target state $|j\rangle_{f(j)=1}\otimes |E_{Q}(\phi_{k})\rangle$ in $\widehat{\mathcal{H}}$, where $|E_{Q}(\phi_{k})\rangle$ is the eigenvector of $A$ with the eigenvalue $e^{i\phi_{k}}$. Our iterative algorithm would use $\widehat{\mathcal{T}}=\widehat{U}I_{\hat{0}}\widehat{U}^{\dagger}S_{t}^{\dagger}\widehat{U}I_{\hat{0}}\widehat{U}^{\dagger}S_{t}$, where $\widehat{U}=U\otimes I$ and $I_{\hat{0}}$ is the selective phase-inversion of $|\hat{0}\rangle = |0\rangle\otimes|0_{w}\rangle$ with $|0_{w}\rangle$ the initial state of the workspace. As shown in section II, iterating $\widehat{\mathcal{T}}$ leads us to a state $|j\rangle_{f(j)=1}\otimes|\psi\rangle$, whose projection on the search-space is a target state. The number of queries depends on the eigenvalues of $A$, but it will be $O(1/\alpha_{U})$ as long as the eigenvalues are away from $1$. The same result applies if the operator $A$ is different for different target states. Note that this is a much relaxed criterion than the phase-matching condition which demands the eigenvalues of $A$ to be within $O(\alpha_{U})$ of $-1$.
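A compact simulation of this $B=I$ situation is sketched below (an illustration; the search space, the residual workspace rotation $A$ and all parameters are hypothetical). Although the imperfect oracle leaves a rotation $A\neq -I$ on a one-qubit workspace, iterating $\widehat{\mathcal{T}}$ drives the probability of reading the marked item from the search register close to one.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, t = 256, 7                                   # hypothetical search space, marked item t
U = np.linalg.qr(rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N)))[0]

# Imperfect oracle on the joint (search x workspace-qubit) space:
# Q = P_t (x) A + (I - P_t) (x) I, with A a residual workspace rotation far from -I.
V = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]])
A = V @ np.diag(np.exp(1j*np.array([2.2, 2.9]))) @ V.T
P_t = np.zeros((N, N)); P_t[t, t] = 1.0
Q = np.kron(P_t, A) + np.kron(np.eye(N) - P_t, np.eye(2))

U_hat = np.kron(U, np.eye(2))                   # U (x) I on the joint space
I_0 = np.ones(2*N); I_0[0] = -1.0               # phase-inversion of |0>|0_w>

def reflect0(v):                                # U_hat I_0 U_hat^dagger
    return U_hat @ (I_0 * (U_hat.conj().T @ v))

psi, best = U_hat[:, 0].copy(), 0.0             # initial state U|0> (x) |0_w>
for _ in range(60):
    psi = reflect0(Q @ psi)                     # U_hat I_0 U_hat^dag Q
    psi = reflect0(Q.conj().T @ psi)            # U_hat I_0 U_hat^dag Q^dag
    best = max(best, abs(psi[2*t])**2 + abs(psi[2*t + 1])**2)

print("alpha_U^2 = %.4f" % abs(U[t, 0])**2)
print("best probability of the marked item over 60 iterations = %.3f" % best)
\end{verbatim}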
When $B \neq I$ as well, the iterative algorithm cannot take us to a target state and we have to use the recursive algorithm. The condition that $\|S_{t}-I_{t}\|$ be small restricts $A$ to be close to $-I$ (unlike the iterative algorithm, which allows a much wider range of $A$) and $B$ to be close to the identity operator. For small errors in the workspace transformations, therefore, quantum search works, and complete elimination of the entanglement between the search-space and the workspace is not necessary.
\textbf{(3) Bounded Error Quantum Search:} Our recursive search algorithm is similar to the quantum search algorithm on bounded-error inputs by Hoyer {\it et al.}~\cite{hoyerbound} (labeled HMW henceforth), except that our error model is much more general. HMW considered \emph{computationally} imperfect oracles, which provide the correct value of $f(j)$ not with certainty but with a probability close to $1$. For instance, if $j$ is a target (non-target) state, the Boolean oracle may output $1$ ($0$) with probability at least $9/10$. We have considered \emph{physically} imperfect oracles, where the errors affect the unitary transformations corresponding to the oracle. In particular, the algorithm by HMW (see facts $1,2$ in section $3$ of \cite{hoyerbound}) uses fixed unitary transformations $(S_{0})_{\rm hmw},(S_{1})_{\rm hmw}$ (amplitude amplification) and $E_{\rm hmw}$ (error reduction), with $(S_{1})_{\rm hmw}$ replacing the oracle $I_{t}$. Our algorithm applies to the situation where these unitary transformations themselves contain errors. We have shown that, as long as the errors are small, quantum search is possible.
Indeed, the HMW error model can be reduced to our error model. The HMW oracle transformation $O$ computes the value of $f(j)$ using workspace qubits and stores it in a qubit. It takes the initial state $\sum_{j}a_{j}|j\rangle|0_{w}\rangle|0\rangle$ to $\sum_{j}a_{j}|j\rangle(\sqrt{p_{j}}|\psi_{j1}\rangle|1\rangle+\sqrt{1-p_{j}}|\psi_{j0}\rangle|0\rangle)$, where $|\psi_{jb}\rangle,\ b\in\{0,1\}$ denote the workspace states. The probability $p_{j}$ is at least $9/10$ if $f(j)=1$ and at most $1/10$ if $f(j)=0$. Consider the operator $\overline{\mathcal{G}}_{O}=OI_{0}O^{\dagger}S_{1}O$ instead of only $O$, where $S_{1}$ inverts the states with last qubit $|1\rangle$ and $I_{0}$ is the selective phase-inversion of the $|0_{w}\rangle|0\rangle$ state. The operator $\overline{\mathcal{G}}_{O}$ is an amplitude amplification operator, and its eigenvalues are $e^{\pm 2i\theta_{j}}$ with $\sin^{2}\theta_{j}=p_{j}$~\cite{qaa}. Hence for $f(j)=1(0)$, the eigenvalues are close to $-1(1)$. This is similar to the workspace error model discussed above, where $\|S_{t}-I_{t}\|$ is small.
Moreover, if we assume that there are no errors in the workspace transformations, our error model can also be reduced to the HMW error model. We simply attach a qubit to the workspace in the $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ state. A controlled $S_{t}$ transformation takes the qubit to the state $(|0\rangle+e^{i\phi_{j}}|1\rangle)/\sqrt{2}$, where the $e^{i\phi_{j}}$ are the eigenvalues of $S_{t}$. A Hadamard gate then transforms the qubit to the state $e^{i\phi_{j}/2}\big(\cos\frac{\phi_{j}}{2}|0\rangle-i\sin\frac{\phi_{j}}{2}|1\rangle\big)$. Since $\phi_{j}=\pi f(j)+\epsilon_{j}$ with small $|\epsilon_{j}|$, we obtain the HMW model.
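The single-qubit algebra behind this reduction is easily checked numerically; the following minimal sketch (with arbitrarily chosen phases $\phi_{j}$, shown only for illustration) verifies the action of the controlled phase followed by the Hadamard gate:
\begin{verbatim}
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate

for phi in (np.pi + 0.07, 0.05):                  # phi_j = pi*f(j) + epsilon_j
    # attached qubit in (|0>+|1>)/sqrt(2) after the controlled phase e^{i phi_j}
    qubit = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
    out = H @ qubit                               # apply the Hadamard gate
    expected = np.exp(1j * phi / 2) * np.array([np.cos(phi / 2),
                                                -1j * np.sin(phi / 2)])
    print(np.allclose(out, expected))             # True for any phi
\end{verbatim}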
The difference arises when the workspace transformations of the HMW model also suffer from errors. We cannot get rid of these errors by attaching more and more ancilla qubits in the hope that the new ancilla qubits will be error-free. Our results show that there is no need to worry about this: the recursion works as long as the errors are small.
A peculiar feature of the HMW error model is that the imperfect oracle can be used to simulate an almost perfect oracle by making $O(\log{N})$ oracle queries. Thereafter, the standard quantum search can be used. In our model, we cannot simulate $I_{t}$ using $S_{t}$. In fact, we have shown that there is no need to simulate $I_{t}$; $S_{t}$ is good enough for quantum search as long as it is close to $I_{t}$. More importantly, our algorithm also works when $I_{0}$ is affected by errors, a case not considered by HMW.
To conclude, we have presented two algorithms which allow significant flexibility in the selective transformations used by quantum search. The iterative algorithm takes $O(\sqrt{N})$ queries and requires the oracle to be neutral for non-target states. But the oracle may mark the target states by phases other than phase-inversion, and hence almost any oracle transformation is good enough for quantum search. The recursive algorithm tackles the situations in which the oracle perturbs non-target states as well. For error size $\Delta$, it reaches a target state using $O(\sqrt{N}\cdot N^{O(\Delta^{2})})$ queries. Needless to say, errors are inevitable in any physical implementation of quantum search. As long as the errors are small, the algorithms we have constructed are more robust and better adapted to physical implementation than the standard quantum search.
\textbf{Acknowledgements}: I thank Prof. Apoorva Patel for going through the manuscript and for useful comments and discussions. I thank Avik Mitra and Prof. Anil Kumar for discussions on the experimental implementation of the iterative algorithm.
\end{document}
\begin{document}
\renewcommand{\thefootnote}{$\star$}
\newcommand{1601.02263}{1601.02263}
\renewcommand{046}{046}
\FirstPageHeading
\ShortArticleName{The Asymptotic Expansion of Kummer Functions}
\ArticleName{The Asymptotic Expansion of Kummer Functions\\ for Large Values of the $\boldsymbol{a}$-Parameter,\\ and Remarks on a Paper by Olver\footnote{This paper is a~contribution to the Special Issue on Orthogonal Polynomials, Special Functions and Applications. The full collection is available at \href{http://www.emis.de/journals/SIGMA/OPSFA2015.html}{http://www.emis.de/journals/SIGMA/OPSFA2015.html}}}
\Author{Hans VOLKMER}
\AuthorNameForHeading{H.~Volkmer}
\Address{Department of Mathematical Sciences, University of Wisconsin-Milwaukee,\\ P.O.~Box 413, Milwaukee, WI, 53201, USA} \Email{\href{mailto:[email protected]}{[email protected]}}
\ArticleDates{Received January 10, 2016, in f\/inal form May 01, 2016; Published online May 06, 2016}
\Abstract{It is shown that a known asymptotic expansion of the Kummer function $U(a,b,z)$ as $a$ tends to inf\/inity is valid for $z$ on the full Riemann surface of the logarithm. A corresponding result is also proved in a more general setting considered by Olver (1956).}
\Keywords{Kummer functions; asymptotic expansions}
\Classification{33B20; 33C15; 41A60}
\renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0}
\section{Introduction} Recently, the author collaborated on a project \cite{C} investigating the maximal domain in which an integral addition theorem for the Kummer function $U(a,b,z)$ due to Magnus \cite{Magnus41,Magnus} is valid. In this work it is important to know the asymptotic expansion of $U(a,b,z)$ as $a$~tends to inf\/inity. Such an expansion is well-known, and, for instance, can be found in Slater's book~\cite{S}. Slater's expansion is in terms of modif\/ied Bessel functions $K_\nu(z)$, and it is derived from a paper by Olver \cite{O}. However, there are two problems when we try to use the known result. As Temme~\cite{T} pointed out, there is an error in Slater's expansion. Moreover, in all known results the range of validity for the variable $z$ is restricted to certain sectors in the $z$-plane.
The purpose of this paper is two-fold. Firstly, we correct the error in \cite{S}, and we show that the corrected expansion based on \cite{O} agrees with the result in \cite{T}, which was obtained in an entirely dif\/ferent way. Secondly, we show that the asymptotic expansion of $U(a,b,z)$ as~$a$~tends to inf\/inity is valid for $z$ on the full Riemann surface of the logarithm. This is somewhat surprising because often the range of validity of asymptotic expansions is restricted by Stokes' lines. Olver's results in~\cite{O} are valid for a more general class of functions (containing conf\/luent hypergeometric functions as a special case).
He introduces a restriction on~$\arg z$, and on~\mbox{\cite[p.~76]{O}} he writes ``In the case of the series with the basis function~$K_\mu$ we establish the asymptotic property in the range $|\arg z|\le \frac32 \pi$. It is, in fact, unlikely that the valid range exceeds this $\dots$''. However, we show in this paper that the restriction $|\arg z|\le \frac32 \pi$ can be removed at least under an additional assumption \eqref{1:D}.
In Section \ref{Olver} of this paper we review the results that we need from Olver \cite{O}. We discuss these results in Section~\ref{discussion}. In Section~\ref{removal} we prove that Olver's asymptotic expansion holds on the full Riemann surface of the logarithm. Sections~\ref{extension}, \ref{AB} and \ref{W3} deal with extensions to more general values of parameters. In Section~\ref{confluent} we specialize to asymptotic expansions of Kummer functions. In Section~\ref{Temme} we make the connection to Temme~\cite{T}.
\section{Olver's work}\label{Olver} Olver \cite[(7.3)]{O} considers the dif\/ferential equation \begin{gather}\label{1:eq1} w''(z)=\frac1z w'(z)+\left(u^2+\frac{\mu^2-1}{z^2}+f(z)\right)w(z) . \end{gather} The function $f(z)$ is even and analytic in a simply-connected domain $D$ containing $0$. It is assumed that $\Re\mu\ge 0$. The goal is to f\/ind the asymptotic behavior of solutions of \eqref{1:eq1} as $0<u\to\infty$.
Olver \cite[(7.4)]{O} starts with a formal solution to \eqref{1:eq1} of the form \begin{gather*}
w(z)=z\mathcal{Z}_\mu(uz)\sum_{s=0}^\infty \frac{A_s(z)}{u^{2s}}+\frac{z}{u}\mathcal{Z}_{\mu+1}(uz)\sum_{s=0}^\infty \frac{B_s(z)}{u^{2s}} , \end{gather*} where either $\mathcal{Z}_\mu=I_\mu$, $\mathcal{Z}_{\mu+1}=I_{\mu+1}$ or $\mathcal{Z}_\mu=K_\mu$, $\mathcal{Z}_{\mu+1}=-K_{\mu+1}$ are modif\/ied Bessel functions. The functions $A_s(z)=A_s(\mu,z)$, $B_s(z)=B_s(\mu,z)$ are def\/ined by $A_0(z)=1$, and then recursively, for $s\ge 0$, \begin{gather} 2B_s(z) = -A_s'(z)+\int_0^z\left(f(t)A_s(t)-\frac{2\mu+1}{t} A_s'(t)\right) dt,\label{1:eq3}\\ 2A_{s+1}(z) = \frac{2\mu+1}{z} B_s(z)- B_s'(z)+\int f(z)B_s(z) dz.\label{1:eq4} \end{gather} The integral in \eqref{1:eq4} denotes an arbitrary antiderivative of $f(z)B_s(z)$. The func\-tions~$A_s(z)$,~$B_s(z)$ are analytic in $D$, and they are even and odd, respectively.
If the domain $D$ is unbounded, Olver \cite[p.~77]{O} requires that $f(z)=O(|z|^{-1-\alpha})$ as $|z|\to\infty$, where $\alpha>0$. In our application to the conf\/luent hypergeometric equation in Section~\ref{confluent} the function $f(z)=z^2$ does not satisfy this condition. Therefore, throughout this paper, we will take \begin{gather}\label{1:D}
D=\{z\colon |z|<R_0\},
\end{gather}
where $R_0$ is a positive constant. Olver \cite[p.~77]{O} introduces various subdomains $D'$, $D_1$, $D_2$ of~$D$. We may choose $D'=\{z\colon |z|\le R\}$, where $0<R<R_0$. The domain $D_1$ comprises those points~$z$ in~$D'$ which can be joined to the origin by a contour which lies in~$D'$ and does not cross either the imaginary axis, or the line through~$z$ parallel to the imaginary axis. For our special~$D'$ the contour can be taken as the line segment connecting $z$ and $0$, so $D_1=D'$. The domain~$D_1$ appears in Olver \cite[Theorem~D(i)]{O}. According to this theorem, \eqref{1:eq1} has a solu\-tion~$W_1(u,z)$ of the form \begin{gather}\label{1:W1} W_1(u,z)=zI_\mu(uz)\left(\sum_{s=0}^{N-1} \frac{A_s(z)}{u^{2s}}+g_1(u,z)\right) +\frac{z}{u}I_{\mu+1}(uz)\left(\sum_{s=0}^{N-1} \frac{B_s(z)}{u^{2s}}+zh_1(u,z)\right),\!\!\! \end{gather} where \begin{gather}\label{1:W1a}
|g_1(u,z)|+|h_1(u,z)|\le \frac{K_1}{u^{2N}}\qquad \text{for} \quad 0<|z|\le R,\quad u\ge u_1 . \end{gather}
\begin{Remarks}\quad \begin{enumerate}\itemsep=0pt \item The parameter $\mu$ is considered f\/ixed. We may write $W_1(u,\mu,z)$ to indicate the dependence of $W_1$ on $\mu$. \item Every solution $w(z)$ of \eqref{1:eq1} is def\/ined on the Riemann surface of the logarithm over~$D$. Note that there is no restriction on $\arg z$ in~\eqref{1:W1a}, see \cite[p.~76]{O}. \item The precise statement is this: for every positive integer $N$ there are functions~$g_1$,~$h_1$ and positive constants~$K_1$, $u_1$ (independent of $u,z$) such that~\eqref{1:W1},~\eqref{1:W1a} hold. \item The functions $A_s(z)$, $B_s(z)$ are not uniquely determined because of the free choice of integration constants in~\eqref{1:eq4}. Even if we make a def\/inite choice of these integration constants, the solution $W_1(u,z)$ is not uniquely determined by~\eqref{1:W1},~\eqref{1:W1a}. For example, one can replace $W_1(u,z)$ by $(1+e^{-u})W_1(u,z)$. \item Olver's construction of $W_1(u,z)$ is independent of $N$ but may depend on~$R$. In our application to the conf\/luent hypergeometric dif\/ferential equation we have $f(z)=z^2$. Then~$R$ can be any positive number but $W_1(u,z)$ may depend on the choice of $R$.
\item Olver has the term $\frac{z}{1+|z|}$ in place of~$z$ in front of~$h_1$ in~\eqref{1:W1} but since we assume $|z|\le R$ this makes no dif\/ference. \end{enumerate} \end{Remarks}
For the def\/inition of $D_2$ we suppose that $a$ is an arbitrary point of the sector $|\arg a|<\frac12\pi$ and $\epsilon>0$. Then $D_2$ comprises those points $z\in D'$ for which $|\arg z|\le\frac32\pi$, $\Re z\le \Re a$, and a~contour can be found joining~$z$ and~$a$ which satisf\/ies the following conditions: \begin{itemize}\itemsep=0pt \item[(i)] it lies in $D'$, \item[(ii)] it lies wholly to the right of the line through $z$ parallel to the imaginary axis, \item[(iii)] it does not cross the negative imaginary axis if $\frac12\pi\le \arg z\le \frac32\pi$, and does not cross the positive imaginary axis if $-\frac32\pi\le \arg z\le -\frac\pi2$, \item[(iv)]
it lies outside the circle $|t|=\epsilon |z|$. \end{itemize}
In our special case $D'=\{z\colon |z|\le R\}$ we choose $a=R$. If $0\le\arg z\le \frac32\pi$ and $0<|z|\le R$, we choose the contour starting at~$z$
moving in the positive direction parallel to the imaginary axis until we hit the circle $|t|=R$. Then we move clockwise along the circle $|t|=R$ towards~$a$. Taking into account condition~(iv), we see that~$D_2$ is the set of points $z$ with $-\frac32\pi+\delta\le \arg z\le \frac32\pi-\delta$,
$0<|z|\le R$, where $\delta>0$. The domain $D_2$ appears in Olver \cite[Theorem~D(ii)]{O}. According to this theorem, \eqref{1:eq1} has a~solution $W_2(u,z)$ of the form \begin{gather} W_2(u,z)=zK_\mu(uz)\left(\sum_{s=0}^{N-1} \frac{A_s(z)}{u^{2s}}+g_2(u,z)\right)\nonumber\\ \hphantom{W_2(u,z)=}{} -\frac{z}{u}K_{\mu+1}(uz)\left(\sum_{s=0}^{N-1} \frac{B_s(z)}{u^{2s}}+zh_2(u,z)\right),\label{1:W2} \end{gather} where \begin{gather}\label{1:W2a}
|g_2(u,z)|+|h_2(u,z)|\le \frac{K_2}{u^{2N}}\qquad \text{for} \quad 0<|z|\le R, \quad |\arg z|\le \frac32\pi-\delta, \quad u\ge u_2 . \end{gather} Note that in \eqref{1:W2a} there is a restriction on $\arg z$.
In the rest of this paper we choose the functions $A_s(z)$ such that \begin{gather}\label{1:As} A_s(0)=0\qquad\text{if} \quad s\ge 1. \end{gather} Then the functions $A_s(z)$, $B_s(z)$ are uniquely determined.
\section[Properties of solutions $W_1$ and $W_2$]{Properties of solutions $\boldsymbol{W_1}$ and $\boldsymbol{W_2}$}\label{discussion}
The dif\/ferential equation \eqref{1:eq1} has a regular singularity at $z=0$ with exponents $1\pm \mu$. Substituting $x=z^2$ we obtain an equation which has a regular singularity at $x=0$ with exponents $\frac12(1\pm\mu)$. Therefore, for every $\mu$ which is not a negative integer, \eqref{1:eq1} has a unique solution $W_+(z)=W_+(u,\mu,z)$ of the form \begin{gather*} W_+(z)= z^{1+\mu}\sum_{n=0}^\infty c_nz^{2n}, \end{gather*} where the $c_n$ are determined by $c_0=1$, and \begin{gather*} 4n(\mu+n)c_n=u^2c_{n-1}+\sum_{j=0}^{n-1}f_jc_{n-1-j} \qquad\text{for} \quad n\ge 1 \end{gather*} when \begin{gather*} f(z)=\sum_{n=0}^\infty f_n z^{2n} . \end{gather*}
If $\mu$ is not an integer, then $W_+(u,\mu,z)$ and $W_+(u,-\mu,z)$ form a fundamental system of solutions of~\eqref{1:eq1}. If $\Re \mu\ge 0$, there is a solution $W_-(z)$ linearly independent of $W_+(z)$ such that \begin{gather*} W_-(z)=z^{1-\mu}p\big(z^2\big)+ d \ln z W_+(z), \end{gather*} where $p$ is a power series and $d$ is a suitable constant. If $\mu\ne 0$ we choose $p(0)=1$. If~$\mu$ is not an integer then $d=0$.
\begin{Lemma}\label{2:l1} Suppose $\Re\mu\ge 0$. There is a function $\alpha(u)$ such that \begin{gather*} W_1(u,z)=\alpha(u)W_+(u,z), \end{gather*} and, for every $N=1,2,3,\dots$, \begin{gather}\label{2:alpha} \alpha(u)=\frac{2^{-\mu}u^\mu}{\Gamma(\mu+1)}\left(1+O\left(\frac1{u^{2N}}\right)\right)\qquad\text{as} \quad 0<u\to\infty. \end{gather} \end{Lemma}
\begin{proof} There are functions $\alpha_+(u)$, $\alpha_-(u)$ such that \begin{gather}\label{2:connectW1a} W_1(u,z)=\alpha_+(u)W_+(u,z)+\alpha_-(u)W_-(u,z). \end{gather} Suppose $\Re\mu>0$. Then \eqref{2:connectW1a} implies \begin{gather}\label{2:limit1} \lim_{z\to0^+} z^{\mu-1}W_1(u,z)=\alpha_-(u). \end{gather} We use \cite[(10.30.1)]{NIST} \begin{gather*}
\lim_{z\to0} I_\nu(z)z^{-\nu}=\frac{2^{-\nu}}{\Gamma(\nu+1)}. \end{gather*} Then \eqref{1:W1}, \eqref{1:W1a} give{\samepage \begin{gather}\label{2:limit2} \lim_{z\to0^+} z^{\mu-1}W_1(u,z)= 0. \end{gather} It follows from \eqref{2:limit1}, \eqref{2:limit2} that $\alpha_-(u)=0$.}
Now suppose that $\Re\mu=0$, $\mu\ne 0$. Then we argue as before but instead of $z\to0^+$ we approach $0$ along a spiral $z=re^{\pm i r}$, $0<r\to 0$, when $\pm \Im \mu>0$. Then along this spiral $z^{2\mu}\to 0$. We obtain again that $\alpha_-(u)=0$. In a similar way, we also show that $\alpha_-(u)=0$ when $\mu=0$.
Therefore, \eqref{2:connectW1a} gives \begin{gather*} \lim_{z\to0^+} z^{-\mu-1} W_1(z,u)=\alpha_+(u)\end{gather*} and, from \eqref{1:W1}, \eqref{1:W1a}, \eqref{1:As} \begin{gather*} \lim_{z\to0^+} z^{-\mu-1}W_1(u,z)=\frac{2^{-\mu}u^\mu}{\Gamma(\mu+1)}\left(1+O\left(\frac1{u^{2N}}\right)\right)\end{gather*} which implies \eqref{2:alpha} with $\alpha(u)=\alpha_+(u)$. \end{proof}
Let us def\/ine \begin{gather*} W_3(u,\mu,z)= \frac{2^{-\mu}u^\mu}{\Gamma(\mu+1)}W_+(u,\mu,z) . \end{gather*} Then Lemma~\ref{2:l1} gives \begin{gather*}
W_3(u,z)=\tilde\alpha(u) W_1(u,z),\qquad \text{where} \quad \tilde\alpha(u)=1+O\left(\frac1{u^{2N}}\right). \end{gather*} Therefore, $W_3$ admits the asymptotic expansion \eqref{1:W1}, \eqref{1:W1a}, so we can replace $W_1$ by $W_3$. Note that in contrast to $W_1$, $W_3$ is a uniquely def\/ined function which is identif\/ied as a (Floquet) solution of~\eqref{1:eq1} and not by its asymptotic behavior as $u\to\infty$.
Unfortunately, it seems impossible to replace $W_2$ by an easily identif\/iable solution of~\eqref{1:eq1}. However, we will now prove several useful properties of~$W_2$.
\begin{Lemma}\label{2:l2} Suppose that $\Re\mu\ge 0$. There is a function $\beta(u)$ such that \begin{gather}\label{2:eq3}
W_2\big(u,ze^{\pi i}\big)-e^{\pi i(1-\mu)}W_2(u,z)= \beta(u) W_3(u,z) , \end{gather} and, for every $N=1,2,3,\dots$, \begin{gather}\label{2:beta}
\beta(u)=\pi i\left(1+O\left(\frac{1}{u^{2N}}\right)\right)\qquad\text{as}\quad 0<u\to\infty.
\end{gather} \end{Lemma} \begin{proof} We set $\lambda_\pm=e^{\pi i(1\pm\mu)}$. Equation \eqref{1:eq1} has a fundamental system of solutions~$W_+$,~$W_-$ such that \begin{gather*} W_+\big(ze^{\pi i}\big)=\lambda_+ W_+(z),\qquad W_-\big(ze^{\pi i}\big)=\lambda_- W_-(z)+\rho W_+(z) . \end{gather*} Let $w(z)=c_+W_+(z)+c_-W_-(z)$ be any solution of \eqref{1:eq1}. Then \begin{gather*} w\big(ze^{\pi i}\big)-\lambda_- w(z)=((\lambda_+-\lambda_-)c_++\rho c_-) W_+(z). \end{gather*} If we apply this result to $w=W_2$ we see that there is a function~$\beta(u)$ such that~\eqref{2:eq3} holds.
Let $z>0$ and set $z_1=ze^{\pi i}$. We use~\eqref{1:W2} for $z_1$ in place of~$z$, and \cite[(10.34.2)]{NIST} \begin{gather}\label{3:ancontK} K_\nu\big(ze^{\pi im}\big)= e^{-\pi i \nu m}K_\nu(z)-\pi i \frac{\sin(\pi \nu m)}{\sin(\pi \nu)} I_\nu(z) \end{gather} with $m=1$. Then \begin{gather*}
W_2(u,z_1) = z\left(\lambda_-K_\mu(uz)+\pi i I_\mu(uz)\right)\left(\sum_{s=0}^{N-1}\frac{A_s(z)}{u^{2s}}+g_2(u,z_1)\right)\\ \hphantom{W_2(u,z_1) =}{} +\frac{z}{u}\left(-\lambda_-K_{\mu+1}(uz)+\pi i I_{\mu+1}(uz)\right) \left(\sum_{s=0}^{N-1}\frac{B_s(z)}{u^{2s}}+zh_2(u,z_1)\right). \end{gather*} Using \eqref{1:W2} a second time, we f\/ind that \begin{gather*}
W_2(u,z_1)-\lambda_-W_2(u,z)=\pi i zI_\mu(uz)\left(\sum_{s=0}^{N-1}\frac{A_s(z)}{u^{2s}}+g_2(u,z_1)\right)\\ \qquad{} +\pi i\frac{z}{u}I_{\mu+1}(uz) \left(\sum_{s=0}^{N-1}\frac{B_s(z)}{u^{2s}}+zh_2(u,z_1)\right)\\ \qquad{} +\lambda_- zK_\mu(uz) (g_2(u,z_1)-g_2(u,z))-\lambda_-\frac{z^2}{u}K_{\mu+1}(uz)(h_2(u,z_1)-h_2(u,z)). \end{gather*} We now expand the right-hand side of \eqref{2:eq3} using \eqref{1:W1}, and compare the expansions. Setting $z=R$ and dividing by $R I_\mu(uR)$, we obtain \begin{gather*}
(\beta(u)-\pi i) \left(1+O\left(\frac1u\right)\right)=O\left(\frac{1}{u^{2N}}\right) \quad \text{as $0<u\to\infty$,} \end{gather*} where we used \cite[(10.40.1)]{NIST} \begin{gather}\label{2:asyI}
I_\nu(x)=\frac{e^x}{\sqrt{2\pi x}}\left(1+O\left(\frac1x\right)\right)\qquad\text{as} \quad 0<x\to\infty, \end{gather} and \cite[(10.40.2)]{NIST} \begin{gather}\label{2:asyK}
K_\nu(x)=\sqrt{\frac\pi{2x}}e^{-x}\left(1+O\left(\frac1x\right)\right)\qquad\text{as} \quad 0<x\to\infty. \end{gather} This proves \eqref{2:beta}. \end{proof}
\begin{Lemma}\label{2:l3}\quad \begin{enumerate}\itemsep=0pt \item[$(a)$] If $\Re \mu>0$ then, for every $N=1,2,3,\dots$, we have \begin{gather}\label{2:limitW2}
\limsup_{z\to 0^+}\left| z^{\mu-1}W_2(u,z)-\Gamma(\mu)2^{\mu-1}u^{-\mu}\left(1-2\mu\sum_{s=0}^{N-1}\frac{B_s'(0)}{u^{2s+2}}\right)\right|= O\left(\frac{u^{-\mu}}{u^{2N+2}}\right) \end{gather} as $0<u\to\infty$. \item[$(b)$] If $\Re \mu=0$, $\mu\neq0$, \eqref{2:limitW2} holds when we replace $z^{\mu-1}W_2(u,z)$ by \begin{gather*} z^{\mu-1}W_2(u,z)-\Gamma(-\mu)2^{-\mu-1}u^\mu z^{2\mu}. \end{gather*} \item[$(c)$] If $\mu=0$ then \begin{gather*}
\limsup_{z\to 0^+} \left|\frac{W_2(u,z)}{z\ln z}+1\right|=O\left(\frac{1}{u^{2N}}\right) . \end{gather*} \end{enumerate} \end{Lemma}
\begin{proof} Suppose that $\Re\mu>0$. Then we use \cite[(10.30.2)]{NIST} \begin{gather*} \lim_{x\to 0^+} x^\nu K_\nu(x)=\Gamma(\nu)2^{\nu-1}\qquad\text{for} \quad \Re\nu>0. \end{gather*} It follows that \begin{gather}
\lim_{z\to 0^+} z^\mu K_\mu(uz) = \Gamma(\mu)2^{\mu-1}u^{-\mu}, \label{2:limitK1}\\
\lim_{z\to 0^+} \frac1u z^{\mu+1} K_{\mu+1}(uz) = \Gamma(\mu+1)2^\mu u^{-\mu-2}.\label{2:limitK2} \end{gather} Using \eqref{1:W2}, \eqref{1:As}, \eqref{2:limitK1}, \eqref{2:limitK2}, we obtain \begin{gather*}
\limsup_{z\to0^+}\left|z^{\mu-1}W_2(u,z)-\Gamma(\mu)2^{\mu-1}u^{-\mu}\left(1-2\mu\sum_{s=0}^{N-1}\frac{B_s'(0)}{u^{2s+2}}\right)\right|\\
\qquad{}\le\limsup_{z\to0^+} \left|\Gamma(\mu)2^{\mu-1}u^{-\mu}g_2(u,z)- \Gamma(\mu+1)2^\mu u^{-\mu-2}h_2(u,z)\right| . \end{gather*} Now \eqref{1:W2a} gives \eqref{2:limitW2} with $N-1$ in place of $N$. If $\Re\mu=0$, $\mu\neq0$, then we use \cite[(9.7)]{O} \begin{gather*} K_\mu(x)=\Gamma(\mu)2^{\mu-1}x^{-\mu}+\Gamma(-\mu)2^{-\mu-1}x^\mu+o(1)\qquad\text{as}\quad 0<x\to0 \end{gather*} and argue similarly. If $\mu=0$ we use \cite[(10.30.3)]{NIST} \begin{gather*} \lim_{x\to 0^+} \frac{K_0(x)}{\ln x} =-1 .\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof}
\begin{Theorem}\label{2:t1} Suppose that $\Re\mu\ge 0$ and $\mu$ is not an integer. There are functions $\gamma(u)$, $\delta(u)$ such that \begin{gather}\label{2:connectW2} W_2(u,z)=\gamma(u)W_3(u,\mu,z)+\delta(u)W_3(u,-\mu,z) , \end{gather} and, for every $N=1,2,3,\dots$, \begin{gather}
\gamma(u) = -\frac{\pi}{2\sin(\pi\mu)}\left(1+O\left(\frac{1}{u^{2N}}\right)\right),\label{2:gamma}\\
\delta(u) = \frac{\pi}{2\sin(\pi\mu)}\left(1-2\mu\sum_{s=0}^{N-1}\frac{B_s'(0)}{u^{2s+2}}+O\left(\frac{1}{u^{2N+2}}\right)\right).\label{2:delta} \end{gather} \end{Theorem} \begin{proof} Since $\mu$ is not an integer, $W_3(u,\mu,z)$ and $W_3(u,-\mu,z)$ are linearly independent so \eqref{2:connectW2} holds for some suitable functions~$\gamma$,~$\delta$. From~\eqref{2:connectW2} we get \begin{gather*} W_2\big(u,ze^{\pi i}\big)-e^{\pi i (1-\mu)}W_2(u,z)=\gamma(u)\big(e^{\pi i(1+\mu)}-e^{\pi i(1-\mu)}\big) W_3(u,z) . \end{gather*} Comparing with Lemma~\ref{2:l2}, we f\/ind $-2i\gamma(u)\sin(\pi\mu) =\beta(u)$. Now~\eqref{2:beta} gives~\eqref{2:gamma}.
Suppose that $\Re\mu>0$. Then \eqref{2:connectW2} yields \begin{gather*} \lim_{z\to0^+} z^{\mu-1}W_2(u,z)=\delta(u)\frac{2^\mu u^{-\mu}}{\Gamma(1-\mu)}. \end{gather*} Using Lemma \ref{2:l3}(a) we obtain \begin{gather*} \Gamma(\mu)2^{\mu-1}u^{-\mu}\left(1-2\mu\sum_{s=0}^{N-1} \frac{B_s'(0)}{u^{2s+2}}+O\left(\frac{1}{u^{2N+2}}\right)\right)=\delta(u)\frac{2^\mu u^{-\mu}}{\Gamma(1-\mu)}. \end{gather*} Applying the ref\/lection formula for the Gamma function, we obtain~\eqref{2:delta}. If $\Re\mu=0$, $\mu\neq0$, the proof of~\eqref{2:delta} is similar. \end{proof}
\section[Removal of restriction on $\arg z$]{Removal of restriction on $\boldsymbol{\arg z}$}\label{removal}
Using $\beta(u)$ from Lemma \ref{2:l2} we def\/ine \begin{gather*} W_4(u,z)=\frac{\pi i}{\beta(u)} W_2(u,z) .\end{gather*} Then we have \begin{gather}\label{3:connect1} W_4\big(u,ze^{\pi i}\big)=e^{\pi i(1-\mu)} W_4(u,z)+\pi i W_3(u,z) . \end{gather} Moreover, \eqref{2:beta} shows that $W_4$ shares the asymptotic expansion \eqref{1:W2}, \eqref{1:W2a} with~$W_2$. From~\eqref{3:connect1} we obtain \begin{gather}\label{3:connect} W_4\big(u,ze^{\pi im}\big)=e^{\pi i (1-\mu)m} W_4(u,z)+\pi i\frac{\sin(\pi(\mu+1)m)}{\sin(\pi(\mu+1))} W_3(u,z) \end{gather}
for every integer $m$. We will use~\eqref{3:connect} and the asymptotic expansions \eqref{1:W1}, \eqref{1:W2} for $|\arg z|\le \frac12\pi$ to prove that in~\eqref{1:W2a} we can remove the restriction on~$\arg z$ completely.
\begin{Theorem}\label{3:t1} Suppose that $\Re\mu\ge0$. For every $N=1,2,3,\dots$, $W_2(u,z)$ can be written as the right-hand side of~\eqref{1:W2}, and \eqref{1:W2a} holds without a restriction on~$\arg z$: \begin{gather*}
|g_2(u,z)|+|h_2(u,z)|\le \frac{K_2}{u^{2N}}\qquad\text{for} \quad 0<|z|\le R, \quad u\ge u_2. \end{gather*} \end{Theorem}
\begin{proof}
Without loss of generality we replace $W_2$ by $W_4$. We assume that $|\arg z|\le \frac12\pi$, $0<|z|\le R$, $u>0$, $m$ is an integer and $z_1:=ze^{\pi im}$. We insert~\eqref{1:W1}, \eqref{1:W2} on the right-hand side of~\eqref{3:connect}. Using~\eqref{3:ancontK} we obtain \begin{gather}\label{3:eq4} W_4(u,z_1)=z_1K_\mu(uz_1)\sum_{s=0}^{N-1} \frac{A_s(z_1)}{u^{2s}} -\frac{z_1}{u}K_{\mu+1}(uz_1)\sum_{s=0}^{N-1} \frac{B_s(z_1)}{u^{2s}}+f(u,z), \end{gather} where \begin{gather*}
f=E_1g_2+E_2h_2+E_3g_1+E_4h_1, \end{gather*} with \begin{alignat*}{3} & E_1(u,z) = e^{-\pi i(\mu+1)m} zK_\mu(uz),\qquad &&
E_2(u,z) = -e^{-\pi i (\mu+1)m}\frac{z^2}{u} K_{\mu+1}(uz),&\\ & E_3(u,z) = \pi i\frac{\sin(\pi(\mu+1)m)}{\sin(\pi(\mu+1))}z I_\mu(uz),\qquad&&
E_4(u,z) = \pi i \frac{\sin(\pi (\mu+1)m)}{\sin(\pi (\mu+1))}\frac{z^2}{u}I_{\mu+1}(uz) .& \end{alignat*} We will construct functions $G_j(u,z)$ and $H_j(u,z)$ such that \begin{gather*} E_j(u,z)=z_1 K_\mu(uz_1) G_j(u,z)-\frac{z_1^2}{u} K_{\mu+1}(uz_1) H_j(u,z) \end{gather*} for $j=1,2,3,4$. Then \eqref{3:eq4} becomes \begin{gather} W_4(u,z_1)=z_1K_\mu(uz_1)\left(\sum_{s=0}^{N-1} \frac{A_s(z_1)}{u^{2s}}+g_3(u,z)\right)\nonumber\\ \hphantom{W_4(u,z_1)=}{} -\frac{z_1}{u}K_{\mu+1}(uz_1)\left(\sum_{s=0}^{N-1} \frac{B_s(z_1)}{u^{2s}}+z_1h_3(u,z)\right),\label{4:eq5} \end{gather} where \begin{gather*} g_3 = G_1 g_2+G_2h_2+G_3 g_1+G_4 h_1,\qquad h_3 = H_1 g_2+H_2h_2+H_3 g_1+H_4 h_1. \end{gather*} We now use \cite[(10.28.2)]{NIST} \begin{gather}\label{3:relation}
K_\mu(x)I_{\mu+1}(x)+K_{\mu+1}(x)I_\mu(x)=\frac1x . \end{gather} From \eqref{3:relation} and the relation \begin{gather}\label{3:ancontI}
I_\mu\big(ze^{\pi im}\big)=e^{\pi i\mu m}I_\mu(z) \end{gather} we obtain \begin{gather*} u z_1K_\mu(uz_1) e^{\pi i (\mu+1)m} I_{\mu+1}(uz)+uz_1K_{\mu+1}(uz_1) e^{\pi i \mu m} I_\mu(uz)=1 .\end{gather*} Therefore, we can choose \begin{gather*} G_1(u,z) = uz K_\mu(uz)I_{\mu+1}(uz),\qquad H_1(u,z) = -u^2K_\mu(uz)I_\mu(uz) . \end{gather*} We set \begin{gather*}
l_0(x)=\ln\frac{1+2|x|}{|x|},\qquad l_\mu(x)=1\qquad\text{if} \quad \mu\ne 0, \end{gather*} and note the estimates \cite[(9.12)]{O} \begin{alignat}{3}
&|I_\mu(x)K_\mu(x)|\le \frac{C l_\mu(x)}{1+|x|},\qquad&& |I_{\mu+1}(x)K_\mu(x)|\le \frac{C |x|l_\mu(x)}{1+|x|^2},& \label{3:olver1}\\
& |I_{\mu+1}(x)K_{\mu+1}(x)|\le \frac{C}{1+|x|},\qquad&& |I_\mu(x)K_{\mu+1}(x)|\le \frac{C}{|x|}& \label{3:olver2} \end{alignat}
valid when $|\arg x|\le \frac12\pi$ with $C$ independent of $x$. At this point we assume that $\mu\ne0$ (the case $\mu=0$ is mentioned at the end of the proof). The estimates~\eqref{3:olver1} give \begin{gather}\label{3:est1}
|G_1(u,z)|\le C,\qquad |H_1(u,z)|\le C u^2. \end{gather} Similarly, we choose \begin{gather*} G_2(u,z) = -z^2K_{\mu+1}(uz)I_{\mu+1}(uz),\qquad H_2(u,z) = uzK_{\mu+1}(uz)I_\mu(uz). \end{gather*} The estimates \eqref{3:olver2} give \begin{gather}\label{3:est2}
|G_2(u,z)|\le C|z|^2,\qquad|H_2(u,z)|\le C. \end{gather} It follows from \eqref{3:ancontK} that \begin{gather*} E_3(u,z) = -E_1(u,z)+ z_1K_\mu(uz_1),\qquad E_4(u,z) = -E_2(u,z)-\frac{z_1^2}{u}K_{\mu+1}(uz_1) . \end{gather*} Therefore, we can choose \begin{alignat*}{3} & G_3(u,z) = 1-G_1(u,z) ,\qquad && H_3(u,z) = -H_1(u,z),&\\ & G_4(u,z) = -G_2(u,z),\qquad&& H_4(u,z) = 1-H_2(u,z).& \end{alignat*} From \eqref{3:est1}, \eqref{3:est2}, we get \begin{alignat}{3}
& |G_3(u,z)|\le C+1,\qquad && |H_3(u,z)|\le C u^2,& \label{3:est3}\\
& |G_4(u,z)|\le C|z|^2,\qquad && |H_4(u,z)|\le C+1 .&\label{3:est4} \end{alignat} The estimates \eqref{3:est1}, \eqref{3:est2}, \eqref{3:est3}, \eqref{3:est4} give \begin{gather*}
|g_3(u,z)| \le C |g_2(u,z)| + C|z|^2|h_2(u,z)|+(C+1)|g_1(u,z)|+C|z|^2 |h_1(u,z)|,\\
|h_3(u,z)| \le C u^2|g_2(u,z)| + C|h_2(u,z)|+Cu^2|g_1(u,z)|+(C+1) |h_1(u,z)|. \end{gather*} Since we assumed that \begin{gather*}
|g_1(u,z)|+|h_1(u,z)|+|g_2(u,z)|+|h_2(u,z)|\le \frac{K}{u^{2N}} \end{gather*}
for $| \arg z|\le \frac12\pi$, $0<|z|\le R$, $u\ge u_0$, the expansion \eqref{4:eq5} has the desired form with $N$ replaced by $N-1$.
Suppose $\mu=0$. We use \cite[(10.31.2)]{NIST} \begin{gather}\label{3:K0} K_0(x)=-\left(\ln\left(\frac12x\right)+\gamma\right)I_0(x)+\frac{\frac14x^2}{(1!)^2}+\left(1+\frac12\right) \frac{\left(\frac14x^2\right)^2}{\left(2!\right)^2}+\cdots. \end{gather} It follows from \eqref{3:K0} that there exist positive constants $r>0$, $D>0$ such that \begin{gather*}
\frac{|K_0(x)|}{|K_0(xe^{\pi i m})|}\le D \qquad\text{for} \quad 0<|x|\le r, \quad |\arg x|\le\frac12\pi, \quad m\in\mathbb{Z}. \end{gather*} Then we set \begin{gather*}
G_1(u,z)=\frac{K_0(uz)}{K_0(uz_1)},\qquad H_1(u,z)=0\qquad\text{if} \quad 0<|uz|\le r \end{gather*}
with $G_1$ and $H_1$ the same as before when $|uz|>r$. The estimates~\eqref{3:est1} are valid with a suitable constant~$C$. The rest of the proof is unchanged. This completes the proof of the theorem. \end{proof}
\section[Extension to complex $u$]{Extension to complex $\boldsymbol{u}$}\label{extension}
So far we considered only $0<u\to\infty$. Now we set $u=te^{i\theta}$, where $t>0$ and $\theta\in\mathbb{R}$. In~\eqref{1:eq1} we substitute $z=e^{-i\theta} x$, $\tilde w(x)=w(z)$. Then we obtain the dif\/ferential equation \begin{gather}\label{4:eq1} \frac{d^2}{dx^2} \tilde w(x)=\frac{1}{x}\frac{d}{dx} \tilde w(x) +\left(t^2 +\frac{\mu^2-1}{x^2}+e^{-2i\theta}f\big(e^{-i\theta} x\big)\right)\tilde w(x) . \end{gather}
Assuming $\Re\mu\ge0$, we can apply Olver's theory to this equation, and obtain functions $\tilde W_1(t,x)$ and $\tilde W_2(t,x)$. Since we assumed that $f(z)$ is analytic in the disk $\{z\colon |z|<R_0\}$, the new function $\tilde f(x)= e^{-2i\theta}f(e^{-i\theta} x)$ is analytic in the same disk. Therefore, the domains $D_1$, $D_2$ are the same as before. The functions~$\tilde A_s(x)$, $\tilde B_s(x)$ that appear in place of~$A_s(z)$,~$B_s(z)$ satisfy \begin{gather*} \tilde A_s(x)=e^{-2si\theta} A_s(z),\qquad \tilde B_s(x)=e^{-(2s+1)i\theta} B_s(z) , \end{gather*} so \begin{gather*} \frac{\tilde A_s(x)}{t^{2s}} =\frac{A_s(z)}{u^{2s}},\qquad \frac{\tilde B_s(x)}{t^{2s+1}} =\frac{B_s(z)}{u^{2s+1}}. \end{gather*} Therefore, the functions $e^{{-}i\theta} \tilde W_1(t{,}x)$ and $e^{{-}i\theta} \tilde W_2(t{,}x)$ have the asymptotic expan\-sions~\eqref{1:W1},~\eqref{1:W1a} and~\eqref{1:W2},~\eqref{1:W2a} with $(t,x)$ replacing $(u,z)$.
Let $\tilde W_3(t,\mu,x)$ be the function $W_3$ for the dif\/ferential equation~\eqref{4:eq1}. Then \begin{gather*} W_3\big(te^{i\theta},\mu,e^{-i\theta}x\big)=e^{-i\theta} \tilde W_3(t,\mu,x). \end{gather*}
It follows that $W_3(u,\mu,z)$ can be expanded in the form of the right-hand side of~\eqref{1:W1}, and~\eqref{1:W1a} holds for $0<|z|\le R$ and $u=te^{i\theta}$ for any f\/ixed real~$\theta$.
We would like to connect $\tilde W_2$ to $W_2$ in a similar manner but this is not possible at this point because $W_2(u,z)$ is only def\/ined for $u>0$, and so we cannot substitute $u=te^{i\theta}$.
\section[Properties of $A_s$, $B_s$]{Properties of $\boldsymbol{A_s}$, $\boldsymbol{B_s}$}\label{AB}
For any $\mu\in\mathbb{C}$ we consider the solution $A_s(z)=A_s(\mu,z)$, $B_s(z)= B_s(\mu,z)$ of the recur\-sion~\eqref{1:eq3},~\eqref{1:eq4} which is uniquely determined by $A_0(z)=1$ and~\eqref{1:As}. The following lemma is mentioned by Olver \cite[p.~327]{O1}, \cite[p.~81, line~6]{O}.
\begin{Lemma}\label{5:l1} Let $\hat A_s(z)$, $\hat B_s(z)$ be any solution of \eqref{1:eq3}, \eqref{1:eq4} with $\hat A_0(z)=1$. Then, for all $s\ge 0$, \begin{gather}\label{5:eq1} \hat A_s(z)=\sum_{r=0}^s A_r(z)\hat A_{s-r}(0),\qquad \hat B_s(z)=\sum_{r=0}^s B_r(z)\hat A_{s-r}(0) . \end{gather} \end{Lemma} \begin{proof} Let us denote the right-hand sides of equations \eqref{5:eq1} by $A_s^\ast(z)$, $B_s^\ast(z)$, respectively. It is easy to show that $A_s^\ast(z)$, $B_s^\ast(z)$ is a solution of~\eqref{1:eq3},~\eqref{1:eq4}. Since $A_0^\ast(z)=1$ and $A_s^\ast(0)=\hat A_s(0)$, this solution must agree with~$\hat A_s(z)$,~$\hat B_s(z)$. \end{proof}
We now def\/ine $a_0(z)=1$ and, for $s\ge 0$, \begin{gather}
a_{s+1}(z) := A_{s+1}(-\mu,z)+\frac{2\mu}{z} B_s(-\mu,z),\label{5:a}\\
b_s(z) := B_s(-\mu,z) .\label{5:b} \end{gather}
\begin{Theorem}\label{5:t1} The functions $a_s(z)$, $b_s(z)$ satisfy \eqref{1:eq3}, \eqref{1:eq4} with $A_s$, $B_s$ replaced by $a_s$, $b_s$, respectively, and, for all $s\ge 0$, \begin{gather} a_s(z) = A_s(\mu,z)+2\mu\sum_{r=0}^{s-1} A_r(\mu,z)B_{s-1-r}'(-\mu,0),\label{5:c1}\\ b_s(z) = B_s(\mu,z)+2\mu\sum_{r=0}^{s-1} B_r(\mu,z)B_{s-1-r}'(-\mu,0).\label{5:c2} \end{gather} \end{Theorem} \begin{proof} We have \begin{gather*} 2 A_{s+1}(-\mu,z)=\frac{-2\mu+1}{z} B_s(-\mu,z)- B_s'(-\mu,z)+\int f(z) B_s(-\mu,z) dz .\end{gather*} We add $\frac{4\mu}{z} B_s(-\mu,z)$ on both sides and get \begin{gather}\label{5:eq2} 2a_{s+1}(z)=\frac{2\mu+1}{z}b_s(z)-b_s'(z)+\int f(z)b_s(z) dz . \end{gather} This is \eqref{1:eq4} for $a_s(z)$, $b_s(z)$.
Equation \eqref{1:eq3} is true for $a_s(z)$, $b_s(z)$ when $s=0$. Suppose $s\ge 1$. We have \begin{gather*} 2 B_s'(-\mu,z)=- A_s''(-\mu,z)+f(z) A_s(-\mu,z)+\frac{2\mu-1}{z} A_s'(-\mu,z) .\end{gather*} Using the def\/initions of $a_s(z)$, $b_s(z)$ we get \begin{gather}\label{5:eq3}
2b_s'(z)=-a_s''(z)+f(z)a_s(z)-\frac{2\mu+1}{z}a_s'(z)+\frac{4\mu}{z}a_s'(z)+G, \end{gather} where \begin{gather*} G:=\frac{d^2}{dz^2}\left(\frac{2\mu}{z} b_{s-1}(z)\right)-f(z)\frac{2\mu}{z}b_{s-1}(z) -\frac{2\mu-1}{z}\frac{d}{dz}\left(\frac{2\mu}{z} b_{s-1}(z)\right) . \end{gather*} In \eqref{5:eq3} we replace $\frac{4\mu}{z}a_s'(z)$ through~\eqref{5:eq2}. Then we obtain \begin{gather}\label{5:eq4} 2b_s'(z)=-a_s''(z)+f(z)a_s(z)-\frac{2\mu+1}{z}a_s'(z)+H+G, \end{gather} where \begin{gather*} H:=\frac{2\mu}{z}\left[\frac{d}{dz}\left(\frac{2\mu+1}{z}b_{s-1}(z)\right)-b_{s-1}''(z)+f(z)b_{s-1}(z)\right] . \end{gather*} By direct computation, we show $H+G=0$ for any function $b_{s-1}(z)$. Therefore, by integra\-ting~\eqref{5:eq4} noting that $a_s(z)$ is even and $b_s(z)$ is odd, we obtain \eqref{1:eq3} for $a_s(z)$, $b_s(z)$.
We now get \eqref{5:c1}, \eqref{5:c2} from Lemma \ref{5:l1}. \end{proof}
Using multiplication of formal series, we can write \eqref{5:c1}, \eqref{5:c2} as \begin{gather} F(u,-\mu)\sum_{s=0}^\infty \frac{A_s(z)}{u^{2s}} = \sum_{s=0}^\infty \frac{a_s(z)}{u^{2s}},\label{5:formal1}\\ F(u,-\mu)\sum_{s=0}^\infty \frac{B_s(z)}{u^{2s}} = \sum_{s=0}^\infty \frac{b_s(z)}{u^{2s}},\label{5:formal2} \end{gather} where \begin{gather*} F(u,\mu)=1-2\mu\sum_{s=0}^\infty \frac{B_s'(\mu,0)}{u^{2s+2}}. \end{gather*} We dif\/ferentiate \eqref{5:c2} with respect to $z$ and set $z=0$. Then we f\/ind \begin{gather*} B_s'(-\mu,0)=B_s'(\mu,0)+2\mu\sum_{r=0}^{s-1} B_r'(\mu,0)B_{s-1-r}'(-\mu,0), \end{gather*} or, equivalently, \begin{gather}\label{5:reciprocal} F(u,\mu)F(u,-\mu)=1 . \end{gather} In particular, it follows that \begin{gather} F(u,\mu)\sum_{s=0}^\infty \frac{a_s(z)}{u^{2s}}=\sum_{s=0}^\infty \frac{A_s(z)}{u^{2s}},\label{5:formal3}\\ F(u,\mu)\sum_{s=0}^\infty \frac{b_s(z)}{u^{2s}}=\sum_{s=0}^\infty \frac{B_s(z)}{u^{2s}}.\label{5:formal4} \end{gather}
\section[Asymptotic expansion of $W_3$ when $\Re\mu<0$]{Asymptotic expansion of $\boldsymbol{W_3}$ when $\boldsymbol{\Re\mu<0}$} \label{W3}
In Section \ref{discussion} we saw that $W_3(u,\mu,z)$ can be written as the right-hand side of~\eqref{1:W1}, and~\eqref{1:W1a} holds. However, this was proved only when $\Re\mu\ge 0$. Now we remove this restriction.
\begin{Theorem}\label{6:t1} Suppose that $\mu\in\mathbb{C}$ is not a negative integer, and $u=te^{i\theta}$ with $t>0$, $\theta\in\mathbb{R}$. Then $W_3(u,\mu,z)$ can be written as the right-hand side of~\eqref{1:W1} and, for each $R>0$ and $N\ge 1$, there are constants $L_1$ and $t_1$ such that \begin{gather*}
|g_1(u,z)|+|h_1(u,z)|\le \frac{L_1}{t^{2N}}\qquad \text{for} \quad 0<|z|\le R, \quad t\ge t_1. \end{gather*} \end{Theorem}
\begin{proof}
In Sections \ref{discussion} and \ref{extension} we proved this statement for $\Re\mu\ge 0$. Therefore, it will be suf\/f\/icient to treat $W_3(u,-\mu,z)$ with $\Re\mu>0$. By the considerations in Section~\ref{extension}, it is suf\/f\/icient to consider $\theta=0$, so $u>0$. Suppose $|\arg z|\le \frac12\pi$, $0<|z|\le R$. By~\eqref{2:connectW2}, we have \begin{gather}\label{6:eq1} c \delta(u)W_3(u,-\mu,z)=c W_2(u,\mu,z)-c\gamma(u)W_3(u,\mu,z), \end{gather} where $c=\frac{2}{\pi}\sin(\pi\mu)$. On the right-hand side of \eqref{6:eq1} we insert the expansions \eqref{1:W1} for~$W_3$ and~\eqref{1:W2} for~$W_2$. Taking into account \eqref{2:gamma}, we can expand $-c\gamma(u)W_3(u,\mu,z)$ the same way as~$W_3$. Then using \cite[(10.27.4)]{NIST} \begin{gather}\label{6:eq2} K_\nu(x)=\frac{\pi}{2\sin(\pi\nu)}\left(I_{-\nu}(x)-I_\nu(x)\right), \end{gather} we obtain \begin{gather}\label{6:eq3} c\delta(u)W_3(u,-\mu,z)=zI_{-\mu}(uz)\sum_{s=0}^{N-1}\frac{A_s(z)}{u^{2s}}+\frac{z}{u}I_{-\mu-1}(uz)\sum_{s=0}^{N-1} \frac{B_s(z)}{u^{2s}}+f(u,z), \end{gather} where \begin{gather*}
f=E_1g_2+E_2h_2+E_3g_1+E_4h_1 \end{gather*} with \begin{alignat*}{3} & E_1(u,z) = c zK_\mu(uz),\qquad && E_2(u,z) = -c\frac{z^2}{u} K_{\mu+1}(uz),& \\ & E_3(u,z) = z I_\mu(uz),\qquad && E_4(u,z) = \frac{z^2}{u}I_{\mu+1}(uz) . & \end{alignat*} We will construct functions $G_j(u,z)$ and $H_j(u,z)$ such that \begin{gather*} E_j(u,z)=z I_{-\mu}(uz) G_j(u,z)+\frac{z^2}{u} I_{1-\mu}(uz) H_j(u,z) \end{gather*} for $j=1,2,3,4$. Also using \cite[(10.29.1)]{NIST} \begin{gather}\label{6:eq5} I_{\nu-1}(x)-I_{\nu+1}(x)=\frac{2\nu}{x}I_\nu(x), \end{gather} \eqref{6:eq3} becomes \begin{gather} c\delta(u)W_3(u,-\mu,z) = zI_{-\mu}(uz)\left(\sum_{s=0}^{N-1} \frac{\tilde A_s(z)}{u^{2s}}+g_3(u,z)\right)\nonumber\\ \hphantom{c\delta(u)W_3(u,-\mu,z) =}{} +\frac{z}{u}I_{1-\mu}(uz)\left(\sum_{s=0}^{N-1} \frac{B_s(z)}{u^{2s}}+zh_3(u,z)\right),\label{6:eq6} \end{gather} where \begin{gather*} \tilde A_0(z)=1,\qquad \tilde A_s(z)=A_s(z)-\frac{2\mu}{z}B_{s-1}(z) \qquad \text{for} \quad s=1,\dots,N-1, \end{gather*} and \begin{gather*} g_3 = -\frac{2\mu}{z}B_{N-1}(z)u^{-2N}+ G_1 g_2+G_2h_2+G_3 g_1+G_4 h_1,\\ h_3 = H_1 g_2+H_2h_2+H_3 g_1+H_4 h_1. \end{gather*} The identities \eqref{3:relation} and \cite[(10.29.1)]{NIST} \begin{gather}\label{6:eq7} K_{\nu-1}(x)-K_{\nu+1}(x)=-\frac{2\nu}{x}K_\nu(x) \end{gather} give \begin{gather*} uzI_{-\mu}(uz)\left(K_{\mu+1}(uz)-\frac{2\mu}{uz}K_\mu(uz)\right)+uzI_{1-\mu}(uz)K_\mu(uz)=1 . \end{gather*} Therefore, we can choose \begin{gather*} G_3(u,z) = uz\left(K_{\mu+1}(uz)-\frac{2\mu}{uz}K_\mu(uz)\right)I_\mu(uz),\\ H_3(u,z) = u^2K_\mu(uz)I_\mu(uz) . \end{gather*} The estimates \eqref{3:olver1}, \eqref{3:olver2} give \begin{gather}\label{6:est1}
|G_3(u,z)|\le C_3,\qquad |H_3(u,z)|\le D_3 u^2. \end{gather} Similarly, we choose \begin{gather*} G_4(u,z) = z^2\left(K_{\mu+1}(uz)-\frac{2\mu}{uz}K_\mu(z)\right)I_{\mu+1}(uz),\\ H_4(u,z) = uzK_\mu(uz)I_{\mu+1}(uz), \end{gather*} and estimate \begin{gather}\label{6:est2}
|G_4(u,z)|\le C_4|z|^2,\qquad|H_4(u,z)|\le D_4. \end{gather} It follows from \eqref{6:eq2}, \eqref{6:eq5} that \begin{gather*} E_1(u,z) = zI_{-\mu}(uz)-E_3(u,z),\\ E_2(u,z) = \frac{z^2}{u}\left(-\frac{2\mu}{uz}I_{-\mu}(uz)+I_{-\mu+1}(uz)\right)-E_4(u,z) . \end{gather*} Therefore, we can choose \begin{alignat*}{3} & G_1(u,z) = 1-G_3(u,z) ,\qquad && H_1(u,z) = -H_3(u,z),& \\ & G_2(u,z) = -\frac{2\mu}{u^2}-G_4(u,z),\qquad && H_2(u,z) = 1-H_4(u,z).& \end{alignat*} From \eqref{6:est1}, \eqref{6:est2}, we get \begin{alignat}{3}
& |G_1(u,z)|\le C_1,\qquad && |H_1(u,z)|\le D_1 u^2,& \label{6:est3}\\
&|G_2(u,z)|\le C_2(1+|z|^2),\qquad && |H_2(u,z)|\le D_2 .&\label{6:est4} \end{alignat} Since we know that \begin{gather*}
|g_1(u,z)|+|h_1(u,z)|+|g_2(u,z)|+|h_2(u,z)|\le \frac{K}{u^{2N}} \end{gather*}
for $| \arg z|\le \frac12\pi$, $0<|z|\le R$, $u\ge u_0$, the estimates \eqref{6:est1}, \eqref{6:est2}, \eqref{6:est3}, \eqref{6:est4} give \begin{gather*}
|g_3(u,z)|+|h_3(u,z)|\le \frac{L}{u^{2N-2}}\qquad\text{if} \quad |\arg z|\le \frac12\pi, \quad 0<|z|\le R, \quad u\ge u_3. \end{gather*}
Now we divide both sides of \eqref{6:eq6} by $c\delta(u)$ and use \eqref{2:delta}, \eqref{5:formal3}, \eqref{5:formal4} (with $\mu$ replaced by $-\mu$). Then we obtain the desired expansion of $W_3(u,-\mu,z)$ for $\Re\mu<0$ and $|\arg z|\le \frac12\pi$, $0<|z|\le R$. The restriction on $\arg z$ is easily removed using~\eqref{3:ancontI} and $W_3(e^{\pi im}z)=e^{\pi i (\mu+1)m} W_3(z)$. \end{proof}
\section{Application to the conf\/luent hypergeometric equation}\label{confluent} The conf\/luent hypergeometric dif\/ferential equation \begin{gather*} xv''(x)+(b-x)v'(x)-av(x)=0 \end{gather*} has solutions $M(a,b,x)$ and $U(a,b,x)$. Substituting $x=z^2$, $w=e^{-\frac12 z^2}z^b v$ we obtain the dif\/ferential equation \begin{gather}\label{M:eq2} w''(z)=\frac1z w'(z)+\left(u^2+\frac{\mu^2-1}{z^2}+z^2\right)w(z), \end{gather} where \begin{gather}\label{M:a}
a=\frac14 u^2+\frac12 b, \qquad \mu=b-1. \end{gather} Equation \eqref{M:eq2} agrees with~\eqref{1:eq1} when $f(z)=z^2$. Let $A_s$, $B_s$ be def\/ined as in Section~\ref{Olver} for $f(z)=z^2$. In this case, $A_s(z)$, $B_s(z)$ are polynomials. Throughout this section, we assume that $a$, $b$, $u$, $\mu$ satisfy \eqref{M:a}.
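For illustration, the f\/irst few of these polynomials are easily generated from the recursion \eqref{1:eq3}, \eqref{1:eq4} with $f(z)=z^2$ and the normalization \eqref{1:As}; a direct computation gives \begin{gather*} B_0(z)=\tfrac16 z^3,\qquad A_1(z)=\tfrac16(\mu-1)z^2+\tfrac1{72}z^6,\qquad B_1'(0)=\tfrac13\big(1-\mu^2\big). \end{gather*} The following short script (a minimal sketch using the computer algebra system SymPy; it is not needed anywhere in the proofs) automates this computation:
\begin{verbatim}
import sympy as sp

z, t, mu = sp.symbols('z t mu')
f = z**2                       # confluent hypergeometric case

def next_pair(A):
    """Given A_s, return (B_s, A_{s+1}) from Olver's recursion with
    f(z) = z^2 and the normalization A_{s+1}(0) = 0."""
    dA = sp.diff(A, z)
    integrand = sp.expand(f.subs(z, t)*A.subs(z, t)
                          - (2*mu + 1)/t*dA.subs(z, t))
    B = sp.Rational(1, 2)*(-dA + sp.integrate(integrand, (t, 0, z)))
    B = sp.expand(B)
    Anext = sp.Rational(1, 2)*((2*mu + 1)/z*B - sp.diff(B, z)
                               + sp.integrate(f*B, z))
    Anext = sp.expand(Anext)
    return B, Anext - Anext.subs(z, 0)

A0 = sp.Integer(1)
B0, A1 = next_pair(A0)         # B_0 = z^3/6,  A_1 = (mu-1)*z^2/6 + z^6/72
B1, A2 = next_pair(A1)
print(sp.factor(sp.diff(B1, z).subs(z, 0)))   # B_1'(0) = (1 - mu^2)/3
\end{verbatim}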
The function $M(a,b,x)$ is given by a power series in $x$ and $M(a,b,0)=1$. Therefore, the function~$W_3$ associated with~\eqref{M:eq2} is given by \begin{gather}\label{M:W3} W_3(u,\mu,z)=\frac{2^{1-b}u^{b-1}}{\Gamma(b)} e^{-\frac12z^2}z^b M\big(a,b,z^2\big). \end{gather} Theorem \ref{6:t1} implies the following theorem.
\begin{Theorem}\label{M:t1} Suppose that $b\in\mathbb{C}$ is not $0$ or a negative integer, $u=te^{i\theta}$ with $t>0$, $\theta\in\mathbb{R}$, and $N\ge 1$, $R>0$. Then we can write \begin{gather}
\frac{2^{1-b}u^{b-1}}{\Gamma(b)} e^{-\frac12z^2}z^b M\big(\tfrac14 u^2+\tfrac12 b,b,z^2\big)\nonumber\\ \qquad {} = zI_{b-1}(uz)\left(\sum_{s=0}^{N-1} \frac{A_s(z)}{u^{2s}}+g_1(u,z)\right)
+\frac{z}{u}I_b(uz)\left(\sum_{s=0}^{N-1} \frac{B_s(z)}{u^{2s}}+zh_1(u,z)\right),\label{M:asy1} \end{gather} where \begin{gather*}
|g_1(u,z)|+|h_1(u,z)|\le \frac{L_1}{t^{2N}}\qquad \text{for} \quad 0<|z|\le R, \quad t\ge t_1, \end{gather*} and $L_1$, $t_1$ are positive constants independent of $z$ and $u$ $($but possibly depending on~$b$, $\theta$, $N$, $R)$. There is no restriction on~$\arg z$. The polynomials $A_s(z)$, $B_s(z)$ appearing in~\eqref{M:asy1} are determined by the recursion~\eqref{1:eq3},~\eqref{1:eq4} with $f(z)=z^2$ and the conditions $A_0(z)=1$, $A_s(0)=0$ for $s\ge 1$. \end{Theorem}
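As a quick numerical illustration of Theorem~\ref{M:t1} (a sanity check only, with the arbitrarily chosen values $b=1.6$ and $z=0.8$; it plays no role in the proofs), one may compare the two sides of \eqref{M:asy1} for the crudest truncation $N=1$, that is, $A_0(z)=1$ and $B_0(z)=\frac16z^3$, using the arbitrary precision library mpmath; the relative error should decay roughly like $u^{-2}$:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 40
b, z = mp.mpf('1.6'), mp.mpf('0.8')       # arbitrary test values

def lhs(u):
    a = u**2/4 + b/2
    return (2**(1 - b) * u**(b - 1) / mp.gamma(b)
            * mp.exp(-z**2/2) * z**b * mp.hyp1f1(a, b, z**2))

def rhs(u):                          # truncation N = 1: A_0 = 1, B_0 = z^3/6
    return z*mp.besseli(b - 1, u*z) + (z/u)*mp.besseli(b, u*z)*(z**3/6)

for u in (mp.mpf(10), mp.mpf(20), mp.mpf(40)):
    print(u, abs(lhs(u) - rhs(u))/abs(lhs(u)))   # relative error, O(1/u^2)
\end{verbatim}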
Suppose that $\Re b\ge 1$. Let $W_2(u,z)$ be the function associated with equation~\eqref{M:eq2} which satisf\/ies~\eqref{1:W2},~\eqref{1:W2a}. There are functions $\beta_1(u)$, $\beta_2(u)$ such that \begin{gather}\label{U:connect}
W_2(u,z)=\beta_1(u) e^{-\frac12z^2}z^b M\big(a,b,z^2\big)+\beta_2(u) e^{-\frac12z^2}z^b U\big(a,b,z^2\big) . \end{gather}
The determination of $\beta_1(u)$, $\beta_2(u)$ is not obvious. It is in this part of the analysis where there is an error in \cite{S}. Slater \cite[p.~79]{S} derives $\beta_2(u)\sim\Gamma(a)2^{b-2}u^{1-b}$, and claims ``we can take $\beta_1(u)=0$'' without proof. When comparing with~\cite{S}, note that our $\beta_2(u)$ is denoted by~$1/\beta_2(u)$ in~\cite{S}. Actually, the stated formula for $\beta_2(u)$ is correct but it is only the leading term of the required full asymptotic expansion given in the following lemma.
\begin{Lemma}\label{U:l1} Suppose $\Re b\ge 1$. For every $N=1,2,3,\dots$, as $0<u\to\infty$, \begin{gather}\label{U:beta2} \beta_2(u)=\Gamma(a)2^{b-2}u^{1-b}\left(1+2(1-b)\sum_{s=0}^{N-1}\frac{B_s'(0)}{u^{2s+2}}+O\left(\frac{1}{u^{2N+2}}\right)\right). \end{gather} \end{Lemma}
\begin{proof} Suppose $\Re b>1$. Then \cite[(13.2.18)]{NIST} \begin{gather*} \lim_{z\to0^+}z^{2b-2}U\big(a,b,z^2\big)=\frac{\Gamma(b-1)}{\Gamma(a)} \end{gather*} and \eqref{U:connect} give \begin{gather*} \lim_{z\to 0^+} z^{b-2} W_2(u,z)= \beta_2(u) \frac{\Gamma(b-1)}{\Gamma(a)} . \end{gather*} Comparing with \eqref{2:limitW2}, we obtain~\eqref{U:beta2}.
If $\Re b=1$, $b\ne 1$, the proof is similar using Lemma~\ref{2:l3}(b) and~\cite[(13.2.18)]{NIST} \begin{gather*} U(a,b,x)=\frac{\Gamma(b-1)}{\Gamma(a)} x^{1-b}+\frac{\Gamma(1-b)}{\Gamma(a-b+1)}+O(x)\qquad\text{as} \quad x\to0^+. \end{gather*} If $b=1$ we use Lemma \ref{2:l3}(c) and \cite[(13.2.19)]{NIST} \begin{gather*} \lim_{x\to0^+} \frac{U(a,1,x)}{\ln x} =-\frac{1}{\Gamma(a)} .\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof}
We cannot show that $\beta_1(u)=0$ but we can prove that $|\beta_1(u)|$ is very small as $u\to\infty$. To this end we need the following lemma.
\begin{Lemma}\label{U:l2} Let $b\in\mathbb{C}$, $\Re x>0$, and $\epsilon>0$. There is a constant $Q$ independent of $a$ such that
\begin{gather*} |\Gamma(a) U(a,b,x)|\le Q \qquad\text{if} \quad \Re a\ge \epsilon. \end{gather*} \end{Lemma}
\begin{proof} We use the integral representation \cite[(13.4.4)]{NIST} \begin{gather*} \Gamma(a)U(a,b,x)=\int_0^\infty e^{-xt}t^{a-1}(1+t)^{b-a-1} dt . \end{gather*} Therefore, if $\Re a\ge \epsilon$, \begin{gather*}
|\Gamma(a)U(a,b,x)| \le \int_0^\infty e^{-\Re x t}\left(\frac{t}{1+t}\right)^{\Re a-\epsilon}\left(\frac{t}{1+t}\right)^{\epsilon-1}(1+t)^{\Re b-2} dt\\
\hphantom{|\Gamma(a)U(a,b,x)|}{} \le \int_0^\infty e^{-\Re x t}\left(\frac{t}{1+t}\right)^{\epsilon-1}(1+t)^{\Re b-2} dt = :Q .\tag*{\qed}
\end{gather*} \renewcommand{\qed}{} \end{proof}
\begin{Lemma}\label{U:l3} Suppose $\Re b\ge 1$. For every $q<R$ we have $\beta_1(u)=O(e^{-qu})$ as $0<u\to\infty$. \end{Lemma}
\begin{proof} In the following let $0<z\le R$ (and $b$) be f\/ixed. By Lemmas~\ref{U:l1},~\ref{U:l2}, there is a constant $C_1>0$ such that, for suf\/f\/iciently large $u>0$, \begin{gather}\label{U:est1}
\big|\beta_2(u) e^{-\frac12z^2}z^bU\big(a,b,z^2\big)\big| \le C_1\big|u^{1-b}\big| Q . \end{gather} Using \eqref{2:asyI} we get from Theorem \ref{M:t1} with $N=1$, for some constant $C_2>0$, \begin{gather}\label{U:est2}
\big|e^{-\frac12z^2} z^bM\big(a,b,z^2\big)\big|\ge C_2 \big|u^{\frac12-b}\big|e^{zu} . \end{gather} Similarly, \eqref{2:asyK}, \eqref{1:W2}, \eqref{1:W2a} yield \begin{gather}\label{U:est3}
|W_2(u,z)|\le C_3 u^{-\frac12}e^{-zu} . \end{gather} Substituting \eqref{U:est1}, \eqref{U:est2} and \eqref{U:est3} in~\eqref{U:connect}, we f\/ind \begin{gather*}
|\beta_1(u)|\le \frac{C_3}{C_2} \big|u^{b-1}\big|e^{-2zu}+\frac{C_1Q}{C_2}u^{\frac12}e^{-zu}. \end{gather*} If we choose $z=R$, we obtain the desired estimate. \end{proof}
\begin{Lemma}\label{U:l4} Suppose $\Re b\ge 1$. For every $N=1,2,3,\dots$, the function \begin{gather*}
\beta_2(u)e^{-\frac12z^2}z^b U\big(a,b,z^2\big) \end{gather*} can be written in the form of the right-hand side of \eqref{1:W2}, and \eqref{1:W2a} holds with $R$ replaced by~$\frac13 R$. \end{Lemma}
\begin{proof} Let \begin{gather*} L(u,z):=\beta_1(u)e^{-\frac12z^2}z^bM\big(a,b,z^2\big) .\end{gather*}
Applying Theorem~\ref{M:t1} and Lemma~\ref{U:l3}, we estimate, for $0<|z|\le R$, \begin{gather}\label{U:est4}
|L(u,z)|\le C_1 e^{-qu} |z|\big(|I_\mu(uz)|+u^{-1}|I_{\mu+1}(uz)|\big) ,
\end{gather} where $q<R$ will be chosen later. We use the estimate \begin{gather}\label{U:est5}
|I_\nu(x)|\le C_2 e^{|x|} \qquad \text{for} \quad |\arg x|\le \frac32\pi \end{gather} provided that $\Re \nu\ge 0$. This inequality follows from \cite[(9.2), (9.3)]{O}. Therefore, \eqref{U:est4} yields \begin{gather}\label{U:est6}
|L(u,z)|\le C_3 |z|e^{-qu+\frac13Ru}\qquad\text{for}\quad 0<|z|\le \frac13R, \quad |\arg z|\le \frac32\pi, \quad u\ge u_0. \end{gather} Using \eqref{3:relation}, we have \begin{gather*} L(u,z)=zK_\mu(uz)g(u,z)-\frac{z}{u}K_{\mu+1}(uz) zh(u,z), \end{gather*} where \begin{gather*} g(u,z) = uI_{\mu+1}(uz) L(u,z),\qquad h(u,z) = -\frac{u^2}{z} I_\mu(uz)L(u,z) . \end{gather*}
From \eqref{U:est5}, \eqref{U:est6}, we get, for $0<|z|\le \frac13R$, $|\arg z|\le \frac32\pi$, \begin{gather*}
|g(u,z)| \le C_2C_3R u e^{u(\frac23R-q)},\qquad
|h(u,z)| \le C_2C_3 u^2 e^{u(\frac23R-q)} . \end{gather*} By \eqref{U:connect}, we can write $\beta_2(u)e^{-\frac12z^2}z^bU(a,b,z^2)$ as the right-hand side of \eqref{1:W2} with $g_2$ replaced by $g_2-g$ and~$h_2$ replaced by~$h_2-h$. If we choose $q=\frac56R$, $g$ and $h$ become exponentially small as $u\to\infty$, and the lemma is proved. \end{proof}
\begin{Lemma}\label{U:l5} Suppose $\Re b\ge 1$. For all $N=1,2,3,\dots$, we have, as $0<u\to\infty$, \begin{gather}\label{U:newbeta2}
\frac{\beta_2(u)2^bu^{1-b}}{\Gamma(1+a-b)}=1+O\left(\frac{1}{u^{2N}}\right). \end{gather} Moreover, for all $b\in\mathbb{C}$ and all $N=1,2,3,\dots$, we have, as $0<u\to\infty$, \begin{gather}\label{U:gammaquotient}
\frac{\Gamma(1+a-b)}{\Gamma(a)} 2^{2-2b}u^{2b-2}=1+2(1-b)\sum_{s=0}^{N-1} \frac{B_s'(0)}{u^{2s+2}}+O\left(\frac{1}{u^{2N+2}}\right) . \end{gather} \end{Lemma} \begin{proof} We set \begin{gather*} T(u,z):=\beta_2(u)e^{-\frac12z^2}z^bU\big(a,b,z^2\big).\end{gather*} Using \cite[(13.2.12)]{NIST} \begin{gather*} U\big(a,b,xe^{2i\pi}\big)=e^{-2\pi i b}U(a,b,x)+\frac{2\pi i e^{-\pi i b}}{\Gamma(b)\Gamma(1+a-b)} M(a,b,x) \end{gather*} and \eqref{M:W3} we obtain \begin{gather*} T(u,ze^{i\pi})-e^{-\pi i b}T(u,z)=\beta_2(u)\frac{\pi i 2^bu^{1-b}}{\Gamma(1+a-b)}W_3(u,z) . \end{gather*} Now we argue as in the proof of Lemma~\ref{2:l2} (applying Lemma~\ref{U:l4} twice) and arrive at~\eqref{U:newbeta2}. If $\Re b\ge 1$ the asymptotic formula~\eqref{U:gammaquotient} follows from~\eqref{U:newbeta2} and Lemma~\ref{U:l1}. If $\Re b<1$ we use~\eqref{5:reciprocal}. \end{proof}
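The expansion \eqref{U:gammaquotient} can also be checked numerically. For $f(z)=z^2$ one has $B_0'(0)=0$ and $B_1'(0)=\tfrac13 b(2-b)$ (put $\mu=b-1$ in the SymPy sketch earlier in this section), so \eqref{U:gammaquotient} with $N=2$ predicts that the dif\/ference printed below is $O(u^{-6})$; the following lines (an illustration with the arbitrary value $b=1.6$, not part of the proof) can be used to observe this decay:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 30
b = mp.mpf('1.6')                      # arbitrary test value

for u in (mp.mpf(10), mp.mpf(20), mp.mpf(40)):
    a = u**2/4 + b/2
    ratio = mp.gamma(1 + a - b)/mp.gamma(a) * 2**(2 - 2*b) * u**(2*b - 2)
    approx = 1 + 2*(1 - b)*b*(2 - b)/(3*u**4)   # terms s = 0, 1
    print(u, abs(ratio - approx))               # expected to be O(1/u^6)
\end{verbatim}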
\begin{Theorem}\label{U:t1} Suppose that $b\in\mathbb{C}$, $N\ge 1$ and $R>0$. Then we can write \begin{gather}
\Gamma\big(1+\tfrac14u^2-\tfrac12 b\big)2^{-b}u^{b-1}e^{-\frac12z^2}z^b U\big(\tfrac14u^2+\tfrac12 b,b,z^2\big)\nonumber \\ \qquad {} = zK_{b-1}(uz)\left(\sum_{s=0}^{N-1} \frac{A_s(z)}{u^{2s}}+g_2(u,z)\right)
-\frac{z}{u}K_b(uz)\left(\sum_{s=0}^{N-1} \frac{B_s(z)}{u^{2s}}+zh_2(u,z)\right),\label{U:asy2} \end{gather} where \begin{gather}\label{U:asy2b}
|g_2(u,z)|+|h_2(u,z)|\le \frac{K_2}{u^{2N}}\qquad\text{for} \quad 0<|z|\le R, \quad u\ge u_2, \end{gather} and $K_2$, $u_2$ are constants independent of~$z$ and~$u$. There is no restriction on $\arg z$. The polyno\-mials~$A_s(z)$,~$B_s(z)$ appearing in~\eqref{U:asy2} are determined by the recursion~\eqref{1:eq3},~\eqref{1:eq4} with $f(z)=z^2$ and the conditions $A_0(z)=1$, $A_s(0)=0$ for $s\ge 1$.
Alternatively, we have \begin{gather}
\Gamma\big(\tfrac14u^2+\tfrac12 b\big)2^{b-2}u^{1-b}e^{-\frac12z^2}z^b U\big(\tfrac14u^2+\tfrac12 b,b,z^2\big)\nonumber \\ \qquad{} = zK_{b-1}(uz)\left(\sum_{s=0}^{N-1} \frac{a_s(z)}{u^{2s}}+g_2(u,z)\right) -\frac{z}{u}K_b(uz)\left(\sum_{s=0}^{N-1} \frac{b_s(z)}{u^{2s}}+zh_2(u,z)\right),\label{U:asy3} \end{gather} where again \eqref{U:asy2b} holds. The polynomials $a_s(z)$, $b_s(z)$ are defined by~\eqref{5:a},~\eqref{5:b}. \end{Theorem}
\begin{proof} We denote \begin{gather*} V(u,\mu,z):=\Gamma(1+a-b)2^{-b}u^{b-1}e^{-\frac12z^2}z^b U\big(a,b,z^2\big).\end{gather*} Then we have \begin{gather*}
V(u,-\mu,z)=\Gamma(a)2^{b-2}u^{1-b}e^{-\tfrac12 z^2} z^b U\big(a,b,z^2\big) \end{gather*} which follows from \cite[(13.2.40)]{NIST} \begin{gather*} U(a,b,x)=x^{1-b}U(1+a-b,2-b,x). \end{gather*} For any $b\in\mathbb{C}$, \eqref{5:formal1}, \eqref{5:formal2}, \eqref{5:formal3}, \eqref{5:formal4}, \eqref{U:gammaquotient} show that the expansions~\eqref{U:asy2} and~\eqref{U:asy3} are equivalent. We will prove~\eqref{U:asy2} and~\eqref{U:asy3} for $\Re \mu\ge 0$ and $\Re \mu<0$, respectively.
Suppose $\Re \mu\ge 0$. Then~\eqref{U:asy2},~\eqref{U:asy2b}
follow from Lemmas~\ref{U:l4} and~\ref{U:l5} when $|\arg z|\le \frac32 \pi-\delta$. Since the function~$V(u,\mu,z)$ is independent of $R$ we can replace $\frac13 R$ by~$R$. By Theorem~\ref{3:t1}, we can remove the restriction on $\arg z$. Note that in the proof of Theorem~\ref{3:t1} we only used that~$W_2(u,z)$ solves~\eqref{1:eq1} and admits the asymptotic expansions~\eqref{1:W2},~\eqref{1:W2a}. Therefore, we can apply the theorem to the function $V(u,\mu,z)$ in place of $W_2(u,z)$.
Now suppose that $\Re \mu<0$. Then, using the expansion we just proved, \begin{gather*}
V(u,-\mu,z) = zK_{-\mu}(uz)\left(\sum_{s=0}^{N-1} \frac{A_s(-\mu,z)}{u^{2s}}+g_2(u,z)\right)\nonumber \\ \hphantom{V(u,-\mu,z) =}{} -\frac{z}{u}K_{-\mu+1}(uz)\left(\sum_{s=0}^{N-1} \frac{B_s(-\mu,z)}{u^{2s}}+zh_2(u,z)\right). \end{gather*} Using \eqref{5:a}, \eqref{5:b}, \eqref{6:eq7} and $K_\nu(x)=K_{-\nu}(x)$, we obtain \eqref{U:asy3}, \eqref{U:asy2b}. \end{proof}
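In the same spirit, the expansion \eqref{U:asy2} of Theorem~\ref{U:t1} can be tested numerically (again only a sanity check with the arbitrary values $b=1.6$ and $z=0.8$, not part of the proof). Since the left-hand side of \eqref{U:asy2} is exponentially small in $u$, the working precision is chosen generously:
\begin{verbatim}
import mpmath as mp

mp.mp.dps = 60
b, z = mp.mpf('1.6'), mp.mpf('0.8')       # arbitrary test values

def lhs(u):
    a = u**2/4 + b/2
    return (mp.gamma(1 + a - b) * 2**(-b) * u**(b - 1)
            * mp.exp(-z**2/2) * z**b * mp.hyperu(a, b, z**2))

def rhs(u):                          # truncation N = 1: A_0 = 1, B_0 = z^3/6
    return z*mp.besselk(b - 1, u*z) - (z/u)*mp.besselk(b, u*z)*(z**3/6)

for u in (mp.mpf(10), mp.mpf(20), mp.mpf(40)):
    print(u, abs(lhs(u) - rhs(u))/abs(lhs(u)))   # relative error, O(1/u^2)
\end{verbatim}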
So far we considered only asymptotic expansions of $U(a,b,z^2)$ as $0<u\to\infty$. Now we set $u=te^{i\theta}$, where $t>0$ and $-\frac12\pi<\theta<\frac12\pi$. Using the notation of Section~\ref{extension}, we have \begin{gather*}
e^{-i\theta}W_2(t,x)=\beta_1(u) e^{-\frac12z^2}z^b M\big(a,b,z^2\big)+\beta_2(u) e^{-\frac12z^2}z^b U\big(a,b,z^2\big) . \end{gather*} It is easy to see that Lemma~\ref{U:l1} remains valid. Since we allow $-\frac12\pi<\theta<\frac12\pi$, $a=\frac14u^2+\frac12b$ may have negative real part. We need a modif\/ication of Lemma~\ref{U:l2}.
\begin{Lemma}\label{U:l6}
Let $b\in\mathbb{C}$, $-\pi<\arg x<0$, $|\arg (a-1)|\le \pi-\delta$ for some $\delta>0$. Then there is a~constant~$Q$ independent of~$a$ such that
\begin{gather*} |\Gamma(a)U(a,b,x)|\le Q .\end{gather*} \end{Lemma}
\begin{proof} We use the integral representation \cite[(13.4.14)]{NIST} \begin{gather*} \big(e^{2\pi i(a-1)}-1\big)\Gamma(a)U(a,b,x)=\int_C e^{-xt}t^{a-1}(1+t)^{b-a-1} dt , \end{gather*}
where the contour $C$ starts at $+\infty i$ and follows the positive imaginary axis, then describes a loop around $0$ in positive direction and returns to~$+\infty i$. The argument of $t$ starts at~$\frac12\pi$ and increases to~$\frac52\pi$. It will be suf\/f\/icient to estimate $\Gamma(a)U(a,b,x)$ in the sector $\frac12\pi\le \arg(a-1)\le \alpha_0$, where $\frac12\pi<\alpha_0<\pi$. The loop is chosen so that $w=\frac{t}{1+t}$ describes the circle $|w|=\cos\theta_0$, where $\theta_0\in(0,\frac12\pi)$ is the unique solution of the equation \begin{gather*} \cos\theta_0=e^{\theta_0\tan\alpha_0} . \end{gather*}
Then one obtains $|w^{a-1}|\le 1$ on the contour $C$ which implies the desired estimate. \end{proof}
The proofs of Lemma~\ref{U:l5} and Theorem~\ref{U:t1} can be easily modif\/ied to give the desired asymptotic expansions for $u=te^{i\theta}$ as $0<t\to\infty$
for f\/ixed $\theta\in(-\frac12\pi,\frac12\pi)$. In~\eqref{U:asy2b} we now have $u=te^{i\theta}$, $t\ge t_2$ and $0<|z|\le R$.
\section{Comparison with Temme \cite{T}}\label{Temme}
It is known \cite[(5.11.13)]{NIST} that, as $z\to\infty$, $|\arg z|\le \pi-\delta$, \begin{gather}\label{T:eq1} \frac{\Gamma(z+r)}{\Gamma(z+s)}\sim z^{r-s}\sum_{n=0}^\infty \binom{r-s}{n} B_n^{(r-s+1)}(r) \frac{1}{z^n}, \end{gather} where the generalized Bernoulli polynomials $B_n^{(\ell)}(x)$ are def\/ined by the Maclaurin expansion \begin{gather*} \left(\frac{t}{e^t-1}\right)^\ell e^{xt}=\sum_{n=0}^\infty B_n^{(\ell)}(x)\frac{t^n}{n!} .\end{gather*} We apply~\eqref{T:eq1} with $z=\frac14u^2$, $0<u\to\infty$, and $r=1-\frac12 b$, $s=\frac12 b$. Then we obtain with $a=\frac14 u^2+\frac12 b$, \begin{gather*}
\frac{\Gamma(1+a-b)}{\Gamma(a)}2^{2-2b}u^{2b-2}\sim \sum_{n=0}^\infty \frac{d_n}{u^{2n}}, \end{gather*} where \begin{gather*} d_n=4^n\binom{1-b}{n}B_n^{(2-b)}\left(1-\frac12 b\right) .\end{gather*} We notice that \begin{gather*} \left(\frac{t}{e^t-1}\right)^{2-b}e^{(1-\frac12b)t}=\left(\frac{\frac12t}{\sinh \frac12 t}\right)^{2-b} \end{gather*} is an even function of $t$. Therefore, $d_n=0$ for odd $n$.
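In particular, $d_0=1$, and using $B_2^{(\ell)}\big(\tfrac12\ell\big)=-\frac{\ell}{12}$ we obtain $d_2=16\binom{1-b}{2}B_2^{(2-b)}\big(1-\tfrac12 b\big)=\tfrac23 b(1-b)(2-b)$.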
It follows from \eqref{U:gammaquotient} that \begin{gather*} B_n'(0)=\frac12 \frac{1}{1-b}d_{n+1} , \end{gather*} and then from \eqref{5:a}, \eqref{5:b} \begin{gather}\label{T:eq4} a_n(0)=\tilde d_n,\qquad b_n'(0)=-\frac12 \frac1{1-b} \tilde d_{n+1} , \end{gather} where $\tilde d_n$ is obtained from $d_n$ by replacing $b$ by $2-b$, that is, \begin{gather*} \tilde d_n= 4^n\binom{b-1}{n}B_n^{(b)}\left(\frac12 b\right) . \end{gather*}
Temme \cite[(3.22)]{T} obtained the asymptotic expansion~\eqref{U:asy3} involving polyno\-mials~$a^\dagger_n(z)$, $b^\dagger_n(z)$ in place of $a_n(z)$, $b_n(z)$. The polynomials $a_n^\dagger(z)$, $b_n^\dagger(z)$ are def\/ined as follows. Introduce the function \begin{gather*} f(s,z)=e^{z^2\mu(s)} \left(\frac{\frac12 s}{\sinh \frac12s}\right)^b,\qquad \mu(s)=\frac1s-\frac{1}{e^s-1}-\frac12 , \end{gather*} and its Maclaurin expansion \begin{gather*} f(s,z)=\sum_{k=0}^\infty c_k(z) s^k . \end{gather*} Then recursively, set $c_k^{(0)}=c_k$ and \begin{gather*}
c_k^{(n+1)}=4\big(z^2c_{k+2}^{(n)}+(1-b+k)c_{k+1}^{(n)}\big), \end{gather*} where $k\ge0 $ and $n\ge 0$. Then set \begin{gather*} a_n^\dagger=c_0^{(n)},\qquad b_n^\dagger =-2zc_1^{(n)}.\end{gather*}
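For instance, since $\mu(s)=-\frac{s}{12}+O\big(s^3\big)$ and $\big(\frac{\frac12 s}{\sinh\frac12 s}\big)^b=1-\frac{b}{24}s^2+O\big(s^4\big)$, the f\/irst Maclaurin coef\/f\/icients are $c_0(z)=1$ and $c_1(z)=-\frac{z^2}{12}$, so that $a_0^\dagger=1$ and $b_0^\dagger=\frac16 z^3$.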
\begin{Theorem}\label{T:t1} For every $n=0,1,2,\dots$, we have $a_n=a_n^\dagger$ and $b_n=b_n^\dagger$. \end{Theorem}
\begin{proof} The function $f$ satisf\/ies the partial dif\/ferential equation \begin{gather*} 4\frac{\partial f}{\partial s} =\frac{\partial^2f}{\partial z^2}+\frac1z\left(2b-1-4\frac{z^2}{s}\right)\frac{\partial f}{\partial z}-z^2 f.\end{gather*} This implies \begin{gather}\label{T:c}
4(k+1)c_{k+1}+4zc_{k+1}'=c_k''+\frac{2b-1}{z} c_k'-z^2 c_k, \qquad ()'=\frac{d}{dz}. \end{gather} By induction on $n$ one can show that~\eqref{T:c} is also true with $c_k$ replaced by $c_k^{(n)}$ for any $n=0,1,2,\dots$. If we use this extended equation with $k=0$ and $k=1$, then we obtain~\eqref{1:eq3},~\eqref{1:eq4} with $a_s^\dagger$, $b_s^\dagger$ in place of $A_s$, $B_s$, respectively.
When $z=0$, we have \begin{gather*} a_n^\dagger(0)=c_0^{(n)}(0)=4^n(1-b)_n c_n(0)=4^n\frac{(1-b)_n}{n!} B_n^{(b)}\left(\frac12b\right) .\end{gather*} Comparing with~\eqref{T:eq4} and using that $c_n(0)=0$ for odd $n$, we f\/ind, for all $n$, \begin{gather}\label{T:eq5} a_n^\dagger(0)=a_n(0). \end{gather} Since both $a_n$, $b_n$ and $a_n^\dagger, b_n^\dagger$ solve \eqref{1:eq3}, \eqref{1:eq4}, \eqref{T:eq5} implies that $a^\dagger_n=a_n$, $b^\dagger_n=b_n$ for all $n$. \end{proof}
\section{Concluding remark} In this paper we started from Olver's paper \cite{O}, added some results, and then applied them to the conf\/luent hypergeometric functions. A referee pointed out that Chapter~12 of Olver's book~\cite{O2} contains a reworked version of~\cite{O} also involving error bounds. It would be interesting to start from this book chapter and derive results analogous to the ones obtained in the present paper. However, in contrast to~\cite{O} the book chapter assumes that $\mu$ is positive while in our original problem~\cite{C}~$\mu$ is complex. Therefore, an extension of the results in~\cite[Chapter~12]{O2} to complex~$\mu$ would be required to obtain results for the conf\/luent hypergeometric functions in full generality.
\pdfbookmark[1]{References}{ref}
\LastPageEnding
\end{document}
\begin{document}
\title[Ohno-type relation for interpolated MZV]{Ohno-type relation for interpolated multiple zeta values}
\author{Minoru Hirose} \address[Minoru Hirose]{Institute For Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8602, Japan} \email{[email protected]}
\author{Hideki Murahara} \address[Hideki Murahara]{The University of Kitakyushu, 4-2-1 Kitagata, Kokuraminami-ku, Kitakyushu, Fukuoka, 802-8577, Japan} \email{[email protected]}
\author{Masataka Ono} \address[Masataka Ono]{Global Education Center, Waseda University, 1-6-1, Nishi-Waseda, Shinjuku-ku, Tokyo, 169-8050, Japan} \email{[email protected]}
\keywords{Multiple zeta(-star) values, Interpolated multiple zeta values, Ohno-type relations} \subjclass[2010]{Primary 11M32; Secondary 05A19}
\begin{abstract} We prove the Ohno-type relation for the interpolated multiple zeta values, which was introduced first by Yamamoto. Same type results for finite multiple zeta values are also given. Moreover, these relations give the sum formula for interpolated multiple zeta values and interpolated $\mathcal{F}$-multiple zeta values, which were proved by Yamamoto and Seki, respectively. \end{abstract}
\maketitle
\section{Introduction}
\subsection{Ohno-type relation for multiple zeta (star) values}
For a non-negative integer $r$ and an $r$-tuple of non-negative integers $\boldsymbol{e}=(e_1, \ldots, e_r)$, we set $\wt(\boldsymbol{e}) \coloneqq e_1+\cdots+e_r$ and $\dep(\boldsymbol{e})\coloneqq r$, and we call them the \emph{weight} and \emph{depth}, respectively. We call $\boldsymbol{e}$ an \emph{index} if all the entries of $\boldsymbol{e}$ are positive. In particular, there exists a unique index of depth 0, which we call an \emph{empty index} and denote by $\varnothing$. We call an index $\boldsymbol{k}=(k_1, \ldots, k_r)$ \emph{admissible} if $r\ge1$ and $k_r \ge2$, or $\boldsymbol{k} =\varnothing$.
For an admissible index $\boldsymbol{k}=(k_1, \ldots, k_r)$, \emph{the multiple zeta value} (MZV) $\zeta(\boldsymbol{k})$ and \emph{the multiple zeta-star value} (MZSV) $\zeta^{\star}(\boldsymbol{k})$ are real numbers defined by \begin{align*} \zeta(\boldsymbol{k}) \coloneqq \sum_{1\le n_1 <\cdots<n_r}\frac{1}{n^{k_1}_1\cdots n^{k_r}_r}, \qquad \zeta^{\star}(\boldsymbol{k}) \coloneqq \sum_{1\le n_1 \le \cdots \le n_r}\frac{1}{n^{k_1}_1\cdots n^{k_r}_r}. \end{align*} We set $\zeta(\varnothing)=\zeta^{\star}(\varnothing) \coloneqq 1$. It is known that there exist many $\mathbb{Q}$-linear relations among MZ(S)Vs. In this paper, we focus on Ohno-type relations for MZVs and MZSVs. For an admissible index $\boldsymbol{k}=(k_1, \ldots, k_r)$, we can write \begin{align*} \boldsymbol{k}=(\underbrace{1, \ldots, 1}_{a_1-1}, b_1+1, \ldots, \underbrace{1, \ldots, 1}_{a_s-1}, b_s+1) \end{align*} for $a_p, b_q \ge1$. Then we define \emph{the dual index} $\boldsymbol{k}^{\dagger}$ of $\boldsymbol{k}$ by \begin{align*} \boldsymbol{k}^{\dagger} \coloneqq (\underbrace{1, \ldots, 1}_{b_s-1}, a_s+1, \ldots, \underbrace{1, \ldots, 1}_{b_1-1}, a_1+1). \end{align*} For example, we have $(2,1,3)^{\dagger}=(\{1\}^{1-1}, 1+1, \{1\}^{2-1}, 2+1)^{\dagger}=(\{1\}^{2-1}, 2+1, \{1\}^{1-1}, 1+1)=(1,3,2)$. We set $\varnothing^{\dagger} \coloneqq \varnothing$. For $\boldsymbol{k}=(k_1, \ldots, k_r)$ and $\boldsymbol{e}=(e_1, \ldots, e_r)$, set $\boldsymbol{k} \oplus \boldsymbol{e} \coloneqq (k_1+e_1, \ldots, k_r+e_r)$. Moreover, we set $\varnothing \oplus \varnothing \coloneqq \varnothing$.
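For example, $(2,1,3) \oplus (1,0,2)=(3,1,5)$.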
\begin{thm}[{Ohno-type relation for MZV, Ohno \cite{Ohn99}}]\label{thm:Ohno-type_MZV} For an admissible index $\boldsymbol{k}$ and a non-negative integer $m$, we have \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} \zeta(\boldsymbol{k} \oplus \boldsymbol{e}) = \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} \zeta\bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr). \end{align*} \end{thm}
\begin{rem} In fact, Ohno \cite{Ohn99} proved a wider class of $\mathbb{Q}$-linear relations among MZVs, commonly called the \emph{Ohno relation}. \end{rem}
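For example, since $(2)^{\dagger}=(2)$ and $(3)^{\dagger}=(1,2)$, Theorem~\ref{thm:Ohno-type_MZV} with $\boldsymbol{k}=(2)$ and $m=1$ reads $\zeta(3)=\zeta(1,2)$, which is Euler's classical identity.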
\begin{thm}[{Ohno-type relation for MZSV, Hirose--Imatomi--Murahara--Saito \cite{HIMS20}}]\label{thm:Ohno-type_MZSV} For an admissible index $\boldsymbol{k}$ and a non-negative integer $m$, we have \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} c_1(\boldsymbol{k},\boldsymbol{e})\zeta^{\star}(\boldsymbol{k} \oplus \boldsymbol{e}) = \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} \zeta^{\star}\bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr). \end{align*} Here, $c_1(\boldsymbol{k}, \boldsymbol{e})=1$ if $\boldsymbol{k}=\varnothing$, and \begin{align*} c_1((k_1, \ldots, k_r),(e_1, \ldots, e_r)) \coloneqq \prod_{i=1}^r\binom{k_i+e_i+\delta_{i,1}-2}{e_i}, \quad \binom{n-1}{n} = \begin{cases} 1 & \text{if $n=0$}, \\ 0 & \text{otherwise}, \end{cases} \end{align*} and $\delta_{x,y}$ is Kronecker's delta. \end{thm}
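Similarly, since $c_1((2),(1))=\binom{2}{1}=2$, Theorem~\ref{thm:Ohno-type_MZSV} with $\boldsymbol{k}=(2)$ and $m=1$ reads $2\zeta^{\star}(3)=\zeta^{\star}(1,2)$, in accordance with the well-known evaluation $\zeta^{\star}(1,2)=2\zeta(3)$.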
\subsection{Interpolated MZV}
This subsection describes the Ohno-type relation for \emph{interpolated MZVs} that interpolates both Theorems~\ref{thm:Ohno-type_MZV} and \ref{thm:Ohno-type_MZSV}. For an admissible index $\boldsymbol{k}=(k_1, \ldots, k_r)$, Yamamoto \cite{Yam13} defined \emph{the interpolated multiple zeta value} $\zeta^{t}(\boldsymbol{k})$ by \begin{equation*} \label{eq:def_tMZV} \zeta^{t}(\boldsymbol{k}) \coloneqq \sum_{\substack{\square\textrm{ is either a comma `,' } \\
\textrm{ or a plus `+'}}}
t^{(\text{the number of `+'})}
\zeta(k_1 \square k_2 \square \cdots \square k_r) \in \mathbb{R}[t] \end{equation*} and $\zeta^{t}(\varnothing)\coloneqq1$. Note that the interpolated MZV is a common generalization of MZV and MZSV since $\zeta^0(\boldsymbol{k})=\zeta(\boldsymbol{k})$ and $\zeta^1(\boldsymbol{k})=\zeta^{\star}(\boldsymbol{k})$.
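For example, in depth two we have $\zeta^{t}(k_1,k_2)=\zeta(k_1,k_2)+t\zeta(k_1+k_2)$; in particular, $\zeta^{t}(1,2)=\zeta(1,2)+t\zeta(3)=(1+t)\zeta(3)$.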
Many $\mathbb{Q}[t]$-linear relations among interpolated MZV $\zeta^{t}(\boldsymbol{k})$ are known (for example, see \cite{Li19}, \cite{LQ17}, \cite{TW16}, and \cite{Wak17}). In this paper, we prove the Ohno-type relation for interpolated MZVs.
\subsection{Main result} Let $\mathcal{I}$ (resp.~$\mathcal{I}^t$) be the set of formal $\mathbb{Q}$-linear (resp.~$\mathbb{Q}[t]$-linear) sums of indices, $R$ a $\mathbb{Q}$-vector space, and $Z \colon \mathcal{I} \rightarrow R$ a $\mathbb{Q}$-linear map. If the equality \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} Z(\boldsymbol{k} \oplus \boldsymbol{e}) = \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} Z\bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr) \end{align*} holds for any admissible index $\boldsymbol{k}$ and any non-negative integer $m$, we say that \textit{$Z$ satisfies the Ohno-type relation}. For example, if $R=\mathbb{R}$, then $Z=\zeta$ satisfies the Ohno-type relation by Theorem~\ref{thm:Ohno-type_MZV}.
For a non-empty index $\boldsymbol{k}=(k_1, \ldots, k_r)$ and a non-negative integer $m$, define $g_m(\boldsymbol{k}; t) \in \mathcal{I}^t$ by \begin{align*}
&g_m(\boldsymbol{k};t) \\
&\coloneqq \sum_{l=1}^{r}(-t(1-t))^{r-l}
\sum_{\substack{\boldsymbol{e}=(e_1, \ldots, e_l) \in \mathbb{Z}^{l}_{\ge0} \\ e_{1}+\cdots+e_{l}=m}}
\sum_{1=i_{1}<\cdots<i_{l+1}=r+1}
\prod_{l'=1}^{l}f_{i_{l'+1}-i_{l'}-1}(k_{i_{l'}}'+\cdots+k'_{i_{l'+1}-1},e_{l'}) \\
&\qquad \times\left((k_{i_{1}}+\cdots+k_{i_{2}-1},\dots,k_{i_{l}}+\cdots+k_{i_{l+1}-1})\oplus\boldsymbol{e}\right), \end{align*} where \begin{equation*}
f_{i}(k,e)
\coloneqq \sum_{j=0}^{e}{e-j \choose i}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j} \in \mathbb{Q}[t],
\qquad k_{j}' \coloneqq k_{j}+\delta_{j,1}. \end{equation*} We set $f_i(k, -1)\coloneqq0, g_{-1}(\boldsymbol{k}; t) \coloneqq 0$, and $g_{m}(\varnothing; t)\coloneqq \delta_{m,0}\cdot\varnothing$. We define a $\mathbb{Q}[t]$-linear map $I^t \colon \mathcal{I}^t \rightarrow \mathcal{I}^t$ by $I^{t}(\varnothing) \coloneqq \varnothing$ and \begin{align*} I^{t}(k_1, \ldots, k_r) \coloneqq \sum_{\substack{\square\textrm{ is either a comma `,' } \\
\textrm{ or a plus `+'}}}
t^{(\text{the number of `+'})} (k_1 \square k_2 \square \cdots \square k_r) \in \mathcal{I}^t. \end{align*} Note that if we extend the definition of MZV $\zeta$ and interpolated MZV $\zeta^{t}$ to the $\mathbb{Q}[t]$-liner maps $\mathcal{I}^t \rightarrow \mathbb{R}[t]$, we have $\zeta^{t}=\zeta \circ I^t$.
\begin{thm}[{Main Theorem}] \label{thm:main} Let $R$ be a $\mathbb{Q}$-vector space and $Z \colon \mathcal{I} \rightarrow R$ a $\mathbb{Q}$-linear map satisfying the Ohno-type relation. We extend $Z$ to the $\mathbb{Q}[t]$-linear map $Z \colon \mathcal{I}^t \rightarrow R\otimes\mathbb{Q}[t]$. Set $Z^t \coloneqq Z\circ I^t$. Then, for any admissible index $\boldsymbol{k}$ and a non-negative integer $m$, we have
\begin{align}\label{eq:main}
Z^{t}\bigl(g_m(\boldsymbol{k};t)\bigr)
=\sum_{\substack{\wt (\boldsymbol{e})=m \\ \dep (\boldsymbol{e})=\dep (\boldsymbol{k}^{\dagger})}}
Z^{t}\bigl((\boldsymbol{k}^{\dagger}\oplus\boldsymbol{e})^{\dagger}\bigr).
\end{align} \end{thm}
\begin{cor} For any admissible index $\boldsymbol{k}$ and a non-negative integer $m$, \[ \sum_{\substack{\wt (\boldsymbol{e})=m \\ \dep (\boldsymbol{e})=\dep (\boldsymbol{k}^{\dagger})}} Z^{t}\bigl((\boldsymbol{k}^{\dagger}\oplus\boldsymbol{e})^{\dagger}\bigr) \] is a $\mathbb{Q}[t]$-linear combination of $Z(\boldsymbol{l})$'s with $\dep(\boldsymbol{l})\leq \dep(\boldsymbol{k})$. \end{cor}
\begin{rem} \begin{enumerate} \item For $\boldsymbol{k}=(k_1, \ldots, k_r)$, we have
\begin{align*}
g_m(\boldsymbol{k};0)
&=\sum_{\substack{e_{1}+\cdots+e_{r}=m \\ e_{1},\dots,e_{r}\ge 0 }}
\biggl\{\prod_{i=1}^{r}f_{0}(k_{i}',e_{i})\biggr\}\times(k_{1}+e_{1},\dots,k_{r}+e_{r})\\
&=\sum_{\substack{e_{1}+\cdots+e_{r}=m \\ e_{1},\dots,e_{r}\ge 0 }}(k_{1}+e_{1},\dots,k_{r}+e_{r})
\end{align*}
and
\begin{align*}
g_m(\boldsymbol{k};1)
&=\sum_{\substack{e_{1}+\cdots+e_{r}=m \\ e_{1},\dots,e_{r}\ge 0 }}
\prod_{i=1}^{r}
\binom{k_{i}+e_{i}+\delta_{i,1}-2}{e_{i}}
\times(k_{1}+e_{1},\dots,k_{r}+e_{r}).
\end{align*}
Therefore, if $(R, Z)=(\mathbb{R}, \zeta)$, this theorem gives the Ohno-type relations for MZVs (Theorem~\ref{thm:Ohno-type_MZV}) and MZSVs (Theorem~\ref{thm:Ohno-type_MZSV}). \item The cases $r=1,2$ are described as follows: \begin{align} &Z^t\bigl(g_m((k);t)\bigr) = f_0(k+1,m)Z(k+m),\label{eq:dep1}\\ &Z^t\bigl(g_m((k_1,k_2);t)\bigr)\label{eq:dep2}\\ &=-t(1-t)f_1(k_1+k_2+1, m)Z(k_1+k_2+m)\notag\\ &\quad +\sum_{\substack{e_1+e_2=m \\ e_1, e_2 \ge0}}f_0(k_1+1, e_1)f_0(k_2, e_2) \bigl\{ Z(k_1+e_1, k_2+e_2)+Z(k_1+k_2+m)t \bigr\}.\notag \end{align} In Section \ref{sec:applications}, we deduce the sum formulas for interpolated MZVs and $\mathcal{F}$-MZVs from (\ref{eq:dep1}) and the special case of (\ref{eq:dep2}), respectively.
\end{enumerate} \end{rem}
This paper is organized as follows. In Section 2, we give the recurrence relation of $g_m(\boldsymbol{k}; t)$, which plays a key role in the proof of our main theorem. In Section 3, we prove our main theorem. In Section 4, as an application, we give the Ohno-type relations for interpolated $\mathcal{F}$-MZVs. Moreover, we deduce the sum formulas for interpolated MZVs and $\mathcal{F}$-MZVs from the respective Ohno-type relations.
\section{Recurrence relation of $g_m(\boldsymbol{k}; t)$ and $G_m(\boldsymbol{k}; t)$} In this section, we give the recurrence relations of $g_m(\boldsymbol{k}; t)$ and $G_m(\boldsymbol{k}; t) \coloneqq I^t(g_m(\boldsymbol{k}; t))$. First, we introduce the arrow notation. For a non-empty index $\boldsymbol{k}=(k_1, \ldots, k_r)$, set \begin{align*} \boldsymbol{k}_{\uparrow} \coloneqq (k_1, \ldots, k_{r-1}, k_r+1), \quad \boldsymbol{k}_{\rightarrow} \coloneqq (k_1, \ldots, k_r, 1). \end{align*} Moreover, if $w =\sum_{i=1}^n a_i(t)\boldsymbol{k}_i \in \mathcal{I}^t$ and $\boldsymbol{k}_1, \ldots, \boldsymbol{k}_n \neq \varnothing$, we define $w_{\uparrow} \coloneqq \sum_{i=1}^n a_i(t)(\boldsymbol{k}_i)_{\uparrow}$, $w_{\rightarrow} \coloneqq \sum_{i=1}^n a_i(t)(\boldsymbol{k}_i)_{\rightarrow}$, respectively.
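For example, $(2,1,3)_{\uparrow}=(2,1,4)$ and $(2,1,3)_{\rightarrow}=(2,1,3,1)$.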
\begin{prop}\label{prop:g_noarrow} For a non-negative integer $m$, we have \begin{align*} g_m((1);t) = (1)\oplus(m). \end{align*} \end{prop} \begin{proof} It follows from the following calculation. \begin{align*} g_m((1);t) & = f_0(2,m)\left( (1)\oplus(m) \right) = (1)\oplus(m).\qedhere \end{align*} \end{proof}
\begin{prop}\label{prop:g_uparrow} For a non-empty index $\boldsymbol{k}$ and a positive integer $m$, we have \begin{align*}
g_m(\boldsymbol{k}_{\uparrow};t)=g_m(\boldsymbol{k}; t)_{\uparrow}+tg_{m-1}(\boldsymbol{k}_{\uparrow};t)_{\uparrow}. \end{align*} \end{prop}
\begin{proof} For $k \ge1, i\ge0$, and $e\ge1$, we have \begin{align*} f_i(k+1,e) &=\sum_{j=0}^e \binom{e-j}{i} \binom{k+e-i-1}{j} t^j(1-t)^{e-i-j} \\ &=\sum_{j=0}^e \binom{e-j}{i} \biggl\{ \binom{k+e-i-2}{j} +\binom{k+e-i-2}{j-1} \biggr\}t^j(1-t)^{e-i-j} \\ &=f_i(k,e)+tf_i(k+1,e-1). \end{align*} This equality also holds for $e=0$ since $f_i(k+1, 0)=f_i(k, 0)=\delta_{i,0}$ and $f_i(k+1, -1)=0$.
Thus, we obtain \begin{align*} &g_m(\boldsymbol{k}_{\uparrow}; t)-g_m(\boldsymbol{k}; t)_{\uparrow} \\ &=\sum_{l=1}^{r}(-t(1-t))^{r-l} \sum_{\substack{e_{1}+\cdots+e_{l}=m\\ e_{1},\dots,e_{l} \ge 0}} \sum_{1=i_{1}<\cdots<i_{l+1}=r+1} \prod_{l'=1}^{l-1} f_{i_{l'+1}-i_{l'}-1}(k_{i_{l'}}'+\cdots+k'_{i_{l'+1}-1},e_{l'}) \\ &\qquad \times\prod_{l'=l}^{l}tf_{i_{l'+1}-i_{l'}-1}(k_{i_{l'}}'+\cdots+k'_{i_{l'+1}-1}+1,e_{l'}-1) \\ &\qquad \times\left((k_{i_{1}}+\cdots+k_{i_{2}-1},\dots,k_{i_{l}}+\cdots+k_{i_{l+1}-1})\oplus\boldsymbol{e}\right) \\ &=t\sum_{l=1}^{r}(-t(1-t))^{r-l} \sum_{\substack{e_{1}+\cdots+e_{l}=m-1\\ e_{1},\dots,e_{l} \ge 0}} \sum_{1=i_{1}<\cdots<i_{l+1}=r+1} \prod_{l'=1}^{l-1} f_{i_{l'+1}-i_{l'}-1}(k_{i_{l'}}'+\cdots+k'_{i_{l'+1}-1},e_{l'}) \\ &\qquad \times\prod_{l'=l}^{l} f_{i_{l'+1}-i_{l'}-1}(k_{i_{l'}}'+\cdots+k'_{i_{l'+1}-1}+1,e_{l'}) \\ &\qquad \times\left((k_{i_{1}}+\cdots+k_{i_{2}-1},\dots,k_{i_{l}}+\cdots+k_{i_{l+1}-1})\oplus\boldsymbol{e}\right) \\
&=tg_{m-1}(\boldsymbol{k}_{\uparrow}; t)_{\uparrow}, \end{align*} which completes the proof. \end{proof}
\begin{lem}\label{lem:recurrence_f} For a positive integer $k$ and non-negative integers $e$ and $i$, we have \begin{equation*} f_{i+1}(k+1,e)-(1-t)f_{i+1}(k+2,e)+f_{i}(k,e)-f_{i}(k+1,e)=0. \end{equation*} \end{lem}
\begin{proof} Note that \begin{align*} f_{i+1}(k+1,e) & =\sum_{j=0}^{e}{e-j \choose i+1}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j-1},\\ f_{i+1}(k+2,e) & =\sum_{j=0}^{e}{e-j \choose i+1}{k+e-i-1 \choose j}t^{j}(1-t)^{e-i-j-1},\\ f_{i}(k,e) & =\sum_{j=0}^{e}{e-j \choose i}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j},\\ f_{i}(k+1,e) & =\sum_{j=0}^{e}{e-j \choose i}{k+e-i-1 \choose j}t^{j}(1-t)^{e-i-j}. \end{align*} Then, we have \begin{align*} f_{i}(k,e) & =\sum_{j=0}^{e}{e-j \choose i}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j}\\
& =\sum_{j=0}^{e}{e-j+1 \choose i+1}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j}\\
& \quad -\sum_{j=0}^{e}{e-j \choose i+1}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j}\\
& =\sum_{j=0}^{e}{e-j+1 \choose i+1}{k+e-i-2 \choose j}t^{j}(1-t)^{e-i-j}
-(1-t)f_{i+1}(k+1,e). \end{align*} Therefore, we obtain \begin{align*} (1-t)f_{i+1}(k+1, e)+f_i(k,e) =\sum_{j=0}^{e}\binom{e-j+1}{i+1}\binom{k+e-i-2}{j}t^{j}(1-t)^{e-i-j}. \end{align*} Putting $k+1$ into $k$ in the above equality, we also have \begin{multline*} (1-t)f_{i+1}(k+2,e)+f_i(k+1,e) =\sum_{j=0}^{e}\binom{e-j+1}{i+1}\binom{k+e-i-1}{j}t^{j}(1-t)^{e-i-j}. \end{multline*} Thus, we obtain \begin{align*} & (1-t)f_{i+1}(k+1, e)+f_i(k,e) - (1-t)f_{i+1}(k+2,e) - f_i(k+1,e)\\ & = \sum_{j=0}^{e}\binom{e-j+1}{i+1} \left( \binom{k+e-i-2}{j} - \binom{k+e-i-1}{j} \right) t^{j}(1-t)^{e-i-j}\\ & = -\sum_{j=0}^{e}\binom{e-j+1}{i+1} \binom{k+e-i-2}{j-1} t^{j}(1-t)^{e-i-j}\\ & \overset{j'=j-1}{=} -\sum_{j'=-1}^{e-1}\binom{e-j'}{i+1} \binom{k+e-i-2}{j'} t^{j'+1}(1-t)^{e-i-j'-1}\\ & = -tf_{i+1}(k+1,e). \end{align*} This completes the proof. \end{proof}
\begin{prop}\label{prop:g_rightarrow} For a non-empty index $\boldsymbol{k}$ and a non-negative integer $m$, we have \begin{align*} g_m(\boldsymbol{k}_{\rightarrow};t)=(1-t)g_m(\boldsymbol{k};t)_{\uparrow}+g_m(\boldsymbol{k};t)_{\rightarrow}-(1-t)g_m(\boldsymbol{k}_{\uparrow};t)+(1-t)g_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow}; t). \end{align*} \end{prop}
\begin{proof} Note that since \begin{align*} (1-t)g_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow};t)_{\uparrow} =\frac{1-t}{t}g_m(\boldsymbol{k}_{\rightarrow\uparrow}; t)-\frac{1-t}{t}g_m(\boldsymbol{k}_{\rightarrow}; t)_{\uparrow}, \end{align*} the claim is equivalent to \begin{align*} g_m(\boldsymbol{k}_{\rightarrow};t)_{\uparrow} &=(1-t)g_m(\boldsymbol{k};t)_{\uparrow\uparrow}+g_m(\boldsymbol{k};t)_{\rightarrow\uparrow}-(1-t)g_m(\boldsymbol{k}_{\uparrow};t)_{\uparrow}+(1-t)g_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow}; t)_{\uparrow}\\ &=(1-t)g_m(\boldsymbol{k};t)_{\uparrow\uparrow}+g_m(\boldsymbol{k};t)_{\rightarrow\uparrow}\\ &\qquad -(1-t)g_m(\boldsymbol{k}_{\uparrow};t)_{\uparrow}+\frac{1-t}{t}g_m(\boldsymbol{k}_{\rightarrow\uparrow};t)-\frac{1-t}{t}g_m(\boldsymbol{k}_{\rightarrow};t)_{\uparrow}, \end{align*} and this is equivalent to \begin{equation}\label{eq:key_of_g_ra} g_m(\boldsymbol{k}_{\rightarrow};t)_{\uparrow}=t(1-t)g_m(\boldsymbol{k};t)_{\uparrow\uparrow}+tg_m(\boldsymbol{k};t)_{\rightarrow\uparrow}-t(1-t)g_m(\boldsymbol{k}_{\uparrow};t)_{\uparrow}+(1-t)g_m(\boldsymbol{k}_{\rightarrow\uparrow};t). \end{equation}
Since $g_m(\boldsymbol{k};t)$ also can be written as \begin{align*} g_m(\boldsymbol{k};t) &=\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l} \sum_{\substack{e_{1}+\cdots+e_{l}=m \\ e_{1},\dots,e_{l}\geq0}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0} }\prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'})\\
& \ \ \times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l}\right), \end{align*} we have \begin{align} \label{eq:S_1+S_2} g_m(\boldsymbol{k}_{\rightarrow};t)=S_{1}+S_{2}, \end{align} where \begin{align*} S_{1} & \coloneqq\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l}\sum_{\substack{e_{1}+\cdots+e_{l+1}=m\\ e_{1},\dots,e_{l+1}\geq0 } }\sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l})\\ {\rm dep}(\boldsymbol{k}_{i})>0 } }\prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'})\\
& \ \ \times f_{0}(1,e_{l+1})\times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l},1+e_{l+1}\right) \end{align*} and \begin{align*} S_{2} & \coloneqq\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l+1}\\ &\ \ \times\sum_{\substack{e_{1}+\cdots+e_{l}=m\\ e_{1},\dots,e_{l}\geq0 } }\sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l})\\ {\rm dep}(\boldsymbol{k}_{i})>0 } }\prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})+\delta_{l',l}-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1}+\delta_{l',l},e_{l'})\\
& \ \ \times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l}\right)_{\uparrow}. \end{align*} Similarly,
we have \begin{align} \label{eq:S_3+S_4} g_m(\boldsymbol{k}_{\rightarrow\uparrow};t)=S_{3}+S_{4}, \end{align} where \begin{align*} S_{3} & \coloneqq\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l} \sum_{\substack{e_{1}+\cdots+e_{l+1}=m \\ e_{1},\dots,e_{l+1}\geq0}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} \prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'})\\ & \ \ \times f_{0}(2,e_{l+1})\times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l},2+e_{l+1}\right) \end{align*} and \begin{align*} S_{4} & \coloneqq\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l+1}\\ & \ \ \times \sum_{\substack{e_{1}+\cdots+e_{l}=m \\ e_{1},\dots,e_{l}\geq0}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} \prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})+\delta_{l',l}-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1}+2\delta_{l',l},e_{l'})\\ & \ \ \times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l}\right)_{\uparrow\uparrow}. \end{align*} Since \begin{align*} f_{0}(1,e) & =\sum_{j=0}^{e}{e-1 \choose j}t^{j}(1-t)^{e-j} =\begin{cases} 1 & \text{if $e=0$,}\\ 1-t & \text{if $e>0$} \end{cases} \end{align*} and \begin{equation*} f_{0}(2,e)=\sum_{j=0}^{e}{e \choose j}t^{j}(1-t)^{e-j}=1, \end{equation*} we have \begin{align} \label{eq:S_1-(1-t)S_3} &(S_{1})_{\uparrow}-(1-t)S_{3}\\ &=\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l} \sum_{\substack{e_{1}+\cdots+e_{l+1}=m \\ e_{1},\dots,e_{l+1}\geq0}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} \prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'}) \nonumber \\ &\quad \times\left(f_{0}(1,e_{l+1})-(1-t)f_{0}(2,e_{l+1})\right) \times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l},2+e_{l+1}\right) \nonumber \\
&=\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l}
\sum_{\substack{e_{1}+\cdots+e_{l+1}=m \\ e_{1},\dots,e_{l+1}\geq0}}
\sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}}
\prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'}) \nonumber \\
&\quad \times t\delta_{e_{l+1},0}
\times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l},2+e_{l+1}\right) \nonumber \\
&=t\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l}
\sum_{\substack{e_{1}+\cdots+e_{l}=m \\ e_{1},\dots,e_{l}\geq0}}
\sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}}
\prod_{l'=1}^{l}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'}) \nonumber \\
& \quad \times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l},2\right) \nonumber \\
& =tg_m(\boldsymbol{k};t)_{\rightarrow\uparrow}. \nonumber \end{align} Moreover, if we set $i \coloneqq \dep(\boldsymbol{k}_l)-1, k \coloneqq \wt(\boldsymbol{k}_l)+\delta_{l,1},$ and $e \coloneqq e_l$, from Lemma \ref{lem:recurrence_f}, we have \begin{align*} &(S_{2})_{\uparrow}-(1-t)S_{4}-t(1-t)g_m(\boldsymbol{k};t)_{\uparrow\uparrow}+t(1-t)g_m(\boldsymbol{k}_{\uparrow};t)_{\uparrow}\\ &=\sum_{l=1}^{{\rm dep}(\boldsymbol{k})}(-t(1-t))^{{\rm dep}(\boldsymbol{k})-l+1} \sum_{\substack{e_{1}+\cdots+e_{l}=m \\ e_{1},\dots,e_{l}\geq0}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} \prod_{l'=1}^{l-1}f_{{\rm dep}(\boldsymbol{k}_{l'})-1}({\rm wt}(\boldsymbol{k}_{l'})+\delta_{l',1},e_{l'})\\
& \quad \times\bigl(f_{i+1}(k+1,e)-(1-t)f_{i+1}(k+2,e)+f_{i}(k,e)-f_{i}(k+1,e)\bigr)\\
& \quad \times\left({\rm wt}(\boldsymbol{k}_{1})+e_{1},\dots,{\rm wt}(\boldsymbol{k}_{l})+e_{l}\right)_{\uparrow\uparrow}\\
&=0. \end{align*} That is, we have \begin{equation} \label{eq:S_2-(1-t)S_4} (S_{2})_{\uparrow}-(1-t)S_{4}=t(1-t)g_m(\boldsymbol{k};t)_{\uparrow\uparrow}-t(1-t)g_m(\boldsymbol{k}_{\uparrow};t)_{\uparrow}. \end{equation} From \eqref{eq:S_1+S_2}, \eqref{eq:S_3+S_4}, \eqref{eq:S_1-(1-t)S_3}, and \eqref{eq:S_2-(1-t)S_4}, we complete the proof. \end{proof}
Recall $G_m(\boldsymbol{k};t) \coloneqq I^t(g_m(\boldsymbol{k};t))$. From Propositions~\ref{prop:g_noarrow}, \ref{prop:g_uparrow} and \ref{prop:g_rightarrow}, we obtain the recurrence relations of $G_m(\boldsymbol{k};t)$.
\begin{prop} \label{prop:recurrence_G} For any non-empty index $\boldsymbol{k}$ and any non-negative integer $m$, we have \begin{align*} G_m((1);t) &=(1)\oplus(m),\\ G_m(\boldsymbol{k}_{\uparrow};t) &=G_m(\boldsymbol{k};t)_{\uparrow}+tG_{m-1}(\boldsymbol{k}_{\uparrow};t)_{\uparrow}, \\ G_m(\boldsymbol{k}_{\rightarrow};t)
& =G_m(\boldsymbol{k};t)_{\uparrow}+G_m(\boldsymbol{k};t)_{\rightarrow}-(1-t)G_m(\boldsymbol{k}_{\uparrow};t)+(1-t)G_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow};t). \end{align*} \end{prop}
\section{Proof of Main theorem} In this section, we prove Theorem~\ref{thm:main}. For a non-empty index $\boldsymbol{k}=(k_1, \ldots, k_r)$ and an integer $m\in \mathbb{Z}_{\ge -1}$, set
\begin{align*}
h_m(\boldsymbol{k}; t)
\coloneqq&\sum_{l=1}^{r}
\sum_{\substack{e_{1}+\cdots+e_{l}+e_{1}'+\cdots+e_{l}'=m \\ e_1, \ldots, e_l, e'_1, \ldots, e'_l \ge0}}
t^{r-l+e_{1}+\cdots+e_{l}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}}\\
&\qquad \prod_{l'=1}^{l}{\wt(\boldsymbol{k}_{l'})-\dep(\boldsymbol{k}_{l'})+e_{l'}+\delta_{l',1}-2 \choose e_{l'}}\\
&\qquad \times\left((\wt(\boldsymbol{k}_{1}),\dots,\wt(\boldsymbol{k}_{l}))\oplus\boldsymbol{e}\oplus\boldsymbol{e}'\right) \in \mathcal{I}^t. \end{align*} If $m=-1$ then the above summation is empty, and therefore $h_{-1}(\boldsymbol{k}; t)=0$.
\begin{prop} \label{prop:G=h} For any non-empty index $\boldsymbol{k}$ and any non-negative integer $m$, we have \begin{align*}
G_m(\boldsymbol{k};t)=h_m(\boldsymbol{k}; t). \end{align*} \end{prop}
To show Proposition \ref{prop:G=h}, we will prove that $h_m(\boldsymbol{k};t)$ satisfies the same recurrence relations as $G_m(\boldsymbol{k}; t)$.
\begin{prop} \label{prop:recurrence_h}
For any non-empty index $\boldsymbol{k}$ and any non-negative integer $m$, we have
\begin{align*}
h_m((1);t)
&=(1)\oplus(m),\\
h_m(\boldsymbol{k}_{\uparrow};t)
&=h_m(\boldsymbol{k};t)_{\uparrow}+th_{m-1}(\boldsymbol{k}_{\uparrow};t)_{\uparrow},\\
h_m(\boldsymbol{k}_{\rightarrow};t)
& =h_m(\boldsymbol{k};t)_{\uparrow}+h_m(\boldsymbol{k};t)_{\rightarrow}-(1-t)h_m(\boldsymbol{k}_{\uparrow};t)+(1-t)h_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow};t).
\end{align*} \end{prop}
We introduce the notation of the algebraic setting for interpolated MZVs \cite{Hof97} (see also \cite{Li19}, for example). Let $\mathbb{Q}\langle x,y \rangle$ be the non-commutative polynomial ring over $\mathbb{Q}$ with variables $x$ and $y$, and put $\mathfrak{H}_t \coloneqq \mathbb{Q}\langle x,y \rangle[t] \supset \mathfrak{H}^1_t \coloneqq \mathbb{Q}[t]+y\mathfrak{H}_t$. By corresponding $(k_1, \ldots, k_r)$ to $yx^{k_1-1}\cdots yx^{k_r-1}$, we have $\mathcal{I}^t \cong \mathfrak{H}^1_t$ as $\mathbb{Q}[t]$-modules. For an index $\boldsymbol{k}$ and $m \in \mathbb{Z}_{\ge0}$, let $\mathfrak{h}_m(\boldsymbol{k}; t)$ be the element in $\mathfrak{H}^1_t$ corresponding to $h_m(\boldsymbol{k};t)$. We set $\mathfrak{h}_{-1}(\boldsymbol{k};t)\coloneqq0$.
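Under this correspondence, for example, the index $(1,2)$ is sent to the word $y\cdot yx=y^2x$ and $(2,1,3)$ to $yx\cdot y\cdot yx^2$.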
We define the map $\sigma$ as an automorphism of $\mathbb{Q}\langle x, y\rangle[[u]]$ satisfying $\sigma(x)=x$ and $\sigma(y)=y(1-xu)^{-1}$. Note that, for $k_1, \ldots, k_r \in \mathbb{Z}_{\ge1}$, we have \begin{align*} \sigma\bigl(yx^{k_1-1}\cdots yx^{k_r-1}\bigr) =\sum_{N=0}^{\infty}u^N\sum_{\substack{e_1+\cdots+e_r=N \\e_1, \ldots, e_r \ge0}} yx^{k_1+e_1-1}\cdots yx^{k_r+e_r-1}. \end{align*} By the above notations, we can write the generating function of $\mathfrak{h}_m(\boldsymbol{k}; t)$ as \begin{align*} \sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}; t)u^m = \sigma(X(\boldsymbol{k})) \end{align*} where \begin{align*} X(\boldsymbol{k})\coloneqq \sum_{l=1}^{r}\sum_{e_{1},\dots,e_{l}\ge 0}t^{r-l}(tu)^{e_{1}+\cdots+e_{l}} \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} & \prod_{l'=1}^l{\wt(\boldsymbol{k}_{l'})-\dep(\boldsymbol{k}_{l'})+e_{l'}+\delta_{l',1}-2 \choose e_{l'}}\\ & \quad \times yx^{\wt(\boldsymbol{k}_{1})+e_{1}-1}\cdots yx^{\wt(\boldsymbol{k}_{l})+e_{l}-1} \end{align*} and $\sigma$ is naturally extended to the automorphism of $\mathbb{Q}\langle x, y\rangle[[t,u]]$.
\begin{lem}\label{lem:gen_h} For any non-empty index $\boldsymbol{k}=(k_1, \ldots, k_r)$, we have \begin{align*} \sum_{m=0}^{\infty} \mathfrak{h}_m(\boldsymbol{k};t)u^m =y\frac{1}{1-xu}\Bigl(\frac{x}{1-xtu}\Bigr)^{k_1-1} \prod_{l=2}^r \left\{ \Bigl(y\frac{1-xtu}{1-xu}+xt\Bigr)\Bigl(\frac{x}{1-xtu}\Bigr)^{k_l-1} \right\}. \end{align*} \end{lem}
\begin{proof}
Since \begin{align*} \sum_{e=0}^{\infty}{s+e \choose e}A^{e}=\frac{1}{(1-A)^{s+1}}, \end{align*} we have \begin{align*} X(\boldsymbol{k}) &=\sum_{l=1}^r t^{r-l}\sum_{e_1, \ldots, e_l \ge0}\sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} yx^{\wt(\boldsymbol{k}_1)-1}(xtu)^{e_1}\cdots yx^{\wt(\boldsymbol{k}_l)-1}(xtu)^{e_l}\\ &\quad \times \prod_{l'=1}^l{\wt(\boldsymbol{k}_l')-\dep(\boldsymbol{k}_l')+e_{l'}+\delta_{l',1}-2 \choose e_{l'}}\\ &=\sum_{l=1}^r t^{r-l}\sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}}\prod_{l'=1}^l \left\{ yx^{\wt(\boldsymbol{k}_{l'})-1} \Bigl(\frac{1}{1-xtu}\Bigr)^{\wt(\boldsymbol{k}_{l'})-\dep(\boldsymbol{k}_{l'})+\delta_{l',1}-1} \right\}\\ &=\sum_{l=1}^r \sum_{\substack{\boldsymbol{k}=(\boldsymbol{k}_{1},\dots,\boldsymbol{k}_{l}) \\ {\rm dep}(\boldsymbol{k}_{i})>0}} \prod_{l'=1}^l \left\{ yx^{\dep(\boldsymbol{k}_{l'})-\delta_{l',1}} \Bigl(\frac{x}{1-xtu}\Bigr)^{\wt(\boldsymbol{k}_{l'})-\dep(\boldsymbol{k}_{l'})+\delta_{l',1}-1}t^{\dep(\boldsymbol{k}_{l'})-1} \right\}. \end{align*} Here, since \begin{align*} & yx^{\dep(\boldsymbol{k}_{l'})-\delta_{l',1}} \Bigl(\frac{x}{1-xtu}\Bigr)^{\wt(\boldsymbol{k}_{l'})-\dep(\boldsymbol{k}_{l'})+\delta_{l',1}-1}t^{\dep(\boldsymbol{k}_{l'})-1}\\ & = y(1-xtu)^{1-\delta_{l',1}}\Bigl(\frac{x}{1-xtu}\Bigr)^{k_{l',1}-1} \times xt\Bigl(\frac{x}{1-xtu}\Bigr)^{k_{l',2}-1}
\cdots \times xt\Bigl(\frac{x}{1-xtu}\Bigr)^{k_{l',\dep(\boldsymbol{k}_{l'})}-1} \end{align*} for $\boldsymbol{k}_{l'}=(k_{l',1},\dots,k_{l',\dep(\boldsymbol{k}_{l'})})$, we obtain \begin{equation} \label{eq:calcX} X(\boldsymbol{k})=y\Bigl(\frac{x}{1-xtu}\Bigr)^{k_1-1} \prod_{l=2}^r \left\{ \bigl(y(1-xtu)+xt\bigr)\Bigl(\frac{x}{1-xtu}\Bigr)^{k_l-1} \right\}. \end{equation} Moreover, since \begin{align*} \sigma(x)=x,\qquad \sigma(y)=y\frac{1}{1-xu}, \qquad \sigma\bigl(y(1-xtu)+xt\bigr)=y\frac{1-xtu}{1-xu}+xt, \end{align*} we complete the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:recurrence_h}] By definition, \begin{align*} h_m((1);t)=\sum_{e+e'=m}t^e{e-1 \choose e}((1)\oplus(e)\oplus(e'))=(1)\oplus(m). \end{align*}
From Lemma \ref{lem:gen_h}, we have
\begin{align*}
\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}_{\uparrow};t)u^m
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}; t)u^m\Bigr)\times \frac{x}{1-xtu}, \\
\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}; t)_{\uparrow}u^m
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}; t)u^m\Bigr)\times x,\\
\sum_{m=0}^{\infty}\mathfrak{h}_{m-1}(\boldsymbol{k}_{\uparrow};t)_{\uparrow}u^m
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}; t)u^m\Bigr)\times \frac{x^2u}{1-xtu}.
\end{align*}
Thus, we have
\begin{align*}
&\sum_{m=0}^{\infty}
\bigl\{
\mathfrak{h}_m(\boldsymbol{k}_{\uparrow};t)-\mathfrak{h}_m(\boldsymbol{k}; t)_{\uparrow}-t\mathfrak{h}_{m-1}(\boldsymbol{k}_{\uparrow}; t)_{\uparrow}
\bigr\}u^m\\
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}; t)u^m\Bigr)
\times \Bigl(\frac{x}{1-xtu}-x-t\frac{x^2u}{1-xtu}\Bigr)=0.
\end{align*}
Similarly, from Lemma \ref{lem:gen_h}, we have
\begin{align*}
\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}_{\rightarrow};t)u^{m}
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)u^{m}\Bigr)
\times \Bigl(y\frac{1-xtu}{1-xu}+xt\Bigr),\\
\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)_{\uparrow}u^{m}
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)u^{m}\Bigr)
\times x,\\
\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)_{\rightarrow}u^{m}
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)u^{m}\Bigr)
\times y,\\
\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k}_{\uparrow};t)u^{m}
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)u^{m}\Bigr)
\times \Bigl(\frac{x}{1-xtu}\Bigr), \\
\sum_{m=0}^{\infty}\mathfrak{h}_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow};t)u^{m}
&=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)u^{m}\Bigr)
\times \Bigl(y\frac{1-xtu}{1-xu}+xt\Bigr)\Bigl(\frac{x}{1-xtu}\Bigr)u.
\end{align*}
Therefore, we obtain \begin{align*} &\sum_{m=0}^{\infty} \bigl\{\mathfrak{h}_m(\boldsymbol{k}_{\rightarrow}; t) -\mathfrak{h}_m(\boldsymbol{k}; t)_{\uparrow} -\mathfrak{h}_m(\boldsymbol{k}; t)_{\rightarrow} +(1-t)\mathfrak{h}_m(\boldsymbol{k}_{\uparrow}; t) -(1-t)\mathfrak{h}_{m-1}(\boldsymbol{k}_{\rightarrow\uparrow}; t) \bigr\}u^m\\ &=\Bigl(\sum_{m=0}^{\infty}\mathfrak{h}_m(\boldsymbol{k};t)u^{m}\Bigr)\times S, \end{align*} where \begin{align*} S & =\left(y\frac{1-xtu}{1-xu}+xt\right)-x-y+\left(\frac{x}{1-xtu}\right)(1-t)\\
& \quad -\left(y\frac{1-xtu}{1-xu}+xt\right)\left(\frac{x}{1-xtu}\right)(1-t)u\\
& =0. \end{align*} This completes the proof. \end{proof}
\begin{proof}[Proof of Proposition \ref{prop:G=h}] Note that $G_m(\boldsymbol{k};t)$ and $h_m(\boldsymbol{k};t)$ are characterized by the recurrence relations given in Propositions \ref{prop:recurrence_G} and \ref{prop:recurrence_h}, respectively. Since these recurrence relations have exactly the same form, $G_m(\boldsymbol{k};t) = h_m(\boldsymbol{k};t)$. \end{proof}
Let $\tau$ be the anti-automorphism of $\mathfrak{H}\coloneqq\mathbb{Q}\langle x,y \rangle$ that interchanges $x$ and $y$, and extend it to $\mathfrak{H}[[t,u]]$ by $\tau(\sum_{m,n=0}^{\infty} c_{m,n}t^{m}u^{n}) = \sum_{m,n=0}^{\infty} \tau(c_{m,n})t^{m}u^{n}$. \begin{prop} \label{prop:dual}
For any non-empty index $\boldsymbol{k}=(k_1, \ldots, k_r)$, we have
\begin{align} \label{eq:dual}
\sum_{m=0}^{\infty}u^{m}\sum_{\substack{\wt(\boldsymbol{e})=m\\\dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}}
I^{t}\bigl((\boldsymbol{k}^{\dagger}\oplus\boldsymbol{e})^{\dagger}\bigr)
&= \tau\sigma\tau(X(\boldsymbol{k})).
\end{align} \end{prop}
\begin{proof}
By \eqref{eq:calcX}, we have \begin{align*} \tau\sigma\tau(X(\boldsymbol{k})) =\tau\sigma\tau \left( y\Bigl(\frac{x}{1-xtu}\Bigr)^{k_{1}-1} \prod_{l=2}^r \left\{\bigl(y(1-xtu)+xt\bigr)\Bigl(\frac{x}{1-xtu}\Bigr)^{k_{l}-1}\right\} \right). \end{align*} Since \begin{align*} \tau\sigma\tau\bigl(y(1-xtu)+xt\bigr)
& =\tau\sigma\tau\bigl(y+(1-yu)xt\bigr)\\
& =y+xt \end{align*} and \begin{align*} \tau\sigma\tau\left(\frac{x}{1-xtu}\right) & =\frac{1}{1-(1-yu)^{-1}xtu}(1-yu)^{-1}x\\
& =\frac{1}{1-yu-xtu}x, \end{align*} we obtain \begin{align*} \tau\sigma\tau(X(\boldsymbol{k})) & =y\Bigl(\frac{1}{1-yu-xtu}x\Bigr)^{k_{1}-1} \prod_{l=2}^r\left\{ (y+xt)\Bigl(\frac{1}{1-yu-xtu}x\Bigr)^{k_{l}-1} \right\}\\ & =I^{t}\Bigl( y\Bigl(\frac{1}{1-yu}x\Bigr)^{k_{1}-1} \prod_{l=2}^r\left\{ y\Bigl(\frac{1}{1-yu}x\Bigr)^{k_{l}-1} \right\} \Bigr). \end{align*} This expression implies the equality \eqref{eq:dual}. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}] Since $g_m(\varnothing; t)=\delta_{m,0}\cdot\varnothing$, the assertion is obvious for $\boldsymbol{k}=\varnothing$. Assume that $\boldsymbol{k}\neq\varnothing$. By Propositions \ref{prop:G=h} and \ref{prop:dual}, we have \begin{align*} & \sum_{m=0}^{\infty}u^m\left( Z^{t}\bigl(g_m(\boldsymbol{k}; t)\bigr)-\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} Z^{t}\bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr) \right)\\ & = \hat{Z}\bigl(\sigma(X(\boldsymbol{k}))\bigr) - \hat{Z}\bigl(\tau\sigma\tau(X(\boldsymbol{k}))\bigr)
\end{align*} where $\hat{Z}$ is the natural extension of $Z$ to $\mathfrak{H}[[t,u]]$. Since $X(\boldsymbol{k})$ is a $\mathbb{Q}[t][[u]]$-linear combination of words $yx^{l_1-1}\cdots yx^{l_s-1}$ with $(l_1, \ldots, l_s)$ admissible, this expression vanishes by the assumption that $Z$ satisfies the Ohno-type relation. This completes the proof of Theorem~\ref{thm:main}. \end{proof}
\section{Applications}\label{sec:applications}
\subsection{Interpolated $\mathcal{F}$-multiple zeta values} Let $\mathcal{A}$ be the $\mathbb{Q}$-algebra defined by \begin{align*} \mathcal{A} \coloneqq \left. \prod_{p}\mathbb{Z}/p\mathbb{Z} \right/ \bigoplus_{p}\mathbb{Z}/p\mathbb{Z}, \end{align*} where $p$ runs through all the rational primes. For an index $\boldsymbol{k}=(k_1, \ldots, k_r)$, we define $\mathcal{A}$-multiple zeta value ($\mathcal{A}$-MZV) $\zeta^{}_{\mathcal{A}}(\boldsymbol{k})$ and $\mathcal{A}$-multiple zeta-star value ($\mathcal{A}$-MZSV) $\zeta^{\star}_{\mathcal{A}}(\boldsymbol{k})$ as elements in $\mathcal{A}$ by \begin{align*} \zeta^{}_{\mathcal{A}}(\boldsymbol{k}) &\coloneqq \Biggl( \sum_{0<n_1<\cdots<n_r<p}\frac{1}{n^{k_1}_1\cdots n^{k_r}_r} \bmod{p} \Biggr)_p,\\ \zeta^{\star}_{\mathcal{A}}(\boldsymbol{k}) &\coloneqq \Biggl( \sum_{1\le n_1\le \cdots \le n_r\le p-1}\frac{1}{n^{k_1}_1\cdots n^{k_r}_r} \bmod{p} \Biggr)_p. \end{align*} We set $\zeta^{}_{\mathcal{A}}(\varnothing)=\zeta^{\star}_{\mathcal{A}}(\varnothing) \coloneqq (1)_p \in \mathcal{A}$.
On the other hand, let $\mathcal{Z}$ denote the $\mathbb{Q}$-subspace of $\mathbb{R}$ generated by 1 and all MZVs $\zeta(\boldsymbol{k})$. For an index $\boldsymbol{k}=(k_1, \ldots, k_r)$, we define $\mathcal{S}$-multiple zeta value ($\mathcal{S}$-MZV) $\zeta^{}_{\mathcal{S}}(\boldsymbol{k})$ as an element of $\mathcal{Z}/\zeta(2)\mathcal{Z}$ by \begin{align*} \zeta^{}_{\mathcal{S}}(\boldsymbol{k}) \coloneqq \sum_{i=0}^{r}\zeta^{\ast}(k_1, \ldots, k_i)\zeta^{\ast}(k_r, \ldots, k_{i+1}) \bmod{\zeta(2)}\mathcal{Z}. \end{align*} Here, $\zeta^{\ast}(\boldsymbol{l})$ is the constant term of $\ast$-regularized polynomial of MZV, which is an element in $\mathcal{Z}$. Moreover, we define $\mathcal{S}$-multiple zeta-star value ($\mathcal{S}$-MZSV) $\zeta^{\star}_{\mathcal{S}}(\boldsymbol{k})$ by \begin{align*} \zeta^{\star}_{\mathcal{S}}(\boldsymbol{k}) \coloneqq \sum_{\substack{\square\textrm{ is either a comma `,' } \\
\textrm{ or a plus `+'}}}
\zeta^{}_{\mathcal{S}}(k_1 \square k_2 \square \cdots \square k_r) \in \mathcal{Z}/\zeta(2)\mathcal{Z}. \end{align*} We set $\zeta^{}_{\mathcal{S}}(\varnothing)=\zeta^{\star}_{\mathcal{S}}(\varnothing) \coloneqq1$. In \cite{KZ21}, Kaneko and Zagier conjectured that the $\mathcal{A}$-MZVs and $\mathcal{S}$-MZVs satisfy the same $\mathbb{Q}$-linear relations.
For a non-empty index $\boldsymbol{k}=(k_1, \ldots, k_r)=(\underbrace{1+\cdots+1}_{k_1}, \ldots, \underbrace{1+\cdots+1}_{k_r})$, we define the Hoffman dual index $\boldsymbol{k}^{\vee}$ of $\boldsymbol{k}$ by \begin{align*} \boldsymbol{k}^{\vee} \coloneqq (\underbrace{1, \cdots, 1}_{k_1}+\underbrace{1, \cdots, 1}_{k_2}+\ldots+\underbrace{1, \cdots, 1}_{k_r}). \end{align*} For example, we have $(2,1,3)^{\vee}=(1+1, 1, 1+1+1)^{\vee}=(1,1+1+1,1,1)=(1,3,1,1)$.
\begin{thm}[{Ohno-type relation for $\mathcal{F}$-MZVs, Oyama \cite{Oya18}}]\label{thm:Ohno-type_FMZV} For a non-empty index $\boldsymbol{k}$ and a non-negative integer $m$, we have \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} \zeta^{}_{\mathcal{F}}(\boldsymbol{k} \oplus \boldsymbol{e}) = \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\vee})}} \zeta^{}_{\mathcal{F}}\bigl((\boldsymbol{k}^{\vee} \oplus \boldsymbol{e})^{\vee}\bigr). \end{align*} \end{thm}
\begin{thm}[{Ohno-type relation for $\mathcal{F}$-MZSVs, Hirose--Imatomi--Murahara--Saito \cite{HIMS20}}]\label{thm:Ohno-type_FMZSV} For a non-empty index $\boldsymbol{k}$ and a non-negative integer $m$, we have \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} c_2(\boldsymbol{k}, \boldsymbol{e})\zeta^{\star}_{\mathcal{F}}(\boldsymbol{k} \oplus \boldsymbol{e}) = \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\vee})}} \zeta^{\star}_{\mathcal{F}}\bigl((\boldsymbol{k}^{\vee} \oplus \boldsymbol{e})^{\vee}\bigr). \end{align*} Here, \begin{align*} c_2((k_1, \ldots, k_r),(e_1, \ldots, e_r)) \coloneqq \prod_{i=1}^r\binom{k_i+e_i+\delta_{i,1}+\delta_{i,r}-2}{e_i}, \ \binom{n-1}{n} = \begin{cases} 1 & \text{if $n=0$}, \\ 0 & \text{otherwise}. \end{cases} \end{align*} \end{thm}
For a non-empty admissible index $\boldsymbol{k}=(k_1, \ldots, k_r)$, set $\boldsymbol{k}_{\downarrow} \coloneqq (k_1, \ldots, k_{r-1}, k_r-1)$. Moreover, for $f = \sum_{i=1}^n a_i(t)\boldsymbol{k}_i \in \mathcal{I}^t$ where $\boldsymbol{k}_1, \ldots, \boldsymbol{k}_n$ are non-empty admissible indices, set $f_{\downarrow} \coloneqq \sum_{i=1}^n a_i(t)(\boldsymbol{k}_i)_{\downarrow} \in \mathcal{I}^t$. For $\mathcal{F} \in \{\mathcal{A}, \mathcal{S}\}$ and an index $\boldsymbol{k}=(k_1, \ldots, k_r)$, the second-named author and the third-named author studied \emph{the interpolated $\mathcal{F}$-MZV} \begin{align*} \label{eq:def_tFMZV} \zeta^{t}_{\mathcal{F}}(\boldsymbol{k}) \coloneqq \zeta^{}_{\mathcal{F}}\bigl(I^t(\boldsymbol{k})\bigr) =\sum_{\substack{\square\textrm{ is either a comma `,' } \\
\textrm{ or a plus `+'}}}
t^{(\text{the number of `+'})}
\zeta^{}_{\mathcal{F}}(k_1 \square k_2 \square \cdots \square k_r) \end{align*} in \cite{MO19}. We set $\zeta^{t}_{\mathcal{F}}(\varnothing)\coloneqq1$. For $\mathcal{F} \in \{\mathcal{A}, \mathcal{S}\}$, we present the Ohno-type relation among interpolated $\mathcal{F}$-MZVs, which is derived from Theorems~\ref{thm:main} and~\ref{thm:Ohno-type_FMZV}.
\begin{thm}\label{thm:Ohno-type_tFMZV} For a non-empty index $\boldsymbol{k}$ and a non-negative integer $m$, we have \begin{align*} \zeta^{t}_{\mathcal{F}}\bigl(g_m(\boldsymbol{k}_{\uparrow};t)_{\downarrow}\bigr) = \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\vee})}} \zeta^{t}_{\mathcal{F}}\bigl((\boldsymbol{k}^{\vee} \oplus \boldsymbol{e})^{\vee}\bigr). \end{align*} \end{thm}
\begin{rem} Theorem~\ref{thm:Ohno-type_tFMZV} is an interpolation of Theorems~\ref{thm:Ohno-type_FMZV} and \ref{thm:Ohno-type_FMZSV}.
\end{rem}
\begin{proof} Set $R \coloneqq \mathcal{A}$ (resp.~$R\coloneqq \mathcal{Z}/\zeta(2)\mathcal{Z}$) for $\mathcal{F}=\mathcal{A}$ (resp.~$\mathcal{F}=\mathcal{S}$) and $Z(\boldsymbol{k}) \coloneqq \zeta^{}_{\mathcal{F}}(\boldsymbol{k}_{\downarrow})$. Then, by Theorem~\ref{thm:Ohno-type_FMZV}, we have \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} Z(\boldsymbol{k} \oplus \boldsymbol{e}) &=\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} \zeta^{}_{\mathcal{F}}\bigl((\boldsymbol{k} \oplus \boldsymbol{e})_{\downarrow}\bigr)\\ &=\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} \zeta^{}_{\mathcal{F}}(\boldsymbol{k}_{\downarrow} \oplus \boldsymbol{e})\\
&=\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep((\boldsymbol{k}_{\downarrow})^{\vee})}} \zeta^{}_{\mathcal{F}}\bigl(((\boldsymbol{k}_{\downarrow})^{\vee} \oplus \boldsymbol{e})^{\vee}\bigr). \end{align*} Since \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep((\boldsymbol{k}_{\downarrow})^{\vee})}} ((\boldsymbol{k}_{\downarrow})^{\vee} \oplus \boldsymbol{e})^{\vee} =\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} \bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr)_{\downarrow}, \end{align*} we have \begin{align*} \sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k})}} Z(\boldsymbol{k} \oplus \boldsymbol{e}) =\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} \zeta^{}_{\mathcal{F}}\bigl(((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger})_{\downarrow}\bigr) =\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\dagger})}} Z\bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr), \end{align*} that is, we see that $Z(\boldsymbol{k})=\zeta^{}_{\mathcal{F}}(\boldsymbol{k}_{\downarrow})$ satisfies the Ohno-type relation. Therefore, by Theorem~\ref{thm:main}, we obtain \begin{align*} Z^{t}\bigl(g_m(\boldsymbol{k}_{\uparrow};t)\bigr) =\sum_{\substack{\wt (\boldsymbol{e})=m \\ \dep (\boldsymbol{e})=\dep (\boldsymbol{k}^{\dagger})}} Z^{t}\bigl(((\boldsymbol{k}_{\uparrow})^{\dagger}\oplus\boldsymbol{e})^{\dagger}\bigr). \end{align*} Since \begin{align*} Z^{t}\bigl(g_m(\boldsymbol{k}_{\uparrow};t)\bigr)=\zeta^{t}_{\mathcal{F}}\bigl(g_m(\boldsymbol{k}_{\uparrow};t)_{\downarrow}\bigr) \end{align*} and \begin{align*} \sum_{\substack{\wt (\boldsymbol{e})=m \\ \dep (\boldsymbol{e})=\dep (\boldsymbol{k}^{\dagger})}} Z^{t}\bigl(((\boldsymbol{k}_{\uparrow})^{\dagger}\oplus\boldsymbol{e})^{\dagger}\bigr) &=\sum_{\substack{\wt (\boldsymbol{e})=m \\ \dep (\boldsymbol{e})=\dep (\boldsymbol{k}^{\dagger})}} \zeta^{t}_{\mathcal{F}}\bigl((((\boldsymbol{k}_{\uparrow})^{\dagger}\oplus\boldsymbol{e})^{\dagger})_{\downarrow}\bigr)\\ &=\sum_{\substack{\wt(\boldsymbol{e})=m \\ \dep(\boldsymbol{e})=\dep(\boldsymbol{k}^{\vee})}} \zeta^{t}_{\mathcal{F}}\bigl((\boldsymbol{k}^{\vee} \oplus \boldsymbol{e})^{\vee}\bigr), \end{align*} we obtain the desired formula. \end{proof}
\subsection{Sum formula} In this subsection, as a corollary of our main theorem (Theorem~\ref{thm:main}), we reprove the sum formula for interpolated MZVs, which was first proved by Yamamoto.
\begin{thm}[{Sum formula for interpolated MZVs, Yamamoto \cite{Yam13}}] For positive integers $k$ and $r$ with $k>r$, we have \begin{align*} \sum_{\substack{k_1+\cdots+k_r=k \\ k_1, \ldots, k_{r-1}\ge1, k_r\ge2}} \zeta^{t}(k_1, \ldots, k_r) =\Biggl\{\sum_{j=0}^{r-1}\binom{k-1}{j}t^j(1-t)^{r-1-j}\Biggr\}\zeta(k). \end{align*} \end{thm}
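For example, for $k=3$ and $r=2$ the only index appearing on the left-hand side is $(1,2)$, and the formula reads $\zeta^{t}(1,2)=\bigl\{(1-t)+2t\bigr\}\zeta(3)=(1+t)\zeta(3)$, in accordance with $\zeta^{t}(1,2)=\zeta(1,2)+t\zeta(3)$ and Euler's identity $\zeta(1,2)=\zeta(3)$.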
\begin{proof} Set $(R, Z)=(\mathbb{R}, \zeta)$ and $\boldsymbol{k}=(k)$. Then we have \begin{equation}\label{eq:RHSofSF} Z^{t}\bigl(g_{r}(\boldsymbol{k};t)\bigr) =\Biggl\{\sum_{j=0}^{r}\binom{k+r-1}{j}t^{j}(1-t)^{r-j}\Bigg\}Z(k+r). \end{equation} On the other hand, we have \begin{equation}\label{eq:LHSofSF} \begin{split} \sum_{\substack{\wt(\boldsymbol{e})=r \\ \dep(\boldsymbol{e})=k-1}} Z^{t}\bigl((\boldsymbol{k}^{\dagger} \oplus \boldsymbol{e})^{\dagger}\bigr) &=\sum_{\substack{\wt(\boldsymbol{e})=r \\ \dep(\boldsymbol{e})=k-1}} Z^{t}\bigl(((\underbrace{1, \ldots, 1}_{k-2}, 2)\oplus \boldsymbol{e})^{\dagger}\bigr)\\
&=\sum_{\substack{\wt(\boldsymbol{l})=k+r \\ \dep(\boldsymbol{l})=k-1 \\ \mathrm{admissible}}} Z^{t}\bigl( \boldsymbol{l}^{\dagger} \bigr)\\ &=\sum_{\substack{\wt(\boldsymbol{l})=k+r \\ \dep(\boldsymbol{l})=r+1 \\ \mathrm{admissible}}} Z^{t}\bigl( \boldsymbol{l} \bigr). \end{split} \end{equation} Therefore, from Theorem~\ref{thm:main}, \eqref{eq:RHSofSF} and \eqref{eq:LHSofSF}, by replacing $k$ with $k-r+1$ and $r$ with $r-1$, we complete the proof. \end{proof}
In the final part of this section, we give another proof of Seki's result in \cite{Sek17} on the sum formula for interpolated $\mathcal{F}$-MZVs as a corollary of our theorem. For a positive integer $k$, we set \begin{align*} \mathfrak{Z}_{\mathcal{F}}(k) \coloneqq \begin{cases} \Biggl(\displaystyle\frac{B_{p-k}}{k} \bmod{p}\Biggr)_p \in \mathcal{A} &(\mathcal{F}=\mathcal{A}), \\ \zeta(k) \bmod{\zeta(2)\mathcal{Z}} \in \mathcal{Z}/\zeta(2)\mathcal{Z} & (\mathcal{F}=\mathcal{S}). \end{cases} \end{align*}
\begin{thm}[{Sum formula for interpolated $\mathcal{F}$-MZVs, Seki \cite{Sek17}}] For positive integers $k$ and $r$ with $k>r$, we have \begin{align*} \sum_{\substack{\text{$\boldsymbol{k}$:admissible} \\ \wt(\boldsymbol{k})=k, \dep(\boldsymbol{k})=r}} \zeta^{t}_{\mathcal{F}}(\boldsymbol{k}) =\left[\sum_{j=0}^{r-1}\Biggl\{\binom{k-1}{j}+(-1)^r\binom{k-1}{r-1-j}\Biggr\}t^{j}(1-t)^{r-1-j}\right]\mathfrak{Z}_{\mathcal{F}}(k). \end{align*} \end{thm}
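For example, for $k=3$ and $r=2$ the left-hand side is $\zeta^{t}_{\mathcal{F}}(1,2)=\zeta^{}_{\mathcal{F}}(1,2)+t\zeta^{}_{\mathcal{F}}(3)$ and the right-hand side equals $\bigl\{3(1-t)+3t\bigr\}\mathfrak{Z}_{\mathcal{F}}(3)=3\mathfrak{Z}_{\mathcal{F}}(3)$; both sides agree by the evaluations $\zeta^{}_{\mathcal{F}}(3)=0$ and $\zeta^{}_{\mathcal{F}}(1,2)=3\mathfrak{Z}_{\mathcal{F}}(3)$ recalled in the proof below.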
\begin{proof} To prove this theorem, we calculate the difference of the two instances of Theorem~\ref{thm:Ohno-type_tFMZV} obtained by substituting ``$\boldsymbol{k}=\boldsymbol{k}_1\coloneqq (k-r+1)$, $m=r-1$'' and ``$\boldsymbol{k}=\boldsymbol{k}_2 \coloneqq (k-r+1,1)$, $m=r-2$''.
First, we have \begin{equation}\label{eq:RHS} \begin{split} &\sum_{\substack{\wt(\boldsymbol{e})=r-1 \\ \dep(\boldsymbol{e})=\boldsymbol{k}^{\vee}_1}} \zeta^{t}_{\mathcal{F}}\bigr((\boldsymbol{k}^{\vee}_1 \oplus \boldsymbol{e})^{\vee}\bigl) -\sum_{\substack{\wt(\boldsymbol{e})=r-2 \\ \dep(\boldsymbol{e})=\boldsymbol{k}^{\vee}_2}} \zeta^{t}_{\mathcal{F}}\bigr((\boldsymbol{k}^{\vee}_2 \oplus \boldsymbol{e})^{\vee}\bigl)\\ &=\sum_{\substack{\wt(\boldsymbol{e})=r-1 \\ \dep(\boldsymbol{e})=k-r+1}} \zeta^{t}_{\mathcal{F}}\bigr(((\{1\}^{k-r+1}) \oplus \boldsymbol{e})^{\vee}\bigl) -\sum_{\substack{\wt(\boldsymbol{e})=r-2 \\ \dep(\boldsymbol{e})=k-r+1}} \zeta^{t}_{\mathcal{F}}\bigr(((\{1\}^{k-r}, 2) \oplus \boldsymbol{e})^{\vee}\bigl)\\ &=\sum_{\substack{l_1+\cdots+l_{k-r}=k-1 \\ l_1, \ldots, l_{k-r}\ge1}} \zeta^{t}_{\mathcal{F}}\bigl((l_1, \ldots, l_{k-r}, 1)^{\vee}\bigr) =\sum_{\substack{k_1+\cdots+k_r=k \\ k_1, \ldots, k_{r-1}\ge1, k_r\ge2}} \zeta^{t}_{\mathcal{F}}(k_1, \ldots, k_r). \end{split} \end{equation} On the other hand, by the definition of $g_m(\boldsymbol{k};t)$, $\zeta^{}_{\mathcal{F}}(a, b)=(-1)^{b}\binom{a+b}{a}\mathfrak{Z}_{\mathcal{F}}(a+b) \; (a, b \in \mathbb{Z}_{\ge1})$, and $\zeta^{}_{\mathcal{F}}(a)=0 \; (a \in \mathbb{Z}_{\ge1})$ (see \cite{Kan19}, for example), we have \begin{equation}\label{eq:LHS} \begin{split} &\zeta^{t}_{\mathcal{F}}\bigl(g_{r-1}((\boldsymbol{k}_1)_{\uparrow};t)_{\downarrow}\bigr)- \zeta^{t}_{\mathcal{F}}\bigl(g_{r-2}((\boldsymbol{k}_2)_{\uparrow}; t)_{\downarrow}\bigr)\\ &=-\sum_{\substack{e_1+e_2=r-2 \\ e_1, e_2 \ge0}} \Biggl\{\sum_{j_1=0}^{e_1}\binom{k-r+e_1}{j_1}t^{j_1}(1-t)^{e_1-j_1}\Biggr\}\\ &\quad\times\Biggl\{\sum_{j_2=0}^{e_2}\binom{e_2}{j_2}t^{j_2}(1-t)^{e_2-j_2}\Biggr\}\zeta^{}_{\mathcal{F}}(k-r+1+e_1, 1+e_2)\\ &=\sum_{e=0}^{r-2}(-1)^{r-e} \Biggl\{\sum_{j=0}^{e}\binom{k-r+e}{j}t^{j}(1-t)^{e-j}\Biggr\} \binom{k}{r-e-1}\mathfrak{Z}_{\mathcal{F}}(k). \end{split} \end{equation} Therefore, from Theorem~\ref{thm:Ohno-type_tFMZV}, \eqref{eq:RHS}, and \eqref{eq:LHS}, it suffices to prove that \begin{equation}\label{eq:claim} \begin{split} &\sum_{e=0}^{r-2}(-1)^{r-e}\Biggl\{\sum_{j=0}^{e}\binom{k-r+e}{j}t^{j}(1-t)^{e-j}\Biggr\}\binom{k}{r-e-1}\\ &=\sum_{j=0}^{r-1}\Biggl\{\binom{k-1}{j}+(-1)^r\binom{k-1}{r-1-j}\Biggr\}t^{j}(1-t)^{r-1-j}. \end{split} \end{equation} This equality can be proved as follows. Since $1=\sum_{s=0}^{r-e-1}\binom{r-e-1}{s}t^{s}(1-t)^{r-e-1-s}$, we have \begin{align*} &\sum_{e=0}^{r-2}(-1)^{r-e}\Biggl\{\sum_{j=0}^{e}\binom{k-r+e}{j}t^{j}(1-t)^{e-j}\Biggr\}\binom{k}{r-e-1}\\ &=\sum_{e=0}^{r-2}\sum_{j=0}^{e}\sum_{s=0}^{r-1-e}(-1)^{r-e}\binom{k-r+e}{j}\binom{k}{r-e-1}\binom{r-e-1}{s}t^{j+s}(1-t)^{r-1-j-s}\\ &=\sum_{\substack{j+s\le r-1 \\ j, s \ge0}}\sum_{e=j}^{\min(r-s-1, r-2)}(-1)^{r-e} \frac{k!}{j!s!(k-r+e-j)!(r-e-1-s)!}\frac{t^{j+s}(1-t)^{r-1-j-s}}{k-r+e+1}\\ &=A+\sum_{j=0}^{r-1}\binom{k-1}{j}t^j(1-t)^{r-1-j}, \end{align*} where, we set \begin{align*} A \coloneqq \sum_{\substack{j+s\le r-1 \\ j, s \ge0}}\sum_{e=j}^{r-s-1}(-1)^{r-e} \frac{k!}{j!s!(k-r+e-j)!(r-e-1-s)!}\frac{t^{j+s}(1-t)^{r-1-j-s}}{k-r+e+1}. \end{align*} By setting $u \coloneqq j+s$ and $m \coloneqq e-j$, we have \begin{align*} A &=\sum_{u=0}^{r-1}t^{u}(1-t)^{r-1-u}\sum_{j=0}^{u}\sum_{m=0}^{r-1-u}(-1)^{r-j-m}\\ &\quad \times\frac{k!}{j!(u-j)!(k-r+m)!(r-1-u-m)!}\frac{1}{k-r+m+j+1}\\ &=\sum_{u=0}^{r-1}t^u(1-t)^{r-1-u}\sum_{m=0}^{r-1-u}(-1)^{r-m}\frac{k!}{(k-r+m)!(r-1-u-m)!u!}\\ &\quad \times \sum_{j=0}^u(-1)^{j}\binom{u}{j}\frac{1}{k-r+m+1+j}. 
\end{align*} Moreover, since \begin{align*} \sum_{j=0}^u(-1)^{j}\binom{u}{j}\frac{1}{k-r+m+1+j} &=\int^{1}_{0}\sum_{j=0}^u(-1)^{j}\binom{u}{j}z^{k-r+m+j}dz\\ &=\int^{1}_{0}z^{k-r+m}(1-z)^{u}dz=\frac{u!(k-r+m)!}{(u+k-r+m+1)!}, \end{align*} we have \begin{align*} A &=\sum_{u=0}^{r-1}t^u(1-t)^{r-1-u}\sum_{m=0}^{r-1-u}(-1)^{r-m}\frac{k!}{(r-1-u-m)!(u+k-r+m+1)!}\\ &=\sum_{u=0}^{r-1}t^u(1-t)^{r-1-u}\sum_{n=0}^{r-1-u}(-1)^{u+1+n}\binom{k}{n}\\ &=\sum_{u=0}^{r-1}t^u(1-t)^{r-1-u}(-1)^{r}\binom{k-1}{r-1-u}. \end{align*} Thus we obtain \eqref{eq:claim}, which completes the proof. \end{proof}
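As an independent sanity check, the identity \eqref{eq:claim} can also be verified symbolically for small values of $k$ and $r$. The following script is only a sketch (it assumes the Python library SymPy and is not part of the proof); it expands both sides of \eqref{eq:claim} as polynomials in $t$ and compares them.

\begin{verbatim}
# Symbolic check of the identity (eq:claim) for small k and r.
# Both sides are polynomials in t; we compare their expansions.
from sympy import symbols, binomial, expand

t = symbols('t')

def lhs(k, r):
    return sum((-1)**(r - e)
               * sum(binomial(k - r + e, j) * t**j * (1 - t)**(e - j)
                     for j in range(e + 1))
               * binomial(k, r - e - 1)
               for e in range(r - 1))

def rhs(k, r):
    return sum((binomial(k - 1, j) + (-1)**r * binomial(k - 1, r - 1 - j))
               * t**j * (1 - t)**(r - 1 - j)
               for j in range(r))

for k in range(2, 9):
    for r in range(2, k + 1):
        assert expand(lhs(k, r) - rhs(k, r)) == 0, (k, r)
print("identity verified for 2 <= r <= k <= 8")
\end{verbatim}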
\end{document}
\begin{document}
\begin{frontmatter}
\title{Convex geometries over induced paths with bounded length}
\author[unlp]{Marisa Gutierrez} \ead{[email protected]}
\author[uff]{F\'abio Protti\corref{cor1}} \ead{[email protected]}
\author[unlp]{Silvia B. Tondato} \ead{[email protected]}
\cortext[cor1]{Corresponding author}
\address[unlp]{Departamento de Matem\'atica, Facultad de Ciencias Exactas\\ Universidad Nacional de La Plata, Argentina\\ \hspace*{1cm}} \address[uff]{Instituto de Computa\c c\~ao\\ Universidade Federal Fluminense, Niter\'oi, Brazil}
\begin{abstract}
Graph convexity spaces have been studied in many contexts. In particular, some studies are devoted to determine if a graph equipped with a convexity space is a {\em convex geometry}. It is well known that chordal and Ptolemaic graphs can be characterized as convex geometries with respect to the geodesic and monophonic convexities, respectively. Weak polarizable graphs, interval graphs, and proper interval graphs can also be characterized in this way. In this paper we introduce the notion of {\em $l^k$-convexity}, a natural restriction of the monophonic convexity. Let $G$ be a graph and $k\geq 2$ an integer. A subset $S\subseteq V(G)$ is \textit{$l^k$-convex} if and only if for any pair of vertices $x,y$ of $S$, each induced path of length {\em at most} $k$ connecting $x$ and $y$ is completely contained in the subgraph induced by $S$. The {\em $l^k$-convexity} consists of all $l^k$-convex subsets of $G$. In this work, we characterize {\em $l^k$-convex geometries} (graphs that are convex geometries with respect to the $l^k$-convexity) for $k\in\{2,3\}$. We show that a graph $G$ is an $l^2$-convex geometry if and only if $G$ is a chordal $P_4$-free graph, and an $l^3$-convex geometry if and only if $G$ is a chordal graph with diameter at most three such that its induced gems satisfy a special ``solving'' property. As far as the authors know, the class of $l^3$-convex geometries is the first example of a non-hereditary class of convex geometries.
\end{abstract}
\begin{keyword} chordal graph \sep convexity \sep convex geometry \end{keyword}
\end{frontmatter}
\journal{Discrete Applied Mathematics}
\section{Introduction} \label{sec:intro}
A family $\mathcal{C}$ of subsets of a nonempty set $V$ is called a \textit{convexity} on $V$ if:
\begin{itemize} \item $(C1)$ $\emptyset, V \in \mathcal{C}$.
\item $(C2)$ $\mathcal{C}$ is stable for intersections, that is, if $\mathcal{D}$ is a non-empty subfamily of $\mathcal{C}$ then $\bigcap \mathcal{D}$ is a member of $\mathcal{C}$.
\item $(C3)$ $\mathcal{C}$ is stable for nested unions, that is, if $\mathcal{D}$ is a non-empty and totally ordered by inclusion subfamily of $\mathcal{C}$ then $\bigcup \mathcal{D}$ is a member of $\mathcal{C}$.
\end{itemize}
A \textit{convexity space} is an ordered pair $(V, \mathcal{C})$, where $V$ is a nonempty set and $\mathcal{C}$ is a convexity on $V$. The members of $\mathcal{C}$ are called \textit{convex sets}. A \textit{graph convexity space} is an ordered pair $(G, \mathcal{C})$ formed by a connected graph $G$ and a convexity $\mathcal{C}$ on $V(G)$ such that $(V(G),\mathcal{C})$ is a convexity space.
In the last few decades, convexity spaces and graph convexity spaces have been studied in many contexts~\cite{farber-jamison,pelayo,van-de-vel}. In particular, some studies are devoted to determine if a graph equipped with a convexity space is a {\em convex geometry}. The concept of convex geometry is related to the concepts of {\em convex hull} and {\em extreme point}. We refer the reader to~\cite{farber-jamison}. Let $(V, \mathcal{C})$ be a convexity space. Given a set $S \subseteq V$, the smallest convex set containing $S$ is called the \textit{convex hull} of $S$. An element $x$ of a convex set $S$ is an \textit{extreme point} of $S$ if $S\backslash\{x\}$ is also convex. The convexity space $(V, \mathcal{C})$ is said to be a \textit{convex geometry} if it satisfies the so-called \textit{Minkowski-Krein-Milman} property~\cite{krein-milman}:
\centerline{{\em Every convex set is the convex hull of its extreme points.}}
Let $(G,\mathcal{C})$ be a graph convexity space. There are many examples in the literature where the convexity $\mathcal{C}$ is defined over a path system. For example, the {\em monophonic convexity}~\cite{dourado-et-al,duchet} consists of all the monophonically convex sets of $V(G)$ (a set $S$ is {\em monophonically convex} if and only if every {\em induced} path between two vertices of $S$ lies entirely in the subgraph induced by $S$). Likewise, the {\em geodesic}, $m^3$-, {\em toll}, and {\em weakly toll} convexities are defined over shortest paths, induced paths of length at least three, tolled walks~\cite{alcon-et-al}, and weakly tolled walks~\cite{gutierrez-tondato}.
Chordal and Ptolemaic graphs have been characterized as convex geometries with respect to the monophonic convexity and the geodesic convexity, respectively \cite{farber-jamison}. Similarly, weak polarizable graphs~\cite{olariu} have been characterized as convex geometries with respect to the $m^3$-convexity~\cite{dragan-et-al}; interval graphs have been characterized as convex geometries with respect to the toll convexity~\cite{alcon-et-al}; and proper interval graphs have been characterized as convex geometries with respect to the weakly toll convexity~\cite{gutierrez-tondato}.
All the above-mentioned classes are hereditary for induced subgraphs. The natural question that arises is whether every convex geometry with respect to a path system defines a hereditary class of graphs. In this work we answer negatively to this question.
Inspired by the studies of Dragan, Nicolai, and Brandst\"adt in~\cite{dragan-et-al}, in this paper we introduce the notion of $l^k$-convexity. Let $k\geq 2$ be an integer. A subset $S\subseteq V(G)$ is called \textit{$l^k$-convex} if and only if for any pair of vertices $x,y$ of $S$, each induced path of length {\em at most} $k$ connecting $x$ and $y$ is completely contained in the subgraph induced by $S$. The {\em $l^k$-convexity} is the convexity consisting of all the $l^k$-convex sets of a graph $G$. If $G$ is a convex geometry with respect to the $l^k$-convexity, we say that $G$ is an {\em $l^k$-convex geometry}.
The main contribution of this work is to provide characterizations of $l^k$-convex geometries for $k\in\{2,3\}$. We show that a graph $G$ is an $l^2$-convex geometry if and only if $G$ is trivially perfect, or, equivalently, a chordal $P_4$-free graph~\cite{golumbic}. We also show that $G$ is an $l^3$-convex geometry if and only if $G$ is chordal, $\mathit{diam}(G)\leq 3$, and its induced gems with at least six vertices satisfy a special ``solving'' property, in the sense that there must be some external structures preventing such gems from being obstacles for $G$ to be an $l^3$-convex geometry. Interestingly, we show that $l^3$-convex geometries do not form a hereditary class of graphs. As far as the authors know, this is the first example of a non-hereditary class of convex geometries.
The paper is organized as follows. Section 2 contains the necessary background. In Section 3, we prove that $l^2$-convex geometries are precisely the chordal $P_4$-free graphs. In Section 4, we describe a characterization of the class of $l^3$-convex geometries and show that such a class is not hereditary. Section 5 contains our conclusions.
\section{Preliminaries} \label{sec:prelim}
All the graphs in this paper are finite, undirected, simple, and connected. Let $G$ be a graph. An \textit{induced path} $P$ in $G$ is a sequence of vertices $x_0,\ldots,x_p$ such that $x_ix_j \in E(G)$ if and only if $i=j-1$, $j=1,\ldots,p$. The length of $P$ is $p$, which is the number of edges of $P$. We denote by $P_n$ the induced path with $n$ vertices. An \textit{induced cycle} $C$ is a sequence of vertices $x_0,\ldots,x_p$ such that: (i) $x_0=x_p$ and (ii) $x_ix_j \in E(G)$ if and only if $\{i,j\}=\{0,p-1\}$ or $|i-j|=1$. We denote by $C_n$ the induced cycle with $n$ vertices.
If $P=x_0,\ldots,x_p$ is a path, $P[x_i,x_j] \ (0\leq i\leq j\leq p)$ denotes the path $P'= x_i,x_{i+1},\ldots,x_{j-1},x_j$.
The \textit{distance} $d_G(u,v)$ between two vertices $u,v$ is the minimum number of edges in a path connecting these vertices. The \textit{diameter} of $G$, denoted by $\mathit{diam}(G)$, is the maximum distance between two vertices of $G$.
The neighborhood (resp., closed neighborhood) of $x\in V(G)$ is denoted by $N(x)$ (resp., $N[x]$).
Let $S\subseteq V(G)$. We denote by $G[S]$ the subgraph of $G$ induced by $S$. A \emph{clique} in $G$ is a set of pairwise adjacent vertices. We denote by $\mathcal{C}(G)$ the family of all maximal cliques of $G$. A vertex $x\in V(G)$ is a {\em simplicial vertex} if $N[x]$ is a clique.
Let $xy$ be an edge of $G$ and $z,w$ be two nonadjacent vertices of $G$. The graph $G-xy+zw$ is obtained from $G$ by deleting the edge $xy$ and adding the edge $zw$.
We say that $G$ {\em contains} a graph $H$ if $H$ is an induced subgraph of $G$. In addition, $G$ is {\em $H$-free} if $G$ does not contain $H$.
A vertex $u$ is a {\em universal vertex} if $u$ is adjacent to every other vertex of the graph. A \textit{gem} is a graph $G_n$ such that: (i) $V(G_n)=\{x_0,\ldots,x_n,u_n\}$ ($n\geq 3$); (ii) $x_0,\ldots,x_n$ is an induced path; (iii) $u_n$ is a universal vertex. We also say that $G_n$ is an {$n$-gem}, to mean that the induced path $x_0,x_1,\ldots,x_n$ contains $n$ edges. See Figure~\ref{fig0}.
\begin{figure}
\caption{An $n$-gem.}
\label{fig0}
\end{figure}
For $u,v\in V(G)$, the \textit{$l^k$-interval} $I_{l^k}[u,v]$ consists of $u,v$ together with all vertices lying in some induced path between $u$ and $v$ whose length is at most $k$. A subset $S$ of vertices is called \textit{$l^k$-convex} if and only if for every pair $u,v \in S$ it holds that $I_{l^k}[u,v] \subseteq S$. Note that the empty set, the whole vertex set, and cliques (including singletons) are all $l^k$-convex sets. Let $\mathcal{C}$ be the family of all the $l^k$-convex sets of $V(G)$. It is not difficult to see that $(G,\mathcal{C})$ is a graph convexity space. The family $\mathcal{C}$ is called {\em $l^k$-convexity}.
For $W\subseteq V(G)$, we define $I_{l^k}[W]=\cup_{u,v\in W} I_{l^k}[u,v]$. Also, we define $I^j_{l^k}[W]$ recursively as follows: $I^0_{l^k}[W]=W$ and $I^j_{l^k}[W]=I_{l^k}[I^{j-1}_{l^k}[W]]$ for $j\geq 1$.
If $x\in I_{l^k}^m[W]$ for some $m\geq 0$, we say that $x$ is {\em captured} by $W$.
The \textit{$l^k$-convex hull} of $S \subseteq V(G)$, denoted by $\mathit{hull}_{l^k}(S)$, is the smallest set of vertices of $G$ that contains $S$ and is $l^k$-convex. Alternatively, it is the intersection of all $l^k$-convex sets of $G$ that contain $S$. It can be easily shown that $\mathit{hull}_{l^k}(S)=I^j_{l^k}[S]$ for some integer $j\geq 0$; in fact, $j$ can be taken as the minimum index for which $I^j_{l^k}[S]=I^{j+1}_{l^k}[S]$.
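For small instances the iteration $I^0_{l^k}[S]\subseteq I^1_{l^k}[S]\subseteq\cdots$ can be carried out directly. The following brute-force sketch (the helper names are ours, and we assume the graph is given as a small \texttt{networkx} graph) computes $I_{l^k}[W]$ and iterates it up to the fixed point $\mathit{hull}_{l^k}(W)$; it is meant only to illustrate the definitions, not as an efficient algorithm.

\begin{verbatim}
# Brute-force computation of the l^k-interval and l^k-convex hull of W.
# Suitable only for small graphs: all simple u-v paths with at most k
# edges are enumerated and filtered down to the induced ones.
import networkx as nx
from itertools import combinations

def is_induced_path(G, path):
    # a path v_0,...,v_p is induced iff the only edges among its
    # vertices are the p consecutive ones
    return G.subgraph(path).number_of_edges() == len(path) - 1

def lk_interval(G, W, k):
    I = set(W)
    for u, v in combinations(W, 2):
        for p in nx.all_simple_paths(G, u, v, cutoff=k):
            if len(p) - 1 <= k and is_induced_path(G, p):
                I.update(p)
    return I

def lk_hull(G, W, k):
    S = set(W)
    while True:              # I^j[S] stabilizes after finitely many steps
        T = lk_interval(G, S, k)
        if T == S:
            return S
        S = T
\end{verbatim}

With $k=3$, for example, such a routine can be used to reproduce hull computations like the one carried out for the graph of Figure~\ref{figure-example} in the next section.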
A vertex $x$ of an $l^k$-convex set $S\subseteq V(G)$ is an \textit{extreme point} of $S$ if $S\backslash\{x\}$ is also an $l^k$-convex set of $G$. The set of all the extreme points of $S$ is denoted by $\mathit{Ext}_{l^k}(S)$. Clearly, not every $l^k$-convex set contains extreme points. Also, note that a vertex $x$ is an extreme point of an $l^k$-convex set $S$ $(k\geq 2)$ if and only if $x$ is a simplicial vertex in $G[S]$.
Let $W$ be a subset of vertices of a graph $G$, and let $u$ be a vertex in $I_{l^2}[W]\setminus W$. Then there exist vertices $x,y\in W$ such that $xy\notin E(G)$ and $xu,uy\in E(G)$. This implies that $u$ is not a simplicial vertex in the subgraph induced by $I_{l^2}[W]$. In fact, it is not difficult to see that, for any $j$, the only simplicial vertices in the subgraph induced by $I^j_{l^2}[W]$ are those in $W$ (if any). This also holds for $I^j_{l^3}[W]$, $j\geq 0$. Thus:
\begin{proposition}\label{prop:simplicial} Let $G$ be a graph, $W\subseteq V(G)$, and $k\in\{2,3\}$. If $u$ is an extreme point of $\mathit{hull}_{l^k}(W)$, then $u$ is a simplicial vertex in $G[W]$. \end{proposition}
A graph $G$ is an {\em $l^k$-convex geometry} if, for every $l^k$-convex set $S$ of $G$, it holds that $\mathit{hull}_{l^k}(\mathit{Ext}_{l^k}(S))=S$.
For $a\in\{g,m,m^3,t,wt\}$, we similarly define {\em a-convexity}, {\em a-convex set}, $I^k_a(S)$, $\mathit{hull}_a(S)$, $\mathit{Ext}_a(S)$, and {\em $a$-convex geometry}, according to Table~\ref{tab:conv}.
\begin{table}[htbp] \begin{center}
\begin{tabular}{|c|l|} \hline {\em symbol} & {\em associated path system}\\ \hline $g$ & shortest (geodesic) paths~\cite{batten}\\ $m$ & induced (monophonic) paths~\cite{dourado-et-al,duchet}\\ $m^3$ & induced paths of length at least three~\cite{dragan-et-al}\\ $t$ & tolled walks~\cite{alcon-et-al}\\ $wt$ & weakly tolled walks~\cite{gutierrez-tondato}\\ \hline \end{tabular} \caption{Some convexities associated with path systems}\label{tab:conv}
\end{center} \end{table}
The subscript $a\in\{g,m,m^3,t,wt,l^k\}$ can be dropped from the notation when the path system under consideration is clear from the context.
Some important classes of graphs have been characterized as $a$-convex geometries, for $a$ in Table~\ref{tab:conv}. Chordal graphs and Ptolemaic graphs are the $m$- and $g$-convex geometries, respectively~\cite{farber-jamison}. Interval and proper interval graphs are the $t$- and $wt$-convex geometries, respectively~\cite{alcon-et-al,gutierrez-tondato}. Finally, weakly polarizable graphs are the $m^3$-convex geometries~\cite{dragan-et-al}.
All the above-mentioned classes of graphs are hereditary for induced subgraphs. As we shall see, this is not the case for $l^3$-convex geometries.
\section{A characterization of $l^2$-convex geometries}
In this section we prove that $l^2$-convex geometries are precisely the chordal $P_4$-free graphs, also referred to as $C_4$-free cographs or trivially perfect graphs~\cite{golumbic} in the literature.
\begin{theorem}\label{thm:l2} A graph $G$ is an $l^2$-convex geometry if and only if $G$ is a chordal $P_4$-free graph. \end{theorem}
\begin{proof} Let $G$ be an $l^2$-convex geometry. Note that the extreme points of $V(G)$ are the simplicial vertices of $G$.
Suppose that $G$ contains an induced subgraph $C$ isomorphic to $C_n$, for $n>3$. It is clear that $\mathit{hull}(V(C))$ is a convex set of $G$, and so there exists a set $S \subseteq V(C)$ of extreme points of $\mathit{hull}(V(C))$ such that $\mathit{hull}(V(C))=\mathit{hull}(S)$. Since $C$ does not have simplicial vertices, by Proposition~\ref{prop:simplicial} the set $\mathit{hull}(V(C))$ does not have extreme points, a contradiction. Hence $G$ is a chordal graph.
Now, we prove that $G$ is a $P_4$-free graph. Let $P=x_0,\ldots,x_p$ be a maximum induced path of $G$, between two simplicial vertices of $G$. Such a path exists because $G$ is chordal~\cite{dirac}. Assume, in order to obtain a contradiction, that $p\geq 3$.
Note that $\mathit{hull}(V(P))$ is a convex set of $G$ with exactly two extreme points $x_0$ and $x_p$. Moreover if there exists $z\in \mathit{hull}(V(P))\setminus V(P)$, which is captured by $\{x_0,x_p\}$ with an induced path of length two between $x_0$ and $x_p$, $z$ must be adjacent to every $x_i$; otherwise, $P+x_0z+zx_p$ is an induced cycle of $G$ with at least four vertices. Since $x_0$ and $x_p$ are simplicial vertices of $G[\mathit{hull}(V(P))]$, $N[x_0]$ and $N[x_p]$ are cliques. Thus,
\[I_{l^2}[V(P)]=(N[x_0]\cap N[x_p])\cup\{x_0,x_p\}\]
\centerline{and}
\[I^2_{l^2}[V(P)]=I_{l^2}[(N[x_0]\cap N[x_p])\cup\{x_0,x_p\}]=(N[x_0] \cap N[x_p])\cup\{x_0,x_p\}.\]
No vertex in $V(P)$ is captured by $N[x_0]\cap N[x_p]$, and so $\mathit{hull}(V(P))\neq \mathit{hull}(\{x_0,x_p\})$, a contradiction since $\mathit{hull}(V(P))$ is a convex set of a convex geometry. This concludes the first part of the proof.
Conversely, let $G$ be a chordal $P_4$-free graph, and let $S\subseteq V(G)$ be an $l^2$-convex set of $G$. Since $G[S]$ is a chordal graph, every vertex in $S$ lies in an induced path between two simplicial vertices of $G[S]$ (extreme points of $S$). Such a path must have length two, otherwise $G$ contains $P_n$ $(n\geq 4)$ as an induced subgraph. This means that $S$ is the convex hull of its extreme points, and the proof is complete. \end{proof}
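Theorem~\ref{thm:l2} immediately yields a naive recognition procedure for $l^2$-convex geometries. The sketch below (assuming \texttt{networkx}, and testing $P_4$-freeness by brute force over all $4$-subsets) is meant only as an illustration; Section~\ref{sec:conclu} comments on the linear-time alternative.

\begin{verbatim}
# Naive recognizer based on Theorem thm:l2:
# G is an l^2-convex geometry iff G is chordal and P_4-free.
import networkx as nx
from itertools import combinations

def is_P4_free(G):
    P4 = nx.path_graph(4)
    return not any(nx.is_isomorphic(G.subgraph(q), P4)
                   for q in combinations(G.nodes, 4))

def is_l2_convex_geometry(G):
    return nx.is_chordal(G) and is_P4_free(G)
\end{verbatim}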
The following example shows that the statement of Theorem~\ref{thm:l2} is not true if we replace ``$l^2$-convex geometry/chordal $P_4$-free'' by ``$l^3$-convex geometry/chordal $P_5$-free''. Let $G$ be the chordal graph in Figure~\ref{figure-example}.
\begin{figure}
\caption{A chordal $l^3$-convex geometry that is not $P_5$-free.}
\label{figure-example}
\end{figure}
Note that $\mathit{Ext}_{l^3}(V(G))=\{1,7\}$ but $I_{l^3}[\{1,7\}]=\{1,2,5,7\}\neq V(G)$. Two iterations are necessary for the set $\{1,7\}$ to capture all the vertices of $G$, i.e., \[I^2_{l^3}[\{1,7\}]=\mathit{hull}_{l^3}(\{1,7\})=V(G).\]
Up to symmetries, the nontrivial $l^3$-convex sets of $G$ are: \[\{1,2,3,4\},\{1,2,3,4,5\},\{1,2,3,4,5,6\},\{2,3,4,5\},\{2,3,4,5,6\}.\]
It is a tedious but straightforward task to check that each set $S$ above satisfies $\mathit{hull}_{l^3}(\mathit{Ext}_{l^3}(S))=S$. Thus, $G$ is a chordal $l^3$-convex geometry. However, $G$ is not $P_5$-free. In order to characterize $l^3$-convex geometries, we need additional properties, as we shall see in the next section.
\section{A characterization of $l^3$-convex geometries}
In this section we provide a characterization of $l^3$-convex geometries by means of a special property that must be satisfied by its induced gems with at least six vertices. Before stating the main result of this section, we need additional terminology.
Let $G$ be a graph. We say that an induced subgraph $H$ of $G$ is a {\em convex subgraph} if $V(H)$ is an $l^3$-convex subset of $G$. Let $P$ be an induced path and $x\in V(G)$ be a vertex; we say that {\em $P$ avoids $x$} if $x\notin V(P)$. Hereafter, an induced path with length three is simply referred to as a $P_4$.
Let $G_n (n\geq 4)$ be an $n$-gem with vertices $x_0,\ldots,x_n,u_n$ as in Figure~\ref{fig0}, and assume that $G_n$ is an induced subgraph of $G$. We say that $G_n$ is {\em solved} if there exists in $G$ a $P_4$ connecting $x_0$ and $x_n$ that avoids $u_n$.
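For small graphs, whether a given induced $n$-gem is solved can be tested by brute force. The sketch below (our notation, assuming \texttt{networkx}) only needs the endpoints $x_0,x_n$ of the base path and the apex $u_n$: it searches for an induced path of length exactly three between $x_0$ and $x_n$ that avoids $u_n$.

\begin{verbatim}
# Is the induced n-gem with base endpoints x0, xn and apex u solved in G?
import networkx as nx

def is_induced_path(G, path):
    return G.subgraph(path).number_of_edges() == len(path) - 1

def gem_is_solved(G, x0, xn, u):
    H = G.subgraph(set(G.nodes) - {u})     # candidate paths must avoid u
    return any(len(p) == 4 and is_induced_path(G, p)
               for p in nx.all_simple_paths(H, x0, xn, cutoff=3))
\end{verbatim}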
Before stating Theorem~\ref{thm:l3}, we need the following lemma:
\begin{lemma}\label{lem:diam3} If $G$ is a graph with $\mathit{diam}(G)\leq 3$ then every connected $l^3$-convex subgraph $H$ of $G$ satisfies $\mathit{diam}(H)\leq 3$. \end{lemma}
\begin{proof} Suppose $\mathit{diam}(H)=d>3$ for some connected convex subgraph $H$ of $G$, and let $u,v\in V(H)$ such that $\mathit{dist}_H(u,v)=d>3$. Since $\mathit{dist}_G(u,v)\leq 3$, there must exist an induced path $Q$ in $G$ between $u$ and $v$ of length at most three. Clearly, there exists a vertex $x\in V(Q)$ such that $x\notin V(H)$. However, $x\in I[u,v]\subseteq V(H)$, because $H$ is a convex subgraph. This is a contradiction. Therefore, $\mathit{diam}(H)\leq 3$. \end{proof}
\begin{theorem}\label{thm:l3} A graph $G$ is an $l^3$-convex geometry if and only if the following conditions hold:
\begin{enumerate}
\item $G$ is chordal;
\item $\mathit{diam}(G)\leq 3$;
\item every induced $n$-gem $(n\geq 4)$ contained in $G$ is solved. \end{enumerate}
\end{theorem}
\begin{proof} Suppose that $G$ is an $l^3$-convex geometry. Recall that the extreme points of $V(G)$ are its simplicial vertices.
First, we prove that $G$ is a chordal graph. Assume by contradiction that $G$ contains an induced cycle $C$ with at least four vertices. It is clear that $\mathit{hull}(V(C))$ is a convex set of $G$, and, since $G$ is a convex geometry, there exists a set $S \subseteq V(C)$ of extreme points of $\mathit{hull}(V(C))$ such that $\mathit{hull}(V(C))=\mathit{hull}(S)$. Since $C$ does not contain simplicial vertices, by Proposition~\ref{prop:simplicial} the set $\mathit{hull}(V(C))$ does not contain extreme points either, a contradiction. Hence $G$ is a chordal graph.
Now, we prove that $\mathit{diam}(G)\leq 3$. Assume, in order to obtain a contradiction, that $\mathit{diam}(G)>3$. Since $G$ is chordal, there exist two simplicial vertices $x_0,x_n\in V(G)$ such that $d(x_0,x_n)=\mathit{diam}(G)$. Let $P=x_0,x_1,\ldots,x_n$ be an induced path whose length is $\mathit{diam}(G)$. Note that $\mathit{hull}(V(P))$ is a convex set of $G$ with exactly two extreme points, $x_0$ and $x_n$. Since every induced path between $x_0$ and $x_n$ in $G$ is of length greater than or equal to four, $\{x_0,x_n\}$ cannot capture, with induced paths of length less than or equal to three, any vertex of $\mathit{hull}(V(P))$. Thus, $\mathit{hull}(V(P))\neq\mathit{hull}(\{x_0,x_n\})$, which is a contradiction since $\mathit{hull}(V(P))$ is a convex set of a convex geometry. Hence, $\mathit{diam}(G)\leq 3$.
Finally, suppose that $G$ contains $G_n$ $(n\geq 4)$, as in Figure~\ref{fig0}. Clearly, $\mathit{hull}(V(G_n))$ is a convex set of $G$. As $G$ is a convex geometry and $G_n$ contains exactly two simplicial vertices, by Proposition~\ref{prop:simplicial} we conclude that $\mathit{hull}(V(G_n))=\mathit{hull}(\{x_0,x_n\})$. Clearly, $u_n$ is captured by an induced path $P$ of length at most 3 between $x_0$ and $x_n$, namely $P=x_0,u_n,x_n$. Since $u_n$ is a neighbor of $x_0$ and $x_n$, no vertex $x_i$ for $i \neq 0,n$ can be captured by $\{x_0,x_n,u_n\}$. Moreover, no vertex $x\notin V(G_n)$ lying in an induced path of length two between $x_0$ and $x_n$ in $H=G[\mathit{hull}(V(G_n))]$ can be added to $\{x_0,x_n,u_n\}$ in order to capture a vertex $x_i$ $(i \neq 0,n)$, because $H$ is chordal and thus $x$ would necessarily be a neighbor of $u_n$ and of every $x_i$.
From the above arguments, there must exist at least one $P_4$ between $x_0$ and $x_n$, say $P'$, such that $\{x_0, x_n\}$ captures, with $P'$, some $x_i$ ($i\neq 0,n$) or vertices that will be used to capture the $x_i$'s in subsequent iterations of the computation of $\mathit{hull}(V(G_n))$. It is clear that $P'$ avoids $u_n$. Therefore, $G_n$ is solved, and this concludes the first part of the proof.
Conversely, suppose that conditions 1, 2, and 3 are satisfied. We have to prove that every convex set $S\subseteq V(G)$ is the convex hull of its extreme points.
First, we show that $S$ contains extreme points. Assume $|S|>1$ (otherwise, the proof is trivial). Clearly, the convex subgraph $H=G[S]$ is a chordal graph, and thus contains at least two simplicial vertices (i.e., extreme points of $S$). In addition, every vertex of $H$ lies in an induced path between two simplicial vertices.
Every vertex of $H$ lying in an induced path of length at most 3 between two simplicial vertices of $H$ trivially belongs to $\mathit{hull}(\mathit{Ext}(V(H)))$. Assume then there exists a vertex $x\in V(H)$ that does not lie in an induced path of length at most 3 between two simplicial vertices of $H$. We need to prove that, in this case, $x\in \mathit{hull}(\mathit{Ext}(V(H)))$ as well.
We consider two cases: $H$ contains $G_n$ $(n\geq 4)$ or not. The following claims deal with the two cases.
\begin{claim}\label{claim1} If $H$ does not contain $G_n$ $(n\geq 4)$ then $x$ lies in an induced path of $H$ of length at most four between two simplicial vertices of $H$. \end{claim}
\begin{proof} Suppose that every induced path of $H$ between simplicial vertices containing $x$ as an internal vertex is of length $n>4$, and let $P=x_0,\ldots,x_n$ be a minimal induced path satisfying this condition.
Note that $\mathit{hull}(V(P))$ is completely contained in $H$ and it is a convex set of $G$. On the other hand, its only extreme points are $x_0$ and $x_n$. Since $\mathit{hull}(V(P))$ is convex, by Lemma~\ref{lem:diam3} $G[\mathit{hull}(V(P))]$ has diameter less than or equal to three. Thus, $d_H(x_0,x_n)\leq 3$.
Suppose $d_H(x_0,x_n)=2$. Clearly, there exists an induced path $Q=x_0,u_n,x_n$ with $u_n$ not in $P$ (Figure \ref{fig1}).
\begin{figure}
\caption{Induced path $Q$.}
\label{fig1}
\end{figure}
Since $G[\mathit{hull}(V(P))]$ is chordal, $Q\cup P$ is not an induced cycle. Thus $u_n$ is adjacent to every $x_i$, $i=1,\ldots,n-1$. But then $Q\cup P=G_n (n>4)$, a contradiction.
Hence, $d_H(x_0,x_n)=3$. Let $Q'=x_0,y_1,y_2,x_n$ be an induced path of $G[\mathit{hull}(V(P))]$ with length three. Note that $|V(Q') \cap V(P)|\leq 3$ (Figure \ref{fig2}).
\begin{figure}
\caption{Possible configurations for $Q'$ and $P$.}
\label{fig2}
\end{figure}
If $|V(Q')\cap V(P)|=3$, assume without loss of generality that $y_2=x_{n-1}$. Since $P$ and $Q'$ are induced paths and $G[\mathit{hull}(V(P))]$ is a chordal graph, it follows that $P[x_0,x_{n-1}] \cup Q'$ is not an induced cycle. Thus $y_1$ is adjacent to every $x_i$ lying in $P[x_0,x_{n-1}]$. But then $P[x_0,x_{n-1}]\cup Q'=G_{n-1}$ for $n-1>3$, a contradiction.
Now we analyze the case $|V(Q')\cap V(P)|=2$. Since $x_0$ and $x_n$ are simplicial vertices in $G[\mathit{hull}(V(P))]$, $x_1$ is adjacent to $y_1$ and $x_{n-1}$ is adjacent to $y_2$ (Figure \ref{fig3}). Let $Q''$ be the path $Q''=x_1,y_1,y_2,x_{n-1}$.
\begin{figure}
\caption{Path $Q''$.}
\label{fig3}
\end{figure}
Since $G[\mathit{hull}(V(P))]$ is a chordal graph, $P[x_1,x_{n-1}]\cup Q''$ is not an induced cycle. Then $y_1$ or $y_2$ is adjacent to $x_i$ for $i\notin\{1,n-1\}$. Moreover, there exists at least one index $l \in \{1,..,n-1\}$ such that $x_l$ is adjacent to $y_1$ and $y_2$. Let $i$ be the minimum index such that $y_2$ is adjacent to $x_i$, and $j$ be the maximum index such that $y_1$ is adjacent to $x_j$ (Figure \ref{fig4}). Clearly, $i\leq j$, otherwise $G[\{y_1,y_2,x_i,\ldots,x_j\}]$ is an induced cycle of size at least 4.
\begin{figure}
\caption{Vertices $x_i$ and $x_j$.}
\label{fig4}
\end{figure}
Note that $y_2$ is adjacent to $x_h$ for all $h\geq i$, and $y_1$ is adjacent to $x_m$ for all $m \leq j$, otherwise there exists an induced cycle of size at least 4.
Note that $j<4$ and $i>n-4$, otherwise $G[\mathit{hull}(V(P))]$ would contain a $j$-gem $(j\geq 4)$ or an $(n-i)$-gem $(n-i\geq 4)$, which is impossible.
We show that $n<7$. Assume by contradiction that $n\geq 7$. Thus $n-4\geq 3$, and this implies $i>n-4\geq 3$. But this is impossible because $j<4$ and $i\leq j$. Hence $n<7$.
Assume $n=6$. Since $i>n-4$ and $j<4$, if $i\neq 3$ then $i\geq 4>j$, a contradiction since $i\leq j$. Thus, $i=3$. Analogously, $j=3$. But then $G[\{x_0,x_1,x_2,x_3,y_2,y_1\}]=G_4$, which is impossible.
Assume now $n=5$, and suppose that $j=i$. Since $i>n-4$ and $i\leq j$, it follows that $j=2$ or $j=3$, and then either $G[\{y_1,y_2,x_2,x_3,x_4,x_5\}]=G_4$ or $G[\{y_1,y_2,x_0,\ldots,x_3\}]=G_4$, a contradiction. If $i<j$ then, since $j<4$, $i=2$ and $j=3$. Thus $x$ is an internal vertex either of the induced path $x_0,x_1,x_2,y_2,x_5$ or the induced path $x_0,y_1,x_3,x_4,x_5$, contradicting the choice of $P$.
Hence $n\leq 4$ and $x$ lies in an induced path of $H$ of length at most four between two simplicial vertices of $H$. \end{proof}
\begin{claim}\label{claim2} If $H$ does not contain $G_n$ $(n\geq 4)$ then $x$ is captured by $\mathit{Ext}(V(H))$. \end{claim}
\begin{proof} By Claim \ref{claim1}, if $x\notin I[\mathit{Ext}(V(H))]$ then there exists in $H$ an induced path $P=x_0,\ldots,x,\ldots,x_4$ of length four such that $x_0$ and $x_4$ are simplicial vertices of $H$. Since $\mathit{hull}(V(P))\subseteq V(H)$ is a convex set of $G$, by Lemma~\ref{lem:diam3} the diameter of $G[\mathit{hull}(V(P))]$ is less than or equal to three. Also, it does not contain $G_n (n\geq 4)$, because it is an induced subgraph of $H$.
Note that the diameter of $G[\mathit{hull}(V(P))]$ is exactly three (since it is a chordal graph not containing $G_n \ (n\geq 4)$). In addition, $x_0$ and $x_4$ are the only simplicial vertices of $G[\mathit{hull}(V(P))]$. Then there exists in $G[\mathit{hull}(V(P))]$ an induced path $Q=x_0,y_1,y_2,x_4$ of length three. Observe that $P$ and $Q$ can have at most three vertices in common and the extreme points of $\mathit{hull}(V(P))$ are also extreme points of $H$.
If $|V(Q)\cap V(P)|=3$, without loss of generality assume that $y_2=x_3$. Since $x\notin I[\mathit{Ext}(V(H))]$, we have $x_3\neq x$. Note that $x_3\in I[\mathit{Ext}(\mathit{hull}(V(P)))]$ (since the length of the induced path $x_0,y_1,x_3,x_4$ is three).
On the other hand, $x$ is an internal vertex of the induced path $x_0,x_1,x_2,x_3$ and $x_3\in I[\mathit{Ext}(\mathit{hull}(V(P)))]$. Thus, $x\in I^2[\mathit{Ext}(\mathit{hull}(V(P)))]$. Since the extreme points of $\mathit{hull}(V(P))$ are also extreme points of $H$, it follows that $x\in I^{m}[\mathit{Ext}(V(H))]$ with $m<4$.
Now suppose $|V(Q)\cap V(P)|=2$. Since $x_0$ and $x_4$ are simplicial vertices of $H$, we have that $y_1$ is adjacent to $x_1$ and $y_2$ is adjacent to $x_3$. On the other hand, $G[\mathit{hull}(V(P))]$ is a chordal graph, and thus $x_1,y_1,y_2,x_3,x_2,x_1$ is not an induced cycle. Thus, $y_1$ and $y_2$ are simultaneously adjacent to at least one vertex of $P$. By hypothesis, since $G_n$ ($n\geq 4$) is not an induced subgraph of $H$, both $y_1$ and $y_2$ are adjacent to $x_2$ (Figure \ref{fig5}).
\begin{figure}
\caption{$x_1$ may or may not be adjacent to $y_2$, and $x_3$ may or may not be adjacent to $y_1$.}
\label{fig5}
\end{figure}
Note that $x_1$ may be adjacent to $y_2$, and $x_3$ may be adjacent to $y_1$. It is easy to see that $x$ is captured by $\{x_0,y_2\}$ with the induced path $x_0,x_1,x_2,y_2$ whenever $y_2$ is not adjacent to $x_1$, or by $\{x_4,y_1\}$ with the induced path $x_4,x_3,x_2,y_1$ if $y_1$ is not adjacent to $x_3$.
If $y_1$ is adjacent to $x_3$ or $y_2$ is adjacent to $x_1$ then $x$ is captured (depending on the position of $x$ in $P$) with one of the following induced paths: $x_0,x_1,y_2$; $x_4,x_3,y_1$; or $x_1,x_2,x_3$. In the last case, the following steps are needed: first, the set $\{x_0,x_4\}$ captures $y_1$ and $y_2$; next, $\{y_1, y_2\}$ captures $x_1$ and $x_3$; finally, $\{x_1,x_3\}$ captures $x=x_2$. Clearly, $x\in I^{m}[\mathit{Ext}(V(P))]$ with $m=2$ or $m=3$. Hence, $x\in I^{m}[\mathit{Ext}(V(H))]$ with $m<4$. This completes the proof of the claim.
\end{proof}
\begin{claim}\label{claim3} Suppose that $H$ contains $G_n$ $(n\geq 4)$, as in Figure~\ref{fig0}. Suppose also that the simplicial vertices of $G_n$ are captured by $\mathit{Ext}(V(H))$. Then every $x\in V(G_n)$ is captured by $\mathit{Ext}(V(H))$. \end{claim}
\begin{proof} Assume that $H$ contains $G_n$ $(n\geq 4)$ as in Figure~\ref{fig0}, and let $P$ be the induced path $P=x_0,x_1,\ldots,x_n$.
Note that $G[\mathit{hull}(V(G_n))]$ is a convex subgraph of $H$ with only two extreme points, $x_0$ and $x_n$. By hypothesis, $G_n$ is solved, and thus $G[\mathit{hull}(V(G_n))]$ contains an induced path $Q=x_0,y_1,y_2,x_n$ that avoids $u_n$.
Note that $P$ and $Q$ can have at most three vertices in common. Moreover, $y_1$ and $y_2$ are captured by $\{x_0,x_n\}$.
We prove the claim by induction on $n$.
\noindent {\em Base case:} $n=4$
If $Q=x_0,y_1,y_2,x_4$ and $P=x_0,x_1,x_2,x_3,x_4$ have three vertices in common, assume without loss of generality that $y_2=x_3$. Note that $x_3$ is captured by $V(Q)$, and then every other vertex of $G_n$ is captured by $\{x_0,x_1,x_2,x_3\}$. Thus the claim follows in this case.
If $Q$ and $P$ have exactly two vertices in common then $y_1$ is adjacent to $x_1$ and $y_2$ is adjacent to $x_3$. In addition, both are adjacent to $u_4$. Since $G[\mathit{hull}(V(G_4))]$ is a chordal graph, $x_1,y_1,y_2,x_3,x_2,x_1$ is not an induced cycle, and since $P$ is an induced path there exists at least one vertex $x_l$, with $l\in\{1,2,3\}$, which is adjacent to both $y_1$ and $y_2$. See Figure~\ref{fig5}. Let $i,j\in\{1,2,3\}$ be such that $i$ is the minimum index for which $x_i$ is adjacent to $y_2$, and $j$ is the maximum index for which $x_j$ is adjacent to $y_1$. Note that $i\leq j$, otherwise there exists an induced cycle of size at least four.
Since $n=4$, it is clear that $x_0,\ldots,x_i,y_2$ is an induced path of length at most three whenever $i\neq 3$. Similarly, $y_1,x_j,\ldots,x_4$ is an induced path of length at most three whenever $j\neq 1$.
If $i=3$ then $y_1$ is adjacent to $x_3$, and thus $\{y_1,x_4\}$ captures $x_3$. Next, $\{x_0,x_3\}$ captures the other vertices of $G_n$. We proceed analogously for $j=1$.
If $i\neq 3$ and $j\neq 1$, consider the following induced paths of length at most three: $Q'=x_0,\ldots,x_i,y_2$ and $Q''=y_1,x_j,\ldots,x_4$. The existence of such paths show that $x_i$ and $x_j$ are captured by $\{x_0,y_2\}$ and $\{y_1,x_4\}$, respectively. Then the other vertices of $G_n$ are also captured, and the base case is complete.
\noindent {\em Inductive step:} $n>4$
Assume that, for every $l$-gem $G'$ with $4\leq l<n$ contained in $H$ whose simplicial vertices are captured by $\mathit{Ext}(V(H))$, all vertices of $G'$ are captured by $\mathit{Ext}(V(H))$.
By hypothesis, $G_n$ $(n>4)$ is solved, and then there exists an induced path $Q=x_0,y_1,y_2,x_n$ in $G[\mathit{hull}(V(G_n))]$ that avoids $u_n$.
If $Q$ and $P=x_0,x_1,\ldots,x_{n-1},x_n$ have three vertices in common, assume without loss of generality that $x_{n-1}\in V(Q)\cap V(P)$. In this case, $x_{n-1}$ is captured with the induced path $Q$. Let $G'$ be the $(n-1)$-gem induced by $V(G_n)\backslash\{u_n\}$. Since $x_0$ and $x_{n-1}$ are already captured and $x_{n-1}$ is a simplicial vertex of $G'$, by the induction hypothesis all the vertices of $G'$ are captured, and this implies that all the vertices of $G_n$ are captured by $\mathit{Ext}(V(H))$.
If $Q$ and $P$ have only two vertices in common, we have the following situation. Since $G[\mathit{hull}(V(G_n))]$ is a chordal graph, $x_1,y_1,y_2,$ $x_{n-1},x_{n-2},\ldots,x_2,x_1$ is not an induced cycle. Thus there exists at least one vertex $x_l$, with $l\in\{1,\ldots,n-1\}$, which is adjacent to $y_1$ and $y_2$. Let $i,j\in\{1,2,\ldots,n-1\}$ be such that $i$ is the minimum index for which $x_i$ is adjacent to $y_2$, and $j$ is the maximum index for which $x_j$ is adjacent to $y_1$. Note that $i\leq j$. See Figure~\ref{fig6}.
\begin{figure}
\caption{Inductive step: vertices $x_i$ and $x_j$.}
\label{fig6}
\end{figure}
If $i=1$, $x_1$ is captured with the induced path $x_0,x_1,y_2$. Note that the vertices $x_1,x_2,\ldots,x_n, y_2$ induce an $(n-1)$-gem $G''$ whose simplicial vertices, $x_1$ and $x_n$, are already captured. By the induction hypothesis, all the vertices of $G''$, and thus of $G_n$, are captured by $\mathit{Ext}(V(H))$. We proceed analogously for $j=n-1$, by considering the $(n-1)$-gem induced by the vertices $y_1,x_0,\ldots,x_{n-1}$.
If $i>1$ and $j<n-1$, we analyze three cases:
\noindent {\bf Case 1:} If $i>2$ and $j<n-2$, the gem induced by $\{x_0,x_1,\ldots,x_i,y_2,y_1\}$ (with simplicial vertices $x_0$ and $y_1$) and the gem induced by $\{y_1,y_2,x_j,\ldots,x_n\}$ (with simplicial vertices $x_n$ and $y_1$) are $l$-gems with $4\leq l<n$. Since the vertices $x_0,x_n,y_1,y_2$ have already been captured, by the induction hypothesis all vertices of such gems are captured in subsequent iterations. This means that vertices $x_1,\ldots,x_i$ and $x_j,\ldots,x_{n-1}$ are captured.
If $i=j$ then all vertices of $G_n$ are captured.
If $i<j$, either $Q'=x_i,x_{i+1},\ldots,x_{j-1},x_j$ is an induced path of length at most three, or the vertices of $Q'$ along with $u_n$ induce an $l$-gem $G'''$ ($4\leq l<n$) with simplicial vertices $x_i$ and $x_j$. In the latter case, by the induction hypothesis, all vertices of $G'''$ are captured. Thus, in either case, we conclude that all vertices of $G_n$ are captured.
\noindent {\bf Case 2:} If $i=2$ and $j<n-2$ (or $i>2$ and $j=n-2$), vertices $x_1$ and $x_2$ are captured with the induced path $x_0,x_1,x_2,y_2$. Using similar arguments as above, the claim follows by considering the sets $S_1=\{y_1,y_2,x_j,\ldots,x_n\}$ and $S_2=\{x_2,x_3,\ldots,x_j\}$. In the former case, $S_1$ induces a gem with already-captured simplicial vertices $y_1$ and $x_n$. In the latter, $S_2$ either induces a $P_r$ with $r\leq 4$ or, along with $u_n$, an $l$-gem $(4\leq l<n)$ with already-captured simplicial vertices $x_2$ and $x_j$.
\noindent {\bf Case 3:} If $i=2$ and $j=n-2$, we have that $x_1,x_2$ are captured with the induced path $x_0,x_1,x_2,y_2$, and $x_{n-2},x_{n-1}$ are captured with the induced path $y_1,x_{n-2},x_{n-1},x_n$.
If the induced path $x_2,x_3,\ldots,x_{n-2}$ has length at most three, vertices $x_3,\ldots,x_{n-3}$ are captured by $\{x_2,x_{n-2}\}$. Otherwise, the claim follows by considering the gem induced by $\{x_2,\ldots,x_{n-2},u_n\}$, where $x_2$ and $x_{n-2}$ are its already-captured simplicial vertices.
\end{proof}
Now, we conclude the proof that $V(H)=\mathit{hull}(\mathit{Ext}(V(H)))$. Recall that there is a vertex $x\in V(H)\setminus I[\mathit{Ext}(V(H))]$ such that $x$ lies in an induced path of length at least four $P=x_0,\ldots,x,\ldots,x_n$, where $x_0$ and $x_n$ are simplicial vertices of $H$. We need to prove that $x\in\mathit{hull}(\mathit{Ext}(V(H)))$.
Clearly, $\mathit{hull}(V(P))$ is contained in $V(H)$ and $H'=G[\mathit{hull}(V(P))]$ is a chordal subgraph of $H$. Also, $H'$ is a convex subgraph of $G$, and thus by Lemma~\ref{lem:diam3} we have ${\mathit diam}(H')\leq 3$. Note that $x_0$ and $x_n$ are the only simplicial vertices of $H'$.
If ${\mathit diam}(H')=2$, there exists an induced path $x_0,u_n,x_n$. But then $u_n$ must be adjacent to all vertices of $P$, and the set $V(P)\cup\{u_n\}$ induces $G_n$ $(n\geq 4)$. By Claim~\ref{claim3}, all vertices of $G_n$ are captured by $\mathit{Ext}(V(H))$, and the proof follows.
If ${\mathit diam}(H')=3$, there exists an induced path $Q=x_0,y_1,y_2,x_n$ such that $Q$ and $P$ have at most three vertices in common.
If $|V(Q)\cap V(P)|=3$, assume $x_{n-1}\in V(Q)\cap V(P)$. Then $x_{n-1}$ is captured with $Q$. If $n=4$ then $x_1,x_2$ are captured by $\{x_0,x_3\}$, and the proof follows. If $n>4$, since $H'$ is chordal it follows that $y_1$ is adjacent to all vertices of $V(P)\setminus\{x_n\}$, and then $\{x_0,\ldots,x_{n-1},y_1\}$ induces an $l$-gem with $l\geq 4$ and already-captured simplicial vertices $x_0$ and $x_{n-1}$. Hence, by Claim~\ref{claim3}, the proof follows.
If $|V(Q)\cap V(P)|=2$, since $x_0$ and $x_n$ are simplicial vertices, we have that $y_1$ is adjacent to $x_1$ and $y_2$ is adjacent to $x_{n-1}$. Also, $x_1,y_1,y_2,x_{n-1},\ldots,x_1$ is not an induced cycle, and then there must exist $x_i$ simultaneously adjacent to $y_1$ and $y_2$. Following previous arguments, let $i,j \in \{1,2,\ldots,n-1\}$ be such that $i$ is the minimum index for which $x_i$ is adjacent to $y_2$, and $j$ the maximum index for which $x_j$ is adjacent to $y_1$. We have the situation illustrated in Figure~\ref{fig6}. The proof then follows by using the same arguments as in Claim~\ref{claim3}.
Therefore, $V(H)=\mathit{hull}(\mathit{Ext}(V(H)))$ and $G$ is an $l^3$-convex geometry.
\end{proof}
\subsection{The class of $l^3$-convex geometries is not hereditary} \label{sec:not-hered}
Observe that the graph $G$ depicted in Figure~\ref{figure-example} is a proper interval graph (and, thus, an interval graph and a chordal graph). Also, $G$ is weakly polarizable (see~\cite{olariu}). In addition, for $a\in\{m,m^3,t,wt\}$, we have the following facts:
\begin{itemize} \item $\mathit{Ext}_a(V(G))=\{1,7\}$; \item $\mathit{hull}_a(\{1,7\})=I_a[\{1,7\}]=V(G)$; \item $G-x$ is an $a$-convex geometry for every $x\in V(G)$. \end{itemize}
\noindent However, for $x\in\{2,5\}$, $G'=G-x$ is not an $l^3$-convex geometry, since:
\begin{itemize} \item $V(G')$ is trivially an $l^3$-convex set of $G'$, \item $\mathit{Ext}_{l^3}(V(G'))=\{1,7\}$, and \item $\mathit{hull}_{l^3}(\{1,7\})=\{1,7\}\neq V(G')$. \end{itemize}
\noindent This shows that the class of $l^3$-convex geometries is not hereditary.
\section{Conclusions and future work} \label{sec:conclu}
Characterizing $l^k$-convex geometries for $k\geq 4$ is an interesting open question. In order to tackle such a question, the following result will be useful:
\begin{proposition} Let $k\geq 2$. If $G$ is an $l^k$-convex geometry then $G$ is a chordal graph with $\mathit{diam}(G)\leq k$. \end{proposition}
The proof of the above proposition uses the same arguments as in the necessity part of the proof of Theorem~\ref{thm:l3}.
Regarding complexity aspects, recognizing whether a graph $G$ is an $l^2$-convex geometry can be easily done in linear time by testing whether $G$ is a chordal cograph (see~\cite{corneil-et-al,rose-et-al}).
Finding, if possible, an efficient recognition algorithm for $l^3$-convex geometries amounts to finding an efficient test of condition 3 in Theorem~\ref{thm:l3}, for chordal graphs with diameter at most three.
\end{document}
\begin{document}
\title{Averaging and passage through resonances in two-frequency systems near separatrices\thanks{The work was supported by the Leverhulme Trust (Grant No. RPG-2018-143).}} \abstract{ The averaging method is a classical and powerful tool in perturbation theory of dynamical systems. There are two major obstacles to applying the averaging method: resonances and separatrices. In this paper we obtain realistic asymptotic estimates that justify the use of the averaging method in a generic situation where both these obstacles are present at the same time: passage through a separatrix for time-periodic perturbations of one-frequency Hamiltonian systems. As a general phenomenon, resonances accumulate at separatrices. The Hamiltonian depends on a parameter that slowly changes for the perturbed system (so slow-fast Hamiltonian systems with two and a half degrees of freedom are included in our class).
Our results can also be applied to perturbations of generic two-frequency integrable systems near separatrices, as they can be reduced to periodic perturbations of one-frequency systems. }
\tableofcontents
\section{Introduction} \input intro.tex
\section{Statement of results} \label{s:results} \input unperturbed.tex \input perturbed.tex \input cond-B.tex \input theorem.tex \input two_freq.tex
\input scheme_proofs.tex
\input proof_plan.tex
\input decompose.tex \section{Analysis of the perturbed system} \label{s:an-perturbed} We now focus on the proof of the lemma on approaching separatrices. Let us consider the perturbed system in $\mathcal A_3$. \input estimates.tex \section{Resonant and non-resonant zones} \label{s:res-nonres-zones} \input zones.tex \input est-nonres.tex \input outer-zones.tex \subsection{Lemmas on crossing resonant zones} \label{s:est-res-cross} \input est-res-cross.tex
\section{Proof of the lemma on approaching separatrices} \label{s:approach-proof} \input approach_proof.tex
\section{Crossing non-resonant zone: proof} \label{ss:nonres_proof}
\input nonres_proof.tex
\input auxiliary.tex
\input action.tex
\input phase.tex
\input averaging.tex
\input after-averaging.tex
\input main-part.tex
\section{Crossing resonant zones: proofs} \label{s:res-zones-proofs} \subsection{High-numerator resonances: proof} \label{ss:weak-res-proof}
\input weak_res_proof.tex
\input RC_auxiliary.tex
\input RC_auxiliary_proof.tex
\input strong_res.tex \section{Passing separatrices: proof} \label{s:pass-sep}
\input pass_sep.tex \input prob.tex \input proof_no_capture.tex
\section{Acknowledgment} We are grateful to A.V. Bolsinov for advice on integrable systems and to A.V. Artemyev and V.V. Sidorenko for useful discussions.
\begin{appendices} \section{Analytic continuation: proofs} \label{a:AC-proofs}
\input complex_proof.tex \section{Estimates on Fourier coefficients} \label{a:fourier_proof}
\input fourier_proof.tex \section{Proof of estimates on $u$} \label{a:appendix}
\input est_u.tex \section{Proof of auxiliary lemma} \label{a:proof-aux}
\input volume_proof.tex
\section{Reduction of two-frequency systems to periodically perturbed one-frequency system} \label{a:two-freq-reduction} In this appendix we present the proof of Lemma~\ref{l:two-freq-one-freq}. This proof was kindly communicated to us by A.V. Bolsinov. Then we show how this lemma can be used to reduce perturbations of two-frequency integrable systems to time-periodic perturbations of one-frequency systems. \begin{proof}[Proof of Lemma~\ref{l:two-freq-one-freq}]
We will consider the case without the parameter $z$, as with $z$ one can simply construct the new coordinates separately for each $z$ as described below.
The proof is based on results presented in the book~\cite{bolsinov2004integrable}.
Let $L$ denote the singular leaf of the Liouville foliation (i.e., the foliation of the phase space into \emph{Liouville tori} given by $H=H_0$, $F=F_0$; each Liouville torus is parametrized by the values $H_0$, $F_0$ of the two first integrals) and let $Q^3$ denote the isoenergy level that contains $L$.
By~\cite[Theorem 3.2]{bolsinov2004integrable} there exists a \emph{periodic integral} $s_1$ defined in a four-dimensional neighborhood $V(L)$, i.e. a function $s_1$ that is smooth even on separatrices and is such that the flow of the vector field $\sgrad s_1$ is $2\pi$-periodic (we use the notation $\sgrad U$ for the \emph{Hamiltonian vector field} of the function $U$; it is determined by $\omega(v, \sgrad U)=dU(v)$, where $\omega$ is the symplectic structure, $v$ is an arbitrary tangent vector and $dU(v)$ denotes the derivative of $U$ in the direction $v$).
The periodic integral $s_1$ allows one to define the structure of a Seifert fibration in a neighborhood $U(L) \subset Q^3$ of the singular leaf $L$, namely the fibration of $U(L)$ by the orbits of the flow of $\sgrad s_1$ (\cite[Theorem 3.3]{bolsinov2004integrable}).
As shown in \cite[Chapter 3]{bolsinov2004integrable}, there are two cases:
\begin{enumerate}
\item one can take a two-dimensional surface $P \subset U(L)$ that intersects each leaf of the Seifert fibration once;
\item one can take a two-dimensional surface $\hat P \subset U(L)$ that intersects each regular leaf of the Seifert fibration twice and each singular leaf once.
\end{enumerate}
By~\cite[Proposition 5.4]{bolsinov2004integrable} topological stability of the isoenergy level $Q^3$ implies that $P$ and $\hat P$ can be taken transversal to $\sgrad H$.
On $Q^3$ the vector fields $\sgrad s_1$ and $\sgrad H$ are tangent to $Q^3$. Thus we can continue $P$ and $\hat P$ to $3$-dimensional transversals $P^3, \hat P^3 \subset V(L)$.
Let us consider the first case and construct a phase variable $\varphi_1$ conjugate to $s_1$. To do so, we set $\varphi_1=0$ on $P^3$ and propagate it along the leaves of the Seifert fibration. Indeed, we want $\{ \varphi_1, s_1 \} = 1$ (here $\{\cdot, \cdot\}$ denotes the Poisson bracket); this condition can be restated as follows: the derivative of $\varphi_1$ along $\sgrad s_1$ equals $1$, and this is the property used to define $\varphi_1$.
As trajectories of $\sgrad s_1$ are $2\pi$-periodic, this correctly defines the angle variable $\varphi_1$.
In the second case we can construct $\varphi_1$ in the same way, the difference will be that $\varphi_1$ will be defined on a double cover. We will consider the lift of the unperturbed system to the covering space instead of the original unperturbed system in the rest of the proof.
Let us now define variables $p, q$ so that $s_1, p, \varphi_1, q$ are canonical variables. Fix variables $p, q$ on some two-dimensional section $\{ \varphi_1 = 0, s_1 = \tilde s_1 \}$ so that these variables are canonical with respect to the restriction of the symplectic structure to this section. Spread these coordinates on the whole $V(L)$ by the flows of $\sgrad s_1$ and $\sgrad \varphi_1$ (these flows commute, as $\{ \varphi_1, s_1 \} = 1$). As these flows are symplectic, we have $\{ q, p \} = 1$ on $V(L)$. By construction we have $\{ a, b \} = 0$, where $a=\varphi_1, s_1$ and $b=p, q$. Thus $s_1, p, \varphi_1, q$ are canonical variables.
In these new variables the dynamics of the unperturbed system takes the form
\begin{equation}
\dot \varphi_1 = \pdv{H}{s_1}, \qquad \dot s_1 = 0, \qquad \dot p = -\pdv{H}{q}, \qquad \dot q = \pdv{H}{p}.
\end{equation}
Take $\varphi_1$ as a new independent variable (new time). Denote $\psi'=\frac{d\psi}{d\varphi_1}$.
Denote $h=H$ and take $h$ as a new variable that replaces $s_1$.
According to general formulas of isoenergetic reduction~\cite[\S9.45.B]{arnold1989mathematical}, the dynamics of $p$ and $q$ with respect to the time $\varphi_1$ is given by the Hamiltonian $S(p, q, h) = - s_1(p,q,h)$. Finally, denote $s=\varphi_1$. We have transformed the unperturbed system to the form
\begin{equation}
s' = 1, \qquad h' = 0, \qquad p' = -\pdv{S}{q}, \qquad q' = \pdv{S}{p}.
\end{equation} \end{proof}
In the coordinates of Lemma~\ref{l:two-freq-one-freq} the perturbed system takes the form \begin{equation}
s' = 1 + \varepsilon f_s, \qquad
h' = \varepsilon f_h, \qquad
p' = - \pdv{S}{q} + \varepsilon f_p, \qquad
q' = \pdv{S}{p} + \varepsilon f_q, \qquad
z' = \varepsilon f_z \end{equation} where $(f_s, f_h, f_p, f_q, z)$ is the lift of the perturbation $f$ under the cover \[
(s, h, p, q, z) \mapsto (p_1, p_2, q_1, q_2, z). \] This vector field is smooth. Taking $s$ as the new time gives (denoting $\dot a = \dv{a}{s}$) \begin{equation}
\dot s = 1, \qquad
\dot h = \varepsilon g_h, \qquad
\dot p = - \pdv{S}{q} + \varepsilon g_p, \qquad
\dot q = \pdv{S}{p} + \varepsilon g_q, \qquad
\dot z = \varepsilon g_z \end{equation} where \begin{align} \begin{split}
g_h &= f_h/(1 + \varepsilon f_s), \\
g_p &= (1 + \varepsilon f_s)^{-1} \Big( f_p + \pdv{S}{q} f_s \Big), \\
g_q &= (1 + \varepsilon f_s)^{-1} \Big( f_q - \pdv{S}{p} f_s \Big), \\
g_z &= f_z/(1 + \varepsilon f_s). \end{split} \end{align} Thus the perturbed two-frequency system is reduced to a time-periodic perturbation of a one-frequency system whose Hamiltonian depends on the additional parameter $h$ (which can be included in the vector $z$).
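The algebra behind the last step (division by $s'=1+\varepsilon f_s$ once $s$ is taken as the new time) is elementary and can be confirmed symbolically. The following sketch (assuming the Python library SymPy, with the partial derivatives of $S$ treated as independent symbols) checks the displayed formulas for $g_p$ and $g_q$.

\begin{verbatim}
# Symbolic check of the time reparametrization: dividing the primed
# equations by s' = 1 + eps*f_s must reproduce the stated g_p and g_q.
# Sq and Sp stand for dS/dq and dS/dp, treated as independent symbols.
from sympy import symbols, simplify

eps, fs, fp, fq, Sq, Sp = symbols('eps f_s f_p f_q S_q S_p')
ds = 1 + eps*fs                              # s' = 1 + eps*f_s

dp_ds = (-Sq + eps*fp) / ds                  # p' / s'
dq_ds = ( Sp + eps*fq) / ds                  # q' / s'

g_p = (fp + Sq*fs) / ds                      # formulas claimed in the text
g_q = (fq - Sp*fs) / ds

assert simplify(dp_ds - (-Sq + eps*g_p)) == 0
assert simplify(dq_ds - ( Sp + eps*g_q)) == 0
\end{verbatim}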
\end{appendices}
\printbibliography
\vskip 15mm
\noindent Anatoly Neishtadt,
\noindent {\small Department of Mathematical Sciences,}
\noindent {\small Loughborough University, Loughborough LE11 3TU, United Kingdom;}
\noindent {\small Space Research Institute, Moscow 117997, Russia}
\noindent {\footnotesize{E-mail : [email protected]}}
\vskip 5mm
\noindent Alexey Okunev,
\noindent {\small Department of Mathematical Sciences,}
\noindent {\small Loughborough University, Loughborough LE11 3TU, United Kingdom}
\noindent {\footnotesize{E-mail : [email protected]}}
\end{document}
\begin{document}
\title{\LARGE \bf Implementing Optimization-Based Control Tasks in Cyber\textendash Physical Systems With Limited Computing Capacity} \thispagestyle{empty} \pagestyle{empty}
\section{Introduction and Motivation}\label{sec:introduction} A common aspect of today's Cyber\textendash Physical Systems (CPSs) is that multiple control tasks may execute in a shared processor. For a CPS performing $n$ optimization-based control tasks, denoted by $\tau_k,~k={1,\cdots,n}$, let $\ell_k$ and $\Delta T_k$ denote the worst-case execution time and sampling period of task $\tau_k$, respectively. Under the Earliest Deadline First (EDF) policy, the tasks are schedulable on a single preemptive processor if the following condition \cite{Buttazzo2011} is satisfied: \begin{align}\label{eq:schedulabilitycondition} \sum_{k=1}^n \frac{\ell_k}{\Delta T_k}\leq1. \end{align} Optimization-based control tasks make use of online optimization and thus have large execution times; hence their sampling periods must be large as well to satisfy condition \eqref{eq:schedulabilitycondition}. However larger sampling periods may cause worse control performance.
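As a concrete illustration of condition \eqref{eq:schedulabilitycondition}, the following plain-Python snippet evaluates the EDF utilization for a set of tasks; the numbers are the ones reported later in Section~\ref{sec:simulation} for the two MPC tasks, and the snippet itself is only an illustrative sketch.

\begin{verbatim}
# EDF schedulability test (condition (1)): sum of l_k / DeltaT_k <= 1.
def edf_utilization(exec_times_ms, periods_ms):
    return sum(l / T for l, T in zip(exec_times_ms, periods_ms))

# Two MPC tasks with worst-case execution time of about 150 ms each:
# not schedulable at the desired 20 ms period, schedulable if both
# periods are stretched to 300 ms.
print(edf_utilization([150, 150], [20, 20]))    # 15.0 -> not schedulable
print(edf_utilization([150, 150], [300, 300]))  # 1.0  -> schedulable
\end{verbatim}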
\mypara{Prior Work} Existing methods to address the above-mentioned issue are: i) control-scheduling co-design (e.g., \cite{Pazzaglia2021}), where parameters of control systems are modified at every sampling instant; ii) pre-computation (e.g., \cite{Alessio2009}), where optimal control inputs are computed offline and stored for run-time use; iii) triggering-based control (e.g., \cite{Wang2021}), where a triggering mechanism invokes control tasks; and iv) fixed-iteration optimization (e.g., \cite{Pherson2020}), where control tasks perform a fixed number of iterations to approximately track the solution. These methods either do not guarantee constraint satisfaction (e.g., \cite{Pazzaglia2021}), or do not consider the variability and unpredictability of the task execution time (e.g., \cite{Alessio2009}) and available computing time (e.g., \cite{Wang2021,Pherson2020}).
Recently, dynamically embedded controllers have been proposed in the control theory literature (e.g., \cite{Nicotra2019,ROTEC}), where the processor, instead of solving an optimization problem, runs a virtual dynamical system whose trajectory converges to the optimal solution. This type of approach is also pursued in our work to cope with the limited and variable computing capacity available for implementing optimization-based control tasks in CPSs.
\mypara{Goal} Drawing inspiration from dynamically embedded controllers, the goal of our work is to develop a robust-to-early-termination optimization approach that can be used to effectively solve the onboard optimization problems involved in controlling the system despite the presence of unpredictable, variable, and limited computing capacity.
\section{Proposed Solution\textemdash Robust to Early Termination Optimization Approach}\label{sec:solution} \mypara{Task Details} Suppose that task $\tau_k,~k\in\{1,\cdots,n\}$ solves the following optimization problem: \begin{align}\label{eq:OptimizationProblem} \left\{\begin{array}{cl}
& \min\limits_x f_0(x) \\
\text{s.t.} & f_i(x)\leq0,~i=1,\cdots,m \end{array} \right., \end{align} where $f_0:\mathbb{R}^p\rightarrow\mathbb{R}$ is a strongly convex objective function to be minimized over the $p$-variable vector $x$, and $f_i(x)\leq0$ is the $i$-th inequality constraint. Note that the mathematical problems in many existing optimization-based control tasks (e.g., Model Predictive Control (MPC) \cite{Rawlings2017}) are in the form of optimization problem \eqref{eq:OptimizationProblem}.
\mypara{Proposed Method} Consider the following \textit{modified} barrier function associated with the optimization problem \eqref{eq:OptimizationProblem}: \begin{align}\label{eq:BarrierFunction} \mathcal{B}(x,\lambda)=f_0(x)-\sum\limits_{i=1}^m\lambda_{i}\log(-\beta (f_{i}(x)+1/\beta)+1), \end{align} where $\beta\in\mathbb{R}_{>0}$ is the barrier parameter and $\lambda=[\lambda_{1}~\cdots~\lambda_{m}]^\top\in\mathbb{R}^m_{\geq0}$ is the vector of dual parameters.
We consider the following primal-dual gradient flow: \begin{subequations}\label{eq:systemvirtual} \begin{align} \dot{\hat{x}}(t)=&-\sigma\nabla_{\hat{x}}\mathcal{B}\big(\hat{x}(t),\hat{\lambda}(t)\big),\\ \dot{\hat{\lambda}}_i(t)=&+\sigma\Big(\nabla_{\hat{\lambda}_i}\mathcal{B}\big(\hat{x}(t),\hat{\lambda}(t)\big)+\Psi_i(t)\Big), \end{align} \end{subequations} where $\sigma\in\mathbb{R}_{>0}$ is a design parameter and $\Psi_i(t)$ is the projection operator onto the normal cone of $\hat{\lambda}_i$ (see \cite{Nicotra2019}).
\mypara{Convergence} Let $\beta$ be sufficiently large. Using the following Lyapunov function: \begin{align}\label{eq:Lyapunov} V(\hat{x}(t),\hat{\lambda}(t))=&\frac{1}{2\sigma}\left\Vert \hat{x}(t)-x^\ast\right\Vert^2+\frac{1}{2\sigma}\left\Vert \hat{\lambda}(t)-\lambda^\ast\right\Vert^2, \end{align} where $(x^\ast,\lambda^\ast)$ is the pair consisting of the optimal solution of \eqref{eq:OptimizationProblem} and the vector of optimal dual parameters, and using the fact that the operator $\left[\big(\nabla_{\hat{x}}\mathcal{B}(\hat{x},\hat{\lambda})\big)^\top~-\big(\nabla_{\hat{\lambda}}\mathcal{B}(\hat{x},\hat{\lambda})\big)^\top\right]^\top$ is strongly monotone, it can be shown \cite{ROTEC} that $\big(\hat{x}(t),\hat{\lambda}(t)\big)$ converges exponentially to $\big(x^\ast,\lambda^\ast\big)$ as $t\rightarrow\infty$.
\mypara{Constraint-Handling} Since $\mathcal{B}(\hat{x}(t),\hat{\lambda}(t))\rightarrow\infty$ only if $f_i(\hat{x}(t))\rightarrow0^-$ for one or more $i\in\{1,\cdots,m\}$, Alexandrov's theorem \cite{Alexandrov2005} implies that \begin{align}\label{eq:limitB1} \lim\limits_{f_i(\hat{x}(t))\rightarrow0^-\text{ for one or more }i}\;\frac{d}{dt}\mathcal{B}\big(\hat{x}(t),\hat{\lambda}(t)\big)<0, \end{align} which asserts that $\mathcal{B}\big(\hat{x}(t),\hat{\lambda}(t)\big)$ must decrease along the system trajectories when they approach the constraint boundary. Thus, provided that $\hat{x}(0)$ is strictly feasible, $f_i\big(\hat{x}(t)\big)<0,~i=1,\cdots,m$ for all $t\geq0$. The significance of this conclusion is that the evolution of \eqref{eq:systemvirtual} can be stopped at any time instant with a guaranteed feasible solution.
\mypara{Implementation} Although system \eqref{eq:systemvirtual} is continuous-time, the above-mentioned properties (i.e., convergence and constraint handling) are approximately preserved when system \eqref{eq:systemvirtual} is implemented in discrete time by replacing the derivatives with difference quotients (i.e., a forward-Euler iteration) and using a sufficiently small integration step. Thus, to solve problem \eqref{eq:OptimizationProblem}, one can run system \eqref{eq:systemvirtual} until the available computation time is exhausted (which may not be known in advance); whenever the evolution of system \eqref{eq:systemvirtual} is terminated, the obtained solution is sub-optimal but guaranteed to satisfy the constraints. This allows the designer to implement optimization-based control tasks with a small sampling period (and consequently with minimal degradation in performance), while retaining near-optimality and constraint-handling capabilities. It is noteworthy that warm-starting can improve the convergence of system \eqref{eq:systemvirtual}.
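To illustrate the discrete-time implementation, the sketch below applies a plain forward-Euler discretization of the flow \eqref{eq:systemvirtual} to a toy one-dimensional instance of problem \eqref{eq:OptimizationProblem}. The objective, the constraint, the step size, and the milder values of $\sigma$ and $\beta$ are assumptions made here for a numerically stable illustration; they are not the MPC problems or parameter values used in the experiments.
\begin{verbatim}
# Minimal sketch (not the authors' implementation): forward-Euler
# discretization of the primal-dual flow on a toy instance of the problem:
#   minimize (x-2)^2  subject to  f1(x) = x - 1 <= 0   (minimizer x* = 1).
import math

sigma, beta = 1.0, 10.0      # assumed design parameters for this toy example
h, n_steps  = 0.005, 2000    # assumed Euler step size and iteration budget

f0  = lambda x: (x - 2.0) ** 2
df0 = lambda x: 2.0 * (x - 2.0)
f1  = lambda x: x - 1.0
df1 = lambda x: 1.0

def grad_barrier(x, lam):
    """Gradients of the modified barrier B(x, lam) as written in Eq. (3)."""
    arg = -beta * (f1(x) + 1.0 / beta) + 1.0    # argument of the logarithm
    gx = df0(x) - lam * (-beta * df1(x)) / arg  # dB/dx
    gl = -math.log(arg)                         # dB/dlam (single constraint)
    return gx, gl

x, lam = 0.0, 1.0                               # strictly feasible start, lam >= 0
for _ in range(n_steps):
    gx, gl = grad_barrier(x, lam)
    x   = x - h * sigma * gx                    # primal Euler step
    lam = max(0.0, lam + h * sigma * gl)        # dual step; max(0,.) stands in
                                                # for the normal-cone term Psi
    assert f1(x) < 0.0                          # iterates stay strictly feasible

print(x, f0(x), lam)  # x settles near the minimizer, within O(1/beta) of x* = 1
\end{verbatim}
The loop can be stopped after any number of iterations while keeping a feasible iterate, which is the point of the approach. With stiffer parameter choices such as the case-study values $\sigma=10$ and $\beta=10^5$, the same scheme applies, but a correspondingly smaller step size is typically needed for the explicit Euler iteration to remain stable.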
\section{Experimental Results}\label{sec:simulation} The objective of this section is to validate the proposed optimization approach and assess its effectiveness. The experiments are carried out on an Intel(R) Core(TM) i7-7500U CPU at 2.70 GHz with 16.00 GB of RAM. We use the \texttt{YALMIP} toolbox to implement the optimization computations.
We consider a case study where two control tasks are executed on a single processing unit. Tasks $\tau_1$ and $\tau_2$ implement MPC to control DC motors \#1 and \#2 given in \cite{Roy2021}, respectively, such that the control inputs belong to the interval $[-10,10]$. The desired sampling period for both tasks is 20 ms. However, we observe (from 2000 runs) that $\ell_1=\ell_2\approx150$ ms, implying that the schedulability condition \eqref{eq:schedulabilitycondition} cannot be satisfied with the desired sampling periods. To satisfy \eqref{eq:schedulabilitycondition} we would need, for instance, that the tasks execute every 300 ms.
We consider the following cases: i) MPC with sampling period $\Delta T=20$ [ms], which is desired but unimplementable; ii) MPC with sampling period $\Delta T=300$ [ms] to satisfy condition \eqref{eq:schedulabilitycondition}; and iii) MPC with sampling period $\Delta T=20$ [ms] and the proposed optimization approach implemented with $\sigma=10$ and $\beta=10^5$. For comparison purposes, we use the Integral Square Error (ISE) index, defined as $\text{ISE}(t)\triangleq\int_0^te(\eta)^2d\eta$, where $e(t)$ is the tracking error and the integration runs from the start of the experiment at time $0$ to the current time $t$.
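As a side note, the ISE index can be accumulated numerically from sampled tracking errors; the short sketch below uses the trapezoidal rule on made-up error samples (the samples are not data from the experiments).
\begin{verbatim}
# Minimal sketch: ISE(t) from sampled tracking error e(t), trapezoidal rule.
import numpy as np

t = np.linspace(0.0, 1.0, 51)       # time grid [s]
e = np.exp(-3.0 * t)                # made-up tracking error samples
ise = float(np.sum(0.5 * (e[:-1]**2 + e[1:]**2) * np.diff(t)))
print(ise)                          # approximates the integral of e^2 on [0, 1]
\end{verbatim}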
The obtained ISEs for the above-mentioned cases are shown in Fig. \ref{fig:simulation}. As seen in this figure, using a large sampling period degrades the performance by 39\% for task $\tau_1$ and by 62\% for task $\tau_2$. However, MPC with the proposed optimization approach yields better performance by computing a sub-optimal but feasible solution every 20 ms. More precisely, performance degradation with the proposed optimization approach is 4\% for task $\tau_1$ and 9\% for task $\tau_2$.
\begin{figure}
\caption{Obtained ISEs for considered cases. MPC with the proposed optimization approach has 32\% gain in the obtained ISE for task $\tau_1$ and 48\% gain in the obtained ISE for task $\tau_2$ over the MPC with sampling period $\Delta T=300$ [ms].}
\label{fig:simulation}
\end{figure}
\section{Conclusion and Future Work}\label{sec:conclusion} Limited capacity to perform computations is a common constraint in today's CPSs. We proposed an approach to implement optimization-based control tasks despite the variability and unpredictability of available computing time. Experiments were carried out to assess the effectiveness of the proposed approach in satisfying performance requirements and real-time schedulability conditions. Future work will consider how the proposed optimization approach can empower optimization-based control schemes to deal with time-varying constraints.
\end{document}
\begin{document}
\title{An arithmetic Lefschetz-Riemann-Roch theorem\\With an appendix by Xiaonan Ma}
\author{Shun Tang}
\date{}
\maketitle
\hspace{5cm}\hrulefill\hspace{5.5cm}
\textbf{Abstract.} In this article, we consider regular projective arithmetic schemes in the context of Arakelov geometry, each of which is endowed with an action of the diagonalisable group scheme associated to a finite cyclic group and with an equivariant very ample invertible sheaf. For any equivariant morphism between such arithmetic schemes, which is smooth over the generic fibre, we define a direct image map between the corresponding higher equivariant arithmetic K-groups and we discuss its transitivity property. Then we use the localization sequence of higher arithmetic K-groups and the higher arithmetic concentration theorem developed in \cite{T3} to prove an arithmetic Lefschetz-Riemann-Roch theorem. This theorem can be viewed as a generalization, to the higher equivariant arithmetic K-theory, of the fixed point formula of Lefschetz type proved by K. K\"{o}hler and D. Roessler in \cite{KR1}.
\textbf{2010 Mathematics Subject Classification:} 14C40, 14G40, 14L30, 19E08, 58J52
\tableofcontents
\section{Introduction} The aim of this article is to prove an arithmetic Riemann-Roch theorem of Lefschetz type for the higher equivariant arithmetic K-theory of regular arithmetic schemes in the context of Arakelov geometry. This theorem is an arithmetic analogue of a special case of K\"{o}ck's Lefschetz theorem in higher equivariant K-theory (cf. \cite{Ko1}), and it also generalizes K\"{o}hler-Roessler's Lefschetz fixed point formula \cite[Theorem 4.4]{KR1} to the case where higher arithmetic K-groups are concerned. To make things more explicit, let us first recall the study of such Lefschetz-Riemann-Roch problems.
Let $X$ be a smooth projective variety over an algebraically closed field $k$, and suppose that $X$ is endowed with an action of a cyclic group $\langle g\rangle$ of finite order $n$ such that $n$ is prime to the characteristic of $k$. A $\langle g\rangle$-equivariant coherent sheaf on $X$ is a coherent $\mathcal{O}_X$-module $F$ on $X$ together with an automorphism $\varphi: g^*F\to F$ such that $\varphi^n$ is equal to the identity map. Then the classical Lefschetz trace formula expresses the alternating sum of the traces of $H^i(\varphi)$ on the cohomology spaces $H^i(X,F)$ as a sum of contributions from the components of the fixed point subvariety $X_g$. For $k=\mathbb{C}$, the field of complex numbers, such a Lefschetz trace formula was presented via index theory and topological K-theory in \cite[III]{ASe}. For general $k$, a Grothendieck-type generalization to scheme-theoretic algebraic geometry is natural to expect. Precisely, denote by $K_0(X,g)$ the Grothendieck group of the category of equivariant locally free coherent sheaves on $X$, then $K_0({\rm Pt},g)$ is isomorphic to the group ring $\mathbb{Z}[g]\cong \mathbb{Z}[T]/{(1-T^n)}$ and $K_0(X,g)$ has a natural $K_0({\rm Pt},g)$-algebra structure (${\rm Pt}$ stands for the point ${\rm Spec(k)}$). Let $Y$ be another $\langle g\rangle$-equivariant smooth projective variety, let $f: X\to Y$ be a projective morphism compatible with both $\langle g\rangle$-actions on $X$ and on $Y$, then we have a direct image map $f_*: K_0(X,g)\to K_0(Y,g)$ given by $$E\mapsto \sum_{i\geq 0}(-1)^iR^if_*(E).$$ Unsurprisingly, the direct image map $f_*$ does not commute with the restriction map $\tau: K_0(\cdot,g)\to K_0\big((\cdot)_g,g\big)$ from the equivariant $K_0$-group of an equivariant variety to the equivariant $K_0$-group of its fixed point subvariety. Namely, the restriction map $\tau$ is not a natural transformation between the covariant functors $K_0(\cdot,g)$ and $K_0\big((\cdot)_g,g\big)$. As in other Riemann-Roch problems, the Lefschetz-Riemann-Roch theorem corrects $\tau$ so that it becomes a natural transformation. In fact, for any $\langle g\rangle$-equivariant smooth projective variety $X$, let $N_{X/{X_g}}$ stand for the normal bundle associated to the regular immersion $X_g\hookrightarrow X$ and let $\lambda_{-1}(N^\vee_{X/{X_g}})$ be the alternating sum $\sum(-1)^j\wedge^jN^\vee_{X/{X_g}}$, then $\lambda_{-1}(N^\vee_{X/{X_g}})$ is an invertible element in $K_0(X_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R}$ where $\mathcal{R}$ is any $\mathbb{Z}[g]$-algebra in which $1-T^k$ is invertible for $k=1, \ldots, n-1$. We formally define $L_X: K_0(X,g)\to K_0(X_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R}$ as $\lambda^{-1}_{-1}(N^\vee_{X/{X_g}})\cdot\tau$; then the Lefschetz-Riemann-Roch theorem reads: the following diagram \begin{align}\label{c1} \xymatrix{ K_0(X,g) \ar[r]^-{L_X} \ar[d]^-{f_*} & K_0(X_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R} \ar[d]^-{{f_g}_*} \\ K_0(Y,g) \ar[r]^-{L_Y} & K_0(Y_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R}} \end{align} is commutative.
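To indicate why inverting the elements $1-T^k$ is exactly what is needed, here is a minimal illustration (added for the reader's convenience; it is not taken from the references) in the simplest situation where, on a connected component of $X_g$, the normal bundle $N:=N_{X/{X_g}}$ is a line bundle on which $g$ acts through the character $T^j$ for some $1\leq j\leq n-1$. Since $g$ acts trivially on $X_g$, one has $K_0(X_g,g)\cong K_0(X_g)\otimes_{\mathbb Z}\mathbb{Z}[g]$, and the equivariant class of $N^\vee$ is $T^{n-j}[N^\vee]$ with $[N^\vee]\in K_0(X_g)$. Hence $$\lambda_{-1}(N^\vee)=1-T^{n-j}[N^\vee]=(1-T^{n-j})\Big(1-\frac{T^{n-j}}{1-T^{n-j}}\big([N^\vee]-1\big)\Big)$$ in $K_0(X_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R}$. The first factor is invertible in $\mathcal{R}$ by assumption, and the second factor is invertible because $[N^\vee]-1$ is nilpotent in $K_0(X_g)$ (a standard fact for the class $[L]-1$ of a line bundle $L$ on a variety). The general case is handled similarly, after decomposing $N$ into eigenbundles and applying the splitting principle.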
This commutative diagram (\ref{c1}) was presented by P. Donovan in \cite{Do}, and later it was generalized to singular varieties by P. Baum, W. Fulton and G. Quart in \cite{BFQ}. Notice that the settings in \cite{Do} and in \cite{BFQ} are more general than that in this introduction. The reasoning in the first paper runs similarly to the technique used in Borel-Serre's paper \cite{BS}, while the reasoning in the second paper relies on the deformation to the normal cone construction. These two processes are both traditional for producing the Grothendieck type Riemann-Roch theorem.
After Quillen and other mathematicians' work, algebraic K-groups are extended to higher degrees and the higher (equivariant) algebraic K-groups of $X$ are defined as the higher homotopy groups of the K-theory space associated to the category of (equivariant) locally free coherent sheaves on $X$. There are many methods to construct this ``K-theory space", but no matter which construction we choose, the tensor product of locally free coherent sheaves always induces a graded ring structure on $K_\bullet(X,g)$. In particular, each $K_m(X,g)$ is a $K_0(X,g)$-module. Moreover, the functor $K_\bullet(\cdot,g)$ is again covariant with respect to equivariant proper morphisms. Then, for any $m\geq 1$, the following diagram for higher algebraic K-groups which is similar to (\ref{c1}) does make sense: \begin{align}\label{c2} \xymatrix{ K_m(X,g) \ar[r]^-{L_X} \ar[d]^-{f_*} & K_m(X_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R} \ar[d]^-{{f_g}_*} \\ K_m(Y,g) \ar[r]^-{L_Y} & K_m(Y_g,g)\otimes_{\mathbb{Z}[g]}\mathcal{R}.} \end{align}
The commutativity of diagram (\ref{c2}), which is called the Lefschetz-Riemann-Roch theorem for higher equivariant algebraic K-theory, was proved by B. K\"{o}ck in \cite{Ko1}. The main ingredient is an excess intersection formula whose proof also relies on the deformation to the normal cone construction. Moreover, it is worth noting that the commutative diagram (\ref{c2}), combined with Gillet's Riemann-Roch theorem for higher algebraic K-theory (cf. \cite{Gi}), implies a higher Lefschetz trace formula.
In the field of arithmetic geometry, one considers those noetherian and separated schemes $f: X\to {\rm Spec}(\mathbb{Z})$ over the ring of integers (actually over any excellent regular noetherian domain). In this context, it is possible to produce an analogue of the commutative diagram (\ref{c1}), by endowing $X$ with an action of the diagonalisable group scheme $\mu_n={\rm Spec}(\mathbb{Z}[\mathbb{Z}/{n\mathbb{Z}}])$ of $n$-th roots of unity rather than with the action of an automorphism of order $n$. Here, a $\mu_n$-action on $X$ is a morphism of schemes $m_X: \mu_n\times X\to X$ which satisfies the usual associativity property. The reason for this choice is that the fixed point subscheme $X_{\mu_n}$ of a regular scheme $X$ equipped with an action of $\mu_n$ is still regular and the natural inclusion $i_X: X_{\mu_n}\hookrightarrow X$ is a regular immersion, while the fixed point subscheme of a regular scheme under an automorphism of order $n$ can be very singular over the fibres lying above the primes dividing $n$. By a $\mu_n$-equivariant coherent sheaf $F$ on $X$, we understand a coherent $\mathcal{O}_X$-module $F$ together with an isomorphism $$m_F: m_X^*F\to {\rm pr}_X^*F$$ of $\mathcal{O}_{\mu_n\times X}$-modules which satisfies the following associativity property: $$({\rm pr}_{2,3}^*m_F)\circ\big((1\times m_X)^*m_F\big)=(m_{\mu_n}\times 1)^*m_F.$$ Here, $m_{\mu_n}$ denotes the multiplication $\mu_n\times \mu_n\to \mu_n$, ${\rm pr}_X: \mu_n\times X\to X$ and ${\rm pr}_{2,3}: \mu_n\times \mu_n\times X\to \mu_n\times X$ denote the obvious projections. Under this situation, Baum-Fulton-Quart's method still works, so that the commutative diagram (\ref{c1}) holds for regular $\mu_n$-equivariant schemes over $\mathbb{Z}$.
In \cite{Th}, R. W. Thomason used another way to do the same thing and he even got a generalization of the commutative diagram (\ref{c2}) for regular $\mu_n$-equivariant schemes. Thomason's strategy was to use Quillen's localization sequence for higher equivariant algebraic K-groups to show a concentration theorem. This theorem states that, after a suitable localization, the equivariant algebraic K-group $K_m(X_{\mu_n},\mu_n)_\rho$ is isomorphic to $K_m(X,\mu_n)_\rho$ for any $m\geq 0$, and the inverse map is exactly given by $\lambda^{-1}_{-1}(N^\vee_{X/{X_{\mu_n}}})\cdot i_X^*$. Here, $\rho$ is any prime ideal in $R(\mu_n):=K_0({\rm Spec}\mathbb{Z},\mu_n)\cong \mathbb Z[T]/{(1-T^n)}$ which doesn't contain the elements $1-T^k$ for $k=1,\ldots,n-1$. For instance, $\rho$ can be chosen to be the kernel of the natural morphism $\mathbb Z[T]/{(1-T^n)}\to \mathbb Z[T]/{(\Phi_n)}$ where $\Phi_n$ stands for the $n$-th cyclotomic polynomial. Then the Lefschetz-Riemann-Roch theorem for regular $\mu_n$-equivariant schemes \begin{align}\label{c3} \xymatrix{ K_m(X,\mu_n) \ar[r]^-{L_X} \ar[d]^-{f_*} & K_m(X_{\mu_n},\mu_n)_\rho \ar[d]^-{{f_{\mu_n}}_*} \\ K_m(Y,\mu_n) \ar[r]^-{L_Y} & K_m(Y_{\mu_n},\mu_n)_\rho} \end{align} follows from the covariant functoriality of $K_\bullet(\cdot,\mu_n)$ with respect to proper morphisms.
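For concreteness (an illustration added here, not taken from \cite{Th}): in the smallest nontrivial case $n=2$ one has $R(\mu_2)\cong \mathbb Z[T]/{(1-T^2)}$ and $\Phi_2=T+1$, and the kernel of the natural morphism $\mathbb Z[T]/{(1-T^2)}\to \mathbb Z[T]/{(T+1)}$, $T\mapsto -1$, is the prime ideal $\rho=(1+T)$. It does not contain $1-T$, since $1-T\mapsto 2\neq 0$; hence $1-T$ acts invertibly on the localized groups $K_m(X_{\mu_2},\mu_2)_\rho$, which is exactly what the concentration theorem requires.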
Now, let us turn to Arakelov geometry. Let $X$ be an arithmetic scheme over an arithmetic ring $(D, \Sigma, F_\infty)$ in the sense of Gillet-Soul\'{e} (cf. \cite{GS1}), so that $X$ is quasi-projective over $D$ with smooth generic fibre. We denote by $\mu_n:={\rm Spec}(D[\mathbb Z/{n\mathbb Z}])$ the diagonalisable group scheme over $D$ associated to the cyclic group $\mathbb Z/{n\mathbb Z}$. By saying that $X$ is $\mu_n$-projective, we mean that $X$ is endowed with a projective $\mu_n$-action; that is, $X$ is projective and there exists a very ample invertible $\mu_n$-sheaf on $X$.
For each regular $\mu_n$-projective arithmetic scheme $X$, K. K\"{o}hler and D. Roessler have defined an equivariant arithmetic K$_0$-group $\widehat{K_0}(X,\mu_n)$ in \cite{KR1}. This arithmetic $\text{K}_0$-group is a modified Grothendieck group of the category of equivariant hermitian vector bundles on $X$; its elements contain a smooth form class on $X_{\mu_n}(\mathbb C)$ as an analytic datum. Like the algebraic $\text{K}_0$-group $K_0(X,\mu_n)$, the group $\widehat{K_0}(X,\mu_n)$ has a ring structure and is an $R(\mu_n)$-algebra. Moreover, direct image maps between equivariant arithmetic K$_0$-groups can be defined for an equivariant morphism which is smooth over the generic fibre, by using Bismut-K\"{o}hler-Ma's analytic torsion forms. Choose a K\"{a}hler metric for $X(\mathbb C)$, and let $\overline{N}_{X/{X_{\mu_n}}}$ be the normal bundle endowed with the quotient metric, then the main theorem in \cite{KR1} reads: the element $\lambda_{-1}(\overline{N}_{X/{X_{\mu_n}}}^\vee)$ is a unit in $\widehat{K_0}(X_{\mu_n},\mu_n)_\rho$ and the following diagram \begin{align}\label{c4} \xymatrix{ \widehat{K_0}(X,\mu_n) \ar[rr]^-{\Lambda_R\cdot \tau} \ar[d]_{f_*} && \widehat{K_0}(X_{\mu_n},\mu_n)_\rho \ar[d]^{{f_{\mu_n}}_*} \\
\widehat{K_0}(D,\mu_n) \ar[rr]^-{\tau} && \widehat{K_0}(D,\mu_n)_\rho} \end{align} is commutative, where $\rho$ is any prime ideal in $R(\mu_n)$ which doesn't contain the elements $1-T^k$ for $k=1,\ldots,n-1$, $\Lambda_R$ is defined as $\big(1-R_g(N_{X/{X_{\mu_n}}})\big)\cdot\lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}}^\vee)$, and $R_g(\cdot)$ is the equivariant $R$-genus due to Bismut (see below).
Later, two refinements of (\ref{c4}) were presented by the author in \cite{T1} and in \cite{T2} respectively. In \cite{T1}, $D$ was replaced by a general regular $\mu_n$-projective scheme $Y$. In \cite{T2}, $X$ was allowed to have singularities on its finite fibres. The aim of this article is to show an arakelovian analogue of a special case of (\ref{c3}), in which the higher equivariant algebraic K-groups are replaced by the higher equivariant arithmetic K-groups. Hence, our work is a generalization of K\"{o}hler-Roessler's Lefschetz fixed point formula to the higher equivariant arithmetic K-theory.
Let us describe the main result more precisely. Firstly, notice that we have constructed a group endomorphism $\otimes\lambda_{-1}(\overline{N}_{X/X_{\mu_n}}^\vee): \widehat{K}_m(X_{\mu_n},\mu_n)\to \widehat{K}_m(X_{\mu_n},\mu_n)$ and its formal inverse $\otimes\lambda^{-1}_{-1}(\overline{N}_{X/X_{\mu_n}}^\vee): \widehat{K}_m(X_{\mu_n},\mu_n)_\rho\to \widehat{K}_m(X_{\mu_n},\mu_n)_\rho$ in \cite[Section 5]{T3}. As stated before, $\rho$ is any prime ideal in $R(\mu_n):=K_0({\rm Spec}\mathbb{Z},\mu_n)\cong \mathbb Z[T]/{(1-T^n)}$ which does not contain the elements $1-T^k$ for $k=1,\ldots,n-1$. For instance, $\rho$ can be chosen to be the kernel of the natural morphism $\mathbb Z[T]/{(1-T^n)}\to \mathbb Z[T]/{(\Phi_n)}$ where $\Phi_n$ stands for the $n$-th cyclotomic polynomial. In this article, we shall further construct a group endomorphism $R_g(\overline{N}_{X/X_{\mu_n}}): \widehat{K}_m(X_{\mu_n},\mu_n)\to \widehat{K}_m(X_{\mu_n},\mu_n)$ and we shall prove that this endomorphism $R_g(\overline{N}_{X/X_{\mu_n}})$ is independent of the choice of the metric over $N_{X/X_{\mu_n}}$ after tensoring by $\mathbb Q$. So the expression $\Lambda_R=\big(1-R_g(N_{X/{X_{\mu_n}}})\big)\cdot\lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}}^\vee)$ still makes sense as an endomorphism of $\widehat{K}_m(X_{\mu_n},\mu_n)_\rho\otimes\mathbb Q$. Moreover, for any equivariant morphism $f: X\to Y$ between regular $\mu_n$-projective arithmetic schemes, which is smooth over the generic fibre, we shall prove that there exists a reasonable direct image map $f_*: \widehat{K}_m(X,\mu_n)\to \widehat{K}_m(Y,\mu_n)$ with $m\geq1$ and we discuss the transitivity property of the direct image maps up to torsion. Assume that the $\mu_n$-action on $Y$ is trivial, and still use the notation $\tau$ to denote the morphism \begin{align*} \widehat{K}_m\big((\cdot),\mu_n\big)&\to \widehat{K}_m\big((\cdot)_{\mu_n},\mu_n\big)_\rho\otimes\mathbb Q\\ x&\mapsto \tau(x)\otimes 1. \end{align*} Our main theorem reads: the following diagram \begin{align}\label{c5} \xymatrix{ \widehat{K}_m(X,\mu_n) \ar[rr]^-{\Lambda_R\cdot \tau} \ar[d]_{f_*} && \widehat{K}_m(X_{\mu_n},\mu_n)_\rho\otimes \mathbb Q \ar[d]^{{f_{\mu_n}}_*} \\ \widehat{K}_m(Y,\mu_n) \ar[rr]^-{\tau} && \widehat{K}_m(Y,\mu_n)_\rho\otimes\mathbb Q} \end{align} is commutative. In such a formulation, the equivariant $R$-genus again plays a crucial role.
To this aim, one needs a definition of higher equivariant arithmetic K-groups together with suitable techniques for working with them. We have settled these in \cite{T3}: there we defined the higher equivariant arithmetic K-groups via the simplicial description of Beilinson's regulators (cf. \cite{BW}) and developed a localization sequence as well as an arithmetic concentration theorem. So, in principle, we shall follow Thomason's approach to prove the commutativity of (\ref{c5}), but the fact that the direct image maps are only defined for morphisms which are smooth over the generic fibres leads to a significant gap compared with the purely algebraic case. Some highly non-trivial analytic machinery has to be invoked, such as the transitivity property of analytic torsion forms and Bismut-Ma's immersion formula.
The K\"{o}hler-Roessler's arithmetic Lefschetz fixed point formula has fruitful applications in number theory and in arithmetic geometry. One important reason is that the equivariant $R$-genus is closely related to the logarithmic derivative of certain $L$-functions. K\"{o}hler-Roessler and Maillot-Roessler have shown in \cite{KR2} and in \cite{MR1} that the Faltings heights and the periods of C.M. abelian varieties can be expressed as a formula in terms of the special value of logarithmic derivative of $L$-functions at $0$. Further, in \cite{MR2}, Maillot-Roessler presented a series of conjectures about the relation between several invariants of arithmetic varieties and the special values of logarithmic derivative of Artin $L$-functions at negative integers. We hope that our Lefschetz-Riemann-Roch theorem for higher equivariant arithmetic K-groups would be helpful to understand these conjectures.
The structure of this article is as follows. In Section 2, we define the direct image maps between higher equivariant arithmetic K-groups. Along the way, we recall the analytic torsion for cubes of hermitian vector bundles introduced by D. Roessler in \cite{Roe}; our construction is slightly different from Roessler's. In Section 3, we discuss a certain transitivity property of the direct image maps; the relation of equivariant analytic torsion forms with respect to families of submersions is presented there. In Section 4, we formulate and prove the commutativity of the diagram (\ref{c5}); an explicit computation via the deformation to the normal cone construction is given. In the last section, Section 5, we attach an appendix on some properties of equivariant analytic torsion forms and the immersion formula. These purely analytic properties are crucial for the main arguments in this article; the author is very grateful to Prof. Ma Xiaonan for writing this appendix.
\section{Higher equivariant arithmetic K-theory}
\subsection{Bott-Chern forms and arithmetic K-groups} As suggested by Soul\'{e} (cf. \cite{So}) and by Deligne (cf. \cite{De}), the higher arithmetic K-groups of an arithmetic scheme $X$ can be defined as the homotopy groups of the homotopy fibre of Beilinson's regulator map, so that one obtains a long exact sequence $$\xymatrix{\cdots \ar[r] & \widehat{K}_m(X) \ar[r] & K_m(X) \ar[r]^-{{\rm ch}} & \bigoplus_{p\geq 0}H_{\mathcal{D}}^{2p-m}\big(X,\mathbb R(p)\big) \ar[r] & \widehat{K}_{m-1}(X) \ar[r] & \cdots,}$$ where $H_{\mathcal{D}}^{*}\big(X,\mathbb R(p)\big)$ is the real Deligne-Beilinson cohomology and ${\rm ch}$ is Beilinson's regulator map. In order to do this, a simplicial description of Beilinson's regulator map is necessary. In \cite{BW}, such a simplicial description was given by Burgos and Wang by using the higher Bott-Chern forms. Recently, in \cite{T3}, we followed Burgos-Wang's approach to define the higher equivariant Bott-Chern forms and further the higher equivariant arithmetic K-theory. In this subsection, we shall recall some relevant constructions and definitions; for more details the reader is referred to \cite{BW} and \cite{T3}.
At first, let $X$ be a smooth algebraic variety over $\mathbb{C}$. In this subsection, we shall work with the analytic topology of $X$. Denote by $E_{\log}^*(X)$ the complex of differential forms on $X$ with logarithmic singularities along infinity (cf. \cite[Definition 2.1]{T3}), then $E_{\log}^*(X)$ has a natural bigrading $E_{\log}^n(X)=\bigoplus_{\stackrel{p+q=n}{}}E_{\log}^{p,q}(X)$ and this grading induces a Hodge filtration $F^pE_{\log}^n(X)=\bigoplus_{\stackrel{p'\geq p}{p'+q'=n}}E_{\log}^{p',q'}(X)$. Write $E_{\log,\mathbb R}^*(X,p):=(2\pi i)^pE_{\log,\mathbb R}^*(X)$ with $E_{\log,\mathbb R}^*(X)$ the subcomplex of $E_{\log}^*(X)$ consisting of real forms, then we have a decomposition $E_{\log}^*(X)=E_{\log,\mathbb R}^*(X,p)\oplus E_{\log,\mathbb R}^*(X,p-1)$ and the projection $\pi_p: E_{\log}^*(X)\to E_{\log,\mathbb R}^*(X,p)$ is given by $\pi_p(x)=\frac{1}{2}\big(x+(-1)^p\overline{x}\big)$. Moreover, for any $x\in E_{\log}^n(X)$, we define two filtered functions $$F^{k,k}x=\sum_{\stackrel{l\geq k,l'\geq k}{}}x^{l,l'}\quad\text{and}\quad F^{k}x=\sum_{\stackrel{l\geq k}{}}x^{l,l'}.$$ Then we set $\pi(x):=\pi_{p-1}(F^{n-p+1,n-p+1}x)$.
The main result in \cite[Section 2]{Bu1} states that the following Deligne complex $$\mathfrak{D}^n\big(E_{\log}(X),p\big)=\left\{
\begin{array}{ll}
E_{\log,\mathbb R}^{n-1}(X,p-1)\bigcap\bigoplus_{\stackrel{p'+q'=n-1}{p'<p,q'<p}}E_{\log}^{p',q'}(X), & n<2p; \\
E_{\log,\mathbb R}^{n}(X,p)\bigcap\bigoplus_{\stackrel{p'+q'=n}{p'\geq p,q'\geq p}}E_{\log}^{p',q'}(X), & n\geq 2p,
\end{array}
\right.$$ with differential $$d_\mathcal{D}x=\left\{
\begin{array}{ll}
-\pi(dx), & n<2p-1; \\
-2\partial\overline{\partial}x, & n=2p-1; \\
dx, & n>2p-1.
\end{array}
\right.$$ computes the real Deligne-Beilinson cohomology of $X$. Namely, one has $$H_{\mathcal{D}}^n\big(X,\mathbb R(p)\big)=H^n\Big(\mathfrak{D}^*\big(E_{\log}(X),p\big)\Big).$$ We shall write $D^*(X,p):=\mathfrak{D}^*\big(E_{\log}(X),p\big)$ for short.
\begin{rem}\label{explain_d} (i). According to the definition, the real Deligne-Beilinson cohomology of $X$ in degrees $2p$ and $2p-1$ is given by $$H^{2p}\Big(\mathfrak{D}^*\big(E_{\log}(X),p\big)\Big)=\{x\in E_{\log}^{p,p}(X)\cap E_{\log,\mathbb R}^{2p}(X,p)\mid dx=0\}/{\operatorname{Im}(\partial\overline{\partial})}$$ and $$H^{2p-1}\Big(\mathfrak{D}^*\big(E_{\log}(X),p\big)\Big)=\{x\in E_{\log}^{p-1,p-1}(X)\cap E_{\log,\mathbb R}^{2p-2}(X,p-1)\mid \partial\overline{\partial}x=0\}/{(\operatorname{Im}\partial+\operatorname{Im}\overline{\partial})}.$$
(ii). Let $x\in D^n(X,p)$ and $y\in D^m(X,q)$, and write $l=n+m$ and $r=p+q$. Then $$x\bullet y=\left\{
\begin{array}{ll}
(-1)^nr_p(x)\wedge y+x\wedge r_q(y), & n<2p, m<2q; \\
\pi(x\wedge y), & n<2p, m\geq 2q, l<2r; \\
F^{r,r}\big(r_p(x)\wedge y\big)+2\pi_r\partial\big((x\wedge y)^{r-1,l-r}\big), & n<2p, m\geq 2q, l\geq 2r; \\
x\wedge y, & n\geq 2p, m\geq 2q.
\end{array}
\right.$$ induces a product on $\bigoplus_p D^*(X,p)$ which is graded commutative and is associative up to chain homotopy. Here $r_px=2\pi_p(F^p dx)$ if $n\leq 2p-1$ and $r_px=x$ otherwise. At the level of cohomology groups, this product coincides with the product defined by Beilinson. Notice that if $x\in D^{2p}(X,p)$ is a cocycle, then for all $y,z$ we have $x\bullet y=y\bullet x$ and $y\bullet(x\bullet z)=(y\bullet x)\bullet z=x\bullet(y\bullet z)$. \end{rem}
In order to introduce the higher Bott-Chern form, let us construct a new complex $\widetilde{D}^*(X,p)$ using the cocubical structure of the cartesian product of projective lines $(\mathbb{P}^1)^\cdot$. This complex $\widetilde{D}^*(X,p)$ has the same cohomology groups as $D^*(X,p)$. Firstly one notices that $D^*(X\times(\mathbb{P}^1)^\cdot,p)$ form a cubical complex with face and degeneracy maps $$d_i^j=({\rm Id}\times d_j^i)^*\quad\text{and}\quad s_i=({\rm Id}\times s^i)^*,$$ where $$d_j^i: (\mathbb{P}^1)^k\to (\mathbb{P}^1)^{k+1},\quad i=1,\cdots,k,j=0,1,$$ $$s^i: (\mathbb{P}^1)^k\to (\mathbb{P}^1)^{k-1},\quad i=1,\cdots,k,$$ which are given by $$d_0^i(x_1,\cdots,x_k)=(x_1,\cdots,x_{i-1},(0:1),x_i,\cdots,x_k),$$ $$d_1^i(x_1,\cdots,x_k)=(x_1,\cdots,x_{i-1},(1:0),x_i,\cdots,x_k),$$ $$s^i(x_1,\cdots,x_k)=(x_1,\cdots,x_{i-1},x_{i+1},\cdots,x_k)$$ are the coface and the codegeneracy maps of $(\mathbb{P}^1)^\cdot$. Then we write $D_{\mathbb{P}}^{r,k}(X,p)=D^r(X\times (\mathbb{P}^1)^{-k},p)$ and denote by $D_{\mathbb{P}}^{*,*}(X,p)$ the associated double complex with differentials $$d'=d_{\mathcal{D}}\quad\text{and}\quad d''=\sum(-1)^{i+j-1}d_i^j.$$ Next, let $(x:y)$ be the homogeneous coordinates of $\mathbb{P}^1$, and let $\omega=\partial\overline{\partial}\log\frac{x\overline{x}+y\overline{y}}{x\overline{x}}\in (2\pi i)E_{\mathbb{P}^1,\mathbb R}^2$ be a K\"{a}hler form over $\mathbb{P}^1$. We shall write $\omega_i=p_i^*\omega\in E_{\log}^*(X\times(\mathbb{P}^1)^k)$ where $p_i: X\times(\mathbb{P}^1)^k\to \mathbb{P}^1, i=1,\cdots,k$ is the projection over the $i$-th projective line. The complex $\widetilde{D}^*(X,p)$ is constructed by killing the degenerate classes and the classes coming from the projective spaces.
\begin{defn}\label{201} We define $\widetilde{D}^*(X,p)$ as the associated simple complex of the double complex $\widetilde{D}^{*,*}(X,p)$ which is given by $$\widetilde{D}^{r,k}(X,p)=D_{\mathbb{P}}^{r,k}(X,p)/{\sum_{i=1}^{-k}s_i\big(D_{\mathbb{P}}^{r,k+1}(X,p)\big)\oplus \omega_i\wedge s_i\big(D_{\mathbb{P}}^{r-2,k+1}(X,p-1)\big)}.$$ The differential of this complex will be denoted by $d$. \end{defn}
A repetition of the proofs of \cite[Proposition 1.2 and Lemma 1.3]{BW} gives that the natural morphism of complexes $$\iota: D^*(X,p)=\widetilde{D}^{*,0}(X,p)\to \widetilde{D}^*(X,p)$$ is a quasi-isomorphism.
Now, let $X$ be a smooth $\mu_n$-projective variety over $\mathbb{C}$ and denote by $\mathcal{U}:=\widehat{\mathcal{P}}(X,\mu_n)$ the exact category of $\mu_n$-equivariant vector bundles on $X$ equipped with $\mu_n$-invariant smooth hermitian metrics. We consider the exact cubes in the category $\mathcal{U}$. By definition, an exact $k$-cube in $\mathcal{U}$ is a functor $\mathcal{F}$ from $\langle-1,0,1\rangle^k$, the $k$-th power of the ordered set $\langle-1,0,1\rangle$, to $\mathcal{U}$ such that for any $\alpha\in \langle-1,0,1\rangle^{k-1}$ and $1\leq i \leq k$, the $1$-cube $\partial_i^\alpha$ defined by $$\mathcal{F}_{\alpha_1,\cdots,\alpha_{i-1},-1,\alpha_i,\cdots,\alpha_{k-1}}\to \mathcal{F}_{\alpha_1,\cdots,\alpha_{i-1},0,\alpha_i,\cdots,\alpha_{k-1}}\to \mathcal{F}_{\alpha_1,\cdots,\alpha_{i-1},1,\alpha_i,\cdots,\alpha_{k-1}}$$ which is called an edge of $\mathcal{F}$ is a short exact sequence. From now on, we shall write cubes instead of exact cubes for short. Let $\mathcal{F}$ be a $k$-cube in $\mathcal{U}$, for $1\leq i\leq k$ and $j\in \langle-1,0,1\rangle$, the $(k-1)$-cube $\partial_i^j\mathcal{F}$ defined by $(\partial_i^j\mathcal{F})_{\alpha_1,\cdots,\alpha_{k-1}}=\mathcal{F}_{\alpha_1,\cdots,\alpha_{i-1},j,\alpha_i,\cdots,\alpha_{k-1}}$ is called a face of $\mathcal{F}$. On the other hand, for any $1\leq i\leq k+1$, we denote by $S_i^1\mathcal{F}$ the $(k+1)$-cube $$(S_i^1\mathcal{F})_{\alpha_1,\cdots,\alpha_{k+1}}=\left\{
\begin{array}{ll}
0, & \alpha_i=1; \\
\mathcal{F}_{\alpha_1,\cdots,\alpha_{i-1},\alpha_{i+1},\cdots,\alpha_{k+1}}, & \alpha_i\neq1,
\end{array}
\right.$$ such that the morphisms $(S_i^1\mathcal{F})_{\alpha_1,\cdots,\alpha_{i-1},-1,\alpha_{i+1},\cdots,\alpha_{k+1}}\to (S_i^1\mathcal{F})_{\alpha_1,\cdots,\alpha_{i-1},0,\alpha_{i+1},\cdots,\alpha_{k+1}}$ are the identities of $(S_i^1\mathcal{F})_{\alpha_1,\cdots,\alpha_{i-1},\alpha_{i+1},\cdots,\alpha_{k+1}}$. Similarly, we have $(k+1)$-cube $S_i^{-1}\mathcal{F}$.
Denote by $C_k\mathcal{U}$ the set of all $k$-cubes in $\mathcal{U}$, then we have the face maps $\partial_i^j: C_k\mathcal{U}\to C_{k-1}\mathcal{U}$ and the degeneracy maps $S_i^j: C_k\mathcal{U}\to C_{k+1}\mathcal{U}$. The cubes in the image of $S_i^j$ are said to be degenerate. Let $\mathbb Z C_k\mathcal{U}$ be the free abelian group generated by $C_k\mathcal{U}$ and $D_k$ be the subgroup of $\mathbb Z C_k{\mathcal{U}}$ generated by all degenerate $k$-cubes. Set $\widetilde{\mathbb Z}C_k\mathcal{U}=\mathbb Z C_k\mathcal{U}/{D_k}$ and $$d=\sum_{i=1}^k\sum_{j=-1}^1(-1)^{i+j-1}\partial_i^j: \widetilde{\mathbb Z}C_k\mathcal{U}\to \widetilde{\mathbb Z}C_{k-1}\mathcal{U}.$$ Then $\widetilde{\mathbb Z}C_*\mathcal{U}=(\widetilde{\mathbb Z}C_k\mathcal{U},d)$ is a homological complex.
Assume that $\overline{E}$ is a hermitian $k$-cube in the category $\mathcal{U}=\widehat{\mathcal{P}}(X,\mu_n)$. If $\overline{E}$ is an emi-cube, namely the metrics on the quotient terms in all edges of $\overline{E}$ are induced by the metrics on the middle terms (cf. \cite[Definition 3.5]{BW}), one can follow \cite[(3.7)]{BW} to associate a hermitian locally free sheaf ${\rm tr}_k(\overline{E})$ on $X\times (\mathbb{P}^1)^k$. This ${\rm tr}_k(\overline{E})$ is called the $k$-transgression bundle of $\overline{E}$. If $k=1$, as an emi-$1$-cube, $\overline{E}$ is a short exact sequence $$\xymatrix{ 0\ar[r] & \overline{E}_{-1}\ar[r]^-{i} & \overline{E}_0\ar[r] & \overline{E}_1 \ar[r] & 0},$$ where the metric of $\overline{E}_1$ is induced by the metric of $\overline{E}_0$. Then ${\rm tr}_1(\overline{E})$ is the cokernel with quotient metric of the map $E_{-1}\to E_{-1}\otimes\mathcal{O}(1)\oplus E_0\otimes\mathcal{O}(1)$ by the rule $e_{-1}\mapsto e_{-1}\otimes \sigma_{\infty}\oplus i(e_{-1})\otimes \sigma_0$. Here $\sigma_0$ (resp. $\sigma_{\infty}$) is the section of the tautological bundle $\mathcal{O}(1)$ on $\mathbb{P}^1$ which vanishes only at $0$ (resp. $\infty$), and $\mathcal{O}(1)$ is endowed with the Fubini-Study metric. If $k>1$, suppose that the transgression bundle is defined for $k-1$. Let ${\rm tr}_1(\overline{E})$ be the emi-$(k-1)$-cube over $X\times \mathbb{P}^1$ given by ${\rm tr}_1(\overline{E})_\alpha={\rm tr}_1\big(\partial^\alpha_1(\overline{E})\big)$, then ${\rm tr}_k(\overline{E})$ is defined as ${\rm tr}_{k-1}\big({\rm tr}_1(\overline{E})\big)$.
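For orientation, we record an elementary observation about the $1$-transgression bundle just defined (an illustration added here, not taken from \cite{BW}). Restricting the defining map $e_{-1}\mapsto e_{-1}\otimes\sigma_\infty\oplus i(e_{-1})\otimes\sigma_0$ to the fibres over $0$ and $\infty$, and using that $\sigma_0$ vanishes only at $0$ while $\sigma_\infty$ vanishes only at $\infty$, one obtains isomorphisms $${\rm tr}_1(\overline{E})\mid_{X\times\{0\}}\cong E_0 \quad\text{and}\quad {\rm tr}_1(\overline{E})\mid_{X\times\{\infty\}}\cong E_{-1}\oplus E_1,$$ so the transgression bundle interpolates on $\mathbb{P}^1$ between the middle term of the $1$-cube and the direct sum of its outer terms; compare the description of $\Psi\mid_{X\times\{0\}}$ and $\Psi\mid_{X\times\{\infty\}}$ in the proof of Theorem~\ref{explain_t} below.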
Moreover, according to \cite[Proposition 3.6]{BW}, for any hermitian cube $\overline{E}$ in the category $\mathcal{U}$, there is a unique way to change the metrics on $E_\alpha$ for $\alpha\nleq0$ such that the obtained new hermitian cube is emi. In fact, for $i=1,\ldots,k$, define $\lambda_i^1\overline{E}$ to be $$(\lambda_i^1\overline{E})_\alpha= \left\{ \begin{array}{ll} (E_\alpha,h_\alpha), & \text{if } \alpha_i=-1,0; \\ (E_\alpha,h'_\alpha), & \text{if } \alpha_i=1, \end{array} \right.$$ where $h'_\alpha$ is the metric induced by $h_{\alpha_1,\ldots,\alpha_{i-1},0,\alpha_{i+1},\ldots,\alpha_k}$. Thus $\lambda_i^1\overline{E}$ has the same locally free sheaves as $\overline{E}$, but the metrics on the face $\partial_i^1E$ are induced by the metrics of the face $\partial_i^0\overline{E}$. To measure the difference between $\overline{E}$ and $\lambda_i^1\overline{E}$, let $\lambda_i^2(\overline{E})$ be the hermitian $k$-cube determined by $\partial_i^{-1}\lambda_i^2(\overline{E})=\partial_i^1\overline{E}$, $\partial_i^0\lambda_i^2(\overline{E})=\partial_i^1\lambda_i^1(\overline{E})$, and $\partial_i^1\lambda_i^2(\overline{E})=0$. Set $\lambda_i=\lambda_i^1+\lambda_i^2$, $\lambda=\lambda_k\circ\cdots\circ\lambda_1$ if $k\geq1$ and $\lambda={\rm Id}$ otherwise. Then the map $\lambda$ induces a morphism of complexes $$\widetilde{\mathbb Z}C_*\mathcal{U}\to \widetilde{\mathbb Z}C_*^{\rm emi}\mathcal{U}$$ which is the quasi-inverse of the inclusion $\widetilde{\mathbb Z}C_*^{\rm emi}\mathcal{U}\hookrightarrow \widetilde{\mathbb Z}C_*\mathcal{U}$. To specify the $\mu_n$-equivariant variety $X$, we shall write $\widetilde{\mathbb Z}C_*(X,\mu_n):=\widetilde{\mathbb Z}C_*\mathcal{U}$.
\begin{defn}\label{202} Fix a primitive $n$-th root of unity $\zeta_n$. The restriction $\overline{F}\mid_{X_{\mu_n}}$ of an equivariant hermitian vector bundle to the fixed point subvariety splits into a direct sum $\oplus_{l=1}^n\overline{F}_l$, where $F_l$ is the eigenbundle of $F\mid_{X_{\mu_n}}$ corresponding to the eigenvalue ${\zeta_n}^l$. Let $K_l$ be the curvature form of the unique connection on $\overline{F}_l$ compatible with both the hermitian and the complex structures. The equivariant Chern-Weil form associated to $\overline{F}$ is defined as $${\rm ch}_g^0(\overline{F}):=\sum_{l=1}^n{\zeta_n}^l{\rm Tr}\big(\exp(-K_l)\big).$$ Define $R_n=\mathbb R$ if $n=1$ and $R_n=\mathbb C$ otherwise, and denote $V\otimes_{\mathbb R}{R_n}$ by $V_{R_n}$ for any real vector space $V$. The equivariant higher Bott-Chern form associated to a hermitian $k$-cube $\overline{E}$ is defined as $${\rm ch}_g^k(\overline{E}):={\rm ch}_g^0\Big({\rm tr}_k\big(\lambda(\overline{E})\big)\Big)\in \bigoplus_{p\geq0}\widetilde{D}^*(X_{\mu_n},p)[2p]_{R_n}.$$ \end{defn}
\begin{defn} Let $\overline{F}\mid_{X_{\mu_n}}=\oplus_{l=1}^n\overline{F}_l$ be the restriction of an equivariant hermitian vector bundle over the fixed point subvariety, where $F_l$ is the eigenbundle of $F\mid_{X_{\mu_n}}$ corresponding to the eigenvalue ${\zeta_n}^l$ and $K_l$ is the curvature form of $\overline{F}_l$. The equivariant Todd form is defined as $${\rm Td}_g(\overline{F})={\rm det}\big(\frac{-K_n}{1-e^{K_n}}\big)\prod_{l\neq n}{\rm det}\big(\frac{1}{1-\zeta_n^{-l}e^{K_l}}\big).$$ \end{defn}
When $X$ is proper, Burgos and Wang gave in \cite[Section 6]{BW} a quasi-inverse $\varphi: \widetilde{D}^*(X,p)\to D^*(X,p)$ of the quasi-isomorphism $\iota: D^*(X,p)\to \widetilde{D}^*(X,p)$. By means of this quasi-inverse, the equivariant higher Bott-Chern form has another expression with value in $\bigoplus_{p\geq0}D^*(X_{\mu_n},p)[2p]_{R_n}$. To see this expression, let us set $z=x/y$ which defines the coordinate map $\mathbb{C}\to \mathbb{P}^1_{\mathbb{C}}$ by sending $z\to [z,1]$. Then $\log\mid z\mid$ defines an $L^1$ function on $\mathbb{P}^1_{\mathbb{C}}$, which can be considered as a current. We shall denote by $\log\mid z_1\mid,\cdots,\log\mid z_k\mid$ the corresponding currents on $(\mathbb{P}^1_{\mathbb{C}})^k$. These currents can be formally considered as elements in $D^1\big((\mathbb{P}^1_{\mathbb{C}})^k,1\big)$, and they satisfy the following differential equation $$d_\mathcal{D}\log\mid z_j\mid=-2\partial\overline{\partial}\log\mid z_j\mid=-2i\pi(\delta_{\mathbb{P}^1_{\mathbb{C}}\times\mathbb{P}^1_{\mathbb{C}}\times\cdots\times\{\infty\}\times\cdots\times\mathbb{P}^1_{\mathbb{C}}}- \delta_{\mathbb{P}^1_{\mathbb{C}}\times\mathbb{P}^1_{\mathbb{C}}\times\cdots\times\{0\}\times\cdots\times\mathbb{P}^1_{\mathbb{C}}})$$ where $\infty$ and $0$ stand at the $j$-th place. Let $u_1,\cdots,u_k$ be $k$ elements in $\bigoplus_{p\geq 0}D^{2p-1}(\cdot,p)$, we define an element in $\bigoplus_{p\geq 0}D^{2p-k}(\cdot,p)$ by the formula $$C_k(u_1,\cdots,u_k):=-(-\frac{1}{2})^{k-1}\sum_{\sigma\in \mathfrak{S}_k}(-1)^\sigma u_{\sigma(1)}\bullet(u_{\sigma(2)}\bullet(\cdots u_{\sigma(k)})\cdots)$$ where $\mathfrak{S}_k$ stands for the $k$-th symmetric group. Then we have \begin{align}\label{dc} d_\mathcal{D}C_k(u_1,\cdots,u_k)&=(-\frac{1}{2})k\sum_{j=1}^k(-1)^{j-1}d_\mathcal{D}(u_j)\bullet C_{k-1}(u_1,\cdots,\widehat{u_j},\cdots,u_k)\notag \\ &=(-\frac{1}{2})k\sum_{j=1}^k(-1)^{j-1}d_\mathcal{D}(u_j)\wedge C_{k-1}(u_1,\cdots,\widehat{u_j},\cdots,u_k). \end{align} We refer to \cite[Lemma 2.9]{Roe} for a proof of these identities. With the above notations, the equivariant higher Bott-Chern form associated to a hermitian $k$-cube $\overline{E}$ with $k\geq 0$ is given by the expression $$\varphi\big({\rm ch}_g^k(\overline{E})\big)=\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^k(\overline{E})\wedge C_k(\log\mid z_1\mid^2,\cdots,\log\mid z_k\mid^2).$$
\begin{thm}\label{203} The equivariant higher Bott-Chern forms induce a morphism of complexes $$\xymatrix{ \widetilde{\mathbb Z}C^*(X,\mu_n) \ar[r]^-{\lambda} & \widetilde{\mathbb Z}C^*_{\rm emi}(X,\mu_n) \ar[r]^-{{\rm ch}_g^0\circ{\rm tr}_*} & \bigoplus_{p\geq0}\widetilde{D}^*(X_{\mu_n},p)[2p]_{R_n} \ar[r]^-{\varphi} & \bigoplus_{p\geq0}D^*(X_{\mu_n},p)[2p]_{R_n},}$$ which is denoted by ${\rm ch}_g$. Here, $\widetilde{\mathbb Z}C^*(X,\mu_n)$ and $\widetilde{\mathbb Z}C^*_{\rm emi}(X,\mu_n)$ are the (cohomological) complexes associated to the homological complexes $\widetilde{\mathbb Z}C_*(X,\mu_n)$ and $\widetilde{\mathbb Z}C_*^{\rm emi}(X,\mu_n)$. \end{thm}
Specializing to the case $k=1$, let $\bar{\varepsilon}: 0\to \bar{E}_{-1}\to \bar{E}_0\to \bar{E}_1\to 0$ be a hermitian $1$-cube, then \begin{align*} d_\mathcal{D}{\rm ch}_g(\bar{\varepsilon})&=d_\mathcal{D}\bigg(\frac{1}{4\pi i}\int_{\mathbb{P}^1}{\rm ch}_g^0\Big({\rm tr}_1\big(\lambda(\bar{\varepsilon})\big)\Big)\log\mid z\mid^2\bigg)\\ &={\rm ch}_g(\bar{E}_{0})-{\rm ch}_g(\bar{E}_{-1})-{\rm ch}_g(\bar{E}_{1}). \end{align*} If $\bar{\varepsilon}$ is split, by replacing $z$ by $1/z$, we know that \begin{align*} {\rm ch}_g(\bar{\varepsilon})&=\frac{1}{4\pi i}\int_{\mathbb{P}^1}{\rm ch}_g^0\Big({\rm tr}_1\big(\lambda(\bar{\varepsilon})\big)\Big)\log\mid z\mid^2\\ &=\frac{1}{4\pi i}\int_{\mathbb{P}^1}{\rm ch}_g^0\big(\bar{E}_{1}(1)\big)\log\mid z\mid^2+\frac{1}{4\pi i}\int_{\mathbb{P}^1}{\rm ch}_g^0\big(\frac{\bar{E}_{-1}(1)\oplus \bar{E}_{-1}(1)}{\bar{E}_{-1}}\big)\log\mid z\mid^2\\ &=0. \end{align*}
Let ${\rm ch}'_g$ denote the usual equivariant Chern-Weil forms with the factor $2\pi i$ inside $${\rm ch}'_g(\bar E)=\sum_{l=1}^n{\zeta_n}^l{\rm Tr}\big(\exp(\frac{-K_l}{2\pi i})\big),$$ and let $\Phi$ be an operator acting on $2n$-forms by $\Phi(\alpha)=(2\pi i)^{-n}\alpha$. Then $$\Phi\big({\rm ch}_g(\bar E)\big)={\rm ch}'_g(\bar E)$$ and $$\frac{\bar{\partial}\partial}{2\pi i}\Big(2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)\Big)={\rm ch}'_g(\bar{E}_{0})-{\rm ch}'_g(\bar{E}_{-1})-{\rm ch}'_g(\bar{E}_{1}).$$ This means, after a rescaling, ${\rm ch}_g(\bar{\varepsilon})$ satisfies the axiomatic conditions for a theory of unique equivariant secondary Bott-Chern classes \cite[Theorem 3.4]{KR1} (see \cite[\S1, (f)]{BGS} for the non-equivariant case). Notice that in \cite{BGS}, the authors used the supertraces of Quillen's superconnections to define the non-equivariant secondary Bott-Chern form $\widetilde{\rm ch}$. Split $\bar{\varepsilon}\mid_{X_{\mu_n}}$ into a direct sum of short exact sequences of its eigenbundles $\oplus_{l=1}^n \bar{\varepsilon}_l$ and define $$\widetilde{{\rm ch}}_g(\bar{\varepsilon}):=\sum_{l=1}^n {\zeta_n}^l \widetilde{\rm ch}(\bar{\varepsilon}_l).$$ Then we get another way, using the supertraces of Quillen's superconnections, to define the equivariant secondary Bott-Chern form $\widetilde{{\rm ch}}_g(\bar{\varepsilon})$ which satisfies the equation $$\frac{\bar{\partial}\partial}{2\pi i}\widetilde{{\rm ch}}_g(\bar{\varepsilon})={\rm ch}'_g(\bar{E}_{0})-{\rm ch}'_g(\bar{E}_{-1})-{\rm ch}'_g(\bar{E}_{1}).$$ So, $2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)$ must be equal to $\widetilde{{\rm ch}}_g(\bar{\varepsilon})$ modulo $\operatorname{Im}\partial+\operatorname{Im}\bar{\partial}$. Let us write $2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)-\widetilde{{\rm ch}}_g(\bar{\varepsilon})=\partial\Delta_\partial(\bar{\varepsilon})+\bar{\partial}\Delta_{\bar{\partial}}(\bar{\varepsilon})$; the following theorem states that $\Delta_\partial(\bar{\varepsilon})$ and $\Delta_{\bar{\partial}}(\bar{\varepsilon})$ can be chosen to admit some functorial property.
\begin{thm}\label{explain_t} Let notations and assumptions be as above. There is a functorial choice of the differential forms $\Delta_\partial(\bar{\varepsilon})$ and $\Delta_{\bar{\partial}}(\bar{\varepsilon})$ such that $$2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)-\widetilde{{\rm ch}}_g(\bar{\varepsilon})=\partial\Delta_\partial(\bar{\varepsilon})+\bar{\partial}\Delta_{\bar{\partial}}(\bar{\varepsilon})$$ and that $\Delta_\partial(j^*\bar{\varepsilon})=j^*\Delta_\partial(\bar{\varepsilon}), \Delta_{\bar{\partial}}(j^*\bar{\varepsilon})=j^*\Delta_{\bar{\partial}}(\bar{\varepsilon})$ for any equivariant morphism $j: X'\to X$. \end{thm} \begin{proof} For hermitian $1$-cube $\bar{\varepsilon}: 0\to \bar{E}_{-1}\to \bar{E}_0\to \bar{E}_1\to 0$, we divide it into two emi-$1$-cubes $$\bar{\varepsilon}_1: 0\to \bar{E}_{-1}\to \bar{E}_0\to \bar{E}'_1\to 0$$ and $$\bar{\varepsilon}_2: 0\to \bar{E}_{1}\to \bar{E}'_1\to 0\to 0$$ where $\bar{E}'_1$ is $E_1$ endowed with the quotient metric. According to the definition of the morphism $\lambda$, the higher Bott-Chern form is additive ${\rm ch}_g(\bar{\varepsilon})={\rm ch}_g(\bar{\varepsilon}_1)+{\rm ch}_g(\bar{\varepsilon}_2)$. To study the secondary Bott-Chern form constructed by the supertraces of Quillen's superconnections, we write down a double complex \begin{align} \xymatrix{ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\ 0 \ar[r] & \bar{E}_{-1} \ar[r] \ar[d] & \bar{E}_0 \ar[r] \ar[d] & \bar{E}_1 \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \bar{E}_{-1} \ar[r] \ar[d] & \bar{E}_0 \ar[r] \ar[d] & \bar{E}'_1 \ar[r] \ar[d] & 0 \\ 0 \ar[r] & 0 \ar[r] \ar[d] & 0 \ar[r] \ar[d] & 0 \ar[r] \ar[d] & 0 \\ & 0 & 0 & 0 &} \end{align}
Restricting every bundle to $X_{\mu_n}$ and splitting it into the direct sum of its eigenbundles, one can immediately repeat the argument given in \cite[Theorem 1.20]{BGS} (where non-equivariant bundles are dealt with) to write down a proof of the fact that $\widetilde{{\rm ch}}_g(\bar{\varepsilon})=\widetilde{{\rm ch}}_g(\bar{\varepsilon}_1)+\widetilde{{\rm ch}}_g(\bar{\varepsilon}_2)$ modulo $\operatorname{Im}\partial+\operatorname{Im}\bar{\partial}$. In the proof of \cite[Theorem 1.20]{BGS}, the error terms were explicitly written down and are functorial (see \cite[(1.71) (1.72) (1.78) (1.81) (1.82)]{BGS}). That means one can fix a functorial choice of differential forms $\Delta'_\partial(\bar{\varepsilon})$ and $\Delta'_{\bar{\partial}}(\bar{\varepsilon})$ such that $$\widetilde{{\rm ch}}_g(\bar{\varepsilon})-\big(\widetilde{{\rm ch}}_g(\bar{\varepsilon}_1)+\widetilde{{\rm ch}}_g(\bar{\varepsilon}_2)\big)=\partial\Delta'_\partial(\bar{\varepsilon})+\bar{\partial}\Delta'_{\bar{\partial}}(\bar{\varepsilon}).$$ So we may reduce our proof to the case where $\bar{\varepsilon}$ is an emi-$1$-cube.
Now we consider the following exact sequence on $X\times \mathbb{P}^1$ $$\xymatrix{ \Psi:\quad 0 \ar[rr] && \bar{E}_{-1} \ar[rr]^-{{\rm Id}\otimes \sigma_\infty\oplus i\otimes \sigma_0} && \bar{E}_{-1}(1)\oplus\bar{E}_0(1) \ar[rr] && {\rm tr}_1(\bar{\varepsilon}) \ar[rr] && 0, } $$ we compute, using the fact that $\int_{\mathbb{P}^1}{\rm ch}_g(\bar{E}_{-1})\log\mid z\mid=0$, $\int_{\mathbb{P}^1}{\rm ch}_g\big(\bar{E}_{-1}(1)\oplus \bar{E}_{0}(1)\big)\log\mid z\mid=0$ and the Stokes formula, \begin{align*} 2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)&=\int_{\mathbb{P}^1}\Phi\Big({\rm ch}_g\big({\rm tr}_1(\bar{\varepsilon})\big)\Big)\log\mid z\mid^2\\ &=-\int_{\mathbb{P}^1}\frac{\bar{\partial}\partial}{2\pi i}\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2\\ &=\frac{1}{2\pi i}\int_{\mathbb{P}^1}(\partial_X+\partial_z)(\bar{\partial}_X+\bar{\partial}_z)\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2\\ &=\partial_X\bar{\partial}_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2)+\partial_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}\bar{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2)\\ &\quad-\bar{\partial}_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2)+\frac{1}{2\pi i}\int_{\mathbb{P}^1}{\partial}_z\bar{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2\\ &=\frac{1}{2\pi i}\int_{\mathbb{P}^1}\widetilde{{\rm ch}}_g(\Psi){\partial}_z\bar{\partial}_z\log \mid z\mid^2 +\partial_X\bar{\partial}_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2)\\ &\quad+\partial_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}\bar{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2)-\bar{\partial}_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2) \end{align*} Set $$\Delta_{\partial}(\bar{\varepsilon})=\bar{\partial}_X(\frac{1}{2\pi i}\int_{\mathbb{P}^1}\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2)+\frac{1}{2\pi i}\int_{\mathbb{P}^1}\bar{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2$$ and $$\Delta_{\bar{\partial}}(\bar{\varepsilon})=-\frac{1}{2\pi i}\int_{\mathbb{P}^1}{\partial}_z\widetilde{{\rm ch}}_g(\Psi)\log \mid z\mid^2.$$ We get $$2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)=\widetilde{{\rm ch}}_g(\Psi\mid_{X\times\{\infty\}})-\widetilde{{\rm ch}}_g(\Psi\mid_{X\times\{0\}})+\partial\Delta_{\partial}(\bar{\varepsilon})+\bar{\partial}\Delta_{\bar{\partial}}(\bar{\varepsilon}).$$
By construction, $\Psi\mid_{X\times\{0\}}$ is split and $\Psi\mid_{X\times\{\infty\}}$ is isometric to the direct sum of $\bar{\varepsilon}$ and the split exact sequence $0\to 0\to \bar{E}_{-1}\to \bar{E}_{-1}\to 0$, so we finally have $$2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)-\widetilde{{\rm ch}}_g(\bar{\varepsilon})=\partial\Delta_{\partial}(\bar{\varepsilon})+\bar{\partial}\Delta_{\bar{\partial}}(\bar{\varepsilon}).$$ Since the $1$-transgression bundle construction $\Psi$ is functorial, the differential forms $\Delta_{\partial}(\bar{\varepsilon})$ and $\Delta_{\bar{\partial}}(\bar{\varepsilon})$ associated to $\widetilde{{\rm ch}}_g(\Psi)$ are also functorial. This completes the proof. \end{proof}
\begin{rem}\label{explain_tt} (i) According to Theorem~\ref{explain_t}, we can make a functorial choice of differential forms $\Delta_\partial(\bar{\varepsilon})$ and $\Delta_{\bar{\partial}}(\bar{\varepsilon})$ for any hermitian $1$-cube $\bar{\varepsilon}$ such that $$2\Phi\big({\rm ch}_g(\bar{\varepsilon})\big)-\widetilde{{\rm ch}}_g(\bar{\varepsilon})=\partial\Delta_\partial(\bar{\varepsilon})+\bar{\partial}\Delta_{\bar{\partial}}(\bar{\varepsilon}).$$ Set $\Delta(\bar{\varepsilon})=-\Phi^{-1}\big(\frac{\Delta_\partial(\bar{\varepsilon})+\Delta_{\bar{\partial}}(\bar{\varepsilon})}{2}\big)$, then $\Delta(\bar{\varepsilon})$ is functorial and by the definition of the Deligne complex $\mathfrak{D}^*\big(E_{\log}(X), p\big)$ we have $${\rm ch}_g(\bar{\varepsilon})-\Phi^{-1}(\frac{\widetilde{{\rm ch}}_g(\bar{\varepsilon})}{2})=-\pi\big((\partial+\bar{\partial})\Delta(\bar{\varepsilon})\big)=d_\mathcal{D}\Delta(\bar{\varepsilon}).$$
(ii) It is easily seen from the proof of Theorem~\ref{explain_t} that if one uses another way to define the equivariant Bott-Chern form $\widetilde{{\rm ch}}_g$ which satisfies the axiomatic conditions in \cite[Theorem 1.29]{BGS} (at the level of differential forms) and which is additive for direct sum of short exact sequences, then one can also make a functorial choice of element $\Delta(\bar{\varepsilon})$ for any hermitian emi-$1$-cube $\bar{\varepsilon}$ such that $${\rm ch}_g(\bar{\varepsilon})-\Phi^{-1}(\frac{\widetilde{{\rm ch}}_g(\bar{\varepsilon})}{2})=d_\mathcal{D}\Delta(\bar{\varepsilon}).$$ \end{rem}
If $X$ is a regular $\mu_n$-projective arithmetic scheme over an arithmetic ring $(D,\Sigma,F_\infty)$, we shall denote $X_\mathbb R:=\big(X(\mathbb C),F_\infty\big)$ the real variety associated to $X$ where $F_\infty$ is the antiholomorphic involution of $X(\mathbb C)$ induced by the conjugate-linear involution $F_\infty$ over $(D,\Sigma,F_\infty)$. For any sheaf of complex vector spaces $V$ with a real structure over $X_\mathbb R$, we denote by $\sigma$ the involution given by $$\omega\mapsto \overline{F_\infty^*(\omega)}.$$ Write $D^*(X_\mathbb R,p):=D^*\big(X(\mathbb C),p\big)^\sigma$ for the subcomplex of $D^*\big(X(\mathbb C),p\big)$ consisting of the fixed elements under $\sigma$, we define the real Deligne-Beilinson cohomology of $X$ as $$H_D^*\big(X,\mathbb R(p)\big):=H^*\big(D^*(X_\mathbb R,p)\big).$$
Let us denote by $\widehat{\mathcal{P}}(X,\mu_n)$ the exact category of $\mu_n$-equivariant hermitian vector bundles on $X$, and by $\widehat{S}(X,\mu_n)$ the simplicial set associated to the Waldhausen $S$-construction of $\widehat{\mathcal{P}}(X,\mu_n)$ (cf. \cite[Section 2.3]{T3}). The forgetful functor (forget about the metrics) $\pi: \widehat{\mathcal{P}}(X,\mu_n)\to \mathcal{P}(X,\mu_n)$ induces an equivalence of categories, so we have homotopy equivalence $$\mid \widehat{S}(X,\mu_n)\mid\simeq \mid S(X,\mu_n)\mid$$ and isomorphisms of abelian groups $$K_m(X,\mu_n)\cong \pi_{m+1}(\mid \widehat{S}(X,\mu_n)\mid,0)$$ for any $m\geq 0$. To give the simplicial description of the equivariant regulator maps, we associate to each element in $S_k\widehat{\mathcal{P}}(X,\mu_n)$ a hermitian $(k-1)$-cube. Firstly, notice that an element $A$ in $S_k\widehat{\mathcal{P}}(X,\mu_n)$ is a family of injections $$A_{0,1}\rightarrowtail A_{0,2}\rightarrowtail \cdots\rightarrowtail A_{0,k}$$ of $\mu_n$-equivariant hermitian vector bundles on $X$ with quotients $A_{i,j}\simeq A_{0,j}/{A_{0,i}}$ for each $i<j$. For $k=1$, we write $${\rm Cub}(A_{0,1})=A_{0,1}.$$ Suppose that the map ${\rm Cub}$ is defined for all $l<k$, then ${\rm Cub}A$ is the $(k-1)$-cube with \begin{align*} \partial_1^{-1}{\rm Cub}A & =s_{k-2}^1\cdots s_1^1(A_{0,1}), \\ \partial_1^{1}{\rm Cub}A & ={\rm Cub}(\partial_0A). \end{align*}
Let $\mathbb Z\widehat{S}_*(X,\mu_n)$ be the simplicial abelian group generated by the simplicial set $\widehat{S}(X,\mu_n)$, and let $\mathcal{N}\big(\mathbb Z\widehat{S}_*(X,\mu_n)\big)$ be the Moore complex associated to $\mathbb Z\widehat{S}_*(X,\mu_n)$ with differential $d=\sum_{i=0}^k(-1)^i\partial_i$ where $\partial_i$ is the face map of $\widehat{S}(X,\mu_n)$. Then, according to \cite[Corollary 4.8]{BW}, the map ${\rm Cub}$ defined above extends by linearity to a morphism of homological complexes $${\rm Cub}:\quad \mathcal{N}\big(\mathbb Z\widehat{S}_*(X,\mu_n)\big)\to \widetilde{\mathbb Z}C_*(X,\mu_n)[-1],$$ and hence one gets a simplicial map $${\rm Cub}:\quad \mathbb Z\widehat{S}_*(X,\mu_n)\to \mathcal{K}\big(\widetilde{\mathbb Z}C_*(X,\mu_n)[-1]\big)$$ where $\mathcal{K}$ is the Dold-Puppe functor.
\begin{defn}\label{204} Let notations and assumptions be as above. We denote by $D^{2p-*}(X_{\mu_n},p)$ the homological complex associated to the complex $\tau_{\leq0}\big(D^*(X_{\mu_n},p)[2p]\big)$ which is the canonical truncation of $D^*(X_{\mu_n},p)[2p]$ at degree $0$. We define a simplicial map $$\xymatrix{\widetilde{{\rm ch}}_g: \widehat{S}(X,\mu_n) \ar[r]^-{\rm Hu} & \mathbb Z\widehat{S}_*(X,\mu_n) \ar[d]^-{\rm Cub} & \\ & \mathcal{K}\big(\widetilde{\mathbb Z}C_*(X,\mu_n)[-1]\big) \ar[r]^-{\mathcal{K}({\rm ch}_g)} & \mathcal{K}\big(\bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)[-1]_{R_n}\big),}$$ where ${\rm Hu}$ is the Hurewicz map. \end{defn}
\begin{defn}\label{205} Let $X$ be a regular $\mu_n$-projective scheme over an arithmetic ring $(D,\Sigma,F_\infty)$, and let $X_{\mu_n}$ be the fixed point subscheme. The higher equivariant arithmetic K-groups of $X$ are defined as $$\widehat{K}_m(X,\mu_n):=\pi_{m+1}\big(\text{homotopy fibre of }\mid\widetilde{{\rm ch}}_g\mid\big)\quad\text{for}\quad m\geq1,$$ and the equivariant regulator maps $${\rm ch}_g:\quad K_m(X,\mu_n)\to \bigoplus_{p\geq0}H_D^{2p-m}\big(X_{\mu_n},\mathbb R(p)\big)_{R_n}$$ are defined as the homomorphisms induced by $\widetilde{{\rm ch}}_g$ at the level of homotopy groups. \end{defn}
\begin{rem}\label{206} (i). We have the long exact sequence $$\cdots \to \widehat{K}_m(X,\mu_n) \to K_m(X,\mu_n) \to \bigoplus_{p\geq0}H_D^{2p-m}\big(X_{\mu_n},\mathbb R(p)\big)_{R_n} \to \widehat{K}_{m-1}(X,\mu_n) \to \cdots$$ ending with $$\xymatrix{\cdots\to K_1(X,\mu_n) \ar[r] & \bigoplus_{p\geq0}H_D^{2p-1}\big(X_{\mu_n},\mathbb R(p)\big)_{R_n} \ar[d] & \\ & \pi_1\big(\text{homotopy fibre of }\widetilde{{\rm ch}}_g\big) \ar[r] & K_0(X,\mu_n)\to \bigoplus_{p\geq0}H_D^{2p}\big(X_{\mu_n},\mathbb R(p)\big)_{R_n}.}$$
(ii). When $n=1$, the equivariant higher Bott-Chern forms given in Definition~\ref{202} coincide with the higher Bott-Chern forms defined in \cite{BW} for non-equivariant proper varieties. So, in this case, $${\rm ch}_g:\quad K_m(X,\mu_1)\to \bigoplus_{p\geq0}H_D^{2p-m}\big(X,\mathbb R(p)\big)$$ is Beilinson's regulator map.
(iii). The higher equivariant arithmetic K-groups $\widehat{K}_m(X,\mu_n)$ can be defined for non-proper $X$; for details, see \cite[Section 2]{T3}.
(iv). Let $s({\rm ch}_g)$ denote the simple complex associated to the chain morphism $$\xymatrix{ {\rm ch}_g: & \widetilde{\mathbb Z}C_*(X,\mu_n) \ar[r]^-{{\rm ch}_g} & \bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)_{R_n}.}$$ Then, for any $m\geq1$, there is an isomorphism $$\widehat{K}_m(X,\mu_n)_\mathbb Q\cong H_m\big(s({\rm ch}_g),\mathbb Q\big).$$
(v). A $\mu_n$-equivariant hermitian sheaf on $X$ is a $\mu_n$-equivariant coherent sheaf on $X$ which is locally free on $X(\mathbb C)$ and is equipped with a $\mu_n$-invariant hermitian metric. For a $\mu_n$-equivariant hermitian sheaf, the higher equivariant Bott-Chern form can still be defined in the same way. Denote by $\widehat{\mathcal{P}}'(X,\mu_n)$ the category of $\mu_n$-equivariant hermitian sheaves on $X$; then, using $\widehat{\mathcal{P}}'(X,\mu_n)$ instead of $\widehat{\mathcal{P}}(X,\mu_n)$, one may define a new arithmetic K-theory $\widehat{K}'_*(X,\mu_n)$, which is called the equivariant arithmetic K$'$-theory. Since $\widehat{\mathcal{P}}'(X,\mu_n)$ and $\widehat{\mathcal{P}}(X,\mu_n)$ define the same algebraic K-theory when $X$ is regular, it is easily seen from the five lemma that the natural inclusion $\widehat{\mathcal{P}}(X,\mu_n)\subset\widehat{\mathcal{P}}'(X,\mu_n)$ induces isomorphisms $\widehat{K}_m(X,\mu_n)\cong \widehat{K}'_m(X,\mu_n)$ for any $m\geq1$. \end{rem}
\subsection{Equivariant analytic torsion for hermitian cubes} In \cite{BK}, J.-M. Bismut and K. K\"{o}hler developed a theory of higher analytic torsion forms for holomorphic submersions of complex manifolds. The higher analytic torsion form solves a differential equation which gives a refinement of the Grothendieck-Riemann-Roch theorem at the level of characteristic forms. Later, in \cite{Ma1}, X. Ma generalized J.-M. Bismut and K. K\"{o}hler's results to the equivariant case. Considering the higher K-theory and the Deligne-Beilinson cohomology, to make a refinement of the Riemann-Roch theorem at the level of higher Bott-Chern forms representing the regulator maps, one needs an extension of higher analytic torsion to hermitian cubes; this has been done in \cite{Roe}. In this subsection, we treat the equivariant case by using Ma's equivariant analytic torsion forms. Our construction is slightly different from Roessler's construction.
Let $X, Y$ be two smooth $\mu_n$-projective varieties over $\mathbb{C}$, and let $f: X\to Y$ be an equivariant and smooth morphism. A K\"{a}hler fibration structure on $f$ is a real closed $(1,1)$-form $\omega$ on $X$ which induces K\"{a}hler metrics on the fibres of $f$ (cf. \cite[Def. 1.1, Thm. 1.2]{BK}). For instance, we may fix a $\mu_n$-invariant K\"{a}hler metric on $X$ and choose the corresponding K\"{a}hler form $\omega$ as a K\"{a}hler fibration structure on $f$. Let $(E,h^E)$ be a $\mu_n$-equivariant hermitian vector bundle on $X$ such that $E$ is $f$-acyclic, i.e. the higher direct images $R^qf_*E$ vanish for $q>0$. The equivariant analytic torsion form $T_g(f,\omega,h^E)$ is an element of $\bigoplus_{p\geq0}D^{2p-1}(Y_{\mu_n},p)_{R_n}$, which depends on $f,\omega$ and $(E,h^E)$ and satisfies the differential equation $$d_\mathcal{D}T_g(f,\omega,h^E)={\rm ch}_g(f_*E,f_*h^E)-\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(Tf,h^{Tf}){\rm ch}_g(E,h^E)$$ where $h^{Tf}$ is the hermitian metric induced by $\omega$ on the holomorphic tangent bundle $Tf$, $r$ is the rank of the bundle $Tf_{\mu_n}$, and $f_*h^E$ is the $L^2$-metric on $f_*E$ (see the end of \cite[Section 2.2]{Roe} for a definition). By definition, for elements $u, v\in (f_*E)_y$ of the fibre of $f_*E$ over a point $y\in Y$, the $L^2$-hermitian product is given by $$\langle u, v\rangle_{L^2}=\frac{1}{(2\pi)^b} \int_{f^{-1}y} \langle u, v\rangle_E\frac{\omega^b}{b!}$$ where $b$ is the relative dimension of $X$ over $Y$.
We would like to caution the reader that the equivariant analytic torsion form we use here coincides with Ma's definition only up to a rescaling. If we denote by $T'_g(f,\omega,h^E)$ Ma's equivariant torsion form, then the equality $2\Phi\big(T_g(f,\omega,h^E)\big)=T'_g(f,\omega,h^E)$ holds. From now on, we shall write $T_g(\omega,h^E)$ or $T_g(h^E)$ for $T_g(f,\omega,h^E)$, if there is no ambiguity about the underlying map or K\"{a}hler form. Now, let $\omega'$ be the form associated to another K\"{a}hler fibration structure on $f: X\to Y$ and let $h'^{Tf}$ be the metric on $Tf$ induced by this new fibration. Let $\widetilde{{\rm Td}}_g(Tf,h'^{Tf},h^{Tf})$ be the equivariant secondary Todd form used in the Appendix \big(Section 5.1 (\ref{Ma2.3.2})\big), and set ${\rm Td}_g(Tf,h'^{Tf},h^{Tf})=\Phi^{-1}\big(\frac{\widetilde{{\rm Td}}_g(Tf,h'^{Tf},h^{Tf})}{2}\big)$. So $$d_\mathcal{D}{\rm Td}_g(Tf,h'^{Tf},h^{Tf})={\rm Td}_g(Tf,h^{Tf})-{\rm Td}_g(Tf,h'^{Tf}).$$
The following anomaly formula is useful for our later discussion.
\begin{thm}\label{207} Let notations and assumptions be as above. The following identity holds in $\bigoplus_{p\geq 0}\big(D^{2p-1}(Y_{\mu_n},p)/{\operatorname{Im} d_\mathcal{D}}\big)$: $$T_g(\omega,h^E)-T_g(\omega',h^E)={\rm ch}_g(f_*E,h'^{f_*E},h^{f_*E}) -\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(Tf,h'^{Tf},h^{Tf}){\rm ch}_g(E,h^E)$$ where $(f_*E,h'^{f_*E},h^{f_*E})$ stands for the emi-$1$-cube of hermitian vector bundles $$\xymatrix{ 0\ar[r] & (f_*E,h'^{f_*E}) \ar[r]^-{\rm Id} & (f_*E,h^{f_*E}) \ar[r] & 0 \ar[r] & 0.}$$ \end{thm} \begin{proof} This is a translation of \cite[Theorem 2.13]{Ma1}; see also Theorem~\ref{A10} in the Appendix. Considering the relation between the equivariant analytic torsion forms $T_g(\omega,h^E), T_g(\omega',h^E)$ and the ones used in Ma's paper, we only need to show $${\rm ch}_g(f_*E,h'^{f_*E},h^{f_*E})=\Phi^{-1}\big(\frac{\widetilde{{\rm ch}}_g(f_*E,h'^{f_*E},h^{f_*E})}{2}\big)\in \bigoplus_{p\geq 0}\big(D^{2p-1}(Y_{\mu_n},p)/{\operatorname{Im} d_\mathcal{D}}\big).$$ But this is the content of Remark~\ref{explain_tt}. \end{proof}
According to Remark~\ref{explain_tt} and Theorem~\ref{A1} in the Appendix, there exists a functorial choice of the differential form which measures the difference $$T_g(\omega,h^E)-T_g(\omega',h^E)-{\rm ch}_g(f_*E,h'^{f_*E},h^{f_*E})+\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(Tf,h'^{Tf},h^{Tf}){\rm ch}_g(E,h^E)$$ in Theorem~\ref{207}. With the same notations as in Remark~\ref{explain_tt} and Theorem~\ref{A1}, we set $$\Delta(f,\overline{E},\omega,\omega'):=-\Phi^{-1}\big(\frac{\Delta^0(f,\overline{E},\omega,\omega')+\Delta_0(f,\overline{E},\omega,\omega')}{2}\big)+\Delta(f_*E,h'^{f_*E},h^{f_*E});$$ it satisfies the differential equation \begin{multline*} d_\mathcal{D}\Delta(f,\overline{E},\omega,\omega')=T_g(\omega,h^E)-T_g(\omega',h^E)-{\rm ch}_g(f_*E,h'^{f_*E},h^{f_*E})\\+\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(Tf,h'^{Tf},h^{Tf}){\rm ch}_g(E,h^E). \end{multline*} We consider the following setting. Let $Z$ be a compact K\"{a}hler manifold and let $Z_1$ be a closed submanifold of $Z$. Choose a K\"{a}hler metric on $Z$ and endow $Z_1$ with the restricted metric. Let $f_Z: X\times Z\to Y\times Z$ be the induced map and let $\omega,\omega'$ be the K\"{a}hler forms of the product metrics on $X\times Z$ with respect to two K\"{a}hler fibrations on $f: X\to Y$. Similarly, let $f_{Z_1}: X\times Z_1\to Y\times Z_1$ be the induced map and let $\omega_1,\omega'_1$ be the K\"{a}hler forms of the product metrics on $X\times Z_1$ with respect to the same two K\"{a}hler fibrations on $f: X\to Y$. We shall denote by $j$ (resp. $i$) the natural embedding $X\times Z_1\to X\times Z$ (resp. $Y\times Z_1\to Y\times Z$). Then $j^*\omega=\omega_1$ and $j^*\omega'=\omega'_1$. Let $\overline{E}$ be an $f_Z$-acyclic hermitian bundle on $X\times Z$. We then have the following result.
\begin{lem}\label{delta} The identity $i_{\mu_n}^*\Delta(f_Z,\overline{E},\omega,\omega')=\Delta(f_{Z_1},j^*\overline{E},\omega_1,\omega'_1)$ holds. \end{lem} \begin{proof} This is a consequence of Theorem~\ref{A1} in the Appendix. \end{proof}
\begin{defn}\label{homotopy-diagram} By a chain homotopy of a diagram of homological complexes \begin{align*} \xymatrix{ A_* \ar[r]^-{i} \ar[d]^-{f} & B_* \ar[d]^-{l} \\ C_* \ar[r]^-{j} & D_*,} \end{align*} we understand a chain homotopy between the complex morphisms $j\circ f$ and $l\circ i$. \end{defn}
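Explicitly, and with the sign convention that appears in Theorem~\ref{209} below, a chain homotopy of such a diagram amounts to a collection of maps $h_k: A_k\to D_{k+1}$ satisfying $$d\circ h_k+h_{k-1}\circ d=j\circ f-l\circ i$$ in every degree $k$; we spell this out only to fix the signs.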
Roughly speaking, the equivariant analytic torsion for hermitian cubes is a chain homotopy of the following diagram \begin{align}\label{atc} \xymatrix{ \widetilde{\mathbb Z}C_*^{f-\text{ac}}(X,\mu_n) \ar[r]^-{{\rm ch}_g} \ar[d]^-{f_*} & \bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)_{R_n} \ar[d]^-{{f_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tf})\bullet(\cdot)} \\ \widetilde{\mathbb Z}C_*(Y,\mu_n) \ar[r]^-{{\rm ch}_g} & \bigoplus_{p\geq0}D^{2p-*}(Y_{\mu_n},p)_{R_n}} \end{align} where $\widetilde{\mathbb Z}C_*^{f-\text{ac}}(X,\mu_n)$ is the subcomplex of $\widetilde{\mathbb Z}C_*(X,\mu_n)$ made of $f$-acyclic bundles. Since the Waldhausen K-theory space of $\widehat{\mathcal{P}}(X,\mu_n)$ is homotopy equivalent to the Waldhausen K-theory space of the full subcategory of $\widehat{\mathcal{P}}(X,\mu_n)$ consisting of $f$-acyclic bundles, we shall always work with acyclic bundles.
As in the non-equivariant case treated in \cite{Roe}, the equivariant analytic torsion for hermitian cubes induces a commutative diagram at the level of homology groups, and hence one gets an analytic proof of the equivariant version of Gillet's Riemann-Roch theorem for higher algebraic K-theory.
To construct a chain homotopy of (\ref{atc}), let us proceed in two steps. Notice that the equivariant higher Bott-Chern form factors as $$\xymatrix{ \widetilde{\mathbb Z}C_*(X,\mu_n) \ar[r]^-{\lambda} & \widetilde{\mathbb Z}C_*^{\rm emi}(X,\mu_n) \ar[r]^-{{\rm ch}_g^0\circ {\rm tr}_*} & \bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)_{R_n}.}$$ We first clarify the difference between $f_*\big({\rm tr}\circ\lambda(\cdot)\big)$ and ${\rm tr}\circ\lambda\big(f_*(\cdot)\big)$. Let $\overline{E}$ be an $f$-acyclic hermitian $k$-cube in $\widehat{\mathcal{P}}(X,\mu_n)$. The hermitian bundles $f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)$ and ${\rm tr}_k\circ\lambda\big(f_*(\overline{E})\big)$ are canonically isomorphic as bundles, but carry in general different metrics. For instance, assume that $\overline{E}$ is a hermitian emi-$1$-cube; then $f_*\big({\rm tr}_1(\overline{E})\big)$ and ${\rm tr}_1\big(f_*(\overline{E})\big)$ fit into the following two exact sequences \begin{align*} \xymatrix{ 0 \ar[r] & f_*(p_X^*\bar{E}_{-1}) \ar[r] & f_*(p_X^*\bar{E}_{-1})(1)\oplus f_*(p_X^*\bar{E}_0)(1) \ar[r] & f_*\big({\rm tr}_1(\overline{E})\big) \ar[r] & 0, } \end{align*} and \begin{align*} \xymatrix{ 0 \ar[r] & p_Y^*\big(f_*(\bar{E}_{-1})\big) \ar[r] & p_Y^*\big(f_*(\bar{E}_{-1})\big)(1)\oplus p_Y^*\big(f_*(\bar{E}_0)\big)(1) \ar[r] & {\rm tr}_1\big(f_*(\overline{E})\big) \ar[r] & 0. } \end{align*} Here $p_X$ (resp. $p_Y$) stands for the obvious projection $X\times \mathbb{P}^1\to X$ (resp. $Y\times \mathbb{P}^1\to Y$). By the definition of the $L^2$-metric, over the point $(y, t)$ in $Y\times \mathbb{P}^1$, the hermitian product on $f_*\big({\rm tr}_1(\overline{E})\big)_{(y, t)}$ depends on the integral of a certain power of the K\"{a}hler form $\omega_{X\times \mathbb{P}^1}$ over the fibre $f_t$ and hence depends on $t$. But the pull-back hermitian products on $p_Y^*\big(f_*(\bar{E}_0)\big)_{(y, t)}$ and on $p_Y^*\big(f_*(\bar{E}_{-1})\big)_{(y, t)}$ equal the hermitian products on $f_*(\bar{E}_0)_y$ and on $f_*(\bar{E}_{-1})_y$, which do not depend on $t$; therefore the induced hermitian product on ${\rm tr}_1\big(f_*(\overline{E})\big)_{(y, t)}$ does not depend on $t$ either. So in general, $f_*\big({\rm tr}_1(\overline{E})\big)$ and ${\rm tr}_1\big(f_*(\overline{E})\big)$ carry different metrics.
In the following, we shall write $H(\overline{E})$ for the short exact sequence $$\xymatrix{ 0\ar[r] & f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} & {\rm tr}_k\circ\lambda\big(f_*(\overline{E})\big) \ar[r] & 0 \ar[r] & 0}$$ which is an emi-$1$-cube of hermitian bundles on $Y\times (\mathbb{P}^1)^k$. The transgression bundle of $H(\overline{E})$ is a hermitian bundle on $Y\times (\mathbb{P}^1)^{k+1}=Y\times (\mathbb{P}^1)^k\times \mathbb{P}^1$. But here we change the order of the $\mathbb{P}^1$, let $p_1$ be the first projection from $Y\times\mathbb{P}^1\times (\mathbb{P}^1)^k$ to $Y\times (\mathbb{P}^1)^k$, we apply the transgression bundle construction to the short exact sequence $H(\overline{E})$ with respect to the projection $p_1$ to get a hermitian bundle on $Y\times (\mathbb{P}^1)^{k+1}$. With some abuse of notation, we still denote this hermitian bundle by ${\rm tr}_1\big(H(\overline{E})\big)$ and it satisfies the following relations: $${\rm tr}_1\big(H(\overline{E})\big)\mid_{Y\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_k\circ\lambda\big(f_*(\overline{E})\big),\quad {\rm tr}_1\big(H(\overline{E})\big)\mid_{Y\times \{\infty\}\times (\mathbb{P}^1)^k}=f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)$$ and $${\rm tr}_1\big(H(\overline{E})\big)\mid_{Y\times (\mathbb{P}^1)^{i}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_1\big(H(\partial_i^0\overline{E})\big),$$ $${\rm tr}_1\big(H(\overline{E})\big)\mid_{Y\times (\mathbb{P}^1)^{i}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_1\big(H(\partial_i^{-1}\overline{E})\big)\oplus {\rm tr}_1\big(H(\partial_i^{1}\overline{E})\big)$$ for $i=1,\cdots,k$. Now we define $$\Pi'_k(\overline{E}):=\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2).$$ The same reasoning as in \cite[Lemma 3.3]{Roe} proves that $\Pi'_k$ vanishes on degenerate $k$-cubes, and hence we obtain a map $\Pi'_k: \widetilde{\mathbb Z}C_k^{f-\text{ac}}(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-1}(Y_{\mu_n},p)_{R_n}$ by linear extension.
\begin{prop}\label{208} The equality \begin{multline*} d_\mathcal{D}\circ \Pi'_k(\overline{E})+\Pi'_{k-1}\circ d(\overline{E})\\ ={\rm ch}_g(f_*\overline{E})-\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2) \end{multline*} holds. \end{prop} \begin{proof} We compute \begin{multline*} d_\mathcal{D}\circ \Pi'_k(\overline{E})=\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E})\big)\Big)\wedge d_\mathcal{D}C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ =\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E})\big)\Big)\wedge\big((-\frac{1}{2})(k+1)\sum_{j=1}^{k+1}(-1)^{j-1}(-4\pi i)(\delta_{z_j=\infty}-\delta_{z_j=0})\\ {\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\widehat{\log\mid z_j\mid^2},\cdots,\log\mid z_{k+1}\mid^2)\big)}\\ =\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E})\big)\Big)\wedge\big((-\frac{1}{2})(k+1)\sum_{j=2}^{k+1}(-1)^{j-1}(-4\pi i)(\delta_{z_j=\infty}-\delta_{z_j=0})\\ {\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\widehat{\log\mid z_j\mid^2},\cdots,\log\mid z_{k+1}\mid^2)\big)}\\ +\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big({\rm tr}_k\circ\lambda\big(f_*(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\\ -\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\\ =\frac{(-1)^{k+1}}{2k!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}\Bigg(\bigg(\sum_{j=2}^{k+1}(-1)^{j-1}\Big[{\rm ch}_g^0\Big({\rm tr}_1\big(H(\partial_j^{-1}\overline{E}\oplus \partial_j^{1}\overline{E})\big)\Big)-{\rm ch}_g^0\Big({\rm tr}_1\big(H(\partial_j^0\overline{E})\big)\Big)\Big]\bigg)
\\ {\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\Bigg)+{\rm ch}_g(f_*\overline{E})}\\ -\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\\ =\frac{(-1)^k}{2k!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(-d\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)+{\rm ch}_g(f_*\overline{E})
\\ -\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\\ =-\Pi'_{k-1}\circ d(\overline{E})+{\rm ch}_g(f_*\overline{E})-\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2).
\end{multline*} So we are done. \end{proof}
On the other hand, we equip $X\times (\mathbb{P}^1)^k$ with the product metric and we define $$\Pi''_k(\overline{E})=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(T_g\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)$$ where $T_g\big({\rm tr}_k\circ\lambda(\overline{E})\big)$ is the equivariant higher analytic torsion of the hermitian bundle ${\rm tr}_k\circ\lambda(\overline{E})$ with respect to the fibration $f: X\times (\mathbb{P}^1)^k\to Y\times (\mathbb{P}^1)^k$. By \cite[Lemma 3.5]{Roe}, the map $\Pi''_k$ vanishes on degenerate $k$-cubes and hence we obtain a map $\Pi''_k: \widetilde{\mathbb Z}C_k^{f-\text{ac}}(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-1}(Y_{\mu_n},p)_{R_n}$ by linear extension.
\begin{thm}\label{209} Set $\Pi_k=\Pi'_k+\Pi''_k$; then $\Pi_k$ defines a chain homotopy of the diagram (\ref{atc}). This map $\Pi_k: \widetilde{\mathbb Z}C_k^{f-\text{ac}}(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-1}(Y_{\mu_n},p)_{R_n}$ is called the equivariant higher analytic torsion for hermitian cubes. \end{thm} \begin{proof} Let $\overline{E}$ be a hermitian $k$-cube in $\widetilde{\mathbb Z}C_k^{f-\text{ac}}(X,\mu_n)$. We compute \begin{multline*} d_\mathcal{D}\circ\Pi_k(\overline{E})+\Pi_{k-1}\circ d(\overline{E})=d_\mathcal{D}\circ\Pi'_k(\overline{E})+\Pi'_{k-1}\circ d(\overline{E})+d_\mathcal{D}\circ\Pi''_k(\overline{E})+\Pi''_{k-1}\circ d(\overline{E})\\ ={\rm ch}_g(f_*\overline{E})-\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)+d_\mathcal{D}\circ\Pi''_k(\overline{E})+\Pi''_{k-1}\circ d(\overline{E}) \end{multline*} and \begin{multline*} d_\mathcal{D}\circ\Pi''_k(\overline{E})=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}d_\mathcal{D}C_{k+1}\Big(T_g\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ =\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}(-\frac{1}{2})(k+1)\bigg(d_\mathcal{D}T_g\big({\rm tr}_k\circ\lambda(\overline{E})\big)\bullet C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)
\\ +\sum_{j=1}^{k}(-1)^{j}(-4\pi i)(\delta_{z_j=\infty}-\delta_{z_j=0}){\wedge C_{k}\Big(T_g\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\widehat{\log\mid z_j\mid^2},\cdots,\log\mid z_k\mid^2\Big)}\bigg)\\ =\frac{(-1)^{k}}{k!(2\pi i)^{k-1}}\int_{(\mathbb{P}^1)^{k-1}}\sum_{j=1}^k(-1)^{j}\bigg(C_{k}\Big(T_g\big({\rm tr}_{k-1}\circ\lambda(\partial_j^{0}\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k-1}\mid^2\Big)
\\ -C_{k}\Big(T_g\big({\rm tr}_{k-1}\circ\lambda(\partial_j^{-1}\overline{E})\oplus {\rm tr}_{k-1}\circ\lambda(\partial_j^{1}\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k-1}\mid^2\Big)\bigg)\\ +\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}\Bigg(\bigg({\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)-\frac{1}{(2\pi i)^r}\int_{{X_{\mu_n}\times (\mathbb{P}^1)^k}/{Y_{\mu_n}\times (\mathbb{P}^1)^k}}{\rm Td}_g(\overline{Tf}){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\bigg)\\ {\bullet C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\Bigg)}\\ =-\Pi''_{k-1}\circ d(\overline{E})+\frac{(-1)^k}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}{\rm ch}_g^0\Big(f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)
\\ -\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(\overline{Tf})\bullet {\rm ch}_g(\overline{E}). \end{multline*} Combining these two computations, we finally get $$d_\mathcal{D}\circ\Pi_k(\overline{E})+\Pi_{k-1}\circ d(\overline{E})={\rm ch}_g(f_*\overline{E})-\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(\overline{Tf})\bullet {\rm ch}_g(\overline{E}).$$ So we are done. \end{proof}
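In particular, since chain homotopic morphisms of complexes induce the same map on homology, Theorem~\ref{209} shows that for every cycle $\overline{E}\in \widetilde{\mathbb Z}C_k^{f-\text{ac}}(X,\mu_n)$ (i.e. $d\overline{E}=0$) one has $${\rm ch}_g(f_*\overline{E})=\frac{1}{(2\pi i)^r}\int_{X_{\mu_n}/{Y_{\mu_n}}}{\rm Td}_g(\overline{Tf})\bullet {\rm ch}_g(\overline{E})+d_\mathcal{D}\Pi_k(\overline{E}),$$ so that the two compositions in (\ref{atc}) agree on homology; this is the commutativity on homology mentioned above, which underlies the analytic proof of the equivariant Gillet Riemann-Roch theorem.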
If we are given another fibration structure $\omega'$, then for any $f$-acyclic hermitian $k$-cube $\overline{E}$ in $\widehat{\mathcal{P}}(X,\mu_n)$, the short exact sequence \begin{align*} \xymatrix{0 \ar[r] & (f_*E,h'^{f_*E}) \ar[r]^-{\rm Id} & (f_*E,h^{f_*E}) \ar[r] & 0 \ar[r] & 0} \end{align*} forms a hermitian $(k+1)$-cube $H_f(\overline{E})$ on $Y$ such that the transgression bundle ${\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big)$ satisfies the relations $${\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big)\mid_{Y\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_{k}\big(\lambda(f_*E,h^{f_*E})\big),$$ $${\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big)\mid_{Y\times \{\infty\}\times (\mathbb{P}^1)^k}={\rm tr}_{k}\big(\lambda(f_*E,h'^{f_*E})\big)$$ and $${\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big)\mid_{Y\times (\mathbb{P}^1)^{i}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_k\Big(\lambda\big(H_f(\partial_i^0\overline{E})\big)\Big),$$ $${\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big)\mid_{Y\times (\mathbb{P}^1)^{i}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_k\Big(\lambda\big(H_f(\partial_i^{-1}\overline{E})\big)\Big)\oplus {\rm tr}_k\Big(\lambda\big(H_f(\partial_i^{1}\overline{E})\big)\Big)$$ for $i=1,\cdots,k$. Therefore, the following map $$\Pi^{(1)}_k(\overline{E})=\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_{k+1}\circ\lambda\big(H_f(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)$$ which vanishes on degenerate cubes provides a chain homotopy of homological complexes between the maps ${\rm ch}_g\circ f_*$ and ${\rm ch}_g\circ f'_*$ where $f'_*(\overline{E}):=(f_*E,h'^{f_*E})$ is the push-forward with respect to the new fibration $\omega'$. Similarly, by projection formula, the map \begin{multline*} \Pi^{(3)}_k(\overline{E}):=\frac{(-1)^{k}}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}\bigg(\Big(\frac{1}{(2\pi i)^r}\int_{{X_{\mu_n}\times (\mathbb{P}^1)^k}/{Y_{\mu_n}\times (\mathbb{P}^1)^k}}{\rm Td}_g(Tf,h'^{Tf},h^{Tf}){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\\ {\bullet C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\bigg)} \end{multline*} gives a chain homotopy of homological complexes between the maps ${f_{\mu_n}}_*\circ\big({\rm Td}_g(Tf,h^{Tf})\bullet{\rm ch}_g\big)$ and ${f_{\mu_n}}_*\circ\big({\rm Td}_g(Tf,h'^{Tf})\bullet {\rm ch}_g\big)$. Finally we write $\Pi^{(2)}_k=\Pi'^{(2)}_k+\Pi''^{(2)}_k$ for the chain homotopy defined in Theorem~\ref{209} between the maps ${\rm ch}_g\circ f'_*$ and ${f_{\mu_n}}_*\circ\big({\rm Td}_g(Tf,h'^{Tf})\bullet {\rm ch}_g\big)$ with respect to the new fibration $\omega'$. Then $\Pi^{(1)}_k+\Pi^{(2)}_k-\Pi^{(3)}_k$ defines a chain homotopy between ${\rm ch}_g\circ f_*$ and ${f_{\mu_n}}_*\circ\big({\rm Td}_g(Tf,h^{Tf})\bullet {\rm ch}_g\big)$. At the end of this subsection, we compare this homotopy $\Pi^{(1)}_k+\Pi^{(2)}_k-\Pi^{(3)}_k$ with $\Pi_k$ constructed in Theorem~\ref{209}.
\begin{defn}\label{homotopy} Let $f, l$ be two morphisms of homological complexes $A_*\to B_*$, and let $h_1,h_2$ be two chain homotopies between $f$ and $l$. We say that $h_1$ is homotopic to $h_2$ if there exists a map $H: A_*\to B_{*+2}$ satisfying the condition that $Hd-dH=h_1-h_2$. \end{defn}
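Note that if $h_1$ and $h_2$ are two chain homotopies between the same morphisms $f,l: A_*\to B_*$, then subtracting the two homotopy identities gives $$d\circ(h_1-h_2)+(h_1-h_2)\circ d=0,$$ so that $h_1-h_2$ is a chain map of degree $+1$ (up to the usual sign conventions); the condition $Hd-dH=h_1-h_2$ in Definition~\ref{homotopy} then says that this difference is itself trivial up to a homotopy of one degree higher.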
Now, we denote by $H_f^{f'}(\overline{E})$ the following emi-$2$-cube of hermitian bundles on $Y\times (\mathbb{P}^1)^k$ $$\xymatrix{f'_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d]^-{\rm Id} & {\rm tr}_k\circ\lambda\big(f'_*(\overline{E})\big) \ar[r] \ar[d]^-{\rm Id} & 0 \ar[d]\\ f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d] & {\rm tr}_k\circ\lambda\big(f_*(\overline{E})\big) \ar[r] \ar[d] & 0 \ar[d]\\ 0 \ar[r] & 0 \ar[r] & 0.}$$ Changing the order of the $\mathbb{P}^1\times \mathbb{P}^1$ in $(\mathbb{P}^1)^{k+2}=(\mathbb{P}^1)^k\times \mathbb{P}^1\times \mathbb{P}^1$ so that $(\mathbb{P}^1)^{k+2}=\mathbb{P}^1\times \mathbb{P}^1\times (\mathbb{P}^1)^k$, we construct a hermitian bundle ${\rm tr}_2\big(H_f^{f'}(\overline{E})\big)$ on $Y\times (\mathbb{P}^1)^{k+2}$ as the second transgression bundle of $H_f^{f'}(\overline{E})$ such that it satisfies the following relations: $${\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\mid_{Y\times \{0\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big),$$ $${\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\mid_{Y\times \{\infty\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_1\Big(H_f\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big),$$ $${\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\mid_{Y\times \mathbb{P}^1\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(H(\overline{E})\big),\quad {\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\mid_{Y\times \mathbb{P}^1\times \{\infty\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(H'(\overline{E})\big)$$ and $${\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\mid_{Y\times (\mathbb{P}^1)^{i+1}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_2\big(H_f^{f'}(\partial_i^0\overline{E})\big),$$ $${\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\mid_{Y\times (\mathbb{P}^1)^{i+1}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_2\big(H_f^{f'}(\partial_i^{-1}\overline{E})\big)\oplus {\rm tr}_2\big(H_f^{f'}(\partial_i^{1}\overline{E})\big)$$ for $i=1,\cdots,k$. We set $$\Pi_{f,k}^{f'}(\overline{E}):=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\Big)\wedge C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2).$$ Then $\Pi_{f,k}^{f'}$ vanishes on degenerate $k$-cubes, and we obtain a map $$\Pi_{f,k}^{f'}: \widetilde{\mathbb Z}C_k^{f-\text{ac}}(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-2}(Y_{\mu_n},p)_{R_n}$$ by linear extension.
\begin{prop}\label{210} Let notations and assumptions be as above. Then the chain homotopy $\Pi_k$ is homotopic to the chain homotopy $\Pi^{(1)}_k+\Pi^{(2)}_k-\Pi^{(3)}_k$. \end{prop} \begin{proof} Firstly, we set \begin{multline*} \Pi^{(3')}_k(\overline{E}):=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(\frac{1}{(2\pi i)^r}\int_{{X_{\mu_n}\times (\mathbb{P}^1)^k}/{Y_{\mu_n}\times (\mathbb{P}^1)^k}}{\rm Td}_g(Tf,h'^{Tf},h^{Tf}){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\\ ,\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big). \end{multline*} It also defines a chain homotopy between the maps ${f_{\mu_n}}_*\circ\big({\rm Td}_g(Tf,h^{Tf})\bullet{\rm ch}_g\big)$ and ${f_{\mu_n}}_*\circ\big({\rm Td}_g(Tf,h'^{Tf})\bullet {\rm ch}_g\big)$. Since the product $\bullet$ on Deligne complex is graded commutative and is associative up to homotopy, we claim that $\Pi^{(3')}_k(\overline{E})$ is homotopic to $\Pi^{(3)}_k(\overline{E})$ so that we are left to show that $\Pi_k$ is homotopic to $\Pi^{(1)}_k+\Pi^{(2)}_k-\Pi^{(3')}_k$. Actually, our claim follows from the fact that $d_\mathcal{D}\Pi^{(3)}_k(\overline{E})-d_\mathcal{D}\Pi^{(3')}_k(\overline{E})=\Pi^{(3)}_{k-1}(-d\overline{E})-\Pi^{(3')}_{k-1}(-d\overline{E})$ and \cite[Remark 2.4, Lemma 2.5]{T3}.
Now, let $\overline{E}$ be a hermitian $k$-cube in $\widehat{\mathcal{P}}(X,\mu_n)$ which is $f$-acyclic. We compute \begin{multline*} d_\mathcal{D}\circ\Pi_{f,k}^{f'}(\overline{E})=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\Big)\wedge d_\mathcal{D}C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2)\\ =\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H_f^{f'}(\overline{E})\big)\Big)\wedge\big((-\frac{1}{2})(k+2)\sum_{j=1}^{k+2}(-1)^{j-1}(-4\pi i)(\delta_{z_j=\infty}-\delta_{z_j=0})
\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\widehat{\log\mid z_j\mid^2},\cdots,\log\mid z_{k+2}\mid^2)\big)}\\ =\Pi_{f,k-1}^{f'}\circ d(\overline{E})-\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}\Bigg[\bigg({\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E})\big)\Big)-{\rm ch}_g^0\Big({\rm tr}_1\big(H'(\overline{E})\big)\Big)\bigg)
\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\Bigg]}\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}\Bigg[\Bigg({\rm ch}_g^0\bigg({\rm tr}_{k+1}\Big(\lambda\big(H_f(\overline{E})\big)\Big)\bigg)-{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_f\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\Bigg)\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\Bigg]}\\ =\Pi_{f,k-1}^{f'}\circ d(\overline{E})-\Pi'_k(\overline{E})+\Pi'^{(2)}_k(\overline{E})+\Pi^{(1)}_k(\overline{E})
\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_f\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2). \end{multline*}
On the other hand, write $T'_g\big({\rm tr}_k\circ\lambda(\overline{E})\big)$ for the equivariant analytic torsion form of ${\rm tr}_k\circ\lambda(\overline{E})$ taken with respect to the fibration $\omega'$ (not to be confused with Ma's torsion form, which was also denoted $T'_g$ earlier). Then, according to the anomaly formula Theorem~\ref{207}, we have \begin{multline*} \Pi''_k(\overline{E})-\Pi''^{(2)}_k(\overline{E})\\ =\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(T_g\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)
\\ -\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(T'_g\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ =\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Bigg(\frac{1}{4\pi i}\int_{{Y_{\mu_n}\times (\mathbb{P}^1)^{k+1}}/{Y_{\mu_n}\times (\mathbb{P}^1)^{k}}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_f\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\log \mid z_{0}\mid^2
\\ ,\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Bigg)\\ -\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(\frac{1}{(2\pi i)^r}\int_{{X_{\mu_n}\times (\mathbb{P}^1)^k}/{Y_{\mu_n}\times (\mathbb{P}^1)^k}}{\rm Td}_g(Tf,h'^{Tf},h^{Tf}){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\\ ,\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big(f,{\rm tr}_k\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ =\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_f\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big(f,{\rm tr}_k\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)-\Pi^{(3')}_k(\overline{E}). \end{multline*}
We formally define a product $C_{k+1}\Big(\Delta\big(f,{\rm tr}_k\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)$ in a manner similar to $C_{k+1}(\cdot,\ldots,\cdot)$, as follows: \begin{align}\label{NewC} &C_{k+1}\Big(\Delta\big(f,{\rm tr}_k\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\notag\\ =&-(-\frac{1}{2})^{k}\sum_{\sigma\in \mathfrak{S}_k}(-1)^\sigma \Delta\bullet(\log\mid z_{\sigma(1)}\mid^2\bullet(\log\mid z_{\sigma(2)}\mid^2\bullet(\cdots \log\mid z_{\sigma(k)}\mid^2)\cdots)\notag\\ &-(-\frac{1}{2})^{k}\sum_{\sigma\in \mathfrak{S}_k}(-1)^\sigma \log\mid z_{\sigma(1)}\mid^2\bullet(\Delta\bullet(\log\mid z_{\sigma(2)}\mid^2\bullet(\cdots \log\mid z_{\sigma(k)}\mid^2)\cdots)\notag\\ &\cdots\notag\\ &-(-\frac{1}{2})^{k}\sum_{\sigma\in \mathfrak{S}_k}(-1)^\sigma \log\mid z_{\sigma(1)}\mid^2\bullet(\log\mid z_{\sigma(2)}\mid^2\bullet(\cdots \log\mid z_{\sigma(k)}\mid^2\bullet\Delta)\cdots). \end{align}
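For instance, for $k=1$ the definition (\ref{NewC}) simply reads $$C_{2}\Big(\Delta\big(f,{\rm tr}_1\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2\Big)=\frac{1}{2}\,\Delta\bullet\log\mid z_1\mid^2+\frac{1}{2}\,\log\mid z_1\mid^2\bullet\Delta,$$ where we abbreviate $\Delta=\Delta\big(f,{\rm tr}_1\circ\lambda(\overline{E}),\omega,\omega'\big)$; we record this special case only to make the bracketing in (\ref{NewC}) explicit.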
Then we set $$\Delta_k(\overline{E})=\frac{(-1)^{k}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(\Delta\big(f,{\rm tr}_k\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big),$$ and it is readily checked by Lemma~\ref{delta} that $$\Delta_{k-1}(d\overline{E})-d_\mathcal{D}\Delta_k(\overline{E})=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big(f,{\rm tr}_k\circ\lambda(\overline{E}),\omega,\omega'\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big).$$
Combining all the above computations, we finally get \begin{multline*} (\Pi_{f,k-1}^{f'}+\Delta_{k-1})\circ d(\overline{E})-d_\mathcal{D}\circ(\Pi_{f,k}^{f'}+\Delta_k)(\overline{E})\\ =-\Pi'^{(2)}_k(\overline{E})+\Pi'_k(\overline{E})-\Pi^{(1)}_k(\overline{E})+\Delta_{k-1}(d\overline{E})-d_\mathcal{D}\Delta_k(\overline{E})
\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_f\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ =-\Pi'^{(2)}_k(\overline{E})+\Pi'_k(\overline{E})-\Pi^{(1)}_k(\overline{E})-\Pi''^{(2)}_k(\overline{E})+\Pi''_k(\overline{E})+\Pi^{(3')}_k(\overline{E})\\ =\Pi_k(\overline{E})-\big(\Pi^{(1)}_k(\overline{E})+\Pi^{(2)}_k(\overline{E})-\Pi^{(3')}_k(\overline{E})\big). \end{multline*} So we are done. \end{proof}
\subsection{Direct image map between arithmetic K-groups} In this subsection, we define the direct image map between arithmetic K-groups of regular $\mu_n$-projective arithmetic schemes by means of the equivariant higher analytic torsion for hermitian cubes constructed in the last subsection.
Let now $X$ and $Y$ be two regular $\mu_n$-projective schemes over an arithmetic ring $(D,\Sigma,F_\infty)$. Assume that $f: X\to Y$ is an equivariant and flat morphism from $X$ to $Y$ such that $f$ is smooth over the generic fibre. Since the chain homotopy $$\Pi_*: \widetilde{\mathbb Z}C_*^{f-\text{ac}}\big(X(\mathbb C),\mu_n\big)\to \bigoplus_{p\geq0}D^{2p-*-1}(Y(\mathbb C)_{\mu_n},p)_{R_n}$$ is $\sigma$-invariant and the following diagrams $$\xymatrix{\widehat{S}^{f-\text{ac}}(X,\mu_n) \ar[r]^-{\rm Hu} \ar[d]^-{f_*} & \mathbb Z\widehat{S}_*^{f-\text{ac}}(X,\mu_n) \ar[r]^-{\rm Cub} \ar[d]^-{f_*} & \mathcal{K}\big(\widetilde{\mathbb Z}C_*^{f-\text{ac}}(X,\mu_n)[-1]\big) \ar[d]^-{f_*} \\ \widehat{S}(Y,\mu_n) \ar[r]^-{\rm Hu} & \mathbb Z\widehat{S}_*(Y,\mu_n) \ar[r]^-{\rm Cub} & \mathcal{K}\big(\widetilde{\mathbb Z}C_*(Y,\mu_n)[-1]\big)}$$ are commutative, the chain homotopy $\Pi_*$ induces a simplicial homotopy between the maps $\widetilde{{\rm ch}}_g\circ f_*$ and ${f_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tf})\bullet(\cdot)\circ \widetilde{{\rm ch}}_g$ in the following square $$\xymatrix{ \widehat{S}^{f-\text{ac}}(X,\mu_n) \ar[rr]^-{\widetilde{{\rm ch}}_g} \ar[d]^-{f_*} && \mathcal{K}\big(\bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)[-1]_{R_n}\big) \ar[d]^-{{f_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tf})\bullet(\cdot)}\\ \widehat{S}(Y,\mu_n) \ar[rr]^-{\widetilde{{\rm ch}}_g} && \mathcal{K}\big(\bigoplus_{p\geq0}D^{2p-*}(Y_{\mu_n},p)[-1]_{R_n}\big).}$$ For the construction of this simplicial homotopy and the general theory of homotopies in the category of simplicial abelian groups, the reader is referred to \cite[Section 2.1, Section 2.3, Section 3.2]{GJ}, especially \cite[p160, p162 Prop. 2.18, p72 Prop. 1.8 Cor. 1.9]{GJ}.
We remark that, according to the construction given in \cite{GJ}, the resulting simplicial homotopy is unique up to a homotopy in a strong sense: let $h_1, h_2$ be two simplicial homotopies arising from $\Pi_*$, then there exists a homotopy $$\xymatrix{\widetilde{H}:\quad\widehat{S}^{f-\text{ac}}(X,\mu_n)\times \Delta^1\times \Delta^1 \ar[rr] && \mathcal{K}\big(\bigoplus_{p\geq0}D^{2p-*}(Y_{\mu_n},p)[-1]_{R_n}\big)}$$ such that $\widetilde{H}(\cdot, \cdot, 0)=h_1$, $\widetilde{H}(\cdot, \cdot, 1)=h_2$, $\widetilde{H}(\cdot, 0, \cdot)$ is the constant homotopy on $\widetilde{{\rm ch}}_g\circ f_*$ and $\widetilde{H}(\cdot, 1, \cdot)$ is the constant homotopy on ${f_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tf})\bullet(\cdot)\circ \widetilde{{\rm ch}}_g$ (cf. \cite[Prop. 3.8]{GJ}). Thus, applying the geometric realization construction to the above simplicial square, we get a continuous map between homotopy fibres $$\mid f\mid: \text{ homotopy fibre of }\mid \widetilde{{\rm ch}}_g^X\mid\longrightarrow \text{homotopy fibre of }\mid \widetilde{{\rm ch}}_g^Y\mid$$ which is unique up to a homotopy. So we may have a well-defined direct image map between arithmetic $K$-groups as follows.
\begin{defn}\label{211} For $m\geq1$, the direct image map $f_*: \widehat{K}_m(X,\mu_n)\to \widehat{K}_m(Y,\mu_n)$ is defined as the homomorphism of abelian groups induced by the map $\mid f\mid$ at the level of homotopy groups. \end{defn}
\begin{rem}\label{flat} The condition ``flatness" of the map $f$ is only used to guarantee that the direct image of a $f$-acyclic bundle is locally free. By introducing the arithmetic K$'$-theory and using the isomorphisms $\widehat{K}_m(X,\mu_n)\cong \widehat{K}'_m(X,\mu_n)$ which hold for regular schemes, the condition ``flatness" can certainly be removed. \end{rem}
To study the direct image map up to torsion, we need the following lemma.
\begin{lem}\label{NewCA} Consider the following diagram of homological complexes $$\xymatrix{ A_* \ar@/_/[d]_-{f_1} \ar@/^/[d]^-{f_2} \ar[rr]^-{i} && B_* \ar@/_/[d]_-{l_1} \ar@/^/[d]^-{l_2} \\ C_* \ar[rr]^-{j} && D_*.}$$ Assume that $j\circ {f_1}$ (resp. $j\circ {f_2}$) is homotopic to $l_1\circ i$ (resp. $l_2\circ i$) via the chain homotopy $h_1$ (resp. $h_2$), and that $f_1$ (resp. $l_1$) is homotopic to $f_2$ (resp. $l_2$) via the chain homotopy $\pi_f$ (resp. $\pi_l$). Suppose that the chain homotopy $j\circ \pi_f+h_2-\pi_l\circ i$ is homotopic to the chain homotopy $h_1$, then the morphism on simple complexes $$(f_1,l_1,h_1): s_*(i: A_*\to B_*)\to s_*(j: C_*\to D_*)$$ is chain homotopic to $(f_2,l_2,h_2)$. \end{lem} \begin{proof} Let $(a,b)\in A_k\bigoplus B_{k+1}$, the morphism $(f_1,l_1,h_1)$ (resp. $(f_2,l_2,h_2)$) sends $(a,b)$ to $\big(f_1(a),l_1(b)+h_1(a)\big)$ (resp. $\big(f_2(a),l_2(b)+h_2(a)\big)$). Let $H: A_*\to D_{*+2}$ be the homotopy such that $$Hd-dH=h_1-(j\circ \pi_f+h_2-\pi_l\circ i),$$ and we define $\widetilde{H}(a,b)=\big(\pi_f(a),-\pi_l(b)+H(a)\big)$. Then we compute \begin{align*} d\widetilde{H}(a,b)=&d\big(\pi_f(a),-\pi_l(b)+H(a)\big)\\ =&\big(d\pi_f(a),j\circ \pi_f(a)+d\pi_l(b)-dH(a)\big)\\ =&\big(f_1(a)-f_2(a)-\pi_f(da),l_1(b)-l_2(b)-\pi_l(db)-Hd(a)+h_1(a)-h_2(a)+\pi_l\circ i(a)\big)\\ =&\big(f_1(a),l_1(b)+h_1(a)\big)-\big(f_2(a),l_2(b)+h_2(a)\big)-\big(\pi_fd(a),\pi_l(db)-\pi_l\circ i(a)+Hd(a)\big)\\ =&\big(f_1(a),l_1(b)+h_1(a)\big)-\big(f_2(a),l_2(b)+h_2(a)\big)-\widetilde{H}(da,i(a)-db)\\ =&\big(f_1(a),l_1(b)+h_1(a)\big)-\big(f_2(a),l_2(b)+h_2(a)\big)-\widetilde{H}d(a,b). \end{align*} So we are done. \end{proof}
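For the reader's convenience, we note that in the proof above the simple complex of a chain morphism $i: A_*\to B_*$ is taken with the conventions $$s_k(i: A_*\to B_*)=A_k\oplus B_{k+1},\qquad d(a,b)=\big(da,\; i(a)-db\big),$$ and a triple $(f_1,l_1,h_1)$ as in Lemma~\ref{NewCA} acts on it by $(a,b)\mapsto\big(f_1(a),l_1(b)+h_1(a)\big)$; we record this only to fix the signs used in the computation.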
\begin{cor}\label{212} Let notations and assumptions be as above; then the direct image map $f_*: \widehat{K}_m(X,\mu_n)_\mathbb Q\to \widehat{K}_m(Y,\mu_n)_\mathbb Q$ modulo torsion is independent of the choice of the K\"{a}hler fibration structure. \end{cor} \begin{proof} This follows from Remark~\ref{206} (iv), Proposition~\ref{210} and Lemma~\ref{NewCA}. \end{proof}
\section{Transitivity of the direct image maps} Let $f: X\to Y$, $h: Y\to Z$ and $l: X\to Z$ be three equivariant morphisms between regular $\mu_n$-projective schemes, which are all smooth over the generic fibres. Assume that $l=h\circ f$. In this section, we shall compare the direct image map $l_*$ with the composition $h_*\circ f_*$. To this end, we shall first discuss the functoriality of the equivariant analytic torsion forms with respect to a composition of submersions.
\subsection{Analytic torsion forms and families of submersions} Let $W, V$ and $S$ be three smooth $\mu_n$-equivariant algebraic varieties over $\mathbb C$ with $S=S_{\mu_n}$. Suppose that $f: W\to V$ and $h: V\to S$ are two proper smooth morphisms; then, passing to their analytifications, the maps $f: W(\mathbb C)\to V(\mathbb C)$ and $h: V(\mathbb C)\to S(\mathbb C)$ are holomorphic submersions with compact fibres. Set $l=h\circ f$; it is also a proper smooth morphism, and $l: W(\mathbb C)\to S(\mathbb C)$ is a holomorphic submersion with compact fibres as well.
Let $\omega^W$ and $\omega^V$ be two $\mu_n$-invariant K\"{a}hler forms on $W$ and on $V$. As before, $\omega^W$ and $\omega^V$ determine K\"{a}hler fibration structures on the morphisms $f, h$ and $l$, and they induce $\mu_n$-invariant hermitian metrics on the relative tangent bundles $Tf, Th$ and $Tl$. Consider the following short exact sequence of hermitian vector bundles $$\overline{T}(f,h,h\circ f):\quad 0\to \overline{Tf}\to \overline{Tl}\to f^*\overline{Th}\to 0,$$ and denote by ${\rm Td}_g\big(\overline{T}(f,h,h\circ f)\big)=\Phi^{-1}\Big(\frac{\widetilde{{\rm Td}}_g\big(\overline{T}(f,h,h\circ f)\big)}{2}\Big)$ (see Section 5.2 in the Appendix) the equivariant secondary Todd form such that $$d_\mathcal{D}{\rm Td}_g\big(\overline{T}(f,h,h\circ f)\big)={\rm Td}_g(\overline{Tl})-f_{\mu_n}^*{\rm Td}_g(\overline{Th}){\rm Td}_g(\overline{Tf}).$$
Now let $\overline{E}$ be a hermitian vector bundle on $W$; we shall assume that $E$ is $f$-acyclic and $l$-acyclic. Then the Leray spectral sequence $E_2^{i,j}=R^ih_*(R^jf_*E)$ degenerates at $E_2$, so that $f_*E=R^0f_*(E)$ is $h$-acyclic and $l_*E\cong h_*f_*E$. Clearly, $l_*E$ and $h_*f_*E$ carry in general different $L^2$-metrics (see Section 5.2 in the Appendix). Consider the following short exact sequence of hermitian vector bundles $$\overline{E}(f,h,h\circ f):\quad 0\to h_*f_*\overline{E} \to l_*\overline{E} \to 0\to 0;$$ it can be regarded as an emi-$1$-cube of hermitian bundles on $S$. Then the equivariant higher Bott-Chern form ${\rm ch}_g\big(\overline{E}(f,h,h\circ f)\big)$ satisfies the differential equation $$d_\mathcal{D}{\rm ch}_g\big(\overline{E}(f,h,h\circ f)\big)={\rm ch}_g(l_*\overline{E})-{\rm ch}_g(h_*f_*\overline{E}).$$
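In more detail, the degeneration argument runs as follows (a standard consequence of the $f$-acyclicity of $E$): $$R^il_*E\;\cong\;R^ih_*\big(R^0f_*E\big)=R^ih_*(f_*E)\qquad\text{for all } i\geq0,$$ since $R^jf_*E=0$ for $j>0$; the $l$-acyclicity of $E$ then forces $R^ih_*(f_*E)=0$ for $i>0$, i.e. $f_*E$ is $h$-acyclic, while the case $i=0$ gives the isomorphism $l_*E\cong h_*f_*E$.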
The main result in this subsection is the following.
\begin{thm}\label{301} Let notations and assumptions be as above. Then the following identity holds in $\bigoplus_{p\geq 0}\big(D^{2p-1}(S,p)/{\operatorname{Im} d_\mathcal{D}}\big)$: \begin{multline*} T_g(l,\omega^W,h^E)-T_g(h,\omega^V,h^{f_*E})-\frac{1}{(2\pi i)^{r_h}}\int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Th})T_g(f,\omega^W,h^E)\\ ={\rm ch}_g\big(\overline{E}(f,h,h\circ f)\big)-\frac{1}{(2\pi i)^{r_l}}\int_{W_{\mu_n}/S}{\rm Td}_g\big(\overline{T}(f,h,h\circ f)\big){\rm ch}_g(\overline{E}) \end{multline*} where $r_h$ and $r_l$ are the relative dimensions of $V_{\mu_n}/S$ and $W_{\mu_n}/S$ respectively. \end{thm} \begin{proof} This is a translation of Theorem~\ref{A20} in the Appendix. \end{proof}
\begin{lem}\label{deltac} With the same notations as in Remark~\ref{explain_tt} and Theorem~\ref{A2} in the Appendix, we set $$\Delta(f,h,\omega^W,\omega^V,\overline{E}):=-\Phi^{-1}\big(\frac{\Delta^0(f,h,\omega^W,\omega^V,\overline{E})+\Delta_0(f,h,\omega^W,\omega^V,\overline{E})}{2}\big)+\Delta\big(\overline{E}(f,h,h\circ f)\big).$$ Then $d_\mathcal{D}\Delta(f,h,\omega^W,\omega^V,\overline{E})$ measures the difference \begin{multline*} T_g(l,\omega^W,h^E)-T_g(h,\omega^V,h^{f_*E})-\frac{1}{(2\pi i)^{r_h}}\int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Th})T_g(f,\omega^W,h^E)\\ -{\rm ch}_g\big(\overline{E}(f,h,h\circ f)\big)+\frac{1}{(2\pi i)^{r_l}}\int_{W_{\mu_n}/S}{\rm Td}_g\big(\overline{T}(f,h,h\circ f)\big){\rm ch}_g(\overline{E}) \end{multline*} in Theorem~\ref{301}. Assume that we are in the same situation described before Lemma~\ref{delta}. Call $\iota: S\times Z_1\to S\times Z$ the natural inclusion; then, similarly to Lemma~\ref{delta}, we have $$\iota^*\Delta(f_Z,h_Z,\omega^W,\omega^V,\overline{E})=\Delta(f_{Z_1},h_{Z_1},\omega^W_1,\omega^V_1,j^*\overline{E}).$$ \end{lem} \begin{proof} This is a consequence of Theorem~\ref{A2} in the Appendix. \end{proof}
\subsection{The transitivity property} In this subsection, we present a certain transitivity property of the direct image maps between equivariant higher arithmetic K-groups. To do this, we first write down the following diagram of homological complexes \begin{align}\label{atc-comp} \xymatrix{ \widetilde{\mathbb Z}C_*^{(f, l)-\text{ac}}(X,\mu_n) \ar[r]^-{{\rm ch}_g} \ar[d]^-{f_*} & \bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)_{R_n} \ar[d]^-{{f_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tf})\bullet(\cdot)} \\ \widetilde{\mathbb Z}C_*^{f-\text{ac}}(Y,\mu_n) \ar[r]^-{{\rm ch}_g} \ar[d]^-{h_*} & \bigoplus_{p\geq0}D^{2p-*}(Y_{\mu_n},p)_{R_n} \ar[d]^-{{h_{\mu_n}}_*\circ {\rm Td}_g(\overline{Th})\bullet(\cdot)} \\ \widetilde{\mathbb Z}C_*(Z,\mu_n) \ar[r]^-{{\rm ch}_g} & \bigoplus_{p\geq0}D^{2p-*}(Z_{\mu_n},p)_{R_n}} \end{align} where $l$ is $h\circ f$ and $\widetilde{\mathbb Z}C_*^{(f, l)-\text{ac}}(X,\mu_n)$ is the subcomplex of $\widetilde{\mathbb Z}C_*(X,\mu_n)$ made of those bundles which are $f$-acyclic and $l$-acyclic simultaneously.
Let $\overline{E}$ be a hermitian $k$-cube in $\widehat{\mathcal{P}}(X,\mu_n)$ which is $f$-acyclic and $l$-acyclic. The short exact sequence \begin{align*} \xymatrix{0 \ar[r] & h_*f_*\overline{E} \ar[r]^-{\rm Id} & l_*\overline{E} \ar[r] & 0 \ar[r] & 0} \end{align*} can be regarded as a hermitian $(k+1)$-cube $H_{h\circ f}(\overline{E})$ on $Z$ such that the transgression bundle ${\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)$ satisfies the relations $${\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)\mid_{Z\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_{k}\big(\lambda(l_*\overline{E})\big),$$ $${\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)\mid_{Z\times \{\infty\}\times (\mathbb{P}^1)^k}={\rm tr}_{k}\big(\lambda(h_*f_*\overline{E})\big)$$ and $${\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)\mid_{Z\times (\mathbb{P}^1)^{i}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_k\Big(\lambda\big(H_{h\circ f}(\partial_i^0\overline{E})\big)\Big),$$ $${\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)\mid_{Z\times (\mathbb{P}^1)^{i}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_k\Big(\lambda\big(H_{h\circ f}(\partial_i^{-1}\overline{E})\big)\Big)\oplus {\rm tr}_k\Big(\lambda\big(H_{h\circ f}(\partial_i^{1}\overline{E})\big)\Big)$$ for $i=1,\cdots,k$.
\begin{prop}\label{302} The following map $$\Pi^{(1)}_k(\overline{E})=\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_{k+1}\circ\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)$$ which vanishes on degenerate cubes provides a chain homotopy of homological complexes between the maps ${\rm ch}_g\circ l_*$ and ${\rm ch}_g\circ (h_*\circ f_*)$. \end{prop} \begin{proof} Using the above relations that the transgression bundle ${\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)$ satisfies and the expression of $d_\mathcal{D}C_{k+1}$, the proof is straightforward. This can be also seen from the fact that $H_{h\circ f}(\overline{E})$ provides a chain homotopy between $l_*$ and $h_*\circ f_*$. \end{proof}
\begin{prop}\label{303} The composition ${h_{\mu_n}}_*\circ {\rm Td}_g(\overline{Th})\bullet\big({f_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tf})\bullet(\cdot)\big)$ is equal to ${l_{\mu_n}}_*\circ f_{\mu_n}^*{\rm Td}_g(\overline{Th}){\rm Td}_g(\overline{Tf})\bullet(\cdot)$. The following maps \begin{multline*} \Pi^{(3)}_k(\overline{E}):=\frac{(-1)^{k}}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}\bigg(\Big(\frac{1}{(2\pi i)^{r_l}}\int_{{X_{\mu_n}\times (\mathbb{P}^1)^k}/{Z_{\mu_n}\times(\mathbb{P}^1)^k}}{\rm Td}_g\big(\overline{T}(f,h,h\circ f)\big){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\\ {\bullet C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\bigg)} \end{multline*} and \begin{multline*} \Pi^{(3')}_k(\overline{E}):=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(\frac{1}{(2\pi i)^{r_l}}\int_{{X_{\mu_n}\times (\mathbb{P}^1)^k}/{Z_{\mu_n}\times(\mathbb{P}^1)^k}}{\rm Td}_g\big(\overline{T}(f,h,h\circ f)\big){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\\ ,\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big) \end{multline*} give two chain homotopies of homological complexes between the maps ${l_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tl})\bullet\big({\rm ch}_g(\cdot)\big)$ and ${l_{\mu_n}}_*\circ f_{\mu_n}^*{\rm Td}_g(\overline{Th}){\rm Td}_g(\overline{Tf})\bullet\big({\rm ch}_g(\cdot)\big)$. Moreover, $\Pi^{(3)}_k(\overline{E})$ and $\Pi^{(3')}_k(\overline{E})$ are homotopic to each other. \end{prop} \begin{proof} The first statement follows from the projection formula, the second follows from a straightforward computation, and the third follows from \cite[Remark 2.4, Lemma 2.5]{T3}. \end{proof}
Now we write $\Pi^f_k=\Pi'^f_k+\Pi''^f_k$ for the chain homotopy of the upper square in (\ref{atc-comp}) and $\Pi^h_k=\Pi'^h_k+\Pi''^h_k$ for the chain homotopy of the lower square in (\ref{atc-comp}). Then $\Pi^{(1)}_k+{h_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{Th})\bullet\Pi^f_k\big)+\Pi^h_k\circ f_*-\Pi^{(3)}_k$ defines a chain homotopy between the maps ${\rm ch}_g\circ l_*$ and ${l_{\mu_n}}_*\circ {\rm Td}_g(\overline{Tl})\bullet\big({\rm ch}_g(\cdot)\big)$. Suppose that the $\mu_n$-action on $Z$ is trivial. The main result of this subsection is that the chain homotopy $\Pi^{(1)}_k+{h_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{Th})\bullet\Pi^f_k\big)+\Pi^h_k\circ f_*-\Pi^{(3)}_k$ is homotopic to the chain homotopy $\Pi^l_k=\Pi'^l_k+\Pi''^l_k$ for the outer square in (\ref{atc-comp}). According to Proposition~\ref{303}, it is equivalent to show that $\Pi^{(1)}_k+{h_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{Th})\bullet\Pi^f_k\big)+\Pi^h_k\circ f_*-\Pi^{(3')}_k$ is homotopic to $\Pi^l_k$.
To see this, we firstly denote by $H_{h\circ f}^{l}(\overline{E})$ the following emi-$2$-cube of hermitian bundles on $Z\times (\mathbb{P}^1)^k$ $$\xymatrix{h_*f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d]^-{\rm Id} & {\rm tr}_k\circ\lambda\big(h_*f_*(\overline{E})\big) \ar[r] \ar[d]^-{\rm Id} & 0 \ar[d]\\ l_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d] & {\rm tr}_k\circ\lambda\big(l_*(\overline{E})\big) \ar[r] \ar[d] & 0 \ar[d]\\ 0 \ar[r] & 0 \ar[r] & 0.}$$ Then, like before, we construct a hermitian bundle ${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)$ on $(\mathbb{P}^1)^{k+2}$ as the second transgression bundle of $H_{h\circ f}^{l}(\overline{E})$ such that it satisfies the following relations: $${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\mid_{Z\times \{0\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big),$$ $${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\mid_{Z\times \{\infty\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_1\Big(H_{h\circ f}\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big),$$ $${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\mid_{Z\times \mathbb{P}^1\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(H(\overline{E},l_*)\big),$$ $${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\mid_{Z\times \mathbb{P}^1\times \{\infty\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(H(\overline{E},h_*f_*)\big)$$ and $${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\mid_{Z\times (\mathbb{P}^1)^{i+1}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_2\big(H_{h\circ f}^{l}(\partial_i^0\overline{E})\big),$$ $${\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\mid_{Z\times (\mathbb{P}^1)^{i+1}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_2\big(H_{h\circ f}^{l}(\partial_i^{-1}\overline{E})\big)\oplus {\rm tr}_2\big(H_{h\circ f}^{l}(\partial_i^{1}\overline{E})\big)$$ for $i=1,\cdots,k$. We set $$\mathbf{H}_{1,k}(\overline{E}):=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\Big)\wedge C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2).$$ Then $\mathbf{H}_{1,k}$ vanishes on degenerate $k$-cubes, and we obtain a map $$\mathbf{H}_{1,k}: \widetilde{\mathbb Z}C_k^{(f, l)-\text{ac}}(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-2}(Z,p)_{R_n}$$ by linear extension. This map satisfies the following differential equation \begin{multline*} d_\mathcal{D}\circ\mathbf{H}_{1,k}(\overline{E})=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\Big)\wedge d_\mathcal{D}C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2)\\ =\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H_{h\circ f}^{l}(\overline{E})\big)\Big)\wedge\big((-\frac{1}{2})(k+2)\sum_{j=1}^{k+2}(-1)^{j-1}(-4\pi i)(\delta_{z_j=\infty}-\delta_{z_j=0})
\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\widehat{\log\mid z_j\mid^2},\cdots,\log\mid z_{k+2}\mid^2)\big)}\\ =\mathbf{H}_{1,k-1}\circ d(\overline{E})-\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}\Bigg[\bigg({\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E},l_*)\big)\Big)-{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E},h_*f_*)\big)\Big)\bigg)
\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\Bigg]}\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}\Bigg[\Bigg({\rm ch}_g^0\bigg({\rm tr}_{k+1}\Big(\lambda\big(H_{h\circ f}(\overline{E})\big)\Big)\bigg)-{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_{h\circ f}\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\Bigg)\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\Bigg]}\\ =\mathbf{H}_{1,k-1}\circ d(\overline{E})-\Pi'^l_k(\overline{E})+\Pi^{(1)}_k(\overline{E})
\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E},h_*f_*)\big)\Big){\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)}\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_{h\circ f}\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2). \end{multline*}
Secondly, we denote by $H'^l_{h\circ f}(\overline{E})$ the following emi-$2$-cube of hermitian bundles on $Z\times(\mathbb{P}^1)^k$ $$\xymatrix{h_*f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d]^-{\rm Id} & h_*{\rm tr}_k\circ\lambda\big(f_*(\overline{E})\big) \ar[r] \ar[d]^-{\rm Id} & 0 \ar[d]\\ h_*f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d] & {\rm tr}_k\circ\lambda\big(h_*f_*(\overline{E})\big) \ar[r] \ar[d] & 0 \ar[d]\\ 0 \ar[r] & 0 \ar[r] & 0.}$$ Again, we construct a hermitian bundle ${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)$ on $Z\times(\mathbb{P}^1)^{k+2}$ as the second transgression bundle of $H'^l_{h\circ f}(\overline{E})$ such that it satisfies the following relations: $${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\mid_{Z\times \{0\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_1\big(H(f_*\overline{E},h_*)\big),$$ $${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\mid_{Z\times \{\infty\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_1\Big(h_*f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\to h_*f_*\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big),$$ $${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\mid_{Z\times \mathbb{P}^1\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(H(\overline{E},h_*f_*)\big),$$ $${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\mid_{Z\times \mathbb{P}^1\times \{\infty\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(h_*H(\overline{E},f_*)\big)$$ and $${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\mid_{Z\times (\mathbb{P}^1)^{i+1}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_2\big(H'^l_{h\circ f}(\partial_i^0\overline{E})\big),$$ $${\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\mid_{Z\times (\mathbb{P}^1)^{i+1}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_2\big(H'^l_{h\circ f}(\partial_i^{-1}\overline{E})\big)\oplus {\rm tr}_2\big(H'^l_{h\circ f}(\partial_i^{1}\overline{E})\big)$$ for $i=1,\cdots,k$. We set $$\mathbf{H}_{2,k}(\overline{E}):=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\Big)\wedge C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2).$$ Then $\mathbf{H}_{2,k}$ defines a map $$\mathbf{H}_{2,k}: \widetilde{\mathbb Z}C_k^{(f, l)-\text{ac}}(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-2}(Z,p)_{R_n}$$ which satisfies the following differential equation \begin{multline*} d_\mathcal{D}\circ\mathbf{H}_{2,k}(\overline{E})=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\Big)\wedge d_\mathcal{D}C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2)\\ =\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\big(H'^l_{h\circ f}(\overline{E})\big)\Big)\wedge\big((-\frac{1}{2})(k+2)\sum_{j=1}^{k+2}(-1)^{j-1}(-4\pi i)(\delta_{z_j=\infty}-\delta_{z_j=0})
\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\widehat{\log\mid z_j\mid^2},\cdots,\log\mid z_{k+2}\mid^2)\big)}\\ =\mathbf{H}_{2,k-1}\circ d(\overline{E})-\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}\Bigg[\bigg({\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E},h_*f_*)\big)\Big)-{\rm ch}_g^0\Big({\rm tr}_1\big(h_*H(\overline{E},f_*)\big)\Big)\bigg)
\\ {\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\Bigg]}\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(f_*\overline{E},h_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ =\mathbf{H}_{2,k-1}\circ d(\overline{E})+\Pi'^h_k(f_*\overline{E})
\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E},h_*f_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(h_*H(\overline{E},f_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2). \end{multline*}
Thirdly, notice that the short exact sequence $$\xymatrix{0 \ar[r] & h_*{\rm tr}_1\big(H(\overline{E},f_*)\big) \ar[r]^-{\rm Id} & {\rm tr}_1\big(h_*H(\overline{E},f_*)\big) \ar[r] & 0 \ar[r] & 0}$$ forms an emi-$1$-cube of hermitian bundles on $Z\times\mathbb{P}^1\times (\mathbb{P}^1)^k$, we denote it by $\widetilde{H}_{h\circ f}(\overline{E})$. Using the same construction as before, we construct a transgression bundle ${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)$ on $Z\times\mathbb{P}^1\times \mathbb{P}^1\times (\mathbb{P}^1)^k$ satisfying $${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\mid_{Z\times\{0\}\times (\mathbb{P}^1)^{k+1}}={\rm tr}_1\big(h_*H(\overline{E},f_*)\big),$$ $${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\mid_{Z\times\{\infty\}\times (\mathbb{P}^1)^{k+1}}=h_*{\rm tr}_1\big(H(\overline{E},f_*)\big),$$ $${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\mid_{Z\times\mathbb{P}^1\times \{0\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(h_*{\rm tr}_k\circ \lambda(f_*\overline{E})\to h_*{\rm tr}_k\circ \lambda(f_*\overline{E})\to 0\big),$$ $${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\mid_{Z\times\mathbb{P}^1\times \{\infty\}\times (\mathbb{P}^1)^k}={\rm tr}_1\big(h_*f_*{\rm tr}_k\circ \lambda(\overline{E})\to h_*f_*{\rm tr}_k\circ \lambda(\overline{E})\to 0\big)$$ and $${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\mid_{Z\times(\mathbb{P}^1)^{i+1}\times \{0\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_1\big(\widetilde{H}_{h\circ f}(\partial_i^0\overline{E})\big),$$ $${\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\mid_{Z\times(\mathbb{P}^1)^{i+1}\times \{\infty\}\times (\mathbb{P}^1)^{k-i}}={\rm tr}_1\big(\widetilde{H}_{h\circ f}(\partial_i^{-1}\overline{E})\big)\oplus {\rm tr}_1\big(\widetilde{H}_{h\circ f}(\partial_i^{1}\overline{E})\big)$$ for $i=1,\cdots,k$. So if we set $$\mathbf{H}_{3,k}(\overline{E}):=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_1\big(\widetilde{H}_{h\circ f}(\overline{E})\big)\Big)\wedge C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2),$$ it satisfies the differential equation \begin{multline*} d_\mathcal{D}\circ\mathbf{H}_{3,k}(\overline{E})=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\big({\rm tr}_1(\widetilde{H}_{h\circ f})\big)\wedge d_\mathcal{D}C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2)\\ =\mathbf{H}_{3,k-1}\circ d(\overline{E})+\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_1\big(h_*H(\overline{E},f_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big(h_*{\rm tr}_1\big(H(\overline{E},f_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2). \end{multline*}
Finally, we set $$\mathbf{H}_{4,k}(\overline{E}):=\frac{(-1)^{k+2}}{(k+2)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}C_{k+2}\bigg(T_g\Big(h,h^{{\rm tr}_1\big(H(\overline{E},f_*)\big)}\Big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2\bigg),$$ then it satisfies \begin{multline*} d_\mathcal{D}\circ\mathbf{H}_{4,k}(\overline{E})=\frac{(-1)^{k+2}}{(k+2)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}d_\mathcal{D}C_{k+2}\bigg(T_g\Big(h,h^{{\rm tr}_1\big(H(\overline{E},f_*)\big)}\Big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2\bigg)\\ =\mathbf{H}_{4,k-1}\circ d(\overline{E})+\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big(h_*{\rm tr}_1\big(H(\overline{E},f_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}\bigg(\frac{1}{(2\pi i)^{r_h}}\int_{Y_{\mu_n}}{\rm Td}_g(\overline{Th}){\rm ch}_g^0\Big({\rm tr}_1\big(H(\overline{E},f_*)\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ -\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(T_g\big(h,h^{{\rm tr}_k\circ\lambda(f_*\overline{E})}\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(T_g\big(h,h^{f_*{\rm tr}_k\circ\lambda(\overline{E})}\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ =\mathbf{H}_{4,k-1}\circ d(\overline{E})-{h_{\mu_n}}_*\circ\big({\rm Td}_g(\overline{Th})\bullet \Pi'^f_k(\overline{E})\big)
\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big(h_*{\rm tr}_1\big(H(\overline{E},f_*)\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ -\Pi''^h_k(f_*\overline{E})+\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(T_g\big(h,h^{f_*{\rm tr}_k\circ\lambda(\overline{E})}\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big) \end{multline*}
\begin{prop}\label{304} Let notations and assumptions be as above, then the chain homotopy $\Pi^l_k=\Pi'^l_k+\Pi''^l_k$ is homotopic to $\Pi^{(1)}_k+{h_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{Th})\bullet\Pi^f_k\big)+\Pi^h_k\circ f_*-\Pi^{(3')}_k$. \end{prop} \begin{proof} Let $\overline{E}$ be a hermitian $k$-cube in $\widehat{\mathcal{P}}(X,\mu_n)$ which is $f$-acyclic and $l$-acyclic. Using the above differential equations concerning $\mathbf{H}_{i,k}$, we obtain that \begin{multline*} (\mathbf{H}_{1,k-1}+\mathbf{H}_{2,k-1}-\mathbf{H}_{3,k-1}-\mathbf{H}_{4,k-1})\circ d(\overline{E})-d_\mathcal{D}\circ (\mathbf{H}_{1,k}+\mathbf{H}_{2,k}-\mathbf{H}_{3,k}-\mathbf{H}_{4,k})(\overline{E})\\ =\Pi'^l_k(\overline{E})-\Pi^{(1)}_k(\overline{E})-\Pi'^h_k(f_*\overline{E})-\Pi''^h_k(f_*\overline{E})-{h_{\mu_n}}_*\circ\big({\rm Td}_g(\overline{Th})\bullet \Pi'^f_k(\overline{E})\big)
\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_{h\circ f}\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(T_g\big(h,h^{f_*{\rm tr}_k\circ\lambda(\overline{E})}\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big) \end{multline*} On the other hand, according to Theorem~\ref{301}, we have \begin{multline*} \frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(T_g\big(l,h^{{\rm tr}_k\circ\lambda(\overline{E})}\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ -\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(T_g\big(h,h^{f_*{\rm tr}_k\circ\lambda(\overline{E})}\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)-{h_{\mu_n}}_*\circ\big({\rm Td}_g(\overline{Th})\bullet \Pi''^f_k(\overline{E})\big)
\\ =\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\Big(H_{h\circ f}\big({\rm tr}_k\circ\lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)-\Pi^{(3')}_k(\overline{E})
\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big(f,h,\omega^X,\omega^Y,{\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big). \end{multline*}
We then formally define a product $$C_{k+1}\Big(\Delta\big(f,h,\omega^X,\omega^Y,{\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)$$ in the same way as (\ref{NewC}), and we set $$\Delta_k(\overline{E})=\frac{(-1)^{k}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(\Delta\big(f,h,\omega^X,\omega^Y,{\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big).$$ It is readily checked by Lemma~\ref{deltac} that \begin{align*} &\Delta_{k-1}(d\overline{E})-d_\mathcal{D}\Delta_k(\overline{E})\\ =&\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big(f,h,\omega^X,\omega^Y,{\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big). \end{align*}
Combining all the above computations, we finally get \begin{multline*} (\mathbf{H}_{1,k-1}+\mathbf{H}_{2,k-1}-\mathbf{H}_{3,k-1}-\mathbf{H}_{4,k-1}+\Delta_{k-1})(d\overline{E})-d_\mathcal{D} (\mathbf{H}_{1,k}+\mathbf{H}_{2,k}-\mathbf{H}_{3,k}-\mathbf{H}_{4,k}+\Delta_k)(\overline{E})\\ =\Pi'^l_k(\overline{E})-\Pi^{(1)}_k(\overline{E})-\Pi'^h_k(f_*\overline{E})-\Pi''^h_k(f_*\overline{E})+\Pi^{(3')}_k(\overline{E})+\Pi''^l_k(\overline{E})
\\ -{h_{\mu_n}}_*\circ\big({\rm Td}_g(\overline{Th})\bullet \Pi'^f_k(\overline{E})\big)-{h_{\mu_n}}_*\circ\big({\rm Td}_g(\overline{Th})\bullet \Pi''^f_k(\overline{E})\big)\\ =\Pi^l_k(\overline{E})-\Big(\Pi^{(1)}_k(\overline{E})+{h_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{Th})\bullet\Pi^f_k(\overline{E})\big)+\Pi^h_k(f_*\overline{E})-\Pi^{(3')}_k(\overline{E})\Big)
\end{multline*} So we are done. \end{proof}
\begin{cor}\label{305} Let $f: X\to Y$, $h: Y\to Z$ and $l: X\to Z$ be three equivariant morphisms between regular $\mu_n$-projective schemes, which are all smooth over the generic fibres. Assume that $l=h\circ f$ and that the $\mu_n$-action on $Z$ is trivial. Then the direct image map $l_*$ is equal to the composition $h_*\circ f_*$ from $\widehat{K}_m(X,\mu_n)_{\mathbb Q}$ to $\widehat{K}_m(Z,\mu_n)_{\mathbb Q}$ for any $m\geq 1$. \end{cor}
\section{The Lefschetz-Riemann-Roch theorem}
\subsection{The statement} In order to formulate the Lefschetz-Riemann-Roch theorem for higher equivariant arithmetic K-groups, we need to introduce the equivariant $R$-genus due to Bismut. Let $X$ be a $\mu_n$-equivariant smooth algebraic variety over $\mathbb C$, and let $\overline{E}$ be a $\mu_n$-equivariant hermitian vector bundle on $X$. For $\zeta\in \mu_n(\mathbb C)$ and $s>1$, we consider the following Lerch zeta function $$L(\zeta,s)=\sum_{k=1}^\infty\frac{\zeta^k}{k^s}$$ and its meromorphic continuation to the whole complex plane. Define a formal power series in the variable $x$ as $$\widetilde{R}(\zeta,x):=\sum_{n=0}^\infty\big(\frac{\partial L}{\partial s}(\zeta,-n)+L(\zeta,-n)\sum_{j=1}^n\frac{1}{2j}\big)\frac{x^n}{n!}.$$
\begin{defn}\label{401} The Bismut's equivariant $R$-genus of an equivariant hermitian vector bundle $\overline{E}$ with $\overline{E}\mid_{X_{\mu_n}}=\sum_{\zeta\in \mu_n(\mathbb C)}\overline{E}_\zeta$ is defined as $$R_g(\overline{E}):=\sum_{\zeta\in \mu_n(\mathbb C)}\big({\rm Tr}\widetilde{R}(\zeta,-\Omega^{\overline{E}_\zeta})-{\rm Tr}\widetilde{R}(1/\zeta,\Omega^{\overline{E}_\zeta})\big),$$ where $\Omega^{\overline{E}_\zeta}$ is the curvature form associated to $\overline{E}_\zeta$. \end{defn}
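As a quick consistency check (included here only for orientation; it is not used in what follows and is stated only up to normalization conventions), consider the trivial element $\zeta=1$. Since $L(1,s)=\sum_{k\geq1}k^{-s}$ is the Riemann zeta function, the definition above specializes to $$\widetilde{R}(1,x)=\sum_{n=0}^\infty\Big(\zeta'(-n)+\zeta(-n)\sum_{j=1}^n\frac{1}{2j}\Big)\frac{x^n}{n!},$$ and in the combination ${\rm Tr}\widetilde{R}(1,-\Omega^{\overline{E}_1})-{\rm Tr}\widetilde{R}(1,\Omega^{\overline{E}_1})$ of Definition~\ref{401} only the odd-degree terms survive, which is compatible, up to normalization, with the non-equivariant $R$-genus of Gillet and Soul\'e.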
Now, let $X$ be a regular $\mu_n$-projective arithmetic scheme over an arithmetic ring $(D,\Sigma,F_\infty)$ and we construct a naive commutative diagram of homological complexes \begin{align}\label{genus} \xymatrix{ \widetilde{\mathbb Z}C_*(X,\mu_n) \ar[r]^-{{\rm ch}_g} \ar[d]^-{0} & \bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)_{R_n} \ar[d]^-{0} \\ \widetilde{\mathbb Z}C_*(X,\mu_n) \ar[r]^-{{\rm ch}_g} & \bigoplus_{p\geq0}D^{2p-*}(X_{\mu_n},p)_{R_n}} \end{align} where $0$ stands for the zero map. Let $\overline{N}$ be a $\mu_n$-equivariant hermitian vector bundle on $X$, we shall formally regard the $R$-genus $R_g(\overline{N})$ as an element in $\bigoplus_{p\geq0}D^{2p-1}(X,p)$. It is a $d$-closed form. Denote by $p_0$ the projection from $X\times (\mathbb{P}^1)^\cdot$ to $X$. For any hermitian $k$-cube $\overline{E}$ in $\widehat{\mathcal{P}}(X,\mu_n)$, we set $$\Pi_R(\overline{E})=\frac{(-1)^{k}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(R_g(p_0^*\overline{N}){\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big).$$ It is clear that $\Pi_R(\overline{E})$ extends to be a map $\Pi_R: \widetilde{\mathbb Z}C_k(X,\mu_n)\to \bigoplus_{p\geq0}D^{2p-k-1}(X_{\mu_n},p)_{R_n}$ which provides a chain homotopy of the square (\ref{genus}). Therefore, we get an endomorphism of $\widehat{K}_m(X,\mu_n)$ for any $m\geq1$. This endomorphism will be denoted by $\otimes R_g(\overline{N})$.
Again, by \cite[Remark 2.4, Lemma 2.5]{T3}, the chain homotopy $\Pi_R$ is homotopic to the chain homotopy $\Pi'_R$ defined by $$\Pi'_R(\overline{E})=\frac{(-1)^{k+1}}{2k!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}R_g(p_0^*\overline{N})\bullet{\rm ch}_g^0\big({\rm tr}_k\circ\lambda(\overline{E})\big)\bullet C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2),$$ and hence is homotopic to $-R_g(\overline{N})\bullet{\rm ch}_g(\overline{E})$ by the projection formula. Let $(x,\alpha)$ be an element in $\widehat{K}_m(X,\mu_n)_\mathbb Q$, then $dx=0$ and ${\rm ch}_g(x)$ is a $d_\mathcal{D}$-closed form. Let $(0,\alpha)$ and $(0,\alpha')$ be two elements in $\widehat{K}_m(X,\mu_n)_\mathbb Q$, then $(0,\alpha)=(0,\alpha')$ if $\alpha$ and $\alpha'$ have the same cohomology class in $\bigoplus_{p\geq0}H_\mathcal{D}^*\big(X_{\mu_n},\mathbb{R}(p)\big)_{R_n}$. Notice that the product $\bullet$ on the Deligne-Beilinson complex induces the product on the real Deligne-Beilinson cohomology. Then, modulo torsion, the endomorphism $\otimes R_g(\overline{N})$ is independent of the choice of the metric on $N$ and it can be written as $\otimes R_g(N)$.
Assume that $\rho$ is any prime ideal in $R(\mu_n):=K_0({\rm Spec}\mathbb{Z},\mu_n)\cong \mathbb Z[T]/{(1-T^n)}$ which doesn't contain the elements $1-T^k$ for $k=1,\ldots,n-1$. For instance, $\rho$ can be chosen to be the kernel of the natural morphism $\mathbb Z[T]/{(1-T^n)}\to \mathbb Z[T]/{(\Phi_n)}$ where $\Phi_n$ stands for the $n$-th cyclotomic polynomial. Let $X_{\mu_n}$ be the fixed point subscheme of $X$, and let $\overline{N}_{X/{X_{\mu_n}}}$ be the normal bundle of $X_{\mu_n}$ in $X$ with some $\mu_n$-invariant hermitian metric. We set $$\Lambda_R:=\big({\rm Id}-\otimes R_g(N_{X/{X_{\mu_n}}})\big)\circ\otimes \lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}}^\vee),$$ it is a well-defined endomorphism of $\widehat{K}_m(X_{\mu_n},\mu_n)_\rho\otimes\mathbb Q$. Then the arithmetic Lefschetz-Riemann-Roch theorem for higher equivariant arithmetic K-groups can be formulated as follows.
\begin{thm}\label{402}(arithmetic Lefschetz-Riemann-Roch) Let $f: X\to Y$ be an equivariant morphism between two regular $\mu_n$-projective arithmetic schemes, which is smooth over the generic fibre. Suppose that the $\mu_n$-action on the base $Y$ is trivial. Then, for any $m\geq1$, the following diagram \begin{align*} \xymatrix{ \widehat{K}_m(X,\mu_n) \ar[rr]^-{\Lambda_R\circ \tau} \ar[d]_{f_*} && \widehat{K}_m(X_{\mu_n},\mu_n)_\rho\otimes \mathbb Q \ar[d]^{{f_{\mu_n}}_*} \\ \widehat{K}_m(Y,\mu_n) \ar[rr]^-{\tau} && \widehat{K}_m(Y,\mu_n)_\rho\otimes\mathbb Q} \end{align*} where $\tau$ is the restriction map, is commutative. \end{thm}
The proof of Theorem~\ref{402} will be given in the next two subsections.
\subsection{Arithmetic K-theoretic form of Bismut-Ma's immersion formula} Let $Y\hookrightarrow X$ be a $\mu_n$-equivariant closed immersion of regular $\mu_n$-projective arithmetic schemes over $(D,\Sigma,F_{\infty})$. In \cite[Section 4]{T3}, we have proved an arithmetic purity theorem $$\widehat{K}_m(Y,\mu_n)\cong \widehat{K}_{Y,m}(X,\mu_n)$$ for any integer $m\geq1$. As a byproduct, we get an embedding morphism $\widehat{K}_m(Y,\mu_n)\to \widehat{K}_m(X,\mu_n)$. This embedding morphism is realized by constructing an explicit chain homotopy of the square \begin{align}\label{bcc} \xymatrix{ \widetilde{\mathbb Z}C_*(Y,\mu_n) \ar[r]^-{{\rm ch}_g} \ar[d]^-{i_*} & \bigoplus_{p\geq0}{'D}^{2p-*}(Y_{\mu_n},p)_{R_n} \ar[d]^-{{i_{\mu_n}}_!\circ {\rm Td}_g^{-1}(\overline{N}_{X/Y})\bullet(\cdot)} \\ \widetilde{\mathbb Z}C_*(P,\mu_n) \ar[r]^-{{\rm ch}_g} & \bigoplus_{p\geq0}{'D}^{2p-*}(P_{\mu_n},p)_{R_n},} \end{align} where ${'D}^{2p-*}(\cdot,p)$ stands for the Deligne complex of currents computing the Deligne homology groups, $({i_{\mu_n}}_!T)(\eta)=T(i_{\mu_n}^*\eta)$ for a current $T$ and a test form $\eta$, $i: Y\hookrightarrow P:=\mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y)$ is the associated zero section embedding with projection $\pi: P\to Y$ and $$i_*: \widetilde{\mathbb Z}C_*(Y,\mu_n)\to \widetilde{\mathbb Z}C_*(P,\mu_n)$$ is the complex morphism defined by sending a hermitian cube $\overline{E}$ to $\sum_{j=0}^n(-1)^j\wedge^j\overline{Q}^\vee\otimes \pi^*\overline{E}$, given the Koszul resolution $$K(\overline{E},\overline{N}_{X/Y}):\quad 0\to \wedge^n\overline{Q}^\vee\otimes \pi^*\overline{E} \to \cdots \to \wedge\overline{Q}^\vee\otimes \pi^*\overline{E}\to \pi^*\overline{E}\to i_*\overline{E}\to 0.$$ For any hermitian $k$-cube $\overline{E}$, one chain homotopy $\mathbf{H}_k(\overline{E})$ of (\ref{bcc}) is given by the formula $$\mathbf{H}_k(\overline{E})=T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)\bullet{\rm ch}_g(\pi^*\overline{E})$$ where $T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)$ is the equivariant Bott-Chern singular current associated to the Koszul resolution, which satisfies $$d_\mathcal{D}T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)=\sum_{j=0}^n(-1)^j{\rm ch}_g(\wedge^j\overline{Q}^\vee)-{i_{\mu_n}}_!\big({\rm ch}_g(\overline{\mathcal{O}}_Y){\rm Td}_g^{-1}(\overline{N}_{X/Y})\big).$$ For more details the reader is referred to \cite[Section 4.2]{T3}.
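To fix ideas (this special case is recorded only as an illustration of the shape of the construction), suppose that $N_{X/Y}$ has rank one. Then $P=\mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y)$ is a $\mathbb{P}^1$-bundle over $Y$, the bundle $Q$ appearing in the Koszul resolution is a line bundle, and the resolution reduces to the two-term complex $$0\to \overline{Q}^\vee\otimes\pi^*\overline{E}\to \pi^*\overline{E}\to i_*\overline{E}\to 0,$$ which has the same shape as the resolutions $0\to\overline{\mathcal{O}}(-X)\to\overline{\mathcal{O}}_W\to {u_t}_*\overline{\mathcal{O}}_X\to 0$ used for divisors later in this subsection.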
It is clear that if we choose another resolution $$0\to \overline{F}_n \to \cdots\to \overline{F}_1\to \overline{F}_0\to i_*\overline{O}_Y\to 0$$ with respect to the zero section embedding $i: Y\hookrightarrow \mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y)$ such that the metrics on $F.$ satisfy the Bismut's assumption (A), we may construct a different homotopy of (\ref{bcc}) and we shall get a different embedding morphism $i_*: \widehat{K}_m(Y,\mu_n)\to \widehat{K}_m(P,\mu_n)$. Our first result in this subsection is the following.
\begin{prop}\label{403} The embedding morphism over rational arithmetic K-groups $$i_*: \widehat{K}_m(Y,\mu_n)_{\mathbb Q}\to \widehat{K}_m(P,\mu_n)_{\mathbb Q}$$ is independent of the choice of the resolution of $i_*\overline{\mathcal{O}}_Y$ on $\mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y)$ which satisfies the Bismut's assumption (A). \end{prop} \begin{proof} Since any two resolutions of $i_*\overline{\mathcal{O}}_Y$ on $\mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y)$ are dominated by a third one, we may assume that $\overline{F}.$ and $\wedge^.\overline{Q}^\vee$ fit into the following diagram $$\xymatrix{ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\ 0 \ar[r] & \overline{A}_n \ar[r] \ar[d] & \overline{F}_n \ar[r] \ar[d] & \wedge^n\overline{Q}^\vee \ar[r] \ar[d] & 0\\
& \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\ 0 \ar[r] & \overline{A}_1 \ar[r] \ar[d] & \overline{F}_1 \ar[r] \ar[d] & \wedge\overline{Q}^\vee \ar[r] \ar[d] & 0\\ 0 \ar[r] & \overline{A}_0 \ar[r] \ar[d] & \overline{F}_0 \ar[r] \ar[d] & \overline{\mathcal{O}}_P \ar[r] \ar[d] & 0\\
& 0 \ar[r] & i_*\overline{\mathcal{O}}_Y \ar[r] & i_*\overline{\mathcal{O}}_Y & }$$ where $\overline{A}.$ is an exact sequence of hermitian vector bundles on $P$. We endow $A.$ with the metrics coming from $\overline{F}.$ via the natural inclusion. We split $\overline{A}.$ into a family of short exact sequences of hermitian bundles from $j=1$ to $n-1$ $$\xymatrix{\chi_j: \quad 0 \ar[r] & \Ker d_j \ar[r] & \overline{A}_j \ar[r]^-{d_j} & \Ker d_{j-1} \ar[r] & 0}.$$ Moreover, we denote by $\varepsilon_j$ the short exact sequence $$\xymatrix{0 \ar[r] & \overline{A}_j \ar[r] & \overline{F}_j \ar[r] & \wedge^j\overline{Q}^\vee \ar[r] & 0}$$ from $j=0$ to $n$. Write $i_*$ (resp. $i'_*$) for the morphism $\widetilde{\mathbb Z}C_*(Y,\mu_n)\to \widetilde{\mathbb Z}C_*(P,\mu_n)$ with respect to the Koszul resolution $K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})$ (resp. the resolution $\overline{F}.$). Then, for any hermitian $k$-cube $\overline{E}$ on $Y$, the assignment $$H_i(\overline{E}):=\sum_{j=0}^n(-1)^j\varepsilon_j\otimes \pi^*\overline{E}+\sum_{j=1}^{n-1}(-1)^j\chi_j\otimes\pi^*\overline{E}\in \widetilde{\mathbb Z}C_{k+1}(P,\mu_n)$$ provides a chain homotopy between $i'_*$ and $i_*$. Consequently, the formula $$\mathbf{H}^{(1)}_k(\overline{E})=\big(\sum_{j=0}^n(-1)^j{\rm ch}_g(\varepsilon_j)+\sum_{j=1}^{n-1}(-1)^j{\rm ch}_g(\chi_j)\big){\rm ch}_g(\pi^*\overline{E})$$ defines a chain homotopy between ${\rm ch}_g\circ i'_*$ and ${\rm ch}_g\circ i_*$. We claim that there exists a homotopy of chain homotopies between $\mathbf{H}'_k(\overline{E})$ and $\mathbf{H}^{(1)}_k(\overline{E})+\mathbf{H}_k(\overline{E})$.
In fact, according to \cite[Theorem 3.14, Corollary 3.10]{KR1}, we have $$-\sum_{j=1}^{n-1}(-1)^j{\rm ch}_g(\chi_j)+T_g(\overline{F}.)-T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)=\sum_{j=0}^n(-1)^j{\rm ch}_g(\varepsilon_j)$$ up to ${\operatorname{Im} d_\mathcal{D}}$. We fix an element $\Delta$ such that $$d_\mathcal{D}\Delta=\sum_{j=0}^n(-1)^j{\rm ch}_g(\varepsilon_j)+\sum_{j=1}^{n-1}(-1)^j{\rm ch}_g(\chi_j)-T_g(\overline{F}.)+T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)$$ and set $$\widetilde{\mathbf{H}}_k(\overline{E}):=\Delta\bullet{\rm ch}_g(\pi^*\overline{E}).$$ Then $$d_\mathcal{D}\circ \widetilde{\mathbf{H}}_k(\overline{E})=\mathbf{H}^{(1)}_k(\overline{E})+\mathbf{H}_k(\overline{E})-\mathbf{H}'_k(\overline{E})+\widetilde{\mathbf{H}}_{k-1}\circ d(\overline{E}).$$ So we are done. \end{proof}
Notice that the product $P\times (\mathbb{P}^1)^\cdot$ can be identified with the projective space bundle over $Y\times (\mathbb{P}^1)^\cdot$ with respect to the vector bundle $p_0^*N_{X/Y}$, and $$0\to p_0^*\wedge^n\overline{Q}^\vee \to \cdots \to p_0^*\wedge\overline{Q}^\vee\to \overline{\mathcal{O}}_{P\times (\mathbb{P}^1)^\cdot}\to i_*\overline{\mathcal{O}}_{Y\times (\mathbb{P}^1)^\cdot}\to 0$$ is the Koszul resolution so that the corresponding Bott-Chern singular current is the pullback $p_0^*T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)$. We shall still write it as $T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)$ for the sake of simplicity. Then, like before, by the projection formula and \cite[Remark 2.4, Lemma 2.5]{T3}, $\mathbf{H}_k(\overline{E})$ is homotopic to the following chain homotopy $$\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(T_g\big(K(\overline{\mathcal{O}}_Y,\overline{N}_{X/Y})\big)\bullet {\rm ch}_g^0\big({\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big),$$ which will still be denoted by $\mathbf{H}_k(\overline{E})$.
Now, let us recall the Bismut-Ma's immersion formula which relates analytic torsion forms and the Bott-Chern singular current. Let $X$ be a smooth $\mu_n$-equivariant algebraic variety over $\mathbb C$ and let $i: Y\hookrightarrow X$ be an equivariant closed smooth subvariety. Let $S$ be a smooth algebraic variety with trivial $\mu_n$-action, and let $f: Y\rightarrow S$, $l: X\rightarrow S$ be two equivariant proper smooth morphisms such that $f=l\circ i$. Assume that $\overline{\eta}$ is an equivariant hermitian bundle on $Y$ and $\overline{\xi}.$ is a complex of equivariant hermitian bundles on $X$ which provides a resolution of $i_*\overline{\eta}$ such that the metrics on $\xi.$ satisfy the Bismut's assumption (A). Let $\omega^Y$, $\omega^X$ be two K\"{a}hler fibrations on $f$ and on $l$ respectively. We shall assume that $\omega^Y$ is the pull-back of $\omega^X$ so that the K\"{a}hler metric on $Y$ is induced by the K\"{a}hler metric on $X$. Consider the following exact sequence $$\overline{\mathcal{N}}:\quad 0\to \overline{Tf}\to \overline{Tl}\mid_Y\to \overline{N}_{X/Y}\to 0$$ where $N_{X/Y}$ is endowed with the quotient metric. Denote by ${\rm Td}_g(\overline{\mathcal{N}})=\Phi^{-1}\big(\frac{\widetilde{{\rm Td}}_g(\overline{\mathcal{N}})}{2}\big)$ (see Section 5.3 in the Appendix) the equivariant secondary Todd form of $\overline{\mathcal{N}}$ which satisfies the identity $$d_\mathcal{D}{\rm Td}_g(\overline{\mathcal{N}})={\rm Td}_g(Tl\mid_Y,h^{Tl})-{\rm Td}_g(Tf,h^{Tf}){\rm Td}_g(\overline{N}_{X/Y}).$$
We suppose that in the resolution $\xi.$, $\xi_j$ are all $l-$acyclic and moreover $\eta$ is $f-$acyclic. Denote by $h^{H(\xi.)}$ the hermitian metric on $f_*\eta$ corresponding to the $L^2$-metric on the hypercohomology of $\xi.$ over the fibre of $l: X\rightarrow S$ (see Section 5.3 in the Appendix). By an easy argument of long exact sequence, we have the following exact sequence of hermitian vector bundles on $S$ $$\overline{\Xi}:\quad 0\to l_*(\overline{\xi}_m)\to l_*(\overline{\xi}_{m-1})\to\ldots\to l_*(\overline{\xi}_0)\to \big(f_*\eta, h^{H(\xi.)}\big) \to 0.$$ We may split $\Xi.$ into a family of short exact sequence of hermitian bundles from $j=1$ to $m$ $$\xymatrix{\chi_j: \quad 0 \ar[r] & \Ker d_j \ar[r] & \overline{\Xi}_j \ar[r]^-{d_j} & \Ker d_{j-1} \ar[r] & 0}$$ such that the kernel of every map $d_{j-1}$ for $j=2,\ldots,m$ carries the metric induced by $\overline{\Xi}_j$ and $\Ker d_0=\overline{\Xi}_0=\big(f_*\eta, h^{H(\xi.)}\big), \Ker d_m=\overline{\Xi}_{m+1}=l_*(\overline{\xi}_m)$. We regard $\chi_j$ as a hermitian $1$-cube on $S$ and we set ${\rm ch}_g(\overline{\Xi}.)=\sum_{j=1}^m(-1)^j{\rm ch}_g(\chi_j)$. Then it satisfies the differential equation $$d_\mathcal{D}{\rm ch}_g(\overline{\Xi}.)={\rm ch}_g\big(f_*\eta, h^{H(\xi.)}\big)-\sum_{j=0}^m{\rm ch}_g\big(l_*(\overline{\xi}_j)\big).$$ Set ${\rm ch}_g(\overline{\Xi}., f_*\overline{\eta}):={\rm ch}_g(\overline{\Xi}.)+{\rm ch}_g\big(f_*\eta, h^{H(\xi.)}, f_*h^\eta\big)$, it satisfies the differential equation $$d_\mathcal{D}{\rm ch}_g(\overline{\Xi}., f_*\overline{\eta})={\rm ch}_g(f_*\overline{\eta})-\sum_{j=0}^m{\rm ch}_g\big(l_*(\overline{\xi}_j)\big).$$
With some abuse of notations, we still use $\overline{\Xi}$ to denote the long exact sequence $$0\to l_*(\overline{\xi}_m)\to l_*(\overline{\xi}_{m-1})\to\ldots\to l_*(\overline{\xi}_0)\to f_*\overline{\eta} \to 0$$ and identify ${\rm ch}_g(\overline{\Xi}.)$ with ${\rm ch}_g(\overline{\Xi}., f_*\overline{\eta})$.
\begin{thm}\label{404}(Immersion formula) Let notations and assumptions be as above. Then the following identity holds in $\bigoplus_{p\geq0}\big(D^{2p-1}(S,p)/{\operatorname{Im} d_\mathcal{D}}\big)$. \begin{multline*} \sum_{i=0}^m(-1)^iT_g(\omega^X,h^{\xi_i})-T_g(\omega^Y,h^\eta)+{\rm ch}_g(\overline{\Xi}.)\\=-\frac{1}{(2\pi i)^{r_l}}\int_{X_{\mu_n}/S}{\rm Td}_g(\overline{Tl})T_g(\overline{\xi}.)-\frac{1}{(2\pi i)^{r_f}}\int_{Y_{\mu_n}/S}{\rm Td}_g(\overline{\mathcal{N}}){{\rm Td}_g^{-1}(\overline{N}_{X/Y})}{\rm ch}_g(\overline{\eta})
\\ +\frac{1}{(2\pi i)^{r_f}}\int_{Y_{\mu_n}/S}{\rm Td}_g(\overline{Tf})R_g(\overline{N}_{X/Y}){\rm ch}_g(\overline{\eta}) \end{multline*} where $r_f$ and $r_l$ are the relative dimensions of $Y_{\mu_n}/S$ and of $X_{\mu_n}/S$ respectively. \end{thm} \begin{proof} This is a translation of \cite[Theorem 0.1 and 0.2]{BM} (see also Theorem~\ref{A30} in the Appendix). \end{proof}
With the same notations as in Remark~\ref{explain_tt} and Theorem~\ref{A3} in the Appendix, we set $$\Delta(f,l,i_*\overline{\eta},\overline{\xi}.):=-\Phi^{-1}\big(\frac{\Delta^0(f,l,i_*\overline{\eta},\overline{\xi}.)+\Delta_0(f,l,i_*\overline{\eta},\overline{\xi}.)}{2}\big)-\Delta(\overline{\Xi}.).$$ Then $d_\mathcal{D}\Delta(f,l,i_*\overline{\eta},\overline{\xi}.)$ measures the difference \begin{multline*} \sum_{i=0}^m(-1)^iT_g(\omega^X,h^{\xi_i})-T_g(\omega^Y,h^\eta)+{\rm ch}_g(\overline{\Xi}.)+\frac{1}{(2\pi i)^{r_l}}\int_{X_{\mu_n}/S}{\rm Td}_g(\overline{Tl})T_g(\overline{\xi}.)\\ +\frac{1}{(2\pi i)^{r_f}}\int_{Y_{\mu_n}/S}{\rm Td}_g(\overline{\mathcal{N}}){{\rm Td}_g^{-1}(\overline{N}_{X/Y})}{\rm ch}_g(\overline{\eta})-\frac{1}{(2\pi i)^{r_f}}\int_{Y_{\mu_n}/S}{\rm Td}_g(\overline{Tf})R_g(\overline{N}_{X/Y}){\rm ch}_g(\overline{\eta}) \end{multline*} in Theorem~\ref{404}. Let us go back to the same situation described before Lemma~\ref{delta} and assume that the following diagrams $$\xymatrix{Y\times Z \ar[rd]_-{f_Z} \ar[rr]^-{i_Z} && X\times Z \ar[ld]^-{l_Z} \\ & S\times Z & }\quad \text{and}\quad \xymatrix{Y\times Z_1 \ar[rd]_-{f_{Z_1}} \ar[rr]^-{i_{Z_1}} && X\times Z_1 \ar[ld]^-{l_{Z_1}} \\ & S\times Z_1 & }$$ are obtained by smooth base changes. Then $Y\times Z$ and $X\times Z_1$ intersect transversely along $Y\times Z_1$ and the singular currents can be pulled back.
\begin{lem}\label{deltai} The restriction of $\Delta(f_Z,l_Z,{i_{Z}}_*\overline{\eta},\overline{\xi}.)$ over $S\times Z_1$ is equal to the differential form $\Delta(f_{Z_1},l_{Z_1},{i_{Z_1}}_*\overline{\eta}\mid_{Y\times Z_1},\overline{\xi}.\mid_{X\times Z_1})$. \end{lem} \begin{proof} This is a consequence of Theorem~\ref{A3} in the Appendix. \end{proof}
\begin{prop}\label{405} Let $Y$ be a regular $\mu_n$-projective arithmetic scheme over $(D,\Sigma,F_\infty)$ and let $\overline{N}$ be a $\mu_n$-equivariant hermitian vector bundle on $Y$. Suppose that the $\mu_n$-action on $Y$ is trivial and consider the zero section embedding $$i: Y\hookrightarrow P:=\mathbb{P}(N\oplus \mathcal{O}_Y)$$ with hermitian normal bundle $\overline{N}$ and the natural projection $\pi: P\to Y$. Then for any element $x\in \widehat{K}_m(Y,\mu_n)_\mathbb Q$ with integer $m\geq1$, the following identity $$x-R_g(N)\cdot x=\pi_*i_*(x)$$ holds in $\widehat{K}_m(Y,\mu_n)_\mathbb Q$. \end{prop} \begin{proof} By the definition of the action of $R_g(N)$ on $\widehat{K}_m(Y,\mu_n)_\mathbb Q$, the map $x\mapsto x-R_g(N)\cdot x$ is defined via the chain homotopy $$\Pi^{(0)}_k(\overline{E})=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(R_g(\overline{N})\bullet {\rm ch}_g^0\big({\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)$$ of the square \begin{align*} \xymatrix{ \widetilde{\mathbb Z}C_*(Y,\mu_n) \ar[r]^-{{\rm ch}_g} \ar[d]^-{{\rm Id}} & \bigoplus_{p\geq0}{D}^{2p-*}(Y_{\mu_n},p)_{R_n} \ar[d]^-{{\rm Id}} \\ \widetilde{\mathbb Z}C_*(Y,\mu_n) \ar[r]^-{{\rm ch}_g} & \bigoplus_{p\geq0}{D}^{2p-*}(Y_{\mu_n},p)_{R_n}.} \end{align*}
According to Proposition~\ref{403}, to define the morphism $i_*: \widehat{K}_m(Y,\mu_n)_\mathbb Q\to \widehat{K}_m(P,\mu_n)_\mathbb Q$, we may choose a resolution $F.$ of $i_*\mathcal{O}_Y$ on $P$ such that every $F_j$ is $\pi$-acyclic. We shall endow $F.$ with the metrics satisfying the Bismut's assumption (A). Then we have an exact sequence of hermitian bundles on $Y$ $$\overline{\Xi}:\quad 0\to \pi_*(\overline{F}_m)\to \pi_*(\overline{F}_{m-1})\to\ldots\to \pi_*(\overline{F}_0)\to \overline{\mathcal{O}}_Y\to 0.$$ Like before, splitting $\overline{\Xi}$ into a family of short exact sequence of hermitian bundles from $j=1$ to $m$ $$\xymatrix{\chi_j: \quad 0 \ar[r] & \Ker d_j \ar[r] & \overline{\Xi}_j \ar[r]^-{d_j} & \Ker d_{j-1} \ar[r] & 0,}$$ we may construct a chain homotopy $$H_{\pi\circ i}(\overline{E}):=\sum_{j=1}^{m}(-1)^j\chi_j\otimes\overline{E}\in \widetilde{\mathbb Z}C_{k+1}(Y,\mu_n)$$ between the maps ${\rm Id}$ and $\pi_*\circ i_*: \widetilde{\mathbb Z}C_{*}(Y,\mu_n)\to \widetilde{\mathbb Z}C_{*}(Y,\mu_n)$. Consequently, the formula $$\mathbf{H}^{(1)}_k(\overline{E})=\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_{k+1}\circ\lambda\big(H_{\pi\circ i}(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)$$ defines a chain homotopy between ${\rm ch}_g\circ {\rm Id}$ and ${\rm ch}_g\circ \pi_*\circ i_*$. Then $\mathbf{H}^{(1)}_k+\Pi_k^\pi\circ i_*+{\pi_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{T\pi})\bullet \mathbf{H}_k\big)$ also defines a chain homotopy between ${\rm ch}_g\circ {\rm Id}$ and ${\rm Id}\circ {\rm ch}_g$. We compare it with $\Pi^{(0)}_k$.
Firstly, denote by ${\rm Pr}_P$ (resp. ${\rm Pr}_Y$) the projection from $P\times (\mathbb{P}^1)^k$ (resp. $Y\times (\mathbb{P}^1)^k$) to $P$ (resp. $Y$). Then, according to the functoriality of projective space bundle construction we have used before, ${\rm Pr}_P^*\overline{F}.$ provides a resolution of $i_*\overline{\mathcal{O}}_{Y\times (\mathbb{P}^1)^k}$ on $P\times (\mathbb{P}^1)^k$. Hence we have an exact sequence $$\overline{\Xi}':\quad 0\to \pi_*({\rm Pr}_P^*\overline{F}_m)\to \pi_*({\rm Pr}_P^*\overline{F}_{m-1})\to\ldots\to \pi_*({\rm Pr}_P^*\overline{F}_0)\to \overline{\mathcal{O}}_{Y\times (\mathbb{P}^1)^k}\to 0$$ which can be split into a family of short exact sequence of hermitian bundles from $j=1$ to $m$ $$\xymatrix{\chi'_j: \quad 0 \ar[r] & \Ker d_j \ar[r] & \overline{\Xi}'_j \ar[r]^-{d_j} & \Ker d_{j-1} \ar[r] & 0.}$$ Furthermore, the short exact sequence of hermitian $1$-cube $$H^{(j)}(\overline{E}): \xymatrix{0 \ar[r] & \chi'_j\otimes {\rm tr}_k\circ \lambda(\overline{E}) \ar[r]^-{\rm Id} & {\rm Pr}_Y^*\chi_j\otimes {\rm tr}_k\circ \lambda(\overline{E}) \ar[r] & 0 \ar[r] & 0}$$ forms a hermitian $2$-cube on $Y\times (\mathbb{P}^1)^k$. We set \begin{align*} \widetilde{\mathbf{H}}_k(\overline{E}):=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}&\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big(\sum_{j=1}^m(-1)^j{\rm tr}_2\circ \lambda\big(H^{(j)}(\overline{E})\big)\Big)\wedge C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2), \end{align*} it satisfies the differential equation \begin{multline*} d_\mathcal{D}\circ \widetilde{\mathbf{H}}_k(\overline{E})=\widetilde{\mathbf{H}}_{k-1}\circ d\overline{E}+\sum_{j=0}^m(-1)^j\Pi'^\pi_k(\overline{F}_j\otimes\pi^*\overline{E})\\ +\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_{k+1}\circ\lambda\big(H_{\pi\circ i}(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\big(\sum_{j=1}^m(-1)^j{\rm tr}_1\circ\lambda(\chi'_j)\boxtimes {\rm tr}_k\circ \lambda(\overline{E})\big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ =\widetilde{\mathbf{H}}_{k-1}\circ d\overline{E}+\mathbf{H}^{(1)}_k(\overline{E})+\Pi'^\pi_k\circ i_*(\overline{E})
\\ -\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big({\rm ch}_g\big(\overline{\Xi}'\otimes {\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big). \end{multline*} On the other hand, we apply the immersion formula to the resolution ${\rm Pr}_P^*\overline{F}.\otimes {\rm tr}_k\circ \lambda(\overline{E})$. We then have \begin{multline*} \Pi''^\pi_k\circ i_*(\overline{E})=-\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big({\rm ch}_g\big(\overline{\Xi}'\otimes {\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ -\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(\frac{1}{(2\pi i)^{r_\pi}}\int_{P_{\mu_n}/Y}{\rm Td}_g(\overline{T\pi})T_g\big({\rm Pr}_P^*\overline{F}.\otimes {\rm tr}_k\circ \lambda(\overline{E})\big), \log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)
\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(R_g(\overline{N})\bullet {\rm ch}_g^0\big({\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big({\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)\\ =-\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big({\rm ch}_g\big(\overline{\Xi}'\otimes {\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)-{\pi_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{T\pi})\bullet \mathbf{H}_k(\overline{E})\big)
\\ +\Pi^{(0)}_k(\overline{E})+\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big({\rm tr}_k\circ \lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big). \end{multline*}
We then formally define a product $C_{k+1}\Big(\Delta\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)$ in the same way as (\ref{NewC}), and we set $$\Delta_k(\overline{E})=\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(\Delta\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big).$$ Again, it is readily checked by Lemma~\ref{deltai} that \begin{align*} &\Delta_{k-1}(d\overline{E})-d_\mathcal{D}\Delta_k(\overline{E})\\ =&\frac{(-1)^{k}}{(k+1)!(2\pi i)^k}\int_{(\mathbb{P}^1)^k}C_{k+1}\Big(d_\mathcal{D}\Delta\big({\rm tr}_k\circ\lambda(\overline{E})\big),\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big). \end{align*}
Getting together all the above discussions, we see that $\widetilde{\mathbf{H}}_k+\Delta_k$ provides a homotopy between $\Pi^{(0)}_k$ and $\mathbf{H}^{(1)}_k+\Pi_k^\pi\circ i_*+{\pi_{\mu_n}}_*\circ \big({\rm Td}_g(\overline{T\pi})\bullet \mathbf{H}_k\big)$ which implies that $x-R_g(N)\cdot x=\pi_*i_*(x)$ for any element $x\in \widehat{K}_m(Y,\mu_n)_\mathbb Q$ with integer $m\geq1$. \end{proof}
\begin{cor}\label{406} Let $S$ be another regular $\mu_n$-projective arithmetic scheme with the trivial $\mu_n$-action. Let $f: Y\to S$ and $l=f\circ \pi: P\to S$ be two equivariant morphisms which are smooth over the generic fibres. Then the identity $$f_*(x)-f_*(R_g(N)\cdot x)=l_*\circ i_*(x)$$ holds in $\widehat{K}_m(S,\mu_n)_\mathbb Q$ for any element $x\in \widehat{K}_m(Y,\mu_n)$. \end{cor} \begin{proof} This is an immediate consequence of Proposition~\ref{405} and Corollary~\ref{305}. \end{proof}
Now, we consider the general situation. Let $X,S$ be two regular $\mu_n$-projective arithmetic schemes over $(D,\Sigma,F_\infty)$, and let $Y$ be a regular $\mu_n$-equivariant arithmetic closed subscheme of $X$ with immersion $i: Y\to X$. Let $l: X\to S$ and $f=l\circ i: Y\to S$ be two equivariant morphisms which are smooth over the generic fibres. We shall suppose that the $\mu_n$-actions on $Y$ and on $S$ are trivial (e.g. $Y=X_{\mu_n},S={\rm Spec}D$). Then the main result in this subsection is the following.
\begin{thm}\label{407} For any element $x\in \widehat{K}_m(Y,\mu_n)$ with integer $m\geq 1$, the identity $$f_*(x)-f_*(R_g(N_{X/Y})\cdot x)=l_*\circ i_*(x)$$ holds in $\widehat{K}_m(S,\mu_n)_\mathbb Q$. \end{thm}
To prove Theorem~\ref{407}, we use the deformation to the normal cone construction. Denote by $W$ the blowing up of $X\times \mathbb{P}^1$ along $Y\times \{0\}$, and denote by $q_W: W\to \mathbb{P}^1$ the composition of the blow-down map $W\to X\times \mathbb{P}^1$ with the projection $X\times \mathbb{P}^1\to \mathbb{P}^1$. For any point $t\in \mathbb{A}^1\subset \mathbb{P}^1$, $t$ is called a $\mathbb{Z}$-point if it corresponds to a prime ideal $(x-a)$ in $D[x]$ with $a\in \mathbb{Z}$. Then for any $\mathbb{Z}$-point $t\in \mathbb{P}^1$ we have $$q_W^{-1}(t)\cong\left\{ \begin{array}{ll}
X\times \{t\}, & \hbox{if $t\neq0$,} \\
P\cup \widetilde{X}, & \hbox{if $t=0$,} \\ \end{array} \right.$$ where $\widetilde{X}$ is isomorphic to the blowing up of $X$ along $Y$ and $P$ is the projective space bundle $\mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y)$. Let $j: Y\times \mathbb{P}^1\rightarrow W$ be the closed immersion induced by $i\times {\rm Id}$, then the component $\widetilde{X}$ doesn't meet $j(Y\times \mathbb{P}^1)$ and the intersection of $j(Y\times \mathbb{P}^1)$ with $P$ is exactly the image of $Y$ under the zero section embedding. Moreover, denote by $s_t$ the obvious section $Y\cong Y\times\{t\}\hookrightarrow Y\times \mathbb{P}^1$ for every $\mathbb{Z}$-point $t$ and denote by $u_t$ the natural inclusion $q_W^{-1}(t)\hookrightarrow W$. We have two ${\rm Tor}$-independent squares $$\xymatrix{ Y\times \mathbb{P}^1 \ar[r]^-{j} & W \\ Y \ar[u]^-{s_t} \ar[r]^-i & X \ar[u]^-{u_t}}$$ with $t\neq0$ and $$\xymatrix{ Y\times \mathbb{P}^1 \ar[r]^-{j} & W \\ Y \ar[u]^-{s_0} \ar[r]^-{i_0} & \mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y) \ar[u]^-{u_0}.}$$
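For orientation (the following simplest instance is included only as an illustration), suppose that $Y$ is a single point on a smooth curve $X$ over a field. Then $W$ is the blow-up of the surface $X\times\mathbb{P}^1$ at one point, $\widetilde{X}\cong X$, and $P\cong\mathbb{P}^1$ is the exceptional divisor; the two components of $q_W^{-1}(0)$ meet in a single point, which, by the above, is different from the point $i_0(Y)$ given by the zero section embedding.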
Since the complement $X\setminus Y$ is contained in $W\setminus {Y\times \mathbb{P}^1}$, we have a pull-back morphism $u_t^*: \widehat{K}_{Y\times \mathbb{P}^1,m}(W,\mu_n)\to \widehat{K}_{Y,m}(X,\mu_n)$.
\begin{lem}\label{408} For any $\mathbb{Z}$-point $t\neq0$, the diagram $$\xymatrix{ \widehat{K}_m(Y\times \mathbb{P}^1,\mu_n) \ar[r]^-{\cong}_-{j_*} \ar[d]^-{s_t^*} & \widehat{K}_{Y\times \mathbb{P}^1,m}(W,\mu_n) \ar[d]^-{u_t^*} \\ \widehat{K}_m(Y,\mu_n) \ar[r]^-{\cong}_-{i_*} & \widehat{K}_{Y,m}(X,\mu_n) }$$ is commutative. \end{lem} \begin{proof} The commutativity of the algebraic prototype of this diagram follows from the Tor-independence of the deformation diagrams, but for arithmetic K-theory it is more complicated because the morphisms $j_*$ and $i_*$ are defined via another deformation to the normal cone construction according to the $\mathbb{A}^1$-homotopy invariance of the K-theory and the Deligne-Beilinson cohomology.
Write $c_t^*: \widehat{K}_m(Y\times \mathbb{P}^1,\mu_n)\to \widehat{K}_m(Y,\mu_n)$ for the composition $i_*^{-1}\circ u_t^*\circ j_*$. We need to show that $c_t^*=s_t^*$. The morphism $s_t^*$ is induced by the commutativity between $s_t^*$ and $\widetilde{{\rm ch}}_g$, while the morphism $c_t^*$ is induced by the homotopy defining $j_*$ and the homotopy defining $i_*$. Again, using the $\mathbb{A}^1$-homotopy invariance of the K-theory and the Deligne-Beilinson cohomology, we may consider the pull-backs of $s_t^*$ and $c_t^*$ to $\widehat{K}_m(Y\times \mathbb{P}^1\times \mathbb{A}^1,\mu_n)\to \widehat{K}_m(Y\times \mathbb{A}^1,\mu_n)$ and restrict them to $\{0\}\hookrightarrow \mathbb{A}^1$, then the statement in this lemma will follows from the commutativity of the diagram \begin{align}\label{dn} \xymatrix{ \widehat{K}_m(Y\times \mathbb{P}^1,\mu_n) \ar[r]^-{\cong}_-{{j_0}_*} \ar[d]^-{s_t^*} & \widehat{K}_{Y\times \mathbb{P}^1,m}(P',\mu_n) \ar[d]^-{u_t^*} \\ \widehat{K}_m(Y,\mu_n) \ar[r]^-{\cong}_-{{i_0}_*} & \widehat{K}_{Y,m}(P,\mu_n) } \end{align} where $P'=\mathbb{P}\Big(\big(N_{X/Y}\boxtimes \mathcal{O}(-1)\big)\oplus\mathcal{O}_{Y\times \mathbb{P}^1}\Big)$ is the projective completion of $N_{W/{Y\times \mathbb{P}^1}}$ over $Y\times \mathbb{P}^1$. It is equivalent to show that the following diagram \begin{align}\label{dn'} \xymatrix{ \widehat{K}_m(Y\times \mathbb{P}^1,\mu_n) \ar[r]^-{{j_0}_*} \ar[d]^-{s_t^*} & \widehat{K}_{m}(P',\mu_n) \ar[d]^-{u_t^*} \\ \widehat{K}_m(Y,\mu_n) \ar[r]^-{{i_0}_*} & \widehat{K}_{m}(P,\mu_n) } \end{align} is commutative because the morphism ${i_0}_*: \widehat{K}_m(Y,\mu_n) \to \widehat{K}_{m}(P,\mu_n)$ is injective. We endow $N_{X/Y}\boxtimes \mathcal{O}(-1)$ with the product metric coming from the metric on $N_{X/Y}$ and the Fubini-Study metric on $\mathcal{O}(-1)$, then the pull-back of $\overline{N}_{W/{Y\times \mathbb{P}^1}}$ along $s_t$ is isometric to $\overline{N}_{X/Y}$ so that the pull-back along $s_t$ of the Koszul resolution and of the corresponding Bott-Chern singular current with respect to $j_0$ is exactly the Koszul resolution and the corresponding Bott-Chern singular current with respect to $i_0$. According to the construction of the homotopies defining ${j_0}_*$ and ${i_0}_*$, we get the commutativity of the diagram (\ref{dn'}) and hence of (\ref{dn}). So we are done. \end{proof}
\begin{cor}\label{409} For any $\mathbb{Z}$-point $t\neq0$, the diagram $$\xymatrix{ \widehat{K}_m(Y\times \mathbb{P}^1,\mu_n) \ar[r]^-{j_*} \ar[d]^-{s_t^*} & \widehat{K}_{m}(W,\mu_n) \ar[d]^-{u_t^*} \\ \widehat{K}_m(Y,\mu_n) \ar[r]^-{i_*} & \widehat{K}_{m}(X,\mu_n) }$$ is commutative. \end{cor}
\begin{rem}\label{410} Using the same argument as in Lemma~\ref{408}, we know that the diagram $$\xymatrix{ \widehat{K}_m(Y\times \mathbb{P}^1,\mu_n) \ar[r]^-{j_*} \ar[d]^-{s_0^*} & \widehat{K}_{m}(W,\mu_n) \ar[d]^-{u_0^*} \\ \widehat{K}_m(Y,\mu_n) \ar[r]^-{{i_0}_*} & \widehat{K}_{m}(P,\mu_n) }$$ is also commutative. \end{rem}
Next, we consider the commutative diagram $$\xymatrix{ W \ar[rd]^-{l} & \\ X \ar[u]^-{u_t} \ar[r]^-{f} & S} $$ with $\mathbb{Z}$-point $t\neq 0$ and we compare the map $f_*\circ u_t^*$ with the map $l_*$ from $\widehat{K}_m(W,\mu_n)_\mathbb Q$ to $\widehat{K}_m(S,\mu_n)_\mathbb Q$.
Firstly, for any $\mu_n$-invariant K\"{a}hler metric $\omega^X$ on $X$ which induces an invariant K\"{a}hler metric $\omega^Y$ on $Y$, there exists a $\mu_n$-invariant K\"{a}hler metric $\omega^W$ on $W$ such that the restrictions of $\omega^W$ over $X\cong X\times\{t\}$ with $t\neq 0$ and to $Y\cong Y\times\{0\}$ are exactly $\omega^X$ and $\omega^Y$. This fact follows from \cite[Lemma 3.5]{T2}. Actually, such a metric is constructed via the Grassmannian graph construction. In this construction, we have an embedding $W\to X\times \mathbb{P}^r\times\mathbb{P}^1$ and the metric $\omega^W$ is the $\mu_n$-average of the restriction of a product metric on $X\times \mathbb{P}^r\times\mathbb{P}^1$. We fix such an invariant K\"{a}hler metric $\omega^W$ on $W$ and endow all submanifolds of $W$ with the induced metrics. Moreover, all normal bundles appearing in the construction of the deformation to the normal cone will be endowed with the quotient metrics.
Secondly, to the three divisors $u_t(X)$, $u_0(P)$ and $u_0(\widetilde{X})$ in $W$, we have the following result.
\begin{lem}\label{deformation} Over $W$, there are $\mu_n$-invariant hermitian metrics on $\mathcal{O}(X)$, $\mathcal{O}(P)$ and $\mathcal{O}(\widetilde{X})$ such that the isometry $\overline{\mathcal{O}}(X)\cong\overline{\mathcal{O}}(P)\otimes\overline{\mathcal{O}}(\widetilde{X})$ holds and such that the restriction of $\overline{\mathcal{O}}(X)$ over $X$ yields the metric of $N_{W/{X}}$, the restriction of $\overline{\mathcal{O}}(\widetilde{X})$ over $\widetilde{X}$ yields the metric of $N_{W/{\widetilde{X}}}$ and the restriction of $\overline{\mathcal{O}}(P)$ over $P$ yields the metric of $N_{W/{P}}$. \end{lem} \begin{proof} Choose a metric on $\mathcal{O}(P)$ in a small neighborhood of $P$ such that the restriction of $\overline{\mathcal{O}}(P)$ over $P$ yields the metric of the normal bundle. Do the same for $\mathcal{O}(\widetilde{X})$. Since $X$ is closed and disjoint from $\widetilde{X}$ and $P$, we can extend these metrics via a partition of unity to metrics defined on $W$ so that the restriction over $X$ of the metric that $\mathcal{O}(X)$ inherits from the isomorphism $\mathcal{O}(X)\cong\mathcal{O}(P)\otimes\mathcal{O}(\widetilde{X})$ yields the metric of the normal bundle $N_{W/X}$. We then take the $\mu_n$-averages of these metrics to make them $\mu_n$-invariant. Since the metrics on $N_{W/{X}}$, $N_{W/{P}}$ and $N_{W/{\widetilde{X}}}$ are already $\mu_n$-invariant, the $\mu_n$-invariant metrics on $\mathcal{O}(X)$, $\mathcal{O}(P)$ and $\mathcal{O}(\widetilde{X})$ obtained as above have the properties that we require. \end{proof}
Now, consider the Koszul resolution $$0\to \overline{\mathcal{O}}(-X)\to \overline{\mathcal{O}}_W\to {u_t}_*\overline{\mathcal{O}}_X\to 0.$$ The associated equivariant singular Bott-Chern current $T_g(W/X)$ satisfies the identity $$d_\mathcal{D}T_g(W/X)={\rm ch}_g^0(\overline{\mathcal{O}}_W)-{\rm ch}_g^0\big(\overline{\mathcal{O}}(-X)\big)-{u_t}_*[{\rm ch}_g^0(\overline{\mathcal{O}}_X){\rm Td}_g^{-1}(\overline{N}_{W/X})].$$ We claim the following result.
\begin{lem}\label{411} For any element $x\in \widehat{K}_m(W,\mu_n)_\mathbb Q$ with integer $m\geq 1$, the identity $$f_*\circ u_t^*(x)-f_*(R_g(N_{W/X})\cdot u_t^*x)=l_*(x)-l_*(\overline{O}(-X)\otimes x)$$ hold in $\widehat{K}_m(S,\mu_n)_\mathbb Q$. \end{lem} \begin{proof} Let $\overline{E}$ be a $l$-acyclic hermitian $k$-cube in $\widehat{\mathcal{P}}(W,\mu_n)$. Since $W$ admits a very ample invertible $\mu_n$-sheaf which is relative to the morphism $l: W\to S$ (cf. \cite[Lemma 3.9]{T2}), we may assume that $\overline{O}(-X)\otimes \overline{E}$ is also $l$-acyclic and $u_t^*\overline{E}$ is $f$-acyclic. Then we have a short exact sequence of hermitian $k$-cubes in $\widehat{\mathcal{P}}(S,\mu_n)$ $$\chi(\overline{E}): \quad 0\to l_*(\overline{\mathcal{O}}(-X)\otimes \overline{E})\to l_*(\overline{E})\to f_*({u_t}^*\overline{E})\to 0,$$ which will be regarded as a hermitian $(k+1)$-cube and as a chain homotopy between the maps $l_*-l_*(\overline{\mathcal{O}}(-X)\otimes)$ and $f_*\circ u_t^*$. Consequently, the formula $$\mathbf{H}^{(1)}_k(\overline{E})=\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_{k+1}\circ\lambda\big(\chi(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)$$ defines a chain homotopy between ${\rm ch}_g\circ l_*-{\rm ch}_g\circ l_*(\overline{\mathcal{O}}(-X)\otimes)$ and ${\rm ch}_g\circ f_*\circ u_t^*$.
On the other hand, for any element $\alpha\in \bigoplus_{p\geq 0}D^{2p-*}(W_{\mu_n},p)_{R_n}$, the formula $$\frac{1}{(2\pi i)^{r_l}}\int_{W_{\mu_n}/S}T_g(W/X)\bullet {\rm Td}_g(\overline{Tg})\bullet \alpha+\frac{1}{(2\pi i)^{r_f}}\int_{X_{\mu_n}/S}{\rm Td}_g(\overline{\mathcal{N}})\bullet{\rm Td}_g^{-1}(\overline{N}_{W/X})\bullet\alpha$$ gives a chain homotopy between the maps ${l_{\mu_n}}_!\circ ({\rm Td}_g(\overline{Tg})\bullet)-{l_{\mu_n}}_!\circ \Big({\rm Td}_g(\overline{Tg}){\rm ch}_g^0\big(\overline{\mathcal{O}}(-X)\big)\bullet\Big)$ and ${f_{\mu_n}}_!\circ ({\rm Td}_g(\overline{Tf})\bullet u_t^*)$. Hence, it defines a chain homotopy between ${l_{\mu_n}}_!\circ ({\rm Td}_g(\overline{Tg})\bullet{\rm ch}_g)-{l_{\mu_n}}_!\circ \Big({\rm Td}_g(\overline{Tg}){\rm ch}_g^0\big(\overline{\mathcal{O}}(-X)\big)\bullet{\rm ch}_g\Big)$ and ${f_{\mu_n}}_!\circ ({\rm Td}_g(\overline{Tf})\bullet u_t^*\circ {\rm ch}_g)$. Like before, using the projection formula and the fact that the deformation to the normal cone construction is base-change invariant along smooth morphisms, we write the induced homotopy as \begin{multline*} \mathbf{H}^{(2)}_k(\overline{E})=\frac{(-1)^{k}}{2k!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}\bigg(\Big(\frac{1}{(2\pi i)^{r_l}}\int_{W_{\mu_n}\times (\mathbb{P}^1)^{k}/{S\times (\mathbb{P}^1)^{k}}}T_g(W/X)\bullet {\rm Td}_g(\overline{Tg}){\rm ch}_g^0\big({\rm tr}_{k}\circ\lambda(\overline{E})\big)\Big)\\ {\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\bigg)}\\ +\frac{(-1)^{k}}{2k!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}\bigg(\Big(\frac{1}{(2\pi i)^{r_f}}\int_{X_{\mu_n}\times (\mathbb{P}^1)^{k}/{S\times (\mathbb{P}^1)^{k}}}{\rm Td}_g(\overline{\mathcal{N}})\bullet{\rm Td}_g^{-1}(\overline{N}_{W/X}){\rm ch}_g^0\big({\rm tr}_{k}\circ\lambda(u_t^*\overline{E})\big)\Big)
\\ {\wedge C_{k}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2)\bigg).} \end{multline*}
Now, we denote by $H_{\chi}(\overline{E})$ the following $2$-cube of hermitian bundles on $S\times (\mathbb{P}^1)^k$ $$\xymatrix{l_*\big({\rm tr}_k\circ\lambda(\overline{\mathcal{O}}(-X)\otimes\overline{E})\big) \ar[r]^-{\rm Id} \ar[d] & {\rm tr}_k\circ\lambda\big(l_*(\overline{\mathcal{O}}(-X)\otimes\overline{E})\big) \ar[r] \ar[d] & 0 \ar[d]\\ l_*\big({\rm tr}_k\circ\lambda(\overline{E})\big) \ar[r]^-{\rm Id} \ar[d] & {\rm tr}_k\circ\lambda\big(l_*(\overline{E})\big) \ar[r] \ar[d] & 0 \ar[d]\\ f_*\big({\rm tr}_k\circ\lambda(u_t^*\overline{E})\big) \ar[r]^-{\rm Id} & {\rm tr}_k\circ\lambda\big(f_*(u_t^*\overline{E})\big) \ar[r] & 0}$$ and we set \begin{align*} \widetilde{\mathbf{H}}_k(\overline{E}):=\frac{(-1)^{k+2}}{2(k+2)!(2\pi i)^{k+2}}&\int_{(\mathbb{P}^1)^{k+2}}{\rm ch}_g^0\Big({\rm tr}_2\circ \lambda\big(H_\chi(\overline{E})\big)\Big)\wedge C_{k+2}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+2}\mid^2), \end{align*} it satisfies the differential equation \begin{multline*} d_\mathcal{D}\circ \widetilde{\mathbf{H}}_k(\overline{E})\\ =\widetilde{\mathbf{H}}_{k-1}\circ d\overline{E}+\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\Big({\rm tr}_{k+1}\circ\lambda\big(\chi(\overline{E})\big)\Big)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\circ\lambda\Big(\chi\big({\rm tr}_k\circ \lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2)
\\ -\Pi'^l_k(\overline{E})+\Pi'^l_k(\overline{\mathcal{O}}(-X)\otimes\overline{E})+\Pi'^f_k(u_t^*\overline{E})\\ =\widetilde{\mathbf{H}}_{k-1}\circ d\overline{E}+\mathbf{H}^{(1)}_k(\overline{E})-\Pi'^l_k(\overline{E})+\Pi'^l_k(\overline{\mathcal{O}}(-X)\otimes\overline{E})+\Pi'^f_k(u_t^*\overline{E})
\\ -\frac{(-1)^{k+1}}{2(k+1)!(2\pi i)^{k+1}}\int_{(\mathbb{P}^1)^{k+1}}{\rm ch}_g^0\bigg({\rm tr}_1\circ\lambda\Big(\chi\big({\rm tr}_k\circ \lambda(\overline{E})\big)\Big)\bigg)\wedge C_{k+1}(\log\mid z_1\mid^2,\cdots,\log\mid z_{k+1}\mid^2).
\end{multline*}
Similar to the tricks that we used frequently before, we set \begin{multline*} \mathbf{H}^{(2')}_k(\overline{E})\\ =\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(\frac{1}{(2\pi i)^{r_l}}\int_{W_{\mu_n}\times (\mathbb{P}^1)^{k}/{S\times (\mathbb{P}^1)^{k}}}T_g(W/X)\bullet {\rm Td}_g(\overline{Tg}){\rm ch}_g^0\big({\rm tr}_{k}\circ\lambda(\overline{E})\big)
\\ {,\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big)}\\ +\frac{(-1)^{k+1}}{(k+1)!(2\pi i)^{k}}\int_{(\mathbb{P}^1)^{k}}C_{k+1}\Big(\frac{1}{(2\pi i)^{r_f}}\int_{X_{\mu_n}\times (\mathbb{P}^1)^{k}/{S\times (\mathbb{P}^1)^{k}}}{\rm Td}_g(\overline{\mathcal{N}})\bullet{\rm Td}_g^{-1}(\overline{N}_{W/X}){\rm ch}_g^0\big({\rm tr}_{k}\circ\lambda(u_t^*\overline{E})\big)
\\ {,\log\mid z_1\mid^2,\cdots,\log\mid z_{k}\mid^2\Big).} \end{multline*} Then our lemma follows from the Bismut-Ma immersion formula (Theorem~\ref{404}) and the fact that there exists a homotopy between $\mathbf{H}^{(2')}_k(\overline{E})$ and $\mathbf{H}^{(2)}_k(\overline{E})$. So we are done. \end{proof}
\begin{rem}\label{412} Similar to Lemma~\ref{411}, we consider other three divisors $\xymatrix{ W & P \ar[l]^-{u_0} \ar[r]^-{p} & S}$, $\xymatrix{ W & \widetilde{X} \ar[l]^-{u_0} \ar[r]^-{h_1} & S}$ and $\xymatrix{ W & P\cap \widetilde{X} \ar[l]^-{u_0} \ar[r]^-{h_2} & S}$ and corresponding Koszul resolutions $$ 0\to \overline{\mathcal{O}}(-P)\to \overline{\mathcal{O}}_{W}\to {u_0}_*\overline{\mathcal{O}}_{P}\to 0,$$ $$ 0\to \overline{\mathcal{O}}(-\widetilde{X})\to \overline{\mathcal{O}}_{W}\to {u_0}_*\overline{\mathcal{O}}_{\widetilde{X}}\to 0,$$ and $$ 0\to \overline{\mathcal{O}}(-\widetilde{X})\otimes\overline{\mathcal{O}}(-P)\to \overline{\mathcal{O}}(-\widetilde{X})\oplus \overline{\mathcal{O}}(-P)\to \overline{\mathcal{O}}_{W}\to {u_0}_*\overline{\mathcal{O}}_{\widetilde{X}\cap P}\to 0.$$ Then, for any element $x\in \widehat{K}_m(W,\mu_n)_\mathbb Q$, we have $$p_*\circ u_0^*(x)-p_*(R_g(N_{W/P})\cdot u_0^*x)=l_*(x)-l_*\big(\overline{O}(-P)\otimes x\big),$$ $${h_1}_*\circ u_0^*(x)-{h_1}_*(R_g(N_{W/\widetilde{X}})\cdot u_0^*x)=l_*(x)-l_*\big(\overline{O}(-\widetilde{X})\otimes x\big),$$ and \begin{multline*} {h_2}_*\circ u_0^*(x)-{h_2}_*(R_g(N_{W/{P\cap \widetilde{X}}})\cdot u_0^*x)=l_*(x)-l_*(\overline{O}(-P)\otimes x)-l_*(\overline{O}(-\widetilde{X})\otimes x)+l_*(\overline{O}(-P)\otimes\overline{O}(-\widetilde{X})\otimes x) \end{multline*} which hold in $\widehat{K}_m(S,\mu_n)_\mathbb Q$. \end{rem}
Now, we are ready to give the proof of Theorem~\ref{407}.
\begin{proof}(of Theorem~\ref{407}) Let $x$ be an element in $\widehat{K}_m(Y,\mu_n)_\mathbb Q$, we consider the following two diagrams $$\xymatrix{ Y\times \mathbb{P}^1 \ar[r]^-{j} & W \ar[rd]^-{h} & \\ Y \ar[u]^-{s_t} \ar[r]^-i & X \ar[u]^-{u_t} \ar[r]^-{l} & S}$$ with $\mathbb{Z}$-point $t\neq0$ and $$\xymatrix{ Y\times \mathbb{P}^1 \ar[r]^-{j} & W \ar[rd]^-{h} & \\ Y \ar[u]^-{s_0} \ar[r]^-{i_0} & \mathbb{P}(N_{X/Y}\oplus\mathcal{O}_Y) \ar[u]^-{u_0} \ar[r]^-{p} & S.}$$ By Corollary~\ref{409} and the fact that $s_t$ is a section of the obvious projection $Pr$ from $Y\times \mathbb{P}^1$ to $Y$, we have that $i_*(x)=u_t^*\circ j_* \circ Pr^*(x)$ and hence $l_*\circ i_*(x)=l_*\circ u_t^*\circ j_* \circ Pr^*(x)$. According to Lemma~\ref{411}, $$l_*\circ u_t^*(j_*Pr^*x)=h_*(j_*Pr^*x)-h_*(\overline{O}(-X)\otimes j_*Pr^*x)+l_*(R_g(N_{W/X})\cdot i_*x).$$ Similarly, we have $$l_*\circ u_0^*(j_*Pr^*x)=h_*(j_*Pr^*x)-h_*(\overline{O}(-P)\otimes j_*Pr^*x)+p_*(R_g(N_{W/P})\cdot i_*x).$$ Notice that the image $j(Y\times \mathbb{P}^1)$ doesn't meet $\widetilde{X}$, the localization sequence of the higher equivariant arithmetic K-groups implies that $u_0^*(j_*Pr^*x)$ vanishes in $\widehat{K}_m(\widetilde{X},\mu_n)_\mathbb Q$ and in $\widehat{K}_m(P\cap \widetilde{X},\mu_n)_\mathbb Q$ so that $$h_*(\overline{O}(-X)\otimes j_*Pr^*x)=h_*(\overline{O}(-P)\otimes j_*Pr^*x).$$ This can be seen from the several identities mentioned in Remark~\ref{412}. On the other hand, \begin{align*} R_g(N_{W/X})\cdot i_*x=&R_g(N_{W/X}){\rm ch}_g(i_*x)=R_g(N_{W/X})i_*\big({\rm Td}_g^{-1}(N_{X/Y}){\rm ch}_g(x)\big)\\ =&i_*\big(i^*R_g(N_{W/X}){\rm Td}_g^{-1}(N_{X/Y}){\rm ch}_g(x)\big)=0. \end{align*} The same reasoning gives that $R_g(N_{W/P})\cdot i_*x=0$ also. So $l_*\circ i_*(x)$ is actually equal to $p_*\circ {i_0}_*(x)$. Therefore, the statement in Theorem~\ref{407} follows from Corollary~\ref{406}. \end{proof}
\subsection{Proof of the statement} In this subsection, we give a complete proof of Theorem~\ref{402}. Denote by $i$ the closed immersion $X_{\mu_n}\to X$. Then the arithmetic concentration theorem (cf. \cite[Theorem 5.2]{T3}) tells us that $$i_*: \widehat{K}_m(X_{\mu_n},\mu_n)_\rho\cong \widehat{K}_m(X,\mu_n)_\rho$$ with inverse map $\otimes \lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}})\circ \tau$.
Now let $x$ be any element in $\widehat{K}_m(X,\mu_n)$. Applying Theorem~\ref{407} to the morphisms $i$, $f$ and $f_{\mu_n}=f\circ i$, we compute \begin{align*} f_*(x)=&f_*\big(i_*\circ \otimes \lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}})\circ \tau(x)\big)\\ =&f_*\circ i_*\big(\otimes \lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}})\circ \tau(x)\big)\\ =&{f_{\mu_n}}_*\big(\otimes \lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}})\circ \tau(x)\big)-{f_{\mu_n}}_*\big(\otimes R_g(N_{X/{X_{\mu_n}}})\circ\otimes \lambda_{-1}^{-1}(\overline{N}_{X/{X_{\mu_n}}})\circ \tau(x)\big)\\ =&{f_{\mu_n}}_*\big(\Lambda_R\circ \tau(x)\big) \end{align*} which holds in $\widehat{K}_m(Y,\mu_n)_\rho\otimes \mathbb Q$. This completes the proof of Theorem~\ref{402}.
\section{Appendix: Remarks on the equivariant analytic torsion forms and the immersion formula}
\subsection{Anomaly formula for the equivariant analytic torsion forms} \label{Ma2.s1} Let $W, V$ be two $\mu_n$-equivariant projective complex manifolds, and
let $f: W\to V$ be an equivariant, holomorphic submersion with fiber $X$.
Fix a $\mu_n$-invariant K\"{a}hler metric on $W$ and choose the
corresponding K\"{a}hler form $\omega$ as a K\"{a}hler fibration
structure on $f$. We fix a primitive $n$-th root of unity $g$ as
a generator of $\mu_n(\mathbb C)$. In the following, ${\rm ch}_g$
and ${\rm Td}_g$ should stand for the usual Chern-Weil forms
with the factor $2\pi i$ in their definitions. Notice that they are
denoted by ${\rm ch}'_g$ and ${\rm Td}'_g$ in the text.
Let $(E,h^E)$ be a $\mu_n$-equivariant hermitian vector bundle on $W$ such that $E$ is $f$-acyclic. Let $T_g(f,\omega,h^E)\in\bigoplus_{p\geq0}A^{p,p}(V_{\mu_n})$ be the equivariant analytic torsion form
\cite[(2.27)]{Ma1} which satisfies the differential equation $$\frac{\bar\partial \partial}{2\pi i}T_g(f,\omega,h^E) ={\rm ch}_g(f_*E,f_*h^E) -\int_{W_{\mu_n}/{V_{\mu_n}}} {\rm Td}_g(Tf,h^{Tf}){\rm ch}_g(E,h^E)$$ where $h^{Tf}$ is the hermitian metric induced by $\omega$
on the holomorphic tangent bundle $Tf$. We shall write
$T_g(\omega,h^E)$ for $T_g(f,\omega,h^E)$,
if there is no ambiguity about the underlying map.
The following result is \cite[Theorem 2.13]{Ma1}
which extends \cite[Theorem 1.23]{BGS3},
\cite[Theorem 3.10]{BK}, \cite[Theorem 2.5]{Bi95}.
\begin{thm}\label{A10}(Anomaly formula) Let $\omega'$ be the form associated to another K\"{a}hler
fibration structure on $f: W\to V$. Let $h'^{Tf}$ be the metric on $Tf$ induced by $\omega'$. Then the following identity holds in $\bigoplus_{p\geq 0}A^{p,p}(V_{\mu_n})/ {(\operatorname{Im}{\partial}+\operatorname{Im}{\bar\partial})}$: \begin{align*} T_g(\omega,h^E)-T_g(\omega',h^E)=& - \widetilde{{\rm ch}}_g(f_*E,h^{f_*E},h'^{f_*E})\\ &+ \int_{W_{\mu_n}/{V_{\mu_n}}} \widetilde{{\rm Td}}_g(Tf,h^{Tf},h'^{Tf}){\rm ch}_g(E,h^E) \end{align*} where $(f_*E,h^{f_*E},h'^{f_*E})$ and $(Tf,h^{Tf},h'^{Tf})$ stand for the exact sequences of hermitian vector bundles $$\xymatrix{ 0\ar[r] & (f_*E,h^{f_*E}) \ar[r]^-{\rm Id} &
(f_*E,h'^{f_*E}) \ar[r] & 0 \ar[r] & 0}$$ and $$\xymatrix{ 0\ar[r] & (Tf,h^{Tf}) \ar[r]^-{\rm Id} &
(Tf,h'^{Tf}) \ar[r] & 0 \ar[r] & 0}.$$ \end{thm}
We shall see that there is a natural way to write down explicitly some differential forms $\Delta^0(f,\overline{E},\omega,\omega')$, $\Delta_0(f,\overline{E},\omega,\omega')$ such that they are functorial in a certain sense and they measure the difference in the anomaly formula: \begin{align*} \Delta&=\partial\Delta^0(f,\overline{E},\omega,\omega') +\bar\partial\Delta_0(f,\overline{E},\omega,\omega')\\ &=T_g(\omega,h^E)-T_g(\omega',h^E) + \widetilde{{\rm ch}}_g(f_*E,h^{f_*E},h'^{f_*E}) - \int_{W_{\mu_n}/{V_{\mu_n}}} \widetilde{{\rm Td}}_g(Tf,h^{Tf},h'^{Tf}){\rm ch}_g(E,h^E). \end{align*} To do so, we need to fix the construction of $\widetilde{{\rm ch}}_g(f_*E,h^{f_*E},h'^{f_*E})$, $\widetilde{{\rm Td}}_g(Tf,h^{Tf},h'^{Tf})$ at the differential form level, i.e., without modulo $\operatorname{Im}{\partial}+\operatorname{Im}{\bar\partial}$. Let us fix the definition of $\widetilde{{\rm ch}}_g(f_*E,h^{f_*E},h'^{f_*E})$ as the left side of \cite[(2.42)]{Bi95}, and the definition of $\widetilde{{\rm Td}}_g(Tf,h^{Tf},h'^{Tf})$ as the integral from $0$ to $1$ over the parameter $c$ of the differential form given by the part $\frac{\partial}{\partial b}\cdots$ in the last term of \cite[(3.67)]{BK}; note that we can also fix the path of metrics as the segment directly connecting the two metrics. Thus we can write them under the notation in \cite[(2.34), (2.56)]{Ma1} (cf. also the convention before Theorem 2.5 of this paper), \begin{align}\label{Ma2.3.2}\begin{split} &\widetilde{{\rm ch}}_g(f_*E,h^{f_*E},h'^{f_*E})
= \int_0^1 \Phi \mbox{Tr}_s[g Q^{H(X, E|_{X})} _c
\exp(-(\nabla^{H(X, E|_{X})}_c)^2)] dc,\\ &\widetilde{{\rm Td}}_g(Tf,h^{Tf},h'^{Tf}) = \int_0^1 \frac{\partial}{\partial b} \Big [ \mbox{Td}\Big (\frac{-R^{TX^g}_c}{2i\pi} -b(h^{TX^g})^{-1}\frac{\partial h^{TX^g}}{\partial c}\Big ) \\ &\hspace{3cm} \times \prod_{j=1}^q \frac{\mbox{Td}}{e}\Big (\frac{-R^{N_{X^g/X}^{\theta_j}}_c}{2i\pi} -b(h^{N^{\theta_j}_{X^g/X}})^{-1} \frac{\partial h^{N^{\theta_j}_{X^g/X}}}{\partial c}+i \theta_j\Big ) \Big ]_{b=0} d c. \end{split}\end{align}
Let $V_1$ be an equivariant closed submanifold of $V$, and let $W_1=f^{-1}(V_1)\subset W$ be the closed submanifold of $W$ with restricted K\"{a}hler metric. Then $f_1: W_1\to V_1$ is also an equivariant holomorphic submersion with compact fibre. Denote by $j$ (resp. $i$) the natural embedding $W_1\to W$ (resp. $V_1\to V$) and by $\omega_1, \omega'_1$
the induced K\"{a}hler forms $j^*\omega, j^*\omega'$.
Let $\overline{E}$ be an $f$-acyclic hermitian bundle on $W$.
\begin{thm}\label{A1} There is a natural way to write down explicitly differential forms $\Delta^0(f,\overline{E},\omega,\omega')$,
$\Delta_0(f,\overline{E},\omega,\omega')$ such that $\Delta=\partial\Delta^0(f,\overline{E},\omega,\omega') +\bar\partial\Delta_0(f,\overline{E},\omega,\omega')$ and they are functorial in the following sense. \begin{align}\label{Ma2.3.5}
i_{\mu_n}^*\Delta^0(f,\overline{E},\omega,\omega') =\Delta^0(f_1,j^*\overline{E},\omega_1,\omega'_1) \end{align} and \begin{align}\label{Ma2.3.6}
i_{\mu_n}^*\Delta_0(f,\overline{E},\omega,\omega') =\Delta_0(f_1,j^*\overline{E},\omega_1,\omega'_1). \end{align} \end{thm} \begin{proof} By the equivariant extension of \cite[Definition 3.14, Theorems 3.16, 3.17]{BK} (cf. \cite[(2.34)]{Ma1}), there exist differential forms $\theta^1, \theta^2$ and $\theta^3$ such that \begin{align}\label{Ma2.3.7}
\Delta+ d \varrho=\bar{\partial}\theta^1+\partial \theta^2 +\bar{\partial}\partial\theta^3 \end{align} where $d\varrho$ comes from the last term of \cite[(3.38)]{BK}; in particular, $\varrho$ is a local term from the small time heat kernel asymptotics of the Bismut superconnection, and $\theta^k$ ($k=1, 2, 3$) have universal expressions in terms
of $g, \omega, \omega'$ and $h^E$ via the Bismut superconnection.
Thus, from \cite[Definition 3.14, Theorem 3.16]{BK}
and \cite[(2.34)]{Ma1}, we know that if $i: V_1\to V$ is a complex submanifold of $V$ and we consider the corresponding objects for the submersion $f_1$, then each of the above terms is the restriction of the corresponding term for the global submersion $f$. Thus let $\Delta_1, \theta_1^k$ be the corresponding terms associated
to the fibration $f_1: W_1\to V_1$, then we have
$\Delta_1=i_{\mu_n}^*\Delta$ and
$\theta_1^k=i_{\mu_n}^*\theta^k$ ($k=1, 2, 3$),
$\varrho_{1}=i_{\mu_n}^*\varrho$.
So, writing $\Delta^0(f,\overline{E},\omega,\omega')=\theta^2-\varrho$
and $\Delta_0(f,\overline{E},\omega,\omega')
=\theta^1+\partial\theta^3- \varrho$,
we are done. \end{proof}
\subsection{Functoriality of the equivariant analytic torsion forms} \label{Ma2.s2} Let $W, V$ and $S$ be three $\mu_n$-equivariant projective
complex manifolds with $S=S_{\mu_n}$. Suppose that $f: W\to V$
and $h: V\to S$ are two holomorphic submersions with compact fibres
$X,Y$. Then $h\circ f$ is also a holomorphic submersion
with compact fibre $Z$.
Let $\omega^W$ and $\omega^V$ be two $\mu_n$-invariant K\"{a}hler forms on $W$ and on $V$. As usual, $\omega^W$ and $\omega^V$ determine K\"{a}hler fibration structures on the morphisms $f$, $h$ and $h\circ f$, and they induce $\mu_n$-invariant hermitian metrics associated with the K\"ahler forms
$\omega^X, \omega^Y$ and $\omega^Z$ on the relative tangent bundles $Tf, Th$ and $T(h\circ f)$. Consider the following short exact sequence of hermitian vector bundles $$\overline{T}(f,h,h\circ f):\quad 0\to \overline{Tf}\to
\overline{T(h\circ f)}\to f^*\overline{Th}\to 0.$$ Denote by ${\rm Td}_g(\overline{T}(f,h,h\circ f))$
the equivariant secondary Todd form; it satisfies the differential equation $$\frac{\bar \partial \partial}{2\pi i}{\rm Td}_g(\overline{T}(f,h,h\circ f)) ={\rm Td}_g(\overline{T(h\circ f)})-f_{\mu_n}^*{\rm Td}_g (\overline{Th}){\rm Td}_g(\overline{Tf}).$$
Now let $\overline{E}$ be a hermitian vector bundle on $W$; we shall assume that $E$ is $f$-acyclic and $h\circ f$-acyclic. Then the Leray spectral sequence $E_2^{i,j}=R^ih_*(R^jf_*E)$ degenerates at $E_2$ so that $f_*E=R^0f_*(E)$ is $h$-acyclic and
$(h\circ f)_*E\cong h_*f_*E$. Clearly, $(h\circ f)_*E$ and $h_*f_*E$
carry in general different $L^2$ metrics
(Note that for $\sigma\in ((h\circ f)_*E)_b$, $b\in S$,
\begin{align} \label{Ma2.4.2} \begin{split}
& \|\sigma\|_{(h\circ f)_*E}^2=(2\pi)^{-\dim Z}\int_{Z_b} |\sigma|^2
\frac{(\omega^Z)^{\dim Z}}{(\dim Z)!} , \\
& \|\sigma\|_{h_*f_*E}^2=(2\pi)^{-\dim Z}\int_{Y_b}
\Big( \int_{X} |\sigma|^2
\frac{(\omega^X)^{\dim X}}{(\dim X)!}\Big)
\frac{(\omega^Y)^{\dim Y}}{(\dim Y)!} ,
\end{split}\end{align}
thus they are different in general).
Consider the following short exact sequence of hermitian vector bundles $$\overline{E}(f,h,h\circ f):\quad 0\to h_*f_*\overline{E} \to (h\circ f)_*\overline{E} \to 0\to 0.$$ The equivariant secondary Bott-Chern form $\widetilde{{\rm ch}}_g(\overline{E}(f,h,h\circ f))$ satisfies the differential equation $$\frac{\bar \partial \partial}{2\pi i}\widetilde{{\rm ch}}_g (\overline{E}(f,h,h\circ f)) ={\rm ch}_g((h\circ f)_*\overline{E})-{\rm ch}_g(h_*f_*\overline{E}).$$
\begin{thm}\label{A20} Let notations and assumptions be as above. Then the following identity holds in $\bigoplus_{p\geq 0}A^{p,p}(S)/{(\operatorname{Im}{\partial}+\operatorname{Im}\bar\partial)}$: \begin{align} \label{Ma2.4.3} \begin{split} T_g(h\circ f,\omega^W,h^E)-T_g(h,\omega^V,h^{f_*E})& -\int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Th})T_g(f,\omega^W,h^E)\\ =&\widetilde{{\rm ch}}_g(\overline{E}(f,h,h\circ f)) -\int_{W_{\mu_n}/S}\widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f)) {\rm ch}_g(\overline{E}). \end{split}\end{align} \end{thm} \begin{proof}(a sketch) This is a natural extension of \cite[Th\'{e}or\`{e}me 3.5]{Ma2} to the equivariant case, or the family extension of \cite[Theorem 3.1]{Ma1} which is an equivariant extension of \cite[Theorem 3.1]{BerB}. To prove this extension, one may follow the same approach as \cite[Sections 4-9]{Ma2}. In fact, as a purely functional-analytic argument, \cite[Theorems 4.5, 4.6 and 4.7]{Ma2} can be extended formally to the equivariant case by introducing the operator $g$ in the right place. The reason one can do this formal extension has been given in \cite[Section 5]{Ma1}. For the equivariant extensions of \cite[Theorems 4.8, 4.9, 4.10 and 4.11]{Ma2}, one can show that their proofs are local on $f^{-1}(V_{\mu_n})$ and that a certain rescaling of the Clifford variables which does not affect the action of $g$ can be made (cf. \cite[Section 7 b)]{Ma2}). Replacing the equivariant local index technique in \cite[Sections 7, 8, and 9]{Ma1} by its equivariant relative local index, one gets the desired identity.
To help the readers, we will use directly the notation in
\cite[Section 4]{Ma2}. By the anomaly formula Theorem \ref{207},
we only need to establish Theorem \ref{A20} for a special couple
of K\"ahler forms, thus we will assume that
$\omega^{W}=\widetilde{\omega}^{W}+ f^{*} \omega^{V}$ with $\widetilde{\omega}^{W}$ a K\"ahler form on $W$.
Let $\Delta$ be the rectangular domain
in $\mathbb R^2$ with coordinates $(u,T)$,
defined by the four vertices $(1, \varepsilon), (T_0, \varepsilon)$,
$(T_0, A), (1,A)$. Following \cite[(4.7)]{Ma2}, set \begin{align} \label{Ma2.4.7} \begin{split} \theta^0_1 &= (2\pi i)^{-1/2} \int_\Delta \frac{2}{u} \frac{\partial}{ \partial b} \left\{\varphi \mbox{Tr}_s \left[ g \left[ B'_{3,u^2,T}, N_{3,u^2,T}\right] \exp(-B^2_{3,u^2,T} - b M_{3,u^2,T})\right]\right\}_{b=0} dudT, \\ \theta^0_2& = (2\pi i)^{-1/2} \int_\Delta \frac{2}{u} \frac{\partial}{ \partial b} \left\{\varphi \mbox{Tr}_s \left[ g \left[ B''_{3,u^2,T}, N_{3,u^2,T}\right] \exp(-B^2_{3,u^2,T} - b M_{3,u^2,T})\right]\right\}_{b=0} dudT, \\ \theta^0_3 &= (2\pi i)^{-1} \int_\Delta \frac{2}{u} \frac{\partial}{ \partial b} \left\{\varphi \mbox{Tr}_s \left[ g N_{3,u^2,T} \exp(-B^2_{3,u^2,T} - b M_{3,u^2,T})\right]\right\}_{b=0} dudT . \end{split} \end{align} The only difference compared with \cite[(4.7)]{Ma2} is that in \eqref{Ma2.4.7} we add the operator $g$ as the first term in ${\rm Tr}_s[\cdots]$ of \cite[(4.7)]{Ma2}, i.e., we replace ${\rm Tr}_s[\cdots]$ by ${\rm Tr}_s[g \cdots]$. Note that $B_{3,u^2,T}$ is the Bismut superconnection associated with the submersion $h\circ f$ and the form $\omega^W_T= \frac{1}{T^{2}} \widetilde{\omega}^{W}+ f^{*} \omega^{V}$, and $B'_{3,u^2,T}, B''_{3,u^2,T}$ are the holomorphic and anti-holomorphic parts of $B_{3,u^2,T}$. Moreover, $N_{3,u^2,T}$ is a generalized number operator associated with $\omega^W_T$.
The boundary of $\Delta$ consists of four oriented segments
$\Gamma_1,\cdots,\Gamma_4$.
Let $I^0_k$ be the integral of the one form on $\mathbb R^2$ with values
in $\Lambda^\bullet(T^*_\mathbb R S)$ defined by
replacing ${\rm Tr}_s[\cdots]$ by ${\rm Tr}_s[g \cdots]$ in
\cite[Definition 4.2]{Ma2}, then we have the $g$-analogue of
\cite[(4.8)]{Ma2}:
\begin{eqnarray} \sum^4_{k=1} I^0_k = \overline \partial \theta^0_1 - \partial \theta^0_2 - \overline \partial \partial \theta^0_3 . \end{eqnarray} We study the terms $I^0_k$ and $\theta^0_j$ in succession
as $A\to +\infty$, $T_0\to +\infty$, $\varepsilon\to 0$:
roughly, we get
$\bullet$ the term $-T_g(h,\omega^V,h^{f_*E})$ from $I^0_1$,
$\bullet$ a differential form version of
$- \widetilde{{\rm ch}}_g(\overline{E}(f,h,h\circ f))$
(via \cite[(1.58)]{BGS} or \cite[(4.17)]{Ma1} by replacing
${\rm Tr}_s[\cdots]$ by ${\rm Tr}_s[g \cdots]$)
from $I^0_2$,
$\bullet$ $T_g(h\circ f,\omega^W,h^E)$ from $I^0_3$,
$\bullet$ $-\int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Th})T_g(f,\omega^W,h^E)
+ \int_{W_{\mu_n}/S} \widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f)) {\rm ch}_g(\overline{E})$ (here we should use the differential form version of $\widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f))$ from the term $\int_{1}^{\infty}\cdots$ in \cite[(4.72)]{BerB} by replacing ${\rm Td}$ therein by ${\rm Td}_g$) from $I^0_4$.
Let $\theta_j^3$ ($j=1,2,3$) be the differential forms on $S$ obtained from $\theta_j^0$ by the above procedure. Then the difference of the two sides in \eqref{Ma2.4.3} (using the differential form versions of $\widetilde{{\rm ch}}_g(\overline{E}(f,h,h\circ f))$ and $\widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f))$ as above) is \begin{align}\label{Ma2.4.10}
\Delta+ d^{S} \Theta=\bar{\partial}\theta_1^3-\partial\theta_2^3 -\bar{\partial}\partial\theta_3^3 \end{align} where $\theta_k^3$ ($k=1, 2, 3$) have universal expressions via the Bismut superconnection $B_{3,u^2,T}$, and $\Theta$ is a combination of local terms from the small time heat kernel asymptotics of the Bismut superconnection for the fibrations $h$ and $h\circ f$, cf.
\cite[(2.24), (2.27)]{Ma1} and \cite[(4.27), (4.29)]{Ma2}. \end{proof}
Let $\theta_k^3$ ($k=1, 2, 3$) be the forms in \eqref{Ma2.4.10} associated with the couple $\omega^{W}=\widetilde{\omega}^{W}+ f^{*}\omega^{V}$, $\omega^{V}$. Set \begin{align}\label{Ma2.4.11} \Delta^0(f, h, \omega^{W}, \omega^{V}, \overline{E}) =-\theta_{2}^3 -\Theta \quad \text{ and } \Delta_0(f, h, \omega^W, \omega^V, \overline{E}) =\theta_{1}^3-\partial\theta_{3}^3 -\Theta. \end{align} Then, when we fix the differential form versions of $\widetilde{{\rm ch}}_g(\overline{E}(f,h,h\circ f))$ and $\widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f))$ as above, the forms in \eqref{Ma2.4.11} measure the difference of the formula \eqref{Ma2.4.3} at the differential form level from \eqref{Ma2.4.10}: \begin{multline}\label{Ma2.4.12} \Delta=\partial\Delta^0(f, h, \omega^W, \omega^V, \overline{E}) +\bar{\partial}\Delta_0(f, h, \omega^W, \omega^V, \overline{E})\\ =T_g(h\circ f,\omega^W,h^E)-T_g(h,\omega^V,h^{f_*E}) -\int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Th})T_g(f,\omega^W,h^E)\\ -\widetilde{{\rm ch}}_g(\overline{E}(f,h,h\circ f)) +\int_{W_{\mu_n}/S}\widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f)) {\rm ch}_g(\overline{E}). \end{multline}
Let $S_1$ be a closed submanifold of $S$, and let $V_1=h^{-1}(S_1)\subset V$ (resp. $W_1=(h\circ f)^{-1}(S_1)\subset W$) be the closed submanifold of $V$ (resp. $W$) with restricted K\"{a}hler metric. Then $f_1: W_1\to V_1$, $h_1: V_1\to S_1$ and $h_1\circ f_1: W_1\to S_1$ also form a triple of equivariant holomorphic submersions with compact fibres. Denote by $j$ (resp. $i$) the natural embedding $W_1\to W$ (resp. $V_1\to V$) and by $\omega^{W_1}, \omega^{V_1}$ the induced K\"{a}hler forms $j^*\omega^{W}, i^*\omega^{V}$. Denote by $l$ the embedding $S_1\to S$. Let $\overline{E}$ be an $f$-acyclic and $h\circ f$-acyclic hermitian bundle on $W$.
\begin{thm}\label{A2} The forms $\Delta^0(f, h, \omega^{W}, \omega^{V}, \overline{E})$ and $\Delta_0(f, h, \omega^W, \omega^V, \overline{E})$ are functorial in the following sense: $$l^*\Delta^0(f, h, \omega^{W}, \omega^{V}, \overline{E}) =\Delta^0(f_1, h_1, \omega^{W_1}, \omega^{V_1}, j^*\overline{E})$$ and $$l^*\Delta_0(f, h, \omega^{W}, \omega^{V}, \overline{E}) =\Delta_0(f_1, h_1, \omega^{W_1}, \omega^{V_1}, j^*\overline{E}).$$ \end{thm} \begin{proof} Note that the square of the Bismut superconnection is a second order fiberwise elliptic operator with differential form coefficients \cite[Theorem 3.6]{Bi86} (cf. also \cite[Theorem 10.17]{BGV}); in particular, its heat kernel along the fibers is well-defined. Moreover, in \eqref{Ma2.4.7}, the terms $\left[B'_{3,u^2,T}, N_{3,u^2,T}\right]$, $\left[B''_{3,u^2,T}, N_{3,u^2,T}\right]$ are first order differential operators along the fiber, and the terms $N_{3,u^2,T}$, $M_{3,u^2,T}$ are tensors. Thus we see clearly that, when we consider the corresponding objects for the submersion $h_1\circ f_1$, each of the above terms is the restriction of the corresponding term for the global submersion $h\circ f$.
We obtain that if $l: S_1\hookrightarrow S$ is a complex submanifold of $S$ and $\theta_{k,1}^0$, $\Theta_{1}$ are the corresponding terms associated to the relevant fibrations, then we have \begin{align} \label{Ma2.4.9} \Theta_{1}=l^*\Theta,\quad \theta_{k,1}^0=l^*\theta_{k}^0\qquad (k=1, 2, 3). \end{align} Now we carry out the limiting procedure $A\to +\infty$, $T_0\to +\infty$, $\varepsilon\to 0$ to get $\theta_{k,1}^3$; then, from \eqref{Ma2.4.9}, we get $\theta_{k,1}^3 =l^*\theta_{k}^3$ ($k=1, 2, 3$). Combining this with \eqref{Ma2.4.11}, we get Theorem \ref{A2}. \end{proof}
Now, for a general $\omega^{W}$, as we can use the anomaly formula for the triple $(h\circ f, \omega^{W}, \omega^{W}+ f^{*}\omega^{V})$, in particular its differential form version as in Section \ref{Ma2.s1}, we can still define $\Delta^0(f, h, \omega^W, \omega^V, \overline{E})$ and $\Delta_0(f, h, \omega^W, \omega^V, \overline{E})$ such that Theorem \ref{A2} and \eqref{Ma2.4.12} still hold; again we need to fix a differential form version of $\widetilde{{\rm Td}}_g(\overline{T}(f,h,h\circ f))$.
\subsection{Immersion formula} Let $V, W$ be two $\mu_n$-equivariant projective complex manifolds and let $i: W\hookrightarrow V$ be an equivariant closed immersion. Let $S$ be a compact complex manifold with trivial $\mu_n$-action, and let $f: W\rightarrow S$, $l: V\rightarrow S$ be two equivariant holomorphic submersions with fibers $Y,X$ such that $f=l\circ i$. Assume that $\overline{\eta}$ is an equivariant hermitian bundle on $W$ and $(\overline{\xi}., v)$ is a complex of equivariant hermitian bundles on $V$ which provides a resolution of $i_*\overline{\eta}$ such that the metrics on $\xi.$ satisfy Bismut's assumption (A). Let $\omega^W$, $\omega^V$ be two K\"{a}hler fibrations on $f$ and on $l$, respectively. We shall assume that $\omega^W$ is the pull-back of $\omega^V$ so that the K\"{a}hler metric on $W$ is induced by the K\"{a}hler metric on $V$. Consider the following exact sequence $$\overline{\mathcal{N}}:\quad 0\to \overline{Tf}\to \overline{Tl}\mid_W\to \overline{N}_{X/Y}\to 0$$ where $N_{X/Y}$ is endowed with the quotient metric. Then the equivariant secondary Todd form of $\overline{\mathcal{N}}$ satisfies the identity $$\frac{\bar \partial\partial}{2\pi i} \widetilde{{\rm Td}}_g(\overline{\mathcal{N}}) ={\rm Td}_g(Tl\mid_W,h^{Tl})-{\rm Td}_g(Tf,h^{Tf}) {\rm Td}_g(\overline{N}_{X/Y}).$$
We suppose that in the resolution $\xi.$, the $\xi_j$ are all $l$-acyclic and, moreover, that $\eta$ is $f$-acyclic.
Let $T_g(\omega^V, h^{\xi})$ be the equivariant analytic torsion forms associated with the family of relative Dolbeault double complexes
$(\Omega(X,\xi |_{X}), \overline{\partial}^X +v)$. Let $h^{H(X,\xi |_{X})}$ be the corresponding $L_2$ metric on the hypercohomology $H(X,\xi|_X)$
of $\xi|_X$.
Note that under our assumption,
$H(X,\xi|_X)\simeq f_{*}\eta$. And we have the following exact sequence of hermitian vector bundles on $S$ $$\overline{\Xi}:\quad 0\to l_*(\overline{\xi}_m)\to l_*(\overline{\xi}_{m-1})\to\ldots\to l_*(\overline{\xi}_0)
\to \overline{H(X,\xi|_X)}\to 0.$$ We can split $\overline{\Xi}.$ into a family of short exact sequences of hermitian bundles, for $j=1$ to $m$, $$\xymatrix{\chi_j: \quad 0 \ar[r] & \Ker d_j \ar[r] & \overline{\Xi}_j \ar[r]^-{d_j} & \Ker d_{j-1} \ar[r] & 0}$$ such that the kernel of every map $d_{j-1}$ for $j=2,\ldots,m$ carries the metric induced by $\overline{\Xi}_j$ and
$\Ker d_0=\overline{\Xi}_0=\overline{H(X,\xi|_X)}, \Ker d_m=\overline{\Xi}_{m+1}=l_*(\overline{\xi}_m)$. We set $\widetilde{{\rm ch}}_g(\overline{\Xi}.) =\sum_{j=0}^{m+1}(-1)^j\widetilde{{\rm ch}}_g(\chi_j)$. Then it satisfies the differential equation $$\frac{\bar\partial\partial}{2\pi i} \widetilde{{\rm ch}}_g(\overline{\Xi}.)
={\rm ch}_g(\overline{H(X,\xi|_X)}) -\sum_{j=0}^m (-1)^{j} {\rm ch}_g(l_*(\overline{\xi}_j)).$$
The following result is the combination of \cite[Theorems 0.1 and 0.2]{BM}, which is an equivariant extension of \cite[Theorems 0.1 and 0.2]{Bi} and a family extension of \cite[Theorem 0.1]{Bi95}, \cite[Theorem 0.1]{BiL}.
Let $R_g$ be the equivariant R-genus of Bismut \cite{Bi94}.
\begin{thm}\label{A30}(Immersion formula) The following identity holds in $\bigoplus_{p\geq0}A^{p,p}(S)/{(\operatorname{Im}\partial+\operatorname{Im}\bar\partial)}$. \begin{align} &T_g(\omega^V, h^{\xi}) -T_g(\omega^W,h^\eta)
+\widetilde{{\rm ch}}_g(f_*\eta, h^{H(X,\xi |_{X})}, h^{f_{*}\eta}) =-\int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Tl})T_g(\overline{\xi}.)\nonumber\\ &\label{BM.2.3} \hspace{0.5cm} -\int_{W_{\mu_n}/S}
\widetilde{{\rm Td}}_g(\overline{\mathcal{N}}){{\rm Td}_g^{-1}(\overline{N}_{X/Y})}{\rm ch}_g(\overline{\eta}) +\int_{W_{\mu_n}/S}{\rm Td}_g(\overline{Tf})R_g(\overline{N}_{X/Y}){\rm ch}_g(\overline{\eta}),\\ &\label{BM.2.4} T_g(\omega^V, h^{\xi})- \sum_{i=0}^m(-1)^iT_g(\omega^V,h^{\xi_i}) - \widetilde{{\rm ch}}_g(\overline{\Xi}.)=0. \end{align} \end{thm}
Again, to understand \eqref{BM.2.3} at the differential form level,
i.e., without modulo ${\operatorname{Im}\partial+\operatorname{Im}\bar\partial}$,
we need to first fix
$\widetilde{{\rm ch}}_g(f_*\eta, h^{H(X,\xi |_{X})}, h^{f_{*}\eta}) $
and $\widetilde{{\rm Td}}_g(\overline{\mathcal{N}})$ as differential forms, and $T_g(\overline{\xi}.)$ as a current. The natural and nice way is to use \cite[(7.33)]{Bi95} to replace $- \widetilde{{\rm Td}}_g(\overline{\mathcal{N}}){{\rm Td}_g^{-1}(\overline{N}_{X/Y})} + {\rm Td}_g(\overline{Tf})R_g(\overline{N}_{X/Y})$ by the differential form ${\bf B}_g(\overline{\mathcal{N}})$ in \cite[(7.24)]{Bi95}. Then we use the current $T_g(\overline{\xi}.)$ defined in
\cite[(6.30)]{Bi95} and
$\widetilde{{\rm ch}}_g(f_*\eta, h^{H(X,\xi |_{X})}, h^{f_{*}\eta})$ as the integral $\int_1^{+\infty}$ in \cite[(3.24)]{BM}.
Let $\Delta^0(f,l,i_*\overline{\eta},\overline{\xi}.)$ and $\Delta_0(f,l,i_*\overline{\eta},\overline{\xi}.)$ be the differential forms such that $$\Delta:=\partial\Delta^0(f,l,i_*\overline{\eta},\overline{\xi}.) +\bar\partial\Delta_0(f,l,i_*\overline{\eta},\overline{\xi}.)$$ measures the difference \begin{align*} &T_g(\omega^V, h^{\xi}) -T_g(\omega^W,h^\eta)
+\widetilde{{\rm ch}}_g(f_*\eta, h^{H(X,\xi |_{X})}, h^{f_{*}\eta})\\ &+ \int_{V_{\mu_n}/S}{\rm Td}_g(\overline{Tl})T_g(\overline{\xi}.) +\int_{W_{\mu_n}/S} {\bf B}_g(\overline{\mathcal{N}}) {\rm ch}_g(\overline{\eta}). \end{align*} We claim that $\Delta^0(f,l,i_*\overline{\eta},\overline{\xi}.)$ and $\Delta_0(f,l,i_*\overline{\eta},\overline{\xi}.)$ can be written down explicitly and that they admit a certain functoriality.
Let $S_1$ be a closed submanifold of $S$, and let $W_1=f^{-1}(S_1)\subset W$ (resp. $V_1=l^{-1}(S_1)\subset V$) be the closed submanifold of $W$ (resp. $V$) with restricted K\"{a}hler metric. Then $i_1: W_1\to V_1$, $l_1: V_1\to S_1$ and $f_1:W_1\to S_1$ also form a triple of equivariant morphisms such that $f_1=l_1\circ i_1$. Denote by $j$ the embedding $S_1\to S$.
\begin{thm}\label{A3} There is a natural way to write down explicitly differential forms $\Delta^0(f, l, {i}_*\overline{\eta}, \overline{\xi}.)$ and $\Delta_0(f, l, {i}_*\overline{\eta}, \overline{\xi}.)$ such that $\Delta:=\partial\Delta^0(f,l,i_*\overline{\eta},\overline{\xi}.) +\bar\partial\Delta_0(f,l,i_*\overline{\eta},\overline{\xi}.)$ and they are functorial in the following sense. $$j^*\Delta^0(f, l, {i}_*\overline{\eta}, \overline{\xi}.) =\Delta^0(f_{1},l_{1},{i_{1}}_*\overline{\eta}\mid_{W_1}, \overline{\xi}.\mid_{V_1})$$ and $$j^*\Delta_0(f, l, {i}_*\overline{\eta}, \overline{\xi}.) =\Delta_0(f_{1},l_{1},{i_{1}}_*\overline{\eta}\mid_{W_1}, \overline{\xi}.\mid_{V_1}).$$ \end{thm} \begin{proof} By the equivariant extension of \cite[(6.109), (6.110), (6.158), (6.170)]{Bi} in \cite[Definition 3.4]{BM}, there exist universal smooth forms $\gamma^3, \delta^3$ on $S$ such that $$\Delta+ d^{S}\beta=\bar{\partial}\gamma^3+\partial\delta^3.$$ Again $\beta$ is a combination of local terms from the small time
heat kernel asymptotics of the Bismut superconnection for the fibration $h$ and $h\circ f$, cf. \cite[Theorem 6.4, (6.36), (6.55)]{Bi} and
\cite[(2.24), (2.27)]{Ma1}. More precisely, before we carry out the limiting procedure $A\to +\infty$, $T_0\to +\infty$, $\varepsilon\to 0$, the forms $\gamma, \delta$ defined in \cite[(3.13)]{BM} are double integrals of certain supertraces of the heat kernel of the square of the Bismut superconnection as in \eqref{Ma2.4.7}. Note that the square of the Bismut superconnection is a second order fiberwise elliptic operator with differential form coefficients and that, when we consider the corresponding objects for the submersion $l_1$, each of the above terms is the restriction of the corresponding term for the global submersion $l$. Thus, if $\Delta_1, \gamma_1^3, \delta_1^3$, $\beta_{1}$ are the corresponding terms associated to the relevant fibrations $i_1, l_1$ and $f_1$, we have $$\Delta_1=j^*\Delta, \gamma_1^3=j^*\gamma^3, \delta_1^3=j^*\delta^3, \beta_1=j^*\beta.$$ So, writing $\Delta^0(f, l, {i}_*\overline{\eta}, \overline{\xi}.)=\gamma^3-\beta$ and $\Delta_0(f, l, {i}_*\overline{\eta}, \overline{\xi}.) = \delta^3-\beta$, we are done. \end{proof}
We can do the same analysis for \eqref{BM.2.4}.
Note that we can relax our condition on $f: V\to S$ as follows: $S$ is a (possibly noncompact) complex manifold and $f: V\to S$ is a K\"ahler fibration in the sense of Bismut-Gillet-Soul\'e \cite[Definition 1.4]{BGS2}.
\hspace{5cm} \hrulefill\hspace{5.5cm}
Shun Tang
Beijing Advanced Innovation Center for Imaging Theory and Technology
Academy for Multidisciplinary Studies, Capital Normal University
School of Mathematical Sciences, Capital Normal University
West 3rd Ring North Road 105, 100048 Beijing, P. R. China
E-mail: [email protected] \\
Xiaonan Ma
Universit\'{e} Paris Diderot - Paris 7
UFR de Math\'{e}matiques, Case 7012
75205 Paris Cedex 13, France
E-mail: [email protected]
\end{document}
\begin{document}
\title{Rational pulse design for enantiomer-selective microwave three-wave mixing}
\author{M. Leibscher} \affiliation{Theoretische Physik, Universit\"at Kassel, Heinrich-Plett-Stra{\ss}e 40, 34132 Kassel, Germany} \affiliation{Dahlem Center for Complex Quantum Systems and Fachbereich Physik, Freie Universit\"at Berlin, Arminallee 14, 14195 Berlin, Germany} \author{J. Kalveram}
\affiliation{Theoretische Physik, Universit\"at Kassel, Heinrich-Plett-Stra{\ss}e 40, 34132 Kassel, Germany} \author{C. P. Koch}
\affiliation{Theoretische Physik, Universit\"at Kassel, Heinrich-Plett-Stra{\ss}e 40, 34132 Kassel, Germany}
\affiliation{Dahlem Center for Complex Quantum Systems and Fachbereich Physik, Freie Universit\"at Berlin, Arminallee 14, 14195 Berlin, Germany}
\date{\today}
\begin{abstract}
Microwave three-wave mixing allows for enantiomer-selective excitation of randomly oriented chiral molecules into rotational states with different energy. The random orientation of the molecules is reflected in the degeneracy of the rotational spectrum with respect to the orientational quantum number $M$ and, if not accounted for, reduces enantiomer-selectivity. Here, we show how to design pulse sequences with maximal enantiomer-selectivity from an analysis of the $M$-dependence of the Rabi frequencies associated with rotational transitions induced by resonant microwave drives.
We compare different excitation schemes for rotational transitions and show that maximal enantiomer-selectivity at a given rotational temperature is achieved for synchronized three-wave mixing with circularly polarized fields. \end{abstract}
\maketitle
\section{Introduction} Chiral molecules cannot be superimposed onto their mirror image by rotation and translation; they exist in left- and right-handed forms called enantiomers. While the two enantiomers typically have different biochemical behavior, they share almost the same physical properties; in particular, they have practically identical spectra. Several techniques for the detection of enantiomers exploiting the interaction of the molecules with electromagnetic radiation have recently been developed. {\color{black} Since these techniques do not rely on the inherently weak interaction with the magnetic field of the radiation, they have a sufficiently high sensitivity for applications in the gas phase.} Among these are ultrafast spectroscopies based on photoelectron circular dichroism~\cite{LuxAngewandte12,CireasaNatPhys15}, high-harmonic generation~\cite{BakyushevaPRX18,NeufeldPRX19}, enantiomer-selective control of molecular rotation~\cite{TutunnikovJPCL18,MilnerPRL2019,TutunnikovPRA2020} and resonant phase-sensitive microwave three-wave mixing~\cite{PattersonNature13,ShubertAngewandte14,LobsigerJPCL15,Domingos2020}. These techniques are based on light-matter interaction in the dipole approximation, where the enantiomer-selective observable arises as a triple product of molecule-specific vectors which changes sign under exchange of the two enantiomers, independent of the molecular orientation~\cite{BychkovJETP01,OrdonezPRA18}. In addition to detecting enantiomeric excess, microwave three-wave mixing (3WM) can also be used to selectively excite enantiomers to different energy levels~\cite{EibenbergerPRL17,PerezAngewandte17,PerezJPCL18,Lee21}. This can serve as a precursor for the separation of enantiomers and the preparation of an enantio-pure sample out of a racemic mixture \cite{KralPRL01, FrishmanJCP03}. The enantiomer-selective excitation proceeds in a cyclic way involving three rotational states, corresponding to the enantiomer-selective triple product of the three non-vanishing Cartesian components of the molecular dipole moment, $\mu_a$, $\mu_b$ and $\mu_c$, of which one changes sign under exchange of the two enantiomers~\cite{HirotaPJA12}. The sign change causes constructive interference for one enantiomer and destructive interference for the other~\cite{KralPRL01,KralPRL03,HirotaPJA12,LehmannJCP18,YePRA2018,Leibscher19,YongPRA2021}.
In order to distill a single enantiomer from a racemic mixture, an enantiomer-selectivity of close to 100\% is required in the state transfer to the separate energy levels. In experiments, the efficiency is mainly limited by two factors. One is the temperature of the sample, i.e., the rotational states which are addressed in a three-wave mixing process are typically thermally populated. First demonstrations of enantiomer-selective state transfer chose one of the rotational levels with a large thermal weight as the starting point for the three-wave mixing~\cite{EibenbergerPRL17,PerezAngewandte17,PerezJPCL18}. Since thermal population in the excited states of the cycle cannot be coherently coupled, the contrast is reduced. The second limitation is due to degeneracies within the rotational spectrum. Denoting the rotational quantum number by $J$, every energy level of a rigid asymmetric top consists of $2J+1$ states with different values of the orientational quantum number $M$. As a result, some of the parallel cycles fail to close, and cycles with different $M$ involve different Rabi frequencies. This limits the efficiency of enantiomer-selective population transfer, even in the absence of thermal population in the excited states \cite{LehmannJCP18}.
The problem of temperature can be solved by addressing levels which are sufficiently excited such that their thermal population vanishes~\cite{Leibscher19,Zhang20}. Alternatively, the thermal population in the excited levels can be eliminated prior to the three-wave mixing, for example by optical pumping of an electronic transition \cite{Lee21}. The limitation due to the orientational degeneracy can also be mitigated---it requires a sufficiently large number of electric fields that break the corresponding symmetry~\cite{Leibscher20}. This has been shown by analyzing the controllability of finite dimensional subsystems of quantum asymmetric tops~\cite{Leibscher20,Pozzoli21}. Such an analysis allows one to determine the minimal number of microwave fields which is necessary to control enantiomer selective rotational dynamics~\cite{Leibscher20}. It also establishes the polarization directions and the frequencies of the required control fields. However, it cannot determine the actual pulse shapes, or the order of pulses in a sequence of microwave excitations which separates the enantiomers of a racemic mixture into different rotational states.
In the present study, we show how to derive pulse sequences with maximal enantiomer-selectivity from a quantitative analysis of the population dynamics of degenerate rotational states induced by resonant microwave fields. Forfeiting complete controllability, we identify simpler pulse sequences than those of Ref.~\cite{Leibscher20} which nevertheless lead to full enantiomer selectivity. We furthermore show how the analysis of the rotational dynamics allows us to extend the excitation schemes of Ref.~\cite{Leibscher20} and predict the pulse parameters, in particular the duration of the required pulses, for different experimental conditions and different molecular species.
The paper is organized as follows: In Section~\ref{sec:asymtop}, we summarize the properties of a rigid asymmetric top and its interaction with microwave fields. For a better understanding of the underlying rotational dynamics, we recall how rotational dynamics induced by a single microwave field depends on the quantum number $M$ of the rotational states (Section~\ref{sec:Mdependence}). These insights can be utilized to construct sequences of microwave pulses that result in complete enantio-selective population transfer despite the presence of degenerate states, independent of the general controllability of the system. In Section~\ref{sec:completeselection}, we present two examples: in Subsection~\ref{subsec:linearfields}, we show that a combination of three different three-wave mixing cycles addresses all degenerate initial states and leads to complete enantio-selection. In Subsection~\ref{subsec:circularfileds} we explore enantio-selective population transfer with circularly polarized microwave fields. The effects of rotational temperature and pulse duration are discussed in Subsection~\ref{subsec:temperature_accuracy}. In Section~\ref{sec:conclusions}, we summarize our results and conclude.
\section{Asymmetric top and its interaction with microwave radiation} \label{sec:asymtop} In general, rigid chiral molecules, {\color{black} i.e., chiral molecules in their electronic and vibrational ground state,} are asymmetric top molecules with the rotational Hamiltonian \begin{equation}
{\hat H}_{rot} = A {\hat J_a}^2 + B {\hat J_b}^2 + C {\hat J_c}^2, \end{equation} where $\hat J_a$, $\hat J_b$ and $\hat J_c$ are the angular momentum operators with respect to the principle molecular axes, and $A > B > C$ are the rotational constants. The eigenfunctions of an asymmetric top are determined by \begin{equation}\label{eq:Hrot}
H_{rot} |J,\tau,M \rangle = E_{J,\tau} |J, \tau, M \rangle\,. \end{equation}
Since Eq.~\ref{eq:Hrot} does not have an analytical solution, the asymmetric top eigenfunctions are typically expressed in terms of symmetric top eigenfunctions which admit a closed form via Wigner D-matrices \cite{Zare88}. The molecule becomes a prolate or oblate symmetric top with the eigenfunctions $|J, K_a, M \rangle$ or $|J, K_c, M \rangle$ for $B=C$, respectively $A=B$. The symmetric top wavefunctions are characterized by the rotational quantum number $J=0, 1,2 ...$ and the quantum numbers $M=-J, -J+1,...,J$ and $K=-J, -J+1,...,J$ which describe the orientation with respect to a space-fixed and molecule-fixed axis, respectively. The eigenfunctions of the asymmetric top are given by superpositions of symmetric top eigenstates, \begin{equation}
| J, \tau, M \rangle = \sum_K c_K^{J} (\tau) |J,K,M \rangle,
\label{asym_top} \end{equation}
where $K$-states with the same $J$ and $M$ are mixed, and $\tau=-J,-J+1,...,J$. In rotational spectroscopy, the asymmetric top states are usually denoted by $|J_{|K_a|,|K_c|,M} \rangle$. We therefore use this notation to characterize the asymmetric top eigenstates. The two notations relate to each other as follows: For a given $J$, the asymmetric top states with lowest energy can be denoted either by $|J_{|K_a|=0,|K_c|=J,M} \rangle$ or, using the quantum number $\tau$, by $|J,\tau=-J,M \rangle$, the ones with largest energy by $|J_{|K_a|=J,|K_c|=0,M} \rangle$ or $|J,\tau=J,M \rangle$. The states in between can be matched accordingly. The eigenenergies $E_{J,\tau}$ of an asymmetric top do not depend on the quantum number $M$ and thus each rotational level is $2J+1$-fold degenerate.
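In practice, the coefficients $c_K^{J}(\tau)$ in Eq.~(\ref{asym_top}) and the energies $E_{J,\tau}$ are obtained by diagonalizing $\hat{H}_{rot}$ within each $J$-block of the symmetric-top basis. The following minimal Python sketch illustrates this standard construction in the prolate-type basis (quantization along the $a$-axis); it is not the implementation used for the calculations reported here, and the rotational constants in the example are illustrative placeholders rather than the constants of any particular molecule.
\begin{verbatim}
import numpy as np

def asymmetric_top_block(J, A, B, C):
    """H_rot = A*Ja^2 + B*Jb^2 + C*Jc^2 in the prolate symmetric-top basis
    |J,K>, K = -J..J (quantization along the a-axis). Returns the energies
    E_{J,tau} (ascending, tau = -J..J) and the coefficients c_K^J(tau)."""
    K = np.arange(-J, J + 1)
    JJ = J * (J + 1)
    H = np.diag(A * K**2 + 0.5 * (B + C) * (JJ - K**2))
    for i, k in enumerate(K[:-2]):   # elements coupling K and K+2
        H[i, i + 2] = H[i + 2, i] = 0.25 * (B - C) * np.sqrt(
            (JJ - k * (k + 1)) * (JJ - (k + 1) * (k + 2)))
    E, c = np.linalg.eigh(H)
    return E, c                      # columns of c are the c_K^J(tau)

# illustrative rotational constants in MHz (placeholders, not a real molecule)
E, c = asymmetric_top_block(J=3, A=2000.0, B=700.0, C=600.0)
\end{verbatim}
The $2J+1$ eigenvalues returned for each $J$ correspond to $\tau=-J,\ldots,J$ in ascending order of energy, consistent with the labeling introduced above.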
The interaction of the molecules with an electromagnetic field in the electric dipole approximation is given by \begin{equation}
{\hat H}_{int} = - {\hat{\vec \mu}} \cdot {\vec E(t)}
\end{equation}
with the electric field
\begin{equation}
{\vec E(t)} = {\vec e} E(t) \cos (\omega t + \phi)\,,
\end{equation}
where ${\vec e}$ denotes the polarization direction, $E(t)$ the temporal shape, $\omega$ the frequency and $\phi$ the phase of the field.
Note that ${\hat{\vec \mu}}^T = ({\hat \mu}_x, {\hat \mu}_y, {\hat \mu}_z )$ is the molecular dipole moment in space-fixed coordinates. The transformation of the interaction Hamiltonian into molecule fixed coordinates is given by \cite{Zare88},
\begin{eqnarray}
\hat\mu_x &=& \frac{\mu_a} {\sqrt{2}} \left( D_{-10}^1 - D_{10}^1 \right) + \frac{\mu_b}{ 2} \left ( D_{11}^1 - D_{1-1}^1 - D_{-11}^1 + D_{-1-1}^1\right) \nonumber \\ &-& i \frac{\mu_c}{2} \left ( D_{11}^1 + D_{1-1}^1 - D_{-11}^1 - D_{-1-1}^1\right), \nonumber \\
\hat\mu_y &=& -i \frac{\mu_a}{\sqrt{2}} \left( D_{-10}^1 + D_{10}^1 \right) + i \frac{\mu_b}{ 2} \left ( D_{11}^1 - D_{1-1}^1 + D_{-11}^1 - D_{-1-1}^1\right) \nonumber \\ &+& \frac{\mu_c}{2} \left ( D_{11}^1 + D_{1-1}^1 + D_{-11}^1 + D_{-1-1}^1\right), \nonumber \\
\hat \mu_z &=& \mu_a D_{00}^1 - \frac{\mu_b}{\sqrt{2}} \left ( D_{01}^1 - D_{0-1}^1 \right) + i \frac{\mu_c}{\sqrt{2}} \left ( D_{01}^1 + D_{0-1}^1 \right),
\label{mu_projection}
\end{eqnarray} where $D_{MK}^J$ denote the elements of the Wigner $D$-matrix. We consider the interaction of an asymmetric top with microwave radiation and assume that the frequency is resonant to a particular rotational transition, i.e. $\omega = E_{J',\tau'} - E_{J,\tau}$. We assume that only rotational states with $E_{J',\tau'}$ and $E_{J,\tau}$ are addressed by the interaction. In broadband microwave spectroscopy, this assumption is typically justified for frequencies larger than about $50\,$MHz~\cite{PattersonNature13,ShubertAngewandte14,EibenbergerPRL17,PerezAngewandte17,PerezJPCL18}.
In order to investigate the population transfer between the rotational states induced by microwave pulses, we numerically solve the time-dependent Schr\"odinger equation, \begin{equation}
i \hbar \frac{\text{d}}{\text{d} t} |\psi(t) \rangle = \left( H_{rot} + H_{int}(t) \right) |\psi(t) \rangle\,, \label{TDSE} \end{equation} using the Chebychev propagation technique~\cite{Kosloff_Review1994} and the basis of the asymmetric top eigenfunctions, i.e., \begin{equation}
|\psi (t) \rangle = \sum_{J,\tau,M} a_{J,\tau}^M(t) |J, \tau, M \rangle. \end{equation}
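For completeness, we note that a Chebychev propagation of Eq.~(\ref{TDSE}) amounts to expanding the short-time propagator in Chebychev polynomials of a suitably rescaled Hamiltonian. The following Python sketch of a single propagation step is generic and is not the actual implementation used here; the spectral bounds are obtained by full diagonalization purely for simplicity, and for the explicitly time-dependent $\hat{H}_{int}(t)$ such steps are applied with the instantaneous Hamiltonian over intervals during which it is approximately constant.
\begin{verbatim}
import numpy as np
from scipy.special import jv   # Bessel functions J_n

def chebyshev_step(H, psi, dt, hbar=1.0):
    """Approximate psi(t+dt) = exp(-i*H*dt/hbar) psi via a Chebychev
    expansion, assuming H is (nearly) constant during the step."""
    ev = np.linalg.eigvalsh(H)                  # spectral bounds (simple, not cheap)
    dE, Ebar = 0.5 * (ev[-1] - ev[0]), 0.5 * (ev[-1] + ev[0])
    Hn = (H - Ebar * np.eye(len(psi))) / dE     # spectrum rescaled to [-1, 1]
    alpha = dE * dt / hbar
    nmax = int(alpha) + 30                      # series converges rapidly beyond alpha
    phi_prev, phi = psi, -1j * (Hn @ psi)       # phi_n = (-i)^n T_n(Hn) psi
    acc = jv(0, alpha) * phi_prev + 2 * jv(1, alpha) * phi
    for n in range(2, nmax):
        phi_prev, phi = phi, -2j * (Hn @ phi) + phi_prev
        acc += 2 * jv(n, alpha) * phi
    return np.exp(-1j * Ebar * dt / hbar) * acc
\end{verbatim}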
The transition matrix elements between two asymmetric top states can be expressed in terms of those of the symmetric top eigenstates, \begin{eqnarray}
\langle J'' \tau'' M'' | D_{MK}^1 | J' \tau' M' \rangle = \sum_{K',K''}
c_{K'}^{J'} \left ( c_{K''}^{J''} \right )^\ast \langle J'' K'' M'' |D_{MK}^1|J' K' M' \rangle \label{transition_asym}
\end{eqnarray}
with \begin{eqnarray}
\langle J'', K'', M'' | D^1_{MK} | J', K', M' \rangle &=& \sqrt{2 J'' +1} \sqrt{2 J' +1} (-1)^{M''+K''} \nonumber \\
&\times& \left( \begin{array}{ccc} J' & 1& J'' \\ M' & M & -M'' \end{array} \right)
\left( \begin{array}{ccc} J' & 1 & J'' \\ K' & K & -K'' \end{array} \right)\,.
\label{w3j} \end{eqnarray} The $M$-dependence of the transition matrix elements is due to the first Wigner $3j$-symbol in Eq.~(\ref{w3j}). While this dependence is well-known, it is useful to recall the corresponding rotational dynamics. This will allow us to better understand microwave three-wave mixing and to construct fully enantiomer-selective pulse sequences in the presence of degenerate states. In Section~\ref{sec:Mdependence}, we therefore first discuss how the $M$-dependence of the transition matrix elements and the polarization of the microwave pulses affect the population transfer between the rotational states. In Section~\ref{sec:completeselection}, we then apply these results and design pulse sequences that allow for complete enantiomer-selective population transfer despite the degeneracy of the rotational spectrum.
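To make Eqs.~(\ref{transition_asym}) and (\ref{w3j}) concrete, the sketch below evaluates $\langle J''\tau''M''|D^1_{MK}|J'\tau'M'\rangle$ from the coefficient vectors $c_K^J(\tau)$ (for instance, the eigenvector columns obtained from the diagonalization sketch above); the Wigner $3j$-symbols are taken from sympy. This only illustrates the matrix-element structure; the full Rabi frequencies additionally involve the dipole components $\mu_a,\mu_b,\mu_c$, the field amplitude and the polarization, as in Eq.~(\ref{mu_projection}).
\begin{verbatim}
import numpy as np
from sympy.physics.wigner import wigner_3j

def wigner_D_matrix_element(J2, M2, c2, J1, M1, c1, M, K):
    """<J2 tau2 M2 | D^1_{MK} | J1 tau1 M1> according to Eqs.
    (transition_asym) and (w3j); c1, c2 are the coefficient vectors
    c_K^J(tau) over K = -J..J of the ket and bra states."""
    elem = 0.0
    for i1, K1 in enumerate(range(-J1, J1 + 1)):
        K2 = K1 + K                 # enforced by the second 3j-symbol
        if abs(K2) > J2:
            continue
        i2 = K2 + J2
        pref = np.sqrt((2 * J2 + 1) * (2 * J1 + 1)) * (-1) ** (M2 + K2)
        elem += (c1[i1] * np.conj(c2[i2]) * pref
                 * float(wigner_3j(J1, 1, J2, M1, M, -M2))
                 * float(wigner_3j(J1, 1, J2, K1, K, -K2)))
    return elem
\end{verbatim}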
\section{M-dependence of the population transfer between rotational states} \label{sec:Mdependence}
In the following, we consider a rotational subsystem consisting of the asymmetric top states $|2_{02M} \rangle$, $|3_{13M} \rangle$ and $|3_{12M} \rangle$. Transitions between these rotational states have been utilized in microwave experiments for enantiomer selective excitation \cite{PerezAngewandte17}. To recall how population transfer between rotational states depends on the orientational quantum number and for the sake of clarity, we assume in this subsection that a single microwave pulse interacts with the asymmetric top. Moreover, only one rotational level is taken to be occupied initially such that
\begin{equation}
\rho(t=0) = \frac{1}{2J_0+1} \sum_{M=-J_0}^{J_0} |J_0,\tau_0,M\rangle \langle J_0, \tau_0, M|\,.
\label{rho_initial_single_pulse}
\end{equation}
Figure~\ref{transition_z} shows the rotational dynamics $|a_{J,\tau}^{M}|^2$ for the different initial states during the interaction with a linearly polarized field along the laboratory-fixed $z$-axis. \begin{figure}
\caption{Population transfer between the rotational states $|3_{13M} \rangle$ and $|3_{12M} \rangle$ (a) and $|2_{02M} \rangle$ and $|3_{12M} \rangle$ (b) driven by a $z$-polarized microwave field.
The rotational manifolds are depicted in the top panels. Colored circles represent the initial states. The lower panels show the population of the rotational states. Green, blue, red, and black lines correspond to states with $M=0$, $|M|=1$, $|M|=2$, and $|M|=3$ (the latter only in panel (a)).}
\label{transition_z}
\end{figure} We assume microwave fields with constant amplitude and with frequencies given by $\hbar\omega= E_{3_{12}} - E_{3_{13}}$ in Fig.~\ref{transition_z}(a) and $\hbar\omega= E_{3_{12}} - E_{2_{02}}$ in Fig.~\ref{transition_z}(b). The relevant rotational states are sketched on the top, with the initially occupied states indicated by colored circles. Since for a $z$-polarized field only transitions with $\Delta M = 0$ occur, the dynamics is divided into five individual two-level systems with Rabi frequencies
\begin{equation}
\hbar \Omega = | \langle J', \tau', M | \mu_z E_0 | J, \tau, M \rangle | \propto
\left | \left( \begin{array}{ccc} J & 1& J'\\ M & 0 & -M \end{array} \right) \right |.
\label{Rabi_general}
\end{equation}
The Rabi frequencies, and thus the time required for complete population transfer, depend on the quantum number $M$. For transitions with $\Delta J = 0$, cf. Fig.~\ref{transition_z}(a),
\begin{equation}
\hbar \Omega \propto \left| \left( \begin{array}{ccc} J & 1& J\\ M & 0 & -M \end{array} \right) \right| =
\left | \frac{M}{\sqrt{(2J+1)(J+1)J}} \right|\,.
\end{equation}
The Rabi frequency is thus proportional to $|M|$, i.e., for states with $|M|=3$, three Rabi cycles occur at the time when one Rabi cycle is completed for states with $|M|=1$, as can be seen in Fig. \ref{transition_z}(a). Moreover, for $\Delta J = 0$, transitions with $M'=M''=0$ are forbidden. As a result, it is not possible to obtain complete population inversion between the levels $3_{13}$ and $3_{12}$ with a single resonant microwave pulse.
For transitions with $\Delta J = 1$, cf. Fig.~\ref{transition_z}(b), the Rabi frequencies are proportional to
\begin{eqnarray}
\hbar \Omega \propto \left | \left( \begin{array}{ccc} J & 1& J+1\\ M & 0 & -M \end{array} \right) \right | = \left|
\frac{\sqrt{(J+M+1)(J-M+1)}}{\sqrt{(2J+3)(2J+1)(J+1)}} \right |. \label{Rabi_z_deltaJ1} \end{eqnarray} Here, $\Omega$ is maximal and the Rabi cycle fastest for $M=0$. Rabi cycles with larger $|M|$ are slightly slower, as can be seen in Fig.~\ref{transition_z}(b). The Rabi frequencies differ by irrational factors; for the transition considered here ($J=2$), $\Omega \propto \sqrt{9-M^2}$. Thus, only approximate population inversion for the two rotational levels can be obtained in finite time.
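As a concrete illustration, evaluating Eq.~(\ref{Rabi_z_deltaJ1}) for the $2_{02}\to 3_{12}$ transition ($J=2$) gives
\begin{equation*}
\Omega_{M=0} : \Omega_{|M|=1} : \Omega_{|M|=2} \;=\; 3 : 2\sqrt{2} : \sqrt{5} \;\approx\; 3 : 2.83 : 2.24\,,
\end{equation*}
so that no finite pulse duration corresponds to an integer number of Rabi cycles for all $M$-states simultaneously.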
If we consider the interaction of an asymmetric top with a single microwave pulse, the polarization direction does not influence the rotational dynamics since we can always choose the quantization axis to be parallel to the polarization direction. However, three-wave mixing relies on the interaction of chiral molecules with three orthogonally polarized fields. To understand the underlying rotational dynamics, it is important to study the population transfer induced by fields which are polarized perpendicular to the quantization axis (here chosen to be the $z$-axis). We therefore consider the interaction of an asymmetric top with $x$-polarized microwave pulses in Fig.~\ref{transition_x}. \begin{figure}
\caption{Same as Fig.~\ref{transition_z} but for an $x$-polarized field. Green lines correspond to states with $M=0$; solid and dashed blue, red, and black lines correspond to states with $M=\mp 1$, $M=\mp 2$, and $M=\mp 3$, respectively.}
\label{transition_x}
\end{figure}
For simplicity we consider only a single initial state, depicted by the black dots in Fig.~\ref{transition_x}(a) and (b). For the simulation shown in Fig.~\ref{transition_x}(a), the initial state is $|3_{1,3,-3} \rangle$, and the frequency of the microwave pulse is given by $\hbar\omega = E_{3_{12}} - E_{3_{13}}$. An $x$-polarized field induces transitions with $\Delta M = \pm 1$. Thus, the dynamics cannot be described by a two-level system. Instead, the field couples all states connected by the dashed lines in the top panel of Fig.~\ref{transition_x}(a) and population is transferred through the complete manifold of $M$-states. Nevertheless, complete population inversion occurs between the states $3_{1,3,-3}$ and $3_{1,3,3}$ with all states in between only partially populated. This picture changes for transitions with $\Delta J = 1$, as shown in Fig.~\ref{transition_x}(b): Rapid oscillations between the states $3_{1,3,-3}$ and $2_{0,2,-2}$ with incomplete population transfer are observed before other states are substantially populated. In this case, population inversion between the states with maximal value of $|M|$ ($3_{1,3,-3}$ and $3_{1,3,3}$) remains incomplete. The observation in both cases can be rationalized in terms of the three Rabi frequencies relevant to the seven states that are coupled by the $x$-polarized pulse (note the symmetry around $M=0$). In Fig.~\ref{transition_x}(a), these are $\Omega_1$, $\Omega_2 = 2 \Omega_1$ and $\Omega_3=3\Omega_1$, i.e., the dynamics is strictly periodic. In Fig.~\ref{transition_x}(b), the ratios between the three frequencies $\Omega_1$, $\Omega_2$ and $\Omega_3$ are irrational, and the dynamics is thus not strictly periodic. Note that, for excitation with a single pulse, the population transfer induced by $x$- and $y$-polarized fields is identical; the corresponding transition matrix elements differ only in sign \cite{Leibscher19}. This will become important for cyclic excitation with three orthogonal pulses.
Due to the spread of population over the complete $M$-manifold, excitation with a combination of $x$-, $y$- and $z$-polarized fields does not result in closed three-level cycles and thus poses a challenge to three-wave mixing. One way to overcome this problem is to use circularly polarized fields with polarization ${\vec e}_{\sigma_\pm}= \vec{e}_x \pm i {\vec e}_y$ instead of linearly polarized fields. The rotational dynamics resulting from interaction with ${\sigma}_+$-polarized microwave pulses is shown in Fig.~\ref{transition_sigma}. \begin{figure}
\caption{Same as Fig.~\ref{transition_z} but for a $\sigma_+$-polarized field. Green lines correspond to states with $M=0$; solid and dashed blue, red, and black lines correspond to states with $M=\mp 1$, $M=\mp 2$, and $M=\mp 3$, respectively.}
\label{transition_sigma}
\end{figure} A field with ${\sigma}_+$-polarization allows for transitions with $\Delta M = + 1$ from the lower to the higher level and with $\Delta M = - 1$ for the reverse process. With such transitions, the rotational manifold decomposes into individual two-level systems. The $M$-dependence of the Rabi frequencies for right-circularly polarized radiation is given by \begin{eqnarray}
\hbar \Omega \propto \left | \left( \begin{array}{ccc} J & 1& J\\ M & 1 & -(M + 1) \end{array} \right) \right | = \left|
\frac{\sqrt{(J - M)(J + M+1)}}{\sqrt{(2J+2)(2J)J}} \right |. \label{Rabi_sigma_deltaJ0} \end{eqnarray} for transitions with $\Delta J = 0$ and by \begin{eqnarray}
\hbar \Omega \propto \left | \left( \begin{array}{ccc} J & 1& J+1\\ M & 1 & -(M + 1) \end{array} \right) \right | = \left|
\frac{\sqrt{(J + M+2)(J + M+1)}}{\sqrt{(2J+3)(2J+2)(2J+1)}} \right | \label{Rabi_sigma_deltaJ1} \end{eqnarray}
for transitions with $\Delta J =1$. In both cases, the Rabi frequencies for different values of $M$ differ by irrational factors. Complete population transfer between two rotational levels can thus be achieved only approximately.
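For illustration, evaluating Eq.~(\ref{Rabi_sigma_deltaJ1}) for a $\sigma_+$-driven transition from $J=2$ to $J'=3$ gives $\Omega_M \propto \sqrt{(4+M)(3+M)}$, i.e.,
\begin{equation*}
\Omega_{M=-2} : \Omega_{M=-1} : \Omega_{M=0} : \Omega_{M=1} : \Omega_{M=2}
\;=\; \sqrt{2} : \sqrt{6} : 2\sqrt{3} : 2\sqrt{5} : \sqrt{30}\,,
\end{equation*}
which is why the parallel two-level cycles have to be synchronized in order to obtain (approximately) complete population transfer in all of them.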
In summary, the calculations presented in this section illustrate three mechanisms by which $M$-degeneracy affects rotational dynamics --- forbidden transitions, $M$-dependence of transition matrix elements and thus Rabi frequencies, and occurrence of sequential transitions for population transfer in the case of $x$- and $y$-polarized pulses. With this knowledge, we can already draw conclusions for the design of pulses leading to complete enantiomer-selective population transfer. The sequential transitions which occur in the case of $x$- and $y$-polarized pulses complicate the design of pulse sequences. While complete enantiomer-selective population transfer is possible when using only linearly polarized fields, it comes at the expense of rather complicated pulse sequences~\cite{Leibscher20}. A more straightforward way to construct a pulse sequence for complete enantiomer-selective excitation is presented in Section \ref{subsec:linearfields}. Moreover, Fig.~\ref{transition_sigma} tells us that the simplest way to extend three-wave mixing to a manifold of degenerate states is to apply a combination of circularly polarized fields with a $z$-polarized field since this leads to a set of parallel three-level cycles. Due to the $M$-dependence of the Rabi frequencies, parallel cycles need to be synchronized to achieve complete enantiomer-selective excitation \cite{Leibscher20}. In contrast to excitation schemes with only linearly polarized fields, three-wave mixing using a combination of circularly and linearly polarized fields can be adapted in a straightforward manner to different rotational subsystems. In Section \ref{subsec:circularfileds}, we apply it to those rotational transitions in the carvone molecule which have been utilized in earlier microwave three-wave-mixing experiments \cite{PerezAngewandte17}.
\section{Complete enantiomer-selective excitation of degenerate rotational states} \label{sec:completeselection} In the following, we make use of the insights from Section~\ref{sec:Mdependence} to design pulse sequences which induce complete enantiomer-selective excitation despite the degeneracy of rotational states. {\color{black} All results presented in this section are obtained by numerically solving the time-dependent Schr\"odinger equation (\ref{TDSE})}. In Subsection~\ref{subsec:linearfields}, we design sequences of linearly polarized pulses to achieve complete enantiomer selectivity. As mentioned in Section~\ref{sec:Mdependence}, the construction of closed cycles for complete enantiomer selection is particularly difficult in the presence of degenerate initial states if only linearly polarized pulses are used. We therefore consider the simplest rotational subsystem, i.e., the manifold of $J=0$ and $J=1$, in this subsection. Rotational subsystems with larger $J$ will be considered in Subsection \ref{subsec:circularfileds}, where synchronized circularly polarized pulses are applied. The effects of initial rotational temperature and of the pulse duration are discussed in Subsection~\ref{subsec:temperature_accuracy}.
\subsection{Pulse design using linearly polarized fields: Combination of three-wave mixing cycles} \label{subsec:linearfields} The smallest subsystem with three rotational levels consists of the state with $J=0$, i.e., the non-degenerate rotational ground state denoted by $0_{00}$, and two excited rotational levels with $J=1$, namely $1_{11}$ and $1_{10}$, as shown in Fig. \ref{fig_schemaj1j2}. \begin{figure}\label{fig_schemaj1j2}
\end{figure} In a typical three-wave mixing pulse sequence, the first pulse creates a 50-50-coherence between the ground state and one of the excited states. The second (twist) pulse transfers the complete excited state population to the second excited state. Finally, the third pulse induces the separation by causing constructive interference for one enantiomer and destructive interference for the other in the respective rotational state. Starting with the rotational ground state, interaction with such a sequence of three pulses, polarized in $x-$, $y-$, and $z-$ direction and with frequencies $\omega_1$, $\omega_2$ and $\omega_3$ induces complete enantio-selective population transfer \cite{Leibscher19}. However, if the initial condition is given by \begin{equation}
\rho(t=0) = \frac{1}{3} \sum_M |1_{11M} \rangle \langle 1_{11M}|, \end{equation} i.e., if the three degenerate states of level $1_{11}$ are populated, such a three-wave mixing pulse sequence cannot induce complete enantiomer selective excitation. This can be rationalized with the help of Fig.~\ref{fig_schemaj1j2}. A three-wave mixing cycle can be realized by three different combinations of $x$-, $y$- and $z$-polarized microwave pulses, as seen in panels (i) - (iii). Here, the transitions induced by $x$-, $y$-, and $z$-polarized fields are marked by dashed, dotted, and solid lines, respectively, with the colors indicating the frequencies of the corresponding fields. The degenerate initial states are indicated by black circles. The three-wave mixing cycle (i) only affects the initial state with $M=0$, while the cycles (ii) and (iii) transfer the population from the initial states with $M=\pm1$. Thus, only one or two of the initially populated states are part of each closed three-level cycle and therefore only part of the initial population can be selectively transferred to different rotational states.
\begin{figure}
\caption{Population of the rotational levels $0_{00}$ (top), $1_{11}$ (middle) and $1_{10}$ (bottom) averaged over all $M$-states. The enantiomers $(+)$ and $(-)$ are represented by solid blue and dashed red lines. The envelopes of the microwave pulses are indicated by the turquoise ($\omega=\omega_1$), orange ($\omega=\omega_2$) and yellow ($\omega=\omega_3$) shapes, and the polarization of the fields by the indices $x,y$ and $z$. (a): Excitation by the three microwave pulses indicated in Fig. \ref{fig_schemaj1j2} (i); (b): Excitation by a combination of the two three-wave mixing cycles depicted in Fig. \ref{fig_schemaj1j2} (i) and (ii); (c): Excitation by a combination of all three cycles (i), (ii) and (iii).}
\label{3wm_J01}
\end{figure} \begin{figure}\label{3wm_J01m}
\end{figure}
Figures~\ref{3wm_J01}(a) and \ref{3wm_J01m}(a) present the population transfer for the three-wave mixing cycle depicted in Fig.~\ref{fig_schemaj1j2}(i) with Fig. \ref{3wm_J01}(a) showing the time-dependent population of each rotational level averaged over the corresponding $M$-states for the two enantiomers (depicted by solid blue and dashed red lines), whereas the population of the individual $M$-states is shown in Fig. \ref{3wm_J01m}(a) for one of the enantiomers. Cycle (i) results in complete enantiomer-selective population transfer for the initial state $|1_{110} \rangle$, whereas the initial states $|1_{11\pm 1} \rangle$ are affected only by the first pulse, depicted in dashed orange lines in Fig. \ref{fig_schemaj1j2}(i). They are not part of a complete three-wave mixing cycle and thus no enantiomer-separation occurs for these initial states. As a result, the ground state $|0_{000} \rangle$ is populated only by a single enantiomer at final time, as shown in the top panel of Fig. \ref{3wm_J01} (a). In the first excited state, no enantiomer-separation is observed at all (middle panel), while only partial enantio-selectivity occurs in the second excited state, as shown in the bottom panel.
We quantify the overall selectivity by \begin{equation} S=\sum_{i=1}^{3} \Delta p_i(t_{final})\,,
\label{selectivity} \end{equation}
where $\Delta p_i(t) = |p_i^{(+)}-p_i^{(-)} | / |p_i^{(+)}+p_i^{(-)} |$ is the normalized population difference between the two enantiomers $(+)$ and $(-)$ in the three energy levels $i=0_{00}, 1_{11}, 1_{10}$. For excitation with the three-wave mixing cycle (i), $S=0.33$. The same amount of selectivity can be obtained by cycles (ii) and (iii).
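For reference, evaluating Eq.~(\ref{selectivity}) from the final-time level populations amounts to the short calculation sketched below; the population values used here are placeholders for illustration only and are not results of the simulations.
\begin{verbatim}
import numpy as np

def selectivity(p_plus, p_minus):
    # S = sum_i |p_i^(+) - p_i^(-)| / |p_i^(+) + p_i^(-)|, summed over the
    # three levels 0_00, 1_11 and 1_10
    p_plus = np.asarray(p_plus, dtype=float)
    p_minus = np.asarray(p_minus, dtype=float)
    return float(np.sum(np.abs(p_plus - p_minus) / np.abs(p_plus + p_minus)))

# placeholder final-time populations (0_00, 1_11, 1_10) of enantiomers (+) and (-)
print(selectivity([0.4, 0.3, 0.3], [0.4, 0.1, 0.5]))
\end{verbatim}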
A better selectivity can be expected if cycle (i) is combined with cycles (ii) or (iii) since all initial states are then addressed by a closed cycle. As an example we combine cycles (i) and (ii). The resulting population transfer is depicted in Figs. \ref{3wm_J01}(b) and \ref{3wm_J01m}(b) where the pulse sequence starts with cycle (ii). The first pulse (dashed turquoise lines in Fig. \ref{fig_schemaj1j2}(ii)) affects only the initial states $|1_{11\pm 1} \rangle$ and creates a coherence between the states $|1_{11\pm 1} \rangle$ and $|0_{000} \rangle$. The second pulse (solid yellow line) transfers the ground state population to the second excited state. The third pulse, depicted by dashed orange lines in Fig. \ref{fig_schemaj1j2} (ii), closes the cycle and, at the same time, acts as the first pulse of cycle (i). By combination of the two three-wave mixing cycles, complete separation of the enantiomers is achieved in levels $0_{00}$ and $1_{10}$, while the selectivity in level $1_{11}$ is still incomplete. Overall, the selectivity increases to $ S= 0.66$. Although all initial states are part of closed cycles, complete enantiomer-selectivity cannot be obtained. This is due to the population transfer being induced by $x$- or $y$-polarized fields. As shown in Fig. \ref{transition_x}, an $x$-polarized field does not induce a Rabi oscillation between two levels, but rather spreads the population over all $M$-states. Thus, part of the population of the initial state $|1_{11-1} \rangle$ leaks from the cycle connecting $|1_{11-1} \rangle$, $|0_{000} \rangle$ and $|1_{100} \rangle$. The same holds for the population from the initial state
$|1_{111} \rangle$.
Enantiomer-selective population transfer can be completed by adding the third excitation scheme from Fig. \ref{fig_schemaj1j2}. The resulting rotational dynamics is shown in Fig. \ref{3wm_J01}(c) and Fig.~\ref{3wm_J01m}(c). The corresponding pulse sequence consists of seven pulses: three pulses with $x$-, $y$-, and $z$-polarization are resonant with the transition $0_{00} \leftrightarrow 1_{11}$, two pulses, $x$- and $z$-polarized, are resonant with $1_{11} \leftrightarrow 1_{10}$ and two pulses, $y$- and $z$-polarized, are resonant with $0_{00} \leftrightarrow 1_{10}$. In order to obtain complete selectivity, it is important to properly synchronize the three cycles by using overlapping pulses and adjusting the field strengths. At $t=0$, both enantiomers are assumed to populate only level $1_{11}$. At the end of the pulse sequence, enantiomer $(+)$ (blue lines) populates level $1_{11}$, while enantiomer $(-)$ (red lines) exists only in rotational states belonging to level $1_{10}$. The rotational ground state is empty. By combining all three three-wave mixing cycles, $S=0.98$ is obtained, i.e., almost 100 \% enantiomer-selectivity. Note that a systematic optimization of the pulse parameters will allow pushing the enantiomer-selectivity even closer to 100 \%.
The sequence achieving essentially complete enantiomer-selectivity, cf. Figs.~\ref{3wm_J01}(c) and ~\ref{3wm_J01m}(c), consists of seven different microwave fields. It is constructed by combining all the different three-wave mixing schemes that exist for this rotational subsystem. This should be compared to Ref.~\cite{Leibscher20}, where enantiomer-selective excitation for the same rotational subsystem has been identified by means of a controllability study. In particular, five different fields, i.e., fields with different combinations of frequency and polarization, were found to be sufficient for the system to be enantiomer-selective controllable~\cite{Leibscher20}. While the minimal number of different fields is given by controllability analysis, the actual pulse shapes or sequence of pulses has to be determined by other means. In Ref.~\cite{Leibscher20}, a sequence of 12 individual pulses (using five different combinations of frequency and polarization) has been shown to yield complete enantiomer-selectivity. Controllability analysis on the one hand, and pulse design resulting from knowledge of the rotational dynamics are thus two complementary approaches to achieve a complete enantiomer-selective excitation (or any other desired target) in the presence of degeneracies in the rotational spectrum. The pulse sequence constructed here, while simpler than the pulse sequence found in Ref.~\cite{Leibscher20}, is challenging for current microwave experiments due to the need to carefully adjust the field intensities of overlapping pulses. Moreover, there is no automatic way to transfer these results to rotational subsystems with larger $J$. To overcome these limitations, we consider in the following a different excitation strategy, namely the use of circularly polarized fields.
\subsection{Pulse design with circularly polarized fields} \label{subsec:circularfileds} As discussed in Section III, replacing the linear polarizations along $x$ and $y$ by circular polarizations prevents the spread of the initial population over the $M$-manifold. An excitation scheme using left- and right-circularly polarized pulses together with a $z$-polarized pulse has already been proposed in Ref.~\cite{Leibscher20}, where this combination of microwave fields was proven to lead to complete enantiomer-selectivity. Here, we extend this strategy to a set of rotational states for which microwave three-wave mixing was demonstrated experimentally~\cite{PerezAngewandte17}, namely the $2_{02}$, $3_{13}$ and $3_{12}$ states of carvone, depicted in Fig.~\ref{scheme_J2J3}.
\begin{figure}
\caption{Three-wave mixing with circularly polarized fields for the rotational levels $2_{02}$, $3_{13}$ and $3_{12}$ with yellow, turquoise and orange lines indicating microwave fields with $\omega_3$ ($z$-polarization), $\omega_1$ ($\sigma_-$-polarization) and $\omega_2$ ($\sigma_+$-polarization). The initial states are indicated by black circles.}
\label{scheme_J2J3}
\end{figure}
\begin{figure}
\caption{Population of the rotational levels $2_{02}$ (top), $3_{13}$ (middle) and $3_{12}$ (bottom) averaged over all $M$-states for excitation by a three-wave mixing scheme with $z$-, $x$-, and $y$-polarized fields (a), with $z$-, $\sigma_-$- and $\sigma_+$-polarized fields (b), and with $z$-, $\sigma_-$- and $\sigma_+$-polarized fields whose pulse durations are adjusted to allow for synchronization of the individual Rabi cycles (c). The two enantiomers are depicted by solid blue and dashed red lines, respectively. The envelopes of the microwave pulses are indicated by the turquoise ($\omega=\omega_1$), orange ($\omega=\omega_2$), and yellow ($\omega=\omega_3$) shapes. Note that the yellow pulse is 10 times as intense as the other pulses.}
\label{3wm_J2J3}
\end{figure} \begin{figure}
\caption{Population of the individual $M$-states for the rotational levels $2_{02}$ (top), $3_{13}$ (middle) and $3_{12}$ (bottom) for enantiomer $(+)$ for the same excitation schemes as in Fig. \ref{3wm_J2J3}. The solid and dashed gray, black and green lines represent the states with $M=\mp 3$, $\mp 2$ and $\mp 1$, respectively. The states with $M=0$ are represented by the dotted purple lines.}
\label{3wm_J2J3m}
\end{figure} We assume that initially, all degenerate states of the lowest rotational level $2_{02}$ are equally populated, i.e., \begin{equation}
\rho(0) = \frac{1}{5} \sum_M |2_{02M} \rangle \langle 2_{02M} |\,, \end{equation}
c.f. the black circles in Fig. \ref{scheme_J2J3}, whereas levels $3_{13}$ and $3_{12}$ are empty. Such an initial condition can be realized by choosing the two excited rotational states in a vibrational state $\nu > 0$ \cite{Leibscher19,Zhang20} or by depleting excited rotational levels by laser excitation \cite{Lee21}. Figures \ref{3wm_J2J3} and \ref{3wm_J2J3m} show the population transfer for different pulse sequences, with the average population of the three rotational levels for the two enantiomers plotted in Fig. \ref{3wm_J2J3}, while Fig. \ref{3wm_J2J3m} shows the population of the individual $M$-states for a single enantiomer. In panel (a) of Figs.~\ref{3wm_J2J3} and \ref{3wm_J2J3m}, we show, for reference, the population transfer for a three-wave mixing scheme with $x$-, $y$- and $z$-polarized fields which corresponds to an enantiomer-selectivity of $S \approx 0.74$, with enantiomeric excess of enantiomer (+) and (-) in the levels $3_{13}$ and $3_{12}$, respectively and level $2_{02}$ empty. Using $\sigma_+$ and $\sigma_-$-polarized fields instead of the $x$- and $y$-polarized ones confines each of the initial states $|2_{02M}\rangle$ with $M=-2,...,2$ to a single 3-level cycle, as seen in Fig.~\ref{scheme_J2J3}. We take the first pulse, as before, to be a $z$-polarized $\pi/2$-pulse, i.e., the pulse duration is chosen such that 50\% of the population (averaged over the $M$-states) is in level $3_{12}$ and 50\% remains in the ground state $2_{02}$. The second and third pulses are $\sigma_-$ and $\sigma_+$-polarized, respectively, with pulse durations determined as in a standard three-wave mixing scheme. The resulting average population is shown in Fig.~\ref{3wm_J2J3}(b), and the population of the individual $M$-states can be seen in Fig.~\ref{3wm_J2J3m}(b). The excitation scheme leads to almost complete enantiomer-selection for the levels $3_{12}$ and $3_{13}$. However, about 20\% of the population of both enantiomers remains in the lowest level $2_{02}$. This is already a clear advantage compared to the standard three-wave mixing scheme with only linearly polarized fields. One could, for example, obtain a purified sample by extracting the population of either level $3_{13}$ or $3_{12}$, each corresponding to a single enantiomer. The overall incomplete enantiomer-selectivity can be rationalized as follows: The first pulse leads to an average 50/50-coherence between the levels $2_{02}$ and $3_{13}$, c.f. Fig.~\ref{3wm_J2J3}(b). However, as also illustrated in Fig.~\ref{transition_z}, the Rabi frequencies of the two-level transitions with different $|M|$ are different, with transitions between $M=0$ states having the largest and transitions between $M=\pm 2$ states having the smallest Rabi frequency. This results in population transfer of more than 50\% for states with $M=0$ and $M=\pm 1$ and less than 50\% for $M=\pm 2$, c.f. Fig.~\ref{3wm_J2J3m}(b).
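The uneven population transfer can be illustrated with a small numerical estimate. The sketch below assumes, purely for illustration, Rabi frequencies $\Omega_M \propto \sqrt{(J+1)^2-M^2}$ for the $z$-polarized $\Delta J=+1$, $\Delta M=0$ transitions out of $J=2$ (the actual prefactor, which involves the dipole moment and the field strength, only rescales the time axis and is given by Eq.~(\ref{Rabi_z_deltaJ1})). It determines the pulse duration at which the $M$-averaged transfer first reaches 50\% and prints the transfer for the individual $M$-states.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

J = 2
M = np.arange(-J, J + 1)
# assumed M-dependence of the Rabi frequency (z-polarized, Delta J = +1);
# the overall prefactor is set to one, so times are in units of its inverse
omega = np.sqrt((J + 1)**2 - M**2)

def transfer(t):
    # population transferred out of the initial state after a resonant pulse
    return np.sin(omega * t)**2

# first duration at which the M-averaged transfer equals 50%
t_half = brentq(lambda t: transfer(t).mean() - 0.5, 1e-6, np.pi / (2 * omega.max()))
for m, p in zip(M, transfer(t_half)):
    print(f"M = {m:+d}: transferred population = {p:.2f}")
\end{verbatim}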
One can account for the different Rabi frequencies by adjusting the pulse duration such that every single $M$-state undergoes a $50/50$-population transfer. To achieve that, one has to wait for several Rabi cycles until all individual cycles are synchronized. Similarly, the complete population transfer between the states $3_{12}$ and $3_{13}$ has to be synchronized. The pulse durations were determined by a simple parameter optimization using the NLopt software package~\cite{NLopt}. Pulse shapes and maximal field strengths were kept fixed. The resulting rotational dynamics is displayed in Figs. \ref{3wm_J2J3}(c) and \ref{3wm_J2J3m}(c). The fully synchronized three-wave mixing scheme with circularly polarized fields leads to almost complete enantiomer-selective excitation --- one enantiomer is entirely transferred to level $3_{12}$ (dashed red lines in Fig. \ref{3wm_J2J3}), while the second enantiomer ends up in level $3_{13}$ (blue lines in Fig. \ref{3wm_J2J3}). There is no population left in level $2_{02}$ and the overall enantiomer-selectivity amounts to $S=97\%$, limited -- according to Eqs.(\ref{Rabi_z_deltaJ1}), (\ref{Rabi_sigma_deltaJ0}) and (\ref{Rabi_sigma_deltaJ1}) -- by the Rabi frequencies for the different transitions differing by irrational factors.
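The parameter optimization mentioned above can be set up, for instance, with the Python interface of NLopt as sketched below. This is only a schematic illustration: it optimizes a single pulse duration against a simplified cost function built from the assumed $M$-dependent Rabi frequencies, whereas the actual calculations propagate the full time-dependent Schr\"odinger equation (\ref{TDSE}) and adjust several pulse durations.
\begin{verbatim}
import numpy as np
import nlopt

J = 2
# assumed M-dependence of the Rabi frequencies (overall prefactor set to one)
omega = np.sqrt((J + 1)**2 - np.arange(-J, J + 1)**2)

def objective(x, grad):
    # summed deviation of every individual M-transition from a 50/50
    # coherence after a resonant pulse of duration x[0]
    return float(np.sum((np.sin(omega * x[0])**2 - 0.5)**2))

opt = nlopt.opt(nlopt.LN_NELDERMEAD, 1)   # derivative-free local search
opt.set_min_objective(objective)
opt.set_lower_bounds([0.0])
opt.set_upper_bounds([50.0])              # cap on the admissible duration
opt.set_xtol_rel(1e-8)
t_opt = opt.optimize([10.0])              # initial guess for the duration
print("optimal duration:", t_opt[0], "residual:", opt.last_optimum_value())
\end{verbatim}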
While it is possible to synchronize the transitions such that a 50/50-coherence is obtained for all individual transitions with arbitrary accuracy, this comes at the expense of longer pulse durations. Eventually, the pulse durations are limited by the coherence time of the experiment, determined e.g. by collisions of the molecules in the sample with background gas. In practice, it is thus always necessary to find a compromise between accurate synchronization and pulse duration. We discuss the relation between accuracy and pulse duration in Subsection \ref{subsec:temperature_accuracy}. At the same time, we inspect another factor relevant in an experimental implementation. Most of the current microwave three-wave mixing experiments are carried out for rotational states with thermal population. In Section \ref{subsec:temperature_accuracy}, we thus discuss also the effects of the initial temperature.
\subsection{Rotational temperature and synchronization time} \label{subsec:temperature_accuracy} \begin{figure}
\caption{Selectivity $S$, Eq.(\ref{selectivity}) as a function of the initial rotational temperature. The green, blue and orange dots depict the selectivity for the excitation schemes shown in Fig. \ref{3wm_J2J3}(a), (b), and (c).}
\label{temperature}
\end{figure} So far, we have investigated the rotational dynamics of chiral asymmetric top molecules assuming that only the lowest rotational state of the relevant subsystem is initially populated. Typical microwave three-wave mixing experiments are carried out with thermal samples of chiral molecules~\cite{PerezAngewandte17,PerezJPCL18}. Then, the initial rotational temperature of the molecules has to be factored in. The initial density operator is given by \begin{equation}
\rho(0) = \sum_{i=1}^3 \sum_{M=-J_i}^{J_i} p_{J_i,\tau_i}(T) |J_i,\tau_i,M \rangle \langle J_i, \tau_i, M |\,, \end{equation} where the rotational levels are occupied according to the Boltzmann distribution, \begin{equation}
p_{J_i,\tau_i}(T) = \frac{1}{Q} \exp \left ( - \frac{E_{J_i,\tau_i}}{k_B T}\right ) \,, \end{equation}
with $Q$ defined by $\sum_{i=1}^{3} \sum_{M=-J_i}^{J_i} p_{J_i,\tau_i}(T) = 1$ and $|J_1,\tau_1,M \rangle =|2,-2,M \rangle =|2_{02M} \rangle$, $|J_2,\tau_2,M \rangle =|3,-2,M \rangle =|3_{13M} \rangle$ and $|J_3,\tau_3,M \rangle =|3,-1,M \rangle =|3_{12M} \rangle$. In molecular beam experiments, rotational temperatures in the range of $1$ - $10$ K can be achieved, which result in non-negligible initial thermal population of all rotational levels involved in the three-wave mixing cycle. Figure~\ref{temperature} shows the maximal selectivity for the excitation schemes (a), (b), and (c) in Fig.~\ref{3wm_J2J3} in a semi-logarithmic plot for initial temperatures between $T=10\,$mK and $10\,$K. For $ T < 0.1$ K, thermal occupation of the upper two levels becomes negligible and the selectivity approaches its maximal value, whereas it gets exponentially reduced for rotational temperatures $T > 0.1\,$K. Importantly, the modified excitation scheme using synchronized circularly polarized pulses (Fig.~\ref{3wm_J2J3}(c)) results in a larger selectivity than standard three-wave mixing with linearly polarized fields (Fig.~\ref{3wm_J2J3}(a)) and three-wave mixing with circularly polarized fields without synchronization of the individual three-level systems (Fig.~\ref{3wm_J2J3}(b)) for every temperature.
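For completeness, the thermal weights defined above can be evaluated as in the following sketch; the level energies are placeholders and have to be replaced by the rotational energies $E_{J_i,\tau_i}$ of carvone.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant in J/K

def boltzmann_weights(energies, degeneracies, T):
    # normalized weights p_{J,tau}(T); the (2J+1)-fold M-degeneracy enters
    # the partition function Q, so that sum_i (2J_i+1) p_i = 1
    w = np.exp(-np.asarray(energies) / (k_B * T))
    Q = np.sum(np.asarray(degeneracies) * w)
    return w / Q

# placeholder energies (in J) and degeneracies 2J+1 of the levels 2_02, 3_13, 3_12
E = [0.0, 1.0e-24, 1.2e-24]
g = [5, 7, 7]
print(boltzmann_weights(E, g, T=1.0))
\end{verbatim}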
The maximal enantiomer-selectivity that can be achieved in practice with the optimal scheme employing synchronized circularly polarized fields also depends on how accurately the individual Rabi cycles are synchronized. In the following, we discuss the relation between accuracy and the pulse duration in more detail. Since we consider resonant excitation, the population of the lowest states is given by $p_{2_{02M}} = \cos^2 ( \Omega t )$, where the Rabi frequency $\Omega$, given in Eq.~(\ref{Rabi_general}), depends on $M$. Consider the first pulse in excitation scheme (c), which is a $z$-polarized field that drives a $\Delta J=1$ transition, i.e., the $M$-dependence of the Rabi frequency is given in Eq.(\ref{Rabi_z_deltaJ1}). As shown in Fig.~\ref{transition_z}(b), the first pulse simultaneously drives $2J +1$ two-level transitions. In the bottom panel of Fig. \ref{3wm_J2J3m} (c), it can be seen that after the first pulse, the 50/50-coherence is reached for all $M$-states within an accuracy of $10 \%$. Since the Rabi frequencies for different $M$-states differ by irrational factors, a 50/50-coherence can be obtained with arbitrary accuracy by increasing the pulse duration. \begin{figure}
\caption{Pulse duration required for achieving a 50/50-coherence for all $M$-states within accuracy $\epsilon$ for transitions from $|1_{01M}\rangle$ to $|2_{11M}\rangle$ (blue dots), $|2_{02M} \rangle$ to $|3_{12M}\rangle$ (orange dots) and from $|3_{03M}\rangle$ to $|4_{13M}\rangle$ (yellow dots) assuming a field intensity of $I=10$ W/cm$^2$.}
\label{synchronization}
\end{figure} In Fig. \ref{synchronization}, we show the pulse duration required to achieve a 50/50-coherence for all $M$-states within an accuracy $\epsilon$ defined by $p_{2_{02M}}= \left(\frac{1}{2} \pm \epsilon \right) p_{2_{02M}}(t=0)$. Transitions from $J=1$ to $J=2$ (blue dots), $J=2$ to $J=3$ (orange dots) and $J=3$ to $J=4$ (yellow dots) are considered, assuming a microwave field with an intensity of $10$ W/cm$^2$, comparable to field intensities used in current microwave experiments \cite{PerezAngewandte17}. A jump in the pulse duration occurs whenever the number of Rabi cycles has to be increased to obtain a smaller value of $\epsilon$. Since the degeneracy increases with increasing $J$, the pulse duration required for very accurate synchronization increases. For a rotational subsystem with initial state $J=1$, a 50/50-coherence with an accuracy below one per cent can be achieved within less than $1\,\mu$s. Starting with rotational states with $J=2$, one per cent accuracy already requires a pulse duration of about $10\,\mu$s. This is of the same order as typical coherence times in current microwave experiments. Of course, the absolute value of the pulse duration can in principle be reduced by employing stronger pulses.
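The behaviour shown in Fig.~\ref{synchronization} can be reproduced qualitatively, up to the overall prefactor of the Rabi frequency that is fixed by the field intensity and the transition dipole moments, by a direct search of the kind sketched below; the $M$-dependence is again assumed to be $\Omega_M \propto \sqrt{(J+1)^2-M^2}$, and times are reported in units of the inverse prefactor.
\begin{verbatim}
import numpy as np

def synchronization_time(J, eps, dt=1e-3, t_max=1e5, chunk=10**6):
    # smallest t such that every M-state of the assumed z-polarized
    # J -> J+1 transition satisfies |cos^2(Omega_M t) - 1/2| <= eps
    omega = np.sqrt((J + 1)**2 - np.arange(0, J + 1)**2)  # distinct |M| only
    n_total = int(t_max / dt)
    for start in range(0, n_total, chunk):
        t = (np.arange(start, min(start + chunk, n_total)) + 1) * dt
        ok = np.all(np.abs(np.cos(np.outer(t, omega))**2 - 0.5) <= eps, axis=1)
        if ok.any():
            return t[np.argmax(ok)]
    return None  # not reached within t_max

for eps in (0.1, 0.05, 0.01):
    print(f"J=2, eps={eps}: t = {synchronization_time(2, eps)}")
\end{verbatim}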
\section{Discussion and conclusions} \label{sec:conclusions}
We have considered the problem that orientational degeneracy of randomly oriented chiral molecules impedes complete enantiomer-selective excitation of asymmetric top molecules with resonant microwave three-wave mixing~\cite{LehmannJCP18}. Complementing earlier work using mathematical controllability analysis~\cite{Leibscher20}, we have shown how to solve this problem based on a detailed analysis of the rotational dynamics. Our results are relevant to microwave three-wave mixing experiments employing rotational transitions between degenerate rotational states~\cite{EibenbergerPRL17,PerezAngewandte17,PerezJPCL18}. The excitation schemes derived here will improve the enantiomer selectivity in those experiments at the expense of replacing two of the linearly polarized microwave fields by circularly polarized ones, with their durations tuned to synchronize transitions involving different $M$-dependent Rabi frequencies. One option consists in combining all possible three-wave mixing cycles in a given rotational subsystem. This leads to a simpler excitation scheme compared to Ref. \cite{Leibscher20}. However, since the construction does not guarantee controllability, it is not obvious whether it can be applied to arbitrary rotational subsystems. We have therefore also revisited the strategy derived from controllability analysis in Ref.~\cite{Leibscher20}, adapting it to those rotational states of carvone which have been addressed in microwave three-wave mixing experiments \cite{PerezAngewandte17}. To estimate the improvement in enantiomer selectivity that one can expect to observe in experiments, we have compared the different excitation schemes under thermal conditions. In line with Ref.~\cite{Leibscher20}, we find that synchronized three-wave mixing with circularly polarized fields results in the best selectivity at a given temperature. Moreover, we have identified the pulse duration required for a desired accuracy of the rotational transfer. We emphasize that the excitation schemes discussed here are feasible with current microwave technology and thus provide a promising route to increase the efficiency of enantiomer-separation of chiral molecules by microwave spectroscopy.
A recent three-wave mixing experiment \cite{Lee21} has followed a different path to circumvent loss of efficiency due to both thermal population in the excited rotational states and orientational degeneracy: The experiment addresses rotational levels with $J=0$ and $J=1$ with the excited rotational levels depleted prior to the three-wave mixing. Of course, in a typical molecular sample, only a small amount of molecules resides initially in the rotational ground state. In view of enantiomer separation with electric fields only, it will therefore be important to combine the optical depletion technique of Ref.~\cite{Lee21} with the pulse design presented here, in order to sequentially carry out several three-wave mixing cycles and thereby address a larger set of rotational states.
\begin{acknowledgments} We would like to thank Karl Horn and Eugenio Pozzoli for helpful discussions. Financial support from the Deutsche Forschungsgemeinschaft through CRC 1319 ELCH is gratefully acknowledged. \end{acknowledgments}
\end{document}
\begin{document}
\title[Entropy and complexity of spy billiards]{Entropy and complexity of polygonal billiards\break with spy mirrors} \author {Alexandra Skripchenko} \address{Faculty of Mathematics, National Research University Higher School of Economics, Vavilova St. 7, 112312 Moscow, Russia} \email{[email protected]}
\def{\itshape Address}{{\itshape Address}} \author{Serge Troubetzkoy} \address{Aix Marseille Universit\'e, CNRS, Centrale Marseille, I2M, UMR
7373, 13453 Marseille, France} \curraddr{ I2M, Luminy\\ Case 907\\ F-13288 Marseille CEDEX 9\\ France}
\email{[email protected]}
\begin{abstract} We prove that a polygonal billiard with one-sided mirrors has zero topological entropy. In certain cases we show subexponential, and in other cases polynomial, estimates on the complexity.
\end{abstract}
\maketitle \section{Introduction} \subsection {Polygonal billiards with one-sided mirrors} We consider a table consisting of a polygon $Q\subset \mathbb R^{2}$ (not necessarily rational) with several one-sided mirrors inside; i.e.\ straight line segments connecting pairs of points in $Q$, each of which has two sides, a transparent side and a reflecting side. The billiard is defined as follows. Consider a point particle and a direction $\theta\in \mathbb S^{1}$; the point moves in the direction $\theta$ with a unit speed up to the moment when it reaches the boundary. If it arrives at a transparent side of a mirror it passes through it unperturbed, while if it arrives at a reflecting side of a mirror or at the boundary of $Q$ it is reflected according to the usual law of geometric optics: the angle of incidence equals the angle of reflection.
Polygonal billiards with one-sided mirrors were described for the first time by M.\ Boshernitzan and I.\ Kornfeld in \cite{BK}, where one-sided mirrors were called spy mirrors. However, they considered the less general case of rational polygons with mirrors that form rational angles with the sides of the polygonal table. Such tables give rise to interval translation maps, a generalization of interval exchange maps. In contrast to interval exchange transformations, interval translation maps are poorly understood; only a few results are known (see \cite{BK}, \cite{SchT}, \cite{Ba}, \cite{BT}, \cite{SIA}, \cite{BC}, \cite{V}, \cite{ST}). A particular example of a rational billiard with one-sided mirrors, the square with a vertical one-sided mirror with one end point on the bottom side of the square, was studied in \cite{ST}.
In this article we will prove two types of results. In the setting of an arbitrary polygon with one-sided mirrors we show that the topological entropy of our system is zero. In certain more restricted settings we show subexponential or polynomial growth estimates. The next two subsections describe these results in more detail.
\subsection{Topological entropy} We prove that the polygonal billiard with one-sided mirrors has zero topological entropy. To do this we first consider the inverse limit space of a polygonal billiard with one-sided mirrors and show that it has zero topological entropy (an exact statement is provided below). We show that the inverse limit space in our case is closely related with the attractor (the notion of the attractor of the billiard map was introduced in \cite{ST}) and that the attractor has zero topological entropy. Then we extend the zero entropy result to the full phase space.
There exist several different proofs of zero topological entropy for polygonal billiards without one-sided mirrors (see \cite{K}, \cite{GKT}, \cite{GH}). Our proof mainly uses some ideas from \cite{GKT} and \cite{K}. The main difference from the situation of classical billiards is the non-invertibility of our system. Also, J. Buzzi in \cite{Bu} showed a closely related result, namely that piecewise isometries in any dimension have zero entropy. For rational polygonal billiards with rational one-sided spy mirrors, zero topological entropy is a corollary of the fact that the directional complexity is at most polynomial \cite{Ba}, together with the variational principle.
Throughout the article the term {\it side} will denote a side of the polygon $Q$ or a side of a one-sided mirror, and the term {\it vertex} denotes an end point of a side.
We will denote by $q$ the number of sides of $Q$ and $r$ the number of spy mirrors, thus we have $q + 2r$ sides. The collection of sides will be called the {\em boundary} $\Gamma$. We will consider the {\em billiard map} $T$, the first return map to $\Gamma$. The {\em phase space} $T\Gamma$ of the billiard map is the subset of inner pointing vectors of the unit tangent bundle (for vectors with base point in a one-sided mirror this means that if we reverse the direction of the vector it will point at the reflecting side of the mirror). Note that if the billiard orbit arrives at a vertex of $Q$ then the collision rule is not well defined since we can reflect with respect to two different sides, thus the billiard map is not defined for such points.
Let $\pi: T\Gamma \to \Gamma$ denote the natural projection.
Let $I_i : 1 \le i \le q + 2r$ be an enumeration of the sides of $Q$. The forward orbit of a point $x$ can be coded by the sequence of sides hit by the orbit. Let $$\Sigma^+ \mathrel{\mathop=}: \{ \vec{a} := (a_i)_{i \in \mathbb{N}}: \exists x \text{ such that } \pi(T^ix) \in I_{a_i} \ \forall i \ge 0\}.$$ In this definition it is implicitly assumed that $T^ix$ is defined for all $i \ge 0$. We use the discrete topology on the collection of sides, and the product topology on $\overline{\Sigma^+}$. The left shift map on $\overline{\Sigma^+}$ will be denoted by $\sigma$. The main result of this section is the following theorem.
\begin{theorem}\label{zero''} For any polygon with spy mirrors we have $$h_{top}(\overline{\Sigma^+},\sigma) = 0.$$ \end{theorem} If $\mu^+$ is an invariant measure on $\overline{\Sigma^+}$, then Theorem \ref{zero''} implies that $\sigma$ is $\mu^+$ almost surely invertible. The {\em complexity} $p(n)$ is the number of words of length $n$ which appear in $\Sigma^+$. Theorem \ref{zero''} implies that $\lim_{n \to \infty} \log(p(n))/n = 0$.
Our proof of Theorem \ref{zero''} uses invertibility; we begin by working in the inverse limit of the coding space $$\Sigma \mathrel{\mathop=}: \{ \vec{a} := (a_{i})_{i \in \mathbb{Z}}: (a_{i+j})_{i \ge 0} \in \Sigma^+ \ \forall j \in \mathbb{Z}\}.$$ We also introduce $$\Sigma^- \mathrel{\mathop=}: \{ \vec{a} := (a_i)_{i \le 0}: \exists (b_i)_{i \in \mathbb{Z}} \in \Sigma \text{ such that } a_i = b_i \ \forall i \le 0\}.$$ Finally the inverse limit of the billiard map is $$\Omega \mathrel{\mathop=}: \{ \vec{x} := (x_i)_{i \in \mathbb{Z}}: Tx_i = x_{i+1} \ \forall i \in \mathbb{Z} \}.$$ Here we again assume that the forward orbit $T^jx_i$ is defined for all $i$ and all $j \ge 0$. We use the natural topology of $T\Gamma$ on $x_0$ and the product topology on $\Omega$.
The attractor of the billiard map is the set $\mathcal{A} := \cap_{n \ge 0} T^n (T\Gamma)$, where in forming the set $T^n(T\Gamma)$ we consider only the points in $T\Gamma$ at which $T^n$ is defined.
All shift maps (on $\Sigma$ or $\Omega$) will be denoted by $\sigma$. We extend the shift map to $\overline{\Omega}$, $\overline{\Sigma^+}$ and $\overline{\Sigma}$. We show that
\begin{theorem}\label{zero} For any polygon with spy mirrors we have $$h_{top}({\mathcal{A}},T) = h_{top}(\overline{\Omega},\sigma) = h_{top}(\overline{\Sigma},\sigma) = 0.$$ \end{theorem} \noindent and then we show that Theorem \ref{zero''} follows from Theorem \ref{zero}.
\subsection{Complexity estimates in special cases}
A {\em generalized diagonal} is an orbit segment which starts and ends in a vertex of $Q$, let $N_{vert}(n)$ denote the number of generalized diagonals of combinatorial length at most $n$.
We begin with a general theorem, and then we will apply it to specific examples. \begin{theorem}\label{thm1} Suppose that $Q$ is a $q$-gon with $r$ spy mirrors, then $$p(n)
\le 1 + (q+2r- 1)n + \left (2((q+2r)^2-3)\sum_{j=0}^{n-1} \sum_{i=0}^{j} N_{vert}(i) \right ).$$ \end{theorem}
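The bound of Theorem \ref{thm1} is an explicit combinatorial expression in the counting function $N_{vert}$; for concreteness, it can be evaluated as in the following sketch, where the values of $N_{vert}$ are supplied as input (computing them for a given table is, of course, the actual difficulty) and a hypothetical cubic counting function is used purely for illustration.
\begin{verbatim}
def complexity_bound(n, q, r, N_vert):
    # upper bound on p(n) from the theorem above, given N_vert(0),...,N_vert(n-1)
    inner = [sum(N_vert[:j + 1]) for j in range(n)]   # sum_{i<=j} N_vert(i)
    return 1 + (q + 2*r - 1)*n + 2*((q + 2*r)**2 - 3)*sum(inner)

# example: a square (q = 4) with one spy mirror (r = 1) and a hypothetical
# counting function N_vert(i) = i**3
print(complexity_bound(10, 4, 1, [i**3 for i in range(10)]))
\end{verbatim}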
We call $Q$ a {\em symmetric polygon with spy mirrors} if there is a polygon $P$ such that $Q$ is obtained from $P$ via a finite unfolding and there are finitely many spy mirrors which are contained in the common edges of the unfolded copies of $P$ (see Figure \ref{s}).
\begin{figure}
\caption{A symmetric polygon with two spy mirrors}
\label{s}
\end{figure}
\begin{theorem}\label{thm2} Suppose that $Q$ is a rational symmetric polygon with spy mirrors. Then there is a constant $C > 0$ so that the total complexity satisfies $p(n) \le Cn^4$ for all $n \ge 0$. \end{theorem}
Next we consider symmetric polygons with spy mirrors obtained from a triangle (not necessarily rational). The two smallest angles determine a triangle up to scaling, and billiards are scaling invariant. Thus, up to scaling, the set of triangles is a subset of $\mathbb{R}^2$ equipped with Lebesgue measure. In the next theorem the word {\em typical} will mean Lebesgue almost every. \begin{theorem}\label{thm3} Suppose that $Q$ is a symmetric polygon with spy mirrors obtained from a typical triangle. Then for every $\varepsilon > 0$ there is a constant $K > 0$ so that $p(n) \le Ke^{n^\varepsilon}$ for all $n \ge 0$.
\end{theorem} One can also prove a special complexity estimate for a generalization of the billiard in the square table that we studied in \cite{ST}. \begin{theorem}\label{thm4} Suppose that $Q$ is the square with $k$ vertical spy mirrors. Then there is a constant $K > 0$ so that the total complexity satisfies $p(n) \le K n^{k+4}$ for all $n \ge 0$. \end{theorem}
\section{The proofs of the entropy results} \subsection{Unfolding and strips}
Consider the backwards billiard flow starting from a point in $\Gamma$; instead of reflecting the orbit about a side of $Q$ we reflect the polygon about the same side and continue the orbit as a straight line. When it meets another side of the reflected polygon we repeat the procedure with respect to this side, etc. We can continue up to the moment when we hit a vertex. We label the copies of $Q$ obtained after such a reflection by the side that was the axis of reflection ($Q_{A}$, for instance).
\begin{lemma} \label{parallel} Suppose that $\vec{x},\vec{y} \in \Omega$ and that $x_0$ and $y_0$ are not parallel; then their backward codings can not coincide. \end{lemma} \begin{proof} We look at the unfolding lines of the past orbits (see Figure \ref{1}) and suppose that their codes coincide for a certain interval of times. We remark that when a backward orbit hits a one-sided mirror, there are two possible preimages; by definition these preimages have different codings; thus for the interval of times when the backward codings coincide the choice of preimages (which is given since $\vec{x}$ and $\vec{y}$ are in the inverse limit space $\Omega$) is the same for $\vec{x}$ and for $\vec{y}$.
These lines are eventually linearly divergent, thus the distance between them is eventually more than twice the diameter of $Q$, and so the backwards unfoldings of $x_0$ and $y_0$ must be different, i.e.\ the reflections must occur in different edges, so the codings are different. \end{proof}
\begin{figure}
\caption{Non-parallel orbits have different coding}
\label{1}
\end{figure}
\subsection{Uniqueness of the coding}
For each $\vec{a} \in \Sigma$, there exists at least one point $\vec{x} \in \Omega$ such that $\pi(\vec{x}_i) = \vec{a}_i$ for all $i \in \mathbb{Z}$; we denote the set of such $\vec{x}$ by $X(\vec{a})$. The set $X(\vec{a})$ can also be defined for $\vec{a} \in \Sigma^-$, and since for $\vec{x}\in \Omega$ from the definition it is immediate that $(x_i)_{i \le 0}$ determines $\vec{x}$ uniquely, it follows that
$X(\vec{a}) \subset \Omega$ for points $\vec{a} \in \Sigma^-$. Lemma \ref{parallel} implies that the set $X(\vec{a})$ consists of parallel points.
Fix $\vec{a} \in \Sigma^-$.
We call $S \subset X(\vec{a})$ a \emph{strip} if the set $\{x_0: \vec{x} \in S\}$ consists of parallel vectors whose base points form an interval. Note that the backwards orbit of all the $\vec{x} \in S$ are nonsingular. Using the unfolding procedure, we will think of $S$ as being a geometric strip in $\mathbb{R}^2$, hence the name. We have
\begin{proposition}\label{unique} Suppose that $Q$ is an arbitrary polygon with a finite number of spy mirrors. \begin{enumerate} \item{} For any $a\in \Sigma^{-}$ which is not periodic the set $X(\vec{a})$ consists of only one point. \item{} For any $a \in \Sigma^{-}$ which is periodic the set $X(\vec{a})$ consists of a finite union of parallel strips. \end{enumerate} \end{proposition}
\begin{proof}
\begin{figure}
\caption{The orbit in the middle of the strip has different dynamics.}
\label{2}
\end{figure}
(1) The proof is by contradiction: consider two points $x,y \in \Omega$ with exactly the same aperiodic backwards coding $\vec{a}$. Lemma \ref{parallel} implies that $x_i$ and $y_i$ are parallel for all $i \le 0$. It may happen that the trajectories of points between them are not exactly the same (see Figure \ref{2} for a possible example). Note that this can not happen for the usual billiard in a simply connected polygon. In order to avoid this problem, and enable us to use the strip argument from \cite{GKT}, we construct an auxiliary dynamical system by declaring that all points in between our two special points have the same dynamics. Thus, with respect to this auxiliary dynamics the set $S:= X_{aux}(\vec{a})$ is a strip. More precisely, the auxiliary dynamics is defined as the map $g$ which rigidly maps the vectors in the strip with base-point in the interval $(x_{i-1},y_{i-1})$ onto vectors in the strip with the base-point in the interval $(x_{i},y_{i})$ (for all $i \le 0$).
Consider the sequence of vectors $\vec{z} := (z_i)_{i \le 0}$ in the middle of the strip $S$, and the $\alpha$-limit set $Z$ of the auxiliary dynamics $g$ of $\vec{z}$. The set $Z$ is compact, and since strips can not contain vertices the inverse map $g^{-1}$ is continuous on $Z$, thus we can apply the Birkhoff recurrence theorem, to conclude that $Z$ contains a uniformly recurrent point $\vec{x}^*$ for the map $g^{-1}$. Fix a sequence $n_i$ so that $g^{-n_i}\vec{z} \to \vec{x}^*$.
We consider images of the original strip $S$ under $g^{-n_{i}}$ and denote it by $S_{i}$.
Note that the widths of the $S_i$ are all greater than or equal to the width of $S$, and that $S_i$ converge to a strip $S(\vec{x}^*)$ centered at $x^*$ of the same width or more.
We can suppose (by re-enumerating the sides) that $\vec{x}_0^{*}\in I_{1}$.
The point $\vec{x}_0^*$ is not tangent to the side $I_{1}$ since then the orbit of $\vec{z}$ would come arbitrarily close to one of the endpoints of the side $I_1$ which contradicts the positive width of the strip $S$.
We consider the set of points ${S}^\infty(\vec{x}^*)$ having the same auxiliary backwards code as $\vec{x}^*$. Clearly ${S}^{\infty}(\vec{x}^*)$ is a strip and ${S}^\infty(\vec{x}^*) \supset S(\vec{x}^*)$.
The left and right boundaries of ${S}^{\infty}(\vec{x}^*)$ are two lines, we consider their $\varepsilon$-neighborhoods $N_{\varepsilon}^{L}$ and $N_{\varepsilon}^{R}$. By the uniform recurrence of $\vec{x}^{*}$, vertices fall inside each of $N_{\varepsilon}^{L}$ and $N_{\varepsilon}^{R}$ with bounded gaps between their occurrences (see Figure \ref{3}). Fix one of the sides, say $N_{\varepsilon} := N_{\varepsilon}^{L}$. Now $S_i \to S(\vec{x}^*)$, this can happen in two ways, either $S_i$ is parallel to $S(\vec{x}^*)$ for all sufficiently large $i$ or not. In the second case the set $N_{\varepsilon}\cap S_{i} \subset N_{\varepsilon} \cap S$ contains an $\varepsilon$-width rectangle with a height $L_{i}$ that goes to infinity as $i\to\infty$; thus there must be
a vertex in $S$ (see Figure \ref{3}(b)). This contradicts the fact that the backwards orbit
of all $ \vec{x} \in S = X(\vec{a})$ are non-singular,
thus we can only have a single point $\vec{x} \in \Omega$ with aperiodic backwards coding $\vec{a}$.
In the first case for sufficiently large $i$, since they are parallel we have $S_i \subset {S}^{\infty}(\vec{x}^*)$.
We consider the set of points $S_i^{\infty}(\vec{z})$ having the same auxiliary backward code as
$g^{-n_i}\vec{z}$; clearly this is the maximal width strip such that $S_i^{\infty} \supset S_i$. By
maximality $S_i^{\infty} = S^{\infty}(\vec{x}^*)$. This implies that $\vec{x}^*$ is periodic which contradicts
the assumption that $\vec{a}$ is not periodic. Thus also in this case we can only have a single
point $\vec{x} \in \Omega$ with aperiodic backwards coding $\vec{a}$.
\begin{figure}
\caption{Vertices and recurrence points}
\label{3}
\end{figure}
(2) Now suppose that $\vec{a} \in \Sigma^-$ is periodic.
Lemma \ref{parallel} implies that $X(\vec{a})$ consists of parallel
orbits. By the definition of the auxiliary dynamics, the set $X_{aux}(\vec{a})$ with respect to the auxiliary dynamics is a strip.
Suppose that the period of $\vec{a}$ is $k$ and that
$\vec{x} \in X_{aux}(\vec{a})$. From the
periodicity of $\vec{a}$ we conclude that $x_{i-k} = x_i$, or $T^k(x_{i-k}) = x_i$,
for all $i \le 0$. But since $x_i$ for $i > 0$ is defined as $T^i(x_0)$ we conclude that $\vec{x}$ is
$k$ periodic. Since the map $T^k$ is a local isometry we conclude that furthermore the width
of the strip $X_{aux}(\vec{a})$ is strictly positive.
\begin{figure}
\caption{Finite union of strips}
\label{4}
\end{figure}
Consider the set $X_{aux}(\vec{a}) \cup T^{-1} X_{aux}(\vec{a}) \cup \cdots \cup T^{-k+1} X_{aux}(\vec{a})$. Since we only consider
a finite piece of the orbit, the reflecting mirrors create at most finitely many ``holes'' in the auxiliary dynamics strip as in Figure \ref{4}, more precisely points for which the auxiliary dynamics and the real dynamics
disagree; hence the set $X(\vec{a}) \subset X_{aux}(\vec{a})$ is a finite union of strips.
\end{proof}
Let
$\partial \Sigma :=\overline{\Sigma} \setminus \Sigma$ and
$\partial \Omega :=\overline{\Omega} \setminus \Omega$. We extend $X$ to $\overline{\Sigma}$ as follows. Suppose $\vec{a} \in \partial \Sigma$, let $\vec{a}^{(n)} \in \Sigma$ be such that $\vec{a}^{(n)} \to \vec{a}$. Let $\vec{x}^{(n)} \in X(\vec{a}^{(n)})$. Since $\overline{\Omega}$ is closed there is an $\vec{x} \in \overline{\Omega}$ which is a limit point of the $\vec{x}^{(n)}$. Let $X(\vec{a})$ be the collection of all such $\vec{x}$. We remark that all such points $\vec{x} \in \partial \Omega$ since $\vec{a} \not \in \Sigma$ .
\begin{proposition}\label{extend} If $\vec{x} \in \partial \Omega$ then there exist at most countably many $\vec{a} \in \partial \Sigma$ such that $\vec{x} \in X(\vec{a})$.
\end{proposition}
\begin{proof} Suppose that $\vec{x} \in \partial \Omega$. Consider $\vec{x}^{(n)} \in \Omega$ such that $\vec{x}^{(n)} \to \vec{x}$. Let $\vec{a}^{(n)}$ be the code of $\vec{x}^{(n)}$, and $\vec{a}$ any limit point of the $\vec{a}^{(n)} $, then clearly $\vec{a} \in \partial \Sigma$, and $\vec{x} \in X(\vec{a})$. At a certain time the orbit of $\vec{x}$ reaches a vertex. There are clearly at most $q + 2r$ possible extensions by continuity of the dynamics and of the code (see, for example, Figure \ref{2015} where the purple lines are the orbits that hit some vertex and several dotted lines represent possible extensions). Each of these extensions may again reach a vertex, so again each of them has at most $q+2r$ possible extensions. This can happen at most a countable number of times. \end{proof}
\begin{figure}
\caption{The purple trajectory hits a vertex, all possible symbolic extensions are indicated via close by orbits, there are 2 extensions for the left figure, 6 for the right figure.}
\label{2015}
\end{figure}
\subsection{The attractor}
For $\vec{x}$ in ${\Omega}$ let $Y(\vec{x}) = x_0$.
The following proposition shows that $({\mathcal{A}},T)$ is a continuous factor of $({\Omega},\sigma)$.
\begin{proposition}\label{attractor} (i) $\vec{x} \in \Omega \iff Y(\vec{x}) \in \mathcal{A}$,\\ (ii) $Y(\sigma \vec{x}) = T Y(\vec{x})$,\\ (iii) the map $Y$ is continuous. \end{proposition} \begin{proof} (i) $\Longrightarrow$ Suppose $\vec{x} \in \Omega$, then $T^n(x_{-n}) = x_0$ for all $n \ge 0$ since $T(x_i) = x_{i+1}$ for all $i$.
\noindent$\Longleftarrow$ Suppose $x_0 \in \mathcal{A}$, then the set $\{T^{-i}(x_0)\}$ is non-empty for all $i \ge 0$. Consider the tree structure on these sets, i.e.\ draw an arrow from $x_i \in \{T^{-i}(x_0)\}$ to $x_{i+1} \in \{T^{-i-1}(x_0)\}$ iff $T(x_i) = x_{i+1}$. Since the vertex set on level $n$ of the tree is non-empty for all $n$, there must be an infinite path in this tree, which defines the past of $\vec{x} := (x_i)$, and the future is defined by $T^i(x_0)$ with $i > 0$.
(ii) and (iii) follow immediately from the definition of $\Omega$. \end{proof}
\subsection{Entropy of ergodic measures} This subsection follows ideas in \cite{K}. First, we show that nothing interesting happens on the boundaries. \begin{lemma}\label{support}
Every ergodic non-atomic shift invariant measure on $\overline{\Sigma}$ is supported by the set $\Sigma$. \end{lemma} \begin{proof}
We suppose that $\mu$ is an ergodic shift-invariant measure such that $\mu(\partial\Sigma)>0$. Let $\Omega_i := \{\vec{x}\in\overline{\Omega}:x_{i} \text{ is a vertex}\}$ and $\Sigma_i = \{\vec{a}: X(\vec{a}) \subset \Omega_i\}$.
Clearly
$\partial \Sigma = \cup_i \Sigma_i$, thus our assumption implies that $\mu(\Sigma_{i})>0$ for some $i$. Now by ergodicity, $\mu$-almost every point $\vec{a} \in \overline{\Sigma}$ should visit this $\Sigma_{i}$ an infinite number of times, i.e.\ for each $\vec{x} \in X(\vec{a})$ there are times $j < k$ so that $\vec{x}_j$ and $\vec{x}_k$ are vertices, i.e.\ the orbit of $\vec{x}$ is a generalized diagonal. But the set of generalized diagonals is countable; by Proposition \ref{extend} their lift is countable, and thus the measure is atomic.
Next we have \begin{lemma}\label{zero'} For every ergodic shift invariant measure $\mu$ supported by the set $\Sigma$ the entropy $h_{\mu}(\Sigma)$ is equal to zero. \end{lemma}
\begin{proof} We consider the time zero partition $\xi$ of $\Sigma$ and the partition $\xi^- : = \vee_{j=0}^{\infty} \sigma^{j}\xi$. An element $P \in \xi^-$ corresponds to a code in $\Sigma^-$; there are two possibilities: either this code is periodic, or not.
(i) The code is periodic, then Proposition \ref{unique} tells us that $X(P)$ consists of a finite union of strips of periodic orbits, all with the same past code; thus all with the same future code. In particular $P$ is a single point.
(ii) If the code is not periodic, then Proposition \ref{unique} tells us the $X(P)$ is a single point in $\Omega$; thus $P$ is a single point.
Now we apply Rokhlin's theory of entropy (\cite{R}), we have shown that $\xi$ is a one sided generating partition ($\xi^-$ is the partition into points), thus $$h_{\mu}(\Sigma)=h_{\mu}(\Sigma,\xi) = H(\sigma \xi /\xi^{-})=0.$$
\end{proof}
\subsection{Proof of the entropy theorems}
\begin{proofof}{Theorem \ref{zero}}
$h_{top}(\overline{\Sigma},\sigma) = 0$ follows immediately from Lemma \ref{support}, Lemma \ref{zero'} and the variational principle. We have $h_{top}(\overline{\Omega},\sigma) \le h_{top}(\overline{\Sigma},\sigma) $ since $(\overline{\Omega},\sigma)$ is a continuous factor of $(\overline{\Sigma},\sigma)$ (Proposition \ref{extend}). Similarly, since $(\mathcal{A},T)$ is a continuous factor of $({\Omega},\sigma)$ (Proposition \ref{attractor}), we have $h_{top}(\mathcal{A},T) \le h_{top}(\Omega,\sigma) \le h_{top}(\overline{\Omega},\sigma) $. \end{proofof}
Let $f:X \to X$ be a continuous map of a compact topological space $X$. Let $(\overline{X},f)$ denote the inverse limit of the map $f$. The {\it non wandering set} $NW(X)$ of a continuous map $f:X \to X$ of a compact topological space $X$ is the set $\{x \in X:$ for each neighborhood $\mathcal{U}$ of $x$, $\exists n \ne 0$ such that $f^n \mathcal{U} \cap \mathcal{U} \ne \emptyset\}$.
\begin{proposition}\label{NW} If $h_{top}(\overline{X},f)= 0$ then $h_{top}(X,f) = 0$. \end{proposition}
\begin{proofof}{Theorem \ref{zero''}}
Theorem \ref{zero''} follows immediately by combining Theorem \ref{zero} and the proposition.\end{proofof}
\begin{proofof}{Proposition \ref{NW}}
We use a theorem of Bowen (\cite{Bo}, Theorem 2.4): $h_{top}(X,f) = h_{top}(NW(X),f)$.
Let $p : \overline{X} \to X$ be the natural projection map. Clearly $p \circ f (\vec{x}) = f \circ p (\vec{x})$ for any $\vec{x} \in \overline{X}$, which implies $h_{top}(NW(\overline{X}),f) \ge h_{top}(p(NW(\overline{X})),f)$; therefore $h_{top}(p(NW(\overline{X})),f)=0$. But $p(NW(\overline{X})) =NW(X)$ (\cite{AH} Theorem 3.5.1(5)). Thus applying once again the above-mentioned theorem of Bowen we conclude that
$0 = h_{top}(NW(X),f) = h_{top}(X,f)$. \end{proofof}
\section{The proofs of the complexity results}
Fix a $q$-gon $Q$ with $r$ spy mirrors. A {\em spy generalized diagonal} is an orbit segment which starts at a spy mirror and ends in a vertex of $Q$; thus it has two backwards continuations with different codes. Consider a spy generalized diagonal; since its length is finite, slightly varying the angle of arrival at the vertex will yield a spy generalized diagonal with the same code (see Figure \ref{Un}).
\begin{figure}
\caption{All the orbit segments at the wedge are spy generalized diagonals with the same code $cb$. This word has 4 extension $e^{\pm}cba$ and $e^{\pm}cbd$ where $e^+$ and $e^-$ refer to the two sides of the one-sided mirror.}
\label{Un}
\end{figure}
Let $N_{spy}(n)$ denote the number of such families of length at most $n$. The set of spy generalized diagonals with a given code form a ``sector'' at the vertex, the boundaries of this sector are generalized diagonal of length at most $n$.
The boundary of a spy generalized diagonal sector of length $n$ consists of two generalized diagonals of length at most $n$, one bounding from the left and one from the right (here left and right are with respect to a fixed orientation, say clockwise, at the vertices of arrival of the generalized diagonals). Thus we can define an injective map, the ``right bounding generalized diagonal'', from families of spy generalized diagonals of length $n$ to generalized diagonal of length at most $n$ yielding
\begin{proposition} \label{prop} $N_{spy}(n) \le \sum_{i=0}^{n} N_{vert}(i)$. \end{proposition}
\begin{proofof}{Theorem \ref{thm1}} The main technical tool will be a variant of Cassaigne's formula \cite{C} developed in \cite{ST}. Let $\mathcal{A}$ be a finite alphabet, $\mathcal{L} \subset \mathcal{A}^{\mathbb{N}}$ be a language, $\mathcal{L}(n)$ the set of words of length $n$ which appear in $\mathcal{L}$, and $p(n) := \# \mathcal{L}(n)$. Note that $p(0) = \# \{\emptyset\} = 1$. For any $n \ge 0$ let $s(n) := p(n+1) - p(n),$ and thus $$p(n) = 1 + \sum_{j=0}^{n-1} s(j).$$ For $u \in \mathcal{L}(n)$ let \begin{eqnarray*} m_l(u) & := & \#\{a \in \mathcal{A}: au \in \mathcal{L}(n+1)\},\\ m_r(u) & := & \#\{b \in \mathcal{A}: ub \in \mathcal{L}(n+1)\},\\ m_b(u) & := & \#\{(a,b) \in \mathcal{A}^2: aub \in \mathcal{L}(n+2)\}. \end{eqnarray*} We remark that while $m_r(u) \ge 1$ the other two quantities can be $0$. A word $u \in \mathcal{L}(n)$ is called {\em left special} if $m_l(u) > 1$, {\em right special} if $m_r(u) > 1$ and {\em bispecial} if it is left and right special.
Let $\mathcal{BL}(n) := \{u \in \mathcal{L}(n): u \text{ is bispecial}\}.$ Let $\mathcal{L}_{np}(n):= \{v \in \mathcal{L}(n): m_l(v) = 0 \}$.
In \cite{ST} we showed that $$s(j+1) - s(j) = \sum_{v \in \mathcal{BL}(j)} \Big (m_b(v) - m_l(v) - m_r(v) + 1 \Big ) - \sum_{v \in \mathcal{L}_{np}(j):m_r(v) >1} \Big (m_r(v) - 1 \Big ). $$
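As a sanity check, both sides of this identity can be computed for the factor language of a concrete infinite word. The sketch below does this for a long prefix of the Fibonacci word (a Sturmian word over two letters); for this language every factor is left-extendable, so the second sum is empty, every bispecial factor satisfies $m_b(v)-m_l(v)-m_r(v)+1=0$, and both sides vanish, consistent with $p(n)=n+1$.
\begin{verbatim}
def fibonacci_word(n_iter=25):
    # iterate the substitution a -> ab, b -> a
    w = "a"
    for _ in range(n_iter):
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w

def factors(w, n):
    return {w[i:i + n] for i in range(len(w) - n + 1)}

w = fibonacci_word()
N = 10                      # keep N small compared to len(w) to avoid edge effects
L = {n: factors(w, n) for n in range(N + 3)}
p = {n: len(L[n]) for n in L}
s = {n: p[n + 1] - p[n] for n in range(N + 1)}

for j in range(N):
    lhs = s[j + 1] - s[j]
    rhs = 0
    for v in L[j]:
        ml = sum((a + v) in L[j + 1] for a in "ab")
        mr = sum((v + b) in L[j + 1] for b in "ab")
        mb = sum((a + v + b) in L[j + 2] for a in "ab" for b in "ab")
        if ml > 1 and mr > 1:          # bispecial word
            rhs += mb - ml - mr + 1
    print(j, lhs, rhs)                 # the two columns agree
\end{verbatim}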
Now we turn to the interpretation of this formula in the case that $v$ is a bispecial
word appearing in a $q$-gon $Q$ with $r$ spy mirrors. We make a worst case estimate: a priori the collection of orbits with the code $v$ can hit all sides and all spy mirrors (forwards or backwards)
\begin{eqnarray*} 2 & \le m_l(v) \le & (q+2r)\\ 2 & \le m_r(v) \le & (q+2r)\\ 2 & \le m_b(v) \le & (q+2r)^2 \end{eqnarray*} Thus $$s(j+1) - s(j) \le ((q+2r)^2-3)\#\mathcal{BL}(j).$$ We have $p(0) = 1$ (the empty word) and $p(1) = q+2r$, thus $s(0) = p(1) -1 = q+2r-1$ and the method of telescoping sums yields $$ s(n) \le (q+2r - 1) + ((q+2r)^2-3)\sum_{j=0}^{n-1} \#\mathcal{BL}(j).$$ To each bispecial word $v$ corresponds a collection of generalized diagonals and/or spy generalized diagonal sectors of the same combinatorial length and code. Since the collections are determined by their code they are disjoint, thus $$\sum_{j=0}^{n-1} \#\mathcal{BL}(j) \le N_{vert}(n) + N_{spy}(n) \le N_{vert}(n) + \sum_{i=0}^n N_{vert}(i) \le 2 \sum_{i=0}^n N_{vert}(i), $$ and thus $$s(n) \le (q+ 2r - 1) + 2((q+2r)^2-3) \sum_{i=0}^n N_{vert}(i).$$
Again by the method of telescoping sums we have \begin{eqnarray*} p(n) \le 1 + \sum_{j=0}^{n-1} \left ((q+2r - 1) + 2((q+2r)^2-3)\sum_{i=0}^{j}N_{vert}(i) \right ) \\ = 1 + (q+2r- 1)n + \left (2((q+2r)^2-3)\sum_{j=0}^{n-1} \sum_{i=0}^{j} N_{vert}(i) \right ). \end{eqnarray*} \end{proofof}
\begin{proofof}{Theorem \ref{thm2}} Consider a rational polygon $P$ without spy mirrors. H. Masur in \cite{M} has shown that the number $N_g(t)$ of generalized diagonals of geometric length $t$ satisfies $N_g(t) \le Ct^2$ for some constant $C = C(P)$ for all $t \ge 0$. By elementary reasoning there is a constant $B > 1$ such that $ B^{-1} \le N_{vert}(n)/N_g(n) \le B$, thus there is a constant $K = K(P)$ so that $N_{vert}(n) \le K n^2$ for all $n \ge 0$.
Suppose that the polygon $Q$ with spy mirrors is a $k$-fold cover of the rational polygon $P$. By the symmetry in the definition of $Q$ each generalized diagonal in $Q$ projects to a generalized diagonal in $P$, thus $N_{vert}^Q(n) \le k N^P_{vert}(n) \le kK n^2$. Thus the theorem follows from Theorem \ref{thm1}. \end{proofof}
\begin{proofof}{Theorem \ref{thm3}} D. Scheglov in \cite{Sch} has shown that for a typical triangle, for every $\varepsilon > 0$ there exists a constant $C > 0$ so that $N_{vert}(n) \le Ce^{n^{\varepsilon}}$ for all $n \ge 0$.
Suppose that the polygon $Q$ with spy mirrors is a $k$-fold cover of a typical triangle $P$. As in the proof of Theorem \ref{thm2}, each generalized diagonal in $Q$ projects to a generalized diagonal in $P$, thus $N_{vert}^Q(n) \le k N^P_{vert}(n) \le kC e^{n^\varepsilon}$ and we again use Theorem \ref{thm1} to conclude. \end{proofof}
\begin{proofof}{Theorem \ref{thm4}} First suppose that the square is $1 \times 1$ with a single spy mirror having $x$-coordinate $a$. Let $X$ denote the torus ($2 \times 2$) obtained by unfolding the square with the one-sided mirrors identified as in Figure \ref{Sq}.
\begin{figure}
\caption{A square billiard with the vertical mirror: table and unfolding}
\label{Sq}
\end{figure}
Now we construct an explicit universal cover of $X$, first we unfold the square to a torus and then take the universal cover of the torus by the plane. Next we identify the one-sided mirrors at $x=2-a +2n$ with the one-sided mirror at $x = 2 + a + 2n$, i.e.\ all jumps are to the right and of the form $2a$ (see Figure \ref{Cover}).
\begin{figure}
\caption{A square billiard with the vertical mirror: universal cover}
\label{Cover}
\end{figure}
Now consider the generalized diagonals starting at the origin which arrive at the point $(m,n)$ ($m>0$ and $n>0$) and which do not pass through any other vertex in-between. Label the squares traversed by the generalized diagonals by the coordinates of their bottom left corners. By the explicit construction of our universal cover, when the generalized diagonal is in the square with bottom left corner $(i,j)$ then either we cross a horizontal side and are in square $(i,j+1)$ or we cross a spy mirror, or a vertical side and are in square $(i+1,j)$. Thus we cross $n-1$ horizontal sides $y=1,y=2,\dots,y=n-1$, $f$ vertical sides and $g$ vertical mirrors with $f+g = m-1$. We conclude that the combinatorial length of any such generalized diagonal is $(n-1)+f + g = m + n-2$. We want to estimate how many such generalized diagonals we can have. Clearly there is at most one which hits no spy mirrors: it is the line segment of slope $n/m$ starting at the origin and ending at $(m,n)$, and it is a generalized diagonal if and only if this segment does not reflect off any spy mirrors.
Now we claim that, for each $0 \le j \le m-1$
there is at most one generalized diagonal connecting $(0,0)$ to $(m,n)$ which hits exactly $j$ spy mirrors. In fact, the slope of such a generalized diagonal must be $n/(m - 2ja)$, and if we start an orbit segment at the origin with this slope, it is a generalized diagonal (arrives at $(m,n)$) if and only if it hits exactly $j$ spy mirrors.
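To make the slope computation explicit: in the cover each spy-mirror hit is realized as a horizontal jump of $2a$ to the right, so removing the $j$ jumps turns such a generalized diagonal into a single straight segment from the origin to the point $(m-2ja,\,n)$; its slope is therefore $n/(m-2ja)$.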
Thus there are at most $m$ generalized diagonals connecting $(0,0)$ to $(m,n)$; all other pairs of vertices are handled similarly. There are $const\cdot N^2$ lattice points and ends of spy mirrors at distance at most $N$ from the origin,
yielding a cubic estimate for the number of generalized diagonals and thus a quintic estimate on the complexity by Theorem \ref{thm1}.
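For concreteness, here is the arithmetic behind the last step, writing the cubic estimate as $N_{vert}(i) \le C' i^3$ for some constant $C'$ depending only on the table (the constant $C'$ is just a name for the constant implicit in the estimate above): the bound established in the proof of Theorem \ref{thm1} gives $$p(n) \le 1 + (q+2r-1)n + 2((q+2r)^2-3)\sum_{j=0}^{n-1}\sum_{i=0}^{j} C' i^3 \le 1 + (q+2r-1)n + 2((q+2r)^2-3)\,C'\, n^5,$$ since $\sum_{i=0}^{j} i^3 \le (j+1)^4$ and $\sum_{j=0}^{n-1}(j+1)^4 \le n^5$.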
Now consider the general case. Let $x=a_i$, $1 \le i \le k$, be the $x$-coordinates of the finite collection of vertical mirrors (there can be several mirrors with the same $x$-coordinate). The reflecting sides of the mirrors are not necessarily the same. For mirrors with the reflecting side on the left, say, the universal cover identifies the copy of each mirror at $x = 2 - a_i + 2n$ with the associated one-sided mirror at $x = 2 + a_i + 2n$; if the right side is reflecting, it identifies the mirror at $x = a_i + 2n$ with the associated mirror at $x = 2 - a_i + 2n$. As we trace a trajectory it always jumps to the right, by $b_i := 2a_i$ in the first case, and by $b_i := 2-2a_i$ in the second case.
Consider a generalized diagonal connecting $(0,0)$ to $(m,n)$. Let $0 \le j_i \le m-1$ denote the total number of spy mirrors with $x$-coordinate $a_i$ which the generalized diagonal hits, and let $0 \le j_0 \le m-1$ denote the number of vertical sides it crosses. For each $0 \le \ell \le m-2$ it either hits exactly one spy mirror or crosses exactly one vertical side in the rectangle $\ell < x \le \ell +1$, thus \begin{equation} \sum_{i=0}^k j_i = m-1\label{s} \end{equation}
(note that it cannot hit a spy mirror in the rectangle $m-1 < x \le m$ and still arrive at the point $(m,n)$, and by definition it arrives at, but does not cross, the side $x=m$). Furthermore the total horizontal distance travelled by the generalized diagonal is $m - \sum_{i=1}^k j_i b_i$. Since the vertical distance it travelled is $n$, it has slope $n/(m - \sum_i j_i b_i)$.
We claim by induction that the number $Q_k(m)$ of integer solutions of \eqref{s} with each $0 \le j_i \le m-1$ satisfies $Q_k(m) \le m^k$ for all $m \ge 1$. For $k =1$ equality holds; this was used above. Suppose that the claim is true for some fixed $k$. To estimate $Q_{k+1}(m)$, suppose that $j_0 = p \in \{0,1,\dots,m-1\}$; then $\sum_{i=1}^{k+1} j_i = m - 1 -p$, which has $Q_k(m-p) \le (m-p)^k$ solutions. Thus $Q_{k+1}(m) \le \sum_{p=0}^{m-1} (m-p)^k \le m\cdot m^k = m^{k+1}$.
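As a quick check of this bound in the case $k=2$: the equation $j_0+j_1+j_2 = m-1$ has exactly $\binom{m+1}{2} = m(m+1)/2$ solutions in nonnegative integers (here the upper bounds $j_i \le m-1$ impose no further restriction), and indeed $m(m+1)/2 \le m^2$ for every $m \ge 1$.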
As above, we have shown that for each choice of the set $\{j_i\}$ there is at most one generalized diagonal connecting the origin to $(m,n)$. Thus there are at most $O(m^k)$ generalized diagonals connecting $(0,0)$ to $(m,n)$; all other pairs of vertices are handled similarly. Again there are at most $O(N^2)$ lattice points and ends of spy mirrors at a distance of at most $N$ from the origin, yielding at most $O(N^{k+2})$ generalized diagonals of length at most $N$, and thus the complexity is at most $O(N^{k+4})$ by Theorem \ref{thm1}. \end{proofof}
Remark: we have in fact proved a slightly stronger theorem. \begin{theorem} Suppose that $Q$ is the square with a finite number of vertical spy mirrors contained in $k$ vertical lines. Then the total complexity satisfies $p(n) \le Cn^{k+4}$ for all $n$. \end{theorem}
\section{Acknowledgements.} We gratefully acknowledge the support of ANR Perturbations. The first author was also partially supported by the Dynasty Foundation.
\end{document}
\begin{document}
\begin{abstract} In the spirit of Glauberman's fundamental work in B-loops and Moufang loops \cite{Gl1, Gl2}, we prove Cauchy and strong Lagrange theorems for Bol loops of odd order. We also establish necessary conditions for the existence of a simple Bol loop of odd order, conditions which should be useful in the development of a Feit-Thompson theorem for Bol loops. Bol loops are closely related to Aschbacher's twisted subgroups \cite{Asch}, and we survey the latter in some detail, especially with regard to the so-called Aschbacher radical. \end{abstract}
\maketitle
\section{Introduction} \seclabel{introduction}
A \emph{magma} $(\mathcal{L},\cdot)$ consists of a set $\mathcal{L}$ together with a binary operation $\cdot$ on $\mathcal{L}$. For $x\in\mathcal{L}$, define the left (resp., right) translation by $x$ by $L(x)y = x\cdot y$ (resp., $R(x)y = y\cdot x$) for all $y\in\mathcal{L}$. A magma with a two-sided neutral element $1$ such that all left translations are bijective is called a \emph{left loop}. A left loop in which all right translations are bijective is called a \emph{loop}. For basic facts about loops, we refer the reader to \cite{Bel2, Br, CPS, Pf}. A loop satisfying the \textit{left Bol identity} \[ (x\cdot (y\cdot x))\cdot z = x\cdot (y\cdot (x\cdot z)) \] or equivalently \[ L(x\cdot (y\cdot x)) = L(x)L(y)L(x) \] for all $x,y,z\in\mathcal{L}$, is called a \emph{left Bol loop}. A loop satisfying the mirror identity $((x\cdot y)\cdot z)\cdot y = x\cdot ((y\cdot z)\cdot y)$ for all $x,y,z\in\mathcal{L}$ is called a \emph{right Bol loop}, and a loop which is both left and right Bol is a \emph{Moufang loop}. For the balance of this paper, the term ``Bol loop'' will refer to left Bol loops; all statements about left Bol loops dualize trivially to right Bol loops. For basic facts about Bol loops, we refer the reader to \cite{Ro} and IV.6 in \cite{Pf} (in both cases translating from right Bol to left Bol). A \textit{Bruck loop} is a Bol loop with the \textit{automorphic inverse property}, i.e., $x^{-1}\cdot y^{-1} = (x\cdot y)^{-1}$. (These are also known as K-loops \cite{Kie} and gyrocommutative gyrogroups \cite{Ung}.) A loop is said to be \emph{uniquely} $2$-\emph{divisible} if the squaring map $x\mapsto x\cdot x$ is a bijection; we will abuse terminology a bit and drop the ``uniquely''. A $2$-divisible Bruck loop is called a \textit{B-loop} \cite{Gl1}. (Glauberman's original definition was restricted to the finite case.)
In the fundamental papers \cite{Gl1, Gl2}, Glauberman studied finite B-loops and finite Moufang loops of odd order. In \cite{Gl1}, he proved Hall, Sylow, Cauchy and Lagrange theorems for finite B-loops. In \cite{Gl2}, he used the B-loop results to establish similar results for Moufang loops. He also proved Feit-Thompson theorems for both finite B-loops and finite Moufang loops of odd order. This naturally raises the question as to how far these results extend to the general case of finite Bol loops of odd order. In this paper, we begin examining this question. We make use of the notion of a \emph{twisted subgroup} of a group, adopting the terminology of Aschbacher \cite{Asch}. This same idea can be found in Glauberman's papers \cite{Gl1, Gl2}, and we use his results to establish Cauchy and strong Lagrange theorems for Bol loops of odd order. We also start an attack on a Feit-Thompson theorem for Bol loops of odd order. We were not able to prove a complete Feit-Thompson result, but we present some conditions that a simple Bol loop of odd order must satisfy which we think will be crucial in a proof, if there is indeed such a theorem. We also observe that certain varieties of Bol loops of odd order, such as those in which every left inner mapping is an automorphism, are necessarily solvable.
In the next section, we present a few preliminaries from loop theory. This can be safely skipped by those who are more interested in groups than in loops. Such readers will find \S\secref{ts} and \S\secref{2dts} to their taste. A minimal amount of loop theory is present in \S\secref{2dts}, so as not to abandon completely the spirit of \cite{Gl1}, although it is possible in principle to avoid loops completely. In \S\secref{bol} and \S\secref{2dbol}, we apply the results of \S\secref{ts} and \S\secref{2dts}, respectively, to Bol loops of odd order.
Throughout this paper, we state several open problems in the hope of stimulating research into Bol loops of odd order. In the Feit-Thompson direction, the existence of \emph{any} finite simple (non-Moufang) Bol loop is widely considered to be the most important open problem in loop theory \cite{OpenProblems}, and we think that focusing on Bol loops of odd order is a reasonable place to start.
\section{Preliminaries} \seclabel{preliminaries}
In this section, we review a few necessary notions from loop theory, and establish some notation conventions. Binary operations in left loops will be explicitly denoted, while group operations in groups will be denoted by juxtaposition. Permutations, such as left and right translations, will act on the left of their arguments.
For a set $S$, we let $S!$ denote the group of all permutations of $S$. The \emph{multiplication group}, $\mathrm{Mlt}(\mathcal{L})$, of a loop $\mathcal{L}$ is the subgroup of $\mathcal{L} !$ generated by all right and left translations. The \emph{left multiplication group}, $\mathrm{LMlt}(\mathcal{L})$, of a left loop $\mathcal{L}$ is the subgroup of $\mathrm{Mlt}(\mathcal{L})$ generated by left translations. The subgroup $\mathrm{LMlt}_1(\mathcal{L}) = \{ \phi\in\mathrm{LMlt}(\mathcal{L}) : \phi 1 = 1 \}$ is called the \emph{left inner mapping group} of $\mathcal{L}$. This subgroup has trivial core (recall that the core $\mathrm{ker}_H(G) = \bigcap_{g\in G} gHg^{-1}$ of a subgroup $H$ in a group $G$ is the largest normal subgroup of $G$ contained in $H$). The set $L(\mathcal{L}) = \{ L(x) : x\in\mathcal{L} \}$ of left translations is a left transversal (complete set of coset representatives) to each conjugate of $\mathrm{LMlt}_1(\mathcal{L})$ in $\mathrm{LMlt}(\mathcal{L})$.
These observations lead us to the following construction (\cite{Baer}; see also \cite{KJ}). Let $G$ be a group, $H\leq G$, and $T\subseteq G$ a left transversal of $H$. There is a natural $G$-action on $T$, which we denote by $\cdot$, defined by the equation $(g\cdot x)H = gxH$, that is, $g\cdot x$ is the unique representative in $T$ of the coset $gxH$. This action restricted to $T$ itself endows $T$ with a binary operation. If $1\in T$, then $(T, \cdot)$ turns out to be a left loop, which we call the \textit{induced} left loop. If $T$ is also a left transversal of each conjugate $gHg^{-1}$, $g\in G$, then $(T,\cdot)$ is a loop. All of the induced left loops we discuss in this paper turn out to be loops.
\begin{proposition} \cite{Ph} \proplabel{core} Let $G$ be a group with subgroup $H\leq G$ and a left transversal $T\subset G$ of $H$ such that $\langle T\rangle = G$. The permutation representation $G\to T!$ defined by $(g\cdot x)H = gxH$ ($g\in G$, $x\in T$) gives an epimorphism from $G$ onto $\mathrm{LMlt}(T,\cdot)$. The sequence $1 \to \mathrm{ker}_H(G) \to G \to \mathrm{LMlt}(T,\cdot) \to 1$ is exact. \end{proposition}
If $\mathcal{L}$ is a Bol loop, then $\mathcal{L}$ is \emph{power-associative}, that is, if $x^0 := 1$, $x^{n+1} := x\cdot x^n$, $x^{-n-1} := x^{-1}\cdot x^{-n}$, $n\geq 0$, then $x^m \cdot x^n = x^{m+n}$ for all $x\in \mathcal{L}$ and all integers $m,n$. Moreover, $\mathcal{L}$ is \emph{left power-alternative}, which means that $L(x^n) = L(x)^n$ for all $x\in \mathcal{L}$ and all integers $n$. Taking $n = -1$ and $n = 2$, we obtain, respectively, the \emph{left inverse property} (LIP) $L(x)^{-1} = L(x^{-1})$ and the \emph{left alternative property} (LAP) $L(x)^2 = L(x^2)$.
The \emph{left nucleus}, \emph{middle nucleus}, \emph{right nucleus}, and \emph{nucleus} of a loop $\mathcal{L}$ are defined, respectively, by \[ \begin{array}{rcl} \mathrm{Nuc}_l(\mathcal{L}) &:=& \{x \in \mathcal{L} : x(yz) = (xy)z\ \forall y,z \in \mathcal{L} \} \\ \mathrm{Nuc}_m(\mathcal{L}) &:=& \{y \in \mathcal{L} : x(yz) = (xy)z\ \forall x,z \in \mathcal{L} \} \\ \mathrm{Nuc}_r(\mathcal{L}) &:=& \{z \in \mathcal{L} : x(yz) = (xy)z\ \forall x,y \in \mathcal{L} \}\\ \mathrm{Nuc}(\mathcal{L}) &:=& \mathrm{Nuc}_l(\mathcal{L})\cap \mathrm{Nuc}_m(\mathcal{L})\cap \mathrm{Nuc}_r(\mathcal{L}) \end{array} \] Each of these is an associative subloop of $\mathcal{L}$.

\begin{lemma} \label{lem:nuc-char} If $\mathcal{L}$ is a left loop, then \[ \begin{array}{rcl} L(\mathrm{Nuc}_l(\mathcal{L})) &=& \bigcap_{x\in \mathcal{L}} L(\mathcal{L}) L(x)^{-1} \\ L(\mathrm{Nuc}_m(\mathcal{L})) &=& \bigcap_{x\in \mathcal{L}} L(x)^{-1} L(\mathcal{L}) . \end{array} \] \end{lemma}
\begin{proof} If $g\in \bigcap_{x\in\mathcal{L}} L(\mathcal{L}) L(x)^{-1}$, then $g = L(a)$ for some $a\in \mathcal{L}$, and for each $x\in\mathcal{L}$, there exists $y\in\mathcal{L}$ such that $L(y) = L(a)L(x)$. Applying both sides to $1$ gives $y=a\cdot x$, and thus $L(a)L(x) = L(a\cdot x)$ for all $x\in\mathcal{L}$, i.e., $a\in \mathrm{Nuc}_l(\mathcal{L})$. Reversing the argument yields the other inclusion, and the argument for $\mathrm{Nuc}_m(\mathcal{L})$ is similar. \end{proof}
Given a loop $\mathcal{L}$, a subloop $\mathcal{K}$ is said to be \emph{normal} if, for all $x,y\in \mathcal{L}$, $x\cdot (y\cdot \mathcal{K}) = (x\cdot y)\cdot \mathcal{K}$, $x\cdot \mathcal{K} = \mathcal{K}\cdot x$, and $(\mathcal{K}\cdot x)\cdot y = \mathcal{K}\cdot (x\cdot y)$ (\cite{Br}, p. 60, IV.1). These three conditions are clearly equivalent to the pair \begin{equation} \eqlabel{normality} x\cdot (\mathcal{K}\cdot y) = \mathcal{K}\cdot (x\cdot y) \qquad \text{and}\qquad x\cdot (\mathcal{K}\cdot y) = (x\cdot \mathcal{K})\cdot y \end{equation} for all $x,y\in \mathcal{L}$.
\section{Twisted Subgroups} \seclabel{ts}
Although the notion of a twisted subgroup of a group has been around for some time (see Remark \remref{history}), we follow here the terminology of Aschbacher \cite{Asch}, who proved one of the main structural results about twisted subgroups (our Proposition \propref{Asch-main} below). Our definition is a trivial modification of his.
For a subset $T$ of a group $G$, we use the notation $T^{-1} := \{ x^{-1} : x\in T \}$ and $xTx := \{ xyx: y\in T \}$ for $x\in T$.
\begin{definition}\cite{Asch} \deflabel{ts} A subset $T$ of a group $G$ is a \textit{twisted subgroup} of $G$ if (i) $1 \in T$, (ii) $T^{-1} = T$, and (iii) $xTx\subseteq T$ for all $x \in T$. \end{definition}
\begin{remark} One can replace (ii) and (iii) with the equivalent assertion\hspace*{\fill}\linebreak \noindent (ii$^{\prime}$) $xy^{-1} x \in T$ for all $x,y\in T$. \end{remark}
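To spell out the equivalence under assumption (i): taking $x = 1$ in (ii$^{\prime}$) gives $y^{-1} = 1\cdot y^{-1}\cdot 1 \in T$ for every $y\in T$, which is (ii); and then, for $x,y\in T$, applying (ii$^{\prime}$) to the pair $x, y^{-1}$ gives $x(y^{-1})^{-1}x = xyx \in T$, which is (iii). Conversely, (ii) and (iii) give $xy^{-1}x \in xTx \subseteq T$ for all $x,y\in T$.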
A twisted subgroup $T$ of a group $G$ is said to be \emph{uniquely} $2$-\emph{divisible} if each $x\in T$ has a unique square root in $T$, that is, a unique element $x^{1/2}\in T$ such that $(x^{1/2})^2 = x$. As we do with loops, we will abuse terminology slightly and drop the adverb ``uniquely".
An easy induction argument shows the following
\begin{proposition} (\cite{Asch}, Lem. 1.2(1)) \proplabel{mono} Let $G$ be a group and let $T\subseteq G$ be a twisted subgroup. Then for each $x\in T$, $\langle x \rangle \subseteq T$. \end{proposition}
\begin{remark} In some cases, portions of Definition \defref{ts} are redundant. \begin{enumerate} \item[1.] For \textit{finite} groups, the proof of Proposition \ref{prop:mono} shows that a subset satisfying (i) and (iii) necessarily satisfies (ii) (\cite{Asch}, Lemma 1.2(1)). \item[2.] If $T\subseteq G$ is a left transversal of a subgroup $H\leq G$ such that (iii) holds, then (ii) holds. Indeed, for $x\in T$, let $x^{\prime}\in T$ denote the representative of $x^{-1} H$. Then $x^{\prime} x x^{\prime}\in T$ and $x^{\prime} x x^{\prime}H = x^{\prime}H$, which implies $x^{\prime} x x^{\prime} = x^{\prime}$. Thus $x^{\prime} = x^{-1}$. In this case, the induced left loop $(T,\cdot)$ is a Bol loop; see Proposition \propref{ts-bol}. \item[3.] Glauberman showed that in the finite $2$-divisible case, both (i) and (ii) are redundant (\cite{Gl1}, Lemma 3; \cite{Gl2}, Remark 7). More precisely, he showed that if $T$ is a subset of a group $G$ satisfying (iii) and such that every element of $T$ has finite odd order, then (i) and (ii) hold, and every element of $T$ has a unique square root. \end{enumerate} \end{remark}
Of course, any subgroup is a twisted subgroup, but the notion of twisted subgroup is modeled on the following example which is not a subgroup.
\begin{example} \exlabel{1} Let $G$ be a group, and fix $\tau\in \mathrm{Aut}(G)$. Define \[ K(\tau) := \{ g\in G : g^{\tau} = g^{-1} \} . \] Then $K(\tau)$ is a twisted subgroup of $G$. If $\tau^2 = 1$, define \[ B(\tau) := \{ gg^{-\tau} : g\in G \} \] Then $B(\tau)$ is a twisted subgroup of $G$ and $B(\tau)\subseteq K(\tau)$. \end{example}
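For a concrete illustration of this construction, one may take $G = GL_n(\mathbb{R})$ and $\tau(g) = (g^{T})^{-1}$, an automorphism with $\tau^2 = 1$. Then \[ K(\tau) = \{ g \in GL_n(\mathbb{R}) : g^{T} = g \}, \qquad B(\tau) = \{ g g^{T} : g \in GL_n(\mathbb{R}) \}, \] so $K(\tau)$ consists of the invertible symmetric matrices and $B(\tau)$ of the positive definite symmetric matrices. For $n \ge 2$ neither set is a subgroup, since a product of symmetric matrices need not be symmetric.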
\begin{example} \exlabel{2} Let $T$ be a twisted subgroup of a group $G$. For $x\in T$, define $\theta_x \in T!$ by $\theta_x y = xyx$ for all $y\in T$. Then $\theta_1 = 1_{T!}$, $\theta_{x^{-1}} = \theta_x^{-1}$, and $\theta_x \theta_y \theta_x = \theta_{xyx}$ for all $x,y\in T$. Thus $\hat{T} = \{ \theta_x : x\in T \}$ is a twisted subgroup of $T!$. For later reference, we will denote by $\hat{G}$ the subgroup of $T!$ generated by $\hat{T}$. \end{example}
The \emph{associates} of a twisted subgroup $T$ of a group $G$ are the translates $aT = Ta^{-1}$, $a\in T$.
\begin{proposition} (\cite{Asch}, Lemma 1.5(1)) \proplabel{associates} Every associate of a twisted subgroup is a twisted subgroup. \end{proposition}
Most interesting results about twisted subgroups are predicated upon the assumption that a twisted subgroup $T$ generates its group $G$. In this case we will just say that $T$ is a generating twisted subgroup of $G$. Contained in such $T$ are important normal subgroups of $G$. First we consider the intersection of all associates.
\begin{theorem} \thmlabel{sharp} Let $G$ be a group with generating twisted subgroup $T$, and let \[ T^{\#} = \bigcap_{x\in T} xT . \] Then $T^{\#} \subseteq T$, $T^{\#} = \bigcap_{x\in T} Tx$, and $T^{\#} \triangleleft G$. \end{theorem}
\begin{proof} $T^{\#} \subseteq T$ is clear since $1\in T$, while $T^{\#} = \bigcap_{x\in T} Tx$ follows from $xT = Tx^{-1}$ for $x\in T$. Now fix $a,b\in T^{\#}$ and $x\in T$. There exist $u,v\in T$ such that $a = xu \in xT$ and $b = u^{-1} v \in u^{-1} T$ (using $T^{-1} = T$). Hence $ab = xv \in xT$, and since $x\in T$ is arbitrary, $ab \in T^{\#}$. Thus $T^{\#}$ is a subgroup of $G$. For each $y\in T$, \[ yT^{\#}y^{-1} = \bigcap_{x\in T} yxTy^{-1}= \bigcap_{x\in T} (yxy) T = \bigcap_{x\in T} \theta_y x T = T^{\#}. \] Since $T$ generates $G$, $T^{\#}$ is normal in $G$. \end{proof}
A more important normal subgroup sitting inside a twisted subgroup was introduced by Aschbacher (\cite{Asch}, p. 117). Our motivating discussion is a simplified version of his. Let $G$ be a group and let $T\subseteq G$ be a generating twisted subgroup of $G$. Consider the group $G_0 = \langle ( x, x^{-1} ) : x\in T \rangle < G\times G$ generated by the graph $\{ (x,x^{-1}) : x\in T \}$ of the inversion mapping $x\mapsto x^{-1}$ on $T$. Let $\pi_i : G\times G\to G$ denote the projection onto the $i$th factor. As a subgroup of $G\times G$, $G_0$ is invariant under the action of the swapping automorphism $(x,y)\mapsto (y,x)$. This automorphism restricts to an isomorphism of the kernels
$\mathrm{Ker}(\pi_i |_{G_0})$. Each kernel is obviously isomorphic to the following subgroup of $G$: \[ T^{\prime} = \{ x_1 \cdots x_n : x_1^{-1} \cdots x_n^{-1} = 1,x_i \in T \}.\eqno{(1)} \]
We have $T^{\prime} = \pi_1({\rm Ker}(\pi_2 |_{G_0})) \triangleleft \pi_1(G_0) = G$, since $T$ generates $G$. From the preceding discussion, we see that $G_0$ is the graph of an automorphism $\tau$ of $G$ if and only if $T^{\prime} = \langle 1 \rangle$. In other words, $T$ is a subset of some $K(\tau)$ if and only if $T^{\prime} = \langle 1 \rangle$. This proves almost all of the following result.
\begin{proposition} (\cite{Asch}, Theorem 2.2) \proplabel{Asch-main} Let $G$ be a group with generating twisted subgroup $T$. There exists $\tau\in\mathrm{Aut}(G)$ with $\tau^2 = 1$ such that $T\subseteq K(\tau)$ if and only if $T^{\prime} = \langle 1 \rangle$. In this case the automorphism $\tau$ is uniquely determined. \end{proposition}
\begin{proof} All that remains is the uniqueness and the order. If $T\subseteq K(\sigma )$ for some $\sigma\in\mathrm{Aut}(G)$, then $\tau\sigma^{-1}$ centralizes $T$, but then $\sigma = \tau$ since $T$ generates $G$. Since $T\subseteq K(\sigma )$ implies $T\subseteq K(\sigma^{-1})$, it follows that $\tau^2 = 1$. \end{proof}
For a twisted subgroup $T$ (whether it generates $G$ or not), we define its (Aschbacher) \textit{radical} to be the normal subgroup $T^{\prime}$ given by (1) (\cite{Asch}, p.117). If $T^{\prime} = \langle 1 \rangle$, then we say that $T$ is \textit{radical-free}.
If $T$ is radical-free and generates $G$, we will refer to the uniquely determined $\tau\in\mathrm{Aut}(G)$ of order $2$ such that $T\subseteq K(\tau)$ as being the corresponding \textit{Aschbacher automorphism}.
\begin{proposition} Let $G$ be a group with generating twisted subgroup $T$. Then $T^{\prime}$ is contained in every associate of $T$, and hence $T^{\prime} \subseteq T^{\#}$. \end{proposition}
\begin{proof} The principal assertion is (\cite{Asch}, Theorem 2.1(3)), and the rest follows from the definition of $T^{\#}$. \end{proof}
\begin{remarks}\hspace*{\fill} \begin{enumerate} \item If $T$ is a \emph{proper} generating twisted subgroup of $G$, then $T^{\prime}$ and $T^{\#}$ are proper normal subgroups of $G$. Thus twisted subgroups of simple groups are radical-free and the intersection of all their associates is trivial. \item If $T$ is actually a subgroup of $G$, then the radical $T^{\prime}$ is the derived subgroup of $T$. This motivates our choice of notation, which is different from that of \cite{Asch}. \end{enumerate} \end{remarks}
Besides the canonical projection of the previous proposition, there is another radical-free twisted subgroup associated with any twisted subgroup. Here we use the definitions and notation of Example \exref{2}.
\begin{theorem} \thmlabel{hat-rad-free} Let $G$ be a group, let $T\subseteq G$ be a twisted subgroup, and let $\hat{T} = \{ \theta_x : x\in T \}$ and $\hat{G} = \langle \hat{T} \rangle$. Then $\hat{T}$ is a radical-free twisted subgroup of $\hat{G}$. \end{theorem}
\begin{proof} If $\theta_{x_1}\cdots \theta_{x_n} \in \hat{T}^{\prime}$ for some $x_i \in T$, $1\le i\le n$, then $1 = \theta_{x_n}\cdots \theta_{x_1} 1 = x_n\cdots x_1^2 \cdots x_n$, and rearranging gives $x_1\cdots x_n^2 \cdots x_1 = 1$. But then for all $y\in T$, \[ \theta_{x_1}\cdots \theta_{x_n} y = \theta_{x_1}\cdots \theta_{x_n}\theta_{x_n}\cdots \theta_{x_1}y =\theta_{x_1\cdots x_n^2 \cdots x_1} y = y . \] Thus $\theta_{x_1}\cdots \theta_{x_n} = 1_{\hat{G}}$, and therefore $\hat{T}^{\prime} = \langle 1 \rangle$. \end{proof}
In view of the preceding theorem, it is not surprising that the radical is exactly the obstruction to a natural permutation representation of a group $G$ on a generating twisted subgroup $T$.
\begin{theorem} \thmlabel{extend} Let $G$ be a group with generating twisted subgroup $T$. The mapping $\theta : T\to \hat{T}$ defined by $\theta_x y = xyx$ ($x,y\in T$) extends to a homomorphism $\theta : G\to \hat{G}$ if and only if $T^{\prime} = \langle 1\rangle$. In this case, $\mathrm{ker} (\theta) = Z(G)\cap C_G(\tau)$ where $\tau\in\mathrm{Aut}(G)$ is the Aschbacher automorphism. \end{theorem}
\begin{proof} The subgroup $\langle (x,\theta_{x}) : x\in T \rangle$ of $G\times T!$ is the graph of a homomorphism from $G$ into $T!$ if and only if the group $K = \{ \theta_{x_1} \cdots \theta_{x_n} : x_1\cdots x_n = 1, x_i\in T \} = \langle 1 \rangle$. The mapping $K\to T; \phi\mapsto \phi 1$ is a homomorphism with image in $T^{\prime}$. Indeed, if $\phi = \theta_{x_1}\cdots \theta_{x_n}$ for $x_i\in T$ with $x_1\cdots x_n = 1$, then $\phi 1 = x_n\cdots x_1 \in T^{\prime}$. This homomorphism is clearly onto, and if $\phi 1 = 1$, then $x_n\cdots x_1 = 1$, whence $\phi = \theta_{x_1}\cdots \theta_{x_n} = 1_{\hat{G}}$. Therefore $K$ is isomorphic to $T^{\prime}$. This establishes the first assertion. Assume now that $T^{\prime} = \langle 1\rangle$ and let $\tau\in\mathrm{Aut}(G)$ denote the Aschbacher automorphism. Fix $g = x_1\cdots x_n \in \mathrm{ker} (\theta)$ for $x_i\in T$. Then $g y x_n\cdots x_1 = y$ for all $y\in T$. Taking $y=1$, we have $x_n\cdots x_1 = g^{-1}$. Thus $g$ centralizes $T$. Since $T$ generates $G$, $g\in Z(G)$. Also, $g^{\tau} = x_1^{\tau}\cdots x_n^{\tau} = (x_n\cdots x_1)^{-1} = g$. Conversely, if $g = x_1\cdots x_n \in Z(G)\cap C_G(\tau)$ for $x_i\in T$, then $g^{-1} = g^{-\tau} = x_n^{-\tau}\cdots x_1^{-\tau} = x_n\cdots x_1$, and so $g\in \mathrm{ker} (\theta)$. \end{proof}
\begin{corollary} \corolabel{ts-extend} Let $G$ be a simple group with generating twisted subgroup $T$. The mapping $\theta : T\to \hat{T}$ defined by $\theta_x y = xyx$ ($x,y\in T$) extends to an isomorphism $\theta : G\to \hat{G}$. \end{corollary}
\section{$2$-Divisible Twisted Subgroups and B-loops} \seclabel{2dts}
We now focus our attention on $2$-divisible twisted subgroups and their associated B-loops. Much (though not all) of this section is an adumbration of Glauberman's fundamental results \cite{Gl1, Gl2}. We give (often simpler) proofs of some of his results to make the exposition self-contained.
\begin{lemma} \lemlabel{equiv} Let $G$ be a group, and let $T\subseteq G$ be a twisted subgroup in which every element has finite order. The following are equivalent: (i) $T$ is $2$-divisible; (ii) every element of $T$
has odd order; (iii) no element of $T$ has order $2$. If, in addition, $T$ has finite order, then these conditions imply: (iv) $|T|$ is odd. \end{lemma}
\begin{proof} If $T$ is $2$-divisible, then obviously no elements of $T$ have even order, and so (i) implies (ii) and (iii). The equivalence of (ii) and (iii) is trivial. Now assume (ii) and let $z\in T$ be given with order $2k+1$. Then $z^{k+1}$ is a square root of $z$ in $T$. If $y\in T$ were another square root of $z$, then $y^{2(2k+1)} = 1$, and so $y^{2k+1} = 1$. But then $z^{2k+1} = y^{2k+1} = z^k y$ so that $y = z^{k+1}$. Thus (i) holds. For the remaining assertion, note that the inversion mapping $x\mapsto x^{-1}$ is a permutation of the set $T \backslash \{ 1 \}$. If $T$ had even order, then this mapping would necessarily fix some $a \neq 1$. But then $a^2 = 1$, whence $T$ is not $2$-divisible. Thus (i) implies (iv). \end{proof}
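For instance, if $z \in T$ has order $7$, then $k = 3$ and the square root produced by this argument is $z^{4}$, since $(z^{4})^{2} = z^{8} = z$.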
\begin{example} \exlabel{converse}
In general, condition (iv) of Lemma \lemref{equiv} does not imply the other conditions. Indeed, let $G = S_3$, the symmetric group on $3$ letters and let $T$ be the set of transpositions. Then $|T| = 3$, but every element of $T$ has order $2$. \end{example}
Radical-free, generating, $2$-divisible twisted subgroups are ``rigid" in the sense that they are uniquely determined by the Aschbacher automorphism. For a subset $S$ of a group $G$, we denote $S^2 = \{ x^2 : x\in S\}$.
\begin{theorem} \thmlabel{rigid} Let $G$ be a group, let $T\subseteq G$ be a radical-free, generating twisted subgroup, and let $\tau\in\mathrm{Aut}(G)$ be the corresponding Aschbacher automorphism. Then \[ K(\tau)^2 \subseteq B(\tau) \subseteq T \subseteq K(\tau). \] In particular, if $T$ is $2$-divisible, then $B(\tau) = T = K(\tau)$, and $T$ is a left transversal of $C_G(\tau)$ in $G$. \end{theorem}\begin{proof} For $g\in K(\tau)$, $g^2 = gg^{-\tau} \in B(\tau)$, and so $K(\tau)^2 \subseteq B(\tau)$. For $g\in G$, $g = x_1\cdots x_n$ for some $x_i\in T$. Thus $gg^{-\tau} = x_1\cdots x_n x_n\cdots x_1 \in T$, since $T$ is a twisted subgroup. Thus $B(\tau) \subseteq T$. The equality in the $2$-divisible case follows immediately. Now for $g\in G$, we have $a := (gg^{-\tau})^{1/2}\in T$, and it is easy to check that $h := a^{-1} g \in C_G(\tau)$. The uniqueness of the decomposition $g = ah$ is obvious. \end{proof}
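The ``easy check'' at the end of the proof amounts to the following computation, using $a^{\tau} = a^{-1}$ (valid since $a \in T \subseteq K(\tau)$) and $a^{2} = g g^{-\tau}$: \[ h^{\tau} = (a^{-1}g)^{\tau} = a\, g^{\tau} = a^{-1}\left( a^{2} g^{\tau}\right) = a^{-1}\left( g g^{-\tau} g^{\tau}\right) = a^{-1} g = h . \]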
\begin{definition} \label{defn:B-loop} Let $G$ be a group with a $2$-divisible twisted subgroup $T$. Define a binary operation $\odot : T\times T\to T$ by \[ x\odot y := (x y^2 x)^{1/2} \] for $x,y\in T$. We follow Glauberman's notation \cite{Gl2} and denote the magma $(T,\odot)$ by $T(1/2)$. We denote the left multiplication maps for $T(1/2)$ by $b_x y := (xy^2 x)^{1/2}$ for $x,y\in T$. \end{definition}
\begin{lemma} \label{lem:B-loop} Let $G$ be a group with a $2$-divisible twisted subgroup $T$. \begin{enumerate} \item[1.] $T(1/2)$ is a B-loop. \item[2.] Integer powers of elements in $T$ formed in $G$ agree with those in $T(1/2)$. Thus an element has finite order in $T$ if and only if it has the same order in $T(1/2)$. \item[3.] If $T$ is radical-free and generates $G$, then $T(1/2)$ agrees with the left loop structure induced on $T$ as a left transversal. \end{enumerate} \end{lemma}
\begin{proof} (1) and (2) follow from Lemma 3 in \cite{Gl1} and the following remark. For (3), let $\tau\in\mathrm{Aut}(G)$ be the Aschbacher automorphism, and note that for $x, y\in T$, $x\odot y = ((xy)(xy)^{-\tau})^{1/2}$. \end{proof}
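To illustrate the definition with a standard (infinite) example: the set $T$ of positive definite symmetric matrices in $GL_n(\mathbb{R})$, which appeared as $B(\tau)$ in the matrix illustration following Example \exref{1}, is a $2$-divisible twisted subgroup, since a positive definite matrix has a unique positive definite square root. Lemma \ref{lem:B-loop} therefore makes \[ x\odot y = (xy^{2}x)^{1/2}, \qquad x,y\in T, \] a B-loop operation on $T$.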
\begin{remark} It is slightly more common in the loop theory literature to use the operation \[ x\odot^{\prime} y := x^{1/2}yx^{1/2} \] for $x,y\in T$. Clearly the squaring map $x\mapsto x^2$ is an isomorphism of $(T,\odot )$ onto $(T,\odot^{\prime})$. That some authors prefer $\odot^{\prime}$ is partly because the B-loop $(T,\odot^{\prime})$ is isotopic to a quasigroup structure on $T$ given by $(x,y)\mapsto xy^{-1} x$. (For the notion of isotopy, see any of the standard references \cite{Bel2, Br, Pf}.) With different terminology than that used here, the preceding construction on $2$-divisible twisted subgroups (using either $\odot$ or $\odot^{\prime}$) can be found in Foguel and Ungar \cite{FU}, Glauberman (\cite{Gl1}, Lemma 3), Kikkawa (\cite{Kik}, Theorem 5), Kreuzer \cite{Kr}, and Kiechle (\cite{Kie}, Chap. 6D). There is a related construction in uniquely $2$-divisible loops $\mathcal{L}$ which goes as follows: for $x,y\in\mathcal{L}$, define $x\odot^{\prime\prime} y = x^{1/2} \cdot (y\cdot x^{1/2})$. For certain loops $(\mathcal{L},\cdot )$, the new magma $(\mathcal{L}, \odot^{\prime\prime})$ turns out to be a B-loop. For $2$-divisible Moufang loops, this construction is due to Bruck (\cite{Br}, VII.5.2, p. 121). For uniquely $2$-divisible Bol loops, it is implicit in the work of Belousov (\cite{Bel1}, \cite{Bel2}), and is spelled out in work of P.T. Nagy and K. Strambach (\cite{NS}, p. 301, Thm. 7) as well as the recent dissertation of G. Nagy (\cite{Na3}, T\'{e}tel 2.3.6). As it turns out, these loop-based constructions are no more general than the construction for twisted subgroups, because they all depend on the fact that the loop in question can be identified in a natural way with a twisted subgroup. The B-loop structure is then transferred from the twisted subgroup to the loop. We will see how this works for Bol loops in \S\secref{2dbol}. \end{remark}
The following is clear from the definitions.
\begin{lemma} \lemlabel{conj} Let $G$ be a group with $2$-divisible twisted subgroup $T$, and let $s \in T!$ denote the squaring map on T: $s(x) = x^2$. Then $b_x = s^{-1}\theta_x s$ for all $x\in T$. Thus $\mathrm{LMlt}(T(1/2))$ is conjugate in $T!$ to $\hat{G} = \langle \theta_x : x\in T\rangle$. \end{lemma}
\begin{theorem} \thmlabel{b-extend} Let $G$ be a group with a generating, $2$-divisible twisted subgroup $T$. The mapping $b: T\to T!$ defined by $b_x y = (xy^2x)^{1/2}$ ($x,y\in T$) extends to a homomorphism $b : G\to T!$ if and only if $T^{\prime} = \langle 1\rangle$. In this case, there is an exact sequence \[ 1\to Z(G)\cap C_G(\tau) \to G \to \mathrm{LMlt}(T(1/2)) \to 1 \] where $\tau\in\mathrm{Aut}(G)$ is the Aschbacher automorphism, and $Z(G)\cap C_G(\tau)$ is the core of $C_G(\tau)$ in $G$. \end{theorem}
\begin{proof} This follows from Lemma \lemref{conj}, Theorem \thmref{extend} and Proposition \propref{core}. \end{proof}
One reason it is particularly convenient to work with the B-loop associated with a $2$-divisible twisted subgroup is the following.
\begin{lemma} (cf. \cite{Gl1}, p. 379, Lemma 4) \lemlabel{subloops} Let $G$ be a group and let $T\subseteq G$ be a $2$-divisible twisted subgroup. Then $K\subseteq T$ is a twisted subgroup of $G$ if and only if $K(1/2) := (K,\odot)$ is a subloop of $T(1/2)$. \end{lemma}
\begin{proof} This is immediate from the definition of $\odot$. \end{proof}
The second corollary to the following result is (\cite{Gl1}, p. 384, Corollary 3).
\begin{theorem} \thmlabel{preLagrange} Let $G$ be a finite group, let $T\subseteq G$ be a $2$-divisible twisted subgroup, and let $A\subseteq T$ be a subgroup of $G$. \begin{enumerate}
\item[1.] If $A$ is normal in $G$, then $|A|$ divides $|T|$.
\item[2.] If $A$ is abelian, then $|A|$ divides $|T|$. \end{enumerate} \end{theorem}
\begin{proof} (1) For each $x\in T$, note that $x A = x^{1/2} A x^{1/2} \subseteq T$, and thus $\{ xA : x\in T \}$ partitions $T$ into subsets of equal cardinality.
(2) $A(1/2)$ is an abelian group isomorphic to $A$. The restriction of \[ b: T\to \mathrm{LMlt}(T(1/2)); x \mapsto (y \mapsto x\odot y) \] to $A$ is a homomorphism of $A(1/2)$ onto its image. The orbits $\{ \{ b_x y : x\in A \} : y\in T\}$ clearly partition $T$, and the orbit through $1\in T$ is $A$ itself since $A$ is $2$-divisible. The action of $A$ on any orbit is regular since $T(1/2)$ is a loop. \end{proof}
\begin{corollary} \corolabel{rad-divides}
Let $G$ be a group and let $T\subseteq G$ be a finite $2$-divisible twisted subgroup. Then $|T^{\#}|$ and $|T^{\prime}|$ divide $|T|$. \end{corollary}
\begin{corollary} (Lagrange's Theorem) \corolabel{lagrange}
Let $G$ be a group and let $T\subseteq G$ be a finite $2$-divisible twisted subgroup. Then for every $x\in T$, the order of $x$ divides $|T|$. \end{corollary}
\begin{proof} Apply Theorem \ref{thm:preLagrange}(2) to the abelian subgroup $\langle x\rangle \subseteq T$. \end{proof}
\begin{remark} \remlabel{lagrange-no} As Example \exref{converse} indicates, Lagrange's theorem does not hold for all twisted subgroups. \end{remark}
The following is a distilled version of (\cite{Gl2}, Theorem 14).
\begin{theorem} \thmlabel{odd} Let $G$ be a finite group, and let $T\subseteq G$ be a $2$-divisible, generating twisted subgroup. Then $G$ has odd order. \end{theorem}
\begin{proof}
Assume first that $T$ is radical-free and let $\tau\in\mathrm{Aut}(G)$ denote the Aschbacher automorphism. By Theorem \thmref{rigid}, $T = B(\tau)$. By Glauberman's $Z^*$ Theorem (\cite{Gl3}, Theorem 1), there exists a normal subgroup $N$ of $G\langle \tau \rangle$ such that $|N|$ is odd and $\tau N \in Z(G\langle \tau \rangle / N)$. But then for all $g\in G$, $g \tau g^{-1} \tau = gg^{-\tau} \in N$. Thus $T\subseteq N$. Since $T$ generates $G$, $G = N$. For the general case, $G/T^{\prime}$ must have odd order, and thus by Corollary \cororef{rad-divides},
$|G| = |G/T^{\prime}| |T^{\prime}|$ is odd. \end{proof}
\begin{definition} \deflabel{pi} Let $\pi$ be a set of primes. A positive integer $n$ is a $\pi$-\emph{number} if $n=1$ or if $n$ is a product of primes in $\pi$. For every positive integer $n$, let $n_\pi$ denote the largest $\pi$-number that divides $n$. As usual, a finite group $G$ is a
$\pi$-\emph{group} if $|G| = |G|_{\pi}$. If $T\subseteq G$ is a twisted subgroup, then we say that $T$ is a \emph{twisted} $\pi$-\emph{subgroup} of $G$ if $|T| = |T|_{\pi}$. We say that $T$ satisfies the \emph{Hall} $\pi$-\emph{condition} if there exists a twisted
$\pi$-subgroup $S$ of $G$ such that $S\subset T$ and $|S| = |T|_{\pi}$. If $\pi = \{ p\}$, we say that $T$ satisfies the \emph{Sylow} $p$-\emph{condition} if $T$ satisfies the Hall $\{p\}$-condition. \end{definition}
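For example, if $\pi = \{3,5\}$ and $n = 360 = 2^{3}\cdot 3^{2}\cdot 5$, then $n_{\pi} = 3^{2}\cdot 5 = 45$; thus a group of order $45$ is a $\pi$-group, while a group of order $360$ is not.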
\begin{lemma} \lemlabel{preHall} Let $G$ be a finite group of odd order, let $\pi$ be a set of primes, and let $\beta\in\mathrm{Aut}(G)$ have order $2$. Then every $\pi$-subgroup of $G$ fixed by $\beta$ is contained in a Hall $\pi$-subgroup of $G$ fixed by $\beta$. \end{lemma}
\begin{proof} Since $G$ is solvable \cite{FT}, this is just (\cite{Gl1}, p. 391, Lemma 11). \end{proof}
Glauberman remarked that the following result can be established by purely group-theoretical means (\cite{Gl2}, p. 413, Remark 7).
\begin{theorem} (\cite{Gl2}, Theorem 15) \thmlabel{pi} Let $G$ be a finite group, let $T\subseteq G$ be a $2$-divisible, generating twisted subgroup, and let $\pi$ be a set of odd primes. Then $T$ is a twisted $\pi$-subgroup if and only if $G$ is a $\pi$-group. \end{theorem}
\begin{proof}
By Theorem \thmref{rigid}, $|T|$ divides $|G|$, and so if $G$ is a $\pi$-group, then $T$ is certainly a twisted $\pi$-subgroup. For the converse, assume first that $T$
is radical-free, let $\tau\in\mathrm{Aut}(G)$ denote the Aschbacher automorphism, and set $H := C_G(\tau)$. Let $P_0$ be a Hall $\pi$-subgroup of $H$. By Theorem \thmref{odd}, $|G|$ is odd, and so by Lemma \lemref{preHall}, $P_0$ is contained in some Hall $\pi$-subgroup $P$ of $G$ which is fixed by $\tau$. By Theorem \thmref{rigid}, $S := P\cap T$ is a left transversal of $P_0 = P\cap H$ in $P$, and hence
$| S | = |P| / |P_0| = |G|_{\pi} / |H|_{\pi} = [ G : H ]_{\pi} = |T|_{\pi} = |T|$. Thus $T\subseteq P$, and since $T$ generates $G$, we have $G = P$. In the general case, $G/T^{\prime}$ is a $\pi$-group, and so by Corollary
\cororef{rad-divides}, $|G| = |G/T^{\prime}| |T^{\prime}|$ is a $\pi$-number. \end{proof}
\begin{theorem} (Hall's Theorem, cf. \cite{Gl1}, p. 392, Theorem 8) \thmlabel{hall} Let $G$ be a finite group, and let $T\subseteq G$ be a $2$-divisible twisted subgroup. For every set $\pi$ of primes, $T$ satisfies the Hall $\pi$-condition, and thus $T(1/2)$ has a Hall $\pi$-subloop. \end{theorem}
\begin{proof} Without loss of generality, assume $T$ generates $G$. Since $T$ can be identified with its radical-free image $b(T)\subseteq \mathrm{LMlt}(T(1/2))$, there is no loss of generality in assuming that $T$ is radical-free. Repeating the proof of Theorem \thmref{pi}, we obtain a Hall $\pi$-subgroup $P$ of $G$ fixed by $\tau$ such that
$S := P\cap T$ satisfies $|S| = |T|_{\pi}$. $S(1/2)$ is a Hall $\pi$-subloop of $T(1/2)$ by Lemma \lemref{subloops}. \end{proof}
\begin{remark} \remlabel{hall} Using Theorem \thmref{pi} (i.e., \cite{Gl2}, Theorem 15), Glauberman showed that the Hall $\pi$-subloops of $T(1/2)$
are all conjugate under $C_G(\tau)$, that every prime dividing the number of such subloops also divides $|T|$ and is not in $\pi$, and that every $\pi$-subloop of $T(1/2)$ (that is, every twisted $\pi$-subgroup of $G$ contained in $T$) is contained in a Hall $\pi$-subloop; see (\cite{Gl1}, Theorem 8). \end{remark}
\begin{corollary} \corolabel{sylow} (Sylow's Theorem, \cite{Gl1}, p. 394, Corollary 3) Let $G$ be a finite group with a $2$-divisible twisted subgroup $T$. For every prime $p$, $T$ satisfies the Sylow $p$-condition, and thus $T(1/2)$ has a Sylow $p$-subloop. \end{corollary}
\begin{remark} In \cite{Gl1}, Glauberman originally gave separate proofs under different hypotheses of the Sylow and Hall theorems for B-loops, because at the time it was not known that the group generated by a $2$-divisible twisted subgroup must have odd order. In light of his later result (\cite{Gl2}, Theorem 14), our Theorem \thmref{odd}, the Sylow result easily follows from the Hall result. The additional properties mentioned in Remark \remref{hall} obviously hold in the Sylow case as well. \end{remark}
\begin{corollary} (Cauchy's Theorem, cf. \cite{Gl1}, p. 394, Corollary 1) \corolabel{ts-cauchy}
Let $G$ be a finite group, and let $T\subseteq G$ be a $2$-divisible twisted subgroup. If a prime $p\mid |T|$, then $T$ contains an element of order $p$. \end{corollary}
\begin{proof} By Corollary \cororef{sylow}, there is a twisted $p$-subgroup $S$ of $G$
such that $S\subseteq T$ and $|S| = |T|_p$. The rest follows from Lagrange's theorem (Corollary \cororef{lagrange}). \end{proof}
\begin{remark} \remlabel{no-hope} There is no hope of extending Cauchy's theorem to all twisted subgroups. There exists a twisted subgroup of order $120$ which does not have an element of order $5$. We will discuss this further in Remark \remref{bol-no-hope}. \end{remark}
\begin{proposition} (Strong Lagrange Theorem) \proplabel{strongLagrange}
Let $G$ be a finite group, and let $T\subseteq G$ be a $2$-divisible twisted subgroup. If $A \subseteq B \subseteq T$ are twisted subgroups of $G$, then $|A|$ divides $|B|$. \end{proposition}
\begin{proof} (\cite{Gl1}, p. 395, Corollary 4). \end{proof}
\begin{remark} Feder \cite{Feder} recently extended Proposition \propref{strongLagrange} to \emph{strong near subgroups}, which include twisted subgroups of odd order as a special case. Roughly speaking, strong near subgroups are twisted subgroups in which the $2$-elements are well-behaved. \end{remark}
\section{Bol loops} \seclabel{bol}
We now apply the results of \S\secref{ts} to Bol loops. In fact, Bol loops are related to twisted subgroups in more than one way.
\begin{example} \exlabel{3} Let $\mathcal{L}$ be a loop, and let $L(\mathcal{L}) = \{ L(x) : x\in \mathcal{L} \}$ denote its set of left translations. Then $\mathcal{L}$ is a Bol loop if and only if $L(\mathcal{L})$ is a twisted subgroup of $\mathrm{LMlt}(\mathcal{L})$. Also, if $\mathcal{K}$ is a subloop of $\mathcal{L}$, then $L(\mathcal{K})$ is a twisted subgroup of $\mathrm{LMlt}(\mathcal{L})$. \end{example}
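One direction of this equivalence is immediate: if $\mathcal{L}$ is Bol, then $L(1) = 1$, the left inverse property gives $L(x)^{-1} = L(x^{-1}) \in L(\mathcal{L})$, and the Bol identity in the form \[ L(x)L(y)L(x) = L(x\cdot (y\cdot x)) \in L(\mathcal{L}) \] gives condition (iii) of Definition \defref{ts}; for the converse, see Proposition \propref{ts-bol} below.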
More generally, we have the following.
\begin{proposition} (\cite{KJ}, Remark 4.4(2)) \proplabel{ts-bol} Let $G$ be a group, $H\leq G$, and $T\subseteq G$ a transversal of $H$. If $T$ is a twisted subgroup, then $(T,\cdot)$ is a Bol loop. Conversely, if $H$ is core-free and $(T,\cdot)$ is a Bol loop, then $T$ is a twisted subgroup. \end{proposition}
\begin{example} \exlabel{4} Let $\mathcal{L}$ be a Bol loop. For each $x\in\mathcal{L}$, set $P(x) = L(x)R(x)$, and let $P(\mathcal{L}) = \{ P(x) : x\in \mathcal{L} \}$. Then $P(\mathcal{L})$ is a twisted subgroup of the group $\mathrm{PMlt}(\mathcal{L}) := \langle P(x) : x\in\mathcal{L} \rangle$. This is really just a special case of Example \exref{2}. Indeed, for $x,y\in \mathcal{L}$, we have \begin{equation} \eqlabel{PL} \theta_{L(x)} L(y) = L(P(x)y). \end{equation} Thus for $x,y,z\in \mathcal{L}$, we compute \[ \begin{array}{rcl} L(P(x\cdot (y\cdot x))z) &=& \theta_{L(x\cdot (y\cdot x))} L(z) = \theta_{L(x)L(y)L(x)}L(z) \\ &=& \theta_{L(x)} \theta_{L(y)} \theta_{L(x)} L(z) = L(P(x)P(y)P(x)z). \end{array} \] Thus $P(x\cdot (y\cdot x)) = P(x)P(y)P(x)$ as claimed. The other properties of twisted subgroups follow similarly. \end{example}
\begin{example} \exlabel{5} Let $\mathcal{L}$ be a Bol loop. Then for each $x\in\mathcal{L}$, the triple \[ B(x) = (P(x),L(x^{-1}),L(x)) \] is an autotopism of $\mathcal{L}$. (For the notion of autotopism, see any of the standard references \cite{Bel2, Br, Pf}.) Conversely, if $\mathcal{L}$ is a loop in which each $B(x)$ is an autotopism, then $\mathcal{L}$ is a Bol loop. Let $\mathrm{Btp}(\mathcal{L}) = \langle B(x): x\in\mathcal{L}\rangle$ denote the group of all \textit{Bol autotopisms} of $\mathcal{L}$. Then from Examples~\exref{3} and \exref{4}, we see that the set $B(\mathcal{L}) = \{ B(x): x\in\mathcal{L} \}$ is a twisted subgroup of $\mathrm{Btp}(\mathcal{L})$ (or of the entire autotopism group of $\mathcal{L}$). Geometrically, the Bol autotopism group $\mathrm{Btp}(\mathcal{L})$ is isomorphic to a subgroup of the collineation group of the associated $3$-net, namely the direction-preserving collineation group generated by Bol reflections \cite{FN}. \end{example}
Recall that for a group $G$ with twisted subgroup $T$, the group $\hat{G}\subseteq T!$ is defined by $\hat{G} = \langle \theta_x : x\in T\rangle$; see Example \exref{2}.
\begin{lemma} \lemlabel{Piso} Let $\mathcal{L}$ be a Bol loop. Then $\mathrm{PMlt}(\mathcal{L}) \cong \widehat{\mathrm{LMlt}}(\mathcal{L})$. The isomorphism is defined on generators by $P(x)\mapsto \theta(L(x))$. In case $\mathcal{L}$ is $2$-divisible, we also have $\mathrm{PMlt}(\mathcal{L}) \cong \mathrm{LMlt}(\mathcal{L}(1/2))$ \end{lemma}
\begin{proof} The first assertion follows from \peqref{PL} in Example \exref{4}. The second follows from Lemma \lemref{conj}. \end{proof}
The distinction, therefore, between $\mathrm{PMlt}(\mathcal{L})$ and $\widehat{\mathrm{LMlt}}(\mathcal{L})$ is that the former acts directly on the loop $\mathcal{L}$, while the latter acts on the transversal $L(\mathcal{L})$.
\begin{corollary} \corolabel{P-radfree} Let $\mathcal{L}$ be a Bol loop. Then $P(\mathcal{L})$ is a radical-free twisted subgroup of $\mathrm{PMlt}(\mathcal{L})$. \end{corollary}
\begin{proof} This follows from Lemma \lemref{Piso} and Theorem \thmref{hat-rad-free}. \end{proof}
First we consider the interpretation in $\mathcal{L}$ of the normal subgroup $T^{\#}$ for $T = L(\mathcal{L})$.
\begin{theorem} \thmlabel{leftnuc} If $\mathcal{L}$ is a Bol loop, then $L(\mathcal{L})^{\#} = L(\mathrm{Nuc}_l(\mathcal{L})) = L(\mathrm{Nuc}_m(\mathcal{L}))$. \end{theorem}
\begin{proof} $\mathcal{L}$ has LIP, and so by Lemma \lemref{nuc-char}, $L(\mathrm{Nuc}_m(\mathcal{L})) = \bigcap_{x\in\mathcal{L}} L(x^{-1})L(\mathcal{L}) = L(\mathcal{L})^{\#}$. The other equality follows from Theorem \thmref{sharp}. \end{proof}
\begin{remark} The equality $\mathrm{Nuc}_l(\mathcal{L}) = \mathrm{Nuc}_m(\mathcal{L})$ for left loops with LIP is well-known (e.g., \cite{Kie}, p. 62, (5.7)). Expressed in terms of a subset $T$ (such as $L(\mathcal{L})$) of a group (such as $\mathrm{LMlt}(\mathcal{L})$), this just says that the equality $\bigcap_{x\in T} xT = \bigcap_{x\in T} Tx$ holds provided that $T^{-1} = T$. \end{remark}
\begin{corollary} (\cite{Na2}, p. 405, Lemma 1) Let $\mathcal{L}$ be a Bol loop. Then $\mathrm{Nuc}_l(\mathcal{L})$ is a normal subloop. \end{corollary}
\begin{proof} Using Theorems \ref{thm:leftnuc} and \ref{thm:sharp}, the conditions of \peqref{normality} are easily checked. \end{proof}
Next we turn to the radical.
\begin{definition} \deflabel{bol-rad} Let $\mathcal{L}$ be a Bol loop. The \emph{radical} of $\mathcal{L}$ is the set $\mathcal{L}^{\prime} := \{ x\in \mathcal{L} : L(x) \in L(\mathcal{L})^{\prime} \}$. In case $\mathcal{L}^{\prime} = \{ 1\}$ (i.e., $L(\mathcal{L})^{\prime} = \langle 1\rangle$), we will say that $\mathcal{L}$ is \textit{radical-free}. \end{definition}
\begin{corollary} \corolabel{rad-assoc} Let $\mathcal{L}$ be a Bol loop with radical $\mathcal{L}^{\prime}$. Then $\mathcal{L}^{\prime}$ is an associative normal subloop of $\mathcal{L}$ contained in $\mathrm{Nuc}_l(\mathcal{L})$. \end{corollary}
\begin{proof} That $\mathcal{L}^{\prime} \subseteq \mathrm{Nuc}_l(\mathcal{L})$ follows from $L(\mathcal{L})^{\prime}\subseteq L(\mathcal{L})^{\#}$ and Theorem \ref{thm:leftnuc}. Thus $\mathcal{L}^{\prime}$ is associative and the conditions of \peqref{normality} follow easily. \end{proof}
\begin{remarks}\hspace*{\fill} \label{rem:bol-rads} \begin{enumerate} \item As isomorphic abstract groups, there is, of course, no meaningful distinction between the radical $\mathcal{L}^{\prime}$ of a Bol loop $\mathcal{L}$ and the radical $L(\mathcal{L})^{\prime}$ of the twisted subgroup $L(\mathcal{L})$ of left translations, particularly if one identifies the loop $\mathcal{L}$ with the induced loop structure on the transversal $L(\mathcal{L})$. However, the distinction does help clarify the normality of $\mathcal{L}^{\prime}$ as a sub\textit{loop} of $\mathcal{L}$ versus the normality of $L(\mathcal{L})^{\prime}$ as a \textit{subgroup} of $\mathrm{LMlt}(\mathcal{L})$. \item Let $\mathcal{L}$ be a Bruck loop, and let $\tau\in\mathrm{Aut}(\mathrm{LMlt}(\mathcal{L}))$ denote conjugation by the inversion mapping $x\mapsto x^{-1}$. Then the automorphic inverse property is equivalent to $L(x)^{\tau} = L(x^{-1})$ for all $x\in \mathcal{L}$. Thus $\tau$ is the Aschbacher automorphism of $\mathrm{LMlt}(\mathcal{L})$, and hence $\mathcal{L}$ is a radical-free Bol loop. \end{enumerate} \end{remarks}
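To verify the equivalence used in item (2): writing $\iota$ for the inversion mapping $y \mapsto y^{-1}$, we have, for all $x,y\in\mathcal{L}$, \[ L(x)^{\tau}\, y = \left(\iota L(x)\iota\right) y = (x\cdot y^{-1})^{-1}, \] and this equals $x^{-1}\cdot y = L(x^{-1})y$ for all $x,y$ precisely when the automorphic inverse property holds.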
\begin{theorem} \thmlabel{rep1} Let $\mathcal{L}$ be a Bol loop, and let $G = \mathrm{LMlt}(\mathcal{L})$. The mapping $L(\mathcal{L})\to P(\mathcal{L}); L(x)\mapsto P(x)$ extends to a homomorphism from $G$ onto $\mathrm{PMlt}(\mathcal{L})$ if and only if $\mathcal{L}^{\prime} = \{ 1\}$. In this case, the kernel of the homomorphism is $Z(G)\cap C_G(\tau)$ where $\tau\in\mathrm{Aut}(G)$ is the Aschbacher automorphism. \end{theorem}
\begin{proof} This follows from Lemma \lemref{Piso} and Theorem \thmref{extend}. \end{proof}
Next we consider the Bol autotopism group $\mathrm{Btp}(\mathcal{L})$ of a Bol loop $\mathcal{L}$. Let $\Phi_i : \mathrm{Btp}(\mathcal{L})\to \mathrm{Mlt}(\mathcal{L}); (f_1, f_2, f_3)\mapsto f_i$ denote the projection onto the $i^{\text{th}}$ component. Clearly $\Phi_1$ is an epimorphism onto $\mathrm{PMlt}(\mathcal{L})$ and $\Phi_2$ and $\Phi_3$ are epimorphisms onto $\mathrm{LMlt}(\mathcal{L})$.
In the Bol loop context, the subloop we call the radical made its first appearance in work of M. Funk and P. Nagy (\cite{FN}, p. 67, Theorem 1). The following is the algebraic version of their geometric result.
\begin{theorem} \thmlabel{btp-rad} Let $\mathcal{L}$ be a Bol loop, and let $\Phi_3 : \mathrm{Btp}(\mathcal{L}) \to \mathrm{LMlt}(\mathcal{L})$ be the projection onto the third factor. Then $\mathrm{ker}(\Phi_3) \cong \mathcal{L}^{\prime}$. \end{theorem}
\begin{proof} $(f,g,1)\in \mathrm{ker}(\Phi_3)$ if and only if $g$ can be written as $g = L(x_1^{-1})\cdots L(x_n^{-1})$ for some $x_i\in\mathcal{L}$, $i=1,\ldots ,n$, such that $L(x_1)\cdots L(x_n) = I$. Thus $(f,g,1)\in \ker(\Phi_3)$ if and only if $g \in L(\mathcal{L})^{\prime}$, and so the restriction of $\Phi_2$ to $\mathrm{ker}(\Phi_3)$ is an isomorphism onto $L(\mathcal{L})^{\prime}$. \end{proof}
\begin{remark} In particular, if $\mathcal{L}$ is a radical-free Bol loop, then the group $\mathrm{Btp}(\mathcal{L})$ simultaneously encodes both the graph of the Aschbacher automorphism $\tau\in\mathrm{Aut}(\mathrm{LMlt}(\mathcal{L}))$ and the graph of the homomorphism $\mathrm{LMlt}(\mathcal{L})\to \mathrm{PMlt}(\mathcal{L})$ described in Theorem \thmref{rep1}. These are given by, respectively, $f_3\mapsto f_2$ and $f_3\mapsto f_1$ for $(f_1,f_2,f_3)\in \mathrm{Btp}(\mathcal{L})$. \end{remark}
\begin{lemma} \lemlabel{kernel} Let $\mathcal{L}$ be a loop and set $G := \mathrm{LMlt}(\mathcal{L})$. Then $Z(G) = G\cap \{ R(x) : x \in \mathrm{Nuc}_r(\mathcal{L})\}$. Therefore the set $\mathcal{M} := \{ x\in \mathrm{Nuc}_r(\mathcal{L}) : R(x)\in G\}$ is an abelian group. \end{lemma}
\begin{proof} An element $a\in\mathcal{L}$ is in $\mathrm{Nuc}_r(\mathcal{L})$ if and only if $R(a)$ centralizes $G$ in the full multiplication group $\mathrm{Mlt}(\mathcal{L})$. So if some such $R(a)\in G$, then $R(a)\in Z(G)$. Conversely, if $g \in Z(G)$, then setting $a = g1$, we have $x\cdot a = L(x)g1 = gL(x)1 = gx$, and so $g = R(a)$ and $a\in \mathrm{Nuc}_r(\mathcal{L})$. The rest follows because the mapping $R : \mathcal{M}\to Z(G); x\mapsto R(x)$ is an anti-isomorphism. \end{proof}
\begin{theorem} Let $\mathcal{L}$ be a Bol loop, let $G = \mathrm{LMlt}(\mathcal{L})$, and let $\Phi_1 : \mathrm{Btp}(\mathcal{L}) \to \mathrm{PMlt}(\mathcal{L})$ be the projection onto the first factor. Then $\mathrm{ker}(\Phi_1) \cong Z(G)\cap \{ g : g = L(x_1)\cdots L(x_n) = L(x_1^{-1})\cdots L(x_n^{-1}), x_i\in \mathcal{L}, i=1,\ldots,n\}$. If $\mathcal{L}$ is radical-free, then $\mathrm{ker}(\Phi_1) \cong Z(G)\cap C_G(\tau)$ where $\tau\in\mathrm{Aut}(G)$ is the Aschbacher automorphism. \end{theorem}
\begin{proof} A triple $(1,f_2,f_3)$ of permutations is an autotopism if and only if $f_2 = f_3 = R(a)$ where $a = f_2(1) \in \mathrm{Nuc}_r(\mathcal{L})$. As in the proof of Lemma \ref{lem:kernel}, this holds if and only if $R(a)$ centralizes $\mathrm{LMlt}(\mathcal{L})$ in $\mathrm{Mlt}(\mathcal{L})$, and so $(1,R(a),R(a))\in \mathrm{Btp}(\mathcal{L})$ if and only if $R(a)\in Z(\mathrm{LMlt}(\mathcal{L}))$ and $R(a) = L(x_1)\cdots L(x_n) = L(x_1^{-1})\cdots L(x_n^{-1})$ for some $x_i\in\mathcal{L}$, $i=1,\ldots,n$. The remaining assertion follows immediately. \end{proof}
\begin{corollary} Let $\mathcal{L}$ be a radical-free Bol loop, let $G = \mathrm{LMlt}(\mathcal{L})$, and let $\tau\in\mathrm{Aut}(G)$ be the Aschbacher automorphism. If $Z(G)\cap C_G(\tau) = \langle 1\rangle$, then $\mathrm{Btp}(\mathcal{L}) \cong \mathrm{LMlt}(\mathcal{L}) \cong \mathrm{PMlt}(\mathcal{L})$. \end{corollary}
\begin{remark} \remlabel{history} Before proceeding to $2$-divisible Bol loops, it is probably worthwhile to insert a few historical remarks. The concept of twisted subgroup (though obviously not the terminology we have adopted), and its relationship with quasigroup and loop theory, has been around for some time, and is not limited to the connection with Bol loops. For example, a \textit{Fischer group} is a group $G$ and a subset $T\subseteq G$ of involutions which generate $G$ such that for all $x,y\in T$, $(xy)^3 = 1$, and $xyx \in T$. If $1\in T$, then $T$ is a twisted subgroup. Fischer groups arise in the study of distributive, symmetric quasigroups and commutative Moufang loops of exponent 3 (\cite{Fi}; \cite{Ben}, p.133). In a different, but related direction, if we give a twisted subgroup $T$ of a group $G$ the binary operation $x\star y := xy^{-1} x$, $x,y\in T$, then $(T,\star)$ is a left quasigroup which is balanced ($x\star y = y$ iff $y\star x = x$), left distributive ($x\star (y\star z) = (x\star y) \star (x\star z)$), left key ($x\star (x\star y) = y$), and idempotent ($x\star x = x$). (Other subsets of groups can also be given this structure, such as any conjugacy class with the operation $(x,y) \mapsto xyx^{-1}$.) If $T = G$, $(T,\star)$ is called the ``core'' of $G$ (this is not the same usage as in group theory), and the same properties hold even if $G$ is a Moufang loop \cite{Br}. Studies of these structures, with twisted subgroups as a principal example, can be found in the work of Nobusawa and his collaborators (see \cite{Nobusawa4} and the references therein), who were in turn influenced by the work of Loos \cite{Loos} in symmetric spaces. See also Pierce \cite{Pierce1} \cite{Pierce2} and Umaya \cite{Umaya}. Doro \cite{Doro} used these structures in his study of simple Moufang loops. Nowadays the structure $(T,\star)$ is known as an \emph{involutory quandle}, thanks largely to Joyce's applications of the idea to knot theory \cite{Joyce1}. As far as we have been able to determine, Aschbacher's paper \cite{Asch} (which was motivated by work of Feder and Vardi \cite{FV}) seems to be the first in which twisted subgroups are used for a purpose other than the study of quasigroups and loops. \end{remark}
\section{Bol loops of odd order} \seclabel{2dbol}
We saw from Example \exref{converse} that a twisted subgroup of odd order need not be $2$-divisible. However, a twisted subgroup of odd order which has a compatible Bol loop structure is indeed $2$-divisible. This is, in fact, a well-known consequence of the left power-alternative property for Bol loops.
\begin{proposition} (e.g., \cite{Kie}) \proplabel{odd} The order of any element of a finite Bol loop divides the order of the loop. \end{proposition}
\noindent In particular, instead of stating results for finite, $2$-divisible Bol loops, we may simply state them for Bol loops of odd order.
For Bol loops of odd order, the Cauchy and Strong Lagrange theorems for twisted subgroups immediately transfer to the loop level.
\begin{theorem} (Cauchy's Theorem) Let $\mathcal{L}$ be a Bol loop of odd order. For every prime $p$ dividing $|\mathcal{L}|$, there exists $x\in \mathcal{L}$ of order $p$. \end{theorem}
\begin{proof} By Corollary \cororef{ts-cauchy}, there exists $L(x)\in L(\mathcal{L})$ of order $p$. Since $L(x^n) = L(x)^n$ for all $n$, $x$ has order $p$. \end{proof}
\begin{remark} \remlabel{bol-no-hope} As mentioned in Remark \ref{rem:no-hope} on the twisted subgroup level, Cauchy's theorem does not extend to all Bol loops, because the simple Moufang loop of order $120$ does not have an element of order $5$. \end{remark}
\begin{theorem} (Strong Lagrange Theorem) Let $\mathcal{L}$ be a Bol loop of odd order. If $\mathcal{K}_1 \subseteq \mathcal{K}_2
\subseteq \mathcal{L}$ are subloops, then $|\mathcal{K}_1|$ divides $|\mathcal{K}_2|$. \end{theorem}
\begin{proof}
By Proposition \propref{strongLagrange}, $| L(\mathcal{K}_1) |$ divides
$| L(\mathcal{K}_2) |$. \end{proof}
\begin{problem} \problabel{strongLagrange} Does the strong Lagrange property hold for all Bol loops? \end{problem}
\begin{remark} If a classification of finite, simple Bol loops were known, then it would be enough to verify the strong Lagrange property for such loops \cite{CKRV}. However, this observation merely reduces one hard problem to another. \end{remark}
\begin{theorem} Let $\pi$ be a set of odd primes, and let $\mathcal{L}$ be a finite Bol $\pi$-loop. Then $\mathrm{LMlt}(\mathcal{L})$ is a $\pi$-group. \end{theorem}
\begin{proof} This is Theorem \ref{thm:pi} interpreted on the loop level. \end{proof}
By the Feit-Thompson theorem \cite{FT}, we conclude the following.
\begin{corollary} If $\mathcal{L}$ is a finite Bol loop of odd order, then $\mathrm{LMlt}(\mathcal{L})$ is a solvable group. \end{corollary}
The status of the Sylow and Hall theorems for Bol loops is unclear even for Bol loops of odd order. For Moufang loops we have the following results of Glauberman.
\begin{proposition}(Hall's Theorem, \cite{Gl2}, p. 413, Theorem 16 and p. 409, Theorem 12) Let $\mathcal{L}$ be a Moufang loop of odd order and let $\pi$ be a set of primes. Then $\mathcal{L}$ contains a Hall $\pi$-subloop. \end{proposition}
The Sylow theorem for Moufang loops of odd order follows immediately, although Glauberman also gave a separate proof (\cite{Gl2}, p. 410, Theorem 13). Glauberman also proved other Hall-like properties of $\pi$-subloops (\cite{Gl2}, p. 413, Theorem 16).
\begin{problem}\hspace*{\fill} \problabel{bol-Hall} \begin{enumerate} \item For a given set $\pi$ of primes, does every Bol loop of odd order have a Hall $\pi$-subloop? \item If the answer to (1) is no, then for a given odd prime $p$, does every Bol loop of odd order have a Sylow $p$-subloop? \end{enumerate} \end{problem}
Finally, we turn to some preliminary investigations of simple Bol loops of odd order. This is motivated by Glauberman's Feit-Thompson theorems for B-loops and Moufang loops.
\begin{proposition} (\cite{Gl2}, p. 412, Theorem 14 and p. 413, Theorem 16) \proplabel{solvable} Let $\mathcal{L}$ be a Bol loop of odd order. If $\mathcal{L}$ is a B-loop or a Moufang loop, then $\mathcal{L}$ is solvable. \end{proposition}
\begin{corollary} \corolabel{extend} Let $X$ be the class consisting of all Moufang loops of odd order and all finite B-loops. Let $V$ be any variety of Bol loops of odd order such that every loop in $V$ is an extension of two loops in $X$. Then every loop in $V$ is solvable. \end{corollary}
A left loop is said to have the $A_l$-\emph{property} if every left inner mapping is an automorphism (\cite{Kie}, p. 35).
\begin{corollary} \corolabel{Al} If $\mathcal{L}$ is an $A_l$ Bol loop of odd order, then $\mathcal{L}$ is solvable. \end{corollary}
\begin{proof} By \cite{FU}, Theorem 4.11, $\mathcal{L}$ is an extension of a group by a B-loop. \end{proof}
These considerations pave the way to the following problem. We will not give a complete answer, but we will present some results which we think will play a role in its solution.
\begin{problem} \problabel{solvable} Do there exist any finite simple Bol loops of odd order? That is, is every finite Bol loop of odd order solvable? \end{problem}
Let $\mathcal{L}$ be a $2$-divisible Bol loop. Since $\mathcal{L}$ is left power-alternative, that is, $L(x^n) = L(x)^n$ for all $x\in \mathcal{L}$ and all integers $n$, we may use the $2$-divisible twisted subgroup $L(\mathcal{L})$ to define a B-loop operation on $\mathcal{L}$. For $x,y\in \mathcal{L}$, we have $L(x)\odot L(y) = (L(x)L(y)^2L(x))^{1/2} = L((x\cdot (y^2\cdot x))^{1/2})$. This leads us to the following.
\begin{definition} Let $\mathcal{L}$ be a $2$-divisible Bol loop. The \emph{B-loop associated to} $\mathcal{L}$ is $(\mathcal{L},\odot)$ with the binary operation $\odot : \mathcal{L}\times \mathcal{L} \to \mathcal{L}$ given by $x\odot y = (x\cdot ((y\cdot y)\cdot x))^{1/2}$. We will denote the B-loop $(\mathcal{L},\odot)$ by $\mathcal{L}(1/2)$, and we follow a similar convention for subloops. Left multiplication maps for $\mathcal{L}(1/2)$ will be denoted by $M(x)y := x\odot y$. \end{definition}
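For a concrete instance (an illustration of ours): if $\mathcal{L} = G$ is a group of odd order, then associativity gives $x\cdot ((y\cdot y)\cdot x) = xy^2x$, so that
\[
x\odot y = (xy^2x)^{1/2},
\]
which is the familiar operation used to associate a B-loop with a uniquely $2$-divisible group, as in Glauberman's work discussed below.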
\begin{remark} In case $\mathcal{L}$ is already a B-loop, $(\mathcal{L},\odot)$ is just $\mathcal{L}$ itself. This is because every Bruck loop satisfies the identity $(x\cdot y)^2 = x \cdot (y^2 \cdot x)$ (\cite{Kie}, p. 73, 6.8(1)). \end{remark}
The B-loop associated to a Moufang loop of odd order was the key component in Glauberman's proofs of the Hall, Sylow, and Feit-Thompson theorems in \cite{Gl2}. The idea was to ``pull back" the results from the associated B-loop to the Moufang loop. Since arbitrary Bol loops are not as structured as Moufang loops, one cannot expect this idea to work quite so well. Nevertheless, we can make some progress.
\begin{lemma} \lemlabel{PisB} Let $\mathcal{L}$ be a $2$-divisible Bol loop. Then the squaring map $s:\mathcal{L} \to \mathcal{L}; x\mapsto x\cdot x$ conjugates $\mathrm{LMlt}(\mathcal{L}(1/2))$ to $\mathrm{PMlt}(\mathcal{L})$ in $\mathcal{L} !$. \end{lemma}
\begin{proof} For each $x\in \mathcal{L}$, $M(x) = s^{-1} P(x) s$. \end{proof}
\begin{theorem} \thmlabel{rep2} Let $\mathcal{L}$ be a $2$-divisible Bol loop, and let $G = \mathrm{LMlt}(\mathcal{L})$. The mapping $L(\mathcal{L})\to M(\mathcal{L}(1/2)); L(x)\mapsto M(x)$ extends to a homomorphism from $G$ onto $\mathrm{LMlt}(\mathcal{L}(1/2))$ if and only if $\mathcal{L}^{\prime} = \{ 1\}$. The kernel of the homomorphism is $Z(G)\cap C_G(\tau)$ where $\tau\in\mathrm{Aut}(G)$ is the Aschbacher automorphism. \end{theorem}
\begin{proof} This follows from Lemma \lemref{PisB} and Theorem \thmref{rep1}. \end{proof}
\begin{lemma} \lemlabel{normal} A subloop $\mathcal{K}$ of a $2$-divisible Bol loop $\mathcal{L}$ is normal if and only if, for all $x,y\in \mathcal{L}$, $x \cdot (\mathcal{K}\cdot y) = \mathcal{K} \cdot (x\cdot y)$. \end{lemma}
\begin{proof} Referring to the conditions in \peqref{normality}, we see that only one direction requires proof. Thus assume $x \cdot (\mathcal{K}\cdot y) = \mathcal{K} \cdot (x\cdot y)$ for all $x,y \in \mathcal{L}$. Fix $x\in \mathcal{L}$ and set $u = x^{1/2}$. Using LAP and the left Bol identity, \[ \begin{array}{rcl} x \cdot (\mathcal{K} \cdot y) &=& u\cdot (u\cdot (\mathcal{K}\cdot y)) = u\cdot (\mathcal{K}\cdot (u\cdot y))\\ &=& (u\cdot (\mathcal{K}\cdot u))\cdot y = (u\cdot (u\cdot \mathcal{K}))\cdot y = (x\cdot \mathcal{K})\cdot y. \end{array} \] Thus the other condition of \peqref{normality} holds, and so $\mathcal{K}$ is normal. \end{proof}
Our next result is inspired by Aschbacher's normality condition for subloops (\cite{Asch2}, condition (NC)). It enables us to express normality directly in terms of the left multiplication group.
\begin{lemma} \lemlabel{normal2} A subloop $\mathcal{K}$ of a $2$-divisible Bol loop $\mathcal{L}$ is normal if and only if, for each $x\in \mathcal{L}$, $y\in \mathcal{K}$, $g\in \mathrm{LMlt}(\mathcal{L})$, \begin{equation} \eqlabel{normal} L(x) L(y) L(x^{-1}) = L(z) ghg^{-1} \end{equation} for some $z\in \mathcal{K}$, $h\in \mathrm{LMlt}_1(\mathcal{L})$. \end{lemma}
\begin{proof} Set $G = \mathrm{LMlt}(\mathcal{L})$, $H = \mathrm{LMlt}_1(\mathcal{L})$. Fix $x\in \mathcal{L}$, $y\in \mathcal{K}$. Since $L(\mathcal{L})$ is a transversal of each conjugate $g H g^{-1}$, $g\in G$, we have $L(x) L(y) L(x^{-1}) = L(z) ghg^{-1}$ for some $z\in \mathcal{L}$, $h\in H$. Applying both sides to $w = g1$, we have $x\cdot (y\cdot (x^{-1} \cdot w)) = z\cdot w$. Now if $\mathcal{K}$ is normal, then by Lemma \lemref{normal} (or just \peqref{normality}), $x\cdot (y\cdot (x^{-1} \cdot w)) = u\cdot w$ for some $u\in \mathcal{K}$. Thus $z = u$ and so $z \in \mathcal{K}$ so that \peqref{normal} holds. Conversely, if \peqref{normal} holds, then fix $v\in \mathcal{L}$ and set $g = L(v)$. Let $z\in \mathcal{K}$, which depends on $g$, be given as in \peqref{normal}. Apply both sides of \peqref{normal} to $z$ to get $x\cdot (y\cdot (x^{-1} \cdot v)) = z\cdot v$. By Lemma \lemref{normal}, $\mathcal{K}$ is normal. \end{proof}
In the Moufang case, the following result is (\cite{Gl2}, p. 401, Lemma 7(b)). The proof in the general case is essentially the same, but with care in the parenthesization.
\begin{lemma} \lemlabel{bol-subloops} Let $\mathcal{L}$ be a $2$-divisible Bol loop and suppose $\mathcal{K}$ is a subloop of $\mathcal{L}(1/2)$. Then $\mathcal{K}$ is a subloop of $\mathcal{L}$ if and only if $x^{-1}\cdot (\mathcal{K}\cdot x) = \mathcal{K}$ for all $x\in \mathcal{K}$. \end{lemma}
\begin{proof} The ``only if" is obvious, so assume $x^{-1}\cdot (\mathcal{K}\cdot x) = \mathcal{K}$ for all $x\in \mathcal{K}$. If $x\in \mathcal{K}$, then $\langle x\rangle \subset \mathcal{K}$, because powers in $\mathcal{L}$ agree with powers in $\mathcal{L}(1/2)$. Now fix $x,y\in \mathcal{K}$ and set $u = x^{1/2}$, $v = y^{1/2}$. Then using the definition of $\odot$, the Bol identity, and LAP, $\mathcal{K}$ contains $u\cdot ((u\odot v)^2 \cdot u^{-1}) = u\cdot ((u\cdot (v^2 \cdot u))\cdot u^{-1}) = u^2 \cdot v^2 = x\cdot y$. A subset of a Bol loop closed under inversion and multiplication is a subloop (see, e.g., \cite{Kie}, p. 50, 3.10(4)). \end{proof}
\begin{lemma} \lemlabel{main} Let $\mathcal{L}$ be a radical-free, $2$-divisible Bol loop, let $G = \mathrm{LMlt}(\mathcal{L})$, let $\tau\in\mathrm{Aut}(G)$ be the Aschbacher automorphism, and assume that $Z(G)\cap C_G(\tau) = \langle 1\rangle$. If $\mathcal{K}(1/2)$ is a normal subloop of $\mathcal{L}(1/2)$, then $\mathcal{K}$ is a normal subloop of $\mathcal{L}$. \end{lemma}
\begin{proof} Fix $x\in \mathcal{L}$, $y\in \mathcal{K}$, $g\in G$. As in the proof of Lemma \lemref{normal2}, there exists $z\in \mathcal{L}$, $h\in \mathrm{LMlt}_1(\mathcal{L})$ such that \begin{align*} L(x) L(y) L(x^{-1}) = L(z) ghg^{-1} \tag{*} \end{align*} The hypotheses and Theorem \ref{thm:rep2} imply $\mathrm{LMlt}(\mathcal{L}(1/2)) \cong G$, and that on generators, the isomorphism sends each $L(x)$ to $M(x)$. Applying this to (*), we have $M(x) M(y) M(x^{-1}) = M(z) \tilde{g}\tilde{h}\tilde{g}^{-1}$ for some $\tilde{g}\in \mathrm{LMlt}(\mathcal{L}(1/2))$, $\tilde{h}\in \mathrm{LMlt}_1(\mathcal{L}(1/2))$. If $\mathcal{K}(1/2)$ is normal, then by Lemma \lemref{normal2}, $z\in \mathcal{K}$. Thus by (*), the condition of Lemma \lemref{normal2} is satisfied for the sub\emph{set} $\mathcal{K}$ of $\mathcal{L}$, and all that remains is to show that $\mathcal{K}$ is a subloop. Taking $x\in \mathcal{K}$ and $g = 1$ in (*), and applying both sides to $1$, we have that for each $x,y\in \mathcal{K}$, there exists $z\in \mathcal{K}$ such that $x\cdot (y\cdot x^{-1}) = z$. Now apply Lemma \lemref{bol-subloops}. \end{proof}
\begin{theorem} \thmlabel{main} Let $\mathcal{L}$ be a simple Bol loop of odd order, let $G = \mathrm{LMlt}(\mathcal{L})$, and (since $\mathcal{L}$ is radical-free), let $\tau\in\mathrm{Aut}(G)$ denote the Aschbacher automorphism. Then $Z(G)\cap C_G(\tau) \neq \langle 1 \rangle$. \end{theorem}
\begin{proof} By Proposition \propref{solvable}, $\mathcal{L}(1/2)$ has a nontrivial normal subloop $\mathcal{K}(1/2)$. If $Z(G)\cap C_G(\tau) = \langle 1 \rangle$, then Lemma \lemref{main} implies that $\mathcal{K}$ is a nontrivial normal subloop of $\mathcal{L}$. \end{proof}
\begin{corollary} \corolabel{main} Let $\mathcal{L}$ be a simple Bol loop of odd order, and let $G = \mathrm{LMlt}(\mathcal{L})$. Then $G$ has nontrivial center, and $\mathrm{Nuc}_r(\mathcal{L})$ contains an abelian subgroup $\mathcal{M} = \{ x\in \mathrm{Nuc}_r(\mathcal{L}) : R(x)\in G\} \neq \langle 1\rangle$. \end{corollary}
\begin{proof} This follows from Theorem \thmref{main} and Lemma \lemref{kernel}. \end{proof}
\begin{remark} \remlabel{mfg} As a corollary, we obtain a new proof of the Moufang part of Proposition \propref{solvable}. In a Moufang loop, the three nuclei agree and the nucleus is normal (\cite{Pf}, p. 90, Corollary IV.1.5). Thus a simple Moufang loop has trivial nucleus, and so by Corollary \cororef{main}, cannot have odd order. \end{remark}
In general, the right nucleus of a left Bol loop need not be a normal subloop. This follows from a construction of D.~Robinson and K.~Robinson \cite{RR}, translated from right Bol loops to left Bol loops. However, their construction gives a Bol loop of even order. Thus the following problem still seems to be open, even for B-loops.
\begin{problem} Does there exist a Bol loop of odd order with nonnormal right nucleus? \end{problem}
\begin{acknowledgment} We thank Michael~Aschbacher, James~Beidleman, George~Glauberman, and Derek~Robinson for helpful comments. \end{acknowledgment}
\end{document}
\begin{document}
\title{Asymptotic approximations for the close evaluation of
double-layer potentials} \author{Camille Carvalho\footnote{Applied Mathematics Unit, School of Natural Sciences,
University of California, Merced, 5200 North Lake Road, Merced, CA
95343} \and Shilpa Khatri ${}^{\ast}$ \and Arnold D. Kim ${}^{\ast}$}
\maketitle
\begin{abstract}
When using the boundary integral equation method to solve a boundary value problem, the solution is challenging to evaluate near the boundary because the layer potentials that represent it are nearly singular integrals. To address this close evaluation
problem, we apply an asymptotic analysis of these nearly singular
integrals and obtain an asymptotic approximation.
We derive the asymptotic approximation for the case of the double-layer potential in two and three dimensions,
representing the solution of the interior Dirichlet problem for Laplace's equation.
By doing so, we obtain an asymptotic approximation given
by the Dirichlet data at the boundary point nearest to the interior
evaluation point plus a nonlocal correction.
We present numerical methods to compute this asymptotic approximation, and we demonstrate the efficiency and accuracy of the asymptotic
approximation through several examples. These
examples show that the asymptotic approximation is useful as it
accurately approximates the close evaluation of the double-layer
potential while requiring only modest computational resources.
\end{abstract}
\textbf{Keywords: }
Asymptotic approximation, close evaluation problem, potential
theory, boundary integral equations.
\section{Introduction}
The close evaluation problem refers to the non-uniform error produced by high-order quadrature rules used in boundary integral equation methods. In particular, the close evaluation problem occurs when evaluating layer potentials at points close to the boundary. These high-order quadrature rules attain spectral accuracy when computing the solution, represented by layer potentials, far from the boundary, whereas they incur a very large error when computing the solution close to the boundary. It is well understood that this growth in error is due to the fact that the integrands of the layer potentials become increasingly peaked as the point of evaluation approaches the boundary. In fact, when the distance between the evaluation point and its closest boundary point is smaller than the distance between quadrature points on the boundary for a fixed-order quadrature rule, the quadrature points do not adequately resolve the peak of the integrand and therefore produce an $O(1)$ error.
Accurate evaluations of layer potentials close to the boundary of the domain are needed for a wide range of applications, including the modeling of swimming micro-organisms, droplet suspensions, and blood cells in Stokes flow~\cite{smith2009boundary, barnett2015spectrally, marple2016fast,
keaveny2011applying}, and to predict accurate measurements of the electromagnetic near-field in the field of plasmonics~\cite{Maier07} for nano-antennas~\cite{akselrod2014probing,novotny2011antennas} and sensors~\cite{mayer2008label,sannomiya2008situ}.
Several computational methods have been developed to address this close evaluation problem. Schwab and Wendland~\cite{schwab1999extraction} have developed a boundary extraction method based on a Taylor series expansion of the layer potentials. Beale and Lai~\cite{beale2001method} have developed a method that first regularizes the nearly singular kernel of the layer potential and then adds corrections for both the discretization and the regularization. Beale {\it et al.}~\cite{beale2016simple} have extended the regularization method to three-dimensional problems. Helsing and Ojala~\cite{helsing2008evaluation} developed a method that combines a globally compensated quadrature rule and interpolation to achieve very accurate results over all regions of the domain. Barnett~\cite{barnett2014evaluation} has used surrogate local expansions with centers placed near, but not on, the boundary. Kl\"{o}ckner {\it et al.}~\cite{klockner2013quadrature} introduced Quadrature By Expansion (QBX), which uses expansions about accurate evaluation points far away from the boundary to compute accurate evaluations close to it. There have been several subsequent studies of QBX~\cite{epstein2013convergence, af2016fast,
rachh2017fast, wala20183DQBX, af2017error} that have extended its use and characterized its behavior.
Recently, the authors have applied asymptotic analysis to study the close evaluation problem. For two-dimensional problems, the authors developed a method that used matched asymptotic expansions for the kernel of the layer potential~\cite{carvalho2018asymptotic}. In that method, the asymptotic expansion that captures the peaked behavior of the kernel (namely, the peaked behavior of the integrand of the layer potential) can be integrated exactly and the relatively smooth remainder is integrated numerically, resulting in a highly accurate method. For three-dimensional problems, the authors have developed a simple, three-step method for computing layer potentials~\cite{carvalho2018_3D}. This method involves first rotating the spherical coordinate system used to compute the layer potential so that the boundary point at which the integrand becomes singular is aligned with the north pole. By studying the asymptotic behavior of the integral, they found that integration with respect to the azimuthal angle is a natural averaging operation that regularizes the integral thereby allowing for a high-order quadrature rule to be used for the integral with respect to the polar angle. This numerical method was shown to achieve an error that decays quadratically with the distance to the boundary provided that the underlying boundary integral equation for the density is sufficiently resolved.
In this work, we carry out a complete asymptotic analysis of the double-layer potential for the interior Dirichlet problem for Laplace's equation in two and three dimensions. By doing so, we derive asymptotic approximations for the close evaluation of the double-layer potential. These asymptotic approximations provide valuable insight into the inherent challenges of the close evaluation problem and an explicit method to address it. We find that the leading-order asymptotic behavior of the double-layer potential in the close evaluation limit is given by the Dirichlet data at the boundary point closest to the evaluation point plus a nonlocal correction. It is the nonlocal correction that has made the close evaluation problem challenging to address. Since this asymptotic analysis explicitly finds this nonlocal correction, we are able to develop a simple and accurate numerical method to compute the double-layer potential and thus, address the close evaluation problem systematically. We compute several numerical examples using the asymptotic approximations to evaluate their efficacy and accuracy.
The asymptotic analysis used here to study the close evaluation problem also provides valuable insight (and useful asymptotic approximations) for other problems. In particular there is an interesting connection with forward-peaked scattering in radiative transfer, which is used to describe the multiple scattering of light~\cite{chandrasekhar1960,
ishimaru}. Forward-peaked scattering is an important problem for several applications, and is challenging to study. We draw this connection and apply the asymptotic analysis developed in this paper to forward-peaked scattering in radiative transfer.
The remainder of this paper is as follows. We precisely define the close evaluation problem for the double-layer potential in Section \ref{sec:close-eval}. We compute the leading-order asymptotic behavior of the double-layer potential in two and three dimensions in Sections \ref{sec:asymptotics2D} and \ref{sec:asymptotics3D}, respectively. We describe numerical methods to evaluate the asymptotic approximations for the close evaluation of the double-layer potential in Section \ref{sec:numerics}. We give several examples demonstrating the accuracy of this numerical method in Section \ref{sec:results}. Section \ref{sec:RTE} describes the connection between the close evaluation problem and forward-peaked scattering in radiative transfer. Section \ref{sec:conclusion} gives our conclusions. The Appendix provides details of the computations for the three-dimensional case: Appendix \ref{sec:rotation} gives details of how we rotate spherical integrals and Appendix \ref{sec:spherical-laplacian} gives a useful derivation of the spherical Laplacian.
\section{Close evaluation of the double-layer potential} \label{sec:close-eval}
Consider a simply connected, open set, denoted by $D \subset \mathbb{R}^{n}$ with $n = 2, 3$, with an analytic, closed boundary, $B$, and let $\bar{D} = D \cup B$. Given some smooth data $f$, we write the function $u \in C^{2}(D) \cap C^{1}(\bar{D})$ satisfying the interior Dirichlet problem, \begin{subequations}
\begin{gather}
\varDelta u = 0 \quad \text{in $D$},\\
u = f \quad \text{on $B$},
\end{gather}
\label{eq:dirichlet} \end{subequations} as the double-layer potential, \begin{equation}
u(x) = \frac{1}{2^{n-1} \pi} \int_{B} \frac{\nu_{y} \cdot ( x
- y )}{|x - y|^{n}} \mu(y) \mathrm{d}\sigma_{y}, \quad x \in D,
\quad n = 2, 3.
\label{eq:DLP} \end{equation} Here, $\nu_{y}$ denotes the unit outward normal at $y \in B$, $\mathrm{d}\sigma_{y}$ denotes the boundary element, and $\mu$, the density, is a continuous function. This double-layer potential satisfies the following jump relation~\cite{guenther1996partial}, \begin{equation}
\lim_{\substack{x \to y^{\star} \in B\\x \in D}} u(x) = u(y^{\star}) -
\frac{1}{2} \mu(y^{\star}).
\label{eq:jump} \end{equation} By requiring that $u$ satisfies \eqref{eq:dirichlet}, we find that, in light of jump relation \eqref{eq:jump}, $\mu$ must satisfy \begin{equation}
\frac{1}{2^{n-1} \pi} \int_{B} \frac{\nu_y \cdot (
y^{\star} - y )}{|y^{\star} - y|^{n}} \mu(y) \mathrm{d}\sigma_{y}
-\frac{1}{2} \mu(y^{\star}) = f(y^{\star}), \quad y^{\star}
\in B,
\label{eq:bie} \end{equation} the boundary integral equation for $\mu$.
Here, we seek to evaluate \eqref{eq:DLP} at points close to the boundary. To define a close evaluation point precisely, let $0 < \varepsilon \ll 1$ denote a small, dimensionless parameter, and consider \begin{equation}
x = y^{\star} - \varepsilon \ell \nu^{\star},
\label{eq:closept} \end{equation} with $y^{\star} \in B$ denoting the closest point to $x$ on the boundary, $\nu^{\star}$ denoting the unit, outward normal at $y^{\star}$, and $\ell$ denoting a characteristic length of the problem such as the signed (2D) or mean (3D) curvature at $y^{\star}$ (see Fig. \ref{fig:sketch}).
\begin{figure}
\caption{Sketch of the quantities introduced in
\eqref{eq:closept} to study evaluation points close to the
boundary in 2D (left) and in 3D (right).}
\label{fig:sketch}
\end{figure} Because the solution of \eqref{eq:dirichlet} continuously approaches its boundary data from within $D$, we write \begin{equation} u (x) = u(y^{\star} - \varepsilon \ell \nu^{\star}) = f(y^{\star})
+ \varepsilon U(y^{\star};\varepsilon).
\label{eq:constantapprox} \end{equation} To determine an expression for $U$, we substitute \eqref{eq:DLP} evaluated at \eqref{eq:closept} for $u(y^{\star} - \varepsilon \ell \nu^{\star})$ and \eqref{eq:bie} for $f(y^{\star})$ into \eqref{eq:constantapprox}, and find that \begin{equation}
U(y^{\star};\varepsilon) = \varepsilon^{-1} \left[ \frac{1}{2} \mu(y^{\star}) +
\frac{1}{2^{n-1} \pi} \int_{B} \left[
\frac{\nu_{y} \cdot ( y_{d} - \varepsilon \ell \nu^{\star})}{|y_{d} -
\varepsilon \ell \nu^{\star} |^{n}} - \frac{\nu_{y} \cdot
y_{d}}{|y_{d}|^{n}} \right] \mu(y) \mathrm{d}\sigma_{y} \right],
\label{eq:constantapproxwithbie} \end{equation} where we have introduced the notation, $y_{d} = y^{\star} - y$. Next, we make use of Gauss' theorem~\cite{guenther1996partial} \begin{equation}
\frac{1}{2^{n-1} \pi} \int_{B} \frac{\nu_{y} \cdot ( x - y
)}{|x - y|^{n}} \mathrm{d}\sigma_{y} = \begin{cases}
-1 & x \in D,\\
-\frac{1}{2} & x \in B,\\
\,\,\,\, 0 & x \not\in \bar{D}, \end{cases}
\label{eq:gauss} \end{equation} to write \begin{equation}
\frac{1}{2} \mu(y^{\star}) = - \frac{1}{2^{n-1} \pi} \int_{B}
\frac{\nu_{y} \cdot ( y_{d} - \varepsilon \ell
\nu^{\star})}{|y_{d} - \varepsilon \ell
\nu^{\star} |^{n}} \mu(y^{\star}) \mathrm{d}\sigma_{y}
+ \frac{1}{2^{n-1} \pi} \int_{B} \frac{\nu_{y} \cdot
y_{d}}{|y_{d} |^{n}} \mu(y^{\star})
\mathrm{d}\sigma_{y}.
\label{eq:gaussidentity} \end{equation} Substituting \eqref{eq:gaussidentity} into \eqref{eq:constantapproxwithbie} yields \begin{equation}
U(y^{\star};\varepsilon) = \frac{1}{2^{n-1} \pi} \int_{B}
\varepsilon^{-1} \left[ \frac{\nu_{y} \cdot ( y_{d} - \varepsilon
\ell \nu^{\star})}{|y_{d} - \varepsilon \ell \nu^{\star} |^{n}} -
\frac{\nu_{y} \cdot y_{d}}{|y_{d}|^{n}} \right] [ \mu(y) -
\mu(y^{\star}) ]\mathrm{d}\sigma_{y}.
\label{eq:correction} \end{equation}
We seek to determine the asymptotic expansion of $U$ given in \eqref{eq:correction} in the limit as $\varepsilon \to 0^{+}$. To determine this asymptotic expansion, we make use of explicit parametrizations for $B$. Therefore, we consider the two and three-dimensional problems separately.
\section{Asymptotic analysis in two dimensions} \label{sec:asymptotics2D}
Suppose $B$ is an analytic, closed curve on the plane. For that case, we introduce the parameter $t \in [-\pi,\pi]$ such that $y = y(t)$ and $y^{\star} = y(0)$. In terms of this parameterization, \eqref{eq:correction} is given by \begin{equation}
U(y^{\star};\varepsilon) = \frac{1}{2 \pi} \int_{-\pi}^{\pi}
\varepsilon^{-1} K(t;\varepsilon) [ \tilde{\mu}(t) -
\tilde{\mu}(0) ] \mathrm{d}t, \end{equation} with $\tilde{\mu}(t) = \mu(y(t))$, $\tilde{\mu}(0) = \mu(y(0)) = \mu(y^{\star})$, and \begin{equation}
K(t;\varepsilon) = \left[ \frac{\tilde{\nu}(t) \cdot ( y_{d}(t)
- \varepsilon \ell \nu^{\star})}{|y_{d}(t) -
\varepsilon \ell \nu^{\star} |^{2}} -
\frac{\tilde{\nu}(t) \cdot y_{d}(t)}{|y_{d}(t)|^{2}}
\right] J(t),
\label{eq:kernel2D} \end{equation} with $\tilde{\nu}(t) = \nu(y(t))$, $y_{d}(t) = y(0) - y(t)$, and
$J(t) = | y'(t) |$. Note that $\nu^\star = \tilde{\nu}(0)$.
To determine the asymptotic expansion for $U$, we introduce the small parameter $\delta$ satisfying $0 < \varepsilon \ll \delta \ll 1$, and write \begin{equation}
U(y^{\star};\varepsilon) = U^{\text{in}}(y^{\star};\varepsilon,\delta) +
U^{\text{out}}(y^{\star};\varepsilon,\delta).
\label{eq:3.3} \end{equation} Here, the inner expansion, $U^{\text{in}}$, is given by \begin{equation}
U^{\text{in}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\delta/2}^{\delta/2} \varepsilon^{-1} K(t;\varepsilon) \left[
\tilde{\mu}(t) - \tilde{\mu}(0) \right] \mathrm{d}t,
\label{eq:Uin2D} \end{equation} and the outer expansion, $U^{\text{out}}$, is given by \begin{multline}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\pi}^{-\delta/2} \varepsilon^{-1} K(t;\varepsilon) \left[
\tilde{\mu}(t) - \tilde{\mu}(0) \right] \mathrm{d}t + \frac{1}{2\pi} \int_{\delta/2}^{\pi} \varepsilon^{-1} K(t;\varepsilon)
\left[ \tilde{\mu}(t) - \tilde{\mu}(0) \right] \mathrm{d}t.
\label{eq:Uout2D} \end{multline} The inner expansion involves integration over a small
portion of the boundary about $y^{\star}$, whereas the outer
expansion involves integration over the remaining portion of the
boundary.
We determine the leading-order asymptotic behaviors of $U^{\text{in}}$ and $U^{\text{out}}$ in the sections below. Then, we combine those results to obtain the asymptotic approximation for the double-layer potential in two dimensions, and discuss higher-order asymptotic approximations. We have developed a \textit{Mathematica} notebook that contains the presented calculations, available in a GitHub repository \cite{CKK-2018Codes}.
\subsection{Inner expansion}\label{ssec:inner}
To determine the leading-order asymptotic behavior of $U^{\text{in}}$, we substitute $t = \varepsilon T$ into \eqref{eq:Uin2D}, and obtain \begin{equation}
U^{\text{in}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\delta/2\varepsilon}^{\delta/2\varepsilon} K(\varepsilon T;\varepsilon)
\left[ \tilde{\mu}(\varepsilon T) - \tilde{\mu}(0) \right]
\mathrm{d}T.
\label{eq:3.6} \end{equation} Recognizing that $\tilde{\nu}(\varepsilon T) = \nu^{\star} + O(\varepsilon)$ and $y_{d}(\varepsilon T) = - \varepsilon T y'(0) +
O(\varepsilon^{2})$ with $\nu^{\star} \cdot y'(0) = 0$, we find, by expanding $K(\varepsilon T;\varepsilon)$ about $\varepsilon = 0$, that \begin{equation}
K(\varepsilon T;\varepsilon) = - \frac{\varepsilon^{-1} \ell J(0)}{
T^{2} J^{2}(0) + \ell^{2}} + O(1). \end{equation} Using the fact that this leading-order behavior is even in $T$, and expanding $\tilde{\mu}$ about $\varepsilon = 0$, we can substitute into \eqref{eq:3.6} to get, after expanding about $\delta = 0$, \begin{align}
\begin{split}
U^{\text{in}}&(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\delta/2\varepsilon}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon^{-1} \ell J(0)}{T^{2} J^{2}(0) + \ell^{2}} + O(1)
\right] \left[ \tilde{\mu}(\varepsilon T) - \tilde{\mu}(0) \right]
\mathrm{d}T\\
&= \frac{1}{2\pi} \int_{0}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon^{-1} \ell J(0)}{T^{2} J^{2}(0) + \ell^{2}} + O(1)
\right] \left[ \tilde{\mu}(\varepsilon T) + \tilde{\mu}(-\varepsilon T)
- 2 \tilde{\mu}(0) \right] \mathrm{d}T\\
&=\frac{1}{2\pi} \int_{0}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon^{-1} \ell J(0)}{T^{2} J^{2}(0) + \ell^{2}} + O(1)
\right] \left[ \varepsilon^{2} T^{2} \tilde{\mu}''(0) +
O(\varepsilon^{4}) \right] \mathrm{d}T\\
&=\frac{1}{2\pi} \int_{0}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon T^{2} \ell J(0)}{T^{2} J^{2}(0) + \ell^{2}}
\tilde{\mu}''(0) +
O(\varepsilon^{2}) \right] \mathrm{d}T\\
&= - \frac{\delta \ell}{4 \pi J(0)} \tilde{\mu}''(0) +
O(\varepsilon).
\end{split}
\label{eq:Uin2Dleadingorder} \end{align} This result gives the leading-order asymptotic behavior of $U^{\text{in}}$.
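For the reader's convenience, we record the intermediate expansions behind the leading-order behavior of $K(\varepsilon T;\varepsilon)$ obtained above (a brief sketch of our own). Since $\tilde{\nu}(\varepsilon T) = \nu^{\star} + O(\varepsilon)$, $y_{d}(\varepsilon T) = -\varepsilon T y'(0) + O(\varepsilon^{2})$, and $\nu^{\star} \cdot y'(0) = 0$, we have
\begin{align*}
| y_{d}(\varepsilon T) - \varepsilon \ell \nu^{\star} |^{2} &= \varepsilon^{2} \left( T^{2} J^{2}(0) + \ell^{2} \right) + O(\varepsilon^{3}), \\
\tilde{\nu}(\varepsilon T) \cdot \left( y_{d}(\varepsilon T) - \varepsilon \ell \nu^{\star} \right) &= -\varepsilon \ell + O(\varepsilon^{2}),
\end{align*}
while $\tilde{\nu}(\varepsilon T) \cdot y_{d}(\varepsilon T) = O(\varepsilon^{2})$ and $| y_{d}(\varepsilon T) |^{2} = \varepsilon^{2} T^{2} J^{2}(0) + O(\varepsilon^{3})$, so the second term in the bracket of \eqref{eq:kernel2D} is $O(1)$; multiplying by $J(\varepsilon T) = J(0) + O(\varepsilon)$ then yields the leading-order behavior of $K(\varepsilon T;\varepsilon)$ used in \eqref{eq:Uin2Dleadingorder}.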
\subsection{Outer expansion} To determine the leading-order asymptotic behavior of $U^{\text{out}}$, we expand $K(t;\varepsilon)$ about $\varepsilon = 0$ and find that $K(t;\varepsilon) = [ \varepsilon K_{1}(t) + O(\varepsilon^{2}) ] J(t)$, with \begin{equation}
K_{1}(t) = \ell \frac{2 ( \tilde{\nu}(t) \cdot y_{d}(t))
( \nu^{\star} \cdot y_{d}(t)) - \tilde{\nu}(t) \cdot
\nu^{\star} |y_{d}(t)|^{2}}{|y_{d}(t)|^{4}}.
\label{eq:K1-2D} \end{equation} Substituting this expansion into \eqref{eq:Uout2D}, we find that \begin{multline}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\pi}^{-\delta/2} K_{1}(t) \left[
\tilde{\mu}(t) - \tilde{\mu}(0) \right] J(t) \mathrm{d}t + \frac{1}{2\pi} \int_{\delta/2}^{\pi} K_{1}(t) \left[
\tilde{\mu}(t) - \tilde{\mu}(0) \right] J(t) \mathrm{d}t +
O(\varepsilon).
\label{eq:Uout2Dintermediate} \end{multline} To eliminate $\delta$ from the integration limits, we rewrite \eqref{eq:Uout2Dintermediate} as \begin{equation}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\pi}^{\pi} K_{1}(t) \left[ \tilde{\mu}(t) - \tilde{\mu}(0)
\right] J(t) \mathrm{d}t - V^{\text{out}}(y^{\star};\varepsilon,\delta)
+ O(\varepsilon)
\label{eq:Uout2Dintermediate2} \end{equation} with \begin{equation}
V^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\delta/2}^{\delta/2} K_{1}(t) \left[ \tilde{\mu}(t) -
\tilde{\mu}(0) \right] J(t) \mathrm{d}t.
\label{eq:3.12} \end{equation} To determine the leading-order behavior for $V^{\text{out}}$, we proceed exactly as in section \ref{ssec:inner}. We substitute $t = \varepsilon T$ into \eqref{eq:3.12} and obtain \begin{equation}
V^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\delta/2\varepsilon}^{\delta/2\varepsilon} K_{1}(\varepsilon T) \left[
\tilde{\mu}(\varepsilon T) - \tilde{\mu}(0) \right]
J(\varepsilon T) \varepsilon \mathrm{d}T
\label{eq:Vout2D} \end{equation} Again, by recognizing that $\tilde{\nu}(\varepsilon T) = \nu^{\star} + O(\varepsilon)$ and $y_{d}(\varepsilon T) = - \varepsilon T y'(0) +
O(\varepsilon^{2})$ with $\nu^{\star} \cdot y'(0) = 0$, we find that \begin{equation}
K_{1}(\varepsilon T) = - \frac{\varepsilon^{-2} \ell}{T^{2}
J^{2}(0)} + O(\varepsilon^{-1}). \end{equation} Using the fact that this leading-order behavior is even in $T$ and that $J(\varepsilon T) = J(0) + O(\varepsilon)$, we substitute into \eqref{eq:Vout2D} and find, after expanding about $\delta = 0$, that \begin{align}\label{eq:Vout2Dbis}
\begin{split}
V^{\text{out}}&(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\delta/2\varepsilon}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon^{-2} \ell}{T^{2} J^{2}(0)} + O(\varepsilon^{-1})
\right] \left[ \tilde{\mu}(\varepsilon T) - \tilde{\mu}(0) \right]
J(\varepsilon T) \varepsilon \mathrm{d}T\\
&= \frac{1}{2\pi} \int_{0}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon^{-2} \ell}{T^{2} J^{2}(0)} + O(\varepsilon^{-1})
\right] \left[ \tilde{\mu}(\varepsilon T) + \tilde{\mu}(-\varepsilon T)
- 2
\tilde{\mu}(0) \right] \left[J(0) + O(\varepsilon)\right] \varepsilon
\mathrm{d}T\\
&= \frac{1}{2\pi} \int_{0}^{\delta/2\varepsilon} \left[ -
\frac{\varepsilon^{-1} \ell}{T^{2} J(0)} + O(1) \right] \left[
\varepsilon^{2} T^{2} \tilde{\mu}''(0) + O(\varepsilon^{4})
\right] \mathrm{d}T\\
&= \frac{1}{2\pi} \int_{0}^{\delta/2\varepsilon} \left[
-\frac{\varepsilon \ell}{J(0)} \tilde{\mu}''(0) + O(\varepsilon^{2})
\right]
\mathrm{d}T\\
&= - \frac{\delta \ell}{4 \pi J(0)} \tilde{\mu}''(0)+ O(\varepsilon).
\end{split} \end{align} Substituting this result into \eqref{eq:Uout2Dintermediate2}, we find that the leading-order asymptotic behavior for $U^{\text{out}}$ is given by \begin{equation}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{2\pi}
\int_{-\pi}^{\pi} K_{1}(t) \left[ \tilde{\mu}(t) - \tilde{\mu}(0)
\right] J(t) \mathrm{d}t + \frac{\delta \ell}{4 \pi J(0)}
\tilde{\mu}''(0) + O(\varepsilon).
\label{eq:Uout2Dleadingorder} \end{equation}
\subsection{Two-dimensional asymptotic approximation}
We obtain an asymptotic approximation for $U$ by summing the leading-order behaviors obtained for $U^{\text{in}}$ and $U^{\text{out}}$ given in \eqref{eq:Uin2Dleadingorder} and \eqref{eq:Uout2Dleadingorder}, respectively. Substituting that result into \eqref{eq:constantapprox}, we obtain the following asymptotic approximation, \begin{equation}
u(y^{\star} - \varepsilon \ell \nu^{\star}) = f(y^{\star}) + \varepsilon
L_{1}[ \mu ] + O(\varepsilon^{2}),
\label{eq:asymptotic2D} \end{equation} with \begin{equation}
L_{1}[\mu] = \frac{1}{2\pi} \int_{-\pi}^{\pi} K_{1}(t) \left[
\tilde{\mu}(t) - \tilde{\mu}(0) \right] J(t) \mathrm{d}t,
\label{eq:L1-2D} \end{equation} where $K_{1}$ is given by \eqref{eq:K1-2D}. Naturally, the resulting asymptotic approximation does not depend on the arbitrary parameter $\delta$.
Asymptotic approximation \eqref{eq:asymptotic2D} gives an explicit approximation for the close evaluation of the double-layer potential in two dimensions. According to the asymptotic analysis, the error of this approximation is $O(\varepsilon^{2})$. It gives the double-layer potential as the Dirichlet data at the boundary point $y^{\star}$ closest to the evaluation point $x$ plus a nonlocal correction. This nonlocal correction is consistent with the fact that solutions to elliptic partial differential equations have a global dependence on their boundary data. The leading-order asymptotic expansion indicates that the nonlocal correction comes only from the outer expansion; the inner expansion does not contribute at this order.
\subsection{Higher-order asymptotic approximations}
By continuing on to higher-order terms in the expansions for $U^{\text{in}}$ and $U^{\text{out}}$, we can obtain higher-order asymptotic approximations. This process will not be demonstrated here because the calculations become unwieldy; details can be found in the \textit{Mathematica} notebook~\cite{CKK-2018Codes}. The result from these calculations is \begin{multline}
u(y^{\star} - \varepsilon \ell \nu^{\star}) = f(y^{\star}) + \varepsilon
L_{1}[ \mu ] + \varepsilon^{2} \left[ L_{2}[\mu] - \frac{\ell^{2} y'(0) \cdot
y''(0)}{4 J^{4}(0)} \tilde{\mu}'(0) + \frac{\ell^{2}}{4
J^{2}(0)} \tilde{\mu}''(0) \right] + O(\varepsilon^{3}),
\label{eq:asymptotic2D-higherorder} \end{multline} with $L_{1}[\mu]$ given in \eqref{eq:L1-2D}, and \begin{equation}
L_{2}[\mu] = \frac{1}{2\pi} \int_{-\pi}^{\pi} K_{2}(t) \left[
\tilde{\mu}(t) - \tilde{\mu}(0) \right] J(t) \mathrm{d}t, \end{equation} where \begin{equation}
K_{2}(t) = \ell^{2} \frac{(\nu \cdot y_{d}) \left[ 4 ( \nu^{\star}
\cdot y_{d} )^{2} - |y_{d}|^{2} \right] -2 |y_{d}|^{2} (\nu
\cdot \nu^{\star})( \nu^{\star} \cdot y_{d})}{|y_{d}|^{6}}.
\label{eq:K2-2D} \end{equation} This asymptotic approximation has an error that is $O(\varepsilon^{3})$. In addition to nonlocal terms, this approximation includes local contributions made by first and second derivatives of the density, $\mu$, evaluated at the boundary point $y^{\star}$. The local contributions come from the inner expansion.
\section{Asymptotic analysis in three dimensions} \label{sec:asymptotics3D}
Suppose $B$ is an analytic, closed, and oriented surface. We introduce the parameters $s \in [0,\pi]$ and $t \in [-\pi,\pi]$ such that $y = y(s,t)$ and $y^{\star} = y(0,\cdot)$. In terms of this parameterization, \eqref{eq:correction} is given by \begin{equation}
U(y^{\star};\varepsilon) = \frac{1}{4\pi} \int_{-\pi}^{\pi}
\int_{0}^{\pi} \varepsilon^{-1} K(s,t;\varepsilon) \left[
\tilde{\mu}(s,t) - \tilde{\mu}(0,\cdot) \right] \sin(s) \mathrm{d}s
\mathrm{d}t, \end{equation} with $\tilde{\mu}(s,t) = \mu(y(s,t))$, $\tilde{\mu}(0,\cdot) = \mu(y(0,\cdot))$, and \begin{equation}
K(s,t;\varepsilon) = \left[ \frac{\tilde{\nu}(s,t) \cdot ( y_{d}(s,t) -
\varepsilon \ell \nu^{\star})}{| y_{d}(s,t) - \varepsilon \ell
\nu^{\star} |^{3}} - \frac{\tilde{\nu}(s,t) \cdot
y_{d}(s,t)}{| y_{d}(s,t) |^{3}} \right] J(s,t), \end{equation} with $\tilde{\nu}(s,t) = \nu_{y(s,t)}$,
$y_{d}(s,t) = y(0,\cdot) - y(s,t)$, $J(s,t) = | y_{s}(s,t) \times y_{t}(s,t) |/\sin(s)$. Note that $ \nu^\star = \tilde{\nu}(0, \cdot)$.
Just as we have done for the two dimensional problem, we introduce the small parameter $\delta$ satisfying $0 < \varepsilon \ll \delta \ll 1$, and write \begin{equation}
U(y^{\star};\varepsilon) = U^{\text{in}}(y^{\star};\varepsilon,\delta) +
U^{\text{out}}(y^{\star};\varepsilon,\delta).
\label{eq:4.3} \end{equation} Here, the inner expansion is given by \begin{equation}
U^{\text{in}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{0}^{\delta} \varepsilon^{-1} K(s,t;\varepsilon)
\left[ \tilde{\mu}(s,t) - \tilde{\mu}(0,\cdot) \right] \sin(s)\mathrm{d}s
\mathrm{d}t,
\label{eq:Uin3D} \end{equation} and the outer expansion is given by \begin{equation}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{\delta}^{\pi} \varepsilon^{-1}
K(s,t;\varepsilon) \left[ \tilde{\mu}(s,t) - \tilde{\mu}(0,\cdot)
\right] \sin(s)\mathrm{d}s \mathrm{d}t.
\label{eq:Uouter3D} \end{equation} Again, we determine the leading-order asymptotic behaviors for $U^{\text{in}}$ and $U^{\text{out}}$ separately. Then, we combine those results to obtain an asymptotic approximation for the close evaluation of the double-layer potential in three dimensions and discuss higher-order asymptotic approximations. Some details can be found in the developed \textit{Mathematica} notebook available on GitHub~\cite{CKK-2018Codes}.
\subsection{Inner expansion} \label{ssec:inner3D}
To find the leading-order asymptotic behavior of $U^{\text{in}}$, we substitute $s = \varepsilon S$ into \eqref{eq:Uin3D}, and obtain \begin{equation}
U^{\text{in}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{0}^{\delta/\varepsilon} K(\varepsilon
S,t;\varepsilon) \left[ \tilde{\mu}(\varepsilon S,t) -
\tilde{\mu}(0,\cdot) \right] \sin(\varepsilon S) \mathrm{d}S
\mathrm{d}t. \end{equation} Recognizing that $\tilde{\nu}(\varepsilon S, t) = \nu^{\star} + O(\varepsilon)$, and $y_{d}(\varepsilon S, t) = - \varepsilon S y_{s}(0,\cdot) +
O(\varepsilon^{2})$ with the vector $y_{s}(0,\cdot)$ lying on the plane tangent to $B$ at $y^{\star}$, we find by expanding $K(\varepsilon S, t ; \varepsilon)$ about $\varepsilon = 0$ that \begin{equation}
K(\varepsilon S, t ; \varepsilon) = - \frac{\varepsilon^{-2} \ell J(0,\cdot)}{(S^{2} |
y_{s}(0,\cdot) |^{2} + \ell^{2})^{3/2}} + O(\varepsilon^{-1}).
\end{equation} Since this leading-order asymptotic behavior for $K(\varepsilon S, t; \varepsilon)$ is independent of $t$, we write \begin{multline}
U^{\text{in}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{0}^{\pi} \int_{0}^{\delta/\varepsilon} \left[ -
\frac{\varepsilon^{-2} \ell J(0,\cdot)}{(S^{2} | y_{s}(0,\cdot) |^{2}
+ \ell^{2})^{3/2}} + O(\varepsilon^{-1}) \right]\\
\left[ \tilde{\mu}(\varepsilon S,t) + \tilde{\mu}(\varepsilon S, t + \pi)
- 2 \tilde{\mu}(0,\cdot) \right] \sin(\varepsilon S) \mathrm{d}S
\mathrm{d}t.
\label{eq:4.8} \end{multline} Next, we use the regularity of $\tilde{\mu}$ over the north pole to substitute $\tilde{\mu}(\varepsilon S, t + \pi) = \tilde{\mu}(-\varepsilon S, t)$, so that \begin{align}
\begin{split}
\tilde{\mu}(\varepsilon S,t) + \tilde{\mu}(\varepsilon S, t + \pi) - 2
\tilde{\mu}(0,\cdot) &= \tilde{\mu}(\varepsilon S,t) +
\tilde{\mu}(-\varepsilon S,t) - 2 \tilde{\mu}(0,\cdot)\\
&= \varepsilon^{2} S^{2} \tilde{\mu}_{ss}(0,\cdot) + O(\varepsilon^{4}).
\end{split}
\label{eq:mu_ss} \end{align} Thus, we find after substituting \eqref{eq:mu_ss} and $\sin(\varepsilon S) = \varepsilon S + O(\varepsilon^{3})$ into \eqref{eq:4.8} that \begin{align}
\begin{split}
U^{\text{in}}(y^{\star};\varepsilon,\delta) &= \frac{1}{4\pi}
\int_{0}^{\pi} \int_{0}^{\delta/\varepsilon} \left[ - \frac{\varepsilon
S^{3} \ell J(0,\cdot)}{(S^{2} | y_{s}(0,\cdot) |^{2} +
\ell^{2})^{3/2}} \tilde{\mu}_{ss}(0,\cdot) + O(\varepsilon^{2})
\right] \mathrm{d}S \mathrm{d}t\\
&= - \frac{\ell J(0,\cdot)}{8} \varDelta_{S^{2}} \mu(y^{\star})
\int_{0}^{\delta/\varepsilon} \left[ \frac{\varepsilon S^{3}}{(S^{2} |
y_{s}(0,\cdot) |^{2} + \ell^{2})^{3/2}} + O(\varepsilon^{2})
\right] \mathrm{d}S,
\end{split} \end{align} where we have used the fact that \begin{equation}
\frac{1}{\pi} \int_{0}^{\pi} \tilde{\mu}_{ss}(0,\cdot)\mathrm{d}t =
\frac{1}{2} \varDelta_{S^{2}} \mu(y^{\star}),
\label{eq:spherical-laplacian} \end{equation} with $\varDelta_{S^{2}} \mu(y^{\star})$ denoting the spherical Laplacian of $\mu$ evaluated at $y^{\star}$ (see Appendix \ref{sec:spherical-laplacian}). Furthermore, since expanding about $\delta = 0$ gives \begin{equation}
\int_{0}^{\delta/\varepsilon} \left[ \frac{\varepsilon S^{3}}{(S^{2}
|y_{s}(0,\cdot)|^{2} + \ell^{2})^{3/2}} + O(\varepsilon^{2}) \right]
\mathrm{d}S = \frac{\delta}{|y_{s}(0,\cdot)|^{3}} + O(\varepsilon), \end{equation} we determine that \begin{equation}
U^{\text{in}}(y^{\star};\varepsilon,\delta) = - \frac{\delta \ell
J(0,\cdot)}{8 |y_{s}(0,\cdot)|^{3}} \varDelta_{S^{2}}
\mu(y^{\star}) + O(\varepsilon).
\label{eq:Uinner3D} \end{equation} This result gives the leading-order asymptotic behavior of $U^{\text{in}}$.
\subsection{Outer expansion}
To determine the leading-order asymptotic behavior of $U^{\text{out}}$, we expand $K(s,t;\varepsilon)$ about $\varepsilon = 0$ and find $K(s,t;\varepsilon) = $ $\left[ \varepsilon K_{1}(s,t) + O(\varepsilon^{2}) \right] J(s,t)$, with \begin{equation}
K_{1}(s,t) = \ell \frac{3 ( \tilde{\nu}(s,t) \cdot y_{d}(s,t) ) (
\nu^{\star} \cdot y_{d}(s,t) ) - |y_{d}(s,t)|^{2} \tilde{\nu}(s,t) \cdot
\nu^{\star}}{| y_{d}(s,t) |^{5}}.
\label{eq:K1-3D} \end{equation} Substituting this expansion into \eqref{eq:Uouter3D}, we obtain \begin{equation}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{\delta}^{\pi} K_{1}(s,t) \left[
\tilde{\mu}(s,t) - \tilde{\mu}(0,\cdot)
\right] J(s,t)\sin(s) \mathrm{d}s \mathrm{d}t + O(\varepsilon).
\label{eq:4.16} \end{equation} To eliminate $\delta$ as a limit of integration in \eqref{eq:4.16}, we write \begin{multline}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{0}^{\pi} K_{1}(s,t) \left[ \tilde{\mu}(s,t)
- \tilde{\mu}(0,\cdot)
\right] J(s,t)\sin(s) \mathrm{d}s \mathrm{d}t - V^{\text{out}}(y^{\star};\varepsilon,\delta) + O(\varepsilon),
\label{eq:Uouter3D-expand} \end{multline} with \begin{equation}
V^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{0}^{\delta} K_{1}(s,t) \left[
\tilde{\mu}(s,t) - \tilde{\mu}(0,\cdot)
\right] J(s,t)\sin(s) \mathrm{d}s \mathrm{d}t.
\label{eq:4.17} \end{equation}
To determine the leading-order asymptotic behavior of $V^{\text{out}}(y^{\star};\varepsilon,\delta)$, we proceed as in section \ref{ssec:inner3D}. We substitute $s = \varepsilon S$ into \eqref{eq:4.17}, and obtain \begin{equation}
V^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{0}^{\delta/\varepsilon} K_{1}(\varepsilon S,t) \left[
\tilde{\mu}(\varepsilon S,t) - \tilde{\mu}(0,\cdot)
\right]J(\varepsilon S,t) \sin(\varepsilon S) \varepsilon \mathrm{d}S \mathrm{d}t. \end{equation} Recognizing that $\tilde{\nu}(\varepsilon S, t) = \nu^{\star} + O(\varepsilon)$, and $y_{d}(\varepsilon S, t) = - \varepsilon S y_{s}(0,\cdot) +
O(\varepsilon^{2})$ with the vector $y_{s}(0,\cdot)$ lying on the plane tangent to $B$ at $y^{\star}$, we find by expanding $K_{1}(\varepsilon S, t)$ about $\varepsilon = 0$ that \begin{equation}
K_{1}(\varepsilon S, t) = -\frac{\varepsilon^{-3} \ell }{S^{3} |
y_{s}(0,\cdot)|^{3}} + O(\varepsilon^{-2}). \end{equation} Since the leading-order behavior of $K_{1}$ is independent of $t$, we use \eqref{eq:mu_ss}, together with $J(\varepsilon S,t) = J(0,\cdot) + O(\varepsilon)$ and $\sin(\varepsilon S) = \varepsilon S + O(\varepsilon^{3})$, to obtain, after expanding about $\delta = 0$, \begin{align}
\begin{split}
V^{\text{out}}(y^{\star};\varepsilon,\delta) &= \frac{1}{4\pi}
\int_{0}^{\pi} \int_{0}^{\delta/\varepsilon} \left[ - \frac{\varepsilon
\ell J(0,\cdot)}{|y_{s}(0,\cdot)|^{3}}
\tilde{\mu}_{ss}(0,\cdot) + O(\varepsilon^{2}) \right] \mathrm{d}S
\mathrm{d}t\\
&= - \frac{\delta \ell J(0,\cdot)}{8 |y_{s}(0,\cdot)|^{3}}
\varDelta_{S^{2}} \mu(y^{\star}) + O(\varepsilon).
\end{split}
\label{eq:Vouter-3D} \end{align} Note that we have used \eqref{eq:spherical-laplacian} in the last step. Substituting this result into \eqref{eq:Uouter3D-expand}, we find that \begin{multline}
U^{\text{out}}(y^{\star};\varepsilon,\delta) = \frac{1}{4\pi}
\int_{-\pi}^{\pi} \int_{0}^{\pi} K_{1}(s,t) \left[ \tilde{\mu}(s,t)
- \tilde{\mu}(0,\cdot)
\right] J(s,t)\sin(s) \mathrm{d}s \mathrm{d}t + \delta \frac{\ell J(0,\cdot)}{8 |y_{s}(0,\cdot)|^{3}}
\varDelta_{S^{2}} \mu(y^{\star}) + O(\varepsilon).
\label{eq:Uout3D-leadingorder} \end{multline} This result gives the leading-order asymptotic behavior of $U^{\text{out}}$.
\subsection{Three-dimensional asymptotic approximation}
We obtain an asymptotic approximation for $U$ by summing the leading-order behaviors obtained for $U^{\text{in}}$ and $U^{\text{out}}$ given in \eqref{eq:Uinner3D} and \eqref{eq:Uout3D-leadingorder}, respectively. Substituting that result into \eqref{eq:constantapprox}, we obtain the following asymptotic approximation \begin{equation}
u(y^{\star} - \varepsilon \ell \nu^{\star}) = f(y^{\star}) + \varepsilon
L_{1}[\mu] + O(\varepsilon^{2}),
\label{eq:asymptotic3D} \end{equation} with \begin{equation}
L_{1}[\mu] = \frac{1}{4\pi} \int_{-\pi}^{\pi} \int_{0}^{\pi}
K_{1}(s,t) \left[ \tilde{\mu}(s,t) - \tilde{\mu}(0,\cdot)
\right] J(s,t)\sin(s) \mathrm{d}s \mathrm{d}t. \end{equation} The structure of this asymptotic approximation for the close evaluation of the double-layer potential in three dimensions is exactly the same as what we found for the two-dimensional case: the leading-order asymptotic approximation is composed of the Dirichlet data and a nonlocal term coming from the outer expansion. Similarly, higher-order asymptotic approximations could be obtained by continuing on to higher-order terms in the expansions of $U^{\text{in}}$ and $U^{\text{out}}$, at the cost of more cumbersome calculations.
\section{Numerical methods} \label{sec:numerics}
Numerical methods to compute the asymptotic approximations for the close evaluation of the double-layer potential must be sufficiently accurate in comparison to $O(\varepsilon)$; otherwise, the error made by the numerical method will dominate the error of the asymptotic approximation. On the other hand, if the numerical method requires restrictively high resolution to compute the asymptotic approximation to sufficient accuracy, then it suffers from the very issue underlying the close evaluation problem. In what follows, we describe numerical methods that compute the asymptotic approximations derived above to high accuracy with modest resolution requirements.
\subsection{Two dimensions}
Suppose we have parameterized $B$ by $y = y(\varphi)$ for $-\pi \le \varphi \le \pi$, with $y^{\star} = y(\varphi^{\star})$. For that case, we need to compute \begin{equation}
\mathscr{U}_{1}(y^{\star}) = \frac{1}{2\pi} \int_{-\pi}^{\pi}
F_{1}(\varphi;\varphi^{\star}) \mathrm{d}\varphi,
\label{eq:U1-2D} \end{equation} with \begin{equation}
F_{1}(\varphi;\varphi^{\star}) = K_{1}(\varphi)
J(\varphi) \left[ \tilde{\mu}(\varphi) -
\tilde{\mu}(\varphi^{\star}) \right],
\label{eq:F-2D} \end{equation} where $K_{1}$ is given in \eqref{eq:K1-2D}. The function, $K_{1}$, is singular at $\varphi = \varphi^{\star}$. Consequently, applying a high order accurate numerical quadrature rule to compute $\mathscr{U}_{1}$ will be limited in its accuracy even though $F_{1}$ vanishes identically at $\varphi = \varphi^{\star}$ due to the factor of $\tilde{\mu}(\varphi) - \tilde{\mu}(\varphi^{\star})$. To improve the accuracy of a numerical evaluation of \eqref{eq:U1-2D}, we revisit the asymptotic expansion obtained for $V^{\text{out}}(y^{\star};\varepsilon,\delta)$ in \eqref{eq:Vout2D}-\eqref{eq:Vout2Dbis}. By rewriting that result for the present context, we find \begin{equation}
\frac{1}{2\pi} \int_{-\delta/2}^{\delta/2}
F_{1}(\varphi;\varphi^{\star}) \mathrm{d}\varphi = -\frac{\delta}{4
\pi} \frac{\ell \tilde{\mu}''(\varphi^{\star}) }{J(\varphi^{\star})}
+ O(\varepsilon).
\label{eq:5.3} \end{equation}
This result suggests the following method to compute $\mathscr{U}_{1}(y^{\star})$ numerically using the $N$-point periodic trapezoid rule (PTR). Suppose we are given the grid function, $\tilde{\mu}(\varphi_{j})$ for $j = 1, \cdots, N$ with $\varphi_{j} = -\pi + 2 \pi (j - 1) /N$, and suppose $\varphi^{\star} = \varphi_{k}$ is one of the quadrature points. We introduce the numerical approximation \begin{equation}
\mathscr{U}_{1}(y^{\star}) \approx U_{1}^{N}(y^{\star}) =
\frac{1}{N} \sum_{j \neq k} F_{1}(\varphi_{j};\varphi_{k}) -
\frac{\ell \tilde{\mu}''(\varphi_{k})}{2 N J(\varphi_{k})}, \end{equation} where we have replaced the quadrature around $\varphi^{\star}$ with \eqref{eq:5.3} evaluated at $\delta = 2 \pi/N$. We compute $\tilde{\mu}''(\varphi_{k})$ with spectral accuracy using Fast Fourier transform methods. Using this numerical approximation, we compute the $O(\varepsilon^{2})$ asymptotic approximation for the close evaluation of the double-layer potential in two dimensions through evaluation of \begin{equation}
u(y^{\star} - \varepsilon \ell \nu^{\star}) \approx f(y^{\star})
+ \varepsilon U_{1}^{N}(y^{\star}) + O(\varepsilon^{2}).
\label{eq:numerical-method2D} \end{equation}
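To make these steps concrete, the following minimal Python sketch (an illustration of ours; the authors' reference implementations are the Matlab and \textit{Mathematica} codes in~\cite{CKK-2018Codes}) evaluates \eqref{eq:numerical-method2D} at a PTR node $\varphi_{k}$, assuming that the boundary points, unit outward normals, Jacobian, and density are already available on the $N$-point PTR grid. All function and variable names below are ours.
\begin{verbatim}
import numpy as np

def mu_second_derivative(mu):
    # Spectral second derivative of a 2*pi-periodic grid function via FFT.
    N = mu.size
    m = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers
    return np.real(np.fft.ifft(-(m**2) * np.fft.fft(mu)))

def close_eval_2d(y, nu, J, mu, f_star, k, ell, eps):
    # O(eps^2) approximation of u at x = y[:, k] - eps*ell*nu[:, k].
    # y, nu : 2 x N arrays of boundary points and unit outward normals
    # J, mu : N arrays of |y'(phi_j)| and density values at the PTR nodes
    # f_star: Dirichlet data f(y^*) at the node k
    N = mu.size
    ystar, nustar = y[:, k], nu[:, k]
    yd = ystar[:, None] - y                 # y_d(phi_j) = y^* - y(phi_j)
    r2 = np.sum(yd**2, axis=0)
    r2[k] = 1.0                             # dummy value; the j = k term is dropped
    K1 = ell * (2.0 * np.sum(nu * yd, axis=0) * (nustar @ yd)
                - (nustar @ nu) * r2) / r2**2
    F1 = K1 * J * (mu - mu[k])
    F1[k] = 0.0                             # the j = k term is replaced below by
                                            # the local asymptotic correction
    U1 = F1.sum() / N - ell * mu_second_derivative(mu)[k] / (2.0 * N * J[k])
    return f_star + eps * U1
\end{verbatim}
The only place where the near singularity enters is the omitted $j = k$ term, which is replaced by the local correction $-\ell\tilde{\mu}''(\varphi_{k})/(2NJ(\varphi_{k}))$; everything else is a plain trapezoid sum.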
To compute the $O(\varepsilon^{3})$ asymptotic approximation, in addition to $\mathscr{U}_{1}$, we need to compute \begin{equation}
\mathscr{U}_{2}(y^{\star}) = \frac{1}{2\pi} \int_{-\pi}^{\pi}
F_{2}(\varphi;\varphi^{\star}) \mathrm{d}\varphi \end{equation} where \begin{equation}
F_{2}(\varphi;\varphi^{\star}) = K_{2}(\varphi) J(\varphi) \left[
\tilde{\mu}(\varphi) - \tilde{\mu}(\varphi^{\star}) \right], \end{equation} and $K_{2}$ given in \eqref{eq:K2-2D}. By using the higher-order asymptotic expansion for $V^{\text{out}}$ (computed on the {\it
Mathematica} notebook available on the GitHub repository~\cite{CKK-2018Codes}), we apply the same method used for $\mathscr{U}_{1}(y^{\star})$ and arrive at \begin{equation}
\mathscr{U}_{2}(y^{\star}) \approx U_{2}^{N} = \frac{1}{N} \sum_{j
\neq k} F_{2}(\varphi_{j};\varphi_{k}) - \frac{\ell^{2}
\kappa^{\star} \tilde{\mu}''(\varphi_k)}{4 N J(\varphi_k)}, \end{equation} with $\kappa^{\star}$ denoting the signed curvature at $y^{\star}$. Using this numerical approximation, we compute the $O(\varepsilon^{3})$ asymptotic approximation for the close evaluation of the double-layer potential in two dimensions through evaluation of \begin{multline}
u(y^{\star} - \varepsilon \ell \nu^{\star}) \approx f(y^{\star})
+ \varepsilon U_{1}^{N}(y^{\star}) + \varepsilon^{2} \left[ U_{2}^{N}(y^{\star}) - \frac{\ell^{2}
y'(\varphi^{\star}) \cdot y''(\varphi^{\star})}{4
J^{4}(\varphi^{\star})} \tilde{\mu}'(\varphi^{\star}) +
\frac{\ell^{2} \tilde{\mu}''(\varphi^{\star})}{4
J^{2}(\varphi^{\star})} \right] + O(\varepsilon^{3}).
\label{eq:numerical-method2D-higher-order} \end{multline} Since the boundary is given, we are able to compute $y'(\varphi)$ and $y''(\varphi)$ explicitly. We use Fast Fourier transform methods to compute $\tilde{\mu}'(\varphi)$ and $\tilde{\mu}''(\varphi)$ with spectral accuracy.
\subsection{Three dimensions}
Suppose we have parameterized $B$ by $y = y(\theta,\varphi)$ with $\theta \in [0,\pi]$ and $\varphi \in [-\pi,\pi]$, with $y^{\star} = y(\theta^{\star},\varphi^{\star})$. For that case, we seek to compute \begin{equation}
\mathscr{U}_{1}(y^{\star}) = \frac{1}{4\pi} \int_{-\pi}^{\pi}
\int_{0}^{\pi} F_{1}(\theta,\varphi;\theta^{\star},\varphi^{\star})
\sin(\theta) \mathrm{d}\theta \mathrm{d}\varphi,
\label{eq:U1-3D} \end{equation} with \begin{equation}
F_{1}(\theta,\varphi;\theta^{\star},\varphi^{\star}) =
K_{1}(\theta,\varphi) J(\theta,\varphi) \left[
\tilde{\mu}(\theta,\varphi) -
\tilde{\mu}(\theta^{\star},\varphi^{\star}) \right],
\label{eq:F-3D} \end{equation} where $K_{1}$ is given in \eqref{eq:K1-3D}. Just as with the two-dimensional case, the function $K_{1}$ is singular at $(\theta,\varphi) = (\theta^{\star},\varphi^{\star})$, so any attempt to apply a quadrature rule to compute $\mathscr{U}_{1}$ will be limited in its accuracy even though $F_{1}$ vanishes identically at $(\theta^{\star}, \varphi^{\star})$ due to the factor of $\tilde{\mu}(\theta,\varphi) - \tilde{\mu}(\theta^{\star},\varphi^{\star})$.
To numerically evaluate \eqref{eq:U1-3D}, we apply a three-step method developed by the authors~\cite{carvalho2018_3D}. This method has been shown to be effective for computing the modified double-layer potential in three dimensions resulting from the subtraction method. We first rotate this integral to another spherical coordinate system in which $y^{\star}$ is aligned with the north pole. The details of this rotation are given in Appendix \ref{sec:rotation} and lead to $\theta = \theta(s,t)$ and $\varphi = \varphi(s,t)$ with $s \in [0,\pi]$ and $t \in [-\pi,\pi]$ where $\theta^{\star} = \theta(0,\cdot)$ and $\varphi^{\star} = \varphi(0,\cdot)$. We apply this rotation and find that \begin{equation}
\mathscr{U}_{1}(y^{\star}) = \frac{1}{4\pi} \int_{-\pi}^{\pi}
\int_{0}^{\pi} \tilde{F}(s,t) \sin(s) \mathrm{d}s \mathrm{d}t,
\label{eq:5.10} \end{equation} with $\tilde{F}(s,t) = F_{1}(\theta(s,t), \varphi(s,t); \theta^{\star}, \varphi^{\star})$. Now $\tilde{K}_{1}(\theta(s,t),\varphi(s,t))$ is singular at the north pole of this rotated coordinate system corresponding to $s = 0$. To improve the accuracy of a numerical evaluation of \eqref{eq:5.10}, we revisit the asymptotic expansion obtained for $V^{\text{out}}(y^{\star};\varepsilon,\delta)$ in \eqref{eq:Vouter-3D}. By rewriting that result for the present context, we find \begin{equation}
\frac{1}{2} \int_{0}^{\delta} \left[ \frac{1}{2\pi}
\int_{-\pi}^{\pi} \tilde{F}(s,t) \mathrm{d}t \right] \sin(s)
\mathrm{d}s = \int_{0}^{\delta} \left[ - \frac{\ell J(0,\cdot)}{8 |
y_{s}(0,\cdot) |^{3}} \varDelta_{S^{2}} \mu(y^{\star}) \right]
\mathrm{d}s + O(\varepsilon).
\label{eq:L1-limit} \end{equation} Suppose we compute \begin{equation}
\bar{F}(s) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \tilde{F}(s,t)
\mathrm{d}t. \end{equation} The result in \eqref{eq:L1-limit} suggests that $\bar{F}(s)$ has a smooth, finite limit as $s \to 0$. Although we could use this result to evaluate $\bar{F}(s)$ in a numerical quadrature scheme, it suffices to consider an open quadrature rule in $s$ that does not include the point $s = 0$, such as Gauss-Legendre quadrature. This observation leads to the following three-step method for computing $\mathscr{U}_{1}(y^{\star})$ numerically.
Let $t_{k} = -\pi + \pi (k - 1)/N$ for $k = 1, \cdots, 2N$, and let $z_{j}$ and $w_{j}$ for $j = 1, \cdots, N$ denote the $N$-point Gauss-Legendre quadrature abscissas and weights such that \begin{equation}
\int_{-1}^{1} f(x) \mathrm{d}x \approx \sum_{j = 1}^{N} f(z_{j})
w_{j}. \end{equation} We perform the mapping: $s_{j} = \pi ( z_{j} + 1 )/2$ for $j = 1, \cdots, N$, and make appropriate adjustments to the weights as will be shown below. For the first step, we rotate the spherical coordinate system so that $y^{\star}$ is aligned with its north pole as described in Appendix \ref{sec:rotation}. For the second step, we compute \begin{equation}
\bar{F}(s_{j}) \approx \bar{F}^{N}_{j} = \frac{1}{2N} \sum_{k =
1}^{2N} \tilde{F}(s_{j},t_{k}), \quad j = 1, \cdots, N. \end{equation} For the third step, we compute the numerical approximation \begin{equation}
\mathscr{U}_{1}(y^{\star}) \approx U_{1}^{N}(y^{\star}) =
  \frac{\pi}{4} \sum_{j = 1}^{N} \bar{F}_{j}^{N} \sin(s_{j}) w_{j}.
\label{eq:L1-3D-numerics} \end{equation} In \eqref{eq:L1-3D-numerics}, a factor of $\pi/2$ is introduced to scale the quadrature weights due to the mapping from $z_{j}$ to $s_{j}$, and a factor of $1/2$ remains from the factor of $1/4\pi$ in \eqref{eq:U1-3D}.
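The three steps above can be collected into the following minimal Python sketch; the rotated integrand \texttt{F\_tilde} is a hypothetical placeholder standing in for $\tilde{F}(s,t)$ (step 1, the rotation, is assumed to be done inside it), and the factor $\sin(s_{j})$ from \eqref{eq:5.10} is written out explicitly.
\begin{verbatim}
import numpy as np

def u1_3d(F_tilde, N):
    """Sketch of the three-step quadrature for U_1(y*): PTR in the
    azimuthal angle t and Gauss-Legendre in the polar angle s.
    F_tilde(s, t) must accept a scalar s and a vector t."""
    t = -np.pi + np.pi * np.arange(2 * N) / N       # 2N-point PTR nodes
    z, w = np.polynomial.legendre.leggauss(N)       # nodes/weights on [-1, 1]
    s = np.pi * (z + 1.0) / 2.0                     # map to (0, pi)
    # step 2: average over t at each s_j
    F_bar = np.array([np.mean(F_tilde(sj, t)) for sj in s])
    # step 3: Gauss-Legendre sum in s; pi/4 = (pi/2 mapping factor) * (1/2)
    return (np.pi / 4.0) * np.sum(F_bar * np.sin(s) * w)

# hypothetical smooth integrand used only to exercise the routine
print(u1_3d(lambda s, t: np.cos(s) * (1.0 + 0.1 * np.cos(t)), N=48))
\end{verbatim}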
Using the numerical approximation $U_{1}^{N}$, we compute the $O(\varepsilon^{2})$ asymptotic approximation for the close evaluation of the double-layer potential in three dimensions through evaluation of \begin{equation}
u(y^{\star} - \varepsilon \ell \nu^{\star}) \approx f(y^{\star})
+ \varepsilon U_{1}^{N}(y^{\star}) + O(\varepsilon^{2}).
\label{eq:numerical-method3D} \end{equation}
\section{Numerical results} \label{sec:results}
We present results that show the accuracy and efficiency of the asymptotic approximation and the corresponding numerical method
for the close evaluation of the double-layer potential. For all of the examples shown, we prescribe Dirichlet data corresponding to a particular harmonic function. With that Dirichlet data, we solve the boundary integral equation \eqref{eq:bie} numerically to obtain the density, $\mu$. We use that density to compute the double-layer potential using different methods, for comparison. The results below show the error made in computing the harmonic function at close evaluation points. The Matlab codes used to compute all of the following examples are available in a GitHub repository~\cite{CKK-2018Codes}.
\subsection{Two dimensions}
For the two-dimensional examples, we use the harmonic function, \begin{equation}
u(x) = -\frac{1}{2\pi} \log | x - x_{0} |, \end{equation} with $x_{0} \in \mathbb{R}^2 \setminus \bar{D}$
and prescribe Dirichlet data by evaluating this function on the boundary. We solve the boundary integral equation \eqref{eq:bie} using the Nystr\"om method with the $N$-point Periodic Trapezoid Rule (PTR) resulting in the numerical approximation for the density, $\tilde{\mu}_{j} \approx \tilde{\mu}(\varphi_{j})$ with $\varphi_{j} = -\pi + 2 (j-1) \pi/N$ for $j = 1, \cdots, N$.
We compute the close evaluation of the double-layer potential at points, $x = y^{\star} - \varepsilon \nu^{\star}$, using the following four methods: \begin{enumerate}
\item {\bf PTR method} -- Compute the double-layer potential,
\begin{equation*}
u(y^{\star} - \varepsilon \nu^{\star}) = \frac{1}{2\pi}
\int_{-\pi}^{\pi} \frac{\nu_{y} \cdot ( y^{\star} - \varepsilon
\nu^{\star} - y)}{| y^{\star} - \varepsilon \nu^{\star} - y|^{2}}
\mu(y) \mathrm{d}\sigma_{y},
\end{equation*}
using the same $N$-point PTR used to solve \eqref{eq:bie}.
\item {\bf Subtraction method} -- Compute the modified
double-layer potential,
\begin{equation*}
u(y^{\star} - \varepsilon \nu^{\star}) = -\mu(y^{\star}) + \frac{1}{2\pi}
\int_{-\pi}^{\pi} \frac{\nu_{y} \cdot ( y^{\star} - \varepsilon
\nu^{\star} - y)}{| y^{\star} - \varepsilon \nu^{\star} - y|^{2}}
\left[ \mu(y) - \mu(y^{\star}) \right] \mathrm{d}\sigma_{y},
\end{equation*}
using the same $N$-point PTR used to solve \eqref{eq:bie}, as in the first method.
\item {\bf $O(\varepsilon^{2})$ asymptotic approximation} -- Compute the
  $O(\varepsilon^{2})$ asymptotic approximation given by
  \eqref{eq:asymptotic2D} using the new numerical method given in
  \eqref{eq:numerical-method2D}, with the same $N$-point PTR used to
  solve \eqref{eq:bie}.
\item {\bf $O(\varepsilon^{3})$ asymptotic approximation} -- Compute the
  $O(\varepsilon^{3})$ asymptotic approximation given by
  \eqref{eq:asymptotic2D-higherorder} using the new numerical method given
  in \eqref{eq:numerical-method2D-higher-order}, with the same
  $N$-point PTR used to solve \eqref{eq:bie}.
\end{enumerate}
We consider two different domains:
\begin{itemize}
\item {\bf A kite domain} whose boundary is given by \begin{equation}\label{eq:kite}
y(t) = ( \cos t + 0.65 \cos 2t - 0.65, 1.5 \sin t), \quad -\pi \le t
\le \pi. \end{equation}
\item {\bf A star domain} whose boundary is given by \begin{equation}\label{eq:star}
y(t) = r(t) ( \cos t, \sin t ), \quad r(t) = 1 + 0.3 \cos 5 t, \quad
-\pi \le t \le \pi. \end{equation}
\end{itemize} For both examples we pick $x_0 = (1.85, 1.65)$ which lies outside the domains.
We consider $N$ fixed, here $N = 128$, and study the dependence of the error on $\varepsilon$ as $\varepsilon \to 0$.
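For concreteness, a minimal Python sketch of the boundary parameterizations \eqref{eq:kite} and \eqref{eq:star} and of the close evaluation points $x = y^{\star} - \varepsilon \nu^{\star}$ follows; it assumes the counter-clockwise orientation of both parameterizations, so that the outward unit normal is $\nu = (y_{2}', -y_{1}')/|y'|$, and it is not part of the codes in~\cite{CKK-2018Codes}.
\begin{verbatim}
import numpy as np

def kite(t):
    # boundary and tangent of the kite domain, eq. (eq:kite)
    y = np.array([np.cos(t) + 0.65 * np.cos(2 * t) - 0.65, 1.5 * np.sin(t)])
    dy = np.array([-np.sin(t) - 1.3 * np.sin(2 * t), 1.5 * np.cos(t)])
    return y, dy

def star(t):
    # boundary and tangent of the star domain, eq. (eq:star)
    r, dr = 1.0 + 0.3 * np.cos(5 * t), -1.5 * np.sin(5 * t)
    y = np.array([r * np.cos(t), r * np.sin(t)])
    dy = np.array([dr * np.cos(t) - r * np.sin(t), dr * np.sin(t) + r * np.cos(t)])
    return y, dy

def close_point(boundary, t, eps):
    """Return y(t) - eps * nu(t), with nu the outward unit normal
    (assuming a counter-clockwise parameterization)."""
    y, dy = boundary(t)
    nu = np.array([dy[1], -dy[0]]) / np.hypot(dy[0], dy[1])
    return y - eps * nu

x = close_point(kite, 0.25 * np.pi, 1.0e-3)   # example evaluation point
\end{verbatim}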
In Fig.~\ref{kite-results} we show results for the kite domain. The error, using a $\log$ scale, is presented for each of the four methods described above. The results show that the PTR method exhibits an $O(1)$ error as $\varepsilon \to 0$. The subtraction method and the asymptotic approximations all show substantially smaller errors.
\begin{figure}
\caption{Plots of $\log_{10}$ of the error for the evaluation of the double-layer potential in the kite domain defined by \eqref{eq:kite} using four methods:
the PTR method (upper left), the subtraction method (upper right), the $O(\varepsilon^{2})$ asymptotic
approximation method (lower left), and
the $O(\varepsilon^{3})$ asymptotic approximation method (lower right). In each of these
plots, boundary points $y_{A} = (-1.3571, -1.0607)$ and
$y_{B} = (0.0571, 1.0607)$ are plotted as red $\times$'s.}
\caption{Log-log plots of the errors made in computing the double-layer potential by the four methods shown
in Fig.~\ref{kite-results} at $y_{A} - \varepsilon \nu_{A}$ (left)
and at $y_{B} - \varepsilon \nu_{B}$ (right) for
$10^{-6} \le \varepsilon \le 10^{-1}$. }
\label{kite-results}
\label{kite-errors}
\end{figure}
To compare the four methods more quantitatively, in Fig.~\ref{kite-errors} we plot the errors made by the four methods at $y_{A} - \varepsilon \nu_{A}$ (left) and $y_{B} - \varepsilon \nu_{B}$ (right) with $10^{-6} \le \varepsilon \le 10^{-1}$ where $\nu_{A}$ and $\nu_{B}$ are the unit outward normals at $y_{A}=(-1.3571, -1.0607)$ and $y_{B}=(0.0571, 1.0607)$, respectively. The points $y_A$ and $y_B$ are shown in each plot of Fig.~\ref{kite-results}. From the results in Fig.~\ref{kite-errors} we observe that the error when using the PTR method increases as $\varepsilon \to 0^{+}$, while the error in the other three methods decreases. The errors made by the asymptotic approximations are monotonically decreasing as $\varepsilon \to 0^{+}$. However, the error made by the subtraction method presents a different behavior: it reaches a maximum at $\varepsilon \approx 10^{-2}$ after which it decreases as $\varepsilon$ increases. For larger values of $\varepsilon$, the double-layer potential is no longer nearly singular, so the $N$-point PTR (and therefore methods 1 and 2) become more accurate. The error is at a maximum for the subtraction method when $\varepsilon = O(1/N)$, which is why we observe the maximum error occurring at $\varepsilon \approx 10^{-2}$. The results in Fig.~\ref{kite-errors} show a clear difference in the rate at which the errors vanish as $\varepsilon \to 0^{+}$ between the subtraction method and the asymptotic approximation methods. The $O(\varepsilon^{3})$ asymptotic approximation decays the fastest, followed by the $O(\varepsilon^{2})$ asymptotic approximation, and then the subtraction method. For $\varepsilon < 10^{-4}$, the error incurred by the $O(\varepsilon^{3})$ asymptotic approximation levels out at machine precision. We estimate the rate at which the subtraction method and the asymptotic approximation methods decay with respect to $\varepsilon$ from the slope of the best fit line through the $\log-\log$ plot of the error versus $\varepsilon$ in Fig.~\ref{kite-order}. We compute the slope for each evaluation point $y(t_{j})- \varepsilon \tilde{\nu}(t_{j})$ where $t_{j} = -\pi + 2 (j-1) \pi/N$ for $j = 1, \cdots, N$, and we vary $\varepsilon$. For the subtraction method, we consider $\varepsilon$ values such that $10^{-6} \le \varepsilon \le 10^{-2}$, and for the $O(\varepsilon^{2})$ and $O(\varepsilon^{3})$ asymptotic approximation methods, we consider the same range but only include values where the error is greater than $10^{-15}$. The results shown in Fig.~\ref{kite-order} indicate that the subtraction method decays linearly with $\varepsilon$, and the rates of the asymptotic approximations are consistent with the theory presented in Section \ref{sec:asymptotics2D}.
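The estimated order of accuracy reported here is simply the slope of such a least-squares fit in logarithmic coordinates; a minimal Python sketch of this fit, applied to hypothetical placeholder error data, is the following.
\begin{verbatim}
import numpy as np

def estimated_order(eps, err, tol=1.0e-15):
    """Slope of the best-fit line through (log10 eps, log10 err),
    keeping only points whose error exceeds tol."""
    keep = err > tol
    slope, _ = np.polyfit(np.log10(eps[keep]), np.log10(err[keep]), 1)
    return slope

eps = np.logspace(-6, -2, 20)
err = 3.0 * eps**2                 # placeholder data decaying quadratically
print(estimated_order(eps, err))   # approximately 2
\end{verbatim}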
\begin{figure}
\caption{Estimated order of accuracy in computing the double-layer
potential in the kite domain when using the subtraction method
(blue $\circ$), the $O(\varepsilon^{2})$ asymptotic approximation method
(red $\times$), and the $O(\varepsilon^{3})$ asymptotic approximation method
(yellow $+$) for varying values of $t$.}
\label{kite-order}
\end{figure}
For the second example of computing the double-layer potential in the star domain, Figures~\ref{star-results}, \ref{star-errors}, and \ref{star-order}
are analogous to Figures~\ref{kite-results}, \ref{kite-errors}, and \ref{kite-order} for the kite domain. The characteristics of the errors for this second domain are exactly the same as described for the kite domain.
\begin{figure}
\caption{Plots of $\log_{10}$ of the error for the evaluation of the
double-layer potential in the star domain defined by
\eqref{eq:star} using four methods: the PTR method (upper left),
the subtraction method (upper right), the $O(\varepsilon^{2})$
asymptotic approximation method (lower left), and the
$O(\varepsilon^{3})$ asymptotic approximation method (lower
right). In each of these plots, boundary points
$y_{A} = (-1.3571, -1.0607)$ and $y_{B} = (0.0571, 1.0607)$ are
plotted as red $\times$'s.}
\caption{Log-log plots of the errors made in computing the double-layer potential by the four methods shown
in Fig.~\ref{star-results} at $y_{A} - \varepsilon \nu_{A}$ (left)
and at $y_{B} - \varepsilon \nu_{B}$ (right) for
$10^{-6} \le \varepsilon \le 10^{-1}$. }
\label{star-results}
\label{star-errors}
\end{figure}
\begin{figure}
\caption{Estimated order of accuracy in computing the double-layer potential in the star domain when using the
subtraction method (blue $\circ$), the $O(\varepsilon^{2})$ asymptotic
approximation method (red $\times$), and the $O(\varepsilon^{3})$ asymptotic
approximation method (yellow $*$) for varying values of $t$.}
\label{star-order}
\end{figure}
\subsubsection*{Summary of the results}
For two-dimensional problems, the subtraction method yields errors that decay linearly with the distance to the boundary. The $O(\varepsilon^{2})$ and $O(\varepsilon^{3})$ asymptotic approximation methods are much more accurate for close evaluation points. Moreover, only relatively modest resolution is required for these asymptotic approximations to be effective. However, the errors of these asymptotic approximations increase monotonically with the distance to the boundary, so they are not accurate for points farther away from the boundary. The error estimates from the asymptotic theory give guidance on where to apply these asymptotic approximations effectively. For all of these reasons, we find that the asymptotic approximations and corresponding numerical methods are quite useful for two-dimensional problems.
\subsection{Three dimensions}
Let $(x_{1},x_{2},x_{3})$ denote an ordered triple in a Cartesian coordinate system. To study the computation of the double-layer potential in three dimensions, we consider the harmonic function, \begin{equation}
u(x_{1},x_{2},x_{3}) = \frac{1}{\sqrt{ (x_{1} - 5)^{2} + (x_{2} - 4 )^{2} +
(x_{3} - 3)^{2}}} \end{equation} in the domain whose boundary is given by \begin{equation}\label{eq:mushroom}
y(\theta,\varphi) = R(\theta) ( \sin\theta \cos\varphi, 2 \sin\theta
\sin\varphi, \cos\theta), \quad 0 \le \theta \le \pi, \quad -\pi \le
\varphi \le \pi, \end{equation} with \begin{equation}\label{eq:mushroom2}
R(\theta) = 2 - \frac{1}{1 + 100 (1 - \cos\theta)^{2}}. \end{equation} This boundary surface is shown in Fig.~\ref{mushroom} (left) along with its intersection with the vertical $x_{1}x_{3}$-plane (center) and the horizontal $x_{1}x_{2}$-plane (right).
We solve the boundary integral equation \eqref{eq:bie} using the Galerkin method~\cite{atkinson1982laplace, atkinson1985algorithm,
atkinson1990survey, atkinson1997numerical}. The Galerkin method approximates the density according to \begin{equation}
\tilde{\mu}(\theta,\varphi) \approx \tilde{\mu}^{N}(\theta,\varphi)
= \sum_{n = 0}^{N-1} \sum_{m = -n}^{n} \hat{\mu}_{nm}
Y_{nm}(\theta,\varphi),
\label{eq:mu-galerkin} \end{equation} with $\{ Y_{nm} \}$ denoting the orthonormal set of spherical harmonics. For these results, we have set $N = 48$.
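Given a set of Galerkin coefficients $\hat{\mu}_{nm}$, the truncated expansion \eqref{eq:mu-galerkin} can be evaluated as in the short Python sketch below; the coefficients shown are hypothetical placeholders, and note that \texttt{scipy.special.sph\_harm} takes the azimuthal angle before the polar angle.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm

def mu_galerkin(coeffs, theta, phi):
    """Evaluate sum_{n,m} mu_nm Y_nm(theta, phi); theta is the polar
    angle and phi the azimuthal angle, as in the text.
    coeffs[(n, m)] holds the (hypothetical) Galerkin coefficients."""
    total = 0.0 + 0.0j
    for (n, m), c in coeffs.items():
        # scipy orders the arguments as sph_harm(m, n, azimuth, polar)
        total += c * sph_harm(m, n, phi, theta)
    return total

coeffs = {(0, 0): 1.0, (1, 0): 0.2, (1, 1): 0.05 + 0.01j, (1, -1): -0.05 + 0.01j}
print(mu_galerkin(coeffs, theta=0.3, phi=1.1))
\end{verbatim}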
\begin{figure}
\caption{The boundary surface defined by
\eqref{eq:mushroom}-\eqref{eq:mushroom2} that is used to exemplify
the evaluation of the double layer potential in three dimensions
(left),
and the intersections of this boundary with the $x_{1}x_{3}$-plane
(center), and $x_{1}x_{2}$-plane (right).}
\label{mushroom}
\end{figure} We have computed the close evaluation of the double-layer potential at points $x = y^{\star} - \varepsilon \nu^{\star}$, using the following two different methods for comparison. \begin{enumerate}
\item {\bf Numerical approximation} -- Compute the modified
double-layer potential,
\begin{equation*}
u(y^{\star} - \varepsilon \nu^{\star}) = - \mu(y^{\star}) +
\frac{1}{4\pi} \int_{B} \frac{\nu_{y} \cdot
( y^{\star} - \varepsilon \nu^{\star} - y )}{| y^{\star} - \varepsilon
\nu^{\star} - y |^{3}} \left[ \mu(y) - \mu(y^{\star}) \right]
\mathrm{d}\sigma_{y},
\end{equation*}
using the three-step numerical method given by Carvalho {\it et
al.}~\cite{carvalho2018_3D}. For the first step, the modified
double-layer potential is written as a spherical integral that has
been rotated so that $y^{\star}$ is aligned with the north pole. For
the second step, the $2N$-point PTR is used to compute the integral
in the azimuthal angle. For the third step, the $N$-point
Gauss-Legendre quadrature rule mapped to $[0,\pi]$ is used to
compute the integral in the polar angle.
\item {\bf $O(\varepsilon^{2})$ Asymptotic approximation} -- Compute the
asymptotic approximation given by \eqref{eq:asymptotic3D} using the
method given in \eqref{eq:numerical-method3D} (with $N = 48$).
\end{enumerate}
The error of the numerical method has been shown to decay quadratically with $\varepsilon$ when
$\varepsilon \ll 1/N$~\cite{carvalho2018_3D}. This quadratic error decay occurs because, in the rotated coordinate system, the azimuthal integration acts as an averaging operation yielding a smooth function of the polar angle that is computed to high order using Gaussian quadrature. However, this asymptotic error estimate is valid only when the numerical approximation of the density is sufficiently resolved. If $N$ in \eqref{eq:mu-galerkin} is not sufficiently large that $|\hat{\mu}_{nm}|$ for $n > N$ is negligibly small, then the truncation error associated with \eqref{eq:mu-galerkin} may interrupt this quadratic error decay. For the domain here, with $N = 48$, we find that the estimated truncation error for \eqref{eq:mu-galerkin} is approximately $10^{-8}$. While this error is relatively small, it is not small enough to observe the error's quadratic rate of decay. We would have to consider a much larger value of $N$ to observe that decay rate. However, computing the numerical solution of the boundary integral equation \eqref{eq:bie} with $N > 48$ becomes prohibitively expensive. Hence, we evaluate below how the subtraction method and the $O(\varepsilon^{2})$ asymptotic approximation perform in this limited-resolution setting.
Error results for the computation of the double-layer potential in this domain for each of the two methods described above appear in Fig.~\ref{mushroom-results}. The top row shows the error on the slice of the domain through the vertical $x_{1}x_{3}$-plane for the numerical method (left) and the $O(\varepsilon^{2})$ asymptotic approximation method (right). The point $y_{A} = ( 1.7830, 0, 0.8390 )$ is plotted as a red $\times$ symbol in both plots. The bottom row shows the errors of the same methods (left for the numerical method, right for the $O(\varepsilon^2)$ asymptotic approximation) on the slice of the domain through the horizontal $x_{1}x_{2}$-plane. The point $y_{B} = ( 1.7439, 1.19175, 0 )$ is plotted as a red $\times$ symbol in both plots.
\begin{figure}
\caption{Plots of $\log_{10}$ of the error made in computing the
double-layer potential by the numerical approximation (left
column) and the $O(\varepsilon^{2})$ asymptotic approximation (right
column) in the domain whose boundary is shown in
Fig.~\ref{mushroom}. The top row of plots show the error on the
$x_{1}x_{3}$-plane, and the bottom row of plots show the error on
the $x_{1}x_{2}$-plane. In the top row of plots, the point
$y_{A} = ( 1.7830, 0, 0.8390 )$ is plotted as a red $\times$, and
in the bottom row of plots, the point
$y_{B} = ( 1.7439, 1.19175, 0 )$ is plotted as a red $\times$.}
\caption{Log-log plots of the errors made by the two methods shown
in Fig.~\ref{mushroom-results} at $y_{A} - \varepsilon \nu_{A}$
(left) and at $y_{B} - \varepsilon \nu_{B}$ (right) for
$10^{-6} \le \varepsilon \le 0.5$.}
\label{mushroom-results}
\label{mushroom-errors}
\end{figure}
In Fig.~\ref{mushroom-errors}, we show the errors computed at $y_{A} - \varepsilon \nu_{A}$ (left) and $y_{B} - \varepsilon \nu_{B}$ (right) for varying $\varepsilon$, where $\nu_{A}$ and $\nu_{B}$ are the unit outward normals at $y_{A}$ and $y_{B}$, respectively. In contrast to the two-dimensional results, we find that the error for the numerical method is approximately $10^{-8}$ for all values of $\varepsilon$. This error is due to the truncation error made by the Galerkin method. Because the truncation error dominates at this resolution, we are not able to see its quadratic decay as $\varepsilon \to 0^{+}$. If a higher-resolution computation were used to solve the boundary integral equation, the error of the numerical method would exhibit a behavior similar to that of the subtraction method for the two-dimensional examples. In particular, the error would have a maximum at $\varepsilon = O(1/N)$ about which it decays.
We observe that the error of the $O(\varepsilon^{2})$ asymptotic approximation decays monotonically with $\varepsilon$ even when $N = 48$.
\begin{figure}
\caption{Estimated order of accuracy for the numerical
approximation (blue $\circ$), and the $O(\varepsilon^2)$ asymptotic
approximation (red $\times$) on the $x_{1}x_{3}$-plane (left plot) and on
the $x_{1}x_{2}$-plane (right plot). Results on the
$x_{1}x_{3}$-plane are given in terms of the extended polar
angle, $s_{0} \in [0,2\pi]$, which parameterizes the circle on
the unit sphere lying on the $x_{1}x_{3}$-plane that starts and
ends at the north pole. Results on the $x_{1}x_{2}$-plane are
given in terms of the azimuthal angle, $t_{0} \in [0,2\pi]$.}
\label{mushroom-order}
\end{figure}
We estimate the order of accuracy in Fig.~\ref{mushroom-order}. The results for the estimated order of accuracy over
the points intersecting the vertical $x_{1}x_{3}$-plane are shown in
the left plot of Fig.~\ref{mushroom-order}. For those results we
determine the estimated order of accuracy by determining the best
fit line through the $\log-\log$ plot of the error versus $\varepsilon$
for several values of the extended polar angle,
$s_{0} \in [0,2\pi]$. This extended polar angle parameterizes the
circle on the unit sphere lying on the $x_{1}x_{3}$-plane that
starts and ends at the north pole. The results for the estimated
order of accuracy over the points intersecting the horizontal
$x_{1}x_{2}$-plane are shown in the right plot of
Fig.~\ref{mushroom-order}. For those results we determine the
estimated order of accuracy by determining the best fit line through
the $\log-\log$ plot of the error versus $\varepsilon$ for several
values of the azimuthal angle, $t_{0} \in [0,2\pi]$. Because of the resolution limitation in the Galerkin method, we are not able to see that the order of accuracy for the numerical method is two. In fact, the error is nearly uniform with respect to $\varepsilon$ because it is the truncation error of \eqref{eq:mu-galerkin} that dominates. Despite the resolution limitation in the Galerkin method, we find that the $O(\varepsilon^{2})$ asymptotic approximation has an order of accuracy of nearly two.
\subsubsection*{Summary of the results} For three-dimensional problems, the subtraction method is more effective than in two dimensions when it is computed in an appropriately rotated coordinate system. In this rotated coordinate system, integration with respect to the azimuthal angle is a natural averaging operation that regularizes the integral, thereby allowing the use of a high-order quadrature rule for integration with respect to the polar angle. Provided that the density is sufficiently resolved, the subtraction method has been shown to decay quadratically with the distance away from the boundary~\cite{carvalho2018_3D}. The $O(\varepsilon^{2})$ asymptotic approximation also decays quadratically. However, it is not as sensitive to the accuracy of the density. For this reason, we find that the asymptotic approximation is still useful for three-dimensional problems, especially when the density is not highly resolved.
\section{Extension to forward-peaked scattering in radiative transfer} \label{sec:RTE}
The asymptotic approximations developed here for the close evaluation of the double-layer potential can be extended to other problems. In particular, there is an interesting connection between the close evaluation problem in potential theory and forward-peaked scattering in radiative transfer. We first establish this connection and then apply the asymptotic analysis used above to this problem.
Radiative transfer describes the multiple scattering of light~\cite{chandrasekhar1960, ishimaru}. This theory has several applications for light propagation and scattering in geophysical media~\cite{marshak20053d,thomas2002radiative}, biological tissues~\cite{wang2012biomedical}, and computer graphics~\cite{jensen2001realistic}, among others. Let $\psi: S^{2} \times \mathbb{R}^{3} \times (0,T) \to \mathbb{R}_{\ge 0}$ denote the specific intensity. The specific intensity gives the flow of power in direction $\Omega \in S^{2}$, at position $r \in \mathbb{R}^{3}$, at time $t \in [0,T]$. The radiative transfer equation, \begin{equation}
c^{-1} \psi_{t} + \Omega \cdot \nabla \psi + \mu_{a} \psi - \mu_{s}
L \psi = Q,
\label{eq:RTE} \end{equation} governs $\psi$ in a medium that absorbs, scatters, and emits light. Here, $c$ denotes the speed of light in the background medium, $\mu_{a} \ge 0$, $\mu_{s} \ge 0$ denote absorption and scattering coefficients, respectively, and $Q$ denotes a source. The scattering operator, $L$, is defined by \begin{equation}
L \psi(\Omega,r,t) = \displaystyle \frac{1}{4 \pi} \int_{S^{2}} p(\Omega \cdot \Omega') \left[
\psi(\Omega',r,t) - \psi(\Omega,r,t) \right]
\mathrm{d}\sigma_{\Omega'},
\label{eq:scattering-operator} \end{equation} where $p$ denotes the scattering phase function which gives the fraction of light incident in direction $\Omega'$ that is scattered in direction $\Omega$.
In forward-peaked scattering media, $p$ is sharply peaked about $\Omega = \Omega'$ so that most of the scattering occurs in a small angular cone about the incident direction. This problem is important for several applications, especially light propagation in biological tissues~\cite{Kim:03light}. Mathematically, forward-peaked scattering corresponds to the case in which \eqref{eq:scattering-operator} is a nearly singular integral.
There have been several studies on the asymptotic behavior of \eqref{eq:RTE} with forward-peaked scattering leading to the Fokker-Planck approximation~\cite{pomraning1992fokker} and its generalizations~\cite{prinja2001generalized,
leakeas2001generalized}. Solving the radiative transfer equation~\eqref{eq:RTE} with these approximate scattering operators has led to useful physical insight~\cite{Kim:04backscattering, Kim:04beam,
gonzalez2009comparison}. However, for most scattering models used for applications, the Fokker-Planck approximation is inaccurate. The error made by the Fokker-Planck approximation is due to neglecting the tail away from the sharply peaked behavior of the kernel $p$, corresponding to large-angle scattering. That tail typically decays too slowly to make a local approximation appropriate. In fact, it has been shown that for several specific scattering kernels, the leading order behavior is nonlocal and given by a pseudo-differential operator~\cite{pomraning1996higher,
larsen1999linear}.
We study forward-peaked scattering using the same asymptotic analysis used to study the close evaluation of layer potentials discussed above. For this study, we consider the specific choice of the Henyey-Greenstein scattering phase function~\cite{henyey1941diffuse}, \begin{equation}
p_{\text{HG}}(\Omega \cdot \Omega') =\frac{1 -
g^{2}}{( 1 + g^{2} - 2 g \Omega \cdot \Omega')^{3/2}}, \quad -1
\le g \le 1,
\label{eq:HG} \end{equation} as the kernel. The Henyey-Greenstein scattering phase function is extensively used in applications since it provides a simple model with enough sophistication to study multiple scattering of light. The anisotropy factor, $g$, is the mean cosine of the scattering angle. It sets the amount of scattering that is forward peaked. When $g = 0$, scattering is isotropic, and when $g = \pm 1$, scattering is restricted to only the forward/backward direction. Forward-peaked scattering corresponds to the asymptotic limit as $g \to 1$.
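As a quick numerical illustration, the following Python sketch evaluates \eqref{eq:HG} and checks by quadrature that $\frac{1}{2}\int_{0}^{\pi} p_{\text{HG}}(\cos\theta)\sin\theta\,\mathrm{d}\theta = 1$ and that the mean cosine of the scattering angle equals $g$; the value of $g$ used here is an arbitrary placeholder.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def p_hg(cos_theta, g):
    """Henyey-Greenstein scattering phase function, eq. (eq:HG)."""
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * cos_theta) ** 1.5

g = 0.9   # forward-peaked scattering; placeholder value of the anisotropy factor
norm = 0.5 * quad(lambda th: p_hg(np.cos(th), g) * np.sin(th), 0.0, np.pi)[0]
mean_cos = 0.5 * quad(lambda th: np.cos(th) * p_hg(np.cos(th), g) * np.sin(th),
                      0.0, np.pi)[0]
print(norm, mean_cos)   # approximately 1 and g, respectively
\end{verbatim}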
Henyey-Greenstein scattering is directly related to Poisson's formula for the unit sphere, \begin{equation}
u(x) = \frac{1}{4\pi} \int_{S^{2}}
\frac{1 - |x|^{2}}{| x - y |^{3}}
f(y) \mathrm{d}\sigma_{y}, \quad |x| < 1,
\label{eq:PoissonSphere} \end{equation} which gives the solution of the boundary value problem for Laplace's equation: \begin{subequations}
\begin{gather}
\varDelta u = 0 \quad \text{in $|x| < 1$},\\
u = f \quad \text{on $|x| = 1$}.
\end{gather}
\label{eq:LaplaceSphere} \end{subequations} For the close evaluation point $x = (1 - \varepsilon) y^{\star}$ with
$|y^{\star}| = 1$ and $0 < \varepsilon \ll 1$, we find that \begin{equation}
u((1-\varepsilon) y^{\star}) = \frac{1}{4\pi} \int_{S^{2}}
\frac{1 - (1-\varepsilon)^{2}}{\left[ 1 + (1-\varepsilon)^{2} - 2 (1 -
\varepsilon) y^{\star} \cdot y \right]^{3/2}}
f(y) \mathrm{d}\sigma_{y}.
\label{eq:Poisson-CloseEval} \end{equation} Notice that the kernel in \eqref{eq:Poisson-CloseEval} is the same as \eqref{eq:HG} with $g = 1 - \varepsilon$. Consequently, the asymptotic limit of forward-peaked Henyey-Greenstein scattering is closely related to the close evaluation of Poisson's formula for the unit sphere.
To evaluate $L$ given in \eqref{eq:scattering-operator} using \eqref{eq:HG} as the scattering phase function, we use a spherical coordinate system with $\Omega$ defining its north pole. In this coordinate system, $\theta$ denotes the polar angle and $\varphi$ denotes the azimuthal angle. For that case, we have \begin{equation}
L \psi(\Omega) = \frac{1}{2}\int_{0}^{\pi}
p_{\text{HG}}(\cos\theta;\varepsilon) \tilde{\psi}(\theta) \sin \theta
\mathrm{d}\theta \end{equation} with $ \Omega \cdot \Omega' = \cos\theta$, $g = 1 - \varepsilon$, \begin{equation}
p_{\text{HG}}(\cos\theta;\varepsilon) =\frac{1 - (1 -
\varepsilon)^{2}}{[ 1 + (1 - \varepsilon)^{2} - 2 (1 - \varepsilon)
\cos\theta ]^{3/2}},
\label{eq:fHG} \end{equation} and \begin{equation}
\tilde{\psi}(\theta) = \frac{1}{2\pi} \int_{-\pi}^{\pi}
\left[ \psi(\theta,\varphi) - \psi(0,\cdot) \right]
\mathrm{d}\varphi. \end{equation}
Due to the regularity of the solution at the north pole, we have $\psi(\theta,\varphi + \pi) = \psi(-\theta,\varphi)$. As a result, we write \begin{equation}
\tilde{\psi}(\theta) = \frac{1}{2\pi} \int_{0}^{\pi}
\left[ \psi(\theta,\varphi) + \psi(-\theta,\varphi) - 2
\psi(0,\cdot) \right] \mathrm{d}\varphi.
\label{eq:u-tilde} \end{equation} Note here $\psi(0,\cdot) = \psi(\Omega)$. To study forward-peaked scattering
we study the asymptotic limit corresponding to $\varepsilon \to 0^{+}$.
We let \begin{equation}
L \psi(\Omega) =
L^{\text{in}} \psi(\Omega) +
L^{\text{out}} \psi(\Omega), \end{equation} where \begin{equation}
L^{\text{in}} \psi(\Omega) =\frac{1}{2}\int_{0}^{\delta}
p_{\text{HG}}(\cos\theta;\varepsilon) \tilde{\psi}(\theta) \sin\theta
\mathrm{d}\theta,
\label{eq:Lpeak} \end{equation} is the inner expansion, and \begin{equation}
L^{\text{out}} \psi(\Omega) = \frac{1}{2} \int_{\delta}^{\pi}
p_{\text{HG}}(\cos\theta;\varepsilon) \tilde{\psi}(\theta) \sin\theta
\mathrm{d}\theta,
\label{eq:Ltail} \end{equation} is the outer expansion. Just as we have done for the double-layer potential, we consider the asymptotic limit in which $0 < \varepsilon \ll \delta \ll 1$. Details of the calculations below can be found in the \textit{Mathematica} notebook available on GitHub~\cite{CKK-2018Codes}.
To find the leading-order behavior for $L^{\text{in}}$, we substitute \eqref{eq:fHG} into \eqref{eq:Lpeak} and make the substitution $\theta = \varepsilon \Theta$, \begin{align}
\begin{split}
L^{\text{in}} \psi(\Omega) &= \int_{0}^{\delta/\varepsilon}
\frac{1}{2} \frac{1 - (1 - \varepsilon)^{2}} {[1 + (1 - \varepsilon)^{2}
- 2 (1 - \varepsilon) \cos\varepsilon \Theta ]^{3/2}}
\tilde{\psi}(\varepsilon \Theta) \sin \varepsilon\Theta
\varepsilon \mathrm{d}\Theta\\
&= \int_{0}^{\delta/\varepsilon} \left[ \frac{\Theta}{(1 +
\Theta^{2})^{3/2}} - \frac{\varepsilon}{2} \frac{\Theta - 2
\Theta^{3}}{(1 + \Theta^{2})^{5/2}} + O(\varepsilon^{2}) \right]
\tilde{\psi}(\varepsilon \Theta) \mathrm{d}\Theta.
\end{split}
\label{eq:5.14} \end{align} Next, we compute the expansion \begin{align}
\begin{split}
\tilde{\psi}(\varepsilon \Theta) &= \frac{1}{2\pi} \int_{0}^{\pi}
\left[ \psi(\varepsilon \Theta,\varphi) + \psi(-\varepsilon
\Theta,\varphi) - 2 \psi(0,\cdot) \right] \mathrm{d}\varphi\\
&= \frac{\varepsilon^{2} \Theta^{2}}{2\pi} \int_{0}^{\pi}
\psi_{\theta\theta}(0,\cdot) \mathrm{d}\varphi + O(\varepsilon^{4}).
\end{split} \end{align} Substituting \eqref{eq:spherical-laplacian} (see Appendix \ref{sec:spherical-laplacian}) into this result, we find that \begin{equation}
\tilde{\psi}(\varepsilon \Theta) = \frac{\varepsilon^{2}
\Theta^{2}}{2 \pi} \int_{0}^{\pi}
\psi_{\theta\theta}(0,\cdot) \mathrm{d}\varphi + O(\varepsilon^{4}) =
\frac{\varepsilon^{2} \Theta^{2}}{4}
\varDelta_{S^{2}} \psi(\Omega) + O(\varepsilon^{4}).
\label{eq:sphericallaplacian} \end{equation} Substituting \eqref{eq:sphericallaplacian} into \eqref{eq:5.14}, we determine by integrating and then expanding about $\varepsilon = 0$ that \begin{align}
\begin{split}
L^{\text{in}} \psi(\Omega) &= \varDelta_{S^{2}} \psi(\Omega)
\int_{0}^{\delta/\varepsilon} \left[ \frac{\varepsilon^{2} \Theta^{3}}{4
(1 + \Theta^{2})^{3/2}} - \frac{\varepsilon^{3}}{8}
\frac{\Theta^{3} - 2 \Theta^{5}}{(1 + \Theta^{2})^{5/2}} +
O(\varepsilon^{4}) \right]
\mathrm{d}\Theta,\\
&= \left[ \frac{\varepsilon \delta}{4} - \frac{\varepsilon^{2}}{2} +
\frac{\varepsilon^{2} \delta}{4} \right] \varDelta_{S^{2}} \psi(\Omega)
+ O(\varepsilon^{3}).
\end{split}
\label{eq:L-inner} \end{align}
To compute the outer expansion, we expand \eqref{eq:fHG} about $\varepsilon = 0$ then substitute the result into \eqref{eq:Ltail}, and obtain \begin{equation}
L^{\text{out}} \psi(\Omega) = \int_{\delta}^{\pi} \frac{\varepsilon +
\varepsilon^{2}}{2 \sqrt{2} (1 -
\cos\theta)^{3/2}} \tilde{\psi}(\theta) \sin\theta
\mathrm{d}\theta + O(\varepsilon^{3}). \end{equation} To remove $\delta$ from the lower limit of integration, we write \begin{multline}
L^{\text{out}} \psi(\Omega) = \int_{0}^{\pi} \frac{\varepsilon +
\varepsilon^{2}}{2 \sqrt{2} (1 - \cos\theta)^{3/2}}
\tilde{\psi}(\theta) \sin\theta \mathrm{d}\theta - \int_{0}^{\delta} \frac{\varepsilon + \varepsilon^{2}}{2 \sqrt{2} (1 -
\cos\theta)^{3/2}} \tilde{\psi}(\theta) \sin\theta
\mathrm{d}\theta + O(\varepsilon^{3}).
\label{eq:Ltail-intermediate} \end{multline} For the second term in \eqref{eq:Ltail-intermediate}, we substitute $\theta = \varepsilon \Theta$ and find that \begin{align}
\begin{split}
\int_{0}^{\delta/\varepsilon} \frac{\varepsilon + \varepsilon^{2}}{2
\sqrt{2} (1 - \cos\varepsilon \Theta)^{3/2}} &
\tilde{\psi}(\varepsilon
\Theta) \sin\varepsilon\Theta \varepsilon \mathrm{d}\Theta\\
&= \int_{0}^{\delta/\varepsilon} \left[ \frac{1 +
\varepsilon}{\Theta^{2}} + O(\varepsilon^{2}) \right]
\tilde{\psi}(\varepsilon \Theta)
\mathrm{d}\Theta\\
&= \varDelta_{S^{2}} \psi(\Omega) \int_{0}^{\delta/\varepsilon}
\left[ \frac{\varepsilon^{2} + \varepsilon^{3}}{4} + O(\varepsilon^{4})
\right] \mathrm{d}\Theta\\
&= \left[ \frac{\varepsilon \delta}{4} + \frac{\varepsilon^{2}
\delta}{4} \right] \varDelta_{S^{2}} \psi(\Omega) +
O(\varepsilon^{3}),
\end{split}
\label{eq:5.17} \end{align} where we have made use of \eqref{eq:sphericallaplacian}.
Thus, the leading-order asymptotic behavior for $L^{\text{out}} \psi(\Omega)$ is given by \begin{equation}
L^{\text{out}} \psi(\Omega) = \frac{\varepsilon + \varepsilon^{2}}{2
\sqrt{2}} \int_{0}^{\pi} \frac{\tilde{\psi}(\theta)}{(1 -
\cos\theta)^{3/2}} \sin\theta \mathrm{d}\theta - \left[
\frac{\varepsilon\delta}{4} + \frac{\varepsilon^{2} \delta}{4} \right]
\varDelta_{S^{2}} \psi(\Omega) + O(\varepsilon^{3}).
\label{eq:L-outer} \end{equation}
By summing \eqref{eq:L-inner} and \eqref{eq:L-outer}, and substituting \eqref{eq:u-tilde}, we find that
\begin{equation}
L \psi(\Omega) = \left(\varepsilon + \varepsilon^{2}
\right) L_{3/2} \psi(\Omega) - \frac{\varepsilon^{2}}{2}
\varDelta_{S^{2}} \psi(\Omega) + O(\varepsilon^{3}),
\label{eq:HG-asympt} \end{equation} where \begin{equation}
L_{3/2}\psi(\Omega) = \frac{1}{4 \sqrt{2} \pi} \int_{0}^{\pi} \int_{0}^{\pi}
\frac{\psi(\theta,\varphi) + \psi(-\theta,\varphi) - 2
\psi(0,\cdot)}{(1 - \cos\theta)^{3/2}} \sin \theta
\mathrm{d}\theta \mathrm{d}\varphi.
\label{eq:L_3/2} \end{equation}
The integral in \eqref{eq:L_3/2} appears to be singular, but since \begin{equation}
\frac{1}{4 \sqrt{2} \pi} \int_{0}^{\pi} \frac{\psi(\theta,\varphi) +
\psi(-\theta,\varphi) - 2 \psi(0,\cdot)}{(1 - \cos\theta)^{3/2}}
\sin\theta \mathrm{d}\varphi = \frac{1}{4} \varDelta_{S^{2}} \psi(\Omega)
+ O(\theta^{2}), \end{equation} $L_{3/2} \psi(\Omega)$ is well defined.
The leading-order asymptotic behavior of $L$ given in \eqref{eq:HG-asympt} is equivalent to a result by Larsen~\cite[Eq. 31]{larsen1999linear} who derived an asymptotic approximation for $L$ using a spectral analysis. In contrast to that asymptotic approximation, the asymptotic analysis given above directly addresses the balance between the forward peak given by the inner expansion, and the long tail given by the outer expansion of the scattering operator with the Henyey-Greenstein scattering kernel.
\section{Conclusion} \label{sec:conclusion}
We have computed the leading-order asymptotic behavior for the close evaluation of the double-layer potential in two and three dimensions. By developing numerical methods to evaluate these asymptotic approximations, we obtain effective methods for computing double-layer potentials at close evaluation points. Our numerical examples demonstrate the effectiveness of these asymptotic approximations and corresponding numerical methods.
The key to this asymptotic analysis is the insight it provides. The leading-order asymptotic behavior of the close evaluation of the double-layer potential is given by its local Dirichlet data plus a correction that is nonlocal. It is this nonlocal term that makes the close evaluation problem challenging to address using only numerical methods. It is consistent with the fact that solutions of boundary value problems for elliptic partial differential equations have a global dependence on their boundary data. By explicitly computing this correction using asymptotic analysis, we have been able to develop an effective numerical method for it. Moreover, the asymptotic error estimates provide guidance on where to apply these approximations, namely, for evaluation points closer to the boundary than the boundary mesh spacing. The result of this work is an accurate and efficient method for computing the close evaluation of the double-layer potential.
The asymptotic approximation method can be extended to other layer potentials and other boundary value problems.
In this paper, we have shown how these methods yield valuable insight into forward-peaked scattering in radiative transfer theory. Future work will entail extending these methods to applications in Stokes flow and plasmonics.
\appendix
\section{Rotations on the sphere} \label{sec:rotation} We give the explicit rotation formulas over the sphere used in the numerical method for the asymptotic approximation in three dimensions. Consider $y, y^{\star} \in S^{2}$. We introduce the parameters $\theta \in [0,\pi]$ and $\varphi \in [-\pi,\pi]$ and write \begin{equation}
y = y(\theta,\varphi) = \sin \theta \cos \varphi \, \hat{\i} + \sin
\theta \sin \varphi \, \hat{\j} + \cos \theta \,
\hat{{\rm k}}.
\label{eq:y-xyz} \end{equation} The parameter values, $\theta^{\star}$ and $\varphi^{\star}$, are set such that $y^{\star} = y(\theta^{\star},\varphi^{\star})$.
We would like to work in the rotated, $\rm uvw$-coordinate system in which \begin{equation}
\begin{aligned}
\hat{{\rm u}} &= \cos \theta^{\star} \cos \varphi^{\star} \, \hat{\i} +
\cos \theta^{\star} \sin \varphi^{\star} \, \hat{\j} - \sin
\theta^{\star} \, \hat{{\rm k}},\\
\hat{{\rm v}} &= - \sin \varphi^{\star} \, \hat{\i} + \cos
\varphi^{\star} \, \hat{\j},\\
\hat{{\rm w}} &= \sin \theta^{\star} \cos \varphi^{\star} \, \hat{\i} +
\sin \theta^{\star} \sin \varphi^{\star} \, \hat{\j} + \cos
\theta^{\star} \, \hat{{\rm k}}.
\end{aligned}
\label{eq:transformations} \end{equation} Notice that $\hat{\rm w} = y^{\star}$. For this rotated coordinate system, we introduce the parameters $s \in [0,\pi]$ and $t \in [-\pi,\pi]$ such that \begin{equation}
y = y(s,t) = \sin s \cos t \, \hat{{\rm u}} + \sin s
\sin t \, \hat{{\rm v}} + \cos s \, \hat{{\rm w}}.
\label{eq:y-uvw} \end{equation} It follows that $y^{\star} = y(0,\cdot)$. By equating \eqref{eq:y-xyz} and \eqref{eq:y-uvw} and substituting \eqref{eq:transformations} into that result, we obtain \begin{equation}
\begin{bmatrix} \sin \theta \cos \varphi\\ \sin \theta \sin \varphi \\
\cos \theta \end{bmatrix}
= \begin{bmatrix} \cos \theta^{\star} \cos \varphi^{\star} & - \sin
\varphi^{\star} & \sin \theta^{\star} \cos \varphi^{\star}\\
\cos \theta^{\star} \sin \varphi^{\star} & \cos \varphi^{\star} & \sin
\theta^{\star} \sin \varphi^{\star} \\
- \sin \theta^{\star} & 0 & \cos \theta^{\star}
\end{bmatrix}
\begin{bmatrix} \sin s \cos t\\ \sin s \sin t\\
\cos s \end{bmatrix}.
\label{eq:y-identity} \end{equation} We rewrite \eqref{eq:y-identity} compactly as $\hat{y}(\theta,\varphi) = R(\theta^{\star},\varphi^{\star}) \hat{y}(s,t)$ with $R(\theta^{\star},\varphi^{\star})$ denoting the $3 \times 3$ orthogonal rotation matrix.
We now seek to write $\theta = \theta(s,t)$ and $\varphi = \varphi(s,t)$. To do so, we introduce \begin{align}
\xi(s,t;\theta^{\star},\varphi^{\star})
&= \cos \theta^{\star} \cos \varphi^{\star} \sin s \cos t -
\sin \varphi^{\star} \sin s \sin t + \sin \theta^{\star} \cos
\varphi^{\star} \cos s,\\
\eta(s,t;\theta^{\star},\varphi^{\star})
&= \cos \theta^{\star} \sin \varphi^{\star} \sin s \cos t + \cos
\varphi^{\star} \sin s \sin t + \sin \theta^{\star} \sin
\varphi^{\star} \cos s,\\
\zeta(s,t;\theta^{\star},\varphi^{\star})
&= - \sin \theta^{\star} \sin s \cos t + \cos \theta^{\star} \cos s.
\label{eq:angle-rotations} \end{align} From \eqref{eq:y-identity}, we find that \begin{equation}
\theta = \arctan \left( \frac{\sqrt{\xi^{2} + \eta^{2}}}{\zeta}
\right),
\label{eq:theta-st} \end{equation} and \begin{equation}
\varphi = \arctan\left( \frac{\eta}{\xi} \right).
\label{eq:varphi-st} \end{equation} With these formulas, we can write $\theta = \theta(s,t)$ and $\varphi = \varphi(s,t)$.
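A minimal Python sketch of the map $(s,t) \mapsto (\theta,\varphi)$ follows; it uses the two-argument arctangent rather than $\arctan$ so that the branches respect $\theta \in [0,\pi]$ and $\varphi \in [-\pi,\pi]$, which is an implementation choice rather than part of the formulas above.
\begin{verbatim}
import numpy as np

def rotated_angles(s, t, theta_star, phi_star):
    """Map rotated coordinates (s, t) to (theta, varphi) using the
    components xi, eta, zeta defined in the appendix."""
    xi = (np.cos(theta_star) * np.cos(phi_star) * np.sin(s) * np.cos(t)
          - np.sin(phi_star) * np.sin(s) * np.sin(t)
          + np.sin(theta_star) * np.cos(phi_star) * np.cos(s))
    eta = (np.cos(theta_star) * np.sin(phi_star) * np.sin(s) * np.cos(t)
           + np.cos(phi_star) * np.sin(s) * np.sin(t)
           + np.sin(theta_star) * np.sin(phi_star) * np.cos(s))
    zeta = -np.sin(theta_star) * np.sin(s) * np.cos(t) + np.cos(theta_star) * np.cos(s)
    theta = np.arctan2(np.hypot(xi, eta), zeta)   # polar angle in [0, pi]
    phi = np.arctan2(eta, xi)                     # azimuthal angle in (-pi, pi]
    return theta, phi

# check: s = 0 recovers (theta*, varphi*)
print(rotated_angles(0.0, 0.3, theta_star=1.0, phi_star=0.5))
\end{verbatim}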
\section{Spherical Laplacian} \label{sec:spherical-laplacian} In this Appendix, we establish the result given in \eqref{eq:spherical-laplacian}. We first seek an expression for
$\partial_{s}^{2}[ \cdot ]|_{s = 0}$ in terms of $\theta$ and $\varphi$. By the chain rule, we find that \begin{equation}
\frac{\partial^{2}}{\partial s^{2}} [ \cdot ] \bigg|_{s = 0} =
\left[ \left( \frac{\partial \theta}{\partial s} \right)^{2}
\frac{\partial^{2}}{\partial \theta^{2}} + \left( \frac{\partial
\varphi}{\partial s} \right)^{2} \frac{\partial^{2}}{\partial
\varphi^{2}} + 2 \frac{\partial \theta}{\partial
s} \frac{\partial \varphi}{\partial s}
\frac{\partial^{2}}{\partial \theta \partial \varphi} +
\frac{\partial^{2} \theta}{\partial s^{2}}
\frac{\partial}{\partial \theta} + \frac{\partial^{2}
\varphi}{\partial s^{2}} \frac{\partial}{\partial \varphi}
\right] \bigg|_{s = 0}.
\label{eq:chain-rule} \end{equation} Using $\theta$ defined in \eqref{eq:theta-st} and $\varphi$ defined in \eqref{eq:varphi-st}, we find that \begin{align}
\frac{\partial \theta(s,t)}{\partial s} \bigg|_{s = 0}
&= \cos t, \label{eq:B.2}\\
\frac{\partial^{2} \theta(s,t)}{\partial s^{2}} \bigg|_{s = 0}
&= \frac{\cos \theta^{\star}}{\sin\theta^{\star}} \sin^{2}
t, \label{eq:B.3}\\
\frac{\partial \varphi(s,t)}{\partial s} \bigg|_{s = 0}
&= \frac{\sin t}{\sin \theta^{\star}}, \label{eq:B.4}\\
\frac{\partial^{2} \varphi(s,t)}{\partial s^{2}} \bigg|_{s = 0}
&= - \frac{\cos \theta^{\star}}{\sin^{2}\theta^{\star}} \sin 2 t.
\label{eq:B.5} \end{align} Note that at $s = 0$, we have $\theta^{\star} = \theta$.
Substituting \eqref{eq:B.2} -- \eqref{eq:B.5} into \eqref{eq:chain-rule} and replacing $\theta^{\star}$ by $\theta$, we obtain \begin{multline}
\frac{\partial^{2}}{\partial s^{2}} [ \cdot ] \bigg|_{s = 0} =
\cos^{2} t \frac{\partial^{2}}{\partial \theta^{2}} + \sin^{2} t
\frac{1}{\sin^{2} \theta} \frac{\partial^{2}}{\partial \varphi^{2}}
+ 2 \cos t \sin t \frac{1}{\sin \theta} \frac{\partial^{2}}{\partial
\theta \partial \varphi}\\
+ \sin^{2} t \frac{\cos \theta}{\sin \theta}
\frac{\partial}{\partial \theta} - \sin 2 t \frac{\cos
\theta}{\sin^{2} \theta} \frac{\partial}{\partial \varphi}, \end{multline} from which it follows that \begin{equation}
\frac{1}{\pi} \int_{0}^{\pi} \frac{\partial^{2}}{\partial s^{2}} [
\cdot ] \bigg|_{s = 0} \mathrm{d}t
= \frac{1}{2} \left[ \frac{\partial^{2}}{\partial \theta^{2}} +
\frac{\cos \theta}{\sin \theta} \frac{\partial}{\partial \theta} +
\frac{1}{\sin^{2} \theta} \frac{\partial^{2}}{\partial
\varphi^{2}} \right]
= \frac{1}{2} \varDelta_{S^{2}}. \end{equation}
\end{document}
\begin{document}
\flushbottom
\title{Work and information from thermal states after subtraction of energy quanta}
\thispagestyle{empty}
\section*{Introduction}
Matter and radiation out of thermal equilibrium with an environment are significant resources for modern physics, information science, and technology. The thermal state of a cooled system is not in thermal equilibrium with its environment and can be used to perform work \cite{Greiner} and carry information \cite{Helstrom}. Preparation of the cooled state requires only a connection to a cold external reservoir, into which a large part of the energy dissipates while, simultaneously, the entropy gradually decreases. Similarly, by thermal heating from an {\em external stochastic} hot reservoir, we can enlarge the mean energy, but also the entropy. A high-energy out-of-equilibrium state can also be prepared by an {\em external deterministic} force \cite{Glauber} applied to a thermal state. Such coherent driving renders the entropy lower than that of the initial thermal state. It is the best classical way to prepare states capable of transmitting more information \cite{Helstrom} and producing more work \cite{Gelbwaser,Lutz,Kiesel,Mari,Kolar,Paternostro}. Alternatively, mechanisms that require neither external heating nor driving allow us to test non-equilibrium quantum thermodynamics merging with information theory \cite{rev1,rev2}, also on currently unexplored experimental platforms.
Quantum optics has proven to be a suitable experimental platform for proof-of-principle tests of many quantum physics processes, heavily stimulating other experimental platforms and advancing novel quantum technologies. A weak dissipation of the thermal energy of light to cold reservoir modes allows us to conditionally subtract individual quanta of that thermal energy by measuring the reservoir modes with quantum detectors. The subtraction can be successfully performed even under imperfect conditions and with inefficient photodetectors. In quantum optics, such continuous-in-time photodetection processes were first found to conditionally manipulate the statistics and also increase the energy of thermal light \cite{reff1,reff2,reff3,reff4}. Over two decades, continuous-time nonclassical state manipulations were extensively developed experimentally in cavity quantum electrodynamics \cite{reff5, reff6}. During the same period, a series of multi-photon subtraction experiments with thermal light also demonstrated a conditional instantaneous increase of the mean energy by subtraction of quanta from a single-mode thermal state \cite{sub1,sub2,sub3}. The subtraction procedure uses a basic dissipation mechanism, with no other energy supply, and an inefficient measurement of energy quanta without their exact resolution. This procedure applied to classical states has been used for quantum filtering \cite{app1}, state preparation \cite{app2}, noiseless amplification \cite{app4}, quantum cloning \cite{app5}, enhanced interferometry \cite{app6,app7} and, recently, also to illustrate a Maxwell demon in quantum thermodynamics \cite{app8}. Different measures have been applied to quantify the effect of subtraction procedures \cite{quan1,quan2,quan3,quan4}. However, no analysis, experiment, or operational measure proving the principal applicability of subtracted thermal states to information transmission and work extraction has yet been presented.
Here, we experimentally verify that the instantaneous subtraction of a number of quanta (photons) from the thermal energy of the oscillator produces an out-of-equilibrium state with increased average energy while keeping the Fano factor constant \cite{Fano}. This means that the average energy increases hand-in-hand with its variance. Also, the entropy slowly increases with the number of subtracted quanta. Despite this limitation, we predict and demonstrate that such out-of-equilibrium states can provide more work and carry more information than is available from any dissipative cooling mechanism. Photon-subtracted thermal states represent a paramount example of out-of-equilibrium states that can be obtained without an external coherent deterministic drive or an additional thermal source of energy. These states can be employed as a useful source for various future experiments in the currently merging fields of information theory and nonequilibrium quantum thermodynamics.
\begin{figure}
\caption{Preparation and characterization of out-of-equilibrium states conditionally generated via multiple-photon subtraction from single-mode thermal light. Thermal light governed by Bose-Einstein statistics dissipates at an unbalanced beam splitter (BS) to the vacuum reservoir modes. A small fraction of light in the reservoir modes is detected by a multichannel detector formed by $m$ on-off detectors. Coincidence detection events, when all $m$ detectors fire, trigger the output and verification stage consisting of a photon-number-resolving detector. Subsequently, data are processed, and the photon statistics of the conditionally prepared out-of-equilibrium state is analyzed. The statistics is evaluated for its capability to provide work and carry information.}
\label{fig:scheme}
\end{figure}
\section*{Subtraction of energy from thermal state}
To experimentally produce and analyze the out-of-equilibrium state of a {\em single} oscillator and demonstrate its capabilities, we follow a stream of optical experiments \cite{sub1,sub2,sub3,app1,app2,app4,app5,app6,app7,app8}. Our motivation and evaluation are, however, different. The scheme is depicted in Fig.~\ref{fig:scheme}. The generation starts from a single oscillator represented by a single mode of radiation prepared in the state $\rho_{\text{th}}=\sum_{n=0}^{\infty}p_{n,\text{th}}|n\rangle\langle n|$ with thermal Bose-Einstein statistics $p_{n,\text{th}}=\frac{n_{\text{th}}^n}{\left(1+n_{\text{th}}\right)^{1+n}}$ determined only by the mean number $n_{\text{th}}$ of energy quanta, where $|n\rangle$ are energy basis states. In our experiment, thermal light is generated by temporal intensity modulation of a pulsed laser by a rotating ground glass. The thermal state instantaneously dissipates a small part of its energy at the unbalanced beam splitter to a multimode reservoir $R$ in the vacuum (ground) state $|0\rangle_R$. The dissipation only negligibly cools down the thermal mode. Chiefly, it correlates the states $|n\rangle$ of the oscillator's mode with global photon-number states of the reservoir. This is apparent from the transformation \begin{equation}\label{eq1}
|n\rangle\langle n|\otimes|0\rangle_R\langle 0| \rightarrow \sum_{k=0}^n {n\choose k} p_s^k (1-p_s)^{n-k}|k\rangle\langle k|\otimes |n-k\rangle_R\langle n-k|, \end{equation} where $p_s$ is the survival probability of a single quantum in the oscillator. A high single-quantum survival probability $p_s$ means weak coupling. The product $p_s^k(1-p_s)^{n-k}$ stands for the probability that $k$ quanta remain in the oscillator and $n-k$ quanta go to the reservoir $R$. In the experiment, only 5\% of the energy is dissipated, so $p_s=0.95$.
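A minimal Python sketch of this dissipation step, acting directly on the photon-number distribution rather than on the full state, is given below; the thermal mean and the photon-number cutoff are placeholders chosen only for illustration, and the sketch is not part of the experimental data processing.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def dissipate(p_osc, p_s):
    """Joint photon-number distribution after the beam splitter: starting
    from n quanta, k remain in the oscillator with probability
    C(n,k) p_s^k (1-p_s)^(n-k) and n-k go to the reservoir."""
    nmax = p_osc.size - 1
    joint = np.zeros((nmax + 1, nmax + 1))  # joint[k, l]: k in oscillator, l in reservoir
    for n, pn in enumerate(p_osc):
        for k in range(n + 1):
            joint[k, n - k] += pn * binom.pmf(k, n, p_s)
    return joint

n_th, nmax = 2.0, 60
n = np.arange(nmax + 1)
p_th = n_th**n / (1.0 + n_th) ** (n + 1)    # truncated Bose-Einstein statistics
joint = dissipate(p_th, p_s=0.95)
print(joint.sum())                          # approximately 1 (photon-number cutoff)
\end{verbatim}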
This entirely classical correlation between the system and reservoir at a level of individual quanta is a useful resource. It arises from the classical (first-order) coherence of single-mode thermal light \cite{Glauber}. For heavily multimode thermal oscillator (incoherent), the statistics of quanta over all weakly occupied modes approaches Poissonian, and the dissipative process does not produce this correlation. It means we cannot modify statistics by any measurement performed on the reservoir $R$. The multimode thermal light establishes an incoherent (classical) limit. To realize the importance of first-order coherence for the formation of the out-of-equilibrium state, the subtraction experiment with multimode thermal states is also performed. $M$ temporal thermal modes with the same overall mean photon number $\langle n\rangle=n_{\text{th}}$ are selected to prepare a multimode state. The effective number of modes $M$ is modified by changing the size of speckle pattern collected after the ground glass. The partially coherent $M$-mode state would produce an interference visibility of $1/M$ given by first-order coherence function $g^{1}(0)$.
Light dissipated to the reservoir further scatters into many modes. To detect at least a small fraction of the dissipated light, we select $m$ modes and detect them with single-photon avalanche diodes (SPADs). Only when an $m$-fold coincidence is detected is the resulting optical output of the source transmitted. The ideal version of this detection can be described by \cite{Sperling12} \begin{equation}\label{eq2} \Pi_{m}={N \choose m}\sum_{s=m}^{\infty}
\sum_{j=0}^m\frac{1}{N^s}{m \choose j}(-1)^j(m-j)^s|s\rangle_A\langle s|, \end{equation}
however, a real measurement collects only a small part of the overall thermal energy dissipated into the reservoir $R$. Therefore, we introduce an overall effective collection efficiency $\eta$ through the transformation $|k\rangle\langle k| \rightarrow \sum_{r=0}^k {k\choose r} \eta^{k-r} (1-\eta)^{r}|k-r\rangle\langle k-r|$ of the energy states before the detection. Eqs.~(\ref{eq1},\ref{eq2}), together with the collection efficiency $\eta$, completely describe the instantaneous multiphoton subtraction process.
For a weak dissipative coupling with sufficiently high single-photon survival probability, $p_s\approx 1$, the out-of-equilibrium statistics approaches \begin{equation}\label{subtr} p_n=\frac{\frac{(n+m)!}{n!m!}\left(\frac{n_{\text{th}}}{1+ n_{\text{th}}}\right)^n}{(1+n_{\text{th}})^{m+1}} \end{equation} by conditioning on $m$ detection events. A potentially small $\eta\ll 1$ reduces the generation rate, but the prepared out-of-equilibrium states are very close to the theoretical limit. Importantly, (\ref{subtr}) describes single-mode light. Its statistics is mathematically analogous to the overall statistics of $(m+1)$-mode thermal light equally populated in all modes with an average number $n_{\text{th}}$ of quanta \cite{MR}. In the case of multimode light, the $m+1$ different modes are in principle distinguishable, and the light possesses lower first-order coherence, quantified by $g^{(1)}(0)=1/(m+1)$. Also, the available energy, work, and information {\em per mode} are actually $m+1$ times lower, because the distinguishable modes are not used efficiently. Consequently, the single-mode state with the statistics (\ref{subtr}) produced by a coherent light source thermodynamically outperforms multimode states with the same statistics and is better suited to our purpose.
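The conditional statistics (\ref{subtr}) and its mean $\langle n\rangle=(m+1)n_{\text{th}}$ can be checked numerically with a few lines of Python; the photon-number cutoff is a placeholder, and the sketch is only an illustration of the formula.
\begin{verbatim}
import numpy as np
from scipy.special import comb

def p_subtracted(n, m, n_th):
    """Photon-number distribution of the m-photon-subtracted thermal
    state, as given in the text."""
    return comb(n + m, m) * (n_th / (1.0 + n_th)) ** n / (1.0 + n_th) ** (m + 1)

n = np.arange(400)                       # photon-number cutoff (placeholder)
for m in range(4):
    p = p_subtracted(n, m, n_th=2.0)
    print(m, p.sum(), (n * p).sum())     # normalization ~ 1, mean ~ (m+1)*n_th
\end{verbatim}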
In the verification stage of the experiment, the generated out-of-equilibrium statistics is independently analyzed by a photon-number-resolving detector (PNRD). The verification PNRD consists of a tunable free-space multichannel optical network and a sufficient number of SPADs, eight in our case, and features precise balancing with no crosstalk between the individual detection ports. Further experimental details and a characterization of the optical set-up are presented in the Methods section.
\section*{Out-of-equilibrium statistics}
We will analyze several essential parameters of the prepared out-of-equilibrium light governed by the statistics (\ref{subtr}) to assess its performance. As already discussed, the single-mode statistics (\ref{subtr}) yields a linearly increasing mean number of quanta, $\langle n\rangle=(m+1)n_{\text{th}}$, where $m$ is the number of conditional detection events. The monotonic increase is shown in Fig.~\ref{fig:prms}(a) for $n_{\text{th}}=2$, as set in our measurement. It is important to stress that the behavior of the mean number of quanta of the state subjected to the subtraction process depends on the initial state statistics. The mean energy increases (decreases) when a quantum is subtracted from a super-Poissonian (sub-Poissonian) state. The subtraction does not influence a state governed by Poissonian statistics. In this work, however, we will exclusively analyze the subtraction from single-mode and multimode thermal states because we start from thermal equilibrium.
Furthermore, the second-order correlation function $g^{(2)}(0)=\frac{\langle a^{\dagger 2}a^2\rangle}{\langle a^{\dagger}a \rangle^2}=1+\frac{1}{1+m}$ for (\ref{subtr}) converges to unity with increasing $m$, irrespective of $n_{\text{th}}$. It is depicted in Fig.~\ref{fig:prms}(b). However, the Fano factor $F=\langle (\Delta n)^2\rangle /\langle n\rangle=1+n_{\text{th}}$ is independent of $m$ and approaches unity only for very small $n_{\text{th}}\ll 1$. To reach higher $\langle n\rangle$, $m$ needs to be higher too, which is increasingly challenging experimentally. Let us note that these results correspond to the instantaneous limit $\lambda t\rightarrow 0$ of the continuous photodetection process, where $\lambda$ is the success probability of single-photon subtraction \cite{reff2}. For $n_{\text{th}}=2$, we experimentally demonstrate in Fig.~\ref{fig:prms}(c) that the conditional out-of-equilibrium statistics indeed remains super-Poissonian, although $g^{(2)}(0)$ is substantially reduced below 2, the value characteristic of thermal light. Figs.~\ref{fig:prms}(a,b,c) also show that the measured statistics and derived characteristics agree very well with the theoretical model.
Invariance of the Fano factor $F=1+n_{\text{th}}$ means that the variance $\langle (\Delta n)^2\rangle$ increases simultaneously with the increase of $\langle n\rangle$. However, $\langle (\Delta n)^2\rangle$ does not grow fast enough to render the state (\ref{subtr}) useless. For example, the mean-to-standard-deviation ratio $\mbox{MDR}=\langle n\rangle/\sqrt{\langle (\Delta n)^2\rangle}=\sqrt{\frac{n_{\text{th}}}{1+n_{\text{th}}}}\sqrt{m+1}$ increases monotonically. This means that the energy advantageously increases faster than its fluctuations. Moreover, it is already sufficient to use $m=1$ and $n_{\text{th}}>1$ to obtain $\mbox{MDR}>1$, and the mean $\langle n\rangle$ increases even faster with $m$ for larger $n_{\text{th}}$. Experimental evidence that $\langle n\rangle$ and $\mbox{MDR}$ increase with $m$ is shown in Figs.~\ref{fig:prms}(a,d). Using the conditional instantaneous measurement, (\ref{subtr}) exhibits the same behavior of $\langle n\rangle$ and $\mbox{MDR}$ as a thermal oscillator coherently driven out of equilibrium (see the Methods for details).
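The closed-form expressions for $g^{(2)}(0)$, the Fano factor, and the MDR quoted above can likewise be checked directly against the distribution (\ref{subtr}); the short sketch below (with the same assumed parameters as before) is a numerical illustration rather than part of the data analysis.
\begin{verbatim}
# Moments of Eq. (subtr) vs. the closed-form g2, Fano factor, and MDR.
from math import comb, sqrt

def moments(m, n_th, N=600):
    x = n_th / (1 + n_th)
    p = [comb(n + m, m) * x**n / (1 + n_th)**(m + 1) for n in range(N)]
    mean = sum(n * pn for n, pn in enumerate(p))
    sec  = sum(n * (n - 1) * pn for n, pn in enumerate(p))   # <a+^2 a^2>
    var  = sum(n * n * pn for n, pn in enumerate(p)) - mean**2
    return mean, sec / mean**2, var / mean, mean / sqrt(var)

n_th = 2.0
for m in range(4):
    mean, g2, fano, mdr = moments(m, n_th)
    print(f"m={m}: g2={g2:.3f} (formula {1+1/(1+m):.3f}), "
          f"F={fano:.3f} (formula {1+n_th:.1f}), "
          f"MDR={mdr:.3f} (formula {sqrt((m+1)*n_th/(1+n_th)):.3f})")
\end{verbatim}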
\begin{figure}
\caption{Mean number of photons (a), $g^{(2)}(0)$ function (b), Fano factor (c), and MDR (d) of the equilibrium ($m=0$) and out-of-equilibrium ($m>0$) states. $m$ stands for the number of subtracted photons. Experimental results (dark gray), full numerical model (blue dots), and the simplified model (\ref{subtr}) (green tiles) -- see the Methods for details on the theoretical models. Data error bars show the standard deviation of the measurement; error bars of the models represent the uncertainty of the input parameters, particularly of the mean number of photons determined from the measured initial thermal statistics.}
\label{fig:prms}
\end{figure}
\begin{figure}
\caption{(a) Shannon entropy of the conditionally prepared states as a function of the subtracted photon number $m$. Shown are experimental data (dark gray), full numerical model (blue dots), and the simplified model (green tiles) based on Eq.~(\ref{subtr}). (b) Shannon entropy as a function of the number $M$ of modes of the initial thermal state. Gray bars stand for the entropy of $M$-mode thermal states ($m=0$) and yellow bars show the entropy of the same states after single-photon subtraction ($m=1$).}
\label{fig:S}
\end{figure}
\section*{Work available from out-of-equilibrium state}
The previous analysis suggests that the conditionally generated statistics (\ref{subtr}) can be a viable alternative to an oscillator externally driven out of thermal equilibrium. To support this statement, we predict and measure the work available from the out-of-equilibrium state (\ref{subtr}). The available average work $\langle W\rangle_{\text{yield}}$, which is performed while the system equilibrates with the environment at temperature $T$, is expressed through the relative entropy \cite{w1,w2,w3,w4,w6} \begin{equation}\label{work}
\langle W\rangle_{\text{yield}}=k_BT D\left(p_n||p^{eq}_n\right). \end{equation}
Here $k_B$ is the Boltzmann constant and $D\left(p_n||p^{eq}_n\right)=\sum_{n=0}^{\infty}p_n \ln p_n - \sum_{n=0}^{\infty} p_n \ln p_n^{eq}$ is the relative Shannon entropy (Kullback--Leibler divergence) between the out-of-equilibrium statistics $p_n$ and the distribution $p^{eq}_n$ of a system in equilibrium with an environment at temperature $T$ \cite{w1}. In contrast to the previous statistical analysis, which takes into account only the system, $\langle W\rangle_{\text{yield}}$ depends on both the system state and the environment at constant temperature $T$. The bound (\ref{work}) can be reached; specific protocols have already been developed \cite{Mari}.
Our preparation method actually uses two reservoirs, a hot one ($T>0$) and a cold auxiliary vacuum reservoir ($T=0$), see Fig.~\ref{fig:scheme}. However, the cold reservoir cannot be used to provide work (\ref{work}) without heating some of its modes using an external source. We can, therefore, consider the hot reservoir at temperature $T>0$ and cool one mode to its ground state by a strong dissipation. The temperature $T>0$ of the thermal source is always constant in the experiment; consequently, the available work can be normalized by $k_B T$. By cooling the oscillator mode to the ground state, the normalized work $\frac{\langle W\rangle_{\text{yield}}}{k_BT}|_0=\ln\left[1+n_{\text{th}}\right]$ sets a benchmark for any useful conditional preparation of an out-of-equilibrium state. If $\frac{\langle W\rangle_{\text{yield}}}{k_BT}>\frac{\langle W\rangle_{\text{yield}}}{k_BT}|_0$, more work can be extracted from the state prepared conditionally by the measurement of a small part of the dissipated energy (the presented protocol) than by completely cooling down one of the hot reservoir modes. Cooling to the ground state can be challenging for many systems, such as mechanical oscillators, and, therefore, the conditional procedure may be preferable to achieve more work.
For the statistics (\ref{subtr}), we conditionally obtain the normalized available work \begin{equation}\label{work_th} \frac{\langle W\rangle_{\text{yield}}}{k_BT}=\frac{1}{(1+n_{\text{th}})^{m+1}}\sum_{n=0}^{\infty}\left(\frac{n_{\text{th}}}{n_{\text{th}}+1}\right)^n \frac{(n+m)!}{n!m!}\ln\left[\frac{1}{(1+n_{\text{th}})^m}\frac{(n+m)!}{n!m!}\right], \end{equation} which increases monotonically with $m$ for any $n_{\text{th}}>0$ without an offset or saturation. We experimentally verified this for $n_{\text{th}}=2$, see Fig.~\ref{fig:W}(a). The entropy also increases with $m$, as shown in Fig.~\ref{fig:S}(a). The amount of extractable work decreases with an increasing number $M$ of modes of the initial thermal state, see Fig.~\ref{fig:W}(b). It vanishes completely in the incoherent limit of large $M$. This clearly demonstrates that first-order coherence is a resource needed to extract available work using the instantaneous dissipation and photon measurement.
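Equation (\ref{work_th}) is straightforward to evaluate numerically. The sketch below (our own illustration, with an assumed truncation of the infinite sum) compares it with the ground-state-cooling benchmark $\ln(1+n_{\text{th}})$ discussed above.
\begin{verbatim}
# Normalized available work of Eq. (work_th) vs. the cooling benchmark.
from math import comb, log

def work_over_kT(m, n_th, N_MAX=1500):
    x = n_th / (1 + n_th)
    total = 0.0
    for n in range(N_MAX):
        p = comb(n + m, m) * x**n / (1 + n_th)**(m + 1)
        total += p * log(comb(n + m, m) / (1 + n_th)**m)
    return total

n_th = 2.0
print(f"ground-state cooling benchmark: {log(1 + n_th):.3f}")
for m in range(5):
    print(f"m={m}: W/kT = {work_over_kT(m, n_th):.3f}")
\end{verbatim}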
\begin{figure}
\caption{(a) The normalized available work as a function of the subtracted photon number $m$. Shown are experimental data (dark gray), full numerical model (blue dots), and the simplified model (green tiles) based on Eq.~(\ref{subtr}). The horizontal threshold (solid black line) corresponds to the work available by cooling the oscillator mode to the ground state. Light gray areas represent lower bounds derived for a thermal state heated to the same mean number of photons as the corresponding $m$-photon-subtracted states. (b) The normalized available work plotted against the number of modes $M$ for a multimode thermal state after single-photon subtraction ($m=1$).}
\label{fig:W}
\end{figure}
Despite the increase in entropy, the available work obtained by the subtraction procedure overcomes the threshold $\frac{\langle W\rangle_{\text{yield}}}{k_BT}|_0$ given by the complete cooling already at $m=3$. The experimental results shown in Fig.~\ref{fig:W}(a) demonstrate the violation by 5 standard deviations. Moreover, the available work also overcomes a threshold set by a thermal state heated to the same mean number of quanta as reached by the subtraction. The work available by the adequate thermal heating is illustrated by the light gray areas of the bars. Heating, or cooling to the ground state -- the heating/cooling strategy -- represents a joint benchmark here. All experimental results in Fig.~\ref{fig:W}(a) agree with the theoretical predictions. This opens the possibility of testing other thermodynamic quantities and processes using the presented experimental photonic approach.
\section*{Information carried by out-of-equilibrium state}
We complement the measurement of available work by verifying that the out-of-equilibrium distribution (\ref{subtr}), as a member of a binary alphabet, can carry information better than the initial thermal distribution $p_{n,\text{th}}$. The average mutual information, given in bits, can be determined from the relative entropy \begin{equation}\label{inf}
\langle I\rangle=D(p^{AB}_{i,j}||p^A_ip^B_j) \end{equation} where the indices $i,j=0,1$ stand for a single bit at the sender side A and a single bit at the receiver side B, respectively. $p^{AB}_{i,j}$ is the joint (correlated) probability distribution and $p^A_i$, $p^B_j$ are the marginal probability distributions at the sender and receiver sides, respectively. In contrast to the average work (\ref{work}), the average information depends on the joint statistics of both communicating parties. Also, it is optimal to use the vacuum state of the cold reservoir for encoding the symbol `0'. The symbol `1' can be encoded using the thermal distribution $p_{n,\text{th}}$, which sets a benchmark $\langle I\rangle_{0}$ for the mutual information, see the Methods section for the details. To overcome the thermal bound, we instead employ the conditional statistics (\ref{subtr}) to encode the symbol `1'. In this case, the average mutual information reaches $\langle I\rangle=H\left((1-p^A_0)(1-p_E)\right)-(1-p^A_0)H(p_E)$, where $p_E=1/(1+n_{\text{th}})^{m+1}$ is the probability of error (symbol `1' is identified as `0') and $H(p_E)$ is the binary entropy function. The maximum \begin{equation} \langle I\rangle_{\text{yield}}=\mbox{max}_{p^A_0}\langle I\rangle=\log_2 \left(1+(1-p_E)p_E^{p_E/(1-p_E)}\right) \end{equation} of the mutual information $\langle I\rangle$ over the probability $p^A_0$ at the sender side increases monotonically with $m$ for any $n_{\text{th}}$. The experimental result for $n_{\text{th}}=2$ is shown in Fig.~\ref{fig:MI}(a). It is not critically sensitive to the number of modes when a multimode thermal state is used instead of a single thermal mode. For the multimode states, the information gain has to be normalized per mode, because more modes can carry more information. It vanishes only gradually, as presented in Fig.~\ref{fig:MI}(b). First-order coherence is a key resource here, the same as for the work extraction.
For arbitrarily small $n_{\text{th}}$ and any $m>0$, the average information $\langle I\rangle_{\text{yield}}\approx (1+m)n_{\text{th}}/(e\ln 2)$ overcomes the benchmark $\langle I\rangle_{0}\approx n_{\text{th}}/(e\ln 2)$ by a factor of $m+1$. The results of the experimental verification for $n_{\text{th}}=2$ are shown in Fig.~\ref{fig:MI}(a). The information gain $\langle I\rangle_{\text{yield}}$ approaches its maximum of 1~bit even faster than for a thermal state heated to the equivalent mean number of quanta. We reach more than 0.9~bit already for $m=3$. Indeed, the conditionally generated state can carry nearly the maximum information despite its mixedness.
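The maximal mutual information can be evaluated directly from the expression for $\langle I\rangle_{\text{yield}}$ and compared with the thermal benchmarks; the following sketch is again only an illustration with assumed parameters (see the Methods for the benchmark formula (\ref{bench})).
\begin{verbatim}
# <I>_yield for the m-photon-subtracted state vs. thermal benchmarks.
from math import log2

def I_yield(m, n_th):
    pE = 1.0 / (1 + n_th)**(m + 1)       # '1' read as '0'
    return log2(1 + (1 - pE) * pE**(pE / (1 - pE)))

def I_thermal(n1):                        # Eq. (bench) with mean n1
    return log2(1 + n1 * (1 + n1)**(-(1 + n1) / n1))

n_th = 2.0
for m in range(4):
    print(f"m={m}: I_yield={I_yield(m, n_th):.3f} bit, "
          f"thermal(n_th)={I_thermal(n_th):.3f}, "
          f"thermal((m+1)n_th)={I_thermal((m + 1) * n_th):.3f}")
\end{verbatim}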
\begin{figure}
\caption{(a) The maximum mutual information per mode against the number of subtracted photons $m$. Dark gray bars stand for experimental results, blue dots for the full numerical model, and green tiles for the ideal model based on Eq.~(\ref{subtr}). Light gray areas represent lower bounds derived for a thermal state heated to the same mean number of photons as the corresponding $m$-photon-subtracted states. The solid black line represents the threshold of maximum mutual information available when encoding `1' using the initial thermal state. (b) The maximum mutual information per mode as a function of the number of modes $M$ for a multimode thermal state before and after $m$-photon subtraction. Colors refer to the value of $m$, gray: $m$=0, yellow: $m$=1.}
\label{fig:MI}
\end{figure}
\section*{Conclusion}
We have experimentally produced the conditional out-of-equilibrium state (\ref{subtr}) from single-mode thermal light by a weak dissipation to a reservoir and an inefficient detection of photons there. We have theoretically and experimentally verified that average work can be extracted from the conditional out-of-equilibrium state, which outperforms any cooling/heating strategy. Furthermore, this state can also be used to carry more average information (closer to one bit) than any state produced by a cooling/heating strategy, despite the entropy increase of the conditional state. The presented procedure does not require any external coherent drive or additional thermal energy. It only uses an energy measurement to conditionally reach a higher work and information rate. However, it does require the first-order (classical) coherence of the thermal source. The obtained results complement previous experiments demonstrating applications of the subtraction procedure. The presented method can be translated to other experimental platforms and used for future experiments in the merging fields of quantum information and quantum thermodynamics. It is also stimulating for current optomechanical experiments at the single-quantum level \cite{marcus}, where a mechanical oscillator can be driven out of equilibrium by weak optical cooling and incoherent photon detection more efficiently than by complete cooling or adequate heating.
\section*{Methods}
\subsection*{Experimental setup}
Subtraction of $m$ quanta from a thermal state was experimentally realized to demonstrate the generation and characterization of out-of-equilibrium states of light. The pseudothermal pulsed light was generated employing a nanosecond pulsed laser diode (805~nm) in the gain-switching regime with a repetition rate of 4~MHz. This initial optical signal was focused on the surface of a rotating ground glass and the output speckle pattern was coupled into a single-mode optical fiber. The Glauber second-order correlation function of the generated pseudothermal state was evaluated, $g^{(2)}\left(0\right)=2.00\left(3\right)$, to verify the high quality of the preparation stage. Multimode thermal states were generated by selecting $M$ thermal modes with the same overall mean photon number $\langle n\rangle=n_{\text{th}}$ but different temporal modulation. The effective number of modes $M$ is modified by changing the size of the speckle pattern collected via the optical fiber. This is achieved by changing either the diameter of the laser spot on the rotating ground glass or the distance between the glass and the fiber coupler.
Multiple-photon subtraction was realized using a low-reflectivity beam splitter implemented with a half-wave plate followed by a polarizing beam splitter. In the first port, the reflected photons were detected via a reconfigurable multichannel detector with $m$ commercial on-off single-photon detectors. To measure the click statistics of the transmitted pulses, we placed a photon-number-resolving detector (PNRD) at the second port. The PNRD consists of a balanced eight-channel spatially multiplexed optical network and eight single-photon avalanche photodiodes. The resulting coincidence statistics was acquired by the PNRD under the condition that exactly $m$ detection events occurred at the reflected port. We have applied a statistical method based on the maximum-likelihood algorithm to reconstruct the resulting photon statistics from the multi-coincidence measurement.
The coincidence rates increase with an increasing mean photon number of the initial thermal state. However, it is crucial to set the mean photon number low enough to measure the coincidence statistics of the $m$-photon-subtracted thermal state within the range of the PNRD. The mean photon number of the $m$-photon-subtracted thermal state increases with $m$ as $(m+1)n_{\text{th}}$. Taking into account the number of channels of the PNRD and its efficiency, we can safely set $n_{\text{th}} = 2$ for the maximum number of subtracted photons $m\leq 3$. At the same time, the selected mean photon number is high enough to keep the measurement time reasonably short.
Similarly, the value of the beam-splitter reflectivity represents a trade-off between the subtraction rate (and, consequently, the total measurement time) and the ability of the generated out-of-equilibrium state to perform work and transfer information. Both of these quantities decrease monotonically with increasing reflectivity $R$ (see Fig.~\ref{fig:R} for the three-photon-subtracted state). We can see that the chosen value of the reflectivity, $R = 5\%$, is close to the largest value that still outperforms the tightest bound on the available work.
\begin{figure}
\caption{(a) The maximum mutual information (solid green curve) and (b) the available work (solid red curve) versus the beam-splitter reflectivity $R$ for three-photon-subtracted thermal state ($m=3$). Solid black lines and dashed black lines represent the corresponding benchmarks discussed in the main text.}
\label{fig:R}
\end{figure}
To fully describe the performed $m$-photon subtraction, a detailed model has been developed. It takes into account the actual experimental properties of the set-up: the beam-splitter reflectivity $R$, the number $m$ of on-off detectors at the reflected port, and their detection efficiency. In the limit of $R\rightarrow 0$, the full numerical model is equivalent to the application of the $m$-th power of the annihilation operator to the input thermal state, which produces the ideal statistics (\ref{subtr}). The numerical model has been used to evaluate all the parameters discussed in the main text and plotted in Figs.~\ref{fig:prms}--\ref{fig:MI}.
\subsection*{Out-of-equilibrium statistics vs. coherently driven thermal noise}
The out-of-equilibrium statistics (\ref{subtr}) is similar to the statistics of a thermal oscillator coherently driven out of thermal equilibrium with mean $\langle n\rangle_c=n_{\text{th}}+n_{c}$ and variance $\langle (\Delta n)^2\rangle_c = 2n_cn_{\text{th}}+n_c+n_{\text{th}}^2+n_{\text{th}}$, where $n_c$ is the mean number of coherent quanta supplied by the driving. Considering $n_c=gn_{\text{th}}$, where $g$ is the ratio between coherent and incoherent energy, we can see that both $\langle n\rangle_c$ and $\mbox{MDR}_c=\frac{\langle n\rangle_c}{\sqrt{\langle (\Delta n)^2\rangle_c}}$ increase monotonically with $g$ for any $n_{\text{th}}$, similarly to $\langle n\rangle_m$ and $\mbox{MDR}_m$ obtained from the statistics (\ref{subtr}) in the case of $m$-quanta subtraction.
However, to reach $\mbox{MDR}_c>1$, $n_{\text{th}}<(n_c-1)n_c$ is necessary and, therefore, a small coherent driving out of equilibrium is not sufficient for large $n_{\text{th}}$. For small $n_{\text{th}}\ll 1$, achievable only by cooling, the mean-to-standard-deviation ratio reaches $\mbox{MDR}_c\approx \sqrt{n_{\text{th}}(1+g)}$, similarly to $\mbox{MDR}_m$ with $m$ replaced by $g$. Simultaneously, the second-order correlation function $g^{(2)}_c(0)=1+\frac{1}{1+\frac{g^2}{1+2g}}$ also does not depend on $n_{\text{th}}$, similarly to $g^{(2)}_m(0)$, although it has a different dependence on $g$. The Fano factor $F_c=1+n_{\text{th}}\frac{1+2g}{1+g}$ depends on $g$, whereas $F_m$ is principally independent of $m$. However, for large $g$ it does not converge to $F_c=1$, i.e., to Poissonian statistics. Only for small $n_{\text{th}}\ll 1$ do both statistics converge to the Poissonian limit. Although (\ref{subtr}) is not the statistics of a thermal state coherently driven out of equilibrium, it exhibits similar statistical features without any coherent drive.
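For a side-by-side comparison of the two situations, the sketch below evaluates the mean, MDR, and Fano factor of the coherently driven thermal oscillator and of the $m$-photon-subtracted state from the expressions above; the choice $m=g$ and $n_{\text{th}}=2$ is ours and purely illustrative.
\begin{verbatim}
# Subtracted statistics (subtr) vs. coherently driven thermal oscillator.
from math import sqrt

def driven(n_th, g):
    n_c = g * n_th
    mean = n_th + n_c
    var = 2 * n_c * n_th + n_c + n_th**2 + n_th
    return mean, mean / sqrt(var), var / mean        # <n>, MDR, Fano

def subtracted(n_th, m):
    mean = (m + 1) * n_th
    var = (m + 1) * n_th * (1 + n_th)                # from F = 1 + n_th
    return mean, mean / sqrt(var), var / mean

n_th = 2.0
for k in range(1, 4):
    s, d = subtracted(n_th, k), driven(n_th, k)
    print(f"m=g={k}: subtracted <n>,MDR,F = "
          f"{s[0]:.2f}, {s[1]:.2f}, {s[2]:.2f}; "
          f"driven <n>,MDR,F = {d[0]:.2f}, {d[1]:.2f}, {d[2]:.2f}")
\end{verbatim}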
\subsection*{Benchmark for available work}
Let us evaluate the available work (\ref{work}) that is performed while the oscillator in an initial thermal state $p_{n,\text{th}}^{(1)}$ equilibrates with the environment in thermal state $p_{n,\text{th}}^{(2)}$ with temperature $T$. The oscillator retains its thermal Bose-Einstein statistics but the mean number of quanta $n_{\text{th}}^{(1)}$ decreases to $n_{\text{th}}^{(2)}<n_{\text{th}}^{(1)}$. For $n_{\text{th}}^{(2)}>0$, the normalized work reads \begin{equation}\label{cool}
\frac{\langle W\rangle_{\text{yield}}}{k_BT}=D(p_{n,\text{th}}^{(1)}||p_{n,\text{th}}^{(2)})=n_{\text{th}}^{(1)}\ln\frac{n_{\text{th}}^{(1)}}{n_{\text{th}}^{(2)}}+(1+n_{\text{th}}^{(1)})\ln\frac{1+n_{\text{th}}^{(2)}}{1+n_{\text{th}}^{(1)}}. \end{equation} Expression (\ref{cool}), evaluated with $n_{\text{th}}^{(1)}$ set to the mean photon number of the corresponding $m$-photon-subtracted state and $n_{\text{th}}^{(2)}=n_{\text{th}}$, represents the lower bound (light gray areas) in Fig.~\ref{fig:W}(a). The mean number difference $\delta n_{\text{th}}=n_{\text{th}}^{(2)}-n_{\text{th}}^{(1)}<0$ corresponds to cooling of the oscillator, where thermal energy is dissipated to another reservoir at a lower temperature. Positive $\delta n_{\text{th}}>0$ means that the system has been heated, which requires an additional source of thermal energy; therefore, this case is not considered here.
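Expression (\ref{cool}) is easy to evaluate; the following minimal sketch (illustrative parameter choices) prints the heated-thermal lower bound used in Fig.~\ref{fig:W}(a) together with the ground-state-cooling benchmark.
\begin{verbatim}
# Relative entropy between two thermal states, Eq. (cool).
from math import log

def work_thermal(n1, n2):
    return n1 * log(n1 / n2) + (1 + n1) * log((1 + n2) / (1 + n1))

n_th = 2.0
print(f"cooling to the ground state: {log(1 + n_th):.3f}")
for m in range(1, 4):
    n1 = (m + 1) * n_th        # thermal state heated to the mean of (subtr)
    print(f"m={m}: heated-thermal bound = {work_thermal(n1, n_th):.3f}")
\end{verbatim}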
\subsection*{Benchmark for carried information}
For two thermal distributions $p_{n,\text{th}}^{(1)}$ with mean number of photons $n_{\text{th}}^{(1)}$ (representing bit 1) and $p_{n,\text{th}}^{(0)}$ with $n_{\text{th}}^{(0)}<n_{\text{th}}^{(1)}$ (representing bit 0), the optimal measurement strategy distinguishes between a number of quanta less than or equal to $n_{\text{max}}$ (detection of bit 0) and higher than $n_{\text{max}}$ (detection of bit 1). The maximum mutual information (\ref{inf}) over $p^A_0$ approaches \begin{equation}\label{ass} \mbox{max}_{p^A_0}I=\log_2\left(1+2^\frac{H(p_{01})-H(p_{10})}{1-p_{01}-p_{10}}\right) -\frac{1-p_{10}}{1-p_{01}-p_{10}}H(p_{01})+\frac{p_{10}}{1-p_{01}-p_{10}}H(p_{10}), \end{equation} where $H(x)=-x \log_2 x - (1-x)\log_2 (1-x)$ is the binary entropy function, $p_{01}=1-\left(n_{\text{th}}^{(1)}/(1+n_{\text{th}}^{(1)})\right)^{1+n_{\text{max}}}$ is the error probability of sending bit 1 and receiving it as bit 0, and $p_{10}=\left(n_{\text{th}}^{(0)}/(1+n_{\text{th}}^{(0)})\right)^{1+n_{\text{max}}}$ is the error probability of sending bit 0 and receiving it as bit 1. To minimize the total error probability it is necessary to use two states whose distributions have the smallest possible overlap. For thermal states we can assume $n_{\text{th}}^{(0)}=0$, which yields $p_{10}=0$ and $n_{\text{max}}=0$. The optimal extraction of information is then simply the measurement of zero and non-zero energy. In this case, the maximum mutual information \begin{equation}\label{bench} \langle I\rangle_{0} =\mbox{max}_{p^A_0}I= \log_2\left(1+n_{\text{th}}^{(1)}(1+n_{\text{th}}^{(1)})^{-(1+n_{\text{th}}^{(1)})/n_{\text{th}}^{(1)}}\right) \end{equation} increases monotonically with $n_{\text{th}}^{(1)}$, linearly as $n_{\text{th}}^{(1)}/(e\ln 2)$ for small $n_{\text{th}}^{(1)}$, and slowly saturates at 1 bit. The benchmark (\ref{bench}) sets a lower bound on the mutual information available by using the vacuum state (bit 0) and a thermal state with the same mean photon number as the prepared $m$-photon-subtracted state (bit 1). This bound is shown in Fig.~\ref{fig:MI}(a) by light gray areas for individual $m$.
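As a consistency check of the two expressions above, the sketch below (our own, with arbitrary test values of $n_{\text{th}}^{(1)}$) confirms numerically that Eq.~(\ref{ass}) with $p_{10}=0$ and $n_{\text{max}}=0$ reproduces the benchmark (\ref{bench}).
\begin{verbatim}
# Check: Eq. (ass) with p10 = 0, n_max = 0 reduces to Eq. (bench).
from math import log2

def H(x):
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def I_general(p01, p10):                 # Eq. (ass)
    d = 1 - p01 - p10
    return (log2(1 + 2**((H(p01) - H(p10)) / d))
            - (1 - p10) / d * H(p01) + p10 / d * H(p10))

def I_bench(n1):                         # Eq. (bench)
    return log2(1 + n1 * (1 + n1)**(-(1 + n1) / n1))

for n1 in (0.5, 2.0, 8.0):
    p01 = 1 - n1 / (1 + n1)              # n_max = 0
    print(f"n1={n1}: Eq.(ass)={I_general(p01, 0.0):.4f}, "
          f"Eq.(bench)={I_bench(n1):.4f}")
\end{verbatim}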
\section*{Author contributions statement} J.H. constructed experimental setup, performed measurements and numerical simulations, and analyzed data. M.J. supervised the experiment and analyzed data. R.F. suggested the theoretical idea, performed calculations, and supervised the project. All authors participated in writing the manuscript.
\section*{Additional information} The authors declare no competing financial interests. Correspondence and requests for material should be addressed to R.F. [email protected].
\end{document}
\begin{document}
\title{Spatial decoherence near metallic surfaces}
\author{R.~Fermani\footnote{Electronic address: [email protected]}, S.~Scheel, and P.L.~Knight}
\affiliation{Quantum Optics and Laser Science, Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2BW, United Kingdom}
\date{\today}
\begin{abstract} We present a first-principles derivation of spatial atomic-sublevel decoherence near dielectric and metallic surfaces. The theory is based on the electromagnetic-field quantization in absorbing dielectric media. We derive an expression for the time-variation of the off-diagonal matrix element of the atomic density matrix for arbitrarily shaped substrates. For planar multilayered substrates we find that for small lateral separations of the atom's possible positions the spatial coherence decreases quadratically with the separation and inversely with the square of the atom-surface distance. \end{abstract}
\pacs{34.50.Dy, 42.50.Nn, 42.50.Ct, 03.75.Be}
\maketitle
\section{Introduction}
New physical models for quantum information processing and quantum computation have been inspired recently by the experimental achievements in trapping and controlling ultracold neutral atoms \cite{GREINER,NATURE_PHASETrans,SCHIEDMAYER,JAKSCH1999,ZANARDI}. The first experimental step towards a physical realization of a quantum computer with neutral atoms is to confine them in a definite region of space. The creation of microscopic guides and traps for neutral atoms moving close to surfaces is possible using nanofabricated structures that either carry currents or are based on permanently magnetized films. The idea underlying atom chips was put forward by Frisch and Segr\'{e} \cite{SEGRE}, who realized that, when a homogeneous magnetic field (`bias field') is superimposed on the field created by a current flowing through a wire, the total magnetic field vanishes on a line parallel to the current, which can trap atoms in low-field-seeking magnetic hyperfine sublevels.
One of the main requirements for a qubit is to be well isolated from a noisy environment to avoid decoherence, namely the destruction of quantum superpositions due to the coupling of the atom cloud to the noisy chip environment. Although neutral atoms are considered good candidates as quantum systems since they have a small coupling to the environment, they still suffer from loss and decoherence. When atoms are trapped in atom chips, they are held close to the material surfaces. The small separation between the cold atom cloud and the macroscopic environment (usually at room temperature) raises the question of how strong the energy exchange will be, and which limit of atom confinement and height above the surface can ultimately be reached. Thermal fluctuations induce noise currents \cite{JOHNSONOISE} in the materials the trap is made of, and fluctuations of the electromagnetic field are produced in the conducting body. Such fluctuating fields can be strong enough for an atom close to the surface to drive rf magnetic dipole transitions that flip its spin causing either its loss or decoherence of its quantum state.
In \cite{Varpula,HENKEL/99,FOLMAN&Al,HENKEL/03,SPIN-FLIP,Henkel05,Scheel05}, atom loss due to thermally driven spin flips has been widely investigated, and several experiments have confirmed the theoretical findings \cite{EXPATOMCHIP,Vuletic,Cornell}. In this article we examine the influence of thermally induced spin flips on the coherence properties of atomic spatial superposition states. Such coherent superpositions can be thought of as being created by tunneling through a shallow potential barrier in either a double-well potential or, more generally, an optical lattice structure \cite{MANDEL_nature}. The latter has received much attention over recent years for its potential application in quantum information processing (see, e.g., \cite{PACHOS03,JAKarXiv}). The derivation is carried out within the framework of the quantum electrodynamic theory for electromagnetic fields in dielectric media \cite{VOGEL,AGARWAL,GRUNER,DUNG/98,SCHEEL/98,SCHEEL/99,DUNG/00,PERINA}, which yields a first-principles description of the decoherence properties of spatial atomic superposition states.
This work is organized as follows: Sec.~II introduces the basic notions of a quantized electromagnetic field in a dielectric medium. In Sec.~III the density matrix of the atom is obtained in the presence of a fluctuating magnetic field and an expression for the spatial coherence is derived. We focus on a particular substrate geometry, a planarly multilayered structure, in Sec.~\ref{sec:planar}, for which the dyadic Green function is explicitly known.
\section{Basic equations} \label{sec:QED}
It is well known that the quantum statistical properties of electromagnetic fields and their interactions with atomic systems can be strongly influenced by the presence of dielectric bodies. In the present context it is useful to formulate quantum electrodynamics (QED) on a dielectric-matter background \cite{VOGEL,AGARWAL,GRUNER,DUNG/98,SCHEEL/98,SCHEEL/99,DUNG/00,PERINA}. The interaction between atomic systems and the electromagnetic field is typically treated in terms of the polarization and magnetization associated with the atomic charges. Let us restrict our attention to an isotropic but arbitrarily inhomogeneous medium whose polarization responds linearly and locally to the electric field. Causality and the fluctuation-dissipation theorem \cite{FLUCT/DISSIP} then require that \begin{equation} \label{Polarization} \mathbf{P}(\mathbf{r},t)= \varepsilon_0 \int\limits_0^{\infty} d\tau\, \chi(\mathbf{r},\tau) \mathbf{E}(\mathbf{r}, t-\tau) +\mathbf{P}_N(\mathbf{r},t) \,, \end{equation} where $\chi(\mathbf{r},t)$ is the dielectric susceptibility (in the time domain) and $\mathbf{P}_N (\mathbf{r},t)$ is the noise polarization associated with dissipative processes in the dielectric medium.
Using Maxwell's equations in Fourier space, we find that $\mathbf{E}(\mathbf{r},\omega)$ obeys the Helmholtz equation \begin{equation} \label{HelmholtzEq} \bm{\nabla} \times \bm{\nabla} \times \mathbf{E}(\mathbf{r},\omega) -\frac{\omega^2}{c^2}\varepsilon (\mathbf{r},\omega) \mathbf{E}(\mathbf{r},\omega)= \omega^2 \mu_0 \mathbf{P}_N(\mathbf{r},\omega), \end{equation} where the complex permittivity, $\varepsilon (\mathbf{r}, \omega)\!=\!\varepsilon_{R} (\mathbf{r}, \omega)+i\, \varepsilon_{I} (\mathbf{r}, \omega)$, is defined by \begin{equation} \varepsilon (\mathbf{r}, \omega) = 1+\int\limits_0^{\infty} d \tau e^{i\omega \tau} \chi (\mathbf{r},\tau). \end{equation} The solution to Eq.~(\ref{HelmholtzEq}) can then be written as \begin{equation} \label{ElectricGreen} \mathbf{E}(\mathbf{r},\omega) = \omega^2 \mu_0 \int d^3 \mathbf{r}'\bm{G}(\mathbf{r},\mathbf{r}',\omega) \mathbf{P}_N(\mathbf{r}',\omega), \end{equation} where the Green tensor $\bm{G}(\mathbf{r},\mathbf{r}',\omega)$ is a second-rank tensor that has to be determined from the partial differential equation \begin{equation} \label{diffEqG} \nabla \times \nabla \times \bm{G}(\mathbf{r},\mathbf{r}',\omega) -\frac{\omega^2}{c^2}\varepsilon (\mathbf{r},\omega) \bm{G}(\mathbf{r},\mathbf{r}',\omega)= \mathbf{\delta} (\mathbf{r}-\mathbf{r}')\bm{U}, \end{equation} where $\bm{U}$ is the unit dyad. An important consequence of the differential equation (\ref{diffEqG}) is the integral relation \cite{SCHEEL/98} \begin{equation} \label{magicformula} \int d^3\mathbf{s} \frac{\omega^2}{c^2} \varepsilon_I(\mathbf{s},\omega) \bm{G}(\mathbf{r},\mathbf{s},\omega) \bm{G}^+(\mathbf{r}',\mathbf{s},\omega) = \mathrm{Im}\bm{G}(\mathbf{r},\mathbf{r}',\omega) . \end{equation}
Quantization of this theory then proceeds in the usual way \cite{PERINA}. First, a factor is split off from the (classical) noise polarization, \begin{equation} \label{polarizNoise} \mathbf{P}_N (\mathbf{r},\omega) = i \sqrt{\frac{\hbar \varepsilon_0}{\pi} \varepsilon_I (\mathbf{r},\omega)}\, \mathbf{f} (\mathbf{r},\omega ). \end{equation} One then identifies the dynamical variables $\mathbf{f}(\mathbf{r},\omega)$ as the fundamental $\delta$ correlated Gaussian random process and, upon quantization, replaces them by the operator-valued bosonic vector field $\hat{\mathbf{f}}(\mathbf{r},\omega)$ satisfying the equal-time commutation relations $\left[\hat{\mathbf{f}}(\mathbf{r},\omega),\hat{\mathbf{f}}^{\dagger}(\mathbf{r}',\omega')\right]$ $\!=$ $\!\delta(\mathbf{r}-\mathbf{r}')\delta(\omega-\omega')\bm{U}$. The Hamiltonian of the system composed of electromagnetic field and absorbing matter is \begin{equation} \label{FieldH} \hat{H}_F = \int d^3 \mathbf{r} \int\limits_0^{\infty} d \omega \, \hbar \omega \,\hat{\mathbf{f}}^{\dagger}(\mathbf{r},\omega) \hat{\mathbf{f}}(\mathbf{r},\omega). \end{equation}
The electromagnetic field operators can now be obtained in the Schr\"odinger picture as \begin{gather} \label{eq:eschroedinger} \hat{\mathbf{E}}(\mathbf{r}) = \int\limits_0^{\infty} d \omega \hat{\mathbf{E}}(\mathbf{r},\omega)+\mbox{H.c.}, \\ \label{ElectricFoper} \hat{\mathbf{E}}(\mathbf{r},\omega) = i \sqrt{\frac{\hbar}{\pi\varepsilon_0}} \frac{\omega^2}{c^2} \int d^3 \mathbf{r}'\sqrt{\varepsilon_I (\mathbf{r}', \omega)} \bm{G}(\mathbf{r},\mathbf{r}',\omega) \hat{\mathbf{f}}(\mathbf{r}',\omega) \end{gather} and, using Faraday's law, \begin{equation} \label{magneticFoper} \hat{\mathbf{B}}(\mathbf{r},\omega) = (i \omega)^{-1} \bm{\nabla} \times \hat{\mathbf{E}}(\mathbf{r},\omega). \end{equation}
An important feature of this theory is that it reproduces the correct form of the fluctuation-dissipation theorem. Let the system of electromagnetic field and absorbing matter be in thermal equilibrium at some temperature $T$. Then the thermal correlation function of the dynamical variables at temperature $T$ reads \begin{equation} \label{eq:correlator} \langle \hat{\mathbf{f}}(\mathbf{r},\omega) \hat{\mathbf{f}}^\dagger(\mathbf{r}',\omega') \rangle = (\bar{n}_{\mathrm{th}}+1) \delta(\mathbf{r}-\mathbf{r}') \delta(\omega-\omega') \bm{U}, \end{equation} with the mean thermal photon number at frequency $\omega$ \begin{equation} \bar{n}_{\mathrm{th}} = \frac{1}{e^{\hbar\omega/k_BT}-1}. \end{equation} From Eqs.~(\ref{ElectricFoper})---(\ref{eq:correlator}), together with Eq.~(\ref{magicformula}), it follows that the thermal expectation value of an anti-normally ordered product of magnetic field operators can be written as \begin{align} \label{eq:fdt} & \langle \hat{\mathbf{B}}(\mathbf{r},\omega) \hat{\mathbf{B}}^\dagger(\mathbf{r}',\omega') \rangle = \nonumber \\ & \frac{\hbar\mu_0}{\pi} \mathrm{Im}\left[ \overrightarrow{\bm{\nabla}} \times \bm{G}(\mathbf{r},\mathbf{r}',\omega) \times \overleftarrow{\bm{\nabla}} \right] (\bar{n}_{\mathrm{th}}+1) \delta(\omega-\omega') . \end{align}
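To put the factor $(\bar{n}_{\mathrm{th}}+1)$ into perspective, the following small numerical aside (with assumed but typical values: room temperature and a spin-flip frequency in the sub-MHz range, cf.~Sec.~\ref{sec:planar}) shows that $\bar{n}_{\mathrm{th}}\gg 1$ at the relevant frequencies, so thermal fluctuations dominate over vacuum fluctuations.
\begin{verbatim}
# Mean thermal photon number entering Eq. (eq:correlator).
from math import pi, expm1

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J/K
T, f = 300.0, 560e3      # assumed temperature (K) and transition frequency (Hz)

x = hbar * 2 * pi * f / (kB * T)
print(f"hbar*omega/(kB*T) = {x:.2e}, n_th = {1.0/expm1(x):.3e}")
\end{verbatim}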
Such a quantization model provides a valid description of the electromagnetic field in absorbing dielectric materials. In fact, it has been shown in \cite{DUNG/98,SCHEEL/98} that the equal-time basic commutation relations of QED are preserved. The electromagnetic field is expressed in terms of the classical Green tensor satisfying the Helmholtz equation (\ref{HelmholtzEq}), and the continuum of the bosonic field variables $\hat{\mathbf{f}}(\mathbf{r},\omega)$. All the information about the dielectric matter is contained in the Green tensor via the permittivity $\varepsilon (\mathbf{r},\omega)$. For metals at low frequencies, the permittivity can be approximated by the well-known Drude relation \begin{equation} \varepsilon(\omega) \approx \frac{2ic^2}{\omega^2\delta^2} \end{equation} with the skin depth $\delta$. Although such a relation is not strictly consistent with causality, as has recently been pointed out \cite{Tip}, it can be assumed to be valid in a restricted frequency interval.
At this point it is necessary to point out the limitations of the quantization scheme presented above. Note that the form of the polarization, Eq.~(\ref{Polarization}), is valid only for strictly locally responding materials. That is to say, we assume that the elementary dipoles that give rise to the polarization are essentially fixed in space. Certainly, for metals, which can alternatively be described by a conductivity, this is not true, as charge carriers can move around freely over considerable distances. However, the locality assumption can be upheld in situations in which the mean free path length is much shorter than all the other length scales in the system under consideration. While this is certainly true for ordinary metals at room temperature and geometric length scales of several micrometers, we do expect corrections due to the spatially nonlocal response (the anomalous skin effect) for metals or superconductors at very low temperatures, as considered in \cite{Scheel05,Henkel05}.
\section{Spatial Decoherence} \label{sec:dec}
Let us suppose we had an atom in one of two adjacent sites of an optical lattice. The tunneling interaction allows the atom's wave function to coherently spread over the neighboring site \cite{HAYCOCK} where its state can be written in the occupation-number basis as \begin{equation}
|\psi(t=0)\rangle_A = \frac{1}{\sqrt{2}}
\left( |1,0\rangle +|0,1\rangle \right). \end{equation} We take the time at which the equal superposition has been established to be $t=0$ and assume for simplicity that no tunneling occurs at later times, at least not at timescales shorter than the decoherence time. This means that we imagine the tunneling interaction being frozen over a certain time period. This assumption is justified when considering proposals in which spatial atomic locations are used to encode quantum information.
Atoms that are held close to microstructured surfaces experience fluctuations of the electromagnetic field due to absorption in the substrate material. In the case of a magnetic trap the atom is subject to a constant magnetic field with strength $B_0$ in the center of the trap. The magnetic sublevels are split due to the Zeeman effect by the Larmor frequency $\omega_L=g_S\mu_B B_0/\hbar$. A subset of these magnetic sublevels feel an attractive potential towards regions of low magnetic field. In the experiment reported in \cite{EXPATOMCHIP} $^{87}$Rb atoms are initially pumped into the hyperfine state
$|F,m_F\rangle=|2,2\rangle$ in which they are trapped. However, due to absorption in the surface material and the resulting quantum fluctuations, fluctuating magnetic fields cause the atoms to evolve into states with lower magnetic quantum number $m_F$. In sufficiently tight magnetic traps, atoms in the $|F,m_F\rangle=|2,1\rangle$ state are also trapped. Spin flips to even lower magnetic sublevels cause the atoms to be expelled from the trap. In this case, spatial decoherence is no longer a matter of interest. Hence, it is sufficient to treat the atomic system in a two-level approximation.
We focus on the Zeeman coupling of the atomic magnetic moment to a fluctuating field represented by the Hamiltonian \begin{equation} \label{Hzeeman} \hat H_Z = - \hat{\bm{\mu}} \cdot \hat{\textbf{B}}({\textbf{r}}_A), \end{equation} where the operator of the magnetic induction is given by Eq.~(\ref{magneticFoper}), together with Eqs.~(\ref{eq:eschroedinger})
and (\ref{ElectricFoper}). The magnetic moment operator in Eq. (\ref{Hzeeman}) associated with a transition $|i\rangle\to|f\rangle$ can be written as $\hat{\bm{\mu}}=\bm{\mu} |i\rangle \langle f|+ \mbox{h.c.}$. Since we assume the atom to be cooled into its electronic ground state, there is no contribution of the angular momentum. Furthermore, since the nuclear magnetic moment can be neglected because of the ratio of the electron mass to the mass of the nucleus (see the discussions in \cite{HENKEL/99,SPIN-FLIP}), the magnetic moment vector is just proportional to the expectation value of the electronic spin operator, \begin{equation}
\bm{\mu} = g_S \mu_B \langle i|\hat{\mathbf{S}}|f \rangle , \end{equation} where $\mu_B$ denotes the Bohr magneton, and $g_S\approx 2$ the electron's $g$-factor. Inserting Eq.~(\ref{magneticFoper}) into Eq.~(\ref{Hzeeman}), the Zeeman Hamiltonian can be written in the rotating-wave approximation as \cite{SPIN-FLIP} \begin{eqnarray} \hat H_Z &= &
-\mu_B g_S \left[ \langle f|\hat{S}_q| i\rangle \hat{\xi}^\dagger \hat{B}_q(\mathbf{r}_A) + \mbox{h.c.} \right] \nonumber \\ &=&
-\mu_B g_S \left [\langle f|\hat{S}_q| i\rangle \int\limits_0^\infty d \omega \frac{\omega}{c^2}\sqrt{\frac{\hbar}{\varepsilon_0 \pi}}\, \epsilon_{qpj}\, \partial_p \int d^3 \mathbf{s} \right. \nonumber \\ && \left. \times \sqrt{\varepsilon_I(\mathbf{s},\omega)} G_{ji} ({\mathbf{r}}_A,{\mathbf{s}},\omega) \hat{f}_i({\mathbf{s}},\omega) \hat{\xi}^{+} +\mbox{h.c.} \right] \nonumber \\ \label{Happrox} \end{eqnarray}
where $\hat{\xi}=|f\rangle\langle i|$ denotes the atomic spin lowering operator. Finally, the free atomic Hamiltonian can be written in the two-level approximation used above as \begin{equation} \label{eq:Hatom} \hat H_A = \hbar\omega_A \hat{\xi}_z =
\frac{1}{2} \hbar\omega_A (|i\rangle\langle i|-|f\rangle\langle f|), \end{equation} where the $\hat{\xi}$ obey the commutation rules $[\hat{\xi}^{(\dagger)},\hat{\xi}_z]=\mp\hat{\xi}^{(\dagger)}$.
In order to analyze how this magnetic noise influences the coherence of the state of our atom, we rewrite the initial atomic state as \begin{equation}
|\psi_A\rangle = \frac{1}{\sqrt{2}} \left( |i_1\rangle +
|i_2\rangle \right), \end{equation} where the labels $1, 2$ refer to the occupied site. Let us consider a system composed of the two-level atom and a fluctuating magnetic field initially in the vacuum state
$|0\rangle$, so that the total state of the atom-field system reads \begin{eqnarray}
|\psi_{AF}\rangle = \frac{1}{\sqrt{2}}\left (|i_1,0\rangle +
|i_2,0\rangle \right ). \end{eqnarray} The Hamiltonian describing the evolution of the combined system is given by the sum of the three Hamiltonians $\hat{H} = \hat{H}_F+ \hat{H}_A +\hat{H}_Z$, where $\hat{H}_F$, and $\hat{H}_Z$, and $\hat{H}_A$ are given by Eqs.~(\ref{FieldH}), (\ref{eq:Hatom}), and (\ref{Happrox}), respectively. The system wave function at a certain time $t$ can be written as \cite{DUNG/00} \begin{eqnarray} \lefteqn{
|\psi_{AF} (t)\rangle =
C_{i_1}(t) e^{-i \omega_A t/2} |i_1, 0 \rangle
+ C_{i_2}(t) e^{-i \omega_A t/2} |i_2, 0 \rangle } \nonumber \\ && +\int d^3\mathbf{r} \int\limits_0^\infty d\omega \, C_{{f_1},m}({\mathbf{r}},\omega,t) e^{-i(\omega -\omega_A /2)t}
|f_1,1_m({\mathbf{r}},\omega)\rangle \nonumber \\ && +\int d^3\mathbf{r} \int\limits_0^\infty d\omega \, C_{{f_2},m}({\mathbf{r}},\omega,t)
e^{-i(\omega -\omega_A /2)t} |f_2,1_m({\mathbf{r}},\omega)\rangle, \nonumber \\ \label{SystemState} \end{eqnarray}
where $|0\rangle$ and $|1_m({\mathbf{r}},\omega)\rangle$ denote the electromagnetic field vacuum and single-excitation states, respectively. The Schr\"{o}dinger equation
$i\hbar\partial_t|\psi_{AF}(t)\rangle=\hat{H}|\psi_{AF}(t)\rangle$ yields ($a=1,2$) \begin{eqnarray} \label{CinizDot} \lefteqn{ \dot{C}_{i_a}(t) = \frac{i\mu_B g_S}{c^2\sqrt{\pi\hbar\varepsilon_0}}
\langle f|\hat{S}_q| i\rangle \int d^3\mathbf{r} \int\limits_0^\infty d\omega } \nonumber \\ && \times \omega e^{-i(\omega-\omega_A)t} \sqrt{\mathbf{\varepsilon}_I(\mathbf{r},\omega)} \epsilon_{qpj} \partial_p G_{jm}(\mathbf{r}_a,\mathbf{r},\omega) \nonumber \\ && \times C_{f_a,m}(\mathbf{r},\omega,t) , \end{eqnarray} \begin{eqnarray} \label{eq:cfoft} \lefteqn{ \dot{C}_{f_a,m}(\mathbf{r},\omega,t) = \frac{i \mu_B g_S}{c^2 \sqrt{\pi \varepsilon_0 \hbar}}
\langle i|\hat{S}_q| f\rangle \omega e^{i(\omega -\omega_A)t} } \nonumber \\ && \times \sqrt{\varepsilon_I(\mathbf{r},\omega)} \epsilon_{qpj}\partial_p G_{jm}^\ast(\mathbf{r}_a,\mathbf{r},\omega) C_{i_a}(t). \end{eqnarray}
We now substitute the result of formal integration of $C_{f_a,m}(\mathbf{r},\omega,t)$ with the condition $C_{f_a,m}(\mathbf{r},\omega,0)=0$ into $\dot{C}_{i_a}(t)$, make use of the integral relation (\ref{magicformula}), and obtain \begin{equation} \label{Ci1Dot} \dot{C_{i_a}}(t) =\int^t_0 dt' \,K_a(t-t')C_{{i_a}}(t'), \end{equation} where the integral kernel is \begin{eqnarray} \lefteqn{ K_a(t-t')= -\frac{(\mu_B g_s)^2}{c^2 \pi \varepsilon_0 \hbar}\,
\langle f|\hat{S}_q| i\rangle\,
\langle i|\hat{S}_k| f\rangle } \nonumber \\ && \times \int\limits_0^\infty d\omega\, e^{-i(\omega -\omega_A)(t-t')} \mathrm{Im}\left[ \overrightarrow{\bm{\nabla}}\times \bm{G}(\mathbf{r}_a,\mathbf{r}_a,\omega) \times \overleftarrow{\bm{\nabla}}\right]_{qk}. \nonumber \\ \end{eqnarray} Integrating both sides of Eq.~(\ref{Ci1Dot}) over $t$ and changing the order of integrations on the right-hand side, we derive \begin{equation} C_{i_a}(t)-C_{i_a}(0) = \int\limits_0^t dt' \,\bar{K}_a(t-t') C_{i_a}(t') \end{equation} with \begin{eqnarray} \lefteqn{ \bar{K}_a (t-t') = \frac{(\mu_B g_S)^2}{c^2 \pi \varepsilon_0 \hbar}\,
\langle f|\hat{S}_q| i\rangle\,
\langle i|\hat{S}_k| f\rangle } \nonumber \\ && \hspace*{-3ex}\times \int\limits_0^\infty d\omega\, \frac{e^{-i(\omega -\omega_A)(t-t')} -1}{i(\omega -\omega_A)} \mathrm{Im}\left[ \overrightarrow{\bm{\nabla}} \times \bm{G}(\mathbf{r}_a,\mathbf{r}_a,\omega) \times \overleftarrow{\bm{\nabla}} \right]_{qk} \nonumber \\ \end{eqnarray} and the initial condition $C_{i_a}(0)=1$. When the Markov approximation applies, i.e., when in coarse grained description of the atomic motion memory effects are disregarded, we may let \cite{BARNETT} \begin{equation} \label{Markov} \frac{\left (e^{-i(\omega -\omega_A)(t-t')} -1 \right )} {i(\omega -\omega_A)}\rightarrow -\pi\delta(\omega-\omega_A) +i{\cal P}\frac{1}{\omega-\omega_A}. \end{equation} Defining the coefficients \begin{eqnarray} \lefteqn{ \label{A1} \Gamma_a = 2 \left (\frac{(\mu_B g_S)^2 }{c^2
\varepsilon_0 \hbar} \right)\,\langle f|\hat{S}_q| i\rangle\,
\langle i|\hat{S}_k| f\rangle } \nonumber \\ && \times \mathrm{Im} \left[ \overrightarrow{\bm{\nabla}}\times \bm{G}(\mathbf{r}_a,\mathbf{r}_a,\omega_A) \times \overleftarrow{\bm{\nabla}}\right]_{qk} \end{eqnarray} and \begin{eqnarray} \label{eq:deltaomega} \lefteqn{ \delta\omega_a = \left(
\frac{(\mu_B g_S)^2 }{c^2 \pi \varepsilon_0\hbar} \right) \langle f|\hat{S}_q| i\rangle\, \langle i|\hat{S}_k| f\rangle } \nonumber \\ && \times \mathcal{P} \int\limits^{\infty}_0 d\omega \frac{\mathrm{Im}\left[\overrightarrow{\bm{\nabla}}\times \bm{G}(\mathbf{r}_a,\mathbf{r}_a,\omega) \times \overleftarrow{\bm{\nabla}} \right]_{qk}} {\omega - \omega_A}, \end{eqnarray} we can write $\bar{K}_a(t-t')=-\frac{1}{2}\Gamma_a+i\delta\omega_a$. We finally obtain for the time evolution of the coefficients $C_{i_a}(t)$ \begin{equation} \label{Ci1Sol} C_{i_a} (t) = \exp \left[ \left( -\frac{1}{2} \Gamma_a + i \delta\omega_a \right) t \right] . \end{equation} The coefficients $\Gamma_a$ and $\delta\omega_a$ defined in Eqs.~(\ref{A1}) and (\ref{eq:deltaomega}) represent the spin flip rate and the line shift, respectively, and have been derived in a similar fashion in \cite{SPIN-FLIP}. The spin flip lifetimes $1/\Gamma_a$ have already been the subject of major theoretical \cite{Varpula,HENKEL/99,FOLMAN&Al,HENKEL/03,SPIN-FLIP,Henkel05,Scheel05} and experimental \cite{EXPATOMCHIP,Vuletic,Cornell} investigations, which will not be repeated here. In what follows, we will assume that the line shift $\delta\omega_a$ caused by the interaction with the quantized electromagnetic field is negligible. This can be seen as follows. The Green function appearing in Eq.~(\ref{ElectricGreen}), as well as the Fourier transform of the permittivity in Eq.~(\ref{Polarization}), plays the role of a response function and so it satisfies the Kramers-Kronig relations for a complex-valued function $g(\omega)=\mathrm{Re}[g(\omega)]+i \mathrm{Im} [g(\omega)]$ \cite{NUSSENZVEIG}, \begin{eqnarray} \mathrm{Re}[g(\omega)]&=&\frac{1}{\pi} \mathcal{P} \int\limits^{\infty}_{-\infty} d\omega' \frac{ \mathrm{Im}[g(\omega')]}{\omega'-\omega}, \\ \mathrm{Im}[g(\omega)]&=&-\frac{1}{\pi} \mathcal{P} \int\limits^{\infty}_{-\infty} d\omega' \frac{ \mathrm{Re}[g(\omega')]}{\omega'-\omega}. \end{eqnarray} The lower limit of the integral in Eq.~(\ref{eq:deltaomega}) can be extended to $-\infty$ with little error as the integrand is peaked around $\omega_A$. Hence, Eq.~(\ref{eq:deltaomega}) can be rewritten as \begin{eqnarray} \label{eq:deltaomega2} \lefteqn{ \delta\omega_a = \left(
\frac{(\mu_B g_S)^2 }{c^2 \varepsilon_0\hbar} \right) \langle f|\hat{S}_q| i\rangle\, \langle i|\hat{S}_k| f\rangle } \nonumber \\ && \times \mathrm{Re}\left[\overrightarrow{\bm{\nabla}}\times \bm{G}(\mathbf{r}_a,\mathbf{r}_a,\omega_A) \times \overleftarrow{\bm{\nabla}} \right]_{qk} . \end{eqnarray} As we will see later, the line shift is of the same order of magnitude as the spin flip rate. For typical experimental realizations, \cite{HENKEL/99,FOLMAN&Al,HENKEL/03,SPIN-FLIP,Henkel05,Scheel05,EXPATOMCHIP,Vuletic,Cornell}, this will be in the sub-Hz range. This means that $\delta \omega_a$ can be neglected as it is extremely small when compared to the spin flip transition frequency.
Now substituting Eq.~(\ref{Ci1Sol}) into the expression for $\dot{C}_{f_a,m}(\mathbf{r},\omega,t)$, Eq.~(\ref{eq:cfoft}), we find the formal solution \begin{align} & C_{f_a,m}(\mathbf{r},\omega,t)= \frac{i \mu_B g_S}{c^2 \sqrt{\pi
\varepsilon_0 \hbar}} \langle i|\hat{S}_q| f\rangle \omega \sqrt{\varepsilon_I(\mathbf{r},\omega)} \epsilon_{qpj}\partial_p \nonumber \\ & \times G_{jm}^\ast(\mathbf{r}_a,\mathbf{r},\omega) \int\limits_0^t dt' e^{i(\omega -\omega_A)t'} e^{-\frac{1}{2} \Gamma_a t'}. \label{Cf1} \end{align} In order to find how the off-diagonal elements of the density matrix decay, we trace the atomic density matrix over the field and obtain \begin{eqnarray} \label{MatRhoA} \varrho_A(t) &=&
\langle 0|\varrho_{AF}(t)|0\rangle \nonumber \\ && + \sum_i \int d^3\mathbf{r} \int\limits_0^\infty d\omega \,
\langle 1_i(\mathbf{r},\omega)|\varrho_{AF}| 1_i(\mathbf{r},\omega)\rangle \nonumber \\ &=& \frac{1}{2} \left( \begin{array}{cc} \rho_{11}(t) & \rho_{12}(t) \\ \rho_{12}^\ast(t) & \rho_{22}(t) \\ \end{array} \right), \end{eqnarray} where the matrix elements $\varrho_{ij}$ of the density matrix have to be calculated from \begin{eqnarray} \label{rho1}
\varrho_{11}(t) &=& |C_{i_1}(t)|^2 + \sum_i\int d^3\mathbf{r} \int\limits_0^\infty d\omega\,
|C_{f_1,m}({\mathbf{r}},\omega,t)|^2, \nonumber \\ \\ \label{rho4}
\varrho_{22}(t) &=& |C_{i_2}(t)|^2 + \sum_i\int d^3\mathbf{r} \int\limits_0^\infty d\omega \,
|C_{f_2,m}(\mathbf{r},\omega,t)|^2 , \nonumber \\ \\ \varrho_{12}(t) &=& C_{i_1}(t) C^\ast_{i_2}(t) \nonumber \\ && +\sum_i\int d^3\mathbf{r} \int\limits_0^\infty d\omega \, C_{f_1,m}(\mathbf{r},\omega,t)\,C^\ast_{f_2,m}(\mathbf{r},\omega,t), \label{rho2} \nonumber\\ \end{eqnarray} First, it can be checked that the diagonal elements $\varrho_{11}(t)$ and $\varrho_{22}(t)$ are properly normalized to $\varrho_{11}(t)=\varrho_{22}(t)=1$ by inserting Eqs.~(\ref{Ci1Sol}) and (\ref{Cf1}) together with Eq.~(\ref{magicformula}) into Eqs.~(\ref{rho1}) and (\ref{rho4}), respectively. Thus, as a consistency check we find that $\mathrm{Tr}[\varrho_A]=1$. We can then calculate the off-diagonal elements of the density matrix as \begin{eqnarray} \label{eq:offdiagonal} \lefteqn{ \varrho_{12}(t) = e^{-\Gamma_{12} t} +2\left( 1-e^{-\Gamma_{12} t} \right) \frac{ \left( \mu_B g_S \right)^2}{c^2 \varepsilon_0 \hbar} } \nonumber \\ && \times
\langle i|\hat{S}_q| f\rangle\langle f|\hat{S}_k| i\rangle \frac{\mathrm{Im} \left[ \overrightarrow{\bm{\nabla}} \times \bm{G}(\mathbf{r}_2,\mathbf{r}_1,\omega_A) \times \overleftarrow{\bm{\nabla}} \right]_{kq}} {\Gamma_{12}} \nonumber \\ \end{eqnarray} where $\Gamma_{12}=(\Gamma_1+\Gamma_2)/2$ is the arithmetic mean of the spin flip rates, Eq.~(\ref{A1}), at both sites. Note that the Hermiticity of the density matrix $\varrho_A(t)$ follows from the reciprocity theorem applied to the dyadic Green function which yields $\bm{G}(\mathbf{r}_1,\mathbf{r}_2,\omega_A)$ $\!=$ $\!\bm{G}^T(\mathbf{r}_2,\mathbf{r}_1,\omega_A)$.
Equation~(\ref{eq:offdiagonal}) constitutes the main result of our paper. It provides, via the Green function $\bm{G}(\mathbf{r}_2,\mathbf{r}_1,\omega_A)$, an elegant way to assess the loss of spatial coherence for arbitrarily shaped substrates. Recalling the expression for the fluctuation-dissipation theorem, Eq.~(\ref{eq:fdt}), it follows that Eq.~(\ref{eq:offdiagonal}) can be rewritten as \begin{eqnarray} \label{eq:coherence} \lefteqn{ \varrho_{12}(t) = e^{-\Gamma_{12}t} +\left( 1-e^{-\Gamma_{12}t} \right) } \nonumber \\ && \hspace*{-3ex} \times
\frac{\langle i|\hat{S}_q| f\rangle\langle f|\hat{S}_k| i\rangle \int_0^\infty d\omega \langle \hat{B}_k(\mathbf{r}_2,\omega_A)\hat{B}_q^\dagger(\mathbf{r}_1,\omega)\rangle}
{\langle i|\hat{S}_q| f\rangle\langle f|\hat{S}_k| i\rangle \int_0^\infty d\omega \langle \hat{B}_k(\mathbf{r}_1,\omega_A)\hat{B}_q^\dagger(\mathbf{r}_1,\omega)\rangle} \nonumber \\ && \hspace*{-3ex} \equiv e^{-\Gamma_{12}t} +\left( 1-e^{-\Gamma_{12}t} \right) S \left ( \mathbf{r}_1,\mathbf{r}_2,\omega_A \right ) \end{eqnarray} in terms of the magnetic cross-correlation tensor $\langle\hat{\mathbf{B}}(\mathbf{r},\omega)\hat{\mathbf{B}}^\dagger(\mathbf{r}',\omega')\rangle$. This means that the imaginary part of the (magnetic) Green function is proportional to the spatial coherence function of the fluctuating magnetic field \cite{CARMINATI/99,HENKEL_CARMINATI/00,Henkel01}.
Note that, although the calculations have been performed for surfaces held at zero temperature, the extension to finite temperatures is trivial. Indeed, it is seen from Eq.~(\ref{eq:fdt}) that the spatial coherence functions as well as the spin-flip rates simply have to be multiplied by the factor $(\bar{n}_{\text{th}}+1)$ to account for thermal fluctuations.
Equation~(\ref{eq:offdiagonal}), or equivalently, Eq.~(\ref{eq:coherence}), consists of two parts. The first is a
(spatially local) exponential decay that describes the effect of the transition from the initial spin state $|i\rangle$ to the final spin state $|f\rangle$. The second term is a (spatially nonlocal) non-exponential term which is proportional to the spatial coherence function. It should be noted that, in a model in which more than a two-level transition is considered, transitions to even lower-lying hyperfine spin states become likely after this decay time. However, in our two-level approximation these flips are not taken into consideration.
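The structure of Eq.~(\ref{eq:coherence}) is easily visualized: the off-diagonal element decays from unity not to zero but to the value of the spatial coherence function $S$. The sketch below (with arbitrary, purely illustrative values of $\Gamma_{12}$ and $S$) makes this explicit.
\begin{verbatim}
# Decay of the off-diagonal element, Eq. (eq:coherence).
from math import exp

def rho12(t, gamma12, S):
    return exp(-gamma12 * t) + (1 - exp(-gamma12 * t)) * S

gamma12 = 1.0    # spin-flip rate (1/s), assumed value
S = 0.9          # spatial coherence of the magnetic noise, assumed value
for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(f"t = {t:4.1f} s: rho12 = {rho12(t, gamma12, S):.4f}")
\end{verbatim}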
\section{Planar multilayer substrates} \label{sec:planar}
Up to now, the derivation of all formulas has been valid for arbitrary substrate geometries. A particular geometric arrangement is fixed by defining the correct boundary conditions for the dyadic Green function $\bm{G}(\mathbf{r},\mathbf{s},\omega)$. In this section, we will concentrate on the simplest but experimentally important realization in terms of planar multilayer dielectrics. In what follows, we will focus on the spatially nonlocal term in Eq.~(\ref{eq:offdiagonal}) only. In particular, we notice that this is equivalent to taking the long-time limit of Eq.~(\ref{eq:offdiagonal}). Hence, for now we consider only \begin{eqnarray} \label{eq:ratio} \lefteqn{ S \left (
\mathbf{r}_1,\mathbf{r}_2,\omega_A \right )= 2 \frac{ \left( \mu_B g_S\right)^2}{c^2 \varepsilon_0\hbar} \langle i|\hat{S}_q|
f\rangle\langle f|\hat{S}_k| i\rangle } \nonumber \\ && \times \frac{\mathrm{Im}\left[ \overrightarrow{\bm{\nabla}} \times \bm{G}(\mathbf{r}_2,\mathbf{r}_1,\omega_A) \times \overleftarrow{\bm{\nabla}} \right]_{kq}}{\Gamma_{12}}, \end{eqnarray} which had previously been derived in connection with spatial decoherence of matter waves in \cite{Henkel01}. Note that in a planar geometry in which the atom is held at a fixed distance to the material surface, the spin flip rates $\Gamma_i$ coincide due to translational invariance, i.e. $\Gamma_{12}\equiv\Gamma_1=\Gamma_2$. Note also that Eq.~(\ref{eq:ratio}) is temperature-independent.
Let us first consider a half-space filled with a dielectric or metal of dielectric permittivity $\varepsilon(\omega)$ (see the discussion in Sec.~\ref{sec:QED}). We evaluate the spin matrix elements for the transition from one hyperfine ground state to another by expanding the hyperfine states in the basis states through the Clebsch--Gordan coefficients
$|F,m_F\rangle=\sum_{m_Sm_I}C_{Fm_F}^{m_Sm_I} |m_S,m_I\rangle$. For the $^{87}$Rb ground state transition $|2,2\rangle \to|2,1\rangle$, the non-zero matrix elements are $|\langle i|\hat{S}_{y,z}|f\rangle|=1/4$. The dyadic Green function for such a situation can be found in \cite{LiLW94,Chew,TOMAS,DUNG/98}. We have collected some of the formulas in Appendix~\ref{sec:green}. Note that in the expressions for the components of the generalized reflection coefficient, Eq.~(\ref{eq:coefficients}), the common factor $e^{ik_{1z}(z+z')}\equiv e^{2ik_{1z}d}$ can be approximated by
$e^{-2d|k_\||}$ because the transition wavelength,
$\lambda=c/(2\pi\omega)$, is the by far biggest length scale in the system such that the approximation $k^2_{1z}\approx-k_\|^2$ holds. Then, by going over to polar co-ordinates in the two-dimensional Fourier transform in Eq.~(\ref{eq:Weyl}),
$\mathbf{k}_\|=(k_x,k_y)\mapsto(K\cos\varphi,K\sin\varphi)$ and
$d^2\mathbf{k}_\|\mapsto KdK\,d\varphi$. We can thus write $\Gamma_{12}$ after integration over $\varphi$ as \begin{equation} \label{Gamma12} \Gamma_{12} = \frac{(\mu_B g_S)^2}{8c^2\varepsilon_0 \hbar}\, 3 \pi \int \frac{K^2dK}{(2\pi)^2} \frac{e^{-2Kd}}{2} \mathrm{Im}[r^{TE}_{12}]. \end{equation} It is worth noting at this point that the line shift $\delta\omega_a$ in Eq.~(\ref{eq:deltaomega2}) can be computed as in Eq.~(\ref{Gamma12}) by replacing $\mathrm{Im}[r^{TE}_{12}]$ with $\mathrm{Re}[r^{TE}_{12}]$. Moreover, it is easily seen that both $\Gamma_a$ and $\delta \omega_a$ are of the same order.
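To make the structure of Eq.~(\ref{Gamma12}) concrete, the following Python sketch evaluates the $K$-integral numerically. It is our own illustration and not part of the original analysis; in particular, the quasi-static form of $\mathrm{Im}[r^{TE}_{12}]$ used below (a metallic half-space characterized solely by its skin depth $\delta$, with $k_{1z}\approx iK$ for $K\gg\omega/c$) is an assumption made here for simplicity, and the overall prefactor of Eq.~(\ref{Gamma12}) is left out.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Rough numerical sketch of the K-integral in Eq. (Gamma12).  The quasi-static
# model of Im[r^TE_12] below (metal half-space described only by its skin
# depth delta, with k_1z ~ iK for K >> omega/c) is an assumption made for
# this illustration; the overall prefactor of Eq. (Gamma12) is omitted.
def im_rTE(K, delta):
    k1z = 1j * K
    k2z = np.sqrt(2j / delta**2 - K**2)
    return ((k1z - k2z) / (k1z + k2z)).imag

def k_integral(d, delta):
    """int_0^infty dK K^2 exp(-2 K d) Im[r^TE_12(K)]."""
    val, _ = quad(lambda K: K**2 * np.exp(-2 * K * d) * im_rTE(K, delta),
                  0.0, 60.0 / d, limit=200)
    return val

# Example: atom 20 um above a surface with delta = 110 um (lengths in metres).
print(k_integral(20e-6, 110e-6))
\end{verbatim}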
Let us assume that an atom is located at a distance $d$ away from the planar interface, whose material we characterize by its skin depth $\delta$. In our example, we have chosen an aluminium substrate with $\delta=110\,\mu$m and an atomic transition frequency of $f=560\,$kHz. Furthermore, the atom can be in two distinct positions with a lateral separation $l$. \begin{figure}
\caption{ Spatial coherence function of the fluctuating magnetic field $S \left ( \mathbf{r}_1,\mathbf{r}_2,\omega_A \right )$, Eq.~(\ref{eq:ratio}), as a function of the lateral separation $l$ in $\mu$m with the parameters $f=560$~kHz, $\delta=110\,\mu$m for three different distances from the surface: $d=20\,\mu$m (solid line), $d=10\,\mu$m (dotted line), and $d=5\,\mu$m (dashed line).}
\label{fig:Ratio}
\end{figure} In Fig.~\ref{fig:Ratio} we show the decay of the spatial coherence as measured by the function $S \left ( \mathbf{r}_1,\mathbf{r}_2,\omega_A \right )$ for varying separation $l$ in $\mu$m for three different atom-surface distances $d$. As a function of separation, the decay of the spatial coherence starts off rather slowly. We attribute this behaviour to the fact that for separations below the coherence length of the magnetic-field fluctuations the spin flip is driven coherently at both sites.
In order to investigate the small-separation limit in some more detail, we take a closer look at the Weyl expansion of the scattering Green tensor $\bm{R}^{(12)}(\mathbf{r},\mathbf{r}',\omega)$, Eq.~(\ref{eq:Weyl}), which is by far the dominant contribution compared with the free-space Green function. The separation $l$ is nothing but $l=|\bm{\varrho}-\bm{\varrho}'|$ and serves as a parameter in the integral. Hence, we can expand the exponential
$e^{i\mathbf{k}_\|\cdot(\bm{\varrho}-\bm{\varrho}')}$ in Eq.~(\ref{eq:Weyl}) into powers of $l$ and evaluate each term separately. The zeroth-order coefficient trivially leads to the spin flip rate $\Gamma_{12}$. The contributions from terms proportional to $l$
vanish identically due to the symmetry of the generalized reflection coefficients $R_{ij}^{(12)}$ with respect to the wave-vector components $\mathbf{k}_\|$ in the $(x,y)$-plane. In fact, all odd powers of $l$ vanish because of that symmetry.
Hence, the lowest non-vanishing power is $l^2$. It is straightforward to find analytical expressions for the spatial coherence in that limit by converting the additional factor $K^2$ coming from the expansion of the exponential in Eq.~(\ref{eq:Weyl}) into a parameter differentiation with respect to the atom-surface distance $d$. That is, since $\partial^2 e^{-2Kd}/\partial d^2=4K^2e^{-2Kd}$, we make the replacement $K^2\mapsto\frac{1}{4}\frac{\partial^2}{\partial d^2}$. In this way we find that \begin{equation} \label{eq:ratio2} S \left ( \mathbf{r}_1,\mathbf{r}_2,\omega_A \right ) = \frac{1}{\Gamma_{12}} \left( \Gamma_{12}-\frac{5 l^2}{96} \frac{\partial^2}{\partial d^2}\Gamma_{12} \right) +{\cal O}(l^4). \end{equation} In certain asymptotic regimes in which $\Gamma_{12}$ can be expressed as a monomial $\propto d^{-n}$ of the atom-surface distance $d$ (see, e.g. \cite{HENKEL/99,Henkel01,Scheel05}), Eq.~(\ref{eq:ratio2}) can be rewritten in the form \begin{equation} S \left ( \mathbf{r}_1,\mathbf{r}_2,\omega_A \right )=1-\frac{5n(n+1)l^2}{96d^2} +{\cal O}(l^4). \end{equation} In addition to the planar half-space, we consider the experimentally relevant situation in which a thin metallic layer of thickness $h$ has been brought onto a dielectric substrate. The generalized Fresnel coefficient for this three-layer system is given in Eq.~(\ref{eq:Fresnel3}). In the limit of thick films ($\delta,h\gg d$) the asymptotic behaviour of the spin flip rate is $\Gamma_{12}\propto 1/d$ \cite{HENKEL/99,Scheel05} whereas for thin films ($\delta\gg d\gg h$) we have $\Gamma_{12}\propto 1/d^2$ \cite{Henkel01,Scheel05}. Thus, we finally obtain the small-$l$ limit of Eq.~(\ref{eq:offdiagonal}) as \begin{equation} \label{eq:smallsep} \varrho_{12}(t)=1-\frac{5\alpha l^2}{48d^2} \left( 1-e^{-\Gamma_{12}t} \right) +{\cal O}(l^4)\,, \end{equation} where $\alpha=1$ for thick films and $\alpha=3$ for thin films. It is interesting to note that the fall-off is three times faster for thin films than for thick films, which we attribute to the fact that in thick films the spin flips at the two sites are more likely to be driven coherently.
In order to see how the time scale is related to the expected lifetime we can expand the exponential in Eq.~(\ref{eq:smallsep}) for short times as \begin{equation}
\label{eq:smalltime} | \varrho_{12}(t)-\varrho_{12}(0)|\cong \frac{5\alpha l^2}{48d^2} \left ( \frac{t}{\tau}\right )+ {\cal O}(t^2) \end{equation} where $\varrho_{12}(0)=1$ and $\tau = \Gamma^{-1}_{12}$. The left-hand side in Eq.~(\ref{eq:smalltime}) can be thought of as a proper measure of decoherence due to spin flips in terms of physical parameters such as the spin-flip lifetime $\tau$, the separation $l$ and the distance from the surface $d$. This means that it is possible to maximize those experimental parameters while keeping the decoherence rate under control. Hence, Eq.~(\ref{eq:smalltime}) turns out to be particularly interesting from the quantum information point of view when a certain degree of spatial coherence has to be maintained.
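As a usage example (ours, with illustrative parameter values that are not quoted in the text), Eq.~(\ref{eq:smalltime}) can be turned into a simple ``decoherence budget'': given a tolerated loss of coherence, it yields the number of spin-flip lifetimes available before that tolerance is exceeded.
\begin{verbatim}
# Back-of-the-envelope use of Eq. (eq:smalltime); the parameter values below
# are illustrative choices and are not quoted in the text.
alpha = 1            # thick film
l, d = 1e-6, 50e-6   # lateral separation and atom-surface distance in metres
budget = 1e-3        # tolerated loss of coherence |rho_12(t) - rho_12(0)|

prefactor = 5 * alpha * l**2 / (48 * d**2)
print("coherence loss after one lifetime (t = tau):", prefactor)
print("number of lifetimes within the budget      :", budget / prefactor)
\end{verbatim}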
For larger separations, however, it is difficult to find analytical approximations, and one has to resort to numerical evaluations of the Fourier transform (\ref{eq:Weyl}). It is interesting to see at which separation $l_{1/2}$, as a function of the other length parameters in the system, the spatial coherence drops to half its initial value, which can be taken as a measure of robustness. In Fig.~\ref{fig:halflife} we show the dependence of $l_{1/2}$ on the thickness $h$ of the intermediate layer.
\begin{figure}
\caption{ Lateral separation $l_{1/2}$ after which spatial coherence has dropped to half its initial value as a function of the layer thickness $h$. The skin depth was varied from $\delta=100\mu$m (solid line) to $\delta=50\mu$m (dashed line) and $\delta=10\mu$m (dotted line). The atom-surface distance was $d=50\mu$m and all other parameters as in Fig.~\ref{fig:Ratio}.}
\label{fig:halflife}
\end{figure}
In our calculations, we assumed a transition frequency of $f=560\,$kHz. We have plotted $l_{1/2}$ for three different skin depths: $\delta=100\,\mu$m (solid line, corresponding to a good conductor such as Al or Cu at room temperature), $\delta=50\,\mu$m (dashed line), and $\delta=10\,\mu$m (dotted line). Although the latter two skin depth values are not realistic for materials at room temperature, they can be achieved at cryogenic temperatures. For example, just above its critical temperature of $T_c=9.2\,$K, pure niobium shows a skin depth of only $\delta=15\,\mu$m at $f\lesssim 1\,$MHz \cite{Casalbuoni}.
In Fig.~\ref{fig:halflife} it is clearly seen that for skin depths smaller than the atom-surface distance (dotted line), the robustness of spatial coherence drops dramatically with increasing substrate thickness $h$ until $h\sim\delta$. This can be understood when noting that by increasing the thickness of the intermediate layer one increases the number of fluctuating dipoles that can cause the spin flip. Any further increase beyond $h\sim\delta$ does not change much because fluctuations would not reach the substrate surface. Note also that the coherence length $l_{1/2}$ levels out roughly at the value of the skin depth, $l_{1/2}\sim\delta$.
For skin depths equal to (dashed line in Fig.~\ref{fig:halflife}) or larger than the atom-surface distance (solid line), the spatial coherence is robust over a wide range of substrate thicknesses $h$. Only for $h\gtrsim\delta$ does the coherence length decrease towards the atom-surface distance.
\section{Conclusions} \label{sec:conclusions}
In summary, we have investigated loss of spatial coherence of atomic superpositions due to thermally driven spin flips. The consistent quantization of the electromagnetic field in absorbing dielectrics and metals allowed us to employ a first-principles approach to decoherence in this particularly simple physical system. The quantization scheme is based on the source-quantity representation of the electromagnetic field in terms of the dyadic Green function of the associated classical scattering problem and a bosonic vector field that serves as the dynamical variables of the theory. The Green function contains, via the dielectric permittivity, all information about the geometric arrangement and material properties of the substrate. Because the theory, starting already with Eq.~(\ref{Polarization}), is strictly valid only for spatially \textit{locally} responding materials, we stress again that spatially nonlocal effects --- which could be non-negligible for small skin depths (i.e. large conductivities) and small atom-surface distances --- have not been considered.
The interaction dynamics between atomic spin and electromagnetic field has been described in the Schr\"odinger picture and the Markov approximation which led to the result for the time evolution of the off-diagonal matrix element (or coherence) $\varrho_{12}(t)$ of the single-particle density matrix, Eq.~(\ref{eq:offdiagonal}). The spatially nonlocal part, Eq.~(\ref{eq:ratio}), agrees with previously obtained results \cite{Henkel01} for spatial decoherence of matter waves. It should be noted that both Eqs.~(\ref{eq:offdiagonal}) and (\ref{eq:ratio}) are valid for arbitrary geometrical arrangements of substrate materials.
For planar multilayered substrates the dyadic Green function is explicitly known \cite{TOMAS,DUNG/98,Chew,LiLW94}, and the main formulas are presented in Appendix~\ref{sec:green}. For small lateral separation $l$ of the atom's two possible positions we found that the loss of spatial coherence grows quadratically with $l$ and with the inverse square of the atom-surface distance $d$ [Eq.~(\ref{eq:smallsep})]. For larger separations, a numerical study of a three-layer system showed that the coherence length $l_{1/2}$, defined to be the separation after which the coherence decays to half its initial value, converges for thick intermediate layers to roughly the atom-surface distance $d$.
We believe that these results are important for the design of microstructured devices in which spatial coherences are used to encode quantum information. In particular Eq.~(\ref{eq:smalltime}) shows how the decoherence rate depends on experimental parameters such as lifetime, lateral separation and atom-surface distance. They can be tuned in order to fall within a given tolerance rate for the degree of decoherence. Therefore, the theoretical results presented here may be useful in the physical realization of atomic traps where a certain degree of spatial coherence has to be maintained in order to be able to perform some kind of error correction.
\acknowledgments This work was financially supported by the UK Engineering and Physical Sciences Research Council (EPSRC) and the CONQUEST programme of the European commission.
\appendix \section{Green function for planar multilayers} \label{sec:green}
We briefly review the calculation of the Green function of planar multilayers as it can be found in \cite{TOMAS,DUNG/98,Chew,LiLW94}. The dyadic Green function for the electric field scattering off a material interface can always be decomposed into \begin{equation} \bm{G}(\textbf{r},\textbf{r}',\omega) = \left\{ \begin{array}{l} \bm{G}^{(1)}(\textbf{r},\textbf{r}',\omega) + \bm{R}^{(12)}(\textbf{r},\textbf{r}',\omega)\,; \textbf{r},\textbf{r}'\in {\cal V}_1 \\ \bm{T}^{(12)}(\textbf{r},\textbf{r}',\omega)\,; \textbf{r}\in{\cal V}_1\,,\textbf{r}'\in {\cal V}_2 \end{array} \right. \end{equation} where $\bm{G}^{(1)}(\textbf{r},\textbf{r}',\omega)$ denotes the solution to the inhomogeneous Helmholtz equation with the source in region ${\cal V}_1$ which in our case is vacuum with $\varepsilon_1(\omega)\equiv 1$. The two (double-sided transverse) scattering parts $\bm{R}^{(12)}(\textbf{r},\textbf{r}',\omega)$ and $\bm{T}^{(12)}(\textbf{r},\textbf{r}',\omega)$ have to be introduced to satisfy the boundary conditions for the electromagnetic fields at the interface and describe the reflection and transmission parts of the total scattering Green function, respectively. These scattering Green functions satisfy the homogeneous Helmholtz equation. In our case, we only need to concentrate on the reflection part $\bm{R}^{(12)}(\textbf{r},\textbf{r}',\omega)$.
The translational invariance in two spatial directions, say in the $(x,y)$-plane, allows one to write the Green function in terms of its Weyl expansion \begin{equation} \label{eq:Weyl} \bm{R}^{(12)}(\textbf{r},\textbf{r}',\omega) = \int
\frac{d^2\textbf{k}_\|}{(2\pi)^2}
\bm{R}^{(12)}(\textbf{k}_\|,\omega;z,z')
e^{i\textbf{k}_\|\cdot(\bm{\rho}-\bm{\rho}')} \end{equation}
[$\bm{\rho}=(x,y)$] where $\textbf{k}_\|=(k_x,k_y)$ is the wave-vector in the $(x,y)$-plane. The matrix components of
$\bm{R}^{(12)}(\textbf{k}_\|,\omega;z,z')$ can be read off from \cite{DUNG/98} as (here we omit the arguments to enhance readability) \begin{eqnarray} \label{eq:coefficients} R_{xx}^{(12)} &=& \frac{i}{2k_{1z}} e^{ik_{1z}(z+z')}
\left[ -r^{TM}_{12} \frac{k_{1z}^2 k_x^2}{k_1^2 k_\|^2}
+r^{TE}_{12} \frac{k_y^2}{k_\|^2} \right] \,,\nonumber \\ R_{xy}^{(12)} &=& \frac{i}{2k_{1z}} e^{ik_{1z}(z+z')}
\left[ -r^{TM}_{12} \frac{k_{1z}^2 k_xk_y}{k_1^2 k_\|^2}
-r^{TE}_{12} \frac{k_xk_y}{k_\|^2} \right] \,,\nonumber \\ R_{xz}^{(12)} &=& \frac{i}{2k_{1z}} e^{ik_{1z}(z+z')} \left[ r^{TM}_{12} \frac{k_{1z} k_x}{k_1^2} \right] \,,\nonumber \\ R_{zz}^{(12)} &=& \frac{i}{2k_{1z}} e^{ik_{1z}(z+z')}
\left[ r^{TM}_{12} \frac{k_\|^2}{k_1^2} \right] \,, \end{eqnarray} where $k_i^2=\frac{\omega^2}{c^2}\varepsilon_i(\omega)$ and
$k_{iz}^2=k_i^2-k_\|^2$. The remaining matrix elements can be deduced from Eq.~(\ref{eq:coefficients}) by replacement rules such as $R_{yy}^{(12)}=R_{xx}^{(12)}(k_x \leftrightarrow k_y)$ and the reciprocity condition $\bm{R}^{(12)}(\textbf{r},\textbf{r}',\omega)$ $\!=$ $\!\bm{R}^{(12)T}(\textbf{r}',\textbf{r},\omega)$ which yields
$\bm{R}^{(12)}(\textbf{k}_\|,\omega;z,z')$ $\!=$
$\!\bm{R}^{(12)T}(-\textbf{k}_\|,\omega;z',z)$.
The functions $r^{TE}_{12}$ and $r^{TM}_{12}$ denote the usual Fresnel reflection coefficients for TE and TM waves, respectively, and are defined by \begin{equation} \label{eq:fresnel} r^{TE}_{12} = \frac{k_{1z}-k_{2z}}{k_{1z}+k_{2z}} \,,\quad r^{TM}_{12} = \frac{\varepsilon_2(\omega) k_{1z} - \varepsilon_1(\omega)k_{2z}}{\varepsilon_2(\omega) k_{1z}+ \varepsilon_1(\omega)k_{2z}} \,. \end{equation} The Fresnel coefficients obey certain recursion relations that permit one to calculate the dyadic Green function for arbitrarily multi-layered materials \cite{Chew,TOMAS,LiLW94}. In particular, the generalized Fresnel coefficient for a three-layer geometry reads (for both TE and TM polarizations) \begin{equation} \label{eq:Fresnel3} \tilde{r}_{12} = \frac{r_{12}+r_{23}e^{2ik_{2z}h}} {1-r_{21}r_{23}e^{2ik_{2z}h}} \end{equation} where $h$ is the thickness of the intermediate layer $2$. This relation has been used in the numerical calculations throughout the paper.
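The following short sketch (our own illustration, not code from the paper) implements Eq.~(\ref{eq:Fresnel3}) for TE polarization together with the single-interface coefficients of Eq.~(\ref{eq:fresnel}); the permittivities, wave vector and layer thickness in the example call are placeholder values, and the TM case follows by swapping in $r^{TM}$.
\begin{verbatim}
import numpy as np

# Sketch of the generalized three-layer reflection coefficient,
# Eq. (eq:Fresnel3), for TE polarization, built from the single-interface
# coefficients of Eq. (eq:fresnel).  The permittivities, wave vector and
# layer thickness in the example call are placeholder values.
def kz(eps, omega, K, c=299792458.0):
    return np.sqrt(eps * (omega / c)**2 - K**2 + 0j)

def r_TE(kz_a, kz_b):
    return (kz_a - kz_b) / (kz_a + kz_b)

def r_tilde_TE(eps1, eps2, eps3, omega, K, h):
    k1z, k2z, k3z = (kz(e, omega, K) for e in (eps1, eps2, eps3))
    r12, r21, r23 = r_TE(k1z, k2z), r_TE(k2z, k1z), r_TE(k2z, k3z)
    phase = np.exp(2j * k2z * h)
    return (r12 + r23 * phase) / (1 - r21 * r23 * phase)

omega = 2 * np.pi * 560e3                    # 560 kHz transition
print(r_tilde_TE(1.0, 1.0 + 1e12j, 2.3, omega, K=1e5, h=1e-6))
\end{verbatim}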
\end{document}
\begin{document}
\keywords{spectral graph theory}
\begin{abstract}We establish some relations between the spectra of simple and non-backtracking random walks on non-regular graphs, generalizing some well-known facts for regular graphs. Our two main results are 1) a quantitative relation between the mixing rates of the simple random walk and of the non-backtracking random walk 2) a variant of the ``Ihara determinant formula'' which expresses the characteristic polynomial of the adjacency matrix, or of the laplacian, as the determinant of a certain non-backtracking random walk with holomorphic weights. \end{abstract}
\title{Some relations between the spectra of simple and non-backtracking random walks}
\section{Results} It has been noted by many authors that non-backtracking random walks on graphs looking locally like trees are simpler, from a combinatorial point of view, than usual random walks. This is for instance an ingredient of the proof of Alon's conjecture on random regular graphs by Friedman \cite{Fri08} or, more recently, by Bordenave \cite{Bor-new}. The study of the spectrum of non-backtracking random walks is also at the heart of the ``community detection'' problem and of the solution to the ``spectral redemption conjecture'' for various models of random graphs \cite{BLM}. Non-backtracking random walks are known to mix faster than the usual ones \cite{ABLS}. In \cite{PL}, the non-backtracking random walk is used to prove a cut-off phenomenon for the usual random walk on regular graphs. In geometric group theory, the ``cogrowth'' is directly related to the leading eigenvalue of non-backtracking random walks~: this is discussed in \cite{OW}, where a possibility to use this to extend the notion of co-growth to non-regular graphs is suggested. The non-backtracking random walk may also be used as an analog of ``classical dynamics'' in the field of quantum chaos on discrete graphs~: see the papers by Smilansky \cite{Smi10, Smi07}, which were a source of inspiration for this work.
This note is a contribution to the study of the relation between the spectra of simple and non-backtracking random walks, for {\em{non-regular graphs}}. It originates in the work \cite{AS}, where we used non-backtracking random walks to study the quantum ergodicity problem for eigenfunctions of Schr\"odinger operators on non-regular expander graphs.
Let $G=(V, E)$ be a graph without multiple edges and self-loops. We assume that the degree $D(x)$ of a vertex $x$ is bounded above and below~: $2\leq D(x)\leq D$. In the results about spectral gaps, we actually have to assume that $D(x)\geq 3$. We are interested in relating the spectrum of various operators, such as the laplacian, the adjacency matrix, and certain weighted non-backtracking random walks. Such relations are well-known for {\em{regular graphs}} (i.e. those for which $D(x)$ is constant), and the goal of this note is to partially extend what is known to non-regular graphs. Our two main results are 1) a relation between the mixing rates of the simple random walk and of the non-backtracking random walk 2) a variant of the ``Ihara determinant formula'' \cite{Iha66-1, Iha66-2} which expresses the characteristic polynomial of the adjacency matrix, or of the laplacian, as the determinant of a certain non-backtracking transfer matrix with holomorphic coefficients.
We will write $y\sim x$ to mean that $y$ is a neighbour of $x$ in $G$.
The first operator we are interested in is the adjacency matrix $\cA$. It acts on $\IC^V$ by the formula $\cA f(x)=\sum_{y\sim x} f(y)$, and is self-adjoint on $\ell^2(V, u)$ if $V$ is endowed with the uniform measure $u$.
We are also interested in the spectrum of the laplacian, defined by~: \begin{eqnarray}\nonumber P: \IC^V&\To& \IC^V\\ Pf(x)&=&\frac{1}{D(x)}\sum_{y\sim x} f(y). \label{e:lapl} \end{eqnarray} If we endow the set of vertices $V$ with the measure $\pi(x)= D(x)$, then $P$ is self-adjoint on $\ell^2(V, \pi)$. Note that this implies $\sum_x Pf(x)\pi(x)=\sum_x f(x)\pi(x).$ The space $\ell^2_o(V, \pi)$ of functions orthogonal to constants in $\ell^2(V, \pi)$ is preserved by $P$.
Let $B$ be the set of oriented edges, endowed with the uniform measure $U$ (each edge has weight $1$). Denoting $Q(x)=D(x)-1$, the ``transfer operator'' is defined by \begin{eqnarray}\nonumber \cS: \ell^2(B, U) &\To& \ell^2(B, U)\\ \cS f(e)&=&\frac{1}{Q(o(e))}\sum_{e' \leadsto e} f(e') \label{e:siso} \end{eqnarray} where $e' \leadsto e$ means that $t(e')=o(e)$ and $e$ is not the reverse of $e'$. The operator $\cS$ is stochastic, it is the generator of the non-backtracking random walk. It is not self-adjoint, but we have $\sum_e \cS f(e)=\sum_e f(e).$ This is equivalent to saying that $\cS^*:\ell^2(B, U) \To \ell^2(B, U)$ is also stochastic. When $G$ is finite, this implies that $\cS$ preserves the space $\ell^2_o(B, U)$ of functions orthogonal to constants in $\ell^2(B, U)$.
We will finally use the non-stochastic operator \begin{eqnarray}\nonumber \cB: \IC^B &\To& \IC^B\\ \cB f(e)&=&\sum_{e' \leadsto e} f(e') \label{e:B} \end{eqnarray}
If the graph $G$ is finite and $(q+1)$-regular, then $\cA$ and $P$ (resp. $\cB$ and $\cS$) are the same operator up to a homothety, and there is an explicit relation between the characteristic polynomials of $\cA$ and $\cB$~: \begin{equation}\label{e:Ihara}
\det(I^{|B|}-u\cB)=(1-u^{2})^{r-1}\det((1+u^2q)I^{|V|} -u\cA) \end{equation}
where $r=|E|-|V|+1$ is the rank of the fundamental group. This is the contents of the Ihara determinant formula \cite{Iha66-1, Iha66-2}, generalised in stages by Hashimoto, Bass and Kotani--Sunada \cite{Hash89, HashH89, Hash90-1, Hash92-1, Hash92-2, Bass, KS}. For finite non-regular graphs the relation reads \begin{equation}\label{e:Ihara2}
\det(I^{|B|}-u\cB)=(1-u^{2})^{r-1}\det(I^{|V|} -u\cA +u^2Q) \end{equation}
where $Q$ is the diagonal matrix with components $Q(x)=D(x)-1$. Note that the right-hand side in \eqref{e:Ihara2} is not directly related to the characteristic polynomial of $\cA$. The identity \eqref{e:Ihara2} relates eigenvalues (and eigenvectors) of $\cS$ to solutions of $(I^{|V|} -u\cA +u^2Q)v=0$, which are {\emph{not}} eigenvectors of $\cA$. Our goal is twofold~: \begin{itemize} \item for finite graphs, compare the mixing rates of $\cS$ and $P$. For regular graphs, there is an exact relation between eigenvalues of $\cS$ and $P$, which implies that the spectral gap of $P$ on $\ell^2_o(V, \pi)$ is explicitly related to the spectral gap of $\cS$ on $\ell^2_o(B, U)$. However, since $\cS$ is not self-adjoint, the knowledge of its spectral gap is not sufficient to determine its mixing rate; one needs to control the angles between eigenvectors. This analysis, done in \cite{ALM} and with more details in \cite{PL}, uses the fact that the eigenvectors of $\cS$ are explicitly related to those of $P$. Such explicit relations are not available for non-regular graphs and we have to find a more general method; \item extend formula \eqref{e:Ihara} to non-regular graphs in a way different from \eqref{e:Ihara2}, by finding an identity involving the characteristic polynomial of $\cA$ on the right-hand side. \end{itemize}
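Although not needed for the arguments below, the identity \eqref{e:Ihara2} is easy to check numerically on small examples; the following Python sketch (ours, with a hypothetical $4$-vertex non-regular test graph) builds $\cA$, $Q$ and $\cB$ and verifies the determinant identity at a few values of $u$.
\begin{verbatim}
import numpy as np

# Numerical sanity check of the identity (e:Ihara2) on a hypothetical
# 4-vertex non-regular test graph (degrees 3, 2, 3, 2).  This sketch is an
# illustration added here; it is not part of the original argument.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for x, y in edges:
    A[x, y] = A[y, x] = 1.0
Q = np.diag(A.sum(axis=1) - 1.0)             # Q(x) = D(x) - 1

# Oriented edges; e' ~> e means t(e') = o(e) and e is not the reverse of e'.
B_edges = edges + [(y, x) for x, y in edges]
m = len(B_edges)
B = np.zeros((m, m))
for i, (o_e, t_e) in enumerate(B_edges):     # row index: edge e
    for j, (o_ep, t_ep) in enumerate(B_edges):
        if t_ep == o_e and o_ep != t_e:
            B[i, j] = 1.0

r = len(edges) - n + 1                        # rank of the fundamental group
for u in (0.17, -0.29, 0.05):
    lhs = np.linalg.det(np.eye(m) - u * B)
    rhs = (1 - u**2)**(r - 1) * np.linalg.det(np.eye(n) - u * A + u**2 * Q)
    assert np.isclose(lhs, rhs)
\end{verbatim}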
The result about spectral gaps also holds for infinite graphs. We recover, in a less geometric but more quantitative way, the result of Ortner and Woess \cite{OW} saying that $P$ has a spectral gap on $\ell^2(V, \pi)$ if and only if the spectral radius of $\cS$ on $\ell^2(B, U)$ is strictly less than $1$.
We do not discuss the ``spectral gap'' of $\cA$ as this is not as properly defined as for $P$ (the top eigenvalue and eigenvector of $\cA$ are not explicit in general).
\subsection{Spectral gap and mixing rate for $P$ and $\cS$.\label{s:exp}}
Let us first assume that $G$ is finite, connected and non-bipartite. This is equivalent to assuming that $1$ is a simple eigenvalue of $P^2$. In other words, the spectrum of $P^2$ in $\ell^2_o(V, \pi)$ is contained in $ [0, 1-\beta]$, for some $\beta>0$ which measures the mixing rate of the simple RW on $G$.
Our first result is the following~: \begin{thm} \label{t:mix}Assume that $G$ is finite and that $D(x)\geq 3$ for all $x$. Assume that the spectrum of $P^2$ on $\ell^2_o(V, \pi)$ is contained in $ [0, 1-\beta]$. Then the spectrum of $\cS^{*2}\cS^2$ on $\ell^2_o(B, U)$ is contained in $[0, 1- c(D,\beta)]$, where $ c(D,\beta)$ depends only on $D$ and $\beta$, and is positive if $\beta$ is so. \label{p:sg1} \end{thm} Note that $\norm{\cS}_{\ell^2_o(B, U)\To \ell^2_o(B, U)}=1$, as $\norm{\cS f}=\norm{f}$ as soon as $f$ is a function on $B$ that is constant on edges having the same terminus. However, our theorem says that $\norm{\cS^2}_{\ell^2_o(B, U)\To \ell^2_o(B, U)}\leq (1- c(D,\beta))^{1/2}$. The value of $ c(D,\beta)$ is given in \eqref{e:c}.
\begin{cor} For all $n\geq 1$, $$\norm{\cS^n}_{\ell^2_o(B, U)\To \ell^2_o(B, U)}\leq (1- c(D,\beta))^{\lfloor n/4\rfloor}.$$ \end{cor} This gives the rate of mixing of the non-backtracking RW.
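As a numerical illustration (ours, not part of the argument), Theorem \ref{t:mix} can be checked directly on a small graph with all degrees at least $3$: one computes $\beta$ from the spectrum of $P$, the restricted norm $\norm{\cS^2}_{\ell^2_o(B, U)\To \ell^2(B, U)}$, and the bound $(1-c(D,\beta))^{1/2}$ with $c(D,\beta)$ taken from \eqref{e:c}. The bound is far from tight, but it is satisfied.
\begin{verbatim}
import numpy as np

# Numerical illustration of Theorem (t:mix) on a small graph with all degrees
# at least 3 (K5 minus one edge).  This sketch is not part of the proof.
n = 5
A = np.ones((n, n)) - np.eye(n)
A[3, 4] = A[4, 3] = 0.0                      # degrees are now 4, 4, 4, 3, 3
deg = A.sum(axis=1)
Qmax = int(deg.max()) - 1                    # Q = D - 1, D = maximal degree

# beta: the spectrum of P^2 on l^2_o(V, pi) is contained in [0, 1 - beta].
N = np.diag(deg**-0.5) @ A @ np.diag(deg**-0.5)   # symmetric, same spectrum as P
lam = np.sort(np.abs(np.linalg.eigvalsh(N)))[::-1]
beta = 1.0 - lam[1]**2                       # drop the trivial eigenvalue 1

# The non-backtracking walk S on oriented edges, Eq. (e:siso).
edges = [(x, y) for x in range(n) for y in range(x + 1, n) if A[x, y]]
B_edges = edges + [(y, x) for x, y in edges]
m = len(B_edges)
S = np.zeros((m, m))
for i, (o_e, t_e) in enumerate(B_edges):
    for j, (o_ep, t_ep) in enumerate(B_edges):
        if t_ep == o_e and o_ep != t_e:      # e' ~> e, no backtracking
            S[i, j] = 1.0 / (deg[o_e] - 1.0)
assert np.allclose(S.sum(axis=1), 1.0)       # S is stochastic

Pperp = np.eye(m) - np.ones((m, m)) / m      # projector onto l^2_o(B, U)
norm_S2 = np.linalg.norm(S @ S @ Pperp, 2)   # ||S^2|| restricted to l^2_o

c = min(Qmax**-4 * beta / (2 * (1 + Qmax**-4 * beta / 6)**2),
        Qmax**-4 * beta / (1 + 6 * Qmax**4 / beta)**2)   # Eq. (e:c)
print(norm_S2, np.sqrt(1 - c))               # the bound is far from tight
assert norm_S2 <= np.sqrt(1 - c)
\end{verbatim}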
The converse is easier~: in the course of the proof, we will also see that if the spectrum of $\cS^{*2}\cS^2$ on $\ell^2_o(B, U)$ is contained in $[0, 1- c]$, then the spectrum of $P^2$ on $\ell^2_o(V, \pi)$ is contained in $ [0, 1-D^{-2}c]$ (Remark \ref{r:impli}).
We were primarily interested in finite graphs in view of the application to quantum ergodicity \cite{AS}, but the result also holds for infinite graphs~: \begin{thm} \label{t:cog}Assume that $G$ is infinite and that $D(x)\geq 3$ for all $x$.
(i) If the spectrum of $\cS^{*2}\cS^2$ on $\ell^2(B, U)$ is contained in $[0, 1- c]$, then the spectrum of $P^2$ on $\ell^2(V, \pi)$ is contained in $ [0, 1-D^{-2}c]$.
(ii) If the spectrum of $P^2$ on $\ell^2(V, \pi)$ is contained in $ [0, 1-\beta]$, then the spectrum of $\cS^{*2}\cS^2$ on $\ell^2(B, U)$ is contained in $[0, 1- c(D,\beta)]$, where $ c(D,\beta)$ is given by \eqref{e:c}. \end{thm}
\begin{rem} It is well-known that $G$ is amenable iff the spectral radius of $P$ is $1$ (see \cite{Kes59-2, DodKen, Dod}). Thus Theorem \ref{t:cog} says that $G$ is amenable iff $\norm{\cS^2}_{\ell^2(B, U)\To \ell^2(B, U)}=1$. It was proven before by Ortner and Woess that $G$ is amenable iff the spectral radius of $\cS$ is $1$ \cite{OW}. This can be recovered by our methods~: indeed, one direction results from our Theorem \ref{t:cog}; in the other direction, one can follow the same lines as in our Remark \ref{r:impli} to show that if $\norm{\cS^{*n}\cS^n}_{\ell^2(B, U)\To \ell^2(B, U)}<1$ then $\norm{P^{2(n-1)}}_{\ell^2(V, \pi)\To \ell^2(V, \pi)}<1$ (with an explicit bound).
Our method is very down-to-earth and gives, by basic manipulations, a quantitative relation between the spectral gap of $P$ and $\norm{\cS^2}_{\ell^2(B, U)\To \ell^2(B, U)}$. The method in \cite{OW} is less direct and more geometric~: it starts from the general fact that $G$ is amenable iff SOLG (the symmetrized oriented line graph) is amenable. And then it is shown that SOLG is amenable iff the spectral radius of $\cS$ is $1$.
\end{rem}
\subsection{Determinant relation\label{s:det}} We now assume that $G$ is finite.
Let $T=(V(T), E(T))$ be the universal cover of $G$~: $T$ is a tree, and there exists a subgroup $\Gamma$ of the automorphism group of $T$, acting without fixed points on $V(T)$, such that $G=\Gamma\backslash T$. Let $\tilde\cA$ be the adjacency matrix of $T$. The Green function on $T$ will be denoted by $$ G(x, y;z)=\la \delta_x, (\tilde\cA-z)^{-1}\delta_y\ra_{\ell^2(V(T))}$$ for $z\in \IC\setminus \IR$.
Given $v,w \in T$ with $v \sim w$, we denote by ${T}^{(v|w)}$ the tree obtained by removing from ${T}$ the branch emanating from $v$ that passes through $w$. For an operator $H$ on $T$, we define the restriction $H^{(v|w)}(x,y) = H(x,y)$ if $x,y \in {T}^{(v|w)}$ and zero otherwise. We then denote by $G^{(v|w)}(\cdot, \cdot;z)$ the corresponding Green function.
Given $z \in \mathbb{C} \setminus \mathbb{R}$, $v\in V$, $w$ a neighbour of $v$ we denote \[
G^z(v)=G(\tilde v,\tilde v;z) \quad \text{and} \quad \zeta^{z}(w,v) = -G^{(\tilde v|\tilde w)}(\tilde v,\tilde v;z) \, . \] where $(\tilde v, \tilde w)$ is a lift of the edge $(v, w)$ in $T$. This definition does not depend on the choice of the lifts.
If $e=(w, v)\in B$, we also use the notation $G^z(e)=G(\tilde w,\tilde v;z) $, $\zeta^z(e)=\zeta^{z}(w,v)$. Note that $G^z(e)$ is invariant under edge-reversal, whereas $\zeta^z(e)$ is not. In the formula below, the function $\zeta^z$ on $B$ acts on $\IC^{B}$ as a multiplication operator.
\begin{thm}\label{t:det} For all $z\in \IC\setminus \IR$,
\begin{equation}\label{e:det} \prod_{e\in E}(- G^z(e))\cdot \det\left((\zeta^z)^{-1} I^{|B|}- \cB\right) =\det\left(z I^{|V|}-\cA\right) \cdot \prod_{x\in V}(-G^z(x))\end{equation} \end{thm} \begin{rem}In the case of a $(q+1)$-regular graph, $\zeta^z$ is a constant function, which solves the quadratic equation \begin{equation}\label{e:zeta} z = q \zeta^{z} + \frac{1}{\zeta^{z}} \, \end{equation} See Lemma \ref{lem:zetapot} below. We also have $G^z(x)=\frac{ \zeta^{z}}{(\zeta^{z})^2-1}$ and $G^z(e)=\frac{ (\zeta^{z})^2}{(\zeta^{z})^2-1}$ for all $x$ and all $e$. It can be checked that Theorem \ref{t:det} reduces to \eqref{e:Ihara} by setting $u=\zeta^z$. It is, however, different from \eqref{e:Ihara2} for non-regular graphs. Although extensions of the Ihara formula \eqref{e:Ihara} have been studied by many authors, the variant \eqref{e:det} seems to be new.
Note that \eqref{e:det} holds for any functions $\zeta^z$ that are solutions of the system of algebraic equations appearing in Lemma \ref{lem:zetapot}. These are by no means unique~: for instance, in the regular case, there are 2 solutions to equation \eqref{e:zeta}. It is nice, however, to know an explicit solution of this system that can be expressed in terms of Green functions. \end{rem}
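For a regular graph, the identity \eqref{e:det} can also be checked numerically, using a constant solution of \eqref{e:zeta} and the closed forms of $G^z(x)$ and $G^z(e)$ recalled above; the following sketch (ours, not part of the paper) does this for the complete graph $K_4$ at one non-real value of $z$.
\begin{verbatim}
import numpy as np

# Numerical check of the determinant relation (e:det) for the complete
# graph K4, which is 3-regular (q = 2), using a constant solution zeta of
# (e:zeta) and the closed forms G(x) = zeta/(zeta^2-1), G(e) = zeta^2/(zeta^2-1).
n, q = 4, 2
A = np.ones((n, n)) - np.eye(n)
edges = [(x, y) for x in range(n) for y in range(x + 1, n)]
B_edges = edges + [(y, x) for x, y in edges]
m = len(B_edges)
B = np.zeros((m, m))
for i, (o_e, t_e) in enumerate(B_edges):
    for j, (o_ep, t_ep) in enumerate(B_edges):
        if t_ep == o_e and o_ep != t_e:       # e' ~> e, no backtracking
            B[i, j] = 1.0

z = 0.3 + 0.7j                                 # any non-real z works
zeta = (z - np.sqrt(z**2 - 4 * q)) / (2 * q)   # root of q zeta^2 - z zeta + 1 = 0
Gx = zeta / (zeta**2 - 1)                      # G^z(x), the same at every vertex
Ge = zeta**2 / (zeta**2 - 1)                   # G^z(e), the same on every edge

lhs = (-Ge)**len(edges) * np.linalg.det(np.eye(m) / zeta - B)
rhs = np.linalg.det(z * np.eye(n) - A) * (-Gx)**n
assert np.isclose(lhs, rhs)
\end{verbatim}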
\begin{rem} The theorem generalizes to the case where $\cA$ is replaced by a discrete ``Schr\"odinger operator'' of the form $\cA_p + W : \IC^{V}\To \IC^{V}$, $(\cA_p + W) f(x)=\sum_{y\sim x} p(x, y)f(y) + W(x)f(x)$ where $W$ is a real-valued function on $V$, and $p$ is such that $p(x, y)=p(y, x)\in \IR$ and $p(x, y)\not=0$ iff $x\sim y$. The definitions of the Green functions $G^z$ and $\zeta^z$ should be modified (in the obvious manner) to incorporate the weights $p$ and the potential $W$. The definition of $\cB$ should be modified to $$\cB_p f(e)= \sum_{e' \leadsto e} p(e')f(e')$$ and \eqref{e:det} becomes
\begin{equation*}\prod_{e\in E} \frac{(-G^z(e))}{p(e)}\cdot \det\left((\zeta^z)^{-1} I^{|B|}- \cB_p\right) =\det\left(z I^{|V|}-\cA_p-W\right) \cdot \prod_{x\in V}{(-G^z(x))}.\end{equation*}
This remark, in particular, allows us to cover the case of $\det(z I^{|V|}-P)$, noting that $P$ is conjugate to $\cA_p$ with $p(x, y)=(D(x)D(y))^{-1/2}$.
\end{rem}
{\bf{Acknowledgements~:}} The author is supported by the Labex IRMIA and USIAS of Universit\'e de Strasbourg, and by Institut Universitaire de France. This material is based upon work supported by the Agence Nationale de la Recherche under grant No.\ ANR-13-BS01-0007-01.
I am very thankful to Mostafa Sabri for his careful reading and numerous useful comments on the manuscript.
\section{Proof of Theorem \ref{t:mix}}
We start by noting that if $f\in \ell^2_o(V, \pi)$, then
\begin{equation} \label{l:sg}\frac12\sum_{x\in V}\frac{1}{D(x)}\sum_{y, y'\sim x}|f(y)-f(y')|^2\geq \beta \norm{f}^2_{\ell^2(V, \pi)}.\end{equation}
This just comes from the identity \begin{eqnarray*}
\frac12\sum_{x\in V}\frac{1}{D(x)}\sum_{y, y'\sim x}|f(y)-f(y')|^2&=&\sum_x D(x) |f(x)|^2-\sum_{x}D(x)|Pf(x)|^2\\ &=& \la f, (I-P^2) f\ra_{\ell^2(V, \pi)} \end{eqnarray*}
To prove Proposition \ref{p:sg1}, we use the following decomposition of the space of functions on $B$~: \begin{equation}\ell^2_o(B, U)= O(\ell^2_o(V, \pi))\oplus T(\ell^2_o(V, \pi)) \oplus (O(\ell^2_o(V, \pi))^\perp \cap T(\ell^2_o(V, \pi))^\perp).\label{e:decompo}\end{equation} The space $O(\ell^2_o(V, \pi))$ is the image of $\ell^2_o(V, \pi)$ under the map $Of(e)=f(o(e)).$ Thus $O(\ell^2_o(V, \pi))$ is the space of functions (orthogonal to constants) such that $f(e)$ depends only on the origin of $e$. Similarly, $T(\ell^2_o(V, \pi))$ is the space of functions such that $f(e)$ depends only on the terminus of $e$. It is the image of $\ell^2_o(V, \pi)$ under the map
$Tf(e)=f(t(e)).$ If $G$ is non-bipartite, we have $O(\ell^2_o(V, \pi))\cap T(\ell^2_o(V, \pi))=\{0\}$. Note that each space $O(\ell^2_o(V, \pi)), T(\ell^2_o(V, \pi))$ is orthogonal to $\bbbone$ in $\ell^2(B, U)$, but that the two spaces are NOT orthogonal to each other. The space $O(\ell^2_o(V, \pi))^\perp \cap T(\ell^2_o(V, \pi))^\perp$ is of dimension $2|E| -2|V| +1=2r-1$, where $r$ is the rank of the fundamental group of $G$. By definition, it is the space of functions $f:B\To\IC$ such that, for all $x\in V$, \begin{equation}\label{e:sums}\sum_{e, o(e)=x} f(e)=0\quad\mbox{ and } \sum_{e, t(e)=x} f(e)=0.\end{equation}
\begin{rem}In the infinite case, the proof of Theorem \ref{t:cog} will be similar, using the decomposition \begin{equation*}\ell^2(B, U)= O(\ell^2(V, \pi))\oplus T(\ell^2(V, \pi)) \oplus (O(\ell^2(V, \pi))^\perp \cap T(\ell^2(V, \pi))^\perp).\end{equation*} \end{rem}
To start the proof we use the Dirichlet identity for $f\in ~\ell^2(B, U)$:
$$\la f, (I-\cS^{*2}\cS^2)f\ra_{\ell^2(B, U)}=\frac12\sum_{e, e'}|f(e)-f(e')|^2 \cS^{*2}\cS^2(e, e').$$ Let us decompose $f$ according to \eqref{e:decompo}~: $f=F+G+H$ where $F\in O(\ell^2_o(V, \pi)), G\in T(\ell^2_o(V, \pi)), H\in (O(\ell^2_o(V, \pi))^\perp \cap T(\ell^2_o(V, \pi))^\perp)$.
We are first going to prove that \begin{equation}\la f, (I-\cS^{*2}\cS^2)f\ra_{\ell^2(B, U)}\geq Q^{-4}\beta\norm{F+H}^2_{\ell^2(B, U)}\label{e:first}\end{equation} where $Q=D-1$ (and, recall, $D$ is an upper bound on the degree).
In order to have $\cS^{*2}\cS^2(e, e')>0$, there must exist $e_1, e'_1, e_2\in B$ such that $e\leadsto e_1\leadsto e_2$ and $e'\leadsto e'_1\leadsto e_2$. Counting the number of possibilities, we see that $\cS^{*2}\cS^2(e, e')\geq (D(t(e))-2)Q^{-4}\geq Q^{-4}$ if $t(e)=t(e')$. Here we use the assumption that $D(t(e))\geq 3$. Thus,
\begin{multline*}\la f, (I-\cS^{*2}\cS^2)f\ra_{\ell^2(B, U)}\geq \frac{Q^{-4}}2\sum_{e, e' : t(e)=t(e')}|f(e)-f(e')|^2\\
\geq \frac{Q^{-4}}2\sum_{e, e': t(e)=t(e')}\frac{1}{D(t(e))}|f(e)-f(e')|^2\\
=\frac{Q^{-4}}2\sum_{e, e': t(e)=t(e')}\frac{1}{D(t(e))}|(F+H)(e)-(F+H)(e')|^2.\end{multline*} Let us fix a vertex $x\in V$. Using the fact that $F$ depends only on the origin, and that $H$ satisfies \eqref{e:sums},
\begin{multline}\sum_{e, e', t(e)=t(e')=x}|(F+H)(e)-(F+H)(e')|^2= \sum_{y, y'\sim x}|F(y)-F(y')|^2 \\+\sum_{e, e', t(e)=t(e')=x}|H(e)-H(e')|^2 +4\operatorname{Re}\sum_{e, e', t(e)=t(e')=x} \bar F(e)(H(e)-H(e'))\\
= \sum_{y, y'\sim x}|F(y)-F(y')|^2 +\sum_{e, e', t(e)=t(e')=x}|H(e)|^2+|H(e')|^2 +4\operatorname{Re}\sum_{e, e', t(e)=t(e')=x} \bar F(e)H(e)\label{e:GH} \end{multline} Summing now over $x$, and using the fact that $F$ and $H$ are orthogonal,
\begin{multline}\sum_{e, e', t(e)=t(e')}\frac{1}{D(t(e))}|(F+H)(e)-(F+H)(e')|^2
=\sum_x D(x)^{-1}\sum_{y, y'\sim x}|F(y)-F(y')|^2\\ + 2\sum_{e} |H(e)|^2+ 4\operatorname{Re}\sum_{e} \bar F(e)H(e) \\
=\sum_x D(x)^{-1}\sum_{y, y'\sim x}|F(y)-F(y')|^2 + 2\sum_{e} |H(e)|^2\\
\geq \sum_x D(x)^{-1}\sum_{y, y'\sim x}|F(y)-F(y')|^2 +2 \norm{H}^2_{\ell^2(B, U)}\\ \geq 2\beta \norm{F}^2 +2 \norm{H}^2 \geq 2\beta\norm{F+H}^2. \end{multline} On the last line, we have used \eqref{l:sg}. This concludes the proof of \eqref{e:first}.
Now, let $G\in T(\ell^2_o(V, \pi))$. Then again, by looking at what it means to have $\cS^{*2}\cS^2(e, e')>0$, we see that if $y, y'$ are two vertices such that $dist(y, y')=2$ (in other words, $y$ and $y'$ have a common neighbour $x$), then we can find edges $e, e'$ such that $t(e)=y, t(e')=y'$ and $\cS^{*2}\cS^2(e, e')\geq Q^{-4}$. Indeed, we may choose $e, e'$ such that $t(e)=y$ and $o(e)\not=x$, $t(e')=y'$ and $o(e')\not=x$, and $\cS^{*2}\cS^2(e, e')\geq (D(x)-2)Q^{-4}\geq Q^{-4}$.
Thus
\begin{multline*}\la G, (I-\cS^{*2}\cS^2)G\ra_{\ell^2(B, U)}= \frac12\sum_{e, e'}|G(e)-G(e')|^2 \cS^{*2}\cS^2(e, e')\\ \geq \frac{Q^{-4}}2\sum_{x}\sum_{y, y'\sim x} |G(y)-G(y')|^2 \geq Q^{-4}\beta \norm{G}^2. \end{multline*}
\begin{rem}\label{r:impli} We can also write (using the fact that $\cS^{*2}\cS^2$ is stochastic)
\begin{multline*}\la G, (I-\cS^{*2}\cS^2)G\ra_{\ell^2(B, U)}=\frac12 \sum_{y, y' : d(y, y')=2}|G(y)-G(y')|^2\sum_{e, e': t(e)=y, t(e')=y'} \cS^{*2}\cS^2(e, e')
\\ \leq D^2 \frac12 \sum_{x\in V} \frac{1}{D(x)}\sum_{y, y' : y\sim x\sim y'}|G(y)-G(y')|^2 = D^2 \la G, (I-P^2) G\ra_{\ell^2(V, \pi)} \end{multline*} and this proves part (i) of Theorem \ref{t:cog}. \end{rem}
Let $A>1$ (to be chosen later, depending on $\beta$ and $D$). Let $f=F+G+H$ as before. Assume first that $A\norm{F+H}\geq \norm{G}$. Then by the triangular inequality $\norm{f}\leq (1+A)\norm{F+H}$. In addition, as we have seen, \begin{equation}\la f, (I-\cS^{*2}\cS^2)f\ra_{\ell^2(B, U)}\geq Q^{-4}\beta\norm{F+H}^2_{\ell^2(B, U)}\geq Q^{-4}\beta(1+A)^{-2}\norm{f}^2.\label{e:case1}\end{equation} Otherwise, $A\norm{F+H}\leq \norm{G}$, and $\norm{f}\leq (1+A^{-1})\norm{G}$. Noting that the operator norm of $I-\cS^{*2}\cS^2$ is less than $1$, we write
for all $f=F+G+H$, \begin{multline}\la f, (I-\cS^{*2}\cS^2)f\ra_{\ell^2(B, U)}\geq \la G, (I-\cS^{*2}\cS^2)G\ra_{\ell^2(B, U)}-2 A^{-1}\norm{G}^2 - A^{-2}\norm{G}^2\\ \geq (Q^{-4}\beta-3A^{-1})\norm{G}^2 \geq \frac{(Q^{-4}\beta-3A^{-1})}{(1+A^{-1})^2}\norm{f}^2.\label{e:case2} \end{multline}
Choosing $A$ such that $A^{-1}=Q^{-4}\beta/6$, and gathering \eqref{e:case2} and \eqref{e:case1} we get the result with \begin{equation}\label{e:c}c(D, \beta)=\min \left( \frac{Q^{-4}\beta}{2(1+Q^{-4}\beta/6)^{2}},\;\frac{Q^{-4}\beta}{(1+6Q^4/\beta)^{2}} \right).\end{equation}
\section{Proof of the determinant relation} The relations in the next lemma follow from the resolvent identity, and are proven (for instance) in \cite{AS}. For a vertex $v$ of $T$, $\mathcal{N}_v$ stands for the set of neighbouring vertices.
\begin{lem} \label{lem:zetapot} For any $v \in V(T)$, $z = E+i\eta \in \mathbb{C}^+ $, if we let $2m^{z}(v)=-\frac{1}{G(v, v;z)}$, we have \[ z = \sum_{u \sim v} \zeta^{z}(v, u)+2m^{z}(v) \quad \text{and} \quad z = \sum_{u \in \mathcal{N}_v \setminus \{w\}} \zeta^{z}(v, u) + \frac{1}{\zeta^{z}(w, v)} \, . \] For any non-backtracking path $(v_0,\dots,v_k)$ in $T$, \begin{equation}\label{e:GG} G(v_0,v_k;z) = \frac{-\prod_{j=0}^{k-1} \zeta^{z}({v_{j+1}},v_j)}{2m^{z}(v_k)}= \frac{-\prod_{j=0}^{k-1} \zeta^{z}({v_{j}},v_{j+1})}{2m^{z}(v_0)}. \end{equation}
Also, for any $w\sim v$, we have \begin{equation}\label{e:reverse} \zeta^{z}(w, v) = \frac{m(w)^{z}}{m(v)^{z}} \,\zeta^{z}(v, w) \, , \qquad \frac{1}{\zeta^{z}(w, v)} - \zeta^{z}(v, w) = 2m^{z}(v) \, , \end{equation}
\end{lem}
\begin{rem}We can note that \eqref{e:GG} may be written as
$$ ((\zeta^z)^{-1} I^{|B|}- \cB)^{-1}(e, e')=\delta_{x=y}+\sum_{k=0}^{+\infty} \zeta^{z}(e') (\zeta^z \cB)^k(e, e')= -2m^z(x)(\cA -z)^{-1}(x, y)$$
for all $e, e'\in B$ and $x=o(e'), y=t(e)$.
In the case of regular graphs, this is formula (2.4) in \cite{OW}, where it is attributed to Grigorchuk, with various proofs published by Woess, Szwarc \cite{W, Szw}, Northshield, Bartholdi \cite{North, Bart}.
\end{rem} \subsection{Operator relations} In this section $z\in\IC^+$ is fixed, so we write $\zeta(x, y)$ instead of $\zeta^z(x, y)$, $m(x)$ instead of $m^z(x)$. If $e=(x, y)\in B$, we write $m_1(e)=m(x)$ and $m_2(e)=m(y)$. A function on $B$ defines a multiplication operator on $\IC^B$ (i.e. an operator which is diagonal in the canonical basis). We use the same notation for a function and the associated operator.
Let us introduce the notation $$\cP f(x)=\frac1{D(x)}\sum_{y\sim x} f(x, y).$$ This is a projector on the space of functions depending only on the origin, which may be identified with $\ell^2(V, \pi)$, isometrically embedded into $\ell^2(B, U)$ by the map $\psi\mapsto O(\psi)$ defined in the previous section.
Let $L=D (2m_1)^{-1}\cP$. Let $$Hg(x)=\sum_{y, y\sim x} \frac{1}{2m(y)}\left(\zeta(y, x)g(y, x)-g(x, y)\right)$$ Theorem \ref{t:det} is based on the following exact relation~: \begin{prop} $$H \circ (\zeta^{-1} I- \cB)=(\cA-zI)\circ L$$ \end{prop} \begin{proof} Let $\phi=Lf$ and $g=-(\zeta^{-1} I- \cB) f$. The latter relation implies that for any $y\sim x$, $$\phi(x)=(2m(x))^{-1}\left(f(x, y)+g(y, x)+\frac{f(y, x)}{\zeta(y, x)}\right).$$ We then calculate $$\cA\phi(x)=\sum_{y, y\sim x}\phi(y)=\sum_{y, y\sim x}\frac{1}{2m(y)}\left(f(y, x)+g(x, y)+\frac{f(x, y)}{\zeta(x, y)}\right). $$ We now use Lemma \ref{lem:zetapot} to write \begin{multline*}\sum_{y, y\sim x}\frac{1}{2m(y)}f(y, x)\\ =\sum_{y, y\sim x}\frac{1}{2m(y)}\left( \zeta(y, x) 2m(x)\phi(x)-\zeta(y, x) f(x, y)-\zeta(y, x)g(y, x)\right)\\ =\sum_{y, y\sim x} \zeta(x, y) \phi(x)+\sum_{y, y\sim x}\frac{1}{2m(y)}\left( -\zeta(y, x) f(x, y)-\zeta(y, x)g(y, x)\right)\\ = (z-2m(x)) \phi(x)+\sum_{y, y\sim x}\frac{1}{2m(y)}\left( -\zeta(y, x) f(x, y)-\zeta(y, x)g(y, x)\right). \end{multline*} Altogether, \begin{multline*}\cA\phi(x)=(z-2m(x)) \phi(x)+\sum_{y, y\sim x}\frac{1}{2m(y)}\left( \left(\frac{1}{\zeta(x, y)} -\zeta(y, x)\right) f(x, y)-\zeta(y, x)g(y, x)+g(x, y)\right) \\= (z -2m(x)) \phi(x)+\sum_{y, y\sim x} f(x, y)+ \sum_{y, y\sim x}\frac{1}{2m(y)} \left(-\zeta(y, x)g(y, x)+g(x, y)\right)\\=(z-2m(x)) \phi(x)+2m(x)\phi(x)+ \sum_{y, y\sim x}\frac{1}{2m(y)} \left(-\zeta(y, x)g(y, x)+g(x, y)\right)\\ =z \phi(x)+\sum_{y, y\sim x} \frac{1}{2m(y)}\left(-\zeta(y, x)g(y, x)+g(x, y)\right) \end{multline*} which is the desired relation. \end{proof}
We note that $Hg=-D\left(\cP((2m_2)^{-1}g)-\cP((2m_2)^{-1}\iota(\zeta g))\right)$ where $\iota f(x, y)=f(y, x)$ is the edge reversal involution. So $H$ itself is of the form $H= D\cP\circ K$ where $K=(2m_2)^{-1}(\iota\zeta-I)$. We have proven that $$D\cP\circ K\circ (\zeta^{-1} I- \cB)=(A-z)\circ (2m_1)^{-1} D\cP.$$ This is equivalent to the two relations $$\cP\circ K\circ (\zeta^{-1} I- \cB)\circ \cP= D^{-1}(A-z)\circ (2m_1)^{-1} D \cP$$ and $$\cP\circ K\circ (\zeta^{-1} I- \cB)\circ (I-\cP)=0.$$ The latter implies that $K\circ (\zeta^{-1} I- \cB)$ sends $\mathrm{Ker } \cP$ to itself.
We use the decomposition ${\IC}^B = \mathrm{Im } \cP \oplus \mathrm{Ker } \cP$. The two relations above tell us that $$\det [ K\circ (\zeta^{-1} I- \cB)]=\det[ (A-z)\circ (2m_1)^{-1}] \times \det[ K\circ (\zeta^{-1} I- \cB)]_{\mathrm{Ker } \cP \To \mathrm{Ker } \cP}.$$ But $f\in \mathrm{Ker } \cP$ is equivalent to $\cB f=-\iota f$. Thus if $f\in \mathrm{Ker } \cP$, $$K\circ (\zeta^{-1} I- \cB)f(x, y)=(2m(y))^{-1}f(x, y)\left(\zeta(y, x)-\frac{1}{\zeta(x, y)}\right)= -f(x, y).$$ Finally we obtain \begin{multline*}\det K \det (\zeta^{-1} I- \cB)=\det (A-z) \prod_{x\in V} (2m(x))^{-1} \, (-1)^{\dim \mathrm{Ker } \cP} = \det (A-z) \prod_{x\in V} (-G(x))\, (-1)^{\dim \mathrm{Ker } \cP}
\\= \det (A-z) \prod_{x\in V} (-G(x)) (-1)^{|B|-|V|}=\det (z-A) \prod_{x\in V} (-G(x)) \end{multline*}
since $|B|=2|E|$ is even.
To prove the determinant relation, there remains to compute $\det K$. But $K$ is diagonal by blocks of size $2$ in the canonical basis of $\IC^{B}$. More precisely, denoting by $\delta_e$ the element of $\IC^{B}$ that takes the value $1$ on $e$ and $0$ elsewhere, $$K\delta_e=-\frac{1}{2m(y)}\delta_e+\frac{\zeta(x, y)}{2m(x)}\delta_{\hat e}$$ if $e=(x, y)$. Thus $$\det K=\prod_{e=\{x, y\}\in E}(2m(x))^{-1}(2m(y))^{-1}(1-\zeta(x, y)\zeta(y, x))=\prod_{e=\{x, y\}\in E} (-G(x, y)).$$ This yields the announced relation.
\end{document}
\begin{document}
\thispagestyle{empty} \begin{abstract} We prove an extension of the Babbage-Enriques-Petri theorem for semi-ca\-no\-ni\-cal curves. We apply this to show that the Prym variety of a generic element of a codimension $k$ subvariety of ${\mathcal R}_g$ is not isogenous to another distinct Prym variety, under some mild assumption on $k$. \end{abstract} \title{On isogenies of Prym varieties}
\setcounter{tocdepth}{1}
\section{Introduction} Let $\mathcal{R}_g$ denote the moduli space of unramified irreducible double covers of complex smooth curves of genus~$g$. Given an element $\pi:D\rightarrow C$ in $\mathcal{R}_g$, this morphism induces the norm map between the corresponding Jacobians, \[ \mathrm{Nm}_{\pi}:J(D)\rightarrow J(C). \] By taking the neutral connected component of its kernel, we obtain an abelian variety of dimension $g-1$ called the \emph{Prym variety} attached to~$\pi$.
In this note, we study the isogeny locus in ${\mathcal A}_{g-1}$ of Prym varieties attached to generic elements in ${\mathcal R}_g$; that is, principally polarized abelian varieties of dimension $g-1$ which are isogenous to such Prym varieties. More concretely, given a subvariety ${\mathcal Z}$ of ${\mathcal R}_g$ of codimension~$k$ and a generic element $\pi:D\rightarrow C$ in ${\mathcal Z}$, we prove that the Prym variety attached to $\pi$ is not isogenous to a distinct Prym variety, whenever $g \geq \max \lbrace 7, 3k+5 \rbrace$, see Theorem~\ref{thm1}.
This result is an extension of the analogous statements for Jacobians of generic curves proven by Bardelli and Pirola~\cite{bardelli-pirola89} for the case $k=0$, and Marcucci, Naranjo and Pirola~\cite{marcucci-naranjo-pirola16} for $k>0$, $g \geq 3k+5$ or $k=1$ and $g\geq 5$. In the latter, to prove the case $g \geq 3k+5$, they use an argument on infinitesimal variation of Hodge structure proposed by Voisin in~\cite[Remark~(4.2.5)]{bardelli-pirola89}, which allows them to translate the question to a geometric problem of intersection of quadrics. In doing so, they give a generalization of Babbage-Enriques-Petri's theorem, which allows them to recover a canonical curve from the intersection of a system of quadrics in $\mathbb{P}^{g-1}$ of codimension~$k$. The strategy we follow to prove Theorem~\ref{thm1} is an adaptation of these techniques to the setting of Prym varieties. We are also able to give an extension of Babbage-Enriques-Petri's theorem for semicanonical curves in a similar fashion to~\cite{marcucci-naranjo-pirola16}, see Proposition~\ref{prop1}. Our result generalises the one by Lange and Sernesi~\cite{lange-sernesi96} for curves of genus $g\geq 9$, since it recovers a semicanonical curve of genus $g\geq 7$ from a system of quadrics in $\mathbb{P}^{g-2}$ of codimension~$k$, $g \geq 3k+5$.
\noindent {\textbf{Acknowledgments.}} We warmly thank the organizing committee of Pragmatic for the generous funding and the opportunity of enjoying such a lively and stimulating environment. We thank Juan Carlos Naranjo and V\'ictor González Alonso for proposing this problem and for their helpful advice.
\section{Intersection of quadrics} Let $C$ be a smooth curve. Given a globally generated line bundle $L\in\operatorname{Pic}(C)$, we denote by $\varphi_L:C\rightarrow {\mathbb P} \mathrm{H}^0(C,L)^*$ its induced morphism. If $L$ is very ample, we say that $\varphi_L(C)$ is \emph{projectively normal} if its homogeneous coordinate ring is integrally closed; or equivalently, if for all $k\geq 0$, the homomorphism \[ \operatorname{Sym}^k {\rm H}^0(L) \longrightarrow {\rm H}^0(L^{\otimes k}) \] is surjective.
We also recall that the \emph{Clifford index} of $C$ is defined as \[ \min \lbrace \deg(L)-2\mathrm{h}^0(C,L)+2\rbrace,
\] where the minimum ranges over the line bundles $L\in\operatorname{Pic}(C)$ such that $\mathrm{h}^0(C,L)\geq 2$ and~$\mathrm{h}^0(C,\omega_C\otimes L^{-1})\geq 2$. Its value is an integer between $0$ and~$\lfloor \frac{g-1}{2}\rfloor$, where $g$ is the genus of the curve.
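As a quick illustration (not taken from the original text): if $C$ is hyperelliptic, the $g^1_2$ has $\deg(L)=2$ and $\mathrm{h}^0(C,L)=2$, so it contributes $2-2\cdot 2+2=0$ and the Clifford index vanishes; conversely, Clifford's theorem shows that $c=0$ only occurs for hyperelliptic curves. At the other extreme, a general curve of genus $g$ has Clifford index $\lfloor \frac{g-1}{2}\rfloor$. In particular, the assumption $c\geq 3$ made below excludes, for instance, hyperelliptic and trigonal curves.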
Let $C$ be of genus $g$ and with Clifford index $c$. For any non-trivial $2$-torsion point $\eta$ in the Jacobian of~$C$, we call $\omega_C \otimes \eta$ a \emph{semicanonical line bundle} of $C$ whenever it is globally generated, and we denote by $\varphi_{\omega_C \otimes \eta}:C\rightarrow\mathbb{P}^{g-2}$ its associated morphism. In that case, we call its image $C_\eta:=\varphi_{\omega_C \otimes \eta}(C)$ a \emph{semicanonical curve}. The following is a result of Lange and Sernesi~\cite{lange-sernesi96}, and Lazarsfeld~\cite{lazarsfeld89}:
\begin{lemma}\label{lem:proj-normal} If $g\geq 7$ and $c\geq 3$, then $\omega_C\otimes\eta$ is very ample and the semicanonical curve $C_\eta$ is projectively normal. \end{lemma}
Furthermore, Lange and Sernesi prove that $C_\eta$ is the only non-degenerate curve in the intersection of all quadrics in $\mathbb{P}^{g-2}$ containing $C_\eta$ if $c>3$, or $c=3$ and $g\geq 9$, see \cite{lange-sernesi96}. The following proposition generalises this result for a smaller family of quadrics.
\begin{prop}\label{prop1} Let $C$ be a curve of genus $g$ and Clifford index~$c$, and $\eta$ be a non-trivial $2$-torsion point in~$J(C)$. Let $I_2(C_\eta) \subset \operatorname{Sym}^2 {\rm H}^0(C,\omega_C \otimes \eta)$ be the vector space of equations of the quadrics containing~$C_\eta$, and $K \subset I_2(C_\eta)$ be a linear subspace of codimension~$k$. If $g \geq \max \lbrace 7, 2k+6\rbrace$ and $c \geq \max \lbrace 3, k+2 \rbrace$, then $C_\eta$ is the only irreducible non-degenerate curve in the intersection of the quadrics of~$K$. \end{prop}
Notice that for $k=0$, this proposition extends the result of Lange and Sernesi \cite{lange-sernesi96} to the cases when $c=3$ and $g=7$ and~$8$. We refer to Remark~\ref{rmk1} for a brief discussion on a simplified version of the following proof in this case.
\begin{proof}
We start by assuming that there exists an irreducible non-degenerate curve ${C}_0$ in the intersection of quadrics $\bigcap_{Q\in K}Q\subset \mathbb{P}\mathrm{H}^0(C,\omega_C\otimes \eta)^*$, which is different from~$C_\eta$. In particular, we can choose $k+1$ linearly independent points $x_1,\ldots,x_{k+1}$ in $\bigcap_{Q\in K}Q$ such that $x_i\not\in C_\eta$ for all~$i$. By abuse of notation, we also denote by $x_i$ their representatives in $\mathrm{H}^0(C,\omega_C\otimes\eta)^*$. We define $L\subset \operatorname{Sym}^2\mathrm{H}^0(C,\omega_C\otimes\eta)^*$ as the linear subspace spanned by the $x_i\otimes x_i$.
Let $R=I_2(C_\eta)/K$ and $R'=\operatorname{Sym}^2\mathrm{H}^0(C,\omega_C\otimes\eta)/K$. By Lemma~\ref{lem:proj-normal} and the fact that $g\geq 7$ and $c\geq 3$, we have that $C_\eta$ is projectively normal. Hence, we can build the following diagram: \[ \xymatrix{
& & 0 \ar[d] & 0 \ar[d] & & \\
& 0 \ar[r] & K \ar[d] \ar[r] & I_2(C_\eta) \ar[d] \ar[r] & R \ar[r] & 0\\
& & \operatorname{Sym}^2\mathrm{H}^0(C,\omega_C\otimes\eta) \ar[d] \ar@{=}[r] & \operatorname{Sym}^2\mathrm{H}^0(C,\omega_C\otimes\eta) \ar[d] & & \\
0 \ar[r] & R \ar[r] & R' \ar[d] \ar[r] & \mathrm{H}^0(C,\omega_C^{\otimes 2}) \ar[d] \ar[r] & 0 & \\
& & 0 & 0 & & } \] where the last row is obtained by applying the snake lemma to the first two rows. By dualizing this diagram, we get \[ \xymatrix{
& & 0 \ar[d] & 0 \ar[d] & & \\ & 0 \ar[r] & \mathrm{H}^0(C,\omega_C^{\otimes 2})^*={\rm H}^1(C,T_C) \ar[d] \ar[r] & R'^* \ar[d] \ar[r] & R^* \ar[r] & 0\\
& & \operatorname{Sym}^2\mathrm{H}^0(C,\omega_C\otimes\eta)^* \ar[d] \ar@{=}[r] & \operatorname{Sym}^2\mathrm{H}^0(C,\omega_C\otimes\eta)^* \ar[d] & & \\
0 \ar[r] & R^* \ar[r] & I_2(C_\eta)^* \ar[d] \ar[r] & K^* \ar[d] \ar[r] & 0 & \\
& & 0 & 0 & & } \]
Notice that $Q(\alpha)=0$ for every $\alpha\in L$ and every $Q\in K$. Therefore, $L\subset R'^*$. Since $\dim(L)=k+1$ and $\dim(R)=k$, there is a non-trivial element $\alpha\in L\cap \mathrm{H}^1(C,T_C)$. By the isomorphism $\mathrm{H}^1(C,T_C)\simeq\mathrm{Ext}^1(\omega_C,\mathcal{O}_C)$, there is a rank-$2$ vector bundle $E_\alpha$ associated to $\alpha$ satisfying the following exact sequence: \begin{equation}\label{eq:1} 0\longrightarrow\mathcal{O}_C\longrightarrow E_\alpha\longrightarrow\omega_C\longrightarrow 0. \end{equation} The cup product with $\alpha$ is the coboundary map $\mathrm{H}^0(C,\omega_C)\rightarrow\mathrm{H}^1(C,\mathcal{O}_C)$. By writing the element $\alpha=\sum_{i=1}^{k+1}a_ix_i\otimes x_i$, we have \[ \mathrm{Ker}(\cdot\cup\alpha)=\bigcap_{i\;\mid\;a_i\neq 0}H_i, \] where $H_i=\mathrm{Ker}(x_i)$. After reordering, we may assume that $x_1,\ldots,x_{k'}$ are the points such that $a_i\neq 0$, for some $k'\leq k+1$. This means that there are $g-k'$ linearly independent sections in $\mathrm{H}^0(C,\omega_C)$ lifting to~$\mathrm{H}^0(C,E_\alpha)$. Denote by $W\subset\mathrm{H}^0(C,E_\alpha)$ the vector space generated by these sections, and consider the morphism $\psi:\wedge^2W\rightarrow\mathrm{H}^0(C,\omega_C)$ obtained by the following composition: \[ \wedge^2W\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} \wedge^2 \mathrm{H}^0(C,E_\alpha) \longrightarrow\mathrm{H}^0(C,\det E_\alpha)=\mathrm{H}^0(C,\omega_C). \] The kernel of $\psi$ has codimension at most~$g$, and the Grassmannian of the decomposable elements in $\mathbb{P}(\wedge^2W)$ has dimension $2(g-k'-2)$. Since $g>2k+5$ by hypothesis, we have that their intersection is not trivial. Thus, take $s_1,s_2\in \mathrm{H}^0(C,E_\alpha)$ such that $\psi(s_1\wedge s_2)=0$. They generate a line bundle $M_\alpha\subset E_\alpha$ and $h^0(C,M_\alpha)\geq 2$. Take $Q_\alpha$ to be the quotient of $E_\alpha/M_\alpha$ by its torsion subsheaf, and $L_\alpha$ the kernel of $E_\alpha\rightarrow Q_\alpha$; then we obtain the following exact sequence: \begin{equation}\label{eq:2} 0\longrightarrow L_\alpha\longrightarrow E_\alpha\longrightarrow Q_\alpha\longrightarrow 0. \end{equation} Notice that $M_\alpha\subset L_\alpha$, hence $h^0(C,L_\alpha)\geq 2$. Moreover, from \eqref{eq:1} and \eqref{eq:2}, we obtain $\omega_C\simeq\det E_\alpha\simeq L_\alpha\otimes Q_\alpha$, which implies that $Q_\alpha\simeq \omega_C\otimes L_\alpha^{-1}$. We have the following diagram: \[ \xymatrix{
& & 0 \ar[d] & & \\
& & L_\alpha \ar[d] & & \\
0 \ar[r] & \mathcal{O}_C \ar[r] \ar[rd] & E_\alpha \ar[d] \ar[r] & \omega_C \ar[r] & 0\\
& & \omega_C\otimes L_\alpha^{-1} \ar[d] & & \\
& & 0 & & } \]
Assume that $\mathcal{O}_C\rightarrow\omega_C\otimes L_\alpha^{-1}$ is~$0$. Then the section of $E_\alpha$ that represents $\mathcal{O}_C\rightarrow E_\alpha$ would be a section of $L_\alpha$, in particular, a section in~$W$. Since the sections in $W$ map to sections of $\omega_C$, this contradicts the exactness of the horizontal sequence. So $\mathcal{O}_C\rightarrow\omega_C\otimes L_\alpha^{-1}$ is not~$0$, and hence $h^0(C,\omega_C\otimes L_\alpha^{-1})>0$.
If $h^0(C,\omega_C\otimes L_\alpha ^{-1})\geq 2$, we have that \begin{equation}\label{eq:3} c\leq \deg (L_\alpha)-2h^0(C,L_\alpha)+2. \end{equation} Moreover, $h^0(C,L_\alpha)+h^0(C,\omega_C\otimes L_\alpha^{-1})\geq h^0(C,E_\alpha)> \dim (W)=g-k'$ and, using Riemann-Roch we obtain that $2h^0(C,L_\alpha)\geq \deg (L_\alpha)+2-k'$. Combining this with \eqref{eq:3}, we obtain that $c\leq k'\leq k+1$ which contradicts our hypothesis on~$c$ ($c\geq k+2$). Hence, $h^0(C,\omega_C\otimes L^{-1}_\alpha)=1$.
Write $\omega_C\otimes L^{-1}_\alpha\simeq \mathcal{O}_C(p_1+\cdots+p_e)$, where $e=\deg(\omega_C\otimes L^{-1}_\alpha)$. Notice that $h^0(C,L_\alpha)\geq g-k'$ and $\deg(L_\alpha)=2g-2-e$. Using Riemann-Roch, we get \[ g-k'\leq h^0(C,L_\alpha)=h^0(C,\omega_C\otimes L^{-1}_\alpha)+2g-2-e-(g-1)=g-e. \] So $e\leq k'$.
By \eqref{eq:2}, we have that $L_\alpha\simeq \omega_C(-p_1-\cdots -p_e)$. Moreover, the sections of $L_\alpha$ lie in $W$, and by construction of $W$ we have that $\mathrm{H}^0(\omega_C(-p_1-\cdots-p_e))\subset\mathrm{Ker}(\cdot\cup\alpha)=\cap_{i\mid a_i\neq 0}H_i$. Therefore, by dualizing this inclusion, we obtain that \begin{equation}\label{eq:inclusion} \langle x_1,\ldots,x_{k'}\rangle_{\mathbb{C}} \subset \langle p_1,\ldots,p_e\rangle_{\mathbb{C}}. \end{equation}
Let $\gamma:N_0\rightarrow {C}_0$ be a normalization. For any generic choice of $k+1$ points $x_i\in N_0$, we can repeat the construction above for $\gamma(x_1),\ldots,\gamma(x_{k+1})$, and we can assume that $k'$ and $e$ are constant. We define the correspondence \[ \Gamma=\left\lbrace (x_1+\cdots+x_{k'},p_1+\cdots+p_e)\in N_0^{(k')}\times C_\eta^{(e)}\;\mid\; \langle\gamma(x_1),\ldots,\gamma(x_{k'})\rangle_{\mathbb{C}}\subset\langle p_1,\ldots,p_e\rangle_{\mathbb{C}}\right\rbrace. \] Observe that $\Gamma$ dominates $N_0^{(k')}$, so $e\leq k'\leq \dim\Gamma$. In addition, the second projection $\Gamma\rightarrow C_\eta^{(e)}$ has finite fibers, since both curves are non-degenerate. This implies that $\dim\Gamma\leq e$, and so we have $k'=e$. Since $k'\leq k+1\leq g-3$, by the uniform position theorem we have that the rational maps \begin{align*} C^{(k')}\dashrightarrow Sec^{(k')}(C_\eta)\subset {\mathbb G}(e-1,\mathbb{P}^{g-2}),\\ N_0^{(k')}\dashrightarrow Sec^{(k')}(N_0)\subset {\mathbb G}(e-1,\mathbb{P}^{g-2}), \end{align*} are generically injective. This gives a birational map between $C_\eta^{(k')}$ and $N_0^{(k')}$. In particular, it induces dominant morphisms $JC_\eta\rightarrow JN_0$ and $JN_0\rightarrow JC_\eta$. Therefore, $g(C_\eta)=g(N_0)$ and by a theorem of Ran~\cite{ran86}, the birational map $C_\eta^{(k')}\dashrightarrow N_0^{(k')}$ is defined by a birational map between $C_\eta$ and~$N_0$. By composing it with the normalization map $\gamma$, we obtain a birational map \[ \varphi: C_\eta\dashrightarrow C_0, \] that defines the correspondence $\Gamma$; that is $\langle \varphi(x_1),\ldots,\varphi(x_{k'})\rangle= \langle x_1,\ldots,x_{k'}\rangle$ for generic elements $x_1+\ldots +x_{k'}\in C_\eta^{(k')}$.
This implies that $\varphi$ is generically the identity map over~$C_\eta$. Thus $C_\eta=C_0$, which is a contradiction and ends the proof. \end{proof}
\begin{rmk}\label{rmk1} The proof of Corollary~\ref{cor1} can be simplified for the case $K=I_2(C_\eta)$, that is~$k=0$. Under this assumption, we only consider one point $x\not\in C_\eta$, and $k'=e=1$. Therefore, the inclusion~\eqref{eq:inclusion} already implies the equality $C_\eta=C_0$. \end{rmk}
\section{Main theorem} An element in $\mathcal{R}_g$ can be identified with a pair $(C,\eta)$, where $C$ is a complex smooth curve of genus~$g$, and $\eta$ is a non-trivial $2$-torsion element in the Jacobian of~$C$. This allows us to consider $\mathcal{R}_g$ as a covering of the moduli space $\mathcal{M}_g$ of complex smooth curves of genus~$g$. It is given by the morphism \[ \mathcal{R}_g\longrightarrow\mathcal{M}_g,\quad (C,\eta)\longmapsto C, \] which has degree $2^{2g}-1$. Thus, a generic choice of an element in a subvariety $\mathcal{Z}\subset\mathcal{R}_g$ is equivalent to a generic choice of a curve $C$ in the image of $\mathcal{Z}$ in $\mathcal{M}_g$, and any non-trivial element~$\eta\in J(C)[2]$.
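(For instance, this degree is just the number of non-trivial $2$-torsion points of the Jacobian: since $J(C)[2]\cong\left(\mathbb{Z}/2\mathbb{Z}\right)^{2g}$, one has \[ \#\left(J(C)[2]\setminus\{0\}\right)=2^{2g}-1, \] which for $g=7$ already equals $2^{14}-1=16383$.)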
The following result is a direct consequence of Proposition~\ref{prop1} and it is the version of Babbage-Enriques-Petri's theorem that we use in the proof of the main result in this article.
\begin{cor}\label{cor1} Let $(C,\eta)$ be a generic point in a subvariety ${\mathcal Z}$ of ${\mathcal R}_g$ of codimension~$k$. Let $I_2(C_\eta) \subset \operatorname{Sym}^2 {\rm H}^0 (C,\omega_C \otimes \eta)$ be the vector space of the equations of quadrics in $\mathbb{P}^{g-2}$ containing~$C_\eta$. Let $K \subset I_2(C_\eta)$ be a linear subspace of codimension~$k$. If $g \geq \max\lbrace 7,3k+5\rbrace$, then $C_\eta$ is the only irreducible non-degenerate curve in the intersection of the quadrics of~$K$. \end{cor}
\begin{proof} Let $\mathcal{M}_g^c$ be the locus in $\mathcal{M}_g$ corresponding to curves with Clifford index~$c$. Then ${\mathcal M}_g^c$ is a finite union of subvarieties of~${\mathcal M}_g$, where the one of highest dimension corresponds to the curves whose Clifford index is realized by a~$g^1_{c+2}$ linear series, see~\cite{coppens-martens91}. By Riemann-Hurwitz, the codimension in $\mathcal{M}_g$ of the component of the curves with a $g^1_{c+2}$ linear series is \[ 3g-3 - (2g+2c+2-3) = g-2c-2. \] If $k=0$, a generic curve in $\mathcal{M}_g$ has Clifford index $c\geq 3$, because $g\geq 7$. Now assume $k>0$. Since $(C,\eta)$ is generic in ${\mathcal Z}$ and ${\mathcal Z}$ has codimension~$k$, the image of ${\mathcal Z}$ in $\mathcal{M}_g$ has codimension at most~$k$ and is contained in the closure of~${\mathcal M}_g^c$; hence $k\geq g-2c-2$ and, since~$g \geq 3k+5$, we obtain \[ k \geq g -2c -2 \geq 3k +5 -2c-2 = 3k -2c+3, \] and thus $c \geq k+2$. The corollary follows by applying Proposition~\ref{prop1}. \end{proof}
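To illustrate the numerical bound just obtained with a toy instance: for $k=1$ the hypothesis reads $g\geq 8$, so \[ g-2c-2\leq k=1\ \Longrightarrow\ 2c\geq g-3\geq 5\ \Longrightarrow\ c\geq 3=k+2, \] which is exactly the range in which Proposition~\ref{prop1} applies.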
Let $\widetilde{{\mathcal A}_g}^m$ be the space of isogenies of principally polarized Abelian varieties of degree $m$ (up to isomorphism); that is the space of classes of isogenies $\chi: A \longrightarrow A'$ such that $\chi^*L_{A'} \cong L_{A}^{\otimes m}$, where~$L_A$ (respectively~$L_{A'}$) is a principal polarization on $A$ (respectively~$A'$). There are two forgetful maps to the moduli space $\mathcal{A}_g$ of p.p.a.v. of dimension~$g$ \begin{equation}\label{eq:forgetful} \xymatrix{
& \hspace*{3mm}\widetilde{{\mathcal A}_g}^m \ar[dl]_{\varphi} \ar[dr]^{\psi}& \\
{\mathcal A}_g & & {\mathcal A}_g, } \end{equation} such that $\varphi(\chi) = (A,L_{A})$ and $\psi(\chi) = (A',L_{A'})$. These maps yield the following commutative diagram, \begin{equation}\label{tangent} \xymatrix{
& \hspace*{3mm}T_{[\chi]}\widetilde{{\mathcal A}_{g-1}}^m \ar[dl]_{d\varphi} \ar[dr]^{d\psi} &
\vspace*{2mm}\\
\ T_{[A]} {\mathcal A}_{g-1} \ar[rr]^\lambda_{} & & T_{[A']} {\mathcal A}_{g-1}. } \end{equation} where all maps are isomorphisms.
\begin{thm}\label{thm1} Let ${\mathcal Z} \subset {\mathcal R}_g$ be a (possibly reducible) codimension $k$ subvariety. Assume that $g \geq \max\lbrace 7,3k+5 \rbrace$, and let $(C,\eta)$ be a generic element in~${\mathcal Z}$. If there exist a pair $(C',\eta')\in\mathcal{R}_g$ and an isogeny $\chi: P(C,\eta) \longrightarrow P(C',\eta')$, then $(C,\eta) \cong (C',\eta')$ and $\chi = [n]$, for some $n \in {\mathbb Z}$. \end{thm}
\begin{proof} Suppose that $(C,\eta)$ is generic in ${\mathcal Z}$. By the assumption on $g$, the Clifford index of a generic element of ${\mathcal Z}$ is at least three (as shown in the proof of Corollary~\ref{cor1}). Moreover, by \cite{naranjo96}, if the Clifford index of a curve $C$ is $c \geq 3$, then the corresponding fiber of the Prym map is 0-dimensional, i.e.~$\dim P^{-1}(P(C,\eta)) = 0$. Therefore, the restriction of the Prym map to ${\mathcal Z}$, \[ P_{\vert {\mathcal Z}}: {\mathcal Z} \ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} {\mathcal R}_g \longrightarrow {\mathcal A}_{g-1}, \] is generically finite, of some fixed degree $d\in\mathbb{N}$, onto its image. So, by the genericity of the pair~$(C,\eta)$, we can assume that $(C,\eta)$ lies in the locus of ${\mathcal Z}$ where~$P_{\vert{\mathcal Z}}$ is étale. This gives the isomorphisms of the tangent spaces \begin{equation}\label{eq:tangent} T_{[P(C,\eta)]}P({\mathcal Z}) \cong T_{[C,\eta]}{\mathcal Z}\quad \mbox{and}\quad T_{[P(C,\eta)]}P({\mathcal R}_g) \cong T_{[C,\eta]}{\mathcal R}_g. \end{equation}
Let us assume that the locus of curves in ${\mathcal R}_g$ whose corresponding Prym variety is isogenous to the Prym variety of an element in ${\mathcal Z}$ has an irreducible component ${\mathcal Z}'$ of codimension $k$. By \cite{pirola88}, since $k < g-2$, we have $\operatorname{End}(P(C,\eta)) \cong {\mathbb Z}$. Suppose that we are given an isogeny $\chi: P(C,\eta) \longrightarrow P(C',\eta')$; then, it must have the property that the pull-back of the principal polarization $\Xi'$ is a multiple of the principal polarization $\Xi$ on $P(C,\eta)$, say $\chi^*\Xi' \cong \Xi^{\otimes m}$, for some $m\in\mathbb{Z}$.
For such $m$, we have the diagram of forgetful maps as in~\eqref{eq:forgetful} with $g-1$ in place of~$g$. We can find an irreducible subvariety ${\mathcal V} \subset \widetilde{{\mathcal A}_{g-1}}^m$ which dominates both $P({\mathcal Z})$ and $P({\mathcal Z}')$ through $\varphi$ and $\psi$ respectively.
Setting ${\mathcal R} := \varphi^{-1}(P({\mathcal R}_g))$ and ${\mathcal R}' := \psi^{-1}(P({\mathcal R}_g))$, we have the inclusion~${\mathcal V} \subset {\mathcal R} \cap {\mathcal R}'$.
For a generic element $\chi: P(C,\eta) \longrightarrow P(C', \eta')$ in ${\mathcal V}$, the diagram (\ref{tangent}) becomes \[ \xymatrix{ & \hspace*{3mm}T_{[\chi]}\widetilde{{\mathcal A}_{g-1}}^m \ar[dl]_{d\varphi} \ar[dr]^{d\psi} & \vspace*{2mm}\\ \ T_{[P(C,\eta)]} {\mathcal A}_{g-1} \ar[rr]^\lambda_{\cong} & & T_{[P(C',\eta')]} {\mathcal A}_{g-1}. } \] In addition, $T_{[P(C,\eta)]} {\mathcal A}_{g-1} \cong \operatorname{Sym}^2 {\rm H}^0(P(C,\eta),T_{P(C,\eta)}) \cong \operatorname{Sym}^2 {\rm H}^0(\omega_C \otimes \eta)^*$. By looking at~$d\varphi$, and the isomorphisms in~\eqref{eq:tangent}, we see that we have the following diagram of tangent spaces and identifications: \[ \xymatrix{ T_{[\chi]}{\mathcal V} \ar[d]_{\cong} \ar@{^{(}->}[rr] & &T_{[\chi]}{\mathcal R} \ar[d]_{\cong}\ar@{^{(}->}[rr] & &T_{[\chi]}{\mathcal R} +T_{[\chi]}{\mathcal R}'\ar@{=}[d]\ar@{^{(}->}[rr] & &T_{[\chi]}\widetilde{{\mathcal A}_{g-1}}^m\ar[d]_{\cong}\\ T_{[C,\eta]}{\mathcal Z} \ar@{^{(}->}[rr] & &T_{[C,\eta]}{\mathcal R}_g \ar@{^{(}->}[rr] & &\bar{T}\ar@{^{(}->}[rr] & &\operatorname{Sym}^2 {\rm H}^0(\omega_C \otimes \eta)^* } \] where the vertical arrows are induced by $d\varphi$.
By the Grassmann formula, $\dim \bar{T} \leq 3g - 3 +k$. Set \[K(C_\eta) := \ker \Big(\operatorname{Sym}^2 {\rm H}^0(\omega_C \otimes \eta) \longrightarrow \bar{T}^*\Big),\] which is a subspace of the space of quadrics containing the semicanonical curve~$C_\eta$. Notice that $\operatorname{codim}_{I_2(C_\eta)} K(C_\eta) \leq k$. By repeating the above argument with $\psi$ in place of $\varphi$, we get the corresponding inclusion of vector spaces $K(C'_{\eta'}) \subset I_2(C'_{\eta'})$, and by using the (canonical) isomorphism $\lambda$ above, we get a (canonical) isomorphism $K(C_\eta) \cong K(C'_{\eta'})$.
A closer look at $\lambda: T_{[P(C,\eta)]} {\mathcal A}_{g-1} \longrightarrow T_{[P(C',\eta')]}{\mathcal A}_{g-1}$ reveals that this map is induced by the isogeny $\chi : P(C,\eta) \longrightarrow P(C',\eta')$. In fact, one has that $d_0 \chi: {\rm H}^0(\omega_{C} \otimes \eta) \longrightarrow {\rm H}^0(\omega_{C'} \otimes \eta')$ is an isomorphism, and $\lambda$ is induced by it. This means that $d_0 \chi$ induces an isomorphism of projective spaces ${\mathbb P}{\rm H}^0(\omega_{C} \otimes \eta)^* \longrightarrow {\mathbb P}{\rm H}^0(\omega_{C'} \otimes \eta')^*$, which sends quadrics containing $C'_{\eta'}$ to quadrics containing $C_\eta$, by means of $\lambda$. By using Corollary~\ref{cor1}, we get that $C_\eta \cong C'_{\eta'}$, and thus $C \cong C'$. This gives us the following commutative diagram \[ \xymatrixcolsep{5pc}\xymatrix{ C \ar[d]^\cong \ar@{^{(}->}[r]^{\varphi_{\omega_C \otimes \eta}}&C_\eta\ar@{^{(}->}[r]\ar[d]^\cong & {\mathbb P}{\rm H}^0(\omega_{C} \otimes \eta)^* \ar[d]^\cong\\ C' \ar@{^{(}->}[r]^{\varphi_{\omega_{C'} \otimes \eta'}}&C'_{\eta'}\ar@{^{(}->}[r] & {\mathbb P}{\rm H}^0(\omega_{C'} \otimes \eta')^* } \] from which we deduce that $(C,\eta) \cong (C',\eta')$. Indeed, pulling back hyperplanes to $C$ and~$C'$ yields an isomorphism $\omega_{C'} \otimes \eta' \cong \omega_{C} \otimes \eta$, from which it follows that~$\eta \cong \eta'$. The isogeny is necessarily of the form $[n]$, for some $n\in{\mathbb Z}$, because $\operatorname{End}(P(C,\eta))\cong {\mathbb Z}$. \end{proof}
\end{document}
\begin{document}
\title{Quantifier Elimination and Rectilinearisation Theorem for Generalised Quasianalytic Algebras }
\author{J.-P. Rolin\thanks{Partially supported by Convénio FCT/CNRS 2011 Project CNRS128447776310533.} $\ $and T. Servi\thanks{Partially supported by FCT PEst OE/MAT/UI0209/2011, by Convénio FCT/CNRS 2011 Project CNRS128447776310533 and by FCT Project PTDC/MAT/122844/2010.}}
\date{{}} \maketitle \begin{abstract} We consider for every $n\in\mathbb{N}$ an algebra $\mathcal{A}_{n}$ of germs at $0\in\mathbb{R}^{n}$ of continuous real-valued functions, such that we can associate to every germ $f\in\mathcal{A}_{n}$ a (divergent) series $\mathcal{T}\left(f\right)$ with nonnegative real exponents, which can be thought of as an asymptotic expansion of $f$. We require that the $\mathbb{R}$-algebra homomorphism $f\mapsto\mathcal{T}\left(f\right)$ be injective\emph{ }(quasianalyticity property). In this setting we prove analogue results to Denef and van den Dries' quantifier elimination and Hironaka's rectilinearisation theorems for subanalytic sets. \end{abstract} \emph{2010 Mathematics Subject Classification }30D60, 14P15, 03C64 (primary), 32S45 (secondary).
\section{Introduction}
In \cite{parusinski:preparation} the author proves a preparation theorem for subanalytic functions (whose original statement can be found in \cite{lr:prep,paru:lip1}), which can be seen as a multivariable Puiseux Theorem, or as a primitive version of Hironaka's Rectilinearisation Theorem. Our aim is to extend this result to a general quasianalytic setting.
\paragraph{{}}
The results of this paper lie within the framework of o-minimal geometry. The notion of \emph{o-minimal structure} was introduced as a model-theoretic concept. However, in order to render it and our results accessible to a larger audience, we give a geometric version of its definition (see \cite{vdd:tame} and \cite{vdd:mill:omin} for an overview of the subject).
We consider a collection $\mathcal{F}$ of functions $f:\mathbb{R}^{m}\to\mathbb{R}$ for $m\in\mathbb{N}$. A set $A\subseteq\mathbb{R}^{n}$ is \emph{definable} in the \emph{structure} $\mathbb{R}_{\mathcal{F}}=\left(\mathbb{R},<,0,1,+,\cdot,\mathcal{F}\right)$ (which we call the \emph{expansion of the real field by }$\mathcal{F}$) if $A$ belongs to the smallest collection $\mathcal{S}=\left(\mathcal{S}_{n}\right)_{n\in\mathbb{N}}$ of subsets of $\mathbb{R}^{n}$ ($n\in\mathbb{N}$) such that: \begin{enumerate} \item $\mathcal{S}_{n}$ contains all semi-algebraic subsets of $\mathbb{R}^{n}$, \item $\mathcal{S}$ contains the graphs of all functions in $\mathcal{F}$, \item $\mathcal{S}$ is closed under the boolean set operations and projections. \end{enumerate} A map $f:A\subseteq\mathbb{R}^{m}\to\mathbb{R}^{n}$ is \emph{definable} in $\mathbb{R}_{\mathcal{F}}$ if its graph is a definable subset of $\mathbb{R}^{m+n}$ (this implies that $A$ itself is definable).
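As a toy illustration of how conditions (1)--(3) are used (the example plays no role in the sequel): if $A\subseteq\mathbb{R}^{n}$ is definable in $\mathbb{R}_{\mathcal{F}}$, then so is its topological closure, since \[ \bar{A}=\left\{ x\in\mathbb{R}^{n}:\ \forall\varepsilon>0\ \exists a\in A\ \left|x-a\right|<\varepsilon\right\} \] and the two quantifiers can be expressed by taking projections and complements of the set $\left\{ \left(x,a,\varepsilon\right):\ a\in A,\ \varepsilon>0,\ \left|x-a\right|<\varepsilon\right\} $, which is obtained from $A$ and semi-algebraic sets by boolean operations.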
The structure $\mathbb{R}_{\mathcal{F}}$ is said to be: \begin{enumerate} \item \emph{model-complete} if in order to generate the whole family of definable sets one can dispense with taking complements; \item \emph{o-minimal }if all definable sets have finitely many connected components; \item \emph{polynomially bounded} if the germs at infinity of definable unary functions have at most polynomial growth. \end{enumerate} The following two classical examples of polynomially bounded model-complete o-minimal expansions of the real field show how o-minimal geometry can be seen as a generalisation of real algebraic and analytic geometry. \begin{itemize} \item $\bar{\mathbb{R}}$, where $\mathcal{F}=\emptyset$, i.e. the structure whose definable sets are the semi-algebraic sets (the proof of model-completeness and o-minimality follows from the Tarski-Seidenberg \emph{elimination} principle \cite{tarski,seidenberg}). \item $\mathbb{R}_{\mathrm{an}}$, where $\mathcal{F}$ is the set of functions $f:\mathbb{R}^{n}\to\mathbb{R}$ ($n\in\mathbb{N}$) whose restriction to the unit cube $\left[-1,1\right]^{n}$ is real analytic and which are identically zero outside the unit cube (the proof is a consequence of Gabrielov's \emph{Theorem of the Complement}, which states that\emph{ }the complement of a subanalytic subset of a real analytic manifold is again subanalytic \cite{gabriel:proj,gabriel:expli,bm_semi_subanalytic}). \end{itemize} Besides the finiteness property mentioned in the definition, sets and functions definable in an o-minimal structure share many good geometric properties with semi-algebraic and subanalytic sets. In particular, for all $k\in\mathbb{N}$, a definable set admits a finite stratification with $\mathcal{C}^{k}$ definable manifolds (see for example \cite{vdd:mill:omin}).
\paragraph{{}}
In the last two decades the list of examples has grown considerably, including several polynomially bounded structures for which o-minimality is a consequence of model-completeness. Among these examples we find: \begin{itemize} \item $\mathbb{R}_{\mathrm{an}^{*}}$, where $\mathcal{F}$ is the collection of all so-called \emph{convergent generalised power series, }namely all functions $f:\mathbb{R}^{n}\to\mathbb{R}$ whose restriction to the unit cube is given by a convergent series of monomials with positive real exponents (with a well-order condition on the support) and which are identically zero outside the unit cube (see \cite{vdd:speiss:gen}). \item $\mathbb{R}_{\mathcal{G}}$, where $\mathcal{F}$ is a collection of Gevrey functions in several variables (see \cite{vdd:speiss:multisum} for the definitions and proofs). \item $\mathbb{R}_{\mathcal{C}\left(M\right)}$, where $\mathcal{F}$ is a collection of $\mathcal{C}^{\infty}$ functions, restricted to the unit cube, whose derivatives are bounded by a strictly increasing sequence of positive constants $M=\left(M_{n}\right)_{n\in\mathbb{N}}$ satisfying the \emph{Denjoy-Carleman quasianalyticity }condition (see \cite{rsw} for the definitions and proofs). \item $\mathbb{R}_{\text{an},H}$, where $\mathcal{F}$ is a collection of functions containing all real analytic functions restricted to the unit cube and a solution $H$ of a first order analytic differential equation which is singular at the origin (see \cite{rss}). \item $\mathbb{R}_{\mathcal{Q}}$, in which certain Dulac transition maps are definable (see \cite{ilyashenko:centennial} for a complete survey on Dulac's problem and \cite{krs} for the proof of model-completeness and o-minimality). In this example $\mathcal{F}$ is a collection of functions, restricted to the unit cube, whose germ at zero admits an asymptotic expansion (with positive real exponents) which is, in general, divergent (as opposed to the case of $\mathbb{R}_{\text{an}^{*}}$). \end{itemize} The Tarski-Seidenberg elimination principle, which implies model-completeness, is a \emph{quantifier elimination} result, which can be resumed by saying that the projection of a semi-algebraic set is again semi-algebraic. It is well known that the analogue result does not hold for $\mathbb{R}_{\text{an}}$. However, Denef and van den Dries proved in \cite{vdd:d} that every relatively compact subanalytic set can be described by a system of equations and inequalities satisfied by some compositions of restricted analytic functions and quotients. In other words, the structure $\mathbb{R}_{\text{an}}$ admits quantifier elimination in the language of restricted analytic functions expanded by the function $D:\left(x,y\right)\mapsto x/y$
for $|y|\geq|x|$ and $y\neq0$, and zero otherwise.
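For a toy illustration of the role of $D$ (the example is not taken from \cite{vdd:d}): the function $\left(x,y\right)\mapsto x/y$ on the region $\left\{ \left(x,y\right):\ 0\leq x\leq y\leq1,\ y>0\right\} $ is subanalytic, but it has no continuous extension to the origin (its value along the line $x=ty$ is $t$), so it cannot be given near the origin by a single restricted analytic function; in the expanded language its graph is described without quantifiers by \[ \left\{ \left(x,y,z\right):\ 0\leq x\leq y\leq1,\ y>0,\ z=D\left(x,y\right)\right\} . \]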
\paragraph{{}}
Our main goal is to prove a quantifier elimination result in the spirit of \cite{vdd:d} for all the structures in the above examples. In order to do this, we first define a common framework of which each of the above examples appears as a special case. Secondly, we isolate the common properties of these structures which are relevant to the proof of their o-minimality. Finally, we proceed to develop a new strategy for the proof of quantifier elimination. The originality of this work lies especially in this last part, since the main tool in Denef and van den Dries' proof, namely the Weierstrass Preparation Theorem, is not available to us in this setting, as we will explain later.
\paragraph{{}}
The proofs of model-completeness and o-minimality for the above examples are similar and all inspired by \cite{gabriel:proj,bm_semi_subanalytic}. The common strategy is to parametrise a definable set by maps whose components are in $\mathcal{F}$. To obtain this, the key property is the \emph{quasianalyticity} (explained below) of the algebras generated by the germs of functions in $\mathcal{F}$. However, there are significant differences between these examples. In some of them (see \cite{vdd:speiss:gen,vdd:speiss:multisum,krs}), we can use the\emph{ }Weierstrass Preparation Theorem\emph{ }with respect to a certain type of variables. In this case the proof of o-minimality is very close to Gabrielov's proof for $\mathbb{R}_{\text{an}}$ in \cite{gabriel:proj}. In the other cases, one uses instead some form of \emph{resolution of singularities}, which allows one to obtain the required parametrisation result in a more indirect way.
\paragraph{{}}
Our first goal is to unify all these different proofs, so that all the examples above appear as particular cases of a general o-minimality statement. We consider, for all $n\in\mathbb{N}$, an algebra $\mathcal{A}_{n}$ of continuous functions such that to each germ at $0\in\mathbb{R}^{n}$ of $f\in\mathcal{A}_{n}$ we can associate a (divergent) series $\mathcal{T}\left(f\right)$ with nonnegative real exponents. This series can be thought of as an \emph{asymptotic expansion} of $f$. The algebra of all germs at $0$ of functions in $\mathcal{A}_{n}$ is \emph{quasianalytic }if the $\mathbb{R}$-algebra homomorphism $\mathcal{T}:f\mapsto\mathcal{T}\left(f\right)$ is injective. The link between quasianalyticity and o-minimality appears already in \cite{vdd:o_minimal_real_analytic,vdd:speiss:multisum}. The quasianalyticity property can be obtained de facto (as in \cite{vdd:speiss:gen}, where the functions under consideration are \emph{equal} to the sum of their convergent expansion), or it has to be proved using analysis techniques (resummation methods in \cite{vdd:speiss:multisum} and \cite{rss}, Denjoy-Carleman's theorem in \cite{rsw}, Ilyashenko's method for Dulac's problem in \cite{krs}).
\paragraph{{}}
It is not our purpose here to prove quasianalyticity. Rather, we assume that we are given a collection of quasianalytic algebras. We show, then, that the structure $\mathbb{R}_{\mathcal{A}}$ generated by the algebras $\mathcal{A}_{n}$ is model-complete, o-minimal and polynomially bounded. Given the level of generality we aim to keep, we cannot make use of the Weierstrass Preparation Theorem. Hence, we show how to prove the parametrisation result for definable sets by using an appropriate \emph{blow-up} procedure. The latter takes inspiration from the methods in \cite{bm_semi_subanalytic}, adapted to series with real exponents in \cite{vdd:speiss:gen}.
\paragraph{{}}
Once we have proved o-minimality for the structures which fall into this general framework, we can proceed towards our quantifier elimination result. In \cite{vdd:d}, by a clever use of the Weierstrass Preparation Theorem and of the Tarski-Seidenberg elimination principle, Denef and van den Dries prove their result without needing to solve explicitly any system of analytic equations.
In our framework, recall that we have already shown that a bounded $\mathbb{R}_{\mathcal{A}}$-definable set can be parametrised by maps whose components are in the algebras $\mathcal{A}_{n}$. Our aim is to eliminate the parameters. Since we cannot use the Weierstrass Preparation Theorem and hence reduce to the polynomial situation, our strategy is, instead, to solve explicitly the parametrising equations with respect to the parameters. In order to do that, we use o-minimality of the structure $\mathbb{R}_{\mathcal{A}}$, established in the first part of the paper, and apply an \emph{o-minimal Preparation Theorem} for functions definable in a polynomially bounded o-minimal structure \cite{vdd:speiss:preparation_theorems}. The latter result, whose proof uses valuation and model-theoretic methods, allows one to find, piecewise, a ``principal part'' of a definable function. This is the starting point for a Newton-Puiseux solving method for quasianalytic equations.
\paragraph{{}}
Finally, Denef and van den Dries deduce Hironaka's \emph{Rectilinearisation Theorem} \cite{hironaka_real_analytic} as a corollary of their main result. This result states that every subanalytic set $A\subseteq\mathbb{R}^{n}$ can be transformed into a finite union of quadrants of dimension at most $\text{dim}\left(A\right)$, via a finite sequence of blow-ups of the ambient space $\mathbb{R}^{n}$. In the same spirit, we prove a Rectilinearisation Theorem for bounded $\mathbb{R}_{\mathcal{A}}$-definable sets.
\paragraph{{}}
The plan of the paper is the following. In Section \ref{sec:Setting-and-main} we introduce formally the setting we are working in and, in Subsection \ref{sub:The-theorems}, we give the two main statements we prove, namely o-minimality of $\mathbb{R}_{\mathcal{A}}$ (Theorem A) and quantifier elimination (Theorem B). Section \ref{sec:Monomialisation of series} is dedicated to a monomialisation, or desingularisation, algorithm. In Section \ref{sec:Parametrisation-of--subanalytic-1} we prove the parametrisation result mentioned above. The proof of Theorem A is completed in Subsection \ref{subsec:Proof-of-o-minimality}, following a traditional approach à la Gabrielov, thanks to Proposition \ref{prop: trivial manifolds for subanal sets}. Section \ref{sec:Vertical-monomialisation} is dedicated to the proof of Theorem B. The key result is a monomialisation theorem for $\mathbb{R}_{\mathcal{A}}$-definable functions (Theorem \ref{thm: monomialis of def functions}), from which we deduce the Rectilinearisation Theorem \ref{cor: rectilinearisation} and Theorem B. The proof of Theorem \ref{thm: monomialis of def functions} is obtained by a significant modification of the monomialisation process described in Subsection \ref{sub:Monomialisation-of-generalised}. We develop a \emph{vertical monomialisation algorithm} which allows us to solve explicitly a system of quasianalytic equations (Theorem \ref{thm: ABC}, part B), invert a parametrisation (Theorem \ref{thm: ABC}, part C) and finally monomialise definable functions (Theorem \ref{thm: ABC}, part A). The first step for solving explicitly quasianalytic equations is to ``weakly monomialise'' the solutions. This is done in Lemma \ref{lem: weak monomialisation}, where we use the o-minimal Preparation Theorem in \cite{vdd:speiss:preparation_theorems} mentioned above.
\section{Setting and main results\label{sec:Setting-and-main}}
\subsection{Generalised power series\label{sub:gen power series}} \begin{defn} \label{def:good set}Let $m\in\mathbb{N}$. A set $S\subset[0,\infty)^{m}$ is called \emph{good} if $S$ is contained in a cartesian product $S_{1}\times\ldots\times S_{m}$ of well ordered subsets of $[0,\infty)$. If $S$ is a good set, define $S_{\mathrm{min}}$ as the set of minimal elements of $S$. By \cite[Lemma 4.2]{vdd:speiss:gen}, $S_{\mathrm{min}}$ is finite. For $\alpha=\left(\alpha_{1},\ldots,\alpha_{m}\right),\beta=\left(\beta_{1},\ldots,\beta_{m}\right)\in S$ we write $\alpha\leq\beta$ if $\alpha_{i}\leq\beta_{i}$ for all $i=1,\ldots,m$. So if $S$ is good, $\forall\alpha\in S\ \exists\beta\in S_{\mathrm{min}}$ such that $\alpha\geq\beta$.
Denote by $\Sigma\left(S\right)$ the set of all finite sums (done component-wise) of elements of $S$. By \cite[Lemma 4.3]{vdd:speiss:gen}, if $S$ is good then $\Sigma\left(S\right)$ is also good. \end{defn} We recall the definition of generalised formal power series, originally due to \cite{vdd:speiss:gen}. \begin{defn} \label{def:power series} Let $\mathbb{A}$ be a commutative ring, $m\in\mathbb{N}$ and $X=(X_{1},\ldots,X_{m})$ be a tuple of variables. We consider formal series \[ F(X)=\sum_{\alpha}c_{\alpha}X^{\alpha}, \]
where $\alpha=(\alpha_{1},\ldots,\alpha_{m})\in[0,\infty)^{m},\ c_{\alpha}\in\mathbb{A}$ and $X^{\alpha}$ denotes the formal monomial $X_{1}^{\alpha_{1}}\cdot\ldots\cdot X_{m}^{\alpha_{m}}$, and the set \[ \sopp(F):=\{\alpha\in[0,\infty)^{m}:\ c_{\alpha}\not=0\}\ (\mathrm{the\ support\ of\ }F) \]
is a good set. These series are added the usual way and form a ring denoted by $\mathbb{A}\llbracket X^{*}\rrbracket$. \end{defn} The ring of usual power series $\mathbb{A}\llbracket X\rrbracket$ can be seen as the subring of $\mathbb{A}\llbracket X^{*}\rrbracket$ consisting of the series $F$ with $\sopp(F)\subset\mathbb{N}^{m}$.
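For instance, the series \[ F\left(X\right)=\sum_{n\geq1}X^{1-\frac{1}{n}}=1+X^{\frac{1}{2}}+X^{\frac{2}{3}}+X^{\frac{3}{4}}+\cdots \] belongs to $\mathbb{R}\llbracket X^{*}\rrbracket$, since its support $\left\{ 1-\frac{1}{n}:\ n\geq1\right\} $ is a well ordered subset of $[0,\infty)$, whereas $\sum_{n\geq1}X^{\frac{1}{n}}$ is not allowed, since the support $\left\{ \frac{1}{n}:\ n\geq1\right\} $ contains the infinite strictly decreasing sequence $1>\frac{1}{2}>\frac{1}{3}>\cdots$ and hence is not well ordered.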
\begin{defn} \label{def:total support}Let $\mathcal{F}\subset\mathbb{A}\left\llbracket X^{*}\right\rrbracket $ be a (possibly infinite) family of series such that the \emph{total support of $\mathcal{F}$} \[ \mathrm{Supp}\left(\mathcal{F}\right):=\bigcup_{F\in\mathcal{F}}\mathrm{Supp}\left(F\right) \]
is a good set. Then $\mathrm{Supp}\left(\mathcal{F}\right)_{\mathrm{min}}$ is finite and we define the \emph{set of minimal monomials of $\mathcal{F}$} \[ \mathcal{F}_{\mathrm{min}}:=\left\{ X^{\alpha}:\ \alpha\in\mathrm{Supp}\left(\mathcal{F}\right)_{\mathrm{min}}\right\} . \]
\end{defn}
\begin{defn} \label{def:mixed series} We fix $m,n\in\mathbb{N}$. Let $(X,Y)=(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n})$. We define $\mathbb{A}\llbracket X^{*},Y\rrbracket$ as the subring of $\mathbb{A}\llbracket(X,Y)^{*}\rrbracket$ consisting of those series $F$ such that $\sopp(F)\subset[0,\infty)^{m}\times\mathbb{N}^{n}$. Moreover, for $F\in\mathbb{A}\left\llbracket X^{*},Y\right\rrbracket $, we define \[ \mathrm{Supp}_{X}\left(F\right):=\left\{ \alpha\in[0,\infty)^{m}:\ \exists N\in\mathbb{N}^{n}\mathrm{\ s.t.\ }\left(\alpha,N\right)\in\mathrm{Supp}\left(F\right)\right\} . \]
Notice that $\mathrm{Supp}\left(F\right)$ is good if and only if $\mathrm{Supp}_{X}\left(F\right)$ is good. \end{defn}
\begin{notation} \label{not: derivatives}Let $F\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $. For $i\in\left\{ 1,\ldots,m\right\} $ and $j\in\left\{ 1,\ldots,n\right\} $, let \[ \hat{X}=\left(X_{1},\ldots,X_{i-1},X_{i+1},\ldots,X_{m}\right)\text{ and }\hat{Y}=\left(Y_{1},\ldots,Y_{j-1},Y_{j+1},\ldots,Y_{n}\right). \] We can write $F\left(X,Y\right)=\sum_{\alpha\in S}G_{\alpha}\left(\hat{X},Y\right)X_{i}^{\alpha}=\sum_{k\in\mathbb{N}}H_{k}\left(X,\hat{Y}\right)Y_{j}^{k}$, where $S$ is the $i^{th}$ projection of $\mathrm{Supp}\left(F\right)$ and $G_{\alpha}\in\mathbb{R}\left\llbracket \hat{X}^{*},Y\right\rrbracket ,\ H_{k}\in\mathbb{R}\left\llbracket X^{*},\hat{Y}\right\rrbracket $. We denote by $\partial_{i}F$ the series $\sum_{\alpha\in S}\alpha G_{\alpha}X_{i}^{\alpha}$ and by $\frac{\partial F}{\partial Y_{j}}$ the series $\sum_{k\geq1}kH_{k}Y_{j}^{k-1}$. \end{notation}
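For example, for $F\left(X_{1},Y_{1}\right)=X_{1}^{3/2}Y_{1}+X_{1}^{1/2}\in\mathbb{R}\left\llbracket X_{1}^{*},Y_{1}\right\rrbracket $ one has \[ \partial_{1}F=\tfrac{3}{2}X_{1}^{3/2}Y_{1}+\tfrac{1}{2}X_{1}^{1/2}\quad\text{and}\quad\frac{\partial F}{\partial Y_{1}}=X_{1}^{3/2}; \] thus $\partial_{1}$ acts as $X_{1}\frac{\partial}{\partial X_{1}}$, which, unlike $\frac{\partial}{\partial X_{1}}$, does not produce negative exponents and hence preserves $\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $.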
\begin{notation} Let $\lambda,\alpha>0$. We denote by $\left(Y+\lambda\right)^{\alpha}$ the power series $\lambda^{\alpha}\sum_{i\in\mathbb{N}}\binom{\alpha}{i}\left(\frac{Y}{\lambda}\right)^{i}\in\mathbb{R}\left\llbracket Y\right\rrbracket $. \end{notation}
\subsection{Quasianalytic algebras\label{sub:Quasi-analytic-algebras}}
In this subsection we define the basic object of our interest, namely we fix a family $\mathcal{A}$ of real functions satisfying the properties in \ref{vuoto:conditions on functions} and \ref{def: A-analytic}. Moreover, we require the collection of all germs at zero of the functions in $\mathcal{A}$ to satisfy the properties in \ref{def: quasi-analyticity} and \ref{emp:properties of the morph}. \begin{notation} Let $m,n\in\mathbb{N}$. A \emph{polyradius }of type $\left(m,n\right)$ in $\mathbb{R}^{m+n}$ is a tuple of the form $r=(s,t)=(s_{1},\ldots,s_{m},t_{1},\ldots,t_{n})\in[0,\infty)^{m+n}$. If $r,r'$ are polyradii in $\mathbb{R}^{m+n}$, we write $r'\leq r$ if $s_{i}'\leq s_{i}$ for all $i=1,\ldots,m$ and $t_{j}'\leq t_{j}$ for all $j=1,\ldots,n$. If $r'$ is of type $\left(m,n-1\right)$ and $r$ is of type $\left(m,n\right)$, we write $r'\leq r$ if $\left(r',0\right)\leq r$. We also define: \[ I_{m,n,r}:=(0,s_{1})\times\ldots\times(0,s_{m})\times(-t_{1},t_{1})\times\ldots\times(-t_{n},t_{n}), \]
\[ \hat{I}_{m,n,r}:=[0,s_{1})\times\ldots\times[0,s_{m})\times(-t_{1},t_{1})\times\ldots\times(-t_{n},t_{n}). \] We also denote by $\hat{I}_{m,n,\infty}$ the set $[0,+\infty)^{m}\times\mathbb{R}^{n}$.\end{notation} \begin{void} \label{vuoto:conditions on functions}For every $m,n\in\mathbb{N}$ and $r\in(0,\infty)^{m+n}$, we let $\mathcal{A}_{m,n,r}$ be an algebra of real functions, which are $\mathcal{C}^{0}$ on $\hat{I}_{m,n,r}$ and $\mathcal{C}^{1}$ on $I_{m,n,r}$. We require that the algebras $\mathcal{A}_{m,n,r}$ satisfy the following list of conditions. Let $x=\left(x_{1,}\ldots,x_{m}\right)$ and $y=\left(y_{1},\ldots,y_{n}\right)$. \begin{itemize} \item The coordinate functions of $\mathbb{R}^{m+n}$ are in $\mathcal{A}_{m,n,r}$. \item If $r'\leq r$ and $f\in\mathcal{A}_{m,n,r}$, then $f\upharpoonright\hat{I}_{m,n,r'}\in\mathcal{A}_{m,n,r'}$. \item If $f\in\mathcal{A}_{m,n,r}$ then there exists $r'>r$ and $g\in\mathcal{A}_{m,n,r'}$ such that $g\upharpoonright\hat{I}_{m,n,r}=f$. \item If $f\in\mathcal{A}_{m,n,r}$, $s\in(0,\infty)$ and $r'=\left(s_{1},\ldots,s_{m},s,t_{1},\ldots,t_{n}\right)$ then the function \[ \xyC{0mm}\xyL{0mm}\xymatrix{F\colon & \hat{I}_{m+1,n,r'}\ar[rrrr] & \ & \ & \ & \mathbb{R}\\
& \left(x_{1},\ldots,x_{m},z,y\right)\ar@{|->}[rrrr] & & & & f\left(x,y\right) } \] is in $\mathcal{A}_{m+1,n,r'}$. \item $\mathcal{A}_{m,n,r}\subset\mathcal{A}_{m+n,0,r}$, in the sense that we identify $f\left(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n}\right)\in\mathcal{A}_{m,n,r}$ with $f\left(x_{1},\ldots,x_{m+n}\right)\in\mathcal{A}_{m+n,0,r}$. \item Let $\sigma\in\Sigma_{m}$ be a permutation and let $\sigma\left(x\right)=\left(x_{\sigma\left(1\right)},\ldots,x_{\sigma\left(m\right)}\right)$. If $f\in\mathcal{A}_{m,n,r}$ then there exists $f_{\sigma}\in\mathcal{A}_{m,n,r}$ such that $f_{\sigma}\left(x,y\right)=f\left(\sigma\left(x\right),y\right)$. \item If $f\in\mathcal{A}_{m,n,r}$ then there exists $g\in\mathcal{A}_{m-1,n,r}$ such that \[ g\left(x_{1},\ldots,x_{m-1},y\right)=f\left(x_{1},\ldots,x_{m-1},0,y\right). \]
\item If $f\in\mathcal{A}_{m,n,r}$ then $f\left(x_{1}/s_{1},\ldots,x_{m}/s_{m},y_{1}/t_{1},\ldots,y_{n}/t_{n}\right)\in\mathcal{A}_{m,n,1}.$ \end{itemize} \end{void} \begin{notation} \label{not: germs}We denote by $\mathcal{A}_{m,n}$ the algebra of germs at the origin of the elements of $\mathcal{A}_{m,n,r}$, for $r$ a polyradius in $(0,\infty)^{m+n}$. When $n=0$ we write $\mathcal{A}_{m}$ for $\mathcal{A}_{m,0}$. We will often denote the germ and a representative by the same letter. \end{notation} We require all the functions in $\mathcal{A}$ to satisfy the property described in the next definition, which mimics the property of real analytic germs of being analytic on a whole neighbourhood of the origin. \begin{defn}
\label{def: A-analytic}Let $f\left(x,y\right)\in\mathcal{A}_{m,n,r}$, for some $m,n\in\mathbb{N}$ and $r\in\left(0,\infty\right)^{m+n}$. For $a\in\hat{I}_{m,n,r}$, put $m'=|\left\{ i:\ 1\leq i\leq m,\ a_{i}=0\right\} |$ and choose a permutation $\sigma$ of $\left\{ 1,\ldots,m\right\} $ such that $\sigma\left(\left\{ i:\ 1\leq i\leq m,\ a_{i}=0\right\} \right)=\left\{ 1,\ldots,m'\right\} $. We say that $f$ is $\mathcal{A}$-analytic if for all $a\in\hat{I}_{m,n,r}$ there exists a germ $g_{a}\in\mathcal{A}_{m',m+n-m'}$ such that the germ at $a$ of $f$ is equal to the germ at $a$ of $ $ \[ \left(x,y\right)\mapsto g_{a}\left(x_{\sigma\left(1\right)}-a_{\sigma\left(1\right)},\ldots,x_{\sigma\left(m\right)}-a_{\sigma\left(m\right)},y_{1}-a_{m+1},\ldots,y_{n}-a_{m+n}\right). \]
\end{defn}
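A simple illustration (with exponents chosen for concreteness, assuming as in the examples of Subsection \ref{sub:Examples} below that the germ $x_{1}^{1/2}$ belongs to $\mathcal{A}_{1}$): for $f\left(x_{1}\right)=x_{1}^{1/2}$ on $[0,s)$ and a point $a>0$, one can take $m'=0$ and \[ g_{a}\left(y_{1}\right)=\left(a+y_{1}\right)^{1/2}=a^{1/2}\left(1+\frac{y_{1}}{a}\right)^{1/2}, \] an ordinary analytic germ (which lies in $\mathcal{A}_{0,1}$ by the binomial formula of Remarks \ref{rems: properties of the algebras} below), whereas at $a=0$ one must keep $m'=1$ and $g_{0}=x_{1}^{1/2}$ itself. This is the reason for distinguishing the $x$-variables, which range over half-open intervals $[0,s_{i})$, from the $y$-variables.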
The next definition establishes a relevant analogy with the behaviour of analytic germs, namely the fact that a germ in the collection $\{\mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\}$ is uniquely determined by its ``generalised Taylor expansion''. \begin{defn} \label{def: quasi-analyticity}We say that $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $ is a collection of \emph{quasianalytic algebras} if, for all $m,n\in\mathbb{N}$, there exists an \textbf{injective} $\mathbb{R}$-algebra morphism \[ \mathcal{T}_{m,n}:\mathcal{A}_{m,n}\to\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket , \]
where $X=\left(X_{1},\ldots,X_{m}\right),\ Y=\left(Y_{1},\ldots,Y_{n}\right)$. Moreover, for all $m'\geq m,\ n'\geq n$ we require that the morphism $\mathcal{T}_{m',n'}$ extend $\mathcal{T}_{m,n}$, hence, from now on we will write $\mathcal{T}$ for $\mathcal{T}_{m,n}$. \end{defn} We require $\{\mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\}$ to be a collection of quasianalytic algebras which, together with the morphism $\mathcal{T}$, satisfies the list of closure and compatibility properties in \ref{emp:properties of the morph} below. First, we need some definitions. \begin{defn} \label{def: admissible exponents}A number $\alpha\in[0,\infty)$ is an \emph{admissible exponent} if there are $m,n\in\mathbb{N},$ $f\in\mathcal{A}_{m,n},\ \beta\in\sopp\left(\mathcal{T}\left(f\right)\right)\subset\mathbb{R}^{m}\times\mathbb{N}^{n}$ such that $\alpha$ is a component of $\beta$. We denote by $\mathbb{A}$ the semi-ring generated by all admissible exponents and by $\mathbb{K}$ the field of fractions of $\mathbb{A}$. \end{defn}
\begin{defn} \label{def: blow-up charts}Let $m,n\in\mathbb{N},\ \left(x,y\right)=\left(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n}\right)$. For $m',n'\in\mathbb{N}$ with $m'+n'=m+n$, we set $\left(x',y'\right)=\left(x_{1}',\ldots,x_{m'}',y_{1}',\ldots,y_{n'}'\right)$. Let $r,r'$ be polyradii in $\mathbb{R}^{m+n}$. A \emph{blow-up chart }is a map
\[ \xyC{0mm}\xyL{0mm}\xymatrix{\pi\colon & \hat{I}_{m',n',r'}\ar[rrrr] & \ & \ & \ & \hat{I}_{m,n,r}\\
& \left(x',y'\right)\ar@{|->}[rrrr] & & & & \left(x,y\right) } \] of either of the following forms: \begin{itemize} \item For $1\leq j<i\leq m$ and $\lambda\in(0,\infty)$, let $m'=m-1$ and $n'=n+1$ and define \begin{align*} \pi_{i,j}^{\lambda} & \left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k<i\\ x_{i}=x_{j}'\left(\lambda+y_{1}'\right)\\ x_{k}=x_{k-1}' & i<k\leq m\\ y_{k}=y_{k+1}' & 1\leq k\leq n \end{cases}. \end{align*}
\item For $1\leq j,i\leq m$, let $m'=m$ and $n'=n$ and define \begin{align*} \pi_{i,j}^{0} & \left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m,\ k\not=i\\ x_{i}=x_{j}'x_{i}'\\ y_{k}=y_{k}' & 1\leq k\leq n \end{cases}\\ \mathrm{and}\ & \pi_{i,j}^{\infty}=\pi_{j,i}^{0}; \end{align*}
\item For $1\leq i\leq n,\ 1\leq j\leq m$ and $\lambda\in\mathbb{R}$, let $m'=m$ and $n'=n$ and define \begin{align*} \pi_{m+i,j}^{\lambda}\left(x',y'\right) & =\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m\\ y_{i}=x_{j}'\left(\lambda+y_{i}'\right)\\ y_{k}=y_{k}' & 1\leq k\leq n,\ k\not=i \end{cases}. \end{align*}
\item For $1\leq i\leq n,\ 1\leq j\leq m$, let $m'=m+1$ and $n'=n-1$ and define \[ \pi_{m+i,j}^{\pm\infty}\left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m,\ k\not=j\\ x_{j}=x_{m+1}'x_{j}'\\ y_{k}=y_{k}' & 1\leq k<i\\ y_{i}=\pm x_{m+1}'\\ y_{k}=y_{k-1}' & i<k\leq n \end{cases}. \]
\item For $1\leq i,j\leq n$ and $\lambda\in\mathbb{R}$, let $m'=m$ and $n'=n$ and define \begin{align*} \pi_{m+i,m+j}^{\lambda}\left(x',y'\right) & =\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m\\ y_{i}=y_{j}'\left(\lambda+y_{i}'\right)\\ y_{k}=y_{k}' & 1\leq k\leq n,\ k\not=i \end{cases}\\ \mathrm{and}\ & \pi_{m+i,m+j}^{\infty}=\pi_{m+j,m+i}^{0}. \end{align*}
\end{itemize} \begin{flushleft} We also define the following collections: \par\end{flushleft}
\begin{tabular}{l} $\mathrm{for}\ 1\leq i,j\leq m,\ \ \pi_{i,j}:=\left\{ \pi_{i,j}^{\lambda}:\ \lambda\in\left[0,\infty\right]\right\} ,$\tabularnewline $\mathrm{for}\ 1\leq i\leq n,\ 1\leq j\leq m,\ \ \pi_{m+i,j}:=\left\{ \pi_{m+i,j}^{\lambda}:\ \lambda\in\mathbb{R}\cup\left\{ \pm\infty\right\} \right\} ,$\tabularnewline $\mathrm{for}\ 1\leq i,j\leq n,\ \ \pi_{m+i,m+j}:=\left\{ \pi_{i,j}^{\lambda}:\ \lambda\in\mathbb{R}\cup\left\{ \infty\right\} \right\} .$\tabularnewline \end{tabular}\end{defn} \begin{void} \label{emp:properties of the morph}We require that the family of algebras of germs $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $ satisfy the following properties: \begin{enumerate} \item \emph{Monomials and ramifications.} If $\mathbb{A}\not=$$\mathbb{N}$, then for every $\alpha\in\mathbb{K}^{\geq0}$, the germ $x_{1}\mapsto x_{1}^{\alpha}$ is in $\mathcal{A}_{1}$ and $\mathcal{T}\left(x_{1}^{\alpha}\right)=X_{1}^{\alpha}$. Moreover, if $f\in\mathcal{A}_{m,n}$, then $g\left(x,y\right):=f\left(x_{1}^{\alpha},x_{2},\ldots,x_{m},y\right)\in\mathcal{A}_{m,n}$ and $\mathcal{T}\left(g\right)=\mathcal{T}$$\left(f\right)\left(X_{1}^{\alpha},X_{2},\ldots,X_{m},Y\right)$. \item \emph{Monomial division.} Let $f\in\mathcal{A}_{m,n}$ and suppose that there exist $\alpha\in\mathbb{K},$ $n\in\mathbb{N}$ and $G\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $ such that $\mathcal{T}(f)\left(X,Y\right)=X_{1}^{\alpha}Y_{1}^{n}G\left(X,Y\right)$. Then there exists $g\in\mathcal{A}_{m,n}$ such that $f\left(x,y\right)=x_{1}^{\alpha}y_{1}^{n}g\left(x,y\right)$. It follows that $\mathcal{T}\left(g\right)=G$: in fact, $\mathcal{T}\left(f\right)=\mathcal{T}\left(x_{1}^{\alpha}y_{1}^{n}g\right)=X_{1}^{\alpha}Y_{1}^{n}\mathcal{T}\left(g\right)$ and $\mathcal{T}\left(f\right)$ is also equal to $X_{1}^{\alpha}Y_{1}^{n}G$. \item \emph{Permutations of the $x$-variables. }Let $\sigma\in\Sigma_{m}$ be a permutation and $f\in\mathcal{A}_{m,n}$. Then $\mathcal{T}\left(f_{\sigma}\right)=\mathcal{T}\left(f\right)_{\sigma}$. \item \emph{Setting a variable equal to zero. }For $f\in\mathcal{A}_{m,n}$, we have \[ \mathcal{T}\left(f\left(x_{1},\ldots,x_{m-1},0,y\right)\right)=\mathcal{T}\left(f\right)\left(X_{1},\ldots,X_{m-1},0,Y\right). \]
\item \emph{Composition in the $y$-variables.} Let $g_{1},\ldots,g_{n}\in\mathcal{A}_{m',n'}$ with $g_{i}\left(0\right)=0$ and let $f\in\mathcal{A}_{m,n}$. Then $h:=f\left(x,g_{1},\ldots,g_{n}\right)\in\mathcal{A}_{m+m',n'}$ and $\mathcal{T}\left(h\right)=\mathcal{T}\left(f\right)\left(X,\mathcal{T}\left(g_{1}\right),\ldots,\mathcal{T}\left(g_{n}\right)\right)$. \item \emph{Implicit functions in the $y$-variables.} Let $f\in\mathcal{A}_{m,n}$ with $f\left(0\right)=0$ and suppose that $\frac{\partial f}{\partial y_{n}}\left(0\right)$ exists and is nonzero. Then there exists $g\in\mathcal{A}_{m,n-1}$, with $g\left(0\right)=0$, such that $f\left(x,y_{1},\ldots,y_{n-1},g\left(x,y_{1},\ldots,y_{n-1}\right)\right)=0$. It follows that \[ \mathcal{T}\left(f\right)\left(X,Y_{1},\ldots,Y_{n-1},\mathcal{T}\left(g\right)\left(X,Y_{1},\ldots,Y_{n-1}\right)\right)=0. \]
\item \emph{Blow-ups.} Let $f\in\mathcal{A}_{m,n}$ and $\pi:\hat{I}_{m',n',\infty}\to\hat{I}_{m,n,\infty}$ be a blow-up chart (see Definition \ref{def: blow-up charts}). Then $f\circ\pi\in\mathcal{A}_{m',n'}$ and $\mathcal{T}\left(f\circ\pi\right)=\mathcal{T}\left(f\right)\circ\pi$. \end{enumerate} \end{void} \begin{rems} \label{rems: properties of the algebras}$\ $Here is a list of consequences of the above properties.
{}
\noindent \emph{Closure under differentiation}. $\mathcal{A}_{m,n}$ is closed under partial derivatives with respect to the $y$-variables and $\mathcal{T}$ is compatible with this operation: $f\in\mathcal{A}_{m,n}\Rightarrow\frac{\partial f}{\partial y_{i}}\in\mathcal{A}_{m,n}$ and $\mathcal{T}\left(\frac{\partial f}{\partial y_{i}}\right)=\frac{\partial\mathcal{T}\left(f\right)}{\partial Y_{i}}$. In fact, consider the germ $g\left(x,y,z_{1}\right):=f\left(x,\ldots,y_{i}+z_{1},\ldots\right)-f\left(x,y\right)\in\mathcal{A}_{m,n+1}$. Then $g\left(x,y,0\right)=0$, so $\mathcal{T}\left(g\right)\left(X,Y,0\right)=0$; hence $\mathcal{T}\left(g\right)\left(X,Y,Z_{1}\right)=Z_{1}H\left(X,Y,Z_{1}\right)$, for some series $H$. By monomial division, there exists $h\in\mathcal{A}_{m,n+1}$ such that $g\left(x,y,z_{1}\right)=z_{1}h\left(x,y,z_{1}\right)$ and hence $\frac{\partial f}{\partial y_{i}}\left(x,y\right)=h\left(x,y,0\right)\in\mathcal{A}_{m,n}$.
{}
\noindent \emph{Taylor expansion.} If $f\in\mathcal{A}_{m,1}$, then $\mathcal{\mathcal{T}}\left(f\right)\left(0,Y\right)$ is the Taylor expansion of $f\left(0,y\right)$ with respect to $y$. In fact, $\mathcal{\mathcal{T}}\left(f\right)=\sum_{i=0}^{\infty}\frac{1}{i!}\frac{\partial^{i}\mathcal{\mathcal{T}}\left(f\right)}{\partial Y^{i}}\left(X,0\right)Y^{i}=\sum_{i=0}^{\infty}\frac{1}{i!}\mathcal{\mathcal{T}}\left(\frac{\partial^{i}f}{\partial y^{i}}\left(x,0\right)\right)Y^{i}$.
{}
\noindent \emph{Closure under $\partial_{i}$. }Let $f\in\mathcal{A}_{m,n}$ and let $\partial_{i}f$ be the germ at zero of $x_{i}\frac{\partial f}{\partial x_{i}}$ (extended by continuity at zero), for $i=1,\ldots,m$. Then $\partial_{i}f\in\mathcal{A}_{m,n}$ and $\mathcal{T}\left(\partial_{i}f\right)=\partial_{i}\mathcal{T}\left(f\right)$. In fact, consider \[ g\left(x,y,z\right):=f\left(x_{1},\ldots,x_{i-1},x_{i}\left(1+z\right),x_{i+1},\ldots,x_{m},y\right)\in\mathcal{A}_{m,n+1}. \] Then, $\frac{\partial g}{\partial z}\left(x,y,0\right)\in\mathcal{A}_{m,n}$. Notice that, for some $r\in\left(0,\infty\right)^{m+n+1}$, there is a representative of $g$ (still denoted by $g$), such that \[ \frac{\partial g}{\partial z}\left(x,y,z\right)=x_{i}\frac{\partial f}{\partial x_{i}}\left(x_{1},\ldots,x_{i}\left(1+z\right),\ldots,x_{m},y\right)\mathrm{\ on\ }I_{m,n+1,r}. \]
Hence, $\partial_{i}f=\frac{\partial g}{\partial z}\left(x,y,0\right)$. The compatibility with $\mathcal{T}$ follows easily.
{}
\noindent \emph{Closure under homothety}. Let $f\in\mathcal{A}_{m}$ and $\lambda\in(0,\infty)$. Then $g\left(x\right)=f\left(\lambda x_{1},x_{2},\ldots,x_{m}\right)\in\mathcal{A}_{m}$. In fact, let $F\left(x_{1},z,x_{2},\ldots,x_{m}\right)=f\left(z,x_{2},\ldots,x_{m}\right)\in\mathcal{A}_{m+1}$. Using the appropriate blow-up chart involving $x_{1}$ and $z$, we obtain $G\left(x_{1},z,x_{2},\ldots,x_{m}\right)=F\left(x_{1},x_{1}\left(\lambda+z\right),x_{2},\ldots,x_{m}\right)=f\left(x_{1}\left(\lambda+z\right),x_{2},\ldots,x_{m}\right)\in\mathcal{A}_{m+1}$ and finally $g\left(x\right)=G\left(x_{1},0,x_{2},\ldots,x_{m}\right)\in\mathcal{A}_{m}$.
{}
\noindent \emph{$\mathcal{C}^{\infty}$ germs}. Let $f\in\mathcal{A}_{1}$ be $\mathcal{C}^{\infty}$ at zero and suppose that $f\left(0\right)=0$. Then $f\in\mathcal{A}_{0,1}$, i.e. the exponents appearing in $\mathcal{T}\left(f\right)$ are natural numbers. In fact, suppose not, let $\alpha\in\left(0,\infty\right)\setminus\mathbb{N}$ be the smallest exponent of $\mathcal{T}\left(f\right)$ which is not natural, and let $c_{1}x^{n_{1}}+\ldots+c_{k}x^{n_{k}}$ be the sum of the (finitely many) terms of $\mathcal{T}\left(f\right)$ of exponent smaller than $\alpha$. By monomial division, there exists $g\in\mathcal{A}_{1}$ such that $f\left(x\right)-c_{1}x^{n_{1}}-\ldots-c_{k}x^{n_{k}}=x^{\alpha}g\left(x\right)$ and $g\left(0\right)\not=0$. Since $f$ is $\mathcal{C}^{\infty}$ at zero, this forces $\alpha\in\mathbb{N}$, a contradiction.
{}
\noindent \emph{Binomial formula}. Let $\alpha\in\mathbb{K}$. Then $\left(1+y\right)^{\alpha}\in\mathcal{A}_{0,1}$. In fact, suppose first that $\alpha>0$. Notice that $\left(1+y\right)^{\alpha}$ is an analytic germ and its Taylor expansion is $\sum_{n\in\mathbb{N}}\binom{\alpha}{n}y^{n}$; let $g\left(x,y\right)=y^{\alpha}\in\mathcal{A}_{2}$. By composing a ramification with a suitable blow-up chart, we obtain $g_{1}\left(x,y\right)=x^{\alpha}\left(1+y\right)^{\alpha}\in\mathcal{A}_{2}$ and $\mathcal{\mathcal{T}}\left(g_{1}\right)\left(X,Y\right)=X^{\alpha}\left(1+Y\right)^{\alpha}$. By monomial division, there exists $h\in\mathcal{A}_{1}$ such that $g_{1}\left(x,y\right)=x^{\alpha}h\left(y\right)$ and $\mathcal{\mathcal{T}}\left(h\right)\left(Y\right)=\sum_{n\in\mathbb{N}}\binom{\alpha}{n}Y^{n}$. Hence, $h\left(y\right)=\left(1+y\right)^{\alpha}\in\mathcal{A}_{0,1}$, by the previous remark. Now notice that $\left(1+y\right)^{-\alpha}-1$ is the solution of the implicit function problem $\left(1+y\right)^{\alpha}\left(1+z\right)-1=0$, and hence $\left(1+y\right)^{-\alpha}\in\mathcal{A}_{0,1}$.
{}
\noindent \emph{Units.} A \emph{unit} of $\mathcal{A}_{m}$ is an invertible element or, equivalently, a germ $U\in\mathcal{A}_{m}$ such that $U\left(0\right)\not=0$. We claim that, for all $\alpha\in\mathbb{K}$, $U\left(x\right)^{\alpha}\in\mathcal{A}_{m}$. In fact, we may suppose $U\left(0\right)=1$ and write $U\left(x\right)=1+\varepsilon\left(x\right)$, where $\varepsilon\in\mathcal{A}_{m}$ and $\varepsilon\left(0\right)=0$. By composition and the previous remark, $\left(1+\varepsilon\left(x\right)\right)^{\alpha}\in\mathcal{A}_{m}$.\end{rems} \begin{defn} \label{def: normal germ}Let $f\in\mathcal{A}_{m}$. We say that $f$ is \emph{normal} if there exist $\alpha\in\mathbb{A}^{m}$ and $u\in\mathcal{A}_{m}$ such that $f\left(x\right)=x^{\alpha}u\left(x\right)$ and $u\left(0\right)\not=0$. \end{defn} \begin{lem} \label{lem:composition with monomials}Let $f\in\mathcal{A}_{N}$ and $G=\left(g_{1},\ldots,g_{N}\right)\in\mathcal{A}_{M}^{N}$, and suppose that the $g_{i}$ are all normal. Then $f\circ G\in\mathcal{A}_{M}$.\end{lem} \begin{proof} We will only treat the case $N=1,\ M=2$, the proof in the general case being a straightforward generalisation. Let $g\left(x,y\right)=x^{\alpha}y^{\beta}U\left(x,y\right)$, where $U\in\mathcal{A}_{2}$ is a unit.
First, suppose that $\alpha=\beta=0$. Up to homothety, we may assume $U\left(0,0\right)=1$ and write $U\left(x,y\right)=1+\varepsilon\left(x,y\right)$, with $\varepsilon\in\mathcal{A}_{2}$ and $\varepsilon\left(0,0\right)=0$. By $\mathcal{A}$-analyticity of $f$ (see Definition \ref{def: A-analytic}), there exists $\varphi\in\mathcal{A}_{0,1}$ such that $\varphi\left(y\right)=f\left(1+y\right)$. Since $\mathcal{T}\left(\varphi\right)$ has only integer exponents, we may apply Axiom 5 of \ref{emp:properties of the morph} and deduce that the germ of $\left(x,y\right)\mapsto\varphi\left(\varepsilon\left(x,y\right)\right)$ belongs to $\mathcal{A}_{2}$; this germ is exactly $f\circ g$.
Now suppose that $\alpha>0$. Let $F\left(u,x\right)=f\left(u\right)\in\mathcal{A}_{2}$. By Axioms 1 and 7 of \ref{emp:properties of the morph}, $F\left(x^{\alpha}\left(u+1\right),x\right)$ belongs to $\mathcal{A}_{2}$. By Axiom 4 of \ref{emp:properties of the morph}, the germ $f_{1}:x\mapsto F\left(x^{\alpha},x\right)\in\mathcal{A}_{1}$. This shows that for every germ $\tilde{f}\in\mathcal{A}_{n+1}$, we have $\tilde{f}\left(x^{\alpha},y_{1},\ldots,y_{n}\right)\in\mathcal{A}_{n+1}$. Let $\beta>0$ and $F_{1}\left(x,y\right)=f_{1}\left(x\right)\in\mathcal{A}_{2}$. By Axioms 1 and 7 of \ref{emp:properties of the morph}, the germ of $h:\left(x,y\right)\mapsto F_{1}\left(y^{\beta}x,x\right)\in\mathcal{A}_{2}$. Now, by what we have just proved, the germ $h_{1}:\left(x,y\right)\mapsto h\left(x,y^{1/\alpha}\right)\in\mathcal{A}_{2}$. Notice that $h_{1}\left(x,y\right)=f\left(x^{\alpha}y^{\beta}\right)$. Recall that, by Remarks \ref{rems: properties of the algebras}, $U^{1/\beta}\in\mathcal{A}_{2}$. We may assume $U^{1/\beta}\left(0,0\right)=1$ and write $U^{1/\beta}\left(x,y\right)=1+\varepsilon\left(x,y\right)$, with $\varepsilon\in\mathcal{A}_{2}$ and $\varepsilon\left(0,0\right)=0$. Let $H\left(x,y,z\right)=h_{1}\left(x,z\right)\in\mathcal{A}_{3}$. By Axiom 7 of \ref{emp:properties of the morph}, the germ $H_{1}:\left(x,y,z\right)\mapsto H\left(x,y,y\left(1+z\right)\right)\in\mathcal{A}_{2,1}$. By Axiom 5 of \ref{emp:properties of the morph}, the germ $h_{2}:\left(x,y\right)\mapsto H_{1}\left(x,y,\varepsilon\left(x,y\right)\right)\in\mathcal{A}_{2}$. Since $h_{2}\left(x,y\right)=f\left(x^{\alpha}y^{\beta}U\left(x,y\right)\right)$, we are done. \end{proof}
\subsection{The theorems\label{sub:The-theorems}} \begin{proviso} \label{empty: proviso} For the rest of the paper $\mathcal{A}=\{\mathcal{A}_{m,n,r}\colon m,n\in\mathbb{N},\ r\in(0,\infty)^{m+n}\}$ will be a collection of algebras of functions as in the previous subsection, i.e. such that: \begin{itemize} \item $\mathcal{A}$ satisfies the conditions in \ref{vuoto:conditions on functions}. \item Every function in the collection is $\mathcal{A}$-analytic (see Definition \ref{def: A-analytic}). \item $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $ is a collection of quasianalytic algebras of germs (see Definition \ref{def: quasi-analyticity}). \item The collection $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $ and the morphism $\mathcal{T}$ satisfy the closure and compatibility properties in \ref{emp:properties of the morph}. \end{itemize} \end{proviso} \begin{defn} \label{def:structures}Let $\mathcal{A}$ be as above.
For $f\in\mathcal{A}_{m,n,1}$ we define $\tilde{f}:\mathbb{R}^{m+n}\to\mathbb{R}$ as \[ \tilde{f}\left(x,y\right)=\begin{cases} f\left(x,y\right) & \mathrm{if}\ \left(x,y\right)\in I_{m,n,1}\\ 0 & \mathrm{otherwise} \end{cases}. \]
Let $\mathcal{L}_{\mathcal{A}}$ be the language of ordered rings $\left\{ <,0,1,+,-,\cdot\right\} $ augmented by a new function symbol for each function $\tilde{f}$. Let $\mathbb{R}_{\mathcal{A}}$ be the real ordered field with its natural $\mathcal{L}_{\mathcal{A}}$-structure. Let $^{-1}$ be a function symbol for $x\mapsto\frac{1}{x}$, where $0^{-1}=0$ by convention, and, for $n\in\mathbb{N}$, let $\sqrt[n]{\ }$ be a function symbol for the function
\[ x\mapsto\begin{cases} x^{1/n} & \text{if\ }0\leq x\leq1\\ 0 & \text{otherwise} \end{cases}. \] \end{defn} \begin{namedthm} {Theorem A}\label{thm:o-minimality}The structure $\mathbb{R}_{\mathcal{A}}$ is model-complete, o-minimal and polynomially bounded. Its field of exponents \footnote{Recall that the field of exponents of a polynomially bounded o-minimal structure is the set of all $\alpha\in\mathbb{R}$ such that the function $x\mapsto x^{\alpha}$ is definable. It is indeed a field (see for example \cite{miller:growth_dichotomy}). } is the field $\mathbb{K}$, defined in Definition \ref{def: admissible exponents}. \end{namedthm}
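For instance, in example a) of Subsection \ref{sub:Examples} below every $\alpha\in[0,\infty)$ is an admissible exponent (the monomial $X^{\alpha}$ is a convergent generalised power series), so $\mathbb{K}=\mathbb{R}$; in example c) all the supports lie in $\mathbb{N}^{m+n}$, so $\mathbb{A}=\mathbb{N}$ and $\mathbb{K}=\mathbb{Q}$.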
\begin{namedthm} {Theorem B}\label{thm:qe}The (natural) expansion of the structure $\mathbb{R}_{\mathcal{A}}$ to the language $\mathcal{L}_{\mathcal{A}}\cup\left\{ ^{-1}\right\} \cup\left\{ \sqrt[n]{\ }:\ n\in\mathbb{N}\right\} $ admits quantifier elimination.\end{namedthm} \begin{rem} \label{rem: no need for homothety}The choice of putting in $\mathcal{L}_{\mathcal{A}}$ a function symbol only for representatives on the unit box is not binding; we could have put a function symbol for every function in $\mathcal{A}_{m,n,r}$ and dispensed with the last condition in \ref{vuoto:conditions on functions}, and the two above theorems would still be true. \end{rem}
\subsection{Examples\label{sub:Examples}}
Most known examples of o-minimal polynomially bounded expansions of the real field can be generated by a family $\mathcal{A}$ of algebras satisfying the requirements of Proviso \ref{empty: proviso}. In particular, this is true for all the structures mentioned in the introduction.
a) In \cite{vdd:speiss:gen} the authors consider, for every polyradius $r$, the sub-algebra $\mathbb{R}\left\{ X^{*},Y\right\} _{r}$ of $\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $ consisting of all generalised power series which converge on $\hat{I}_{m,n,r}$ (see \cite[p. 4377]{vdd:speiss:gen}). The morphism $\mathcal{T}$ in this case is the inclusion $\mathbb{R}\left\{ X^{*},Y\right\} _{r}$$\hookrightarrow\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $, so the quasianalyticity property is obvious, and $\mathcal{A}$-analyticity is proved in \cite[Corollary 6.7]{vdd:speiss:gen}.
b) In \cite{vdd:speiss:multisum} the authors consider a family of algebras $\mathcal{G}_{m}$ of Gevrey functions in $m$ variables (see \cite[Definition 2.20]{vdd:speiss:multisum}). The morphism $\mathcal{T}$ is the Taylor map at zero, the quasianalyticity property follows from a fundamental result in multisummabilty theory (see \cite[Proposition 2.18]{vdd:speiss:multisum}) and $\mathcal{A}$-analyticity is proved in \cite[Lemma 4.8]{vdd:speiss:multisum}.
c) In \cite{rsw} the authors consider a family of quasianalytic Denjoy-Carleman algebras $\mathcal{C}_{B}\left(M\right)$, where $B$ is a box in $\mathbb{R}^{n}$ and $M=\left(M_{1},M_{2},\ldots\right)$ is an increasing sequence of positive constants (see \cite[p.751]{rsw}). The morphism $\mathcal{T}$ is the Taylor map at zero, the quasianalyticity property is equivalent to the condition $\sum_{i=0}^{\infty}\frac{M_{i}}{M_{i+1}}=\infty$ and $\mathcal{A}$-analyticity is automatically verified, since these algebras are closed under translation.
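To illustrate the displayed condition (a toy computation only; for the precise definition of the classes see \cite{rsw}): the strictly increasing sequence $M_{n}=\left(n+1\right)!$ gives \[ \sum_{i=0}^{\infty}\frac{M_{i}}{M_{i+1}}=\sum_{i=0}^{\infty}\frac{1}{i+2}=\infty, \] hence a quasianalytic class, whereas $M_{n}=\left(\left(n+1\right)!\right)^{2}$ gives $\sum_{i}M_{i}/M_{i+1}=\sum_{i}\left(i+2\right)^{-2}<\infty$, hence a non-quasianalytic class.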
d) In \cite{rss} the authors consider a solution $H=\left(H_{1},\ldots,H_{r}\right):\left(-\varepsilon,\varepsilon\right)\to\mathbb{R}^{r}$ of a first order analytic differential equation which is singular at the origin and they construct the smallest collection $\mathcal{A}_{H}$ of algebras of germs containing the germ of $H$ and all the analytic germs, and closed under (a subset of) the operations implicit in \ref{emp:properties of the morph}. They consider the family of algebras of functions consisting of all $\mathcal{A}$-analytic representatives of the germs in $\mathcal{A}_{H}$ (it is proved in \cite[Lemma 3.4]{rss} that every germ in $\mathcal{A}_{H}$ has an $\mathcal{A}$-analytic representative) and let the morphism $\mathcal{T}$ be the Taylor expansion at zero. The quasianalyticity property is proved in \cite[Theorem 3.5]{rss} and $\mathcal{A}$-analyticity is automatic by construction.
e) Finally, in \cite{krs} the authors consider a family of algebras $\mathcal{Q}_{m}^{m+n}\left(U\right)$ of real functions which have a holomorphic extension to a so-called ``quadratic domain'' $U\subseteq\mathbf{L}^{m+n}$, where $\mathbf{L}$ is the Riemann surface of the logarithm (see \cite[Definition 5.1]{krs}). These algebras contain the \emph{Dulac transition maps} of real analytic planar vector fields in a neighbourhood of hyperbolic non-resonant singular points. The morphism $\mathcal{T}$ defined in \cite[Definition 2.6]{krs} associates to the germ $f$ of a function $\mathcal{Q}_{m}^{m+n}$ an \emph{asymptotic expansion} $\mathcal{T}\left(f\right)\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $. The quasianalyticity property is proved in \cite[Proposition 2.8]{krs} and $\mathcal{A}$-analyticity is proved in \cite[Corollary 3.7]{krs}. For all the above examples the closure and compatibility properties in \ref{emp:properties of the morph} are also proven in the mentioned papers. \begin{rem} \label{rem: how to construct examples}Let $f:\hat{I}_{m_{0},n_{0},r_{0}}\to\mathbb{R}$ be a \emph{weak}-$\mathcal{C}^{\infty}$ function (in the sense of \cite{miller:infinite-differentiability}), i.e. for every point $a\in\hat{I}_{m_{0},n_{0},r_{0}}$ and for every $k\in\mathbb{N}$, the germ of $f$ at $a$ has a $\mathcal{C}^{k}$ representative. Suppose furthermore that $f$ is definable in a polynomially bounded expansion of the real field. Let $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $ be smallest collection of $\mathbb{R}$-algebras of germs containing the germs at zero of the functions $f_{a}\left(x\right):=f\left(x+a\right)$ (for all $a\in\hat{I}_{m_{0},n_{0},r_{0}}$) and of the coordinate functions, and satisfying the closure properties in \ref{emp:properties of the morph}. Let $\mathcal{A}=\left\{ \mathcal{A}_{m,n,r}\right\} $ be the collection of the $\mathbb{R}$-algebras made of all $\mathcal{A}$-analytic representatives of the germs in $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $. It is easy to check (see for example \cite[Lemma 3.4]{rss}) that every germ in $\mathcal{A}_{m,n}$ has an $\mathcal{A}$-analytic representative. Clearly, $\mathcal{A}$ satisfies the first (see Remark \ref{rem: no need for homothety}) and the last two conditions in Proviso \ref{empty: proviso}. Finally, it follows from \cite[Corollary 2]{miller:infinite-differentiability} that $\left\{ \mathcal{A}_{m,n}:\ m,n\in\mathbb{N}\right\} $ is a collection of quasianalytic algebras. Hence, the structure $\mathbb{R}_{\mathcal{A}}$ satisfies Theorems A and B. \end{rem} In analogy with \cite{gabriel:expli} and \cite{rsw}, we deduce from Theorem A the following explicit model-completeness result. \begin{cor} \label{cor: model compl from derivatives}Let $f:\hat{I}_{m_{0},n_{0},r_{0}}\to\mathbb{R}$ be a $\mathcal{C}^{\infty}$ function definable in some polynomially bounded expansion of the real field. Let $\mathcal{L}_{f}$ be the language of ordered rings $\left\{ <,0,1,+,-,\cdot\right\} $ augmented by function symbols for $f$ and for each partial derivative of $f$ and by a constant symbol for every real number. Then the natural $\mathcal{L}_{f}$-expansion $\mathbb{R}_{f}$ of the real field is model-complete.\end{cor} \begin{proof} Let us construct $\mathcal{A}$ as in Remark \ref{rem: how to construct examples}. It is easy to see that the structures $\mathbb{R}_{\mathcal{A}}$ and $\mathbb{R}_{f}$ have the same 0-definable sets. 
We claim that the primitives of $\mathbb{R}_{\mathcal{A}}$ are existentially definable. To see this, the only non-trivial observation to make is that, if $g\in\mathcal{A}_{m,n,r}$, and $h\in\mathcal{A}_{m,n,r}$ is obtained from $g$ by monomial division, e.g. $g\left(x\right)=x_{1}\cdot h\left(x\right)$ (where $x=\left(x_{1},\ldots,x_{m+n}\right)$), then the graph of $h$ is the set
\[ \left\{ \left(x,y\right):\ \left(x_{1}\not=0\land x_{1}\cdot y=g\left(x\right)\right)\vee\left(x_{1}=0\land y=\frac{\partial g}{\partial x_{1}}\left(x\right)\right)\right\} , \] which is existentially definable from the graphs of $g$ and $\frac{\partial g}{\partial x_{1}}$. \end{proof}
\subsection{Model-completeness of Euler's Gamma function\label{subsec: model completeness of gamma}}
The following application was obtained in collaboration with Gareth Jones. \begin{cor} \label{cor: mod compl of gamma}Let $\mathcal{L}_{\Gamma}$ be the language of ordered rings $\left\{ <,0,1,+,-,\cdot\right\} $ augmented by function symbols for $\Gamma\restriction\left(0,\infty\right)$ and for each derivative of $\Gamma\restriction\left(0,\infty\right)$ and by a constant symbol for every real number. Then the natural $\mathcal{L}_{\Gamma}$-expansion $\mathbb{R}_{\Gamma}$ of the real field is model-complete.\end{cor} \begin{proof} Let $\psi$ be defined as in \cite[Example 8.1]{vdd:speiss:multisum}. In particular, recall that, by Binet's second formula, we have, for all $x\in\left(1,\infty\right)$, \[ \log\Gamma\left(x\right)=\left(x-\frac{1}{2}\right)\log x-x+\frac{1}{2}\log\left(2\pi\right)+\psi\left(\frac{1}{x}\right). \] As remarked in \cite[Example 8.1]{vdd:speiss:multisum}, the functions $\psi\restriction\left(0,1\right)$ and $\Gamma\restriction\left(0,1\right)$ are both definable in the polynomially bounded o-minimal structure $\mathbb{R}_{\mathcal{G}}$. Moreover, $\psi\restriction\left(0,1\right)$ is $\mathcal{C}^{\infty}$ at zero and $1/\Gamma\restriction\left(0,1\right)$ is analytic at zero, hence by Corollary \ref{cor: model compl from derivatives}, the expansion $\mathcal{R}$ of the real field by the functions $\exp\restriction\left(0,1\right)$, $\psi\restriction\left(0,1\right)$ and $\Gamma\restriction\left(0,1\right)$ and their derivatives is model-complete. By \cite[Theorem B]{vdd:speiss:multisum}, the structure $\langle\mathcal{R},\exp\rangle$ is model-complete. Now, using Legendre's Duplication Formula (see for example \cite[p.5]{erdelyi:higher_transcendental_functions_i}) \[ \Gamma\left(x\right)\Gamma\left(x+\frac{1}{2}\right)=2^{1-2x}\sqrt{\pi}\Gamma\left(2x\right), \] it is easy to show that the structures $\langle\mathcal{R},\exp\rangle$ and $\mathbb{R}_{\Gamma}$ have the same 0-definable sets and that the primitives of $\langle\mathcal{R},\exp\rangle$ are existentially definable in $\mathbb{R}_{\Gamma}$. \end{proof}
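To illustrate the last step (this is only an illustration and is not part of the original argument): rearranging the Duplication Formula gives, for $x\in\left(0,\infty\right)$, \[ 2^{1-2x}=\frac{\Gamma\left(x\right)\Gamma\left(x+\frac{1}{2}\right)}{\sqrt{\pi}\,\Gamma\left(2x\right)}, \] so the graph of $x\mapsto2^{1-2x}$ is existentially definable in $\mathbb{R}_{\Gamma}$; from this function, $\exp$ is existentially definable (using the constant $\log2$ and the identity $\exp\left(t\right)=\left(\exp\left(t/2\right)\right)^{2}$), and Binet's formula above then recovers $\psi\restriction\left(0,1\right)$ from $\Gamma$, $\exp$ and $\log$.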
\section{Monomialisation\label{sec:Monomialisation of series}}
The aim of this section is to define a class of transformations of $\mathbb{R}^{m+n}$, which we call admissible: these transformations are bijective outside a set with empty interior, and modulo such transformations we may assume that the germs in $\mathcal{A}_{m,n}$ are normal (see Definition \ref{def: normal germ}). More precisely, in Subsection \ref{sub:Monomialisation-of-generalised} we develop a monomialisation algorithm for generalised power series and in Subsection \ref{subsec:monomialisation of germs} we use quasianalyticity and the compatibility properties of the morphism $\mathcal{T}$ in \ref{emp:properties of the morph} to deduce a monomialisation result for germs.
\subsection{Admissible transformations and admissible trees\label{sub:Admissible-trees}} \begin{defn} \label{Def: elementary transformations}Let $m,n\in\mathbb{N},\ \left(x,y\right)=\left(x_{1},\ldots,x_{m},y_{1},\ldots,y_{n}\right)$. For $m',n'\in\mathbb{N}$ with $m'+n'=m+n$, we set $\left(x',y'\right)=\left(x_{1}',\ldots,x_{m'}',y_{1}',\ldots,y_{n'}'\right)$. Let $r,r'$ be polyradii in $\mathbb{R}^{m+n}$. An \emph{elementary transformation} of $\mathbb{R}^{m+n}$ is a map $\nu:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ of one of the following types. \begin{itemize} \item A \emph{blow-up chart} (see Definition \ref{def: blow-up charts}), i.e. an element $\pi:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ of one of the collections $\pi_{i,j},\ \pi_{m+i,j},\ \pi_{m+i,m+j}$. \item A \emph{Tschirnhausen translation}: let $m=m',n=n'$, let $r''=\left(s_{1}',\ldots,s_{m}',t_{1}',\ldots,t_{n-1}'\right)$ and $h\in\mathcal{A}_{m,n-1,r''}$ with $h\left(0\right)=0$ and set \[ \tau_{h}\left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m\\ y_{n}=y_{n}'+h\left(x_{1}',\ldots,x_{m}',y_{1}',\ldots,y_{n-1}'\right)\\ y_{k}=y_{k}' & 1\leq k\leq n-1 \end{cases}. \]
\item A \emph{linear transformation}: let $m=m',n=n'$, let $1\leq i\leq n\mathrm{\ and\ }c=\left(c_{1},\ldots,c_{i-1}\right)\in\mathbb{R}^{i-1}$, and set \[ L_{i,c}\left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m\\ y_{k}=y_{k}' & i\leq k\leq n\\ y_{k}=y_{k}'+c_{k}y_{i}' & 1\leq k<i \end{cases}. \]
\item A \emph{ramification} is either of the following maps: let $m=m',n=n'$, \end{itemize}
for $1\leq i\leq m$ and $\gamma\in\mathbb{K}^{>0}$ (see Definition \ref{def: admissible exponents}),
\[ r_{i}^{\gamma}\left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m,\ k\not=i\\ x_{i}=x_{i}'^{\gamma}\\ y_{k}=y_{k}' & 1\leq k\leq n \end{cases} \] and for $1\leq i\leq n$ and $d\in\mathbb{N}$, \[ r_{m+i}^{d,\pm}\left(x',y'\right)=\left(x,y\right),\ \ \ \mathrm{where}\ \begin{cases} x_{k}=x_{k}' & 1\leq k\leq m\\ y_{i}=\pm y_{i}'^{d}\\ y_{k}=y_{k}' & 1\leq k\leq n,\ k\not=i \end{cases}. \]
\end{defn} \begin{rem} \label{rem: elemntary transf send quadrants to sectors}Notice that, by \ref{emp:properties of the morph} and Lemma \ref{lem:composition with monomials}, the components of an elementary transformation $\nu:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ are elements of $\mathcal{A}_{m',n',r'}$. Moreover, it follows from the axioms in \ref{emp:properties of the morph} that, if $h\in\mathcal{A}_{m,n}$, then $h\circ\nu\in\mathcal{A}_{m',n'}$. \end{rem} \begin{defn} \label{Def: admiss transf}Let $m,n,m',n',N\in\mathbb{N}$ with $m'+n'=m+n$ and let $\nu_{i}:\hat{I}_{m_{i}',n_{i}',r_{i}'}\to\hat{I}_{m_{i},n_{i},r_{i}}$ be elementary transformations (for $i=1,\ldots,N$) with $m_{1}=m,\ n_{1}=n,\ m_{N}'=m',\ n_{N}'=n'$ and $m_{i}+n_{i}=m_{i}'+n_{i}'=m+n$. If $N>1$, then in order for the composition $\nu_{1}\circ\ldots\circ\nu_{N}$ to be well-defined it is enough that $m_{i}\geq m_{i-1}',\ n_{i}\leq n_{i-1}'$ and $r_{i}\leq r_{i-1}'$ for all $i=2,\ldots,N$. A map $\rho:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ is called an \emph{admissible transformation} if $\rho=\nu_{1}\circ\ldots\circ\nu_{N}$ and moreover, if $N>1$, then $m_{i}=m_{i-1}'$ and $n_{i}=n_{i-1}'$ for all $i=2,\ldots,N$. The number $N$ is called the \emph{length} of the admissible transformation $\rho$.\end{defn} \begin{rem} \label{Rem: admiss transf are in the algebra}By Remark \ref{rem: elemntary transf send quadrants to sectors} and by induction on the length of $\rho$, it is easy to see that the components of the admissible transformation $\rho:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ are elements of $\mathcal{A}_{m',n',r'}$ and that $\rho$ induces an algebra morphism
\[ \xyC{0mm}\xyL{0mm}\xymatrix{\rho\colon & \mathcal{A}_{m,n}\ar[rrrr] & \ & \ & \ & \mathcal{A}_{m',n'}\\
& f\ar@{|->}[rrrr] & & & & f\circ\rho } . \]
\end{rem}
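For the sake of illustration (this example is not taken from the literature cited above, and it assumes that the coordinate germ $x\mapsto x$ belongs to $\mathcal{A}_{1,0}$), let $m=n=1$ and consider the admissible transformation of length two \[ \rho:=\tau_{h}\circ r_{2}^{2,+},\qquad h\left(x\right)=x, \] i.e. $\rho\left(x',y'\right)=\tau_{h}\left(x',y'^{2}\right)=\left(x',y'^{2}+x'\right)$ (for suitable polyradii). The induced algebra morphism sends a germ $f\in\mathcal{A}_{1,1}$ to $f\circ\rho\in\mathcal{A}_{1,1}$; for instance, the coordinate germ $f\left(x,y\right)=y$ is sent to $\left(f\circ\rho\right)\left(x',y'\right)=y'^{2}+x'$.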
\begin{lem} \label{Lem: elementary transf induce injective morph}An elementary transformation $\nu:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ induces an injective algebra homomorphism
\[ \xyC{0mm}\xyL{0mm}\xymatrix{T_{\nu}\colon & \mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \ar[rrrr] & \ & \ & \ & \mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket \\
& F\ar@{|->}[rrrr] & & & & F\circ\nu } , \] where we set \[ F\circ\tau_{h}\left(X',Y'\right):=F\left(X',Y_{1}',\ldots,Y_{n-1}',Y_{n}'+\mathcal{T}\left(h\right)\left(X',Y_{1}',\ldots Y_{n-1}'\right)\right). \] Moreover, if $F\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \cap\text{Im\ensuremath{\left(\mathcal{T}\right)}},$ then $F\circ\nu\in\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket \cap\text{Im\ensuremath{\left(\mathcal{T}\right)}}.$\end{lem} \begin{proof} It is clear that $T_{\nu}$ is a homomorphism, which preserves being a member of $\text{Im}\left(\mathcal{T}\right)$ by the properties in \ref{emp:properties of the morph}. If $\nu$ is either a linear transformation, or a Tschirnhausen translation or a ramification, then $\nu:\left(X',Y'\right)\mapsto\left(X,Y\right)$ is a bijective change of coordinates, hence $T_{\nu}$ is injective. It remains to show injectivity when $\nu$ is a blow-up chart. In order to do this, let $u,v,w_{1},\ldots,w_{l}$ be variables, with $\bar{w}=\left(w_{1},\ldots,w_{l}\right)$, and, for $\lambda\geq0$, consider the map $\pi^{\lambda}:\left(u,v,\bar{w}\right)\mapsto\left(u,u\left(\lambda+v\right),\bar{w}\right)$. For $F\in\mathbb{R}\left\llbracket \left(u,v,\bar{w}\right)^{*}\right\rrbracket \setminus\left\{ 0\right\} $, we prove that $F\circ\pi^{\lambda}\not\equiv0$. Write $F\left(u,v,\bar{w}\right)=\sum_{\alpha,\beta}a_{\alpha,\beta}\left(\bar{w}\right)u^{\alpha}v^{\beta}$, where $a_{\alpha,\beta}\in\mathbb{R}\left\llbracket \bar{w}^{*}\right\rrbracket $. Let us regroup the homogeneous terms as follows:
\[ F\left(u,v,\bar{w}\right)=\sum_{\gamma}Q_{\gamma}\left(u,v,\bar{w}\right)\ \text{where}\ Q_{\gamma}\left(u,v,\bar{w}\right)=\sum_{\alpha+\beta=\gamma}a_{\alpha,\beta}\left(\bar{w}\right)u^{\alpha}v^{\beta}. \] It follows from the well-order properties of the support (see for example \cite[Lemma 4.2 (2)]{vdd:speiss:gen}) that $Q_{\gamma}$ is a finite sum, hence let us rewrite $Q_{\gamma}={\displaystyle \sum_{i=1}^{q}}c_{\beta_{i}}u^{\gamma-\beta_{i}}v^{\beta_{i}}$, where $c_{\beta_{i}}=a_{\gamma-\beta_{i},\beta_{i}}\left(\bar{w}\right)$ and $\beta_{1}<\ldots<\beta_{q}$.
Let us first consider the case $\lambda=0$. Suppose that $0\equiv F\circ\pi^{0}\left(u,v,\bar{w}\right)={\displaystyle \sum_{\gamma}}u^{\gamma}{\displaystyle \sum_{\alpha+\beta=\gamma}}a_{\alpha,\beta}\left(\bar{w}\right)v^{\beta}$. Then for every $\gamma$ we have that $a_{\alpha,\beta}\equiv0$ whenever $\alpha+\beta=\gamma$, hence $F\equiv0$.
Now suppose $\lambda>0$ and assume that
\[ 0\equiv F\circ\pi^{\lambda}\left(u,v,\bar{w}\right)={\displaystyle \sum_{\gamma}}u^{\gamma}{\displaystyle \sum_{\alpha+\beta=\gamma}}a_{\alpha,\beta}\left(\bar{w}\right){\displaystyle \sum_{k=0}^{\infty}}\binom{\beta}{k}\lambda^{\beta-k}v^{k}. \]
Then for every $\gamma$ the series
\[ Q_{\gamma}\left(1,\lambda+v,\bar{w}\right)={\displaystyle \sum_{k=0}^{\infty}}\left({\displaystyle \sum_{i=1}^{q}}c_{\beta_{i}}\binom{\beta_{i}}{k}\lambda^{\beta_{i}-k}\right)v^{k}\in\mathbb{R}\left\llbracket \bar{w}^{*},v\right\rrbracket \] is identically zero. It follows that ${\displaystyle \sum_{i=1}^{q}}c_{\beta_{i}}\lambda^{\beta_{i}}\binom{\beta_{i}}{k}=0\ \forall k\in\mathbb{N}$. Now, it is not difficult to see that there exist $j_{1},\ldots,j_{q}\in\mathbb{N}$ such that the determinant of the linear system $\left\{ \sum_{i=1}^{q}\binom{\beta_{i}}{j_{s}}Z_{i}=0\ \text{for}\ s=1,\ldots q\right\} $ is nonzero and hence the only solution is $Z_{1}=\ldots=Z_{q}=0$. It follows that for every $\gamma$ we have that $a_{\alpha,\beta}\equiv0$ whenever $\alpha+\beta=\gamma$, hence $F\equiv0$. \end{proof} \begin{defn} \label{Def: admissible trees}An \emph{elementary tree} has either of the following forms: a vertex $\bullet$, or \[ \xyL{2cm}\xymatrix{\bullet\ar@{->}[d]\sb-{L_{i,c}}\\ \ } \] where $L_{i,c}$ is a linear transformation, or \[ \xyL{2cm}\xymatrix{\bullet\ar@{->}[d]\sb-{\tau_{h}}\\ \ } \] where $\tau_{h}$ is a Tschirnhausen transformation, or
\[ \xyL{2cm}\xymatrix{\bullet\ar@{->}[d]\sb-{r_{i}^{\gamma}}\\ \ } \] where $r_{i}^{\gamma}$ is a ramification of the first type, or\foreignlanguage{english}{ \[ \xyL{1cm}\xyC{6mm}\xymatrix{ & \bullet\ar@{->}[ldd]\sb-{r_{m+i}^{d,+}}\ar@{->}[rdd]\sp-{r_{m+i}^{d,-}}\\ \\ \ & \ & \ } \] }where $r_{m+i}^{d,+},\ r_{m+i}^{d,-}$ are ramifications of the second type, or
\[
\xyL{1cm}\xyC{6mm}\xymatrix{ & & \bullet\ar@{->}[llddd]\sb-{\displaystyle \pi_{l,s}}\ar@{->}[lddd]\ar@{->}[ddd]\ar@{}|-{\ldots}[rddd]\ar@{->}[rrddd]\\ \\ \\ \ & \ & \ & \ & \ } \] where the pair $\left(l,s\right)$ is of the form $\left(i,j\right),\left(m+i,j\right)$ or $\left(m+i,m+j\right)$ (corresponding to the three types of blow-up transformation) and each branch corresponds to an element of the collection $\pi_{l,s}$.
A \emph{tree} is defined inductively as follows: a tree of \emph{height }zero is a vertex $\bullet$ ; a tree of height $\leq N$ is obtained from a tree $T$ of height $\leq N-1$ by adjoining an elementary tree to the end of each branch of $T$. The height of $T$ will be denoted by $\mathrm{h}\left(T\right)$. A branch $\mathfrak{b}$ of a tree $T$ can be represented as an ordered tuple (from the vertex to the end of the branch) of elementary transformations $\left(\nu_{1},\ldots,\nu_{N}\right)$.
An \emph{admissible tree} is a tree $T$ such that for each branch $\mathfrak{b}=\left(\nu_{1},\ldots,\nu_{N}\right)$ of $T$, the map $\rho_{\mathfrak{b}}=\nu_{1}\circ\ldots\circ\nu_{N}$ is an admissible transformation. It follows that $\mathfrak{b}$ induces a homomorphism $T_{\mathfrak{b}}:\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \to\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket $, by setting $T_{\mathfrak{b}}\left(F\right)=F\circ\nu_{1}\circ\ldots\circ\nu_{N}$. \end{defn} \begin{rems} \label{rem:tree preserves good families}$\ $ \begin{enumerate} \item Let $T$ be an admissible tree and $\mathfrak{b}$ be one of its branches. Since $T_{\mathfrak{b}}$ is a homomorphism, if $U$ is a unit, then $T_{\mathfrak{b}}\left(U\right)$ is also a unit. \item Let $X=\left(X_{1},\ldots,X_{m}\right)$, $Y=\left(Y_{1},\ldots,Y_{n}\right)$. It is easy to see that, if $T$ is an admissible tree and $\mathcal{F}\subset\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $ is a family with good total support, then, for every branch $\mathfrak{b}$ of $T$, the family $T_{\mathfrak{b}}\left(\mathcal{F}\right)$ still has good total support. More precisely, let $F\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $ and $H\in\mathbb{R}\left\llbracket X^{*},\hat{Y}\right\rrbracket $, where $Y=\left(Y_{1},\ldots,Y_{n}\right)$ and $\hat{Y}=\left(Y_{1},\ldots,Y_{n-1}\right)$; suppose $\mathrm{Supp}_{X}\left(F\right)\subseteq S_{1}\times\ldots\times S_{m}$ and $\mathrm{Supp}_{X}\left(H\right)\subseteq S'_{1}\times\ldots\times S'_{m}$, where $S_{i},S'_{i}\subset[0,\infty)$ are well ordered sets. Then we have $\mathrm{Supp}_{X}\left(L_{i,c}\left(F\right)\right)=\mathrm{Supp}_{X}\left(r_{m+i}^{d,\pm}\left(F\right)\right)=\mathrm{Supp}_{X}\left(F\right)$ and $\mathrm{Supp}_{X}\left(\tau_{h}\left(F\right)\right)\subseteq\tilde{S}_{1}\times\ldots\times\tilde{S}_{m}$, with $\tilde{S}_{k}=\left\{ a+nb:\ a\in S_{k},\ b\in S'_{k},\ n\in\mathbb{N}\right\} $. Moreover, $\mathrm{Supp}_{X}\left(r_{i}^{\gamma}\left(F\right)\right)\subseteq\tilde{S}_{1}\times\ldots\times\tilde{S}_{m}$, with $\tilde{S}_{i}=\left\{ \gamma a:\ a\in S_{i}\right\} $ and $\tilde{S}_{k}=S_{k}$ for $k\not=i$. Finally, for $1\leq i,j\leq m$ and $\lambda\in[0,\infty)$, we have $\mathrm{Supp}_{X}\left(\pi_{i,j}^{\lambda}\left(F\right)\right)\subseteq\tilde{S}_{1}\times\ldots\times\tilde{S}_{m}$, with $\tilde{S}_{j}=\left\{ a+b:\ a\in S_{j},\ b\in S_{i}\right\} $ and $\tilde{S}_{k}=S_{k}$ for $k\not=j$ (the argument for the other types of blow-up transformation is similar). \item Let $T$ be an admissible tree and $\mathfrak{b}$ be one of its branches, inducing a homomorphism $T_{\mathfrak{b}}:\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \to\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket $. It follows from Lemma \ref{Lem: elementary transf induce injective morph} that if $F\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \setminus\left\{ 0\right\} \cap\text{Im\ensuremath{\left(\mathcal{T}\right)}}$, then $T_{\mathfrak{b}}\left(F\right)\in\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket \setminus\left\{ 0\right\} \cap\text{Im\ensuremath{\left(\mathcal{T}\right)}}.$ \end{enumerate} \end{rems}
\subsection{Monomialisation of generalised power series\label{sub:Monomialisation-of-generalised}} \begin{defn} A series $F\in\mathbb{R}\left\llbracket X^{*}\right\rrbracket \setminus\left\{ 0\right\} $ is \emph{normal} if there exist $\alpha\in\mathbb{A}^{m}$ and an invertible series $U\in\mathbb{R}\left\llbracket X^{*}\right\rrbracket $ such that $F\left(X\right)=X^{\alpha}U\left(X\right)$ . \end{defn} In analogy with \cite[Lemma 4.7]{bm_semi_subanalytic} and \cite[Lemma 2.2]{rsw}, we have the following result. \begin{lem} \label{lem: prod of series}$\ $ \begin{enumerate} \item The series $F_{1},\ldots,F_{k}\in\mathbb{R}\left\llbracket X^{*}\right\rrbracket \setminus\left\{ 0\right\} $ are all normal if and only if the series $\prod_{i=1}^{k}F_{i}$ is normal.
\item If $F_{1},F_{2}$ and $F_{1}-F_{2}$ are normal, then either $F_{1}|F_{2}$
or $F_{2}|F_{1}$. \end{enumerate} \end{lem} \begin{proof} Regarding 1., suppose that $\prod F_{i}\left(X\right)$ is normal. Then there exist two disjoint subsets $Z$ and $W$ of the set $X$ of all the variables, and multi-indices $\alpha,\ \beta$, and series $U\left(X\right),\tilde{F_{1}}\left(X\right),\ldots,\tilde{F}_{k}\left(X\right)$ such that \[ Z^{\alpha}\prod\tilde{F}_{i}\left(X\right)=W^{\beta}U\left(X\right), \] where all the components of $\alpha$ and $\beta$ are strictly positive, no power of any of the variables divides any of the $\tilde{F}_{i}$ and $U\left(0\right)\not=0$.
If $W=\emptyset$, then the $\tilde{F}_{i}$ are all units, and the statement is proved. Otherwise, suppose that $X_{1}\in W$; then $\prod\tilde{F}_{i}\left(0,X_{2},\ldots,X_{m+n}\right)=0$. But this is impossible, because no power of $X_{1}$ divides any of the $\tilde{F}_{i}$.
Regarding 2., there are multi-indices $\alpha_{1},\ \alpha_{2}$ and units $U_{1},\ U_{2}$ such that $F_{1}=X^{\alpha_{1}}U_{1}$ and $F_{2}=X^{\alpha_{2}}U_{2}$. Hence, the minimal elements of $\mathcal{\mathrm{Supp}}\left(F_{1}-F_{2}\right)$ are contained in the set $\left\{ \alpha_{1},\alpha_{2}\right\} $. Since $F_{1}-F_{2}$ is normal, either $\alpha_{1}\leq\alpha_{2}$ or $\alpha_{2}\leq\alpha_{1}$.\end{proof} \begin{notation} We fix $m,n,m',n'\in\mathbb{N}$ such that $m+n=m'+n'$. Let $X=\left(X_{1},\ldots,X_{m}\right),\ Y=\left(Y_{1},\ldots,Y_{n}\right)$ and $X'=\left(X_{1}',\ldots,X_{m'}'\right),\ Y'=\left(Y_{1}',\ldots,Y_{n'}'\right)$. If $T$ is an admissible tree and $\mathfrak{b}$ is one of its branches, we will always implicitly assume that $T_{\mathfrak{b}}:\mathbb{R}\left\llbracket X^{*},Y^{*}\right\rrbracket \to\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket $. Let $\hat{Y}=\left(Y_{1},\ldots,Y_{n-1}\right)$ and $\hat{Y'}=\left(Y'_{1},\ldots,Y'_{n'-1}\right)$. \end{notation} The main result of this subsection is the following monomialisation algorithm for generalised power series. The proof methods take inspiration from \cite{bm_semi_subanalytic} and \cite{rsw}. \begin{thm} \label{thm: monomialisation}Let $F_{1},\ldots,F_{p}\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \setminus\left\{ 0\right\} \cap\mathrm{Im}\left(\mathcal{T}\right)$. There exists an admissible tree $T$ such that for each branch $\mathfrak{b}$ of $T$ the series $T_{\mathfrak{b}}\left(F_{1}\right),\ldots,T_{\mathfrak{b}}\left(F_{p}\right)\in\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket \setminus\left\{ 0\right\} \cap\mathrm{Im}\left(\mathcal{T}\right)$ are normal and linearly ordered by division.\end{thm} \begin{rem} In the proof of Theorem \ref{thm: monomialisation} we will often compose branches of different admissible trees. We leave it to the reader to check at each stage that such compositions are allowed, i.e. the composition defines a branch of some admissible tree.\end{rem} \begin{lem} \label{lem:singular blow ups}Let $\mathcal{F}=\left\{ F_{k}:\ k\in\mathbb{N}\right\} \subset\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $ be a family such that $\mathrm{Supp}_{X}\left(\mathcal{F}\right)$ is good. Then there is an admissible tree $T$ (consisting of ramifications and blow-up transformations) such that, for every branch $\mathfrak{b}$ of $T$, we have $T_{\mathfrak{b}}:\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket \to\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket $ with $m'\leq m$ and there exist $\alpha\in[0,\infty)^{m'}$ and series $G_{k}\in\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket \ \left(k\in\mathbb{N}\right)$ such that, for every $k\in\mathbb{N}$, $T_{\mathfrak{b}}\left(F_{k}\right)\left(X',Y'\right)=X'^{\alpha}G_{k}\left(X',Y'\right)$ and for some $k_{0}\in\mathbb{N}$, $G_{k_{0}}\left(0,Y'\right)\not\equiv0$.\end{lem} \begin{proof} Here we view $\mathcal{F}$ as a subset of $\mathbb{A}\left\llbracket X^{*}\right\rrbracket $, with $\mathbb{A}=\mathbb{R}\left\llbracket Y\right\rrbracket $. In \cite[4.11]{vdd:speiss:gen} the authors define the \emph{blow-up height} of a finite set of monomials, denoted by $b_{X}$. It follows from the definition of $b_{X}$ that if $b_{X}\left(\mathcal{F}_{\mathrm{min}}\right)=\left(0,0\right)$, then there exists $\alpha\in[0,\infty)^{m}$ such that $\mathcal{F}_{\mathrm{min}}=\left\{ X^{\alpha}\right\} $. The proof is by induction on the pairs $\left(m,b_{X}\left(\mathcal{F}_{\text{min}}\right)\right)$, ordered lexicographically. 
If $m=0$, there is nothing to prove. If $m=1$, then $b_{X}\left(\mathcal{F}_{\mathrm{min}}\right)=\left(0,0\right)$. In this case, for every $k\in\mathbb{N}$ there is a series $G_{k}$ such that $F_{k}=X^{\alpha}G_{k}$, and for some $k_{0}\in\mathbb{N}$ we have $G_{k_{0}}\left(0,Y\right)\not\equiv0$.
Hence we may assume that $m>1$ and $b_{X}\left(\mathcal{F}_{\mathrm{min}}\right)\not=\left(0,0\right)$. It follows from the proof of \cite[Proposition 4.14]{vdd:speiss:gen} that there are $i,j\in\left\{ 1,\ldots,m\right\} $ and suitable ramifications $r_{i}^{\gamma},\ r_{j}^{\delta}$ of the variables $X_{i}$ and $X_{j}$ such that $b_{X}(r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{0}\left(\mathcal{F}_{\mathrm{min}}\right))<b_{X}\left(\mathcal{F}_{\mathrm{min}}\right)$ and $b_{X}(r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{\infty}\left(\mathcal{F}_{\mathrm{min}}\right))<b_{X}\left(\mathcal{F}_{\mathrm{min}}\right)$. Notice that $r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{0}\left(\mathcal{F}_{\mathrm{min}}\right)$ and $r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{\infty}\left(\mathcal{F}_{\mathrm{min}}\right)$ are finite collections of monomials and they are in fact the collections of the minimal monomials of the families $\mathcal{F}_{0}=\left\{ r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{0}\left(F\right):\ F\in\mathcal{F}\right\} $ and $\mathcal{F}_{\infty}=\left\{ r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{\infty}\left(F\right):\ F\in\mathcal{F}\right\} $, respectively. Moreover, for every $\lambda\in\left(0,\infty\right)$, the series of the family $\mathcal{F}_{\lambda}=\left\{ r_{i}^{\gamma}\circ r_{j}^{\delta}\circ\pi_{i,j}^{\lambda}\left(F\right):\ F\in\mathcal{F}\right\} $ belong to $\mathbb{R}\left\llbracket X'^{*},Y'\right\rrbracket $, where $m'=m-1$. By Remark \ref{rem:tree preserves good families} the families $\mathcal{F}_{\lambda}$ ($\lambda\in\left[0,\infty\right]$) have good total support, so the inductive hypothesis applies and we obtain the required conclusion.\end{proof} \begin{rem} \label{rem:power of a unit}Let $F=X^{\alpha}U\left(X\right)\in\mathbb{R}\left\llbracket X^{*}\right\rrbracket \cap\text{Im}\left(\mathcal{T}\right)$ be a normal series and let $k\in\mathbb{N}$. Then $F^{1/k}$ is a well-defined series and also belongs to $\text{Im}\left(\mathcal{T}\right)$. In fact, $U^{1/k}-1$ can be viewed as the solution $Y$ to the implicit function problem $\left(1+Y\right)^{k}-U\left(X\right)=0$. \end{rem}
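As a concrete instance of the previous remark (an illustration only; we assume that the series $X^{3/2}\left(1+X\right)$ belongs to $\mathrm{Im}\left(\mathcal{T}\right)$ and that $3/2$ is an admissible exponent), consider the normal series $F\left(X\right)=X^{3/2}\left(1+X\right)$ in one variable and $k=2$: \[ F^{1/2}=X^{3/4}\left(1+X\right)^{1/2}=X^{3/4}\left(1+\tfrac{1}{2}X-\tfrac{1}{8}X^{2}+\tfrac{1}{16}X^{3}-\cdots\right), \] where $\left(1+X\right)^{1/2}-1$ is precisely the solution $Y$ with $Y\left(0\right)=0$ of the equation $\left(1+Y\right)^{2}-\left(1+X\right)=0$.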
\begin{proof} [Proof of Theorem \ref{thm: monomialisation}] The proof is by induction on $m+n$. In view of Lemma \ref{lem: prod of series}, it is enough to prove the theorem for one series $F\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $. By Lemma \ref{lem:singular blow ups}, we may assume that $n>0$ and there exist $\alpha\in[0,\infty)^{m}$ and a series $G\in\mathbb{R}\left\llbracket X{}^{*},Y\right\rrbracket $ such that $F\left(X,Y\right)=X{}^{\alpha}G\left(X,Y\right)$ and $G\left(0,Y\right)\not\equiv0$. It is well known (see for example \cite[6.1]{vdd:speiss:gen}) that, after performing some linear transformation of the form $L_{n,c}$, the series $G$ becomes regular in the variable $Y_{n}$, of some order $d\in\mathbb{N}$.
Suppose that $d=1$. Then $\frac{\partial G}{\partial Y_{n}}\left(0\right)\not=0$. Hence, by the Implicit Function Theorem, there exists a series $A\left(X,\hat{Y}\right)$ such that $G\left(X,\hat{Y},A\left(X,\hat{Y}\right)\right)=0$ and $A\left(0,0\right)=0$. By Axiom 6 of \ref{emp:properties of the morph}, $A\in\text{Im\ensuremath{\left(\mathcal{T}\right)}}$, so $A\left(X,\hat{Y}\right)=\mathcal{T}\left(a\left(x,\hat{y}\right)\right),$ for some germ $a\in\mathcal{A}_{m,n-1}$. Then, $\tau_{a}\left(G\right)=G\left(X,\hat{Y},A\left(X,\hat{Y}\right)\right)+Y_{n}U\left(X,Y\right)$, for some unit $U\left(X,Y\right)\in\text{Im\ensuremath{\left(\mathcal{T}\right)}}$, and we are done.
Suppose next that $d>1$. Since $\frac{\partial^{d}G}{\partial Y_{n}^{d}}\left(0\right)\not=0$, by the Implicit Function Theorem, there exists a series $B\left(X,\hat{Y}\right)\in\text{Im\ensuremath{\left(\mathcal{T}\right)}}$ (hence $B\left(X,\hat{Y}\right)=\mathcal{T}\left(b\left(x,\hat{y}\right)\right),$ for some germ $b\in\mathcal{A}_{m,n-1}$) such that $\frac{\partial^{d-1}G}{\partial Y_{n}^{d-1}}\left(X,\hat{Y},B\left(X,\hat{Y}\right)\right)=0$ and $B\left(0,0\right)=0$. Hence by Taylor's Theorem, \[ \tau_{b}\left(G\right)\left(X,Y\right)=U\left(X,Y\right)Y_{n}^{d}+G_{1}\left(X,\hat{Y}\right)Y_{n}^{d-1}+\ldots+G_{d}\left(X,\hat{Y}\right), \] where $U$ is a unit and $G_{i}=\frac{\partial^{d-i}G}{\partial Y_{n}^{d-i}}\left(X,\hat{Y},B\left(X,\hat{Y}\right)\right)$. Hence $G_{1}\left(X,\hat{Y}\right)=0$.
By the inductive hypothesis, there is an admissible tree $\check{T}$ (acting as the identity on $Y_{n}$) such that, for every branch $\check{\mathfrak{b}}$ of $\check{T}$, the series in the set $\left\{ \check{T}_{\check{\mathfrak{b}}}\left(G_{i}\right):\ i=2,\ldots,d\right\} \cup\left\{ \check{T}_{\check{\mathfrak{b}}}\left(X^{\alpha}\right)\right\} $ are normal. Let $r^{\pm}:=r_{m+1}^{d!,\pm}\circ\ldots\circ r_{m+n-1}^{d!,\pm}$ (a ramification of the variables $\hat{Y}$ only, leaving $Y_{n}$ untouched) and let $\check{G_{i}}:=\check{T}_{\check{\mathfrak{b}}}\circ r^{+}\left(G_{i}\right)$ (an argument analogous to the one which follows will hold for $r^{-}$). By the inductive hypothesis and Remark \ref{rem:power of a unit}, there is an admissible tree $\tilde{T}$ (acting as the identity on $Y_{n}$) such that, for every branch $\tilde{\mathfrak{b}}$ of $\tilde{T}$, the series in the set $\left\{ \tilde{T}_{\tilde{\mathfrak{b}}}\left[\check{G_{i}}^{1/i}\right]:\ i=2,\ldots,d\right\} \cup\left\{ \check{T}_{\check{\mathfrak{b}}}\circ r^{+}\circ\tilde{T}_{\tilde{\mathfrak{b}}}\left(X^{\alpha}\right)\right\} $
are normal and linearly ordered by division, i.e. $\tilde{T}_{\tilde{\mathfrak{b}}}\left[\check{G_{i}}^{1/i}\right]=\left(X'\hat{Y'}\right)^{\frac{\alpha_{i}}{i}}U_{i}\left(X',\hat{Y'}\right)$, where $U_{i}$ are units and $\alpha_{i}=\left(\alpha_{i,1},\ldots,\alpha_{i,m+n-1}\right)\in[0,\infty)^{m'}\times\mathbb{N}^{n'-1}$, with $i|\alpha_{i,m'+j}$ for all $j=1,\ldots,n'-1$. Since $\tilde{T}_{\tilde{\mathfrak{b}}}$ is a ring homomorphism, if we put $T_{\mathfrak{b}}:=\tau_{B}\circ\check{T}_{\check{\mathfrak{b}}}\circ r^{+}\circ\tilde{T}_{\tilde{\mathfrak{b}}}$, we obtain \[ T_{\mathfrak{b}}\left(G\right)=\hat{U}\left(X',Y'\right)Y_{n}'^{d}+\sum_{i=2}^{d}\left(X'\hat{Y'}\right)^{\alpha_{i}}\hat{U}_{i}\left(X',\hat{Y'}\right)Y_{n'}'^{d-i}, \]
where $\hat{U}=\check{T}_{\check{\mathfrak{b}}}\circ r^{+}\circ\tilde{T}_{\tilde{\mathfrak{b}}}\left(U\right)$ and $\hat{U}_{i}=U_{i}^{i}$.
Let us rename $X'=X$ and $Y'=Y$. We are going to perform a series of ramifications and blow-ups with the aim of decreasing the order of $T_{\mathfrak{b}}\left(G\right)$ in the variable $Y_{n}$, possibly after factoring out a monomial in the variables $\left(X,\hat{Y}\right)$.
Let $l\in\left\{ 2,\ldots,d\right\} $ be maximal such that $\frac{\alpha_{l}}{l}\leq\frac{\alpha_{i}}{i}$ for all $i=2,\ldots,d$.
We first consider the variables in the tuple $X$. Suppose that $X_{1}$ does appear in the monomial $\left(X\hat{Y}\right)^{\alpha_{l}}$ and let $\gamma=\frac{\alpha_{l,1}}{l}>0$. Notice that $d-i+\frac{\alpha_{i,1}}{\gamma}\geq d$ for all $i=2,\ldots,d$. Let $\hat{X}=\left(X_{2},\ldots,X_{m}\right)$ and $\alpha_{i}'=\left(\alpha_{i,2},\ldots,\alpha_{i,m+n-1}\right)$. We perform the ramification $r_{1}^{1/\gamma}$, so \[ T_{\mathfrak{b}}\circ r_{1}^{1/\gamma}\left(G\right)=\overline{U}\left(X,Y\right)Y_{n}^{d}+\sum_{i=2}^{d}\left(\hat{X}\hat{Y}\right)^{\alpha_{i}'}\overline{U}_{i}\left(X,\hat{Y}\right)X_{1}^{\alpha_{i,1}/\gamma}Y_{n}^{d-i}, \]
where $\overline{U}=r_{1}^{1/\gamma}\left(\hat{U}\right)$ and $\overline{U}_{i}=r_{1}^{1/\gamma}\left(\hat{U}_{i}\right)$.
We consider the blow-up charts in the collection $\pi_{m+n,1}$. For better readability, in what follows we will still denote the variables by $\left(X,Y\right)$ after each blow-up transformation.
\textbf{Case 1. $\lambda=\pm\infty$}
We will only treat the case $\lambda=+\infty$; the other case is analogous. The transformation $\pi_{m+n,1}^{+\infty}$ maps $\left(X,Y\right)$ to $\left(X_{1}Y_{n},X_{2},\ldots,X_{m},Y\right)$. Then, \begin{align*} T_{\mathfrak{b}}\circ r_{1}^{1/\gamma}\circ\pi_{m+n,1}^{+\infty}\left(G\right) & =\tilde{U}\left(X,Y\right)Y_{n}^{d}+\sum_{i=2}^{d}\left(\hat{X}\hat{Y}\right)^{\alpha_{i}'}\tilde{U}_{i}\left(X,Y\right)X_{1}^{\alpha_{i,1}/\gamma}Y_{n}^{d-i+\frac{\alpha_{i,1}}{\gamma}}\\
 & =Y_{n}^{d}\left[\tilde{U}\left(X,Y\right)+\sum_{i=2}^{d}\left(\hat{X}\hat{Y}\right)^{\alpha_{i}'}\tilde{U}_{i}\left(X,Y\right)X_{1}^{\alpha_{i,1}/\gamma}Y_{n}^{-i+\frac{\alpha_{i,1}}{\gamma}}\right]\\
& =Y_{n}^{d}V\left(X,Y\right), \end{align*} where $\tilde{U}=\pi_{m+n,1}^{+\infty}\left(\overline{U}\right)$, $\tilde{U}_{i}=\pi_{m+n,1}^{+\infty}\left(\overline{U_{i}}\right)$ and $V$ has the form $\tilde{U}+X_{1}^{\beta}B\left(X,Y\right)$, for some $\beta\in\mathbb{K}^{>0}$ and $B\in\mathbb{R}\left\llbracket X^{*},Y\right\rrbracket $, and hence is a unit.
\textbf{Case 2.} $\lambda=0$
The transformation $\pi_{m+n,1}^{0}$ maps $\left(X,Y\right)$ to $\left(X,\hat{Y},X_{1}Y_{n}\right)$. Then, \begin{align*} T_{\mathfrak{b}}\circ r_{1}^{1/\gamma}\circ\pi_{m+n,1}^{0}\left(G\right) & =\tilde{U}\left(X,Y\right)X_{1}^{d}Y_{n}^{d}+\sum_{i=2}^{d}\left(\hat{X}\hat{Y}\right)^{\alpha_{i}'}\tilde{U}_{i}\left(X,Y\right)X_{1}^{d-i+\frac{\alpha_{i,1}}{\gamma}}Y_{n}^{d-i}\\
& =X_{1}^{d}\left[\tilde{U}\left(X,Y\right)Y_{n}^{d}+\left(\hat{X}\hat{Y}\right)^{\alpha_{l}'}\tilde{U}_{l}\left(X,Y\right)Y_{n}^{d-l}\right.\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.+\sum_{i=2,\ i\not=l}^{d}\left(\hat{X}\hat{Y}\right)^{\alpha_{i}'}\tilde{U}_{i}\left(X,Y\right)X_{1}^{-i+\frac{\alpha_{i,1}}{\gamma}}Y_{n}^{d-i}\right]\\
& =X_{1}^{d}G_{0}\left(X,Y\right), \end{align*} where $\tilde{U}=\pi_{m+n,1}^{0}\left(\overline{U}\right)$ and $\tilde{U}_{i}=\pi_{m+n,1}^{0}\left(\overline{U_{i}}\right)$. Notice that $X_{1}$ does not appear any more in the coefficient of $Y_{n}^{d-l}$ in $G_{0}$.
\textbf{Case 3.} $\lambda\not=0,\pm\infty$
The transformation $\pi_{m+n,1}^{\lambda}$ maps $\left(X,Y\right)$ to $\left(X,\hat{Y},X_{1}\left(\lambda+Y_{n}\right)\right)$. Then, \begin{align*} T_{\mathfrak{b}}\circ r_{1}^{1/\gamma}\circ\pi_{m+n,1}^{\lambda}\left(G\right) & =\tilde{U}\left(X,Y\right)X_{1}^{d}\left(\lambda+Y_{n}\right)^{d}+\sum_{i=2}^{d}\left(\hat{X}\hat{Y}\right)^{\alpha_{i}'}\tilde{U}_{i}\left(X,Y\right)X_{1}^{d-i+\frac{\alpha_{i,1}}{\gamma}}\left(\lambda+Y_{n}\right)^{d-i}\\
& =X_{1}^{d}\left[\tilde{U}\left(X,Y\right)Y_{n}^{d}+d\lambda\tilde{U}\left(X,Y\right)Y_{n}^{d-1}+\sum_{i=2}^{d}B_{i}\left(X,\hat{Y}\right)Y_{n}^{d-i}\right]\\
& =X_{1}^{d}G_{\lambda}\left(X,Y\right), \end{align*} where $\tilde{U}=\pi_{m+n,1}^{\lambda}\left(\overline{U}\right)$, $\tilde{U}_{i}=\pi_{m+n,1}^{\lambda}\left(\overline{U_{i}}\right)$ and $B_{i}\in\mathbb{R}\left\llbracket X^{*},\hat{Y}\right\rrbracket $. Notice that, thanks to the initial Tschirnhausen translation $\tau_{b}$, $G_{\lambda}$ is regular in $Y_{n}$ of order at most $d-1$.
Arguing in the same way for every variable of the tuple $X$ which actually appears in the monomial $\left(X\hat{Y}\right)^{\alpha_{l}}$, after factoring out a monomial in the variables $X$, either we monomialise $G$ (case 1), or we reduce the order of regularity in $Y_{n}$ (case 3), or we eliminate the variables $X$ from the monomial $\left(X\hat{Y}\right)^{\alpha_{l}}$ (case 2).
Next we perform a blow-up transformation to obtain similar results in the variables $\hat{Y}$. If the variable $Y_{1}$ appears in the monomial $\left(X\hat{Y}\right)^{\alpha_{l}}$, we perform the blow-up transformation $\pi_{m+n,m+1}$. For the charts $\pi_{m+n,m+1}^{\infty}$ and $\pi_{m+n,m+1}^{\lambda}$ ($\lambda\in\mathbb{R}\setminus\left\{ 0\right\} $), we obtain the same result as in cases 1 and 3 respectively, namely either $G$ has become normal or, after factoring out a power of $Y_{1}$, the order of regularity in $Y_{n}$ has decreased. For the chart $\pi_{m+n,m+1}^{0}$, we obtain, after factoring out a power of $Y_{1}$, that the exponents $\alpha_{i,m+1}$ of $Y_{1}$ each decrease by the quantity $i$, and hence by repeating the process we can reduce to the case $\alpha_{l,m+1}=0$. Hence, as in case 2, we have eliminated the variable $Y_{1}$ from the monomial $\left(X\hat{Y}\right)^{\alpha_{l}}$.
Summing up, there is an admissible tree $\hat{T}$ such that, for every branch $\hat{\mathfrak{b}}$ of $\hat{T}$, we have that $T_{\mathfrak{b}}\circ\hat{T}_{\hat{\mathfrak{b}}}\left(X^{\alpha}\right)$ is a monomial (in the variables $\left(X,Y\right)$) and $T_{\mathfrak{b}}\circ\hat{T}_{\hat{\mathfrak{b}}}\left(G\right)=\left(X\hat{Y}\right)^{\alpha_{\hat{\mathfrak{b}}}}V_{\hat{\mathfrak{b}}}\left(X,Y\right)G_{\hat{\mathfrak{b}}}\left(X,Y\right)$, where $\alpha_{\hat{\mathfrak{b}}}\in[0,\infty)^{m}\times\mathbb{N}^{n-1}$, $V_{\hat{\mathfrak{b}}}$ is a unit and $G_{\hat{\mathfrak{b}}}$ is either a monomial in $Y_{n}$ (case 1), or it is regular in $Y_{n}$ of order 1 (repeated use of cases 2 and 3 ). In this latter case, we compose with a further Tschirnhausen translation in order to render \foreignlanguage{english}{$G_{\hat{\mathfrak{b}}}\left(X,Y\right)$} normal. This concludes the proof of Theorem \ref{thm: monomialisation}. \end{proof}
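To illustrate the algorithm just described, here is a toy example (not contained in the sources; we use the chart conventions of Cases 1--3 above). Take $m=n=1$ and $F\left(X,Y\right)=Y^{2}-X$, which is regular in $Y$ of order $d=2$. The Tschirnhausen translation is trivial (one may take $B=0$, since $\frac{\partial F}{\partial Y}=2Y$ already vanishes at $Y=0$), so $F=UY^{2}+G_{2}$ with $U=1$ and $G_{2}\left(X\right)=-X$, which is already normal; thus $l=2$, $\alpha_{2,1}=1$ and $\gamma=\frac{1}{2}$. The ramification $r_{1}^{1/\gamma}=r_{1}^{2}$ replaces $X$ by $X^{2}$ and yields $Y^{2}-X^{2}$. The charts of the blow-up $\pi_{2,1}$ then give: $Y^{2}\left(1-X^{2}\right)$ in the chart at $+\infty$ ($X\mapsto XY$, as in Case 1; the chart at $-\infty$ is analogous); $X^{2}\left(Y^{2}-1\right)$ in the chart at $0$ ($Y\mapsto XY$, as in Case 2); and $X^{2}\left(\left(\lambda+Y\right)^{2}-1\right)$ in the chart at $\lambda\in\mathbb{R}\setminus\left\{ 0\right\} $ ($Y\mapsto X\left(\lambda+Y\right)$, as in Case 3), which is $X^{2}$ times a unit for $\lambda\not=\pm1$, and equals $X^{2}Y\left(2+Y\right)$, respectively $X^{2}Y\left(Y-2\right)$, for $\lambda=1$, respectively $\lambda=-1$. In every chart the transformed series is normal, as predicted by Theorem \ref{thm: monomialisation}.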
\subsection{Monomialisation of germs\label{subsec:monomialisation of germs}}
Recall that $f\in\mathcal{A}_{m,n}$ is \emph{normal} if there exist $\alpha\in\mathbb{A}^{m},\ N\in\mathbb{N}^{n}$ and $u\in\mathcal{A}_{m,n}$ such that $f\left(x,y\right)=x^{\alpha}y^{N}u\left(x,y\right)$ and $u\left(0,0\right)\not=0$. \begin{rem} \label{rem: f normal iff F normal}If $f\in\mathcal{A}_{m,n}$ then it follows from the axioms in \ref{emp:properties of the morph} that $f\left(x,y\right)$ is normal if and only if $\mathcal{T}\left(f\right)\left(X,Y\right)$ is normal.\end{rem} \begin{thm} \label{thm: geom monomial}Let $f_{1},\ldots,f_{p}\in\mathcal{A}_{m,n}$. Then there exist a polyradius $r'$ such that the $f_{j}$ have a representative on $\hat{I}_{m,n,r'}$, and a finite family \[ \mathcal{F}=\left\{ \rho_{i}:\hat{I}_{m_{i},n_{i},r_{i}}\to\hat{I}_{m,n,r}\right\} _{i=1,\ldots,N} \]
of admissible transformations such that, for all $i=1,\ldots,N$, the germs $f_{1}\circ\rho_{i},\ldots,f_{p}\circ\rho_{i}$ are normal and linearly ordered by division, and \[ \hat{I}_{m,n,r'}\subseteq\bigcup_{i=1}^{N}\rho_{i}\left(\hat{I}_{m_{i},n_{i},r_{i}}\right). \] \end{thm} \begin{rem} Let $\nu:\hat{I}_{m',n',r'}\to\hat{I}_{m,n,r}$ be an elementary transformation. Notice that for every polyradius $r'''\leq r'$ there exists a polyradius $r''\leq r$ such that: \begin{itemize} \item if $\nu$ is either $\tau_{h}$, or $L_{i,c}$, or $r_{i}^{\gamma}$, then $\hat{I}_{m,n,r''}\subseteq\nu\left(\hat{I}_{m,n,r'''}\right)$; \item $\hat{I}_{m,n,r''}\subseteq r_{m+i}^{d,+}\left(\hat{I}_{m,n,r'''}\right)\cup r_{m+i}^{d,-}\left(\hat{I}_{m,n,r'''}\right)$;
\item $\hat{I}_{m,n,r''}\subseteq\bigcup_{\lambda\in\left(0,\infty\right)}\pi_{i,j}^{\lambda}\left(\hat{I}_{m-1,n+1,r'''}\right)\cup\pi_{i,j}^{0}\left(\hat{I}_{m,n,r'''}\right)\cup\pi_{i,j}^{\infty}\left(\hat{I}_{m,n,r'''}\right);$ \item $\hat{I}_{m,n,r''}\subseteq\bigcup_{\lambda\in\mathbb{R}}\pi_{m+i,j}^{\lambda}\left(\hat{I}_{m,n,r'''}\right)\cup\pi_{m+i,j}^{+\infty}\left(\hat{I}_{m+1,n-1,r'''}\right)\cup\pi_{m+i,j}^{-\infty}\left(\hat{I}_{m+1,n-1,r'''}\right);$ \item $\hat{I}_{m,n,r''}\subseteq\bigcup_{\lambda\in\mathbb{R}\cup\left\{ \infty\right\} }\pi_{m+i,m+j}^{\lambda}\left(\hat{I}_{m,n,r'''}\right).$ \end{itemize} Moreover, by a compactness argument as in \cite[p. 4406]{vdd:speiss:gen}, it is easy to see that the three last inclusions remain true for suitable finite families of parameters $\lambda$.\end{rem} \begin{proof} [Proof of Theorem \ref{thm: geom monomial}]Notice first that, using Lemma \ref{lem: prod of series} and Remark \ref{rem: f normal iff F normal}, we only need to prove the statement for one germ $f\in\mathcal{A}_{m,n}$. By Theorem \ref{thm: monomialisation} and by compatibility of $\mathcal{T}$ with the operations in \ref{emp:properties of the morph}, there exists an admissible tree $T$ such that for each branch $\mathfrak{b}$ of $T$, the germ $f\circ\rho_{\mathfrak{b}}$ is normal. We prove a stronger statement, namely that the admissible transformations in the statement are in fact induced by branches of $T$. The proof is by induction on the height of $T$. If the height of $T$ is zero, then $f$ is normal. Hence, let us assume $T$ has positive height, consider the initial vertex of $T$ and the elementary tree $T_{0}$ immediately attached to it.
Let $\nu:\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}\to\hat{I}_{m,n,r}$ be an elementary transformation induced by a branch of $T_{0}$ (i.e. $\nu$ is either a Tschirnhausen translation, or a linear transformation, or a ramification, or it is a blow-up chart). The germ $g^{\nu}=f\circ\nu$ belongs to $\mathcal{A}_{m_{\nu},n_{\nu}}$ and can be monomialised by a tree $T'$ with the property that, for every branch $\mathfrak{b}^{'}$ of $T'$, there exists a unique branch $\mathfrak{b}$ of $T$ such that $\rho_{\mathfrak{b}}=\nu\circ\rho_{\mathfrak{b}^{'}}$. Since the height of $T'$ is smaller than the height of $T$, the inductive hypothesis applies and there are a polyradius $r_{\nu}'$ and a finite family $\mathcal{F}^{\nu}=\left\{ \rho_{i}^{\nu}:\hat{I}_{m_{i}^{\nu},n_{i}^{\nu},r_{i}^{\nu}}\to\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}\right\} _{i=1,\ldots,N_{\nu}}$ as in the statement of the theorem such that the germs $g^{\nu}\circ\rho_{i}^{\nu}$ are normal and $\hat{I}_{m_{\nu},n_{\nu},r_{\nu}'}\subseteq\bigcup_{i=1}^{N_{\nu}}\rho_{i}^{\nu}\left(\hat{I}_{m_{i}^{\nu},n_{i}^{\nu},r_{i}^{\nu}}\right)$. Since the admissible transformations $\nu\circ\rho_{i}^{\nu}$ are induced by branches of $T$, we have that the germs $f\circ\nu\circ\rho_{i}^{\nu}$ are normal and $\nu\left(\hat{I}_{m_{\nu},n_{\nu},r_{\nu}'}\right)\subseteq\bigcup_{i=1}^{N_{\nu}}\nu\circ\rho_{i}^{\nu}\left(\hat{I}_{m_{i}^{\nu},n_{i}^{\nu},r_{i}^{\nu}}\right)$.
Now, by Remark \ref{rem: elemntary transf send quadrants to sectors}, there are a polyradius $r'$ and a finite collection $\mathcal{G}$ of branches of $T_{0}$ such that $\hat{I}_{m,n,r'}\subseteq{\displaystyle \bigcup_{\nu\in\mathcal{G}}}\nu\left(\hat{I}_{m_{\nu},n_{\nu},r_{\nu}'}\right)$, and we are done. \end{proof}
\section{A parametrisation theorem\label{sec:Parametrisation-of--subanalytic-1}}
The main result of this section is Theorem \ref{thm: param subanal}, which essentially provides a way to parametrise every bounded $\mathbb{R}_{\mathcal{A}}$-definable set by means of maps whose components are in $\mathcal{A}$. This provides the main step in the proof of Theorem A, which will be completed at the end of this section, and a key tool for proving Theorem B, in the next section. \begin{defn} \label{def:semi and subanalytic sets}
A set $A\subset\hat{I}_{m,n,r}$ is said to be \emph{$\mathcal{A}_{m,n}$-basic} if it is a finite union of sets of the form \[ \{(x,y)\in\hat{I}_{m,n,r}:\ g_{0}(x,y)=0,g_{1}(x,y)>0,\dots,g_{k}(x,y)>0\}, \]
where $g_{0},\ldots,g_{k}\in\mathcal{A}_{m,n,r}$.
A set $A\subset\mathbb{R}^{m+n}$ is said to be \emph{$\mathcal{A}_{m,n}$-semianalytic} if for every point $a\in\mathbb{R}^{m+n}$ there exists $r_{a}\in(0,\infty)^{m+n}$ such that for every choice of signs $\sigma=(\sigma_{1},\dots,\sigma_{m})\in\left\{ -1,1\right\} ^{m}$, there exists an $\mathcal{A}_{m,n}$-basic set $A_{a,\sigma}\subset\hat{I}_{m,n,r_{a}}$ with \[ A\cap(h_{a,\sigma}(\hat{I}_{m,n,r_{a}}))=h_{a,\sigma}(A_{a,\sigma}), \]
where $h_{a,\sigma}(x,y)=(\sigma_{1}x_{1}+a_{1},\dots,\sigma_{m}x_{m}+a_{m},y_{1}+a_{m+1},\ldots,y_{n}+a_{m+n})$.
We will simply say that a set is $\mathcal{A}$-basic or $\mathcal{A}$-semianalytic when the indices $m,n$ are either obvious from the context or irrelevant. \end{defn} \begin{rem} \label{rem: basic is semi}Notice that since all functions in $\mathcal{A}$ are $\mathcal{A}$-analytic (see Definition \ref{def: A-analytic}), an $\mathcal{A}$-basic set is also $\mathcal{A}$-semianalytic.\end{rem} \begin{defn} \label{def:quadrant}Let $r=\left(r_{1},\ldots,r_{m+n}\right)\in\left(0,\infty\right)^{m+n}$ be a polyradius. A set $Q\subseteq\hat{I}_{m,n,r}$ of the form $B_{1}\times\ldots\times B_{m+n}$, where $B_{i}$ is either $\left\{ 0\right\} $, or $\left(-r_{i},0\right)$, or $\left(0,r_{i}\right)$, is called a \emph{sub-quadrant} of $\mathbb{R}^{m+n}$. The cardinality of the set $C:=\left\{ i:\ B_{i}\not=\left\{ 0\right\} \right\} $, denoted by $\mathrm{dim}\left(Q\right)$, is called the \emph{dimension} of $Q$. Notice that $\hat{I}_{m,n,r}$ is a finite union of sub-quadrants. \end{defn}
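For example (a direct illustration of the definition): for $m=2$, $n=1$ and $r=\left(r_{1},r_{2},r_{3}\right)$, the set $Q=\left(0,r_{1}\right)\times\left\{ 0\right\} \times\left(0,r_{3}\right)$ is a sub-quadrant with $C=\left\{ 1,3\right\} $, hence $\mathrm{dim}\left(Q\right)=2$.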
The following proposition states that the germ at zero of an $\mathcal{A}$-basic subset of $\mathbb{R}^{m+n}$ can be transformed into a finite union of sub-quadrants of $\mathbb{R}^{m+n}$ by means of a finite family of admissible transformations. \begin{prop} \label{prop: param basic sets}Let $A\subseteq\hat{I}_{m,n,r}$ be an $\mathcal{A}_{m,n}$-basic set. Then there exist a neighbourhood $W$ of 0 in $\mathbb{R}^{m+n}$ and a finite family $\mathcal{F}=\left\{ \left(\rho_{i},Q_{i}\right):\ i=1,\ldots,N\right\} $, where $\rho_{i}:\hat{I}_{m_{i}',n_{i}',r'_{i}}\to\hat{I}_{m,n,r}$ is an admissible transformation and $Q_{i}\subseteq\overline{Q_{i}}\subseteq\hat{I}_{m_{i}',n_{i}',r_{i}'}$ is a sub-quadrant, such that $\rho_{i}\restriction Q_{i}:\ Q_{i}\to\rho_{i}\left(Q_{i}\right)$ is a diffeomorphism and \[ W\cap A=\bigcup_{i=1}^{N}\rho_{i}\left(Q_{i}\right). \] \end{prop} \begin{proof} The proof is by induction on the dimension $m+n$ of the ambient space. Let $S:=\left\{ x_{1}=0\right\} \cup\ldots\cup\left\{ x_{m}=0\right\} \cup\left\{ y_{1}=0\right\} \cup\ldots\cup\left\{ y_{n}=0\right\} $. The set $A\cap S$ is an $\mathcal{A}$-basic set contained in an ambient space of dimension strictly smaller than $m+n$, hence, by the inductive hypothesis, it is sufficient to prove the proposition for the set $A':=A\setminus S$. We may assume that $A'$ is of the following form: \[ A'=\left\{ \left(x,y\right)\in\hat{I}_{m,n,r}:\ f_{0}\left(x,y\right)=0\wedge f_{1}\left(x,y\right)>0\wedge\ldots\wedge f_{p}\left(x,y\right)>0\right\} , \] where $f_{i}\in\mathcal{A}_{m,n}$.
By Theorem \ref{thm: monomialisation}, there is an admissible tree $T$ such that for each branch $\mathfrak{b}$ of $T$, the germs $f_{i}\circ\rho_{\mathfrak{b}}$ are all normal. We prove by induction on the height of $T$ that the admissible transformations which parametrise $A'$ are in fact induced by branches of $T$. If the height of $T$ is zero, then all the $f_{i}$ are normal and $A'$ is a sub-quadrant of $\hat{I}_{m,n,r}$, so for some box $W$ around zero, the closure of the sub-quadrant $W\cap A'$ is contained in $\hat{I}_{m,n,r}$. Hence, let us assume $T$ has positive height, consider the initial vertex of $T$ and the elementary tree $T_{0}$ immediately attached to it.
Let $\nu:\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}\to\hat{I}_{m,n,r}$ be an elementary transformation induced by a branch of $T_{0}$. Consider the germs $g_{i}^{\nu}=f_{i}\circ\nu\in\mathcal{A}_{m_{\nu},n_{\nu}}$. The set
\[ A_{\nu}=\left\{ \left(x,y\right)\in\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}:\ g_{0}^{\nu}\left(x,y\right)=0\wedge g_{1}^{\nu}\left(x,y\right)>0\wedge\ldots\wedge g_{p}^{\nu}\left(x,y\right)>0\right\} \] is $\mathcal{A}_{m_{\nu},n_{\nu}}$-basic and the germs $g_{i}^{\nu}$ can be monomialised by a tree $T'$ with the property that, for every branch $\mathfrak{b}^{'}$ of $T'$, there exists a unique branch $\mathfrak{b}$ of $T$ such that $\rho_{\mathfrak{b}}=\nu\circ\rho_{\mathfrak{b}^{'}}$. Since the height of $T'$ is smaller than the height of $T$, the inductive hypothesis applies and there are a neighbourhood $W_{\nu}$ of 0 in $\mathbb{R}^{m+n}$ and a finite family $\mathcal{F}_{\nu}=\left\{ \left(\rho_{i}^{\nu},Q_{i}^{\nu}\right):\ i=1,\ldots,N_{\nu}\right\} $ as in the statement such that the $\rho_{i}^{\nu}$ are induced by branches of $T'$ and $W_{\nu}\cap A_{\nu}=\bigcup_{i=1}^{N_{\nu}}\rho_{i}^{\nu}\left(Q_{i}^{\nu}\right)$. By Remark \ref{rem: elemntary transf send quadrants to sectors}, there are a polyradius $r'$ and a finite collection $\mathcal{G}$ of branches of $T_{0}$ such that $\hat{I}_{m,n,r'}\subseteq{\displaystyle \bigcup_{\nu\in\mathcal{G}}}\nu\left(\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}\cap W_{\nu}\right)\subseteq\hat{I}_{m,n,r}$. Hence, there is a neighbourhood $W$ of zero such that ${\displaystyle \bigcup_{\nu\in\mathcal{G}}}\nu\left(\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}\cap W_{\nu}\right)=\hat{I}_{m,n,r}\cap W$. It follows that
\[ A'\cap W=A'\cap{\displaystyle \bigcup_{\nu\in\mathcal{G}}}\nu\left(\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}\cap W_{\nu}\right)={\displaystyle \bigcup_{\nu\in\mathcal{G}}}\nu\left(A_{\nu}\cap W_{\nu}\right)={\displaystyle \bigcup_{\nu\in\mathcal{G}}}\bigcup_{i=1}^{N_{\nu}}\nu\circ\rho_{i}^{\nu}\left(Q_{i}^{\nu}\right). \] We claim that $\nu\restriction A_{\nu}$ is a diffeomorphism onto its image (and hence so is $\nu\circ\rho_{i}^{\nu}\restriction Q_{i}^{\nu}$). This is clear if $\nu$ is either of type $L_{i,c},\ \text{or}\ \tau_{h}$, or $r_{m+i}^{d,\pm}$. If $\nu=r_{i}^{\gamma}$, then $A_{\nu}\cap\left\{ x_{i}'=0\right\} =\emptyset$ (because $A'\cap S=\emptyset$). For the same reason, if $\nu=\pi_{m+i,j}^{\lambda}$ then $A_{\nu}\cap\left\{ x_{j}'=0\right\} =\emptyset$, and similarly for the other blow-up charts. Summing up, $A_{\nu}$ does not meet the subset of $\hat{I}_{m_{\nu},n_{\nu},r_{\nu}}$ on which $\nu$ is either not bijective or not differentiable. \end{proof} \begin{cor} \label{cor: param semi-anal}Let $A\subseteq\mathbb{R}^{m+n}$ be a bounded $\mathcal{A}_{m,n}$-semianalytic set. Then there exists a finite family $\mathcal{F}=\left\{ \left(\rho_{i},Q_{i},a_{i},\sigma_{i}\right):\ i=1,\ldots,N\right\} $, where $a_{i}\in\mathbb{R}^{m+n}$, $\sigma_{i}\in\left\{ -1,1\right\} ^{m}$, $\rho_{i}:\hat{I}_{m_{i}',n_{i}',r'_{i}}\to\hat{I}_{m,n,r}$ is an admissible transformation and $Q_{i}\subseteq\hat{I}_{m_{i}',n_{i}',r_{i}'}$ is a sub-quadrant, such that $h_{a_{i},\sigma_{i}}\circ\rho_{i}\restriction Q_{i}:\ Q_{i}\to h_{a_{i},\sigma_{i}}\circ\rho_{i}\left(Q_{i}\right)$ is a diffeomorphism and \[ A=\bigcup_{i=1}^{N}h_{a_{i},\sigma_{i}}\circ\rho_{i}\left(Q_{i}\right), \] where $h_{a_{i},\sigma_{i}}$ is as in Definition \ref{def:semi and subanalytic sets}. \end{cor} \begin{rem} Since $h_{a_{i},\sigma_{i}}\circ\rho_{i}\restriction Q_{i}$ is a diffeomorphism onto its image, it follows that $A$ has dimension (in the sense of \cite[8.2]{vdd:speiss:gen}). Notice also that the components of $h_{a_{i},\sigma_{i}}\circ\rho_{i}$ belong to $\mathcal{A}_{m_{i}',n_{i}',r_{i}'}$.\end{rem} \begin{proof} The set $A$ is a finite union of translates of reflections of $\mathcal{A}_{m,n}$-basic sets, by a compactness argument. Hence the parametrisation required is obtained from the parametrisation in Proposition \ref{prop: param basic sets} composed with suitable translations and reflections. \end{proof} The next few definitions and statements are inspired by the approach to model-completeness developed in \cite{bm_semi_subanalytic}, \cite{vdd:speiss:gen} and \cite{rsw}. The proofs follow the usual pattern, with some minor differences dictated by the generality of our setting. It is however worth noticing that in order to prove the Fibre Cutting Lemma \ref{empty: Fibre Cutting}, we need to use both quasianalyticity (see Definition \ref{def: quasi-analyticity}) and $\mathcal{A}$-analyticity (see Definition \ref{def: A-analytic}). \begin{defn} \label{def: trivial manifolds}Let $F=\left(F_{1},\ldots,F_{m+n}\right):\hat{I}_{m,n,r}\to I_{0,m+n,r'}$ be a map such that $F_{i}\in\mathcal{A}_{m,n,r}$, and let $Q\subseteq\overline{Q}\subseteq\hat{I}_{m,n,r}$ be a sub-quadrant of dimension $q\leq m+n$. The set \[ \Gamma\left(F\restriction Q\right)=\left\{ \left(u,w\right):\ u\in Q,\ w=F\left(u\right)\right\} \subseteq I_{m,m+2n,\left(r,r'\right)} \]
is called a \emph{trivial manifold}.
A trivial manifold $M$ is clearly a $\mathcal{C}^{1}$ manifold of dimension $q$ and an $\mathcal{A}$-basic set. The frontier $\text{fr}\left(M\right)=\overline{M}\setminus M$ is the set $\left\{ \left(u,w\right):\ u\in\text{fr}\left(Q\right),\ w=F\left(u\right)\right\} $, which, by $\mathcal{A}$-analyticity of the components of $F$ (see Definition \ref{def: A-analytic}), is an $\mathcal{A}$-semianalytic set. Moreover, $\text{dim}\left(\text{fr}\left(M\right)\right)<\text{dim}\left(M\right)$. \end{defn} \begin{notation} Let $k,l\in\mathbb{N}$ with $k\leq l$. Denote by $\Pi_{k}^{l}:\mathbb{R}^{l}\to\mathbb{R}^{k}$ the projection onto the \textbf{last} $k$ coordinates, i.e. $\Pi_{k}^{l}\left(z_{1},\ldots,z_{l}\right)=\left(z_{l-k+1},\ldots,z_{l}\right)$. \end{notation} \begin{cor} \label{cor: decomp of s.a. into trivial manifolds}Let $A\subseteq\mathbb{R}^{m+n}$ be a bounded $\mathcal{A}_{m,n}$-semianalytic set. Then there exist finitely many trivial manifolds $M_{1},\ldots,M_{N}\subseteq\mathbb{R}^{2\left(m+n\right)}$ such that $A={\displaystyle \bigcup_{i=1}^{N}\Pi_{m+n}^{2\left(m+n\right)}\left(M_{i}\right)}$ and $\Pi_{m+n}^{2\left(m+n\right)}\restriction M_{i}$ is an immersion.\end{cor} \begin{proof} Apply Corollary \ref{cor: param semi-anal} to $A$ and let $M_{i}=\Gamma\left(h_{a_{i},\sigma_{i}}\circ\rho_{i}\restriction Q_{i}\right)$. Notice that $\text{dim}\left(M_{i}\right)=\text{dim}\left(Q_{i}\right)\leq\text{dim}\left(A\right)$.\end{proof} \begin{defn} \label{def: A-manifold}An $\mathcal{A}_{m,n}$-basic set $M\subseteq I_{m,n,r}$ is called an \emph{$\mathcal{A}_{m,n}$-manifold }if $M$ is a $\mathcal{C}^{1}$-manifold of dimension $d\leq m+n$ and there are $h_{1},\ldots,h_{m+n-d}\in\mathcal{A}_{m,n}$ such that $h_{i}\restriction M\equiv0$ and for all $z\in M$ the vectors $\nabla h_{1}\left(z\right),\ldots,\nabla h_{m+n-d}\left(z\right)$ are linearly independent (see \cite[Def. 8.3]{vdd:speiss:gen}). Notice that a trivial manifold is an $\mathcal{A}_{m,n}$-manifold.
Let $\iota:\left\{ 1,\ldots,d\right\} \to\left\{ 1,\ldots,m+n\right\} $ be a strictly increasing sequence and let $\Pi_{\iota}\left(z_{1},\ldots,z_{m+n}\right)=\left(z_{\iota\left(1\right)},\ldots,z_{\iota\left(d\right)}\right)$. Let $M_{\iota}=\left\{ z\in M:\ \Pi_{\iota}\restriction M\ \text{has\ rank\ }d\ \text{at\ }z\right\} $. Then $M_{\iota}=\left\{ z\in M:\ f_{\iota}\left(z\right)\not=0\right\} $ for some $f_{\iota}\in\mathcal{A}_{m,n,r}$, which is a polynomial in the partial derivatives and modified derivatives $\partial_{i}$ of $h_{1},\ldots,h_{m+n-d}$. In particular, $M_{\iota}$ is an $\mathcal{A}_{m,n}$-manifold of dimension $d$ and $M$ is the union of all the $M_{\iota}$.
Let $k\leq m+n$ and for every increasing sequence $\iota$, define $m_{\iota}\left(k\right)$ as the dimension of the vector space $\Pi_{\iota}\left(\mathbb{R}^{m+n}\right)\cap\Pi_{k}^{m+n}\left(\mathbb{R}^{m+n}\right)$. Notice that $m_{\iota}\left(k\right)\in\left\{ 0,\ldots,d\right\} $. Suppose that $\Pi_{k}^{m+n}\restriction M_{\iota}$ has constant rank $m_{\iota}\left(k\right)$. Then, arguing as in \cite[8.11]{vdd:speiss:gen}, it is easy to see that for every $a\in\mathbb{R}^{k}$, the fibre $M_{a}:=\left(\Pi_{k}^{m+n}\right)^{-1}\left(a\right)\cap M_{\iota}$ is either empty or an $\mathcal{A}_{m,n}$-manifold of dimension $d-m_{\iota}\left(k\right)$ and every connected component of the fibre has non-empty frontier. \end{defn} \begin{void} \label{empty: Fibre Cutting}\textbf{Fibre Cutting Lemma. }\emph{Let $M_{\iota}\subseteq I_{m,n,r}$ be as in the above definition and let $k\leq m+n$. Suppose that $\Pi_{k}^{m+n}\restriction M_{\iota}$ has constant rank $m_{\iota}\left(k\right)$ and that $m_{\iota}\left(k\right)<d=\text{dim}\left(M_{\iota}\right)$. Then there is an $\mathcal{A}_{m,n,r}$-basic set $A\subseteq M_{\iota}$ such that $\text{dim}\left(A\right)<d$ and $\Pi_{k}^{m+n}\left(A\right)=\Pi_{k}^{m+n}\left(M_{\iota}\right)$.}\end{void} \begin{proof} Let $f,g_{1},\ldots,g_{q}\in\mathcal{A}_{m,n,r}$ be such that
\[ M_{\iota}=\left\{ \left(x,y\right)\in I_{m,n,r}:\ f\left(x,y\right)=0,\ g_{1}\left(x,y\right)>0,\ldots,g_{q}\left(x,y\right)>0\right\} . \] Recall that for every $a\in\Pi_{k}^{m+n}\left(M_{\iota}\right)$, the fibre $M_{a}:=\left(\Pi_{k}^{m+n}\right)^{-1}\left(a\right)\cap M_{\iota}$ is an $\mathcal{A}_{m,n}$-manifold of dimension $d-m_{\iota}\left(k\right)>0$ and every connected component of the fibre has non-empty frontier. Let $r=\left(s,t\right)$. The function $g\left(x,y\right)=\prod_{i=1}^{m}x_{i}\left(s_{i}-x_{i}\right)\cdot\prod_{i=1}^{n}\left(t_{i}^{2}-y_{i}^{2}\right)\cdot\prod_{i=1}^{q}g_{i}\left(x,y\right)\in\mathcal{A}_{m,n,r}$ is positive on $M_{a}$ and vanishes on $\text{fr}\left(M_{a}\right)$, hence $g\restriction M_{a}$ has a critical point on every connected component of $M_{a}$. The set
\[ A=\left\{ \left(x,y\right)\in M_{\iota}:\ g\restriction M_{a}\text{\ is\ critical\ at\ \ensuremath{\left(x,y\right)},\ where }a:=\Pi_{k}^{m+n}\left(x,y\right)\right\} \] is clearly $\mathcal{A}_{m,n,r}$-basic and $\Pi_{k}^{m+n}\left(A\right)=\Pi_{k}^{m+n}\left(M_{\iota}\right)$. We claim that $A$ has empty interior in $M_{\iota}$, and hence $\text{dim}\left(A\right)<d$.
Let $a\in\Pi_{k}^{m+n}\left(M_{\iota}\right)$ and let $h_{1},\ldots,h_{m+n-d}\in\mathcal{A}_{m,n}$ such that $h_{i}\restriction M_{a}\equiv0$ and for all $z=\left(x,y\right)\in M_{a}$ the vectors $\nabla h_{1}\left(z\right),\ldots,\nabla h_{m+n-d}\left(z\right)$ are linearly independent. If $C$ is a connected component of $M_{a}$, let
\[ S=\left\{ z\in C:\ g\restriction M_{a}\text{\ is\ critical\ at }z\right\} =\left\{ z\in C:\ \text{d}g\left(z\right)\land\text{d}h_{1}\left(z\right)\land\ldots\land\text{d}h_{m+n-d}\left(z\right)=0\right\} . \] It suffices to prove that $S$ has empty interior on $C$. Clearly, $S$ is closed in $C$. We prove that, if $S$ has non-empty interior in $C$, then $S$ is also open in $C$, which contradicts the fact that $g$ is not constant on the fibres. Let $z_{0}\in S\setminus\overset{\circ}{S}$ and define $\overline{g}\left(z\right):=g\left(z+z_{0}\right),\ \overline{h}_{i}\left(z\right):=h_{i}\left(z+z_{0}\right)$ for $i=1,\ldots,m+n-d$. By $\mathcal{A}$-analyticity (see Definition \ref{def: A-analytic}), the germs at zero of these functions belong to $\mathcal{A}_{0,m+n}$. By the Implicit Function Theorem and Axiom 6 in \ref{emp:properties of the morph}, there are a polyradius $r'\in\mathbb{R}^{d}$ and a map $\varphi=\left(\varphi_{1},\ldots,\varphi_{m+n}\right)\in\left(\mathcal{A}_{0,d,r'}\right)$ such that $\varphi\left(0\right)=z_{0}$ and $\varphi$ is a local parametrisation of $C$ around $z_{0}$. Let $F\in\mathcal{A}_{0,d,r'}$ be such that
\begin{align*} \varphi^{-1}\left(S\right) & =\left\{ w\in I_{0,d,r'}:\ \text{d}\left(\overline{g}\circ\varphi\right)\left(w\right)\land\text{d}\left(\overline{h}_{1}\circ\varphi\right)\left(w\right)\land\ldots\land\text{d}\left(\overline{h}_{m+n-d}\circ\varphi\right)\left(w\right)=0\right\} \\
& =\left\{ w\in I_{0,d,r'}:\ F\left(w\right)=0\right\} . \end{align*} Since $\varphi$ is a homeomorphism, $\varphi^{-1}\left(\overset{\circ}{S}\right)\subseteq I_{0,d,r'}$ is an open set on which $F$ vanishes, and $0\in\text{cl}\left(\varphi^{-1}\left(\overset{\circ}{S}\right)\right)$ . By quasianalyticity, there exists $r''\leq r'$ such that $F$ vanishes on $I_{0,d,r''}$, which contradicts the fact that $z_{0}\notin\overset{\circ}{S}$. \end{proof} \begin{prop} \label{prop: trivial manifolds for subanal sets}Let $A\subseteq\mathbb{R}^{m+n}$ be a bounded $\mathcal{A}_{m,n}$-semianalytic set and $k\leq m+n$. Then there exist finitely many trivial manifolds $M_{1},\ldots,M_{N}\subseteq\mathbb{R}^{2\left(m+n\right)}$, with $\text{dim}\left(M_{i}\right)\leq k$, such that $\Pi_{k}^{m+n}\left(A\right)={\displaystyle \bigcup_{i=1}^{N}\Pi_{k}^{2\left(m+n\right)}\left(M_{i}\right)}$ and for every $i=1,\ldots,N$ there is a strictly increasing sequence $\iota:\left\{ 1,\ldots,\text{dim}\left(M_{i}\right)\right\} \to\left\{ m+n-k+1,\ldots,m+n\right\} $ such that $\Pi_{\iota}\restriction M_{i}$ is an immersion.\end{prop} \begin{proof} The proof is by induction on $d=\text{dim}\left(A\right)$, for all $m,n,k\in\mathbb{N}$. If $d=0$, then by Corollary \ref{cor: param semi-anal} $A$ is a finite set and there is nothing to prove. Let $d>0$. By Corollary \ref{cor: decomp of s.a. into trivial manifolds} and the inductive hypothesis, it is enough to prove the proposition for an $\mathcal{A}_{m,n}$-manifold $M$ of dimension $d$, instead of $A$. We argue by induction on $r=\text{max}\left\{ \mbox{\ensuremath{\text{rk}_{z}\Pi_{k}^{m+n}\restriction}M}:\ z\in M\right\} $. If $r=0$, then $\Pi_{k}^{m+n}\left(M\right)$ is a finite set and there is nothing to prove, so let us assume that $r>0$. Let $M_{1}=\left\{ z\in M:\ \text{rk}_{z}\Pi_{k}^{m+n}\restriction M=r\right\} $ and $M_{2}=\left\{ z\in M:\ \text{rk}_{z}\Pi_{k}^{m+n}\restriction M<r\right\} $. Then $M_{1}$ and $M_{2}$ are $\mathcal{A}_{m,n}$-semianalytic and $M_{1}$ is open in $M$, and hence is an $\mathcal{A}_{m,n}$-manifold. By Corollary \ref{cor: decomp of s.a. into trivial manifolds} and the inductive hypothesis on the rank, we obtain the statement of the proposition for $M_{2}$. Hence we may assume that $\Pi_{k}^{m+n}\restriction M$ has constant rank $r$.
Suppose first that $r=d$. Notice that in this case $d\leq k$ and $M$ is the union of all $\mathcal{A}_{m,n}$-manifolds $M_{\iota}$ (as in Definition \ref{def: A-manifold}) such that $\iota:\left\{ 1,\ldots,d\right\} \to\left\{ m+n-k+1,\ldots,m+n\right\} $ is a strictly increasing sequence. By Corollary \ref{cor: decomp of s.a. into trivial manifolds} and the inductive hypothesis on the dimension, the proposition then holds trivially for $M_{\iota}$.
Hence we may assume that $r<d$. For $\iota:\left\{ 1,\ldots,d\right\} \to\left\{ 1,\ldots,m+n\right\} $ an increasing sequence, consider the $\mathcal{A}_{m,n}$-manifold $M_{\iota}$. If $r=m_{\iota}\left(k\right)$, then by the Fibre Cutting Lemma \ref{empty: Fibre Cutting}, Corollary \ref{cor: decomp of s.a. into trivial manifolds} and the inductive hypothesis on the dimension, the proposition holds for $M_{\iota}$. Now, one can check that for every $z\in M_{3}:=M\setminus\bigcup\left\{ M_{\iota}:\ m_{\iota}\left(k\right)=r\right\} $ we have $\text{rk}_{z}\Pi_{k}^{m+n}\restriction M<r$, hence $M_{3}=\emptyset$ and we are done. \end{proof} We are now ready to prove the parametrisation result which was announced at the beginning of this section. In Subsection \ref{subsec:Proof-of-o-minimality} we will show that every bounded $\mathbb{R}_{\mathcal{A}}$-definable set is a projection of an $\mathcal{A}$-semianalytic set, hence the result below provides a parametrisation theorem for all bounded $\mathbb{R}_{\mathcal{A}}$-definable sets. \begin{thm} \label{thm: param subanal}Let $A\subseteq\mathbb{R}^{m+n}$ be a bounded $\mathcal{A}_{m,n}$-semianalytic set and let $k\leq m+n$. Then there exists $N\in\mathbb{N}$ and for all $i=1,\ldots,N$, there exist $m_{i}',n_{i}'\in\mathbb{N}$, with $m_{i}'+n_{i}'=m+n$, a polyradius $r_{i}$, a sub-quadrant $Q_{i}\subseteq\hat{I}_{m_{i}',n_{i}',r_{i}}$ and a map $H_{i}:\hat{I}_{m_{i}',n_{i}',r_{i}}\to\mathbb{R}^{k}$, whose components are in $\mathcal{A}_{m_{i}',n_{i}',r_{i}}$, such that $H_{i}\restriction Q_{i}:Q_{i}\to H_{i}\left(Q_{i}\right)$ is a diffeomorphism and \[ \Pi_{k}^{m+n}\left(A\right)=\bigcup_{i=1}^{N}H_{i}\left(Q_{i}\right). \] \end{thm} \begin{proof} By Proposition \ref{prop: trivial manifolds for subanal sets}, there are trivial manifolds $M_{i}\subseteq\mathbb{R}^{2\left(m+n\right)}$, with $\text{dim}\left(M_{i}\right)\leq k$, such that $\Pi_{k}^{m+n}\left(A\right)={\displaystyle \bigcup_{i=1}^{N}\Pi_{k}^{2\left(m+n\right)}\left(M_{i}\right)}$ and $\Pi_{k}^{2\left(m+n\right)}\restriction M_{i}$ is an immersion. By definition of trivial manifold, there exist maps $F_{i}:\hat{I}_{m_{i}',n_{i}',r_{i}}\to I_{0,m+n,r_{i}'}$ with components in $\mathcal{A}_{m_{i}',n_{i}',r_{i}}$, and sub-quadrants $Q_{i}\subseteq\overline{Q}_{i}\subseteq\hat{I}_{m_{i}',n_{i}',r_{i}}$ of dimension $q_{i}\leq k$ such that $M_{i}=\Gamma\left(F_{i}\restriction Q_{i}\right)$. Let $G_{i}:\hat{I}_{m_{i}',n_{i}',r_{i}}\to I_{0,2\left(m+n\right),\left(r_{i},r_{i}'\right)}$ be the map defined as $G_{i}\left(z\right)=\left(z,F_{i}\left(z\right)\right)$, whose components are in $\mathcal{A}_{m_{i}',n_{i}',r_{i}}$. Then the maps $H_{i}:=\Pi_{k}^{2\left(m+n\right)}\circ G_{i}$ and the quadrants $Q_{i}$ satisfy the required properties. \end{proof}
\subsection{Proof of Theorem A\label{subsec:Proof-of-o-minimality}}
We use Gabrielov's approach, as illustrated in \cite[2.1-2.9]{vdd:speiss:gen}. \begin{defn} \label{def: sub-lambda-set}Let $\Lambda_{n}$ be the collection of all $\mathcal{A}$-semianalytic subsets of the unit cube $\left[-1,1\right]^{n}$. Following \cite[Definition 2.1]{vdd:speiss:gen}, we say that a set $B\subseteq\mathbb{R}^{n}$ is a \emph{sub-$\Lambda$-set} if there exist $m\geq n$ and a set $A\in\Lambda_{m}$ such that $B=\Pi_{n}^{m}\left(A\right)$. \end{defn} The collection $\Lambda=\left(\Lambda_{n}\right)_{n\in\mathbb{N}}$ clearly satisfies conditions (I), (II) and (III) of \cite[2.3]{vdd:speiss:gen}. Condition (IV), namely the $\Lambda$-Gabrielov property, is also satisfied by Proposition \ref{prop: trivial manifolds for subanal sets}, since, up to composing with a homothety, we may assume that the trivial manifolds in the proposition are contained in the unit cube. It follows that $\Lambda$ satisfies the Theorem of the Complement \cite[2.7]{vdd:speiss:gen} and \cite[Corollary 2.8]{vdd:speiss:gen}. \begin{defn} \label{def: decomposition of Rn}Let $n\in\mathbb{N}$ and $x=\left(x_{1},\ldots,x_{n}\right)$. For $u\in\mathbb{R}$, define
\begin{align*} \tau\left(u\right) & =\begin{cases}
u & \text{if}\ |u|\leq1\\
1/u & \text{if}\ |u|>1 \end{cases}. \end{align*} Let $\iota\subseteq\left\{ 1,\ldots,n\right\} $. Define:
\[
R_{\iota}^{n}:=\left\{ x\in\mathbb{R}^{n}:\ |x_{i}|\leq1\ \text{for\ }i\in\iota,\ |x_{i}|>1\ \text{for}\ i\notin\iota\right\} , \]
\[
I_{\iota}^{n}:=\left\{ x\in\mathbb{R}^{n}:\ |x_{i}|\leq1\ \text{for\ }i\in\iota,\ |x_{i}|<1\ \text{for}\ i\notin\iota\right\} , \] \[ \xyC{0mm}\xyL{0mm}\xymatrix{\tau_{\iota}^{n}\colon & R_{\iota}^{n}\ar[rrrr] & \ & \ & \ & I_{\iota}^{n}\\
& x\ar@{|->}[rrrr] & & & & \left(\tau\left(x_{1}\right),\ldots,\tau\left(x_{n}\right)\right) } . \] \end{defn} \begin{rem} \label{rem: tau i a bijection}Notice that $\tau_{\iota}^{n}:R_{\iota}^{n}\to I_{\iota}^{n}$
is a bijection, hence it commutes with the boolean set operations. In particular, for $A\subseteq R_{\iota}^{n}$, we have $\tau_{\iota}^{n}\left(R_{\iota}^{n}\setminus A\right)=I_{\iota}^{n}\setminus\tau_{\iota}^{n}\left(A\right)$. Let $\kappa\subseteq\left\{ 1,\ldots,n\right\} $ and $\pi$ be the projection of $\mathbb{R}^{n}$ onto the vector subspace spanned by the coordinates $\left\{ x_{i}:\ i\in\kappa\right\} $. Let $\iota^{*}=\iota\cap\kappa$. Since $\tau_{\iota}^{n}$ acts coordinate-wise, we have that $\tau_{\iota^{*}}^{|\kappa|}\left(\pi\left(A\right)\right)=\pi\left(\tau_{\iota}^{n}\left(A\right)\right)$. Finally, note that the sets $R_{\iota}^{n}$ partition $\mathbb{R}^{n}$ and that the union of all the sets $I_{\iota}^{n}$ is the closed unit cube.\end{rem} \begin{void} \label{vuoto: model comp and o-min}We now prove the model-completeness and o-minimality of $\mathbb{R}_{\mathcal{A}}$. \end{void} Let $\tilde{f_{1}},\ldots,\tilde{f_{k}}$ be as in Definition \ref{def:structures}, i.e. $\tilde{f_{i}}$ coincides with some $f_{i}\in\mathcal{A}_{n',n'',1}$ (with $n'+n''=n$) inside the unit cube, and is zero outside the cube. Let $P\left(x,y\right)\in\mathbb{Q}\left[x,y\right]$ be a polynomial in $n+k$ variables. It is easy to see that, if $A=\left\{ x\in R_{\iota}^{n}:\ P\left(x,\tilde{f_{1}}\left(x\right),\ldots,\tilde{f_{k}}\left(x\right)\right)=0\right\} $, then the set $\tau_{\iota}^{n}\left(A\right)$ is an $\mathcal{A}$-basic set (this follows from the fact that the unbounded variables appear only semi-algebraically in the formula defining $A$). Now, routine manipulations show that every set $A\subseteq\mathbb{R}^{n}$ definable in the structure $\mathbb{R}_{\mathcal{A}}$ is of the form \[ A=\bigcup_{\iota\subseteq\left\{ 1,\ldots,n\right\} }\bigcup_{\iota^{*}\subseteq\left\{ 1,\ldots,m\right\} }A_{\iota,\iota^{*}}\text{,\ with} \] \[ A_{\iota,\iota^{*}}=\left\{ x\in R_{\iota}^{n}:\ \mathcal{Q}_{1}y_{1}\ldots\mathcal{Q}_{m}y_{m}\ y\in R_{\iota^{*}}^{m}\ \land\ P\left(x,y,\tilde{f_{1}}\left(x,y\right),\ldots,\tilde{f_{k}}\left(x,y\right)\right)=0\right\} , \] where $y=\left(y_{1},\ldots,y_{m}\right)$, $\mathcal{Q}_{i}$ is either the existential or the universal quantifier, $P$ is a polynomial in $n+m+k$ variables and $\tilde{f_{i}}$ are as in Definition \ref{def:structures}. Using Remark \ref{rem: tau i a bijection}, we immediately see that $\tau_{\iota}^{n}\left(A_{\iota,\iota^{*}}\right)$ is definable in the structure $\left(I,\Lambda\right)$ (in particular it is a sub-$\Lambda$-set, by \cite[Corollary 2.8]{vdd:speiss:gen}) and that $A_{\iota,\iota^{*}}$ is existentially definable and has finitely many connected components. \begin{rem} \label{rem: bounded sets are sub-lambda}Let $A\subseteq\left[-1,1\right]^{n}$ be an $\mathbb{R}_{\mathcal{A}}$-definable set. Using the above decomposition of $A$ and the substitution $y_{i}\mapsto1/y_{i}$ whenever $|y_{i}|>1$, one can see that $A$ is actually a sub-$\Lambda$-set. In particular, the Parametrisation Theorem \ref{thm: param subanal} applies to $A$.\end{rem} \begin{void} \label{vuoto: polybdd}We now prove polynomial boundedness.\end{void} \begin{rem} \label{rem:composition in one var}Let $f\in\mathcal{A}_{1}$. Then, since the support of $\mathcal{T}\left(f\right)$ is a well ordered set, $f$ is normal. It follows from Lemma \ref{lem:composition with monomials} that if $g$ is a function of one variable, which is obtained as a finite composition of functions in $\mathcal{A}$ and vanishing at 0, then $g\in\mathcal{A}$. 
\end{rem} Let $\varepsilon>0$ and let $f:\left(0,\varepsilon\right)\to\mathbb{R}$ be definable in $\mathbb{R}_{\mathcal{A}}.$ We proceed as in \cite[Theorem B, p. 4419]{vdd:speiss:gen}, using Theorem \ref{thm: param subanal} and Remark \ref{rem:composition in one var} instead of 9.6 and \cite[Lemma 7.10]{krs} \footnote{Notice that in \cite[Lemma 7.10]{krs} one should replace $\lambda$ by $\lambda^{-1/\alpha}$ in the blow-up $\mathbf{r}^{\rho,\lambda}$ and in the definition of $g$. } instead of 9.9. We conclude that there exists $h\in\mathcal{A}_{1}$ such that $f\left(t\right)=h\left(t\right)$ for all sufficiently small $t>0$. In particular, there exist $\alpha\in\mathbb{K}$ and a unit $u\left(t\right)\in\mathcal{A}_{1}$ such that $f\left(t\right)=t^{\alpha}u\left(t\right)$. This finishes the proof of Theorem A.
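As a small worked consequence of the last representation (assuming, as is standard for units of $\mathcal{A}_{1}$, that $u\left(0\right)\neq0$), we have
\[
\lim_{t\to0^{+}}\frac{f\left(t\right)}{t^{\alpha}}=u\left(0\right)\neq0,
\]
so every $\mathbb{R}_{\mathcal{A}}$-definable germ at $0^{+}$ is comparable to a power $t^{\alpha}$ with $\alpha\in\mathbb{K}$; after the substitution $t\mapsto1/t$, this gives the polynomial boundedness of $\mathbb{R}_{\mathcal{A}}$ in its usual formulation at $+\infty$.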
\section{Vertical monomialisation\label{sec:Vertical-monomialisation}}
In this section we prove Theorem B. \begin{proviso} \textbf{\label{Proviso: admissible transf} }Consider the ramifications $r_{m+i}^{d,\pm}:\hat{I}_{m,n,r}\to\hat{I}_{m,n,r}$ in Definition \ref{Def: elementary transformations} and let $\sigma_{m+i}^{\pm}$ be the restriction of $r_{m+i}^{1,\pm}$ to the half-space $\left\{ y_{i}'\geq0\right\} $. In this section we will consider the transformations $\sigma_{m+i}^{\pm}$ as elementary transformations (and extend accordingly the notion of admissible transformation in Definition \ref{Def: admiss transf}). Notice that $\sigma_{m+i}^{+}\left(\hat{I}_{m+1,n-1,r}\right)\cup\sigma_{m+i}^{-}\left(\hat{I}_{m+1,n-1,r}\right)=\hat{I}_{m,n,r}$. \end{proviso} The main result of this section is the following. \begin{thm} \label{thm: monomialis of def functions}Let $D\subseteq\mathbb{R}^{N}$ and $\eta:D\to\mathbb{R}$ be an $\mathbb{R}_{\mathcal{A}}$-definable function such that the graph of $\eta$ is a sub-$\Lambda$-set. Then there exist a polyradius $r$ and a finite family $\mathcal{F}=\left\{ \rho_{i}:\hat{I}_{m_{i},n_{i},r_{i}}\to\mathbb{R}^{N}\right\} _{i=1,\ldots,M}$ (with $m_{i}+n_{i}=N$) of admissible transformations such that for all $i=1,\ldots,M$ the function $\eta\circ\rho_{i}$ belongs to $\mathcal{A}_{m_{i},n_{i},r_{i}}$ and its germ at zero is normal, and $D\cap\hat{I}_{0,N,r}\subseteq\bigcup_{i=1}^{M}\rho_{i}\left(\hat{I}_{m_{i},n_{i},r_{i}}\right)$. \end{thm} Before proving the above theorem, we give a list of consequences. \begin{cor} \label{cor: def functions are terms}Let $D\subseteq\mathbb{R}^{N}$ and $\eta:D\to\mathbb{R}$ be an $\mathbb{R}_{\mathcal{A}}$-definable function. Then there exist finitely many terms $t_{1},\ldots,t_{M}$ of the language $\mathcal{L}:=\mathcal{L}_{\mathcal{A}}\cup\left\{ ^{-1}\right\} \cup\left\{ \sqrt[n]{\ }:\ n\in\mathbb{N}\right\} $ such that \[ \forall x\in D\ \exists i\in\left\{ 1,\ldots,M\right\} \ \ \eta\left(x\right)=t_{i}\left(x\right). \] \end{cor} \begin{proof} The proof is by induction on $N$. If $N=1$, we can conclude by \ref{vuoto: polybdd}, after using the substitution $x\mapsto-x$ on $D\cap\mathbb{R}^{<0}$, the substitution $x\mapsto1/x$ on $D\cap\mathbb{R}\setminus[-1,1]$
and arguing with $1/\eta$ on $\left\{ x\in D:\ |\eta\left(x\right)|>1\right\} $. Hence we may suppose $N>1$. Notice that if $\text{dim}\left(D\right)<N$, then by cell decomposition we may assume that, up to a permutation of the variables, $D$ is the graph of some definable function $\eta':D'\to\mathbb{R}$, where $D'\subseteq\mathbb{R}^{N-1}$. Hence, the function $\eta\left(x_{1},\ldots,x_{N}\right)$ coincides with the function $\eta\left(x_{1},\ldots,x_{N-1},\eta'\left(x_{1},\ldots,x_{N-1}\right)\right)$ and we can conclude by the inductive hypothesis.
Let $\Gamma\left(\eta\right)$ be the graph of $\eta$. Let $A=D\times\mathbb{R}$ and consider the partition of $A$ given by the sets $A_{\iota,\iota^{*}}$ as in Subsection \ref{subsec:Proof-of-o-minimality}. Then each set $\Gamma\left(\eta\right)\cap A_{\iota,\iota^{*}}$ is either empty or the graph of an $\mathbb{R}_{\mathcal{A}}$-definable function and $\tau_{\iota}^{N+1}\left(\Gamma\left(\eta\right)\cap A_{\iota,\iota^{*}}\right)$ is a sub-$\Lambda$-set. Moreover,
\[ \left(x,y\right)\in\Gamma\left(\eta\right)\cap A_{\iota,\iota^{*}}\Leftrightarrow\tau_{\iota}^{N+1}\left(x,y\right)\in\Gamma\left(\tau_{\iota}^{N+1}\circ\eta\circ\left(\tau_{\iota}^{N+1}\right)^{-1}\right)\cap\tau_{\iota}^{N+1}\left(A_{\iota,\iota^{*}}\right), \] hence it suffices to prove the corollary for the function $\tilde{\eta}:=\tau_{\iota}^{N+1}\circ\eta\circ\left(\tau_{\iota}^{N+1}\right)^{-1}$, whose graph is a sub-$\Lambda$-set. We apply Theorem \ref{thm: monomialis of def functions} to the function $\tilde{\eta}:\tilde{D}\to\mathbb{R}$ and obtain in particular that the function $\tilde{\eta}\circ\rho_{i}:\hat{I}_{m_{i},n_{i},r_{i}}\to\mathbb{R}$ is an $\mathcal{L}$-term $t_{i}$. Arguing by induction on the length of $\rho_{i}$, it is easy to see that there is a closed sub-$\Lambda$-set $S\subseteq\hat{I}_{m_{i},n_{i},r_{i}}$ of dimension strictly smaller than $N$ such that $\rho_{i}\restriction\hat{I}_{m_{i},n_{i},r_{i}}\setminus S$ is a diffeomorphism onto its image and that the components of the map $\left(\rho_{i}\restriction\hat{I}_{m_{i},n_{i},r_{i}}\setminus S\right)^{-1}$ are $\mathcal{L}$-terms. Hence, for $x\in\rho_{i}\left(\hat{I}_{m_{i},n_{i},r_{i}}\setminus S\right)$ we have that $\tilde{\eta}\left(x\right)=t_{i}\circ\rho_{i}^{-1}\left(x\right)$ is an $\mathcal{L}$-term. Notice that the function $\tilde{\eta}\restriction\rho_{i}\left(S\right)$ can be dealt with by using the inductive hypothesis. Hence we have proved the corollary for $\tilde{\eta}\restriction\tilde{D}\cap\hat{I}_{0,N,r}$, for some polyradius $r$.
By applying the same argument to the function $\tilde{\eta}\left(x+a\right)$ for every point $a$ of the closed unit cube, we obtain the full statement of the corollary by a compactness argument. \end{proof} As an immediate consequence of the above corollary we obtain: \begin{proof} [Proof of Theorem B]Let $A\subseteq\mathbb{R}^{N}$ be an $\mathbb{R}_{\mathcal{A}}$-definable set. We prove by induction on $N$ that $A$ is quantifier-free definable in the language $\mathcal{L}:=\mathcal{L}_{\mathcal{A}}\cup\left\{ ^{-1}\right\} \cup\left\{ \sqrt[n]{\ }:\ n\in\mathbb{N}\right\} $. This is clear if $N=1$. Hence, we may suppose that $N>1$ and that, by cell decomposition, $A$ is a cell; in particular, there exist a cell $C\subseteq\mathbb{R}^{N-1}$ and $\mathbb{R}_{\mathcal{A}}$-definable functions $f,g:C\to\mathbb{R}$ such that $A$ is either the graph of $f\restriction C$, or the epigraph of $f\restriction C$, or the set $\left\{ \left(x,y\right)\in\mathbb{R}^{N-1}\times\mathbb{R}:\ x\in C\ \land\ g\left(x\right)<y<f\left(x\right)\right\} $. By Corollary \ref{cor: def functions are terms}, there is a finite partition of $C$ into definable sets such that on every set of the partition the functions $f$ and $g$ coincide with some $\mathcal{L}$-terms $t_{1}$ and $t_{2}$, respectively. By the inductive hypothesis, each set of the partition is quantifier-free definable in the language $\mathcal{L}$, hence so is $A$, in each of the above cases. \end{proof} Finally, we obtain the following Rectilinearisation Theorem, in the spirit of \cite{hironaka_real_analytic}. \begin{namedthm} {Rectilinearisation Theorem}\label{cor: rectilinearisation}Let $D\subseteq\mathbb{R}^{N}$ be a sub-$\Lambda$-set. Then there exist a neighbourhood $W$ of 0 in $\mathbb{R}^{N}$ and a finite family $\mathcal{F}=\left\{ \left(\rho_{i},Q_{i}\right):\ i=1,\ldots,M\right\} $, where $\rho_{i}:\hat{I}_{m_{i},n_{i},r{}_{i}}\to\mathbb{R}^{N}$ is an admissible transformation (with $m_{i}+n_{i}=N$) and $Q_{i}\subseteq\overline{Q_{i}}\subseteq\hat{I}_{m_{i},n_{i},r_{i}}$ is a sub-quadrant, such that $\rho_{i}\restriction Q_{i}:\ Q_{i}\to\rho_{i}\left(Q_{i}\right)$ is a diffeomorphism and \[ W\cap D=\bigcup_{i=1}^{M}\rho_{i}\left(Q_{i}\right). \]
\end{namedthm} The Rectilinearisation Theorem is a consequence of Proposition \ref{prop: strong rectilin} below. We need some definitions. \begin{notation} Let $N\in\mathbb{N}$. For the rest of the section, whenever we write an admissible transformation $\rho:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{m,n,r}$, we will implicitly assume that $m+n=m_{\rho}+n_{\rho}=N$ and that $r=\left(r',r''\right)\in\left(0,\infty\right)^{m}\times\left(0,\infty\right)^{n},\ r_{\rho}=\left(r_{\rho}',r_{\rho}''\right)\in\left(0,\infty\right)^{m_{\rho}}\times\left(0,\infty\right)^{n_{\rho}}$ are polyradii in $\mathbb{R}^{N}$. \end{notation} Recall that, if $f\in\mathcal{A}_{m,n}$ and $\rho:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{m,n,r}$ is an admissible transformation, then $f\circ\rho\in\mathcal{A}_{m_{\rho},n_{\rho}}$. The next definition is intended to extend this property to the case when the arity of $f$ is bigger than $N$. \begin{defn} \label{def: rho respects f}Let $N,l\in\mathbb{N}$ and $\hat{m},\hat{n},k_{1},k_{2}\in\mathbb{N}$ with $\hat{m}+\hat{n}=N$ and $k_{1}+k_{2}=l$. Let $\rho:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{m,n,r}$ be an admissible transformation. We say that $\rho$ \emph{respects} the elements of $\mathcal{A}_{\hat{m}+k_{1},\hat{n}+k_{2}}$ if $m\geq\hat{m}$.
Let $s=\left(s_{1},s_{2}\right)\in\left(0,\infty\right)^{k_{1}}\times\left(0,\infty\right)^{k_{2}}$ be a polyradius and $\hat{r}:=\left(r',s_{1},r'',s_{2}\right)$, $\widehat{r_{\rho}}:=\left(r_{\rho}',s_{1},r_{\rho}'',s_{2}\right)$. Let $x=\left(x_{1},\ldots,x_{N}\right),\ x'=\left(x_{1}',\ldots,x_{N}'\right),\ u=\left(u_{1},\ldots,u_{l}\right),\ u'=\left(u_{1}',\ldots,u_{l}'\right)$.
Define the map \[ \xyC{0mm}\xyL{0mm}\xymatrix{\hat{\rho}\colon & \hat{I}_{m_{\rho}+k_{1},n_{\rho}+k_{2},\widehat{r_{\rho}}}\ar[rrrr] & \ & \ & \ & \hat{I}_{m+k_{1},n+k_{2},\hat{r}}\\
& \left\langle x',u'\right\rangle \ar@{|->}[rrrr] & & & & \left\langle x,u\right\rangle } , \] where $\left\langle x',u'\right\rangle $ is the ordered tuple $\left(x_{1}',\ldots,x_{m_{\rho}}',u_{1}',\ldots,u_{k_{1}}',x_{m_{\rho}+1}',\ldots,x_{N}',u_{k_{1}+1}',\ldots,u_{l}'\right)$, $\left\langle x,u\right\rangle $ is the ordered tuple $\left(x_{1},\ldots,x_{m},u_{1},\ldots,u_{k_{1}},x_{m+1},\ldots,x_{N},u_{k_{1}+1},\ldots,u_{l}\right)$, and $x=\rho\left(x'\right)$ , $u=u'$. The map $\hat{\rho}$ is an admissible transformation, which we call a \emph{trivial extension} of $\rho$.
Notice that $\rho$ respects $f\in\mathcal{A}_{\hat{m}+k_{1},\hat{n}+k_{2}}$ if and only if $f\circ\hat{\rho}\in\mathcal{A}_{m_{\rho}+k_{1},n_{\rho}+k_{2},\widehat{r_{\rho}}}$.$ $ \end{defn}
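To make the interleaving convention concrete, here is its simplest instance (a direct specialisation of the definition above, with no extra assumptions): if $k_{1}=0$ and $k_{2}=l$, then $\left\langle x',u'\right\rangle =\left(x',u'\right)$ and $\left\langle x,u\right\rangle =\left(x,u\right)$, so the trivial extension acts as
\[
\hat{\rho}\left(x',u'\right)=\left(\rho\left(x'\right),u'\right),
\]
i.e. $\hat{\rho}$ applies $\rho$ to the original $N$ variables and is the identity on the $l$ additional variables.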
In the next definition we will consider two sets of variables, the ``horizontal variables'', usually denoted by $x$, and the ``vertical variables'', usually denoted by $u$. We will define a special type of admissible transformations, the ``vertical admissible transformations'', which respect in some way the partition of the set of variables into horizontal and vertical. The aim is to obtain a class of admissible transformations $\rho:\left(x',u'\right)\mapsto\left(x,u\right)$ with the property that, given $f\in\mathcal{A}$, if we can solve explicitly the equation $f\circ\rho\left(x',u'\right)=0$ with respect to $u'$, then we can solve explicitly the equation $f\left(x,u\right)=0$ with respect to $u$. \begin{defn} \label{def:vertical transformations}Let $r=\left(s,t\right),\ r_{\rho}=\left(s_{\rho},t_{\rho}\right)\in\left(0,\infty\right)^{N}$$\times\left(0,\infty\right)^{l}$ be polyradii. Consider a map \[ \xyC{0mm}\xyL{0mm}\xymatrix{\rho\colon & \hat{I}_{r_{\rho}}\ar[rrrr] & \ & \ & \ & \hat{I}_{r}\\
& \langle x',u'\rangle\ar@{|->}[rrrr] & & & & \langle x,u\rangle } , \] where $\hat{I}_{r_{\rho}}$ is either $\hat{I}_{m_{\rho},n_{\rho}+l,r_{\rho}}$ (type 1) or $\hat{I}_{m_{\rho}+l,n_{\rho},r_{\rho}}$ (type 2) and $\hat{I}_{r}$ is either $\hat{I}_{m,n+l,r}$ (type 1) or $\hat{I}_{m+l,n,r}$ (type 2). If $\hat{I}_{r_{\rho}}$ is of type 1, then let $\left\langle x',u'\right\rangle $ be the ordered pair $\left(x',u'\right)$, whereas if $\hat{I}_{r_{\rho}}$ is of type 2, then let $\left\langle x',u'\right\rangle $ be the ordered pair $\left(x_{1}',\ldots,x_{m_{\rho}}',u_{1}',\ldots,u_{l}',x_{m_{\rho}+1}',\ldots,x_{N}'\right)$. If $\hat{I}_{r}$ is of type 1, then let $\left\langle x,u\right\rangle $ be the ordered pair $\left(x,u\right)$ and $\rho'=\left(\rho_{1},\ldots,\rho_{N}\right),$ $\rho''=\left(\rho_{N+1},\ldots,\rho_{N+l}\right)$, whereas if $\hat{I}_{r}$ is of type 2, then let $\left\langle x,u\right\rangle $ be the ordered pair $\left(x_{1},\ldots,x_{m},u_{1},\ldots,u_{l},x_{m+1},\ldots,x_{N}\right)$ and $\rho'=\left(\rho_{1},\ldots,\rho_{m},\rho_{m+l+1},\ldots,\rho_{N+l}\right),$ $\rho''=\left(\rho_{m+1},\ldots,\rho_{m+l}\right)$.
We say that $\rho=\left\langle \rho',\rho''\right\rangle $ is a \emph{vertical admissible transformation} (of type $\left(i,j\right)\in\left\{ 1,2\right\} ^{2}$) if the following conditions are satisfied: \begin{itemize} \item $\hat{I}_{r_{\rho}}$ is of type $i$ and $\hat{I}_{r}$ is of type $j$; \item $\rho$ is an admissible transformation; \item $\rho'$ does not depend on $u'$, hence we may write $x=\rho'\left(x'\right)$; \item there exists a closed sub-$\Lambda$-set $S_{\rho}\subseteq\hat{I}_{m_{\rho},n_{\rho},s_{\rho}}$ such that $\text{dim}\left(S_{\rho}\right)<N$ and $\rho'\restriction\hat{I}_{m_{\rho},n_{\rho},s_{\rho}}\setminus S_{\rho}$ is a diffeomorphism onto its image. Moreover, for every $x'\in\hat{I}_{m_{\rho},n_{\rho},s_{\rho}}\setminus S_{\rho}$, the map $u'\mapsto\rho''\left(\left\langle x',u'\right\rangle \right)$ is a diffeomorphism onto its image, and we denote by $\gamma_{\rho}$ the map $\left(x',u\right)\mapsto u'$. \end{itemize} Notice that $\rho\restriction\hat{I}_{r_{\rho}}\setminus S_{\rho}\times\mathbb{R}^{l}$ is a diffeomorphism onto its image.
{}
Let $D\subseteq\mathbb{R}^{N}$ and $\Phi:D\to\mathbb{R}^{l}$ be a map whose graph is a sub-$\Lambda$-set and let $D_{\rho}:=\left(\rho'\right)^{-1}\left(D\right)\setminus S_{\rho}$. Suppose that $\Phi\left(\rho'\left(D_{\rho}\right)\right)\subseteq\rho''\left(\hat{I}_{r_{\rho}}\right)$. We define the map $\Phi_{\rho}:D_{\rho}\to\mathbb{R}^{l}$ as follows: for every $x'\in D_{\rho},\ \Phi_{\rho}\left(x'\right):=\gamma_{\rho}\left(x',\Phi\circ\rho'\left(x'\right)\right)$. \end{defn}
Examples of admissible transformations which are not vertical are the blow-up chart $\left(x,u\right)\mapsto\left(xu,u\right)$ and the linear transformation $\left(x,u\right)\mapsto\left(x+cu,u\right)$, because in these cases the first component of the transformation depends on the vertical variable $u$. Another example is the blow-up chart $\left(x,u_{1},u_{2}\right)\mapsto\left(x,u_{1}u_{2},u_{2}\right)$, because the second component of the transformation is not a bijection.
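For a positive example (a sketch, under the assumption that the complementary blow-up chart is also among the elementary transformations of Definition \ref{Def: elementary transformations}, and leaving the choice of polyradii implicit), consider $\left(x,u\right)\mapsto\left(x,xu\right)$, with $x$ a single horizontal variable and $u$ a single vertical variable. Here $\rho'\left(x'\right)=x'$ does not depend on $u'$ and, for every $x'\neq0$, the map $u'\mapsto x'u'$ is a diffeomorphism onto its image, so this chart is vertical, with $S_{\rho}=\left\{ 0\right\} $ and $\gamma_{\rho}\left(x',u\right)=u/x'$. Compare also the third item of Remarks \ref{rem: evolution of phi} below, where trivial extensions of admissible transformations of the horizontal variables are used as vertical admissible transformations.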
Our aim in this section is to give a monomialisation algorithm which only uses vertical admissible transformations. We will do so at the expense of covering a proper subset of the domain of the functions. Such a subset will turn out to be the graph of a definable map. This motivates the next definition. \begin{defn} \label{def: property *}Let $N,l\in\mathbb{N}$ with $N\geq l$. Let $S\subseteq D\subseteq\mathbb{R}^{N}$ be sub-$\Lambda$-sets, with $\text{dim}\left(S\right)<N$. \begin{enumerate} \item Let $\mathcal{F}$ be a finite family of admissible transformations \[ \rho:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{m,n,r}. \] We say that $\mathcal{F}$ \emph{satisfies the covering property with respect to} $D$ if for every choice of polyradii $r_{\rho}'\leq r_{\rho}\ \left(\rho\in\mathcal{F}\right)\ $ there exists a polyradius $r^{*}\leq r$ such that \[ \hat{I}_{m,n,r^{*}}\cap D\cap\rho\left(\hat{I}_{m_{\rho},n_{\rho},r_{\rho}'}\right)\not=\emptyset\text{\ and\ }\hat{I}_{m,n,r^{*}}\cap D\subseteq\bigcup_{\rho\in\mathcal{F}}\rho\left(\hat{I}_{m_{\rho},n_{\rho},r_{\rho}'}\right). \]
\item Let $\Phi:D\to\mathbb{R}^{l}$ be a map whose graph is a sub-$\Lambda$-set and let $\mathcal{F}$ be a finite family of vertical admissible transformations $ $(of the same type) \[ \rho:\hat{I}_{r_{\rho}}\to\hat{I}_{r}. \]
We say that $\mathcal{F}$ \emph{satisfies the covering property with respect to} $\left(\Phi,S\right)$ if for every choice of polyradii $r_{\rho}'\leq r_{\rho}\ \left(\rho\in\mathcal{F}\right)\ $ there exists a polyradius $r^{*}\leq r$ such that \[ \hat{I}_{r^{*}}\cap\Gamma\left(\Phi\restriction D\setminus S\right)\cap\rho\left(\hat{I}_{r_{\rho}'}\right)\not=\emptyset\text{\ and\ }\hat{I}_{r^{*}}\cap\Gamma\left(\Phi\restriction D\setminus S\right)\subseteq\bigcup_{\rho\in\mathcal{F}}\rho\left(\hat{I}_{r_{\rho}'}\right). \]
\end{enumerate} \end{defn} The following remarks will be used several times throughout the rest of the paper. \begin{rems} \label{rem: evolution of phi}$\ $Let $D$ and $\Phi$ be as above. \begin{enumerate} \item Let $\mathcal{F}$ be a finite family of admissible transformations which satisfies the covering property with respect to $D$. Suppose that for every $\rho\in\mathcal{F}$ there is a finite family $\tilde{\mathcal{F}}_{\rho}$ of admissible transformations such that $\tilde{\mathcal{F}}_{\rho}$ satisfies the covering property with respect to $\rho^{-1}\left(D\right)$. Then $\mathcal{G}$ satisfies the covering property with respect to $D$, where $\mathcal{G}=\left\{ \rho\circ\tilde{\rho}:\ \rho\in\mathcal{F},\ \tilde{\rho}\in\tilde{\mathcal{F}}_{\rho}\right\} $. \item Let $\mathcal{F}$ be a finite family of admissible transformations which satisfies the covering property with respect to $D$. In the notation of Definition \ref{def: rho respects f}, let $\pi$ be the projection $\left\langle x,u\right\rangle \mapsto x$ and $\hat{D}\subseteq\mathbb{R}^{N+l}$ be a sub-$\Lambda$-set such that $\pi\left(\hat{D}\right)=D$. Then $ $\foreignlanguage{english}{$\hat{\mathcal{F}}$} satisfies the covering property with respect to $\hat{D}$, where $\hat{\mathcal{F}}$ is the family of all admissible transformations $\hat{\rho}$ obtained by extending trivially $\rho\in\mathcal{F}$. \item Let $\mathcal{F}$ be a finite family of admissible transformations and suppose that $\mathcal{F}$ satisfies the covering property with respect to $D$. Then $ $\foreignlanguage{english}{$\hat{\mathcal{F}}$} satisfies the covering property with respect to $\left(\Phi,\emptyset\right)$, where $\hat{\mathcal{F}}$ is a finite family of vertical admissible transformations (of the same type) obtained by extending trivially each $\rho\in\mathcal{F}$ to $\hat{\rho}:\hat{I}_{r_{\rho}}\to\hat{I}_{r}$, where $\hat{I}_{r_{\rho}},\ \hat{I}_{r}$ are of either of the two types. Conversely, if $\mathcal{F}$ is a family of vertical admissible transformations which satisfies the covering property with respect to $\left(\Phi,\emptyset\right)$, then the family $\mathcal{F}'=\left\{ \rho':\ \rho\in\mathcal{F}\right\} $ of admissible transformations satisfies the covering property with respect to $D$. \item Let $\mathcal{F}$ be a finite family of vertical admissible transformations (of the same type). Suppose that $S\supseteq\bigcup_{\rho\in\mathcal{F}}\rho\left(S_{\rho}\right)$ and that $\mathcal{F}$ satisfies the covering property with respect to $\left(\Phi,S\right)$. Suppose furthermore that for every $\rho\in\mathcal{F}$ there are a finite family $\tilde{\mathcal{F}}_{\rho}$ of vertical admissible transformations (of the same type) and a sub-$\Lambda$-set $S_{\rho}'\supseteq\bigcup_{\tilde{\rho}\in\tilde{\mathcal{F}}_{\rho}}\tilde{\rho}\left(S_{\tilde{\rho}}\right)$ of dimension strictly smaller than $N$ such that $\tilde{\mathcal{F}}_{\rho}$ satisfies the covering property with respect to $\left(\Phi_{\rho},S_{\rho}'\right)$. Then $\mathcal{G}$ satisfies the covering property with respect to $\left(\Phi,S'\right)$, where $\mathcal{G}=\left\{ \rho\circ\tilde{\rho}:\ \rho\in\mathcal{F},\ \tilde{\rho}\in\tilde{\mathcal{F}}_{\rho}\right\} $ and $S'=S\cup\bigcup_{\rho\in\mathcal{F}}\rho\left(S_{\rho}'\right)$. \end{enumerate} \end{rems} The above definitions and remarks allow us to revisit the statement of Theorem \ref{thm: geom monomial}. 
\begin{rem} \label{rem: monomialise respecting f}Let $f\in\mathcal{A}_{\hat{m}+k_{1},\hat{n}+k_{2}}$ and $g\in\mathcal{A}_{\check{m},\check{n}}$. Define $m:=\text{max}\left\{ \hat{m},\check{m}\right\} $ and $n:=N-m$. Then we can apply Theorem \ref{thm: geom monomial} to $g$, seen as an element of $\mathcal{A}_{m,n}$ and obtain that the admissible transformations in the statement respect $f$. Moreover, using Remark \ref{rem: elemntary transf send quadrants to sectors}, the conclusion of Theorem \ref{thm: geom monomial} can be strengthened by saying that $\mathcal{F}$ satisfies the covering property with respect to $\hat{I}_{m,n,r'}$. \end{rem} The purpose of statements \textbf{(B)} and \textbf{(C) }in the next theorem is to solve a given system of equations with respect to the vertical variables. Statement \textbf{(A)} implies directly Theorem \ref{thm: monomialis of def functions}. \begin{notation} Let $\Phi=\left(\varphi_{1},\ldots,\varphi_{l}\right):D\to\mathbb{R}^{l}$ be a map whose graph is a sub-$\Lambda$-set. We denote by $j\left(\Phi\right)$ the cardinality of the set $\left\{ i:\ 1\leq i\leq l,\ \varphi_{i}\text{\ is\ not\ identically\ }0\right\} $. Up to a permutation, we may always assume that the first $j\left(\Phi\right)$ coordinates of $\Phi$ are not identically zero. We set $\hat{\Phi}=\emptyset$ if $j\left(\Phi\right)=0$ and $\hat{\Phi}=\left(\varphi_{1},\ldots,\varphi_{j\left(\Phi\right)}\right)$ if $j\left(\Phi\right)>0$. For $F\left(x,u\right)\in\mathcal{A}_{\check{m},\check{n}+l}$, we let $F_{0}\left(x,u_{1},\ldots,u_{j\left(\Phi\right)}\right):=F\left(x,u_{1},\ldots,u_{j\left(\Phi\right)},0\right)\in\mathcal{A}_{\check{m},\check{n}+j\left(\Phi\right)}$.\end{notation} \begin{thm} \label{thm: ABC}Let $N,l,\hat{m},\hat{n},k_{1},k_{2}\in\mathbb{N}$ with $l\leq N=\hat{m}+\hat{n}$ and let $f\in\mathcal{A}_{\hat{m}+k_{1},\hat{n}+k_{2}}$. $ $Let $D\subseteq[-1,1]{}^{N}$ be a sub-$\Lambda$-set and suppose that for every sufficiently small polyradius $r'$ in $\mathbb{R}^{N}$, the intersection $\hat{I}_{\hat{m},\hat{n},r'}\cap D$ is not empty.
\noindent \textbf{(A)$_{N}$} Let $\eta:D\to\mathbb{R}$ be a function whose graph is a sub-$\Lambda$-set. Then there exists a finite family $\mathcal{F}$ of admissible transformations \[ \rho:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{m,n,r} \] such that \emph{$\mathcal{F}$ }satisfies the covering property with respect to\emph{ $D$ }and for every $\rho\in\mathcal{F}$, \begin{itemize} \item $m=\hat{m}$, hence $\rho$ respects $f$; \item $\eta\circ\rho\in\mathcal{A}_{m_{\rho},n_{\rho},r_{\rho}}$ and is normal. \end{itemize} \noindent \textbf{(B)$_{N,l}$} $ $Let $\check{m},\check{n}\in\mathbb{N}$ with $\check{m}+\check{n}=N$ and let $F\left(x,u\right)\in\mathcal{A}_{\check{m},\check{n}+l}$. Let $\text{dim}\left(D\right)=N$ and $\Phi:D\to\mathbb{R}^{l}$ be a map whose graph is a sub-$\Lambda$-set.
Then there exist a finite family $\mathcal{F}$ of vertical admissible transformations \[ \rho:\hat{I}_{m_{\rho},n_{\rho}+j\left(\Phi\right),r_{\rho}}\to\hat{I}_{m,n+j\left(\Phi\right),r} \]
and $ $ a sub-$\Lambda$-set $S\subseteq D$ of dimension strictly smaller than $N$ such that $\mathcal{F}$ satisfies the covering property with respect to $\left(\hat{\Phi},S\right)$ and for every $\rho\in\mathcal{F}$, \begin{itemize} \item $m=\text{max}\left\{ \hat{m},\check{m}\right\} $, hence $\rho'$ respects $f$ and $\rho$ respects $F$; \item either $j\left(\hat{\Phi}_{\rho}\right)<j\left(\Phi\right)$ or $F_{0}\circ\rho$ is normal. \end{itemize} \textbf{(C)$_{N,l}$} $ $Let $\check{m},\check{n}\in\mathbb{N}$ with $\check{m}+\check{n}=N$ and let $f_{1}\left(x,u\right),\ldots,f_{N}\left(x,u\right)\in\mathcal{A}_{\check{m},\check{n}+N}$. Let $\text{dim}\left(D\right)=N$ and $\Phi:D\to\mathbb{R}^{N}$ be a map whose graph is a sub-$\Lambda$-set.
Suppose that $j\left(\Phi\right)=l$ and that \[ \forall\left(x,u\right)\in D\times\mathbb{R}^{N}\ \ \ \ \ \begin{cases} f_{1}\left(x,u\right)=0\\ \vdots\\ f_{N}\left(x,u\right)=0 \end{cases}\Leftrightarrow\ \ u=\Phi\left(x\right). \] Then there exist a finite family $\mathcal{F}$ of vertical admissible transformations \[ \rho:\hat{I}_{m_{\rho},n_{\rho}+N,r_{\rho}}\to\hat{I}_{m,n+N,r} \]
and $ $ a sub-$\Lambda$-set $S\subseteq D$ of dimension strictly smaller than $N$ such that $\mathcal{F}$ satisfies the covering property with respect to $\left(\Phi,S\right)$ and for every $\rho\in\mathcal{F}$, \begin{itemize} \item $m=\text{max}\left\{ \hat{m},\check{m}\right\} $, hence $\rho'$ respects $f$ and $\rho$ respects $f_{1},\ldots,f_{N}$; \item $\forall\left(x,u\right)\in D_{\rho}\setminus\left(\rho'\right)^{-1}\left(S\right)\times\mathbb{R}^{N}\ \ \ \ \ \begin{cases} f_{1}\circ\rho\left(x,u\right)=0\\ \vdots\\ f_{N}\circ\rho\left(x,u\right)=0 \end{cases}\Leftrightarrow\ \ u=0$. \end{itemize} \end{thm} Before proving Theorem \ref{thm: ABC}, we show how it implies the Rectilinearisation Theorem. We prove the following stronger statement. \begin{prop} \label{prop: strong rectilin}Let $m,n,k_{1},k_{2}\in\mathbb{N}$, with $N=m+n$, and $f\in\mathcal{A}_{m+k_{1},n+k_{2}}$. Let $D\subseteq\mathbb{R}^{N}$ be a sub-$\Lambda$-set. Then there exist a neighbourhood $W$ of 0 in $\mathbb{R}^{N}$ and a finite family $\mathcal{F}=\left\{ \left(\rho_{i},Q_{i}\right):\ i=1,\ldots,M\right\} $, where $\rho_{i}:\hat{I}_{m_{i},n_{i},r{}_{i}}\to\hat{I}_{m,n,r}$ is an admissible transformation (which respects $f$) and $Q_{i}\subseteq\overline{Q_{i}}\subseteq\hat{I}_{m_{i},n_{i},r_{i}}$ is a sub-quadrant, such that $\rho_{i}\restriction Q_{i}:\ Q_{i}\to\rho_{i}\left(Q_{i}\right)$ is a diffeomorphism and \[ W\cap\hat{I}_{m,n,r}\cap D=\bigcup_{i=1}^{M}\rho_{i}\left(Q_{i}\right). \] \end{prop} \begin{proof} Notice that, by Proposition \ref{prop: param basic sets} and Remark \ref{rem: monomialise respecting f}, the proposition has already been proved whenever $D$ is an $\mathcal{A}$-basic set. We aim to show that we can reduce to this situation.
The proof is by induction on the pairs $\left(N,d\right)$, where $d=\text{dim}\left(D\right)$, ordered lexicographically. The basic cases $\left(0,0\right)$ and $\left(N,0\right)$ are straightforward. By a cell decomposition argument, we may assume that $D$ is a cell and, without loss of generality, that \[ D=\left\{ \left(x,y\right):\ x\in C,\ \theta\left(x,y\right)\right\} , \] where, either \begin{itemize} \item (Case 1) $d=N,\ x=\left(x_{1},\ldots,x_{N-1}\right),\ y=x_{N}$, $C\subseteq\mathbb{R}^{N-1}$ is a sub-$\Lambda$-set and a cell of dimension $N-1$ and $\theta\left(x,y\right)$ is \foreignlanguage{english}{$y>\varphi_{0}\left(x\right)$,} where the graph of the function $\varphi_{0}:C\to\mathbb{R}$ is a sub-$\Lambda$-set, or \item (Case 2) $d<N,\ x=\left(x_{1},\ldots,x_{d}\right),\ y=\left(y_{1},\ldots,y_{N-d}\right)=\left(x_{d+1},\ldots,x_{N}\right)$, $C\subseteq\mathbb{R}^{d}$ is a sub-$\Lambda$-set and a cell of dimension $d$ and $\theta\left(x,y\right)$ is $\bigwedge_{i=1}^{N-d}y_{i}=\varphi_{i}\left(x\right)$, where the graphs of the functions $\varphi_{i}:C\to\mathbb{R}$ are sub-$\Lambda$-sets. \end{itemize} We will treat the two cases simultaneously. Notice that in both cases $c:=\text{dim}\left(C\right)\leq N-1$.
We first show that we can assume that $\varphi_{i}\in\mathcal{A}$. In fact, by the statement \textbf{(A)}$_{c}$, there exists a finite family $\mathcal{F}$ of admissible transformations $\rho:\hat{I}_{l_{1}',l_{2}',r'}\to\hat{I}_{l_{1},l_{2},r}$, where $l_{1}+l_{2}=l_{1}'+l_{2}'=c$, such that \[ \hat{I}_{l_{1},l_{2},r}\cap C\subseteq\bigcup_{\rho\in\mathcal{F}}\rho\left(\hat{I}_{l_{1}',l_{2}',r'}\right) \] and for all $\rho\in\mathcal{F}$ we have that $\rho$ respects $f$ and $\varphi_{i}\circ\rho\in\mathcal{A}_{l_{1}',l_{2}'}$. Arguing by induction on the length of $\rho$ it is easy to see that there exists a closed sub-$\Lambda$-set $S_{\rho}\subseteq\mathbb{R}^{c}$ of dimension $<c$ such that $\rho\restriction\hat{I}_{l_{1}',l_{2}',r'}\setminus S_{\rho}$ is a diffeomorphism onto its image. Let $B_{\rho}=\rho^{-1}\left(\hat{I}_{l_{1},l_{2},r}\cap C\right),\ B_{\rho}'=B_{\rho}\setminus S_{\rho}$ and $B_{\rho}''=B_{\rho}\cap S_{\rho}$. Let $\hat{\rho}:\hat{I}_{m',n',\hat{r'}}\to\hat{I}_{m,n,\hat{r}}$ be the trivial extension of $\rho$ and $\hat{\mathcal{F}}=\left\{ \hat{\rho}:\ \rho\in\mathcal{F}\right\} $. Let \begin{align*} D_{\rho}'= & \left\{ \left(x,y\right):\ x\in B_{\rho}',\ \theta_{\rho}\left(x,y\right)\right\} ,\\ D_{\rho}''= & \left\{ \left(x,y\right):\ x\in B_{\rho}'',\ \theta_{\rho}\left(x,y\right)\right\} , \end{align*} where $\theta_{\rho}\left(x,y\right)$ is $y>\varphi_{0}\circ\rho\left(x\right)$ in case 1 and $\bigwedge_{i=1}^{N-d}y_{i}=\varphi_{i}\circ\rho\left(x\right)$ in case 2. Notice that there exists a neighbourhood $W\subseteq\mathbb{R}^{N}$ of $0$ such that \[ \bigcup_{\hat{\rho}\in\hat{\mathcal{F}}}\hat{\rho}\left(D_{\rho}'\right)\cup\hat{\rho}\left(D_{\rho}''\right)=W\cap\hat{I}_{m,n,\hat{r}}\cap D. \] In both cases $\text{dim}\left(\hat{\rho}\left(D_{\rho}''\right)\right)\leq\text{dim}\left(D_{\rho}''\right)<d$, so by the inductive hypothesis on the dimension $d$ of $D$, and by the fact that $\hat{\rho}\restriction D_{\rho}'$ is a diffeomorphism onto its image, we have reduced to the situation where the $\varphi_{i}$ are in $\mathcal{A}$.
Next we show that we can reduce to the case when $C$ is a sub-quadrant (and hence $D$ is an $\mathcal{A}$-basic set). In order to see this, since $c<N$, by the inductive hypothesis on the dimension $N$ of the ambient space, we can apply the proposition to $C$. Hence there are a neighbourhood $\tilde{W}\subseteq\mathbb{R}^{c}$ of $0$ and a finite family $\mathcal{F}$ of admissible transformations $\rho:\hat{I}_{l_{1}',l_{2}',r'}\to\hat{I}_{l_{1},l_{2},r}$ (respecting $f,\varphi_{0},\ldots,\varphi_{N-d}$), where $l_{1}+l_{2}=l_{1}'+l_{2}'=c$, and sub-quadrants $Q\subseteq\overline{Q}\subseteq\hat{I}_{l_{1}',l_{2}',r'}$ such that $\rho\restriction Q$ is a diffeomorphism onto its image and $\tilde{W}\cap\hat{I}_{l_{1},l_{2},r}\cap C={\displaystyle \bigcup_{\left(\rho,Q\right)\in\mathcal{F}}\rho\left(Q\right)}$. Let $\hat{\rho}:\hat{I}_{m',n',\hat{r'}}\to\hat{I}_{m,n,\hat{r}}$ be the trivial extension of $\rho$ and $\hat{\mathcal{F}}=\left\{ \hat{\rho}:\ \rho\in\mathcal{F}\right\} $. Let \begin{align*} D_{\rho}= & \left\{ \left(x,y\right):\ x\in Q,\ \theta_{\rho}\left(x,y\right)\right\} , \end{align*} where $\theta_{\rho}\left(x,y\right)$ is $y>\varphi_{0}\circ\rho\left(x\right)$ in case 1 and $\bigwedge_{i=1}^{N-d}y_{i}=\varphi_{i}\circ\rho\left(x\right)$ in case 2. Notice that there exists a neighbourhood $W\subseteq\mathbb{R}^{N}$ of $0$ such that \[ \bigcup_{\hat{\rho}\in\hat{\mathcal{F}}}\hat{\rho}\left(D_{\rho}\right)=W\cap\hat{I}_{m,n,\hat{r}}\cap D, \] and since $\hat{\rho}\restriction D_{\rho}$ is a diffeomorphism onto its image, we have reduced to the situation where $D$ is an $\mathcal{A}$-basic set, and we are done. \end{proof}
{}
The rest of the section is devoted to the proof of Theorem \ref{thm: ABC}, which is by induction on the pairs $\left(N,l\right)$, ordered lexicographically.
{}
If $N=0$, then statements \textbf{(A)}, \textbf{(B)} and \textbf{(C)} are trivial.
For\textbf{ }any $N$, we have that \textbf{(B)}$_{N,0}$ follows from Theorem \ref{thm: geom monomial} and Remark \ref{rem: monomialise respecting f} and \textbf{(C)}$_{N,0}$ is trivially true. Hence let us assume that $N,l\geq1$. \begin{lem} \label{lem: weak monomialisation}Let $N\geq1$ and suppose that \textbf{(A)}$_{N-1}$ holds. Let $l,\hat{m},\hat{n},k_{1},k_{2}\in\mathbb{N}$ with $l\leq N=\hat{m}+\hat{n}$ and let $f\in\mathcal{A}_{\hat{m}+k_{1},\hat{n}+k_{2}}$. Let $D\subseteq[-1,1]{}^{N}$ be a sub-$\Lambda$-set and suppose that for every sufficiently small polyradius $r'$ in $\mathbb{R}^{N}$, the intersection $\hat{I}_{\hat{m},\hat{n},r'}\cap D$ is not empty. Let $\text{dim}\left(D\right)=N$ and $\Phi:D\to\mathbb{R}^{l}$ be a map whose graph is a sub-$\Lambda$-set such that $j\left(\Phi\right)=l$. Then there are constants $K_{1},K_{2}\in\mathbb{R}^{>0}$ and a finite family $\mathcal{F}$ of admissible transformations \[ \rho:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{m,n,r} \] such that \emph{$\mathcal{F}$ }satisfies the covering property with respect to\emph{ $D$ }and for every $\rho\in\mathcal{F}$, \begin{itemize} \item $m=\hat{m}$, hence $\rho$ respects $f$; \item for every $i=1,\ldots,l$ there are exponents $\alpha_{i}\in\mathbb{A}^{n}$ and functions $U_{i}:D\to\mathbb{R}$ (whose graph is a sub-$\Lambda$-set)
with $K_{1}\leq|U_{i}|\leq K_{2}$ such that $\varphi_{i}\circ\rho\left(x\right)=x^{\alpha_{i}}U_{i}\left(x\right)$. \end{itemize} \end{lem} \begin{rem} \label{rem: weakly normal}Notice that we do not require at this stage that $U_{i}$ be an element of $\mathcal{A}$, hence we say that $\varphi_{i}\circ\rho$ is \emph{weakly normal}.\end{rem} \begin{proof} Let us first consider the case $N=1$. Notice that, by polynomial boundedness, for all $x\in D\cap\mathbb{R}^{\geq0}$ we have $\varphi_{i}\left(x\right)=x^{\alpha_{i}/\beta_{i}}U_{i}\left(x\right)$, for some $U_{i}$ as in the statement of the lemma and $\alpha_{i},\beta_{i}\in\mathbb{A}$. Let $\beta=\beta_{1}\cdot\ldots\cdot\beta_{l}$. If $\hat{m}=1$ then $\mathcal{F}=\left\{ \sigma_{1}^{+}\circ r_{1}^{\beta}\right\} $ satisfies the conclusion of the lemma. If $\hat{n}=1$ then $\mathcal{F}=\left\{ \sigma_{1}^{+}\circ r_{1}^{\beta},\sigma_{1}^{-}\circ r_{1}^{\beta}\right\} $ satisfies the conclusion of the lemma.
Let $N>1$. Let $\hat{x}=\left(x_{1},\ldots,x_{N-1}\right),\ y=x_{N}$ and $\hat{D}$ be the projection of $D$ onto the coordinate space spanned by $\hat{x}$. By cell decomposition and by \cite[Thm. 2.1]{vdd:speiss:preparation_theorems}, we may assume that there are $\beta_{1},\ldots,\beta_{l}\in\mathbb{K}$, $\mathbb{R}_{\mathcal{A}}$-definable functions $a_{0},a_{1},\ldots,a_{l}:\hat{D}\to\mathbb{R}$ and $U_{1},\ldots,U_{l}:D\to\mathbb{R}$ such that $\forall x=\left(\hat{x},y\right)\in D\ y>a_{0}\left(\hat{x}\right)$ and $\forall i=1,\ldots,l\ \ $ \[
\forall x=\left(\hat{x},y\right)\in D\ \ \varphi_{i}\left(\hat{x},y\right)=\left(y-a_{0}\left(\hat{x}\right)\right)^{\beta_{i}}a_{i}\left(\hat{x}\right)U_{i}\left(\hat{x},y\right)\text{\ and\ }\frac{1}{2}<|U_{i}|<\frac{3}{2}. \]
By a further cell decomposition we may assume that all the units $U_{i}$ are positive on $D$ and that for all $i=0,1,\ldots,l$ either $\forall\hat{x}\in\hat{D}\ a_{i}\left(\hat{x}\right)\leq1$ or $\forall\hat{x}\in\hat{D}\ a_{i}\left(\hat{x}\right)>1$. For $i=0,\ldots,l$ let \[ \hat{a}_{i}\left(\hat{x}\right):=\begin{cases} a_{i}\left(\hat{x}\right) & \text{if\ }a_{i}\leq1\text{\ on }\hat{D}\\ 1/a_{i}\left(\hat{x}\right) & \text{if\ }a_{i}>1\text{\ on }\hat{D} \end{cases} \] and let $\hat{a}\left(\hat{x}\right)=\prod_{i=0}^{l}\hat{a}_{i}\left(\hat{x}\right)$. By Remark \ref{rem: bounded sets are sub-lambda} the graph of $\hat{a}$ is a sub-$\Lambda$-set. Hence, by \textbf{(A)}$_{N-1}$, there is a finite family $\mathcal{F}$ of admissible transformations such that every $\rho\in\mathcal{F}$ extends trivially to $\hat{\rho}:\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\to\hat{I}_{\hat{m},\hat{n},r}$ (hence $\hat{\rho}$ respects $f$), $\hat{\mathcal{F}}:=\left\{ \hat{\rho}:\ \rho\in\mathcal{F}\right\} $ satisfies the covering property with respect to $D$ and for all $\hat{\rho}\in\hat{\mathcal{F}},$ for all $i=0,\ldots,l$ we have $\hat{a}_{i}\circ\hat{\rho}\in\mathcal{A}_{m_{\rho},n_{\rho},r_{\rho}}$. Let \[ g_{\rho}\left(\hat{x},y\right)=\begin{cases} y-\hat{a}_{0}\circ\rho\left(\hat{x}\right) & \text{if\ }a_{0}\leq1\text{\ on }\hat{D}\\ y\cdot\hat{a}_{0}\circ\rho\left(\hat{x}\right)-1 & \text{if\ }a_{0}>1\text{\ on }\hat{D} \end{cases} \] and $h_{\rho}\left(\hat{x},y\right):=g_{\rho}\left(\hat{x},y\right)\cdot\prod_{i=0}^{l}\hat{a}_{i}\circ\rho\left(\hat{x}\right)\in\mathcal{A}_{m_{\rho},n_{\rho},r_{\rho}}.$ By Theorem \ref{thm: geom monomial} and Remark \ref{rem: monomialise respecting f}, there is a finite family $\tilde{\mathcal{F}}_{\rho}$ of admissible transformations such that $\tilde{\mathcal{F}}_{\rho}$ satisfies the covering property with respect to $\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}$ and for every $\tilde{\rho}:\hat{I}_{m_{\tilde{\rho}},n_{\tilde{\rho}},r_{\tilde{\rho}}}\to\hat{I}_{m_{\rho},n_{\rho},r_{\rho}}\in\tilde{\mathcal{F}}_{\rho}$ we have that $\tilde{\rho}$ respects $f\circ\hat{\rho}$ and finally there are $\gamma_{0},\ldots,\gamma_{l+1}\in\mathbb{A}^{N}$ and units $v_{0}\left(x\right),\ldots,v_{l+1}\left(x\right)\in\mathcal{A}_{m_{\tilde{\rho}},n_{\tilde{\rho}},r_{\tilde{\rho}}}$ such that \[ \hat{a}_{i}\circ\rho\circ\tilde{\rho}\left(x\right)=x^{\gamma_{i}}v_{i}\left(x\right)\text{\ for\ }i=0,\ldots,l\text{,\ and\ }g_{\rho}\circ\tilde{\rho}\left(x\right)=x^{\gamma_{l+1}}v_{l+1}\left(x\right). \] After a suitable sequence $\sigma$ of sign-changing transformations as in \ref{Proviso: admissible transf}, we may assume that $x\in\left(\mathbb{R}^{\geq0}\right)^{N}$, hence, \[ \varphi_{i}\circ\hat{\rho}\circ\tilde{\rho}\circ\sigma\left(x\right)=\begin{cases} x^{\gamma_{l+1}\beta_{i}+\gamma_{i}}U_{i}\left(x\right) & \text{if\ }a_{0}\leq1\text{\ and\ }a_{i}\leq1\text{\ on }\hat{D}\\ x^{\gamma_{l+1}\beta_{i}-\gamma_{i}}U_{i}\left(x\right) & \text{if\ }a_{0}\leq1\text{\ and\ }a_{i}>1\text{\ on }\hat{D}\\ x^{-\gamma_{0}+\gamma_{l+1}\beta_{i}+\gamma_{i}}U_{i}\left(x\right) & \text{if\ }a_{0}>1\text{\ and\ }a_{i}\leq1\text{\ on }\hat{D}\\ x^{-\gamma_{0}+\gamma_{l+1}\beta_{i}-\gamma_{i}}U_{i}\left(x\right) & \text{if\ }a_{0}>1\text{\ and\ }a_{i}>1\text{\ on }\hat{D} \end{cases}, \] for some $U_{i}$ as in the statement of the lemma. Notice that all the multi-exponents in the expression above belong to $\left(\mathbb{K}^{\geq0}\right)^{N}$, because $\Phi$ is bounded. 
Hence, after an appropriate series of ramifications, the above multi-exponents belong to $\mathbb{A}^{N}$ and we can conclude by the first remark in \ref{rem: evolution of phi}. \end{proof}
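To make the one-variable step of the above proof concrete (assuming, for the purposes of this illustration only, that the ramification $r_{1}^{\beta}$ acts as $x\mapsto x^{\beta}$ on the relevant quadrant), note that for a single function $\varphi\left(x\right)=x^{\alpha/\beta}U\left(x\right)$ one gets
\[
\varphi\circ r_{1}^{\beta}\left(x\right)=x^{\alpha}\,U\left(x^{\beta}\right),
\]
which is already weakly normal; taking $\beta=\beta_{1}\cdot\ldots\cdot\beta_{l}$ turns each exponent $\alpha_{i}/\beta_{i}$ into $\alpha_{i}\cdot\prod_{j\neq i}\beta_{j}$, so that all the functions $\varphi_{1},\ldots,\varphi_{l}$ are handled by the same ramification.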
\subsection{(A)$_{N-1}$ and (B)$_{N,l'}$ $\left(\forall l'\leq l-1\right)$ imply (B)$_{N,l}$}
Let $\hat{u}=\left(u_{1},\ldots,u_{j\left(\Phi\right)-1}\right)$ and $v=u_{j\left(\Phi\right)}$. Let $\Phi'=\left(\varphi_{1},\ldots,\varphi_{j\left(\Phi\right)-1}\right)$.
We first show the following reduction. \begin{void} \label{vuoto: reduce to F regular in v}We can assume that $F_{0}\left(x,\hat{u},v\right)$ is regular of some order $d>0$ in the variable $v$, after factoring out a monomial in the variables $x,\hat{u}$. \end{void} It is enough to show that the series $\mathcal{T}\left(F_{0}\right)\left(X,\hat{U},V\right)$ is regular in the variable $V$. Clearly, we cannot simply perform a linear transformation in the variables $x_{\check{m}+1},\ldots,x_{N},\hat{u}$ as we did in Section \ref{sub:Monomialisation-of-generalised}, because that would not be a vertical transformation. We will follow a different approach, inspired by the methods in commutative algebra to prove that the ring of formal power series is Noetherian. Clearly the ring of generalised power series is not Noetherian. However, Lemma \ref{lem:noether} below gives us a finiteness result which is enough for our purposes. \begin{notation} Let $m,n,m',n',l\in\mathbb{N}$, with $m+n=m'+n'=N$. Let $Z=\left(Z_{1},\ldots,Z_{m}\right),\ Y=\left(Y_{1},\ldots,Y_{n}\right),\ U=\left(U_{1},\ldots,U_{l}\right)$ and $Z'=\left(Z_{1}',\ldots,Z_{m'}'\right),\ Y'=\left(Y_{1}',\ldots,Y_{n'}'\right),\ U'=\left(U_{1}',\ldots,U_{l}'\right)$. If $V\in\left\{ Y_{1},\ldots,Y_{n},U_{1},\ldots,U_{l}\right\} ,$ we denote by $\widehat{\left(Y,U\right)}$ the set $\left\{ Y_{1},\ldots,Y_{n},U_{1},\ldots,U_{l}\right\} \setminus\left\{ V\right\} $.\end{notation} \begin{void} [Formal Weierstrass division]\label{emp:weierstrass}Let $F,G\in\mathbb{R}\left\llbracket Z^{*},Y,U\right\rrbracket $ and suppose that $G$ is \emph{regular of order $d\in\mathbb{N}$ in the variable $V$}, i.e. $G\left(0,V\right)=V^{d}W\left(V\right)$, where $W\left(V\right)\in\mathbb{R}\left\llbracket V\right\rrbracket $ and $W\left(0\right)\not=0$. Then, by \cite[4.17]{vdd:speiss:gen}, there exist $Q\in\mathbb{R}\left\llbracket Z^{*},Y,U\right\rrbracket $ and $R=\sum_{i=0}^{d-1}B_{i}\left(Z,\widehat{\left(Y,U\right)}\right)V^{i}\in\mathbb{R}\left\llbracket Z^{*},\widehat{\left(Y,U\right)}\right\rrbracket \left[V\right]$ such that $F=G\cdot Q+R$. A careful analysis of the proof of \cite[4.17]{vdd:speiss:gen} shows that $\mathrm{Supp}_{Z}\left(Q\right)$ and $\mathrm{Supp}_{Z}\left(R\right)$ are contained in the good set $\Sigma\left(\mathrm{Supp}_{Z}\left(F\right)\cup\mathrm{Supp}_{Z}\left(G\right)\right)$.\end{void} \begin{lem} \label{lem:noether} Let $\mathcal{G}=\left\{ F_{i}=\left(F_{i,0},\ldots,F_{i,d-1}\right):\ i\in\mathbb{N}\right\} \subset\left(\mathbb{R}\left\llbracket Z^{*},Y,U\right\rrbracket \right)^{d}$ be a family with good total support. Then there exists an admissible tree $T$ such that, for every branch $\mathfrak{b}$ of $T$, acting as $T{}_{\mathfrak{b}}:\mathbb{R}\left\llbracket Z{}^{*},Y,U\right\rrbracket \to\mathbb{R}\left\llbracket Z'^{*},Y',U'\right\rrbracket $, the $\mathbb{R}\left\llbracket Z'^{*},Y',U'\right\rrbracket $-module generated by the set \[ \left\{ \left(T_{\mathfrak{b}}\left(F_{i,0}\right),\ldots,T_{\mathfrak{b}}\left(F_{i,d-1}\right)\right):\ i\in\mathbb{N}\right\} \]
is finitely generated. Moreover, $T_{\mathfrak{b}}$ induces an admissible transformation which is vertical (with respect to the variables $U$). \end{lem} In particular, for some $p\in\mathbb{N}$, $\left\{ \left(T_{\mathfrak{b}}\left(F_{i,0}\right),\ldots,T_{\mathfrak{b}}\left(F_{i,d-1}\right)\right):\ i=0,\ldots,p\right\} $ is a set of generators. \begin{proof} The proof is by induction on the pairs $\left(N+l,d\right)$, ordered lexicographically.
Let us first examine the case $d=1$. If $N+l=1,\ $ then there exist $\alpha\in[0,\infty)$ and $i_{0}\in\mathbb{N}$ such that $\forall i\in\mathbb{N}\ F_{i}=Z'^{\alpha}G_{i}$ and $G_{i_{0}}\left(0\right)\not=0$. Therefore the ideal generated by the set $\left\{ F_{i}:\ i\in\mathbb{N}\right\} $ is principal, generated by $F_{i_{0}}$.
Let us suppose $N+l>1$. By Lemma \ref{lem:singular blow ups}, there exists an admissible tree $T$ (acting as the identity on the variables $U$) such that, for every branch $\mathfrak{b}$ of $T$, acting as $T{}_{\mathfrak{b}}:\mathbb{R}\left\llbracket Z{}^{*},Y,U\right\rrbracket \to\mathbb{R}\left\llbracket Z'^{*},Y',U'\right\rrbracket $, there exist $\alpha\in[0,\infty)^{m}$ and $G_{i}\in\mathbb{R}\left\llbracket Z'^{*},Y',U'\right\rrbracket $ such that $T_{\mathfrak{b}}\left(F_{i}\right)=Z'^{\alpha}G_{i}$ and for some $i_{0}\in\mathbb{N}$, $G_{i_{0}}\left(0,Y',U'\right)\not\equiv0$.
If $Y'=\emptyset$, then there is a linear change of coordinates $L_{l,c}$ in the $U'$-variables such that $G_{i_{0}}$ is regular of order $d'\in\mathbb{N}$ in the variable $U_{l}'$.
If $Y'\not=\emptyset$, then there is a linear change of coordinates $L_{n',c}$ in the $\left(Y',U'\right)$-variables such that $G_{i_{0}}$ is regular of order $d'\in\mathbb{N}$ in the variable $Y_{n'}'$. Let $V$ be either $U_{l}'$ (in the first case) or $Y_{n'}'$ (in the second case) and let $L_{c}$ be either $L_{l,c}$ (in the first case) or $L_{n',c}$ (in the second case), and let us rename $Z'=Z$, $Y'=Y$ and $U'=U$. By \ref{emp:weierstrass}, there are $Q_{i}\in\mathbb{R}\left\llbracket Z{}^{*},Y,U\right\rrbracket $ and $B_{i,0},\ldots,B_{i,d'-1}\in\mathbb{R}\left\llbracket Z{}^{*},\widehat{\left(Y,U\right)}\right\rrbracket $ such that \[ L_{c}\left(G_{i}\right)=L_{c}\left(G_{i_{0}}\right)\cdot Q_{i}+R_{i}, \]
where $R_{i}=\sum_{j=0}^{d'-1}B_{i,j}\left(Z,\widehat{\left(Y,U\right)}\right)V{}^{j}.$
By Remark \ref{rem:tree preserves good families}, the family $\left\{ L_{c}\left(G_{i}\right):\ i\in\mathbb{N}\right\} $ has good total support, and hence, by \ref{emp:weierstrass}, so does the family $\mathcal{B}=\left\{ B_{i}=\left(B_{i,0},\ldots,B_{i,d'-1}\right):\ i\in\mathbb{N}\right\} \subset\left(\mathbb{R}\left\llbracket Z{}^{*},\widehat{\left(Y,U\right)}\right\rrbracket \right)^{d'}$.
By the inductive hypothesis, there is an admissible tree $T'$ (acting as the identity on the variable $V$) such that, for every branch $\mathfrak{b'}$ of $T'$, we have $T'_{\mathfrak{b'}}:\mathbb{R}\left\llbracket Z{}^{*},Y,U\right\rrbracket \to\mathbb{R}\left\llbracket Z'^{*},Y',U'\right\rrbracket $ , the $\mathbb{R}\left\llbracket Z'^{*},\widehat{\left(Y',U'\right)}\right\rrbracket $-module generated by the set $T'_{\mathfrak{b'}}\left(\mathcal{B}\right)$ is finitely generated and $T_{\mathfrak{b'}}'$ acts vertically with respect to the variables $U$. Let us again rename $Z'=Z$, $Y'=Y$ and $U'=U$. We may suppose that, for some $k\in\mathbb{N}$, the generators are $T'_{\mathfrak{b'}}\left(B_{0}\right),\ldots,T'_{\mathfrak{b'}}\left(B_{k}\right)$. Hence, $\forall i\in\mathbb{N}$, there exist series $C_{i,0},\ldots,C_{i,k}\in\mathbb{R}\left\llbracket Z{}^{*},\widehat{\left(Y,U\right)}\right\rrbracket $ such that $T'_{\mathfrak{b'}}\left(B_{i}\right)=\sum_{s=0}^{k}C_{i,s}T'_{\mathfrak{b'}}\left(B_{s}\right)$.
Notice that \[ T'_{\mathfrak{b'}}\left(R_{i}\right)=\sum_{j=0}^{d'-1}T'_{\mathfrak{b'}}\left(B_{i,j}\right)V{}^{j}=\sum_{s=0}^{k}C_{i,s}\sum_{j=0}^{d'-1}T'_{\mathfrak{b'}}\left(B_{s,j}\right)V{}^{j}=\sum_{s=0}^{k}C_{i,s}T'_{\mathfrak{b'}}\left(R_{s}\right). \] Finally, \begin{align*} T_{\mathfrak{b}}\circ L_{c}\circ T'_{\mathfrak{b'}}\left(F_{i}\right)=T'_{\mathfrak{b'}}\left(Z{}^{\alpha}\cdot L_{c}\left(G_{i_{0}}\right)\cdot Q_{i}\right)+\sum_{s=0}^{k}C_{i,s}T'_{\mathfrak{b'}}\left(Z{}^{\alpha}\cdot R_{s}\right)=\\
The series within square brackets in the last line of above formula are indeed elements of the ideal generated by the set $S=\left\{ T_{\mathfrak{b}}\circ L_{c}\circ T'_{\mathfrak{b'}}\left(F_{i}\right):\ i\in\mathbb{N}\right\} $. In particular, we can choose a finite set of generators within the set $S$ itself. This concludes the proof of the case $d=1$.
As for the general case, consider the family $\mathcal{G}'=\left\{ \left(F_{i,1},\ldots,F_{i,d-1}\right):\ i\in\mathbb{N}\right\} \subset\left(\mathbb{R}\left\llbracket Z^{*},Y,U\right\rrbracket \right)^{d-1}$. Since the total support of $\mathcal{G}'$ is good, we can apply the inductive hypothesis and find an admissible tree $T$ such that, for every branch $\mathfrak{b}$ of $T$, the module generated by $T_{\mathfrak{b}}\left(\mathcal{G}'\right)$ is finitely generated.
Let $\left\{ \left(T_{\mathfrak{b}}\left(F_{i,1}\right),\ldots,T_{\mathfrak{b}}\left(F_{i,d-1}\right)\right):\ i\leq p\right\} $ be a set of generators, for some $p\in\mathbb{N}$. Now consider the family $\mathcal{G}''=\left\{ T_{\mathfrak{b}}\left(F_{i,0}\right):\ i\in\mathbb{N}\right\} \subset\mathbb{R}\left\llbracket Z'^{*},Y',U'\right\rrbracket $. By Remark \ref{rem:tree preserves good families}, $\mathcal{G}''$ has good total support, hence there exists an admissible tree $T'$ such that, for every branch $\mathfrak{b'}$ of $T'$, the ideal generated by $T'_{\mathfrak{b'}}\left(\mathcal{G}''\right)$ is finitely generated. Let $\left\{ T_{\mathfrak{b}}\circ T'_{\mathfrak{b'}}\left(F_{i,0}\right):\ i\leq s\right\} $ be a set of generators, for some $s\in\mathbb{N}$. It is then clear that the module generated by $T'_{\mathfrak{b'}}\circ T_{\mathfrak{b}}\left(\mathcal{G}\right)$ is generated by the set $\left\{ \left(T_{\mathfrak{b}}\circ T'_{\mathfrak{b'}}\left(F_{i,0}\right),0\right):\ i\leq s\right\} \cup\left\{ \left(0,T_{\mathfrak{b}}\circ T'_{\mathfrak{b'}}\left(F_{i,1}\right),\ldots,T_{\mathfrak{b}}\circ T'_{\mathfrak{b'}}\left(F_{i,d-1}\right)\right):\ i\leq p\right\} $. \end{proof} Going back to the proof of \ref{vuoto: reduce to F regular in v}, let $F_{0}\left(x,\hat{u},v\right)=\sum_{i\geq0}f_{i}\left(x,\hat{u}\right)v^{i}$, where $f_{i}\left(x,\hat{u}\right)=\frac{1}{i!}\frac{\partial^{i}F_{0}}{\partial v^{i}}\left(x,\hat{u},0\right)$, which, by a remark in \ref{rems: properties of the algebras}, belongs to $\mathcal{A}_{\check{m},\check{n}+j\left(\Phi\right)-1}$. The family $\mathcal{G}=\left\{ f_{i}:\ i\in\mathbb{N}\right\} $ has good total support, hence, by Lemma \ref{lem:noether} and Remark \ref{rem: elemntary transf send quadrants to sectors}, there is a finite family $\mathcal{F}$ of vertical admissible transformations $\rho:\hat{I}_{m_{\rho},n_{\rho}+j\left(\Phi\right)-1,r_{\rho}}\to\hat{I}_{m,n+j\left(\Phi\right)-1,r}$, where $m=\text{max}\left\{ \hat{m},\check{m}\right\} $ (hence they all respect $f$), such that $\mathcal{F}$ satisfies the covering property with respect to $\hat{I}_{m,n+j\left(\Phi\right)-1,r}$ (and hence with respect to $\Phi'$ ) and for every $\rho\in\mathcal{F}$, either $j\left(\Phi_{\rho}'\right)<j\left(\Phi'\right)$, or $j\left(\Phi_{\rho}'\right)=j\left(\Phi'\right)=j\left(\Phi\right)-1\leq l-1$ and the ideal generated by the family $\left\{ f_{i}\circ\rho:\ i\in\mathbb{N}\right\} $ is generated by $f_{0}\circ\rho,\ldots,f_{p}\circ\rho$, for some $p\in\mathbb{N}$, i.e. there are series $Q_{i,n}\left(X,\hat{U}\right)$, not necessarily in $\text{Im}\left(\mathcal{T}\right)$, such that for all $n\in\mathbb{N},\ f_{n}\circ\rho=\sum_{i=0}^{p}Q_{i,n}\cdot f_{i}\circ\rho$. Hence we can write \[ \mathcal{T}\left(F_{0}\circ\rho\right)=\sum_{i=0}^{p}\mathcal{T}\left(f_{i}\circ\rho\right)\left(X,\hat{U}\right)V^{i}W_{i}\left(X,\hat{U},V\right), \] where the series $W_{i}=1+\sum_{n>p}Q_{i,n}\left(X,\hat{U}\right)V^{n-i}$ are units, not necessarily in $\text{Im}\left(\mathcal{T}\right)$. 
We can apply the inductive hypothesis \textbf{(B)}$_{N,j\left(\Phi\right)-1}$ to $\tilde{F}\left(x,\hat{u}\right):={\displaystyle \prod_{0\leq i,j\leq p,i\not=j}f_{i}\circ\rho\left(f_{i}\circ\rho-f_{j}\circ\rho\right)}$ and $\tilde{\Phi}:=\Phi_{\rho}':D_{\rho}\to\mathbb{R}^{j\left(\Phi'\right)}$ and obtain that there exist a sub-$\Lambda$-set $\tilde{S}\subseteq D_{\rho}$ of dimension $\leq N-1$ and a finite family $\tilde{\mathcal{F}}$ of vertical admissible transformations $\tilde{\rho}:\hat{I}_{m_{\tilde{\rho}},n_{\tilde{\rho}}+j\left(\Phi'\right),r_{\tilde{\rho}}}\to\hat{I}_{m_{\rho},n_{\rho}+j\left(\Phi'\right),r_{\rho}}$ such that $\tilde{\mathcal{F}}$ satisfies the covering property with respect to $\left(\tilde{\Phi},\tilde{S}\right)$ and for every $\tilde{\rho}\in\tilde{\mathcal{F}}$, $\tilde{\rho}$ respects $f\circ\rho$ and either $j\left(\tilde{\Phi}_{\tilde{\rho}}\right)<j\left(\tilde{\Phi}\right)$ or $f_{0}\circ\rho\circ\tilde{\rho},\ldots,f_{p}\circ\rho\circ\tilde{\rho}$ are all normal and (by Lemma \ref{lem: prod of series}) linearly ordered by division. In this latter case, after factoring out a monomial, we obtain that $F_{0}\circ\rho\circ\tilde{\rho}$ is regular in $v$. By Remarks \ref{rem: evolution of phi}, we have proved \ref{vuoto: reduce to F regular in v}.
{} The next step is to show the following reduction. \begin{void} \label{vuoto: chute d'ordre}We may assume that $F_{0}$ is regular of order $1$ in the variable $v$.
Since we may assume that $F_{0}$ is regular of some order $d>1$ in the variable $v$, after a Tschirnhausen translation, we can write \[ F_{0}\left(x,\hat{u},v\right)=W\left(x,\hat{u},v\right)v^{d}+a_{2}\left(x,\hat{u}\right)v^{d-2}+\ldots+a_{d}\left(x,\hat{u}\right), \] where $a_{i}\in\mathcal{A}_{\check{m},\check{n}+j\left(\Phi\right)-1}$ and $W\in\mathcal{A}_{\check{m},\check{n}+j\left(\Phi\right)}$ is a unit. By Lemma \ref{lem: weak monomialisation}, we may assume that there are constants $K_{1},K_{2}>0$, a function $U:D\to\mathbb{R}$ (whose graph is a sub-$\Lambda$-set) and a multi-exponent $\alpha\in\mathbb{A}^{N}$ such that $\varphi_{j\left(\Phi\right)}\left(x\right)=x^{\alpha}U\left(x\right)$
and $K_{1}<|U|<K_{2}$. Without loss of generality we may assume $U$ is strictly positive on $D$.
Let $a_{0}\left(x,\hat{u}\right)=1,\ a_{1}\left(x,\hat{u}\right)=0$ and consider the family $\left\{ x^{\alpha\left(d-i\right)}a_{i}\left(x,\hat{u}\right)\right\} _{i=0,\ldots,d}$. We apply \textbf{(B)}$_{N,j\left(\Phi\right)-1}$ simultaneously to the members of this family, i.e. to the function $A\left(x,\hat{u}\right)=\prod_{0\leq i,j\leq d,i\not=j}x^{\alpha\left(d-i\right)}a_{i}\left(x,\hat{u}\right)\left(x^{\alpha\left(d-i\right)}a_{i}\left(x,\hat{u}\right)-x^{\alpha\left(d-j\right)}a_{j}\left(x,\hat{u}\right)\right)$, and to $\Phi'$. Hence there are a sub-$\Lambda$-set $S\subseteq D$ of dimension $<N$ and a finite family of vertical admissible transformations which extends trivially to a finite family $\mathcal{F}$ of vertical admissible transformations $\rho:\hat{I}_{m_{\rho},n_{\rho}+j\left(\Phi\right),r_{\rho}}\to\hat{I}_{m,n+j\left(\Phi\right),r}$, where $m=\text{max}\left\{ \check{m},\hat{m}\right\} $ and $n=N-m$ (hence $\rho'$ respects $f$), such that $\mathcal{F}$ satisfies the covering property with respect to $\left(\hat{\Phi},S\right)$ and for every $\rho\in\mathcal{F}$, either $j\left(\hat{\Phi}_{\rho}\right)<j\left(\hat{\Phi}\right)$ or, for all $i=2,\ldots,d$, there exist $\gamma_{i}\in\mathbb{A}^{N},\ \delta_{i}\in\mathbb{N}^{j\left(\Phi\right)-1}$ and units $W_{i}\in\mathcal{A}_{m_{\rho},n_{\rho}+j\left(\Phi\right)-1}$ such that $a_{i}\circ\rho\left(x,\hat{u}\right)=x^{\gamma_{i}}\hat{u}^{\delta_{i}}W_{i}\left(x,\hat{u}\right)$. Moreover, by Lemma \ref{lem: prod of series}, the family $\left\{ \left(x^{\alpha\left(d-i\right)}a_{i}\left(x,\hat{u}\right)\right)\circ\rho\right\} _{i=0,\ldots,d}$ is linearly ordered by division, hence the set of $\left(N+j\left(\varphi\right)\right)$-tuples $\mathcal{E}=\left\{ \left(\beta\left(d-i\right)+\gamma_{i},\delta_{i}\right)\right\} _{i=0,\ldots,d}$ is totally ordered. Notice that $\varphi_{j\left(\varphi\right)}\circ\rho'\left(x\right)$ is still weakly normal, i.e. there exists $\beta\in\mathbb{A}^{N}$ such that $\varphi_{j\left(\Phi\right)}\circ\rho'\left(x\right)=x^{\beta}U\circ\rho'\left(x\right)$, and, after suitable ramifications of the variables $x_{1},\ldots,x_{\check{m}}$, we may suppose that $\beta\in\mathbb{N}^{N}$.
Let $N_{0}=\text{max}\left\{ M\leq N:\ \beta_{M}\not=0\right\} $. Define, for $j=1,\ldots,N_{0}-1$, $\rho_{\lambda,j}\left(x,\hat{u},v\right)=x_{j}^{\beta_{j}}v$ and $\rho_{\lambda,N_{0}}\left(x,\hat{u},v\right)=x_{N_{0}}^{\beta_{N_{0}}}\left(\lambda+v\right)$. Notice that the function $\left(x',\hat{u}',v'\right)\mapsto v=\rho_{\lambda,1}\circ\ldots\circ\rho_{\lambda,N_{0}-1}\circ\rho_{\lambda,N_{0}}\left(x',\hat{u}',v'\right)$ is a finite composition of blow-up charts and extends trivially to a vertical admissible transformation $\rho_{\lambda}:\hat{I}_{m_{\rho},n_{\rho}+j\left(\Phi\right),r_{\lambda}}\to\hat{I}_{m_{\rho},n_{\rho}+j\left(\Phi\right),r_{\rho}}$ .
Let \[ \varepsilon\colon\mathbb{R}^{>0}\to\mathbb{R}^{>0},\qquad\lambda\mapsto\varepsilon_{\lambda}, \] be any fixed function and let $\mathcal{G}$ be a finite family of positive real numbers such that $\left[K_{1},K_{2}\right]\subseteq\bigcup_{\lambda\in\mathcal{G}}\left(\lambda-\varepsilon_{\lambda},\lambda+\varepsilon_{\lambda}\right)$.
Let $\mathcal{F}_{\mathcal{G}}=\left\{ \rho_{\lambda}:\ \lambda\in\mathcal{G}\right\} $. Since for all $x\in D$ there exists $\lambda\in\mathcal{G}$ such that $x^{\beta}\left(\lambda-\varepsilon_{\lambda}\right)<\varphi_{j\left(\Phi\right)}\left(x\right)<x^{\beta}\left(\lambda+\varepsilon_{\lambda}\right)$, the family $\mathcal{F}_{\mathcal{G}}$ satisfies the covering property with respect to $\left(\hat{\Phi}_{\rho},S_{\rho}\right)$. We show that, for every $\rho_{\lambda}\in\mathcal{F}_{\mathcal{G}}$, either $j\left(\left(\hat{\Phi}_{\rho}\right)_{\rho_{\lambda}}\right)<j\left(\hat{\Phi}_{\rho}\right)$, or, possibly after factoring out a monomial in the variables $x$, $F_{0}\circ\rho\circ\rho_{\lambda}$ is regular of order $d'<d$ in the variable $v$. Let $\tilde{F}=F_{0}\circ\rho\circ\rho_{\lambda}$ and $\tilde{W}\left(x,\hat{u},v\right)=W\circ\rho\circ\rho_{\lambda}\left(x,\hat{u},v\right)$, which is still a unit. Then, \[ \tilde{F}\left(x,\hat{u},v\right)=\tilde{W}\left(x,\hat{u},v\right)\left(\lambda+v\right)^{d}x^{\beta d}+\left(\lambda+v\right)^{d-2}x^{\beta\left(d-2\right)+\gamma_{2}}\hat{u}^{\delta_{2}}W_{2}\left(x,\hat{u}\right)+\ldots+x^{\gamma_{d}}\hat{u}^{\delta_{d}}W_{d}\left(x,\hat{u}\right). \] Let $i_{0}=\text{min}\left\{ i:\ 0\leq i\leq d,\ \left(\beta\left(d-i\right)+\gamma_{i},\delta_{i}\right)=\text{min}\left(\mathcal{E}\right)\right\} $. Notice that necessarily $\delta_{i_{0}}=0$. After factoring $\tilde{F}$ out by the monomial $x^{\beta\left(d-i_{0}\right)+\gamma_{i_{0}}}$, we obtain that, either $i_{0}=0$ and, thanks to the Tschirnhausen translation we did at the beginning, the coefficient of the term $v^{d-1}$ is a unit, or $i_{0}>0$ and the coefficient of the term $v^{d-i_{0}}$ is a unit. In either of the two cases, after factorisation $\tilde{F}$ has become regular of order $d'<d$ in $v$. Hence we can start again with the procedure just described until we reduce to $d'=1$. By Remarks \ref{rem: evolution of phi}, this concludes the proof of \ref{vuoto: chute d'ordre}.
Since $F_{0}$ is regular of order $1$ in the variable $v$, after performing a last Tschirnhausen translation $\tau_{h}$ as in the proof of Theorem \ref{thm: monomialisation}, we have that, either $j\left(\hat{\Phi}_{\tau_{h}}\right)<j\left(\hat{\Phi}\right)$, or $F_{0}\circ\tau_{h}\left(x,\hat{u},v\right)=x^{\alpha}\hat{u}^{\beta}v^{\gamma}W\left(x,\hat{u},v\right)$ for some $\alpha\in\mathbb{A}^{N},\left(\beta,\gamma\right)\in\mathbb{N}^{j\left(\Phi\right)}$ and a unit $W\in\mathcal{A}_{m,n+j\left(\Phi\right)}$ (where $m=\text{max}\left\{ \check{m},\hat{m}\right\} $). Let $r$ be a polyradius in $\mathbb{R}^{N+j\left(\Phi\right)}$ such that $W$ has a representative which does not vanish on $\hat{I}_{m,n+j\left(\Phi\right),r}$. Notice that the size of the last coordinate of $r$ determines the choice of $\varepsilon_{\lambda}$ in the proof of \ref{vuoto: chute d'ordre}.
Hence we can conclude the proof of \textbf{(B)}$_{N,l}$ by Remarks \ref{rem: evolution of phi}. \end{void}
\subsection{(B)$_{N,N}$ and (C)$_{N,l'}$ $\left(\forall l'\leq l-1\right)$ imply (C)$_{N,l}$}
We apply \textbf{(B)}$_{N,N}$ to $F\left(x,u\right):=\prod_{i=1}^{N}f_{i}\left(x,u\right)$ and $\Phi$ and obtain that there exist a finite family $\mathcal{F}$ of vertical admissible transformations $\rho:\hat{I}_{m_{\rho},n_{\rho}+l,r_{\rho}}\to\hat{I}_{m,n+l,r}$, where $m=\text{max}\left\{ \hat{m},\check{m}\right\} $ (hence $\rho'$ respects $f$), and$ $ a sub-$\Lambda$-set $S\subseteq D$ of dimension strictly smaller than $N$ such that $\mathcal{F}$ satisfies the covering property with respect to $\left(\hat{\Phi},S\right)$ and for every $\rho\in\mathcal{F}$, either $j\left(\hat{\Phi}_{\rho}\right)<j\left(\Phi\right)$ or $F_{0}\circ\rho$ is normal. Let $\hat{\mathcal{F}}$ be the family obtained by extending trivially each $\rho\in\mathcal{F}$ to $\hat{\rho}:\hat{I}_{m_{\rho},n_{\rho}+N,\widehat{r_{\rho}}}\to\hat{I}_{m,n+N,\hat{r}}$. By Remarks \ref{rem: evolution of phi}, $\hat{\mathcal{F}}$ satisfies the covering property with respect to $\left(\Phi,S\right)$.
Suppose first that $j\left(\hat{\Phi}_{\rho}\right)=l'<l$. Let $\tilde{f_{i}}\left(x,u\right)=f_{i}\circ\hat{\rho}\left(x,u\right),\ \tilde{D}=D_{\rho}$ and $\tilde{\Phi}=\left(\hat{\Phi}_{\rho},0\right):\tilde{D}\to\mathbb{R}^{N}$. Then by \textbf{(C)}$_{N,l'}$ there exist a finite family $\tilde{\mathcal{F}}$ of vertical admissible transformations $\tilde{\rho}:\hat{I}_{m_{\tilde{\rho}},n_{\tilde{\rho}}+N,r_{\tilde{\rho}}}\to\hat{I}_{m_{\rho},n_{\rho}+N,\widehat{r_{\rho}}}$ (respecting $f\circ\rho'$) and $ $ a sub-$\Lambda$-set $\tilde{S}\subseteq\tilde{D}$ of dimension strictly smaller than $N$ such that $\tilde{\mathcal{F}}$ satisfies the covering property with respect to $\left(\tilde{\Phi},\tilde{S}\right)$ and for every $\tilde{\rho}\in\tilde{\mathcal{F}}$, \[ \forall\left(x,u\right)\in\tilde{D}_{\tilde{\rho}}\times\mathbb{R}^{N}\ \ \ \ \ \begin{cases} \tilde{f_{1}}\circ\tilde{\rho}\left(x,u\right)=0\\ \vdots\\ \tilde{f}_{N}\circ\tilde{\rho}\left(x,u\right)=0 \end{cases}\Leftrightarrow\ \ u=0. \] Hence, by Remarks \ref{rem: evolution of phi}, we are done in this case.
Now suppose that $F_{0}\circ\rho$ is normal. Let $\hat{u}=\left(u_{1},\ldots u_{l}\right)$. Then there are $\left(\alpha_{i},\beta_{i}\right)\in\mathbb{A}^{N}\times\mathbb{N}^{l}$ and units $U_{i}\in\mathcal{A}_{m_{\rho},n_{\rho}+l}$ such that $\tilde{f_{i}}\left(x,\hat{u}\right):=\left(f_{i}\right)_{0}\circ\rho\left(x,\hat{u}\right)=x^{\alpha_{i}}u^{\beta_{i}}U_{i}\left(x,\hat{u}\right)$. Let $\tilde{D}=D_{\rho}$ and $\tilde{\Phi}=\hat{\Phi}_{\rho}$. Notice that \[ \forall\left(x,\hat{u}\right)\in\tilde{D}\times\mathbb{R}^{l}\ \ \ \ \ \begin{cases} \tilde{f}_{1}\left(x,\hat{u}\right)=0\\ \vdots\\ \tilde{f}_{N}\left(x,\hat{u}\right)=0 \end{cases}\Leftrightarrow\ \ \hat{u}=\tilde{\Phi}\left(x\right).\tag{*} \]
Let $\tilde{S}$ be the sub-$\Lambda$-set (of dimension strictly smaller than $N$) $\left\{ x_{1}=0\right\} \cup\ldots\cup\left\{ x_{N}=0\right\} $. Clearly we may assume that $\rho\left(\tilde{S}\right)\subseteq S$. Then we have \[ \forall\left(x,\hat{u}\right)\in\tilde{D}\setminus\tilde{S}\times\mathbb{R}^{l}\ \ \ \ \ \begin{cases} \tilde{f}_{1}\left(x,\hat{u}\right)=0\\ \vdots\\ \tilde{f}_{N}\left(x,\hat{u}\right)=0 \end{cases}\Leftrightarrow\ \ \begin{cases} \hat{u}^{\beta_{1}}=0\\ \vdots\\ \hat{u}^{\beta_{N}}=0 \end{cases}.\tag{**} \] We claim that $\forall x\in\tilde{D}\setminus\tilde{S}\ \ \tilde{\Phi}\left(x\right)=0$. In fact if this is not the case then, without loss of generality, for some $x\in\tilde{D}\setminus\tilde{S}$ we have $\tilde{\varphi}_{1}\left(x\right)\not=0$. This means that for every $a\in\mathbb{R}$, the tuple $\left(x,a,\tilde{\varphi}_{2}\left(x\right),\ldots,\tilde{\varphi}_{l}\left(x\right)\right)$ satisfies the system of equations on the right side of ({*}{*}). But this contradicts the equivalence in ({*}).
To conclude, notice that \[ \forall\left(x,u\right)\in\tilde{D}\setminus\tilde{S}\times\mathbb{R}^{N}\ \ \ \ \ \begin{cases} f_{1}\circ\hat{\rho}\left(x,u\right)=0\\ \vdots\\ f_{N}\circ\hat{\rho}\left(x,u\right)=0 \end{cases}\Leftrightarrow\ \ \hat{u}=\tilde{\Phi}\left(x\right),\ u_{l+1}=\ldots=u_{N}=0. \]
\subsection{(A)$_{N-1}$ and (C)$_{N,N}$ imply (A)$_{N}$}
Notice that, if $N=1$, then by \ref{vuoto: polybdd}, for all $x\in D\cap\mathbb{R}^{\geq0}$ we have $\eta\left(x\right)=x^{\alpha/\beta}U\left(x\right)$, for some unit $U\in\mathcal{A}_{1}$ and $\alpha,\beta\in\mathbb{A}$. If $\hat{m}=1$ then take $\mathcal{F}=\left\{ \sigma_{1}^{+}\circ r_{1}^{\beta}\right\} $, if $\hat{n}=1$ then take $\mathcal{F}=\left\{ \sigma_{1}^{+}\circ r_{1}^{\beta},\sigma_{1}^{-}\circ r_{1}^{\beta}\right\} $. In both cases $\mathcal{F}$ satisfies the conclusion of \textbf{(A)}$_{1}$.
Arguing as in the first paragraph of the proof of Corollary \ref{cor: def functions are terms} and by the inductive hypothesis \textbf{(A)}$_{N-1}$, we may assume that $\text{dim}\left(D\right)=N$. By the same argument, it is enough to prove the statement for $\eta\restriction D\setminus S$, where $S$ is any sub-$\Lambda$-set of dimension $<N$.
Since $\Gamma\left(\eta\right)$ is a sub-$\Lambda$-set, we can apply the Parametrisation Theorem \ref{thm: param subanal} and obtain that $\Gamma\left(\eta\right)$ is the finite union of the diffeomorphic images of sub-quadrants under maps whose components are in $\mathcal{A}$. Without loss of generality, we may assume that $\Gamma\left(\eta\right)=H\left(Q\right)$, where $Q\subseteq\left(\mathbb{R}^{>0}\right)^{N}$ is a sub-quadrant of dimension $N$ and $H=\left(g_{1},\ldots,g_{N},h\right)\in\left(\mathcal{A}_{N,0}\right)^{N+1}$. Moreover, \[ \forall\left(x,z\right)\in\mathbb{R}^{N+1}\ \ \ \ \ x\in D\ \text{and}\ z=\eta\left(x\right)\ \Leftrightarrow\ \exists!u\in Q\ \text{s.t.}\ \begin{cases} x_{1}=g_{i}\left(u\right)\\ \vdots\\ x_{N}=g_{N}\left(u\right)\\ z=h\left(u\right) \end{cases}. \] In particular, there is a map $\Phi:D\to Q$, whose graph is a sub-$\Lambda$-set, such that \[ \forall\left(x,u\right)\in D\times Q\ \ \ \ \ \begin{cases} f_{1}\left(x,u\right)=0\\ \vdots\\ f_{N}\left(x,u\right)=0 \end{cases}\Leftrightarrow\ \ u=\Phi\left(x\right), \] where $f_{i}\left(x,u\right)=x_{i}-g_{i}\left(u\right)$. Note that the $f_{i}$ might not satisfy the hypotheses of \textbf{(C)}$_{N,N}$, because the variables $u$ may appear with real exponents in $\mathcal{T}\left(f_{i}\right)$. Our next task is to reduce to the case when $f_{i}\in\mathcal{A}_{\check{m},\check{n}+N}$. In order to do this, we first apply Lemma \ref{lem: weak monomialisation} in order to reduce to the case when all the components of $\Phi$ are weakly normal (notice that $j\left(\Phi\right)=N)$, i.e. $\varphi_{i}\left(x\right)=x^{\alpha_{i}}U_{i}\left(x\right)$, where $\alpha_{i}\in\mathbb{A}$ and $\mathbb{R}_{\mathcal{A}}$-definable functions $U_{i}$ which are bounded away from zero. Secondly, we argue as in the proof of \ref{vuoto: chute d'ordre} and produce a finite family $\mathcal{F}_{\mathcal{G}}=\left\{ \rho_{\lambda}:\ \in\mathcal{G}\right\} $ of vertical admissible transformations \[ \rho_{\lambda}:\hat{I}_{m,n+N,r_{\lambda}}\to\hat{I}_{m+N,n,r} \] of type $\left(1,2\right)$, obtained by performing a series of ramifications and blow-up charts, such that $\rho_{\lambda}\left(x,u\right)=\left\langle x,x^{\alpha_{1}}\left(\lambda+u_{1}\right),\ldots,x^{\alpha_{N}}\left(\lambda+u_{N}\right)\right\rangle $. Notice that $f_{i}\circ\rho_{\lambda}\in\mathcal{A}_{m,n+N}$.
Now we can apply \textbf{(C)}$_{N,N}$ to the $f_{i}\circ\rho_{\lambda}$ and $\Phi_{\rho_{\lambda}}$ and obtain that there exist a finite family $\tilde{\mathcal{F}}$ of vertical admissible transformations \[ \tilde{\rho}:\hat{I}_{m_{\tilde{\rho}},n_{\tilde{\rho}}+N,r_{\tilde{\rho}}}\to\hat{I}_{m,n+N,r} \]
and $ $ a sub-$\Lambda$-set $\tilde{S}\subseteq D$ of dimension strictly smaller than $N$ such that $\mathcal{F}$ satisfies the covering property with respect to $\left(\Phi,S\right)$ and for every $\tilde{\rho}\in\tilde{\mathcal{F}}$, \[ \forall\left(x,u\right)\in D_{\rho}\setminus\left(\tilde{\rho}'\right)^{-1}\left(\tilde{S}\right)\times\mathbb{R}^{N}\ \ \ \ \ \begin{cases} f_{1}\circ\rho_{\lambda}\circ\tilde{\rho}\left(x,u\right)=0\\ \vdots\\ f_{N}\circ\rho_{\lambda}\circ\tilde{\rho}\left(x,u\right)=0 \end{cases}\Leftrightarrow\ \ u=0. \] Summing up, there are a finite family $\mathcal{F}$ of vertical admissible transformations \[ \rho:\hat{I}_{m_{\rho},n_{\rho}+N,r_{\rho}}\to\hat{I}_{m+N,n,r} \] of type $\left(1,2\right)$ such that $\rho'$ respects $f$ and $\rho''$ respects $h$, and a sub-$\Lambda$-set $S\subseteq D$ of dimension strictly smaller than $N$ such that, by Remarks \ref{rem: evolution of phi}, the family $\mathcal{F}'=\left\{ \rho':\ \rho\in\mathcal{F}\right\} $ satisfies the covering property with respect to $D\setminus S$ and for every $\rho\in\mathcal{F}$, \[ \eta\circ\rho'\left(x\right)=h\circ\rho''\left(x,0\right)\in\mathcal{A}_{m_{\rho},n_{\rho}}. \] We conclude the proof of \textbf{(A)}$_{N}$ by Theorem \ref{thm: geom monomial} and Remark \ref{rem: monomialise respecting f}. As in the proof of \ref{vuoto: chute d'ordre}, the size of the domain on which $\eta$ has become normal determines the choice of $\varepsilon_{\lambda}$.
\def$'${$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\emph{Jean-Philippe Rolin}
\emph{Institut de Mathématiques de Bourgogne - UMR 5584}
\emph{Université de Bourgogne }
\emph{BP 138 }
\emph{21004 Dijon cedex (France)}
\emph{email}: \href{mailto:[email protected]}{[email protected]}
\emph{Tamara Servi}
\emph{Centro de Matemática e Applicaçoes Fundamentais}
\emph{Av. Prof. Gama Pinto, 2 }
\emph{1649-003 Lisboa (Portugal)}
\emph{email}: \href{mailto:[email protected]}{[email protected]}\emph{ }
\end{document}
\begin{document}
\title[Positive Eigenfunctions via Sub-Super Solutions Method]{Existence of Positive Eigenfunctions to an Anisotropic Elliptic Operator via Sub-Super Solutions Method}
\author[S. Ciani]{Simone Ciani} \address{Dpto. di Matematica e Informatica ``U. Dini", \\ Universit\`a degli Studi di Firenze,\\ viale G. Morgagni 67/A, 50134 Firenze, Italy} \email{\tt [email protected]}
\author[G. M. Figueiredo]{Giovany M. Figueiredo} \address{Dpto. de Matemática,\\ Universidade de Brasilia,\\ UNB, CEP: 70910-900, Brasília–DF, Brazil } \email{\tt [email protected]}
\author[A. Su\'arez]{Antonio Su\'arez} \address{Dpto. EDAN and IMUS,\\ University of Sevilla,\\ Avda. Reina Mercedes, s/n, 41012, Sevilla, Spain} \email{\tt [email protected]}
\subjclass{35K65, 35B65, 35B45, 35K20.}
\keywords{Anisotropic $p$-Laplacian, Positive Solution, Sub-Supersolution, Eigenvalues.}
\date{\today}
\begin{abstract} Using the sub-supersolution method we study the existence of positive solutions for the anisotropic problem \begin{equation} \label{0.1}
-\sum_{i=1}^N\frac{\partial}{\partial x_i}\left( \left|\frac{\partial u}{\partial x_i}\right|^{p_i-2}\frac{\partial u}{\partial x_i}\right)=\lambda u^{q-1} \end{equation} where $\Omega$ is a bounded and regular domain of $\mathbb{R}^N$, $q>1$ and $\lambda>0$.
\end{abstract}
\maketitle
\section{Introduction}\label{S:intro}
In this paper the main goal is to show the existence of positive solutions of the problem \begin{equation} \label{intro} \left\{\begin{array}{ll}
-\displaystyle\sum_{i=1}^N\frac{\partial}{\partial x_i}\left( \left|\frac{\partial u}{\partial x_i}\right|^{p_i-2}\frac{\partial u}{\partial x_i}\right)=\lambda u^{q-1} &\mbox{in $\Omega$,}\\ u=0 & \mbox{on $\partial\Omega$,} \end{array} \right. \end{equation}
where $\Omega\subset \mathbb{R}^N$, $N\geq 1$, is a bounded and regular domain, $p_i>1$, $i=1,\ldots, N$, $q>1$ and $\lambda$ is a real parameter. We will assume without loss of generality that the $p_{i}$ are ordered increasingly, that is, $p_1<\ldots <p_N$.\newline There is a vast literature concerning anisotropic elliptic problems; we mention here only those references most strongly related to (\ref{intro}). First, in \cite{FGK} it was proved that, for $q<p_N$ and any $\gamma>0$, there exist $\lambda_\gamma>0$ and $u_\gamma$ with $\|u_\gamma\|_p=\gamma$ such that $u_\gamma$ is a solution of (\ref{intro}) with $\lambda=\lambda_\gamma$. As the authors themselves point out, the existence of solutions of (\ref{intro}) for a given $\lambda$ cannot be deduced from this result. In \cite{Monte}, using mainly variational methods, it was proved that if $p_1<q<p_N$ then there exist $0<\lambda_* \leq \lambda^*$ such that: \begin{itemize} \item If $\lambda\leq \lambda_*$, (\ref{intro}) does not possess positive solutions. \item If $\lambda> \lambda^*$, (\ref{intro}) possesses at least one positive solution. \end{itemize} Finally, from the general results of \cite{MPR} (Corollary 1) we can deduce that, in the case $1<q<p_1$, there exist $0<\lambda_*<\lambda_{**}$ such that (\ref{intro}) possesses at least one solution for $\lambda\in (0,\lambda_*)\cup (\lambda_{**},\infty)$.\newline In this paper we complete and improve the above results. To this aim, we use the sub-supersolution method, see \cite{ElHam}, \cite{giosusbsup} and \cite{Struwe} (see also \cite{GST}, \cite{Giovany-Silva1}, \cite{Giovany-Silva2} and references therein for the application of this method to problems with nonlinear reaction functions involving singularities or critical exponents).\newline This method allows us not only to prove the existence of a solution, but also provides lower and upper bounds for such a solution. Specifically, our main result is the following. \begin{theorem} \quad \label{main} \begin{enumerate} \item Assume that $1<q<p_1$. There exists a positive solution of (\ref{intro}) if and only if $\lambda>0$. \item Assume that $p_1\leq q < p_N$. There exists $\Lambda>0$ such that (\ref{intro}) does not possess positive solutions for $\lambda<\Lambda$ and (\ref{intro}) possesses at least one positive solution for $\lambda>\Lambda$. \end{enumerate} \end{theorem} \noindent An outline of the paper is as follows: in Section 2 we recall some definitions and some properties of the eigenvalues and eigenfunctions of the classical $p$-Laplacian. In Section 3 we state the sub-supersolution method. Finally, in Section 4 we construct sub- and supersolutions, obtained as products of powers of $p$-Laplacian eigenfunctions, to be used in the existence theorem. \section{Preliminary Lemmas and Setting} Let $h (x,s): \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ be a Carath\'eodory function, i.e. measurable in $x$ and continuous in the second variable $s$. Consider the anisotropic problem \begin{equation} \label{proto} \left\{\begin{array}{ll}
-\displaystyle\sum_{i=1}^N\frac{\partial}{\partial x_i}\left( \left|\frac{\partial u}{\partial x_i}\right|^{p_i-2}\frac{\partial u}{\partial x_i}\right)=h(x,u(x)) & \text{in $\Omega$},\\ u=0 & \text{on $\partial \Omega.$} \end{array} \right. \end{equation} The natural framework to study (\ref{proto}) is the anisotropic Sobolev Space $W^{1,\bf{p}}_{0}(\Omega)$, that is, the closure of $C_{0}^{\infty}(\Omega)$ under the anisotropic norm $$
\|u \|_{W^{1,\bf{p}}(\Omega)}:= \sum_{i=1}^{N} \bigg{\|} \frac{\partial u}{\partial x_i} \bigg{\|}_{p_i} $$ where $\frac{\partial u}{\partial x_i}$ denotes the $i$-th weak partial derivative of $u$.\newline Recall that if we denote \begin{equation} \label{picondition} \sum_{i=1}^{N} \frac{1}{p_{i}} > 1, \quad p_{i}>1 \quad \forall i=1,\ldots,N, \quad p^{*}:= \frac{N}{\sum \frac{1}{p_{i}}-1}, \quad p_{\infty}:=\max \{p^{*}, p_{N} \}, \end{equation} \noindent then for every $r \in [1, p_{\infty}]$ the embedding $$W^{1,\bf{p}}_{0}(\Omega) \subset L^{r}(\Omega)$$ is continuous, and it is compact if $r<p_{\infty}$. Moreover, the following directional Poincar\'e-type inequality holds for any $u\in C^1_c(\Omega)$ (see for instance \cite{FGK}) \begin{equation} \label{embedding}
||u||_{r} \leq \frac{d^i\, r}{2} \bigg| \bigg| \frac{\partial u}{\partial x_i} \bigg| \bigg|_{r}, \quad \forall r\geq 1, \quad d^i= \sup_{x,y \in \Omega} \langle x-y,e_i\rangle, \end{equation} denoting by $\{e_1,\ldots,e_N\}$ the canonical basis of $\mathbb{R}^N$. \newline The theory of embeddings for this kind of anisotropic Sobolev spaces is vast; we refer to \cite{FGK} for the directional Poincar\'e-type inequality and to \cite{Schm} for the Sobolev and Morrey embeddings of the whole space $W^{1,{\bf{p}}}(\Omega)$, obtained under an important geometric condition on the domain $\Omega$, namely that it must be semi-rectangular. It is no coincidence that this semi-rectangularity condition is reflected in our construction of the solution: the existence of traces for this kind of functions depends heavily on the geometry of the domain, as shown in \cite{Schm}. Regularity theory for orthotropic operators such as the one defined by equation \eqref{intro} is still a challenging open problem, see for example \cite{DiBe}. \newline We also recall the following definition. \begin{Definition} A function $u \in W^{1,\bf{p}}(\Omega)$ is said to be a sub-(super-)solution to the problem (\ref{proto}) if $u \le(\ge) \, 0$ on $\partial \Omega$ and if for all $0 \le \phi \in W^{1,{\bf{p}}}_0(\Omega)$ it satisfies \begin{equation} \label{subsolution}
\int_{\Omega} \bigg[ \displaystyle\sum_{i=1}^N\left|\frac{\partial u}{\partial x_i}\right|^{p_i-2}\frac{\partial u}{\partial x_i}\frac{\partial \phi}{\partial x_i} - h(x,u(x)) \phi \bigg] dx \leq (\ge) 0. \end{equation} Finally, a solution $u\in W^{1,\bf{p}}_{0}(\Omega)$ to (\ref{proto}) has to satisfy $$
\int_{\Omega} \bigg[ \displaystyle\sum_{i=1}^N\left|\frac{\partial u}{\partial x_i}\right|^{p_i-2}\frac{\partial u}{\partial x_i}\frac{\partial \phi}{\partial x_i} - h(x,u(x)) \phi \bigg] dx = 0\quad \forall \phi \in W^{1,\bf{p}}_0(\Omega). $$ \end{Definition} \noindent Now, we recall some well-known results concerning the eigenvalue problem for the $p$-Laplacian. Specifically, the problem \begin{equation} \label{eigen} \left\{\begin{array}{ll}
- \Delta_{p} u= \lambda |u|^{p-2} u &\text{in $\Omega$,}\\ u=0 & \text{on $\partial \Omega$,} \end{array} \right. \end{equation} where $$
\Delta_{p} u= \text{div} (|\nabla u|^{p-2}\nabla u)=\displaystyle\sum_{i=1}^N\frac{\partial}{\partial x_i}\left( \left|\frac{\partial u}{\partial x_i}\right|^{p-2}\frac{\partial u}{\partial x_i}\right) $$ The following result is well-known: \begin{lemma} \label{autovalor} The eigenvalue problem (\ref{eigen}) has a unique eigenvalue $\lambda=\lambda_1$ with the property of having a positive associated eigenfunction $\varphi_1\in W_0^{1,p}(\Omega)$, called principal eigenfunction. Moreover, $\lambda_1$ is simple, isolated and is defined by $$
\lambda_1=\inf\left\{\int_\Omega|\nabla u|^p: u\in W^{1,p}_0(\Omega),\;\int_\Omega |u|^p dx=1\right\}. $$
Furthermore, $\varphi_1\in C^{1,\beta}(\overline\Omega)$ for some $\beta\in (0,1)$ and $\partial\varphi_1/\partial n<0$ on $\partial\Omega$, where $n$ is the outward unit normal on $\partial\Omega$. Finally, for $N=1$ we have that \begin{equation} \label{cosa}
|\nabla \varphi_1|^{p-2}\nabla \varphi_1\in W^{1,2}(\Omega), \end{equation} and in fact $$ -\Delta_p \varphi_1(x)=\lambda_1\varphi_1(x)\quad \mbox{a.e. $x\in \Omega$}. $$ \end{lemma} \begin{remark} The existence of $\lambda_1$ and the main properties of $\varphi_1$ are well known, see \cite{Giusti}, \cite{Peter1}, \cite{Peral}. Property (\ref{cosa}) holds for $N=1$, see for instance \cite{gv}, and for $N\geq 2$ in some specific domains, for example when $\Omega$ is convex, see \cite{Cia}. \end{remark}
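As a simple illustration of Lemma \ref{autovalor} (a standard model case, added here only for the reader's convenience), take $p=2$, $N=1$ and $\Omega=(0,L)$: then \[ \lambda_1=\Big(\frac{\pi}{L}\Big)^2,\qquad \varphi_1(x)=\sin\Big(\frac{\pi x}{L}\Big), \] so that $\varphi_1>0$ in $(0,L)$, $\partial\varphi_1/\partial n=-\pi/L<0$ at both endpoints and $|\varphi_1'|^{p-2}\varphi_1'=\varphi_1'\in W^{1,2}(0,L)$, in agreement with (\ref{cosa}).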
\section{An existence sub-supersolution theorem} We start by stating an important theorem, see \cite{ElHam} and \cite{giosusbsup}, which ensures the existence of a solution between a subsolution and a supersolution. \begin{theorem} \label{teoss} Suppose that $h: \mathbb{R}\to \mathbb{R}$ is a continuous function and that there exist a subsolution $\underline{u}$ and a supersolution $\overline{u}$ of (\ref{proto}) belonging to $W^{1,\bf{p}}(\Omega)\cap L^\infty(\Omega)$ and such that $\underline{u}\leq \overline{u}$. Then there exists a solution $u \in W^{1,\bf{p}}_{0}(\Omega)$ to (\ref{proto}) such that $$ \underline{u} \le u \le \overline{u}. $$ \end{theorem} \begin{proof} Since $\underline{u}$ and $\overline{u}$ belong to $L^\infty(\Omega)$, the function $h$ verifies condition $(h_2)$ of \cite{ElHam}. This concludes the proof. \end{proof}
\section{Construction of sub and super-solutions: proof of the main result} In this section we prove Theorem \ref{main}. To this aim, we apply Theorem \ref{teoss} to (\ref{intro}); the main task is the construction of a subsolution and a supersolution.
\subsection{Sub-solutions} Let us consider a bounded rectangular domain $U \subseteq \Omega$, i.e. $$ U:= \prod_{i=1}^{N}U_i,\quad\mbox{where $U_{i}=(a_i,b_i)$,} \quad a_i,b_i \in \mathbb{R}, \quad \forall \, i=1,\ldots,N. $$ Denote by $v_i=v_i(x_i)$ a positive principal eigenfunction of $-\Delta_{p_i}$ in $U_i$, that is, \begin{equation} \label{pi}
\begin{cases} - \Delta_{p_{i}} v_i= \eta_{i} |v_i|^{p_{i}-2} v_i &\text{in $U_i$,}\\ v_i=0 &\text{on $\partial U_i$.} \end{cases} \end{equation} From Lemma \ref{autovalor}, recall that, if $n_i$ is the outward normal derivative to $\partial U_i$, we have \begin{equation} \label{normal} \frac{\partial v_i}{\partial n_i}<0 \quad\mbox{on $\partial U_i$.} \end{equation} Let us consider the function \begin{equation}
\underline{u}(x)= \begin{cases}
\epsilon \, \displaystyle\prod_{i=1}^{N} v_i^{\alpha_i} (x_{i}) & x \in U, \\
0 & x\in \Omega\setminus\overline{U},
\end{cases} \end{equation} where $\alpha_i>0$, $i=1,\ldots,N$, and $\epsilon>0$ will be chosen later. \begin{remark} We note that $\underline{u}(x)>0 $ in $\emptyset \ne U \subset \Omega$.
\end{remark} \noindent As $v_i$ are bounded, it is clear that $\underline{u} \in W^{1,{\bf{p}}} (\Omega)$ and that $\underline{u}_{|\partial \Omega} = 0 $. Hence, $\underline{u}$ is subsolution of (\ref{intro}) provided that $$
\int_{\Omega} \sum_{i=1}^{N} \bigg| \frac{\partial \underline{u}}{\partial x_i} \bigg|^{p_{i}-2} \frac{\partial \underline{u}}{\partial x_i} \, \frac{\partial \phi}{\partial x_i} \, dx \leq \lambda \int_{\Omega} \underline{u}^{q-1} \phi \, dx \quad\forall \, \phi \in W^{1,{\bf{p}}}_{0} (\Omega), \quad \phi \ge 0. $$ \noindent Observe that \begin{equation} \lambda \int_{\Omega} \underline{u}^{q-1} \phi \, dx = \lambda \epsilon^{q-1} \int_{U} \prod_{i=1}^{N}v_{i}^{\alpha_{i} (q-1)} \phi \, dx. \end{equation} \noindent On the other hand, observe that $$
\frac{\partial \underline{u}}{\partial x_i} = \epsilon \, \alpha_{i} \bigg( \prod_{j\ne i} v_j^{\alpha_j} \bigg) v_i^{\alpha_i-1} \frac{\partial v_i}{\partial x_i}\quad\mbox{in $U$.} $$ \noindent Then, taking into account the positivity of $v_i$ for all $i=1,\ldots,N$, \[
\int_{\Omega} \sum_{i=1}^{N} \bigg| \frac{\partial \underline{u}}{\partial x_i} \bigg|^{p_{i}-2} \frac{\partial \underline{u}}{\partial x_i} \, \frac{\partial \phi}{\partial x_i} \, dx= \] \[
\sum_{i=1}^{N} \int_{\prod_{j \ne i} U_{j} } \bigg{\{} \int_{U_i} \bigg[ \epsilon \alpha_{i} \, \bigg( \prod_{j\ne i} v_j^{\alpha_j} \bigg) v_i^{\alpha_i-1} \bigg]^{p_{i}-1} \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}-2} \frac{\partial v_i}{\partial x_i} \, \frac{\partial \phi }{\partial x_i} \, dx_{i} \bigg{\}} d\hat{x}^{i} \] with the obvious notation for $d\hat{x}^i$. Next, by using an integration by parts argument and the Fubini-Tonelli theorem, we get \[
\int_{\Omega} \sum_{i=1}^{N} \bigg|\frac{\partial \underline{u}}{\partial x_i} \bigg|^{p_{i}-2} \frac{\partial \underline{u}}{\partial x_i} \, \frac{\partial \phi}{\partial x_i} \, dx= \]
\[
-\sum_{i=1}^{N} \int_{\prod_{j \ne i} U_j} \bigg( \epsilon \alpha_{i} \prod_{j \ne i} v_j^{\alpha_j } \bigg)^{p_i -1} \int_{U_i} \frac{\partial}{\partial x_i} \bigg( v_i^{(\alpha_i -1)(p_i -1)} \bigg|\frac{\partial v_i}{\partial x_i} \bigg|^{p_i -2} \frac{\partial v_i}{\partial x_i} \bigg) \phi \, dx_i d\hat{x}^{i} \]
\[ + \sum_{i=1}^{N} \int_{\prod_{j \ne i} U_j} \bigg( \epsilon \alpha_{i} \prod_{j \ne i} v_j^{\alpha_j } \bigg)^{p_i -1} \bigg{\{} \int_{\partial U_i} v_i^{(\alpha_i -1)(p_i -1)} \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}-2} \frac{\partial v_i}{\partial n_i} \phi \, dx_i \bigg{\}} d\hat{x}^{i}. \] The second term on the right can be discarded as $\frac{\partial v_i}{\partial {n_i}}<0 $ in $\partial U_i$, see (\ref{normal}). Considering that $$
\frac{\partial}{\partial x_i} \bigg( v_i^{(\alpha_i -1)(p_i -1)} \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_i -2} \frac{\partial v_i}{\partial x_i}\bigg)=$$$$
(\alpha_i-1)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_i }+v_i^{(\alpha_i -1)(p_i -1)} \frac{\partial}{\partial x_i} \bigg( \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_i -2} \frac{\partial v_i}{\partial x_i} \bigg) $$ and, from Lemma \ref{autovalor}, that $$
\frac{\partial}{\partial x_i}\bigg( \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_i -2} \frac{\partial v_i}{\partial x_i} \bigg) =-\eta_i v_i^{p_i-1}\quad\mbox{in $U_i$,} $$ our sub-solution condition becomes \begin{equation} \label{CLAIM} \begin{aligned}
\sum_{i=1}^{N} \int_{\prod_{j \ne i} U_j} &\bigg(\epsilon \alpha_i \prod_{j \ne i} v_j^{\alpha_j} \bigg)^{p_i -1} \int_{U_i} \bigg{\{} \bigg[ (1-\alpha_i)(p_i -1) v_i^{(\alpha_i -1)(p_i -1)-1} \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} +\\ & + v_{i}^{(\alpha_i -1)(p_i -1)} \eta_i v_i^{p_i -1} \bigg] - \lambda \epsilon^{q-1} \bigg( \prod_{k=1}^{N} v_k^{\alpha_k (q-1)} \bigg) \bigg{\}} \phi \, dx_i d\hat{x}^{i} \leq 0. \end{aligned} \end{equation} \noindent Let us require a condition on the pointwise integrand \[ \lambda \ge \] \[
\sum_{i=1}^{N} \bigg(\epsilon \alpha_i \prod_{j\ne i} v_j^{\alpha_j} \bigg)^{p_i -q} v_{i}^{(\alpha_i -1)(p_i -1)-1- \alpha_i( q-1)} \bigg[ (1-\alpha_i)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} + \eta_i v_i^{p_{i}} \bigg]
\]
\[
= \sum_{i=1}^{N} \bigg(\epsilon \alpha_i \prod_{j\ne i} v_j^{\alpha_j} \bigg)^{p_i -q} v_{i}^{\alpha_i(p_i -q)-p_i} \bigg[ (1-\alpha_i)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} + \eta_i v_i^{p_{i}} \bigg].
\] Now we consider various cases (the elementary exponent computation behind the choices of $\alpha_i$ is recorded after this list). \begin{itemize} \item If $1<q<p_{1}$, then by choosing $\alpha_i>\frac{p_i}{(p_i -q)}>1$ and letting $\epsilon \rightarrow 0^{+}$, we obtain that $\underline{u}$ is a subsolution provided $\lambda>0$.
\item Assume that for some $i_0 \in \{1,\ldots,N\}$ we have $p_{i_0+1}>q\geq p_{i_0}$, where we set $p_{N+1}=+\infty$. Then $\underline{u}$ is a subsolution if $$\lambda \ge \lambda_{*}:=\max_{U}{\mathcal S},$$ where $$
{\mathcal S}=\sum_{i=1}^N \bigg(\epsilon \alpha_i \prod_{j\ne i} v_j^{\alpha_j} \bigg)^{p_i -q} v_{i}^{\alpha_i (p_i -q)- p_i} \bigg[ (1-\alpha_i)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} + \eta_i v_i^{p_{i}} \bigg]. $$ We show that $\lambda_{*}$ is finite. Observe that ${\mathcal S}={\mathcal S}_0+{\mathcal S}_1$ where $$
{\mathcal S}_0=\sum_{i=1}^{i_0}\bigg(\epsilon \alpha_i \prod_{j\ne i} v_j^{\alpha_j} \bigg)^{p_i -q} v_{i}^{\alpha_i (p_i -q)- p_i} \bigg[ (1-\alpha_i)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} + \eta_i v_i^{p_{i}} \bigg]. $$ and $$
{\mathcal S}_1=\sum_{i=i_0+1}^N \bigg(\epsilon \alpha_i \prod_{j\ne i} v_j^{\alpha_j} \bigg)^{p_i -q} v_{i}^{\alpha_i (p_i -q)- p_i} \bigg[ (1-\alpha_i)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} + \eta_i v_i^{p_{i}} \bigg]. $$
It is clear that ${\mathcal S}_1$ is finite, taking $\alpha_i>p_i/(p_i-q)$.\newline On the other hand, observe that the behaviour close to $\partial U$ is under control: since $v_i \rightarrow 0^+$ and $\frac{\partial}{\partial n_i} v_i <0 $ on $\partial U_i$, there exists $\delta>0$ small enough such that the quantity
$$ \bigg[ (1-\alpha_i)(p_i-1) \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}} + \eta_i v_i^{p_{i}} \bigg] <0\quad\mbox{in $U_i\setminus U_{i}^\delta$, for all $i=1,\ldots,i_0$}, $$ where $$ U_{i}^\delta:= \{ x_i \in U_i: \dist(x_i \, , \partial U_i) \ge \delta\}. $$ Moreover, ${\mathcal{S}}_0$ is bounded in $U \cap U_i^\delta$. Then, ${\mathcal{S}}_0$ is bounded from above in $U$ and we can conclude that $\lambda_*$ is finite. \end{itemize}
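We record, for the reader's convenience, the elementary exponent arithmetic used in the pointwise condition above and in the choices of $\alpha_i$ (this is only a rewriting of the exponents already displayed, not an additional assumption): \[ (\alpha_i-1)(p_i-1)-1-\alpha_i(q-1)=\alpha_i(p_i-q)-p_i, \qquad (\alpha_i-1)(p_i-1)+(p_i-1)-\alpha_i(q-1)=\alpha_i(p_i-q). \] In particular, whenever $\alpha_i>p_i/(p_i-q)$ (which requires $p_i>q$), the power $v_i^{\alpha_i(p_i-q)-p_i}$ stays bounded up to $\partial U_i$, while the factor $\epsilon^{p_i-q}$ tends to $0$ as $\epsilon\to 0^+$.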
\subsection{Supersolutions} Since $\Omega$ is bounded, we can choose a domain $U$ such that $$ \overline{\Omega} \subset U =\prod_{i=1}^{N} U^i, \qquad U^i=(a_i,b_i), \quad a_i,b_i \in \mathbb{R}. $$ Now for $M>0$ we consider the function $$ \overline{u}(x):= M \prod_{i=1}^{N} v_i(x_i), \quad \quad x \in \Omega, $$ where the $v_{i}$ are the principal eigenfunctions of the $p_i$-Laplacian in $U^{i}$, whose principal eigenvalue we denote by $\eta^i$. Observe that $$\overline{u}|_{\partial \Omega} >0. $$ Then, $\overline{u}$ is a supersolution to (\ref{intro}) if, for all $ 0 \leq \phi \in W^{1,\bf{p}}_0(\Omega)$, it holds $$ \lambda \int_{\Omega} M^{q-1} \prod_{i=1}^{N} v_{i}^{q-1} \phi \, dx \leq
\int_{\Omega} \sum_{i=1}^{N} \bigg| \frac{\partial \overline{u}}{\partial x_i} \bigg|^{p_i- 2}\frac{\partial \overline{u}}{\partial x_i} \, \frac{\partial \phi }{\partial x_i} \, dx. $$ It is clear that \[
\int_{\Omega} \sum_{i=1}^{N} \bigg| \frac{\partial \overline{u}}{\partial x_i} \bigg|^{p_i -2} \frac{\partial \overline{u}}{\partial x_i} \, \frac{\partial \phi }{\partial x_i} \, dx=\] \[
\int_{\Omega}\sum_{i=1}^{N}\bigg[M \, \bigg( \prod_{j\ne i} v_j \bigg) \bigg]^{p_{i}-1} \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_{i}-2} \frac{\partial v_i}{\partial x_i}\, \frac{\partial \phi }{\partial x_i} \, dx=\] \[
-\sum_{i=1}^{N} \int_{\Omega} \bigg(M \prod_{j \ne i} v_j^{} \bigg)^{p_i -1} \frac{\partial}{\partial x_i} \bigg( \bigg| \frac{\partial v_i}{\partial x_i} \bigg|^{p_i -2} \frac{\partial v_i}{\partial x_i} \bigg) \phi \, dx=\] \[ \sum_{i=1}^{N} \eta^{i}\int_{\Omega} \bigg(M \prod_{j =1}^{N} v_j \bigg)^{p_i -1} \phi \, dx \] Thus we may ask for the strong condition \begin{equation} \label{lambdasup} \lambda^*:= \sum_{i=1}^{N} \eta^i \bigg(M \prod_{j =1}^N v_j^{} \bigg)^{p_i -q} \ge \lambda. \end{equation} Hence, if $ 1 <q < p_{N}$ by letting $M\rightarrow \infty$ we have that $\overline{u}$ is a super solution $\forall \lambda >0$.
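We make the limit procedure explicit, for the reader's convenience (we only use that, since $\Omega$ is bounded, $U$ can be chosen so that $\overline{\Omega}\subset U$, whence $c:=\min_{\overline{\Omega}}\prod_{j=1}^{N}v_j>0$): for every $x\in\Omega$, \[ \lambda^*(x)=\sum_{i=1}^{N}\eta^i\Big(M\prod_{j=1}^{N}v_j(x_j)\Big)^{p_i-q}\ \ge\ \eta^N\,(Mc)^{p_N-q}\ \longrightarrow\ +\infty \quad\mbox{as } M\to\infty, \] since $p_N>q$ and all the terms in the sum are nonnegative. Hence, for any fixed $\lambda>0$, condition \eqref{lambdasup} holds provided $M$ is large enough.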
\begin{proof}[Proof of Theorem \ref{main}] \quad \begin{enumerate} \item Assume $1<q<p_1$. Fix $\lambda>0$. Then, we can choose $\epsilon>0$ small and $M$ large enough such that $\underline{u}$ and $\overline{u}$ are, respectively, a subsolution and a supersolution of (\ref{intro}) and $\underline{u}\leq \overline{u}$ in $\Omega$. Theorem \ref{teoss} ensures the existence of a solution $u$ of (\ref{intro}) such that $\underline{u}\leq u\leq \overline{u}$. This completes this case. \item Assume $p_1\leq q <p_N$. In this case, taking for example $\epsilon=1$, we have that $\underline{u}$ is a subsolution provided that $\lambda\geq\lambda_*$ for some $\lambda_*$. On the other hand, we can take $M$ large such that $\overline{u}$ is a supersolution and $\underline{u}\leq \overline{u}$. Thus, there exists a positive solution for $\lambda \geq \lambda_*$.\newline Now, we define $$ \Lambda:= \inf \{\lambda: \mbox{(\ref{intro}) possesses at least one positive solution}\}. $$ We have proved that $\Lambda<\infty$. For $p_1<q<p_N$, in \cite{Monte} it was proved that $0<\Lambda$. We show now that this is also true for $q=p_1$. Indeed, let us consider $q=p_1$, multiply the equation \eqref{intro} by $u$ and integrate over $\Omega$ to obtain \[
\sum_{i=1}^N \int_{\Omega} \bigg| \frac{\partial u}{\partial x_i} \bigg|^{p_i} dx = \sum_{i=1}^N \bigg| \bigg| \frac{\partial u}{\partial x_i} \bigg| \bigg|^{p_i}_{p_i}= \lambda ||u||_{p_1}^{p_1}= \lambda \int_{\Omega} |u|^{p_1} dx \] Now we use the embedding (\ref{embedding}) on $r=p_1$ to get \[
\bigg( \frac{d^1 p_1}{2} \bigg)^{-p_1} ||u||_{p_1}^{p_1} \leq \bigg| \bigg| \frac{\partial u}{\partial x_1} \bigg| \bigg|^{p_1}_{p_1}+ \sum_{i=2}^N \bigg| \bigg| \frac{\partial u}{\partial x_i} \bigg| \bigg|^{p_i}_{p_i}= \lambda ||u||_{p_1}^{p_1} \] and thus \[
||u||_{p_1}^{p_1} \bigg[ \lambda - \bigg( \frac{2}{d^1 p_1} \bigg)^{p_1} \bigg] \geq 0. \]
But if $\lambda< \bigg( \frac{2}{d^1 p_1} \bigg)^{p_1} $ the quantity within square brackets is negative, which forces $||u||_{p_1} = 0$ and contradicts the positivity of $u$.\newline We now prove that for all $\lambda>\Lambda$ there exists a positive solution. Indeed, fix $\lambda_0>\Lambda$. Then, by definition of $\Lambda$, there exist $\mu\in(\Lambda,\lambda_0)$ and a positive solution, denoted by $u_\mu$, of (\ref{intro}) for $\lambda=\mu$. Since $\mu<\lambda_0$, it is clear that $u_\mu$ is a subsolution of (\ref{intro}) for $\lambda=\lambda_0$. On the other hand, for $M$ large, there exists a supersolution $\overline{u}$ of (\ref{intro}) for $\lambda=\lambda_0$. Finally, thanks to regularity results, see for instance Proposition 4.1 in \cite{ElHam} or Lemma 2.4 in \cite{giosusbsup}, we have that $u_\mu\in L^\infty(\Omega)$. Hence, for $M$ large, $u_\mu\leq \overline{u}$, and we can conclude that there exists a positive solution for $\lambda=\lambda_0$. This completes the proof. \end{enumerate} \end{proof}
\begin{remark} Since our subsolution $\underline{u}$ is strictly positive in $U$, we have by Theorem \ref{teoss} that $u \ge \underline{u}>0$ in a nonempty open set contained in $\Omega$. In the case $p_1\ge 2$, by Corollary 4.4 in \cite{Monte} we have $u>0$ in $\Omega$. \end{remark}
\begin{remark} We comment on a possible further generalization.\newline Let $ x=(x_{1},\ldots,x_N)$, where $x_i \in \Omega_i \subset \mathbb{R}^{N_i}$, with $\Omega_i$ an open, bounded and convex domain. Denote by $\nabla_{x_i}$ the gradient with respect to the variables $x_i$ and by $\text{div}_{x_i}$ the corresponding divergence, and let $$
\Delta_{p_i} u= {\text{div}}_{x_i} (|\nabla_{x_i} u|^{p_i-2}\nabla_{x_i} u)= \sum_{j=1}^{N_i}\frac{\partial}{\partial x_{ij}}\left( \left|\frac{\partial u}{\partial x_{ij}}\right|^{p_i-2}\frac{\partial u}{\partial x_{ij}} \right) $$ be the $p_i$-Laplacian acting on the $x_i$ vector. Problems of the kind of
\begin{equation} \label{generalisation} \left\{\begin{array}{ll} -\displaystyle \sum_{i=1}^N \Delta_{p_i} u =\lambda u^{q-1} &\mbox{in $\Omega= \prod {\Omega_i}$,}\\ u=0 & \mbox{on $\partial\Omega$,} \end{array} \right. \end{equation} \noindent can be treated with the same technique, using properties of the $p_i$-Laplacian principal eigenfunctions, with integrability conditions such as \eqref{cosa} following from recent regularity results obtained in \cite{Cia}. \end{remark}
\section*{Acknowledgments}
The authors wish to thank Professors A. Cianchi and V. Vespri for enlightening conversations about the problem. Moreover, the authors acknowledge IMUS (Mathematics Institute of the University of Seville), IEMath-GR (Mathematics Institute of the University of Granada) and the University of Cadiz for supporting a PhD course from which the collaboration arose. S. Ciani is partially funded by the INdAM group GNAMPA, A. Su\'arez by PGC2018-098308-B-I00 (MCI/AEI/FEDER, UE) and G. M. Figueiredo by CNPQ, CAPES and FAP-DF.
\end{document}
\begin{document}
\begin{abstract} We study the Cauchy problem associated to a family of nonautonomous semilinear equations in the space of bounded and continuous functions over $\mathbb R^d$ and in $L^p$-spaces with respect to tight evolution systems of measures. Here, the linear part of the equation is a nonautonomous second-order elliptic operator with unbounded coefficients defined in $I\times \mathbb R^d$, ($I$ being a right-halfline). To the above Cauchy problem we associate a nonlinear evolution operator, which we study in detail, proving some summability improving properties. We also study the stability
of the null solution to the Cauchy problem. \end{abstract}
\maketitle
\section{Introduction}
This paper is devoted to continuing the analysis started in \cite{AL}. We consider a family of linear second-order differential operators $\mathcal{A}(t)$ acting on smooth functions $\zeta$ as \begin{align}\label{oper} (\mathcal{A}(t)\zeta)(x)=\sum_{i,j=1}^d q_{ij}(t,x)D_{ij}\zeta(x)+ \sum_{i=1}^d b_i(t,x)D_i\zeta(x),\qquad\;\,t \in I,\,\, x\in \mathbb R^d, \end{align} where $I$ is either an open right halfline or the whole ${\mathbb R}$. Then, given $T>s \in I$, we are interested in studying the nonlinear Cauchy problem \begin{equation} \left\{ \begin{array}{ll} D_tu(t,x)=(\mathcal{A}(t)u)(t,x)+\psi_u(t,x), & (t,x)\in (s,T]\times\mathbb R^d,\\[1mm] u(s,x)= f(x), &x \in \mathbb R^d, \end{array} \right. \label{n-sm-pb} \end{equation} where $\psi_u(t,x)=\psi(t,x,u(t,x),\nabla_xu(t,x))$. We assume that the coefficients $q_{ij}$ and $b_i$ ($i,j=1,\dots,d$), possibly unbounded, are smooth enough, the diffusion matrix $Q=[q_{ij}]_{i,j=1, \dots,d}$ is uniformly elliptic and there exists a Lyapunov function $\varphi$ for $\mathcal{A}(t)$ (see Hypothesis \ref{base}(iii)). These assumptions guarantee that the linear part $\mathcal{A}(t)$ generates a linear evolution operator $\{G(t,s):\, t\geq s\in I\}$ in $C_b(\mathbb R^d)$. More precisely, for every $f\in C_b(\mathbb R^d)$ and $s \in I$, the function $G(\cdot,s)f$ belongs to $C_b([s,+\infty)\times \mathbb R^d)\cap C^{1,2}((s,+\infty)\times\mathbb R^d)$, it is the unique bounded classical solution of the Cauchy problem \eqref{n-sm-pb}, with $\psi\equiv 0$, and satisfies the estimate \begin{equation}\label{norm_g}
\|G(t,s)f\|_\infty \le \|f\|_\infty,\qquad\;\, t>s\in I,\;\, f \in C_b(\mathbb R^d). \end{equation}
We refer the reader to \cite{KunLorLun09Non} for the construction of the evolution operator $G(t,s)$ and for further details.
Classical arguments can be adapted to our case to prove the existence of a unique local mild solution $u_f$ of problem \eqref{n-sm-pb} for any $f\in C_b(\mathbb R^d)$, i.e., a function $u:[s,\tau]\times {\mathbb R}^d\to {\mathbb R}$ (for some $\tau>s$) such that \begin{align}\label{defi_mild} u(t,x)= (G(t,s)f)(x)+\int_s^t(G(t,r)\psi_u(r,\cdot))(x)dr,\qquad\;\,t\in [s,\tau],\;\,x\in\mathbb R^d. \end{align}
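To fix ideas, we briefly recall the classical fixed-point scheme behind this statement. This is only a sketch: the precise assumptions (in particular the uniform Lipschitz continuity of $\psi(t,x,\cdot,\cdot)$ and the gradient estimate for $G(t,s)$ used below, which we take here as given) are those of Section 2 and of \cite{KunLorLun09Non}. On the space of functions $u$ which are bounded and continuous in $[s,\tau]\times\mathbb R^d$ and satisfy $\sup_{t\in(s,\tau]}\sqrt{t-s}\,\|\nabla_xu(t,\cdot)\|_\infty<+\infty$, one considers the map \[ (\Gamma u)(t,\cdot)=G(t,s)f+\int_s^t G(t,r)\psi_u(r,\cdot)\,dr,\qquad t\in[s,\tau]. \] Using $|\psi_u-\psi_v|\le L(|u-v|+|\nabla_xu-\nabla_xv|)$ together with \eqref{norm_g} and an estimate of the form $\|\nabla_xG(t,r)g\|_\infty\le C(t-r)^{-1/2}\|g\|_\infty$, one checks that $\Gamma$ maps this space into itself and is a contraction if $\tau-s$ is small enough; its unique fixed point is the local mild solution $u_f$.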
Under reasonable assumptions such a mild solution $u_f$ is classical, defined in the whole $[s,+\infty)$ and satisfies the condition \begin{eqnarray*}
\|u_f\|_\infty+\sup_{t \in (s,T)}\sqrt{t-s}\|\nabla_x u_f(t, \cdot)\|_\infty<+\infty \end{eqnarray*} for any $T>s$. Hence, setting ${\mathcal N}(t,s)f=u_f(t,\cdot)$ for any $t>s$ we deduce that ${\mathcal N}(t,s)$ maps $C_b(\mathbb R^d)$ into $C^1_b(\mathbb R^d)$ and, from the uniqueness of the solution to \eqref{n-sm-pb}, it follows that it satisfies the evolution law ${\mathcal N}(t,s)f={\mathcal N}(t,r){\mathcal N}(r,s)f$ for any $r\in (s,t)$ and $f\in C_b(\mathbb R^d)$.
As in the linear case we are also interested to set problem \eqref{n-sm-pb} in an $L^p$-context. However, as it is already known from the linear case, the most natural $L^p$-setting where problems with unbounded coefficients can be studied is that related to the so-called {\it evolution systems of measures} (\cite{DaPRoc}), that is one-parameter families of Borel probability measures $\{\mu_t: t \in I\}$ such that \begin{equation}\label{inv_pro} \int_{\mathbb R^d} G(t,s)fd\mu_t= \int_{\mathbb R^d} fd\mu_s,\qquad\;\,f\in C_b(\mathbb R^d),\;\,t>s \in I. \end{equation}
When they exist, evolution families of measures are in general infinitely many, even the tight ones, where, roughly speaking, tight means that all the measures of the family are essentially concentrated on the same large ball (see Section \ref{sect-2} for a rigorous definition of tightness). Under additional assumptions on the coefficients of the operator $\mathcal{A}(t)$ (see Section \ref{sect-2}), there exists a unique tight evolution system of measures $\{\mu_t: t \in I\}$, which has the peculiarity to be the unique system related to the asymptotic behavior of $G(t,s)$ as $t$ tends to $+\infty$. We also mention that, typically, even if for $t\neq s$, the measures $\mu_t$ and $\mu_s$ are equivalent (being equivalent to the restriction of the Lebesgue measure to the Borel $\sigma$-algebra in $\mathbb R^d$), the corresponding $L^p$-spaces differ. In this paper we consider any tight evolution system of measures.
Formula \eqref{inv_pro} and the density of $C_b(\mathbb R^d)$ in $L^p(\mathbb R^d, \mu_s)$ allow to extend $G(t,s)$ to a contraction from $L^p(\mathbb R^d, \mu_s)$ to $L^p(\mathbb R^d, \mu_t)$ for any $t>s$ and any $p \in [1, +\infty)$ and to prove very nice properties of $G(t,s)$ in these spaces.
In view of these facts, it is significant to extend $\mathcal{N}(t,s)$ to an operator from $L^p(\mathbb R^d,\mu_s)$ to $L^p(\mathbb R^d, \mu_t)$ for any $I\ni s<t$. This can be done if $p\ge p_0$ (see Hypothesis \ref{base} (v)), $\psi(t,x, \cdot, \cdot)$ is Lipschitz continuous in ${\mathbb R}^{d+1}$ uniformly with respect to $(t,x)\in (s,T]\times\mathbb R^d$ and, in addition, $\sup_{t\in (s, T]}\sqrt{t-s}\|\psi(t,\cdot,0,0)\|_{L^p(\mathbb R^d, \mu_t)}<+\infty$. In particular, each operator ${\mathcal N}(t,s)$ is continuous from $L^p(\mathbb R^d,\mu_s)$ to $W^{1,p}(\mathbb R^d,\mu_t)$.
We stress that the first condition on $\psi$ may seem too restrictive, but in fact it is not. Indeed, the Sobolev embedding theorems fail to hold, in general, when the Lebesgue measure is replaced by any of the measures $\mu_t$. This can be easily seen in the particular case of the one-dimensional Ornstein-Uhlenbeck operator, where the evolution system of measures is replaced by a time-independent measure $\mu$ (the so-called invariant measure), which is the Gaussian centered at zero with covariance $1/2$. For any $\varepsilon>0$, the function $x\mapsto \exp(2(2p+\varepsilon)^{-1}|x|^2)$ belongs to $W^{k,p}({\mathbb R},\mu)$ for any $k\in{\mathbb N}$ but it does not belong to $L^{p+\varepsilon}({\mathbb R},\mu)$.
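For the reader's convenience, here is the elementary computation behind this example (with $d\mu=\pi^{-1/2}e^{-x^2}dx$ and $f(x)=\exp(2(2p+\varepsilon)^{-1}x^2)$): \[ \int_{\mathbb R}|f|^p\,d\mu=\pi^{-1/2}\int_{\mathbb R}e^{-\frac{\varepsilon}{2p+\varepsilon}x^2}\,dx<+\infty, \qquad \int_{\mathbb R}|f|^{p+\varepsilon}\,d\mu=\pi^{-1/2}\int_{\mathbb R}e^{\frac{\varepsilon}{2p+\varepsilon}x^2}\,dx=+\infty, \] and, since every derivative of $f$ is a polynomial multiple of $f$, the same Gaussian decay argument shows that $f\in W^{k,p}({\mathbb R},\mu)$ for every $k\in{\mathbb N}$.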
Under the previous assumptions, for any $f\in L^p(\mathbb R^d,\mu_s)$, ${\mathcal N}(\cdot,s)$ can be identified with the unique mild solution to problem \eqref{n-sm-pb} which belongs to $L^p((s,T)\times \mathbb R^d,\mu)\cap W^{0,1}_p(J\times \mathbb R^d,\mu)$, for any $J\Subset (s,T]$, such that $u_f(t,\cdot)\in W^{1,p}(\mathbb R^d,\mu_t)$ for almost every $t\in (s,T]$. Here, $\mu$ is the unique Borel measure on the $\sigma$-algebra of all the Borel subsets of $I\times{\mathbb R}^d$ which extends the map defined on the product of a Borel set $A\subset I$ and a Borel set $B\subset\mathbb R^d$ by \begin{equation*} \mu(A \times B):=\int_A\mu_t(B)dt. \end{equation*}
Since, as it has been stressed, in this context the Sobolev embedding theorems fail to hold in general, the summability improving properties of the nonlinear evolution operator $\mathcal{N}(t,s)$ are not immediate and true in all the cases. For this reason in Section \ref{4} we investigate properties such as hypercontractivity, supercontractivity, ultraboundedness of the evolution operator $\mathcal{N}(t,s)$ and its spatial gradient. Differently from \cite{AL}, where $\psi=\psi(t,u)$ and the hypercontractivity of $\mathcal{N}(t,s)$ is proved assuming $\psi(t,0)=0$ for any $t>s$, here we consider a more general case. More precisely we assume that there exist $\xi_0\ge 0$ and $\xi_1,\xi_2 \in {\mathbb R}$ such that
$u\psi(t,x,u,v)\le \xi_0|u|+\xi_1u^2+\xi_2|u||v|$
for any $t\ge s$, $x,v\in\mathbb R^d$ and $u\in{\mathbb R}$. Under some other technical assumptions on the growth of the coefficients $q_{ij}$ and $b_i$ $(i,j=1,\ldots,d)$ as $|x|\to +\infty$, we show that, as in the linear case (see \cite{AngLorOnI, AngLorLun}), the hypercontractivity and the supercontractivity of $\mathcal{N}(t,s)$ and $\nabla_x\mathcal{N}(t,s)$ are related to some logarithmic Sobolev inequalities with respect to the tight system $\{\mu_t:\,\,t\in I\}$. These estimates are the natural counterpart of the Sobolev embedding theorems in the context of invariant measures and evolution systems of measures.
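A model nonlinearity satisfying the sign condition above (given here only as an illustration, not taken from the quoted references) is \[ \psi(t,x,u,v)=g(t,x)+c_1u+\langle b(t,x),v\rangle, \] with $g$ and $b$ bounded and $c_1\in{\mathbb R}$: indeed, $u\psi(t,x,u,v)\le\|g\|_\infty|u|+c_1u^2+\|b\|_\infty|u||v|$, so that one can take $\xi_0=\|g\|_\infty$, $\xi_1=c_1$ and $\xi_2=\|b\|_\infty$. Adding any term of the form $-h(u)$, with $h$ nondecreasing and $h(0)=0$, does not affect the condition, since $-uh(u)\le0$.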
For what concerns the ultraboundedness of $\mathcal{N}(t,s)$ and $\nabla_x \mathcal{N}(t,s)$, we first prove a Harnack-type estimate which establishes a pointwise estimate of $|{\mathcal N}(t,s)f|^p$ in terms of $G(t,s)|f|^p$ for any $f\in C_b(\mathbb R^d)$, $p>p_0$ and $t>s$. This estimate, together with the evolution law and the ultraboundedness of $G(t,s)$, allows us to conclude that, for any $f\in L^p(\mathbb R^d,\mu_s)$
and any $t>s$, the function ${\mathcal N}(t,s)f$ belongs to $W^{1,\infty}(\mathbb R^d,\mu_t)$ and to prove an estimate of $\|{\mathcal N}(t,s)f\|_{W^{1,\infty}(\mathbb R^d,\mu_t)}$ in terms of $\|f\|_{L^p(\mathbb R^d,\mu_s)}$.
Finally, assuming that $\psi(t,x,0,0)= 0$ for every $t\in (s,+\infty)$ and $x \in \mathbb R^d$, we prove that the trivial solution to the Cauchy problem \eqref{n-sm-pb} is exponentially stable both in $W^{1,p}(\mathbb R^d, \mu_t)$ and in $C^1_b(\mathbb R^d)$. This means that $\|u_f(t,\cdot)\|_X\le C_Xe^{-\omega_Xt}$ as $t\to +\infty$ for some constants $C_X>0$ and $\omega_X>0$, both when $X=W^{1,p}(\mathbb R^d,\mu_t)$
and $X=C^1_b(\mathbb R^d)$. In the first case, the space $X$ itself depends on $t$. We stress that, under sufficient conditions on the coefficients of the operators ${\mathcal A}(t)$, which include their convergence at infinity, in \cite{AngLor10Com, LorLunSch16Str} it has been proved that the measure $\mu_t$ weakly$^*$ converges to a measure $\mu$, which turns out to be the invariant measure of the operator $\mathcal{A}_{\infty}$, whose coefficients are the limits as $t\to +\infty$ of the coefficients of the operator $\mathcal{A}(t)$. This gives more information on the convergence to zero of $\|u_f(t,\cdot)\|_{W^{1,p}(\mathbb R^d,\mu_t)}$ at infinity. We refer the reader also to \cite{LorLunZam10Asy} for the case of $T$-time periodic coefficients.
To get the exponential stability of the trivial solution in $C_b(\mathbb R^d)$, differently from \cite{AL}, where a nonautonomous version of the principle of linearized stability is used and more restrictive assumptions on $\psi$ are required, we let $p$ tend to $+\infty$ in the decay estimate of $\|u_f(t, \cdot)\|_{W^{1,p}(\mathbb R^d, \mu_t)}$, since all the constants appearing in this estimate admit a finite limit as $p$ tends to $+\infty$. In particular, we stress that we do not need any additional assumption on the differentiability of $\psi$ but, on the other hand, we require that the mild solution $u_f$ of \eqref{n-sm-pb} is actually classical.
\subsection*{Notations}
For $k\ge 0$, by $C^k_b(\mathbb R^d)$ we mean the space of the functions in $C^k(\mathbb R^d)$ which are bounded together with all their derivatives up to the $[k]$-th order.
$C^k_b(\mathbb R^d)$ is endowed with the norm $\|f\|_{C_b^k(\mathbb R^d)}=\sum_{|\alpha|\le [k]}\|D^\alpha f\|_{\infty}+\sum_{|\alpha|=[k]}[D^\alpha f]_{C_b^{k-[k]}(\mathbb R^d)}$, where $[k]$ denotes the integer part of $k$. When $k\notin {\mathbb N}$, we use the subscript ``loc'' to denote the space of all $f\in C^{[k]}(\mathbb R^d)$ such that the derivatives of order $[k]$ are $(k-[k])$-H\"older continuous in any compact subset of $\mathbb R^d$. Given an interval $J$, we denote by $B(J\times\mathbb R^d;{\rm Lip}({\mathbb R}^{d+1}))$ and $C^{\alpha/2,\alpha}(J\times \mathbb R^d)$ ($\alpha\in (0,1)$), respectively, the set of all functions $f:J\times\mathbb R^d\times{\mathbb R}\times\mathbb R^d\to{\mathbb R}$ such that $f(t,x,\cdot,\cdot)$ is Lipschitz continuous in ${\mathbb R}^{d+1}$, uniformly with respect to $(t,x)\in J\times\mathbb R^d$, and the usual parabolic H\"older space. The subscript ``loc'' has the same meaning as above.
We use the symbols $D_tf$, $D_i f$ and $D_{ij}f$ to denote respectively the time derivative $\frac{\partial f}{\partial t}$ and the spatial derivatives $\frac{\partial f}{\partial x_i}$ and $\frac{\partial^2f}{\partial x_i\partial x_j}$ for any $i,j=1, \ldots,d$.
The open ball in $\mathbb R^d$ centered at $ 0$ with radius $r>0$ and its closure are denoted by $B_r$ and $\overline{B}_r$, respectively. For any measurable set $A$, contained in ${\mathbb R}$ or in $\mathbb R^d$, we denote by $\mbox{$1\!\!\!\;\mathrm{l}$}_A$ the characteristic function of $A$. Finally, we write $A\Subset B$ when $A$ is compactly contained in $B$.
\section{Assumptions and preliminary results} \label{sect-2} Let $\{\mathcal{A}(t): t\in I\}$ be the family of linear second-order differential operators defined by \eqref{oper}. Our standing assumptions on the coefficients of the operators $\mathcal{A}(t)$ are listed here below.
\begin{hyp}\label{base} \begin{enumerate}[\rm (i)] \item The coefficients $q_{ij}, b_{i}$ belong to $C^{\alpha/2,1+\alpha}_{\rm loc}(I\times \mathbb R^d)$ for any $i,j=1,\dots,d$ and some $\alpha \in (0,1)$; \item for every $(t,x)\in I \times \mathbb R^d$, the matrix $Q(t,x)=[q_{ij}(t,x)]_{ij}$ is symmetric and there exists a function $\kappa:I\times\mathbb R^d\to{\mathbb R}$, with positive infimum $\kappa_0$, such that
$\langle Q(t,x)\xi,\xi\rangle\ge \kappa(t,x)|\xi|^2$ for any $(t,x)\in I\times\mathbb R^d$ and any $\xi\in\mathbb R^d$;
\item there exists a non-negative function $\varphi\in C^2({\mathbb R}^d)$, diverging to $+\infty$ as $|x|\to +\infty$, such that $(\mathcal{A}(t)\varphi)(x)\leq a-c\,\varphi(x)$ for any $(t,x)\in I\times \mathbb R^d$ and some positive constants $a$ and $c$; \item there exists a locally bounded function $\rho:I\to{\mathbb R}^+$ such that
$|\nabla_x q_{ij}(t,x)|\le \rho(t)\kappa(t,x)$ for any $(t,x) \in I\times \mathbb R^d$ and
any $i,j=1, \dots,d$, where $\kappa$ is defined in $(ii)$; \item there exists a function $r:I\times\mathbb R^d\to{\mathbb R}$ such that
$\langle\nabla_x b(t,x)\xi,\xi\rangle \le r(t,x) |\xi|^2$ for any $\xi \in \mathbb R^d$ and $(t,x) \in I\times \mathbb R^d$. Further, there exists $p_0\in (1,2]$ such that \begin{equation} +\infty>\sigma_{p_0}=\sup_{(t,x)\in I\times\mathbb R^d}\left (r(t,x)+\frac{d^3(\rho(t))^2\kappa(t,x)}{4\min\{p_0-1,1\}}\right ). \label{sigma-p0} \end{equation} \end{enumerate} \end{hyp}
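As a sanity check of Hypotheses \ref{base} (the following example is classical and is recalled only for illustration; it plays no role in the sequel), one can consider $Q(t,x)\equiv I$, $b(t,x)=-x$ and the Lyapunov function $\varphi(x)=|x|^2$. In this case
\begin{align*}
({\mathcal A}(t)\varphi)(x)=\Delta\varphi(x)-\langle x,\nabla\varphi(x)\rangle=2d-2|x|^2,\qquad\;\,x\in\mathbb R^d,
\end{align*}
so that item (iii) holds with $a=2d$ and $c=2$, while $\kappa\equiv 1$, $\rho\equiv 0$ and $r\equiv -1$, whence $\sigma_{p_0}=-1$ for every $p_0\in (1,2]$.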
\noindent Under Hypotheses \ref{base}(i)-(iii) (actually even under weaker assumptions) it is possible to associate an evolution operator $\{G(t,s):\, t\geq s\in I\}$ to the operator $\mathcal{A}(t)$ in $C_b(\mathbb R^d)$, as
described in the Introduction. The function $G(\cdot,\cdot)f$ is continuous in $\{(s,t,x)\in I\times I\times\mathbb R^d: s\le t\}$ and \begin{equation} (G(t,s)f)(x)=\int_{\mathbb R^d}f(y)p(t,s,x,dy),\qquad\;\, I\ni s<t,\;\, x\in\mathbb R^d, \label{repres-formula} \end{equation}
where $p(t,s,x,dy)$ are probability measures for any $I\ni s<t$ and $x\in\mathbb R^d$. This implies that $|G(t,s)f|^p\le G(t,s)(|f|^p)$ for any $I\ni s<t$, $f\in C_b(\mathbb R^d)$ and $p\ge 1$. Moreover, Hypotheses \ref{base}(iv) and (v) yield the pointwise gradient estimates \begin{equation}\label{point_p_D11}
|(\nabla_x G(t,s)f)(x)|^p \le e^{p\sigma_p(t-s)}(G(t,s)|\nabla f|^p)(x),\qquad\;\, f\in C^1_b(\mathbb R^d), \end{equation} \begin{align}
|(\nabla_x G(t,s)f)(x)|^p \le \left\{ \begin{array}{ll}
C_{\varepsilon}^pe^{p(\sigma_p+\varepsilon)(t-s)}(t-s)^{-\frac{p}{2}}(G(t,s)|f|^p)(x), & {\rm if}~\sigma_p<0,\\[2mm]
C_0^p(1+(t-s)^{-\frac{p}{2}})(G(t,s)|f|^p)(x), & {\rm otherwise}, \end{array} \right. \label{point_p_D} \end{align} for any $f\in C_b(\mathbb R^d)$, $t>s$, $x\in \mathbb R^d$, $p \in [p_0,+\infty)$, $\varepsilon>0$ and some positive constants $C_0$ and $C_{\varepsilon}$, where $\sigma_p$ is given by \eqref{sigma-p0}, with $p$ instead of $p_0$. We stress that the pointwise estimates \eqref{point_p_D11} and \eqref{point_p_D} have been proved with the constants $C_0$ and $C_{\varepsilon}$ also depending on $p$. Actually, these constants may be taken independent of $p$. Indeed, consider for instance estimate \eqref{point_p_D}. If $p\ge p_0$, then using the representation formula \eqref{repres-formula} we can estimate \begin{align*}
|\nabla_xG(t,s)f|^p=&(|\nabla_xG(t,s)f|^{p_0})^{\frac{p}{p_0}}\le (C_{p_0}^{p_0}(1+(t-s)^{-\frac{p_0}{2}})G(t,s)|f|^{p_0})^{\frac{p}{p_0}}\\
\le &2^{\frac{p-p_0}{p_0}}C_{p_0}^p(1+(t-s)^{-\frac{p}{2}})(G(t,s)|f|^{p_0})^{\frac{p}{p_0}}\\
\le &2^{\frac{p-p_0}{p_0}}C_{p_0}^p(1+(t-s)^{-\frac{p}{2}})G(t,s)|f|^p \end{align*} for any $t>s\in I$ and $f\in C_b(\mathbb R^d)$, and, hence, estimate \eqref{point_p_D} holds true with a constant which can be taken independent of $p$.
\begin{rmk} \label{rmk-2.4}{\rm If the diffusion coefficients are bounded and independent of $x$, then the pointwise gradient estimate \eqref{point_p_D11} holds true also with $p=1$ and $\sigma_1=r_0$, where $r_0$ is the supremum over $I\times\mathbb R^d$ of the function $r$ in Hypothesis \ref{base}(v).} \end{rmk}
Under Hypotheses \ref{base} we can also associate an evolution system of measures $\{\mu_t: t\in I\}$ with the operators ${\mathcal A}(t)$. Such a family of measures is tight, namely for every $\varepsilon>0$ there exists $r>0$ such that $\mu_s(\mathbb R^d\setminus B_r)<\varepsilon$ for any $s \in I$. The invariance property \eqref{inv_pro} and the density of $C_b(\mathbb R^d)$ in $L^p(\mathbb R^d, \mu_s)$, $s \in I$, allow us to extend $G(t,s)$ to a contraction from $L^p(\mathbb R^d,\mu_s)$ to $L^p(\mathbb R^d,\mu_t)$ for any $t>s$. As it has been stressed in the Introduction, in general there are infinitely many evolution systems of measures, but, under suitable assumptions, there exists a unique tight evolution system of measures. This is, for instance, the case when Hypotheses \ref{base} are satisfied as well as the following two conditions: \begin{enumerate} \item $q_{ij}$, $b_i$ belong to $C^{\alpha/2,1+\alpha}_{\rm loc}([a,+\infty)\times {\mathbb R}^d)$ for any $i,j=1,\ldots,d$ and some $a\in I$. Moreover, $q_{ij}\in C_b([a,+\infty)\times B_R)$ and $D_kq_{ij},b_j\in C_b([a,+\infty);L^p(B_R))$ for any $i,j,k\in\{1,\ldots,d\}$, $R>0$ and some $p>d+2$;
\item there exists a constant $c>0$ such that either $|Q(t,x)|\le c(1+|x|)\varphi(x)$ and
$\langle b(t,x),x\rangle \le c(1+|x|^2)\varphi(x)$ for any $(t,x)\in [a,+\infty)\times {\mathbb R}^d$, or the diffusion coefficients are bounded in $[a,+\infty)\times\mathbb R^d$. \end{enumerate}
For more details and the proofs of the results that we have mentioned, we refer the reader to \cite{KunLorLun09Non, LorLibro-2, LorLunSch16Str,LorZam}.
\section{The semilinear problem in a bounded time interval}\label{Sp} Given $I\ni s<T$, we are interested in studying the Cauchy problem \eqref{n-sm-pb} both in the case when $f\in C_b(\mathbb R^d)$ and in the case when $f\in L^p(\mathbb R^d,\mu_s)$.
Let us introduce the standing assumptions on $\psi$.
\begin{hyp}\label{hyp-1} \begin{enumerate}[\rm (i)] \item The function $\psi:{[s,T]}\times \mathbb R^d\times{\mathbb R}\times\mathbb R^d\to {\mathbb R}$ is continuous. Moreover, there exists $\beta \in [0,1)$ such that, for any $R>0$ and some constant $L_R>0$, \begin{align}
&|\psi(t,x,u_1,(t-s)^{-1/2}v_1)-\psi(t,x,u_2,(t-s)^{-1/2}v_2)|\notag\\
\le &L_R(t-s)^{-\beta}(|u_1-u_2|+|v_1-v_2|), \label{loc_Lip} \end{align} for any $t\in(s,T]$, $x \in \mathbb R^d$, $u_1,u_2\in [-R,R]$, $v_1, v_2\in \overline{B}_R$; \item the function $\psi(\cdot, \cdot,0,0)$ belongs to $C_b([s,T]\times\mathbb R^d)$. \end{enumerate} \end{hyp}
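By way of illustration (the following choice is purely hypothetical, with $\chi_0$ and $\chi_1$ bounded coefficients not appearing elsewhere in the paper; it is only meant to clarify the role of the weight $(t-s)^{-1/2}$ in \eqref{loc_Lip}), Hypotheses \ref{hyp-1} are satisfied, with $\beta=1/2$, by nonlinearities of the form $\psi(t,x,u,v)=\chi_0(t,x)u^2+\langle \chi_1(t,x),v\rangle$, where $\chi_0$ and $\chi_1$ are bounded and continuous in $[s,T]\times\mathbb R^d$: indeed, $\psi(\cdot,\cdot,0,0)\equiv 0$ and
\begin{align*}
&|\psi(t,x,u_1,(t-s)^{-1/2}v_1)-\psi(t,x,u_2,(t-s)^{-1/2}v_2)|\\
\le\;&2R\|\chi_0\|_{\infty}|u_1-u_2|+\|\chi_1\|_{\infty}(t-s)^{-\frac{1}{2}}|v_1-v_2|
\le L_R(t-s)^{-\frac{1}{2}}(|u_1-u_2|+|v_1-v_2|)
\end{align*}
for any $t\in (s,T]$, $u_1,u_2\in [-R,R]$ and $v_1,v_2\in\overline{B}_R$, with $L_R=\max\{2R\sqrt{T-s}\,\|\chi_0\|_{\infty},\|\chi_1\|_{\infty}\}$.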
\begin{thm}\label{exi_cb} Under Hypotheses $\ref{base}$ and $\ref{hyp-1}$, for any
$\overline{f}\in C_b(\mathbb R^d)$ there exist $r_0, \delta\in (0,T-s]$ such that, if $f \in C_b(\mathbb R^d)$ and $\|f-\overline{f}\|_{\infty}\le r_0$, then problem \eqref{n-sm-pb} admits a unique mild solution $u_f \in C_b([s,s+\delta]\times \mathbb R^d)\cap C^{0,1}((s, s+\delta]\times \mathbb R^d)$, which satisfies the estimate \begin{align}
&\|u_f\|_{\infty}+\sup_{t\in(s,s+\delta]}\sqrt{t-s}\|\nabla_x u_f(t,\cdot)\|_\infty\notag\\
\le &2(1+2C_0+C_0\sqrt{\delta})(\|f\|_{C_b(\mathbb R^d)}+2\delta\|\psi(\cdot,\cdot,0,0)\|_{C_b([s,s+\delta]\times\mathbb R^d)}), \label{st-gr} \end{align}
where $C_0$ is the constant in \eqref{point_p_D}. Moreover, for any $R>0$, $\theta\in (0,1)$ and $t\in (s,s+\delta]$, $u_f(t,\cdot)$ belongs to $C^{1+\theta}(B_R)$ and there exists a positive constant $C_{R,T-s}$ such that $\sup_{t\in (s,s+\delta]}(t-s)^{(1+\theta)/2}\|u_f(t,\cdot)\|_{C^{1+\theta}(B_R)}\le C_{R,T-s}\|f\|_{\infty}$. Finally, if
$g \in C_b(\mathbb R^d)$ is such that $\|g-\overline{f}\|_{\infty}\le r_0$, then \begin{equation}\label{dip_dati}
\|u_f-u_g\|_{\infty}+\sup_{t\in(s,s+\delta]}\sqrt{t-s}\|\nabla_x u_f(t,\cdot)-\nabla_x u_g(t,\cdot)\|_\infty
\le 2(1+C_0+C_0\sqrt{\delta}) \|f-g\|_{\infty}. \end{equation} \end{thm}
\begin{proof} Even if the proof is quite standard, for the reader's convenience we provide some details.
Fix $\overline{f}\in C_b(\mathbb R^d)$ and let $R_0>0$ be such that $R_0/(1+K_0) \ge 8\|\overline{f}\|_{\infty}$, where $K_0=C_0(1+\sqrt{T-s})$ and $C_0$ is the constant in \eqref{point_p_D}. Further, for any $\delta\in (0,T-s]$, let $Y_{\delta}$ be the set of all $u\in C_b([s,s+\delta]\times \mathbb R^d)\cap C^{0,1}((s,s+\delta)\times \mathbb R^d)$ such that
$\|u\|_{Y_{\delta}}=\|u\|_{C_b((s,s+\delta]\times \mathbb R^d)}+ \sup_{t\in (s,s+\delta]}\sqrt{t-s}\|\nabla_xu(t,\cdot)\|_\infty<+\infty$.
{\em Step 1.} Here, we prove that there exists $\delta>0$ such that, for any $f\in C_b(\mathbb R^d)$ satisfying the condition $\|f-\overline f\|_{\infty}\le r_0:=R_0/(4+4K_0)$, there exists a mild solution to problem \eqref{n-sm-pb} defined in the time interval $[s,s+\delta]$. For this purpose, we consider the operator $\Gamma$, defined by the right-hand side of \eqref{defi_mild} for any $u \in B_{Y_{\delta}}(R_0)$ (the ball of $Y_{\delta}$ centered at zero with radius $R_0$). Clearly, the function $\psi_u$ is continuous in $(s,s+\delta]\times \mathbb R^d$ and $\psi_u(t,\cdot)$ is bounded in $\mathbb R^d$ for any $t \in (s,s+\delta]$. Moreover, estimating
$|\psi_u(t,x)|\le |\psi_u(t,x)-\psi(t,x,0,0)|+|\psi(t,x,0,0)|$ and taking
\eqref{loc_Lip} into account, we can easily show that the function $t\mapsto (t-s)^{\beta}\|\psi_u(t,\cdot)\|_\infty$ is bounded in $(s,s+\delta)$. Hence, Proposition \ref{smoth_v} and estimates \eqref{norm_g} and \eqref{point_p_D} show that $\Gamma(u)\in Y_{\delta}$ for any $u\in B_{Y_{\delta}}(R_0)$. To show that, for a suitable $\delta\in (0,1]$, $\Gamma$ is a $1/2$-contraction in $B_{Y_{\delta}}(R_0)$, we observe that, using again \eqref{loc_Lip}, it follows that \begin{align}
\|\psi_u(t,\cdot)- \psi_v(t, \cdot)\|_\infty
\le L_{R_0}(t-s)^{-\beta}\|u-v\|_{Y_{\delta}},\qquad\;\,t\in (s,s+\delta], \label{stima-psi-neu} \end{align} for any $u, v \in B_{Y_{\delta}}(R_0)$, where $L_{R_0}$ is the constant in Hypothesis \ref{hyp-1}(i). From this inequality and estimates \eqref{norm_g} and \eqref{point_p_D} we conclude that
$\|\Gamma(u)-\Gamma(v)\|_{Y_{\delta}}\le c_1\delta^{1-\beta}\|u-v\|_{Y_{\delta}}$ for any $u,v\in B_{Y_{\delta}}(R_0)$, where $c_1$, as well as the forthcoming constants, is independent of $\delta$ and $u$, if not otherwise specified. Hence, choosing $\delta$ properly, we can make $\Gamma$ a $1/2$-contraction in $B_{Y_{\delta}}(R_0)$.
It is also straightforward to see that $\Gamma$ maps $B_{Y_{\delta}}(R_0)$ into itself, up to replacing $\delta$ with a smaller value if needed. It suffices to split $\Gamma(u)=(\Gamma(u)-\Gamma(0))+\Gamma(0)$, use the previous result and estimate
$\|\Gamma(0)\|_{Y_{\delta}}\le(1+C_0+C_0\sqrt{\delta})\|f\|_{\infty}+\delta(1+2C_0+C_0\sqrt{\delta})\|\psi(\cdot,\cdot,0,0)\|_{C_b([s,T]\times\mathbb R^d)}$. As a consequence, $\Gamma$ has a unique fixed point in $B_{Y_{\delta}}(R_0)$, which is a mild solution of \eqref{n-sm-pb} and satisfies \eqref{st-gr}.
{\em Step 2.} Here we prove the uniqueness of the mild solution $u_f$. For this purpose, let $u_1, u_2\in Y_{\delta}$ be two mild solutions. By Lemma \ref{lemm-brigida}, the function
$r \mapsto h(r):= \|u_1(r,\cdot)-u_2(r,\cdot)\|_\infty+ \sqrt{r-s}\|\nabla_x u_1(r,\cdot)-\nabla_x u_2(r,\cdot)\|_\infty$ is measurable in $(s,s+\delta)$. Moreover, using \eqref{stima-psi-neu}, we easily deduce that \begin{align}\label{diff-fun}
\|D_x^ju_1(t,\cdot)-D_x^ju_2(t,\cdot)\|_\infty \le c_2(M)\int_s^t (t-r)^{-\frac{j}{2}}(r-s)^{-\beta}h(r)dr, \end{align}
for $j=0,1$, any $t\in [s,s+\delta]$, where $M=\max\{\|u_1\|_{Y_{\delta}},\|u_2\|_{Y_{\delta}}\}$. Estimating $\sqrt{t-s}$ with $\sqrt{t-r}+\sqrt{r-s}$ for any $r \in (s,t)$, from \eqref{diff-fun}, with $j=1$, it follows that \begin{align}
&\sqrt{t-s}\|\nabla_x u_1(t,\cdot)-\nabla_x u_2(t,\cdot)\|_\infty \notag\\
\le &c_2(M)\int_s^t(r-s)^{-\beta}h(r)dr+c_2(M)\int_s^t(t-r)^{-\frac{1}{2}}(r-s)^{\frac{1}{2}-\beta}\|u_1(r,\cdot)\!-\!u_2(r,\cdot)\|_{\infty}dr\notag\\
&+c_2(M)\int_s^t(t-r)^{-\frac{1}{2}}(r-s)^{1-\beta}\|\nabla_xu_1(r,\cdot)\!-\!\nabla_xu_2(r,\cdot)\|_{\infty}dr. \label{ale} \end{align} Using \eqref{diff-fun} we estimate the last two integral terms in the right-hand side of \eqref{ale}, which we denote by ${\mathcal I}(t)$ and ${\mathcal J}(t)$. Replacing \eqref{diff-fun}, with $j=0$, in ${\mathcal I}(t)$, we get \begin{align} {\mathcal I}(t)&\le c_3(M)\delta^{1-\beta}\int_s^t(\sigma-s)^{-\beta}h(\sigma)d\sigma. \label{A} \end{align}
The same arguments show that ${\mathcal J}(t)$ can be estimated pointwise in $[s,s+\delta]$ by the right-hand side of \eqref{A}, with $c_3(M)$ being possibly replaced by a larger constant $c_4(M)$. Summing up, we have proved that \begin{align}
\sqrt{t-s}\|\nabla_x u_1(t,\cdot)-\nabla_x u_2(t,\cdot)\|_\infty\le c_5(M)\delta^{1-\beta}\int_s^t(\sigma-s)^{-\beta}h(\sigma)d\sigma. \label{ale-1} \end{align} From \eqref{diff-fun} and \eqref{ale-1} we conclude that \begin{eqnarray*} h(t)\le c_6(M,\delta)\int_s^t (r-s)^{-\beta}h(r)dr,\qquad\;\,t\in (s,s+\delta]. \end{eqnarray*} The generalized Gronwall lemma (see \cite{Gronwall}) yields that $h(t)\equiv 0$ for any $t \in (s,s+\delta)$, i.e., $u_1\equiv u_2$ in $(s,s+\delta)\times \mathbb R^d$.
{\em Step 3.} Here, we prove \eqref{st-gr} and \eqref{dip_dati}. Since $u_f=\Gamma(0)+(\Gamma(u_f)-\Gamma(0))$ and $\Gamma$ is a $1/2$-contraction in $B_{Y_{\delta}}(R_0)$, we conclude that
$\|u_f\|_{Y_{\delta}}\le 2\|\Gamma(0)\|_{Y_{\delta}}$ and \eqref{st-gr} follows from the estimate on $\|\Gamma(0)\|_{Y_{\delta}}$ proved above. Estimate \eqref{dip_dati} can be proved in the same way.
{\em Step 4.} Here, we prove that $u_f(t,\cdot)\in C^{1+\theta}(B_R)$ for any $t\in (s,s+\delta]$, $R>0$, $\theta\in (0,1)$, and $\sup_{t\in (s,s+\delta]}(t-s)^{(1+\theta)/2}\|u_f(t,\cdot)\|_{C^{1+\theta}(B_R)}\le c_7\|f\|_{\infty}$ for some constant $c_7$, independent of $f$. For this purpose, we observe that the results in the previous steps show that the function $\psi_u$ satisfies the estimate
$(t-s)^{\beta}\|\psi_u(t,\cdot)\|_\infty\leq c_8\|f\|_\infty$ for any $t\in (s,s+\delta]$, the constant $c_8$ being independent of $f$. Applying Proposition \ref{smoth_v} and estimate \eqref{28AL} we complete the proof. \end{proof}
\begin{coro} \label{coro-classica} In addition to the assumptions of Theorem $\ref{exi_cb}$, suppose that there exist $\beta\in [0,1)$ and $\gamma\in(0,1)$ such that $2\beta+\gamma<2$ and \begin{align}
|\psi(t,x,u,(t-s)^{-1/2}v)-\psi(t,y,u,(t-s)^{-1/2}v)|
\leq C_R(t-s)^{-\beta}|x-y|^\gamma, \label{hyp-holder} \end{align} for any $t\in(s,T]$, $x,y,v\in B_R$, $u\in [-R,R]$, any $R>0$ and some positive constant $C_R$. Then, for any $f\in C_b(\mathbb R^d)$ the mild solution $u_f$ to problem \eqref{n-sm-pb} belongs to $C^{1,2}((s,s+\delta]\times \mathbb R^d)$ and it is a classical solution to \eqref{n-sm-pb}. \end{coro}
\begin{proof} Fix $R>0$. Theorem \ref{exi_cb} shows that $u_f(t,\cdot)$ belongs to $C^{1+\gamma}(B_R)$
and $\|\nabla_xu_f(t,\cdot)\|_{C^{\gamma}(B_R)}\le C_R(t-s)^{-(1+\gamma)/2}\|f\|_{\infty}$ for any $t\in (s,s+\delta]$. Moreover, by interpolation from \eqref{st-gr} it follows that
$\|u_f(t,\cdot)\|_{C^{\gamma}_b(\mathbb R^d)}\le C(t-s)^{-\gamma/2}\|f\|_{\infty}$ for any $t\in (s,s+\delta]$. From these estimates, adding and subtracting $\psi(t,y,u(t,x),\nabla_xu(t,x))$, we deduce that
$|\psi_u(t,x)-\psi_u(t,y)|\leq C\|f\|_\infty (t-s)^{-\beta-\frac{\gamma}{2}}|x-y|^{\gamma}$
for any $t \in (s, s+\delta]$, $x,y\in\mathbb R^d$ such that $|x-y|\le R$ and some positive constant $C$, depending on $R$ and $u$. As a byproduct, $\|\psi_u(t,\cdot)\|_{C^{\gamma}(B_R)}\le \widetilde C(t-s)^{-\beta-\frac{\gamma}{2}}\|f\|_{\infty}$ for any $t \in (s, s+\delta]$ and some positive constant $\widetilde C$, depending on $R$ and $u$. Now, using Proposition \ref{smoth_v} we conclude that $u_f\in C^{1,2}((s,s+\delta]\times\mathbb R^d)\cap C^{0,2+\theta}_{\rm loc}((s,s+\delta]\times\mathbb R^d)$ for any $\theta<\gamma$, if $\gamma\le\alpha$, and for $\theta=\gamma$ otherwise.
\begin{rmk} \label{rem-3.5} {\rm Suppose that \eqref{loc_Lip} is replaced by the condition
$|\psi(\cdot,\cdot,u_1,v_1)-\psi(\cdot,\cdot,u_2,v_2)|\le L_R(|u_1-u_2|+|v_1-v_2|)$ in $[s,T]\times\mathbb R^d$, for any $R>0$, $u_1, u_2\in [-R,R]$, $v_1, v_2\in B_R$ and some positive constant $L_R$. Then, the proof of the previous theorem can be repeated verbatim with $Y_{\delta}=C^{0,1}_b([s,s+\delta]\times \mathbb R^d)$, endowed with the natural norm, and we can show that the mild solution to problem \eqref{n-sm-pb} belongs to $C^{0,1}_b([s,s+\delta]\times\mathbb R^d)$ and
$\|u_f\|_{C^{0,1}_b([s,s+\delta]\times\mathbb R^d)}\le \widetilde C_{\delta}\|f\|_{C_b^1(\mathbb R^d)}$ for some positive constant $\widetilde C_{\delta}$, independent of $f$.} \end{rmk}
We now provide some sufficient conditions for the mild solution to problem \eqref{n-sm-pb} to exist in the large. Such conditions will be crucial to define the nonlinear evolution operator associated with the Cauchy problem \eqref{n-sm-pb}. We introduce the following additional assumptions. \begin{hyp}\label{glob2} \begin{enumerate}[\rm (i)] \item For any $R>0$ there exists a positive constant $L_R$ such that
$|\psi(t,x,u_1,v_1)-\psi(t,x,u_2,v_2)|\le L_R(|u_2-u_1|+|v_2-v_1|)$ for any $t\in [s,T]$, $x\in\mathbb R^d$, $u_1, u_2\in [-R,R]$ and $v_1, v_2\in\mathbb R^d$; \item for any $\tau>s\in I$ there exist positive constants $k_0$, $k_1$ and $a$, and a function $\tilde\varphi\in C^2({\mathbb R}^d)$ with non-negative values and blowing up at infinity such that
$u\psi(t,x,u,v) \leq k_0(1+u^2)+k_1|u||v|$ and $\mathcal{A}\tilde\varphi+k_1|\nabla\tilde\varphi|\leq a\tilde\varphi$ in $\mathbb R^d$ for any $t\in [s,\tau]$, $x,v\in\mathbb R^d$ and $u\in{\mathbb R}$. \end{enumerate} \end{hyp}
In the rest of this section, for any $p\in [p_0,+\infty)$ and $T>s$ we denote by
$[\psi]_{p,T}$ the supremum over $(s,T)$ of the function $t\mapsto\sqrt{t-s}\,\|\psi(t,\cdot,0,0)\|_{L^p(\mathbb R^d, \mu_t)}$. The quantity $[\psi]_{\infty,T}$ is defined similarly,
replacing $L^p(\mathbb R^d,\mu_t)$ by $C_b(\mathbb R^d)$.
\begin{thm} \label{prop-3.8} Assume that Hypotheses $\ref{base}$, $\ref{hyp-1}(ii)$, $\ref{glob2}$ and condition \eqref{hyp-holder} are satisfied. Then, for any $f \in C_b(\mathbb R^d)$, the classical solution $u_f$ to problem \eqref{n-sm-pb} exists in $[s,T]$. If, further, the constant in Hypothesis $\ref{glob2}(i)$ is independent of $R$, then for any $p\in [p_0,+\infty]$, \begin{align}
&\sup_{t \in (s,T)}(\|u_f(t,\cdot)\|_{L^p(\mathbb R^d,\mu_t)}
+\sqrt{t-s}\,\|\nabla_x u_f (t,\cdot)\|_{L^p(\mathbb R^d, \mu_t)})\notag\\
\le &C_{T-s}(\|f\|_{L^p(\mathbb R^d, \mu_s)} +(\sqrt{T-s}+1)[\psi]_{p,T}), \label{dip-p-weak-0}\\[2mm]
&\sup_{t \in (s,T)}(\|u_f(t,\cdot)-u_g(t,\cdot)\|_{L^p(\mathbb R^d,\mu_t)}+\sqrt{t-s}\,\|\nabla_x u_f (t,\cdot)-\nabla_x u_g(t,\cdot)\|_{L^p(\mathbb R^d, \mu_t)})\notag\\
\le &C_{T-s}\|f-g\|_{L^p(\mathbb R^d, \mu_s)} \label{dip-p-weak} \end{align} for every $f,g \in C_b(\mathbb R^d)$, where $C_{\tau}=(\sqrt{\tau}+1)e^{d_1\tau^{3/2}+d_2}$ for some positive constants $d_1$ and $d_2$. \end{thm}
\begin{proof} We split the proof into two steps.
{\em Step 1.} Here, we prove that, for any $f\in C_b(\mathbb R^d)$, $u_f$ is defined in the whole $[s,T]$. For this purpose, we fix $f\in C_b(\mathbb R^d)$, denote by $[s,\tau_f)$ the maximal time domain where $u_f$ is defined and assume, by contradiction, that $\tau_f<T$.
We are going to prove that $u_f$ is bounded in $[s,\tau_f)\times\mathbb R^d$. Once this is proved, we can use Hypothesis \ref{glob2}(i) to deduce, adding and subtracting $\psi(t,x,0,0)$, that
$|\psi(t,x,u_f(t,x),v)|\le C(1+|v|)$
for $t \in [s,\tau_f)$, $x,v\in \mathbb R^d$ and some positive constant $C$, which depends on $\|u_f\|_{C_b([s,\tau_f)\times \mathbb R^d)}$ and $\|\psi(\cdot,\cdot,0,0)\|_{C_b([s,T]\times\mathbb R^d)}$. Applying the same arguments as in Step 2 of the proof of Theorem \ref{exi_cb}, we can show that also the function $t\mapsto \sqrt{t-s}\|\nabla_xu_f(t,\cdot)\|_{\infty}$ is bounded in $(s,\tau_f)$. This is enough to infer that $u_f$ can be extended beyond $\tau_f$, contradicting the maximality of the interval $[s,\tau_f)$.
To prove that $u_f$ is bounded in $(s,\tau_f)\times \mathbb R^d$, we fix $b\in(0,\tau_f-s)$, $\lambda>a+k_0$ and we set $v_n(t,x):=e^{-\lambda(t-s)}u_f(t,x)-n^{-1}\tilde\varphi(x)$ for any $(t,x)\in [s,s+b]\times\mathbb R^d$. A straightforward computation shows that \begin{align} D_tv_n-\mathcal{A} v_n =& e^{-\lambda (\cdot-s)}\psi(\cdot,\cdot,e^{\lambda(\cdot-s)}(v_n+n^{-1}\tilde\varphi),e^{\lambda(\cdot-s)}(\nabla_xv_n+n^{-1}\nabla\tilde\varphi))\notag\\ &-\lambda(v_n+n^{-1}\tilde\varphi)+n^{-1}\mathcal{A}\tilde\varphi, \label{mercato} \end{align} in $(s,s+b]\times\mathbb R^d$. Since $u_f$ is bounded in $[s,s+b]\times\mathbb R^d$ and $\tilde\varphi$ blows up at infinity, the function $v_n$ admits a maximum point $(t_n,x_n)$. If $v_n(t_n,x_n)\leq0$ for any $n$, then $u_f\leq0$ in $[s,s+b]\times\mathbb R^d$. Assume that $v_n(t_n,x_n)>0$ for some $n$. If $t_n=s$, then $v_n(t_n,x_n)\leq \sup_{\mathbb R^d}f$. If $t_n>s$ then $D_tv_n(t_n,x_n)-\mathcal{A}(t_n)v_n(t_n,x_n)\geq0$, so that, multiplying both the sides of \eqref{mercato} by $v_n(t_n,x_n)+n^{-1}\tilde\varphi(x_n)>0$ and using Hypotheses \ref{glob2}(ii) we get $0\le (-\lambda+k_0+a)(v_n(t_n,x_n)+n^{-1}\tilde\varphi(x_n))^2+k_0$, which clearly implies that, also in this case, $u$ is bounded from above in $[s,s+b]$ by a constant, independent of $b$.
Repeating the same arguments with $u_f$ being replaced by $-u_f$, we conclude that $u_f$ is bounded from below as well by a constant independent of $b$. Since $b$ is arbitrary, it follows that $\|u_f\|_{C_b((s,\tau_f)\times\mathbb R^d)}<+\infty$ as claimed.
{\em Step 2.} Fix $f, g\in C_b(\mathbb R^d)$, $p\ge p_0$, and let $\opnorm{\cdot}_p$ be the norm defined by
$\opnorm{v}_p=\sup_{t\in (s,T)}e^{-\omega(t-s)}(\|v(t,\cdot)\|_{L^p(\mathbb R^d,\mu_t)}+\sqrt{t-s}\|\nabla_xv(t,\cdot)\|_{L^p(\mathbb R^d,\mu_t)})$ on smooth functions $v$, where $\omega$ is a positive constant to be chosen later on; to fix the ideas, we assume that $p<+\infty$. From Hypothesis \ref{glob2}(i), where $L_R$ is replaced by a constant $L$, it follows that
$\|\psi_{u_g}(r,\cdot)-\psi_{u_f}(r,\cdot)\|_{L^p(\mathbb R^d,\mu_r)}\le L\|u_g(r,\cdot)-u_f(r,\cdot)\|_{W^{1,p}(\mathbb R^d,\mu_r)}$ for any $r \in (s,T]$. Hence, recalling that each operator $G(t,r)$ is a contraction from $L^p(\mathbb R^d,\mu_r)$ to $L^p(\mathbb R^d,\mu_t)$ and using the second pointwise gradient estimate in \eqref{point_p_D} and the invariance property of the family $\{\mu_t: t\in I\}$, we conclude that \begin{align}\label{commissioni} &\opnorm{u_f-u_g}_p\notag\\ \le & \opnorm{G(\cdot,s)(f-g)}_p+ L\opnorm{u_f-u_g}_p\sup_{t\in (s,T)}\int_s^te^{\omega(r-t)}\bigg (1+\frac{1}{\sqrt{r-s}}\bigg )dr\notag\\ &+LC_0\opnorm{u_f-u_g}_p\sup_{t\in (s,T)}\sqrt{t-s}\int_s^te^{\omega(r-t)}\bigg (1+\frac{1}{\sqrt{t-r}}\bigg )\bigg (1+\frac{1}{\sqrt{r-s}}\bigg )dr\notag\\
\le &[1+C_0(1+\sqrt{T-s})]\|f-g\|_{L^p(\mathbb R^d,\mu_s)}\notag\\ &+L\opnorm{u_f-u_g}_p\bigg [(1+C_0\sqrt{T-s})\bigg (\frac{1}{\omega} +\int_s^t\frac{e^{\omega(r-t)}}{\sqrt{r-s}}dr\bigg )+\frac{\sqrt{\pi}}{\sqrt{\omega}} C_0\sqrt{T-s}\notag\\ &\qquad\qquad\qquad\quad\;\;\;\,+C_0 \sup_{t\in (s,T)}\sqrt{t-s}\int_s^t\!\frac{e^{\omega(r-t)}dr}{\sqrt{t-r}\sqrt{r-s}}\bigg ]. \end{align}
To estimate the integral terms in the last side of \eqref{commissioni}, we fix $\delta>0$ and observe that \begin{align} \int_s^t \frac{e^{\omega(r-t)}}{\sqrt{r-s}}dr =\int_s^{s+\delta} \frac{e^{\omega(r-t)}}{\sqrt{r-s}}dr+\left (\int_{s+\delta}^t \frac{e^{\omega(r-t)}}{\sqrt{r-s}}dr\right )^+\le 2\sqrt{\delta}+\frac{1}{\sqrt{\delta}\omega}. \label{acqua-6a} \end{align} Hence, minimizing over $\delta>0$ (the minimum of the right-hand side of \eqref{acqua-6a} is attained at $\delta=(2\omega)^{-1}$) we conclude that the left-hand side of \eqref{acqua-6a} is bounded from above by $\sqrt{8}\omega^{-1/2}$. Splitting $\sqrt{t-s}\le \sqrt{t-r}+\sqrt{r-s}$ and arguing as above, also the last term in square brackets in the last side of \eqref{commissioni} can be estimated by $(\sqrt{8}+\sqrt{\pi})\omega^{-1/2}$. It thus follows that \begin{align*}
\opnorm{u_f-u_g}_p\le &[1+C_0(1+\sqrt{T-s})]\|f-g\|_{L^p(\mathbb R^d,\mu_s)}\\ &+L(c_{T-s}\omega^{-1/2}+(1+C_0\sqrt{T-s})\omega^{-1})\opnorm{u_f-u_g}_p, \end{align*} where $c_{\tau}=(\sqrt{8}+\sqrt{\pi})C_0(\sqrt{\tau}+1)+\sqrt{8}$. Choosing $\omega$ such that $c_{T-s}\omega^{-1/2}+(1+C_0\sqrt{T-s})\omega^{-1}\le (2L)^{-1}$, we obtain \begin{eqnarray*}
\opnorm{u_f-u_g}_p\le 2[1+C_0(1+\sqrt{T-s})]\|f-g\|_{L^p(\mathbb R^d,\mu_s)} \end{eqnarray*} and estimate \eqref{dip-p-weak} follows at once. Estimate \eqref{dip-p-weak-0} can be proved likewise. Hence, the details are omitted. \end{proof}
As a consequence of Theorem \ref{prop-3.8} we prove the existence of a mild solution to problem \eqref{n-sm-pb} in the time domain $(s,T)$ when $f\in L^p(\mathbb R^d,\mu_s)$, that is a function $u_f\in L^p((s,T)\times \mathbb R^d,\mu)\cap W^{0,1}_p(J\times \mathbb R^d,\mu)$, for any $J\Subset (s,T]$, such that $u_f(t,\cdot)\in W^{1,p}(\mathbb R^d,\mu_t)$ for almost every $t\in (s,T]$ and, for such values of $t$, the equality \begin{align*} u_f(t,x)= (G(t,s)f)(x)+\int_s^t(G(t,r)\psi_u(r,\cdot))(x)dr, \end{align*} holds true in $\mathbb R^d\setminus A_t$, where $A_t$ is negligible with respect to the measure $\mu_t$ (or, equivalently, with respect to the restriction of the Lebesgue measure to the Borel $\sigma$-algebra in $\mathbb R^d$).
\begin{coro} \label{coro-Lp} Under all the assumptions of Theorem $\ref{prop-3.8}$, for any $f\in L^p(\mathbb R^d,\mu_s)$ $(p\ge p_0)$ there exists a unique mild solution to the Cauchy problem \eqref{n-sm-pb}. The function $u_f$ satisfies estimates \eqref{dip-p-weak-0} and \eqref{dip-p-weak} with the supremum being replaced by the essential supremum and, as a byproduct, $u_f\in W^{0,1}_p((s,T)\times\mathbb R^d,\mu)$, if $p<2$, and $u_f\in W^{0,1}_q((s,T)\times\mathbb R^d,\mu)$ for any $q<2$, otherwise. Finally, if there exists $\gamma\in(0,1)$ such that \begin{align} \label{tazza}
|\psi(t,x,\xi,\eta)-\psi(t,y,\xi,\eta)|\leq C_{J,R}(1+|\xi|+|\eta|)|x-y|^\gamma, \end{align} for any $t\in J$, $x,y\in B_R$, $\eta\in \mathbb R^d$, $\xi\in{\mathbb R}$, $J\Subset (s,T]$, $R>0$ and a constant $C_{J,R}>0$, then, for any $f\in L^p(\mathbb R^d,\mu_s)$ and almost every $t\in (s,T)$, $u_f(t,\cdot)$ belongs to $W^{2,p}_{\rm loc}(\mathbb R^d)$. Moreover, $u_f\in W^{1,2}_{p,\rm loc}((s,T)\times\mathbb R^d)$ and satisfies the equation $D_tu_f=\mathcal{A} u_f+\psi_{u_f}$. \end{coro}
\begin{proof} Fix $f\in L^p(\mathbb R^d,\mu_s)$ and let $(f_n)\subset C_b(\mathbb R^d)$ be a sequence converging to $f$ in $L^p(\mathbb R^d,\mu_s)$. By \eqref{dip-p-weak}, $(u_{f_n}(t,\cdot))$ is a Cauchy sequence in $W^{1,p}(\mathbb R^d,\mu_t)$ for any $t\in (s,T]$. Hence, there exists a function $v$ such that $u_{f_n}(t,\cdot)$ converges to $v(t,\cdot)$ in $W^{1,p}(\mathbb R^d,\mu_t)$ for any $t\in (s,T]$. Moreover, writing \eqref{dip-p-weak-0}, with $f$ being replaced by $f_n$, and letting $n$ tend to $+\infty$ we deduce that $v$ satisfies \eqref{dip-p-weak-0} as well.
Next, using \eqref{dip-p-weak} we can estimate \begin{align*}
\|u_{f_n}-u_{f_m}\|_{L^p((s,T)\times\mathbb R^d,\mu)}^p=&\int_s^T\|u_{f_n}(t,\cdot)-u_{f_m}(t,\cdot)\|_{L^p(\mathbb R^d,\mu_t)}^pdt\\
\le &C_{T-s}^p(T-s)\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)}^p \end{align*} and \begin{align*}
\|\nabla_xu_{f_n}-\nabla_xu_{f_m}\|_{L^q((s,T)\times\mathbb R^d,\mu)}^q=&\int_s^T\|\nabla_xu_{f_n}(t,\cdot)-\nabla_xu_{f_m}(t,\cdot)\|_{L^q(\mathbb R^d,\mu_t)}^qdt\\
\le &\frac{2C_{T-s}^q}{2-q}(T-s)^{1-\frac{q}{2}}\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)}^q \end{align*} for any $q\in [1,2)$, if $p\ge 2$, and for $q=p$ otherwise. Hence, recalling that $L^{p}(\mathbb R^d,\mu_t)\hookrightarrow L^q(\mathbb R^d,\mu_t)$ for any $t\in I$ (the measures $\mu_t$ being probability measures), we conclude that the sequence $(u_{f_n})$ converges in $L^p((s,T)\times\mathbb R^d,\mu)\cap W^{0,1}_q((s,T)\times\mathbb R^d,\mu)$ to a function, which we denote by $u_f$. Clearly, $v(t,\cdot)=u_f(t,\cdot)$ almost everywhere in $\mathbb R^d$ for almost every $t\in (s,T)$. Letting $n$ tend to $+\infty$ in the formula \eqref{defi_mild}, with $f_n$ replacing $f$, we deduce that $u_f$ is a mild solution to problem \eqref{n-sm-pb}. The uniqueness follows arguing as in the proof of Theorem \ref{exi_cb}, with the obvious changes.
Let us now prove the last part of the statement. We again use an approximation argument. Fix $t>s\in I$ and $R>0$. As a first step, we estimate the norm of the operator $G(t,r)$ in $\mathcal L(L^p(\mathbb R^d,\mu_r),L^p(B_{R+1}))$ and in $\mathcal L(L^p(\mathbb R^d,\mu_r),W^{2,p}(B_{R+1}))$, for any $r\in[s,t)$. In the rest of the proof, we denote by $c$ a positive constant, possibly depending on $R$ but independent of $t$, $r$ and $f\in L^p(\mathbb R^d,\mu_r)$, which may vary from line to line. Since there exists a positive and continuous function $\rho:I\times\mathbb R^d\to{\mathbb R}$ such that $\mu_r=\rho(r,\cdot)dx$, the spaces $L^p(B_M)$ and $L^p(B_M,\mu_r)$ coincide and their norms are equivalent for any $M>0$. From this remark, the interior $L^p$-estimates in Theorem \ref{prop-A3}, with $u=G(\cdot,s)f$, and the contractiveness of $G(t,r)$ from $L^p(\mathbb R^d,\mu_r)$ to $L^p(\mathbb R^d,\mu_t)$ imply that \begin{align}
\|G(t,r)f\|_{W^{2,p}(B_{R+1})}
& \leq c(t-r)^{-1}\|f\|_{L^p(\mathbb R^d,\mu_r)},\qquad\;\, s<r<t<T, \label{volante} \end{align}
first for any $f\in C_b(\mathbb R^d)$, and then, by density, for any $f\in L^p(\mathbb R^d,\mu_r)$. Since, for $\theta\in (0,1)$, $(L^p(\mathbb R^d,\mu_r), L^p(\mathbb R^d,\mu_r))_{\theta,p}=L^p(\mathbb R^d,\mu_r)$ and $(W^{1,p}(B_{R+1}), W^{2,p}(B_{R+1}))_{\theta,p}$$=W^{1+\theta,p}(B_{R+1})$, with equivalence of the corresponding norms, by an interpolation argument and \eqref{volante} we deduce that $\|G(t,r)\|_{\mathcal L(L^p(\mathbb R^d,\mu_r),W^{1+\theta,p}(B_{R+1}))} \leq c(t-r)^{-\frac{1+\theta}{2}}$ for any $s<r<t<T$. Hence, if for any $n\in{\mathbb N}$ we consider the function $z_n$, which is the integral term in \eqref{defi_mild}, with $u$ being replaced by $u_{f_n}$,
and use \eqref{dip-p-weak} and the fact that $\psi\in B([s,T]\times\mathbb R^d;{\rm Lip}({\mathbb R}^{d+1}))$, then we get \begin{align*}
&\|\nabla_x z_n(t,\cdot)-\nabla_x z_m(t,\cdot)\|_{W^{\theta,p}(B_{R+1})}\\ \leq & c\int_s^t(t-r)^{-\frac{1+\theta}{2}}
\|u_{f_n}(r,\cdot)-u_{f_m}(r,\cdot)\|_{L^p(\mathbb R^d,\mu_r)}dr\\
&+c\int_s^t(t-r)^{-\frac{1+\theta}{2}}\|\nabla_xu_{f_n}(r,\cdot)-\nabla_xu_{f_m}(r,\cdot)\|_{L^p(\mathbb R^d,\mu_r)}dr \\
\leq & c(t-s)^{-\frac{\theta}{2}}\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)} \end{align*} for any $m,n\in{\mathbb N}$. We have thus proved that, for any $\theta\in (0,1)$ and almost every $t\in(s,T]$, the function $u_f(t,\cdot)$ belongs to $W^{1+\theta,p}(B_{R+1})$ and \begin{align*}
\|u_{f_n}(t,\cdot)-u_{f_m}(t,\cdot)\|_{W^{1+\theta,p}(B_{R+1})}\leq c(t-s)^{-\frac{1+\theta}{2}}\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)},\qquad m,n\in{\mathbb N}. \end{align*}
Similarly, \begin{align*}
\|u_{f_n}(t,\cdot)\|_{W^{1+\theta,p}(B_{R+1})}\leq c(t-s)^{-\frac{1+\theta}{2}}(\|f\|_{L^p(\mathbb R^d,\mu_s)}+\|\psi(\cdot,\cdot,0,0)\|_{\infty}),\qquad\;\,n\in{\mathbb N}. \end{align*}
Using these estimates, we can now show that $\psi_{u_f}(r,\cdot)\in W^{\theta,p}(B_{R+1})$, for any $\theta<\gamma$. For this purpose, we add and subtract $\psi(t,y,u_{f_n}(t,x),\nabla_xu_{f_n}(t,x))$, use condition \eqref{tazza} and the Lipschitz continuity of $\psi$ with respect to the last two variables to infer that \begin{align}
|\psi_{u_{f_n}}(t,x)-\psi_{u_{f_n}}(t,y)|
\le &c|u_{f_n}(t,x)-u_{f_n}(t,y)|+c|\nabla_xu_{f_n}(t,x)-\nabla_xu_{f_n}(t,y)|\notag\\
&+c|x-y|^{\gamma}(1+|u_{f_n}(t,x)|+|\nabla_xu_{f_n}(t,x)|) \label{label-A} \end{align} and \begin{align}
|\psi_{u_{f_n}}(t,x)-\psi_{u_{f_m}}(t,x)|\le &c(|u_{f_n}(t,x)-u_{f_m}(t,x)|+|\nabla_xu_{f_n}(t,x)-\nabla_xu_{f_m}(t,x)|) \label{label-B} \end{align} for any $t\in (s,T)$, $x,y\in\mathbb R^d$ and $m,n\in{\mathbb N}$. Hence, using \eqref{label-A} we obtain \begin{align*}
&|\psi_{u_{f_n}}(t,x)-\psi_{u_{f_n}}(t,y)-\psi_{u_{f_m}}(t,x)+\psi_{u_{f_m}}(t,y)|\notag\\
\le & c\big [|u_{f_n}(t,x)-u_{f_n}(t,y)|+|\nabla_xu_{f_n}(t,x)-\nabla_xu_{f_n}(t,y)|\notag\\
&\quad\! +|u_{f_m}(t,x)-u_{f_m}(t,y)|+|\nabla_xu_{f_m}(t,x)-\nabla_xu_{f_m}(t,y)|\notag\\
&\quad\! +|x\!-\!y|^{\gamma}(1\!+\!|u_{f_n}(t,x)|\!+\!|u_{f_m}(t,x)|\!+\!|\nabla_xu_{f_n}(t,x)|\!+\!|\nabla_xu_{f_m}(t,x)|)\big ]\!=:\!{\mathcal I}(t,x,y) \end{align*} and, using \eqref{label-B}, \begin{align*}
&|\psi_{u_{f_n}}(t,x)-\psi_{u_{f_n}}(t,y)-\psi_{u_{f_m}}(t,x)+\psi_{u_{f_m}}(t,y)|\\
\le &c\big [|u_{f_n}(t,x)-u_{f_m}(t,x)|+|\nabla_xu_{f_n}(t,x)-\nabla_xu_{f_m}(t,x)|\\
&\quad +|u_{f_n}(t,y)-u_{f_m}(t,y)|+|\nabla_xu_{f_n}(t,y)-\nabla_xu_{f_m}(t,y)|\big ]=:{\mathcal J}(t,x,y). \end{align*} From these two estimates we conclude that \begin{align*}
|\psi_{u_{f_n}}(t,x)-\psi_{u_{f_n}}(t,y)-\psi_{u_{f_m}}(t,x)+\psi_{u_{f_m}}(t,y)|^p\le ({\mathcal I}(t,x,y))^{\beta p}({\mathcal J}(t,x,y))^{(1-\beta)p} \end{align*} for any $(t,x)\in (s,T)\times\mathbb R^d$, any $\beta\in (0,1)$ and any $m,n\in{\mathbb N}$. Hence, for any $\theta<\gamma$ and $\beta$, such that $(0,1)\ni\theta'=\theta/\beta+d(1-\beta)/(p\beta)$, a long but straightforward computation reveals that \begin{align*} &[\psi_{u_{f_n}}(t,\cdot)-\psi_{u_{f_m}}(t,\cdot)]_{W^{\theta,p}(B_{R+1})}\\
\le &c\|u_{f_n}(t,\cdot)-u_{f_m}(t,\cdot)\|_{W^{1,p}(\mathbb R^d,\mu_t)}^{(1-\beta)}\notag\\ &\qquad\times\big (
\|u_{f_n}(t,\cdot)\|_{W^{1+\theta',p}(B_{R+1})}^{\beta}+
\|u_{f_m}(t,\cdot)\|_{W^{1+\theta',p}(B_{R+1})}^{\beta} +1\big ) \end{align*} and, consequently, \begin{align*}
\|\psi_{u_{f_n}}(t,\cdot)-\psi_{u_{f_m}}(t,\cdot)\|_{W^{\theta,p}(B_{R+1})}\le c(t-s)^{\frac{\beta-\theta'}{2}-1}\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)}^{1-\beta} \end{align*} for any $t\in (s,T)$. We are almost done. Indeed, by interpolation from Proposition \ref{prop-A3} we deduce that
$\|G(t,r)\|_{\mathcal L(W^{\theta,p}(B_{R+1}), W^{2,p}(B_R))}\le c(t-r)^{-1+\theta/2}$. From this and the previous estimate we conclude that \begin{eqnarray*}
\|z_n(t,\cdot)-z_m(t,\cdot)\|_{W^{2,p}(B_R)}\le c(t-s)^{\frac{\beta+\theta-\theta'}{2}-1}\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)}^{1-\beta},\qquad\;\,m,n\in{\mathbb N}, \end{eqnarray*}
for any $t\in (s,T]$ and $\beta>\theta'$, so that $\|u_{f_n}(t,\cdot)-u_{f_m}(t,\cdot)\|_{W^{2,p}(B_R)}\le c(t-s)^{-1}\|f_n-f_m\|_{L^p(\mathbb R^d,\mu_s)}^{1-\beta}$ for any $m,n\in{\mathbb N}$, thanks to \eqref{volante}. From this estimate it is easy to conclude that $(u_{f_n})$ is a Cauchy sequence in $W^{0,2}_{p,\rm loc}((s,T)\times\mathbb R^d)$. Since $u_{f_n}$ is a classical solution to problem \eqref{n-sm-pb}, $(D_tu_{f_n})$ is a Cauchy sequence in $L^p_{\rm loc}((s,T)\times\mathbb R^d)$. It thus follows that $u_f\in W^{1,2}_{p,\rm loc}((s,T)\times\mathbb R^d)$ and it solves the equation $D_tu_f=\mathcal{A} u_{f}+\psi_{u_f}$ in $(s,T)\times\mathbb R^d$. \end{proof}
The arguments in the proof of Theorem \ref{prop-3.8} and Corollary \ref{coro-Lp} allow us to prove the following result.
\begin{prop} Under Hypotheses $\ref{base}$, the following properties are satisfied. \begin{enumerate}[\rm (i)] \item Let $\psi\!\in\! C((s,T]\times\mathbb R^d\times{\mathbb R}\times\mathbb R^d)$ with $[\psi]_{\infty,T}+\sup_{(t,x)\in (s,T]\times\mathbb R^d}[\psi(t,x,\cdot,\cdot)]_{{\rm Lip}({\mathbb R}^{d+1})}$ $<+\infty$. Then, for any $f\in C_b(\mathbb R^d)$, the Cauchy problem \eqref{n-sm-pb} admits a unique mild solution $u_f\in C([s,T]\times\mathbb R^d)\cap C^{0,1}((s,T]\times\mathbb R^d)$ which satisfies \eqref{dip-p-weak} and \eqref{dip-p-weak-0} for any $p\in [p_0,+\infty]$. \item Let $\psi\in C((s,T]\times\mathbb R^d\times{\mathbb R}\times\mathbb R^d)$ and $[\psi]_{p,T}+\sup_{(t,x)\in (s,T]\times\mathbb R^d}[\psi(t,x,\cdot,\cdot)]_{{\rm Lip}({\mathbb R}^{d+1})}<+\infty$ for some $p\ge p_0$. Then, for any $f\in L^p(\mathbb R^d,\mu_s)$, the Cauchy problem \eqref{n-sm-pb} admits a unique mild solution $u_f$ which belongs to $W^{0,1}_p((s,T)\times\mathbb R^d,\mu)$, if $p_0\le p<2$, and to $W^{0,1}_q((s,T)\times\mathbb R^d,\mu)$ for any $q<2$, if $p\ge 2$. Further, $u_f$ satisfies \eqref{dip-p-weak-0} and \eqref{dip-p-weak}, with the supremum being replaced by the essential supremum. \end{enumerate} \end{prop}
\begin{proof} To prove property (i) it suffices to apply the Banach fixed point theorem in the space of all the functions $v\in C_b([s,T]\times\mathbb R^d)\cap C^{0,1}((s,T]\times\mathbb R^d)$ such that $\opnorm{v}_{\infty}<+\infty$, where $\opnorm{\cdot}_{\infty}$ is defined in Step 2 of the proof of Theorem \ref{prop-3.8}, with $p=+\infty$. The uniqueness of the so obtained solution follows from the condition $\sup_{(t,x)\in (s,T]\times\mathbb R^d}[\psi(t,x,\cdot,\cdot)]_{{\rm Lip}({\mathbb R}^{d+1})}<+\infty$, in a standard way.
To prove property (ii), one can argue by approximation. We fix $f\in L^p(\mathbb R^d,\mu_s)$, approximate it by a sequence $(f_n)\subset C_b(\mathbb R^d)$, converging to $f$ in
$L^p(\mathbb R^d,\mu_s)$, and introduce a standard sequence $(\vartheta_n)$ of cut-off functions. If we set $\psi_n=\vartheta_n\psi$ for any $n\in{\mathbb N}$, then each function $\psi_n$ satisfies the assumptions in property (i) and $[\psi_n]_{p,T}\le [\psi]_{p,T}$. Therefore, the Cauchy problem \eqref{n-sm-pb}, with $f_n$ and $\psi_n$ replacing $f$ and $\psi$, admits a unique mild solution $u_n\in C_b([s,T]\times\mathbb R^d)\cap C^{0,1}((s,T]\times\mathbb R^d)$, which satisfies \eqref{dip-p-weak-0} and \eqref{dip-p-weak} with $f_n$ replacing $f$. The arguments in the first part of the proof of Corollary \ref{coro-Lp} allow us to prove the existence of a mild solution $u_f$ to the Cauchy problem \eqref{n-sm-pb} with the properties in the statement of the proposition. The uniqueness of the solution follows also in this case from the condition $\sup_{(t,x)\in (s,T]\times\mathbb R^d}[\psi(t,x,\cdot,\cdot)]_{{\rm Lip}({\mathbb R}^{d+1})}<+\infty$.
\section{The evolution operator and its summability improving properties} \label{4}
If, besides Hypotheses \ref{base}, the assumptions on $\psi$ in Theorem \ref{prop-3.8} hold true for any $I\ni s<T$ or if $\psi\in C(I\times\mathbb R^d\times{\mathbb R}\times\mathbb R^d)\cap B_{\rm loc}(I\times\mathbb R^d;{\rm Lip}({\mathbb R}^{d+1}))$ and $\psi(\cdot,\cdot,0,0)\in C_b(I\times\mathbb R^d)$, then for any $f\in C_b(\mathbb R^d)$ and $s\in I$ the mild solution to problem \eqref{n-sm-pb} exists in the whole of $[s,+\infty)$. Hence, we can set ${\mathcal N}(t,s)f=u_f(t,\cdot)$ for any $t>s$. Each operator ${\mathcal N}(t,s)$ maps $C_b(\mathbb R^d)$ into $C^1_b(\mathbb R^d)$. Moreover, the uniqueness of the solution to problem \eqref{n-sm-pb} yields the {\it evolution law} ${\mathcal N}(t,s)f={\mathcal N}(t,r){\mathcal N}(r,s)f$ for any $r\in (s,t)$ and $f\in C_b(\mathbb R^d)$. Hence $\{{\mathcal N}(t,s): I\ni s<t\}$ is a nonlinear evolution operator in $C_b(\mathbb R^d)$. It can be extended to the $L^p$-setting, for any $p\ge p_0$, using the same arguments as in the first part of the proof of Corollary \ref{coro-Lp}. Clearly, if $\psi(t,x,\cdot,\cdot)$ is Lipschitz continuous in ${\mathbb R}^{d+1}$, uniformly with respect to $(t,x)\in J\times\mathbb R^d$, for any $J\Subset I$, then, by density, we still deduce that ${\mathcal N}(t,s)$ satisfies the evolution law and, moreover, each operator ${\mathcal N}(t,s)$ is bounded from $L^p(\mathbb R^d,\mu_s)$ to $W^{1,p}(\mathbb R^d,\mu_t)$ and \begin{align}
&\|{\mathcal N}(t,s)f\|_{L^p(\mathbb R^d,\mu_t)}
+\sqrt{t-s}\,\|\nabla_x {\mathcal N}(t,s)f\|_{L^p(\mathbb R^d, \mu_t)}\notag\\
\le &C_{T-s}(\|f\|_{L^p(\mathbb R^d, \mu_s)}+(T-s+1)\|\psi(\cdot,\cdot,0,0)\|_{\infty}). \label{ciclismo} \end{align}
\subsection{Continuity properties of the nonlinear evolution operator}
In the following theorem, assuming the above conditions on $\psi$, we prove an interesting continuity property of the operator ${\mathcal N}(t,s)$.
\begin{thm} \label{compleanno} Let $(f_n)\subset C_b(\mathbb R^d)$ be a bounded sequence converging to some function $f\in C_b(\mathbb R^d)$ pointwise in $\mathbb R^d$. Then, for any $s\in I$, ${\mathcal N}(\cdot,s)f_n$ and $\nabla_x{\mathcal N}(\cdot,s)f_n$ converge to ${\mathcal N}(\cdot,s)f$ and $\nabla_x{\mathcal N}(\cdot,s)f$, respectively, locally uniformly in $(s,+\infty)\times\mathbb R^d$. \end{thm}
\begin{proof} Let $(f_n)$ and $f$ be as in the statement. To ease the notation, we write $u_{f_n}$ and $u_f$ for ${\mathcal N}(\cdot,s)f_n$ and ${\mathcal N}(\cdot,s)f$, respectively. Moreover, for any $n\in{\mathbb N}$, $t>s$ and $r\in (s,t]$ we set
$h_n(r,\cdot)=G(t,r)(|u_{f_n}(r,\cdot)-u_f(r,\cdot)|^p
+|\nabla_x(u_{f_n}(r,\cdot)-u_f(r,\cdot))|^p)$, and we denote by $L_{R,T}$ any constant such that \begin{equation}
|\psi(t,x,u_2,v_2)-\psi(t,x,u_1,v_1)|\le L_{R,T}(|u_2-u_1|+|v_2-v_1|) \label{mismo} \end{equation}
for any $t\in [s,s+T]$, $x,v_1,v_2\in\mathbb R^d$, $u_1,u_2\in [-R,R]$ and $T>0$. As a first step, formula \eqref{st-gr} shows that, for any $T>0$, there exists a positive constant $M_T$ such that $\|u_f\|_{\infty}+\|u_{f_n}\|_{\infty}\le M_T$ for any $n\in{\mathbb N}$. Fix $p\in (1,2)$. Using formula \eqref{defi_mild}, we can estimate \begin{align*}
|D_x^ju_{f_n}(t,x)-D_x^ju_f(t,x)|^p\le &2^{p-1}|(D_x^jG(t,s)(f_n-f))(x)|^p\\
&+2^{p-1}\bigg |\int_s^t(D_x^jG(t,r)(\psi_{u_{f_n}}(r,\cdot)
-\psi_{u_f}(r,\cdot))(x)dr\bigg |^p \end{align*} for any $(t,x)\in (s,+\infty)\times\mathbb R^d$ and $j=0,1$. By the representation formula \eqref{repres-formula}, H\"older inequality, estimates \eqref{point_p_D} and \eqref{mismo}, we deduce that \begin{align*}
|u_{f_n}(t,\cdot)-u_f(t,\cdot)|^p\le 2^{p-1}G(t,s)|f_n-f|^p+(4T)^{p-1}L_{M_T,T}^p\int_s^th_n(r,\cdot)dr \end{align*} and \begin{align*}
|\nabla_xu_{f_n}(t,\cdot)-\nabla_xu_f(t,\cdot)|^p
\le &2^{p-1}(t-s)^{-\frac{p}{2}}c_TG(t,s)|f_n-f|^p\notag\\ &+(4T)^{p-1}c_TL_{M_T,T}^p\int_s^t(t-r)^{-\frac{p}{2}}h_n(r,\cdot)dr \end{align*} in $\mathbb R^d$, for any $t\in (s,s+T)$ and some positive constant $c_T$. Hence, the function $h_n(\cdot,x)$ satisfies the integral inequality \begin{eqnarray*}
h_n(t,x)\le C_{p,T}(t-s)^{-\frac{p}{2}}(G(t,s)|f_n-f|^p)(x)+C_{p,T}\int_s^t(t-r)^{-\frac{p}{2}}h_n(r,x)dr \end{eqnarray*} for any $t\in (s,s+T)$ and $x\in\mathbb R^d$. Since $h_n(\cdot,x)$ is continuous in $(s,t]$ and $h_n(r,x)\le \widetilde C_T(r-s)^{-p/2}$ for some positive constant $\widetilde C_T$, independent of $n$, and any $r\in (s,t)$, we can apply \cite[Lemma 7.1]{henry} and conclude that \begin{align*}
h_n(t,x)\le &C_{p,T}(t-s)^{-\frac{p}{2}}(G(t,s)|f_n-f|^p)(x)\\
&+C_{p,T}\int_s^t(t-r)^{-\frac{p}{2}}(r-s)^{-\frac{p}{2}}(G(r,s)|f_n-f|^p)(x)dr \end{align*} for any $t\in (s,s+T)$. Hence, \begin{align*}
\|h_n(t,\cdot)\|_{C_b(B_R)}\le &C_{p,T}(t-s)^{-\frac{p}{2}}\|G(t,s)|f_n-f|^p\|_{C_b(B_R)}\\
&+C_{p,T}\int_s^t(t-r)^{-\frac{p}{2}}(r-s)^{-p/2}\|G(r,s)|f_n-f|^p\|_{C_b(B_R)}dr \end{align*}
for any $R>0$. By \cite[Proposition 3.1(i)]{KunLorLun09Non}, $\|G(r,s)|f_n-f|^p\|_{C_b(B_R)}$ vanishes as $n\to +\infty$ for any $r>s$. Hence, by dominated convergence,
$\|h_n(t,\cdot)\|_{C_b(B_R)}$ vanishes as $n\to +\infty$ for any $t\in (s,s+T)$, which means that, for any $t\in (s,s+T)$, $u_{f_n}(t,\cdot)$ and $\nabla_xu_{f_n}(t,\cdot)$ converge uniformly in $B_R$ to $u_f(t,\cdot)$ and $\nabla_xu_f(t,\cdot)$, respectively. The arbitrariness of $R$ and $T$ yields the assertion.
\subsection{Hypercontractivity} Throughout this and the forthcoming subsections we set \begin{equation*}
{\mathcal F}(\zeta)=|\sqrt{Q}\nabla_x\zeta|^2,\qquad\;\,{\mathcal G}(\zeta)=\sum_{i=1}^d|\sqrt{Q}\nabla_xD_i\zeta|^2 \end{equation*} for any smooth enough function $\zeta$. To begin with, we recall the following crucial result.
\begin{lemm}[Lemma 3.1 of \cite{AngLorLun}] \label{derivative} Assume that Hypotheses $\ref{base}$ hold true and fix $[a,b]\subset I$. If $f\in C^{1,2}_b([a,b]\times\mathbb R^d)$ and $f(r,\cdot)$ is constant outside a compact set $K$ for every $r\in [a,b]$, then the function $r \mapsto \int_{\mathbb R^d}f(r,\cdot)d\mu_r$ is continuously differentiable in $[a,b]$ and \begin{eqnarray*} D_r\int_{\mathbb R^d}f(r,\cdot)d\mu_r=\int_{\mathbb R^d}D_rf(r,\cdot)d\mu_r -\int_{\mathbb R^d}\mathcal{A}(r)f(r, \cdot)d\mu_r,\qquad\;\,r\in [a,b]. \end{eqnarray*} \end{lemm}
Further, we introduce the following additional assumptions. \begin{hyp} \label{cond-iper} \begin{enumerate}[\rm(i)] \item $\psi\in B(I\times\mathbb R^d;{\rm Lip}({\mathbb R}^{d+1}))\cap C(I\times\mathbb R^d\times{\mathbb R}\times\mathbb R^d)$, condition \eqref{hyp-holder} is satisfied in $[s,T]$, for any $T>s\in I$ and some constant which may depend also on $s$ and $T$, and there exist constants $\xi_0\ge 0$ and $\xi_1,\xi_2\in{\mathbb R}$ such that
$u\psi(t,x,u,v)\le \xi_0|u|+\xi_1u^2+\xi_2|u||v|$ for any $t\ge s$, $x,v\in\mathbb R^d$ and $u\in{\mathbb R}$; \item there exists a non-negative function $\widetilde\varphi:\mathbb R^d\to{\mathbb R}$, blowing up at infinity such that
$\mathcal A\widetilde\varphi+k_1|\nabla\widetilde\varphi|\le a\widetilde\varphi$ in $\mathbb R^d$ for some locally bounded functions $a,k_1$; \item there exist locally bounded functions $C_0, C_1, C_2:I\to{\mathbb R}^+$ such that \begin{align*}
&|Q(t,x)x|\leq C_0(t)|x|^3\varphi(x),\qquad\;\,{\rm Tr}(Q(t,x))\leq C_1(t)(1+|x|^2)\varphi(x),\notag\\
&\langle b(t,x),x\rangle\leq C_2(t)|x|^2\varphi(x) \end{align*} for any $t\in I$ and any $x\in\mathbb R^d$, where $\varphi$ is the Lyapunov function introduced in Hypothesis $\ref{base}(iii)$; \item there exists a positive constant $K$ such that \begin{align}
\int_{\mathbb R^d}|f|^q\log(|f|)d\mu_t\leq &\|f\|_{L^q(\mathbb R^d,\mu_t)}^q\log(\|f\|_{L^q(\mathbb R^d,\mu_t)})\notag\\
&+Kq\int_{\mathbb R^d}|f|^{q-2}|\nabla f|^2\mbox{$1\!\!\!\;\mathrm{l}$}_{\{f\neq0\}}d\mu_t, \label{LSI} \end{align} for any $t>s$, $f\in C^1_b(\mathbb R^d)$ and $q\in(1,+\infty)$. \end{enumerate} \end{hyp}
\begin{rmk} \label{marsella} {\rm \begin{enumerate}[\rm (i)]
\item Hypothesis \ref{cond-iper}(i) implies that $\psi(\cdot,\cdot,0,0)$ is bounded in $[s,+\infty)\times\mathbb R^d$ and $\|\psi(\cdot,\cdot,0,0)\|_{C_b([s,+\infty)\times\mathbb R^d)}\le \xi_0$. Indeed, taking $v=0$, dividing the inequality in Hypothesis \ref{cond-iper}(i) by $u$ and letting $u\to 0^{\pm}$, the continuity of $\psi$ yields $|\psi(t,x,0,0)|\le\xi_0$ for any $(t,x)\in [s,+\infty)\times\mathbb R^d$. \item Sufficient conditions for \eqref{LSI} to hold are given in \cite{AngLorLun}. In particular, \eqref{LSI} holds true when \eqref{point_p_D11} is satisfied with $p=1$ (see Remark \ref{rmk-2.4}). \end{enumerate} } \end{rmk}
We can now prove the main result of this subsection.
\begin{thm} \label{vacanza} Let Hypotheses $\ref{base}$ and $\ref{cond-iper}$ be satisfied. Then, for any $f\in L^p(\mathbb R^d,\mu_s)$ $(p\ge p_0)$ and $t>s$ the function ${\mathcal N}(t,s)f$ belongs to $W^{1,p_{\gamma}(t)}(\mathbb R^d,\mu_t)$ and satisfies the estimates \begin{align}
&\|{\mathcal N}(t,s)f\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_t)}\leq e^{\omega_{p,\gamma}(t-s)}
[\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)], \label{botti} \\
&\|\nabla_x{\mathcal N}(t,s)f\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_t)}\leq c_0(t\!-\!s)
e^{\omega_{p,\sqrt{\gamma}}(t\!-\!s)}[\|f\|_{L^p(\mathbb R^d,\mu_s)}\!+\!\xi_0(t-s)]\!+\! c_1(t-s)\xi_0, \label{botti-1} \end{align} where $p_{\gamma}(t):=\gamma^{-1}(p-1)(e^{\kappa_0 K^{-1}(t-s)}-1)+p$ for any $\gamma>1$, $\kappa_0$ being the ellipticity constant in Hypothesis $\ref{base}(ii)$ and $K$ being the constant in \eqref{LSI}, $\omega_{p,\sigma}=\xi_1+(\xi_2^+)^2\sigma[(\sigma-1)(p-1)\kappa_0]^{-1}$, $\gamma'$ is given by \eqref{edoardo} and the functions $c_0, c_1:(0,+\infty)\to{\mathbb R}^+$ are continuous and blow up at zero. \end{thm}
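Before entering into the proof, we note, as an immediate consequence of the definition of $p_{\gamma}$ (no additional assumption is needed here), that
\begin{align*}
p_{\gamma}(s)=p,\qquad\;\, p_{\gamma}'(t)=\frac{(p-1)\kappa_0}{\gamma K}\,e^{\kappa_0 K^{-1}(t-s)}>0,\qquad\;\,t>s,
\end{align*}
so that $p_{\gamma}(t)>p$ for every $t>s$: estimates \eqref{botti} and \eqref{botti-1} thus express an instantaneous gain of summability of ${\mathcal N}(t,s)f$ and of its spatial gradient.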
\begin{proof} To begin with, we observe that it suffices to prove \eqref{botti} and \eqref{botti-1} for functions $f\in C_b^1(\mathbb R^d)$. Indeed, in the general case, the assertion follows by approximating $f$ with a sequence $(f_n)\subset C^1_b(\mathbb R^d)$ which converges to $f$ in $L^p(\mathbb R^d,\mu_s)$. By \eqref{dip-p-weak}, ${\mathcal N}(t,s)f_n$ converges to ${\mathcal N}(t,s)f$ in $W^{1,p}(\mathbb R^d,\mu_t)$ for almost every $t>s$. Hence, writing \eqref{botti} and \eqref{botti-1} with $f$ being replaced by $f_n$ and letting $n$ tend to $+\infty$, the assertion follows at once by applying Fatou's lemma.
We split the rest of the proof into two steps. In the first one we prove \eqref{botti} and in the latter one \eqref{botti-1}.
{\em Step 1.} Fix $f\in C^1_b(\mathbb R^d)$, $n\in{\mathbb N}$, $\varepsilon>0$ and set
$\beta_{n,\varepsilon}(t):=\|v_{n,\varepsilon}(t,\cdot)\|_{p_{\gamma}(t)}$ for any $t>s$, where $v_{n, \varepsilon}= (\vartheta^2_n ({\mathcal N}(\cdot,s)f)^2+\varepsilon)^{1/2}$ and
$\vartheta_n(x)= \zeta(n^{-1}|x|)$ for any $x \in \mathbb R^d$ and $n \in {\mathbb N}$. Here, $\zeta$ is a smooth function such that $\mbox{$1\!\!\!\;\mathrm{l}$}_{B_1}\le\zeta \le\mbox{$1\!\!\!\;\mathrm{l}$}_{B_2}$. Moreover, we set $\varphi_{1,n}=|\zeta'(|\cdot|/n)|\varphi$, $\varphi_{2,n}=|\zeta''(|\cdot|/n)|\varphi$ for any $n\in{\mathbb N}$. We recall that in \cite[Theorem 5.4]{KunLorLun09Non} it has been proved that $\sup_{t\in I}\|\varphi\|_{L^1(\mathbb R^d,\mu_t)}<+\infty$. Hence, the functions $t\mapsto \|\varphi_{j,n}\|_{L^p(\mathbb R^d,\mu_t)}$ $(j=1,2)$ are bounded in $I$ and pointwise converge to zero as $n\to +\infty$.
By definition, the function $u=\mathcal N(\cdot,s)f$ belongs to $C^{0,1}_b([s,\tau]\times\mathbb R^d)$ for any $\tau>s$ and is a classical solution to problem \eqref{n-sm-pb}. Moreover Lemma \ref{derivative} shows that $\beta_{n,\varepsilon}$ is differentiable in $(s,+\infty)$ and a straightforward computation reveals that \begin{align*} \beta_{n,\varepsilon}'(t)=&-\frac{p_{\gamma}'(t)}{p_{\gamma}(t)}\beta_{n,\varepsilon}(t) \log(\beta_{n,\varepsilon}(t))\\ &+\frac{1}{p_{\gamma}(t)}(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}\{D_t[(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)}] -\mathcal{A}(t)[(v_{n,\varepsilon}(t,\cdot)^{p_{\gamma}(t)}]\}d\mu_t. \end{align*} Taking into account that \begin{align*} &D_t[(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)}]-\mathcal{A}[(v_{n,\varepsilon}(t,\cdot)^{p_{\gamma}(t)}]\\ =&p_{\gamma}'(t)(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)}\log(v_{n,\varepsilon}(t,\cdot)) \!-\!p_{\gamma}(t)(p_{\gamma}(t)\!-\!1)(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}({\mathcal F}(v_{n,\varepsilon}))(t,\cdot)\\ &+p_{\gamma}(t)(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-1}(D_tv_{n,\varepsilon}(t,\cdot) -\mathcal{A}(t) v_{n,\varepsilon}(t,\cdot)) \end{align*} and \begin{align*} D_tv_{n,\varepsilon} -\mathcal{A} v_{n,\varepsilon}=&\vartheta_n^2v_{n,\varepsilon}^{-1} u\psi_u-\varepsilon\vartheta_n^2{\mathcal F}(u) v_{n,\varepsilon}^{-3}-{\rm Tr}(QD^2\vartheta_n)v_{n,\varepsilon}^{-1} \vartheta_nu^2-\varepsilon u^2v_{n,\varepsilon}^{-3}{\mathcal F}(\vartheta_n)\notag\\ &-\langle b,\nabla\vartheta_n\rangle v_{n,\varepsilon}^{-1}\vartheta_n u^2 -2(2\varepsilon u+\vartheta_n^2u^3)\langle Q\nabla\vartheta_n,\nabla_xu\rangle\vartheta_nv_{n,\varepsilon}^{-3}\\ =:& \vartheta_n^2v_{n,\varepsilon}^{-1} u\psi_u-\varepsilon\vartheta_n^2{\mathcal F}(u) v_{n,\varepsilon}^{-3}+g_{n,\varepsilon}(t,\cdot), \end{align*} we deduce \begin{align*} \beta_{n,\varepsilon}'(t)=& (\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-1}g_{n,\varepsilon}(t,\cdot)d\mu_t -\frac{p_{\gamma}'(t)}{p_{\gamma}(t)}\beta_{n,\varepsilon}(t) \log(\beta_{n,\varepsilon}(t))\notag\\ &-(p_{\gamma}(t)-1)(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}\int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2} ({\mathcal F}(v_{n,\varepsilon}))(t,\cdot)d\mu_t\notag\\ &+\frac{p_{\gamma}'(t)}{p_{\gamma}(t)}(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)}\log(v_{n,\varepsilon}(t,\cdot))d\mu_t\notag\\ &+(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}\int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2} \vartheta_n^2u(t,\cdot)\psi_u(t,\cdot)d\mu_t\notag\\ &-\varepsilon(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot))d\mu_t. \end{align*}
Using Hypotheses \ref{cond-iper}(i), (iv), the expression of the function $t\mapsto p_{\gamma}(t)$ and Hypothesis \ref{base}(ii), we can estimate \begin{align} \beta_{n,\varepsilon}'(t)\le & (\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-1}g_{n,\varepsilon}(t,\cdot)d\mu_t\notag\\
&+\xi_0(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}\int_{\mathbb R^d}\vartheta_n^2|u(t,\cdot)| (v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}d\mu_t +\xi_1\beta_{n,\varepsilon}(t)\notag\\ &-\varepsilon\xi_1(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}d\mu_t\notag\\ &-(p-1)(1-\gamma^{-1})(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}({\mathcal F}(v_{n,\varepsilon}))(t,\cdot)d\mu_t\notag\\ &+\xi_2^+(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}\int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}
|u(t,\cdot)||\nabla_xu(t,\cdot)|d\mu_t\notag\\ &-\varepsilon(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}{\mathcal F}(u(t,\cdot))d\mu_t. \label{bedbreakfast} \end{align} Further, since ${\mathcal F}(v_{n,\varepsilon})=\vartheta_n^2{\mathcal F}(\vartheta_n)u^4v_{n,\varepsilon}^{-2} +\vartheta_n^4u^2{\mathcal F}(u)v_{n,\varepsilon}^{-2}+2\vartheta_n^2 u^3v_{n,\varepsilon}^{-2}\langle Q \nabla_xu,\nabla\vartheta_n\rangle$ and \begin{align*} &\int_{\mathbb R^d}(v(t, \cdot))^{p_{\gamma}(t)-4}\vartheta_n^2(u(t,\cdot))^3\langle Q(t,\cdot) \nabla_xu(t,\cdot),\nabla\vartheta_n\rangle d\mu_t\\ \le & \delta \int_{\mathbb R^d} (v(t, \cdot))^{p_{\gamma}(t)-4}\vartheta_n^4(u(t,\cdot))^2({\mathcal F}(u))(t,\cdot) d\mu_t\\ &+ \frac{1}{\delta} \int_{\mathbb R^d} (v(t, \cdot))^{p_{\gamma}(t)-4}(u(t,\cdot))^4({\mathcal F}(\vartheta_n))(t,\cdot)d\mu_t, \end{align*} it follows that \begin{align*} &\int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}({\mathcal F}(v_{n,\varepsilon}))(t,\cdot)d\mu_t\\ \ge &(1\!-\!\delta)\!\int_{\mathbb R^d}\vartheta_n^4(u(t,\cdot))^2 (v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t-C_{\varepsilon,\delta}(t)\!\int_{\mathbb R^d}\varphi_{1,n} d\mu_t \end{align*} for any $\delta>0$ and some continuous function $C_{\varepsilon, \delta}:[s,+\infty)\to {\mathbb R}^+$. Moreover, applying H\"older and Young inequalities and Hypothesis \ref{base}(ii) we can infer that \begin{align*} &\int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}
|u(t,\cdot)||\nabla_xu(t,\cdot)|d\mu_t\nonumber\\ \le &\frac{\delta_1}{\kappa_0}\int_{\mathbb R^d}\vartheta_n^4(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}
|u(t,\cdot)|^2({\mathcal F}(u))(t,\cdot)d\mu_t +\frac{1}{4\delta_1}(\beta_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)} \end{align*} for any $\delta_1>0$ and \begin{align*} \int_{\mathbb R^d}\vartheta_n^2u(t,\cdot) (v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}d\mu_t \le \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-1}d\mu_t \le (\beta_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-1}. \end{align*} Hence, \begin{align*} \beta_{n,\varepsilon}'(t)\le &\xi_0+\bigg (\xi_1+\frac{\xi_2^+}{4\delta_1}\bigg )\beta_{n,\varepsilon}(t)+(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-1}g_{n,\varepsilon}(t,\cdot)d\mu_t\notag\\ &-[(p-1)(1-\gamma^{-1})(1-\delta)-\kappa_0^{-1}\xi_2^+\delta_1] (\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}\notag\\
&\qquad\quad\times\int_{\mathbb R^d}\vartheta_n^4|u(t,\cdot)|^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4} ({\mathcal F}(u))(t,\cdot)d\mu_t\notag\\ &+\widetilde C_{\varepsilon,\delta,p,\gamma}(t)(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}\varphi_{1,n} d\mu_t\notag\\ &-\varepsilon(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t\notag\\ &-\varepsilon\xi_1(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}d\mu_t \end{align*} for some continuous function $\widetilde C_{\varepsilon,\delta,p,\gamma}:[s,+\infty)\to{\mathbb R}^+$. Now, we estimate the integral term containing $g_n$. We begin by observing that \begin{align*} &-2\int_{\mathbb R^d}(2\varepsilon u(t,\cdot)+\vartheta_n^2(u(t,\cdot))^3)\langle Q(t,\cdot)\nabla\vartheta_n,\nabla_xu\rangle\vartheta_n(v_{n,\varepsilon}(t,\cdot))^{p(t)-4}d\mu_t\\ \le & 4\varepsilon\delta_2\int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t\\
&+\varepsilon\delta_2^{-1}\int_{\mathbb R^d}|u(t,\cdot)|^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4} ({\mathcal F}(\vartheta_n))(t,\cdot)d\mu_t\\
&+\delta_2\int_{\mathbb R^d}\vartheta_n^4|u(t,\cdot)|^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t\\
&+\delta_2^{-1}\int_{\mathbb R^d}\vartheta_n^2|u(t,\cdot)|^4(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4} ({\mathcal F}(\vartheta_n))(t,\cdot)d\mu_t\\ \le & 4\varepsilon\delta_2\int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4} ({\mathcal F}(u))(t,\cdot)d\mu_t+\widetilde C_{\varepsilon,\delta_2}(t)\int_{\mathbb R^d}\varphi_{1,n} d\mu_t \\
&+\delta_2\int_{\mathbb R^d}\vartheta_n^4|u(t,\cdot)|^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t \end{align*} for some continuous function $\widetilde C_{\varepsilon,\delta_2}:[s,+\infty)\to{\mathbb R}^+$. Moreover, \begin{align*} &-\int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}u(t,\cdot)[\vartheta_n{\mathcal A}(t)\vartheta_n -\varepsilon u(t,\cdot)({\mathcal F}(\vartheta_n))(t,\cdot)]d\mu_t\\ \le & \overline C_{\varepsilon}(t)\int_{\mathbb R^d}(\varphi_{1,n}+\varphi_{2,n}) d\mu_t \end{align*} for some positive and continuous function $\overline C_{\varepsilon}:[s,+\infty)\to{\mathbb R}^+$. Hence, replacing these estimates in \eqref{bedbreakfast}, we get \begin{align} \beta_{n,\varepsilon}'(t)\le &\xi_0+\bigg (\xi_1+\frac{\xi_2^+}{4\delta_1}\bigg )\beta_{n,\varepsilon}(t) +\widehat C_{\varepsilon,\delta,\delta_2,p}(t)(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(\varphi_{1,n}+\varphi_{2,n})d\mu_t\notag\\ &-[(p-1)(1-\gamma^{-1})(1-\delta)-\kappa_0^{-1}\xi_2^+\delta_1-\delta_2] (\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}\notag\\
&\qquad\quad\times\int_{\mathbb R^d}\vartheta_n^4|u(t,\cdot)|^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4} ({\mathcal F}(u))(t,\cdot)d\mu_t\notag\\ &-\varepsilon(1-4\delta_2)(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t\notag\\ &-\varepsilon\xi_1(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}d\mu_t, \label{giorgio} \end{align} where, again, $\widehat C_{\varepsilon,\delta,\delta_2,p}:(s,+\infty)\to{\mathbb R}^+$ is a continuous function. Choosing $\delta=1/2$, $\delta_1=(p-1)(1-\gamma^{-1})\kappa_0/(4\xi_2)$, if $\xi_2>0$, $\delta_1=0$, otherwise, and then $\delta_2$ small enough we obtain \begin{align} \beta_{n,\varepsilon}'(t)\le &\xi_0+\omega_{p,\gamma}\beta_{n,\varepsilon}(t)+ \widehat C_{\varepsilon,1/2,\delta_2,p}(t)(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)}
\|\varphi_{1,n}+\varphi_{2,n}\|_{L^1(\mathbb R^d,\mu_t)}\notag\\ &-\varepsilon\xi_1(\beta_{n,\varepsilon}(t))^{1-p_{\gamma}(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p_{\gamma}(t)-2}d\mu_t. \label{giorgio-1} \end{align} Hence, integrating \eqref{giorgio-1} between $s$ and $t$ and letting first $n\to +\infty$ and then $\varepsilon\to 0^+$, by dominated convergence we get \begin{align*}
\|u(t,\cdot)\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_t)}\le\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)
+\omega_{p,\gamma}\int_s^t\|u(r,\cdot)\|_{L^{p(r)}(\mathbb R^d,\mu_r)}dr. \end{align*} Applying the Gronwall Lemma we conclude the proof of \eqref{botti}.
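More precisely, since the function $t\mapsto \|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)$ is nondecreasing, the Gronwall Lemma yields
\begin{align*}
\|u(t,\cdot)\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_t)}\le e^{\omega_{p,\gamma}(t-s)}(\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)),\qquad\;\,t>s,
\end{align*}
which is the bound applied in \eqref{magliarosa} below.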
{\em Step 2.} To check estimate \eqref{botti-1}, we arbitrarily fix $\gamma\in (1,+\infty)$, $t>s$ and we take \begin{equation} \varepsilon=\frac{K}{2\kappa_0}\log\bigg (\frac{\gamma e^{\kappa_0K^{-1}(t-s)}}{\gamma+e^{\kappa_0K^{-1}(t-s)}-1}\bigg ),\qquad\;\, \gamma'=\gamma\frac{e^{\kappa_0K^{-1}(t-s-\varepsilon)}-1}{e^{\kappa_0K^{-1}(t-s)}-1}. \label{edoardo} \end{equation} With these choices of $\varepsilon$ and $\gamma'$, we have $p_{\gamma'}(t-\varepsilon)=p_{\gamma}(t)$. From Step 1, we know that ${\mathcal N}(t-\varepsilon,s)f\in L^{p_{\gamma'}(t-\varepsilon)}(\mathbb R^d,\mu_{t-\varepsilon})$ and \begin{equation}
\|{\mathcal N}(t-\varepsilon,s)f\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_{t-\varepsilon})}\leq e^{\omega_{p,\gamma'}(t-s-\varepsilon)}(\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)). \label{magliarosa} \end{equation} By the evolution law and estimates \eqref{magliarosa} and \eqref{ciclismo} we get \begin{align*} &\sup_{\tau \in (t-\varepsilon,T)}
\sqrt{\tau-t+\varepsilon}\,\|\nabla_x {\mathcal N}(\tau,s)f\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_{\tau})}\\ \le &C_{T-t+\varepsilon}\{
e^{\omega_{p,\gamma'}(t-s-\varepsilon)}(\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s))+
(T-t+\varepsilon+1)\|\psi(\cdot,\cdot,0,0)\|_{\infty}\} \end{align*}
for any $T>t-\varepsilon$. In particular, taking $T=t$ and using Remark \ref{marsella}(i) to estimate $\|\psi(\cdot,\cdot,0,0)\|_{\infty}\le\xi_0$, we get \begin{align}
\|\nabla_x {\mathcal N}(t,s)f\|_{L^{p_{\gamma}(t)}(\mathbb R^d,\mu_t)}\le & \frac{C_{\varepsilon}}{\sqrt{\varepsilon}}e^{-\varepsilon\omega_{p,\gamma'}}
e^{\omega_{p,\gamma'}(t-s)}[\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)]\notag\\ &+\bigg (\sqrt{\varepsilon}+\frac{1}{\sqrt{\varepsilon}}\bigg ) C_{\varepsilon}\xi_0. \label{raffaele} \end{align} Replacing the value of $\varepsilon$ in the expression of $\gamma'$ (see \eqref{edoardo}), we deduce that \begin{eqnarray*} \gamma' \ge\inf_{\delta\ge 1} \gamma(\delta-1)^{-1}\bigg [\bigg (\frac{\delta(\gamma+\delta-1)}{\gamma}\bigg )^{1/2}-1\bigg ]=\sqrt{\gamma} \end{eqnarray*} and, since the function $\sigma\mapsto \omega_{p,\sigma}$ is decreasing, $\omega_{p,\gamma'}\le\omega_{p,\sqrt{\gamma}}$. Finally, observing that $e^{-\varepsilon\omega_{p,\gamma'}}$ is bounded in $(s,+\infty)$, $\varepsilon<(2\kappa_0)^{-1}K\log(\gamma)$ (which follows from \eqref{edoardo} recalling that $\gamma'\ge\sqrt{\gamma}$) and $\varepsilon\sim (2\gamma)^{-1}(\gamma-1)(t-s)$ as $t-s\to 0^+$, formula \eqref{botti-1} follows immediately replacing in \eqref{raffaele} the value of $\varepsilon$ given by \eqref{edoardo}. \end{proof}
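\begin{rmk}
{\rm For the reader's convenience, we check the asymptotic behaviour of $\varepsilon$ used at the end of the previous proof. Setting $h=\kappa_0K^{-1}(t-s)$, formula \eqref{edoardo} and the expansion $\log(\gamma+e^h-1)=\log\gamma+\gamma^{-1}h+O(h^2)$ as $h\to 0^+$ give
\begin{align*}
\varepsilon=\frac{K}{2\kappa_0}[h+\log\gamma-\log(\gamma+e^h-1)]
=\frac{K}{2\kappa_0}\bigg (1-\frac{1}{\gamma}\bigg )h+O(h^2)
=\frac{\gamma-1}{2\gamma}(t-s)+O((t-s)^2),
\end{align*}
so that $\varepsilon\sim (2\gamma)^{-1}(\gamma-1)(t-s)$ as $t-s\to 0^+$.}
\end{rmk}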
\begin{rmk} {\rm As the proof of Theorem \ref{vacanza} shows, if $\xi_2\le 0$, then we can take $\gamma=1$ and $\omega_{p,1}=\xi_1$ in \eqref{botti}.} \end{rmk}
\subsection{Supercontractivity}
In the next theorem we prove a stronger result than Theorem \ref{vacanza}, i.e., we prove that the nonlinear evolution operator ${\mathcal N}(t,s)$ satisfies a supercontractivity property. For this purpose, we introduce the following additional assumption. \begin{hypo} \label{hyp-super} There exists a decreasing function $\nu:(0,+\infty)\to{\mathbb R}^+$ blowing up as $\sigma$ tends to $0^+$ such that \begin{align}
&\int_{\mathbb R^d}|f|^p\log(|f|)d\mu_r
-\|f\|_{L^p(\mathbb R^d,\mu_r)}^p\log(\|f\|_{L^p(\mathbb R^d,\mu_r)})\notag\\
\le & \frac{\nu(\sigma)}{p}\|f\|_{L^p(\mathbb R^d,\mu_r)}^p
+\sigma p\int_{\mathbb R^d}|f|^{p-2}|\nabla f|^2\mbox{$1\!\!\!\;\mathrm{l}$}_{\{f\neq 0\}}d\mu_r \label{LSI_epsilon} \end{align} for any $r\in I$, $\sigma>0$ and $f\in C^1_b(\mathbb R^d)$. \end{hypo}
\begin{rmk}
{\rm Sufficient conditions for \eqref{LSI_epsilon} to hold are given in \cite{AngLorOnI}. In particular, it holds true when \eqref{point_p_D11} is satisfied with $p=1$ (see Remark \ref{rmk-2.4}) and there exist $K>0$ and $R>1$ such that $\langle b(t,x),x\rangle \le -K|x|^2\log|x|$ for any $t \in I$ and $|x|\ge R$.} \end{rmk}
\begin{thm} Let Hypotheses $\ref{base}$, $\ref{cond-iper}(i)$-$(iii)$ and $\ref{hyp-super}$ be satisfied. Then, for any $t>s \in I$, $p_0\le p<q<+\infty$ and any $f \in L^p(\mathbb R^d, \mu_s)$, ${\mathcal N}(t,s)f$ belongs to $W^{1,q}(\mathbb R^d, \mu_t)$ and \begin{align}
&\|{\mathcal N}(t,s)f\|_{L^q(\mathbb R^d, \mu_t)} \le c_2(t-s)(\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0(t-s)), \label{gotico}\\[1mm]
&\|\nabla_x{\mathcal N}(t,s)f\|_{L^q(\mathbb R^d, \mu_t)} \le c_3(t-s)\|f\|_{L^p(\mathbb R^d,\mu_s)}+c_4(t-s)\xi_0. \label{gotico-1}
\end{align}
Here, $c_2, c_3, c_4:(0,+\infty)\to {\mathbb R}^+$ are continuous functions such that $\lim_{r \to 0^+}c_k(r)=+\infty$ $(k=2,3,4)$.
\end{thm}
\begin{proof}
The proof of this result follows the same lines as the proof of Theorem \ref{vacanza}. For this reason, we use the notation introduced therein and we limit ourselves to sketching the argument in the case when $f\in C^1_b(\mathbb R^d)$.
{\em Step 1.} Here, we prove \eqref{gotico}. For any $\sigma>0$ and any $t\ge s$ we set $p(t)=e^{\kappa_0(2\sigma)^{-1}(t-s)}(p-1)+1$, $m(t)= \nu(\sigma)(p^{-1}-(p(t))^{-1})$ and $\zeta_{n, \varepsilon}(t)= e^{-m(t)}\beta_{n, \varepsilon}(t)$. The function $\zeta_{n, \varepsilon}$ is differentiable in $(s,+\infty)$ and arguing as in the proof of the quoted theorem, using \eqref{LSI_epsilon} instead of \eqref{LSI} and the definition of $m(t)$ and $p(t)$, we deduce that \begin{align*} \zeta_{n, \varepsilon}'(t)= \bigg [&(\beta_{n,\varepsilon}(t))^{1-p(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p(t)-1}g_{n,\varepsilon}(t,\cdot)d\mu_t\notag\notag\\ &-\frac{p-1}{2}(\beta_{n,\varepsilon}(t))^{1-p(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p(t)-2} ({\mathcal F}(v_{n,\varepsilon}))(t,\cdot)d\mu_t\notag\\ &+(\beta_{n,\varepsilon}(t))^{1-p(t)}\int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p(t)-2} \vartheta_n^2u(t,\cdot)\psi_u(t,\cdot)d\mu_t\notag\\ &-\varepsilon(\beta_{n,\varepsilon}(t))^{1-p(t)} \int_{\mathbb R^d}\vartheta_n^2(v_{n,\varepsilon}(t,\cdot))^{p(t)-4}({\mathcal F}(u))(t,\cdot)d\mu_t\bigg ]e^{-m(t)} \end{align*} and the same arguments used to prove \eqref{giorgio} show that, if $\delta_2<1/4$, then \begin{align*} \zeta_{n,\varepsilon}'(t)\le & \xi_0e^{-m(t)}+\bigg (\xi_1+\frac{\xi_2^+}{4\delta_1}\bigg ) \zeta_{n,\varepsilon}(t)\\ &+\bigg ( \widehat C_{\varepsilon,\delta,\delta_2,p}(t)(\beta_{n,\varepsilon}(t))^{1-p(t)} \int_{\mathbb R^d}(\varphi_{1,n}+\varphi_{2,n})d\mu_t\notag\\ &\qquad\;-[2^{-1}(p-1)(1-\delta)-\kappa_0^{-1}\xi_2^+\delta_1-\delta_2] (\beta_{n,\varepsilon}(t))^{1-p(t)}\notag\\
&\qquad\qquad\;\times\int_{\mathbb R^d}\vartheta_n^4|u(t,\cdot)|^2(v_{n,\varepsilon}(t,\cdot))^{p(t)-4} ({\mathcal F}(u))(t,\cdot)d\mu_t\notag\\ &\qquad\;-\varepsilon\xi_1(\beta_{n,\varepsilon}(t))^{1-p(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p(t)-2}d\mu_t\bigg )e^{-m(t)}. \end{align*} Choosing $\delta=1/2$, $\delta_1=(p-1)\kappa_0(8\xi_2)^{-1}$, if $\xi_2>0$, $\delta_1=0$, otherwise, and $\delta_2=[(p-1)\wedge 2]/8$ we get \begin{align} \zeta_{n,\varepsilon}'(t)\le \widetilde\omega_p\zeta_{n,\varepsilon}(t) +e^{-m(t)}\bigg [&\xi_0+ \widehat C_{\varepsilon,\delta_2,p}(t)(\beta_{n,\varepsilon}(t))^{1-p(t)}
\|\varphi_{1,n}+\varphi_{2,n}\|_{L^1(\mathbb R^d,\mu_t)}\notag\\ &-\varepsilon\xi_1(\beta_{n,\varepsilon}(t))^{1-p(t)} \int_{\mathbb R^d}(v_{n,\varepsilon}(t,\cdot))^{p(t)-2}d\mu_t\bigg ], \label{nanna} \end{align} where $\widetilde\omega_p=\xi_1+2(\xi_2^+)^2(\kappa_0(p-1))^{-1}$. Hence, integrating \eqref{nanna} between $s$ and $t$ and letting first $n\to +\infty$ and then $\varepsilon\to 0^+$, by dominated convergence we get \begin{align*}
e^{-m(t)}\|u_f(t,\cdot)\|_{L^{p(t)}(\mathbb R^d,\mu_t)}\le &\xi_0(t-s)+\|f\|_{L^p(\mathbb R^d,\mu_s)}\\
&+\widetilde\omega_p\int_s^te^{-m(r)}\|u(r,\cdot)\|_{L^{p(r)}(\mathbb R^d,\mu_r)}dr, \end{align*} which yields
$\|u_f(t,\cdot)\|_{L^{p(t)}(\mathbb R^d,\mu_t)}\le e^{\widetilde\omega_p(t-s)+m(t)}(\xi_0(t-s)+\|f\|_{L^p(\mathbb R^d, \mu_s)})$. Now, for any $q>p$ and $t>s$, we fix $\sigma= \kappa_0(t-s)(2\log(q-1)-2\log(p-1))^{-1}$. We get $p(t)=q$ and from the previous inequality the claim follows with \begin{eqnarray*} c_2(r)=\exp(\widetilde\omega_pr+ (p^{-1}-q^{-1})\nu(\kappa_0r (2\log(q-1)-2\log(p-1))^{-1})). \end{eqnarray*}
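Indeed, with this choice of $\sigma$ we have
\begin{align*}
\frac{\kappa_0(t-s)}{2\sigma}=\log\bigg (\frac{q-1}{p-1}\bigg ),\qquad\;\,
p(t)=e^{\kappa_0(2\sigma)^{-1}(t-s)}(p-1)+1=\frac{q-1}{p-1}\,(p-1)+1=q,
\end{align*}
and, consequently, $m(t)=\nu(\sigma)(p^{-1}-q^{-1})$, which gives the above expression of $c_2$.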
{\em Step 2.} Fix $q>p$. By Step 1, ${\mathcal N}((t+s)/2,s)f$ belongs to $L^q(\mathbb R^d,\mu_{(t+s)/2})$ and \begin{eqnarray*}
\|{\mathcal N}((t+s)/2,s)f\|_{L^q(\mathbb R^d,\mu_{(t+s)/2})}\le c_2((t-s)/2)\bigg (\|f\|_{L^p(\mathbb R^d,\mu_s)} +\xi_0\frac{t-s}{2}\bigg ). \end{eqnarray*} The same arguments used in Step 2 of the proof of Theorem \ref{vacanza} show that ${\mathcal N}(t,s)f\in W^{1,q}(\mathbb R^d,\mu_{\tau})$ for any $\tau>(t+s)/2$ and \begin{align*} &\sqrt{\frac{t-s}{2}}
\|\nabla_x{\mathcal N}(t,s)f\|_{L^q(\mathbb R^d,\mu_{\tau})}\notag\\
\le & C_{(t-s)/2}\bigg [c_2((t-s)/2)\bigg (\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0\frac{t-s}{2}\bigg ) +\bigg (\frac{t-s}{2}+1\bigg )\xi_0\bigg ] \end{align*} Estimate \eqref{gotico-1} follows with $c_3(r)=\sqrt{2/r}C_{r/2}c_2(r/2)$, $c_4(r)=C_{r/2}[c_2(r/2)\sqrt{r/2}$ $+\sqrt{r/2}+\sqrt{2/r}]$. \end{proof}
\subsection{Ultraboundedness}
To begin with, we prove a sort of Harnack inequality which, besides being of interest in its own right, will be crucial to prove the ultraboundedness of the nonlinear evolution operator ${\mathcal N}(t,s)$.
\begin{prop} \label{harnack} Let Hypotheses $\ref{base}(i)$-$(iii)$, $\ref{cond-iper}(i)$-$(iii)$ be satisfied. Further, suppose that estimate $\eqref{point_p_D11}$ holds, with $p=1$ and some constant $\sigma_1\in{\mathbb R}$. Then, for any $f\in C_b(\mathbb R^d)$, $p>1$, $t>s$ and $x,y\in\mathbb R^d$ the following estimate holds true: \begin{align}
|({\mathcal N}(t,s)f)(x)|^p\le& \exp\bigg (p(1+\xi_1^+)(t-s)+p\Theta(t-s)
\frac{(|x-y|+\xi_2^+(t-s))^2}{4\kappa_0(t-s)^2(p-1)}\bigg )\notag\\
&\quad\;\times [(G(t,s)|f|^p)(y)+\xi_0^p], \label{cavo} \end{align} where $\Theta(r)=(e^{2\sigma_1r}-1)/(2\sigma_1)$, if $\sigma_1>0$ and $\Theta(r)=r$ otherwise. \end{prop}
\begin{proof} To begin with, we observe that it suffices to prove \eqref{cavo} for functions in $C^1_b(\mathbb R^d)$. Indeed, if $f\in C_b(\mathbb R^d)$, we can determine a sequence $(f_n)\subset C^1_b(\mathbb R^d)$, bounded with respect to the sup-norm and converging to $f$ locally uniformly in $\mathbb R^d$. Writing \eqref{cavo} with $f$ replaced by $f_n$ and using Theorem \ref{compleanno} and \cite[Proposition 3.1(i)]{KunLorLun09Non}, we can let $n$ tend to $+\infty$ and complete the proof.
So, let us fix $f\in C^1_b(\mathbb R^d)$ and set $\Phi_n(r):=[G(t,r)(\vartheta_n^2v_\varepsilon(r,\cdot))](\phi(r))+\xi_0^p$ for any $n\in{\mathbb N}$ and $r\in (s,t)$, where $v_\varepsilon=(u_f^2+\varepsilon)^{p/2}$, $u_f={\mathcal N}(\cdot,s)f$ (see Theorem \ref{prop-3.8}), $\phi(r)=(r-s)(t-s)^{-1}x+(t-r)(t-s)^{-1}y$ and $(\vartheta_n)$ is a standard sequence of cut-off functions. We note that $\Phi_n(r)\geq C_\Phi>0$ for any $r\in[s,t]$ and any $n\geq n_0$. This is clear if $\xi_0>0$. Suppose that $\xi_0=0$. If $r<t$ then $\Phi_n(r)$ is positive since $v_\varepsilon>0$. If $r=t$, then $\Phi_n(t)=(\vartheta_n(x))^2v_\varepsilon(t,x)$ which is positive if we choose $n\in{\mathbb N}$ large enough such that $x\in {\rm supp}(\vartheta_n)$. Moreover, $\Phi_n\in C^1((s,t))$. Hence $\log(\Phi_n)\in C^1((s,t))$ and we have \begin{align*} \frac{d}{dr}\log(\Phi_n(r)) = \frac{1}{\Phi_n(r)}\{&[G(t,r)(\vartheta_n^2D_tv_\varepsilon(r,\cdot))- \mathcal{A}(\vartheta_n^2v_\varepsilon(r,\cdot)))] (\phi(r)) \\ & +(t-s)^{-1}\langle [\nabla_xG(t,r)(\vartheta_n^2v_\varepsilon(r,\cdot))](\phi(r)),x-y\rangle\}. \end{align*} We observe that \begin{align*} D_t(\vartheta_n^2v_\varepsilon)-\mathcal{A}(\vartheta_n^2v_{\varepsilon}) = & p\vartheta_n^2(u_f^2+\varepsilon)^{\frac{p}{2}-1}u_f\psi_{u_f} -p\vartheta_n^2v_{\varepsilon}^{1-\frac{4}{p}}((p-1)u_f^2+\varepsilon){\mathcal F}(u_f)\\ & -4pv_{\varepsilon}^{1-\frac{2}{p}}\vartheta_nu_f\langle Q\nabla\vartheta_n,\nabla_xu_f\rangle -2\vartheta_nv_\varepsilon \mathcal{A}\vartheta_n-2v_{\varepsilon}{\mathcal F}(\vartheta_n), \end{align*} and \begin{align*}
|\nabla_xG(t,r)(\vartheta_n^2v_{\varepsilon}(r,\cdot))|
\leq & e^{\sigma_1(t-r)}G(t,r)|\nabla_x(\vartheta_n^2v_\varepsilon(r,\cdot))|\\
\leq & pe^{\sigma_1(t\!-\!r)}G(t,r)(\vartheta_n^2(v_{\varepsilon}(r,\cdot))^{1\!-\!\frac{2}{p}}|u_f(r,\cdot)|\kappa_0^{-\frac{1}{2}}\!(({\mathcal F}(u_f))(r,\cdot))^{\frac{1}{2}})\\
&+e^{\sigma_1(t-r)}G(t,r)(2\vartheta_n|\nabla\vartheta_n|v_\varepsilon(r,\cdot)). \end{align*} Hence, we get \begin{align*} &\frac{d}{dr}\log\Phi_n(r) \\ \leq & \frac{1}{\Phi_n(r)}
\bigg\{p\frac{|x-y|}{t-s}e^{\sigma_1(t-r)}G(t,r)[\vartheta_n^2
(v_{\varepsilon}(r,\cdot))^{1-\frac{2}{p}}|u_f(r,\cdot)|\kappa_0^{-1/2}(({\mathcal F}(u_f))(r,\cdot))^{\frac{1}{2}}]\\
&\qquad\quad\;\;\,-G(t,r)\zeta_{n,\varepsilon}(r,\cdot) +\frac{|x-y|}{t-s}e^{\sigma_1(t-r)}G(t,r)(2\vartheta_n|\nabla\vartheta_n| v_\varepsilon(r,\cdot))\bigg\}(\phi(r)), \end{align*} where \begin{align*} \zeta_{n,\varepsilon}=& 2\vartheta_n(\mathcal{A}\vartheta_n)v_{\varepsilon}+2{\mathcal F}(\vartheta_n) v_\varepsilon+4p\vartheta_nv_{\varepsilon}^{1-\frac{2}{p}}u_f\langle Q\nabla\vartheta_n,\nabla_xu_f\rangle\\ &-p\vartheta_n^2v_{\varepsilon}^{1-\frac{2}{p}}u_f\psi_{u_f} +p\vartheta_n^2 v_{\varepsilon}^{1-\frac{4}{p}}((p-1)u_f^2+\varepsilon) {\mathcal F}(u_f). \end{align*} From Hypothesis \ref{cond-iper}(i) it follows that \begin{align} \frac{d}{dr}\log\Phi_n(r) \leq & \frac{1}{\Phi_n(r)} G(t,r)\bigg\{-2\vartheta_n(\mathcal{A}(r)\vartheta_n)v_{\varepsilon}(r,\cdot) +p\xi_0\vartheta_n^2v_{\varepsilon}^{1-\frac{1}{p}} +\xi_1^+\vartheta_n^2pv_{\varepsilon}(r,\cdot)\notag\\
&\qquad\qquad\qquad\;+4p(v_{\varepsilon}(r,\cdot))^{1-\frac{2}{p}}\vartheta_n|u_f(r,\cdot)|
|\langle Q(r,\cdot)\nabla\vartheta_n,\nabla_xu_f(r,\cdot)\rangle|\notag\\ &\qquad\qquad\qquad\; - p\vartheta_n^2v_\varepsilon(r,\cdot) \Big[((p-1)(u_f(r,\cdot))^2+\varepsilon)(h_{\varepsilon}(r,\cdot))^2 \notag\\
&\qquad\qquad\qquad\; -|u_f(r,\cdot)|h_{\varepsilon}(r,\cdot)\frac{e^{\sigma_1(t-r)}|x-y|+\xi_2^+(t-s)}{\sqrt{\kappa_0} (t-s)}\Big]\bigg\}(\phi(r))\notag\\
&+\frac{|x-y|}{t-s}e^{\sigma_1(t-r)}\{G(t,r)[|\nabla\vartheta_n|v_\varepsilon(r,\cdot)] \}(\phi(r)), \label{999} \end{align} where $h_\varepsilon= (u_f^2+\varepsilon)^{-1}\sqrt{{\mathcal F}(u_f)}$. Using the Cauchy-Schwarz inequality we can estimate \begin{align*}
v_{\varepsilon}^{1-\frac{2}{p}}\vartheta_n|u_f|
|\langle Q\nabla\vartheta_n,\nabla_xu_f\rangle|
\le &v_{\varepsilon}^{1-\frac{2}{p}}\vartheta_n|u_f| \sqrt{{\mathcal F}(\vartheta_n)} \sqrt{{\mathcal F}(u_f)}\notag\\ \le &\delta \vartheta_n^2v_{\varepsilon}h_{\varepsilon}^2u_f^2+ \frac{1}{4\delta}v_{\varepsilon}{\mathcal F}(\vartheta_n). \end{align*} Moreover, using formula \eqref{repres-formula}, we can estimate \begin{align*} (G(t,r)(\vartheta_n^2v_{\varepsilon}^{1-1/p}))(\phi(r)) \le &((G(t,r)(\vartheta_n^2v_{\varepsilon}))(\phi(r)))^{1-\frac{1}{p}} ((G(t,r)\vartheta_n)(\phi(r))^p\\ \le &((G(t,r)(\vartheta_n^2v_{\varepsilon}))(\phi(r)))^{1-\frac{1}{p}}\le (\Phi_n(r))^{1-\frac{1}{p}}. \end{align*} These two estimates replaced in \eqref{999} give \begin{align} \frac{d}{dr}\log\Phi_n(r) \leq & \frac{1}{\Phi_n(r)}\bigg ( G(t,r)\bigg\{\![p\delta^{-1}\!({\mathcal F}(\vartheta_n))(r,\cdot)\!-\!2\vartheta_n\mathcal{A}(r)\vartheta_n] v_{\varepsilon}(r,\cdot)\!+\!\xi_1^+\vartheta_n^2pv_{\varepsilon}(r,\cdot)\notag\\ & - p\vartheta_n^2v_\varepsilon(r,\cdot) \Big[((p-1-\delta)(u_f(r,\cdot))^2+\varepsilon)(h_{\varepsilon}(r,\cdot))^2 \notag\\
&\qquad\quad\qquad\quad -|u_f(r,\cdot)|h_{\varepsilon}(r,\cdot)\frac{e^{\sigma_1(t-r)}|x-y|\!+\!\xi_2^+(t-s)}{\sqrt{\kappa_0} (t-s)}\Big]\bigg\}\bigg )(\phi(r))\notag\\
&+\frac{|x-y|}{t-s}e^{\sigma_1(t-r)}\{G(t,r)[|\nabla\vartheta_n|v_\varepsilon(r,\cdot)] \}(\phi(r))+p. \label{tabloni} \end{align}
Straightforward computations show that ${\mathcal A}(r)\vartheta_n$ and $({\mathcal F}(\vartheta_n))(r,\cdot)$ vanish pointwise in $\mathbb R^d$ as $n\to +\infty$, for any $r\in (s,t)$ and there exists a positive constant
$C$ such that $|{\mathcal A}(r)\vartheta_n|+({\mathcal F}(\vartheta_n))(r,\cdot)\le C\varphi$ in $\mathbb R^d$ for any $n\in{\mathbb N}$, thanks to Hypothesis \ref{cond-iper}(iii). By \cite[Lemma 3.4]{KunLorLun09Non} the function $G(t,\cdot)\varphi$ is bounded in $(s,t)\times B_R$ for any $R>0$. Hence, by dominated convergence we conclude that $G(t,r)(\vartheta_n(\mathcal{A}(r)\vartheta_n)v_{\varepsilon}(r,\cdot))$ vanishes as $n\to +\infty$, pointwise in $\mathbb R^d$, for any $r\in (s,t)$ and \begin{align}
&\|G(t,r)[p\delta^{-1}({\mathcal F}(\vartheta_n))(r,\cdot)-2\vartheta_n\mathcal{A}(r)\vartheta_n]\|_{C_b(B_R)}\notag\\
\le &C_{\delta,p,\|u_f\|_{\infty}}\sup_{r\in (s,t)}\|G(t,r)\varphi\|_{C_b(B_R)}, \label{co'} \end{align}
where $R>\max\{|x|,|y|\}$. Similarly, the last but one term in \eqref{tabloni} vanishes pointwise in $\mathbb R^d$ as $n\to +\infty$, for any $r\in (s,t)$ and \begin{align}
|(G(t,r)[|\nabla\vartheta_n|v_\varepsilon(r,\cdot)])(\phi(r))|\le C_{p,\|u_f\|_{\infty}}\sup_{r\in (s,t)}\|G(t,r)\varphi\|_{C_b(B_R)}. \label{ciafaloni} \end{align}
Moreover, using the inequality $\alpha\beta^2-\gamma\beta\geq -\gamma^2/(4\alpha)$ for any $\alpha>0$ and $\beta, \gamma\in{\mathbb R}$, and that $G(t,s)g_1\leq G(t,s)g_2$ for any $t>s$ and any $g_1\leq g_2$, we deduce \begin{align*} \frac{d}{dr}\log(\Phi_n(r)) \leq &\frac{1}{\Phi_n(r)}(G(t,r)[p\delta^{-1}({\mathcal F}(\vartheta_n))(r,\cdot)-2\vartheta_n\mathcal{A}(r)\vartheta_n] v_{\varepsilon}(r,\cdot))(\phi(r))\notag\\ & +p(1+\xi_1^{+}+e^{2\sigma_1^+(t\!-\!r)}\chi_{\delta})\notag\\
&+\frac{e^{\sigma_1(t-r)}|x-y|}{(t-s)\Phi_n(r)}(G(t,r)[|\nabla\vartheta_n|v_\varepsilon(r,\cdot)]) (\phi(r)), \end{align*}
where $\chi_{\delta}=(|x-y|+\xi_2^+(t-s))^2(4\kappa_0(t-s)^2(p-1-\delta))^{-1}$. Integrating both sides of the previous inequality in $(s,t)$ and taking \eqref{co'} and \eqref{ciafaloni} into account to let $n\to +\infty$, we get \begin{align*} \log\bigg (\frac{((u_f(t,x))^2+\varepsilon)^{p/2}+\xi_0^p} {(G(t,s)((f^2+\varepsilon)^{p/2}))(y)+\xi_0^p}\bigg ) \leq & p[(1+\xi_1^+)(t-s)+\Theta(t-s)\chi_{\delta}], \end{align*} or even \begin{align*} ((u_f(t,x))^2+\varepsilon)^{p/2}\le &\exp[p(1+\xi_1^+)(t-s)+p\Theta(t-s)\chi_{\delta}]\notag\\ &\quad\;\,\times[(G(t,s)((f^2+\varepsilon)^{p/2}))(y)+\xi_0^p]. \end{align*} By \eqref{repres-formula} we can let $\varepsilon$ and $\delta$ tend to zero in both sides of the previous inequality and this yields the assertion. \end{proof}
We can now prove the main result of this subsection. For this purpose, we set $\varphi_{\lambda}(x)=e^{\lambda |x|^2}$ for any $x\in\mathbb R^d$ and $\lambda>0$, and introduce the following additional assumption.
\begin{hypo} \label{salvadori} For any $I\ni s<t$ and $\lambda>0$, the function $G(t,s)\varphi_{\lambda}$ belongs to $L^{\infty}(\mathbb R^d)$ and, for any $\delta>0$,
$+\infty>M_{\delta,\lambda}:=\sup_{t-s\ge \delta}\|G(t,s)\varphi_{\lambda}\|_{\infty}$. \end{hypo}
\begin{rmk} {\rm A sufficient condition for Hypothesis \ref{salvadori} to hold is given in \cite[Theorem 4.3]{AngLorOnI}. More precisely, it holds when $\eqref{point_p_D11}$ holds with $p=1$ and there exists
$K>0$, $\beta,R>1$ such that $\langle b(t,x),x\rangle\le -K|x|^2(\log(|x|))^{\beta}$ for any $t\in I$ and $x\in\mathbb R^d\setminus B_R$.} \end{rmk} \begin{thm} Assume that Hypothesis $\ref{salvadori}$ and the conditions in Proposition $\ref{harnack}$ are satisfied. Then, for any $I\ni s<t$, $f\in L^p(\mathbb R^d,\mu_s)$ $(p\in [p_0,+\infty))$ the function ${\mathcal N}(t,s)f$ belongs to $W^{1,\infty}(\mathbb R^d)$ and \begin{align}
&\|{\mathcal N}(t,s)f\|_{\infty}\le c_4(t-s)\|f\|_{L^p(\mathbb R^d,\mu_s)}+c_5(t-s)\xi_0, \label{giovanni}\\[1mm]
&\|\nabla_x{\mathcal N}(t,s)f\|_{\infty}\le c_6(t-s)\|f\|_{L^p(\mathbb R^d,\mu_s)}+c_7(t-s)\xi_0 \label{mancarella} \end{align} for some continuous functions $c_k:(0,+\infty)\to{\mathbb R}^+$ $(k=4,5,6,7)$ which blow up at zero. \end{thm}
\begin{proof} As usual, we prove the assertion for functions in $C^1_b(\mathbb R^d)$.
{\em Step 1.} Here, we prove \eqref{giovanni}. So, let us fix $f\in C^1_b(\mathbb R^d)$ and $x\in\mathbb R^d$. By the invariance property of the family $\{\mu_t: t\in I\}$ and inequality \eqref{cavo}, we can estimate \begin{align*}
&\|f\|_{L^p(\mathbb R^d,\mu_s)}^p=\int_{\mathbb R^d}(G(t,s)|f|^p)(y)\mu_t(dy)\\
\ge &\int_{B_R}[(G(t,s)|f|^p)(y)+\xi_0^p]\mu_t(dy)-\xi_0^p\\
\ge & |({\mathcal N}(t,s)f)(x)|^pe^{-p\phi(t-s)}\! \int_{B_R}\!\exp\bigg (\!-p\Theta(t-s)
\frac{(|x-y|+\xi_2^+(t-s))^2}{4\kappa_0(t-s)^2(p-1)}\bigg )\mu_t(dy)-\xi_0^p\\
\ge & |({\mathcal N}(t,s)f)(x)|^pe^{-p\phi(t-s)}\!
\exp\bigg (\!-p\Theta(t-s)\frac{(|x|+R+\xi_2^+(t-s))^2}{4\kappa_0(t-s)^2(p-1)}\bigg )\mu_t(B_R) -\xi_0^p, \end{align*} where $\phi=1+\xi_1^+$. By the tightness of the family $\{\mu_t: t\in I\}$ we can fix $R>0$ such that $\mu_t(B_R)\ge 2^{-p}$ for any $t\ge s$ and, from the previous chain of inequalities, we conclude that \begin{align}
|({\mathcal N}(t,s)f)(x)|^p\le 2^p(\widetilde C(t-s))^p\varphi_{p\Lambda(t-s)}(x)
(\|f\|_{L^p(\mathbb R^d,\mu_s)}^p+\xi_0^p), \label{harnack-1} \end{align} where \begin{align*} \Lambda(r)=\exp\bigg (\frac{\Theta(r)}{2\kappa_0r^2(p-1)}\bigg ),\qquad\;\, \widetilde C(r)=\exp\bigg (\phi r+\Theta(r)\frac{(\xi_2^+r+R)^2}{2\kappa_0r^2(p-1)}\bigg ). \end{align*}
Now, using the evolution law and again \eqref{cavo}, we can write \begin{align}
|({\mathcal N}(t,s)f)(x)|^p=& |({\mathcal N}(t,(t+s)/2){\mathcal N}((t+s)/2,s)f)(x)|^p\notag\\
\le & [(G(t,(t+s)/2)|{\mathcal N}((t+s)/2,s)f|^p)(y)+\xi_0^p]\notag\\ &\qquad\times \exp\bigg (p\phi\frac{t-s}{2}+p\Theta\bigg (\frac{t-s}{2}\bigg )
\frac{(2|x-y|+\xi_2^+(t-s))^2}{4\kappa_0(t-s)^2(p-1)}\bigg ) \label{lara} \end{align} for any $y\in\mathbb R^d$. From \eqref{repres-formula} and \eqref{harnack-1} we obtain \begin{align}
&(G(t,(t+s)/2)|{\mathcal N}((t+s)/2,s)f|^p)(y)\notag\\ \le &
2^p(\widetilde C((t-s)/2))^p(\|f\|_{L^p(\mathbb R^d,\mu_s)}^p+\xi_0^p) (G(t,(t+s)/2)\varphi_{p\Lambda(t-s)/2})(y)\notag\\ \le &
2^p(\widetilde C((t-s)/2))^p(\|f\|_{L^p(\mathbb R^d,\mu_s)}^p+\xi_0^p) M_{\frac{t-s}{2},p\Lambda(t-s)/2}. \label{laretta} \end{align} From \eqref{lara}, \eqref{laretta}, choosing $y=x$ in the exponential term, we get \begin{align*}
|({\mathcal N}(t,s)f)(x)|\le &
[2\widetilde C((t-s)/2)(\|f\|_{L^p(\mathbb R^d,\mu_s)}+\xi_0) M_{\frac{t-s}{2},p\Lambda(t-s)/2}^{1/p}+\xi_0]\notag\\ &\qquad\times \exp\bigg (\phi\frac{t-s}{2}+\Theta\bigg (\frac{t-s}{2}\bigg ) \frac{(\xi_2^+)^2}{4\kappa_0(p-1)}\bigg ) \end{align*} and \eqref{giovanni} follows with \begin{align*} c_4(r)=&2\widetilde C(r/2)M_{r/2,p\Lambda r/2}^{1/p} \exp[(1+\xi_1^+)r/2+\Theta(r/2)(\xi_2^+)^2(4\kappa_0(p-1))^{-1}],\\ c_5(r)=&(2\widetilde C(r/2)M_{r/2,p\Lambda r/2}^{1/p}+1)\exp[(1+\xi_1^+)r/2+\Theta(r/2)(\xi_2^+)^2(4\kappa_0(p-1))^{-1}], \end{align*}
{\em Step 2.} We fix $t>s$, $f\in C^1_b(\mathbb R^d)$. By Theorem \ref{exi_cb} ${\mathcal N}(t,s)f\in C^1_b(\mathbb R^d)$ and, by Step 1,
$\|{\mathcal N}((t+s)/2,s)f\|_{\infty}\le c_4((t-s)/2)\|f\|_{L^p(\mathbb R^d,\mu_s)}+c_5((t-s)/2)\xi_0$. Hence, from \eqref{ciclismo} we get \begin{align*}
&\sqrt{\frac{t-s}{2}}\|\nabla_x{\mathcal N}(t,s)f\|_{\infty}\\
\le&\widetilde C_{(t-s)/2}\bigg [c_4((t-s)/2)\|f\|_{L^p(\mathbb R^d,\mu_s)}+c_5((t-s)/2)\xi_0 +\frac{t-s}{2}\xi_0+\xi_0\bigg ]. \end{align*} Taking $T=t$, estimate \eqref{mancarella} follows with $c_6(r)=\sqrt{2}r^{-1/2}\widetilde C_{r/2}c_4(r/2)$ and $c_7(r)=\widetilde C_{r/2}[c_5(r/2)\sqrt{2/r}+\sqrt{r/2}+\sqrt{2/r}]$. \end{proof}
\section{Stability of the null solution}
In this section we study the stability of the null solution to problem \eqref{n-sm-pb} both in the $C_b$- and $L^p$-settings. For this reason, we assume that $\psi(\cdot,\cdot,0,0)=0$.
\begin{thm} The following properties are satisfied. \begin{enumerate}[\rm (i)] \item Let Hypotheses $\ref{base}$ and $\ref{cond-iper}(i)$-$(iii)$ be satisfied. Further, suppose that the constant $\omega_p=\xi_1+(\xi_2^+)^2(4\kappa_0(p-1))^{-1}$ is negative, where $\xi_1$ and $\xi_2$ are defined in Hypothesis $\ref{cond-iper}(ii)$.
Then, for any $p\ge p_0$, there exists a positive constant $K_p$
such that, for any $s\in I$, $f\in L^p(\mathbb R^d,\mu_s)$ and $j=0,1$, \begin{align}
&\|D_x^j{\mathcal N}(t,s)f\|_{L^p(\mathbb R^d,\mu_t)}\leq K_p^je^{\omega_p(t-s)}\|f\|_{L^p(\mathbb R^d,\mu_s)}, \qquad\;\,t>s+j. \label{cartellino} \end{align} \item Suppose that the assumptions of Theorem $\ref{prop-3.8}$ are satisfied. Further, assume that Hypotheses $\ref{cond-iper}(i)$-$(iii)$ hold with $\xi_1<0$. Then, \eqref{cartellino} holds true for any $f\in C_b(\mathbb R^d)$ with $p=+\infty$ and $\omega_p$ and $K_p$ being replaced, respectively, by $\xi_1$ and $C_1e^{-\xi_1}$. \end{enumerate} \end{thm}
\begin{proof} (i) Estimate \eqref{cartellino} can be obtained arguing as in the proof of Theorem \ref{vacanza}, where now $p(t)=p$ for any $t\ge s$. As far as the gradient of ${\mathcal N}(t,s)f$ is concerned, we fix $t>s+1$ and observe that ${\mathcal N}(t,s)f={\mathcal N}(t,t-1){\mathcal N}(t-1,s)f$. Hence, from \eqref{ciclismo} we obtain \begin{eqnarray*}
\|\nabla_x{\mathcal N}(t,s)f\|_{L^p(\mathbb R^d,\mu_t)}\le C_1\|{\mathcal N}(t-1,s)f\|_{L^p(\mathbb R^d,\mu_{t-1})}
\le K_pe^{\omega_p(t-s)}\|f\|_{L^p(\mathbb R^d,\mu_s)}, \end{eqnarray*} where $K_p=C_1e^{-\omega_p}$.
(ii) The assertion follows easily by letting $p$ tend to $+\infty$ in \eqref{cartellino}. \end{proof}
\appendix
\section{Technical results} \begin{prop}\label{smoth_v} Let Hypotheses $\ref{base}$ hold and let $g\in C((a,b]\times \mathbb R^d)$ satisfy
$[g]_{\gamma,\infty}:=\sup_{r\in (a,b)}(r-a)^{\gamma}\|g(r,\cdot)\|_\infty<+\infty$ for some $\gamma\in [0,1)$ and some $I\ni a<b$. Then, the function $z:[a,b]\times\mathbb R^d\to{\mathbb R}$, defined by \begin{equation*} z(t,x):=\int_a^t (G(t,r)g(r, \cdot))(x)dr, \qquad t \in [a,b], \, x \in \mathbb R^d, \label{x-men} \end{equation*} belongs to $C_b([a,b]\times \mathbb R^d)\cap C^{0,1+\theta}((a,b]\times \mathbb R^d)$ for any $\theta\in (0,1)$, \begin{equation}
\|z\|_{\infty}\le \frac{(b-a)^{1-\gamma}}{1-\gamma}[g]_{\gamma,\infty},\qquad\;\,\|\nabla_xz(t,\cdot)\|_{\infty}\le c_{\gamma,a,b}(t-a)^{\frac{1}{2}-\gamma}[g]_{\gamma,\infty} \label{leopolda} \end{equation} and \begin{equation}
\|\nabla_x z(t,\cdot)\|_{C^{\theta}(B_R)}\le C_R[g]_{\gamma,\infty}(t-a)^{\frac{1-2\gamma-\theta}{2}}, \label{leopolda-1} \end{equation} for any $t\in (a,b]$, $R>0$ and some positive constants $c_{\gamma,a,b}$ and $C_R$. In particular, if $\gamma\le 1/2$, then $\nabla_xz$ is bounded in $(a,b]\times\mathbb R^d$.
Finally, if $[g]_{\gamma,\theta,R}:=\sup_{t\in (a,b]}(t-a)^{\gamma}\|g(t,\cdot)\|_{C^{\theta}_b(B_R)}<+\infty$, for some $\theta\in (0,1)$ and any $R>0$, then $z\in C^{0,2+\theta}_{\rm loc}((a,b]\times\mathbb R^d)\cap C^{1,2}((a,b]\times\mathbb R^d)$. Moreover, \begin{align}
\|z(t,\cdot)\|_{C_b^{2}(B_R)}\le c(t-a)^{\frac{\theta}{2}-\gamma}[g]_{\gamma,\theta,R+1},\qquad\;\,t\in (a,b]. \label{rai-sport} \end{align} and, if $\theta>\alpha$, then \begin{align}
\|z(t,\cdot)\|_{C_b^{2+\rho}(B_R)}\le c(t-a)^{\frac{\theta-\rho}{2}-\gamma}[g]_{\gamma,\theta,R+1},\qquad\;\,t\in (a,b], \label{rai-sport-1} \end{align} where $\rho=\alpha$ if $\theta>\alpha$, whereas $\rho$ can be arbitrarily fixed in $(0,\theta)$ otherwise. \end{prop}
\begin{proof} Throughout the proof, we will make use of \cite[Proposition 2.7]{AL}, where it has been shown that, for any $I\ni a<b$, $R>0$, $\eta\in (0,1]$ and $\beta\in [\eta,2+\alpha]$, there exist positive constants $C_{\beta}=C_{\beta}(a,b,R)$ and $C_{\eta,\beta}=C_{\eta,\beta}(a,b,R)$ such that for any $f\in C_b(\mathbb R^d)\cap C^{\eta}_{\rm loc}(\mathbb R^d)$
\|G(t,s)f\|_{C^{\beta}(\overline{B}_R)} \leq \left\{ \begin{array}{ll}
C_{\beta}(t-s)^{-\frac{\beta}{2}}\|f\|_{\infty},\\[2mm]
C_{\eta,\beta}(t-s)^{-\frac{\beta-\eta}{2}}\|f\|_{C^{\eta}(\overline{B}_{R+1})}, \end{array} \right. \qquad\;\, a\leq s<t\leq b. \end{equation}
To begin with, we observe that, for any $t\in (a,b]$ and $x\in\mathbb R^d$, the function $r\mapsto (G(t,r)g(r,\cdot))(x)$ is measurable in $(a,t]$. If $g$ is bounded and uniformly continuous in ${\mathbb R}^{d+1}$ this is clear. Indeed, as it has been recalled in Section \ref{sect-2}, the function $(t,s,x)\mapsto (G(t,s)f)(x)$ is continuous in $\{(t,s,x)\in I\times I\times\mathbb R^d: t\ge s\}$ for any $f\in C_b(\mathbb R^d)$. Hence, taking \eqref{norm_g} into account and adding and subtracting $(G(t,r)g(r_0,\cdot))(x)$, we can estimate \begin{align*}
&|(G(t,r)g(r,\cdot))(x)-(G(t,r_0)g(r_0,\cdot))(x_0)|\\
\le & \|g(r,\cdot)-g(r_0,\cdot)\|_{\infty}+|(G(t,r)g(r_0,\cdot))(x)-(G(t,r_0)g(r_0,\cdot))(x_0)| \end{align*} for any $(r,x), (r_0,x_0)\in [a,t]\times\mathbb R^d$, and the last side of the previous chain of inequalities vanishes as $(r,x)$ tends to $(r_0,x_0)$.
If the function $g$ is as in the statement of the proposition, we can approximate it by a sequence $(g_n)$ of bounded and uniformly continuous functions in ${\mathbb R}^{d+1}$ which converge to $g$ pointwise in $(a,b)\times\mathbb R^d$
and satisfy $\|g_n(r,\cdot)\|\le \|g(r,\cdot)\|_{\infty}$ for any $r\in (a,b)$.\footnote{This can be done, for instance, setting $g_n(t,x)=\vartheta_n(t)(\overline g(t,\cdot)\star \rho_n)(x)$ for any $(t,x)\in{\mathbb R}^{d+1}$ and $n\in{\mathbb N}$, where $\overline g:(a,+\infty)\times\mathbb R^d\to{\mathbb R}$ equals $g$ in $(a,b)\times\mathbb R^d$ and $\overline g(t,\cdot)=g(b,\cdot)$ for any $t>b$, $(\vartheta_n)\subset C^{\infty}({\mathbb R})$ is a sequence of smooth functions such that $\mbox{$1\!\!\!\;\mathrm{l}$}_{[a+2/n,+\infty)}\le\vartheta_n\le \mbox{$1\!\!\!\;\mathrm{l}$}_{[a+1/n,+\infty)}$ for any $n\in{\mathbb N}$ and ``$\star$'' denotes convolution with respect to the spatial variables.} Since the sequence $(g_n)$ is bounded and pointwise converges to $g$ in $(a,t]\times\mathbb R^d$, by \cite[Proposition 3.1(i)]{KunLorLun09Non} $(G(t,\cdot)g_n(r,\cdot))(x)$ converges to $(G(t,\cdot)g(r,\cdot))(x)$ as $n\to +\infty$ pointwise in $(a,t]$. Hence, the function $r\mapsto (G(t,r)g(r,\cdot))(x)$ is measurable in $(a,t]$.
Using again \eqref{norm_g} we obtain
$\|G(t,r)g(r,\cdot)\|_{\infty}\le \|g(r,\cdot)\|_{\infty}\le (r-a)^{-\gamma}[g]_{\gamma,\infty}$ for any $r\in (a,t]$. It thus follows that $z$ is bounded and the first estimate in \eqref{leopolda} follows.
Proving that $z$ is continuous in $[a,b]\times\mathbb R^d$ is an easy task, based on estimate \eqref{norm_g} and the dominated convergence theorem. Hence, the details are omitted.
Fix $\theta\in (0,1)$. The first estimate in \eqref{28AL} with $\beta=1+\theta$ and the assumptions on $g$ allow us to differentiate $z$ with respect to $x_j$ ($j=1,\ldots,d$) under the integral sign and to obtain that $D_j z(t,\cdot)$ is locally $\theta$-H\"older continuous in $\mathbb R^d$, uniformly with respect to $t \in (a,b)$, and
\|D_jz(t,\cdot)\|_{C^{\theta}(B_R)}\le C_R[g]_{\gamma,\infty}(t-a)^{\frac{1-2\gamma-\theta}{2}},\qquad\;\,t\in (a,b]. \label{sgura} \end{equation} To conclude that $D_j z$ is continuous in $(a,b]\times \mathbb R^d$, it suffices to prove that, for any $x\in\mathbb R^d$, the function $D_jz(\cdot,x)$ is continuous in $(a,b]$. For this purpose, we apply an interpolation argument. We fix $R>0$ such that $x\in \overline{B}_R$. Applying the well-known interpolation estimate
$\|f\|_{C^1(\overline{B}_R)}\le K\|f\|_{C(\overline{B}_R)}^{\theta/(1+\theta)}\|f\|_{C^{1+\theta}(B_R)}^{1/(1+\theta)}$
with $f=z(t,\cdot)-z(t_0,\cdot)$ and $t,t_0\in (a,b]$, from the continuity of $z$ in $[a,b]\times\mathbb R^d$ and the local boundedness in $(a,b]$ of the function $t\mapsto\|f(t,\cdot)\|_{C^{1+\theta}(B_R)}$, we conclude that the function $D_jz(\cdot,x)$ is continuous in $(a,b]$. Hence, $z\in C^{0,1+\theta}_{\rm loc}((a,b]\times\mathbb R^d)$. Estimate \eqref{leopolda-1} follows from \eqref{sgura}. Further, estimate \eqref{point_p_D} and the assumption on $g$ imply that \begin{align*}
|D_jz(t,x)|\le C_0[g]_{\gamma,\infty}\int_a^t(r-a)^{-\gamma}(1+(t-r)^{-1/2})dr\le C'_{\gamma,a,b}(t-a)^{\frac{1}{2}-\gamma}[g]_{\gamma,\infty}
\end{align*}
for any $(t,x)\in (a,b]\times\mathbb R^d$, whence the second estimate in \eqref{leopolda} follows at once.
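Here, we have used that, by the change of variable $r=a+(t-a)\theta$,
\begin{align*}
\int_a^t(r-a)^{-\gamma}(t-r)^{-\frac{1}{2}}dr=(t-a)^{\frac{1}{2}-\gamma}\int_0^1\theta^{-\gamma}(1-\theta)^{-\frac{1}{2}}d\theta
=B(1-\gamma,1/2)\,(t-a)^{\frac{1}{2}-\gamma},
\end{align*}
where $B$ denotes the Euler Beta function, while $\int_a^t(r-a)^{-\gamma}dr=(1-\gamma)^{-1}(t-a)^{1-\gamma}\le (1-\gamma)^{-1}(b-a)^{\frac{1}{2}}(t-a)^{\frac{1}{2}-\gamma}$.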
Let us now assume that $\sup_{t\in (a,b)}(t-a)^{\gamma}\|g(t,\cdot)\|_{C^{\theta}_b(B_R)}<+\infty$ for any $R>0$. Arguing as above and taking the second estimate in \eqref{28AL} with $\beta=2$ (resp. $\beta=2+\alpha$) into account, we can show that $z(t,\cdot)\in C^2_{\rm loc}(\mathbb R^d)$ (resp. $z(t,\cdot)\in C^{2+\alpha}_{\rm loc}(\mathbb R^d)$) for any $t \in (a,b]$ and \eqref{rai-sport} (resp. \eqref{rai-sport-1}) holds true. Applying the interpolation inequality
$\| \varphi\|_{C^2(\overline{B}_R)} \leq C \|\varphi\|_{\infty}^{\theta/(2+\theta)} \| \varphi\|_{C^{2+\theta} (\overline{B}_R)}^{2/(2+\theta)}$ with $\varphi=z(t,\cdot)-z(t_0,\cdot)$ we deduce that the second-order spatial derivatives of $z$ are continuous in $(a,b]\times B_R$ and, hence, in $(a,b]\times\mathbb R^d$ due to the arbitrariness of $R>0$.
Finally, to prove the differentiability of $z$, we introduce the sequence $(z_n)$, where \begin{eqnarray*} z_n(t,x)=\int_a^{t-\frac{1}{n}}(G(t,r)g(r, \cdot))(x)dr, \qquad t \in [a+1/n,b],\;\, x \in \mathbb R^d,\;\,n\in{\mathbb N}. \end{eqnarray*} As is immediately seen, $z_n$ converges to $z$ locally uniformly in $(a,b]\times\mathbb R^d$, and each function $z_n$ is differentiable in $[a+1/n,b]\times\mathbb R^d$ with respect to $t$ and \begin{eqnarray*} D_tz_n(t,x)=\int_a^{t-\frac{1}{n}}(\mathcal{A}(t)G(t,r)g(r, \cdot))(x)dr+(G(t,t-1/n)g(t-1/n,\cdot))(x) \end{eqnarray*}
for such values of $(t,x)$. Since $\|\mathcal{A}(t)G(t,r)g(r, \cdot)\|_{C_b(B_R)}\le C_R[g]_{\gamma,\infty}(t-r)^{\theta/2-\gamma}(r-a)^{-\gamma}$ for any $r\in (a,t)$, and $g(t-1/n,\cdot)$ converges to $g(t,\cdot)$ locally uniformly in $\mathbb R^d$, by \cite[Proposition 3.6]{KunLorLun09Non} and the dominated convergence theorem, we conclude that $D_tz_n$ converges locally uniformly in $(a,b]\times\mathbb R^d$ to $\mathcal{A} z+g$. Thus, we conclude that $z$ is continuously differentiable in $(a,b]\times\mathbb R^d$ and, therein, $D_tz=\mathcal{A} z+g$. \end{proof}
\begin{lemm} \label{lemm-brigida}
Let $J$ be an interval and let $g\in C(J\times\mathbb R^d)$ be such that $g(t,\cdot)$ is bounded in $\mathbb R^d$ for any $t\in J$. Then, the function $t\mapsto \|g(t,\cdot)\|_{\infty}$ is measurable in $J$. \end{lemm}
\begin{proof}
To begin with, we observe that for any $n\in{\mathbb N}$ the function $t\mapsto \|g(t,\cdot)\|_{C(\overline B_n)}$ is continuous in $J$. This is a straightforward consequence of the uniform continuity of $g$ in $J_0\times B_n$ for any bounded interval $J_0$
compactly embedded into $J$. To complete the proof, it suffices to show that $z_n(t):=\|g(t,\cdot)\|_{C(\overline B_n)}$ converges to $\|g(t,\cdot)\|_{\infty}$
for any $t\in J$. Clearly, for any fixed $t\in J$, the sequence $(z_n(t))$ is increasing and is bounded from above by $\|g(t,\cdot)\|_{\infty}$. To prove that $(z_n(t))$ converges to $\|g(t,\cdot)\|_{\infty}$, we fix a sequence $(x_n)\subset\mathbb R^d$ such that
$|g(t,x_n)|$ tends to $\|g(t,\cdot)\|_{\infty}$ as $n\to +\infty$. For any $n\in{\mathbb N}$, let $k_n\in{\mathbb N}$ be such that $x_n\in B_{k_n}$. Without loss of generality, we can assume that the sequence $(k_n)$ is increasing. Then,
$z_{k_n}(t)=\|g(t,\cdot)\|_{C(\overline B_{k_n})}\ge |g(t,x_n)|$ for any $n\in{\mathbb N}$. Hence, the sequence $(z_{k_n}(t))$ converges to $\|g(t,\cdot)\|_{\infty}$ and this is enough to conclude that the whole sequence $(z_n(t))$ converges to $\|g(t,\cdot)\|_{\infty}$ as $n\to +\infty$. \end{proof}
Finally, we prove some interior $L^p$-estimates.
\begin{prop} \label{prop-A3} Let $\Omega\subset\mathbb R^d$ be a bounded open set and let $u\in C^{1,2}((s,T)\times\Omega)$ solve the equation $D_tu=\mathcal{A} u$ in $(s,T)\times \Omega$. Then, for any $x_0\in\Omega$ and $R_1>0$, such that $B_{R_1}(x_0)\Subset \Omega$, there exists a positive constant $c=c(R_1,x_0,s,T)$ such that \begin{align*}
(t-s)\|u(t,\cdot)\|_{W^{2,p}(B_{R_1}(x_0))}+\sqrt{t-s}\|u(t,\cdot)\|_{W^{1,p}(B_{R_1}(x_0))}
\leq c\sup_{r\in(s,T)}\|u(r,\cdot)\|_{L^p(\Omega)}. \end{align*} \end{prop}
\begin{proof} Throughout the proof, we denote by $c$ a positive constant, independent of $n$ and $u$, which may vary from line to line.
Let us fix $0<R_1<R_2$, such that $\overline{B_{R_2}(x_0)}\subset\Omega$, and a sequence of cut-off functions $(\vartheta_n)\subset C_c^\infty(\Omega)$ such that $\mbox{$1\!\!\!\;\mathrm{l}$}_{B_{r_n}(x_0)}\leq\vartheta_n\leq \mbox{$1\!\!\!\;\mathrm{l}$}_{B_{r_{n+1}}(x_0)}$ and
$\|\vartheta_n\|_{C_b^k(\Omega)}\leq 2^{kn}c$, for any $n\in{\mathbb N}\cup\{0\}$ and $k=0,1,2,3$, where $r_n:=2R_1-R_2+(2-2^{-n})(R_2-R_1)$. Since the function $u_n:=\vartheta_nu$ solves the equation $D_tu_n=\mathcal{A} u_n+g_n$ in $(s,T)\times B_{r_{n+1}}(x_0)$, where $g_n=-u\mathcal{A}\vartheta_n-\langle Q\nabla_xu,\nabla\vartheta_n\rangle$, we can write \begin{align} u_n(t,x)=(G_{n+1}^{{\mathcal D}}(t,s)\vartheta_nu(s,\cdot))(x)+\int_s^t(G_{n+1}^{{\mathcal D}}(t,\sigma)g_n(\sigma,\cdot))(x)d\sigma, \label{acqua-5} \end{align} where $G_{n+1}^{{\mathcal D}}(t,s)$ is the evolution operator associated to the realization of the operator $\mathcal{A}$ in $L^p(B_{r_{n+1}}(x_0))$ with homogeneous Dirichlet boundary conditions. It is well known that
$\|G^{{\mathcal D}}(t,r)\psi\|_{W^{2,p}(B_{r_{n+1}}(x_0))}\leq c(t-r)^{-1+\frac{\alpha}{2}}\|\psi\|_{W^{\alpha,p}(B_{r_{n+1}}(x_0))}$ for any $\alpha\in (0,1)$, $\psi\in W^{\alpha,p}(B_{r_{n+1}}(x_0))$ and $s\leq r<t\leq T$. Since $g_n(\sigma,\cdot)\in W^{\alpha,p}(B_{r_{n+1}}(x_0))$ for any $\sigma\in(s,t)$, from \eqref{acqua-5} we obtain \begin{align*}
(t-s)\|u(t,\cdot)\|_{W^{2,p}(B_{r_n}(x_0))}\le &c\|u(s,\cdot)\|_{L^p(B_{r_{n+1}}(x_0))}\\
&+c\int_s^t (t-\sigma)^{-1+\frac{\alpha}{2}}\|g_n(\sigma,\cdot)\|_{W^{\alpha,p}(B_{r_{n+1}}(x_0))}d\sigma. \end{align*}
Now, for any $n\in{\mathbb N}$ we set $\zeta_n:=\sup_{t\in(s,T)}(t-s)\|u(t,\cdot)\|_{W^{2,p}(B_{r_n}(x_0))}$ and estimate the function under the integral sign. At first, we note that \begin{align*}
\|g_n(\sigma,\cdot)\|_{W^{\alpha,p}(B_{r_{n+1}}(x_0))}
\leq c\|\vartheta_n\|_{C_b^{2+\alpha}(B_{r_{n+1}}(x_0))}\|u(\sigma,\cdot)\|_{W^{1+\alpha,p}(B_{r_{n+1}}(x_0))}. \end{align*} By interpolation and using Young's inequalities we obtain, for any $\sigma\in (s,t)$, \begin{align*}
\|u(\sigma,\cdot)\|_{W^{1,p}(B_{r_{n+1}}(x_0))}
\leq & c(\sigma-s)^{-\frac{1}{2}}\|u(\sigma,\cdot)\|^{\frac{1}{2}}_{L^p(B_{r_{n+1}}(x_0))}\sqrt{\zeta_{n+1}} \\
\leq & (\sigma-s)^{-\frac{1}{2}}\left(c\varepsilon^{-1}\|u(\sigma,\cdot)\|_{L^p(\Omega)}+\varepsilon\zeta_{n+1}\right), \end{align*} and \begin{align*}
\|\nabla_xu(\sigma,\cdot)\|_{W^{\alpha,p}(B_{r_{n+1}}(x_0))}
\leq & (\sigma-s)^{-\frac{1+\alpha}{2}}\!\Big (c\varepsilon^{-\frac{1+\alpha}{1-\alpha}}\|u(\sigma,\cdot)\|_{L^p(\Omega)} +\varepsilon\zeta_{n+1}\Big ). \end{align*} Collecting the above estimates together we get \begin{eqnarray*}
\zeta_n\leq 8^nc\varepsilon\zeta_{n+1}+c\sup_{r\in(s,T)}\|u(r,\cdot)\|_{L^p(\Omega)}(1+8^n\varepsilon^{-(1+\alpha)/(1-\alpha)}). \end{eqnarray*} Now we fix $0<\eta<64^{-1/(1+\alpha)}$ and $\varepsilon=8^{-n}c^{-1}\eta$. Multiplying both the sides of the previous inequality by $\eta^n$ and summing up from $0$ to $N$ yields \begin{align} \zeta_0-\eta^{N+1}\zeta_{N+1}
\leq c\sup_{r\in(s,T)}\|u(r,\cdot)\|_{L^p(\Omega)}. \label{reti} \end{align} Since $\{\zeta_n\}_{n\in{\mathbb N}}$ is bounded, taking the limit as $N\rightarrow+\infty$ in the left-hand side of \eqref{reti} we conclude that
$(t-s)\|u(t,\cdot)\|_{W^{2,p}(B_{R_1}(x_0))}\leq c\sup_{r\in(s,T)}\|u(r,\cdot)\|_{L^p(\Omega)}$ for any $t\in(s,T)$. An interpolation argument gives
$\|u(t,\cdot)\|_{W^{1,p}(B_{R_1}(x_0))}\leq c(t-s)^{-1/2}\sup_{r\in(s,T)}\|u(r,\cdot)\|_{L^p(\Omega)}$ for any $t\in(s,T)$, and this completes the proof. \end{proof}
\end{document}
\begin{document}
\begin{abstract} The aim of this article is to present an elementary proof of a global existence result for nonlinear wave equations in an exterior domain. The novelty of our proof is to avoid completely the scaling operator which would make the argument complicated in the mixed problem, by using new weighted pointwise estimates of a tangential derivative to the light cone. \end{abstract}
\maketitle
\section{Introduction} Let $\Omega$ be an unbounded domain in ${\mathbf R}^3$ with compact and smooth boundary $\partial \Omega$. We put ${\mathcal O}:={\mathbf R}^3 \setminus {\Omega}$, which is called an obstacle. This paper is concerned with the mixed problem for a system of nonlinear wave equations in $\Omega$\,: \begin{align}\label{ap1} & (\partial_t^2-c_i^2\Delta) u_i =F_i(u, \partial u,\nabla_{\!x}\,\partial u), & (t,x) \in (0,\infty)\times \Omega, \\ \label{ap2} & u(t,x)=0, & (t,x) \in (0,\infty)\times \partial\Omega, \\ \label{ap3} & u(0,x)=\varepsilon \phi(x),\ (\partial_t u)(0,x)=\varepsilon \psi(x), & x\in \Omega, \end{align} for $i=1, \dots, N$, where $c_i$ ($1\le i\le N$) are given positive constants, $u=(u_1, \dots, u_N)$, $\varepsilon$ is a positive parameter and $\phi$, $\psi \in C^\infty_0(\overline{\Omega}\,;{\mathbf R}^N)$, namely they are smooth functions on $\overline{\Omega}$ whose support is compact in $\overline{\Omega}$. We assume that $F_i(u,\partial u,\nabla_{\!x}\, \partial u)$ is a smooth function vanishing to first order at the origin. Besides, $\partial_0\equiv\partial_t=\partial/\partial t$, $\partial_j=\partial/\partial x_j$ ($j=1,2,3$), $\Delta=\sum_{j=1}^3 \partial_j^2$, $\nabla_{\!x}\, u=(\partial_1 u, \partial_2 u, \partial_3 u)$ and $\partial u=(\partial_t u, \nabla_{\!x}\, u)$. In the following we always assume that \begin{equation} \frac{\partial F_i}{\partial(\partial_k\partial_\ell u_j)}=\frac{\partial F_j}{\partial(\partial_k\partial_\ell u_i)} =\frac{\partial F_i}{\partial(\partial_\ell\partial_k u_j)} \label{symmetric} \end{equation} holds for $1\le i, j\le N$ and $1\le k, \ell\le 3$, so that the hyperbolicity of the system is assured.
First we consider the single speed case (i.e., $c_1=c_2=\cdots =c_N=1$). If we suppose in addition that the quadratic part of the nonlinearity $F_i$ vanishes, then it was shown in Shibata -- Tsutsumi \cite{ShiTsu86} that the mixed problem \eqref{ap1}--\eqref{ap3} admits a unique global small amplitude solution. Otherwise, in order to get a global existence result, we need a certain algebraic condition on the nonlinearity in general, due to the blow-up result for the corresponding Cauchy problem obtained by John \cite{john3} and the finite speed of propagation. One such condition is the null condition introduced by Klainerman \cite{Kla86} (see Definition 1.1 below). Under the null condition, Klainerman \cite{Kla86} and Christodoulou \cite{Chr86} proved global solvability for the Cauchy problem with small initial data independently, by different methods. This result was extended to the mixed problem by Keel -- Smith -- Sogge \cite{KeSmiSo02G} if the obstacle ${\mathcal O}$ is star-shaped, and by Metcalfe \cite{Met04} if it is non-trapping (for the case of other space dimensions, we refer to \cite{ShiTsu86}, \cite{Ha95}).
Next we consider the multiple speeds case where the propagation speeds $c_i$ ($1\le i\le N$) do not necessarily coincide with each other. Metcalfe -- Sogge \cite{MetSo05} and Metcalfe -- Nakamura -- Sogge \cite{MetNaSo05a, MetNaSo05b} extended the global existence result for the mixed problem to the multiple speeds case with more general obstacle as we shall describe later on (see \cite{Kov89}, \cite{Yok00}, \cite{Kub-Yok01}, \cite{Kat04:02}, and
\cite{kayo} for the Cauchy problem in three space dimensions; see also \cite{hk2} for the two space dimensional case).
The aim of this article is to present an alternative approach to these works which consists of the following two ingredients. One is the usage of space-time decay estimates for the mixed problem of the linear wave equation given in Theorem \ref{main3} below, which directly give us rather detailed decay estimates \begin{align}\label{std0}
& |u_i(t,x)| \le C \varepsilon (1+t+|x|)^{-1} \log\left(1+\frac{1+c_it+|x|}{1+|c_it-|x|\,|}\right), \\ \label{std1}
& |\partial u_i(t,x)| \le C \varepsilon (1+|x|)^{-1} (1+|c_i t-|x||)^{-1}
\end{align}
for $(t,x) \in [0,\infty) \times \overline{\Omega}$. These estimates are refinements of the time decay estimates obtained in the previous works on mixed problems. In this way, we do not need the space--time $L^2$ estimates which have been adopted in the works \cite{KeSmiSo02G, Met04, MetNaSo05a, MetNaSo05b, MetSo05}.
The other is making use of the stronger decay property of a tangential derivative to the light cone given in Theorem \ref{D+Es} below. This idea was recently introduced by the authors in \cite{KaKu07}, where the Cauchy problem is studied, and it enables us to deal with the null form without using either the scaling operator $t\partial_t+x\cdot\nabla_{\!x}\,$ or the Lorentz boost fields $t \partial_j+x_j \partial_t$\ ($j=1,2,3$). In this paper, we will adopt this approach to the mixed problem, and treat the problem without using these vector fields. In contrast, the scaling operator has been used in the previous works, and it makes the argument rather complicated because it does not preserve the Dirichlet boundary condition (\ref{ap2}). Recently, Metcalfe -- Sogge \cite{MetSo07} introduced a simplified approach which enables us to use the scaling operator without special care, but their approach is applicable only to star-shaped obstacles, and they assumed that the nonlinearity depends only on derivatives of $u$.
In order to state our result, we need a couple of notions about the obstacle, the initial data and the nonlinearity.
We remark that we may assume, without loss of generality, that ${\mathcal O}\subset B_{1}(0)$ by scaling and translation, where $B_r(z)$ stands for the open ball of radius $r$ centered at $z \in {\mathbf R}^3$. Hence we always assume ${\mathcal O}\subset B_1(0)$ in what follows.
Throughout this paper, we denote the standard Lebesgue and Sobolev spaces by $L^2({\Omega})$ and $H^m({\Omega})$
and their norms by $\|\,\cdot : L^2({\Omega})\|$ and
$\|\,\cdot : H^m({\Omega})\|$, respectively. Besides, $H^1_0(\Omega)$ is the completion of
$C^\infty_0({\Omega})$ with respect to $\|\,\cdot : H^1({\Omega})\|$.
\begin{definition} {\rm (i)}\ We say that the obstacle ${\mathcal O}$ is {\bf admissible} if there exists a non--negative integer $\ell$ having the following property\,{\rm :}\ Let $v \in C^\infty([0,\infty)\times \overline{\Omega}; {\mathbf R})$ be a solution of the homogeneous wave equation $(\partial_t^2-c^2\Delta)v=0$ in $[0,\infty)\times \Omega$, with some constant $c>0$ and the Dirichlet condition, whose initial value $(v(0,x), (\partial_t v)(0,x))$ vanishes for $x \in {\mathbf R}^3 \setminus {B_a(0)}$ with some $a>1$. Then for any $b>1$ we have \begin{align}\label{obstacle}
&\sum_{|\alpha| \le 1}
\|\partial^\alpha v(t):L^2({\Omega\,\cap B_b(0)})\| \\ \nonumber
& \quad \le C \exp(-\sigma t)\,(\|v(0):H^{\ell+1}(\Omega)\|
{}+\|(\partial_t v)(0):H^{\ell}(\Omega)\|), \end{align} where $C$ and $\sigma$ are positive constants depending on $a$, $b$, $c$ and $\Omega$.
\noindent {\rm (ii)}\ We say that the initial data $(\phi,\psi)$ satisfies the {\bf compatibility condition} to infinite order for the mixed problem \eqref{ap1}--\eqref{ap3} if the {\rm (}formal{\rm)} solution $u$ of the problem satisfies $(\partial^j_t u)(0,x)=0$ for any $x \in \partial\Omega$ and any non--negative integer $j$ $($notice that the values $(\partial^j_t u)(0,x)$ are determined by $(\phi,\psi)$ and $F$ successively; for example we have $\partial_t^2u_i(0,x)=\varepsilon c_i^2 \Delta\phi_i+F_i\bigl(\varepsilon\phi, \varepsilon(\psi, \nabla_x\phi), \varepsilon\nabla_x(\psi, \nabla_x \phi) \bigr)$, and so on$)$.
\noindent {\rm (iii)}\ We say that the nonlinearity $F=(F_1, F_2, \dots, F_N)$ satisfies the {\bf null condition} associated with the propagation speeds $(c_1, c_2, \dots, c_N)$ if each $F_i$ $(1\le i\le N)$ satisfies \begin{equation}\label{nullc} F_i^{(2)}(\lambda, V(\mu,X), W(\nu,X))=0 \end{equation} for any $\lambda$, $\mu$, $\nu \in \Lambda_i$ and $X=(X_0, X_1, X_2, X_3)\in {\mathbf R}^{4}$ satisfying $X_0^2=c_i^2(X_1^2+X_2^2+X_3^2)$, where $F_i^{(2)}$ is the quadratic part of $F_i$, and $$
\Lambda_i=\{(\lambda_1, \lambda_2, \ldots, \lambda_N)\in {\mathbf R}^N;
\lambda_j=0 \text{ if } c_j\ne c_i\}. $$ Here we put $V(\mu,X)=(X_a\,\mu_k:\,a=0,1,2,3, \,k=1, \dots, N)$, $W(\nu,X)=(X_j X_a \nu_k:\,j=1,2,3, \,a=0,1,2,3, \,k=1, \dots, N)$. \end{definition}
We often refer to \eqref{obstacle} as the local energy decay. We remark that when ${\mathcal O}$ is non--trapping, the estimate (\ref{obstacle}) holds for $\ell=0$ {\rm (}see for instance Melrose \cite{Mel79}, Shibata -- Tsutsumi \cite{ShiTsu83}{\rm )}. Even if ${\mathcal O}$ is trapping, it may be admissible in some cases. In fact, (\ref{obstacle}) for $\ell=5$ was obtained by Ikawa \cite{Ika82}, provided that ${\mathcal O}$ is a union of disjoint compact sets ${\mathcal O}_1$ and ${\mathcal O}_2$ whose Gaussian curvatures are strictly positive at every point of their boundaries (see also Ikawa \cite{Ika88}).
Now we are in a position to state our main result. \begin{theorem}\label{thm:GE} Suppose that ${\mathcal O}$ is admissible and that $(\phi,\psi)$ satisfies the compatibility condition to infinite order for the problem \eqref{ap1}--\eqref{ap3}. If $F$ satisfies the null condition associated with $(c_1, c_2, \dots, c_N)$, then there exists a positive constant $\varepsilon_0$ such that for all $\varepsilon \in (0,\varepsilon_0)$ the mixed problem \eqref{ap1}--\eqref{ap3} admits a unique solution $u \in C^\infty([0,\infty)\times \overline{\Omega}; {\mathbf R}^N)$ satisfying \eqref{std0} and \eqref{std1}. \end{theorem}
As we have mentioned above, the existence part of Theorem \ref{thm:GE}
is already known from \cite{MetNaSo05b} (though the decay property obtained in \cite{MetNaSo05b} is different from ours), and our aim here is to give a simplified proof of it.
This paper is organized as follows. In the next section we collect notation. In Section 3 we give some preliminaries needed later on. Section 4 is devoted to establishing pointwise decay estimates. Making use of the estimates from Section 4, we give a proof of Theorem \ref{thm:GE} in Section 5. \section{Notation} Let $c>0$. We shall consider the mixed problem\,: \begin{align}\label{eq} & (\partial_t^2-c^2\Delta) v =f, & (t,x) \in (0,T)\times \Omega, \\ \label{dc} & v(t,x)=0, & (t,x) \in (0,T)\times \partial\Omega, \\ \label{id} & v(0,x)=v_0(x),\ (\partial_t v)(0,x)=v_1(x), & x\in \Omega. \end{align} Here $v_0$, $v_1 \in C^\infty_0(\overline{\Omega};{\mathbf R})$ and $f \in C^\infty([0,T)\times \overline{\Omega};{\mathbf R})$. We say that $({v}_0, {v}_1, f)$ satisfies the compatibility condition to infinite order for the problem \eqref{eq}--\eqref{id} if $v_j=0$ {on} $\partial\Omega$ for any non--negative integer $j$, where we have set \begin{equation}\label{data+} v_j(x)\equiv c^2\Delta v_{j-2}(x)+(\partial_t^{j-2} f)(0,x) \quad \mbox{for \ $x \in \overline{\Omega}$ \ and \ $j\ge 2$}. \end{equation} Let us put $\vec{v}_0:=(v_0, v_1)$, and we denote by $K[\vec{v}_0;c](t,x)$ the solution of the problem (\ref{eq})--(\ref{id}) with $f \equiv 0$. Meanwhile, we denote by $L[f;c](t,x)$ the solution of the problem (\ref{eq})--(\ref{id}) with $\vec{v}_0 \equiv 0$.
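For instance, the first members of the sequence determined by \eqref{data+} are
\begin{align*}
v_2&=c^2\Delta v_0+f(0,\cdot), \qquad v_3=c^2\Delta v_1+(\partial_t f)(0,\cdot),\\
v_4&=c^4\Delta^2 v_0+c^2\Delta f(0,\cdot)+(\partial_t^2 f)(0,\cdot),
\end{align*}
which are nothing but the values of $(\partial_t^j v)(0,\cdot)$ formally determined by the equation \eqref{eq}.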
In a similar fashion, putting $\vec{w}_0:=(w_0,w_1)\in C^\infty({\mathbf R}^3;{\mathbf R}^2)$, we denote by $K_0[\vec{w}_0;c](t,x)$ and $L_0[g;c](t,x)$ the solution of the following Cauchy problem with $g \equiv 0$ and $\vec{w}_0 \equiv 0$, respectively\,: \begin{align}\label{eq0} &(\partial_t^2-c^2\Delta) w = g, & (t,x) \in (0,T)\times {\mathbf R}^3, \\ \label{id0} & w(0,x)=w_0(x),\ (\partial_t w)(0,x)=w_1(x), & x\in {\mathbf R}^3. \end{align}
Next we introduce vector fields\,: \begin{equation}\nonumber \partial_0=\partial_t, \quad \partial_j \ (j=1,2,3), \quad \Omega_{ij}=x_i \partial_j-x_j\partial_i \ (1\le i<j \le 3), \end{equation} and we denote them by $Z_j$\,($j=0, 1, \dots, 6$), respectively.
Notice that \begin{equation}\label{commute} [Z_i,\partial_t^2-c^2\Delta]=0 \quad (i=0, 1, \dots, 6), \end{equation} where we put $[A,B]:=AB-BA$. Denoting $Z^\alpha=Z_0^{\alpha_0}Z_1^{\alpha_1}
\cdots Z_{6}^{\alpha_{6}}$ with a multi--index $\alpha=(\alpha_0, \alpha_1, \dots, \alpha_{6})$, we set \begin{equation}\label{norm}
|\varphi (t,x)|_m=\sum_{|\alpha| \le m} |Z^\alpha \varphi(t,x)|, \quad
\|\varphi(t)\|_m=\|\,|\varphi(t,\cdot)|_m\!:\!{L^2(\Omega)}\| \end{equation} for a real or ${\mathbf R}^N$--valued smooth function $\varphi(t,x)$ and a non--negative integer $m$.
For $\nu$, $\kappa \in {\mathbf R}$, $c \ge 0$ and $c_j>0$ ($1\le j\le N$), we define \begin{align} \label{defPhi} {\Phi}_\nu(t,x)= &
\begin{cases}
\langle t+|x|\rangle^{\nu} & \text{ if } \nu<0, \\
\log^{-1}\bigg(2+\displaystyle\frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\bigg)
& \text{ if } \nu=0, \\
\langle t-|x|\rangle^{\nu} & \text{ if } \nu>0,
\end{cases} \\ \label{defW} W_{\nu,\kappa}(t,x)= &
\langle t+|x|\rangle^\nu \Bigl( \min_{0\le j\le N}
\jb{c_jt-|x|} \Bigr)^\kappa,
\\ \label{defz} W^{(c)}_{\nu,\kappa}(t,x)=&
\langle t+|x|\rangle^\nu \Bigl( \min_{0\le j\le N; c_j\ne c}
\jb{c_jt-|x|} \Bigr)^\kappa, \end{align}
where $c_0=0$ and $\langle y \rangle=\sqrt{1+|y|^2}$ for $y \in {\mathbf R}$. We define \begin{equation}\label{eq:3.5}
\|g(t)\!:\!{M_k(z)}\| =
\sup_{(s,x) \in [0,t] \times {\mathbf R}^3}
\jb{|x|}\,z(s,x)\,|g(s,x)|_k \end{equation} for $t\in [0,T)$, a non--negative integer $k$ and any non--negative function $z(s,x)$. Similarly we put \begin{equation}\label{NfW}
\|f(t)\!:\!{N_k(z)}\| =\sup_{(s,x) \in [0,t] \times \Omega}
\jb{|x|}\,z(s,x)\,|f(s,x)|_k. \end{equation} We also define \begin{equation}
B_{\rho, k}[\phi, \psi]=\sup_{y\in {\mathbf R}^3} \jb{|y|}^{\rho}
\bigl(|\phi(y)|_{k}+|\nabla_x\phi(y)|_k+|\psi(y)|_k\bigr) \label{HomWei} \end{equation} for $\rho\ge 0$, a non--negative integer $k$ and $(\phi,\psi) \in (C_0^\infty({\mathbf R}^3))^2$.
For $a \ge 1$, let $\psi_a$ be a smooth radially symmetric function on ${\mathbf R}^3$ satisfying \begin{equation}\label{cutoff}
\psi_a(x)=0 \ (|x| \le a), \quad
\psi_a(x)=1 \ (|x| \ge a+1). \end{equation} For $r>0$, we set $$ \Omega_r=\Omega \cap B_r(0), $$ where $B_r(x)$ stands for an open ball of
radius $r$ centered at $x \in {\mathbf R}^3$.
\section{Preliminaries}
First we introduce the local energy decay estimate (\ref{LE}), which works well in obtaining pointwise estimates for solutions of our mixed problem. We also need the elliptic estimate given in Lemma \ref{elliptic}. For completeness, we shall prove both in the appendix.
As we have stated in the introduction, we always assume ${\mathcal O}\subset B_1(0)$. \begin{lemma}\ \label{local} Let ${\mathcal O}$ be admissible, and $\ell$ be the constant appearing in \eqref{obstacle}. Suppose that $(\vec{v}_0, f)$ satisfies the compatibility condition to infinite order for the mixed problem \eqref{eq}--\eqref{id} and \begin{eqnarray*} \text{supp}\,v_j \subset {\Omega_a} \quad (j=0,1), \quad \text{supp}\,f(t,\cdot) \subset {\Omega_{a}} \quad (t \ge 0) \end{eqnarray*} for some $a>1$. Let $v$ be the smooth solution of the mixed problem. Then for any $\gamma>0$, $b>1$ and integer $m$, there exists a positive constant $C=C(\gamma,a,b,c,m,\Omega)$ such that for $t\in [0,T)$, \begin{eqnarray}\nonumber
&& \sum_{|\alpha|\le m} \|\partial^\alpha_{t,x} v(t)\!:\!{L^2(\Omega_b)}\|
\le C(1+t)^{-\gamma} \bigg( \|\vec{v}_0\!:\!H^{m+\ell}(\Omega) \times H^{m+\ell-1}(\Omega)\| \\ \label{LE}
&& \hspace{30mm} +\sup_{0\le s \le t} (1+s)^\gamma \sum_{|\alpha|
\le m+\ell-1} \|\partial^\alpha_{s,x} f(s)\!:\!{L^2(\Omega)}\| \bigg). \end{eqnarray} \end{lemma}
\begin{lemma}\label{elliptic}\ Let $\varphi \in H^m(\Omega) \cap H_0^1(\Omega)$ for some integer $m(\ge 2)$. Then we have \begin{equation}\label{ap10}
\| \partial^\alpha \varphi : L^2(\Omega) \| \le C(\|\Delta \varphi\!:\!{L^2(\Omega)}\| + \|\nabla \varphi\!:\!{L^2(\Omega)}\|) \end{equation}
for $|\alpha|=m$. \end{lemma}
Next we introduce a couple of known estimates for the Cauchy problem. The first one is the decay estimate of solutions to the homogeneous wave equation, due to Asakura \cite[Proposition 1.1]{asa} (observe that the general case can be reduced to the case $m=0$, thanks to (\ref{commute})). Recall that ${\Phi}_\nu(t,x)$ is the function defined by (\ref{defPhi}).
\begin{lemma}\label{lem:freeH} Let $c>0$. For $\vec{w}_0\in (C_0^\infty({\mathbf R}^3))^2$, $\rho>0$ and a non--negative integer $m$, there exists a positive constant $C=C(\rho, m, c)$ such that \begin{equation}\label{decay}
\langle t+|x| \rangle\,{\Phi}_{\rho-1}(ct,x)
|K_{0}[\vec{w_0}; c](t,x)|_m \le C B_{\rho+1, m}[\vec{w_0}] \end{equation} for $(t,x) \in [0,\infty) \times {\mathbf R}^3$. \end{lemma}
The second one is the decay estimate for the inhomogeneous wave equation.
\begin{lemma}\label{free}\ Let $c>0$, $\rho>0$, and $k$ be a non--negative integer. If $\nu=\rho$ and $\kappa>1$, or alternatively if $\nu=\rho+\mu$ and $\kappa=1-\mu$ with some $\mu\in (0,1)$, then there exists a positive constant $C=C(\nu, \kappa, k, c)$ such that \begin{equation}\label{ba1}
\langle t+|x| \rangle\,{\Phi}_{\rho-1}(ct,x) |L_{0}[g; c](t,x)|_k
\le C \|g(t)\!:\!{M_k(W_{\nu,\kappa})}\| \end{equation} for $(t,x) \in [0,T) \times {\mathbf R}^3$. \end{lemma}
\noindent{\it Proof.}\ The desired estimate for $k=0$ was shown in Theorem 3.4 of Kubota -- Yokoyama \cite{Kub-Yok01} (see also Lemmas 3.2 and 8.1 in Katayama -- Yokoyama \cite{kayo},
and Lemma 2.2 in the authors \cite{KaKu07}).
Let $|\alpha| \le k$. Then it follows from (\ref{commute}) that \begin{equation}\label{ba11} Z^\alpha L_0[g; c]=L_0[Z^\alpha g; c]+K_0[(\phi_\alpha,\psi_\alpha); c], \end{equation} where we put $\phi_\alpha(x)=(Z^\alpha L_0[g; c])(0,x)$, $\psi_\alpha(x)=(\partial_t Z^\alpha L_0[g; c])(0,x)$. From the equation (\ref{eq0}) we get $$
\phi_\alpha(x)= \sum_{|\beta| \le |\alpha|-2} C_\beta
(Z^\beta g)(0,x), \quad \psi_\alpha(x)= \sum_{|\beta| \le
|\alpha|-1} C_\beta^\prime (Z^\beta g)(0,x) $$ with suitable constants $C_\beta$ and $C_\beta^\prime$ (cf.~\eqref{data+}). Therefore, by virtue of Lemma \ref{lem:freeH}, it is enough to show \begin{equation}\nonumber
\langle t+|x| \rangle\,{\Phi}_{\rho-1}(ct,x) |L_{0}[Z^\alpha g; c](t,x)|
\le C \|g(t)\!:\!{M_k(W_{\nu,\kappa})}\| \end{equation} for $(t,x) \in [0,T) \times {\mathbf R}^3$. But this inequality immediately follows from (\ref{ba1}) for $k=0$. Thus we finish the proof.
$\qed$
The third one is the decay estimate of derivatives of solutions to the inhomogeneous wave equation.
\begin{lemma}\label{freeD}\ Let $c>0$, and $k$ be a non--negative integer.
If $\rho=\nu>1$ and $\kappa>1$, or alternatively if $0<\rho\le 1$, $\nu=1+\mu$ and $\kappa=\rho-\mu$ with some $\mu\in (0, \rho)$, then there exists a positive constant $C=C(c,\nu,\kappa,k)$ such that \begin{equation}\label{ba2}
\langle |x| \rangle \langle ct -|x| \rangle^{\rho}
|\partial L_{0}[g; c](t,x)|_k
\le C \|g(t)\!:\!{M_{k+1}(W_{\nu,\kappa})}\| \end{equation} for $(t,x) \in [0,T) \times {\mathbf R}^3$.
On the other hand, if $\rho>0$ and $\kappa > 1$, then we have \begin{equation}\label{poc0}
\langle |x| \rangle \langle ct-|x| \rangle^{\rho}
|\partial L_{0}[g; c](t,x)|_k
\le C \|g(t)\!:\!{M_{k+1}(W_{\rho,\kappa}^{(c)})}\| \end{equation} for $(t,x) \in [0,T) \times {\mathbf R}^3$. \end{lemma}
\noindent{\it Proof.}\ In view of Lemma 3.2 in \cite{Kub-Yok01}, Lemma 8.2 and the proof of Lemma 3.2 in \cite{kayo}, we find that for $0\le a\le 3$, \begin{align} \label{KatYok04}
\jb{|x|}\jb{ct-|x|}^{\rho}|L_0[\partial_a g; c](t,x)|
\le C \|g(t)\!:\!{M_1(W_{\nu,\kappa})}\| \end{align} when $\rho=\nu>1$ and $\kappa>1$, or when $0<\rho\le1$, $\nu=1+\mu$, and $\kappa=\rho-\mu$ with some $\mu\in (0,\rho)$, while \begin{align} \label{KatYok04-}
\jb{|x|}\jb{ct-|x|}^{\rho}|L_0[\partial_a g; c](t,x)|
\le C \|g(t)\!:\!{M_1(W_{\rho,\kappa}^{(c)})}\|, \end{align} if $\rho>0$ and $\kappa > 1$ (see also \cite{KaKu07}).
Since $\partial_a L_0[g; c]=L_0[\partial_a g; c]+\delta_{a0}K_0[(0, g(0, \cdot)); c]$ for $0\le a\le 3$ with the Kronecker delta $\delta_{ab}$, \eqref{ba2} and \eqref{poc0} follow from (\ref{ba11}), \eqref{KatYok04}, \eqref{KatYok04-}, and Lemma \ref{lem:freeH}. This completes the proof.
$\qed$
In order to associate these decay estimates with the energy estimate, we use a variant of the Sobolev type inequality due to Klainerman, whose proof will be given in the appendix.
\begin{lemma}\label{KlainermanSobolev}\ Let $\varphi \in C_0^2(\overline{\Omega})$. Then we have \begin{equation}\label{ap21}
\sup_{x \in \Omega} \jb{|x|} |\varphi(x)|
\le C \sum_{|\alpha| \le 2} \|\widetilde{Z}^\alpha \varphi\!:\!{L^2(\Omega)}\|, \end{equation} where $\widetilde{Z}=\{\partial_1,\partial_2,\partial_3,\Omega_{12},\Omega_{23},\Omega_{13} \}$. \end{lemma}
Finally, we recall the estimates of the null forms from \cite{KaKu07}. The null forms $Q_0$ and $Q_{ab}$ are defined by \begin{align} Q_0(v,w\,;c)=&(\partial_t v)(\partial_t w)-c^2 (\nabla_{\!x}\,v)\cdot (\nabla_{\!x}\,w),\\ Q_{ab}(v,w)=&(\partial_a v)(\partial_b w)-(\partial_b v)(\partial_a w) \quad \text{($0\le a<b\le 3$)} \end{align} for a positive constant $c$ and real--valued functions $v=v(t,x)$ and $w=w(t,x)$. They are closely related to the null condition.
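To illustrate why these forms gain additional decay, we note the following standard identity, which can be checked by a direct computation using $\partial_t=\frac12(D_{+,c}+D_{-,c})$ and $\nabla_{\!x}\,v\cdot\nabla_{\!x}\,w=(\partial_r v)(\partial_r w)+r^{-2}\sum_{1\le i<j\le 3}(\Omega_{ij}v)(\Omega_{ij}w)$, where $r=|x|$, $r\partial_r=x\cdot\nabla_{\!x}$ and $D_{\pm,c}=\partial_t\pm c\,\partial_r$:
\begin{equation*}
Q_0(v,w\,;c)=\frac12\bigl\{(D_{+,c}v)(D_{-,c}w)+(D_{-,c}v)(D_{+,c}w)\bigr\}
-\frac{c^2}{r^2}\sum_{1\le i<j\le 3}(\Omega_{ij}v)(\Omega_{ij}w).
\end{equation*}
Each term on the right--hand side contains either a factor $D_{+,c}v$ or $D_{+,c}w$, for which Theorem \ref{D+Es} provides improved decay, or factors of the form $r^{-1}\Omega_{ij}$, and this is reflected in the structure of the estimate in Lemma \ref{NFEs} below.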
\begin{lemma}\label{NFEs} Let $c$ be a positive number and $u=(u_1, \dots, u_N)$. Suppose that $Q$ is one of the null forms. Then, for any $1\le i, j\le N$ and any non--negative integer $k$, there exists a positive constant $C=C(c,k)$ such that \begin{align*}
|Q(u_i, u_j)|_k & \le C \bigl\{
|\partial u|_{[k/2]} \sum_{|\alpha|\le k} |D_{+,c} Z^\alpha u|
{}+|\partial u|_{k} \sum_{|\alpha|\le [k/2]} |D_{+,c} Z^\alpha u|\\
& \qquad\quad {}+\frac{1}{r}\bigl(|\partial u|_{[k/2]}|u|_{k+1}+|u|_{[k/2]+1}|\partial u|_k
\bigr) \bigr\}, \end{align*}
where we put $D_{+,c}=\partial_t+c\,\partial_r$ with $r\partial_r=x\cdot\nabla_{\!x}$ and $r=|x|$.
\end{lemma}
\section{Basic estimates}
The aim of this section is to establish pointwise decay estimates for the mixed problem, which are deduced from the corresponding estimates for the Cauchy problem in combination with the local energy decay. Theorem \ref{main1} is the result for the homogeneous wave equation, while Theorem \ref{main3} is for the inhomogeneous wave equation. In order to handle the null forms, we also need some estimates, given in Theorem \ref{D+Es}, for the derivative $D_{+,c}=\partial_t+c\partial_r$, which is tangential to the light cone $ct=|x|$.
To prove these theorems, we use the following lemma.
\begin{lemma}\label{KataLem} Let ${\mathcal O}$ be admissible, and $\ell$ be the constant in \eqref{obstacle}. Suppose that $\chi_j$ $(1\le j\le 3)$ are smooth radially symmetric functions on ${\mathbf R}^3$ satisfying $$ \supp \chi_1\subseteq B_b(0),\ \supp \chi_2, \supp \chi_3 \subseteq B_a(0), \ \chi_2=\chi_3 \equiv 0 \text{ on $B_1(0)$} $$ with some $a(>1)$ and $b(>1)$. Let $c>0$, $\nu>0$, $\kappa\ge 0$, and $\kappa_0\ge 0$, while $m$ is a non-negative integer. Then there exists a positive constant $C$ such that
\begin{align}
& \langle t \rangle^\nu
|\chi_1 L[ \chi_2 g;c ](t,x)|_m \le C \norm{\chi_2 g(t)}{M_{m+\ell+1}(W_{\nu, \kappa})},\label{KataL01}\\
& \norm{\chi_1L[\chi_2 g; c](t)}{M_m(W_{\nu, \kappa_0})}
\le C \norm{\chi_2 g(t)}{M_{m+\ell+1}(W_{\nu, \kappa})},
\label{KataL02}\\
& \norm{\chi_2L_0[\chi_3 g; c]}{M_m(W_{\nu, \kappa_0})} \le C\norm{g(t)}{N_m(W_{\nu,\kappa})},
\label{KataL03} \\
& \norm{\chi_2K_0[\vec{v}_0; c]}{M_m(W_{\nu, \kappa})}
\le C B_{\nu+1, m}[\vec{v}_0],
\label{KataL04} \\
& \langle t \rangle^\nu |\chi_1K[\chi_2 \vec{v}_0; c](t,x)|_m
\le C \norm{\vec{v}_0}{H^{m+\ell+2}(\Omega)\times H^{m+\ell+1}(\Omega)},
\label{KataL05} \\
& \norm{\chi_1K[\chi_2\vec{v}_0; c](t)}{M_m(W_{\nu, \kappa})}
\label{KataL06}\\
& \qquad\qquad\qquad\qquad\qquad
\le C \norm{\vec{v}_0}{H^{m+\ell+2}(\Omega) \times H^{m+\ell+1} (\Omega)} \nonumber \end{align} for any $g\in C^\infty([0,T)\times \Omega)$, and $\vec{v}_0\in C^\infty_0(\overline{\Omega})$. \end{lemma} \noindent{\it Proof.}\ \ First we note that we have \begin{equation}
|(\chi_1 h)(t,x)|_m\le C \sum_{|\beta|\le m} |\partial_{t, x}^\beta(\chi_1 h)(t,x)| \label{KataM01} \end{equation} for any smooth function $h$ on $[0,T)\times \Omega$, since $\supp \chi_1\subset B_b(0)$. We also note that, if $b>0$, $\nu\ge 0$, and $\kappa\ge 0$, then
$\langle |x|\rangle W_{\nu, \kappa} (t,x)$,
$\langle{t+|x|}\rangle \Phi_{\nu-1}(ct,x)$, and $\langle{t}\rangle^\nu$ are equivalent to each other for $(t,x)\in [0, \infty)\times B_b(0)$
(observe that we have $W_{\nu, \kappa}(t,x)\le C\langle t+|x| \rangle^\nu \langle |x|\rangle^\kappa$).
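Indeed, for $|x|\le b$ we have $1\le \langle |x|\rangle\le\langle b\rangle$, $\langle t\rangle\le\langle t+|x|\rangle\le (1+b)\langle t\rangle$ and
\begin{equation*}
1\le \min_{0\le j\le N}\langle c_jt-|x|\rangle\le\langle |x|\rangle\le\langle b\rangle
\end{equation*}
(take $j=0$), so that $\langle |x|\rangle W_{\nu, \kappa}(t,x)$ is comparable to $\langle t\rangle^\nu$; moreover $\langle ct-|x|\rangle$ is comparable to $\langle t\rangle$ there, and treating the three cases in \eqref{defPhi} separately we see that $\langle t+|x|\rangle\Phi_{\nu-1}(ct,x)$ is also comparable to $\langle t\rangle^\nu$.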
By \eqref{KataM01}, the Sobolev inequality and \eqref{LE} with $\gamma=\nu$, we obtain \begin{align*} \langle t \rangle^\nu
|\chi_1 L[ \chi_2 g;c ](t,x)|_m
\le & C \langle t\rangle^\nu \sum_{|\beta|\le m+2}
\norm{\partial^\beta L[\chi_2 g;c](t)}{L^2(\Omega_b)}\\ \le & C \sup_{s\in [0,t]} \langle s\rangle^\nu
\sum_{|\beta|\le m+\ell+1} \norm{\partial^\beta (\chi_2g)(s)}{L^2(\Omega)}\\ \le & C \norm{(\chi_2 g)(t)}{M_{m+\ell+1}(W_{\nu,\kappa})}, \end{align*} which is \eqref{KataL01}.
From \eqref{KataL01}, we find \begin{align*} \norm{\chi_1L[\chi_2 g; c](t)}{M_m(W_{\nu, \kappa_0})}\le &
C \sup_{(s,x)\in [0,t]\times{\mathbf R}^3}
\langle s\rangle^\nu |\chi_1L[\chi_2g; c](s,x)|_m\\ \le & C\norm{\chi_2 g(t)}{M_{m+\ell+1}(W_{\nu, \kappa})}. \end{align*}
On the other hand, by \eqref{ba1}, we obtain \begin{align*} & \norm{\chi_2L_0[\chi_3g; c](t)}{M_m(W_{\nu, \kappa_0})}\\ & \qquad \le C \sup_{(s,x)\in [0,t]\times {\mathbf R}^3}
\langle s+|x|\rangle \Phi_{\nu-1}(cs, x) |L_0[\chi_3g; c](s,x)|_m\\ & \qquad \le C \norm{(\chi_3 g)(t)}{M_m(W_{\nu, 2})} \le C \norm{(\chi_3 g)(t)}{M_m(W_{\nu, \kappa})}. \end{align*}
Similarly to the proof of \eqref{KataL03}, \eqref{decay} immediately implies \eqref{KataL04}. From \eqref{KataM01}, the Sobolev inequality and \eqref{LE} we find \begin{align*}
\langle t\rangle^\nu|\chi_1 K[\chi_2 \vec{v}_0; c](t,x)|_m \le & C\langle t\rangle^\nu
\sum_{|\beta|\le m+2}\norm{\partial^\beta K[\chi_2 \vec{v}_0; c](t)}{L^2(\Omega_b)}\\ \le & C\norm{\chi_2 \vec{v}_0}{H^{m+\ell+2}(\Omega)\times H^{m+\ell+1}(\Omega)}, \end{align*}
which leads to \eqref{KataL05}. Finally, \eqref{KataL06} immediately follows from \eqref{KataL05} in view of the equivalence of $\langle{|x|}\rangle W_{\nu, \kappa}(t,x)$ and $\langle t\rangle^\nu$ in $[0,\infty)\times B_b(0)$. This completes the proof.
$\qed$
\begin{theorem}\ \label{main1} Let ${\mathcal O}$ be admissible, $\ell$ be the constant in \eqref{obstacle}, and $c>0$. Suppose that $\vec{v}_0\in (C_0^\infty(\overline{\Omega}))^2$ and $(\vec{v}_0,0)$ satisfies the compatibility condition to infinite order for the mixed problem \eqref{eq}--\eqref{id}. If $\rho>1$ and $k$ is a non--negative integer, then there exists a constant $C>0$ such that \begin{equation}\label{m1}
|K[\vec{v}_0; c](t,x)|_k
\le C\langle t+|x| \rangle^{-1} \langle ct -|x| \rangle^{-(\rho-1)} B_{\rho+1, k+\ell+3}[\vec{v}_0] \end{equation} for $(t,x)\in [0,\infty)\times \Omega$. \end{theorem}
\noindent{\it Proof.}\ \ First of all, we recall the following representation formula based on the cut--off method developed by Shibata \cite{Shi83}, and also by Shibata -- Tsutsumi \cite{ShiTsu86} where $L^p$--$L^q$ time decay estimates for the mixed problem were obtained (see also \cite{Kub06})\,: \begin{equation}\label{homo} K[\vec{v}_0; c](t,x)=\psi_1(x) K_0[\psi_2 \vec{v}_0; c](t,x) {}+\sum_{i=1}^4 K_i[\vec{v}_0](t,x), \end{equation} for $(t,x)\in [0,T)\times \Omega$. Here $\psi_a$ is defined by (\ref{cutoff}) and we have set \begin{align}\label{K1} & K_1[\vec{v}_0](t,x)=(1-\psi_2(x))L\bigl[\,[\psi_1,-c^2\Delta] K_0[\psi_2 \vec{v}_0;c];c\bigr](t,x), \\ \label{K2} &K_2[\vec{v}_0](t,x)\\ \nonumber & \quad =-L_0\bigl[\,[\psi_2,-c^2\Delta]
L\bigl[\,[\psi_1,-c^2\Delta]
K_0[\psi_2 \vec{v}_0; c];c\bigr]; c\bigr](t,x), \\ \label{K3} & K_3[\vec{v}_0](t,x)=(1-\psi_3(x)) K[(1-\psi_2) \vec{v}_0; c](t,x), \\ \label{K4} & K_4[\vec{v}_0](t,x)=-L_0\bigl[\,[\psi_3,-c^2\Delta] K[(1-\psi_2) \vec{v}_0;c]; c\bigr](t,x). \end{align}
It is easy to see from (\ref{decay}) for $\rho>1$ that the first term on the right--hand side of (\ref{homo}) has the desired bound. Hence our task is to show \eqref{m1} with $K[\vec{v}_0; c]$ replaced by $K_i[\vec{v}_0]$ ($1\le i\le 4$).
It is easy to check that \begin{align*} [\psi_a,-\Delta]u(t,x)= &
u(t,x) \Delta \psi_a(x)+2\nabla_{\!x}\, u(t,x) \cdot \nabla_{\!x}\, \psi_a(x)\\ =& 2\sum_{j=1}^3 \partial_j\bigl(u(t,x)\partial_j \psi_a(x)\bigr)-u(t,x)\Delta\psi_a(x) \end{align*} and \begin{equation*}
\sum_{|\alpha| \le m} \|Z^\alpha
[\psi_a,-\Delta]u(t)\!:\!L^2(\Omega)\| \le C
\sum_{|\alpha| \le m+1} \|\partial^\alpha u(t)\!:\!L^2(\Omega_{a+1})\| \end{equation*} for $t \in [0,T)$, $x \in \Omega$, $a \ge 1$ and any smooth function $u$. Therefore, by \eqref{KataL01} and \eqref{KataL04} with $\nu=\rho$, we get \begin{eqnarray*}
|K_1[\vec{v}_0](t,x)|_k \le C\langle t \rangle^{-\rho} B_{\rho+1, k+\ell+2}[\vec{v_0}], \end{eqnarray*} which leads to \eqref{m1} with $K$ replaced by $K_1$, because $\text{supp} K_1[\vec{v}_0](t,\cdot) \subset \overline{\Omega_3}$. On the other hand, \eqref{ba1}, \eqref{KataL02}, and \eqref{KataL04} with $\nu=\rho$ imply $$
|K_2[\vec{v}_0](t,x)|_k\le C \langle t+|x|\rangle^{-1}
\langle ct-|x| \rangle^{-(\rho-1)} B_{\rho+1, k+\ell+3}[\vec{v}_0]. $$ The bound for $K_3[\vec{v}_0](t,x)$ can be easily obtained by \eqref{KataL05}. Finally, \eqref{ba1} and \eqref{KataL06} imply the estimate for $K_4[\vec{v}_0](t,x)$. This completes the proof.
$\qed$
\begin{theorem}\ \label{main3} Let ${\mathcal O}$ be admissible, $\ell$ be the constant in \eqref{obstacle}, and $c>0$. Suppose that $f \in C^\infty([0,T)\times \Omega)$ and $(0,0,f)$ satisfies the compatibility condition to infinite order for the mixed problem \eqref{eq}--\eqref{id}.
\noindent {\rm (i)}\ Let $\rho>0$. If $\nu=\rho$ and $\kappa>1$, or alternatively if $\nu=\rho+\mu$ and $\kappa=1-\mu$ with some $\mu\in (0,1)$, then there exists a constant $C>0$ such that \begin{align}\label{ba3}
\langle t+|x| \rangle {\Phi}_{\rho-1}(ct,x) |L[f; c](t,x)|_k
\le & C
\|f(t)\!:\!{N_{k}(W_{\nu,\kappa})}\|\\
&+C\norm{f(t)}{N_{k+\ell+3}(W_{\rho, 0})} \nonumber\\
\le & C
\|f(t)\!:\!{N_{k+\ell+3}(W_{\nu,\kappa})}\|
\nonumber \end{align} for $(t,x)\in [0,T)\times \Omega$.
\\
{\rm (ii)}\ If $\nu=\rho>1$ and $\kappa>1$, or alternatively if $0<\rho\le 1$, $\nu=1+\mu$ and $\kappa=\rho-\mu$ with some $\mu\in (0, \rho)$, then we have \begin{equation}\label{ba4}
\langle |x| \rangle \langle ct -|x| \rangle^{\rho} |\partial L[f; c](t,x)|_k
\le C
\|f(t)\!:\!{N_{k+\ell+4}(W_{\nu,\kappa})}\| \end{equation} for $(t,x)\in [0,T)\times \Omega$.
\\
{\rm (iii)}\ If $\rho>0$ and $\kappa>1$, then we have \begin{equation}\label{DS}
\langle |x| \rangle \langle ct -|x| \rangle^{\rho} |\partial L[f; c](t,x)|_k
\le C
\|f(t)\!:\!{N_{k+\ell+4}(W_{\rho,\kappa}^{(c)})}\| \end{equation} for $(t,x)\in [0,T)\times \Omega$. \end{theorem}
\noindent{\it Proof.}\ Note that $L[f; c]$ has an expression similar to (\ref{homo})\,: \begin{equation}\label{inhomo} L[f;c](t,x)=\psi_1(x) L_0[\psi_2 f;c](t,x)+\sum_{i=1}^4 L_i[f](t,x) \end{equation} for all $(t,x)\in [0,T)\times \Omega$, where \begin{align}\label{L1} & L_1[f](t,x)=(1-\psi_2(x))L\bigl[\,[\psi_1,-c^2\Delta] L_0[\psi_2 f;c];c \bigr](t,x), \\ \label{L2} & L_2[f](t,x)\\ \nonumber & \qquad\quad =-L_0\bigl[\,[\psi_2,-c^2\Delta] L\bigl[\,[\psi_1,-c^2\Delta] L_0[\psi_2 f;c];c \bigr];c \bigr](t,x), \\ \label{L3} & L_3[f](t,x)=(1-\psi_3(x)) L[(1-\psi_2) f;c](t,x), \\ \label{L4} & L_4[f](t,x)=-L_0\bigl[\,[\psi_3,-c^2\Delta] L[(1-\psi_2) f;c];c\bigr](t,x). \end{align} The first term on the right--hand side of (\ref{inhomo}) can be easily treated by Lemmas \ref{free} and \ref{freeD}.
Let $\rho>0$ and $\kappa\ge 0$. By \eqref{KataL01} and \eqref{KataL03} with $\nu=\rho$, we obtain \begin{equation}\label{poD1}
\langle t \rangle^{\rho} |L_i[f](t,x)|_k
\le C \|f(t)\!:\!{N_{k+\ell+2}(W_{\rho,\kappa})}\| \end{equation} for $i=1,3$.
It is easy to see that $\langle t+|x|\rangle \Phi_{\rho-1}(ct,x)$
and $\langle |x|\rangle \langle ct-|x|\rangle^\rho$ are equivalent to $\langle t\rangle^\rho$ for $(t,x)\in [0,\infty)\times B_4(0)$. Therefore, since $\supp L_i[f](t,x)\subset B_4(0)$ for $i=1,3$, \eqref{poD1} implies the desired estimates for $L_1[f]$ and $L_3[f]$, corresponding to \eqref{ba3}, \eqref{ba4} and \eqref{DS} (note that we also have $W_{\rho, \kappa}\le W_{\nu, \kappa}\le W_{\nu, \kappa}^{(c)}$ for $\nu\ge \rho$).
On the other hand, by \eqref{KataL02} and \eqref{KataL03}, we obtain \begin{align} \label{KataN01} \norm{\square_c L_i[f](t)}{M_{m}(W_{\nu, \kappa_0})} \le & C \norm{f(t)}{N_{m+\ell+3}(W_{\nu, \kappa})}\\ \le & C \norm{f(t)}{N_{m+\ell+3}(W_{\nu, \kappa}^{(c)})}\ (i=2,4) \nonumber \end{align} for any $\nu>0$, $\kappa_0, \kappa\ge 0$, and $m\ge0$, where $\square_c=\partial_t^2-c^2\Delta$. Hence Lemmas \ref{free} and \ref{freeD} imply the desired estimates for $L_2[f]$ and $L_4[f]$. This completes the proof.
$\qed$
\begin{theorem}\label{D+Es} Let the assumptions in Theorem {\rm \ref{main3}} be fulfilled, and $1\le \rho\le 2$.
If $\nu=\rho$ and $\kappa>1$, or alternatively if $\nu=\rho+\mu$, $\kappa=1-\mu$ with some $\mu\in (0,1)$, then there exists a positive constant $C=C(\nu,\kappa,c)$ such that \begin{align} \label{D+Es-01}
& \jb{|x|} \jb{t+|x|} \jb{ct-|x|}^{\rho-1}
\sum_{|\alpha|\le k} |D_{+,c}Z^\alpha L[f;c](t,x)| \\
& \qquad\qquad \le C \log(2+t+|x|)\,\|f(t)\!:\!{N_{k+\ell+5}(W_{\nu,\kappa})}\|. \nonumber \end{align}
If $\nu>\rho+1$, we have \begin{align} \label{D+Es-02}
& \jb{|x|} \jb{t+|x|} \jb{ct-|x|}^{\rho-1}
\sum_{|\alpha|\le k} |D_{+,c}Z^\alpha K[\vec{v}_0;c](t,x)| \\ \nonumber & \qquad\qquad\qquad\qquad \le C B_{\nu, k+\ell+5}[\vec{v}_0] \end{align} for $(t,x)\in [0,T)\times \Omega$. \end{theorem}
\noindent{\it Proof.}\
We consider only (\ref{D+Es-01}), because (\ref{D+Es-02}) can be shown more easily by using (\ref{m1}). When $|x| \le 1$, (\ref{D+Es-01}) follows from
\eqref{ba3} immediately. Meanwhile, if $|x|>1$, then we can proceed as in the proof of Theorem 1.2 in \cite{KaKu07}, because ${\mathcal O} \subset B_1(0)$. Here we only give an outline of the proof. Setting $U(t, r, \omega)=rL[f; c](t, r\omega)$ for $r>1$ and $\omega\in S^2$, we have \begin{align}
D_{-,c}D_{+,c} U(t,r,\omega)=rf(t,r\omega)
{}+\frac{c^2}{r}\sum_{1\le j<k\le 3} \Omega_{jk}^2 L[f;c](t, r\omega),
\label{rad} \end{align} where $D_{-,c}=\partial_t-c\partial_r$. Let $t_0>0$, $r_0>1$ and $\omega_0\in S^2$. Applying \eqref{ba3} to estimate the second term on the right-hand side of \eqref{rad} in terms of $\norm{f(t)}{N_{\ell+5}(W_{\nu, \kappa})}$, and then integrating the obtained inequality along the ray $\{(t, (r_0+c(t_0-t))\omega_0); 0\le t\le t_0\}$ (note that this ray lies in $\Omega$), we obtain \begin{align} \label{Tsuika}
& |D_{+,c}U(t_0, r_0, \omega_0)|\\ & \qquad \le C\jb{t_0+r_0}^{-\rho}\log(2+t_0+r_0) \norm{f(t_0)}{N_{\ell+5}(W_{\nu, \kappa})}. \nonumber \end{align} Since $rD_{+,c}L[f;c](t, r\omega)=D_{+, c}U(t, r, \omega)-cL[f;c](t, r\omega)$, \eqref{Tsuika} and \eqref{ba3} imply \eqref{D+Es-01} for $k=0$. It is easy to obtain \eqref{D+Es-01} for general $k$.
This completes the proof.
$\qed$
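For the reader's convenience, we record how \eqref{rad} above can be verified. Since $D_{-,c}D_{+,c}=\partial_t^2-c^2\partial_r^2$ and
\begin{equation*}
r\Delta v=\partial_r^2(rv)+\frac{1}{r}\sum_{1\le j<k\le 3}\Omega_{jk}^2 v,
\end{equation*}
we find, with $v=L[f;c]$ and $U=rv$, that
\begin{equation*}
D_{-,c}D_{+,c}U=r(\partial_t^2-c^2\Delta)v+\frac{c^2}{r}\sum_{1\le j<k\le 3}\Omega_{jk}^2 v
\end{equation*}
for $r>1$, which is \eqref{rad} because $(\partial_t^2-c^2\Delta)L[f;c]=f$ in $\Omega$.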
\section{Proof of Theorem \ref{thm:GE}}
In this section we prove Theorem \ref{thm:GE}. We assume ${\mathcal O}\subset B_{1}(0)$ as before. Let all the assumptions of Theorem \ref{thm:GE} be fulfilled.
Though there is no essential difficulty in treating the general case \footnote{ In fact, to treat the general case, we only have to replace the energy inequality for the wave equation in Subsections \ref{KEE1}, \ref{KEE2} and \ref{KEE3} below with that for systems of perturbed wave equations which is also standard (remember that the symmetry conditions \eqref{symmetric} are assumed). Such replacement is not needed for pointwise decay estimates, because loss of derivatives is allowed there. }, we concentrate on the semilinear case to keep our exposition simple. Hence we assume $F=F(u, \partial u)$ in what follows.
From the null condition associated with $(c_1, c_2, \ldots, c_N)$, we see that the quadratic part $F_i^{(2)}$ of $F_i$ is independent of $u$, and can be written as \begin{equation} F_i^{(2)}(\partial u)=F_i^{{\rm null}}(\partial u)+R_{I,i}(\partial u)+R_{II,i}(\partial u), \end{equation} where \begin{align*} F_i^{{\rm null}}(\partial u)=& \sum_{\substack{1\le j,k\le N\\ c_j=c_k=c_i}} \left(A_{i}^{jk} Q_0(u_j, u_k; c_i)+\sum_{0\le a<b\le 3} B_i^{jk,ab}Q_{ab}(u_j, u_k) \right),\\ R_{I,i}(\partial u)=&\sum_{\substack{1\le j,k\le N\\ c_j\ne c_k}}\sum_{0\le a, b\le 3} C_i^{jk,ab} (\partial_a u_j)(\partial_b u_k),\\ R_{II,i}(\partial u)=&\sum_{\substack{1\le j,k\le N\\ c_j=c_k\ne c_i}}\sum_{0\le a, b\le 3} D_i^{jk,ab} (\partial_a u_j)(\partial_b u_k) \end{align*} with suitable constants $A_i^{jk}$, $B_i^{jk, ab}$, $C_i^{jk, ab}$ and $D_i^{jk, ab}$. We put $$ H_i(u, \partial u)=F_i(u, \partial u)-F_i^{(2)}(\partial u)
$$ for $i=1, 2, \dots, N$, so that $H_i(u,\partial u)=O(|u|^3+|\partial u|^3)$ near $(u, \partial u)=(0,0)$.
Let $u=(u_1, u_2, \dots, u_N)$ be a smooth solution to \eqref{ap1}--\eqref{ap3} on $[0,T)\times \overline{\Omega}$. We set \begin{align*}
e_{k,i}[u_i](t,x)=& \jb{t+|x|}\Phi_0(c_it, x)|u_i(t,x)|_{k+1}
{}+\jb{|x|}\jb{c_it-|x|}|\partial u_i(t,x)|_{k}\\
& {}+\frac{\jb{|x|}\jb{t+|x|}}{\log(2+t+|x|)}
\sum_{|\alpha|\le k-1} |D_{+, c_i}Z^\alpha u_i(t,x)| \end{align*} for $1\le i \le N$. We also set $e_k[u](t,x)=\sum_{i=1}^N e_{k,i}[u_i](t,x)$.
We fix $k\ge 6\ell+30$, and assume that \begin{equation} \sup_{0\le t<T} \norm{e_{k}[u](t)}{L^\infty(\Omega)} \le M\varepsilon \label{InductiveAs} \end{equation} holds for some large $M(>1)$ and small $\varepsilon(>0)$, satisfying $M\varepsilon \le 1$. Since the local existence for the mixed problem has been shown by \cite{ShiTsu86}, what we need for the proof of the global existence result is a suitable {\it a priori} estimate. We will prove that
\eqref{InductiveAs} implies \begin{equation} \label{KFF} \sup_{0\le t<T} \norm{e_{k}[u](t)}{L^\infty(\Omega)} \le C\varepsilon+CM^2\varepsilon^2. \end{equation} From \eqref{KFF} we find that \eqref{InductiveAs} with $M$ replaced by $M/2$ is true for sufficiently large $M$ and sufficiently small $\varepsilon$, and the standard continuity argument implies that $e_k[u](t)$ stays bounded as long as the solution $u$ exists. Theorem \ref{thm:GE} follows immediately from this {\it a priori} bound.
To this end, the following energy estimate is crucial\,: \begin{equation}\label{ap16}
\|\partial u(t)\|_{2k-\ell-8} \le C M\varepsilon (1+t)^{C_* M\varepsilon+\rho_*} \quad \text{for} \ t \in [0,T), \end{equation} where $C$, $C_*$ and $\rho_*$ are positive constants independent of $M$ and $\varepsilon$. Moreover $\rho_*$ can be chosen arbitrarily small. In fact, once we find (\ref{ap16}), we can proceed as in the case of the corresponding Cauchy problem. On the other hand, unlike the case of the Cauchy problem, it is not so simple to obtain \eqref{ap16}, because of boundary terms coming from the integration--by--parts argument, which may cause some loss of derivatives. For this reason, we estimate the space--time gradient and the generalized derivatives separately and improve the estimate of the latter by using the local energy decay.
In the following, we set $r=|x|$. We define $$ w_-(t,r)=\min_{0\le j\le N} \jb{c_jt-r},\ w_-^{(c)}(t,r) =\min_{0\le j\le N; c_j\ne c} \jb{c_jt-r} $$ for $c\ge 0$, with $c_0=0$. Note that, for $0\le j,k\le N$, $c_j\ne c_k$ implies $$
\jb{c_jt-r}^{-1}\jb{c_kt-r}^{-1}\le C \jb{t+r}^{-1} \min\{\jb{c_jt-r}, \jb{c_kt-r}\}^{-1}. $$ Notice also that, for any $\mu>0$ and $c>0$, we have $$ \Phi_0(ct, x)^{-1}\le C \jb{t+r}^\mu \jb{ct-r}^{-\mu}, $$ where $C$ is a positive constant depending only on $\mu$ and $c$.
In the arguments below, we always suppose that $M$ is large enough, while $\varepsilon$ is small enough to satisfy $M\varepsilon<\!\!<1$.
\subsection{Estimates of the energy}\label{KEE1} First we evaluate the energy involved by time derivatives.
From \eqref{InductiveAs} we get \begin{equation}\nonumber
|\partial_t^{2k} F^{(2)}(\partial u)(t,x)|
\le C M\varepsilon \jb{t}^{-1} \sum_{m=0}^{2k} |\partial_t^{m} \partial u(t,x)|, \end{equation} and
\begin{align}\nonumber
& |\partial_t^{2k} H(u, \partial u)(t,x)|\\
& \qquad \le C|u(t,x)|^3+C
\,\sum_{m=0}^k \sum_{|\alpha|\le 1}|\partial_t^m \partial_{t,x}^\alpha u(t,x)|^2
\sum_{m=0}^{2k} |\partial_t^{m} \partial u(t,x)| \nonumber\\
& \qquad \le CM^3\varepsilon^3 \jb{t+r}^{-3+3\mu}w_-(t, r)^{-3\mu} \nonumber\\ \nonumber & \qquad\quad{}+CM^2\varepsilon^2 \jb{t+r}^{-2+2\mu}w_-(t,r)^{-2\mu}
\sum_{m=0}^{2k} |\partial_t^{m} \partial u(t,x)| \end{align} with small $\mu>0$. Since we have $$
\norm{\jb{t+|\cdot|}^{-3+3\mu}\jb{c_jt-|\cdot|}^{-3\mu}}{L^2({\mathbf R}^3)}\le
C_\mu\jb{t}^{-3/2} $$
for $\mu>0$ and $0\le j\le N$, if we set $y(t)= \sum_{m=0}^{2k} \|\partial_t^{m} \partial u(t)\!:\!{L^2(\Omega)}\|$, then we get $$
\|\partial_t^{2k} F(u, \partial u)(t)\!:\!{L^2(\Omega)}\| \le C_0 M\varepsilon (1+t)^{-1} y(t) +CM^3\varepsilon^3(1+t)^{-3/2}, $$ where $C_0$ is a universal constant which is independent of $M$ and $\varepsilon$. Noting that the boundary condition (\ref{ap2}) implies $\partial_t^j u(t,x)=0$ for $(t,x)\in [0,T) \times \partial \Omega$ and $0\le j\le 2k+1$, we see from the energy inequality for the wave equation that \begin{equation}\nonumber \frac{dy}{dt}(t) \le C_0 M\varepsilon (1+t)^{-1} y(t) +CM^3\varepsilon^3(1+t)^{-3/2}, \end{equation} which yields \begin{equation}\label{ene1} \hspace{10mm} y(t) \le (y(0)+CM^3\varepsilon^3) (1+t)^{C_0 M\varepsilon} \le CM\varepsilon (1+t)^{C_0 M\varepsilon}. \end{equation}
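Here the last step is the standard integrating factor argument: multiplying the differential inequality by $(1+t)^{-C_0M\varepsilon}$, we get
\begin{equation*}
\frac{d}{dt}\Bigl((1+t)^{-C_0 M\varepsilon}y(t)\Bigr)\le CM^3\varepsilon^3(1+t)^{-3/2},
\end{equation*}
and integration over $[0,t]$ gives $(1+t)^{-C_0 M\varepsilon}y(t)\le y(0)+2CM^3\varepsilon^3$; since the data in \eqref{ap1}--\eqref{ap3} are of size $\varepsilon$, we have $y(0)\le C\varepsilon$, and together with $M\varepsilon\le 1$ this yields the last inequality in \eqref{ene1}.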
Next we prove that for $0 \le j+m \le 2k$ \begin{equation}\label{ap11}
\|\partial_t^{j} \nabla_{\!x}\, u(t)\!:\!{H^m(\Omega)}\|
\le CM\varepsilon (1+t)^{C_0 M\varepsilon}. \end{equation} Since (\ref{ap11}) for $m=0$ follows from (\ref{ene1}), it suffices to consider the case $m \ge 1$. Then (\ref{ap10}) yields \begin{equation}\nonumber
\|\partial^\alpha \partial_t^j \nabla_{\!x}\, u(t)\!:\!{L^2(\Omega)}\|
\le C( \|\Delta \partial_t^{j} u(t)\!:\!{{H}^{m-1}(\Omega) }\|
+\|\nabla_{\!x}\, \partial_t^{j} u(t)\!:\!{L^2(\Omega)}\|) \end{equation}
for $|\alpha|=m$. Since $0\le j \le 2k-1$, we see from \eqref{ap11} for $m=0$ that the second term is bounded by $CM\varepsilon (1+t)^{C_0 M\varepsilon}$. Meanwhile, using (\ref{ap1}), the first term is estimated by $$
C( \|\partial_t^{j+2} u(t)\!:\!{{H}^{m-1}(\Omega) }\|
+\|\partial_t^{j} F(u,\partial u)(t)\!:\!{H^{m-1}(\Omega)}\|). $$
If we set $z_{j,m}(t)= \sum_{s=0}^{j} \|\partial_t^{s} \partial u(t)\!:\!{H^m(\Omega)}\|$, then we have $$
\|\partial_t^{j} F(u, \partial u)(t)\!:\!{H^{m-1}(\Omega)}\| \le C M\varepsilon (1+t)^{-1} z_{j,m-1}(t) +CM^3\varepsilon^3(1+t)^{-3/2}, $$ as before.
In conclusion, we get, for $|\alpha|=m$, \begin{equation}\nonumber
\|\partial^\alpha \partial_t^{j} \nabla_{\!x}\, u(t)\!:\!{L^2(\Omega)}\| \le C z_{j+1,m-1}(t) +CM\varepsilon (1+t)^{C_0 M\varepsilon}. \end{equation} Since \eqref{ene1} yields $z_{j,0}(t)\le CM\varepsilon (1+t)^{C_0M\varepsilon}$ for $0\le j\le 2k$, we find from the inductive argument in $m(\ge 1)$ that $z_{j,m}(t)\le CM\varepsilon (1+t)^{C_0M\varepsilon}$ for $0\le j+m\le 2k$. In particular, we obtain (\ref{ap11}).
\subsection{Estimates of the generalized energy, part 1}\label{KEE2} In this subsection we evaluate the generalized derivatives $\partial Z^\alpha u$ in $L^2(\Omega)$
for $|\alpha| \le 2k-1$. Fix small $\mu_0>0$. It follows from (\ref{commute}) that \begin{eqnarray}\label{ene2} && \quad \frac12\frac{d}{dt}
\int_{\Omega} \left(|\partial_t Z^\alpha u_i|^2+|\nabla_{\!x}\, Z^\alpha u_i|^2
\right)\,dx \\ \nonumber && =\int_{\Omega} Z^\alpha F_i(u,\partial u)\,\partial_t Z^\alpha u_i\,dx
+c_i^2\int_{\partial \Omega} (\nu\cdot \nabla_{\!x}\, Z^\alpha u_i)\,(\partial_t
Z^\alpha u_i)\,dS, \end{eqnarray} where $\nu=\nu(x)$ is the unit outer normal vector at $x \in \partial \Omega$
and $dS$ is the surface measure on $\partial \Omega$. Observing that $|Z v|\le C \jb{r} |\partial v|$, we obtain \begin{align}\label{ene3}
\|Z^{\alpha} F(u, \partial u)(t)\!:\!{L^2(\Omega)}\|
\le & CM\varepsilon (1+t)^{-1} \|\partial u(t)\|_{|\alpha|}\\
&+CM^2\varepsilon^2(1+t)^{-1+2\mu_0}\|\partial u(t)\|_{|\alpha|-1} \nonumber\\ &+CM^3\varepsilon^3(1+t)^{-3/2} \nonumber \end{align}
for $|\alpha| \le 2k-1$ (cf. \eqref{KataKata01} below).
Meanwhile, since $\partial \Omega \subset B_{1}(0)$, we have
$|Z^\alpha u(t,x)| \le C\sum_{|\beta| \le |\alpha|} |\partial^\beta u(t,x)|$ for $(t,x)\in [0,T) \times \partial \Omega$. Hence, by the trace theorem, we see that the second term of (\ref{ene2}) is bounded by
$C \sum_{|\beta| \le |\alpha|+1} \|\partial^\beta \partial u(t)\!:\!{L^2(\Omega_{2})}\|^2$.
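More precisely, by the Cauchy--Schwarz inequality and the trace theorem $\|w\!:\!{L^2(\partial\Omega)}\|\le C\|w\!:\!{H^1(\Omega_2)}\|$, this boundary term is bounded by
\begin{equation*}
C\,\|\nabla_{\!x}\, Z^\alpha u_i(t)\!:\!{H^1(\Omega_2)}\|\,\|\partial_t Z^\alpha u_i(t)\!:\!{H^1(\Omega_2)}\|,
\end{equation*}
and on $\Omega_2$ the fields $Z$ have smooth bounded coefficients, so that each of these two norms is bounded by $C\sum_{|\beta|\le|\alpha|+1}\|\partial^\beta \partial u(t)\!:\!{L^2(\Omega_{2})}\|$, which gives the stated bound.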
Noting that (\ref{ene1}) and (\ref{ap11}) imply \begin{equation} \label{LEK}
\|\partial^\beta \partial u(t)\!:\!{L^2(\Omega)}\| \le CM\varepsilon (1+t)^{C_0 M\varepsilon}
\text{ for $|\beta| \le 2k$,} \end{equation} we find from (\ref{ene2}) and (\ref{ene3}) that we have \begin{align*}
\frac{d}{dt}\|\partial u(t)\|_{m}^2 \le &
C_1 M\varepsilon (1+t)^{-1} \|\partial u(t)\|_{m}^2\\
&{}+CM^3\varepsilon^3 (1+t)^{-1+4\mu_0}\|\partial u(t)\|_{m-1}^2+CM^2\varepsilon^2 (1+t)^{2C_0 M\varepsilon} \end{align*} for $m\le 2k-1$, from which we inductively obtain \begin{equation}
\|\partial u(t)\|_{m}\le CM\varepsilon (1+t)^{C_0M\varepsilon+2\mu_0(m-1)+(1/2)} \end{equation} for $m\le 2k-1$, provided that $\varepsilon$ is so small that $C_1 M\varepsilon \le 1$. Setting $\gamma=4(k-1)\mu_0$, we obtain \begin{equation}\label{ap15}
\|\partial u(t)\|_{2k-1}\le CM\varepsilon (1+t)^{C_0M\varepsilon+\gamma+(1/2)}. \end{equation}
\subsection{Pointwise estimates, part 1} By (\ref{ap21}) and (\ref{ap15}) we have \begin{eqnarray}\label{ap21-}
&& \jb{|x|} |\partial u(t,x)|_{2k-3}
\le C \|\partial u(t)\|_{2k-1}
\le CM\varepsilon (1+t)^{C_0M\varepsilon+\gamma+(1/2)}. \end{eqnarray}
From \eqref{InductiveAs} we get \begin{align} \label{KataKata01}
|F(u, \partial u)(t,x)|_m\le & CM\varepsilon \jb{t+r}^{-1}w_-(t,r)^{-1}
|\partial u(t,x)|_m\\
&{}+CM^2\varepsilon^2\jb{t+r}^{-2+2\mu}w_-(t,r)^{-2\mu}|u(t,x)|_m \nonumber \end{align} for $m\le 2k$ with small $\mu>0$. We put \begin{equation} \label{KataKata01a}
U_{m,\lambda}(t)=\sup_{(s, x)\in [0,t]\times \Omega}
\sum_{i=1}^N\jb{s+|x|}^{1-\lambda} \Phi_0(c_is, x) |u_i(s,x)|_m \end{equation} for $\lambda\ge 0$. Then \eqref{KataKata01} yields \begin{align} \label{KataKata02}
|F(u, \partial u)(t,x)|_m\le & CM\varepsilon \jb{t+r}^{-1}w_-(t,r)^{-1}|\partial u(t,x)|_m\\ &{}+CM^2\varepsilon^2\jb{t+r}^{\lambda-3+3\mu}w_-(t,r)^{-3\mu} U_{m,\lambda}(t) \nonumber \end{align}
for $m\le 2k$. On the other hand, using $|u(t,x)|_m\le \jb{|x|}|\partial u(t,x)|_{m-1}$ for $m\ge 1$, and $|u_i(t,x)|\le M\varepsilon \jb{t+r}^{-1+\mu}\jb{c_it-r}^{-\mu}$, from \eqref{KataKata01} we also obtain \begin{align} \label{KataKata03}
|F(u, \partial u)(t,x)|_m\le & CM\varepsilon \jb{t+r}^{-1+2\mu}
w_-(t,r)^{-2\mu}|\partial u(t,x)|_m\\ &{}+CM^3\varepsilon^3\jb{t+r}^{-3+3\mu}w_-(t,r)^{-3\mu}. \nonumber \end{align}
Let $\chi$ be a non--negative $C^\infty({\mathbf R})$--function satisfying $\chi(\lambda)=1$ for $\lambda\le 1$, and $\chi(\lambda)=0$ for $\lambda\ge 2$. We define \begin{equation} \label{KataCut01}
\chi_{c,t_0,x_0}(t,x)=\chi\Bigl(c(t-t_0)+\sqrt{1+|x-x_0|^2}\Bigr) \end{equation} for $c>0$ and $(t_0, x_0)\in [0,T)\times\Omega$. Then, because of the finite speed of propagation, we have \begin{equation}
L[g;c](t_0,x_0)=L[\chi_{c, t_0,x_0}g; c](t_0, x_0). \label{KataCut02} \end{equation} We also have \begin{equation} \label{KataCut03}
\jb{t+|x|}\le C\jb{t_0+|x_0|} \end{equation} for any $(t,x)\in \supp \chi_{c,t_0,x_0}$ with $t\ge 0$, and any $(t_0, x_0)\in [0,\infty)\times \Omega$, where $C$ is a constant depending only on $c$.
Now we set $\lambda=C_0M\varepsilon+2\gamma+(1/2)$. Using \eqref{ap21-} and \eqref{KataKata02} with $m=2k-\ell-6$ and $\mu=(1-\gamma)/3$, we find \begin{align*}
&\|\chi_{c_i, t_0, x_0}F_i(u,\partial u)(t_0)\!:\! N_{2k-\ell-6}(W_{1+\gamma,
1-\gamma})\|\\
& \qquad \le
CM^2\varepsilon^2(1+U_{2k-\ell-6, \lambda}(t_0))
\jb{t_0+|x_0|}^{\lambda}. \end{align*} On the other hand, by \eqref{ap21-} and \eqref{KataKata03} with $m=2k-3$ and $\mu=\gamma/2$, we obtain
\begin{align*}
&\|\chi_{c_i, t_0, x_0}F_i(u,\partial u)(t_0)\!:\! N_{2k-3}(W_{1, 0})\|
\le CM^2\varepsilon^2 \jb{t_0+|x_0|}^{\lambda},
\end{align*} since we may assume $2-(3\gamma/2)\ge 1$.
In view of \eqref{KataCut03}, by using \eqref{m1} and the first inequality in \eqref{ba3} with $(\rho, \nu, \kappa)=(1, 1+\gamma, 1-\gamma)$, we obtain \begin{align*} & U_{2k-\ell-6, \lambda}(t)\le C\varepsilon+CM^2\varepsilon^2 (1+U_{2k-\ell-6, \lambda}(t)) \end{align*} with $\lambda=C_0M\varepsilon+2\gamma+(1/2)$, which leads to \begin{equation}
\sum_{i=1}^N \jb{t+|x|}^{(1/2)-C_0M\varepsilon-2\gamma}\Phi_0(c_it, x)
|u_i(t,x)|_{2k-\ell-6}\le CM\varepsilon \label{KataKata04} \end{equation} for $(t,x)\in [0, T)\times \Omega$, since we may assume $CM^2\varepsilon^2\le 1/2$.
\subsection{Estimates of the generalized energy, part 2}\label{KEE3} Since $\Phi_0(c_it,x)$ is bounded for $(t, x)\in [0,\infty)\times \Omega_2$, from \eqref{KataKata04} we get \begin{align} \label{ap17}
\norm{|u(t)|_{2k-\ell-6}}{L^2(\Omega_2)}
\le & C\norm{|u(t)|_{2k-\ell-6}}{L^\infty(\Omega_2)}\\ \le & CM\varepsilon \jb{t}^{-(1/2)+C_0M\varepsilon+2\gamma}, \nonumber \end{align} instead of \eqref{LEK}. Now (\ref{ene2}), (\ref{ene3}) and (\ref{ap17}) yield \begin{align}\nonumber
\frac{d}{dt}\|\partial u(t)\|_{m}^2 \le &
C_2 M\varepsilon (1+t)^{-1} \|\partial u(t)\|_{m}^2 \\ \nonumber
&+CM^3\varepsilon^3(1+t)^{-1+4\mu_0} \|\partial u(t)\|_{m-1}^2 \nonumber\\ &+CM^2\varepsilon^2 (1+t)^{-1+4\gamma+2C_0 M\varepsilon}, \nonumber \end{align} for $m\le 2k-\ell-8$, which inductively leads to (\ref{ap16}) with $C_*=C_0+C_2/2$ and $\rho_*=4\gamma$.
\subsection{Pointwise estimates, part 2} (\ref{ap21}) and (\ref{ap16}) imply \begin{align}\label{KataKata05}
& \jb{|x|} |\partial u(t,x)|_{2k-\ell-10} \le CM\varepsilon (1+t)^{\delta} \end{align} for $0<\varepsilon<\rho_*/(C_*M)$, where we have set $\delta=2\rho_*$. Note that we can take $\rho_*$ arbitrarily small, hence we may assume that $\delta$ is small enough in the following.
Using \eqref{KataKata05} and \eqref{KataKata02} with $m=2k-2\ell-13$, and $\mu=(1-\delta)/3$, we find \begin{align*}
&\|\chi_{c_i, t_0, x_0}F_i(u,\partial u)(t_0)\!:\! N_{2k-2\ell-13}
(W_{1+\delta, 1-\delta})\|\\
& \qquad \le
CM^2\varepsilon^2(1+U_{2k-2\ell-13, 2\delta}(t_0))
\jb{t_0+|x_0|}^{2\delta}. \end{align*} On the other hand, by \eqref{KataKata05} and \eqref{KataKata03} with $m=2k-\ell-10$ and $\mu=\delta/3$, we obtain
\begin{align*}
&\|\chi_{c_i, t_0, x_0}F_i(u,\partial u)(t_0)\!:\! N_{2k-\ell-10}(W_{1, 0})\|
\le CM^2\varepsilon^2 \jb{t_0+|x_0|}^{2\delta},
\end{align*} since we may assume $2-\delta\ge 1$. Now, similarly to \eqref{KataKata04}, these estimates end up with \begin{equation}
\sum_{i=1}^N \jb{t+|x|}^{1-2\delta}\Phi_0(c_it, x)
|u_i(t,x)|_{2k-2\ell-13}\le CM\varepsilon \label{KataKata06} \end{equation} for $(t,x)\in [0, T)\times \Omega$.
From \eqref{KataKata02} (with $\mu=(1+\delta)/3$), \eqref{KataKata05} and \eqref{KataKata06}, we get \begin{align} &
\|\chi_{c_i, t_0, x_0}F_i(u,\partial u)(t_0)\!:\! N_{2k-2\ell-13}(W_{1+\delta, 1+\delta})\| \\ \nonumber & \qquad\qquad\qquad\qquad\qquad
\le CM^2\varepsilon^2 \jb{t_0+|x_0|}^{4\delta}. \end{align}
From \eqref{m1}, \eqref{ba4}, \eqref{D+Es-01} and \eqref{D+Es-02}, we obtain \begin{align} & \jb{r}\jb{t+r}^{-4\delta}\jb{c_it-r}^{1+\delta}
|\partial u_i(t,x)|_{2k-3\ell-17}\le CM\varepsilon, \label{KataKata07}\\ & \jb{r}\jb{t+r}^{1-5\delta}\jb{c_it-r}^\delta
\sum_{|\alpha|\le 2k-3\ell-18}|D_{+, c_i} Z^\alpha u_i(t,x)| \le CM\varepsilon \label{KataKata08} \end{align} for $1\le i\le N$ and $(t,x)\in [0,T)\times \Omega$,
where we have used $\log(2+t+r)\le C\jb{t+r}^{\delta}$.
\subsection{Pointwise estimates, part 3} From now on, we take advantage of the detailed structure of our nonlinearity.
Note that
$r$ is equivalent to $\jb{t+r}$, when $r\ge 1$ and $|c_it-r|<(c_it/2)$. By Lemma \ref{NFEs}, with the help of \eqref{InductiveAs}, \eqref{KataKata06}, \eqref{KataKata07}, and \eqref{KataKata08}, we obtain \begin{align}
|F_i^{{\rm null}}(\partial u)(t,x)|_{2k-3\ell-18}
\le CM^2\varepsilon^2 \jb{t+r}^{-3+5\delta} \jb{c_it-r}^{-1-\delta}
\end{align}
for $(t,x)$ satisfying $r\ge 1$ and $|c_it-r|<(c_it/2)$.
On the other hand, $\jb{c_it-r}$ is equivalent to $\jb{t+r}$, when $r<1$ or $|c_it-r|\ge (c_it/2)$. Hence, observing that $F_i^{{\rm null}}$ is quadratic with respect to $\partial u$, from \eqref{InductiveAs} and \eqref{KataKata07} we get \begin{equation}
|F_i^{{\rm null}}(\partial u)(t,x)|_{2k-3\ell-18} \le CM^2\varepsilon^2 \jb{t+r}^{-2+3\delta} \jb{r}^{-2} \end{equation}
for $(t,x)$ satisfying $r<1$ or $|c_it-r|\ge (c_it/2)$.
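These equivalences can be checked directly: if $r\ge 1$ and $|c_it-r|<(c_it/2)$, then
\begin{equation*}
\frac{c_it}{2}<r<\frac{3c_it}{2}, \qquad\text{whence}\qquad r\le t+r\le \Bigl(1+\frac{2}{c_i}\Bigr)r,
\end{equation*}
so that $r$ and $\jb{t+r}$ are comparable since $r\ge 1$. On the other hand, if $|c_it-r|\ge(c_it/2)$, then $t\le (2/c_i)|c_it-r|$ and $r\le c_it+|c_it-r|\le 3|c_it-r|$, while if $r<1$ then $t+r\le t+1$ and $|c_it-r|\ge c_it-1$; in either case $\jb{c_it-r}\ge C^{-1}\jb{t+r}$ with a constant $C>0$ depending only on $c_i$.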
Now we find \begin{align} \label{KataKata11} \norm{F_i^{{\rm null}}(\partial u)(t)}{N_{2k-3\ell-18}(W_{\nu, \kappa})} \le CM^2\varepsilon^2 \end{align} with some $\nu>1$ and $\kappa>1$, since we may assume $2-5\delta>1$.
\eqref{InductiveAs} and \eqref{KataKata07} yield \begin{align}
& |R_{I,i}(\partial u)(t,x)|_{2k-3\ell-18}\\ & \qquad \le CM^2\varepsilon^2 \jb{r}^{-2}\jb{t+r}^{4\delta}
\sum_{c_j\ne c_k}\jb{c_jt-r}^{-1}\jb{c_kt-r}^{-1-\delta} \nonumber\\ & \qquad \le CM^2\varepsilon^2\jb{r}^{-1}\jb{t+r}^{-2+4\delta}
w_-(t,r)^{-1-\delta} \nonumber \end{align} for $(t,x)\in [0, T)\times \Omega$ with $c_0=0$. Since we may assume $2-4\delta>1$, we obtain \begin{align} \label{KataKata13} \norm{R_{I,i}(\partial u)(t)}{N_{2k-3\ell-18}(W_{\nu, \kappa})}\le CM^2\varepsilon^2 \end{align} with some $\nu>1$ and $\kappa>1$.
Similarly, we have \begin{align}
|R_{II,i}(\partial u)(t,x)|_{2k-3\ell-18}
& \le CM^2\varepsilon^2\jb{r}^{-1}\jb{t+r}^{-1+4\delta} \\ & \qquad\qquad\qquad \times
w_-^{(c_i)}(t,r)^{-2-\delta}, \nonumber \end{align} which yields \begin{align} \label{KataKata15} \norm{R_{II,i}(\partial u)(t)}{N_{2k-3\ell-18}(W_{-1+4\delta, \kappa}^{(c_i)})}\le CM^2\varepsilon^2 \end{align} with some $\kappa>1$.
From \eqref{InductiveAs}, \eqref{KataKata06} and \eqref{KataKata07} we have \begin{align} \label{KataKata16}
& |H_i(u, \partial u)(t,x)|_{2k-3\ell-18}\\ & \qquad \le CM^3\varepsilon^3 \jb{t+r}^{-3+3\mu+4\delta}
w_-(t,r)^{-3\mu} \nonumber \end{align} with small $\mu>0$, which implies \begin{equation} \label{KataKata17} \norm{H_i(u,\partial u)(t)}{N_{2k-3\ell-18}(W_{1+\delta, (1-4\delta)-\delta})} \le CM^2\varepsilon^2. \end{equation}
Finally, \eqref{ba3}, \eqref{ba4} and \eqref{D+Es-01} lead to \begin{equation} e_{2k-4\ell-22, i}\bigl[L[F_i^{{\rm null}}+R_{I,i}; c_i]\bigr](t,x) \le CM^2\varepsilon^2 \label{KataF01} \end{equation} in view of \eqref{KataKata11} and \eqref{KataKata13}. On the other hand, \eqref{KataKata15} and \eqref{DS} yield \begin{equation} \label{KataF02} \jb{r}\jb{c_it-r}^{1-4\delta}
|\partial L[R_{II,i}; c_i](t,x)|_{2k-4\ell-22}\le CM^2\varepsilon^2, \end{equation} while \eqref{KataKata17} and \eqref{ba4} with $(\rho,\nu, \kappa)=(1-4\delta, 1+\delta, (1-4\delta)-\delta)$ imply \begin{equation} \label{KataF03} \jb{r}\jb{c_it-r}^{1-4\delta}
|\partial L[H_{i}; c_i](t,x)|_{2k-4\ell-22}\le CM^2\varepsilon^2. \end{equation}
From \eqref{KataF01}, \eqref{KataF02} and \eqref{KataF03}, together with \eqref{m1}, we obtain \begin{equation}
\jb{r}\jb{c_it-r}^{1-4\delta}|\partial u_i(t,x)|_{2k-4\ell-22}\le CM\varepsilon. \label{KataF04} \end{equation}
\subsection{Pointwise estimates, the final part} By \eqref{InductiveAs} and \eqref{KataF04}, we obtain \begin{align}
& |R_{II,i}(\partial u)(t,x)|_{2k-4\ell-22}\\ & \qquad \le CM^2\varepsilon^2\jb{r}^{-1}\jb{t+r}^{-1} w_-^{(c_i)}(t,r)^{-2+4\delta}, \nonumber \end{align} which leads to \begin{align}
\norm{R_{II,i}(\partial u)(t)}{N_{2k-4\ell-22}( W_{1, \kappa}^{(c_i)})}\le CM^2\varepsilon^2 \end{align} with some $\kappa>1$, since we may assume $2-4\delta>1$. Hence \eqref{ba3}, \eqref{DS} and \eqref{D+Es-01} imply \begin{equation} e_{2k-5\ell-26,i}\bigl[ L[R_{II,i}; c_i] \bigr] (t,x)\le CM^2\varepsilon^2 \label{KataF06} \end{equation} (observe that we have $W_{1,\kappa}\le W_{1, \kappa}^{(c_i)}$).
By \eqref{InductiveAs} and \eqref{KataF04}, we also obtain \begin{align}
& |H_i(u,\partial u)(t,x)|_{2k-5\ell-26}\\ & \quad \le CM^3\varepsilon^3 \jb{r}^{-1} \jb{t+r}^{-2+2\mu}
w_-(t,r)^{-1+4\delta-2\mu} \nonumber\\ &\quad \qquad {}+CM^2\varepsilon^2\jb{t+r}^{-3+3\mu}
w_-(t,r)^{-3\mu} U_{2k-5\ell-26, 0}(t) \nonumber \end{align} with small $\mu>0$, where $U_{m, \lambda}$ is given by \eqref{KataKata01a}. Since we may assume $-1+4\delta<0$, we have \begin{align} \label{KataF09} & \norm{H_i(u, \partial u)(t)}{N_{2k-5\ell-26}(W_{1+\mu, 1-\mu})} \\ & \qquad \le CM^2\varepsilon^2(M\varepsilon+U_{2k-5\ell-26,0}(t)). \nonumber \end{align} From \eqref{KataKata16} we also have \begin{equation} \norm{H_i(u, \partial u)(t)}{N_{2k-4\ell-23}(W_{1, 0})} \le CM^3\varepsilon^3. \end{equation} Now the first inequality in \eqref{ba3} leads to \begin{align} \label{KataF07}
& \jb{t+r}\Phi_0(c_it, x)|L[H_i; c_i](t,x)|_{2k-5\ell-26}\\ & \qquad\qquad\qquad \le CM^2\varepsilon^2(M\varepsilon+U_{2k-5\ell-26,0}(t)). \nonumber \end{align} \eqref{KataF01}, \eqref{KataF06} and \eqref{KataF07} imply $$ U_{2k-5\ell-26,0}(t)\le C\varepsilon+CM^2\varepsilon^2(1+U_{2k-5\ell-26,0}), $$ which yields \begin{equation} \label{KataF08}
\jb{t+r}\Phi_0(c_it, x)|u_i(t,x)|_{2k-5\ell-26}
\le C\varepsilon+CM^2\varepsilon^2, \end{equation} provided that $\varepsilon$ is sufficiently small. In view of \eqref{KataF09} and \eqref{KataF08}, we obtain $$
\norm{H_i(u, \partial u)(t)}{N_{2k-5\ell-26}(W_{1+\mu, 1-\mu})}
\le CM^3\varepsilon^3. $$ Now \eqref{ba4} and \eqref{D+Es-01} with $(\rho, \nu, \kappa)=(1, 1+\mu, 1-\mu)$ imply \begin{align} \label{KataF10}
& \jb{r}\jb{c_it-r}|\partial L[H_i; c_i](t,x)|_{2k-6\ell-30}\le CM^3\varepsilon^3,\\ \label{KataF11}
& \frac{\jb{r}\jb{t+r}}{\log(2+t+r)}\sum_{|\alpha|\le 2k-6\ell-31}
|D_{+, c_i}Z^\alpha L[H_i; c_i](t,x)|\le CM^3\varepsilon^3. \end{align}
Finally, since $2k-6\ell-30\ge k$, from \eqref{KataF01}, \eqref{KataF06}, \eqref{KataF08}, \eqref{KataF10} and \eqref{KataF11}, we obtain \eqref{KFF}.
This completes the proof.
$\qed$
\subsection{Concluding remark} If we consider the single speed case $c_1=c_2=\cdots=c_N=1$, we can replace $e_{k}[u](t,x)$ by \begin{align*}
\widetilde{e}_k[u](t,x)=&\jb{t+|x|}\jb{t-|x|}^\rho|u(t,x)|_{k+1}
{}+\jb{|x|}\jb{t-|x|}^{1+\rho}|\partial u(t,x)|_{k}\\
& {}+\frac{\jb{|x|}\jb{t+|x|}\jb{t-|x|}^\rho}{\log(2+t+|x|)}
\sum_{|\alpha|\le k-1} |D_{+, 1}Z^\alpha u(t,x)| \end{align*} with some $\rho \in (1/2, 1)$ as in the Cauchy problem treated in \cite{KaKu07}, and we can show $\norm{\widetilde{e}_k[u](t)}{L^\infty({\mathbf R}^3)}\le M\varepsilon$ for $0\le t<\infty$. The proof becomes much simpler because of the better decay of the solution.
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
\renewcommand{\thelemma}{A.\arabic{lemma}}
\renewcommand{\thetheorem}{A.\arabic{theorem}}
\setcounter{theorem}{0}
\section*{Appendix}
\noindent{\it Proof of Lemma \ref{elliptic}.}\ We shall show (\ref{ap10}) only for $m=2$, because the general case can be obtained analogously by an inductive argument. Let $\chi$ be a $C^\infty_0({\mathbf R}^3)$ function such that $\chi \equiv 1$ in a neighborhood of ${\mathcal O}$. Let $\text{supp}\,\chi \subset B_R(0)$ for some $R>1$. We set $\varphi_1=\chi \varphi$ and $\varphi_2=(1-\chi) \varphi$, so that $\varphi=\varphi_1+\varphi_2$.
First we prove, for $|\alpha|=2$, \begin{equation}\label{ap10bis}
\| \partial^\alpha \varphi_2\!:\!{L^2(\Omega)}\| \le C(\|\Delta \varphi\!:\!{L^2(\Omega)}\|
+\|\nabla \varphi\!:\!{L^2(\Omega)}\|). \end{equation}
Since $\| \partial^\alpha w\!:\!{L^2({\mathbf R}^3)}\| \le C\|\Delta w\!:\!{L^2({\mathbf R}^3)}\|$ for $|\alpha|=2$ and $w \in H^2({\mathbf R}^3)$, the left--hand side of \eqref{ap10bis} is estimated by \begin{equation}\nonumber
C\|\Delta \varphi_2\!:\!{L^2(\Omega)}\|
\le C(\|\varphi\!:\!{L^2(\Omega_R)}\|
+\|\nabla \varphi\!:\!{L^2(\Omega)}\|
+\|\Delta \varphi\!:\!{L^2(\Omega)}\|). \end{equation} Thanks to the estimate \begin{equation}\label{ha}
\|w\!:\!{L^2(\Omega_R)}\| \le C R^2
\|\nabla w\!:\!{L^2(\Omega)}\| \end{equation} for $w \in H_0^1(\Omega)$ (for the proof, see \cite{LaPh}), we obtain \eqref{ap10bis}.
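Let us also indicate one way to see \eqref{ha}: extending $w\in H_0^1(\Omega)$ by zero to ${\mathbf R}^3$ and applying Hardy's inequality
\begin{equation*}
\bigl\|\,|x|^{-1}w\!:\!{L^2({\mathbf R}^3)}\bigr\| \le 2\,\|\nabla w\!:\!{L^2({\mathbf R}^3)}\|,
\end{equation*}
we obtain $\|w\!:\!{L^2(\Omega_R)}\|\le R\,\bigl\|\,|x|^{-1}w\!:\!{L^2(\Omega_R)}\bigr\|\le 2R\,\|\nabla w\!:\!{L^2(\Omega)}\|$, which implies \eqref{ha} since $R>1$.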
Next we estimate $\varphi_1$. We shall use the following well--known elliptic estimate (see Chapter 9 in \cite{GiTr} for instance): \begin{eqnarray}\nonumber
\|w\!:\!{H^{k+2}(\Omega_R)}\| \le C(\|\Delta w\!:\!{H^k(\Omega_R)}\|
+\|w\!:\!{L^2(\Omega_R)}\|) \end{eqnarray} for $w \in H^{k+2}(\Omega_R) \cap H^1_0(\Omega_R)$ with a non--negative integer $k$.
Since $\text{supp}\,\chi \subset B_R(0)$, we have $\varphi_1 \in H_0^1(\Omega_R)$. Therefore, the application of the above estimate for $k=0$ in combination with (\ref{ha}) gives \begin{equation}\label{ap10bis2}
\| \varphi_1\!:\!{H^2(\Omega)}\| \le C(\|\Delta \varphi\!:\!{L^2(\Omega)}\|
+\|\nabla \varphi\!:\!{L^2(\Omega)}\|). \end{equation} Thus (\ref{ap10}) for $m=2$ follows from (\ref{ap10bis}) and (\ref{ap10bis2}).
$\qed$
\noindent{\it Proof of Lemma \ref{local}.}\ If $v$ is the smooth solution of the mixed problem \eqref{eq}--\eqref{id}, then it follows that \begin{equation}\nonumber \partial_t^j v(t,x)=K[(v_j,v_{j+1}); c](t,x)+ \int_0^t K[(0,\partial_s^j f(s)); c](t-s,x) ds \end{equation} for any non--negative integer $j$ and any $(t,x) \in [0,T) \times \Omega$,
where $v_j$ are given by (\ref{data+}). By (\ref{obstacle}) we have, for ${|\alpha| \le 1}$, \begin{eqnarray}\label{obstacle1} && \hspace{4mm}
\|\partial^\alpha K[(v_j,v_{j+1}); c](t):L^2({\Omega_b})\| \\ \nonumber
&& \le C \exp(-\sigma t)\,(\|v_j:H^{\ell+1}(\Omega)\|+\|v_{j+1}:H^{\ell}(\Omega)\|) \\ \nonumber
&& \le C \exp(-\sigma t)\,(\|v_0:H^{\ell+j+1}(\Omega)\|+\|v_{1}:H^{\ell+j}(\Omega)\| \\ \nonumber && \hspace{40mm}
+\sum_{|\alpha| \le \ell+j-1} \| (\partial_{s,x}^\alpha f)(0) : L^2(\Omega)\|) \end{eqnarray} and \begin{eqnarray}\label{obstacle2} && \hspace{4mm}
\int_0^t \|\partial^\alpha K[(0,\partial_s^j f(s)); c](t-s) :L^2({\Omega_b})\| ds \\ \nonumber
&& \le C \int_0^t \exp(-\sigma (t-s))\,\| \partial_s^j f(s) :H^{\ell}(\Omega)\| ds \\ \nonumber && \le C(1+t)^{-\gamma} \sup_{0\le s \le t} (1+s)^\gamma
\| \partial_s^j f(s) :H^{\ell}(\Omega)\| \end{eqnarray}
for any $\gamma>0$. Therefore for ${|\alpha| \le 1}$ and any non--negative integer $j$, we have \begin{eqnarray}\label{LE1} &&
\| \partial^\alpha \partial^j_{t} v(t)\!:\!{L^2(\Omega_b)}\|
\le C(1+t)^{-\gamma} \,( \|\vec{v}_0\!:\!H^{\ell+j+1}(\Omega) \times H^{\ell+j}(\Omega)\| \\ \nonumber && \hspace{40mm}
+\sum_{|\alpha| \le \ell+j} \sup_{0\le s \le t} (1+s)^\gamma
\|\partial^\alpha_{s,x} f(s)\!:\!{L^2(\Omega)}\|). \end{eqnarray}
In order to estimate $\partial^\alpha v$ for ${|\alpha| \le m}$, we have only to combine (\ref{LE1}) with the following variant of Lemma \ref{elliptic}\,: \begin{equation}\label{LE2}
\|\varphi\!:\!{H^m(\Omega_b)}\| \le C(\|\Delta \varphi\!:\!{H^{m-2}(\Omega_{b^\prime})}\| +\|\varphi\!:\!{H^1(\Omega_{b^\prime})}\|), \end{equation} where $1<b<b^\prime$ and $\varphi \in H^m(\Omega) \cap H_0^1(\Omega)$ with $m \ge 2$. This completes the proof.
$\qed$
\noindent{\it Proof of Lemma \ref{KlainermanSobolev}.}\ It is well-known that for $w \in C_0^2({\mathbf R}^3)$ we have \begin{eqnarray}\nonumber
\sup_{x \in {\mathbf R}^3} |x||w(x)| \le C \sum_{|\alpha| \le 2}
\|\widetilde{Z}^\alpha w\!:\!{L^2({\mathbf R}^3)}\| \end{eqnarray} (for the proof, see e.g. \cite{kl0}). Rewriting $\varphi$ as $\varphi=\psi_1 \varphi+(1-\psi_1) \varphi$ with $\psi_1$ in (\ref{cutoff}), we see that the left--hand side of (\ref{ap21}) is bounded by \begin{eqnarray}\nonumber && \hspace{4mm}
C \sup_{x \in {\mathbf R}^3} |x| |\psi_1(x) \varphi(x)|
+C \sup_{x \in \Omega} |(1-\psi_1(x)) \varphi(x)| \\ \nonumber && \le
C \sum_{|\alpha| \le 2} \|\widetilde{Z}^\alpha (\psi_1 \varphi)\!:\!{L^2({\mathbf R}^3)}\|
+C \sum_{|\alpha| \le 2} \|\partial^\alpha ((1-\psi_1) \varphi)\!:\!{L^2(\Omega_2)}\| \\ \nonumber
&& \le C \sum_{|\alpha| \le 2} \|\widetilde{Z}^\alpha \varphi\!:\!{L^2(\Omega)}\|,
\end{eqnarray} hence we obtain (\ref{ap21}). This completes the proof.
$\qed$
\\ \begin{center} {\sc Acknowledgments} \end{center} The authors would like to express their gratitude to Prof.~S.~Alinhac for his useful comments on the preliminary version of this paper.
\end{document}
\begin{document}
\frontmatter
\title{Orthogonal polynomial expansions for the Riemann xi function}
\author{Dan Romik }
\address{Department of Mathematics, University of California, Davis, One Shields Ave, Davis CA 95616, USA} \curraddr{}
\email{[email protected]}
\thanks{This material is based upon work supported by the National Science Foundation under Grant No. DMS-1800725.}
\date{}
\subjclass[2010]{Primary 11M06, 33C45}
\keywords{Riemann xi function, Riemann zeta function, Riemann hypothesis, orthogonal polynomials, De Bruijn-Newman constant, asymptotic analysis}
\begin{abstract}
We study infinite series expansions for the Riemann xi function $\Xi(t)$ in three specific families of orthogonal polynomials: (1) the Hermite polynomials; (2) the symmetric Meixner-Pollaczek polynomials $P_n^{(3/4)}(x;\pi/2)$; and (3) the continuous Hahn polynomials $p_n\left(x; \frac34,\frac34,\frac34,\frac34\right)$. The first expansion was discussed in earlier work by Tur\'an, and the other two expansions are new. For each of the three expansions, we derive formulas for the coefficients, show that they appear with alternating signs, derive formulas for their asymptotic behavior, and derive additional interesting properties and relationships. We also apply some of the same techniques to prove a new asymptotic formula for the Taylor coefficients of the Riemann xi function.
Our results continue and expand the program of research initiated in the 1950s by Tur\'an, who proposed using the Hermite expansion of the Riemann xi function as a tool to gain insight into the location of the Riemann zeta zeros. We also uncover a connection between Tur\'an's ideas and the separate program of research involving the so-called De Bruijn--Newman constant. Most significantly, the phenomena associated with the new expansions in the Meixner-Pollaczek and continuous Hahn polynomial families suggest that those expansions may be even more natural tools than the Hermite expansion for approaching the Riemann hypothesis and related questions. \end{abstract}
\maketitle
\tableofcontents
\mainmatter
\chapter{Introduction}
\label{ch:introduction}
\section{Background}
\label{sec:intro-background}
This paper concerns the study of certain infinite series expansions for the Riemann xi function $\xi(s)$. Recall that $\xi(s)$ is defined in terms of Riemann's zeta function $\zeta(s)$ by \begin{equation} \xi(s) = \frac12 s (s-1) \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s) \qquad (s\in\mathbb{C}). \end{equation} $\xi(s)$ is an entire function and satisfies the functional equation \begin{equation} \label{eq:xi-functional-eq} \xi(1-s) = \xi(s). \end{equation} It is convenient and customary to perform a change of variables, denoting \begin{equation} \Xi(t) = \xi\left(\frac12+i t\right) \qquad (t\in\mathbb{C}), \end{equation} a function that (in keeping with convention) will also be referred to as the Riemann xi function. The functional equation \eqref{eq:xi-functional-eq} then becomes the statement that $\Xi(t)$ is an even function. The xi function has been a key tool in the study of the complex-analytic properties of $\zeta(s)$ and, crucially, the Riemann Hypothesis (RH). Two additional standard properties of $\Xi(t)$ are that it takes real values on the real line, and that RH can be stated as the claim that all the zeros of $\Xi(t)$ are real. \cite{titschmarsh}
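As a purely numerical illustration (not part of the mathematical development; the precision setting and sample points below are arbitrary choices), the definitions above can be evaluated directly with the \texttt{mpmath} Python library. The short sketch computes $\Xi(t)$ at a few real points, where it is real-valued.
\begin{verbatim}
# Minimal sketch: evaluate xi(s) and Xi(t) = xi(1/2 + i t) directly from the
# definition.  Requires the mpmath library; the precision is an arbitrary choice.
from mpmath import mp, mpf, mpc, gamma, zeta, pi

mp.dps = 30  # working precision (decimal digits)

def xi(s):
    # xi(s) = (1/2) s (s-1) pi^(-s/2) Gamma(s/2) zeta(s)
    s = mpc(s)
    return mpf('0.5') * s * (s - 1) * pi**(-s/2) * gamma(s/2) * zeta(s)

def Xi(t):
    # Xi(t) = xi(1/2 + i t); real-valued for real t (up to round-off here)
    return xi(mpf('0.5') + mpc(0, 1) * t)

if __name__ == "__main__":
    # Xi is even and real on the real line; its first zero is near t = 14.1347
    for t in (0, 5, mpf('14.134725')):
        print(t, Xi(t))
\end{verbatim}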
\subsection{Some well-known representations of the Riemann xi function}
Much research on the zeta function has been based on studying various series and integral representations of $\zeta(s)$, $\xi(s)$ and $\Xi(t)$, in the hope that this might provide information about the location of their zeros. For example, it is natural to investigate the sequence of coefficients in the Taylor expansion \begin{equation} \label{eq:riemannxi-taylor} \xi(s) = \sum_{n=0}^\infty a_{2n} \left(s-\frac12\right)^{2n}. \end{equation} Riemann himself derived in his seminal 1859 paper a formula for the coefficients $a_{2n}$ \cite[p.~17]{edwards}, which in our notation reads as \begin{equation} \label{eq:riemannxi-taylorcoeff-int} a_{2n} = \frac{1}{2^{2n-1}(2n)!} \int_1^\infty \omega(x)x^{-3/4}(\log x)^{2n}\,dx, \end{equation} (where $\omega(x)$ is defined below in \eqref{eq:omegax-def}), and which plays a small role in the theory. The study of the numbers $a_{2n}$ remains an active area of research \cite{coffey, csordas-norfolk-varga, griffin-etal, pustylnikov1, pustylnikov2, pustylnikov3}---we will also prove a result of our own about them in \secref{sec:xi-taylor-asym}---but, disappointingly, the Taylor expansion \eqref{eq:riemannxi-taylor} has not provided much insight into the location of the zeros of $\zeta(s)$.
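As a small numerical illustration of \eqref{eq:riemannxi-taylorcoeff-int} (again not part of the text's arguments), the following Python/\texttt{mpmath} sketch evaluates the first few coefficients $a_{2n}$ from Riemann's integral formula, truncating the series defining $\omega(x)$ (given below in \eqref{eq:omegax-def}) at an arbitrary number of terms; since $a_0$ is the constant term of the Taylor expansion \eqref{eq:riemannxi-taylor}, it must equal $\xi(1/2)$, which serves as a sanity check.
\begin{verbatim}
# Sketch: Riemann's integral formula for the Taylor coefficients a_{2n}.
# The truncation of the series for omega(x) and the precision are arbitrary choices.
from mpmath import mp, mpf, mpc, quad, exp, log, pi, factorial, gamma, zeta

mp.dps = 30

def omega(x, terms=50):
    # truncated omega(x) = sum_{k>=1} (2 pi^2 k^4 x^2 - 3 pi k^2 x) e^{-pi k^2 x}
    return sum((2*pi**2*k**4*x**2 - 3*pi*k**2*x) * exp(-pi*k**2*x)
               for k in range(1, terms + 1))

def a(two_n):
    integral = quad(lambda x: omega(x) * x**mpf('-0.75') * log(x)**two_n, [1, mp.inf])
    return integral / (2**(two_n - 1) * factorial(two_n))

def xi(s):
    s = mpc(s)
    return mpf('0.5') * s * (s - 1) * pi**(-s/2) * gamma(s/2) * zeta(s)

if __name__ == "__main__":
    print(a(0), xi(mpf('0.5')).real)   # a_0 must equal xi(1/2)
    print(a(2), a(4))                  # first few higher coefficients
\end{verbatim}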
Another important way to represent $\xi(s)$, also considered by Riemann, is as a Mellin transform, or---which is equivalent through a standard change of variables---as a Fourier transform. Specifically, define functions $\theta(x), \omega(x), \Phi(x)$ by \begin{align} \label{eq:jactheta-def} \theta(x) & = \sum_{n=-\infty}^\infty e^{-\pi n^2 x} = 1+2\sum_{n=1}^\infty e^{-\pi n^2 x} & (x>0), \\ \label{eq:omegax-def} \omega(x) &= \frac12\left(2x^2 \theta''(x) + 3 x \theta'(x)\right) = \sum_{n=1}^\infty (2\pi^2 n^4 x^2 - 3\pi n^2 x)e^{-\pi n^2 x} & (x>0), \\ \label{eq:phix-def} \Phi(x) & = 2 e^{x/2} \omega(e^{2x}) = 2\sum_{n=1}^\infty \left(2\pi^2 n^4 e^{9x/2}-3\pi n^2 e^{5x/2}\right) \exp\left(-\pi n^2 e^{2x}\right) & (x\in\mathbb{R}). \end{align} Then it is well-known that $\theta(x), \omega(x), \Phi(x)$ are positive functions, satisfy the functional equations (all equivalent to each other, as well as to \eqref{eq:xi-functional-eq}) \begin{equation} \label{eq:theta-functional-equation} \ \ \theta\left(\frac{1}{x}\right) = \sqrt{x}\, \theta(x), \qquad \omega\left(\frac{1}{x}\right) = \sqrt{x}\, \omega(x), \qquad \Phi\left(-x\right) = \Phi(x), \end{equation} and that $\xi(s)$ has the Mellin transform representation \begin{equation} \label{eq:riemannxi-mellintrans} \xi(s) = \int_0^\infty \omega(x)x^{s/2-1}\,dx, \end{equation} and the Fourier transform representation \begin{equation} \label{eq:riemannxi-fouriertrans} \Xi(t) = \int_{-\infty}^\infty \Phi(x) e^{i t x}\,dx. \end{equation} The right-hand side of \eqref{eq:riemannxi-fouriertrans} is also frequently written in equivalent form as a cosine transform, that is, replacing the $e^{itx}$ term with $\cos(tx)$, which is valid since $\Phi(x)$ is an even function. For additional background, see \cite{edwards, titschmarsh}.
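The Fourier representation \eqref{eq:riemannxi-fouriertrans} is also easy to test numerically. The following sketch (illustrative only; the truncation of the series for $\Phi(x)$, the precision, and the quadrature routine are arbitrary choices) compares the cosine-transform form of \eqref{eq:riemannxi-fouriertrans} with the direct definition of $\Xi(t)$.
\begin{verbatim}
# Sketch: check Xi(t) = int_R Phi(x) e^{itx} dx = 2 int_0^inf Phi(x) cos(tx) dx
# against the direct definition Xi(t) = xi(1/2 + it).  Truncations are illustrative.
from mpmath import mp, mpf, mpc, quad, exp, cos, pi, gamma, zeta

mp.dps = 30

def Phi(x, terms=20):
    # truncated Phi(x) = 2 sum_{k>=1} (2 pi^2 k^4 e^{9x/2} - 3 pi k^2 e^{5x/2}) e^{-pi k^2 e^{2x}}
    return 2*sum((2*pi**2*k**4*exp(mpf(9)/2*x) - 3*pi*k**2*exp(mpf(5)/2*x))
                 * exp(-pi*k**2*exp(2*x)) for k in range(1, terms + 1))

def Xi_fourier(t):
    # Phi is even, so the Fourier transform reduces to a cosine transform
    return 2*quad(lambda x: Phi(x)*cos(t*x), [0, mp.inf])

def Xi_direct(t):
    s = mpf('0.5') + mpc(0, 1)*t
    return (mpf('0.5')*s*(s - 1)*pi**(-s/2)*gamma(s/2)*zeta(s)).real

if __name__ == "__main__":
    for t in (0, 5, mpf('14.134725')):
        print(t, Xi_fourier(t), Xi_direct(t))
\end{verbatim}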
\subsection{P\'olya's attack on RH and its offshoots by De Bruijn, Newman and others}
P\'olya in the 1920s began an ambitious line of attack on RH in a series of papers \cite{polya1923, polya1926a, polya1926b, polya1927} (see also \cite[Ch.~X]{titschmarsh}) in which he investigated sufficient conditions for an entire function represented as the Fourier transform of a positive even function to have all its zeros lie on the real line. P\'olya's ideas have been quite influential and found important applications in areas such as statistical physics (see \cite{lee-yang, newman-wu}, \cite[pp.~424--426]{polya-collected}). One particular result that proved consequential is P\'olya's discovery that the factor $e^{\lambda x^2}$, where $\lambda>0$ is constant, is (to use a term apparently coined by De Bruijn \cite{debruijn}) a so-called \firstmention{universal factor}. That is to say, P\'olya's theorem states that if an entire function $G(z)$ is expressed as the Fourier transform of a function $F(x)$ of a real variable, and all the zeros of $G(z)$ are real, then, under certain assumptions of rapid decay on $F(x)$ (see \cite{debruijn} for details), the zeros of the Fourier transform of $F(x)e^{\lambda x^2}$ are also all real. This discovery spurred much follow-up work by De Bruijn \cite{debruijn}, Newman \cite{newman} and others \cite{csordas-norfolk-varga1988, csordas-odlyzko-etal, csordas-ruttan-varga, csordas-smith-varga, ki-kim-lee, norfolk-ruttan-varga, odlyzko, polymath15, rodgers-tao, saouter-etal, teriele} on the subject of what came to be referred to as the \firstmention{De Bruijn-Newman constant}; the rough idea is to launch an attack on RH by generalizing the Fourier transform \eqref{eq:riemannxi-fouriertrans} through the addition of the ``universal factor'' $e^{\lambda x^2}$ inside the integral, and to study the set of real $\lambda$'s for which the resulting entire function has only real zeros. See \secref{sec:hermite-poissonpolya} where some additional details are discussed, and see \cite[Ch.~5]{broughan}, \cite{newman-wu} for accessible overviews of the subject.
\subsection{Tur\'an's approach}
Next, we survey another attack on RH that is the closest one conceptually to our current work, proposed by P\'al Tur\'an. In a 1950 address to the Hungarian Academy of Sciences \cite{turan1952} and follow-up papers \cite{turan1954, turan1959}, Tur\'an took a novel look at the problem, starting by re-examining the idea of looking at the Taylor expansion \eqref{eq:riemannxi-taylor} and then analyzing why it fails to lead to useful insights and how one might try to improve on it. He argued that the coefficients in the Taylor expansion of an entire function provide the wrong sort of information about the zeros of the function, being in general well-suited for estimating the distance of the zeros from the origin, but poorly adapted for the purpose of telling whether the zeros lie on the real line. As a heuristic explanation, he pointed out that the level curves of the power functions $z\mapsto z^n$ are concentric circles, and argued that one must therefore look instead for series expansions of the Riemann xi function in functions whose level curves approximate straight lines running parallel to the real axis. He then argued that the Hermite polynomials \begin{equation*} H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}\left( e^{-x^2}\right) \end{equation*} are such a family of functions, and proceeded to prove several results demonstrating his main thesis that the coefficients in the Fourier series expansion of a function in Hermite polynomials can in many cases provide useful information about the distance of the zeros of the function from the real line.
Tur\'an also made the important observation that the expansion of $\Xi(t)$ in Hermite polynomials has a rather nice structure, being expressible in the form \begin{equation} \label{eq:hermite-expansion-intro} \Xi(t) = \sum_{n=0}^\infty (-1)^n b_{2n} H_{2n}(t) \end{equation} in which, he pointed out, the coefficients $b_{2n}$ are given by the formula\footnote{Actually Tur\'an's formula in \cite{turan1959} appears to contain a small numerical error, differing from \eqref{eq:turan-coeff-formula} by a factor of $\frac{\pi}{2}$.} \begin{equation} \label{eq:turan-coeff-formula} b_{2n} = \frac{1}{2^{2n} (2n)!} \int_{-\infty}^\infty x^{2n} e^{-\frac{x^2}{4}} \Phi(x)\,dx, \end{equation} and in particular are positive numbers.
Note that the Hermite polynomials have the symmetry $H_n(-x) = (-1)^n H_n(x)$, so, as with the case of the Taylor expansion \eqref{eq:riemannxi-taylor}, the presence of only even-indexed coefficients in \eqref{eq:hermite-expansion-intro} is a manifestation of the functional equation \eqref{eq:theta-functional-equation}, and hence serves as another indication that the expansion \eqref{eq:hermite-expansion-intro} is a somewhat natural one to consider. (Of course, the same would be true for any other family of even functions; this is obviously a weak criterion for naturalness.)
Tur\'an focused most of his attention on Hermite expansions of polynomials rather than of entire functions like $\Xi(t)$. His ideas on locating polynomial zeros using knowledge of the coefficients in their Hermite expansions appear to have been quite influential, and have inspired many subsequent fruitful investigations into the relationship between the expansion of a polynomial in Hermite polynomials and other orthogonal polynomial families, and the location of the zeros of the polynomial. See the papers \cite{bates, bleecker-csordas, iserles-norsett1987, iserles-norsett1988, iserles-norsett1990, iserles-saff, piotrowski, schmeisser}.
By contrast, Tur\'an's specific observation about the expansion \eqref{eq:hermite-expansion-intro} of $\Xi(t)$ does not seem to have led to any meaningful follow-up work. We are not aware of any studies of the behavior of the coefficients $b_{2n}$, nor of any attempts to determine whether the Hermite polynomials are the only---or even the most natural---family of polynomials in which it is worthwhile to expand the Riemann xi function (but see \secref{sec:intro-relatedwork} for discussion of some related literature).
\section[Our new results]{Our new results: Tur\'an's program revisited and extended; expansion of $\Xi(t)$ in new orthogonal polynomial bases}
This paper can be thought of as a natural continuation of the program of research initiated by Tur\'an in his 1950 address. One full chapter---Chapter~\ref{ch:hermite}---is dedicated to the study of the Hermite expansion \eqref{eq:hermite-expansion-intro}, answering several questions that arise quite naturally from Tur\'an's work and that have not yet been addressed in the literature. For example, in Theorem~\ref{thm:hermite-coeff-asym} we derive an asymptotic formula for the coefficients $b_{2n}$.
It is however in later chapters that it will be revealed that Tur\'an's vision of understanding the Riemann xi function by studying its expansion in Hermite polynomials was too narrow in its scope, since it turns out that there is a wealth of new and interesting results related to the notion of expanding $\Xi(t)$ in \emph{different} families of orthogonal polynomials. Two very specific orthogonal polynomial families appear to suggest themselves as being especially natural and possessing of excellent properties, and it is those that are conceptually the main focus of this paper, being the subject of Chapters~\ref{ch:fn-expansion}--\ref{ch:gn-expansion}. These families are the \firstmention{Meixner-Pollaczek polynomials} $P_n^{(\lambda)}(x;\phi)$ with the specific parameter values $\phi=\pi/2$, $\lambda=\frac34$; and the \firstmention{continuous Hahn polynomials} $p_n(x;a,b,c,d)$ with the specific parameter values $a=b=c=d=\frac34$. We denote these families of polynomials by $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$, respectively; they are given explicitly by the hypergeometric formulas \begin{align} \label{eq:fn-def-intro} f_n(x) &= \frac{(3/2)_n}{n!} i^n {}_2F_1\left(-n, \frac34+ix; \frac32; 2 \right), \\ \label{eq:gn-def-intro} g_n(x) &= i^n (n+1) \ {}_3F_2\left(-n,n+2,\frac34+ix;\frac32,\frac32; 1\right) \end{align} (where $(3/2)_n$ is a Pochhammer symbol), and form systems of polynomials that are orthogonal with respect to the weight functions
$\left|\Gamma\left(\frac34+ix\right)\right|^2$ and
$\left|\Gamma\left(\frac34+ix\right)\right|^4$ on $\mathbb{R}$, respectively.
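For concreteness, the hypergeometric formulas \eqref{eq:fn-def-intro}--\eqref{eq:gn-def-intro} and the stated orthogonality are easy to probe numerically. The sketch below is an illustration only; the choice of indices, the quadrature, and the use of \texttt{mpmath}'s \texttt{hyp2f1}/\texttt{hyp3f2} (which evaluate the terminating hypergeometric series exactly as polynomials) are assumptions of the illustration. It evaluates $f_n$ and $g_n$ and checks that two inner products of polynomials of different degrees vanish up to round-off.
\begin{verbatim}
# Sketch: evaluate f_n and g_n from their hypergeometric formulas and check a couple
# of instances of the stated orthogonality relations numerically.
from mpmath import mp, mpf, mpc, rf, factorial, hyp2f1, hyp3f2, gamma, quad, fabs

mp.dps = 30
I = mpc(0, 1)

def f(n, x):
    # f_n(x) = (3/2)_n / n! * i^n * 2F1(-n, 3/4 + ix; 3/2; 2); real for real x
    val = rf(mpf(3)/2, n)/factorial(n) * I**n * hyp2f1(-n, mpf(3)/4 + I*x, mpf(3)/2, 2)
    return val.real

def g(n, x):
    # g_n(x) = i^n (n+1) * 3F2(-n, n+2, 3/4 + ix; 3/2, 3/2; 1); real for real x
    val = I**n * (n + 1) * hyp3f2(-n, n + 2, mpf(3)/4 + I*x, mpf(3)/2, mpf(3)/2, 1)
    return val.real

def wf(x):   # weight |Gamma(3/4 + ix)|^2 for the f_n
    return fabs(gamma(mpf(3)/4 + I*x))**2

def wg(x):   # weight |Gamma(3/4 + ix)|^4 for the g_n
    return fabs(gamma(mpf(3)/4 + I*x))**4

if __name__ == "__main__":
    # inner products of polynomials of different degrees should vanish numerically
    print(quad(lambda x: f(1, x)*f(3, x)*wf(x), [-mp.inf, mp.inf]))
    print(quad(lambda x: g(0, x)*g(2, x)*wg(x), [-mp.inf, mp.inf]))
\end{verbatim}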
As our analysis will show, the expansions of $\Xi(t)$ in the polynomial families $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$ have forms that are pleasingly similar to the Hermite expansion \eqref{eq:hermite-expansion-intro}, namely \begin{align} \label{eq:fn-expansion-intro} \Xi(t) &= \sum_{n=0}^\infty (-1)^n c_{2n} f_{2n}\left(\frac{t}{2}\right), \\ \label{eq:gn-expansion-intro} \Xi(t) &= \sum_{n=0}^\infty (-1)^n d_{2n} g_{2n}\left(\frac{t}{2}\right), \end{align} where, importantly, the coefficients $c_{2n}$ and $d_{2n}$ again turn out to be positive numbers. Much more than this can be said, and in Chapters~\ref{ch:fn-expansion}--\ref{ch:gn-expansion} we undertake a comprehensive analysis of the meaning of the expansions \eqref{eq:fn-expansion-intro}--\eqref{eq:gn-expansion-intro}, the relationship between them, and the behavior of the coefficients $c_{2n}$ and $d_{2n}$. Among other results, we will prove that the coefficients satisfy the two asymptotic formulas \begin{align} \label{eq:intro-fncoeff-asym} c_{2n} &\sim 16 \sqrt{2} \pi^{3/2}\, \sqrt{n} \,\exp\left(-4\sqrt{\pi n}\right), \\ \label{eq:intro-gncoeff-asym}
d_{2n} & \sim \left(\frac{128\times 2^{1/3} \pi^{2/3}e^{-2\pi/3}}{\sqrt{3}}\right) n^{4/3} \exp\left(-3(4\pi)^{1/3} n^{2/3}\right) \end{align} as $n\to\infty$. See Theorems~\ref{THM:FN-COEFF-ASYM} and~\ref{thm:gn-coeff-asym} for precise statements, including explicit rate of convergence estimates.
There are many other results. What follows is a brief summary of the main results proved in each chapter.
\begin{itemize}
\item \textbf{Chapter~\ref{ch:hermite}:}
\begin{itemize}
\item We prove a theorem (Theorem~\ref{THM:HERMITE-EXPANSION} in \secref{sec:hermite-mainresults}) on the existence of the Hermite expansion, including the fact that the expansion converges throughout the complex plane and an effective rate of convergence estimate.
\item We prove an asymptotic formula for the coefficients $b_{2n}$ (Theorem~\ref{thm:hermite-coeff-asym} in \secref{sec:hermite-asym}).
\item We prove a theorem (Theorem~\ref{thm:hermite-poisson-polya} in \secref{sec:hermite-poissonpolya}) that reveals a connection between Tur\'an's ideas on the Hermite expansion and the separate thread of research on the topic of the De Bruijn-Newman constant described in the previous section. The idea is that the so-called P\'olya-De Bruijn flow---the one-parameter family of approximations to the Riemann xi function obtained by introducing the factor $e^{\lambda x^2}$ to the Fourier transform in \eqref{eq:riemannxi-fouriertrans}---shows up in a natural way also when taking the Hermite expansion \eqref{eq:hermite-expansion-intro} and using it to separately construct a family of approximations inspired by the standard construction of Poisson kernels in the theory of orthogonal polynomials.
\end{itemize}
\item \textbf{Chapter~\ref{ch:fn-expansion}:} \begin{itemize} \item We develop the basic theory of the expansion \eqref{eq:fn-expansion-intro} of $\Xi(t)$ in the polynomials $f_n$, deriving formulas for the coefficients, showing that they alternate in sign, and proving that the expansion converges throughout the complex plane, including an effective rate of convergence estimate (Theorem~\ref{THM:FN-EXPANSION} in \secref{sec:fnexp-mainresults}).
\item We prove the asymptotic formula given in \eqref{eq:intro-fncoeff-asym} above for the coefficients $c_{2n}$ (Theorem~\ref{THM:FN-COEFF-ASYM} in \secref{sec:fnexp-mainresults}).
\item We study the Poisson flow associated with the $f_n$-expansion, by analogy with the results of Chapter~\ref{ch:hermite}, and show that this flow is the Fourier transform of a family of functions with compact support; that it evolves according to an interesting dynamical law---a differential difference equation; and that, in contrast to the Poisson flow associated with the Hermite expansion, this flow does not preserve the reality of zeros of a polynomial in either direction of the time parameter.
\end{itemize}
\item \textbf{Chapter~\ref{ch:radial}:} \begin{itemize}
\item We develop an alternative point of view that reinterprets the $f_n$-expansion \eqref{eq:fn-expansion-intro} developed in Chapter~\ref{ch:fn-expansion} as arising (through the action of the Mellin transform) from an expansion of the elementary function \begin{equation*} \qquad \frac{d^2}{dr^2}\left( \frac{r}{4}\coth(\pi r)\right) = -\frac{\pi}{2}\frac{1}{\sinh^2(\pi r)} + \frac{\pi^2 r}{2} \frac{\cosh(\pi r)}{\sinh^3(\pi r)}, \end{equation*} in an orthogonal basis of eigenfunctions of the radial Fourier transform in $\mathbb{R}^3$, a family of functions which can be defined in terms of the Laguerre polynomials $L_n^{1/2}(x)$.
\item We introduce and study the properties of several more special functions, including a function $\tilde{\nu}(t)$, defined as a certain integral transform of the function $\omega(x)$, that is shown to be a generating function for the coefficient sequence $c_n$, and will later play a key role in Chapter~\ref{ch:gn-expansion}.
\end{itemize}
\item \textbf{Chapter~\ref{ch:gn-expansion}:} \begin{itemize} \item We develop the basic theory of the expansion of $\Xi(t)$ in the polynomials $g_n$, deriving formulas for the coefficients, showing that they alternate in sign, and proving that the expansion converges throughout the complex plane, including an effective rate of convergence estimate. (Theorem~\ref{THM:GN-EXPANSION} in \secref{sec:gnexp-mainresults}).
\item We prove the asymptotic formula given in \eqref{eq:intro-gncoeff-asym} above for the coefficients $d_{2n}$ (Theorem~\ref{thm:gn-coeff-asym} in \secref{sec:gnexp-mainresults}).
\item We show in Sections~\ref{sec:gnexp-chebyshev}--\ref{sec:gnexp-mellin} that, analogously to the results of Chapter~\ref{ch:radial}, the $g_n$-expansion also affords a reinterpretation as arising, through the Mellin transform, from the expansion of the function $\tilde{\nu}(t)$ introduced in Chapter~\ref{ch:radial} in yet another family of orthogonal polynomials, the Chebyshev polynomials of the second kind.
\end{itemize}
\item \textbf{Chapter~\ref{ch:misc}:}
This chapter contains a few additional results that enhance and supplement the developments in the earlier chapters.
\begin{itemize}
\item We apply the asymptotic analysis techniques we developed in Chapter~\ref{ch:hermite} to prove an asymptotic formula for the Taylor coefficients $a_{2n}$ of the Riemann xi function (Theorem~\ref{thm:riemannxi-taylorcoeff-asym} in \secref{sec:xi-taylor-asym}).
\item We study the function $\tilde{\omega}(x)$, a ``centered'' version of the function $\omega(x)$ that is first introduced in \secref{sec:rad-centered-recbal}. We show that $\tilde{\omega}(x)$ relates to the expansion \eqref{eq:fn-expansion-intro} in several interesting ways, and give an explicit description of its sequence of Taylor coefficients (Theorem~\ref{thm:omega-tilde-taylor} in \secref{sec:omega-tilde-properties}) in terms of a recently studied integer sequence. \end{itemize}
\item \textbf{Appendix~\ref{appendix:orthogonal}:}
This appendix contains a summary of mostly known properties of several families of orthogonal polynomials. In \secref{sec:orth-fngn-relation} we prove two new summation identities relating the two polynomial families $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$.
\end{itemize}
\section{Previous work involving the polynomials $f_n$}
\label{sec:intro-relatedwork}
Our work on the Hermite expansion of the Riemann xi function is, as mentioned above, a natural continuation of Tur\'an's work, and also relates to the existing literature on the De Bruijn-Newman constant. By contrast, our results on the expansion of the Riemann xi function in the polynomial families $f_n$ and $g_n$ in Chapters~\ref{ch:fn-expansion}--\ref{ch:gn-expansion} do not appear to follow up on any established line of research. It seems worth mentioning however that the polynomials $f_n$ did in fact make an appearance in a few earlier works in contexts involving the Riemann zeta and xi functions.
The earliest such work we are aware of is the paper by Bump and Ng \cite{bump-ng}, which discusses polynomials that are (up to a trivial reparametrization) the polynomials $f_n$ in connection with some Mellin transform calculations related to the zeta function. The follow-up papers by Bump et al.~\cite{bump-etal} and Kurlberg \cite{kurlberg} discuss these polynomials further, in particular interpreting their property of having only real zeros in terms of a phenomenon that the authors term the ``local Riemann hypothesis.'' The idea of using these polynomials as a basis in which to expand the Riemann xi function (or any other function) does not appear in these papers, but they seem nonetheless to be the first works that contain hints that the polynomials $f_n$ may hold some significance for analytic number theory.
In another paper \cite{kuznetsov2} (see also \cite{kuznetsov1}), Kuznetsov actually does consider an expansion in the polynomial basis $f_n(t/2)$---the same basis we use for our expansion of $\Xi(t)$---of a modified version of the Riemann xi function, namely the function $e^{-\pi t/4}\Xi(t)$, and finds formulas for the coefficients in the expansion in terms of the Taylor coefficients of an elementary function. Kuznetsov's result gives yet more clues as to the special role played in the theory of the Riemann xi function by the polynomials $f_n$. It is however unclear to us how his results relate to ours.
Finally, in a related direction, Inoue, apparently motivated by the work of Kuznetsov, studies in a recent preprint \cite{inoue} the expansion of the completed zeta function $\pi^{-s/2}\Gamma(s/2)\zeta(s)$ in the polynomials $f_n(t/2)$, and proves convergence of the expansion in the critical strip.
\section{How to read this paper}
The main part of this paper consists of Chapters~\ref{ch:hermite}--\ref{ch:gn-expansion}. These chapters are arranged in two conceptually distinct parts: Chapter~\ref{ch:hermite}, which deals with the Hermite expansion of the Riemann xi function and its connection to the De Bruijn-Newman constant, forms the first part; and Chapters~\ref{ch:fn-expansion}--\ref{ch:gn-expansion}, which develop the theory of the expansion of the Riemann xi function in the orthogonal polynomial families $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$, form the second. The second part is largely independent of the first, so it would be practical for a reader to start reading directly from Chapter~\ref{ch:fn-expansion} and only refer back to Chapter~\ref{ch:hermite} as needed on a few occasions.
Following those chapters, we prove some additional results in Chapter~\ref{ch:misc}, and conclude in Chapter~\ref{ch:summary} with some final remarks.
The work makes heavy use of known properties of several classical, and less classical, families of orthogonal polynomials: the Chebyshev polynomials of the second kind, Hermite polynomials, Laguerre polynomials, Meixner-Pollaczek polynomials, and continuous Hahn polynomials. Appendix~\ref{appendix:orthogonal} contains reference sections summarizing the relevant properties of each of these families, and ends with a section in which we prove a new pair of identities relating the polynomial families $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$.
We assume the reader is familiar with the basic theory of orthogonal polynomials, as described, e.g., in Chapters~2--3 of Szeg\H{o}'s classical book \cite{szego} on the subject. We also assume familiarity with standard special functions such as the \nobreak{Euler} gamma function $\Gamma(s)$ and Gauss hypergeometric function ${}_2F_1(a,b;c;z)$ (see~\cite{andrews-askey-roy}), and of course with basic results and facts about the Riemann zeta function~\cite{edwards}. For background on Mellin transforms, of which we make extensive use, the reader is invited to refer to \cite{paris-kaminski}.
\section{The basic convergence result for the Hermite expansion}
\label{sec:hermite-mainresults}
Following Tur\'an \cite{turan1959}, we define numbers $(b_n)_{n=0}^\infty$ by \begin{equation} \label{eq:hermite-coeffs-def} b_n = \frac{1}{2^n n!} \int_{-\infty}^\infty x^n e^{-\frac{x^2}{4}} \Phi(x)\,dx \end{equation} with $\Phi(x)$ defined in \eqref{eq:phix-def}. Since $\Phi(x)$ is even and positive, we see that $b_{2n+1} = 0$ and $b_{2n} >0$ for all $n\ge0$. The following result is a more precise version of Tur\'an's remarks in \cite{turan1952} about the expansion of $\Xi(t)$ in Hermite polynomials.
\begin{thm}[Hermite expansion of $\Xi(t)$] \label{THM:HERMITE-EXPANSION} The Riemann xi function has the infinite series representation \begin{equation} \label{eq:hermite-expansion} \Xi(t) = \sum_{n=0}^\infty (-1)^n b_{2n} H_{2n}(t), \end{equation} which converges uniformly on compacts for all $t\in\mathbb{C}$. More precisely, for any compact set $K\subset \mathbb{C}$ there exist constants $C_1, C_2>0$ depending on $K$ such that \begin{equation} \label{eq:hermite-expansion-errorbound}
\left|\Xi(t) - \sum_{n=0}^N (-1)^n b_{2n} H_{2n}(t) \right| \leq C_1 e^{-C_2 N \log N} \end{equation} holds for all $N\ge1$ and $t\in K$. \end{thm}
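Although not needed for the proofs, the statement is easy to probe numerically. The sketch below (illustrative only; the truncation of the series for $\Phi$, the precision, and the sample point are arbitrary choices) computes the coefficients $b_{2n}$ from \eqref{eq:hermite-coeffs-def} by quadrature and checks that the partial sums of \eqref{eq:hermite-expansion} approach $\Xi(t)$.
\begin{verbatim}
# Sketch: compute b_{2n} from the integral definition and check that partial sums of
# the Hermite expansion approach Xi(t).  Precision and truncation levels are arbitrary.
from mpmath import mp, mpf, mpc, quad, exp, pi, gamma, zeta, hermite, factorial

mp.dps = 40

def Phi(x, terms=20):
    return 2*sum((2*pi**2*k**4*exp(mpf(9)/2*x) - 3*pi*k**2*exp(mpf(5)/2*x))
                 * exp(-pi*k**2*exp(2*x)) for k in range(1, terms + 1))

def b(n):
    # b_n = 1/(2^n n!) int_R x^n e^{-x^2/4} Phi(x) dx  (Phi is even; n even here)
    integral = 2*quad(lambda x: x**n * exp(-x**2/4) * Phi(x), [0, mp.inf])
    return integral / (2**n * factorial(n))

def Xi_direct(t):
    s = mpf('0.5') + mpc(0, 1)*t
    return (mpf('0.5')*s*(s - 1)*pi**(-s/2)*gamma(s/2)*zeta(s)).real

def Xi_hermite(t, N):
    # partial sum  sum_{n=0}^N (-1)^n b_{2n} H_{2n}(t)
    return sum((-1)**n * b(2*n) * hermite(2*n, t) for n in range(N + 1))

if __name__ == "__main__":
    t = mpf(3)
    for N in (2, 5, 10):
        print(N, Xi_hermite(t, N) - Xi_direct(t))   # differences shrink rapidly
\end{verbatim}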
We note for the record the unsurprising fact that the coefficients $b_{2n}$ can also be computed as Fourier coefficients of $\Xi(t)$ associated with the orthonormal basis of Hermite polynomials in the function space $L^2(\mathbb{R},e^{-t^2}\,dt)$.
\begin{corollary} \label{cor:hermite-coeff-innerproduct} An alternative expression for the coefficients $b_{2n}$ is \begin{equation} \label{eq:hermite-coeff-innerproduct} b_{2n} = \frac{(-1)^n}{\sqrt{\pi}2^{2n}(2n)!} \int_{-\infty}^\infty \Xi(t) e^{-t^2} H_{2n}(t)\,dt. \end{equation} \end{corollary}
We give the easy proof of Corollary~\ref{cor:hermite-coeff-innerproduct} at the end of the next section following the proof of Theorem~\ref{THM:HERMITE-EXPANSION}.
\section{Preliminaries}
Recall the easy fact that the series \eqref{eq:omegax-def}--\eqref{eq:phix-def} defining $\omega(x)$ and $\Phi(x)$ are asymptotically dominated by their first summands as $x\to\infty$, and that this remains true if the series are summed starting at $n=2$. This leads to the following standard estimates (with the second one also relying on \eqref{eq:theta-functional-equation}), which will be used several times in this and the following chapters.
\begin{lem} The functions $\omega(x)$ and $\Phi(x)$ satisfy the asymptotic estimates \begin{align} \label{eq:omegax-asym-xinfty} \omega(x) &= O\left(x^2 e^{-\pi x}\right) & \textrm{as }x\to\infty, \\ \label{eq:omegax-asym-xzero} \omega(x) &= O\left(x^{-5/2} e^{-\pi/x}\right) & \textrm{as }x\to 0+,\negphantom{,} \\ \label{eq:omegaxdiff-asym-xinfty} \omega(x) - (2\pi^2 x^2 - 3\pi x)e^{-\pi x} & = O\left(x^2 e^{-4\pi x}\right) & \textrm{as }x\to\infty, \\ \label{eq:phix-asym-xinfty} \Phi(x) & = O\left(\exp\left(\frac{9x}{2}-\pi e^{2x}\right)\right) & \textrm{as }x\to \infty, \end{align} and \begin{align} \label{eq:phixdiff-asym-xinfty} \Phi(x) - 2\left(2\pi^2 e^{9x/2}-3\pi e^{5x/2}\right) & \exp\left(-\pi e^{2x}\right) \\ & = O\left(\exp\left(\frac{9x}{2}-4\pi e^{2x}\right)\right) & \textrm{as }x\to \infty, \nonumber \end{align} \end{lem}
\section{Proof of Theorem~\ref{THM:HERMITE-EXPANSION}}
\label{sec:hermite-proofmain}
We start by deriving an easy (and far from sharp, but sufficient for our purposes) bound on the rate of growth of $H_n(t)$ as a function of $n$.
\begin{lem} \label{lem:hermite-easybound} The Hermite polynomials satisfy the bound \begin{equation} \label{eq:hermite-easybound}
|H_n(t)|\leq C \exp\left(\frac34 n \log n\right) \end{equation} for all $n\ge 1$, uniformly as $t$ ranges over any compact set $K\subset \mathbb{C}$, with $C>0$ being a constant that depends on $K$ but not on $n$. \end{lem}
\begin{proof}
Fix the compact set $K$, and denote $M=2\max_{t\in K} |t|$. Let $N_0$ be a positive integer whose value will be fixed shortly. Let $C>0$ be a constant for which \eqref{eq:hermite-easybound} holds for all $t\in K$ and $1\le n\le N_0$. We prove by induction that the inequality holds for all $n\ge 1$, using as the induction base the case $n=N_0$. For the inductive step, let $n\ge N_0$ and assume that we have proved all cases up to the $n$th case. Then for $t\in K$ we can bound $|H_{n+1}(t)|$ using the recurrence relation \eqref{eq:hermite-recurrence} for the Hermite polynomials, which, together with the inductive hypothesis, gives that \begin{align*}
|H_{n+1}(t)| &\leq 2 |t| \cdot|H_n(t)| + 2n |H_{n-1}(t)| \\ & \leq MC \exp\left(\frac34 n \log n \right) + 2C n \exp\left(\frac34 (n-1)\log (n-1)\right) \\ & = C \exp\left(\frac34 n \log n + \log (M) \right) + C \exp\left(\frac34 (n-1) \log (n-1) + \log(2n) \right). \end{align*} We see that it is easy to complete the induction by fixing $N_0$ to be large enough as a function of $M$, specifically setting, say, $N_0 = \max(128,\lceil(2M)^{4/3}\rceil) $. With this definition we then get (remembering the assumption $n\ge N_0$) that \begin{align*}
|H_{n+1}(t)| & \leq C \exp\left(\frac34 (n+1) \log n -\log 2 \right) + C \exp\left(\frac34 (n+1) \log (n-1) -\log 2 \right) \\ & \leq \frac{C}{2} \exp\left( \frac34 (n+1)\log (n+1) \right) + \frac{C}{2} \exp\left( \frac34 (n+1)\log (n+1) \right) \\ & = C \exp\left( \frac34 (n+1)\log (n+1) \right), \end{align*} which finishes the proof. \end{proof}
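A quick numerical sanity check of the lemma (illustrative only; the disc radius, the sampling of the boundary circle, and the range of $n$ are arbitrary choices) is to compare $\max_{|t|=R}|H_n(t)|$ with $\exp\left(\frac34 n\log n\right)$ and observe that the ratio stays bounded:
\begin{verbatim}
# Sketch: compare max_{|t|=R} |H_n(t)| with exp((3/4) n log n); by the maximum
# principle this also bounds |H_n| on the closed disc of radius R.
from mpmath import mp, mpf, mpc, pi, exp, log, fabs, hermite

mp.dps = 25

def max_on_circle(n, radius=5, samples=60):
    # crude sampled maximum of |H_n(t)| over the circle |t| = radius
    return max(fabs(hermite(n, radius*exp(mpc(0, 2*pi*k/samples))))
               for k in range(samples))

if __name__ == "__main__":
    for n in (5, 10, 20, 40):
        crude_bound = exp(mpf(3)/4 * n * log(n))
        print(n, max_on_circle(n)/crude_bound)   # the ratio stays bounded in n
\end{verbatim}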
Define the Lambert $W$-function to be the unique increasing function $W:[0,\infty)\to [0,\infty)$ satisfying the equation \begin{equation*} W(xe^x) = x. \end{equation*}
In what follows, we will make use of the following asymptotic formula for $W(x)$ for large $x$. The result is a weaker version of eq.~(4.19) in \cite{corless-etal}. \begin{thm}[Corless et al. \cite{corless-etal}] The asymptotic behavior of $W(x)$ as $x\to\infty$ is given by \begin{equation} \label{eq:lambert-w-asym} W(x) = \log x - \log\log x + \frac{\log\log x}{\log x} + O\left( \left(\frac{\log\log x}{\log x}\right)^2 \right). \end{equation} \end{thm}
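The following sketch (using \texttt{mpmath}'s \texttt{lambertw}; the sample points are arbitrary) illustrates the quality of the expansion \eqref{eq:lambert-w-asym}: the error is comparable to the stated $O\left((\log\log x/\log x)^2\right)$ term.
\begin{verbatim}
# Sketch: compare lambertw(x) with the first terms of the asymptotic expansion
#   W(x) ~ log x - log log x + (log log x)/(log x).
from mpmath import mp, mpf, lambertw, log

mp.dps = 25

def W_asymptotic(x):
    L1, L2 = log(x), log(log(x))
    return L1 - L2 + L2/L1

if __name__ == "__main__":
    for x in (mpf('1e2'), mpf('1e4'), mpf('1e8'), mpf('1e16')):
        w = lambertw(x).real        # principal branch; real for x >= 0
        err = w - W_asymptotic(x)
        print(x, w, err, (log(log(x))/log(x))**2)   # err is of the size of the last column
\end{verbatim}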
The Lambert $W$-function and its asymptotics will be quite important for our analysis. A hint of why this is so can already be glimpsed in the proof of the following technical lemma.
\begin{lem} \label{lem:hermite-easybound2} For any number $B\ge 1$ there is a constant $C>0$ such that \begin{align} \label{eq:hermite-easybound2} \int_0^\infty x^n & \exp\left(-B e^x\right)\,dx
\leq \exp\left[ n\log\log n - \frac{n \log\log n}{\log n} - (\log B+1) \frac{n}{\log n} + C \frac{n (\log\log n)^2}{(\log n)^2} \right]
\end{align} for all $n\ge 3$. \end{lem}
\begin{proof} Denote the integral on the left-hand side of \eqref{eq:hermite-easybound2} by $I_n$. It is convenient to rewrite this integral as \begin{equation*} I_n = \int_0^\infty \exp(\psi_n(x))\,dx, \end{equation*} where we denote \begin{equation} \label{eq:hermite-psindef} \psi_n(x) = n\log x - B e^x. \end{equation} To obtain an effective bound on this integral, it is natural to seek the point where $\psi_n(x)$ is maximized. Examining its derivative $\psi_n'(x) = \frac{n}{x}-B e^x$, we see that it is positive for $x$ positive and close to $0$, negative for large values of $x$, and crosses zero when \begin{equation*} \frac{n}{x}-B e^x = 0 \quad \iff \quad x e^x = \frac{n}{B}, \end{equation*} an equation that has a unique solution, which we denote $x_n$, that is expressible in terms of the Lambert $W$-function, namely as \begin{equation*} x_n = W\left(\frac{n}{B}\right). \end{equation*} Thus $x_n$ is the unique global maximum point of $\psi_n(x)$. By \eqref{eq:lambert-w-asym}, the asymptotic behavior of $x_n$ for large $n$ (with $B$ fixed) is given by \begin{align} \label{eq:lambertw-value-asym} x_n &= \log\left(\frac{n}{B}\right) - \log\log\left(\frac{n}{B}\right) + \frac{\log\log\left(\frac{n}{B}\right)}{\log\left(\frac{n}{B}\right)} + O\left( \left( \frac{\log\log\left(\frac{n}{B}\right)}{\log\left(\frac{n}{B}\right)} \right)^2 \right) \\ \nonumber & =
(\log n - \log B) - \left(\log\log n - \frac{\log B}{\log n} + O\left( \left(\frac{\log B}{\log n}\right)^2\right) \right) \\ \nonumber & \qquad\qquad + \frac{ \left(\log\log n - \frac{\log B}{\log n} + O\left( \left(\frac{\log B}{\log n}\right)^2\right) \right) }{\log n - \log B} + O\left( \left( \frac{\log\log n}{\log n} \right)^2 \right)
\\ \nonumber & = \log n - \log\log n - \log B + \frac{\log B}{\log n} + \frac{\log\log n}{\log n} + O\left( \left( \frac{\log\log n}{\log n}\right)^2 \right) \quad (n\to\infty). \end{align} Denote $A_n = \psi_n(x_n)$, and observe that we can use the defining relation $x_n e^{x_n} = \frac{n}{B}$ for $x_n$ to rewrite $A_n$ in the form \begin{align} \label{eq:psin-maxvalue-lambert} A_n &= n \log x_n - B e^{x_n} = n \log \left(x_n e^{x_n}\right) - n x_n - \frac{B}{x_n} \left(x_n e^{x_n}\right) \\ \nonumber & = n\log\left(\frac{n}{B}\right) - n x_n - \frac{n}{x_n}
= n\left(\log n - \log B - x_n - \frac{1}{x_n} \right). \end{align} This form for $A_n$ makes it straightforward to derive an asymptotic formula for $A_n$: first, estimate the term $1/x_n$ separately as \begin{align} \label{eq:lambertw-valuerecip-asym} \frac{1}{x_n} &= \frac{1}{\log n - \log\log n -\log B + O\left(\frac{\log\log n}{\log n}\right)} \\ \nonumber &= \frac{1}{\log n}\left(1-\frac{\log\log n}{\log n} - \frac{\log B}{\log n} + O\left(\frac{\log\log n}{\log n}\right) \right)^{-1} = \frac{1}{\log n} + O\left(\frac{\log\log n}{(\log n)^2}\right). \end{align} Then inserting \eqref{eq:lambertw-value-asym} and \eqref{eq:lambertw-valuerecip-asym} into \eqref{eq:psin-maxvalue-lambert} gives that \begin{equation} \label{eq:an-asym-largedevconst} A_n = n\log\log n - \frac{n \log\log n}{\log n} - (\log B+1)\frac{n}{\log n} + O\left( \frac{n (\log\log n)^2}{(\log n)^2} \right) \quad (n\to\infty). \end{equation} We can now use these estimates to bound the integral $I_n$. First, split it into two parts, writing it as $I_n = I_n^{(1)} + I_n^{(2)}$, where we denote \begin{align*} I_n^{(1)} = \int_0^{2\log n} \exp(\psi_n(x))\,dx, \qquad \quad
I_n^{(2)} = \int_{2\log n}^\infty \exp(\psi_n(x))\,dx. \end{align*} Since $\psi_n(x)\leq A_n$ for all $x> 0$, for the first integral we have the trivial bound \begin{align} \label{eq:int1bound-hermiteeasy} I_n^{(1)} &\leq 2\log n \cdot e^{A_n}
= \exp\left[ n\log\log n - \frac{n \log\log n}{\log n} - (\log B+1) \frac{n}{\log n} + O\left( \frac{n (\log\log n)^2}{(\log n)^2} \right) \right]. \end{align} To bound the second integral, observe that $\psi_n(x)$ is a concave function (since its second derivative is everywhere negative), so in particular it is bounded from above by its tangent line at $x=2\log n$; that is, we have the inequality \begin{equation*} \psi_n(x) \leq \psi_n(2\log n) + \psi_n'(2\log n)(x-2\log n) \qquad (x>0). \end{equation*} The constants $\psi_n(2\log n)$, $\psi_n'(2\log n)$ in this inequality satisfy, for $n$ large enough, \begin{align*} \psi_n(2\log n) & = n\log(2\log n) - B n^2 \leq -\frac{B}{2} n^2 \leq -\frac12 n^2, \\ \psi_n'(2\log n) & = \frac{n}{2\log n} - B n^2 \leq -\frac{B}{2} n^2 \leq -\frac12 n^2. \end{align*} This then implies that, again for large $n$, we have \begin{align} \label{eq:int2bound-hermiteeasy} I_n^{(2)} & \leq \int_{2\log n}^\infty \exp\left( \psi_n(2\log n) + \psi_n'(2\log n)(x-2\log n) \right)\,dx \\ \nonumber & = \exp\left(\psi_n(2\log n)\right) \int_0^\infty \exp\left(\psi_n'(2\log n)t\right)\,dt \\ \nonumber & = \frac{1}{-\psi_n'(2\log n)} \exp\left(\psi_n(2\log n)\right) \leq \frac{2}{n^2} e^{-n^2/2} = O(1). \end{align} Combining the two bounds \eqref{eq:int1bound-hermiteeasy} and \eqref{eq:int2bound-hermiteeasy} gives the claimed bound \eqref{eq:hermite-easybound2}. \end{proof}
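A numerical illustration of the lemma (with $B=\pi$ and a few arbitrary values of $n$; the quadrature is split at the maximum point $x_n=W(n/B)$ only to help the integrator) compares $\log I_n$ with the exponent on the right-hand side of \eqref{eq:hermite-easybound2} with the $O(\cdot)$ term dropped:
\begin{verbatim}
# Sketch: compare log of I_n = int_0^inf x^n exp(-B e^x) dx with the exponent in the
# lemma's bound (big-O term dropped), for B = pi and a few n.
from mpmath import mp, mpf, quad, exp, log, lambertw, pi

mp.dps = 30
B = pi

def I(n):
    xn = lambertw(mpf(n)/B).real          # maximizer of the integrand's exponent
    return quad(lambda x: x**n * exp(-B*exp(x)), [0, xn, mp.inf])

def bound_exponent(n):
    n = mpf(n)
    return n*log(log(n)) - n*log(log(n))/log(n) - (log(B) + 1)*n/log(n)

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(n, log(I(n)), bound_exponent(n))   # log I_n stays below the bound
\end{verbatim}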
We are ready to prove \eqref{eq:hermite-expansion-errorbound}. First, consider the following slightly informal calculation that essentially explains how the expansion \eqref{eq:hermite-expansion} arises out of the definition \eqref{eq:hermite-coeffs-def} of the coefficients $b_n$. Recalling the formula \eqref{eq:hermite-genfun} for the generating function for the Hermite polynomials, we have that \begin{align} \label{eq:hermiteexp-informal-der} \sum_{n=0}^\infty (-1)^n b_{2n} H_{2n}(t) & = \sum_{n=0}^\infty i^n b_n H_n(t)
= \sum_{n=0}^\infty \frac{i^n}{2^n n!} \int_{-\infty}^\infty x^n e^{-\frac{x^2}{4}} \Phi(x)\,dx \cdot H_n(t) \\ \nonumber &= \int_{-\infty}^\infty \left(\sum_{n=0}^\infty \frac{i^n}{2^n n!} x^n H_n(t)\right) e^{-\frac{x^2}{4}} \Phi(x)\,dx \\ \nonumber &= \int_{-\infty}^\infty \exp\left(2t\cdot \frac{ix}{2} - \left(\frac{i x}{2}\right)^2\right) e^{-\frac{x^2}{4}} \Phi(x)\,dx
= \int_{-\infty}^\infty e^{itx} \Phi(x) \,dx = \Xi(t), \end{align} which is \eqref{eq:hermite-expansion}. Note that at the heart of this calculation is the simple identity \begin{equation} \label{eq:fourier-kernel-hermite} e^{itx} = e^{-\frac{x^2}{4}} \sum_{n=0}^\infty \frac{i^n x^n}{2^n n!} H_n(t), \end{equation} a trivial consequence of \eqref{eq:hermite-genfun}, which expands the Fourier transform integration kernel $e^{itx}$ as an infinite series in the Hermite polynomials. Thus, to get the more precise statement \eqref{eq:hermite-expansion-errorbound}, all that's left to do is to perform the same calculation a bit more carefully, using the results of Lemmas~\ref{lem:hermite-easybound} and~\ref{lem:hermite-easybound2} to get more explicit error bounds when summing this infinite series and integrating. Namely, using~\eqref{eq:fourier-kernel-hermite} we can estimate the left-hand side of \eqref{eq:hermite-expansion-errorbound} as \begin{align} \label{eq:hermite-expansion-proofchain}
\Bigg| \Xi(t) - & \sum_{n=0}^N (-1)^n b_{2n} H_{2n}(t) \Bigg| =
\left| \Xi(t) - \sum_{n=0}^{2N} i^n b_n H_n(t) \right| \\ \nonumber & =
\left| \int_{-\infty}^\infty \Phi(x) \left(e^{itx} - e^{-\frac{x^2}{4}}\sum_{n=0}^{2N} \frac{i^n x^n}{2^n n!} H_n(t) \right)\,dx
\right| \\ \nonumber & =
\left| \int_{-\infty}^\infty \Phi(x) e^{-\frac{x^2}{4}} \sum_{n=2N+1}^\infty \frac{i^n x^n}{2^n n!} H_n(t)\,dx
\right| \\ \nonumber & \leq \sum_{n=2N+1}^\infty \frac{1}{2^n n!} \left(\int_{-\infty}^\infty \Phi(x) e^{-\frac{x^2}{4}}
|x|^n \,dx
\right)|H_n(t)| \\ \nonumber & = \sum_{n=2N+1}^\infty \frac{1}{2^{n-1} n!} \left(\int_0^\infty \Phi(x) e^{-\frac{x^2}{4}} x^n \,dx
\right)|H_n(t)| \\ \nonumber & \leq \sum_{n=2N+1}^\infty \frac{1}{2^{n-1} n!} C \exp\left(\frac34 n \log n\right) \int_0^\infty \Phi(x) e^{-\frac{x^2}{4}} x^n \,dx, \end{align} for all $t$ ranging over some fixed compact set $K \subset \mathbb{C}$, and where in the last step we invoked Lemma~\ref{lem:hermite-easybound}, with $C$ denoting the positive constant given by that lemma (depending on the compact set $K$).
Now, since $\Phi(x) = O\left(\exp\left(-3 e^{2x}\right)\right)$ as $x\to\infty$ by \eqref{eq:phix-asym-xinfty}, we can use Lemma~\ref{lem:hermite-easybound2} with $B=3$ to bound the integral in the last sum, and therefore conclude that this sum in \eqref{eq:hermite-expansion-proofchain} is bounded from above by \begin{equation*} C \sum_{n=2N+1}^\infty \frac{1}{2^n n!} \exp \left(\frac34 n \log n\right) \times \frac{1}{2^n}\exp\left( n \log\log n - \frac{n\log\log n}{\log n} + O\left(\frac{n}{\log n}\right) \right). \end{equation*} By Stirling's formula this is $O\left(\exp\left(-\frac{1}{5} N \log N \right)\right)$, which is the bound we need. The proof of Theorem~\ref{THM:HERMITE-EXPANSION} is complete. \qed
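The identity \eqref{eq:fourier-kernel-hermite} that drives the computation is itself easy to verify numerically; the sketch below (sample point and truncation levels arbitrary) compares truncations of its right-hand side with $e^{itx}$.
\begin{verbatim}
# Sketch: numerical check of the kernel identity
#   e^{itx} = e^{-x^2/4} sum_{n>=0} (i x/2)^n / n! * H_n(t)
from mpmath import mp, mpf, mpc, exp, factorial, hermite, fabs

mp.dps = 30
I = mpc(0, 1)

def kernel_partial(t, x, N):
    return exp(-x**2/4) * sum((I*x/2)**n / factorial(n) * hermite(n, t)
                              for n in range(N + 1))

if __name__ == "__main__":
    t, x = mpf(2), mpf(3)
    for N in (5, 10, 20, 40):
        print(N, fabs(exp(I*t*x) - kernel_partial(t, x, N)))   # errors tend to zero
\end{verbatim}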
\begin{proof}[Proof of Corollary~\ref{cor:hermite-coeff-innerproduct}]
The Hermite polynomials form an orthogonal basis of the Hilbert space $L^2(\mathbb{R},e^{-t^2}\,dt)$. By Lemma~\ref{lem:hermite-easybound2} we also get an upper bound for the coefficients $b_{2n}$ (which will be superseded by a more precise asymptotic result in the next section, but is still useful), namely the statement that \begin{equation*} b_{2n} \le \frac{C}{2^{2n} (2n)!}\exp\left(2n \log\log (2n)\right) \end{equation*} for some constant $C>0$ and all $n\ge3$. Together with the fact that the squared $L^2$-norm of $H_n(t)$ is $\sqrt{\pi}2^n n!$ (see \eqref{eq:hermite-orthogonality}), this implies that the infinite series on the right-hand side of \eqref{eq:hermite-expansion} converges in the sense of the function space $L^2(\mathbb{R},e^{-t^2}\,dt)$ to an element of this space. Since $L^2$-convergence implies almost everywhere convergence along a subsequence, the $L^2$-limit must be equal to the pointwise limit, that is, the function $\Xi(t)$. Thus, the relation \eqref{eq:hermite-expansion} holds in the sense of $L^2$, and it follows that the coefficients in the expansion can be extracted in the standard way as inner products in the $L^2$-space, which (again because of \eqref{eq:hermite-orthogonality}) leads to the formula \eqref{eq:hermite-coeff-innerproduct}. \end{proof}
\section{An asymptotic formula for the coefficients $b_{2n}$}
\label{sec:hermite-asym}
We now refine our analysis of the Hermite expansion by deriving an asymptotic formula for the coefficients $b_{2n}$. These asymptotics are most simply expressed in terms of the Lambert $W$-function.
\begin{thm}[Asymptotic formula for the coefficients $b_{2n}$] \label{thm:hermite-coeff-asym} The coefficients $b_{2n}$ satisfy the asymptotic formula \begin{align} \label{eq:hermite-coeff-asym} b_{2n} & = \left(1+O\left(\frac{\log\log n}{\log n}\right)\right) \frac{\pi^{1/4}}{2^{4n-\frac52} (2n)!} \left(\frac{2n}{\log (2n)}\right)^{7/4} \\ \nonumber & \qquad\qquad \times \exp\left[ 2n\left(\log \left(\frac{2n}{\pi}\right) - W\left(\frac{2n}{\pi}\right) - \frac{1}{W\left(\frac{2n}{\pi}\right)} \right) -\frac{1}{16} W\left(\frac{2n}{\pi}\right)^2\right] \end{align} as $n\to\infty$. \end{thm}
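The formula is easy to compare with numerics. The sketch below (illustrative only; it recomputes $b_{2n}$ from \eqref{eq:hermite-coeffs-def} by quadrature, with arbitrary truncation and precision) prints the ratio of $b_{2n}$ to the main term of \eqref{eq:hermite-coeff-asym}; since the relative error decays only like $\log\log n/\log n$, the ratio approaches $1$ quite slowly.
\begin{verbatim}
# Sketch: ratio of b_{2n} (computed numerically) to the main term of the asymptotic
# formula of the theorem.  Convergence of the ratio to 1 is slow, as expected.
from mpmath import mp, mpf, quad, exp, log, pi, factorial, lambertw

mp.dps = 50

def Phi(x, terms=20):
    return 2*sum((2*pi**2*k**4*exp(mpf(9)/2*x) - 3*pi*k**2*exp(mpf(5)/2*x))
                 * exp(-pi*k**2*exp(2*x)) for k in range(1, terms + 1))

def b(two_n):
    integral = 2*quad(lambda x: x**two_n * exp(-x**2/4) * Phi(x), [0, mp.inf])
    return integral / (2**two_n * factorial(two_n))

def b_main_term(n):
    n = mpf(n)
    w = lambertw(2*n/pi).real
    prefactor = (pi**mpf('0.25') / (2**(4*n - mpf('2.5')) * factorial(2*n))
                 * (2*n/log(2*n))**mpf('1.75'))
    exponent = 2*n*(log(2*n/pi) - w - 1/w) - w**2/16
    return prefactor * exp(exponent)

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(n, b(2*n)/b_main_term(n))
\end{verbatim}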
The appearance of the non-elementary, implicitly-defined function $W(x)$ in the asymptotic formula \eqref{eq:hermite-coeff-asym} may make it somewhat difficult to use or gain intuition from, but with the help of the asymptotic formula \eqref{eq:lambert-w-asym} for the Lambert $W$-function, or its stronger version \cite[eq.~(4.19)]{corless-etal} mentioned above, we can extract the asymptotically dominant terms from inside the exponential to get an asymptotic formula involving more familiar functions (unfortunately, at a cost of having a much larger error term---but this seems like an unavoidable tradeoff that comes about as a result of the unusual asymptotic expansion of the Lambert $W$-function). For example, as an immediate corollary we get the following more explicit, but weaker, result.
\begin{corollary}[Asymptotic formula for the logarithm of the coefficients $b_{2n}$] We have the relation \begin{align} \label{eq:hermite-coeffasym-simplified} \log b_{2n} = -2n \log (2n) + 2n \log\log \left( \frac{2n}{\pi} \right) + O\left( n \right) \end{align} as $n\to\infty$. \end{corollary}
\begin{proof}[Proof of Theorem~\ref{thm:hermite-coeff-asym}] Define numbers $Q_n, r_n$ by \begin{align} \label{eq:qn-int-def} Q_n & = \int_0^\infty x^{2n} e^{-\frac{x^2}{4}} e^{\frac{5x}{2}}\left(e^{2x}-\frac{3}{2\pi}\right) \exp\left(-\pi e^{2x}\right)\,dx, \\ \label{eq:rn-int-def} r_n & = \int_0^\infty x^{2n} e^{-\frac{x^2}{4}} e^{\frac{5x}{2}} \sum_{m=2}^\infty \left(m^4 e^{2x}-\frac{3m^2}{2\pi}\right) \exp\left(-\pi m^2 e^{2x}\right)\,dx, \end{align} so that, by \eqref{eq:phix-def} and~\eqref{eq:hermite-coeffs-def}, the relation \begin{equation} \label{eq:bof2n-qn-rn} b_{2n} = \frac{\pi^2}{2^{2n-3} (2n)!}( Q_n + r_n ) \end{equation} holds. We will analyze the asymptotic behavior of $Q_n$ and then show that the contribution of $r_n$ is asymptotically negligible relative to that of $Q_n$.
\textbf{Part 1: analysis of $Q_n$ using Laplace's method.} Define a function \begin{equation*} f(x) = e^{-\frac{x^2}{4}} e^{\frac{5x}{2}}\left(e^{2x}-\frac{3}{2\pi}\right). \end{equation*} Then $Q_n$ can be rewritten in the form \begin{equation} \label{eq:qnint-rewritten} Q_n = \frac{1}{2^{2n}}\int_0^\infty f(x) \exp\left( \psi_{2n}(2x)\right)\,dx, \end{equation} where $\psi_{2n}(x)$ is defined in \eqref{eq:hermite-psindef}, with the specific parameter value $B=\pi$. This representation makes it possible to use Laplace's method to understand the asymptotic behavior of $Q_n$ as $n$ grows large. Proceeding as in the proof of Lemma~\ref{lem:hermite-easybound2}, we recall our observation that the function $\psi_{2n}(x)$ has a unique global maximum point at \begin{equation*} x_{2n} = W\left(\frac{2n}{\pi}\right). \end{equation*} Now let quantities $\alpha_n$, $\beta_n$, $\gamma_n$ be defined by \begin{align} \label{eq:alphan-hermite-def} \alpha_n & = \psi_{2n}(x_{2n}), \\ \label{eq:betan-hermite-def} \beta_n & = -\psi_{2n}''(x_{2n}), \\ \label{eq:gamman-hermite-def} \gamma_n & = f(x_{2n}/2). \end{align} Examining these quantities a bit more closely, note that $\alpha_n = A_{2n}$ in the notation used in the proof of Lemma~\ref{lem:hermite-easybound2} (again with the parameter $B=\pi$), so that, as in \eqref{eq:psin-maxvalue-lambert}, we have \begin{equation} \label{eq:alphan-simplified} \alpha_n = 2n\left(\log (2n) - \log \pi - x_{2n} - \frac{1}{x_{2n}} \right). \end{equation} For $\beta_n$ we have that \begin{equation} \label{eq:betan-simplified-asym} \beta_n = \frac{2n}{x_{2n}^2} + \pi e^{x_{2n}} = \frac{2n}{x_{2n}^2} + \frac{2n}{x_{2n}} = \left(1+O\left(\frac{1}{\log n}\right)\right) \frac{2n}{x_{2n}}, \end{equation} and for $\gamma_n$ we can write \begin{align} \label{eq:gamman-asym} \gamma_n & = \left(1+ O\left( e^{-x_{2n}}\right)\right)\exp\left(-\frac{1}{16} x_{2n}^2 + \frac94 x_{2n} \right)
= \left(1+ O\left( \frac{\log n}{n}\right)\right) \left( \frac{2n}{\pi x_{2n}} \right)^{9/4} \exp\left(-\frac{1}{16} x_{2n}^2 \right). \end{align} With these preparations, Laplace's method in its heuristic form predicts that the integral on the right-hand side of \eqref{eq:qnint-rewritten} is given approximately for large $n$ by the expression \begin{equation} \label{eq:laplacemethod-heuristic} \frac{\sqrt{\pi}}{\sqrt{-2\psi_{2n}''(x_{2n})}} f(x_{2n}/2) \exp\left(\psi_{2n}(x_{2n})\right) = \frac{\sqrt{\pi}}{\sqrt{2\beta_n}} \gamma_n\exp(\alpha_n). \end{equation} Our goal is to establish this rigorously, with a precise rate of convergence estimate; substituting the formulas \eqref{eq:alphan-simplified}--\eqref{eq:gamman-asym} into \eqref{eq:laplacemethod-heuristic} will then give the desired formula for~$Q_n$.
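The heuristic \eqref{eq:laplacemethod-heuristic} can be checked directly before carrying out the rigorous estimates. In the sketch below (illustrative; the quadrature is split at the maximum point only to help the integrator, and the range of $n$ is arbitrary) the ratio of the integral $\int_0^\infty f(x)\exp(\psi_{2n}(2x))\,dx$ to the right-hand side of \eqref{eq:laplacemethod-heuristic} is printed and is seen to approach $1$.
\begin{verbatim}
# Sketch: compare int_0^infty f(x) exp(psi_{2n}(2x)) dx with the Laplace-method
# prediction sqrt(pi)/sqrt(2 beta_n) * gamma_n * exp(alpha_n).
from mpmath import mp, mpf, quad, exp, log, sqrt, pi, lambertw

mp.dps = 50

def f(x):
    return exp(-x**2/4) * exp(mpf(5)/2*x) * (exp(2*x) - 3/(2*pi))

def psi(n2, x):
    # psi_{2n}(x) = 2n log x - pi e^x   (here n2 stands for 2n)
    return n2*log(x) - pi*exp(x)

def laplace_prediction(n):
    n2 = mpf(2*n)
    x2n = lambertw(n2/pi).real
    alpha = psi(n2, x2n)
    beta = n2/x2n**2 + pi*exp(x2n)     # beta_n = -psi_{2n}''(x_{2n})
    return sqrt(pi)/sqrt(2*beta) * f(x2n/2) * exp(alpha)

def integral(n):
    n2 = mpf(2*n)
    x2n = lambertw(n2/pi).real
    # split the integration at the peak x_{2n}/2 of the integrand; the tiny lower
    # cutoff at 0 is harmless for the accuracy of the sketch
    return quad(lambda x: f(x)*exp(psi(n2, 2*x)), [mpf('1e-12'), x2n/2, mp.inf])

if __name__ == "__main__":
    for n in (5, 20, 80):
        print(n, integral(n)/laplace_prediction(n))   # ratio tends to 1
\end{verbatim}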
It will be convenient to split up the integral defining $Q_n$ into three parts and estimate each part separately. Denote $\mu_n = n^{-2/5}$, and denote by $J_n$ the interval $\left[\frac12x_{2n}-\mu_n,\frac12x_{2n}+\mu_n\right]$. Now let \begin{align*} Q_n^{(1)} & = \int_0^{\frac12 x_{2n}- \mu_n} f(x)\exp(\psi_{2n}(2x))\,dx, \\ Q_n^{(2)} & = \int_{J_n} f(x)\exp(\psi_{2n}(2x))\,dx, \\ Q_n^{(3)} & = \int_{\frac12 x_{2n}+ \mu_n}^\infty f(x)\exp(\psi_{2n}(2x))\,dx, \end{align*} so that $Q_n = \frac{1}{2^{2n}}\left(Q_n^{(1)}+Q_n^{(2)}+Q_n^{(3)}\right)$. Our estimates will rely on the following useful calculus observations (the first two of which were already noted in the proof of Lemma~\ref{lem:hermite-easybound2}). \begin{enumerate} \item The function $\psi_{2n}(x)$ is increasing on $(0,x_{2n})$ and decreasing on $(x_{2n},\infty)$.
\item The function $\psi_{2n}(x)$ is concave.
\item \label{item:phin-third-deriv}
$\sup_{x\in J_n} |\psi_{2n}'''(2x)| = O(n)$ as $n\to\infty$.
\noindent \textbf{Proof.} $\psi_{2n}'''(2x) = \frac{n}{2x^3}-\pi e^{2x}$, so, for $x\in J_n$, using the fact that (by \eqref{eq:lambertw-value-asym}) for $n$ large enough we have the relation $J_n \subseteq \left[ \frac14 \log n, \frac12 \log n\right]$, we get that \begin{align*}
|\psi_{2n}'''(2x)| & \leq \frac{n}{2x^3} + \pi e^{2x} \leq \frac{4^3 n}{2\log^3 n} + \pi e^{\log n} = O(n). \end{align*}
\item As a consequence of the last observation, the Taylor expansion of $\psi_{2n}(2x)$ around $x=\frac12 x_{2n}$ in the interval $J_n$ has the form \begin{equation} \label{eq:psi2n-taylor-jn} \psi_{2n}(2x) =
\alpha_n - 2 \beta_n \left(x-\frac{x_{2n}}{2} \right)^2 + O\left(n \left|x-\frac{x_{2n}}{2}\right|^3 \right) \qquad (x\in J_n), \end{equation} where the constant implicit in the big-$O$ term does not depend on $n$ or $x$.
\item $\sup_{x\in J_n} \left| \frac{f(x)}{f(x_{2n}/2)}
-1\right| = O\left(\frac{\log n}{n^{2/5}}\right)$.
\noindent \textbf{Proof.} Noting that $x_{2n}\to\infty$ as $n\to\infty$, so that $f(x_{2n}/2) \geq \frac12 e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}}$ if $n$ is large, we have that \begin{align} \label{eq:fofx-messy}
\left| \frac{f(x)}{f(x_{2n}/2)}-1\right| & =
\left| \frac{e^{-\frac{x^2}{4}+\frac92 x}-\frac{3}{2\pi}e^{-\frac{x^2}{4}+\frac52 x}}{e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}}-\frac{3}{2\pi}e^{-\frac{x_{2n}^2}{16}+\frac54 x_{2n}}} -1
\right| \\ \nonumber & =
\left| \frac{ \left( e^{-\frac{x^2}{4}+\frac92 x}-\frac{3}{2\pi}e^{-\frac{x^2}{4}+\frac52 x} \right) -\left( e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}}-\frac{3}{2\pi}e^{-\frac{x_{2n}^2}{16}+\frac54 x_{2n}} \right) }{e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}}-\frac{3}{2\pi}e^{-\frac{x_{2n}^2}{16}+\frac54 x_{2n}}}
\right| \\ \nonumber & \leq
\left| \frac{ e^{-\frac{x^2}{4}+\frac92 x} - e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}} }{ \frac12 e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}} }
\right| + \frac{3}{2\pi}
\left| \frac{ e^{-\frac{x^2}{4}+\frac52 x} - e^{-\frac{x_{2n}^2}{16}+\frac54 x_{2n}} }{ \frac12 e^{-\frac{x_{2n}^2}{16}+\frac94 x_{2n}} }
\right| \\ \nonumber & =
2 \left| e^{(-\frac{x^2}{4}+\frac92 x)-(-\frac{x_{2n}^2}{16}+\frac94 x_{2n})} -1
\right|
+ \frac{6}{2\pi}
e^{-x_{2n}} \left| e^{(-\frac{x^2}{4}+\frac52 x)-(-\frac{x_{2n}^2}{16}+\frac54 x_{2n})} -1
\right|. \end{align} Now observe that $e^{-x_{2n}} = O(1)$ and that for $x\in J_n$ we have that \begin{align*}
\Bigg| \left(-\frac{x^2}{4}+\frac92 x\right)- & \left(-\frac{x_{2n}^2}{16}+\frac94 x_{2n}\right)
\Bigg|
\leq \frac92 \left|x-\frac{x_{2n}}{2}\right| + \frac14 \left|x-\frac{x_{2n}}{2}\right|\cdot
\left|x+\frac{x_{2n}}{2}\right|
= O\left(\frac{\log n}{n^{2/5}}\right) \end{align*} (with a uniform constant implicit in the big-$O$ term), and similarly \begin{equation*}
\left| \left(-\frac{x^2}{4}+\frac52 x\right)-\left(-\frac{x_{2n}^2}{16}+\frac54 x_{2n}\right)
\right| = O\left(\frac{\log n}{n^{2/5}}\right), \end{equation*} so that \eqref{eq:fofx-messy} implies the claimed bound.
\end{enumerate}
We are ready to evaluate the integrals $Q_n^{(1)}$, $Q_n^{(2)}$, $Q_n^{(3)}$, starting with the middle integral $Q_n^{(2)}$, which is the one that is the most significant asymptotically. The standard idea is that the exponential term $\exp(\psi_{2n}(2x))$ can be approximated by a Gaussian centered around the point $\frac12 x_{2n}$. This follows from the Taylor expansion \eqref{eq:psi2n-taylor-jn}. Indeed, making the change of variables $u=\sqrt{\beta_n}\left(x-\frac12x_{2n}\right)$ in the integral, we have that \begin{align} \label{eq:qn2-longcalc} Q_n^{(2)} & = \int_{J_n} f(x) \exp(\psi_{2n}(2x))\,dx \\ \nonumber & = \int_{-\mu_n\sqrt{\beta_n}}^{\mu_n\sqrt{\beta_n}} f\left(\frac{x_{2n}}{2} + \frac{u}{\sqrt{\beta_n}}\right) \exp\left(\psi_{2n}\left( x_{2n} + \frac{2u}{\sqrt{\beta_n}} \right) \right)\, \frac{du}{\sqrt{\beta_n}} \\ \nonumber & = \frac{1}{\sqrt{\beta_n}} \int_{-\mu_n\sqrt{\beta_n}}^{\mu_n\sqrt{\beta_n}} \left(1+O\left(\frac{\log n}{n^{2/5}}\right)\right)f\left(\frac{x_{2n}}{2}\right) \exp\left[
\alpha_n - 2 u^2 + O\left(n \frac{|u|^3}{\beta_n^{3/2}} \right) \right] \,du \\ \nonumber & = \left(1+O\left(\frac{\log n}{n^{2/5}}\right)\right) \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) \frac{1}{\sqrt{\beta_n}} e^{\alpha_n} \gamma_n \int_{-\mu_n\sqrt{\beta_n}}^{\mu_n\sqrt{\beta_n}} e^{-2 u^2}\,du \\ \nonumber & = \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) \frac{1}{\sqrt{\beta_n}} \gamma_n e^{\alpha_n} \left[\sqrt{\frac{\pi}{2}} - O\left( \frac{1}{\mu_n \sqrt{\beta_n}} \exp\left(-2 \mu_n^2 \beta_n\right) \right)
\right] \\ \nonumber & = \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) \frac{\sqrt{\pi}}{\sqrt{2\beta_n}} \gamma_n e^{\alpha_n}, \end{align} where in the penultimate step we used the standard inequality \begin{equation} \label{eq:gaussian-tail-bound} \int_t^\infty e^{-u^2/2}\,du \leq \frac1t e^{-t^2/2}, \qquad (t>0). \end{equation}
Next, to estimate $Q_n^{(1)}$, we use the fact that $\psi_{2n}(2x)$ is increasing on the interval of integration, bounding the integral by the length of the integration interval multiplied by an upper bound for $f(x)$ and the value of $\exp(\psi_{2n}(2x))$ at the rightmost end of the interval. Note that $f(x)$ is bounded from above by the numerical constant \begin{equation*} K_0 := \left(\sup_{x\ge 0} e^{-\frac{x^2}{4}+\frac92 x} + \frac{3}{2\pi} \sup_{x\ge 0} e^{-\frac{x^2}{4}+\frac52 x}\right) = e^{81/4} + \frac{3}{2\pi} e^{25/4}. \end{equation*} Thus, using \eqref{eq:betan-simplified-asym}, \eqref{eq:gamman-asym} and \eqref{eq:psi2n-taylor-jn}, we get that \begin{align} \label{eq:qn1-longcalc} Q_n^{(1)} & \leq K_0 \left(\frac{x_{2n}}{2}-\mu_n\right) \times \exp\left(\psi_{2n}\left(x_{2n}-2\mu_n\right)\right)
\leq \frac{K_0}{2} x_{2n} \exp\left(\alpha_n - 2 \beta_n \mu_n^2 + O(n \mu_n^3) \right) \\ \nonumber & = O(\log n) e^{\alpha_n} O\left(\exp\left(-n^{1/10}\right) \right) \left(1+O\left(\frac{1}{n^{1/5}}\right)\right)
= O\left( e^{-\frac12 n^{1/10}} \frac{\sqrt{\pi}}{\sqrt{2 \beta_n}} \gamma_n e^{\alpha_n} \right). \end{align}
Next, to estimate $Q_n^{(3)}$, note that, as in the proof of Lemma~\ref{lem:hermite-easybound2}, by the concavity of $\psi_{2n}(2x)$ the graph of $\psi_{2n}(2x)$ is bounded from above by the tangent line to the graph at $x=\frac{x_{2n}}{2}+\mu_n$. In other words, we have the inequality \begin{equation*} \psi_{2n}(2x) \le \psi_{2n}\left(x_{2n}+2\mu_n\right) + 2\psi_{2n}'\left(x_{2n}+2\mu_n\right)\left(x-\frac{x_{2n}}{2}-\mu_n\right) \qquad (x> 0). \end{equation*} Moreover, it is useful to note that the derivative value $\psi_{2n}'(x_{2n}+2\mu _n)$ satisfies \begin{align*} \psi_{2n}'(x_{2n}+2\mu _n) & = \frac{2n}{x_{2n}+2\mu_n} - \pi e^{x_{2n}+2\mu_n} = \frac{2n}{x_{2n}+2\mu_n} - e^{2\mu_n}\frac{2n}{x_{2n}}
\leq \frac{2n}{x_{2n}}\left(1 - e^{2\mu_n}\right) \leq -\frac{4n \mu_n}{x_{2n}}, \end{align*} so in particular $\psi_{2n}'(x_{2n}+2\mu _n) \leq -1$ if $n$ is large enough. These observations imply that as $n\to\infty$, $Q_n^{(3)}$ satisfies the bound \begin{align} \label{eq:qn3-longcalc} Q_n^{(3)} &\leq K_0\int_{\frac12 x_{2n}+\mu_n}^\infty \exp\left(\psi_{2n}\left(x_{2n}+2\mu_n\right) + 2\psi_{2n}'\left(x_{2n}+2\mu_n\right) \left(x-\frac{x_{2n}}{2}-\mu_n\right)\right)\,dx \\ \nonumber & \leq K_0 \int_{\frac12x_{2n}+\mu_n}^\infty \exp\left(\alpha_n - 2 \beta_n \mu_n^2 + O(n \mu_n^3)\right) \exp\left( -\left(x-\frac{x_{2n}}{2}-\mu_n\right) \right)\,dx \\ \nonumber & = K_0 \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) e^{\alpha_n} O\left(\exp\left(-n^{1/10}\right)\right) \int_0^\infty \exp\left(-u\right)\,du
= O\left(e^{-n^{1/10}} \frac{\sqrt{\pi}}{\sqrt{2 \beta_n}} \gamma_n e^{\alpha_n}\right). \end{align} Combining \eqref{eq:qn2-longcalc}, \eqref{eq:qn1-longcalc} and \eqref{eq:qn3-longcalc}, we have finally that \begin{equation} \label{eq:qn-asym-preformula} Q_n = \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) \frac{\sqrt{\pi}}{2^{2n}\sqrt{2\beta_n}} \gamma_n e^{\alpha_n}. \end{equation} Using \eqref{eq:alphan-simplified}--\eqref{eq:gamman-asym} we now get the asymptotic formula \begin{align} \label{eq:qn-asym-formula} Q_n &= \left(1+O\left(\frac{1}{\log n}\right)\right) \frac{1}{2^{2n+\frac12}} \left(\frac{2n}{\pi x_{2n}}\right)^{7/4}
\exp\left[ 2n\left(\log (2n) - \log \pi - x_{2n} - \frac{1}{x_{2n}} \right) -\frac{1}{16} x_{2n}^2\right] \end{align} that holds as $n\to\infty$. In particular, for the purpose of comparing $Q_n$ to $r_n$, it is useful to note that the exponential factors $\frac{1}{2^{2n}}$ and $e^{\alpha_n}$ are asymptotically the most significant ones in~\eqref{eq:qn-asym-preformula}. More precisely, recalling \eqref{eq:lambertw-value-asym}, \eqref{eq:an-asym-largedevconst}, \eqref{eq:betan-simplified-asym} and \eqref{eq:gamman-asym}, we get that \begin{align} \label{eq:qofn-exponential-approx} Q_n &= \frac{1}{2^{2n}}\exp\left(\alpha_n + O((\log n)^2)\right) \\ \nonumber &= \frac{1}{2^{2n}}\exp\Bigg[ 2n\log\log (2n) - \frac{2n \log\log (2n)}{\log (2n)}
- (\log \pi+1)\frac{2n}{\log (2n)} +O\left( \frac{n (\log\log n)^2}{(\log n)^2} \right) \Bigg] \end{align} as $n\to\infty$.
\textbf{Part 2: estimating $r_n$.} We proceed with asymptotically bounding $r_n$. Observe that, by \eqref{eq:phixdiff-asym-xinfty}, $r_n$ satisfies \begin{equation*}
|r_n| \leq C_1 \int_0^\infty x^{2n} \exp\left(-3\pi e^{2x}\right)\,dx = \frac{C_1}{2^{2n+1}} \int_0^\infty u^{2n} \exp\left(-3\pi e^u\right)\,du \end{equation*} for some constant $C_1>0$. We can therefore once again apply Lemma~\ref{lem:hermite-easybound2}, this time with the parameter $B=3\pi$, to get that, for all $n\ge3$ and some constant $C_2>0$, \begin{align*}
|r_n| & \leq \frac{1}{2^{2n}} \exp\Bigg[ 2n\log\log (2n) - \frac{2n \log\log (2n)}{\log (2n)}
- (\log (3\pi)+1) \frac{2n}{\log (2n)} + C_2 \frac{2n (\log\log (2n))^2}{(\log (2n))^2} \Bigg]. \end{align*} Comparing this to \eqref{eq:qofn-exponential-approx}, we see that for large $n$ the relation \begin{equation} \label{eq:rn-asym-bound}
|r_n| \leq \exp\left(-\frac12 \log(3) \frac{2n}{\log (2n)}\right)Q_n \end{equation} holds. Thus $r_n$ is indeed asymptotically negligible compared to $Q_n$.
\textbf{Finishing the proof.} Combining \eqref{eq:bof2n-qn-rn}, \eqref{eq:qn-asym-formula} and \eqref{eq:rn-asym-bound} we arrive finally at the desired formula for $b_{2n}$: \begin{align*} b_{2n} & = \frac{\pi^2}{2^{2n-3} (2n)!} \left( 1+O\left(\exp\left(-\log(3) \frac{n}{\log (2n)}\right)\right) \right) Q_n \\ & = \left(1+O\left(\frac{1}{\log n}\right)\right) \frac{\pi^2}{2^{2n-3} (2n)!} \times \frac{1}{2^{2n+\frac12}} \left(\frac{2n}{\pi x_{2n}}\right)^{7/4}
\exp\left[ 2n\left(\log (2n) - \log \pi - x_{2n} - \frac{1}{x_{2n}} \right) -\frac{1}{16} x_{2n}^2\right]. \end{align*} Since $x_{2n} = W\left(\frac{2n}{\pi}\right) = \left(1+O\left(\frac{\log\log n}{\log n}\right)\right)\log(2n)$, this gives \eqref{eq:hermite-coeff-asym} and completes the proof of Theorem~\ref{thm:hermite-coeff-asym}. \end{proof}
\section[The Poisson flow and the P\'olya-De Bruijn flow]{The Poisson flow, P\'olya-De Bruijn flow and the De Bruijn-Newman constant}
\label{sec:hermite-poissonpolya}
One recurring theme in the study of the Riemann hypothesis is the idea that in order to understand the zeros of the Riemann xi (or zeta) function, one might start by looking at suitable approximations to it that have a simpler structure---for example, being polynomials instead of entire or meromorphic functions---and trying to understand the location of the zeros of those approximations first. The hope is that there exists some good approximation that would have the feature that the zeros of the approximating functions can be understood, and, in an ideal scenario, shown to all lie on the real line (or on the critical line $\operatorname{Re}(s)=1/2$, depending on the coordinate system used). In the setting of a discrete sequence of approximations, this approach has been applied for example to the partial sums of the Taylor series of $\Xi(t)$ \cite{jenkins-mclaughlin} and to the partial sums of the Dirichlet series of $\zeta(s)$ \cite{haselgrove, spira, turan1948}. While those attacks can involve the use of some rather ingenious and sophisticated tools, they have not resulted in any easily quantifiable progress on the original question of RH.
Instead of looking at a discrete sequence of approximations, certain other contexts naturally suggest a family of approximations indexed by a continuous parameter. We refer to such a family informally as a \firstmention{flow}, especially when the family satisfies a partial differential equation or some other sort of dynamical evolution law (all the approximation families we consider will be of this type).
One very natural and well-studied example of such a flow is the family of functions \begin{equation} \label{eq:polya-flow-def} \Xi_\lambda(t) = \int_{-\infty}^\infty e^{\lambda x^2} \Phi(x) e^{itx}\,dx \qquad (\lambda \in \mathbb{R}). \end{equation} For $\lambda=0$, we have $\Xi_0\equiv\Xi$, so the family $\Xi_\lambda$ is a flow passing through the Riemann xi function at $\lambda=0$. We refer to it as the \firstmention{P\'olya-De Bruijn flow} associated with the xi function; this term seems appropriate in view of P\'olya's discovery of universal factors described in \secref{sec:intro-background} and its extension by De Bruijn. Specifically, P\'olya's result described in the introduction to the effect that $e^{\lambda x^2}$ is a universal factor implies that the P\'olya-De Bruijn flow preserves \firstmention{hyperbolicity} (the property of an entire function of having no non-real zeros) in the positive direction of the ``time'' parameter $\lambda$: that is, if $\Xi_\lambda$ is hyperbolic for some specific value of $\lambda$, then so is $\Xi_\mu$ for any $\mu>\lambda$, and in particular, if it could be shown that $\Xi_\lambda$ is hyperbolic for some \emph{negative} value $\lambda<0$, the Riemann hypothesis would follow. Moreover, showing that $\Xi_\lambda$ is hyperbolic for \emph{positive} values of $\lambda$ (which by the same logic ought to be both more likely to be true, and easier to prove if true) may still be beneficial, since if for instance it could somehow be shown that $\Xi_\lambda$ were hyperbolic for \emph{all} $\lambda>0$, once again RH would follow by a straightforward approximation argument.
De Bruijn extended P\'olya's work in an important way when he showed that in fact $\Xi_\lambda$ is hyperbolic for all $\lambda \geq 1/8$. His result was later strengthened slightly by Ki, Kim and Lee \cite{ki-kim-lee}, who showed that the same would be true for $\lambda\geq 1/8-\epsilon$ for some (fixed, but non-explicit) $\epsilon>0$. In the negative direction, Newman \cite{newman} proved that $\Xi_\lambda$ is \emph{not} hyperbolic if $\lambda$ is a negative number of sufficiently large magnitude, and conjectured that the same statement holds true for \emph{all} $\lambda < 0$---this is usually formulated as the statement that the \firstmention{De Bruijn-Newman constant}, denoted by $\Lambda$ and defined as four times the greatest lower bound of the set of $\lambda$'s for which $\Xi_\lambda$ is hyperbolic,\footnote{The multiplication by four is a quirk associated with the different notational conventions used by different authors in the literature on the subject. See p.~63 and Table 5.2 on p.~68 of \cite{broughan} for further discussion and a comparison of the different conventions.} is nonnegative. An explicit numerical lower bound of $-50$ for the De Bruijn-Newman constant was later established by Csordas, Norfolk and Varga \cite{csordas-norfolk-varga1988}. The lower bound was pushed upwards further in a succession of papers \cite{csordas-odlyzko-etal, csordas-ruttan-varga, csordas-smith-varga, norfolk-ruttan-varga, odlyzko, saouter-etal, teriele}, with the established bounds gradually growing extremely close to $0$ on the negative side. Most recently Rodgers and Tao \cite{rodgers-tao} succeeded in proving Newman's conjecture that $\Lambda\ge 0$, and recent work by the Polymath15 project \cite{polymath15} strengthened the result of Ki, Kim and Lee mentioned above by proving the sharper upper bound $\Lambda \le 0.22$.
We now come to a key idea that relates the above discussion to our theme of expansions of the Riemann xi function in families of orthogonal polynomials, and the Hermite polynomials in particular. Specifically, it is the idea that any series expansion of the Riemann xi function in a system of orthogonal polynomials comes equipped with its own flow based on the standard construction of the so-called \firstmention{Poisson kernel} in the theory of orthogonal polynomials. We call this the \firstmention{Poisson flow}.
To define the Poisson flow, recall that the Poisson kernel for a family $\phi=(\phi_n)_{n=0}^\infty$ of polynomials that are orthogonal with respect to a weight function $w(x)$ is defined by \begin{equation} \label{eq:poisson-ker-def}
p_r^\phi(x,y) = \sum_{n=0}^\infty \frac{r^n}{\int_\mathbb{R} \phi_n(u)^2 w(u)\,du} \phi_n(x)\phi_n(y) \qquad (|r|<1). \end{equation} Its essential feature is the equation \begin{equation*} \int_{-\infty}^\infty p_r^\phi(x,y) \phi_n(y) w(y)\,dy = r^n \phi_n(x), \end{equation*} which is trivial to verify by evaluating the integral termwise. That is, the associated integral kernel operator $\Pi_r^\phi:f\mapsto \int_{\mathbb{R}} p_r^\phi(x,y) f(y) w(y)\,dy$ acting on $L^2(\mathbb{R}, w(x)\,dx)$ sends a function with Fourier expansion $f(x) = \sum_n \gamma_n \phi_n$ to $\sum_n \gamma_n r^n \phi_n$, with the $n$th ``harmonic'' in the expansion being attenuated by a factor $r^n$. Note that one can also consider the limiting case $r\to 1$, in which case the definition \eqref{eq:poisson-ker-def} of the Poisson kernel $p_r^\phi$ no longer makes sense, but the operator $\Pi_1^\phi$ can be defined simply as the identity operator, which is clearly the limit of the $\Pi_r$'s (and $p_1^\phi(x,y)$ can be thought of as the distribution $\delta(x-y)$).
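To spell out the termwise verification of the reproducing identity above: substituting the definition \eqref{eq:poisson-ker-def} and using the orthogonality of the $\phi_m$'s, we get
\begin{equation*}
\int_{-\infty}^\infty p_r^\phi(x,y) \phi_n(y) w(y)\,dy
= \sum_{m=0}^\infty \frac{r^m \phi_m(x)}{\int_\mathbb{R} \phi_m(u)^2 w(u)\,du} \int_{-\infty}^\infty \phi_m(y)\phi_n(y) w(y)\,dy
= r^n \phi_n(x),
\end{equation*}
since only the $m=n$ term survives.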
We can now define the \firstmention{Poisson flow associated with the Riemann xi function for the orthogonal polynomial sequence $\phi=(\phi_n)$} to be the family of functions \begin{equation} \label{eq:poisson-flow-def} X_r^{\phi}(t) = \Pi_r^\phi(\Xi)(t) = \begin{cases} \int_{-\infty}^\infty p_r(t,\tau)\Xi(\tau)w(\tau)\,d\tau & \textrm{if }0<r<1, \\ \Xi(t) & \textrm{if }r=1 \end{cases} \qquad (0 < r \le 1). \end{equation} Alternatively, if $\Xi(t)$ is expressed in terms of its Fourier series expansion $\Xi(t) = \sum_{n=0}^\infty \gamma_n \phi_n(t)$ in the orthogonal polynomial family $(\phi_n)_{n=0}^\infty$ (in the sense of the function space $L^2(\mathbb{R},w(x)\,dx)$), we can write the Poisson flow equivalently as \begin{equation} \label{eq:poisson-flow-expansion} X_r^\phi(t) = \sum_{n=0}^\infty r^n \gamma_n \phi_n(t). \end{equation}
Denote the family of Hermite polynomials by $\mathcal{H}=(H_n)_{n=0}^\infty$, so that $p_r^{\mathcal{H}}(x,y)$ and $\Pi_r^{\mathcal{H}}$ now denote the Poisson kernel and integral operator associated with the Hermite polynomials, respectively, and $X_r^{\mathcal{H}}(t)$ denotes the corresponding flow associated with the Riemann xi function. Our main result for this section, relating the different concepts we introduced above, is the following.
\begin{thm}[Connection between the P\'olya-De Bruijn and Poisson flows] \label{thm:hermite-poisson-polya} The Poisson flow for the Hermite polynomials is related to the P\'olya-De Bruijn flow \eqref{eq:polya-flow-def} via \begin{equation} \label{eq:hermite-poisson-polya} X_r^{\mathcal{H}}(t) = \Xi_{(r^2-1)/4}(rt) \qquad (0<r\le 1). \end{equation} \end{thm}
\begin{proof} This is a straightforward calculation that generalizes \eqref{eq:hermiteexp-informal-der} by weighting each of the summands in the expansion by the factor $r^n$. Once again using the generating function formula \eqref{eq:hermite-genfun}, we have that \begin{align} \label{eq:hermite-poissonpolya-calc} X_r^{\mathcal{H}}(t) & = \sum_{n=0}^\infty i^n r^n b_n H_n(t)
= \sum_{n=0}^\infty \frac{i^n r^n}{2^n n!} \int_{-\infty}^\infty x^n e^{-\frac{x^2}{4}} \Phi(x)\,dx \cdot H_n(t) \\ \nonumber &= \int_{-\infty}^\infty \left( \sum_{n=0}^\infty \frac{1}{n!} \left(\frac{i r x}{2} \right)^n H_n(t)\right) e^{-\frac{x^2}{4}} \Phi(x)\,dx
= \int_{-\infty}^\infty \exp\left( 2t\cdot \frac{ir x}{2}-\left(\frac{irx}{2}\right)^2\right) e^{-\frac{x^2}{4}}\Phi(x)\,dx \\ \nonumber &= \int_{-\infty}^\infty \Phi(x) \exp\left((r^2-1)\frac{x^2}{4}\right)e^{ir t x}\,dx = \Xi_{(r^2-1)/4}(r t), \end{align} as claimed. \end{proof}
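As a numerical illustration of \eqref{eq:hermite-poisson-polya} (not needed for the proof), the short Python sketch below compares a truncation of the series $\sum_n i^n r^n b_n H_n(t)$ with the integral defining $\Xi_{(r^2-1)/4}(rt)$. It assumes \texttt{mpmath} is available, and it takes as given the series representation $\Phi(x)=\sum_{k\ge1}\bigl(2\pi^2k^4e^{9x/2}-3\pi k^2e^{5x/2}\bigr)e^{-\pi k^2e^{2x}}$ together with the evenness of $\Phi$; since both sides are computed from the same $\Phi$, the comparison is internally consistent. The two printed values should agree to many significant digits.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 30

def Phi(x):
    # series for Phi (assumed to agree with the definition used earlier in the text)
    return mp.nsum(lambda k: (2*mp.pi**2*k**4*mp.exp(mp.mpf(9)/2*x)
                              - 3*mp.pi*k**2*mp.exp(mp.mpf(5)/2*x))
                             * mp.exp(-mp.pi*k**2*mp.exp(2*x)), [1, mp.inf])

def b(n):
    # b_n = (1/(2^n n!)) \int x^n e^{-x^2/4} Phi(x) dx; odd n vanish since Phi is even
    if n % 2:
        return mp.mpf(0)
    return 2*mp.quad(lambda x: x**n*mp.exp(-x**2/4)*Phi(x), [0, 2, 4, 6])/(2**n*mp.factorial(n))

def X_poisson(r, t, N=40):
    # truncated Poisson flow: sum over even n of i^n r^n b_n H_n(t)
    return sum((-1)**(n//2)*r**n*b(n)*mp.hermite(n, t) for n in range(0, N, 2))

def Xi_pdb(lam, t):
    # Polya-De Bruijn flow Xi_lambda(t), using the evenness of Phi
    return 2*mp.quad(lambda x: mp.exp(lam*x**2)*Phi(x)*mp.cos(t*x), [0, 2, 4, 6])

r, t = mp.mpf('0.75'), mp.mpf(2)
print(mp.nstr(X_poisson(r, t), 15))
print(mp.nstr(Xi_pdb((r**2 - 1)/4, r*t), 15))
\end{verbatim}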
Theorem~\ref{thm:hermite-poisson-polya} ties together in an interesting way the different threads of research into RH begun with the work of P\'olya on universal factors (and continued with the extensive subsequent investigations into the De Bruijn-Newman constant by De Bruijn, Newman and others) on the one hand, and Tur\'an's ideas on the Hermite expansion on the other hand. Incidentally, hints of this connection seem to have already been noted in a less explicit way in the literature; see in particular \cite[Section~3]{bleecker-csordas}.
One key point to take away from this discussion is that the Poisson flow appears to be a natural device with which to try to approximate the Riemann xi function. And while Theorem~\ref{thm:hermite-poisson-polya} shows that the Poisson flow associated with the Hermite polynomials is equivalent to an already well-studied construction, the point is that Poisson flows are a method of approximation that allows us a considerable freedom in choosing the system of orthogonal polynomials to use, and it is conceivable that this might lead to new families of approximations with useful properties. Indeed, in Chapter~\ref{ch:fn-expansion}, when we consider the expansion of $\Xi(t)$ in the family of Meixner-Pollaczek orthogonal polynomials $f_n$, we will revisit the Poisson flow in the context of this new expansion and show that it has some quite natural and interesting properties in that setting.
As a final remark, we recall that one of several notable features of the P\'olya-De Bruijn flow, first pointed out in \cite{csordas-smith-varga}, is that it satisfies the time-reversed heat equation \begin{equation} \label{eq:timerev-heat-equation} \frac{\partial \Xi_\lambda(t)}{\partial \lambda} = - \frac{\partial^2 \Xi_\lambda(t)}{\partial t^2}, \end{equation} a fact that follows immediately from the representation \eqref{eq:polya-flow-def} by differentiating under the integral sign, and which played a useful role in the study of the De Bruijn-Newman constant (see \cite[Sec.~5.5]{broughan}, \cite{csordas-smith-varga}, \cite{rodgers-tao}). It is of some interest to note that the same result can also be obtained by using the relation \eqref{eq:hermite-poisson-polya} interpreting the P\'olya-De Bruijn flow as a reparametrized version of the Poisson flow, together with basic properties of the Hermite polynomials. To see this, start by inverting \eqref{eq:hermite-poisson-polya} to express $\Xi_\lambda(t)$ in terms of the Poisson flow as \begin{equation*} \Xi_\lambda(t) = X_{\sqrt{1+4\lambda}}^\mathcal{H}\left(\frac{t}{\sqrt{1+4\lambda}}\right). \end{equation*} Now expanding the Poisson flow as in \eqref{eq:poisson-flow-expansion}, we differentiate and then use the classical ordinary differential equation \eqref{eq:hermite-ode} satisfied by the Hermite polynomials, to get that \begin{align*} \frac{\partial \Xi_\lambda(t)}{\partial \lambda} & = \frac{\partial}{\partial \lambda} \left( X_{\sqrt{1+4\lambda}}^\mathcal{H}\left(\frac{t}{\sqrt{1+4\lambda}}\right) \right) \\ &= \frac{\partial}{\partial \lambda} \left( \sum_{n=0}^\infty i^n b_n (1+4\lambda)^{\frac{n}{2}} H_n\left(\frac{t}{\sqrt{1+4\lambda}}\right) \right) \\ & = \sum_{n=0}^\infty i^n b_n \Bigg[ 4\frac{n}{2} (1+4\lambda)^{\frac{n}{2}-1} H_n\left(\frac{t}{\sqrt{1+4\lambda}}\right)
- (1+4\lambda)^{\frac{n}{2}} \frac{4t}{2(1+4\lambda)^{3/2}} H_n'\left(\frac{t}{\sqrt{1+4\lambda}}\right) \Bigg] \\ & = \sum_{n=0}^\infty i^n b_n (1+4\lambda)^{\frac{n}{2}} \Bigg[ \frac{1}{1+4\lambda} \left( - H_n''\left(\frac{t}{\sqrt{1+4\lambda}}\right) + \frac{2t}{\sqrt{1+4\lambda}} H_n'\left(\frac{t}{\sqrt{1+4\lambda}}\right) \right) \\ & \hspace{240pt} -\frac{2t}{(1+4\lambda)^{3/2}} H_n'\left(\frac{t}{\sqrt{1+4\lambda}}\right) \Bigg] \\ & = -\sum_{n=0}^\infty i^n b_n (1+4\lambda)^{\frac{n}{2}} \frac{\partial^2}{\partial t^2} \left( H_n\left(\frac{t}{\sqrt{1+4\lambda}}\right) \right)
= -\frac{\partial^2}{\partial t^2} \left( X_{\sqrt{1+4\lambda}}^\mathcal{H}\left(\frac{t}{\sqrt{1+4\lambda}}\right) \right)
= -\frac{\partial^2 \Xi_\lambda(t)}{\partial t^2}, \end{align*} recovering \eqref{eq:timerev-heat-equation} as expected. (Incidentally, at the heart of this calculation is the simple observation that each of the two-variable functions $h_n(\tau,x) = \tau^{n/2} H_n\left(-\frac{x}{\sqrt{\tau}}\right)$ solves the time-reversed heat equation $(h_n)_\tau = -\frac14 (h_n)_{xx}$. With a bit of hindsight, this fact coupled with knowledge of \eqref{eq:timerev-heat-equation} could have been seen as yet another clue foreshadowing the connection we made explicit in Theorem~\ref{thm:hermite-poisson-polya}.)
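As a quick symbolic confirmation of this observation (a sketch only, assuming \texttt{sympy}), one can check for the first few $n$ that $h_n(\tau,x)=\tau^{n/2}H_n\bigl(-x/\sqrt{\tau}\bigr)$ indeed satisfies $(h_n)_\tau=-\tfrac14(h_n)_{xx}$:
\begin{verbatim}
import sympy as sp

tau, x = sp.symbols('tau x', positive=True)
for n in range(6):
    h = tau**sp.Rational(n, 2) * sp.hermite(n, -x/sp.sqrt(tau))
    residual = sp.simplify(sp.diff(h, tau) + sp.Rational(1, 4)*sp.diff(h, x, 2))
    print(n, residual)  # each residual should print as 0
\end{verbatim}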
One reason why the above derivation was worth noting is that it has a nice analogue in the context of the expansion of the Riemann xi function in the orthogonal polynomial family $(f_n)_{n=0}^\infty$---see Theorem~\ref{thm:poissonflow-dde} in \secref{sec:fn-poissonflow}.
\chapter{Expansion of $\Xi(t)$ in the polynomials $f_n$}
\label{ch:fn-expansion}
Recall that in the Introduction we discussed a family of polynomials $(f_n)_{n=0}^\infty$ defined as \begin{equation*} f_n(x) = P_n^{(3/4)}(x;\pi/2) = \frac{(3/2)_n}{n!} i^n {}_2F_1\left(-n, \frac34+ix; \frac32; 2 \right), \end{equation*}
where $P_n^{(\lambda)}(x;\phi)$ denotes the Meixner-Pollaczek polynomials with parameters $\lambda, \phi$. The polynomials $f_n$ form a family of orthogonal polynomials with respect to the weight function $\left|\Gamma\left(\frac34+ix\right)\right|^2$ on $\mathbb{R}$. Their properties are summarized in \secref{sec:orth-fn}. Our main goal in this chapter is to derive the expansion \eqref{eq:fn-expansion-intro} for $\Xi(t)$ in the (trivially rescaled) orthogonal polynomials $f_n(t/2)$, which we refer to as the $f_n$-expansion, and to investigate some of its key properties. After proving two main results about the existence of the expansion and the asymptotic behavior of the coefficients, we will see that thinking about the $f_n$-expansion leads to a natural family of approximations to $\Xi(t)$ arising out of the Poisson flow of the orthogonal polynomial family $(f_n)_{n=0}^\infty$. The ideas in this chapter will also prepare the ground for much additional theory developed in Chapters~\ref{ch:radial} and~\ref{ch:gn-expansion}.
\section{Main results}
\label{sec:fnexp-mainresults}
We start by identifying the numbers $c_{2n}$ that will play the role of the coefficients in the $f_n$-expansion. We define more generally numbers $(c_n)_{n=0}^\infty$ by \begin{equation} \label{eq:def-fn-coeffs} c_n = 2\sqrt{2}\int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n\,dx. \end{equation} The integral converges absolutely, by \eqref{eq:omegax-asym-xinfty}--\eqref{eq:omegax-asym-xzero}. Moreover, the functional equation \eqref{eq:theta-functional-equation} implies through a trivial change of variables $u=1/x$ that \begin{equation} \label{eq:omega-int-powersn-symmetry} \int_0^1 \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n\,dx = (-1)^n \int_1^\infty \frac{\omega(u)}{(u+1)^{3/2}} \left(\frac{u-1}{u+1}\right)^n\,du. \end{equation} It follows that $c_{2n+1} = 0$ for all $n$, and that the even-indexed numbers $c_{2n}$ can be alternatively expressed as \begin{equation} \label{eq:c2n-formula} c_{2n} = 4\sqrt{2}\int_1^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{2n}\,dx. \end{equation} Since the integrand in \eqref{eq:c2n-formula} is positive on $(1,\infty)$, the numbers $c_{2n}$ are positive.
With these preliminary remarks, we can formulate the main result on the expansion \eqref{eq:fn-expansion-intro}.
\begin{thm}[Infinite series expansion for $\Xi(t)$ in the polynomials $f_n$] \label{THM:FN-EXPANSION} The Riemann xi function has the infinite series representation \begin{equation} \label{eq:fn-expansion} \Xi(t) = \sum_{n=0}^\infty (-1)^n c_{2n} f_{2n}\left(\frac{t}{2}\right), \end{equation} which converges uniformly on compact subsets of $\mathbb{C}$. More precisely, for any compact set $K\subset \mathbb{C}$ there exist constants $C_1, C_2>0$ depending on $K$ such that \begin{equation} \label{eq:fn-expansion-errorbound}
\left|\Xi(t) - \sum_{n=0}^N (-1)^n c_{2n} f_{2n}\left(\frac{t}{2}\right) \right| \leq C_1 e^{-C_2 \sqrt{N}} \end{equation} holds for all $N\ge0$ and $t\in K$. \end{thm}
We will also prove a formula describing the asymptotic behavior of the coefficient sequence $c_{2n}$.
\begin{thm}[Asymptotic formula for the coefficients $c_{2n}$] \label{THM:FN-COEFF-ASYM} The asymptotic behavior of $c_{2n}$ for large $n$ is given by \begin{equation} \label{eq:fn-coeff-asym} c_{2n} = \left(1+O\left(n^{-1/10}\right)\right) 16 \sqrt{2} \pi^{3/2}\, \sqrt{n} \,\exp\left(-4\sqrt{\pi n}\right) \end{equation} as $n\to\infty$. \end{thm}
A corollary of the above results, analogous to Corollary~\ref{cor:hermite-coeff-innerproduct}, is the following.
\begin{corollary} \label{cor:fn-coeff-innerproduct} The coefficients $c_n$ can be alternatively expressed as \begin{equation} \label{eq:fn-coeff-innerproduct} c_n = (-i)^n \frac{\sqrt{2} \, n!}{\pi^{3/2} (3/2)_n}
\int_{-\infty}^\infty \Xi(t) f_n\left(\frac{t}{2}\right)\left|\Gamma\left(\frac34+\frac{it}{2}\right)\right|^2\,dt. \end{equation} \end{corollary}
\begin{proof} This is analogous to the proof of Corollary~\ref{cor:hermite-coeff-innerproduct}. \end{proof}
\section{Proof of Theorem~\ref{THM:FN-EXPANSION}}
\label{sec:fnexp-proofexpansion}
The next two lemmas establish technical bounds that will be useful for our analysis and play a similar role to the one played in the previous chapter by Lemmas~\ref{lem:hermite-easybound} and~\ref{lem:hermite-easybound2}.
\begin{lem} \label{lem:fn-easybound} The polynomials $f_n(x)$ satisfy the bound \begin{equation} \label{eq:fn-easybound}
|f_n(x)|\leq C_1 e^{C_2 n^{1/3}} \end{equation} for all $n\ge 0$, uniformly as $x$ ranges over any compact set $K\subset \mathbb{C}$, with $C_1,C_2>0$ being constants that depend on $K$ but not on $n$. \end{lem}
\begin{proof}
Fix the compact $K\subset \mathbb{C}$, and denote $M=2\max_{x\in K} |x|$. Fix an integer $N_0\ge \max(4,(3M)^3)$. Let $C_1,C_2>0$ be constants for which \eqref{eq:fn-easybound} holds for all $x\in K$ and $0\le n\le N_0$, and such that $C_2\ge 1$. Note that for all $n\ge N_0$ we have the inequality $ n^{1/3} - (n-2)^{1/3} \geq \frac{2}{3n^{2/3}}, $ which implies that \begin{equation} \label{eq:fn-easybound-auxineq} e^{-C_2 (n^{1/3} - (n-2)^{1/3})} \leq e^{-(n^{1/3} - (n-2)^{1/3})} \leq 1-\frac{1}{3n^{2/3}}. \end{equation} Here the last inequality uses that $e^{-x}\leq 1-x/2$ for $0\le x\le 1$. Then, assuming by induction that we have proved the bound \eqref{eq:fn-easybound} for all cases up to $n-1$, in the $n$th case (where $n> N_0$) we can use the recurrence \eqref{eq:fn-recurrence} and \eqref{eq:fn-easybound-auxineq} to write that, for all $x\in K$, \begin{align*}
|f_n(x)| &\leq \frac{2|x|}{n} |f_{n-1}(x)| + \left(1-\frac{1}{2n}\right)|f_{n-2}(x)|
\leq \frac{M}{n} C_1 e^{C_2 (n-1)^{1/3}} + \left(1-\frac{1}{2n}\right) C_1 e^{C_2 (n-2)^{1/3}} \\ &\leq C_1 e^{C_2 n^{1/3}} \left[ \frac{M}{n} e^{-C_2 (n^{1/3}-(n-1)^{1/3})} + \left(1-\frac{1}{2n}\right) e^{-C_2 (n^{1/3}-(n-2)^{1/3})} \right] \\ &\leq C_1 e^{C_2 n^{1/3}} \left[ \frac{M}{n} + \left(1-\frac{1}{2n}\right) \left(1-\frac{1}{3n^{2/3}}\right) \right] \\ &\leq C_1 e^{C_2 n^{1/3}} \left[ \frac{1}{3n^{2/3}} + \left(1-\frac{1}{3n^{2/3}}\right) \right] = C_1 e^{C_2 n^{1/3}}. \end{align*} This completes the inductive step. \end{proof}
\begin{lem} \label{lem:fn-easybound2} For any number $B\ge1$ there is a constant $C>0$ such that \begin{equation} \label{eq:fn-easybound2} \int_1^\infty e^{-B x} \left( \frac{x-1}{x+1}\right)^n\,dx \leq C e^{-2 \sqrt{B n}} \end{equation} for all $n\ge 0$. \end{lem}
\begin{proof} The integral can be expressed as \begin{align*} \int_1^\infty \exp\left(\psi_n(x)\right)\, dx, \end{align*} where we define \begin{equation} \label{eq:largedev-function} \psi_n(x) = -B x + n \log \left( \frac{x-1}{x+1}\right). \end{equation} By solving the equation $\psi_n'(x) = 0$, it is easy to check that $\psi_n(x)$ has a unique global maximum point $x_n$ in $[1,\infty)$, namely \begin{equation*} 0 = \psi_n'(x_n) = -B + \frac{2n}{x_n^2-1} \quad \iff \quad x_n = \sqrt{\frac{2n}{B}+1}, \end{equation*} which asymptotically as $n\to\infty$ behaves as \begin{equation*} x_n = \sqrt{\frac{2n}{B}} + O\left(\frac{1}{\sqrt{n}}\right). \end{equation*} The value at the maximum point is \begin{align*} \psi_n(x_n) &= -B x_n + n \log\left(\frac{x_n-1}{x_n+1}\right) = -B x_n + n \log\left(\frac{1-1/x_n}{1+1/x_n}\right) \\ &= - \sqrt{2B n} +O\left(\frac{1}{\sqrt{n}}\right) + n\left(-2\cdot\frac{1}{x_n} + O\left(\frac{1}{x_n^3}\right)\right)
= -2 \sqrt{2B n} +O\left(\frac{1}{\sqrt{n}}\right) \end{align*} as $n\to\infty$. We conclude that \begin{align*} \int_1^\infty \exp\left(\psi_n(x)\right)\, dx
&= \int_1^{2x_n} \exp\left(\psi_n(x)\right)\, dx + \int_{2x_n}^\infty \exp\left(\psi_n(x)\right)\, dx \\ &\leq 2x_n \exp\left(\psi_n(x_n)\right) + \int_{2x_n}^\infty e^{-B x}\,dx \\ &= \left(1+O\left(\frac{1}{\sqrt{n}}\right)\right) 2\sqrt{\frac{2n}{B}} \exp\left(-2\sqrt{2B n}\right) + \frac{1}{B} e^{-2 B x_n} = O\left( e^{-2\sqrt{B n}}\right) \end{align*} as $n\to\infty$, as claimed. \end{proof}
Denote $s=\frac12+it$, and observe that with this substitution the Mellin transform representation \eqref{eq:riemannxi-mellintrans} for $\xi(s)$ becomes the statement that \begin{equation*} \Xi(t) =\int_0^\infty \omega(x)x^{-\frac34+\frac{it}{2}}\,dx. \end{equation*} The idea behind the expansion \eqref{eq:fn-expansion} is the simple yet powerful fact that the integration kernel $x^{-\frac34+\frac{it}{2}}$ can be expanded in a very specific way in an infinite series related to the generating function \eqref{eq:fn-genfun}. More precisely, for any $x>0$ we have that \begin{align} \label{eq:int-kernel-expansion} x^{s/2-1} & = x^{-\frac34+\frac{it}{2}} = \frac{2\sqrt{2}}{(x+1)^{3/2}} \left(\frac{2x}{x+1}\right)^{-\frac34+\frac{it}{2}} \left(\frac{2}{x+1}\right)^{-\frac34-\frac{it}{2}} \\ &= \frac{2\sqrt{2}}{(x+1)^{3/2}} \cdot \left((1-iz)^{-\frac34+\frac{it}{2}} (1+iz)^{-\frac34-\frac{it}{2}}
\right)_{\Big| z=i\frac{x-1}{x+1}} \nonumber \\ &= \frac{2\sqrt{2}}{(x+1)^{3/2}} \sum_{n=0}^\infty f_n\left(\frac{t}{2}\right) \left(i \,\frac{x-1}{x+1}\right)^n
= \frac{2\sqrt{2}}{(x+1)^{3/2}} \sum_{n=0}^\infty i^n f_n\left(\frac{t}{2}\right) \left(\,\frac{x-1}{x+1}\right)^n. \nonumber \end{align} One can now get \eqref{eq:fn-expansion} as a formal identity by multiplying the first and last expressions in this chain of equations by $\omega(x)$ and integrating both sides over $(0,\infty)$, then using the fact that the odd-indexed coefficients $c_{2n+1}$ vanish.
To rigorously justify this formal calculation and obtain the more precise rate of convergence estimate \eqref{eq:fn-expansion-errorbound}, we now make use of the technical bounds from Lemmas~\ref{lem:fn-easybound} and \ref{lem:fn-easybound2}. Using the above infinite series representation of the kernel $x^{-\frac34+\frac{it}{2}}$, we see that \begin{align*}
\Bigg|\Xi(t) - \sum_{n=0}^N & (-1)^{n}c_{2n} f_{2n}\left(\frac{t}{2}\right)\Bigg|
=
\left|\Xi(t) - \sum_{n=0}^{2N} i^n c_n f_n\left(\frac{t}{2}\right)\right| \\ &=
\left|\int_0^\infty \omega(x) \left(x^{-\frac34+\frac{it}{2}} - \frac{2\sqrt{2}}{(x+1)^{3/2}} \sum_{n=0}^{2N} f_n\left(\frac{t}{2}\right)\left( i \frac{x-1}{x+1}\right)^n \right)
\,dx\right| \\ &\leq \int_0^\infty \omega(x)
\left|x^{-\frac34+\frac{it}{2}} - \frac{2\sqrt{2}}{(x+1)^{3/2}} \sum_{n=0}^{2N} f_n\left(\frac{t}{2}\right)\left( i \frac{x-1}{x+1}\right)^n
\right| \,dx \\ &= 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\left| \sum_{n=2N+1}^{\infty} f_n\left(\frac{t}{2}\right) \left(i \frac{x-1}{x+1} \right)^n
\right| \,dx \\ &\leq 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\sum_{n=2N+1}^{\infty} \left|f_n\left(\frac{t}{2}\right)\right| \cdot\left| i \frac{x-1}{x+1}
\right|^n \,dx \\ &\leq 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\sum_{n=2N+1}^{\infty} C_1 e^{C_2 n^{1/3}} \left| \frac{x-1}{x+1}
\right|^n \,dx \\ &= 2\sqrt{2} \sum_{n=2N+1}^{\infty} C_1 e^{C_2 n^{1/3}} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\left| \frac{x-1}{x+1}
\right|^n \,dx \\ & = 4\sqrt{2} \sum_{n=2N+1}^{\infty} C_1 e^{C_2 n^{1/3}} \int_1^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left( \frac{x-1}{x+1} \right)^n \,dx, \end{align*} where $C_1,C_2$ are the constants from Lemma~\ref{lem:fn-easybound} (associated with the compact set $K$ over which we are allowing $t$ to range); the last step follows from \eqref{eq:omega-int-powersn-symmetry}. Now note that, by \eqref{eq:omegax-asym-xinfty}, $\frac{\omega(x)}{(x+1)^{3/2}} = O\left(\sqrt{x}e^{-\pi x}\right) = O\left(e^{-\pi x/2}\right)$ as $x\to\infty$, so we can apply Lemma~\ref{lem:fn-easybound2} (with $B=\pi/2$) to the integrals, to get that the last expression in the above chain of inequalities is bounded by $$ 4\sqrt{2} \sum_{n=2N+1}^{\infty} C_1 e^{C_2 n^{1/3}} \cdot C e^{-2 \sqrt{\pi n/2}}, $$ and this is easily seen to be $O\left(e^{-\sqrt{\pi N}}\right)$ as $N\to\infty$. This proves \eqref{eq:fn-expansion-errorbound} and completes the proof of Theorem~\ref{THM:FN-EXPANSION}. \qed
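Although not needed for the proof, the expansion \eqref{eq:fn-expansion} is easy to test numerically. The following Python sketch (assuming \texttt{mpmath}, and assuming the series representation $\omega(x)=\sum_{k\ge1}(2\pi^2k^4x^2-3\pi k^2x)e^{-\pi k^2x}$, which is the series used in the first proof of Proposition~\ref{prop:aofr-explicit} below) evaluates $\Xi(t)$ directly from its Mellin representation, folded onto $[1,\infty)$ using the relation $\omega(1/x)=\sqrt{x}\,\omega(x)$ (the symmetry underlying \eqref{eq:omega-int-powersn-symmetry}), and compares it with a partial sum of \eqref{eq:fn-expansion}; the two printed numbers should coincide up to the truncation and quadrature errors.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 40

def omega(x):
    # series representation of omega(x), assumed from the text
    return mp.nsum(lambda k: (2*mp.pi**2*k**4*x**2 - 3*mp.pi*k**2*x)*mp.exp(-mp.pi*k**2*x),
                   [1, mp.inf])

def Xi(t):
    # Xi(t) = int_0^oo omega(x) x^{-3/4+it/2} dx, folded onto [1,oo)
    return 2*mp.quad(lambda x: omega(x)*x**mp.mpf('-0.75')*mp.cos(t/2*mp.log(x)), [1, mp.inf])

def c2n(n):
    # the coefficients from eq:c2n-formula
    g = lambda x: omega(x)/(x + 1)**mp.mpf('1.5')*((x - 1)/(x + 1))**(2*n)
    return 4*mp.sqrt(2)*mp.quad(g, [1, 10, mp.inf])

def f_vals(x, N):
    # f_0(x), ..., f_N(x) from the recurrence n f_n = 2x f_{n-1} - (n - 1/2) f_{n-2}
    vals = [mp.mpf(1), 2*x]
    for n in range(2, N + 1):
        vals.append((2*x*vals[-1] - (n - mp.mpf('0.5'))*vals[-2])/n)
    return vals

t, N = mp.mpf(5), 30
fv = f_vals(t/2, 2*N)
partial = mp.fsum((-1)**n*c2n(n)*fv[2*n] for n in range(N + 1))
print(mp.nstr(Xi(t), 12), mp.nstr(partial, 12))
\end{verbatim}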
\section{Proof of Theorem~\ref{THM:FN-COEFF-ASYM}}
\label{sec:fnexp-proofasym}
Define a function $\phi(x)$, and numbers $Z_n$ and $\varepsilon_n$, by \begin{align*} \phi(x) & = \frac{\pi x(2\pi x-3)}{(x+1)^{3/2}}, \\ Z_n & = \int_1^\infty \phi(x)e^{-\pi x} \left(\frac{x-1}{x+1}\right)^{2n}\,dx, \\ \varepsilon_n & = \int_1^\infty \left(\frac{\omega(x)}{(x+1)^{3/2}}-\phi(x)e^{-\pi x}\right) \left(\frac{x-1}{x+1}\right)^{2n}\,dx, \end{align*} so that $c_{2n}$ in \eqref{eq:c2n-formula} can be rewritten as $c_{2n} = 4\sqrt{2}(Z_n + \varepsilon_n)$. We consider separately the asymptotic behavior of $Z_n$ and $\varepsilon_n$. For $\varepsilon_n$, note that we have \begin{equation} \label{eq:omegax-firstorder-asym}
\left|\frac{\omega(x)}{(x+1)^{3/2}} - \phi(x)e^{-\pi x}\right| = O\left(e^{-3\pi x}\right) \quad \textrm{as }x\to\infty, \end{equation} by \eqref{eq:omegaxdiff-asym-xinfty}. Thus, Lemma~\ref{lem:fn-easybound2} implies that \begin{equation} \label{eq:asym-estimate-epsilonn}
|\varepsilon_n| = O\left(e^{-2\sqrt{6\pi n}}\right) \quad \textrm{as }n\to\infty. \end{equation} The main asymptotic contribution to $c_{2n}$ comes from $Z_n$, and can be found using Laplace's method. Start by rewriting $Z_n$ as \begin{equation*} Z_n = \int_1^\infty \phi(x) \exp(\psi_{2n}(x))\,dx, \end{equation*} where $\psi_n(x)$ is the function defined in \eqref{eq:largedev-function} with $B=\pi$. Noting that, as was discussed in the proof of Lemma~\ref{lem:fn-easybound2}, $\psi_{2n}(x)$ has a unique global maximum point at \begin{equation*} \tau_n := x_{2n} = \sqrt{\frac{4n}{\pi}+1} = \sqrt{\frac{4n}{\pi}} + O\left(\frac{1}{\sqrt{n}}\right) \qquad \textrm{as }n\to\infty, \end{equation*} we further split this integral up into three parts, by writing $Z_n = Z_n^{(1)} + Z_n^{(2)} + Z_n^{(3)}$, with \begin{align} Z_n^{(1)} &= \int_1^{\tau_n-n^{3/10}} \phi(x) \exp(\psi_{2n}(x))\,dx, \label{eq:int-z1} \\ Z_n^{(2)} &= \int_{I_n} \phi(x) \exp(\psi_{2n}(x))\,dx, \label{eq:int-z2} \\ Z_n^{(3)} &= \int_{\tau_n+n^{3/10}}^\infty \phi(x) \exp(\psi_{2n}(x))\,dx, \label{eq:int-z3} \end{align} where $I_n$ denotes the interval $[\tau_n-n^{3/10},\tau_n+n^{3/10}]$.
The following calculus facts are straightforward to check; their verification is left to the reader: \begin{enumerate} \item $\phi(x)$ is monotone increasing on $[1,\infty)$. \item $\psi_{2n}(x)$ is monotone increasing on $[1,\tau_n]$ and monotone decreasing on $[\tau_n,\infty)$. \item $\psi_{2n}(x)$ is concave on $[1,\infty)$. \item We have the asymptotic relations \begin{align*} V_n &:= \psi_{2n}(\tau_n) = -\pi \tau_n + 2n \log\left( \frac{\tau_n-1}{\tau_n+1}\right) = -4\sqrt{\pi n} + O\left(\frac{1}{\sqrt{n}}\right), \\ D_n &:= -\psi_{2n}''(\tau_n) = \frac{\pi^2}{2n} \tau_n = \pi^{3/2} \frac{1}{\sqrt{n}} + O\left(\frac{1}{n^{3/2}}\right), \\ E_n &:= \phi(\tau_n) = 2\sqrt{2} \pi^{7/4} n^{1/4} + O\left(\frac{1}{n^{1/4}}\right), \\ K_n &:= \psi_{2n}'(\tau_n + n^{3/10}) = -\pi^{3/2} n^{-1/5} + O\left(\frac{1}{n^{2/5}}\right) \end{align*} as $n\to\infty$.
\item We have the relation $ \psi_{2n}'''(x) = \frac{8n(3x^2+1)}{(x^2-1)^3}. $ Consequently \begin{equation*}
\sup_{x\in I_n} |\psi_{2n}'''(x)| = O\left(\frac{1}{n}\right) \qquad \textrm{as }n\to\infty, \end{equation*} which implies that the Taylor expansion of $\psi_{2n}(x)$ around $x=\tau_n$ can be written as \begin{align*}
\psi_{2n}(x) = V_n -\frac12 D_n (x-\tau_n)^2 + O\left(\frac{|x-\tau_n|^3}{n}\right), \qquad (x\in I_n) \end{align*} where the constant implicit in the big-$O$ term does not depend on $x$ or $n$.
\item We have \begin{equation*}
\sup_{x\in I_n} \left| \frac{\phi(x)}{E_n}-1\right| = O\left(\frac{1}{n^{1/5}}\right) \qquad \textrm{as }n\to\infty. \end{equation*}
\end{enumerate}
We now estimate the integrals \eqref{eq:int-z1}--\eqref{eq:int-z3}. For $Z_n^{(1)}$, since $\phi(x)$ and $\psi_{2n}(x)$ are increasing on $[1,\tau_n]$, we have that \begin{align} \label{eq:asym-estimate-z1}
|Z_n^{(1)}| &\leq (\tau_n-n^{3/10}-1) \phi(\tau_n-n^{3/10}) \exp\left( \psi_{2n}(\tau_n-n^{3/10}) \right) \\ &\leq O\left(n^{3/4}\right) \exp\left(V_n -\frac12 D_n n^{3/5} + O\left( \frac{n^{9/10}}{n} \right) \right) = O\left( \frac{1}{n^2} e^{-4\sqrt{\pi n}} \right) \nonumber \end{align} as $n\to\infty$.
For $Z_n^{(3)}$, we use the fact that \begin{equation*} \psi_{2n}(x) \leq \psi_{2n}(\tau_n+n^{3/10}) + K_n (x-\tau_n-n^{3/10}) \end{equation*} for all $x\ge \tau_n+n^{3/10}$ (which follows from the concavity of $\psi_{2n}(x)$) to write \begin{align} \label{eq:asym-estimate-z3}
|Z_n^{(3)}| &\leq \int_{\tau_n+n^{3/10}}^\infty \phi(x) \exp\Bigg[ \psi_{2n}(\tau_n+n^{3/10}) + K_n (x-\tau_n-n^{3/10})\Bigg] \,dx \\ \nonumber &\leq \exp\left[ V_n - \frac12 D_n n^{3/5} + O\left(\frac{n^{9/10}}{n}\right) \right] \\ \nonumber &\ \ \ \quad\qquad \times \int_{\tau_n+n^{3/10}}^\infty O\left( \sqrt{x} \right) \exp\Bigg[ -\left(1+O\left(\frac{1}{n^{1/5}}\right)\right)
\pi^{3/2} n^{-1/5} (x-\tau_n-n^{3/10}) \Bigg] \,dx \\ \nonumber &= e^{-4\sqrt{\pi n}} O\left(e^{-n^{1/20}}\right) = O\left(\frac{1}{n^2}e^{-4\sqrt{\pi n}}\right). \end{align}
Finally, to obtain the asymptotics of $Z_n^{(2)}$, we make the change of variables $u=\sqrt{D_n}(x-\tau_n)$ in the integral \eqref{eq:int-z2}, to get that \begin{align} \label{eq:asym-estimate-z2} Z_n^{(2)} &= \int_{-\sqrt{D_n} n^{3/10}}^{\sqrt{D_n} n^{3/10}} \phi\left(\tau_n + \frac{u}{\sqrt{D_n}} \right) \exp\left( \psi_{2n}\left(\tau_n + \frac{u}{\sqrt{D_n}} \right) \right)\,\frac{du}{\sqrt{D_n}} \\ &= \frac{1}{\sqrt{D_n}}\int_{-\sqrt{D_n} n^{3/10}}^{\sqrt{D_n} n^{3/10}} \phi\left(\tau_n + \frac{u}{\sqrt{D_n}} \right) \exp\left[ V_n - \frac12 u^2 + O\left(
\frac{|u|^3}{n D_n^{3/2}} \right) \right] \,du \nonumber \\ &= \left(1+O\left(\frac{1}{n^{1/4}}\right)\right) \pi^{-3/4} n^{1/4} \times \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) E_n \nonumber \\ & \qquad \times \left(1+O\left(\frac{1}{\sqrt{n}}\right)\right) e^{-4\sqrt{\pi n}} \times \int_{-\sqrt{D_n} n^{3/10}}^{\sqrt{D_n} n^{3/10}} \exp\left[-u^2/2 + O\left(\frac{1}{n^{1/10}}\right)\right]\,du \nonumber \\ &= \left(1+O\left(\frac{1}{n^{1/10}}\right)\right) \pi^{-3/4} n^{1/4} \times 2\sqrt{2}\pi^{7/4} n^{1/4} \times e^{-4\sqrt{\pi n}}
\left(\sqrt{2\pi}- O\left(\exp\left(-\frac12 D_n n^{3/5}\right)\right)\right) \nonumber \\ &= \left(1+O\left(\frac{1}{n^{1/10}}\right)\right) 4 \pi^{3/2} n^{1/2} e^{-4\sqrt{\pi n}}, \nonumber \end{align} where we once again used \eqref{eq:gaussian-tail-bound} to account for the error arising from adding the tails of the Gaussian integral.
Since $c_{2n} = 4\sqrt{2}\left(\varepsilon_n + Z_n^{(1)}+Z_n^{(2)}+Z_n^{(3)}\right)$, combining \eqref{eq:asym-estimate-epsilonn}, \eqref{eq:asym-estimate-z1}, \eqref{eq:asym-estimate-z3} and \eqref{eq:asym-estimate-z2} gives the asymptotic formula \eqref{eq:fn-coeff-asym}. The proof of Theorem~\ref{THM:FN-COEFF-ASYM} is complete. \qed
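As with the Hermite coefficients, the asymptotic formula \eqref{eq:fn-coeff-asym} can be illustrated numerically. The sketch below (assuming \texttt{mpmath}, and again assuming the series representation $\omega(x)=\sum_{k\ge1}(2\pi^2k^4x^2-3\pi k^2x)e^{-\pi k^2x}$) evaluates $c_{2n}$ from \eqref{eq:c2n-formula} by quadrature and prints its ratio to the leading term $16\sqrt{2}\,\pi^{3/2}\sqrt{n}\,e^{-4\sqrt{\pi n}}$; by Theorem~\ref{THM:FN-COEFF-ASYM} the ratio tends to $1$, though possibly only slowly, in keeping with the $O(n^{-1/10})$ error term.
\begin{verbatim}
import mpmath as mp
mp.mp.dps = 50

def omega(x):
    return mp.nsum(lambda k: (2*mp.pi**2*k**4*x**2 - 3*mp.pi*k**2*x)*mp.exp(-mp.pi*k**2*x),
                   [1, mp.inf])

def c2n(n):
    # eq:c2n-formula; an extra quadrature node is placed at the maximum of the integrand
    tau = mp.sqrt(4*n/mp.pi + 1)
    g = lambda x: omega(x)/(x + 1)**mp.mpf('1.5')*((x - 1)/(x + 1))**(2*n)
    return 4*mp.sqrt(2)*mp.quad(g, [1, tau, 4*tau, mp.inf])

def c2n_leading(n):
    # leading term of eq:fn-coeff-asym
    return 16*mp.sqrt(2)*mp.pi**mp.mpf('1.5')*mp.sqrt(n)*mp.exp(-4*mp.sqrt(mp.pi*n))

for n in (5, 20, 80):
    print(n, mp.nstr(c2n(n), 8), mp.nstr(c2n(n)/c2n_leading(n), 6))
\end{verbatim}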
\section{The Poisson flow associated with the $f_n$-expansion}
\label{sec:fn-poissonflow}
Motivated by the developments of \secref{sec:hermite-poissonpolya}, we now consider the Poisson flow \eqref{eq:poisson-flow-def} associated with the family $(f_n)_{n=0}^\infty$ of orthogonal polynomials, which in this section we will denote by $\mathcal{F}$. Recall that in the case of the Hermite expansion, we showed that the Poisson flow could be understood as the family of Fourier transforms of functions obtained from the function $\Phi(x)$ by performing a simple operation (refer to \eqref{eq:hermite-poissonpolya-calc}). One might wonder if something similar (or perhaps even more interesting) happens in the case of the Poisson flow associated with the family $\mathcal{F}$. The answer is given in the following result.
\begin{thm}[Mellin transform representation of the Poisson flow] \label{thm:fn-poisson-mellin} For $0<r<1$, the function $X_r^{\mathcal{F}}(t)$ has the Mellin transform representation \begin{equation} \label{eq:fn-poisson-mellin} X_r^{\mathcal{F}}(t) = \int_0^\infty \omega_r(x) x^{-\frac34+\frac{i t}{2}}\,dx \qquad (t\in \mathbb{C}), \end{equation} where we define \begin{equation} \label{eq:fn-omega-compressed} \omega_r(x) = \begin{cases} \frac{1+\eta}{\sqrt{1-\eta}} \frac{1}{\sqrt{1-\eta x}} \omega\left(\frac{x-\eta}{1-\eta x}\right) & \textrm{if }\eta < x<1/\eta, \\ 0 & \textrm{otherwise,} \end{cases} \end{equation} making use of the notation \begin{equation} \label{eq:compression-eta-notation} \eta = \frac{1-r}{1+r}. \end{equation} \end{thm}
Note that the map $x\mapsto \frac{x-\eta}{1-\eta x}$ maps the interval $(\eta,1/\eta)$ bijectively onto $(0,\infty)$, so the function $\omega_r(x)$ contains the same ``frequency information'' as $\omega(x)$, but compressed into a finite interval. In particular, a notable feature of this result, which stands in contrast to what we saw in the case of the Poisson flow associated with the Hermite polynomials, is that for $r<1$ the function $X_r^{\mathcal{F}}(t)$ is now the Fourier transform of a function with bounded support; that is, $X_r^{\mathcal{F}}(t)$ is an entire function of exponential type. It is intriguing to speculate that this might make the problem of understanding where the zeros of $X_r^{\mathcal{F}}(t)$ are located easier than for the case of the original xi function $\Xi(t)$.
\begin{proof} The derivation starts with the formula \eqref{eq:poisson-flow-expansion}. Specializing this to the expansion \eqref{eq:fn-expansion} and substituting the defining formula \eqref{eq:def-fn-coeffs} for the coefficients $c_n$, we have that \begin{align*} X_r^{\mathcal{F}}(t) &= \sum_{n=0}^\infty i^n c_n r^n f_n\left(\frac{t}{2}\right)
= 2\sqrt{2} \sum_{n=0}^\infty \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n\,dx \cdot r^n f_n\left(\frac{t}{2}\right) \\&= 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \sum_{n=0}^\infty f_n\left(\frac{t}{2}\right) \left(i r \frac{x-1}{x+1}\right)^n\,dx \\& = 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\left(\sum_{n=0}^\infty f_n\left(\frac{t}{2}\right) {z^n}\right)_{\raisebox{3pt}{$\Big|$}z=i r \frac{x-1}{x+1}}\,dx. \end{align*} Inside the integrand we have an expression involving the generating function \eqref{eq:fn-genfun} of the polynomials $f_n(x)$. Substituting the formula for this generating function (as we did in \eqref{eq:int-kernel-expansion}, which is essentially the special case $r=1$ of the current computation) gives that \begin{align*} X_r^{\mathcal{F}}(t) &= 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left((1-iz)^{-\frac34+\frac{it}{2}} (1+iz)^{-\frac34-\frac{it}{2}} \right)
_{\raisebox{3pt}{$\Big|$}z=i r \frac{x-1}{x+1}} \,dx \\ &= 2\sqrt{2} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left( \frac{1-r+x(1+r)}{x+1} \right)^{-\frac34+\frac{it}{2}}
\left( \frac{1+r+x(1-r)}{x+1} \right)^{-\frac34-\frac{it}{2}} \,dx \\ &= 2\sqrt{2} \int_0^\infty \omega(x) (1+r)^{-\frac34+\frac{it}{2}} \left(x+\frac{1-r}{1+r}\right)^{-\frac34+\frac{it}{2}}
(1+r)^{-\frac34-\frac{it}{2}} \left(1+\frac{1-r}{1+r}x\right)^{-\frac34-\frac{it}{2}} \,dx \\ &= \frac{2\sqrt{2}}{(1+r)^{3/2}} \int_0^\infty \omega(x) \frac{1}{(1+\eta x)^{3/2}} \left( \frac{x+\eta}{1+\eta x}\right)^{-\frac34+\frac{it}{2}}\,dx \\ &= (1+\eta)^{3/2} \int_0^\infty \omega(x) \frac{1}{(1+\eta x)^{3/2}} \left( \frac{x+\eta}{1+\eta x}\right)^{-\frac34+\frac{it}{2}}\,dx. \end{align*} We have thus expressed $X_r^{\mathcal{F}}(t)$ as a sort of modified Mellin transform of $\omega(x)$. But this last integral formula can be transformed to an ordinary Mellin transform by making the change of variables $x=\frac{u-\eta}{1-\eta u}$ in the last integral. The reader can verify without difficulty that this yields the Mellin transform \eqref{eq:fn-poisson-mellin} of the function given in \eqref{eq:fn-omega-compressed}. \end{proof}
In the next result we show that the Poisson flow satisfies an interesting dynamical evolution law, analogous to the time-reversed heat equation \eqref{eq:timerev-heat-equation} satisfied by the P\'olya-De Bruijn flow. In this case the evolution law is not a partial differential equation, but rather a \firstmention{differential difference equation (DDE)}. To make the equation homogeneous in the ``time'' variable, it is most convenient to perform a change of variables, reparametrizing the time variable $r$ by denoting $r = e^{-\tau}$ (with $\tau\ge0$).
\begin{thm}[Differential difference equation for the Poisson flow] \label{thm:poissonflow-dde} The function $M(\tau,t):=X_{e^{-\tau}}^{\mathcal{F}}(t)$ satisfies the differential difference equation \begin{equation} \label{eq:poissonflow-dde} \frac{\partial M}{\partial \tau} = \frac34 M(\tau,t) - \frac12\left(\frac34-\frac{it}{2}\right) M(\tau,t+2i) - \frac12\left(\frac34+\frac{it}{2}\right) M(\tau,t-2i) \quad (\tau >0, t\in \mathbb{C}). \end{equation} \end{thm}
\begin{proof} The computation is analogous to the derivation of the time-reversed heat equation at the end of \secref{sec:hermite-poissonpolya}, except that instead of using the differential equation satisfied by the Hermite polynomials we use the difference equation \eqref{eq:fn-diffeq} satisfied by the polynomials $f_n(x)$. We have, again starting with \eqref{eq:poisson-flow-expansion} with the substitution $r=e^{-\tau}$, \begin{align*} \frac{\partial M}{\partial \tau} &= \frac{\partial}{\partial \tau} \left( \sum_{n=0}^\infty i^n c_n e^{-n\tau} f_n\left(\frac{t}{2}\right) \right)
= \sum_{n=0}^\infty i^n c_n (-n) e^{-n\tau} f_n\left(\frac{t}{2}\right) \\ &= \sum_{n=0}^\infty i^n c_n e^{-n\tau} \bigg( \frac34 f_n\left(\frac{t}{2}\right) -\frac12\left(\frac34+\frac{it}{2}\right) f_n\left(\frac{t}{2}-i\right)
-\frac12\left(\frac34-\frac{it}{2}\right) f_n\left(\frac{t}{2}+i\right) \bigg) \\ &= \frac34 \sum_{n=0}^\infty i^n c_n e^{-n\tau} f_n\left(\frac{t}{2}\right) -\frac12\left(\frac34+\frac{it}{2}\right) \sum_{n=0}^\infty i^n c_n e^{-n\tau} f_n\left(\frac{t}{2}-i\right)
-\frac12\left(\frac34-\frac{it}{2}\right) \sum_{n=0}^\infty i^n c_n e^{-n\tau} f_n\left(\frac{t}{2}+i\right) \\ &= \frac34 M(\tau,t) - \frac12\left(\frac34-\frac{it}{2}\right) M(\tau,t+2i) - \frac12\left(\frac34+\frac{it}{2}\right) M(\tau,t-2i). \end{align*}
\end{proof}
\section{Evolution of the zeros under the Poisson flow}
\label{sec:zeros-evolution}
The differential difference equation \eqref{eq:poissonflow-dde} opens up the way to an analysis of the dynamical evolution of the zeros of the functions $X_r^{\mathcal{F}}(t)$ as a function of $r$, in a manner analogous to how the time-reversed heat equation \eqref{eq:timerev-heat-equation} made it possible to write a system of coupled ODEs satisfied by the P\'olya-De Bruijn flow, which played a useful role in the investigations of the De Bruijn-Newman constant (see \cite[Lemma~5.18, p.~83]{broughan}). Our next goal is to derive this evolution law, again using the more convenient time parameter $\tau$. To avoid technicalities involving the behavior of entire functions (and to generalize the question slightly, which also seems potentially useful), we switch in this section from the Riemann xi function to the simpler setting of polynomials.
Let $z_1,\ldots,z_n \in \mathbb{C}$ be distinct complex numbers. Let \begin{equation} \label{eq:poly-initial-condition} p(t) = \prod_{k=1}^n (t-z_k), \end{equation} and consider the function $M_p(\tau,t)$ defined as the solution to the DDE \eqref{eq:poissonflow-dde} with initial condition $M_p(0,t)=p(t)$. To see that such an object exists, write the expansion \begin{equation*} p(t) = \sum_{k=0}^n \gamma_k f_k\left(\frac{t}{2}\right) \end{equation*} in the linear basis of polynomials $(f_k(t/2))_{k=0}^n$. Then $M_p(\tau,t)$ is given by \begin{equation} \label{eq:poly-poisson-evolution} M_p(\tau,t) = \sum_{k=0}^n \gamma_k e^{-k \tau} f_k\left(\frac{t}{2}\right) \end{equation} (the proof is a repetition of the calculation in the proof of Theorem~\ref{thm:poissonflow-dde} above, with both proofs being based on the simple observation that each of the functions $m_k(\tau,t)= e^{-k\tau} f_k(t/2)$ for $k\ge 0$ is a solution to \eqref{eq:poissonflow-dde}). Proving uniqueness is left as an exercise. We refer to the function $M_p(\tau,t)$ as the \firstmention{Poisson flow (associated with the polynomial family $\mathcal{F}$) with initial condition $p$}.
For any fixed $\tau \in \mathbb{R}$, the function $t\mapsto M_p(\tau,t)$ is a polynomial of degree $n$ with leading coefficient $e^{-n\tau}$ (to see this, compare \eqref{eq:poly-poisson-evolution} at times $\tau$ and $0$, taking into account \eqref{eq:poly-initial-condition}). Denote its zeros by $Z_1(\tau),\ldots,Z_n(\tau)$, and note that while they are defined only up to ordering, in the neighborhood of any fixed time $\tau_0$ for which the zeros are distinct one can pick the ordering so that $Z_k(\tau)$ are smooth functions of~$\tau$.
\begin{thm}[Evolution equations for the zeros under the Poisson flow] In the neighborhood of any $\tau_0$ as above, the functions $(Z_k(\tau))_{k=1}^n$ satisfy the system of coupled ordinary differential equations \begin{align*} \frac{dZ_k}{d\tau} &= \frac12 \Bigg[ \left( Z_k + \frac{3i}{2} \right)\prod_{1\le j\le n \atop j\neq k} \left(1+\frac{2i}{Z_k-Z_j}\right)
+ \left( Z_k - \frac{3i}{2} \right)\prod_{1\le j\le n \atop j\neq k} \left(1-\frac{2i}{Z_k-Z_j}\right) \Bigg] \qquad (1\le k\le n). \end{align*} \end{thm}
\begin{proof} The fundamental relation defining the $k$th zero $Z_k$ is \begin{equation*} M_p(\tau, Z_k(\tau)) = 0. \end{equation*} Differentiating this with respect to $\tau$ gives \begin{align*} 0 &= \frac{d}{d\tau}\Big(M_p(\tau,Z_k(\tau))\Big) = \frac{\partial M_p}{\partial \tau}(\tau,Z_k(\tau)) + \frac{\partial M_p}{\partial t}(\tau,Z_k(\tau))\frac{dZ_k}{d\tau} \end{align*} (where $\frac{\partial M_p}{\partial t}$ refers to the partial derivative with respect to the second variable). By \eqref{eq:poissonflow-dde}, this expands to \begin{align*} 0 &= \frac34 M_p(\tau,Z_k) - \frac12\left(\frac34-\frac{iZ_k}{2}\right) M_p(\tau,Z_k+2i) - \frac12\left(\frac34+\frac{iZ_k}{2}\right) M_p(\tau,Z_k-2i)
+ \frac{\partial M_p}{\partial t}(\tau,Z_k(\tau))\frac{dZ_k}{d\tau} \\ & = - \frac12\left(\frac34-\frac{iZ_k}{2}\right) M_p(\tau,Z_k+2i) - \frac12\left(\frac34+\frac{iZ_k}{2}\right) M_p(\tau,Z_k-2i)
+ \frac{\partial M_p}{\partial t}(\tau,Z_k(\tau))\frac{dZ_k}{d\tau}. \end{align*} Now, $M_p(\tau,t) = e^{-n \tau} \prod_{j=1}^n (t-Z_j(\tau))$, so \begin{equation*} \frac{\partial M_p}{\partial t}(\tau,Z_k(\tau)) = e^{-n\tau} \prod_{1\le j\le n \atop j\neq k} (Z_k-Z_j). \end{equation*} It follows that \begin{align*} \frac{dZ_k}{d\tau} &= \frac12 \frac{1}{\frac{\partial M_p}{\partial t}(\tau,Z_k(\tau))} \left[ \left(\frac34-\frac{iZ_k}{2}\right) M_p(\tau,Z_k+2i) + \left(\frac34+\frac{iZ_k}{2}\right) M_p(\tau,Z_k-2i) \right] \\ &= \frac12 \prod_{1\le j\le n \atop j\neq k} (Z_k-Z_j)^{-1}
\left[ \left(\frac34-\frac{iZ_k}{2}\right) \prod_{j=1}^n (Z_k+2i-Z_j) + \left(\frac34+\frac{iZ_k}{2}\right) \prod_{j=1}^n (Z_k-2i-Z_j) \right] \\ &= \frac12 \Bigg[ 2i\left(\frac34-\frac{iZ_k}{2}\right) \prod_{1\le j\le n\atop j\neq k} \frac{Z_k+2i-Z_j}{Z_k-Z_j}
+ (-2i)\left(\frac34+\frac{iZ_k}{2}\right) \prod_{1\le j\le n\atop j\neq k} \frac{Z_k-2i-Z_j}{Z_k-Z_j} \Bigg] \\ &= \frac12 \Bigg[ \left( Z_k + \frac{3i}{2} \right)\prod_{1\le j\le n \atop j\neq k} \left(1+\frac{2i}{Z_k-Z_j}\right)
+ \left( Z_k - \frac{3i}{2} \right)\prod_{1\le j\le n \atop j\neq k} \left(1-\frac{2i}{Z_k-Z_j}\right) \Bigg], \end{align*} as claimed. \end{proof}
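The evolution equations are easy to test numerically. The following Python sketch (assuming \texttt{numpy}; the explicit forms of $f_0,\dots,f_3$ used below are obtained from $f_0=1$, $f_1(x)=2x$ and the recurrence \eqref{eq:fn-recurrence}, $n f_n(x) = 2x f_{n-1}(x) - (n-\tfrac12) f_{n-2}(x)$) expands the monic cubic with zeros $1,3,6$ in the basis $\{f_k(t/2)\}_{k=0}^3$, and compares a finite-difference approximation of $\frac{dZ_k}{d\tau}$ at $\tau=0$ with the right-hand side of the system above; the two printed vectors should agree to several digits.
\begin{verbatim}
import numpy as np

# f_k(t/2) for k = 0,...,3, written as polynomials in t
f = [np.poly1d([1.0]),
     np.poly1d([1.0, 0.0]),
     np.poly1d([0.5, 0.0, -0.75]),
     np.poly1d([1/6, 0.0, -13/12, 0.0])]

def pad(q, d=4):
    # coefficient vector of length d (highest degree first)
    c = np.zeros(d)
    c[d - len(q.coeffs):] = q.coeffs
    return c

zeros0 = np.array([1.0, 3.0, 6.0])
p = np.poly1d(np.poly(zeros0))                      # monic cubic with these zeros
gamma = np.linalg.solve(np.column_stack([pad(fk) for fk in f]), pad(p))

def M(tau):
    # Poisson flow with initial condition p
    return sum(g*np.exp(-k*tau)*f[k] for k, g in enumerate(gamma))

def rhs(Z):
    # right-hand side of the evolution equations above
    out = []
    for k, z in enumerate(Z):
        others = np.delete(Z, k)
        out.append(0.5*((z + 1.5j)*np.prod(1 + 2j/(z - others))
                        + (z - 1.5j)*np.prod(1 - 2j/(z - others))))
    return np.array(out)

h = 1e-6
Z0 = np.sort_complex(M(0.0).roots)
Zh = np.sort_complex(M(h).roots)
print((Zh - Z0)/h)      # finite-difference approximation of dZ_k/dtau at tau = 0
print(rhs(Z0))          # formula from the theorem; should agree to several digits
\end{verbatim}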
Our final result for this section is of a negative sort, illustrating another way in which the Poisson flow associated with the family $\mathcal{F}$ of orthogonal polynomials behaves differently from the P\'olya-De Bruijn flow. Specifically, it was mentioned in \secref{sec:hermite-poissonpolya} that the P\'olya-De Bruijn flow preserves the property of hyperbolicity. Our result shows that the Poisson flow associated with the family $\mathcal{F}$ does \emph{not}.
\begin{prop} \label{prop:poisson-notpreshyp} There exists a polynomial \begin{equation*} P(t) = \sum_{k=0}^n \gamma_k f_k\left(\frac{t}{2}\right), \end{equation*} and numbers $\tau_1>0$ and $\tau_2<0$, such that $P(t)$ has only real zeros, but the polynomials $t\mapsto M_P(\tau_1,t)$ and $t\mapsto M_P(\tau_2,t)$ both have non-real zeros. \end{prop}
\begin{proof} Take \begin{equation*} P(t) = \frac18(t-2)(t-2.01)(t-4) = \sum_{k=0}^3 \sigma_k f_k\left(\frac{t}{2}\right), \end{equation*} where \begin{equation*} (\sigma_0, \sigma_1, \sigma_2, \sigma_3) = \left( -\frac{5619}{1600}, \frac{83}{25}, - \frac{801}{400}, \frac34 \right), \end{equation*} and $\tau_1 = 0.1$ and $\tau_2 = -0.05$. Direct calculation of the zeros of $M_P(\tau_1,t)$ and $M_P(\tau_2,t)$ verifies the claim. \end{proof}
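For the reader who wants to reproduce this computation, here is a minimal Python sketch (assuming \texttt{numpy}), with the explicit forms of $f_0,\dots,f_3$ obtained from the recurrence \eqref{eq:fn-recurrence}. At $\tau=0$ it recovers the real zeros $2$, $2.01$, $4$ of $P(t)$, while for $\tau_1=0.1$ and $\tau_2=-0.05$ it reports a pair of non-real zeros.
\begin{verbatim}
import numpy as np

# f_k(t/2) for k = 0,...,3 as polynomials in t (from f_0 = 1, f_1(x) = 2x and the recurrence)
f = [np.poly1d([1.0]),
     np.poly1d([1.0, 0.0]),
     np.poly1d([0.5, 0.0, -0.75]),
     np.poly1d([1/6, 0.0, -13/12, 0.0])]

sigma = [-5619/1600, 83/25, -801/400, 3/4]

def M_P(tau):
    # M_P(tau, .) = sum_k sigma_k e^{-k tau} f_k(t/2)
    return sum(s*np.exp(-k*tau)*f[k] for k, s in enumerate(sigma))

for tau in (0.0, 0.1, -0.05):
    r = M_P(tau).roots
    tag = "non-real zeros" if np.max(np.abs(r.imag)) > 1e-8 else "all zeros real"
    print(tau, np.round(r, 4), tag)
\end{verbatim}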
One conclusion from Proposition~\ref{prop:poisson-notpreshyp} is that there does not seem to be an obvious way to define an analogue of the De Bruijn-Newman constant in the context of the $f_n$-expansion of the Riemann xi function.
\chapter{Radial Fourier self-transforms}
\label{ch:radial}
In this chapter we continue to probe deeper into the theory of the $f_n$-expansion of the Riemann xi function, by developing what will turn out to be an entirely new way of thinking about the expansion as arising out of the expansion of an elementary function $A(r)$ (described below in \eqref{eq:aofr-explicit}) in a natural orthogonal basis of functions related to the Laguerre polynomials $L_n^{1/2}(x)$. Along the way we will encounter several interesting new special functions and develop some new ideas, which are of independent interest, related to radial functions that are eigenfunctions of the Fourier transform, and their connections to a class of functions satisfying a symmetry property similar to (but weaker than) that satisfied by modular forms.
\section{Radial Fourier self-transforms on $\mathbb{R}^d$ and their construction from balanced functions}
\label{sec:rad-selftransforms}
A function $F:\mathbb{R}^d\to\mathbb{R}$ is called a \firstmention{radial function} if $F(\mathbf{x})$ depends only on the Euclidean norm $|\mathbf{x}|$. Given a radial function $F$, it is common to abuse notation slightly and write $F(\mathbf{x})=F(|\mathbf{x}|)$; that is, we use the same symbol to denote the function on $\mathbb{R}^d$ and the function (on $[0,\infty)$) of the norm through which the original radial function can be computed. Conversely, given a function $F:[0,\infty)\to\mathbb{R}$ it will sometimes be convenient to regard $F$ as a radial function on $\mathbb{R}^d$ for some specified value of $d$.
Let $\mathcal{F}_d$ denote the Fourier transform on $\mathbb{R}^d$, with the normalization \begin{equation*} \mathcal{F}_d(F)(\mathbf{y}) = \int_{\mathbb{R}^d} F(\mathbf{x}) e^{-2 \pi i\langle \mathbf{y}, \mathbf{x}\rangle} \, d\mathbf{x}. \end{equation*} It is well-known that the $d$-dimensional Fourier transform $\mathcal{F}_d(F)$ of a radial function $F$ is also a radial function, and can be expressed as a Hankel transform, namely as \begin{equation} \label{eq:radial-fourier-d} \mathcal{F}_d(F)(\rho) = 2\pi \rho^{1-d/2} \int_0^\infty F(r) r^{d/2} J_{d/2-1}(2\pi r \rho)\, dr \qquad (\rho \ge0), \end{equation} where \begin{equation*} J_\alpha(z) = \sum_{n=0}^\infty \frac{(-1)^n}{n! \, \Gamma(n+\alpha+1)} \left(\frac{z}{2}\right)^{2n+\alpha} \end{equation*} denotes the Bessel function; see \cite[Sec.~B.5]{grafakos}. The cases $d=1$ and $d=3$ of \eqref{eq:radial-fourier-d} are particularly simple (and of relevance to us, as we shall see). In those cases, the standard identities \begin{equation*} J_{1/2}(x) = \sqrt{\frac{2}{\pi}} \frac{\sin(x)}{\sqrt{x}}, \qquad J_{-1/2}(x) = \sqrt{\frac{2}{\pi}} \frac{\cos(x)}{\sqrt{x}} \end{equation*} mean that \eqref{eq:radial-fourier-d} can be rewritten as \begin{align} \label{eq:radial-fourier-1d} \mathcal{F}_1(F)(\rho) & = 2 \int_0^\infty F(r) \cos(2\pi r\rho)\,dr, \\ \label{eq:radial-fourier-3d} \mathcal{F}_3(F)(\rho) & = \frac{2}{\rho} \int_0^\infty F(r) r \sin(2\pi r\rho)\,dr. \end{align} Note that the case $d=1$ is simply a cosine transform; indeed, a radial function on $\mathbb{R}^d$ for $d=1$ is the same as an even function.
A function $F:\mathbb{R}^d\to\mathbb{R}$ is called a \firstmention{(Fourier) self-transform} if $\mathcal{F}_d(F) = F$. The Gaussian $F(r) = e^{-\pi r^2}$ is an important example of a self-transform (in any dimension!) which is also a radial function. More generally, through a trivial rescaling operation we see that the Fourier transform of a \emph{scaled} Gaussian $e^{-\pi c r^2}$ (with $c>0$) is given by \begin{equation*} \mathcal{F}_d(e^{-\pi c r^2})(\rho) = c^{-d/2} e^{-\pi \rho^2/c}. \end{equation*} This relation provides a general means for constructing a large class of radial self-transforms in $\mathbb{R}^d$ by taking a weighted average of scaled Gaussians (or a ``scale mixture'' of Gaussians, in probabilistic language), using a weighting function in which the contribution of the Gaussian scaled by a given scalar $c$ is suitably matched by that coming from the reciprocal scalar $1/c$. This sort of construction can be found for example in works by Hardy and Titchmarsh \cite{hardy-titschmarsh} and Barndorff-Nielsen et al.~\cite{barndorff-nielsen}. As discussed by Cohn \cite{cohn}, the same construction in the case where the weighting functions are modular forms motivated recent progress on the sphere packing problem (see also \cite{viazovska}).
For our purposes, the weighting functions we will consider are related to modular forms but are more general. Let $\alpha\ge 0$. If a function $f:(0,\infty)\to\mathbb{R}$ satisfies the relation \begin{equation*} f\left(\frac{1}{x}\right) = x^\alpha f(x) \qquad (x>0), \end{equation*} we say that $f$ is a \firstmention{reciprocally balanced function of weight $\alpha$}. (Usually, for convenience we will omit the adverb ``reciprocally'' and simply refer to $f$ as a balanced function of weight $\alpha$.) The following result is a trivial variant of the observation made in \cite[Eq.~(2.3)]{barndorff-nielsen}.
\begin{lem}[Constructing self-transforms from balanced functions] \label{lem:recbal-selftrans} Let $d\in\mathbb{N}$. Let $f(x)$ be a reciprocally balanced function of weight $2-d/2$, and define an associated function \begin{equation} \label{eq:gauss-trans} F(r) = \int_0^\infty f(x) e^{-\pi x r^2}\,dx \qquad (r>0). \end{equation} Then $F$, considered as a radial function on $\mathbb{R}^d$, is a Fourier self-transform, assuming its Fourier transform is well-defined. \end{lem}
\begin{proof} \begin{align*} \mathcal{F}_d(F)(\rho) & = \int_0^\infty f(x) \mathcal{F}_d\left(e^{-\pi x r^2}\right)(\rho) \,dx = \int_0^\infty f(x) x^{-d/2} e^{-\pi \rho^2/x}\,dx \\ & = \int_0^\infty f(1/y) y^{d/2} e^{-\pi y\rho^2}\, \frac{dy}{y^2} = \int_0^\infty f(y) e^{-\pi y\rho^2} \,dy = F(\rho). \end{align*}
Here the first equality interchanges the Fourier transform with the integral over $x$ (justified under the assumption that the Fourier transform of $F$ is well-defined), the third equality substitutes $x=1/y$, and the fourth uses the assumption that $f$ is balanced of weight $2-d/2$.
\end{proof}
Note that the relationship between $f(x)$ and $F(r)$ in \eqref{eq:gauss-trans} is simply that $F(r)$ is the Laplace transform $(\mathcal{L} f)(u)$ of $f(x)$, with the change of coordinates $u=\pi r^2$. It can also be interpreted as a group-theoretic convolution operation of the Gaussian function $r\mapsto e^{-\pi r^2}$ with the function $x\mapsto x^{-1}f(x^{-1})$ with respect to the multiplicative group structure on $\mathbb{R}_+$ equipped with the multiplicative Haar measure $\frac{dx}{x}$.
\section{The radial function $A(r)$ associated to $\omega(x)$}
\label{sec:rad-aofr}
We have encountered two balanced functions that play an important role in the study of the Riemann xi function: the function $\theta(x)$ (which is in fact a modular form), and the function $\omega(x)$ derived from it; both of those functions are balanced of weight $1/2$. We are mainly interested in $\omega(x)$, because it has better integrability properties and because the Riemann xi function is its Mellin transform. Define \begin{equation*} A(r) = \int_0^\infty \omega(x) e^{-\pi x r^2}\,dx. \end{equation*} Since $\omega(x)$ is balanced of weight $1/2$, Lemma~\ref{lem:recbal-selftrans} implies that $A(r)$ is a Fourier self-transform when considered as a radial function on $\mathbb{R}^3$. The next result gives an explicit formula for $A(r)$.
\begin{prop} \label{prop:aofr-explicit} $A(r)$ is given explicitly by \begin{equation} \label{eq:aofr-explicit} A(r) = \frac{d^2}{dr^2}\left( \frac{r}{4}\coth(\pi r)\right) = -\frac{\pi}{2}\frac{1}{\sinh^2(\pi r)} + \frac{\pi^2 r}{2} \frac{\cosh(\pi r)}{\sinh^3(\pi r)}. \end{equation} \end{prop}
We give two short proofs of Proposition~\ref{prop:aofr-explicit}. As we remarked in the last paragraph of the previous section, this result has an obvious interpretation as a calculation of the Laplace transform of $\omega(x)$. Several closely related calculations have appeared in the literature; see \cite[pp.~23--24]{bellman}, \cite[eq.~(2.17)]{biane-pitman-yor}, \cite[p.~168]{chung}, and especially eq.~(91) of \cite{pitman-yor-ejp}, which can be seen using the results of \cite{chung} to be equivalent to \eqref{eq:aofr-explicit}.
\begin{proof}[First proof of Proposition~\ref{prop:aofr-explicit}] We have directly from the definitions that \begin{align*} \int_0^\infty \omega(x) e^{-\pi r^2 x}\,dx & = \sum_{n=1}^\infty \Bigg[ 2\pi^2 n^4 \int_0^\infty x^2 e^{-\pi (r^2+n^2) x} \, dx
- 3\pi n^2 \int_0^\infty x e^{-\pi (r^2+n^2) x} \, dx \Bigg] \\ & = \sum_{n=1}^\infty \left(\frac{4\pi^2 n^4}{\pi^3(r^2+n^2)^3} - \frac{3\pi n^2}{\pi^2(r^2+n^2)^2}\right) \\& = \frac{1}{\pi}\sum_{n=1}^\infty \left( \frac{4n^4}{(r^2+n^2)^3} - \frac{3n^2}{(r^2+n^2)^2} \right) = \frac{1}{2\pi} \sum_{n=1}^\infty \frac{d^2}{dr^2} \left( \frac{r^2}{r^2+n^2} \right) \\ & = \frac{d^2}{dr^2}\left(\frac{1}{4\pi}+\frac{1}{2\pi}\sum_{n=1}^\infty \frac{r^2}{r^2+n^2}\right) = \frac{d^2}{dr^2}\left( \frac{r}{4}\coth(\pi r)\right). \end{align*} Here, the last equality follows from the classical identity \begin{equation*} \pi \coth(\pi r) = \frac{1}{r}+\sum_{n=1}^\infty \frac{2r}{r^2+n^2}, \end{equation*} the partial fraction decomposition of the hyperbolic cotangent function \cite[p.~12]{andrews-askey-roy}. This proves the first equality in \eqref{eq:aofr-explicit}; the second equality is a trivial verification, which we leave to the reader. \end{proof}
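As a numerical sanity check of Proposition~\ref{prop:aofr-explicit} (which plays no role in the proofs, and which assumes availability of the Python \texttt{mpmath} library), one can compare the Laplace-transform integral defining $A(r)$ with the closed form \eqref{eq:aofr-explicit}. The sketch below evaluates $\omega(x)$ using the series implicit in the computation above, together with the weight-$1/2$ balancedness of $\omega$ to handle small values of $x$; the sample values of $r$ are arbitrary.

\begin{verbatim}
# Numerical check of Proposition (aofr-explicit); assumes mpmath.
import mpmath as mp
mp.mp.dps = 25

def omega(x):
    # series appearing in the first proof; for x < 1 use the weight-1/2 balancedness
    x = mp.mpf(x)
    if x == 0:
        return mp.mpf(0)
    if x < 1:
        return omega(1/x) / mp.sqrt(x)
    return mp.nsum(lambda n: (2*mp.pi**2*n**4*x**2 - 3*mp.pi*n**2*x)*mp.exp(-mp.pi*n**2*x),
                   [1, mp.inf])

def A_integral(r):
    return mp.quad(lambda x: omega(x) * mp.exp(-mp.pi*x*r**2), [0, mp.inf])

def A_closed(r):
    s, c = mp.sinh(mp.pi*r), mp.cosh(mp.pi*r)
    return -mp.pi/(2*s**2) + mp.pi**2*r*c/(2*s**3)

for r in ['0.5', '1', '1.5']:
    r = mp.mpf(r)
    print(r, A_integral(r) - A_closed(r))   # differences should be ~ 0
\end{verbatim}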
An alternative proof of Proposition~\ref{prop:aofr-explicit} is based on a calculation of the moments of $\omega(x)$, which seems worth recording separately.
\begin{lem} \label{lem:omega-moments} For $n\ge 0$ we have the relation \begin{equation} \label{eq:omega-moments} \int_0^\infty \omega(x) x^n\,dx = \frac{(-1)^n (4\pi)^{n+1} n!}{4(2n)!}B_{2n+2}, \end{equation} where $(B_k)_{k=0}^\infty$ denotes the Bernoulli numbers. \end{lem}
The relation \eqref{eq:omega-moments} is equivalent to the bottom-right entry in Table~1 of \cite[p.~442]{biane-pitman-yor} (see also \cite{pitman-yor-cjm} where several analogous formulas are derived).
\begin{proof} Recalling Euler's formula \begin{equation*} \zeta(2m) = \frac{(-1)^{m-1}(2\pi)^{2m}}{2(2m)!}B_{2m}, \end{equation*} we observe that for integer $n\ge 0$, \begin{align*} \int_0^\infty \omega(x) x^n\,dx
& = \left.\left[\int_0^\infty \omega(x) x^{s/2-1}\,dx \right]\right|_{s=2n+2} = \xi(2n+2)
= \frac12(2n+2)(2n+1) \pi^{-n-1} \Gamma(n+1)\zeta(2n+2) \\ &= \frac{n!}{2\pi^{n+1}}(2n+2)(2n+1)\frac{(-1)^n(2\pi)^{2n+2}}{2(2n+2)!}B_{2n+2}
= \frac{(-1)^n (4\pi)^{n+1} n!}{4(2n)!}B_{2n+2}, \end{align*} as claimed. \end{proof}
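Lemma~\ref{lem:omega-moments} is equally easy to probe numerically; here is a minimal sketch (again assuming \texttt{mpmath}, with $\omega(x)$ evaluated as in the previous sketch, and with only the first few moments tested).

\begin{verbatim}
# Numerical check of Lemma (omega-moments); assumes mpmath.
import mpmath as mp
mp.mp.dps = 25

def omega(x):
    x = mp.mpf(x)
    if x == 0:
        return mp.mpf(0)
    if x < 1:
        return omega(1/x) / mp.sqrt(x)
    return mp.nsum(lambda n: (2*mp.pi**2*n**4*x**2 - 3*mp.pi*n**2*x)*mp.exp(-mp.pi*n**2*x),
                   [1, mp.inf])

for n in range(4):
    lhs = mp.quad(lambda x: omega(x) * x**n, [0, mp.inf])
    rhs = (-1)**n * (4*mp.pi)**(n+1) * mp.factorial(n) * mp.bernoulli(2*n+2) / (4*mp.factorial(2*n))
    print(n, lhs - rhs)   # should be ~ 0
\end{verbatim}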
Another easy fact that we record is the Taylor expansion of the function on the right-hand side of \eqref{eq:aofr-explicit}.
\begin{lem} We have the Taylor expansion \begin{equation*} \frac{d^2}{dr^2} \left( \frac{r}{4}\coth\left(\pi r\right) \right) = \sum_{n=0}^\infty \frac{(2\pi)^{2n+1} B_{2n+2}}{2(2n)!} r^{2n}
\qquad (|r|<1). \end{equation*} \end{lem}
\begin{proof} Recall the standard generating function identity \begin{equation*} \frac{z}{2}\coth\left(\frac{z}{2}\right) =
\sum_{n=0}^\infty \frac{B_{2n}}{(2n)!}z^{2n} \qquad \left(|z|<2\pi\right) \end{equation*} (see \cite[p.~12]{andrews-askey-roy}). Making the substitution $z=2\pi r$ and differentiating twice gives the result. \end{proof}
\begin{proof}[Second proof of Proposition~\ref{prop:aofr-explicit}] From the above two lemmas we see that, calculating formally at least, \begin{align*} \int_0^\infty \omega(x) e^{-\pi r^2 x}\,dx & = \int_0^\infty \omega(x) \sum_{n=0}^\infty \frac{(-1)^n \pi^n r^{2n}}{n!} x^n\,dx \\ & = \sum_{n=0}^\infty \frac{(-1)^n \pi^n r^{2n}}{n!} \int_0^\infty \omega(x) x^n\,dx = \sum_{n=0}^\infty \frac{(2\pi)^{2n+1} B_{2n+2}}{2(2n)!} r^{2n}
= \frac{d^2}{dr^2} \left( \frac{r}{4}\coth\left(\pi r\right) \right). \end{align*}
To justify this rigorously, note that, by \eqref{eq:omegax-asym-xinfty}--\eqref{eq:omegax-asym-xzero}, the function $\omega(x)\exp(-\pi r^2 x)$ is absolutely integrable on $[0,\infty)$ for any \emph{complex} number $r$ satisfying $|r|<1$. We have thus established the identity \eqref{eq:aofr-explicit} for those values of $r$, and, since $A(r)$ can be regarded as an analytic function of a complex variable $r$ on some open set containing the positive real axis, the result follows for general $r\ge 0$ by analytic continuation. \end{proof}
\section{An orthonormal basis for radial self-transforms}
\label{sec:rad-orthradial}
Recall that the Laguerre polynomials $L_n^\alpha(x)$ are, for fixed $\alpha>-1$, a family of orthogonal polynomials with respect to the weight function $e^{-x} x^\alpha$ on $[0,\infty)$. Their main properties are summarized in \secref{sec:orth-laguerre}. We can use them to construct functions suitable for representing radial functions on $\mathbb{R}^d$ by defining \begin{equation*} G_n^{(d)}(r) = e^{-\pi r^2} L_n^{d/2-1}(2\pi r^2) \qquad (r>0). \end{equation*} One main reason why this is a useful definition is that the $G_n^{(d)}$ satisfy the orthogonality relation \begin{equation} \label{eq:gnd-orthogonality} \int_0^\infty G_m^{(d)}(r) G_n^{(d)}(r) r^{d-1}\,dr = \frac{\Gamma(n+d/2)}{2(2\pi)^{d/2} n!} \delta_{m,n}, \end{equation} which, as the reader can verify, is immediate from the standard orthogonality relation \eqref{eq:laguerre-orthogonality} for the Laguerre polynomials, by a change of variables. Equivalently, recalling that we are thinking of the $G_n^{(d)}$ as functions on $\mathbb{R}^d$, we can write this as an orthogonality relation with respect to the ordinary Lebesgue measure on $\mathbb{R}^d$ by interpreting the integral on the left-hand side of \eqref{eq:gnd-orthogonality} as an integral in polar coordinates, which gives the equivalent relation \begin{equation*} \int_{\mathbb{R}^d} G_m^{(d)}(\mathbf{x}) G_n^{(d)}(\mathbf{x}) \, d\mathbf{x} = \kappa_{d,n} \delta_{m,n}, \end{equation*} where \begin{equation*} \kappa_{d,n} = d\cdot V_d \frac{\Gamma(n+d/2)}{2(2\pi)^{d/2} n!} = \frac{d\cdot \Gamma(n+d/2)}{2^{1+d/2} n! \Gamma(1+d/2)}, \end{equation*} and $V_d = \frac{\pi^{d/2}}{\Gamma\left(\frac{d}{2}+1\right)}$ denotes the volume of the unit ball in $\mathbb{R}^d$.
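The orthogonality relation \eqref{eq:gnd-orthogonality} can also be confirmed numerically; here is a minimal Python sketch (assuming \texttt{mpmath}) for the case $d=3$ that will be used below, with the range $0\le m,n\le 3$ chosen arbitrarily.

\begin{verbatim}
# Numerical check of the orthogonality relation (gnd-orthogonality) for d = 3; assumes mpmath.
import mpmath as mp
mp.mp.dps = 25
d = 3

def G(n, r):
    return mp.exp(-mp.pi*r**2) * mp.laguerre(n, mp.mpf(d)/2 - 1, 2*mp.pi*r**2)

def rhs(m, n):
    if m != n:
        return mp.mpf(0)
    return mp.gamma(n + mp.mpf(d)/2) / (2*(2*mp.pi)**(mp.mpf(d)/2)*mp.factorial(n))

err = max(abs(mp.quad(lambda r: G(m, r)*G(n, r)*r**(d-1), [0, mp.inf]) - rhs(m, n))
          for m in range(4) for n in range(4))
print(err)   # should be ~ 0
\end{verbatim}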
The orthogonal family $(G_n^{(d)})_{n=0}^\infty$ is especially useful for representing radial \emph{self-transforms} such as the function $A(r)$, thanks to the following result.
\begin{thm}[{\cite[Secs.~4.20,~4.23]{lebedev}}] The functions $G_n^{(d)}(r)$, considered as radial functions on $\mathbb{R}^d$, form an orthogonal basis of the subspace $L_{\textnormal{rad}}^2(\mathbb{R}^d)$ of $L^2(\mathbb{R}^d)$ consisting of square-integrable radial functions. Moreover, this orthogonal basis diagonalizes the radial Fourier transform \eqref{eq:radial-fourier-d}; more precisely, we have the property \begin{equation*} \mathcal{F}_d\left(G_n^{(d)}\right) = (-1)^n G_n^{(d)}. \end{equation*} \end{thm}
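Here too a quick numerical probe is possible: the sketch below (assuming \texttt{mpmath}, with an arbitrarily chosen sample point $\rho$) tests the eigenfunction property $\mathcal{F}_3\bigl(G_n^{(3)}\bigr)=(-1)^n G_n^{(3)}$ using the radial transform formula \eqref{eq:radial-fourier-3d}.

\begin{verbatim}
# Numerical check that F_3(G_n^{(3)}) = (-1)^n G_n^{(3)}; assumes mpmath.
import mpmath as mp
mp.mp.dps = 25

def G3(n, r):
    return mp.exp(-mp.pi*r**2) * mp.laguerre(n, mp.mpf(1)/2, 2*mp.pi*r**2)

def fourier3(f, rho):   # eq. (radial-fourier-3d)
    return (2/rho) * mp.quad(lambda r: f(r)*r*mp.sin(2*mp.pi*r*rho), [0, mp.inf])

rho = mp.mpf('0.7')
for n in range(4):
    print(n, fourier3(lambda r: G3(n, r), rho) - (-1)**n * G3(n, rho))   # ~ 0
\end{verbatim}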
The theorem implies in particular that the even-indexed functions $G_{2n}^{(d)}(r)$ form an orthogonal basis for the subspace of $L_{\textnormal{rad}}^2(\mathbb{R}^d)$ consisting of square-integrable radial Fourier self-transforms. This gives a new way of representing radial self-transforms as linear combinations of the form $\sum_n \gamma_n G_{2n}^{(d)}(r)$. Thus, we have now shown two general ways to construct radial Fourier self-transforms: first, as weighted mixtures of scaled Gaussians, and second, as linear combinations of the basis elements $G_{2n}^{(d)}$. As the next result starts to illustrate, the interplay between these two approaches turns out to be very fruitful.
\begin{prop} The $f_n$-expansion coefficients $c_n$ defined in~\eqref{eq:def-fn-coeffs} can be alternatively expressed as \begin{equation} \label{eq:fn-coeffs-radialform} c_n = \frac{8\sqrt{2} \pi n!}{(3/2)_n} \int_0^\infty A(r) r^2 G_n^{(3)}(r)\,dr. \end{equation} \end{prop}
\begin{proof} We start by evaluating a simpler integral, namely, for integer $m\ge1$, \begin{align*} \int_0^\infty A(r) e^{-\pi r^2} r^{2m}\,dr & = \int_0^\infty \left(\int_0^\infty \omega(x) e^{-\pi x r^2}\,dx \right) e^{-\pi r^2} r^{2m}\,dr \\ & = \int_0^\infty \omega(x) \left(\int_0^\infty e^{-\pi (x+1) r^2}r^{2m} \,dr \right) \,dx \\ & = \int_0^\infty \omega(x) \left(\int_0^\infty e^{-u} \left(\frac{\sqrt{u}}{\sqrt{\pi (x+1)}}\right)^{2m} \frac{du}{2\sqrt{\pi (x+1)u}}\right)\,dx \\ & = \frac{1}{2 \pi^{m+1/2}} \Gamma\left(m+\frac12\right)\int_0^\infty \omega(x) \frac{1}{(x+1)^{m+1/2}} \,dx \\ & = \frac{(3/2)_{m-1}}{4 \pi^{m}} \int_0^\infty \omega(x) \frac{1}{(x+1)^{m+1/2}}\,dx. \end{align*} Now using the formula \eqref{eq:laguerre-explicit} for the Laguerre polynomials, we have that \begin{align*} \int_0^\infty & A(r) r^2 G_n^{(3)}(r)\,dr = \int_0^\infty A(r) r^2 e^{-\pi r^2} L_n^{1/2}(2\pi r^2)\,dr \\ & = \int_0^\infty A(r) e^{-\pi r^2} \sum_{k=0}^n \frac{(-1)^k}{k!} \binom{n+1/2}{n-k} (2\pi)^k r^{2k+2} \,dr \\ & = \sum_{k=0}^n \frac{(-2\pi)^k}{k!}\binom{n+1/2}{n-k} \int_0^\infty A(r) e^{-\pi r^2}
r^{2k+2} \,dr \\ & = \sum_{k=0}^n \frac{(-2\pi)^k}{k!}\binom{n+1/2}{n-k} \frac{(3/2)_k}{4\pi^{k+1}} \int_0^\infty \omega(x) (x+1)^{-(k+3/2)} \,dx \\ & = \int_0^\infty \omega(x) \left( \sum_{k=0}^n \frac{(-2\pi)^k}{k!} \binom{n+1/2}{n-k} \frac{(3/2)_k}{4\pi^{k+1}} (x+1)^{-(k+3/2)} \right)\, dx \\ & = \frac{1}{4\pi} \int_0^\infty \frac{\omega(x)}{(x+1)^{n+3/2}} \left( \sum_{k=0}^n \frac{(-2)^k}{k!} \binom{n+1/2}{n-k} (3/2)_k (x+1)^{n-k} \right) dx. \end{align*} Noting the simple relation $\frac{(3/2)_k}{k!}\binom{n+1/2}{n-k} = \frac{(3/2)_n}{n!} \binom{n}{k}, $ we see that the sum inside the integral simplifies as \begin{align*} \sum_{k=0}^n \frac{(3/2)_k}{k!} \binom{n+1/2}{n-k} \cdot & (-2)^k (x+1)^{n-k} = \frac{(3/2)_n}{n!} \sum_{k=0}^n \binom{n}{k} (-2)^k (x+1)^{n-k} \\ & = \frac{(3/2)_n}{n!} (-2+(x+1))^n = \frac{(3/2)_n}{n!} (x-1)^n. \end{align*} Thus, we get finally that \begin{equation*} \int_0^\infty A(r) r^2 G_n^{(3)}(r)\,dr = \frac{(3/2)_n}{4\pi n!} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n\,dx = \frac{(3/2)_n}{8\sqrt{2} \pi n!} c_n, \end{equation*} which proves \eqref{eq:fn-coeffs-radialform}. \end{proof}
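Both sides of \eqref{eq:fn-coeffs-radialform} are explicitly computable, so the identity lends itself to a direct numerical test. In the sketch below (assuming \texttt{mpmath}), $c_n$ is computed from the equivalent integral form $c_n = 2\sqrt{2}\int_0^\infty \omega(x)(x+1)^{-3/2}\bigl(\frac{x-1}{x+1}\bigr)^n\,dx$ that can be read off from the last display of the proof, and $A(r)$ is evaluated from the closed form \eqref{eq:aofr-explicit}, supplemented for small $r$ by the Taylor expansion recorded in the previous section to avoid numerical cancellation.

\begin{verbatim}
# Numerical check of eq. (fn-coeffs-radialform); assumes mpmath.
import mpmath as mp
mp.mp.dps = 20

def omega(x):
    x = mp.mpf(x)
    if x == 0:
        return mp.mpf(0)
    if x < 1:
        return omega(1/x) / mp.sqrt(x)
    return mp.nsum(lambda k: (2*mp.pi**2*k**4*x**2 - 3*mp.pi*k**2*x)*mp.exp(-mp.pi*k**2*x),
                   [1, mp.inf])

def A(r):
    # closed form (aofr-explicit); Bernoulli-number Taylor series for small r
    if r < mp.mpf('0.5'):
        return mp.fsum((2*mp.pi)**(2*j+1)*mp.bernoulli(2*j+2)/(2*mp.factorial(2*j))*r**(2*j)
                       for j in range(50))
    s, c = mp.sinh(mp.pi*r), mp.cosh(mp.pi*r)
    return -mp.pi/(2*s**2) + mp.pi**2*r*c/(2*s**3)

def G3(n, r):
    return mp.exp(-mp.pi*r**2) * mp.laguerre(n, mp.mpf(1)/2, 2*mp.pi*r**2)

def c(n):   # c_n, in the integral form read off from the last display of the proof
    return 2*mp.sqrt(2)*mp.quad(lambda x: omega(x)*(x+1)**mp.mpf('-1.5')*((x-1)/(x+1))**n,
                                [0, mp.inf])

for n in range(4):
    rhs = (8*mp.sqrt(2)*mp.pi*mp.factorial(n)/mp.rf(mp.mpf('1.5'), n)
           * mp.quad(lambda r: A(r)*r**2*G3(n, r), [0, mp.inf]))
    print(n, c(n) - rhs)   # should be ~ 0
\end{verbatim}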
We have set the stage for one of the central results of this chapter.
\begin{thm}[Expansion of $A(r)$ in the orthogonal family $G_n^{(3)}(r)$] \label{thm:aofr-selftrans-expansion} The radial function $A(r)$ has the series expansion \begin{equation} \label{eq:aofr-selftrans-expansion} A(r) = \sum_{n=0}^\infty c_n G_n^{(3)}(r), \end{equation} with $c_n$ given by \eqref{eq:def-fn-coeffs}, Corollary~\ref{cor:fn-coeff-innerproduct} and \eqref{eq:fn-coeffs-radialform}. The series in \eqref{eq:aofr-selftrans-expansion} converges pointwise and in $L^2(\mathbb{R}^3)$. \end{thm}
\begin{proof} The equation \eqref{eq:aofr-selftrans-expansion} is simply the Fourier expansion of the (clearly square-integrable) function $A(r)$, considered as a radial function on $\mathbb{R}^3$, in the orthogonal basis $G_n^{(3)}(r)$. The fact that the coefficients $c_n$ are the Fourier coefficients follows from \eqref{eq:gnd-orthogonality}~and~\eqref{eq:fn-coeffs-radialform} (together with the simple equality $\frac{(3/2)_n}{8\sqrt{2}\pi n!} = \frac{\Gamma(n+3/2)}{2(2\pi)^{3/2}n!}$ relating the normalization constants appearing in those two equations). The convergence in $L^2(\mathbb{R}^3)$ is immediate, and pointwise convergence follows from standard theorems about expansions of a function in Laguerre polynomials; see \cite[p.~88]{lebedev}. \end{proof}
\section{Constructing new balanced functions from old}
\label{sec:rad-newrecbal}
Next, we show a simple operation that produces a balanced function of weight $2-\alpha$ starting with a balanced function of weight $\alpha$.
\begin{lem} \label{lem:new-from-old} Let $f:(0,\infty)\to \mathbb{R}$ be balanced of weight $0<\alpha<2$. Then the function $g:(0,\infty)\to\mathbb{R}$ defined by \begin{equation} \label{eq:gen-stieltjes} g(x) = \int_0^\infty \frac{f(u)}{(u+x)^{2-\alpha}}\,du \end{equation} is a balanced function of weight $2-\alpha$. \end{lem}
\begin{proof} \begin{align*} g\left(\frac{1}{x}\right) & = \int_0^\infty \frac{f(u)}{(u+1/x)^{2-\alpha}}\,du = \int_0^\infty \frac{f(1/v)}{\left(\frac{1}{v}+\frac{1}{x}\right)^{2-\alpha}}\,\frac{dv}{v^2} \\ & = \int_0^\infty \frac{(vx)^{2-\alpha} v^\alpha f(v)}{(v+x)^{2-\alpha}}\,\frac{dv}{v^2}
= x^{2-\alpha} \int_0^\infty \frac{f(v)}{(v+x)^{2-\alpha}}\,dv = x^{2-\alpha} g(x). \end{align*} \end{proof}
The integral transform in \eqref{eq:gen-stieltjes} is known as a \firstmention{generalized Stieltjes transform}. Its properties are discussed in \cite{karp-prilepkina, schwarz}. Lemma~\ref{lem:new-from-old} appears to be new.
\section{The functions $\nu(x)$ and $B(r)$}
\label{sec:rad-bofr}
Applying the construction of Lemma~\ref{lem:new-from-old} to our balanced function $\omega(x)$ (with $\alpha=1/2$), we define \begin{equation*} \nu(x) = \int_0^\infty \frac{\omega(u)}{(u+x)^{3/2}}\,du, \end{equation*} and note that $\nu(x)$ is a balanced function of weight $3/2$. Now define (as in \eqref{eq:gauss-trans}) the associated radial function \begin{equation*} B(r) = \int_0^\infty \nu(x) e^{-\pi x r^2}\,dx. \end{equation*} We will have much more to say about these functions below. Among other results, in \secref{sec:rad-properties-nu} we will show that $B(r)$ is given by the explicit formula $B(r) = 1-2r \psi'(r) - r^2 \psi''(r)$, where $\psi(x)$ is the digamma function. The function $\nu(x)$ and a closely related function $\tilde{\nu}(t) = (1+t)^{-3/2}\,\nu\left(\frac{1-t}{1+t}\right)$ will play an important role in our understanding of the $g_n$-expansion \eqref{eq:gn-expansion-intro} of the Riemann xi function, which we will show in Chapter~\ref{ch:gn-expansion} arises naturally from the expansion of $\tilde{\nu}(t)$ in the Chebyshev polynomials of the second kind. The same function $\tilde{\nu}(t)$ also has an interpretation as a generating function for the $f_n$-expansion coefficients $c_n$; see \secref{sec:rad-properties-nu}.
\section{Some Mellin transform computations}
\label{sec:rad-mellintransforms}
In this section we compute the Mellin transforms of the functions $A(r)$, $B(r)$, $\nu(x)$, and derive two separate Mellin transform representations for the polynomials $f_n$. This will set the stage for the alternative derivation of the $f_n$-expansion mentioned at the beginning of the chapter, which will be given in the next section, and for additional results in Sections~\ref{sec:rad-centered-recbal}--\ref{sec:rad-properties-nu} and Chapter~\ref{ch:gn-expansion} that will shed additional light on the significance of the new functions we introduced.
\begin{prop} \label{prop:aofr-mellin} The Mellin transform of $A(r)$ is given by \begin{equation} \label{eq:aofr-mellin} \int_0^\infty A(r) r^{s-1}\,dr = \frac{1}{2(2\pi)^{s-1}} (s-1)(s-2) \Gamma(s-1)\zeta(s-1) \qquad (\operatorname{Re}{s} > 0). \end{equation} \end{prop}
\begin{proof}[First proof] Denote $F(r) = \frac{r}{4}(\coth(\pi r)-1) = \frac12 \frac{r}{e^{2\pi r}-1}$. Using integration by parts twice, we have \begin{align*} \int_0^\infty A(r) r^{s-1}\,dr &= \int_0^\infty F''(r) r^{s-1}\,dr
=
F'(r) r^{s-1}\Big|_{r=0}^{r=\infty} - (s-1)\int_0^\infty F'(r) r^{s-2}\,dr \\ &=
0- (s-1)F(r)r^{s-2}\Big|_{r=0}^{r=\infty} + (s-1)(s-2)\int_0^\infty F(r)
r^{s-3}\,dr \\ &= 0+0+\frac12(s-1)(s-2) \int_0^\infty \frac{1}{e^{2\pi r}-1} r^{s-2}\,dr \\ &= \frac12(s-1)(s-2) (2\pi)^{-(s-1)} \Gamma(s-1)\zeta(s-1). \end{align*} (The boundary terms vanish and the last integral converges when $\operatorname{Re}(s)>2$; since both sides of \eqref{eq:aofr-mellin} are analytic in the half-plane $\operatorname{Re}(s)>0$, the identity extends to that range by analytic continuation.) \end{proof}
\begin{proof}[Second proof] \begin{align} \label{eq:aofr-explicit-secondproof} \int_0^\infty A(r) r^{s-1}\,dr &= \int_0^\infty \int_0^\infty \omega(x)e^{-\pi x r^2}\,dx \, r^{s-1}\,dr \\ &= \int_0^\infty \omega(x) \int_0^\infty e^{-\pi x r^2}r^{s-1}\,dr \, \,dx \nonumber \\ &= \int_0^\infty \omega(x) \left( \int_0^\infty e^{-u} \left(\frac{\sqrt{u}}{\sqrt{\pi x}}\right)^{s-1}\, \frac{du}{2 \sqrt{\pi x u}} \right) \,dx \nonumber \\ &= \frac{1}{2\sqrt{\pi}} \pi^{-(s-1)/2} \Gamma\left(\frac{s}{2}\right) \int_0^\infty \omega(x) x^{-s/2}\,dx \nonumber \\ &= \frac12 \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \xi(2-s) = \frac12 \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \xi(s-1) \nonumber \\ &= \frac12 \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \cdot \frac12 (s-1)(s-2) \pi^{-(s-1)/2} \Gamma\left(\frac{s-1}{2}\right)\zeta(s-1) \nonumber \\ & = \frac{1}{2(2\pi)^{s-1}} (s-1)(s-2) \left( \frac{2^{s-2}}{\sqrt{\pi}}\Gamma\left(\frac{s-1}{2}\right) \Gamma\left(\frac{s}{2}\right) \right) \zeta(s-1). \nonumber \end{align} This coincides with the right-hand side of \eqref{eq:aofr-mellin} by the duplication formula \cite[p.~22]{andrews-askey-roy} \begin{equation} \label{eq:duplication-formula} \Gamma(z)\Gamma(z+1/2) = \sqrt{\pi} 2^{1-2z} \Gamma(2z). \end{equation} \end{proof}
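A quick numerical test of \eqref{eq:aofr-mellin} is also possible; the sketch below (assuming \texttt{mpmath}; the sample values of $s$ are arbitrary points with $\operatorname{Re}(s)>0$) evaluates $A(r)$ from the closed form \eqref{eq:aofr-explicit}, using the Taylor expansion from \secref{sec:rad-aofr} for small $r$ to avoid cancellation.

\begin{verbatim}
# Numerical check of Proposition (aofr-mellin); assumes mpmath.
import mpmath as mp
mp.mp.dps = 25

def A(r):
    # closed form (aofr-explicit); Bernoulli-number Taylor series for small r
    if r < mp.mpf('0.5'):
        return mp.fsum((2*mp.pi)**(2*n+1)*mp.bernoulli(2*n+2)/(2*mp.factorial(2*n))*r**(2*n)
                       for n in range(60))
    s, c = mp.sinh(mp.pi*r), mp.cosh(mp.pi*r)
    return -mp.pi/(2*s**2) + mp.pi**2*r*c/(2*s**3)

def mellin_A(s):
    return mp.quad(lambda r: A(r)*r**(s-1), [0, mp.inf])

def rhs(s):
    return (s-1)*(s-2)*mp.gamma(s-1)*mp.zeta(s-1) / (2*(2*mp.pi)**(s-1))

for s in ['0.75', '2.5', '4']:
    s = mp.mpf(s)
    print(s, mellin_A(s) - rhs(s))   # should be ~ 0
\end{verbatim}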
\begin{prop} \label{prop:mellin-nu} The Mellin transform of $\nu(x)$ is given by \begin{equation} \label{eq:nu-mellin-trans} \int_0^\infty \nu(x) x^{s/2-1}\,dx = \frac{2}{\sqrt{\pi}} \Gamma\left(\frac{s}{2}\right)\Gamma\left(\frac32-\frac{s}{2}\right) \xi(s-1) \qquad (0 < \operatorname{Re}{s} < 3). \end{equation} (Here, as with known formulas such as \eqref{eq:riemannxi-mellintrans}, it seems more aesthetically pleasing to use $s/2$ as the Mellin transform variable instead of $s$.) \end{prop}
\begin{proof} A textbook Mellin transform computation is the result that for $\alpha>0$, \begin{equation} \label{eq:mellin-textbook-ex} \int_0^\infty \frac{t^{s-1}}{(1+t)^\alpha}\,dt = \frac{\Gamma(s)\Gamma(\alpha-s)}{\Gamma(\alpha)} \qquad (0 < \operatorname{Re}(s) < \alpha), \end{equation} since the integral on the left transforms into a beta integral $\int_0^1 u^{s-1}(1-u)^{\alpha-s-1}\,du$ upon making the substitution $t=u/(1-u)$. Using this fact, we write \begin{align*} \int_0^\infty \nu(x) x^{s/2-1}\,dx &= \int_0^\infty \int_0^\infty \frac{\omega(u)}{(u+x)^{3/2}}\,du \, x^{s/2-1}\,dx
= \int_0^\infty \omega(u) \left( \int_0^\infty \frac{x^{s/2-1}}{(u+x)^{3/2}} \,dx \right)\,du \\ &= \int_0^\infty \frac{\omega(u)}{u^{3/2}} \int_0^\infty \frac{t^{s/2-1}}{(1+t)^{3/2}}\,dt\,u^{s/2}\,du
= \int_0^\infty \omega(u) u^{s/2-3/2}\,du \cdot \frac{\Gamma\left(\frac{s}{2}\right) \Gamma\left(\frac32-\frac{s}{2}\right)}{\Gamma\left(\frac32\right)} \\ &= \frac{2}{\sqrt{\pi}} \Gamma\left(\frac{s}{2}\right)\Gamma\left(\frac32-\frac{s}{2}\right) \xi(s-1). \end{align*} Note that the assumption $0<\operatorname{Re}(s)<3$ ensures that the double integral following the first equality is absolutely convergent (as can be seen by repeating the same chain of equalities with $s$ replaced by $\operatorname{Re}(s)$), which justifies the change in the order of integration. \end{proof}
\begin{prop} The Mellin transform of $B(r)$ is given by \begin{equation} \label{eq:bofr-mellin} \int_0^\infty B(r) r^{s-1}\,dr = \frac{(2\pi)^{1-s}}{2\sin(\pi s/2)}
s(s-1) \Gamma(s) \zeta(s) \qquad (0 < \operatorname{Re}{s} < 2). \end{equation} \end{prop}
\begin{proof} Repeat the calculation in \eqref{eq:aofr-explicit-secondproof}, replacing $A(r)$ by $B(r)$ and replacing the Mellin transform of $\omega(x)$ with the Mellin transform of $\nu(x)$. We omit the details. \end{proof}
The following result appears to be known, although its origins and proof seem difficult to trace (it is implicit in the results of section~1 of \cite{bump-etal}, and a more explicit version is mentioned without proof in \cite[eq.~(1)]{kuznetsov1} and \cite[p.~829]{kuznetsov2}). We include it along with a short, self-contained proof.
\begin{prop}[First Mellin transform representation of $f_n$] \label{prop:fn-mellintrans1} The Mellin transform of $G_n^{(3)}(r)$ is given by \begin{equation} \label{eq:fn-mellintrans1} \int_0^\infty G_n^{(3)}(r) r^{s-1}\,dr = \frac12 (-i)^n \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) f_n\left(\frac{1}{2i}\left(s-\frac{3}{2}\right)\right)\qquad (\operatorname{Re}{s} > 0). \end{equation} \end{prop}
\begin{proof} Again using the explicit formula \eqref{eq:laguerre-explicit} for the Laguerre polynomials, we have that \begin{align*} \int_0^\infty G_n^{(3)}(r) r^{s-1}\,dr &= \int_0^\infty e^{-\pi r^2} L_n^{1/2}(2\pi r^2) r^{s-1}\,dr \\ &= \sum_{k=0}^n \frac{(-1)^k}{k!} \binom{n+1/2}{n-k} (2\pi)^k \int_0^\infty e^{-\pi r^2} r^{2k} r^{s-1}\,dr \\ &= \sum_{k=0}^n \frac{(-1)^k}{k!} \binom{n+1/2}{n-k} (2\pi)^k \int_0^\infty e^{-u} \left(\frac{\sqrt{u}}{\sqrt{\pi}}\right)^{s+2k-1}\, \frac{du}{2\sqrt{\pi u}} \\ &= \frac12 \pi^{-s/2}\sum_{k=0}^n \frac{(-2)^k}{k!} \binom{n+1/2}{n-k} \Gamma\left(\frac{s}{2}+k\right) \\ &= \frac12 \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \sum_{k=0}^n \frac{(-2)^k}{k!} \binom{n+1/2}{n-k} \frac{s}{2} \left( \frac{s}{2} + 1 \right) \cdots \left( \frac{s}{2} + k-1 \right) \\ &= \frac12 \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \sum_{k=0}^n (-2)^k \binom{n+1/2}{n-k} \binom{s/2+k-1}{k} \\ &= \frac12 \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) (-i)^n f_n\left(\frac{1}{2i}\left(s-\frac{3}{2}\right)\right), \end{align*} where in the last step we used the formula \eqref{eq:fn-explicit4} for $f_n(x)$. This gives the claimed formula. \end{proof}
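Since the polynomials $f_n$ are defined in Chapter~\ref{ch:fn-expansion} and not reproduced here, the following numerical sketch (assuming \texttt{mpmath}; the value of $s$ is an arbitrary point with $\operatorname{Re}(s)>0$) checks the chain of equalities in the proof up to the final rewriting in terms of $f_n$, that is, it compares the Mellin transform of $G_n^{(3)}(r)$ with the penultimate displayed expression.

\begin{verbatim}
# Numerical check of the Mellin transform computation in the proof of
# Proposition (fn-mellintrans1), up to the final rewriting in terms of f_n; assumes mpmath.
import mpmath as mp
mp.mp.dps = 25

def G3(n, r):
    return mp.exp(-mp.pi*r**2) * mp.laguerre(n, mp.mpf(1)/2, 2*mp.pi*r**2)

def lhs(n, s):
    return mp.quad(lambda r: G3(n, r)*r**(s-1), [0, mp.inf])

def rhs(n, s):
    # penultimate displayed expression in the proof
    total = mp.fsum((-2)**k * mp.binomial(n + mp.mpf('0.5'), n - k) * mp.binomial(s/2 + k - 1, k)
                    for k in range(n + 1))
    return mp.mpf('0.5') * mp.pi**(-s/2) * mp.gamma(s/2) * total

s = mp.mpf('1.3')
for n in range(4):
    print(n, lhs(n, s) - rhs(n, s))   # should be ~ 0
\end{verbatim}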
If we replace $s$ with the variable $t$ where $s=2\left(\frac34+it\right)$, \eqref{eq:fn-mellintrans1} can be rewritten in the form \begin{equation} \label{eq:fn-mellintrans2} \int_0^\infty G_n^{(3)}(r) r^{\frac12+2it}\,dr = \frac12 (-i)^n \pi^{-\frac34-it} \Gamma\left(\frac34+it\right) f_n\left(t\right)\qquad \end{equation} which is a useful integral representation for $f_n(t)$. The next result, which we have not found in the literature, gives yet another representation for $f_n(t)$ in terms of a Mellin transform.
\begin{prop}[Second Mellin transform representation of $f_n$] \label{prop:fn-mellintrans3} We have the relation \begin{align} \label{eq:fn-mellintrans-scoord} \int_0^\infty & \frac{1}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n x^{s-1}\,dx \\ \nonumber & = i^n\frac{2 n!}{\sqrt{\pi} (3/2)_n} \Gamma(s)\Gamma\left(\frac32-s\right)
f_n\left(\frac{1}{i}\left(s-\frac34\right)\right)
\qquad \left(0<\operatorname{Re}(s)<\frac32 \right). \end{align} Equivalently, with the substitution $s=\frac34+it$, this can be written in the form \begin{align} \label{eq:fn-mellintrans-tcoord} \int_0^\infty \frac{1}{(x+1)^{3/2}} & \left(\frac{x-1}{x+1}\right)^n x^{-\frac14+it}\,dx
= i^n \frac{2 n!}{\sqrt{\pi}(3/2)_n} \Gamma\left(\frac34+it\right) \Gamma\left(\frac34-it\right) f_n(t). \end{align} \end{prop}
It is interesting to compare this result with Proposition~\ref{prop:gn-mellintrans} (page~\pageref{prop:gn-mellintrans}), which gives an analogous Mellin transform representation for the polynomials $g_n$.
\begin{proof} Recalling~\eqref{eq:mellin-textbook-ex}, we have \begin{align*} \int_0^\infty & \frac{1}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n x^{s-1}\,dx = \int_0^\infty \frac{\sum_{k=0}^n (-1)^{n-k} \binom{n}{k} x^k}{(x+1)^{n+3/2}} x^{s-1}\,dx \\ & = \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} \int_0^\infty \frac{x^{k+s-1}}{(x+1)^{n+3/2}} \,dx
= \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} \mellin{\frac{1}{(x+1)^{n+3/2}}}(k+s) \\& = \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} \frac{\Gamma(k+s)\Gamma\left(n-k+\frac32-s\right)}{\Gamma\left(n+\frac32\right)} \\& = \frac{\Gamma(s)\Gamma\left(\frac32-s\right)}{\Gamma\left(n+\frac32\right)} \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} \left( s(s+1)\cdots (s+k-1) \right) \\ & \qquad\qquad\qquad\qquad\qquad \times \left( \left(\frac32-s\right)\left(\frac32-s+1\right)\cdots \left(\frac32-s+n-k-1\right) \right) \\ & = \frac{\Gamma(s)\Gamma\left(\frac32-s\right)}{\Gamma\left(n+\frac32\right)} \sum_{k=0}^n (-1)^{n-k} \binom{n}{k} \times (-1)^k k!\binom{-s}{k} \times (-1)^{n-k} (n-k)!\binom{s-3/2}{n-k} \\ & = \frac{\Gamma(s)\Gamma\left(\frac32-s\right)}{\Gamma\left(n+\frac32\right)} n! \sum_{k=0}^n (-1)^k \binom{-s}{k} \binom{s-3/2}{n-k} \\ & = \frac{2 n!}{\sqrt{\pi} (3/2)_n} \Gamma(s)\Gamma\left(\frac32-s\right) \sum_{k=0}^n (-1)^k \binom{-s}{k} \binom{s-3/2}{n-k} \\ & = \frac{2 n!}{\sqrt{\pi} (3/2)_n} \Gamma(s)\Gamma\left(\frac32-s\right) i^n f_n\left(\frac{1}{i}\left(s-\frac34\right)\right), \end{align*} using \eqref{eq:fn-explicit4} and the symmetry $f_n(-x)= (-1)^n f_n(x)$ in the last step. \end{proof}
\section{Alternative approach to the $f_n$-expansion of $\Xi(t)$}
\label{sec:rad-alt-fnexp}
The results of the preceding sections make it possible to conceive of a parallel approach to the development of the $f_n$-expansion of $\Xi(t)$ that is distinct from the approach taken in Chapter~\ref{ch:fn-expansion}, and relies entirely on the elementary function $A(r)$ rather than on its more sophisticated companion function $\omega(x)$ (or even the function $\theta(x)$ from which $\omega(x)$ is derived).
The idea is to define $A(r)$ directly using \eqref{eq:aofr-explicit}, and then to use \eqref{eq:fn-coeffs-radialform} as the definition of the coefficients $c_n$. Theorem~\ref{thm:aofr-selftrans-expansion} and its proof remain valid. Propositions~\ref{prop:aofr-mellin}~and~\ref{prop:fn-mellintrans1} also remain valid (note that, conveniently, the first proof we gave for Proposition~\ref{prop:aofr-mellin} does not rely on the connection between $A(r)$ and $\omega(x)$). We now consider what happens when we formally take the Mellin transform of both sides of \eqref{eq:aofr-selftrans-expansion}; the result is the relation \begin{align*} \frac{1}{2(2\pi)^{s-1}} &(s-1)(s-2) \Gamma(s-1)\zeta(s-1)
= \sum_{n=0}^\infty c_n \cdot \frac12 (-i)^n \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) f_n\left(\frac{1}{2i}\left(s-\frac{3}{2}\right)\right). \end{align*} After substituting $s+1$ in place of $s$, the left-hand side starts to resemble $\xi(s)$, so, rearranging terms judiciously, we see that \begin{align*} \xi(s) & = \frac12 s(s-1)\pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s) \\ & = \left(\frac{\Gamma\left(\frac{s}{2}\right)}{\Gamma(s)}2^s \pi^{s/2} \right) \cdot \left(\frac{1}{2(2\pi)^{s}} s(s-1) \Gamma(s)\zeta(s)\right) \\ & = \left(\frac{\Gamma\left(\frac{s}{2}\right)}{\Gamma(s)}2^s \pi^{s/2} \right) \sum_{n=0}^\infty c_n \cdot \frac12 (-i)^n \pi^{-(s+1)/2} \Gamma\left(\frac{s+1}{2}\right) f_n\left(\frac{1}{2i}\left(s-\frac{1}{2}\right)\right) \\ & = \sum_{n=0}^\infty (-i)^n c_n \left( \frac{\Gamma\left(\frac{s}{2}\right)}{\Gamma(s)}2^s \pi^{s/2} \times \frac12 \pi^{-(s+1)/2} \Gamma\left(\frac{s+1}{2}\right) \right) f_n\left(\frac{1}{2i}\left(s-\frac{1}{2}\right)\right) \\ & = \sum_{n=0}^\infty (-i)^n c_n f_n\left(\frac{1}{2i}\left(s-\frac{1}{2}\right)\right), \end{align*} with the cancellation in the last step following from the duplication formula \eqref{eq:duplication-formula}. Setting $s=\frac12+it$, we again recover the $f_n$-expansion \eqref{eq:fn-expansion}---a satisfying result.
Note that the above approach, while it is rather intuitive and provides useful insight into the meaning and significance of the $f_n$-expansion, nonetheless suffers from the drawback that using \eqref{eq:fn-coeffs-radialform} as the definition of the coefficients $c_n$ does not make the positivity property of the even-indexed coefficients evident, nor can we see a way to understand their asymptotic behavior directly from this definition. For this reason, the approach we originally took in Chapter~\ref{ch:fn-expansion} seems preferable as the initial basis for developing the theory of the $f_n$-expansion.
\section{Centered versions of balanced functions}
\label{sec:rad-centered-recbal}
If $f$ is a balanced function of weight $\alpha$, denote \begin{equation} \label{eq:balanced-centered-def}
\tilde{f}(t) = \frac{1}{(1+t)^\alpha} f\left(\frac{1-t}{1+t}\right) \qquad (|t|<1). \end{equation} We refer to $\tilde{f}$ as the \firstmention{centered version of $f$}.
\begin{lem} The centered version $\tilde{f}$ of a balanced function of weight $\alpha$ is an even function. \end{lem}
\begin{proof} It is trivial to verify that the equation $\tilde{f}(-t)=\tilde{f}(t)$ is algebraically equivalent to the relation $f(1/x)=x^\alpha f(x)$ expressing the assumption that $f$ is balanced of weight $\alpha$; to see this, substitute $x=\frac{1-t}{1+t}$ in the definition \eqref{eq:balanced-centered-def}. \end{proof}
We remark that, in the context of the modular form $\theta(x)$, the notion of its centered version was studied in \cite{romik} (see also \cite[Sec.~5.1]{zagier123} where a similar change of variables for modular forms was discussed).
One illustration of the relevance of the above definition is the following simple fact concerning the centered version $\tilde{\omega}(t)$ of $\omega(x)$.
\begin{prop} \label{prop:cn-as-moments} The coefficients $c_n$ can be alternatively expressed as \begin{equation} \label{eq:fn-coeffs-asmoments} c_n = 2 \int_{-1}^1 \tilde{\omega}(u) u^n\,du \qquad (n\ge 0). \end{equation} \end{prop}
\begin{proof} This is a trivial reinterpretation of the defining formula \eqref{eq:def-fn-coeffs} for $c_n$: simply make the change of variables $u=\frac{x-1}{x+1}$, and check that we get precisely \eqref{eq:fn-coeffs-asmoments}. \end{proof}
See \secref{sec:omega-tilde-properties} for further discussion of $\tilde{\omega}(t)$. The centered version $\tilde{\nu}(t)$ of $\nu(x)$ also has an important role to play in connection with both the $f_n$-expansion and the $g_n$-expansion of the Riemann xi function, as we shall see in the next section and later in Chapter~\ref{ch:gn-expansion}.
\section{Properties of $\nu(x)$ and $B(r)$}
\label{sec:rad-properties-nu}
\begin{prop} \label{eq:nuofx-properties} The function $\nu(x)$ has the following properties:
(i) $\nu(x)$ is positive and monotone decreasing.
(ii) $\nu(0)=\frac{\pi}{6}$.
(iii) $\nu(x)$ has the asymptotic expansions \begin{align} \nu(x) & = \sum_{k=0}^n \frac{(2k+1) \pi^{k+1} B_{2k+2}}{k!} x^k + O(x^{n+1}) \qquad\quad\,\ \ \qquad \textrm{as }x\to 0, \label{eq:nutilde-asym1} \\ \nu(x) & = \sum_{k=0}^n \frac{(2k+1) \pi^{k+1} B_{2k+2}}{k!} x^{-k-3/2} + O(x^{-n-5/2}) \qquad \textrm{as }x\to \infty, \label{eq:nutilde-asym2} \end{align} \end{prop}
\begin{proof} (i) is immediate from the definition of $\nu(x)$. (ii) is a special case of (iii), but also follows more directly from the definition of $\nu(x)$, since for $x=0$ we have \begin{align*} \nu(0) & = \int_0^\infty \frac{\omega(u)}{u^{3/2}}\,du = \xi(-1) = \xi(2) = \frac12 \cdot 2\cdot 1\cdot \pi^{-1} \Gamma(1) \zeta(2) = \frac{\pi}{6}. \end{align*}
To prove (iii), first, note that \eqref{eq:nutilde-asym2} follows immediately from \eqref{eq:nutilde-asym1} using the balancedness property of $\nu(x)$. Now, to prove \eqref{eq:nutilde-asym1} we use the technique of Mellin transform asymptotics, described, e.g., in \cite[Appendix~B.7]{flajolet-sedgewick}. Let $\varphi(s) = \frac{2}{\sqrt{\pi}} \Gamma(s) \Gamma(\frac32-s)\xi(2s-1)$ denote the Mellin transform of $\nu(x)$. From the Mellin inversion formula we have \begin{equation*} \nu(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \varphi(s) x^{-s}\,ds, \end{equation*} where $c$ is an arbitrary number in $(0,3/2)$. We now shift the integration contour to the left to the line $\operatorname{Re}(s)=-n-3/2$ and apply the residue theorem to calculate the change in the value of the integral resulting from this contour shift. Note that the contour is unbounded, so one has to justify this use of the residue theorem using a limiting argument involving the rate of decay of the integrand along vertical lines; this is not hard to do using the standard facts that the xi and gamma functions both decay exponentially fast along vertical lines (see \cite[p.~21]{andrews-askey-roy}, \cite[p.~121]{paris-kaminski}).
With this technicality out of the way, we see that the result of the contour shift is that for each pole of the integrand $\varphi(s)x^{-s}$ that this contour shift skips over, the integral (including the factor of $\frac{1}{2\pi i}$ in front) changes by an amount equal to its residue. The poles being skipped over in this case are at $s=0, -1,\ldots, -n-1$, and the residue for the pole at $s=-k$ is equal to $x^k$ multiplied by \begin{align*} \frac{2}{\sqrt{\pi}} &\frac{(-1)^k}{k!} \Gamma\left(k+\frac32 \right) \xi(-2k-1) = \frac{2(-1)^k}{k!}\cdot \frac{(2k+2)!}{2^{2k+2}(k+1)!}\cdot \xi(2k+2) \\ &= \frac{(-1)^k (2k+2)!}{2^{2k+1} k! (k+1)!} \cdot \frac12 (2k+2)(2k+1) \pi^{-(k+1)}k! \zeta(2k+2) \\ &= \frac{(-1)^k (2k+2)!}{2^{2k+1} k! (k+1)!} \cdot \frac12 (2k+2)(2k+1) \pi^{-(k+1)}k! \cdot \frac{(-1)^{k}(2\pi)^{2k+2}}{2(2k+2)!}B_{2k+2} \\ &= \frac{(2k+1)\pi^{k+1}}{k!} B_{2k+2}. \end{align*} Thus, following the contour shift we get the alternative expression \begin{align*} \nu(x) &= \frac{1}{2\pi i} \int_{-(n+3/2)-i\infty}^{-(n+3/2)+i\infty} \varphi(s) x^{-s}\,ds + \sum_{k=0}^{n+1} \frac{(2k+1)\pi^{k+1}}{k!} B_{2k+2} x^k \\ & = \sum_{k=0}^{n+1} \frac{(2k+1)\pi^{k+1}}{k!} B_{2k+2} x^k + x^{n+3/2} O\left(
\int_{-(n+3/2)-i\infty}^{-(n+3/2)+i\infty} |\varphi(s)|\,ds \right) \\ & = \sum_{k=0}^{n} \frac{(2k+1)\pi^{k+1}}{k!} B_{2k+2} x^k + O(x^{n+1}), \end{align*} as $x\to 0$, which is precisely \eqref{eq:nutilde-asym1}. \end{proof}
The next result shows that $\tilde{\nu}$ can be thought of as a generating function for the coefficient sequence $(c_n)_{n=0}^\infty$, providing another illustration of why $\nu(x)$ and $\tilde{\nu}(t)$ are interesting functions to study.
\begin{thm}[The function $\tilde{\nu}(t)$ as a generating function for the sequence $(c_n)_{n=0}^\infty$] The centered function $\tilde{\nu}(t)$ has the power series expansion \begin{equation} \label{eq:nutilde-taylor} \tilde{\nu}(t) = \frac{1}{2\sqrt{2}} \sum_{n=0}^\infty \frac{(-1)^n (3/2)_n}{n!} c_n t^n
\qquad (|t|<1). \end{equation} \end{thm}
\begin{proof} Noting the Taylor expansion \begin{equation*} \sum_{n=0}^\infty \frac{(3/2)_n}{n!} z^n = (1-z)^{-3/2}, \end{equation*} we write \begin{align*} \tilde{\nu}(t) & = \frac{1}{(1+t)^{3/2}} \nu\left(\frac{1-t}{1+t}\right)
= \frac{1}{(1+t)^{3/2}} \int_0^\infty \frac{\omega(u)}{\left(u+\frac{1-t}{1+t}\right)^{3/2}}\,du \\ & = \int_0^\infty \omega(u) \left( 1-t+u+t u \right)^{-3/2}\,du
= \int_0^\infty \frac{\omega(u)}{(u+1)^{3/2}}\left( 1+t\frac{u-1}{u+1} \right)^{-3/2}\,du \\ & = \int_0^\infty \frac{\omega(u)}{(u+1)^{3/2}}\left( \sum_{n=0}^\infty \frac{(-1)^n (3/2)_n}{n!} \left( \frac{u-1}{u+1} \right)^n t^n \right) \,du \\ & = \sum_{n=0}^\infty \frac{(-1)^n (3/2)_n}{n!} \left( \int_0^\infty \frac{\omega(u)}{(u+1)^{3/2}} \left(\frac{u-1}{u+1}\right)^n\,du \right) t^n
= \frac{1}{2\sqrt{2}} \sum_{n=0}^\infty \frac{(-1)^n (3/2)_n}{n!} c_n t^n, \end{align*} as claimed. \end{proof}
Note that the power series \eqref{eq:nutilde-taylor} in fact converges absolutely and uniformly on the closed interval $|t|\le 1$ (as is immediately apparent from the asymptotic rate of decay of the coefficients $c_{2n}$), so, using the continuity of $\nu$ at $0$, its sum at $t=1$ equals $\tilde{\nu}(1)$, which, by Proposition~\ref{eq:nuofx-properties}(ii), is given explicitly by \begin{align*} \tilde{\nu}(1) = \frac{1}{(1+1)^{3/2}} \nu\left(\frac{1-1}{1+1}\right) = \frac{1}{2\sqrt{2}} \nu(0) = \frac{\pi}{12\sqrt{2}}. \end{align*} That is, we have the summation identity \begin{equation*} \sum_{n=0}^\infty \frac{(3/2)_{2n}}{(2n)!} c_{2n} = \frac{\pi}{6}. \end{equation*}
Next, we mention another curious integral representation expressing $\nu(x)$ in terms of $A(r)$.
\begin{prop} \label{prop:stieltjes-from-laplace} The function $\nu(x)$ can be expressed in terms of $A(r)$ as \begin{equation*} \nu(x) = 4\pi \int_0^\infty A(r)r^2 e^{-\pi x r^2} \,dr. \end{equation*} \end{prop}
\begin{proof} As pointed out to us by Jim Pitman, this is a special case of a standard relation expressing the generalized Stieltjes transform of a function in terms of its Laplace transform. In our notation, we have that \begin{align*} \int_0^\infty A(r) r^2 e^{-\pi x r^2}\,dr &= \int_0^\infty \left(\int_0^\infty \omega(t) e^{-\pi t r^2} \,dt \right) r^2 e^{-\pi xr^2}\,dr \\ & = \int_0^\infty \omega(t) \left( \int_0^\infty r^2 e^{-\pi(x+t)r^2}\,dr \right) \,dt \\ & = \int_0^\infty \omega(t) \left(\frac{1}{2} \Gamma\left(\frac32\right) \frac{1}{\pi^{3/2}(x+t)^{3/2}}\right) \,dt = \frac{1}{4\pi} \nu(x). \end{align*} \end{proof}
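This too is easy to test numerically; a sketch follows (assuming \texttt{mpmath}; the sample values of $x$ are arbitrary), with $\nu(x)$ evaluated directly from its definition and $A(r)$ from the closed form \eqref{eq:aofr-explicit}, supplemented by its Taylor expansion for small $r$.

\begin{verbatim}
# Numerical check of Proposition (stieltjes-from-laplace); assumes mpmath.
import mpmath as mp
mp.mp.dps = 20

def omega(x):
    x = mp.mpf(x)
    if x == 0:
        return mp.mpf(0)
    if x < 1:
        return omega(1/x) / mp.sqrt(x)
    return mp.nsum(lambda n: (2*mp.pi**2*n**4*x**2 - 3*mp.pi*n**2*x)*mp.exp(-mp.pi*n**2*x),
                   [1, mp.inf])

def nu(x):   # definition of nu(x) as a generalized Stieltjes transform of omega
    return mp.quad(lambda u: omega(u)/(u+x)**mp.mpf('1.5'), [0, mp.inf])

def A(r):    # closed form (aofr-explicit); Taylor series for small r
    if r < mp.mpf('0.5'):
        return mp.fsum((2*mp.pi)**(2*n+1)*mp.bernoulli(2*n+2)/(2*mp.factorial(2*n))*r**(2*n)
                       for n in range(50))
    s, c = mp.sinh(mp.pi*r), mp.cosh(mp.pi*r)
    return -mp.pi/(2*s**2) + mp.pi**2*r*c/(2*s**3)

for x in ['0.5', '1', '3']:
    x = mp.mpf(x)
    lhs = 4*mp.pi*mp.quad(lambda r: A(r)*r**2*mp.exp(-mp.pi*x*r**2), [0, mp.inf])
    print(x, lhs - nu(x))   # should be ~ 0
\end{verbatim}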
Next, we turn to examining the function $B(r)$. As it turns out, it can be evaluated explicitly in terms of the digamma function.
\begin{prop} \label{prop:bofr-explicit} The function $B(r)$ is given explicitly by \begin{equation} \label{eq:bofr-explicit} B(r) = 1-2r \psi'(r) - r^2 \psi''(r),\end{equation} where $\psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$ is the digamma function. \end{prop}
Combining this with Lemma~\ref{lem:recbal-selftrans}, we obtain the following result, which seems to be new (compare with \cite[Sec.~5.3]{berndt}, \cite[Sec.~9.12]{titschmarsh-fourier}, \cite{mathoverflow-fourier} where some results with a similar flavor are discussed).
\begin{corollary} The function $1-2r \psi'(r)-r^2 \psi''(r)$ is a Fourier self-transform, considered as a radial function on $\mathbb{R}$. That is, it satisfies the integral equation \begin{equation*} F(\rho) = 2 \int_0^\infty F(r) \cos(2\pi r\rho)\,dr \qquad (\rho\ge0). \end{equation*}
\end{corollary}
\begin{proof}[First proof of Proposition~\ref{prop:bofr-explicit}] Denote $\beta(r) = 1-2r \psi'(r) - r^2 \psi''(r)$. The proof that $B(r)=\beta(r)$ is based on computing the Mellin transform of $\beta(r)$. If we succeed in showing that this Mellin transform is defined for $0<\operatorname{Re}(s)<2$ and is equal to the function given in \eqref{eq:bofr-mellin}, the equality $B(r)=\beta(r)$ will follow from the standard uniqueness theorem for the Mellin transform.
We recall some useful facts about the digamma function and its derivatives \cite[p.~260]{abramowitz-stegun}. Start with the well-known partial fraction decomposition \begin{equation*} \psi(r+1) = -\gamma + \sum_{n=1}^\infty \left( \frac1n - \frac{1}{r+n}\right), \end{equation*} where $\gamma$ denotes the Euler-Mascheroni constant. Repeated differentiation gives the also-standard expansion \begin{equation} \label{eq:polygamma-partial-fractions} \psi^{(m)}(r+1) = (-1)^{m+1}\sum_{n=1}^\infty \frac{m!}{(r+n)^{m+1}} \qquad (m\ge 1). \end{equation} The Taylor expansion of $\psi^{(m)}(r+1)$ around $r=0$ is given by \begin{equation} \label{eq:polygamma-taylor} \psi^{(m)}(r+1) = \sum_{k=0}^\infty (-1)^{m+k+1} (k+1)(k+2)\cdots (k+m) \zeta(k+m+1) r^k \quad\ \ (m\ge1), \end{equation} and its asymptotic expansion as $r\to\infty$ (with $m\ge 1$, $N\ge0$ fixed) is \begin{equation} \label{eq:polygamma-asym} \psi^{(m)}(r+1) = (-1)^{m+1} \sum_{k=0}^N \frac{(k+m-1)!}{k!} \frac{B_k}{(r+1)^{k+m}} + O\left(\frac{1}{r^{N+m+1}}\right) \qquad \textrm{($r\to\infty$).} \end{equation}
With this preparation, the Mellin transform of derivatives of $\psi(r+1)$ can be evaluated through termwise integration of the terms of \eqref{eq:polygamma-partial-fractions}. After a short computation using~\eqref{eq:mellin-textbook-ex} and the reflection formula $\Gamma(z)\Gamma(1-z) =\frac{\pi}{\sin(\pi z)}$, we get that \begin{align} \label{eq:polygamma-mellin} \int_0^\infty \psi^{(m)}&(r+1) r^{s-1}\,dr
= -\frac{\pi (s-1)(s-2)\cdots (s-m)}{\sin(\pi s)} \zeta(m+1-s) \qquad (0 < \operatorname{Re}(s) < m) \end{align} for $m\ge 1$ (a variant of this formula is mentioned in \cite[p.~659, eq.~6.473]{gradshteyn-ryzhik}; see also \cite[p.~325, eq.~(7)]{erdelyi}). Note that the strip of convergence for each of the individual Mellin transforms being summed is $0 < \operatorname{Re}(s) < m+1$, and the requirement of absolute summability imposes the further restriction $\operatorname{Re}(s) < m$ (apparent in the zeta-term $\zeta(m+1-s)$, which blows up as $\operatorname{Re}(s)$ approaches $m$ from the left).
Building on the facts discussed above, we can approach the computation of the Mellin transform of $\beta(r)$. First, note that, because of the standard identity $\psi(x+1) = \psi(x) + \frac{1}{x}$, $\beta(r)$ can be rewritten as \begin{equation*} \beta(r) = 1-2r \psi'(r+1) - r^2 \psi''(r+1). \end{equation*} By \eqref{eq:polygamma-taylor}--\eqref{eq:polygamma-asym}, the asymptotic behavior of $\beta(r)$ for $r$ near $0$ and $\infty$ is given by \begin{align} \label{eq:betaofr-asym1} \beta(r) & = 1 - \frac{\pi^2}{3} r + O(r^2) \qquad \textrm{as }r\to 0, \\ \label{eq:betaofr-asym2} \beta(r) & = \frac{1}{6r^2} + O\left(r^{-4}\right) \ \ \qquad \textrm{as }r\to \infty. \end{align} This implies that the Mellin transform of $\beta(r)$ is defined for $0 < \operatorname{Re}(s) < 2$. Also of some interest is the function $\beta(r)-1$ and its own Mellin transform. Again by \eqref{eq:betaofr-asym1}--\eqref{eq:betaofr-asym2}, it follows that the Mellin transform of $\beta(r)-1$ is defined for $-1 < \operatorname{Re}(s) < 0$. Using \eqref{eq:polygamma-mellin}, it is readily evaluated for such $s$ to be \begin{align} \label{eq:bofr-minusone-mellin} \int_0^\infty (\beta(r)-1) r^{s-1} \, dr & = \int_0^\infty \left(-2 \psi'(r+1)r^s - \psi''(r+1)r^{s+1}\right) \, dr \\ \nonumber & = \frac{\pi s(s-1)}{\sin(\pi s)} \zeta(1-s) \qquad\qquad \qquad (-1 < \operatorname{Re}(s) < 0). \end{align} Now, this isn't exactly what we want, both because of the irksome ``$-1$'' term in the integrand and because the range of validity of the formula is disjoint from the range $0<\operatorname{Re}(s)<2$ we are interested in. We can nonetheless exploit this identity to get what we need through a trick involving analytic continuation. Namely, for $s$ satisfying $0<\operatorname{Re}(s)<2$, we can try to evaluate the Mellin transform of $\beta(r)$ by performing an integration by parts, which gives that (under the stated assumptions on $s$) \begin{align*} \int_0^\infty \beta(r) r^{s-1}\,dr & =
\frac{1}{s} \beta(r) r^s\Big|_{r=0}^{r=\infty} - \frac{1}{s} \int_0^\infty \beta'(r)r^{s}\,dr
= - \frac{1}{s} \int_0^\infty \beta'(r)r^{s}\,dr =: K(s). \end{align*} Moreover, examining the asymptotic behavior of $\beta'(r)$ near $r=0$ and $r=\infty$ (which, the reader can confirm using \eqref{eq:polygamma-taylor}--\eqref{eq:polygamma-asym}, is simply that obtained by differentiating the terms in \eqref{eq:betaofr-asym1}--\eqref{eq:betaofr-asym2} termwise, including the big-$O$ term), we see that the Mellin transform of $\beta'(r)$ converges (and is an analytic function) in the strip $0 < \operatorname{Re}(s) < 3$, which implies that $K(s)$ is analytic in the strip $-1 < \operatorname{Re}(s) < 2$, except possibly for a simple pole at $s=0$ coming from the factor $\frac{1}{s}$ (removing this point leaves a connected open set, so the analytic continuation argument below is unaffected). But for $s$ satisfying $-1<\operatorname{Re}(s)<0$, performing a similar integration by parts as the one above starting with the integral in \eqref{eq:bofr-minusone-mellin} gives \begin{align*} \int_0^\infty (\beta(r)-1) r^{s-1}\,dr & =
\frac{1}{s} (\beta(r)-1) r^s\Big|_{r=0}^{r=\infty} - \frac{1}{s} \int_0^\infty \frac{d}{dr}(\beta(r)-1) r^{s}\,dr \\ & = - \frac{1}{s} \int_0^\infty \frac{d}{dr}(\beta(r)-1) r^{s}\,dr
= - \frac{1}{s} \int_0^\infty \beta'(r)r^{s}\,dr = K(s), \end{align*} the \emph{same function} $K(s)$ evaluated on a different part of its domain of definition. Since $K(s)$ is given by the formula found in \eqref{eq:bofr-minusone-mellin} for $-1<\operatorname{Re}(s)<0$, by the principle of analytic continuation it is also equal to the same expression for $0<\operatorname{Re}(s)<2$. That is, we have shown that \begin{equation} \label{eq:betaofr-mellin-proof1} \int_0^\infty \beta(r) r^{s-1} \, dr = \frac{\pi s(s-1)}{\sin(\pi s)} \zeta(1-s) \qquad (0 < \operatorname{Re}(s) < 2). \end{equation} Finally, using the functional equation of the Riemann zeta function it is easy to check that the function on the right-hand side of \eqref{eq:betaofr-mellin-proof1} is equal to the one on the right-hand side of \eqref{eq:bofr-mellin}. This shows that the Mellin transforms of $\beta(r)$ and $B(r)$ coincide (in their ``natural'' region of definition where the Mellin transform of both converges), and finishes the proof. \end{proof}
\begin{proof}[Second proof of Proposition~\ref{prop:bofr-explicit}] A second method is to use Mellin inversion to represent $B(r)$ in terms of its Mellin transform found in \eqref{eq:bofr-mellin}, namely as \begin{equation*} B(r) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \frac{(2\pi)^{1-s}}{2\sin(\pi s/2)}
s(s-1) \Gamma(s) \zeta(s)
r^{-s}\,ds, \end{equation*} where $c$ is an arbitrary number in $(0,2)$. Fix an integer $n\ge1$, and shift the integration contour to the left to the line $\operatorname{Re}(s)=-n-1/2$. As in the proof of Proposition~\ref{eq:nuofx-properties}, we can use the residue theorem (with the same arguments to justify its application in this setting involving unbounded contours) to calculate the change in the value of the integral by looking at the poles of the integrand being skipped over and their residues. The relevant poles in this case are at $s=0, -1,\ldots, -n$. We leave it to the reader to verify that the residue at $s=0$ is equal to $1$, and that for any $k\ge 1$, the pole at $-k$ has residue $(-1)^k k (k+1) \zeta(k+1) r^k$. We therefore get that \begin{align*} B(r) = 1 + \sum_{k=1}^n &(-1)^k k (k+1) \zeta(k+1) r^k
+\frac{1}{2\pi i} \int_{-(n+1/2)-i\infty}^{-(n+1/2)+i\infty} \frac{(2\pi)^{1-s}}{2\sin(\pi s/2)}
s(s-1) \Gamma(s) \zeta(s)
r^{-s}\,ds. \end{align*} Assume that $0<r<1$. In this case it can be shown without much difficulty that the integral converges to $0$ as $n\to\infty$ (this becomes easier to do if one first replaces the Mellin transform of $B(r)$ in the integrand by the simpler expression on the right-hand side of \eqref{eq:betaofr-mellin-proof1}, which as we commented above in fact represents the same function). The conclusion is that $B(r)$ is represented for $0<r<1$ by a convergent Taylor series \begin{equation*} B(r) = 1 + \sum_{k=1}^\infty (-1)^k k (k+1) \zeta(k+1) r^k. \end{equation*} But this is consistent with (and implies) \eqref{eq:bofr-explicit}: using \eqref{eq:polygamma-taylor}, one can check easily that the function $\beta(r) = 1-2r \psi'(r+1)-r^2 \psi''(r+1)$ has the same Taylor expansion. This proves \eqref{eq:bofr-explicit} for $0<r<1$, and the claim follows for general $r$ by analytic continuation. \end{proof}
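The consistency asserted in the last step is also easy to check numerically: the following sketch (assuming \texttt{mpmath}; the sample points and the truncation at $200$ terms are arbitrary choices) compares $1-2r\psi'(r)-r^2\psi''(r)$ with the Taylor series $1+\sum_{k\ge1}(-1)^k k(k+1)\zeta(k+1)r^k$ for a few values of $r$ in $(0,1)$.

\begin{verbatim}
# Numerical comparison of beta(r) = 1 - 2 r psi'(r) - r^2 psi''(r) with the Taylor
# series 1 + sum_{k>=1} (-1)^k k (k+1) zeta(k+1) r^k from the second proof; assumes mpmath.
import mpmath as mp
mp.mp.dps = 30

def beta(r):
    return 1 - 2*r*mp.psi(1, r) - r**2*mp.psi(2, r)

def beta_series(r, terms=200):
    return 1 + mp.fsum((-1)**k * k*(k+1) * mp.zeta(k+1) * r**k for k in range(1, terms))

for r in ['0.1', '0.3', '0.6']:
    r = mp.mpf(r)
    print(r, beta(r) - beta_series(r))   # should be ~ 0
\end{verbatim}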
\chapter{Expansion of $\Xi(t)$ in the polynomials $g_n$}
\label{ch:gn-expansion}
In this chapter we continue to build on the tools developed in Chapters~\ref{ch:fn-expansion} and~\ref{ch:radial}, in order to derive an infinite series expansion for the Riemann xi function in yet another family of orthogonal polynomials, the family $(g_n(x))_{n=0}^\infty$, and study its properties. As we discussed briefly in the Introduction, the polynomials $g_n$ are defined by \begin{equation*} g_n(x) = p_n\left(x; \frac34,\frac34,\frac34,\frac34\right) = i^n (n+1) \ {}_3F_2\left(-n,n+2,\frac34+ix;\frac32,\frac32; 1\right), \end{equation*}
where $p_n(x;a,b,c,d)$ denotes the continuous Hahn polynomial with parameters $a,b,c,d$. They form a family of orthogonal polynomials with respect to the weight function $\left|\Gamma\left(\frac34+ix\right)\right|^4$ on $\mathbb{R}$. Their main properties are summarized in \secref{sec:orth-gn}.
\section{Main results}
\label{sec:gnexp-mainresults}
As in Chapters~\ref{ch:hermite} and~\ref{ch:fn-expansion}, we start by defining a sequence of numbers $(d_n)_{n=0}^\infty$ that will play the role of the coefficients associated with the new expansion. Define \begin{align} \label{eq:def-gn-coeffs} d_n & = \frac{(3/2)_n}{2^{n-3/2} n!}\int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n
{}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\,dx.
\end{align}
As a first step towards demystifying this somewhat obscure definition, we expand the ${}_2F_1$ term in an infinite series. Momentarily ignoring issues of convergence, we have that \begin{align} \label{eq:dn-gausshyper-expansion} d_n & = \frac{(3/2)_n}{2^{n-3/2} n!}\int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n \left[\sum_{m=0}^\infty \frac{\left(\frac{n}{2}+\frac34\right)_m \left(\frac{n}{2}+\frac54\right)_m }{m! (n+2)_m} \left( \frac{x-1}{x+1}\right)^{2m} \right]
\,dx \\ \nonumber & = \frac{(3/2)_n}{2^{n-3/2} n!} \sum_{m=0}^\infty \frac{\left(\frac{n}{2}+\frac34\right)_m \left(\frac{n}{2}+\frac54\right)_m }{m! (n+2)_m} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{n+2m} \,dx \\ \nonumber & = \frac{(3/2)_n}{2^n n!} \sum_{m=0}^\infty \frac{\left(\frac{n}{2}+\frac34\right)_m \left(\frac{n}{2}+\frac54\right)_m }{m! (n+2)_m} c_{n+2m} = \frac{n+1}{2^n} \sum_{m=0}^\infty \frac{(3/2)_{n+2m}}{4^m m! (n+m+1)!} c_{n+2m}, \end{align} where in the last step we use the relation $\left(\frac{n}{2}+\frac34\right)_m \left(\frac{n}{2}+\frac54\right)_m = \frac{(3/2)_{n+2m}}{2^{2m} (3/2)_n} $. In addition to being an interesting way to express $d_n$ in terms of the coefficients $c_k$, this suggests a relatively simple way to see that the integral \eqref{eq:def-gn-coeffs} converges absolutely (which would also justify the above formal computation); namely, letting \begin{equation*}
c_n' = \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left|\frac{x-1}{x+1}\right|^n \,dx, \end{equation*} we have the simple relations $c_r' \ge c_{r+1}'$, $c_{2r}'=c_{2r}$, and therefore (using \eqref{eq:fn-coeff-asym}, or some easy corollary of Lemma~\ref{lem:fn-easybound2}) we get that $c_r' \le K e^{-M \sqrt{r}}$ for all $r\ge0$ and some constants $K,M>0$. This then gives that \begin{align} \int_0^\infty &
\left| \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n {}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\right| \,dx \label{eq:finiteness-bound} \\ & \leq \sum_{m=0}^\infty \frac{\left(\frac{n}{2}+\frac34\right)_m \left(\frac{n}{2}+\frac54\right)_m }{m! (n+2)_m} \int_0^\infty
\left| \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{n+2m}
\right| \,dx \nonumber \\ &= \sum_{m=0}^\infty \frac{\left(\frac{n}{2}+\frac34\right)_m \left(\frac{n}{2}+\frac54\right)_m }{m! (n+2)_m} c_{n+2m}' = \frac{(n+1)!}{(3/2)_n} \sum_{m=0}^\infty \frac{(3/2)_{n+2m}}{4^m m!(n+m+1)!}
c_{n+2m}' \nonumber \\ &\leq K \frac{(n+1)!}{(3/2)_n} \sum_{m=0}^\infty \frac{(3/2)_{n+2m}}{4^m m!(n+m+1)!} e^{-M \sqrt{n+2m}} \nonumber \\ &\leq 2K \frac{(n+1)!}{(3/2)_n} \sum_{m=0}^\infty \frac{(2)_{n+2m}}{2^{2m+1} m!(n+m+1)!} e^{-M \sqrt{n+2m}} \nonumber \\ &= 2K \frac{(n+1)!}{(3/2)_n} \sum_{m=0}^\infty \frac{1}{2^{2m+1}} \binom{n+2m+1}{m} e^{-M \sqrt{n+2m}} \nonumber \\ &\leq 2^{n+1} K \frac{(n+1)!}{(3/2)_n} \sum_{m=0}^\infty e^{-M \sqrt{n+2m}} < \infty, \nonumber \end{align} establishing the absolute convergence.
We summarize the above observations as a proposition. (Parts (ii) and (iii) below follow from the representation \eqref{eq:dn-cn-expansion} together with the vanishing of the odd-indexed coefficients $c_{2k+1}$ and the positivity of the even-indexed coefficients $c_{2k}$.)
\begin{prop} (i) The integral defining $d_n$ converges absolutely for all \mbox{$n\ge 0$.}
(ii) We have $d_{2n+1} = 0$ for all $n\ge 0$.
(iii) We have $d_{2n} > 0$ for all $n\ge 0$.
(iv) $d_n$ can be expressed alternatively in terms of the coefficients $c_k$ as \begin{equation} \label{eq:dn-cn-expansion} d_n = \frac{n+1}{2^n} \sum_{m=0}^\infty \frac{(3/2)_{n+2m}}{4^m m!(n+m+1)!} c_{n+2m}. \end{equation} \end{prop}
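Relation \eqref{eq:dn-cn-expansion} offers a convenient consistency check on the definition \eqref{eq:def-gn-coeffs}. The brute-force numerical sketch below is not part of the development and assumes the Python \texttt{mpmath} library; the series truncation level and the working precision are ad hoc choices, and $c_k$ is computed from the integral form $c_k = 2\sqrt{2}\int_0^\infty \omega(x)(x+1)^{-3/2}\bigl(\tfrac{x-1}{x+1}\bigr)^k\,dx$ that was used repeatedly in Chapter~\ref{ch:radial}.

\begin{verbatim}
# Numerical check of eq. (dn-cn-expansion) against the defining integral (def-gn-coeffs).
# A brute-force sketch; assumes mpmath.
import mpmath as mp
mp.mp.dps = 15

def omega(x):
    x = mp.mpf(x)
    if x == 0:
        return mp.mpf(0)
    if x < 1:
        return omega(1/x) / mp.sqrt(x)
    return mp.nsum(lambda k: (2*mp.pi**2*k**4*x**2 - 3*mp.pi*k**2*x)*mp.exp(-mp.pi*k**2*x),
                   [1, mp.inf])

_c_cache = {}
def c(k):
    if k not in _c_cache:
        _c_cache[k] = 2*mp.sqrt(2)*mp.quad(
            lambda x: omega(x)*(x+1)**mp.mpf('-1.5')*((x-1)/(x+1))**k, [0, mp.inf])
    return _c_cache[k]

def d_integral(n):   # eq. (def-gn-coeffs)
    f = lambda x: (omega(x)*(x+1)**mp.mpf('-1.5')*((x-1)/(x+1))**n
                   * mp.hyp2f1(n/mp.mpf(2)+mp.mpf('0.75'), n/mp.mpf(2)+mp.mpf('1.25'),
                               n+2, ((x-1)/(x+1))**2))
    return mp.rf(mp.mpf('1.5'), n)/(2**(n-mp.mpf('1.5'))*mp.factorial(n)) * mp.quad(f, [0, mp.inf])

def d_series(n, M=30):   # eq. (dn-cn-expansion), truncated after M terms
    return (n+1)/mp.mpf(2)**n * mp.fsum(
        mp.rf(mp.mpf('1.5'), n+2*m)/(4**m*mp.factorial(m)*mp.factorial(n+m+1))*c(n+2*m)
        for m in range(M))

for n in [0, 2]:
    print(n, d_integral(n) - d_series(n))   # small (limited by truncation and quadrature error)
\end{verbatim}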
We are ready to formulate the main results concerning the expansion of $\Xi(t)$ in the polynomials $g_n$, which are precise analogues of Theorems~\ref{THM:HERMITE-EXPANSION} and~\ref{thm:hermite-coeff-asym} in Chapter~\ref{ch:hermite} and Theorems~\ref{THM:FN-EXPANSION} and~\ref{THM:FN-COEFF-ASYM} in Chapter~\ref{ch:fn-expansion}.
\begin{thm}[Infinite series expansion for $\Xi(t)$ in the polynomials $g_n$] \label{THM:GN-EXPANSION} The Riemann xi function has the infinite series representation \begin{equation} \label{eq:gn-expansion} \Xi(t) = \sum_{n=0}^\infty (-1)^n d_{2n} g_{2n}\left(\frac{t}{2}\right) \end{equation} which converges uniformly on compacts for all $t\in\mathbb{C}$. More precisely, for any compact set $K\subset \mathbb{C}$ there exist constants $C_1, C_2>0$ depending on $K$ such that \begin{equation} \label{eq:gn-expansion-errorbound}
\left|\Xi(t) - \sum_{n=0}^N (-1)^n d_{2n} g_{2n}\left(\frac{t}{2}\right) \right| \leq C_1 e^{-C_2 N^{2/3}} \end{equation} holds for all $N\ge0$ and $t\in K$. \end{thm}
\begin{thm}[Asymptotic formula for the coefficients $d_{2n}$] \label{thm:gn-coeff-asym} The asymptotic behavior of $d_{2n}$ for large $n$ is given by \begin{equation} \label{eq:gn-coeff-asym} d_{2n} = \left(1+O\left(n^{-1/10}\right)\right) D n^{4/3} \exp\left(-E n^{2/3}\right) \end{equation} as $n\to\infty$, where $D,E$ are constants given by \begin{equation} \label{eq:gncoeffs-asym-consts} D = \frac{128\times 2^{1/3} \pi^{2/3}e^{-2\pi/3}}{\sqrt{3}}, \qquad E = 3(4\pi)^{1/3}. \end{equation}
\end{thm}
As in Chapters~\ref{ch:hermite} and~\ref{ch:fn-expansion}, we note the fact that the coefficients $d_n$ can be computed as inner products in the $L^2$-space $L^2(\mathbb{R},\left|\Gamma\left(\frac34+\frac{it}{2}\right)\right|^4)$.
\begin{corollary} The coefficients $d_n$ can be alternatively expressed as \begin{equation} \label{eq:gn-coeff-innerproduct}
d_n = \frac{8}{\pi^3}(-i)^n \int_{-\infty}^\infty \Xi(t) g_n\left(\frac{t}{2}\right) \left|\Gamma\left(\frac34+\frac{it}{2}\right)\right|^4\,dt. \end{equation} \end{corollary}
\begin{proof} Repeat the arguments in the proofs of Corollaries~\ref{cor:hermite-coeff-innerproduct} and~\ref{cor:fn-coeff-innerproduct}. \end{proof}
\section{Proof of Theorem~\ref{THM:GN-EXPANSION}}
\label{sec:gnexp-proofexpansion}
The next two lemmas are analogues of Lemmas~\ref{lem:hermite-easybound} and~\ref{lem:hermite-easybound2} in Chapter~\ref{ch:hermite} and Lemmas~\ref{lem:fn-easybound} and~\ref{lem:fn-easybound2} in Chapter~\ref{ch:fn-expansion}.
\begin{lem} \label{lem:gn-easybound} The polynomials $g_n(x)$ satisfy the bound \begin{equation} \label{eq:gn-easybound}
|g_n(x)|\leq C_1 e^{C_2 n^{1/3}} \end{equation} for all $n\ge 0$, uniformly as $x$ ranges over any compact set $K\subset \mathbb{C}$, with $C_1,C_2>0$ being constants that depend on $K$ but not on $n$. \end{lem}
\begin{proof} This is identical to the proof of Lemma~\ref{lem:fn-easybound}, except that the use of the recurrence relation \eqref{eq:fn-recurrence} is replaced by the analogous relation \eqref{eq:gn-recurrence} for the sequence $g_n(x)$, with the result that some small modifications need to be made to the constants in the proof. We leave the details as an exercise. \end{proof}
\begin{lem} There exist constants $J_1, J_2>0$ such that for all $n\ge0$, the bound \begin{align} \label{eq:gn-easybound2} \frac{1}{2^n}\int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
& \left| \left( \frac{x-1}{x+1}\right)^n {}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\right| \,dx
\leq J_1 e^{-J_2 n^{2/3}} \end{align} holds. \end{lem}
\begin{proof} Note that this is a stronger version of the finiteness bound \eqref{eq:finiteness-bound} that makes explicit the dependence of the bound on $n$. To prove it, we refer back to the penultimate line of \eqref{eq:finiteness-bound} and proceed from there a bit more economically than before. Multiplying by $\frac{1}{2^n}$ and using the trivial fact that $\frac{(n+1)!}{(3/2)_n}\leq 2(n+1)$, we get that the integral in \eqref{eq:gn-easybound2} (together with the leading factor of $\frac{1}{2^n}$) is bounded from above by \begin{align*} & 4 K (n+1)\sum_{m=0}^\infty \frac{1}{2^{n+2m+1}} \binom{n+2m+1}{m} e^{-M \sqrt{n+2m}} \\ &= 4 K (n+1) \Bigg[\sum_{m\le \frac{1}{8M^{2/3}} n^{4/3}} \frac{1}{2^{n+2m+1}} \binom{n+2m+1}{m} e^{-M \sqrt{n+2m}}
+ \sum_{m> \frac{1}{8M^{2/3}} n^{4/3}} e^{-M \sqrt{n+2m}} \Bigg] \end{align*} (with the same constants $K,M$ appearing in \eqref{eq:finiteness-bound}). We will show that each of the two sums in this last expression satisfies a bound of the sort we need. For the second sum, observe that it is bounded by the integral \begin{equation*} \int_{\frac{1}{8M^{2/3}} n^{4/3}-1}^\infty e^{-M\sqrt{n+2x}}\,dx, \end{equation*} and this integral is $O\left(\exp\left(-\frac{M^{2/3}}{2} n^{2/3}\right)\right)$, by the relation \begin{equation*} \int_{A}^\infty e^{-M \sqrt{n+2x}}\,dx = \frac{1}{M^2} (M\sqrt{n+2A}+1) e^{-M\sqrt{n+2A}}. \end{equation*} To estimate the first sum, we claim that the terms in that sum are increasing as a function of $m$ for all large enough (but fixed) $n$; this would imply that the sum is bounded for large $n$ by $\frac{1}{8M^{2/3}} n^{4/3}$ times the last term, which in turn is at most $O\left(\exp\left(-\frac{M^{2/3}}{2} n^{2/3}\right)\right)$, and hence, when combined with the estimates above, would imply the claim of the lemma.
To prove the claim, observe that the ratio of successive terms in the sum is \begin{align} \label{eq:ratio-successive} \frac{ \frac{1}{2^{n+2m+3}} \binom{n+2m+3}{m+1} e^{-M\sqrt{n+2m+2}} }{ \frac{1}{2^{n+2m+1}} \binom{n+2m+1}{m} e^{-M\sqrt{n+2m}} } & = \frac{(n+2m+2)(n+2m+3)}{4(m+1)(n+m+2)} e^{M(\sqrt{n+2m}-\sqrt{n+2m+2})} \\ \nonumber & \geq \frac{\left(m+\frac{n}{2}\right)^2}{(m+1)(m+n+2)} \left(1-\frac{M}{\sqrt{m+\frac{n}{2}}}\right) \\ \nonumber & = \frac{\left(m+\frac{n}{2}\right)^2}{\left(m+\frac{n}{2}\right)^2 -\left(\frac{n^2}{4}-3m-n-2\right)} \left(1-\frac{M}{\sqrt{m+\frac{n}{2}}}\right). \end{align} Our claim is equivalent to the statement that, under the assumption $m\le \frac{1}{8M^{2/3}} n^{4/3}$, the last expression in \eqref{eq:ratio-successive} is $\ge 1$. Equivalently, we need to show that the inequality \begin{equation} \label{eq:ineq2verify1} 1- \frac{\frac{n^2}{4}-3m-n-2}{\left(m+\frac{n}{2}\right)^2} \leq 1-\frac{M}{\sqrt{m+\frac{n}{2}}} \end{equation} holds for those values of $m$. This reduces after some further simple algebra to verifying the inequality \begin{equation} \label{eq:ineq2verify2} m+\frac{n}{2} \leq \frac{1}{M^{2/3}}\left(\frac{n^2}{4}-3m-n-2\right)^{2/3}. \end{equation} To check this, assume that $n$ is large enough so that the inequalities \begin{equation} \label{eq:largen-assumption} 3\left(\frac{1}{8M^{2/3}}\right) n^{4/3}+n + 2 \le \frac{n^2}{8}, \qquad \frac{n}{2} \le \frac{1}{8 M^{2/3}}n^{4/3} \end{equation} are satisfied. Then, together with our assumption on $m$, that also implies that \begin{equation*} \frac{n^2}{8} \leq \frac{n^2}{4}-3m-n-2, \end{equation*} and therefore also that \begin{align*} m + \frac{n}{2} &\leq \frac{1}{8M^{2/3}} n^{4/3} + \frac{1}{8M^{2/3}} n^{4/3} = \frac{1}{4M^{2/3}} n^{4/3} = \frac{1}{M^{2/3}} \left( \frac{n^2}{8} \right)^{2/3}
\leq \frac{1}{M^{2/3}} \left( \frac{n^2}{4}-3m-n-2 \right)^{2/3}. \end{align*} This verifies~\eqref{eq:ineq2verify2}, hence also \eqref{eq:ineq2verify1}, for all $n$ satisfying \eqref{eq:largen-assumption} (which clearly includes all values of $n$ larger than some fixed $N_0$), and therefore finishes the proof of the claim and also of the lemma. \end{proof}
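The monotonicity claim in the proof above is also easy to probe numerically. The following minimal Python sketch (using \texttt{mpmath}; the value $M=1$ is an arbitrary illustrative choice and is not the actual constant from \eqref{eq:finiteness-bound}) checks, for a sample value of $n$, that the summands are indeed increasing throughout the range $m\le \frac{1}{8M^{2/3}}n^{4/3}$.
\begin{verbatim}
# Minimal sketch of the monotonicity claim: for a sample n and an assumed
# illustrative value of M, the terms
#     2^{-(n+2m+1)} * C(n+2m+1, m) * exp(-M*sqrt(n+2m))
# should be increasing in m throughout the range m <= n^{4/3}/(8 M^{2/3}).
from mpmath import mp, mpf, binomial, exp, sqrt, power

mp.dps = 50
M = mpf(1)      # illustrative sample value, not the actual constant M
n = 200

def term(m):
    return binomial(n + 2*m + 1, m) / power(2, n + 2*m + 1) * exp(-M*sqrt(n + 2*m))

m_max = int(power(n, mpf(4)/3) / (8*power(M, mpf(2)/3)))
print(m_max, all(term(m + 1) >= term(m) for m in range(m_max)))
\end{verbatim}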
We need one final bit of preparation before proving Theorem~\ref{THM:GN-EXPANSION}. Recall that in the proof of Theorem~\ref{THM:FN-EXPANSION}, a key idea was the observation that the integration kernel $x^{s/2-1}$ can be related to the generating function of the polynomials $f_n(t/2)$. The next lemma shows a way of representing the same generating function as an infinite series involving the polynomials $g_n(t/2)$.
\begin{lem} \label{lem:fn-genfun-gnexpansion}
For $w\in \mathbb{C}$ and $|z|<1$, we have the identity \begin{equation*} \sum_{n=0}^\infty f_n(w) z^n = \sum_{n=0}^\infty \frac{(3/2)_n}{2^n n!} g_n(w) \,{}_2F_1\left(\frac{n}{2}+\frac34,\frac{n}{2}+\frac54; n+2;-z^2\right) z^n . \end{equation*} \end{lem}
\begin{proof} Using the relation \eqref{eq:fn-intermsof-gn} expressing the polynomial $f_n$ in terms of the $g_k$'s, we can write \begin{equation} \label{eq:fn-genfun-intermsof-gn} \sum_{n=0}^\infty f_n(w) z^n = \sum_{n=0}^\infty \left( \frac{(3/2)_n}{2^n (n+1)!} \sum_{m=0}^{\lfloor n/2 \rfloor} (-1)^m (n-2m+1) \binom{n+1}{m} g_{n-2m}(w) \right) z^n. \end{equation}
The claim will follow by suitably rearranging the terms in this double summation. First, let us check that this is permitted by showing that the sum is in fact absolutely convergent. Indeed, using Lemma~\ref{lem:gn-easybound} to bound $|g_{n-2m}(w)|$ (with $w$ being fixed and the resulting constants $C_1,C_2$ depending on $w$ but not on $n,m$), we see that \begin{align*} & \sum_{n=0}^\infty \sum_{m=0}^{\lfloor n/2 \rfloor}
\frac{(3/2)_n}{2^n (n+1)!}
\left| (-1)^m (n-2m+1)
\binom{n+1}{m} g_{n-2m}(w) \right|
\cdot |z|^n \\& \ \leq \sum_{n=0}^\infty \frac{(3/2)_n}{2^n (n+1)!} (n+1) 2^{n+1} \times C_1 e^{C_2 n^{1/3}}
|z|^n \leq
2C_1 \sum_{n=0}^\infty (n+1) e^{C_2 n^{1/3}} |z|^n < \infty. \end{align*} With absolute convergence established, we can rewrite the double sum in \eqref{eq:fn-genfun-intermsof-gn}, introducing a new summation index $k=n-2m$ in place of the index $n$, as \begin{align*} \sum_{k=0}^\infty \sum_{m=0}^\infty & \frac{(3/2)_{k+2m}}{2^{k+2m}(k+2m+1)!} (-1)^m (k+1) \binom{k+2m+1}{m} g_k(w) z^{k+2m} \\ & \qquad= \sum_{k=0}^\infty \frac{(3/2)_k}{2^k k!} g_k(w) z^k \left( \sum_{m=0}^\infty \frac{(-z^2)^m}{m!} \frac{(3/2)_{k+2m}}{(3/2)_k 2^{2m}} \frac{(k+1)!}{(k+m+1)!} \right) \\ & \qquad= \sum_{k=0}^\infty \frac{(3/2)_k}{2^k k!} g_k(w) z^k \left( \sum_{m=0}^\infty \frac{\left(\frac{k}{2}+\frac34\right)_m \left(\frac{k}{2}+\frac54\right)_m}{m!(k+2)_m} \left(-z^2\right)^m \right) \\ & \qquad= \sum_{k=0}^\infty \frac{(3/2)_k}{2^k k!} g_k(w) z^k {}_2F_1\left( \frac{k}{2}+\frac34, \frac{k}{2}+\frac54 ; k+2 ; -z^2 \right), \end{align*} as was the claim to prove. \end{proof}
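The only step in the above computation that is not purely mechanical is the Pochhammer-symbol manipulation $\frac{(3/2)_{k+2m}}{(3/2)_k 2^{2m}} = \left(\frac{k}{2}+\frac34\right)_m\left(\frac{k}{2}+\frac54\right)_m$ together with $\frac{(k+1)!}{(k+m+1)!} = \frac{1}{(k+2)_m}$. As a purely illustrative sanity check (using \texttt{mpmath}), these identities can be verified numerically for small indices.
\begin{verbatim}
# Minimal sketch: verify the two Pochhammer-symbol identities used in the
# rearrangement, for small k and m.  rf(a, j) is the rising factorial (a)_j.
from mpmath import mp, mpf, rf, factorial

mp.dps = 30
tol = mpf(10)**(-20)
for k in range(5):
    for m in range(5):
        lhs1 = rf(mpf(3)/2, k + 2*m) / (rf(mpf(3)/2, k) * 4**m)
        rhs1 = rf(mpf(k)/2 + mpf(3)/4, m) * rf(mpf(k)/2 + mpf(5)/4, m)
        lhs2 = factorial(k + 1) / factorial(k + m + 1)
        rhs2 = 1 / rf(k + 2, m)
        assert abs(lhs1 - rhs1) < tol and abs(lhs2 - rhs2) < tol
print("Pochhammer identities verified for small k, m")
\end{verbatim}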
We are ready to prove \eqref{eq:gn-expansion-errorbound}. The calculation parallels that in the proofs of Theorems~\ref{THM:HERMITE-EXPANSION} and~\ref{THM:FN-EXPANSION}. Namely, start by estimating in a fairly simple-minded way that \begin{align} \label{eq:gnexp-ineq-chain}
\Bigg|\Xi(t) - & \sum_{n=0}^N (-1)^{n}d_{2n} g_{2n}\left(\frac{t}{2}\right)\Bigg|
=
\left|\Xi(t) - \sum_{n=0}^{2N} i^n d_n g_n\left(\frac{t}{2}\right)\right| \\ \nonumber &=
\Bigg|\int_0^\infty \omega(x) \Bigg(x^{-\frac34+\frac{it}{2}} - \sum_{n=0}^{2N} i^n \frac{(3/2)_n}{2^{n-3/2} n!} \frac{1}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n \\ \nonumber & \qquad\qquad\qquad\qquad\qquad\qquad \times {}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right) g_n\left(\frac{t}{2}\right)\Bigg)
\,dx\Bigg| \\ \nonumber &\leq \int_0^\infty \omega(x)
\Bigg| x^{-\frac34+\frac{it}{2}} - \sum_{n=0}^{2N} i^n \frac{(3/2)_n}{2^{n-3/2} n!} \frac{1}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n \\ \nonumber & \qquad\qquad\qquad\qquad\qquad\qquad \times {}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
g_n\left(\frac{t}{2}\right)\Bigg| \,dx. \end{align} By \eqref{eq:int-kernel-expansion} and Lemma~\ref{lem:fn-genfun-gnexpansion} (with $z=i(x-1)/(x+1)$), the kernel $x^{-\frac34+\frac{it}{2}}$ can be expanded as \begin{align*} x^{-\frac34+\frac{it}{2}} & = \sum_{n=0}^\infty i^n \frac{(3/2)_n}{2^{n-3/2} n!} \frac{1}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^n
{}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right) g_n\left(\frac{t}{2}\right). \end{align*} Continuing the chain of inequalities \eqref{eq:gnexp-ineq-chain}, we therefore get that \begin{align*}
& \left|\Xi(t) - \sum_{n=0}^N (-1)^{n}d_{2n} g_{2n}\left(\frac{t}{2}\right)
\right| \\ &\leq \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\Bigg| \sum_{n=2N+1}^\infty i^n\frac{(3/2)_n}{2^{n-3/2} n!} \left(\frac{x-1}{x+1}\right)^n
{}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
g_n\left(\frac{t}{2}\right) \Bigg| \,dx \\ &\leq \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \sum_{n=2N+1}^\infty
\frac{(3/2)_n}{2^{n-3/2} n!} \left|\frac{x-1}{x+1}\right|^n
\left|{}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\right|\cdot
\left|g_n\left(\frac{t}{2}\right)\right| \,dx \\ &= \sum_{n=2N+1}^\infty \frac{(3/2)_n}{2^{n-3/2} n!}
\left|g_n\left(\frac{t}{2}\right)\right| \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\left|\frac{x-1}{x+1}\right|^n
\left|{}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\right| \,dx. \end{align*} Appealing to \eqref{eq:gn-easybound} (with a fixed compact set $K$ on which we are allowing $t$ to range) and finally to \eqref{eq:gn-easybound2}, we see that this last expression is bounded by \begin{align*} \sum_{n=2N+1}^\infty & \frac{(3/2)_n}{2^{n-3/2} n!} C_1 e^{C_2 n^{1/3}} \int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}}
\left|\frac{x-1}{x+1}\right|^n
\left|{}_2F_1\left(\frac{n}{2}+\frac34, \frac{n}{2}+\frac54; n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\right| \,dx \\ &\leq \sum_{n=2N+1}^\infty \frac{(3/2)_n}{2^{n-3/2} n!} C_1 e^{C_2 n^{1/3}} \times J_1 e^{-J_2 n^{2/3}} = O(e^{-\frac{J_2}{2} N^{2/3}}) \end{align*} as $N\to\infty$; this gives \eqref{eq:gn-expansion-errorbound} and finishes the proof. \qed
\section{Asymptotic analysis of the coefficients $d_{2n}$}
\label{sec:gnexp-proofasym}
In this section we prove Theorem~\ref{thm:gn-coeff-asym}. We will give two independent proofs of this result, one relying on the representation \eqref{eq:dn-cn-expansion} of the coefficients $d_{2n}$ in terms of the coefficients $c_{2k}$---whose asymptotic behavior we already analyzed---and another relying on a separate representation of $d_{2n}$ as a double integral, which seems of independent interest.
\begin{proof}[First proof of Theorem~\ref{thm:gn-coeff-asym}] Our starting point is the formula \eqref{eq:dn-cn-expansion}. We start by rewriting this relation in a form that's slightly more convenient for asymptotics, namely as \begin{align*} d_{2n} &= \frac{2n+1}{2^{2n}} \sum_{m=0}^\infty \frac{(3/2)_{2n+2m}}{2^{2m}m!(2n+m+1)!} c_{2n+2m} \\ & = \frac{2n+1}{2^{2n}} \sum_{m=0}^\infty \frac{(4n+4m+2)!}{2^{4n+6m+1}m!(2n+m+1)!(2n+2m+1)!} c_{2n+2m} \\ & = \frac12(2n+1) \sum_{k=n}^\infty \frac{1}{2^{6k}} \frac{(4k+2)!}{(k-n)!(k+n+1)!(2k+1)!} c_{2k}, \end{align*} substituting $k=n+m$ in the last step. Making use of \eqref{eq:fn-coeff-asym}, we get that \begin{align*} d_{2n} &= \left(1+O\left(n^{-1/10}\right)\right) 128 \sqrt{2} \pi^{3/2} n \sum_{k=n}^\infty \frac{k^{3/2}}{k+n} \cdot \frac{(4k)!}{2^{6k} (k-n)!(k+n)!(2k)!} e^{-4\sqrt{\pi k}} \\ &= \left(1+O\left(n^{-1/10}\right)\right) 128 \sqrt{2} \pi^{3/2} n \sum_{k=n}^\infty \frac{k^{3/2}}{k+n} \cdot \frac{1}{2^{4k}} \binom{4k}{2k}\times \frac{1}{2^{2k}} \binom{2k}{k-n} e^{-4\sqrt{\pi k}}, \end{align*} where for convenience the terms have been simplified slightly by making use of trivial approximations such as $4k+2 = (1+O(n^{-1}))4k$, etc.; the errors in these approximations are absorbed into the leading $\left(1+O\left(n^{-1/10}\right)\right)$ factor. By Stirling's approximation, the binomial coefficients in the summand have asymptotic behavior \begin{align*} \frac{1}{2^{4k}}\binom{4k}{2k} & = \left(1+O\left(n^{-1}\right)\right) \frac{1}{\sqrt{2\pi k}} \ \qquad\qquad \qquad \qquad \qquad \qquad \ (k\ge n, n\to\infty), \\ \frac{1}{2^{2k}}\binom{2k}{k-n} & = \left(1+O\left(\frac{1}{k-n}\right)\right) \frac{\sqrt{k}}{\sqrt{\pi(k-n)(k+n)}} \\ & \qquad \qquad \times \left( \left( \frac{k-n}{k}\right)^{-(k-n)} \left( \frac{k+n}{k}\right)^{-(k+n)} \right) \qquad (k\ge n, n\to\infty), \end{align*} Here, the error term $\left(1+O(k-n)^{-1}\right)$ is slightly bothersome as it makes it necessary to separately bound the summands for values of $k$ near $n$, but this is easy enough to do: observe that if $n\le k\le 2n$ then $k-n \le k/2$, and in this case we have for some constant $C>0$ independent of $n$ that \begin{equation*} \binom{2k}{k-n} \leq \binom{2k}{\lceil k/2 \rceil} \leq C(1.8)^{2k}, \end{equation*} using Stirling's formula or a well-known bound such as \cite[p.~113, Eq.~(4.7.1)]{ash}. Thus, combining the latest estimates we obtain the expression \begin{align} \label{eq:d2n-largedev-sum} d_{2n} &= \left(1+O\left(n^{-1/10}\right)\right) 128 \sqrt{\pi} n \\ \nonumber & \quad\times \Bigg[ \sum_{k=2n}^\infty \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \left( \left( \frac{k-n}{k}\right)^{-(k-n)} \left( \frac{k+n}{k}\right)^{-(k+n)} \right) e^{-4\sqrt{\pi k}}
+ O\left((0.9)^{2n}\right) \Bigg] \\ \nonumber & = \left(1+O\left(n^{-1/10}\right)\right) 128 \sqrt{\pi} n \sum_{k=2n}^\infty \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \exp\left(n^{2/3}\phi_n\left(\frac{k}{n^{4/3}}\right)\right), \end{align} where we denote \begin{equation*} \phi_n(t) = \frac{- (n^{4/3}t-n)\log\left(1-\frac{n^{-1/3}}{t}\right) - (n^{4/3}t+n)\log\left(1+\frac{n^{-1/3}}{t}\right)}{n^{2/3}} - 4\sqrt{\pi t}. \end{equation*} We are now in a position to apply what is essentially a variant of Laplace's method in the setting of a discrete sum. The following claims about the functions $\phi_n(t)$ are clearly relevant. \begin{lem} (i) The inequality \begin{equation} \label{eq:phin-upperbound} \phi_n(t) \leq F(t) := -\frac{1}{t}-4\sqrt{\pi t} \end{equation} holds for all $n\ge1$ and $t\ge 2 n^{-1/3}$.
(ii) We have the asymptotic relation \begin{equation} \label{eq:phin-approximation} \phi_n(t) = F(t) - \frac{1}{6t^3} n^{-2/3} + O\left( \frac{1}{n^{4/3}} \right) \qquad \textrm{as }n\to\infty \ \ \left(\textrm{with }\frac{1}{10}\le t \le 10\right), \end{equation} where the constant implicit in the big-$O$ is independent of $n$ and $t$, subject to the specified constraint. \end{lem}
\begin{proof} Consider the function of a real variable $0<x<1$ given by \begin{equation*} p(x) = -\left(\frac{1}{x}-1\right) \log(1-x) - \left(\frac{1}{x}+1\right)\log(1+x). \end{equation*} It is easy to verify that $p(x)$ has the Taylor expansion \begin{equation*} p(x) = -x - \frac{x^3}{6} - \frac{x^5}{3\times 5} - \frac{x^7}{4\times 7} - \frac{x^9}{5\times 9} - \ldots = -\sum_{m=1}^\infty \frac{x^{2m-1}}{m(2m-1)}. \end{equation*} In particular, $p(x)\leq -x$ for all $0\le x<1$. Substituting $x=1/(n^{1/3} t)$ gives the first claim of the lemma, and the second claim is obtained from the same substitution applied to the fact that $p(x) = -x-\frac{1}{6}x^3 + O(x^5)$ for $0\le x\le 1/2$. \end{proof} Some additional easy facts to note are that the function $F(t)$ has a unique global maximum at $t = \alpha_0:=(4\pi)^{-1/3}$; that $F(t)$ is increasing on $(0,\alpha_0)$ and decreasing on $(\alpha_0,\infty)$; that $F(\alpha_0) = -E$ (where $E$ is defined in \eqref{eq:gncoeffs-asym-consts}), and that $F''(\alpha_0) = -6\pi$. In particular, we have the Taylor expansion \begin{equation} \label{eq:foft-taylor}
F(t) = -E - 3\pi (t-\alpha_0)^2 + O\left(|t-\alpha_0|^3\right) \qquad \left(\frac{1}{10} \le t\le 10\right). \end{equation}
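The stated facts about $F(t)$ are elementary calculus exercises; for the skeptical reader, the following minimal Python sketch (using \texttt{mpmath} and its numerical differentiation routine; it is only an illustration) confirms them numerically.
\begin{verbatim}
# Minimal sketch: numerically confirm that alpha_0 = (4 pi)^(-1/3) is a
# critical point of F(t) = -1/t - 4*sqrt(pi*t), that F(alpha_0) = -E, and
# that F''(alpha_0) = -6*pi.  diff() is mpmath's numerical differentiation.
from mpmath import mp, mpf, pi, sqrt, power, diff

mp.dps = 30
F = lambda t: -1/t - 4*sqrt(pi*t)
alpha0 = power(4*pi, -mpf(1)/3)
E = 3*power(4*pi, mpf(1)/3)

print(diff(F, alpha0))             # ~ 0   (alpha_0 is a critical point)
print(F(alpha0) + E)               # ~ 0   (the maximum value is -E)
print(diff(F, alpha0, 2) + 6*pi)   # ~ 0   (F''(alpha_0) = -6*pi)
\end{verbatim}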
Now, split up the sum in \eqref{eq:d2n-largedev-sum} (without the leading numerical constant) into four parts, representing it as $S_n^{(1)}+S_n^{(2)}+S_n^{(3)}+S_n^{(4)}$, where \begin{align*} S_n^{(1)} & = \sum_{k\,:\,2n\le k < \alpha_0 n^{4/3}- n^{19/18}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \exp\left(n^{2/3}\phi_n\left(\frac{k}{n^{4/3}}\right)\right), \\ S_n^{(2)} & =
\sum_{k\,:\,|k- \alpha_0 n^{4/3}|\le n^{19/18}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \exp\left(n^{2/3}\phi_n\left(\frac{k}{n^{4/3}}\right)\right), \\ S_n^{(3)} & = \sum_{k\,:\,\alpha_0 n^{4/3}+ n^{19/18} < k \le 2n^{4/3}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \exp\left(n^{2/3}\phi_n\left(\frac{k}{n^{4/3}}\right)\right), \\ S_n^{(4)} & = \sum_{k\,:\,k > 2n^{4/3}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \exp\left(n^{2/3}\phi_n\left(\frac{k}{n^{4/3}}\right)\right). \end{align*} Of these four sums, it is $S_n^{(2)}$ that makes the asymptotically most significant contribution. Making use of \eqref{eq:phin-approximation} and \eqref{eq:foft-taylor}, we can estimate it for large $n$ as \begin{align*} S_n^{(2)} & =
\sum_{|k- \alpha_0 n^{4/3}|\le n^{19/18}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}}
\exp\left[ n^{2/3}\left( F\left(\frac{k}{n^{4/3}}\right) - \frac{n^{10/3}}{6 k^3} + O\left(n^{-4/3}\right) \right)\right] \\ & = \left(1+O\left(n^{-2/3}\right)\right)
\sum_{|k- \alpha_0 n^{4/3}|\le n^{19/18}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \\ & \qquad\qquad\qquad\qquad\qquad\qquad \times \exp\left[ n^{2/3} F\left(\frac{k}{n^{4/3}}\right) - \frac16 \alpha_0^{-3}\left(1+O\left(n^{-5/18}\right)\right) \right] \\ & = \left(1+O\left(n^{-5/18}\right)\right)e^{-2\pi/3}
\sum_{|k- \alpha_0 n^{4/3}|\le n^{19/18}} \frac{k^{3/2}}{(k+n)^{3/2}(k-n)^{1/2}} \exp\left( n^{2/3} F\left(\frac{k}{n^{4/3}}\right) \right) \\ & = \left(1+O\left(n^{-5/18}\right)\right)e^{-2\pi/3} \left(\alpha_0 n^{4/3}\right)^{-1/2} \\ & \qquad\qquad\qquad\qquad\quad\times
\sum_{|k- \alpha_0 n^{4/3}|\le n^{19/18}} \exp\left( -E n^{2/3} -3\pi n^{2/3} \left(\frac{k}{n^{4/3}}-\alpha_0\right)^2 +O\left( n^{-1/6} \right) \right) \\ & = \left(1+O\left(n^{-1/6}\right)\right)e^{-E n^{2/3}-2\pi/3} \alpha_0^{-1/2} n^{-2/3}
\sum_{|k- \alpha_0 n^{4/3}|\le n^{19/18}} \exp\left( -3\pi \left(\frac{k-\alpha_0 n^{4/3}}{n}\right)^2 \right). \end{align*} The sum in this last expression can be regarded in the usual way as a Riemann sum for a Gaussian integral; specifically, it is asymptotically equal to \begin{align*} \left(1+O\left(n^{-1}\right)\right) n \int_{-n^{1/18}}^{n^{1/18}} e^{-3\pi u^2}\,du & = \left(1+O\left(n^{-1}\right)\right) n \left( \frac{1}{\sqrt{3}}-O\left(e^{-n^{1/9}}\right) \right)
= \left(1+O\left(n^{-1}\right)\right) \frac{n}{\sqrt{3}} \end{align*} as $n\to\infty$ (again making use of \eqref{eq:gaussian-tail-bound} to justify the first transition). Thus, we have obtained the relation \begin{equation*} S_n^{(2)} = \left(1+O\left(n^{-1/6}\right)\right) \frac{\alpha_0^{-1/2}}{\sqrt{3}} n^{1/3} e^{-E n^{2/3}-2\pi/3} \qquad (n\to\infty). \end{equation*} Next, we bound the sum $S_n^{(1)}$ to show that its contribution is negligible compared to that of $S_n^{(2)}$. The polynomial-order factor appearing in front of the exponential term in the sum is bounded from above by $1$. Thus, by \eqref{eq:phin-upperbound} and the fact that $F(t)$ is increasing on $(0,\alpha_0)$ we have that, as $n\to\infty$, \begin{align*} 0\leq S_n^{(1)} & \leq \sum_{2n\le k < \alpha_0 n^{4/3}- n^{19/18}} \exp\left(n^{2/3}\phi_n\left(\frac{k}{n^{4/3}}\right)\right) \\ & \leq \sum_{2n \le k< \alpha_0 n^{4/3}- n^{19/18}} \exp\left(n^{2/3}F\left(\frac{k}{n^{4/3}}\right)\right) \\ & \leq \alpha_0 n^{4/3} \exp\left( n^{2/3}F\left(\alpha_0-n^{-5/18}\right) \right) \\ &\leq \alpha_0 n^{4/3} \exp\left(-E n^{2/3} -3\pi n^{1/9} + O\left(n^{-1/6} \right) \right) = O\left(e^{-E n^{2/3}-n^{1/9}}\right). \end{align*} The third sum $S_n^{(3)}$ can be bounded in a completely analogous fashion, resulting (the reader can easily check) in the same bound \begin{equation*} S_n^{(3)} = O\left(e^{-E n^{2/3}-n^{1/9}}\right). \end{equation*} Finally, to bound $S_n^{(4)}$, we use the fact that $F(t)\leq -4\sqrt{\pi t}$ to write \begin{align*} 0\leq S_n^{(4)} &\leq \sum_{k> 2 n^{4/3}} \frac{10}{k^{1/2}} \exp\left(n^{2/3} F\left(\frac{k}{n^{4/3}}\right)\right) \leq \sum_{k> 2 n^{4/3}} \frac{10}{k^{1/2}} \exp\left(-4\sqrt{\pi k}\right) \\ &\leq 10\int_{2n^{4/3}}^\infty e^{-4\sqrt{\pi x}}\,dx = \frac{10}{8\pi} \left( 4 \sqrt{2\pi} n^{2/3}+1 \right) \exp\left(-4\sqrt{2\pi} n^{2/3}\right)
= O\left(e^{-E n^{2/3}-n^{1/9}}\right). \end{align*} Combining the above estimates for $S_n^{(1)}$, $S_n^{(2)}$, $S_n^{(3)}$ and $S_n^{(4)}$, we have finally from \eqref{eq:d2n-largedev-sum} that \begin{align*} d_{2n} & = \left(1+O\left(n^{-1/10}\right)\right) \left(128 \sqrt{\pi} n \right) \left(\frac{\alpha_0^{-1/2}}{\sqrt{3}} n^{1/3} e^{-E n^{2/3}-2\pi/3}\right) \quad \textrm{as }n\to\infty, \end{align*} which, after a trivial reshuffling of the terms, is exactly \eqref{eq:gn-coeff-asym}. \end{proof}
\begin{proof}[Second method for proving Theorem~\ref{thm:gn-coeff-asym}] We give most of the details of a second proof of Theorem~\ref{thm:gn-coeff-asym}, except for the rate of convergence result, which we weaken to a less explicit $1+o(1)$ multiplicative error term. This seems of independent interest as it highlights yet another way of approaching the study of the coefficients $d_{2n}$. This proof requires some calculations that would be tedious to perform by hand, but are easily done using a computer algebra system (we used Mathematica). We omit the details of these calculations and a few other details needed to make the proof watertight, which may be filled in by an enthusiastic reader.
We start by deriving a new representation of $d_{2n}$ suitable for asymptotic analysis. Start with the formula \eqref{eq:def-gn-coeffs} for $d_{2n}$ in a slightly modified form \begin{align*} d_{2n} & = \frac{(3/2)_{2n}}{2^{2n-5/2} (2n)!}\int_1^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{2n}
{}_2F_1\left(n+\frac34, n+\frac54; 2n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\,dx \end{align*} in which the integration is performed on $(1,\infty)$ (this follows from \eqref{eq:def-gn-coeffs} by the same symmetry under the change of variables $u=1/x$ as in \eqref{eq:omega-int-powersn-symmetry}, a consequence of the functional equation \eqref{eq:theta-functional-equation}). Now use Euler's integral representation \begin{equation*} {}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 t^{b-1} (1-t)^{c-b-1} \frac{1}{(1-zt)^a}\,dt \end{equation*} for the Gauss hypergeometric function (see \cite[p.~65]{andrews-askey-roy}) to represent the ${}_2F_1$ term inside the integral. This gives \begin{align*} d_{2n} &= \frac{(3/2)_{2n}}{2^{2n-5/2} (2n)!} \frac{\Gamma(2n+2)}{\Gamma(n+3/4)\Gamma(n+5/4)} \\ & \quad \times \int_1^\infty \int_0^1 \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{2n}
t^{n+1/4}(1-t)^{n-1/4}
\left(1- \left(\frac{x-1}{x+1}\right)^2 t\right)^{-(n+3/4)} \,dt \,dx. \end{align*} As the reader can check, the constant in front of the integral simplifies to \begin{equation*} \frac{(2n+1)(3/2)_{2n}}{2^{2n-5/2}\Gamma(n+3/4)\Gamma(n+5/4)} = \frac{16}{\pi}(2n+1). \end{equation*} Thus, after some further trivial algebraic manipulations we arrive at the representation \begin{align} \label{eq:d2n-simplerep-doubleint} d_{2n} &= \frac{16}{\pi} (2n+1) \int_1^\infty \int_0^1 \frac{\omega(x)}{((x+1)^2-t(x-1)^2)^{3/4}} \left(\frac{t}{1-t}\right)^{1/4}
\left( \frac{t(1-t)(x-1)^2}{(x+1)^2-t(x-1)^2} \right)^n \,dt\,dx. \end{align} Recalling \eqref{eq:omegaxdiff-asym-xinfty}, we see that it makes sense to write \begin{equation} \label{eq:d2n-etan-mun} d_{2n} = \frac{16}{\pi}(2n+1)(R_n + \mu_n), \end{equation} where we define the quantities $R_n, \mu_n$ by \begin{align} \label{eq:def-etan-doubleint} R_n &= \int_1^\infty \int_0^1 \frac{\pi x(2\pi x-3)}{((x+1)^2-t(x-1)^2)^{3/4}} \left(\frac{t}{1-t}\right)^{1/4}
e^{-\pi x} \left( \frac{t(1-t)(x-1)^2}{(x+1)^2-t(x-1)^2} \right)^n \,dt\,dx, \\ \mu_n &= \int_1^\infty \int_0^1 \frac{\omega(x) - \pi x(2\pi x-3)e^{-\pi x}}{((x+1)^2-t(x-1)^2)^{3/4}} \left(\frac{t}{1-t}\right)^{1/4}
\left( \frac{t(1-t)(x-1)^2}{(x+1)^2-t(x-1)^2} \right)^n \,dt\,dx. \end{align} It will be enough to obtain the asymptotic behavior of $R_n$ as $n\to\infty$, and separately to show that $\mu_n$ is asymptotically negligible compared to $R_n$.
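Before analyzing $R_n$, we record a small numerical sanity check of the two computational steps above---Euler's integral representation with the parameters used here, and the simplification of the constant to $\frac{16}{\pi}(2n+1)$. The following Python sketch (using \texttt{mpmath}; the values $n=3$ and $z=\frac13$ are arbitrary sample choices) is purely illustrative.
\begin{verbatim}
# Minimal sketch: check Euler's integral representation of 2F1 with the
# parameters a = n+3/4, b = n+5/4, c = 2n+2 used above, and the value of the
# constant (2n+1)(3/2)_{2n} / (2^{2n-5/2} Gamma(n+3/4) Gamma(n+5/4)).
from mpmath import mp, mpf, pi, gamma, rf, hyp2f1, quad, power

mp.dps = 30
n = 3                       # sample value
a, b, c = n + mpf(3)/4, n + mpf(5)/4, 2*n + 2
z = mpf(1)/3                # sample value of ((x-1)/(x+1))^2

euler = gamma(c)/(gamma(b)*gamma(c - b)) * quad(
    lambda t: t**(b - 1) * (1 - t)**(c - b - 1) * (1 - z*t)**(-a), [0, 1])
print(euler - hyp2f1(a, b, c, z))            # ~ 0

const = (2*n + 1) * rf(mpf(3)/2, 2*n) / (
    power(2, 2*n - mpf(5)/2) * gamma(n + mpf(3)/4) * gamma(n + mpf(5)/4))
print(const - 16*(2*n + 1)/pi)               # ~ 0
\end{verbatim}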
\paragraph{\textbf{Part 1: deriving asymptotics for $R_n$.}} Define functions \begin{align*} g(t,x) &= \frac{\pi x(2\pi x-3)}{((x+1)^2-t(x-1)^2)^{3/4}} \left(\frac{t}{1-t}\right)^{1/4}, \\ h_n(t,x) &= \frac{n}{\pi} \log\left( \frac{t(1-t)(x-1)^2}{(x+1)^2-t(x-1)^2}\right)-x
= M \log\left( \frac{t(1-t)(x-1)^2}{(x+1)^2-t(x-1)^2}\right) - x, \end{align*} where for convenience throughout the proof we denote $M= \frac{n}{\pi}$. Then $R_n$ can be rewritten in the form \begin{equation} \label{eq:etan-doubleint-largedev} R_n = \int_1^\infty \int_0^1 g(t,x)\exp\left(\pi h_n(t,x)\right) \,dt\,dx. \end{equation} This form is suitable for applying a two-dimensional version of Laplace's method. The method consists of identifying the global maximum point of $h_n(\cdot,\cdot)$ and analyzing the second-order Taylor expansion of $h_n$ around the maximum point. We will need the partial derivatives of $h_n(\cdot,\cdot)$ up to second order, which after some calculation are found to be \begin{align} \label{eq:hn-partial-x} \frac{\partial h_n}{\partial x} &= \frac{4M(x+1)-(x-1)((x+1)^2-t(x-1)^2)}{(x-1)((x+1)^2-t(x-1)^2)}, \\ \label{eq:hn-partial-t} \frac{\partial h_n}{\partial t} &= -M\cdot \frac{(2t-1)(x+1)^2 - t^2(x-1)^2}{t(1-t)((x+1)^2-t(x-1)^2)}, \\ \label{eq:hn-partial-tt} \frac{\partial^2 h_n}{\partial t^2} &= {\scriptstyle -M \frac{t^4(x-1)^4 + (x+1)^4 - 4t^3(x^2-1)^2-4t(x+1)^2(x^2+1)+2t^2(x+1)^2(3x^2-2x+3)}{t^2(1-t)^2((x+1)^2-t(x-1)^2)^2},} \\ \label{eq:hn-partial-xx} \frac{\partial^2 h_n}{\partial x^2} &= -8M \frac{x(x+1)^2-t(x^3-3x+2)}{(x-1)^2((x+1)^2-t(x-1)^2)}, \\ \label{eq:hn-partial-tx} \frac{\partial^2 h_n}{\partial t \partial x} &= 4 M \frac{x^2-1}{((x+1)^2-t(x-1)^2)^2}. \end{align} To find the maximum point, we solve the equations $\frac{\partial h_n}{\partial x}=\frac{\partial h_n}{\partial t}=0$. By \eqref{eq:hn-partial-x}--\eqref{eq:hn-partial-t}, this gives the system of two equations \begin{align} \label{eq:hn-saddlepoint1} 4M(x+1)-(x-1)((x+1)^2-t(x-1)^2) &= 0, \\ \label{eq:hn-saddlepoint2} (2t-1)(x+1)^2 - t^2 (x-1)^2 &= 0. \end{align} Solving \eqref{eq:hn-saddlepoint1} (a linear equation in $t$) for $t$ gives the relation \begin{equation} \label{eq:hn-saddlepoint1-sol} t = \frac{(x+1)(x^2-4M-1)}{(x-1)^3}. \end{equation} Substituting this value back into \eqref{eq:hn-saddlepoint2} gives the equation \begin{equation*} \frac{4(x+1)^2}{(x-1)^4}(x(x-1)^2-4M^2) = 0. \end{equation*} That is, $x$ has to satisfy the cubic equation \begin{equation*} x(x-1)^2-4M^2 = 0. \end{equation*} For $M\ge 1$, one can check that the cubic has a single real solution, given by \begin{equation} \label{eq:hn-saddlepoint-solx} x = \frac{\left(\left(54M^2-1+6M\sqrt{3(27M^2-1)}\right)^{1/3}+1\right)^2}{3\left(54M^2-1+6M\sqrt{3(27M^2-1)}\right)^{1/3}}. \end{equation} The corresponding $t$ value is given by \eqref{eq:hn-saddlepoint1-sol}, which, for $x$ given by \eqref{eq:hn-saddlepoint-solx}, can be brought to the slightly simpler form \begin{equation*} t = \frac{1}{2M^3}\left[ (2M-1)x^2 + (-2M^2+1)x+2M^2(M-2) \right]. \end{equation*} Summarizing the above remarks, define quantities \begin{align} \alpha_n &= 54M^2-1+6M\sqrt{3(27M^2-1)}, \\ \label{eq:hn-saddlept-xin} \xi_n &= \frac{(\alpha_n^{1/3}+1)^2}{3\alpha_n^{1/3}}, \\ \tau_n &= \frac{1}{2M^3}\left((2M-1)\xi_n^2+(-2M^2+1)\xi_n + 2M^2(M-2) \right). \end{align} Then $(\tau_n,\xi_n)$ is the unique solution of the equations \begin{equation*} \frac{\partial h_n}{\partial x}(\tau_n,\xi_n) = 0, \qquad \frac{\partial h_n}{\partial t}(\tau_n,\xi_n) = 0.
\end{equation*} Using these formulas one can now also find the asymptotic behavior of $\xi_n$ and $\tau_n$ as $M\to\infty$, which is given by \begin{align*} \tau_n &= 1- 2^{2/3} M^{-1/3} + 2^{4/3} M^{-2/3} - \frac83 M^{-1} + O(M^{-4/3}), \\ \xi_n &= 2^{2/3} M^{2/3} + \frac23 + \frac{1}{9\times 2^{2/3}} M^{-2/3} + O(M^{-4/3}). \end{align*} In particular, note that for large $M$ (that is, for large $n$) we have $\xi_n > 1$, $0 < \tau_n < 1$. That is, the point $(\tau_n,\xi_n)$ lies in the interior of the region of integration in the expression \eqref{eq:etan-doubleint-largedev} for $R_n$.
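These formulas are easy to test numerically. The following minimal Python sketch (using \texttt{mpmath}; the value $M=10^4$ is an arbitrary sample choice) checks that the closed form for $\xi_n$ solves the cubic, that $\tau_n\in(0,1)$, and that both quantities are close to the truncated asymptotic expansions above.
\begin{verbatim}
# Minimal sketch: for a sample M, the closed form for xi_n solves the cubic
# x(x-1)^2 = 4M^2, the corresponding tau_n lies in (0,1), and both agree well
# with the truncated asymptotic expansions.
from mpmath import mp, mpf, sqrt, power

mp.dps = 30
M = mpf(10)**4    # sample value

alpha = 54*M**2 - 1 + 6*M*sqrt(3*(27*M**2 - 1))
a3 = power(alpha, mpf(1)/3)
xi = (a3 + 1)**2 / (3*a3)
tau = ((2*M - 1)*xi**2 + (-2*M**2 + 1)*xi + 2*M**2*(M - 2)) / (2*M**3)

print(xi*(xi - 1)**2 - 4*M**2)        # ~ 0 (xi solves the cubic)
print(0 < tau < 1)                    # True
print(xi,  power(2, mpf(2)/3)*power(M, mpf(2)/3) + mpf(2)/3)          # close
print(tau, 1 - power(2, mpf(2)/3)*power(M, -mpf(1)/3)
           + power(2, mpf(4)/3)*power(M, -mpf(2)/3))                  # close
\end{verbatim}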
Next, having found the values $(\tau_n,\xi_n)$, we want to understand the values $h_n(\tau_n,\xi_n)$, $\frac{\partial^2 h_n}{\partial t^2}(\tau_n,\xi_n)$, $\frac{\partial^2 h_n}{\partial x^2}(\tau_n,\xi_n)$, $\frac{\partial^2 h_n}{\partial t \partial x}(\tau_n,\xi_n)$. These are somewhat complicated numbers, but can be brought to simpler forms by taking the relevant rational functions in $\tau_n, \xi_n$, expressing them as rational functions of $\xi_n$ only using \eqref{eq:hn-saddlepoint1-sol}, and then performing polynomial reduction modulo the polynomial $\xi_n(\xi_n-1)^2-4M^2$ (the cubic polynomial of which $\xi_n$ is a root). Using Mathematica to perform the reduction, we arrived at the following simplified formulas: \begin{align*} & \hspace{-50pt} \frac{\tau_n(1-\tau_n)(\xi_n-1)^2}{(\xi_n+1)^2-\tau_n(\xi_n-1)^2} \\ & = \frac{1}{M^3}\left[(2M-1)\xi_n^2+(-2M^2+1)\xi_n+M^2(M-4)\right] = 2\tau_n-1,
\\ h_n(\tau_n,\xi_n) &= M \log\left( \frac{\tau_n(1-\tau_n)(\xi_n-1)^2}{(\xi_n+1)^2-\tau_n(\xi_n-1)^2} \right)-\xi_n = M\log(2\tau_n-1) - \xi_n,
\\ \frac{\partial^2 h_n}{\partial t^2}(\tau_n,\xi_n) &= -\frac{1}{2(M^2+1)}\left[ (M^2+3)\xi_n^2 + 2(2M^2-1)\xi_n + (8M^3-9M^2+8M-1) \right],
\\ \frac{\partial^2 h_n}{\partial x^2}(\tau_n,\xi_n) &= -\frac{1}{4M^2(M^2+1)}\left( (2M^2+3)\xi_n^2-3\xi_n-4M(M^2+M+1) \right),
\\ \frac{\partial^2 h_n}{\partial t \partial x}(\tau_n,\xi_n) &= \frac{1}{4M(M^2+1)}\left( (M^2-1)\xi_n^2-2(2M^2-1)\xi_n+(7M^2-1) \right). \end{align*} Finally, the Hessian \begin{equation*} \Delta_n := \frac{\partial^2 h_n}{\partial t^2}(\tau_n,\xi_n) \frac{\partial^2 h_n}{\partial x^2}(\tau_n,\xi_n) - \left(\frac{\partial^2 h_n}{\partial t \partial x}(\tau_n,\xi_n)\right)^2 \end{equation*} can be found to be expressible by the (still ungainly) formula \begin{align*} \Delta_n &= {\textstyle \frac{ (24M^3-17M^2+24M-1)\xi_n^2 +2(6M^4-16M^3+31M^2-16M+1)\xi_n +(56M^4+8M^3-9M^2+8M-1) }{16M^2(M^2+1)}.} \end{align*} From these expressions and \eqref{eq:hn-saddlept-xin}, we derive some additional useful asymptotic expansions: \begin{align} \label{eq:hn-partialxx-asym} \frac{\partial^2 h_n}{\partial x^2}(\tau_n,\xi_n) &= -2^{1/3} M^{-2/3} + O(M^{-1}), \\ \label{eq:hn-deltan-asym} \Delta_n &= \frac{3}{2^{4/3}} M^{2/3} + O(M^{-1/3}), \\ \label{eq:hn-deltan-minushalf} \frac{1}{\sqrt{\Delta_n}} &= \frac{2^{2/3}}{\sqrt{3}} M^{-1/3} - \frac{2^{4/3}}{\sqrt{3}} M^{-2/3} + \frac{10}{3\sqrt{3}} M^{-1} + O(M^{-4/3}), \\ \label{eq:hn-taun-xin} h_n(\tau_n,\xi_n) &= -3\times 2^{2/3} M^{2/3} - \frac23 - \frac{1}{15\times 2^{2/3}} M^{-2/3} + O(M^{-4/3}). \end{align} One additional quantity we need to understand is \begin{equation} \label{eq:g-taun-xin} g(\tau_n,\xi_n) = \frac{\pi \xi_n(2\pi \xi_n-3)}{((\xi_n+1)^2-\tau_n(\xi_n-1)^2)^{3/4}} \left(\frac{\tau_n}{1-\tau_n}\right)^{1/4}. \end{equation} This can be written as \begin{equation*} g(\tau_n,\xi_n) = \pi \xi_n(2\pi\xi_n-3)X_n^{1/4} Y_n^{3/4} \end{equation*} where we define \begin{equation*} X_n = \frac{\tau_n}{1-\tau_n}, \qquad Y_n = \frac{1}{(\xi_n+1)^2-\tau_n(\xi_n-1)^2}. \end{equation*} Some more algebraic simplification then shows that \begin{equation*} X_n = \frac{1}{4M}(\xi_n^2-1), \qquad Y_n = \frac{1}{8M(M^2+1)}(-\xi_n^2+3\xi_n + 2(M^2-1)). \end{equation*} Using these relations, we then get the asymptotic expansion \begin{equation*} g(\tau_n,\xi_n) = 2^{2/3} \pi^2 M^{2/3} - \frac16 \pi (9-\pi) + O(M^{-2/3}). \end{equation*} Now note that \eqref{eq:hn-partialxx-asym} and \eqref{eq:hn-deltan-asym} imply that (for large $n$) the Hessian matrix of $h_n$ at $(\tau_n,\xi_n)$ is negative-definite. Thus, $(\tau_n,\xi_n)$ is indeed a local maximum point of $h_n$. We leave to the reader to check that it is in fact a \emph{global} maximum.
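As a numerical sanity check of the leading-order behavior of $\Delta_n$ (and of the negative-definiteness of the Hessian), one can compute the second-order partial derivatives of $h_n$ at $(\tau_n,\xi_n)$ by finite differences and compare $\Delta_n M^{-2/3}$ with the predicted limit $\frac{3}{2^{4/3}}\approx 1.19$. The following Python sketch (using \texttt{mpmath}; it is only an illustration and relies on numerical differentiation rather than on the closed-form expressions above) does this for two sample values of $M$.
\begin{verbatim}
# Minimal sketch: compute the Hessian of h_n at (tau_n, xi_n) by numerical
# finite differences (mpmath's diff) for sample values of M, check negative
# definiteness, and print Delta_n / M^(2/3), which should approach 3/2^(4/3).
from mpmath import mp, mpf, log, sqrt, power, diff

mp.dps = 40

def saddle(M):
    alpha = 54*M**2 - 1 + 6*M*sqrt(3*(27*M**2 - 1))
    a3 = power(alpha, mpf(1)/3)
    xi = (a3 + 1)**2 / (3*a3)
    tau = ((2*M - 1)*xi**2 + (-2*M**2 + 1)*xi + 2*M**2*(M - 2)) / (2*M**3)
    return tau, xi

for M in (mpf(10)**3, mpf(10)**5):
    tau, xi = saddle(M)
    h = lambda t, x: M*log(t*(1 - t)*(x - 1)**2/((x + 1)**2 - t*(x - 1)**2)) - x
    htt = diff(h, (tau, xi), (2, 0))
    hxx = diff(h, (tau, xi), (0, 2))
    htx = diff(h, (tau, xi), (1, 1))
    delta = htt*hxx - htx**2
    print(M, htt < 0 and delta > 0, delta/power(M, mpf(2)/3))
\end{verbatim}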
Now recall that the two-dimensional version of Laplace's method gives the asymptotic formula \begin{equation*} (1+o(1)) \frac{2}{\sqrt{\Delta_n}} g(\tau_n,\xi_n) \exp\left(\pi h_n(\tau_n,\xi_n) \right) \end{equation*} for the integral on the right-hand side of \eqref{eq:etan-doubleint-largedev}. This arises by making a suitable change of variables in the integral to center it around the point $(\tau_n,\xi_n)$ and introduce a scaling that turns the integral into an approximate Gaussian integral---see \cite[Ch.~VIII]{wong} for details; we omit the derivation of bounds needed to rigorously justify the approximation. Substituting the asymptotic values found in
\eqref{eq:hn-deltan-minushalf}--\eqref{eq:g-taun-xin} therefore gives that \begin{align} \label{eq:etan-asym} R_n &= (1+o(1))2\times \left(\frac{2^{2/3}}{\sqrt{3}} M^{-1/3}\right) 2^{2/3} \pi^2 M^{2/3} \exp\left( - 3 \times 2^{2/3} \pi M^{2/3}-\frac{2\pi}{3} \right) \\ \nonumber &= (1+o(1))\left( 2\times \frac{2^{2/3}}{\sqrt{3}}\pi^{1/3}\times 2^{2/3}\pi^2 \frac{1}{\pi^{2/3}} e^{-2\pi/3} \right) n^{1/3}
\exp\left(-3\times 2^{2/3} \pi^{1/3} n^{2/3} \right) \\ \nonumber &= (1+o(1))\left( \frac{4\times 2^{1/3}}{\sqrt{3}} \pi^{5/3} e^{-2\pi/3} \right) n^{1/3} \exp\left(-3 (4\pi)^{1/3} n^{2/3} \right). \end{align}
\paragraph{\textbf{Part 2: bounding $\mu_n$.}} The next step is to prove that the contribution of $\mu_n$ is asymptotically negligible relative to $R_n$. This relies as usual on \eqref{eq:omegaxdiff-asym-xinfty}. We sketch the argument but leave the details to the interested reader to develop. Observe that by \eqref{eq:omegaxdiff-asym-xinfty}, $\mu_n$ satisfies a bound of the form \begin{align*}
|\mu_n| &\leq C \int_1^\infty \int_0^1 g(t,x)\exp\left(\pi h_n(t,x)-2\pi x\right) \,dt\,dx
= C \int_1^\infty \int_0^1 g(t,x)\exp\left(\pi k_n(t,x)\right) \,dt\,dx, \end{align*} for some constant $C>0$, where we denote $k_n(t,x) = h_n(t,x)-2x$. But now $k_n(t,x)$ can be analyzed in a similar fashion to our analysis of $h_n(t,x)$ above. In particular, it can be shown that for $n$ large enough, $k_n(t,x)$ has a unique global maximum point $(t_n,x_n)\in (0,1)\times (1,\infty)$, and that the maximum value \begin{equation*} K_n^* := k_n(t_n,x_n) \end{equation*} behaves asymptotically as \begin{equation*} K_n^* = c_0 M^{2/3} + o\left(M^{2/3}\right) \end{equation*} for some constant $c_0$, where, significantly, $c_0 < -3\times 2^{2/3}$ (the leading constant in the analogous asymptotic expression \eqref{eq:hn-taun-xin} for the maximum value of $h_n(t,x)$). By deriving some auxiliary technical bounds for the decay of $h_n(t,x)$ away from its maximum point and near the boundaries of the integration region, one can then show that for any $\epsilon>0$, $\mu_n$ satisfies a bound of the form \begin{equation*}
|\mu_n| = O\left(\exp\left(\pi^{1/3} (c_0+\epsilon) n^{2/3}\right)\right). \end{equation*} Taking $\epsilon < -3\times 2^{2/3}-c_0$ (which is possible since $c_0 < -3\times 2^{2/3}$) then gives a rate of exponential decay that is faster than that of $R_n$, establishing that $\mu_n \ll R_n$.
\paragraph{\textbf{Putting everything together.}} Combining the above discussion regarding $\mu_n$ with \eqref{eq:d2n-etan-mun} and~\eqref{eq:etan-asym}, we find that \begin{align*} d_{2n} & = (1+o(1)) \left(\frac{16}{\pi} (2n+1)\right)\times \left( \frac{4\times 2^{1/3}}{\sqrt{3}} \pi^{5/3} e^{-2\pi/3} \right) n^{1/3} \exp\left(-3 (4\pi)^{1/3} n^{2/3} \right), \\ & = (1+o(1)) \left( \frac{128\times 2^{1/3}}{\sqrt{3}} \pi^{2/3} e^{-2\pi/3} \right) n^{4/3} \exp\left(-3 (4\pi)^{1/3} n^{2/3} \right), \end{align*} which is the same (except for the weaker rate of convergence estimate) as \eqref{eq:gn-coeff-asym}.
\end{proof}
\section[Connection to the function $\tilde{\nu}(t)$ and the Chebyshev polynomials]{Connection to the function $\tilde{\nu}(t)$ and the Chebyshev polynomials of the second kind}
\label{sec:gnexp-chebyshev}
We now prove yet another formula for $d_n$, tying it in a surprising way to the function $\tilde{\nu}(t)$ (discussed in \secref{sec:rad-properties-nu}) and its expansion in yet another family of orthogonal polynomials, the Chebyshev polynomials of the second kind. The properties of these very classical polynomials, denoted $U_n(t)$, are summarized in \secref{sec:orth-chebyshev}.
\begin{prop} The coefficients $d_n$ can be alternatively expressed as \begin{equation} \label{eq:dn-chebyshev-int} d_n = (-1)^n \frac{4\sqrt{2}}{\pi} \int_{-1}^1 \tilde{\nu}(t) U_n(t)\sqrt{1-t^2}\,dt. \end{equation} \end{prop}
\begin{proof} By the identity \eqref{eq:nutilde-taylor} expressing $\tilde{\nu}(t)$ as a power series with coefficients related to $c_n$, we have that \begin{align} \label{eq:nutilde-un-innerprod} \int_{-1}^1 \tilde{\nu}(t) U_n(t)\sqrt{1-t^2}\,dt & = \frac{1}{2\sqrt{2}} \int_{-1}^1 \left(\sum_{m=0}^\infty \frac{(-1)^m (3/2)_m}{m!} c_m t^m \right) U_n(t) \sqrt{1-t^2}\,dt \\ \nonumber & = \frac{1}{2\sqrt{2}} \sum_{m=0}^\infty \frac{(-1)^m (3/2)_m}{m!} c_m \int_{-1}^1 t^m U_n(t) \sqrt{1-t^2}\,dt. \end{align} The integrals in this last expression can be interpreted as inner products in the space $L^2((-1,1),\sqrt{1-t^2}\,dt)$ of the monomial $t^m$ with the Chebyshev polynomial $U_n(t)$, so they can be evaluated by using the relation \eqref{eq:chebyshev-monomial-expansion} to expand the monomial $t^m$ in the polynomials $U_j(t)$ and then using of the orthogonality relation \eqref{eq:chebyshev-orthogonality}. Together these relations imply that \begin{equation*} \int_{-1}^1 t^m U_n(t) \sqrt{1-t^2}\,dt = \begin{cases} \frac{\pi}{2} \frac{1}{(m+1)2^m} (n+1)\binom{m+1}{k} & \textrm{if $n=m-2k$ for some $k\ge0$,} \\ 0 & \textrm{otherwise}. \end{cases} \end{equation*} Thus, we can rewrite the series in \eqref{eq:nutilde-un-innerprod} as \begin{align*} \frac{1}{2\sqrt{2}} \sum_{k=0}^\infty & \frac{(-1)^{n+2k} (3/2)_{n+2k}}{(n+2k)!} c_{n+2k} \times \frac{\pi}{2} \frac{1}{(n+2k+1)2^{n+2k}}(n+1)\binom{n+2k+1}{k} \\ & = \frac{(-1)^n \pi}{4\sqrt{2}}\cdot \frac{n+1}{2^n} \sum_{k=0}^\infty \frac{(3/2)_{n+2k}}{2^{2k} k! (n+k+1)!} c_{n+2k}. \end{align*} By \eqref{eq:dn-cn-expansion} this gives precisely $\frac{(-1)^n \pi}{4\sqrt{2}} d_n$, so we are done. \end{proof}
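The evaluation of the inner products $\int_{-1}^1 t^m U_n(t)\sqrt{1-t^2}\,dt$ used in the proof can be double-checked numerically; the following minimal Python sketch (using \texttt{mpmath}, which provides the Chebyshev polynomials of the second kind as \texttt{chebyu}; it is only an illustration) does so for small $m,n$.
\begin{verbatim}
# Minimal sketch: numerically verify the evaluation of the inner products
# int_{-1}^{1} t^m U_n(t) sqrt(1-t^2) dt used in the proof, for small m, n.
from mpmath import mp, mpf, pi, sqrt, quad, chebyu, binomial

mp.dps = 25

def inner(m, n):
    return quad(lambda t: t**m * chebyu(n, t) * sqrt(1 - t**2), [-1, 1])

def closed_form(m, n):
    if m >= n and (m - n) % 2 == 0:
        k = (m - n) // 2
        return (pi/2) * (n + 1) * binomial(m + 1, k) / ((m + 1) * 2**m)
    return mpf(0)

for m in range(6):
    for n in range(6):
        assert abs(inner(m, n) - closed_form(m, n)) < mpf(10)**(-12)
print("inner-product formula verified for small m, n")
\end{verbatim}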
The last proposition leads naturally to another central result of this chapter, which, in a manner analogous to Theorem~\ref{thm:aofr-selftrans-expansion}, gives a thought-provoking alternative point of view regarding the significance of the coefficients $d_{2n}$.
\begin{thm}[Expansion of $\tilde{\nu}(t)$ in the Chebyshev polynomials of the second kind] The function $\tilde{\nu}(t)$ has the series expansion \begin{equation} \label{eq:nutilde-chebyshev-exp} \tilde{\nu}(t) = \frac{1}{2\sqrt{2}}\sum_{n=0}^\infty d_{2n} U_{2n}(t). \end{equation} The series in \eqref{eq:nutilde-chebyshev-exp} converges pointwise for all $t\in (-1,1)$ and in the sense of the function space $L^2((-1,1),\sqrt{1-t^2}\,dt)$. \end{thm}
\begin{proof} From the general theory of orthogonal polynomial expansions, the function $\tilde{\nu}(t)$, being a continuous and bounded function on $(-1,1)$, has an expansion of the form \begin{equation*} \tilde{\nu}(t) = \sum_{n=0}^\infty \sigma_n U_n(t) \end{equation*} in the polynomials $U_n(t)$. The expansion converges in $L^2((-1,1),\sqrt{1-t^2}\,dt)$ and for all $t\in(-1,1)$, see \cite[Ch.~IX]{szego}. Using the orthogonality relation \eqref{eq:chebyshev-orthogonality}, the coefficients $\sigma_n$ can be extracted as $L^2$ inner products, namely \begin{equation*} \sigma_n = \frac{2}{\pi} \int_{-1}^1 \tilde{\nu}(t)U_n(t)\sqrt{1-t^2}\,dt, \end{equation*} and this is equal to $\frac{(-1)^n}{2\sqrt{2}} d_n$ by \eqref{eq:dn-chebyshev-int}. \end{proof}
\section[Alternative interpretation for the $g_n$-expansion]{Mellin transform representation for $g_n(x)$ and an alternative interpretation for the $g_n$-expansion}
\label{sec:gnexp-mellin}
The next result gives a formula representing the polynomials $g_n(x)$ in terms of Mellin transforms involving the Chebyshev polynomials of the second kind evaluated at $\frac{x-1}{x+1}$. This representation, which we have not found described explicitly in the literature but is a special case of a more general result \cite[eq.~(3.4)]{koelink}, stands as an interesting parallel to the integral representation for $f_n(x)$ given in Proposition~\ref{prop:fn-mellintrans3}.
\begin{prop}[Mellin transform representation of $g_n$] \label{prop:gn-mellintrans} We have the relation \begin{align} \label{eq:gn-mellintrans-scoord} \int_0^\infty & \frac{1}{(x+1)^{3/2}} U_n\left(\frac{x-1}{x+1}\right) x^{s-1}\,dx \\ \nonumber & = i^n \frac{2}{\sqrt{\pi}} \Gamma(s)\Gamma\left(\frac32-s\right) g_n\left(\frac{1}{i}\left(s-\frac34\right)\right) \qquad \left(0 < \operatorname{Re}(s) < \frac32\right), \end{align} or, equivalently, \begin{equation} \label{eq:gn-mellintrans-tcoord} \int_0^\infty \frac{1}{(x+1)^{3/2}} U_n\left(\frac{x-1}{x+1}\right) x^{-\frac14+it}\,dx = i^n \frac{2}{\sqrt{\pi}} \Gamma\left(\frac34+it\right)\Gamma\left(\frac34-it\right) g_n(t). \end{equation} \end{prop}
\begin{proof} We prove this in the equivalent form \eqref{eq:gn-mellintrans-tcoord}. Use the expansion \eqref{eq:chebyshev-explicit2} of $U_n(t)$ in monomials and then the Mellin transform representation \eqref{eq:fn-mellintrans-tcoord} for $f_n(t)$, to get that \begin{align*} \int_0^\infty & \frac{1}{(x+1)^{3/2}} U_n\left(\frac{x-1}{x+1}\right) x^{-\frac14+it}\,dx \\ & = \sum_{k=0}^{\lfloor \frac{n}{2}\rfloor} (-1)^k \binom{n-k}{k} 2^{n-2k} \int_0^\infty \frac{1}{(x+1)^{3/2}} \left( \frac{x-1}{x+1}\right)^{n-2k} x^{-\frac14+it}\,dx \\ & = \sum_{k=0}^{\lfloor \frac{n}{2}\rfloor} (-1)^k \binom{n-k}{k} 2^{n-2k} i^{n-2k} \frac{2(n-2k)!}{\sqrt{\pi} (3/2)_{n-2k}} \Gamma\left(\frac34+it\right)\Gamma\left(\frac34-it\right) f_{n-2k} \\ & = i^n \Gamma\left(\frac34+it\right)\Gamma\left(\frac34-it\right) \frac{2}{\sqrt{\pi}} \sum_{k=0}^{\lfloor \frac{n}{2}\rfloor} \frac{2^{n-2k} (n-k)!}{k! (3/2)_{n-2k}} f_{n-2k}(t). \end{align*} By the relation \eqref{eq:gn-intermsof-fn} expressing $g_n(t)$ in terms of the $f_k$'s, this is equal to the expression on the right-hand side of \eqref{eq:gn-mellintrans-tcoord}. \end{proof}
Recall that in \secref{sec:rad-alt-fnexp} we showed how the $f_n$-expansion of the Riemann xi function can be thought of as arising from the expansion of the radial function $A(r)$ in the orthogonal basis $(G_n^{(3)}(r))_{n=0}^\infty$, by taking Mellin transforms. In a completely analogous manner, the above Mellin transform representation of $g_n(t)$ makes a similar reinterpretation possible for the $g_n$-expansion of $\Xi(t)$ as originating in the expansion \eqref{eq:nutilde-chebyshev-exp} of $\tilde{\nu}(x)$ in the Chebyshev polynomials of the second kind. To see this, first recall the Mellin transform representation \eqref{eq:nu-mellin-trans}, in which we make the substitution $s=\frac32+it$ to bring it to the form \begin{equation} \label{eq:nu-mellintrans-alternative} \int_0^\infty \nu(x) x^{-\frac14 + \frac{it}{2}}\,dx = \frac{2}{\sqrt{\pi}} \Gamma\left(\frac34+\frac{it}{2}\right) \Gamma\left(\frac34-\frac{it}{2}\right) \Xi(t). \end{equation} Note however that $\nu(x)$ can be expressed in terms of $\tilde{\nu}(t)$ as \begin{equation*} \nu(x) = \frac{2\sqrt{2}}{(x+1)^{3/2}} \tilde{\nu}\left(\frac{1-x}{1+x}\right) = \frac{2\sqrt{2}}{(x+1)^{3/2}} \tilde{\nu}\left(\frac{x-1}{x+1}\right) \end{equation*} by inverting the defining relation \eqref{eq:balanced-centered-def} for centered functions and using the fact that $\tilde{\nu}(t)$ is an even function. This implies, using \eqref{eq:nutilde-chebyshev-exp}, that $\nu(x)$ has the series expansion \begin{align*} \nu(x) = \frac{1}{(x+1)^{3/2}} \sum_{n=0}^\infty d_{2n} U_{2n}\left(\frac{x-1}{x+1}\right). \end{align*} We can now use this together with \eqref{eq:gn-mellintrans-tcoord} to evaluate the Mellin transform on the left-hand side of \eqref{eq:nu-mellintrans-alternative} in a different way as \begin{align*} \int_0^\infty \nu(x) x^{-\frac14 + \frac{it}{2}}\,dx & = \int_0^\infty \frac{1}{(x+1)^{3/2}} \sum_{n=0}^\infty d_{2n} U_{2n}\left(\frac{x-1}{x+1}\right)x^{-\frac14+\frac{it}{2}}\,dx \\& = \sum_{n=0}^\infty d_{2n} \int_0^\infty \frac{1}{(x+1)^{3/2}}U_{2n}\left(\frac{x-1}{x+1}\right)x^{-\frac14+\frac{it}{2}}\,dx \\& = \sum_{n=0}^\infty d_{2n} (-1)^n \frac{2}{\sqrt{\pi}} \Gamma\left(\frac34+\frac{it}{2}\right) \Gamma\left(\frac34-\frac{it}{2}\right) g_{2n}\left(\frac{t}{2}\right). \end{align*} Equating this last expression to the right-hand side of \eqref{eq:nu-mellintrans-alternative} and canceling common terms recovers the $g_n$-expansion \eqref{eq:gn-expansion}, as we predicted.
\chapter{Additional results}
\label{ch:misc}
In the previous chapters we developed the main parts of the theory associated with the expansions of the Riemann xi function in the Hermite, $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$ polynomial families. In this chapter we include a few additional results that continue to shed light on the themes we explored.
\section{An asymptotic formula for the Taylor coefficients of $\Xi(t)$}
\label{sec:xi-taylor-asym}
The method we used in Chapter~\ref{ch:hermite} to analyze the asymptotic behavior of the Hermite expansion coefficients $b_{2n}$ has the added benefit of enabling us to also prove an analogous asymptotic formula for the Taylor coefficients $a_{2n}$ in the Taylor expansion \eqref{eq:riemannxi-taylor} of the Riemann xi function. The reason for this is a pleasing similarity between the formulas for $a_{2n}$ and $b_{2n}$. It was noted by the authors of \cite{coffey} and \cite{csordas-norfolk-varga} (and probably others before them) that the formula for $a_{2n}$ can be written in the form \begin{equation} \label{eq:a2n-qn-rnprime} a_{2n} = \frac{2}{(2n)!} \int_0^\infty \Phi(x)x^{2n}\,dx = \frac{1}{(2n)!} \int_{-\infty}^\infty \Phi(x)x^{2n}\,dx, \end{equation} as can be seen by performing the usual change of variables $x= e^{2u}$ in \eqref{eq:riemannxi-taylorcoeff-int} (or by differentiating $2n$ times under the integral sign in \eqref{eq:riemannxi-fouriertrans} and setting $t=0$). The striking resemblance of this formula to \eqref{eq:turan-coeff-formula} seems, however, to have gone unremarked in the literature.
\begin{thm}[Asymptotic formula for the coefficients $a_{2n}$] \label{thm:riemannxi-taylorcoeff-asym} The coefficients $a_{2n}$ satisfy the asymptotic formula \begin{align} \label{eq:taylorxi-coeff-asym} a_{2n} & = \left(1+O\left(\frac{\log\log n}{\log n}\right)\right) \frac{\pi^{1/4}}{2^{2n-\frac52} (2n)!} \left(\frac{2n}{\log (2n)}\right)^{7/4}
\exp\left[ 2n\left(\log \left(\frac{2n}{\pi}\right) - W\left(\frac{2n}{\pi}\right) - \frac{1}{W\left(\frac{2n}{\pi}\right)} \right) \right] \end{align} as $n\to\infty$, where $W(\cdot)$ denotes as in Chapter~\ref{ch:hermite} the Lambert $W$ function. \end{thm}
\begin{proof} The idea is to repeat the analysis in the proof of Theorem~\ref{thm:hermite-coeff-asym}, but with the numbers $Q_{2n}$ and $r_{2n}$ in \eqref{eq:qn-int-def}--\eqref{eq:rn-int-def} being replaced by \begin{align} \label{eq:qn-int-def-mod} Q_n' & = \int_0^\infty x^{2n} e^{\frac{5x}{2}}\left(e^{2x}-\frac{3}{2\pi}\right) \exp\left(-\pi e^{2x}\right)\,dx, \\ \label{eq:rn-int-def-mod} r_n' & = \int_0^\infty x^{2n} e^{\frac{5x}{2}} \sum_{m=2}^\infty \left(m^4 e^{2x}-\frac{3m^2}{2\pi}\right) \exp\left(-\pi m^2 e^{2x}\right)\,dx, \end{align} for which, by~\eqref{eq:phix-def} and~\eqref{eq:a2n-qn-rnprime}, we then have that \begin{equation} \label{eq:a2n-q2n-r2n-relation} a_{2n} = \frac{8\pi^2}{(2n)!}(Q_{2n}' + r_{2n}'). \end{equation} Note that the only difference from the original definitions of $Q_{2n}$ and $r_{2n}$ is the absence of the factor $e^{-x^2/4}$. Thus, the analysis carries over essentially verbatim to our current case, except that we replace the function $f(x)$ in the reformulated equation \eqref{eq:qnint-rewritten} for $Q_n$ with \begin{equation*} \varphi(x) = e^{\frac{5x}{2}}\left(e^{2x}-\frac{3}{2\pi}\right), \end{equation*} to get the analogous representation \begin{equation} \label{eq:qnint-rewritten-analogue} Q_n' = \frac{1}{2^{2n}}\int_0^\infty \varphi(x) \exp\left( \psi_{2n}(2x)\right)\,dx \end{equation} for $Q_n'$. The effect of this change on the subsequent formulas is that the factor $\gamma_n$ in \eqref{eq:gamman-hermite-def} then also gets replaced by the simpler factor $\gamma_n' = \varphi(x_{2n}/2)$ in the asymptotic formula \begin{equation} \label{eq:qn-asym-preformula-analogue} Q_n' = \left(1+O\left(\frac{1}{n^{1/5}}\right)\right) \frac{\sqrt{\pi}}{2^{2n}\sqrt{2\beta_n}} \gamma_n' e^{\alpha_n} \end{equation} that is the analogue of \eqref{eq:qn-asym-preformula}---the factors $\beta_n$ and $\alpha_n$ (and, importantly, the maximum point value $x_{2n}$ from which they are derived) remain the same.
Now, $\gamma_n'$ has the asymptotic behavior (the counterpart to \eqref{eq:gamman-asym}) \begin{equation} \label{eq:gamman-prime-asym} \gamma_n' = \left(1+ O\left( e^{-x_{2n}}\right)\right)\exp\left(\frac94 x_{2n} \right) = \left(1+ O\left( \frac{\log n}{n}\right)\right) \left( \frac{2n}{\pi x_{2n}} \right)^{9/4} \end{equation} as $n\to\infty$. With these facts in mind, it is now a simple matter to go through the calculations and various bounds in the proof of Theorem~\ref{thm:hermite-coeff-asym} and verify that they remain valid in the current setting (including the bound \eqref{eq:rn-asym-bound} with $Q_n'$ and $r_n'$ replacing $Q_n$ and $r_n$, respectively), with the final result being that the relation \eqref{eq:qn-asym-formula} is now replaced by \begin{align*} Q_n' &= \left(1+O\left(\frac{\log\log n}{\log n}\right)\right) \frac{1}{2^{2n+\frac12}} \left(\frac{2n}{\pi x_{2n}}\right)^{7/4}
\exp\left[ 2n\left(\log (2n) - \log \pi - x_{2n} - \frac{1}{x_{2n}} \right) \right]. \end{align*} Inserting this into \eqref{eq:a2n-q2n-r2n-relation} gives \eqref{eq:taylorxi-coeff-asym}. \end{proof}
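For readers who wish to evaluate the prediction \eqref{eq:taylorxi-coeff-asym} numerically, the following short Python sketch (using \texttt{mpmath}, whose \texttt{lambertw} routine computes $W$; it merely evaluates the right-hand side of \eqref{eq:taylorxi-coeff-asym} and makes no claim about the exact Taylor coefficients) may be helpful.
\begin{verbatim}
# Minimal sketch: evaluate the right-hand side of the asymptotic formula for
# a_{2n}, using mpmath's lambertw for the Lambert W function.
from mpmath import mp, mpf, pi, exp, log, power, factorial, lambertw, re

mp.dps = 30

def a2n_prediction(n):
    N = mpf(2*n)
    W = re(lambertw(N/pi))
    prefactor = power(pi, mpf(1)/4) / (power(2, N - mpf(5)/2) * factorial(N))
    return prefactor * power(N/log(N), mpf(7)/4) * exp(N*(log(N/pi) - W - 1/W))

for n in (5, 50, 500):
    print(n, a2n_prediction(n))
\end{verbatim}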
It is interesting to compare our formula \eqref{eq:taylorxi-coeff-asym} to other asymptotic formulas for the coefficients $a_{2n}$ which have appeared in the literature. At the time we completed the first version of this paper, the strongest result of this type we were aware of was the one due to Coffey \cite[Prop.~1]{coffey}. Coffey's formula is more explicit, since it contains only elementary functions, but is less accurate, since (if expressed in our notation as a formula for $a_{2n}$ rather than in Coffey's logarithmic notation) it has a multiplicative error term of $\exp(O(1)) = \Theta(1)$, compared to our $1+O\left(\frac{\log\log n}{\log n}\right)$.
After we finished the initial version of this paper, we learned of another recent asymptotic formula for the coefficients $a_{2n}$ that was proved by Griffin, Ono, Rolen and Zagier in a 2018 paper \cite[Th.~7]{griffin-etal} (see also equations (1) and (13) in their paper). Griffin et al.'s result is more accurate than our Theorem~\ref{thm:riemannxi-taylorcoeff-asym}, as it gives a full asymptotic expansion for $a_{2n}$ whereby the relative error term can be made $o\left(n^{-K}\right)$ for any fixed $K$ by truncating the expansion after sufficiently many terms. Their formula is expressed in terms of an implicitly-defined quantity $L(n)$ that solves the equation $$ n = L\left(\pi e^L + \frac34 \right). $$ This equation (a slightly more exotic variant of our equation for $x_{2n}$ involving Lambert's $W$-function) arises out of an application of Laplace's method in a manner quite similar to our own analysis. It is interesting to ask whether our approach can be similarly extended to obtain a full asymptotic expansion for $a_{2n}$ that is expressed in terms of the (arguably simpler) quantities $x_{2n} = W\left(\frac{2n}{\pi}\right)$.
\section{The function $\tilde{\omega}(t)$}
\label{sec:omega-tilde-properties}
In \secref{sec:rad-centered-recbal} we defined the centered version of a balanced function, and applied that concept to the study of the function $\nu(x)$ and its centered version $\tilde{\nu}(t)$, which has turned out to be quite significant in the developments of Chapters~\ref{ch:radial} and~\ref{ch:gn-expansion}. We now consider the function $\tilde{\omega}(t)$, the centered version of $\omega(x)$, which is not only a more fundamental object than $\tilde{\nu}(t)$ (in the sense that the latter is computed from the former), but turns out to also be significant and interesting in several distinct (and seemingly unrelated) ways.
As an initial and rather trivial observation, we already noted in Proposition~\ref{prop:cn-as-moments} that the coefficients $c_n$ can be interpreted as moments of $\tilde{\omega}(t)$, except for a trivial scaling factor.
The next observation, which is also trivial as it is essentially a restatement of the above result, is that the Fourier transform of $\tilde{\omega}(t)$ can be interpreted as a generating function for the coefficient sequence $c_n$ (that is different from $\tilde{\nu}(t)$, which in \eqref{eq:nutilde-taylor} we also interpreted as a generating function for the $c_n$'s). Namely, we have the relation \begin{equation} \label{eq:capital-omega-def} \int_{-1}^1 \tilde{\omega}(u)e^{itu}\,du = \frac12 \sum_{n=0}^\infty \frac{i^n c_n t^n}{n!}. \end{equation}
Next, we arrive at a somewhat more surprising fact, which is that $\tilde{\omega}(u)$ also arises in a different way as a scaling limit of the Fourier spectrum of the Poisson flow associated with the $f_n$-expansion. To make this precise, recall that in Theorem~\ref{thm:fn-poisson-mellin} we derived a Mellin transform representation for the Poisson flow $X_r^{\mathcal{F}}(t)$. We will consider a limit of this representation as $r\to 0$, but scale the $t$ variable by a factor of $r$ since, as the formula \eqref{eq:fn-omega-compressed} shows, the Mellin spectrum without scaling gets compressed into the interval $[(1-r)/(1+r),(1+r)/(1-r)]$, which shrinks to a point as $r\to 0$. We also rewrite the Mellin transform as an ordinary ``additive'' Fourier transform, in other words expressing the rescaled Poisson flow as \begin{equation*} X_r^{\mathcal{F}}\left(\frac{t}{r}\right) = \int_{-\infty}^\infty \Psi_r(v) e^{ivt}\,dv. \end{equation*} This representation is obtained from \eqref{eq:fn-poisson-mellin} by a standard exponential change of variables ($x=e^{2r v}$ in the particular scaling we use), and it is straightforward to check that $\Psi_r(v)$ is given by \begin{align*} \Psi_r(v) &= 2 r e^{rv/2} \omega_r\left(e^{2rv}\right)
= \begin{cases} 2 r \frac{1+\eta}{\sqrt{1-\eta}} \frac{1}{\sqrt{1-\eta e^{2rv}}} e^{rv/2} \omega\left( \frac{e^{2rv}-\eta}{1-\eta e^{2rv}} \right)
& \textrm{if }|v| < \frac{1}{2r}\log\left(\frac{1}{\eta}\right), \\ 0 & \textrm{otherwise} \end{cases} \end{align*} (refer to \eqref{eq:fn-omega-compressed} for the second equality, and recall the notation \eqref{eq:compression-eta-notation}).
\begin{prop}[The centered function $\tilde{\omega}(u)$ as a scaling limit of the Poisson flow frequency spectrum] We have the pointwise limits \begin{equation*} \lim_{r\to 0+} \Psi_r(v) = \begin{cases}
2\tilde{\omega}(v) & \textrm{if }|v|<1, \\ 0 & \textrm{otherwise}. \end{cases} \end{equation*} \end{prop}
\begin{proof} This is a somewhat mundane verification involving Taylor series approximations. Specifically, one finds that, as $r\to 0$, \begin{align*} \frac{1}{2r}\log \left(\frac{1+r}{1-r} \right) & = 1 + O(r^2), \\ 2 r \frac{1+\eta}{\sqrt{1-\eta}} \frac{1}{\sqrt{1-\eta e^{2rv}}} e^{rv/2} & = \frac{2}{\sqrt{1-v}} + O(r^2), \\ \frac{e^{2rv}-\eta}{1-\eta e^{2rv}} & = \frac{1+v}{1-v} + O(r^2). \end{align*} The first of these three limits substantiates the claim that the Fourier spectrum $\Psi_r(v)$ is supported in the limit on the interval $(-1,1)$; the second and third limits show that for $v\in (-1,1)$ we have \begin{equation*} \lim_{r\to 0+} \Psi_r(v) = \frac{2}{\sqrt{1-v}} \omega\left(\frac{1+v}{1-v}\right) = 2 \tilde{\omega}(-v) = 2 \tilde{\omega}(v) \end{equation*} (since $\tilde{\omega}(v)$ is an even function), as claimed. \end{proof}
Our final result on $\tilde{\omega}(u)$ will show that not just its Fourier transform, but also $\tilde{\omega}(u)$ itself, is a generating function for an interesting sequence, which can be given explicitly in terms of a recently studied sequence of integers. For this, we first recall our recent results \cite{romik} on the Taylor expansion of the Jacobi theta series $\theta(x)$ (defined in \eqref{eq:jactheta-def}) and its centered version, which as usual is related to $\theta(x)$ by \begin{equation} \label{eq:theta-thetatilde-relation} \tilde{\theta}(u) = \frac{1}{\sqrt{1+u}}
\theta\left( \frac{1-u}{1+u} \right) \qquad (|u|<1). \end{equation} In \cite{romik} (where $\theta(x)$ was denoted $\theta_3(x)$ and $\tilde{\theta}(u)$ was denoted $\sigma_3(u)$) we proved that $\tilde{\theta}(u)$ has the Taylor expansion \begin{equation} \label{eq:tilde-theta-taylorexp}
\tilde{\theta}(u) = W \sum_{n=0}^\infty \frac{\delta(n)}{(2n)!} \Omega^n u^{2n} \qquad (|u|<1), \end{equation} where $W$ and $\Omega$ are two special constants, given by \begin{equation*} W = \theta(1) = \frac{\Gamma\left(\frac14\right)}{\sqrt{2}\pi^{3/4}}, \qquad \Omega = \frac{\Gamma\left(\frac14\right)^8}{128 \pi^4}, \end{equation*} respectively, and where the sequence $(\delta(n))_{n=0}^\infty = 1,1,-1,51,849,-26199, 1341999, \ldots$ (denoted as $d(n)$ in \cite{romik}---for the current discussion we changed the notation in order to avoid a potential confusion with the coefficient sequence $d_n$ in the expansion \eqref{eq:gn-expansion}) is a sequence of integers first introduced and studied in \cite{romik} (see also \cite{jac3-oeis}).
With this preparation, we can formulate a result identifying the coefficients in the Taylor expansion of $\tilde{\omega}(u)$.
\begin{thm}[Taylor expansion of $\tilde{\omega}(u)$] \label{thm:omega-tilde-taylor} The Taylor expansion of $\tilde{\omega}(u)$ is given by \begin{equation*} \tilde{\omega}(u)
= W \sum_{n=0}^\infty \frac{\rho(n)}{(2n)!} \Omega^n u^{2n} \qquad (|u|<1), \end{equation*} where $\rho(n)$ are numbers defined in terms of the sequence $(\delta(n))_{n=0}^\infty$ as \begin{align*} \rho(n) &= \frac{1}{16} \Big( 4 \Omega \delta(n+1) - (32n^2 + 8n+3) \delta(n)
+ (2n-1)(2n)(4n-1)(4n-3) \Omega^{-1} \delta(n-1) \Big). \end{align*} \end{thm}
\begin{proof} The idea is first to express $\tilde{\omega}(u)$ in terms of $\tilde{\theta}(u)$, and then use~\eqref{eq:tilde-theta-taylorexp}. Start with the relation \begin{equation*} \theta(x) = \frac{\sqrt{2}}{\sqrt{1+x}} \tilde{\theta}\left(\frac{1-x}{1+x}\right) \end{equation*} that is inverse to \eqref{eq:theta-thetatilde-relation}. Differentiating twice, we get \begin{align*} \theta'(x) &= -\frac12 \sqrt{2} \frac{1}{(1+x)^{3/2}} \tilde{\theta}\left(\frac{1-x}{1+x}\right) - \frac{2\sqrt{2}}{(1+x)^{5/2}} \tilde{\theta}'\left(\frac{1-x}{1+x}\right), \\ \theta''(x) &= \sqrt{2} \Bigg( \frac{3}{4(1+x)^{5/2}} \tilde{\theta}\left(\frac{1-x}{1+x}\right) + \frac{6}{(1+x)^{7/2}} \tilde{\theta}'\left(\frac{1-x}{1+x}\right)
+\frac{4}{(1+x)^{9/2}} \tilde{\theta}''\left(\frac{1-x}{1+x}\right) \Bigg). \end{align*} It then follows that \begin{align*} \tilde{\omega}(u) & = \frac{1}{\sqrt{1+u}} \omega\left(\frac{1-u}{1+u} \right) = \frac{1}{2\sqrt{1+u}} \Bigg( 2\left(\frac{1-u}{1+u}\right)^2 \theta''\left(\frac{1-u}{1+u}\right)
+3\left(\frac{1-u}{1+u}\right) \theta'\left(\frac{1-u}{1+u}\right) \Bigg) \\ &= \frac{1}{\sqrt{1+u}} \Bigg[
\sqrt{2} \left(\frac{1-u}{1+u}\right)^2 \Bigg( \frac34 \frac{(1+u)^{5/2}}{2^{5/2}} \tilde{\theta}(u) + \frac{6(1+u)^{7/2}}{2^{7/2}} \tilde{\theta}'(u)
+\frac{4(1+u)^{9/2}}{2^{9/2}} \tilde{\theta}''(u) \Bigg) \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad + \frac{3\sqrt{2}}{2} \left(\frac{1-u}{1+u}\right) \Bigg( -\frac12 \frac{(1+u)^{3/2}}{2^{3/2}} \tilde{\theta}(u) -2 \frac{(1+u)^{5/2}}{2^{5/2}} \tilde{\theta}'(u) \Bigg) \Bigg]. \end{align*} From here, a trivial algebraic simplification, which we omit, leads to the identity \begin{equation} \label{eq:omegatilde-thetatilde-exp} \tilde{\omega}(u) = \frac{1}{16} \left[ 3(u^2-1) \tilde{\theta}(u) + 12 u (u^2-1) \tilde{\theta}'(u) + 4 (u^2-1)^2 \tilde{\theta}''(u) \right]. \end{equation} But now observe that from \eqref{eq:tilde-theta-taylorexp} we have \begin{align*} \tilde{\theta}'(u) &= W \sum_{n=1}^\infty \frac{\delta(n)}{(2n-1)!} \Omega^n u^{2n-1}, \qquad\quad
\tilde{\theta}''(u) = W \sum_{n=1}^\infty \frac{\delta(n)}{(2n-2)!} \Omega^n u^{2n-2}. \end{align*} Inserting \eqref{eq:tilde-theta-taylorexp} and these last two expansions into \eqref{eq:omegatilde-thetatilde-exp} and simplifying gives the claim. \end{proof}
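Although it plays no role in the proof, the identity \eqref{eq:omegatilde-thetatilde-exp} (and hence the theorem) is easy to test numerically. The following Python sketch is a minimal such check; it assumes that $\theta(x)$ is the Jacobi theta series $\sum_{n\in\mathbb{Z}} e^{-\pi n^2 x}$ of \eqref{eq:jactheta-def} and that $\omega(x)=x^2\theta''(x)+\tfrac32 x\theta'(x)$, the relation used at the start of the proof; the evaluation point and precision settings are arbitrary.
\begin{verbatim}
# Numerical check of the identity expressing omega-tilde through theta-tilde.
# Assumes theta(x) = sum_{n in Z} exp(-pi n^2 x) and
# omega(x) = x^2 theta''(x) + (3/2) x theta'(x) (differentiated termwise below).
import mpmath as mp
mp.mp.dps = 30

def theta(x):
    return 1 + 2*mp.nsum(lambda n: mp.exp(-mp.pi*n**2*x), [1, mp.inf])

def omega(x):
    return mp.nsum(lambda n: (2*mp.pi**2*n**4*x**2 - 3*mp.pi*n**2*x)
                   * mp.exp(-mp.pi*n**2*x), [1, mp.inf])

def theta_tilde(u):
    return theta((1 - u)/(1 + u))/mp.sqrt(1 + u)

def omega_tilde(u):
    return omega((1 - u)/(1 + u))/mp.sqrt(1 + u)

u = mp.mpf('0.3')
lhs = omega_tilde(u)
rhs = (3*(u**2 - 1)*theta_tilde(u)
       + 12*u*(u**2 - 1)*mp.diff(theta_tilde, u)
       + 4*(u**2 - 1)**2*mp.diff(theta_tilde, u, 2))/16
print(lhs, rhs)   # the two values should agree to high accuracy
\end{verbatim}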
\chapter{Final remarks}
\label{ch:summary}
This work has seen the introduction of a curious menagerie of previously unnoticed (or, at the very least, under-appreciated) special functions that are tied in an interesting way to the theory of the Riemann zeta function. This collection includes the orthogonal polynomial families $U_n, H_n, L_n^{1/2}, f_n$, and $g_n$; the elementary (though esoteric) radial functions $A(r)$, $B(r)$; the well-known functions $\theta(x)$ and $\omega(x)$, originating in the world of modular forms and theta series; and the function $\nu(x)$ and its centered version $\tilde{\nu}(t)$, which do not seem to have been previously studied.
These functions and their many subtle interconnections add a new set of tools to the arsenal of methods available to attack central open problems in the theory of the zeta function, the Riemann hypothesis foremost among them. Most significantly, one is left with the impression that the theory of orthogonal polynomials may have a more central role to play in the study of the zeta function, and perhaps a greater potential to lead to new insights, than had been previously suspected.
We conclude with a few open problems and suggestions for future research.
\begin{enumerate}
\item There has been much discussion in the literature of sufficient conditions guaranteeing that a polynomial $p(z)$ has only real zeros based on knowledge of its coefficients in the expansion $p(z)=\sum_{k=0}^n \alpha_k \phi_k(z)$, where $(\phi_k)_{k=0}^\infty$ is some given family of orthogonal polynomials. We note Tur\'an's many results in \cite{turan1952, turan1954, turan1959}, particularly his observation (Lem.~II in \cite{turan1959}, a result he discovered independently but attributes to an earlier paper by P\'olya \cite{polya1915}) that if the zeros of $\sum_{k=0}^n a_k z^k$ are all real then that is also the case for the corresponding Hermite expansion $\sum_{k=0}^n a_k H_k(z)$; and the many analogous theorems of Iserles and Saff \cite{iserles-saff}, among them the result (a special case of Prop.~6 in their paper) that if the zeros of the polynomial $\sum_{k=0}^n a_k z^k$ are all real then that is also the case for the polynomial $\sum_{k=0}^n \frac{k!}{(3/2)_k} a_k f_k(z)$. See also \cite{bates, bleecker-csordas, iserles-norsett1987, iserles-norsett1988, iserles-norsett1990, piotrowski} and the survey \cite{schmeisser} for further developments along these lines.
One question that now arises naturally is: to what extent do these developments inform the attempts to prove the reality of the zeros of the Riemann xi function, in view of our new results?
\item One rather striking fact is that the four different series expansions we have considered for the Riemann xi function, namely \begin{equation*} \begin{array}{rclcrcl} \Xi(t) & \!\!\!=\!\!\!& \displaystyle \sum_{n=0}^\infty (-1)^n a_{2n} t^{2n}, & & \Xi(t) & \!\!\!=\!\!\!& \displaystyle \sum_{n=0}^\infty (-1)^n c_{2n} f_{2n}\left(\frac{t}{2}\right), \\[14pt] \Xi(t) & \!\!\!=\!\!\!& \displaystyle \sum_{n=0}^\infty (-1)^n b_{2n} H_{2n}(t), & & \Xi(t) & \!\!\!=\!\!\!& \displaystyle \sum_{n=0}^\infty (-1)^n d_{2n} g_{2n}\left(\frac{t}{2}\right), \end{array} \end{equation*} exhibit a remarkably similar structure: in all four expansions the coefficients appear with alternating signs (and their asymptotics can be understood to a good level of accuracy, as our analysis shows).
It is intriguing to wonder about the significance of this structural property of $\Xi(t)$. Can this information be exploited somehow to derive information about the location of the zeros of $\Xi(t)$?
By way of comparison, one can consider ``toy'' expansions of the above forms involving more elementary coefficient sequences. For example, we have the trivial expansions (the latter two being easy consequences of \eqref{eq:hermite-genfun} and \eqref{eq:fn-genfun}, respectively) \begin{align*} \qquad\qquad \sum_{n=0}^\infty (-1)^n \frac{\alpha^{2n}}{(2n)!} t^{2n} \negphantom{t^{2n}}\phantom{H_{2n}(t)} &= \cos(\alpha t) & (\alpha>0), \\ \sum_{n=0}^\infty (-1)^n \frac{\alpha^{2n}}{(2n)!} H_{2n}(t) &= e^{\alpha^2} \cos(2\alpha t) & (\alpha>0), \\ \sum_{n=0}^\infty (-1)^n \alpha^{2n} f_{2n}(t)\negphantom{f_{2n}(t)}\phantom{H_{2n}(t)} &= \frac{1}{(1-\alpha^2)^{3/4}} \cos\left(t \log\left(\frac{1+\alpha}{1-\alpha} \right) \right) & (0<\alpha<1), \end{align*} which are entire functions of $t$ that---needless to say---all have only real zeros. On the other hand, we do not know for which values of $\alpha\in (0,1)$ the expansion (whose explicit form is evaluated using \eqref{eq:gn-genfun}) \begin{align*} \qquad \ \ \ \ \sum_{n=0}^\infty (-1)^n \alpha^{2n}g_{2n}(t) &= \frac{1}{2(1-\alpha)^2} \,{}_2F_1\left(1,\frac34-it;\frac32;\frac{-4\alpha}{(1-\alpha)^2}\right) \\ & \qquad\ \ + \frac{1}{2(1+\alpha)^2} \,{}_2F_1\left(1,\frac34-it;\frac32;\frac{4\alpha}{(1+\alpha)^2}\right) \end{align*} has only real zeros.
\item The notion of Poisson flows we introduced seems worth exploring further. The Poisson flow associated with the polynomial family $(f_n)_{n=0}^\infty$ has interesting properties, and while it does not preserve hyperbolicity in the sense of ``continuous time'' as we discussed in \secref{sec:zeros-evolution}, it is not inconceivable that a weaker form of preservation of the reality of the zeros might still hold for discrete values of the time parameter. For example, does there exist a constant $0<r_0<1$ such that if the polynomial $\sum_{k=0}^n a_k r_0^k f_k(t)$ has only real zeros then the same is guaranteed to be true for the polynomial $\sum_{k=0}^n a_k f_k(t)$? It appears that it may be possible to approach this question using the biorthogonality techniques developed in the papers by Iserles and coauthors~\cite{iserles-norsett1987, iserles-norsett1988, iserles-norsett1990, iserles-saff}. And what can be said about the Poisson flow associated with the orthogonal polynomial family $(g_n)_{n=0}^\infty$?
\item Does the function in \eqref{eq:capital-omega-def}, the Fourier transform of $\tilde{\omega}(u)$ (which as we have seen can be thought of as a scaling limit of the Poisson flow), have only real zeros? Is this question related to the Riemann hypothesis?
\end{enumerate}
\appendix
\chapter{Orthogonal polynomials}
\label{appendix:orthogonal}
In this appendix we summarize some background facts on several families of orthogonal polynomials that we will need, and prove a few additional auxiliary results. We assume the reader is familiar with the basic theory of orthogonal polynomials, as described, e.g., in Chapters 2--3 of Szeg\H{o}'s classical book \cite{szego}.
\section{Chebyshev polynomials of the second kind}
\label{sec:orth-chebyshev}
The Chebyshev polynomials of the second kind, denoted $U_n(x)$, are a sequence of polynomials orthogonal with respect to the weight function $\sqrt{1-x^2}$ on $(-1,1)$, and are one of the most classical families of orthogonal polynomials. A few of their main properties are given below; see \cite[pp.~225--229]{koekoek-etal} for more details.
\begin{enumerate} \item Definition: \begin{align} \label{eq:chebyshev-explicit1} U_n(x) &= \frac{\sin((n+1)\arccos(x))}{\sin(\arccos(x))} \\ \label{eq:chebyshev-explicit2} & = \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \binom{n-k}{k} (2x)^{n-2k} \end{align}
\item Inverse relationship with monomial basis: \begin{equation} \label{eq:chebyshev-monomial-expansion} x^n = \frac{1}{(n+1)2^n} \sum_{k=0}^{\lfloor n/2 \rfloor} (n-2k+1)\binom{n+1}{k} U_{n-2k}(x). \end{equation}
\item Orthogonality relation: \begin{equation} \label{eq:chebyshev-orthogonality} \int_{-1}^1 U_m(x) U_n(x) \sqrt{1-x^2}\,dx = \frac{\pi}{2} \delta_{m,n} \end{equation}
\item Recurrence relation: \begin{equation} U_{n+1}(x)-2x U_n(x) + U_{n-1}(x) = 0 \end{equation}
\item Differential equation: \begin{equation} (1-x^2)U_n''(x) - 3x U_n'(x) + n(n+2) U_n(x) = 0 \end{equation}
\item Generating function: \begin{equation} \sum_{n=0}^\infty U_n(x) z^n = \frac{1}{1-2xz+z^2} \end{equation}
\item Poisson kernel: \begin{equation} \label{eq:chebyshev-poissonker} \frac{2}{\pi} \sum_{n=0}^\infty U_n(x)U_n(y) z^n = \frac{2}{\pi}\cdot \frac{1-z^2}{1-4xyz(z^2+1)+2(2x^2+2y^2-1)z^2+z^4} \end{equation}
\item Symmetry: $U_n(-x) = (-1)^n U_n(x)$.
\end{enumerate}
\textbf{Notes.} To derive \eqref{eq:chebyshev-monomial-expansion}, take derivatives of both sides of the relation (found in \cite[p.~22]{mason-handscomb}) $2^{n-1}\cos^{n} \theta = \sum_{k=0}^{\lfloor n/2\rfloor } \binom{n}{k}\cos(n-2k)\theta - \frac12 \chi_{\{n \textrm{ even}\}} \binom{n}{n/2}$, applied with $n+1$ in place of $n$, and use \eqref{eq:chebyshev-explicit1}. Formula~\eqref{eq:chebyshev-poissonker} is derived in \cite{millar}, where it appears as equation (15), except that the formula there contains a typo (the term $-4axy$ in the denominator needs to be changed to $-2axy$), which we corrected.
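As an additional sanity check, independent of the derivation sketched above, the expansion \eqref{eq:chebyshev-monomial-expansion} can be verified symbolically for small $n$; the short Python/SymPy sketch below (ours, for illustration only) does so using SymPy's built-in Chebyshev polynomials.
\begin{verbatim}
# Symbolic verification of the monomial expansion in terms of U_n for small n.
import sympy as sp

x = sp.symbols('x')
for n in range(8):
    rhs = sp.Rational(1, (n + 1)*2**n)*sum(
        (n - 2*k + 1)*sp.binomial(n + 1, k)*sp.chebyshevu(n - 2*k, x)
        for k in range(n//2 + 1))
    assert sp.simplify(rhs - x**n) == 0
print("monomial expansion verified for n = 0,...,7")
\end{verbatim}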
\section{Hermite polynomials}
\label{sec:orth-hermite}
The Hermite polynomials are the well-known sequence $H_n(x)$ of polynomials that are orthogonal with respect to the Gaussian weight function $e^{-x^2}$ on $\mathbb{R}$. A few of their main properties are given below; see \cite[Sec.~6.1]{andrews-askey-roy} for proofs.
\begin{enumerate} \item Definition: \begin{equation} H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n}\left( e^{-x^2}\right) \end{equation}
\item Orthogonality relation: \begin{equation} \label{eq:hermite-orthogonality} \int_{-\infty}^\infty H_m(x) H_n(x) e^{-x^2}\,dx = 2^n n! \sqrt{\pi} \delta_{m,n} \end{equation}
\item Recurrence relation: \begin{equation} \label{eq:hermite-recurrence} H_{n+1}(x)-2x H_n(x) + 2n H_{n-1}(x) = 0 \end{equation}
\item Differential equation: \begin{equation} \label{eq:hermite-ode} H_n''(x) - 2x H_n'(x) + 2n H_n(x) = 0 \end{equation}
\item Generating function: \begin{equation} \label{eq:hermite-genfun} \sum_{n=0}^\infty \frac{H_n(x)}{n!} z^n = \exp(2xz-z^2) \end{equation}
\item Poisson kernel: \begin{equation} \frac{1}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{H_n(x)H_n(y)}{2^n n!} z^n = \frac{1}{\sqrt{\pi}\sqrt{1-z^2}} \exp\left( \frac{2xyz-(x^2+y^2)z^2}{1-z^2}\right) \end{equation}
\item Symmetry: $H_n(-x) = (-1)^n H_n(x)$.
\end{enumerate}
\section{Laguerre polynomials}
\label{sec:orth-laguerre}
The (generalized) Laguerre polynomials form a sequence $L_n^\alpha(x)$ of polynomials, dependent on a parameter $\alpha> -1$, that are orthogonal with respect to the weight function $e^{-x}x^\alpha$ on $(0,\infty)$. Of particular interest to us will be the case $\alpha=1/2$; in this case the polynomials are essentially rescalings of the odd-indexed Hermite polynomials. For convenience, we summarize below a few of the main formulas associated with the polynomials $L_n^{\alpha}(x)$; proofs can be found in \cite[Sec.~6.2]{andrews-askey-roy}.
\begin{enumerate} \item Definition: \begin{equation} \label{eq:laguerre-explicit} L_n^\alpha(x) = \sum_{k=0}^n \frac{(-1)^k}{k!} \binom{n+\alpha}{n-k} x^k. \end{equation}
\item Orthogonality relation: \begin{equation} \label{eq:laguerre-orthogonality} \int_{0}^\infty L_m^\alpha(x) L_n^\alpha(x) e^{-x}x^\alpha\,dx = \frac{\Gamma(n+\alpha+1)}{n!} \delta_{m,n} \end{equation}
\item Recurrence relation: \begin{equation} \label{eq:laguerre-recurrence} (n+1)L_{n+1}^\alpha(x)+(x-2n-\alpha-1)L_n^\alpha(x)+(n+\alpha)L_{n-1}^\alpha(x) = 0 \end{equation}
\item Differential equation: \begin{equation} \label{eq:laguerre-diffeq} x (L_n^\alpha)''(x)+(\alpha+1-x)(L_n^\alpha)'(x)+n L_n^\alpha(x) = 0 \end{equation}
\item Generating function: \begin{equation} \label{eq:laguerre-genfun} \sum_{n=0}^\infty L_n^\alpha(x) z^n = \frac{1}{(1-z)^{\alpha+1}} \exp\left(-\frac{xz}{1-z}\right) \end{equation}
\item Poisson kernel: \begin{equation} \sum_{n=0}^\infty \frac{n!}{\Gamma(n+\alpha+1)} L_n^\alpha(x) L_n^\alpha(y) z^n = \frac{1}{1-z} \exp\left(-\frac{z(x+y)}{1-z}\right) (xyz)^{-\alpha/2} I_\alpha\left(\frac{2\sqrt{xyz}}{1-z}\right), \end{equation} where $I_\alpha(\cdot)$ denotes the modified Bessel function of the first kind.
\end{enumerate}
\section[The symmetric Meixner-Pollaczek polynomials]{The symmetric Meixner-Pollaczek polynomials $f_n(x)=P_n^{(3/4)}(x;\pi/2)$}
\label{sec:orth-fn}
The \textbf{Meixner-Pollaczek polynomials} are a two-parameter family of orthogonal polynomial sequences $P_n^{(\lambda)}(x;\phi)$. The parameters satisfy $\lambda>0, 0\le \phi<\pi$. In the special case $\phi=\pi/2$, the polynomials are sometimes referred to as the \textbf{symmetric Meixner-Pollaczek polynomials} (see \cite{araaya}). In this paper we make use of the special case $\lambda=3/4$ of the symmetric case, namely the polynomials, which we denote $f_n(x)$ for simplicity, given by \begin{equation} f_n(x) = P_n^{(3/4)}(x;\pi/2). \end{equation}
The key property of the polynomials $f_n(x)$ is that they are an orthogonal family with respect to the weight function $\left|\Gamma\left(\frac34+ix\right)\right|^2$, $x\in\mathbb{R}$. Additional properties we will need are given in the list below. Bibliographic notes and a few more details regarding proofs are given at the end of this section. See also \secref{sec:orth-fngn-relation} where we prove additional results relating the polynomial family $f_n$ to another family $g_n$ of orthogonal polynomials, discussed in \secref{sec:orth-gn}.
\begin{enumerate} \item Definition and explicit formulas: \begin{align} f_n(x) & = \frac{(3/2)_n}{n!} i^n {}_2F_1\left(-n, \frac34+ix; \frac32; 2 \right) \label{eq:fn-explicit1} \\ & = (-i)^n \sum_{k=0}^n 2^k \binom{n+\frac12}{n-k} \binom{-\frac34+i x}{k} \label{eq:fn-explicit2} \\ & = i^n \sum_{k=0}^n (-1)^k 2^k \binom{n+\frac12}{n-k} \binom{-\frac14+i x+k}{k} \label{eq:fn-explicit3} \\ & = i^n \sum_{k=0}^n (-1)^k \binom{-\frac34+ix}{k} \binom{-\frac34-ix}{n-k}. \label{eq:fn-explicit4} \end{align}
\item Orthogonality relation: \begin{equation} \label{eq:fn-orthogonality}
\int_{-\infty}^\infty f_m(x) f_n(x) \left|\Gamma\left(\frac34+ix\right)\right|^2\,dx = \frac{\pi^{3/2}(3/2)_n}{2\sqrt{2} n!}\delta_{m,n}. \end{equation}
\item Recurrence relation: \begin{equation} \label{eq:fn-recurrence} (n+1)f_{n+1}(x)-2x f_n(x) + \left(n+\frac12\right) f_{n-1}(x) = 0. \end{equation}
\item Difference equation: \begin{equation} \label{eq:fn-diffeq} 2\left(n+\frac34\right)f_n(x)-\left(\frac34-ix\right) f_n(x+i) -\left(\frac34+ix\right) f_n(x-i) = 0. \end{equation}
\item Generating function: \begin{equation} \label{eq:fn-genfun} \sum_{n=0}^\infty f_n(x) z^n = (1-iz)^{-\frac34+ix}(1+iz)^{-\frac34-ix} = \frac{1}{(1+z^2)^{\frac34}}\left(\frac{1-iz}{1+iz}\right)^{ix}. \end{equation}
\item Poisson kernel: \begin{align} \label{eq:fn-poisson} \frac{2\sqrt{2}}{\pi^{3/2}}\sum_{n=0}^\infty \frac{n!}{(3/2)_n} f_n(x) f_n(y) z^n & = \frac{2\sqrt{2}}{\pi^{3/2}} \frac{1}{(1-z)^{3/2}} \left(\frac{1+z}{1-z}\right)^{i(x+y)}
{}_2F_1\left(\frac34+ix, \frac34+iy; \frac32; \frac{-4z}{(1-z)^2} \right) \end{align}
\item Mellin transform representations: \begin{align} \label{prop:fn-mellintrans1-appendix} f_n(x) &= (-i)^n \frac{\sqrt{\pi} (3/2)_n}{2 n!} \left( \Gamma\left(\frac34-ix\right)\Gamma\left(\frac34+ix\right) \right)^{-1}
\int_0^\infty \frac{1}{(u+1)^{3/2}} \left(\frac{u-1}{u+1}\right)^n u^{-\frac14+ix}\,du \\ \label{prop:fn-mellintrans2-appendix} &= 2 i^n \pi^{\frac34+ix} \Gamma\left(\frac34+ix\right)^{-1} \int_0^\infty e^{-\pi r^2} L_n^{1/2}(2\pi r^2) r^{\frac12+2ix} \,dr \end{align}
\item Symmetry: $f_n(-x) = (-1)^n f_n(x)$.
\end{enumerate}
\textbf{Notes.} The above list is based on the general list of properties of the Meixner-Pollaczek polynomials $P_n^{(\lambda)}(x;\phi)$ provided in \cite[pp.~213--216]{koekoek-etal}, except for \eqref{eq:fn-poisson}, which is a special case of \cite[Eq.~(2.25)]{ismail-stanton}, and the Mellin transform representations \eqref{prop:fn-mellintrans1-appendix}--\eqref{prop:fn-mellintrans2-appendix}, which are proved in our Propositions~\ref{prop:fn-mellintrans1} and~\ref{prop:fn-mellintrans3}.
In the formulas \eqref{eq:fn-explicit1}--\eqref{eq:fn-explicit4}, the first formula is the definition as given in \cite{koekoek-etal}; formula \eqref{eq:fn-explicit3} is an explicit rewriting of \eqref{eq:fn-explicit1} as a sum, and formula \eqref{eq:fn-explicit2} follows from \eqref{eq:fn-explicit3} by applying the symmetry property $f_n(-x) = (-1)^n f_n(x)$ (which in turn is an easy consequence of either the recurrence relation \eqref{eq:fn-recurrence} or the generating function \eqref{eq:fn-genfun}). Formula \eqref{eq:fn-explicit4} appears to be new, and follows by evaluating the sequence of coefficients of $z^n$ in the generating function \eqref{eq:fn-genfun} as a convolution of the coefficient sequences for the functions $(1-iz)^{-3/4+ix}$ and $(1+iz)^{-3/4-ix}$. Note that \eqref{eq:fn-explicit4} has the benefit of making the odd/even symmetry of $f_n(x)$ readily apparent, which the other explicit formulas do not.
\begin{table} \caption{The first few polynomials $f_n(x)$} \begin{equation*}
\begin{array}{r|l} n & f_n(x) \\ \hline
0 & 1 \\[3pt]
1 & 2 x \\[3pt]
2 & 2 x^2-\frac{3}{4} \\[3pt]
3 & \frac{4 }{3}x^3-\frac{13 }{6} x \\[3pt]
4 & \frac{2 }{3}x^4-\frac{17 }{6}x^2+\frac{21}{32} \\[3pt]
5 & \frac{4 }{15}x^5-\frac{7 }{3}x^3+\frac{177 }{80}x \\[3pt]
6 & \frac{4 }{45}x^6-\frac{25 }{18}x^4+\frac{2401 }{720}x^2-\frac{77}{128} \\[3pt]
7 & \frac{8 }{315}x^7-\frac{29 }{45}x^5+\frac{1123 }{360}x^3-\frac{4987 }{2240}x \\[3pt]
8 & \frac{2 }{315}x^8-\frac{11 }{45}x^6+\frac{1499 }{720}x^4-\frac{24749 }{6720}x^2+\frac{1155}{2048} \end{array} \end{equation*} \end{table}
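The entries of the table are easily reproduced from the recurrence \eqref{eq:fn-recurrence} together with the initial values $f_0(x)=1$ and $f_1(x)=2x$; the following Python/SymPy sketch (ours, for illustration only) does exactly that.
\begin{verbatim}
# Generate the first few polynomials f_n(x) from the three-term recurrence
# (n+1) f_{n+1}(x) - 2x f_n(x) + (n + 1/2) f_{n-1}(x) = 0.
import sympy as sp

x = sp.symbols('x')
f = [sp.Integer(1), 2*x]
for n in range(1, 8):
    f.append(sp.expand((2*x*f[n] - (n + sp.Rational(1, 2))*f[n-1])/(n + 1)))
for n, poly in enumerate(f):
    print(n, poly)
# e.g. f_2 = 2*x**2 - 3/4 and f_4 = 2*x**4/3 - 17*x**2/6 + 21/32, as in the table
\end{verbatim}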
\section[The continuous Hahn polynomials]{The continuous Hahn polynomials $g_n(x) = p_n\left(x; \frac34,\frac34,\frac34,\frac34\right)$}
\label{sec:orth-gn}
The \textbf{continuous Hahn polynomials} are a four-parameter family $p_n(x;a,b,c,d)$ of orthogonal polynomial sequences. They were introduced in increasing degrees of generality by Askey and Wilson \cite{askey-wilson} and later Atakishiyev and Suslov \cite{atakishiyev-suslov} as continuous-weight analogues of the Hahn polynomials; earlier special cases appeared in the work of Bateman \cite{bateman} and later Pasternack \cite{pasternack} (see also \cite{koelink} for a chronology of these discoveries and related discussion).
For our purposes, a special role will be played by the special case $a=b=c=d=3/4$ of the continuous Hahn polynomials, that is, the polynomial sequence \begin{equation} g_n(x) = p_n\left(x; \frac34,\frac34,\frac34,\frac34\right). \end{equation} A few of the main properties of these polynomials we will need are listed below. The notes at the end of the section provide references and additional details.
\begin{enumerate} \item Definition and explicit formulas: \begin{align} \label{eq:gn-explicit1} g_n(x) & = i^n (n+1) \ {}_3F_2\left(-n,n+2,\frac34+ix;\frac32,\frac32; 1\right) \\ \label{eq:gn-explicit2} &= (-i)^n \sum_{k=0}^n \frac{(n+k+1)!}{(n-k)!(3/2)_k^2} \binom{-\frac34+i x}{k} \\ \label{eq:gn-explicit3} & = i^n \sum_{k=0}^n (-1)^k \frac{(n+1)!}{(3/2)_k(3/2)_{n-k}} \binom{-\frac34+ix}{k} \binom{-\frac34-ix}{n-k}. \end{align}
\item Orthogonality relation: \begin{equation} \label{eq:gn-orthogonality}
\int_{-\infty}^\infty g_m(x) g_n(x) \left|\Gamma\left(\frac34+ix\right)\right|^4\,dx = \frac{\pi^3}{16} \delta_{m,n}. \end{equation}
\item Recurrence relation: \begin{equation} \label{eq:gn-recurrence} (2n+3) g_{n+1}(x) - 8x g_n(x) + (2n+1)g_{n-1}(x) = 0. \end{equation}
\item Difference equation: \begin{equation} \label{eq:gn-diffeq} \left((n+1)^2-2x^2+\frac18\right)g_n(x) - \left(\frac34-ix\right)^2g_n(x+i) - \left(\frac34+ix\right)^2 g_n(x-i) = 0. \end{equation}
\item Generating functions: \begin{align} \label{eq:gn-genfun} \sum_{n=0}^\infty g_n(x) z^n &= \frac{1}{(1+iz)^2} \,{}_2F_1\left(1,\frac34-ix;\frac32;\frac{4iz}{(1+iz)^2}\right) \\ \label{eq:gn-genfun2} \sum_{n=0}^\infty \frac{g_n(x)}{(n+1)!}z^n & = {}_1F_1\left(\frac34+ix;\frac32;-iz\right) {}_1F_1\left(\frac34-ix;\frac32;iz\right) \end{align}
\item Symmetry: $g_n(-x) = (-1)^n g_n(x)$.
\item Mellin transform representation: \begin{align} g_n(x) &= (-i)^n \frac{\sqrt{\pi}}{2} \left(
\Gamma\left(\frac34-ix\right)
\Gamma\left(\frac34+ix\right) \right)^{-1}
\int_0^\infty \frac{1}{(u+1)^{3/2}} U_n\left(\frac{u-1}{u+1}\right) u^{-\frac14+ix}\,du \end{align}
\end{enumerate}
\textbf{Notes.} This list is based on the list of properties of the continuous Hahn polynomials $p_n(x;a,b,c,d)$ given in \cite[pp.~200--204]{koekoek-etal}, except for the Mellin transform representation, which is proved in our Proposition~\ref{prop:gn-mellintrans}.
In the formulas \eqref{eq:gn-explicit1}--\eqref{eq:gn-explicit3}, the first formula is the definition as given in \cite{koekoek-etal}, and formula \eqref{eq:gn-explicit2} is the explicit rewriting of \eqref{eq:gn-explicit1} as a sum. Formula \eqref{eq:gn-explicit3}, which (like \eqref{eq:fn-explicit4} discussed in the previous section) has the benefit of highlighting the odd/even symmetry of $g_n(x)$, seems new, and is proved by evaluating the coefficient of $z^n$ in \eqref{eq:gn-genfun2} as \begin{equation*} \frac{g_n(x)}{(n+1)!} = \sum_{k=0}^n [z^k] \Big( {}_1F_1\left(\frac34+ix;\frac32;-iz\right) \Big) \times [z^{n-k}] \Big( {}_1F_1\left(\frac34-ix;\frac32;iz\right) \Big) \end{equation*} and simplifying.
\begin{table} \caption{The first few polynomials $g_n(x)$} \begin{equation*}
\begin{array}{r|l} n & g_n(x) \\[3pt] \hline
0 & 1 \\[3pt]
1 & \frac{8 }{3}x \\[3pt]
2 & \frac{64 }{15}x^2-\frac{3}{5} \\[3pt]
3 & \frac{512 }{105}x^3-\frac{272 }{105}x \\[3pt]
4 & \frac{4096 }{945}x^4-\frac{5312 }{945}x^2+\frac{7}{15} \\[3pt]
5 & \frac{32768 }{10395}x^5-\frac{83968 }{10395}x^3+\frac{568 }{231}x \\[3pt]
6 & \frac{262144 }{135135}x^6-\frac{77824 }{9009}x^4+\frac{847232 }{135135}x^2-\frac{77}{195} \\[3pt]
7 & \frac{2097152 }{2027025}x^7-\frac{14876672 }{2027025}x^5+\frac{20968448 }{2027025}x^3-\frac{527392 }{225225}x \\[3pt]
8 & \frac{16777216 }{34459425}x^8-\frac{25427968 }{4922775}x^6+\frac{33107968 }{2650725}x^4-\frac{25399936 }{3828825}x^2+\frac{77}{221} \end{array} \end{equation*} \end{table}
\section{The relationship between the polynomial sequences $f_n$ and $g_n$}
\label{sec:orth-fngn-relation}
The goal of this section is to prove the following pair of identities, which seem new, relating the two orthogonal polynomial families $(f_n)_{n=0}^\infty$ and $(g_n)_{n=0}^\infty$.
\begin{prop} The polynomial families $f_n(x)$ and $g_n(x)$ are related by the equations \begin{align} \label{eq:gn-intermsof-fn} g_n(x) & = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{2^{n-2k} (n-k)!}{(3/2)_{n-2k} k!} f_{n-2k}(x), \\ \label{eq:fn-intermsof-gn} f_n(x) & = \frac{(3/2)_n}{2^n(n+1)!} \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k (n-2k+1)\binom{n+1}{k} g_{n-2k}(x). \end{align} \end{prop}
The proofs rely on two binomial summation identities, given in the next two lemmas.
\begin{lem} For integers $p,q\ge 0$ we have the summation identity \begin{equation} \label{eq:fn-gn-binomialiden1} \sum_{k=0}^{\lfloor p/2 \rfloor} \frac{(-1)^k (p+q-k)!}{2^{2k} k! (p-2k)!} = \frac{1}{2^p} q! \binom{p+2q+1}{p}. \end{equation} \end{lem}
\begin{proof} Consider, for fixed $q\ge 0$, the generating function in an indeterminate $x$ of the sequence of numbers (indexed by the parameter $p\ge 0$) on the left-hand side of \eqref{eq:fn-gn-binomialiden1}. This generating function can be evaluated as \begin{align*} \sum_{p\ge 0} & \left( \sum_{k=0}^{\lfloor p/2 \rfloor} \frac{(-1)^k (p+q-k)!}{2^{2k} k! (p-2k)!} \right) x^p \\ & = \sum_{p\ge 0} \left( \sum_k \left(-\frac14\right)^k \binom{p-k}{k} (p-k+1)\cdots (p-k+q) \right) x^p \\ & = \sum_{m\ge 0} \sum_k \left(-\frac14\right)^k \binom{m}{k} (m+1)\cdots (m+q) x^{m+k}
= \sum_{m\ge 0} (m+1)\cdots (m+q) x^m \left(\sum_k \binom{m}{k} \left(-\frac{x}{4}\right)^k\right) \\ & = \sum_{m\ge 0} (m+1)\cdots (m+q) \bigg(x \left(1-\frac{x}{4}\right)\bigg)^m
=
\frac{d^q}{d y^q}_{\raisebox{2pt}{$\big|$} y=x(1-x/4)} \left( \sum_{m=0}^\infty y^m \right)
\\ & = \frac{d^q}{d y^q}_{\raisebox{2pt}{$\big|$} y=x(1-x/4)} \left( \frac{1}{1-y} \right)
=
\frac{q!}{(1-y)^{q+1}}_{\raisebox{2pt}{$\big|$} y=x(1-x/4)} \\ & = \frac{q!}{\left(1-x\left(1-\frac{x}{4}\right)\right)^{q+1}} = \frac{q!}{\left(1-\frac{x}{2}\right)^{2q+2}} = \sum_{p=0}^\infty \frac{q!}{2^p} \binom{p+2q+1}{p} x^p, \end{align*} which is the generating function for the sequence on the right-hand side of \eqref{eq:fn-gn-binomialiden1}. \end{proof}
\begin{lem} The summation identity \begin{equation} \label{eq:fn-gn-binomialiden2} \sum_{k=0}^N (N-2k)\binom{N}{k} \binom{N+m-2k}{2m+1} = N \binom{N-1}{m} 2^{N-m} \end{equation} holds for integers $N,m\ge0$, where binomial coefficients with a negative upper argument are interpreted in the generalized sense $\binom{a}{j} = \frac{a(a-1)\cdots(a-j+1)}{j!}$. \end{lem}
\begin{proof} Denote \begin{equation*} F_m(N,k) = \frac{(N-2k)\binom{N}{k} \binom{N+m-2k}{2m+1}}{N \binom{N-1}{m} 2^{N-m}}, \end{equation*} so that the identity to prove becomes the statement that $\sum_{k=0}^N F_m(N,k)=1$. This claim in turn follows by applying the method of Wilf-Zeilberger pairs \cite[Ch.~7]{aequalsb}, \cite{wilf-zeilberger} to the rational certificate function (in which $m$ is regarded as a parameter) \begin{equation*} R_m(N,k) = \frac{k (m+N+1-2k) (m+N+2-2k)}{2(N-2k)(N+1-k)(N-m-2k)}. \end{equation*} The certificate was found using the Mathematica package \texttt{fastZeil} \cite{paule-schorn1, paule-schorn2}, a software implementation of Zeilberger's algorithm. \end{proof}
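Independently of the Wilf-Zeilberger certificate, the identity \eqref{eq:fn-gn-binomialiden2} is also easy to confirm numerically by brute force. The following Python sketch (ours) checks it for a range of small values of $N$ and $m$, using exact rational arithmetic and the generalized binomial coefficients described in the statement of the lemma.
\begin{verbatim}
# Brute-force check of the summation identity, with binomial coefficients of
# (possibly negative) integer upper argument taken in the generalized sense
# binom(a, j) = a(a-1)...(a-j+1)/j!.
from fractions import Fraction
from math import factorial

def binom(a, j):
    num = 1
    for i in range(j):
        num *= (a - i)
    return Fraction(num, factorial(j))

for N in range(12):
    for m in range(12):
        lhs = sum((N - 2*k)*binom(N, k)*binom(N + m - 2*k, 2*m + 1)
                  for k in range(N + 1))
        rhs = N*binom(N - 1, m)*Fraction(2)**(N - m)
        assert lhs == rhs, (N, m)
print("identity verified for 0 <= N, m <= 11")
\end{verbatim}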
\begin{proof}[Proof of~\eqref{eq:gn-intermsof-fn}] An immediate consequence of \eqref{eq:fn-gn-binomialiden1} is the identity \begin{equation} \label{eq:fngn-binomialiden2-variant} \sum_{k=0}^{\lfloor (n-m)/2 \rfloor} \frac{(-1)^k (n-k)!}{2^{2k} k! (3/2)_{n-2k}} \binom{n-2k+\frac12}{n-2k-m} = \frac{(n+m+1)!}{2^{n+m} (n-m)! (3/2)_m^2}, \end{equation} which holds for integers $n\ge m\ge 0$---indeed, this relation reduces to \eqref{eq:fn-gn-binomialiden1} after a short simplification on taking $p=n-m$, $q=m$ and using the facts that $(3/2)_m = \frac{(2m+2)!}{2^{2m+1} (m+1)!}$ and $\binom{n-2k+1/2}{n-2k-m} = \frac{(3/2)_{n-2k}}{(n-2k-m)! (3/2)_m}$. Now \eqref{eq:fngn-binomialiden2-variant} is the key to proving \eqref{eq:gn-intermsof-fn}: making use of the explicit formulas \eqref{eq:fn-explicit2} and \eqref{eq:gn-explicit2} for $f_n(x)$ and $g_n(x)$, respectively, we write \begin{align*} \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{2^{n-2k} (n-k)!}{(3/2)_{n-2k} k!} f_{n-2k}(x)
& = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{2^{n-2k} (n-k)!}{(3/2)_{n-2k} k!} \left( (-i)^{n-2k} \sum_{m=0}^{n-2k} 2^m \binom{n-2k+\frac12}{n-2k-m} \binom{-\frac34+ix}{m} \right) \\ & = (-i)^n \sum_{m=0}^n \left( 2^m \sum_{k=0}^{\lfloor (n-m)/2 \rfloor} \frac{(-1)^k 2^{n-2k} (n-k)!}{(3/2)_{n-2k}k!} \binom{n-2k+\frac12}{n-2k-m} \right) \binom{-\frac34+ix}{m} \\ & = (-i)^n \sum_{m=0}^n \frac{(n+m+1)!}{(n-m)! (3/2)_m^2} \binom{-\frac34+ix}{m} = g_n(x), \end{align*} giving the result. \end{proof}
\begin{proof}[Proof of~\eqref{eq:fn-intermsof-gn}] Using \eqref{eq:fn-gn-binomialiden2}, we can deduce the slightly more messy identity \begin{equation} \label{eq:fngn-binomialiden2-newvariant} \frac{(3/2)_n}{2^n (3/2)_m^2} \sum_{k=0}^{\lfloor (n-m)/2 \rfloor} \frac{(n-2k+1)(n-2k+m+1)!}{k!(n-k+1)!(n-2k-m)!} = 2^m \binom{n+1/2}{n-m} \qquad (n\ge m\ge 0). \end{equation} The way to see this is to first massage the left-hand side of \eqref{eq:fn-gn-binomialiden2} a bit by rewriting it as \begin{align*} \sum_{k=0}^N (N-2k)\binom{N}{k} \binom{N+m-2k}{2m+1} & = 2 \sum_{k=0}^{\lfloor N/2\rfloor} (N-2k)\binom{N}{k} \binom{N+m-2k}{2m+1} \\ & = 2 \sum_{k=0}^{\lfloor \frac{N-m}{2} \rfloor} (N-2k)\binom{N}{k} \binom{N+m-2k}{2m+1}, \end{align*} where the first equality follows from the symmetry of the summand under the relabeling $k\mapsto N-k$, and the second equality follows on noticing that the summands actually vanish for values of $k$ for which $\frac{N-m}{2} < k < \frac{N+m}{2}$. Thus, we obtain another variant of \eqref{eq:fn-gn-binomialiden2}, namely \begin{equation} \label{eq:fngn-binomialiden2-anothervariant2} \sum_{k=0}^{\lfloor \frac{N-m}{2}\rfloor} (N-2k)\binom{N}{k} \binom{N+m-2k}{2m+1} = N \binom{N-1}{m} 2^{N-m-1}. \end{equation} We leave it to the reader to verify (using similar simple substitutions as in the proof of \eqref{eq:gn-intermsof-fn} above) that setting $N=n+1$ in this new identity gives a relation that is equivalent to \eqref{eq:fngn-binomialiden2-newvariant}.
Finally, from \eqref{eq:fngn-binomialiden2-newvariant} we can prove the relation \eqref{eq:fn-intermsof-gn} in a manner analogous to the proof of \eqref{eq:gn-intermsof-fn}, again making use of the expansions \eqref{eq:fn-explicit2}, \eqref{eq:gn-explicit2} but working in the opposite direction. We have \begin{align*} & \frac{(3/2)_n}{2^n (n+1)!} \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k (n-2k+1) \binom{n+1}{k} g_{n-2k}(x) \\ & = \frac{(3/2)_n}{2^n (n+1)!} \sum_{k=0}^{\lfloor n/2 \rfloor} (-1)^k (n-2k+1) \binom{n+1}{k}
(-i)^{n-2k} \left(\sum_{m=0}^{n-2k} \frac{(n-2k+m+1)!}{(n-2k-m)!(3/2)_m^2} \binom{-\frac34+ix}{m} \right) \\ & = (-i)^n \sum_{m=0}^n \left( \frac{(3/2)_n}{2^n} \sum_{k=0}^{\lfloor (n-m)/2 \rfloor} \frac{(n-2k+1)(n-2k+m+1)!}{k!(n-k+1)!(n-2k-m)!(3/2)_m^2} \right) \binom{-\frac34+ix}{m}
= f_n(x), \end{align*} as claimed. \end{proof}
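As a final consistency check (ours, not part of the original argument), the identities \eqref{eq:gn-intermsof-fn} and \eqref{eq:fn-intermsof-gn} can be verified symbolically for small $n$ by generating $f_n$ and $g_n$ from their recurrences \eqref{eq:fn-recurrence} and \eqref{eq:gn-recurrence}; a Python/SymPy sketch of such a check follows.
\begin{verbatim}
# Spot-check of the two identities relating f_n and g_n for n = 0,...,7.
import sympy as sp

x = sp.symbols('x')
N = 8
f = [sp.Integer(1), 2*x]                     # f_0, f_1
g = [sp.Integer(1), sp.Rational(8, 3)*x]     # g_0, g_1
for n in range(1, N):
    f.append(sp.expand((2*x*f[n] - (n + sp.Rational(1, 2))*f[n-1])/(n + 1)))
    g.append(sp.expand((8*x*g[n] - (2*n + 1)*g[n-1])/(2*n + 3)))

threehalves = sp.Rational(3, 2)
for n in range(N):
    gn_from_f = sum(2**(n - 2*k)*sp.factorial(n - k)
                    / (sp.rf(threehalves, n - 2*k)*sp.factorial(k))*f[n - 2*k]
                    for k in range(n//2 + 1))
    fn_from_g = sp.rf(threehalves, n)/(2**n*sp.factorial(n + 1))*sum(
        (-1)**k*(n - 2*k + 1)*sp.binomial(n + 1, k)*g[n - 2*k]
        for k in range(n//2 + 1))
    assert sp.simplify(gn_from_f - g[n]) == 0
    assert sp.simplify(fn_from_g - f[n]) == 0
print("both identities verified for n = 0,...,7")
\end{verbatim}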
\chapter{Summary of main formulas}
\noindent \textbf{Series expansions for the Riemann xi function} \begin{align*} \Xi(t) & = \sum_{n=0}^\infty (-1)^n a_{2n} t^{2n} & \textrm{(p.~\pageref{eq:riemannxi-taylor})} \\ \Xi(t) & = \sum_{n=0}^\infty (-1)^n b_{2n} H_{2n}(t) & \textrm{(p.~\pageref{eq:hermite-expansion})} \\ \Xi(t) & = \sum_{n=0}^\infty (-1)^n c_{2n} f_{2n}\left(\frac{t}{2}\right) & \textrm{(p.~\pageref{eq:fn-expansion})} \\ \Xi(t) & = \sum_{n=0}^\infty (-1)^n d_{2n} g_{2n}\left(\frac{t}{2}\right) & \textrm{(p.~\pageref{eq:gn-expansion})} \end{align*}
\noindent \textbf{Series expansions for related functions} \begin{align*} A(r) &= \sum_{n=0}^\infty c_{2n} G_{2n}^{(3)}(r) & \textrm{(p.~\pageref{eq:aofr-selftrans-expansion})} \\[10pt] \tilde{\nu}(t) &= \frac{1}{2\sqrt{2}} \sum_{n=0}^\infty \frac{(3/2)_{2n}}{(2n)!} c_{2n} t^{2n} & \textrm{(p.~\pageref{eq:nutilde-taylor})} \\ \tilde{\nu}(t) &= \frac{1}{2\sqrt{2}}\sum_{n=0}^\infty d_{2n} U_{2n}(t) & \textrm{(p.~\pageref{eq:nutilde-chebyshev-exp})} \end{align*}
\noindent \textbf{Formulas for the coefficients} \begin{align*} a_{2n} & = \frac{1}{2^{2n}(2n)!} \int_0^\infty \omega(x)x^{-3/4}(\log x)^{2n}\,dx & \textrm{(p.~\pageref{eq:riemannxi-taylorcoeff-int})} \\ & = \frac{1}{(2n)!} \int_{-\infty}^\infty x^{2n} \Phi(x)\,dx & \textrm{(p.~\pageref{eq:a2n-qn-rnprime})} \\[13pt] b_{2n} & = \frac{1}{2^{2n} (2n)!} \int_{-\infty}^\infty x^{2n} e^{-\frac{x^2}{4}} \Phi(x)\,dx & \textrm{(p.~\pageref{eq:hermite-coeffs-def})} \\ & = \frac{(-1)^n}{\sqrt{\pi}2^{2n}(2n)!} \int_{-\infty}^\infty \Xi(t) e^{-t^2} H_{2n}(t)\,dt & \textrm{(p.~\pageref{eq:hermite-coeff-innerproduct})} \end{align*} \begin{align*} c_{2n} & = 2\sqrt{2}\int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{2n}\,dx & \textrm{(p.~\pageref{eq:def-fn-coeffs})} \\ & = (-1)^n \frac{\sqrt{2} \, (2n)!}{\pi^{3/2} (3/2)_{2n}}
\int_{-\infty}^\infty \Xi(t) f_{2n}\left(\frac{t}{2}\right)\left|\Gamma\left(\frac34+\frac{it}{2}\right)\right|^2\,dt & \textrm{(p.~\pageref{eq:fn-coeff-innerproduct})} \\ & = \frac{8\sqrt{2} \pi (2n)!}{(3/2)_{2n}} \int_0^\infty A(r) r^2 G_{2n}^{(3)}(r)\,dr & \textrm{(p.~\pageref{eq:fn-coeffs-radialform})} \\ & = 2\int_{-1}^1 t^{2n} \tilde{\omega}(t) \,dt & \textrm{(p.~\pageref{eq:fn-coeffs-asmoments})} \\[4pt] d_{2n} & =
\frac{(3/2)_{2n}}{2^{2n-3/2} (2n)!}\int_0^\infty \frac{\omega(x)}{(x+1)^{3/2}} \left(\frac{x-1}{x+1}\right)^{2n}
{}_2F_1\left(n+\frac34,n+\frac54; 2n+2; \left(\frac{x-1}{x+1}\right)^2 \right)
\,dx
& \textrm{(p.~\pageref{eq:def-gn-coeffs})} \\ & =
\frac{8}{\pi^3}(-1)^n \int_{-\infty}^\infty \Xi(t) g_{2n}\left(\frac{t}{2}\right) \left|\Gamma\left(\frac34+\frac{it}{2}\right)\right|^4\,dt
& \textrm{(p.~\pageref{eq:gn-coeff-innerproduct})} \\ & = \frac{n+1}{2^n} \sum_{m=0}^\infty \frac{(3/2)_{n+2m}}{4^m m!(n+m+1)!} c_{n+2m} & \textrm{(p.~\pageref{eq:dn-cn-expansion})} \\ & = \frac{4\sqrt{2}}{\pi} \int_{-1}^1 \tilde{\nu}(t) U_{2n}(t)\sqrt{1-t^2}\,dt & \textrm{(p.~\pageref{eq:dn-chebyshev-int})} \\ & = \frac{16}{\pi} (2n+1) \int_1^\infty \int_0^1 \frac{\omega(x)}{((x+1)^2-t(x-1)^2)^{3/4}} \left(\frac{t}{1-t}\right)^{1/4}
\left( \frac{t(1-t)(x-1)^2}{(x+1)^2-t(x-1)^2} \right)^n \,dt\,dx & \textrm{(p.~\pageref{eq:d2n-simplerep-doubleint})} \end{align*}
\noindent \textbf{Asymptotic formulas for the coefficients}
\begin{align*} a_{2n} & = \left(1+O\left(\frac{\log\log n}{\log n}\right)\right) \frac{\pi^{1/4}}{2^{2n-\frac52} (2n)!} \left(\frac{2n}{\log (2n)}\right)^{7/4} \\ & \qquad\qquad \times \exp\left[ 2n\left(\log \left(\frac{2n}{\pi}\right) - W\left(\frac{2n}{\pi}\right) - \frac{1}{W\left(\frac{2n}{\pi}\right)} \right) \right] & \textrm{(p.~\pageref{eq:taylorxi-coeff-asym})} \\[4pt] b_{2n} & = \left(1+O\left(\frac{\log\log n}{\log n}\right)\right) \frac{\pi^{1/4}}{2^{4n-\frac52} (2n)!} \left(\frac{2n}{\log (2n)}\right)^{7/4} \\ \nonumber &\qquad \qquad \times \exp\left[ 2n\left(\log \left(\frac{2n}{\pi}\right) - W\left(\frac{2n}{\pi}\right) - \frac{1}{W\left(\frac{2n}{\pi}\right)} \right) -\frac{1}{16} W\left(\frac{2n}{\pi}\right)^2\right] & \textrm{(p.~\pageref{eq:hermite-coeff-asym})} \\[4pt] c_{2n} & = \left(1+O\left(n^{-1/10}\right)\right) 16 \sqrt{2} \pi^{3/2}\, \sqrt{n} \,\exp\left(-4\sqrt{\pi n}\right) & \textrm{(p.~\pageref{eq:fn-coeff-asym})} \\[4pt] d_{2n} & = \left(1+O\left(n^{-1/10}\right)\right) \left(\frac{128\times 2^{1/3} \pi^{2/3}e^{-2\pi/3}}{\sqrt{3}}\right) n^{4/3}
\exp\left(-3(4\pi)^{1/3} n^{2/3}\right) & \textrm{(p.~\pageref{eq:gn-coeff-asym})} \end{align*}
\backmatter
\end{document}
\begin{document}
\begin{spacing}{1.25} \begin{frontmatter}
\title{On the quantification and efficient propagation of imprecise probabilities with copula dependence}
\author{ Jiaxin Zhang $^{1, 2}$, Michael Shields $^{2}$ \footnote{Corresponding author.\\ Email: [email protected] (J. Zhang), [email protected] (M. Shields)} } \address{$^{1}$Oak Ridge National Laboratory, Oak Ridge, TN 37831} \address{$^{2}$Department of Civil Engineering, Johns Hopkins University Baltimore, MD 21218}
\begin{abstract}
This paper addresses the problem of quantification and propagation of uncertainties associated with dependence modeling when data for characterizing probability models are limited. Practically, the system inputs are often assumed to be mutually independent or correlated by a multivariate Gaussian distribution. However, this subjective assumption may introduce bias in the response estimate if the real dependence structure deviates from this assumption. In this work, we overcome this limitation by introducing a flexible copula dependence model to capture complex dependencies. A hierarchical Bayesian multimodel approach is proposed to quantify uncertainty in dependence model-form and model parameters that result from small data sets. This approach begins by identifying, through Bayesian multimodel inference, a set of candidate marginal models and their corresponding model probabilities, and then estimating the uncertainty in {\color{black}the} copula-based {\color{black} dependence} structure, which is conditional on the marginals and their parameters. The overall uncertainties integrating marginals and copulas are probabilistically represented by an ensemble of multivariate candidate densities. A novel importance sampling reweighting approach is proposed to efficiently propagate the overall uncertainties through a computational model. Through an example studying the influence of constituent properties on the out-of-plane properties of transversely isotropic E-glass fiber composites, we show that the composite property with copula-based dependence model converges to the true estimate as data set size increases, {\color{black} while an independence or arbitrary Gaussian correlation} assumption leads to a biased estimate. \\
\end{abstract}
\begin{keyword} Dependence Modeling \sep Copula Theory \sep Imprecise Probability \sep Bayesian Multimodel Inference \sep Uncertainty Quantification \sep Composites \sep Small Data
\end{keyword} \end{frontmatter}
\section{Introduction}
Uncertainty Quantification (UQ) is widely applied to better understand complex stochastic physical and mathematical systems. Typically, computational simulations aim to estimate statistics of the response of a system subject to random inputs. These inputs are commonly modeled by a random vector $\bm{X}$ with joint probability density {\color{black}$f_{\bm{X}}(\bm{x})$}. The uncertainty associated with the inputs is quantified probabilistically and propagated through a computational model $\mathcal{M}$. The corresponding output $Y = \mathcal{M}(\bm{X})$ is the quantity of interest (QoI), which is uncertain. If the computational model is deterministic, all uncertainties in $Y$ result from the uncertainty in $\bm{X}$.
Practically, the inputs are often assumed to be mutually independent or to possess a multivariate Gaussian dependence structure because such a structure is simple to model and to fit {\color{black}from} data. Some conventional UQ approaches, for example, importance sampling \cite{melchers2018structural} and polynomial chaos expansions \cite{li1998adaptive}, take advantage of mutually independent inputs. If the inputs are dependent, a number of UQ approaches require mapping the model inputs $\bm{X}$ onto an input vector $\bm{X}^*$ with independent components. When {\color{black}$f_{\bm{X}}(\bm{x})$} has a multivariate Gaussian dependence structure, the map corresponds to the Nataf transformation \cite{nataf1962determination, lebrun2009innovating}. A more general map from the input $\bm{X}$ onto $\bm{X}^*$ is the Rosenblatt transformation \cite{rosenblatt1952remarks}, which requires knowledge of the conditional probability distribution functions (pdfs), which are often unavailable in practice. For this reason, the Gaussian dependence assumption is widely applied in the context of UQ. The Gaussian assumption and the associated dependence structure provide a convenient representation of the input dependencies, but they may introduce a bias in the response estimate if the real dependence structure deviates from this assumption.
Dependence modeling has recently received widespread attention in the engineering and mathematics communities. This is mainly due to the significant development of copula models \cite{nelsen2007introduction, joe2014dependence, wisadwongsa2018bivariate}, and vine copulas \cite{joe2011dependence, aas2009pair, joe2010tail,nagler2019model, muller2019dependence} in particular. Copula theory is used to separately model the dependence and the marginal distribution, but it is often limited to low-dimensional problems, typically bivariate or simple copula families, such as Gaussian or Archimedean families \cite{nelsen2007introduction}. Copula-based approaches have been recently used in various dependence modeling studies, for example in reliability and risk analysis \cite{rozsas2017effect, wang2018roles, xu2018failure, he2018failure, wang2018role, pan2019modeling}, sensitivity analysis \cite{wang2018copula, hu2018probability}, and prognostics and health management (PHM) \cite{xi2014copula, xi2019enhanced}. Copulas also have widespread applications in engineering practice, such as ocean and offshore \cite{zhang2015long, masina2015coastal}, wind \cite{warsido2015synthesis}, and earthquake \cite{goda2015multi} engineering. To overcome the limitation of copula theory in high dimensions, {\color{black}the vine copula} theory was first proposed by Joe \cite{joe1994multivariate, joe1997multivariate} by formulating multivariate copulas as a product of bivariate copulas among pairs of random variables. Bedford and Cooke \cite{bedford2002vines} introduced a graphical model for describing multivariate copulas using pair-copulas, which provides a flexible and easy interpretation. Czado presented a series of productive studies in the context of vine copulas \cite{czado2010pair, czado2013selection} and successfully applied them to financial modeling \cite{dissmann2013selecting, brechmann2012truncated}. Recently, vine copula approaches have become increasingly attractive in engineering applications \cite{xu2018failure, torre2019general, qiu2019scenario, li2019efficient, niemierko2019d}.
Conventionally, the dependence structure of multivariate inputs is built probabilistically through a known joint probability measure. Therefore, the first step of copula-based dependence modeling is to identify or assume a reasonable copula or vine copula model for the input variables. However, it may not be straightforward to identify the appropriate copula model when data characterizing the input parameters are sparse. This process may therefore give rise to a form of \emph{epistemic uncertainty} \cite{der2009aleatory}, which is due to a lack of knowledge or data. Epistemic uncertainty plays an essential role in UQ and must be considered, particularly when it arises from a lack of data.
Many theories have been developed to address the various forms of epistemic uncertainty. It has been argued that epistemic uncertainty needs a different mathematical treatment than \emph{aleatory uncertainty} \cite{ferson1996different}, which is naturally stochastic and treated probabilistically. It remains an open debate as to what that mathematical treatment should be. This debate has also given rise to the field of so-called \emph{imprecise probabilities}, wherein epistemic uncertainty contributes a level of ``imprecision'' and aleatory uncertainty is quantified by classical probability theory. There are numerous approaches to model this imprecision, including the use of fuzzy sets \cite{zadeh1965fuzzy,dubois2012fundamentals} and measures \cite{wang2013fuzzy}, random sets \cite{berger1994overview, fetz2004propagation, molchanov2005theory, fetz2016imprecise}, intervals and probability boxes \cite{Moore1979, ferson1996different, ferson2002, fetz2004propagation} and Dempster-Shafer theory \cite{Dempster1967, Shafer1976}. Walley \cite{walley1991statistical, walley2000towards} has worked to unify these theories under an over-arching theory of imprecise probabilities. An extensive review of many of these imprecise probability approaches for engineering applications can be found in \cite{beer2013imprecise}.
To the authors' knowledge, relatively few studies have accounted for the problem of \emph{imprecise dependence modeling} in UQ. Some recent studies focus on investigations of Sklar's theorem for imprecise copulas using fuzzy theory \cite{montes2015sklar, pelessoni2013imprecise}. Coolen-Maturi et al. \cite{coolen2016predictive} combine nonparametric predictive inference, which quantifies the uncertainties through imprecise probability, with a parametric copula to model and estimate the dependence structure. Among the most comprehensive studies of UQ with dependence modeling is that conducted by Kurowicka and Cooke \cite{kurowicka2006uncertainty}, who discussed UQ in bivariate as well as high-dimensional dependence modeling. More recent works include those of Schefzik et al. \cite{schefzik2013uncertainty}, who propose a general multi-stage procedure called ensemble copula coupling to quantify the uncertainty in complex simulation models, particularly in weather and climate predictions, and Torre et al.\ \cite{torre2019general}, who use vine copulas to develop a general data-driven UQ framework for dependence modeling of complex inputs.
In this paper, we investigate copula-based dependence modeling in the context of imprecise probability that specifically results from a lack of data. This is motivated by the difficulty of data collection in engineering practice under complex conditions, for example, long-duration and expensive experiments. When only scarce data are available, it is a challenging task to assign an objective and accurate probability distribution to the random inputs and to precisely estimate their dependence relationship. The developed method builds on the previous work of the authors, who proposed information-theoretic \cite{zhang2018quantification} and Bayesian \cite{zhang2018effect} multimodel probabilistic methodologies to quantify and efficiently propagate combined aleatory and epistemic uncertainty given small data sets. This work introduces copula-based dependence modeling, which is flexible enough to capture complex dependence structures. To fully quantify the uncertainty in dependence modeling, we propose a hierarchical Bayesian multimodel approach that first identifies a set of candidate marginal models and their associated model probabilities, and then estimates the {\color{black}copula} model-form and model parameter uncertainties, which are {\color{black}conditioned} on the uncertain marginals and their parameters. Using the proposed method, an ensemble of candidate multivariate densities is identified for the random inputs, which must then be propagated through a complex model to estimate the response of an engineering system. Propagation of these families of densities is particularly difficult because it requires nested Monte Carlo calculations, which are often computationally infeasible even for simple models. This paper proposes a novel and efficient importance sampling reweighting algorithm that allows simultaneous propagation of the multiple densities through a single Monte Carlo simulation. The proposed method can further be updated adaptively as additional data are collected, without requiring additional model evaluations.
This paper is structured as follows. Section 2 provides a brief review of copula-based dependence modeling, particularly bivariate copula theory and vine copula theory. Section 3 presents the uncertainty analysis for copula-based multivariate dependence modeling, including copula uncertainty and marginal uncertainty. An efficient uncertainty propagation with imprecise copula dependence modeling is proposed in Section 4. Section 5 shows an application of the proposed method to the probabilistic prediction of unidirectional composite lamina properties. Some discussions and concluding remarks are given in Section 6.
\section{Copula-based modeling of dependence structure}
\subsection{{\color{black}Measures of statistical dependence}}
The most well-known measure of dependence between random variables is Pearson's correlation coefficient, commonly referred to simply as the correlation coefficient, which measures linear dependence. Considering two random variables $X$ and $Y$ with mean values $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$, the correlation coefficient $\rho_{X,Y}$ is defined as \begin{equation} \rho_{X,Y} = \frac{cov(X,Y)}{\sigma_X \sigma_Y} = \frac{E[(X-\mu_X)(Y-\mu_Y)]}{\sigma_X \sigma_Y} \label{eqn:correlation} \end{equation} where $E[\cdot]$ is the expectation and $cov$ is the covariance. The correlation coefficient is bounded in the interval $[-1, 1]$ and indicates the degree of linear dependence between the two variables. The closer the coefficient is to either 1 or -1, the stronger the correlation between the variables. If the variables are independent, the correlation coefficient is 0, although the converse does not hold in general.
Another common measure of dependence is Kendall's $\tau$, or Kendall's rank correlation coefficient, which measures the difference between the probabilities of concordance and discordance and can be used to detect some forms of nonlinear dependence. Let $(X_1, Y_1)$ and $(X_2, Y_2)$ be independent copies of the random vector $(X, Y)$; then Kendall's $\tau$ is defined as \begin{equation} \tau_{X,Y} = P[(X_1 - X_2)(Y_1 - Y_2)>0] - P[(X_1 - X_2)(Y_1-Y_2)<0]. \end{equation}
Rank correlation can also be expressed using Spearman's $\rho$ (defined as the correlation coefficient -- Eq.\ \eqref{eqn:correlation} -- between the ranks of the variables) and both Kendall's $\tau$ and Spearman's $\rho$ can be shown to be special cases of a generalized rank correlation \cite{kendall1938new}.
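To illustrate the difference between linear and rank correlation, consider the following small Python example (ours, purely illustrative; the data and the transformation are arbitrary): applying a strictly increasing but nonlinear transformation to one of the variables leaves Kendall's $\tau$ and Spearman's $\rho$ unchanged, while Pearson's correlation coefficient changes.
\begin{verbatim}
# Pearson vs. rank correlation under a monotone nonlinear transformation y -> y^3.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + 0.3*rng.normal(size=2000)              # linearly dependent data
for label, yy in [("y", y), ("y**3", y**3)]:
    print(label,
          round(stats.pearsonr(x, yy)[0], 3),    # changes under y -> y^3
          round(stats.kendalltau(x, yy)[0], 3),  # invariant
          round(stats.spearmanr(x, yy)[0], 3))   # invariant
\end{verbatim}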
However, the information given by a correlation coefficient (Pearson's $\rho$, Kendall's $\tau$, or Spearman's $\rho$) is only enough to define the dependence structure between random variables in special cases, e.g. Gaussian random variables. In general, the complete dependence structure requires knowledge of the full joint distribution. One method to capture the complete dependence structure is to model the joint distribution using a copula. In practice, many data structures exhibit different marginal distributions, nonsymmetric/nonlinear dependencies, and/or tail dependencies between variables. These variables cannot be modeled by a Gaussian or multivariate $t$ distribution. This challenge is overcome by the copula approach, which models the dependencies and marginal distributions separately.
\subsection{Copula theory}
Consider {\color{black}$F_{\bm{X}}(\bm{x})$} as the $d$-dimensional joint distribution function of the random vector $\bm{X} = (X_1,...,X_d)^T$ with marginal distributions {\color{black}$F_1(x_1),...,F_d(x_d)$}. According to Sklar's theorem \cite{sklar1959fonctions}, there exists a copula $C$ such that for all $\bm{x} = (x_1,...,x_d)^T \in [-\infty, \infty]^d$, {\color{black} \begin{equation} F_{\bm{X}}(\bm{x}) = C(F_1(x_1),...,F_d(x_d)) \label{eq:sklar} \end{equation} } If {\color{black}$F_1(x_1),...,F_d(x_d)$} are continuous, the copula $C$ is unique. The copula $C$ can be interpreted as the joint distribution function of a $d$-dimensional random vector on $[0,1]^d$ with uniform marginals.
Sklar's theorem can also be restated with respect to probability densities. The corresponding copula density can be expressed as: {\color{black} \begin{equation} c(F_1(x_1),...,F_d(x_d)) = \frac{\partial^d C(F_1(x_1),...,F_d(x_d))}{\partial F_1(x_1) \cdots \partial F_d(x_d)} \label{eq:copula_density} \end{equation} } which implies the joint multivariate pdf can be formulated by {\color{black} \begin{equation}
f_{\bm{X}}(\bm{x}) = c(F_1(x_1),...,F_d(x_d)) \cdot f_1(x_1) \cdots f_d(x_d) \label{eq:jpdf_copula_de} \end{equation} } where $f_k(x_k), 1\le k \le d$ are the marginal pdfs. For the bivariate case, Joe \cite{joe1997multivariate} and Nelsen \cite{nelsen2007introduction} provided a rich variety of copula families from the two major classes of \emph{Elliptical} and \emph{Archimedean} copulas. Elliptical copulas are directly derived by inverting Sklar's theorem, shown in Eq.\ (\ref{eq:sklar}). Given a bivariate cumulative distribution function {\color{black}$F_{\bm{X}}(\bm{x})$} with marginals {\color{black}$F_1(x_1)$ and $F_2(x_2)$}, \begin{equation} C(u_1, u_2) = F(F_{1}^{-1}(u_1), F_2^{-1}(u_2) ) \label{eq:bi_copula} \end{equation} is a bivariate copula for $u_1, u_2 \in [0,1]$. One of the most commonly used bivariate elliptical copulas is the bivariate Gaussian copula \begin{equation} C(u_1, u_2) = \Phi_{\rho}(\Phi^{-1}(u_1), \Phi^{-1}(u_2) ) \end{equation} where $\Phi_{\rho}$ is the joint cumulative distribution function of the bivariate standard normal distribution with correlation coefficient $\rho$ and $\Phi^{-1}$ is the inverse standard normal cdf.
Another common copula is the Student-$t$ copula, which is derived from the bivariate Student-$t$ distribution with density \begin{equation}
{\color{black}f_{\bm{X}}(\bm{x})} = \frac{\Gamma(\frac{\nu+2}{2})}{\Gamma(\frac{\nu}{2}) \sqrt{(\pi\nu)^2|\bm{\Sigma}|}} \left( 1+ \frac{(\bm{x}-\bm{\mu})^{\prime}\bm{\Sigma}^{-1}(\bm{x}-\bm{\mu})}{\nu} \right)^{-\frac{\nu+2}{2}} \end{equation} where $\nu$ is the number of degrees of freedom, $\bm{\mu}$ is the mean vector and $\bm{\Sigma}$ is a positive-definite matrix. {\color{black} Since the copula remains invariant under a standardization of the marginal distributions, the copula of a $t(\nu, \bm{\mu}, \bm{\Sigma})$ is identical to that of a $t(\nu, 0, \bm{P})$ distribution where $\bm{P}$ is the correlation matrix implied by the dispersion matrix $\bm{\Sigma}$ \cite{demarta2005t}. Thus, the corresponding Student-t copula is given by \begin{equation}
C(u_1,u_2) = \int_{-\infty}^{t_{\nu}^{-1}(u_1)} \int_{-\infty}^{t_{\nu}^{-1}(u_2)} \frac{\Gamma(\frac{\nu+2}{2})}{\Gamma(\frac{\nu}{2}) \sqrt{(\pi\nu)^2|\bm{P}|}} \left( 1+ \frac{\bm{x}^{\prime}\bm{P}^{-1}\bm{x}}{\nu} \right)^{-\frac{\nu+2}{2}} d\bm{x}. \end{equation} For the bivariate case, we simplify the notation to \begin{equation} C(u_1, u_2) = t_{\rho, \nu} (t_{\nu}^{-1}(u_1), t_{\nu}^{-1}(u_2)) \end{equation} where $\rho$ is the off-diagonal element of $\bm{P}$ \cite{demarta2005t} and $t_{\nu}^{-1}$ is the inverse Student-$t$ marginal distribution function with $\nu$ degrees of freedom. Fig.\ \ref{fig:elliptical copula} shows samples from the elliptical copula family with Gaussian and Student-$t$ copulas. Table \ref{tab:elliptical copula table} provides the basic properties of the Gaussian and Student-$t$ copulas. \begin{figure}\label{fig:elliptical copula}
\end{figure} }
\begin{table}[!ht] \footnotesize \centering \caption{Properties and definition of elliptical copula families} \label{tab:elliptical copula table} \begin{tabular}{@{}cccc@{}} \toprule Elliptical family & Parameter range & Kendall's $\tau$ & Tail dependence \\ \midrule Gaussian & $\rho \in (-1,1)$ & $\frac{2}{\pi}\arcsin(\rho)$ & 0 \\ Student-$t$ & $\rho \in (-1,1), \nu>2$ & $\frac{2}{\pi}\arcsin(\rho)$ & $2t_{\nu+1}(-\sqrt{\nu+1}\sqrt{\frac{1-\rho}{1+\rho}})$ \\ \bottomrule \end{tabular} \end{table}
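As a concrete illustration of Eq.\ (\ref{eq:bi_copula}) and of the Kendall's $\tau$ relation in Table \ref{tab:elliptical copula table} (our own sketch; the marginal distributions and parameter values are arbitrary), the following Python code draws samples from a bivariate Gaussian copula, imposes non-Gaussian marginals through the inverse-cdf transform, and checks that the empirical Kendall's $\tau$ is close to $\frac{2}{\pi}\arcsin(\rho)$.
\begin{verbatim}
# Sampling from a bivariate Gaussian copula: draw z ~ N(0, P) with correlation rho
# and set u_i = Phi(z_i); arbitrary marginals are then imposed via their inverse cdfs.
import numpy as np
from scipy import stats

rho = 0.7
rng = np.random.default_rng(1)
P = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=P, size=50000)
u = stats.norm.cdf(z)                       # copula samples on [0, 1]^2

x1 = stats.lognorm.ppf(u[:, 0], s=0.5)      # e.g. lognormal marginal
x2 = stats.gamma.ppf(u[:, 1], a=2.0)        # e.g. gamma marginal

print(stats.kendalltau(u[:, 0], u[:, 1])[0], 2/np.pi*np.arcsin(rho))
\end{verbatim}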
Another important copula family, the Archimedean copulas, is defined as \begin{equation} C(u_1, u_2) =\psi^{[-1]}(\psi(u_1) + \psi(u_2)) \end{equation} where $\psi$ is the generator function of the copula $C$, a continuous, strictly decreasing, convex function satisfying $\psi(1) = 0$, and $\psi^{[-1]}$ is its pseudo-inverse, defined as \begin{equation} \psi ^{ [-1] }(t) = \left\{ \begin{matrix} \psi ^{ -1 }(t), \quad 0 \le t \le \psi(0) \\ 0, \quad \psi(0) \le t \le \infty \end{matrix} \right. \end{equation}
The most common single-parameter Archimedean copulas are the Clayton, Gumbel and Frank copulas \cite{nelsen2007introduction}. Their bivariate copula formulations are shown in Table \ref{tab:Archimedean_1}, with their corresponding properties (generator and Kendall's $\tau$) shown in Table \ref{tab:Archimedean_2}, where $D_1(\theta) = \frac{1}{\theta} \int_{0}^{\theta} \frac{t}{e^t-1}dt$ is {\color{black}the} Debye function \cite{joe1997multivariate, nelsen2007introduction}. Fig.\ \ref{fig:archimedean copula} shows examples of samples drawn from these copulas for two random variables $u_1$ and $u_2$.
\begin{table}[!ht] \footnotesize \centering \caption{Definitions of Archimedean copula families} \label{tab:Archimedean_1} \begin{tabular}{@{}ccc@{}} \toprule Name of Copula & Bivariate copula $C_{\theta}(u_1,u_2)$ & Parameter $\theta$ \\ \midrule Clayton & $\left[ \max \left\{ u_1^{-\theta} + u_2^{-\theta} -1, 0 \right\} \right]^{-1/\theta}$ & $\theta \in [-1, \infty) \setminus \left\{0 \right\}$ \\ Frank & $-\frac{1}{\theta}\log \left[ 1+ \frac{(e^{-\theta u_1}-1)(e^{-\theta u_2}-1)}{e^{-\theta}-1} \right] $ & $\theta \in \mathbb{R} \setminus \left\{0 \right\} $ \\ Gumbel & $e^{-\left( (-\log(u_1))^{\theta} + (-\log(u_2))^{\theta} \right)^{1/ \theta}} $ & $\theta \in [1, \infty)$ \\ \bottomrule \end{tabular} \end{table}
\begin{table}[!ht] \footnotesize \centering \caption{Properties of Archimedean copula families} \label{tab:Archimedean_2} \begin{tabular}{@{}ccc@{}} \toprule Name of Copula & Generator & Kendall's $\tau$ \\ \midrule Clayton & $\frac{1}{\theta}(t^{-\theta}-1)$ & $\frac{\theta}{\theta+2}$ \\ Frank & $-\log[\frac{e^{-\theta t}-1}{e^{-\theta}-1}]$ & $1-\frac{4}{\theta} + 4 \frac{D_1(\theta)}{\theta} $ \\ Gumbel & $(-\log t)^{\theta}$ & $1-\frac{1}{\theta}$ \\ \bottomrule \end{tabular} \end{table}
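As a brief illustration of the generator construction, the following Python sketch builds the Clayton copula from its generator in Table \ref{tab:Archimedean_2} and checks it against the closed form in Table \ref{tab:Archimedean_1}; it assumes $\theta>0$, for which the pseudo-inverse coincides with the ordinary inverse.
\begin{verbatim}
# Minimal sketch: Clayton copula from its generator psi(t) = (t^{-theta}-1)/theta,
# checked against the closed form in Table 2 (theta > 0 assumed).
import numpy as np

def psi(t, theta):
    return (t ** (-theta) - 1.0) / theta

def psi_inv(s, theta):
    return (1.0 + theta * s) ** (-1.0 / theta)

def clayton_from_generator(u1, u2, theta):
    return psi_inv(psi(u1, theta) + psi(u2, theta), theta)

def clayton_closed_form(u1, u2, theta):
    return np.maximum(u1 ** (-theta) + u2 ** (-theta) - 1.0, 0.0) ** (-1.0 / theta)

u1, u2, theta = 0.3, 0.8, 2.0
assert np.isclose(clayton_from_generator(u1, u2, theta),
                  clayton_closed_form(u1, u2, theta))
\end{verbatim}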
\begin{figure}
\caption{Samples drawn from the Clayton, Frank and Gumbel copulas for two random variables $u_1$ and $u_2$}
\label{fig:archimedean copula}
\end{figure}
\subsection{Vine copulas} Copula families perform well in the bivariate case, but in arbitrarily high dimension the choice of adequate copula families is very limited. Elliptical families and Archimedean copulas lack the flexibility to accurately model the dependence structure of high-dimensional variables. Simple extensions of these bivariate families offer some improvement, but typically become intricate and introduce additional limitations; for example, they cannot be applied to establish a distribution consistent with an arbitrary correlation structure \cite{brechmann2013cdvine}.
Vine copulas (also called tree structures) do not suffer from these issues and have been widely used in many fields of application. Bedford and Cooke \cite{bedford2002vines} introduced a graphical model for describing multivariate copulas using a cascade of bivariate copulas, denoted \emph{pair-copulas}. This pair-copula construction provides a flexible way to decompose a multivariate probability density into bivariate copulas such that each pair-copula can be selected independently of the others.
Consider a {\color{black}$d$}-dimensional joint density function {\color{black}$f_{\bm{X}}(x_1, ...,x_d)$} for a random vector {\color{black}$\bm{X} = (X_1,...,X_d)$}. This density can be decomposed using the chain rule of conditional probability as \begin{equation}
{\color{black}f(x_1,...,x_d) = f_d(x_d) \cdot f(x_{d-1}|x_d) \cdot f(x_{d-2} | x_{d-1}, x_d) \cdots f(x_1|x_2,...,x_{d})}. \end{equation}
From Sklar's theorem, we also know the joint probability density can be formulated as shown in Eq.\ (\ref{eq:jpdf_copula_de}). In the bivariate case, Eq.\ (\ref{eq:jpdf_copula_de}) simplifies to \begin{equation} f(x_1, x_2) = c_{12}(F_1(x_1), F_2(x_2)) \cdot f_1(x_1) \cdot f_2(x_2) \end{equation} where $c_{12}$ is the appropriate \emph{pair-copula density} for the pair of transformed variables $F_1(x_1)$ and $F_2(x_2)$. It is straightforward to write a conditional density \begin{equation}
f(x_1| x_2) = c_{12}(F_1(x_1), F_2(x_2)) \cdot f_1(x_1) \label{eq:2d_copula1} \end{equation} in terms of the pair-copula. Similarly, for three random variables $X_1$, $X_2$ and $X_3$ we obtain \begin{equation}
f(x_1| x_2, x_3) = c_{12|3}(F(x_1|x_3), F(x_2|x_3)) \cdot f(x_1|x_3) \label{eq:3d_copula1} \end{equation}
for the appropriate pair-copula $c_{12|3}$ which is used for {\color{black}the} transformed variables $F(x_1 | x_3)$ and $F(x_2 | x_3)$. An alternative decomposition is \begin{equation}
f(x_1| x_2, x_3) = c_{13|2}(F(x_1|x_2), F(x_3|x_2)) \cdot f(x_1|x_2) \label{eq:3d_copula2} \end{equation}
where $c_{13|2}$ differs from the pair-copula in Eq.\ (\ref{eq:3d_copula1}). We can further decompose $f(x_1|x_2)$ in Eq.\ (\ref{eq:3d_copula2}) based on Eq.\ (\ref{eq:2d_copula1}) \begin{equation}
f(x_1| x_2, x_3) = c_{13|2}(F(x_1|x_2), F(x_3|x_2)) \cdot c_{12}(F_1(x_1), F_2(x_2)) \cdot f_1(x_1). \end{equation}
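Combining, for example, this last expression with the chain-rule factorization $f(x_1,x_2,x_3) = f_3(x_3)\, f(x_2|x_3)\, f(x_1|x_2,x_3)$ and the pair-copula form of $f(x_2|x_3)$ yields one possible complete three-dimensional decomposition
\begin{equation}
f(x_1,x_2,x_3) = f_1(x_1)\, f_2(x_2)\, f_3(x_3) \cdot c_{12}(F_1(x_1), F_2(x_2)) \cdot c_{23}(F_2(x_2), F_3(x_3)) \cdot c_{13|2}(F(x_1|x_2), F(x_3|x_2)).
\end{equation}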
By extension, a conditional marginal density can be decomposed using the appropriate pair-copula according to the general form \cite{aas2009pair, czado2010pair} \begin{equation}
f(x|\bm{v}) = c_{xv_j|\bm{v}_{-j}}(F({x}|\bm{v}_{-j}), F(v_j|\bm{v}_{-j})) f({x} | \bm{v}_{-j}) \end{equation}
where $v_j$ is an arbitrarily chosen element of the vector $\bm{v}$ and $\bm{v}_{-j}$ denotes the vector $\bm{v}$ with $v_j$ excluded. Hence, a multivariate density {\color{black}$f_X(\bm{x})$} can be expressed as a product of bivariate copula density functions and conditional marginal CDFs of the form {\color{black} $F({x}|\bm{v})$, which can be formulated recursively as follows \cite{joe1997multivariate}
\begin{equation}
F({x}|\bm{v}) = \frac{\partial C_{x,v_j|\bm{v}_{-j}}(F({x}|\bm{v}_{-j}), F(v_j | \bm{v}_{-j}))}{\partial F(v_j | \bm{v}_{-j})} \label{eq:F_conditional} \end{equation}
where $C_{x,v_j|\bm{v}_{-j}}$ is a bivariate copula distribution function. }
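As a concrete example, when the pair-copula $C_{x,v_j|\bm{v}_{-j}}$ is a bivariate Gaussian copula with correlation parameter $\rho$, Eq.\ (\ref{eq:F_conditional}) takes the well-known closed form (often called the $h$-function \cite{aas2009pair})
\begin{equation}
F(x|\bm{v}) = \Phi\left( \frac{\Phi^{-1}\big(F(x|\bm{v}_{-j})\big) - \rho\, \Phi^{-1}\big(F(v_j|\bm{v}_{-j})\big)}{\sqrt{1-\rho^{2}}} \right).
\end{equation}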
Note that a {\color{black}$d$}-dimensional multivariate density can be factorized into conditional pair-copulas in a number of different ways based on the vine copula construction proposed by Bedford and Cooke \cite{bedford2002vines}. Within the class of regular vine (R-vine) structures, there are two special types: the canonical vine (C-vine) and the drawable vine (D-vine). For the C-vine, each tree has a unique node that is connected to all other nodes, and the corresponding joint pdf {\color{black}$f_X(\bm{x})$} is \begin{equation}
{\color{black}f_X(\bm{x}) = \prod_{k=1}^{d}f_k(x_k)\prod_{j=1}^{d-1}\prod_{i=1}^{d-j}c(F(x_j|x_1,...,x_{j-1}), F(x_{j+i}|x_1,...,x_{j-1}))}. \end{equation}
In contrast, each tree in a D-vine is a path and the corresponding joint pdf {\color{black}$f_X(\bm{x})$} is \begin{equation}
{\color{black}f_X(\bm{x}) = \prod_{k=1}^{d}f_k(x_k)\prod_{j=1}^{d-1}\prod_{i=1}^{d-j}c(F(x_i|x_{i+1},...,x_{i+j-1}), F(x_{i+j}|x_{i+1},...,x_{i+j-1}))} \end{equation} where the subscript indices indicate the conditioning variables of each pair-copula.
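To make the construction concrete, the following Python sketch (illustrative only; the correlation parameters \texttt{rho12}, \texttt{rho23}, \texttt{rho13\_2} and the standard normal marginals are assumptions for the example) evaluates a three-dimensional D-vine density built from Gaussian pair-copulas and the conditional CDF of Eq.\ (\ref{eq:F_conditional}).
\begin{verbatim}
# Minimal sketch: 3-dimensional D-vine density
# f(x1,x2,x3) = f1 f2 f3 * c12(F1,F2) * c23(F2,F3) * c13|2(F(x1|x2), F(x3|x2))
# with Gaussian pair-copulas; parameters and marginals are illustrative.
import numpy as np
from scipy.stats import norm

def gaussian_copula_pdf(u, v, rho):
    z1, z2 = norm.ppf(u), norm.ppf(v)
    return np.exp((2*rho*z1*z2 - rho**2*(z1**2 + z2**2)) / (2*(1 - rho**2))) \
           / np.sqrt(1 - rho**2)

def h(u, v, rho):
    """Conditional cdf F(x|v) for a Gaussian pair-copula (the h-function)."""
    return norm.cdf((norm.ppf(u) - rho*norm.ppf(v)) / np.sqrt(1 - rho**2))

def dvine3_pdf(x, rho12, rho23, rho13_2):
    u = norm.cdf(x)                         # standard normal marginals (assumed)
    marg = np.prod(norm.pdf(x))
    c12 = gaussian_copula_pdf(u[0], u[1], rho12)
    c23 = gaussian_copula_pdf(u[1], u[2], rho23)
    c13_2 = gaussian_copula_pdf(h(u[0], u[1], rho12), h(u[2], u[1], rho23), rho13_2)
    return marg * c12 * c23 * c13_2

print(dvine3_pdf(np.array([0.1, -0.4, 0.7]), 0.5, 0.3, 0.2))
\end{verbatim}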
Copula theory and vine copulas are an important tool for modeling the dependence of multivariate densities in either low or high dimension. A critical question that follows is how to select and estimate all components of a bivariate copula model or tree structure model from limited data. {\color{black} This paper focuses mainly on the bivariate copula model to show how to efficiently quantify the uncertainties associated with copula model selection and the corresponding parameters. The proposed method can be extended to high-dimensional dependent problems given a specified vine copula structure.} The next sections discuss this issue in detail.
\section{Statistical inference of copula dependence modeling}
Given a {\color{black}$d$}-dimensional probability density, we can decompose it into products of marginal densities and bivariate copula densities and represent this decomposition with a nested set of trees that fulfill a proximity condition. However, it is often difficult to directly identify a {\color{black}$d$}-dimensional probability density. Instead, more commonly, only data are provided and statistical inference is necessary for model selection and parameter estimation. Small data sets create additional uncertainties which pose a significant challenge to the inference of the copula dependence model.
Assuming known marginal distributions, copula dependence modeling consists of three principal components: tree structure, copula {\color{black}form} and copula parameters. {\color{black} However, for small data sets, uncertainty in the marginals cannot be ignored.} Consequently, the marginal {\color{black}form} and marginal distribution parameters must also be included in the inference process. As a result, the total uncertainty when inferring the joint probability model form, $U_{all}$, includes the following five components: \begin{equation} U_{all} =\left\{ U_{t}, U_{cf}, U_{cp}, U_{mf}, U_{mp} \right\} \end{equation} where $U_{t}$ is the uncertainty in the tree structure, $U_{cf}$ and $U_{cp}$ are the uncertainties in the copula families and parameters respectively, and $U_{mf} $ and $ U_{mp}$ represent the uncertainty in the marginal distribution families and parameters. To quantify these uncertainties, statistical methods are adopted for model selection and parameter estimation.
The model uncertainty in the tree structure is particularly challenging to address, mainly because the number of possible pair-copula decompositions is potentially large, especially in high dimension. Typically, the tree structure is assumed to follow a specified model based on the analyst's knowledge or experience. There are several model selection approaches for the specification of tree structures, including optimal C-vine structure selection \cite{czado2012maximum}, Bayesian approaches for D-vine selection \cite{min2011bayesian} and maximum spanning trees for R-vines \cite{dissmann2013selecting}. Here, tree model selection is not our first priority, so we do not elaborate on these methods. Instead, {\color{black}our emphasis is on how} to efficiently quantify the uncertainties associated with copula {\color{black}form} selection and the corresponding parameters given a specified vine copula structure.
\subsection{Copula {\color{black}form} selection and parameter estimation}
When a specific vine copula structure is determined, classical statistical approaches, including goodness-of-fit tests \cite{massey1951kolmogorov}, independence tests \cite{mcdonald2009handbook} and AIC/BIC \cite{burnham2004multimodel}, are capable of handling copula {\color{black}form} selection when data sets are large. When both the tree structure and the copula {\color{black}form} are known and the data set is large, the copula parameters can be estimated using sequential estimation \cite{aas2009pair, czado2012maximum}, maximum likelihood estimation \cite{stober2012there}, or Bayesian parameter estimation \cite{min2011bayesian, gruber2015sequential}. However, these classical approaches fall short when inferring from small data sets.
Traditionally, statistical inference is applied to select a single ``best" model given a set of candidate models and available data, and this model is then used as the sole model for probabilistic modeling. Any uncertainty associated with model selection is simply ignored. However, it is often difficult (even impossible) to identify a unique best model without significant (and potentially problematic) assumptions. Consequently, it is necessary to consider model uncertainty and compare the validity of multiple candidate models -- a process referred to as multimodel inference, as introduced by Burnham and Anderson \cite{burnham2004multimodel}. In this study, we generalize the Bayesian multimodel inference developed previously by the authors \cite{zhang2018quantification, zhang2018effect} to include uncertainty in the form and parameters of the copula dependence model.
Given a data set $\bm{d}$, the model selection problem is to identify the model $M_j$ that ``best" fits the data from a collection of $N$ candidate models {\color{black}$\bm{\mathbb{M}}=\{M_j\},\ j=1,\dots, N$}. The notion of best fit varies depending on the selected metric. In the Bayesian setting used here, initial model prior probabilities {\color{black}$\tilde{\pi}_j=p(M_j)$ with $\sum_{j=1}^{N}\tilde{\pi}_j=1$ are assigned to each model $M_j \in \bm{\mathbb{M}}$. According to Bayes' rule, the posterior model probability given the data $\bm{d}$ can be calculated by \begin{equation}
{\pi}_j = p(M_j| \bm{d}) = \frac{p(\bm{d}| M_j)p(M_j )}{\sum_{k=1}^{N} p(\bm{d} | M_k)p(M_k )}, \quad j=1,\dots, N \label{eq:Bayes_model} \end{equation} having $\sum_{j=1}^{N}{\pi}_j=1$ and where} \begin{equation}
p(\bm{d}|M_j)=\int_{\boldsymbol{\theta}_j}p(\bm{d}|\boldsymbol{\theta}_j,M_j)p(\boldsymbol{\theta}_j|M_j)d\boldsymbol{\theta}_j, \quad j=1,\dots,N \label{eq:Model_evidence} \end{equation} is referred to as the marginal likelihood or evidence of model $M_j$.
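As a minimal illustration of how the evidence in Eq.\ \eqref{eq:Model_evidence} can be estimated, the following Python sketch uses simple prior Monte Carlo sampling for a single hypothetical candidate model (a bivariate Gaussian copula with a uniform prior on $\rho$); the prior, the sample size and the variable \texttt{u} (data transformed to the copula scale) are illustrative assumptions.
\begin{verbatim}
# Minimal sketch: prior Monte Carlo estimate of the evidence
# p(d|M) = int p(d|theta,M) p(theta|M) dtheta for a Gaussian-copula model.
import numpy as np
from scipy.stats import norm

def gaussian_copula_logpdf(u, rho):
    z = norm.ppf(u)                                  # u: (n, 2) copula-scale data
    z1, z2 = z[:, 0], z[:, 1]
    return (2*rho*z1*z2 - rho**2*(z1**2 + z2**2)) / (2*(1 - rho**2)) \
           - 0.5*np.log(1 - rho**2)

def log_evidence_gaussian_copula(u, n_prior=20000, seed=0):
    rng = np.random.default_rng(seed)
    rhos = rng.uniform(-0.99, 0.99, n_prior)         # draws from the (assumed) prior
    loglik = np.array([gaussian_copula_logpdf(u, r).sum() for r in rhos])
    m = loglik.max()                                 # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(loglik - m)))
\end{verbatim}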
Commonly, the model $M^* \in \bm{\mathbb{M}}$ with the highest posterior model probability $p(M^*|\bm{d})$ is selected as the single ``best" model. By contrast, Bayesian multimodel inference ranks the candidate models by their posterior model probabilities calculated by Eq. \eqref{eq:Bayes_model} and retains all plausible models with non-negligible probability. Once the plausible models and their associated model probabilities have been identified, model parameter uncertainties are assessed by applying Bayesian parameter estimation. For each model in the set of plausible models, {\color{black}$M_i, \ i= 1,\dots,N_d\ (N_d \le N) $}, we begin by assigning a prior (often a noninformative prior) to the model parameters $\bm{\theta}_i$, denoted ${ p }(\bm{\theta}_i|M_i)$. We then estimate the posterior parameter distribution using Bayes' rule: \begin{equation}
{ p }(\bm{\theta}_i|\bm{d},M_i)=\frac { p({ \bm{d}}|\bm{\theta}_i,M_i)p(\bm{\theta}_i|M_i) }{ p(\bm{d}|M_i) } \propto { p({ \bm{d}}|\bm{\theta}_i,M_i)p(\bm{\theta}_i|M_i) }, \quad i=1,\dots, N_d \label{eq:Bayesian_inference} \end{equation}
where $p({ \bm{d}}|\bm{\theta}_i,M_i)$ is the likelihood function. The posterior ${ p }(\bm{\theta}_i|\bm{d},M_i)$ is identified implicitly through Markov Chain Monte Carlo (MCMC) sampling without requiring the calculation of the model evidence $p(\bm{d}|M_i)$. However, the evidence, as is evident from Eq. \eqref{eq:Model_evidence}, is critical in Bayesian multimodel inference and needs to be calculated with caution. A detailed discussion of the evidence calculation can be found in \cite{zhang2018effect}.
In the classical setting, a unique set of model parameters $\bm{\theta}_i$ is identified from the posterior samples using, for example, the maximum a posteriori (MAP) estimator, \begin{equation}
\tilde{\bm{\theta}}_j^{\textup{MAP}}(\bm{d}, M_j) = \underset {\bm{\theta}_j }{ \arg \max } \ p(\bm{\theta}_j | \bm{d}, M_j) = \underset {\bm{\theta}_j }{ \arg \max } \ {p(\bm{d} | \bm{\theta}_j, M_j) p(\bm{\theta}_j | M_j)}. \end{equation}
When $p(\bm{\theta}_j | M_j)$ is a noninformative prior, the MAP estimator is equivalent to the maximum likelihood estimate (MLE). Due to a lack of data, the posterior parameter probability will likely possess large variance. Rather than discarding the full uncertainty by selecting a single set of MLE or MAP {\color{black}parameters} or integrating out its variability using Bayesian model averaging \cite{sankararaman2011likelihood}, we retain the full posterior densities for each plausible model.
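For reference, extracting such a single point estimate from MCMC output could look like the short sketch below (the function names are illustrative); in this work, however, the full set of posterior samples is retained.
\begin{verbatim}
# Minimal sketch: a MAP-type point estimate taken as the retained MCMC sample
# with the largest (unnormalized) log-posterior.
import numpy as np

def map_from_mcmc(samples, log_likelihood, log_prior):
    """samples: (S, p) array of posterior draws; the log functions take one draw."""
    log_post = np.array([log_likelihood(th) + log_prior(th) for th in samples])
    return samples[np.argmax(log_post)]
\end{verbatim}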
In this work, the Bayesian multimodel inference method is generalized to address copula dependence model selection and parameter estimation. A simple bivariate example is used to illustrate the process and its performance. Consider a bivariate random vector $\bm{u} = [u_1, u_2]$ whose dependence follows the Frank copula model with parameter $\theta=3$ (denoted \texttt{Frank}(3)). Fig.\ \ref{fig:frank_bivariate} shows data sets of varying size drawn from the \texttt{Frank}(3) copula. Notice that, given only 10 data, one cannot decipher a clear dependence relation. Only after 100 data are drawn does the dependence begin to emerge and it finally becomes clear when 1000 points are drawn.
\begin{figure}
\caption{Bivariate correlated data drawn from \texttt{Frank}(3) copula model, showing 10 data, 100 data and 1000 data}
\label{fig:frank_bivariate}
\end{figure}
From these data, Bayesian multimodel inference is first used to quantify the copula {\color{black}form} uncertainty. Five copula models -- the Gaussian, Student-$t$, Clayton, Gumbel and Frank copulas -- are selected as the candidate copula {\color{black}forms}. Without informative prior information, all candidate copula models are assumed to have equal probability. The Monte Carlo method is adopted to compute the evidence from Eq. \eqref{eq:Model_evidence}. Then the posterior copula model probabilities are obtained using Eq. \eqref{eq:Bayes_model}. Fig.\ \ref{fig:copula_model_probability} shows the posterior probabilities for each candidate copula model as a function of dataset size. Notice that the model probability for the Frank copula becomes gradually larger as the data set size increases but the Bayesian multimodel inference does not select the correct Frank copula model conclusively until 1000 correlated data are collected. \begin{figure}
\caption{Posterior copula model probability as a function of dataset size}
\label{fig:copula_model_probability}
\end{figure}
Next, Bayesian inference is employed to estimate the copula parameter for each plausible candidate model. Fig.\ \ref{fig:copula_parameter} shows the posterior probability distribution for the Frank copula parameter $\theta$ for increasing data set size. Note that the posterior variance is large when the data set size is small and the estimate gradually narrows with increasing data set size. Finally, the posterior density with 1000 data converges towards a narrow distribution that includes the true value ($\theta =3$).
\begin{figure}
\caption{Posterior histogram of the Frank copula model parameter given different data set sizes.}
\label{fig:copula_parameter}
\end{figure}
This simple example illustrates the Bayesian multimodel inference process for model selection and parameter estimation of copula dependence modeling. More specifically, it illustrates the fact that inference is inherently imprecise from small data sets. When data sets are small, it is impossible to uniquely identify the copula {\color{black}form} (and the associated copula model parameters) from which the data are drawn. In the following section, we turn our attention to uncertainty in the marginal distributions.
\subsection{Uncertainty in marginal distributions} \label{sec:multimodel_UQ}
As observed in the authors' previous studies \cite{zhang2018quantification, zhang2018effect, zhang2019efficient}, uncertainty in the marginal distributions plays a critical role in uncertainty quantification from small datasets. Consider again, for simplicity, the bivariate case where the joint pdf can be expressed as: \begin{equation} f_{{\color{black}{X}}}(x_1, x_2) = c_{12}(F_1(x_1, \theta_1), F_2(x_2, \theta_2), \theta_c) \cdot f_1(x_1, \theta_1) \cdot f_2(x_2, \theta_2) \label{eq:bivariate_parameter} \end{equation}
{\color{black}where $\theta_c$ are the copula parameters.} Given this expression of the joint density, it is clear that the copula model is conditional on the marginals and their parameters, which the previous studies have shown to have very large uncertainties when data sets are small. Consequently, it is necessary to identify copula model probabilities and copula parameter probabilities for each set of inferred candidate marginals. This induces a hierarchy of probabilities that includes both the copula model and {\color{black}the} marginal model. We therefore propose a hierarchical Bayesian multimodel inference method, as illustrated in Fig.\ \ref{fig:c4_hier}. The procedure is summarized {\color{black}for each pair of variables} as follows: \begin{figure}
\caption{Hierarchy of Bayesian multimodel inference for copulas and marginals}
\label{fig:c4_hier}
\end{figure}
\begin{itemize}
\item Step 1: Marginal multimodel inference -- First identify the candidate marginal model set{\color{black}s $\bm{\mathbb{M}}^1=\{M_j^1\}, j=1,\dots,N_{d_1}$ and $\bm{\mathbb{M}}^2=\{M_j^2\}, j=1,\dots,N_{d_2}$} for each variable and compute the marginal model probabilities ${\bm{\pi}}^1=\{\pi_1^1, \pi_2^1, \dots, \pi_{N_{d_1}}^1\}$ and ${\bm{\pi}}^2=\{\pi_1^2, \pi_2^2, \dots, \pi_{N_{d_2}}^2\}$ using Eq. \eqref{eq:Bayes_model}. {\color{black}Notice that this induces a set of $N_{d_1}\times N_{d_2}$ possible marginal pairs.} Then estimate the posterior joint pdf for the marginal parameters for all plausible models, ${ p }(\bm{\theta}_j^1|\bm{d}^1,M_j^1) $, {\color{black}$j = 1,\cdots, N_{d_1}$} and ${ p }(\bm{\theta}_j^2|\bm{d}^2,M_j^2) $, {\color{black}$j = 1,\cdots, N_{d_2}$} using Eq. \eqref{eq:Bayesian_inference}.
\item Step 2: Define a finite set of marginal distributions -- Theoretically, the above process yields an infinite set of parameterized probability models. Practically, it is necessary to reduce this to a finite but statistically representative set of {\color{black}$N_{t_d}$ marginal probability model pairs}. This is achieved by randomly selecting a model family for each variable from {\color{black}$\bm{\mathbb{M}}^1$ and $\bm{\mathbb{M}}^2$} with probabilities ${\bm{\pi}}^1$ and ${\bm{\pi}}^2$ respectively, and randomly selecting the parameters of each model from the appropriate posterior joint pdf {\color{black}${ p }(\bm{\theta}_1|\bm{d}^1,M_j^1) $ and ${ p }(\bm{\theta}_2|\bm{d}^2,M_k^2) $}.
\item Step 3: Copula multimodel inference -- For each pair of marginal distributions {\color{black} $f_1(x_1|\bm{\theta}_1,M_j^1)$ and $f_2(x_2|\bm{\theta}_2,M_k^2)$}, standardize the data using {\color{black}$F_1(\bm{d}^1)$ and $F_2(\bm{d}^2)$}. Compute the posterior copula model probabilities {\color{black} ${\bm{\pi}}_{c} =\left\{ {\pi}_{c_1},\cdots,{\pi}_{c_{N_{d_c}}} \right\}$} for each candidate copula model {\color{black}$\left\{C_1, \cdots, C_{N_{d_c}} \right\}$ using Eq. \eqref{eq:Bayes_model} where $N_{d_c}$ is the number of plausible copula models for the specified marginal pair}. Next, estimate the posterior pdf for the copula parameters for each plausible copula model, $p(\bm{\theta}_{c_k} | \bm{d}, C_k)$, {\color{black}$k = 1,\dots,N_{d_c}$} using Eq. \eqref{eq:Bayesian_inference}. As in step 2, a finite set of {\color{black}$N_{t_c}$ ($N_{t_c}$ can be arbitrarily large) copulas (copula models and parameters)} are determined for each marginal pair {\color{black} $\left\{ f_1(x_1|\bm{\theta}_1,M_j^1), f_2(x_2|\bm{\theta}_2,M_k^2) \right\}$}. \item Step 4: Identify bivariate joint densities -- Combine the set of marginal densities and copula densities to define the full set of candidate joint densities {\color{black}$f_X(x_1, x_2)$}, as in Eq. \eqref{eq:bivariate_parameter}. This, however, may lead to a prohibitively large number, {\color{black}$N_{t_d} \times N_{t_c}$}, of candidate bivariate densities. In the following section, we discuss a strategy to keep this number tractable.
\end{itemize}
The result is a set of {\color{black}$N_{t_d} \times N_{t_c}$} joint distributions that are representative of the uncertainty in marginal model form, marginal parameters, {\color{black}copula} model form, and copula parameters. We now consider how to propagate this set of joint distributions through a computational model. Note that the cost of propagation depends only weakly on {\color{black}$N_{t_d} \times N_{t_c}$}, the number of joint densities in the set. That is, increasing {\color{black}$N_{t_d} \times N_{t_c}$} does not increase the number of model evaluations necessary for uncertainty propagation. Therefore, it is advantageous to make {\color{black}$N_{t_d} \times N_{t_c}$} as large as possible{\color{black}, as undersampling it will result in artificially narrow uncertainty bounds}.
\section{Uncertainty propagation with copula dependence modeling}
In the previous study \cite{zhang2018quantification}, we proposed an efficient algorithm for propagation of the imprecise probabilities characterized by a multimodel set with independent marginals. Here, we extend this algorithm to the propagation of imprecise probabilities with copula dependence modeling. For illustration, and without loss of generality, we derive here the propagation method for bivariate random variables. Its extension to higher-dimensional vectors with copula dependence, particularly vine copulas that rely on a series of bivariate copulas, follows naturally.
\subsection{Importance sampling for bivariate joint probability density} Consider the performance function $g(\bm{X}_1, \bm{X}_2)$ defining the response quantity of interest for a mathematical or physical system. The aim of uncertainty propagation is to evaluate the expectation $E(g(\bm{X}_1, \bm{X}_2))$ where $(\bm{X}_1, \bm{X}_2) \in \Omega$ is a random vector having bivariate joint probability density $p(\bm{x}_1, \bm{x}_2)$. The classical Monte Carlo estimator is computed as follows: {\color{black} \begin{equation} \mu = E_p[g(\bm{X}_1, \bm{X}_2)] = \int_{\Omega}g(\bm{x}_1, \bm{x}_2)p(\bm{x}_1, \bm{x}_2)d\mathbf{x} \approx \frac{1}{n} \sum_{i=1}^n g(\bm{x}_1^i, \bm{x}_2^i) \label{eq:bivariate_MC} \end{equation}} where $E_p[\cdot]$ is the expectation with respect to $p(\cdot)$ and $(\bm{x}_1^i, \bm{x}_2^i)$ are bivariate random samples drawn from $p(\bm{x}_1, \bm{x}_2)$. Importance sampling allows samples to be drawn from an alternate density $q(\bm{x}_1, \bm{x}_2)$ and then reweights the samples to obtain the estimator. The Monte Carlo estimator in Eq.\ (\ref{eq:bivariate_MC}) is modified as: {\color{black} \begin{equation} \begin{aligned}
\mu = E_q\left[ g(\bm{X}_1, \bm{X}_2) \frac{p(\bm{X}_1, \bm{X}_2)}{q(\bm{X}_1, \bm{X}_2)} \right] &= \int_{\Omega}g(\bm{x}_1, \bm{x}_2) \frac{p(\bm{x}_1, \bm{x}_2)} {q(\bm{x}_1, \bm{x}_2)} q(\bm{x}_1, \bm{x}_2) d\mathbf{x} \\ & \approx \frac{1}{n} \sum_{i=1}^n g(\bm{x}_1^i, \bm{x}_2^i) w(\bm{x}_1^i, \bm{x}_2^i) \label{eq:bivariate_IS} \end{aligned} \end{equation}} where $E_q[\cdot]$ denotes expectation for $(\bm{X}_1, \bm{X}_2) \sim q(\cdot)$ and the importance weights are defined as: \begin{equation} w(\bm{x}_1^i, \bm{x}_2^i) = \frac{p(\bm{x}_1^i, \bm{x}_2^i)}{q(\bm{x}_1^i, \bm{x}_2^i)}. \label{eq:weights_bi} \end{equation}
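A minimal Python sketch of this reweighting step is given below; the densities $p$ and $q$ and the response function $g$ used in the usage example are illustrative placeholders.
\begin{verbatim}
# Minimal sketch of the importance-sampling estimator: draw samples from q,
# reweight by w = p/q, and average g*w.  All densities below are placeholders.
import numpy as np
from scipy.stats import multivariate_normal as mvn

def importance_sampling_mean(g, p_pdf, q_pdf, q_samples):
    x1, x2 = q_samples[:, 0], q_samples[:, 1]
    w = p_pdf(x1, x2) / q_pdf(x1, x2)      # importance weights
    return np.mean(g(x1, x2) * w)

rng = np.random.default_rng(1)
q = mvn(mean=[0, 0], cov=[[2, 0], [0, 2]])         # broad sampling density
p = mvn(mean=[0, 0], cov=[[1, 0.8], [0.8, 1]])     # target joint density
xs = q.rvs(size=50000, random_state=rng)
est = importance_sampling_mean(lambda a, b: a**2 + b**2,
                               lambda a, b: p.pdf(np.column_stack([a, b])),
                               lambda a, b: q.pdf(np.column_stack([a, b])),
                               xs)
print(est)   # should be close to E_p[x1^2 + x2^2] = 2
\end{verbatim}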
{\color{black} \subsection{Optimal importance density for bivariate joint probability density with copula dependence: Derivation} }
The efficient propagation of multimodel imprecise probabilities is performed by identifying an ``optimal'' importance sampling density, propagating this optimal density, and reweighting the samples according to each distribution in the multimodel set. The optimal sampling density is derived as the distribution that ``best'' matches the multimodel distribution set according to some metric. In prior work, the authors \cite{zhang2018quantification} derived an explicit analytical optimal importance sampling density, given an ensemble of target marginal probability densities, that minimizes the total expected mean square difference {\color{black}{$\mathcal{E}$, defined through the discrepancies $\mathcal{M}(M_j \parallel Q)$ between the models in the set $\mathbb{M}=\{M_j\}, j=1,\dots,N_d$ and the importance sampling density $Q=q(\bm{x})$, given by: \begin{equation}
\mathcal{E} = \sum_{j=1}^{{N_d}}E_{\theta}\left[{\mathcal{M}}(M_j \parallel Q)\right] = {E_{\theta} }\left[ \sum _{ j=1 }^{ {N_d}}\frac{1}{2} \int{ { \left( { p_j(\bm{x} | \bm{\theta} ) } - { q(\bm{x}) } \right) ^{ 2 }} d\bm{x}} \right], \label{eq: total_EMSD} \end{equation} }}
In other words, the following optimization problem is solved: \begin{equation} \begin{aligned} & \underset{q}{\text{minimize}} & &{\mathcal{L}}(q)=E_{\theta}\left [ \int{{{\mathcal{F}} }(\bm{x}, \bm{\theta}, q(\bm{x}))}d\bm{x} \right] \\ & \text{subject to} & &{\mathcal{I}}(q) = \int{q(\bm{x})d\bm{x}}-1=0 \label{eq: opt_EMSD} \end{aligned} \end{equation} where the action functional $\mathcal{F}(\cdot)$ is \textcolor{black}{the total square differences}: \begin{equation}
{\mathcal{F}(\bm{x}, \bm{\theta}, q(\bm{x}))}={ \frac { 1 }{ 2 } \sum_{j=1}^{{N_d}}{ \left( { p_j(\bm{x} | \bm{\theta}) } - { q(\bm{x}) } \right) ^{ 2 }} } \label{eq:MSD_funcitonal} \end{equation} \textcolor{black}{ and $E_{\theta}$ is the expectation with respect to the posterior probability of the model parameters $\bm{\theta}$.} ${\mathcal{I}}(q)$ ensures that $q(\bm{x})$ is a valid pdf. Solving this optimization problem yields a closed-form solution given by the convex mixture model \cite{zhang2018quantification} \begin{equation}
{q}^*(\bm{x}) ={\color{black}\frac{1}{{N_d}} \sum_{j=1}^{{{N_d}}}}E_{\theta}\left[{p_j({\bm{x} | \bm{\theta}})}\right] \label{eq: opt_MSD2} \end{equation} When the posterior model probabilities are not equal, this solution generalizes as \begin{equation}
{q}^*(\bm{x}) = \sum_{j=1}^{{\color{black}{N_d}}}{\pi}_j E_{\theta} \left [ { p_j(\bm{x}|\bm{\theta})} \right] \label{eq: opt_MSD3} \end{equation} where each term is weighted by the corresponding posterior model probability $ {\color{black}{\pi}_j}$ computed by Eq.\ \eqref{eq:Bayes_model}. The interested reader can find more details in \cite{zhang2018quantification}.
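A minimal sketch of how the mixture density above can be evaluated numerically is given below, assuming (as an illustrative implementation choice) that the posterior parameter samples for each candidate family have been wrapped as frozen \texttt{scipy} distributions.
\begin{verbatim}
# Minimal sketch: the optimal sampling density as a posterior-weighted mixture
# of candidate families, each averaged over its posterior parameter samples.
import numpy as np

def optimal_density(x, model_probs, posterior_dists):
    """
    model_probs:     list of posterior model probabilities pi_j (sums to 1)
    posterior_dists: list (one entry per model) of lists of frozen scipy
                     distributions, one per retained posterior parameter draw
    """
    q = np.zeros_like(np.asarray(x, dtype=float))
    for pi_j, dists_j in zip(model_probs, posterior_dists):
        q += pi_j * np.mean([d.pdf(x) for d in dists_j], axis=0)  # E_theta[p_j(x|theta)]
    return q
\end{verbatim}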
It is straightforward to generalize this solution from the one-dimensional probability density to multivariate joint probability densities. If the bivariate joint probability density has independent marginals, the optimal sampling density is expressed as: \begin{equation}
{q}^*(\bm{x}) =\frac{1}{N_{d_1} N_{d_2}}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} E_{\theta}\left[{p_{ij}({\bm{x} | \bm{\theta}})}\right] \label{eq: opt_MSD_2d} \end{equation}
and the bivariate joint probability density ${p_{ij}({\bm{x} | \bm{\theta}})}$ can be decomposed by marginal distribution $f_1^i (\bm{x}_1|\bm{\theta}_1) $ and $f_2^j(\bm{x}_2|\bm{\theta}_2)$ as follows: \begin{equation}
{p_{ij}({\bm{x} | \bm{\theta}})} = f_1^i (\bm{x}_1|\bm{\theta}_1) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2) \label{eq: opt_MSD_2d_1} \end{equation} where $N_{d_1}$ and $N_{d_2}$ are the numbers of candidate probability models for the two marginal densities, respectively, and $N_d= N_{d_1}\cdot N_{d_2}$ is the total number of candidate probability models for the bivariate joint probability density. Thus, the optimal sampling density for an independent bivariate joint density can be expanded in terms of the marginals as: \begin{equation} \begin{aligned}
{q}^*(\bm{x}) &= \frac{1}{N_{d_1} N_{d_2}}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} E_{\theta}\left[f_1^i (\bm{x}_1|\bm{\theta}_1) f_2^j(\bm{x}_2|\bm{\theta}_2) \right] \\
& = \frac{1}{N_{d_1} N_{d_2}}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} E_{\theta_1}\left[f_1^i (\bm{x}_1|\bm{\theta}_1) \right] E_{\theta_2}\left[f_2^j(\bm{x}_2|\bm{\theta}_2) \right] \\
&= \frac{1}{N_{d_1} N_{d_2}}\sum_{i=1}^{N_{d_1}}E_{\theta_1}\left[f_1^i (\bm{x}_1|\bm{\theta}_1) \right] \sum_{j=1}^{N_{d_2}} E_{\theta_2}\left[f_2^j(\bm{x}_2|\bm{\theta}_2) \right] \label{eq: opt_MSD_2d_2} \end{aligned} \end{equation} Again, it is straightforward to show that this solution generalizes for unequal model probabilities as: \begin{equation}
{\color{black}{q}^*(\bm{x}) =\sum_{i=1}^{N_{d_1}}\pi_i^1 E_{\theta_1}\left[f_1^i (\bm{x}_1|\bm{\theta}_1) \right] \sum_{j=1}^{N_{d_2}} \pi_j^2 E_{\theta_2}\left[f_2^j(\bm{x}_2|\bm{\theta}_2) \right]} \label{eq: opt_MSD_2d_3} \end{equation}
where {\color{black}$\pi_i^1$}, associated with marginal density {\color{black}$f_1^i (\bm{x}_1|\bm{\theta}_1)$}, is the posterior model probability for model $M_i^1$ satisfying {\color{black}$\sum_{i=1}^{N_{d_1}}\pi_i^1 =1$}, and {\color{black}$\pi_j^2$}, associated with marginal density {\color{black}$f_2^j (\bm{x}_2|\bm{\theta}_2)$}, is the posterior model probability for model $M_j^2$ satisfying {\color{black}$\sum_{j=1}^{N_{d_2}} \pi_j^2=1$}.
If the bivariate joint probability density has copula dependence, with copula density {\color{black}$c_{12}^k(F_1(\bm{x}_1|\bm{\theta}_1), F_2(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c)$}, we can express the bivariate joint probability density as: \begin{equation}
{p_{ij}^k({\bm{x} | \bm{\theta}})} ={\color{black}c_{12}^k(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c)} \cdot f_1^i (\bm{x}_1|\bm{\theta}_1) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2) \label{eq: opt_MSD_2d_copula} \end{equation} where $k = 1,...,N_{d_c}$ indexes the candidate copula models. Similarly, we can derive the optimal sampling density for dependent bivariate joint probability density with copula dependence as follows. {\color{black}We start by applying the joint density in Eq.\ \eqref{eq: opt_MSD_2d_copula} to the optimal density in Eq.\ \eqref{eq: opt_MSD_2d} where we require an additional summation over all $N_{d_c}$ candidate copula models: \begin{equation}
{q}_c^*(\bm{x}) = \frac{1}{N_{d_1} N_{d_2} N_{d_c}}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} \sum_{k=1}^{N_{d_c}} E_{\theta}\left[c_{12}^k(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c) \cdot f_1^i (\bm{x}_1|\bm{\theta}_1) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2) \right]. \label{eq: opt_MSD_2d_copula1} \end{equation} Next, let us apply the law of total expectation as: \begin{equation}
E[X] = E[E[X|Y]] = \int_Y E[X|Y=y]p(y)dy \end{equation} where \begin{equation}
X = c_{12}^k(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c) \cdot f_1^i (\bm{x}_1|\bm{\theta}_1) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2) \end{equation} and $Y=y$ is the condition that $\bm{\theta}_1$ and $\bm{\theta}_2$ take specific values, i.e.\ \begin{equation}
\bm{\theta}_1=\theta_n, \text{ and } \bm{\theta}_2=\theta_m. \end{equation} Applying the law of total expectation, the summand in Eq.\ \eqref{eq: opt_MSD_2d_copula1} can be expressed as \begin{multline}
\int_{\bm{\theta}_1} \int_{\bm{\theta}_2} E_{\theta}\left[c_{12}^k(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c, \bm{\theta}_1=\theta_n, \bm{\theta}_2=\theta_m) \cdot f_1^i (\bm{x}_1|\bm{\theta}_1 = \theta_n) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2 = \theta_m) \right] \cdot \\ p(\bm{\theta}_1 = \theta_n, \bm{\theta}_2 = \theta_m) d\theta_n d\theta_m
\label{eqn:exp1-1}. \end{multline}
Recognizing that the first term is conditioned on $\bm{\theta}_1$ and $\bm{\theta_2}$ taking specific values, the expectation can be written entirely with respect to $\bm{\theta}_c$ and the marginal densities can be taken outside the expectation. We further recognize that $p(\bm{\theta_1} = \theta_n, \bm{\theta}_2 = \theta_m) = { p }(\bm{\theta}_1 = \theta_n|\bm{d},M_i) \cdot { p }(\bm{\theta}_2 = \theta_m|\bm{d},M_j)$ because $\bm{\theta}_1$ and $\bm{\theta}_2$ are independent and inferred from the data for each variable. Hence, Eq.\ \eqref{eqn:exp1-1} becomes: \begin{multline}
\int_{\bm{\theta}_1} \int_{\bm{\theta}_2} E_{\theta_c}\left[c_{12}^k(F_1(\bm{x}_1|\bm{\theta}_1), F_2(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c, \bm{\theta}_1=\theta_n, \bm{\theta}_2=\theta_m)\right] \cdot f_1^i (\bm{x}_1|\bm{\theta}_1 = \theta_n) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2 = \theta_m) \cdot \\ { p }(\bm{\theta}_1 = \theta_n|\bm{d},M_i) \cdot { p }(\bm{\theta}_2 = \theta_m|\bm{d},M_j) d\theta_n d\theta_m.
\label{eqn:exp2-1} \end{multline} Plugging this into Eq.\ \eqref{eq: opt_MSD_2d_copula1} and letting \begin{equation}
\hat{c}_{12}^{mn}(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2)) = \dfrac{1}{N_{d_c}}\sum_{k=1}^{N_{d_c}} E_{\theta_c}\left[c_{12}^k(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c, \bm{\theta}_1=\theta_n, \bm{\theta}_2=\theta_m)\right]
\label{eqn:exp3-1} \end{equation} be the expected conditional copula for marginal parameter pair $(\bm{\theta}_1=\theta_n, \bm{\theta}_2=\theta_m)$ gives: \begin{multline}
{q}_c^*(\bm{x}) = \frac{1}{N_{d_1} N_{d_2}}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} \int_{\bm{\theta}_1} \int_{\bm{\theta}_2} \hat{c}_{12}^{mn}(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2)) \cdot f_1^i (\bm{x}_1|\bm{\theta}_1 = \theta_n) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2 = \theta_m) \cdot \\ { p }(\bm{\theta}_1 = \theta_n|\bm{d},M_i) \cdot { p }(\bm{\theta}_2 = \theta_m|\bm{d},M_j) d\theta_n d\theta_m. \label{eqn:exp4-1} \end{multline}
Next, recognizing that we likely cannot know ${ p }(\bm{\theta}_1 = \theta_n|\bm{d},M_i)$ and ${ p }(\bm{\theta}_2 = \theta_m|\bm{d},M_j)$ explicitly because we do not have the parameter posterior densities in closed form (instead, we have sampled them by MCMC), we rely on Monte Carlo estimation of the integrals over $\theta_n$ and $\theta_m$ with $N_{n}\times N_m\to\infty$ samples, where $\theta_n$ and $\theta_m$ are drawn randomly from the posterior parameter densities (i.e.\ from the MCMC samples), allowing us to express the optimal density as \begin{multline}
{q}_c^*(\bm{x}) = \frac{1}{N_{d_1} N_{d_2} N_n N_m}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} \sum_{n=1}^{N_n} \sum_{m=1}^{N_m} \hat{c}_{12}^{mn}(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2)) \cdot f_1^i (\bm{x}_1|\bm{\theta}_1 = \theta_n) \cdot f_2^j(\bm{x}_2|\bm{\theta}_2 = \theta_m). \label{eqn:exp5} \end{multline}
The optimal sampling density in Eq.\ \eqref{eqn:exp5} can be generalized to account for the posterior model probabilities as follows: \begin{equation}
{q}_c^*(\bm{x}) = \frac{1}{N_{n}N_{m}}\sum_{i=1}^{N_{d_1}} \sum_{j=1}^{N_{d_2}} \sum_{n=1}^{N_n} \sum_{m=1}^{N_m} \hat{c}_{12}^{mn}(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2)) \cdot \pi_i^1 f_1^i (\bm{x}_1|\bm{\theta}_1 = \theta_n) \cdot \pi_j^2 f_2^j(\bm{x}_2|\bm{\theta}_2 = \theta_m) \label{eqn:exp6} \end{equation}
where the expected conditional copula $\hat{c}_{12}^{mn}(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2))$ in Eq.\ \eqref{eqn:exp3-1} is replaced by: \begin{equation}
\hat{c}_{12}^{mn}(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2)) = \sum_{k=1}^{N_{d_c}} \pi_c^{k,mn}E_{\theta_c}\left[c_{12}^k(F_1^i(\bm{x}_1|\bm{\theta}_1), F_2^j(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c, \bm{\theta}_1=\theta_n, \bm{\theta}_2=\theta_m)\right]
\label{eqn:exp7} \end{equation} where $\pi_c^{k,mn}$ is the posterior copula model probability conditioned on $\bm{\theta}_1=\theta_n$ and $\bm{\theta}_2=\theta_m$. \\
{\color{black} \subsection{Optimal importance density for bivariate joint probability density with copula dependence: Implementation}
In the derived form, the optimal sampling density in Eqs.\ \eqref{eqn:exp6} and \eqref{eqn:exp7} is difficult to implement, involving several nested loops. For every pair of marginals $\{f_1^i(\cdot),f_2^j(\cdot)\}$, we need to randomly sample $N_n$ and $N_m$ samples respectively from the parameter densities using MCMC. Then, for each pair of the $N_n\times N_m$ model parameters, we need $N_{\theta_c}$ samples of the copula parameters for each of the $N_{d_c}$ candidate copula models for a total computational complexity of $N_{d_1}\times N_{d_2} \times N_n \times N_m \times N_{d_c} \times N_{\theta_c}$. Here, we propose a Monte Carlo sampling approach to reduce the complexity of this calculation.
This is performed by first populating the marginal sets. That is, we perform the multimodel selection process for the marginal distributions to obtain $\mathbb{M}^1$ and $\mathbb{M}^2$ and the model probabilities $\bm{\pi}^1$ and $\bm{\pi}^2$. Next, we perform Bayesian parameter estimation for each of the marginals in $\mathbb{M}^1$ and $\mathbb{M}^2$, which provides a set of $N_n$ and $N_m$ parameter values following the joint parameter distributions of each model $M_i^1$ and $M_j^2$, respectively. Next, instead of combining all combinations of marginals and parameters ($N_{d_1}\times N_{d_2} \times N_n \times N_m$), we set a feasible value $N_{t_d}$ of total marginal combinations to be considered. Note that while the total number of combinations is likely to be in the millions, e.g.\ $4\times 4\times 1000\times 1000 = 16,000,000$, we generally select $N_{t_d}\approx 1,000$. This set of $N_{t_d}$ probability models is selected by randomly drawing marginals from $\mathbb{M}^1$ and $\mathbb{M}^2$ with probabilities $\bm{\pi}^1$ and $\bm{\pi}^2$ and then randomly drawing their respective parameters from the MCMC samples for each marginal.
This first simplification reduces the estimator in Eq.\ \eqref{eqn:exp6} to the following form: \begin{equation}
q_c^*(\bm{x}) = \dfrac{1}{N_{t_d}}\sum_{l=1}^{N_{t_d}} \hat{c}_{12}^l(F_1^l(\bm{x}_1|\bm{\theta}_1), F_2^l(\bm{x}_2|\bm{\theta}_2))\cdot f_1^l(\bm{x}_1|\bm{\theta}_1=\theta_1^l) \cdot f_2^l(\bm{x}_2|\bm{\theta}_2=\theta_2^l)
\label{eqn:exp8} \end{equation} where $l$ is a single index associated with a pair of marginals randomly selected according to their model probabilities as well as random parameters for each of these marginals selected from their joint posterior pdf.
For each of the $N_{t_d}$ marginal pairs, we perform copula model selection to obtain the copula model probabilities $\pi_c^{l}$ and then, again perform MCMC to obtain samples of the copula parameters following their posterior distribution. To estimate the expected conditional copula, we again reduce the samples from $N_{d_c}\times N_{\theta_c}$, which might be on the order of 10,000, to a smaller number $N_{t_c}$ ($\approx 500$). We estimate Eq.\ \eqref{eqn:exp7} by randomly drawing $N_{t_c}$ copula models according to $\pi_c^{l}$ and randomly drawing the parameter values from the MCMC samples for that model obtained during Bayesian inference. Procedurally, Eq.\ \eqref{eqn:exp7} is re-expressed in the following form for use in Eq.\ \eqref{eqn:exp8}: \begin{equation}
\hat{c}_{12}^{l}(F_1^l(\bm{x}_1|\bm{\theta}_1), F_2^l(\bm{x}_2|\bm{\theta}_2)) = \dfrac{1}{N_{t_c}}\sum_{k=1}^{N_{t_c}} c_{12}^k(F_1^l(\bm{x}_1|\bm{\theta}_1), F_2^l(\bm{x}_2|\bm{\theta}_2) |\bm{\theta}_c^k, \bm{\theta}_1=\theta_1^l, \bm{\theta}_2=\theta_2^l)
\label{eqn:exp9} \end{equation} where the superscript $k$ in $c_{12}^k$ denotes that the form of the model for the $k^{th}$ copula is random and follows the model probabilities $\pi_c^l$, while superscript $k$ in $\bm{\theta}_c^k$ denotes that the copula parameters are randomly drawn from the posterior parameter density associated with copula model $c_{12}^k(\cdot)$.
Eqs.\ \eqref{eqn:exp8} and \eqref{eqn:exp9} are then used in practice for optimal sampling density estimation. Overall, this reduces the complexity of the optimal sampling density estimation from $N_{d_1}\times N_{d_2} \times N_n \times N_m \times N_{d_c} \times N_{\theta_c} \sim \mathcal{O}(10^{11}-10^{12})$ to $N_{t_d}\times N_{t_c} \sim \mathcal{O}(10^5-10^6)$, while retaining a statistically representative set of joint probability models from which to estimate the optimal density.
We further emphasize here that calculation of the optimal sampling density is generally much less expensive than evaluation of the computational model through which uncertainties are being propagated. Nonetheless, the optimal sampling density must be called for every sample re-weighting, which can lead to additional computational burden. One simple way to alleviate this burden is to compute the optimal joint density once via the approach described above and develop an inexpensive surrogate or lookup table to call it rapidly.
}
The implementation procedure for copula-based optimal sampling density estimation is summarized as Algorithm 1.
\begin{algorithm} {\color{black} \caption{Copula-based optimal sampling density} \begin{algorithmic}[1] \label{algo:copula} \State Identify the marginal models, $\mathbb{M}^1$ and $\mathbb{M}^2$, and their model probabilities, $\bm{\pi}^1$ and $\bm{\pi}^2$, using Bayesian multimodel inference.
\State Perform Bayesian parameter estimation using MCMC to obtain sample parameters following the posterior parameter density, ${ p }(\bm{\theta}_i|\bm{d},M_i)$ for each marginal model
\State Randomly select a pair of marginals $\left\{ f_1^i(\bm{x}_1 | \bm{\theta}_1 = \theta_n), f_2^i(\bm{x}_2 | \bm{\theta}_2 = \theta_m) \right\}$ by drawing the marginal models with probabilities $\bm{\pi}^1$ and $\bm{\pi}^2$ and randomly drawing the parameters from the MCMC samples of the posterior parameter density. \label{step:2} \State Identify the candidate copula models and their model probabilities $\pi_c$ for the specific marginal pair using Bayesian multimodel inference. \State Perform Bayesian parameter estimation using MCMC to obtain sample parameters following the posterior parameter density for each copula model \State Randomly draw $N_{t_c}$ copula models according to their model probabilities $\pi_c$ and their associated parameters from the MCMC samples for the posterior parameter density. \State Estimate the expected conditional copula $\hat{c}_{12}^{l}$ according to Eq. \eqref{eqn:exp9}
\State Determine the expected joint density by multiplying the marginals and copula. \label{step:5} \State Repeat Step \ref{step:2} - \ref{step:5} for a large number, $N_{t_d}$, of marginal pairs. \State Determine the copula-based optimal sampling density $q_c^*(\bm{x})$ by averaging the $N_{t_d}$ joint densities as shown in Eq.\ \eqref{eqn:exp8}. \State (Optional) Create a surrogate optimal sampling density or lookup table to expedite sample re-weighting.
\end{algorithmic} } \end{algorithm}
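A condensed Python sketch of how Eqs.\ \eqref{eqn:exp8} and \eqref{eqn:exp9} might be evaluated is given below; the data structure \texttt{pairs}, holding the randomly selected marginal pairs and copula densities, is hypothetical and simply mirrors Steps 3--6 of Algorithm 1.
\begin{verbatim}
# Minimal sketch of Eqs. (exp8)-(exp9): estimate the copula-based optimal
# sampling density from N_td randomly selected marginal pairs, each carrying
# N_tc randomly selected copula densities.  "pairs" is a hypothetical structure.
import numpy as np

def copula_optimal_density(x1, x2, pairs):
    """
    pairs: list of length N_td; each entry is a dict with
      'f1', 'f2' : frozen scipy marginals (model + posterior parameter draw)
      'copulas'  : list of N_tc callables c(u1, u2) -> copula density
    """
    q = 0.0
    for pr in pairs:
        u1, u2 = pr['f1'].cdf(x1), pr['f2'].cdf(x2)
        c_hat = np.mean([c(u1, u2) for c in pr['copulas']], axis=0)   # expected copula
        q += c_hat * pr['f1'].pdf(x1) * pr['f2'].pdf(x2)              # one summand
    return q / len(pairs)
\end{verbatim}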
\subsection{Propagation of imprecise probabilities with copula dependence modeling} With the constituents outlined in the previous section, the importance sampling reweighting approach for imprecise uncertainty propagation with copula dependence is summarized here and a flowchart is shown in Fig.\ \ref{fig:c4_flowchart}. \begin{figure}
\caption{Flowchart for propagation of imprecise probabilities with copula-based dependence modeling}
\label{fig:c4_flowchart}
\end{figure}
\begin{itemize} \item Step 1: Identify the marginal and copula sets -- Given a small data set, the hierarchical Bayesian multimodel inference outlined in Section \ref{sec:multimodel_UQ} is used to identify the candidate sets of marginal distributions and copulas. We first identify candidate marginal forms and associated model probabilities, and construct combinations of marginals by randomly drawing $N_{t_d}$ marginal pairs. For each pair of marginals, identify copula forms and estimate the copula model probabilities and copula parameters. \item Step 2: Determine the optimal sampling density -- Combine all the candidate marginals and associated copula models from Step 1. Solving the optimization problem yields the optimal sampling density $q_c^*(\bm{x})$, shown in Eq. \eqref{eqn:exp6}, which is solved in practice as described in Sec. 4.3 (Eqs.\ \eqref{eqn:exp8} and \eqref{eqn:exp9}), i.e.\ according to Algorithm 1. \item Step 3: Uncertainty propagation -- Uncertainty associated with copula-based dependence modeling is propagated using importance sampling with the optimal sampling density $q_c^*(\bm{x})$. Samples are drawn from $q_c^*(\bm{x})$ using MCMC sampling and are reweighted for each model according to the importance weights $w(\bm{x}) = p(\bm{x})/q_c^*(\bm{x})$. \item Step 4: Analyze output -- Quantify the distribution of the statistical response quantity of interest. \end{itemize} }
\section{Application to probabilistic prediction of unidirectional composite lamina properties}
This section applies the proposed methodology to understand the influence of the constituent material properties on the out-of-plane elastic properties (Young's modulus) of a unidirectional composite lamina.
\subsection{Problem description}
Fiber reinforced composite materials are popular and widely used in many engineering fields because of their attractive properties, for example, high stiffness and strength combined with low weight. In order to evaluate the performance of a composite part, the accurate prediction of its mechanical properties in the layup is important {\color{black}\cite{zhangMAMS2019}}. Several numerical and experimental methods have been proposed to determine the mechanical properties of unidirectional lamina based on the elastic properties of the constituent materials (fibers and matrix)\cite{daniel1994engineering, younes2012comparative}. In this work, the finite element method (FEM) with a representative volume element (RVE) is used to predict the out-of-plane elastic properties of a unidirectional composite lamina given the constituent (fiber and matrix) material properties. \begin{figure}
\caption{Unidirectional fiber reinforced composite (a) Hexagonal RVE unit and (b) RVE FEM model}
\label{fig:RVE}
\end{figure}
Typically, unidirectional composites are considered as transversely isotropic materials composed of two phases: a fiber reinforcement phase and a matrix phase, as shown in Fig. \ref{fig:RVE} (a) for a hexagonal packing configuration. Commonly, the reinforced-fiber phase for traditional materials is modeled as isotropic (e.g. glass fibers) or orthotropic (e.g. carbon fiber) and the matrix phase is typically composed of an isotropic material (e.g. epoxy). The overall mechanical properties of transversely isotropic unidirectional fiber reinforced lamina with a hexagonal packing geometry are determined by five independent engineering constants which are given by the following compliance matrix: \begin{equation} C = \left[ \begin{matrix} 1/E_{11} & -\nu_{12}/E_{11} & -\nu_{12}/E_{11} & 0 & 0 & 0\\ -\nu_{12}/E_{11} & 1/E_{22} & -\nu_{23}/E_{22} & 0 & 0 & 0 \\ -\nu_{12}/E_{11} & -\nu_{23}/E_{22} &1/E_{22} &0 &0 &0 \\ 0 & 0 & 0 & 1/G_{23} & 0 & 0 \\ 0 & 0 & 0 & 0& 1/G_{12} & 0 \\ 0 & 0 & 0 & 0& 0 & 1/G_{12} \\
\end{matrix} \right]
\label{eq: C} \end{equation} where $E_{11}$ and $E_{22}$ are the longitudinal and transverse Young's moduli respectively, $G_{12}$ and $G_{23}$ are the longitudinal and transverse shear moduli, $\nu_{12}$ is the major Poisson's ratio and $\nu_{23}$ is the minor Poisson's ratio. The transverse shear modulus is determined from the minor Poisson's ratio $\nu_{23}$ and elastic modulus $E_{22}$ as \cite{hashin1983analysis}: \begin{equation} G_{23} = \frac{E_{22}}{2(1+\nu_{23})} \end{equation}
Experimental determination of the in-plane lamina properties is typically straightforward and generally provides accurate values for these properties. However, the out-of-plane lamina properties are difficult to obtain experimentally \cite{king1992micromechanics, gipple1994measurement, soden2004lamina}, and consequently numerical prediction becomes an attractive alternative for these lamina properties. In this example, we focus on the determination of the elastic modulus $E_{22}$, which is an independent out-of-plane lamina property.
The overall mechanical properties in Eq. \eqref{eq: C} depend on the constituent properties (fibers and matrix). Table \ref{tab:c4_mean_cov} shows the four independent constituent material properties and the fiber volume fraction, which are needed to define the lamina properties for the isotropic resin and fiber materials.
\begin{table}[!ht] \footnotesize \centering \caption{Constituent material properties of E-Glass fiber/LY556 Polyester Resin composites} \label{tab:c4_mean_cov} \begin{tabular}{@{}cccc@{}} \toprule Material property & Physical meaning & Mean value & Coefficient of variation \\ \midrule $V_f$ & Fiber volume fraction & 0.6 & 0.05 \\ $E_m$ & Matrix Young's modulus & 3.375 & 0.05 \\ $\nu_m$ & Matrix Poisson's ratio & 0.35 & 0.05 \\ $E_{1f}$ & Fiber Young's modulus along the 1 direction & 73.01 & 0.05 \\ $\nu_{12f}$ & Fiber Poisson's ratio along the 1-2 direction & 0.228 & 0.05 \\ \bottomrule \end{tabular} \end{table}
In this work, we study a common composite lamina fabricated from E-glass fibers and an LY556 polyester resin matrix. The finite element method is employed to construct a three-dimensional RVE with two symmetry planes in the $x-y$ and $x-z$ directions and periodic boundary conditions, as shown in Fig. \ref{fig:RVE} (b). The model has a total of 22750 nodes and 20448 C3D8R solid elements and is solved using the commercial solver Abaqus.
\subsection{Identification of probabilistic input model} From engineering experience, the five inputs in Table \ref{tab:c4_mean_cov} may be correlated or dependent and thus one task is to identify the dependence relationship among these five random variables from data. Commonly, the matrix properties $E_m$ and $\nu_m$ are considered to be dependent and the fiber properties $E_{1f}$ and $\nu_{12f}$ are dependent. However, fiber and matrix properties are independent of one another and the fiber volume fraction is often assumed independent of constituent properties. Therefore, the five probability inputs are composed of two bivariate dependent models and one independent variable: $\left\{E_m, \nu_m \right\}$, $\left\{E_{1f}, \nu_{12f} \right\}$ and $\left\{ V_f \right\}$. \begin{figure}
\caption{Dependent probabilistic input model}
\label{fig:de_input}
\end{figure}
Although this type of composite material has been used extensively in many engineering applications, statistical data for its constituent properties are very limited. Typically, only nominal design values are provided without adequate guidance on their variability. The nominal values in Table \ref{tab:c4_mean_cov} were compiled from the literature for each constituent property and candidate probability distributions were identified for each property. The interested reader can find an extensive list of references for the relevant data and literature in the authors' recent work \cite{zhangigsa2019}.
Due to a lack of statistical data for characterizing the constituent material properties, it is difficult to assign accurate and objective probabilistic models for the properties, specifically the dependence model for the constituent properties. For reference purposes, we assume normal distributions with nominal mean value in Table \ref{tab:c4_mean_cov} and 5\% coefficient of variation (COV) as the ``true" marginal distributions for each fiber and matrix property. The matrix properties $\left\{E_m, \nu_m \right\}$ and the fiber properties $\left\{E_{1f}, \nu_{12f} \right\}$ are assumed to be strongly correlated with a ``true" Frank copula with parameter $\theta=-10$. Fig. \ref{fig:de_input} shows the ``true" probabilistic input model, which includes the marginal histogram and dependence relationship between each of these input variables. It can be observed that $\left\{E_m, \nu_m \right\}$ and $\left\{E_{1f}, \nu_{12f} \right\}$ have a strong dependence that follows the true Frank(-10) copula model. We assume this probabilistic model to be the truth and generate 20 random data, as shown in Fig. \ref{fig:c4_20data} for the joint matrix and fiber properties. These serve as the initial data from which uncertainty needs to be quantified and propagated. Clearly, a single bivariate dependence model cannot be precisely identified from these data -- although it is clear that the properties are dependent. \begin{figure}
\caption{20 randomly generated constituent material properties that serve as the initial dataset (a) fiber property and (b) matrix property}
\label{fig:c4_20data}
\end{figure}
\subsection{Probabilistic prediction of composite properties} \label{sec:composite3} The multimodel inference approach proposed herein is applied to this problem, given the limited data characterizing the constituent material properties and their clear dependencies. We first identify a set of candidate marginal probability models, which include the Gaussian, Gamma, Lognormal and Weibull distributions. The Bayesian multimodel approach in Eq. \eqref{eq:Bayes_model} is used to estimate the posterior model probabilities and the corresponding model parameter uncertainties are estimated by Bayesian inference using MCMC sampling. Combining these model-form and model parameter uncertainties, we therefore obtain an ensemble of plausible probability densities for the five input variables shown in Fig. \ref{fig:c4_marginal1}.
In this example, we identify 500 candidate densities for each marginal such that the total number of combinations of these marginal distributions is $500^5 = 3.125\times 10^{13}$, which is computationally prohibitive. Instead, a representative set of 1000 marginal combinations is compiled by Latin hypercube sampling. To evaluate the elastic modulus $E_{22}$, 5,000 random samples are drawn from the optimal sampling density, shown as the thick black curves in Fig. \ref{fig:c4_marginal1}, for each material property and computational model evaluations are performed using FEM. {\color{black}Hence, the computational advantage of the approach lies in the vastly reduced number of model evaluations needed to propagate the full model set. In this case, we need only 5,000 simulations where conventional multi-loop Monte Carlo approaches require on the order of $5,000^3$ simulations to cover the full set of copulas, marginals, and marginal parameters. For the composite model used herein, the 5,000 simulations take approximately 28 cpu-hours to complete, making the conventional strategy infeasible.}
\caption{Multiple candidate probability densities for marginals (a) $E_m$, (b) $\nu_m$, (c) $V_f$, (d) $E_{1f}$ and (e) $\nu_{12f}$}
\label{fig:c4_marginal1}
\end{figure}
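The key to propagating this large ensemble with a single set of 5,000 model evaluations is importance-sampling reweighting: the FEM model is evaluated once on samples drawn from one sampling density $q$, and the outputs are then reweighted by $p_k/q$ for every candidate joint density $p_k$. The sketch below illustrates only this reweighting step; the construction of the optimal sampling density itself (Eq. \eqref{eqn:exp8}) is not reproduced, and the function and variable names are illustrative.
\begin{verbatim}
# Minimal sketch of importance-sampling reweighting: evaluate the FEM model
# once on samples drawn from a single sampling density q, then re-weight the
# outputs to estimate the E22 CDF under each candidate joint density p_k.
import numpy as np

def reweighted_cdf(y_model, q_pdf_vals, p_pdf_vals, y_grid):
    """Weighted empirical CDF of model outputs under a candidate density p_k.

    y_model    : model outputs at samples x_i drawn from q       (n,)
    q_pdf_vals : q(x_i), sampling density at those samples       (n,)
    p_pdf_vals : p_k(x_i), candidate joint density at samples    (n,)
    """
    w = p_pdf_vals / q_pdf_vals
    w = w / w.sum()
    return np.asarray([(w * (y_model <= y)).sum() for y in y_grid])

# One curve of the CDF "cloud" per candidate joint density p_k; the 5,000
# FEM evaluations stored in y_model are shared by every candidate.
\end{verbatim}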
If the multivariate input is assumed independent, the probabilistic prediction of the overall material property $E_{22}$ is easily obtained by taking the product of the marginals. {\color{black} Fig. \ref{fig:c4_marginal2} shows the cloud of candidate empirical CDFs for $E_{22}$ based on multimodel inference from the 20 data, assuming either independent marginals or Gaussian correlation with $\rho=0.8$. The ``true" CDF in Fig. \ref{fig:c4_marginal2} (with variable dependence) is shown in black. Note that the collection of CDFs compiled under the independence assumption (blue) and Gaussian correlation (green), as well as the true estimate with dependence (black), seem to overlap -- suggesting that perhaps the independence assumption is sufficient to bound the elastic properties. However, as we show next, this result underestimates the uncertainty in $E_{22}$. }
\begin{figure}
\caption{Collection of candidate empirical CDFs for Young's modulus $E_{22}$ given the initial 20 data, assuming (a) independent marginal distributions and (b) Gaussian correlation}
\label{fig:c4_marginal2}
\end{figure}
To account for variable dependence, for each pair of marginals we must identify a set of candidate copulas. For this we perform the hierarchical Bayesian multimodel selection for the Gaussian, Clayton, Frank and Gumbel copulas. We first compute the posterior copula model probabilities and then compute the associated joint parameter densities. For each pair of marginals, we then construct an ensemble of copula model sets by randomly selecting the copula models and copula parameters. Finally, the optimal sampling density in Eq.\ \eqref{eqn:exp8} is determined and employed for propagation of the multiple candidate densities with copula dependence. Fig.\ \ref{fig:c4_copula_1mar} shows three examples illustrating the influence of copula dependence uncertainty for specific marginal density pairs. Notice that, when the marginals are assumed to be independent, a single cdf for $E_{22}$ is generated. However, with uncertainty in the copula dependence, there are several candidate cdfs for $E_{22}$ for each pair of marginal densities. In other words, the uncertainty associated with the spread in the sets of cdfs in Figure \ref{fig:c4_copula_1mar} is ignored if we assume independent marginals.
\begin{figure}
\caption{Collection of candidate empirical CDFs for Young's modulus $E_{22}$ with only copula uncertainty given (a) one pair of marginals, (b) two pairs of marginals and (c) three pairs of marginals}
\label{fig:c4_copula_1mar}
\end{figure}
{\color{black}When we combine the uncertainties from the copula model and the marginal models together in Figure \ref{fig:c4_copula_total_20}, we see that the overall uncertainty band is considerably wider than it was when the marginals were assumed to be independent or Gaussian correlated (Fig.\ \ref{fig:c4_marginal2}). That is, the candidate densities with dependence modeling show a much wider band than the densities obtained under the independence or Gaussian-correlation assumptions. }
\begin{figure}
\caption{Total collection of candidate empirical CDFs for Young's modulus $E_{22}$ with uncertainty in dependence modeling given 20 data.}
\label{fig:c4_copula_total_20}
\end{figure}
\subsection{Influence of dataset size}
In this section, we investigate the convergence of the predicted composite material properties as a function of dataset size. As discussed in the previous section, small datasets lead to large uncertainties in the composite material properties, stemming from both the copula model and the marginal models. This raises a critical question: ``How much data is necessary to gain adequate confidence in the probabilistic prediction of composite material properties?"
Here, additional data are generated from the true joint probability density. We begin with the initial 20 data and increase to 50 data, 500 data and 5000 data, as shown in Fig.\ \ref{fig:c4_more_data}. As the data set size increases, we more clearly see the true dependence emerge: the normal marginals become increasingly pronounced, and the nature of the underlying copula dependence becomes clear.
\begin{figure}
\caption{Increasing data set size for dependent matrix and fiber properties: (a,d) 50 data, (b,e) 500 data and (c,f) 5000 data}
\label{fig:c4_more_data}
\end{figure}
{\color{black} Fig. \ref{fig:c4_more_data_cdf} shows the results of the multimodel uncertainty propagation to estimate the cdf of the transverse modulus $E_{22}$ for increasing data set size. The figure shows the convergence of the approach under assumptions of independent marginals (Figure \ref{fig:c4_more_data_cdf}a-c), Gaussian correlation (Figure \ref{fig:c4_more_data_cdf}d-f) and with dependence included (Figure \ref{fig:c4_more_data_cdf}g-i). The true cdf (with the known joint probability densities) is shown for reference. As expected, in all three cases the band of cdfs narrows as additional data are collected -- i.e.\ uncertainty in the prediction of $E_{22}$ is reduced. However, we notice that under the assumptions of independent marginals and Gaussian correlation, the band of cdfs does not converge to the true cdf. Instead, there is a bias introduced by the assumption of independent or Gaussian correlated marginals. Only when we account for the variable dependence in the multimodel UQ approach are we able to converge to the true cdf of the modulus. This is an important conclusion because it shows that, although uncertainty bands generated under the incorrect assumption of independence \textbf{may} initially bound the true probability distribution, they (i) are likely to underestimate the uncertainty in the estimated distribution as shown in Section \ref{sec:composite3}, and (ii) provide biased bounds on the true probability distribution that will not converge as the data set size increases.}
\begin{figure}
\caption{Uncertain CDFs for transverse elastic modulus $E_{22}$ with increasing data set size under the assumption of independent marginals (a-c), Gaussian correlation (d-f) and accounting for copula dependence (g-i): (a,d,g) 50 data, (b,e,h) 500 data and (c,f,i) 5000 data.}
\label{fig:c4_more_data_cdf}
\end{figure}
\section{Conclusion}
In this work, we propose a hierarchical multimodel approach to investigate the effect of uncertainties associated with small data sets for quantifying and propagating probabilistic model inputs with dependencies. The joint CDF of the probabilistic model inputs is composed of marginal distributions and copulas, which are modeled separately. The proposed approach is set in a hierarchical Bayesian multimodel inference framework, where the model-form and model parameter uncertainties associated with marginals are first quantified, and uncertainties associated with the copula are conditioned on specified marginal pairs. This results in an ensemble of joint probability densities that represent the imprecise probabilities in the assignment of probability model inputs with {\color{black}statistical dependence}. A novel importance sampling reweighting algorithm is derived to efficiently propagate the imprecise probabilities through a mathematical or physical model, which is often computationally intensive. The proposed approach therefore estimates the uncertainty in the quantity of interest given multiple candidate model input distributions at a low computational cost when compared with typical nested Monte Carlo simulations.
The methodology is demonstrated on an engineering application which aims to understand the influence of constituent properties on the overall out-of-plane properties of a transversely isotropic E-Glass fiber/LY556 polyester resin composite. A strong correlation between the constituent properties (fibers and matrix) is assumed and described using a Frank copula model. The results show that the assumption of independent {\color{black} or arbitrarily Gaussian correlated} marginals in the imprecise UQ modeling both underestimates the uncertainty in predictions of the modulus and yields biased statistical estimates. When copula-based dependence is integrated into the multimodel UQ framework, the model achieves more realistic bounds on the uncertainty and more accurate probabilistic predictions.
\end{spacing}
\end{document}
\begin{document}
\title{Generation of path-encoded Greenberger-Horne-Zeilinger states}
\author{N. Bergamasco}
\email{[email protected]}
\affiliation{Department of Physics, University of Pavia, Via Bassi 6, I-27100 Pavia, Italy} \author{M. Menotti}
\affiliation{Department of Physics, University of Pavia, Via Bassi 6, I-27100 Pavia, Italy} \author{J. E. Sipe}
\affiliation{Department of Physics, University of Toronto, 60 St. George Street, Toronto,\\ Ontario M5S 1A7, Canada} \author{M. Liscidini}
\affiliation{Department of Physics, University of Pavia, Via Bassi 6, I-27100 Pavia, Italy}
\date{\today}
\begin{abstract} We study the generation of Greenberger-Horne-Zeilinger (GHZ) states of three path-encoded photons. Inspired by the seminal work of Bouwmeester et al. \cite{Bouwmeester} on polarization-entangled GHZ states, we find a corresponding path representation for the photon states of an optical circuit, identify the elements required for the state generation, and propose a possible implementation of our strategy. Besides the practical advantage of employing an integrated system that can be fabricated with proven lithographic techniques, our example suggests that it is possible to enhance the generation efficiency by using microring resonators. \end{abstract}
\pacs{42.50.-p,42.82.Et}
\maketitle
\section{Introduction} Quantum correlations between subsystems are the focus of many studies on the foundations of quantum mechanics, and the ability to generate states that exhibit these correlations is central to quantum information processing. While quantum correlations in a bipartite state are generally well-understood \cite{Nielsen}, the analysis of multipartite states is more intricate. Even for tripartite entangled states, where only three subsystems are involved, one can identify separable states, biseparable states, and two inequivalent classes of tripartite entangled states \cite{Cirac}: Greenberger-Horne-Zeilinger (GHZ) states \cite{GHZ_Paper,GHZ_Book}, and W states \cite{Kiesel}. A state in one class cannot be transformed into one of the other using only local operations and classical communications.
In this communication, we focus on the generation of tripartite GHZ states, the simplest of which can be written as \begin{equation}\label{GHZ paradigm}
\ket{GHZ}=\frac{1}{\sqrt{2}}\big(\ket{000}+\ket{111}\big), \end{equation} where $\ket{0}$ and $\ket{1}$ are orthogonal states. GHZ states were first studied experimentally by Bouwmeester et al. \cite{Bouwmeester}, where the states $\ket{0}$ and $\ket{1}$ identified orthogonal photon polarizations. But other implementations of the orthogonal states are possible, and have been demonstrated in a variety of platforms including trapped ions \cite{Roos} and superconducting circuits \cite{DiCarlo}. GHZ states have been applied in tests of local realism \cite{Pan}, where the use of tripartite states allows for a demonstration of its conflict with quantum mechanics even in a definite measurement, as opposed to such tests using bipartite states which rely on the statistics of a large number of measurements. They have also been used to devise quantum communication protocols, such as multipartite quantum key distribution, with secret keys shared safely among three parties \cite{Jin}; dense coding \cite{Hao}, where the capacity of a transmission channel is increased by using quantum states of light; and entanglement swapping \cite{Xiaolong}.
When photons are used to produce a GHZ state, the entangled degree of freedom is typically polarization. This choice arises from the fact that polarization can be naturally used as a qubit, and because polarization-entangled photon pairs are now routinely produced by parametric sources \cite{kwiat95}. Moreover, rotation of a polarization-encoded qubit on the Bloch sphere can be easily done by means of linear optical elements such as wave-plates, and routing of the photons can be performed using beam splitters (BSs) and polarizing beam splitters (PBSs). Yet the use of polarization can be problematic for long distance communication using optical fibers, where polarization can drift during propagation, and for the development of integrated quantum devices, where sophisticated solutions are required to control light polarization on a chip \cite{Matsuda:2012aa}. Thus, the use of other degrees of freedom in photonic implementations of GHZ states is worth investigating.
In this paper we propose a scheme to prepare GHZ states, with the generated photons entangled in the path degree of freedom; the states $\ket{0}$ and $\ket{1}$ here refer to the photon being in different spatial modes \cite{Matthews}, regardless any other degree of freedom. In presenting our strategy we consider, as an example, an integrated optical circuit in which two photon pairs are generated by spontaneous four-wave mixing (SFWM) in a $\chi^{(3)}$ material. While similar schemes could be implemented in different platforms, the approach we suggest allows us to take advantage of the enhancement of the generation rate provided by integrated microresonators, and to drastically reduce the footprint of the source \cite{Azzini:12}. In principle, it would be possible to design optical schemes that manipulate path-encoded states and subsequently translate and output them in the polarization representation. This has been proposed recently to achieve chip-to-chip quantum communication \cite{Wang16}. However, here we are mainly interested in both the manipulation and output of path-encoded states on optical chips.
\begin{figure}
\caption{ Sketch of the general scheme for the preparation and distribution of path-encoded GHZ states.}
\label{ABC}
\end{figure}
We envision the situation depicted in Fig. \ref{ABC}: Four photons are generated in an integrated device, of which one is used as a target and three are used as qubits. For each qubit there are two paths, each path associated with a basis state. The three photons are routed to three independent parties (Alice, Bob, and Charlie), who can manipulate them; the rotation of each qubit on the Bloch sphere is performed by means of a Mach-Zehnder interferometer and two phase shifts \cite{Silverstone15}.
The work is organized as follows: in section 2 we establish a correspondence between some relevant optical elements used in polarization optics and the integrated counterparts in the path-encoding framework. In section 3 we present the integrated approach for generating the GHZ state, we discuss how to post-select the desired state, and we estimate the generation rate. Finally, in section 4 we draw our conclusions.
\section{From polarization- to path-encoded qubits} Bulk sources used to generate quantum correlated photons typically rely on spontaneous parametric down-conversion (SPDC) in nonlinear crystals, e.g. $\beta$-barium borate (BBO) \cite{kwiat95}, or on SFWM, e.g. in optical fibers \cite{Smith:2009}. The former is a second-order nonlinear process that can be pictured as the spontaneous fission of a pump photon into two daughter photons of lower energy, while the latter is a third-order nonlinear process that can be regarded as the elastic scattering of two pump photons to yield a new photon pair. These two processes can also be used in photonic integrated circuits (PICs) \cite{Azzini:12,Ducci:2013}. In this context, SFWM is particularly useful, for the circuit can be easily fabricated in silicon, with recent implementations employing silicon nitride \cite{Moss}. These materials possess a relatively strong third-order nonlinear susceptibility that favours SFWM, and also provide strong field confinement thanks to the large index contrast with silicon dioxide, which is usually used as the low-index cladding material in the fabrication of ridge waveguides and resonators. In principle, the polarization of the generated photons can be used to implement a qubit either in a bulk or integrated source. Yet in PICs this is particularly challenging, and thus alternative solutions are desirable \cite{Menotti:2016}.
In this section we investigate the possibility of using the path degree of freedom of photons for qubit encoding. To this end, we propose employing two waveguides, or \emph{paths}, for each photon route in a PIC. We assign the state $\ket{1}$ or $\ket{0}$ to a photon when it travels in one waveguide or the other, which we graphically depict as dotted and dashed, respectively, in Fig. \ref{Analogies}. This convention is kept consistent throughout the whole circuit.
In Fig. \ref{Analogies} we show that there is a full correspondence between bulk optical elements used to manipulate polarization states and integrated optical elements necessary to manipulate path-encoded states. \begin{figure}
\caption{Analogies between optical elements employed in bulk optics for schemes involving polarization-entangled states (on the left) and the corresponding integrated optical elements for the scheme introduced here involving path-encoded states (on the right). Dotted and dashed lines indicate waveguides associated with $\ket{1}$ and $\ket{0}$, respectively. The shaded boxes mark the coupling points between waveguides.}
\label{Analogies}
\end{figure}
The rotation of polarization states is performed in bulk optics by using a $\lambda/2$ plate, while the corresponding evolution of path states is effected with a 50:50 directional coupler (DC) connecting the two waveguides associated with the $\ket{1}$ and $\ket{0}$ states. Photons in a bulk optical circuit can be routed depending on their polarization using a PBS; the same can be done for the path-encoded states by properly connecting the waveguides of the input ports to the waveguides of the output ports (see Fig. \ref{Analogies}). Finally, photons in a bulk optical circuit can be spatially separated regardless of their polarization state using a BS, and the corresponding operation on path-encoded states is performed in integrated optics using two 50:50 DCs.
Two remarks regarding path states and their manipulation are necessary: First, we note that the generation of a meaningful path-encoded state for a photon pair requires a source more complicated than a single-bus-waveguide ring resonator \cite{Mataloni04, Silverstone14, Preble15}. Second, some of the integrated optical elements used to manipulate path states (see Fig. \ref{Analogies}) display a waveguide crossing that seems problematic in a planar geometry, which is usually the choice for PICs. However, we will see that proper sources can be designed, and a waveguide rearrangement can avoid the problematic waveguide crossing.
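As a simple numerical illustration of the first correspondence in Fig. \ref{Analogies}, the sketch below applies the transfer matrix of a lossless 50:50 directional coupler (through amplitude $r$, cross amplitude $it$, consistent with the coefficients used in the Appendix) to the two path amplitudes of a single path-encoded qubit; the matrix is the textbook form for an ideal coupler and is not tied to a specific device.
\begin{verbatim}
# Minimal sketch: a lossless 50:50 directional coupler acting on the two
# path amplitudes of one path-encoded qubit, playing the role of the
# half-wave plate (the cross-coupled amplitude picks up a factor of i).
import numpy as np

r = t = 1.0 / np.sqrt(2.0)
U_dc = np.array([[r, 1j * t],
                 [1j * t, r]])

ket0 = np.array([1.0, 0.0])   # photon in the "dashed" waveguide, state |0>
ket1 = np.array([0.0, 1.0])   # photon in the "dotted" waveguide, state |1>

print(np.allclose(U_dc @ U_dc.conj().T, np.eye(2)))  # True: coupler is unitary
print(U_dc @ ket0)            # equal-weight superposition of the two paths

# The integrated beam-splitter analogue applies one such coupler to each pair
# of like-labelled waveguides of the two qubits it mixes, as described above.
\end{verbatim}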
\section{State generation and manipulation} Here we discuss the generation of path-encoded GHZ states and present an integrated circuit based on the fundamental building blocks introduced in the previous section. Considering a generic parametric source, in the approximation of undepleted pump pulses described classically, the state of the generated photons is of the form \cite{braunstein2005} \begin{multline} \ket{\psi} = e^{\beta C^\dagger_{II}-H.c.}\ket{\text{vac}}\\
= \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}} + \beta\Up{C}_{II}\ket{\text{vac}} + \frac{1}{2}\left[\beta\Up{C}_{II}\right]^2\ket{\text{vac}}+ \ldots \\
\equiv \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}} + \beta\ket{\text{II}} + \frac{1}{2}\left[\beta\Up{C}_{II}\right]^2\ket{\text{vac}} + \ldots, \label{eq:sfwmstate} \end{multline}
where $\ket{\text{vac}}$ is the vacuum state, $\Up{C}_{II}$ is the photon pair creation operator, $\left|\beta\right|^2$ is the pair
generation probability per pulse when that number is very small, and $\ket{\text{II}}$ is the normalized two-photon state. In the limit of interest where $\left|\beta\right|^2\ll1$, we can truncate the expansion \eqref{eq:sfwmstate} at the quadratic term in $\beta$, which corresponds to the generation of two pairs. The properties of the four-photon state contribution to \eqref{eq:sfwmstate}, resulting from the generation of two photon pairs, are directly related to those of the creation operator $\Up{C}_{II}$. Hence, once this has been calculated, the output state of two or more pairs can be obtained immediately. For this reason, we begin with a discussion of the generation of a single photon pair.
The structure we propose can be divided in two parts: a nonlinear integrated source, which generates a path-encoded initial state, and a linear optical circuit to manipulate it. The full calculation of the output state is reported in the Appendix. \begin{figure}
\caption{Sketch of the nonlinear integrated source of path-entangled states \eqref{eq:pathent}. Waveguides associated with single qubits are grouped together, phase shifters and relevant lengths are also shown.}
\label{source}
\end{figure} The nonlinear integrated source (see Fig. \ref{source}) consists of four identical ring resonators arranged in two blocks, each of which is a Mach-Zehnder interferometer unbalanced by a phase $\phi_i$, with one ring resonator per arm. The two blocks are coherently pumped using a 50:50 directional coupler, which splits the pump amplitude into two waveguides, with $\phi$ being the pump phase difference between the two blocks. Although this is not strictly necessary, here we consider degenerate SFWM \cite{Fang:13}, for which we require a dual-pump scheme, where the 50:50 split ratio can be guaranteed by choosing an appropriate length of the directional coupler \cite{Huang:94}. Since the field enhancement inside the rings is much larger than that in the waveguides, we assume that the generation of photons occurs only in the resonators.
It should be noticed that although the use of four identical microring resonators might pose some constraints, the fabrication technique for multiple integrated elements on SOI platforms has constantly improved in recent years, up to the realization of several hundred coupled microrings \cite{Mookherjea}. Moreover, it is possible to tune each resonator almost independently via heaters: this enables the control of the position of its resonances with great precision \cite{Cunningham:10}. If one considers silicon ring resonators, the large nonlinearity ($\gamma\approx 200\ W^{-1}m^{-1}$) guarantees high generation efficiencies with mW pump powers and $Q\approx10000$ \cite{Azzini:12}, which relaxes the constraints on the ring tunability. Finally, the two blocks in Fig. \ref{source} have already been used for the generation of deterministically split photons by the reverse HOM effect, yielding high-visibility quantum interference \cite{Silverstone15}. Indeed, when $\phi_i=\pi/2[2\pi]$ one observes deterministic splitting of the photon pair exiting the MZI \cite{Silverstone14}. But when the two blocks are pumped with a relative phase shift $\phi=\pi$ (or odd multiple), the two-photon state generated by the source is the Bell state (see the Appendix) \begin{equation} \ket{\Psi^-} = \frac{1}{\sqrt{2}}\left(\ket{1}\ket{0} - \ket{0}\ket{1}\right), \label{eq:pathent} \end{equation} where we use the first and fourth waveguide for the first path-encoded qubit and we use the second and the third waveguide for the second path-encoded qubit as depicted in Fig. \ref{source}. This situation is analogous to that considered by Bouwmeester et al. \cite{Bouwmeester}, where the nonlinear crystal generates photon pairs in the corresponding polarization-encoded entangled state.
We now consider the simultaneous generation of two pairs of photons, described by the effect of $(C_{II}^\dagger)^2$ on the vacuum state. This leads to the four-photon state \begin{multline} \ket{\mathrm{IV}} = -\frac{1}{2\sqrt{3}}\int dk'_1dk'_2 dk_1dk_2 \phi_{\text{ring}}(k_1,k_2)\phi_{\text{ring}}(k'_1,k'_2)\\ \times e^{i(\psi(k_1,k_2) +\psi'(k_1,k_2))}(\Up{b}_{k_1,1}\Up{b}_{k_2,2} - \Up{b}_{k_1,3}\Up{b}_{k_2,4})\\ \times (\Up{b}_{k'_1,1}\Up{b}_{k'_2,2} - \Up{b}_{k'_1,3}\Up{b}_{k'_2,4})\ket{\text{vac}}, \label{eq:fourphostate_text} \end{multline} where $\phi_{\text{ring}}(k_1,k_2)$ is the biphoton wave function of a pair generated in a single ring, $\psi(k_1,k_2)$ and $\psi^{\prime}(k_1,k_2)$ are phase factors associated with propagation in the channel (which can be assumed constant) defined in \eqref{eq:psiwave}, and $b^{\dagger}_{k_i,j}$ is the operator associated with the creation of a photon having wavevector $k_i$ and exiting the structure in Fig. \ref{source} from the channel $j$. The state $\ket{\text{IV}}$ is normalized under the assumption that the biphoton wave function $\phi_{\text{ring}}(k_1,k_2)$ is separable (see below).
\begin{figure}
\caption{Sketch of the complete integrated circuit for the generation of path-encoded GHZ states. We can identify a schematic representation of the source of path-encoded entangled states in the form \eqref{eq:pathent}, and a realistic implementation of the full circuit obtained by rearranging some of the channels and the detectors. Waveguides associated with single qubits are grouped together.}
\label{schemfull}
\end{figure}
We now turn to the manipulation of the state $\ket{\mathrm{IV}}$, which is done following the recipe Bouwmeester et al. \cite{Bouwmeester} used for polarization-encoded entangled states, but implemented for path-encoded entangled states using the correspondence between bulk polarization elements and the integrated path components illustrated in Fig. \ref{Analogies}. Note that we have avoided the waveguide crossing in the integrated analogue of a beam splitter (see Fig. \ref{Analogies}) by a rearrangement of the circuit waveguides, as shown in Fig. \ref{schemfull}.
In strict analogy with Bouwmeester et al. \cite{Bouwmeester}, post-selecting on a three-fold coincidence in detectors $D_1$, $D_2$, and $D_3$ in Fig. \ref{schemfull}, conditioned on the detection of a photon in the target detector $T$, identifies that a GHZ state was generated. Care must be taken to ensure that the generated GHZ state is pure. As in the generation of pairs of photons for heralded photon applications, this requires that the function $\phi_{\text{ring}}(k_1,k_2)$ is separable. To this end we observe that nearly-uncorrelated photons can be obtained by adjusting the duration of the pump pulse, which has to be comparable or shorter than the dwelling time of the photon inside the ring \cite{Helt10,onodera16}. For complete separability, a more careful design of the ring is required \cite{Gentry:16,Vernon:2017}. A more detailed discussion of the effect of the spectral properties of the BWF goes beyond the scope of this work, but we plan to examine this issue in a future communication.
Following earlier work \cite{Liscidini12}, the state \eqref{eq:fourphostate_text} can be written in terms of the creation operators corresponding to the asymptotic-out field of the structure of Fig. \ref{schemfull}. The relation between the asymptotic-in and -out field operators is reported in \eqref{eq:evo} of the Appendix. This allows us to rewrite the complete output state as: \begin{align}
\ket{\psi} &= \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}} \nonumber \\
&+ \beta\ket{\text{II}} + \frac{\sqrt{3}}{2}\beta^2\left[\ket{\Phi} -\frac{1}{2\sqrt{3}}\ket{\psi_{\text{GHZ}}}\right], \label{eq:outstate} \end{align} where $\ket{\Phi}$ includes other contributions that are second order in $\beta$ but would not lead to a four-fold coincidence event, while \begin{multline} \ket{\psi_{\text{GHZ}}} = \int dk_1dk_2dk'_1dk'_2\phi_{\text{ring}}(k_1,k_2) \phi_{\text{ring}}(k'_1,k'_2)\\ \times e^{i\Gamma}\ket{\text{T}}\ket{\text{GHZ}}, \label{eq:ghz5} \end{multline} with $\Gamma$ a phase factor and \begin{align} \ket{\text{GHZ}} &= \frac{1}{\sqrt{2}}\left[\Up{b}_{k_1,D_{1,1}} \nonumber \Up{b}_{k_2,D_{2,1}}\Up{b}_{k'_2,D_{3,0}} \right.\nonumber \\ &+ \left. e^{i\Theta(k_2,k'_1,k'_2)}\Up{b}_{k_2,D_{1,0}}\Up{b}_{k'_2,D_{2,0}}\Up{b}_{k_1,D_{3,1}}\right]\ket{\text{vac}}\nonumber \\
&= \frac{1}{\sqrt{2}}\left[\ket{110} + e^{i\Theta}\ket{001}\right], \qquad\qquad\qquad \nonumber\\
\ket{T}&=\Up{b}_{k'_1,T}\ket{\mathrm{vac}}. \label{eq:GHZ} \end{align} Here $\Theta(k_2,k'_1,k'_2)$ is a relative phase between the two GHZ state components and depends on the relative positions of the detectors (see \eqref{eq:Theta}); the corresponding path-length differences cannot be longer than the coherence length of the photons. Such a coherence length can always be increased by filtering, although for typical resonance widths achievable at telecom wavelengths in silicon and silicon nitride resonators it already ranges from centimetres to meters \cite{Moss, Grassani15, Preble15}.
As expected, any four-fold coincidence event results in a GHZ state, where the probability of such an event is $\left|\beta^2/4\right|^2$, when propagation losses are neglected. The magnitude of $\beta$ depends on the pump power, the ring radius and the quality factor of the resonators, and it can vary depending on the device under consideration. Yet values of $\left|\beta\right|^2 \approx 0.1$ have been demonstrated in PICs \cite{Silverstone15}, and assuming MHz pump repetition rates, this would allow for the preparation of path-encoded GHZ states at kHz generation rates with mW pump powers and quality factors of the order of $10^4$. Although our theoretical estimate does not account for losses, device imperfections, or detector efficiencies, we still expect a large improvement in the generation rate with respect to the results presently reported in the literature.
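As a quick consistency check of the numbers quoted above (taking $|\beta|^2=0.1$ and a $1$ MHz repetition rate, and neglecting losses and detector efficiencies as in the text):
\begin{verbatim}
# Four-fold coincidence probability |beta^2/4|^2 per pulse, times the pump
# repetition rate, reproduces the quoted kHz-scale GHZ generation rate.
beta_sq = 0.1                      # pair-generation probability per pulse
p_fourfold = (beta_sq / 4.0) ** 2  # = |beta^2/4|^2 = 6.25e-4
rep_rate = 1.0e6                   # pulses per second (1 MHz)
print(p_fourfold * rep_rate)       # 625 Hz, i.e. of order kHz
\end{verbatim}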
\section{Conclusions} For the generation and processing of quantum correlated photons, we have shown that there is a one-to-one correspondence between components operating in a path encoding scheme and bulk optical elements operating in a polarization encoding scheme. Exploiting this result, we proposed and studied the generation of path-encoded tripartite GHZ states. Although the generation of the desired state is revealed only in post-selection, therefore destroying the quantum state, many protocols involving GHZ states are based on this condition \cite{Jin,Hao,Xiaolong}. Our approach is suitable for the generation of multipartite states in quantum photonic integrated devices, as it overcomes the difficulties related to the use of the polarization degree of freedom. To demonstrate this, we designed and studied an integrated structure relying on the generation of photon pairs by SFWM in ring resonators, showing the potential of this approach in terms of source footprint and brightness.
\section*{Acknowledgments} We are grateful to Mario Arnolfo Ciampini for the critical reading and the fruitful discussions on the manuscript.
\appendix \section{} Referring to the schematic representation in Fig. \ref{source}, the photon pair creation operator $C_{II}^\dagger$ can be expressed very generally as \begin{equation} C^\dagger_{II} = \frac{1}{\sqrt{2}}\sum_{p,q}\int dk_1dk_2\phi_{p,q}\left(k_1,k_2\right)b^\dagger_{k_1,p}b^\dagger_{k_2,q}, \end{equation} where $p$ and $q$ run over all the output channels, $\phi_{p,q}$ is the amplitude of the biphoton wave function (BWF) that is associated with the photon pair exiting from channels $p$ and $q$ and \begin{equation}\label{eq:normalcondition}
\sum_{p,q} \int\ dk_1dk_2 \left| \phi_{p,q}(k_1,k_2)\right|^2=1. \end{equation}
The particular arrangement of the ring resonators in our source design allows for only a restricted number of combinations in $(p,q)$: \begin{multline} (p,q)\in\Omega = \left\{(1,1);(2,1);(1,2);(2,2);\right. \\ \left.(3,3);(4,3);(3,4);(4,4)\right\}. \end{multline}
To lowest order in the pump intensities, $\phi_{p,q}(k_1,k_2)$ can be written as \cite{Yang, Helt10, Liscidini12} \begin{multline}\label{eq:phipq} \phi_{p,q}\left(k_1,k_2\right) =\frac{2\sqrt{2}\pi\alpha^2i}{\beta\hbar}\int dk \phi_P(k)\phi_P(k_1+k_2-k)\\ \times S_{p,q}\left(k_1+k_2-k,k,k_1,k_2\right), \end{multline} where the coupling term $S_{p,q}$ is related to the superposition of the asymptotic-in fields in the structure by \begin{multline} S_{p,q}\left(k_1+k_2-k,k,k_1,k_2\right) =\\ = \frac{3}{2\epsilon_0} \sqrt{\frac{(\hbar\omega_{k_1+k_2-k})(\hbar\omega_{k})(\hbar\omega_{k_1,p})(\hbar\omega_{k_2,q})}{16}}\\ \times\int d\V{r}\Gamma^{ijkl}\left(\V{r}\right)D^{i,\text{asy-in}}_{k_1+k_2-k}(\V{r})D^{j,\text{asy-in}}_{k}(\V{r})\\ \times D^{k,\text{asy-in}}_{k_1,p}(\V{r})D^{l,\text{asy-in}}_{k_2,q}(\V{r}), \label{Spq} \end{multline}
\noindent where $\Gamma^{ijkl}\left(\V{r}\right)$ is related to the third-order nonlinear susceptibility tensor \cite{Helt10}. Working out the explicit form of each term in equation \eqref{Spq} with respect to the scheme in Fig. \ref{source}, we find that \begin{multline}\label{Spq_sum} S_{p,q}\left(k_1+k_2-k,k,k_1,k_2\right) =\\ = \frac{3}{2\epsilon_0}\sqrt{\frac{(\hbar\omega_{k_1+k_2-k})(\hbar\omega_{k})(\hbar\omega_{k_1,p})(\hbar\omega_{k_2,q})}{16}}\\ \times\sum_{n\in[1,4]}A_n(k_1+k_2-k)A_n(k)B_{n,p}(k_1)B_{n,q}(k_2)\\ \times\bar{\jmath}(k_1+k_2-k,k,k_1,k_2), \end{multline}
\noindent where $\bar{\jmath}(k_1+k_2-k,k,k_1,k_2)$ is the overlap integral of the asymptotic-in fields $\tilde{D}_k(\mathbf{r})$ in a single ring \begin{multline} \bar{\jmath}\left(k_1+k_2-k,k,k_1,k_2\right)=\int_{1^{st}\text{ ring}} d\V{r}\Gamma^{ijkl}(\V{r})\\ \times \tilde{D}^{i}_{k_1+k_2-k}(\V{r})\tilde{D}^{j}_{k}(\V{r})\tilde{D}^{k}_{k_1}(\V{r})\tilde{D}^{l}_{k_2}(\V{r}), \label{jbar} \end{multline} and the coefficients $A_n$ and $B_{n,p(q)}$ in equation \eqref{Spq_sum} are given by \begin{align} A_1(k) &= \left(it\right)^2 e^{i\phi_1}e^{ik(L_1+L_2)},\notag\\ A_2(k) &= itre^{ik(L_1+L_2)},\notag\\ A_3(k) &= r^2 e^{i\phi}e^{ik(L_1+L_2)},\notag\\ A_4(k) &= itr\,e^{i(\phi+\phi_2)}e^{ik(L_1+L_2)} \label{eq:Acoeff} \end{align} and \begin{align} B_{1,1}(k) &= r\,e^{-ikL_3},\notag\\ B_{2,2}(k) &= r\,e^{-ikL_3},\notag\\ B_{2,1}(k) &= it\,e^{-ikL_3},\notag\\ B_{1,2}(k) &= it\,e^{-ikL_3}, \label{eq:Bcoeff} \end{align} where $L_1$, $L_2$, and $L_3$ are the distances between coupling points, and $\phi$, $\phi_1$, and $\phi_2$ are phase delays (see Fig. \ref{source}). Summing all the contributions in equation \eqref{Spq_sum} we find that, when all the directional couplers have a 50:50 split ratio and the phase $\phi_1=\phi_2=\frac{\pi}{2}$, the only nonvanishing terms in \eqref{eq:phipq} are \begin{align} \phi_{1,2}(k_1,k_2) &= \phi_{2,1}(k_1,k_2)\notag \\ &= \frac{-i}{4}e^{i\psi\left(k_1,k_2\right)}\frac{\beta_{ring}}{\beta}\phi_{ring}(k_1,k_2) \label{eq:phi12} \end{align} and \begin{align}\label{eq:phi34} \phi_{3,4}(k_1,k_2) &= \phi_{4,3}(k_1,k_2) \notag \\ &= \frac{i}{4}e^{i\psi\left(k_1,k_2\right)}e^{2i\phi}\frac{\beta_{ring}}{\beta}\phi_{ring}(k_1,k_2), \end{align}
where $|\beta_{ring}|^2$ is the probability of generating a pair in a single ring resonator, with a BWF \cite{Helt10} \begin{multline} \phi_{ring}(k_1,k_2) = \frac{2\sqrt{2}\pi\alpha^2i}{\beta_{\text{ring}}\hbar}\int dk \phi_P(k)\phi_P(k_1+k_2-k)\\ \times \frac{3}{4\epsilon_0}\sqrt{\frac{(\hbar\omega_{k_1+k_2-k})(\hbar\omega_{k})(\hbar\omega_{k_1})(\hbar\omega_{k_2})}{16}}\\ \times \bar{\jmath}(k_1+k_2-k,k,k_1,k_2) \end{multline} which is normalized according to \begin{equation}\label{eq:phiringnorm}
\int\ dk_1dk_2\left|\phi_{ring}(k_1,k_2)\right|^2=1, \end{equation} and where we defined $\psi(k_1,k_2)$ as \begin{equation} \psi(k_1,k_2) = 2(k_1+k_2)(L_1+L_2-L_3). \label{eq:psiwave} \end{equation}
Now we can finally reconstruct the complete output state generated by the source. Considering the limit of low generation probability, the ket \eqref{eq:sfwmstate} takes the form \begin{align}
\ket{\psi} &\approx \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}}+\beta C_{II}^\dagger\ket{\text{vac}}\notag\\ &+\frac{1}{2}\left(\beta C_{II}^\dagger\right)^2\ket{\text{vac}}+\cdots \notag\\
&\equiv \left(1+\mathcal{O}(|\beta|^2)\right)\ket{\text{vac}}+\beta \ket{II}+\frac{\sqrt{3}}{2}\beta^2\ket{IV}+\cdots \label{eq:expansion} \end{align} where the factor $\sqrt{3}$ comes from the normalization of the state $\ket{\text{IV}}$, and the normalized two-photon state is \begin{multline} \ket{II} = -\frac{i}{4\sqrt{2}}\int dk_1dk_2\frac{\beta_{\text{ring}}}{\beta}\phi_{\text{ring}}\left(k_1,k_2\right)\\ \times e^{i\psi\left(k_1,k_2\right)}\left\{\Up{b}_{k_1,1}\Up{b}_{k_2,2} + \Up{b}_{k_1,2}\Up{b}_{k_2,1}\right. \\ \left.- e^{2i\phi}\left(\Up{b}_{k_1,3}\Up{b}_{k_2,4}+\Up{b}_{k_1,4}\Up{b}_{k_2,3}\right)\right\}\ket{\text{vac}}. \label{eq:twophostate3} \end{multline} When $\phi=\pi$, \eqref{eq:twophostate3} becomes \begin{multline} \ket{II} = -\frac{i}{2\sqrt{2}}\int dk_1dk_2\frac{\beta_{\text{ring}}}{\beta}\phi_{\text{ring}}(k_1,k_2)
e^{i\psi\left(k_1,k_2\right)}\\ \times\left\{\Up{b}_{k_1,1}\Up{b}_{k_2,2} - \Up{b}_{k_1,3}\Up{b}_{k_2,4}\right\}\ket{\text{vac}}, \label{eq:twophostate4} \end{multline} which is equivalent to the Bell state \eqref{eq:pathent} in the path-encoding notation. It should be noticed that, from the normalization condition \eqref{eq:normalcondition} and equations \eqref{eq:phi12} and \eqref{eq:phi34}, we have \begin{equation}
4\times\int\ dk_1dk_2 \frac{1}{16}\left|\frac{\beta_{\text{ring}}}{\beta}\right|^2\left| \phi_{\text{ring}}(k_1,k_2)\right|^2=1 \end{equation} that, together with the normalization condition on the BWF \eqref{eq:phiringnorm}, gives \begin{equation}
\left|\beta\right|^2=\frac{\left|\beta_{ring}\right|^2}{4}. \label{eq:efficiency} \end{equation}
In this context we are interested in the simultaneous generation of two photon pairs, and thus we focus on the next term in the expansion \eqref{eq:expansion}, which involves the four-photon state $\ket{\mathrm{IV}}$; using Eq. \eqref{eq:efficiency}, this leads to Eq. \eqref{eq:fourphostate_text}.
Referring to Fig. \ref{schemfull} and following the notation \cite{Heebner04} for directional couplers, we can express the photon creation operators in \eqref{eq:fourphostate_text} in terms of the photon creation operators in each detector channel $D_{n,m}$ as
\begin{align}\label{eq:evo} \Up{b}_{k_1,1} &= e^{-ik_1L_T}\Up{b}_{k_1,T} \\ \Up{b}_{k_2,2} &= -it_1e^{-ik_2L_{3,0}}\Up{b}_{k_2,D_{3,0}}+ r_1e^{-ik_2L_{2,0}}\Up{b}_{k_2,D_{2,0}} \notag \\ \Up{b}_{k_1,3} &= -it_2e^{-ik_1L_{3,1}}\Up{b}_{k_1,D_{3,1}}+ r_2e^{-ik_1L_{1,1}}\Up{b}_{k_1,D_{1,1}} \notag \\ \Up{b}_{k_2,4} &= -it_3e^{-ik_2L_{2,1}}\Up{b}_{k_2,D_{2,1}}+ r_3e^{ik_2L_{1,0}}\Up{b}_{k_2,D_{1,0}},\notag \end{align} where $L_{n,m}$ is the distance between the appropriate output directional coupler in the source and the detector $D_{n,m}$, and $L_T$ is the length between the upper directional coupler in the source and the target detector $T$. Using \eqref{eq:evo} in \eqref{eq:fourphostate_text}, and referring to the output state expansion \eqref{eq:expansion} we find that the state at the detectors is \eqref{eq:outstate}-\eqref{eq:GHZ}, with the relative phase between the terms in the GHZ given by \begin{align}\label{eq:Theta} \Theta &= k_1(L_{1,1}-L_{3,1} )+k_2(L_{2,1}-L_{1,0} ) \notag\\
&+ k'_2(L_{3,0}-L_{2,0})+\frac{\pi}{2}. \end{align}
\end{document}
\begin{document}
\title{The $d$-very ampleness of adjoint line bundles on quasi-elliptic surfaces} \author[Yongming Zhang]{Yongming Zhang} \email{[email protected]} \address{School of Mathematics, China University of Mining and Technology, Xuzhou, 221116, P. R. of China} \maketitle \begin{abstract} In this paper, we give a numerical criterion of Reider type for the $d$-very ampleness of adjoint line bundles on quasi-elliptic surfaces. At the same time, we give a new proof of a vanishing theorem on quasi-elliptic surfaces communicated to us by Langer and show that it is optimal. \end{abstract} \section{Introduction}
Let $X$ be a projective algebraic variety defined over an algebraically closed field $k$. Let $Z$ be a $0$-dimensional subscheme of $X$, which is called a $0$-cycle of $X$. For an integer $d\geq0$, a line bundle $L$ on $X$ is called \emph{$d$-very ample} if for any $0$-cycle $Z$ of length $\leq d+1$, the restriction map $$\Gamma(X,L)\longrightarrow\Gamma(Z,L\mid_Z)$$ is surjective. Note that $0$-very ampleness and $1$-very ampleness are equivalent to being generated by global sections and being very ample, respectively.
Let $X^{[d]}$ be the Hilbert scheme of points on $X$ of length $d$. If $L$ is $d$-very ample, then the restriction map associates to every $0$-cycle $Z$ of length $d+1$ a subspace of $H^0 (X,L)^*$ of dimension $d+1$, and this is indeed a morphism $$\phi_d:X^{[d+1]}\rightarrow Grass(d+1, H^0(X,L)^*).$$ It is proved that $\phi_d$ is an embedding if and only if $L$ is $(d+1)$-very ample (see \cite{CG90}). Thus $d$-very ampleness is geometrically a natural generalization of the usual notion of very ampleness.
Using Reider's method (\cite{Reider1988}), Beltrametti and Sommese obtained a useful numerical criterion for the $d$-very ampleness of the adjoint line bundles in the case of surfaces in characteristic zero. \begin{thm}[\cite{BS91}]\label{thm_BS} Let $L$ be a nef line bundle on a complex smooth projective surface $X$ and suppose that $(L^2)\geq4r+5$. Then either $\mathcal{O}_X(K_X+L)$ is $r$-very ample or there exists an effective divisor $D$ containing some $0$-dimensional scheme of length $\leq r+1$ along which $r$-very ampleness fails, such that a power of the line bundle $L-2D$ has sections and $$(D\cdot L)-r-1\leq (D^2)<\frac{1}{2}(D\cdot L)<r+1.$$ \end{thm} In positive characteristic, by the results of N.I.Shepherd-Barron (\cite{SB91}), Theorem \ref{thm_BS} also works directly on surfaces neither of general type nor quasi-elliptic of Kodaira dimension $1$, and for the surface of general type, T. Nakashima used N.I.Shepherd-Barron's results to obtain a numerical criterion for the $d$-very ampleness of the adjoint line bundles (\cite{NT93}). Then H. Terakawa (\cite{TH99}) improved it and collected all such results on surfaces together. \begin{thm}[\cite{TH99}] Let $X$ be a nonsingular projective surface defined over an algebraically closed field of characteristic $p>0$. Let $L$ be a nef line bundle on $X$. Assume that $l:= L^2-4d-5\geq0$ and one of the following situations holds: \begin{enumerate}
\item $X$ is not of general type and further not quasi-elliptic of Kodaira dimension $1$;
\item $X$ is of general type with minimal model $X^{\prime}$, $p\geq3$ and $l>(K_{X^{\prime}}^2)$;
\item $X$ is of general type with minimal model $X^{\prime}$, $p=2$ and $l>{\rm max}\{(K_{X^{\prime}}^2),(K_{X^{\prime}}^2)-3\chi(\mathcal {O}_X)+2\}$. \end{enumerate} Then either $\mathcal{O}_X(K_X+L)$ is $d$-very ample or there exists an effective divisor $D$ containing some $0$-dimensional scheme of length $\leq d+1$ along which $d$-very ampleness fails, such that $L-2D$ is $\mathbb{Q}$-effective and $$(D\cdot L)-d-1\leq (D^2)<\frac{1}{2}(D\cdot L)<d+1.$$ \end{thm}
The purpose of this note is to study the adjoint linear system on the remaining case that $X$ is a quasi-elliptic surface and at the same time we give a new proof of the vanishing theorem on it. \begin{thm}[Theorem \ref{d-veryample}] Let $X$ be a quasi-elliptic surface over an algebraically closed field $k$ of characteristic $p$, and $F$ be a general fibre of the quasi-elliptic fibration $f:X\rightarrow C$. Let $L$ be a nef and big divisor on $X$. Assume that $$L^2>4(d+1)$$ for a nonnegative integer $d$ then we have the following descriptions. \begin{enumerate}
\item When $p=2$, we assume that $(L\cdot F)>2$ in addition. Then either $\mathcal{O}_X(K_X+L)$ is $d$-very ample or there exists an effective divisor $B$ containing a $0$-cycle $Z^{(2)}=F^{*}Z$, which is the Frobenius pull back of a $0$-cycle $Z$ of $\deg Z\leq d+1$ where the $d$-very ampleness fails, such that $2L-B$ is $\mathbb{Q}$-effective and
$$2(L\cdot B)-4\deg Z \leq (B^2)\leq (L\cdot B) \leq 4\deg Z;$$
\item When $p=3$, we assume that $(L\cdot F)>1$ in addition. Then either $\mathcal{O}_X(K_X+L)$ is $d$-very ample or there exists an effective divisor $B$ containing a $0$-cycle $Z^{(3)}=F^{*}Z$, which is the Frobenius pull back of a $0$-cycle $Z$ of $\deg Z\leq d+1$ where the $d$-very ampleness fails, such that $3L-2B$ is $\mathbb{Q}$-effective and
$$3(L\cdot B)-9\deg Z \leq (B^2)\leq \frac{3(L\cdot B)}{2} \leq 9\deg Z.$$ \end{enumerate} \end{thm} \begin{thm}[Langer, Theorem \ref{vanishing theorem}]\label{vanishing} Let $X$ be a quasi-elliptic surface over an algebraically closed field $k$ of characteristic $p$, and $F$ be a general fibre of the quasi-elliptic fibration $f:X\rightarrow C$. Let $L$ be a nef and big divisor on $X$. \begin{itemize}
\item If $ p=2$ and $(L\cdot F)>2$, then $H^1(X,L^{-1})=0$,
\item If $ p=3$ and $(L\cdot F)>1$, then $H^1(X,L^{-1})=0$. \end{itemize} \end{thm}
In fact, there are already some results along the lines of Theorem \ref{vanishing} in the literature: \cite[Corollary 4.1]{Zheng17} and \cite[Corollary 7.4]{Langer16}, which seem stronger than Theorem \ref{vanishing} but may be wrong when $p=2$. As Langer recently pointed out to us by email, this result follows after correcting a small error in the calculation of the length of the torsion part of $\Omega_F$ in \cite[Corollary 7.4]{Langer16}, or from the first part of the proof of \cite[Corollary 4.1]{Zheng17}, which still holds while the remaining part may fail. Here we give a new proof of this result and construct some examples to show that it is optimal.
Initially, our main technique was inspired by the method in Proposition 4.3 of \cite{DG15}. However, we find that there is a small error in its proof (the general fibre $C_i$ may be non-reduced!). It does not affect the results of Proposition 4.3 in \cite{DG15}, but it may cause some trouble in Proposition 3.1 of \cite{C19}, which is a key step in the proof of Fujita's Conjecture on quasi-elliptic surfaces. However, the referee suggested to us a simple method to estimate the upper bound of the anti-canonical degree on an integral curve by using the adjunction formula, which helps us avoid the original tedious proof. The paper is organized as follows. The second section contains some preliminary materials which will be used in the last two sections to prove the main results. We present a new proof of the vanishing theorem in the third section and study the $d$-very ampleness of adjoint line bundles in the last section.
\textbf{Notations:} \begin{itemize}
\item Throughout this paper, $k$ is an algebraically closed field of characteristic $p>0$ and all varieties are defined over $k$;
\item $K_X$ is the canonical divisor of a smooth projective variety $X$.
\item For line bundles or divisors $A$ and $B$ on a projective surface, $(A\cdot B)$ denotes the intersection number of $A$ and $B$. \end{itemize}
\textbf{Acknowledgement:} The author would like to thank Yi Gu and Chen Jiang for their useful discussions and suggestions, and to thank Adrian Langer for his warm letter. The author is supported by grant NSFC (No. 12101618). Finally, the author would like to thank the referees for their careful reading and useful contribution to Proposition \ref{keylem}. \section{Preliminaries} \subsection{The degree of anti-canonical divisor on curves} The author thanks the referee for pointing out a stronger version of the following proposition with a simple proof. Indeed, as shown below, the upper estimate of the anti-canonical degree on an integral component of the general fibre of a fibration over a curve can easily be obtained by the adjunction formula. \begin{prop}\label{keylem} Let $Y$ be a projective surface over an algebraically closed field. Let $F$ be an effective Cartier divisor on $Y$ such that $(F^2)=0$ and $h^0(\mathcal{O}(F))=1$, and assume that $Y$ is Gorenstein along $F$. Then $-(K_Y\cdot F)\leq 2$ is even, and equality holds if and only if $F$ is a smooth rational curve. \end{prop} \begin{proof}
By taking blow-ups away from $F$, we may assume that $Y$ is Gorenstein. Then the adjunction formula says that $\omega_Y(F)|_F\simeq \omega_F$. By taking degree, we obtain
$((K_Y+F)\cdot F)=\deg(\omega_Y(F)|_F)=\deg(\omega_F)=-2\chi(\mathcal{O}(F))=2h^1(\mathcal{O}(F))-2h^0(\mathcal{O}(F))$, where we used the Riemann-Roch theorem and Serre duality on the Gorenstein curve $F$. Since $(F^2)=0$ and $h^0(\mathcal{O}(F))=1$, $(K_Y\cdot F)=2h^1(\mathcal{O}(F))-2\geq-2$ and is even. Moreover, equality holds if and only if $p_a=h^1(\mathcal{O}(F))=0$. \end{proof} \begin{rmk} The conditions in Proposition \ref{keylem} are satisfied when there is a fibration over a curve $f:Y\rightarrow C$ and $F$ is integral and contained in a fibre of $f$. \end{rmk} \subsection{Shepherd-Barron's result on the instability of locally free sheaves} \begin{defn} A rank $2$ locally free sheaf $E$ on a smooth projective surface $X$ is \emph{unstable} if there is a short exact sequence $$0\rightarrow\mathcal {O}(A)\rightarrow E\rightarrow I_Z\cdot\mathcal {O}(B)\rightarrow0$$ where $A, B\in {\rm Pic} (X)$, $I_Z$ is the ideal sheaf of an effective $0$-cycle $Z$ on $X$ and $A - B \in C_{++}(X)$, the positive cone of $NS(X)$. (Recall that $C_{++}(X)=\{x\in NS(X)\mid x^2 >0$ and $x\cdot H>0$ for some ample divisor $H$ and hence every ample divisor $H$\}.) We say that $E$ is \emph{semi-stable} if it is not unstable. \end{defn}
Let $F:X\rightarrow X$ be the \emph{(absolute) Frobenius morphism} and $F^e$ be the $e$-th iteration of $F$. For any coherent sheaf $G$, we simply write $G^{(p^e)}:=F^{e*}(G)$. For a morphism $f: Y\rightarrow X$, the \emph{relative $e$-iteration Frobenius morphism} $F_r^e$ is defined by the universal property of the fibre product in the following diagram $$\xymatrix{
Y \ar@/_/[ddr]_{f} \ar@/^/[drr]^{F^e}
\ar@{.>}[dr]|-{F_r^e} \\
& F^{e*}(Y) \ar[d] \ar[r]
& Y \ar[d]_{f} \\
& X \ar[r]^{F^e} & X .}
$$ \begin{thm}[ Theorem 1, \cite{SB91}]\label{Bogomolov} If $E$ is a rank $2$ locally free sheaf on a smooth projective surface X with $c_1^2(E)>4c_2(E)$, then there is a reduced and irreducible surface $Y\subset \mathbb{P}=\mathbb{P}(E)$ such that \begin{enumerate}
\item the composite $\rho: Y \rightarrow X$ is purely inseparable, say of degree $p^n$;
\item the $n$-iteration Frobenius $F^n: X \rightarrow X$ factors rationally through $Y$;
\item putting $\widetilde{E} = F^{n*} E$, $\widetilde{\mathbb{P}}= \mathbb{P}(\widetilde{E})$ and letting $\psi: \widetilde{\mathbb{P}}\rightarrow \mathbb{P}$ be the natural map, we have $\psi^*Y = p^nX_1$, where $X_1$ is the quasi-section of $\widetilde{\mathbb{P}}$ corresponding to an exact sequence $0 \rightarrow \mathcal{O}(A) \rightarrow \widetilde{E} \rightarrow I_Z\cdot \mathcal{O}(B) \rightarrow 0$, where $A, B \in {\rm Pic}(X)$, $Z \in X$ is a $0$-cycle and $N=A-B$ lies in the positive cone $C_{++}(X)$ of $NS(X)$. \end{enumerate} \end{thm} \begin{proof} See Theorem 1 in \cite{SB91}. \end{proof} The $Y$ in the above theorem could be constructed as follows (it can also be regarded as the image of $X_1$ under $\psi$ in Theorem 1 of \cite{SB91}): Fix a semi-stable vector bundle $E$ of rank $2$ with $c_1^2(E)>4c_2(E)$. Suppose that $e>0$ is the smallest integer such that $F^{e*}E$ is unstable; then we have the following diagram, where $s$ is the section determined by the instability of $F^{e*}E$, $Y=F^{e*}_r(X)$, which is reduced and irreducible, is the pull back of this section under the relative $e$-iteration Frobenius $F_r^e$, and $\rho=\pi\!\!\mid_Y$ is an inseparable morphism of degree $p^e$. $$ \xymatrix{
Y \ar@{_{(}->}[d] \ar[r]^{ \rho} & X \ar@{_{(}->}[d]^{s}\\
\mathbb{P}(E) \ar[dr]_{\pi} \ar[r]^{ F_r^e} & \mathbb{P}(E^{(p^e)})\ar[d]^{F^{e*}(\pi)} \\
& X } $$ \begin{prop}[\cite{SB91}, Corollary 5]\label{canonical divisor} With the same assumption as Theorem \ref{Bogomolov}, $$K_Y\equiv \rho^*(K_X-\frac{p^e-1}{p^e}N).$$ \end{prop}
\subsection{Tyurin's result on a construction of locally free sheaves}
Let $X$ be a nonsingular projective surface defined over an algebraically closed field. Let $L$ be a line bundle on $X$. For a $0$-cycle $Z\in X^{[d]}$, consider the short exact sequence $0\rightarrow L\otimes I_Z\rightarrow L\rightarrow L\mid_Z\rightarrow0$, then we have a long exact sequence $$0\rightarrow H^0(X,L\otimes I_Z) \rightarrow H^0(X,L) \rightarrow H^0(L\mid_Z) \rightarrow H^1(X,L\otimes I_Z) \rightarrow H^1(X,L) \rightarrow 0.$$ Now put $\delta(Z, L):=h^1(X,L\otimes I_Z)-h^1(X,L)$. Note that the integer $\delta(Z, L)$ is nonnegative. The cycle $Z$ is called\emph{ $L$-stable }(in the sense of Tyurin) if $\delta(Z, L) >\delta(Z_0, L)$ for any subcycle $Z_0$ of $Z$. Note that $L$ is $d$-very ample if and only if $\delta(Z, L)=0$ for all $Z \in X^{[d+1]}$. \begin{thm} [\cite{T87}, Lemma 1.2]\label{T87} Let $L$ be a line bundle on a nonsingular projective surface $X$ defined over an algebraically closed field and let $Z$ be an $L$-stable $0$-cycle of $X$. Then there exists an extension $$0 \rightarrow H^1(X,L\otimes I_Z) \otimes \mathcal{O}(K_X)\rightarrow E(Z, L)\rightarrow L\otimes I_Z \rightarrow 0,$$ where $E(Z, L)$ is a locally free sheaf on $X$ of rank $h^1(X,L\otimes I_Z)+ 1$. \end{thm} \section{Vanishing theorem on quasi-elliptic surfaces} In this section we give a new proof of the following vanishing theorem from a letter of Langer, which can also be obtained by a modification of a small error in the proof of \cite[Corollary 7.4]{Langer16} or by the first part of the proof of \cite[Corollary 4.1]{Zheng17} (the other part may go wrong), and then show it is optimal by some examples. \begin{thm}[Langer]\label{vanishing theorem} Let $X$ be a quasi-elliptic surface, and $F$ be a general fibre of the quasi-elliptic fibration $f:X\rightarrow C$. Let $L$ be a nef and big divisor on $X$. \begin{itemize}
\item If $ p=2$ and $(L\cdot F)>2$, then $H^1(X,L^{-1})=0$ and
\item If $ p=3$ and $(L\cdot F)>1$, then $H^1(X,L^{-1})=0$. \end{itemize} \end{thm} \begin{proof} Assume that $H^1(X,L^{-1})\neq0$. Let us take any nonzero element $0\neq\alpha\in H^1(X,L^{-1})$ which gives a non-split extension $$0\rightarrow \mathcal {O}_X\rightarrow E\rightarrow L\rightarrow0.$$
Corollary 17 in \cite{SB91} implies that $F^{e*}E$ will split when $e\gg 0$. Suppose that $e>0$ is the smallest integer such that it splits, then we have the following diagram, where $s$ is the section determined by the splitting of $F^{e*}E$, $Y=F^{e*}_r(X)$ is the pull back of this section which is reduced and irreducible and $\rho=\pi\!\!\mid_Y$ is an inseparable morphism of degree $p^e$.
$$ \xymatrix{
Y \ar@{_{(}->}[d] \ar[r]^{ \rho} & X \ar@{_{(}->}[d]^{s}\\
\mathbb{P}(E) \ar[dr]_{\pi} \ar[r]^{ F_r^e} & \mathbb{P}(E^{(p^e)})\ar[d]^{F^{e*}(\pi)} \\
& X } $$
But the general fibre of $g= f \circ \rho: Y\rightarrow C$ may not be reduced: $$\rho^*(F)=p^{e-e_0}\widetilde{F}$$ with $ 0\leq e_0\leq e$ and $\widetilde{F}$ integral.
So by Proposition \ref{canonical divisor} we have \begin{eqnarray*}-(K_Y\cdot \widetilde{F})&=&(\rho^*((p^e-1)L-K_X)\cdot \widetilde{F})\\ &=&p^{e_0-e}(\rho^*((p^e-1)L-K_X)\cdot \rho^*F)\\ &=&p^{e_0}(((p^e-1)L-K_X)\cdot F)\\ &=&p^{e_0}(p^e-1)(L\cdot F), \end{eqnarray*} where the last equality holds because $(K_X\cdot F)=0$, $f$ being a quasi-elliptic fibration. By Proposition \ref{keylem}, we have the inequality $0<p^{e_0}(p^e-1)(L\cdot F)=-(K_Y\cdot \widetilde{F})\leq 2$ and $-(K_Y\cdot \widetilde{F})$ is even, so $p^{e_0}(p^e-1)(L\cdot F)=-(K_Y\cdot \widetilde{F})=2$, which gives that\\ when $p=2$, \begin{itemize}
\item $e=1$, $e_0=0$ and $(L\cdot F)=2$ or
\item $e=1$, $e_0=1$ and $(L\cdot F)=1$ \end{itemize} and when $p=3$, \begin{itemize}
\item $e=1$, $e_0=0$ and $(L\cdot F)=1$ \end{itemize} So we get our result. \end{proof} \begin{cor}\label{thm} Let $X$ be a quasi-elliptic surface and $L$ a nef and big divisor on $X$, then we have \begin{itemize}
\item if $ p=2$ and $n\geq3$, then $H^1(X,L^{-n})=0$ and
\item if $ p=3$ and $n\geq2$, then $H^1(X,L^{-n})=0$. \end{itemize} \end{cor} Next, we will construct a quasi-elliptic surface to show that the above vanishing theorem is optimal. \begin{exmp}\label{example} Let $k$ be an algebraically closed field with ${\rm char}(k)=2$ and $C\subseteq \mathbb{P}_k^2=\mathrm{Proj}(k[X,Y,Z])$ be the plane curve defined by the equation: $$Y^{2e}-X^{2e-1}Y=XZ^{2e-1},$$
where $e>1$ is a free variable. It is easy to check that $C$ is a smooth curve and $2g(C)-2=2e(2e-3)$. Take $\infty:=[0,0,1]$ on $C$. Then $U:=C\backslash \infty=C\cap \{ X\neq 0\}$ is an affine open subset defined by $y^{2e}-y=z^{2e-1}$ with $y=Y/X$ and $z=Z/X$. As a result, $\mathrm{d}z$ is a generator of $\Omega^1_{C}|_{U}$ since $\mathrm{d}y=z^{2e-2}\mathrm{d}z$. So we have $$ K_C=\mathrm{div}(\mathrm{d}z)=(2g(C)-2)\infty=2e(2e-3)\cdot \infty. $$ Let $D=e(2e-3)\cdot \infty$, then $C$ is a Tango curve with a Tango structure $L=\mathcal {O}_C(D)$ by \cite[Definition 2.1]{Zheng17} (see \cite[]{Tango72,Mu13,GZZ,Zh22} for more details about this example of Tango curves).
Set $e=3e_1$ and $N=e_1(2e-3)\cdot \infty$, then $L=\mathcal {O}_C (3N)$. Next we follow the same argument as section 2 in \cite{Zheng17} to construct a Raynaud surface $X$ which is an $l$-cycle cover of a ruled surface $P$ over $C$ with $l=p+1=3$ $$\phi: X\stackrel{\psi}{\longrightarrow} P\stackrel{\pi}{\longrightarrow} C.$$ Note that it is a quasi-elliptic fibration by \cite[Proposition 2.3]{Zheng17} and denote general fibre by $F$.
In this case, let $a=2$ and $b=1$, then $Z_{a,b}=\mathcal {O}_X(a\widetilde{E})\otimes \phi^*N^b=\mathcal {O}_X(2\widetilde{E})\otimes \phi^*N$ is ample and satisfies the condition in \cite[Theorem 3.7]{Zheng17}. Hence we get $$H^1(X,Z_{2,1}^{-1})\neq0.$$
But $$(Z_{2,1}\cdot F)=2>1,$$ which leads to a contradiction with \cite[Corollary 7.4]{Langer16}.
Moveover, if we set $e_1=2e_2$ for some positive integer $e_2$, then $Z_{2,1}=2A$ with $A=\mathcal {O}_X(\widetilde{E})\otimes \mathcal {O}(e_2(2e-3)\cdot \infty)$ ample and $H^1(X,A^{-2})\neq0$, which leads to a contradiction with \cite[Corollary 4.1]{Zheng17}. \end{exmp} By a similar argument we could also obtain a quasi-elliptic surface $X$ in characteristic $3$ such that $H^1(X,A^{-1})\neq0$ with $A$ ample on $X$. \section{Adjoint linear system on quasi-elliptic surfaces} In this section we prove a theorem of $d$-very ampleness of Reider-type in positive characteristic on quasi-elliptic surfaces. \begin{thm}\label{d-veryample} Let $X$ be a quasi-elliptic surface, and $F$ be a general fibre of the quasi-elliptic fibration $f:X\rightarrow C$. Let $L$ be a nef and big divisor on $X$. Assume that $$(L^2)>4(d+1)$$ for a nonnegative integer $d$, then we have the following descriptions. \begin{enumerate}
\item When $p=2$, we assume that $(L\cdot F)>2$ in addition. Then either $\mathcal{O}_X(K_X+L)$ is $d$-very ample or there exists an effective divisor $B$ containing a $0$-cycle $Z^{(2)}=F^{*}Z$, which is the Frobenius pull back of a $0$-cycle $Z$ of $\deg Z\leq d+1$ where the $d$-very ampleness fails, such that $2L-B$ is $\mathbb{Q}$-effective and
$$2(L\cdot B)-4\deg Z \leq (B^2)\leq (L\cdot B) \leq 4\deg Z;$$
\item When $p=3$, we assume that $(L\cdot F)>1$ in addition. Then either $\mathcal{O}_X(K_X+L)$ is $d$-very ample or there exists an effective divisor $B$ containing a $0$-cycle $Z^{(3)}=F^{*}Z$, which is the Frobenius pull back of a $0$-cycle $Z$ of $\deg Z\leq d+1$ where the $d$-very ampleness fails, such that $3L-2B$ is $\mathbb{Q}$-effective and
$$3(L\cdot B)-9\deg Z \leq (B^2)\leq \frac{3(L\cdot B)}{2} \leq 9\deg Z.$$ \end{enumerate} \end{thm} \begin{proof} Assume that $\mathcal{O}_X(K_X+L)$ is $(d-1)$-very ample but not $d$-very ample. Then there exist a $0$-cycle $Z$ of degree $d+1$ where the $d$-very ampleness fails. By Theorem \ref{vanishing theorem}, we have $H^1(X,\mathcal{O}_X(K_X+L))=0$,
then by Lemma \ref{rank2vector} we obtain a locally free sheaf $E$ of rank $2$ on $X$ sitting in the short exact sequence $$0\rightarrow \mathcal {O}_X \rightarrow E \rightarrow I_Z\cdot L \rightarrow 0.$$ Moreover we have $$c_1^2(E)-4c_2(E)=(L^2)-4 \deg Z= (L^2)-4(d+1)>0.$$ When $p=3$, by Lemma \ref{unstable} $F^*(E)$ is unstable, then we have the following diagram with exact vertical and horizontal sequences: $$\xymatrix{
& & 0 \ar[d]_{} & & \\
& & \mathcal{O}(A) \ar[d]_{} \ar[rd]^{\sigma}& & \\
0 \ar[r]^{} & \mathcal {O}_X\ar[r]^{} & E^{(3)} \ar[d]_{} \ar[r]^{} & I_Z^{(3)}\cdot L^{\otimes 3} \ar[r]^{} & 0 \\
& & I_W\cdot \mathcal{O}(B) \ar[d]_{} & & \\
& & 0 & & }
$$ where $A, B \in {\rm Pic}(X)$ and $I_W$ is the ideal sheaf of a $0$-cycle $W$ on $X$ and $A-B$ satisfies \begin{itemize}
\item $((A-B)^2)\geq 9(c_1^2(E)-4c_2(E))>0$ and
\item $(A-B\cdot H)>0$ for any ample divisor $H$ on $X$. \end{itemize} Note that the composition map $\sigma:A\rightarrow E^{(3)}\rightarrow I_Z^{(3)}\cdot L^{\otimes 3}$ is nonzero, otherwise we have $A\hookrightarrow \mathcal {O}_X$ and $(A-B\cdot H)=(2A-3L\cdot H)<0$ for any ample divisor $H$ on $X$ which is a contradiction. Let $B=3L-A$ be the effective divisor defined by $\sigma$. Then it contains the $0$-cycle $Z^{(3)}=F^{*}Z$ since its local function is contained in the ideal sheaf $I_Z^{(3)}$ by the map $\sigma$. Thus we have \begin{enumerate}
\item $2(L\cdot B)\leq 3(L^2)$,
\item $3(L\cdot B)-(B^2)\leq 9\deg Z$,
\item $(L\cdot B)\geq0$ and
\item $(L^2)(B^2)\leq (L\cdot B)^2$, \end{enumerate} where the first inequality is from the unstablity of $E^{(3)}$, the second one is obtained by computing the second Chern class of $E^{(3)}$ with the vertical and horizontal sequences and last one is form the Hodge index theorem. Put them together: $$3(L\cdot B)-9\deg Z \leq (B^2) \leq \frac{(L\cdot B)^2}{L^2}\leq \frac{3(L\cdot B)}{2}.$$ So we have $$3(L\cdot B)-9\deg Z \leq (B^2)\leq \frac{3(L\cdot B)}{2} \leq 9\deg Z.$$
When $p=2$, by Lemma \ref{unstable} we have $F^{*}(E)$ is unstable. Then by a similar argument as above, we will get an effective divisor $B$ containing the $0$-cycle $Z^{(2)}=F^{*}Z$ such that $$2(L\cdot B)-4\deg Z \leq (B^2)\leq (L\cdot B) \leq 4\deg Z.$$ \end{proof}
\begin{lem}[\cite{BS91}, Lemma 1.2]\label{filtration} Let $R$ be a Noetherian local ring and, $I$ and $J$ be ideals of $R$ with $I \subseteq J$. Assume that ${\rm length}(R/I)<\infty$. Then there exists a chain $$I=I_0 \subset I_1 \subset \cdots \subset I_r = J$$ of ideals of $R$ with ${\rm length}(I_i/I_{i-1})=1$ for $i=1,\ldots,r$. \end{lem}
The following lemma is a slight improvement of Lemma 2.2 in \cite{NT93} by H. Terakawa. For reader's convenience, we present a proof here. \begin{lem}[\cite{TH99}, Lemma 2.2]\label{rank2vector} Let $X$ be a nonsingular projective surface defined over an algebraically closed field k and $L$ a line bundle on $X$ such that $H^1(X,\mathcal{O}_X(K_X+L))=0$. Let $Z$ be a $0$-cycle with $\deg Z = d + 1$ where $d$ is a nonnegative integer. Assume that $\mathcal{O}_X(K_X+L)$ is $d-1$-very ample and the restriction map
$$\Gamma(\mathcal{O}_X(K_X+L))\rightarrow \Gamma(\mathcal {O}_X(K_X+L)|_Z)$$ is not surjective. Then there exists a rank $2$ locally free sheaf $E$ on $X$ which is given by the short exact sequence $$0\rightarrow \mathcal {O}_X \rightarrow E \rightarrow I_Z\cdot L \rightarrow 0,$$ where $I_Z$ is the ideal sheaf of $Z$. \end{lem} \begin{proof} From the conditions we see that the cycle $Z$ is $\mathcal{O}_X(K_X+L)$-stable in the sense of Tyurin. Then by Theorem \ref{T87} we have a locally free extension $$0 \rightarrow H^1(X,(\mathcal{O}_X(K_X+L))\otimes I_Z) \otimes \mathcal {O}_X (K_X)\rightarrow E(Z, (\mathcal{O}_X(K_X+L)))\rightarrow (\mathcal{O}_X(K_X+L))\otimes I_Z \rightarrow 0,$$ and it is sufficient to prove that $h^1((\mathcal{O}_X(K_X+L))\otimes I_Z)=1$.
By Lemma \ref{filtration}, we can take a sub-cycle $Z_0\subset Z$ of $\deg Z_0=d$. And we have the following diagram with exact rows. $$\xymatrix{
0 \ar[r] & (\mathcal{O}_X(K_X+L))\otimes I_{Z_0} \ar[r] & (\mathcal{O}_X(K_X+L)) \ar[r] & (\mathcal{O}_X(K_X+L))|_{Z_0} \ar[r] & 0 \\
0 \ar[r] & (\mathcal{O}_X(K_X+L))\otimes I_{Z} \ar[r]\ar@{_{(}->}[u]^i & (\mathcal{O}_X(K_X+L)) \ar[u]^{id}\ar[r] & (\mathcal{O}_X(K_X+L))|_{Z} \ar@{>>}[u]^j \ar[r] & 0
}.$$ Note that ${\rm kernel}(j)={\rm coker}(i)=k$ and by Tyurin' stability, we have
$$0=\delta(Z_0, \mathcal{O}_X(K_X+L))<\delta(Z, \mathcal{O}_X(K_X+L))={\rm dim \ coker}(H^0(\mathcal{O}_X(K_X+L))\rightarrow H^0((\mathcal{O}_X(K_X+L))|_{Z}))\leq 1.$$ Then considering the long exact sequence induced by the second row with the vanishing condition $H^1(\mathcal{O}_X(K_X+L))=0$, we obtain
$$H^1((\mathcal{O}_X(K_X+L))\otimes I_Z)={\rm coker}(H^0(\mathcal{O}_X(K_X+L))\rightarrow H^0((\mathcal{O}_X(K_X+L))|_{Z}))=k.$$ \end{proof} There is a little error in the proof of \cite[ Propositon 4.3]{DG15} that $\rho^*F$ may be not reduced, and here we correct and improve it. \begin{lem}\label{unstable} Let $X$ be a quasi-elliptic surface over an algebraically closed field of characteristic $p$, and $F$ be a general fibre of the quasi-elliptic fibration $f:X\rightarrow C$. Let $E$ be a rank $2$ vector bundle on $X$ with $c_1^2(E)-4c_2(E)>0$ then $F^{*}E$ is unstable. \end{lem} \begin{proof} We may assume $E$ is semi-stable. By Theorem \ref{Bogomolov}, let $e>0$ be the smallest integer such that $F^{e*}E$ is unstable: $$0\rightarrow\mathcal {O}(A)\rightarrow F^{e*}E\rightarrow I_Z\cdot\mathcal {O}(B)\rightarrow0.$$
Considering the composition $g= f \circ \rho: Y\rightarrow C$, $\rho^*F$ is a family of curves in $Y$ and the general fibre of $g$ may be not reduced: $$\rho^*(F)=p^{e-e_0}\widetilde{F}$$ with $ 0\leq e_0\leq e$ and $\widetilde{F}$ integral.
So by Propostion \ref{canonical divisor} we have \begin{eqnarray*}-K_Y\cdot \widetilde{F}&=&(\rho^*(\frac{p^e-1}{p^e}(A-B)-K_X)\cdot \widetilde{F})\\ &=&p^{e_0-e}(\rho^*(\frac{p^e-1}{p^e}(A-B)-K_X)\cdot \rho^*F)\\ &=&p^{e_0}(\frac{p^e-1}{p^e}(A-B)-K_X\cdot F)\\
&=&(p^e-1)\frac{(A-B\cdot F)}{p^{e-e_0}}>0 \end{eqnarray*} where the fourth equality is due to that $f$ is a quasi-elliptic fibration and the inequality is because $A-B$ is big.
By Proposition \ref{keylem}, we have $$(p^e-1)\frac{(A-B\cdot F)}{p^{e-e_0}}=-(K_Y\cdot \widetilde{F})= 2,$$ which gives that when $p=2$, \begin{itemize}
\item $e=1$, $e_0=0$ and $(A-B\cdot F)=4$ or
\item $e=1$, $e_0=1$ and $(A-B\cdot F)=2$ \end{itemize} and when $p=3$, \begin{itemize}
\item $e=1$, $e_0=0$ and $(A-B\cdot F)=3$ or
\item $e=1$, $e_0=1$ and $(A-B\cdot F)=1$. \end{itemize} So we get our result. \end{proof}
\begin{bibdiv} \begin{biblist}
\bib{BS91}{article}{
author={Beltrametti, M.},
author={Sommese, A. J.},
title={Zero cycles and $k$-th order embeddings of smooth projective
surfaces},
note={With an appendix by Lothar G\"{o}ttsche},
conference={
title={Problems in the theory of surfaces and their classification},
address={Cortona},
date={1988},
},
book={
series={Sympos. Math., XXXII},
publisher={Academic Press, London},
},
date={1991},
pages={33--48},
review={\MR{1273371}}, } \bib{C19}{article}{
author={Chen, Yen-An},
title={Fujita's conjecture for quasi-elliptic surfaces},
journal={Math. Nachr.},
volume={294},
date={2021},
number={11},
pages={2096--2104},
issn={0025-584X},
review={\MR{4371285}},
doi={10.1002/mana.202000522}, } \bib{CG90}{article}{
author={Catanese, Fabrizio},
author={G\oe ttsche, Lothar},
title={$d$-very-ample line bundles and embeddings of Hilbert schemes of
$0$-cycles},
journal={Manuscripta Math.},
volume={68},
date={1990},
number={3},
pages={337--341},
issn={0025-2611},
review={\MR{1065935}},
doi={10.1007/BF02568768}, }
\bib{DG15}{article}{
author={Di Cerbo, Gabriele},
author={Fanelli, Andrea},
title={Effective Matsusaka's theorem for surfaces in characteristic $p$},
journal={Algebra Number Theory},
volume={9},
date={2015},
number={6},
pages={1453--1475},
issn={1937-0652},
review={\MR{3397408}},
doi={10.2140/ant.2015.9.1453}, }
\bib{GZZ}{article}{
author={Gu, Yi},
author={Zhang, Lei},
author={Zhang, Yongming},
title={Counterexamples to Fujita's conjecture on surfaces in positive
characteristic},
journal={Adv. Math.},
volume={400},
date={2022},
pages={Paper No. 108271, 17},
issn={0001-8708},
review={\MR{4386546}},
doi={10.1016/j.aim.2022.108271}, } \bib{Langer16}{article}{
author={Langer, Adrian},
title={The Bogomolov-Miyaoka-Yau inequality for logarithmic surfaces in
positive characteristic},
journal={Duke Math. J.},
volume={165},
date={2016},
number={14},
pages={2737--2769},
issn={0012-7094},
review={\MR{3551772}},
doi={10.1215/00127094-3627203}, } \bib{Mu13}{article}{
author={Mukai, Shigeru},
title={Counterexamples to Kodaira's vanishing and Yau's inequality in
positive characteristics},
journal={Kyoto J. Math.},
volume={53},
date={2013},
number={2},
pages={515--532},
issn={2156-2261},
review={\MR{3079312}},
doi={10.1215/21562261-2081279}, } \bib{NT93}{article}{
author={Nakashima, Tohru},
title={On Reider's method for surfaces in positive characteristic},
journal={J. Reine Angew. Math.},
volume={438},
date={1993},
pages={175--185},
issn={0075-4102},
review={\MR{1215653}},
doi={10.1515/crll.1993.438.175}, } \bib{Reider1988}{article}{
author={Reider, Igor},
title={Vector bundles of rank $2$ and linear systems on algebraic
surfaces},
journal={Ann. of Math. (2)},
volume={127},
date={1988},
number={no.~2},
pages={309--316},
issn={0003-486X},
review={\MR{932299}}, }
\bib{SB91}{article}{
author={Shepherd-Barron, N. I.},
title={Unstable vector bundles and linear systems on surfaces in
characteristic $p$},
journal={Invent. Math.},
volume={106},
date={1991},
number={2},
pages={243--262},
issn={0020-9910},
review={\MR{1128214}},
doi={10.1007/BF01243912}, } \bib{Tango72}{article}{
author={Tango, Hiroshi},
title={On the behavior of extensions of vector bundles under the
Frobenius map},
journal={Nagoya Math. J.},
volume={48},
date={1972},
pages={73--89},
issn={0027-7630},
review={\MR{314851}}, }
\bib{T87}{article}{
author={Tyurin, A. N.},
title={Cycles, curves and vector bundles on an algebraic surface},
journal={Duke Math. J.},
volume={54},
date={1987},
number={1},
pages={1--26},
issn={0012-7094},
review={\MR{885772}},
doi={10.1215/S0012-7094-87-05402-0}, } \bib{TH99}{article}{
author={Terakawa, Hiroyuki},
title={The $d$-very ampleness on a projective surface in positive
characteristic},
journal={Pacific J. Math.},
volume={187},
date={1999},
number={1},
pages={187--199},
issn={0030-8730},
review={\MR{1674325}},
doi={10.2140/pjm.1999.187.187}, } \bib{Zh22}{article}{
author={Zhang, Yongming},
title={Strong non-vanishing of cohomologies and strong non-freeness of adjoint line bundles on $n$-Raynaud surfaces},
journal={arXiv:2210.00967},
date={2022}, } \bib{Zheng17}{article}{
author={Zheng, Xudong},
title={Counterexamples of Kodaira vanishing for smooth surfaces of
general type in positive characteristic},
journal={J. Pure Appl. Algebra},
volume={221},
date={2017},
number={10},
pages={2431--2444},
issn={0022-4049},
review={\MR{3646309}},
doi={10.1016/j.jpaa.2016.12.030}, }
\end{biblist} \end{bibdiv}
\end{document}
|
arXiv
|
{
"id": "2103.04268.tex",
"language_detection_score": 0.6834897994995117,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
}
|
arXiv/math_arXiv_v0.2.jsonl
| null | null |
\begin{document}
\title{On the asymptotic distribution of the mean absolute deviation about the mean}
\begin{abstract} The mean absolute deviation about the mean is an alternative to the standard deviation for measuring dispersion in a sample or in a population.
For stationary, ergodic time series with a finite first moment, an asymptotic expansion for the sample mean absolute deviation is proposed. The expansion yields the asymptotic distribution of the sample mean absolute deviation under a wide range of settings, allowing for serial dependence or an infinite second moment.
\par
\noindent\emph{Key words:} central limit theorem; dispersion; ergodicity; regular variation; stable distribution; strong mixing. \end{abstract}
\section{Introduction} \label{sec:intro}
The mean absolute deviation of a sample $X_1, \ldots, X_n$ about the sample mean $\bar{X}_n = n^{-1} \sum_{i=1}^n X_i$ is given by the statistic \begin{equation} \label{eq:MADn}
\hat{\MAD}_n = \frac{1}{n} \sum_{i=1}^n \abs{X_i - \bar{X}_n}. \end{equation} If the random variables $X_i$ have common distribution function $F$ with finite mean $\mu = \int_\mathds{R} x \, \mathrm{d} F(x)$, then $\hat{\MAD}_n$ is an estimator of the mean absolute deviation \begin{equation} \label{eq:MAD}
\theta = \E[ \abs{X_1 - \mu} ] = \int_{\reals} \abs{x - \mu} \, \mathrm{d} F(x). \end{equation}
The (sample) mean absolute deviation is an alternative to the standard deviation for measuring dispersion. Its advantages and drawbacks have been widely discussed in the literature. The standard deviation is motivated mainly from optimality results in the context of independent random sampling from the normal distribution, an analysis dating back to Fisher and even Laplace \citep{stigler:1973}. However, the mean absolute deviation may be more appropriate in case of departures from normality or in the presence of outliers \citep{tukey:1960, huber:1996}. It may also offer certain pedagogical advantages. For extensive discussions and comparisons, see \cite{pham-gia:hung:2001} and \cite{gorard:2005} and the references therein.
Because of the presence of the absolute value function, finding the asymptotic distribution of the sample mean absolute deviation is surprisingly challenging. In \citet[Section~2]{pollard:1989} and \citet[Example~19.25]{vdvaart:1998}, the exercise is put forward as a showcase for the power of empirical process theory. In a nutshell, their analysis is as follows. Let \[
\tilde{\MAD}_n = \frac{1}{n} \sum_{i=1}^n \abs{X_i - \mu} \] be the version of the sample mean absolute deviation that could be computed if the true mean, $\mu$, were known. Consider the dispersion function \[
D_F(u) = \int_{\reals} \abs{x - u} \, \mathrm{d} F(x), \qquad u \in \mathds{R}. \] Clearly, $\theta = D_F(\mu)$. Under independent random sampling and under the presence of finite second moments, asymptotic uniform equicontinuity of the empirical process $u \mapsto n^{-1/2} \sum_{i=1}^n \bigl( \abs{X_i - u} - D_F(u) \bigr)$ implies that \begin{align} \nonumber
\sqrt{n} \bigl( \hat{\MAD}_n - \theta \bigr)
&= \sqrt{n} \bigl( \hat{\MAD}_n - D_F(\bar{X}_n) \bigr) + \sqrt{n} \bigl( D_F( \bar{X}_n ) - \theta \bigr) \\ \label{eq:MADn:vdV}
&= \sqrt{n} \bigl( \tilde{\MAD}_n - \theta \bigr) + \sqrt{n} \bigl( D_F( \bar{X}_n ) - \theta \bigr)
+ o_p(1), \qquad n \to \infty. \end{align} If $F$ is continuous at $u$, then $D_F$ is differentiable at $u$ with derivative $D_F'(u) = 2 F(u) - 1$ \citep[Theorem~1]{munoz:sanchez:1990}. By the delta method, it follows from \eqref{eq:MADn:vdV} that, if $F$ is continuous at $\mu$, \begin{align} \label{eq:MAD:empproc}
\sqrt{n} \bigl( \hat{\MAD}_n - \theta \bigr)
&= \sqrt{n} \bigl( \tilde{\MAD}_n - \theta \bigr)
+ \bigl(2 F(\mu) - 1 \bigr) \, \sqrt{n} ( \bar{X}_n - \mu ) + o_p(1) \\ \label{eq:MAD:asnorm}
&\rightsquigarrow N(0, \sigma_{\theta}^2), \qquad n \to \infty, \end{align} the arrow `$\rightsquigarrow$' denoting weak convergence. The asymptotic variance equals \[
\sigma_{\theta}^2 = \operatorname{var} \bigl( \abs{X_1 - \mu} + (2 F(\mu) - 1) \, X_1 \bigr). \]
The above proof is elegant and short. However, it rests on a body of advanced empirical process theory. Using direct arguments, \cite{babu:rao:1992} establish higher-order expansions for $\sqrt{n} \bigl( \hat{\MAD}_n - \theta \bigr)$ under the additional assumption that $F$ is H\"older continuous or even differentiable at $\mu$.
The results described so far are limited to independent random sampling from a distribution $F$ with a finite second moment, and, except for \eqref{eq:MADn:vdV}, without an atom at its mean $\mu$. In the literature, no results seem to be available on the asymptotic distribution of the sample mean absolute deviation in the case of serial dependence or when the second moment does not exist. Even in the case of independent random sampling from a distribution with finite second moment, the asymptotic distribution of \eqref{eq:MADn:vdV} in case $F$ has an atom at $\mu$ seems not to have been described yet.
The aim of this paper is to derive an asymptotic expansion of $\hat{\MAD}_n - \theta$ from first principles and under minimal assumptions (Section~\ref{sec:expansion}). The expansion yields the asymptotic distribution of the sample mean absolute deviation under a wide range of settings, including serial dependence, in the infinite-variance case, and without smoothness assumptions (Section~\ref{sec:dto}). Even in the case of independent random sampling from a distribution with finite second moment, the asymptotic distribution of $\sqrt{n} (\hat{\MAD}_n - \theta)$ is found to be non-Gaussian in case $F$ possesses an atom at $\mu$.
\section{Asymptotic expansion} \label{sec:expansion}
Stationarity of the time series $(X_i)_{i \geqslant 1}$ means that for all positive integers $k \geqslant 1$ and $h \geqslant 0$, the distribution of the random vector $(X_{1+h}, \ldots, X_{k+h})$ does not depend on $h$. By the Birkhoff ergodic theorem \citep[Theorem~10.6]{kallenberg:2002}, stationarity and ergodicity imply that, for every Borel measurable function $f : \mathds{R} \to \mathds{R}$ such that $\E[\abs{f(X_1)}] < \infty$, we have \begin{equation} \label{eq:ergodic}
\frac{1}{n} \sum_{i=1}^n f(X_i) \to \E[ f(X_1) ], \qquad n \to \infty, \text{ almost surely.} \end{equation}
Assume that the stationary distribution $F$ has finite mean, $\mu$, and recall the mean absolute deviation $\theta$ in \eqref{eq:MAD} and its sample version $\hat{\theta}_n$ in \eqref{eq:MADn}. Write \begin{equation} \label{eq:MAD:easy}
\hat{\theta}_n - \theta = \frac{1}{n} \sum_{i=1}^n \bigl( |X_i - \bar{X}_n| - |X_i - \mu| \bigr)
+ \frac{1}{n} \sum_{i=1}^n (\abs{X_i - \mu} - \theta) \end{equation} The second term on the right-hand side is just a sum of centered random variables. It is the first term which poses a challenge.
\begin{lemma} \label{lem:infl} Let $X_1, X_2, \ldots$ be a stationary, ergodic time series with finite mean $\mu = \E[X_1]$. We have, as $n \to \infty$, almost surely, \begin{align*}
\frac{1}{n} \sum_{i = 1}^n \bigl( |X_i - \bar{X}_n| - |X_i - \mu| \bigr)
&= (\bar{X}_n - \mu) \, \bigl( \mathbb{P}[X_1 < \mu] - \mathbb{P}[X_1 > \mu] \bigr) \\
& \qquad \mbox{}
+ \abs{\bar{X}_n - \mu} \, \mathbb{P}[ X_1 = \mu ] + o \bigl( \abs{\bar{X}_n - \mu} \bigr). \end{align*} \end{lemma}
\begin{proof} Consider the random variables \begin{align*}
A_n &= \min( \bar{X}_n, \mu ), & B_n &= \max( \bar{X}_n, \mu ). \end{align*} Let $\operatorname{sign}(z)$ be equal to $1$, $0$, or $-1$ according to whether $z$ is larger than, equal to, or smaller than zero, respectively. For $x \in \reals \setminus (A_n, B_n)$, that is, for $x$ not between $\mu$ and $\bar{X}_n$, a case-by-case analysis reveals that \[
\abs{x - \bar{X}_n} - \abs{x - \mu}
= ( \bar{X}_n - \mu ) \, \operatorname{sign}( \mu - x ) + \abs{\bar{X}_n - \mu} \, \ind_{\{\mu\}}(x). \] Consider the partition $\{1, \ldots, n\} = \mathcal{K}_n \cup \mathcal{L}_n$, where \begin{align*}
\mathcal{K}_n &= \{ i = 1, \ldots, n : A_n < X_i < B_n \}, \\
\mathcal{L}_n &= \{1, \ldots, n\} \setminus \mathcal{K}_n. \end{align*} By convention, the sum over the empty set is zero. We find that \begin{align*}
\lefteqn{
\sum_{i = 1}^n \bigl( \abs{X_i - \bar{X}_n} - \abs{X_i - \mu} \bigr)
} \\
&= (\bar{X}_n - \mu) \, \sum_{i \in \mathcal{L}_n } \operatorname{sign}( \mu - X_i ) + \abs{\bar{X}_n - \mu} \, \sum_{i \in \mathcal{L}_n } \ind_{\{\mu\}}(X_i) \\
&\quad\mbox{} + \sum_{i \in \mathcal{K}_n} \bigl( \abs{X_i - \bar{X}_n} - \abs{X_i - \mu} \bigr) \\
&= (\bar{X}_n - \mu) \, \sum_{i = 1}^n \operatorname{sign}( \mu - X_i ) + \abs{ \bar{X}_n - \mu } \, \sum_{i = 1}^n \ind_{\{\mu\}}(X_i) + R_n \end{align*} with \begin{multline*}
R_n
= \sum_{i \in \mathcal{K}_n} \bigl( \abs{X_i - \bar{X}_n} - \abs{X_i - \mu} \bigr) \\
- (\bar{X}_n - \mu) \, \sum_{i \in \mathcal{K}_n } \operatorname{sign}( \mu - X_i )
- \abs{ \bar{X}_n - \mu } \, \sum_{i \in \mathcal{K}_n } \ind_{\{\mu\}}(X_i) \end{multline*} By \eqref{eq:ergodic}, we have, as $n \to \infty$, almost surely, \begin{align*}
\frac{1}{n} \sum_{i = 1}^n \operatorname{sign}( \mu - X_i )
&\to \E[ \operatorname{sign}( \mu - X_1 ) ] = \mathbb{P}[X_1 < \mu] - \mathbb{P}[X_1 > \mu], \\
\frac{1}{n} \sum_{i = 1}^n \ind_{\{\mu\}}(X_i)
&\to \mathbb{P}[ X_1 = \mu ]. \end{align*} As a consequence, as $n \to \infty$ and almost surely, \begin{align*}
\frac{1}{n} \sum_{i = 1}^n \bigl( \abs{X_i - \bar{X}_n} - \abs{X_i - \mu} \bigr)
&= ( \bar{X}_n - \mu ) \bigl( \mathbb{P}[X_1 < \mu] - \mathbb{P}[X_1 > \mu] + o(1) \bigr) \\
&\qquad \mbox{} + \abs{ \bar{X}_n - \mu } \, \bigl( \mathbb{P}[ X_1 = \mu ] + o(1) \bigr) + \frac{R_n}{n}. \end{align*} Further, as $\abs{ \abs{x - \bar{X}_n} - \abs{x - \mu} } \leqslant \abs{ \bar{X}_n - \mu }$ for all $x \in \mathds{R}$, we obtain \[
\abs{R_n} \leqslant 3 \abs{ \bar{X}_n - \mu } \, \abs{ \mathcal{K}_n }, \] where $\abs{ \mathcal{K}_n }$ is the number of elements of $\mathcal{K}_n$. The lemma therefore follows if we can prove that \begin{equation} \label{eq:Kn0}
\frac{\abs{ \mathcal{K}_n }}{n} \to 0, \qquad n \to \infty, \text{ a.s.} \end{equation} For $x \in \mathds{R}$, put \begin{align*}
\hat{F}_n(x) &= \frac{1}{n} \sum_{i=1}^n \ind(X_i \leqslant x), &
\hat{F}_n(x-) &= \frac{1}{n} \sum_{i=1}^n \ind(X_i < x). \end{align*} We have \[
\frac{\abs{ \mathcal{K}_n }}{n} = \hat{F}_n( B_n- ) - \hat{F}_n( A_n ). \] By \eqref{eq:ergodic}, $\bar{X}_n$ and thus $A_n$ and $B_n$ converge to $\mu$ almost surely. Moreover, we have either $A_n = \mu \leqslant B_n$ or $A_n \leqslant \mu = B_n$ (or both, if $\bar{X}_n = \mu$). As a consequence, \[
\frac{\abs{ \mathcal{K}_n }}{n}
\leqslant \max \bigl\{ \hat{F}_n( B_n - ) - \hat{F}_n(\mu), \hat{F}_n( \mu - ) - \hat{F}_n(A_n) \bigr\}. \] Fix $\delta > 0$. By monotonocity of $\hat{F}_n$ and by ergodicity \eqref{eq:ergodic}, we have, almost surely, \begin{align*}
\limsup_{n \to \infty} \hat{F}_n( B_n - )
&\leqslant \limsup_{n \to \infty} \hat{F}_n( \mu + \delta ) = F(\mu + \delta), \\
\liminf_{n \to \infty} \hat{F}_n( A_n )
&\geqslant \liminf_{n \to \infty} \hat{F}_n( \mu - \delta ) = F(\mu - \delta). \end{align*} Since moreover $\hat{F}_n(\mu) \to F(\mu)$ and $\hat{F}_n(\mu-) \to F(\mu-) = \mathbb{P}[X_1 < \mu]$ almost surely, it follows that \begin{align*}
\limsup_{n \to \infty} \bigl( \hat{F}_n( B_n - ) - \hat{F}_n(\mu) \bigr)
&\leqslant F(\mu + \delta) - F(\mu), \\
\limsup_{n \to \infty} \bigl( \hat{F}_n( \mu - ) - \hat{F}_n(A_n) \bigr)
&\leqslant F( \mu - ) - F(\mu - \delta), \end{align*} almost surely, and therefore \[
\limsup_{n \to \infty} \frac{| \mathcal{K}_n |}{n} \leqslant
\max \bigl( F(\mu + \delta) - F(\mu), \, F( \mu - ) - F(\mu - \delta) \bigr) \] almost surely. Since $\delta$ was arbitrary, we obtain \eqref{eq:Kn0}, as required. \end{proof}
\section{Weak convergence} \label{sec:dto}
Combining the expansions in \eqref{eq:MAD:easy} and Lemma~\ref{lem:infl}, the limit distribution of $\hat{\theta}_n$ follows right away. For the sake of illustration, we consider two settings: strongly mixing time series with finite second moments (Subsection~\ref{ss:serial}) and independent random sampling from distributions in the domain of attraction of a stable law with index $\alpha \in (1, 2)$ (Subsection~\ref{ss:stable}).
\subsection{Strongly mixing time series, finite variance} \label{ss:serial}
Let $(X_i)_{i \in \mathds{Z}}$ be a stationary time series. For $k \in \mathds{Z}$, consider the $\sigma$-fields $\mathcal{F}_k = \sigma(X_i : i \leqslant k)$ and $\mathcal{G}_k = \sigma(X_i : i \geqslant k)$. Rosenblatt's mixing coefficients are defined by \[
\alpha_n =
\sup
\bigl\{
\abs{ \mathbb{P}(A \cap B) - \mathbb{P}(A) \, \mathbb{P}(B) } :
A \in \mathcal{F}_0, \, B \in \mathcal{G}_n
\bigr\} \] for integer $n \geqslant 0$. The time series $(X_i)_{i \in \mathds{Z}}$ is called strongly mixing if $\alpha_n \to 0$ as $n \to \infty$. Strong mixing implies ergodicity.
Define $\alpha(t) = \alpha_{\floor{t}}$, $t \geqslant 0$, and its inverse function $\alpha^{-1}(u) = \inf \{ t \geqslant 0 : \alpha(t) \leqslant u \}$ for $u > 0$. Let $Q$ denote the quantile function of the distribution of $X_i$.
\begin{proposition} \label{prop:serial} Let $(X_i)_{i \in \mathds{Z}}$ be a strongly mixing, stationary time series with finite second moments. If \begin{equation} \label{eq:alphaQ}
\int_0^1 \alpha^{-1}(u) \, \{Q(u)\}^2 \, \mathrm{d} u < \infty, \end{equation} then, as $n \to \infty$, \begin{equation} \label{eq:MAD:limit}
\sqrt{n} ( \hat{\MAD}_n - \theta )
\rightsquigarrow
Y \, \bigl( \mathbb{P}[ X_1 < \mu ] - \mathbb{P}[ X_1 > \mu ] \bigr)
+ \abs{Y} \, \mathbb{P}[ X_1 = \mu ]
+ Z, \end{equation} where $(Y, Z)$ is bivariate normal with mean zero and covariance matrix given by \begin{align*}
\operatorname{var}(Y)
&= \operatorname{var}(X_0) + 2 \, \sum_{i=1}^\infty \operatorname{cov}(X_i, X_0), \\
\operatorname{var}(Z)
&= \operatorname{var}(\abs{X_0 - \mu}) + 2 \, \sum_{i=1}^\infty \operatorname{cov}(\abs{X_i - \mu}, \abs{X_0 - \mu}), \\
\operatorname{cov}(Y, Z)
&= \sum_{i \in \mathds{Z}} \operatorname{cov}(\abs{X_i - \mu}, X_0)
= \sum_{i \in \mathds{Z}} \operatorname{cov}(X_i, \abs{X_0 - \mu}). \end{align*} all series being absolutely convergent. \end{proposition}
\begin{proof} By Theorem~1 in \cite{doukhan:massart:rio:1994} and the remark just after that theorem, we have, as $n \to \infty$, \begin{equation} \label{eq:YnZn}
(Y_n, Z_n) :=
\left(
\frac{1}{\sqrt{n}} \sum_{i=1}^n (X_i - \mu), \,
\frac{1}{\sqrt{n}} \sum_{i=1}^n \bigl( \abs{X_i - \mu} - \theta \bigr)
\right)
\rightsquigarrow (Y, Z), \end{equation} with $(Y, Z)$ as in the statement of the proposition. Combining \eqref{eq:MAD:easy} and Lemma~\ref{lem:infl} yields the expansion \begin{multline} \label{eq:MAD:YnZn}
\sqrt{n} ( \hat{\MAD}_n - \theta ) \\
= Y_n \, \bigl( \mathbb{P}[X_1 < \mu] - \mathbb{P}[X_1 > \mu] \bigr)
+ \abs{Y_n} \, \mathbb{P}[ X_1 = \mu ] + o( \abs{Y_n} ) + Z_n, \end{multline} as $n \to \infty$, almost surely. Apply the continuous mapping theorem and Slutsky's lemma to arrive at the result. \end{proof}
Condition~\ref{eq:alphaQ} covers many cases and is almost sharp; see the applications on pages~67--68 in \cite{doukhan:massart:rio:1994} as well as their Theorem~2. The case of independent and identically distributed variables is trivially included.
\begin{corollary} \label{cor:CLT} Let $X_1, X_2, \ldots$ be independent and identically distributed random variables with common distribution $F$. If $F$ has a finite second moment, then weak convergence \eqref{eq:MAD:limit} holds, where $(Y, Z)$ is bivariate normal with mean zero and covariance matrix equal to the one of $( X_1 - \mu, \abs{ X_1 - \mu } )$. \end{corollary}
\begin{proof} Combine \eqref{eq:YnZn} and \eqref{eq:MAD:YnZn} with the multivariate central limit theorem, the continuous mapping theorem, and Slutsky's lemma.
\end{proof}
If $\mathbb{P}[X_1 = \mu] = 0$, i.e., if $F$ is continuous at $0$, then the expansion \eqref{eq:MAD:YnZn} is the same as the one in \eqref{eq:MAD:empproc} and we obtain that $\sqrt{n} ( \hat{\MAD}_n - \theta )$ is asymptotically normal. In the independence case, the limit distribution coincides with the one in \eqref{eq:MAD:asnorm}. However, if $0 < \mathbb{P}[ X_1 = \mu ] < 1$, then the weak limit in \eqref{eq:MAD:limit} is not Gaussian. Moreover, the limit distribution is not centered either, its expectation being $\mathbb{P}[ X_1 = \mu ] \, \E[\abs{Y}]$.
\subsection{Independent random sampling, infinite variance} \label{ss:stable}
One argument in favour of the use of the mean absolute deviation for measuring dispersion is that, unlike the standard deviation, it does not require existence of second moments. Still, in the weak convergence statements \eqref{eq:MAD:asnorm} and \eqref{eq:MAD:limit}, finite second moments are presupposed. This condition is lifted in the next result. A positive, Borel measurable function $L$ defined on a neighbourhood of infinity is \emph{slowly varying} if $\lim_{x \to \infty} L(xy) / L(x) = 1$ for all $y > 0$.
\begin{proposition} \label{prop:asym:stable} Let $X_1, X_2, \ldots$ be independent, identically distributed random variables with common distribution function $F$. Assume that there exist $\alpha \in (1, 2)$, $p \in [0, 1]$, and a slowly varying function $L$ on $(0, \infty)$, such that, for $x > 0$, \begin{equation} \label{eq:RV}
\left.
\begin{array}{rcl}
\mathbb{P}[ X_1 > x ] &=& p \, x^{-\alpha} \, L(x), \\[1ex]
\mathbb{P}[ X_1 < -x ] &=& (1-p) \, x^{-\alpha} \, L(x).
\end{array}
\right\} \end{equation} Then $F$ has a finite mean but an infinite second moment. Defining \[
a_n = \inf \{ x > 0 : \mathbb{P}[ \abs{X_1} > x ] \leqslant 1/n \}, \] we have, assuming that $F$ is continuous at $\mu$, \[
\frac{n}{a_n} ( \hat{\MAD}_n - \theta ) \rightsquigarrow G, \qquad n \to \infty, \] where $G$ is a stable distribution with characteristic function \begin{equation} \label{eq:stable:G}
\int_{\mathds{R}} e^{-\mathrm{i} s x} \, \mathrm{d} G(x)
= \exp
\bigl\{
- \sigma^\alpha \abs{s}^\alpha \bigl( 1 - \mathrm{i} \operatorname{sign}(s) \tan(\alpha \pi/2) \bigr)
\bigr\},
\qquad s \in \mathds{R}, \end{equation} the scale parameter $\sigma > 0$ being given by \begin{equation} \label{eq:stable:sigma}
\sigma^\alpha =
\frac
{2^\alpha \{\mathbb{P}[ X_1 < \mu ]^\alpha \, p + \mathbb{P}[ X_1 > \mu]^\alpha \, (1-p)\}}
{\frac{\Gamma(2-\alpha)}{\alpha - 1} \abs{ \cos( \alpha \pi / 2 ) }}. \end{equation}
\end{proposition}
\begin{proof} The statement about the existence of the moments is well known; see for instance the unnumbered lemma on page~578 in Section~XVII.5 in \cite{feller:1971}. Since $\mathbb{P}[ X_1 = \mu ] = 0$, equation~\eqref{eq:MAD:easy} and Lemma~\ref{lem:infl} imply \begin{equation} \label{eq:MAD:xi}
\hat{\MAD}_n - \theta
= \frac{1}{n} \sum_{i=1}^n (\xi_i - \theta) + o \bigl( \abs{\bar{X}_n - \mu} \bigr),
\qquad n \to \infty, \text{ a.s.}, \end{equation} where, writing $b = \mathbb{P}[ X_1 < \mu ] - \mathbb{P}[ X_1 > \mu ]$, \begin{equation} \label{eq:xi}
\xi_i
= \abs{X_i - \mu} + b \, (X_i - \mu)
= \abs{X_i - \mu} \, \bigl(1 + b \operatorname{sign}(X_i - \mu) \bigr). \end{equation} Note that $\theta = \E[\xi_i]$. Since $-1 < b < 1$, we have $\xi_i \geqslant 0$. Moreover, by slow variation of $L$, as $x \to \infty$, \begin{align} \nonumber
\mathbb{P}[ \xi_i > x ]
&= \mathbb{P}[ X_i > \mu + x / (1+b) ] + \mathbb{P}[ X_i < \mu - x / (1 - b) ] \\ \label{eq:xi:L}
&= \bigl((1+b)^\alpha \, p + (1-b)^\alpha \, (1-p) + o(1) \bigr) \, x^{-\alpha} \, L(x). \end{align} As $\mathbb{P}[ \abs{X_i} > x ] = x^{-\alpha} \, L(x)$, it follows that, up to a multiplicative constant, the tail function of $\xi_i$ is asymptotically equivalent to the one of $\abs{X_i}$. Observe that $1+b = 2 \, \mathbb{P}[ X_1 < \mu ]$ and $1-b = 2 \, \mathbb{P}[ X_1 > \mu ]$.
By classical theory on the domains of attraction of non-Gaussian stable distributions, equation~\eqref{eq:RV} is equivalent to the weak convergence of $n a_n^{-1} (\bar{X}_n - \mu) = a_n^{-1} \sum_{i=1}^n (X_i - \mu)$ to an $\alpha$-stable distribution \citep{gnedenko:kolmogorov:1954}. The tails of $\abs{X_i}$ and $\xi_i$ being related through~\eqref{eq:xi:L}, $a_n^{-1} \sum_{i=1}^n (\xi_i - \theta)$ converges weakly to an $\alpha$-stable distributions too. The limit distribution can be found for instance from Theorem~1.8.1 in \cite{samorodnitsky:taqqu:1994} and coincides with $G$ in the statement of the proposition; details are given below. By \eqref{eq:MAD:xi}, we have \[
\frac{n}{a_n} \bigl( \hat{\MAD}_n - \theta \bigr)
= \frac{1}{a_n} \sum_{i=1}^n (\xi_i - \theta) + o_p(1), \qquad n \to \infty. \] Weak convergence of $n a_n^{-1} \bigl( \hat{\MAD}_n - \theta \bigr)$ to $G$ now follows from Slutsky's lemma.
The calculations leading to the expression of the scale parameter $\sigma$ in equation~\eqref{eq:stable:sigma} are as follows. Let $\tilde{a}_n$ be such that \begin{equation} \label{eq:tailxi:1}
\lim_{n \to \infty} n \, \mathbb{P}[ \xi_i > \tilde{a}_n ]
= \frac{\Gamma(2-\alpha)}{\alpha - 1} \abs{ \cos( \alpha \pi / 2 ) }. \end{equation} By Theorem~1.8.1 in \cite{samorodnitsky:taqqu:1994}, we have \[
\frac{1}{\tilde{a}_n} \sum_{i=1}^n (\xi_i - \theta) \rightsquigarrow \tilde{G}, \qquad n \to \infty, \] where $\tilde{G}$ is an $\alpha$-stable distribution whose characteristic function has the same form as in \eqref{eq:stable:G} with $\sigma$ replaced by $\tilde{\sigma} = 1$. By \eqref{eq:xi:L} and the definition of $a_n$, we have \begin{equation} \label{eq:tailxi:2}
\lim_{n \to \infty} n \, \mathbb{P}[ \xi_i > a_n ]
= (1+b)^\alpha \, p + (1-b)^\alpha \, (1-p). \end{equation} The function $x \mapsto \mathbb{P}[ \xi_i > x ]$ being regularly varying with index $-\alpha$, it follows that a valid choice for $\tilde{a}_n$ in \eqref{eq:tailxi:1} is $\tilde{a}_n = \gamma a_n$, where the constant $\gamma > 0$ can be read off by comparing \eqref{eq:tailxi:1} and \eqref{eq:tailxi:2}: \begin{align*}
\gamma^{-\alpha} \, \{ (1+b)^\alpha \, p + (1-b)^\alpha \, (1-p) \}
&= \lim_{n \to \infty} n \, \mathbb{P}[ \xi_i > \gamma a_n ] \\
&= \frac{\Gamma(2-\alpha)}{\alpha - 1} \abs{ \cos( \alpha \pi / 2 ) }. \end{align*} If $\tilde{Z}$ denotes a random variable whose distribution function is $\tilde{G}$, then \[
\frac{1}{a_n} \sum_{i=1}^n (\xi_i - \theta)
= \frac{\gamma}{\tilde{a}_n} \sum_{i=1}^n (\xi_i - \theta)
\rightsquigarrow \gamma \tilde{Z} = Z, \qquad n \to \infty. \] By \citet[Property~1.2.3]{samorodnitsky:taqqu:1994}, the characteristic function of the law of $Z$ is given by \eqref{eq:stable:G} with scale parameter $\sigma = \gamma \tilde{\sigma} = \gamma$. \end{proof}
The regular variation condition~\eqref{eq:RV} covers distributions with power-law tails, such as the Pareto distribution, non-Gaussian stable distributions, the Student t distribution, and the Fr\'echet distribution.
\begin{remark} Proposition~\ref{prop:asym:stable} supposes independent random sampling from a heavy-tailed distribution. Extensions to weakly dependent stationary time series are possible too. In \cite{bartkiewicz:etal:2011}, for instance, conditions are given which guarantee the weak convergence of the normalized partial sums of a weakly dependent stationary time series to a non-Gaussian stable distribution. These conditions are to be verified for the sequence $(\xi_i)_{i}$ defined in \eqref{eq:xi}. The expansion in \eqref{eq:MAD:xi} then allows to obtain the asymptotic distribution of the sample mean absolute deviation. \end{remark}
\begin{remark} In Proposition~\ref{prop:asym:stable}, if $F$ is not continuous at $\mu$, then $n a_n^{-1} ( \hat{\MAD}_n - \theta )$ can still be shown to converge weakly, but the limit law will no longer be stable. As in Corollary~\ref{cor:CLT}, it will be a non-linear functional of the bivariate stable distribution to which the joint distribution of $(X_1 - \mu, \abs{X_1 - \mu})$ is attracted \citep{rvaceva:1962}. \end{remark}
\section*{Acknowledgments}
The author gratefully acknowledges funding by contract ``Projet d'Act\-ions de Re\-cher\-che Concert\'ees'' No.\ 12/17-045 of the ``Communaut\'e fran\c{c}aise de Belgique'' and by IAP research network Grant P7/06 of the Belgian government (Belgian Science Policy).
\end{document}
|
arXiv
|
{
"id": "1406.4151.tex",
"language_detection_score": 0.5874439477920532,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
}
|
arXiv/math_arXiv_v0.2.jsonl
| null | null |
\begin{document}
\title[]{The Tutte's condition in terms of graph factors}
\author[H. Lu]{Hongliang Lu} \address{School of Mathematics and Statistics, Xi'an Jiaotong university, 710049 Xi'an, P.\ R.\ China} \email{[email protected]} \thanks{Lu is supported by the National Natural Science Foundation of China, No.\ 11471257.}
\author[D.G.L. Wang]{David G.L. Wang$^{\dag\ddag}$} \address{ $^\dag$School of Mathematics and Statistics, Beijing Institute of Technology, 102488 Beijing, P.\ R.\ China\\ $^\ddag$Beijing Key Laboratory on MCAACI, Beijing Institute of Technology, 102488 Beijing, P.\ R.\ China} \email{[email protected]} \thanks{Wang is supported by the National Natural Science Foundation of China, No.\ 11671037}
\keywords{graph factor, perfect matching, Tutte's condition, Tutte's theorem} \subjclass[2010]{05C75 05C70}
\begin{abstract} Let $G$ be a connected general graph of even order, with a function $f\colon V(G)\to\mathbb{Z}^+$. We obtain that $G$ satisfies the Tutte's condition \[ o(G-S)\le \sum_{v\in S}f(v)\qquad\text{for any nonempty set $S\subset V(G)$}, \] with respect to $f$ if and only if $G$ contains an $H$-factor for any function $H\colon V(G)\to 2^\mathbb{N}$ such that $H(v)\in \{J_f(v),\,J_f^+(v)\}$ for each $v\in V(G)$, where the set $J_f(v)$ consists of the integer $f(v)$ and all positive odd integers less than $f(v)$, and the set $J^+_f(v)$ consists of positive odd integers less than or equal to $f(v)+1$. We also obtain a characterization for graphs of odd order satisfying the Tutte's condition with respect to a function.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:introduction}
This note connects Tutte's condition with graph factors. Tutte's theorem states that a graph $G$ has a perfect matching if and only if \[
o(G-S)\;\le\; |S|\qquad\text{for any set $S\subset V(G)$}, \] where $o(G-S)$ denotes the number of odd components of the subgraph $G-S$, and $V(G)$ is the vertex set of $G$. Let $f\colon V(G)\to \mathbb{Z}^+$ be a function, where $\mathbb{Z}^+$ denotes the set of positive integers. The \emph{Tutte's condition on $G$ with respect to $f$} is the condition \[ o(G-S)\le f(S)\qquad\text{for any nonempty set $S\subset V(G)$}, \] where $f(S)=\sum_{v\in S}f(v)$. The Tutte's condition with respect to the constant function $f\equiv 1$ is the condition in Tutte's theorem.
A considerable large number of literatures on graph factors can be found in Akiyama and Kano's book~\cite{AK11B}. Let $H\colon V(G)\to 2^\mathbb{N}$ be a set-valued function. A spanning subgraph $F$ of $G$ is called an \emph{$H$-factor} if $\deg_F(v)\in H(v)$.
In particular, a 1-factor is exactly a perfect matching. For any vertex $x$ of $G$, denote by $G^x$ the graph obtained from $G$ by adding a new vertex $x'$ together with a new edge $xx'$.
A graph $G$ is said to be {\em $H$-critical} if $G$ contains no $H$-factors and if the graph $G^x$ has an $H^x$-factor for every vertex $x$ of~$G$, where \[ H^x(v)=\begin{cases} \{1\}, &\text{if $v=x'$};\\[5pt] H(v), &\text{otherwise}. \end{cases} \]
Lov\'asz~\cite{Lov69} proposed the {\em degree prescribed subgraph problem} of determining the distance of a factor from a given integer set function. He~\cite{Lov72} considered it with the restriction that the given set function $H$ is allowed, i.e., that every gap of the set $H(v)$ for each vertex $v$ is at most two. He also showed that the problem is NP-complete when the function $H$ is not allowed. Cornu\'ejols~\cite{Cor88} provided a polynomial Edmonds-Johnson type alternating forest algorithm for the degree prescribed subgraph problem with~$H$ allowed, which implies a Gallai-Edmonds type structure theorem.
For convenience, we denote the set of positive odd integers by $2\mathbb{N}+1$, and \[ J_n \;=\;\begin{cases} \{1,3,5,\ldots,n\},&\text{if $n$ is odd};\\[4pt] \{1,3,5,\ldots,n-1,n\},&\text{if $n$ is even}. \end{cases} \] Define $J_f(v)=J_{f(v)}$ for all vertices $v$.
\begin{thm}[Cui and Kano~\cite{CK88}]\label{thm:CK} A connected general graph $G$ of even order satisfies the Tutte's condition with respect to a function $f\colon V(G)\to 2\mathbb{N}+1$ if and only if $G$ contains a $J_f$-factor. \end{thm}
Extending the range of $f$ to be all positive integers, Egawa, Kano, and Yan~\cite{EKY16} obtain \cref{thm:EKY}.
\begin{thm}[Egawa et al.~\cite{EKY16}]\label{thm:EKY} Suppose that a connected simple graph $G$ of even order satisfies the Tutte's condition with respect to a function $f\colon V(G)\to\mathbb{Z}^+$. Then~$G$ contains a $J_f$-factor. \end{thm}
The particular case $f(v)\equiv 2n$ for some integer $n$ had been solved by Akiyama, Avis and Era~\cite{AAE80} for $n=1$ and by the present authors~\cite{LW13} for $n\ge2$.
Without restricting the parity of order of $G$, Akiyama and Kano \cite[Problem 6.14 (2)]{AK11B} proposed \cref{prob:AK}. Denote by $2\mathbb{Z}^+$ the set of positive even integers. \begin{prob}[Akiyama and Kano~\cite{AK11B}]\label{prob:AK} Suppose that a connected simple graph~$G$ satisfies the Tutte's condition with respect to a function $f\colon V(G)\to 2\mathbb{Z}^+$.
Then what factor or property does $G$ have? \end{prob}
In the next section, we will give a characterization of graphs satisfying the Tutte's condition with respect to a function $f$, in terms of graph factors, without any restriction on the range of $f$, and for graphs of any parity of order.
\section{Main Result} In terms of graph factors, the authors~\cite{LW17} have characterized graphs satisfying the Tutte's condition with respect to a function,
but with the aid of either $2$-colorings, or $2$-edge-colorings, or $2$-end-colorings; see \cref{thm:rTutte,thm:Tutte}. In this note, we present characterizations in terms of graph factors only; see \cref{thm:rTutte:new,thm:Tutte:new}.
\begin{thm}[Lu and Wang~\cite{LW17}]\label{thm:rTutte} A connected general graph $G$ of even order satisfies the Tutte's condition with respect to a function $f\colon V(G)\to\mathbb{Z}^+$ if and only if $G$ contains an $H$-factor for any coloring $g\colon V(G)\rightarrow \{B,R\}$, where \[ H(v)=\begin{cases} J_f(v),&\text{if $g(v)=R$};\\[5pt] 2\mathbb{N}+1,&\text{if $g(v)=B$}. \end{cases} \]
\end{thm}
For any function $f\colon V(G)\to\mathbb{Z}^+$, let $J_f^+(v)$ be the set of positive odd integers that are less than or equal to $f(v)+1$. In other words, \begin{align*} J_f^+(v) =\{m\in 2\mathbb{N}+1\colon m\le f(v)+1\} =\begin{cases} J_{f(v)},&\text{if $f(v)$ is odd};\\[4pt] J_{f(v)+1},&\text{if $f(v)$ is even}. \end{cases}
\end{align*} Define a set \[
\mathcal{H}_f=\bigl\{H\colon V(G)\to 2^\mathbb{N}\ \big|\ H(v)\in \{J_f(v),\,J_f^+(v)\}\text{ for each $v\in V(G)$}\bigr\}. \]
\begin{thm}\label{thm:rTutte:new} A connected general graph $G$ of even order satisfies the Tutte's condition with respect to a function $f\colon V(G)\to\mathbb{Z}^+$
if and only if $G$ contains an $H$-factor for any $H\in \mathcal{H}_f$. \end{thm}
\begin{proof} Let $G$ be a connected general graph of even order, with a function $f\colon V(G)\to\mathbb{Z}^+$. We shall show the necessity and sufficiency respectively.
\noindent{\bf Necessity.} Let $H\in \mathcal{H}_f$. Consider the function $f'\colon V(G)\to \mathbb{Z}^+$ defined by
\[ f'(v)=\max_{x\in H(v)}x=\begin{cases} f(v)+1,&\text{if $H(v)=J^+_f(v)$ and $f(v)$ is even};\\[4pt] f(v),&\text{otherwise}. \end{cases} \] From the premise, we infer immediately \[ o(G-S)\le f(S)\le f'(S)\qquad\text{for any set $S\subset V(G)$}. \] Applying Theorem \ref{thm:rTutte} with the coloring $g$ such that $g^{-1}(R)=V(G)$, one obtains that $G$ contains an $J_{f'}$-factor, i.e., an $H$-factor.
\noindent{\bf Sufficiency.} Let $S\subset V(G)$. Consider the function $H\in\mathcal{H}_f$ defined by \[ H(v)=\begin{cases} J_f(v),&\text{if $v\in S$};\\[4pt] J^+_f(v), &\text{otherwise}. \end{cases} \] From premise, the graph $G$ has an $H$-factor, say, $F$. Let $C$ be any odd component of the subgraph $G-S$. Then for each $v\in C$, we have $H(v)=J_f^+(v)$ and thus the degree $d_F(v)$ is odd. By parity argument, we have $E_F(V(C),\,S)\neq \emptyset$. Therefore, one may deduce that \begin{align*}
o(G-S)\le \sum_{C}|E_F(V(C),S)|\le f(S). \end{align*} This completes the proof. \end{proof}
We remark that \cref{thm:rTutte:new} reduces to \cref{thm:CK} if $f(V(G))\subseteq 2\mathbb{N}+1$. In fact, when $f(V(G))\subseteq 2\mathbb{N}+1$, we obtain $J_f=J_f^+$ and
\[
\mathcal{H}_f=\bigl\{H\colon V(G)\to 2^\mathbb{N}\ |\ H(v)=J_f(v)\text{ for each $v\in V(G)$}\bigr\}=\{J_f\}. \]
\begin{thm}[Lu and Wang~\cite{LW17}]\label{thm:Tutte} Let $G$ be a connected general graph. Then $G$ satisfies the Tutte's condition with respect to a function $f\colon V(G)\to\mathbb{Z}^+$ if and only if for any coloring $g\colon V(G)\rightarrow \{B,R\}$, the graph $G$ either contains an $H$-factor or is $H$-critical, where \[ H(v)=\begin{cases} J_f(v),&\text{if $g(v)=R$};\\[5pt] 2\mathbb{N}+1,&\text{if $g(v)=B$}. \end{cases} \]
\end{thm}
By a proof similar to that of Theorem \ref{thm:rTutte:new}, one may obtain the following result. \begin{thm}\label{thm:Tutte:new} Let $G$ be a connected general graph of odd order. Then $G$ satisfies the Tutte's condition with a function $f\colon V(G)\to\mathbb{Z}^+$ if and only if the graph $G$ either contains an $H$-factor or is $H$-critical, for any $H\in \mathcal{H}_f$. \end{thm} \begin{proof} Omitted. \end{proof}
Combining \cref{thm:rTutte:new,thm:Tutte:new} gives an answer to \cref{prob:AK}.
\end{document}
|
arXiv
|
{
"id": "1806.09357.tex",
"language_detection_score": 0.7045032382011414,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
}
|
arXiv/math_arXiv_v0.2.jsonl
| null | null |
\begin{document}
\title[Phase separation and damage in thermoviscoelasticity] {A temperature-dependent phase-field model\hspace{1pt} for phase separation and damage}
\author{Christian Heinemann} \address{Christian Heinemann\hspace{1pt} Weierstrass Institute for Applied Analysis and Stochastics\hspace{1pt} Mohrenstr.~39 \hspace{1pt} D-10117 Berlin \hspace{1pt} Germany} \email{[email protected]}
\author{Christiane Kraus} \address{Christiane Kraus\hspace{1pt} Weierstrass Institute for Applied Analysis and Stochastics\hspace{1pt} Mohrenstr.~39 \hspace{1pt} D-10117 Berlin \hspace{1pt} Germany} \email{[email protected]}
\author{Elisabetta Rocca} \address{Elisabetta Rocca\hspace{1pt} Weierstrass Institute for Applied Analysis and Stochastics\hspace{1pt} Mohrenstr.~39 \hspace{1pt} D-10117 Berlin \hspace{1pt} Germany\hspace{1pt} and \hspace{1pt} Dipartimento di Matematica \hspace{1pt} Universit\hspace{1pt}`a di Milano\hspace{1pt} Via Saldini 50 \hspace{1pt} I-20133 Milano\hspace{1pt} Italy} \email{[email protected] and [email protected]}
\author{Riccarda Rossi} \address{Riccarda Rossi\hspace{1pt} DIMI \hspace{1pt} Universit\hspace{1pt}`{a} di Brescia\hspace{1pt} Via Valotti 9\hspace{1pt} I-25133 Brescia\hspace{1pt} Italy} \email{[email protected]}
\date{October 13, 2015}
\begin{abstract} In this paper we study a model for phase separation and damage in thermoviscoelastic materials. The main novelty of the paper consists in the fact that, in contrast with previous works in the literature (cf., e.g., \cite{hk1,hk2}), we encompass in the model thermal processes, \color{black} nonlinearly coupled with the damage, concentration and displacement evolutions. More in particular, we prove the existence of ``entropic weak solutions'', resorting to a solvability concept \color{black} first introduced in \cite{fei} in the framework of Fourier-Navier-Stokes systems and then recently employed in \cite{fpr09, RocRos14} \color{black} for the study of PDE systems for phase transition \color{black} and damage. Our global-in-time existence result is obtained by passing to the limit in a carefully devised time-discretization scheme. \color{black} \end{abstract}
\maketitle
\noindent {\bf Key words:}\hspace{3mm} damage, phase separation, thermoviscoelasticity, global-in-time entropic weak solutions, existence, time discretization.
\noindent {\bf AMS (MOS) subject clas\hspace{1pt}-si\hspace{1pt}-fi\hspace{1pt}-ca\hspace{1pt}-tion:}\hspace{3mm} 35D30, 74G25, 93C55, 82B26, 74A45.
\section{\bf Introduction and modeling \color{black}} In this paper we propose and analyze a model for phase separation and damage in a thermoviscoelastic body, occupying a spatial domain $\Omega \subset \bbR^d$, where $d\in \hspace{1pt}{2,3\hspace{1pt}}$. We shall \color{black} consider here a suitable weak formulation of the following PDE system
\begin{subequations} \label{eqn:PDEsystem-expli} \begin{align}
&c_t=\dive(m(c,z)\nabla\mu),
\label{e:c-expl-intro}\hspace{1pt}
&\begin{aligned}
\mu ={}&-\Delta_p(c)+\phi'(c) +\frac12\big(b(c,z) \mathbb{C}(\varepsilon(\mathbf{u})-\varepsilon^*(c)):(\varepsilon(\mathbf{u})-\varepsilon^*(c))\big)_{,c}-\vartheta+c_t,
\end{aligned}
\label{e:mu-expl-intro}\hspace{1pt}
&z_t+\partial I_{(-\infty,0]}(z_t) -\Delta_p(z)+\partial I_{[0,\infty)}(z) +\sigma'(z)\ni
-\frac12 b_{,z}(c,z) \mathbb{C} (\varepsilon(\mathbf{u})-\varepsilon^*(c)):(\varepsilon(\mathbf{u})-\varepsilon^*(c)) +\vartheta,\label{e:z-expl-intro}\hspace{1pt}
&\vartheta_t+c_t\vartheta+z_t\vartheta+\rho\vartheta\dive(\mathbf{u}_t)-\dive(\mathsf{K}(\vartheta)\nabla\vartheta) =g+|c_t|^2+|z_t|^2+a(c,z)\varepsilon(\mathbf{u}_t):\mathbb{V}\varepsilon(\mathbf{u}_t)+m(c,z)|\nabla\mu|^2,
\label{e:teta-expl-intro}\hspace{1pt}
&\mathbf{u}_{tt}-\dive\big( a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t) + b(c,z) \mathbb{C} (\varepsilon(\mathbf{u})-\varepsilon^*(c)) -\rho\vartheta\mathds 1\big)=\mathbf{f}
\label{e:u-expl-intro} \end{align} \end{subequations} posed in $\Omega \times (0,T)$. The system couples \begin{itemize} \item[-] the viscous Cahn-Hilliard equation \eqref{e:c-expl-intro}--\eqref{e:mu-expl-intro} ruling the evolution of the concentration $c$; \item[-] the damage flow rule \color{black} \eqref{e:z-expl-intro} for the local proportion of the damage $z$; \item[-] the internal energy balance \eqref{e:teta-expl-intro} for the absolute temperaure $\vartheta$ ; \item[-] the momentum balance \eqref{e:u-expl-intro} describing the dynamics for the displacement $\mathbf{u}$. \end{itemize} The symbol $(\cdot)_t$ denotes the partial derivative with respect to time. In the Cahn-Hilliard equation \eqref{e:c-expl-intro} $m$ denotes the mobility of the system and
$\mu$ the chemical potential, whose expression is \color{black} given in
\eqref{e:mu-expl-intro}. There, $\Delta_p(\cdot):= \dive(|\nabla\cdot|^{p-2}\nabla\cdot)$ denotes the $p$-Laplacian, $\phi$ is a mixing potential, $b$ is an elastic coefficient function depending possibly on both $c$ and $z$, $\mathbb{C}$ represents the elasticity tensor, $\varepsilon^*$ a residual strain tensor, and $(\cdot)_{,c}$ the partial derivate with respect to the variable $c$ (with an analogous notation for the \color{black} other variables). In the damage flow rule \color{black} \eqref{e:z-expl-intro} $\partial I_{(-\infty,0]}: \bbR \rightrightarrows \bbR$ denotes the subdifferential of the indicator function of the set $(-\infty,0]$,
given by \hspace{1pt}[ \partial I_{(-\infty,0]}(v) = \begin{cases} \hspace{1pt}{ 0\hspace{1pt}} & \text{for } v <0, \hspace{1pt} [0,+\infty) & \text{for } v=0 \end{cases} \hspace{1pt}]
while $\partial I_{[0,\infty)}: \bbR \rightrightarrows \bbR$ is \color{black} the subdifferential of the indicator function of the set $[0,\infty)$, i.e. \hspace{1pt}[ \partial I_{[0,\infty)}(z) = \begin{cases} (-\infty, 0]& \text{for } z=0, \hspace{1pt} \hspace{1pt}{ 0\hspace{1pt}} & \text{for } z >0\hspace{1pt},. \end{cases} \hspace{1pt}] \color{black}
The presence of these two \color{black} maximal monotone \color{black} graphs, enforcing in particular \color{black} the irreversibility of the damage phenomenon,
entails the constraint $z(t)\in [0,1]$ for $t\in (0,T)$ as soon as $z(0)\in [0,1]$. This is physically meaningful because $z$ denotes the damage parameter which is set to be equal to $0$ in case the material is completely damaged and it is \color{black} equal to $1$ in the completely safe case, while $z\in (0,1)$ indicates partial damage. The function $\sigma$ in \eqref{e:z-expl-intro} represents a smooth function, possibly non-convex, of the damage
variable \color{black} $z$. In the temperature equation \eqref{e:teta-expl-intro}, $\rho$ denotes a positive thermal expansion coefficient, $\mathsf{K}$ the heat conductivity of the system, $g$ a given heat source and $a$ a viscosity coefficient possibly depending on $c$ and $z$, while $\mathbb{V}$ is the viscosity tensor. Finally, in the momentum balance \eqref{e:u-expl-intro} $\mathbf{f}$ denotes a given volume force.
We will supplement system \eqref{eqn:PDEsystem-expli} with \color{black} the initial-boundary conditions
\begin{subequations} \label{init-bdry-conditions} \begin{align}
&c(0)=c^0,
&&z(0)=z^0,
&&\vartheta(0)=\vartheta^0,
&&\mathbf{u}(0)=\mathbf{u}^0,
&&\mathbf{u}_t(0)=\mathbf v^0
&&\text{a.e.\ in }\Omega,
\label{init-conditions}\\
&\nabla c\cdot { \bf n \color{black}}=0,
&&m(c,z)\nabla\mu\cdot { \bf n \color{black}}=0,
&&\nabla z\cdot { \bf n \color{black}}=0,
&&\mathsf{K}(\vartheta)\nabla\vartheta\cdot { \bf n \color{black} }=h,
&&\mathbf{u}=\mathbf{d}
&&\text{a.e. on }\partial\Omega\times(0,T),
\label{bdry-conditions} \end{align} \end{subequations} where ${ \bf n \color{black} }$ indicates the outer unit normal to $\partial\Omega$, while $h$ and $\mathbf{d}$ denote, respectively, a given boundary heat source and displacement.
The PDE system \eqref{eqn:PDEsystem-expli} may be written in the more compact form \begin{subequations} \label{eqn:PDEsystem} \begin{align}
&c_t=\dive(m(c,z)\nabla\mu),
\label{e:c}\\
&\mu = -\Delta_p(c)+\phi'(c)+W_{,c}(c,\varepsilon(\mathbf{u}),z)-\vartheta+c_t,
\label{e:mu}\\
&z_t +\partial I_{(-\infty,0]}(z_t) -\Delta_p(z)+\partial I_{[0,\infty)}(z) +\sigma'(z)\ni -W_{,z}(c,\varepsilon(\mathbf{u}),z)+\vartheta,
\label{e:z}\\
&\vartheta_t+c_t\vartheta+z_t\vartheta+\rho\vartheta\dive(\mathbf{u}_t)-\dive(\mathsf{K}(\vartheta)\nabla\vartheta)=g+|c_t|^2+|z_t|^2+a(c,z)\varepsilon(\mathbf{u}_t):\mathbb{V}\varepsilon(\mathbf{u}_t)+m(c,z)|\nabla\mu|^2,
\label{e:teta}\\
&\mathbf{u}_{tt}-\dive\big(a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t)+W_{,\varepsilon}(c,\varepsilon(\mathbf{u}),z)-\rho\vartheta\mathds{1}\big)=\mathbf{f},
\label{e:u} \end{align} \end{subequations}
with the following choice of the elastic energy density \begin{equation} \label{elastic-energy}
W(c,\varepsilon,z)=\frac 12b(c,z)\mathbb C(\varepsilon-\varepsilon^*(c)):(\varepsilon-\varepsilon^*(c)). \end{equation} The expression \color{black} of $W$ is typically quadratic
as a function of the strain tensor $\varepsilon(\mathbf{u})$, whereas \color{black}
the coefficient $b$ can depend on $c$ and $z$. This accounts for possible inhomogeneity of elasticity on the one hand,
and is characteristic for damage on the other hand. Indeed, the natural choice would be
that $b$ vanishes \color{black} for $z=0$, i.e.\hspace{1pt} when the material is completely damaged.
\paragraph{\bf Derivation of the model} Let us briefly discuss the thermodynamically consistent derivation of the PDE-system \eqref{eqn:PDEsystem-expli}.
The {\it state variables} that determine the local thermodynamic state of the material and the {\it dissipative variables} whose evolution describes the way along which the system tends to dissipate energy are as follows:
{\it State variables:} $$ \vartheta, \, c, \, \nabla c, \, \varepsilon(\mathbf{u}), \, z, \, \nabla z; $$
{\it Dissipative variables:} $$ \nabla \vartheta, \, c_t, \, \varepsilon(\mathbf{u}_t), \, z_t. $$
By classical principles of thermodynamics, the evolution of the \color{black} system is based on the free energy $\mathscr{F}$ and the pseudopotential of dissipation $\mathscr{P}$, for which we assume the following general form: \begin{align}
\tF{c}{z}{\vartheta}{ \varepsilon(\mathbf{u})} = \int_\Omega F(c,\nabla c,z,\nabla z,\vartheta, \varepsilon(\mathbf{u}) )\,\mathrm dx
\qquad \text{and} \qquad \tP{\nabla \vartheta}{c_t}{\varepsilon(\mathbf{u}_t)}{z_t} = \int_\Omega P(\nabla \vartheta, c_t, \varepsilon(\mathbf{u}_t), z_t) \,\mathrm dx . \end{align} Our evolutionary system is obtained by the principle of virtual power and by balance equations of micro-forces,
a generalization of the approaches by Fr\'emond \cite{fremond} and Gurtin \cite{gur96}. In addition, we also include temperature-dependent effects by means of the
balance equation of energy. The system relies altogether on the balance equations of mass, forces, micro-forces and energy:
{\it Evolution system} \begin{subequations} \label{eqn:PDEsystem-derivation} \begin{align}
\text{Mass balance} \hspace{0.98cm}& \notag\\
c_t + \dive {\boldsymbol J} &=0,
\label{e:c-deri}\\
\text{Force balance}\hspace{0.9cm}& \notag\\
\mathbf{u}_{tt} - \dive {\boldsymbol \sigma} &= {\mathbf f},
\label{e:u-deri} \\
\text{Micro-force balance} & \notag\\
B - \dive{\bf H}& = 0 ,\\
\Pi - \dive { \boldsymbol \xi} &= 0 ,\\
\text{Energy balance} \hspace{0.6cm}& \notag \\
U_t + \dive { \boldsymbol q} &= g + {\boldsymbol \sigma}: \varepsilon(\mathbf{u}_t) +
\dive ({\bf H})\, z_t + {\bf H} \cdot \nabla z_t
+ \dive ( {\boldsymbol \xi})\, c_t
+ { \boldsymbol \xi} \cdot \nabla c_t - \dive ({\boldsymbol J}) \, \mu -{\boldsymbol J} \cdot \nabla \mu
\end{align} \end{subequations} where the internal energy density is given by $U= F - \vartheta \partial_\vartheta F$. Note that the system is not closed for the variables. Therefore, constitutive laws have to be imposed for the mass flux ${\bf J}$, the stress tensor ${\boldsymbol \sigma}$, the internal microforce $B$ for $z$, the microstress ${\bf H}$ for $z$, the internal microforce $\Pi$ for $c$, the microstress ${\boldsymbol \xi}$ for $c$, and the heat flux ${\bf q}$.

{\it Constitutive relations.} Following Fr\'emond's perspective, we assume that the stress tensor ${\boldsymbol \sigma}$, the microforce $B$ and the microstress ${\bf H}$ may be additively decomposed into their non-dissipative and dissipative components, i.e.
\begin{align}
& {\boldsymbol \sigma }= {\boldsymbol \sigma }^{nd} + {\boldsymbol \sigma }^d && \text{with} \quad {\boldsymbol \sigma }^{nd}= \partial_{\varepsilon(\mathbf{u})} F , \quad {\boldsymbol \sigma }^d = \partial_{\varepsilon(\mathbf{u}_t)} P, \\
& B= B^{nd} + B^d && \text{with} \quad B^{nd} \in \partial_z F , \quad B^d \in \partial_{z_t} P, \\
& {\bf H} = {\bf H}^{nd} + {\bf H}^d && \text{with} \quad {\bf H}^{nd}= \partial_{\nabla z} F , \quad {\bf H}^d= \partial_{\nabla z_t} P=0 .
\end{align}
In a similar way, by choosing Gurtin's approach, cf.~\cite{gur96} equations (3.19)-(3.23), we get the constitutive relations:
\begin{align}
{ \bf J}= - m(c,z) \nabla \mu,\qquad\qquad \Pi= \partial_c F + \partial_{c_t} P - \mu, \qquad \qquad {\boldsymbol \xi} = \partial_{\nabla c} F.
\end{align}
The heat flux is given by the standard constitutive relation:
$$ { \boldsymbol q}= - \frac{\partial P}{ \partial \nabla \vartheta}. $$
In the framework of the damage and phase separation theory \cite{fremond, gur96}, we choose for our system the following free energy and dissipation potential:
\begin{align}
\label{free-energy}
\tF{c}{z}{\vartheta}{ \varepsilon(\mathbf{u})}:={}&\int_\Omega \frac 1p|\nabla c|^p+\frac 1p|\nabla z|^p+W(c,\varepsilon(\mathbf{u}),z)+\phi(c)+\sigma(z)+ I_{[0,+\infty)}(z)\,\mathrm dx\notag\\
&+\int_\Omega-\vartheta\log\vartheta-\vartheta\big(c+z+\rho\dive(\mathbf{u})\big)\,\mathrm dx,\\ \label{dissipation}
\tP{\nabla \vartheta}{c_t}{\varepsilon(\mathbf{u}_t)}{z_t} :={} & \int_\Omega \frac{1}{2}
\mathsf{K}(\vartheta)|\nabla\vartheta|^2 + \frac{1}{2} z_t^2 + \frac{1}{2} c_t^2 + \frac{1}{2} a(c,z)\varepsilon(\mathbf{u}_t):\mathbb{V}\varepsilon(\mathbf{u}_t)
+ I_{(-\infty,0]}(z_t) \,\mathrm dx \,. \end{align} \par
The first two gradient terms in \eqref{free-energy} represent the
nonlocal interactions in phase separation and damage processes.
The analytical study of gradient \color{black} theories \color{black} goes back to \cite{LM89,Mod98}, where phase separation processes were investigated. \color{black} A typical choice for $W$ has
been introduced in \eqref{elastic-energy}. The functions
$\phi$ and $\sigma$ represent the mixing potentials.
The term $\vartheta(c+z+\rho\dive \mathbf{u})$ models the phase and thermal expansion
processes in the system. It may also be regarded as a linear approximation near
the thermodynamical equilibrium. In the following lines we will gain further insight into the choices of these functionals. Exploiting \eqref{eqn:PDEsystem-derivation}--\eqref{dissipation} results in system \eqref{eqn:PDEsystem-expli}, for which the Clausius-Duhem inequality is satisfied.
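For the reader's convenience, let us exemplify this computation for the chemical potential. Combining the micro-force balance $\Pi - \dive {\boldsymbol \xi} = 0$ with the constitutive relations $\Pi= \partial_c F + \partial_{c_t} P - \mu$ and ${\boldsymbol \xi} = \partial_{\nabla c} F$, and inserting the choices \eqref{free-energy}--\eqref{dissipation}, we find
\[
\mu = \partial_c F + \partial_{c_t} P - \dive\big(\partial_{\nabla c} F\big) = -\Delta_p(c)+\phi'(c)+W_{,c}(c,\varepsilon(\mathbf{u}),z)-\vartheta+c_t,
\]
which is exactly \eqref{e:mu-expl-intro}; analogously, the mass balance \eqref{e:c-deri} combined with ${\boldsymbol J}=-m(c,z)\nabla\mu$ yields \eqref{e:c-expl-intro}.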
As discussed, our approach is based on a gradient theory of phase separation and damage processes due to \cite{fremond,gur96,CH58}. For a non-gradient approach \color{black} to \color{black} damage models we refer to \cite{FG06, GL09,
Bab11}. There, the damage variable $z$ takes only two distinct values, i.e. $\{0,1\}$, in contrast to phase-field models where intermediate values $z \in [0,1]$ are also allowed. In addition, the mechanical properties of damage phenomena are described differently in \cite{FG06, GL09, Bab11}: the authors choose a $z$-mixture of a linearly elastic strong and weak material with two different elasticity tensors. We also refer to \cite{FKS11}, where a non-gradient damage model was studied by means of Young measures.
\paragraph{\bf Mathematical difficulties.} The main mathematical difficulties in the proof of existence of solutions to such a PDE system are related to the presence of the quadratic dissipative terms on the right-hand side of the internal energy balance \eqref{e:teta}, as well as to the doubly nonlinear and possibly nonsmooth character of the damage relation \eqref{e:z}. This is the reason why we shall resort here to a weak solution notion for \eqref{eqn:PDEsystem} coupled with \eqref{init-bdry-conditions}. In this solution concept, partially drawn from \cite{RocRos14}, the Cahn-Hilliard system (\ref{e:c}--\ref{e:mu}) and the balance of forces \eqref{e:u} (read a.e. in $\Omega\times(0,T)$) are coupled with an {\sl ``entropic'' formulation} of the heat equation \eqref{e:teta} and a weak formulation of the damage flow rule \eqref{e:z} taken from \cite{hk1,hk2}.
Let us briefly illustrate them. \color{black} \paragraph{\bf The ``entropic'' formulation of the heat equation.} It consists of \color{black} a {\sl weak entropy inequality}
\begin{equation}
\label{entropy-ineq-intro}
\begin{aligned}
&\int_s^t \int_\Omega (\log(\vartheta) + c+z) \varphi_t \, \mathrm{d} x \, \mathrm{d} r -
\rho \int_s^t \int_\Omega \dive(\mathbf{u}_t) \varphi \, \mathrm{d} x \, \mathrm{d} r
-\int_s^t \int_\Omega \mathsf{K}(\vartheta) \nabla \log(\vartheta) \cdot \nabla \varphi \, \mathrm{d} x \, \mathrm{d} r\\
&\begin{aligned}
\leq
\int_\Omega (\log(\vartheta(t))+c(t)+z(t)){\varphi(t)} \, \mathrm{d} x
&-\int_\Omega (\log(\vartheta(s))+c(s)+z(s)){\varphi(s)} \, \mathrm{d} x\\
&-\int_s^t \int_\Omega \mathsf{K}(\vartheta)|\nabla\log(\vartheta)|^2\varphi\, \mathrm{d} x \, \mathrm{d} r
\end{aligned}\\
&\quad-\int_s^t \int_\Omega \left( g +|c_t|^2+ |z_t|^2 + a(c,z) \varepsilon(\mathbf{u}_t):\mathbb{V} \varepsilon(\mathbf{u}_t) + m(c,z)|\nabla \mu|^2\right)
\frac{\varphi}{\vartheta} \, \mathrm{d} x \, \mathrm{d} r
-\int_s^t \int_{\partial\Omega} h \frac\varphi\vartheta \, \mathrm{d} S \, \mathrm{d} r
\end{aligned}
\end{equation}
required to be valid \color{black} for almost all $0\leq s \leq t \leq T$ and for $s=0$, and for all sufficiently regular and positive test functions $\varphi$, coupled with a
{\sl total energy inequality}:
\begin{equation}
\label{total-enid-intro}
\begin{aligned}
\tE{c(t)}{z(t)}{\vartheta(t)}{\mathbf{u}(t)}{\mathbf{u}_t(t)}
\leq{}&\tE{c(s)}{z(s)}{\vartheta(s)}{\mathbf{u}(s)}{\mathbf{u}_t(s)}\\
&+ \int_s^t\int_\Omega g \, \mathrm{d} x \, \mathrm{d} r
+ \int_s^t\int_{\partial\Omega} h \, \mathrm{d} S \, \mathrm{d} r\\
&+ \int_s^t \int_\Omega \mathbf{f} \cdot \mathbf{u}_t \, \mathrm{d} x \, \mathrm{d} r
+ \int_s^t \int_{\partial\Omega}\big({\boldsymbol{\sigma}} { \bf
n \color{black}} \big)\cdot \mathbf{d}_t \, \mathrm{d} S \, \mathrm{d} r,
\end{aligned}
\end{equation}
valid \color{black} for almost all $0\leq s \leq t \leq T$, and for $s=0$, where
the total energy $\mathscr E$ is the sum of the internal energy and the kinetic energy, i.e. \begin{align}
\tE{c}{z}{\vartheta}{\mathbf{u}}{\mathbf{u}_t}:={}&\tU{c}{z}{\vartheta}{\color{black} \varepsilon(\mathbf{u})\color{black}}+\int_\Omega\frac12|
\mathbf{u}_t \color{black}|^2\,\mathrm dx,
\label{total-energy} \end{align} where the internal energy $\mathscr{U}$ is specified by (cf.~also \eqref{free-energy}): \begin{align}
\tU{c}{z}{\vartheta}{\varepsilon(\mathbf{u})}:={}&\tF{c}{z}{\vartheta}{\varepsilon(\mathbf{u})}-\vartheta\cdot\partial_\vartheta\mathscr{F}(c,z,\vartheta,\varepsilon(\mathbf{u}))\notag\\
={}&\int_\Omega \frac 1p|\nabla c|^p+\frac 1p|\nabla z|^p+W(c,\varepsilon(\mathbf{u}),z)+\phi(c)+\sigma(z)+ I_{[0,+\infty)}(z)+\vartheta\,\mathrm dx. \end{align}
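Let us spell out the elementary computation behind the last summand: only the purely thermal contribution $-\vartheta\log\vartheta-\vartheta\big(c+z+\rho\dive(\mathbf{u})\big)$ to the free energy density enters the relation $U=F-\vartheta\partial_\vartheta F$ nontrivially, and since
\[
\partial_\vartheta F = -\log\vartheta-1-\big(c+z+\rho\dive(\mathbf{u})\big),
\]
we obtain
\[
-\vartheta\log\vartheta-\vartheta\big(c+z+\rho\dive(\mathbf{u})\big)-\vartheta\,\partial_\vartheta F = \vartheta,
\]
which accounts for the term $+\vartheta$ in the above expression of $\mathscr{U}$.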
From an analytical viewpoint, observe that the entropy inequality \eqref{entropy-ineq-intro} has the advantage that all the quadratic terms on the right-hand side of \eqref{e:teta} are multiplied by a negative test function, which, together with the fact that we are only requiring an {\sl inequality} and not an equation, will allow \color{black} us to apply upper semicontinuity arguments for \color{black} the limit passage in the time-discrete approximation
of system \eqref{eqn:PDEsystem} set up in \color{black}
Section~\ref{s:5}.
The \color{black} {\sl ``entropic'' formulation}, first introduced in \cite{fei} in the framework
of heat conduction in fluids, and then applied to a phase separation
model derived according to \textsc{Fr\'emond}'s approach
\cite{fremond} in \cite{fpr09}, has subsequently been
used also in models for different kinds of special materials. Besides the aforementioned
work on damage \cite{RocRos14}, we may mention the papers \cite{ffrs}, \cite{frsz1}, and \cite{frsz2} on liquid crystals, \color{black}
and more recently the analysis of a model for the evolution of non-isothermal binary incompressible immiscible fluids (cf.\hspace{1pt} \cite{ERS}).
Let us also mention that other approaches to treat PDE systems with an $L^1$-right-hand side
are available
in the literature: among others, we refer to \cite{zimmer},
resorting to the notion of {\em renormalized solution},
\color{black}
and \cite{roubiSIAM10} where
the coupling of rate-independent and thermal processes is considered. The heat equation therein, with an $L^1$-right-hand side, is tackled by means of Boccardo-Gallou\"et type techniques.
\paragraph{\bf The weak formulation of the damage flow rule.} Following the lines of \cite{hk1, hk2}, we replace the damage inclusion \eqref{e:z} by
the {\sl damage energy-dissipation
inequality}
\begin{align}
&\label{energ-ineq-z-intro}
\begin{aligned}
\int_s^t \int_{\Omega} |z_t|^2 \, \mathrm{d} x \, \mathrm{d} r & +\int_\Omega\left(
\frac1p |\nabla z(t)|^p + \sigma(z(t))\right)\, \mathrm{d} x\\ & \leq\int_\Omega\left(
\frac1p |\nabla z(s)|^p+ \sigma(z(s))\right)\, \mathrm{d} x
+\int_s^t \int_\Omega z_t \left(-
\pd{z}(c,\varepsilon(\mathbf{u}), z)
+\vartheta\right)\, \mathrm{d} x \, \mathrm{d} r,
\end{aligned}
\end{align}
imposed \color{black} for all $t \in (0,T]$, for $s=0$, and for almost all $0< s\leq t$ and the {\sl one-sided variational inequality for the damage process}
\begin{align}
\label{var-ineq-z-intro}
&\begin{aligned}
\int_\Omega \Big( z_t \zeta +|\nabla z|^{p-2} \nabla z \cdot \nabla \zeta + \xi \zeta +
\sigma'(z(t)) \zeta & + \pd{z}(c,\varepsilon(\mathbf{u}), z) \zeta -\vartheta \zeta \Big)\,\mathrm{d}x
\geq 0 \quad \text{a.e. in\;} (0,T),
\end{aligned}
\end{align}
required to be valid for all sufficiently regular test functions $\zeta$, where $\xi \in \partial I_{[0,+\infty)}(z)$ a.e.\ in $Q$, and $z(x,t) \in [0,1]$, $z_t(x,t)\in(-\infty,0]$ a.e.\ in $Q$.
\paragraph{\bf Entropic weak solutions.} In what follows, we shall refer to the formulation consisting of (\ref{e:c}--\ref{e:mu}), \eqref{e:u}, \eqref{entropy-ineq-intro}, \eqref{total-enid-intro}, \eqref{energ-ineq-z-intro}, \eqref{var-ineq-z-intro}, supplemented with the initial and boundary conditions \eqref{init-bdry-conditions},
as the \emph{entropic weak formulation} of (the initial-boundary value problem for) system \eqref{eqn:PDEsystem}. Let us point out that, in case of regular solutions, it can be seen that the {\sl ``entropic'' formulation} is equivalent to the internal energy balance \eqref{e:teta} (cf.\hspace{1pt} Remark \ref{rmk:weak-sol} as well as \cite[Rmk. 2.6]{RocRos14} for more details). Likewise, the {\sl weak formulation of the damage flow rule} would give rise to the damage inclusion \eqref{e:z} for sufficiently regular solutions. \color{black} In this sense, we can observe that our formulation is consistent \color{black} with the PDE system \eqref{eqn:PDEsystem-expli}.
\paragraph{\bf Our results and related literature.} In this paper we prove existence of global-in-time entropic weak \color{black} solutions
under the following assumptions on the data: \begin{itemize} \item[-] the mixing potential $\phi$ is the sum of a convex, possibly non-smooth part and a regular $\lambda$-concave part (cf.~Hyp.~(I)). Hence, the sum of either a logarithmic potential (e.g.~$(1+c)\log(1+c)+(1-c)\log(1-c)$) or an indicator function (e.g.~$I_{[-1,1]}(c)$) and a smooth concave perturbation (e.g.~$-c^2$) is allowed as a choice of $\phi$, cf.\ Remark \ref{rmk:l-convex-splitting} ahead; \item[-] the mobility $m$ is a smooth function bounded from below by a positive constant; \item[-] the function $\sigma$ is regular; \item[-] the heat conductivity $\mathsf{K}$ is a continuous function growing like a power of $\vartheta$. This choice is mathematically motivated, since it is needed in order to get suitable estimates on the temperature $\vartheta$, but it is also justified by the physical behavior of certain materials (cf.~\cite{klein,zr}); \item[-] the function $a$ is bounded away from zero and bounded from above, and its partial derivatives with respect to both $c$ and $z$ are bounded. These assumptions are mainly made in order to prevent the full degeneracy of the momentum balance \eqref{e:u} and in order to obtain from it the regularity on $\mathbf{u}$ needed to handle the nonlinear coupling with the temperature and damage relations. Instead, the coefficient $b$ in the elastic energy
density \color{black} \eqref{elastic-energy} can possibly vanish, and both $b$ and the eigenstrain $\varepsilon^*$ are required to \color{black} be sufficiently regular functions; \item[-] the thermal expansion coefficient $\rho$ is assumed to be a positive constant. For more general behavior of $\rho$ possibly depending \color{black} on the damage parameter $z$ the reader can refer to \cite{hr}, while the fact that $\rho$ is chosen to be independent of $\vartheta$ is justified by the fact that we assume \color{black} to have a constant specific heat $c_v$ (equal to 1 in \eqref{e:teta} for simplicity): indeed they are related (by thermodynamical laws) by the relation $\partial_\vartheta c_v=\vartheta\partial_\vartheta\rho$; \item[-] the initial data are taken in the energy space, except for
\color{black} the initial displacement and velocity which, jointly \color{black} with the boundary Dirichlet datum for $\mathbf{u}$, must enjoy \color{black} the regularity needed in order to perform
\color{black} elliptic regularity estimates on the momentum balance \eqref{e:u}. \end{itemize}
Furthermore, we consider a \emph{gradient theory} for damage. From the physical viewpoint, the term $\frac{1}{p}|\nabla z|^p$ contributing to \eqref{free-energy} models the nonlocality of the damage process, since the gradient of $z$ accounts for the influence of the damage at a material point on its surrounding neighborhood. The mathematical advantages attached to the presence of this term, and of the analogous contribution
$\frac{1}{p}|\nabla c|^p$,
are rather obvious. Let us mention that, in fact,
throughout the paper we shall assume that the exponent $p$ in \eqref{e:mu} and \eqref{e:z} fulfills $p>d$. This assumption is mainly mathematically motivated by the fact that it ensures that $c$ and $z$ are
estimated in $W^{1,p}(\Omega) \subset \mathrm{C}^0
(\overline\Omega)$, and has been
adopted for the analysis of other damage models (cf., e.g., \cite{bmr,MieRou06,mrz,krz2}).
\par Regarding previous results on this type of problems,
let us point out that, by now, several contributions on systems coupling rate-dependent damage and thermal processes (cf., e.g.,~\cite{BoBo, RocRos12, RocRos14, hr}) as well as rate-dependent damage and phase separation (cf., e.g., \cite{hk1,hk2}) are available in the literature. To the best of our knowledge, this is one of the first contributions on the analysis of a model encompassing all of the three processes (temperature evolution, damage, phase separation) in
a thermoviscoelastic \color{black} material.
Recently, \color{black}
a thermodynamically consistent, quite general \color{black} model describing
diffusion of a solute or a fluid in a solid undergoing possible phase
transformations and rate-independent damage, besides possible visco-inelastic processes, has been studied in \cite{TomRou}. Let us highlight the main difference from our own model: the evolution of the damage process is therein considered \emph{rate-independent}, which clearly affects the
weak solution concept adopted in \cite{TomRou}. In particular, we may point out
that dealing with a \emph{rate-dependent} flow rule for the damage variable is one of the challenges of our own analysis, \color{black} due to the presence of the quadratic nonlinearity in $\varepsilon(\mathbf{u})$ on the right-hand side of \color{black} \eqref{e:z}.
\par
Let us conclude by mentioning \color{black} some open problems which are currently under study, such as uniqueness of solutions, at least for the isothermal case, and
the global-in-time existence analysis for \color{black} the complete damage (degenerating) case, in which the coefficient $a$ in the momentum balance \eqref{e:u} is allowed to vanish in some parts of the domain (cf.~\cite{RocRos12} for the case without phase separation and \cite{hk3} for the isothermal case).
\paragraph{\bf Plan of the paper.} In \underline{Section~\ref{s:3}}, after listing all the assumptions on the data of the problem, we rigorously state the \emph{entropic weak} formulation of the problem and give \color{black} the main result of the paper, i.e.\hspace{1pt} \color{black} Theorem \ref{thm:1} ensuring the global-in-time existence of entropic weak solutions. \par In \underline{Section~\ref{s:4}} we (formally) derive all the a priori estimates on system \eqref{eqn:PDEsystem} which will be at the core of our existence analysis. \color{black} \par
As previously mentioned, Thm.\hspace{1pt} \ref{thm:1} is proved by passing to the limit in a carefully devised time-discretization scheme, also coupled with regularization procedures, which could also be of interest in view of possible numerical simulations on the model.
To its analysis, the whole \underline{Section \ref{s:5}} is devoted. While postponing more detailed comments on its
features, let us mention here that our time-discrete scheme will be {\sl
thermodynamically \color{black} consistent}, in that it will ensure the validity of the discrete versions of the entropy and energy inequalities \eqref{entropy-ineq-intro} and \eqref{total-enid-intro}. This will play a crucial role in the limit passage, developed in \underline{Section \ref{s:6}}, where the proof of Theorem \ref{thm:1} will be carried out. \color{black}
\section{\bf Weak formulation and statement of the main result} \label{s:3} In this section, first of all we recall some notation and preliminary results that will be used throughout the paper. Next, we list all of the conditions on the nonlinearities featuring in system \eqref{eqn:PDEsystem}, as well as on the data $\mathbf{f},\, g,\, h$ and on the initial data. We are thus in the position to give our notion of weak solution to the initial-boundary value problem for system \eqref{eqn:PDEsystem} and state our main existence result, Theorem \ref{thm:1}.
\subsection{\bf Preliminaries} \label{ss:3.1} In what follows, we will suppose that \begin{equation} \label{smoothness-omega} \Omega\subset\mathbb{R}^d, \quad d\in \{2,3\}, \ \ \text{is
a bounded domain with \color{black} $\mathrm{C}^2$-boundary $\partial\Omega$.} \end{equation} This smoothness requirement will allow us to apply regularity results for elliptic systems, at the basis of a regularity estimate that we shall perform on the momentum equation and that will have a key role in the proof of our existence result for system \eqref{eqn:PDEsystem}.
\paragraph{\bf Notation for function spaces, norms, operators} Given a Banach space $X$,
we will use the symbol $\pairing{}{X}{\cdot}{\cdot}$ for the duality pairing between $X'$ and $X$.
Moreover, we shall denote by
${\rm BV}([0,T];X)$ (by $\mathrm{C}^0_{\mathrm{weak}}([0,T];X)$, respectively),
the space of functions from $[0,T]$ with values in $ X$ that are defined at every $t \in [0,T]$ and have bounded variation on $[0,T]$ (and are \emph{weakly} continuous on $[0,T]$, resp.).
Let $\Omega \subset \bbR^d$ be a bounded domain, $d \in \{2,3\}$. We set $Q:= \Omega \times (0,T)$ and $\Sigma:=\partial\Omega\times (0,T)$. We identify both $L^2 (\Omega)$ and $L^2 (\Omega;\R^d)$ with their dual spaces, and denote by $(\cdot,\cdot)$ the scalar product in $\bbR^d$, by $(\cdot,\cdot)_{L^2(\Omega)}$ both the scalar product in $L^2(\Omega)$ and in $L^2 (\Omega;\R^d)$, and by $H_{0}^1(\Omega;\R^d)$, $H_{\mathrm{Dir}}^2(\Omega;\R^d)$ and $ H_N^2(\Omega)$ the spaces \begin{align*}
&H_{0}^1(\Omega;\R^d):=\big\{\mathbf{v} \in H^1(\Omega;\bbR^d) \,:\, \mathbf{v}= 0 \ \hbox{ on
}\partial\Omega \,\big\},
\text{ endowed with the norm } \|
\mathbf{v}\|_{H_0^1(\Omega;\bbR^d)}^2: = \int_{\Omega} \varepsilon(\mathbf{v}) \colon \varepsilon(\mathbf{v})\, \mathrm{d} x,
\\
&H_{\mathrm{Dir}}^2(\Omega;\R^d):= H_{0}^1(\Omega;\R^d) \cap H^2(\Omega; \bbR^d) = \big\{\mathbf{v} \in H^2(\Omega;\bbR^d)\,:\, \mathbf{v} ={0} \ \hbox{ on }\partial\Omega \,\big\},\\
&H_N^2(\Omega):=\big\{v\in H^2(\Omega)\,:\, \partial_n v=0\text{ on }\partial\Omega\big\}. \end{align*}
Note that by Korn's inequality $\|\cdot\|_{H_0^1(\Omega;\bbR^d)}$ is a norm equivalent to the standard one on $H^1(\Omega;\bbR^d)$. We denote by $\mathcal{D} (\overline Q)$ the space of the $\mathrm{C}^\infty$-functions with compact support on $Q$. For $q\geq 1$ we will adopt the notation \begin{equation} \label{label-added}
W_+^{1,q}(\Omega):= \left\{\zeta \in
W^{1,q}(\Omega)\, : \, \zeta(x) \geq 0 \quad \text{for a.a.}\ x \in
\Omega \right\}, \quad \text{ and analogously for }
W_-^{1,q}(\Omega). \end{equation}
Finally, throughout the paper we shall denote by the symbols $c,\,c',\, C,\,C'$ various positive constants depending only on known quantities. Furthermore, the symbols $I_i$, $i = 0, 1,\ldots$, will be used as place-holders for several integral terms popping up in the various estimates: we warn the reader that we will not be self-consistent with the numbering, so that, for instance, the symbol $I_1$ will occur several times with different meanings. \paragraph{\bf Preliminaries of mathematical elasticity} We postpone to Sec.\ \ref{ss:3.2} the precise statement of all assumptions on the \emph{elastic} contribution
$\pd{\varepsilon}(c,\varepsilon(\mathbf{u}),z)$ \color{black} to the elliptic operator in \eqref{e:u}. Concerning the stiffness tensor $\mathbb{C}$ (we will take the viscosity tensor to be a multiple of $\mathbb{C}$, cf.\hspace{1pt} \eqref{eqn:assbV} ahead), \color{black} we suppose that
\begin{equation} \label{ass-elas}
\mathbb{C}=(c_{ijkh})
\in \mathrm{C}^{1}(\Omega;\bbR^{d \times d \times d \times d})\hspace{1pt}, \end{equation} with coefficients satisfying the classical symmetry and ellipticity conditions (with the usual summation convention) \begin{equation} \label{ellipticity} \begin{aligned} c_{ijkh}=c_{jikh}=c_{khij},\qquad \qquad \exists\hspace{1pt}, \nu_0>0 \hspace{1pt},: \quad c_{ijkh} \xi_{ij}\xi_{kh}\geq \nu_0\xi_{ij}\xi_{ij} \hspace{1pt} \hspace{1pt} \forall\hspace{1pt}, \xi_{ij}\colon \xi_{ij}= \xi_{ji}. \end{aligned} \end{equation}
Observe that with \eqref{ellipticity}, we also encompass in our analysis the case of an anisotropic and inhomogeneous material. Thanks to \eqref{ellipticity} and to the $\mathrm{C}^2$-regularity of $\Omega$ we have the following elliptic regularity result (cf.\ e.g.\ \cite[Lemma~3.2, p.\ 260]{necas} or \cite[Chap.\ 6, p.\ 318]{Hughes}): \begin{align} \label{cigamma} \begin{aligned}
\exists \, c_1,\, c_2>0 \quad \forall\, \mathbf{u} \in
H_{\mathrm{Dir}}^2(\Omega;\R^d)\, : \qquad
c_{1} \| \mathbf{u} \|_{H^2(\Omega;\bbR^d)}
\leq \|\dive (\mathbb{C}\varepsilon(\mathbf{u}))\|_{L^2(\Omega;\bbR^d)}
\leq c_{2} \| \mathbf{u} \|_{H^2(\Omega;\bbR^d)}\,. \end{aligned} \end{align} Under the assumption that $\mathbf{u}$ has prescribed boundary values $\mathbf{d}\in H^2(\Omega;\bbR^d)$, i.e. $\mathbf{u}=\mathbf{d}$ a.e. on $\partial\Omega$, we obtain by applying \eqref{cigamma} to $\mathbf{u}-\mathbf{d}$ \begin{align} \label{H2reg} \begin{aligned}
&\exists \, \widetilde c_1,\, \widetilde c_2>0 \quad \forall\, \mathbf{u} \in H^2(\Omega;\bbR^d)
\text{ with }\mathbf{u}=\mathbf{d}\text{ a.e. on }\partial\Omega\, :\\
&\qquad\qquad\widetilde c_{1} \| \mathbf{u} \|_{H^2(\Omega;\bbR^d)}
\leq \|\dive (\mathbb{C}\varepsilon (\mathbf{u}))\|_{L^2(\Omega;\bbR^d)}+\|\mathbf{d}\|_{H^2(\Omega;\bbR^d)}
\leq \widetilde c_{2}\big(\|\mathbf{u}\|_{H^2(\Omega;\bbR^d)} + \|\mathbf{d}\|_{H^2(\Omega;\bbR^d)}\big)\,. \end{aligned} \end{align}
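For later use, let us briefly indicate how the first inequality in \eqref{H2reg} follows from \eqref{cigamma}: since $\mathbf{u}-\mathbf{d}\in H_{\mathrm{Dir}}^2(\Omega;\bbR^d)$,
\[
c_1\|\mathbf{u}\|_{H^2(\Omega;\bbR^d)}\leq c_1\|\mathbf{u}-\mathbf{d}\|_{H^2(\Omega;\bbR^d)}+c_1\|\mathbf{d}\|_{H^2(\Omega;\bbR^d)}\leq \|\dive(\mathbb{C}\varepsilon(\mathbf{u}))\|_{L^2(\Omega;\bbR^d)}+\|\dive(\mathbb{C}\varepsilon(\mathbf{d}))\|_{L^2(\Omega;\bbR^d)}+c_1\|\mathbf{d}\|_{H^2(\Omega;\bbR^d)},
\]
and $\|\dive(\mathbb{C}\varepsilon(\mathbf{d}))\|_{L^2(\Omega;\bbR^d)}\leq C\|\mathbf{d}\|_{H^2(\Omega;\bbR^d)}$ thanks to \eqref{ass-elas}; the second inequality in \eqref{H2reg} is obtained in the same way.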
\paragraph{\bf Useful inequalities} For later reference, we recall here the Gagliardo-Nirenberg inequality in a particular case: for all $r,\,q\in [1,+\infty]$ and for all $v\in L^q(\Omega)$ such that $\nabla v \in L^r(\Omega)$, there holds \begin{equation} \label{gn-ineq}
\|v\|_{L^s(\Omega)}\leq C_{\mathrm{GN}}
\|v\|_{W^{1,r}(\Omega)}^{\theta} \|v\|_{L^q(\Omega)}^{1-\theta} \qquad
\text{ with } \frac{1}{s}=\theta
\left(\frac{1}{r}-\frac{1}{d}\right)+(1-\theta)\frac{1}{q}, \quad 0
\leq \theta \leq 1, \end{equation} the positive constant $C_{\mathrm{GN}}$ depending only on $d,\,r,\,q,\,\theta$.
We will also make use of the following interpolation inequality from \cite[Thm.\hspace{1pt} 16.4, p.\hspace{1pt} 102]{LM} \begin{align} \label{interpolationIneq}
\forall\varrho>0\quad\exists\,C_\varrho>0\quad\forall u\in X:\qquad\|u\|_Y\leq \varrho\|u\|_X+C_\varrho\|u\|_Z, \end{align} where $X\subseteq Y\subseteq Z$ are Banach spaces with compact embedding $X\Subset Y$.
Combining this with the compact embedding \begin{equation} \label{dstar}
H_{\mathrm{Dir}}^2(\Omega;\R^d) \Subset W^{1,d^\star{-}\eta}(\Omega;\bbR^d),
\quad \text{with } d^{\star}=
\begin{cases} \infty & \text{if }d=2,
\\
6 & \text{if }d=3,
\end{cases}
\quad \text{for all $\eta >0$}, \end{equation} (where for $d=2$ we mean that $H_{\mathrm{Dir}}^2(\Omega;\R^d) \Subset W^{1,q}(\Omega;\bbR^d)$ for all $1 \leq q <\infty$),
we have \begin{equation} \label{interp} \forall\, \varrho>0 \ \ \exists\, C_\varrho>0 \ \ \forall\, \eta>0 \ \
\forall\, \mathbf{u} \in H_{\mathrm{Dir}}^2(\Omega;\R^d)\,: \ \
\|\varepsilon(\mathbf{u})\|_{L^{d^\star{-}\eta}(\Omega; \bbR^{d\times d})}\leq \varrho
\|\mathbf{u}\|_{H^2(\Omega; \bbR^{d})}+C_\varrho\|\mathbf{u}\|_{L^2(\Omega; \bbR^{d})}. \end{equation} We also obtain by interpolation \begin{equation} \label{interp2}
\forall\, \varrho>0 \ \ \exists\, C_\varrho>0 \ \ \forall\, \eta>0 \ \
\forall\, \mathbf{u} \in H^1(\Omega;\bbR^d)\,: \ \
\|\mathbf{u}\|_{L^{d^\star{-}\eta}(\Omega;\bbR^d)}
\leq \varrho\|\mathbf{u}\|_{H^1(\Omega;\bbR^d)}+C_\varrho\|\mathbf{u}\|_{L^2(\Omega;\bbR^d)}. \end{equation}
We will also resort to the following \emph{nonlinear} Poincar\'{e}-type inequality
(proved in, e.g., \cite[Lemma 2.2]{gmrs}), with $\mathfrak{m}(w)$ the mean value of $w$:
\begin{equation}
\label{poincare-type}
\forall\, q>0 \quad \exists\, C_q >0 \quad \forall\, w \in H^1(\Omega)\, : \qquad
\| |w|^{q} w \|_{H^1(\Omega)} \leq C_q (\| \nabla (|w|^{q} w )\|_{L^2(\Omega)} + |\mathfrak{m}(w)|^{q+1})\,. \end{equation}
\subsection{Assumptions} \label{ss:3.2} We now collect all the conditions on the functions $\phi,\,m,\,\sigma,\,\mathsf{K},\,a,\,W,\,\mathbb{V}$ in system \eqref{eqn:PDEsystem}. \par
\noindent \textbf{Hypothesis (I).} Concerning the potential $\phi$ for the concentration variable $c$, we require that \begin{equation} \label{potential-phi} \begin{gathered} \phi = \widehat{\beta} + \gamma \quad \text{with } \widehat{\beta}: \bbR \to [0,+\infty] \text{ proper, convex, and l.s.c., with } \widehat{\beta}(0)=0, \text{ and } \\
\gamma \in \mathrm{C}^1(\bbR), \qquad \gamma \text{ $\lambda_{\gamma}$-concave for some $\lambda_{\gamma}\geq0$, and } \ \text{such that } \exists\, C_\phi \in \bbR\ \ \forall\, c \in \mathrm{dom}(\phi) : \ \ \phi(c) \geq C_\phi\,. \end{gathered} \end{equation} In what follows, we will denote the convex-analysis subdifferential $\partial\widehat{\beta}:\bbR \rightrightarrows \bbR$ by $\beta$, and by $\mathrm{dom}(\beta)$ the set $\{ c \in \bbR\, : \, \beta(c)\neq \emptyset\}$.
From $0\in \mathrm{Argmin}_{r\in \bbR} \widehat{\beta}(r)$, it follows that $0\in \beta(0)$. \color{black} \begin{remark}[Consequences of Hypothesis (I)] \upshape \label{rmk:l-convex-splitting} For later use we observe that, since the map $c \mapsto \gamma(c) - \lambda_\gamma\tfrac{c^2}{2}$ is concave, we have the following convex-concave decomposition for $\phi$:
\begin{equation} \label{decomposition} \phi(c)= \ddd{\widehat{\beta}(c) + \lambda_\gamma \frac{c^2}{2}}{convex}{} + \ddd{\gamma(c) - \lambda_\gamma\frac{c^2}{2}}{concave}{}\hspace{1pt},. \end{equation} \end{remark}
\begin{example} \label{ex:phi} \upshape
Admissible choices for $\widehat\beta$ are both the physically meaningful potentials $\widehat\beta(c)=(1+c)\log(1+c)+(1-c)\log(1-c)$ and $\widehat\beta(c)=I_{[-1,1]}(c)$, while $\gamma$ can be a general smooth concave perturbation, e.g.~$\gamma(c)=-\lambda_\gamma c^2$. \color{black}
\end{example} \par\noindent
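To illustrate how a convex-concave splitting of the type \eqref{decomposition} is typically exploited in a time discretization (convex part treated implicitly, concave part explicitly), we include a minimal numerical sketch for the toy gradient flow $c_t=-\phi'(c)$ with the logarithmic potential of Example \ref{ex:phi} and the concave perturbation $\gamma(c)=-2c^2$ (so that $\lambda_\gamma=4$ is admissible); this is only an illustration under simplifying assumptions, and not the time-discrete scheme of Section~\ref{s:5}.
\begin{verbatim}
# Illustrative sketch (not the scheme of Section 5): semi-implicit steps for the
# toy gradient flow c_t = -phi'(c), using the convex-concave splitting
#   phi(c) = [beta_hat(c) + lam*c^2/2]  (convex part, treated implicitly)
#          + [gamma(c)    - lam*c^2/2]  (concave part, treated explicitly),
# with beta_hat(c) = (1+c)log(1+c) + (1-c)log(1-c), gamma(c) = -2*c^2, lam = 4.
import numpy as np
from scipy.optimize import brentq

lam = 4.0
beta = lambda c: np.log((1.0 + c) / (1.0 - c))   # beta = beta_hat'
dgamma = lambda c: -4.0 * c                      # gamma'

def step(c_old, tau):
    # Solve c + tau*(beta(c) + lam*c) = c_old - tau*(dgamma(c_old) - lam*c_old);
    # the left-hand side is strictly increasing on (-1,1), so the root is unique.
    rhs = c_old - tau * (dgamma(c_old) - lam * c_old)
    return brentq(lambda c: c + tau * (beta(c) + lam * c) - rhs,
                  -1.0 + 1e-12, 1.0 - 1e-12)

c, tau = 0.7, 0.05
for _ in range(20):
    c = step(c, tau)
print(c)   # relaxes towards a positive minimizer of phi in (-1,1)
\end{verbatim}
\par\noindent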
\textbf{Hypothesis (II).} As for the nonlinear functions $m$ and $\sigma$, we suppose that \begin{align}
&m \in \mathrm{C}^1 (\bbR\times\bbR) \ \text{ and } \ \exists\, m_0>0 \ \forall\, (c,z) \in \bbR\times \bbR \, : \ m(c,z) \geq m_0,
\label{hyp-m}\\
&\sigma \in \mathrm{C}^2 (\bbR).
\label{hyp-sigma} \end{align}
\par \noindent
\textbf{Hypothesis (III).} The heat conductivity function $\mathsf{K}$ satisfies \begin{align} \label{hyp-K}
\begin{gathered}
\mathsf{K}:[0,+\infty)\to(0,+\infty) \ \text{ is continuous and}\\
\exists \, c_0, \, c_1>0 \quad\exists\kappa>1 \ \
\forall\vartheta\in[0,+\infty)\, :\quad c_0 (1+ \vartheta^{\kappa})
\leq \mathsf{K}(\vartheta) \leq c_1 (1+\vartheta^{\kappa})\,. \end{gathered} \end{align} We will denote by $\widehat{\mathsf{K}}$ the primitive $\widehat{\mathsf{K}} (x):= \int_0^x \mathsf{K}(r) \, \mathrm{d} r $ of $\mathsf{K}$.
\par
\noindent \textbf{Hypothesis (IV).} We require \begin{align} \label{data-a} \begin{aligned}
a \in \mathrm{C}^1(\bbR\times\bbR) \quad\text{ and }\quad
&\exists\, a_0,a_1>0 \quad\forall c, z\in \bbR\, : \quad
&&a_0\leq a(c,z) \leq a_1,\\
&\exists\, a_2>0 \quad\forall c, z\in \bbR\, : \quad
&&|a_{,c}(c,z)|+|a_{,z}(c,z)|\leq a_2. \end{aligned} \end{align}
\noindent \textbf{Hypothesis (V).} We suppose that \begin{align} \label{eqn:assumptionW}
W(x,c,\varepsilon,z)=\frac12 b(c,z)\mathbb{C}(x)(\varepsilon-\varepsilon^*(c)):(\varepsilon-\varepsilon^*(c)), \end{align} where we recall that $b(c,z)$ models the influence of the concentration and damage on the stiffness tensor $\mathbb{C}$ and $\varepsilon^*$ models the eigenstrain. We assume \begin{align}\label{eqn:assbV}
&\varepsilon^*\in \mathrm{C}^2(\bbR),\qquad b\in \mathrm{C}^2(\bbR\times\bbR)
\quad\text{ and }\quad\exists\, b_0>0\quad\forall c, z\in \bbR\, : \quad 0\leq b(c,z)\leq b_0,\quad \mathbb{V}=\omega\mathbb{C}, \quad\omega>0. \end{align} The tensor function $\mathbb{C}$ should satisfy conditions \eqref{ass-elas} and \eqref{ellipticity}.
Let us mention in advance that the last condition on $\mathbb{V}$ will play a crucial role in the proof of $H^2(\Omega;\bbR^d)$-regularity for the discrete displacements, cf.\hspace{1pt} Lemma \ref{lemma:4.16} ahead. \color{black} \par For notational convenience, from now on we shall neglect the $x$-dependence of $W$.
For later reference, we observe that \begin{equation} \label{later-ref} \begin{aligned} & W_{,c}(c,\varepsilon,z) = \frac12 b_{,c}(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c)):(\varepsilon-\varepsilon^*(c)) - b(c,z) (\varepsilon^*)'(c) \mathbb{C} :(\varepsilon-\varepsilon^*(c)), \\ & \begin{aligned} W_{,cc}(c,\varepsilon,z) = & \frac12 b_{,cc}(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c)):(\varepsilon-\varepsilon^*(c)) - b_{,c}(c,z) (\varepsilon^*)'(c) \mathbb{C} :(\varepsilon-\varepsilon^*(c)) \\ & -b(c,z) (\varepsilon^*){''}(c) \mathbb{C} :(\varepsilon-\varepsilon^*(c)) + b(c,z) (\varepsilon^*)'(c) \mathbb{C} :(\varepsilon^*)'(c), \end{aligned} \\ & W_{,z}(c,\varepsilon,z) = \frac12 b_{,z}(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c)):(\varepsilon-\varepsilon^*(c)), \\ & W_{,zz}(c,\varepsilon,z) = \frac12 b_{,zz}(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c)):(\varepsilon-\varepsilon^*(c)), \\ & W_{,\varepsilon}(c,\varepsilon,z) = b(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c)), \\ & W_{,\varepsilon c }(c,\varepsilon,z) = b_{,c}(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c)) - b(c,z) (\varepsilon^*)'(c)\mathbb{C}, \\ & W_{,\varepsilon z }(c,\varepsilon,z) = b_{,z}(c,z) \mathbb{C}(\varepsilon-\varepsilon^*(c))\,. \end{aligned} \end{equation}
Finally, we will suppose throughout the work that $p>d$ and that the data $\mathbf{d}$, $\mathbf{f}$, $g$, and $h$ comply with \begin{subequations} \label{hyp:data} \begin{align}
&\mathbf{d}\in H^1(0,T;H^2(\Omega;\bbR^d))\cap W^{1,\infty}(0,T;W^{1,\infty}(\Omega;\bbR^d))\cap H^2(0,T;H^1(\Omega;\bbR^d)),
\label{dirichlet-data}\\
&\mathbf{f}\in L^2(0,T;L^2 (\Omega;\R^d)),
\label{bulk-force}\hspace{1pt}
&g \in L^1(0,T;L^1(\Omega)) \cap L^2 (0,T; H^1(\Omega)'),\quad g\geq 0 \quad\hbox{a.e. in }Q\hspace{1pt},,
\label{heat-source}\hspace{1pt}
&h \in L^1 (0,T; L^2(\partial \Omega)), \quad h \geq 0 \quad\hbox{a.e. in }\Sigma\hspace{1pt},,
\label{dato-h} \end{align} \end{subequations} and that the initial data fulfill \begin{subequations} \label{h:initial} \begin{align}
&c^0\in W^{1,p}(\Omega),\quad \widehat{\beta}( c^0\color{black}) \in L^1(\Omega), \quad
\mathfrak{m}_0:= \mathfrak{m}( c^0\color{black}) \text{ belongs to the interior of } \mathrm{dom}(\beta),
\label{data_c}\\
&z^0\in W^{1,p}(\Omega),\quad 0 \leq z^0 \leq 1 \text{ in }\Omega,
\label{data_z}\\
&\vartheta^0 \in L^{1}(\Omega), \quad \log\vartheta^0\in L^1(\Omega),\quad\exists\,
\vartheta_*>0\,: \ \vartheta^0\geq\vartheta_*>0\ \text{a.e. in\;}\Omega,
\label{data_teta}\\
&\mathbf{u}^0\in H^2(\Omega;\bbR^d)\text{ with }\mathbf{u}^0=\mathbf{d}(0)\ \text{ a.e. on }\partial\Omega,
\label{data_u}\\
&\mathbf{v}^0\in H^1(\Omega;\mathbb{R}^d).
\label{data_v} \end{align} \end{subequations} \begin{remark}
\upshape
\label{rmk:on-init-data}
Let us point out explicitly that,
if we choose \color{black} $\phi$ as the logarithmic potential from Example \ref{ex:phi}, or with $\phi$ given by the sum
$I_{[0,1]} + \gamma$, we enforce \color{black} the
(physically meaningful) property that $c \in (0,1)$ ($c\in [0,1]$, respectively) in $\Omega$.
From \eqref{data_c} we read that this constraint has to be enforced on the initial datum $c^0$ as well, in the same way as we require $z^0\in [0,1]$ with \eqref{data_z}.
The latter condition,
combined with the information that $z(\cdot,x)$ is nonincreasing for almost all $x\in\Omega$ thanks to the term $\partial I_{(-\infty,0]}(z_t)$ in \eqref{e:z},
will yield that the solution component $z$ is in $[0,1]$ a.e.\hspace{1pt} in $Q$. This property, albeit \color{black} not needed for the analysis of
\eqref{eqn:PDEsystem}, is in accordance with the physical meaning of the damage parameter.
Clearly, in the case
the concentration variable $c$ is forced to vary between two fixed values,
and $z$ is forced to be in $[0,1]$, values of the functions $m$, $\sigma$, $a$ and $b$ outside
these \color{black} ranges do not affect the PDE system. \end{remark}
\subsection{Entropic solutions and main result} \label{ss:3.3} Prior to the precise statement of our weak solution notion for the initial-boundary value problem for system \eqref{eqn:PDEsystem}, we shortly introduce and motivate its main ingredients, namely a suitable weak formulation of the flow rule \eqref{e:z} for the damage variable and the ``entropic'' formulation of the heat equation \eqref{e:teta}. To them, the standard weak formulation of the Cahn-Hilliard equation, and the pointwise (a.e.\hspace{1pt} in $Q$) momentum equation will be coupled.
\hspace{1pt} \paragraph{\bf Entropy and total energy inequalities for the heat equation} Along the footsteps of \cite{fei, fpr09}, cf.\hspace{1pt} also \cite{RocRos14} in the case of a PDE system in thermoviscoelasticity, we will weakly formulate \eqref{e:teta} by means of an ``entropy inequality'', and of a ``total energy (in)equality''. The former is obtained by testing \eqref{e:teta} by $\varphi/\vartheta$, with $\varphi$ a \emph{positive} \color{black} smooth test function. Integrating over space and time leads to \begin{equation} \label{later-4-comparison} \begin{aligned}
&
\begin{aligned}
\int_0^T \int_\Omega \big(\partial_t \log(\vartheta) + c_t + z_t & + \rho \mathrm{div}(\mathbf{u}_t) \big) \varphi \, \mathrm{d} x \, \mathrm{d} t
+\int_0^T \int_\Omega \mathsf{K}(\vartheta) \nabla \log(\vartheta)\cdot\nabla \varphi \, \mathrm{d} x \, \mathrm{d} t
\\ & - \int_0^T \int_\Omega \mathsf{K}(\vartheta) \frac{\varphi}{\vartheta} \nabla \log(\vartheta) \cdot \nabla \vartheta \, \mathrm{d} x \, \mathrm{d} t
\end{aligned}
\\
&
= \int_0^T \int_\Omega \big(g + |c_t|^2+ |z_t|^2 + a(c,z) \varepsilon(\mathbf{u}_t):\mathbb{V} \varepsilon(\mathbf{u}_t) + m(c,z)|\nabla \mu|^2\big) \frac\varphi\vartheta \, \mathrm{d} x \, \mathrm{d} t
+ \int_0^T \int_{\partial\Omega} h \frac\varphi\vartheta \, \mathrm{d} S \, \mathrm{d} t \end{aligned} \end{equation} for all $\varphi \in \mathcal{D}(\overline Q)$. Then, the entropy inequality \eqref{entropy-ineq} below follows.
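Let us note where the two conductivity terms and the boundary integral in \eqref{later-4-comparison} come from: an integration by parts, combined with the boundary condition $\mathsf{K}(\vartheta)\nabla\vartheta\cdot\mathbf{n}=h$ from \eqref{bdry-conditions}, gives (for smooth $\vartheta>0$)
\[
-\int_\Omega \dive\big(\mathsf{K}(\vartheta)\nabla\vartheta\big)\frac{\varphi}{\vartheta}\,\mathrm{d}x
= \int_\Omega \mathsf{K}(\vartheta)\nabla\log(\vartheta)\cdot\nabla\varphi\,\mathrm{d}x
- \int_\Omega \mathsf{K}(\vartheta)\frac{\varphi}{\vartheta}\nabla\log(\vartheta)\cdot\nabla\vartheta\,\mathrm{d}x
- \int_{\partial\Omega} h\,\frac{\varphi}{\vartheta}\,\mathrm{d}S,
\]
and moving the boundary term to the right-hand side yields the corresponding terms in \eqref{later-4-comparison}.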
The total energy inequality (cf.\hspace{1pt} the forthcoming \eqref{total-enid}) associated with system \eqref{eqn:PDEsystem} corresponds to its standard \emph{energy} estimate. Formally, it is indeed obtained by
testing \eqref{e:c} by $\mu$, \eqref{e:mu} by $c_t$, \eqref{e:z} by $z_t$, \eqref{e:teta} by $1$, and \eqref{e:u} by $\mathbf{u}_t$, and it features the total energy \eqref{total-energy} of the system.\hspace{1pt} \paragraph{\bf Weak flow rule for the damage parameter} We will adopt the solution notion from \cite{hk1,hk2}, which can be motivated by observing that, due to the convexity of $I_{(-\infty,0]}$, the flow rule \color{black} \eqref{e:z} reformulates as $z_t\leq 0$ a.e.\hspace{1pt} in $Q$ and \begin{subequations} \label{ineq-system} \begin{align} \label{ineq-system2}
\Big( z_t-\Delta_p(z)+\xi + \sigma'(z)+\pd{z}(c,\varepsilon(\mathbf{u}),z)-\vartheta\Big) \zeta \geq{}&
0\quad && \qquad \text{a.e. in\;} Q,\text{ for all } \zeta \leq 0,
\\
\label{ineq-system3}
\Big( z_t-\Delta_p(z)+
\xi + \sigma'(z) +\pd{z}(c,\varepsilon(\mathbf{u}),z) -\vartheta\Big) z_t \leq{}& 0 && \qquad
\text{a.e. in\;} Q, \end{align} \end{subequations} with $\xi \in \partial I_{[0,+\infty)}(z)$ in $\Omega \times (0,T)$. Our weak formulation of \eqref{e:z} in fact consists of the condition $z_t \leq 0$, of the integrated version of \eqref{ineq-system2}, with negative test functions from $W^{1,p}(\Omega)$, and of the \emph{damage energy-dissipation} inequality obtained by integrating \eqref{ineq-system3}.
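For the reader's convenience, we recall the elementary argument behind this reformulation: since
\[
\omega\in\partial I_{(-\infty,0]}(z_t) \quad\Longleftrightarrow\quad z_t\leq 0,\quad \omega\geq 0,\quad \omega\, z_t=0,
\]
writing \eqref{e:z} as $z_t-\Delta_p(z)+\xi+\sigma'(z)+\pd{z}(c,\varepsilon(\mathbf{u}),z)-\vartheta=-\omega$ with $\omega\in\partial I_{(-\infty,0]}(z_t)$ and $\xi \in \partial I_{[0,+\infty)}(z)$, one obtains \eqref{ineq-system2} upon multiplying by $\zeta\leq 0$, and \eqref{ineq-system3} upon multiplying by $z_t$.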
We are now in the position to give the following notion of weak solution:
\begin{definition}[Entropic weak formulation] \label{def-entropic}
Given data $(\mathbf{d}, \mathbf{f}, g, h)$ fulfilling \eqref{hyp:data}
and initial values \linebreak
$ (c^0,z^0,\vartheta^0,\mathbf{u}^0,\mathbf{v}^0) $ \color{black} fulfilling \eqref{h:initial}, we call a quintuple
$(c, \mu, z,\vartheta,\mathbf{u})$ an \emph{entropic weak solution} to the PDE system
\eqref{eqn:PDEsystem}, supplemented with the initial and boundary conditions \eqref{init-bdry-conditions},
if
\begin{align}
&c\in L^\infty(0,T;W^{1,p}(\Omega))\cap H^1(0,T;L^2(\Omega)),\,
\Delta_p(c)\in L^2(0,T;L^2(\Omega)),\label{reg-c}\\
&\mu\in L^2(0,T;H_N^2(\Omega)),\label{reg-mu}\\
&z\in L^\infty(0,T;W^{1,p}(\Omega))\cap H^1(0,T;L^2(\Omega)),\label{reg-z}\\
&\vartheta\in L^2(0,T;H^1(\Omega))\cap L^\infty(0,T;L^1(\Omega)),\,
\vartheta^{\frac{\kappa+\alpha}{2}}\in L^2(0,T;H^1(\Omega))\text{ for all }\alpha\in(0,1),
\label{reg-teta}\\
&\mathbf{u}\in H^1(0,T; H^2(\Omega;\bbR^d))\cap W^{1,\infty}(0,T; H^1(\Omega;\bbR^d))\cap H^2(0,T;L^2(\Omega;\bbR^d)),\label{reg-u}
\end{align}
and subgradients (specified in \eqref{eta-beta} and \eqref{xi-def} below)
\begin{align}
&\eta \in L^2(0,T;L^2(\Omega)),\\
&\xi \in L^2(0,T;L^2(\Omega)),
\end{align}
where $(c,z,\vartheta,\mathbf{u})$ comply with
the initial conditions
(note that the initial condition for $\vartheta$ is implicitly formulated in \eqref{total-enid}
below)
\begin{align}
\label{better-init}
&&&c(0)=c^0,
&&z(0)=z^0,
&&\mathbf{u}(0)=\mathbf{u}^0,
&&\mathbf{u}_t(0)=\mathbf{v}^0
&&\text{a.e. in }\Omega,&&
\end{align}
the Dirichlet condition
\begin{align}
\label{boundary-cond}
\mathbf{u}=\mathbf{d}\quad\text{ a.e. on }\partial\Omega\times(0,T)
\end{align}
and the following relations: \color{black}
\begin{itemize}
\item[(i)] Cahn-Hilliard system:
\begin{align}
c_t={}&\dive(m(c,z)\nabla\mu)
&&\text{a.e. in\;} Q,\label{ch-1}\\
\mu ={}&-\Delta_p(c)+\eta + \gamma'(c)+W_{,c}(c,\varepsilon(\mathbf{u}),z)-\vartheta+c_t
&&\text{a.e. in\;} Q,\label{ch-2}\\
\eta \in{}& \partial \widehat{\beta}(c) &&\text{a.e. in\;} Q;
\label{eta-beta}
\end{align}
\item[(ii)]
balance of forces:
\begin{align}
&\mathbf{u}_{tt}-\dive{\boldsymbol{\sigma}}=\mathbf{f}
&&\qquad\qquad\text{a.e. in\;} Q,
\label{momentum-a.e.}\\
&{\boldsymbol{\sigma}}=a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t)+W_{,\varepsilon}(c,\varepsilon(\mathbf{u}),z)-\rho\vartheta\mathds 1
&&\qquad\qquad\text{a.e. in\;} Q;
\label{stress-tensor}
\end{align}
\item[(iii)]
weak formulation of the damage flow rule:\hspace{1pt}
{\sl damage energy-dissipation
inequality} for all $t \in (0,T]$, for $s=0$, and for almost all $0< s\leq t$
\begin{align}
&\label{energ-ineq-z}
\begin{aligned}
\int_s^t \int_{\Omega} |z_t|^2 \, \mathrm{d} x \, \mathrm{d} r & +\int_\Omega\left(
\frac1p |\nabla z(t)|^p + \sigma(z(t))\right)\, \mathrm{d} x\\ & \leq\int_\Omega\left(
\frac1p |\nabla z(s)|^p+ \sigma(z(s))\right)\, \mathrm{d} x
+\int_s^t \int_\Omega z_t \left(-
\pd{z}(c,\varepsilon(\mathbf{u}), z)
+\vartheta\right)\, \mathrm{d} x \, \mathrm{d} r
\end{aligned}
\end{align}
and the {\sl one-sided variational inequality for the damage process}
\begin{align}
\label{var-ineq-z}
&\begin{aligned}
\int_\Omega \Big( z_t \zeta +|\nabla z|^{p-2} \nabla z \cdot \nabla \zeta + \xi \zeta +
\sigma'(z(t)) \zeta & + \pd{z}(c,\varepsilon(\mathbf{u}), z) \zeta -\vartheta \zeta \Big)\,\mathrm{d}x
\geq 0 \\ & \text{for all } \zeta\in W_-^{1,p}(\Omega), \quad \text{a.e. in\;} (0,T),
\end{aligned}
\end{align}
where
\begin{align}
\xi \in \partial I_{[0,+\infty)}(z)\qquad\text{a.e. in\;} Q,
\label{xi-def}
\end{align}
as well as the constraints
\begin{align}
\label{constraint-chit}
&z \in [0,1],\qquad
z_t\in(-\infty,0] \qquad \text{a.e. in\;} Q;
\end{align}
\item[(iv)]
strict positivity and entropy inequality:
\begin{equation}
\label{strict-pos-teta}
\exists\,\underline{\vartheta}>0 \ \text{for a.a.}\ (x,t) \in Q\, : \quad
\vartheta(x,t)\geq \underline{\vartheta}>0
\end{equation}
and for almost all $0\leq s \leq t \leq T$, and for $s=0$ the entropy inequality holds:
\begin{equation}
\label{entropy-ineq}
\begin{aligned}
&\int_s^t \int_\Omega (\log(\vartheta) + c+z) \varphi_t \, \mathrm{d} x \, \mathrm{d} r -
\rho \int_s^t \int_\Omega \dive(\mathbf{u}_t) \varphi \, \mathrm{d} x \, \mathrm{d} r
-\int_s^t \int_\Omega \mathsf{K}(\vartheta) \nabla \log(\vartheta) \cdot \nabla \varphi \, \mathrm{d} x \, \mathrm{d} r\\
&\begin{aligned}
\leq
\int_\Omega (\log(\vartheta(t))+c(t)+z(t)){\varphi(t)} \, \mathrm{d} x
&-\int_\Omega (\log(\vartheta(s))+c(s)+z(s)){\varphi(s)} \, \mathrm{d} x\\
&-\int_s^t \int_\Omega \mathsf{K}(\vartheta)|\nabla\log(\vartheta)|^2\varphi\, \mathrm{d} x \, \mathrm{d} r
\end{aligned}\\
&\quad-\int_s^t \int_\Omega \left( g +|c_t|^2+ |z_t|^2 + a(c,z) \varepsilon(\mathbf{u}_t):\mathbb{V} \varepsilon(\mathbf{u}_t) + m(c,z)|\nabla \mu|^2\right)
\frac{\varphi}{\vartheta} \, \mathrm{d} x \, \mathrm{d} r
-\int_s^t \int_{\partial\Omega} h \frac\varphi\vartheta \, \mathrm{d} S \, \mathrm{d} r
\end{aligned}
\end{equation}
for all $\varphi \in \mathrm{C}^0 ([0,T]; W^{1,d+\epsilon}(\Omega)) \cap H^1 (0,T; L^{({d^\star})'}(\Omega))$
for some $\epsilon>0$, with $\varphi \geq 0$;
\item[(v)]
total energy inequality for almost all $0\leq s \leq t \leq T$, and for $s=0$:
\begin{equation}
\label{total-enid}
\begin{aligned}
\tE{c(t)}{z(t)}{\vartheta(t)}{\mathbf{u}(t)}{\mathbf{u}_t(t)}
\leq{}&\tE{c(s)}{z(s)}{\vartheta(s)}{\mathbf{u}(s)}{\mathbf{u}_t(s)}\\
&+ \int_s^t\int_\Omega g \, \mathrm{d} x \, \mathrm{d} r
+ \int_s^t\int_{\partial\Omega} h \, \mathrm{d} S \, \mathrm{d} r\\
&+ \int_s^t \int_\Omega \mathbf{f} \cdot \mathbf{u}_t \, \mathrm{d} x \, \mathrm{d} r
+ \int_s^t \int_{\partial\Omega}\big({\boldsymbol{\sigma}} { \bf n \color{black}} \big)\cdot \mathbf{d}_t \, \mathrm{d} S \, \mathrm{d} r,
\end{aligned}
\end{equation}
where for $s=0$ we read $\vartheta(0)= \vartheta^0$, and $\mathscr{E}$ is given by \eqref{total-energy}. \color{black}
\end{itemize}
\end{definition}
\noindent \begin{remark} \label{rmk:weak-sol}
A few comments on Definition \ref{def-entropic} are in order:
\begin{itemize}
\item[--]
First of all, observe that inequalities \eqref{var-ineq-z} and \eqref{energ-ineq-z}
yield the \emph{damage variational inequality} (with $\xi$ fulfilling \eqref{xi-def})
\begin{equation}
\label{dam-var-ineq}
\begin{aligned}
\int_s^t\int_\Omega|\nabla z|^{p-2}\nabla z\cdot\nabla\zeta \, \mathrm{d} x \, \mathrm{d} r & -\int_\Omega\frac 1p|\nabla z(t)|^p \, \mathrm{d} x +\int_\Omega\frac 1p|\nabla z(s)|^p \, \mathrm{d} x\\
&+\int_s^t\int_\Omega\Big(z_t(\zeta-z_t)+\sigma'(z)(\zeta-z_t)+\xi(\zeta-z_t)\Big) \, \mathrm{d} x \, \mathrm{d} r \\
&\geq \int_s^t\int_\Omega\Big(-W_{,z}(c,\varepsilon(\mathbf{u}),z)(\zeta-z_t)+\vartheta(\zeta-z_t)\Big) \, \mathrm{d} x \, \mathrm{d} r
\end{aligned}
\end{equation}
for all $t \in (0,T]$, for $s=0$, and for almost all $0< s\leq t$ and
for all test functions $\zeta \in L^p (0,T; W_-^{1,p}(\Omega)) \cap L^\infty (0,T; L^\infty(\Omega))$.
\item[--]
Concerning the \emph{entropic} formulation (=entropy+total energy inequalities) of the heat
equation, we point out that it is consistent
with the classical one. Namely, \color{black} if \color{black} the functions $\vartheta,\hspace{1pt}, c,\hspace{1pt}, z$ are sufficiently smooth,
then inequalities \eqref{entropy-ineq} and \eqref{total-enid}, combined with
\eqref{e:c}--\eqref{e:z} and \eqref{e:u} yield the pointwise formulation of
\eqref{e:teta}, cf.\hspace{1pt} \cite[Rmk.\hspace{1pt} 2.6]{RocRos14} for all details.
\item[--]
Observe that the \emph{damage energy-dissipation} inequality \eqref{energ-ineq-z} is
required to hold for all $t\in (0,T]$ and for almost all $0 \leq s<t$, and $s=0$.
Indeed we will not be able to improve it to an equality, or to an inequality holding
on \emph{every} subinterval $[s,t]\subset[0,T]$. This is due to the fact that we will
obtain \eqref{energ-ineq-z} by passing to the limit in its time-discrete version
(cf.\hspace{1pt} Lemma \ref{l:energy-est}), exploiting lower semicontinuity arguments to take the
limit of the left-hand side, and pointwise, almost everywhere in $(0,T)$, convergences
to take the limit of the right-hand side.
Analogous considerations apply to the \emph{entropy} and \emph{total energy} inequalities
\eqref{entropy-ineq} and \eqref{total-enid}.
\item[--]
We remark that the \emph{damage energy-dissipation} and the \emph{total energy}
inequalities are obtained independently one of another: while this will be clear from the
proof of Theorem \ref{thm:1} below, we refer to \cite[Rmk.\hspace{1pt} 2.8]{RocRos14}
and \cite[Sec. 2.4]{RocRos12} for further comments.
\item[--]
The quasi-linear $p$-Laplacian operator $\Delta_p:W^{1,p}(\Omega)\to W^{1,p}(\Omega)'$
with homogeneous Neumann conditions occurring in \eqref{ch-2} is defined in the distributional
sense as
$$
\langle - \color{black}\Delta_p(v),w\rangle_{W^{1,p}(\Omega)}=\int_\Omega|\nabla v|^{p-2}\nabla v\cdot\nabla w\,\mathrm dx.
$$
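A one-dimensional discrete illustration of this duality pairing is sketched right after this remark.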
However, since $\Delta_p(c)\in L^2(0,T;L^2(\Omega))$ due to \eqref{reg-c},
the Cahn-Hilliard system can be interpreted in a pointwise formulation.
In view of the regularity result \cite[Thm.\hspace{1pt} 2, Rmk.\hspace{1pt} 3.5]{savare98},
we infer the enhanced regularity
\begin{align*}
c \in L^2 (0,T; W^{1+\sigma,p}(\Omega)) \qquad \text{for all } 0< \sigma< \frac1p.
\end{align*}
\item[--]
All the terms in the total energy inequality \eqref{total-enid}
have \color{black} a physical interpretation:
The second and the third term on the right-hand \color{black} side of \eqref{total-enid}
describe energy changes due to external heat sources.
The integrand $\mathbf{f}\cdot\mathbf{u}_t$ in \color{black} the fourth term on the right-hand side of \eqref{total-enid}
specifies the power expended by \color{black} the external volume force $\mathbf{f}$,
whereas the integrand $\big({\boldsymbol{\sigma}} n\big)\cdot \mathbf{d}_t$ of the fifth term
indicates the power expended by \color{black} the time-dependent Dirichlet data $\mathbf{d}$
on the boundary $\partial\Omega$
(remember that ${\boldsymbol{\sigma}}$ is the stress tensor given in \eqref{stress-tensor}).
\end{itemize} \end{remark}
We can now state our existence result for the entropic formulation of system \eqref{eqn:PDEsystem}. Observe that, while the basic time-regularity for $\vartheta$ (in fact for $\log(\vartheta)$) is poor in the general case, under an additional restriction on the exponent $\kappa$ from Hypothesis (III) we will be able to obtain $\mathrm{BV}$-time regularity for $\vartheta$.
\begin{theorem} \label{thm:1}
Assume \textbf{Hypotheses (I)--(V)}, and let the data $(\mathbf{d},\mathbf{f}, g, h)$
comply with \eqref{hyp:data}. Then, for any quintuple
$(c^0,z^0,\vartheta^0, \mathbf{u}^0,\mathbf{v}^0)$ fulfilling
\eqref{h:initial} there exists an entropic weak solution
$(c,\mu,z,\vartheta,\mathbf{u})$
to the PDE system
\eqref{eqn:PDEsystem}, supplemented with the initial and boundary conditions \eqref{init-bdry-conditions}, such that \color{black}
\begin{align}
&\label{BV-log}
\log(\vartheta) \in \mathrm{BV}([0,T];W^{1,d+\epsilon}(\Omega)') \qquad \text{for all } \epsilon >0.
\end{align}
Furthermore, if in addition the exponent $\kappa$ in \eqref{hyp-K} satisfies
\begin{equation}
\label{range-k-admissible}
\kappa \in (1, 5/3) \quad\hbox{if $d=3$ and } \kappa \in (1, 2) \quad\hbox{if $d=2$ },
\end{equation}
then we have
\begin{equation}
\label{furth-reg-teta} \vartheta\in \mathrm{BV}([0,T];
W^{2,d+\epsilon}(\Omega)') \qquad \text{for every } \epsilon>0,
\end{equation}
and the total energy inequality \eqref{total-enid} holds \underline{for all} $t \in [0,T]$, for $s=0$, and for almost all $s \in (0,t)$. \end{theorem}
We will prove Theorem \ref{thm:1}
throughout Sections \ref{s:5} \& \ref{s:6} by passing to the limit in a carefully devised time discretization scheme
and several regularizations.
Namely, in Section \ref{s:5} we are going to set up our time discretization scheme for system \eqref{eqn:PDEsystem} and perform on it all the a priori estimates allowing us to prove, in Sec.\hspace{1pt} \ref{s:6}, that (along a suitable subsequence) the approximate solutions converge to an entropic weak solution to \eqref{eqn:PDEsystem}.
However, to enhance the readability of the paper in Section \ref{s:4} we will (formally) perform all estimates on the time-continuous level, i.e.\hspace{1pt} on system \eqref{eqn:PDEsystem} itself. \color{black}
\section{\bf Formal a priori estimates} \label{s:4}
Let us briefly outline all the estimates that will be formally developed on the time-continuous system \eqref{eqn:PDEsystem}: \color{black} \begin{itemize}
\item[--]
in the \underline{\bf First estimate},
from the (formally written) \emph{total energy identity} (cf.\hspace{1pt}
\eqref{calc1} below)
we will derive a series of bounds on the \emph{non-dissipative} variables $c,\, z,\, \vartheta,\, \mathbf{u} $, as well as on $\|\mathbf{u}_t\|_{L^\infty (0,T; L^2(\Omega;\bbR^d))}$.
\item[--]
Then, with the \underline{\bf Second estimate},
we shall adapt some calculations first developed in \cite{fpr09} (see also \cite{RocRos14}) to derive a bound for
$\|\vartheta\|_{ L^2 (0,T; H^1(\Omega))\color{black}}$ via a clever test of the heat equation \eqref{e:teta}.
\item[--]
Exploiting the previously obtained estimates,
in the \underline{\bf Third estimate} we will obtain bounds for the \emph{dissipative} variables $c_t,\, z_t,\, \varepsilon(\mathbf{u}_t)$, as well as for $\nabla \mu$.
\item[--]
The \underline{\bf Fourth estimate} is an elliptic regularity estimate on the momentum equation, along the footsteps of \cite{bss} where it was developed in the case of a
\emph{scalar} displacement variable. With this, in particular we gain a (uniform in time) bound on $\|\mathbf{u}\|_{H^2(\Omega;\bbR^d)}$ which translates into a (uniform in time) $L^2(\Omega)$-bound for the term $\pd{c}(c,\varepsilon(\mathbf{u}),z)$ in \eqref{e:mu}.
\item[--]
Using this, in the \underline{\bf Fifth estimate} we obtain a bound on the
$L^2(0,T;H^1(\Omega))$-norm of $\mu$ from a bound on its mean value $\Xint-_\Omega \mu \, \mathrm{d} x$, combined with the previously obtained
bound for
$\nabla\mu$ via the Poincar\'e inequality.
To develop the related calculations, we will momentarily suppose that
\begin{equation}
\label{mir-zelik}
\begin{gathered}
\widehat\beta \in \mathrm{C}^1(\bbR) \text{ and satisfies the following property:}
\\
\forall\, \mathfrak{m} \in\bbR\ \ \exists\, C_{\mathfrak{m}},\, C_{\mathfrak{m}}'>0
\quad |\beta(c+\mathfrak{m})|\leq C_{\mathfrak{m}} \beta(c+\mathfrak{m})c +C_{\mathfrak{m}}'\,.
\end{gathered}
\end{equation}
\item[--]
We are then in the position to obtain a $L^2(0,T; L^2(\Omega;\bbR^d))$-estimate for each single term in \eqref{e:mu} in the \underline{\bf Sixth estimate}.
\item[--]
With the \underline{\bf Seventh} and \underline{\bf Eighth} estimates we gain some information on the ($\mathrm{BV}$-)time regularity of $ \log(\vartheta)$
and $\vartheta$, respectively (in the latter case, under the further condition \eqref{range-k-admissible} on the growth exponent $\kappa$ of $\mathsf{K}$).
\item[--]
Finally, in the \underline{\bf Ninth estimate} we resort to \color{black} higher elliptic regularity results to gain
a uniform bound on $\|\mu\|_{L^2(0,T;H^2(\Omega))}$.
\end{itemize}
In the proof of
the forthcoming \color{black} Proposition \ref{prop:aprio-discr} we will discuss how to make all of the following calculations rigorous in the context of the time-discretization scheme from Definition \ref{def:time-discrete} (let us mention in advance that, for the \emph{Fifth estimate} we will need the analogue of \eqref{mir-zelik} on the level of the Yosida regularization of $\beta$),
with the exception of the computations related to the ensuing \textbf{Seventh a priori estimate}. Indeed, while in the present time-continuous context this formal estimate will provide a $\mathrm{BV}$-in-time bound for $\log(\vartheta)$, on the time-discrete level it will be possible to render it only in a \emph{weaker} form, albeit still
useful for the compactness arguments developed in Section
\ref{s:6}.
\par In the following calculations, at several spots we will follow the footsteps of \cite{RocRos14}, hence we will give the main ideas, skipping some details and referring to the latter paper.
In comparison to \cite{RocRos14}, the additional coupling with the Cahn-Hilliard system \eqref{e:c}--\eqref{e:mu} requires new a priori estimates (see the \textbf{Fifth}, \textbf{Sixth} and \textbf{Ninth estimates} below). Beyond this, the remaining system \eqref{e:z}--\eqref{e:u} also depends on the phase field variable $c$, and the estimation techniques used in \cite{RocRos14} need to be adapted to this situation. Finally, the time-dependent Dirichlet boundary conditions for $\mathbf{u}$ require substantial modifications, especially in the \textbf{First}, but also in the \textbf{Third} and \textbf{Fourth estimates} below.
\paragraph{\bf Strict positivity of $\vartheta$} Along the lines of \cite{fpr09}, we rearrange terms in \eqref{e:teta} and (formally, disregarding the -positive- boundary datum $h$) we obtain \begin{equation} \label{formal-positivity} \begin{aligned}
\vartheta_t-\dive(\mathsf{K} (\vartheta)\nabla\vartheta)
& = g +|c_t|^2+|z_t|^2 + a( c, z)\varepsilon(\mathbf{u}_t):\mathbb{V} \varepsilon(\mathbf{u}_t) + m(c,z) |\nabla \mu|^2 - c_t\vartheta-z_t\vartheta
- \rho \vartheta \mathrm{div}(\mathbf{u}_t)
\\ & \geq
g +\frac12|c_t|^2+\frac12|z_t|^2 + c |\varepsilon(\mathbf{u}_t)|^2 + m(c,z) |\nabla \mu|^2 -C
\vartheta^2
\geq -C\vartheta^2 \quad \text{a.e. in\;} \, Q. \end{aligned} \end{equation} Here, for the first inequality we have used that $\mathbb{V}$ is positive definite by \eqref{eqn:assbV} and \color{black} \eqref{ellipticity}, that $a$ is strictly positive thanks to \eqref{data-a}, and that \begin{equation} \label{eps-estim}
| \dive(\mathbf{u}_t) | \leq c(d)
|\varepsilon({\mathbf{u}_t})| \quad \text{a.e.\hspace{1pt} in $Q$} \end{equation}
with $c(d)$ a positive constant only depending on the space dimension $d$. The second inequality in \eqref{formal-positivity} \color{black} also relies on the fact that $g \geq 0$ a.e.\hspace{1pt} in $Q$.
Therefore we conclude that
$v$ solving the Cauchy problem \[
v_t=-\frac12 v^2, \quad v(0)=\vartheta_*>0 \] is a subsolution of \eqref{e:teta}, and a comparison argument yields that there exists $\underline\vartheta>0$ such that \begin{equation}\label{teta-pos}
\vartheta(\cdot,t)\geq v(t)>\underline\vartheta>0\quad \hbox{for all }t\in [0,T]\,. \end{equation}
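For later reference, the above Cauchy problem can be solved explicitly (an elementary computation; here the constant in front of $v^2$ is normalized to $\tfrac12$ as above):
\[
v(t)=\frac{\vartheta_*}{1+\tfrac{\vartheta_*}{2}\,t}\;\geq\;\frac{\vartheta_*}{1+\tfrac{\vartheta_*}{2}\,T}\;>\;0
\qquad\text{for all } t\in[0,T],
\]
so that \eqref{teta-pos} holds, e.g., with any $\underline\vartheta\in\big(0,\tfrac{\vartheta_*}{1+\vartheta_* T/2}\big)$.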
\paragraph{\bf First estimate:} We test \eqref{e:c} by $\mu$, \eqref{e:mu} by $c_t$, \eqref{e:z} by $z_t$, \eqref{e:teta} by 1, \eqref{e:u} by $\mathbf{u}_t$, add the resulting relations and integrate over the time interval $(0,t)$, $t\in (0,T]$. Here the second term in the force balance equation is treated by integration by parts in space as follows (notice that $\mathbf{u}_t=\mathbf{d}_t$ a.e. on $\partial\Omega\times(0,T)$): \begin{align} \label{sigmaInt} \begin{aligned}
&\int_0^t\int_\Omega-\dive\big(a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t)+W_{,\varepsilon}(c,\varepsilon(\mathbf{u}),z)-\rho\vartheta\mathds{1}\big)\cdot\mathbf{u}_t\,\mathrm dx\,\mathrm ds\\
&\qquad=\int_0^t\int_\Omega a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t):\varepsilon(\mathbf{u}_t)+W_{,\varepsilon}(c,\varepsilon(\mathbf{u}),z):\varepsilon(\mathbf{u}_t)-\rho\vartheta\dive(\mathbf{u}_t)\,\mathrm dx\,\mathrm ds
-\int_0^t\int_{\partial\Omega}({\boldsymbol{\sigma}}{ \bf n \color{black}} )\cdot\mathbf{d}_t\, \mathrm{d} S\,\mathrm ds. \end{aligned} \end{align} Furthermore, we use that, by the chain rule, \begin{align*} \begin{aligned}
& \text{(i) }
&& \begin{aligned}
&\int_0^t \int_\Omega \pd{c}(c,\varepsilon(\mathbf{u}),z) c_t + \pd{z}(c,\varepsilon(\mathbf{u}),z) z_t + \pd{\varepsilon}(c,\varepsilon(\mathbf{u}),z)\colon \varepsilon(\mathbf{u}_t) \, \mathrm{d} x \, \mathrm{d} s\\
&= \int_\Omega W(c(t),\varepsilon(\mathbf{u}(t)),z(t)) \, \mathrm{d} x - \int_\Omega W(c(0),\varepsilon(\mathbf{u}(0)),z(0)) \, \mathrm{d} x,
\end{aligned}\\
& \text{(ii) } && \int_0^t \int_\Omega \left( \eta + \gamma'(c) \right ) c_t \, \mathrm{d} x \, \mathrm{d} s = \int_\Omega \phi(c(t)) \, \mathrm{d} x - \int_\Omega \phi(c(0)) \, \mathrm{d} x,\\
& \text{(iii) } && \int_0^t \int_\Omega\left( \partial I_{[0,+\infty)}(z) +\sigma'(z) \right) z_t \, \mathrm{d} x \, \mathrm{d} s = \int_\Omega I_{[0,+\infty)}(z(t)) + \sigma(z(t)) \, \mathrm{d} x - \int_\Omega I_{[0,+\infty)}(z(0)) + \sigma(z(0)) \, \mathrm{d} x, \end{aligned} \end{align*}
as well as the identity $\int_0^t \int_\Omega \partial I_{(-\infty,0]}(z_t) z_t \, \mathrm{d} x \, \mathrm{d} s = \int_0^t \int_\Omega
I_{(-\infty,0]}(z_t) \, \mathrm{d} x \, \mathrm{d} s= 0 $ due to the
positive \color{black} $1$-homogeneity of $ \partial I_{(-\infty,0]}$. Also taking into account the cancellation of a series of terms, we arrive at the \emph{total energy identity} \begin{equation}\label{calc1} \begin{aligned}
\tE{c(t)}{z(t)}{\vartheta(t)}{\mathbf{u}(t)}{\mathbf{u}_t(t)} ={}& \tE{c_0}{z_0}{\vartheta_0}{\mathbf{u}_0}{\mathbf{v}_0} + \int_0^t \int_\Omega g \, \mathrm{d} x\,\mathrm ds+
\int_0^t\int_{\partial\Omega} h \, \mathrm{d} S \, \mathrm{d} s\\
&+\int_0^t \int_\Omega \mathbf{f} \cdot \mathbf{u}_t \, \mathrm{d} x \, \mathrm{d} s
+\int_0^t\int_{\partial\Omega}({\boldsymbol{\sigma}} { \bf n \color{black}})\cdot\mathbf{d}_t\, \mathrm{d} S\,\mathrm ds\,, \end{aligned} \end{equation} which incorporates the initial conditions \eqref{better-init}.
We estimate the second, third and fourth terms on the right-hand side of \eqref{calc1} via \eqref{hyp:data} and obtain \begin{align*} \begin{aligned}
& \left| \int_0^t \int_\Omega g \, \mathrm{d} x \, \mathrm{d} s \right|
\stackrel{\eqref{heat-source}}{\leq} C,
\qquad
\left| \int_0^t \int_{\partial\Omega} h \, \mathrm{d} S \, \mathrm{d} s \right| \stackrel{\eqref{dato-h}}{\leq} C,\\
& \left| \int_0^t \int_\Omega \mathbf{f} \cdot \mathbf{u}_t \, \mathrm{d} x \, \mathrm{d} s \right|
\stackrel{\eqref{bulk-force}}{\leq} C
+\|\mathbf{u}_t\|_{L^2(0,T;L^2(\Omega;\bbR^d))}^2. \end{aligned} \end{align*}
We now carefully handle the last term on the right-hand side of \eqref{calc1}. Since \color{black}
no viscous term of the type $\varepsilon(\mathbf{u}_t)$ occurs on its left-hand side, to absorb the last term on the right-hand side
and close the estimate \color{black} we will extensively make use of integration by parts
in space, as well as of \color{black} the force balance equation \eqref{e:u}, of \color{black} integration by parts in time, and of Young's inequality ($\delta>0$ will be chosen later):
\begin{align*}
\int_0^t\int_{\partial\Omega}({\boldsymbol{\sigma}}{ \bf n \color{black}})\cdot\mathbf{d}_t\, \mathrm{d} S\,\mathrm ds
={}&\int_0^t\int_\Omega\dive({\boldsymbol{\sigma}})\cdot\mathbf{d}_t\,\mathrm dx\,\mathrm ds
+\int_0^t\int_\Omega {\boldsymbol{\sigma}}:\varepsilon(\mathbf{d}_t)\,\mathrm dx\,\mathrm ds\\
={}&\int_0^t\int_\Omega(-\mathbf{f}+\mathbf{u}_{tt})\cdot\mathbf{d}_t\,\mathrm dx\,\mathrm ds
+\int_0^t\int_\Omega{\boldsymbol{\sigma}}:\varepsilon(\mathbf{d}_t)\,\mathrm dx\,\mathrm ds\\
\leq{}& \|\mathbf{f}\|_{L^2(0,T;L^2(\Omega;\bbR^d))}\|\mathbf{d}_t\|_{L^2(0,T;L^2(\Omega;\bbR^d))}
+\int_0^t\|\mathbf{u}_t\|_{L^2(\Omega;\bbR^d)}\|\mathbf{d}_{tt}\|_{L^2(\Omega;\bbR^d)}\,\mathrm ds\\
&+\delta\|\mathbf{u}_t(t)\|_{L^2(\Omega;\bbR^d)}^2+C_\delta\|\mathbf{d}_t(t)\|_{L^2(\Omega;\bbR^d)}^2
+\|\mathbf{v}^0\color{black} \|_{L^2(\Omega;\bbR^d)}\|\mathbf{d}_t(0)\|_{L^2(\Omega;\bbR^d)}\\
&+\underbrace{\int_0^t\int_\Omega a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t):\varepsilon(\mathbf{d}_t)\,\mathrm dx\,\mathrm ds}_{\doteq I_1}
+\underbrace{\int_0^t\int_\Omega b(c,z)\mathbb{C}(\varepsilon(\mathbf{u})-\varepsilon^*(c)):\varepsilon(\mathbf{d}_t)\,\mathrm dx\,\mathrm ds}_{\doteq I_2}\\
&+\rho\|\dive(\mathbf{d}_t)\|_{L^\infty(Q)}\int_0^t\int_\Omega|\vartheta|\,\mathrm dx\,\mathrm ds. \end{align*} Moreover, by using integration by parts \color{black} in space again, the properties of the coefficient functions $a$ and $b$ stated in Hypothesis (IV) and (V), and by using \eqref{dirichlet-data} on $\mathbf{d}$, \color{black} $\mathbf{u}_t=\mathbf{d}_t$ a.e. on $\partial\Omega\times(0,T)$ and the trace theorem we obtain \color{black} \begin{align*}
I_1={}&-\int_0^t\int_\Omega\mathbf{u}_t\cdot\dive\big(a(c,z)\mathbb{V}\varepsilon(\mathbf{d}_t)\big)\,\mathrm dx\,\mathrm ds
+\int_0^t\int_{\partial\Omega}\mathbf{u}_t\cdot\big(a(c,z)\mathbb{V}\varepsilon(\mathbf{d}_t) { \bf n \color{black}} \big)\, \mathrm{d} S\,\mathrm ds\\
={}&-\int_0^t\int_\Omega\mathbf{u}_t\cdot\Big(\big(a_{,c}(c,z)\nabla c+a_{,z}(c,z)\nabla z\big)\cdot\mathbb{V}\varepsilon(\mathbf{d}_t)\Big)\,\mathrm dx\,\mathrm ds -\int_0^t\int_\Omega \mathbf{u}_t \cdot\left(a(c,z)\mathbb{V}\dive(\varepsilon(\mathbf{d}_t))\right)\,\mathrm dx\,\mathrm ds \color{black}\\
&+\int_0^t\int_{\partial\Omega}\mathbf{d}_t\cdot\big(a(c,z)\mathbb{V}\varepsilon(\mathbf{d}_t) { \bf n \color{black}} \big)\, \mathrm{d} S\,\mathrm ds\\
\leq{}&C\|\varepsilon(\mathbf{d}_t)\|_{L^\infty(Q; \bbR^{d\times d} \color{black})}\Big(\int_0^t\|\mathbf{u}_t\|_{L^2(\Omega;\bbR^d)}^2\,\mathrm ds
+\|a_{,c}(c,z)\|_{L^\infty(0,T;L^\infty(\Omega))}^2\int_0^t\|\nabla c\|_{L^2(\Omega;\bbR^d)}^2\,\mathrm ds\\
&\qquad\qquad\qquad\quad+\|a_{,z}(c,z)\|_{L^\infty(0,T;L^\infty(\Omega))}^2\int_0^t\|\nabla z\|_{L^2(\Omega;\bbR^d)}^2\,\mathrm ds\Big)\\
&+C\int_0^t\|\mathbf{u}_t\|_{L^2(\Omega;\mathbb{R}^d)}^2\,\mathrm ds+C\|a(c,z)\|^2_{L^\infty(0,T;L^\infty(\Omega))\color{black}}\|\varepsilon(\mathbf{d}_t)\|_{L^2(0,T;H^1(\Omega;\bbR^{d\times d} ))}^2 \color{black}\\
&+C\|\mathbf{d}_t\|_{L^2(0,T;H^1(\Omega;\bbR^d))}\|\varepsilon(\mathbf{d}_t)\|_{L^2(0,T;H^1(\Omega;\bbR^{d\times d}))}
\|a(c,z)\|_{L^\infty(0,T;L^\infty(\partial\Omega))}\\
\leq{}&C\int_0^t\Big(\|\mathbf{u}_t\|_{L^2(\Omega;\bbR^d)}^2+\|\nabla c\|_{L^2(\Omega;\bbR^d)}^2
+\|\nabla z\|_{L^2(\Omega;\bbR^d)}^2\Big)\,\mathrm ds+C,\\
I_2\leq{}&C\int_0^t\int_\Omega b(c,z)^2\mathbb{C}(\varepsilon(\mathbf{u})-\varepsilon^*(c)):(\varepsilon(\mathbf{u})-\varepsilon^*(c))+\mathbb{C}\varepsilon(\mathbf{d}_t):\varepsilon(\mathbf{d}_t)\,\mathrm dx\,\mathrm ds\\
\leq{}&C\|b(c,z)\|_{L^\infty(0,T;L^\infty(\Omega))}\int_0^t\int_\Omega W(c,\varepsilon(\mathbf{u}),z)\,\mathrm dx\,\mathrm ds
+C\|\varepsilon(\mathbf{d}_t)\|_{L^2(0,T;L^2(\Omega;\bbR^{d\times d}))}^2. \end{align*} All in all, again taking into account \eqref{hyp:data}, \color{black} we gain the estimate \begin{align*}
&\tE{c(t)}{z(t)}{\vartheta(t)}{\mathbf{u}(t)}{\mathbf{u}_t(t)}\\
&\qquad\leq C_\delta+\delta\|\mathbf{u}_t(t)\|_{L^2(\Omega;\bbR^d)}^2
+\int_0^t C\big(\|\mathbf{d}_{tt}\|_{L^2(\Omega;\bbR^d)}^2+1\big)\times \color{black} \\
&\qquad\qquad\times \color{black}\Big( \int_\Omega \color{black} W(c,\varepsilon(\mathbf{u}),z)\,\mathrm dx
+\|\nabla c\|_{L^2(\Omega;\bbR^d)}^2+\|\nabla z\|_{L^2(\Omega;\bbR^d)}^2
+\int_\Omega|\vartheta|\,\mathrm dx
+\|\mathbf{u}_t\|_{L^2(\Omega;\bbR^d)}^2\Big)\,\mathrm ds\\
&\qquad\leq C_\delta+\delta\|\mathbf{u}_t(t)\|_{L^2(\Omega;\bbR^d)}^2
+\int_0^t C\big(\|\mathbf{d}_{tt}\|^2_{L^2(\Omega;\bbR^d)}+1\big)\tE{c(s)}{z(s)}{\vartheta(s)}{\mathbf{u}(s)}{\mathbf{u}_t(s)}\,\mathrm ds.\color{black} \end{align*} Choosing $\delta=1/4$, using the Gronwall lemma together with \eqref{dirichlet-data} and taking the positivity of $\vartheta$ into account, we conclude
\begin{equation} \label{est1}
\| \vartheta \|_{L^\infty (0,T;L^1(\Omega))}
+\| \mathbf{u}\|_{W^{1,\infty}(0,T;L^2(\Omega;\bbR^d))}
+\| c \|_{L^\infty(0,T;W^{1,p}(\Omega))}
+\| \nabla z \|_{L^\infty(0,T;L^{p}(\Omega;\bbR^d))}
\leq C. \end{equation} Note that we have also used the Poincar\'e inequality to obtain the boundedness for $c$ in $L^\infty(0,T;W^{1,p}(\Omega))$, because $\int_\Omega c(t)\,\mathrm dx\equiv \mathrm{const}$ for all $t\in[0,T]$ (this follows from \eqref{e:c} and the no-flux condition for $\mu$ in \eqref{bdry-conditions}). \color{black}
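In this connection, let us also make the conservation of mass explicit (a formal computation, based on the fact that \eqref{e:c} has the form $c_t=\dive\big(m(c,z)\nabla\mu\big)$ and on the no-flux condition for $\mu$ in \eqref{bdry-conditions}):
\[
\frac{\mathrm{d}}{\mathrm{d}t}\int_\Omega c\,\mathrm{d}x
=\int_\Omega \dive\big(m(c,z)\nabla\mu\big)\,\mathrm{d}x
=\int_{\partial\Omega} m(c,z)\nabla\mu\cdot{\bf n}\,\mathrm{d}S=0,
\]
whence $\Xint-_\Omega c(t)\,\mathrm dx\equiv \Xint-_\Omega c_0\,\mathrm dx$ for all $t\in[0,T]$.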
\paragraph{\bf Second estimate:} Let $F(\vartheta) = \vartheta^\alpha/\alpha$, with $\alpha \in (0,1)$.
We test \eqref{e:teta} by $F'(\vartheta):= \vartheta^{\alpha-1}$, and integrate on $(0,t)$ with $t \in (0,T]$, thus obtaining \[
\begin{aligned}
&\int_\Omega F(\vartheta_0)\, \mathrm{d} x+
\int_0^t \int_\Omega g F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s
+\int_0^t \int_{\partial \Omega} h F'(\vartheta) \, \mathrm{d} S \, \mathrm{d} s
+\int_0^t \int_\Omega (|c_t|^2+ |z_t|^2) F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s\\
&+\int_0^t \int_\Omega
a(c,z) \varepsilon({\mathbf{u}_t}): \mathbb{V} \varepsilon({\mathbf{u}_t}) F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s + \int_0^t \int_\Omega
m(c,z) |\nabla \mu|^2 F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s \\
&\quad=\int_\Omega F(\vartheta(t))\, \mathrm{d} x + \int_0^t \int_\Omega (c_t + z_t) \vartheta F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s +\rho \int_0^t \int_\Omega \vartheta \dive(\mathbf{u}_t) F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s
+\int_0^t \int_\Omega \mathsf{K}(\vartheta) \nabla \vartheta\cdot\nabla (F'(\vartheta)) \, \mathrm{d} x \, \mathrm{d} s.
\end{aligned} \] By the positivity of $g$ and $h$ we can neglect the second and third terms on the left-hand side, whereas, taking into account the ellipticity condition \color{black} \eqref{ellipticity} and the positivity \eqref{hyp-m} and \eqref{data-a} of $m$ and $a$, \color{black} we infer \begin{align} \label{eqn:secondEstPre}
\begin{aligned}
&\frac{4(1-\alpha)}{\alpha^2} \int_0^t \int_\Omega
\mathsf{K}(\vartheta) |\nabla (\vartheta^{\alpha/2})|^2 \, \mathrm{d} x \, \mathrm{d} s
+ \bar{c} \color{black}\int_0^t\int_\Omega(|\varepsilon({\mathbf{u}_t})|^2+|\nabla\mu|^2)F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s
\\
& \qquad
+ \int_0^t \int_\Omega (|c_t|^2 + |z_t|^2) F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s
\leq \int_\Omega |F(\vartheta_0)|\, \mathrm{d} x +I_1 +I_2+I_3,
\end{aligned} \end{align} with $\bar{c}>0$ depending on $\nu_0$, $m_0$, and
$a_0$, where $I_3 \doteq |\rho| \int_0^t \int_\Omega |\vartheta \dive(\mathbf{u}_t) F'(\vartheta) | \, \mathrm{d} x \, \mathrm{d} s$. \color{black} We estimate \[
\begin{aligned}
I_1= \int_\Omega |F(\vartheta(t))|\, \mathrm{d} x \leq \frac1{\alpha}\int_\Omega \max\{
\vartheta(t), 1\}^\alpha \, \mathrm{d} x \leq \frac1{\alpha}\int_\Omega \max\{ \vartheta(t), 1\}
\, \mathrm{d} x \leq C
\end{aligned} \] since $\alpha <1$ and taking into account the previously obtained inequality \color{black} \eqref{est1}. Analogously we can estimate $\int_\Omega
|F(\vartheta_0)|\, \mathrm{d} x$ thanks to \eqref{data_teta}; moreover, \[
I_2 = \int_0^t \int_\Omega |(c_t +z_t) \vartheta F'(\vartheta)| \, \mathrm{d} x \, \mathrm{d} s
\leq \frac14 \int_0^t \int_\Omega \left( |c_t|^2 + |z_t|^2\right) F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s +
2 \int_0^t \int_\Omega F'(\vartheta)\vartheta^2 \, \mathrm{d} x \, \mathrm{d} s. \] Using inequality \eqref{eps-estim} \color{black} and Young's inequality, we have that \[
\begin{aligned}
I_3 =|\rho| \int_0^t \int_\Omega | \vartheta \dive(\mathbf{u}_t) F'(\vartheta)| \, \mathrm{d}
x \, \mathrm{d} s
\leq \frac { \bar{c}\color{black}} 4 \int_0^t \int_\Omega |\varepsilon({\mathbf{u}_t})|^2 F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s +
C\int_0^t \int_\Omega F'(\vartheta)\vartheta^2 \, \mathrm{d} x \, \mathrm{d} s\,.
\end{aligned} \]
All in all, we conclude \begin{equation} \label{all-in-all}
\begin{aligned}
&\frac{4(1-\alpha)}{\alpha^2} \int_0^t \int_\Omega \mathsf{K}(\vartheta) |\nabla (\vartheta^{\alpha/2})|^2 \, \mathrm{d} x \, \mathrm{d} s
+ \frac{ 3 \bar{c} \color{black}}{4}\int_0^t \int_\Omega(|\varepsilon({\mathbf{u}_t})|^2 +|\nabla\mu|^2)F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s\\
&+ \frac34 \int_0^t \int_\Omega (|c_t|^2 + |z_t|^2) F'(\vartheta) \, \mathrm{d} x \, \mathrm{d} s \leq C + C \int_0^t \int_\Omega \vartheta^{\alpha+1} \, \mathrm{d} x \, \mathrm{d} s.
\end{aligned} \end{equation}
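For the reader's convenience, we record the elementary chain-rule identities behind the first summands in \eqref{eqn:secondEstPre} and \eqref{all-in-all}:
\[
\mathsf{K}(\vartheta)\nabla\vartheta\cdot\nabla\big(F'(\vartheta)\big)
=(\alpha-1)\,\mathsf{K}(\vartheta)\,\vartheta^{\alpha-2}|\nabla\vartheta|^2
=-\frac{4(1-\alpha)}{\alpha^2}\,\mathsf{K}(\vartheta)\,\big|\nabla(\vartheta^{\alpha/2})\big|^2,
\qquad
\vartheta^{\kappa}\big|\nabla(\vartheta^{\alpha/2})\big|^2
=\frac{\alpha^2}{(\kappa+\alpha)^2}\big|\nabla(\vartheta^{(\kappa+\alpha)/2})\big|^2,
\]
the second one showing that one may take $\tilde c_1 = c_1\alpha^2/(\kappa+\alpha)^2$ in the estimate right below.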
Observe that \[
\int_0^t \int_\Omega \mathsf{K}(\vartheta) |\nabla (\vartheta^{\alpha/2})|^2 \, \mathrm{d} x \, \mathrm{d} s
\geq
c_1 \int_0^t \int_\Omega \vartheta^\kappa |\nabla (\vartheta^{\alpha/2})|^2 \, \mathrm{d} x \, \mathrm{d} s
= \tilde c_1
\int_0^t \int_\Omega |\nabla (\vartheta^{(\kappa+\alpha)/2} )|^2 \, \mathrm{d} x \, \mathrm{d} s\,. \] Hence, from \eqref{all-in-all} we infer the estimate \begin{equation}
\label{calc2.1}
\tilde c_1 \int_0^t \int_\Omega |\nabla (\vartheta^{(\kappa+\alpha)/2} )|^2 \, \mathrm{d} x \, \mathrm{d} s
\leq C_{ 0} + C_{ 0}\int_0^t\int_\Omega \vartheta^{\alpha+1} \, \mathrm{d} x \, \mathrm{d} s. \end{equation} We now
repeat the very same calculations as in the \emph{Second} and \emph{Third} estimates in \cite[Sec.\hspace{1pt} 3]{RocRos14}, to which we refer for all details. Namely, we introduce the auxiliary quantity $w : = \max\{ \vartheta^{(\kappa+\alpha)/2}, 1 \}$ and observe that \begin{align}
&\int_0^t \int_\Omega |\nabla (\vartheta^{(\kappa+\alpha)/2} )|^2 \, \mathrm{d} x
\, \mathrm{d} s \geq \int_0^t \color{black}\int_{\{ \vartheta(s) \color{black} \geq 1\}}
|\nabla (\vartheta^{(\kappa+\alpha)/2} )|^2\, \mathrm{d} x \, \mathrm{d} s = \int_0^t \int_\Omega |\nabla w |^2 \, \mathrm{d} x \, \mathrm{d} s,\\
&\vartheta^{\alpha+1} =\left( \vartheta^{(\alpha+1)/q} \right)^q \leq w^q \qquad \text{a.e. in\;}\hspace{1pt}, Q,
\label{eqn:thetaWEst} \end{align} for all $q \geq 1 $ such that \begin{equation} \label{1st-restr-alpha}
\frac{\kappa+\alpha}2 \geq \frac{\alpha +1}{q}
\Leftrightarrow \hspace{1pt} q \geq 2- 2\frac{\kappa-1}{\kappa+\alpha}. \end{equation} Therefore from \eqref{calc2.1} we infer that \begin{equation}\label{calc2.2}\begin{aligned}
& \tilde c_1\int_0^t \int_\Omega |\nabla w|^2 \, \mathrm{d} x \, \mathrm{d} s
\leq C_{ 0} + C_{ 0} \int_0^t \| w \|_{L^q(\Omega)}^q\, \mathrm{d} s. \end{aligned} \end{equation} We now apply the Gagliardo-Nirenberg inequality for $d=3$, yielding \begin{equation}\label{gagliardo}
\| w \|_{L^q(\Omega)} \leq c_1 \| \nabla w\|_{L^2(\Omega;\bbR^d)}^\theta
\| w \|_{L^r(\Omega)}^{1-\theta} + c_2 \| w \|_{L^r(\Omega)} \end{equation} with $ 1 \leq r \leq q $ and $\theta $ satisfying $1/q= \theta/6 + (1-\theta)/r$. Hence $\theta= \frac{6 (q-r)}{q (6-r)}$. Observe that $\theta \in (0,1)$ if $q<6$. Applying the Young inequality with exponents $2/(\theta q)$ and $2/(2-\theta q)$ we infer \begin{equation} \label{added-label}
C_{ 0}\int_0^t \| w \|_{L^q(\Omega)}^q\, \mathrm{d} s
\leq \frac{\tilde c_1}2\int_0^t \int_\Omega |\nabla w|^2 \, \mathrm{d} x \, \mathrm{d} s
+C\int_0^t \| w\|_{L^r(\Omega)}^{2q(1-\theta)/(2-q\theta)} \, \mathrm{d} s + C' \int_0^t \| w\|_{L^r(\Omega)}^{q} \, \mathrm{d} s. \end{equation}
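Let us briefly detail the Young step behind \eqref{added-label} (an elementary computation): raising \eqref{gagliardo} to the power $q$ gives
\[
\| w \|_{L^q(\Omega)}^q \leq C\,\|\nabla w\|_{L^2(\Omega;\bbR^d)}^{\theta q}\,\| w\|_{L^r(\Omega)}^{(1-\theta)q}+C\,\| w\|_{L^r(\Omega)}^{q},
\]
and applying the Young inequality $ab\leq \delta\, a^{2/(\theta q)}+C_\delta\, b^{2/(2-\theta q)}$ (with conjugate exponents $\tfrac{2}{\theta q}$ and $\tfrac{2}{2-\theta q}$, admissible as long as $\theta q<2$) to $a=\|\nabla w\|_{L^2(\Omega;\bbR^d)}^{\theta q}$ and $b=\|w\|_{L^r(\Omega)}^{(1-\theta)q}$ produces the exponent $2q(1-\theta)/(2-q\theta)$ in the middle term.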
We then plug \eqref{added-label} into \eqref{calc2.2}, and obtain \begin{equation} \label{added-label2}
\frac{\tilde c_1}2\int_0^t \int_\Omega |\nabla w|^2 \, \mathrm{d} x \, \mathrm{d} s
\leq C_0+C\int_0^t \| w\|_{L^r(\Omega)}^{2q(1-\theta)/(2-q\theta)} \, \mathrm{d} s + C' \int_0^t \| w\|_{L^r(\Omega)}^{q} \, \mathrm{d} s. \end{equation}
Hence, we choose $1\leq r \leq 2/(\kappa+\alpha)$ so that for almost all $t\in (0,T)$ \begin{equation} \label{r-estimate}
\| w(t) \|_{L^r(\Omega)} = \left( \int_\Omega \max\{ \vartheta(t)^{r(\kappa+\alpha)/2} ,1\} \, \mathrm{d} x \right)^{1/r} \leq
C\left( \|\vartheta(t)\|_{L^1(\Omega)} + |\Omega|\right) \leq C' \end{equation}
where we have used the bound for $ \| \vartheta\|_{L^\infty (0,T;L^1(\Omega))}$ from estimate \eqref{est1}. Observe that the inequalities \[
\begin{cases}
\theta q \leq 2 \, \Leftrightarrow \, 6\frac{q-r}{6-r} \leq 2 \,
\Leftrightarrow \, q \leq 2 +\frac23 r,\\
r \leq \frac2{\kappa+\alpha}
\end{cases} \] lead to $q\leq 2 +\frac{4}{3(\kappa+\alpha)}$ which is still compatible with \eqref{1st-restr-alpha}, since $\frac{\kappa-1}{\kappa+\alpha}<1$. Inserting \eqref{r-estimate} into \eqref{added-label2}, we ultimately deduce $ \int_0^t \int_\Omega
|\nabla w|^2 \, \mathrm{d} x \, \mathrm{d} s \leq C$.
Taking also \eqref{added-label} and \eqref{r-estimate} into account we
then conclude \color{black}
$\int_0^t\|w\|_{L^q(\Omega)}^q\,\mathrm ds \leq C$. By using this as well as estimates \eqref{calc2.1} and \eqref{eqn:thetaWEst} we see that
\begin{equation} \label{additional-info} \begin{aligned}
c \int_0^t \int_\Omega \vartheta^{\kappa+\alpha - 2} |\nabla \vartheta|^2 \, \mathrm{d} x \, \mathrm{d} s = \int_0^t \int_\Omega |\nabla (\vartheta^{(\kappa+\alpha)/2})|^2 \, \mathrm{d} x \, \mathrm{d} s\leq C.
\end{aligned}
\end{equation}
From \eqref{additional-info} and the strict positivity of $\vartheta$ (see \eqref{teta-pos}), it follows that $
\int_0^t \int_\Omega |\nabla \vartheta|^2 \, \mathrm{d} x \, \mathrm{d} s \leq C, $ provided that $\kappa +\alpha-2 \geq 0$. Observe that, since $\kappa>1$, we can choose $\alpha \in (0, 1)$ such that this inequality holds (e.g.\ any $\alpha\in[2-\kappa,1)\cap(0,1)$, which is nonempty).
Hence, in view of \color{black} estimate \eqref{est1} and
applying the Poincar\'e inequality,
we gather
\begin{equation}
\label{crucial-est3.2} \| \vartheta \|_{L^{2} (0,T; H^1(\Omega))} \leq C . \end{equation}
With the very same calculations as in \cite[Sec.\hspace{1pt} 3]{RocRos14} we also obtain
\begin{equation}\label{estetainterp}
\|\vartheta\|_{L^q(Q)}\leq C\quad\hbox{with }q=8/3 \quad \hbox{if } d=3, \quad q=3 \quad \hbox{if } d=2\,, \end{equation}
\color{black} interpolating between estimate \eqref{crucial-est3.2} and estimate \eqref{est1} for $\|\vartheta\|_{L^\infty (0,T; L^1(\Omega))}$ and using the Gagliardo-Nirenberg inequality \eqref{gn-ineq}.
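Let us sketch, for completeness, the exponent count behind \eqref{estetainterp} (a standard interpolation argument; we only track the exponents): the Gagliardo--Nirenberg inequality gives, for a.a.\ $t\in(0,T)$,
\[
\|\vartheta(t)\|_{L^q(\Omega)}\leq C\,\|\vartheta(t)\|_{H^1(\Omega)}^{\theta}\,\|\vartheta(t)\|_{L^1(\Omega)}^{1-\theta}
\qquad\text{with }\ \frac1q=\theta\Big(\frac12-\frac1d\Big)+(1-\theta).
\]
Raising this to the power $q$ and integrating in time, the right-hand side stays bounded thanks to \eqref{crucial-est3.2} and \eqref{est1} as soon as $q\theta\leq 2$; the borderline choice $\theta=2/q$ gives $q=2\,\tfrac{d+1}{d}$, i.e.\ $q=8/3$ for $d=3$ and $q=3$ for $d=2$.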
Furthermore, we observe \begin{equation} \label{quoted-later-ohyes}
\int_\Omega |\nabla \vartheta^{(\kappa -\alpha)/2}|^2 \, \mathrm{d} x =
c \int_\Omega \vartheta^{\kappa-\alpha-2} |\nabla \vartheta|^2 \, \mathrm{d} x \leq \frac c{{\underline\vartheta}^{2\alpha}}\int_{\Omega} \vartheta^{\kappa+\alpha-2}
|\nabla \vartheta|^2 \, \mathrm{d} x \leq C, \end{equation} thanks to the positivity property \eqref{teta-pos} and estimate \eqref{additional-info}. Resorting to a nonlinear version of the Poincar\'e inequality (cf.\hspace{1pt} e.g.\hspace{1pt} \eqref{poincare-type}), we then infer \begin{equation} \label{necessary-added}
\| \vartheta^{(\kappa -\alpha)/2} \|_{L^2 (0,T; H^1(\Omega))}, \, \| \vartheta^{(\kappa +\alpha)/2} \|_{L^2 (0,T; H^1(\Omega))} \leq C. \end{equation}
\paragraph{\bf Third estimate:} We test \eqref{e:teta} by $1$, integrate in time, and subtract the resulting relation from the total energy balance \eqref{calc1}. We thus obtain
\begin{equation} \label{mech-energ-est} \begin{aligned}
&
\int_0^t\int_\Omega |c_t|^2 \, \mathrm{d} x \, \mathrm{d} s +\int_\Omega \tfrac1p |\nabla c(t)|^p + \phi(c(t))\, \mathrm{d} x
+ \int_0^t\int_\Omega m(c,z)|\nabla \mu|^2 \, \mathrm{d} x \, \mathrm{d} s
\\
& \quad
+\int_0^t\int_\Omega |z_t|^2 \, \mathrm{d} x \, \mathrm{d} s +\int_\Omega \tfrac1p |\nabla z(t)|^p + I_{[0,+\infty)}(z(t)) + \sigma(z(t)) \, \mathrm{d} x
\\
&
\quad +
\frac 12\int_\Omega |\mathbf{u}_t(t)|^2\, \mathrm{d} x
+\int_0^t\int_\Omega a(c,z) \mathbb{V} \varepsilon(\mathbf{u}_t)\colon \varepsilon(\mathbf{u}_t) \, \mathrm{d} x \, \mathrm{d} s
+ \int_\Omega W(c(t),\varepsilon(\mathbf{u}(t)),z(t)) \, \mathrm{d} x
\\
& = \int_\Omega \tfrac1p |\nabla c_0|^p + \phi(c_0) \, \mathrm{d} x
+\int_\Omega \tfrac1p |\nabla z_0|^p + I_{[0,+\infty)}(z_0) + \sigma(z_0) \, \mathrm{d} x +
\frac12 \int_\Omega|\mathbf{v}_0|^2\, \mathrm{d} x
\\
& \quad + \int_\Omega W(c_0,\varepsilon(\mathbf{u}_0),z_0) \, \mathrm{d} x
+
\int_0^t\int_\Omega\vartheta \left(\rho \dive \mathbf{u}_t+c_t+z_t\right)\, \mathrm{d} x \, \mathrm{d}
s+\int_0^t \int_\Omega {\bf f}\cdot \mathbf{u}_t\, \mathrm{d} x \, \mathrm{d} s\\
&\quad+\int_0^t\int_{\partial\Omega}({\boldsymbol{\sigma}} { \bf n \color{black}})\cdot\mathbf{d}_t\, \mathrm{d} S\,\mathrm ds. \end{aligned} \end{equation}
Observe that the \color{black} first, second, third, and fourth integral terms on the right-hand side are bounded thanks to conditions \eqref{h:initial} on $(c_0,z_0,\mathbf{u}_0,\mathbf{v}_0)$.
As in the First estimate we deduce boundedness of the last and the last but one \color{black} integral terms on the right-hand side. Since $\phi$, $I_{[0,+\infty)} +\sigma$, and $W$ are bounded from below,
exploiting \eqref{ellipticity}, \eqref{data-a}, and \eqref{eqn:assbV} \color{black} to deduce that
$\int_0^t\int_\Omega a( c, z) \mathbb{V} \varepsilon(\mathbf{u}_t)\colon \varepsilon(\mathbf{u}_t) \, \mathrm{d} x \, \mathrm{d} s \geq c \int_0^t\int_\Omega |\varepsilon(\mathbf{u}_t)|^2 \, \mathrm{d} x \, \mathrm{d} s$, and using \color{black} \eqref{hyp-m} to deduce that $\int_0^t\int_\Omega m(c,z)|\nabla\mu|^2\,\mathrm dx\,\mathrm ds\geq m_0\int_0^t\int_\Omega|\nabla\mu|^2\,\mathrm dx\,\mathrm ds$, we find that
\begin{equation} \label{dissip-est} \begin{aligned}
&\int_0^t\int_\Omega(|c_t|^2+|z_t|^2+|\nabla\mu|^2+|\varepsilon(\mathbf{u}_t)|^2)\, \mathrm{d} x \, \mathrm{d} s \leq C+\int_0^t\int_\Omega\vartheta \left(\rho \dive \mathbf{u}_t+c_t+z_t\right)\, \mathrm{d} x \, \mathrm{d} s. \end{aligned} \end{equation}
Then, we can estimate the integral term \color{black} on the right-hand side by \[
\varrho\int_0^t \int_\Omega \left( |\varepsilon(\mathbf{u}_t)|^2 + |c_t|^2 + |z_t|^2 \right) \, \mathrm{d} x \, \mathrm{d} s + C_\varrho \int_0^t \int_\Omega |\vartheta|^2 \, \mathrm{d} x \, \mathrm{d} s, \] for a sufficiently small constant $\varrho>0$, in such a way as to absorb the first integral term into the left-hand side
of \eqref{dissip-est}. Exploiting \eqref{crucial-est3.2} on $\vartheta$, we thus conclude, also with the aid of Korn's inequality
and condition \eqref{dirichlet-data} on the boundary value $\mathbf{d}$, \color{black} \begin{equation}\label{est5}
\|c_t\|_{L^2(Q)} + \| \nabla \mu \|_{L^2(Q;\bbR^d)} + \|z_t\|_{L^2(Q)}+\|\mathbf{u}_t\|_{L^2(0,T; H^1(\Omega;\mathbb{R}^d))}
\leq
C\,. \end{equation} Furthermore, taking into account the previously proved bound \eqref{est1}, we also gather \begin{equation}
\label{est5-added}
\| z \|_{L^\infty (0,T;W^{1,p}(\Omega))}
+\| \mathbf{u} \|_{H^1(0,T;H^1(\Omega;\bbR^d))}\leq C. \end{equation}
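For the reader's convenience, we recall that Korn's inequality enters in \eqref{est5}, e.g., in the form
\[
\|\mathbf{v}\|_{H^1(\Omega;\bbR^d)}\leq C\big(\|\varepsilon(\mathbf{v})\|_{L^2(\Omega;\bbR^{d\times d})}+\|\mathbf{v}\|_{L^2(\Omega;\bbR^d)}\big)
\qquad\text{for all } \mathbf{v}\in H^1(\Omega;\bbR^d),
\]
applied to $\mathbf{v}=\mathbf{u}_t(t)$ and combined with the $L^\infty(0,T;L^2(\Omega;\bbR^d))$-bound on $\mathbf{u}_t$ from \eqref{est1} (alternatively, one may use a version adapted to the time-dependent Dirichlet condition, cf.\ \eqref{dirichlet-data}).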
\paragraph{\bf Fourth estimate:}
We test \eqref{e:u} by $-\mathrm{div}(\mathbb{V}\varepsilon(\mathbf{u}_t))$ and integrate in time. This leads to \begin{equation} \label{added-4-clarity} \begin{aligned}
& -\int_0^t \int_\Omega \mathbf{u}_{tt}\,\cdot\, \mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d}
x \, \mathrm{d} s + \int_0^t \int_{\Omega} \dive( a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t))
\,\cdot\, \mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s
\\
&
=-
\int_0^t \int_{\Omega}
\dive(\pd{\varepsilon}(c,\varepsilon(\mathbf{u}),z)) \,\cdot\, \mathbf{\dive}
(\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s + \rho \int_0^t \int_\Omega
\nabla \vartheta \cdot \mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s
\\
& \quad
-
\int_0^t \int_\Omega \mathbf{f}\,\cdot \mathbf{\dive}
(\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s\,. \end{aligned} \end{equation}
The following calculations \color{black} are based on \cite[Sec.\hspace{1pt} 5]{RocRos14}, to which we refer for details. However in the present case we have to take care of the non-homogeneous Dirichlet boundary condition for $\mathbf{u}$. The first term on the left-hand side of \eqref{added-4-clarity} gives \begin{align} \label{est-added-0} \begin{aligned}
&-\int_0^t \int_\Omega \mathbf{u}_{tt}\cdot \mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s\\
&\qquad=-\int_0^t\int_{\partial\Omega}\mathbf{u}_{tt}\cdot\big(\mathbb{V}\varepsilon(\mathbf{u}_t) { \bf n \color{black}} \big)\, \mathrm{d} S\,\mathrm ds
+\int_\Omega \frac12 \varepsilon(\mathbf{u}_t (t)):\mathbb{V} \varepsilon(\mathbf{u}_t (t)) \, \mathrm{d} x
-\int_\Omega \frac12 \varepsilon(\mathbf{u}_t (0)):\mathbb{V} \varepsilon(\mathbf{u}_t (0)) \, \mathrm{d} x. \end{aligned} \end{align} On the boundary cylinder $\partial\Omega\times(0,T)$ we find $\mathbf{u}_{tt}=\mathbf{d}_{tt}$ a.e. \color{black} (note that not necessarily $\varepsilon(\mathbf{u}_{t})=\varepsilon(\mathbf{d}_{t})$ a.e. on $\partial\Omega\times(0,T)$) which yields by using the trace theorem and Young's inequality ($\delta>0$ will be chosen later) \begin{align*}
\Big|\int_0^t\int_{\partial\Omega}\mathbf{u}_{tt}\cdot\big(\mathbb{V}\varepsilon(\mathbf{u}_t) { \bf n \color{black}}
\big)\, \mathrm{d} S\,\mathrm ds\Big|
={}&\Big|\int_0^t\int_{\partial\Omega}\mathbf{d}_{tt}\cdot\big(\mathbb{V}\varepsilon(\mathbf{u}_t)
{ \bf n \color{black}}\big)\, \mathrm{d} S\,\mathrm ds\Big|\\
\leq{}& \delta\|\mathbf{u}_t\|_{L^2(0,T;H^2(\Omega;\bbR^d))}^2+C_\delta\|\mathbf{d}_{tt}\|_{L^2(0,T;H^{1}(\Omega;\bbR^d))}^2. \end{align*} The last term on the right-hand side can be estimated by using \eqref{dirichlet-data}.
For the second term on the left-hand side of \eqref{added-4-clarity} we find \begin{equation} \label{est-to-fill-2} \begin{aligned}
&\int_0^t \int_{\Omega} \mathbf{\dive} ( a(c,z)\mathbb{V}\varepsilon(\mathbf{u}_t))
\,\cdot\, \mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s\\
&=\int_0^t\int_\Omega a(c,z) \mathbf{\dive} (\mathbb{V}
\varepsilon(\mathbf{u}_t))\,\cdot\,\mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t))\, \mathrm{d} x \, \mathrm{d} s
+\int_0^t\int_\Omega (\nabla a(c,z)\cdot \mathbb{V}
\varepsilon(\mathbf{u}_t))\,\cdot\,\mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t))\, \mathrm{d} x \, \mathrm{d} s\\
&\geq c\int_0^t \| \mathbf{u}_t \|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s-\|\mathbf{d}_t\|_{L^2(0,T;H^2(\Omega;\bbR^d))}^2+I_1, \end{aligned} \end{equation} where the second inequality follows from \eqref{H2reg}.
The second term on the right-hand side is bounded due to \eqref{dirichlet-data}. We move $I_1$ to the right-hand side of \eqref{added-4-clarity} and estimate \begin{equation} \label{est-to-fill-bis} \begin{aligned}
|I_1|&=\left| \int_0^t\int_\Omega \left(\color{black}\nabla a(c,z)\mathbb{V} \varepsilon({\mathbf{u}_t})\right)\color{black}\,\cdot\,\mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s \right| \\
&\leq C\int_0^t\|\nabla a(c,z)\|_{L^{d+\zeta}(\Omega;\bbR^d)}\|\varepsilon({\mathbf{u}_t})\|_{L^{d^{\star}-\eta}(\Omega;\mathbb{R}^{d\times d})}\|{\mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t))}\|_{L^2(\Omega; \mathbb{R}^d)}\, \mathrm{d} s\\
&\leq \delta \int_0^t\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s
+C_\delta\int_0^t \|\nabla a(c,z)\|_{L^{d+\zeta}(\Omega;\bbR^d)}^2 \|\varepsilon({\mathbf{u}_t})\|_{L^{d^{\star}-\eta}(\Omega;\mathbb{R}^{d\times d})}^2 \, \mathrm{d} s \\
&\leq \delta \int_0^t\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s
+C_\delta\varrho^2\int_0^t \big(\|c\|_{W^{1,p}(\Omega)}^2+ \|z\|_{W^{1,p}(\Omega)}^2\big)\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s\\
&\quad+C_\delta C_\varrho\int_0^t \big(\|c\|_{W^{1,p}(\Omega)}^2+ \|z\|_{W^{1,p}(\Omega)}^2\big)\|\mathbf{u}_t\|_{L^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s. \end{aligned} \end{equation} In the first line, we have chosen $\zeta>0$ fulfilling $p\geq d+\zeta$, and $\eta>0$ such that $\tfrac1{d+\zeta} + \tfrac{1}{d^\star -\eta} + \tfrac12 \leq 1$, with $d^{\star}$
from \eqref{dstar}, in order to apply the H\"older inequality. Moreover, we have exploited \eqref{data-a}, giving $\|\nabla a( c, z)\|_{L^{d+\zeta}(\Omega;\bbR^d)} \leq C(\|c\|_{W^{1,p}(\Omega)}+\|z\|_{W^{1,p}(\Omega)})$, as well as \eqref{interp} to estimate
$\|\varepsilon({\mathbf{u}_t})\|_{L^{d^{\star}-\eta}(\Omega;\mathbb{R}^{d\times d})}$. Finally, $\delta $ and $\varrho$ are positive constants that we will choose later, accordingly determining $C_\delta, \, C_\varrho>0$ via the Young inequality. For the right-hand side of \eqref{added-4-clarity} we proceed as follows \begin{equation} \label{est-to-fill-1} \begin{aligned}
& -\int_0^t \int_{\Omega}
\dive(\pd{\varepsilon}(c,\varepsilon(\mathbf{u}), z)) \,\cdot\, \mathbf{\dive}
(\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s\\ &
= -\int_0^t\int_\Omega \pd{\varepsilon c}(c,\varepsilon(\mathbf{u}),z) \nabla c \,\cdot\, \mathbf{\dive}(\mathbb{V}\varepsilon(\mathbf{u}_t))\, \mathrm{d} x \, \mathrm{d} s
-\int_0^t\int_\Omega \pd{\varepsilon z}(c,\varepsilon(\mathbf{u}),z) \nabla z \,\cdot\, \mathbf{\dive}(\mathbb{V}\varepsilon(\mathbf{u}_t))\, \mathrm{d} x \, \mathrm{d} s\\
&\quad -\int_0^t\int_\Omega \big(\pd{\varepsilon \varepsilon}(c,\varepsilon(\mathbf{u}),z) :\hspace*{-0.2em}\cdot\,\nabla (\varepsilon(\mathbf{u}))\big)\,\cdot\,
\mathbf{\dive}(\mathbb{V}\varepsilon(\mathbf{u}_t))\, \mathrm{d} x \, \mathrm{d} s \\
&\leq C_4 \int_0^t\int_\Omega\big(|\nabla c|+|\nabla z|\big)\big(|\varepsilon(\mathbf{u})|+1\big)|\dive(\mathbb{V}\varepsilon(\mathbf{u}_t))|\,\mathrm dx\,\mathrm ds
+C_4\int_0^t\int_\Omega|\nabla(\varepsilon(\mathbf{u}))||\dive(\mathbb{V}\varepsilon(\mathbf{u}_t))|\,\mathrm dx\,\mathrm ds\\
&\leq C_4 \int_0^t \left( \|\nabla c\|_{L^{d+\zeta}(\Omega;\bbR^d)}+\|\nabla z\|_{L^{d+\zeta}(\Omega;\bbR^d)}\right)
\big(\|\varepsilon({\bf u})\|_{L^{d^{\star}-\eta} (\Omega;\mathbb{R}^{d\times d})}+1\big)\|{\mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t))}\|_{L^2(\Omega;\mathbb{R}^d)}\, \mathrm{d} s\\
&\quad+ C_4'\int_0^t\|\mathbf{u}\|_{H^2(\Omega;\mathbb{R}^d)}\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)} \, \mathrm{d} s\\
&\leq \sigma\int_0^t\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s
+C_\sigma \int_0^t \left(
\left( \|c\|_{W^{1,p}(\Omega)}^2 + \|z\|_{W^{1,p}(\Omega)}^2+1\right)\left( \|\mathbf{u}\|_{H^2(\Omega; \mathbb{R}^d)}^2+1 \right) \|\mathbf{u}\|_{H^2(\Omega;\mathbb{R}^d)}^2\right) \, \mathrm{d} s\,. \end{aligned} \end{equation} Here, the positive constants $\zeta$ and $\eta$ again fulfill $p\geq d+\zeta$ and $\tfrac1{d+\zeta} + \tfrac{1}{d^\star -\eta} + \tfrac12 \leq 1$, and we have exploited inequality \eqref{interp} with a constant $\sigma$ that we will choose later, and some $C_\sigma>0$. Moreover, we have used the structural assumption \eqref{eqn:assumptionW} on $W$ (cf.\hspace{1pt} also \eqref{later-ref}), \color{black} and estimates \eqref{est1} and \eqref{est5-added}, yielding
$\| c \|_{L^\infty(Q)} + \| z \|_{L^\infty(Q)} \leq C$, whence $$
\|b(c,z) \|_{L^{\infty}(Q)}+\|b_{,c}(c,z) \|_{L^{\infty}(Q)}+\|b_{,z}(c,z) \|_{L^{\infty}(Q)}
+ \|\varepsilon^*(c)\|_{L^{\infty}(Q)}+\|(\varepsilon^*)'(c)\|_{L^{\infty}(Q)}\leq C. $$
Finally, we estimate \begin{equation}
\label{est-to-fill-3} \left| \rho \int_0^t \int_\Omega \nabla \vartheta
\cdot \mathbf{\dive} (\mathbb{V}\varepsilon(\mathbf{u}_t)) \, \mathrm{d} x \, \mathrm{d} s \right|
\leq \eta \int_0^t\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s+C_\eta\int_0^t
\|\nabla\vartheta\|_{L^2(\Omega;\bbR^d)}^2 \, \mathrm{d} s \end{equation} for some positive constant $\eta$ to be fixed later and for some $C_\eta>0$. Combining estimates \eqref{est-added-0}--\eqref{est-to-fill-3} with \eqref{added-4-clarity} taking into account the previously proved estimates \eqref{est1}, \eqref{crucial-est3.2}, and exploiting \eqref{bulk-force} on $\mathbf{f}$ to estimate the last term on the right-hand side of \eqref{added-4-clarity}, we obtain \begin{align*}
&\frac{\nu_0\color{black}}{2} \int_\Omega |\varepsilon(\mathbf{u}_t (t) ) |^2 \, \mathrm{d} x
+c\int_0^t \| \mathbf{u}_t \|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s\\
&\qquad\leq C\int_\Omega|\varepsilon( \mathbf{v}^0 \color{black})|^2 \, \mathrm{d} x + C\| \mathbf{f}\|_{L^2 (0,T;L^2 (\Omega;\R^d))}^2
+C\| \mathbf{d}\|_{H^1(0,T;H^2(\Omega;\bbR^d))\cap H^2(0,T;H^1(\Omega;\bbR^d))}^2\\
&\qquad\quad+\frac{c}2 \int_0^t \| \mathbf{u}_t \|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s
+C\left(1+\|\mathbf{u}^0\color{black}\|_{H^2(\Omega;\mathbb{R}^d)}^2+\int_0^t\int_0^s\|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2\, \mathrm{d} r \, \mathrm{d} s\right)\,, \end{align*} with $\nu_0$ from \eqref{ellipticity} (cf.~also \eqref{eqn:assbV}), \color{black} where we have used the fact that
$\int_0^t\|\mathbf{u}\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} s \leq \|\mathbf{u}_0\|_{H^2(\Omega;\mathbb{R}^d)}^2+\int_0^t\int_0^s \|\mathbf{u}_t\|_{H^2(\Omega;\mathbb{R}^d)}^2 \, \mathrm{d} r \, \mathrm{d} s $ and chosen $\delta$, $\varrho$, $\sigma$, and $\eta$ sufficiently small. Therefore, using the standard Gronwall lemma and conditions \eqref{data_u}--\eqref{data_v} on the initial data $\mathbf{u}^0$ and $\mathbf{v}^0$, \color{black} we conclude \begin{equation} \label{palla}
\| \mathbf{u}_t \|_{ L^{2}(0,T; H^2(\Omega;\bbR^d))\cap L^{\infty}(0, T; H^1(\Omega;\bbR^d)) } \leq C \quad \text{whence} \quad \| \mathbf{u} \|_{ L^{\infty}(0,T; H^2(\Omega;\bbR^d))}
\leq C. \end{equation}
By comparison in \eqref{e:u} we also get \begin{equation}
\| \mathbf{u}_{tt} \|_{L^{2}(0,T; L^2 (\Omega;\R^d))}
\leq C.
\label{utt-comparison} \end{equation}
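Indeed (a formal comparison argument; recall that, in view of \eqref{e:u} and \eqref{stress-tensor}, the momentum balance may be written as $\mathbf{u}_{tt}-\dive\,{\boldsymbol\sigma}=\mathbf{f}$, cf.\ also the computations around \eqref{sigmaInt}):
\[
\|\mathbf{u}_{tt}\|_{L^2(0,T;L^2(\Omega;\bbR^d))}\leq \|\mathbf{f}\|_{L^2(0,T;L^2(\Omega;\bbR^d))}+\|\dive\,{\boldsymbol\sigma}\|_{L^2(0,T;L^2(\Omega;\bbR^d))}\leq C,
\]
where $\dive\,{\boldsymbol\sigma}$ is estimated in $L^2(0,T;L^2(\Omega;\bbR^d))$ by the same terms already handled in \eqref{est-to-fill-2}--\eqref{est-to-fill-3}, now combined with \eqref{palla} and \eqref{crucial-est3.2}.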
In the end, taking into account the form \eqref{eqn:assumptionW} of $W$,
we infer from \eqref{est5-added} and \eqref{palla}, taking into account the continuous embedding $H^2(\Omega;\bbR^d)\subset W^{1,d^\star}(\Omega;\bbR^d)$, that \begin{equation}
\label{est-for-Ws}
\| \pd{c}(c,\varepsilon(\mathbf{u}),z) \|_{L^\infty (0,T; L^2(\Omega))}
+\| \pd{z}(c,\varepsilon(\mathbf{u}),z) \|_{L^\infty (0,T; L^2(\Omega))}\leq C. \end{equation}
\paragraph{\bf Fifth estimate:} Recall that, for the time being we suppose $\widehat\beta \in \mathrm{C}^1(\bbR)$, and we will use the notation $\phi'= \beta +\gamma'$. It follows from \eqref{e:c} and the no-flux boundary conditions on $c$ that $\Xint-_\Omega c_t \, \mathrm{d} x = 0$ a.e.\hspace{1pt} in $(0,T)$, hence there exists $\mathfrak{m}_0 \in \bbR$ with \begin{equation} \label{constant-mean}
\Xint-_\Omega c(t) \, \mathrm{d} x =\mathfrak{m}_0 \quad \text{for all } t \in [0,T]. \end{equation} Now, from \eqref{e:mu} we deduce that \begin{equation} \label{mean-mu}
\Xint-_\Omega \mu \, \mathrm{d} x = \Xint-_\Omega \phi'(c) \, \mathrm{d} x + \Xint-_\Omega W_{,c}(c,\varepsilon(\mathbf{u}),z) \, \mathrm{d} x - \Xint-_\Omega \vartheta \, \mathrm{d} x \quad \text{a.e. in\;} \hspace{1pt}, (0,T). \end{equation} Thanks to estimates \eqref{crucial-est3.2} and \eqref{est-for-Ws}, we have that \begin{equation}
\label{used-here}
\| \textstyle{\Xint-_\Omega \vartheta \, \mathrm{d} x} \|_{L^2(0,T)} +
\left\| \Xint-_\Omega W_{,c}(c,\varepsilon(\mathbf{u}),z) \, \mathrm{d} x \right\|_{L^\infty (0,T)} \leq C. \end{equation} Therefore, in order to estimate $\Xint-_\Omega \mu \, \mathrm{d} x$ it is sufficient to gain a bound for $\Xint-_\Omega \phi'(c) \, \mathrm{d} x$. We shall do so by testing \eqref{e:mu} by $c- \Xint-_\Omega c \, \mathrm{d} x = c - \mathfrak{m}_0 $. This gives for a.a.\hspace{1pt} $t\in (0,T)$ \begin{equation} \label{clever-c} \begin{aligned}
&\int_\Omega |\nabla c(t)|^p \, \mathrm{d} x + \int_\Omega \beta(c (t)) (c(t)-\mathfrak{m}_0) + \gamma'(c (t)) (c(t)-\mathfrak{m}_0) \, \mathrm{d} x\\
&=\int_\Omega \big(\vartheta(t) - W_{,c}(c(t),\varepsilon(\mathbf{u}(t)),z(t))\big) (c(t)-\mathfrak{m}_0) \, \mathrm{d} x
+\int_\Omega \left(\mu(t) -\Xint-_\Omega \mu(t) \, \mathrm{d} x \right) ( c(t)- \mathfrak{m}_0 )\, \mathrm{d} x\\
&\quad-\int_\Omega c_t(t)c(t)\,\mathrm dx\\
&\leq C \left( \|\vartheta(t)\|_{L^2(\Omega)}+ \| W_{,c}(c(t),\varepsilon(\mathbf{u}(t)),z(t)) \|_{L^2(\Omega )} \right) \| c(t)\|_{L^2(\Omega)} + \| \nabla \mu(t)\|_{L^2(\Omega)} \| \nabla c(t)\|_{L^2(\Omega)}\\
&\quad+\|c_t(t)\|_{L^1(\Omega)}\|c(t)\|_{L^\infty(\Omega)} \end{aligned} \end{equation} where for the first equality we have used that $ (\Xint-_\Omega \mu(t) \, \mathrm{d} x )( \int_\Omega (c(t)- \mathfrak{m}_0 )\, \mathrm{d} x ) =0$
and $\mathfrak{m}_0\int_\Omega c_t(t)\,\mathrm dx=0$, and for the second one the Poincar\'e inequality for the second integral. Now, observe that \begin{equation} \label{added-4-gamma}
\int_\Omega \gamma'(c (t)) (c(t)-\mathfrak{m}_0) \, \mathrm{d} x \geq -C \end{equation} since, by the $L^\infty (0,T;W^{1,p}(\Omega))$-estimate for $c$ and the fact that $p>d$, we have \begin{equation} \label{gamma'-bounded}
\| \gamma'(c) \|_{L^\infty (Q)} \leq C\,.
Combining \eqref{clever-c} and \eqref{added-4-gamma} with \eqref{mir-zelik}, yielding
\begin{equation} \label{danke_zelik}
\exists\, C_{\mathfrak{m}_0},\,C_{\mathfrak{m}_0}'>0 \ \ \text{for a.a. }
t \in (0,T)\, : \quad
\int_\Omega | \beta(c(t))| \, \mathrm{d} x \leq C_{\mathfrak{m}_0} \int_\Omega \beta(c (t)) (c(t)-\mathfrak{m}_0) \, \mathrm{d} x + C_{\mathfrak{m}_0}', \end{equation} and taking into account estimates \eqref{crucial-est3.2}, \eqref{est5}, \eqref{est5-added}, and \eqref{est-for-Ws}, we conclude that
$\left\|\beta(c) \right\|_{L^2 (0,T; L^1(\Omega))} \leq C$, whence $\left\|\phi'(c) \right\|_{L^2 (0,T; L^1(\Omega))} \leq C.$ Then, arguing by comparison in \eqref{mean-mu} and taking into account \eqref{used-here} we ultimately conclude
$\| \Xint-_\Omega \mu\, \mathrm{d} x \|_{L^2(0,T)} \leq C$. Combining this with \eqref{est5} and using the Poincar\'e inequality we infer that \begin{equation} \label{est-for-mu}
\| \mu\|_{L^2(0,T;H^1(\Omega))} \leq C. \end{equation}
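Here the Poincar\'e inequality is used in the mean-value form (a standard statement):
\[
\|\mu(t)\|_{H^1(\Omega)}\leq C\Big(\|\nabla\mu(t)\|_{L^2(\Omega;\bbR^d)}+\Big|\textstyle\Xint-_\Omega\mu(t)\,\mathrm{d}x\Big|\Big)
\qquad\text{for a.a. } t\in(0,T),
\]
combined with the $L^2(0,T)$-bound on $\Xint-_\Omega\mu\,\mathrm{d}x$ obtained above and with the bound on $\nabla\mu$ from \eqref{est5}.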
\paragraph{\bf Sixth estimate:} We now argue by comparison in \eqref{e:mu} and take into account estimates \eqref{crucial-est3.2}, \eqref{est5}, \color{black} \eqref{est-for-Ws}, and \eqref{est-for-mu}, as well as \eqref{gamma'-bounded}.
Then we conclude that \[
\| \Delta_p(c) +\eta \|_{L^2(0,T; L^2(\Omega))} \leq C. \] Now, in view of the monotonicity of the function $\beta$, it is not difficult to deduce from the above estimate that \begin{equation}
\| \Delta_p(c)\|_{L^2(0,T; L^2(\Omega))}+ \| \eta \|_{L^2(0,T; L^2(\Omega))} \leq C. \end{equation}
\paragraph{\bf Seventh estimate:} We test \eqref{e:teta} by $\frac{w}{\vartheta}$, with $w$ a test function in $ W^{1,d+\epsilon}(\Omega)$ with $\epsilon>0 $, which then ensures $w\in L^\infty(\Omega)$.
Thus, using the place-holders \begin{align*}
&H := - c_t - z_t -\rho\mathrm{div}(\mathbf{u}_t),\\
&J:= \frac1\vartheta (g+ a(c,z) \varepsilon(\mathbf{u}_t):\mathbb{V} \varepsilon(\mathbf{u}_t) + |c_t|^2 + |z_t|^2 + m(c,z)|\nabla \mu|^2), \end{align*} we obtain that \[ \begin{aligned} &
\left| \int_\Omega \partial_t \log(\vartheta) w \, \mathrm{d} x \right| \\ &
= \left| \int_\Omega \left( H w - \frac{\mathsf{K}(\vartheta)}\vartheta \nabla \vartheta \cdot \nabla w - \frac{\mathsf{K}(\vartheta)}{\vartheta^2}
|\nabla\vartheta|^2 w + Jw \right) \, \mathrm{d} x
+\int_{\partial\Omega} h \frac{w}{\vartheta} \, \mathrm{d} S
\right| \\ &
\leq \left|\int_\Omega H w \, \mathrm{d} x \right|
+ \left| \int_\Omega \frac{\mathsf{K}(\vartheta)}\vartheta \nabla \vartheta
\cdot \nabla w \, \mathrm{d} x \right| +
\left| \int_\Omega \frac{\mathsf{K}(\vartheta)}{\vartheta^2}
|\nabla\vartheta|^2 w \, \mathrm{d} x \right| + \left|\int_\Omega J w \, \mathrm{d} x \right|
+ \left|\int_{\partial\Omega} h\frac{w}\vartheta \, \mathrm{d} S\right| \\ &
\doteq I_1+I_2+I_3+I_4 +I_5. \end{aligned} \]
From estimate \eqref{est5} we deduce that $\|H\|_{L^2(0,T; L^2(\Omega))} \leq C$, therefore
\[
|I_1| \leq \mathcal{H}(t) \|w \|_{L^2(\Omega)} \quad \text{ with }
\mathcal{H}(t)= \| H(\cdot,t)\|_{L^2(\Omega)} \in L^2(0,T). \] Analogously, also in view of \eqref{heat-source}, of \eqref{teta-pos} and of estimate \eqref{est5}, we infer
that
\[
|I_4| \leq \frac{1}{\underline\vartheta}\mathcal{J}(t)
\|w\|_{L^\infty(\Omega)} \qquad \text{with } \mathcal{J}(t) := \|
J(\cdot,t)\|_{L^1(\Omega)} \in L^1(0,T).
\] Moreover, $
|I_5| \leq \frac{1}{\underline\vartheta} \|h(t)\|_{L^2(\partial \Omega)} \| w\|_{L^2(\partial \Omega)} $, with $ \|h(t)\|_{L^2(\partial \Omega)} \in L^1(0,T)$ thanks to \eqref{dato-h}. In order to estimate $I_2$ and $I_3$ we develop the very same calculations as in the proof of \cite[Sec.\hspace{1pt} 3, \emph{Sixth estimate}]{RocRos14}. Referring to the latter paper for all details, we mention here that, exploiting the growth condition \eqref{hyp-K} on $\mathsf{K}$, the positivity of $\vartheta$ \eqref{teta-pos}, and the H\"older inequality, we have
\[ \begin{aligned} &
|I_2| \leq \frac C{\underline\vartheta} \mathcal{O}(t) \| \nabla w\|_{L^2(\Omega;\mathbb{R}^d)}
+ C \widetilde{\mathcal{O}}(t) \| \nabla w\|_{L^{{d+\epsilon}}(\Omega;\mathbb{R}^d)} \\ & \quad \text{with } \begin{cases}
\mathcal{O}(t) := \| \nabla
\vartheta(t) \|_{L^2(\Omega;\mathbb{R}^d)} \in L^2(0,T) & \text{by \eqref{crucial-est3.2},} \\
\widetilde{ \mathcal{O}}(t) \color{black}:= \| \vartheta(t)^{(\kappa
+\alpha-2)/2} \nabla \vartheta (t) \|_{L^2(\Omega;\mathbb{R}^d)}
\|\vartheta(t)^{(\kappa -\alpha)/2} \|_{L^{d^\star-\eta}(\Omega)} \in L^1(0,T) & \text {by \eqref{additional-info}, \eqref{crucial-est3.2}, \eqref{necessary-added},} \end{cases} \end{aligned} \] with $\tfrac1{d+\epsilon} + \tfrac{1}{d^\star -\eta} +\tfrac12 \leq 1 $. With analogous arguments, we find \[ \begin{aligned} &
|I_3| \leq \frac C{{\underline \vartheta }^2} \mathcal{O}(t)^2 \| w\|_{L^\infty (\Omega)}
+ C \overline{\mathcal{O}}(t) \|w \|_{L^\infty(\Omega)}
\\
&
\text{with } \overline{\mathcal{O}}(t)= \int_\Omega \vartheta(t)^{\kappa+\alpha-2} |\nabla \vartheta(t)|^2 \, \mathrm{d} x
+ \int_\Omega |\nabla \vartheta(t)|^2 \, \mathrm{d} x \in L^1(0,T) \text{ by \eqref{additional-info} and \eqref{crucial-est3.2}.}
\end{aligned} \] All in all, we infer that there exists a positive function $\mathcal{C} \in L^1(0,T)$ such that $
| \int_\Omega \partial_t \log(\vartheta(t)) w \, \mathrm{d} x | \leq \mathcal{C} (t)
\hspace{1pt}|w\hspace{1pt}|_{W^{1,d+\epsilon}(\Omega)} $ for
a.a.\hspace{1pt} \color{black} $ t \in (0,T). $ Hence, \begin{equation}\label{est6}
\hspace{1pt}|\partial_t\log(\vartheta)\hspace{1pt}|_{L^1(0,T; (W^{1,d+\epsilon}(\Omega)'))} \leq C.
\end{equation}
\paragraph{ \bf Eighth estimate [$ {\boldsymbol \kappa} {\boldsymbol \in} { \bf (1,5/3)}$ if ${\bf d=3}$ and $\boldsymbol{\kappa} {\boldsymbol \in} {\bf (1,2)}$ if ${ \bf d=2}$]: \color{black}} We multiply
\eqref{e:teta} by a test function $w \in W^{1,\infty}(\Omega)$ (which e.g.\ holds if $w \in W^{2,d+\epsilon}(\Omega)$ for
$\epsilon>0$) and find
\[
\begin{aligned}
\left| \int_\Omega \vartheta_t w \, \mathrm{d} x \right|
\leq \left|\int_\Omega L w \, \mathrm{d} x \right|
+ \left| \int_\Omega \mathsf{K}(\vartheta) \nabla \vartheta
\cdot \nabla w \, \mathrm{d} x \right| + \left|\int_{\partial\Omega} hw \, \mathrm{d} S\right| \doteq I_1+I_2+I_3,
\end{aligned}
\]
where we have set
$$
L= -c_t\vartheta - z_t \vartheta -\rho\vartheta \mathrm{div}(\mathbf{u}_t)+g+ a(c,z)\varepsilon(\mathbf{u}_t):\mathbb{V} \varepsilon(\mathbf{u}_t) +|c_t|^2 + |z_t|^2 + m(c,z)|\nabla \mu|^2.
$$
Therefore,
\[
|I_1| \leq \mathcal{L}(t) \|w\|_{L^\infty (\Omega)} \quad \text{with } \mathcal{L}(t):=\|L(t)\|_{L^1(\Omega)} \in L^1(0,T), \quad
|I_3| \leq \| h(t) \|_{L^2(\partial
\Omega)} \| w\|_{L^2(\partial \Omega)} \text{ with } \|h(t)\|_{L^2(\partial\Omega)}\in L^1(0,T)
\]
thanks to \eqref{heat-source}, \eqref{crucial-est3.2}, and \eqref{est5} for $I_1$, and \eqref{dato-h} for $I_3$. We estimate $I_2$ by proceeding exactly in the same way as for \cite[Sec.\ 3, \emph{Seventh estimate}]{RocRos14}. Namely, taking into account once again the growth condition
\eqref{hyp-K} on $\mathsf{K}$, we find
\begin{equation} \label{citata-dopo-ehsi}
|I_2|\leq C\| \vartheta^{(\kappa-\alpha+2)/2} \|_{L^2(\Omega)} \|\vartheta^{(\kappa+\alpha-2)/2} \nabla \vartheta\|_{L^2(\Omega;\mathbb{R}^d)} \|\nabla w\|_{L^\infty (\Omega;\mathbb{R}^d)}
+ C \| \nabla \vartheta\|_{L^2(\Omega;\mathbb{R}^d)} \|\nabla w\|_{L^2 (\Omega;\mathbb{R}^d)}.
\end{equation}
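For the reader's convenience, let us also record the elementary exponent bookkeeping behind \eqref{citata-dopo-ehsi}; here we use \eqref{hyp-K} only through a bound of the type $\mathsf{K}(\vartheta)\leq C(1+\vartheta^{\kappa})$ (the computation adapts with obvious changes to the precise form of \eqref{hyp-K}). Writing
\[
(1+\vartheta^{\kappa})|\nabla\vartheta|
= |\nabla\vartheta| + \vartheta^{(\kappa-\alpha+2)/2}\,\big(\vartheta^{(\kappa+\alpha-2)/2}|\nabla\vartheta|\big),
\qquad \tfrac{\kappa-\alpha+2}{2}+\tfrac{\kappa+\alpha-2}{2}=\kappa,
\]
and applying the Cauchy-Schwarz inequality to the second summand, the two factors are paired with $\|\nabla w\|_{L^\infty(\Omega;\mathbb{R}^d)}$, while the first summand is paired with $\|\nabla w\|_{L^2(\Omega;\mathbb{R}^d)}$; this yields exactly the two terms on the right-hand side of \eqref{citata-dopo-ehsi}.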
Observe that, since $\kappa <\frac53$ if $d=3$, and $\kappa <2$ if $d=2$, and $\alpha$ can be chosen arbitrarily close to $1$, from estimate \eqref{estetainterp} we have that $\vartheta^{(\kappa-\alpha+2)/2}$ is bounded in $L^2(0,T; L^2(\Omega))$.
Thus, also taking into account \eqref{crucial-est3.2}, we conclude that
$|I_2|\leq C \mathcal{L}^*(t) \|\nabla w \|_{L^\infty (\Omega)} $ for some $\mathcal{L}^* \in L^1(0,T)$. Hence, \begin{equation} \label{bv-esti-temp}
\|\vartheta_t\|_{L^1(0,T; W^{1,\infty}(\Omega)')} \leq C.
\end{equation}
\paragraph{\bf Ninth estimate:} We test \eqref{e:c} by $\Delta\mu$ and integrate in time. It follows \begin{align} \label{eqn:est9}
\int_0^t\int_\Omega\dive\big(m(c,z)\nabla\mu\big)\Delta\mu\,\mathrm dx\,\mathrm ds
=\int_0^t\int_\Omega c_t\Delta\mu\,\mathrm dx\,\mathrm ds. \color{black} \end{align} The left-hand side is estimated \color{black} below by exploiting Hypotheses (II)
and the boundedness $\|c\|_{L^\infty(Q)}+\|z\|_{L^\infty(Q)}\leq C$, viz. \begin{align*}
\int_0^t\int_\Omega\dive\big(m(c,z)\nabla\mu\big)\Delta\mu\,\mathrm dx\,\mathrm ds
&\geq\int_0^t\int_\Omega\big(\nabla m(c,z)\cdot\nabla\mu\big)\Delta\mu\,\mathrm dx\,\mathrm ds
+m_0\int_0^t\int_\Omega|\Delta\mu|^2\,\mathrm dx\,\mathrm ds\\
&\geq-C\int_0^t\int_\Omega(|\nabla c|+|\nabla z|)|\nabla\mu||\Delta\mu|\,\mathrm dx\,\mathrm ds
+m_0\int_0^t\int_\Omega|\Delta\mu|^2\,\mathrm dx\,\mathrm ds. \end{align*} By using the interpolation inequality \eqref{interp2} and by using analogous calculations as in the \textit{Fourth estimate}, we find by Young's inequality \begin{align*}
&\int_0^t\int_\Omega(|\nabla c|+|\nabla z|)|\nabla\mu||\Delta\mu|\,\mathrm dx\,\mathrm ds\\
&\qquad\leq C\int_0^t\big(\|\nabla c\|_{L^{d+\zeta}(\Omega;\bbR^d)}+\|\nabla z\|_{L^{d+\zeta}(\Omega;\bbR^d)}\big)
\|\nabla\mu\|_{L^{d^*-\eta}(\Omega;\bbR^d)}\|\Delta\mu\|_{L^2(\Omega)}\,\mathrm ds\\
&\qquad\leq C\big(\|\nabla c\|_{L^\infty(0,T;L^{p}(\Omega;\bbR^d))}+\|\nabla z\|_{L^\infty(0,T;L^{p}(\Omega;\bbR^d))}\big)\int_0^t\|\nabla\mu\|_{L^{d^*-\eta}(\Omega;\bbR^d)}\|\Delta\mu\|_{L^2(\Omega)}\,\mathrm ds\\
&\qquad\leq C'\int_0^t\big(\varrho\|\nabla\mu\|_{H^{1}(\Omega;\bbR^d)}+C_\varrho\|\nabla\mu\|_{L^2(\Omega;\bbR^d)}\big)\|\Delta\mu\|_{L^2(\Omega)}\,\mathrm ds\\
&\qquad\leq \varrho C'C_\delta\int_0^t\|\nabla\mu\|_{H^{1}(\Omega;\bbR^d)}^2\,\mathrm ds
+C' C_\varrho C_\delta\int_0^t\|\nabla\mu\|_{L^{2}(\Omega;\bbR^d)}^2\,\mathrm ds
+\delta C'\int_0^t\|\Delta\mu\|_{L^2(\Omega)}^2\,\mathrm ds. \end{align*} By choosing suitable $\delta>0$ and $\varrho>0$, we see that \begin{align*}
&\int_0^t\int_\Omega\big(|\nabla c|+|\nabla z|\big)|\nabla\mu||\Delta\mu|\,\mathrm dx\,\mathrm ds
\leq \epsilon\int_0^t\|\mu\|_{H^{2}(\Omega)}^2\,\mathrm ds
+C_\epsilon\int_0^t\|\nabla\mu\|_{L^{2}(\Omega;\bbR^d)}^2\,\mathrm ds. \end{align*} All in all, we find from the above estimates \begin{align*}
\int_0^t\|\Delta\mu\|_{L^2(\Omega)}^2 \,\mathrm ds
\leq
\epsilon\int_0^t\|\mu\|_{H^2(\Omega)}^2\,\mathrm ds+C_\epsilon\int_0^t\|c_t\|_{L^2(\Omega)}^2
\,\mathrm ds
+C_\epsilon\int_0^t\|\nabla\mu\|_{L^{2}(\Omega;\bbR^d)}^2\,\mathrm ds, \end{align*} where the second and the third term on the right-hand side are bounded by \eqref{est5} for fixed $\epsilon>0$. By the $H^2$-elliptic regularity estimate for homogeneous Neumann problems, i.e.
\begin{align*}
\|\mu\|_{H^2(\Omega)}^2\leq C\big(\|\Delta \mu\|_{L^2(\Omega)}^2+\|\mu\|_{H^1(\Omega)}^2\big),
\end{align*}
we conclude by choosing $\epsilon>0$ sufficiently small and by using the boundedness of $\|\mu\|_{L^2(0,T;H^1(\Omega))}$ in \eqref{est5} that
\begin{align}
\|\mu\|_{L^2(0,T;H^2(\Omega))}\leq C.
\end{align}
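Spelled out, the absorption argument used in this last step reads as follows (no new ingredients enter here): combining the previous two displays gives
\[
\int_0^t\|\mu\|_{H^2(\Omega)}^2\,\mathrm ds
\leq C\epsilon\int_0^t\|\mu\|_{H^2(\Omega)}^2\,\mathrm ds
+CC_\epsilon\int_0^t\big(\|c_t\|_{L^2(\Omega)}^2+\|\nabla\mu\|_{L^2(\Omega;\bbR^d)}^2\big)\,\mathrm ds
+C\int_0^t\|\mu\|_{H^1(\Omega)}^2\,\mathrm ds,
\]
so that the choice $\epsilon\leq\tfrac1{2C}$ allows us to absorb the first term on the right-hand side into the left-hand side, while the remaining terms are bounded thanks to \eqref{est5}.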
$\square$
\section{\bf Time discretization and regularizations} \label{s:5} In this section we will introduce and motivate a \textit{thermodynamically consistent time-discretization scheme} for system \eqref{eqn:PDEsystem} and devote a large part of Sec. \ref{ss:5.2} to the proof that it admits solutions. Next, in Sec.\hspace{1pt} \ref{ss:5.3} we will derive the energy and entropy inequalities fulfilled by the discrete solutions, and, starting from them, we will obtain a series of a priori estimates on the approximate solutions.
\subsection{Setup of the time-discrete system} \label{ss:5.1} We consider an equidistant partition of $[0,T]$, with time-step $\tau>0$ and nodes \begin{align}
t_\tau^k:=k\tau, \label{time-nodes} \end{align} $k=0,\ldots,K_\tau$, and we approximate the data $\mathbf{f}$, $g$, and $h$ by local means, i.e. setting for all $k=1,\ldots,K_{\tau}$ \begin{equation} \label{local-means} \ftau{k}:=
\frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} \mathbf{f}(s)\, \mathrm{d} s\,,
\qquad \gtau{k}:= \frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} g(s)
\, \mathrm{d} s\,, \qquad \htau{k}:=
\frac{1}{\tau}\int_{t_\tau^{k-1}}^{t_\tau^k} h(s) \, \mathrm{d} s\,. \end{equation} In what follows, for a given $K_\tau$-tuple $(v_{\tau}^k)_{k=1}^{K_\tau}$ the time-discrete derivative is denoted by
\[
D_{\tau,k}(v) =\frac{v_{\tau}^k-v_{\tau}^{k-1}}{\tau} \quad \text{so that} \quad
D_{\tau,k}(D_{\tau,k}(v)) = \frac{v_{\tau}^k-2v_{\tau}^{k-1} + v_{\tau}^{k-2}}{\tau^2}.
\]
Before stating the complete time-discrete scheme in Problem \ref{def:time-discrete}, we are going to introduce its main ingredients in what follows.
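Before doing so, let us record a purely notational check on the discrete derivatives just introduced: the second formula is nothing but the first one applied to the tuple $(D_{\tau,j}(v))_{j}$, since
\[
D_{\tau,k}\big(D_{\tau,\cdot}(v)\big)
=\frac{D_{\tau,k}(v)-D_{\tau,k-1}(v)}{\tau}
=\frac1\tau\Big(\frac{v_{\tau}^k-v_{\tau}^{k-1}}{\tau}-\frac{v_{\tau}^{k-1}-v_{\tau}^{k-2}}{\tau}\Big)
=\frac{v_{\tau}^k-2v_{\tau}^{k-1}+v_{\tau}^{k-2}}{\tau^2},
\]
which is the discrete counterpart of the second time derivative appearing in the momentum balance.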
\paragraph{\bf Regularization of the coefficient functions depending on
$\bf c$ \color{black}} In the following we will analyze a specially chosen time-discretization scheme for system \eqref{eqn:PDEsystem}. To ensure suitable coercivity properties in the time-discrete system needed for existence of solutions we utilize the following $\omega$-regularizations which will eventually vanish as $\omega\downarrow 0$: \begin{itemize}
\item[--]
First of all, we will replace the maximally monotone operator $\beta$
(the derivative of the convex part of the potential $\phi$ (see Hypothesis (I))
by its Yosida regularization
$\beta_\omega\in \mathrm{C}^0(\bbR)$
with Yosida index $\omega\in(0,\infty)$.
This will be crucial to render rigorously the \emph{Fifth a priori estimate}
on the time-discrete level, cf.
the calculations in Sec.\hspace{1pt} \ref{ss:5.4}. Observe that the Yosida approximation
$\widehat{\beta}_\omega\in C^1(\bbR)$ of $\widehat\beta$, fulfilling
$\widehat{\beta}_\omega' = \beta_\omega$, is still convex, and that $\beta_\omega(0)=0$. \color{black}
For notational consistency we set $\phi_\omega:=\widehat{\beta}_\omega+\gamma$.
\item[--]
Let $\{\mathcal R_\omega\}_{ \omega>0}\subseteq C^2(\bbR)\cap W^{2,\infty}(\bbR)$ be a family of
functions (we can think of ``smoothed truncations'') such that:
\begin{align}
\forall M>0\quad\exists \omega_0>0\qquad\forall \omega\in(0,\omega_0),\;c\in(-M,M):
\qquad \mathcal R_\omega(c)=c.
\label{Rtrunc}
\end{align}
They have the role to somehow provide for the information that $c$ is bounded, which is not supplied by the concentration potential $\phi$, defined on all
of $\bbR$. In turn, this information is crucial in order to make some of the following calculations rigorous.
The limit passage as $\omega \downarrow 0$ will be possible thanks to an a priori bound for $c$ in $L^\infty (Q)$, cf.\hspace{1pt} Sec.\hspace{1pt} \ref{s:6} ahead. \color{black}
\par
We define the following regularizations for the elastic energy density:
\begin{align*}
&W^\omega(c,\varepsilon,z):=W(\mathcal R_\omega(c),\varepsilon,z),
\end{align*}
and observe that, for fixed $\omega>0$ and fixed $\varepsilon\in\bbR_{sym}^{d\times d}$ and $z\in\bbR$ (cf.\ also \eqref{later-ref}):
\begin{align}
\label{eqn:WtauEst}
|W^\omega(c,\varepsilon,z)|+|W_{,c}^\omega(c,\varepsilon,z)|+|W_{,cc}^\omega(c,\varepsilon,z)|\leq C
\qquad\text{ uniformly in }c\in\bbR.
\end{align} \end{itemize} Throughout \underline{this section} we neglect the subscript $\omega$ on the solutions $c$, $\mu$, $z$, $\vartheta$ and $\mathbf{u}$ for the sake of readability.
\paragraph{\bf Convex-concave splitting of the coefficient functions} Let us mention in advance how the various nonlinear terms in \eqref{eqn:PDEsystem} will be handled in the discrete system \eqref{PDE-discrete}, which is carefully designed in such a way as to ensure the validity of the \emph{discrete total energy inequality}, cf.\ the forthcoming Lemma \ref{l:energy-est}. To this aim, it will be crucial to employ the \emph{convex-concave splitting} of the functions $c\mapsto W^\omega(c,\varepsilon,z)$, $z \mapsto W^\omega(c,\varepsilon,z)$, $z\mapsto\sigma(z)$, as well as the specific splitting \eqref{specific-splitting-phi} below (cf.\ also \eqref{decomposition}) for $\phi_\omega$. Recall that a convex-concave decomposition of a real-valued $\mathrm{C}^2(I)$-function $\psi$
with bounded second derivative on an interval $I$ may be canonically given by $\psi = \conv{\psi} + \conc{\psi}$, with \begin{align} \label{eqn:splitting}
&\conv{\psi}(x):=\psi(x)+\frac12\Big(\max_{y\in I}|\psi''(y)|\Big)x^2,
&&\conc{\psi}(x):=-\frac12\Big(\max_{y\in I}|\psi''(y)|\Big)x^2. \end{align} Therefore, we will proceed as follows (a small worked example illustrating \eqref{eqn:splitting}, together with the one-sided estimate it yields, is given right after this list): \begin{itemize}
\item[--]
The nonlinear contribution $\sigma'(z)$ in \eqref{e:z} will be discretized via the
convex-concave splitting \eqref{eqn:splitting}
on $I=[0,1]$:
\begin{align*}
\sigma'(z)\;\text{ via }\;(\conv{\sigma})'(z_\tau^k)+(\conc{\sigma})'(z_\tau^{k-1}).
\end{align*}
\item[--]
For the time-discrete version of the term $\pd{c}(c,\varepsilon(\mathbf{u}),z)$ in \eqref{e:mu} and
$\pd{z}(c,\varepsilon(\mathbf{u}),z)$ in \eqref{e:z} we will resort to partial convex-concave splittings
of $ W^\omega$.
To denote them, we will use the symbols \eqref{eqn:splitting}, combined with subscripts \color{black}
to denote the variable with respect to which the splitting is computed.
Therefore, we set
\begin{subequations}
\label{eqn:convConcSplittingWc}
\begin{align}
&{\breve{W}_{1}^\omega}(c,\varepsilon,z):= W^\omega(c,\varepsilon,z)+\frac12\Big (
\sup_{\widetilde c \in \bbR}| W_{,cc}^\omega(\widetilde c,\varepsilon,z)|\Big)c^2,\\
&{\invbreve{W}_{1}^\omega}(c,\varepsilon,z):=-\frac12\Big(\sup_{\widetilde c \in \bbR}| W_{,cc}^\omega(\widetilde c,\varepsilon,z)|\Big)c^2,\\
&{\breve{W}_{3}^\omega}(c,\varepsilon,z):=W^\omega(c,\varepsilon,z)+\frac12\Big (
\sup_{\widetilde z \in [0,1]}| W_{,zz}^\omega(c,\varepsilon,\widetilde z)|\Big)z^2,\\
&{\invbreve{W}_{3}^\omega}(c,\varepsilon,z):=-\frac12\Big(\sup_{\widetilde z \in [0,1]}|W_{,zz}^\omega(c,\varepsilon,\widetilde z)|\Big)z^2.
\end{align}
\end{subequations}
Note that these functions are well-defined for fixed $\omega>0$ due to \eqref{eqn:WtauEst}.
The splitting of $ W^\omega$ with respect to $\varepsilon(\mathbf{u})$ is not needed due to the
convexity of $ W^\omega$ with respect to $\varepsilon(\mathbf{u})$ by the structural assumption \eqref{eqn:assumptionW}
and the non-negativity of $b$ in Hypothesis (V).
We easily see that
$$
W^\omega={\breve{W}_{1}^\omega}+{\invbreve{W}_{1}^\omega}={\breve{W}_{3}^\omega}+{\invbreve{W}_{3}^\omega}
$$
and that
\begin{align*}
&&&&&{\breve{W}_{1}^\omega}(\cdot,\varepsilon,z)\text{ is convex on $\bbR $},
&&{\invbreve{W}_{1}^\omega}(\cdot,\varepsilon,z)\text{ is concave on } \bbR \hspace*{2.2em}\bigg\}\text{ for all fixed }\varepsilon,z,\\
&&&&& W^\omega(c,\cdot,z)\text{ is convex on $\bbR_{sym}^{d\times d}$}
&&\hspace*{14.0em}\bigg\}\text{ for all fixed }c,z,\\
&&&&&{\breve{W}_{3}^\omega}(c,\varepsilon,\cdot)\text{ is convex on $[0,1]$},
&&\hspace*{0.07em}{\invbreve{W}_{3}^\omega}(c,\varepsilon,\cdot)\text{ is concave on }[0,1]\quad\bigg\}\text{ for all fixed }c,\varepsilon.
\end{align*}
We will
replace the terms $W_{,c}$,
$W_{,\varepsilon}$, and
$W_{,z}$ in system \eqref{eqn:PDEsystem} by their \color{black} time-discretized and regularized versions: \color{black}
\begin{align*}
&&&&&\pd{c}(c,\varepsilon(\mathbf{u}),z)&&\text{ via }&&\breve{W}_{1,c}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})+\invbreve{W}_{1,c}^\omega(c_\tau^{k-1},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1}),&&&&&&\\
&&&&&\pd{\varepsilon}(c,\cdot,z)&&\text{ via }&&\pd{\varepsilon}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^k),z_\tau^k),\\
&&&&&\pd{z}(c,\varepsilon(\mathbf{u}),z)&&\text{ via }&&\breve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k)+\invbreve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1}).
\end{align*}
By exploiting convexity and concavity estimates this time-discretization scheme leads to the crucial estimate
\begin{align}
\begin{aligned}
&\Big(\breve{W}_{1,c}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})+\invbreve{W}_{1,c}^\omega(c_\tau^{k-1},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})\Big)(c_\tau^{k}-c_\tau^{k-1})\\
&+\pd{\varepsilon}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^k),z_\tau^k):\varepsilon(\ub_\tau^k-\ub_\tau^{k-1})\\
&+\Big(\breve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k)+\invbreve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})\Big)(z_\tau^k-z_\tau^{k-1})\\
&\qquad\geq W^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})- W^\omega(c_\tau^{k-1},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})\\
&\qquad\quad+ W^\omega(c_\tau^{k},\varepsilon(\ub_\tau^k),z_\tau^k)- W^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k)\\
&\qquad\quad+ W^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k)- W^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})\\
&\qquad\geq W^\omega(c_\tau^{k},\varepsilon(\ub_\tau^k),z_\tau^k)- W^\omega(c_\tau^{k-1},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1}),
\end{aligned}
\label{eqn:convConcWall}
\end{align}
which will be used later in the proof of the discrete total energy inequality.
\item[--]
We will discretize the (formally written) term $\phi'(c) = \beta(c) +\gamma'(c)$
in \eqref{e:mu} in the following way:
As mentioned above
the maximally monotone operator $\beta$
is replaced
by its Yosida regularization $\beta_\omega\in \mathrm{C}^0(\bbR)$.
Hence, in view of the $\lambda_\gamma$-convexity of
$\gamma$ (cf. Remark \ref{rmk:l-convex-splitting}), the functions \color{black}
\begin{equation}
\label{specific-splitting-phi}
\conv{\phi}_\omega(c): = \widehat{\beta}_\omega(c) + \lambda_\gamma \frac{c^2}2 \quad \text{and} \quad
\conc{\phi}(c):=\gamma(c) - \lambda_\gamma \frac{c^2}2
\end{equation}
provide a convex-concave decomposition of
$\phi_\omega:= \widehat{\beta}_\omega + \gamma$.
Thus, we will approximate
\begin{align*}
\phi'(c)\;\text{ via }\;(\conv{\phi}_\omega)'(c_\tau^{k})+(\conc{\phi})'(c_\tau^{k-1}) \qquad
\text{with } \conv{\phi}_\omega,\, \conc{\phi} \text{ given by \eqref{specific-splitting-phi}.}
\end{align*} \end{itemize}
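As announced, let us illustrate \eqref{eqn:splitting} on a toy example (the concrete function below is chosen purely for illustration; $\sigma$ and $W^\omega$ are treated in exactly the same way). Take $\psi(x)=\cos x$ on $I=[0,2\pi]$, so that $\max_{y\in I}|\psi''(y)|=1$ and
\[
\conv{\psi}(x)=\cos x+\tfrac12x^2,\qquad
\conc{\psi}(x)=-\tfrac12x^2,\qquad
(\conv{\psi})''(x)=1-\cos x\geq0,\qquad(\conc{\psi})''(x)=-1\leq0.
\]
The only property of such splittings used below is the one-sided estimate
\[
\big((\conv{\psi})'(a)+(\conc{\psi})'(b)\big)(a-b)\;\geq\;\psi(a)-\psi(b)\qquad\text{for all }a,b\in I,
\]
obtained by adding the convexity inequality $(\conv{\psi})'(a)(a-b)\geq\conv{\psi}(a)-\conv{\psi}(b)$ to the concavity inequality $(\conc{\psi})'(b)(a-b)\geq\conc{\psi}(a)-\conc{\psi}(b)$; this is precisely the structure exploited in \eqref{eqn:convConcWall} and, later on, in the proof of Lemma \ref{l:energy-est}.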
\paragraph{\bf Statement of the time-discrete problem and existence result} In the following we are going to describe the time-discrete problem formally. Later on the precise spaces and a weak notion of solution \color{black} will be fixed. The time-discrete problem (formally) reads as follows: \begin{problem}
\upshape \label{def:time-discrete}
Let $\omega>0$ and $\tau>0$ be given.
Find functions
$\{(c_\tau^{k}, \mu_\tau^{k},z_\tau^k,\vartheta_\tau^k)\}_{k=0}^{K_\tau}$ and $\{\ub_\tau^k\}_{k=-1}^{K_\tau}$
which satisfy for all $k\in\{1,\ldots,K_\tau\}$
the following time-discrete version of \eqref{eqn:PDEsystem}:
\begin{subequations}
\label{PDE-discrete}
\begin{itemize}
\item[(i)] Cahn-Hilliard system:
\begin{align}
\label{eqn:discr1}
D_{\tau,k}(c)={}&\dive\big(m(c_\tau^{k-1},z_\tau^{k-1})\nabla\mu_\tau^{k}\big),\\
\notag
\mu_\tau^{k}={}&-\Delta_p(c_\tau^{k})+(\conv{\phi}_\omega)'(c_\tau^{k}) +(\conc{\phi})'(c_\tau^{k-1})
+\breve{W}_{1,c}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})\\
&+\invbreve{W}_{1,c}^\omega(c_\tau^{k-1},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})-\vartheta_\tau^k+D_{\tau,k}(c),
\label{eqn:discr2}
\end{align}
\item[(ii)] damage equation:
\begin{equation}
\begin{aligned}
\label{eqn:discr3}
&D_{\tau,k}(z)-\Delta_p(z_\tau^k)+\ell_\tau^{k} +\zeta_\tau^{k} + (\conv{\sigma})'(z_\tau^k)+ (\conc{\sigma})'(z_\tau^{k-1})
\\ & =-\breve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k) - \invbreve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})+\vartheta_\tau^k
\end{aligned}
\end{equation}
with
\begin{align*}
&\ell_\tau^{k}\in \partial I_{[0,\infty)}\big(z_\tau^k\big),\qquad
\zeta_\tau^{k}\in \partial I_{(-\infty,0]}\big(D_{\tau,k}(z)\big),
\end{align*}
\item[(iii)] temperature equation:
\begin{equation}
\label{eqn:discr4}
\begin{aligned}
&D_{\tau,k}(\vartheta) - \mathrm{div}(\mathsf{K}(\vartheta_\tau^k)\nabla \vartheta_\tau^k) +D_{\tau,k}(c)\vartheta_\tau^k+D_{\tau,k}(z)\vartheta_\tau^k+\rho\vartheta_\tau^k\dive(D_{\tau,k}(\mathbf{u}))\\
&=g_\tau^k+|D_{\tau,k}(c)|^2+|D_{\tau,k}(z)|^2+m(c_\tau^{k-1},z_\tau^{k-1})|\nabla\mu_\tau^{k}|^2
\\
& \qquad \qquad \qquad + a(c_\tau^{k-1},z_\tau^{k-1})\varepsilon(D_{\tau,k}(\mathbf{u})):\mathbb{V}\varepsilon(D_{\tau,k}(\mathbf{u})),
\end{aligned}
\end{equation}
\item[(iv)] balance of forces:
\begin{align}
&D_{\tau,k}(D_{\tau,k}(\mathbf{u})) -\dive\Big( a(c_\tau^{k-1},z_\tau^{k-1})\mathbb{V}\varepsilon(D_{\tau,k}(\mathbf{u}))
+ W_{,\varepsilon}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^k),z_\tau^k) -\rho\vartheta_\tau^k\mathds 1\Big)=\bold f_\tau^k,
\label{eqn:discr5}
\end{align}
\end{itemize}
\end{subequations}
supplemented with the initial data
\begin{align}
&\hspace*{9.3em}\left.
\begin{matrix}
c_\tau^0=c^0,\qquad&
z_\tau^0=z^0,\qquad&
\vartheta_\tau^0=\vartheta^0,\qquad\\
\mathbf{u}_\tau^0=\mathbf{u}^0,\qquad&
\mathbf{u}_\tau^{-1}=\mathbf{u}^0-\tau\mathbf v^0\qquad
\end{matrix}
\right\}
&&\text{a.e. in }\Omega\label{discre-initial-cond}
\end{align}
and the boundary data
\begin{align}
&\left.
\begin{matrix}
\nabla c_\tau^{k}\cdot \mathbf{n}=0,\qquad&
m(c_\tau^{k-1},z_\tau^{k-1})\nabla\mu_\tau^{k}\cdot \mathbf{n}=0,\qquad&
\nabla z_\tau^k\cdot \mathbf{n}=0,\qquad\\
\mathsf{K}(\vartheta_\tau^k)\nabla\vartheta_\tau^k\cdot\mathbf{n}=h_\tau^k,\qquad&
\ub_\tau^k=\mathbf{d}_\tau^k
\end{matrix}
\right\}
&&\text{a.e. on }\partial\Omega.\label{discre-boundary-cond}
\end{align} \end{problem} \begin{remark} \label{remark:discProbl}
\upshape
A few comments on Problem \ref{def:time-discrete} are in order: \color{black}
\begin{itemize}
\item[(i)]
It will turn out that a solution of the time-discrete problem always satisfies
the constraints:
\begin{align}
\label{discre-constraints}
&&&z_\tau^k\in[0,1],&&D_{\tau,k}(z)\leq 0,
&&\vartheta_\tau^k\geq\underline\vartheta\quad(\text{for some }\underline\vartheta>0)
&&\text{a.e. in }\Omega
\end{align}
as long as the initial data satisfy \eqref{h:initial}.
\item[(ii)]
Observe that the scheme is fully implicit and, in particular, the discrete temperature equation
\eqref{eqn:discr4} is coupled with \eqref{eqn:discr2}, \eqref{eqn:discr3}, and \eqref{eqn:discr5}
via the implicit term $\vartheta_\tau^k$ featuring in $D_{\tau,k}(c)\vartheta_\tau^k$, $D_{\tau,k}(z)\vartheta_\tau^k$, and
$\rho\,\vartheta_\tau^k\dive(D_{\tau,k}(\mathbf{u}))$. Indeed, having $\vartheta_\tau^k$ implicit in these terms is crucial for the argument we will develop later on
for proving the positivity of $\vartheta_\tau^k$, cf.\hspace{1pt} the proof of Lemma \ref{l:positivityThetaDiscr}. \color{black}
\item[(iii)]
The subgradients $\ell_\tau^{k}$ and $\zeta_\tau^{k}$ account for the non-negativity and the irreversibility constraints
on $z$. In the pointwise formulation, the sum rule (for
$z_\tau^{k-1} \not= 0$) and a direct calculation (for $z_\tau^{k-1}=0$) yield (see also the explicit pointwise description of $\partial I_{[0,z_\tau^{k-1}]}$ recorded right after this remark)
$$
\partial I_{[0,\infty)}\big(z_\tau^k\big)
+\partial I_{(-\infty,0]}\big(D_{\tau,k}(z)\big)
= \partial I_{[0,\infty)}\big(z_\tau^k\big)
+\partial I_{(-\infty,z_\tau^{k-1}]}\big(z_\tau^k\big)
=\partial I_{[0,z_\tau^{k-1}]}(z_\tau^k)
$$
and, consequently, the double inclusion in (ii) may be replaced by the single inclusion
\begin{align*}
&D_{\tau,k}(z)-\Delta_p(z_\tau^k)+\xi_\tau^{k} + (\conv{\sigma})'(z_\tau^k)+ (\conc{\sigma})'(z_\tau^{k-1})
\\ & =-\breve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k) - \invbreve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})+\vartheta_\tau^k
\end{align*}
with
\begin{align*}
&\xi_\tau^{k}\in \partial I_{[0,z_\tau^{k-1}]}(z_\tau^k).
\end{align*}
\item[(iv)]
By assuming the additional growth assumptions
\begin{align*}
&\sigma(0)\leq \sigma(z),\qquad b(c,0)\leq b(c,z)\text{ for all }c\in\bbR,z\in\bbR
\text{ with }z<0,
\end{align*}
it is possible to prove a maximum principle for equation \eqref{eqn:discr3}
which ensures $z_\tau^k\geq 0$ as long as $z^0\geq 0$.
In this case the subdifferential term
$ \partial I_{[0,\infty)}(z_\tau^k) $ \color{black}
in equation \eqref{eqn:discr3}
may be dropped. For details we refer to \cite[Proposition 5.5]{KRZ}.
\end{itemize} \end{remark}
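For later reference (cf.\ item (iii) of Remark \ref{remark:discProbl} and the weak formulation in Proposition \ref{prop:exist-discr} below), we also record the elementary pointwise description of the constraint subdifferential appearing there: for real numbers $z$ and $z_\tau^{k-1}\geq0$, standard convex analysis gives
\[
\partial I_{[0,z_\tau^{k-1}]}(z)=
\begin{cases}
(-\infty,0] & \text{if } z=0<z_\tau^{k-1},\\
\{0\} & \text{if } 0<z<z_\tau^{k-1},\\
[0,+\infty) & \text{if } z=z_\tau^{k-1}>0,\\
\bbR & \text{if } z=z_\tau^{k-1}=0,\\
\emptyset & \text{if } z\notin[0,z_\tau^{k-1}].
\end{cases}
\]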
We can now state our existence result for Problem \ref{def:time-discrete},
where we also fix the concept of weak solution to system \eqref{PDE-discrete}. With this aim, let us also \color{black} introduce the nonlinear operator $\mathcal A^k:X\to H^1(\Omega)'$, with \begin{align} &
X:=\Big\{\theta\in H^1(\Omega)\;:\;\int_\Omega\mathsf{K}(\theta)\nabla\theta\cdot\nabla v\,\mathrm dx
\text{ is well-defined for all }v\in H^1(\Omega)\Big\}, \notag \\ &
\big\langle\mathcal A^k(\theta),v\big\rangle_{H^1}:=\int_\Omega \mathsf{K}(\theta)\nabla\theta\cdot\nabla v\,\mathrm dx
-\int_{\partial\Omega}h_\tau^k v\,\mathrm dS. \label{A-operator} \end{align}
\begin{proposition} \label{prop:exist-discr}
Assume \textbf{Hypotheses (I)--(V)}, as well as \eqref{hyp:data} on $(\mathbf{f},g,h)$ and \eqref{h:initial} on $(c^0,z^0,\vartheta^0,\mathbf{u}^0,\mathbf{v}^0)$. \color{black}
Then, for every $\omega>0$ and $\tau>0$ Problem \ref{def:time-discrete} admits a weak solution
\begin{align}
\label{eqn:regDiscSol}
\{(c_\tau^{k}, \mu_\tau^{k},z_\tau^k,\vartheta_\tau^k,\ub_\tau^k)\}_{k=1}^{K_\tau}\subseteq W^{1,p}(\Omega)\times H_N^2(\Omega)\times W^{1,p}(\Omega)\times H^1(\Omega)\times H^2(\Omega;\mathbb{R}^d)
\end{align}
in the following sense:
\begin{itemize}
\item[--]
\eqref{eqn:discr1} and \eqref{eqn:discr5} are fulfilled a.e. in $\Omega$, with the boundary conditions $\nabla c_\tau^{k}\cdot \mathbf{n}=0$ and
$\ub_\tau^k=\mathbf{d}_\tau^k$ a.e.\ on $\partial\Omega$,
\item[--]
\eqref{eqn:discr2} is fulfilled in $W^{1,p}(\Omega)'$,
\item[--]
\eqref{eqn:discr4} is fulfilled in $H^1(\Omega)'$, in the form
\[
\begin{aligned}
&D_{\tau,k}(\vartheta)+\mathcal A^k(\vartheta_\tau^k)+D_{\tau,k}(c)\vartheta_\tau^k+D_{\tau,k}(z)\vartheta_\tau^k+\rho\vartheta_\tau^k\dive(D_{\tau,k}(\mathbf{u}))\\
&=g_\tau^k+|D_{\tau,k}(c)|^2+|D_{\tau,k}(z)|^2+m(c_\tau^{k-1},z_\tau^{k-1})|\nabla\mu_\tau^{k}|^2
\\
& \qquad \qquad \qquad + a(c_\tau^{k-1},z_\tau^{k-1})\varepsilon(D_{\tau,k}(\mathbf{u})):\mathbb{V}\varepsilon(D_{\tau,k}(\mathbf{u})),
\end{aligned}
\]
\item[--]
\eqref{eqn:discr3} is reformulated as (cf. Remark \ref{remark:discProbl} (ii))
\begin{equation}
\begin{aligned}
&D_{\tau,k}(z)-\Delta_p(z_\tau^k)+\xi_\tau^{k} + (\conv{\sigma})'(z_\tau^k)+ (\conc{\sigma})'(z_\tau^{k-1})\\
\label{eqn:discr3b}
&=-\breve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^k) - \invbreve{W}_{3,z}^\omega(c_\tau^{k},\varepsilon(\ub_\tau^{k-1}),z_\tau^{k-1})+\vartheta_\tau^k
\end{aligned}
\end{equation}
and fulfilled in $W^{1,p}(\Omega)'$ with
$\xi_\tau^{k}\in \partial I_{Z_\tau^{k-1}}(z_\tau^k)$ where
\begin{equation} \label{eqn:set_z}
Z_\tau^{k-1}:=\{z\in W^{1,p}(\Omega)\,|\,0\leq z\leq z_\tau^{k-1}\},
\end{equation}
\item[--]
the initial conditions \eqref{discre-initial-cond} and the boundary conditions \eqref{discre-boundary-cond} \color{black} are satisfied,
\item[--]
the constraints \eqref{discre-constraints} are satisfied.
\end{itemize} \end{proposition} We will prove Proposition \ref{prop:exist-discr} in the ensuing section by performing a double passage to the limit in a carefully devised approximation of system \eqref{PDE-discrete}, depending on two additional parameters $\nu$ and $\varrho$.
\subsection{Proof of Proposition \ref{prop:exist-discr}} \label{ss:5.2} \noindent We will split the proof of Prop.\hspace{1pt} \ref{prop:exist-discr} in several steps and obtain a series of intermediate results.
Our argument is based on a double approximation procedure and two consecutive limit passages. More precisely, we approximate system \eqref{PDE-discrete} by \begin{enumerate}
\item adding the higher order terms
\begin{align*}
&&&&&&&+\nu\dive\big(|\nabla \mu_\tau^{k}|^{\varrho-2}\nabla\mu_\tau^{k}\big)-\nu\mu_\tau^{k}
&&\text{to the right-hand side of the discrete Cahn-Hilliard equation \eqref{eqn:discr1}},\\
&&&&&&&+\nu|c_\tau^{k}|^{\varrho-2}c_\tau^{k}
&&\text{to the right-hand side of the discrete Cahn-Hilliard equation \eqref{eqn:discr2}},\\
&&&&&&&+\nu|z_\tau^k|^{\varrho-2}z_\tau^{k}
&&\text{to the left-hand side of the discrete damage equation \eqref{eqn:discr3}},\\
&&&&&&&-\nu\dive\big(|\varepsilon(\ub_\tau^k-\mathbf{d}_\tau^k)|^{\varrho-2}\varepsilon(\ub_\tau^k-\mathbf{d}_\tau^k) \big)
&&\text{to the left-hand side of the discrete momentum equation \eqref{eqn:discr5}}
\end{align*}
with $\nu>0$ and $\varrho>4$.
In this way, the quadratic growth of the terms on the right-hand side of the temperature equation will be compensated
and coercivity properties
of the elliptic operators involved in the time-discrete scheme \color{black}
ensured.
\item Truncating the heat conduction function $\mathsf{K}$ and replacing it with a bounded
$\mathsf{K}_M$ with $M\in\bbN$. In this way the elliptic operator in the discrete heat equation will be defined on $H^1(\Omega)$,
with values in $H^1(\Omega)'$, but we will of course lose
the enhanced estimates on the temperature variable provided by the coercivity properties of $\mathsf{K}$. That is why
we will also have to truncate all occurrences of $\vartheta$ in the quadratic terms accordingly. \end{enumerate} Let us mention in advance that this double approximation, leading to system \eqref{discr-syst-appr} later on, shall be devised in such a way as to allow us to prove the existence of solutions to \eqref{discr-syst-appr} by resorting to a result from the theory of elliptic systems featuring pseudomonotone operators, cf.\ \cite{Rou05}.
\textbf{A caveat on notation:} the solutions to the approximate discrete system \eqref{discr-syst-appr} at the $k$-th time step, with \underline{given} $S_{\tau}^{k-1}:=(c_{\tau}^{k-1}, z_{\tau}^{k-1}, \mathbf{u}_{\tau}^{k-1}, \vartheta_{\tau}^{k-1})$ and $\mathbf{u}_\tau^{k-2}$ , will depend on the parameters $\tau$, $\nu$ and $M$
(and on $\omega$ which we omit at the moment). Therefore, we should denote them by $ S_{\tau,\nu,M}^k:= (c_{\tau,\nu,M}^k,\mu_{\tau,\nu,M}^k, z_{\tau,\nu,M}^k, \vartheta_{\tau,\nu,M}^k, \mathbf{u}_{\tau,\nu,M}^k)$. However, to increase readability, we will simply write $c^k$, $\mu^k$, $z^k$, $\vartheta^k$ and $\mathbf{u}^k$ and use the notation $c^k_M, \ldots, \mathbf{u}_M^k$ ($c^k_\nu, \ldots, \mathbf{u}_\nu^k$, respectively), only upon addressing the limit passage as $M\to\infty$ (as $\nu \downarrow 0$, respectively).
\paragraph{\bf Outline of the proof of Proposition \ref{prop:exist-discr}:} \color{black} For given $\tau>0$, the construction of
the solution quintuples \color{black} $S_{\tau,\nu,M}^k$ and the limit passages as $M\to\infty$ and as $\nu \downarrow 0$ \color{black} are performed recursively over $k=1,\ldots,K_\tau$ in the following order: \begin{align*}
&\qquad\vdots
&&\qquad\vdots
&&\quad\vdots
&&\qquad\vdots
&&\quad\vdots
&&\quad\vdots
&&\quad\vdots\\
&(S_{\tau}^{k-2},\mathbf{u}_{\tau}^{k-3})
&&\xmapsto[\text{Step 1}]{\text{pseudo-mon. op. theory}}
&&S_{\tau,\nu,M}^{k-1}
&&\xrightarrow[\text{Step 2}]{\;M\to\infty\;}
&&S_{\tau,\nu}^{k-1}
&&\xrightarrow[\text{Step 3}]{\;\nu\downarrow0\;}
&&S_{\tau}^{k-1}\\
&(S_{\tau}^{k-1},\mathbf{u}_{\tau}^{k-2})
&&\xmapsto[\text{Step 1}]{\text{pseudo-mon. op. theory}}
&&S_{\tau,\nu,M}^{k}
&&\xrightarrow[\text{Step 2}]{\;M\to\infty\;}
&&S_{\tau,\nu}^{k}
&&\xrightarrow[\text{Step 3}]{\;\nu\downarrow0\;}
&&S_{\tau}^{k}\\
&(S_{\tau}^{k},\mathbf{u}_{\tau}^{k-1})
&&\xmapsto[\text{Step 1}]{\text{pseudo-mon. op. theory}}
&&S_{\tau,\nu,M}^{k+1}
&&\xrightarrow[\text{Step 2}]{\;M\to\infty\;}
&&S_{\tau,\nu}^{k+1}
&&\xrightarrow[\text{Step 3}]{\;\nu\downarrow0\;}
&&S_{\tau}^{k+1}\\
&\qquad\vdots
&&\qquad\vdots
&&\quad\vdots
&&\qquad\vdots
&&\quad\vdots
&&\quad\vdots
&&\quad\vdots \end{align*} The construction of $S_{\tau,\nu,M}^{k}$ will be tackled in Subsection \ref{sss:4.2.1}, the limit passage as $M\to\infty$ to $S_{\tau,\nu}^k$ in Subsection \ref{sss:4.2.2.}, and the one as $\nu \downarrow 0$
to $S_\tau^k$ in Subsection \ref{sss:4.2.3.}.
Throughout all of them, we will work under the assumptions of Proposition \ref{prop:exist-discr}, and omit to explicitly invoke them in the
following statements. \color{black}
\subsubsection{\textbf{Step 1: Existence and uniform estimates of the
time-discrete system with
${\boldsymbol \nu}$- and ${\bf M}$-regularization.}} \label{sss:4.2.1} \noindent From now on let $\nu>0$, $\varrho>4$ and $M\in\bbN$. Let \begin{equation} \label{def-k-m} \mathsf{K}_M(r):=
\left\{ \begin{array}{ll} \mathsf{K}(0) & \text{if } r <0, \\
\mathsf{K}(r) & \text{if } 0\leq r \leq M, \\
\mathsf{K}(M) & \text{if } r >M
\end{array}
\right. \end{equation} and accordingly we introduce the quasilinear operator $\mathcal{A}_M^k$ in analogy to \eqref{A-operator}: \begin{equation} \label{M-operator}
\mathcal{A}_M^k: H^1(\Omega) \to H^1(\Omega)'
\ \text{ defined by } \
\pairing{}{H^1(\Omega)}{\mathcal{A}_M^k(\theta)}{v}:= \int_\Omega \mathsf{K}_M(\theta) \nabla \theta \cdot \nabla v \, \mathrm{d} x -
\int_{\partial \Omega} \htau{k}v \, \mathrm{d} S. \end{equation} Observe that, thanks to \eqref{hyp-K}, there still holds $\mathsf{K}_M(r) \geq c_{0} $ for all $r \in \bbR$, and therefore by the trace theorem \begin{equation} \label{ellipticity-retained}
\pairing{}{H^1(\Omega)}{ \mathcal{A}_M^{k} (\theta)}{\theta} \geq \tilde{c}_0 \|\nabla \theta\|_{L^2(\Omega)}^2-c_1\|\theta\|_{L^2(\Omega)}^2 -c_1\|h_\tau^k\|_{L^2(\partial\Omega)}^2 \qquad \text{for all } \theta \in H^1(\Omega). \end{equation} We also introduce the truncation operator $\mathcal{T}_M : \bbR \to \bbR$ \begin{equation} \label{def-truncation-m} \mathcal{T}_M(r):=
\left\{ \begin{array}{ll} 0 & \text{if } r <0,\\
r & \text{if } 0\leq r \leq M, \\
M & \text{if } r >M.
\end{array}
\right. \end{equation}
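For the reader's convenience, let us indicate how \eqref{ellipticity-retained} is obtained; only the lower bound $\mathsf{K}_M\geq c_0$, the continuity of the trace operator (whose norm $H^1(\Omega)\to L^2(\partial\Omega)$ we denote by $C_{\mathrm{tr}}$), and Young's inequality enter:
\[
\pairing{}{H^1(\Omega)}{\mathcal{A}_M^{k}(\theta)}{\theta}
\geq c_0\|\nabla\theta\|_{L^2(\Omega)}^2-\|h_\tau^k\|_{L^2(\partial\Omega)}\|\theta\|_{L^2(\partial\Omega)}
\geq c_0\|\nabla\theta\|_{L^2(\Omega)}^2-\delta C_{\mathrm{tr}}^2\|\theta\|_{H^1(\Omega)}^2-\tfrac1{4\delta}\|h_\tau^k\|_{L^2(\partial\Omega)}^2,
\]
and the choice $\delta=\frac{c_0}{2C_{\mathrm{tr}}^2}$ yields \eqref{ellipticity-retained} with $\tilde c_0=\frac{c_0}{2}$ and a suitable $c_1>0$.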
The $(\nu,M)$-regularized time-discrete system at time step $k$ reads as follows: \begin{subequations} \label{discr-syst-appr} \begin{align}
&D_k(c)=\dive\Big(m(c^{k-1},z^{k-1})\nabla\mu^k\Big)+\nu\dive\Big(|\nabla\mu^k|^{\varrho-2}\nabla\mu^k\Big)-\nu\mu^k,
\label{discr-syst-appr-c}\\
&\mu^k=-\Delta_p(c^k)+(\conv{\phi}_\omega)'(c^k)+(\conc{\phi})'(c^{k-1})+\breve{W}_{1,c}^\omega(c^k,\varepsilon(\mathbf{u}^{k-1}), z^{k-1})+\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}), z^{k-1})\notag\\
&\qquad -\mathcal T_M(\vartheta^k)+D_k(c)+\nu|c^k|^{\varrho-2}c^k,
\label{discr-syst-appr-mu}\\
&D_k(z)-\Delta_p(z^k)+\xi^k+(\conv{\sigma})'(z^k) + (\conc{\sigma})'(z^{k-1})+\nu|z^k|^{\varrho-2}z^k\notag\\
&\quad=-\breve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^k)-\invbreve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^{k-1})+\mathcal T_M(\vartheta^k)
\quad\text{with }\xi^k\in \partial I_{[0,z^{k-1}]}(z^k),
\label{discr-syst-appr-z}\\
&D_k(\vartheta) + \mathcal{A}_M^k(\vartheta^k)+D_k(c)\mathcal T_M(\vartheta^k)+D_k(z)\mathcal T_M(\vartheta^k)+\rho\mathcal T_M(\vartheta^k)\dive(D_k(\mathbf{u}))\notag\\
&\quad=g^k+|D_k(c)|^2+|D_k(z)|^2+ a(c^{k-1},z^{k-1})\varepsilon(D_k(\mathbf{u})):\mathbb{V}\varepsilon(D_k(\mathbf{u}))
+m(c^{k-1},z^{k-1})|\nabla\mu^k|^2,
\label{discr-syst-appr-teta}\\
&D_k(D_k(\mathbf{u}))-\dive\Big(a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(D_k(\mathbf{u}))
+ W_{,\varepsilon}^\omega(c^{k},\varepsilon(\mathbf{u}^k),z^k) -\rho\mathcal T_M(\vartheta^k)\mathds 1 \Big)\notag\\
&\qquad-\nu\dive\Big(|\varepsilon(\mathbf{u}^k-\mathbf{d}^k)|^{\varrho-2}\varepsilon(\mathbf{u}^k-\mathbf{d}^k)\Big)=\bold f^k,
\label{discr-syst-appr-u} \end{align} \end{subequations}
supplemented with the previously given boundary conditions. \color{black} Please note that the functions $c^{k}$, $\mu^k,z^k$, $\vartheta^k$ and $\mathbf{u}^k$ depend on $M$, $\nu$, $\tau$ and $\omega$ whereas the functions from the previous time steps $c^{k-1}$, $\mu^{k-1},z^{k-1}$, $\vartheta^{k-1}$, $\mathbf{u}^{k-1}$ and $\mathbf{u}^{k-2}$ only depend on $\tau$ and $\omega$ and do \textbf{not} depend on $M$ and $\nu$.
We are now in the position to prove existence of weak solutions for system \eqref{discr-syst-appr} by resorting to an existence result for pseudomonotone operators from \cite{Rou05}, which is in turn based on a fixed point argument. \color{black}
\begin{lemma}[Existence of the time-discrete system for $\nu>0$ and $M\in\bbN$] \label{l:exist-approx-discr}
Let $\omega>0$, $\tau>0$, $k\in\{1,\ldots,K_\tau\}$, $\nu>0$ and $M\in\bbN$
be given.
We assume that
\begin{align*}
(c^{k-1},\mu^{k-1},z^{k-1},\vartheta^{k-1},\mathbf{u}^{k-1},\mathbf{u}^{k-2})\in W^{1,p}(\Omega)\times H^{2}(\Omega)\times W^{1,p}(\Omega)\times H^1(\Omega)\times H^2(\Omega;\bbR^d)\times H^2(\Omega;\bbR^d).
\end{align*}
Then, there exists a weak solution
\begin{align*}
(c^k, \mu^k,z^k,\vartheta^k,\mathbf{u}^k)\in W^{1,p}(\Omega)\times W^{1,\varrho}(\Omega)\times W^{1,p}(\Omega)\times H^1(\Omega)\times W^{1,\varrho}(\Omega;\bbR^d)
\end{align*}
to system \eqref{discr-syst-appr} at time step $k$, in the following sense:
\begin{itemize}
\item[--]
\eqref{discr-syst-appr-c} is fulfilled in $W^{1,\varrho}(\Omega)'$,
\item[--]
\eqref{discr-syst-appr-mu} is fulfilled in $W^{1,p}(\Omega)'$,
\item[--]
\eqref{discr-syst-appr-z} is fulfilled in $W^{1,p}(\Omega)'$
with $\xi^k\in \partial I_{Z^{k-1}}(z^k)$,
\item[--]
\eqref{discr-syst-appr-teta} is fulfilled in $H^{1}(\Omega)'$,
\item[--]
\eqref{discr-syst-appr-u} is fulfilled in $W_0^{1,\varrho}(\Omega;\bbR^d)'$,
\item[--]
the initial conditions \eqref{discre-initial-cond}
and the boundary condition $\mathbf{u}^k=\mathbf{d}^k$ a.e. on $\partial\Omega$
are satisfied,
\item[--]
the constraints \eqref{discre-constraints} are satisfied.
\end{itemize}
\end{lemma} \begin{proof}
Our approach for finding a solution to \eqref{discr-syst-appr} for a given $k$
is to rewrite the system as
\begin{align}
\label{label:inclusion2}
0\in \mathbf A(c^k,\mu^k,z^k,\vartheta^k,\mathbf{u}^k-\mathbf{d}^k)+
\partial\Psi(c^k,\mu^k,z^k,\vartheta^k,\mathbf{u}^k -\mathbf{d}^k \color{black}),
\end{align}
where $\mathbf A$ is a (to be specified) pseudomonotone and coercive operator
and $\partial\Psi$ is \color{black} the subdifferential of a (to be specified) proper, convex and
l.s.c. potential $\Psi$.
Note that both the operator $\mathbf A$ as well as $\Psi$
will depend
on the discrete functions obtained in previous time step $k-1$, but we choose not to highlight this for notational simplicity.
To be more precise, we introduce the space
$$
\mathbf{X} \color{black}:=W^{1,p}(\Omega)\times W^{1,\varrho}(\Omega)\times W^{1,p}(\Omega)\times H^1(\Omega)\times W_0^{1,\varrho}(\Omega;\bbR^d)
$$
and the announced operator
\begin{align*}
&\mathbf A=
\begin{bmatrix}
A_1\\A_2\\A_3\\A_4\\A_5
\end{bmatrix}
: \mathbf{X} \color{black}\to \mathbf{X}' \color{black}
\end{align*}
given component-wise by
\begin{align*}
A_1(c,\mu,z,\vartheta,\widetilde\mathbf{u})={}&-\mu-\Delta_p(c) +\nu |c|^{\varrho-2}c
+(\conv{\phi}_\omega)'(c)
+(\conc{\phi})'(c^{k-1})
+\breve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}^{k-1}), z^{k-1})\\
&+\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}), z^{k-1})
-\mathcal T_M(\vartheta)+(c-c^{k-1})\tau^{-1},\\
A_2(c,\mu,z,\vartheta,\widetilde\mathbf{u})={}&-\dive(m(c^{k-1},z^{k-1})\nabla\mu)-\nu\dive(|\nabla\mu|^{\varrho-2}\nabla\mu)+\nu\mu+(c-c^{k-1})\tau^{-1},\\
A_3(c,\mu,z,\vartheta,\widetilde\mathbf{u})={}&-\Delta_p(z)+\nu|z|^{\varrho-2}z+(z-z^{k-1})\tau^{-1}+(\conv{\sigma})'(\mathcal T(z))+(\conc{\sigma})'(z^{k-1})\\
&\quad
+\breve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),\mathcal T(z))+\invbreve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^{k-1})-\mathcal T_M(\vartheta),
\end{align*}
\begin{align*}
A_4(c,\mu,z,\vartheta,\widetilde\mathbf{u})={}& \mathcal{A}_M^k(\vartheta)+(\vartheta-\vartheta^{k-1})\tau^{-1}+(c-c^{k-1})\tau^{-1}\mathcal T_M(\vartheta)
+(z-z^{k-1})\tau^{-1}\mathcal T_M(\vartheta)
\\
& \quad
+\rho\mathcal T_M(\vartheta)\dive(\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1})\tau^{-1}-g_\tau^k
-|(c-c^{k-1})\tau^{-1}|^2-|(z-z^{k-1})\tau^{-1}|^2
\\
& \quad
-a(c^{k-1},z^{k-1})\varepsilon((\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1})\tau^{-1}):\mathbb{V}\varepsilon((\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1})\tau^{-1})
-m(c^{k-1},z^{k-1})|\nabla\mu|^2,
\\
A_5(c,\mu,z,\vartheta,\widetilde\mathbf{u})={}&(\widetilde\mathbf{u}+\mathbf{d}^k-2\mathbf{u}^{k-1}+\mathbf{u}^{k-2})\tau^{-2}
-\nu\dive\big(|\varepsilon(\widetilde\mathbf{u})|^{\varrho-2}\varepsilon(\widetilde\mathbf{u})\big)\\
&\quad-\dive\Big(a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon((\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1})\tau^{-1} ) +
W_{,\varepsilon}^\omega( c^{k},\varepsilon(\widetilde\mathbf{u}+\mathbf{d}^k),\mathcal T(z)) -\rho\mathcal T_M(\vartheta)\mathds 1\Big)\\
&\quad-\bold f_\tau^k,
\end{align*}
where we make use of the truncation operator $\mathcal T$
\begin{align*}
&\mathcal T(z):=
\begin{cases}
0&\text{if }z < 0,\\
z&\text{if }0 < z < 1,\\
1&\text{if }z > 1.
\end{cases}
\end{align*}
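Equivalently, one may write
\[
\mathcal T(z)=\min\{\max\{z,0\},1\},
\]
so that $\mathcal T$ is monotone, $1$-Lipschitz and takes values in $[0,1]$; together with $\mathcal T_M(\vartheta)\in[0,M]$, this boundedness is precisely what will be used in the coercivity estimate below.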
The potential $\Psi: \mathbf{X} \color{black} \to (-\infty,+\infty]$
featuring in \eqref{label:inclusion2} is \color{black}
given by
\begin{align*}
&\Psi(c,\mu,z,\vartheta,\widetilde\mathbf{u}):=I_{Z^{k-1}}(z)=
\begin{cases}
0&\text{if }0\leq z\leq z^{k-1}\text{ a.e. in }\Omega,\\
\infty&\text{else,}
\end{cases}
\end{align*}
where the set $ Z^{k-1}$ is defined in \eqref{eqn:set_z}. \color{black}
We remark that for solutions of \eqref{label:inclusion2}
the truncation operator $\mathcal T$ will disappear in the resulting system
since
$
\mathrm{dom}(\partial\Psi)\subseteq
\{(c,\mu,z,\vartheta,\widetilde\mathbf{u})\in \mathbf{X} \,|\, 0\leq z\leq 1\text{ a.e. in }\Omega\}.
$
It is merely used as an auxiliary construction to ensure coercivity of the
operator $\mathbf A$.
Furthermore,
the boundary values for the displacement variable are shifted to $0$
in order to obtain a vector space structure for the domain $\mathbf{X}$ of $\mathbf A$. \color{black}
As a result, we have to add $\mathbf{d}^k$ to the displacement $\widetilde\mathbf{u}$ of the solution afterwards.
In the following we are going to verify the coercivity of $\mathbf A$.
To this end, we will estimate $\langle \mathbf A(\boldsymbol{x}),\boldsymbol{x}\rangle_{ \mathbf{X}\color{black}}$ for every $\boldsymbol{x}=(c,\mu,z,\vartheta,\widetilde\mathbf{u})\in \mathbf{X}\color{black}$
from below:
\begin{equation}
\label{to-refer-to-later}
\begin{aligned}
\langle \mathbf A(\boldsymbol{x}),\boldsymbol{x}\rangle_{ \mathbf{X}\color{black}}=\big\langle \mathbf A(c,\mu,z,\vartheta,\widetilde\mathbf{u}),(c,\mu,z,\vartheta,\widetilde\mathbf{u})\big\rangle_{ \mathbf{X} \color{black}}
={}&
\big\langle A_1(c,\mu,z,\vartheta,\widetilde\mathbf{u}),c\big\rangle_{W^{1,p}(\Omega)}
+\big\langle A_2(c,\mu,z,\vartheta,\widetilde\mathbf{u}),\mu\big\rangle_{W^{1,\varrho}(\Omega)}\\
&+\big\langle A_3(c,\mu,z,\vartheta,\widetilde\mathbf{u}),z\big\rangle_{W^{1,p}(\Omega)}
+\big\langle A_4(c,\mu,z,\vartheta,\widetilde\mathbf{u}),\vartheta\big\rangle_{H^1(\Omega)}\\
&+\big\langle A_5(c,\mu,z,\vartheta,\widetilde\mathbf{u}),\widetilde\mathbf{u}\big\rangle_{W^{1,\varrho}(\Omega;\bbR^d)}\\
&=:I_1+I_2+\ldots+I_5.
\end{aligned}
\end{equation}
We now estimate the partial derivatives $\breve{W}_{1,c}^\omega$ and $\invbreve{W}_{1,c}^\omega$ of ${\breve{W}_{1}^\omega}$ and ${\invbreve{W}_{1}^\omega}$ w.r.t.\hspace{1pt} $c$, i.e.\hspace{1pt} \color{black}
\begin{align*}
&\breve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}),z)= W_{,c}^\omega(c,\varepsilon(\mathbf{u}),z)+\big(\sup_{\widetilde{c}\in \bbR}| W_{,cc}^\omega(\widetilde c,\varepsilon(\mathbf{u}),z)|\big)\,c,\\
&\invbreve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}),z)=-\big(\sup_{\widetilde{c}\in \bbR}| W_{,cc}^\omega(\widetilde c,\varepsilon(\mathbf{u}),z)|\big)\,c.
\end{align*}
Taking into account \eqref{eqn:convConcSplittingWc}
and Hypothesis (V) (cf.\hspace{1pt} also \eqref{later-ref}), \color{black} we obtain
\begin{align}
&\left| \breve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}),z)\right|
\leq C(|c| +1 )(1+|\varepsilon(\mathbf{u})|^2),
\label{est-quoted-5.1}\\
&\left| \invbreve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}),z)\right|
\leq C|c|(1+|\varepsilon(\mathbf{u})|^2)
\label{est-quoted-5.2}
\end{align}
We can also verify that \color{black}
\begin{align}
\label{oh-yes-quote}
& \left| W_{,\varepsilon}^\omega(c,\varepsilon(\mathbf{u}),z)\right|
\leq C(1+|\varepsilon(\mathbf{u})|),
\end{align}
and
\begin{align}
\label{est-quoted-5.3}
&\left| \breve{W}_{3,z}^\omega(c,\varepsilon(\mathbf{u}),z)\right| \leq C(1+|\varepsilon(\mathbf{u})|^2),\\
\label{est-quoted-5.4}
&\left| \invbreve{W}_{3,z}^\omega(c,\varepsilon(\mathbf{u}),z) \right| \leq C(1+|\varepsilon(\mathbf{u})|^2).
\end{align}
Estimates
\eqref{est-quoted-5.1}--\eqref{est-quoted-5.4} are valid \color{black}
for all $c\in\bbR$, $z\in[0,1]$ and $\mathbf{u}\in\bbR^d$,
and fixed $C>0$.
Taking also the boundedness properties
\[
\mathcal T(z),\;z^{k-1}\in[0,1]\qquad\text{a.e. in }\Omega,\qquad
\mathcal T_M(\vartheta)\in [0,M]\qquad\text{a.e. in }\Omega
\]
into account, we obtain
\begin{align*}
\breve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}^{k-1}), z^{k-1})
\geq{}& -C(|c| +1 )(1+|\varepsilon(\mathbf{u}^{k-1})|^2),\\
\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}), z^{k-1})
\geq{}& -C|c^{k-1}|(1+|\varepsilon(\mathbf{u}^{k-1})|^2), \\
W_{,\varepsilon}^\omega(c,\varepsilon(\widetilde\mathbf{u}+\mathbf{d}^k),\mathcal T(z))
\geq{}&-C(1+|\varepsilon(\widetilde\mathbf{u})|^2+|\varepsilon(\mathbf{d}^k)|^2),\\
\breve{W}_{3,z}^\omega( c,\varepsilon(\mathbf{u}^{k-1}),\mathcal T(z))
\geq{}& -C(1+|\varepsilon(\mathbf{u}^{k-1})|^2),\\
\invbreve{W}_{3,z}^\omega( c,\varepsilon(\mathbf{u}^{k-1}),z^{k-1})
\geq{}& -C(1+|\varepsilon(\mathbf{u}^{k-1})|^2).
\end{align*}
Together with Young's inequality and estimates \eqref{est-quoted-5.1}--\eqref{est-quoted-5.4}, \color{black} a calculation reveals for the terms $I_1,\ldots,I_5$ from \eqref{to-refer-to-later} the following bounds
(hereafter, we will write $\|\cdot\|_{L^p}$ in place of $\|\cdot\|_{L^p(\Omega)}$ for shorter notation and we will denote by $\delta$ a positive constant, to be chosen later, and by $C_\delta>0$ a constant depending on $\delta$):
\begin{align*}
I_1={}&\|\nabla c\|_{L^p}^p+\nu\|c\|_{L^\varrho}^\varrho
+\tau^{-1}\|c\|_{L^2}^2-\tau^{-1}\int_\Omega c^{k-1} c\,\mathrm dx-\int_\Omega\mu c\,\mathrm dx\\
&+\int_\Omega\Big(\beta_\omega(c)+\lambda_\gamma c
+\gamma'(c^{k-1})-\lambda_\gamma c^{k-1}
+\breve{W}_{1,c}^\omega(c,\varepsilon(\mathbf{u}^{k-1}), z^{k-1})\Big)c\,\mathrm dx\\
&+\int_\Omega\Big(\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}), z^{k-1})-\mathcal T_M(\vartheta)\Big)c\,\mathrm dx\\
\geq{}&\|\nabla c\|_{L^p}^p+\nu\|c\|_{L^\varrho}^\varrho
-\delta\|\mu\|_{L^2}^2
-C_\delta\|c\|_{L^2}^2
-C_\delta\|\varepsilon(\mathbf{u}^{k-1})\|_{L^4}^4
-C_\delta,\\
I_2={}&\int_\Omega m(c^{k-1},z^{k-1})|\nabla\mu|^2\,\mathrm dx
+\nu\|\nabla\mu\|_{L^\varrho}^\varrho+\nu\|\mu\|_{L^2}^2+\tau^{-1}\int_\Omega (c-c^{k-1})\mu\,\mathrm dx\\
\geq{}&
\nu\|\nabla\mu\|_{L^\varrho}^\varrho+\nu\|\mu\|_{L^2}^2-\delta\|\mu\|_{L^2}^2
-C_\delta\|c\|_{L^2}^2-C_\delta,\\
I_3={}&\|\nabla z\|_{L^p}^p+\nu\|z\|_{L^\varrho}^\varrho+\tau^{-1}\|z\|_{L^2}^2-\tau^{-1}\int_\Omega z^{k-1} z\,\mathrm dx\\
&+\int_\Omega\Big((\conv{\sigma})'(\mathcal T(z))+(\conc{\sigma})'(z^{k-1})
+\breve{W}_{3,z}^\omega( c,\varepsilon(\mathbf{u}^{k-1}),\mathcal T(z))+\invbreve{W}_{3,z}^\omega( c,\varepsilon(\mathbf{u}^{k-1}),z^{k-1})-\mathcal T_M(\vartheta)
\Big)z\,\mathrm dx\\
\geq{}&\|\nabla z\|_{L^p}^p+\nu\|z\|_{L^\varrho}^\varrho-\delta\|z\|_{L^2}^2
-C_\delta\|\varepsilon(\mathbf{u}^{k-1})\|_{L^4}^4-C_\delta,\\
I_4={}&\int_\Omega \mathsf{K}_M(\vartheta)|\nabla \vartheta|^2\,\mathrm dx
-\int_{\partial \Omega} \htau{k}\vartheta \,\mathrm dS
+\tau^{-1}\|\vartheta\|_{L^2}^2-\tau^{-1}\int_\Omega\vartheta^{k-1}\vartheta\,\mathrm dx\\
&+\tau^{-1}\int_\Omega\Big((c-c^{k-1})+(z-z^{k-1})
+\rho\dive(\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1})\Big)\mathcal T_M(\vartheta)\vartheta\,\mathrm dx
-\int_\Omega g^k\vartheta\,\mathrm dx\\
&-\int_\Omega\Big(|(c-c^{k-1})\tau^{-1}|^2+|(z-z^{k-1})\tau^{-1}|^2+
a(c^{k-1},z^{k-1})\varepsilon\Big(\frac{\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1}}{\tau}\Big):\mathbb{V}\varepsilon\Big(\frac{\widetilde\mathbf{u}+\mathbf{d}^k-\mathbf{u}^{k-1}}{\tau}\Big)\Big)\vartheta\,\mathrm dx\\
&-\int_\Omega m(c^{k-1},z^{k-1})|\nabla\mu|^2\vartheta\,\mathrm dx\\
\geq{}&c_0\|\nabla\vartheta\|_{L^2}^2+\tau^{-1}\|\vartheta\|_{L^2}^2
-\delta\|\vartheta\|_{H^1}^2-C_\delta\|h^k\|_{H^{1/2}(\partial\Omega)}^2
-C_\delta\|\vartheta^{k-1}\|_{L^2}^2
-C_\delta\|c\|_{L^4}^4
-C_\delta\|z\|_{L^4}^4
-C_\delta\|\varepsilon(\widetilde\mathbf{u})\|_{L^4}^4\\
&-C_\delta\|\varepsilon(\mathbf{d}^k)\|_{L^4}^4
-C_\delta\| c^{k-1} \|_{L^4}^4 - C_\delta \| z^{k-1} \|_{L^4}^4
-C_\delta\|\varepsilon(\mathbf{u}^{k-1})\|_{L^4}^4
-C_\delta\|\nabla\mu\|_{L^4}^4
-C_\delta\|g^k\|_{L^2}^2
-C_\delta,
\end{align*}
\begin{align*}
I_5={}&\nu\|\varepsilon(\widetilde\mathbf{u})\|_{L^\varrho}^\varrho
+\tau^{-2}\|\widetilde\mathbf{u}\|_{L^2}^2
+\tau^{-2}\int_\Omega(\mathbf{d}^k-2\mathbf{u}^{k-1}+\mathbf{u}^{k-2})\cdot\widetilde\mathbf{u}\,\mathrm dx
+\tau^{-1}\int_\Omega a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(\widetilde\mathbf{u}):\varepsilon(\widetilde\mathbf{u})\,\mathrm dx\\
&+\tau^{-1}\int_\Omega a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(\mathbf{d}^k-\mathbf{u}^{k-1}):\varepsilon(\widetilde\mathbf{u})\,\mathrm dx
+\int_\Omega W_{,\varepsilon}^\omega(c,\varepsilon( \widetilde\mathbf{u}+\mathbf{d}^k),\mathcal T(z)):\varepsilon(\widetilde\mathbf{u})\,\mathrm dx\\
&-\int_\Omega \rho\mathcal T_M(\vartheta)\dive(\widetilde\mathbf{u})\,\mathrm dx
-\int_\Omega \mathbf{f}^k\cdot\widetilde\mathbf{u}\,\mathrm dx\\
\geq{}&\nu\|\varepsilon(\widetilde\mathbf{u})\|_{L^\varrho}^\varrho+\tau^{-2}\|\widetilde\mathbf{u}\|_{L^2}^2
-\delta\|\widetilde\mathbf{u}\|_{H^1}^2-C_\delta\|\mathbf{u}^{k-1}\|_{H^1}^2-C_\delta\|\mathbf{d}^k\|_{H^1}^2-C_\delta\|\mathbf{u}^{k-2}\|_{L^2}^2
-C_\delta\|\mathbf{f}^k\|_{L^2}^2-C_\delta.
\end{align*}
In conclusion, choosing $\delta>0$ sufficiently small
in such a way as to absorb the negative terms multiplied by $\delta$ into suitable positive contributions, \color{black}
we obtain constants $c',C>0$ such that
\begin{align*}
\langle \mathbf A(\boldsymbol{x}),\boldsymbol{x}\rangle_{ \mathbf{X}}
\geq{}&
c'\Big(\|\nabla c\|_{L^p}^p
+\|c\|_{L^\varrho}^\varrho
+\|\nabla\mu\|_{L^\varrho}^\varrho
+\|\mu\|_{L^2}^2
+\|\nabla z\|_{L^p}^p
+\|z\|_{L^\varrho}^\varrho
+\|\nabla\vartheta\|_{L^2}^2+\|\vartheta\|_{L^2}^2\Big)\\
&+c'\Big(\|\varepsilon(\widetilde\mathbf{u})\|_{L^\varrho}^\varrho
+\|\widetilde\mathbf{u}\|_{L^2}^2\Big)-C
\end{align*}
which leads to coercivity of $\mathbf A$ by using Korn's inequality.
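Here Korn's inequality is used in the following form, valid because $\widetilde\mathbf{u}$ has zero boundary values (the constant $C_K$ depends only on $\Omega$, $d$ and $\varrho$):
\[
\|\widetilde\mathbf{u}\|_{W^{1,\varrho}(\Omega;\bbR^d)}\leq C_K\big(\|\varepsilon(\widetilde\mathbf{u})\|_{L^\varrho(\Omega;\bbR_{sym}^{d\times d})}+\|\widetilde\mathbf{u}\|_{L^2(\Omega;\bbR^d)}\big)
\qquad\text{for all }\widetilde\mathbf{u}\in W_0^{1,\varrho}(\Omega;\bbR^d),
\]
so that the lower bound above indeed controls (powers of) all the components of the $\mathbf{X}$-norm of $\boldsymbol{x}$, i.e. $\langle\mathbf A(\boldsymbol{x}),\boldsymbol{x}\rangle_{\mathbf{X}}/\|\boldsymbol{x}\|_{\mathbf{X}}\to\infty$ as $\|\boldsymbol{x}\|_{\mathbf{X}}\to\infty$.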
The pseudomonotonicity follows from standard arguments
in the theory of quasilinear elliptic equations, cf.\hspace{1pt} \cite[Chapter 2.4]{Rou05}.
By virtue of the existence theorem in \cite[Theorem 5.15]{Rou05} together with
\cite[Lemma 5.17]{Rou05},
we find an $\boldsymbol{x}\in \mathbf{X} \color{black}$ solving \eqref{label:inclusion2}.
Thus a solution of \eqref{label:inclusion2} proves the claim. \end{proof} We now derive the incremental energy inequality satisfied by the solutions to system \eqref{discr-syst-appr}. This will be the starting point for the derivation of all a priori estimates allowing us to pass to the limit, first as $M\to\infty$ and then $\nu \to 0$. \begin{lemma}[ Incremental energy inequality for the approximate discrete system] \label{l:energy-est}
Let $(c^k,\mu^k,z^k,\vartheta^k,\mathbf{u}^k)$ be the weak solution to
system \eqref{discr-syst-appr} at time step $k$ according to
Lemma \ref{l:exist-approx-discr}.
Then, for every $M\in\bbN$ and $\nu>0$ the following energy inequality holds:
\begin{equation}
\label{discr-total-ineq}
\begin{aligned}
&\mathscr{E}_\omega(c^k,z^k,\vartheta^k,\mathbf{u}^k,\mathbf{v}^k)
+\frac\nu\varrho\|c^k\|_{L^\varrho(\Omega)}^\varrho
+\frac\nu\varrho\|z^k\|_{L^\varrho(\Omega)}^\varrho
+\frac\nu\varrho\|\varepsilon(\mathbf{u}^k)\|_{L^\varrho(\Omega)}^\varrho
+\nu\tau\Big(\|\nabla\mu^k\|_{L^\varrho(\Omega;\bbR^d)}^\varrho
+\|\mu^k\|_{L^2}^2\Big)\\
&\leq\mathscr{E}_\omega(c^{k-1},z^{k-1},\vartheta^{k-1},\mathbf{u}^{k-1},\mathbf{v}^{k-1})
+\frac\nu\varrho\|c^{k-1}\|_{L^\varrho(\Omega)}^\varrho
+\frac\nu\varrho\|z^{k-1}\|_{L^\varrho(\Omega)}^\varrho
+\frac\nu\varrho\|\varepsilon(\mathbf{u}^{k-1})\|_{L^\varrho(\Omega)}^\varrho\\
&\qquad+\tau\Big(\int_\Omega g^k\,\mathrm dx+ \int_{\partial\Omega} h^k \,\mathrm dS +\int_\Omega \bold f^k \cdot\mathbf{v}^k\,\mathrm dx\Big)\\
&\qquad+\tau\int_\Omega D_k(\mathbf{v})\cdot D_k(\mathbf{d})\,\mathrm dx
+\tau\int_\Omega a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(\mathbf{v}^k):\varepsilon(D_k(\mathbf{d}))\,\mathrm dx\\
&\qquad+\tau\int_\Omega W_{,\varepsilon}^\omega(c^{k},\varepsilon(\mathbf{u}^k),z^k):\varepsilon(D_k(\mathbf{d}))\,\mathrm dx
-\tau\int_\Omega\rho\mathcal T_M(\vartheta^k)\dive(D_k(\mathbf{d}))\,\mathrm dx
-\tau\int_\Omega \bold f^k \cdot D_k(\mathbf{d})\,\mathrm dx
\end{aligned}
\end{equation}
where we set \color{black} $\mathbf{v}^k:=D_k(\mathbf{u})$ and
denote by $\mathscr{E}_\omega$ the approximation of the total energy
$\mathscr{E}$ from \eqref{total-energy} obtained by replacing $\phi$ with
$\phi_\omega = \widehat{\beta}_\omega +\gamma$
and $W$ with $W^\omega$. \end{lemma} \begin{proof}
The convex-concave splitting gives rise to the following crucial estimates
(cf.\ also \eqref{eqn:convConcWall}):
\begin{subequations}
\label{eqn:convConcEst}
\begin{align}
&\Big((\conv{\phi}_\omega)'(c^k)+(\conc{\phi})'(c^{k-1})\Big)(c^k-c^{k-1})\geq\phi_\omega(c^k)-\phi_\omega(c^{k-1}),\\
&\Big((\conv{\sigma})'(z^k)+(\conc{\sigma})'(z^{k-1})\Big)(z^k-z^{k-1})\geq\sigma(z^k)-\sigma(z^{k-1}),\\
&\Big(\breve{W}_{1,c}^\omega(c^k,\varepsilon(\mathbf{u}^{k-1}), z^{k-1})+\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}), z^{k-1})\Big)(c^k-c^{k-1})\notag\\
&\quad + W_{,\varepsilon}^\omega( c^{k},\varepsilon(\mathbf{u}^k),z^k):\varepsilon(\mathbf{u}^k-\mathbf{u}^{k-1}) + \Big(\breve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^k)+\invbreve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^{k-1})\Big)(z^k-z^{k-1})\notag\\
&\geq
W^\omega(c^{k},\varepsilon(\mathbf{u}^k),z^k)- W^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}),z^{k-1}).
\end{align}
\end{subequations}
Moreover, we will make use of standard convexity estimates:
\begin{subequations}
\label{eqn:stdConvEst}
\begin{align}
&|\nabla c^k|^{p-2}\nabla c^k\cdot\nabla (c^k-c^{k-1})
\geq \frac 1p|\nabla c^k|^p-\frac 1p|\nabla c^{k-1}|^p,\\
&|c^k|^{\varrho-2}c^k (c^k-c^{k-1})
\geq \frac 1\varrho|c^k|^\varrho-\frac 1\varrho|c^{k-1}|^\varrho,\\
&|\nabla z^k|^{p-2}\nabla z^k\cdot\nabla (z^k-z^{k-1})
\geq \frac 1p|\nabla z^k|^p-\frac 1p|\nabla z^{k-1}|^p,\\
&|z^k|^{\varrho-2}z^k (z^k-z^{k-1})
\geq \frac 1\varrho|z^k|^\varrho-\frac 1\varrho|z^{k-1}|^\varrho,\\
&|\varepsilon(\mathbf{u}^k)|^{\varrho-2}\varepsilon(\mathbf{u}^k):\varepsilon(\mathbf{u}^k-\mathbf{u}^{k-1})
\geq \frac 1\varrho|\varepsilon(\mathbf{u}^k)|^\varrho-\frac 1\varrho|\varepsilon(\mathbf{u}^{k-1})|^\varrho,\\
&\Big(\mathbf{u}^k-2\mathbf{u}^{k-1}+\mathbf{u}^{k-2}\Big)\cdot(\mathbf{u}^k-\mathbf{u}^{k-1})\geq \frac12|\mathbf{u}^k-\mathbf{u}^{k-1}|^2-\frac12|\mathbf{u}^{k-1}-\mathbf{u}^{k-2}|^2.
\end{align}
\end{subequations}
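For the reader's convenience, we record that all of the inequalities in \eqref{eqn:stdConvEst} are instances of the elementary convexity inequality $f'(a)(a-b)\geq f(a)-f(b)$ for a convex differentiable $f$. For example, with $f(s):=\tfrac1\varrho|s|^\varrho$ one has $f'(s)=|s|^{\varrho-2}s$ and hence, pointwise a.e.\ in $\Omega$,
\[
|c^k|^{\varrho-2}c^k\,(c^k-c^{k-1})
\;\geq\; \tfrac1\varrho|c^k|^\varrho-\tfrac1\varrho|c^{k-1}|^\varrho;
\]
the remaining estimates follow in the same way, the last one with the quadratic function $f(\mathbf{w}):=\tfrac12|\mathbf{w}|^2$ applied to $a=\mathbf{u}^k-\mathbf{u}^{k-1}$ and $b=\mathbf{u}^{k-1}-\mathbf{u}^{k-2}$.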
To obtain the energy estimate, we test the time-discrete system
\eqref{discr-syst-appr} as follows:
\begin{align*}
&
\text{\eqref{discr-syst-appr-c}}\times(c^{k}-c^{k-1})\;+\;
\text{\eqref{discr-syst-appr-mu}}\times\tau\mu^{k}\;+\;
\text{\eqref{discr-syst-appr-z}}\times(z^{k}-z^{k-1})\;+\;
\text{\eqref{discr-syst-appr-teta}}\times\tau\\
&+\text{\eqref{discr-syst-appr-u}}\times(\mathbf{u}^{k}-\mathbf{u}^{k-1}-(\mathbf{d}^k-\mathbf{d}^{k-1}))
\end{align*}
and exploit estimates \eqref{eqn:convConcEst}
and \eqref{eqn:stdConvEst}.
\end{proof}
\begin{remark}
We note that in comparison with the calculations in the \textit{First estimate}
in Section \ref{s:4}, where we assumed spatial $H^2$-regularity for $\mathbf{u}$, we
cannot test the weak formulation \eqref{discr-syst-appr-u} with $\mathbf{u}^k-\mathbf{u}^{k-1}$ because
the boundary values of $\mathbf{u}^k-\mathbf{u}^{k-1}$ are not necessarily $0$. \end{remark}
\begin{lemma}[Positivity of $\vartheta^k$] \label{l:positivityThetaDiscr}
There exists a constant $\underline\vartheta>0$, independent of $\omega$, $\tau$, $k$, $M$ and $\nu$, such that $\vartheta^k\geq\underline\vartheta$ a.e. in $\Omega$.
\end{lemma} \begin{proof}
The proof is carried out in two steps: first we show non-negativity
of $\vartheta^k$; in the second step we establish the claimed strict positivity.
\begin{itemize}
\item[]\textit{Step 1:}
Testing the discrete heat equation \eqref{discr-syst-appr-teta} with
$-(\vartheta^k)^-:=\min\{\vartheta^k,0\}$ shows, after integration over $\Omega$:
\begin{align*}
&\int_\Omega \frac{1}{\tau}\underbrace{\vartheta^k(-(\vartheta^k)^-)}_{=|(\vartheta^k)^-|^2}\underbrace{-\frac{1}{\tau}\vartheta^{k-1}(-(\vartheta^k)^-)}_{\geq 0}
+\Big(D_k(c)+D_k(z)+\rho\dive(D_k(\mathbf{u}))\Big)\underbrace{\mathcal T_M(\vartheta^k)(-(\vartheta^k)^-)}_{=0}\,\mathrm dx\\
&=\int_\Omega\underbrace{\Big(g^k+|D_k(c)|^2+|D_k(z)|^2+ a(c^{k-1},z^{k-1})\varepsilon(D_k(\mathbf{u})):\mathbb{V}\varepsilon(D_k(\mathbf{u}))\Big)
}_{\geq 0}\underbrace{(-(\vartheta^k)^-)}_{\leq 0}\,\mathrm dx\\
&\quad+\int_\Omega\underbrace{
m(c^{k-1},z^{k-1})|\nabla\mu_\tau^{k}|^2}_{\geq 0}\underbrace{(-(\vartheta^k)^-)}_{\leq 0}\,\mathrm dx.
\end{align*}
Here we have merely used the information that $\vartheta^{k-1}\geq 0$ a.e. in $\Omega$.
We obtain
$$
\int_\Omega|(\vartheta^k)^-|^2\,\mathrm dx\leq 0
$$
and thus $\vartheta^k\geq 0$ a.e. in $\Omega$.
\item[]\textit{Step 2:}
The proof follows the very same lines as the argument developed in \cite[Lemma 4.4 - Step 3]{RocRos14}, hence we will just outline it and refer to \cite{RocRos14} for all details.
Namely, repeating the arguments formally developed in Sec.\ \ref{s:4} (cf.\ \eqref{formal-positivity}), we deduce from
\eqref{discr-syst-appr-teta}
that
there exists $C>0$ such that
\[
\int_\Omega D_k(\vartheta)w \, \mathrm{d} x + \int_\Omega \mathsf{K}_M(\vartheta^k) w \, \mathrm{d} x \geq - C \int_\Omega (\vartheta^k)^2 w \, \mathrm{d} x \qquad \text{for every }w \in W_{+}^{1,2}(\Omega)\,.
\]
Then, we compare the functions $(\vartheta^k)_{k=1}^{K_\tau}$ with the solutions
$(v^k)_{k=1}^{K_\tau}$ of the finite difference equation $\frac{v_k-v_{k-1}}{\tau} = - C v_k^2$, with $v_0 = \vartheta_*$, and we conclude that
$\vartheta^k \geq v_k $ a.e.\ in $\Omega$. Finally, with a comparison argument we prove that
\[
\vartheta^k \geq v_k \geq \frac{\vartheta_*}{1+CT\vartheta_*} \doteq \underline\vartheta \quad \text{a.e. in }\Omega \qquad\text{for all } k =1,\ldots,K_\tau.
\]
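For the reader's convenience, we sketch one elementary way to obtain the lower bound for the comparison sequence (a standard computation, recorded here only for completeness). Assuming $v_k>0$ for all $k$, the recursion $v_k-v_{k-1}=-C\tau v_k^2$ gives $v_k\leq v_{k-1}$ and
\[
\frac{1}{v_k}-\frac{1}{v_{k-1}}
=\frac{v_{k-1}-v_k}{v_k v_{k-1}}
=C\tau\,\frac{v_k}{v_{k-1}}
\leq C\tau,
\]
so that $\frac{1}{v_k}\leq \frac{1}{\vartheta_*}+Ck\tau\leq \frac{1}{\vartheta_*}+CT$ for $k=1,\ldots,K_\tau$, which is exactly the bound $v_k\geq \frac{\vartheta_*}{1+CT\vartheta_*}$ used above.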
\end{itemize} \end{proof} Lemma \ref{l:energy-est} and Lemma \ref{l:positivityThetaDiscr} give rise to the following uniform estimates: \begin{lemma} \label{lemma:firstEstDiscr}
The following estimates hold uniformly in $\nu>0$ and $M\in\bbN$:
\begin{subequations}
\label{est-5.7}
\begin{align}
&\|c^k\|_{W^{1,p}(\Omega)}+\|z^k\|_{W^{1,p}(\Omega)}+\|\mathbf{v}^k\|_{L^2(\Omega;\bbR^d)}+\|\vartheta^k\|_{L^1(\Omega)}\leq C,
\label{est-5.7.1}\\
&\nu^{\frac1\varrho}\|c^k\|_{L^\varrho(\Omega)}+\nu^{\frac1\varrho}\|z^k\|_{L^\varrho(\Omega)}+\nu^{\frac1\varrho}\|\varepsilon(\mathbf{u}^k)\|_{L^\varrho(\Omega;\bbR^{d\times d})}\leq C,
\label{est-5.7.2}\\
&\nu\tau\Big(\|\nabla\mu^k\|_{L^\varrho(\Omega)}^\varrho+\|\mu^k\|_{L^2(\Omega)}^2\Big)\leq C.
\label{est-5.7.3}
\end{align}
\end{subequations} \end{lemma}
\begin{proof}
In order to deduce estimates \eqref{est-5.7},
it suffices to estimate the terms of the $k$-th time step
on the right-hand side of the incremental energy inequality
\eqref{discr-total-ineq} from Lemma \ref{l:energy-est}.
The following calculations are an adaptation of the
calculations performed for the \textit{First estimate} in Section \ref{s:4}.
\begin{itemize}
\item[--]
At first we observe, by Young's inequality, that
\begin{align*}
&\tau\int_\Omega \mathbf{f}^k\cdot\mathbf{v}^k\,\mathrm dx
\leq \delta\|\mathbf{v}^k\|_{L^2(\Omega;\bbR^d)}^2+C_\delta\|\mathbf{f}^k\|_{L^2(\Omega;\bbR^d)}^2,\\
&\tau\int_\Omega D_k(\mathbf{v})\cdot D_k(\mathbf{d})\,\mathrm dx
\leq \delta\|\mathbf{v}^k\|_{L^2(\Omega;\bbR^d)}^2
+\delta\|\mathbf{v}^{k-1}\|_{L^2(\Omega;\bbR^d)}^2+C_\delta\|D_k(\mathbf{d})\|_{L^2(\Omega;\bbR^d)}^2,\\
&-\tau\int_\Omega \mathbf{f}^k\cdot D_k(\mathbf{d})\,\mathrm dx
\leq C\|\mathbf{f}^k\|_{L^2(\Omega;\bbR^d)}^2+C\|D_k(\mathbf{d})\|_{L^2(\Omega;\bbR^d)}^2.
\end{align*}
By choosing $\delta>0$ sufficiently small, the term
$\delta\|\mathbf{v}^k\|_{L^2(\Omega;\bbR^d)}^2$
is absorbed by the left-hand side of \eqref{discr-total-ineq}.
The remaining terms are bounded due to \eqref{hyp:data}.
\item[--]
We continue with the next term
on the right-hand side of \eqref{discr-total-ineq}
by using that $\mathbf{v}^k=D_k(\mathbf{d})$ a.e. on $\partial\Omega$,
the trace theorem and Young's inequality
\begin{align*}
\qquad&\tau\int_\Omega a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(\mathbf{v}^k):\varepsilon(D_k(\mathbf{d}))\,\mathrm dx\\
&=-\tau\int_\Omega\mathbf{v}^k\cdot\dive\big(a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(D_k(\mathbf{d}))\big)\,\mathrm dx
+\tau\int_{\partial\Omega} \mathbf{v}^k\cdot\big(a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(D_k(\mathbf{d}))n\big)\, \mathrm{d} S\\
&=-\tau\int_\Omega\mathbf{v}^k\cdot\Big(\big(a_{,c}(c^{k-1},z^{k-1})\nabla c^{k-1}
+a_{,z}(c^{k-1},z^{k-1})\nabla z^{k-1}\big)\mathbb{V}\varepsilon(D_k(\mathbf{d}))\Big)\,\mathrm dx\\
&\quad-\tau\int_\Omega\mathbf{v}^k\cdot a(c^{k-1},z^{k-1})\dive\big(\mathbb{V}\varepsilon(D_k(\mathbf{d}))\big)\,\mathrm dx
+\tau\int_{\partial\Omega} D_k(\mathbf{d})\cdot\big(a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(D_k(\mathbf{d}))n\big)\, \mathrm{d} S\\
&\leq \delta\|\mathbf{v}^k\|_{L^2(\Omega;\bbR^d)}^2
+C_\delta\|\varepsilon(D_k(\mathbf{d}))\|_{L^\infty(\Omega;\bbR^{d\times d})}^2
\Big(\|a_{,c}(c^{k-1},z^{k-1})\|_{L^\infty(\Omega)}^2\|\nabla c^{k-1}\|_{L^2(\Omega;\bbR^d)}^2\\
&\hspace*{20em}+\|a_{,z}(c^{k-1},z^{k-1})\|_{L^\infty(\Omega)}^2\|\nabla z^{k-1}\|_{L^2(\Omega;\bbR^d)}^2\Big)\\
&\quad+C_\delta\|a(c^{k-1},z^{k-1})\|_{L^\infty(\Omega)}^2\|\tau\dive(\mathbb{V}\varepsilon(D_k(\mathbf{d})))\|_{L^2(\Omega;\bbR^d)}^2\\
&\quad+C\|D_k(\mathbf{d})\|_{H^1(\Omega;\bbR^d)}^2\|a(c^{k-1},z^{k-1})\|_{L^\infty(\Omega)}\|\tau\varepsilon(D_k(\mathbf{d}))n\|_{H^1(\Omega;\bbR^{d\times d})}^2.
\end{align*}
Taking Hypothesis (IV) and \eqref{dirichlet-data} into account, we ultimately find that
\begin{align*}
&\tau\int_\Omega a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(\mathbf{v}^k):\varepsilon(D_k(\mathbf{d}))\,\mathrm dx\\
&\qquad\leq\delta\|\mathbf{v}^k\|_{L^2(\Omega;\bbR^d)}^2
+C_\delta\big(\|\nabla c^{k-1}\|_{L^2(\Omega;\bbR^d)}^2+\|\nabla z^{k-1}\|_{L^2(\Omega;\bbR^d)}^2+1\big).
\end{align*}
For small $\delta>0$ the first term on the right-hand side can be absorbed
into the left-hand side of \eqref{discr-total-ineq}.
\item[--]
Moreover, we estimate the $\tau\int W_{,\varepsilon}^\omega(\ldots):\ldots$-term on the right-hand side of
\eqref{discr-total-ineq} as follows
\begin{align*}
\qquad&\tau\int_\Omega W_{,\varepsilon}^\omega(c^{k},\varepsilon(\mathbf{u}^k),z^k):\varepsilon(D_k(\mathbf{d}))\,\mathrm dx\\
& \quad\leq \delta\|b(c^{k},z^{k})\|_{L^\infty(\Omega)}\int_\Omega \underbrace{\frac 12b(c^{k},z^{k})\mathbb{C}(\varepsilon(\mathbf{u}^k)-\varepsilon^*(c)):(\varepsilon(\mathbf{u}^k)-\varepsilon^*(c))}_{=W(c^k,\varepsilon(\mathbf{u}^k),z^k)}\,\mathrm dx
+C_\delta\|\varepsilon(D_k(\mathbf{d}))\|_{L^2(\Omega;\bbR^{d\times d})}^2,
\end{align*}
which can be absorbed by the left-hand side of \eqref{discr-total-ineq}
for small $\delta>0$.
\item[--]
Finally,
\begin{align*}
&-\tau\int_\Omega\rho\mathcal T_M(\vartheta^k)\dive(D_k(\mathbf{d}))\,\mathrm dx
\leq\tau\rho\|\dive(D_k(\mathbf{d}))\|_{L^\infty(\Omega)}\int_\Omega|\vartheta^k|\,\mathrm dx.
\end{align*}
\end{itemize}
In the end, by choosing $\tau>0$ small enough, depending only on $\rho$ and the data
$\mathbf{d}$, the right-hand side can be absorbed by the left-hand side of
\eqref{discr-total-ineq} (recall that $\vartheta^k$ is positive). \end{proof} \begin{remark}
We see that the calculation above takes advantage of the fact
that the $W_{,\varepsilon}(\ldots)$-term in the discrete force balance equation
is discretized fully implicitly. \end{remark}
\subsubsection{\textbf{Step 2: Limit passage $M\to\infty$.}} \label{sss:4.2.2.} \noindent In the following we focus on the limit passage $M\to\infty$ and keep $M$ as a subscript in $c_M^k$, $\mu_M^k$, $z_M^k$, $\mathbf{u}_M^k$ and $\vartheta_M^k$. By adapting the proof in \cite[Proof of Lemma 4.4 - Step 4]{RocRos14} to our situation, we obtain enhanced estimates for $(\vartheta_M^k)_M$. \begin{lemma} \label{lemma:theteEstDiscr}
The following estimate holds uniformly in $M\in\bbN$:
\begin{align}
\label{eqn:thetaEstDiscr}
\|\vartheta_M^{k}\|_{H^1(\Omega)}\leq C.
\end{align} \end{lemma} \begin{proof}
In \cite[Proof of Lemma 4.4 - Step 4]{RocRos14} estimate \eqref{eqn:thetaEstDiscr} is obtained in two steps
which can be both applied in our case since the additional variable $c$
enjoys the same regularity properties and estimates as $z$.
At first \eqref{discr-syst-appr-teta} is tested by $\mathcal T_M(\vartheta_M^{k})$ leading to
the estimates
\begin{align*}
\|\mathcal T_M(\vartheta_M^{k})\|_{H^1(\Omega)}+\|\mathcal T_M(\vartheta_M^{k})\|_{L^{3\kappa+6}(\Omega)}\leq C.
\end{align*}
Secondly, \eqref{discr-syst-appr-teta} is tested by $\vartheta_M^{k}$ leading to
the claimed estimate \eqref{eqn:thetaEstDiscr}. \end{proof} \begin{lemma} \label{lemma:discrConvergence}
For given $\nu>0$, there exist functions
\begin{align*}
&c^k\in W^{1,p}(\Omega),
&&\mu^k\in W^{1,\varrho}(\Omega),
&&z^k\in W^{1,p}(\Omega)\text{ with }z^k\in[0,1]\text{ a.e. in }\Omega,
\\
&\vartheta^k\in H^1(\Omega)\text{ with }\vartheta^k\geq\underline\vartheta>0\text{ a.e. in }\Omega,
&&\mathbf{u}^k\in W^{1,\varrho}(\Omega;\bbR^d)
\end{align*}
such that for a subsequence $M\to\infty$
\begin{subequations}
\label{eqn:discrConv}
\begin{align}
&c_M^k \to c^k\text{ strongly in }W^{1,p}(\Omega),\label{eqn:strongConvCDiscr}\\
&\mu_M^k \to \mu^k\text{ strongly in }W^{1,\varrho}(\Omega),\label{eqn:strongConvMuDiscr}\\
&z_M^k \to z^k\text{ strongly in }W^{1,p}(\Omega),\label{eqn:strongConvZDiscr}\\
&\vartheta_M^k \rightharpoonup \vartheta^k\text{ weakly in }H^{1}(\Omega), \label{eqn:strongConvTDiscr}\\
&\mathbf{u}_M^k \to \mathbf{u}^k\text{ strongly in }W^{1,\varrho}(\Omega;\bbR^d).\label{eqn:strongConvUDiscr}
\end{align}
\end{subequations} \end{lemma} \begin{proof}
First of all, observe that the a priori estimates in Lemma \ref{lemma:firstEstDiscr} and Lemma \ref{lemma:theteEstDiscr}
imply \eqref{eqn:discrConv} with weak instead of strong topologies.
The strong convergence \eqref{eqn:strongConvZDiscr} may be shown by rewriting
\eqref{discr-syst-appr-z} as a variational inequality
\begin{equation}
\begin{aligned}
&-\int_\Omega|\nabla z_M^{k}|^{p-2}\nabla z_M^{k}\cdot \nabla (\zeta-z_M^{k})\,\mathrm dx
\\
&\qquad\leq\int_\Omega\Big(\frac{z_M^{k}- z^{k-1} }{\tau}+(\conv{\sigma})'(z_M^k) +
(\conc{\sigma})'(z^{k-1})+\nu|z_M^k|^{\varrho-2}z_M^k\Big)(\zeta-z_M^k)\,\mathrm dx\\
&\qquad\quad+\int_\Omega\Big(\breve{W}_{3,z}^\omega( c^{k}, \varepsilon(\mathbf{u}^{k-1}), z_M^{k})
+\invbreve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^{k-1})-
\mathcal T_M(\vartheta_M^k)\Big)(\zeta-z_M^{k})\,\mathrm dx
\label{eqn:varIneqZDiscr}
\end{aligned}
\end{equation}
holding for all $\zeta\in W^{1,p}(\Omega)$ with $0\leq \zeta\leq z^{k-1}$ a.e. in $\Omega$.
To proceed we can argue by recovery sequences:
by now we know the following:
\begin{align*}
&0\leq z_M^k\leq z^{k-1}\\
&\qquad\downarrow\qquad\downarrow\qquad\text{ weakly in }W^{1,p}(\Omega)\text{ as }M\to\infty,\\
&0\leq z^k\;\leq z^{k-1}.
\end{align*}
Due to the compact embedding $W^{1,p}(\Omega)\hookrightarrow \mathrm{C}^0(\overline\Omega)$, we find another
sequence denoted by $\widetilde z_M^k$ such that
\begin{align*}
&\widetilde z_M^k\to z^k\text{ strongly in }W^{1,p}(\Omega)\text{ as }M\to\infty\quad\text{ and }\quad
0\leq\widetilde z_M^k\leq z^{k-1}.
\end{align*}
We may take, for instance, $\widetilde z_M^k:=\max\{z^k-\delta_M,0\}$ for suitable values $\delta_M>0$ with $\delta_M\to 0$
as $M\to\infty$.
We test \eqref{eqn:varIneqZDiscr} with the admissible function $\zeta=\widetilde z_M^k$. Taking into account the
already proved weak convergences \eqref{eqn:discrConv} as well
as the growth properties of the functions $(\conv{\sigma})'$ and
$\breve{W}_{3,z}^\omega$ (cf.\ \eqref{est-quoted-5.3}), we manage to pass to
the limit on the right-hand side of \eqref{eqn:varIneqZDiscr} and conclude that
\begin{align}
\label{label-2-added}
&\limsup_{M\to\infty}\int_\Omega-|\nabla z_M^k|^{p-2}\nabla z_M^k\cdot \nabla (\widetilde z_M^k-z_M^k)\,\mathrm dx\leq 0.
\end{align}
By exploiting the uniform $p$-convexity of
the $\|\cdot\|_{L^p(\Omega)}^p$-functional
and the strong $W^{1,p}(\Omega)$-convergence of the recovery sequence,
from \eqref{label-2-added} we deduce that
$\|\nabla(\widetilde z_M^k-z_M^k)\|_{L^p(\Omega)}\to 0$ as $M\to\infty$.
Together with $\|\nabla(\widetilde z_M^k-z^k)\|_{L^p(\Omega)}\to 0$,
property \eqref{eqn:strongConvZDiscr} is shown.
To prove the strong convergences \eqref{eqn:strongConvCDiscr},
\eqref{eqn:strongConvMuDiscr} and \eqref{eqn:strongConvUDiscr},
we use a $\limsup$ argument.
We adapt the proof from \cite[Proof of Lemma 4.4 - Step 4]{RocRos14} to our situation:
\begin{itemize}
\item[--]
Let $\Lambda\in L^{\varrho/(\varrho-1)}(\Omega;\bbR^d)$ be a weak cluster point of $|\nabla\mu_M^k|^{\varrho-2}\nabla\mu_M^k$.
Testing \eqref{discr-syst-appr-c}
with $\mu_M^k$ yields, by exploiting a lower semicontinuity argument,
\begin{align*}
\limsup_{M\to\infty}\nu\int_\Omega|\nabla\mu_M^k|^{\varrho}\,\mathrm dx
={}&\limsup_{M\to\infty}\int_\Omega-\frac{c_M^k-c^{k-1}}{\tau}\mu_M^k-m(c^{k-1},z^{k-1})|\nabla\mu_M^k|^2-\nu|\mu_M^k|^2\,\mathrm dx\\
\leq{}&\int_\Omega-\frac{ c^k-c^{k-1}}{\tau}\mu^k\,\mathrm dx-\liminf_{M\to\infty}\int_\Omega m(c^{k-1},z^{k-1})|\nabla\mu_M^k|^2\,\mathrm dx-\int_\Omega\nu|\mu|^2\,\mathrm dx\\
\leq{}&\int_\Omega-\frac{ c^k-c^{k-1}}{\tau}\mu^k-m(c^{k-1},z^{k-1})|\nabla\mu|^2-\nu|\mu|^2\,\mathrm dx.
\end{align*}
However, the right-hand side equals $\nu\int_\Omega\Lambda\cdot\nabla\mu\,\mathrm dx$
by passing to the limit $M\to\infty$ in \eqref{discr-syst-appr-mu}
and testing the limit equation with $\mu$.
In conclusion, taking into account the previously
proved convergences we have that
\begin{align*}
\limsup_{M\to\infty}\int_\Omega|\nabla\mu_M^k|^{\varrho}\,\mathrm dx
\leq \int_\Omega\Lambda\cdot\nabla\mu\,\mathrm dx,
\end{align*}
which results in \eqref{eqn:strongConvMuDiscr}.
\item[--]
Convergence \eqref{eqn:strongConvCDiscr} can be gained with a similar argument as above, whereas
\eqref{eqn:strongConvUDiscr} can be shown as in \cite[Proof of Lemma 4.4 - Step 4]{RocRos14}.
\end{itemize} \end{proof} We are now in the position to carry out the limit passage as $M\to\infty$ and conclude the existence of a solution to an intermediate approximate version of the time-discrete system \eqref{PDE-discrete}, only featuring the higher regularizing terms and the $\omega$-regularizations, i.e.\ \eqref{discr-syst-appr2}
below. \begin{lemma}[Existence of the time-discrete system for $\nu>0$ and $M\to\infty$] \label{lemma:5.7}
Let the assumptions from Lemma \ref{l:exist-approx-discr} be fulfilled.
Then, for every $\nu>0$, there exists a weak solution
\begin{align*}
\{(c^k, \mu^k,z^k,\vartheta^k,\mathbf{u}^k)\}_{k=1}^{K_\tau}\subseteq W^{1,p}(\Omega)\times W^{1,\varrho}(\Omega)\times W^{1,p}(\Omega)\times H^1(\Omega)\times W^{1,\varrho}(\Omega;\bbR^d)
\end{align*}
to the following time-discrete PDE system:
\begin{subequations}
\label{discr-syst-appr2}
\begin{align}
&D_k(c)=\dive\Big(m(c^{k-1},z^{k-1})\nabla\mu^k\Big)+\nu\dive\Big(|\nabla\mu^k|^{\varrho-2}\nabla\mu^k\Big)-\nu\mu^k
&&\text{in }W^{1,\varrho}(\Omega)',
\label{discr-syst-appr-c2}\\
&\mu^k=-\Delta_p(c^k)+(\conv{\phi}_\omega)'(c^k)+(\conc{\phi})'(c^{k-1})
+\breve{W}_{1,c}^\omega(c^k,\varepsilon(\mathbf{u}^{k-1}), z^{k-1})\notag\\
&\qquad +\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}), z^{k-1})-\vartheta^k+D_k(c)+\nu|c^k|^{\varrho-2}c^k
&&\text{in }W^{1,p}(\Omega)',
\label{discr-syst-appr-mu2}\\
&D_k(z)-\Delta_p(z^k)+\xi^k+(\conv{\sigma})'(z^k) + (\conc{\sigma})'(z^{k-1})+\nu|z^k|^{\varrho-2}z^k\notag\\
&\quad=-\breve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^k)-\invbreve{W}_{3,z}^\omega( c^{k},\varepsilon(\mathbf{u}^{k-1}),z^{k-1})+\vartheta^k\notag\\
&\quad\text{with }\xi^k\in \partial I_{Z^{k-1}}(z^k)
&&\text{in }W^{1,p}(\Omega)',
\label{discr-syst-appr-z2}\\
&D_k(\vartheta) + \mathcal{A}^k(\vartheta^k)+D_k(c)\vartheta^k+D_k(z)\vartheta^k+\rho\vartheta^k\dive(D_k(\mathbf{u}))\notag\\
&\quad=g^k+|D_k(c)|^2+|D_k(z)|^2+ a(c^{k-1},z^{k-1})\varepsilon(D_k(\mathbf{u})):\mathbb{V}\varepsilon(D_k(\mathbf{u}))\notag\\
&\qquad+m(c^{k-1},z^{k-1})|\nabla\mu^k|^2
&&\text{in }H^{1}(\Omega)',
\label{discr-syst-appr-teta2}\\
&D_k(D_k(\mathbf{u}))-\dive\Big(a(c^{k-1},z^{k-1})\mathbb{V}\varepsilon(D_k(\mathbf{u}))
+ W_{,\varepsilon}^\omega( c^{k},\varepsilon(\mathbf{u}^k),z^k) -\rho\vartheta^k\mathds 1 \Big)\notag\\
&\qquad-\nu\dive\Big(|\varepsilon(\mathbf{u}^k-\mathbf{d}^k)|^{\varrho-2}\varepsilon(\mathbf{u}^k-\mathbf{d}^k)\Big)=\mathbf{f}^k
&&\text{in }W_0^{1,\varrho}(\Omega;\bbR^d)'
\label{discr-syst-appr-u2}
\end{align}
\end{subequations}
satisfying the initial conditions \eqref{discre-initial-cond},
the boundary condition $\mathbf{u}^k=\mathbf{d}^k$ a.e. on $\partial\Omega$
and the constraints \eqref{discre-constraints}. \end{lemma} \begin{proof}
To begin with, we notice that
\begin{align}
\label{eqn:TtetaConv}
\mathcal T_M(\vartheta_M^k)\to\vartheta^k\text{ strongly in }L^{p^*-\epsilon}(\Omega)\text{ for all }
\epsilon\in (0,p^*-1] \text{ as }M\to\infty,
\end{align}
which follows from the pointwise convergence $\mathcal T_M(\vartheta_M^k)\to\vartheta^k$ as $M\to\infty$
a.e. in $\Omega$ and the uniform boundedness of $\|\mathcal T_M(\vartheta_M^k)\|_{L^{p^*}(\Omega)}$ with respect to $M$.
We see that with the help of Lemma \ref{lemma:discrConvergence} and
\eqref{eqn:TtetaConv}, also taking into account the growth
properties of
$W_{,\varepsilon}^\omega$ (cf.\ \eqref{oh-yes-quote}),
we can pass to the limit $M\to\infty$ along a subsequence in \eqref{discr-syst-appr-c} for $c$
and in \eqref{discr-syst-appr-u} for $\mathbf{u}$, and obtain
\eqref{discr-syst-appr-c2} and \eqref{discr-syst-appr-u2}, respectively.
The limit passages for the remaining equations are carried out as follows:
\begin{itemize}
\item[--]
It follows from
$\| c_M^k \|_{W^{1,p}(\Omega)} \leq C$ and from the Lipschitz continuity of $\beta_\omega$ that
$(\conv{\phi}_\omega)'(c_M^k) = \beta_\omega(c_M^k) + \lambda c_M^k$ is bounded, uniformly in $M$ and $k=1,\ldots,K_\tau$, in
$L^\infty(\Omega)$. This and the growth properties of $\breve{W}_{1,c}^\omega$ and $\invbreve{W}_{1,c}^\omega$ (cf.\ \eqref{est-quoted-5.1}--\eqref{est-quoted-5.2}),
together with Lemma \ref{lemma:discrConvergence} and convergence
\eqref{eqn:TtetaConv},
enable us to pass to the limit $M\to\infty$
in equation \eqref{discr-syst-appr-mu} for $\mu$. We find \eqref{discr-syst-appr-mu2}.
\item[--]
The limit passage in equation \eqref{discr-syst-appr-z} for $z$ is managed via the variational
formulation \eqref{eqn:varIneqZDiscr}.
To this end we pick an arbitrary test-function $\zeta\in W^{1,p}(\Omega)$ with
$0\leq \zeta\leq z^{k-1}$ and construct the recovery sequence
\begin{align*}
\zeta_M:=\max\{z^{k-1}-\delta_M,0\}
\end{align*}
for suitable values $\delta_M>0$ with $\delta_M\to 0$ such that
$0\leq \zeta_M\leq z^{k-1} $ is fulfilled for all $M\in\bbN$.
Now, testing \eqref{eqn:varIneqZDiscr} with $\zeta_M$ and
passing to $M\to\infty$ with the help of Lemma \ref{lemma:discrConvergence}
and \eqref{eqn:TtetaConv} yields \eqref{discr-syst-appr-z2}.
\item[--]
By exploiting Lemma \ref{lemma:discrConvergence}, property \eqref{eqn:TtetaConv}
and a comparison argument as done in \cite[Lemma 4.4, Step 3]{RocRos14}
we find
\begin{align*}
\mathcal A_M^k(\vartheta_M^k)\rightharpoonup \mathcal A^k(\vartheta^k) \quad \text{ weakly in }H^1(\Omega)'
\text{ as }M\to\infty.
\end{align*}
This allows us to pass to the limit $M\to\infty$ in equation
\eqref{discr-syst-appr-teta} for $\vartheta$ in order to obtain \eqref{discr-syst-appr-teta2}.
\end{itemize} \end{proof}
\subsubsection{\textbf{Step 3: Limit passage $\nu\downarrow 0$.}} \label{sss:4.2.3.} \noindent
We now address the limit passage $\nu \downarrow 0$ and denote by $(c_\nu^k,\mu_\nu^k,z_\nu^k,\mathbf{u}_\nu^k)_\nu$ the family of solutions to system \eqref{discr-syst-appr2} found in Lemma \ref{lemma:discrConvergence}. By lower semicontinuity, estimates \eqref{est-5.7} from Lemma \ref{lemma:firstEstDiscr} are thus inherited by the functions $(c_\nu^k,\mu_\nu^k,z_\nu^k,\mathbf{u}_\nu^k)_\nu$.
Furthermore, we obtain a uniform $H^1(\Omega)$-estimate for $(\vartheta_\nu^k)_\nu$. However, since the higher order terms
$$
\nu\dive\big(|\nabla \mu_\nu^k|^{\varrho-2}\nabla\mu_\nu^k\big)-\nu\mu_\nu^k,
\ \ldots\ ,\quad -\nu\dive\big(|\varepsilon(\mathbf{u}_\nu^k-\mathbf{d}^k)|^{\varrho-2}\varepsilon(\mathbf{u}_\nu^k-\mathbf{d}^k)\big)
$$
vanish as $\nu \downarrow 0$, we lose the $L^2(\Omega)$-estimate for the right-hand side of the discrete temperature equation \eqref{discr-syst-appr-teta2}. Therefore, to prove this $H^1$-bound for $\vartheta_\nu^k$ we have to resort to the arguments from the proof of the \emph{Second a priori estimate} in Sec.\ \ref{s:4},
and in particular fully exploit the coercivity properties of the function $\mathsf{K}$. \begin{lemma} \label{lemma:theteEstDiscr-nu}
The following estimates hold uniformly in $\nu>0$:
\begin{align}
\label{eqn:thetaEstDiscr-nu}
\|\vartheta_\nu^k \|_{H^1(\Omega)}\leq C,\qquad
\|(\vartheta_\nu^k)^{(\kappa+\alpha)/2} \|_{H^1(\Omega)} \leq C_\alpha
\quad\text{for all }\alpha\in(0,1).
\end{align} \end{lemma} \begin{proof}
We test \eqref{discr-syst-appr-teta2} by $(\vartheta_\nu^k)^{\alpha-1}$, with $\alpha \in (0,1)$. With the very same calculations as for the \emph{Second a priori estimate} (cf.\
\eqref{all-in-all} and
also the proof of Prop.\ \ref{prop:aprio-discr} later on), we conclude
\[
\begin{aligned}
&c\int_\Omega \mathsf{K}(\vartheta_\nu^k) |\nabla( \vartheta_\nu^k)^{\alpha/2}|^2 \, \mathrm{d} x + c\int_\Omega
\left(
\left| \varepsilon\left( D_k ( \mathbf{u}_\nu^k)\right) \right|^2 + |\nabla \mu_\nu^k|^2 \right)
(\vartheta_\nu^k)^{\alpha-1}\, \mathrm{d} x + c \int_\Omega \left( \left| D_k( z_\nu^k )\right|^2 + \left| D_k( c_\nu^k)\right|^2 \right)
(\vartheta_\nu^k)^{\alpha-1}\, \mathrm{d} x\\
&\leq C + C\int_\Omega (\vartheta_\nu^k)^{\alpha+1} \, \mathrm{d} x\,.
\end{aligned}
\]
Then,
with the same arguments as in Sec.\hspace{1pt} \ref{s:4},
we arrive at
$\int_\Omega |\nabla (\vartheta_\nu^k)^{(\kappa+\alpha)/2}|^2 \, \mathrm{d} x \leq C$
for a constant independent of $\nu$. Ultimately, we conclude
\eqref{eqn:thetaEstDiscr-nu}. \end{proof}
By comparison arguments based on the \textit{Third estimate} we then obtain uniform estimates for $(\mathbf{u}_\nu^k)_\nu$ and for $(\mu_\nu^k)_\nu$ with respect to $\nu$. \begin{lemma}
The following estimates hold uniformly in $\nu>0$:
\begin{align}
\label{eqn:uMuEstDiscr-nu}
\|\mathbf{u}_\nu^k \|_{H^1(\Omega;\bbR^d)} + \|\mu_\nu^k\|_{H^1(\Omega)}\leq C.
\end{align} \end{lemma} \begin{proof}
We proceed as in the \textit{Third estimate} in Section \ref{s:4}: we test the
time-discrete heat equation \eqref{discr-syst-appr-teta2} with $\tau$
and subtract the resulting equation from the incremental energy inequality \eqref{discr-total-ineq}
(in its limit version as $M\to\infty$).
In particular we obtain boundedness with respect to $\nu$ of
\begin{align*}
\int_\Omega a(c^{k-1},z^{k-1})\frac{\varepsilon(\mathbf{u}_\nu^k-\mathbf{u}^{k-1})}{\tau}:\mathbb{V}\frac{\varepsilon(\mathbf{u}_\nu^k-\mathbf{u}^{k-1})}{\tau}\,\mathrm dx
+
\int_\Omega m(c^{k-1},z^{k-1})|\nabla\mu_\nu^k|^2\,\mathrm dx\leq C.
\end{align*}
Hence $\|\varepsilon(\mathbf{u}_\nu^k)\|_{L^2(\Omega;\bbR^{d\times d})}$ and $\|\nabla\mu_\nu^k\|_{L^2(\Omega)}$ are bounded in $\nu$.
Korn's inequality applied to $\mathbf{u}_\nu^k-\mathbf{d}^k$
shows the first part of the claim, namely boundedness
of $\hspace{1pt}|\mathbf{u}_\nu^k \hspace{1pt}|_{H^1(\Omega;\bbR^d)}$.
The proof of the second part makes use of the Poincar\'e inequality. To this end
boundedness of the spatial mean of $\mu_\nu^k$ has to be shown.
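For convenience we recall the form of the Poincar\'e inequality employed here (with a constant $C_P$ depending only on $\Omega$):
\[
\Big\|\mu_\nu^k-\Xint-_\Omega\mu_\nu^k\,\mathrm dx\Big\|_{L^2(\Omega)}\leq C_P\,\|\nabla\mu_\nu^k\|_{L^2(\Omega;\bbR^d)},
\]
so that a bound on $\|\nabla\mu_\nu^k\|_{L^2(\Omega)}$ together with a bound on the spatial mean $\Xint-_\Omega\mu_\nu^k\,\mathrm dx$ yields a bound on $\|\mu_\nu^k\|_{H^1(\Omega)}$.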
Testing the time-discrete equation \eqref{discr-syst-appr-mu2} with $1/|\Omega|$ shows
\begin{align*}
\Xint-_\Omega\mu_\nu^k\,\mathrm dx={}&\Xint-_\Omega(\conv{\phi}_\omega)'(c_\nu^k)+(\conc{\phi})'(c^{k-1})
+\breve{W}_{1,c}^\omega(c_\nu^k,\varepsilon(\mathbf{u}^{k-1}),z^{k-1})+\invbreve{W}_{1,c}^\omega(c^{k-1},\varepsilon(\mathbf{u}^{k-1}),z^{k-1})\,\mathrm dx\\
&+\Xint-_\Omega-\vartheta_\nu^k+\frac{c_\nu^k-c^{k-1}}{\tau}+\nu|c_\nu^k|^{\varrho-2}c_\nu^k\,\mathrm dx.
\end{align*}
By the known boundedness properties of $(c_\nu^k)_\nu$, $(\mathbf{u}_\nu^k)_\nu$, $(z_\nu^k)_\nu$ and $(\vartheta_\nu^k)_\nu$,
and the growth of $\breve{W}_{1,c}^\omega$, $\invbreve{W}_{1,c}^\omega$
(cf.\ \eqref{est-quoted-5.1}--\eqref{est-quoted-5.2}),
and of $(\conv{\phi}_\omega)'$ (affine-linear growth
in $c$ due to the Yosida approximation with parameter $\tau$),
we then infer boundedness of $\Xint-_\Omega\mu_\nu^k\,\mathrm dx$.
Together with the boundedness of $\|\nabla\mu_\nu^k\|_{L^2(\Omega)}$ we conclude the second
part of the claim by the Poincar\'e inequality. \end{proof}
We then have the following counterpart to Lemma \ref{lemma:discrConvergence}, which reflects the lesser regularity of the solution components $\mu^k$ and $\mathbf{u}^k$ as a result of the limit passage as $\nu \downarrow 0$. Its proof is a straightforward adaptation of the argument developed for Lemma \ref{lemma:discrConvergence}. \begin{lemma} \label{lemma:discrConvergence-nu}
There exist $(c^k,\mu^k, z^k,\vartheta^k,\mathbf{u}^k) \in W^{1,p}(\Omega) \times H^1(\Omega)\times W^{1,p}(\Omega) \times H^1(\Omega)\times H^1(\Omega;\bbR^d)$
and a (not relabeled) subsequence $\nu \downarrow 0$
such that convergences \eqref{eqn:strongConvCDiscr}, \eqref{eqn:strongConvZDiscr}--\eqref{eqn:strongConvTDiscr} hold, as well as
\begin{subequations}
\label{eqn:discrConv-nu}
\begin{align}
&\mu_\nu^k \to \mu^k\text{ strongly in }H^1(\Omega),\label{eqn:strongConvMuDiscr-nu}\\
&\nu|\nabla\mu_\nu^k|^{\varrho-2}\nabla\mu_\nu^k \to 0 \text{ strongly in } L^{\varrho/(\varrho-1)} (\Omega;\bbR^{d}),\label{conv-nu-muk-zero}\\
&\mathbf{u}_\nu^k \to \mathbf{u}^k\text{ strongly in }H^1(\Omega;\bbR^d),\label{eqn:strongConvUDiscr-nu}\\
& \nu |\varepsilon(\mathbf{u}_\nu^k-\mathbf{d}^k)|^{\varrho-2}\varepsilon(\mathbf{u}_\nu^k-\mathbf{d}^k) \to 0 \text{ strongly in } L^{\varrho/(\varrho-1)} (\Omega;\bbR^{d\times d}). \label{conv-nu-uk-zero}
\end{align}
\end{subequations} \end{lemma}
We are now in the position to carry out the \underline{\textbf{limit passage as $\boldsymbol{\nu\downarrow 0}$ in system \eqref{discr-syst-appr2}}}. The arguments for taking the limits in \eqref{discr-syst-appr-c2}, \eqref{discr-syst-appr-mu2}, \eqref{discr-syst-appr-z2}, and \eqref{discr-syst-appr-u2} are completely analogous to those developed in the proof of Lemma \ref{lemma:5.7}.
Hence we only comment on the limit passage in the discrete heat equation \eqref{discr-syst-appr-teta2}.
Estimate \eqref{eqn:thetaEstDiscr-nu} allows us to conclude that, up to a subsequence, $(\vartheta_\nu^k)^{(\kappa+\alpha)/2} \rightharpoonup (\vartheta^k)^{(\kappa+\alpha)/2}$ in $H^1(\Omega)$, hence $(\vartheta_\nu^k)^{(\kappa+\alpha)/2} \to (\vartheta^k)^{(\kappa+\alpha)/2}$ in $L^{6-\epsilon}(\Omega)$ for all $\epsilon>0$, whence, taking into account the growth condition on $\mathsf{K}$, that
\[
\mathsf{K}(\vartheta_\nu) \to \mathsf{K}(\vartheta) \qquad \text{in }
L^\gamma(\Omega) \quad \text{with } \gamma =
\frac{(6-\epsilon)(\kappa+\alpha)}{2\kappa} \quad \text{for all } \epsilon>0.
\]
This allows us to pass to the limit in the term $\mathsf{K}(\vartheta_\nu) \nabla \vartheta_\nu$, tested against $v \in W^{1,s}(\Omega)$ for some sufficiently large $s>0$.
All in all, we infer that $(c,\mu,z,\vartheta,\mathbf{u},\chi)$ solves system \eqref{PDE-discrete}, with \eqref{eqn:discr2} and \eqref{eqn:discr3} in $W^{1,p}(\Omega)'$, and with the discrete heat equation \eqref{eqn:discr4} understood in $W^{1,s}(\Omega)'$.
In the next step, we will address the enhancement of the regularity of $\mathbf{u}$ and $\mu$.
As a by-product we will obtain the discrete heat equation \eqref{eqn:discr4} understood in the $H^1(\Omega)'$-sense.
\subsubsection{\textbf{Step 4: $H^2$-regularity of $\mathbf{u}^k$ and $\mu^k$ and conclusion of the proof of Prop.\ \ref{prop:exist-discr}}} \noindent
To complete the \underline{\textbf{proof of Proposition \ref{prop:exist-discr}}} we have to improve the regularity of $\mathbf{u}^k$ and $\mu^k$. This is achieved by transforming the corresponding equations in a way that enables us to apply standard elliptic regularity results. \begin{lemma} \label{lemma:4.16}
We get $\mu^k\in H_N^2(\Omega)$ and $\mathbf{u}^k\in H^2(\Omega; \mathbb{R}^d)$
for the functions obtained in Lemma \ref{lemma:discrConvergence-nu}. \end{lemma} \begin{proof}
We will use an iteration argument as in \cite{RocRos14,hr} (see also \cite{bm})
and sketch the proof for the case $d=3$, since the calculations for $d=2$ are completely
analogous.
We already know that $\mu^k\in H^1(\Omega)$ satisfies the elliptic equation
$$
\int_\Omega m(c^{k-1},z^{k-1})\nabla\mu^k\cdot\nabla w\,\mathrm dx=\int_\Omega -D_k(c)w\,\mathrm dx
\qquad\text{for all }w\in H^1(\Omega).
$$
Substituting $w=\frac{\zeta}{m(c^{k-1},z^{k-1})}\in H^1(\Omega)$ for an arbitrarily
chosen test-function $\zeta\in H^1(\Omega)$ yields
\begin{align*}
\int_\Omega \nabla\mu^k\cdot\nabla\zeta\,\mathrm dx
=\int_\Omega\Big(\frac{-D_k(c)}{m(c^{k-1},z^{k-1})}+\frac{m_{,c}(c^{k-1},z^{k-1})\nabla c^{k-1}+m_{,z}(c^{k-1},z^{k-1})\nabla z^{k-1}}{m(c^{k-1},z^{k-1})}\cdot\nabla\mu^k\Big)\zeta\,\mathrm dx
\end{align*}
valid for all $\zeta\in H^1(\Omega)$.
Note that, due to Hypothesis (II) and
the fact that
$c^{k-1},z^{k-1}\in W^{1,p}(\Omega)$
and $\nabla \mu^k\in L^2(\Omega;\bbR^d)$, the function in the bracket on the right-hand
side is in $L^{2p/(2+p)}(\Omega)$.
Applying a higher elliptic regularity result for homogeneous Neumann problems
with $L^{2p/(2+p)}(\Omega)$-right-hand side proves $\mu^k\in W^{2,2p/(2+p)}(\Omega)$
and thus
$\nabla\mu^k\in L^{6p/(6+p)}(\Omega;\bbR^d)$.
Due to $p>3$ we end up with $\mu^k\in H_N^2(\Omega)$ after repeating this procedure
finitely many times (cf. \cite[Proof of Lemma 4.1]{hr}).
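For orientation, we sketch how the integrability exponents improve in this iteration for $d=3$ (a purely bookkeeping computation, recorded only for the reader's convenience). If $\nabla\mu^k\in L^{s_j}(\Omega;\bbR^3)$ and $\nabla c^{k-1},\nabla z^{k-1}\in L^p(\Omega;\bbR^3)$, then by H\"older's inequality the function on the right-hand side of the elliptic equation above belongs to $L^{r_j}(\Omega)$ with $\frac{1}{r_j}=\frac{1}{p}+\frac{1}{s_j}$ (as long as $r_j\leq 2$, the $L^2$-contribution of the $D_k(c)$-term being harmless), elliptic regularity gives $\mu^k\in W^{2,r_j}(\Omega)$, and the Sobolev embedding yields $\nabla\mu^k\in L^{s_{j+1}}(\Omega;\bbR^3)$ with
\[
\frac{1}{s_{j+1}}=\frac{1}{r_j}-\frac13=\frac{1}{s_j}-\Big(\frac13-\frac1p\Big),
\qquad s_0=2.
\]
Since $p>3$, the quantity $\frac13-\frac1p$ is a fixed positive decrement, so after finitely many steps the right-hand side lies in $L^2(\Omega)$, which yields $\mu^k\in H_N^2(\Omega)$ as stated.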
The proof for obtaining $\mathbf{u}^k\in H^2(\Omega;\bbR^d)$
from the elliptic equation \eqref{discr-syst-appr-u} in $H_0^1(\Omega;\bbR^d)'$
works as in \cite[Proof of Lemma 4.4 - Step 6]{RocRos14} (cf.\ also \cite{hr}),
with the exception that one needs to take the Dirichlet data
$\mathbf{d}^k\in H^2(\Omega;\bbR^d)$ into account. This is the very point where we need to assume that $\mathbb{V}=\omega \mathbb{C}$ for some $\omega>0$ (cf.~\eqref{eqn:assbV}). \end{proof}
The enhanced regularity for $\mathbf{u}^k$ yields, by a comparison argument in \eqref{eqn:discr4}, that \eqref{eqn:discr4} not only holds in $W^{1,s}(\Omega)'$ for large $s>1$ but even in $H^1(\Omega)'$.
Finally, we end up with a quintuple $\{(c_\tau^{k}, \mu_\tau^{k},z_\tau^k,\vartheta_\tau^k,\ub_\tau^k)\}_{k=1}^{K_\tau}\subseteq W^{1,p}(\Omega)\times H_N^2(\Omega)\times W^{1,p}(\Omega)\times H^1(\Omega)\times H^2(\Omega;\bbR^d)$ satisfying the assertion stated in Proposition \ref{prop:exist-discr}. \par
This concludes the proof.
$\square$
\subsection{Discrete energy and entropy inequalities} \label{ss:5.3}
We introduce the left-continuous and right-continuous piecewise constant, and the piecewise linear interpolants for a given sequence $\{\mathfrak{h}_\tau^k\}_{k=0}^{K_\tau}$
on the nodes $\{t_\tau^k\}_{k=0}^{K_\tau}$ (see \ref{time-nodes}) by \begin{align*}
\left.
\begin{array}{lll}
\pwc {\mathfrak{h}}{\tau}: (0,T) \to B & \text{defined by} &
\pwc {\mathfrak{h}}{\tau}(t): = \mathfrak{h}_\tau^k,
\\
\upwc {\mathfrak{h}}{\tau}: (0,T) \to B & \text{defined by} &
\upwc {\mathfrak{h}}{\tau}(t) := \mathfrak{h}_\tau^{k-1},
\\
\pwl {\mathfrak{h}}{\tau}: (0,T) \to B & \text{defined by} &
\pwl {\mathfrak{h}}{\tau}(t):
=\frac{t-t_\tau^{k-1}}{\tau} \mathfrak{h}_\tau^k +
\frac{t_\tau^k-t}{\tau}\mathfrak{h}_\tau^{k-1}
\end{array}
\right\}
\qquad \text{for $t \in (t_\tau^{k-1}, t_\tau^k]$.}
Furthermore, we denote by $\pwc{\mathsf{t}}{\tau}$ and by $\upwc{\mathsf{t}}{\tau}$ the left-continuous and right-continuous piecewise constant interpolants associated with the partition, i.e.
$\pwc{\mathsf{t}}{\tau}(t) := t_\tau^k$ if $t_\tau^{k-1}<t \leq t_\tau^k $ and $\upwc{\mathsf{t}}{\tau}(t):= t_\tau^{k-1}$ if $t_\tau^{k-1} \leq t < t_\tau^k $. Clearly, for every $t \in [0,T]$ we have $\pwc{\mathsf{t}}{\tau}(t) \downarrow t$ and $\upwc{\mathsf{t}}{\tau}(t) \uparrow t$ as $\tau\downarrow 0$.
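Note for later use that, with these definitions, the piecewise linear interpolant satisfies the elementary identity
\[
\partial_t \pwl{\mathfrak{h}}{\tau}(t)=\frac{\mathfrak{h}_\tau^k-\mathfrak{h}_\tau^{k-1}}{\tau}
\qquad\text{for } t\in (t_\tau^{k-1},t_\tau^k),\ k=1,\ldots,K_\tau,
\]
i.e.\ its time derivative coincides on each subinterval with the corresponding difference quotient; this is used implicitly whenever the discrete equations are rewritten in terms of the interpolants.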
\begin{proposition} \label{prop:energyEntropyIneq}
Let the assumptions of Proposition \ref{prop:exist-discr} be satisfied.
Then the
time-discrete solutions $\{(c_\tau^{k},\mu_\tau^{k},z_\tau^k,\vartheta_\tau^k,\ub_\tau^k)\}_{k=1}^{K_\tau}$
to Problem \ref{def:time-discrete} fulfill for all $0\leq s\leq t\leq T$
\begin{itemize}
\item[(i)]
the discrete entropy inequality
\begin{equation}
\label{discr-entropy-ineq}
\begin{aligned}
&\begin{aligned}
\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)} \int_\Omega (\log(\underline\vartheta_\tau) + \underline c_\tau+\underline z_\tau)\partial_t\varphi_\tau \, \mathrm{d} x \, \mathrm{d} r
&-\rho \int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)} \int_\Omega \dive(\partial_t\mathbf{u}_\tau)\overline\varphi_\tau \, \mathrm{d} x \, \mathrm{d} r\\
&-\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)} \int_\Omega \mathsf{K}(\overline\vartheta_\tau) \nabla \log(\overline\vartheta_\tau) \cdot \nabla\overline\varphi_\tau \, \mathrm{d} x \, \mathrm{d} r
\end{aligned}\\
&\begin{aligned}
\leq
\int_\Omega (\log(\overline\vartheta_\tau(t))+\overline c_\tau(t)+\overline z_\tau(t)){\overline\varphi_\tau(t)} \, \mathrm{d} x
&-\int_\Omega (\log(\overline\vartheta_\tau(s))+\overline c_\tau (s)+\overline z_\tau(s)){\overline\varphi_\tau(s)} \, \mathrm{d} x\\
&-\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)} \int_\Omega \mathsf{K}(\overline \vartheta_\tau)|\nabla\log(\overline\vartheta_\tau)|^2\overline\varphi_\tau\, \mathrm{d} x \, \mathrm{d} r
\end{aligned}\\
&\quad-\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)} \int_\Omega \left(\overline g_\tau +|\partial_t c_\tau|^2+ |\partial_t z_\tau|^2
+a(\underline c_\tau,\underline z_\tau) \varepsilon(\partial_t\mathbf{u}_\tau):\mathbb{V} \varepsilon(\partial_t\mathbf{u}_\tau)
+m(\underline c_\tau,\underline z_\tau)|\nabla \overline \mu_\tau|^2\right)\frac{\overline\varphi_\tau}{\overline\vartheta_\tau} \, \mathrm{d} x \, \mathrm{d} r\\
&\quad-\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)} \int_{\partial\Omega}\overline h_\tau \frac{\overline\varphi_\tau}{\overline\vartheta_\tau} \, \mathrm{d} S \, \mathrm{d} r,
\end{aligned}
\end{equation}
for all $\varphi \in \mathrm{C}^0 ([0,T]; W^{1,d+\epsilon}(\Omega)) \cap H^1 (0,T; L^{({d^\star})'}(\Omega))$
for some $\epsilon>0$, with $\varphi \geq 0$;
\item[(ii)]
the discrete total energy inequality
\begin{align}
\label{discr-energy-ineq}
\begin{aligned}
&\mathscr{E}_\omega(\overline c_\tau(t),\overline z_\tau(t),\overline\vartheta_\tau(t),\overline\mathbf{u}_\tau(t),\overline \mathbf{v}_\tau(t))\\
&\qquad\leq\mathscr{E}_\omega (\overline c_\tau(s),\overline z_\tau(s),\overline\vartheta_\tau(s),\overline\mathbf{u}_\tau(s),\overline \mathbf{v}_\tau(s))\\
&\qquad\quad+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega \overline g_\tau\,\mathrm dx\,\mathrm dr +\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega\overline{\bold f}_\tau \cdot\overline \mathbf{v}_\tau\,\mathrm dx\,\mathrm dr
+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_{\partial\Omega}\overline h_\tau\,\mathrm dS\,\mathrm dr\\
&\qquad\quad+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_{\partial\Omega}({\boldsymbol{\sigma}}_\tau \mathbf{n})\cdot\partial_t\mathbf{d}_\tau\, \mathrm{d} S\,\mathrm dr
\end{aligned}
\end{align}
with the discrete stress tensor
\begin{align*}
{\boldsymbol{\sigma}}_\tau:=a(\upwc c{\tau},\upwc z{\tau})\mathbb{V}\varepsilon(\partial_t \mathbf{u}_\tau)
+W_{,\varepsilon}^\omega(\pwc c{\tau},\varepsilon(\pwc \mathbf{u}{\tau}),\pwc z{\tau})
-\rho\pwc\vartheta{\tau}\mathds 1.
\end{align*}
\end{itemize} \end{proposition} \begin{proof}
$\,$
\begin{itemize}
\item[To (i):]
The proof is based on \cite[Proof of Proposition 4.8]{RocRos14}.
Testing the time-discrete heat equation \eqref{eqn:discr4} for time step $k$
with $\frac{\varphi_\tau^k}{\vartheta_\tau^k}\in H^1(\Omega)$ shows
\begin{align*}
&\int_\Omega\bigg(g_\tau^k+|D_{\tau,k}(c)|^2+|D_{\tau,k}(z)|^2+m(c_\tau^{k-1},z_\tau^{k-1})|\nabla\mu_\tau^{k}|^2\bigg)\frac{\varphi_\tau^k}{\vartheta_\tau^k}\,\mathrm dx\\
&+\int_\Omega a(c_\tau^{k-1},z_\tau^{k-1})\varepsilon(D_{\tau,k}(\mathbf{u})):\mathbb{V}\varepsilon(D_{\tau,k}(\mathbf{u}))\frac{\varphi_\tau^k}{\vartheta_\tau^k}\,\mathrm dx
+\int_{\partial\Omega}h_\tau^k\frac{\varphi_\tau^k}{\vartheta_\tau^k}\,\mathrm dS\\
&\leq\int_\Omega\mathsf{K}(\vartheta_\tau^k)\nabla\vartheta_\tau^k\cdot\nabla\frac{\varphi_\tau^k}{\vartheta_\tau^k}
+\bigg(\frac{1}{\tau}\big(\log(\vartheta_\tau^k)-\log(\vartheta_\tau^{k-1})\big)+D_{\tau,k}(c)+D_{\tau,k}(z)+\rho\dive(D_{\tau,k}(\mathbf{u}))\bigg)\varphi_\tau^k\,\mathrm dx
\end{align*}
by using the concavity estimate
$$
\frac{\vartheta_\tau^k-\vartheta_\tau^{k-1}}{\vartheta_\tau^k}
\leq\log(\vartheta_\tau^k)-\log(\vartheta_\tau^{k-1}).
$$
Summing over $k=\frac{\overline{\mathsf t}_\tau(s)}\tau+1,\ldots,\frac{\overline{\mathsf t}_\tau(t)}\tau$ and using discrete integration by parts
proves \eqref{discr-entropy-ineq}.
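For completeness, we record the two elementary facts behind this computation (stated here only for the reader's convenience). First, the concavity of the logarithm gives, pointwise,
\[
\log(\vartheta_\tau^k)-\log(\vartheta_\tau^{k-1})\geq \frac{1}{\vartheta_\tau^k}\big(\vartheta_\tau^k-\vartheta_\tau^{k-1}\big),
\]
which is exactly the estimate quoted above. Second, the identity
\[
\mathsf{K}(\vartheta_\tau^k)\nabla\vartheta_\tau^k\cdot\nabla\frac{\varphi_\tau^k}{\vartheta_\tau^k}
=\mathsf{K}(\vartheta_\tau^k)\nabla\log(\vartheta_\tau^k)\cdot\nabla\varphi_\tau^k
-\mathsf{K}(\vartheta_\tau^k)\,|\nabla\log(\vartheta_\tau^k)|^2\,\varphi_\tau^k
\]
explains how the two $\mathsf{K}$-terms in \eqref{discr-entropy-ineq} arise from the tested equation.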
\item[To (ii):]
The total energy inequality
is inherited from the incremental energy inequality \eqref{discr-total-ineq}
of the $(M,\nu)$-regularized system in Lemma \ref{l:energy-est}.
Indeed, let $0\leq s\leq t\leq T$.
Passing to the limits $M\to\infty$ and $\nu\downarrow 0$
in \eqref{discr-total-ineq} by means of lower semicontinuity arguments
and then summing over $j=\frac{\overline{\mathsf t}_\tau(s)}{\tau}+1,\ldots,\frac{\overline{\mathsf t}_\tau(t)}{\tau}$ yields
\begin{align*}
&\mathscr{E}_\omega(\pwc c{\tau}(t),\pwc z{\tau}(t),\pwc \vartheta{\tau}(t),\pwc \mathbf{u}{\tau}(t),\pwc \mathbf{v}{\tau}(t))\\
&\qquad\leq\mathscr{E}_\omega (\pwc c{\tau}(s),\pwc z{\tau}(s),\pwc \vartheta{\tau}(s),\pwc \mathbf{u}{\tau}(s),\pwc \mathbf{v}{\tau}(s))
+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\bigg(\int_\Omega \pwc g{\tau}\,\mathrm dx + \int_{\partial\Omega} \pwc h{\tau} \,\mathrm dS +\int_\Omega \pwc{\bold f}{\tau} \cdot\pwc\mathbf{v}{\tau}\,\mathrm dx\bigg)\,\mathrm dr\\
&\left.
\begin{aligned}
&\qquad\quad+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega \partial_t \mathbf{v}_\tau\cdot \partial_t \mathbf{d}_\tau\,\mathrm dx\,\mathrm dr
+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega a(\upwc c{\tau},\upwc z{\tau})\mathbb{V}\varepsilon(\pwc \mathbf{v}{\tau}):\varepsilon(\partial_t \mathbf{d}_\tau)\,\mathrm dx\,\mathrm dr\\
&\qquad\quad+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega W_{,\varepsilon}^\omega(\pwc c{\tau},\varepsilon(\pwc\mathbf{u}{\tau}),\pwc z{\tau}):\varepsilon(\partial_t\mathbf{d}_\tau)\,\mathrm dx\,\mathrm dr
-\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega\rho\pwc\vartheta{\tau}\dive(\partial_t\mathbf{d}_\tau)\,\mathrm dx\,\mathrm dr\\
&\qquad\quad-\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega \pwc{\bold f}{\tau} \cdot \partial_t\mathbf{d}_\tau\,\mathrm dx\,\mathrm dr.
\end{aligned}
\right\}
=:I_1
\end{align*}
Finally, integration by parts in space and using \eqref{eqn:discr5} shows
\begin{align*}
I_1={}&\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega\Big(\underbrace{\partial_t \mathbf{v}_\tau
-\dive\Big(a(\upwc c{\tau},\upwc z{\tau})\mathbb{V}\varepsilon(\pwc \mathbf{v}{\tau})
+W_{,\varepsilon}^\omega(\pwc c{\tau},\varepsilon(\pwc\mathbf{u}{\tau}),\pwc z{\tau})
-\rho\pwc\vartheta{\tau}\mathds 1\Big)-\pwc{\bold f}{\tau}}_{=0}\Big)\cdot \partial_t \mathbf{d}_\tau\,\mathrm dx\,\mathrm dr\\
&+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_{\partial\Omega}
\Big(a(\upwc c{\tau},\upwc z{\tau})\mathbb{V}\varepsilon(\pwc \mathbf{v}{\tau})
+W_{,\varepsilon}^\omega(\pwc c{\tau},\varepsilon(\pwc\mathbf{u}{\tau}),\pwc z{\tau})
-\rho\pwc\vartheta{\tau}\mathds
1\Big)\mathbf{n} \cdot\partial_t \mathbf{d}_\tau\, \mathrm{d} S\,\mathrm dr\\
={}&\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_{\partial\Omega}({\boldsymbol{\sigma}}_\tau \mathbf{n})\cdot\partial_t\mathbf{d}_\tau\, \mathrm{d} S\,\mathrm dr.
\end{align*}
\end{itemize} \end{proof}
\subsection{A priori estimates} \label{ss:5.4} The aim of this section is to customize the a priori estimates which we have
developed in Section \ref{s:4} to the time-discrete setting described in Problem \ref{def:time-discrete}, for a time-discrete solution $(\overline c_\tau,\underline c_\tau, c_\tau,\overline\mu_\tau,\overline z_\tau,\underline z_\tau, z_\tau, \overline\vartheta_\tau, \underline\vartheta_\tau, \vartheta_\tau, \overline\mathbf{u}_\tau,\underline\mathbf{u}_\tau,\overline\mathbf{v}_\tau,\mathbf{v}_\tau)$ (recall that $\mathbf{v}^k = D_k (\mathbf{u})$ for all $k \in \{1, \ldots, K_\tau\}$).
Let us mention in advance that, in this time-discrete setting, we are only able to estimate (cf.\ \eqref{thetaBound1} below) the supremum of the total variation of
$\langle\log(\overline\vartheta_\tau),\varphi\rangle_{W^{1,d+\epsilon}}$
over all test-functions $\varphi\in W^{1,d+\epsilon}(\Omega)$
with $\|\varphi\|_{W^{1,d+\epsilon}(\Omega)}\leq 1$,
which is a slightly weaker result than the \textbf{Seventh a priori estimate}
in Section \ref{s:4}, but still strong enough to apply the compactness
result proved in \cite[Theorem A.5]{RocRos14}. \begin{proposition} \label{prop:aprio-discr}
Let the assumptions of Proposition \ref{prop:exist-discr} be satisfied.
Then the
time-discrete solutions $\{(c_\tau^{k},\mu_\tau^{k},z_\tau^k,\vartheta_\tau^k,\ub_\tau^k)\}_{k=1}^{K_\tau}$
to Problem \ref{def:time-discrete} fulfill the following a priori estimates
uniformly in $\omega>0$ and $\tau>0$:
\begin{align}
&\|\overline c_\tau\|_{L^\infty(0,T;W^{1,p}(\Omega))}+\|\underline c_\tau\|_{L^\infty(0,T;W^{1,p}(\Omega))}\leq C,
\label{cBound1}\\
&\|c_\tau\|_{H^1(0,T;L^2(\Omega))\cap L^\infty(0,T;W^{1,p}(\Omega))}\leq C,
\label{cBound2}\\
&\|\Delta_p(\overline c_\tau)\|_{L^2(0,T;L^{2}(\Omega))}\leq C,
\label{cBound3}\\
&\|\overline\eta_\tau\|_{L^2(0,T;L^{2}(\Omega))}\leq C\qquad
\text{with }\overline\eta_\tau:=\beta_\omega(\overline c_\tau),
\label{etaBound}\\
&\|\overline\mu_\tau\|_{L^2(0,T;H^2(\Omega))}\leq C,
\label{muBound}\\
&\|\overline z_\tau\|_{L^\infty(0,T;W^{1,p}(\Omega))}+\|\underline z_\tau\|_{L^\infty(0,T;W^{1,p}(\Omega))}\leq C,
\label{zBound1}\\
&\|z_\tau\|_{H^1(0,T;L^2(\Omega))\cap L^\infty(0,T;W^{1,p}(\Omega))}\leq C,
\label{zBound2}\\
&\|\overline\vartheta_\tau\|_{L^2(0,T;H^1(\Omega))\cap L^\infty(0,T;L^1(\Omega))}\leq C,
\label{thetaBound1}\\
&\big\|(\overline\vartheta_\tau)^{\frac{\kappa+\alpha}{2}}\big\|_{L^2(0,T;H^1(\Omega))}\leq C_\alpha
\text{ for all }\alpha\in(0,1),
\label{thetaBound2}\\
&\|\log(\overline\vartheta_\tau)\|_{L^2(0,T;H^1(\Omega))}\leq C,
\label{thetaBound3}\\
&\|\overline\mathbf{u}_\tau\|_{L^\infty(0,T;H^2(\Omega;\bbR^d))}+\|\underline\mathbf{u}_\tau\|_{L^\infty(0,T;H^2(\Omega;\bbR^d))}\leq C,
\label{uBound1}\\
&\|\mathbf{u}_\tau\|_{H^1(0,T;H^2(\Omega;\bbR^d))\cap W^{1,\infty}(0,T;H^1(\Omega;\bbR^d))}\leq C,
\label{uBound2}\\
&\|\mathbf{v}_\tau\|_{L^2(0,T;H^2(\Omega;\bbR^d))\cap H^1(0,T;L^2(\Omega;\bbR^d))}\leq C
\label{vBound}
\end{align}
as well as
\begin{align}
\label{thetaBound4}
&\sup_{\varphi\in W^{1,d+\epsilon}(\Omega),\,\|\varphi\|_{W^{1,d+\epsilon}(\Omega)}\leq 1}
\mathrm{Var}\big(\langle\log(\overline\vartheta_\tau),\varphi\rangle_{W^{1,d+\epsilon}};[0,T]\big)
\leq C_\epsilon
\quad\text{for all }\epsilon>0.
\end{align}
Under the additional assumption \eqref{range-k-admissible}
we also have
\begin{align}
\label{thetaBoundAdd}
\|\vartheta_\tau\|_{\mathrm{BV}([0,T];W^{2,d+\epsilon}(\Omega)')}\leq C_\epsilon
\quad\text{for all }\epsilon>0.
\end{align} \end{proposition} \begin{proof}
The proof mainly follows the lines in Section \ref{s:4}.
Besides this, the estimates for the time-discrete variables $z_\tau$, $\vartheta_\tau$ and $\mathbf{u}_\tau$ are based
on \cite[Proof of Proposition 4.10]{RocRos14}.
To avoid repetition we will refer to the estimates in Section \ref{s:4}
when necessary.
\begin{itemize}
\item[(i)]
The time-discrete total energy inequality from
Proposition \ref{prop:energyEntropyIneq} (ii)
implies the following estimates
(see \textbf{First a priori estimate}):
\begin{align*}
\|\overline c_\tau\|_{L^\infty(0,T;W^{1,p}(\Omega))}
+\|\nabla\overline z_\tau\|_{L^\infty(0,T;L^{p}(\Omega;\bbR^d))}
+\|\overline\vartheta_\tau\|_{L^\infty(0,T;L^1(\Omega))}
+\|\mathbf{v}_\tau\|_{L^\infty(0,T;L^2(\Omega;\bbR^d))}
\leq C.
\end{align*}
\item[(ii)]
The \textbf{Second a priori estimate} is performed by
testing the time-discrete heat equation \eqref{eqn:discr4} with $F'(\vartheta_\tau^k)=(\vartheta_\tau^k)^{\alpha-1}$,
where $\alpha\in(0,1)$ and $F(\vartheta):=\vartheta^\alpha/\alpha$ is concave;
we obtain
\begin{align*}
&\int_\Omega\Big(g_\tau^k+|D_{\tau,k}(c)|^2+|D_{\tau,k}(z)|^2+m(c_\tau^{k-1},z_\tau^{k-1})|\nabla\mu_\tau^{k}|^2\Big)F'(\vartheta_\tau^k)\,\mathrm dx\\
&+\int_\Omega a(c_\tau^{k-1},z_\tau^{k-1})\varepsilon(D_{\tau,k}(\mathbf{u})):\mathbb{V}\varepsilon(D_{\tau,k}(\mathbf{u}))F'(\vartheta_\tau^k)\,\mathrm dx
+\int_{\partial\Omega}h_\tau^k F'(\vartheta_\tau^k)\,\mathrm dS\\
&\leq\int_\Omega\frac{F(\vartheta_\tau^k)-F(\vartheta_\tau^{k-1})}{\tau}+\mathsf{K}(\vartheta_\tau^k)\nabla\vartheta_\tau^k\cdot\nabla(F'(\vartheta_\tau^k))\,\mathrm dx\\
&\qquad+\int_\Omega\Big(D_{\tau,k}(c)+D_{\tau,k}(z)+\rho\dive(D_{\tau,k}(\mathbf{u}))\Big)\vartheta_\tau^k F'(\vartheta_\tau^k)\,\mathrm dx
\end{align*}
by using the concavity estimate $(\vartheta_\tau^k-\vartheta_\tau^{k-1})F'(\vartheta_\tau^k)\leq F(\vartheta_\tau^k)-F(\vartheta_\tau^{k-1})$.
Multiplication by $\tau$ and summation over $k=1,\ldots,\overline{\mathsf t}_\tau(t)/\tau$ shows,
for every $t\in(0,T]$, the precise time-discrete analogue of
\eqref{eqn:secondEstPre}.
With the same calculations as in Section \ref{s:4} we end up with
\begin{align*}
\|\overline\vartheta_\tau\|_{L^2(0,T;H^1(\Omega))}\leq C,\qquad
\|(\overline\vartheta_\tau)^{(\kappa+\alpha)/2}\|_{L^2(0,T;H^1(\Omega))}\leq C_\alpha.
\end{align*}
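For clarity, we note that after multiplying by $\tau$ and summing, the $F$-difference quotients simply telescope,
\[
\sum_{k=1}^{N}\big(F(\vartheta_\tau^k)-F(\vartheta_\tau^{k-1})\big)=F(\vartheta_\tau^{N})-F(\vartheta_\tau^{0}),
\]
so that only the discrete initial datum appears; this is the discrete counterpart of integrating $\partial_t F(\vartheta)$ in time.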
\item[(iii)]
By testing the time-discrete heat equation \eqref{eqn:discr4} with $\tau$, integrating over $\Omega$, summing over $k$ and
subtracting the result from the total energy inequality \eqref{discr-energy-ineq}
we obtain the \textbf{Third a priori estimate}:
\begin{align*}
\|\partial_t c_\tau\|_{L^2(Q)} + \|\nabla \overline\mu_\tau \|_{L^2(Q;\bbR^d)} + \|\partial_t z_\tau\|_{L^2(Q)}+\|\partial_t \mathbf{u}_\tau\|_{L^2(0,T; H^1(\Omega;\mathbb{R}^d))} \leq C
\end{align*}
as well as
\begin{align*}
\|\overline z_\tau \|_{L^\infty (0,T;W^{1,p}(\Omega))}
+\|\overline \mathbf{u}_\tau \|_{L^\infty (0,T;H^{1}(\Omega;\bbR^d))}
\leq C.
\end{align*}
\item[(iv)]
The \textbf{Fourth a priori estimate} is obtained by
testing the time-discrete force balance equation \eqref{eqn:discr5}
by $-\tau\dive(\mathbb{V}\varepsilon(\mathbf v_\tau^k))$.
The calculations in Section \ref{s:4} carry over to the time-discrete setting.
However,
let us point out that
the discrete analogue of \eqref{est-added-0} is given by the convexity estimate
\begin{align*}
-\int_0^{\overline{\mathsf t}_\tau(t)} \int_\Omega \partial_t \mathbf{v}_\tau\cdot \dive (\mathbb{V}\varepsilon(\overline\mathbf{v}_\tau)) \, \mathrm{d} x \, \mathrm{d} s
\geq{}&
-\int_0^{\overline{\mathsf t}_\tau(t)}
\int_{\partial\Omega}\partial_t
\mathbf{v}_\tau\cdot (\mathbb{V}\varepsilon(\overline\mathbf{v}_\tau)\mathbf{n}) \, \mathrm{d} S \, \mathrm{d} s\\
&+\int_\Omega \frac12 \varepsilon(\overline\mathbf{v}_\tau(t)):\mathbb{V} \varepsilon(\overline\mathbf{v}_\tau(t)) \, \mathrm{d} x
-\int_\Omega \frac12 \varepsilon(\mathbf{v}^0):\mathbb{V} \varepsilon(\mathbf{v}^0) \, \mathrm{d} x.
\end{align*}
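Behind this estimate lies an elementary tensor inequality, recorded here for convenience: since $\mathbb{V}$ is symmetric and positive semidefinite, for symmetric matrices $A,B$ one has $0\leq \tfrac12(A-B):\mathbb{V}(A-B)$, hence
\[
(A-B):\mathbb{V}A\;\geq\;\tfrac12\,A:\mathbb{V}A-\tfrac12\,B:\mathbb{V}B.
\]
Applying this with $A=\varepsilon(\mathbf{v}_\tau^k)$ and $B=\varepsilon(\mathbf{v}_\tau^{k-1})$ at each time step, and summing over $k$, produces the two quadratic terms on the right-hand side above (up to the boundary contribution coming from the integration by parts).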
With analogous calculations we arrive at
\begin{align*}
\qquad&\|\mathbf{u}_\tau\|_{H^1(0,T;H^2(\Omega;\bbR^d))}
+\|\overline\mathbf{u}_\tau\|_{L^\infty(0,T;H^2(\Omega;\bbR^d))}\leq
C,\\
&\|\mathbf{v}_\tau\|_{H^1(0,T;L^2(\Omega;\bbR^d))\cap L^\infty(0,T;H^1(\Omega;\bbR^d))\cap L^2(0,T;H^2(\Omega;\bbR^d))}
+\|\overline\mathbf{v}_\tau\|_{L^\infty(0,T;H^1(\Omega;\bbR^d))\cap L^2(0,T;H^2(\Omega;\bbR^d))}\leq C.
\end{align*}
\item[(v)]
For the \textbf{Fifth a priori estimate} we test \eqref{eqn:discr2}
with $c_\tau^{k}-\mathfrak{m}_0$ where $\mathfrak{m}_0:=\Xint-_\Omega c^0\,\mathrm dx$.
With exactly the same calculations as in Section \ref{s:4} we find
\begin{align*}
\|\overline\mu_\tau\|_{L^2(0,T;H^1(\Omega))}\leq C.
\end{align*}
\item[(vi)]
A comparison in \eqref{eqn:discr2} as done in
the \textbf{Sixth a priori estimate} gives
\begin{align*}
\| \Delta_p(\overline c_\tau)\|_{L^2(0,T; L^2(\Omega))}
+ \|\overline\eta_\tau \|_{L^2(0,T; L^2(\Omega))} \leq C.
\end{align*}
\item[(vii)]
Estimate \eqref{thetaBound1} can be shown by utilizing the calculations
in \cite[Proof of Proposition 4.10 - Sixth estimate]{RocRos14}
and additionally noticing that $\{\overline c_\tau\}_{\tau>0}$
is bounded in $\mathrm{BV}([0,T];L^2(\Omega))$ due to the \textit{Third estimate}.
We thus obtain \eqref{thetaBound4}.
\item[(viii)]
The \textbf{Eighth a priori estimate} works as in Section \ref{s:4}
and yields \eqref{thetaBoundAdd}.
\item[(ix)]
The \textbf{Ninth a priori estimate} works as in Section \ref{s:4}
and yields \eqref{muBound}.
\end{itemize} \end{proof} \begin{remark}
We observe that \eqref{thetaBound4} implies the uniform bound
\begin{align*}
\|\log(\overline\vartheta_\tau)\|_{L^\infty(0,T;W^{1,d+\epsilon}(\Omega))}\leq C_\epsilon.
\end{align*}
Moreover, by interpolation we infer from \eqref{thetaBound1} that (see \eqref{estetainterp})
\begin{align*}
\|\overline\vartheta_\tau\|_{L^h(Q)}\leq C
\end{align*}
with $h=8/3$ for $d=3$ and $h=3$ for $d=2$. \end{remark}
\section{\bf Proof of Theorem \ref{thm:1}} \label{s:6} In this last section we are going to perform the limit passages as $\tau\downarrow 0$ and $\omega\downarrow 0$ in the time-discrete system \eqref{PDE-discrete}, for which the existence of solutions was proved in Proposition \ref{prop:exist-discr}. \color{black}
This will lead us \color{black} to prove Theorem \ref{thm:1}. \subsection{Compactness} \label{ss:6.1} We shall adopt the notation from the previous section. In particular for fixed $\omega>0$ we let $(\overline c_\tau,\underline c_\tau, c_\tau,\overline\mu_\tau,\overline z_\tau,\underline z_\tau, z_\tau, \overline\vartheta_\tau, \underline\vartheta_\tau, \vartheta_\tau, \overline\mathbf{u}_\tau,\underline\mathbf{u}_\tau,\overline\mathbf{v}_\tau,\mathbf{v}_\tau)$ be a time-discrete solution on an equi-distant partition of $[0,T]$ with fineness $\tau>0$ according to Proposition \ref{prop:exist-discr}. \begin{lemma} \label{lemma:discr-conv}
Let the assumptions from Proposition \ref{prop:exist-discr} be satisfied
and $\omega>0$ be fixed.
Then there exists a quintuple $(c,\mu,z,\vartheta,\mathbf{u})$ satisfying
\eqref{reg-c}--\eqref{reg-u} such that along a (not relabeled) subsequence, as $\tau \downarrow 0$, the following convergences hold: \color{black}
\begin{align}
&\pwc c{\tau},\,\upwc c{\tau} \weakstarlim c
&&\text{ weakly-star in }L^\infty(0,T;W^{1,p}(\Omega)),
\label{cConv1}\\
&c_\tau \weakstarlim c
&&\text{ weakly-star in }L^\infty(0,T;W^{1,p}(\Omega))\cap H^{1}(0,T;L^2(\Omega)),
\label{cConv2}\\
&\Delta_p(\pwc c{\tau}) \rightharpoonup \Delta_p(c)
&&\text{ weakly in }L^2(0,T;L^2(\Omega)),
\label{cConv3}\\
&\pwc c{\tau},\,\upwc c{\tau}\to c
&&\text{ strongly in }L^s (0,T; W^{1,p}(\Omega))\text{ for all }s\in[1,\infty),
\label{cConv4}\\
&\pwc \eta{\tau} \rightharpoonup \eta
&&\text{ weakly in }L^2(0,T;L^2(\Omega))\text{ with }\eta=\beta_\omega(c)\text{ a.e. in }Q,
\label{etaConv}\\
&\pwc \mu{\tau} \rightharpoonup \mu
&&\text{ weakly in }L^2(0,T;H_N^2(\Omega)),
\label{muConv}\\
&\pwc z{\tau},\,\upwc z{\tau} \weakstarlim z
&&\text{ weakly-star in }L^\infty (0,T; W^{1,p}(\Omega)),
\label{zConv1}\\
&z_\tau \weakstarlim z
&&\text{ weakly-star in }L^\infty (0,T; W^{1,p}(\Omega)) \cap H^1 (0,T;L^2(\Omega)),
\label{zConv2}\\
&\pwc z{\tau},\,\upwc z{\tau}\to z
&&\text{ strongly in }L^\infty (0,T; X)\text{ for all $X$ such that }W^{1,p}(\Omega) \Subset X\subseteq L^2(\Omega),
\label{zConv3}\\
&z_\tau \to z
&&\text{ strongly in }\mathrm{C}^0([0,T]; X)\text{ for all $X$ such that }W^{1,p}(\Omega) \Subset X\subseteq L^2(\Omega),
\label{zConv4}\\
&\pwc \vartheta{\tau}\rightharpoonup \vartheta
&&\text{ weakly in }L^2 (0,T; H^1(\Omega)),
\label{thetaConv1}\\
&\log(\pwc\vartheta{\tau}) \weakstarlim \log(\vartheta)
&&\text{ weakly-star in } L^2 (0,T; H^1(\Omega)) \cap L^\infty (0,T; W^{1,d+\epsilon}(\Omega)') \quad \text{for every } \epsilon>0,
\label{thetaConv2}\\
&\log(\pwc\vartheta{\tau}) \to \log(\vartheta)
&&\text{ strongly in }L^2(0,T;L^s(\Omega)) \text{ for all $ s \in [1,6)$ if $d=3$, and all $s\in [1,\infty) $ if $d=2$,}
\label{thetaConv3}\\
&\log(\pwc\vartheta{\tau}(t)) \rightharpoonup \log(\vartheta(t))
&&\text{ weakly in $H^1(\Omega)$ for almost all $t \in (0,T)$}
\label{thetaConv4}\\
&&&\text{ (the chosen subsequence for $\tau\downarrow 0$ does not depend on $t$)},\notag\\
&\pwc\vartheta{\tau}\to \vartheta
&&\text{ strongly in } L^q(\Omega\times (0,T))
\text{ for all }q\in [1,8/3)\text{ for }d=3\text{ and all }q\in [1, 3)\text{ if }d=2,
\label{thetaConv5}
\end{align}
\begin{align}
&\pwc \mathbf{u}{\tau},\upwc \mathbf{u}{\tau} \weakstarlim \mathbf{u}
&&\text{ weakly-star in }L^\infty(0,T;H^2(\Omega;\bbR^d)),
\label{uConv1}\\
&\pwl \mathbf{u}{\tau} \weakstarlim \mathbf{u}
&&\text{ weakly-star in }H^1(0,T;H^2(\Omega;\bbR^d))\cap W^{1,\infty}(0,T;H^1(\Omega;\bbR^d)),
\label{uConv2}\\
&\pwc \mathbf{u}{\tau},\, \upwc \mathbf{u}{\tau} \to \mathbf{u}
&&\text{ strongly in }L^\infty(0,T;X)\text{ for all $X$ such that }H^{2}(\Omega;\bbR^d) \Subset X\subseteq L^2(\Omega;\bbR^d),
\label{uConv3}\\
&\pwl \mathbf{u}{\tau} \to \mathbf{u}
&&\text{ strongly in }\mathrm{C}^0([0,T]; X)\text{ for all $X$ such that }H^{2}(\Omega;\bbR^d) \Subset X\subseteq L^2(\Omega;\bbR^d),
\label{uConv4}\\
&\pwc \mathbf{v}{\tau} \rightharpoonup \mathbf{u}_{t}
&&\text{ weakly in }L^2(0,T;H^2(\Omega;\bbR^d)),
\label{vConv1}\\
&\mathbf{v}_\tau \rightharpoonup \mathbf{u}_t
&&\text{ weakly in }H^1(0,T;L^2(\Omega;\bbR^d))\cap L^2(0,T;H^2(\Omega;\bbR^d)).
\label{vConv2}
\end{align}
Under the additional assumption \eqref{range-k-admissible}
we also have for all $\epsilon>0$ that $\vartheta\in \mathrm{BV}([0,T]; W^{2,d+\epsilon}(\Omega)')$ and
\begin{align}
&\pwc \vartheta{\tau}\to \vartheta
&&\text{ strongly in }L^2 (0,T; Y)\text{ for all $Y$ such that }H^{1}(\Omega) \Subset Y\subset W^{2,d+\epsilon}(\Omega)',
\label{tetaConvAdd1}\\
&\pwc \vartheta{\tau}(t)\to \vartheta(t)
&&\text{ strongly in }W^{2,d+\epsilon}(\Omega)'\text{ for all }t\in[0,T].
\label{tetaConvAdd2}
\end{align} \end{lemma} \begin{proof}
We immediately obtain \eqref{cConv1}, \eqref{cConv2},
\eqref{muConv}, \eqref{zConv1}, \eqref{zConv2}, \eqref{thetaConv1}, \eqref{uConv1}, \eqref{uConv2}, \eqref{vConv1} and \eqref{vConv2}
from the estimates \eqref{cBound1}--\eqref{vBound} in Proposition \ref{prop:aprio-discr}
by standard weak \color{black} compactness arguments.
From the regularity result \cite[Thm.\ 2, Rmk.\ 3.5]{savare98},
we infer for every $0< \delta< \frac1p$ the enhanced regularity
$\pwc c{\tau},\upwc c{\tau} \in L^2 (0,T; W^{1+\delta,p}(\Omega))$
together with the estimate
\begin{align*}
\|\pwc c{\tau}\|_{L^2(0,T;W^{1+\delta,p}(\Omega))}+\|\upwc c{\tau}\|_{L^2(0,T;W^{1+\delta,p}(\Omega))}\leq C_{\delta}.
\end{align*}
In combination with \eqref{cBound1} and \eqref{cBound2}, \color{black} the application of the Aubin-Lions compactness theorem
yields \eqref{cConv4}.
Now we choose a subsequence $\tau\downarrow 0$ such that
$\Delta_p(\pwc c{\tau})\rightharpoonup S$ in $L^2(Q)$ for some element $S\in L^2(Q)$,
which is possible due to \eqref{cBound3}.
Taking $\pwc c{\tau}\to c$ in $L^2(Q)$ into account, we may identify
$S=\Delta_p(c)$ by the strong-weak closedness of the maximal monotone graph of
$\Delta_p:L^2(Q)\to L^2(Q)$. We then conclude
\eqref{cConv3}.
Analogously, \eqref{etaConv} ensues from the strong-weak closedness of the graph of the maximal monotone operator on $L^2(Q)$
induced by $\beta_\omega$, i.e.\ $\beta_\omega : L^2(Q) \to L^2(Q)$.
In addition, \color{black} \eqref{zConv3}, \eqref{zConv4}, \eqref{uConv3} and \eqref{uConv4}
follow from \eqref{zBound1}, \eqref{zBound2}, \eqref{uBound1} \color{black} and \eqref{uBound2} via
Aubin-Lions compactness results (see \cite{simon}).
It remains to show the convergences for $\pwc\vartheta{\tau}$ and $\log(\pwc\vartheta{\tau})$.
Here we proceed as in \cite[Proof of Lemma 5.1]{RocRos14}. We use the boundedness
properties \eqref{thetaBound3} and \eqref{thetaBound4},
and apply the compactness result \cite[Theorem A.5]{RocRos14}
which is based on Helly's selection principle.
We obtain a function $\lambda\in L^2(0,T;H^1(\Omega))\cap L^\infty(0,T;W^{1,d+\epsilon}(\Omega)')$ for all $\epsilon>0$
and a further (again not relabeled) subsequence such that
\begin{align*}
&\log(\pwc\vartheta{\tau})\weakstarlim \lambda\text{ weakly-star in }L^2(0,T;H^1(\Omega))\cap L^\infty(0,T;W^{1,d+\epsilon}(\Omega)'),\\
&\log(\pwc\vartheta{\tau}(t))\rightharpoonup \lambda(t)\text{ weakly in }H^1(\Omega)
\quad\text{for a.a. }t\in(0,T).
\end{align*}
Here the chosen subsequence for $\tau\downarrow 0$ does not depend on $t$ in the latter convergence.
We also infer from above that
$$
\log(\pwc\vartheta{\tau}(t))\to \lambda(t)\text{ strongly in }L^s(\Omega)
\quad\text{for a.a. $t\in(0,T)$ and all $s$ from \eqref{thetaConv3}.}
$$
By also exploiting the boundedness of $\|\log( \pwc\vartheta{\tau})\|_{L^2(0,T;H^1(\Omega))\cap L^\infty(0,T;W^{1,d+\epsilon}(\Omega)')}$
and the interpolation inequality \eqref{interpolationIneq} with $X=H^1(\Omega)$,
$Y=L^s(\Omega)$ and $Z=W^{1,d+\epsilon}(\Omega)$ we infer
that the sequence $\{\log(\pwc\vartheta{\tau})\}_{\tau}$ is uniformly integrable
in $L^2(0,T;L^s(\Omega))$.
An application of Vitali's convergence theorem proves
$$
\log(\pwc\vartheta{\tau})\to \lambda\text{ strongly in }L^2(0,T;L^s(\Omega))
\quad\text{for all $s$ from \eqref{thetaConv3}.}
$$
Comparison with \eqref{thetaConv1} yields $\lambda=\log(\vartheta)$ and hence
\eqref{thetaConv2}, \eqref{thetaConv3} and \eqref{thetaConv4}.
The uniform bound \eqref{thetaBound1} shows uniform integrability of
$\{\pwc\vartheta{\tau}\}_{\tau}$ in $L^q(Q)$ with $q\in[1,8/3)$ for $d=3$ and
$q\in[1,3)$ for $d=2$ (cf. \eqref{estetainterp}).
Vitali's convergence theorem proves the strong convergence \eqref{thetaConv5}.
In particular we find $\pwc\vartheta{\tau}(t)\to \vartheta(t)$ strongly in $L^1(\Omega)$
for a.e. $t\in(0,T)$ (where the subsequence of $\tau\downarrow 0$ is independent of $t$).
By the boundedness $\|\pwc\vartheta{\tau}(t)\|_{L^1(\Omega)}\leq C$ uniformly in $t$ and $\tau$
(see \eqref{thetaBound1})
we infer by lower semicontinuity that $\vartheta\in L^\infty(0,T;L^1(\Omega))$.
Furthermore, by considering a weak cluster point
$(\overline\vartheta_\tau)^{\frac{\kappa+\alpha}{2}}\rightharpoonup S$ in
$L^2(0,T;H^1(\Omega))$ and identifying $S=\vartheta^{\frac{\kappa+\alpha}{2}}$
via the a.e. convergence established above, we also obtain
$\vartheta^{\frac{\kappa+\alpha}{2}}\in L^2(0,T;H^1(\Omega))$.
Finally, under the additional assumption \eqref{range-k-admissible}
convergences \eqref{tetaConvAdd1} and \eqref{tetaConvAdd2} follow from
an Aubin-Lions type compactness result for $\mathrm{BV}$-functions (cf.\ e.g.\ \cite[Chap.\ 7, Cor.\ 4.9]{Rou05}),
combining estimate \eqref{thetaBound1} together with the
$\mathrm{BV}$-bound \eqref{thetaBoundAdd}. For further details we refer to
\cite[Proof of Lemma 5.1]{RocRos14}. \end{proof} \subsection{Conclusion of the proof of Theorem \ref{thm:1}} \label{ss:6.2}
Here is the outline of the proof: \begin{enumerate} \item First, for fixed $\omega>0$ we will pass to the limit as $\tau\downarrow 0$, (along the same subsequence for which the convergences in Lemma \ref{lemma:discr-conv} hold), in the time-discrete system \eqref{PDE-discrete}. We will thus obtain an \emph{entropic weak solution} (in the sense of Definition \ref{def-entropic}), to the (initial-boundary value problem for the) PDE system \eqref{eqn:PDEsystem}, where the maximal monotone operator $\beta$ and the elastic energy density $W$ are replaced by their regularized versions $\beta_\omega$ and $W^\omega$. \item Secondly, we will tackle the limit passage as $\omega \downarrow 0$. \end{enumerate} Observe that the limit passages $\tau \downarrow 0 $ and $\omega\downarrow 0$ cannot be performed simultaneously, \color{black} because in the time-discrete system from Problem \ref{def:time-discrete} the (partial) derivatives of the convex- and the concave-decompositions \color{black} \eqref{eqn:convConcSplittingWc} may ``explode'' as $\omega\downarrow 0$. However, the convex-concave splitting shall \color{black} disappear in the limit $\tau\downarrow 0$ for fixed $\omega>0$ in the corresponding PDE system.
\paragraph{\bf Limit passage $\tau\downarrow 0$}
First of all, \color{black} we mention that from the time-discrete damage equation \eqref{eqn:discr3} we derive the following inequalities (for details we refer to \cite[Section 5.2]{hk1}; see also \cite[Proof of Theorem 1]{RocRos14} and \cite[Proof of Theorem 4]{RocRos12}): \begin{align}
&\textit{-- damage energy-dissipation inequality:}\text{ for all $t \in (0,T]$, for $s=0$, and for almost all $0< s\leq t$:}
\notag\\
&\qquad
\begin{aligned}
\label{discr-energy-diss-ineq}
&\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega |\partial_t z_\tau|^2 \, \mathrm{d} x \, \mathrm{d} r
+\int_\Omega\left(\frac1p |\nabla\pwc z{\tau}(t)|^p + (\conv{\sigma})'(\pwc z{\tau}(t))+ (\conc{\sigma})'(\upwc z{\tau}(t))\right)\, \mathrm{d} x\\
&\qquad\leq\int_\Omega\left(\frac1p |\nabla\pwc z{\tau}(s)|^p+ (\conv{\sigma})'(\pwc z{\tau}(s))+ (\conc{\sigma})'(\upwc z{\tau}(s))\right)\, \mathrm{d} x\\
&\qquad\quad+\int_{\overline{\mathsf t}_\tau(s)}^{\overline{\mathsf t}_\tau(t)}\int_\Omega \partial_t z_\tau
\left(-\breve{W}_{3,z}^\omega(\pwc c{\tau},\varepsilon(\upwc\mathbf{u}{\tau}),\pwc z{\tau}) - \invbreve{W}_{3,z}^\omega(\pwc c{\tau},\varepsilon(\upwc\mathbf{u}{\tau}),\upwc z{\tau})+\pwc \vartheta{\tau}\right)\, \mathrm{d} x \, \mathrm{d} r;
\end{aligned}\\
&\textit{-- damage variational inequality:}\text{ for all $\zeta\in L^\infty(0,T;W^{1,p}(\Omega))$ with $0\leq \zeta\leq\upwc z{\tau}$:}
\notag\\
&\qquad
\begin{aligned}
\label{var-ineq-z-bis}
&\int_0^T\int_\Omega \Big(|\nabla \pwc z{\tau}|^{p-2} \nabla \pwc z{\tau} \cdot \nabla (\zeta-\pwc z{\tau})
+\big((\partial_t z_\tau) + (\conv{\sigma})'(\pwc z{\tau})+ (\conc{\sigma})'(\upwc z{\tau})\big)(\zeta-\pwc z{\tau})\Big)\,\mathrm dx\,\mathrm dr\\
&+\int_0^T\int_\Omega\Big(\breve{W}_{3,z}^\omega(\pwc c{\tau},\varepsilon(\upwc\mathbf{u}{\tau}),\pwc z{\tau})
+ \invbreve{W}_{3,z}^\omega(\pwc c{\tau},\varepsilon(\upwc\mathbf{u}{\tau}),\upwc z{\tau})
- \pwc \vartheta{\tau}\Big)(\zeta-\pwc z{\tau})\,\mathrm dx\,\mathrm dr\geq 0.
\end{aligned} \end{align}
The limit passage $\tau\downarrow 0$ in the damage energy-dissipation inequality \eqref{discr-energy-diss-ineq}, in the damage variational inequality \eqref{var-ineq-z-bis}, in the entropy inequality \eqref{discr-entropy-ineq}, in the total energy inequality \eqref{discr-energy-ineq} and in the equation for the balance of forces \eqref{eqn:discr5} works exactly as outlined in \cite[Proof of Theorem 1]{RocRos14} by taking the growth properties \color{black} \eqref{est-quoted-5.1}--\eqref{est-quoted-5.4} into account (for \textbf{fixed} $\omega>0$\color{black}) and needs no repetition here.
We end up with properties (ii), (iii), (iv) and (v) of Definition \ref{def-entropic},
keeping in mind that
$W(c,\varepsilon(\mathbf{u}),z)$, $W_{,z}(c,\varepsilon(\mathbf{u}),z)$ and $W_{,\varepsilon}(c,\varepsilon(\mathbf{u}),z)$ are replaced by their \color{black} $\omega$-regularized versions $W^\omega(c,\varepsilon(\mathbf{u}),z)$, $W_{,z}^\omega(c,\varepsilon(\mathbf{u}),z)$ and $W_{,\varepsilon}^\omega(c,\varepsilon(\mathbf{u}),z)$, respectively. Let us comment that in the limit $\tau\downarrow 0$ of \eqref{var-ineq-z-bis} we are only able to obtain a ``one-sided variational inequality'' which still suffices to obtain a weak solution in the sense of Definition \ref{def-entropic} (see \eqref{var-ineq-z}). Furthermore, following \color{black} the approach from \cite[Proof of Theorem 4.4]{hk1}, the subgradient $\xi\in L^2(0,T;L^2(\Omega))$
fulfilling $\xi \in \partial I_{[0,+\infty)}(z)$ a.e.\ in $Q$
can be specified precisely as \begin{align*}
\xi=-\mathbf{1}_{\{z=0\}}\Big(\sigma'(z) + \pd{z}(c,\varepsilon(\mathbf{u}), z)-\vartheta\Big)^+ \qquad \text{a.e. in}\; Q, \end{align*} where $\mathbf{1}_{\{z=0\}}:Q\to\{0,1\}$ denotes the characteristic function of the set $\{z=0\}\subseteq Q$ and $(\cdot)^+:=\max\{0,\cdot\}$.
It remains to show the limit passage as $\tau\downarrow 0$ in the Cahn-Hilliard system \eqref{eqn:discr1}--\eqref{eqn:discr2}. This can be achieved via standard convergence methods by exploiting the convergences shown in Lemma \ref{lemma:discr-conv} and noticing the growth properties \eqref{est-quoted-5.1}--\eqref{est-quoted-5.4}. This leads to property (i) from Definition \ref{def-entropic} where $W_{,c}(c,\varepsilon(\mathbf{u}),z)$ and $\beta$ should be replaced by $W_{,c}^\omega(c,\varepsilon(\mathbf{u}),z)$ and $\beta_\omega$, respectively. \hspace{1pt} \paragraph{\bf Limit passage $ {\boldsymbol \omega} {\boldsymbol \downarrow}
{\bf 0}$ \color{black}} In the subsequent argumentation we let $S_\omega=(c_\omega,\mu_\omega,z_\omega,\vartheta_\omega,\mathbf{u}_\omega)$ be an $\omega$-regularized weak solution, i.e. an entropic \color{black} weak solution in the sense of Definition \ref{def-entropic} where the $W$, $W_{,c}$, $W_{,\varepsilon}$, $W_{,z}$ and $\beta$-terms are replaced by $W^\omega$, $W_{,c}^\omega$, $W_{,\varepsilon}^\omega$, $W_{,z}^\omega$ and $\beta_\omega$, respectively. We observe that the a priori estimates in Proposition \ref{prop:aprio-discr} are inherited by the weak solutions $S_\omega$ via lower semicontinuity \color{black} arguments. Hence we obtain the same convergence properties for $\omega\downarrow 0$ as for $\tau\downarrow 0$ in Lemma \ref{lemma:discr-conv} where \eqref{etaConv} should be replaced by \begin{align}
\pwc \eta{\omega} \rightharpoonup \eta
\quad\text{ weakly in }L^2(0,T;L^2(\Omega))\text{ as }\omega\downarrow 0\text{ with }\eta\in\beta(c)\text{ a.e. in }Q.
\label{etaConv2} \end{align}
Indeed, to prove \eqref{etaConv2}, let $\eta_{\omega}=\beta_\omega(c_\omega)\rightharpoonup S$ in $L^2(Q)$ as $\omega\downarrow 0$ for some element $S\in L^2(Q)$. By convexity of the functional $\widehat\beta_\omega:L^2(Q)\to\bbR$ we find \begin{align} \label{convBeta}
\forall w\in L^2(Q):\quad \widehat\beta_\omega(c_\omega)+\langle \beta_\omega(c_\omega), w-c_\omega\rangle_{L^2(Q)}\leq \widehat\beta_\omega(w). \end{align} Since $\beta_\omega$ is the Yosida approximation of $\beta$ we conclude that (cf. \cite[Lemma 5.17]{Rou05}) \begin{align}
\forall w\in L^2(Q):\quad \widehat\beta_\omega(w)\to \widehat\beta(w)\text{ as }\omega\downarrow0
\qquad\text{ and }\qquad\liminf_{\omega\downarrow0}\widehat\beta_\omega(c_\omega)\geq \widehat\beta(c).
\label{auxConv} \end{align} Thus, by \eqref{etaConv2} and \eqref{auxConv}, we can pass to the limit $\omega\downarrow 0$ along a subsequence in \eqref{convBeta} and obtain $\eta\in\partial\widehat\beta(c)=\beta(c)$ a.e. in $Q$.
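Spelled out, the limit passage uses the weak convergence $\eta_\omega\rightharpoonup\eta$ together with the strong convergence $c_\omega\to c$ in $L^2(Q)$ (the $\omega$-analogue of \eqref{cConv4}), so that $\langle\beta_\omega(c_\omega),w-c_\omega\rangle_{L^2(Q)}\to\langle\eta,w-c\rangle_{L^2(Q)}$, and yields
\begin{align*}
\widehat\beta(c)+\langle \eta,\, w-c\rangle_{L^2(Q)}
\;\leq\;\liminf_{\omega\downarrow0}\Big(\widehat\beta_\omega(c_\omega)+\langle \beta_\omega(c_\omega),\, w-c_\omega\rangle_{L^2(Q)}\Big)
\;\leq\;\lim_{\omega\downarrow0}\widehat\beta_\omega(w)
\;=\;\widehat\beta(w)
\end{align*}
for every $w\in L^2(Q)$, which is precisely the subdifferential inequality characterizing $\eta\in\partial\widehat\beta(c)$.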
The main feature for the passage $\omega\downarrow 0$ in the PDE system is the following observation: From \eqref{cBound1} and \eqref{cBound2} we infer via the compact embedding $W^{1,p}(\Omega)\Subset L^\infty(\Omega)$ that for all $\omega>0$ \begin{align*}
\|c_\omega\|_{L^\infty(Q)}\leq C. \end{align*} An important consequence is that in combination with the definition of $\mathcal R_\omega$ in \eqref{Rtrunc} we find for all sufficiently small $\omega>0$ that $\mathcal R_\omega(c_\omega)=c_\omega$ a.e. in $Q$ and thus \begin{align*}
\left.
\begin{aligned}
&W(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega)=W^\omega(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega),
&W_{,c}(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega)=W_{,c}^\omega(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega),\\
&W_{,\varepsilon}(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega)=W_{,\varepsilon}^\omega(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega),
&W_{,z}(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega)=W_{,z}^\omega(c_\omega,\varepsilon(\mathbf{u}_\omega),z_\omega).
\end{aligned}
\quad\right\}\quad
\text{a.e. in }Q. \end{align*}
Then, the limit \color{black} passage $\omega\downarrow 0$ in the $\omega$-regularized versions of (i)-(v) in Definition \ref{def-entropic} works as for $\tau\downarrow 0$.
This concludes the proof of Theorem \ref{thm:1}. \color{black}
$\square$
\noindent {\bf \large Acknowledgments.}
Christian Heinemann and Christiane Kraus have been partially supported by ECMath SE 4 and SFB 1114. The work of Elisabetta Rocca was supported by the FP7-IDEAS-ERC-StG Grant \#256872 (EntroPhase), by GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica), and by IMATI -- C.N.R. Pavia. Riccarda Rossi was partially supported by a MIUR-PRIN'10-'11 grant for the project ``Calculus of Variations'', and by GNAMPA (Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni) of INdAM (Istituto Nazionale di Alta Matematica).
\end{document}
\begin{document}
\title{$SO(2)\times SO(3)$-invariant Ricci solitons and ancient flows on $\mathbb{S}^4$}
\abstract{Consider the standard action of $SO(2)\times SO(3)$ on $\mathbb{R}^5=\mathbb{R}^2\oplus \mathbb{R}^3$. We establish the existence of a uniform constant $\mathcal{C}>0$ so that any $SO(2)\times SO(3)$-invariant Ricci soliton on $\mathbb{S}^4\subset \mathbb{R}^5$ with Einstein constant $1$ must have Riemann curvature and volume bounded by $\mathcal{C}$, and injectivity radius bounded below by $\frac{1}{\mathcal{C}}$. This observation, coupled with basic numerics, gives strong evidence to suggest that the only $SO(2)\times SO(3)$-invariant Ricci solitons on $\mathbb{S}^4$ are round. We also encounter the so-called `pancake' ancient solution of the Ricci flow.}
\section{Introduction} Let $M$ be a smooth manifold. A \textit{gradient Ricci soliton} on $M$ is a Riemannian metric $g$ for which there exists a constant $\lambda$ and a smooth function $u:M\to \mathbb{R}$ so that \begin{equation}\label{GRS}
\text{Ric} (g)+Hess_g(u)=\lambda g. \end{equation} Solutions of \eqref{GRS} arise as self-similar solutions to the well-known Ricci flow \begin{align}\label{RF}
\frac{\partial g}{\partial t}=-2\text{Ric}(g). \end{align} In the quest for new solutions to \eqref{GRS}, one often assumes that $g$ and $u$ are invariant under a certain group action $G$ of $M$. It is of no use to assume that $G$ acts transitively on $M$, because then $g$ is homogeneous and $u$ is constant, so $g$ must be Einstein. Finding homogeneous Einstein metrics is its own well-studied problem; perhaps the crowning achievement in this area of study is the work in \cite{BWZ}, where the authors show that the mountain pass theorem is quite generally applicable in the construction of compact homogeneous Einstein metrics. The classification of non-compact homogeneous Einstein metrics is the subject of the long-standing Alekseevskii conjecture (see Conjecture 7.57 of \cite{Besse}).
After assuming that $G$ acts transitively, the next natural step is assuming that $G$ acts with \textit{cohomogeneity one}, which means that the generic orbits of the action of $G$ in $M$ have dimension one less than that of the manifold. Several examples of gradient Ricci solitons have been constructed using cohomogeneity one invariance (perhaps the most notable examples are \cite{Bohm98} and \cite{DancerWang}). In this paper, we consider the problem of solving \eqref{GRS} on $\mathbb{S}^4$ with $\lambda=1$ for a pair $(g,u)$ which is invariant under the usual cohomogeneity one action of $SO(2)\times SO(3)$. We first show a compactness result for solutions of this problem. \begin{thm}\label{CT}
There exists a $\mathcal{C}>0$ so that any $SO(2)\times SO(3)$-invariant solution $(g,u)$ of \eqref{GRS} (with $\lambda=1$) on $\mathbb{S}^4$
has $\text{inj}_g\ge \frac{1}{\mathcal{C}}$, $\text{vol}(g)\le \mathcal{C}$, and Riemann curvature bounded pointwise by $\mathcal{C}$. \end{thm} Theorem \ref{CT} forms part of a well-established area of study which seeks to produce various types of compactness results for spaces of gradient shrinking Ricci solitons, especially on four-dimensional manifolds. The strongest general result to date appears to be Theorem 1.1 in \cite{HaslMuller}, which shows compactness in the orbifold sense (with the pointed Cheeger-Gromov topology), but only once a uniform lower bound on the Perelman entropy is known. The proof of this orbifold compactness result essentially boils down to establishing a uniform $L^2$ norm on the Riemann curvature, rather than the stronger uniform $L^{\infty}$ bounds we establish with Theorem \ref{CT}.
We hope that Theorem \ref{CT} brings us a step closer to actually determining \textit{uniqueness} of invariant Ricci solitons. \begin{conj}\label{ClT}
Any $SO(2)\times SO(3)$-invariant solution of \eqref{GRS} on $\mathbb{S}^4$ with $\lambda=1$ is the round sphere, up to diffeomorphism. \end{conj} The prospects of verifying this conjecture numerically are discussed in Section \ref{mainproof}.
In the course of proving Theorem \ref{CT}, we discover that our notion of compactness is not very rigid, in the sense that we find a sequence of `almost' Ricci solitons with unbounded Riemann curvature. These `almost' solitons are an interpolation between the Gaussian shrinker on $\mathbb{R}^2\times \mathbb{S}^2$ and a rescaled product of the Bryant soliton on $\mathbb{R}^3$ with a flat metric on $\mathbb{S}^1$. The only reason we can conclude that these are \text{not} Ricci solitons is that these metrics have non-negative Riemann curvature; if these metrics were solitons, they would be round by Hamilton's pinching results in \cite{Hamilton86}. However, it turns out that this `pancake' shape is a $\kappa$-noncollapsed ancient Ricci flow on $\mathbb{S}^4$ with positive Riemann curvature. \begin{thm}\label{NAS}
There exists a $\kappa>0$ and a $\kappa$-noncollapsed, non-round $SO(2)\times SO(3)$-invariant ancient solution to the Ricci flow on $\mathbb{S}^4$
with positive Riemann curvature operator. \end{thm}
\section{Preliminaries} In case the metric $g$ and the function $u$ are invariant under a certain cohomogeneity one action, the Ricci soliton equation \eqref{GRS} reduces to a system of ordinary differential equations. In this section, we discuss these equations in the case that our cohomogeneity one action is $SO(2)\times SO(3)$, and we provide some initial results on their solutions using the maximum principle. It turns out that all of the material in this section applies to solitons on both $\mathbb{S}^4$ and $\mathbb{S}^2\times \mathbb{S}^2$ which are invariant under the obvious action of $SO(2)\times SO(3)$, so for this section only, we consider both of these four-dimensional manifolds. \subsection{The boundary value problem} Under the action of $SO(2)\times SO(3)$, the principal orbits of $\mathbb{S}^4$ or $\mathbb{S}^2\times \mathbb{S}^2$ are product spheres $\mathbb{S}^1\times \mathbb{S}^2$. In the case of $\mathbb{S}^4$, the two singular orbits are one copy of $\mathbb{S}^1$ and one copy of $\mathbb{S}^2$, whereas for $\mathbb{S}^2\times \mathbb{S}^2$, both singular orbits are copies of $\mathbb{S}^2$. Up to diffeomorphism, any $SO(2)\times SO(3)$-invariant Riemannian metric on $\mathbb{S}^4$ or $\mathbb{S}^2\times \mathbb{S}^2$ has the form \begin{equation}\label{metricform}
dt^2+f_1^2 d\theta^2+f_2^2Q, \end{equation}
where $f_i:(0,T)\to \mathbb{R}$ are smooth and positive functions, $d\theta$ is the standard one-form on $\mathbb{S}^1$, and
$Q$ is the round metric of unit Ricci curvature on $\mathbb{S}^2$. In order for the Riemannian metric $g$ to close up smoothly at
the singular orbits, the functions $f_1$ and $f_2$ must be smoothly extendable to functions on $(-T,2T)$ so that \begin{align}\label{metricsmoothness} \begin{split}
f_1(0)>0\ \text{and $f_1(t)$ is even about $t=0$},\qquad
f_2'(0)=1\ \text{and $f_2(t)$ is odd about $t=0$},\\ f_1'(T)=-1\ \text{and $f_1(t)$ is odd about $t=T$},\qquad f_2(T)>0\ \text{and $f_2(t)$ is even about $t=T$},\\
\end{split} \end{align} in the case of $\mathbb{S}^4$, and so that \begin{align}\label{metricsmoothness'} \begin{split}
f_1'(0)=1\ \text{and $f_1(t)$ is odd about $t=0$},\qquad
f_2(0)>0\ \text{and $f_2(t)$ is even about $t=0$},\\ f_1'(T)=-1\ \text{and $f_1(t)$ is odd about $t=T$},\qquad f_2(T)>0\ \text{and $f_2(t)$ is even about $t=T$},\\
\end{split} \end{align} in the case of $\mathbb{S}^2\times \mathbb{S}^2$. At any point in a principal orbit, let $\{e_1,e_2,e_3\}$ be an orthonormal basis adapted to the standard product metric $d\theta^2+Q$ on $\mathbb{S}^1\times \mathbb{S}^2$. Then in the basis $\{\partial_t\wedge e_1,\partial_t\wedge e_2,\partial_t\wedge e_3,e_2\wedge e_3,e_1\wedge e_2, e_1\wedge e_3\}$, we compute (cf. \cite{GroveZiller}) the Riemann curvature operator: \begin{align}\label{riemanncurvatureoperator}
\mathcal{R}=\begin{pmatrix}
-\frac{f_1''}{f_1}&0&0&0&0&0\\
0&-\frac{f_2''}{f_2}&0&0&0&0\\
0&0&-\frac{f_2''}{f_2}&0&0&0\\
0&0&0&\frac{1-f_2'^2}{f_2^2}&0&0\\
0&0&0&0&-\frac{f_1'f_2'}{f_1f_2}&0\\
0&0&0&0&0&-\frac{f_1'f_2'}{f_1f_2}
\end{pmatrix}. \end{align} Suppose we have an $SO(2)\times SO(3)$-invariant metric of the form \eqref{metricform} which is a gradient shrinking Ricci soliton with an $SO(2)\times SO(3)$-invariant potential function $u$. Then $u$ depends only on the $t$ parameter, and can be smoothly extended to a function on $(-T,2T)$ so that \begin{align}\label{usmoothness}
u(t) \ \text{is even about both $t=0$ and $t=T$}, \end{align} and the functions $(f_1,f_2,u)$ satisfy \eqref{metricsmoothness} or \eqref{metricsmoothness'}, alongside the Ricci soliton equation \eqref{GRS} on $(0,T)$ which becomes \begin{align}\label{solitonequations} \begin{split}
-\frac{f_1''}{f_1}-2\frac{f_2''}{f_2}+u''&=\lambda,\\
-\frac{f_1''}{f_1}-2\frac{f_1'f_2'}{f_1f_2}+u'\frac{f_1'}{f_1}&=\lambda,\\
-\frac{f_2''}{f_2}-\frac{f_1'f_2'}{f_1f_2}+\frac{1-f_2'^2}{f_2^2}+u'\frac{f_2'}{f_2}&=\lambda.
\end{split} \end{align} We find it useful to set $\lambda=1$ and introduce the new functions $L_i=\frac{f_i'}{f_i}$, $\xi=L_1+2L_2-u'$, $R=\frac{1}{f_2}$, so the Ricci soliton equations in these variables are \begin{align}\label{equationsnewf} \begin{split}
\xi'=-L_1^2-2L_2^2-1,\\
L_1'=-\xi L_1-1,\\
L_2'=-\xi L_2+R^2-1, \\
R'=-L_2 R.
\end{split} \end{align} Using the boundary conditions, a solution of \eqref{equationsnewf} on $(0,T)$ uniquely determines a solution of \eqref{solitonequations}. Note that in these co-ordinates, the Riemann curvature eigenvalues for a Ricci soliton are: \begin{itemize}
\item $\xi L_1+1-L_1^2$ (we know this must be non-negative for a soliton by Proposition \ref{scf1} below);
\item $R^2-L_2^2$ (we know this must be non-negative for a soliton by Proposition \ref{sf2p} below);
\item $-L_1L_2$ (occurs twice); and
\item $\xi L_2+1-R^2-L_2^2$ (occurs twice). \end{itemize}
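For instance, the first expression follows from the second equation in \eqref{solitonequations} with $\lambda=1$: since $2L_2-u'=\xi-L_1$,
\begin{align*}
-\frac{f_1''}{f_1}=1+2\frac{f_1'f_2'}{f_1f_2}-u'\frac{f_1'}{f_1}=1+L_1(2L_2-u')=\xi L_1+1-L_1^2;
\end{align*}
the remaining entries of \eqref{riemanncurvatureoperator} are rewritten analogously, using $\frac{1-f_2'^2}{f_2^2}=R^2-L_2^2$ and the third equation in \eqref{solitonequations}.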
\subsection{An initial step towards compactness: the maximum principle} If we were lucky enough to know \textit{a priori} that any $SO(2)\times SO(3)$-invariant shrinking soliton on $\mathbb{S}^4$ had to have non-negative Riemann curvature operator, then Hamilton's pinching results for $4$-manifolds under the Ricci flow in \cite{Hamilton86} would prove Conjecture \ref{ClT} in the affirmative immediately (and would therefore give us the proof of Theorem \ref{CT} as well). Although there does not seem to be a way to cheaply show that our Ricci solitons must have non-negative Riemann curvature, a relatively simple application of the maximum principle does show that at least two of the six Riemann curvature eigenvalues must be non-negative everywhere. Since this observation is straightforward, and is frequently used in the remainder of the paper, we include the proofs in this preliminary section. \begin{prop}\label{scf1}
For any $SO(2)\times SO(3)$-invariant gradient shrinking Ricci soliton $(g,u)$ on $\mathbb{S}^4$ or $\mathbb{S}^2\times \mathbb{S}^2$
of the form \eqref{metricform},
the quantity $\frac{-f_1''}{f_1}$ is non-negative everywhere. \end{prop} \begin{proof} For a given point on a principal orbit, consider the selfdual/anti-selfdual basis \begin{align*}
\{\partial_t\wedge e_1+e_2\wedge e_3,\partial_t\wedge e_2+e_1\wedge e_3,e_3\wedge \partial_t+e_1\wedge e_2,
\partial_t\wedge e_1-e_2\wedge e_3,\partial_t\wedge e_2-e_1\wedge e_3,e_3\wedge \partial_t-e_1\wedge e_2\}. \end{align*}
In this basis, the Riemann curvature operator is given by
\begin{align*}
\mathcal{R}=\begin{pmatrix}
A & B\\
B^T &A
\end{pmatrix}
\end{align*}
where
\begin{align*}
2A=\begin{pmatrix}
-\frac{f_1''}{f_1}+\frac{1-(f_2')^2}{f_2^2}&0&0\\
0&-\frac{f_2''}{f_2}-\frac{f_1'f_2'}{f_1f_2}&0\\
0&0&-\frac{f_2''}{f_2}-\frac{f_1'f_2'}{f_1f_2}
\end{pmatrix}, \ 2B=
\begin{pmatrix}
-\frac{f_1''}{f_1}-\frac{1-(f_2')^2}{f_2^2}&0&0\\
0&-\frac{f_2''}{f_2}+\frac{f_1'f_2'}{f_1f_2}&0\\
0&0&-\frac{f_2''}{f_2}+\frac{f_1'f_2'}{f_1f_2}
\end{pmatrix}.
\end{align*}
After applying the Uhlenbeck trick, Hamilton shows in \cite{Hamilton86} that, under the Ricci flow, the curvatures satisfy the evolution equations
\begin{align*}
\frac{\partial A}{\partial \tau}=\Delta_{g(\tau)} A+A^2+B^T B+2A^\#, \qquad \frac{\partial B}{\partial \tau}=\Delta_{g(\tau)} B+AB+BA+2B^\#.
\end{align*}
Since the second and third diagonal entries of both $A$ and $B$ are identical, the first entries of $A^\#$ and $B^\#$ are non-negative.
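(Recall that on diagonal $3\times 3$ matrices $M=\operatorname{diag}(m_1,m_2,m_3)$ the operator $M\mapsto M^\#$ acts, up to a positive normalization, as $M^\#=\operatorname{diag}(m_2m_3,\,m_1m_3,\,m_1m_2)$; with $m_2=m_3$ the first entry is a square and hence non-negative.)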
We therefore find that the first eigenvalue of the matrix $A+B$ must be increasing under the Ricci flow.
However, the Riemann curvatures of gradient shrinking Ricci solitons evolve only by diffeomorphisms and scalings under the flow, so
the first eigenvalue of the matrix $A+B$, which is $-\frac{f_1''}{f_1}$, must be non-negative everywhere. \end{proof} \begin{prop}\label{sf2p}
For any $SO(2)\times SO(3)$-invariant gradient shrinking Ricci soliton
on $\mathbb{S}^4$ or $\mathbb{S}^2\times \mathbb{S}^2$ of the form \eqref{metricform},
the quantity $\frac{1-f_2'^2}{f_2^2}$ is non-negative everywhere. More generally, suppose we have an
$SO(2)\times SO(3)$-invariant Riemannian metric of the form \eqref{metricform}, as well as two points $r_1,r_2\in [0,T]$ so that:
\begin{itemize}
\item $\frac{1-f_2'^2}{f_2^2}\ge 0$ at $r_1,r_2$; and
\item the metric satisfies the Ricci soliton equations on $(r_1,r_2)$.
\end{itemize} Then $\frac{1-f_2'^2}{f_2^2}\ge 0$ on $(r_1,r_2)$. \end{prop} \begin{proof}
Clearly it suffices to show that $f_2'\in [-1,1]$ on $[r_1,r_2]$.
Differentiating and using the equations of \eqref{solitonequations}, we find
\begin{align*}
0&=-f_2'''+(-\frac{f_1'f_2'}{f_1}+\frac{1-f_2'^2}{f_2})'+(u''-\lambda)f_2'+u'f_2''\\
&=-f_2'''+\frac{f_1'^2f_2'}{f_1^2}-\frac{f_1'f_2''}{f_1}+\frac{-2f_2'f_2''f_2+f_2'(f_2'^2-1)}{f_2^2}+2\frac{f_2''f_2'}{f_2}+u'f_2''.\\
\end{align*} Therefore, if $f_2'>1$ somewhere on $(r_1,r_2)$, then there must be a point in $(r_1,r_2)$ with $f_2'>1$, $f_2''=0$ and $f_2'''\le 0$. At this point, we find \begin{align*}
0&=-f_2'''+\frac{f_1'^2f_2'}{f_1^2}+\frac{f_2'(f_2'^2-1)}{f_2^2}>0, \end{align*} a contradiction. We obtain a similar contradiction if there were a point with $f_2'<-1$. A metric which is a Ricci soliton everywhere has $f_2'\in [-1,1]$ at $0$ and $T$ for both $\mathbb{S}^4$ and $\mathbb{S}^2\times \mathbb{S}^2$ by \eqref{metricsmoothness} and \eqref{metricsmoothness'}, so the claim follows. \end{proof} \section{The shooting problem for $\mathbb{S}^4$: a curve and a surface} We turn to the problem of establishing Theorem \ref{CT}. It is convenient to cast the problem of finding solutions of \eqref{metricsmoothness}, \eqref{usmoothness} and
\eqref{equationsnewf} as a shooting problem: the idea is to study the initial value problem for \eqref{equationsnewf} around the two singular orbits at $t=0,T$, and examine how the solutions meet at a specified principal orbit. In particular, we will examine how the solutions meet at an orbit where $\xi=0$, since \eqref{equationsnewf} implies that there will be exactly one such orbit for a shrinking soliton.
To begin, we examine more carefully the initial value problem at the $\mathbb{S}^1$ orbit ($t=0$). Note that, by \eqref{metricsmoothness} and \eqref{usmoothness}, there must be smooth functions $\eta_0,\eta_1,\eta_2,\eta_3$ so that \begin{align}\label{etaforma}
\xi(t)=\frac{2}{t}+\eta_0(t), \qquad L_1(t)=\eta_1(t), \qquad L_2(t)=\frac{1}{t}+\eta_2(t), \qquad R(t)=\frac{1}{t}+\eta_3(t), \end{align} and $\eta_i(0)=0$ for all $i=0,1,2,3$. It turns out that whenever solving \eqref{equationsnewf} subject to \eqref{etaforma} close to $t=0$, solutions are uniquely determined by the number \begin{align}\label{ivpa}
\delta_1=\eta_3'(0)=\lim_{t\to 0}\left(\frac{R(t)-\frac{1}{t}}{t}\right). \end{align} \begin{prop}\label{mapa}
For each value of $\delta_1\in \mathbb{R}$, there exists a unique solution to the initial value problem of \eqref{equationsnewf}, subject to \eqref{etaforma}
and \eqref{ivpa}.
The solution can be extended smoothly to a point with $\xi=0$, and $\xi^{-1}(0)$ depends smoothly on $\delta_1$.
For $i=0,1,2,3$, the quantity $\frac{\eta_i(t)}{t}$ depends continuously on $\delta_1$ and $t\in [0,\xi^{-1}(0)]$. \end{prop} The proof of Proposition \ref{mapa} essentially follows from the techniques discussed in \cite{Buzano}. We can similarly examine the initial value problem at the $\mathbb{S}^2$ orbit. This time, \eqref{metricsmoothness} and \eqref{usmoothness} imply that there must be smooth functions $\eta_0,\eta_1,\eta_2,\eta_3$ so that \begin{align}\label{etaformb}
-\xi(T-t)=\frac{1}{t}+\eta_0(t), \qquad -L_1(T-t)=\frac{1}{t}+\eta_1(t), \qquad -L_2(T-t)=\eta_2(t), \qquad R(T-t)=R(T)+\eta_3(t), \end{align} where again $\eta_i(0)=0$ for $i=0,1,2,3$. This time, solutions are uniquely determined by \begin{align}\label{ivpb}
\delta_2=\eta_0'(0), \qquad \delta_3=R(T). \end{align} \begin{prop}\label{mapb}
For each value of $(\delta_2,\delta_3)\in \mathbb{R}^2$, there exists a unique solution to the initial value problem of \eqref{equationsnewf}, subject to
\eqref{etaformb} and \eqref{ivpb}.
The solution can be extended smoothly to a point with $\xi=0$, and $\xi^{-1}(0)$ depends smoothly on $(\delta_2,\delta_3)$.
For $i=0,1,2,3$, the quantity $\frac{\eta_i(t)}{t}$ depends continuously on $\delta_2,\delta_3$ and $t\in [\xi^{-1}(0),T]$. \end{prop}
The main idea behind the proof of Theorem \ref{CT} is to provide bounds on the possible values that $\delta_1,\delta_2,\delta_3$ can achieve. Fortunately, we can find some of these bounds immediately. \begin{prop}\label{initialbounds}
If the solutions to the IVPs found in Propositions \ref{mapa} and \ref{mapb} coincide at time $\xi^{-1}(0)$, then $\delta_1\ge 0$, $\delta_2\ge -1$ and
$\delta_3\ge 0$. \end{prop} \begin{proof}
The $\delta_3$ bound is obvious because $\lim_{t\to 0}R(t)=\infty$, and $R'=-L_2R$. For $\delta_1$, note that $R^2-L_2^2\ge 0$ by Proposition \ref{sf2p}, so
using \eqref{etaforma} and \eqref{ivpa}, we find $0\le \lim_{t\to 0}(R(t)^2-L_2(t)^2)=2\eta_3'(0)-2\eta_2'(0)$.
But by using \eqref{equationsnewf} we see that $\eta_2'(0)=-2\eta_3'(0)$, so $0\le 6\delta_1$.
For $\delta_2$, we can use Proposition \ref{scf1} to find that $\xi L_1+1-L_1^2\ge 0$ everywhere; the result then follows similarly to the analogous result for
$\delta_1$. \end{proof}
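For the reader's convenience we spell out the $\delta_2$ computation alluded to in the proof above. With the notation of \eqref{etaformb}, a short expansion of the equation $L_1'=-\xi L_1-1$ at the $\mathbb{S}^2$ orbit gives $\eta_1'(0)=-\frac{\delta_2+1}{2}$, and therefore
\begin{align*}
0\,\le\,\lim_{t\to 0}\big(\xi L_1+1-L_1^2\big)(T-t)
\,=\,\lim_{t\to 0}\Big(\tfrac1t+\eta_1(t)\Big)\big(\eta_0(t)-\eta_1(t)\big)+1
\,=\,\delta_2-\eta_1'(0)+1\,=\,\tfrac32\,(\delta_2+1),
\end{align*}
which is the bound $\delta_2\ge -1$ stated above.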
Let $C$ be the smooth curve in $\mathbb{R}^3$ consisting of all values of $(L_1,L_2,R)$ evaluated at $\xi^{-1}(0)$ found from Proposition \ref{mapa} for $\delta_1\ge 0$, and let $S$ be the smooth surface in $\mathbb{R}^3$ consisting of all values of $(L_1,L_2,R)$ evaluated at $\xi^{-1}(0)$ found from Proposition \ref{mapb} for $\delta_2\ge -1$ and $\delta_3\ge 0$. Proposition \ref{initialbounds} implies that any $SO(2)\times SO(3)$-invariant Ricci soliton on $\mathbb{S}^4$ must correspond to a point in $\mathbb{R}^3$ where the curve $C$ intersects the surface $S$. The following images give various views of an approximation to $C$ (in red) and $S$ (in blue) which were found using Matlab's ODE solver.
\begin{center}
\includegraphics[width=5cm]{InitialView.png} \hspace{1cm}
\includegraphics[width=5cm]{L1L2.png}\\
\includegraphics[width=5cm]{L1R.png}\hspace{1cm}
\includegraphics[width=5cm]{L2R.png} \end{center}
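For the interested reader, the curve $C$ is straightforward to approximate numerically. The snippet below is a minimal illustrative sketch in Python (it is not the Matlab code used to produce the figures above, and it assumes that SciPy is available); it integrates \eqref{equationsnewf} from a small $t_0>0$ with initial data taken from the expansions \eqref{etaforma}, and records $(L_1,L_2,R)$ at the first orbit with $\xi=0$. The relation $\eta_2'(0)=-2\delta_1$ is the one used in the proof of Proposition \ref{initialbounds}; the remaining first-order coefficients in the initial data are derived from \eqref{equationsnewf} and should be regarded as part of this illustration rather than as statements taken from the text. The surface $S$ can be sampled analogously from the expansions \eqref{etaformb}.
\begin{verbatim}
# Illustrative sketch: approximate the curve C by shooting from the S^1 orbit.
# Requires SciPy.  Leading-order initial data near t = 0 (derived, not quoted):
#   xi ~ 2/t + (8*delta1 - 1)*t,  L1 ~ -t/3,
#   L2 ~ 1/t - 2*delta1*t,        R  ~ 1/t + delta1*t.
from scipy.integrate import solve_ivp

def rhs(t, u):
    xi, L1, L2, R = u
    return [-L1**2 - 2*L2**2 - 1,   # xi' = -L1^2 - 2 L2^2 - 1
            -xi*L1 - 1,             # L1' = -xi L1 - 1
            -xi*L2 + R**2 - 1,      # L2' = -xi L2 + R^2 - 1
            -L2*R]                  # R'  = -L2 R

def xi_zero(t, u):                  # stop at the principal orbit with xi = 0
    return u[0]
xi_zero.terminal, xi_zero.direction = True, -1.0

def curve_C(delta1, t0=1e-4):
    u0 = [2.0/t0 + (8.0*delta1 - 1.0)*t0,   # xi
          -t0/3.0,                          # L1
          1.0/t0 - 2.0*delta1*t0,           # L2
          1.0/t0 + delta1*t0]               # R   (delta1 = eta_3'(0))
    sol = solve_ivp(rhs, (t0, 50.0), u0, events=xi_zero,
                    rtol=1e-10, atol=1e-12)
    return sol.y_events[0][0][1:]           # (L1, L2, R) at xi^{-1}(0)

if __name__ == "__main__":
    for d1 in (1.0/18.0, 0.5, 2.0, 10.0):
        print(d1, curve_C(d1))
\end{verbatim}
For $\delta_1=\tfrac{1}{18}$ the returned point should agree, up to discretization error, with the round-sphere intersection mentioned in the observations below.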
Some important observations: \begin{itemize}
\item The intersection we see corresponds to the round sphere with
Einstein constant $1$, and is found by setting $\delta_1=\frac{1}{18}$, $\delta_2=-\frac{7}{9}$ and $\delta_3=\frac{1}{\sqrt{3}}$.
\item As $\delta_1$ increases,
it appears the corresponding point on the curve $C$ approaches the point on the surface $S$ corresponding to $\delta_2=-1$ and $\delta_3=1$. This behaviour appears
to resemble that of the ancient `pancake' Ricci flow solution discussed in Section \ref{newancient}. \end{itemize}
\section{The surface $S$ close to $\delta_2=-1$, $\delta_3=1$} The biggest difficulty in proving Theorem \ref{CT} is establishing an \textit{a priori} upper bound for $\delta_1$. This part of the proof is achieved by showing that any Ricci soliton that occurs with $\delta_1$ too large must have non-negative Riemann curvature everywhere. This is a two-step process: the first step is showing that if $\delta_1$ is too large, then the Riemann curvature must be non-negative between the $\mathbb{S}^1$ singular orbit and the unique orbit with $\xi=10$, and the data at this point resembles the Gaussian soliton on $\mathbb{R}^2\times \mathbb{S}^2$; the second shows that if the data at $\xi=10$ is close to Gaussian, then the Riemann curvature must also be non-negative between the $\xi=10$ orbit and the $\mathbb{S}^2$ singular orbit. This section is devoted to the proof of Theorem \ref{S2orbit} below, which achieves the second step. Indeed, Theorem \ref{S2orbit} explicitly describes just how Gaussian we need to be at the $\xi=10$ principal orbit to guarantee curvature non-negativity between this orbit and the $\mathbb{S}^2$ orbit. The proof essentially involves an analysis of the behaviour of the soliton equations for values of $\delta_2$ and $\delta_3$ close to $-1$ and $1$ respectively. \begin{thm}\label{S2orbit}
Suppose we have an $SO(2)\times SO(3)$-invariant gradient shrinking Ricci soliton on $\mathbb{S}^4$ of the form \eqref{metricform}.
Let $B_2= 10^{-146}$, and consider
the quantities $\xi,L_1,L_2,R$ associated to the soliton which satisfy \eqref{equationsnewf}.
If there is a principal orbit with $\xi=10$,
$\left|L_1+\frac{1}{5+\sqrt{26}}\right|\le B_2$, $\left|R-1\right|\le B_2$, $\left|L_2\right|\le B_2$, and with $\mathcal{R}\ge 0$,
then $\mathcal{R}\ge 0$ between this principal orbit and the $\mathbb{S}^2$ orbit ($\mathcal{R}$ is described in \eqref{riemanncurvatureoperator}). \end{thm} \begin{rmk} The value of $-\frac{1}{5+\sqrt{26}}$ in the statement of the above theorem arises in relation to the Gaussian soliton on $\mathbb{R}^2\times \mathbb{S}^2$. Indeed, this soliton is the special solution coming out of the $\mathbb{S}^2$ orbit with $-\xi(T-t)=\frac{1}{t}-t$, $-L_1(T-t)=\frac{1}{t}$,
$L_2(T-t)=0$, $R(T-t)=1$; when $\xi=10$, we have $L_1=-\frac{1}{5+\sqrt{26}}$. \end{rmk}
The proof of Theorem \ref{S2orbit} follows from the three lemmas presented below. The first lemma sets the goal posts, in that it tells us that curvature positivity between the $\xi=10$ orbit and the $\mathbb{S}^2$ orbit is guaranteed if curvature is positive at $\xi=10$ and $(\delta_2,\delta_3)$ is close to $(-1,1)$. \begin{lem}\label{S21}
Suppose we have a soliton so that $(\delta_2,\delta_3)\in [-1,-1+B_1]\times [1-B_1,1+B_1]$, where $B_1=10^{-70}$.
Then the corresponding solutions of the IVP in Proposition \ref{mapb} are such that the sectional curvatures
$-L_1L_2$ and $\xi L_2+1-R^2-L_2^2$ do not change sign on $[\xi^{-1}(10),T)$. \end{lem} The curvature positivity condition of the previous lemma is assumed in the hypothesis of Theorem \ref{S2orbit}, so we turn attention to
the task of ensuring that $\max\{\left|\delta_3-1\right|,\delta_2+1\}\le B_1$ by having the soliton at the $\xi=10$ orbit quite close to the Gaussian soliton. \begin{lem}\label{S22}
A Ricci soliton with $\left|R-1\right|,\left|L_2\right|\le B_2$ at the $\xi^{-1}(10)$ orbit must have
$\max\{\left|R(t)-1\right|,\left|L_2(t)\right|\}\le B_1$ for all $t\in (\xi^{-1}(10),T)$. In particular, $\left|\delta_3-1\right|\le B_1$. \end{lem} \begin{lem}\label{S23}
A Ricci soliton with $\left|L_1+\frac{1}{5+\sqrt{26}}\right|\le B_2$ at time
$\xi^{-1}(10)$ and $\max\{\left|R(t)-1\right|,\left|L_2(t)\right|\}\le B_1$ for all $t\in (\xi^{-1}(10),T)$ must have
$0\le \delta_2+1\le B_1$. \end{lem} \begin{proof}[Proof of Theorem \ref{S2orbit}]
Using the hypothesis of Theorem \ref{S2orbit}, Lemma \ref{S22} implies that $\left|\delta_3-1\right|\le B_1$ and
$$\max\{\left|R(t)-1\right|,\left|L_2(t)\right|\}\le B_1$$ for all $t\in (\xi^{-1}(10),T)$. Combining with the hypothesis of Theorem \ref{S2orbit}, Lemma \ref{S23} implies that $0\le \delta_2+1\le B_1$. Lemma \ref{S21} then implies that two of the Riemann curvature eigenvalues do not change sign. Since they are non-negative at time $\xi^{-1}(10)$, we obtain that these curvatures are non-negative on $(\xi^{-1}(10),T)$. We already know from Propositions \ref{scf1} and \ref{sf2p} that the other two Riemann curvature eigenvalues are non-negative. \end{proof} We conclude this section with the proof of the three lemmas. \begin{proof}[Proof of Lemma \ref{S21}] The strategy behind the proof of this lemma is simple enough: check the signs of $-L_1L_2$ and $\xi L_2+1-R^2-L_2^2$ using a Taylor series approximation, and show that the error of such an approximation is small enough. The result is obvious if $\delta_3=1$, because then $L_2=0$ and $R=1$ uniformly. Otherwise, consider the new functions $w(t)=-\xi(T-t)-\frac{1}{t}$, $x(t)=-L_1(T-t)+\xi(T-t)$, $y(t)=\frac{-L_2(T-t)}{\delta_3-1}$, $z(t)=\frac{R(T-t)-1}{\delta_3-1}$ so that the vector $u=(w,x,y,z)$ satisfies \begin{align}\label{feu}
u'+\frac{A}{t}u=Bu+C(u)+D \end{align} where \begin{align*}
A=\begin{pmatrix}
2&2&0&0\\
0&-1&0&0\\
0&0&1&0\\
0&0&0&0
\end{pmatrix},\
B=\begin{pmatrix}
0&0&0&0\\
0&0&0&0\\
0&0&0&2\\
0&0&-1&0
\end{pmatrix},\
C(u)=\begin{pmatrix}
-(x+w)^2-2(\delta_3-1)^2y^2\\
x^2+xw+2(\delta_3-1)^2 y^2\\
-wy+(\delta_3-1)z^2\\
-(\delta_3-1)yz
\end{pmatrix},\
D=\begin{pmatrix}
-1\\
0\\
0\\
0
\end{pmatrix},
\end{align*} and we have \begin{align*}
u(0)=(0,0,0,1), \qquad u'(0)=(\delta_2,-\frac{3\delta_2+1}{2},\frac{\delta_3+1}{2},0), \qquad u''(0)=(0,0,0,-\frac{\delta_3(\delta_3+1)}{2}). \end{align*}
Let $u_{2}=u(0)+u'(0)t+\frac{u''(0)t^2}{2}$ be the second-order Taylor series approximation to the solution, so that \begin{align*}
u_2'+\frac{A}{t}u_2-Bu_2-C(u_2)&=D+\begin{pmatrix}
\frac{(\delta_2+1)^2}{4}+\frac{(\delta_3-1)^2(\delta_3+1)^2}{2}\\
-\frac{(3\delta_2+1)(\delta_2+1)}{4}-\frac{(\delta_3-1)^2(\delta_3+1)^2}{2}\\
\frac{(\delta_3+1)(\delta_3+\delta_2)}{2}+\frac{(\delta_3-1)\delta_3(\delta_3+1)}{2}\\
0
\end{pmatrix}t^2+
\begin{pmatrix}
0\\
0\\
0\\
\frac{(\delta_3-1)(\delta_3+1)^2\delta_3}{8}
\end{pmatrix}t^3
+
\begin{pmatrix}
0\\
0\\
\frac{(1-\delta_3)\delta_3^2(1+\delta_3)^2}{16}\\
0
\end{pmatrix}t^4 \end{align*} Therefore, the function $v=u-u_2$ satisfies \begin{align*}
v'+\frac{A}{t}v=Bv+F(t)v+C(v)+E_2t^2+E_3t^3+E_4t^4,\\
v(0)=0, \qquad v'(0)=0, \qquad v''(0)=0, \end{align*} where \begin{align*}
\left|E_2\right|_{\infty}\le 4 \max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\},\qquad
\left|E_3\right|_{\infty}\le \max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\},\qquad
\left|E_4\right|_{\infty}\le \frac{1}{3} \max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}, \end{align*}
provided $\max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}\le 10^{-6}$, and \begin{align*}
F(t)&=\begin{pmatrix}
0&0&0&0\\
0&0&0&0\\
0&0&0&2(\delta_3-1)\\
0&0&-(\delta_3-1)&0
\end{pmatrix}\\
&+\begin{pmatrix}
1+\delta_2&1+\delta_2&-2(\delta_3-1)^2(\delta_3+1)&0\\
-\frac{3\delta_2+1}{2}&-2\delta_2-1&2(\delta_3-1)^2(\delta_3+1)&0\\
-\frac{\delta_3+1}{2}&0&-\delta_2&0\\
0&0&0&\frac{-(\delta_3-1)(\delta_3+1)}{2}
\end{pmatrix}t+
\begin{pmatrix}
0&0&0&0\\
0&0&0&0\\
0&0&0&-(\delta_3-1)(\frac{\delta_3+1}{2})\delta_3\\
0&0&(\delta_3-1)(\frac{\delta_3+1}{4})\delta_3&0
\end{pmatrix}t^2. \end{align*}
Therefore, \begin{align*}
\left|v'(t)\right|_{\infty}\le \left|v(t)\right|_{\infty}\left(2.1t+\frac{1}{t}+2.1\right)+\max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}\left(4t^2+t^3+\frac{1}{3}t^4\right), \end{align*}
provided $\left|v\right|_{\infty}\le \frac{1}{100}$, and $\max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}\le 10^{-6}$. We then find that, for $t\in [0,11]$, \begin{align}\label{mainvestimate}
\frac{\left|v(t)\right|_{\infty}}{t}\le 10^{65}\times \max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}\le 10^{-5}, \end{align}
since $\max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}\le B_1$.
The estimate \eqref{mainvestimate} gives $y(t)\ge \frac{\delta_3+1}{2}t-\left| v(t)\right|\ge 0$ on $[0,11]$, so $L_2(T-t)$ does not change sign for $t\in [0,11]$. We also find that, at the principal orbit $T-t$, \begin{align*}
\frac{\xi L_2+1-R^2-L_2^2}{\delta_3-1}=-\left(\frac{1}{t}+w(t)\right)y(t)-z(t)(2+(\delta_3-1)z(t))-(\delta_3-1)y(t)^2, \end{align*} which is sufficiently close to $-2$ for $t\in [0,11]$ by \eqref{mainvestimate}.
Therefore, these two sectional curvatures do not change sign on $[T-11,T)$. Since $\max\{\left|\delta_2+1\right|,\left|\delta_3-1\right|\}\le B_1$, we can also use these estimates to conclude that $-\xi(T-t)$ is close to $\frac{1}{t}-t$, so $\xi^{-1}(10)\in [T-11,T)$, as required.
\end{proof}
\begin{proof}[Proof of Lemma \ref{S22}] A key quantity to consider is $K(t)^2=L_2(t)^2+(R(t)-1)^2$ because $K'(t)=\frac{-\xi L_2^2+L_2(R-1)}{K(t)}$, so that \begin{align}\label{L2REvo} \begin{split} K(t)\left(-\frac{1}{2}-\max\{0,\xi\}\right) \le K'(t)\le \left(\frac{1}{2}+\max\{0,-\xi\}\right)K(t),\\ K(\xi^{-1}(10))\le \sqrt{2} B_2. \end{split} \end{align}
Using \eqref{L2REvo} and the fact that $\xi^{-1}(0)-\xi^{-1}(10)\le 10$ (since $\xi'\le -1$), we find that \begin{align}\label{kestimatesearly}
K(t)\le \sqrt{2}e^{5}B_2, \ t\in (\xi^{-1}(10),\xi^{-1}(0)]. \end{align} Now let $t^*\in (\xi^{-1}(0),T-\frac{B_1}{2}]$ be the last time so that \begin{align}\label{XE}
-\xi(t)\le -L_1(t)+\frac{1}{2}\le \frac{1}{T-t}+\frac{1}{2} \ \text{for all}\ t\in [\xi^{-1}(0),t^*). \end{align} Such a $t^*$ must exist because $0\le -L_1\le \frac{1}{T-t}$ everywhere by Proposition \ref{scf1}, so the estimate holds at time $\xi^{-1}(0)$. Then using \eqref{L2REvo} and \eqref{XE}, we obtain that $K'(t)\le \left(1+\frac{1}{T-t}\right)K(t)$; integrating from $\xi^{-1}(0)$ to $t^*$ gives \begin{align}\label{kestimateotstar}
K(t)\le \frac{\sqrt{8}B_2e^{T-\xi^{-1}(0)+5}(T-\xi^{-1}(0))}{B_1}, \ t\in (\xi^{-1}(0),t^*). \end{align} Now, using the fact that $y'\ge y^2+1$ and $y(\xi^{-1}(0))\ge 0$, where $y=\min\{-\xi,-L_1\}$, we find that $T-\xi^{-1}(0)\le \frac{\pi}{2}$. Combining this with \eqref{kestimateotstar} and the definition of $B_1$, we obtain that $K(t)\le \frac{B_1}{10}$ for all $t\in (\xi^{-1}(0),t^*)$. This implies that, for $t\in (\xi^{-1}(0),t^*)$, we have \begin{align*}
(-\xi+L_1)'&=L_1(-\xi+L_1)+2L_2^2\\
&\le L_1(-\xi+L_1)+10^{-4}. \end{align*} Since $-\xi+L_1$ is negative at time $\xi^{-1}(0)$, and $L_1$ is negative everywhere, we find that $-\xi(t)< -L_1(t)+\frac{1}{2}$ for $t\in (\xi^{-1}(0),t^*)$, so $t^*=T-\frac{B_1}{2}$.
Combining $K(T-\frac{B_1}{2})\le \frac{B_1}{10}$ with $\left|R'\right|\le R^2$ then gives \begin{align}\label{efrcts2}
\left|R(t)-1\right|\le B_1 \ \text{for} \ t\in (T-\frac{B_1}{2},T). \end{align} Equation \eqref{efrcts2} combined with \eqref{kestimateotstar} and \eqref{kestimatesearly} then gives \begin{align}\label{efrcts3}
\left|R(t)-1\right|\le B_1 \ \text{for} \ t\in (\xi^{-1}(10),T).
\end{align} In particular, we find that $\left|\delta_3-1\right|\le B_1$.
To conclude the proof, it therefore suffices to check the smallness of $L_2$ on $[\xi^{-1}(10),T)$. The required bound for $L_2$ on $[\xi^{-1}(10),T-\frac{B_1}{2}]$ follows from \eqref{kestimatesearly}, \eqref{kestimateotstar} and the fact that $t^*=T-\frac{B_1}{2}$. The required $L_2$ bound on $[T-\frac{B_1}{2},T)$
follows from the estimate $\left|L_2(t)\right|'\ge -\left|R(t)^2-1\right|\ge -B_1(R(t)+1)\ge -\frac{5}{2}B_1$ for $t\in (\xi^{-1}(0),T)$ coupled with $L_2(T)=0$. \end{proof}
\begin{proof}[Proof of Lemma \ref{S23}]
The estimate $\left|R(t)-1\right|\le B_1$ implies that $\left|L_2(t)\right|'\ge -B_1(R(t)+1)\ge -\frac{5}{2}B_1$ for $t\in (\xi^{-1}(0),T)$. Therefore,
$\left|L_2(t)\right|\le \frac{5B_1(T-t)}{2}$ for all $t\in (\xi^{-1}(0),T)$, because $L_2(T)=0$. In fact, we actually obtain \begin{align}\label{L240e}
\left|L_2(t)\right|\le \frac{5B_1 (T-t)}{2} \ \text{for all} \ t\in (\xi^{-1}(10),T) \end{align}
because $\left|L_2(t)\right|\le B_1$ on $(\xi^{-1}(10),\xi^{-1}(0))$ by Lemma \ref{S22}, and $T-\xi^{-1}(0)>\frac{2}{5}$. This estimate on $T-\xi^{-1}(0)$ follows from the fact that $L_2^2+(R-1)^2\le 2B_1^2$ is small at time $\xi^{-1}(0)$, and $L_1(\xi^{-1}(0))\in [-1,0]$ (since $\xi L_1+1-L_1^2\ge 0$ everywhere).
Consider the function $x(t)=-\xi(T-t)+L_1(T-t)+t$. In light of \eqref{L240e}, the function $x$ satisfies \begin{align*}
x'(t)\ge -25B_1^2t^2+\frac{x}{t}, \qquad x'(0)=\frac{3}{2}(\delta_2+1), \end{align*} for all $t$ with $x(t)\le t$. If $\delta_2+1>B_1$, we have that $x(t)>\frac{B_1 t}{2}$ for all $t\in (0,12)$. Since $0\le -L_1(T-t)\le \frac{1}{t}$, we then find that \begin{align}\label{finalestimatexB1}
-\xi(T-t)+L_1(T-t)-\frac{1}{L_1(T-t)}\ge x(t)>\frac{B_1t}{2}. \end{align} Now, the smallness of $(R(\xi^{-1}(0))-1)^2+L_2(\xi^{-1}(0))^2$ and the closeness of $L_1(\xi^{-1}(0))$ to $-\frac{1}{5+\sqrt{26}}$ implies that $T-\xi^{-1}(10)\ge \frac{1}{2}$. Therefore, \eqref{finalestimatexB1} is in contradiction with
$\left|L_1(T-t)+\frac{1}{5+\sqrt{26}}\right|\le B_2$ at time $t=T-\xi^{-1}(10)$. \end{proof}
\section{The curve $C$ for large values of $\delta_1$} In our quest to prove that normalised $SO(2)\times SO(3)$-invariant solitons on $\mathbb{S}^4$ have uniformly bounded Riemann curvature and volume (as well as a uniform lower bound on the injectivity radius), it is necessary to find uniform bounds for the $\delta_1,\delta_2,\delta_3$ numbers discussed in Propositions \ref{mapa} and \ref{mapb}. This section is dedicated to the proof of Theorem \ref{d1large} below, which combines with Theorem \ref{S2orbit} above to produce the required bound for $\delta_1$ (recall that a Ricci soliton on $\mathbb{S}^4$ with non-negative Riemann curvature operator must be round).
\begin{thm}\label{d1large} Suppose we have an $SO(2)\times SO(3)$-invariant gradient shrinking Ricci soliton on $\mathbb{S}^4$ of the form \eqref{metricform}.
Consider the quantities $\xi,L_1,L_2,R$ associated to the soliton which satisfy \eqref{equationsnewf}, and suppose $\delta \in (0,10^{-25})$ and
$\delta_1>\frac{1}{\delta^{120}}$. Then there is a principal orbit with $\xi=10$,
$\left|L_1+\frac{1}{5+\sqrt{26}}\right|\le \delta$, $\left|R-1\right|\le \delta$, $\left|L_2\right|\le \delta$,
and so that the Riemann curvature is non-negative between this
principal orbit and the $\mathbb{S}^1$ orbit. \end{thm} When proving this theorem, it is useful to again change variables with $w=L_1$, $x=\frac{L_2}{R}$, $y=\frac{R}{\xi}$, $z=\frac{1}{R}$ so that \begin{align}\label{firstchangebeforescaling} \begin{split}
w'&=\frac{-w-yz}{yz},\\
x'&=\frac{-x+y-yz^2+x^2y}{yz}, \\
y'&=\frac{-xy^2+y^3(w^2z^2+2x^2+z^2)}{yz},\\
z'&=\frac{xyz}{yz}.
\end{split} \end{align} Therefore our study essentially reduces to the analysis of the integral curves of the following system of equations: \begin{align}\label{FirstChange} \begin{split}
w'&=-w-yz,\\
x'&=-x+y-yz^2+x^2y, \\
y'&=-xy^2+y^3(w^2z^2+2x^2+z^2),\\
z'&=xyz.
\end{split} \end{align} In these co-ordinates, the non-negativity of $-L_1L_2$ and $\xi L_2+1-R^2-L_2^2$ is implied by the non-negativity of $x$ and $\frac{x}{y}+z^2-1-x^2$ (since we already know that $L_1\le 0$). The solutions we are interested in are part of the two-dimensional unstable manifold of the critical point of \eqref{FirstChange} at $(0,1,\frac{1}{2},0)$; the $\delta_1$ parameter now describes the initial direction of travel through this two-dimensional unstable manifold, and is given precisely by $\delta_1=\lim_{t\to 0}\frac{\frac{1}{z(t)}-\frac{1}{t}}{t}$ for the solutions of \eqref{firstchangebeforescaling}.
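Let us also record, for clarity, how \eqref{FirstChange} is related to \eqref{firstchangebeforescaling}: wherever $yz>0$ the two systems differ only by a reparametrisation of time. Indeed, if $s$ denotes a new time variable with $\frac{ds}{dt}=\frac{1}{yz}$, then for example
\begin{align*}
\frac{dw}{ds}=\frac{dw}{dt}\cdot\frac{dt}{ds}=\frac{-w-yz}{yz}\cdot yz=-w-yz,
\end{align*}
and similarly for the remaining three equations, so the two systems have the same integral curves, merely traversed at different speeds.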
In this section, we are only interested in evolving the system \eqref{firstchangebeforescaling} up until the time $\xi^{-1}(10)$, so we can assume that $y\in (0,\infty)$. It is handy to keep track of the evolution of the quantities $C=\frac{x}{y(1-z)}$, $D=\frac{w}{y}-w^2$ and $E=\frac{x}{y}+z^2-1-x^2$. Indeed, the closeness of $D$ to $-1$ measures the Gaussian structure, positivity of $E$ ensures positivity of the last sectional curvature, and $C$ is an analogue of a quantity that arises in the construction of the Bryant soliton. We find that \begin{align}\label{usefulquantities} \begin{split}
C'&=-C+(1+z)+Cxy+C(xy-y^2(w^2z^2+2x^2+z^2))+C^2y^2z\\
D'&=(-D-1)(1-yw)+\tilde{D}\\
E'&=xy\left(1-w^2z^2-x^2\right)+\tilde{E},
\end{split} \end{align}
where $\left|\tilde{D}\right|\le 3\left|z-1\right|+3\left|x\right|$ whenever $0\le x,y,-w,z\le 1$, and $\tilde{E}=0$ whenever $E=0$ and $y\in (0,\infty)$. Therefore, to prove Theorem \ref{d1large}, it suffices to prove the following: \begin{thm}\label{delta1largesimple}
Choose $\delta\in (0,10^{-25})$. If we have a Ricci soliton with $\delta_1\ge \frac{1}{\delta^{120}}$,
then the corresponding trajectory of \eqref{FirstChange} which $\alpha$-limits to $(0,1,\frac{1}{2},0)$ includes a point at
which $yz=\frac{1}{10}$, $ 1-\frac{\delta}{3}\le z\le 1$, $0\le -w\le 1$, $\left|D+1\right|\le \frac{\delta}{3}$ and $x\le \frac{\delta}{3}$, and so that $\min\{x,E\}\ge 0$ on the trajectory up until
that point. \end{thm} The proof of this theorem essentially follows by noting the behaviour of the unstable trajectory coming out of $(0,1,\frac{1}{2},0)$ as $\delta_1$ becomes large: \begin{itemize}
\item the trajectory travels from $(0,1,\frac{1}{2},0)$, and gets close to the critical point $(0,0,0,0)$;
\item the trajectory then travels close to the line of critical points $(0,0,0,z)$ for $z\in [0,1]$;
\item the trajectory then breaks away from this line of critical points as $y>0$ begins to grow while $z$ stays close to $1$, and we get a point satisfying the conclusion of
Theorem \ref{delta1largesimple}. \end{itemize} To prove Theorem \ref{delta1largesimple}, we proceed working backwards, and start by determining how close the unstable trajectory needs to come to the point $(w,x,y,z)=(0,0,0,0)$ in order to get the desired conclusion. \begin{lem}\label{0tofinal}
Choose $\delta\in (0,10^{-25})$ and suppose our trajectory of \eqref{FirstChange} includes a point with
$0\le -w,x,y,z\le \delta^6$, $\left|\frac{y}{x}-1\right|\le \delta^6$, $\left|D\right|\le 1$ and $E\ge 0$.
Then the trajectory includes a later point at which $yz=\frac{1}{10}$,
$ 1-\frac{\delta}{3}\le z\le 1$, $0\le -w\le 1$, $\left|D+1\right|\le \frac{\delta}{3}$ and $x\le \frac{\delta}{3}$.
Furthermore, $\min\{E,x\}\ge 0$ on the trajectory between these two points. \end{lem} \begin{proof} We can assume that we have a solution of \eqref{FirstChange} so that $0$ is the time of the first point.
\textbf{Step One: construct an interval $(0,t^*)$ on which we expect to find the required terminal point; find some basic estimates.}
Define $t^*>0$ so that
\begin{align}\label{definitiontstart}
(0,t^*) \ \textit{is the maximal time interval on which} \ \left|x\right|\le \frac{1}{10}, \left|y\right|\le \frac{1}{9}, \left|w\right|\le \frac{1}{2} \ \textit{and} \
z\in (0,1).
\end{align}
On such an interval, \eqref{usefulquantities} implies that
\begin{align*}
-C+1-\frac{C}{10}\le C'\le -C+2+\frac{C^2}{50}+\frac{C}{10},
\end{align*}
so since $\left|C(0)-1\right|<10^{-3}$, we find that $C(t)\in[0.9,2.4]$ for all $t\in (0,t^*)$. The equations for $y$ and $z$ in \eqref{FirstChange} can then be written \begin{align}\label{yztmid} \begin{split} \frac{y'}{y^2}&=y\left(-C(1-z)+w^2z^2+2C^2y^2(1-z)^2+z^2\right),\\
\frac{z'}{y^2}&=Cz(1-z).
\end{split} \end{align} Now we claim that $t^*>z^{-1}(\frac{1}{2})$. Indeed, \eqref{yztmid} and the fact that $y(0)\le \delta^6$ with $C\in [0.9,2.4]$ implies that \begin{align}\label{estimateforyoninterval}
y(t)\le \delta^6 \ \text{for} \ t\in (0,\min\{t^*,z^{-1}(\frac{1}{2})\}). \end{align} Estimate \eqref{estimateforyoninterval}, coupled with the equation for $w'$ in \eqref{FirstChange} and the definition of $C$ implies that \begin{align}\label{estimateforxwoninterval}
\left|x\right|=\frac{Cy}{\left|1-z\right|}\le 5 \delta^6 \ \text{and} \ \left|w\right|<\frac{1}{4}\ \text{for} \ t\in (0,\min\{t^*,z^{-1}(\frac{1}{2})\}). \end{align} The estimates \eqref{estimateforyoninterval} and \eqref{estimateforxwoninterval} combine with the definition of $t^*$ in \eqref{definitiontstart} to show that $t^*>z^{-1}(\frac{1}{2})$.
Now, on $[z^{-1}(\frac{1}{2}),t^*]$, \eqref{yztmid} gives us $\frac{y'}{y^2}\le 1.5y$ and $\frac{(1-z)'}{y^2}\le -0.4(1-z)$. Therefore, writing $s(t)=z^{-1}(\frac{1}{2})+\int_{z^{-1}(\frac{1}{2})}^{t}y(\tau)^2\,d\tau$ for the reparametrised time associated to \eqref{yztmid}, we obtain $y(t)\le \delta^6e^{1.5(s-z^{-1}(\frac{1}{2}))}$ and $(1-z(t))\le \frac{1}{2} e^{-0.4(s-z^{-1}(\frac{1}{2}))}$. Combining these estimates gives \begin{align}\label{yzestimate}
y(t)(1-z(t))^{3.75}\le \frac{\delta^6}{10} \ \text{for all} \ t\in [z^{-1}(\frac{1}{2}),t^*]. \end{align} We conclude this step with the assertion that $y(t^*)=\frac{1}{9}$. We show this by demonstrating that all of the other inequalities defining $t^*$ in \eqref{definitiontstart} are strict at time $t^*$ itself. On the interval $(0,t^*)$, $yz< \frac{1}{4}$, so $-w<\frac{1}{4}$ on $(0,t^*)$ since $-w(0)< \frac{1}{4}$. For any $t\in [0,t^*]$ with $(1-z)< \frac{9}{2.4}\delta$, it is clear that $x\le Cy(1-z)<\delta$. On the other hand, if $(1-z)\ge \frac{9}{2.4}\delta$, then $x\le 2.4 \frac{y(1-z)^{3.75}}{(1-z)^{2.75}}< \delta$ by \eqref{yzestimate} as well. It is clear from \eqref{yztmid} that $z(t^*)<1$, so it must be the case that $y(t^*)=\frac{1}{9}$.
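Let us also note why the exponent $3.75$ appears in \eqref{yzestimate}: it is chosen precisely so that the exponential factors cancel, since $0.4\cdot 3.75=1.5$ and hence
\begin{align*}
y(t)(1-z(t))^{3.75}\le \delta^6 e^{1.5(s-z^{-1}(\frac{1}{2}))}\cdot\Big(\frac{1}{2}\Big)^{3.75}e^{-1.5(s-z^{-1}(\frac{1}{2}))}=2^{-3.75}\delta^6\le\frac{\delta^6}{10},
\end{align*}
because $2^{3.75}>10$.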
\textbf{Step Two: show that $x$ and $E$ are non-negative on $(0,t^*)$.} We have $x\ge 0$ on $(0,t^*)$ by \eqref{FirstChange}, since $z\in (0,1)$ on $(0,t^*)$ and $x(0)\ge 0$. The estimates on $x,z,w$ provided by the definition of $t^*$ in \eqref{definitiontstart} then imply that $1-w^2z^2-x^2\ge 0$ on $(0,t^*)$; this observation coupled with \eqref{usefulquantities} and $E(0)\ge 0$ implies that $E\ge 0$ here as well.
\textbf{Step Three: find the required point in $(0,t^*)$.} Let $I$ be the connected interval of times ending at $t^*$ so that $y(t)\in [\frac{1}{10},\frac{1}{9}]$. We find from \eqref{yzestimate} that $(1-z)\le \delta^{\frac{6}{3.75}}<\frac{\delta}{200}$ in $I$. The definition of $C$ then implies that $0\le x\le \frac{\delta}{3}$ in $I$ as well.
The intermediate value theorem also gives us that there is some $t_1\in I$ at which $yz=\frac{1}{10}$. Finally, we need to demonstrate that $\left|D+1\right|\le \frac{\delta}{3}$ at $t_1$. At time $(1-z)^{-1}(\frac{\delta}{100})$ (which is less than $t_1$ by the above estimate for $(1-z)$ on $I$), \eqref{yzestimate} tells us that $y\le \frac{\delta^6100^{3.75}}{10\delta^{3.75}}\le \delta^{1.25}$. The fact that $y'\le 1.5y^3$ on $[z^{-1}(\frac{1}{2}),t^*]$ implies that the distance between $I$ and $(1-z)^{-1}(\frac{\delta}{100})$ is greater than $\frac{1}{100\delta^2}$. This large amount of time tells us that $D$ must get quite close to $-1$ by the time we land in $I$. Indeed, from \eqref{usefulquantities} and \eqref{yzestimate} we find that \begin{align}\label{Dfine}
\left|D+1\right|'\le -\frac{3}{4}\left|D+1\right|+\frac{\delta}{10} \end{align}
on $((1-z)^{-1}(\frac{\delta}{100}),t^*)$, while
\begin{align}\label{Dcrude}
\left|D+1\right|'\le -\frac{3}{4}\left|D+1\right|+10
\end{align}
holds on $(0,t^*)$. Estimate \eqref{Dfine} tells us that whenever $\left|D+1\right|\ge \frac{\delta}{3}$ and
$t\in((1-z)^{-1}(\frac{\delta}{100}),t^*)$ , we have $\left|D+1\right|'\le -\frac{\delta}{10}$.
The estimate \eqref{Dcrude} coupled with $\left|D(0)\right|\le 1$
implies that $\left|D\right|\le 15$ at time $(1-z)^{-1}(\frac{\delta}{100})$, so
it takes no more than time $\frac{10^5}{\delta}<\frac{1}{100\delta^2}$ to reach $\left|D+1\right|\le \frac{\delta}{3}$, and \eqref{Dfine} shows that this bound is then maintained. Therefore, $\left|D+1\right|\le \frac{\delta}{3}$
holds for all times in $I$ (including $t_1$). \end{proof} Now we discuss how making $\delta_1$ large can force our trajectory to be close to the point $(0,0,0,0)$, in the sense of Lemma \ref{0tofinal}. This is achieved with the two lemmas below. \begin{lem} Let $B_3\in (0,10^{-100})$. Suppose our solution trajectory of \eqref{FirstChange} contains a point at which
$0\le z\le B_3^{12}$, $0\le -w\le B_3$, $E\ge 0$ and
$$\max\{\left|x-x^{(\infty)}(\frac{1}{9})\right|,\left|y-y^{(\infty)}(\frac{1}{9})\right|\}\le B_3^6,$$
where $(x^{(\infty)},y^{(\infty)})$ is the special Bryant soliton solution discussed in Theorem \ref{BryantSolitonsmalltime} of the Appendix.
Then there is a later point on the trajectory at which $0\le -w,x,y,z\le B_3$, $\left|\frac{y}{x}-1\right|\le B_3$, $\left|D\right|\le 1$,
and so that $\min\{E,x\}\ge 0$ on the trajectory between these two points. \end{lem}
\begin{proof} This time, we find it convenient to have our $(w,x,y,z)$ solution of \eqref{FirstChange} so that the initial point described in the hypothesis of the lemma corresponds to time $\frac{1}{9}$. This is so we can easily compare our solution to the special $x^{(\infty)},y^{(\infty)}$ solution. Strictly speaking, these functions $x^{(\infty)},y^{(\infty)}$ solve \eqref{bryantfirstchange}, but we now reparametrise them so that they solve \eqref{xysystmebryant} instead, with a parametrisation that preserves the time $\frac{1}{9}$.
\textbf{Step One: estimates on the limiting solution.} As discussed in Appendix \ref{BryantAppendix}, we have \begin{align}\label{BryantEstimates}
(x^{(\infty)})'(t)\le 0, \qquad (y^{(\infty)})'(t)\le 0, \qquad \frac{y^{(\infty)}(t)}{x^{(\infty)}(t)}\in [\frac{1}{2},1] \ \text{for all} \ t\in [\frac{1}{9},\infty). \end{align} Combining \eqref{BryantEstimates} with \eqref{xysystmebryant} gives \begin{align}\label{bryanty'estimates}
(y^{(\infty)})'\le -(y^{(\infty)})^3(1-4(y^{(\infty)})^2). \end{align} Also note that \eqref{BryantEstimates}, combined with Theorem \ref{BryantSolitonsmalltime} and Proposition \ref{initialestaimtesxy} in Appendix A implies that \begin{align}\label{initialyestimatebryant} \begin{split} 1-0.0375\le x^{(\infty)}(\frac{1}{9})\le 1-0.0331\\ \frac{1}{2}-0.01875\le y^{(\infty)}(\frac{1}{9})\le \frac{1}{2}-0.01545. \end{split} \end{align} It is handy to note that \begin{align*}
[\frac{1}{9},\infty)\subseteq \overline{A^*\cup B^*\cup C^*\cup D^*} \end{align*} with \begin{align*}
A^*=(y^{(\infty)})^{-1}(\frac{1}{3},\frac{1}{2}-0.01545), B^*=(y^{(\infty)})^{-1}(\frac{1}{8},\frac{1}{3}),
\ C^*= (y^{(\infty)})^{-1}(\frac{B_3}{2},\frac{1}{8}), \
D^*=(y^{(\infty)})^{-1}(0,\frac{B_3}{2}). \end{align*} By \eqref{bryanty'estimates}, we find that $(y^{(\infty)})'(t)\le -\frac{1}{27}(1-4(y^{(\infty)}(t))^2)$ on $A^*$ so
that $\left|A^*\right|\le 18$. Also, $(y^{(\infty)})'(t)\le -\frac{1}{2\cdot 8^3}$ on $B^*$ so that $\left|B^*\right|\le 250$. Finally,
$\left|C^*\right|\le \frac{3}{B_3^2}$, because $(y^{(\infty)})'(t)\le -\frac{15 (y^{(\infty)}(t))^3}{16}$ for $t\in C^*$.
\textbf{Step Two: closeness to the limiting solution.} Define the positive number $t_0\le \sup C^*\le 268+\frac{3}{B_3^2}$ so that \begin{align}\label{tnodefinition}
[\frac{1}{9},t_0] \ \text{is the maximal time interval on which} \ z(t)\le B_3^6. \end{align} We will now obtain the required estimates by showing that the quantities $X=x-x^{(\infty)}$ and $Y=y-y^{(\infty)}$ are small on the interval $[\frac{1}{9},t_0]$. By examining \eqref{FirstChange} and \eqref{xysystmebryant}, we compute \begin{align}\label{bryantcomparisonestimates}
\begin{pmatrix}
X\\
Y
\end{pmatrix}'&=
\begin{pmatrix}
-1+2x^{(\infty)}y^{(\infty)}&1+(x^{(\infty)})^2\\
(y^{(\infty)})^2(4x^{(\infty)}y^{(\infty)}-1)&x^{(\infty)}y^{(\infty)}(6x^{(\infty)}y^{(\infty)}-2)
\end{pmatrix}\begin{pmatrix}
X\\
Y
\end{pmatrix}
+O(X^2+Y^2)+Z \end{align}
where $\left|O(X^2+Y^2)\right|_{\infty}\le 100( \left|X\right|^2+\left|Y\right|^2)$ provided $\left|X\right|^2+\left|Y\right|^2\le 1$,
and $\left|Z\right|_{\infty}\le 4\left|z\right|^2$ provided $x,y,-w,z\in [0,1.1]$. We now use \eqref{bryantcomparisonestimates} to obtain smallness of $(X,Y)$ on $(\frac{1}{9},t_0)$.
Let $V=\max\{\left|Y\right|,2\left|X-Y\right|\}$; \eqref{bryantcomparisonestimates} implies that \begin{align*}
V'&\le -\frac{B_3^2}{8} V+16 B_3^6 \end{align*}
on $C^*$, as long as $\left|V\right|\le B_3^3$, and \begin{align*}
V'&\le 2V+16B_3^6 \end{align*}
on $A^*\cup B^*$, provided $\left|V\right|\le B_3^3$. Therefore, since $V(\frac{1}{9})\le 4B_3^6$, we find that \begin{align}\label{longtimevestimates}
V(t)\le B_3^3 \ \text{for all} \ t\in \overline{A^*\cup B^*\cup C^*}\cap [\frac{1}{9},t_0]. \end{align}
\textbf{Step Three: concluding estimates.}
We need to check that $x,y,z,\frac{y}{x}-1,w$ are all smaller than $B_3$ at time $t_0$, that $\left|D\right|\le 1$, and that $E,x\ge 0$ on $[\frac{1}{9},t_0]$. We claim that $y^{(\infty)}(t_0)=\frac{B_3}{2}$. To see this, we use \eqref{FirstChange} to estimate \begin{align*}
\frac{z'}{z}&=xy\\
&\le (x^{(\infty)}+3B_3^3)(y^{(\infty)}+B_3^3)\\
&\le 2(y^{(\infty)}+\frac{3B_3^3}{2})^2\le 2.5 (y^{(\infty)})^2 \end{align*} on $[\frac{1}{9},t_0]$. Therefore, $z(t)\le B_3^{12}e^{28}$ for all $t\in A^*$, so $t_0\notin \overline{A^*}$. On $B^*$, we estimate that $z'\le \frac{z}{2}$, so $z(t)\le B_3^{12}e^{153}\le B_3^{11}$ for all $t\in B^*$ and $t_0\notin \overline{B^*}$. On the other hand, \eqref{bryanty'estimates} gives $(y^{\infty})'\le -\frac{15(y^{(\infty)})^3}{16}$ on $C^*$, so that \begin{align*} (y^{(\infty)}(t))^2\le \frac{8}{15(t-(y^{(\infty)})^{-1}(\frac{1}{8}))+512} \end{align*} for $t\in \overline{C^*}$. We consequently find \begin{align*}
\frac{z'(t)}{z(t)}\le \frac{20}{15(t-(y^{(\infty)})^{-1}(\frac{1}{8}))+512}, \end{align*} so that $z(t)<B_3^6$ for $t\in C^*$. Therefore, $y^{(\infty)}(t_0)=\frac{B_3}{2}$ as required.
The smallness of $y^{(\infty)}(t_0)$ and \eqref{longtimevestimates} imply the required estimates for $x(t_0),y(t_0)$. The estimate for $z(t_0)$ follows from the definition of $t_0$. The estimates for $w$ and $D$ follow immediately from \eqref{longtimevestimates} and the equation for $w'$ in \eqref{FirstChange}. Using Proposition \ref{FinalEstff}, we also have \begin{align*}
\left|\frac{y}{x}-1\right|&\le \left|\frac{y}{x}-\frac{y^{(\infty)}}{x^{(\infty)}}\right|+\left|\frac{y^{(\infty)}}{x^{(\infty)}}-1\right|\\
&\le \frac{\left|yx^{(\infty)}-xy^{(\infty)}\right|}{x^{(\infty)}(x^{(\infty)}-2V)}+\frac{3B_3}{4}\\ &\le B_3, \end{align*}
since $V\le B_3^3$, while $x^{(\infty)}\ge y^{(\infty)}=\frac{B_3}{2}$. It is clear that $x$ is non-negative on $[\frac{1}{9},t_0]$, since $x^{(\infty)}\ge \frac{B_3}{2}$ and $V\le B_3^3$. To show that $E$ is non-negative on $[\frac{1}{9},t_0]$, note that $1-w^2z^2-x^2$ is non-negative on $[\frac{1}{9},t_0]$ since $x^{(\infty)}\in[\frac{B_3}{2}, 1-0.0331]$, $V\le B_3^3$, $z\le B_3$ and $-w\le B_3$ (by \eqref{FirstChange} and the estimates on $y$ and $w$). The non-negativity of $1-w^2z^2-x^2$ and the fact that $E(\frac{1}{9})\ge 0$ implies that $E\ge 0$ on $[\frac{1}{9},t_0]$ by \eqref{usefulquantities}.
\end{proof} \begin{lem}
For each $B_3\in (0,10^{-100})$, choose an arbitrary $\delta_1\ge B_3^{-20}$. Then the solution of \eqref{firstchangebeforescaling} satisfies
$0\le z(\frac{1}{9\sqrt{\delta_1}})\le B_3^{12}$, $0\le -w(\frac{1}{9\sqrt{\delta_1}})\le B_3$, and
$\max\{\left|x(\frac{1}{9\sqrt{\delta_1}})-x^{(\infty)}(\frac{1}{9})\right|,\left|y(\frac{1}{9\sqrt{\delta_1}})-y^{(\infty)}(\frac{1}{9})\right|\}\le B_3^6$,
where $(x^{(\infty)},y^{(\infty)})$ is the Bryant soliton solution discussed in the Appendix. Furthermore, $\min\{E,x\}\ge 0$ on $(0,\frac{1}{9\sqrt{\delta_1}})$. \end{lem} \begin{proof} Let $p=\frac{1}{\sqrt{\delta_1}}$, $t^*=\frac{1}{9}$, and consider the rescaled functions $\tilde{f}(t)=pf(pt)$ for $f=R,L_1,L_2,\xi$. Then these new functions satisfy \begin{align*}
\tilde{\xi}'&=-\tilde{L_1}^2-2\tilde{L_2}^2-p^2,\\
\tilde{L_1}'&=-\tilde{\xi} \tilde{L_1}-p^2,\\
\tilde{L_2}'&=-\tilde{\xi} \tilde{L_2}+\tilde{R}^2-p^2,\\
\tilde{R}'&=-\tilde{L_2} \tilde{R}, \end{align*} with the `new' $\delta_1$ equal to $1$, i.e., $\lim_{t\to 0}\frac{\tilde{R}(t)-\frac{1}{t}}{t}=1$. The discussion of the Bryant soliton on $\mathbb{R}^3$ discussed in the Appendix implies the existence of smooth functions $\xi^{(\infty)},L_1^{(\infty)},L_2^{(\infty)},R^{(\infty)}:(0,\infty)\to \mathbb{R}$ of the form \eqref{etaforma} so that \begin{align*} (\xi^{(\infty)})'&=-(L_1^{(\infty)})^2-2(L_2^{(\infty)})^2,\\ (L_1^{(\infty)})'&=-\xi^{(\infty)}L_1^{(\infty)},\\ (L_2^{(\infty)})'&=-\xi^{(\infty)}L_2^{(\infty)}+(R^{(\infty)})^2,\\
(R^{(\infty)})'&=-L_2^{(\infty)} R^{(\infty)},\\ 1&= \lim_{t\to 0}\frac{R^{(\infty)}(t)-\frac{1}{t}}{t}. \end{align*} Of course, $L_1^{(\infty)}=0$ identically here. Recall that by using Theorem \ref{BryantSolitonsmalltime} and Proposition \ref{initialestaimtesxy}, we find that the corresponding $x^{(\infty)}$ and $y^{(\infty)}$ functions satisfy \eqref{initialyestimatebryant}. It is also well-known that $x^{(\infty)}$ and $y^{(\infty)}$ are monotonically decreasing functions.
We prove this lemma by comparing $(\tilde{\xi},\tilde{L_1},\tilde{L_2},\tilde{R})$ to $(\xi^{(\infty)},L_1^{(\infty)},L_2^{(\infty)},R^{(\infty)})$ on the interval $[0,\frac{1}{9}]$. We use the notation $\tilde{\eta}$ and $\eta^{(\infty)}$ to mean the functions from $[0,\infty)$ to $\mathbb{R}^4$ consisting of components formed by breaking the corresponding sets of functions according to \eqref{etaforma}. Letting $\textbf{v}=\tilde{\eta}-\eta^{(\infty)}$, we find \begin{align}\label{vsystem}
\textbf{v}'(t)=\frac{A\textbf{v}}{t}+B(\textbf{v})-P, \qquad \textbf{v}(0)=0, \qquad \textbf{v}'(0)=(-p^2,-\frac{p^2}{3},0,0), \end{align} where \begin{align*}
A= \begin{pmatrix}
0&0&-4&0\\
0&-2&0&0\\
-1&0&-2&2\\
0&0&-1&-1
\end{pmatrix} \qquad
P=\begin{pmatrix}
p^2\\
p^2\\
p^2\\
0
\end{pmatrix} \end{align*} and \begin{align*}
\left|B(\textbf{v})\right|_{\infty}\le 5\left|\textbf{v}\right|_{\infty}, \end{align*}
provided $\left|\tilde{\eta}\right|_{\infty}\le 1$ and $\left|\textbf{v}\right|_{\infty}\le \frac{1}{100}$. One can easily check
using the results in the Appendix that $\left|\tilde{\eta}\right|_{\infty}\le 1$ on $[0,t^*]$.
Now we let $\textbf{s}=S^{-1}\textbf{v}$ so that \begin{align*}
\textbf{s}'=\frac{S^{-1}AS\textbf{s}}{t}+S^{-1}B(S\textbf{s})-S^{-1}P, \qquad \textbf{s}(0)=0, \qquad \textbf{s}'(0)=S^{-1}\textbf{v}'(0), \end{align*} where \begin{align*}
S=\begin{pmatrix}
2&-1&0&8\\
0&0&1&0\\
1&-1&0&-2\\
1&0&0&1
\end{pmatrix}, \qquad
S^{-1}=\begin{pmatrix}
-\frac{1}{9}&0&\frac{1}{9}&\frac{10}{9}\\
-\frac{1}{3}&0&-\frac{2}{3}&\frac{4}{3}\\
0&1&0&0\\
\frac{1}{9}&0&-\frac{1}{9}&-\frac{1}{9}
\end{pmatrix}, \qquad S^{-1}AS=\begin{pmatrix}
-2&1&0&0\\
0&-2&0&0\\
0&0&-2&0\\
0&0&0&1
\end{pmatrix}. \end{align*} This almost-diagonal form of the equations makes it clear that \begin{align*}
\left|\textbf{v}(t)\right|_{\infty}\le 11\left|\textbf{s}(t)\right|_{\infty}\le 10^6p^2t\le \frac{1}{100} \end{align*} for all $t\in [0,t^*]$, since $p$ is small.
The rest of the proof involves using the smallness of $\textbf{v}$ to obtain the estimates discussed in the statement of the lemma. First note that \begin{align*}
\left|x(pt)-x^{\infty}(t)\right|&\le \left|\frac{\tilde{L_2}(t)}{\tilde{R}(t)}-\frac{L_2^{(\infty)}(t)}{R^{(\infty)}(t)}\right|\\
&\le t^2 \left|\left(R^{(\infty)}(t)\tilde{L_2}(t)-L_2^{(\infty)}(t)\tilde{R}(t)\right)\right|\\
&\le t^2 \left(R^{(\infty)}(t)\left|\tilde{L_2}(t)-L_2^{(\infty)}(t)\right|
+\left|L_2^{(\infty)}(t)\right|\left|R^{(\infty)}(t)-\tilde{R}(t)\right|\right)\\
&\le 2\cdot 10^6 p^2 t^2, \qquad t\in [0,\frac{1}{9}]. \end{align*} In this previous computation, we used the estimates on $R^{(\infty)}$ and $L_2^{(\infty)}$ from Theorem \ref{BryantSolitonsmalltime} in the Appendix. Similarly, \begin{align*}
\left|y(pt)-y^{\infty}(t)\right|&\le \left|\frac{\tilde{R}(t)}{\tilde{\xi}(t)}-\frac{R^{(\infty)}(t)}{\xi^{(\infty)}(t)}\right|\\
&\le \frac{1}{\tilde{\xi}(t)\xi^{(\infty)}(t)}\left|\left(R^{(\infty)}(t)\tilde{\xi}(t)-\xi^{(\infty)}(t)\tilde{R}(t)\right)\right|\\
&\le t^2\left(R^{(\infty)}(t)\left|\tilde{\xi}(t)-\xi^{(\infty)}(t)\right|
+\xi^{(\infty)}(t)\left|R^{(\infty)}(t)-\tilde{R}(t)\right|\right)\\
&\le 4\cdot 10^6p^2 t^2, \qquad t\in [0,\frac{1}{9}]. \end{align*} To check the closeness of $w$ and $z$ to $0$, first note that \begin{align}\label{basiczestimate} 0\le z(pt)\le pt \end{align}
since $\left|R'\right|\le R^2$ by Proposition \ref{sf2p}. Also, the above estimates for $\textbf{v}$ and $\eta^{(\infty)}$ imply that $\tilde{\xi}(\frac{1}{9})\ge 0$, so the equation for $\tilde{L}_1$ implies immediately that $0\le -\tilde{L}_1'(t)\le p^2$ for each $t\in [0,t^*]$, so we find \begin{align*}
0\le -w(pt)=-L_1(pt)=\frac{-\tilde{L}_1(t)}{p}\le pt, \qquad t\in [0,\frac{1}{9}]. \end{align*}
To conclude the proof, we need to show that $\min\{x,E\}\ge 0$ on this interval. It is clear that $x\ge 0$. To show that
$E\ge 0$, it suffices to show that $\left|z(pt)\right|^2\le \frac{\left|1-x(pt)\right|}{10}$ for all $t\in [0,\frac{1}{9}]$ because of the inequality $0\le-w\le 1$, the equation for $E'$ in \eqref{usefulquantities} and the fact that $\lim_{t\to 0}\xi L_2+1-R^2-L_2^2=\eta_0'(0)+1-2\eta_3'(0)=-3\eta_2'(0)=6\delta_1$, so that $\xi L_2+1-R^2-L_2^2$ is initially positive. From \eqref{basiczestimate}, we have \begin{align*}
0\le \frac{z(pt)}{pt}\le 1, \end{align*} but we also have \begin{align*} \frac{1-x(pt)}{p^2t^2}&=\frac{1-\tilde{x}(t)}{p^2 t^2}\\ &=\frac{1-x^{(\infty)}(t)}{p^2 t^2}+\frac{x^{(\infty)}(t)-\tilde{x}(t)}{p^2 t^2}\\ &\ge \frac{3t^2e^{-9t^2}}{t^2 p^2}-4\cdot 10^6=\frac{3e^{-9t^2}}{p^2}-4\cdot 10^6, \end{align*} by Theorem \ref{BryantSolitonsmalltime}, so we obtain the required estimates. \end{proof}
\section{Bounds on curvature at the $\mathbb{S}^2$ singular orbit} By Theorems \ref{S2orbit} and \ref{d1large}, we find that any Ricci soliton must satisfy $\delta_1\in [0,10^{20,000}]$. We now construct bounds on $\delta_2$ and $\delta_3$. Fortunately, these bounds are simpler to construct, and can be found \textit{without} using the (already large) bound on $\delta_1$. The bound on $\delta_2$ is easier to construct once we have a bound on $\delta_3$, so we treat $\delta_3$ first. \begin{thm}\label{delta3estimate}
Suppose the metric of the form \eqref{metricform} is a gradient shrinking Ricci soliton with Einstein constant $\lambda=1$. Then $\delta_3\in [0,40]$. \end{thm} \begin{proof} We already know from Proposition \ref{initialbounds} that $\delta_3\ge 0$, so suppose for the sake of contradiction that $\delta_3>40$. In this case, we claim that \begin{align}\label{Tbiggerthan5}
T>\frac{1}{2} \ \text{and} \ L_2(t)<0 \ \text{for all} \ t\in (T-\frac{1}{2},T). \end{align} To verify \eqref{Tbiggerthan5}, note that if $T\le \frac{1}{2}$, then Proposition \ref{sf2p} implies that
$\left|R'\right|\le R^2$, so that $R>1$ on $(0,T)$. Then the estimate $L_2'=-\xi L_2+R^2-1>-\xi L_2$ violates the boundary conditions $\lim_{t\to 0}L_2(t)=\lim_{t\to 0}\xi(t)=+\infty$ and $\lim_{t\to T}L_2(t)=0$, $\lim_{t\to T}\xi(t)=-\infty$. Since $T>\frac{1}{2}$, Proposition \ref{sf2p}
again implies that $R^2-1>0$ on $(T-\frac{1}{2},T)$, so the boundary conditions $L_2(T)=0$, $\lim_{t\to T}\xi(t)=-\infty$ and the inequality $L_2'>-\xi L_2$ together imply that $L_2<0$.
With \eqref{Tbiggerthan5} in hand, we now claim that \begin{align}\label{l2l1estimates}
L_2(t)\ge L_1(t)\ge \frac{1}{t-T} \ \text{for all} \ t\in (0,T). \end{align} The second inequality in \eqref{l2l1estimates} is an immediate consequence of the fact that $L_1'\le -L_1^2$ on $(0,T)$ (follows from Proposition \ref{scf1}), and $\lim_{t\to T}L_1(t)=-\infty$. The first inequality is a consequence of the fact that
\begin{align*}
(L_2-L_1)'&=-\xi(L_2-L_1)+R^2,\\
\end{align*} and the observation that $\lim_{t\to 0}(L_2-L_1)(t)=+\infty$. In fact, we know that $\xi L_1+1-L_1^2\ge 0$ everywhere (Proposition \ref{scf1} again), so since $L_1\le 0$ everywhere, we rearrange to find $-\xi \ge \frac{1}{L_1}-L_1$ so we can estimate further on $(T-\frac{1}{2},T)$: \begin{align*}
(L_2-L_1)'&\ge (\frac{1}{L_1}-L_1)(L_2-L_1)+R^2\\
&=-L_1(L_2-L_1)+\frac{L_2-L_1}{L_1}+R^2\\
&\ge (L_2-L_1)^2+\frac{L_2-L_1}{L_1}+R^2\\
&\ge (L_2-L_1)^2+R^2-1\\
&\ge (L_2-L_1)^2+\frac{1}{(\frac{1}{\delta_3}+T-t)^2}-1 \end{align*} since $L_2<0$ (follows from \eqref{Tbiggerthan5}) and $R'\le R^2$ (Proposition \ref{sf2p}). Let $y=(L_2-L_1)(T-t)$, so that $y(T-\frac{1}{2})\ge 0$ and $y(t)\le 1$ for all $t\in (T-\frac{1}{2},T)$. But we can estimate the evolution of $y$: \begin{align*}
y'&\ge \left((L_2-L_1)^2+\frac{1}{(\frac{1}{\delta_3}+T-t)^2}-1\right)(T-t)-(L_2-L_1)\\
&= \frac{y^2-y}{(T-t)}+\left(\frac{1}{(\frac{1}{\delta_3}+T-t)^2}-1\right)(T-t)\\
&\ge -\frac{1}{4(T-t)}+\frac{(T-t)}{(\frac{1}{\delta_3}+T-t)^2}-(T-t), \end{align*} since $y^2-y\ge -\frac{1}{4}$ for all $y\in \mathbb{R}$. If $\delta_3>40$, we integrate to get the following estimate: \begin{align*}
1&\ge y(T-\frac{1}{40})-y(T-\frac{1}{2})\\
&=\int_{T-\frac{1}{2}}^{T-\frac{1}{40}}y'(t)dt\\
&\ge \int_{T-\frac{1}{2}}^{T-\frac{1}{40}} \left(\frac{(T-t)}{(\frac{1}{\delta_3}+T-t)^2}-(T-t)-\frac{1}{4(T-t)}\right)dt\\
&\ge 1.89 -\frac{1}{8}-\frac{\ln(20)}{4}>1, \end{align*} which is a contradiction. \end{proof}
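For the reader's convenience, we record the elementary computation behind the value $1.89$ used at the end of the preceding proof (this is only a verification of the arithmetic, under the standing assumption $\delta_3>40$, so that $\frac{1}{\delta_3}\le\frac{1}{40}$). Substituting $s=T-t$ and using that $\frac{s}{(c+s)^2}$ is decreasing in $c>0$ for each fixed $s>0$,
\begin{align*}
\int_{T-\frac{1}{2}}^{T-\frac{1}{40}}\frac{(T-t)\,dt}{(\frac{1}{\delta_3}+T-t)^2}
&=\int_{\frac{1}{40}}^{\frac{1}{2}}\frac{s\,ds}{(\frac{1}{\delta_3}+s)^2}
\ge\int_{\frac{1}{40}}^{\frac{1}{2}}\frac{s\,ds}{(\frac{1}{40}+s)^2}\\
&=\Big[\ln\Big(\frac{1}{40}+s\Big)+\frac{1/40}{\frac{1}{40}+s}\Big]_{\frac{1}{40}}^{\frac{1}{2}}
=\ln(10.5)+\frac{1}{21}-\frac{1}{2}\ge 1.89,
\end{align*}
while $\int_{T-\frac{1}{2}}^{T-\frac{1}{40}}(T-t)\,dt=\frac{1}{2}\big(\frac{1}{4}-\frac{1}{1600}\big)\le\frac{1}{8}$ and $\int_{T-\frac{1}{2}}^{T-\frac{1}{40}}\frac{dt}{4(T-t)}=\frac{\ln(20)}{4}$, so the integral in the proof is indeed at least $1.89-\frac{1}{8}-\frac{\ln(20)}{4}>1.02>1$, as claimed.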
\begin{thm}
Suppose the metric of the form \eqref{metricform} is a gradient shrinking Ricci soliton with Einstein constant $\lambda=1$. Then $\delta_2\in [-1,10^{20,000}]$. \end{thm} \begin{proof} Once again consider the non-negative quantity defined by $K(t)^2=L_2(t)^2+(R(t)-1)^2$, and note that \begin{align}\label{Kestimateagain}
K'(t)\ge K(t)\left(-\frac{1}{2}-\max\{0,\xi\}\right). \end{align} Also note that \begin{align}\label{maxtimetillT}
T-\xi^{-1}(0)\le 2 \end{align} because of the estimate $y'\ge y^2+1$, where $y=\min\{-\xi,-L_1\}$ and $y(\xi^{-1}(0))\ge 0$. Now Theorem \ref{delta3estimate} tells us that $\lim_{t\to T}K(t)\le 39$, so \eqref{Kestimateagain} combined with \eqref{maxtimetillT} implies that \begin{align}\label{d2kestimate}
K(t)\le 39e \ \text{for all} \ t\in (\xi^{-1}(0),T). \end{align}
The equation $L_2'=-\xi L_2+R^2-1$ then implies that $\left|L_2(t)\right|\le e(39^2e+2\cdot 39)(T-t)$ for all $t\in (\xi^{-1}(0),T)$. Now consider the quantities $X=\xi-L_1$ and $Y=\xi L_1+1-L_1^2$. Since $Y\ge 0$ everywhere, we find that $L_1(\xi^{-1}(0))\in [-1,0]$ and $X(\xi^{-1}(0))\in [0,1]$. We compute \begin{align}\label{xestimated2}
X'=L_1X-2L_2^2, \end{align} and \begin{align*}
Y'&=L_1'(\xi-L_1)+L_1(\xi'-L_1')\\
&=(-\xi L_1-1)(\xi-L_1)+L_1(-L_1^2-2L_2^2-1+\xi L_1+1)\\
&=(-\xi L_1-1+L_1^2)(\xi-L_1)-2L_1L_2^2\\
&=-XY-2L_1L_2^2. \end{align*} Since $L_1\le 0$ and $T-\xi^{-1}(0)\le 2$, we find from \eqref{d2kestimate} and \eqref{xestimated2} that $-X(t)\le 4\cdot (39e)^2$ for all $t\in (\xi^{-1}(0),T)$. Therefore \begin{align*}
Y'\le 4\cdot (39e)^2 Y+2(39^2e^2+2\cdot 39e)^2(T-t), \end{align*} so that $Y(T)$, which coincides with $\frac{3}{2}(\delta_2+1)$, can be no more than $10^{20,000}$.
\end{proof}
\section{Compactness and uniqueness}\label{mainproof}
We summarise the results for $\mathbb{S}^4$ that we have seen so far. \begin{thm}\label{deltaboundedsummary} An $SO(2)\times SO(3)$-invariant gradient shrinking Ricci soliton on $\mathbb{S}^4$ of the form \eqref{metricform} has $\delta_1\in [0,10^{20,000}]$, $\delta_2\in [-1,10^{20,000}]$ and $\delta_3\in [0,40]$. \end{thm} We are now ready to prove the main result of this paper. \begin{proof}[Proof of Theorem \ref{CT}] We assume for the sake of contradiction that there is no such value of $\mathcal{C}>0$. Then there is a sequence of $SO(2)\times SO(3)$-invariant solutions $(g,u)$ to \eqref{GRS} with unbounded Riemann curvature, unbounded volume, or an injectivity radius shrinking to $0$. Theorem \ref{deltaboundedsummary} implies that $\delta_1,\delta_2,\delta_3$ are all bounded uniformly, so we can assume that these numbers are all convergent. Propositions \ref{mapa} and \ref{mapb} imply that our sequence of solutions converges to another solution. It is clear that the Riemann curvature, volume and injectivity radii all depend continuously on the values of $(\delta_1,\delta_2,\delta_3)$, so we obtain a contradiction. \end{proof}
With Theorem \ref{CT} in hand, we discuss how one could prove Conjecture \ref{ClT}. First note that by Propositions \ref{mapa} and \ref{mapb}, there is a smooth function $F:\mathbb{R}^3\to \mathbb{R}^3$ whose zeroes are precisely the Ricci solitons we aim to classify. By Theorem \ref{deltaboundedsummary}, there is a compact domain $\Omega\subset \mathbb{R}^3$ that contains all the zeroes of $F$. In fact, we have \textit{explicitly} described this domain. Therefore, Conjecture \ref{ClT} would follow from the following steps.
\begin{enumerate} \item Find an explicit open neighbourhood $N$ of the canonical metric on $\mathbb{S}^4$ on which no other zeroes of $F$ occur. This is essentially a quantitative use of the inverse function theorem.
\item Show numerically that there are no zeroes of $F$ in $\Omega\setminus N$. This could be achieved by finding an upper bound for $\left|dF\right|$ on $\Omega$, discretising the set $\Omega\setminus N$ accordingly, and showing that $F$ is sufficiently far away from $0$ at each of these finitely-many points using an appropriate numerical ODE solver. \end{enumerate} We do not pursue these ideas in this paper, primarily because the $\delta_1,\delta_2$ bounds we have found are far too large for the numerics described here to provide an answer in a reasonable amount of time. However, we do emphasise that these techniques could be used to resolve Conjecture \ref{ClT} in the affirmative in `finite time'.
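Purely for illustration, the kind of routine envisaged in step (2) could be organised as in the following sketch (written in Python; the names \texttt{F}, \texttt{omega}, \texttt{in\_N} and \texttt{df\_bound} are placeholders for the map of Propositions \ref{mapa} and \ref{mapb} evaluated via a numerical ODE solver, the explicit box of Theorem \ref{deltaboundedsummary}, the neighbourhood $N$ of step (1), and an upper bound for $\left|dF\right|$ on $\Omega$; none of these is implemented here):
\begin{verbatim}
import numpy as np

def no_zeros_outside_N(F, omega, in_N, df_bound, step):
    # If |F(p)| exceeds df_bound times the half-diagonal of a grid cell at a
    # grid point p, the mean value inequality rules out zeroes of F in the
    # cube of side `step` centred at p.
    radius = df_bound * step * np.sqrt(3.0) / 2.0
    axes = [np.arange(lo, hi + step, step) for (lo, hi) in omega]
    for d1 in axes[0]:
        for d2 in axes[1]:
            for d3 in axes[2]:
                p = np.array([d1, d2, d3])
                if in_N(p):
                    continue  # the neighbourhood N is handled by step (1)
                if np.linalg.norm(F(p)) <= radius:
                    return False, p  # a zero of F near p cannot be excluded
    return True, None
\end{verbatim}
As noted above, the present size of $\Omega$ makes such a loop impractical; the sketch only records the logic of the verification.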
\section{An $SO(2)\times SO(3)$-invariant ancient solution on $\mathbb{S}^4$}\label{newancient} The sequence of `almost Ricci solitons' on $\mathbb{S}^4$ we found by making $\delta_1$ large appears to have a pancake shape. In this section, we describe a `pancake' $\kappa$-noncollapsed ancient solution to the Ricci flow; it is likely that this is the precise geometric structure that our `almost Ricci solitons' are detecting. It is worth noting that by the recent classification result in \cite{Brendle}, this $\kappa$-noncollapsed ancient Ricci flow on $\mathbb{S}^4$ cannot be uniformly PIC and weakly PIC2.
We restate Theorem \ref{NAS} for convenience. \begin{thm}\label{ancientsolution}
There exists a $\kappa>0$ and a $\kappa$-noncollapsed $SO(2)\times SO(3)$-invariant ancient Ricci flow on $\mathbb{S}^4$ with positive Riemann curvature operator
which is not isometric to the round
shrinking sphere. The group $SO(2)\times SO(3)$ acts on $\mathbb{S}^4\subseteq \mathbb{R}^5=\mathbb{R}^2\oplus \mathbb{R}^3$ in the obvious way. \end{thm} The proof of this result is broken up into several steps. Apart from the first step, the construction of this ancient Ricci flow solution is almost identical to that of the Perelman ancient `sausage' solution; the details are available in Chapter 19 of \cite{RicciIII}. \begin{proof} \textbf{Step One: a sequence of initial Riemannian metrics.} For each large $L>10$, choose an $SO(2)\times SO(3)$-invariant Riemannian metric $g_0$ on $\mathbb{S}^4$ of the form \eqref{metricform} with $T=L+1$ and \begin{align*}
\tilde{f}_1(r)=\begin{cases}
L \ &\text{if} \ 0<r<1\\
(L+1-r) \ &\text{if} \ 1\le r<L+1
\end{cases}, \qquad
\tilde{f}_2(r)=\begin{cases}
\sin(\frac{\pi r}{2}) \ &\text{if} \ 0<r<1\\
1 \ &\text{if} \ 1\le r<L+1.
\end{cases} \end{align*} This metric clearly satisfies the smoothness conditions at the singular orbits, but it fails to be smooth at $r=1$. However, we can mollify the two functions $\tilde{f}_1,\tilde{f}_2$ on $(\frac{1}{2},\frac{3}{2})$ so that the resulting functions $f_1,f_2$ are smooth. Recall that the Riemann curvatures of this Riemannian metric are \begin{align*}
-\frac{f_1''}{f_1}, \qquad -\frac{f_2''}{f_2}, \qquad \frac{1-(f_2')^2}{f_2^2}, \qquad \frac{-f_1'f_2'}{f_1f_2}; \end{align*} it is clear that after mollification, these curvatures are all non-negative. Since we mollify on $(\frac{1}{2},\frac{3}{2})$, the quantity $-\frac{f_2''}{f_2}+\frac{1-(f_2')^2}{f_2^2}$ is uniformly bounded from below, independently of $L$ because $f_2$ does not depend on $L$ in this region. Also the supremum of all four eigenvalues is uniformly bounded from above ($L$ does not affect the $f_2$ terms, and only makes $f_1$ large so the corresponding curvatures can only get smaller). Therefore, \begin{align}\label{initialcest}
\textit{there exists a} \ C>0 \ \textit{so that} \ \frac{1}{C}\le S(g_0)=\left|\mathcal{R}(g_0)\right|\le C \ \textit{for all} \ L>10. \end{align} Now we claim that there is a $\kappa_0>0$ so that $(M,g_0)$ is $\kappa_0$-noncollapsed on all scales $r_0>0$, and for all large $L>10$. To see this, we must show that any geodesic ball $B_{r_0}(x_0)$ on which $S(g_0)\le \frac{1}{r_0^2}$ has volume at least $\kappa_0 r_0^4$. By the lower bound on the scalar curvature \eqref{initialcest}, it suffices to find a $\kappa_0>0$ so that the volume of any geodesic ball $B_{r_0}(x_0)$ is at least $\kappa_0 r_0^4$ whenever $0<r_0^2\le C$. To this end, take an arbitrary geodesic ball $B_{r_0}(x_0)\subseteq \mathbb{S}^4$ and use the manifold decomposition:
\begin{align*}
\mathbb{S}^4=\overline{A\cup B},
\end{align*} where $A=(0,\frac{3}{2}]\times \mathbb{S}^1\times \mathbb{S}^2$ and $B=[\frac{3}{2},L+1)\times \mathbb{S}^1\times \mathbb{S}^2$. Now for each $L>10$, $B$ is isometrically contained in $\mathbb{R}^2\times \mathbb{S}^2$ equipped
with the standard metric, so we have \begin{align}\label{gaussiankappa} \inf \Bigg\{ \ \frac{\text{Vol}(B_{r_0}(x_0))}{r_0^4} \ \vert\ B_{r_0}(x_0)\subseteq B, r_0\in (0,C]\ \Bigg\}\ge \inf \Bigg\{ \ \frac{\text{Vol}(B_{r_0}(x_0))}{r_0^4} \ \vert\ B_{r_0}(x_0)\subseteq \mathbb{R}^2\times \mathbb{S}^2, r_0\in (0,C] \ \Bigg\}. \end{align} On the other hand, let $\overline{g}$ be an $SO(3)$-invariant Riemannian metric on $\mathbb{S}^3$ found by extending $f_2$ on $(0,\frac{3}{2})$ smoothly to a function on $(0,2)$ with the appropriate smoothness conditions at $2$. Also let $\overline{f_1}(r)$ be a smooth extension of $f_1:(0,\frac{3}{2})\to \mathbb{R}$ to a function with domain $(0,2)$ and appropriate smoothness conditions at $r=2$. Note that $\overline{g}$ is independent of $L$, and $A$ is isometrically embedded in the warped product manifold $\mathbb{S}^1\times \mathbb{S}^3$ with metric $(\overline{f_1}(r)^2d\theta^2,\overline{g})$, where $d\theta$
is the standard one-form on $\mathbb{S}^1$.
Therefore, \begin{align}\label{warpedkappa} \inf \Bigg\{ \ \frac{\text{Vol}(B_{r_0}(x_0))}{r_0^4} \ \vert\ B_{r_0}(x_0)\subseteq A, r_0\in (0,C] \ \Bigg\}\ge \inf \Bigg\{ \ \frac{\text{Vol}(B_{r_0}(x_0))}{r_0^4} \ \vert\ B_{r_0}(x_0)\subseteq \mathbb{S}^1\times \mathbb{S}^3, r_0\in (0,C] \ \Bigg\}. \end{align}
Now it is well-known that the right hand side of \eqref{gaussiankappa}, which is independent of $L$, is strictly positive. Also, we can arrange it so that $\overline{f_1}(r)\ge 1$ for all $r\in (0,2)$ and $L>10$, so it is also clear that the right hand side of \eqref{warpedkappa} is strictly positive for all $L>10$, and can be bounded from below by a positive number, independently of $L>10$.
Now any other geodesic ball of radius $r_0$ contains a geodesic
ball of radius $\frac{r_0}{2}$ which is entirely contained in at least one of $A$ or $B$, so the existence of a $\kappa_0>0$ independent of $L$ follows from \eqref{gaussiankappa} and \eqref{warpedkappa}.
\textbf{Step Two: Ricci flows from the initial metrics.} Since the Riemann curvature of $g_0$ is non-negative, for each $L>10$ there exists a $T_L$ so that the Ricci flow starting at $g_0$ becomes singular for the first time at $T_L$, and with the round sphere as a singularity model. By \eqref{initialcest}, there exists a uniform $C'>0$ so that $T_L\in (\frac{1}{C'},C')$ for all $L>10$. Furthermore, by \eqref{initialcest} and the fact that each $g_0$ is $\kappa_0$-noncollapsed on all scales, Theorem 19.52 (no local collapsing) of \cite{RicciIII} implies the existence of a $\kappa>0$ so that all of the Ricci flows on $(\frac{2T_L}{3},T_L)$ are uniformly $\kappa$-noncollapsed on all scales $r^2<\frac{T_L}{3}$.
Note that we can apply this theorem because we can uniformly bound $\left|\mathcal{R}\right|$ on the time interval $[0,\frac{1}{16\mathcal{C}}]$ independently of $L$; this is a result of the evolution equations for $\mathcal{R}$ discussed in the proof of Proposition \ref{scf1}.
For a given small $\delta>0$, let $t_L$ be the unique time at which the ratio of largest to smallest Riemann curvature eigenvalues is $1+\delta$, and so that the ratio is less than or equal to $1+\delta$ on $(t_L,T_L)$.
\textbf{Step Three: rescaled flows.} Consider the rescaled Ricci flows \begin{align*}
\hat{g}_L(t)=\frac{1}{T_L-t_L}g_L(T_L+(T_L-t_L)t) \end{align*} which start at time $t_0(L)=-\frac{T_L}{T_L-t_L}$, have sectional curvature ratio $1+\delta$ at time $-1$, become singular at time $0$, and are uniformly $\kappa$-noncollapsed on $(-\frac{T_L}{3(T_L-t_L)},0)$ for scales $r^2\le \frac{-t_0(L)}{3}$.
We now show that $\lim_{L\to \infty}t_0(L)=-\infty$. To see this, note that for small enough $\delta>0$, we have \begin{align}\label{scalacurvaturebound-1}
\hat{S}(-1)\le 3, \end{align} thanks to the evolution equation \begin{align*}
\frac{\partial \hat{S}}{\partial t}\ge \Delta_{\hat{g}(t)}\hat{S}+\frac{\hat{S}^2}{2} \end{align*} and the pinching of $1+\delta$ that we have already established. The estimate \eqref{scalacurvaturebound-1} coupled with Hamilton's trace Harnack inequality (Corollary 15.3 in \cite{RicciII}) then implies that $\hat{S}(t)\le 3\frac{-1-t_0(L)}{t-t_0(L)}$ for $t\in (t_0(L),-1]$. Therefore, \begin{align*}
\frac{L}{\sqrt{T_L-t_L}}&\le \text{diam} (\hat{g}_L(t_0(L)))\\
&= \text{diam} (\hat{g}_L(-1))-\int_{t_0(L)}^{-1}\frac{\partial \text{diam}(\hat{g}_L(t))}{\partial t}dt\\
&\le \text{diam} (\hat{g}_L(-1))+12\int_{t_0(L)}^{-1}\sqrt{\frac{-1-t_0(L)}{t-t_0(L)}}dt\\
&=\text{diam} (\hat{g}_L(-1))+24(-1-t_0(L))\\
&\le 24\frac{T_L}{T_L-t_L}. \end{align*} Note that in the last computation, we have used the following facts: \begin{itemize}
\item For any $x,y\in \mathbb{S}^4$, $\frac{\partial d(x,y)}{\partial t}\ge -12 \sqrt{\frac{-1-t_0(L)}{t-t_0(L)}}$, which follows from the Ricci curvature upper bound.
\item By making $\delta$ small, we can force $\hat{g}_L(-1)$ to be arbitrarily close to a round sphere, and the scalar curvature of this particular sphere must be $2$
because the remaining time until blow up is exactly $1$. Therefore, Myers' theorem implies that
$\text{diam} (\hat{g}_L(-1))\le 2\sqrt{3}\pi$. \end{itemize} The estimate $\frac{L}{\sqrt{T_L-t_L}}\le 24\frac{T_L}{T_L-t_L}$ implies that $\lim_{L\to \infty}T_L-t_L= 0$ as well, so $\lim_{L\to \infty}t_0(L)=-\infty$, because of our uniform estimates on $T_L$ itself.
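For completeness, the elementary integral used in the diameter estimate above can be evaluated exactly:
\begin{align*}
\int_{t_0(L)}^{-1}\sqrt{\frac{-1-t_0(L)}{t-t_0(L)}}\,dt
=\sqrt{-1-t_0(L)}\,\Big[2\sqrt{t-t_0(L)}\Big]_{t=t_0(L)}^{t=-1}
=2\big(-1-t_0(L)\big),
\end{align*}
which, multiplied by $12$, is the source of the term $24(-1-t_0(L))$.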
\textbf{Step Four: convergence.} We are now in a position to take limits. Indeed, the uniform $\kappa$-noncollapsing and curvature bounds are enough to get smooth convergence to an ancient Ricci flow by Theorem 3.10 in \cite{RicciI}; the ancient Ricci flow must have non-negative (hence positive) Riemann curvature. Since $\delta>0$ was eventually fixed, we can examine the sequence at time $-1$ to find that the ancient Ricci flow solution is on $\mathbb{S}^4$, but it is not the round sphere. By following the arguments in Chapter 19 of \cite{RicciI}, we find that $SO(2)\times SO(3)$ acts via isometries on this ancient solution on $\mathbb{S}^4\subset \mathbb{R}^5=\mathbb{R}^2\oplus \mathbb{R}^3$ in the obvious way. \end{proof}
We conclude this section by observing that, not only is this $SO(2)\times SO(3)$-invariant ancient solution non-round, but it is not isometric to Perelman's rotationally-invariant $\kappa$ solution either. \begin{prop}\label{so432}
Let $SO(2)\times SO(3)$ act on $\mathbb{S}^4\subset \mathbb{R}^5=\mathbb{R}^2\oplus \mathbb{R}^3$ in the obvious way. Any $SO(2)\times SO(3)$-invariant continuous
function
$f:\mathbb{S}^4\to \mathbb{R}$ which is also invariant under an $SO(4)$ rotation group action must be constant. \end{prop} \begin{proof} Let $r_1:\mathbb{S}^4\to [0,1]$ be a continuous function which parametrises the $SO(4)$ action, and let $r_2:\mathbb{S}^4\to [0,1]$ be a continuous function which parametrises the $SO(2)\times SO(3)$ action. Note that, almost by construction, we have the following: \begin{itemize}
\item $r_1^{-1}(0)$ and $r_1^{-1}(1)$ are single points;
\item $r_2^{-1}(0)$ is a copy of $\mathbb{S}^1$;
\item $r_2^{-1}(1)$ is a copy of $\mathbb{S}^2$;
\item $r_1^{-1}(k)$ is a copy of $\mathbb{S}^3$ for each $k\in (0,1)$;
\item $r_2^{-1}(k)$ is a copy of $\mathbb{S}^1\times \mathbb{S}^2$ for each $k\in (0,1)$;
\item the function $f$ is constant on the level sets $r_i^{-1}(k)$ for each $i=1,2$ and $k\in [0,1]$. \end{itemize} We claim that there is an $\epsilon>0$ so that $f$ is constant on $r_1^{-1}([0,\epsilon])$ and $r_1^{-1}([1-\epsilon,1])$. To see this, note that $f$ must be constant on the two submanifolds $(SO(2)\times SO(3))\cdot r_1^{-1}(0)$ and $(SO(2)\times SO(3))\cdot r_1^{-1}(1)$. Since these submanifolds are compact, connected and at least one-dimensional, their images under the continuous function $r_1$ must be non-trivial closed subintervals of $[0,1]$ containing $0$ or $1$, respectively. However, since $f$ is constant on the level sets of $r_1$, $f$ must actually be constant on the $r_1$ pre-images of these closed subintervals.
Choose $\tilde{\epsilon}_0\in (0,1]$ so that $[0,\tilde{\epsilon}_0]$ is the maximal connected subinterval of $[0,1]$ containing $0$ so that $f$ is constant on $r_1^{-1}([0,\tilde{\epsilon}_0])$. Similarly, choose $\tilde{\epsilon}_1\in [0,1)$ so that $[\tilde{\epsilon}_1,1]$ is the maximal connected subinterval of $[0,1]$ containing $1$ so that $f$ is constant on $r_1^{-1}([\tilde{\epsilon}_1,1])$. The argument in the last paragraph shows that $\tilde{\epsilon}_0$ and $\tilde{\epsilon}_1$ both exist. If $\tilde{\epsilon}_1\le \tilde{\epsilon}_0$, the proof will be complete, so we assume that $0<\tilde{\epsilon}_0<\tilde{\epsilon}_1<1$. Define the pairwise-disjoint connected sets $A=r_1^{-1}([0,\tilde{\epsilon_0}])$, $B=r_1^{-1}((\tilde{\epsilon}_0,\tilde{\epsilon}_1))$, $C=r_1^{-1}([\tilde{\epsilon}_1,1])$. It is clear that $\mathbb{S}^4=A\cup B\cup C$. Also note that both $A$ and $C$ are $SO(2)\times SO(3)$-invariant because any orbit containing points both in and out of either of the sets would violate the maximality of $[0,\tilde{\epsilon_0}]$ or $[\tilde{\epsilon}_1,1]$. This implies that $B$ is $SO(2)\times SO(3)$-invariant as well. Now $r_2(A),r_2(B)$ and $r_2(C)$ are connected subintervals which must cover $[0,1]$; they must be pairwise disjoint because $SO(2)\times SO(3)$ orbits stay in exactly one of $A$, $B$ or $C$. Since $A$ and $C$ are compact, we find that $r_2(A)$ and $r_2(C)$ are closed, so $r_2(B)$ must be of the form $(a,c)$ for some $0<a<c<1$. Therefore, $B$ must be homeomorphic to both $(a,c)\times \mathbb{S}^1\times \mathbb{S}^2$ and $(\tilde{\epsilon}_0,\tilde{\epsilon}_1)\times \mathbb{S}^3$, which is a contradiction (these two topological spaces have non-isomorphic fundamental groups).
\end{proof}
\appendix
\section{The Bryant soliton revisited}\label{BryantAppendix} The Bryant steady soliton is a rotationally-invariant metric on $\mathbb{R}^3$ of the form $g=dt\otimes dt+f(t)^2 Q$, where $Q$ is the standard metric on $\mathbb{S}^2$ with Ricci curvature $1$, and $f:(0,\infty)\to (0,\infty)$ is smooth, and can be extended to a smooth and odd function on $(-\infty,\infty)$ with $f'(0)=1$. A detailed construction of this metric is given in \cite{RicciI}, where the authors also show that the Bryant soliton has positive Riemann curvature everywhere. If we let $R=\frac{1}{f}$, $L_2=\frac{f'}{f}$ and $\xi=2L_2-u'$, where $u$ is the potential function, then \begin{align} \begin{split}\label{Bryantsolutionnonrescaled}
\xi'&=-2L_2^2,\\
L_2'&=-\xi L_2+R^2,\\
R'&=-L_2 R.
\end{split} \end{align} For any $p>0$, solutions of \eqref{Bryantsolutionnonrescaled} are invariant under the transformation that sends a function $f(t)$ to $pf(pt)$, so to uniquely specify the Bryant soliton, we need to prescribe the value $\delta_1:=\lim_{t\to 0}\left(\frac{R(t)-\frac{1}{t}}{t}\right).$
Let $x=\frac{L_2}{R}$, $y=\frac{R}{\xi}$ and $z=\frac{1}{R}$, then \begin{align}\label{bryantfirstchange} \begin{split}
x'&=\frac{1}{yz}\left(-x+y+yx^2\right),\\
y'&=\frac{-xy^2+2x^2y^3}{yz},\\
z'&=x.
\end{split} \end{align} The following facts about the Bryant soliton curve $(x(t),y(t),z(t))$ are well-known (they are also discussed in \cite{RicciI}): \begin{itemize}
\item $(x(0),y(0))=(1,\frac{1}{2})$;
\item $\lim_{t\to \infty}(x(t),y(t))=(0,0)$;
\item $x'(t),y'(t)< 0$ for all $t\in (0,\infty)$;
\item $\lim_{t\to \infty}\frac{y(t)}{x(t)}=1$. \end{itemize} It is therefore clear that there is a function $f:[0,1]\to [0,\frac{1}{2}]$ so that $y(t)=f(x(t))$ along the Bryant soliton, and this function does not depend on the choice of $\delta_1>0$ (this parameter only affects how quickly one travels through the curve $y=f(x)$). The following propositions tell us some valuable information about this function. \begin{prop}\label{initialestaimtesxy}
The function $f$ satisfies $\frac{x}{2}\le f(x)\le \frac{x}{2}+(1-x)^2$ for all $x\in [\frac{3}{4},1]$. In fact, $\frac{x}{2}\le f(x)$ for all $x\in [0,1]$. \end{prop} \begin{proof} Consider the system \begin{align}\label{xysystmebryant} \begin{split}
x'&=-x+y+yx^2,\\
y'&=-xy^2+2x^2y^3;
\end{split} \end{align} the critical point $(1,\frac{1}{2})$ has a one-dimensional unstable manifold. The part of the unstable manifold lying in $[0,1]\times [0,\frac{1}{2}]$ consists of exactly those points of the form $(x,f(x))$ for $x\in [0,1]$. It therefore suffices to show that any point in the unstable manifold with $x\in [\frac{3}{4},1]$ satisfies \begin{align}\label{bryantyxestimatesinitial}
\frac{x}{2}\le y\le \frac{x}{2}+(1-x)^2. \end{align}
Our first step in verifying \eqref{bryantyxestimatesinitial} for $x \in [\frac{3}{4},1]$ is to find an $\epsilon>0$ so that \eqref{bryantyxestimatesinitial} holds on $[1-\epsilon,1]$. To find such an $\epsilon>0$, note that $f$ must be smooth close to $x=1$ because it describes an unstable manifold of a hyperbolic critical point of a smooth vector field. By looking at the linearisation of \eqref{xysystmebryant} at $(1,\frac{1}{2})$, we see that the unstable manifold points in the direction of $(2,1)$, so $f'(1)=\frac{1}{2}$. Now the function $f$ also satisfies \begin{align}\label{unstablemanifoldequation} (-x+f(x)+f(x)x^2)f'(x)=-xf(x)^2+2x^2f(x)^3 \end{align} for $x$ close to $1$. Using the equalities $f(1)=\frac{1}{2}$ and $f'(1)=\frac{1}{2}$, we can write $f(x)=\frac{x}{2}+a(x-1)^2+O(x-1)^3$. Then \eqref{unstablemanifoldequation} becomes \begin{align*} \frac{x-1}{2}+(x-1)^2\left(\frac{3}{4}+3a\right) +O(x-1)^3=\frac{x-1}{2}+(x-1)^2\left(\frac{7}{4}+\frac{a}{2}\right)+O(x-1)^3, \end{align*} so $a=\frac{2}{5}\in (0,1)$. The existence of the required $\epsilon>0$ follows.
Consider the quantities $M_1=y-\frac{x}{2}$ and $M_2=y-\frac{x}{2}-(x-1)^2$. Whenever $M_1=0$, that is $y=\frac{x}{2}$, we have $y'=\frac{x^3}{4}(x^2-1)$ and $x'=\frac{x}{2}(x^2-1)$, so that $M_1'=y'-\frac{x'}{2}=\frac{x}{4}(x^2-1)^2\ge 0$ whenever $M_1=0$ and $x\in [0,1]$. On the other hand, we have \begin{align*}
M_2'=-xy^2+2x^2y^3-(\frac{1}{2}+2(x-1))(-x+y+yx^2), \end{align*} which is non-positive whenever $M_2=0$ and $x\in [\frac{3}{4},1]$. The required estimates follow. \end{proof} \begin{prop}\label{FinalEstff}
We have $x-x^2\le f(x)\le x$ for $x\in [0,\frac{1}{4}]$. In fact, $f(x)\le x$ for all $x\in [0,1]$. \end{prop} \begin{proof} As before, it suffices to consider the unstable trajectory of the system \eqref{xysystmebryant} which travels from $(1,\frac{1}{2})$ to $(0,0)$. The estimate $f(x)\le x$ follows from the fact that $f(1)<1$, and the observation that whenever the quantity $M_1=y-x$ vanishes and $x\in (0,1)$, we have \begin{align*}
M_1'&=xy^2(2xy-1)-M_1-yx^2\\
&=2x^3\left(x^2-1\right)<0 \end{align*} so that $M_1\le 0$ is preserved. On the other hand, consider the quantity $M_2=\frac{y}{x}-1+x$, so that \begin{align*}
M_2'&=\frac{y'x-x'y}{x^2}+x'\\
&=\frac{(-xy^2+2x^2y^3)x-(-x+y+yx^2)y}{x^2}-x+y+yx^2\\
&=-2y^2+2xy^3+\left(1-\frac{y}{x}\right)\frac{y}{x}-x+y+yx^2\\
&=-2y^2+2xy^3+(x-M_2)(M_2+1-x)-x+y+yx^2. \end{align*} Whenever $y=x-x^2$ and $x\in (0,\frac{3}{10}]$, we find that $M_2'> 0$. It therefore suffices to show that $y>0.21$ when $x=0.3$. To show this point, we consider the quantity $M_3=(y-0.21)+\frac{0.3-x}{5}$. Then \begin{align*}
M_3'=-xy^2+2x^2y^3+\frac{x-y-yx^2}{5} \end{align*} so that whenever $M_3=0$ and $x\in [0.3,1]$, we have $M_3'>0$. Since $M_3>0$ at the point $(1,\frac{1}{2})$, we have that $M_3>0$ when $x=0.3$, i.e., our solution curve has $y>0.21$ when $x=0.3$. \end{proof} We conclude this appendix with some short-time estimates on the original functions solving \eqref{bryantfirstchange} with $\delta_1=1$. \begin{thm}\label{BryantSolitonsmalltime}
If $\delta_1=1$, then up until time $\frac{1}{9}$,
we have
\begin{align*}
\frac{\sin(\sqrt{6}t)}{\sqrt{6}}&\le z(t)\le t,\\
1-2 \tan^2\left(\sqrt{\frac{3}{2}}t\right)&\le x(t)\le 1-3t^2e^{-9t^2}.
\end{align*} \end{thm} \begin{proof} For the $z$ estimate, note that $z(t)\le t$ follows from curvature positivity of the Bryant soliton. For the lower bound, we use Proposition \ref{initialestaimtesxy} to conclude that $y\ge \frac{x}{2}$, so that \begin{align*}
z''&=\frac{1}{yz}\left(-x+y+yx^2\right)\\
&\ge \frac{x}{yz}\left(-1+\frac{(x^2+1)}{2}\right)\\
&\ge \frac{2}{z}\left(-1+\frac{(x^2+1)}{2}\right)\\
&=\frac{(z')^2-1}{z}. \end{align*} Now since $R(t)=\frac{1}{t}+t+O(t^3)$, we find that $z(0)=0=z''(0)$, and $z'(0)=1$ with $z'''(0)=-6$. We therefore claim that \begin{align}\label{zlower}
z(t)\ge \frac{\sin(\sqrt{6}t)}{\sqrt{6}}, \end{align}
provided $t\in [0,\frac{1}{9}]$. To see this, note that the solution $\frac{\sin(\sqrt{p}t)}{\sqrt{p}}$ for $\sqrt{p}t\in [0,\frac{\pi}{2}]$ corresponds
to a curve in $(z,z')$ space starting at $(0,1)$, with $z$ increasing and $z'$ decreasing to the point $(\frac{1}{\sqrt{p}},0)$. We write this curve as $z'=f_p(z)$ for
each $p>0$.
The equality $z'''(0)=-6$ then implies that $z'\ge f_6(z)$,
as long as $z\in [0,\frac{1}{\sqrt{6}}]$. The solution of $z'=f_6(z)$ is precisely $\frac{\sin(\sqrt{6}t)}{\sqrt{6}}$; estimate \eqref{zlower}
follows for all $t$ so that $\sqrt{6}t\in [0,\frac{\pi}{2}]$.
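The reason the comparison functions above are the natural ones is that they solve the corresponding equality: for $w(t)=\frac{\sin(\sqrt{p}t)}{\sqrt{p}}$ one has $(w')^2-1=\cos^2(\sqrt{p}t)-1=-pw^2$, so that $w''=-pw=\frac{(w')^2-1}{w}$.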
We now move on to the $x$ estimates. Using $z(t)\le t$ and Proposition \ref{initialestaimtesxy}, we find that \begin{align*}
x'&\le \frac{1}{t}\left(\frac{(x-1)(2x^3-x^2+3x-2)}{2\left(\frac{x}{2}+(1-x)^2\right)}\right)\\
&\le \frac{2(x-1)}{t}\left(1-3(1-x)\right), \end{align*} provided $x\in [\frac{3}{4},1]$. Using \eqref{Bryantsolutionnonrescaled} and the fact that $R(t)=\frac{1}{t}+t+O(t^3)$, we find that $L_2(t)=\frac{1}{t}-2t+O(t^3)$. One can then verify that $x(0)=1,x'(0)=0,x''(0)=-6$. It is clear that $(1-x)\ge x_*(t)$, where $x_*(t)'=\frac{2x_*(t)}{t}(1-3x_*(t))$ with $x_*(0)=x_*'(0)=0$, $x_*''(0)=6$. We can easily show that $x_*(t)\le 3t^2$ for all $t\ge 0$, so that $x_*(t)'\ge \frac{2x_*(t)}{t}(1-9t^2)$; solving this implies that $(1-x)(t)\ge 3t^2e^{-9t^2}$. For the $x$ lower bound, we estimate \begin{align*}
x'&\ge \frac{-x+\frac{x}{2}(1+x^2)}{yz}\\
&\ge \frac{-x+\frac{x}{2}(1+x^2)}{\frac{xz}{2}}\\
&=\frac{x^2-1}{z}\\
&\ge \frac{\sqrt{6}(x^2-1)}{\sin(\sqrt{6}t)}\\
&\ge \frac{2\sqrt{6}(x-1)}{\sin(\sqrt{6}t)}, \end{align*} which implies that $(1-x)(t)\le 2 \tan^2(\sqrt{\frac{3}{2}}t)$. \end{proof}
\end{document}
\begin{document}
\begin{CJK*}{GB}{}
\title{Physical interpretation of nonlocal quantum correlation through local description of subsystems
}
\author{Tanumoy Pramanik} \thanks{These authors contributed equally to this work} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \affiliation{ Beijing Academy of Quantum Information Sciences, Beijing 100193, China}
\author{Xiaojiong Chen} \thanks{These authors contributed equally to this work} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China}
\author{Yu Xiang} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China}
\author{Xudong Li} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \author{Jun Mao} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \author{Jueming Bao} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \author{Yaohao Deng} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \author{Tianxiang Dai} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China}
\author{Bo Tang} \affiliation{Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China} \author{Yan Yang} \affiliation{Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China} \author{Zhihua Li} \affiliation{Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China}
\author{Qihuang Gong} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \affiliation{ Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \affiliation{ Frontiers Science Center for Nano-optoelectronics \& Collaborative Innovation Center of Quantum Matter, Peking University, Beijing, 100871, China } \affiliation{ Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan 030006, Shanxi, China} \affiliation{ Peking University Yangtze Delta Institute of Optoelectronics, Nantong 226010, Jiangsu, China} \author{Qiongyi He} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \affiliation{ Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \affiliation{ Frontiers Science Center for Nano-optoelectronics \& Collaborative Innovation Center of Quantum Matter, Peking University, Beijing, 100871, China } \affiliation{ Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan 030006, Shanxi, China} \affiliation{ Peking University Yangtze Delta Institute of Optoelectronics, Nantong 226010, Jiangsu, China} \author{Jianwei Wang} \affiliation{State Key Laboratory for Mesoscopic Physics, School of Physics, Peking University, Beijing, 100871, China} \affiliation{ Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \affiliation{ Frontiers Science Center for Nano-optoelectronics \& Collaborative Innovation Center of Quantum Matter, Peking University, Beijing, 100871, China } \affiliation{ Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan 030006, Shanxi, China} \affiliation{ Peking University Yangtze Delta Institute of Optoelectronics, Nantong 226010, Jiangsu, China}
\date{\today}
\begin{abstract} \noindent Characterization and categorization of quantum correlations are both fundamentally and practically important in quantum information science. Although quantum correlations such as non-separability, steerability, and non-locality can be characterized by different theoretical models in different scenarios with either known (trusted) or unknown (untrusted) knowledge of the associated systems, such characterization is sometimes ambiguous to experimentalists. In this work, we propose a physical interpretation of nonlocal quantum correlation between two systems. In the absence of a {\it complete local description} of one of the subsystems, quantified by the {\it local uncertainty relation}, the correlation between the subsystems becomes nonlocal. Remarkably, different nonlocal quantum correlations can be discriminated from a single uncertainty relation derived under the local hidden state (LHS)--LHS model only.
We experimentally characterize the two-qubit Werner state in different scenarios. \end{abstract}
\maketitle
\end{CJK*}
\section{Introduction}
Quantum correlation between two or more subsystems that cannot be described by local-causal theories is a key resource in quantum information science~\cite{EPR, Bell, Schro_1, Schro_2, Brunner, Horodecki, Jones07_1,Jones07_2, tele1,tele2,tele3,tele4,tele5,tele6, QKey1,QKey2, QKey3, QKey4, QCom1, QCom2}. A crucial task is to characterize, categorize and certify different quantum correlations. In general, quantum correlations can be described by the joint probability distribution of the events measured in the subsystems. For bipartite quantum systems, the correlation is defined by \begin{eqnarray}
\mathcal{P}=\left\{P(a_{\mathcal{A}_i}, b_{\mathcal{B}_j}| \rho_{AB}) = \text{Tr}\left[\left(\Pi^{\mathcal{A}_i}_a\otimes\Pi^{\mathcal{B}_j}_b\right) \rho_{AB}\right]\right\}~~~ \label{Exp_Prob} \end{eqnarray} where $\rho_{AB}$ is the unknown state composed of Alice's and Bob's systems, and $\Pi^{\mathcal{A}_i}_a$ ($\Pi^{\mathcal{B}_j}_b$) is the projective measurement with outcome $a$ ($b$) for the $\mathcal{A}_i$ ($\mathcal{B}_j$) observable. The characterization of the correlation of the state $\rho_{AB}$ implies the measurement of the probability distribution $\mathcal{P}$. For example, to certify the Bell nonlocality, the distribution $\mathcal{P}$ has to violate Bell inequalities~\cite{Bell, CHSH, Brunner}. Quantum correlations are further categorized by entanglement~\cite{Horodecki} and quantum steering~\cite{ Schro_1, Jones07_1, Ramanathan_2018}. Wiseman \textit{et al.} proposed a framework to describe all three quantum correlations for the bipartite system by considering three different scenarios having either known (trusted) or unknown (untrusted) knowledge of the system~\cite{Jones07_1,Jones07_2,CHRW_2011}:
(1) $\rho_{AB}$ is entangled if $\mathcal{P}$ can not be generated by a separable state having trusted measurement devices in both subsystems. (2) $\rho_{AB}$ is steerable if $\mathcal{P}$ can not be produced by a local hidden state (LHS) model, in the case that one subsystem owns a trusted measurement device while the other remains untrusted. (3) $\rho_{AB}$ is Bell nonlocal if $\mathcal{P}$ is incompatible with the local hidden variable (LHV) interpretation and both measurement devices are untrusted.
Categorizing quantum correlations according to their capability of controlling measurement apparatuses has enabled important applications in quantum information, e.g., device-independent (DI) or one-sided DI quantum key distribution~\cite{DI_QKD1, MDI_QKD1, MDI_QKD2} and randomness generation~\cite{DI_Random}. We note, however, that non-separability, steerability, and Bell nonlocality can only be verified by the violations of their own inequalities, calling for a general framework for characterizing quantum correlations. The conceptual definition of known or unknown systems may also lead to confusion and ambiguity for experimentalists, who usually can well control the system and measurement apparatuses.
\begin{figure*}\label{Fig_1}
\end{figure*}
In this work, we propose a more physical interpretation of different nonlocal quantum correlations in terms of the {\it complete local description} of the subsystems, which can be quantified by the {\it local uncertainty relation} of the subsystems. Our idea is inspired by Einstein's comment~\cite{EPR} and Bell's seminal work~\cite{Bell} on the incompleteness of quantum theory supplemented by LHV. We here ask a similar question: {\it when two systems $A$ and $B$ are quantumly correlated, is there any complete local description of one of the subsystems, say $B$, that has nothing to do with $A$, or vice versa}? We will show how the local uncertainty relation derived using the complete local description of subsystems can help in discriminating different nonlocal quantum correlations. We remark that our way of characterizing quantum correlations represents a fundamental connection between quantum nonlocality and the uncertainty relation.
Note that, in the previous works~\cite{Bell, CHSH, Brunner, Jones07_1,Jones07_2,CHRW_2011, Hofmann_2003, Zhen_2016}, the criteria for discriminating different nonlocal quantum correlations are based on different forms of uncertainty relations formulated under the LHS-LHS, LHS-LHV, and LHV-LHV models. Here, we introduce a single uncertainty relation~(inequality~(\ref{G_UR})) formulated under the LHS-LHS model, and this uncertainty relation can discriminate three different kinds of nonlocal correlations, i.e., entanglement, steering, and Bell nonlocal correlation. See Fig.~\ref{Fig_id} for a clearer picture.
\begin{figure}
\caption{ $S$, $E$, $St$, and $B$ correspond to separable, entangled, steerable, and Bell nonlocal correlation, respectively. Entanglement, steerability, and Bell nonlocal correlation are confirmed if the observed correlation $\mathcal{P}$ of Eq.~(\ref{Exp_Prob}) can not be explained by the theoretical models LHS-LHS, LHS-LHV, and LHV-LHV, respectively. Fewer assumptions about the associated systems make the correlation more nonlocal. In this work, we discriminate the degree of nonlocality under a single theoretical model, LHS-LHS, with the help of the proposed uncertainty relation of inequality~(\ref{G_UR}). Here, the violation of the inequalities $\mathcal{F}_1^n\leq \mathcal{C}_1^n$~(\ref{LHS_Ent}), $\mathcal{F}_2^n\leq \mathcal{C}_2^n$~(\ref{LHS_Seer}) and $\mathcal{F}_3^n\leq \mathcal{C}_3^n$~(\ref{LHS_CHSH}) validates entanglement, steerability and Bell nonlocal correlation, respectively. }
\label{Fig_id}
\end{figure}
\section{Verification of different nonlocal correlations through complete description of subsystems}
Let us first consider the following game. Alice prepares a joint system of $A$ and $B$ in an unknown state $\rho_{AB}$, and sends the subsystem $B$ to Bob. Bob, however, may suspect that Alice cheats him by preparing the state according to the LHS model, $\rho_{AB}^{\text{LHS}} = \sum_{i} p_i \rho^A_i\otimes \rho^B_i$,
where $\rho^A_i$ ($\rho^B_i$) is Alice's (Bob's) local state, $p_i\geq 0$ and $\sum_{i} p_i=1$. For $\rho_{AB}^{\text{LHS}}$, the system $B$ has complete local description, $\{p_i,\,\rho^B_i\}$.
Here Bob intends to characterize the nonlocal quantum correlations of the state $\rho_{AB}$ with the help of the local uncertainty relation. Bob asks Alice to minimize the uncertainty of the state of system $B$ by communicating $k$-cbit (classical bit) information. Given the $k$-cbit, Alice measures an observable $\mathcal{A}_i$, and sends back the measurement outcome $a$ together with $\mathcal{A}_i$ to Bob.
Finally, Bob checks whether the joint probability distribution $\mathcal{P}$ can be described by the complete local description of $\rho_{AB}^{\text{LHS}}$. This description is certified by the uncertainty of their outcomes characterized by the condition $V(a,b|i,j)$ (where $i$ and $j$ represent Alice's and Bob's choices of observables $\{\mathcal{A}_i\}$ and $\{\mathcal{B}_j\}$, respectively).
Bob can confirm that the state $\rho_{AB}$ is entangled if {the following local uncertainty} relation is violated \begin{eqnarray}
\mathcal{F}^n_k = \{\sum_{i,j=0}^{n-1} \sum_{a,b=0}^{1} V_k(a,b|i,j) P(a_{\mathcal{A}_i},\,b_{\mathcal{B}_j}| \rho_{AB}) \} \leq \mathcal{C}^n_k, ~~~~~ \label{G_UR} \end{eqnarray}
where $V_k(a,b|i,j)$, $k\in\{1,2,3\}$, represents three different conditions of quantum correlations ($V_1$ for entanglement, $V_2$ for steering, and $V_3$ for Bell nonlocality), and $n$ is the number of measurements performed on $A$ and $B$, chosen as $n=2,3$ in our work (also in the experiment), although it can be chosen to be an arbitrary number (see the Appendix). The upper bound $\mathcal{C}^n_k$ is obtained by maximizing $\mathcal{F}^n_k$ over the state $\rho_{AB}^{\text{LHS}}$ and all of Alice's possible strategies. The violation of inequality~(\ref{G_UR}) implies that the shared state $\rho_{AB}$ cannot be written in the form of $\rho_{AB}^{\text{LHS}}$.
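As an illustrative reduction (assuming the outcomes $a,b\in\{0,1\}$ label the $\pm1$ eigenvalues of the chosen observables), the steering condition $V_2(a,b|i,j)=(-1)^{a+b}\delta_{ij}$ introduced below collapses the double sum in inequality~(\ref{G_UR}) to ordinary correlation functions,
\begin{eqnarray}
\sum_{i,j=0}^{n-1}\sum_{a,b=0}^{1}(-1)^{a+b}\delta_{ij}\,P(a_{\mathcal{A}_i},\,b_{\mathcal{B}_j}|\rho_{AB}) = \sum_{i=0}^{n-1}\langle \mathcal{A}_i\,\mathcal{B}_i\rangle, \nonumber
\end{eqnarray}
which is how the general form~(\ref{G_UR}) specializes to the steering inequality~(\ref{LHS_Seer}); the absolute values appearing in~(\ref{LHS_Seer}) can be absorbed into Alice's freedom to relabel her outcomes for each setting.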
Figure~\ref{Fig_1} ~\cite{Fig1} sketches three different scenarios for the characterization and certification of quantum correlations in a single local-description model.
For simplicity, we start with the $LHS_2^n$ scenario, that is, the LHS model description for quantum steering~\cite{Schro_2, Jones07_1,Jones07_2,Paul}.
\subsection{Verification of steerability}
For the verification of steerability of the shared state $\rho_{AB}$, Bob asks Alice to minimize his uncertainty of the observables $\mathcal{B}_i$. He checks the uncertainty of their outcomes constrained by the condition $V_2(a,b|i,j)=(-1)^{a+b} \delta_{ij}$, and the local uncertainty relation thus turns into~\cite{Saunders, bennet12}
\begin{eqnarray}
\{\mathcal{F}_2^n=\sum_{i=0}^{n-1}|\langle \mathcal{A}_i\,\mathcal{B}_i\rangle| \} \leq \{C_2^n=\max_{\{\mathcal{A}_i\},\rho_{AB}^{\text{LHS}}}\left[\mathcal{F}_2^n\right]\}, \label{LHS_Seer} \end{eqnarray}
where the upper bound $C_2^3 = \sqrt{3}$ ($C_2^2=\sqrt{2}$) for the $n=3$ ($n=2$) measurement setting corresponds to the local description of Bob's system by the eigenstates of the observables $(\sigma_x\pm\sigma_y\pm\sigma_z)/\sqrt{3}$ ($(\sigma_x\pm\sigma_z)/\sqrt{2}$) (see the Appendix for details). The $V_2$ shown in Eq.~(\ref{LHS_Seer}) represents Bob's residual uncertainty of the observable $\mathcal{B}_i$ (randomly chosen from a set of non-commuting observables~\cite{Joint_Measure_1,Joint_Measure_2,Joint_Measure_3}), given the $\{a,\,\mathcal{A}_i\}$ information from Alice. The classical communication of 1~cbit ($\log_2 2$) or 1.58~cbit ($\log_2 3$) is required from Bob to Alice when Bob randomly chooses $\mathcal{B}_i$ from a set of $n=2$ or 3 observables, say, $\{\sigma_z,\,\sigma_x\}$ or $\{\sigma_z,\,\sigma_x,\,\sigma_y\}$, respectively. The violation of inequality~(\ref{LHS_Seer}) indicates that the system $B$ does not have a complete local description independent of the system $A$, and the correlation is known as quantum steering~\cite{Saunders, bennet12, FgSt}.
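As an illustrative check of this criterion (an idealized calculation assuming perfect measurements, intended only as a guide), consider the Werner state $\rho_W= p\,\rho_{|\phi^-\rangle} + (1-p)\,\frac{\rm{I}\otimes \rm{I}}{4}$ analyzed in the following sections. For the singlet, $\langle \sigma_i\otimes\sigma_i\rangle=-1$ for $i\in\{x,y,z\}$, while the white-noise part contributes zero, so with $\mathcal{A}_i=\mathcal{B}_i=\sigma_i$,
\begin{eqnarray}
\mathcal{F}_2^3= 3p > \sqrt{3} \;\Leftrightarrow\; p>\frac{1}{\sqrt{3}}, \qquad \mathcal{F}_2^2 = 2p > \sqrt{2} \;\Leftrightarrow\; p>\frac{1}{\sqrt{2}}, \nonumber
\end{eqnarray}
which reproduces the steering thresholds observed in Fig.~\ref{Fig_2}b.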
\subsection{Verification of Bell nonlocal correlation}
For the verification of Bell nonlocality, Bob does not reveal his choice of observables and there is no communication from Bob to Alice. Given the information from Alice, Bob estimates the uncertainty from the measured probability distribution $\mathcal{P}$.
In the case of $n=2$ measurement settings, the uncertainty of the input $\{i,j\}$ and output $\{a,b\}$ correlation is determined by the condition $V_3(a,b|i,j)=(-1)^{(a+b+ij)}$, which corresponds to the winning condition of the Clauser-Horne-Shimony-Holt (CHSH) game~\cite{Bell,CHSH,Oppen}.
Thus, the {local-uncertainty} Eq.~(\ref{G_UR}) can be rewritten as \begin{eqnarray} \mathcal{F}_3^2 \leq \{\max_{\{\mathcal{A}_i\},\,\rho_{AB}^{\text{LHS}}}\left[ \mathcal{F}_3^2\right]= 2\}, \label{LHS_CHSH} \end{eqnarray}
where $\mathcal{F}_3^2=|\langle\mathcal{A}_0\left(\mathcal{B}_0+\mathcal{B}_1\right)\rangle + \langle\mathcal{A}_1\left(\mathcal{B}_0-\mathcal{B}_1\right)\rangle|$ {and the upper bound corresponds to local description of Bob's system by the state, e.g., $|0\rangle$}. The $V_3$ corresponds to Bob's residual uncertainty of the randomly chosen observables of $\{\mathcal{B}_0=\sigma_x,\,\mathcal{B}_1=\sigma_z\}$~\cite{Joint_Measure_1, Joint_Measure_4}, with respect to Alice's {individual} measurement from $\{\mathcal{A}_0, \mathcal{A}_1\}$. When the local uncertainty relation~(\ref{LHS_CHSH}) is violated, Bob validates the Bell nonlocal correlation~\cite{Bell,CHSH,Oppen}.
The inequality~(\ref{LHS_CHSH}) becomes a necessary and sufficient condition for Bell nonlocality for the 2-measurement-setting and binary-outcome scenario~\cite{Horo_1995}. Note that the $\text{LHS}_3^2$ model is a stricter version of the $\text{LHS}_2^n$ model, as the former represents a simultaneous steerability (uncertainty) of $\{\mathcal{B}_0,\,\mathcal{B}_1\}$~\cite{Oppen} while the latter represents an individual steerability (uncertainty) of $\mathcal{B}_i$ with respect to Alice's observable $\mathcal{A}_i$. Therefore, the Bell nonlocal correlation becomes the strongest form of nonlocal correlations -- the violation of inequality~(\ref{LHS_CHSH}) indicates the violation of inequality~(\ref{LHS_Seer}).
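For a quick consistency check (an idealized calculation with the settings $\mathcal{B}_0=\sigma_x$, $\mathcal{B}_1=\sigma_z$ quoted above and, as an assumed example, $\mathcal{A}_0=-(\sigma_x+\sigma_z)/\sqrt{2}$, $\mathcal{A}_1=-(\sigma_x-\sigma_z)/\sqrt{2}$), the Werner state studied in the following sections gives
\begin{eqnarray}
\mathcal{F}_3^2 = 2\sqrt{2}\,p > 2 \;\Leftrightarrow\; p>\frac{1}{\sqrt{2}}\approx 0.7071, \nonumber
\end{eqnarray}
consistent with the Bell nonlocality threshold reported in Fig.~\ref{Fig_2}c.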
\subsection{Verification of entanglement}
To certify entanglement, Bob asks Alice to minimize the uncertainty of the outcome $b$ of the $\mathcal{B}_j$ measurement, randomly chosen from the set of non-commuting observables. The classical communication of $2$~cbits (four possible combinations of $\{b,\mathcal{B}_j\}$) or $2.58$~cbits (six possible combinations) is required from Bob to Alice, when $n=2$ or 3 measurement settings are chosen, respectively.
Bob evaluates the uncertainty of $V_1(a,b|i,j)=(-1)^{a+b} \delta_{(\mathcal{A}_i\mathcal{B}_j)} \delta_{(a\overline{b})}$, where $\overline{b}=b\oplus1$. Applying the condition of $V_1$ in the inequality~(\ref{G_UR}), the {local uncertainty} relation turns into \begin{eqnarray} \{\mathcal{F}_1^n &=& \sum_{i=0}^{n-1} P(\mathcal{A}_i,\,\mathcal{B}_i)\} \leq \{\max_{\rho_{AB}^{\text{LHS}}}\left[\mathcal{F}_1^n\right]= \mathcal{C}_1^n\} \label{LHS_Ent} \end{eqnarray}
where $P(\mathcal{A}_i,\,\mathcal{B}_i)=P(0_{\mathcal{A}_i},\,1_{\mathcal{B}_i}) + P(1_{\mathcal{A}_i},\,0_{\mathcal{B}_i})$, with 0 and 1 referring to measurement outcomes, and where $\mathcal{C}_1^{n=3}=2$ and $\mathcal{C}_1^{n=2}=1$ correspond to Bob's local state, e.g., $|0\rangle$ (see details in the Appendix). The $V_1$ leads to the uncertainty of anti-correlated outcomes $a\oplus b=1$ when Alice and Bob both perform a measurement of the same observable, i.e., $\mathcal{A}_i=\mathcal{B}_j$, on their respective subsystems. The violation of the local uncertainty relation in Eq.~(\ref{LHS_Ent}) can confirm the presence of entanglement~\cite{Hofmann_2003, St_Ent3, St_Ent1, St_Ent2}. The uncertainty relation of Eq.~(\ref{LHS_Ent}) is a weaker form of Eq.~(\ref{LHS_Seer}), as quantum steering considers the uncertainty of all possible combinations of $\{a, b\}$, while entanglement only takes the uncertainty of anti-correlated outcomes.
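As a simple arithmetic illustration (assuming the ideal Werner state introduced in the next section and the settings $\mathcal{A}_i=\mathcal{B}_i\in\{\sigma_x,\sigma_y,\sigma_z\}$), the singlet fraction yields perfectly anti-correlated outcomes while the white noise yields anti-correlation with probability $1/2$, so that
\begin{eqnarray}
P(\mathcal{A}_i,\,\mathcal{B}_i)=\frac{1+p}{2}, \qquad \mathcal{F}_1^3 = \frac{3(1+p)}{2} > \mathcal{C}_1^3=2 \;\Leftrightarrow\; p>\frac{1}{3}, \nonumber
\end{eqnarray}
in agreement with the entanglement threshold for the $n=3$ setting in Fig.~\ref{Fig_2}a.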
\begin{figure}
\caption{Theoretical and experimental characterizations of (a) entanglement, (b) steerability, and (c) Bell nonlocality for the bipartite Werner state. The $\text{LHS}_i^n$, $i=1,2,3$ and $n=2,3$, refers to different LHS models in the three scenarios, see the derived local uncertainty relations of (3)--(5). All experiments were implemented on an integrated silicon-photonics quantum device. Points denote experimental data and lines denote theoretical prediction: circular and square points are for $n=3$ and $n=2$ measurement settings; blue and black lines are for $n=3$ and $n=2$ measurement, respectively.
Red shaded (black dotted) regime in (a)--(c) identifies the $p$ mixing parameter of the Werner state $\rho_W$, above which the state is certified as entanglement, steerable, and Bell nonlocal, for $n=3$ ($n=2$) measurement settings, respectively.
Horizontal dashed lines are plotted to indicate the achievable upper bound of the inequality value $\mathcal{F}^n_k$. Note that error bars ($\pm\sigma$) estimated from 20 sets of data are too small to be visible in the plot. }
\label{Fig_2}
\end{figure}
\begin{figure}
\caption{{Schematic of an integrated silicon-photonics quantum device.} The quantum device is capable of generating, manipulating and analyzing maximally path-entangled states. The device is fabricated on the silicon-on-insulator platform. Lines are silicon nanophotonic waveguides with the size of 450nm $\times$ 220nm, and yellow parts are thermo-optic phase shifters that can be precisely controlled in experiment. A continuous wave laser light (at the wavelength of 1550.12nm) was used to pump two photon-pair sources, producing a pair of path-entangled photons via the spontaneous four-wave mixing (sFWM) process. The entangled photons were locally manipulated and analyzed by Alice (signal photon at 1545.31nm) and Bob (idler photon at 1554.91nm), respectively, which were implemented by the terminal Mach-Zehnder interferometers (MZIs). The two photons were measured by two superconducting nanowire single-photon detectors (SNSPDs), and their coincidences were recorded by a time tagger.
}
\label{FigScheme_text}
\end{figure}
\subsection{ Higher degree of nonlocality from more uncertainty of the condition $V_k(a,\,b|i,\,j)$}
For the purpose of connecting the uncertainty relation~(\ref{G_UR}) with the degrees of nonlocality, we define the normalized probability $\mathcal{P}_k^n=\mathcal{F}_k^n/\max\left[\mathcal{F}_k^n\right]$, where $\max\left[\mathcal{F}_k^n\right]$ corresponds to the algebraic maximum of $\mathcal{F}_k^n$. The corresponding uncertainty is measured by the Shannon entropy, $\mathcal{H}_k^n= - \mathcal{P}_k^n \log_2\left[\mathcal{P}_k^n\right] - (1-\mathcal{P}_k^n) \log_2\left[1-\mathcal{P}_k^n\right] $. Therefore, $\mathcal{H}_k^n$ determines the degree of uncertainty of the event $V_k(a,\,b|i,\,j)$, which corresponds to the correlation between Alice's and Bob's outcomes $a$ and $b$. For entanglement, steerability and Bell nonlocal correlation, the threshold values of this uncertainty are $\mathcal{H}_1^3 \approx 0.92$ (corresponding to the inequality~(\ref{LHS_Ent}), with $\mathcal{F}_1^3 = 2$ and $\max\left[\mathcal{F}_1^3\right]=3$), $\mathcal{H}_2^3 \approx 0.98$ (corresponding to the inequality~(\ref{LHS_Seer}), with $\mathcal{F}_2^3 = \sqrt{3}$ and $\max\left[\mathcal{F}_2^3\right]=3$), and $\mathcal{H}_3^2 = 1$ (corresponding to the inequality~(\ref{LHS_CHSH}), with $\mathcal{F}_3^2 = 2$ and $\max\left[\mathcal{F}_3^2\right]=4$), respectively. As a result, a higher degree of nonlocality implies a larger threshold value of the uncertainty of the condition $V_k(a,\,b|i,\,j)$.
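For concreteness, evaluating these entropies at the respective violation thresholds (a simple arithmetic check of the numbers quoted above) gives
\begin{eqnarray}
\mathcal{H}_1^3\big|_{\mathcal{P}_1^3=2/3}\approx 0.918, \qquad \mathcal{H}_2^3\big|_{\mathcal{P}_2^3=1/\sqrt{3}}\approx 0.983, \qquad \mathcal{H}_3^2\big|_{\mathcal{P}_3^2=1/2}= 1, \nonumber
\end{eqnarray}
so the threshold uncertainty of the condition indeed increases monotonically from entanglement to steering to Bell nonlocality.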
Our local uncertainty relations, as shown by inequalities (3)--(5), which are all derived from the single inequality (2) under different conditions, represent a more physical interpretation of different quantum correlations, including quantum entanglement, steering and Bell nonlocal correlation. We now take the Werner state $\rho_W= p\,\rho_{|\phi^-\rangle} + (1-p)\,\frac{\rm{I}\otimes \rm{I}}{4}$ as an example to test our local-description model in theory and experiment.
$\rho_{|\phi^-\rangle}$ is the density matrix of the singlet state of $|\phi^-\rangle=(|01\rangle - |10\rangle)/\sqrt{2}$, $I$ is the identity matrix, and $p$ ($0\leq p\leq 1$) denotes the mixing parameter.
The task now is to determine both in theory and experiment the bound of the $p$ parameter, above which the inequalities (3)--(5) can be violated and thus the state $\rho_W$ can be certified to be entangled, steerable, or Bell nonlocal. The results are shown in Fig.~\ref{Fig_2}.
\section{Experimental demonstration of different nonlocal correlations}
We experimentally verified the three quantum correlations for the Werner state. Figure \ref{FigScheme_text} shows the diagram of an integrated silicon-photonic quantum device that can generate, manipulate and analyze all four Bell states~\cite{Silverstone_15, Wang_16}. The integrated quantum device offers high levels of controllability and stability in operating quantum states of light~\cite{Wang:16D,Wang:review}. The maximally entangled state was created with a high fidelity of $0.951\pm0.096$, as verified by performing quantum state tomography (QST). The experimental realization of the $\rho_W$ state with a fully controllable mixing parameter $p$ is enabled by the classical mixture of quantum states (see experimental details in the Appendix).
Figure~\ref{Fig_2} shows the characterizations of entanglement, steering and Bell nonlocality, experimentally demonstrating the violations of their inequalities (\ref{LHS_Ent}), (\ref{LHS_Seer}) and (\ref{LHS_CHSH}), respectively. In Fig.~\ref{Fig_2}a, for the $n=2$ and 3-measurement settings, entanglement is confirmed for $1/2<p\leq 1$ (black dotted) and $1/3<p\leq 1$ (red shaded), respectively~\cite{Hofmann_2003}. Note that 3 measurement settings are sufficient to fully reveal entanglement of the $\rho_W$ state up to the value obtained by QST (see the Appendix and Fig.~\ref{Fig_3}). In Fig.~\ref{Fig_2}b, quantum steerability is certified when $1/\sqrt{3}<p\leq1$ (red shaded) for the 3-measurement setting, a larger range than that for the 2-measurement setting, which gives $1/\sqrt{2}<p\leq 1$ (black dotted). Increasing the number of measurements $n$ can relax the $p$ value for demonstrating steering~\cite{Saunders, bennet12}. For example, when implementing infinite measurement settings, the steerability inequality can be violated for $1/2<p\leq 1$~\cite{Jones07_2}. Fig.~\ref{Fig_2}c shows that the state is demonstrated to be Bell nonlocal for $0.7071<p\leq 1$. Unlike the steering and entanglement scenarios, increasing the number of measurements to three does not, however, relax the choice of the $p$ parameter.
Bell nonlocality can be verified for $4/5<p\leq 1$ using the $I_{3322}$ inequality, as reported in Ref.~\cite{BE3322}.
Figure~\ref{Fig_3} summarizes the bounds for violating the LHS inequalities for entanglement, steering and Bell nonlocality. We here consider $\mathcal{F}^n_k$ for the $n=2,3$ measurement settings, for infinite measurement settings, and for the QST measurement. The regimes of the $p$ parameter obeying the LHS models are grayed, while the regimes of certified entanglement, steerability, and Bell nonlocality are colored. In the $\mathcal{F}^{465}_3$ bar, the red regime was estimated with 465 measurement settings~\cite{BNnm1}, and the black one refers to an unknown regime~\cite{AGT_06}.
\begin{figure}\label{Fig_3}
\end{figure}
\section{Conclusions}
In summary, we formulate a single uncertainty relation under the LHS-LHS model, through which different kinds of nonlocal correlations can be discriminated. This is a major improvement over the previously used uncertainty relations based on different theoretical models, e.g., LHS-LHS for entanglement, LHS-LHV for steering and LHV-LHV for Bell nonlocal correlation. We also show that different nonlocal quantum correlations can be characterized by a physical property, i.e., the complete local description of one of the subsystems, which is quantified by the local uncertainty relation conditioned on the outcomes of the subsystems. The violation of the local uncertainty relation confirms the nonlocal correlation between the subsystems. When the uncertainty of the condition is increased by restricting the communication between the two parties, the local uncertainty relation detects a stronger form of nonlocal quantum correlation. Therefore, the uncertainty of the local description of one of the subsystems can be interpreted as nonlocal correlation between the subsystems. As an example, in experiment, we have tested the uncertainty of local descriptions and the quantum correlation of subsystems prepared in the bipartite Werner states. The framework presented in this work may open new possibilities for the interpretation of quantum correlation with respect to other fundamental properties of multipartite systems.
\section{DATA AVAILABILITY} The main data supporting the findings of this study are available within the article. Additional data can be provided upon request.
\section{ACKNOWLEDGMENTS} We acknowledge support from the Natural Science Foundation of China (nos 61975001, 61590933, 61904196, 61675007, 11975026), the National Key Research and development (R$\&$D) Program of China (2019YFA0308702, 2018YFB1107205, 2016YFA0301302), Beijing Natural Science Foundation (Z190005), and Key R$\&$D Program of Guangdong Province (2018B030329001). T.P. thanks Guruprasad Kar for useful discussion.\\
\section{AUTHOR CONTRIBUTIONS}
J.W. conceived the project. T.P., X.C., Y.X., X.L., J.M., J.B., Y.D., and T.D. built the setup and carried out the experiment. Y.Y., B.T., and Z.L. fabricated the device. T.P., X.C., Y.X., X.L., J.W. and Q.H. performed the theoretical analysis. Q.H., Q.G., and J.W. managed the project. All authors discussed the results and contributed to the manuscript.
\section{COMPETING INTERESTS} The authors declare no competing interests.
\section{ADDITIONAL INFORMATION} Correspondence and requests for materials should be addressed to T.P. Emails to: [email protected].
\onecolumngrid \appendix
\section{Complete local description of quantum system} \label{Apdx_Complete}
In the seminal work~\cite{EPR}, Einstein et al. discussed the incompleteness of quantum theory in the presence of entanglement. J. Bell extended this work by considering quantum theory supplemented by a hidden variable, say $\lambda$, distributed according to $\{P(\lambda)\}$, where the restrictions on $\lambda$ are $P(\lambda) \geq 0$ and $\sum_{\lambda} P(\lambda)=1$~\cite{Bell}. They showed that in the presence of entanglement between the systems $A$ and $B$, \begin{eqnarray}
|\psi\rangle_{AB} = \frac{|01\rangle - |10\rangle}{\sqrt{2}}, \label{Singlet_State} \end{eqnarray} quantum theory can be incomplete~\cite{EPR, Bell}. The incompatibility revealed by the violation of a Bell inequality by the correlation $\mathcal{P}$ of Eq.~(\ref{Exp_Prob_SM}) is known as Bell nonlocality, and the corresponding correlation is known as Bell nonlocal correlation. There are other well-known nonlocal quantum correlations, e.g., entanglement and steering. These nonlocal correlations have been explained with respect to trusting-untrusting scenarios~\cite{Jones07_1,Jones07_2}.
In this work, we have revisited the idea introduced by Einstein et al.~\cite{EPR} and Bell~\cite{Bell} by considering the following question: when the systems $A$ and $B$ are quantumly correlated, is there any {\it complete local description} of the subsystems? Interestingly, if two systems $A$ and $B$ are in a separable state of the form \begin{eqnarray} \rho_{AB}^{\text{LHS}} = \sum_{i} p_i \rho^A_i\otimes \rho^B_i, \label{LHS_State_SM} \end{eqnarray} where $\rho^A_i$ ($\rho^B_i$) is Alice's (Bob's) local state, $p_i\geq 0$, and $\sum_{i} p_i=1$, the individual system $A$ ($B$) has a {\it complete local description}, $\{p_i,\, \rho^{A}_i\}$ ($\{p_i,\, \rho^{B}_i\}$), i.e., the systems $A$ and $B$ do not share quantum correlation. For the verification of the nonlocal correlation of the bipartite state $\rho_{AB}$, let us consider the following game. In this game, Alice prepares two quantum systems $A$ and $B$ in an entangled state $\rho_{AB}$, and sends the system $B$ to Bob. Bob thinks that Alice may cheat him by preparing the systems $A$ and $B$ in a separable state of the form of Eq.~(\ref{LHS_State_SM}). For the verification of the nonlocal correlation of the shared state $\rho_{AB}$, Bob asks Alice to reduce his uncertainty about the state of the system $B$ for the measurement of observables chosen randomly from the set of non-commuting observables $\{\mathcal{B}_j\}$. Therefore, communication of $k$-cbit (classical-bit) information is required from Bob to Alice. According to Bob's information, Alice measures $\mathcal{A}_i$ on her system and communicates the measurement outcome $a$ and the choice of observable $\mathcal{A}_i$. From the information $\{a,\,\mathcal{A}_i\}$, Bob constructs the joint probability distribution \begin{eqnarray} \mathcal{P}=\left\{P(a_{\mathcal{A}_i}, b_{\mathcal{B}_j}| \rho_{AB}) = \text{Tr}\left[\left(\Pi^{\mathcal{A}_i}_a\otimes\Pi^{\mathcal{B}_j}_b\right) \rho_{AB}\right]\right\} \label{Exp_Prob_SM} \end{eqnarray}
and checks the uncertainty of the {\it condition} characterized by $V_k(a,b|i,j)$. $V_k(a,b|i,j)$ corresponds to the desired correlation between the outcomes $a$ and $b$ for the measurement of the observables $\mathcal{A}_i$ and $\mathcal{B}_j$, and $k\in\{1,\,2,\,3\}$ corresponds to three different conditions. The {\it local uncertainty} relation associated with the condition $V_k(a,b|i,j)$ becomes \begin{eqnarray}
\mathcal{F}_k^n = \sum_{i,j=0}^{n-1}\sum_{a,b=0}^1 V_k(a,b|i,j) P(a_{\mathcal{A}_i},b_{\mathcal{B}_j}|\rho_{AB}) \leq \mathcal{C}_k^n, \label{G_UR_SM} \end{eqnarray} where $n$ is the number of observables chosen by Alice and Bob, and $k\in\{1,\,2,\,3\}$ corresponds to three different conditions for the discrimination of three different nonlocal correlations (i.e., entanglement, steering and Bell nonlocal correlation). The upper bound $\mathcal{C}_k^n$ is obtained by maximizing $\mathcal{F}_k^n$ over the shared state $\rho_{AB}^{\text{LHS}}$ and all of Alice's possible strategies. The violation of the inequality~(\ref{G_UR_SM})
implies that the shared state $\rho_{AB}$ can not be written in the form of $\rho_{AB}^{\text{LHS}}$ of Eq.~(\ref{LHS_State_SM}) and Bob's system does not have a {\it complete local description} under the considered $V_k(a,b|i,j)$. As a result, Bob validates the nonlocal correlation of the shared state $\rho_{AB}$.
\section{Incomplete local description of the system $B$ due to the presence of entanglement} \label{Apdx_Ent}
\begin{figure}\label{Fig_Ent_SM}
\end{figure}
{\it $LHS_1^n$ : } Fig.~(\ref{Fig_Ent_SM}) describes the schematic diagram to certify the entanglement of the shared state $\rho_{AB}$. For the verification of entanglement of the shared state $\rho_{AB}$, Bob checks the uncertainty of the condition \begin{eqnarray}
V_1(a,\,b|i,\,j) = (-1)^{a+b} \delta_{\mathcal{A}_i,\,\mathcal{B}_j}\,\delta_{a\,\overline{b}}, \label{V_Ent_SM} \end{eqnarray}
where $\delta_{\mathcal{A}_i,\,\mathcal{B}_j}=1$ and $\delta_{a\,\overline{b}}=1$ only for $\mathcal{A}_i=\mathcal{B}_j$ and $a=\overline{b}=b\oplus 1$, respectively. In simple words, Bob checks the uncertainty of the anti-correlation of their outcomes, $a\oplus b=1$, when they measure the same observable $\mathcal{A}_i=\mathcal{B}_j$ on their respective systems. As a result, Bob needs to send the information $\{b,\,\mathcal{B}_j\}$ to Alice, which requires 2~cbits ($\log_2 4$, corresponding to the four different combinations of two different outcomes and two measurement settings) or 2.58~cbits ($\log_2 6$, corresponding to the six different combinations of two different outcomes and three measurement settings) for the choice of two or three measurement settings, respectively. In the case of the verification condition $V_1(a,\,b|i,\,j)$ of Eq.~(\ref{V_Ent_SM}), the inequality~(\ref{G_UR_SM}) becomes \begin{eqnarray} \mathcal{F}_1^n &=& \sum_{i=0}^{n-1} P(0_{\mathcal{A}_i},\,1_{\mathcal{B}_i}) + P(1_{\mathcal{A}_i},\,0_{\mathcal{B}_i}) \leq \max_{\rho_{AB}^{\text{LHS}}}\left[\mathcal{F}_1^n\right]= \mathcal{C}_1^n \label{LHS_Ent_SM} \end{eqnarray}
where the maximization is taken over all possible choices of $\rho_{AB}^{\text{LHS}}$. For the choice of two measurement settings, say, $\mathcal{A}_1=\mathcal{B}_1=\sigma_x$ and $\mathcal{A}_2=\mathcal{B}_2=\sigma_z$, the upper bound of the inequality~(\ref{LHS_Ent_SM}) becomes $\mathcal{C}_1^{n=2}=1$. In the case of three measurement settings, $\mathcal{A}_1=\mathcal{B}_1=\sigma_x$, $\mathcal{A}_2=\mathcal{B}_2=\sigma_y$, $\mathcal{A}_3=\mathcal{B}_3=\sigma_z$, $\mathcal{C}_1^{n=3} =2$. In this scenario, the optimal cheating strategy for Alice corresponds to $\mathcal{F}_1^n=\mathcal{C}_1^n$, and the system $B$ has a complete local description $\in\{|0\rangle, |1\rangle, \sqrt{\alpha} |0\rangle + \sqrt{1-\alpha}|1\rangle\}$. Note here that the above complete local description is not unique. The inequality~(\ref{LHS_Ent_SM}) is the {\it local uncertainty relation}, where `{\it local}' signifies that the uncertainty relation is satisfied by quantum systems having a {\it complete local description} as shown in Eq.~(\ref{LHS_State_SM}). From the violation of the uncertainty relation~(\ref{LHS_Ent_SM}), Bob validates the nonlocal correlation between the systems $B$ and $A$, and the corresponding nonlocal correlation is called entanglement~\cite{St_Ent1, St_Ent2, St_Ent3}. The violation of the inequality~(\ref{LHS_Ent_SM}) is a sufficient criterion for the verification of entanglement. Alice's knowledge about Bob's strategy makes the criterion~(\ref{LHS_Ent_SM}) the weakest. As a result, entanglement is the weakest nonlocal correlation.
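A minimal sketch of where this bound comes from (our illustration, assuming qubit observables and a Bloch-vector parametrization): for a pure product state with Bloch vectors $\vec{a}$ and $\vec{b}$, the anti-correlation probability for the setting $\sigma_i$ is $(1-a_i b_i)/2$, so that
\begin{eqnarray}
\sum_{i\in\{x,y,z\}}\frac{1-a_i b_i}{2}=\frac{3}{2}-\frac{\vec{a}\cdot\vec{b}}{2}\leq 2, \nonumber
\end{eqnarray}
with equality for antiparallel unit Bloch vectors (e.g., Bob in $|0\rangle$ and Alice in $|1\rangle$); by convexity the same bound holds for any mixture of the form~(\ref{LHS_State_SM}), consistent with $\mathcal{C}_1^{n=3}=2$ stated above.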
\section{Incomplete local description of the system $B$ due to the presence of steerability} \label{Apdx_Steer}
\begin{figure}\label{Fig_Steer_SM}
\end{figure}
{\it $LHS_2^n$ : } Fig.~(\ref{Fig_Steer_SM}) describes the scenario to verify the steerability of the shared state $\rho_{AB}$. To verify steerability of the shared state $\rho_{AB}$, Bob checks the uncertainty of the condition \begin{eqnarray}
V_2(a,\,b|i,\,j) = (-1)^{a+b} \delta_{i,\, j}, \label{V_St_SM} \end{eqnarray}
which corresponds to Bob's residual uncertainty associated with the measurement of the non-commuting observables $\{\mathcal{B}_i\}$, given the knowledge of $\{a,\,\mathcal{A}_i\}$. For the condition $V_2(a,\,b|i,\,j)$ of Eq.~(\ref{V_St_SM}), the uncertainty relation~(\ref{G_UR_SM}) becomes
\begin{eqnarray}
\mathcal{F}_2^n=\sum_{i=0}^{n-1}|\langle \mathcal{A}_i\,\mathcal{B}_i\rangle| \leq C_2^n=\max_{\{\mathcal{A}_i\},\rho_{AB}^{\text{LHS}}}\left[\mathcal{F}_2^n\right], \label{LHS_Steer_SM} \end{eqnarray}
where the maximization is taken over all possible choices of $\{\mathcal{A}_0,\,\mathcal{A}_1,\, \cdots, \mathcal{A}_{n-1}\}$ and $\rho_{AB}^{\text{LHS}}$. When Bob randomly chooses an observable from the set of two (three) non-commuting observables, say, $\{\sigma_x, \,\sigma_z\}$ ($\{\sigma_x, \,\sigma_y,\,\sigma_z\}$), the upper bound $C_2 = \sqrt{2}$ ($C_3=\sqrt{3}$) occurs when Bob's system $B$ has a complete local description by the eigenstates of the observables $\frac{\sigma_x\pm\sigma_z}{\sqrt{2}}$ ($\frac{\sigma_x\pm\sigma_y\pm\sigma_z}{\sqrt{3}}$). The violation of the inequality~(\ref{LHS_Steer_SM}) implies a nonlocal correlation between Bob's system and Alice's system, and it is known as steering~\cite{Saunders, bennet12, FgSt}. Note here that the uncertainty relation~(\ref{LHS_Ent_SM}) is a weaker form of the uncertainty relation~(\ref{LHS_Steer_SM}), as inequality~(\ref{LHS_Ent_SM}) deals with the uncertainty of $a\oplus b =1$ while inequality~(\ref{LHS_Steer_SM}) corresponds to the uncertainty of both the conditions $a\oplus b =1$ and $a\oplus b =0$. Therefore, the violation of the uncertainty relation~(\ref{LHS_Steer_SM}) indicates a stronger nonlocal correlation than the nonlocal correlation characterized by the uncertainty relation~(\ref{LHS_Ent_SM}). Using $\langle\mathcal{A}_i\mathcal{B}_i\rangle = Ps\left(\mathcal{A}_i,\,\mathcal{B}_i\right) - Pd\left(\mathcal{A}_i,\,\mathcal{B}_i\right)$ (where $Ps\left(\mathcal{A}_i,\,\mathcal{B}_i\right)= P(0_{\mathcal{A}_i},\,0_{\mathcal{B}_i}) + P(1_{\mathcal{A}_i},\,1_{\mathcal{B}_i})$) and $Pd\left(\mathcal{A}_i,\,\mathcal{B}_i\right) + Ps\left(\mathcal{A}_i,\,\mathcal{B}_i\right)=1$, inequality~(\ref{LHS_Steer_SM}) becomes \begin{eqnarray}
|2\sum_{i=1}^{n} Pd\left(\mathcal{A}_i,\,\mathcal{B}_i\right) - n| \leq C_n. \label{LHS_Steer_2_SM} \end{eqnarray} Violation of the above inequality indicates the violation of the inequality~(\ref{LHS_Ent_SM}), but the reverse is not true. Therefore, all steerable states are entangled, and steerability is a stronger form of nonlocal correlation than entanglement.
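A brief sketch of how the bound $C_3=\sqrt{3}$ arises (our illustration, assuming Bob's observables are $\sigma_x,\,\sigma_y,\,\sigma_z$ and writing $b_i^\lambda$ for the Bloch components of Bob's hidden states in~(\ref{LHS_State_SM}), with $\lambda$ used as the hidden-state index to avoid confusion with the setting index $i$): since $\langle\mathcal{A}_i\,\mathcal{B}_i\rangle=\sum_\lambda p_\lambda \langle\mathcal{A}_i\rangle_\lambda\, b_i^\lambda$ with $|\langle\mathcal{A}_i\rangle_\lambda|\leq 1$,
\begin{eqnarray}
\sum_{i}|\langle\mathcal{A}_i\,\mathcal{B}_i\rangle| \leq \sum_\lambda p_\lambda \left(|b_x^\lambda|+|b_y^\lambda|+|b_z^\lambda|\right) \leq \max_{|\vec{b}|\leq 1}\left(|b_x|+|b_y|+|b_z|\right)=\sqrt{3}, \nonumber
\end{eqnarray}
saturated by $\vec{b}=(\pm1,\pm1,\pm1)/\sqrt{3}$, i.e., by the eigenstates of $(\sigma_x\pm\sigma_y\pm\sigma_z)/\sqrt{3}$ quoted above; the two-setting bound $C_2=\sqrt{2}$ follows in the same way.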
Note that the violation of inequality~(\ref{LHS_Steer_SM}) is a sufficient criterion for steerability, and it becomes more efficient at capturing the steerability of a given state as the number of measurement settings $n$ increases~\cite{Saunders, bennet12}. If inequality~(\ref{LHS_Steer_SM}) is satisfied, however, unsteerability can not be concluded.
The unsteerability of a given state can be verified from sufficient criteria for unsteerability~\cite{USt1}, but such criteria do not certify steerability.
\section{Incomplete local description of the system $B$ due to the presence of Bell nonlocality} \label{Apdx_Bell}
\begin{figure}\label{Fig_BI_SM}
\end{figure}
{\it $LHS_3^n$ : } Fig.~(\ref{Fig_BI_SM}) describes the scenario to validate the Bell nonlocal correlation of the shared state $\rho_{AB}$. In the case of Bell nonlocality, Bob keeps secret the choice of the observable $\mathcal{B}_i$ to be measured on the system $B$. In the case of two measurement settings, he checks the uncertainty of the condition \begin{eqnarray}
V_3(a,\,b|i,\,j) = (-1)^{(a+b+ij)}, \label{V_BI2_SM} \end{eqnarray}
which corresponds to the winning condition of the CHSH game~\cite{Bell,CHSH,Oppen}. The uncertainty relation~(\ref{G_UR_SM}) for the choice of $V_3(a,\,b|i,\,j)$ of Eq.~(\ref{V_BI2_SM}) becomes \begin{eqnarray}
\mathcal{F}_3^2 \leq \max_{\{\mathcal{A}_i\},\,\rho_{AB}^{\text{LHS}}}\left[ \mathcal{F}_3^2\right]= 2, \label{LHS_BI_2_SM} \end{eqnarray}
where $\mathcal{F}_3^2=|\langle\mathcal{A}_0\left(\mathcal{B}_0+\mathcal{B}_1\right)\rangle + \langle\mathcal{A}_1\left(\mathcal{B}_0-\mathcal{B}_1\right)\rangle|$ corresponds to Bob's residual uncertainty of the set of non-commuting observables $\{\mathcal{B}_0,\,\mathcal{B}_1\}$ from the knowledge of the individual $\{a,\,\mathcal{A}_i\}$ with $i\in\{0,\,1\}$. The above inequality is a necessary and sufficient criterion for Bell nonlocality in the 2-measurement-setting and 2-outcome scenario~\cite{Horo_1995}. The equality $\mathcal{F}_3^2=2$ occurs when Bob's system has a complete local description by $\{|0\rangle,\,|1\rangle,\, \sqrt{\alpha} |0\rangle+ \sqrt{1-\alpha}|1\rangle\}$. Note that the above local description is also not unique.
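A short sketch of this bound (our illustration, assuming $\mathcal{B}_0=\sigma_x$, $\mathcal{B}_1=\sigma_z$ and writing $b_x^\lambda,\,b_z^\lambda$ for the Bloch components of Bob's hidden states in~(\ref{LHS_State_SM}), indexed by $\lambda$): since $|\langle\mathcal{A}_i\rangle_\lambda|\leq 1$,
\begin{eqnarray}
\mathcal{F}_3^2 \leq \sum_\lambda p_\lambda \left(|b_x^\lambda + b_z^\lambda| + |b_x^\lambda - b_z^\lambda|\right) = \sum_\lambda p_\lambda\, 2\max\left(|b_x^\lambda|,\,|b_z^\lambda|\right) \leq 2, \nonumber
\end{eqnarray}
with equality when each hidden state of Bob is a $\sigma_x$ or $\sigma_z$ eigenstate, e.g., $|0\rangle$, as stated above.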
\begin{figure}
\caption{ Different nonlocal quantum correlations. Bell nonlocal correlation is the strongest form of all nonlocal correlations, and entanglement is the weakest while steering lies between Bell nonlocal correlation and entanglement. }
\label{Fig_TC}
\end{figure}
Recent developments improve Bell inequalities with $n$ measurement settings per side~\cite{BNnm1, BE3322, BI31, BI33}. Among them, the Bell inequality given in Ref.~\cite{BE3322} belongs to a class inequivalent to the Bell-CHSH inequality, and it can detect entangled states that are Bell-CHSH local. In particular, for the Werner state, a Bell inequality with $465$ measurement settings per side has been developed~\cite{BNnm1}, which can show Bell nonlocality of the Werner state for $0.7056<p\leq 1$. This inequality~\cite{BNnm1} gives the tightest known Bell violation threshold for the Werner state, slightly improving on the Bell-CHSH inequality. Therefore, the Bell-CHSH inequality~(\ref{LHS_BI_2_SM}) works better than the existing 3-setting Bell inequality. Here, for three measurement settings, we consider the Bell inequality given in Ref.~\cite{BE3322} as an example. Bob checks the following condition \begin{eqnarray}
V_3(a,\,b|i,\,j) =(-1)^{a\oplus b=\max[0,\,i\cdot j-1]} (1-\delta_{i,3}\delta_{j,3}) + (-1)^a\,(1-\delta_{i,3}) - (-1)^b (1-\delta_{j,3}). \label{V_BI3_SM} \end{eqnarray} Under this condition, the inequality~(\ref{G_UR_SM}) becomes \begin{eqnarray} \mathcal{F}_3^3 \leq \max_{\{\mathcal{A}_i\},\,\rho_{AB}^{\text{LHS}}}\left[ \mathcal{F}_3^3\right]= 4, \label{LHS_BI_3_SM} \end{eqnarray}
where $\mathcal{F}_3^3= | \langle\mathcal{A}_1\left(\rm{I} + \mathcal{B}_1 + \mathcal{B}_2 + \mathcal{B}_3\right)\rangle +\langle\mathcal{A}_2\left(\rm{I}+\mathcal{B}_1 +\mathcal{B}_2 - \mathcal{B}_3\right)\rangle + \langle\mathcal{A}_3\left(\mathcal{B}_1 -\mathcal{B}_2\right)\rangle - \langle\mathcal{B}_1\rangle - \langle\mathcal{B}_2\rangle |$, and Bob chooses observables randomly from the set $\{\mathcal{B}_1=\sigma_z,\,\mathcal{B}_2=\sin(\frac{\pi}{3})\sigma_x+\cos(\frac{\pi}{3})\sigma_z,\,\mathcal{B}_3=\sin(\frac{2\pi}{3})\sigma_x+\cos(\frac{2\pi}{3})\sigma_z\}$. In this scenario, $|0\rangle$ becomes one of the complete local descriptions of the system $B$ corresponding to $\mathcal{F}_3^3=4$. The inequality~(\ref{LHS_BI_3_SM}) detects Bell nonlocality of the Werner state~(\ref{W_State_SM}) for $p>0.8$, whereas the inequality~(\ref{LHS_BI_2_SM}) confirms Bell nonlocality for $p>1/\sqrt{2}$.
The Bell-CHSH inequality~(\ref{LHS_BI_2_SM}) works better for the two-qubit Werner state than existing Bell inequalities in the three-measurement-setting, two-outcome scenario~\cite{BE3322, BNnm1, BI31, BI33}. Note that, mathematically, inequality~(\ref{LHS_BI_2_SM}) can be written as \begin{eqnarray}
BI_2= |\langle \mathcal{A}_1^\prime \mathcal{B}_1\rangle + \langle \mathcal{A}_2^\prime \mathcal{B}_2\rangle| \leq \sqrt{2}, \end{eqnarray} where $\mathcal{A}_1^\prime=\frac{\mathcal{A}_1+\mathcal{A}_2}{\sqrt{2}}$ and $\mathcal{A}_2^\prime=\frac{\mathcal{A}_1-\mathcal{A}_2}{\sqrt{2}}$. Therefore, the violation of inequality~(\ref{LHS_BI_2_SM}) implies the violation of inequality~(\ref{LHS_Steer_SM}), while the reverse is not true. Therefore, all Bell nonlocal states are steerable. Unlike steerability, Bell nonlocality includes the uncertainty of the choice of Bob's observables from the set $\{\mathcal{B}_1,\,\mathcal{B}_2, \cdots, \mathcal{B}_n\}$, which makes Bell nonlocal correlation a stronger form of nonlocal correlation than steerability and entanglement. A complete picture of all the correlations is shown in Fig.~(\ref{Fig_TC}), which represents that the set of Bell nonlocal states forms a subset of both the steerable and the entangled states, while the steerable states themselves form a subset of the entangled states.
\begin{figure}\label{FigTomo}
\end{figure}
\section{Experimental setup and demonstration} \label{Apdx_Werner}
Fig.~3 (in the main text) shows the experimental setup, designed on a silicon photonic chip, which is used to demonstrate different quantum correlations from the violation of the different forms of uncertainty relations~(\ref{LHS_Ent_SM}, \ref{LHS_Steer_SM}, \ref{LHS_BI_2_SM}). Here, a pair of path-encoded entangled photons in the state $ |\psi^{+}\rangle$ ($= (|00\rangle + |11\rangle)/\sqrt{2}$) is generated via the spontaneous four-wave mixing (SFWM) process, by pumping a continuous-wave laser at 1550.12nm on two spiral waveguides (single photon-pair sources).
The signal photon (1545.31nm, shown in red) is assumed to be system $A$ (belonging to Alice) while the idler photon (1554.91nm, shown in green) is assumed to be system $B$ (belonging to Bob). The expectation value $\langle\mathcal{A}\,\mathcal{B}\rangle_{|\psi^+\rangle}$, \begin{eqnarray}
\langle\mathcal{A}\,\mathcal{B}\rangle_{|\psi^+\rangle} = \langle\psi^+|\mathcal{A} \otimes \mathcal{B} | \psi^+\rangle \label{Expt_psi+} \end{eqnarray}
can be experimentally observed from coincidence detection \begin{eqnarray} P(a_\mathcal{A},\,b_{\mathcal{B}}) = \frac{C(a_\mathcal{A},\,b_{\mathcal{B}})}{\displaystyle\sum_{\{a,\,b\}=0}^1 C(a_\mathcal{A},\,b_{\mathcal{B}})}, \label{Corr_psi+} \end{eqnarray}
for the measurement of $\mathcal{A}$ and $\mathcal{B}$ on the systems $A$ and $B$ prepared in the state $|\psi^+\rangle$, respectively. Here, $C(a_\mathcal{A},\,b_{\mathcal{B}})$ represents the coincidence counts for the outcomes $a$ and $b$.
In experiment, the direct generation of the two-qubit Werner state $\rho_W$ is still challenging, as it requires the mixture of a Bell state and the maximally mixed state, which can be described by \begin{eqnarray}
\rho_W &=&p\rho_{|\phi^-\rangle}+(1-p)\,\frac{\rm{I}\otimes \rm{I}}{4} \nonumber \\
&=& \frac{\alpha\, (\rm{I}\otimes \sigma_x) \,\rho_{|\psi^{+}\rangle}\,(\rm{I}\otimes \sigma_x) + \beta\, (\rm{I}\otimes \, \sigma_y) \,\rho_{|\psi^{+}\rangle}\,(\rm{I}\otimes \, \sigma_y) + \gamma \,\rho_{|\psi^{+}\rangle} + \delta \, (\rm{I}\otimes \sigma_z) \,\rho_{|\psi^{+}\rangle}\,(\rm{I}\otimes \sigma_z)}{\alpha+\beta+\gamma+\delta}, \label{W_State_SM} \end{eqnarray}
where $\frac{\alpha}{\alpha+\beta+\gamma+\delta}=\frac{1+3p}{4}$, $\beta=\delta=\gamma=\tfrac{1-p}{4} (\alpha+\beta+\gamma+\delta)$, and $\rho_{|\phi^-\rangle}$ and $\rho_{|\psi^{+}\rangle}$ are the density matrices of the states $|\phi^-\rangle$ ($=(|01\rangle-|10\rangle)/\sqrt{2}$) and $|\psi^{+}\rangle$, respectively. The expectation value $\langle\mathcal{A}\,\mathcal{B}\rangle_{\rho_W}$ for the Werner state $\rho_W$ has been experimentally realized from the statistical mixture of the identity ($\rm{I}$) and the Pauli rotations ($\sigma_x$,\,$\sigma_y$,\, $\sigma_z$) applied to the observable $\mathcal{B}$, as given by \begin{eqnarray}
\langle\mathcal{A}\,\mathcal{B}\rangle_{\rho_w} = \frac{\alpha \langle\mathcal{A}\,\mathcal{B}_x\rangle_{|\psi^+\rangle} + \beta \langle\mathcal{A}\,\mathcal{B}_y\rangle_{|\psi^+\rangle} + \gamma \langle\mathcal{A}\,\mathcal{B}\rangle_{|\psi^+\rangle} + \delta \langle\mathcal{A}\,\mathcal{B}_z\rangle_{|\psi^+\rangle}}{\alpha+\beta+\gamma+\delta}, \end{eqnarray} where $\mathcal{B}_i=\sigma_i\,\mathcal{B}\,\sigma_i$, $i\in\{x,\,y,\,z\}$. The $\langle\mathcal{A}\,\mathcal{B}\rangle_{\rho_W}$ is calculated from the experimentally observed data for $p=0.0,0.2,0.4,0.6,0.8,1.0$, and the corresponding mixing weights $\{\alpha, \, \beta\}$ are \begin{eqnarray} p&&=0.0\rightarrow \{\alpha=5,\beta=5\}, \hspace{1cm} p=0.6\rightarrow \{\alpha=14,\beta=2\}, \nonumber \\ p&&=0.2\rightarrow \{\alpha=8,\beta=4\}, \hspace{1cm} p=0.8\rightarrow \{\alpha=17,\beta=1\}, \\ p&&=0.4\rightarrow \{\alpha=11,\beta=3\}, \hspace{1cm} p=1.0\rightarrow \{\alpha=20,\beta=0\}. \nonumber \end{eqnarray}
For example, $p=0.0$ is calculated from the 5 sets of data of each $\langle \mathcal{A}\,\mathcal{B}_i\rangle_{|\psi^+\rangle}$. In the experiment, the Bell state $|\psi^{+}\rangle$ has been generated with a fidelity of $0.951\pm0.096$, and the details of the quantum state tomography (QST) are shown in Fig.~\ref{FigTomo}. The error bars in terms of the standard deviation have been calculated from the 20 sets of data and are of the order of $10^{-2}$ for entanglement, steering and Bell nonlocality.
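As a quick consistency check of the listed mixing weights (simple arithmetic; note that $\gamma=\delta=\beta$ and $\alpha+3\beta=20$ in each case), for $p=0.4$ one has
\begin{eqnarray}
\frac{\alpha}{\alpha+\beta+\gamma+\delta}=\frac{11}{20}=0.55=\frac{1+3\times 0.4}{4}, \qquad \frac{\beta}{\alpha+\beta+\gamma+\delta}=\frac{3}{20}=0.15=\frac{1-0.4}{4}, \nonumber
\end{eqnarray}
and similarly for the other values of $p$, confirming that the integer weights realize the convex combination required by Eq.~(\ref{W_State_SM}).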
\end{document}
\begin{document}
\title{Singular equivalence and the (Fg) condition} \author{\O{}ystein Skarts\ae{}terhagen} \address{Institutt for matematiske fag, NTNU \\ N-7491 Trondheim \\ Norway} \email{[email protected]} \date{\today}
\keywords{ Finite generation condition, Hochschild cohomology, singular equivalences}
\subjclass[2010]{ Primary 16E40,
16E65, 18E30; Secondary 16G10 }
\begin{abstract} We show that singular equivalences of Morita type with level between finite-dimensional Gorenstein algebras over a field preserve the \fgtext{(Fg)}~condition. \end{abstract}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} \label{sec:introduction}
Throughout the paper, we let $k$ be a fixed field.
Support varieties for modules over a group algebra $kG$ were introduced by J.~F.~Carlson in~\cite{carlson}, using the group cohomology ring $\operatorname{H}^*(G,k)$. Later, Snashall and Solberg~\cite{ss} defined support varieties for modules over an arbitrary finite-dimensional $k$-algebra $\Lambda$, using the Hochschild cohomology ring $\HH*(\Lambda)$.
We say that a finite-dimensional $k$-algebra $\Lambda$ satisfies the \fgtext{(Fg)}~condition if the Hochschild cohomology ring $\HH*(\Lambda)$ of $\Lambda$ is Noetherian and the Yoneda algebra $\Ext_\Lambda^*(M,M)$ is a finitely generated $\HH*(\Lambda)$-module for every finitely generated $\Lambda$-module $M$ (for more details, see the definition in Section~\ref{sec:fg}). It was shown in~\cite{ehsst} that many of the results for support varieties over a group algebra also hold for support varieties over a selfinjective algebra which satisfies the \fgtext{(Fg)}~condition. We can thus think of the \fgtext{(Fg)}~condition as a criterion for deciding whether a given algebra has a nice theory of support varieties.
It is therefore interesting to investigate whether the \fgtext{(Fg)}~condition holds for various algebras, and to find out which relations between algebras preserve the \fgtext{(Fg)}~condition. This question has been considered in~\cite{pss} for algebras whose module categories are related by a recollement of abelian categories, and in~\cite{kps} for derived equivalence of algebras. In this paper, we consider singular equivalence of algebras.
The \defterm{singularity category} $\mathsf{D}_\mathsf{sg}(\Lambda)$ of a $k$-algebra $\Lambda$, introduced by Buchweitz in~\cite{buchweitz}, is defined as the Verdier quotient \[ \mathsf{D}_\mathsf{sg}(\Lambda) = \mathsf{D}^\mathsf{b}(\fmod \Lambda)/\perf(\Lambda) \] of the bounded derived category $\mathsf{D}^\mathsf{b}(\fmod \Lambda)$ by the subcategory of perfect complexes. This is a triangulated category. We say that two $k$-algebras $\Lambda$ and~$\Sigma$ are \defterm{singularly equivalent} if there exists a triangle equivalence $f\colon \mathsf{D}_\mathsf{sg}(\Lambda) \to \mathsf{D}_\mathsf{sg}(\Sigma)$ between their singularity categories, and the functor~$f$ is then called a \defterm{singular
equivalence} between the algebras $\Lambda$ and~$\Sigma$.
The purpose of this paper is to investigate to what extent singular equivalences preserve the \fgtext{(Fg)}~condition. Since arbitrary singular equivalences are hard to work with and do not necessarily have nice properties, we restrict our attention to special classes of singular equivalences.
A \defterm{singular equivalence of Morita type} (introduced by Chen and Sun in~\cite{chen-sun}) between $k$-algebras $\Lambda$ and~$\Sigma$ is a singular equivalence \[ \mathsf{D}_\mathsf{sg}(\Lambda) \xrightarrow{N \tensor_\Lambda -} \mathsf{D}_\mathsf{sg}(\Sigma) \] which is induced by a tensor functor $N \tensor_\Lambda -$, where $N$ is a $\Sigma$--$\Lambda$ bimodule subject to some technical requirements. Wang~\cite{wang} has introduced a generalized version of singular equivalence of Morita type called \defterm{singular equivalence of
Morita type with level}. We recall the definitions of these two types of singular equivalences in Section~\ref{sec:semtl}. The question we want to answer in this paper is: Do singular equivalences of Morita type with level preserve the \fgtext{(Fg)}~condition?
All algebras that satisfy the \fgtext{(Fg)} condition are Gorenstein algebras (see Theorem~\ref{thm:fg=>gorenstein}), and singular equivalences of Morita type with level do not preserve Gorensteinness. Moreover, even if one of the algebras involved in a singular equivalence of Morita type with level satisfies the \fgtext{(Fg)}~condition, the other algebra does not need to be a Gorenstein algebra (see Example~\ref{ex:fg-not-preserved}). This means that the \fgtext{(Fg)}~condition is in general not preserved under singular equivalence of Morita type with level.
However, we can consider the question of whether it is only when one of the algebras is non-Gorenstein that such counterexamples arise. In other words, if we require all our algebras to be Gorenstein, is it then true that singular equivalences of Morita type with level preserve the \fgtext{(Fg)}~condition? The main result of this paper, Theorem~\ref{thm:main}, answers this question affirmatively: A singular equivalence of Morita type with level between finite-dimensional Gorenstein algebras over a field preserves the \fgtext{(Fg)}~condition. As a consequence of this, we obtain a similar statement for \emph{stable} equivalence of Morita type (Corollary~\ref{cor:stable-equivalence}), where we do not need the assumption of Gorensteinness.
The content of the paper is structured as follows.
In Section~\ref{sec:semtl}, we state the definitions of singular equivalence of Morita type and singular equivalence of Morita type with level, and look at some easily derived consequences.
In Section~\ref{sec:gorenstein-cm}, we begin to look at what more we can deduce from a singular equivalence of Morita type with level when the assumption of Gorensteinness is added. We recall the well-known result stating that the singularity category of a Gorenstein algebra is equivalent to the stable category of maximal Cohen--Macaulay modules. This implies that a singular equivalence \[ f\colon \mathsf{D}_\mathsf{sg}(\Lambda) \xrightarrow{\simeq} \mathsf{D}_\mathsf{sg}(\Sigma) \] between Gorenstein algebras $\Lambda$ and~$\Sigma$ gives an equivalence \[ g\colon \sCM(\Lambda) \xrightarrow{\simeq} \sCM(\Sigma) \] between their stable categories of maximal Cohen--Macaulay modules. We show that if the singular equivalence $f$ is of Morita type with level, and thus induced by a tensor functor, then the equivalence $g$ is induced by the same tensor functor.
In Section~\ref{sec:rot}, we consider certain maps of the form \[ \Ext_\Lambda^n(U, V) \to \Ext_\Lambda^n(\Omega_\Lambda^i(U), \Omega_\Lambda^i(V)), \] which we call rotation maps. We show that these maps are isomorphisms if the algebra~$\Lambda$ is Gorenstein and $n > \id_\Lambda \Lambda$. This means that in extension groups of sufficiently high degree over a Gorenstein algebra, we can replace both modules by syzygies. This result is used in the following three sections.
In Section~\ref{sec:ext}, we show that if we have a singular equivalence of Morita type with level \[ \mathsf{D}_\mathsf{sg}(\Lambda) \xrightarrow[N \tensor_\Lambda -]{\simeq} \mathsf{D}_\mathsf{sg}(\Sigma) \] between two Gorenstein algebras $\Lambda$ and~$\Sigma$, then we have isomorphisms \begin{equation} \label{eqn:intro-ext-iso} \Ext_\Lambda^n(A, B) \xrightarrow[N \tensor_\Lambda -]{\cong} \Ext_\Sigma^n(N \tensor_\Lambda A, N \tensor_\Lambda B) \qquad \text{(for $A$ and $B$ in $\fmod \Lambda$)} \end{equation} between extension groups over~$\Lambda$ and extension groups over~$\Sigma$, in all sufficiently large degrees~$n$. In the terminology of~\cite{pss}, this implies that a tensor functor inducing a singular equivalence of Morita type with level between Gorenstein algebras is an eventually homological isomorphism. The proof of this result builds on the result about stable categories of Cohen--Macaulay modules from Section~\ref{sec:gorenstein-cm}.
In Section~\ref{sec:hh}, we show that a singular equivalence of Morita type with level between Gorenstein algebras preserves Hochschild cohomology in almost all degrees. That is, if two Gorenstein algebras $\Lambda$ and~$\Sigma$ are singularly equivalent of Morita type with level, then there are isomorphisms \begin{equation} \label{eqn:intro-hh-iso} \HH{n}(\Lambda) \cong \HH{n}(\Sigma) \end{equation} for almost all~$n$, and these isomorphisms respect the ring structure of the Hochschild cohomology.
In Section~\ref{sec:fg}, we show the main result of the paper: A singular equivalence of Morita type with level between finite-dimensional Gorenstein algebras over a field preserves the \fgtext{(Fg)} condition. The main ingredients in the proof of this result are the isomorphism~\eqref{eqn:intro-ext-iso} of extension groups from Section~\ref{sec:ext} and the isomorphism~\eqref{eqn:intro-hh-iso} of Hochschild cohomology groups from Section~\ref{sec:hh}.
\subsection*{Acknowledgments}
I would like to thank \O{}yvind Solberg and Chrysostomos Psaroudakis for helpful discussions and suggestions. I would also like to thank Yiping Chen for informing me about the result stated in Corollary~\ref{cor:stable-equivalence} and how it follows from the main result of this paper.
\section{Singular equivalences of Morita type with level} \label{sec:semtl}
In this section, we recall the definitions we need regarding singular equivalences. We begin with the concept of singularity categories.
\begin{defn} Let $\Lambda$ be a $k$-algebra. The \defterm{singularity category} $\mathsf{D}_\mathsf{sg}(\Lambda)$ of~$\Lambda$ is a triangulated category defined as the Verdier quotient \[ \mathsf{D}_\mathsf{sg}(\Lambda) = \mathsf{D}^\mathsf{b}(\fmod \Lambda)/\perf(\Lambda) \] of the bounded derived category $\mathsf{D}^\mathsf{b}(\fmod \Lambda)$ by the subcategory of perfect complexes. We say that two algebras $\Lambda$ and~$\Sigma$ are \defterm{singularly equivalent} if their singularity categories $\mathsf{D}_\mathsf{sg}(\Lambda)$ and $\mathsf{D}_\mathsf{sg}(\Sigma)$ are equivalent as triangulated categories. A~triangle equivalence between $\mathsf{D}_\mathsf{sg}(\Lambda)$ and $\mathsf{D}_\mathsf{sg}(\Sigma)$ is called a \defterm{singular
equivalence} between the algebras $\Lambda$ and~$\Sigma$. \end{defn}
The singularity category of an algebra was first defined by Buchweitz in \cite[Definition~1.2.2]{buchweitz}. In his definition, the singularity category is called the \emph{stabilized derived category}, and it is denoted by $\underline{\mathsf{D}^\mathsf{b}(\Lambda)}$. Later, Orlov~\cite{orlov} used the same construction in algebraic geometry to define the \emph{triangulated category of singularities} of a scheme $X$, denoted $\mathbf{D}_{Sg}(X)$. We follow the recent convention of using Orlov's terminology and notation for algebras as well. The term \emph{singular equivalence} was introduced by Chen~\cite{chen}.
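To illustrate the definition, we record a standard observation (not needed in what follows): the singularity category measures the failure of finite global dimension, in the sense that \[ \mathsf{D}_\mathsf{sg}(\Lambda) = 0 \quad\Longleftrightarrow\quad \text{every module in $\fmod \Lambda$ has finite projective dimension.} \] Indeed, a bounded complex all of whose cohomology modules have finite projective dimension is perfect, and conversely a module which becomes zero in the Verdier quotient is a direct summand of a perfect complex and hence has finite projective dimension. In particular, the singularity category only detects modules of infinite projective dimension.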
Analogously to the special type of stable equivalences called \emph{stable equivalences of Morita type}, Chen and Sun have defined a special type of singular equivalences called \emph{singular
equivalences of Morita type} in their preprint~\cite{chen-sun}. This concept was further explored by Zhou and Zimmermann in~\cite{zz}.
\begin{defn} Let $\Lambda$ and~$\Sigma$ be finite-dimensional $k$-algebras, and let $M$ be a $\Lambda$--$\Sigma$ bimodule and $N$ a $\Sigma$--$\Lambda$ bimodule. We say that $M$ and~$N$ induce a \defterm{singular
equivalence of Morita type} between $\Lambda$ and~$\Sigma$ (and that $\Lambda$ and~$\Sigma$ are \defterm{singularly equivalent of Morita
type}) if the following conditions are satisfied: \begin{enumerate} \item $M$ is finitely generated and projective as a left $\Lambda$-module and as a right $\Sigma$-module. \item $N$ is finitely generated and projective as a left $\Sigma$-module and as a right $\Lambda$-module. \item There is a finitely generated $\e{\Lambda}$-module $X$ with finite projective dimension such that $M \tensor_\Sigma N \cong \Lambda \oplus X$ as $\e{\Lambda}$-modules. \item There is a finitely generated $\e{\Sigma}$-module $Y$ with finite projective dimension such that $N \tensor_\Lambda M \cong \Sigma \oplus Y$ as $\e{\Sigma}$-modules. \end{enumerate} \end{defn}
Notice that the definition is precisely the same as the definition of stable equivalence of Morita type, except that the modules $X$ and~$Y$ are not necessarily projective, but only have finite projective dimension. Thus stable equivalences of Morita type occur as a special case of singular equivalences of Morita type.
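A degenerate example may be helpful: if the bimodules $M$ and~$N$ induce a Morita equivalence between $\Lambda$ and~$\Sigma$, so that \[ M \tensor_\Sigma N \cong \Lambda \quad\text{and}\quad N \tensor_\Lambda M \cong \Sigma \] as bimodules, then the four conditions above are satisfied with $X = Y = 0$. Thus Morita equivalences give rise to stable equivalences of Morita type, and in particular to singular equivalences of Morita type.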
The following proposition shows that a singular equivalence of Morita type gives rise to a singular equivalence, thus justifying the name.
\begin{prop}\cite[Proposition 2.3]{zz} \label{prop:semt-equivalence} Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type between two $k$-algebras $\Lambda$ and $\Sigma$. Then the functors \[ N \tensor_\Lambda - \colon \mathsf{D}_\mathsf{sg}(\Lambda) \to \mathsf{D}_\mathsf{sg}(\Sigma) \qquad\text{and}\qquad M \tensor_\Sigma - \colon \mathsf{D}_\mathsf{sg}(\Sigma) \to \mathsf{D}_\mathsf{sg}(\Lambda) \] are equivalences of triangulated categories, and they are quasi-inverses of each other. \end{prop}
Inspired by the notion of singular equivalence of Morita type, Wang~\cite{wang} has defined a more general type of singular equivalence called \emph{singular equivalence of Morita type with
level}.
\begin{defn} Let $\Lambda$ and~$\Sigma$ be finite-dimensional $k$-algebras, and let $M$ be a $\Lambda$--$\Sigma$ bimodule and $N$ a $\Sigma$--$\Lambda$ bimodule. Let $l$ be a nonnegative integer. We say that $M$ and~$N$ induce a \defterm{singular equivalence of Morita type with level~$l$} between $\Lambda$ and~$\Sigma$ (and that $\Lambda$ and~$\Sigma$ are \defterm{singularly equivalent of Morita type with level~$l$}) if the following conditions are satisfied: \begin{enumerate} \item $M$ is finitely generated and projective as a left $\Lambda$-module and as a right $\Sigma$-module. \item $N$ is finitely generated and projective as a left $\Sigma$-module and as a right $\Lambda$-module. \item There is an isomorphism $M \tensor_\Sigma N \cong \Omega_{\e\Lambda}^l(\Lambda)$ in the stable category $\stmod \e\Lambda$. \item There is an isomorphism $N \tensor_\Lambda M \cong \Omega_{\e\Sigma}^l(\Sigma)$ in the stable category $\stmod \e\Sigma$. \end{enumerate} \end{defn}
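As an illustration, consider the case of level $l = 0$. Then $\Omega_{\e\Lambda}^0(\Lambda) = \Lambda$, and since an isomorphism in the stable module category of a finite-dimensional algebra amounts to an isomorphism up to projective direct summands, condition~(3) asks for an isomorphism \[ M \tensor_\Sigma N \oplus P \cong \Lambda \oplus Q \] of $\e\Lambda$-modules with $P$ and~$Q$ projective, and condition~(4) is interpreted analogously. For positive levels, the bimodule $M \tensor_\Sigma N$ is instead required to be a syzygy of~$\Lambda$, again up to projective direct summands.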
Just as in the case of singular equivalence of Morita type, the conditions in the definition of singular equivalence of Morita type with level are designed to ensure that the functors $N \tensor_\Lambda -$ and $M \tensor_\Sigma -$ induce singular equivalences.
\begin{prop}\cite[Remark~2.2]{wang} \label{prop:semtl-equivalence} Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level~$l$ between two $k$-algebras $\Lambda$ and~$\Sigma$. Then the functors \[ N \tensor_\Lambda - \colon \mathsf{D}_\mathsf{sg}(\Lambda) \to \mathsf{D}_\mathsf{sg}(\Sigma) \qquad\text{and}\qquad M \tensor_\Sigma - \colon \mathsf{D}_\mathsf{sg}(\Sigma) \to \mathsf{D}_\mathsf{sg}(\Lambda) \] are equivalences of triangulated categories. The compositions \[ M \tensor_\Sigma N \tensor_\Lambda - \colon \mathsf{D}_\mathsf{sg}(\Lambda) \to \mathsf{D}_\mathsf{sg}(\Lambda) \qquad\text{and}\qquad N \tensor_\Lambda M \tensor_\Sigma - \colon \mathsf{D}_\mathsf{sg}(\Sigma) \to \mathsf{D}_\mathsf{sg}(\Sigma) \] are isomorphic to the shift functor $[-l]$ on the respective categories $\mathsf{D}_\mathsf{sg}(\Lambda)$ and $\mathsf{D}_\mathsf{sg}(\Sigma)$. \end{prop}
We now show that the notion of singular equivalence of Morita type with level generalizes the notion of singular equivalence of Morita type, in the sense that any equivalence of the latter type is also of the former type. This is mentioned without proof in~\cite{wang}.
\begin{prop} \label{prop:semt=>semtl} Let $\Lambda$ and~$\Sigma$ be finite-dimensional $k$-algebras. If a functor $f \colon \mathsf{D}_\mathsf{sg}(\Lambda) \to \mathsf{D}_\mathsf{sg}(\Sigma)$ is a singular equivalence of Morita type, then it is also a singular equivalence of Morita type with level. \end{prop} \begin{proof} Let $M$, $N$, $X$ and~$Y$ be bimodules satisfying the requirements of a singular equivalence of Morita type, such that $f = (N \tensor_\Lambda -)$. Let $l = \max \{ \pd_{\e\Lambda} X, \pd_{\e\Sigma} Y \}$. Let $M'$ be an $l$-th syzygy of $M$ as $\Lambda$--$\Sigma$-bimodule, and let \begin{equation} \label{eq:M'} 0 \to M' \to P_{l-1} \to \cdots \to P_0 \to M \to 0 \end{equation} be the beginning of a projective resolution of~$M$. We show that the bimodules $M'$ and~$N$ induce a singular equivalence of Morita type with level~$l$.
If we consider the bimodules in sequence~\eqref{eq:M'} as one-sided modules (left $\Lambda$-modules or right $\Sigma$-modules), then $M$ and the modules $P_0, \ldots, P_{l-1}$ are projective, and thus $M'$ must be projective as well. Thus condition~(1) in the definition is satisfied. Condition~(2) is trivially satisfied, since it is the same as condition~(2) in the definition of singular equivalence of Morita type.
Tensoring sequence~\eqref{eq:M'} with~$N$ gives the sequence \[ 0 \to M' \tensor_\Sigma N \to P_{l-1} \tensor_\Sigma N \to \cdots \to P_0 \tensor_\Sigma N \to M \tensor_\Sigma N \to 0. \] This sequence is exact since $N$ is projective as left $\Sigma$-module, and the modules $P_i \tensor_\Sigma N$ are projective $\e\Lambda$-modules since $N$ is projective as right $\Lambda$-module. The $\e\Lambda$-module $M' \tensor_\Sigma N$ is therefore an $l$-th syzygy of $M \tensor_\Sigma N$. Since $M \tensor_\Sigma N$ is isomorphic to $\Lambda \oplus X$ and the projective dimension of $X$ is at most $l$, this means that $M' \tensor_\Sigma N$ is an $l$-th syzygy of $\Lambda$ as $\e\Lambda$-module. Similarly, we can show that $N \tensor_\Lambda M'$ is an $l$-th syzygy of $\Sigma$ as $\e\Sigma$-module. This means that conditions (3)~and~(4) in the definition are satisfied. \end{proof}
In the rest of the paper we work with singular equivalences of Morita type with level. By the above proposition, all results where we assume such an equivalence are also applicable to singular equivalences of Morita type.
As seen above, if $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ are bimodules which induce a singular equivalence of Morita type with level, then the functors $N \otimes_\Lambda -$ and $M \otimes_\Sigma -$ are equivalences between the singularity categories of $\Lambda$ and~$\Sigma$. We end this section by examining some properties of these tensor functors when viewed as functors between the module categories $\fmod \Lambda$ and $\fmod \Sigma$.
\begin{lem} \label{lem:semtl-functor-properties} Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level between two $k$-algebras $\Lambda$ and~$\Sigma$. Then the functors \[ N \tensor_\Lambda - \colon \fmod \Lambda \to \fmod \Sigma \qquad\text{and}\qquad M \tensor_\Sigma - \colon \fmod \Sigma \to \fmod \Lambda \] are exact and take projective modules to projective modules. In particular, this means that they take projective resolutions to projective resolutions. \end{lem} \begin{proof} Consider the functor $N \tensor_\Lambda -$. This functor is exact since $N$ is projective as right $\Lambda$-module, and it takes projective modules to projective modules since $N$ is projective as left $\Sigma$-module. \end{proof}
Let $\Lambda$, $\Sigma$, $M$ and~$N$ be as in the above lemma. Since the functor \[ N \tensor_\Lambda - \colon \fmod \Lambda \to \fmod \Sigma \] is exact, it induces homomorphisms of extension groups. By abuse of notation, we denote these maps by $N \tensor_\Lambda -$ as well. More precisely, for $\Lambda$-modules $U$ and~$V$ and an integer $n \ge 0$, we define a map \begin{equation} \label{eq:ext-map} N \tensor_\Lambda - \colon \Ext_\Lambda^n(U, V) \to \Ext_\Sigma^n(N \tensor_\Lambda U, N \tensor_\Lambda V). \end{equation} For $n = 0$, the map $N \tensor_\Lambda -$ simply sends a homomorphism $f \colon U \to V$ to the homomorphism $N \tensor_\Lambda f \colon N \tensor_\Lambda U \to N \tensor_\Lambda V$. For $n > 0$, the map $N \tensor_\Lambda -$ sends the element represented by the extension \[ 0 \to V \to E_n \to \cdots \to E_1 \to U \to 0 \] to the element represented by the extension \[ 0 \to N \tensor_\Lambda V \to N \tensor_\Lambda E_n \to \cdots \to N \tensor_\Lambda E_1 \to N \tensor_\Lambda U \to 0 \] obtained by applying the functor $N \tensor_\Lambda -$ to all objects and maps.
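Since $N \tensor_\Lambda -$ is an exact additive functor, the maps~\eqref{eq:ext-map} are $k$-linear, and they are compatible with Yoneda products: applying the functor to a splice of extensions yields the splice of their images (and in degree zero the compatibility is just functoriality), so that \[ N \tensor_\Lambda ([\eta] \circ [\theta]) = (N \tensor_\Lambda [\eta]) \circ (N \tensor_\Lambda [\theta]) \] for composable extension classes $[\eta]$ and~$[\theta]$. We record this standard observation here, since the compatibility of tensor functors with Yoneda products is used implicitly when ring structures are compared later in the paper.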
The maps~\eqref{eq:ext-map} play an important role later in the paper. In Section~\ref{sec:ext}, we show that if $\Lambda$ and~$\Sigma$ are Gorenstein algebras, then these maps are isomorphisms for almost all~$n$. This fact is used in the proof of the main theorem (Theorem~\ref{thm:main}).
\section{Gorenstein algebras and maximal Cohen--Macaulay modules} \label{sec:gorenstein-cm}
So far, we have considered the situation of two $k$-algebras $\Lambda$ and~$\Sigma$, together with bimodules $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ inducing a singular equivalence of Morita type with level between $\Lambda$ and~$\Sigma$. From now on, we restrict our attention to the special case where both $\Lambda$ and~$\Sigma$ are Gorenstein algebras. In this section, we prove our first result under this assumption, namely Proposition~\ref{prop:semtl->sCM}, which states that the tensor functors $N \tensor_\Lambda -$ and $M \tensor_\Sigma -$ induce triangle equivalences between the stable categories of maximal Cohen--Macaulay modules over $\Lambda$ and~$\Sigma$.
We begin by recalling the definition of Gorenstein algebras.
\begin{defn} A $k$-algebra $\Lambda$ is a \defterm{Gorenstein algebra} if the injective dimension of $\Lambda$ as a left $\Lambda$-module is finite and the injective dimension of $\Lambda$ as a right $\Lambda$-module is finite: \[ \id_\Lambda ({}_\Lambda \Lambda) < \infty \qquad\text{and}\qquad \id_{\opposite{\Lambda}} (\Lambda_\Lambda) < \infty. \] \end{defn}
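Two familiar classes of examples: every self-injective algebra is Gorenstein, since in that case \[ \id_\Lambda ({}_\Lambda \Lambda) = 0 \qquad\text{and}\qquad \id_{\opposite{\Lambda}} (\Lambda_\Lambda) = 0; \] and every algebra of finite global dimension is Gorenstein, since over such an algebra every module, in particular the regular module on either side, has finite injective dimension.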
If $\Lambda$ is a Gorenstein algebra, then $\id_\Lambda (\leftmod{\Lambda}{\Lambda})$ and $\id_{\opposite{\Lambda}} (\rightmod{\Lambda}{\Lambda})$ are the same number, and this number is called the \defterm{Gorenstein dimension} of $\Lambda$. In later sections, we need the following result about Gorenstein algebras.
\begin{lem}\cite[Lemma~2.1]{bj} \label{lem:gorenstein-envalg} If $\Lambda$ is a Gorenstein $k$-algebra with Gorenstein dimension $d$, then its enveloping algebra $\e\Lambda$ is a Gorenstein algebra with Gorenstein dimension at most $2d$. \end{lem}
We continue by recalling the definition of maximal Cohen--Macaulay modules.
\begin{defn} Let $\Lambda$ be a $k$-algebra. A finitely generated $\Lambda$-module $C$ is a \defterm{maximal Cohen--Macaulay module} if $\Ext_\Lambda^n(C, \Lambda) = 0$ for every positive integer $n$. We denote the subcategory of $\fmod \Lambda$ consisting of all maximal Cohen--Macaulay modules by $\CM(\Lambda)$, and the corresponding stable category modulo projectives by $\sCM(\Lambda)$. \end{defn}
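Two extreme cases illustrate the definition (they are not needed later): if $\Lambda$ is self-injective, then every finitely generated $\Lambda$-module $C$ satisfies \[ \Ext_\Lambda^n(C, \Lambda) = 0 \quad\text{for all } n > 0, \] so every module is maximal Cohen--Macaulay; if instead $\Lambda$ has finite global dimension, then the maximal Cohen--Macaulay modules are exactly the projective modules (a module of finite projective dimension with vanishing higher extensions against $\Lambda$ is projective), so that $\sCM(\Lambda)$ is trivial.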
In the following lemma, we recall some characterizations of maximal Cohen--Macaulay modules over Gorenstein algebras.
\begin{lem} \label{lem:cm<->syzygy} Let $\Lambda$ be a finite-dimensional Gorenstein $k$-algebra and $C$ a finitely generated $\Lambda$-module. The following are equivalent. \begin{enumerate} \item $C$ is a maximal Cohen--Macaulay module. \item $C$ has a projective coresolution. That is, there exists an exact sequence \[ 0 \to C \to P_{-1} \to P_{-2} \to \cdots \] where every $P_i$ is a projective $\Lambda$-module. \item For every $n > 0$, there is a $\Lambda$-module $A$ such that $C$ is an $n$-th syzygy of $A$. \item For some $n \ge \id_\Lambda \Lambda$, there is a $\Lambda$-module $A$ such that $C$ is an $n$-th syzygy of $A$. \end{enumerate} \end{lem} \begin{proof} We only need to show that statement~(1) implies statement~(2); the implications $\text{(2)} \implies \text{(3)} \implies \text{(4)}$ are obvious, and the implication $\text{(4)} \implies \text{(1)}$ follows directly from the definitions.
We use Theorem~5.4~(b) from~\cite{contravariantly}. We first describe the notation used in~\cite{contravariantly} for certain subcategories of a module category.
For a $\Lambda$-module $T$ with the property that $\Ext_\Lambda^i(T,T) = 0$ for every $i>0$, we define the subcategories ${}^{\bot}T$ and~$\mathscr{X}_T$ of $\fmod \Lambda$. The category ${}^{\bot}T$ is the subcategory of $\fmod \Lambda$ consisting of all modules $A$ such that $\Ext_\Lambda^i(A,T) = 0$ for every $i>0$. The category $\mathscr{X}_T$ is the subcategory of ${}^{\bot}T$ consisting of all modules $A$ such that there is an exact sequence \[ 0 \to A
\to T_0
\xrightarrow{f_0} T_1
\xrightarrow{f_1} T_2
\xrightarrow{f_2} \cdots \] where $T_i$ is in $\add T$ and $\im f_i$ is in ${}^{\bot}T$ for every $i \ge 0$.
Theorem~5.4~(b) in~\cite{contravariantly} says that if $T$ is a cotilting module, then the categories ${}^{\bot}T$ and~$\mathscr{X}_T$ are equal.
Now consider the case $T = \Lambda$. Since $\Lambda$ is a Gorenstein algebra, it is a cotilting module, and then by the above we have ${}^{\bot}\Lambda = \mathscr{X}_\Lambda$. Furthermore, ${}^{\bot}\Lambda$ is the category $\CM(\Lambda)$ of maximal Cohen--Macaulay modules. Therefore, every maximal Cohen--Macaulay module $C$ lies in the category $\mathscr{X}_\Lambda$, and thus it has a projective coresolution of the form \[ 0 \to C \to P_{-1} \to P_{-2} \to \cdots \] where every $P_i$ is a projective $\Lambda$-module. \end{proof}
We now recall the theorem by Buchweitz which provides the connection we need between singularity categories and stable categories of maximal Cohen--Macaulay modules.
\begin{thm}\cite[Theorem~4.4.1]{buchweitz} \label{thm:sCM-equiv-Dsg} Let $\Lambda$ be a finite-dimensional Gorenstein algebra. Then there is an equivalence of triangulated categories \[ \sCM(\Lambda) \xrightarrow{\simeq} \mathsf{D}_\mathsf{sg}(\Lambda) \] given by sending every object in $\sCM(\Lambda)$ to a stalk complex concentrated in degree~$0$. \end{thm}
A direct consequence of Theorem~\ref{thm:sCM-equiv-Dsg} is that if two finite-dimensional Gorenstein algebras $\Lambda$ and~$\Sigma$ are singularly equivalent, then the categories $\sCM(\Lambda)$ and $\sCM(\Sigma)$ are triangle equivalent. If the algebras are not only singularly equivalent, but singularly equivalent of Morita type (with level), then there are tensor functors $N \tensor_\Lambda -$ and $M \tensor_\Sigma -$ that induce equivalences between the singularity categories $\mathsf{D}_\mathsf{sg}(\Lambda)$ and $\mathsf{D}_\mathsf{sg}(\Sigma)$. What we aim to prove now is that these tensor functors also induce equivalences between the stable categories $\sCM(\Lambda)$ and $\sCM(\Sigma)$ of maximal Cohen--Macaulay modules. We first show that these functors preserve the property of being a maximal Cohen--Macaulay module.
\begin{lem} \label{lem:tensor-functor-cm} Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level between two finite-dimensional Gorenstein $k$-algebras $\Lambda$ and~$\Sigma$. Then the functors \[ N \tensor_\Lambda - \colon \fmod \Lambda \to \fmod \Sigma \qquad\text{and}\qquad M \tensor_\Sigma - \colon \fmod \Sigma \to \fmod \Lambda \] send maximal Cohen--Macaulay modules to maximal Cohen--Macaulay modules. \end{lem} \begin{proof} Let $n = \max \{ \id_\Lambda \Lambda, \id_\Sigma \Sigma \}$ (this is finite since the algebras $\Lambda$ and~$\Sigma$ are Gorenstein). Let $C$ be a maximal Cohen--Macaulay module over $\Lambda$. Then by Lemma~\ref{lem:cm<->syzygy}, there is a $\Lambda$-module $A$ such that $C$ is an $n$-th syzygy of $A$. By Lemma~\ref{lem:semtl-functor-properties}, the $\Sigma$-module $N \tensor_\Lambda C$ is an $n$-th syzygy of $N \tensor_\Lambda A$, and therefore by Lemma~\ref{lem:cm<->syzygy} it is a maximal Cohen--Macaulay module. \end{proof}
Finally, we are ready to prove the main result of this section.
\begin{prop} \label{prop:semtl->sCM} Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level between two finite-dimensional Gorenstein $k$-algebras $\Lambda$ and~$\Sigma$. Then the functors \[ N \tensor_\Lambda - \colon \sCM(\Lambda) \to \sCM(\Sigma) \qquad\text{and}\qquad M \tensor_\Sigma - \colon \sCM(\Sigma) \to \sCM(\Lambda) \] are equivalences of triangulated categories. \end{prop} \begin{proof} We first check that $N \tensor_\Lambda -$ actually gives a functor from $\sCM(\Lambda)$ to $\sCM(\Sigma)$. We know from Lemma~\ref{lem:tensor-functor-cm} that it gives a functor from $\CM(\Lambda)$ to $\CM(\Sigma)$. By Lemma~\ref{lem:semtl-functor-properties}, we see that if $f$ is a map of $\Lambda$-modules that factors through a projective module, then the map $N \tensor_\Lambda f$ also factors through a projective module. Thus $N \tensor_\Lambda -$ gives a well-defined functor from $\sCM(\Lambda)$ to $\sCM(\Sigma)$.
Consider the diagram \[ \xymatrix@C=5em{ \sCM(\Lambda)
\ar[r]^{N \tensor_\Lambda -}
\ar[d]^{\simeq} & \sCM(\Sigma)
\ar[d]^{\simeq} \\ \mathsf{D}_\mathsf{sg}(\Lambda)
\ar[r]^{N \tensor_\Lambda -}_{\simeq} & \mathsf{D}_\mathsf{sg}(\Sigma) } \] of categories and functors, where the vertical functors are the equivalences from Theorem~\ref{thm:sCM-equiv-Dsg}, and the functor $N \tensor_\Lambda -$ in the bottom row is an equivalence by Proposition~\ref{prop:semtl-equivalence}. The diagram commutes, and therefore the functor $N \tensor_\Lambda -$ in the top row is also an equivalence. \end{proof}
\section{Rotations of extensions} \label{sec:rot}
If $U$ and~$V$ are modules over an algebra $\Lambda$, then dimension shift gives isomorphisms $\Ext_\Lambda^n(U, V) \cong \Ext_\Lambda^{n-i}(\Omega_\Lambda^i(U), V)$ for integers $n$ and~$i$ with $n > i > 0$. If the algebra $\Lambda$ is Gorenstein, then all projective $\Lambda$-modules have finite injective dimension. This means that for sufficiently large~$n$ (more precisely, $n > \id_\Lambda \Lambda$), we can use projective resolutions to do dimension shifting in the second argument of Ext as well. That is, we have isomorphisms $\Ext_\Lambda^n(U, V) \cong \Ext_\Lambda^{n+i}(U, \Omega_\Lambda^i(V))$. By dimension shifting in both arguments, we then get isomorphisms \[ \Ext_\Lambda^n(U, V) \cong \Ext_\Lambda^n(\Omega_\Lambda^i(U), \Omega_\Lambda^i(V)), \] where we stay in the same degree~$n$, but replace both arguments to Ext by their $i$-th syzygies. In this section, we describe such isomorphisms, which we call rotation maps, and which are going to be used several times in later sections.
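To spell out the dimension shift in the second argument: if $0 \to \Omega_\Lambda^1(V) \to Q_0 \to V \to 0$ is exact with $Q_0$ projective and $n > \id_\Lambda \Lambda$, then $\Ext_\Lambda^n(U, Q_0) = 0 = \Ext_\Lambda^{n+1}(U, Q_0)$ because $\id_\Lambda Q_0 \le \id_\Lambda \Lambda$, so the long exact sequence for $\Hom_\Lambda(U, -)$ yields a connecting isomorphism, and iterating gives \[ \Ext_\Lambda^n(U, V) \cong \Ext_\Lambda^{n+1}(U, \Omega_\Lambda^1(V)) \cong \cdots \cong \Ext_\Lambda^{n+i}(U, \Omega_\Lambda^i(V)) \] for every $i \ge 0$.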
The rotation maps can be defined without assuming that we are working over a Gorenstein algebra, but in that generality they are not necessarily isomorphisms. We first define the maps in a general setting, and then, in Lemma~\ref{lem:rotation}, describe the conditions needed to ensure that they are isomorphisms.
\begin{defn} Let $\Lambda$ be a finite-dimensional $k$-algebra, and let $U$ and~$V$ be finitely generated $\Lambda$-modules. Choose projective resolutions $\pi \colon \cdots \to P_1 \to P_0 \to U \to 0$ and $\tau \colon \cdots \to Q_1 \to Q_0 \to V \to 0$ of the modules $U$ and~$V$. Let $i$ and~$n$ be integers with $i < n$, and let \begin{align*} \pi_i &\colon 0 \to \Omega_\Lambda^i(U) \to P_{i-1} \to \cdots \to P_0 \to U \to 0, \\ \tau_i &\colon 0 \to \Omega_\Lambda^i(V) \to Q_{i-1} \to \cdots \to Q_0 \to V \to 0 \end{align*} be truncations of the chosen projective resolutions. We define the \defterm{$i$-th rotation} of the extension group $\Ext_\Lambda^n(U, V)$ with respect to the resolutions $\pi$ and~$\tau$ to be the map \[ \xymatrix{ {\rho_i \colon \Ext_\Lambda^n(U, V)} \ar[rr] \ar[dr]_{(\tau_i)_*} && {\Ext_\Lambda^n(\Omega_\Lambda^i(U), \Omega_\Lambda^i(V))} \\ & {\Ext_\Lambda^{n+i}(U, \Omega_\Lambda^i(V))} \ar[ur]_{(\pi_i^*)^{-1}} } \] given by $\rho_i = (\pi_i^*)^{-1} (\tau_i)_*$. \end{defn}
Consider the situation in the above definition. If the algebra $\Lambda$ is Gorenstein and $n > \id_\Lambda \Lambda$, then for each of the projective modules~$Q_j$, we have $\id_\Lambda Q_j \le \id_\Lambda \Lambda < n$, and thus the map $(\tau_i)_*$ is an isomorphism. This gives the following result.
\begin{lem} \label{lem:rotation} Let $\Lambda$ be a finite-dimensional Gorenstein $k$-algebra, and let $U$ and~$V$ be finitely generated $\Lambda$-modules. For every $n > \id_\Lambda \Lambda$ and every $i < n$, the $i$-th rotation \[ \rho_i \colon \Ext_\Lambda^n(U, V) \to \Ext_\Lambda^n(\Omega_\Lambda^i(U), \Omega_\Lambda^i(V)) \] \textup{(}with respect to any projective resolutions of $U$ and~$V$\textup{)} is an isomorphism. \end{lem}
If we look at a rotation map of an extension group $\Ext_\Lambda^n(U,U)$ with the same module in both arguments, then the action of the map can be viewed as a concrete ``rotation'' of the extensions, as we will now see. Let $\pi \colon \cdots \to P_1 \to P_0 \to U \to 0$ be a projective resolution of~$U$, and consider the $i$-th rotation map \[ \rho_i \colon \Ext_\Lambda^n(U, U) \to \Ext_\Lambda^n(\Omega_\Lambda^i(U), \Omega_\Lambda^i(U)) \] with respect to the resolution $\pi$. Every element of $\Ext_\Lambda^n(U,U)$ can be represented by an exact sequence of the form \[ \xymatrix@C=2.5ex@R=1ex{ 0 \ar[r] & U \ar[r] & E \ar[r] & P_{n-2} \ar[r] & \cdots \ar[r] & P_i \ar[rr]\ar[rd] && P_{i-1} \ar[r] & \cdots \ar[r] & P_0 \ar[r] & U \ar[r] & 0 \\ &&&&&& \Omega_\Lambda^i(U) \ar[ur] } \] Applying the map~$\rho_i$ to the element represented by this sequence produces the element represented by the following sequence: \[ \xymatrix@C=2.5ex@R=1ex{ 0 \ar[r] & \Omega_\Lambda^i(U) \ar[r] & P_{i-1} \ar[r] & \cdots \ar[r] & P_0 \ar[rr]\ar[rd] && E \ar[r] & P_{n-2} \ar[r] & \cdots \ar[r] & P_i \ar[r] & \Omega_\Lambda^i(U) \ar[r] & 0 \\ &&&&& U \ar[ur] } \] We have thus rotated the sequence by removing an $i$-fold sequence from the right side and moving it to the left side.
\section{Isomorphisms between extension groups} \label{sec:ext}
In this section, we show that if $\Lambda$ and~$\Sigma$ are Gorenstein algebras which are singularly equivalent of Morita type with level, then we have isomorphisms between extension groups over $\Lambda$ and extension groups over $\Sigma$ in sufficiently high degrees. More precisely, if $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ are bimodules which induce a singular equivalence of Morita type with level between the algebras $\Lambda$ and~$\Sigma$, then the functor $N \tensor_\Lambda -$ induces an isomorphism \begin{equation} \label{eq:ext-iso} \Ext_\Lambda^n(A,B) \cong \Ext_\Sigma^n(N \tensor_\Lambda A,
N \tensor_\Lambda B) \end{equation} for every $n > \max\{\id_\Lambda \Lambda, \id_\Sigma \Sigma\}$ and for any $\Lambda$-modules $A$ and~$B$. This is stated as Proposition~\ref{prop:ext-iso}.
To prove this result, we use maximal Cohen--Macaulay modules and the results from Section~\ref{sec:gorenstein-cm}, as well as the rotation maps from Section~\ref{sec:rot}. By Proposition~\ref{prop:semtl->sCM}, we know that in the setting described above, we have isomorphisms \begin{equation} \label{eq:stHom-iso} \stHom_\Lambda(C,C') \cong \stHom_\Sigma(N \tensor_\Lambda C,
N \tensor_\Lambda C') \end{equation} between stable $\Hom$ groups over $\Lambda$ and~$\Sigma$ for maximal Cohen--Macaulay $\Lambda$-modules $C$ and~$C'$. Lemma~\ref{lem:stHom=ext} below relates stable $\Hom$ groups to extension groups. Using this and isomorphism~\eqref{eq:stHom-iso}, we show (Proposition~\ref{prop:ext-iso-cm}) that there are isomorphisms \[ \Ext_\Lambda^n(C, C') \cong \Ext_\Sigma^n(N \tensor_\Lambda C,
N \tensor_\Lambda C') \] for all maximal Cohen--Macaulay modules $C$ and $C'$ and every positive integer~$n$. Finally, to arrive at isomorphism~\eqref{eq:ext-iso} for any $\Lambda$-modules $A$ and~$B$ in Proposition~\ref{prop:ext-iso}, we use Proposition~\ref{prop:ext-iso-cm} together with two facts about Gorenstein algebras from earlier sections: all syzygies of sufficiently high degree are maximal Cohen--Macaulay modules, and by using a rotation map, we can replace the modules $A$ and~$B$ by their syzygies.
We begin this section by showing, in the following two lemmas, how extension groups between maximal Cohen--Macaulay modules can be described as stable Hom groups. If $C$ and~$C'$ are maximal Cohen--Macaulay modules over an algebra $\Lambda$, then we get (Lemma~\ref{lem:stHom=ext}) an isomorphism \[ \Ext_\Lambda^n(C, C') \cong \stHom_\Lambda(K_n, C') \] for every positive integer~$n$, with $K_n$ an $n$-th syzygy of~$C$.
In fact, it turns out that the conditions on $C$ and~$C'$ can be relaxed somewhat. Recall that $C$ being a maximal Cohen--Macaulay module means that $\Ext_\Lambda^i(C,\Lambda) = 0$ for every positive integer~$i$. To get the above isomorphism in degree~$n$, it is sufficient to assume that $\Ext_\Lambda^n(C,\Lambda) = 0$, and we do not need to put any assumptions on the module~$C'$. We use this weaker assumption in the lemmas.
The following notation is used in the two lemmas. Given two modules $A$ and $B$ over an algebra~$\Lambda$, we write $\mathscr{P}_\Lambda(A,B) \subseteq \Hom_\Lambda(A,B)$ for the subspace of $\Hom_\Lambda(A,B)$ consisting of morphisms that factor through a projective module; then the stable Hom group is $\stHom_\Lambda(A, B) = \Hom_\Lambda(A, B) / \mathscr{P}_\Lambda(A, B)$.
In the first lemma, we consider the special case $n = 1$.
\begin{lem} \label{lem:stHom=ext1} Let $\Lambda$ be a finite-dimensional $k$-algebra, and let $A$ and~$C$ be finitely generated $\Lambda$-modules such that $\Ext_\Lambda^1(C, \Lambda) = 0$. Let \[ \eta \colon 0 \to K \xrightarrow{\alpha} P \xrightarrow{\beta} C \to 0 \] be a short exact sequence of $\Lambda$-modules with $P$ projective. Then the sequence \[ 0 \to \mathscr{P}_\Lambda(K, A)
\hookrightarrow \Hom_\Lambda(K, A)
\xrightarrow{\eta^*} \Ext_\Lambda^1(C, A)
\to 0 \] of $k$-vector spaces is exact. \end{lem} \begin{proof} By applying the functor $\Hom_\Lambda(-, A)$ to the sequence $\eta$, we get the exact sequence \[ 0 \to \Hom_\Lambda(C, A)
\xrightarrow{\beta^*} \Hom_\Lambda(P, A)
\xrightarrow{\alpha^*} \Hom_\Lambda(K, A)
\xrightarrow{\eta^*} \Ext_\Lambda^1(C, A)
\to 0. \] From this we obtain the short exact sequence \[ 0 \to \im \alpha^*
\hookrightarrow \Hom_\Lambda(K, A)
\xrightarrow{\eta^*} \Ext_\Lambda^1(C, A)
\to 0. \]
Now we only need to show that $\im \alpha^* = \mathscr{P}_\Lambda(K, A)$. If a homomorphism $f \colon K \to A$ lies in $\im \alpha^*$, then it factors through the map $\alpha \colon K \to P$, and since the module $P$ is projective, this means that $f$ lies in $\mathscr{P}_\Lambda(K, A)$. We thus have $\im \alpha^* \subseteq \mathscr{P}_\Lambda(K, A)$.
For the opposite inclusion, let $Q$ be a projective $\Lambda$-module. Since we have assumed that $\Ext_\Lambda^1(C, \Lambda) = 0$, we also have $\Ext_\Lambda^1(C, Q) = 0$. Then from the long exact sequence obtained by applying the functor $\Hom_\Lambda(-,Q)$ to the short exact sequence $\eta$, we see that every homomorphism $g\colon K \to Q$ factors through the homomorphism $\alpha \colon K \to P$. Thus every homomorphism which starts in $K$ and factors through some projective module, also factors through $\alpha$, and we get $\mathscr{P}_\Lambda(K,A) \subseteq \im \alpha^*$. \end{proof}
We now pass to extension groups in arbitrary degree by combining the above lemma with dimension shifting.
\begin{lem} \label{lem:stHom=ext} Let $\Lambda$ be a finite-dimensional $k$-algebra, let $A$ and~$C$ be finitely generated $\Lambda$-modules, and let $n$ be a positive integer. Assume that $\Ext_\Lambda^n(C, \Lambda) = 0$. Let \[ \pi_n \colon 0 \to K_n \to P_{n-1}
\to P_{n-2} \to \cdots \to P_1 \to P_0 \to C \to 0 \] be the beginning of a projective resolution of~$C$ with $K_n$ as the $n$-th syzygy. Then the sequence \[ 0 \to \mathscr{P}_\Lambda(K_n, A)
\hookrightarrow \Hom_\Lambda(K_n, A)
\xrightarrow{\pi_n^*} \Ext_\Lambda^n(C, A)
\to 0 \] of $k$-vector spaces is exact, and thus the map $\pi_n^*$ induces an isomorphism \[ \overline{\pi_n^*} \colon \stHom_\Lambda(K_n, A) \xrightarrow{\cong} \Ext_\Lambda^n(C, A). \] \end{lem} \begin{proof} Decompose the sequence $\pi_n$ into two exact sequences \begin{align*} \eta \colon & 0 \to K_n \to P_{n-1} \to K_{n-1} \to 0 \\ \text{and}\qquad \pi_{n-1} \colon & 0 \to K_{n-1} \to P_{n-2} \to P_{n-3} \to \cdots
\to P_1 \to P_0 \to C \to 0, \end{align*} such that $\pi_n = \eta \circ \pi_{n-1}$. By dimension shifting, we have an isomorphism \[ \pi_{n-1}^* \colon \Ext_\Lambda^1(K_{n-1}, A) \xrightarrow{\cong} \Ext_\Lambda^n(C, A). \] We observe that $\pi_n^* = \pi_{n-1}^* \circ \eta^*$, so the following diagram is commutative. \[ \xymatrix{ 0 \ar[r] & \mathscr{P}_\Lambda(K_n, A) \ar@{^{(}->}[r] \ar@{=}[d] & \Hom_\Lambda(K_n, A) \ar[r]^{\eta^*} \ar@{=}[d] & \Ext_\Lambda^1(K_{n-1}, A) \ar[r] \ar[d]_{\pi_{n-1}^*}^{\cong} & 0 \\ 0 \ar[r] & \mathscr{P}_\Lambda(K_n, A) \ar@{^{(}->}[r] & \Hom_\Lambda(K_n, A) \ar[r]^{\pi_n^*} & \Ext_\Lambda^n(C, A) \ar[r] & 0 } \] By Lemma~\ref{lem:stHom=ext1}, the top row of this diagram is exact. Since all the vertical maps are isomorphisms, the bottom row is also exact. \end{proof}
We now show that we get the isomorphisms we want between extension groups in the special case where the involved modules are maximal Cohen--Macaulay modules. In this case, we get isomorphisms between extension groups in all positive degrees, while in the general case which is considered afterwards (Proposition~\ref{prop:ext-iso}), we only get isomorphisms in almost all degrees.
\begin{prop} \label{prop:ext-iso-cm} Let $\Lambda$ and~$\Sigma$ be finite-dimensional Gorenstein algebras which are singularly equivalent of Morita type with level, and let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level between $\Lambda$ and~$\Sigma$. Let $C$ and~$C'$ be maximal Cohen--Macaulay modules over~$\Lambda$. Then for every positive integer~$n$, the map \[ \Ext_\Lambda^n(C, C') \xrightarrow{N \tensor_\Lambda -} \Ext_\Sigma^n(N \tensor_\Lambda C, N \tensor_\Lambda C') \] is an isomorphism. \end{prop} \begin{proof} The idea is to translate the two Ext groups to stable Hom groups by using Lemma~\ref{lem:stHom=ext}, and then use the equivalence of stable categories of Cohen--Macaulay modules from Proposition~\ref{prop:semtl->sCM}.
Let \[ \pi_n \colon 0 \to K_n \to P_{n-1} \to \cdots \to P_0 \to C \to 0 \] be the beginning of a projective resolution of~$C$ with $K_n$ as $n$-th syzygy. By Lemma~\ref{lem:semtl-functor-properties}, the sequence $N \tensor_\Lambda \pi_n$, which is obtained by applying the functor $N \tensor_\Lambda -$ to all objects and maps in $\pi_n$, is the beginning of a projective resolution of the $\Sigma$-module $N \tensor_\Lambda C$, with $N \tensor_\Lambda K_n$ as the $n$-th syzygy.
Since $C$ and $C'$ are maximal Cohen--Macaulay modules, we deduce that $N \tensor_\Lambda C$, $N \tensor_\Lambda C'$, $K_n$ and $N \tensor_\Lambda K_n$ are also maximal Cohen--Macaulay modules, by using Lemma~\ref{lem:cm<->syzygy} and Lemma~\ref{lem:tensor-functor-cm}. We form the following commutative diagram of $k$-vector spaces. \[ \xymatrix@C=5em{ {\stHom_\Lambda(K_n, C')} \ar[r]^-{N \tensor_\Lambda -}_-{\cong} \ar[d]_{\pi_n^*}^{\cong} & {\stHom_\Sigma(N \tensor_\Lambda K_n, N \tensor_\Lambda C')} \ar[d]^{(N \tensor_\Lambda \pi_n)^*}_{\cong} \\ {\Ext^n_\Lambda(C, C')} \ar[r]^-{N \tensor_\Lambda -} & {\Ext^n_\Sigma(N \tensor_\Lambda C, N \tensor_\Lambda C')} } \] The vertical maps are isomorphisms by Lemma~\ref{lem:stHom=ext}, and the map in the top row is an isomorphism by Proposition~\ref{prop:semtl->sCM}. Therefore the map in the bottom row is also an isomorphism, and this concludes the proof. \end{proof}
Finally, we come to the main result of this section, where we show that if two Gorenstein algebras $\Lambda$ and~$\Sigma$ are singularly equivalent of Morita type with level, then for every extension group (of sufficiently high degree) over~$\Lambda$, there is an isomorphic extension group over~$\Sigma$.
\begin{prop} \label{prop:ext-iso} Let $\Lambda$ and~$\Sigma$ be finite-dimensional Gorenstein $k$-algebras which are singularly equivalent of Morita type with level, and let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level between $\Lambda$ and~$\Sigma$. Let \[ d = \max \{ \id_\Lambda \Lambda, \id_\Sigma \Sigma \} \] be the maximum of the injective dimensions of $\Lambda$ and~$\Sigma$. Then for every integer $n > d$, we have $k$-vector space isomorphisms \begin{align*} \Ext_\Lambda^n(A, B) &\xrightarrow[N \tensor_\Lambda -]{\cong} \Ext_\Sigma^n(N \tensor_\Lambda A, N \tensor_\Lambda B) && \text{for $\Lambda$-modules $A$ and $B$,} \\ \Ext_\Sigma^n(A', B') &\xrightarrow[M \tensor_\Sigma -]{\cong} \Ext_\Lambda^n(M \tensor_\Sigma A', M \tensor_\Sigma B') && \text{for $\Sigma$-modules $A'$ and $B'$.} \end{align*} \end{prop} \begin{proof} Let $A$ and~$B$ be $\Lambda$-modules, and let \[ \pi \colon \cdots \to P_1 \to P_0 \to A \to 0 \qquad\text{and}\qquad \tau \colon \cdots \to Q_1 \to Q_0 \to B \to 0 \] be projective resolutions. Then by Lemma~\ref{lem:semtl-functor-properties}, the sequences $N \tensor_\Lambda \pi$ and $N \tensor_\Lambda \tau$ are projective resolutions of the $\Sigma$-modules $N \tensor_\Lambda A$ and $N \tensor_\Lambda B$. We form the following commutative diagram, where $\rho_d$ is the $d$-th rotation map with respect to the resolutions $\pi$ and $\tau$, and $\rho'_d$ the $d$-th rotation map with respect to the resolutions $N \tensor_\Lambda \pi$ and $N \tensor_\Lambda \tau$. These maps are isomorphisms by Lemma~\ref{lem:rotation}. \[ \xymatrix@C=5em{ {\Ext^n_\Lambda(A, B)} \ar[r]^-{N \tensor_\Lambda -} \ar[d]_{\rho_d}^\cong & {\Ext^n_\Sigma(N \tensor_\Lambda A, N \tensor_\Lambda B)} \ar[d]^{\rho'_d}_\cong \\ {\Ext^{n+d}_\Lambda(\Omega_\Lambda^d(A), \Omega_\Lambda^d(B))} \ar[r]^-{N \tensor_\Lambda -}_-\cong & {\Ext^{n+d}_\Sigma(N \tensor_\Lambda \Omega_\Lambda^d(A), N \tensor_\Lambda \Omega_\Lambda^d(B))} } \] By Lemma~\ref{lem:cm<->syzygy}, the syzygies $\Omega_\Lambda^d(A)$ and $\Omega_\Lambda^d(B)$ are maximal Cohen--Macaulay modules, and then by Proposition~\ref{prop:ext-iso-cm}, the map $N \tensor_\Lambda -$ in the bottom row is an isomorphism. It follows that the map $N \tensor_\Lambda -$ in the top row is an isomorphism. This gives the first of the two isomorphisms we want. The second isomorphism follows by symmetry. \end{proof}
\section{Hochschild cohomology rings} \label{sec:hh}
In this section, we define the Hochschild cohomology ring $\HH*(\Lambda)$ of an algebra~$\Lambda$, and we show that if two Gorenstein $k$-algebras are singularly equivalent of Morita type with level, then their Hochschild cohomology rings are isomorphic in almost all degrees.
We first introduce some notation for rings of extensions. If $\Lambda$ is a $k$-algebra and $A$ a $\Lambda$-module, then we define \[ \extring{\Lambda}{A} = \Ext_\Lambda^*(A, A) = \bigoplus_{n \ge 0} \Ext_\Lambda^n(A, A). \] That is, $\extring{\Lambda}{A}$ denotes the graded $k$-algebra which is the direct sum of all extension groups of $A$ by itself, with multiplication given by Yoneda product.
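Concretely, the Yoneda product is computed by splicing representing exact sequences: if $[\eta] \in \Ext_\Lambda^m(A, A)$ and $[\theta] \in \Ext_\Lambda^n(A, A)$ are represented by sequences $\eta \colon 0 \to A \to E_m \to \cdots \to E_1 \to A \to 0$ and $\theta \colon 0 \to A \to F_n \to \cdots \to F_1 \to A \to 0$, then, with the convention that the first factor is spliced on the left, $[\eta] \cdot [\theta]$ is the class of the spliced sequence \[ \eta \circ \theta \colon 0 \to A \to E_m \to \cdots \to E_1 \to F_n \to \cdots \to F_1 \to A \to 0, \] where the map $E_1 \to F_n$ is the composite of the surjection $E_1 \to A$ ending $\eta$ with the injection $A \to F_n$ starting $\theta$. We recall this only to fix conventions.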
We are interested in the ``asymptotic'' behaviour of such graded rings of extensions; that is, we want to find isomorphisms which hold in all degrees above some finite bound. Given an extension ring $\extring{\Lambda}{A}$, we therefore consider the graded ideals of the form \[ \extring[>d]{\Lambda}{A} = \bigoplus_{n > d} \Ext_\Lambda^n(A, A) \] for some integer~$d$. We use the term \defterm{rng} for a ``ring without identity''. The object $\extring[>d]{\Lambda}{A}$ is thus a graded rng. In order to study the asymptotic behaviour of extension rings, the appropriate morphisms to look at are the morphisms of graded rngs between objects of the form $\extring[>d]{\Lambda}{A}$.
We define the Hochschild cohomology of an algebra as the extension ring of the algebra over its enveloping algebra.
\begin{defn} Let $\Lambda$ be a finite-dimensional $k$-algebra. The \defterm{Hochschild cohomology ring} of $\Lambda$ is the extension ring $\HH*(\Lambda) = \extring{\e\Lambda}{\Lambda}$. \end{defn}
Hochschild cohomology was first defined by G.~Hochschild in~\cite{hochschild}. The original definition uses the bar resolution. We follow the definition in~\cite{ce}, where Hochschild cohomology is given by extension groups. Since we have assumed that $k$ is a field, this definition is equivalent to the original one. More generally, the two definitions are equivalent whenever $\Lambda$ is projective over $k$ (see~\cite[IX, \S6]{ce}).
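In low degrees, Hochschild cohomology has a familiar description, which we recall for completeness: $\HH{0}(\Lambda) \cong Z(\Lambda)$ is the centre of~$\Lambda$, and \[ \HH{1}(\Lambda) \cong \operatorname{Der}_k(\Lambda) / \operatorname{Inn}_k(\Lambda) \] is the space of $k$-linear derivations of~$\Lambda$ modulo inner derivations. These identifications are not used in the sequel.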
We now turn to the problem of showing that singular equivalences of Morita type with level between Gorenstein algebras preserve Hochschild cohomology in almost all degrees. We need the following diagram lemma, known as the ``$3 \times 3$ splice''.
\begin{lem} \cite[Lemma VIII.3.1]{maclane} \label{lem:3x3-splice} Let $R$ be a ring, and let \[ \xymatrix@C=.1em@R=.1em{
& \eta'\colon && \eta\colon && \eta''\colon \\ \eta_A\colon & A' \ar[rr]\ar[dd] && A \ar[rr]\ar[dd] && A'' \ar[dd] \\\\ \eta_B\colon & B' \ar[rr]\ar[dd] && B \ar[rr]\ar[dd] && B'' \ar[dd] \\\\ \eta_C\colon & C' \ar[rr] && C \ar[rr] && C'' \\\\ } \]
be a commutative diagram of $R$-modules, where the three rows $\eta_A$, $\eta_B$ and $\eta_C$, as well as the three columns $\eta'$, $\eta$ and $\eta''$, are short exact sequences. Then the elements in the extension group $\Ext_R^1(C'', A')$ represented by the composition $\eta_A \circ \eta''$ and by the composition $\eta' \circ \eta_C$ are the additive inverses of each other: \[ [\eta_A \circ \eta''] = - [\eta' \circ \eta_C]. \] \end{lem}
If two bimodules $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ induce a singular equivalence of Morita type with level between algebras $\Lambda$ and~$\Sigma$, then the $\e\Lambda$-module $M \tensor_\Sigma N$ is a syzygy of~$\Lambda$. In the following lemma, we use Lemma~\ref{lem:3x3-splice} to show that under certain assumptions, the tensor functors $(M \tensor_\Sigma N) \tensor_\Lambda -$ and $- \tensor_\Lambda (M \tensor_\Sigma N)$ induce isomorphisms of Ext groups in almost all degrees. This is afterwards used in the proof of Theorem~\ref{thm:hh-iso}.
\begin{lem} \label{lem:tensor-syzygy} Let $\Lambda$ be a finite-dimensional Gorenstein $k$-algebra, and let $U$ be a $\e\Lambda$-module which is projective as a left $\Lambda$-module and as a right $\Lambda$-module. Let $d$ be an integer with $d \ge 2 \cdot \id_\Lambda \Lambda$. Let $K$ be an $i$-th syzygy of $\Lambda$ as $\e\Lambda$-module, for some $i \le d$. Then the maps \[ K \tensor_\Lambda - \colon \extring[>d]{\e\Lambda}{U} \to \extring[>d]{\e\Lambda}{K \tensor_\Lambda U} \quad \text{and} \quad - \tensor_\Lambda K \colon \extring[>d]{\e\Lambda}{U} \to \extring[>d]{\e\Lambda}{U \tensor_\Lambda K} \] are graded rng isomorphisms. \end{lem} \begin{proof} We show that the map $K \tensor_\Lambda -$ is an isomorphism; the proof for $- \tensor_\Lambda K$ is similar. Let \[ \pi\colon \cdots \to P_1 \to P_0 \to \Lambda \to 0 \] be a projective resolution of $\Lambda$ as $\e\Lambda$-module, with $K$ as the $i$-th syzygy, and let \[ \sigma \colon \cdots \to P_1 \tensor_\Lambda U \to P_0 \tensor_\Lambda U \to U \to 0 \] be the result of applying the functor $- \tensor_\Lambda U$ to the sequence~$\pi$ and identifying $\Lambda \tensor_\Lambda U$ with~$U$ in the last term. This sequence is exact since $U$ is projective as left module, and every $P_j \tensor_\Lambda U$ is projective since $U$ is projective as right module. Thus, $\sigma$ is a projective resolution of~$U$, and $K \tensor_\Lambda U$ is an $i$-th syzygy of~$U$.
By Lemma~\ref{lem:gorenstein-envalg}, the enveloping algebra $\e\Lambda$ of $\Lambda$ is Gorenstein, and we have $\id_{\e\Lambda} \e\Lambda \le 2 \cdot \id_\Lambda \Lambda \le d$. Then by Lemma~\ref{lem:rotation}, the $i$-th rotation map \[ \rho_i \colon \extring[>d]{\e\Lambda}{U} \to \extring[>d]{\e\Lambda}{K \tensor_\Lambda U} \] (with respect to the resolution $\sigma$) is a graded rng isomorphism. We show that the map $K \tensor_\Lambda -$ is an isomorphism by showing that it is equal to the map~$\rho_i$, up to sign. More precisely, we show that for any homogeneous element $[\eta] \in \extring[>d]{\e\Lambda}{U}$ of degree $n > d$, we have \[ K \otimes_\Lambda [\eta] = (-1)^{in} \cdot \rho_i([\eta]). \]
Let $[\eta] \in \extring[>d]{\e\Lambda}{U}$ be a homogeneous element of degree $n > d$ represented by an exact sequence \[ \eta \colon 0 \to U \to E_n \to \cdots \to E_1 \to U \to 0. \] We can assume without loss of generality that all the modules $E_j$ are projective as left $\Lambda$-modules and as right $\Lambda$-modules. Let \begin{align*} \pi_i&\colon 0 \to K \to P_{i-1} \to \cdots \to P_0 \to \Lambda \to 0 \\ \sigma_i&\colon 0 \to K \tensor_\Lambda U \to P_{i-1} \tensor_\Lambda U \to \cdots \to P_0 \tensor_\Lambda U \to U \to 0 \end{align*} be truncations of the projective resolutions $\pi$ and $\sigma$. \begin{figure}
\caption{Commutative diagram used in the proof of
Lemma~\ref{lem:tensor-syzygy}.}
\label{fig:tensor}
\end{figure} We construct the commutative diagram in Figure~\ref{fig:tensor} by tensoring $\pi_i$ with~$\eta$ over $\Lambda$ and identifying $\Lambda \tensor_\Lambda -$ with the identity in the last row. The rows and columns of the diagram are exact sequences.
The bottom row in the diagram is the sequence~$\eta$, the top row is the sequence $K \tensor_\Lambda \eta$, and the first and the last column are both equal to the sequence $\sigma_i$. By using Lemma~\ref{lem:3x3-splice} repeatedly, we get the equality \[ [(K \otimes_\Lambda \eta) \circ \sigma_i] = (-1)^{in} [\sigma_i \circ \eta] \] in the extension group $\Ext_{\e\Lambda}^{n+i}(U, K \otimes_\Lambda U)$. By the definition of the rotation map~$\rho_i$, we then get \[ K \otimes_\Lambda [\eta] = [K \otimes_\Lambda \eta] = (-1)^{in} \cdot \rho_i([\eta]). \] Since the map $\rho_i$ is an isomorphism, this means that the map $K \otimes_\Lambda -$ is an isomorphism as well. \end{proof}
We now show that a singular equivalence of Morita type with level between Gorenstein $k$-algebras preserves the Hochschild cohomology in almost all degrees. A weaker form of this result, stating that a singular equivalence of Morita type preserves Hochschild cohomology groups in almost all degrees (but not necessarily the ring structure of the cohomology), appears in~\cite[Remark 4.3]{zz}.
\begin{thm} \label{thm:hh-iso} Let $\Lambda$ and~$\Sigma$ be finite-dimensional Gorenstein $k$-algebras which are singularly equivalent of Morita type with level. Then we have the following. \begin{enumerate} \item The Hochschild cohomology rings $\HH{*}(\Lambda)$ and $\HH{*}(\Sigma)$ are isomorphic in almost all degrees, with isomorphisms that respect the ring structure. \item Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level~$l \ge 1$ \textup{(}see Remark~\ref{rem:hh-iso-level}\textup{)} between $\Lambda$ and~$\Sigma$, and let $d = \max \{ l, 2 \cdot \id_\Lambda \Lambda, 2 \cdot \id_\Sigma \Sigma \}$. Then there are isomorphisms \[ \xymatrix{ \HH{>d}(\Lambda) \ar[d]_{N \tensor_\Lambda - \tensor_\Lambda M}^{\cong} \ar[r]^-{\rho_l}_-{\cong} & \extring[>d]{\e\Lambda}{M \tensor_\Sigma \Sigma \tensor_\Sigma N} \\ \extring[>d]{\e\Sigma}{N \tensor_\Lambda \Lambda \tensor_\Lambda M} & \HH{>d}(\Sigma) \ar[u]_{M \tensor_\Sigma - \tensor_\Sigma N}^{\cong} \ar[l]^-{\rho'_l}_-{\cong} } \] of graded rngs, where the maps $\rho_l$ and~$\rho'_l$ are rotation maps. \end{enumerate} \end{thm} \begin{proof} We show part~(2). Part~(1) then follows directly.
Since $M$ and~$N$ induce a singular equivalence of Morita type with level~$l$, the module $M \tensor_\Sigma \Sigma \tensor_\Sigma N \cong M \tensor_\Sigma N$ is an $l$-th syzygy of~$\Lambda$ as a $\e\Lambda$-module. Let \[ \rho_l \colon \HH{>d}(\Lambda) \to \extring[>d]{\e\Lambda}{M \tensor_\Sigma \Sigma \tensor_\Sigma N} \] be the $l$-th rotation map with respect to a projective resolution of $\Lambda$ with $M \tensor_\Sigma \Sigma \tensor_\Sigma N$ as the $l$-th syzygy. By Lemma~\ref{lem:gorenstein-envalg}, the enveloping algebras $\e\Lambda$ and~$\e\Sigma$ are Gorenstein algebras, and we have $\id_{\e\Lambda} \e\Lambda \le 2 \cdot \id_\Lambda \Lambda$ and $\id_{\e\Sigma} \e\Sigma \le 2 \cdot \id_\Sigma \Sigma$. By Lemma~\ref{lem:rotation}, the rotation map $\rho_l$ is an isomorphism, since \[ \max \{ l, \id_{\e\Lambda} \e\Lambda, \id_{\e\Sigma} \e\Sigma \} \le \max \{ l, 2 \cdot \id_\Lambda \Lambda, 2 \cdot \id_\Sigma \Sigma \} = d. \] We can similarly define the rotation map~$\rho'_l$ and show that it is an isomorphism.
We now show that the maps $N \tensor_\Lambda - \tensor_\Lambda M$ and $M \tensor_\Sigma - \tensor_\Sigma N$ are isomorphisms. For any $n > d$, we can make the following diagram: \begin{equation} \label{eqn:hh-diagram} \vcenter{ \xymatrix{ \HH{n}(\Lambda) \ar[d]_{N \tensor_\Lambda - \tensor_\Lambda M} \ar[r]^-{\rho_l}_-{\cong} & \extring[n]{\e\Lambda}{M \tensor_\Sigma \Sigma \tensor_\Sigma N} \\ \extring[n]{\e\Sigma}{N \tensor_\Lambda \Lambda \tensor_\Lambda M} & \HH{n}(\Sigma) \ar[u]_{M \tensor_\Sigma - \tensor_\Sigma N} \ar[l]^-{\rho'_l}_-{\cong} }} \end{equation} Consider the map $N \tensor_\Lambda - \tensor_\Lambda M$ in this diagram. We construct the following commutative diagram with this map at the top: \[ \xymatrix@C=6em{ \HH{n}(\Lambda) \ar[r]^-{N \tensor_\Lambda - \tensor_\Lambda M} \ar[d]_{(M \tensor_\Sigma N) \tensor_\Lambda -}^{\cong} & \extring[n]{\e\Sigma}{N \tensor_\Lambda \Lambda \tensor_\Lambda M} \ar[d]^{M \tensor_\Sigma - \tensor_\Sigma N} \\ \extring[n]{\e\Lambda}{M \tensor_\Sigma N \tensor_\Lambda \Lambda} \ar[r]_-{- \tensor_\Lambda (M \tensor_\Sigma N)}^-{\cong} & \extring[n]{\e\Lambda}{M \tensor_\Sigma N \tensor_\Lambda \Lambda \tensor_\Lambda M \tensor_\Sigma N} } \] By Lemma~\ref{lem:tensor-syzygy}, the maps $(M \tensor_\Sigma N) \tensor_\Lambda -$ and $- \tensor_\Lambda (M \tensor_\Sigma N)$ in this diagram are isomorphisms, since $M \tensor_\Sigma N$ is an $l$-th syzygy of $\Lambda$ as $\e\Lambda$-module. Therefore, the map $N \tensor_\Lambda - \tensor_\Lambda M$ in diagram~\eqref{eqn:hh-diagram} is a monomorphism. By a similar argument, the map $M \tensor_\Sigma - \tensor_\Sigma N$ in diagram~\eqref{eqn:hh-diagram} is a monomorphism. Since $\HH{n}(\Lambda)$ and $\HH{n}(\Sigma)$ are finite-dimensional over~$k$, it follows that these monomorphisms must be isomorphisms. \end{proof}
\begin{rem} \label{rem:hh-iso-level} In Theorem~\ref{thm:hh-iso}~(2), we assumed that the level~$l$ is positive. The reason for this is that if we had allowed $l=0$, then we could not have made the rotation maps $\rho_l$ and $\rho'_l$. This assumption does not strongly affect the applicability of the theorem, since any equivalence with level~$0$ implies the existence of an equivalence with level~$1$. In general, if two bimodules $\bimod{\Lambda}{M}{\Sigma}$ and $\bimod{\Sigma}{N}{\Lambda}$ induce a singular equivalence of Morita type with level~$l$ between algebras $\Lambda$ and $\Sigma$, then the bimodules $\Omega_{\Lambda \tensor_k
\opposite{\Sigma}}^1(M)$ and $N$ induce a singular equivalence of Morita type with level~$l+1$ between $\Lambda$ and~$\Sigma$. \end{rem}
\section{Finite generation} \label{sec:fg}
Support varieties for modules over artin algebras were defined by Snashall and Solberg in~\cite{ss}, using the Hochschild cohomology ring. In~\cite{ehsst}, Erdmann, Holloway, Snashall, Solberg and Taillefer defined two finite generation conditions \textbf{Fg1} and~\textbf{Fg2} for the Hochschild cohomology ring of an algebra. These conditions ensure that the support varieties for modules over the given algebra have good properties. In~\cite{es}, these conditions were reformulated as a new condition called~\textbf{(Fg)} which is equivalent to the combination of \textbf{Fg1} and \textbf{Fg2}. We use the definition from~\cite{es}.
In this section, we describe the finite generation condition~\fgtext{(Fg)}. We then show the main result of this paper (Theorem~\ref{thm:main}): A singular equivalence of Morita type with level between finite-dimensional Gorenstein $k$-algebras preserves the \fgtext{(Fg)} condition.
In order to define the \fgtext{(Fg)} condition, we first describe a way to view extension rings over an algebra as modules over the Hochschild cohomology ring. Let $\Lambda$ be a finite-dimensional $k$-algebra and $A$ a $\Lambda$-module. We define a graded ring homomorphism \[ \varphi_A \colon \HH*(\Lambda) \to \extring{\Lambda}{A} \] as follows. A homogeneous element of $\HH*(\Lambda)$ can be represented by an exact sequence \[ \eta \colon 0 \to \Lambda \to E \to P_n \to \cdots \to P_0 \to \Lambda \to 0 \] of $\e\Lambda$-modules, where each $P_i$ is projective. Viewed as a sequence of right $\Lambda$-modules, this sequence splits. The complex \[ \eta \tensor_\Lambda A \colon 0 \to \Lambda \tensor_\Lambda A \to E \tensor_\Lambda A \to P_n \tensor_\Lambda A \to \cdots \to P_0 \tensor_\Lambda A \to \Lambda \tensor_\Lambda A \to 0 \] is therefore an exact sequence. By composition with the isomorphism $\mu_A\colon \Lambda \tensor_\Lambda A \to A$ and its inverse, we get an extension \[ \mu_A \circ (\eta \tensor_\Lambda A) \circ \mu_A^{-1} \colon 0 \to A \to E \tensor_\Lambda A \to P_n \tensor_\Lambda A \to \cdots \to P_0 \tensor_\Lambda A \to A \to 0 \] of $A$ by itself, and thus a representative of a homogeneous element in the extension ring $\extring{\Lambda}{A}$. The map $\varphi_A$ is defined by the action \[ \varphi_A([\eta]) = [\mu_A \circ (\eta \tensor_\Lambda A) \circ \mu_A^{-1}] \] on homogeneous elements. By the map~$\varphi_A$, the graded ring $\extring{\Lambda}{A}$ becomes a graded $\HH*(\Lambda)$-module.
\begin{defn} Let $\Lambda$ be a finite-dimensional $k$-algebra. We say that $\Lambda$ satisfies the \fgtext{(Fg)} condition if the following holds. \begin{enumerate} \item The ring $\HH*(\Lambda)$ is Noetherian. \item The $\HH*(\Lambda)$-module $\extring{\Lambda}{\Lambda/\rad
\Lambda}$ is finitely generated. (The module structure is given by the map $\varphi_{\Lambda/\rad \Lambda}$, as described above.) \end{enumerate} \end{defn}
By~\cite[Proposition~5.7]{varieties-survey}, the \fgtext{(Fg)} condition as defined here is equivalent to the combination of the conditions \fgtext{Fg1} and~\fgtext{Fg2} defined in~\cite{ehsst}.
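A standard family of examples is given by group algebras $kG$ of finite groups, which satisfy the \fgtext{(Fg)} condition; in that case the required finite generation statements essentially go back to the Evens--Venkov theorem on the cohomology of finite groups. We mention this only for orientation; it is not used in the sequel.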
The following result describes why Gorenstein algebras are important in connection with the \fgtext{(Fg)} condition.
\begin{thm}\cite[Theorem~1.5~(a)]{ehsst} \label{thm:fg=>gorenstein} If an algebra satisfies the \fgtext{(Fg)}~condition, then it is a Gorenstein algebra. \end{thm}
Our aim is to show that if two Gorenstein $k$-algebras are singularly equivalent of Morita type with level, then the \fgtext{(Fg)} condition holds for one of the algebras if and only if it holds for the other. We use the following result, which describes a relation between two algebras ensuring that \fgtext{(Fg)} for one of the algebras implies \fgtext{(Fg)} for the other.
\begin{prop} \label{prop:fg} Let $\Lambda$ and~$\Sigma$ be finite-dimensional $k$-algebras. Let $A = \Lambda/\rad \Lambda$, and assume that we have a commutative diagram \begin{equation} \begin{gathered} \label{eqn:fg-diagram} \xymatrix@C=5em{ \HH{>d}(\Lambda) \ar[r]^{\varphi_A} \ar[d]^{f}_{\cong} & \extring[>d]{\Lambda}{A} \ar[d]_{g}^{\cong} \\ \HH{>d}(\Sigma) \ar[r]^{\varphi_B} & \extring[>d]{\Sigma}{B} } \end{gathered} \end{equation} of graded rngs, for some $\Sigma$-module $B$ and some positive integer~$d$, where the vertical maps $f$ and~$g$ are isomorphisms. Assume that $\Sigma$ satisfies the \fgtext{(Fg)} condition. Then $\Lambda$ also satisfies~\fgtext{(Fg)}. \end{prop} \begin{proof} This follows from Proposition~6.3 in~\cite{pss}. \end{proof}
We are now ready to prove the main result of this paper.
\begin{thm} \label{thm:main} Let $\Lambda$ and~$\Sigma$ be finite-dimensional Gorenstein algebras over the field~$k$. Assume that $\Lambda$ and~$\Sigma$ are singularly equivalent of Morita type with level. Then $\Lambda$ satisfies \fgtext{(Fg)} if and only if $\Sigma$ satisfies \fgtext{(Fg)}. \end{thm} \begin{proof} We show that if $\Sigma$ satisfies \fgtext{(Fg)}, then $\Lambda$ satisfies \fgtext{(Fg)}. The opposite implication then follows by symmetry. Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules which induce a singular equivalence of Morita type with level~$l \ge 1$ (see Remark~\ref{rem:hh-iso-level}) between $\Lambda$ and~$\Sigma$. Let $d = \max \{ l, 2 \cdot \id_\Lambda \Lambda, 2 \cdot \id_\Sigma \Sigma \}$. Let $A$ be the $\Lambda$-module $\Lambda/\rad \Lambda$.
The $\e\Lambda$-module $M \tensor_\Sigma \Sigma \tensor_\Sigma N \cong M \tensor_\Sigma N$ is an $l$-th syzygy of~$\Lambda$ as $\e\Lambda$-module. Let $\pi$ be a projective resolution of~$\Lambda$ with $M \tensor_\Sigma \Sigma \tensor_\Sigma N$ as the $l$-th syzygy. Then the complex $\pi \tensor_\Lambda A$ is a projective resolution of $\Lambda \tensor_\Lambda A$, with $M \tensor_\Sigma \Sigma \tensor_\Sigma N \tensor_\Lambda A$ as the $l$-th syzygy. We construct the commutative diagram in Figure~\ref{fig:hamburger-diagram}, where the maps $\rho_l$ and $\rho'_l$ are the $l$-th rotation maps with respect to the resolutions $\pi$ and $\pi \tensor_\Lambda A$, respectively. These maps are isomorphisms by Lemma~\ref{lem:rotation}. The map $M \tensor_\Sigma - \tensor_\Sigma N$ in the diagram is an isomorphism by Theorem~\ref{thm:hh-iso}, and the map $M \tensor_\Sigma -$ is an isomorphism by Proposition~\ref{prop:ext-iso}. The isomorphisms $f$ and~$g$ are defined to be the appropriate compositions of the other isomorphisms in the diagram. By Proposition~\ref{prop:fg}, this diagram shows that if the algebra $\Sigma$ satisfies \fgtext{(Fg)}, then $\Lambda$ also satisfies \fgtext{(Fg)}. \begin{figure}
\caption{Commutative diagram used in the proof of
Theorem~\ref{thm:main}.}
\label{fig:hamburger-diagram}
\end{figure} \end{proof}
We now show that the assumption of both algebras being Gorenstein is necessary in the above theorem. Example~5.5 in~\cite{pss} contains two singularly equivalent algebras where one algebra satisfies \fgtext{(Fg)} and the other is not Gorenstein. We use the same algebras, and show that there exists a singular equivalence of Morita type with level between them.
\begin{ex} \label{ex:fg-not-preserved} Let $\Lambda = kQ/\langle\rho\rangle$ and $\Sigma = kR/\langle\sigma\rangle$ be $k$-algebras given by the following quivers and relations: \[ \setlength\arraycolsep{1em} \begin{array}{ll} Q \colon \xymatrix{ 1 \ar@(ul,dl)_{\alpha} \ar[r]^{\beta} & 2 } & \rho = \{ \alpha^2, \beta\alpha \} \\[1.5em] R \colon \xymatrix{ 3 \ar@(ul,dl)_{\gamma} } & \sigma = \{ \gamma^2 \} \end{array} \] The tensor algebra $\Lambda \tensor_k \opposite{\Sigma}$ has the following quiver and relations: \[ Q \times \opposite{R} \colon \vcenter{ \xymatrix{ 1 \times \opposite{3} \ar@(ul,ur)^{\alpha \times \opposite{3}} \ar@(ur,dr)^{1 \times \opposite\gamma} \ar[d]^{\beta \times \opposite{3}} \\ 2 \times \opposite{3} \ar@(ur,dr)^{2 \times \opposite\gamma} }} \qquad \mbox{\scriptsize $\begin{Bmatrix} (\alpha \times \opposite{3})^2, (\beta \times \opposite{3})(\alpha \times \opposite{3}), \\ (1 \times \opposite{\gamma})^2, (2 \times \opposite{\gamma})^2, \\ (\alpha \times \opposite{3})(1 \times \opposite{\gamma}) - (1 \times \opposite{\gamma})(\alpha \times \opposite{3}), \\ (\beta \times \opposite{3})(1 \times \opposite{\gamma}) - (2 \times \opposite{\gamma})(\beta \times \opposite{3}) \end{Bmatrix}$ } \] The tensor algebra $\Sigma \tensor_k \opposite{\Lambda}$ has the following quiver and relations: \[ R \times \opposite{Q} \colon \xymatrix{ 3 \times \opposite{1} \ar@(ul,dl)_{3 \times \opposite{\alpha}} \ar@(dl,dr)_{\gamma \times \opposite{1}} & 3 \times \opposite{2} \ar[l]_{3 \times \opposite{\beta}} \ar@(dl,dr)_{\gamma \times \opposite{2}} } \qquad \mbox{\scriptsize $\begin{Bmatrix} (3 \times \opposite{\alpha})^2, (3 \times \opposite\alpha)(3 \times \opposite\beta), \\ (\gamma \times \opposite{1})^2, (\gamma \times \opposite{2})^2, \\ (3 \times \opposite{\alpha})(\gamma \times \opposite{1}) - (\gamma \times \opposite{1})(3 \times \opposite{\alpha}), \\ (3 \times \opposite{\beta})(\gamma \times \opposite{2}) - (\gamma \times \opposite{1})(3 \times \opposite{\beta}) \end{Bmatrix}$ } \] Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be bimodules given by the following representations over $Q \times \opposite{R}$ and $R \times \opposite{Q}$, respectively: \[ M \colon \vcenter{ \xymatrix{ k^2 \ar@(ul,ur)^{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar@(ur,dr)^{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar[d]_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \\ k^2 \ar@(ur,dr)^{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} }} \qquad N \colon \xymatrix{ k^2 \ar@(ul,dl)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar@(dl,dr)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} & 0 \ar[l] \ar@(dl,dr) } \] We show that the bimodules $M$ and~$N$ induce a singular equivalence of Morita type with level~$1$ between the algebras $\Lambda$ and~$\Sigma$. We first check that these bimodules satisfy the first two conditions in the definition. Considering only the left or right structure of $M$ and~$N$, we have the following four isomorphisms: \begin{align*} \leftmod{\Lambda}{M} &\cong \Lambda & \rightmod{N}{\Lambda} &\cong e_2 \Lambda \\ \leftmod{\Sigma}{N} &\cong \Sigma^2 & \rightmod{M}{\Sigma} &\cong \Sigma \end{align*} Thus the bimodules $M$ and~$N$ are projective when viewed as one-sided (left or right) modules.
To check the last two conditions in the definition of singular equivalence of Morita type with level, we compute the tensor products $M \tensor_\Sigma N$ and $N \tensor_\Lambda M$ as representations of quivers, and check that they are syzygies of $\Lambda$ and~$\Sigma$, respectively. The enveloping algebra $\e\Lambda$ has the following quiver and relations: \[ Q \times \opposite{Q} \colon \vcenter{ \xymatrix{ 1 \times \opposite{1} \ar@(ul,ur)^{\alpha \times \opposite{1}} \ar@(ul,dl)_{1 \times \opposite\alpha} \ar[d]^{\beta \times \opposite{1}} & 1 \times \opposite{2} \ar@(ul,ur)^{\alpha \times \opposite{2}} \ar[l]_{1 \times \opposite\beta} \ar[d]^{\beta \times \opposite{2}} \\ 2 \times \opposite{1} \ar@(ul,dl)_{2 \times \opposite\alpha} & 2 \times \opposite{2} \ar[l]_{2 \times \opposite\beta} }} \qquad \mbox{\scriptsize $\begin{Bmatrix} (\alpha \times \opposite{1})^2, (\beta \times \opposite{1})(\alpha \times \opposite{1}), \\ (\alpha \times \opposite{2})^2, (\beta \times \opposite{2})(\alpha \times \opposite{2}), \\ (1 \times \opposite\alpha)^2, (1 \times \opposite\alpha)(1 \times \opposite\beta), \\ (2 \times \opposite\alpha)^2, (2 \times \opposite\alpha)(2 \times \opposite\beta), \\ (\alpha \times \opposite{1})(1 \times \opposite\alpha) - (1 \times \opposite\alpha)(\alpha \times \opposite{1}), \\ (\alpha \times \opposite{1})(1 \times \opposite\beta) - (1 \times \opposite\beta)(\alpha \times \opposite{2}), \\ (\beta \times \opposite{1})(1 \times \opposite\alpha) - (2 \times \opposite\alpha)(\beta \times \opposite{1}), \\ (\beta \times \opposite{1})(1 \times \opposite\beta) - (2 \times \opposite\beta)(\beta \times \opposite{2}) \end{Bmatrix}$ } \] The tensor product $M \tensor_\Sigma N$ is the $\e\Lambda$-module given by the following representation over $Q \times \opposite{Q}$: \[ M \tensor_\Sigma N \colon \vcenter{ \xymatrix{ k^2 \ar@(ul,ur)^{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar@(ul,dl)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar[d]_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} & 0 \ar[l] \ar[d] \ar@(ul,ur) \\ k^2 \ar@(ul,dl)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} & 0 \ar[l] }} \] The algebra $\Lambda$ considered as a $\e\Lambda$-module has the following representation over $Q \times \opposite{Q}$: \[ \Lambda \colon \vcenter{ \xymatrix{ k^2 \ar@(ul,ur)^{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar@(ul,dl)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar[d]_{\left(\begin{smallmatrix} 1 & 0 \end{smallmatrix}\right)} & 0 \ar[l] \ar[d] \ar@(ul,ur) \\ k \ar@(ul,dl)_{0} & k \ar[l]^{1} }} \] There is an exact sequence \[ 0 \to M \tensor_\Sigma N \to \e\Lambda e_{1 \times \opposite{1}} \oplus \e\Lambda e_{2 \times \opposite{2}} \to \Lambda \to 0 \] of $\e\Lambda$-modules, and thus $M \tensor_\Sigma N$ is a first syzygy of $\Lambda$.
The enveloping algebra $\e\Sigma$ has the following quiver and relations: \[ R \times \opposite{R} \colon \vcenter{ \xymatrix{ 3 \times \opposite{3} \ar@(dl,dr)_{\gamma \times \opposite{3}} \ar@(ul,dl)_{3 \times \opposite\gamma} }} \qquad \mbox{\scriptsize $\begin{Bmatrix} (\gamma \times \opposite{3})^2, (3 \times \opposite\gamma)^2, \\ (\gamma \times \opposite{3})(3 \times \opposite\gamma) - (3 \times \opposite\gamma)(\gamma \times \opposite{3}) \end{Bmatrix}$ } \] The algebra $\Sigma$ considered as a $\e\Sigma$-module has the following representation over $R \times \opposite{R}$: \[ \Sigma \colon \xymatrix{ k^2 \ar@(ul,dl)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} \ar@(dl,dr)_{\left(\begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)} } \] Its minimal projective resolution is \[ \cdots \to \e\Sigma \to \e\Sigma \to \Sigma \to 0, \] with $\Sigma$ itself as every syzygy. The tensor product $N \tensor_\Lambda M$ is isomorphic to $\Sigma$ as $\e\Sigma$-module; in particular, it is a first syzygy of $\Sigma$.
We have now shown that the bimodules $M$ and~$N$ induce a singular equivalence of Morita type with level~$1$ between the algebras $\Lambda$ and~$\Sigma$. The algebra $\Sigma$ satisfies the \fgtext{(Fg)} condition, but $\Lambda$ does not, and is not even a Gorenstein algebra. This shows that the assumption of both algebras being Gorenstein cannot be removed in Theorem~\ref{thm:main}. \end{ex}
For stable equivalences of Morita type (which are singular equivalences of Morita type with level~$0$), we can, under some conditions, remove the assumption of Gorensteinness.
\begin{cor} \label{cor:stable-equivalence} Let $\bimod{\Lambda}{M}{\Sigma}$ and~$\bimod{\Sigma}{N}{\Lambda}$ be indecomposable bimodules that induce a stable equivalence of Morita type between two finite-dimensional $k$-algebras $\Lambda$ and~$\Sigma$. Assume that $\Lambda$ and $\Sigma$ have no semisimple blocks and that $\Lambda/\rad \Lambda$ and $\Sigma/\rad \Sigma$ are separable. Then $\Lambda$ satisfies \fgtext{(Fg)} if and only if $\Sigma$ satisfies \fgtext{(Fg)}. \end{cor} \begin{proof} By~\cite[Corollary~3.1~(2)]{dugas-martinez-villa}, the assumptions in the statement of the result imply that $(M \tensor_\Sigma -, N \tensor_\Lambda -)$ and $(N \tensor_\Lambda -, M \tensor_\Sigma -)$ are adjoint pairs. Then, by~\cite[Corollary~4.6]{liu-xi}, it follows that $\Lambda$ is a Gorenstein algebra if and only if $\Sigma$ is a Gorenstein algebra. The result now follows from Theorem~\ref{thm:main}. \end{proof}
\end{document}
\begin{document}
\title{\bf\Large The Generalized Tur\'{a}n Problem of Two Intersecting Cliques} \date{} \author{ Erica L.L. Liu$^1$, Jian Wang$^2$\\[10pt] $^{1}$Center for Applied Mathematics\\ Tianjin University\\ Tianjin 300072, P. R. China\\[6pt] $^{2}$Department of Mathematics\\ Taiyuan University of Technology\\ Taiyuan 030024, P. R. China\\[6pt] E-mail: $^{1}[email protected], $^{2}[email protected] }
\maketitle
\begin{abstract}
For $s<r$, let $B_{r,s}$ be the graph consisting of two copies of $K_r$, which share exactly $s$ vertices. Denote by $ex(n, K_r, B_{r,s})$ the maximum number of copies of $K_r$ in a $B_{r,s}$-free graph on $n$ vertices. In 1976, Erd\H{o}s and S\'{o}s determined $ex(n,K_3,B_{3,1})$. Recently, Gowers and Janzer showed that $ex(n,K_r,B_{r,r-1})=n^{r-1-o(1)}$. It is a natural question to ask for $ex(n,K_r,B_{r,s})$ for general $r$ and $s$. In this paper, we mainly consider the problem for $s=1$. Utilizing Zykov's symmetrization, we show that $ex(n,K_4, B_{4,1})=\lfloor (n-2)^2/4\rfloor$ for $n\geq 45$. For $r\geq 5$ and $n$ sufficiently large, by F\"{u}redi's structure theorem we show that $ex(n,K_r,B_{r,1}) =\mathcal{N}(K_{r-2},T_{r-2}(n-2))$, where $\mathcal{N}(K_{r-2},T_{r-2}(n-2))$ represents the number of copies of $K_{r-2}$ in the $(r-2)$-partite Tur\'{a}n graph on $n-2$ vertices.
\end{abstract}
\noindent{\bf Keywords:} Generalized Tur\'{a}n number; Zykov's symmetrization; F\"{u}redi's structure theorem.
\section{Introduction}
Let $T$ be a graph and $\mathcal{F}$ be a family of graphs. We say that a graph $G$ is $\mathcal{F}$-free if it does not contain any graph from $\mathcal{F}$ as a subgraph. Let $ex(n, T, \mathcal{F})$ denote the maximum possible number of copies of $T$ in an $\mathcal{F}$-free graph on $n$ vertices. The problem of determining $ex(n,T,\mathcal{F})$ is often called the generalized Tur\'{a}n problem. When $T=K_2$, it reduces to the classical Tur\'{a}n number $ex(n,\mathcal{F})$. For simplicity, we often write $ex(n,T, F)$ for $ex(n,T,\{F\})$.
Let $T$ be a graph on $t$ vertices. The $s$-blow-up of $T$ is the graph obtained by replacing each vertex $v$ of $T$ by an independent set $W_v$ of size $s$, and each edge $uv$ of $T$ by a complete bipartite graph between the corresponding two independent sets $W_u$ and $W_v$. Alon and Shikhelman \cite{alon} showed that $ex(n,T,F) =\Theta(n^t)$ if and only if for any positive integer $s$, $F$ is not a subgraph of the $s$-blow-up of $T$. Otherwise, there exists some $\epsilon(T,F)>0$ such that $ex(n,T,F) \leq n^{t-\epsilon(T,F)}$.
For integers $s<r$, let $B_{r,s}$ be the graph consisting of two copies of $K_r$, which share exactly $s$ vertices. In 1976, Erd\H{o}s and S\'{o}s \cite{sos} determined the maximum number of hyperedges in a 3-uniform hypergraph without two hyperedges intersecting in exactly one vertex. From their result, it is easy to deduce the following theorem. \begin{thm}[Erd\H{o}s and S\'{o}s \cite{sos}]
For all $n$,
\begin{align*}
ex(n,K_3,B_{3,1})=\left\{
\begin{array}{ll}
n, & \hbox{$n\equiv0\pmod 4$;} \\
n-1, & \hbox{$n\equiv1\pmod 4$;} \\
n-2, & \hbox{$n\equiv2$ or $3\pmod 4$.}
\end{array}
\right.
\end{align*} \end{thm}
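To relate this to the criterion of Alon and Shikhelman mentioned above (this observation is only an illustration and is not used later), note that $B_{3,1}$ is a subgraph of the $2$-blow-up of $K_3$: in the complete $3$-partite graph with parts $\{a_1,a_2\}$, $\{b_1,b_2\}$, $\{c_1,c_2\}$, the two triangles
\[
\{a_1,b_1,c_1\}\quad\text{and}\quad \{a_1,b_2,c_2\}
\]
share exactly the vertex $a_1$. Hence $ex(n,K_3,B_{3,1})\leq n^{3-\epsilon}$ for some $\epsilon>0$, which is consistent with the linear bound above.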
The celebrated Ruzsa-Szemer\'{e}di theorem \cite{ruzsa} implies that $ex(n, K_3, B_{3,2})=n^{2-o(1)}$. Recently, Gowers and Janzer \cite{gowera} proposed a natural generalization of the Ruzsa-Szemer\'{e}di theorem, and proved the following result. \begin{thm}[Gowers and Janzer \cite{gowera}]\label{gowers} For each $2\leq s< r$, \[ ex(n,K_r,\{B_{r,s},B_{r,s+1},\ldots,B_{r,r-1}\})=n^{s-o(1)}. \] \end{thm}
For a graph $G$, let $V(G)$ and $E(G)$ be the vertex set and edge set of $G$, respectively. The {\it join} of two graphs $G_1$ and $G_2$, denoted by $G_1\vee G_2$, is defined as $V(G_1\vee G_2)=V(G_1)\cup V(G_2)$ and $E(G_1\vee G_2)=E(G_1)\cup E(G_2)\cup \{xy\colon x\in V(G_1), y\in V(G_2)\}$. The $r$-partite Tur\'{a}n graph on $n$ vertices, denoted by $T_r(n)$, is a complete $r$-partite graph whose part sizes differ by at most one. Denote by $\mathcal{N}(T,G)$ the number of copies of $T$ in $G$.
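For illustration (these elementary facts are standard and are recorded here only for convenience), every copy of $K_r$ in a complete $r$-partite graph takes exactly one vertex from each part, so $\mathcal{N}(K_r, T_r(n))$ equals the product of the part sizes of $T_r(n)$; for instance,
\[
\mathcal{N}\bigl(K_3, T_3(7)\bigr) = 3\cdot 2\cdot 2 = 12 .
\]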
In this paper, by Zykov's symmetrization \cite{zykov} we determine $ex(n, K_4, B_{4,1})$ for $n\geq 45$.
\begin{thm}\label{thm1} For $n\geq 45$, \begin{align*} ex(n,K_4, B_{4,1})=\left\lfloor\frac{(n-2)^2}{4}\right\rfloor, \end{align*} and $K_2\vee T_2(n-2)$ is the unique graph attaining the maximum number of copies of $K_4$. \end{thm}
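As a quick check of the extremal graph (an illustration only, not part of the proof), note that $T_2(n-2)$ is bipartite and hence triangle-free, so every copy of $K_4$ in $K_2\vee T_2(n-2)$ must contain both vertices of the $K_2$ together with an edge of $T_2(n-2)$. Consequently,
\[
\mathcal{N}\bigl(K_4, K_2\vee T_2(n-2)\bigr)=e\bigl(T_2(n-2)\bigr)=\left\lfloor\frac{(n-2)^2}{4}\right\rfloor,
\]
and any two copies of $K_4$ in $K_2\vee T_2(n-2)$ share at least the two vertices of the $K_2$, so this graph is indeed $B_{4,1}$-free.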
Then, by F\"{u}redi's structure theorem \cite{furedi}, we determine $ex(n, K_r, B_{r,1})$ for $r\geq 5$ and $n$ sufficiently large. \begin{thm}\label{thm3} For $r\geq 5$ and sufficiently large $n$, \begin{align*} ex(n,K_r, B_{r,1})=\mathcal{N}(K_{r-2}, T_{r-2}(n-2)), \end{align*} and $K_2\vee T_{r-2}(n-2)$ is the unique graph attaining the maximum number of copies of $K_r$. \end{thm}
Note that $B_{r,0}$ is the vertex-disjoint union of two copies of $K_r$. By F\"{u}redi's structure theorem, we determine $ex(n, K_r, B_{r,0})$ for $r\geq 3$ and $n$ sufficiently large.
\begin{thm}\label{thm8} For $r\geq 3$ and sufficiently large $n$, \begin{align*} ex(n, K_r, B_{r,0})=\mathcal{N}(K_{r-1},T_{r-1}(n-1)), \end{align*} and $K_1\vee T_{r-1}(n-1)$ is the unique graph attaining the maximum number of copies of $K_r$. \end{thm}
Let $r,s$ be positive integers with $s<r$. An integer vector $(a_1,a_2,\ldots,a_t)$ is called a {\it partition} of $r$ if $a_1\geq a_2\geq \ldots\geq a_t>0$ and $\sum_{i=1}^t a_i=r$. Let $P=(a_1,a_2,\ldots,a_t)$ be a partition of $r$. If $\sum_{i\in I} a_i\neq s$ holds for every $I\subset \{1,2,\ldots,t\}$, then we call $P$ an {\it $s$-sum-free} partition of $r$. Denote by $\beta_{r,s}$ the maximum length of an $s$-sum-free partition of $r$.
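To illustrate the definition (this example is not needed in the proofs), take $r=5$ and $s=2$. The partition $(4,1)$ is $2$-sum-free, since its nonempty subset sums are
\[
1,\qquad 4,\qquad 4+1=5,
\]
none of which equals $2$; on the other hand, every partition of $5$ with at least three parts contains either a part equal to $2$ or two parts equal to $1$, and hence a subset summing to $2$. Therefore $\beta_{5,2}=2$.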
\begin{thm}\label{thm9} For any $r>s\geq 2$, if $r\geq 2s+1$, \begin{align*} ex(n, K_r, B_{r,s})=\Theta(n^{r-s-1}); \end{align*} if $r\leq 2s$, then there exist positive reals $c_1$ and $c_2$ such that \begin{align*} c_1n^{\beta_{r,s}}\leq ex(n, K_r, B_{r,s})\leq c_2n^{s}. \end{align*} \end{thm}
Utilizing the graph removal lemma, we establish an upper bound on $ex(n,K_4,B_{4,2})$.
\begin{thm}\label{thm2} For sufficiently large $n$, \begin{align*} \frac{n^2}{12}-2\leq ex(n,K_4, B_{4,2})\leq\frac{n^2}{9}+o(n^2). \end{align*} \end{thm}
The rest of this paper is organized as follows. In Section 2, we prove Theorem \ref{thm1}. In Section 3, we prove Theorems \ref{thm3} and \ref{thm8}. In Section 4, we prove Theorem \ref{thm9}. In Section 5, we prove Theorem \ref{thm2}.
\section{The value of $ex(n,K_4,B_{4,1})$}
Zykov \cite{zykov} introduced a useful tool to prove Tur\'{a}n's theorem, which is called Zykov's symmetrization. In this section, by Zykov's symmetrization we first determine $ex(n,K_4,\{B_{4,1},H_1,K_5\})$, where $H_1$ is a graph on seven vertices as shown in Figure \ref{fig:fig}. Then, we show that a $B_{4,1}$-free graph can be reduced to a $\{B_{4,1},H_1,K_5\}$-free graph by deleting vertices, which leads to a proof of Theorem \ref{thm1}.
\begin{figure}
\caption{A graph $H_1$ on seven vertices. }
\label{fig:fig}
\end{figure}
For $S\subset V(G)$, let $G[S]$ denote the subgraph of $G$ induced by $S$, and let $G - S$ denote the subgraph of $G$ induced by $V(G)\setminus S$.
\begin{lem}\label{lem-1} For $n\geq 2$, \begin{align*} ex(n,K_4, \{B_{4,1}, H_1, K_5\})=\left\lfloor\frac{(n-2)^2}{4}\right\rfloor, \end{align*} and $K_2\vee T_2(n-2)$ is the unique graph attaining the maximum number of $K_4$'s. \end{lem}
\begin{proof} Assume that $G$ is a $\{B_{4,1}, H_1, K_5\}$-free graph with the maximum number of copies of $K_4$. We may further assume that each edge of $G$ is contained in at least one copy of $K_4$, since otherwise we can delete it without decreasing the number of copies of $K_4$. For each $e\in E(G)$, let $\mathcal{K}_4(e)$ denote the set of copies of $K_4$ in $G$ containing $e$. Let \[ E_1= \left\{e\in E(G)\colon \mbox{there exist }K,K'\in \mathcal{K}_4(e)\mbox{ such that }E(K)\cap E(K')=\{e\}\right\} \] and let $G_1$ be the subgraph of $G$ induced by $E_1$.
\begin{claim}\label{claim3} $E_1$ is a matching of $G$. \end{claim} \begin{proof} Suppose to the contrary that there exists a path of length two in $G_1$, say $vuw$. Since $uv\in E_1$, there exist distinct vertices $a_1,b_1,a_2,b_2$ so that both $G[\{u,v,a_1,b_1\}]$ and $G[\{u,v,a_2,b_2\}]$ are copies of $K_4$. Since $uw\in E_1$, there exist distinct vertices $c_1,d_1,c_2,d_2$ so that both $G[\{u,w,c_1,d_1\}]$ and $G[\{u,w,c_2,d_2\}]$ are copies of $K_4$.
{ Case 1.} $w\in\{a_1,b_1,a_2,b_2\}$ or $v\in\{c_1,d_1,c_2,d_2\}$.
Since the two cases are symmetric, we only consider the case $w\in\{a_1,b_1,a_2,b_2\}$. By symmetry, we may assume that $a_1=w$. Now $G[\{u,v,w,b_1\}]$ and $G[\{u,v,a_2,b_2\}]$ are both copies of $K_4$. Clearly, we have either $v\notin \{c_1,d_1\}$ or $v\notin \{c_2,d_2\}$. Without loss of generality, assume that $v\notin\{c_1,d_1\}$. If $\{c_1,d_1\}\cap\{a_2,b_2\}=\emptyset$, then $G[\{u,v,w,a_2,b_2,c_1,d_1\}]$ contains a copy of $B_{4,1}$, which contradicts the assumption that $G$ is $B_{4,1}$-free. If $|\{c_1,d_1\}\cap\{a_2,b_2\}|=1$, by symmetry we assume that $c_1=a_2$, then $G[\{u,v,w,b_1,a_2,b_2,d_1\}]$ contains a copy of $H_1$, a contradiction. If $\{c_1,d_1\}=\{a_2,b_2\}$, then $G[\{u,v,w,a_2,b_2\}]$ is a copy of $K_5$, a contradiction.
{ Case 2.} $w\notin\{a_1,b_1,a_2,b_2\}$ and $v\notin\{c_1,d_1,c_2,d_2\}$.
For $i,j\in \{1,2\}$, we claim that $|\{a_i,b_i\}\cap\{c_j,d_j\}|=1$. If $\{a_i,b_i\}\cap \{c_j,d_j\}=\emptyset$, then $G[\{u,v,w,a_i,b_i,c_j,d_j\}]$ contains $B_{4,1}$ as a subgraph, a contradiction. If $\{a_i,b_i\}=\{c_j,d_j\}$, then $G[\{u,v,w,a_i,b_i,c_i,d_i\}]$ contains $B_{4,1}$ as a subgraph, a contradiction. Hence $|\{a_i,b_i\}\cap\{c_j,d_j\}|=1$. It follows that $\{a_1,b_1,a_2,b_2\}=\{c_1,d_1,c_2,d_2\}$. Then $G[\{u,v,w,a_1,b_1,a_2,b_2\}]$ contains $H_1$ as a subgraph, a contradiction. Thus, the claim holds.\end{proof}
Let $G_2=G- V(G_1)$. For two distinct vertices $u,v\in V(G)$ with $uv\notin E(G)$, define $C_{uv}(G)$ to be the graph obtained by deleting edges incident to $u$ and adding edges in $\{uw\colon w\in N(v)\}$.
\begin{claim}\label{claim22} For two distinct vertices $u,v\in V(G_2)$ with $uv\notin E(G)$, $C_{uv}(G)$ is a $\{B_{4,1}, H_1, K_5\}$-free graph. \end{claim}
\begin{proof} Let $\tilde{G} = C_{uv}(G)$. Since $uv\notin E(G)$, clearly we have $uv\notin E(\tilde{G})$. We first claim that $\tilde{G}$ is $K_5$-free. Otherwise, since $G$ is $K_5$-free, there is a vertex set $K$ containing $u$ such that $\tilde{G}[K]\cong K_5$. Then $v\notin K$ since $uv\notin E(\tilde{G})$. It follows that $K\setminus\{u\}\cup \{v\}$ induces a copy of $K_5$ in $G$, a contradiction.
\begin{figure}
\caption{The configuration in the case $u,v\in S$ in the proof of Claim~\ref{claim22}.}
\label{fig:fz}
\end{figure}
If $\tilde{G}$ contains a copy of $B_{4,1}$, let $S=\{a_1,a_2,a_3,b_1,b_2,b_3,c\}$ be a subset of $V(\tilde{G})$ such that both $\tilde{G}[\{a_1,a_2,a_3,c\}]$ and $\tilde{G}[\{b_1,b_2,b_3,c\}]$ are copies of $K_4$. If $u\notin S$, then $G[S]$ is a copy of $B_{4,1}$, a contradiction. If $u\in S$ but $v\notin S$, then $G[(S\setminus\{u\})\cup \{v\}]$ is a copy of $B_{4,1}$, a contradiction. If $u,v\in S$, since $uv\notin E(\tilde{G})$, by symmetry we may assume that $a_1=v$ and $b_1=u$. Since $u$ is a ``clone'' of $v$ in $\tilde{G}$, we have $vb_2,vb_3\in E(G)$ (as shown in Figure \ref{fig:fz}). Then both $G[\{v,c,a_2,a_3\}]$ and $G[\{v,c,b_2,b_3\}]$ are copies of $K_4$ in $G$. It follows that $vc\in E_1$, which contradicts the assumption that $v\in V(G)\setminus V(G_1)$. Thus $\tilde{G}$ is $B_{4,1}$-free.
\begin{figure}
\caption{A copy of $H_1$ in $\tilde{G}$. }
\label{fig:fig-3}
\end{figure}
If $\tilde{G}$ contains a copy of $H_1$, let $T=\{h,i,j,k,l,m,n\}$ be a subset of $V(\tilde{G})$ such that $\tilde{G}[\{h,i,j,k\}]$, $\tilde{G}[\{i,j,k,m\}]$, $\tilde{G}[\{i,k,l,m\}]$ and $\tilde{G}[\{j,k,m,n\}]$ are all copies of $K_4$ as shown in Figure \ref{fig:fig-3}. Similarly, we have $u,v\in T$. Since $uv\notin E(\tilde{G})$, by symmetry we have to consider three cases: (i) $h=u$, $n=v$; (ii) $h=u$, $m=v$ or (iii) $h=v$, $m=u$. If $h=u$ and $n=v$, then $vi\in E(G)$ since $ui\in E(\tilde{G})$. It follows that $\{i,j,k,m,v\}$ induces a copy of $K_5$ in $G$, which contradicts the assumption that $G$ is $K_5$-free. If $h=u$ and $m=v$, then $kv\in E_1$ since both $G[\{k,v,i,l\}]$ and $G[\{k,v,j,n\}]$ are copies of $K_4$, which contradicts the fact that $v\in V(G_2)$. If $h=v$ and $m=u$, then $vl,vn\in E(G)$ since $ul,un\in E(\tilde{G})$. It follows that both $G[\{k,v,i,l\}]$ and $G[\{k,v,j,n\}]$ are copies of $K_4$, which contradicts the fact that $v\in V(G_2)$. Hence $\tilde{G}$ is $H_1$-free. \end{proof}
By Zykov's symmetrization, we prove the following claim.
\begin{claim}\label{claim4} $G_2$ is a complete $r$-partite graph with $r\leq 4$. \end{claim} \begin{proof} Recall that $G$ is a $\{B_{4,1}, H_1, K_5\}$-free graph with the maximum number of copies of $K_4$ and each edge of $G$ is contained in at least one copy of $K_4$. We define a binary relation $R$ on $V(G_2)$ as follows: for any two vertices $x,y\in V(G_2)$, $xRy$ if and only if $xy\notin E(G)$. We shall show that $R$ is an equivalence relation. Since $G$ is loop-free, it follows that $R$ is reflexive. Since $G$ is an undirected graph, it follows that $R$ is symmetric.
Now we show that $R$ is transitive. Suppose to the contrary that there exist $x,y,z\in V(G_2)$ such that $xy, yz\notin E(G_2)$ but $xz\in E(G_2)$. For $u,v\in V(G_2)$, let $k_4(u)$ be the number of copies of $K_4$ in $G$ containing $u$, and $k_4(u,v)$ be the number of copies of $K_4$ in $G$ containing $u$ and $v$.
{ Case 1.} $k_4(y)<k_4(x)$ or $k_4(y)<k_4(z)$. Since the two cases are symmetric, we only consider the case $k_4(y)<k_4(x)$. Let $\tilde{G}=C_{yx}(G)$. By Claim \ref{claim22}, $\tilde{G}$ is $\{B_{4,1}, H_1, K_5\}$-free since $G$ is $\{B_{4,1}, H_1, K_5\}$-free. But now we have \[ \mathcal{N}(K_4,\tilde{G}) = \mathcal{N}(K_4,G) -k_4(y) +k_4(x)> \mathcal{N}(K_4,G), \] which contradicts the assumption that $G$ is a $\{B_{4,1}, H_1, K_5\}$-free graph with the maximum number of copies of $K_4$.
{ Case 2.} $k_4(y)\geq k_4(x)$ and $k_4(y)\geq k_4(z)$. Let $G^*=C_{xy}(C_{zy}(G))$. By Claim \ref{claim22}, $G^*$ is $\{B_{4,1}, H_1, K_5\}$-free. Since each edge in $G$ is contained in at least one copy of $K_4$, it follows that \begin{align*} \mathcal{N}(K_4, G^*)&=\mathcal{N}(K_4, G)-(k_4(x)+k_4(z)-k_4(x,z))+2k_4(y)\\ &\geq \mathcal{N}(K_4, G)+k_4(x,z)\\ &>\mathcal{N}(K_4, G), \end{align*} which contradicts the assumption that $G$ is a $\{B_{4,1}, H_1, K_5\}$-free graph with the maximum number of copies of $K_4$. Thus, we conclude that $xz\notin E(G)$ and $R$ is transitive. Since $R$ is an equivalence relation on $V(G_2)$ and $G$ is $K_5$-free, it follows that $G_2$ is a complete $r$-partite graph with $r\leq 4$. \end{proof}
\begin{claim}\label{claim5}
For any copy $K$ of $K_4$ in $G$ and any $uv\in E_1$, $|V(K)\cap \{u,v\}|\neq 1$. \end{claim}
\begin{proof} Suppose for contradiction that there exists $\{a, b,c,d,v\}\subset V(G)$ such that $G[\{a,b,c,d\}]$ is isomorphic to $K_4$ and $bv$ is an edge in $E_1$, as shown in Figure \ref{fig:f3}.
\begin{figure}
\caption{An edge in $E_1$ is attached to a copy of $K_4$.}
\label{fig:f3}
\end{figure}
Since $bv\in E_1$, there exist distinct vertices $x_1,y_1,x_2,y_2$ such that both $G[\{b,v,x_1,y_1\}]$ and $G[\{b,v,x_2,y_2\}]$ are copies of $K_4$ in $G$. Then either $|\{x_1,y_1\}\cap \{a,c,d\}|\leq 1$ or $|\{x_2,y_2\}\cap \{a,c,d\}|\leq 1$ holds since $x_1,y_1,x_2,y_2$ are distinct. By symmetry, we assume that $|\{x_1,y_1\}\cap \{a,c,d\}|\leq 1$. If $\{x_1,y_1\}\cap \{a,c,d\}=\emptyset$, then $G[\{b,v,x_1,y_1,a,c,d\}]$ contains a copy of $B_{4,1}$, a contradiction. If $|\{x_1,y_1\}\cap \{a,c,d\}|=1$, without loss of generality, we assume that $x_1=a$. Since both $G[\{a,b,y_1,v\}]$ and $G[\{a,b,c,d\}]$ are copies of $K_4$ sharing only the edge $ab$, it follows that $ab\in E_1$, which contradicts Claim \ref{claim3}. Thus, we conclude that $|V(K)\cap \{u,v\}|\neq 1$ for any copy $K$ of $K_4$ in $G$ and any $uv\in E_1$. \end{proof}
Now let $K$ be a copy of $K_4$ in $G$. Recall that $E_1$ is a matching in $G$ and $G_1$ is the graph induced by $E_1$. If $|V(K)\cap V(G_1)|=1$ or 3, then we will find an edge in $E_1$ attached to $K$, which contradicts Claim \ref{claim5}. Thus $|V(K)\cap V(G_1)|\in \{0,2,4\}$. Moreover, if $|V(K)\cap V(G_1)|=2$, let $\{x,y\}= V(K)\cap V(G_1)$, then by Claim \ref{claim5} we have $xy\in E_1$. Recall that $\mathcal{K}_4(e)$ represents the set of copies of $K_4$ in $G$ containing $e$ for $e\in E(G)$. Define \begin{align*}
\mathcal{K}_1(G)=&\{K\colon K\mbox{ is a copy of }K_4 \mbox{ in } G \mbox{ and } V(K)\subset V(G_1)\};\\ \mathcal{K}_2(G)=&\{K\colon K\mbox{ is a copy of }K_4 \mbox{ in } G \mbox{ and } V(K)\subset V(G_2)\};\\
\mathcal{K}_3(G)=&\{K\colon K \in \mathcal{K}_4(e) \mbox{ for some }e \in E_1\mbox{ and } |V(K)\cap V(G_1)|=2\}.
\end{align*}
Let $|V(G_1)|=n_1$, $|V(G_2)|=n-n_1=n_2$. Since $E_1$ is a matching, it follows that $n_1$ is even. By Claim \ref{claim5}, for any $K\in \mathcal{K}_1(G)$ the set $E(K)\cap E_1$ is a matching of size 2. To derive an upper bound on $|\mathcal{K}_1(G)|$, we define a graph $H$ with $V(H)=E_1$ as follows. For any $e_1,e_2\in E_1$, $e_1e_2$ is an edge of $H$ if and only if there exists a copy of $K_4$ containing both $e_1$ and $e_2$. Since $G$ is $K_5$-free, it is easy to see that $H$ is triangle-free. Moreover, each copy of $K_4$ in $\mathcal{K}_1(G)$ corresponds to a distinct edge in $H$. Thus, by Mantel's Theorem \cite{mantel} we have \[
|\mathcal{K}_1(G)|= e(H) \leq \left\lfloor\frac{|E_1|^2}{4}\right\rfloor=\left\lfloor\frac{n_1^2}{16}\right\rfloor. \]
We have shown that $G_2$ is a complete $r$-partite graph with $r\leq 4$ in Claim \ref{claim4}. If $r\leq 1$, then $\mathcal{K}_2(G)= \mathcal{K}_3(G)=\emptyset$. Thus, we have \[
\mathcal{N}(K_4,G)=|\mathcal{K}_1(G)|\leq \left\lfloor\frac{n_1^2}{16}\right\rfloor \leq \left\lfloor\frac{n^2}{16}\right\rfloor \leq \left\lfloor\frac{(n-2)^2}{4}\right\rfloor, \] where the equalities hold if and only if $n=4$ and $G$ is isomorphic to $K_4$.
If $r=2$, then $\mathcal{K}_2(G)=\emptyset$. If $n_1=0$, then we have $\mathcal{N}(K_4,G)=0$. Hence we may assume that $n_1\geq 2$. We claim that each edge in $E(G_2)$ is contained in at most one copy of $K_4$ in $\mathcal{K}_3(G)$. Otherwise, by the definition of $\mathcal{K}_3(G)$, there exists an edge $e\in E(G_2)$ contained in two distinct copies of $K_4$, which contradicts the fact that $e\notin E_1$. Then \[
|\mathcal{K}_3(G)| \leq e(G_2) \leq \left\lfloor\frac{n_2^2}{4}\right\rfloor. \] Thus, we have \[
\mathcal{N}(K_4,G)=|\mathcal{K}_1(G)|+|\mathcal{K}_3(G)| \leq \left\lfloor\frac{n_1^2}{16}\right\rfloor+\left\lfloor\frac{n_2^2}{4}\right\rfloor. \] For even integer $x$ with $2\leq x\leq n$, let \[ f(x) = \left\lfloor\frac{x^2}{16}\right\rfloor+\left\lfloor\frac{(n-x)^2}{4}\right\rfloor. \] Then \begin{align*} f(x-2)&=\left\lfloor\frac{(x-2)^2}{16}\right\rfloor+\left\lfloor\frac{(n-x+2)^2}{4}\right\rfloor\\ &=\left\lfloor\frac{x^2-4x+4}{16}\right\rfloor+\left\lfloor\frac{(n-x)^2}{4}+n-x+1\right\rfloor\\ &\geq \left\lfloor\frac{x^2}{16}\right\rfloor -\frac{x-1}{4}-1 +\left\lfloor\frac{(n-x)^2}{4}\right\rfloor+n-x+1\\ &\geq f(x)+n-\frac{5x-1}{4} \end{align*} and \begin{align*} f(x-2)&\leq \left\lfloor\frac{x^2}{16}\right\rfloor -\frac{x-1}{4}+1 +\left\lfloor\frac{(n-x)^2}{4}\right\rfloor+n-x+1\\ &\leq f(x)+n-\frac{5x-9}{4}. \end{align*} Thus, $f(x-2)\geq f(x)$ for $x\leq \frac{4n+1}{5}$ and $f(x-2)\leq f(x)$ for $x\geq \frac{4n+9}{5}$. Therefore, for even $n$ we have \[ \mathcal{N}(K_4,G) \leq \max\{f(2),f(n)\}=\max\left\{\left\lfloor\frac{(n-2)^2}{4}\right\rfloor,\left\lfloor\frac{n^2}{16}\right\rfloor\right\}\leq \left\lfloor\frac{(n-2)^2}{4}\right\rfloor, \] where the equality holds if and only if $G$ is isomorphic to $K_2\vee T_2(n-2)$. For odd $n$ we have \[ \mathcal{N}(K_4,G) \leq \max\{f(2),f(n-1)\}=\max\left\{\left\lfloor\frac{(n-2)^2}{4}\right\rfloor,\left\lfloor\frac{(n-1)^2}{16}\right\rfloor\right\}\leq \left\lfloor\frac{(n-2)^2}{4}\right\rfloor, \] where the equality holds if and only if $G$ is isomorphic to $K_2\vee T_2(n-2)$.
If $r= 3$, there exists a triangle $xyz$ in $G_2$. Since each edge in $G$ is contained in at least one copy of $K_4$, by Claim \ref{claim5} there exist $ab, cd\in E_1$ such that both $G[\{x,y,a,b\}]$ and $G[\{y,z,c,d\}]$ are copies of $K_4$ in $G$. Since $E_1$ is a matching, we have either $\{a,b\}=\{c,d\}$ or $\{a,b\}\cap \{c,d\}=\emptyset$. If $\{a,b\}=\{c,d\}$, then $G[\{x,y,z,a,b\}]$ is a copy of $K_5$, a contradiction. If $\{a,b\}\cap \{c,d\}=\emptyset$, then $G[\{x,y,z,a,b,c,d\}]$ contains $B_{4,1}$, a contradiction. Thus, we conclude that $r\neq 3$.
If $r=4$, let $V_1,V_2,V_3,V_4$ be the four vertex classes of $G_2$. Since $G$ is $B_{4,1}$-free, at least two of $|V_i|$'s equal one. Without loss of generality, we assume that $|V_3|=|V_4|=1$. Let $V_3=\{u\}$ and $V_4=\{v\}$. Since $uv\notin E_1$, it follows that one of $|V_1|$ and $|V_2|$ equals one. By symmetry let $|V_2|=1$. Then, we have \[
|\mathcal{K}_2(G)| = |V_1| = n_2-3. \] Moreover, we claim that $\mathcal{K}_3(G)=\emptyset$. Otherwise, assume that there exists $K\in\mathcal{K}_3(G)$ such that $V(K)\cap V(G_2)=\{x,y\}$. Since $x,y$ are also contained in some $K'\in\mathcal{K}_2(G)$, it follows that $E(K)\cap E(K')=\{xy\}$, which contradicts the fact that $xy\notin E_1$. Since $4\leq n_2\leq n$, we have \begin{align*}
\mathcal{N}(K_4,G) &=|\mathcal{K}_1(G)|+|\mathcal{K}_2(G)|\\
&\leq \left\lfloor\frac{n_1^2}{16}\right\rfloor + n_2-3\\
&\leq \max\left\{\left\lfloor\frac{(n-4)^2}{16}\right\rfloor +1,n-3\right\}\\
&\leq \left\lfloor\frac{(n-2)^2}{4}\right\rfloor, \end{align*} in which the equality holds if and only if $n=4$ and $G\cong K_4$ or $n=5$ and $G \cong K_2\vee T_2(3)$. Thus, the lemma holds. \end{proof}
Now we are in position to prove Theorem \ref{thm1}.
\begin{proof}[Proof of Theorem \ref{thm1}] Let $G$ be a $B_{4,1}$-free graph on $n$ vertices. We show that $G$ can be made $\{B_{4,1},H_1,K_5\}$-free by deleting vertices. Let $H_2$ be a graph on six vertices as shown in Figure \ref{fig:fig-2}.
\begin{figure}
\caption{A graph $H_2$ on six vertices. }
\label{fig:fig-2}
\end{figure}
\begin{claim}\label{claim1}
There exists a subset $V'\subset V(G)$ such that $G'=G - V'$ is $\{H_1, H_2\}$-free and $\mathcal{N}(K_4, G')\geq \mathcal{N}(K_4,G) -10 |V'|$. \end{claim}
\begin{proof}
Assume that $G$ contains $H_2$ as a subgraph. Without loss of generality, we further assume that $A=\{a,b,c,d,e,f\}$ is a subset of $V(G)$ such that $G[A]$ contains $H_2$ (see Figure \ref{fig:fig-2}). We first claim that $V(K)\subset A$ for each copy $K$ of $K_4$ containing $f$. Otherwise, if $|V(K)\cap A|=1$, then $K$ and $G[\{c,d,e,f\}]$ are both copies of $K_4$ that share exactly one vertex $f$, contradicting the fact that $G$ is $B_{4,1}$-free. If $|V(K)\cap A|=2$, by symmetry we may assume that $V(K)\cap A=\{e,f\}$. Then $K$ and $G[\{b,c,d,e\}]$ are both copies of $K_4$ that share exactly one vertex $e$, a contradiction. If $|V(K)\cap A|=3$, by symmetry we assume that $V(K)\cap A=\{d,e,f\}$. Then $K$ and $G[\{a,b,c,e\}]$ are both copies of $K_4$ that share exactly one vertex $e$, a contradiction. Thus, we conclude that $V(K)\subset A$ for each copy $K$ of $K_4$ containing $f$. Now we delete $f$ from $G$ to destroy a copy of $H_2$. By doing this, we lose at most $\binom{|A\setminus\{f\}|}{3}=10$ copies of $K_4$ since they are contained in $A$. We do it iteratively until the resulting graph is $H_2$-free. Let $G_1$ be the resulting graph and $V_1$ be the set of deleted vertices. Clearly, we have $\mathcal{N}(K_4, G_1)\geq \mathcal{N}(K_4,G)-10|V_1|$.
Now $G_1$ is $\{B_{4,1},H_2\}$-free. Assume that $G_1$ contains $H_1$ as a subgraph. Let $B=\{h,i,j,k,l,m,n\}$ be a subset of $V(G_1)$ such that $G_1[B]$ contains $H_1$ (see Figure \ref{fig:fig-3}). It is easy to see that $hm$ is not an edge in $G_1$. Otherwise, $G_1[\{h,i,j,k,m\}]$ is a copy of $K_5$ and $G_1[\{h,i,j,k,m,l\}]$ contains a copy of $H_2$, a contradiction. Now we claim that $V(K)\subset B\setminus\{m\}$ for each copy $K$ of $K_4$ in $G_1$ containing $h$. Otherwise, we have one of the following cases:
\begin{itemize}
\item If $V(K)\cap B\subset \{h,l,n\}$, then $K$ and $G_1[\{h,i,j,k\}]$ form a copy of $B_{4,1}$;
\item if $|V(K)\cap\{i,j,k\}|=1$, then $K$ and $G_1[\{i,j,k,m\}]$ form a copy of $B_{4,1}$;
\item if $V(K)\cap B=\{h,i,j\}$ or $\{h,i,k\}$, then $K$ and $G_1[\{j,k,m,n\}]$ form a copy of $B_{4,1}$;
\item if $V(K)\cap B=\{h,j,k\}$, then $K$ and $G_1[\{i,k,l,m\}]$ form a copy of $B_{4,1}$.
\end{itemize}
Since $G_1$ is $B_{4,1}$-free, each of these cases leads to a contradiction. Note that $hm$ is not an edge in $G_1$. We have $V(K)\subset B\setminus\{m\}$ for each copy $K$ of $K_4$ in $G_1$ containing $h$. By deleting $h$ from $G_1$, we destroy a copy of $H_1$ and lose at most $\binom{|B\setminus\{h,m\}|}{3}=10$ copies of $K_4$. We do it iteratively until the resulting graph is $H_1$-free. Let $G_2$ be the resulting graph and $V_2$ be the set of deleted vertices. Clearly, we have $\mathcal{N}(K_4, G_2)\geq \mathcal{N}(K_4,G_1) -10|V_2|$.
Let $G'=G_2$ and $V'=V_1\cup V_2$. Clearly, $G'$ is $\{H_1,H_2\}$-free and
$\mathcal{N}(K_4, G')\geq \mathcal{N}(K_4,G)-10|V'|$. \end{proof}
\begin{claim}\label{claim2}
There exists a subset $V''\subset V(G')$ such that $G''=G'- V''$ is $K_5$-free and $\mathcal{N}(K_4, G'')\geq \mathcal{N}(K_4,G') -|V''|$. \end{claim} \begin{proof}
Since $G'$ is $\{B_{4,1},H_2\}$-free, it is easy to see that each pair of distinct copies of $K_5$ in $G'$ is vertex-disjoint. Let $T$ be a copy of $K_5$ in $G'$. We claim that $V(K)\subset V(T)$ for each copy $K$ of $K_4$ in $G'$ with $V(T)\cap V(K)\neq \emptyset$. Otherwise, if $|V(K)\cap V(T)|\leq 2$, then it is easy to find a copy of $B_{4,1}$ in $G'$, a contradiction. If $|V(K)\cap V(T)|=3$, then we will find a copy of $H_2$ in $G'$, a contradiction. Thus, we conclude that $V(K)\subset V(T)$ for each copy $K$ of $K_4$ in $G'$ with $V(T)\cap V(K)\neq \emptyset$. By deleting $V(T)$ from $G'$, we lose at most $\binom{|V(T)|}{4}=5$ copies of $K_4$. Repeating this process, finally we arrive at a $K_5$-free graph $G''$. Let $V''$ be the set of deleted vertices. Clearly, $G''$ is $K_5$-free and
$\mathcal{N}(K_4,G'')\geq \mathcal{N}(K_4, G')-|V''|$. \end{proof}
Let $X=V'\cup V''$ and $|X|=x$. Note that $G''$ is $\{B_{4,1},H_1,K_5\}$-free. By Lemma \ref{lem-1} we have \[ \mathcal{N}(K_4,G'')\leq \left\lfloor\frac{(n-x-2)^2}{4}\right\rfloor. \] By Claims \ref{claim1} and \ref{claim2}, we have \[ \mathcal{N}(K_4,G)\leq \left\lfloor\frac{(n-x-2)^2}{4}\right\rfloor +10x = \left\lfloor\frac{(n-x-2)^2}{4}+10x\right\rfloor. \] Since $f(x) = \frac{(n-x-2)^2}{4}+10x$ is a convex function and $0\leq x\leq n$, it follows that \[ \mathcal{N}(K_4,G)\leq \max\left\{\left\lfloor\frac{(n-2)^2}{4}\right\rfloor,10n+1\right\}. \] Since $n\geq 45$, we have $(n-2)^2\geq 40n+4$ (equivalently, $n(n-44)\geq 0$), so $\lfloor(n-2)^2/4\rfloor\geq 10n+1$ and hence $\mathcal{N}(K_4,G)\leq \lfloor(n-2)^2/4\rfloor$. Moreover, by Lemma \ref{lem-1}, the equality holds if and only if $G$ is isomorphic to $K_2\vee T_2(n-2)$. Thus, the theorem holds. \end{proof}
\section{The values of $ex(n,K_r,B_{r,1})$ and $ex(n,K_r,B_{r,0})$}
By F\"{u}redi's structure theorem, Frankl and F\"{u}redi \cite{ff85} determined the maximum number of hyperedges in an $r$-uniform hypergraph without two hyperedges sharing exactly $s$ vertices for $r\geq 2s+2$. In this section, we determine $ex(n,K_r,B_{r,1})$ and $ex(n,K_r,B_{r,0})$ by following a similar approach.
First, we recall a result of Frankl and F\"{u}redi on intersection closed families (Lemma 5.5 in \cite{ff85}). Let $X$ be a finite set and $2^X$ be the family of all the subsets of $X$. We say that $\mathcal{I} \subset 2^X$ is {\it intersection closed} if for any $I, I'\in \mathcal{I} $, $I\cap I' \in \mathcal{I}$. We say $I\subset X$ is {\it covered} by $\mathcal{I}$ if there exists an $I'\in \mathcal{I}$ such that $I\subset I'$.
\begin{thm}[Frankl and F\"{u}redi \cite{ff85}]\label{thm5}
Let $r$ and $s$ be positive integers with $r\geq 2s+3$ and let $F$ be an $r$-element set. Suppose that $\mathcal{I}\subset 2^F\setminus\{F\}$ is an intersection closed family such that $|I|\neq s$ for any $I\in \mathcal{I}$ and all the $(r-s-2)$-element subsets of $F$ are covered by $\mathcal{I}$. Then there exists an $(s+1)$-element subset $A(F)$ of $F$ such that $$\{I: A(F)\subset I\subsetneq F\}\subset \mathcal{I}.$$ \end{thm}
We use $[n]$ to denote the set $\{1, \ldots , n\}$ and use $\binom{[n]}{r}$ to denote the collection of all $r$-element subsets of $[n]$. Let $\mathcal{F}\subset \binom{[n]}{r}$ be a hypergraph. We call $\mathcal{F}$ $r$-partite if there exists a partition $[n]=X_1\cup \cdots \cup X_r$ such that $|F\cap X_i|=1$ for all $F\in \mathcal{F}$ and $i\in \{1,2,\ldots,r\}$.
We adopt the statement of F\"{u}redi's structure theorem given by Frankl and Tokushige in \cite{ft}. For clarity, we recall some definitions from \cite{ft}. Let $\mathcal{F}\subset \binom{[n]}{r}$ be an $r$-partite hypergraph with partition $[n]=X_1\cup\cdots \cup X_r$. For any $F\in \mathcal{F}$, define the {\it restriction} of $\mathcal{F}$ on $F$ by $$\mathcal{I}(F, \mathcal{F})=\{F'\cap F: F'\in \mathcal{F}\setminus\{F\}\}.$$
A set of $p$ hyperedges $F_1,\ldots,F_p$ in $\mathcal{F}$ is called a {\it $p$-sunflower} if $F_i\cap F_j = C$ for every $1\leq i<j\leq p$ and some set $C$. The set $C$ is called the {\it center} of the $p$-sunflower.
F\"{u}redi \cite{furedi} proved the following fundamental result, which was conjectured by Frankl. It roughly says that every $r$-uniform hypergraph $\mathcal{F}$ contains a large $r$-partite subhypergraph $\mathcal{F}^*$ satisfying that $\mathcal{I}(F, \mathcal{F}^*)$ is isomorphic to $\mathcal{I}(F', \mathcal{F}^*)$ for any $F,F'\in \mathcal{F}^*$.
\begin{thm}[F\"{u}redi \cite{furedi}]\label{thm4} For positive integers $r$ and $p$, there exists a positive constant $c=c(r,p)$ such that every hypergraph $\mathcal{F}\subset \binom{[n]}{r}$ contains an $r$-partite subhypergraph $\mathcal{F}^*$ with partition $[n]=X_1\cup\cdots \cup X_r$ satisfying (i)-(iv).
(i) $|\mathcal{F}^*|\geq c|\mathcal{F}|$.
(ii) For any $F_1,F_2\in \mathcal{F}^*$, $\mathcal{I}(F_1, \mathcal{F}^*)$ is isomorphic to $\mathcal{I}(F_2, \mathcal{F}^*)$.
(iii) For $F\in \mathcal{F}^*$, $\mathcal{I}(F, \mathcal{F}^*)$ is intersection closed.
(iv) For $F\in \mathcal{F}^*$ and every $I\in \mathcal{I}(F,\mathcal{F}^*)$, $I$ is the center of a $p$-sunflower in $\mathcal{F}^*$. \end{thm}
We need the following two results. The first one is due to Deza, Erd\H{o}s and Frankl \cite{deza}.
\begin{lem}[Deza, Erd\H{o}s and Frankl \cite{deza}]\label{lem-2} Suppose that $\{E_1,\ldots,E_{r+1}\}$ and $\{F_1,\ldots,F_{r+1}\}$ are both $(r+1)$-sunflowers in an $r$-uniform hypergraph with centers $C_1$ and $C_2$, respectively. Then there exist $i$ and $j$ such that $E_i\cap F_j=C_1\cap C_2$. \end{lem}
The second one is due to Zykov \cite{zykov}. He showed that the Tur\'{a}n graph maximizes the number of $s$-cliques in $n$-vertex $K_{t+1}$-free graphs for $s\leq t$.
\begin{thm}[Zykov \cite{zykov}]\label{thm7} For $s\leq t$, $$ex(n, K_s, K_{t+1})=\mathcal{N}(K_s, T_t(n)),$$ and $T_{t}(n)$ is the unique graph attaining the maximum number of copies of $K_s$. \end{thm}
Let $\mathcal{F}\subset \binom{[n]}{r}$ be a hypergraph and $x\in [n]$. Define \[ N_{\mathcal{F}}(x) = \{T\colon T\cup\{x\}\in \mathcal{F}\}. \] The degree of $x$ in $\mathcal{F}$, denoted by $\deg_{\mathcal{F}}(x)$, is the cardinality of $N_{\mathcal{F}}(x)$.
Now we are ready to prove Theorem \ref{thm3}.
\begin{proof}[Proof of Theorem \ref{thm3}] Let $G$ be a $B_{r,1}$-free graph on $[n]$ with the maximum number of copies of $K_r$. Since $K_2\vee T_{r-2}(n-2)$ is $B_{r,1}$-free, we may assume that $\mathcal{N}(K_r, G)\geq \mathcal{N}(K_{r-2}, T_{r-2}(n-2))$.
Let \[ \mathcal{F} = \left\{F\in \binom{[n]}{r}\colon G[F] \mbox{ is a clique}\right\}. \]
Clearly, $|F_1\cap F_2|\neq 1$ for any $F_1, F_2\in \mathcal{F}$ since $G$ is $B_{r,1}$-free. Now we apply Theorem \ref{thm4} with $p=r+1$ to $\mathcal{F}$ and obtain $\mathcal{F}_1=\mathcal{F}^*$ satisfying (i)-(iv). Then we apply Theorem \ref{thm4} to $\mathcal{F}-\mathcal{F}_1$ to obtain $\mathcal{F}_2=(\mathcal{F}-\mathcal{F}_1)^*$; in general, in the $i$-th step we obtain $\mathcal{F}_i=(\mathcal{F}-(\mathcal{F}_1\cup\cdots\cup \mathcal{F}_{i-1}))^*$. We stop if there is an $F_0\in \mathcal{F}_i$ and an $(r-3)$-element subset $B_0$ of $F_0$ such that $B_0$ is not covered by $\mathcal{I}(F_0,\mathcal{F}_i)$. Suppose that the procedure stops in the $m$-th step. By Theorem \ref{thm4} (ii), for every $F\in \mathcal{F}_m$ there is an $(r-3)$-element subset $B$ of $F$ such that $B$ is not covered by $\mathcal{I}(F,\mathcal{F}_m)$.
\begin{claim}\label{claim6}
$|\mathcal{F}-(\mathcal{F}_1\cup\cdots\cup \mathcal{F}_{m-1})|\leq c'{n\choose r-3}$ for some $c'>0$. \end{claim}
\begin{proof}
For any $F\in \mathcal{F}_m$, let $B$ be an $(r-3)$-element subset of $F$ that is not covered by $\mathcal{I}(F,\mathcal{F}_m)$. Then it follows that $B\nsubseteq E\cap F$ for any $E\in \mathcal{F}_m\setminus \{F\}$, that is, $F$ is the only hyperedge in $\mathcal{F}_m$ that contains $B$. Thus $|\mathcal{F}_m|\leq {n\choose r-3}$. Now by Theorem \ref{thm4} (i), \begin{align*}
|\mathcal{F}-(\mathcal{F}_1\cup\cdots\cup \mathcal{F}_{m-1})|\leq c^{-1}|\mathcal{F}_m|\leq c'{n\choose r-3}. \end{align*} \end{proof}
Let $i\in \{1,2,\ldots,m-1\}$ and $F\in \mathcal{F}_i$. By Theorem \ref{thm4} (iii), $\mathcal{I}(F,\mathcal{F}_i)$ is intersection closed. Since $|F_1\cap F_2|\neq 1$ for any $F_1,F_2\in \mathcal{F}_i$, $|I|\neq 1$ for each $I\in \mathcal{I}(F,\mathcal{F}_i)$. Now apply Theorem \ref{thm5} with $s=1$ to $\mathcal{I}(F,\mathcal{F}_i)$, we obtain a $2$-element subset $A(F)$ of $F$ such that $$\{I: A(F)\subset I \subsetneq F\}\subset \mathcal{I}(F, \mathcal{F}_i).$$ Let $A_1, A_2, \ldots, A_h$ be the list of 2-element sets for which $A_j=A(F)$ for some $F\in \mathcal{F}_1\cup \cdots \cup \mathcal{F}_{m-1}$. For $j=1,\ldots,h$, let $$\mathcal{H}_j=\{F\in \mathcal{F}_1\cup \cdots \cup \mathcal{F}_{m-1}: A(F)=A_j\}$$ and \[ V(\mathcal{H}_j) =\bigcup_{F\in \mathcal{H}_j} F. \]
\begin{claim}\label{claim7} $V(\mathcal{H}_1), \ldots, V(\mathcal{H}_h)$ are pairwise disjoint. \end{claim}
\begin{proof}
Suppose for contradiction that $|V(\mathcal{H}_1)\cap V(\mathcal{H}_2)|\geq 1$. It follows that there exist $F_1\in \mathcal{H}_1$ and $F_2\in \mathcal{H}_2$ such that $|F_1\cap F_2|\geq 1$. Then we can find two sets $C_1$ and $C_2$ satisfying $A_1\subset C_1 \subsetneq F_1$, $A_2\subset C_2 \subsetneq F_2$ and $|C_1\cap C_2|=1$ in the following way. If $|A_1\cap A_2|=1$, then let $C_1=A_1$ and $C_2=A_2$. If $A_1\cap A_2=\emptyset$, then let $C_1=A_1\cup \{x\}$ and $C_2=A_2\cup \{x\}$ for some $x\in F_1\cap F_2$.
Since $F_1\in \mathcal{F}_i$ for some $i\in \{1, \ldots, m-1\}$ and $$C_1\in \{I: A_1\subset I \subsetneq F_1\}\subset \mathcal{I}(F_1, \mathcal{F}_i),$$
by Theorem \ref{thm4} (iv) $C_1$ is the center of an $(r+1)$-sunflower in $\mathcal{F}_i$. Therefore $C_1$ is the center of an $(r+1)$-sunflower in $\mathcal{F}$. Similarly, $C_2$ is also the center of an $(r+1)$-sunflower in $\mathcal{F}$. By Lemma \ref{lem-2}, there exist $F_1', F_2'\in \mathcal{F}$ satisfying $|F_1'\cap F_2'|=|C_1\cap C_2|=1$, which contradicts the fact that $|F_1\cap F_2|\neq 1$ for any $F_1, F_2\in \mathcal{F}$. Thus the claim holds. \end{proof}
Assume that $A_i=\{u_i, v_i\}$ for $i=1,\ldots, h$. Let $G_i$ be the graph on the vertex set $V(\mathcal{H}_i)$ with the edge set \[ E(G_i) =\left\{uv\colon \{u,v\}\subset F\in \mathcal{H}_i\right\}. \] Obviously, $G_i$ is a subgraph of $G$ and $vu_i, vv_i, u_iv_i \in E(G_i)$ for each $v\in V(\mathcal{H}_i)\setminus A_i$.
\begin{claim}\label{claim8} $G_i- A_i$ is $K_{r-1}$-free for $i=1,\ldots, h$. \end{claim} \begin{proof} By symmetry, we only need to show that $G_1- A_1$ is $K_{r-1}$-free. Suppose for contradiction that $\{a_1, a_2, \ldots, a_{r-1}\}\subset V(G_1)\setminus\{u_1, v_1\}$ induces a copy of $K_{r-1}$ in $G_1- A_1$. Since $u_1a_j\in E(G_1)$ for each $j=1,\ldots, r-1$, $\{u_1, a_1, a_2, \ldots, a_{r-1}\}$ induces a copy of $K_{r}$ in $G$. Note that $A_1=\{u_1,v_1\}$ is the center of an $(r+1)$-sunflower in $\mathcal{F}$. Let $F_1,F_2,\ldots,F_{r+1}$ be such a sunflower with center $A_1$. Then there exists some $F_j$ with $(F_j\setminus A_1)\cap \{a_1, a_2, \ldots, a_{r-1}\}=\emptyset$. It follows that $F_j\cap\{u_1, a_1, a_2, \ldots, a_{r-1}\}=\{u_1\}$. By the definition of $\mathcal{F}$, the subgraph of $G$ induced by $F_j\cup \{u_1, a_1, a_2, \ldots, a_{r-1}\}$ contains $B_{r,1}$. This contradicts the fact that $G$ is $B_{r,1}$-free and the claim follows. \end{proof}
Let $x_i=|V(\mathcal{H}_i)|$ for $i=1,2,\ldots, h$ and assume that $x_1\geq x_2\geq\cdots\geq x_h$. By Claim \ref{claim7}, $x_1+\cdots +x_h\leq n$.
\begin{claim}\label{claim33} $x_1\geq n-c''$, for some constant $c''>0$. \end{claim}
\begin{proof} By Claim \ref{claim8} and Theorem \ref{thm7}, the number of copies of $K_{r-2}$ in $G_i - A_i$ is at most $\mathcal{N}(K_{r-2}, T_{r-2}(x_i-2))$. It follows that \[
|\mathcal{H}_i| \leq \mathcal{N}(K_{r-2}, T_{r-2}(x_i-2)) \] for each $i=1,\ldots,h$. By Claims \ref{claim6} and \ref{claim7}, \begin{align}\label{ineq-3}
\mathcal{N}(K_r, G)&= |\mathcal{F}-(\mathcal{F}_1\cup\cdots \cup\mathcal{F}_{m-1})|+|(\mathcal{F}_1\cup\cdots \cup\mathcal{F}_{m-1})|\nonumber\\[5pt]
&= |\mathcal{F}-(\mathcal{F}_1\cup\cdots \cup \mathcal{F}_{m-1})|+|\mathcal{H}_1|+\cdots+|\mathcal{H}_h|\nonumber\\[5pt] &\leq c'{n\choose r-3}+ \sum_{i=1}^h \mathcal{N}(K_{r-2}, T_{r-2}(x_i-2)). \end{align} Since \begin{align*} \mathcal{N}(K_{r-2}, T_{r-2}(x_i-2))\leq \left(\frac{x_i-2}{r-2}\right)^{r-2}, \end{align*} we have \begin{align}\label{ineq-4} \mathcal{N}(K_r, G)\leq & c'{n\choose r-3}+ \sum_{i=1}^h\left(\frac{x_i-2}{r-2}\right)^{r-2}\nonumber\\[5pt] \leq & c'{n\choose r-3}+\sum_{i=1}^h (x_i-2) \cdot \frac{(x_1-2)^{r-3}}{(r-2)^{r-2}}\nonumber\\[5pt] \leq & c'{n\choose r-3}+\frac{(x_1-2)^{r-3}(n-2)}{(r-2)^{r-2}}. \end{align} By our assumption, \begin{align}\label{ineq-2} \mathcal{N}(K_r, G)\geq \mathcal{N}(K_{r-2}, T_{r-2}(n-2)) \geq \left( \frac{n-r}{r-2}\right)^{r-2}. \end{align} Combining \eqref{ineq-4} and \eqref{ineq-2}, we obtain that \[ 1\leq c'{n\choose r-3}{\left( \frac{r-2}{n-r}\right)^{r-2}} + \frac{n-2}{n-r}\cdot \left(\frac{x_1-2}{n-r}\right)^{r-3}. \] Since $n$ is sufficiently large, we get $x_1\geq \frac{n}{2}+r$.
Let $n_1,n$ be two integers with $0<n_1<n$ and let $H$ be an $r$-partite Tur\'{a}n graph on $n$ vertices with vertex classes $V_1, V_2,\ldots,V_r$. Then there exist partitions $V_j =V_{j,1} \cup V_{j,2}$ for each $j=1,2,\ldots,r$ such that \[
\sum_{j=1}^r |V_{j,1}|=n_1 \] and both $H[\cup_{j=1}^r V_{j,1}]$ and $H[\cup_{j=1}^r V_{j,2}]$ are Tur\'{a}n graphs. By considering the edges between $\cup_{j=1}^r V_{j,1}$ and $\cup_{j=1}^r V_{j,2}$, it is easy to see that \begin{align}\label{ineq-1} \mathcal{N}(K_r, T_r(n))> \mathcal{N}(K_r, T_r(n_1)) +\mathcal{N}(K_r, T_r(n-n_1)) + \left\lfloor \frac{n-n_1}{r}\right\rfloor \cdot \mathcal{N}(K_{r-1}, T_r(n_1)). \end{align} Apply the inequality \eqref{ineq-1} inductively, we have \begin{align}\label{ineq-5} \sum_{i=2}^h \mathcal{N}(K_{r-2}, T_{r-2}(x_i-2)) < \mathcal{N}(K_{r-2}, T_{r-2}(n-x_1)). \end{align} By \eqref{ineq-3} and \eqref{ineq-5}, we see that \begin{align*} \mathcal{N}(K_r, G)\leq & c'{n\choose r-3}+ \mathcal{N}(K_{r-2}, T_{r-2}(x_1-2))+\mathcal{N}(K_{r-2}, T_{r-2}(n-x_1)). \end{align*} Apply the inequality \eqref{ineq-1} again, we obtain that \begin{align}\label{ineq-6} \mathcal{N}(K_r, G)&\leq c'{n\choose r-3}+ \mathcal{N}(K_{r-2}, T_{r-2}(n-2)) - \left\lfloor \frac{n-x_1+2}{r}\right\rfloor \cdot \mathcal{N}(K_{r-3}, T_{r-2}(x_1-2))\nonumber\\[5pt] &\leq \mathcal{N}(K_{r-2}, T_{r-2}(n-2))+c'{n\choose r-3} - \frac{n-x_1-r}{r} \cdot (r-2) \left( \frac{x_1-r}{r-2}\right)^{r-3}. \end{align} It follows from \eqref{ineq-2} and \eqref{ineq-6} that \[ c'{n\choose r-3}\geq \frac{n-x_1-r}{r} \cdot (r-2) \left( \frac{x_1-r}{r-2}\right)^{r-3}. \] Since $x_1>\frac{n}{2}+r$, we arrive at \[ c'{n\choose r-3}\geq \frac{n-x_1-r}{r\cdot 2^{r-2}} \cdot (r-2) \left( \frac{n}{r-2}\right)^{r-3}. \] It follows that $x_1\geq n-c''$ for some $c''>0$. \end{proof}
Let us define \[ \mathcal{K}=\left\{F\in \mathcal{F}:\begin{array}{l}A_1\subset F \text{ and for each } I \text{ with } A_1\subset I\subsetneq F,\\ I\text{ is the center of an }(r+1)\text{-sunflower in } \mathcal{F} \end{array}\right\}. \] Obviously, we have $\mathcal{H}_1\subset \mathcal{K}$. Define $$\mathcal{A}=\{F\in\mathcal{F}: A_1\subset F, F\notin \mathcal{K}\}$$ and $$\mathcal{B}=\mathcal{F}-\mathcal{K}-\mathcal{A}.$$
Note that $V(\mathcal{K})=\cup_{F\in \mathcal{K}} F$ and $V(\mathcal{B})=\cup_{F\in \mathcal{B}} F$. We claim that $V(\mathcal{K})\cap V(\mathcal{B})=\emptyset$. Otherwise, there exist $F_1\in \mathcal{K}$ and $F_2\in \mathcal{B}$ with $|F_1\cap F_2|\geq 1$. Note that $A_1\subset F_1$ and $A_1\not\subset F_2$. If $F_2\cap A_1= \emptyset$, let $C=A_1\cup \{x\}$ with $x\in F_1\cap F_2$. If $F_2\cap A_1\neq \emptyset$, then let $C = A_1$. It is easy to see that $|C\cap F_2|=1$ in both of the two cases. Clearly, we have $A_1\subset C\subsetneq F_1$. By the definition of $\mathcal{K}$, $C$ is center of an $(r+1)$-sunflower in $\mathcal{F}$. Let $E_1,E_2,\ldots,E_{r+1}$ be such a sunflower. Since $|F_2\setminus C|< r$, there exists some $E_j$ such that $(E_j\setminus C)\cap (F_2\setminus C) =\emptyset$. Then we have $|E_j\cap F_2| =|C\cap F_2|=1$, a contradiction. Thus $V(\mathcal{K})\cap V(\mathcal{B})=\emptyset$.
By Claim \ref{claim33}, we have \begin{align}\label{ineq-7}
|V(\mathcal{B})|\leq n-|V(\mathcal{K})|\leq n-|V(\mathcal{H}_1)|\leq c''. \end{align}
Let $\mathcal{C} = \{F\in \mathcal{A}\colon F \cap V(\mathcal{B})=\emptyset\}$, $\mathcal{K}' =\mathcal{K} \cup \mathcal{C}$ and $\mathcal{A}' =\mathcal{A} \setminus \mathcal{C}$. Clearly, $V(\mathcal{K}') \cap V(\mathcal{B})=\emptyset$, $F\cap V(\mathcal{K}') \supset A_1$ and $F\cap V(\mathcal{B})\neq \emptyset$ for each $F\in \mathcal{A}'$.
\begin{claim}\label{claim9} $\mathcal{B}=\emptyset$. \end{claim}
\begin{proof} Suppose for contradiction that there exists $B\in \mathcal{B}$. We first show that the degree of each vertex $x$ in $B$ is small. By \eqref{ineq-7}, we have \[
\deg_{\mathcal{B}}(x) \leq \binom{|V(\mathcal{B})|}{r-1} \leq \binom{c''}{r-1}. \]
Note that $A_1\subset F$ for any $F\in \mathcal{F}\setminus \mathcal{B}$ and $|F\cap F'|\neq 1$ for any $F,F'\in \mathcal{F}$. We have $A_1\subset B'$ and $|B'\cap B|\geq 2$ for any $B'\in \mathcal{F}\setminus \mathcal{B}$ with $x\in B'$. Thus, the number of hyperedges containing $x$ in $\mathcal{F}\setminus \mathcal{B}$ is at most $|B\setminus \{x\}|\cdot\binom{n}{r-4}=(r-1)\binom{n}{r-4}$. Therefore, \[ \deg_{\mathcal{F}}(x)\leq \deg_{\mathcal{B}}(x) +(r-1)\binom{n}{r-4} \leq \binom{c''}{r-1}+(r-1)\binom{n}{r-4}. \]
Let $u\in V(\mathcal{K}')\setminus A_1$ be the vertex with $$\deg_{\mathcal{K}'}(u)= \max \left\{\deg_{\mathcal{K}'}(v)\colon v\in V(\mathcal{K}')\setminus A_1 \right\}.$$ We show that $\deg_{\mathcal{K}'}(u)\geq c'''n^{r-3}$ for some constant $c'''>0$. Since $F\cap V(\mathcal{B})\neq \emptyset$ for each $F\in \mathcal{A}'$, we have \[
|\mathcal{A}'|+|\mathcal{B}| \leq \sum_{v\in V(\mathcal{B})}\deg_{\mathcal{F}}(v). \] If $\deg_{\mathcal{K}'}(u)=o(n^{r-3})$, then \begin{align*}
\mathcal{N}(K_r, G)&=|\mathcal{K}'|+|\mathcal{A}'|+|\mathcal{B}|\\[6pt] &\leq \frac{1}{r-2}\sum_{v\in V(\mathcal{K}')\setminus A_1}\deg_{\mathcal{K}'}(v)+\sum_{v\in V(\mathcal{B})}\deg_{\mathcal{F}}(v)\\[6pt] &\leq o(n^{r-2})+c''\left((r-1)\binom{n}{r-4}+\binom{c''}{r-1}\right), \end{align*} which contradicts the assumption that $\mathcal{N}(K_r, G)\geq \mathcal{N}(K_{r-2}, T_{r-2}(n-2))$. Thus $\deg_{\mathcal{K}'}(u)\geq c'''n^{r-3}$ for some constant $c'''>0$.
Since $n$ is sufficiently large, for each $x\in B$ we have \[ \deg_{\mathcal{F}}(u) \geq \deg_{\mathcal{K}'}(u) \geq c'''n^{r-3} > \deg_{\mathcal{F}}(x). \] We claim that there exists $x_0\in B$ such that $ux_0$ is not an edge of $G$. Otherwise, if $ux\in E(G)$ for all $x\in B$, then $\{u\}\cup T$ induces a copy of $K_r$ in $G$ for any $T\in \binom{B}{r-1}$. Since $\deg_{\mathcal{K}'}(u) \geq c'''n^{r-3}$, there exists a hyperedge $K$ in $\mathcal{K}'$ containing $u$. Recall that $V(\mathcal{K}')\cap V(\mathcal{B})=\emptyset$. Then $\{u\}\cup T \cup K$ induces a copy of $B_{r,1}$ in $G$, a contradiction. Thus, there exists $x_0\in B$ such that $ux_0$ is not an edge of $G$.
Now let $G'$ be a graph obtained from $G$ by deleting edges incident to $x_0$ and adding edges in $\{x_0w\colon w\in N(u)\}$. We claim that $G'$ is $B_{r,1}$-free. Otherwise, there exist two copies $K,K'$ of $K_r$ in $G'$ with $V(K)\cap V(K')=\{y\}$ for some $y\in V(G')$. Since $G$ is $B_{r,1}$-free, we may assume that $x_0\in V(K)$. If $u\notin V(K')$, then $V(K)\cup V(K')\setminus \{x_0\} \cup \{u\}$ induces a copy of $B_{r,1}$ in $G$, a contradiction. If $u\in V(K')$, then $y\neq x_0$ since $ux_0$ is not an edge in $G'$. Moreover, $V(K')\notin \mathcal{B}$ and $V(K)\setminus \{x_0\}\cup \{u\}\notin \mathcal{B}$ since $u\in V(\mathcal{K}')$. By the definition of $\mathcal{K}'$ and $\mathcal{A}'$, we see that both $V(K')$ and $V(K)\setminus \{x_0\}\cup \{u\}$ contain $A_1$. But now we have $V(K)\cap V(K')\supset A_1$ since $u,x_0\notin A_1$, which contradicts our assumption that $V(K)\cap V(K')=\{y\}$. Thus $G'$ is $B_{r,1}$-free.
Since $\deg_{\mathcal{F}}(u) > \deg_{\mathcal{F}}(x_0)$, we have \[ \mathcal{N}(K_r, G')=\mathcal{N}(K_r, G)-\deg_{\mathcal{F}}(x_0)+\deg_{\mathcal{F}}(u)>\mathcal{N}(K_r, G), \] which contradicts the maximality of the number of copies of $K_r$ in $G$. Thus, the claim follows. \end{proof}
By Claim \ref{claim9}, $A_1$ is contained in every hyperedge of $\mathcal{F}$. Recall that $A_1=\{u_1,v_1\}$. It follows that $xu_1,xv_1\in E(G)$ for any $x\in V(G)\setminus A_1$. We claim that $G\setminus A_1$ is $K_{r-1}$-free. Otherwise, let $\{a_1, a_2, \ldots, a_{r-1}\}\subset V(G)\setminus A_1$ be a set that induces a copy of $K_{r-1}$ in $G- A_1$. Since $u_1a_j\in E(G)$ for each $j=1,\ldots, r-1$, $\{u_1, a_1, a_2, \ldots, a_{r-1}\}$ induces a copy of $K_{r}$ in $G$. Note that $A_1$ is the center of an $(r+1)$-sunflower in $\mathcal{F}$. Let $F_1,F_2,\ldots,F_{r+1}$ be such a sunflower with center $A_1$. Then there exists some $F_j$ with $(F_j\setminus A_1)\cap \{a_1, a_2, \ldots, a_{r-1}\}=\emptyset$. It follows that $F_j\cap\{u_1, a_1, a_2, \ldots, a_{r-1}\}=\{u_1\}$. By the definition of $\mathcal{F}$, the subgraph of $G$ induced by $F_j\cup \{u_1, a_1, a_2, \ldots, a_{r-1}\}$ contains $B_{r,1}$, a contradiction. Thus $G- A_1$ is $K_{r-1}$-free.
By Theorem \ref{thm7}, there are at most $\mathcal{N}(K_{r-2}, T_{r-2}(n-2))$ copies of $K_{r-2}$ in $G- A_1$ and the Tur\'{a}n graph $T_{r-2}(n-2)$ is the unique graph attaining the maximum number. Thus, the number of copies of $K_r$ in $G$ is at most $\mathcal{N}(K_{r-2}, T_{r-2}(n-2))$ and $K_2\vee T_{r-2}(n-2)$ is the unique graph attaining the maximum number of copies of $K_r$. \end{proof}
Now we prove Theorem \ref{thm8} using F\"{u}redi's structure theorem.
\begin{proof}[Proof of Theorem \ref{thm8}] Let $G$ be a $B_{r,0}$-free graph on vertex set $[n]$ and let \[ \mathcal{F} = \left\{F\in \binom{[n]}{r}\colon G[F] \mbox{ is a clique}\right\}. \]
Since $G$ is $B_{r,0}$-free, $\mathcal{F}$ is an intersecting family. We apply Theorem \ref{thm4} with $p=r+1$ to $\mathcal{F}$ and obtain $\mathcal{F}^*$. Let $\mathcal{I}=\mathcal{I}(F, \mathcal{F}^*)$ for some fixed $F\in\mathcal{F}^*$. From Theorem \ref{thm4} (iv) and Lemma \ref{lem-2}, we have $|I\cap I'|\geq 1$ for any $I, I'\in \mathcal{I}$. Let $I_0$ be a minimal set in $\mathcal{I}$. Since $\mathcal{I}$ is intersection closed, $I_0\subset I$ for all $I\in\mathcal{I}$. Otherwise, we have $I_0\cap I\in \mathcal{I}$ and $|I\cap I_0|<|I_0|$, which contradicts the minimality of $I_0$. Now we distinguish two cases.
{ Case 1.} $|I_0|=1$. Let $I_0=\{v\}$. By Theorem \ref{thm4} (iv), $\{v\}$ is center of an $(r+1)$-sunflower in $\mathcal{F}^*$. Let $F_1,F_2,\ldots, F_{r+1}$ be hyperedges in such an $(r+1)$-sunflower. If there is a hyperedge $F$ in $\mathcal{F}$ with $v\notin F$, then it is easy to find some $j$ such that $F_j\cap F=\emptyset$, which contradicts the fact that $\mathcal{F}$ is an intersecting family. Thus, $v$ is contained in every hyperedge of $\mathcal{F}$. Let $G'= G[N(v)]$. Since each copy of $K_r$ in $G$ contains $v$, $G'$ is $K_{r}$-free. By Theorem \ref{thm7}, we have \begin{align*} \mathcal{N}(K_r, G)\leq \mathcal{N}(K_{r-1}, G')\leq \mathcal{N}(K_{r-1}, T_{r-1}(n-1)), \end{align*} and the equality holds if and only if $G\cong K_1\vee T_{r-1}(n-1)$.
{ Case 2.} $|I_0|\geq 2$. We claim that $F\setminus I_0$ is not covered by $\mathcal{I}$. Otherwise, assume that $F\setminus I_0\subset I^*$ for some $I^*\in \mathcal{I}$. Since $I_0\subset I$ for all $I\in \mathcal{I}$, we have $I_0\subset I^*$. It follows that $I^*=F$, which contradicts the fact that $F\notin \mathcal{I}$. Hence $F\setminus I_0$ is not covered by $\mathcal{I}$. It follows that $F$ is the only hyperedge in $\mathcal{F}^*$ containing $F\setminus I_0$. Theorem \ref{thm4} (ii) shows that $\mathcal{I}(F, \mathcal{F}^*)$ is isomorphic to $\mathcal{I}(F', \mathcal{F}^*)$ for any $F,F'\in\mathcal{F}^*$. For any $E\in\mathcal{F}^*$, there is an $(r-|I_0|)$-element subset $T$ of $E$ such that $E$ is the only hyperedge in $\mathcal{F}^*$ containing $T$. Since $|I_0|\geq 2$, we have $|\mathcal{F}^*|\leq {n\choose r-2}$. By Theorem \ref{thm4} (i), for sufficiently large $n$, we have \begin{align*}
\mathcal{N}(K_r, G)=|\mathcal{F}|\leq c^{-1}|\mathcal{F}^*|\leq c^{-1}{n\choose r-2}< \mathcal{N}(K_{r-1}, T_{r-1}(n-1)). \end{align*} This completes the proof. \end{proof}
\section{Bounds on $ex(n,K_r,B_{r,s})$ for general $r$ and $s$}
Let $B^{(r)}_s$ be an $r$-uniform hypergraph consisting of two hyperedges that share exactly $s$ vertices. Let $ex_r(n,B^{(r)}_s)$ denote the maximum number of hyperedges in an $r$-uniform $B^{(r)}_s$-free hypergraph on $n$ vertices. In \cite{ff85}, Frankl and F\"{u}redi proved the following. \begin{thm}[Frankl and F\"{u}redi \cite{ff85}]\label{ffthm} For $r\geq 2s+2$ and $n$ sufficiently large, \[ ex_r(n,B^{(r)}_s) =\binom{n-s-1}{r-s-1}. \] For $r\leq 2s+1$, \[ ex_r(n,B^{(r)}_s) =O(n^s). \] \end{thm}
Now we prove Theorem \ref{thm9} by using Theorem \ref{ffthm}.
\begin{proof}[Proof of Theorem \ref{thm9}] Since $ex(n,K_r,B_{r,s})\leq ex_r(n,B^{(r)}_s)$, by Theorem \ref{ffthm} we have \begin{align}\label{ffupbound} ex(n,K_r,B_{r,s})= O(n^{\max\{s,r-s-1\}}). \end{align}
For $r\geq 2s+1$, it is easy to see that $K_{s+1}\vee T_{r-s-1}(n-s-1)$ is a $B_{r,s}$-free graph: every copy of $K_r$ in it consists of the $s+1$ vertices of $K_{s+1}$ together with a copy of $K_{r-s-1}$ in $T_{r-s-1}(n-s-1)$, so any two copies of $K_r$ share at least $s+1$ vertices. Then \[ ex(n,K_r,B_{r,s}) \geq \mathcal{N}(K_{r-s-1},T_{r-s-1}(n-s-1)) = \Theta(n^{r-s-1}). \] By \eqref{ffupbound}, we have $ex(n,K_r,B_{r,s})=\Theta(n^{r-s-1})$.
For $r\leq 2s$, by using an $s$-sum-free partition of $r$, we give a lower bound construction as follows. Let $P=(a_1,a_2,\ldots,a_t)$ be an $s$-sum-free partition of $r$. Define a graph $G_P$ on the vertex set $V(G_P)=X_1\cup X_2\cup \cdots\cup X_t$ with $|X_i|=\lfloor n/t\rfloor$ or $\lceil n/t\rceil$ for each $i=1,2,\ldots,t$. Let $G_P[X_i]$ be the union of $\lfloor|X_i|/a_i\rfloor$ vertex-disjoint copies of $K_{a_i}$ for each $i=1,2,\ldots,t$, and let $G_P[X_i,X_j]$ be a complete bipartite graph for $1\leq i<j\leq t$.
We claim that $G_P$ is $B_{r,s}$-free. Let $K, K'$ be two copies of $K_r$ in $G_P$. Since $G_P[X_i]$ is a union of vertex-disjoint copies of $K_{a_i}$, we have $|V(K)\cap X_i|\leq a_i$ and $|V(K')\cap X_i|\leq a_i$. It follows that $|V(K)\cap X_i|= a_i$ and $|V(K')\cap X_i|= a_i$ because $a_1+\cdots+a_t=r$. Hence $K$ and $K'$ each use one whole copy of $K_{a_i}$ in every $X_i$, and therefore $|V(K)\cap V(K')|$ is the sum of the $a_i$ over those $i$ for which $K$ and $K'$ use the same copy. Since $P$ is $s$-sum-free, we conclude that $|V(K)\cap V(K')|\neq s$. Thus, $G_P$ is $B_{r,s}$-free. Moreover,
\[
\mathcal{N}(K_r,G_P) = \prod_{i=1}^t \left\lfloor\frac{n}{ta_i}\right\rfloor \approx \left(t^t\prod_{i=1}^t a_i\right)^{-1} n^t.
\] Note that $\beta_{r,s}$ is defined to be the maximum length $t$ in an $s$-sum-free partition of $r$. Thus, the construction gives that \[ ex(n,K_r,B_{r,s}) =\Omega(n^{\beta_{r,s}}) \] for $r\leq 2s$. This completes the proof. \end{proof}
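To illustrate the construction (using, as in the claim above, that an $s$-sum-free partition of $r$ is one in which no subset of the parts sums to $s$): for $r=4$ and $s=2$, the partition $P=(1,3)$ is $2$-sum-free, since its subset sums are $0,1,3,4$, whereas $(2,2)$ and $(1,1,2)$ are not. Hence $\beta_{4,2}\geq 2$ and the construction gives $ex(n,K_4,B_{4,2})=\Omega(n^2)$; in fact $G_{(1,3)}$ is essentially the lower bound construction used in the proof of Theorem \ref{thm2} in the next section.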
\section{Bounds on $ex(n,K_4, B_{4,2})$}
In this section, we derive an upper bound on $ex(n,K_4, B_{4,2})$ by utilizing the graph removal lemma.
Let $G=(V, E)$ be a graph. For any $E'\subset E(G)$, let $G[E']$ denote the subgraph of $G$ induced by the edge set $E'$, and let $G- E'$ denote the subgraph of $G$ induced by $E(G)\setminus E'$. We use $v(G)$ to denote the number of vertices in a graph $G$. \begin{lem}[Graph removal lemma \cite{fox}] For any graph $H$ and any $\epsilon>0$, there exists $\delta>0$ such that any graph on $n$ vertices which contains at most $\delta n^{v(H)}$ copies of $H$ may be made $H$-free by removing at most $\epsilon n^2$ edges. \end{lem}
\begin{proof}[Proof of Theorem \ref{thm2}]
The lower bound in the theorem is due to the following construction. Suppose that $n=6m+t$ with $0\leq t\leq 5$, and let $G^*$ be a graph on $n$ vertices consisting of a set $V$ of size $3m$, whose induced subgraph is a union of $m$ vertex-disjoint triangles, an independent set $U$ of size $3m+t$, and all the edges between $V$ and $U$. Then it is easy to see that $G^*$ is $B_{4,2}$-free and \begin{align*} \mathcal{N}(K_4, G^*)= m(3m+t)=\frac{n^2-t^2}{12}\geq\frac{n^2}{12}-\frac{25}{12}. \end{align*} Thus, we are left with the proof of the upper bound.
Let $G$ be a $B_{4,2}$-free graph on $n$ vertices. We may further assume that each edge of $G$ is contained in at least one copy of $K_4$, since deleting the edges that lie in no copy of $K_4$ changes neither $\mathcal{N}(K_4,G)$ nor the $B_{4,2}$-freeness.
\begin{claim}\label{claim10}
There is a subset $E'\subset E(G)$ with $|E'|=o(n^2)$ such that $G'=G- E'$ is $K_5$-free, and $\mathcal{N}(K_4, G)= \mathcal{N}(K_4, G')+o(n^2)$.
\end{claim} \begin{proof} For any edge $e$ in $G$, there is at most one copy of $K_5$ containing $e$, since otherwise we would find a copy of $B_{4,2}$. Thus, the number of copies of $K_5$ in $G$ is $O(n^2)=o(n^5)$. By the graph removal lemma, we can delete $o(n^2)$ edges to make $G$ $K_5$-free. Let $E'$ be the set of the deleted edges.
Note that the purpose of the edge deletion is to destroy the copies of $K_5$ in $G$, so each deleted edge is contained in some copy of $K_5$ in $G$. Moreover, for any $e\in E'$, there is exactly one copy of $K_5$ in $G$ containing $e$; denote it by $K$. Then each copy of $K_4$ containing $e$ is a subgraph of $K$, since otherwise we would find a copy of $B_{4,2}$. Thus, there are at most three copies of $K_4$ in $G$ containing $e$, and hence the edge deletion destroys at most $3|E'|=o(n^2)$ copies of $K_4$. \end{proof}
Let $R$ be a subset of $E(G')$ consisting of all the edges contained in at least two copies of $K_4$ in $G'$, and let $B=E(G')\setminus R$.
\begin{claim}\label{claim11}
There is a subset $T\subset B$ with $|T|=o(n^2)$ such that $G'[B\setminus T]$ is $K_4$-free, and $\mathcal{N}(K_4,G')= \mathcal{N}(K_4,G'- T)+o(n^2)$.
\end{claim}
\begin{proof} By the definition of the set $B$, each edge in $B$ is contained in at most one copy of $K_4$ in $G'$. Thus, the number of copies of $K_4$ in $G'[B]$ is $O(n^2)=o(n^4)$. By the graph removal lemma, we can delete $o(n^2)$ edges to make $G'[B]$ $K_4$-free. Moreover, for any deleted edge $e$, since $e\in B$, it follows that $e$ is contained in exactly one copy of $K_4$ in $G'$. Thus, the edge deletion destroys at most $o(n^2)$ copies of $K_4$. \end{proof}
Let $G^*=G'- T$, $B^*=B\setminus T$. Then the edge set of $G^*$ consists of $R$ and $B^*$, and $G^*[B^*]$ is $K_4$-free. In Claim \ref{claim11}, the edge deletion only destroys copies of $K_4$ lying in $G'[B]$, and each deleted edge is contained in exactly one copy of $K_4$ in $G'[B]$. Then each edge in $R$ is still contained in at least two copies of $K_4$ in $G^*$ and every edge in $B^*$ is contained in at most one copy of $K_4$ in $G^*$. We say that a copy of $K_4$ in $G^*$ is {\it right-colored} if three of its edges form a triangle in $G^*[R]$ and the other three edges form a star in $G^*[B^*]$.
\begin{claim}
All the copies of $K_4$ in $G^*$ are right-colored. \end{claim}
\begin{proof}
Suppose that $S=\{v_1, v_2, v_3, v_4\}$ induces a copy of $K_4$ in $G^*$. Since $G^*[B^*]$ is $K_4$-free, at least one edge of $G^*[S]$ is contained in $R$. Without loss of generality, assume that $v_1v_2$ is such an edge. Since $v_1v_2$ is contained in at least two copies of $K_4$ in $G^*$, we may assume that $G^*[\{v_1,v_2,v_s,v_t\}]$ is another copy of $K_4$ containing $v_1v_2$. If $\{v_s,v_t\}\cap \{v_3,v_4\}=\emptyset$, then we find a copy of $B_{4,2}$ in $G^*$, a contradiction. Thus, we have $|\{v_s,v_t\}\cap \{v_3,v_4\}|=1$. Assume that $v_s=v_3$; then both $v_1v_3$ and $v_2v_3$ are contained in at least two copies of $K_4$, and hence (since every edge in $B^*$ lies in at most one copy of $K_4$ in $G^*$) $v_1v_3$ and $v_2v_3$ are edges in $R$. Thus, there are three edges in $G^*[S]$ belonging to $R$ that form a triangle in $G^*$.
Next we show that $v_1v_4,v_2v_4$ and $v_3v_4$ are all edges in $B^*$. If not, assume that $v_3v_4\in R$. Then all the copies of $K_4$ containing $v_1v_2$ should also contain $v_3$ or $v_4$, since otherwise we would find a copy of $B_{4,2}$. Without loss of generality, assume that all the copies of $K_4$ containing $v_1v_2$ contain $v_3$ as well, and let $G^*[\{v_1,v_2,v_3,v_4\}]$ and $G^*[\{v_1,v_2,v_3,v_5\}]$ be two such copies of $K_4$. Similarly, all the copies of $K_4$ containing $v_3v_4$ should also contain $v_1$ or $v_2$. Without loss of generality, assume that $G^*[\{v_3,v_4,v_1,v_2\}]$ and $G^*[\{v_3,v_4,v_1,v_6\}]$ are two such copies of $K_4$. Clearly, we have $v_5\neq v_6$ since $G^*$ is $K_5$-free. But then both $G^*[\{v_1,v_3,v_4,v_6\}]$ and $G^*[\{v_1,v_3,v_2,v_5\}]$ are copies of $K_4$, which implies that $G^*[\{v_1, v_2, v_3, v_4, v_5, v_6\}]$ contains a copy of $B_{4,2}$, a contradiction. Thus, $v_3v_4\in B^*$.
Similarly, we can deduce that $v_1v_4$ and $v_2v_4$ are edges in $B^*$. Therefore, $G^*[S]$ is right-colored and the claim holds. \end{proof}
Since $G^*[B^*]$ is $K_4$-free, by Tur\'{a}n's theorem \cite{turan} there are at most $\frac{n^2}{3}$ edges in $G^*[B^*]$. Moreover, since all the copies of $K_4$ in $G^*$ are right-colored, each copy of $K_4$ in $G^*$ contains exactly three edges in $B^*$, and since every edge in $B^*$ lies in at most one copy of $K_4$ in $G^*$, these triples of edges are pairwise disjoint. Thus, we have \begin{align*}
\mathcal{N}(K_4, G^*)\leq \frac{|B^*|}{3}\leq \frac{n^2}{9}. \end{align*} From Claims \ref{claim10} and \ref{claim11}, it follows that \[ \mathcal{N}(K_4, G)= \mathcal{N}(K_4, G^*)+o(n^2) \leq \frac{n^2}{9}+o(n^2), \] which completes the proof. \end{proof}
\noindent {\bf Acknowledgement.} The second author is supported by Shanxi Province Science Foundation for Youths, PR China (No. 201801D221028).
\end{document}
\begin{document}
\title{Log Adjunction: effectiveness and positivity}
\section{Introduction}
This is a first instalment of a much larger work about relations between birational geometry and moduli of triples. The part extracted here is mainly related to Theorem~\ref{weak_Kawamata}. It is a weak version of Kawamata's Conjecture~\ref{Kawamata_conjecture} and an important technical step toward the semiampleness of the moduli part of adjunction. To prove Theorem~\ref{weak_Kawamata}, we use relative analogues of b-representations. The proof here is rather complete except for the b-mobile property used in Corollary~\ref{adjunt_for_0contr}. We also assume the LMMP and the semiampleness (abundance) conjecture. For the former, \cite{BCHM} is sufficient. The latter is not crucial for b-representations, because nonabundance gives empty representations. This will be cleared up in a final version of the preprint.
The preprint will be periodically updated at http://www.math.jhu.edu/$\sim$shokurov/adj.pdf. A final version will appear again on arXiv.
The author is grateful to Florin Ambro, Valery Alexeev, J\'anos Koll\'ar for sharing unpublished materials and their valuable expertise in the area where he is still an apprentice.
\section{Adjunction}
\begin{propdf}[Maximal log pair] Let $(X_\eta,B_{X_\eta})$ be a generic wlc pair with a boundary $B_{X_\eta}$. Then there exists a {\em maximal\/} complete wlc pair $(X_m/Z_m,B_m)$, which is birationally equivalent to $(X_\eta,B_{X_\eta})$, that is, there exists a flop $$ (X_\eta,B_{X_\eta})\dashrightarrow (X_{\eta_m,m},B_{X_{\eta_m,m}}), $$ where $\eta_m$ is a generic point of $Z_m$ and the flop induces an isomorphism $\eta\cong\eta_m$. The {\em maximal\/} property means the inequality $\mathcal B_m^{\mathrm{mod}}\ge{\mathcal B'}^{\mathrm{mod}}$ for any complete wlc pair $(X'/Z',B')$ which is birationally equivalent to $(X_\eta,B_{X_\eta})$. For a maximal pair it is not necessary that $(X_m,B_m)$ is lc and that $B_m$ is a boundary, but $(X_m,B_m)$ is a log pair and $K_{X_m}+B_m$ is nef over $Z_m$. However, there always exists a wlc maximal pair $(X_m/Z_m,B_m)$.
If $(X/Z,D)$ is an (irreducible) pair, which is generically a wlc pair, then its {\em maximal\/} pair is a maximal complete wlc pair of $(X_\eta,D_{X_\eta})$, where $\eta$ is a generic point of $Z$, in particular, $D_{X_\eta}$ is a boundary. In this situation we denote a maximal moduli part of adjunction by $\mathcal D^{\mathrm{mod}}$. \end{propdf}
\begin{proof} Immediate by the existence of a complete {\rm tdlt}\ family for $(X_\eta,B_{X_\eta})$ and by Proposition~\ref{can_moduli_part}. \end{proof}
\begin{exs} (1) ($0$-mappings.) Let $(X/Z,D)$ be a complete (irreducible) log pair such that \begin{description}
\item{} the generic fiber is a $0$-pair, possibly, not geometrically irreducible, but $D$ is a boundary generically over $Z$; and
\item{} $K+D\equiv 0/Z$. \end{description} The complete property is global, that is, $X$ and $Z$ are complete. Then $(X/Z,D)$ is maximal itself \cite{PSh}.
In particular, if $f\colon X\to Z$ is a contraction, then it is a $0$-contraction and the maximal (upper) moduli part of adjunction is $\mathbb R$-linearly equivalent to the pullback of the low moduli part of adjunction (see Corollary~\ref{adjunt_for_0contr}): $$ \mathcal D^{\mathrm{mod}}\sim_\mathbb R f^*\mathcal D{}_{\mathrm{mod}}. $$
(2) (Maximality over curve; cf. a canonical moduli part in Proposition~\ref{can_moduli_part}.) Let $(X/C,B^{\mathrm{log}})$ be a complete {\rm tlc}\ pair such that \begin{description}
\item{} $C$ is a nonsingular complete curve,
\item{} a generic fiber is wlc, and
\item{} $K+B^{\mathrm{log}}$ is nef over $C$. \end{description} Then the pair is maximal. The {\rm tlc}\ in this situation means that $B^{\mathrm{log}}=B+\sum D_i$ is a boundary such that the vertical part of $B^{\mathrm{log}}$ is a sum $\sum D_i$ of reductions of fibers $D_i=(f^*p_i){}_{\mathrm{red}},p_i\in C$, the vertical sum includes all degenerations and $(X,B^{\mathrm{log}})$ is wlc. The inclusion of degenerations means that, if $p\in C\setminus\{p_i\}$, then $(X,B^{\mathrm{log}}+f^*p)$ is also wlc. In particular, the fiber $f^*p$ is reduced. The log structure of $(X/C,B^{\mathrm{log}})$ is given on $X$ by the reduced horizontal divisors and reduction of vertical degenerations $D_i$ of $f$, and on $C$ by the (critical) points $p_i=f(D_i)\in C$. A maximal (upper) moduli part $\mathcal B{}^{\mathrm{mm}}$ of adjunction for $(X/C,B^{\mathrm{log}})$ or for $(X/C,B)$ is stabilized over $X$ and is $\overline{K+B^{\mathrm{log}}-f^*(K_C+\sum p_i)}$. Indeed, $B{}_{\mathrm{div}}^{\mathrm{log}}=\sum p_i$.
Of course, we can add to $B^{\mathrm{log}}$ some nondegenerate fibers $f^*p$ as above. This does not change $\mathcal B{}^{\mathrm{mm}}$. Moreover, if $D=B^{\mathrm{log}} +f^*A$, where $A$ is any divisor on $C$, then $(X/C,D)$ is also maximal and has the same moduli part as for $(X/C,B^{\mathrm{log}})$. So, $$ \mathcal D{}^{\mathrm{mm}}=\overline{D{}^{\mathrm{mm}}}=\mathcal B{}^{\mathrm{mm}}=\overline{K+B^{\mathrm{log}}-f^*(K_C+\sum p_i)}. $$ Note that $K+B^{\mathrm{log}}-f^*(K_C+\sum p_i)$ is a divisor of a (log) canonical $\mathbb R$-sheaf $\omega_{X/C}^1[B]$, an adjoint log sheaf, where $B=B^{\mathrm{log}}{}^{\mathrm{h}}=D^{\mathrm{h}}$ is the horizontal part of $B^{\mathrm{log}},D$. If the fibers of $f$ are reduced, then $\omega_{X/C}=\omega_{X/C}^1$.
If $X$ is also a complete nonsingular curve, then $X\to C$ is a finite morphism of curves, $B=0$, and $D,B^{\mathrm{log}}=\sum q_{j,i}$ are vertical. In this situation, $\mathcal D{}^{\mathrm{mm}}=D{}^{\mathrm{mm}}\sim 0$, $B^{\mathrm{log}}=\sum q_{j,i},q_{j,i}\in X$, with $\sum_jq_{j,i}=D_i$, and the above equation $$ K+\sum q_{j,i}-f^*(K_C+\sum p_i)\sim 0 $$ is the Hurwitz formula. The points $p_i\in C$ should include all critical ones. \end{exs}
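For instance, take the double cover $f\colon X=\mathbb P^1\to C=\mathbb P^1$, $f(z)=z^2$, whose critical values are $p_1=0$ and $p_2=\infty$, with reduced fibers $\{0\},\{\infty\}\subset X$. With $t=z^2$ the coordinate on $C$, one has $f^*\left(\frac{dt}{t}\right)=2\,\frac{dz}{z}$ and $\operatorname{div}\left(\frac{dz}{z}\right)=-\{0\}-\{\infty\}$ on either curve, so $$ K_X+\{0\}+\{\infty\}\sim 0\sim f^*(K_C+p_1+p_2), $$ in agreement with the Hurwitz formula $K+\sum q_{j,i}-f^*(K_C+\sum p_i)\sim 0$ above.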
\begin{cor}\label{adjunt_for_0contr} Let $(X/Z,D)$ be a complete (irreducible) log pair such that \begin{description}
\item{} $f\colon X\to Z$ is a contraction,
\item{} the generic fiber is a $0$-pair, and
\item{} $K+D\equiv 0/Z$. \end{description} Then $(X/Z,D)$ is maximal and $$ \mathcal D{}^{\mathrm{mm}}=\mathcal D^{\mathrm{mod}}\sim_\mathbb R f^*\mathcal D{}_{\mathrm{mod}}. $$ Moreover, there exists an effective low moduli part $\mathcal D{}_{\mathrm{mod}}$ such that \begin{description}
\item{} the moduli parts $\mathcal D^{\mathrm{mod}}=f^*\mathcal D{}_{\mathrm{mod}},D^{\mathrm{mod}}=(\mathcal D^{\mathrm{mod}})_X$, and $\mathcal D{}_{\mathrm{mod}},D{}_{\mathrm{mod}}=(\mathcal D{}_{\mathrm{mod}})_Z$ are also effective and flop invariant: for every flop $g\in\Bir(X\to Z/k,D)$, $$ g^*\mathcal D^{\mathrm{mod}}=\mathcal D^{\mathrm{mod}},g^*D^{\mathrm{mod}}=D^{\mathrm{mod}} \text{ and } g_Z^*\mathcal D{}_{\mathrm{mod}}=\mathcal D{}_{\mathrm{mod}},g_Z^*D{}_{\mathrm{mod}}=D{}_{\mathrm{mod}}, $$ where $g_Z\colon Z\dashrightarrow Z$ is a birational automorphism induced by $g$;
\item{} if $(X,D)$ is lc, klt, then $(Z,D_Z)$ is lc, klt respectively, where $D_Z=D{}_{\mathrm{div}}+D{}_{\mathrm{mod}}$;
\item{} if $D$ is effective, then the divisorial part $D{}_{\mathrm{div}}$, the above moduli part $D{}_{\mathrm{mod}}$ on $Z$ and $D_Z$ are effective $\mathbb R$-divisors;
\item{} if $D=B$ is a boundary, then the divisorial part $B{}_{\mathrm{div}}=D{}_{\mathrm{div}}$, the above moduli part $B{}_{\mathrm{mod}}=D{}_{\mathrm{mod}}$ on $Z$ and $B_Z=B{}_{\mathrm{div}}+B{}_{\mathrm{mod}}$ are boundaries; and
\item{} if $(X,B)$ is a wlc (klt) pair, then the pair $(Z,B_Z)$ is wlc (klt respectively). \end{description} \end{cor}
\begin{prop}[Canonical moduli part]\label{can_moduli_part} Let $(X/Z,B)$ be a {\rm tlc}\ wlc (irreducible) family with a horizontal boundary $B$. Then the pair is maximal, its maximal moduli part is stabilized over $X$, and any divisor $M$ of $\mathbb R$-sheaf $\omega_{X/Z}^1[B]$ is a divisor of the upper maximal moduli part of adjunction. In particular, it is the maximal moduli part for $(X_\eta,B_\eta)$, where $\eta$ is a generic point of $Z$. More precisely, $$ B{}^{\mathrm{mm}}=B^{\mathrm{mod}}\sim M=K_{X/Z}^{\mathrm{log}}+B $$ and $$ \mathcal B_\eta{}^{\mathrm{mm}}= \mathcal B_\eta^{\mathrm{mod}}= \mathcal B{}^{\mathrm{mm}}=\mathcal B^{\mathrm{mod}}=\overline{B{}^{\mathrm{mm}}}=\overline{B^{\mathrm{mod}}} \sim\overline{M}=\mathcal K_{X/Z}^{\mathrm{log}}+\overline{B}. $$ \end{prop}
\begin{df}[Canonical adjunction] A {\em canonical (upper) maximal moduli part of adjunction\/} is the $\mathbb R$-sheaf $\omega_{X/Z}^1[B]$ of Proposition~\ref{can_moduli_part}. It is a b-sheaf on $(X_\eta,B_{X_\eta})$. Such a b-sheaf is unique on $X_\eta$ and sometimes we denote it by $\mathcal M$. By $\mathcal M$ we also denote a b-divisor of the last b-sheaf. This divisor is defined up to linear equivalence and $$ \mathcal M=\overline{M}, $$ where $M$ is a divisor in Proposition~\ref{can_moduli_part}.
Respectively, a plain {\em moduli part of adjunction\/} $\mathcal M$ is either the $\mathbb R$-sheaf $\omega_{X/Z}^1[B]$ up to an $\mathbb R$-isomorphism, or the $\mathbb R$-divisor $\mathcal M$ up to $\mathbb R$-linear equivalence. To define a low moduli part of adjunction we need such flexibility. \end{df}
\section{Mapping $\prd$}
\begin{df}[Equivalent $0$-pairs] The {\em equivalence\/} of connected $0$-pairs $(X,B)$ is the minimal equivalence such that \begin{description}
\item{\rm (1)} component adjunction gives an equivalent $0$-pair, that is, any component $(X_i,B_i)$ of a normalization $(X,B)^{\mathrm{n}}=\coprod (X_i,B_i)$ is a $0$-pair equivalent to a $0$-pair $(X,B)$ itself;
\item{\rm (2)} any flopped $0$-pairs are equivalent, that is, if $(X,B)\dashrightarrow (X',B_{X'})$ is a flop of pairs with boundaries and $(X,B)$ is a $0$-pair, then $(X',B_{X'})$ is a $0$-pair equivalent to $(X,B)$;
\item{\rm (3)} divisorial adjunction gives an equivalent $0$-pair, that is, if $D\subset (X,B)$ is a divisorial lc center of a $0$-pair $(X,B)$, then the adjoint pair $(D,B_D)$ is a $0$-pair equivalent to $(X,B)$; and
\item{\rm (4)} field base change gives an equivalent $0$-pair, that is, if $(X,B)$ is a $0$-pair over a field $K/k$ and $F/k$ is a field extension, then any connected component of pair $(X,B)\otimes_kF$ is a $0$-pair over $F$ equivalent to $(X,B)$ over $K$. \end{description} \end{df}
\begin{exs}
(1) (Log curves.) Let $(C,B_C)$ be a $1$-dimensional $0$-pair, that is, $C$ is a connected complete nodal curve and $B_C$ is a boundary such that $K_C+B_C\sim_\mathbb R 0$. Such a pair $(C,B_C)$ is equivalent to a $0$-dimensional $0$-pair $(\mathrm{pt.},0)$ if $(C,B_C)$ is not klt, equivalently, it has a node or a nonsingular point $p\in C$ with $\mult_pB_C=1$.
Two $1$-dimensional klt $0$-pairs $(C,B_C)$ and $(C',B_{C'})$ are equivalent if and only if they are log isomorphic.
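For instance, $(\mathbb P^1,p_1+p_2)$ with two distinct points is lc but not klt, hence equivalent to $(\mathrm{pt.},0)$, while $(\mathbb P^1,\tfrac12(q_1+q_2+q_3+q_4))$ with four distinct points is a klt $0$-pair, since $$ \deg\Big(K_{\mathbb P^1}+\tfrac12\sum_{i=1}^4 q_i\Big)=-2+2=0, $$ so that $K_{\mathbb P^1}+\tfrac12\sum q_i\sim_{\mathbb R}0$ on $\mathbb P^1$; two such klt pairs are log isomorphic exactly when the two $4$-point sets differ by an automorphism of $\mathbb P^1$.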
(5) (Toric pairs.) Any complete toric variety $X$ is naturally a $0$-pair $(X,D)$, where $D$ is its total invariant divisor. All those pairs are equivalent and they are equivalent to $(\mathrm{pt.},0)$ of Example (1).
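For instance, $X=\mathbb P^1$ with its total invariant divisor $D=\{0\}+\{\infty\}$ satisfies $$ K_{\mathbb P^1}+\{0\}+\{\infty\}\sim 0, $$ and the pair is lc but not klt at the two invariant points, so it is equivalent to $(\mathrm{pt.},0)$ by Example (1).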
(6) (Koll\'ar's sources \cite{Kol11}.) Let $(X,\Delta)$ be a log pair, let $Z$ be its lc center, and suppose $\Delta\ge 0$ near a generic point of $Z$. Then one can associate to $Z$ a class of $0$-contractions $(S/\widetilde{Z}_S,\Delta_S)$, a relative source. The class of pairs $(S,\Delta_S)$ up to flops is denoted by $\text{Scr}(Z,X,\Delta)$ and is called the source of $Z$ in $(X,\Delta)$. Interest in this class is related to the divisorial part of adjunction (see \cite[Theorem~1]{Kol11} for more details). On the other hand, the moduli part of adjunction is related to the equivalence class of generic pairs $(S_\eta,B_{S_\eta})$ of relative sources, where $\eta$ is a generic point of $\widetilde{Z}_S$ (see Corollary~\ref{adjunt_for_0contr}). The mapping of sources into equivalence classes is not injective, except for the case with $\eta=\widetilde{Z}_S=\mathrm{pt.}$ and $Z=\mathrm{pt.}$. The divisorial and moduli part in this case are $0$ and $\sim_\mathbb R 0$ respectively.
(7) (Characteristic.) Let $k$ be a prime field and let $F/k$ be a field extension. Then by definition the $F$-point $\mathrm{pt.}_F=(\Spec F,0)$ is equivalent to $\mathrm{pt.}=(\Spec k,0)$: $\mathrm{pt.}_F=\mathrm{pt.}\otimes_kF$. So, the equivalence class of $\mathrm{pt.}$ is the class of $\mathrm{pt.}_F$ for field extensions $F/k$. This class of fields is uniquely determined by $\chr k$. \end{exs}
\begin{prop} \label{klt0pair} Every equivalence class of $0$-pairs has an (irreducible) projective klt representative $(X,B)$. Two klt $0$-pairs over $k$ are equivalent if and only if they are related by a generalized flop. \end{prop}
\begin{lemma} \label{mincents} Let $(X,B),(X',B')$ be connected {\rm tdlt}\ $0$-pairs, $(V,B_V),(V',B_{V'}')$ be irreducible adjoint lc centres $V,V'$ of those pairs respectively such that there exists a flop $$ (V,B_V)\dashrightarrow(V',B_{V'}'), $$ and $(W,B_W),(W',B_{W'}')$ be adjoint pairs of minimal lc centers $W,W'$ of $(X,B),(X',B')$ respectively. Then $(W,B_W),(W',B_{W'}')$ are klt $0$-pairs and are related by a (generalized) flop. In particular, the conclusion holds for adjoint pairs $(W,B_W),(W',B_{W'})$ of any two minimal lc centers $W,W'$ of $(X,B)$. \end{lemma}
\begin{proof} Immediate by Theorem~\ref{can_0-perest} and Lemma~\ref{chain_of_flop}. \end{proof}
\section{Toroidal geometry}
\begin{prop}\label{torus_invar-pt} Let $X\subseteq \mathbb P(W^v)$ be a nondegenerate projective variety, and let $g$ be a semisimple operator on $W$. Suppose that $X$ is invariant under the dual (contragredient) action of $g^v$. Then there exist an integer $n\not=0$ and a point $x\in X$ such that \begin{description}
\item{\rm (1)} $x$ is $(g^v)^n$-invariant: $(g^v)^nx=x$, and, moreover,
\item{\rm (2)} if $g$ is not torsion, then the point $x$ has a nonzero coordinate $w(x)$, where $w$ is a coordinate (linear) function, an eigenfunction of $g$ with an eigenvalue which is not a root of unity. \end{description} \end{prop}
\begin{proof} Take a basis $w_1,\dots,w_d$, $d=\dim W$, of eigenvectors $w_i$ of $g$ such that the eigenvalues $e_1,\dots,e_l$ of $w_1,\dots,w_l$, $0\le l\le d$, respectively, are not roots of unity and the eigenvalues $e_{l+1},\dots,e_d$ of $w_{l+1},\dots,w_d$ respectively are roots of unity. (Actually, the same holds for the dual action $g^v\colon w_i^v\mapsto e_iw_i^v$, because the representation is commutative.)
If $l=0$, take any $x\in X$ and take $n$ such that $e_i^n=1$ for all $i$.
If $l\ge 1$, take a sufficiently general point $x\in X$, that is, a point with all homogeneous coordinates $x_i=w_i(x)\not=0$. Then the Zariski closure of the orbit $(g^v)^m(x),m\in\mathbb Z$, in $\mathbb P(W^v)$ is the closure $\overline{Y}$ of a subtoric orbit $Y$ with respect to the coordinate system $w_i^v$, that is, for the torus action by diagonal matrices in this basis. By construction $Y\subseteq\overline{Y}\subseteq X$. The group generated by $g^v$ is dense in a subtorus $T$ in the Zariski topology. Let $T_1$ be the connected component of the unity $1$. Then $n=\#(T/T_1)$, the order of the finite Abelian quotient $T/T_1$.
The subvariety $\overline{Y}$ is toric with respect to $T_1$, and has a $T_1$-invariant point $y$ with some homogeneous coordinate $w_i(y)\not=0$, $1\le i\le l$, where $T_1=T_1^n=T^n$. So $y$ is $(g^v)^n$-invariant, which gives (1). (We identify the point $y=(y_1:\dots:y_d)\in X\subseteq \mathbb P(W^v)$ with a line $k(y_1,\dots,y_d)$ in $W^v$.)
The subtorus $T_1$ has the following parameterization: $$ k^{*d}\twoheadrightarrow T_1, (t_1,\dots,t_d)\mapsto (\prod_j t_j^{a_{j,i}}), a_{j,i}\in \mathbb Z, i,j=1,\dots,d, $$ with the weighted action: $$(w_i^v)\mapsto (\prod_j t_j^{a_{j,i}}w_i^v). $$ (We can suppose that this is an action of the whole torus $k^{*d}$. Actually, under our assumptions, the action is smaller: all $a_{j,i}=0$ for $i\ge l+1$.) The weight of the vector $w_i^v$ or of the coordinate $x_i$ is the vector $(a_{1,i},\dots,a_{d,i})$. The weights are ordered lexicographically: $(a_1,\dots,a_d)\ge (b_1,\dots,b_d)$ if $a_1>b_1$ or $a_1=b_1,a_2>b_2$, etc.
We can assume that the action is {\em positive\/}. This implies (2). The positivity means that the maximal weight vector $(a_{j,i})$ is {\em positive\/}: all $a_{j,i}\ge 0$ and some $a_{j,i}>0$. Under our assumptions, $i\le l$. If the action is not positive, then, changing the action of $t_j$ for some $j$ to the inverse one $t_j^{-1}$ (equivalently, changing $a_{j,i}$ to $-a_{j,i}$), any action can be converted into a positive one. The positivity of the action allows us to get an invariant point with $w_i(y)\not=0$, $1\le i\le l$, if $w_i^v$ is maximal.
More precisely, a $T_1$-invariant vector $y\in \overline{Y}$ can be constructed as follows. The vector $y=(y_i)$ has the coordinates $y_i=x_i$ if $w_i^v$ has the maximal weight, and $y_i=0$ for the other coordinates. Then $y\in \overline{Y}$ (the closure of the orbit $T_1x=T_1(x_i)$ of $x$), $y$ is $T_1$-invariant, which gives (1), and $w_i(y)=x_i\not=0$ for any maximal $w_i^v$, which gives (2). \end{proof}
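As a small illustration of the proposition and of the construction in the proof, take (say over $k=\mathbb Q$) $W=k^2$ with the eigenbasis $w_1,w_2$, $g=\operatorname{diag}(2,1)$, and $X=\mathbb P(W^v)\cong\mathbb P^1$. The dual action is $$ g^v\colon (x_1:x_2)\mapsto (2x_1:x_2), $$ so $T=T_1$ is the one-dimensional torus acting by $(x_1:x_2)\mapsto(tx_1:x_2)$ and $n=1$. For a general point $x=(x_1:x_2)$ with $x_1x_2\neq0$, the coordinate $w_1^v$ has the maximal weight, and the construction above gives the invariant point $y=(x_1:0)=(1:0)$ with $w_1(y)\neq 0$, where $w_1$ is an eigenfunction of $g$ with eigenvalue $2$, which is not a root of unity.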
\section{Isomorphisms and flops}
\begin{lemma} \label{etal-action} Let $(X,B)$ be a wlc klt pair, and let $\mathcal D$ be a b-polarization on $X$. The natural mapping $$ \alpha_\mathcal D\colon \Bir_0(X,B)=\Aut_0(X,B)\to \bPic_\mathcal D X, a\mapsto \text{class }\mathcal O_X(a^*\mathcal D), $$ is an isogeny of Abelian varieties onto its image. Thus $\Aut_0(X,B)$ is an Abelian variety.
The fiber $G_\mathcal D=\alpha_\mathcal D^{-1}(\alpha_\mathcal D \mathcal D)\subseteq \Aut_0(X,B)$ is a subgroup and coincides with the kernel of the natural homomorphism $$ \gamma_\mathcal D\colon \Aut_0(X,B)\to \Pic_0 X, a\mapsto \text{class }\mathcal O_X(a^*\mathcal D-\mathcal D). $$ More precisely, $$ G_\mathcal D= \ker\gamma_\mathcal D=\Aut(X,B,\linsys{\mathcal D})\cap \Aut_0(X,B)\subseteq \Aut_0(X,B) $$ is a finite Abelian group and depends only on the algebraic (not numerical) equivalence class: $$ \mathcal D\approx \mathcal D'\Rightarrow G_\mathcal D=G_{\mathcal D'},A_\mathcal D=A_{\mathcal D'}. $$
For a projective $0$-pair $(X,B)$, $\alpha_\mathcal D$ is an isogeny onto. In the general (proper) case $\Aut_0(X,B)$ should be replaced by $\Bir_0(X,B)$. \end{lemma}
\begin{thm}\label{aut_tame} Let $(X,B,H)$ be a klt wlc triple with a polarization $H$, an ample sheaf or an ample divisor up to algebraic or up to numerical equivalence. Then the group $$ \Bir(X,B,H)=\Aut(X,B,H) $$ is tame. More precisely, the group is algebraic of finite type and complete (almost Abelian). \end{thm}
\begin{df} Let $(X/T,B)$ be a connected family. The family is {\em moduli part trivial}, for short, {\em mp-trivial\/}, if its upper moduli part of adjunction $\mathcal M$ behaves on $X$ as on a trivial fibration: $\Ii(X,\mathcal M)=\Ii(X/T,\mathcal M)$ or, equivalently, there are rather general horizontal curves $C\subseteq X$ over $T$ such that $(\mathcal M.C)=0$.
If $(X/T,B)$ is a family of $0$-pairs, then the mp-trivial property means that $\mathcal M\sim_{\mathbb R}0$, as a b-divisor.
Respectively, the family is {\em isotrivial\/} if its rather general fibers are log isomorphic. \end{df}
\begin{ex}\label{mp_triv_nonisotri} Let $(X/C,S+B)$ be a $\mathbb P^1$-fibration over a nonsingular curve $C$ with a section $S$ and a boundary $B=\sum b_iD_i$, where the prime divisors $D_i$ are horizontal. Suppose also that $(X/C,S+B)$ is {\rm tlc}, and $(X_\eta,B_{X_\eta})$ is a $0$-log pair. Then $(X/C,S+B)$ is a maximal family of $0$-pairs and its moduli part is trivial: $\mathcal M\sim_\mathbb R 0$. So, the family is mp-trivial. However, for rather general divisors $D_i$, it is not isotrivial. \end{ex}
\paragraph{Directed generic flop.} Let $g\colon (X/T,B)\dashrightarrow (Y/S,B_Y)$ be a generic flop, that is, $g\in \Bir(X,B;Y,B_Y)$ is compatible with the relative structures. The latter means that $g$ induces a birational transformation $g_T\colon T\dashrightarrow S$ such that the flop is fiberwise with respect to $g_T$.
A {\em directed flop with respect to a b-polarization\/} $\mathcal D$ is a generic flop $g$ together with its decomposition $g=g\rest{Y'}c$. It determines a b-polarization $\mathcal D=g^*\mathcal H$ on $X$, where $\mathcal H=\overline{H}$ is a b-divisor of a polarization on $Y/S$. However, $g_T,g_\eta$ and $g_\theta$ are not uniquely determined by $\mathcal D$. The decomposition also depends on a polarization $\mathcal H$.
\begin{df} A generic flop $g\in\Bir(X\to T/k,B)$ is an {\em mp-autoflop\/} if it transforms any fiber $(X_t,B_{X_t})$ into a fiber $(X_{t'},B_{X_{t'}})$, $t'=g_T t$, lying in a connected mp-trivial subfamily together with $(X_t,B_{X_t})$.
Respectively, the flop is an {\em almost autoflop\/} if it transforms rather general fibers within isotrivial connected families: for rather general $t$, the fibers $(X_t,B_{X_t}),(X_{t'},B_{X_{t'}})$ are in a connected isotrivial subfamily. \end{df}
The mp-autoflops form a normal subgroup $\mBir(X\to T/k,B)\subseteq \Bir(X\to T/k,B)$. Respectively, the almost autoflops form a normal subgroup $\aBir(X\to T/k,B)\subseteq \Bir(X\to T/k,B)$.
\begin{lemma}\label{mp_it} $\aBir(X\to T/k,B)\subseteq\mBir(X\to T/k,B)$ if $\mathcal M$ is semiample on $X$, and equality holds if the family $(X/T,B)$ is connected and generically klt, e.g., if the family is a connected, generically klt, wlc maximal family. \end{lemma}
\begin{proof} Indeed, the isotrivial property implies that the upper moduli part exists and is mp-trivial on that family. Hence adjunction implies the inclusion. The converse does not hold in general (Example~\ref{mp_triv_nonisotri}).
Now suppose that the upper moduli part $\mathcal M$ exists, is stabilized, and is semiample over $X$; e.g., this holds if the family $(X/T,B)$ is wlc maximal. (After a perturbation of $B$, one can suppose that $B,\mathcal M$ are $\mathbb Q$-divisors.) For a sufficiently divisible natural number $m$, the linear system $\linsys{m\mathcal M}$ gives a contraction with mp-trivial fibers. In this situation, $\mBir(X\to T/k,B)$ is exactly the kernel of the b-representation: $$ \Bir(X\to T/k,B)\to \Aut H^0(X,m\mathcal M), g \mapsto g^*. $$ The kernel can be determined on finitely many rather general fibers of the morphism given by $\linsys{m\mathcal M}$. The fibers are mp-trivial by definition. Rather general fibers are klt and isotrivial (Viehweg--Ambro), if the family is generically klt \cite[Theorem~6.1]{Am}. \end{proof}
\begin{cor}\label{it_generic_Kawamata} Let $(X_\eta,B_{X_\eta})$ be a generic projective wlc klt pair. Then there are only finitely many generic log flops of $(X_\eta\to \eta/k,B_{X_\eta})$ modulo almost autoflops, that is, the group $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})/\aBir(X_\eta\to\eta/k,B_{X_\eta}) $$ is finite. \end{cor}
\begin{proof} Immediate by Lemma~\ref{mp_it} and Corollary~\ref{generic_Kawamata}. \end{proof}
\begin{ex}[Mordell--Weil group]\label{Mordell-Weil} Let $S$ be a surface with a nonisotrivial pencil $f$ of genus $g\ge 1$ curves. We consider the pencil as a rational contraction $f\colon S\dashrightarrow C$ onto a curve $C$. Then, by Corollary~\ref{it_generic_Kawamata}, the group $\Aut(S/C)$ has finite index in the group $\Aut(S\dashrightarrow C/k)$ fixing the pencil. Moreover, we can replace $\Aut(S/C)$ by $\Bir(S/C)$. Indeed, we can replace $S/C$ by a genus $g$ fibration, a minimal model over $C$ with genus $g$ fibers. By our assumption, generic isotrivial subfamilies of $S/C$ are $0$-dimensional (points) and thus $$ \aBir(S\to C/k)=\Aut(S/C). $$ For $g\ge 2$, the groups $\Aut(S/C)$ and $\Aut(S\dashrightarrow C/k)$ are finite. For $g=1$, $\Aut(S/C)$ is the Mordell--Weil group and can be infinite. \end{ex}
A first step to Corollary~\ref{it_generic_Kawamata} is as follows.
\begin{lemma}\label{Burnside_3} Let $(X_\eta,B_{X_\eta})$ be a generic projective klt $0$-pair, geometrically irreducible, and let $(X_\eta,B_{X_\eta})\to (\theta,\mathcal M_\theta)$ be an isotrivial contraction with a canonical polarization $\mathcal M_\theta$, the $\mathbb R$-direct image of a canonical sheaf moduli part of adjunction for $(X_\eta,B_{X_\eta})$. Then there exists an extension $l\subset k$ of finite type of the prime subfield such that the natural homomorphism $$ \Bir(X_\eta\to \eta/k,B_{X_\eta})/\aBir(X_\eta\to\eta/k,B_{X_\eta}) \hookrightarrow\Aut(\theta/\overline{l},\mathcal M_\theta), g\mapsto g_\theta, $$ is injective and, for every generic flop $g$, there exists a finite subextension $l_g/l$ in $k$ of uniformly bounded degree such that $g_\theta$ is defined over $l_g$. \end{lemma}
\begin{proof} There exists a required $l$ over which $(X_\eta\to\eta,B_{X_\eta}), (X_\eta,B_{X_\eta})\to (\theta,\mathcal M_\theta)$ and the polarizations $\mathcal H_\eta,\mathcal M_\theta$ are defined. Suppose also that $\bPic{}^{\mathrm{b}} X_\eta$ (the Picard group of bounded b-divisors in the sense of resolution) is defined over the same $l$. The boundedness in $\Pic{}^{\mathrm{b}}$ means that we consider Cartier $b$-divisors which are stabilized over some partial (geometric) resolution over $\eta$. After a finite extension it can be given over $l$. It is sufficient to consider any minimal model (i.e., geometrically $\mathbb Q$-Cartier), which is defined for klt pairs, or to take any log resolution. Each generic flop $g\in \Bir(X_\eta\to \eta/k,B_{X_\eta})$ can be given by a geometric b-divisor $\mathcal D\in \bPic{}^{\mathrm{b}} X_\eta$. Actually, it is sufficient to take such a divisor $\mathcal D$ modulo algebraic equivalence. (The algebraic equivalence of b-divisors is the same as for usual divisors, that is, modulo $\Pic_0 X_\eta$ considered as a group scheme of divisors, fiberwise in the connected component of $0$. We use here the rationality of klt singularities.) Indeed, if $g\colon X_\eta\dashrightarrow X_\eta/k$ is such a flop, then $\mathcal D=g^* \mathcal H_\eta$. Since the polarization is defined modulo numerical equivalence, we can take $\mathcal D$ modulo algebraic equivalence as well. After an extension of $l$ it has a representative in each geometrically algebraic equivalence class, that is, there exists a section $\mathcal D\in\bPic{}^{\mathrm{b}} X_\eta$ in each of those classes. (We treat $\bPic X_\eta$ as a scheme over $\eta$ and b-divisors $\mathcal D$ as its sections over $\eta$.) This follows from the finite generation of geometric divisors modulo algebraic equivalence (the N\'eron--Severi group).
Next, we verify that each generic flop $g$ can be defined over a finite extension $l_g/l$ modulo almost flops over $k$. Taking a representative $\mathcal D$ one can construct a flopped variety $(X_\eta',B_{X_\eta'},\mathcal H')$ with a canonical flop over $l$ $$ c_\mathcal D=c\colon (X_\eta,B_{X_\eta},\mathcal D)\dashrightarrow (X_\eta',B_{X_\eta'},\mathcal H'),c^*\mathcal H'=\mathcal D. $$ It is over $\eta$, that is, identical on $\eta$: $c_{\eta,\mathcal D}=\id_\eta$. The autoflop $g=g\rest{X_\eta'}c$ is given by a composition with a log isomorphism (also a flop) of generic triples $g\rest{X_\eta'}\colon (X_\eta,B_{X_\eta},\mathcal H_\eta)\leftarrow(X_\eta',B_{X_\eta'},\mathcal H')$, which induces an automorphism $g_\eta$ of $\eta/k$. In general, it does not preserve the polarization and is not defined over $l$. However, $\mathcal H'\approx g\rest{X_\eta'}^*\mathcal H_\eta/\eta$ and Lemma~\ref{etal-action} implies fiberwise linear equivalence: $$ \mathcal H'_t\approx(g\rest{X_\eta'}^*\mathcal H_\eta)_t \Rightarrow \mathcal H'_t\sim(g\rest{X_\eta'}^*\mathcal H_\eta)_t, $$ where $\sim$ is up to isomorphism, that is, there exists an autoflop $h\colon (X_t',B_{X_t'})\dashrightarrow(X_t',B_{X_t'})$ with $\mathcal H'_t\sim h^*((g\rest{X_\eta'}^*\mathcal H_\eta)_t) =h^*g_t'^*(\mathcal H_\eta)_{g_\eta t}$. So, the triples $(X_t',B_{X_t'},\mathcal H_t'), (X_{g_\eta t},B_{X_{g_\eta t}},(\mathcal H_\eta)_{g_\eta t})$ are equivalent with polarizations up to $\sim$, where the automorphism $g_\eta\colon \eta\to \eta$ is induced by $g\rest{X_\eta'}$ or by $g$. By the lemma $\sim$ is equal to $\approx$ up to isomorphism.
To construct the required flops over $l_g$ we use moduli $\mathfrak M$ of triples for special fibers $(X_t,B_t,\mathcal H_t)$ with polarization up to linear equivalence. Such moduli exist. By construction we have a unique morphism $\mu=\mu'\colon\eta,\eta\to\mathfrak M$ corresponding to the generic families $(X_\eta,B_{X_\eta},\mathcal H_\eta),(X_\eta',B_{X_\eta'},\mathcal H')$. Indeed, any generic flop preserves isotrivial families of wlc klt pairs, and in our situation preserves fiberwise the polarization up to $\sim$ as was explained above. After an extension of $l$ we can suppose that $\mathfrak M$ is also defined over $l$. The morphism $\mu$ is defined over $l$ too. By construction, $\mu$ is defined over an algebraic closure $\overline{l}$. Let $\sigma\in \Gal(\overline{l}/l)$ be a Galois automorphism; then $\mu^\sigma=\mu$ and it is defined over $l$. Indeed, $\mu^\sigma\colon \eta^\sigma=\eta\to\mathfrak M^\sigma=\mathfrak M$ is also universal because an isomorphism of triples preserving polarization gives a (conjugation) isomorphism of those triples: $\sigma\colon (X_t,B_t,\mathcal H_t)\cong(X_{t^\sigma}^\sigma,B_{t^\sigma}^\sigma,\mathcal H_{t^\sigma}^\sigma)= (X_{t^\sigma},B_{t^\sigma},\mathcal H_{t^\sigma})$. Then we use Hilbert's Theorem 90.
By the definition of generic flops there is a (birational) automorphism $g_\eta\colon\eta \to \eta$. Indeed, it is induced by the isomorphisms of families $g\rest{X_\eta'}$. It is unique as $g\rest{X_\eta'}$ for the flop $g$ but is not unique itself. However, for any other isomorphism as above $(X_\eta,B_{X_\eta},\mathcal H_\eta)\leftarrow (X_\eta',B_{X_\eta'},\mathcal H')$, an induced isomorphism $g_\eta'\colon\eta \to \eta$ is compatible with $g_\eta$ over $\mathfrak M$: $\mu g_\eta'=\mu g_\eta$. Equivalently, $g_\eta'g_\eta^{-1}\in \Aut(\eta/\mathfrak M)$, that is, preserving triples with polarization up to linear equivalence. By construction up to algebraic equivalence and by Lemma~\ref{etal-action} up to linear one. Thus $g_\eta=g_\eta'$ up to an automorphism of fibers of $\eta/\mathfrak M$. The isomorphism $g_\eta$ is defined over $k$ and in general the minimal field of definition for all $g_\eta$ can have algebraic elements of unbounded degree and even infinite transcendence degree over $l$. The main finite indeterminacy is the same as for almost autoflops. To remove this, we push $g_\eta$ to an automorphism $g_\theta\colon \theta\to \theta=\theta'$ where $\eta\to\theta=\eta\to \theta=\theta'$ are the universal morphisms with maximal connected fibers over $\mathfrak M$ and $\theta=\theta'\to \mathfrak M$ is finite, that is, $\mu=\mu'$ can be universally and equally decomposed $\eta\to\theta\to\mathfrak M=\eta\to\theta'\to\mathfrak M$. (This decomposition can be obtained from a Stein one after a completion of fibers over $\mathfrak M$, that is, a completion of isotrivial families.) The isotrivial property is preserved for $g_\eta$ because it induces a log isomorphism of fibers or, equivalently, a flop preserves isotrivial klt families. This holds fiberwise even for triples with polarization up to $\sim$. Thus this is a single canonical decomposition $\eta\to \theta\to\mathfrak M$. It is defined over the same field $l$ (after a finite extension) independently of $g$. By construction each $g$ preserves the canonical moduli part of adjunction: $\omega_{X_\eta}^m[mB_{X_\eta}]=g^*\omega_{X_\eta}^m[mB_{X_\eta}]= \omega_{X_\eta'}^m[mB_{X_\eta'}]$ (the last identification by $g\rest{X_\eta'}^*\colon X_\eta\cong X_\eta'$). This action commutes with the direct image on $\theta$: $\mathcal M_\theta^m=c_{\eta,*}\omega_{X_\eta}^m[mB_{X_\eta}]= c_{\eta,*}g^*\omega_{X_\eta}^m[mB_{X_\eta}]=g_\theta^*c_{\eta,*}\omega_{X_\eta}^m[mB_{X_\eta}]= g_\theta^*\mathcal M_\theta^m$. This concludes the construction of the required injection: $$ \Bir(X_\eta\to \eta/k,B_{X_\eta})/\aBir(X_\eta\to\eta/k,B_{X_\eta}) \hookrightarrow\Aut(\theta/k,\mathcal M_\theta),g\mapsto g_\theta. $$ The kernel of the map $g\mapsto g_\theta$ is $\aBir(X_\eta\to\eta/k,B_{X_\eta})$ by definition.
Finally, we verify that the image $g_\theta$ belongs to $\Aut(\theta/l_g,\mathcal M_\theta)$, where $l_g/l$ is a finite extension. Actually, it depends on a choice of $\mathcal D$ but it is unique up to $\approx$ for a given $g$: $\mathcal D\approx g^*\mathcal H_\eta/\eta$. Note that there are finitely many conjugated automorphisms $g_\theta^\sigma$ of $g_\eta$ over $\overline{l}$. More precisely, $a_\theta=g_\theta\1g_\theta^\sigma\in \Aut(\theta/\mathfrak M)$ over $\overline{l}$. Indeed, the conjugated flop $g'=g^\sigma$ is given by the same polarization as above. The divisors $\mathcal H$, $\mathcal D$ and $\mathcal H'$ are defined over $l$. (However, $g_\eta$ and $g_\theta$ are not uniquely determined by these data.) By construction an automorphism $a=g_\eta^\sigma g_\eta^{-1}$ induces the generic flop $g^\sigma g^{-1}$ preserving fiberwise the triples of $(X_\eta,B_{X_\eta},\mathcal H)$ over $\mathfrak M$, that is, in $\Aut((X_\eta,B_{X_\eta},\mathcal H)/\mathfrak M)$ over $\overline{l}$. The push down induces $a_\theta\in \Aut(\theta/\mathfrak M)$ over $\overline{l}$. The last group is finite of order $\le (\deg \theta/\mathfrak M)!$. Thus any $g_\theta$ is defined over a uniformly bounded extension $l_g$ of the above $l$ of degree $\le (\deg \theta/\mathfrak M)!$. Each $g_\theta$ can be defined over an extension $l_g/l$ with an injective action of $\Gal(l_g/l)$ on the permutation group of all $g_\theta^\sigma$. \end{proof}
\section{Algebra and calculus of relative differentials}
\paragraph{Properties of the norm.}
(1) For every $\omega\in H^0(X,\omega_{X/T}^m[mB])$, $\Vert g^*\omega\Vert_t=\Vert \omega\Vert_{g_Tt}$ where $g$ is a generalized flop of $(X/T,B)$ transforming $X_t\dashrightarrow X_{g_Tt}$. So, $\Vert g^*\omega\Vert=\Vert\omega\Vert$.
(2) Let $\omega\in H^0(X,\omega_{X/T}^m[mB])$ be an eigenvector for the induced linear operator $g^*$ with an eigenvalue $\lambda\in \mathbb C$, that is, $g^*\omega=\lambda\omega$. Then $\Vert g^*\omega\Vert=\vert\lambda\vert^2\Vert\omega\Vert$.
(3) For every $\omega\in H^0(X,\omega_{X/T}^m[mB])$, the function $\Vert\omega\Vert_t$ is continuous in the complex (classical) topology on $T$ including the value $+\infty$.
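In particular, combining (1) and (2) as stated, if $\omega$ is an eigenvector of $g^*$ with eigenvalue $\lambda$ and $0<\Vert\omega\Vert<+\infty$, then $$ \Vert\omega\Vert=\Vert g^*\omega\Vert=\vert\lambda\vert^{2}\Vert\omega\Vert, $$ and hence $\vert\lambda\vert=1$.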
\begin{lemma}\label{sum_resid} Let $f\colon(X,B+D_1+D_2)\dashrightarrow T$ be a rational conic bundle with two sections and a vertical (sub)boundary $B$. Then, for any rational $m$-differential $\omega$ that is regular on the generic fiber, $$ c^*(\omega\rest{D_2})=(-1)^m\omega\rest{D_1}, $$ where $c\colon (D_1,B_{D_1})\dashrightarrow (D_2,B_{D_2})$ is the birational transformation given by the conic bundle structure.
Moreover, if $f$ is a $0$-contraction, then $c$ is a flop and $c^*$ preserves (as canonical isomorphism on) the regular $m$-differentials of pairs. \end{lemma}
\paragraph{Restrictions to lc centers under generic flop.} $$ (g^*\omega)\rest{Y_t}=g\rest{Y_t}^*(\omega\rest{Y_s}). \eqno(4) $$
\begin{prop}\label{flops_in_fibers} Let $(X/T,B)$ be a projective {\rm tdlt}\ family of $0$-pairs with the generic connected klt fiber, horizontal $B$ and irreducible $T$, and let $g\in\Bir(X\to T/k,B)$ be a generic flop. Then for any $t\in T$ there exists $s\in T$ such that, for any minimal lc centers $(Y_t,B_{Y_t}),(Y_s,B_{Y_s})$ of $(X_t,B_{X_t}),(X_s,B_{X_s})$ (even on blowups) respectively, there exists a log flop $g_{Y_t}\colon (Y_t,B_{Y_t})\dashrightarrow (Y_s,B_{Y_s})$, which satisfies $$ \rest{Y_t}g^*=(g_{Y_t})^*\rest{Y_s}. $$

If $(X_t,B_{X_t}),(X_s,B_{X_s})$ belong to an mp-trivial subfamily, then, for any even natural number $m$, $$ \rest{Y_t} g^*=(g_{Y_t})^*c^*\rest{Y_t}\colon H^0(X,\omega_{X/T}^m[mB])\to H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}]), $$ where $c^*:H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}])\to H^0(Y_s,\omega_{Y_s}^m[mB_{Y_s}])$ is the canonical identification. \end{prop}
If $(X_t,B_{X_t}),(X_s,B_{X_s})$ belong to an mp-trivial subfamily, then, for any even natural number $m$, $$ \rest{Y_t} g^*=(g_{Y_t})^*c^*\rest{Y_t}\colon H^0(X,\omega_{X/T}^m[mB])\to H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}]). $$ where $c^*:H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}])\to H^0(Y_s,\omega_{Y_s}^m[mB_{Y_s}])$ is the canonical identification. \end{prop}
\begin{proof} We use induction on $\dim T$. If $\dim T=0$, then by our assumptions $s=t, X=X_s=X_t=Y_s=Y_t, \rest{Y_t}=\id_{X},g_{Y_t}=g,c^*=\id_{H^0(X,\omega_{X/T}^m[mB])}$ and $$ \rest{Y_t} g^*=(g_{Y_t})^*\rest{Y_s}= (g_{Y_t})^*c^*\rest{Y_t}=g^*\colon H^0(X,\omega_{X/T}^m[mB])\to H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}]). $$
Now suppose that $\dim T\ge 1$.
Construction of $g_{Y_t}$. Take a rather general curve $C\subseteq T$ through $t$. Such a curve means a morphism $h\colon C\ni o\to T$, birational onto its image, with $h(o)=t$. Birationally, $C=h(C),g(C)=g_Th(C)$. This gives a curve $g(C)$ with the morphism $g_Th\colon C\ni o\to T$ and $s=g_Th(o)$. Moreover, the birational map $g_T\rest{C}\colon C\dashrightarrow g(C)$ induces (by restriction) a flop of two {\rm tdlt}\ families of $0$-pairs over $C$: $$ g\rest{X_C}\colon (X_C/C\ni o,B_{X_C})\dashrightarrow (X'/C\ni o,B_{X'}). $$ The first family is the pullback under $h$, the second one under $g_Th$. Actually, both families are normal and {\rm tdlt}\ over $C$. By construction $g\rest{X_C}$ is birational and by Lemma~\ref{flopping_lc_center} it is a flop.
Suppose first that $\dim T=1$ and $T=C,t=o,X_C=X,B_{X_C}=B$. Add the vertical boundaries $X_o,X_o'$. Take any minimal lc centers $Y_t\subset (X_o,B_{X_o})=(X_t,B_{X_t})$, $Y_s\subset (X_o',B_{X_o'})=(X_s,B_{X_s})$. We can even suppose that they are centers on a {\rm tdlt}\ blowup of the fibers over $o$. So, we replace both families by such blowups. We can also suppose that $g$ is defined at a minimal lc center $Y_t'\subset (X_o,B_{X_o})=(X_t,B_{X_t})$. Then by Lemma~\ref{flopping_lc_center} $g(Y_t')=Y_s'\subset (X_o',B_{X_o'})=(X_s,B_{X_s})$ is also a minimal lc center and $g$ gives the flop $$ g\rest{Y_t'}\colon (Y_t',B_{Y_t'})\dashrightarrow (Y_s',B_{Y_s'}). $$ Let $$ c_t\colon (Y_t',B_{Y_t'})\dashrightarrow (Y_t,B_{Y_t}), c_s\colon (Y_s',B_{Y_s'})\dashrightarrow (Y_s,B_{Y_s}) $$ be the canonical flops between minimal lc centers. Then $g_{Y_t}=c_sg\rest{Y_t'}c_t^{-1}$.
The real induction is for $\dim T\ge 2$. For any minimal lc centers $Y_t\subset (X_o,B_{X_o})=(X_t,B_{X_t}), Y_s\subset (X_o',B_{X_o'})=(X_s,B_{X_s})$, even on {\rm tdlt}\ resolutions, we construct minimal lc centers $Y_t'\subset (X_o,B_{X_o})=(X_t,B_{X_t}), g(Y_t')=Y_s'\subset (X_o',B_{X_o'})=(X_s,B_{X_s})$ with a flop $g\rest{Y_t'}$ given by restriction. Indeed, after a log resolution of the base extending $\Delta_T$, we can suppose that $C$ is nonsingular and intersects transversally the log structure. Then $t=o$ and we can canonically identify the fiber over $o$ with the fiber over $s$ before the resolution. Take a nonsingular divisor $D\subset T$ extending the log structure $\Delta_T$ on $T$ and passing through $C$. The mapping $g_T$ in general is not a log flop with respect to the log structure. Nonetheless, after adding to $D+\Delta_T$ the preimage $g_T^{-1} \Delta_T$ and adding to $\Delta_T$ the image $g_T(D+\Delta_T)$, we can convert the birational automorphism $g_T$ into a regular flop (usually, not an autoflop) $g_T\colon (T,\Delta_T)\to (T',\Delta_{T'})$ using a log resolution of the log map (torification). So, $D$ after those modifications is still a nonsingular divisor of $\Delta_T$, $C\subset D$ and $D'=g_TD$ is also a nonsingular divisor of $\Delta_{T'}$. Then by Lemma~\ref{flopping_lc_center} we construct a log flop $$ g\rest{X_D}\colon (X_D/D,B_{X_D})\dashrightarrow(X_{D'}/D',B_{X_{D'}'}). $$ Moreover, $g\rest{D}\rest{C}$ is the above flop over $C\to C'=g_TC$. By induction we construct the required centers $Y_t'$ and $Y_s'$ and a flop $g_{Y_t}=c_sg\rest{Y_t'}c_t^{-1}$ as above.
Restrictions to lc centers under generic flop, (4), implies a similar relation for $g_{Y_t}$: $$ \rest{Y_t} g^*=(c_t^{-1})^*\rest{Y_t'}g^*= (c_t^{-1})^*(g\rest{Y_s'})^*\rest{Y_s'}= (c_t^{-1})^*(g\rest{Y_s'})^*c_s^*\rest{Y_s}=(g_{Y_t})^*\rest{Y_s}. $$
If $(X_t,B_{X_t}),(X_s,B_{X_s})$ belong to an mp-trivial subfamily, then $\rest{Y_s}=c^*\rest{Y_t}$ by Lemma~\ref{restr_in_mp_triv} and the required relation holds: $$ \rest{Y_t} g^*=(g_{Y_t})^*\rest{Y_s}= (g_{Y_t})^*c^*\rest{Y_t}. $$ \end{proof}
\section{Interlacing}
\begin{ex}[Rational lc conic bundle structure] \label{nonunique_cb} Let $X=C\times \mathbb P^1,B=D_1+D_2,D_i=C\times y_i, i=1,2$, where $C$ is a nonsingular curve and $y_1,y_2\in\mathbb P^1$ are two distinct points. Then the conic bundle $f\colon C\times\mathbb P^1\to C, (x,y)\mapsto x$, has two horizontal sections $D_i$ in the reduced boundary $B$ and $K+B\equiv 0/C$. A proper rational conic bundle structure $X\dashrightarrow C'$ with horizontal $B$ and $(K+B.F)=0$ for the generic fiber $F$ of the conic bundle is unique and coincides with the above one. By a proper rational conic bundle on $X$ we mean a rational conic bundle corresponding to an imbedded family of rational curves [pencil] without fixed points. In the surface case such a conic bundle is [always] a regular pencil. If the genus of $C$ is $\ge 1$, then the uniqueness follows from the rationality of $F$. Otherwise, by adjunction, $0=(K+B.F)=(f^*K_C.F)= (K_C.f(F))$ implies that $F$ is again vertical.
However, for $C=\mathbb P^1$ there are infinitely many rational (nonproper) conic bundles on $X$ such that $B$ is horizontal on their regular model and consists of two sections. For instance, this holds for a general pencil $P\subset \linsys{x\times \mathbb P^1+\mathbb P^1\times y}$ of conics through two generic points of $X$. (This pencil is proper after a blowup of the two fixed points.) \end{exs}
\begin{prop}\label{unique_cb} Let $(X,B+D)$ be a projective plt wlc pair and let $D$ be the reduced part of $B+D$ with $2$ components. Then there exists birationally at most one proper rational conic bundle structure on $X$ such that $D$ is the double section and $B$ does not intersect the generic fiber $F$ of the conic bundle, that is, $(K+B+D.F)=(K+D.F)=0$. More precisely, the conic bundle structure is birationally independent of a plt wlc model of $(X,B+D)$. \end{prop}
\begin{proof} A rational conic bundle of $X$ is a rational contraction $X\dashrightarrow T$ such that its generic fiber $F$ is a rational curve. The conic bundle is proper if it is regular near $F$, that is, the generic fiber is a free curve. Suppose that $(K+B+D.F)=(K+D.F)=0$. Equivalently, $(K.F)=-2,(D.F)=2, (B.F)=0$. The last condition means that $\Supp B\cap F=\emptyset$. We prove that such a conic bundle is birationally unique, that is, the generic fiber $F$ is unique. Note also that if such a conic bundle structure is proper on some plt wlc model of $(X,B+D)$, then it will be proper on $X$ after a crepant blowup (flop). Thus it is sufficient to establish the uniqueness on one fixed model $(X,B+D)$.
Step 1. Reduction to the case of a $0$-pair $(X,B+D)$. Let $(X,B+D)\to X{}_{\mathrm{lcm}}$ be an Iitaka contraction. Then the generic fiber $F$ is contractible to a point on $X{}_{\mathrm{lcm}}$. Thus the uniqueness of conic bundle is sufficient to establish on the generic log fiber $(X_\eta,B_{X_\eta}+D_\eta)$ where $\eta\in X{}_{\mathrm{lcm}}$ is the generic point. By construction $(X_\eta,B_{X_\eta}+D_\eta)$ is a $0$-pair.
Step 2. Reduction to the case when $B=0$. Use the LMMP for $(X,(1+\varepsilon)B+D),0<\varepsilon \ll 1$. Since the Kodaira dimension of $(X,(1+\varepsilon)B+D)$ is $\ge 0$, the LMMP terminates with a wlc model. Note also that the LMMP requires an appropriate initial model with $\mathbb R$-Cartier $B$. Such a model can be constructed as a small modification of $(X,B+D)$, a $\mathbb Q$-factorialization of $B$. Any small modification of $X$ is a flop of $(X,B+D)$ and does not touch any generic fiber $F$. For sufficiently small $\varepsilon$, $(X,(1+\varepsilon)B+D)$ is also plt. The divisorial contractions of the LMMP do not touch $F$ because they are negative with respect to $B$ and their exceptional locus lies in $\Supp B$. For wlc $(X,(1+\varepsilon)B+D)$, $B$ is semiample. Let $(X,(1+\varepsilon)B+D)\to X{}_{\mathrm{lcm}}$ be the corresponding Iitaka contraction. Then, as in Step 1, $F$ is contractible to a point by the contraction and it is sufficient to verify the uniqueness for the generic log fiber $(X_\eta,B_{X_\eta}+D_\eta)=(X_\eta,D_\eta)$. By construction $B_{X_\eta}=0$.
Step 3. Reduction to the case when $\Diff_D 0=0$. Use the canonical covering (fliz) of $(X,D)$. Indeed, the canonical covering makes $(X,D)$ a log Gorenstein $0$-pair, that is, $K+D\sim 0$. Thus $(D,\Diff_D0)$ is also a log Gorenstein $0$-pair. The plt property of $(X,D)$ gives the klt property of $(D,\Diff_D0)$, and the canonical one in the Gorenstein case. Hence $\Diff_D 0=0$.
Note also that every proper rational conic bundle gives a similar bundle on the covering. Indeed, every proper rational fibration induces a proper rational fibration on the covering. The latter fibration is the rational contraction for a Stein decomposition of the composition of the covering with the former contraction. The generic fiber $F$ on $X$ is $\mathbb P^1$ with a transversal double section $D$. The divisorial ramification of the canonical covering is only in $D$. Thus the conic bundle fibration goes into a conic bundle with the double section induced by $D$. Each section $D_i$ goes into a section of the fibration after the covering.
The mapping of fibrations for coverings is monomorphic.
Step 4. Final. Since $\Diff_D 0=0$, $D$ is rationally disconnected (separably in positive characteristic), that is, two generic points of $D$ are not connected by a rational curve on $D$. The same holds for each component $D_i, i=1,2$, of $D$. So, the base $T$ of any rational contraction $X\dashrightarrow T$ with rational sections $D_i$ is also rationally disconnected. The base $T$ is birationally isomorphic to each of the $D_i$. Thus a rational proper conic bundle on $X$ is a rational contraction given by the rational connectedness. \end{proof}
The rational disconnectedness of $D$ was important in the last step of the proof.
\begin{ex}\label{nonunique_cb2} Let $X=\mathbb P^1\times \mathbb P^1$ and $D\in \linsys{-K}$ be a smooth anticanonical curve. Then $X$ has two conic bundle fibrations with the double section $D$. Actually, for any double covering $D\to \mathbb P^1$, there are a wlc model of $(X,D)$ and a conic bundle inducing this double covering. \end{ex}
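For a quick check of the double-section property: $-K_{\mathbb P^1\times\mathbb P^1}$ has bidegree $(2,2)$, so for the fibers $F_1,F_2$ of the two projections $$ (D\cdot F_1)=(D\cdot F_2)=2, $$ that is, a smooth $D\in\linsys{-K}$ is a double section of both rulings, and each projection restricts to a double covering $D\to\mathbb P^1$.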
\begin{thm}\label{can_0-perest} Let $(X,B+D)$ be a plt pair with a plt wlc model $(Y,B_Y+D_Y)/X{}_{\mathrm{lcm}}$ and $D$ be the reduced part of $B+D$.
If $D$ is not vertical over $X{}_{\mathrm{lcm}}$, then there exists a (projective plt) wlc model $(Y,B_Y+D_Y)/X{}_{\mathrm{lcm}}$ such that $X\dashrightarrow Y$ is a $1$-modification and the model has a Mori log contraction $(Y,B_Y/T/X{}_{\mathrm{lcm}})$ with $K_Y+B_Y+D_Y\equiv 0/T/X{}_{\mathrm{lcm}}$. In this case $D_Y$ has at most $2$ irreducible components and each component of $D_Y$ is horizontal with respect to $Y\to X{}_{\mathrm{lcm}}$.
In the case with $2$ horizontal components $D_1=D_{1,Y},D_2=D_{2,Y}$ of $D_Y$ over $X{}_{\mathrm{lcm}}$, the Mori log contraction $Y\to T$ is a conic bundle. The divisors $D_1,D_2$ are rational sections of the conic bundle. Such a conic bundle structure $Y\to T$ is birationally unique for $(X,B+D)$. More precisely, the conic bundle structure is independent of the plt wlc model $(Y,B_Y+D_Y)$.
In the case with $2$ horizontal components, if $(X,B+D)$ is wlc itself, then $D_Y$ is the birational transformation of $D$ and the conic bundle gives a generalized canonical log flop $c\colon (D_1,B_{D_1})\dashrightarrow (D_2,B_{D_2})$ as the composition \begin{align*} (D_1,B_{D_1})\hookrightarrow (X,B+D)&\dashrightarrow (Y,B_Y+D_Y)\twoheadrightarrow (T,(B+D)_T) \twoheadleftarrow (Y,B_Y+D_Y)\\ &\dashleftarrow (X,B+D) \hookleftarrow (D_2,B_{D_2}), \end{align*} where $(B+D)_T=B{}_{\mathrm{div}}$ is the divisorial part of adjunction with respect to the conic bundle.
The canonical property in addition to uniqueness means that the restrictions of differentials (Poincare residues) (super)commutes with the flop: $$ c^*\rest{D_2}=(-1)^m\rest{D_1}\colon H^0(X,\omega^m[mB])=H^0(Y,\omega_{Y}^m[mB_Y])\to H^0(D_1,\omega_{D_1}^m[mB_{D_1}]). $$
The induced standard structure is preserved under the flop.
If $B+D=B{}_{\mathrm{st}}+B{}_{\mathrm{c}}+D$ is the standard structure with $\mathbb Q$-mobile $B{}_{\mathrm{c}}$, then the flop preserves the induced standard structure. \end{thm}
\begin{proof} By the LMMP a wlc model $(Y,B_Y+D_Y)/X{}_{\mathrm{lcm}}$ exists exactly when the Kodaira dimension of $(X,B+D)$ is $\ge 0$. In particular, this will be a generalized flop if $(X,B+D)$ is wlc. One can suppose that $X\dashrightarrow Y$ is a $1$-modification. Indeed, to apply the LMMP we need a projective lc model, e.g., a log resolution with boundary multiplicities $1$ in exceptional divisors. By the plt property all such divisors will be contracted and $(Y,B_Y+D_Y)$ will be plt.
If $D$ is not vertical over $X{}_{\mathrm{lcm}}$ then one can apply the LMMP to $(Y/X{}_{\mathrm{lcm}},B_Y)$ assuming that $D_Y$ is $\mathbb Q$-factorial. (The latter needs a small $\mathbb Q$-factorialization.) This gives the required Mori log contraction $(Y,B_Y/T/X{}_{\mathrm{lcm}})$ such that $K_Y+B_Y+D_Y\equiv 0/T/X{}_{\mathrm{lcm}}$. The generic log fiber $(Y_\eta,B_{Y_\eta}+D_{Y_\eta})$ of $Y/T$ has dimension $\ge 1$ and is a plt $0$-pair. Thus $D_Y$ has at most $2$ irreducible components and each component of $D_Y$ is horizontal with respect to $Y\to X{}_{\mathrm{lcm}}$.
By the plt property of $(Y,B_Y+D_Y)$, $D_Y$ is the birational transform of $D$ if $(X,B+D)$ is wlc.
In the case with $2$ horizontal components $D_1=D_{1,Y},D_2=D_{2,Y}$ of $D_Y$ over $X{}_{\mathrm{lcm}}$, the Mori log contraction $Y\to T$ is a conic bundle. The divisors $D_1,D_2$ are rational sections of the conic bundle. Such a conic bundle structure $Y\to T$ is birationally unique for $(X,B+D)$.
If $(X,B+D)$ is wlc then the conic bundle gives a generalized log flop $(D_1,B_{D_1})\dashrightarrow (D_2,B_{D_2})$ as the composition \begin{align*} (D_1,B_{D_1})\hookrightarrow (X,B+D)&\dashrightarrow (Y,B_Y+D_Y)\twoheadrightarrow (T,(B+D)_T) \twoheadleftarrow (Y,B_Y+D_Y)\\ &\dashleftarrow (X,B+D) \hookleftarrow (D_2,B_{D_2}), \end{align*} where $(B+D)_T=B{}_{\mathrm{div}}$ is the divisorial part of adjunction for the conic bundle. The moduli part of adjunction is trivial.
The induced standard structure is preserved under the flop.
If $B+D=B{}_{\mathrm{st}}+B{}_{\mathrm{c}}+D$ is the standard structure with $\mathbb Q$-mobile $B{}_{\mathrm{c}}$ the flop preserves the induced standard structure as for b-divisors. If one would like to have the last property for divisors, then one needs to assume that $B{}_{\mathrm{c}}\equiv 0/T$ and is a divisor. This holds after a (possibly not small) modification of the conic bundle over $T$.
The statement and the formula relating the restrictions to the canonical flop follow from Lemma~\ref{sum_resid}, because restrictions preserve log regular $m$-differentials.
Finally, the uniqueness of the flop follows from Proposition~\ref{unique_cb}. Indeed, after a crepant blowup of divisors on $(Y,B_Y+D_Y)$ with log discrepancies $\le 1$, one can assume that a fixed rational conic bundle structure on another wlc model of $(X,B+D)$ with horizontal $D$ is proper on $Y$. \end{proof}
A typical example of an interlaced triple comes from a {\rm tdlt}\ triple.
\begin{ex}[Triple of minimal lc centers]\label{tripl_min_c} Let $(X,B)$ be a {\rm tdlt}\ pair and $(Y,B_Y)=(X,B){}_{\mathrm{mlcc}}$ be its pair of minimal lc centers. Then $(Y,B_Y)$ is also {\rm tdlt}, respectively, wlc, standard, etc., if so is $(X,B)$.
For (projective) wlc $(X,B)$, there is an interlacing on $(Y,B_Y)$. The vertices of $\Gamma$ are the irreducible components $Y_i$ of $Y$. An edge between $Y_i,Y_j$ is an invariant (with respect to the log structure) closed irreducible subvariety $Z\subseteq X$, a flopping center, such that \begin{description}
\item{} $Y_i,Y_j\subset Z$ are invariant divisors and
\item{} there exists a rational $0$-contraction of $(Z,B_Z)$ with horizontal divisors $Y_i,Y_j$. Equivalently, there is a free curve $C\subseteq Z$ (the generic fiber of the $0$-contraction) such that $((K_Z+B_Z)\cdot C)=0$ and $(Y_i\cdot C),(Y_j\cdot C)>0$. \end{description} Indeed, in this situation there exists a generalized log flop $(Y_i,B_{i,Y})\dashrightarrow (Y_j,B_{j,Y})$ by Theorem~\ref{can_0-perest}.
Note that, for $i=j$, one can sometimes get an autoflop $(Y_i,B_{i,Y})\dashrightarrow (Y_i,B_{i,Y})$, an involution in $\Bir(Y_i,B_{i,Y})$. Unfortunately, it is not unique by Example~\ref{nonunique_cb2} and so is not very useful in general. So, we agree that, for $i=j$, a flopping center has a single invariant divisor $Y_i=Y_j$ and the flop is identical. \end{ex}
\begin{lemma}\label{chain_of_flop} Let $(X/T,B)$ be a {\rm tdlt}\ $0$-contraction with a boundary $B$. Then any two minimal lc centers over $T$ can be connected by flopping centers over $T$. \end{lemma}
\begin{proof} The proof is similar to the case over a point: $T=\mathrm{pt.}$. Let $Y,Y'$ be two minimal lc centers over $T$. We will find a chain of flopping centers $C_1,\dots,C_n$ on $X$ such that $Y\subset C_1,\dots, Y'\subset C_n$. As usual, a chain means that $C_i$ intersects $C_{i+1}$ for $1\le i\le n-1$.
Step 1. We can suppose that $X$ is irreducible. Otherwise, take the normalization $(X^{\mathrm{n}},B^{\mathrm{n}})=\coprod (X_i,B_i)$. Note that $X_i$ is possibly not geometrically irreducible and not connected (fiberwise) over $T$. Nonetheless, since $X/T$ is a contraction, the fibers are connected. Thus there is a chain of components $X_i$: $$ X_1\supset Y_1\subset X_2\supset \dots \subset X_{n-1}\supset Y_{n-1}\subset X_n $$ with common minimal lc centers $Y_i$ for each pair $(X_i,B_i),(X_{i+1},B_{i+1}), 1\le i\le n-1$. Hence it is sufficient to find a chain of flopping centers for each pair of minimal centers $Y_{i-1},Y_{i}\subset X_i,1\le i\le n$, where $Y_0=Y$ and $Y_n=Y'$. So, we replace $(X/T,B)$ by $(X_i/T_i,B_i)$, where $X_i\to T_i$ is a $0$-contraction given by the Stein decomposition. It is {\rm tdlt}.
Step 2. Dimensional induction. The case $\dim X/T=0$ is empty. The case $\dim X/T=1$ is flopping. Indeed, the generic fiber is a (geometrically) irreducible curve with at most two minimal centers. If there are no centers, then the statement is empty. If there are centers, then $X$ itself is flopping. A chain is trivial: $C_1=X$.
If $\dim X/T\ge 2$ and there are minimal lc centers $Y,Y'$, then two situations are possible: \begin{description}
\item{\rm (1)} the generic fiber $(X_\eta,B_{X_\eta})$ is nonklt, but plt, or
\item{\rm (2)} all proper lc centers over $T$ are lc connected, that is, their union is connected over $T$. \end{description} In (1), the chain is trivial as above: $C_1=X$. In (2), we apply induction to the {\rm tdlt}\ pair $(Y/T,B_Y)$, where $Y$ is the union of all invariant divisors over $T$. Note that the lc centers for a {\rm tdlt}\ pair are invariant subvarieties. \end{proof}
\begin{cor} Let $(X,B)$ be a connected projective {\rm tdlt}\ $0$-pair. Then $(X,B){}_{\mathrm{mlcc}}$ is a connected interlaced $0$-pair. \end{cor}
\begin{proof} Immediate by Theorem~\ref{can_0-perest}, Example~\ref{tripl_min_c} and Lemma~\ref{chain_of_flop} over a point: $T=\mathrm{pt.}$. \end{proof}
By the uniqueness, or the canonical construction, of flops in Theorem~\ref{can_0-perest}, we can construct interlacings for families.
\begin{cor}\label{interl_of_famil} Let $(X/T,B)$ be a family of connected projective {\rm tdlt}\ $0$-pairs. Then, for any generic point $\eta$ of $T$, $(X/T,B){}_{\mathrm{mlcc}}$ is a connected interlaced family of {\rm tdlt}\ $0$-pairs. \end{cor}
\begin{proof} Immediate by Theorem~\ref{can_0-perest}, Example~\ref{tripl_min_c} and Lemma~\ref{chain_of_flop}.
By general properties of {\rm tdlt}\ families, $(X/T,B){}_{\mathrm{mlcc}}=(X{}_{\mathrm{mlcc}}/T,B_{X{}_{\mathrm{mlcc}}})$ is a {\rm tdlt}\ family of $0$-pairs. Irreducible components $Y\subseteq X{}_{\mathrm{mlcc}}$ are not necessarily geometrically irreducible and/or connected (fiberwise) over $T$. However, they are connected by flopping centers according to Lemma~\ref{chain_of_flop}. On the other hand, each flopping center, with a geometrically reducible lc center $Y$ over $T$, determines a rational conic bundle and a flop of centers by Theorem~\ref{can_0-perest}. (Actually, $Y$ can be irreducible, but with two components in generic geometric fibers.) If $Y$ is geometrically irreducible, then the flop is identical on $Y$. For these constructions, it is sufficient to consider the generic fiber $(X_\eta,B_{X_\eta})$. To apply the theorem, take an algebraic closure of $\eta$. By the uniqueness, the required flops are defined over $\eta$ and, by definition, over $T$. \end{proof}
\begin{ex}[Blowup of an lc center]\label{blowup_lc_cent} Let $f\colon (\tilde{X},B_{\tilde{X}})\to (X,B)$ be a crepant extraction (flop) of an lc center $f(D)$ with a prime divisor $D\subset \tilde{X}$. We suppose that $f(D)$ is a real lc center, that is, $B\ge0$ and $(X,B)$ is lc near the generic point of $f(D)$. So, $B$ is a boundary near the center. By definition $D$ is a reduced divisor in $B_{\tilde{X}}$ and $(D,B_D)$ is lc with a boundary $B_D$ (generically) over the center. The mapping $f\rest{D}\colon (D,B_D)\to f(D)$ is a $0$-contraction. So, any two minimal lc centers of $(D,B_D)$ are related by a flop according to Corollary~\ref{interl_of_famil} and to the connectedness of the lc locus. Such a minimal center always exists, possibly $D$ itself. If $(X,B)$ is {\rm tdlt}\ along $f(D)$, then, for any minimal lc center $Y\subseteq (D,B_D)$ over $f(D)$, the restriction $$ f\rest{Y}\colon (Y,B_Y)\to (f(Y),B_{f(Y)}) $$ is birational and a flop.
However, if $(X,B)$ is slc along $f(D)$ and $f$ is a normalization with a blowup, then $f\rest{Y}$ can be a fliz, if it is generically finite, e.g., $2$-to-$1$ for osculation in divisors of slc $(X,B)$. \end{ex}
\section{Relative b-representation}
This section gives results which are relative analogues and generalizations of the well-known finiteness of the representation of flops on differentials (Nakamura, Ueno, Sakai, Fujino, etc.) \cite{NU} \cite{S} \cite{U} \cite{FC} (the last preprint has historical remarks on the question).
\begin{df} A linear representation $G\to\Aut V$ is {\em finite\/} if so is its image. The {\em order} of the representation is the order of its image. The same works for projective representations $G\to \Aut\mathbb P(V)$. \end{df}
Even for sheaves $\omega_{X/T}^m[mB]$ and $\mathcal O_X(mM)$, which are isomorphic, the representation of flops on $H^0(X,\omega_{X/T}^m[mB])$ is typically different from that on $H^0(X,m\mathcal M)$. For example, if $X$ is a K3 surface then the representation of the automorphisms of $X$ on $H^0(X,\mathcal O_X)$ is trivial, and $\omega_X\cong \mathcal O_X$, but the representation on $H^0(X,\omega_X)$ can be nontrivial. In this situation, the automorphisms with trivial action on $H^0(X,\omega_X)$ are known as symplectic. In general, the difference between representations on global sections for isomorphic invariant invertible sheaves is in scalar matrices. So, the projective representations of isomorphic invariant invertible sheaves or of invariant up to linear equivalence divisors coincide. Thus in the proof of Corollary~\ref{generic_Kawamata} the choice of a moduli part of adjunction $\mathcal M$, as a sheaf or as a divisor, does not matter. Only the flop invariance of $\mathcal M$ is important. But in Theorem~\ref{flop_repres} the canonical choice is an important assumption.
\begin{ex}[Toric representation]\label{toric_repr} Take a log pair $(\mathbb P^1,0+\infty)$. This is a toric variety with the action $x\mapsto tx, t\in T^1=k^*$, where $x$ is a nonhomogeneous coordinate. For the sheaf $\mathcal O_{\mathbb P^1}(n(\infty-0))$, the representation of $T^1$ on $H^0(\mathbb P^1,n(\infty-0))=kx^n$ is given by $x^n\mapsto t^nx^n$, that is, it has weight $n$. The sheaf $\mathcal O_X$ with $n=0$ has trivial representation and is canonically isomorphic to $\omega_{\mathbb P^1}(0+\infty), 1\mapsto dx/x$.
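Explicitly, for the pullback under the automorphism $x\mapsto tx$, $$ x^n\mapsto (tx)^n=t^nx^n \quad\text{and}\quad \frac{dx}{x}\mapsto \frac{d(tx)}{tx}=\frac{dx}{x}, $$ so the representation on $H^0(\mathbb P^1,n(\infty-0))=kx^n$ is infinite for $n\not=0$, while the representation on $H^0(\mathbb P^1,\omega_{\mathbb P^1}(0+\infty))=k\,dx/x$ is trivial.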
Similarly, for any invariant sheaf of rank $1$, it is easy to construct an isomorphic invariant sheaf with infinite representation on its global sections if the group of its isomorphisms is infinite and if there exists a nonconstant rational function with an invariant divisor. However, such log canonical divisors and functions exist only on nonklt pairs. So, for klt pairs, any scalar representation is finite, and the finiteness of a linear representation of an invertible sheaf is a property of the whole class of isomorphic sheaves. \end{ex}
\begin{lemma}\label{prod_of_repres} Let $\mathcal D_1,\mathcal D_2$ be two b-divisors, which are effective up to linear equivalence and invariant up to linear equivalence with respect to the action of a group $G\subseteq\Bir(X)$ of birational automorphisms.
(1) Then $\mathcal D_1+\mathcal D_2$ is also invariant up to linear equivalence and the finiteness of projective representation of $G$ on $\mathbb P(H^0(X,\mathcal D_1+\mathcal D_2))$ implies the same for representations of $G$ on $\mathbb P(H^0(X,\mathcal D_1)),\mathbb P(H^0(X,\mathcal D_2))$. Moreover, the orders of both representations are bounded by and divide the order of representation on $\mathbb P(H^0(X,\mathcal D_1+\mathcal D_2))$.
(2) The converse holds if sections for $\mathcal D_1$ and $\mathcal D_2$ generate the sections of $\mathcal D_1+\mathcal D_2$, that is, the surjectivity $$ H^0(X,\mathcal D_1)\otimes H^0(X,\mathcal D_2)\twoheadrightarrow H^0(X,\mathcal D_1+\mathcal D_2), s_1\otimes s_2\mapsto s_1s_2, $$ holds. The order of the representation on $\mathbb P(H^0(X,\mathcal D_1+\mathcal D_2))$ is bounded by and divides the product of the orders of the representations on $\mathbb P(H^0(X,\mathcal D_1)),\mathbb P(H^0(X,\mathcal D_2))$.
(3) For a natural number $m>0$, if sections for $\mathcal D_1$ generate the sections for $m\mathcal D_1$, then the representations on $\mathbb P(H^0(X,\mathcal D_1)),\mathbb P(H^0(X,m\mathcal D_1))$ have isomorphic images and the same orders.
(4) If the divisors $\mathcal D_1,\mathcal D_2$ are invariant with respect to $G$, then (2) holds for linear representations. If $\mathcal D_1$ is invariant, then in (3) the image of the representation on $H^0(X,m\mathcal D_1)$ is a quotient of the image for $H^0(X,\mathcal D_1)$ and the order of the former representation divides the order of the latter one.
(5) The statements (1-4) also hold for $G$-invariant b-sheaves instead of b-divisors, even $G$-invariant up to isomorphism for the projective representations. \end{lemma}
In general (1) does not hold for linear representations.
\begin{ex} For any natural number $n>0$, the representations of $k^*$ on $H^0(\mathbb P^1,n(\infty-0))$ and on $H^0(\mathbb P^1,n(0-\infty))$ (see Example~\ref{toric_repr}) are infinite, but the representation on their product $H^0(\mathbb P^1,0(\infty-0))=H^0(\mathbb P^1,\mathcal O_{\mathbb P^1})$ is trivial. \end{ex}
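Indeed, in the coordinates of Example~\ref{toric_repr}, $$ H^0(\mathbb P^1,n(\infty-0))=kx^n,\qquad H^0(\mathbb P^1,n(0-\infty))=kx^{-n}, $$ the two factors have the weights $n$ and $-n$ respectively, while the product $x^n\cdot x^{-n}=1$ has weight $0$.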
\begin{lemma}\label{repres_comparable} Let $G\subseteq\Bir(X)$ be a group of birational automorphisms, and $\mathcal M,\mathcal D$ be two b-divisors on $X$ such that \begin{description}
\item{\rm (1)} $\mathcal M,\mathcal D$ are invariant up to linear equivalence with respect to $G$,
\item{\rm (2)} $\mathcal M$ is semi-ample, and
\item{\rm (3)} $\mathcal D\equiv r\mathcal M$ for some real number $r\ge 0$.
\end{description} Then the finiteness of the representation of $G$ on $\mathbb P(H^0(X,m\mathcal M))$ for sufficiently large natural numbers $m$ implies the finiteness of the representation on $\mathbb P(H^0(X,\mathcal D))$. A bound for the latter representation is the same as for $\mathbb P(H^0(X,m\mathcal M))$. \end{lemma}
\begin{lemma}\label{resid_injection} Let $(X/T,B)$ be a {\rm tdlt}\ family of connected $0$-pairs, and $Y\subseteq X{}_{\mathrm{mlcc}}$ be a component of its minimal lc center over $T$. Then, for any natural number $m$, the Poincare residue gives a canonical inclusion $$ H^0(X,\omega_{X/T}^m[mB])\subseteq H^0(Y,\omega_{Y/T}^m[mB_Y]), \omega\mapsto \omega\rest{Y}=\res_Y\omega. $$ \end{lemma}
\begin{lemma}\label{restr_in_mp_triv} Let $(X/T,B)$ be a connected [{\rm tdlt}] mp-trivial reduced family of connected {\rm tdlt}\ $0$-pairs with a horizontal boundary $B$, and let $m$ be an even natural number such that $\mathcal M\sim_m0$. Then for any $t,s\in T$ there exist canonical identifications given by restrictions and residues: $$ H^0(X,\omega_{X/T}^m[mB])=H^0(X_t,\omega_{X_t}^m[mB_{X_t}])= H^0(X_s,\omega_{X_s}^m[mB_{X_s}])= $$ $$ H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}])=H^0(Y_s,\omega_{Y_s}^m[mB_{Y_s}]), $$ where $Y_t,Y_s$ are the minimal lc centers or even the minimal lc centers for {\rm tdlt}\ blowups. If $c^*:H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}])\to H^0(Y_s,\omega_{Y_s}^m[mB_{Y_s}])$ denotes the last canonical identification, then $c^*\rest{Y_t}=\rest{Y_s}$. \end{lemma}
\begin{lemma}\label{flopping_lc_center} Let $g\colon (X,B)\dashrightarrow (Y,B_Y)$ be a flop of {\rm tdlt}\ pairs, and $Z\subseteq X$ be an lc center of $(X,B)$ such that $g$ is defined in $Z$. Then $g(Z)\subseteq Y$ is also an lc center of $(Y,B_Y)$ and $g\rest{Z}\colon Z\dashrightarrow g(Z)$ is a rational contraction. Moreover, if $g\rest{Z}$ is birational, then it is a log flop $$ g\rest{Z}\colon (Z,B_Z)\dashrightarrow (g(Z),B_{g(Z)}). $$ In particular, if $Z$ is a minimal lc center, then $g(Z)$ is also minimal, and $g\rest{Z}$ is a log flop. \end{lemma}
\begin{proof} By definition, we can suppose that both varieties $X,Y$ are irreducible. Otherwise, we take the irreducible component of $X$ containing $Z$ and its image in $Y$. Here we use the {\rm tdlt}\ property (no osculation).
Also by definition any lc center is the image of a b-divisor $D$ with boundary multiplicity $1$ with respect to $(X,B)$. Equivalently, there exists an extraction $\tilde{X}\to X$ of $D$. Since the singularities are {\rm tdlt}, we can take a flop, a crepant {\rm tdlt}\ resolution (even a very economical one, with the single exceptional divisor $D$). By construction $D\subset \tilde{X}$ is a divisor with a contraction $D\to Z$.
To verify that $g\rest{Z}\colon Z\dashrightarrow g(Z)$ is a rational contraction, it is sufficient to verify that the composition $D\to Z\dashrightarrow g(Z)$ is a rational contraction. Note for this that the center of $D$ on $Y$ is an lc center of $(Y,B_Y)$, because the composition $(\tilde{X},B_{\tilde{X}})\to (X,B)\dashrightarrow (Y,B_Y)$ is also a flop of {\rm tdlt}\ pairs. By the crepant property of flops, the composition maps $D$ onto the lc center $g(Z)$. This is a rational contraction by the {\rm tdlt}\ property of $(Y,B_Y)$: there exists an extraction of $D$ in $Y$ with a contraction onto $g(Z)$ as above.
The flopping property of a birational $g\rest{Z}$ follows from the divisorial adjunction. For this we use a dimensional induction. It is the divisorial adjunction if $Z$ is a divisor. If $Z$ is not a divisor, then by the {\rm tdlt}\ property there exists an invariant divisor $W$ containing $Z$. If $W\dashrightarrow g(W)$ is birational, then we can use induction. Otherwise we extract an invariant b-divisor $\tilde{W}\subset \tilde{Y}\to Y$ such that $W$ maps to $\tilde{W}$. Now the mapping of $Z$ to $\tilde{W}$ is not necessarily defined. If it is not defined, then we blow up $Z$ in $W$ and, by Example~\ref{blowup_lc_cent}, we can find a flop of an lc center $(\tilde{Z},B_{\tilde{Z}})\dashrightarrow (Z,B_Z)$ such that the mapping of $\tilde{Z}$ to $\tilde{W}$ is defined. The composition $\tilde{Z}\dashrightarrow \tilde{W}\to Y$ maps $\tilde{Z}$ onto $g(Z)$, coincides with $\tilde{Z}\to Z\dashrightarrow g(Z)$, and gives the required flop by induction.
Finally, suppose that $Z$ is a minimal lc center. Then the lc centers of the contraction $(D,B_D)\to Z$ are only horizontal (by the connectedness of lc centers), and vice versa. By the {\rm tdlt}\ property, any minimal lc center $\tilde{Z}$ of $(D,B_D)$ gives a (flop) birational mapping to $Z$. Thus $g(Z)$ is minimal and $Z\dashrightarrow g(Z)$ is birational. \end{proof}
\begin{prop}\label{repres_exten} Let $(X/T,B)$ be a {\rm tdlt}\ family of connected $0$-pairs, and $Y\subseteq X{}_{\mathrm{mlcc}}$ be an irreducible component of its minimal lc center over $T$. Then, for any natural number $m$, each linear transformation $g^*$ of the representation on $H^0(X,\omega_{X/T}^m[mB])$ can be extended to a linear transformation $g_Y^*$ of the representation on $H^0(Y,\omega_{Y/T}^m[mB_Y])$. That is, for any $g\in\Bir(X\to T/k,B)$, there exists $g_Y\in \Bir(Y\to T/k,B_Y)$ such that $$ g^*=(g_Y^*)\rest{H^0(X,\omega_{X/T}^m[mB])}, $$ where $g_Y^*$ is the representation of $g_Y$ on $H^0(Y,\omega_{Y/T}^m[mB_Y])$. \end{prop}
\begin{proof} An extension can be done under the canonical inclusion $$ V=H^0(X,\omega_{X/T}^m[mB])\subseteq H^0(Y,\omega_{Y/T}^m[mB_Y]), \omega\mapsto \omega\rest{Y}=\res_Y\omega $$ of Lemma~\ref{resid_injection}.
Step 1. If a flop $g$ is defined in $Y$, then $g(Y)\subseteq X{}_{\mathrm{mlcc}}$ and is also an irreducible component. In the case $g(Y)=Y$, $g$ induces a generic flop $g_Y=g\rest{Y}$ on $(Y,B_Y)$ by Lemma~\ref{flopping_lc_center}. In this case, $(g_Y^*)\rest{V}=\rest{Y}g^*$ is a general invariance of the Poincare residue.
Step 2. More generally, if a flop $g$ is defined in $Y$, but possibly $g(Y)\not=Y$, then $g$ induces a log flop $g\rest{Y}\colon (Y,B_Y)\to (g(Y),B_{g(Y)})$ again by Lemma~\ref{flopping_lc_center}. By the connectedness of fibers and Lemma~\ref{chain_of_flop}, Theorem~\ref{can_0-perest}, there exists a chain $C_1,\dots,C_n,n\ge 1$, of flopping centers $C_i$ on $X$ such that $Y\subset C_1,\dots,g(Y)\subset C_n$ and the chain of centers defines a sequence of (canonical) flops $Y=Y_0\dashrightarrow Y_1 \dashrightarrow\dots\dashrightarrow Y_{n-1}\dashrightarrow g(Y)=Y_n$. (According to our agreement, $Y_0=Y_1$ and/or $Y_{n-1}=Y_n$, if the flopping centers $C_1$ and/or $C_n$ have respectively a single minimal lc center.) They are flops with respect to adjoint boundaries $(Y_i,B_{Y_i})$. Their composition gives a flop $c\colon (Y,B_Y)\dashrightarrow (g(Y),B_{g(Y)})$, canonical with respect to differentials. The canonicity means that all such flops are identical on restricted sections for every {\em even\/} $m$. The flops agree with the restrictions (Poincare residues): for every $c_i\colon (Y_i,B_{Y_i})\dashrightarrow (Y_{i+1},B_{Y_{i+1}})$, $$ c_i^*\rest{Y_{i+1}}=\rest{Y_i}, $$ and the inclusion is given by the Poincare residue, the identifications $=$ by canonical flops: $$ H^0(X,\omega_{X/T}^m[mB])\subseteq H^0(Y,\omega_{Y/T}^m[mB_Y])= H^0(Y_i,\omega_{Y_i}^m[mB_{Y_i}])=H^0(g(Y),\omega_{g(Y)/T}^m[mB_{g(Y)}]). $$ (Usually, $Y/T$ is not geometrically irreducible and the inclusion (Poincare residue) is proper.) Thus, for $g_Y=c^{-1}(g\rest{Y})\colon Y\to Y$, $g_Y^*$ extends the representation of $g$ from $H^0(X,\omega_{X/T}^m[mB])$ to $H^0(Y,\omega_{Y/T}^m[mB_Y])$. Indeed, for any $\omega\in V$, $$ (g_Y^*)(\omega\rest{Y})=(g\rest{Y})^*(c^{-1})^*(\omega\rest{Y})= (g\rest{Y})^*(\omega\rest{g(Y)})=(g^*\omega)\rest{Y}. $$
Step 3. If a flop $g$ is not defined in $Y$ ($Y$ is in indeterminacy locus), then we make a blowup $(X',B_{X'})\to (X,B)$ in $Y$. For {\rm tdlt}\ families such a blowup exists. However, in our situation, the problem is birational and it is sufficient to consider $(X_\eta,B_{X_\eta})$. In the generic case, the only problem is nonirreducibility of $X_\eta$. In this case, we replace $X_\eta$ by its normalization with isomorphism of gluing divisors and identification of differentials along them. Then a blowup on any component should be done simultaneously in identified centers on both gluing divisors. We identify the blown up centers, in particular, their minimal lc centers. Take any minimal lc center $Y'$ over $Y$. Then the blowup gives a canonical log flop $c'\colon (Y',B_{Y'})\to (Y,B_Y)$. The canonicity again means the same as above: $$ H^0(X',\omega_{X'/T}^m[mB_{X'}])= H^0(X,\omega_{X/T}^m[mB])\subseteq H^0(Y,\omega_{Y/T}^m[mB_Y]) =H^0(Y',\omega_{Y'/T}^m[mB_{Y'}]). $$ If $g'$ is $g$ on $X'$, and is defined in $Y'$, then put $g_Y=c'c^{-1}(g'\rest{Y'}){c'}^{-1}$, where $c\colon Y'\to g(Y')$ is now constructed for $g',Y',(X',B_{X'})$ as above for $g,Y,(X,B)$. In this situation $g_Y^*$ also extends the representation of $g$ from $H^0(X,\omega_{X/T}^m[mB])=H^0(X',\omega_{X'/T}^m[mB_{X'}])$ to $H^0(Y,\omega_{Y/T}^m[mB_Y])=H^0(Y',\omega_{Y'/T}^m[mB_{Y'}])$.
If $g'$ is not defined in $Y'$ we make the next blowup etc. Finally, we associate, to each flop $g\in\Bir(X\to T/k,B)$, a flop $g_Y\in\Bir(Y\to T/k,B_Y)$ with the same (sub)representation: $$ g^*=(g_Y^*)\rest{H^0(X,\omega_{X/T}^m[mB])} \text{ on } H^0(X,\omega_{X/T}^m[mB])\subseteq H^0(Y,\omega_{Y/T}^m[mB_Y]). $$ \end{proof}
A version of the Burnside theorem.
\begin{thm} \label{Burnside} Let $G\subseteq \Aut V$ be a group of linear transformations of a finite-dimensional linear space $V$ over a field $k$ such that \begin{description}
\item{\rm (1)} $G$ is torsion, that is, for every $g\in G$, there exists a positive integral number $m$ such that $g^m=1$, and
\item{\rm (2)} $G$ is finitely generated, or
\item{\rm (3)} every element $g\in G$ is defined over a field $l_g$ which has a uniformly bounded degree over a purely transcendental extension of the prime subfield in $k$.
\end{description} Then $G$ is finite. \end{thm}
For example, (3) holds if $l_g$ has a uniformly bounded degree over a field $l\subseteq k$ of finite type over the prime subfield in $k$ which is independent of $g$.
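Some condition of type (2) or (3) is indeed necessary, already for $\dim V=1$: the group $$ \mu_\infty=\{e\in k^*\mid e^m=1 \text{ for some natural number } m\}\subseteq \Aut V=k^* $$ of all roots of unity in $k=\mathbb C$ is torsion but infinite; it is not finitely generated, and a primitive $m$-th root of unity is defined only over the cyclotomic field of degree $\varphi(m)$ over the prime subfield $\mathbb Q$, which is unbounded in $m$.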
The following result is a special case of Corollary~\ref{repr_lc_can} below. Technically, this is the most crucial step.
\begin{thm} \label{flop_repres} Let $(X_\eta,B_{X_\eta})$ be a generic wlc $0$-pair, where $X_\eta$ is geometrically irreducible. Then, for any natural number $m$, the canonical representation of generic log flops on differentials $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})\to \Aut H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}]), g \mapsto g^*, $$ is finite. Moreover, the order of representation has a uniform bound, independent of $m$. \end{thm}
\begin{proof} Since the representation is independent of a wlc model of $(X_\eta,B_{X_\eta})$, we can suppose that the model $(X_\eta,B_{X_\eta})$ is projective. To construct such a model one can use the LMMP. Below we use some other modifications of this model and even a completion over $k$.
We can suppose that $B$ is a $\mathbb Q$-divisor. If the latter does not hold then $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])=0$ for every $m\not=0$ because $(X_\eta,B_{X_\eta})$ is a $0$-pair.
Step 1. By Lemmas~\ref{prod_of_repres} and~\ref{repres_comparable}, we can suppose that $m$ is sufficiently divisible, and the finiteness is needed only for {\em some\/} such $m$. It is enough to suppose that $\omega_{X_\eta}^m[mB_{X_\eta}]$ is invertible, equivalently the divisor $m\mathcal M$ is Cartier, where $\mathcal M$ is a canonical upper moduli part of adjunction, and that $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$ (this space of sections of a b-sheaf is finite dimensional) generates the relative log canonical ring $\alg (\omega_{X_\eta}^m[mB_{X_\eta}])$. Indeed, by Lemma~\ref{prod_of_repres}, (5), the representation of $\Bir(X_\eta\to\eta/k,B_{X_\eta})$ on $\alg (\omega_{X_\eta}^m[mB_{X_\eta}])$ is finite. Thus it is finite in each degree $l$, that is, on each $H^0(X_\eta,\omega_{X_\eta}^{lm}[lmB_{X_\eta}])$. On the other hand, for every natural number $n$, $nm\mathcal M/m=n\mathcal M$. Therefore, by Lemma~\ref{repres_comparable}, the projective representation on $\mathbb P(H^0(X_\eta,\omega_{X_\eta}^n[nB_{X_\eta}]))$ is finite with the same bound as for the algebra. A difference with the linear representation on $H^0(X_\eta,\omega_{X_\eta}^n[nB_{X_\eta}])$ is only in scalar matrices on rather general fibers $(X_t,B_{X_t})$ (cf. Step 5 below). Thus it is the $0$-dimensional version of the theorem: $\eta=k$. By Proposition~\ref{repres_exten}, this case can be reduced to the same statement for a klt $0$-pair $(Y,B_Y)$. Take a minimal lc center $Y\subseteq X_{t,}{}_{\mathrm{mlcc}}$, assuming that $X_t$ is {\rm tdlt}. But the required result for klt pairs is well-known (cf. Step 6 below). Note that after the reduction we need to consider all flops of $(Y,B_Y)$ and their representations but with scalar restrictions on $H^0(X_t,\omega_{X_t}^n(nB_{X_t}))$. The final uniform bound is the maximum for the two algebras $\alg (\omega_{X_\eta}^m[mB_{X_\eta}])$ and $\bigoplus_{m\ge0}H^0(Y,\omega_Y^m(mB_Y))$.
We suppose also that $m$ is {\em even\/} (see Step 2).
Additionally, we assume that there exists a nonzero section $\omega_0\in H^0(X_\eta,\omega_{X_\eta}^n[nB_{X_\eta}])$ vanishing on the birational reduced b-divisor $\mathcal D$ of $\eta$, which contains all centers in $\overline{\eta}$ of degenerations of $X_\eta$. More precisely, $\Supp\mathcal D$ contains all special points $t\in\overline{\eta}$ such that $\dk(X_t,B_{X_t})<\dk(X_\eta,B_{X_\eta})$. Actually, a subdivisor of $\mathcal D$ related to $\Delta$ in Step 4 below is sufficient.
Step 2. We can suppose that $(X_\eta,B_{X_\eta})$ is klt. Equivalently, $\dk(X_\eta,B_{X_\eta})=\dim X_\eta$. By the LMMP we can suppose that $(X_\eta,B_{X_\eta})$ is dlt. If $(X_\eta,B_{X_\eta})$ is not klt then we consider the minimal lc center $(X_{\eta,}{}_{\mathrm{mlcc}},B_{\eta,}{}_{\mathrm{mlcc}})$ (a generic family of interlaced pairs). It can have disconnected fibers (geometrically not irreducible).
Fix an irreducible component $Y\subseteq X_{\eta,}{}_{\mathrm{mlcc}}$. Then Lemma~\ref{resid_injection} gives a canonical inclusion $$ H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])\subseteq H^0(Y,\omega_{Y/T}^m[mB_Y]), \omega\mapsto \omega\rest{Y}=\res_Y\omega. $$ On the other hand, by Proposition~\ref{repres_exten}, each linear transformation $g^*$ of the representation on $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$ can be extended to a linear transformation $g_Y^*$ of the representation on $H^0(Y,\omega_Y^m[mB_Y])$. That is, for any $g\in\Bir(X_\eta\to\eta/k,B_{X_\eta})$, there exists $g_Y\in \Bir(Y\to \eta/k,B_Y)$ such that $$ g^*=(g_Y^*)\rest{H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])}, $$ where $g_Y^*$ is the representation of $g_Y$ on $H^0(Y,\omega_{Y/T}^m[mB_Y])$.
Now take $Y/\theta$ instead of $Y/\eta$, where $Y\to \theta\to \eta$ is a Stein decomposition. Then $Y$ is geometrically irreducible over $\theta$ and $\Bir(Y\to\eta/k,B_Y)\subseteq \Bir(Y\to\theta/k,B_Y)$. So, it is sufficient to establish the finiteness of the representation of $\Bir(Y\to\theta/k,B_Y)$ on $H^0(Y,\omega_Y^m[mB_Y])$. But $(Y,B_Y)$ is klt by construction.
Step 3. Now $(X_\eta,B_{X_\eta})$ is klt and, by Lemma~\ref{Burnside_3}, it is sufficient to verify that the linear $g^*$ is torsion for each $g\in\Bir(X_\eta\to\eta/k,B_{X_\eta})$. Indeed, by the lemma $$ g^*=g'^*g_\theta^*\text{ on } H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])= H^0(\theta,\mathcal M_\theta^m), $$ where $g'\in\aBir(X_\eta\to\eta/k,B_{X_\eta}), g_\theta\in\Aut(\theta/\overline{l},\mathcal M_\theta)$, $\mathcal M_\theta=(\mathcal M_\theta^m)^{1/m}$ is a canonical $\mathbb Q$-sheaf on $\theta$, and $\mathcal M_\theta^m$ is the direct image of $\omega_{X_\eta}^m[mB_{X_\eta}]$ on $\theta$. Since $B_{X_\eta}$ is a $\mathbb Q$-boundary, we can take a canonical $\mathbb Q$-sheaf $\mathcal M_\theta$. The equality of sections under the direct image holds if $m$ is sufficiently divisible, e.g., $\mathcal M_\theta^m$ is an invertible sheaf. The latter follows from the above choice of $m$. It is well-known that $g'^*$ is a bounded scalar torsion representation, a representation on an isotrivial family (see the proof of Corollary~\ref{repr_klt_lin} and Step 6 below). Thus, for every torsion $g^*$, $g_\theta^*$ is also torsion. By Lemma~\ref{Burnside_3} every $g_\theta$ and $g_\theta^*$ are defined over $l_g$ with a uniformly bounded degree over a field $l$ of finite type over the prime subfield in $k$. Hence the representation $g_\theta^*$ satisfies (1) and (3) of Theorem~\ref{Burnside} and is finite by the theorem. This implies also the finiteness of $g^*$ because the scalar part $g'^*$ is finite. Actually, for sufficiently divisible $m$, $g'^*$ is identical and $g^*=g_\theta^*$.
Step 4. We can suppose now that $(X_\eta,B_{X_\eta})$ is klt, equivalently, $\dk(X_\eta,B_{X_\eta})=\dim X_\eta$, and $g$ is a generic flop. We need to establish that $g^*$ is torsion for the linear representation. In this step we verify the semisimplicity of $g^*$, that is, that $g^*$ is diagonalizable. Moreover, $g^*$ is unitary: the eigenvalues $e_i$ of $g^*$ have norm $1$. It is sufficient to establish this on a subspace of bounded forms $$ W\subseteq \{\omega\in H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])\mid \Vert\omega\Vert<+\infty\}. $$
This is a birational concept: $\Vert\omega\Vert=\sup_{t\in T}\Vert\omega\Vert_t$, the fiberwise norm. A pedestrian and more algebraic explanation is as follows. For good properties of $\Vert\omega\Vert_t$ on a completion of $\eta$, we use a (flat) maximal wlc $(X/T,B)$ with {\rm tdlt}\ singularities such that $\eta$ is the generic point of $T$ and $(X_\eta,B_{X_\eta})$ is as above. Such a model exists. We can suppose also that $B$ is horizontal over $T$, equivalently, $B{}_{\mathrm{div}}=0$. Then $\Bir(X_\eta\to\eta/k,B_{X_\eta})=\Bir(X\to T/k,B)$. Usually, the induced map $g_T\colon T\dashrightarrow T$ is birational. In particular, $t\mapsto t'=g_Tt$ and the fiberwise flops $g\rest{X_t}\colon (X_t,B_{X_t})\dashrightarrow (X_{t'},B_{X_{t'}})$ are not always well-defined. They are defined for rather general points $t$ (and so are the powers $g^d$ for very general points). The flop $g$ permutes some vertical b-divisors, namely, multiple fibers and degenerate fibers, equivalently, the invariant divisors of the log structure of $T$, over generic points of which fibers are not reduced or with degenerations (lc points). This transformation on $X,T$ is really birational, that is, some of those invariant divisors are contracted and some are extracted under $g,g_T$ respectively. The moduli part of adjunction is stabilized over $X$: $\mathcal M=\overline{M}$, where $M$ is an upper moduli part of adjunction for $(X/T,B)$, and semiample by dimensional induction. Moreover, under our assumptions $m\mathcal M,mM$ are Cartier and $mM$ is a divisor of the power sheaf $\omega_{X_\eta}^m[mB_{X_\eta}]= \omega_{X/T}^m[mB]$ of the sheaf of the moduli part of adjunction. Thus $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])= H^0(X,\omega_{X/T}^m[mB])$ with isomorphic representations.
We denote by $$ \varphi\colon X\to \mathbb P(H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])^v)= \mathbb P(H^0(X,\omega_{X/T}^m[mB])^v) $$ a contraction given by the linear system $$ \linsys{\omega_{X_\eta}^m[mB_{X_\eta}]}= \linsys{\omega_{X/T}^m[mB]}. $$
Let $\Delta\subset T$ be the degeneration locus: $$ \Delta=\{t\in T\mid \dk(X_t,B_t)<\dim X_t=\dim X_\eta\}; $$ it parameterizes the nonklt fibers. By the properties of the norm, $\Vert\omega\Vert_t$ is always continuous, and it is bounded on $T$ if and only if $\omega_t=0$, equivalently, $\Vert\omega\Vert_t=0$, for all $t\in\Delta$. In the last situation $\Vert\omega\Vert=\max_{t\in T}\Vert\omega\Vert_t$. So, $$ W=H^0(X,{X_{\Delta,}}{}_{\mathrm{red}},\omega_{X/T}^m[mB])=\{\omega\in H^0(X,\omega_{X/T}^m[mB])\mid \omega\rest{{X_{\Delta,}}{}_{\mathrm{red}}}=0\}. $$
By both definitions, $W$ is invariant under $g^*$. The first definition uses the invariance of the norm: $\Vert g^*\omega\Vert=\Vert\omega\Vert$. The second definition uses the invariance of degenerate fibers under flops. By the properties (1) and (3) of the norm, the linear operators $(g^*)^n,n\in \mathbb Z$, are uniformly bounded: for all integers $n\in\mathbb Z$ and all forms $\omega\in W$ of norm $1$, $\Vert(g^*)^n\omega\Vert=\Vert\omega\Vert =1$. Thus the operator $g^*$ is diagonalizable and unitary on $W$.
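The last implication is the usual linear-algebra one: $$ \begin{pmatrix}e&1\\0&e\end{pmatrix}^{\!n}= \begin{pmatrix}e^n&ne^{n-1}\\0&e^n\end{pmatrix} \quad\text{and}\quad \Vert e^n\omega\Vert=\vert e\vert^n\Vert\omega\Vert, $$ so a nontrivial Jordan block of $g^*$, or an eigenvalue $e$ with $\vert e\vert\not=1$, would make the operators $(g^*)^n$ unbounded on $W$ as $n\to+\infty$ or $n\to-\infty$.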
Now we establish the semisimplicity and unitary properties of $g^*$ on the whole space $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])= H^0(X,\omega_{X/T}^m[mB])$. Take for this a $g^*$-semiinvariant form $\omega_0\in W$, $g^*\omega_0=e_0\omega_0,e_0\in k^*$, and consider an equivariant imbedding of representations (cf. the proof of Lemma~\ref{repres_comparable}): $$ H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])= H^0(X,\omega_{X/T}^m[mB])\hookrightarrow H^0(X,\omega_{X/T}^{2m}[2mB]), \omega\mapsto \omega\omega_0. $$ Such a form $\omega_0$ exists for sufficiently large $m$ by the semiampleness of the moduli part because $\varphi(X_{\Delta,}{}_{\mathrm{red}})$ is a proper subset of $\varphi(X)$ [klt fibers are isotrivial families]. Actually, this restriction on $m$ was already imposed in Step 1: the birational pre-image of $\mathcal D$ on $\theta$ contains all prime b-divisors over $\Delta$.
The image of the imbedding is a $g^*$-invariant subspace of bounded forms: for any $t\in\Delta$ and any $\omega \in H^0(X,\omega_{X/T}^m[mB])$, $$ g^*(\omega\omega_0)=e_0(g^*\omega)\omega_0 \text{ and } (\omega\omega_0)_t=\omega_t\omega_{t,0}=\omega_t\cdot 0=0. $$ Thus $g^*$ is semisimple on $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$ with all $\vert e_i\vert=1$.
Step 5. Every $g^*$ is torsion on $W=H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])= H^0(X,\omega_{X/T}^m[mB])$. As one can see in the proof below, we only need a completion along generic curves. Again we use the regularization $(X/T,B)$. According to Step 4 we need to establish that each eigenvalue $e_i$ is a root of unity. Let $w_i\in W$ be the eigenvectors of $g^*$. By Step 4, they generate $W$ and we can form a basis of those vectors $w_1,\dots,w_d,d=\dim W$. The dual vectors $w_i^v$ form a basis of $H^0(X,\omega_{X/T}^m[mB])^v$. Suppose that $w_1,\dots,w_l$ are all the vectors $w_i$ whose eigenvalues $e_1,\dots,e_l$ are not roots of unity. We need to verify that $l=0$.
If $l\ge 1$, by Proposition~\ref{torus_invar-pt}, we can find a point $y\in \varphi(X)$ and an integral number $n\not=0$ such that \begin{description}
\item{\rm (1)} $y$ has a nonzero coordinate $w_i(y),1\le i\le l$, and
\item{\rm (2)} $y$ is $(g^*)^n$-invariant: $(g^*)^ny=y$. \end{description} Taking the power $(g^*)^n=(g^n)^*$ instead of $g^*$ we can suppose $n=1$. Note that the eigenvalues of $(g^*)^n$ are the powers $e_i^n$ and their property of being a root of unity is independent of $n$. By construction $\varphi(X)$ is invariant for $g^*$ and nondegenerate. Now we verify that $e_i$ is a root of unity, a contradiction.
Take now a point $t$ and a fiber $X_t$ over $y$. The invariance $g^*y=y$ does not imply in general the invariance of $t$ and/or of $X_t$ under $g$. But it implies an invariance up to a certain mp-trivial deformation. More precisely, $(X_t,B_{X_t}),t\in S$, belongs to the family $(X_S/S,B_{X_S})$, where $S\subseteq T$ is a maximal connected subvariety such that $\varphi(X_S)=y$. For every two points $t,s\in S$ and sufficiently divisible even $m$ (as we assume, $m\mathcal M$ is base point free), there exists a canonical identification of sections of their minimal lc centers: $$ H^0(X_{S,}{}_{\mathrm{red}},\omega_{X_{S,}{}_{\mathrm{red}}/S}^m[mB_{X_{S,}{}_{\mathrm{red}}}])=H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}])= H^0(Y_s,\omega_{Y_s}^m[mB_{Y_s}]), X_{S,}{}_{\mathrm{red}}=\varphi\1y, $$ where $(Y_t,B_{Y_t}),(Y_s,B_{Y_s})$ are minimal lc centers of $(X_t,B_{X_t}),(X_s,B_{X_s})$ respectively. We denote this identification by $c^*:H^0(Y_t,\omega_{Y_t}^m[mB_{Y_t}])\to H^0(Y_s,\omega_{Y_s}^m[mB_{Y_s}])$. It is determined by the relation: $c^*\rest{Y_t}=\rest{Y_s}$. Actually, it is determined by the subfamily over $S$. Apply Lemma~\ref{restr_in_mp_triv} to the reduced {\rm tdlt}\ family $(X_{S,}{}_{\mathrm{red}}/S,B_{X_{S,}{}_{\mathrm{red}}})$. However, it is not very useful when $c^*$ does not correspond to a flop (cf. the paragraph after the next one of this step). In general, $Y_t,Y_s$ and, moreover, $X_t,X_s$ are not even birationally equivalent.
On the other hand, by Proposition~\ref{flops_in_fibers}, for any $t\in S$ and any minimal lc center $(Y_t,B_{Y_t})$, there exists $s\in S$ such that, for any minimal lc center $(Y_s,B_{Y_s})$, there exists a log flop $g_{Y_t}\colon (Y_t,B_{Y_t})\dashrightarrow (Y_s,B_{Y_s})$. Then under the above identification $$ \rest{Y_t} g^*=(g_{Y_t})^*c^*\rest{Y_t}. $$
We need to present now $c^*$ as a representation of a (canonical) flop $c\colon (Y_s,B_{Y_s})\dashrightarrow (Y_t,B_{Y_t})$. This is a log isomorphism, and this holds, e.g., if $Y_s,Y_t$ lie in the same isotrivial family of minimal lc centers without degenerations. The base $S$ can be presented as a finite disjoint union $\coprod S_i$ of locally closed subsets such that, for every family $(X_{S_i,}{}_{\mathrm{red}}/S_i,B_{X_{S_i,}{}_{\mathrm{red}}})$, the family of its minimal lc centers is a finite disjoint union of isotrivial families of klt $0$-pairs without degenerations. The mp-trivial property of minimal lc centers follows by adjunction. Since they are klt families, they are isotrivial by (Viehweg-Ambro). For some curve $C$ (a curve $g^n(C)$) and some natural number $N>0$, $g^N(C)$ gives a point $s$ in the same $S_i$ as for $t$ and, moreover, $Y_s$ is in the same family as $Y_t$. (The Dirichlet principle.) Now replace $g$ by $g^N$. Then $s,t\in S_i$ and $Y_s,Y_t$ lie in the same klt isotrivial family without degenerations. So, $c^*$ corresponds to a natural log isomorphism $c\colon (Y_s,B_{Y_s})\to (Y_t,B_{Y_t})$. (After a finite covering an isotrivial family without degenerations becomes trivial.)
Now take the forms $\omega_i=w_i$. Then $g^*\omega_i=e_i\omega_i$ and $$ (cg_{Y_t})^*(\omega_i\rest{Y_t})= (g_{Y_t})^*c^*(\omega_i\rest{Y_t})= \rest{Y_t}(g^*\omega_i)= \rest{Y_t}(e_i\omega_i)=e_i(\omega_i\rest{Y_t}). $$ By construction $\omega_i\rest{Y_t}\not=0$, equivalently, $\omega_i\rest{X_{S,}{}_{\mathrm{red}}}\not=0$, and $cg_{Y_t}$ is a flop of $(Y_t,B_{Y_t})$. Hence $e_i$ is a root of unity by the next Step 6, a contradiction.
Step 6. $\dim T=0$ and $(X,B)$ is a klt $0$-pair (cf. \cite[Theorem~3.9]{FC}). Subtracting $B$, we can reduce the problem to the following two situations:
(1) with $B=0$ and $X$ is terminal, and
(2) $(X,\varepsilon B),0<\varepsilon\ll 1$, is a klt Fano variety.
(Here we use induction on the dimension of fibers.) The case (1) is well-known by \cite[Proposition~14.4]{U}: every $e_i$ (actually a single one: $d=1$) is an algebraic integer and $\vert e_i\vert =1$; so, $e_i$ is a root of unity. In the case (2), the group $\Bir(X,B)$ is finite itself and every $e_i$ is a root of unity (a $1$-dimensional representation of a finite group). \end{proof}
The next result is a little bit more general (cf. Corollary~\ref{repr_lc_can}) but its proof uses more geometry: from isotrivial families to mp-trivial ones.
\begin{cor} \label{repr_wlc_can} Let $(X_\eta,B_{X_\eta})$ be a generic wlc pair, where $X_\eta$ is geometrically irreducible. Then, for any natural number $m$, the canonical representation of generic log flops on differentials $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})\to \Aut H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}]), g \mapsto g^*, $$ is finite. Moreover, the order of representation has a uniform bound, independent of $m$. \end{cor}
\begin{proof} Step 1. Construction of a generic lcm pair $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}})$. The proof below uses the Iitaka contraction and the semiampleness conjecture in the dimension of the generic fiber. However, it is possible to do without this assumption. E.g., if nonvanishing for the generic fiber does not hold, then $H^0(X_\eta,m\mathcal M)=0$ for all natural $m$ and the projective representation is empty. The nonvanishing implies semiampleness by known results. It is much easier for two sections: $\dim H^0(X_\eta,m\mathcal M)\ge 2$ (Kawamata).
Take a fiberwise Iitaka contraction $$ I\colon (X_\eta,B_{X_\eta})\to (X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}}), $$ where $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}})$ is a generic lcm pair with geometrically irreducible $X_{\eta,}{}_{\mathrm{lcm}}$ and with a boundary $B_{X_{\eta,}{}_{\mathrm{lcm}}}$. The boundary $B_{X_{\eta,}{}_{\mathrm{lcm}}}$ is constructed by adjunction: $B_{X_{\eta,}{}_{\mathrm{lcm}}}=D+M$ is the sum of the divisorial and a low moduli part of adjunction on $X_{\eta,}{}_{\mathrm{lcm}}$. The divisorial part of adjunction $D$ is determined canonically.
Step 2. Let $$ G=\Bir(X_\eta\to\eta/k,B_{X_\eta})\cap \ker \rho_\theta \subseteq\Bir(X_\eta\to\eta/k,B_{X_\eta}) $$ be a subgroup preserving any canonical upper and any effective low moduli part of adjunction for $(X_\theta,B_{X_\theta})$, where $\theta=X_{\eta,}{}_{\mathrm{lcm}}, X_\theta=X_\eta,B_{X_\theta}=B_{X_\eta}$ and $$ \rho_\theta\colon \Bir(X_\theta\to\theta/k,B_{X_\theta})\to \Aut H^0(X_\theta,\omega_{X_\theta}^l[lB_{X_\theta}]) $$ for sufficiently divisible $l$. The subgroup $G$ has a finite index in $\Bir(X_\eta\to\eta/k,B_{X_\eta})$. So, it is sufficient to establish the finiteness of representation $$ G\to \Aut H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}]), g \mapsto g^*. $$
More precisely, we suppose that $G$ preserves all differentials $\omega\in H^0(X_\theta,\omega_{X_\theta}^l[lB_{X_\theta}])$: for any natural number $l$ and any $g\in G$, $g^*\omega=\omega$. By Theorem~\ref{flop_repres}, the representation $\rho_\theta$ is finite. Indeed, by construction $I\colon (X_\eta,B_{X_\eta})\to \theta$ is a $0$-contraction and $(X_\theta,B_{X_\theta})$ is a generic family of $0$-pairs.
Note that the $G$-invariance is an empty assumption unless $B_{X_\theta}$ and $B_{X_\eta}$ are $\mathbb Q$-divisors over $\theta$. Indeed, otherwise, for all $l$, $H^0(X_\eta,\omega_{X_\eta}^l[lB_{X_\eta}])= H^0(X_\theta,\omega_{X_\theta}^l[lB_{X_\theta}])=0$, and the corollary is established. So, below we suppose that $B_{X_\theta}$ and $B_{X_\eta}$ are $\mathbb Q$-divisors over $\theta$, and the bound on and the kernel of $\rho_\theta$ are independent of $l$. This is true for sufficiently divisible $l$. Note also that each generic flop $g$ of $(X_\eta,B_{X_\eta})$ is also a generic flop of $(X_\theta,B_{X_\theta})$ and this gives a natural inclusion: $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})\subseteq \Bir(X_\theta\to\theta/k,B_{X_\theta}). $$ Indeed, each fiberwise flop $(X_t,B_{X_t})\dashrightarrow (X_{g_\eta t},B_{X_{g_\eta t}})$ is compatible with the fiberwise Iitaka contractions: $$ \begin{array}{ccc} (X_t,B_{X_t})&\dashrightarrow& (X_{g_\eta t},B_{X_{g_\eta t}})\\ \downarrow&&\downarrow\\ (X_{t,}{}_{\mathrm{lcm}},B_{X_{t,}{}_{\mathrm{lcm}}})&\dashrightarrow&(X_{g_\eta t,}{}_{\mathrm{lcm}},B_{X_{g_\eta t,}{}_{\mathrm{lcm}}})\\ \end{array}. $$ So, the finiteness of $\ker \rho_\theta$ implies the required finiteness of the index. The index has a uniform bound independent of $l$.
Step 3. For a sufficiently divisible natural number $l$ and any rather general effective $l$-canonical low moduli part of adjunction $M$, there exists a canonical homomorphism of generic flops: $$ \gamma=\gamma_M\colon G\to \Bir(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}}), g\mapsto g_{X_{\eta,}{}_{\mathrm{lcm}}}. $$ More precisely, the flop $g_{X_{\eta,}{}_{\mathrm{lcm}}}$ is given fiberwise by the above diagram: $$ (X_{t,}{}_{\mathrm{lcm}},B_{X_{t,}{}_{\mathrm{lcm}}})\dashrightarrow (X_{g_\eta t,}{}_{\mathrm{lcm}},B_{X_{g_\eta t,}{}_{\mathrm{lcm}}}). $$ Take a natural number $l$ such that the upper effective $l$-canonical moduli part of adjunction $lM^{\mathrm{mod}}\in\linsys{\omega_{X_\theta}^l[lB_{X_\theta}]}$ on $X_\theta$ is mobile and b-free, that is, the trace of a b-free divisor. The moduli part is mobile even over $\eta$. Then $I^*M=M^{\mathrm{mod}},g^*M^{\mathrm{mod}}=M^{\mathrm{mod}}$ and $g_{X_{\eta,}{}_{\mathrm{lcm}}}^*M=M$. The divisorial part of adjunction is preserved by any generic flop of $(X_\eta\to\eta/k,B_{X_\eta})$ and of $(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}})$.
By Corollary~\ref{adjunt_for_0contr}, for rather general $M$, $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}})$ is an lcm family, where $X_{\eta,}{}_{\mathrm{lcm}}$ is geometrically irreducible. Since the boundary $B_{X_{\eta,}{}_{\mathrm{lcm}}}$ depends on $M$, for simplicity of notation, we replace it by $B_{X_{\eta,}{}_{\mathrm{lcm}}}+M$, where $B_{X_{\eta,}{}_{\mathrm{lcm}}}$ denotes only the divisorial part of adjunction. We use this notation in this proof.

Step 4. Let $$ G_\diamond=\{g\in G\mid \gamma_Mg \text{ is almost identical }\} \subseteq\Bir(X_\eta\to\eta/k,B_{X_\eta}) $$ be the subgroup of $G$ whose elements induce almost identical flops of $(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}})$ for a rather general moduli part $M$. The group $G_\diamond$ is independent of $M$ and has a finite index in $G$. (For another, more invariant description of $G_\diamond$ see the next step.) So, it is sufficient to establish the finiteness of the representation $$ G_\diamond\to \Aut H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}]), g \mapsto g^*. $$
It is sufficient to establish the finiteness of the quotients $$ \Bir(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)/ \aBir(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}}+M) $$ for rather general $M$.
Indeed, the group $\aBir(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$ acts on $X_{\eta,}{}_{\mathrm{lcm}}$ within connected isotrivial subfamilies of the lcm family $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$. Such a group of automorphisms is finite up to almost identical flops. The quotient of $\aBir(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$ modulo almost identical flops has a natural identification with a subgroup of $\Aut(Y,B_Y)$, where $(Y,B_Y)$ is an lcm pair canonically associated with a rather general connected isotrivial subfamily $(X_{S,}{}_{\mathrm{lcm}}/S,B_{X_{S,}{}_{\mathrm{lcm}}}+M\rest{X_{S,}{}_{\mathrm{lcm}}})$ of $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$. For general $S$, $S$ is irreducible and the subfamily is reduced and geometrically irreducible over $S$. Any generic flop $g\in\aBir(X_{\eta,}{}_{\mathrm{lcm}}\to\eta/k,B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$ induces a flop $$ g\rest{X_{S,}{}_{\mathrm{lcm}}}\in \Bir(X_{S,}{}_{\mathrm{lcm}}\to S/k,B_{X_{S,}{}_{\mathrm{lcm}}}+M\rest{X_{S,}{}_{\mathrm{lcm}}})= \aBir(X_{S,}{}_{\mathrm{lcm}}\to S/k,B_{X_{S,}{}_{\mathrm{lcm}}}+M\rest{X_{S,}{}_{\mathrm{lcm}}}). $$ By Lemma~\ref{mp_it} the family over $S$ is mp-trivial and, by definition, there exists a natural contraction (we can suppose $S$ to be complete) $$ \varphi\colon(X_{S,}{}_{\mathrm{lcm}}/S,B_{X_{S,}{}_{\mathrm{lcm}}}+M\rest{X_{S,}{}_{\mathrm{lcm}}})\to Y, $$ given by the moduli part of adjunction, that is, $Y$ is projective with a polarization $H$ such that $\varphi^*H$ is an upper moduli part of adjunction. So, any generic flop $g$ of $(X_{S,}{}_{\mathrm{lcm}}/S,B_{X_{S,}{}_{\mathrm{lcm}}}+M\rest{X_{S,}{}_{\mathrm{lcm}}})$ induces a regular automorphism of $Y$ (linear for very ample $H$). For the lcm family there exists a natural boundary $B_Y$ on $Y$ such that $$ \varphi\rest{X_{t}{}_{\mathrm{lcm}}}\colon (X_{t,}{}_{\mathrm{lcm}},B_{X_{t,}{}_{\mathrm{lcm}}})\to (Y,B_Y) $$ is a fliz and $g\rest{X_{S,}{}_{\mathrm{lcm}}}$ induces a flop $g_Y$ of $(Y,B_Y)$. As above $B_Y$ depends on $M$ and can be replaced by $B_Y+M_Y$. The almost identical flops $g\rest{X_{S,}{}_{\mathrm{lcm}}}$ induce the identical automorphism of $(Y,B_Y)$. A fiberwise flop $$ g\rest{X_{t,}{}_{\mathrm{lcm}}}\colon (X_{t,}{}_{\mathrm{lcm}},B_{X_{t,}{}_{\mathrm{lcm}}})\to (X_{g_St,}{}_{\mathrm{lcm}},B_{X_{g_St,}{}_{\mathrm{lcm}}}), t\in S, $$ of an almost identical flop is canonical, that is, it corresponds to the identity on a trivialization of the family. So, to be almost identical is a generic deformation property for deformations of an isotrivial family. The group $\Aut(Y,B_Y+M_Y)$ is finite and also a generic deformation invariant. Thus the almost isotrivial flops of $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$ form a finite group up to almost identical ones. Moreover, there exists a uniform bound on the flops of isotrivial subfamilies up to almost identical flops.
On the other hand, the group of generic flops modulo almost isotrivial flops is finite, because the family $(X_{\eta,}{}_{\mathrm{lcm}},B_{X_{\eta,}{}_{\mathrm{lcm}}}+M)$ is lcm. The bound for the quotient can be given by the degree of $\theta\to\mathfrak M$, where $\mathfrak M$ is a coarse moduli for fibers and $T\to\theta\to\mathfrak M$ is a Stein decomposition of the moduli morphism.
So, each subgroup $$ \{g\in G\mid \gamma_Mg \text{ is almost identical }\} \subseteq\Bir(X_\eta\to\eta/k,B_{X_\eta}) $$ has a finite index for every $M$. Actually, the group is independent of $M$, because the isotriviality of subfamily over $S$ and the contraction $\varphi$ are independent of $M$. Indeed, if $M'$ is another (generic) effective moduli part $M\sim_l M'\ge 0$, then $M'$ is also vertical with respect to $\varphi$ and $0\le M_Y'\sim_l M_Y$.
Step 5. The projective representation $$ G_\diamond\to \Aut \mathbb P(H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])), g \mapsto g^*, $$ is trivial. It is sufficient to verify that, for any flop $g\in G_\diamond$ and any effective divisor $D\in\linsys{\omega_{X_\eta}^m[mB_{X_\eta}]}$, $$ g^*D=D. $$ If $D$ is fixed, it is sufficient to verify the same property on a rather general mp-trivial subfamily, which is invariant for $g$. Take a subfamily $(X_S/S,B_{X_S})$ over a generic isotrivial family $(X_{S,}{}_{\mathrm{lcm}}/S,B_{X_{S,}{}_{\mathrm{lcm}}}+M\rest{X_{S,}{}_{\mathrm{lcm}}})$ of Step 4. The latter family is mp-trivial and so is the former one. Moreover, the effective moduli parts are the same under the Iitaka contraction $I$: $$ D\rest{X_S}=I\rest{X_S}^*D{}_{\mathrm{lcm}}=I^*\varphi^*D_Y, $$ where $D{}_{\mathrm{lcm}}\in\linsys{\omega_{X_{S,}{}_{\mathrm{lcm}}}^m[mB_{X_{S,}{}_{\mathrm{lcm}}}]}$ and $K_Y+B_Y\sim_\mathbb R D_Y\ge 0$ is an effective divisor on $Y$. Hence $$ g^*D\rest{X_S}=I^*g_{X_{S,}{}_{\mathrm{lcm}}}^*D{}_{\mathrm{lcm}}=I^*\varphi^*g_Y^*D_Y= I^*\varphi^*D_Y=D\rest{X_S}, $$ because $g_Y=\id_Y$.
Step 6. The scalar representation $$ G_\diamond\to \Aut H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}]), g \mapsto g^*, $$ is finite with a uniform bound. It is scalar by Step 5. So, for every $g\in G_\diamond$, there exists a constant $e\in k^*$ such that, for every $\omega\in H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$, $$ g^*\omega=e\omega. $$
Take a rather general mp-trivial family $(X_S/S,B_{X_S})$ of Step 5. Then, for general $\omega \in H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$, $\omega\rest{X_S}\not=0$. For general $t\in S$, $g_St=s\in S$, and there exists a flop $$ g\rest{X_t}\colon (X_t,B_{X_t})\dashrightarrow (X_s,B_{X_s}) $$ and a canonical log isomorphism with respect to restrictions $$ c\colon (X_t,B_{X_t})\dashrightarrow (X_s,B_{X_s}). $$ Then $g_t=c^{-1} g\rest{X_t}$ is a flop of $(X_t,B_t)$ and, for even $m$ and general $\omega\in H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$,
$$ g_t^*(\omega\rest{X_t})=g\rest{X_t}^*{c^{-1}}^*(\omega\rest{X_t})= g\rest{X_t}^*(\omega\rest{X_s})=(g^*\omega)\rest{X_t}= e\omega\rest{X_t}\text{ and } \omega\rest{X_t}\not=0. $$ For any $m$, we can consider $\omega^2$ and $e^2$. So, $e$ is a root of unity. There are only finitely many such roots. The number of roots depends only on $(X_t,B_{X_t})$. Using the Iitaka contraction $I\rest{X_t}$, one can reduce the scalar representation to a fiber of $I\rest{X_t}$, that is, to a fixed $0$-pair (cf. Step 6 in the proof of Theorem~\ref{flop_repres}). \end{proof}
\begin{thm}\label{repr_slc_proj} Let $(X_\eta,\mathcal B_{X_\eta})$ be a generic normally lc [slc] pair with a b-boundary $\mathcal B_{X_\eta}$, $G\subseteq \Bir(X_\eta\to\eta/k,\mathcal B_{X_\eta})$ be a subgroup of generic flops, and $\mathcal D$ be a b-divisor of $X_\eta$ in a decomposition $$ r(\mathcal K_{X_\eta}+\mathcal B_{X_\eta})\equiv \mathcal F+\mathcal D, $$ where \begin{description}
\item{\rm (1)} $r$ is a nonnegative real number,
\item{\rm (2)} $\mathcal F$ is an effective b-divisor, invariant for $G$, and
\item{\rm (3)} $\mathcal D$ is an effective b-divisor, invariant up to linear equivalence for $G$.
\end{description} Then, for any natural number $m$, the projective (sub)representation of generic log flops $$ G\to \Aut \mathbb P(H^0(X_\eta,m\mathcal D)), g \mapsto g^*, $$ is finite. Moreover, the order of representation has a uniform bound, independent of $m,r,\mathcal F,\mathcal D,G$. \end{thm}
\begin{proof} Step 1. We can suppose that $X_\eta$ is normal, irreducible and geometrically irreducible. Take a normalization $(X_\eta^{\mathrm{n}},B_{X_\eta^{\mathrm{n}}})$ and its irreducible decomposition $(X_\eta^{\mathrm{n}},B_{X_\eta^{\mathrm{n}}})= \coprod (X_i,B_{X_i})$. Then by definition the normal pair $(X_\eta^{\mathrm{n}},B_{X_\eta^{\mathrm{n}}})$ is lc and the decomposition of the log canonical divisor is componentwise: $r(\mathcal K_{X_i}+\mathcal B_{X_i})\equiv \mathcal F_i+\mathcal D_i$. But generic flops permute components, that is, a canonical homomorphism $$ G\subseteq \Bir(X_\eta\to\eta/k,B_{X_\eta})\to \Aut\{X_i\}, g\mapsto (X_i\mapsto g(X_i)), $$ is defined. On the other hand, the representation $$ G\to \Aut \mathbb P(H^0(X_\eta,m\mathcal D))= \prod\Aut\mathbb P(H^0(X_i,m\mathcal D_i)) $$ agrees with the permutations. The group of permutations is finite and the representation of the kernel is in a product of restricted representations $$ \ker[G\to \Aut\{X_i\}]\subseteq \prod G_i\to\prod\Aut\mathbb P(H^0(X_i,m\mathcal D_i)), $$ where $G_i=(\ker[G\to \Aut\{X_i\}])\rest{X_i}\subseteq\Bir(X_i\to\eta/k,B_{X_i})$. Thus it is sufficient to verify the finiteness of each factor $G_i\to\Aut\mathbb P(H^0(X_i,m\mathcal D_i))$. This means that we can assume that $X_\eta$ is normal and irreducible. A finite base change (Stein decomposition) allows us to assume the geometric irreducibility of $X_\eta$. This change can increase the group of generic flops and its subgroup $G$, but preserves the decomposition and the sections. Indeed, $K_\eta=K_\theta$ for any decomposition $X_\eta\to\theta\to\eta$, where $\theta\to\eta$ is finite. Thus we can take the same decomposition $r(\mathcal K_{X_\theta}+\mathcal B_{X_\theta})\equiv \mathcal F+\mathcal D$. Actually, we can replace $G$ by a larger subgroup: the generic flops $g$, which preserve $\mathcal F$ and preserve up to linear equivalence $\mathcal D$. Anyway, this subgroup includes the flops from $G$ by (2-3).
Step 2. Since the divisor $\mathcal F+\mathcal D$ is invariant up to linear equivalence, we suppose also that $\mathcal F=0$. By Lemma~\ref{prod_of_repres}, (1), the representation on the subspace $$ \mathbb P(H^0(X_\eta,m\mathcal D))\subseteq \mathbb P(H^0(X_\eta,m(\mathcal F+\mathcal D))), D\mapsto D+\mathcal F, $$ is invariant and finite, if the representation is finite on the ambient space. Indeed, the lemma applies if $H^0(X_\eta,m\mathcal D)\not=0$. Otherwise, the representation on $H^0(X_\eta,m\mathcal D)$ is empty.
Step 3. We can suppose that $(X_\eta,B_{X_\eta})$ is wlc. Indeed, by our assumptions, it is an initial model. Hence we can apply the LMMP. If the resulting model $(X_\eta/\theta,B_{X_\eta})$ is a Mori fibration, then the b-divisors $\mathcal K_{X_\eta}+\mathcal B_{X_\eta}$ and $\mathcal D$ are negative with respect to the fibration and, for $r>0$, $H^0(X_\eta,m\mathcal D)=0$ and the representation is empty. Otherwise, in the Mori case, $r=0$ by (1) and $\mathcal D\equiv 0$. So, the representation is empty or trivial, respectively, for $H^0(X_\eta,m\mathcal D)=0$ or $=k$.
Therefore, the nontrivial cases are possible only for a wlc resulting model. Note that the sections, the numerical equivalence and the representation will be preserved under the LMMP modifications. (Even the subgroup $G$ of generic flops under (2-3) can be increased.)
Step 4. Finally, we derive the required finiteness from Corollary~\ref{repr_wlc_can} and Lemma~\ref{repres_comparable}. By Corollary~\ref{repr_wlc_can}, the (sub)representation of $G\subseteq \Bir(X_\eta\to\eta/k,B_{X_\eta})$ on $H^0(X_\eta,\omega_{X_\eta}^l[lB_{X_\eta}])$ is finite for any natural number $l$. The projective representations of $G\subseteq \Bir(X_\eta\to\eta/k,B_{X_\eta})$ on $\mathbb P(H^0(X_\eta,\omega_{X_\eta}^l[lB_{X_\eta}]))$ and on $\mathbb P(H^0(X_\eta,l(\mathcal K_{X_\eta}+\mathcal B_{X_\eta})))$ are canonically isomorphic and finite. Thus by Lemma~\ref{repres_comparable} the representation on $H^0(X_\eta,m\mathcal D)$ is finite too. A uniform bound can be found by Corollary~\ref{repr_wlc_can}. \end{proof}
\begin{cor} \label{repr_lc_can} Let $(X_\eta,B_{X_\eta})$ be a generic slc pair with a boundary $B_{X_\eta}$. Then, for any natural number $m$, the canonical representation of generic log flops on differentials $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})\to \Aut H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}]), g \mapsto g^*, $$ is finite. Moreover, the order of representation has a uniform bound, independent of $m$. \end{cor}
Actually, the slc property of the statement can be replaced by many similar ones, e.g., normally lc, seminormal lc, etc. Then a proof should only explain what is the meaning of differentials or of $H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])$ and of flops. If this is natural, then a proof goes as below in the slc case. For instance, generic flops should preserve such differentials for every $m$.
\begin{proof} This proof uses the reduction to geometrically wlc pairs, which can be done as in Theorem~\ref{repr_slc_proj}. After that, for the linear representation, we can apply Corollary~\ref{repr_wlc_can}.
Take a normalization $(X_\eta^{\mathrm{n}},B_{X_\eta^{\mathrm{n}}})$ and its irreducible decomposition $(X_\eta^{\mathrm{n}},B_{X_\eta^{\mathrm{n}}})= \coprod (X_i,B_{X_i})$. Then by definition there exists a natural imbedding $$ H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])\hookrightarrow H^0(X_\eta,\omega_{X_\eta}^m[m\mathcal B_{X_\eta}])= H^0(X_\eta^{\mathrm{n}},\omega_{X_\eta^{\mathrm{n}}}^m[mB_{X_\eta^{\mathrm{n}}}])= $$ $$ \prod H^0(X_i,\omega_{X_i}^m[mB_{X_i}]), $$ where the differentials for the b-boundary $\mathcal B_{X_\eta}$ are defined on the normalization. This is an imbedding of linear representations too. Thus by Theorem~\ref{repr_slc_proj} the projective representation $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})\to \Aut \mathbb P(H^0(X_\eta,\omega_{X_\eta}^m[mB_{X_\eta}])), g \mapsto g^*, $$ is uniformly finite.
Actually, the big linear representation on the product is also uniformly finite. It is sufficient to verify this for an irreducible component and, by the LMMP, for a wlc model. The required finiteness in this case follows from Corollary~\ref{repr_wlc_can}. \end{proof}
\begin{cor}\label{repr_lc_proj} Let $(X_\eta,B_{X_\eta})$ be a generic slc pair with a boundary $B_{X_\eta}$, and $\mathcal M$ be an upper maximal (canonical) moduli part of adjunction. Then, for any natural number $m$, the projective representation of generic log flops $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})\to \Aut \mathbb P(H^0(X_\eta,m\mathcal M)), g \mapsto g^*, $$ is finite. Moreover, the order of representation has a uniform bound, independent of $m,\mathcal M$. \end{cor}
\begin{proof} Immediate by Theorem~\ref{repr_slc_proj}. By definition a b-divisor $\mathcal M$ is a mobile part of a mobile decomposition: $$ \mathcal K_{X_\eta}+\mathcal B_{X_\eta}\sim \mathcal F+\mathcal M. $$ Then we use the invariance of $\mathcal F$ and invariance up to linear equivalence of $\mathcal K_{X_\eta}+\mathcal B_{X_\eta}$ and $\mathcal M$ with respect to generic flops. \end{proof}
\begin{cor}\label{repr_klt_lin} Let $(X_\eta,B_{X_\eta})$ be a generic klt pair with a boundary $B_{X_\eta}$, $G\subseteq \Bir(X_\eta\to\eta/k,B_{X_\eta})$ be a subgroup of generic flops, and $\mathcal D$ be a b-divisor of $X_\eta$ as in Theorem~\ref{repr_slc_proj}. In addition, we assume either $\mathcal D$ is $G$-invariant, or it is a $G$-invariant b-divisorial sheaf. Then, for any natural number $m$, the linear representation of generic log flops $$ G\to \Aut H^0(X_\eta,m\mathcal D), g \mapsto g^*, $$ is finite. Moreover, the order of representation has a uniform bound, independent of $m,G$. \end{cor}
The bound on the order can depend on $r,\mathcal F,\mathcal D$.
\begin{proof} Immediate by Theorem~\ref{repr_slc_proj} and the finiteness of scalar representations in the klt case.
We can suppose that $H^0(X_\eta,m\mathcal D)$ is not empty for some $m\ge 1$. Otherwise all representations are empty. Taking such a minimal natural $m$ and replacing $\mathcal D$ by $m\mathcal D$ (respectively, $r$ by $mr$ etc), we suppose that $H^0(X_\eta,\mathcal D)\not=0$. So, there exists a nonzero rational function $F\in k(X_\eta)$ such that $F\in H^0(X_\eta,\mathcal D)$.
By Theorem~\ref{repr_slc_proj} the subgroup $$ G_\diamond=\{g\in G\mid \text{ for all } m, g^* \text{ is the identity on } \mathbb P(H^0(X_\eta,m\mathcal D))\}\subseteq G $$ of the scalar representation has a finite index, uniformly bounded with respect to $m$. So, $g^*F=c_gF,c_g\in k^*$, for all $g\in G_\diamond$ and it is sufficient to establish the finiteness for the scalar representation. The scalar representation $$ G_\diamond\to k^*,g\mapsto g^*=c_g, $$ depends on $F$ and is finite, that is, $c_g$ belongs to a finite set of roots of unity. This implies the required finiteness of the linear representation uniformly for all $m$.
The finiteness of the scalar representation of $G_\diamond$ on $F\mathcal O_X=\mathcal O((F))$ follows from the klt property of $(X_\eta,B_{X_\eta})$. The question can be reduced to the situation of the scalar representation for a klt $0$-pair $(X,B)$. Moreover, this follows from the finiteness of the linear canonical representations of $(X,B+\varepsilon\Supp(F))$, where $0<\varepsilon\ll 1$ is a small positive (rational) real number.
A similar approach works for the b-divisorial sheaves $\mathcal O_X(\mathcal D)$. \end{proof}
\begin{cor}\label{generic_Kawamata} Let $(X_\eta,B_{X_\eta})$ be a generic wlc pair. Then there are only finitely many generic log flops of $(X_\eta\to \eta/k,B_{X_\eta})$ up to mp-autoflops, with respect to a maximal moduli part of adjunction, that is, the group $$ \Bir(X_\eta\to\eta/k,B_{X_\eta})/\mBir(X_\eta\to\eta/k,B_{X_\eta}) $$ is finite. \end{cor}
\begin{proof}
Step 1. After a finite base change (extension) we can suppose that $X_\eta$ is geometrically irreducible. For a base change, the group of generic flops increases, but the group of mp-autoflops decreases.
Step 2. After an appropriate perturbation we can suppose that $B_{X_\eta}$ is a $\mathbb Q$-divisor and $\mathcal M$ is a $\mathbb Q$-divisor too. By definition the b-divisor $\mathcal M$ is a moduli part of adjunction for a maximal model. It exists. The moduli part of adjunction is invariant under generic flops: $g^*\mathcal M\sim_m \mathcal M$ for any rather divisible natural number $m$.
Now, for such a number $m$, take the canonical semirepresentation of generic log flops $$ \Bir(X_\eta\to \eta/k,B_{X_\eta})\to \Aut H^0(X_\eta,m\mathcal M),g\mapsto g^*. $$ A posteriori we can convert it into a noncanonical representation.
Step 3. The kernel of representation is $\mBir(X_\eta\to \eta/k,B_{X_\eta})$ for any rather divisible natural number $m$. Indeed, consider the morphism $$ \varphi\colon X_\eta\to \mathbb P(H^0(X_\eta,m\mathcal M)^v) $$ given by the linear system $\linsys{m\mathcal M}$. The above representation gives a canonical representation on the projectivisation: $$ \Bir(X_\eta\to \eta/k,B_{X_\eta})\to \Aut \mathbb P(H^0(X_\eta,m\mathcal M)^v),g\mapsto g^*. $$ The rational morphism $\varphi$ is equivariant with respect to the action of generic flops, and,
for any rather divisible $m$, is actually a morphism and a contraction. The kernel of the representation can be determined on (finitely many) rather general fibers $\varphi^{-1} x$, $x\in\varphi(X_\eta)$. Those fibers are irreducible and the kernel acts within them. On the other hand, $\mathcal M\rest{\varphi^{-1} x}\sim_m 0$ and by adjunction the restriction is also a maximal moduli part of adjunction on the subfamily for $\varphi^{-1} x$. Thus the kernel consists of mp-autoflops. The converse holds as well.
So, for such a natural number $m$, the image of projective representation is isomorphic to the quotient group in the statement.
Finally, the image is finite by Corollary~\ref{repr_lc_proj}. \end{proof}
\section{Bounding flops}
\begin{con}[Kawamata~{\cite[Conjecture~3.16]{ISh}}] \label{Kawamata_conjecture} The number of projective klt wlc models in a given log birational class is always finite up to log isomorphisms. \end{con}
\begin{ex}[Pjateckii-Shapiro and Shafarevich] Let $X$ be a nonsingular K3 surface. Conjecture~\ref{Kawamata_conjecture} holds for $X$. That is, $X$ has finitely many wlc klt models $Y$ up to isomorphism. Actually, each model $Y$ is a $0$-pair with $B=0$ and only Du Val singularities. The polarized lattices $\Lambda^+Y\subset\Lambda(Y)$ of models $Y$ have finitely many types. This implies that the models have bounded polarization.
The same holds for (genus $1$) fibrations $Y\to T$. There are only finitely many fibrations up to isomorphism, where $Y$ is a wlc klt model of $X$.
In terms of $\Aut(X)$ these facts mean that there are finitely many orbits of exceptional curves (not necessarily irreducible) and finitely many orbits of fibrations. All of this follows from the Torelli theorem for K3 surfaces \cite{PShSh}.
So, the group of automorphisms $\Aut(X)$ is infinite if $X$ has infinitely many exceptional curves and/or fibrations. The converse does not hold in general. \end{ex}
Let $(X,B)$ be a pair with a boundary $B$. Denote by $\GM(X,B)$ the category of projective klt wlc models $(Y,B^{\mathrm{log}}_Y)$ of $(X,B)$ with their log flops $(Y,B^{\mathrm{log}}_{Y})\dashrightarrow (Y',B^{\mathrm{log}}_{Y'})$ as morphisms which are considered up to log isomorphisms. For example, if $A$ is an Abelian variety and $B=0$ then $\GM(A,0)$ is equivalent to a trivial one, a category with a single object $A$ and with only the identity morphism.
\begin{df}[Bounded flops] Let $d$ be a natural number. A log flop of projective klt wlc models $(X_1,B_{X_1})\dashrightarrow (X_2,B_{X_2})$ is {\em bounded with respect to\/} $d$, if there are very ample divisors $D_1,D_2$ on $X_1,X_2$ respectively of degree $\le d$ and of mutual degrees $\le d$. A {\em set\/} (or {\em class\/}) {\em of log flops\/} is {\em bounded\/}, if there exists a natural number $d$ with respect to which the flops are bounded. A {\em category of log flops\/} is {\em of bounded type\/}, if the category has a bounded {\em set\/} (or {\em class\/}) of generators.
A {\em model\/} $(X,B)$ is bounded with respect to $d$, if the identity flop $(X,B)\to (X,B)$, $x\mapsto x$, is bounded with respect to $d$. A {\em set\/} (or {\em class\/}) {\em of pairs\/} $(X,B)$ is bounded, if there exists a natural number $d$ with respect to which the pairs are bounded. \end{df}
We denote by $\GM{}^{\mathrm{b}}(X,B)\subseteq \GM(X,B)$ a subcategory of bounded type of log flops up to log isomorphisms. For example, for any pair $(Y,B^{\mathrm{log}}_{Y})$ in $\GM(X,B)$, the subcategory of log isomorphisms $(Y_1,B_{Y_1})\to (Y_2,B_{Y_2})$, where $(Y_1,B_{Y_1}),(Y_2,B_{Y_2})$ are log isomorphic to $(Y,B^{\mathrm{log}}_{Y})$, is of bounded type.
According to Corollary~\ref{equiv-conjectures} below, Conjecture~\ref{Kawamata_conjecture} is equivalent to each of the following ones.
\begin{con} \label{latt_fg_conjecture} The category $\Lat(X,B)$ is of finite type. \end{con}
\begin{con} \label{boundedness_conjecture} The models of $\GM(X,B)$ are bounded. \end{con}
\begin{con} \label{categ_bound_conjecture} The category $\GM(X,B)$ is of bounded type. \end{con}
Note that, in general, $(X,B)$ may not have a projective klt wlc model at all. Then the conjectures are empty. However, if $(X,B)$ has a projective klt wlc model then any other resulting projective model $(Y,B^{\mathrm{log}}_Y)$ is also klt wlc.
\begin{thm} \label{weak_Kawamata} Any category of bounded type $\GM{}^{\mathrm{b}}(X,B)$ is of finite type. \end{thm}
\begin{cor}\label{equiv-conjectures} Conjectures~\ref{Kawamata_conjecture}, ~\ref{latt_fg_conjecture}, \ref{boundedness_conjecture} and \ref{categ_bound_conjecture} are equivalent. \end{cor}
\begin{proof} Conjecture~\ref{Kawamata_conjecture} implies Conjecture~\ref{latt_fg_conjecture}. Consider a subcategory of bounded type $\GM{}^{\mathrm{b}}\subseteq\GM(X,B)$ with finitely many objects $(Y,B^{\mathrm{log}}_Y)$ such that each wlc model of $(X,B)$ is isomorphic to one in the subcategory. Thus the subcategory is equivalent to the whole one. Actually, the subcategory is of finite type. The generators are projective $\mathbb Q$-factorializations, elementary contractions and flops. There are only finitely many such transformations. Up to log isomorphisms, they belong to $\GM{}^{\mathrm{b}}$ and every flop of $\GM{}^{\mathrm{b}}$ can be factorized into them \cite{ShC}. Hence the image of the lattice functor $$ \GM(X,B)\to\Lat(X,B), (Y,B^{\mathrm{log}}_Y)\mapsto \Lambda(Y), $$ is of finite type too, Conjecture~\ref{latt_fg_conjecture}. Indeed, the image is equivalent to the image of $\GM{}^{\mathrm{b}}$.
Conjecture~\ref{latt_fg_conjecture} implies Conjecture~\ref{boundedness_conjecture}. The former implies that there are finitely many types of polarized lattices $\Lambda^+\subset\Lambda$ for models $(Y,B^{\mathrm{log}}_Y)$ of $\GM(X,B)$. For every polarization type, take a polarization $H\in\Lambda^+$. So, each model in $\GM(X,B)$ has a bounded polarization by the effective ampleness: there exists a natural number $N$ such that $NH$ is very ample for every $(Y,B^{\mathrm{log}}_Y)$ of type $H\in\Lambda^+\subset\Lambda$. Hence each model of $\GM(X,B)$ is bounded, Conjecture~\ref{boundedness_conjecture}.
Conjecture~\ref{boundedness_conjecture} implies Conjecture~\ref{categ_bound_conjecture}. The former implies that the objects are bounded, that is, each model $(Y,B^{\mathrm{log}}_Y)$ has a bounded polarization $H_Y$. Equivalently, the models belong to pairs of finitely many families of triples. By \cite{ShC} projective $\mathbb Q$-factorializations, elementary contractions and flops are generators of the generalized log flops. Those generators are bounded by Noetherian induction for the above families. Indeed, a relative $\mathbb Q$-factorialization can be done for klt families generically. So, the $\mathbb Q$-factorializations are bounded. Each elementary contraction $(Y_1,B^{\mathrm{log}}_{Y_1})\to (Y_2,B^{\mathrm{log}}_{Y_2})$ can be treated as a crepant elementary blowup of an exceptional divisor $E\subset Y_1$ with $b_{E}=\mult_EB$. There are only finitely many such exceptional b-divisors for $Y_2$. Again, by Noetherian induction, the blowups form finitely many projective families and are bounded. Each elementary flop $(Y_1,B^{\mathrm{log}}_{Y_1})\dashrightarrow (Y_2,B^{\mathrm{log}}_{Y_2})$ can be factorized into an elementary flopping contraction $(Y_1,B^{\mathrm{log}}_{Y_1})\to (Y,B^{\mathrm{log}}_Y)$ and a small blowup $(Y,B^{\mathrm{log}}_Y)\leftarrow (Y_2,B^{\mathrm{log}}_{Y_2})$. Both are projective $\mathbb Q$-factorializations with two possible polarizations. So, their composition is also bounded.
Conjecture~\ref{categ_bound_conjecture} implies Conjecture~\ref{Kawamata_conjecture}. By the former conjecture we can take $\GM{}^{\mathrm{b}}(X,B)=\GM(X,B)$. Then by Theorem~\ref{weak_Kawamata} the category $\GM(X,B)$ is of finite type. In particular, $\GM(X,B)$ is equivalent to a category with finitely many objects, Conjecture~\ref{Kawamata_conjecture}. \end{proof}
\begin{proof}[Proof of Theorem~\ref{weak_Kawamata}] Consider a bounded category $\GM{}^{\mathrm{b}}=\GM{}^{\mathrm{b}}(X,B)$ of klt wlc models $(Y,B^{\mathrm{log}}_Y)$ of a pair $(X,B)$. Since the category is bounded, there exists a bounded coarse moduli $\mathfrak M$ of triples $(X,B,H)$, where now $(X,B)$ denotes a klt wlc model with a polarization $H$, such that the bounded models of $\GM{}^{\mathrm{b}}$ belong to $\mathfrak M$.
Step 1. We can suppose that the models of $\GM{}^{\mathrm{b}}$ are Zariski dense in $\mathfrak M$. This means that triples $(X,B,H)$ with a pair $(X,B)$ in $\GM{}^{\mathrm{b}}$ form a dense subset in $\mathfrak M$. Otherwise, we replace $\mathfrak M$ by a Zariski closure of those triples. A polarization $H$ is considered here as an invertible sheaf up to algebraic equivalence, that is, $H\in \NS X=\Pic X/\approx=\Pic X/\Pic_0 X$. The corresponding b-sheaf modulo $\approx$ will be denoted by $\mathcal H\in\bNS X$.
We assume also that the moduli is irreducible because it is sufficient to establish the theorem for pairs of each irreducible component. So, there is a bounded reduced irreducible family $(X/T,B,H)$ of $\mathfrak M$ such that it contains up to a log isomorphism a dense subset of pairs of $\GM{}^{\mathrm{b}}$. That is, for such a pair $(X_t,B_{X_t})$ there exists a polarization $H_t$ on $X_t$ such that $(X_t,B_{X_t},H_t)$ belongs to $(X/T,B,H)$.
Step 2. There is such a family $(X/T,B,H)$ with a finite set of b-polarizations $\mathcal D_i\subset X$ over $T$ such that each flop of $\GM{}^{\mathrm{b}}$ can be given by some of those divisors. This means that, if $t,s\in T$ and $g_t\colon (X_t,B_{X_t})\dashrightarrow (X_s,B_{X_s})$ is a flop of $\GM{}^{\mathrm{b}}$, then, for some b-divisor $\mathcal D_i$, the flop is given (as directed) for $\mathcal D_{t,i}$. More precisely, we suppose that $g(\mathcal D_{t,i})= \mathcal H_s$. By the boundedness of flops, the restriction $\mathcal D_{t,i}$ is bounded with respect to $H_t$. Each $\mathcal D_i$ is defined over a locally closed algebraic subvariety $T_i\subseteq T$. By the irreducibility of $T$ and the dense property of Step 1, at least one $T_i$ is dense: $\overline{T_i}=T$, equivalently, $\mathcal D_i$ is dominant over $T$. By Noetherian induction, it is sufficient to verify the finiteness of $\GM{}^{\mathrm{b}}$ for flops given by the dominant $\mathcal D_i$. Since the set of b-divisors $\mathcal D_i$ is finite, we can suppose that each $\mathcal D_i$ is flat over $T$ and surjective to $\mathfrak M$. By construction each $\mathcal D_i$ is a b-polarization over $T$, that is, it is big and semiample over $T$.
Each b-polarization $\mathcal D\in\bNS X/T$ gives a canonical log flop $c=c_\mathcal D \colon (X/T,B,H)\dashrightarrow (X'/T,B_{X'},H')$ with $\mathcal H'= c_*\mathcal D$, equivalently, $\mathcal D=\overline{H'}$ as b-divisors. In general, the second family does not belong to $\mathfrak M$. Moreover, that can happen with flops for $\mathcal D_i$. However, there exists the image of $(X'/T,B_{X'},H')$ in $\mathfrak M$: the image of subfamily over $$ T'=\{t\in T\mid (X_t',B_{X_t'},H_ t')\in \mathfrak M\}\subseteq T. $$ Again by the dense property we can suppose that $T_i=T'$ for some $\mathcal D=\mathcal D_i$ is dense in $T$. If $\mathcal D$ is flat over $T$ then the dense property implies the equality: $T'=T$. By Noetherian induction we can suppose that, for each $\mathcal D_i$, $T_i$ is dense in $T$ and, actually, each $T_i=T$. Thus each $(X_i/T,B_{X_i},H_i)=(X'/T,B_{X'},H')$ belongs to $\mathfrak M$, that is, $(X_i,B_{X_i},H_i)\in \mathfrak M$. In other words, each $\mathcal D_i$ gives a (surjective) flop over $\mathfrak M$. In general, we say that $\mathcal D$ is {\em flopping\/} over $\mathfrak M$ if $(X',B_{X'},H')\in \mathfrak M$. This property is compatible with algebraic equivalence over $T$. For $g$ given by $\mathcal D$, the induced map of b-sheaves transforms the polarization $\mathcal H'$ into the b-polarization $\mathcal D=c^*\mathcal H'$ over $T$. The same holds for generic $\mathcal D\in \bNS X_\eta$. We suppose that all $\mathcal D_i\in \Lambda$ and $\Lambda$ is also invariant under every $g^*$, it is automatically under $c^*$. The latter means that $c$ determines a unique lattice $\mathcal H'\in\Lambda'=c_*\Lambda$. On each rather general special fiber $X_t$, there exists a natural lattice structure $$ \Lambda\hookrightarrow \bNS X_t, \mathcal D\mapsto \mathcal D_t=\mathcal D\rest{X_t} $$ with the image $\Lambda=\Lambda_t\subseteq\bNS X_t$. (This is actually an injection for connected families.) By construction $\mathcal H_t\in \Lambda_t$ and $(X/T,B,H)$ is a family of triples $(X_t,B_t,H_t\in\Lambda_t)$. The flop $c$ in $\mathcal D\in \Lambda$ is a flop of such families that can be extended by $g\rest{X'}$ into families of the moduli of triples with a lattice structure. Under canonical isomorphism of lattices: $c^*\mathcal H'=\mathcal D$.
Step 3. We can convert a flop $c\colon (X/T,B,H)\dashrightarrow (X'/T,B_{X'},H')$ into an autoflop over $T$, if there exists an isomorphism $g\rest{X'}\colon(X'/T,B_{X'},H')\to (X/T,B,H)$. Then the composition gives a flop $g=g\rest{X'}c\colon (X/T,B,H)\dashrightarrow (X/T,B,H)$. This flop is fiberwise in the following sense: there exists an isomorphism $g_T\colon T\to T$ such that, for each $t\in T$, $g(X_t)=X_{g_T t}$ and $g$ induces the log flop $$ g\rest{X_t}\colon (X_t,B_{X_t})\dashrightarrow (X_{g_T t},B_{X_{g_T t}}). $$
Typically this (existence) does not hold even for a universal family $(X/T,B,H)$ of fine moduli. If $c$ preserves a universal family then we have an isomorphism $g\rest{X'}$ and a generic flop $g=g\rest{X'}c$.
This can be done for a fine moduli with lattice: $(X_t,B_t,H_t\in\Lambda_t)$. Indeed, suppose $(X_t,B_t,H_t\in\Lambda_t)=(X_s,B_s,H_s\in\Lambda_s)$ is an isomorphism (unique and canonical for fine moduli). Then the canonical identification $\Lambda_t=\Lambda=\Lambda_s$ is given, that is, the log isomorphism $h_{t,s}\colon (X_t,B_t,H_t\in\Lambda_t)=(X_s,B_s,H_s\in\Lambda_s)$ transforms each sheaf $\mathcal D_s\in\Lambda_s$ into the sheaf $h_{t,s}^*\mathcal D_s=\mathcal D_t$ (modulo algebraic equivalence), in particular, the polarization $\mathcal H_t=h_{t,s}^*\mathcal H_s$. Thus isomorphic triples go under $\mathcal D$-flop into isomorphic triples and the same for $g^{-1}$ given by $g_*\mathcal H$. Note also that such a flop changes the polarization: $\mathcal H'=\mathcal D$ (usually $\not=\mathcal H$) and $H'=\mathcal H_{X'}'$ is the polarization of $(X'/T,B_{X'},H'\in \Lambda'),c^*\Lambda'=\Lambda$. Thus a universal family will be preserved for triples with the lattice structure.
Step 4. Any moduli of lattice triples $(X,B,H\in \Lambda)$ can be converted into fine moduli by adding an extra rigidity structure $R$. It is sufficient that $\Aut (X,B,H\in \Lambda)=\{\id_X\}$. The group $\Aut (X,B,H\in \Lambda)$ is tame by Theorem~\ref{aut_tame}. E.g., let $R$ be a rather general $l$-tuple of points in $X$. Then $\Aut (X,B,H\in \Lambda,R)=\{\id_X\}$ and generically the moduli $\mathfrak M$ of such quadruples are fine. The family $(X/T,B,H\in\Lambda)$ of Step 3 can be converted into a family of quadruples $(X/T,B,H\in\Lambda,R)$ which is dominant on $\mathfrak M$. This can be done by an appropriate base change and taking an open subfamily after that. E.g., for the moduli with an $l$-tuple $R$, take a base change under the fiber power $f^l\colon X_T^l\to T$ over $T$ and then an open subset in $X_T^l$ corresponding to $l$-tuples $R\in X_T^l$ with $\Aut (X_t,B_{X_t},H_t\in \Lambda_t,R)=\{\id_{X_t}\},t=f^l(R)$.
Now we can suppose that each flop given by $\mathcal D_i$ and all other flopping divisors $\mathcal D$ can be extended to a generic flop of $(X_\eta\to\eta/k,B_{X_\eta})$. In general, the group of generic flops can be infinite.
Step 5. The required finiteness of $\GM{}^{\mathrm{b}}$ follows from the finiteness of the quotient group of Corollary~\ref{it_generic_Kawamata}, and thus from that corollary. Indeed, the objects of $\GM{}^{\mathrm{b}}$ are given by isomorphism classes of pairs $(X_t,B_{X_t})$ in the orbit of a sufficiently general fiber $(X_t,B_{X_t},H_{X_t})$ under the action of $\Bir(X_\eta\to\eta/k,B_{X_\eta})$. (A chain of bounded flops.) Such a fiber exists by the dense property of Step 1 and, if $\GM{}^{\mathrm{b}}$ is infinite up to isomorphism, then the orbit is well-defined for most (a dense subset) of such objects. (Actually it is possible to do this for all points using klt limits of $0$-pairs.)
By definition of $\aBir(X_\eta\to\eta/k,B_{X_\eta})$ the orbit for this subgroup of almost autoflops is isotrivial: a single pair $(X_t,B_{X_t})$ up to isomorphism for rather general $t$. Thus by the corollary the set of pairs in the orbit up to log isomorphism for the whole group is finite too.
Finally, a bounded set of flops of a finite set of pairs is finite up to log isomorphisms. \end{proof}
Department of Mathematics
Johns Hopkins University
Baltimore, MD 21218, USA
[email protected]
Mathematical Institute
Russian Academy of Sciences
Moscow, Russia
[email protected]
\end{document}
\begin{document}
\title{Hochschild cohomology of type II$_1$ von Neumann algebras with Property $\Gamma$}
\author{Wenhua Qian} \address{Wenhua Qian \\
Department of Mathematics \\
East China University of Science and Technology\\
Shanghai 200237, China; Email: [email protected]} \author{Junhao Shen} \address{Junhao Shen \\
Department of Mathematics and Statistics \\
University of New Hampshire\\
Durham, NH 03824; Email: [email protected]}
\begin{abstract}
In this paper, Property $\Gamma$ for a type II$_{1}$ von Neumann algebra is introduced as a generalization of Murray and von Neumann's Property $\Gamma$ for a type II$_{1}$ factor. The main result of this paper is that if a type II$_{1}$ von Neumann algebra $\mathcal{M}$ with separable predual has Property $\Gamma$, then the continuous Hochschild cohomology group $H^{k}(\mathcal{M}, \mathcal{M})$ vanishes for every $k \geq 2$. This gives a generalization of an earlier result in \cite{EFAR2}.
\end{abstract}
\subjclass[2000]{Primary 46L10; Secondary 18G60} \keywords{Hochschild cohomology, von Neumann algebra, Property $\Gamma$}
\maketitle
\section{Introduction} The continuous Hochschild cohomology of von Neumann algebras was initiated by Johnson, Kadison and Ringrose in \cite{RJ1}, \cite{RJ2}, \cite{BRJ}, where it was conjectured that the $k$-th continuous Hochschild cohomology group $H^{k}(\mathcal{M}, \mathcal{M})$ is trivial for any von Neumann algebra $\mathcal{M}$, $k \geq 1$. In the case $k=1$, this conjecture, which is equivalent to the problem of whether a derivation of a von Neumann algebra into itself is inner, was solved by Kadison and Sakai independently in \cite{Ka1}, \cite{S1}. In the following we focus on the case when $k \geq 2$. In \cite{BRJ}, it was shown that $H^{k}(\mathcal{M}, \mathcal{M})=0$ for $k\ge 2$ if $\mathcal{M}$ is an injective von Neumann algebra. It follows that if $\mathcal M$ is a type I von Neumann algebra, then $H^{k}(\mathcal{M}, \mathcal{M})=0$ for $k\ge 2$.
Significant progress was made after the introduction of completely bounded Hochschild cohomology groups for von Neumann algebras (\cite{EGA}, \cite{EFAR1}, \cite{EFAR2}, \cite{EA0}, \cite{EA1}, \cite{EA2}, \cite{FR}, \cite{PS}, \cite{AR1}, \cite{AR2}). It was shown in \cite{EA1}, \cite{CS} (see also \cite{AR3}) that the completely bounded Hochschild cohomology group $H^{k}_{cb}(\mathcal{M}, \mathcal{M})=0$ for $k \geq 2$. As a consequence of
results in \cite{EGA}, if $\mathcal M$ is a type II$_\infty$ or type III von Neumann algebra, then $H^{k}(\mathcal{M}, \mathcal{M})=0$ for $k\ge 2$. In the case that $\mathcal M$ is a type II$_1$ von Neumann algebra, many results as listed below have also been obtained. (We refer to a wonderful book \cite{AR3} by Sinclair and Smith for a survey of Hochschild cohomology theory for von Neumann algebras and proofs of most of the following results.) \begin{enumerate} \item [(i)] $H^{k}(\mathcal{M}, \mathcal{M})=0$ for $k \geq 2$ if the type II$_1$ central summand in the type decomposition $\mathcal{M}=\mathcal{M}_{1}\oplus \mathcal{M}_{c_{1}} \oplus \mathcal{M}_{c_{\infty}} \oplus \mathcal{M}_{\infty}$ of the von Neumann algebra $\mathcal{M}$ satisfies $\mathcal{M}_{c_{1}} \otimes \mathcal{R} \cong \mathcal{M}_{c_{1}}$, where $\mathcal{R}$ is the hyperfinite type II$_{1}$ factor (\cite{EGA}).
\item [(ii)] $H^{k}(\mathcal{M}, \mathcal{M})=0$ for $k \geq 2$ if $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with a Cartan subalgebra and separable predual (\cite{EFAR1}, \cite{FR}, \cite{AR1}, \cite{AR2}); it was shown later in \cite{Ca} that $H^{k}(\mathcal{M}, \mathcal{M})=0$ if $\mathcal{M}$ is a type II$_{1}$ factor with a Cartan masa.
\item [(iii)] $H^{2}(\mathcal{M}, \mathcal{M})=0$ if $\mathcal{M}$ is a type II$_{1}$ factor satisfying various technical properties related to its action on $L^{2}(\mathcal{M}, tr)$ (\cite{FR}).
\item [(iv)] $H^{k}(\mathcal{M}, \mathcal{M})=0$ for $k \geq 2$ if $\mathcal{M}$ is a type II$_{1}$ factor with Property $\Gamma$ (\cite{EFAR2}).
\item [(v)] $H^{2}(\mathcal{M}_{1} {\otimes} \mathcal{M}_{2}, \mathcal{M}_{1} {\otimes} \mathcal{M}_{2})=0$ if both $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ are type II$_{1}$ von Neumann algebras (\cite{PS}). \end{enumerate}
A motivation of this paper is to generalize the listed result (iv) in \cite{EFAR2} for type II$_{1}$ factors with Property $\Gamma$ to general type II$_{1}$ von Neumann algebras with certain properties. Recall Murray and von Neumann's Property $\Gamma$ for type II$_{1}$ factors as follows. {\em Suppose $\mathcal{A}$ is a type II$_{1}$ factor with a trace $\tau$. Let $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{A}$ given by $\Vert a \Vert_{2}=\sqrt{\tau(a^{*}a)}$ for any $a \in \mathcal{A}$. Then $\mathcal{A}$ has Property $\Gamma$ if, given $\epsilon >0$ and $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{A}$, there exists a unitary $u \in \mathcal{A}$ such that
\begin{enumerate} \item [(a)] $\tau(u)=0$;
\item [(b)] $\Vert ua_{j}-a_{j}u \Vert_{2} < \epsilon, \ \ \forall \ 1 \leq j \leq k$. \end{enumerate}} An equivalent definition of Property $\Gamma$ for a type II$_{1}$ factor $\mathcal{A}$ was given by Dixmier in \cite{Di1}. {\em Suppose $\mathcal{A}$ is a type II$_{1}$ factor with a trace $\tau$. Let $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{A}$ given by $\Vert a \Vert_{2}=\sqrt{\tau(a^{*}a)}$ for any $a \in \mathcal{A}$. Then $\mathcal{A}$ has Property $\Gamma$ if, given $n \in \mathbb{N}$, $\epsilon >0$ and $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{A}$, there exist $n$ orthogonal equivalent projections $\{ p_{1}, p_{2}, \dots, p_{n} \}$ in $\mathcal{A}$ with sum $I$ such that $$\Vert p_{i}a_{j}-a_{j}p_{i} \Vert_{2} < \epsilon, \qquad \forall \ 1 \leq i \leq n, 1 \leq j \leq k.$$}
In the paper, we extend Dixmier's equivalent definition of Murray and von Neumann's Property $\Gamma$ to general von Neumann algebras as follows.
{ Definition \ref{3.1}.} \ {\em Suppose $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with a predual $\mathcal M_{\sharp}$. Suppose that $\sigma (\mathcal M, \mathcal M_\sharp)$ is the weak-$*$ topology on $\mathcal M$ induced from $\mathcal M_\sharp$. We say that $\mathcal{M}$ has Property $\Gamma$ if and only if $ \forall \ a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$ and $\forall \ n\in \Bbb N$, there exist a partially ordered set $\Lambda$ and a family of projections $$\{ p_{i \lambda}: 1\le i\le n; \lambda \in \Lambda \}\subseteq \mathcal{M}$$ satisfying \begin{enumerate} \item [(i)] For each $\lambda \in \Lambda$, $\{ p_{1 \lambda}, p_{2 \lambda}, \ldots, p_{n \lambda} \}$ is a family of orthogonal equivalent projections in $\mathcal M$ with sum $I$. \item [(ii)] For each $1\le i\le n$ and $1\le j\le k$, $$ \lim_{\lambda} (p_{i \lambda}a_{j}-a_{j}p_{i \lambda})^*(p_{i \lambda}a_{j}-a_{j}p_{i \lambda}) =0\qquad \text {in $\sigma(\mathcal M, \mathcal M_\sharp)$ topology.}$$ \end{enumerate} }
We note that Definition \ref{3.1} coincides with Dixmier's definition (and Murray and von Neumann's definition) when $\mathcal{M}$ is a type II$_{1}$ factor (see Corollary \ref{3.3.5}). The following theorem is our main result of this paper, which gives a generalization of an earlier result in \cite{EFAR2}.
{ Theorem \ref{mainthm}}. \ {\em Suppose $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with separable predual. If $\mathcal{M}$ has Property $\Gamma$, then the Hochschild cohomology group $$H^{k}(\mathcal{M}, \mathcal{M})=0, \qquad \forall \ k\ge 2.$$}
The proof of Theorem \ref{mainthm} follows a similar line to the one in \cite{EFAR2}, except that new tools from direct integral theory for von Neumann algebras need to be developed.
The organization of this paper is as follows. In section 3, we introduce a definition of Property $\Gamma$ for type II$_1$ von Neumann
algebras.
In section 4, by applying the technique of direct integrals to $\mathcal{M}$, we will construct a hyperfinite subfactor $\mathcal{R}$ such that the relative commutant of $\mathcal{R}$ is the center of $\mathcal{M}$ and $\mathcal{R}$ satisfies the additional property of containing an asymptotically commuting family of projections for $\mathcal{M}$. In section 5, we will prove a Grothendieck inequality for $\mathcal{R}$-multimodular normal multilinear maps. Then, in section 6, we combine the results obtained in sections 4 and 5 to show that for a type II$_{1}$ von Neumann algebra $\mathcal{M}$ with separable predual, if $\mathcal{M}$ has Property $\Gamma$, then every bounded $k$-linear $\mathcal{R}$-multimodular separately normal map from $\mathcal{M}^{k}$ to $\mathcal{M}$ is completely bounded, which implies the triviality of the cohomology group $H^{k}(\mathcal{M}, \mathcal{M})$ by Theorem 3.1.1 and Theorem 4.3.1 in \cite{AR3}.
\section{Preliminaries} \subsection{Hochschild cohomology} In this subsection, we will recall a definition of continuous Hochschild cohomology groups (see \cite{AR3}).
Let $\mathcal{M}$ be a von Neumann algebra. We say that a Banach space $\mathcal{X}$ is a Banach $\mathcal{M}$-bimodule if there is a module action of $\mathcal{M}$ on both the left and right of $\mathcal{X}$ satisfying
$$\Vert m \xi \Vert \leq \Vert m \Vert \Vert \xi \Vert$$ and $$\Vert \xi m \Vert \leq \Vert \xi \Vert \Vert m \Vert$$ for any $m \in \mathcal{M}, \xi \in \mathcal{X}$.
For each integer $k \geq 1$, we denote by $\mathcal{L}^{k}(\mathcal{M},\mathcal{X})$ the Banach space of $k$-linear bounded maps $\phi : \mathcal{M}^{k} \to \mathcal{X}$. For $k=0$, we define $\mathcal{L}^{0}(\mathcal{M}, \mathcal{X})$ to be $\mathcal{X}$. Then we can define coboundary operators $\partial^{k}:\mathcal{L}^{k}(\mathcal{M}, \mathcal{X}) \to \mathcal{L}^{k+1}(\mathcal{M}, \mathcal{X}) $ as follows: \begin{enumerate} \item [(i)] when $k \geq 1$, for any $\phi \in \mathcal{L}^{k}(\mathcal{M}, \mathcal{X}), a_{1}, a_{2}, \dots, a_{k+1} \in \mathcal{M}$, $$ \begin{aligned} \partial^{k} \phi(a_{1}, a_{2}, \dots, a_{k+1})&=a_{1}\phi (a_{2}, \dots, a_{k+1})\\ &+\sum_{i=1}^{k}(-1)^{i}\phi (a_{1}, \dots, a_{i-1}, a_{i}a_{i+1}, \dots, a_{k+1})\\ &+(-1)^{k+1}\phi (a_1, \dots, a_{k})a_{k+1}. \end{aligned}$$
\item [(ii)] when $k=0$, for any $\xi \in \mathcal{X}, a \in \mathcal{M}$, $\partial^{0}\xi (a)=a \xi -\xi a$. \end{enumerate} It is easy to check that $\partial^{k} \partial^{k-1}=0$ for each $k \geq 1$. Thus $Im \partial^{k-1}$ (the space of coboundaries) is contained in $Ker \partial^{k}$ (the space of cocycles). The continuous Hochschild cohomology groups $H^{k}(\mathcal{M}, \mathcal{X})$ are then defined to be the quotient vector spaces $Ker \partial^{k} / Im \partial^{k-1}$, $k \geq 1$.
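To orient the reader, we record the lowest degree case explicitly (a standard observation, included here only as an illustration). For $k=1$ and $\phi \in \mathcal{L}^{1}(\mathcal{M}, \mathcal{X})$, the formula above reads $$\partial^{1} \phi(a_{1}, a_{2})=a_{1}\phi(a_{2})-\phi(a_{1}a_{2})+\phi(a_{1})a_{2}, \qquad a_{1}, a_{2} \in \mathcal{M},$$ so $Ker \partial^{1}$ consists of the bounded derivations from $\mathcal{M}$ into $\mathcal{X}$, while $Im \partial^{0}$ consists of the inner derivations $a \mapsto a\xi -\xi a$, $\xi \in \mathcal{X}$. Thus $H^{1}(\mathcal{M}, \mathcal{X})=0$ says precisely that every bounded derivation of $\mathcal{M}$ into $\mathcal{X}$ is inner; for $\mathcal{X}=\mathcal{M}$ this is the theorem of Kadison and Sakai recalled in the introduction.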
\subsection{Direct integral} The concepts of direct integrals of separable Hilbert spaces and von Neumann algebras acting on separable Hilbert spaces were introduced by von Neumann in \cite{vN}. General knowledge on direct integrals can be found in \cite{vN}, \cite{KR1}. Here, we list some lemmas which will be needed in this paper.
\begin{lemma} \label{2.1} (\cite{KR1}) Suppose $\mathcal{M}$ is a von Neumann algebra acting on a separable Hilbert space $H$. Let $\mathcal{Z}$ be the center of $\mathcal{M}$. Then there is a direct integral decomposition of $\mathcal{M}$ relative to $\mathcal{Z}$, i.e. there exists a locally compact complete separable metric measure space $(X, \mu)$ such that \begin{enumerate} \item [(i)] $H$ is (unitarily equivalent to) the direct integral of $\{ H_{s} : s \in X \}$ over $(X, \mu)$, where each $H_{s}$ is a separable Hilbert space, $s \in X$. \item [(ii)] $\mathcal{M}$ is (unitarily equivalent to) the direct integral of $\{ \mathcal{M}_{s} \}$ over $(X, \mu)$, where $\mathcal{M}_{s}$ is a factor in $B(H_{s})$ almost everywhere. Also, if $\mathcal{M}$ is of type $I_{n}$($n$ could be infinite), II$_{1}$, II$_{\infty}$ or $III$, then the components $\mathcal{M}_{s}$ are, almost everywhere, of type I$_{n}$, II$_{1}$, II$_{\infty}$ or $III$, respectively. \end{enumerate} Moreover, the center $\mathcal{Z}$ is (unitarily equivalent to) the algebra of diagonalizable operators relative to this decomposition. \end{lemma}
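As a concrete illustration of Lemma \ref{2.1} (a standard example, not needed in the sequel), let $\mathcal{M}=L^{\infty}([0,1], \lambda)\, \overline{\otimes}\, \mathcal{R}$ act on $H=L^{2}([0,1], \lambda) \otimes H_{0}$, where $\lambda$ is Lebesgue measure and $\mathcal{R}$ is the hyperfinite type II$_{1}$ factor acting on a separable Hilbert space $H_{0}$. Then $H$ is the direct integral of the constant field $H_{s}=H_{0}$ over $([0,1], \lambda)$, $\mathcal{M}$ is the direct integral of the constant field $\mathcal{M}_{s}=\mathcal{R}$ of type II$_{1}$ factors, and the center $\mathcal{Z}=L^{\infty}([0,1], \lambda) \otimes \mathbb{C}I_{H_{0}}$ is exactly the algebra of diagonalizable operators relative to this decomposition.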
The following lemma gives a decomposition of a normal state on a direct integral of von Neumann algebras. \begin{lemma} \label{2.2} (\cite{KR1}) If $H$ is the direct integral of separable Hilbert spaces $\{ H_{s} \}$ over $(X, \mu)$, $\mathcal{M}$ is a decomposable von Neumann algebra on $H$ (i.e. every operator in $\mathcal{M}$ is decomposable relative to the direct integral decomposition, see Definition 14.1.6 in \cite{KR1}) and $\rho$ is a normal state on $\mathcal{M}$, then there is a positive normal linear functional $\rho_{s}$ on $\mathcal{M}_{s}$ for every $s \in X$ such that $\rho(a)=\int_{X} \rho_{s}(a(s))d\mu$ for each $a$ in $\mathcal{M}$. If $\mathcal{M}$ contains the algebra $\mathcal{C}$ of diagonalizable operators and $\rho \vert_{E\mathcal{M}E}$ is faithful or tracial, for some projection $E$ in $\mathcal{M}$, then $\rho_{s} \vert_{E(s)\mathcal{M}_{s}E(s)}$ is, accordingly, faithful or tracial almost everywhere. \end{lemma}
\begin{remark} \label{2.3} From the proof of Lemma 14.1.19 in \cite{KR1}, we obtain that if $\rho=\sum\limits_{n=1}^{\infty}\omega_{y_{n}}$ on $\mathcal{M}$, where $\{ y_{n} \}$ is a sequence of vectors in $H$ such that $\sum\limits_{n=1}^{\infty}\Vert y_{n} \Vert ^{2}=1$ and $\omega_{y}$ is defined on $\mathcal{M}$ such that $\omega_{y}(a)=\langle ay, y \rangle$ for any $a \in \mathcal{M}, y \in H$, then $\rho_{s}$ can be chosen to be $\sum\limits_{n=1}^{\infty} \omega_{y_{n}(s)}$ for each $s \in X$. \end{remark}
\begin{remark} \label{2.4} Let $\mathcal{M}=\int_{X} \bigoplus M_{s} d \mu$ and $H= \int_{X} \bigoplus H_{s} d \mu$ be the direct integral decompositions of $(\mathcal{M},H)$ relative to the center $\mathcal{Z}$ of $\mathcal{M}$. By the argument in section 14.1 in \cite{KR1}, we can find a separable Hilbert space $K$ and a family of unitaries $\{U_{s}:H_{s} \to K; s \in X \}$ such that $s \to U_{s}x(s)$ is measurable (i.e. $s \to \langle U_{s}x(s), y \rangle$ is measurable for any vector $y$ in $K$) for every $x\in H$ and $s \to U_{s}a(s)U_{s}^{*}$ is measurable (i.e. $s \to \langle U_{s}a(s)U_{s}^{* }y, z \rangle$ is measurable for any vectors $y,z$ in $K$) for every decomposable operator $a \in B(H)$. \end{remark}
\begin{proposition} \label{2.5}
Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra acting on a separable Hilbert space $H$. Let $\mathcal{M}=\int_{X} \bigoplus M_{s} d \mu$ and $H= \int_{X} \bigoplus H_{s} d \mu$ be the direct integral decompositions of $\mathcal{M}$ and $H$ relative to the center $\mathcal{Z}$ of $\mathcal{M}$. Suppose $K$ is a Hilbert space and $\{ U_{s}: H_{s} \to K \}$ is a family of unitaries as in Remark \ref{2.4}. Denote by $\mathcal{B}$ the unit ball of $B(K)$ equipped with the $*$-strong operator topology. Suppose $\rho$ is a faithful normal tracial state on $\mathcal{M}$. Then there is a family of positive, faithful, normal, tracial linear functionals $\rho_s$ on $\mathcal{M}_{s}$ (almost everywhere) such that \begin{enumerate} \item [(a)] $\rho (a)=\int_{X}\rho _{s}(a(s))d\mu$ for every $a \in \mathcal{M}$;
\item [(b)] for any $a_0\in\mathcal M$, there exists a Borel $\mu$-null subset $N$ of $X$ such that the maps $$(s,b) \to \rho_{s}((a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))^{*}(a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s)))$$ and $$(s,b) \to \rho_{s}(U_{s}^{*}bU_{s})$$ are Borel measurable from $(X \setminus N) \times \mathcal{B}$ to $\mathbb{C}$.\end{enumerate} \end{proposition}
\begin{proof}
If $\rho$ is a faithful, normal, tracial state on $\mathcal{M}$, then there exists a sequence of vectors $\{y_{n}\}\subset H$ with $\sum\limits_{n=1}^{\infty} \Vert y_{n} \Vert^{2}=1$ such that $\rho =\sum\limits_{n=1}^{\infty} \omega _{y_{n}}$. Take $\rho_{s}=\sum\limits_{n=1}^{\infty} \omega _{y_{n}(s)} $ for every $s \in X$. By Remark \ref{2.3}, we know, for $s\in X$ almost everywhere, $\rho_s$ is a positive, faithful, normal, tracial linear functional on $\mathcal M_s$ and
\begin{equation}\rho (a)=\int_{X}\rho _{s}(a(s))d\mu\qquad \forall a \in \mathcal{M}.\label{equa 2.5.1}\end{equation}
For each vector $y_n$ in $ H$, we let $\omega_{{y_n}}$ be the vector state relative to ${y_n}$. Then $$\omega _{{y_n}}(a)=\int_{X}\omega _{{y_n}(s)}(a(s))d\mu \qquad \forall a\in \mathcal{M}.$$ Consider the maps $\phi_n, \psi_n: X \times \mathcal{B} \to \mathbb{C}$: \begin{eqnarray*} \phi_n (s,b) = \omega_{{y_n}(s)}((a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))^{*}(a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))) \end{eqnarray*} and \begin{eqnarray*} \psi_n (s,b) = \omega_{{y_n}(s)}(U_{s}^{*}bU_{s}). \end{eqnarray*} We have $$\begin{aligned}
\phi_n (s,b) &= \omega_{{y_n}(s)} ((a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))^{*}(a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))) \\ & = \langle (a_{0}(s)U_{s}^{* }bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))^{* }(a_{0}(s)U_{s}^{* }bU_{s}-U_{s}^{* }bU_{s}a_{0}(s)){y_n}(s),{y_n}(s) \rangle \\ & = \langle a_{0}(s)U_{s}^{* }bU_{s}{y_n}(s), a_{0}(s)U_{s}^{* }bU_{s}{y_n}(s)\rangle \\ &\ \quad\quad - \langle a_{0}(s)U_{s}^{* }bU_{s}{y_n}(s), U_{s}^{*}bU_{s}a_{0}(s){y_n}(s) \rangle \\ &\ \quad\quad - \langle U_{s}^{* }bU_{s}a_{0}(s){y_n}(s), a_{0}(s)U_{s}^{* }bU_{s}{y_n}(s) \rangle \\ &\ \quad\quad + \langle U_{s}^{* }bU_{s}a_{0}(s){y_n}(s), U_{s}^{* }bU_{s}a_{0}(s){y_n}(s)\rangle \\ & = \langle bU_{s}{y_n}(s),U_{s}a_0^{* }(s)U_{s}^{* }U_{s}a_{0}(s)U_{s}^{* }bU_{s}{y_n}(s) \rangle \\ & \ \quad\quad - \langle U_{s}a_{0}(s)U_{s}^{*}bU_{s}{y_n}(s), bU_{s}a_{0}(s)U_{s}^{*}U_{s}{y_n}(s) \rangle \\ & \ \quad\quad - \langle bU_{s}a_{0}(s)U_{s}^{*}U_{s}{y_n}(s), U_{s}a_{0}(s)U_{s}^{*}bU_{s}{y_n}(s) \rangle \\ & \ \quad\quad + \langle U_{s}a_{0}(s)U_{s}^{*}U_{s}{y_n}(s),b^{*}bU_{s}a_{0}(s)U_{s}^{*}U_{s}{y_n}(s) \rangle. \end{aligned}$$ By the choice of the family $\{U_{s}: s \in X \}$ in Remark \ref{2.4}, the maps \begin{align} &s \to U_{s}a_{0}(s)U_{s}^{*} \label{1} \\ &s \to U_{s}a_{0}^{*}(s)U_{s}^{*} \label{2} \end{align} from $X$ to $B(K)$ and $$s \to U_{s}{y_n}(s)$$ from $X$ to $K$ are measurable. Therefore by Lemma 14.3.1 in \cite{KR1}, there is a Borel $\mu$-null subset $N_{n,1}$ of $X$ such that, restricted to $X \setminus N_{n,1}$, the maps (\ref{1}) and (\ref{2}) are all Borel maps. It follows that the map $\phi_n$ is a Borel map from $(X\setminus N_{n,1}) \times \mathcal{B}$ to $\mathbb{C}$.
Since $\omega_{{y_n}(s)}(U_{s}^{*}bU_{s})=\langle bU_{s}{y_n}(s), U_{s}{y_n}(s) \rangle$, the map $(s, b) \to \omega_{{y_n}(s)}(U_{s}^{*}bU_{s})$ from $X \times \mathcal{B}$ to $\mathbb{C}$ is measurable by the choice of $\{ U_{s}: s \in X \}$ in Remark \ref{2.4}. By Lemma 14.3.1 in \cite{KR1}, there exists a Borel $\mu$-null subset $N_{n,2}$ of $X$ such that the map \begin{eqnarray*} \psi_n: (s, b) \to \omega_{{y_n}(s)}(U_{s}^{*}bU_{s}) \end{eqnarray*} is Borel measurable from $(X \setminus N_{n,2}) \times \mathcal{B}$ to $\mathbb{C}$.
By the discussions in the preceding paragraphs, the maps $$\phi_n: (s,b)\rightarrow \omega_{y_{n}(s)}((a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))^{*}(a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s)))$$ and $$\psi_n: (s,b) \to \omega_{y_n(s)}(U_{s}^{*}bU_{s})$$ are Borel measurable from $(X\setminus N_n) \times \mathcal{B}$ to $\mathbb{C}$, where $ N_n=N_{n,1}\cup N_{n,2}$ is a $\mu$-null subset of $X$.
Since $\rho_{s}=\sum\limits_{n=1}^{\infty} \omega _{y_{n}(s)} $, we obtain that \begin{equation}(s,b)\rightarrow \rho_{s}((a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s))^{*}(a_{0}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{0}(s)))\label{equa 2.5.2}\end{equation} and \begin{equation}(s,b) \to \rho_{s}(U_{s}^{*}bU_{s})\label{equa 2.5.3}\end{equation} are Borel measurable from $(X\setminus N) \times \mathcal{B}$ to $\mathbb{C}$, where $N=\cup_{n\in \Bbb N}N_n$ is a Borel $\mu$-null subset of $X$. By (\ref{equa 2.5.1}), (\ref{equa 2.5.2}), and (\ref{equa 2.5.3}), we complete the proof of the proposition. \end{proof}
\section{Property $\Gamma$ for type II$_{1}$ von Neumann algebras}
In this section, we will introduce Property $\Gamma$ for general von Neumann algebras and discuss some of its properties.
Murray and von Neumann's Property $\Gamma$ for a type II$_1$ factor is defined as follows. {\em Suppose $\mathcal{A}$ is a type II$_{1}$ factor with a trace $\tau$. Let $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{A}$ given by $\Vert a \Vert_{2}=\sqrt{\tau(a^{*}a)}$ for any $a \in \mathcal{A}$. Then $\mathcal{A}$ has Property $\Gamma$ if, given $\epsilon >0$ and $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{A}$, there exists a unitary $u \in \mathcal{A}$ such that
\begin{enumerate} \item [(a)] $\tau(u)=0$;
\item [(b)] $\Vert ua_{j}-a_{j}u \Vert_{2} < \epsilon, \ \ \forall \ 1 \leq j \leq k$. \end{enumerate}} An equivalent definition of Property $\Gamma$ for a type II$_{1}$ factor $\mathcal{A}$ was given by Dixmier in \cite{Di1}. {\em Suppose $\mathcal{A}$ is a type II$_{1}$ factor with a trace $\tau$. Let $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{A}$ given by $\Vert a \Vert_{2}=\sqrt{\tau(a^{*}a)}$ for any $a \in \mathcal{A}$. Then $\mathcal{A}$ has Property $\Gamma$ if, given $n \in \mathbb{N}$, $\epsilon >0$ and $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{A}$, there exist $n$ orthogonal equivalent projections $\{ p_{1}, p_{2}, \dots, p_{n} \}$ in $\mathcal{A}$ with sum $I$ such that $$\Vert p_{i}a_{j}-a_{j}p_{i} \Vert_{2} < \epsilon, \qquad \forall \ 1 \leq i \leq n, 1 \leq j \leq k.$$}
We introduce a definition of Property $\hat\Gamma$ for a type II$_{1}$ von Neumann algebra as follows.
\begin{definition} \label{3.1} Suppose $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with a predual $\mathcal M_{\sharp}$. Suppose that $\sigma (\mathcal M, \mathcal M_\sharp)$ is the weak-$*$ topology on $\mathcal M$ induced from $\mathcal M_\sharp$. We say that $\mathcal{M}$ has Property $\hat{\Gamma}$ if and only if $ \forall \ a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$ and $\forall \ n\in \Bbb N$, there exist a partially ordered set $\Lambda$ and a family of projections $$\{ p_{i \lambda}: 1\le i\le n; \lambda \in \Lambda \}\subseteq \mathcal{M}$$ satisfying \begin{enumerate} \item [(i)] For each $\lambda \in \Lambda$, $\{ p_{1 \lambda}, p_{2 \lambda}, \ldots, p_{n \lambda} \}$ is a family of orthogonal equivalent projections in $\mathcal M$ with sum $I$. \item [(ii)] For each $1\le i\le n$ and $1\le j\le k$, $$ \lim_{\lambda} (p_{i \lambda}a_{j}-a_{j}p_{i \lambda})^*(p_{i \lambda}a_{j}-a_{j}p_{i \lambda}) =0\qquad \text {in $\sigma(\mathcal M, \mathcal M_\sharp)$ topology.}$$ \end{enumerate} \end{definition}
\begin{remark} \label{rem 3.1} Suppose that $\mathcal M$ acts on a Hilbert space $H$. It is well known that the weak-$*$ topology $\sigma(\mathcal M, \mathcal M_\sharp)$, restricted to the unit ball of $\mathcal M$, coincides with the weak operator topology on the unit ball of $\mathcal M$ (see Theorem 7.4.2 in \cite{KR1}).
\end{remark}
Let $\mathcal{M}$ be a countably decomposable type II$_{1}$ von Neumann algebra with a faithful normal tracial state $\rho$. Let $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{M}$ given by $\Vert a \Vert_{2}=\sqrt{\rho(a^{*}a)}, \forall a \in \mathcal{M}$. Let $H_{\rho}=L^{2}(\mathcal{M}, \rho)$. For each element $a \in \mathcal{M}$, we denote by $\bar{a}$ the corresponding vector in $H_{\rho}$. Let $\pi_{\rho}$ be the left regular representation of $\mathcal{M}$ on $H_{\rho}$ induced by $\pi_{\rho}(a)(\bar{b})=\bar{ab}, \forall a, b \in \mathcal{M}$. The vector $\bar{I}$ is cyclic for $\pi_{\rho}$, where $I$ is the unit of $\mathcal{M}$. Since $\rho$ is faithful, we obtain that $\pi_{\rho}$ is faithful.
The following result is well-known. For the purpose of completeness, we include a proof here. \begin{lemma} \label{3.2} Let $\mathcal{M}$ be a countably decomposable type II$_{1}$ von Neumann algebra acting on a Hilbert space $H$ and $\rho$ be a faithful normal tracial state on $\mathcal{M}$. Let $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{M}$ given by $\Vert a \Vert_{2}=\sqrt{\rho(a^{*}a)}, \forall a \in \mathcal{M}$. Then the topology induced by $\Vert \cdot \Vert_{2}$ coincides with the strong operator topology on bounded subsets of $\mathcal{M}$. \end{lemma} \begin{proof} We claim that $\pi_{\rho}$ is $WOT-WOT$ continuous on bounded subsets of $\mathcal{M}$. To show this, we first suppose $\{ a_{\lambda} \}$ is a net in the unit ball $(\mathcal{M})_{1}$ of $\mathcal{M}$ such that $$WOT-\lim\limits_{\lambda} a_{\lambda} =a \in (\mathcal{M})_{1}.$$ Then for any $b, c \in \mathcal{M}$, \begin{eqnarray} \lim\limits_{\lambda} \langle \pi_{\rho}(a_{\lambda}) \pi_{\rho}(b) \bar{I}, \pi_{\rho}(c) \bar{I} \rangle =\lim\limits_{\lambda} \rho(c^{*}a_{\lambda}b) = \rho(c^{*}ab)=\langle \pi_{\rho}(a) \pi_{\rho}(b) \bar{I}, \pi_{\rho}(c) \bar{I} \rangle. \label{3} \end{eqnarray} Since the vector $\bar{I}$ is cyclic for $\pi_{\rho}$, we obtain from (\ref{3}) that $$\lim\limits_{\lambda} \langle \pi_{\rho}(a_{\lambda})x, y \rangle=\langle \pi_{\rho}(a)x, y \rangle, \qquad \forall x, y \in H_{\rho}. $$ Therefore $WOT-\lim\limits_{\lambda} \pi_{\rho}(a_{\lambda}) = \pi_{\rho}(a)$ and $\pi_{\rho}$ is $WOT-WOT$ continuous on bounded subsets of $\mathcal{M}$.
Since $(\mathcal{M})_{1}$ is $WOT$ compact, the unit ball $(\pi_{\rho}(\mathcal{M}))_{1}=\pi_{\rho}((\mathcal{M})_{1})$ is $WOT$ closed. By Kaplansky's Density Theorem, $\pi_{\rho}(\mathcal{M})$ is a von Neumann algebra. Hence $\pi_{\rho}$ from $\mathcal{M}$ onto $\pi_{\rho}(\mathcal{M})$ is a $*$-isomorphism between von Neumann algebras. By Theorem 7.1.16 in \cite{KR1}, $\pi_{\rho}$ is a $*$-homeomorphism from $(\mathcal{M})_{1}$ onto $(\pi_{\rho}(\mathcal{M}))_{1}$ when both are endowed with the strong operator topology.
Now we can prove the result. First suppose $\{ b_{\lambda} \}$ is a net in $(\mathcal{M})_{1}$ such that $SOT-\lim\limits_{\lambda} b_{\lambda}=0$. Then $SOT-\lim\limits_{\lambda} b_{\lambda}^{*}b_{\lambda}=0$. Since $\rho$ is $SOT$-continuous on $(\mathcal{M})_{1}$, $\lim\limits_{\lambda} \rho(b_{\lambda}^{*}b_{\lambda})=0$, which implies that $\lim\limits_{\lambda} \Vert b_{\lambda} \Vert_{2}=0$. On the other hand, suppose $\{ c_{\lambda} \}$ is a net in $(\mathcal{M})_{1}$ such that $\lim\limits_{\lambda} \Vert c_{\lambda} \Vert_{2}=0$. Then for any $a \in \mathcal{M}$, $\lim\limits_{\lambda} \Vert c_{\lambda}a \Vert_{2}=0$ (note that $\Vert c_{\lambda}a \Vert_{2} \leq \Vert a \Vert \Vert c_{\lambda} \Vert_{2}$ by the trace property of $\rho$), and hence \begin{eqnarray*} \lim\limits_{\lambda} \langle \pi_{\rho} (c_{\lambda}) \bar{a}, \pi_{\rho} (c_{\lambda}) \bar{a} \rangle&=&\lim\limits_{\lambda} \langle \pi_{\rho} (a^{*}c_{\lambda}^{*}c_{\lambda}a) \bar{I}, \bar{I} \rangle \\ &=&\lim\limits_{\lambda}\rho(a^{*}c_{\lambda}^{*}c_{\lambda}a)\\ &=&\lim\limits_{\lambda} \Vert c_{\lambda}a \Vert_{2}^{2}\\ &=&0. \end{eqnarray*} Because $\{ \bar{a}: a \in \mathcal{M} \}$ is norm dense in $H_{\rho}$, it follows that $SOT-\lim\limits_{\lambda} \pi_{\rho} (c_{\lambda}) = 0 $ in $B(H_{\rho})$. Since $\pi_{\rho}$ is a homeomorphism from $(\mathcal{M})_{1}$ onto $(\pi_{\rho}(\mathcal{M}))_{1}$ when both are endowed with the strong operator topology, $SOT-\lim\limits_{\lambda} c_{\lambda} = 0$ in $B(H)$. \end{proof}
\begin{corollary} \label{3.3} Let $\mathcal{M}$ be a countably decomposable type II$_{1}$ von Neumann algebra with a faithful normal tracial state $\rho$. Then the following are equivalent: \begin{enumerate} \item [(a)] $\mathcal{M}$ has Property $\hat\Gamma$ (in the sense of Definition \ref{3.1}); \item [(b)] Given any $\epsilon >0$, any positive integer $n$ and elements $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, there exist orthogonal equivalent projections $ p_{1}, p_{2}, \dots, p_{n}$ in $\mathcal M$ summing to $I$ satisfying $$\Vert p_{i}a_{j}-a_{j}p_{i} \Vert_{2, \rho} < \epsilon, \qquad 1 \leq i \leq n, 1 \leq j \leq k,$$ where the $2$-norm $\Vert \cdot \Vert_{2, \rho}$ on $\mathcal{M}$ is given by $\Vert a \Vert_{2, \rho}=\sqrt{\rho(a^{*}a)}$ for any $a \in \mathcal{M}$. \item [(c)] For any faithful normal tracial state $\tilde\rho$ on $\mathcal{M}$, any $\epsilon >0$, any positive integer $n$ and elements $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, there exist orthogonal equivalent projections $ p_{1}, p_{2}, \dots, p_{n} $ in $\mathcal M$ summing to $I$ satisfying $$\Vert p_{i}a_{j}-a_{j}p_{i} \Vert_{2, \tilde{\rho}} < \epsilon, \qquad 1 \leq i \leq n, 1 \leq j \leq k,$$ where the $2$-norm $\Vert \cdot \Vert_{2, \tilde{\rho}}$ on $\mathcal{M}$ is given by $\Vert a \Vert_{2, \tilde{\rho}}=\sqrt{\tilde\rho(a^{*}a)}$ for any $a \in \mathcal{M}$.\end{enumerate} \end{corollary}
\begin{proof} We may assume that $\mathcal M$ acts on a Hilbert space $H$.
(a)$\Rightarrow$(b) It follows directly from Definition \ref{3.1}, Remark \ref{rem 3.1} and Lemma \ref{3.2}.
(b)$\Rightarrow$(a) Assume that (b) holds. Let $n \in \Bbb N$ and $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$. By (b), there exists a family of projections $$\{ p_{i r}: 1\le i\le n; r\ge 1 \}\subseteq \mathcal{M}$$ satisfying \begin{enumerate} \item [(1)] for each $r\ge 1$, $p_{1r}, p_{2r}, \ldots, p_{nr}$ is a family of orthogonal equivalent projections in $\mathcal M$ with sum $I$. \item [(2)] for each $1\le i\le n$ and $1\le j\le k$, \begin{equation} \lim_{r\rightarrow \infty} \rho((p_{ir}a_{j}-a_{j}p_{i r})^*(p_{ir}a_{j}-a_{j}p_{i r})) =0.\label {eq1}\end{equation} \end{enumerate} In order to show that $\mathcal M$ has Property $\hat{\Gamma}$, we need only to verify that the family of projections $\{ p_{i r}: 1\le i\le n; r\ge 1 \}$ satisfies condition (ii) in Definition \ref{3.1}. Actually, combining equation (\ref{eq1}) and Lemma \ref{3.2}, we know that, for each $1 \le i \le n$ and $1 \le j \le k$, as $r\rightarrow \infty$, $ p_{ir}a_{j}-a_{j}p_{i r}$ converges to $0$ in the strong operator topology. Therefore, for each $1 \le i \le n$ and $1 \le j \le k$, as $r\rightarrow \infty$, $(p_{ir}a_{j}-a_{j}p_{i r})^*(p_{ir}a_{j}-a_{j}p_{i r})$ converges to $0$ in the weak operator topology and, hence, in $\sigma(\mathcal M, \mathcal M_\sharp)$ by Remark \ref{rem 3.1}. Therefore, $\mathcal M$ has Property $\hat{\Gamma}$.
(b)$\Leftrightarrow$(c) Suppose $\rho_{1}$ and $\rho_{2}$ are two faithful normal tracial states on $\mathcal{M}$. By Lemma \ref{3.2}, the $2$-norms induced by $\rho_{1}$ and $\rho_{2}$ will give the same topology on bounded subsets of $\mathcal{M}$ (since they both coincide with the strong operator topology on bounded subsets of $\mathcal{M}$). Therefore (b) and (c) are equivalent. \end{proof}
\begin{corollary}\label{3.3.5} Suppose that $\mathcal M$ is a factor of type II$_1$ with a tracial state $\tau$. The following are equivalent: \begin{enumerate}
\item [(i)] $\mathcal M$ has Property $\hat\Gamma$ in the sense of Definition \ref{3.1}.
\item [(ii)] $\mathcal M$ has Property $\Gamma$ in the sense of Dixmier (equivalently, of Murray and von Neumann). \end{enumerate} \end{corollary} \begin{proof} A type II$_1$ factor is countably decomposable and $\tau$ is the unique faithful normal tracial state of $\mathcal M$. From Dixmier's Definition of Property $\Gamma$ for type II$_1$ factors and Corollary \ref{3.3}, we know that $(i) \Leftrightarrow (ii)$. \end{proof} \begin{remark}\label{3.3.6} Because of Corollary \ref{3.3.5}, from now on we will use Definition \ref{3.1} as a definition of Property $\Gamma$ for type II$_1$ von Neumann algebras. \end{remark}
In the rest of the paper, we will only consider von Neumann algebras with separable predual, because direct integral theory applies only to von Neumann algebras with separable predual. The next proposition follows directly from Definition \ref{3.1}, Corollary \ref{3.3} and the assumption that $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with separable predual.
\begin{proposition} \label{3.4} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual and $\rho$ be a faithful normal tracial state on $\mathcal{M}$. Then $\mathcal{M}$ has Property $\Gamma$ if and only if for any $n \in \mathbb{N}$, there exists a family of projections $\{ p_{ir}: 1 \leq i \leq n, r \in \mathbb{N} \}$ such that \begin{enumerate} \item [(i)] for each $r \in \mathbb{N}$, $\{ p_{ir}: 1\leq i \leq n \}$ is a set of $n$ equivalent orthogonal projections in $\mathcal{M}$ with sum $I$;
\item [(ii)] for each $1 \leq i \leq n$, $\lim\limits_{r \to \infty} \Vert p_{ir}a-ap_{ir} \Vert_{2} = 0$ for any $a \in \mathcal{M}$, where $\Vert \cdot \Vert_{2}$ is the $2$-norm induced by $\rho$ on $\mathcal{M}$.\end{enumerate} \end{proposition}
With the help of Proposition \ref{3.4} and Corollary \ref{3.3}, we can directly get the next corollary.
\begin{corollary} \label{3.5} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual and $\rho$ a faithful normal tracial state on $\mathcal{M}$. Suppose $\{ a_{j}: j \in \mathbb{N} \}$ is a sequence of elements in $\mathcal{M}$ that generates $\mathcal{M}$ as a von Neumann algebra. Then $\mathcal{M}$ has Property $\Gamma$ if and only if for any $n \in \mathbb{N}$, there exists a family of projections $\{ p_{ir}: 1 \leq i \leq n, r \in \mathbb{N} \}$ such that \begin{enumerate} \item [(i)] for each $r \in \mathbb{N}$, $\{ p_{ir}: 1\leq i \leq n \}$ is a set of $n$ equivalent orthogonal projections in $\mathcal{M}$ with sum $I$;
\item [(ii)] for each $1 \leq i \leq n$ and $j \in \mathbb{N}$, $\lim\limits_{r \to \infty} \Vert p_{ir}a_{j}-a_{j}p_{ir} \Vert_{2} = 0$, where $\Vert \cdot \Vert_{2}$ is the $2$-norm induced by $\rho$ on $\mathcal{M}$.\end{enumerate} \end{corollary}
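The reduction in Corollary \ref{3.5} from all of $\mathcal{M}$ to a generating sequence rests on two elementary estimates, recorded here as a sketch (both use only the traciality of $\rho$ and $\Vert p \Vert \leq 1$): for any projection $p \in \mathcal{M}$ and any $a, b, x, y \in \mathcal{M}$,
$$ \Vert pa-ap \Vert_{2} \leq \Vert pb-bp \Vert_{2}+2\Vert a-b \Vert_{2}, \qquad \Vert p(xy)-(xy)p \Vert_{2} \leq \Vert y \Vert \, \Vert px-xp \Vert_{2}+\Vert x \Vert \, \Vert py-yp \Vert_{2}. $$
Combined with Lemma \ref{3.2}, which identifies $SOT$-convergence with $\Vert \cdot \Vert_{2}$-convergence on bounded subsets of $\mathcal{M}$, these estimates allow one to pass from commutation estimates against the generators $a_{j}$ to commutation estimates against arbitrary elements of $\mathcal{M}$.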
\begin{example}\label{example 1} Let $\mathcal{A}_{1}$ be a type II$_{1}$ factor with separable predual and $\mathcal{A}_{2}$ a finite von Neumann algebra with separable predual. Suppose $\mathcal{A}_{1}$ has Property $\Gamma$. Then the von Neumann algebra tensor product $\mathcal{A}_{1} \otimes \mathcal{A}_{2}$ is a type II$_{1}$ von Neumann algebra with separable predual and Property $\Gamma$.
\end{example}
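One way to verify Example \ref{example 1} (a sketch under the definitions above): fix faithful normal tracial states $\tau_{1}$ on $\mathcal{A}_{1}$ and $\tau_{2}$ on $\mathcal{A}_{2}$ (these exist since both algebras are finite with separable predual) and equip $\mathcal{A}_{1} \otimes \mathcal{A}_{2}$ with the faithful normal tracial state $\tau_{1} \otimes \tau_{2}$. If $p_{1r}, \dots, p_{nr}$ are orthogonal equivalent projections in $\mathcal{A}_{1}$ with sum $I$ witnessing Property $\Gamma$ for $\mathcal{A}_{1}$, then $p_{1r} \otimes I, \dots, p_{nr} \otimes I$ are orthogonal equivalent projections in $\mathcal{A}_{1} \otimes \mathcal{A}_{2}$ with sum $I$, and for any elementary tensor $a \otimes b$,
$$ \Vert (p_{ir} \otimes I)(a \otimes b)-(a \otimes b)(p_{ir} \otimes I) \Vert_{2, \tau_{1} \otimes \tau_{2}} = \Vert p_{ir}a-ap_{ir} \Vert_{2, \tau_{1}} \, \Vert b \Vert_{2, \tau_{2}} \leq \Vert b \Vert \, \Vert p_{ir}a-ap_{ir} \Vert_{2, \tau_{1}}. $$
Since the unit ball of the algebraic tensor product is $\Vert \cdot \Vert_{2, \tau_{1} \otimes \tau_{2}}$-dense in the unit ball of $\mathcal{A}_{1} \otimes \mathcal{A}_{2}$ (Kaplansky's Density Theorem together with Lemma \ref{3.2}), the criterion of Corollary \ref{3.3}(b) follows, using the commutator estimates recorded after Corollary \ref{3.5}.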
\begin{remark} \label{3.6} Let $\mathcal{M}$ be a von Neumann algebra acting on a separable Hilbert space $H$ and $\mathcal{Z}$ the center of $\mathcal{M}$. Suppose $\mathcal{M}=\int_{X} \bigoplus \mathcal{M}_{s} d\mu$ and $H=\int_{X} \bigoplus H_{s} d\mu$ are the direct integral decompositions of $\mathcal{M}$ and $H$ over $(X, \mu)$ relative to $\mathcal{Z}$. Take a countable $SOT$ dense self-adjoint subset $\mathcal{F}$ of $\mathcal{M}$ and let $\mathcal{S}$ be the set of all rational $*$-polynomials (i.e., coefficients from $\mathbb{Q}+i \mathbb{Q}$) with variables from $\mathcal{F}$. We observe that $\mathcal{S}$ is countable and $SOT$ dense in $\mathcal{M}$. Take $\{ a_{j}: j \in \mathbb{N} \}$ to be an enumeration of the unit ball of $\mathcal{S}$. By Kaplansky's Density Theorem, $\{ a_{j}: j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball $(\mathcal{M})_{1}$. By Definition 14.1.14 and Lemma 14.1.15 in \cite{KR1}, $\{ a_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball $(\mathcal{M}_{s})_{1}$ for almost every $s \in X$. In the rest of this paper, when we mention a $SOT$ dense sequence $\{ a_{j}: j \in \mathbb{N} \}$ of $(\mathcal{M})_{1}$ (or $(\mathcal{M}')_{1}$), we always assume that this sequence has been chosen such that $\{ a_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball $(\mathcal{M}_{s})_{1}$ (or $(\mathcal{M}'_{s})_{1}$) for almost every $s \in X$. \end{remark}
\begin{lemma} \label{3.8} Suppose $H= \int_{X} \bigoplus H_{s} d\mu$ is a direct integral of separable Hilbert spaces and $\mathcal{A}$ is a decomposable von Neumann algebra (see the definition in \cite{KR1}) over $H$ such that $\mathcal{A} \cong M_{m}(\mathbb{C})$, the $m \times m$ matrix algebra over $\mathbb{C}$, for some $m \in \mathbb{N}$. Then $\mathcal{A}_{s} \cong M_{m}(\mathbb{C})$ for almost every $s \in X$. \end{lemma} \begin{proof} Notice that $\mathcal{A}$ is also a finite dimensional C$^*$-algebra. By Theorem 14.1.13 in \cite{KR1} and the fact that $\mathcal{A}$ is separable, the map from $\mathcal{A}$ to $\mathcal{A}_{s}$ given by $a \to a(s)$ is a unital $*$-homomorphism for almost every $s \in X$. Since $\mathcal{A} \cong M_{m}(\mathbb{C})$, $\mathcal{A}$ is simple, so this $*$-homomorphism is injective; its range is finite dimensional and generates $\mathcal{A}_{s}$, and hence equals $\mathcal{A}_{s}$. Therefore $\mathcal{A}_{s} \cong M_{m}(\mathbb{C})$ for almost every $s \in X$. \end{proof}
The following proposition gives a useful characterization of a type II$_1$ von Neumann algebra with Property $\Gamma$. \begin{proposition} \label{3.7} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra acting on a separable Hilbert space $H$ and $\mathcal{Z}$ the center of $\mathcal{M}$. Suppose $\rho$ is a faithful normal tracial state on $\mathcal{M}$ and a $2$-norm $\Vert \cdot \Vert_{2}$ on $\mathcal{M}$ is defined by $\Vert a \Vert_{2}=\sqrt{\rho(a^{*}a)}$ for any $a \in \mathcal{M}$. Suppose $\mathcal{M}=\int_{X} \bigoplus \mathcal{M}_{s} d\mu$ and $H=\int_{X} \bigoplus H_{s} d\mu$ are the direct integral decompositions of $\mathcal{M}$ and $H$ over $(X, \mu)$ relative to $\mathcal{Z}$. Then the following conditions are equivalent: \begin{enumerate} \item [(i)] $\mathcal{M}$ has Property $\Gamma$. \item [(ii)] There exists a positive integer $n_0\ge 2$ such that \begin{enumerate} \item []
for any $ \epsilon >0$, and any $ a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, there exist orthogonal equivalent projections $ p_{1}$, $p_{2}$, $\dots,$ $ p_{n_0}$ in $\mathcal M$ summing to $I$ satisfying $$\Vert p_{i}a_{j}-a_{j}p_{i} \Vert_{2} < \epsilon, \qquad 1 \leq i \leq n_0, 1 \leq j \leq k.$$ \end{enumerate} \item [(iii)] $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for almost every $s \in X$.
\end{enumerate} \end{proposition}
\begin{proof} Since $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra, by Lemma \ref{2.1}, the component $\mathcal{M}_{s}$ is a type II$_1$ factor for almost every $s \in X$. We may assume $\mathcal{M}_{s}$ is a type II$_{1}$ factor with a trace $\tau_{s}$ for each $s \in X$.
By Lemma \ref{2.2}, there is a positive faithful normal tracial linear functional $\rho_{s}$ on $\mathcal{M}_{s}$ for almost every $s \in X$ such that $\rho(a)=\int_{X} \rho_{s}(a(s))d\mu$ for each $a$ in $\mathcal{M}$. We may assume $\rho_{s}$ is positive, faithful, normal and tracial for each $s \in X$. Hence for each $s \in X$, $\rho_{s}$ is a positive scalar multiple of the unique trace $\tau_{s}$ on the type II$_{1}$ factor $\mathcal{M}_{s}$.
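For later use we make this scalar explicit (a one-line verification from the uniqueness of the tracial state on a type II$_{1}$ factor): since $\rho_{s}$ is a nonzero positive normal tracial functional on the factor $\mathcal{M}_{s}$,
$$ \rho_{s}=\rho_{s}(I_{s})\,\tau_{s}, \qquad \text{and hence} \qquad \rho_{s}(x^{*}x)=\rho_{s}(I_{s})\,\Vert x \Vert_{2,s}^{2} \quad \text{for all } x \in \mathcal{M}_{s}, $$
where $I_{s}$ denotes the identity of $\mathcal{M}_{s}$ and $\Vert \cdot \Vert_{2,s}$ denotes the $2$-norm induced by $\tau_{s}$ on $\mathcal{M}_{s}$. Since $\rho_{s}(I_{s})>0$ by faithfulness, a sequence $\{ x_{r} \}$ in $\mathcal{M}_{s}$ satisfies $\rho_{s}(x_{r}^{*}x_{r}) \to 0$ if and only if $\Vert x_{r} \Vert_{2,s} \to 0$; this observation is used several times below.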
Let $\{ a_{j}: j \in \mathbb{N} \}, \{ a'_{j}: j \in \mathbb{N} \}$ be $SOT$ dense subsets of the unit balls $(\mathcal{M})_{1}$, $(\mathcal{M}')_{1}$ of $\mathcal{M}$ and $\mathcal{M}'$, respectively. By Proposition 14.1.24 in \cite{KR1}, we may assume that $(\mathcal{M}')_{s}=(\mathcal{M}_{s})'$ for every $s \in X$ and we use the notation $\mathcal{M}'_{s}$ for both. By Remark \ref{3.6}, we may assume $\{ a_{j}(s): j \in \mathbb{N} \}$ and $\{ a'_{j}(s): j \in \mathbb{N} \}$ are $SOT$ dense in $(\mathcal{M}_{s})_{1}$ and $(\mathcal{M}'_{s})_{1}$ for every $s \in X$.
(i)$\Rightarrow$ (ii): The result is clear from Corollary \ref{3.3}, Corollary \ref{3.3.5} and Remark \ref{3.3.6}.
(ii)$\Rightarrow$ (iii): For this direction, we suppose (ii) holds. Notice that $\mathcal M$ acts on a separable Hilbert space, whence $\mathcal M$ is countably generated in strong operator topology. By (ii), there exists a sequence of systems of matrix units $\{ \{e_{i,j}^{(r)}\}_{ i , j=1}^{ n_0} \ : \ r \in \mathbb{N} \}$ such that \begin{enumerate} \item [(A)] for each $r \in \mathbb{N}$, we have $$\text{$\sum_{i=1}^{n_0} e_{i,i}^{(r)}=I$, \quad $(e_{i,j}^{(r)})^*= e_{j,i}^{(r)}$ \quad and \ \ $e_{i,j}^{(r)}e_{j,k}^{(r)}=e_{i,k}^{(r)}$ \ \ \ for all $1\le i,j,k\le n_0$.}$$ \item [(B)] for each $1 \leq i \leq n_0$, $\lim\limits_{r \to \infty} \Vert e_{i,i}^{(r)}a-ae_{i,i}^{(r)} \Vert_{2}=0$ for any $a \in \mathcal{M}$. \end{enumerate} By condition (A) and Lemma \ref{3.8}, there exists a
$\mu$-null subset $N_{0}$ of $X$ such that, for each $r \in \mathbb{N}$ and each $s \in X \setminus N_{0}$, $ \{e_{i,j}^{(r)}(s)\}_{ i , j=1}^{ n_0} $ is a system of matrix units in $\mathcal{M}_{s}$ with $\sum_{i=1}^{n_0} e_{i,i}^{(r)}(s)=I_{s}$ (the identity in $\mathcal{M}_{s}$). In the following, we let
$$ p_{i,r}=e_{ii}^{(r)} \qquad \text {for all } 1\le i\le n_0, r\in \Bbb N. $$ Therefore, without loss of generality, we can assume that \begin{enumerate} \item [ (I)] $\{ p_{1,r}(s), p_{2,r}(s), \dots, p_{n_0,r}(s) \}$ is a set of $n_0$ equivalent orthogonal projections with sum $I_{s}$ in $\mathcal{M}_{s}$ for every $r \in \mathbb{N}$ and every $s \in X$; \item [ (II)] for each $1 \leq i \leq n_0$, $$ \lim\limits_{r \to \infty} \Vert p_{i,r} a-ap_{i,r} \Vert_{2}=0 \qquad \text{ for any $a \in \mathcal{M}$}. $$\end{enumerate}
In the following we will use a diagonal selection process to produce a subsequence $\{ r_{m}: m \in \mathbb{N} \}$ of $\{ r: r \in \mathbb{N} \}$ and a $\mu$-null subset $X_{0}$ of $X$ such that \begin{eqnarray} \lim\limits_{m \to \infty} \Vert p_{i,r_{m}}(s)a_{j}(s)-a_{j}(s)p_{i,r_{m}}(s) \Vert_{2,s}=0 \qquad \forall \ i \in \{ 1, 2, \dots, n_0\} \text { and } \forall \ s \in X \setminus X_{0}, \end{eqnarray} where the $\Vert \cdot \Vert_{2,s}$ is the $2$-norm induced by the unique trace $\tau_{s}$ on each $\mathcal{M}_{s}$.
First, by Assumption (II), for each $i \in \{ 1, 2, \dots, n_0 \}$, $$\lim\limits_{r \to \infty} \Vert p_{i,r}a_{1}-a_{1}p_{i,r} \Vert_{2}^{2}=\lim\limits_{r \to \infty} \int_{X} \rho_{s}((p_{i,r}(s)a_{1}(s)-a_{1}(s)p_{i,r}(s))^{*}(p_{i,r}(s)a_{1}(s)-a_{1}(s)p_{i,r}(s))) d\mu = 0.$$ Therefore, since a sequence of nonnegative integrable functions converging to $0$ in $L^{1}(X, \mu)$ has a subsequence converging to $0$ $\mu$-almost everywhere (applied successively for $i=1, 2, \dots, n_0$), there exist a $\mu$-null subset $Y_{1}$ of $X$ and a subsequence $\{ r_{1,m}: m \in \mathbb{N} \}$ of $\{ r: r \in \mathbb{N} \}$ such that $$\lim\limits_{m \to \infty} \rho_{s}((p_{i,r_{1,m} }(s)a_{1}(s)-a_{1}(s)p_{i,r_{1,m}}(s))^{*}(p_{i,r_{1,m}}(s)a_{1}(s)-a_{1}(s)p_{i,r_{1,m}}(s))) = 0$$ for any $s \in X \setminus Y_{1} $ and any $i \in \{ 1, 2, \dots, n_0 \}$. Since $\rho_{s}$ is a positive scalar multiple of the unique trace $\tau_{s}$ on the type II$_{1}$ factor $\mathcal{M}_{s}$, we obtain $$ \lim\limits_{m \to \infty}\Vert p_{i,r_{1,m}} (s)a_{1}(s)-a_{1}(s)p_{i,r_{1,m} }(s)\Vert_{2,s} = 0 $$ for any $i \in \{ 1, 2, \dots, n_0 \}$ and any $s \in X \setminus Y_{1}$, where $\Vert \cdot \Vert_{2,s}$ is the $2$-norm on $\mathcal{M}_{s}$ induced by $\tau_{s}$.
Again, there is a subsequence $\{ r_{2,m} : m \in \mathbb{N} \}$ of $\{ r_{1,m} : m \in \mathbb{N} \}$ and a $\mu$-null subset $Y_{2}$ of $X$ such that $$ \lim\limits_{m \to \infty} \Vert p_{i,r_{2,m}}(s)a_{2}(s)-a_{2}(s)p_{i,r_{2,m}}(s) \Vert_{2,s}=0 $$ for any $i \in \{ 1, 2, \dots, n_0 \}$ and any $s \in X\setminus Y_{2}$.
Continuing in this way, we obtain a subsequence $\{ r_{k,m}: m \in \mathbb{N} \}$ of $\{ r_{k-1,m}: m \in \mathbb{N} \}$ and a $\mu$-null subset $Y_{k}$ for each $k\ge 2$, satisfying $$ \lim\limits_{m \to \infty} \Vert p_{i,r_{k,m}}(s)a_{k}(s)-a_{k}(s)p_{i,r_{k,m}}(s) \Vert_{2,s}=0 $$ for any $i \in \{ 1, 2, \dots, n_0 \}$ and any $s \in X\setminus Y_{k}$. Now we apply the diagonal selection by letting $r_{m}=r_{m,m}$ for each $m \in \mathbb{N}$ to these subsequences and obtain that \begin{align} \lim\limits_{m \to \infty} \Vert p_{i,r_{m}}(s)a_{j}(s)-a_{j}(s)p_{i,r_{m}}(s) \Vert_{2,s} = 0 \label{8}
\end{align} for any $i \in \{ 1, 2, \dots, n_0 \}$, $j \in \mathbb{N}$ and $s \in X \setminus X_{0}$, where $X_{0}=\cup_{k \in \mathbb{N}} Y_{k}$ is a $\mu$-null subset of $X$; indeed, for each fixed $j$, the tail $\{ r_{m}: m \geq j \}$ is a subsequence of $\{ r_{j,m}: m \in \mathbb{N} \}$, so the limit in (\ref{8}) already holds for every $s \in X \setminus Y_{j}$.
Since $\{ a_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense, and hence $\Vert \cdot \Vert_{2,s}$ dense (Lemma \ref{3.2}), in the unit ball of $\mathcal{M}_{s}$ for each $s \in X$, and since $\Vert qa-aq \Vert_{2,s} \leq \Vert qb-bq \Vert_{2,s}+2\Vert a-b \Vert_{2,s}$ for any projection $q$ and any $a, b \in \mathcal{M}_{s}$, (\ref{8}) implies that, for any $i \in \{ 1, 2, \dots, n_0 \}$, $s \in X \setminus X_{0}$ and any $a \in \mathcal{M}_{s}$, \begin{eqnarray} \lim\limits_{m \to \infty} \Vert p_{i,r_{m}}(s)a-a p_{i,r_{m}}(s) \Vert_{2,s} = 0. \label{9} \end{eqnarray} It follows from (\ref{9}) and Assumption (I) that $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for almost every $s \in X$.
(iii) $\Rightarrow$ (i): Suppose $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for almost every $s \in X$. We may assume that for every $s \in X$, $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$.
By Remark \ref{2.4}, we can obtain a separable Hilbert space $K$ and a family of unitaries $\{ U_{s}: H_{s} \to K; s \in X \}$ such that $s \to U_{s}x(s)$ and $s \to U_{s}a(s)U_{s}^{*}$ are measurable for any $x \in H$ and any decomposable operator $a \in B(H)$. Let $\mathcal{B}$ be the unit ball of self-adjoint elements in $B(K)$ equipped with the $*$-strong operator topology. Then it is metrizable by setting $d(S,T)=\sum_{m=1}^{\infty} 2^{-m} (\Vert (S-T)e_{m} \Vert + \Vert (S^{*}-T^{*})e_{m} \Vert)$ for any $S, T \in \mathcal{B}$, where $\{ e_{m} \}$ is an orthonormal basis of $K$. The metric space $(\mathcal{B}, d)$ is complete and separable. Now let $\mathcal{B}_{1} =\mathcal{B}_{2} =\dots =\mathcal{B}_{l} =\dots =\mathcal{B}$ and $\mathcal{C} =\prod\limits_{ l \in \mathbb{N}} \mathcal{B}_{l}$ provided with the product topology of the $*$-strong operator topology on each $\mathcal{B}_{l}$. It follows that $\mathcal{C}$ is metrizable and is also a complete separable metric space.
Replacing $a_{0}$ by $a_{j}$ for $j\in \Bbb N$, we apply Proposition \ref{2.5} countably many times and obtain positive, faithful, normal, tracial linear functionals $\rho_s$ on $\mathcal M_s$ (almost everywhere) and a Borel $\mu$-null subset $N$ of $X$ such that, \begin{enumerate} \item [(1)] $\rho(a)=\int_X \rho_s(a(s))d\mu$ for every $a\in\mathcal M$; \item [(2)] for any $j \in \mathbb{N}$, the maps \begin{equation} (s, b) \to \rho_{s}((a_{j}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{j}(s))) \label{equa3.10.1}\end{equation} and \begin{equation}(s, b) \to \rho_{s}(U_{s}^{*}bU_{s})\label{equa3.10.2}\end{equation} from $X \times \mathcal{B}$ to $\mathbb{C}$ are Borel measurable when restricted to $(X \setminus N) \times \mathcal{B}$. \end{enumerate}
We denote by $(s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots))$ an element in $X\times \mathcal C$. Since $b \to b^{*}$ and $b \to b^{2}$ are $*$-$SOT$ continuous from $\mathcal{B}$ to $\mathcal{B}$, the maps \begin{align} (s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) &\to Q_{it}, \label{13}\\ (s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) &\to Q_{it}^{2}, \label{14}\\ (s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) &\to Q_{it}^{*} \label{15} \end{align} are Borel measurable from $X \times \mathcal{C}$ to $\mathcal{B}$.
By Remark \ref{2.4}, the map $s \to U_{s}a'_{j}(s)U_{s}^{*}$ from $X$ to $\mathcal{B}$ is measurable for every $j \in \mathbb{N}$. Therefore, by Lemma 14.3.1 in \cite{KR1}, there exists a Borel $\mu$-null subset $N'$ of $X$ such that the map $s \to U_{s}a'_{j}(s)U_{s}^{*}$ is Borel measurable when restricted to $X \setminus N'$ for every $j \in \mathbb{N}$. Hence the maps \begin{align} (s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) &\to Q_{it}U_{s}a'_{j}(s)U_{s}^{*}, \label{16}\\ (s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) &\to U_{s}a'_{j}(s)U_{s}^{*}Q_{it} \label{17} \end{align} are Borel measurable when restricted to $(X \setminus N') \times \mathcal{C}$ for every $j \in \mathbb{N}$.
Since the functionals $\rho_{s}$ are chosen such that the maps $$(s, b) \to \rho_{s}((a_{j}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}bU_{s}-U_{s}^{*}bU_{s}a_{j}(s)))$$ and $$(s, b) \to \rho_{s}(U_{s}^{*}bU_{s})$$ are Borel measurable when restricted to $(X \setminus N) \times \mathcal{B}$, where $N$ is a Borel $\mu$-null subset of $X$, the maps \begin{eqnarray} &&(s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) \notag \\ &&\to \rho_{s}((a_{j}(s)U_{s}^{*}Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a_{j}(s))) \label{18} \end{eqnarray} and \begin{eqnarray} (s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots)) \to \rho_{s}(U_{s}^{*}Q_{it}U_{s}) \label{19} \end{eqnarray} are Borel measurable when restricted to $(X\setminus N) \times \mathcal{C}$ for each $j \in \mathbb{N}$.
Take $N_{0}=N \cup N'$. Then we have the following claim.
{\bf Claim \ref{3.7}.1.} {\em $N_{0}$ is a Borel $\mu$-null subset of $X$ and the maps (\ref{13})-(\ref{19}) are Borel measurable when restricted to $X \setminus N_{0}$.}
Next we introduce the following subset $\eta$ of $(X\setminus N_{0}) \times \mathcal{C}$.
{\em Let $ \eta$ be a subset of $(X\setminus N_{0}) \times \mathcal{C}$ that consists of all these elements $$(s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots, Q_{1t}, Q_{2t}, \dots, Q_{nt}, \dots )) \in (X\setminus N_{0}) \times \mathcal{C}$$ satisfying \begin{enumerate} \item [(a)] for any $1 \leq i \leq n, t \in \mathbb{N}$, \begin{equation}
Q_{it}=Q_{it}^{*}=Q_{it}^{2} \neq 0; \label{equa 3.7.5} \end{equation} \item [(b)] for any $1 \leq i \leq n$, $t, j \in \mathbb{N}$, \begin{equation}
Q_{it}U_{s}a'_{j}(s)U_{s}^{*}=U_{s}a'_{j}(s)U_{s}^{*}Q_{it}; \label{equa 3.7.6} \end{equation} \item [(c)] for any $1 \leq i \leq n$, $t \in \mathbb{N}, 1 \leq j \leq t$, \begin{equation}
\rho_{s}((a_{j}(s)U_{s}^{*}Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}
Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a_{j}(s)))<1/t; \label{equa 3.7.7} \end{equation} \item [(d)] for any $t \in \mathbb{N}$,
\begin{equation}
\rho_{s}(U_{s}^{*}Q_{1t}U_{s})= \dots =\rho_{s}(U_{s}^{*}Q_{nt}U_{s}) \ \text{ and } \ Q_{1t}+Q_{2t}+ \dots +Q_{nt}=I. \label{equa 3.7.8} \end{equation} \end{enumerate}}
We have the following claim.
{\bf Claim \ref{3.7}.2:} {\em The set $\eta$ is analytic.}
{Proof of Claim \ref{3.7}.2:} By Claim \ref{3.7}.1, we know the maps (\ref{13})-(\ref{19}) are Borel measurable when restricted to $X \setminus N_{0}$. It follows that the set $\eta$ is a Borel set. Thus by Theorem 14.3.5 in \cite{KR1}, $\eta$ is analytic. The proof of Claim \ref{3.7}.2 is completed.
{\bf Claim \ref{3.7}.3:} {\em Let $\pi$ be the projection of $X \times \mathcal{C}$ onto $X$. Then $\pi(\eta)= X \setminus N_{0}$.}
{Proof of Claim \ref{3.7}.3:} Let $$(s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots, Q_{1t}, Q_{2t}, \dots, Q_{nt}, \dots ))$$ be an element in $\eta$. From the definition of the set $\eta$, it is not hard to see that condition (a) is equivalent to the condition that each $Q_{it}$ is a nonzero projection. Since $\{ a'_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense in $(\mathcal{M}'_{s})_{1}$ for each $s \in X$, condition (b) is equivalent to the condition that $U_{s}^{*}Q_{it}U_{s} \in \mathcal{M}_{s}$. Since $\{ a_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense in $(\mathcal{M}_{s})_{1}$ for each $s \in X$, condition (c) is equivalent to $$\lim\limits_{t \to \infty} \rho_{s}((aU_{s}^{*}Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a)^{*}(aU_{s}^{*}Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a))=0$$ for any $a \in \mathcal{M}_{s}$. Furthermore, since $\rho_{s}$ is a positive scalar multiple of $\tau_{s}$ on $\mathcal{M}_{s}$ for each $s \in X$, condition (c) is equivalent to $$\lim\limits_{t \to \infty} \Vert aU_{s}^{*}Q_{it}U_{s}-U_{s}^{*}Q_{it}U_{s}a \Vert_{2,s} = 0$$ for any $a \in \mathcal{M}_{s}$. Moreover, $(s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots, Q_{1t}, Q_{2t}, \dots, Q_{nt}, \dots ))$ satisfies conditions (a), (b) and (d) if and only if $U_{s}^{*}Q_{1t}U_{s}, U_{s}^{*}Q_{2t}U_{s}, \dots, U_{s}^{*}Q_{nt}U_{s}$ are $n$ equivalent projections in $\mathcal{M}_{s}$ with sum $I_{s}$ for each $t \in \mathbb{N}$.
For each $s \in X$, notice that $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$. From the argument in the preceding paragraph, there exist projections $\{ U_{s}^{*}Q_{it}U_{s}: 1 \leq i \leq n, t \in \mathbb{N}\}$ in $\mathcal{M}_{s}$ such that $$(s, (Q_{11}, Q_{21}, \dots, Q_{n1}, Q_{12}, Q_{22}, \dots, Q_{n2}, \dots, Q_{1t}, Q_{2t}, \dots, Q_{nt}, \dots )) \in X \times \mathcal{C}$$ satisfies conditions (a), (b), (c) and (d). Therefore the image of $\eta$ under $\pi$ is exactly $X \setminus N_{0}$. The proof of Claim \ref{3.7}.3 is completed.
\noindent(Continue the proof of Proposition \ref{3.7}:) By Claim \ref{3.7}.2 and Claim \ref{3.7}.3, $\eta$ is analytic and the image of $\eta$ under $\pi$ is $X \setminus N_{0}$. By Theorem 14.3.6 in \cite{KR1}, there is a measurable mapping $$s \to (Q_{11}^{(s)}, Q_{21}^{(s)}, \dots, Q_{n1}^{(s)}, Q_{12}^{(s)}, Q_{22}^{(s)}, \dots, Q_{n2}^{(s)}, \dots)$$ from $X\setminus N_{0}$ to $\mathcal{C}$ such that, for $s \in X \setminus N_{0}$ almost everywhere, $$(s, (Q_{11}^{(s)}, Q_{21}^{(s)}, \dots, Q_{n1}^{(s)}, Q_{12}^{(s)}, Q_{22}^{(s)}, \dots, Q_{n2}^{(s)}, \dots))$$ satisfies conditions (a), (b), (c) and (d) (see (\ref{equa 3.7.5}), (\ref{equa 3.7.6}), (\ref{equa 3.7.7}) and (\ref{equa 3.7.8})). By defining $Q_{it}^{(s)}=0$ for any $t \in \mathbb{N}, 1 \leq i \leq n, s \in N_{0}$, we obtain {\em a measurable mapping \begin{equation}s \to (Q_{11}^{(s)}, Q_{21}^{(s)}, \dots, Q_{n1}^{(s)}, Q_{12}^{(s)}, Q_{22}^{(s)}, \dots, Q_{n2}^{(s)}, \dots)\label{equa 3.7.4}\end{equation} from $X$ to $\mathcal{C}$ such that, for $s \in X $ almost everywhere, $$(s, (Q_{11}^{(s)}, Q_{21}^{(s)}, \dots, Q_{n1}^{(s)}, Q_{12}^{(s)}, Q_{22}^{(s)}, \dots, Q_{n2}^{(s)}, \dots))$$ satisfies conditions (a), (b), (c) and (d) (see (\ref{equa 3.7.5}), (\ref{equa 3.7.6}), (\ref{equa 3.7.7}) and (\ref{equa 3.7.8})).}
By (\ref{equa 3.7.4}), for any $t \in \mathbb{N}, 1 \leq i \leq n$ and any vectors $y,z \in H$, we have $$\langle U_{s}^{*}Q_{it}^{(s)}U_{s}y(s), z(s) \rangle = \langle Q_{it}^{(s)}U_{s}y(s), U_{s}z(s) \rangle$$ and thus the map $$s \to \langle U_{s}^{*}Q_{it}^{(s)}U_{s}y(s), z(s) \rangle$$ is measurable. Since $$\vert \langle U_{s}^{*}Q_{it}^{(s)}U_{s}y(s), z(s) \rangle \vert \leq \Vert y(s) \Vert \Vert z(s) \Vert,$$ the map $s \to \langle U_{s}^{*}Q_{it}^{(s)}U_{s}y(s), z(s) \rangle$ is integrable. By Definition 14.1.1 in \cite{KR1}, it follows that \begin{eqnarray} U_{s}^{*}Q_{it}^{(s)}U_{s}y(s)=(p_{it}y)(s) \label{20} \end{eqnarray} almost everywhere for some $p_{it}y$ in $H$. For each $t \in \mathbb{N}$, (\ref{20}) implies that $p_{it}(s)=U_{s}^{*}Q_{it}^{(s)}U_{s}$ for almost every $s \in X$. Since $U_{s}^{*}Q_{it}^{(s)}U_{s} \in \mathcal{M}_{s}$ almost everywhere by condition (b), it follows that $p_{it} \in \mathcal{M}$ for each $t \in \mathbb{N}$. Notice that conditions (a), (b) and (d) together imply that $U_{s}^{*}Q_{1t}^{(s)}U_{s}, U_{s}^{*}Q_{2t}^{(s)}U_{s}, \dots, U_{s}^{*}Q_{nt}^{(s)}U_{s}$ are $n$ orthogonal equivalent projections in $\mathcal{M}_{s}$ with sum $I_{s}$ for each $t \in \mathbb{N}$. It follows that $p_{1t}, p_{2t}, \dots, p_{nt}$ are $n$ orthogonal equivalent projections in $\mathcal{M}$ with sum $I$ for each $t \in \mathbb{N}$.
In order to show that $\mathcal{M}$ has Property $\Gamma$, it suffices to show that for any $i \in \{ 1, 2, \dots, n \}$ and $a \in \mathcal{M}$, \begin{eqnarray} \lim\limits_{t \to \infty} \rho((ap_{it}-p_{it}a)^{*}(ap_{it}-p_{it}a)) =0. \label{21} \end{eqnarray} By condition (c), we obtain that for each $j \in \mathbb{N}, 1\leq i \leq n$ and $s \in X$, \begin{eqnarray} \lim\limits_{t \to \infty} \rho_{s}((a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s)))=0. \label{22} \end{eqnarray} Fix $i \in \{ 1, 2, \dots, n \}$ and $j \in \mathbb{N}$. For each $t \in \mathbb{N}$, define a function $f_{t}: X \to \mathbb{C}$ such that $$f_{t}(s)=\rho_{s}((a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s))).$$ It follows from (\ref{22}) that \begin{eqnarray} \lim\limits_{t \to \infty} f_{t}(s) = 0 \label{23} \end{eqnarray} almost everywhere. By Lemma 14.1.9 in \cite{KR1}, for each $j \in \mathbb{N}$, $\Vert a_{j} \Vert$ is the essential bound of $\{ \Vert a_{j}(s) \Vert: s \in X \}$. Therefore \begin{eqnarray*} &&\Vert (a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s)) \Vert\\ &\leq& 4 \Vert a_{j}(s) \Vert ^{2}\\ &\leq& 4 \Vert a_{j} \Vert ^{2} \end{eqnarray*} almost everywhere. Hence \begin{eqnarray} 0 \leq f_{t}(s) \leq 4 \Vert a_{j} \Vert^{2} \rho_{s}(I_{s}) \label{24} \end{eqnarray} almost everywhere. Furthermore, \begin{eqnarray} \int_{X} 4 \Vert a_{j} \Vert^{2} \rho_{s}(I_{s}) d\mu=4 \Vert a_{j} \Vert^{2} \rho(I)=4 \Vert a_{j} \Vert^{2} \leq 4, \label{25} \end{eqnarray} so, by the Dominated Convergence Theorem, it follows from (\ref{23}), (\ref{24}) and (\ref{25}) that \begin{eqnarray} \lim\limits_{t \to \infty} \int_{X} \rho_{s}((a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s))^{*}(a_{j}(s)U_{s}^{*}Q_{it}^{(s)}U_{s}-U_{s}^{*}Q_{it}^{(s)}U_{s}a_{j}(s))) d \mu = 0. \label{26} \end{eqnarray} Since $p_{it}(s)=U_{s}^{*}Q_{it}^{(s)}U_{s}$ for almost every $s \in X$, (\ref{26}) implies \begin{eqnarray} \lim\limits_{t \to \infty} \rho((a_{j}p_{it}-p_{it}a_{j})^{*}(a_{j}p_{it}-p_{it}a_{j})) =0. \label{27} \end{eqnarray} From the fact that $\{ a_{j}: j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball of $\mathcal{M}$, we obtain equation (\ref{21}) from (\ref{27}). Thus $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with Property $\Gamma$. \end{proof}
\iffalse \begin{example}\label{example 2} Let $F_2$ be a non-abelian free group on two generators $a, b$ and $L(F_2)$ be the free group factor on two standard generators $\lambda(a)$, $\lambda(b)$. Let $Aut(L(F_2))$ be the automorphism group of $L(F_2)$.
For each $n\ge 2$, let $G_n\cong \Bbb Z$ be a copy of group of integers and $g_n$ be a generator of $G_n$. Let $$ G=G_2\oplus G_3\oplus \cdots =\oplus_{n=2}^\infty G_n $$ be a direct sum of $\{G_n\}_{n=2}^\infty$. And $\{g_n\}_{n=2}$ can be naturally viewed as a family of generators of $G$.
Let $$\alpha: \ G\rightarrow Aut(L(F_2))$$ be a group homomorphism from $G$ to $Aut(L(F_2))$ defined by the following action: $$ \alpha(g_n)(\lambda(a))=e^{ {2\pi i}/ n} \lambda(a)\qquad \text{and} \qquad \alpha(g_n)(\lambda(b))=e^{ {2\pi i}/ n} \lambda(b) , \qquad \forall \ n\ge 2. $$ Let $$M=L(F_2) \rtimes_\alpha G$$ be the crossed product of $L(F_2)$ by the group action $G$ (See \cite{KR1} for details of this construction). Notice that $\mathcal M$ has a nontrivial center. In the following we will prove that $\mathcal M$ is a type II$_1$ von Neumann algebra with Property $\Gamma$.
Without loss of generality, we might assume that $\alpha$ is unitarily implemented, i.e. there is a family of unitary elements $\{u(g): g\in G\} $ in $\mathcal M$ such that $\alpha(g)(x)=u(g)xu(g)^*$, for all $x\in \mathcal M$.
Note that $\mathcal M$ acts on a separable Hilbert space. We might assume that $$
\mathcal{M}=\int_{X} \bigoplus M_{s} d \mu \quad \text{ and } \quad H= \int_{X} \bigoplus H_{s} d \mu $$ are the direct integral decompositions of $(\mathcal{M},H)$ relative to the center $\mathcal{Z}$ of $\mathcal{M}$. We can further assume that $\mathcal M_s$ is a type II$_1$ factor for each $s\in X$. In order to show that $\mathcal M$ has Property $\Gamma$, by Proposition \ref{3.7}, it suffices to show that $\mathcal M_s$ has Property $\Gamma $ for $s\in X$ almost everywhere.
Let $C_r(F_2)$ be the reduced C$^*$-algebra of free group $F_2$. In fact, $C_r(F_2)$ is the C$^*$-subalgebra generated by $\lambda(a)$ and $\lambda(b)$ in $L(F_2)$. Then $\alpha $ also induces an action of $G$ on $C_r(F_2)$. Let $C_r(F_2)\rtimes_{\alpha,r} G$ be the reduced crossed product of $C_r(F_2)$ by the group action $G$. Then $C_r(F_2)\rtimes_{\alpha,r} G$ embeds naturally into $\mathcal M$ as a SOT dense subalgebra. In fact $C_r(F_2)\rtimes_{\alpha,r} G$ is the C$^*$-subalgebra generated by $\lambda(a)$, $\lambda(b)$ and $\{u(g): g\in G\} $ in $\mathcal M$.
From Theorem 14.1.13 in \cite{KR1}, we know that, for $s\in X$ almost everywhere, there is a $*$-homomorphism $$\pi_s: C_r(F_2)\rtimes_{\alpha,r} G \rightarrow \mathcal M_s$$ such that $\pi_s(C_r(F_2)\rtimes_{\alpha,r} G )$ is SOT dense in $\mathcal M_s$. Let $\tau_s$ be a tracial state on $\mathcal M_s$. Note $G$ is an abelian group. In order to show that $\mathcal M_s$ has Property $\Gamma$, it suffices to show: $\forall \epsilon>0$, there is a $g\in G$ such that
\begin{enumerate}\item $\tau_s(\pi_s(u(g)))=0$; \item $\|\pi_s(\lambda(a)u(g)-u(g)\lambda(a))\|_2\le \epsilon$ and $\|\pi_s(\lambda(b)u(g)-u(g)\lambda(b))\|_2\le \epsilon$, where $\|\cdot \|_2$ is the $2$-norm induced by $\tau_s$ on $\mathcal M_s$. \end{enumerate}
Recall $\{g_n\}_{n=2}^\infty$ is a family of standard generators of $G$ such that $$ \alpha(g_n)(\lambda(a))=e^{ {2\pi i}/ n} \lambda(a)\qquad \text{and} \qquad \alpha(g_n)(\lambda(b))=e^{ {2\pi i}/ n} \lambda(b) , \qquad \forall \ n\ge 2. $$I.e. $$ u(g_n)\lambda(a)u(g_n)^*=e^{ {2\pi i}/ n} \lambda(a)\qquad \text{and} \qquad u(g_n)\lambda(b)u(g_n)^*=e^{ {2\pi i}/ n} \lambda(b) , \qquad \forall \ n\ge 2. $$ Note $\pi_s$ is a $*$-homomorphism for $s\in X$ almost everywhere. We have, for $ n\ge 2$, $$\begin{aligned} \pi_s(u(g_n))\pi_s(\lambda(a))\pi_s(u(g_n)^*) =e^{ {2\pi i}/ n} \pi_s(\lambda(a)), \quad \pi_s(u(g_n))\pi_s(\lambda(b))\pi_s(u(g_n)^*)
=e^{ {2\pi i}/ n} \pi_s(\lambda(b)).\end{aligned} $$ Let $\epsilon>0$ be a given positive number. When $n$ is sufficiently large, we know, for $s \in X$ almost everywhere, $\pi_s(u(g_n))$ is a unitary element in $\mathcal M_s$ such that
\begin{enumerate}\item[(1')] $\tau_s(\pi_s(u(g_n)))=0$; \item[(2')] $\|\pi_s(\lambda(a)u(g_n)-u(g_n)\lambda(a))\|_2\le \epsilon$ and $\|\pi_s(\lambda(b)u(g_n)-u(g_n)\lambda(b))\|_2\le \epsilon$. \end{enumerate} Therefore, $\mathcal M_s$ is a type II$_1$ factor with Property $\Gamma$ for $s\in X$ almost everywhere. By Proposition \ref{3.7}, we conclude that $\mathcal M$ has Property $\Gamma$.
A further analysis on the direct integral of $\mathcal M$ shows that $\mathcal M\cong \mathcal Z\otimes \mathcal A$, where $\mathcal Z$ is the center of $\mathcal M$ and $\mathcal A$ is a type II$_1$ factor with Property $\Gamma$. Now the result that $\mathcal M$ has Property $\Gamma$ also follows from Example \ref{example 1}. \end{example} \fi
If $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra, then by Lemma 6.5.6 in \cite{KR1}, for any $m \in \mathbb{N}$, there is a unital subalgebra $\mathcal{A}$ of $\mathcal{M}$ such that $\mathcal{A} \cong M_{m}(\mathbb{C})$.
\begin{proposition} \label{3.9} Suppose $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra acting on a separable Hilbert space $H$. Suppose further $\mathcal{A}$ is a unital subalgebra of $\mathcal{M}$ such that $\mathcal{A}\cong M_{m}(\mathbb{C})$ for some $m \in \mathbb{N}$. Let $\mathcal{N}=\mathcal{A}' \cap \mathcal{M}$. Then $\mathcal{M}$ has Property $\Gamma$ if and only if $\mathcal{N}$ has Property $\Gamma$. \end{proposition}
\begin{proof} By Lemma 11.4.11 in \cite{KR1}, $\mathcal{M} \cong \mathcal{A} {\otimes} \mathcal{N} \cong M_{m}(\mathbb{C}) {\otimes} \mathcal{N}$. It is trivial to see that if $\mathcal{N}$ has Property $\Gamma$, then $\mathcal{M}$ has Property $\Gamma$. Thus we only need to show that Property $\Gamma$ of $\mathcal{M}$ implies Property $\Gamma$ of $\mathcal{N}$.
Suppose $\mathcal{M}$ has Property $\Gamma$. Let $\mathcal{M}= \int_{X} \bigoplus \mathcal{M}_{s}d\mu$ and $H=\int_{X} \bigoplus H_{s} d\mu$ be the direct integral decompositions relative to the center $\mathcal{Z}$ of $\mathcal{M}$. Since $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with Property $\Gamma$, by Proposition \ref{3.7}, $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for almost every $s \in X$. We may assume $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for every $s \in X$.
Since $\mathcal{A} \cong M_{m}(\mathbb{C})$, by Lemma \ref{3.8}, we may assume $\mathcal{A}_{s} \cong M_{m}(\mathbb{C})$ for every $s \in X$. Since $\mathcal{N}=\mathcal{A}' \cap \mathcal{M}$, $\mathcal{N}_{s}=\mathcal{A}'_{s} \cap \mathcal{M}_{s}$ for almost every $s \in X$. Then by Lemma 11.4.11 in \cite{KR1}, $$\mathcal{M}_{s} \cong \mathcal{A}_{s} {\otimes} \mathcal{N}_{s} \cong M_{m}(\mathbb{C}) {\otimes} \mathcal{N}_{s}$$ for almost every $s \in X$. Since $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for every $s \in X$, by Lemma 5.1 in \cite{EFAR2}, $\mathcal{N}_{s}$ has Property $\Gamma$ for almost every $s \in X$. By a similar argument as the proof of Proposition \ref{3.7}, we can conclude that $\mathcal{N}$ has Property $\Gamma$. \end{proof}
\section{Hyperfinite II$_1$ subfactors in type II$_1$ von Neumann algebras}
Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual and Property $\Gamma$. We will devote this section to the construction of a hyperfinite type II$_{1}$ subfactor $\mathcal{R}$ of $\mathcal{M}$ such that \begin{enumerate} \item [(I)] $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$, where $\mathcal{Z}$ is the center of $\mathcal{M}$ ;
\item [(II)] for any given $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}, n \in \mathbb{N}$ and $\epsilon >0$, there exist orthogonal equivalent projections $p_{1}, p_{2}, \dots, p_{n}$ in $\mathcal{R}$ with sum $I$ such that $$\Vert p_{i}a_{j}-a_{j}p_{i} \Vert_{2} < \epsilon, \qquad i=1, 2, \dots, n; \ j=1, 2, \dots, k,$$ where the $2$-norm $\Vert \cdot \Vert_{2}$ is given by $\Vert a \Vert_{2}=\sqrt{\rho(a^{*}a)}, \forall a \in \mathcal{M}$ for some faithful normal tracial state $\rho$ on $\mathcal{M}$. \end{enumerate} \begin{lemma} \label{3.10} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra acting on a separable Hilbert space $H$. Let $m \in \mathbb{N}$ and $\mathcal{A}$ be a unital subalgebra of $\mathcal{M}$ such that $\mathcal{A} \cong M_{m}(\mathbb{C})$. Let $\mathcal{N}=\mathcal{A}' \cap \mathcal{M}$. Assume that $\mathcal{M}= \int_{X} \bigoplus \mathcal{M}_{s}d\mu$ and $H=\int_{X} \bigoplus H_{s} d\mu$ are the direct integral decompositions relative to the center $\mathcal{Z}$ of $\mathcal{M}$. Assume that $\rho$ is a faithful normal tracial state on $\mathcal{M}$ and $\{ \rho_{s}: s \in X \}$ is a family of positive, faithful, normal, tracial functionals as introduced in Lemma \ref{2.2} and Proposition \ref{2.5}. If $\mathcal M$ has Property $\Gamma$, then
\begin{enumerate}\item []
$\forall a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, $\forall n \in \mathbb{N}$ and $\forall \epsilon >0$, there exist a $\mu$-null subset $X_{0}$ of $X$ and a family of mutually orthogonal equivalent projections $\{p_{1}, p_{2}, \dots, p_{n}\}$ in $\mathcal{N}$ with sum $I$ such that,
$$\rho_{s}((p_{i}(s)a_{j}(s)-a_{j}(s)p_{i}(s))^{*}(p_{i}(s)a_{j}(s)-a_{j}(s)p_{i}(s))) < \epsilon,$$
for all $i=1,2, \dots, n,$ $ j=1,2, \dots, k,$ and $ s \in X \setminus X_{0}$.\end{enumerate} \end{lemma}
\begin{proof} Since $\mathcal{A} \cong M_{m}(\mathbb{C})$ and $\mathcal{N}=\mathcal{A}' \cap \mathcal{M}$, by Lemma 11.4.11 in \cite{KR1}, $\mathcal{M} \cong \mathcal{A} {\otimes} \mathcal{N}$. Then by the discussion in section 11.2 in \cite{KR1}, $\mathcal{N}$ is a type II$_{1}$ von Neumann algebra. Let $\{ a_{gh} \}_{g,h=1}^{m}$ be a system of matrix units for $\mathcal{A}$. By Lemma \ref{3.8}, we may assume that $\mathcal{A}_{s} \cong M_{m}(\mathbb{C})$ and $\mathcal{M}_{s} \cong \mathcal{A}_{s} {\otimes} \mathcal{N}_{s}$ for every $s \in X$. For each $s \in X$, let $\{ a_{gh}(s)\}_{g,h=1}^{m}$ be a system of matrix units for $\mathcal{A}_{s}$. By Proposition \ref{3.7}, we may assume that $\mathcal{M}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for every $s \in X$. Then by Lemma 5.1 in \cite{EFAR2}, $\mathcal{N}_{s}$ is a type II$_{1}$ factor with Property $\Gamma$ for every $s \in X$.
Let $\{ a'_{r}: r \in \mathbb{N} \}$ be a $SOT$ dense subset in the unit ball $\mathcal{M}'_{1}$ of $\mathcal{M}'$. By Proposition 14.1.24 in \cite{KR1}, we may assume that $(\mathcal{M}')_{s}=(\mathcal{M}_{s})'$ for every $s \in X$ and we use the notation $\mathcal{M}'_{s}$ for both. Therefore by Remark \ref{3.6}, we may assume $\{ a'_{r}(s) : r \in \mathbb{N} \}$ is $SOT$ dense in $(\mathcal{M}_{s}')_{1}$ for every $s \in X$.
Take a separable Hilbert space $K$ and a family of unitaries $\{ U_{s}: H_{s} \to K; s \in X \}$ as in Remark \ref{2.4} such that $s \to U_{s}x(s)$ and $s \to U_{s}a(s)U_{s}^{*}$ are measurable for any $x \in H$ and any decomposable operator $a \in B(H)$. Let $\mathcal{B}$ be the unit ball of $B(K)$ equipped with the $*$-strong operator topology. Since $K$ is separable, $\mathcal{B}$ is metrizable by setting $d(S,T)=\sum_{j=1}^{\infty} 2^{-j}(\Vert (S-T)e_{j} \Vert +\Vert (S^{*}-T^{*})e_{j} \Vert )$ for any $S, T \in \mathcal{B}$, where $\{ e_{j}: j \in \mathbb{N} \}$ is an orthonormal basis for $K$. Then the metric space $(\mathcal{B}, d)$ is complete and separable. For each $1 \leq i, j \leq n$, let $\mathcal{B}_{ij}=\mathcal{B}$. Take $\mathcal{C}=\prod\limits_{1 \leq i, j \leq n} \mathcal{B}_{ij}$ equipped with the product topology. It follows that $\mathcal{C}$ is a complete separable metric space.
By the choices of $\{U_s\}$, we know that the maps $ s \to U_{s}a'_{r}(s)U_{s}^{*},$ and $s \to U_{s}a_{gh}(s)U_{s}^{*} $ from $X$ to $B(K)$ are measurable for any $r \in \mathbb{N}$ and any $ g, h=1, 2, \dots, m$. By Lemma 14.3.1 in \cite{KR1}, there exists a Borel $\mu$-null subset $N_{1}$ of $X$ such that the maps \begin{align} (s, b) &\to bU_{s}a'_{r}(s)U_{s}^{*}, \label{28} \\ (s, b) &\to U_{s}a'_{r}(s)U_{s}^{*}b, \label{29} \\ (s, b) &\to bU_{s}a_{gh}(s)U_{s}^{*}, \label{30} \\ (s, b) &\to U_{s}a_{gh}(s)U_{s}^{*}b \label{31} \end{align} are Borel measurable from $(X \setminus N_{1}) \times \mathcal{B}$ to $B(K)$ for any $r \in \mathbb{N}$ and any $ g, h=1, 2, \dots, m$. Since $\rho$ is a faithful normal tracial state, by Lemma \ref{2.2}, we may assume that, for every $s\in X$, there exists a positive, faithful, normal, tracial functional $\rho_{s}$ on $\mathcal{M}_{s}$ such that $\rho (a)= \int_{X} \rho_{s}(a(s)) d\mu$ for any $a \in \mathcal{M}$. By Proposition \ref{2.5}, there is a Borel $\mu$-null subset $N_{2}$ of $X$ such that, for each $j \in \mathbb{N}$, the map \begin{eqnarray} (s, b) \to \rho_{s}((U_{s}^{*}bU_{s}a_{j}(s)-a_{j}(s)U_{s}^{*}bU_{s})^{*}(U_{s}^{*}bU_{s}a_{j}(s)-a_{j}(s)U_{s}^{*}bU_{s})) \label{32} \end{eqnarray} is Borel measurable from $( X\setminus N_{2}) \times \mathcal{B}$ to $\mathbb{C}$. From the fact that each $\rho_s$ is a positive, faithful, normal, tracial functional on $\mathcal{M}_{s}$, it follows that $\rho_{s}$ is a positive scalar multiple of the unique trace $\tau_{s}$ on $\mathcal{M}_{s}$ for each $s$ in $X\setminus N_{2}$.
Let $N=N_{1} \cup N_{2}$. {\em Let $\eta$ be the collection of all those elements $$(s, E_{11}, E_{12}, \dots, E_{nn}) \in (X\setminus N) \times \mathcal{C}$$ such that \begin{enumerate} \item [(i)] for all $i_{1}, i_{2}, i_{3} \in \{ 1,2, \dots, n \}$, \begin{equation} E_{i_{1}i_{2}}=E_{i_{2}i_{1}}^{*}\quad \text{and} \quad E_{i_{1}i_{2}}E_{i_{2}i_{3}}=E_{i_{1}i_{3}};\label{equa 3.10.2}\end{equation}
\item [(ii)] \begin{equation} E_{11}+E_{22}+\dots +E_{nn}=I;\label{equa 3.10.3}\end{equation}
\item [(iii)] for all $i_{1}, i_{2} \in \{ 1,2, \dots, n \}$ and $r \in \mathbb{N},$ \begin{equation} E_{i_{1}i_{2}}U_{s}a'_{r}(s)U_{s}^{*}=U_{s}a'_{r}(s)U_{s}^{*}E_{i_{1}i_{2}} ;\label{equa 3.10.4}\end{equation}
\item [(iv)] for all $i_{1}, i_{2} \in \{ 1,2, \dots, n \}$ and $g, h \in \{ 1,2, \dots, m \}$, \begin{equation} E_{i_{1}i_{2}}U_{s}a_{gh}(s)U_{s}^{*}=U_{s}a_{gh}(s)U_{s}^{*}E_{i_{1}i_{2}}; \label{equa 3.10.5}\end{equation}
\item [(v)] for all $i$ in $\{1,2, \dots, n\}$ and $j$ in $\{1,2, \dots, k\}$,
\begin{equation} \rho_{s}((U_{s}^{*}E_{ii}U_{s}a_{j}(s)-a_{j}(s)U_{s}^{*}E_{ii}U_{s})^{*}(U_{s}^{*}E_{ii}U_{s}a_{j}(s)-a_{j}(s)U_{s}^{*}E_{ii}U_{s})) < \epsilon. \label{equa 3.10.6}\end{equation} \end{enumerate}} We have the following claim.
{\bf Claim \ref{3.10}.1.} {\em The set $\eta$ is analytic.}
{\noindent Proof of Claim \ref{3.10}.1:} The maps $$\begin{aligned}
(E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{1}i_{2}},\\
(E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{2}i_{1}}^{*},\\
(E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{1}i_{2}}E_{i_{2}i_{3}},\\
(E_{11}, E_{12}, \dots, E_{nn}) &\to E_{11}+E_{22}+ \dots + E_{nn} \end{aligned} $$ are continuous from $\mathcal{C}$ (with the product topology) to $\mathcal{B}$ (with the $*$-strong operator topology). Therefore, we obtain that the maps $$\begin{aligned}
(s,E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{1}i_{2}},\\
(s,E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{2}i_{1}}^{*},\\
(s,E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{1}i_{2}}E_{i_{2}i_{3}},\\
(s,E_{11}, E_{12}, \dots, E_{nn}) &\to E_{11}+E_{22}+ \dots + E_{nn} \end{aligned} $$ are Borel measurable from $(X \setminus N) \times \mathcal{C}$ to $\mathcal{B}$ for all $1 \leq i_{1},i_{2},i_{3} \leq n$. From the fact that the maps (\ref{28}), (\ref{29}), (\ref{30}) and (\ref{31}) are Borel measurable from $(X \setminus N_{1}) \times \mathcal{B}$ to $B(K)$ and the map (\ref{32}) is Borel measurable from $(X \setminus N_{2}) \times \mathcal{B}$ to $\mathbb{C}$, it follows that the following maps $$\begin{aligned} (s, E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{1}i_{2}}U_{s}a'_{r}(s)U_{s}^{*},\\
(s, E_{11}, E_{12}, \dots, E_{nn}) &\to U_{s}a'_{r}(s)U_{s}^{*}E_{i_{1}i_{2}},\\
(s, E_{11}, E_{12}, \dots, E_{nn}) &\to E_{i_{1}i_{2}}U_{s}a_{gh}(s)U_{s}^{*},\\
(s, E_{11}, E_{12}, \dots, E_{nn}) &\to U_{s}a_{gh}(s)U_{s}^{*}E_{i_{1}i_{2}},\\
(s, E_{11}, E_{12}, \dots, E_{nn}) &\to \rho_{s}((U_{s}^{*}E_{ii}U_{s}a_{j}(s)-a_{j}(s)U_{s}^{*}E_{ii}U_{s})^{*}(U_{s}^{*}E_{ii}U_{s}a_{j}(s)-a_{j}(s)U_{s}^{*}E_{ii}U_{s})) \end{aligned} $$ are Borel measurable
when restricted to $(X \setminus N) \times \mathcal{C}$ for all $1 \leq i_{1}, i_{2}, i \leq n, 1 \leq g,h \leq m, r \in \mathbb{N},$ and $ 1 \leq j \leq k$.
Therefore $\eta$ is a Borel set. Thus $\eta$ is analytic by Theorem 14.3.5 in \cite{KR1}. This completes the proof of Claim \ref{3.10}.1.
{\bf Claim \ref{3.10}.2.} {\em Let $\pi$ be the projection of $X \times \mathcal{C}$ onto $X$. Then $\pi(\eta)= X \setminus N$.}
{\noindent Proof of Claim \ref{3.10}.2:} Notice that an element $(s, E_{11}, E_{12}, \dots, E_{nn})$ in $(X \setminus N) \times \mathcal{C}$ satisfies conditions (i) and (ii) if and only if $\{ E_{i_{1}i_{2}} \}_{i_{1},i_{2}=1}^{n}$ is a system of matrix units for a matrix algebra isomorphic to $M_{n}(\mathbb{C})$. Condition (iii) is equivalent to the condition that $U_{s}^{*}E_{i_{1}i_{2}}U_{s} \in \mathcal{M}_{s}$, and condition (iv) is equivalent to the condition that $U_{s}^{*}E_{i_{1}i_{2}}U_{s} \in \mathcal{A}'_{s}$.
By assumption, for each $s \in X$, $\mathcal{M}_{s}$ and $\mathcal{N}_{s}$ are type II$_{1}$ factors with Property $\Gamma$ and $\mathcal{M}_{s} \cong \mathcal{A}_{s} {\otimes} \mathcal{N}_{s}$. Thus $\mathcal{A}'_{s} \cap \mathcal{M}_{s} = \mathcal{N}_{s}$. It follows from the argument in the preceding paragraph that, for each $s \in X $, there exists a system of matrix units $\{ U_{s}^{*}E_{11}U_{s}, U_{s}^{*}E_{12}U_{s}, \dots, U_{s}^{*}E_{nn}U_{s} \}$ in $\mathcal{N}_{s}$ such that $(s, E_{11}, E_{12}, \dots, E_{nn})$ satisfies conditions (i), (ii), (iii), (iv) and (v). Therefore the image of $\eta$ under $\pi$ is exactly $X \setminus N$. This completes the proof of Claim \ref{3.10}.2.
\noindent (Continue the proof of Lemma \ref{3.10}) By Claim \ref{3.10}.1 and Claim \ref{3.10}.2, $\eta$ is analytic and $\pi(\eta)=X\setminus N$. By the measure-selection principle (Theorem 14.3.6 in \cite{KR1}), there is a measurable mapping $$s \to (E_{11,s}, E_{12,s}, \dots, E_{nn,s})$$ from $X \setminus N$ to $\mathcal{C}$ such that, for $s \in X \setminus N$ almost everywhere,
$(s, E_{11,s}, E_{12,s}, \dots, E_{nn,s})$ satisfies conditions (i), (ii), (iii), (iv) and (v) (see (\ref {equa 3.10.2}),
(\ref {equa 3.10.3}), (\ref {equa 3.10.4}), (\ref {equa 3.10.5}), and (\ref {equa 3.10.6})).
Defining $E_{i_{1}i_{2}, s}=0$ for $s \in N, 1 \leq i_{1}, i_{2} \leq n$, we get {\em a measurable map \begin{equation}s
\to (E_{11,s}, E_{12,s}, \dots, E_{nn,s})\label {equa 3.10.1} \end{equation} from $X$ to $\mathcal{C}$ such that, for $s \in X $ almost everywhere, $(s, E_{11,s}, E_{12,s}, \dots, E_{nn,s})$ satisfies conditions (i), (ii), (iii), (iv) and (v) (see (\ref {equa 3.10.2}), (\ref {equa 3.10.3}), (\ref {equa 3.10.4}), (\ref {equa 3.10.5}), and (\ref {equa 3.10.6})).}
From (\ref{equa 3.10.1}), for any $1 \leq i_{1}, i_{2} \leq n$ and any two vectors $x, y \in H$, it follows that $$\langle U_{s}^{*}E_{i_{1}i_{2},s}U_{s}x(s), y(s)\rangle=\langle E_{i_{1}i_{2},s}U_{s}x(s), U_{s}y(s) \rangle,$$ and the map $s \to \langle U_{s}^{*}E_{i_{1}i_{2},s}U_{s}x(s), y(s)\rangle$ is measurable. Since $$\vert \langle U_{s}^{*}E_{i_{1}i_{2}, s}U_{s}x(s), y(s)\rangle \vert \leq \Vert x(s) \Vert \Vert y(s) \Vert,$$ we know $s \to \langle U_{s}^{*}E_{i_{1}i_{2}, s}U_{s}x(s), y(s) \rangle$ is integrable. By Definition 14.1.1 in \cite{KR1}, it follows that \begin{eqnarray} U_{s}^{*}E_{i_{1}i_{2}, s}U_{s}x(s)=(p_{i_{1}i_{2}}x)(s) \label{33} \end{eqnarray} almost everywhere for some $p_{i_{1}i_{2}}x \in H$. From (\ref{33}), we have that \begin{eqnarray} p_{i_{1}i_{2}}(s)=U_{s}^{*}E_{i_{1}i_{2}, s}U_{s} \label{34} \end{eqnarray} for almost every $s \in X$; since $U_{s}^{*}E_{i_{1}i_{2}, s}U_{s} \in \mathcal{M}_{s}$ by condition (iii), it follows that $p_{i_{1}i_{2}} \in \mathcal{M}$. By condition (iv), $$U_{s}^{*}E_{i_{1}i_{2}, s}U_{s} \in \mathcal{A}_{s}'.$$ Hence \begin{eqnarray} p_{i_{1}i_{2}} \in \mathcal{A}' \cap \mathcal{M} = \mathcal{N}. \label{35} \end{eqnarray}
Since conditions (i) and (ii) together imply that $\{ E_{i_{1}i_{2}, s} \}_{i_{1},i_{2}=1}^{n}$ is a system of matrix units, by (\ref{34}), we obtain that $p_{11}(s), p_{22}(s), \dots, p_{nn}(s)$ are $n$ orthogonal equivalent projections in $\mathcal{M}_{s}$ with sum $I_{s}$ almost everywhere. Therefore (\ref{35}) implies that $p_{11}, p_{22}, \dots, p_{nn}$ are $n$ orthogonal equivalent projections in $\mathcal{N}$ with sum $I$. For each $i \in \{ 1, 2, \dots, n\}$, let $p_{i}=p_{ii}$. From condition (v), we conclude that $p_{1}, p_{2}, \dots, p_{n}$ is a family of mutually orthogonal equivalent projections in $\mathcal N$ with sum $I$ satisfying,
$\forall i=1,2, \dots, n, \forall j=1,2, \dots, k,$
$$\rho_{s}((p_{i}(s)a_{j}(s)-a_{j}(s)p_{i}(s))^{*}(p_{i}(s)a_{j}(s)-a_{j}(s)p_{i}(s))) < \epsilon \quad \text { for } s \in X \ \text{ almost everywhere}.$$
This ends the proof of the lemma. \end{proof}
A slight modification of the proof in Lemma \ref{3.10} gives us the next corollary.
\begin{corollary} \label{3.11} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra acting on a separable Hilbert space $H$. Let $m \in \mathbb{N}$ and $\mathcal{A}$ be a unital subalgebra of $\mathcal{M}$ such that $\mathcal{A} \cong M_{m}(\mathbb{C})$. Let $\mathcal{N}=\mathcal{A}' \cap \mathcal{M}$. Assume that $\mathcal{M}= \int_{X} \bigoplus \mathcal{M}_{s}d\mu$ and $H=\int_{X} \bigoplus H_{s} d\mu$ are the direct integral decompositions relative to the center $\mathcal{Z}$ of $\mathcal{M}$. Assume that $\mathcal{M}$ has Property $\Gamma$. Then, $\forall a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, $\forall n \in \mathbb{N}$ and $\forall \epsilon >0$, there exist a $\mu$-null subset $X_{0}$ of $X$ and a family of mutually orthogonal equivalent projections $\{p_{1}, p_{2}, \dots, p_{n}\}$ in $\mathcal{N}$ with sum $I$ such that
$$\Vert p_{i}(s)a_{j}(s)-a_{j}(s)p_{i}(s) \Vert_{2,s} < \epsilon , \qquad \forall i=1,2, \dots, n, \ \forall j=1,2, \dots, k \text { and } s \in X \setminus X_{0},$$ where $\Vert \cdot \Vert_{2,s}$ is the $2$-norm induced by the unique trace $\tau_{s}$ on $\mathcal{M}_{s}$. \end{corollary}
In \cite{P}, Popa proved that if $\mathcal{A}$ is a type II$_{1}$ factor with separable predual, then there is a hyperfinite subfactor $\mathcal{B}$ of $\mathcal{A}$ such that $\mathcal{B}' \cap \mathcal{A} = \mathbb{C} I$. The following lemma is essentially Theorem 8 in \cite{AR4}. The proof presented here is based on the direct integral theory for von Neumann algebras and is different from the one in \cite{AR4}.
\begin{lemma} (\cite{AR4})\label{3.12} If $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra acting on a separable Hilbert space $H$, then there is a hyperfinite type II$_{1}$ subfactor $\mathcal{R}$ of $\mathcal{M}$ such that $\mathcal{R}' \cap \mathcal{M}=\mathcal{Z}$, where $\mathcal Z$ is the center of $\mathcal{M}$. \end{lemma}
\begin{proof} By Lemma \ref{2.1}, $\mathcal{M}$ can be decomposed (relative to its center) as a direct integral $\int_X \bigoplus \mathcal{M}_{s} d \mu$ over a locally compact complete separable metric measure space $(X,\mu)$ and $\mathcal{M}_{s}$ is a type II$_{1}$ factor almost everywhere. In the following we assume that $\mathcal{M}_{s}$ is a type II$_{1}$ factor for every $s \in X$.
By Remark \ref{2.4}, we can obtain a separable Hilbert space $K$ and a family of unitaries $\{ U_{s}:H_{s} \to K; s \in X \}$ such that the maps $s \to U_{s}x(s)$ and $s \to U_{s}a(s)U_{s}^{*}$ are measurable for any $x \in H$ and any decomposable $a \in B(H)$. Let $\mathcal{B}$ be the unit ball of $B(K)$ with the $*$-strong operator topology. We observe that $\mathcal{B}$ is metrizable by setting $d(S,T)=\sum_{j=1}^{\infty} 2^{-j}(\Vert (S-T)e_{j} \Vert +\Vert (S^{*}-T^{*})e_{j} \Vert)$ for any $S, T \in \mathcal{B}$, where $\{ e_{j}: j \in \mathbb{N}\}$ is an orthonormal basis of $K$. Moreover, $(\mathcal{B},d)$ is a complete separable metric space. Let $\mathcal{C}=\mathcal{B} \times \mathcal{B}$ equipped with the product topology. It follows that $\mathcal{C}$ is a complete separable metric space.
Let $\{ a_{j}': j \in \mathbb{N} \}$ be a $SOT$ dense subset of the unit ball $(\mathcal{M}')_{1}$. By Lemma 14.1.24, we may assume that $(\mathcal{M}')_{s}=(\mathcal{M}_{s})'$ for every $s \in X$ and we use the notation $\mathcal{M}'_{s}$ for both. By Remark \ref{3.6}, we may assume further that $\{ a'_{j}(s) : j \in \mathbb{N} \}$ is $SOT$ dense in $(\mathcal{M}'_{s})_{1}$ for every $s \in X$. Let $\{ y_{j}: j \in \mathbb{N} \}$ be a countable dense subset in $H$. By Lemma 14.1.3 in \cite{KR1}, the Hilbert space generated by $\{ y_{j}(s): j \in \mathbb{N} \}$ is $H_{s}$ for almost every $s \in X$. Replacing $\{ y_{j}: j \in \mathbb{N} \}$ by the set of all finite rational-linear combinations of vectors in $\{ y_{j}: j \in \mathbb{N} \}$ if necessary, in the following we assume that $\{ y_{j}(s): j \in \mathbb{N} \}$ is dense in $H_{s}$ for every $s \in X$.
Fix an irrational number $\theta \in (0,1)$. We denote by $(s,W,V)$ an element in $ X\times \mathcal{B} \times \mathcal{B} = X \times \mathcal{C}$.
The maps $W \to WW^{*}$, $W \to W^{*}W$, $V \to VV^{*}$, $V \to V^{*}V$ are $*$-$SOT$ continuous from $\mathcal{B}$ to $\mathcal{B}$. The maps $(W, V) \to WV, (W, V) \to e^{2\pi i \theta} VW$ are continuous from $\mathcal{C}$ with the product topology to $\mathcal{B}$ with the $*$-strong operator topology. Therefore the maps \begin{align} (s, W, V) &\to WW^{*}, \label{36} \\ (s, W, V) &\to W^{*}W, \label{37} \\ (s, W, V) &\to VV^{*}, \label{38} \\ (s, W, V) &\to V^{*}V, \label{39} \\ (s, W, V) &\to WV, \label{40} \\ (s, W, V) &\to e^{2\pi i \theta} VW \label{41} \end{align} are Borel measurable from $X \times \mathcal{C}$ to $\mathcal{B}$. By Remark \ref{2.4}, the maps $$s \to U_{s}a'_{j}(s)U_{s}^{*}$$ from $X$ to $B(K)$ and $$s \to U_{s}y_{j}(s)$$ from $X$ to $K$ are all measurable for each $j \in \mathbb{N}$.
Let $$ \Bbb Q\langle X, Y, Z_1, Z_2,\ldots \rangle $$ be the collection of all $*$-polynomials in the noncommuting indeterminates $X, Y, Z_1, Z_2, \ldots$ with rational coefficients. It is a countable set. By Lemma 14.3.1 in \cite{KR1}, there exists a Borel $\mu$-null subset $N$ of $X$ such that, $\forall j, j_{1}, j_{2} \in \mathbb{N}$, $\forall f\in \Bbb Q\langle X, Y, Z_1, Z_2,\ldots \rangle $, the maps \begin{align} (s, W ,V) &\to WU_{s}a'_{j}(s)U_{s}^{*}, \label{42} \\ (s, W, V) &\to U_{s}a'_{j}(s)U_{s}^{*}W, \label{43} \\ (s, W ,V) &\to VU_{s}a'_{j}(s)U_{s}^{*}, \label{44} \\ (s, W, V) &\to U_{s}a'_{j}(s)U_{s}^{*}V \label{45} \end{align} are Borel measurable from $(X \setminus N) \times \mathcal{C}$ to $\mathcal{B}$ and the map \begin{eqnarray} (s, W, V) \to \Vert f(W,V, \{ U_{s}a'_{j}(s)U_{s}^{*}: j \in \mathbb{N} \} )U_{s}y_{j_{1}}(s)-U_{s}y_{j_{2}}(s) \Vert \label{46} \end{eqnarray} is Borel measurable from $(X \setminus N) \times \mathcal{C}$ to $\mathbb{C}$.
Now we introduce the set $\eta$ as follows.
{\em Let $\eta$ be the collection of all these elements $ (s, W, V) \in (X \setminus N) \times \mathcal{C}$ satisfying \begin{enumerate} \item [(i)] $WW^{*}=W^{*}W=VV^{*}=V^{*}V=I$, where $I$ is the identity in $B(K)$;
\item [(ii)] $WU_{s}a'_{j}(s)U_{s}^{*}=U_{s}a'_{j}(s)U_{s}^{*}W$ and $VU_{s}a'_{j}(s)U_{s}^{*}=U_{s}a'_{j}(s)U_{s}^{*}V$ for every $j \in \mathbb{N}$;
\item [(iii)] $WV=e^{2\pi i \theta} VW$;
\item [(iv)] for all $k, j_{1}, j_{2} \in \mathbb{N}$, there exists an $f$ in $\Bbb Q\langle X, Y, Z_1, Z_2,\ldots \rangle $ such that
$$\Vert f(W,V, \{ U_{s}a'_{j}(s)U_{s}^{*}: j \in \mathbb{N} \} )U_{s}y_{j_{1}}(s)-U_{s}y_{j_{2}}(s) \Vert < 1/k.$$ \end{enumerate}}
{\bf Claim \ref{3.12}.1.} {\em The set $\eta$ is analytic.}
{\noindent Proof of Claim \ref{3.12}.1:} Since the maps (\ref{36})-(\ref{46}) are all Borel measurable when restricted to $(X\setminus N) \times \mathcal{C}$, $\eta$ is a Borel set. By Theorem 14.3.5 in \cite{KR1}, $\eta$ is analytic. This completes the proof of Claim \ref{3.12}.1.
{\bf Claim \ref{3.12}.2.} {\em Let $\pi$ be the projection of $X \times \mathcal{C}$ onto $X$. Then $\pi(\eta) = X \setminus N$.}
{\noindent Proof of Claim \ref{3.12}.2:} We observe that an element $(s, W, V)$ satisfies conditions (i), (ii) and (iii) if and only if $U_{s}^{*}WU_{s}$ and $U_{s}^{*}VU_{s}$ are two unitaries in $\mathcal{M}_{s}$ such that $(U_{s}^{*}WU_{s})(U_{s}^{*}VU_{s})=e^{2 \pi i \theta}(U_{s}^{*}VU_{s})(U_{s}^{*}WU_{s})$. Since $\{ y_{j}(s): j \in \mathbb{N} \}$ is dense in $H_{s}$ for every $s \in X$, condition (iv) is equivalent to the condition that the von Neumann algebra generated by $\{U_{s}^{*}WU_{s}, U_{s}^{*}VU_{s} \} \cup \{ a'_{j}(s): j \in \mathbb{N} \}$ is $B(H_{s})$.
For each $s \in X$, $\mathcal{M}_{s}$ is a type II$_{1}$ factor with separable predual. By Popa's result in \cite{P}, there exists a type II$_{1}$ hyperfinite subfactor $\mathcal{R}^{(s)}$ of $\mathcal{M}_{s}$ such that $(\mathcal{R}^{(s)})' \cap \mathcal{M}_{s}=\mathbb{C} I_{s}$. Notice that a hyperfinite II$_1$ factor always contains an irrational rotation C$^*$-algebra as a SOT dense subalgebra. Combining this with the argument in the preceding paragraph, we know that there exist two unitaries $U_{s}^{*}WU_{s}$ and $U_{s}^{*}VU_{s}$ in $\mathcal{R}^{(s)}$ (where $W, V$ are unitaries in $\mathcal{B}$) such that they generate $\mathcal{R}^{(s)}$ as a von Neumann algebra and $(s,W,V)$ satisfies conditions (i), (ii) and (iii). The condition $(\mathcal{R}^{(s)})' \cap \mathcal{M}_{s}=\mathbb{C} I_{s}$ is equivalent to the condition that the von Neumann algebra generated by $\mathcal{R}^{(s)} \cup \mathcal{M}'_{s}$ is $B(H_{s})$. Since $U_{s}^{*}WU_{s}$ and $U_{s}^{*}VU_{s}$ generate $\mathcal{R}^{(s)}$ as a von Neumann algebra and $\{ a'_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball of $\mathcal{M}'_{s}$, the von Neumann algebra $W^{*}(U_{s}^{*}WU_{s}, U_{s}^{*}VU_{s}, \{ a'_{j}(s): j \in \mathbb{N} \})$ generated by $U_{s}^{*}WU_{s}, U_{s}^{*}VU_{s}$ and $\{ a'_{j}(s): j \in \mathbb{N} \}$ is $B(H_{s})$. Hence, from the argument in the preceding paragraph, it follows that $(s,W,V)$ satisfies condition (iv). Therefore the image of $\eta$ under $\pi$ is $X \setminus N$. This completes the proof of Claim \ref{3.12}.2.
\noindent(Continuation of the proof of Lemma \ref{3.12}.) By Claim \ref{3.12}.1 and Claim \ref{3.12}.2, we know that $\eta$ is analytic and $\pi(\eta)=X \setminus N$. By the measure-selection principle (Theorem 14.3.6 in \cite{KR1}), there is a measurable map $s \to (W_{s}, V_{s})$ from $X \setminus N$ to $\mathcal{C}$ such that $(s, W_{s}, V_{s})$ satisfies conditions (i), (ii), (iii) and (iv) for $s \in X \setminus N$ almost everywhere. Defining $W_{s}=V_{s}=0$ for every $s \in N$, we get {\em a measurable map \begin{equation} s \to (W_{s}, V_{s})\label {3.12.1}\end{equation} from $X$ to $\mathcal{C}$ such that $(s, W_{s}, V_{s})$ satisfies conditions (i), (ii), (iii) and (iv) for $s \in X $ almost everywhere.}
For any two vectors $x, y \in H$, we have \begin{equation} \langle U_{s}^{*}W_{s}U_{s}x(s), y(s) \rangle=\langle W_{s}U_{s}x(s), U_{s}y(s) \rangle.\label {3.12.2}\end{equation} Combining (\ref{3.12.2}) with (\ref{3.12.1}), we know the map $$s \to \langle U_{s}^{*}W_{s}U_{s}x(s), y(s) \rangle$$ from $X$ to $\mathbb{C}$
is measurable. Since $$\vert \langle U_{s}^{*}W_{s}U_{s}x(s), y(s) \rangle \vert \leq \Vert x(s) \Vert \Vert y(s) \Vert,$$ we obtain that $$s \to \langle U_{s}^{*}W_{s}U_{s}x(s), y(s) \rangle$$ is integrable. By Definition 14.1.1 in \cite{KR1}, it follows that $$U_{s}^{*}W_{s}U_{s}x(s)=(\bar{W} x)(s)$$ almost everywhere for some $\bar{W} x \in H$. Therefore \begin{eqnarray} \bar{W}(s)=U_{s}^{*}W_{s}U_{s} \label{47} \end{eqnarray} for almost every $s \in X$. Since conditions (i) and (ii) imply that $U_{s}^{*}W_{s}U_{s}$ is a unitary in $\mathcal M_s$, we obtain from equation (\ref{47}) that $\bar{W}$ is a unitary in $\mathcal{M}$. Similarly we can find another unitary $\bar{V}$ in $\mathcal{M}$ such that \begin{eqnarray}\bar{V} (s)=U_{s}^{*}V_{s}U_{s} \label{47.2} \end{eqnarray} for almost every $s \in X$ and thus, from condition (iii), $$\bar{W}(s) \bar{V}(s)=e^{2 \pi i \theta} \bar{V}(s) \bar{W}(s)$$ for almost every $s \in X$. Therefore \begin{eqnarray}\bar{W} \bar{V}=e^{2 \pi i \theta} \bar{V} \bar{W}.\label{47.1} \end{eqnarray}
Let $\mathcal{R}^{(s)}$ be the von Neumann subalgebra of $\mathcal M_s$ generated by $U_{s}^{*}W_{s}U_{s}$ and $U_{s}^{*}V_{s}U_{s}$. From condition (iv), we know that $(\mathcal{R}^{(s)})'\cap \mathcal M_s=\Bbb C I_s$ for $s\in X$ almost everywhere.
Let $\mathcal{R}$ be the von Neumann subalgebra of $\mathcal{M}$ generated by the two unitaries $\bar{W}$ and $\bar{V}$. From (\ref{47}), (\ref{47.2}) and (\ref{47.1}), it follows that $\mathcal{R}$ is a hyperfinite type II$_{1}$ factor and $\mathcal{R}_{s}=\mathcal{R}^{(s)}$ for almost every $s \in X$.
To complete the proof, we just need to show that $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$. Suppose $a \in \mathcal{R}' \cap \mathcal{M}$. Then $a(s) \in \mathcal{R}'_{s} \cap \mathcal{M}_{s}$ for almost every $s \in X$. Since $(\mathcal{R}^{(s)})' \cap \mathcal{M}_{s}=\mathbb{C} I_{s}$ and $\mathcal{R}_{s}=\mathcal{R}^{(s)}$ for almost every $s \in X$, $a(s)=c_{s}I_{s}$ for almost every $s \in X$ and thus $a \in \mathcal{Z}$. Hence $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$. \end{proof}
The following result generalizes Theorem 5.4 in \cite{EFAR2} to the setting of von Neumann algebras. The proof follows the same lines as the proof of Theorem 5.4 in \cite{EFAR2}.
\begin{theorem} \label{3.13} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual and $\mathcal{Z}$ be the center of $\mathcal{M}$. Let $\rho$ be a faithful normal tracial state on $\mathcal{M}$ and $\Vert \cdot \Vert_{2}$ be the $2$-norm on $\mathcal{M}$ induced by $\rho$. If $\mathcal{M}$ has Property $\Gamma$, then there exists a hyperfinite type II$_{1}$ subfactor $\mathcal{R}$ of $\mathcal{M}$ such that \begin{enumerate} \item [(I)] $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$; \item [(II)] for any $n \in \mathbb{N}$ and any elements $a_{1}, a_{2}, \dots, a_{k}$ in $\mathcal{M}$, there exists a countable collection of projections $\{ p_{1t}, p_{2t}, \dots, p_{nt} : t \in \mathbb{N} \}$ in $\mathcal{R}$ such that \begin{enumerate} \item [(i)] for each $t \in \mathbb{N}$, $p_{1t}, p_{2t}, \dots, p_{nt}$ are $n$ orthogonal equivalent projections in $\mathcal{R}$ with sum $I$;
\item [(ii)] $\lim\limits_{t \to \infty} \Vert p_{it}a_{j}-a_{j}p_{it} \Vert_{2} =0$ for any $i=1, 2,\dots , n; j=1, 2, \dots, k$. \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} Since $\mathcal{M}$ has separable predual, by Proposition A.2.1 in \cite{JS}, there is a faithful normal representation $\pi$ of $\mathcal{M}$ on a separable Hilbert space. Replacing $\mathcal{M}$ by $\pi({\mathcal{M}})$ and $\rho$ by $\rho \circ \pi^{-1}$, we may assume that $\mathcal{M}$ is acting on a separable Hilbert space $H$.
By Lemma \ref{2.1}, there are direct integral decompositions $\mathcal{M}=\int_{X} \bigoplus \mathcal{M}_{s} d \mu $ and $H=\int_{X} H_{s} d \mu$ of $(\mathcal{M}, H)$ relative to $\mathcal{Z}$ over $(X, \mu)$, where $\mathcal{M}_{s}$ is a type II$_{1}$ factor for almost every $s \in X$. We assume that every $\mathcal{M}_{s}$ is a type II$_{1}$ factor. Notice that $\rho$ is a faithful, normal, tracial state on $\mathcal M$. From Lemma \ref{2.2}, we may assume that there is a positive, faithful, normal, tracial linear functional $ \rho_s $ on $\mathcal M_s$ for every $s\in X$ such that $$ \rho(a)=\int_X \rho_s(a(s)) d\mu, \qquad \forall a\in \mathcal M. $$
Let $\{\phi_{i}: i \in \mathbb{N} \}$ be a sequence of normal states on $\mathcal{M}$ that is norm dense in the set of all
normal states on $\mathcal{M}$. Let $\{ b_{j}: j \in \mathbb{N} \}$ be a sequence of elements that is $SOT$ dense in the unit ball
$(\mathcal{M})_{1}$ of $\mathcal{M}$. By Remark \ref{3.6}, we may assume that $\{b_{j}(s): j \in \mathbb{N} \}$ is $SOT$ dense in the
unit ball $(\mathcal{M}_{s})_{1}$ of $\mathcal{M}_{s}$ for every $s \in X$.
Let $\tau$ be the unique center-valued trace on $\mathcal{M}$ such that $\tau(a)=a$ for all $a\in \mathcal Z$ (see Theorem 8.2.8 in \cite{KR1}).
We will show that {\em there is an increasing sequence $\{ \mathcal{A}_{t}: t \in \mathbb{N} \}$ of full matricial algebras in $\mathcal M$ satisfying, for all $t\in \Bbb N$, \begin{enumerate} \item [(a)] there exists a $\mu$-null subset $N_{t}$ of $X$ such that, for each $1 \leq l \leq t$, there exist $l$ equivalent orthogonal projections $p_{1}, p_{2}, \dots, p_{l}$ in $\mathcal{A}_{t}$ with sum $I$ satisfying
$$\rho_{s}((p_{i}(s)b_{j}(s)-b_{j}(s)p_{i}(s))^{*}(p_{i}(s)b_{j}(s)-b_{j}(s)p_{i}(s)))<1/t$$
for any $i=1, 2, \dots, l; j=1,2, \dots , t; s \in X \setminus N_{t};$ \item [(b)] let $\mathcal{U}_{t}$ be the unitary group of $\mathcal{A}_{t}$ and $d \mu_{t}$ be the normalized Haar measure on $\mathcal{U}_{t}$. Then for any $i, j =1, 2, \dots, t$,
$$\vert \phi_{i}(\int_{\mathcal{U}_{t}} ub_{j}u^{*} d\mu_{t} -\tau(b_{j})) \vert < 1/t.$$\end{enumerate}}
First, we observe that conditions (a) and (b) are satisfied by letting $\mathcal{A}_{1}=\mathbb{C}1$ and $p_{1}=I$. Now suppose $\mathcal{A}_{t-1}$ has been constructed. Take $\mathcal{N}_{1}=\mathcal{A}_{t-1}' \cap \mathcal{M}$. By Lemma 11.4.11 in \cite{KR1}, $\mathcal{M} \cong \mathcal{A}_{t-1} {\otimes} \mathcal{N}_{1}$.
Next, in order to construct $\mathcal A_t$, we will apply Lemma \ref{3.10} $t-1$ times. In the first step, applying Lemma \ref{3.10} to $\mathcal A_{t-1}$ and the set $\{ b_{1}, b_{2}, \dots, b_{t} \}$, we obtain two equivalent orthogonal projections $p_{1,1}, p_{2,1}$ in $\mathcal{N}_{1}$ with sum $I$ and a $\mu$-null subset $N_{t,1}$ of $X$ such that \begin{eqnarray} \rho_{s}((p_{i,1}(s)b_{j}(s)-b_{j}(s)p_{i,1}(s))^{*}(p_{i,1}(s)b_{j}(s)-b_{j}(s)p_{i,1}(s)))<1/t \label{48} \end{eqnarray} for any $i=1, 2$, $j=1, 2, \dots, t$, and $s \in X \setminus N_{t,1}$. Note that $p_{1,1}, p_{2,1}$ are two equivalent orthogonal projections
in $\mathcal{N}_{1}$ with sum $I$. There is a unital subalgebra $\mathcal{B}_{t,1}$ of $\mathcal{N}_{1}$ such that $\mathcal{B}_{t,1} \cong M_{2}(\mathbb{C})$ and $p_{1,1}, p_{2,1} \in \mathcal{B}_{t,1}$. Take \begin{eqnarray} \mathcal{A}_{t,1}=\mathcal{A}_{t-1} {\otimes} \mathcal{B}_{t,1}. \label{49} \end{eqnarray} Now suppose that $\mathcal{A}_{t,l-1}$ has been constructed for some $2\le l \leq t-1$. By applying Lemma \ref{3.10} to $\mathcal{A}_{t,l-1}$ and $\{ b_{1}, b_{2}, \dots, b_{t} \}$, we can find $l+1$ equivalent orthogonal projections $p_{1,l}, p_{2,l}, \dots, p_{l+1, l}$ in $\mathcal{A}_{t, l-1}' \cap \mathcal{M}$ with sum $I$ and a $\mu$-null subset $N_{t, l}$ of $X$ such that \begin{eqnarray} \rho_{s}((p_{i,l}(s)b_{j}(s)-b_{j}(s)p_{i,l}(s))^{*}(p_{i,l}(s)b_{j}(s)-b_{j}(s)p_{i,l}(s)))<1/t \label{50} \end{eqnarray} for any $i=1, 2, \dots, l+1$, $j=1, 2, \dots, t$, and $s \in X \setminus N_{t, l}$. Again there is a unital subalgebra $\mathcal{B}_{t, l}$ of $\mathcal{A}_{t, l-1}' \cap \mathcal{M}$ such that $\mathcal{B}_{t, l} \cong M_{l+1}(\mathbb{C})$ and $p_{1,l}, p_{2,l}, \dots, p_{l+1,l} \in \mathcal{B}_{t, l}$. Take \begin{eqnarray} \mathcal{A}_{t, l}=\mathcal{A}_{t, l-1} {\otimes} \mathcal{B}_{t, l}. \label{51} \end{eqnarray} Now we let \begin{eqnarray} \mathcal{B}_{t}=\mathcal{A}_{t, t-1} \label{52} \end{eqnarray} and \begin{eqnarray} N_{t}=\cup_{l=1}^{t-1} N_{t, l}. \label{53} \end{eqnarray} Then $\mu(N_{t})=0$. By (\ref{48}), (\ref{49}), (\ref{50}), (\ref{51}), (\ref{52}) and (\ref{53}), $\mathcal{B}_{t}$ contains sets of projections satisfying condition (a).
Let $\mathcal{N}=\mathcal{B}_{t}' \cap \mathcal{M}$. By Lemma 11.4.11 in \cite{KR1}, we know that $\mathcal{M} \cong \mathcal{B}_{t} {\otimes} \mathcal{N}$. By the arguments in Section 11.2 in \cite{KR1}, $\mathcal{N}$ is a type II$_{1}$ von Neumann algebra, and therefore, by Lemma \ref{3.12}, there is a hyperfinite subfactor $\mathcal{S}$ of $\mathcal{N}$ such that $\mathcal{S}' \cap \mathcal{N}=\mathcal{Z}_{\mathcal{N}}$, where $\mathcal{Z}_{\mathcal{N}}$ is the center of $\mathcal{N}$. Hence $(\mathcal{B}_{t} {\otimes} \mathcal{S})' \cap \mathcal{M}=\mathbb{C}I {\otimes} \mathcal{Z}_{\mathcal{N}}=\mathcal{Z}$. Since $\mathcal{S}$ is a hyperfinite type II$_{1}$ factor, there exists an increasing sequence $\{ \mathcal{F}_{r}: r \in \mathbb{N} \}$ of matrix subalgebras of $\mathcal{S}$ whose union is ultraweakly dense in $\mathcal{S}$ and thus $\cup_{r \in \mathbb{N}} \mathcal{B}_{t} {\otimes} \mathcal{F}_{r}$ is ultraweakly dense in $\mathcal{B}_{t} {\otimes} \mathcal{S}$. Let $\mathcal{V}_{r}$ be the unitary group of $\mathcal{B}_{t} {\otimes} \mathcal{F}_{r}$ with normalized Haar measure $d \nu_{r}$. Since $(\mathcal{B}_{t} {\otimes} \mathcal{S})' \cap \mathcal{M} = \mathcal{Z}$ and $\tau$ is a center-valued trace on $\mathcal M$ such that $\tau(a)=a$ for all $a\in \mathcal Z$, Lemma 5.4.4 in \cite{AR3} shows that $\tau(a)= \lim\limits_{r \to \infty} \int_{\mathcal{V}_{r}} vav^{*} d \nu_{r}$ ultraweakly for all $a \in \mathcal{M}$. Since each $\phi_{i}$ is normal, there exists $r$ large enough such that \begin{eqnarray} \vert \phi_{i}(\int_{\mathcal{V}_{r}}vb_{j}v^{*}d \nu_{r}-\tau (b_{j})) \vert < 1/t, \forall i, j=1, 2, \dots, t. \label{54} \end{eqnarray} Now we let $$\mathcal{A}_{t}=\mathcal{B}_{t} {\otimes} \mathcal{F}_{r}.$$ Then $\mathcal{A}_{t}$ satisfies both conditions (a) and (b). The construction is finished.
Let $\mathcal{R} \subset \mathcal{M}$ be the ultraweak closure of $\cup_{t \in \mathbb{N} } \mathcal{A}_{t}$. It follows that $\mathcal{R}$ is a finite von Neumann algebra containing an ultraweakly dense matricial C$^*$-algebra. By Corollary 12.1.3 in \cite{KR1}, $\mathcal{R}$ is a hyperfinite type II$_{1}$ subfactor of $\mathcal{M}$.
Now fix $n \in \mathbb{N}$, $\epsilon >0$ and elements $a_{1}, a_{2}, \dots, a_{k}$ in $\mathcal{M}$. We may assume that $\Vert a_{l} \Vert \leq 1$ for any $1 \leq l \leq k$. Since $\{b_{j}: j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball of $\mathcal{M}$, there exist elements $b_{j_{1}}, b_{j_{2}}, \dots, b_{j_{k}}$ such that \begin{eqnarray} \Vert a_{l}-b_{j_{l}} \Vert_{2} < \epsilon /3 \label{55} \end{eqnarray} for any $1 \leq l \leq k$. For each integer $t > \max \{ n, j_{1}, j_{2}, \dots, j_{k} \}$, by condition (a), there exist a $\mu$-null subset $N_{t}$ of $X$ and a set of $n$ orthogonal equivalent projections $\{ p_{1t}, p_{2t}, \dots, p_{nt}\}$ in $\mathcal{A}_{t}$ such that \begin{eqnarray} \rho_{s} ((p_{it}(s)b_{j_{l}}(s)-b_{j_{l}}(s)p_{it}(s))^{*}(p_{it}(s)b_{j_{l}}(s)-b_{j_{l}}(s)p_{it}(s))) < 1/t \label{56} \end{eqnarray} for all $i\in \{1, 2, \dots, n\}$, $ l\in \{1,2, \dots , k\}$, and $s \in X \setminus N_{t}.$
Take $N=\cup_{t \in \mathbb{N}} N_{t}$. Then $\mu (N)=0$ and inequality (\ref{56}) implies \begin{eqnarray} \lim\limits_{t \to \infty} \rho_{s} ((p_{it}(s)b_{j_{l}}(s)-b_{j_{l}}(s)p_{it}(s))^{*}(p_{it}(s)b_{j_{l}}(s)-b_{j_{l}}(s)p_{it}(s))) = 0 \label{57} \end{eqnarray} for all $i\in \{1, 2, \dots ,n\},$ $l\in \{1, 2, \dots, k\}$, and $ s \in X \setminus N$. For any fixed $i \in \{ 1, 2, \dots ,n \}, l \in \{ 1, 2, \dots, k \}$, define the function $f_{t}: X \to \mathbb{C}$ by $$f_{t}(s)=\rho_{s} ((p_{it}(s)b_{j_{l}}(s)-b_{j_{l}}(s)p_{it}(s))^{*}(p_{it}(s)b_{j_{l}}(s)-b_{j_{l}}(s)p_{it}(s))).$$ Then $\vert f_{t}(s) \vert \leq \rho_{s}(4I_{s})$ for almost every $s \in X$. Since $$\int_{X} \rho_{s}(4I_{s}) d \mu = \rho (4I)=4,$$ by the Dominated Convergence Theorem, (\ref{57}) gives $$\lim\limits_{t \to \infty} \rho ((p_{it}b_{j_{l}}-b_{j_{l}}p_{it})^{*}(p_{it}b_{j_{l}}-b_{j_{l}}p_{it})) = 0$$ for all $i\in \{1, 2, \dots, n\}$, $ l\in \{1,2, \dots , k\}$. Hence there exists $t_{0} \in \mathbb{N}$ such that \begin{eqnarray} \Vert p_{it}b_{j_{l}}-b_{j_{l}}p_{it} \Vert_{2} < \epsilon /3 \label{58} \end{eqnarray} for all $i\in \{1, 2, \dots, n\}$, $ l\in \{1,2, \dots , k\}$ and $ t>t_{0}$.
Therefore for any $t>t_{0}$, it follows from (\ref{55}) and (\ref{58}) that \begin{eqnarray*} \Vert p_{it}a_{l}-a_{l}p_{it} \Vert_{2} &\leq& \Vert p_{it}b_{j_{l}}-b_{j_{l}}p_{it} \Vert_{2} + \Vert p_{it}(a_{l}-b_{j_{l}})-(a_{l}-b_{j_{l}})p_{it} \Vert_{2} \\ &\leq& \Vert p_{it}b_{j_{l}}-b_{j_{l}}p_{it} \Vert_{2} + 2 \Vert a_{l}-b_{j_{l}} \Vert_{2}\\ &<& \epsilon \end{eqnarray*} for all $i\in \{1, 2, \dots, n\}$, $ l\in \{1,2, \dots , k\}$. Hence $\lim\limits_{t \to \infty} \Vert p_{it}a_{l}-a_{l}p_{it} \Vert_{2} = 0$ for all $i\in \{1, 2, \dots, n\}$, $ l\in \{1,2, \dots , k\}$.
It remains to show that $\mathcal{R}' \cap \mathcal{M}=\mathcal{Z}$. Suppose that $a \in \mathcal{R}' \cap \mathcal{M}$ and $\Vert a \Vert =1$. Since the sequence $\{ b_{j}: j \in \mathbb{N} \}$ is $SOT$ dense in the unit ball of $\mathcal{M}$, we can choose a subsequence $\{ b_{j_{l}}: l \in \mathbb{N} \}$ that converges to $a$ in the strong operator topology. Therefore this subsequence converges to $a$ ultraweakly. By the fact that $\tau$ is ultraweakly continuous, $\lim\limits_{l \to \infty} \tau (b_{j_{l}}) = \tau (a)$ ultraweakly. Since $a \in \mathcal{R}'$, for each $i \in \mathbb{N}$, \begin{eqnarray*} \vert \phi_{i}(\int_{\mathcal{U}_{j_{l}}} ub_{j_{l}}u^{*} d \mu_{j_{l}}-a) \vert &=& \vert \phi_{i}(\int_{\mathcal{U}_{j_{l}}} u(b_{j_{l}}-a)u^{*} d \mu_{j_{l}}) \vert\\ &\leq& (\phi_{i}((b_{j_{l}}-a)^{*}(b_{j_{l}}-a)))^{1/2} \\ &\to& 0.\\ \end{eqnarray*} From the fact that the sequence $\{ \phi_{i} : i \in \mathbb{N} \}$ is norm dense in the set of normal states on $\mathcal{M}$, we get that $\int_{\mathcal{U}_{j_{l}}} ub_{j_{l}}u^{*} d \mu_{j_{l}}$ converges to $a$ ultraweakly. By condition (b), $\int_{\mathcal{U}_{j_{l}}} ub_{j_{l}}u^{*} d \mu_{j_{l}}$ converges to $\tau(a)$ ultraweakly. Therefore $a=\tau (a)$ and thus $a \in \mathcal{Z}$. Hence $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$. The proof is complete. \end{proof}
\section{Necessary inequalities} Suppose $\mathcal{M}$ is a von Neumann algebra and $\mathcal{N}$ is a von Neumann subalgebra of $\mathcal{M}$. A map $\phi: \mathcal{M}^{k} \to B(H)$ is called $\mathcal{N}$-multimodular if,
for any $s \in \mathcal{N}$ and any $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, $$s\phi (a_{1}, a_{2}, \dots , a_{k})=\phi (sa_{1}, a_{2}, \dots , a_{k}),$$ $$\phi (a_{1}, a_{2}, \dots , a_{k})s=\phi (a_{1}, a_{2}, \dots , a_{k}s),$$ $$\phi(a_{1}, a_{2}, \dots , a_{i}s, a_{i+1}, \dots , a_{k})=\phi (a_{1}, a_{2}, \dots , a_{i}, sa_{i+1}, \dots , a_{k}).$$ For any $n \in \mathbb{N}$, the $n$-fold amplification $\phi^{(n)} : (M_{n}(\mathcal{M}))^{k} \to M_{n}(B(H))$ of a bounded map $\phi : \mathcal{M}^{k} \to B(H)$ is defined in \cite{EA1} and \cite{EA2} as follows: for elements $(a_{ij}^{(1)}), (a_{ij}^{(2)}), \dots , (a_{ij}^{(k)})$ in $M_{n}(\mathcal{M})$, the $(i,j)$ entry of $\phi^{(n)} ((a_{ij}^{(1)}), (a_{ij}^{(2)}), \dots , (a_{ij}^{(k)}))$ is $$\sum\limits_{1 \leq j_{1}, j_{2}, \dots , j_{k-1} \leq n} \phi (a_{ij_{1}}^{(1)}, a_{j_{1}j_{2}}^{(2)}, \dots , a_{j_{k-2}j_{k-1}}^{(k-1)}, a_{j_{k-1}j}^{(k)}).$$ A bounded map $\phi$ is said to be completely bounded if $\sup\limits_{n \in \mathbb{N}} \Vert \phi^{(n)} \Vert < \infty$. When $\phi$ is completely bounded, we write $\Vert \phi \Vert_{cb} =\sup\limits_{n \in \mathbb{N}} \Vert \phi^{(n)} \Vert$.
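To illustrate the amplification, when $k=2$ the $(i,j)$ entry of $\phi^{(n)}((a_{ij}), (b_{ij}))$ is simply $\sum_{l=1}^{n} \phi(a_{il}, b_{lj})$, in analogy with the formula for the product of two matrices.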
Let $\{ e_{ij} \}_{i,j=1}^{n}$ be the standard matrix units for $M_{n}(\mathbb{C})$. Then $$\Vert \phi^{(n)} (e_{11}a_{1}e_{11}, e_{11}a_{2}e_{11}, \dots , e_{11}a_{k}e_{11}) \Vert \leq \Vert \phi \Vert \Vert a_{1} \Vert \dots \Vert a_{k} \Vert$$ for any $a_{1}, a_{2}, \dots , a_{k}$ in $M_{n}(\mathcal{M})$.
If $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra and $n$ is a positive integer, $M_{n}(\mathcal{M})$ is also a type II$_{1}$ von Neumann algebra. In the rest of this section, we let $\tau_{n}$ be the center-valued trace on $M_{n}(\mathcal{M})$ such that $\tau_{n}(a)=a$ for any $a$ in the center of $M_{n}(\mathcal{M})$ (see Theorem 8.2.8 in \cite{KR1}). Let \begin{eqnarray} \gamma_{n}(a)=(\Vert a \Vert ^{2} + n \Vert \tau_{n} (a^{*} a) \Vert )^{1/2} \label{59} \end{eqnarray} for each $a \in M_{n}(\mathcal{M})$.
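Note that $\Vert a \Vert \leq \gamma_{n}(a) \leq \sqrt{n+1}\, \Vert a \Vert$ for every $a \in M_{n}(\mathcal{M})$, since $0 \leq \tau_{n}(a^{*}a) \leq \Vert a \Vert^{2} I$. For instance, if $p$ is one of $n$ equivalent orthogonal projections in $M_{n}(\mathcal{M})$ with sum $I$, then $\tau_{n}(p)=\frac{1}{n}I$ and $\gamma_{n}(p)=\sqrt{2}$; estimates of this type appear repeatedly below.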
Replacing $tr_{n}$ by $\tau_{n}$ in the proof of Lemma 3.1 in \cite{EFAR2}, we can obtain the next lemma directly. \begin{lemma} \label{4.1} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra acting on a Hilbert space $H$. Suppose $\mathcal{R}$ is a hyperfinite type II$_{1}$ subfactor of $\mathcal{M}$ such that $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$, the center of $\mathcal{M}$. Let $\theta$ be a positive number and $n$ be a positive integer. If $\psi: M_{n}(\mathcal{M}) \times M_{n}(\mathcal{M}) \to B(H^{n})$ is a normal bilinear map satisfying $$\psi (ac, b)=\psi (a, cb), a, b \in M_{n}(\mathcal{M}), c \in M_{n}(\mathcal{R})$$ and $$\Vert \psi( ae_{11}, e_{11}b ) \Vert \leq \theta \Vert a \Vert \Vert b \Vert, a, b \in M_{n}(\mathcal{M}),$$ then $$\Vert \psi(a, b) \Vert \leq \theta \gamma_{n}(a) \gamma_{n}(b)$$ for any $a, b \in M_{n}(\mathcal{M})$ \end{lemma} \iffalse \begin{proof} Following the same idea as in the proof of Lemma 3.1 in \cite{EFAR2}, we let $x, y$ be two unit vectors in $H^{n}$ and define a normal bilinear functional $\varphi$ on $M_{n}(\mathcal{M}) \times M_{n}(\mathcal{M})$ by \begin{eqnarray} \varphi (a, b) = \langle \psi(ae_{11}, e_{11}b)x, y \rangle \end{eqnarray} for all $a, b \in M_{n}(\mathcal{M})$. Then $\Vert \varphi \Vert \leq \theta$. Applying the Grothendick Inequality for normal bilinear maps on a von Neumann algebra$\cite{U}$, we can obtain states $f_{1}, f_{2}, g_{1}$ and $g_{2}$ on $M_{n}(\mathcal{M})$ such that \begin{eqnarray} \vert \varphi (a, b) \vert \leq \theta (f_{1}(aa^{*})+f_{2}(a^{*}a))^{1/2}(g_{1}(bb^{*})+g_{2}(b^{*}b))^{1/2} \end{eqnarray} for all $a, b \in M_{n}(\mathcal{M})$. Thus it follows from (60) and (61) that \begin{eqnarray*} \vert \langle \psi (a, b)x, y \rangle \vert&=&\vert \sum\limits_{j=1}^{n} \langle \psi ( ae_{j1}e_{11}, e_{11}e_{1j}b)x, y \rangle \vert\\ &\leq& \sum\limits_{j=1}^{n} \vert \varphi (ae_{j1}, e_{1j}b) \vert\\ &\leq& \sum\limits_{j=1}^{n} \theta(f_{1}(ae_{jj}a^{*})+f_{2}(e_{1j}a^{*}ae_{j1}))^{1/2}(g_{1}(e_{1j}bb^{*}e_{j1})+g_{2}(b^{*}e_{jj}b))^{1/2}\\ &\leq& \theta(f_{1}(aa^{*})+\sum\limits_{j=1}^{n} f_{2}(e_{1j}a^{*}ae_{j1}))^{1/2}(\sum\limits_{j=1}^{n}g_{1}(e_{1j}bb^{*}e_{j1})+g_{2}(b^{*}b))^{1/2} \end{eqnarray*} by the Cauchy-Schwarz inequality in the last step.
Since $\mathcal{R}$ is a hyperfinite subfactor of $\mathcal{M}$, we can take an increasing sequence $\{ \mathcal{R}_{\lambda} : \lambda \in \mathbb{N} \}$ of matrix subalgebras of $\mathcal{R}$ whose union is ultraweakly dense in $\mathcal{R}$. Let $\mathcal{U}_{\lambda}$ be the unitary group of $M_{n}(\mathcal{R}_{\lambda})$ with normalized Haar measure $d\mu_{\lambda}$. Since $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$, $M_{n}(\mathcal{R})' \cap M_{n}(\mathcal{M})$ is the center of $M_{n}(\mathcal{M})$, and therefore by Lemma 5.4.4 in \cite{AR3}, \begin{eqnarray} \lim\limits_{\lambda} \int_{\mathcal{U}_{\lambda}} u^{*}au d \mu_{\lambda} = \tau_{n}(a) \end{eqnarray} in the ultraweak topology, where $\tau_{n}$ is the center-valued trace on $M_{n}(\mathcal{M})$.
For each unitary $u \in \mathcal{U}_{\lambda}$, substituting $au$ and $u^{*}b$ for $a$ and $b$ respectively, we obtain \begin{eqnarray} \vert \langle \psi (a, b) x, y \rangle \vert =\vert \langle \psi (au, u^{*}b ) x , y \rangle \vert. \end{eqnarray}
Integrating both sides of (63) over $\mathcal{U}_{\lambda}$ and using the Cauchy-Schwarz inequality, we have \begin{eqnarray*} &&\vert \langle \psi(a, b)x, y \rangle \vert\\ &\leq& \theta(f_{1}(aa^{*})+\sum\limits_{j=1}^{n} f_{2}(e_{1j}\int_{\mathcal{U}_{\lambda}} u^{*}a^{*}au d\mu_{\lambda} e_{j1}))^{1/2}(\sum\limits_{j=1}^{n}g_{1}(e_{1j}\int_{\mathcal{U}_{\lambda}} u^{*}bb^{*}ud\mu_{\lambda} e_{j1})+g_{2}(b^{*}b))^{1/2}. \end{eqnarray*} \begin{eqnarray} \end{eqnarray} Taking the ultraweak limit on both sides of (64) over $\lambda$, by (62) and the normality of $f_{2}$ and $g_{1}$, we get \begin{eqnarray*} &&\vert \langle \psi(a, b)x, y \rangle \vert\\ &\leq& \theta (f_{1}(aa^{*})+\sum\limits_{j=1}^{n} f_{2}(e_{1j} \tau_{n}(a^{*}a)e_{j1}))^{1/2}(\sum\limits_{j=1}^{n}g_{1}(e_{1j} \tau_{n}(bb^{*})e_{j1})+g_{2}(b^{*}b))^{1/2}.\\ \end{eqnarray*} \begin{eqnarray} \end{eqnarray} The unit vectors $x, y$ were arbitrarily chosen from $H^{n}$, therefore (65) implies \begin{eqnarray*} &&\Vert \psi(a, b) \Vert\\ &\leq& \theta (\Vert a \Vert^{2}+n \Vert \tau_{n} (a^{*}a \Vert) )^{1/2}(\Vert b \Vert^{2} + n \Vert \tau_{n}(bb^{*})\Vert )^{1/2}\\ &=&\theta \gamma_{n}(a) \gamma_{n}(b). \end{eqnarray*} \end{proof} \fi
If Lemma 3.1 in \cite{EFAR2} is replaced by the preceding Lemma \ref{4.1}, the proof of Theorem 3.3 in \cite{EFAR2} gives us the following result.
\begin{lemma} \label{4.2} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra acting on a Hilbert space $H$. Suppose $\mathcal{M}$ has a hyperfinite subfactor $\mathcal{R}$ such that $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$, the center of $\mathcal{M}$. Fix $k \in \mathbb{N}$. If $\phi : \mathcal{M}^{k} \to B(H)$ is a $k$-linear $\mathcal{R}$-multimodular normal map, then
$$\Vert \phi^{(n)} (a_{1}, a_{2}, \dots, a_{k}) \Vert \leq 2^{k/2} \Vert \phi \Vert \gamma_{n}(a_{1}) \gamma_{n}(a_{2}) \dots \gamma_{n}(a_{k})$$\\ for all $a_{1}, a_{2}, \dots, a_{k} \in M_{n}(\mathcal{M})$ and $n \in \mathbb{N}$. \end{lemma} \iffalse \begin{proof} The proof is similar to the one in the proof of Theorem 3.3 in \cite{EFAR2}. For the purpose of completeness, we sketch its proof here.
Dividing $\phi$ by $\Vert \phi \Vert$ if necessary, we may assume that $\Vert \phi \Vert =1$. Fix $a_{1}, a_{2}, ..., a_{k} \in M_{n}(\mathcal{M})$ and $n \in \mathbb{N}$. Define $\psi_{1}: M_{n}(\mathcal{M}) \times M_{n}(\mathcal{M}) \to B(H^{n})$ such that \begin{eqnarray} \psi_{1} (a, b)= \phi^{(n)} (a^{*} e_{11}, e_{11}a_{2}e_{11}, \dots , e_{11}a_{k}e_{11})^{*} \phi^{(n)} (be_{11}, e_{11}a_{2}e_{11}, \dots , e_{11}a_{k}e_{11}). \label{60} \end{eqnarray} Therefore \begin{eqnarray} \Vert \psi_{1}(ae_{11}, e_{11}b) \Vert \leq \Vert a_{2} \Vert^{2} \dots \Vert a_{k} \Vert^{2} \Vert a \Vert \Vert b \Vert. \label{61} \end{eqnarray} Taking $\theta=\Vert a_{2} \Vert^{2} \dots \Vert a_{k} \Vert^{2}$, by Lemma \ref{4.1}, (\ref{61}) implies \begin{eqnarray} \Vert \psi_{1}(a, b) \Vert = \Vert \psi_{1} (e_{11}a, be_{11}) \Vert \leq \Vert a_{2} \Vert^{2} \dots \Vert a_{k} \Vert^{2} \gamma_{n}(e_{11}a) \gamma_{n}(be_{11}). \label{62} \end{eqnarray} Since $e_{11}, e_{22}, \dots, e_{nn}$ are equivalent projections with sum $I$, $\tau_{n}(e_{11})= \tau_{n}(e_{22})= \tau_{n}(e_{nn})= \frac{1}{n} I$. Therefore \begin{eqnarray} 0 \leq \tau_{n} (e_{11}a^{*}ae_{11}) \leq \Vert a \Vert^{2} \tau_{n}(e_{11}) \leq \frac{\Vert a \Vert^{2}} {n} I \label{63} \end{eqnarray} and thus \begin{eqnarray} \gamma_{n}(e_{11}a) =( \Vert e_{11}a \Vert^{2} + n \Vert e_{11}a^{*}ae_{11}\Vert)^{1/2} \leq \sqrt{2} \Vert a \Vert. \label{64} \end{eqnarray} Similarly, we can obtain \begin{eqnarray} \gamma_{n}(be_{11}) \leq \sqrt{2} \Vert b \Vert. \label{65} \end{eqnarray} Replacing $a$ by $a_{1}^{*}$ and $b$ by $a_{1}$ in (\ref{60}), it follows from (\ref{62}), (\ref{64}) and (\ref{65}) that \begin{eqnarray} \Vert \phi^{(n)}(a_{11}e_{11}, e_{11}a_{2}e_{11}, \dots, e_{11}a_{k}e_{11} ) \Vert \leq \sqrt{2} \Vert a_{1} \Vert \Vert a_{2} \Vert \dots \Vert a_{k} \Vert. \label{66} \end{eqnarray} Define $\psi_{2}: M_{n}(\mathcal{M}) \times M_{n}(\mathcal{M}) \to B(H^{n})$ such that \begin{eqnarray} \psi_{2}(a, b) = \phi^{(n)}(a, be_{11}, e_{11}a_{3}e_{11}, \dots , e_{11}a_{k}e_{11}). \label{67} \end{eqnarray} It follows from (\ref{66}) that $$\Vert \psi_{2} (ae_{11}, e_{11}b) \Vert \leq \sqrt{2} \Vert a_{3} \Vert \dots \Vert a_{k} \Vert \Vert a \Vert \Vert b \Vert.$$ Taking $\theta = \sqrt{2} \Vert a_{3} \Vert \dots \Vert a_{k} \Vert$, Lemma \ref{4.1} gives \begin{eqnarray*} \Vert \psi_{2} (a, b) \Vert&=&\Vert \psi_{2}(a, be_{11}) \Vert\\ &\leq& \sqrt{2} \Vert a_{3} \Vert \dots \Vert a_{k} \Vert \gamma_{n}(a) \gamma_{n}(be_{11})\\ &\leq& 2 \Vert a_{3} \Vert \dots \Vert a_{k} \Vert \gamma_{n}(a) \Vert b \Vert. \end{eqnarray*} Replacing $a$ by $a_{1}$ and $b$ by $a_{2}$ in (\ref{67}), we obtain \begin{eqnarray} \Vert \phi^{(n)}(a_{1}, a_{2}e_{11}, \dots, e_{11}a_{k}e_{11}) \Vert \leq 2 \gamma_{n}(a_{1}) \Vert a_{2} \Vert \Vert a_{3} \Vert \dots \Vert a_{k} \Vert. \label{68} \end{eqnarray} Repeating this step $k-2$ times, gaining a factor of $\sqrt{2}$ each time, we get the inequality \begin{eqnarray} \Vert \phi^{(n)} (a_{1}, a_{2}, \dots, a_{k-1}, a_{k}e_{11} ) \Vert \leq 2^{k/2} \gamma_{n}(a_{1}) \dots \gamma_{n}(a_{k-1}) \Vert a_{k} \Vert. \label{69} \end{eqnarray} Now define $\psi_{3}: M_{n}(\mathcal{M}) \times M_{n}(\mathcal{M}) \to B(H^{n})$ such that \begin{eqnarray} \psi_{3}(a, b)= \phi^{(n)} (a_{1}, a_{2}, \dots , a_{k-1}, a) \phi^{(n)}(a_{1}, \dots, a_{k-1}, b^{*})^{*}. 
\label{70} \end{eqnarray} It follows from (\ref{69}) that \begin{eqnarray} \Vert \psi_{3}(ae_{11}, e_{11}b) \Vert \leq 2^{k} \gamma_{n}(a_{1})^{2} \dots \gamma_{n}(a_{k-1})^{2} \Vert a \Vert \Vert b \Vert. \label{71} \end{eqnarray} By Lemma \ref{4.1}, taking $\theta=2^{k} \gamma_{n}(a_{1})^{2} \dots \gamma_{n}(a_{k-1})^{2}$, we obtain the inequality \begin{eqnarray} \Vert \psi_{3}(a, b) \Vert \leq 2^{k} \gamma_{n}(a_{1})^{2} \dots \gamma_{n}(a_{k-1})^{2} \gamma_{n}(a) \gamma_{n}(b). \label{72} \end{eqnarray} Replacing $a$ by $a_{k}$ and $b$ by $a_{k}^{*}$ in (\ref{70}), inequality (\ref{72}) gives $$\Vert \phi^{(n)} (a_{1}, a_{2}, \dots, a_{k}) \Vert \leq 2^{k/2} \gamma_{n}(a_{1}) \gamma_{n}(a_{2}) \dots \gamma_{n}(a_{k}).$$ \end{proof} \fi \begin{corollary} \label{4.3} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra and $\mathcal{Z}$ the center of $\mathcal{M}$. Suppose $\mathcal{R}$ is a hyperfinite type II$_{1}$ subfactor of $\mathcal{M}$ such that $\mathcal{R}' \cap \mathcal{M}=\mathcal{Z}$, the center of $\mathcal{M}$. Let $n ,k \in \mathbb{N}$. Suppose $p_{1}, p_{2}, \dots , p_{n}$ are $n$ orthogonal equivalent projections in $M_{n}(\mathcal{M})$ with sum $I$ and $\phi : \mathcal{M}^{k} \to B(H)$ is a $k$-linear $\mathcal{R}$-multimodular normal map. Then $$\Vert \phi^{(n)}(a_{1}p_{j}, a_{2}p_{j}, \dots , a_{k}p_{j}) \Vert \leq 2^{k} \Vert \phi \Vert \Vert a_{1} \Vert \Vert a_{2} \Vert \dots \Vert a_{k} \Vert $$ for any $j=1, 2, \dots, n$ and any $a_{1}, a_{2}, \dots , a_{k} \in M_{n}(\mathcal{M})$. \end{corollary}
\begin{proof} By Lemma \ref{4.2}, for any $j=1, 2, \dots, n$, \begin{eqnarray} \Vert \phi^{(n)} (a_{1}p_{j}, a_{2}p_{j}, \dots , a_{k}p_{j}) \Vert \leq 2^{k/2} \Vert \phi \Vert \gamma_{n}(a_{1}p_{j}) \gamma_{n}(a_{2}p_{j}) \dots \gamma_{n}(a_{k}p_{j}). \label{73} \end{eqnarray}
Since $p_{1}, p_{2}, \dots , p_{n}$ are orthogonal equivalent projections with sum $I$, $\tau_{n}(p_{j})=\frac{1}{n} I$ for each $j$. Then for any $1 \leq i \leq k$, \begin{eqnarray*} \gamma_{n}(a_{i}p_{j})&=&(\Vert a_{i}p_{j} \Vert^{2} + n \Vert \tau_{n}(p_{j} a_{i}^{*}a_{i}p_{j}) \Vert )^{1/2}\\ &\leq& (\Vert a_{i} \Vert^{2} +n \Vert a_{i} \Vert ^{2} \Vert \tau_{n}(p_{j}) \Vert)^{1/2}\\ &=&\sqrt{2} \Vert a_{i} \Vert.\\ \end{eqnarray*} Therefore (\ref{73}) gives $$\Vert \phi^{(n)}(a_{1}p_{j}, a_{2}p_{j}, \dots , a_{k}p_{j}) \Vert \leq 2^{k} \Vert \phi \Vert \Vert a_{1} \Vert \Vert a_{2} \Vert \dots \Vert a_{k} \Vert $$ for any $j=1, 2, \dots, n$ and any $a_{1}, a_{2}, \dots , a_{k} \in M_{n}(\mathcal{M})$. \end{proof}
\section{Hochschild cohomology of type II$_{1}$ von Neumann algebras with separable predual and Property $\Gamma$} Let us recall some notations from \cite {EFAR2}. Let $S_{k}$, $k \geq 2$, be the set of nonempty subsets of $\{ 1, 2, \dots , k \}$. Suppose $\phi: \mathcal{M}^{k} \to B(H)$ is a $k$-linear map, $p$ is a projection in $\mathcal{M}$ and $\sigma \in S_{k}$.
Define $\phi_{\sigma, p} : \mathcal{M}^{k} \to B(H)$ by $$\phi_{\sigma, p}(a_{1}, \dots , a_{k}) = \phi (b_{1}, b_{2}, \dots , b_{k}),$$ where $b_{i}= pa_{i}-a_{i}p$ for $i \in \sigma$ and $b_{i}=a_{i}$ otherwise.
Denote by $l(\sigma)$ the least integer in $\sigma$. Define $\phi_{\sigma, p, i}: \mathcal{M}^{k} \to B(H)$ by changing the $i$-th variable in $\phi_{\sigma, p}$ from $a_{i}$ to $pa_{i}-a_{i}p, 1 \leq i < l(\sigma) $, and replacing $pa_{i}-a_{i}p$ by $p(pa_{i}-a_{i}p)$ if $i=l(\sigma)$.
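For example, when $k=2$ and $\sigma=\{2\}$, we have $l(\sigma)=2$, $\phi_{\sigma, p}(a_{1}, a_{2})=\phi(a_{1}, pa_{2}-a_{2}p)$, $\phi_{\sigma, p, 1}(a_{1}, a_{2})=\phi(pa_{1}-a_{1}p, pa_{2}-a_{2}p)$ and $\phi_{\sigma, p, 2}(a_{1}, a_{2})=\phi(a_{1}, p(pa_{2}-a_{2}p))$.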
The following is Lemma 6.1 in \cite{EFAR2}.
\begin{lemma} \label{5.1}(\cite{EFAR2}) Let $p$ be a projection in a von Neumann algebra $\mathcal{M}$. Let $\mathcal{C}_{k}, k\geq 2$, be the set of $k$-linear maps $\phi : \mathcal{M}^{k} \to B(H)$ satisfying \begin{equation}p\phi (a_{1}, a_{2}, \dots, a_{k})=\phi (pa_{1}, a_{2}, \dots, a_{k})\label{equa 5.1.1}\end{equation} and \begin{equation}\phi (a_{1}, \dots, a_{i}p, a_{i+1}, \dots, a_{k})= \phi (a_{1}, \dots, a_{i}, pa_{i+1}, \dots, a_{k})\label{equa 5.1.2}\end{equation} for any $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$ and $1 \leq i \leq k-1$. Then if $\phi \in \mathcal{C}_{k}$, $$p\phi (a_{1}, a_{2}, \dots, a_{k})-p\phi (a_{1}p, \dots, a_{k}p)=\sum\limits_{\sigma \in S_{k}}(-1)^{\vert \sigma \vert +1} p\phi_{\sigma, p}(a_{1}, \dots, a_{k}).$$ Moreover, for each $\sigma \in S_{k}$, $$p\phi_{\sigma, p}(a_{1}, \dots, a_{k})=\sum\limits_{i=1}^{l(\sigma)} \phi_{\sigma, p, i}(a_{1}, a_{2}, \dots, a_{k}).$$ \end{lemma}
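In the example above ($k=2$, $\sigma=\{2\}$), the second identity can be verified directly from (\ref{equa 5.1.1}) and (\ref{equa 5.1.2}): $$p\phi(a_{1}, pa_{2}-a_{2}p)=\phi(pa_{1}, pa_{2}-a_{2}p)=\phi(pa_{1}-a_{1}p, pa_{2}-a_{2}p)+\phi(a_{1}p, pa_{2}-a_{2}p)=\phi_{\sigma,p,1}(a_{1},a_{2})+\phi_{\sigma,p,2}(a_{1},a_{2}),$$ where the last equality uses $\phi(a_{1}p, pa_{2}-a_{2}p)=\phi(a_{1}, p(pa_{2}-a_{2}p))$.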
Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual. Suppose $\rho$ is a faithful normal tracial state on $\mathcal{M}$. Then by Lemma \ref{3.2}, the $2$-norm induced by $\rho$ gives the same topology as the strong operator topology on bounded subsets of $\mathcal{M}$. The unit ball $(\mathcal{M})_{1}$ is a metric space under this $2$-norm. Using an argument similar to the one in Section 4 of \cite{EFAR2}, we obtain the joint continuity of a bounded $k$-linear separately normal map $\phi \colon \mathcal{M}^{k} \to B(H)$ on $(\mathcal{M})_{1} \times (\mathcal{M})_{1} \times \dots \times (\mathcal{M})_{1}$ in the $2$-norm induced by $\rho$. Therefore we have the following lemma.
\begin{lemma} \label{5.2} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual and $\phi : \mathcal{M}^{k} \to B(H)$ be a bounded $k$-linear separately normal map. Let $\rho$ be a faithful normal tracial state on $\mathcal{M}$. Suppose $\{ p_{t} : t \in \mathbb{N} \}$ is a sequence of projections in $\mathcal{M}$ such that $\phi$ satisfies (\ref{equa 5.1.1}) and (\ref{equa 5.1.2}) with $p=p_{t}$ for every $t \in \mathbb{N}$, and $$\lim\limits_{t \to \infty} \Vert p_{t}a-ap_{t} \Vert_{2} = 0$$ for any $a \in \mathcal{M}$, where $\Vert \cdot \Vert_{2}$ is the $2$-norm induced by $\rho$. Then for any $a_{1}, a_{2}, \dots, a_{k} \in \mathcal{M}$, each $\sigma \in S_{k}$, each integer $1 \leq i \leq l(\sigma)$ and each pair of unit vectors $x, y \in H$, $$\lim\limits_{t \to \infty} \langle \phi_{\sigma, p_{t}, i} (a_{1}, a_{2}, \dots, a_{k})x, y \rangle =0,$$ and $$\lim\limits_{t \to \infty} \langle p_t\phi_{\sigma, p_t}(a_{1}, \dots, a_{k})x, y \rangle =0.$$ \end{lemma} \begin{proof}The proof is similar to that of Lemma 6.2 in \cite{EFAR2} and is omitted here. \end{proof}
\iffalse \begin{proof} By the definition, each variable in $\phi_{\sigma, p_{t}, i}$ is one of the three types: $a_{j}, p_{t}a_{j}-a_{j}p_{t}$ and $p_{t}(p_{t}a_{j}-a_{j}p_{t})$, and at least one of the last two will occur. Therefore by the hypothesis, as $t \to \infty$, at least one variable will goes to 0 in the $2$-norm induced by $\rho$. Then the required result follows from the joint continuity of $\phi$ on $(\mathcal{M})_{1} \times (\mathcal{M})_{1} \times \dots \times (\mathcal{M})_{1}$ in the $2$-norm induced by $\rho$. \end{proof} \fi Now we have the following result. \begin{theorem} \label{5.3} Let $\mathcal{M}$ be a type II$_{1}$ von Neumann algebra with separable predual and $\mathcal{Z}$ be the center of $\mathcal{M}$. Let $\rho$ be a faithful normal tracial state on
$\mathcal{M}$ and $\| \cdot \|_2$ be the $2$-norm induced by $\rho$ on $\mathcal M$. Suppose $\mathcal{R}$ is a hyperfinite type II$_{1}$ subfactor of $\mathcal{M}$ such that \begin{enumerate} \item [(I)] $\mathcal{R}' \cap \mathcal{M} = \mathcal{Z}$;\\ \item [(II)] for any $n \in \mathbb{N}$, any elements $a_{1}, a_{2}, ..., a_{m}$ in $\mathcal{M}$, there exists a countable collection of projections $\{ p_{1t}, p_{2t}, \dots, p_{nt} : t \in \mathbb{N} \}$ in $\mathcal{R}$ such that \begin{enumerate} \item [(i)] for each $t \in \mathbb{N}$, $p_{1t}, p_{2t}, \dots, p_{nt}$ are mutually orthogonal equivalent projections in $\mathcal{R}$ with sum I, the identity in $\mathcal{M}$;
\item [(ii)] $\lim\limits_{t \to \infty}\left\Vert p_{it}a_{l}-a_{l}p_{it} \right\Vert_{2} = 0$ for any $i=1, 2,\dots , n, l=1, 2, \dots, m$. \end{enumerate} \end{enumerate} Then a bounded $k$-linear $\mathcal{R}$-multimodular separately normal map $\phi : \mathcal{M}^{k} \to B(H)$ is completely bounded and $\left\Vert \phi \right\Vert_{cb} \leq 2^{k} \left\Vert \phi \right\Vert$. \end{theorem}
\begin{proof} The proof is similar to the one for Theorem 6.3 in \cite{EFAR2} and is sketched here for the purpose of completeness.
Fix $n \in \mathbb{N}$ and $k$ elements $b_{1}, b_{2}, \dots, b_{k} \in M_{n}(\mathcal{M})$.
By condition (II), we can find a family of projections $\{ q_{it} : 1 \leq i \leq n; t \in \mathbb{N}\}$ in $\mathcal{R}$ such that \begin{enumerate} \item [(a)] for each $t \in \mathbb{N}$, $q_{1t}, \dots, q_{nt}$ are $n$ orthogonal equivalent projections in $\mathcal{R}$ with sum $I$;
\item [(b)] $\lim\limits_{t \to \infty}\left\Vert q_{it}a-aq_{it} \right\Vert_{2} = 0$ for any $a \in \mathcal{M}, 1 \leq i \leq n$. \end{enumerate} \noindent Let $q'_{it}= I_{n} \otimes q_{it} \in M_{n}(\mathcal{R})$ for each $i$ and $t$. We obtain that \begin{enumerate} \item [(a')] for each $t \in \mathbb{N}$, $q'_{1t}, \dots, q'_{nt}$ are $n$ orthogonal equivalent projections in $M_{n}(\mathcal{R})$ with sum $I_{n} \otimes I$;
\item [(b')] $\lim\limits_{t \to \infty}\left\Vert q'_{it}b-bq'_{it} \right\Vert_{2} = 0$ for any $b \in M_{n}(\mathcal{M}), 1 \leq i \leq n$. \end{enumerate} Since $\phi$ is an $\mathcal{R}$-multimodular map, $\phi^{(n)}$ is an $M_{n}(\mathcal{R})$-multimodular map. Assume that $\mathcal M$ acts on a Hilbert space $H$. For any two unit vectors $x, y$ in $H^{n}$ and any $t \in \mathbb{N}$, by Lemma \ref{5.1}, \begin{eqnarray*} &&\langle \phi^{(n)}(b_{1}, \dots, b_{k})x, y \rangle\\ &=& \langle \sum\limits_{i=1}^{n} q'_{it} \phi^{(n)} (b_{1}, \dots, b_{k})x, y \rangle\\ &=&\langle \sum\limits_{i=1}^{n} \sum\limits_{\sigma \in S_{k}} (-1)^{\vert \sigma \vert +1} q'_{it} \phi^{(n)}_{\sigma, q'_{it}}(b_{1}, \dots, b_{k})x, y \rangle+\langle \sum\limits_{i=1}^{n}q'_{it} \phi^{(n)}(b_{1}q'_{it}, \dots, b_{k}q'_{it})x, y \rangle\\ &=&\langle \sum\limits_{i=1}^{n} \sum\limits_{\sigma \in S_{k}} (-1)^{\vert \sigma \vert +1} q'_{it} \phi^{(n)}_{\sigma, q'_{it}}(b_{1}, \dots, b_{k})x, y \rangle+\langle \sum\limits_{i=1}^{n}q'_{it} \phi^{(n)}(b_{1}q'_{it}, \dots, b_{k}q'_{it})q'_{it}x, y \rangle. \end{eqnarray*}
Therefore \begin{eqnarray} &&\langle \phi^{(n)}(b_{1}, \dots, b_{k})x, y \rangle - \langle \sum\limits_{i=1}^{n} \sum\limits_{\sigma \in S_{k}} (-1)^{\vert \sigma \vert +1} q'_{it} \phi^{(n)}_{\sigma, q'_{it}}(b_{1}, \dots, b_{k})x, y \rangle \notag \\ &=&\langle \sum\limits_{i=1}^{n}q'_{it} \phi^{(n)}(b_{1}q'_{it}, \dots, b_{k}q'_{it})q'_{it}x, y \rangle. \label{74} \end{eqnarray}
Since $\{ q'_{1t}, \dots, q'_{nt} \}$ is a set of $n$ orthogonal projections for each $t \in \mathbb{N}$, by Corollary \ref{4.3}, \begin{eqnarray} \Vert \sum\limits_{i=1}^{n} q'_{it} \phi^{(n)}(b_{1}q'_{it}, \dots, b_{k}q'_{it})q'_{it} \Vert &\leq& \max\limits_{1 \leq i \leq n} \Vert q'_{it} \phi^{(n)}(b_{1}q'_{it}, \dots, b_{k}q'_{it})q'_{it} \Vert \notag \\ &\leq& 2^{k} \Vert \phi \Vert \Vert b_{1} \Vert \dots \left\Vert b_{k} \right\Vert. \label{76} \end{eqnarray}
By Lemma \ref{5.2}, condition (b') implies \begin{eqnarray} \lim\limits_{t \to \infty} \langle q'_{it} \phi^{(n)}_{\sigma, q'_{it}}(b_{1}, \dots, b_{k})x, y \rangle = 0 \label{77} \end{eqnarray} for each $1 \leq i \leq n$ and $\sigma \in S_{k}$.
Letting $t \to \infty$ on both sides of (\ref{74}), it follows from inequality (\ref{76}) and equation (\ref{77}) that $$\vert \langle \phi^{(n)}(b_{1}, \dots, b_{k})x, y \rangle \vert \leq 2^{k} \Vert \phi \Vert \Vert b_{1} \Vert \dots \Vert b_{k} \Vert.$$ Since $n$, $b_{1}, \dots, b_{k}$, $x$ and $y$ were arbitrarily chosen, $\Vert \phi \Vert_{cb} \leq 2^{k} \Vert \phi \Vert$. \end{proof}
The following is the main result of the paper.
\begin{theorem}\label{mainthm} If $\mathcal{M}$ is a type II$_{1}$ von Neumann algebra with separable predual and Property $\Gamma$, then the Hochschild cohomology group $$H^{k}(\mathcal{M}, \mathcal{M})=0, \qquad \forall \ k \geq 2.$$ \end{theorem}
\begin{proof} By Theorem \ref{3.13}, there is a hyperfinite type II$_{1}$ subfactor $\mathcal{R}$ of $\mathcal{M}$ satisfying conditions (I) and (II) in Theorem \ref{5.3}.
Now consider the cohomology groups $H^{k}(\mathcal{M}, \mathcal{M})$. By Theorem 3.1.1 in \cite{AR3}, it suffices to consider a $k$-linear $\mathcal{R}$-multimodular separately normal cocycle $\phi$. Theorem \ref{5.3} shows that such cocycles are completely bounded. By Theorem 4.3.1 in \cite{AR3}, completely bounded Hochschild cohomology groups are trivial. It follows that $\phi$ is a coboundary, whence $H^{k}(\mathcal{M}, \mathcal{M})=0$ for all $k \geq 2$. \end{proof}
The next result, which appears in \cite{EGA}, follows directly from Theorem \ref{mainthm} and Example \ref{example 1}. \begin{corollary}\label{mainthm2} Suppose that $\mathcal{M}_1$ is a type II$_{1}$ von Neumann algebra with separable predual and $\mathcal{M}_2$ is a type II$_{1}$ factor with separable predual. If $\mathcal M_2$ has Property $\Gamma$, then the Hochschild cohomology group $$H^{k}(\mathcal{M}_1\otimes \mathcal{M}_2, \mathcal{M}_1\otimes \mathcal{M}_2)=0, \qquad \forall \ k \geq 2.$$ In particular, if $\mathcal M$ is a type II$_{1}$ von Neumann algebra with separable predual satisfying $\mathcal M \cong \mathcal M\otimes \mathcal R$, where $\mathcal R$ is the hyperfinite II$_1$ factor, then $$H^{k}(\mathcal{M}, \mathcal{M})=0, \qquad \forall \ k \geq 2.$$ \end{corollary}
\end{document}
\begin{document}
\title{$p$-adic measures associated with zeta values and $p$-adic $\log$ multiple gamma functions}
\begin{abstract} We study a relation between two refinements of the rank one abelian Gross-Stark conjecture: For a suitable abelian extension $H/F$ of number fields, a Gross-Stark unit is defined as a $p$-unit of $H$ satisfying some properties. Let $\tau \in \mathrm{Gal}(H/F)$. Yoshida and the author constructed the symbol $Y_p(\tau)$ by using $p$-adic $\log$ multiple gamma functions, and conjectured that the $\log_p$ of a Gross-Stark unit can be expressed by $Y_p(\tau)$. Dasgupta constructed the symbol $u_T(\tau)$ by using the $p$-adic multiplicative integration, and conjectured that a Gross-Stark unit can be expressed by $u_T(\tau)$. In this paper, we give an explicit relation between $Y_p(\tau)$ and $u_T(\tau)$. \end{abstract}
\section{Introduction}
Let $F$ be a totally real field, $K$ a CM-field which is abelian over $F$, $S$ a finite set of places of $F$. We assume that \begin{itemize} \item $S$ contains all infinite places of $F$, all places of $F$ lying above a rational prime $p$, and all ramified places in $K/F$. \item Let $\mathfrak p$ be the prime ideal corresponding to the $p$-adic topology on $F$. (Hence $\mathfrak p \in S$.) Then $\mathfrak p$ splits completely in $K/F$. \end{itemize} For $\tau \in \mathrm{Gal}(K/F)$, we consider the partial zeta function \begin{align*} \zeta_S(s,\tau):=\sum_{(\frac{K/F}{\mathfrak a})=\tau,\ (\mathfrak a,S)=1}N\mathfrak a^{-s}. \end{align*} Here $\mathfrak a$ runs over all integral ideals of $F$, relatively prime to all finite places in $S$, whose image under the Artin symbol $(\frac{K/F}{*})$ is equal to $\tau$. The series converges for $\mathrm{Re}(s)>1$, has a meromorphic continuation to the whole $s$-plane, and is analytic at $s=0$. Moreover, under our assumption, we see that \begin{itemize} \item The $p$-adic interpolation function $\zeta_{p,S}(s,\tau)$ of $\zeta_S(s,\tau)$ exists. \item $\mathrm{ord}_{s=0}\zeta_S(s,\tau),\mathrm{ord}_{s=0}\zeta_{p,S}(s,\tau) \geq 1$. \item There exist a natural number $W$ and a $\mathfrak p$-unit $u$ of $K$ which satisfy \begin{align} \label{GSu}
\log |u^\tau|_\mathfrak P=-W\zeta_S'(0,\tau) \quad (\tau \in \mathrm{Gal}(K/F)). \end{align}
Here $\mathfrak P$ denotes the prime ideal corresponding to the $p$-adic topology on $K$, $|x|_\mathfrak P:=N\mathfrak P^{-\mathrm{ord}_\mathfrak Px}$. \end{itemize} Gross conjectured the following $p$-adic analogue of the rank $1$ abelian Stark conjecture:
\begin{cnj}[{\cite[Conjecture 3.13]{Gr}}] \label{GSc} Let $u$ be a $\mathfrak p$-unit characterized by {\rm (\ref{GSu})} up to roots of unity. Then we have \begin{align*} \log_p N_{K_\mathfrak P/\mathbb Q_p}(u^\tau)=-W\zeta_{p,S}(0,\tau). \end{align*} \end{cnj}
Dasgupta-Darmon-Pollack \cite{DDP} proved a large part of Conjecture \ref{GSc}. Yoshida and the author, and independently Dasgupta formulated refinements of Conjecture \ref{GSc}: Let $\mathfrak f$ be an integral ideal of a totally real field $F$ satisfying $\mathfrak p\nmid \mathfrak f$, $H_\mathfrak f$ the narrow ray class field modulo $\mathfrak f$, $H$ the maximal subfield of $H_\mathfrak f$ where $\mathfrak p$ splits completely. Yoshida and the author \cite{KY1} essentially constructed the invariant $Y_p(\tau)$ (Definition \ref{Yp}) for $\tau \in \mathrm{Gal}(H/F)$ by using $p$-adic $\log$ multiple gamma functions. Then \cite[Conjecture A$'$]{KY1} states that $\log_p u^\tau$ (without $N_{K_\mathfrak P/\mathbb Q_p}$) can be expressed by $Y_p(\tau)$. On the other hand, Dasgupta constructed the invariant $u_T(\mathfrak b ,\mathcal D_\mathfrak f)$ (Definition \ref{ueta}-(iv)) by using the multiplicative integration for $p$-adic measures associated with Shintani's multiple zeta functions. Then \cite[Conjecture 3.21]{Da} states that a modified version of $u^\tau$ can be expressed by $u_T(\mathfrak b ,\mathcal D_\mathfrak f)$. In \cite[Remark 2]{Ka3}, the author announced the following relation between these refinements.
\begin{thm*}[Theorem \ref{main}] Let $\eta$ be a ``good'' prime ideal in the sense of {\rm Definition \ref{assump}}. We put $T:=\{\eta\}$. Then we have \begin{align*} \log_p(u_\eta(\mathfrak b ,\mathcal D_\mathfrak f)) =-Y_p((\tfrac{H/F}{\mathfrak b})) +N\eta\,Y_p((\tfrac{H/F}{\mathfrak b \eta^{-1}})). \end{align*} \end{thm*}
In particular, we see that the two refinements are consistent (roughly speaking, \cite[Conjecture 3.21]{Da} is a further refinement of \cite[Conjecture A$'$]{KY1} by $\ker \log_p$). The aim of this paper is to prove this theorem.
Let us explain the outline of this paper. In \S 2, we introduce Shintani's technique of cone decompositions. We obtain a suitable fundamental domain of $F\otimes \mathbb R_+/E_{\mathfrak f,+}$, where $F\otimes \mathbb R_+$ denotes the totally positive part of $F\otimes \mathbb R$, $E_{\mathfrak f,+}$ is a subgroup of the group of all totally positive units. We need such fundamental domains in order to construct both of the invariants $Y_p$, $u_T$. In \S 3, we recall the definition and some properties of $Y_p$, which is essentially defined in \cite{KY1} and slightly modified in \cite{Ka3}. The classical or $p$-adic $\log$ multiple gamma function is defined as the derivative values at $a=0$ of the classical or $p$-adic Barnes' multiple zeta function, respectively. Then the invariant $Y_p(\tau,\iota)$ is defined in Definition \ref{Yp}, as a finite sum of the ``difference'' of $p$-adic $\log$ multiple gamma functions and classical $\log$ multiple gamma functions. Conjecture \ref{KYc} predicts exact values of $Y_p(\tau,\iota)$. In \S 4, we also recall some results in \cite{Da}. Dasgupta introduced $p$-adic measures $\nu_T(\mathfrak b,\mathcal D_\mathfrak f)$ associated with special values of Shintani's multiple zeta functions, and defined $u_T(\mathfrak b,\mathcal D_\mathfrak f)$ as the multiplicative integration $\times\hspace{-0.9em}\int_{\mathbf O}x\ d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f,x)$ with certain correction terms. Dasgupta formulated a conjecture (Conjecture \ref{Dc}) on properties of $u_T(\mathfrak b,\mathcal D_\mathfrak f)$. In \S 5, we state and prove the main result (Theorem \ref{main}) which gives an explicit relation between $Y_p(\tau,\mathrm{id})$ and $\log_p(u_\eta(\mathfrak b,\mathcal D_\mathfrak f))$. Then we will see that Conjectures \ref{KYc}, \ref{Dc} are consistent in the sense of Corollary \ref{crlofmain}. The key observation is Lemma \ref{key}: Dasgupta's $p$-adic measure $\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f)$ is originally associated with Shintani's multiple zeta functions. By this Lemma, we can relate $\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f)$ to Barnes' multiple zeta functions and $p$-adic analogues as in Lemma \ref{intzeta}.
\section{Shintani domains}
Let $F$ be a totally real field of degree $n$, $\mathcal O_F$ the ring of integers of $F$, $\mathfrak f$ an integral ideal of $F$. We denote by $F_+$ the set of all totally positive elements in $F$ and put $\mathcal O_{F,+}:=\mathcal O_F \cap F_+$, $E_+:=\mathcal O_F^\times \cap F_+$. We consider subgroups of $E_+$ of the following form: \begin{align*} E_{\mathfrak f,+}:=\{\epsilon \in E_+ \mid \epsilon \equiv 1 \bmod \mathfrak f\}. \end{align*} We identify \begin{align*} F \otimes \mathbb R = \mathbb R^n, \quad \sum_{i=1}^k a_i \otimes b_i \mapsto (\sum_{i=1}^k\iota(a_i)b_i)_{\iota \in \mathrm{Hom}(F,\mathbb R)}, \end{align*} where $\mathrm{Hom}(F,\mathbb R)$ denotes the set of all real embeddings of $F$. In particular, the totally positive part \begin{align*} F \otimes \mathbb R_+:=\mathbb R_{+}^n \end{align*} has a meaning. On the right-hand side, $\mathbb R_{+}$ denotes the set of all positive real numbers. Let $v_1,\dots,v_r \in \mathcal O_F$ be linearly independent. Then we define the cone with basis $\bm v:=(v_1,\dots,v_r)$ as \begin{align*} C(\bm v):=\{\bm t \,{}^t \bm v \in F\otimes \mathbb R \mid \bm t \in \mathbb R_{+}^r\}. \end{align*} Here $\bm t \,{}^t \bm v$ denotes the inner product $\sum_{i=1}^r t_iv_i$.
\begin{dfn} \begin{enumerate} \item We say that a subset $D \subset F \otimes \mathbb R_+$ is a Shintani set if it can be expressed as a finite disjoint union of cones: \begin{align*}
D=\coprod_{j \in J} C(\bm v_j) \quad (|J|<\infty,\ \bm v_j \in \mathcal O_{F,+}^{r(j)},\ r(j) \in \mathbb N). \end{align*} \item We consider the natural action $E_{\mathfrak f,+} \curvearrowright F \otimes \mathbb R_+$, $u(a\otimes b):=(ua)\otimes b$. We call a Shintani set $D$ a Shintani domain $\bmod E_{\mathfrak f,+}$ if it is a fundamental domain of $ F \otimes \mathbb R_+/E_{\mathfrak f,+}$: \begin{align*} F \otimes \mathbb R_+=\coprod_{\epsilon \in E_{\mathfrak f,+}} \epsilon D. \end{align*} \end{enumerate} When $\mathfrak f=(1)$, we write $\bmod E_+$ instead of $\bmod E_{(1),+}$. \end{dfn}
Shintani \cite[Proposition 4]{Sh} showed that there always exists a Shintani domain.
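For example, let $F$ be a real quadratic field, take $\mathfrak f=(1)$, and let $\epsilon$ be the generator of $E_{+}$ satisfying $\epsilon>1$ under a fixed real embedding. Then one checks directly that \begin{align*} D=C((1)) \amalg C((1,\epsilon))=\{t_1+t_2\epsilon \mid t_1 \in \mathbb R_{+},\ t_2 \geq 0\} \end{align*} is a Shintani domain $\bmod E_{+}$.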
\section{$p$-adic $\log$ multiple gamma functions}
We recall the definition and some properties of the symbol $Y_p$ defined in \cite{KY1}, \cite{Ka3}. We denote by $\mathbb R_{+}$ the set of all positive real numbers.
\begin{dfn} Let $z \in \mathbb R_{+}$, $\bm v \in \mathbb R_{+}^r$. Barnes' multiple zeta function is defined as \begin{align*} \zeta(s,\bm v,z):=\sum_{\bm m \in \mathbb Z_{\geq 0}^r} (z+\bm m\,{}^t\bm v)^{-s}. \end{align*} This series converges for $\mathrm{Re}(s) >r$, has a meromorphic continuation to the whole $s$-plane, and is analytic at $s=0$. Then Barnes' multiple gamma function is defined as \begin{align*}
\Gamma(z,\bm v):=\exp\left(\frac{\partial}{\partial s}\zeta(s,\bm v,z)|_{s=0}\right). \end{align*} \end{dfn}
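For example, when $r=1$ and $\bm v=(1)$, the series $\zeta(s,(1),z)=\sum_{m \geq 0}(z+m)^{-s}$ is the Hurwitz zeta function, and Lerch's formula gives \begin{align*} \Gamma(z,(1))=\exp\left(\frac{\partial}{\partial s}\zeta(s,(1),z)\Big|_{s=0}\right)=\frac{\Gamma(z)}{\sqrt{2\pi}}, \end{align*} where $\Gamma(z)$ on the right-hand side denotes the classical gamma function.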
Note that this definition is modified from that given by Barnes. For the proof and details, see \cite[Chap I, \S 1]{Yo}. Throughout this paper, we regard each number field as a subfield of $\overline{\mathbb Q}$, and fix two embeddings $\overline{\mathbb Q} \hookrightarrow \mathbb C$, $\overline{\mathbb Q} \hookrightarrow \mathbb C_p$. Here $\mathbb C_p$ denotes the $p$-adic completion of the algebraic closure of $\mathbb Q_p$. We denote by $\mu_{(p)}$ the group of all roots of unity of prime-to-$p$ order. Let $\mathrm{ord}_p\colon \mathbb C_p^\times \rightarrow \mathbb Q$, $\theta_p \colon \mathbb C_p^\times \rightarrow \mu_{(p)}$ be the unique group homomorphisms satisfying \begin{align} \label{ordtheta}
|p^{-\mathrm{ord}_p(z)}\theta_p(z)^{-1}z-1|_p<1 \quad (z \in \mathbb C_p^\times). \end{align}
\begin{dfn} Let $z \in \overline{\mathbb Q}$, $\bm v \in (\overline{\mathbb Q}^\times)^r$. We assume that \begin{align} &z \in \mathbb R_{+},\ \bm v \in \mathbb R_{+}^r \text{ via the embedding }\overline{\mathbb Q} \hookrightarrow \mathbb C, \notag \\ &\mathrm{ord}_p(z) < \mathrm{ord}_p(v_1),\dots,\mathrm{ord}_p(v_r) \text{ via the embedding }\overline{\mathbb Q} \hookrightarrow \mathbb C_p. \label{cond} \end{align} Then we denote by $\zeta_p(s,\bm v,z)$ $(s \in \mathbb Z_p-\{1,2,\dots,r\})$ the $p$-adic multiple zeta function characterized by \begin{align} \label{interpolation} \zeta_p(-m,\bm v,z)=p^{-\mathrm{ord}_p(z)m}\theta_p(z)^{-m}\zeta(-m,\bm v,z) \quad (m \in \mathbb Z_{\geq 0}). \end{align} We define the $p$-adic $\log$ multiple gamma function as \begin{align*}
L\Gamma_p(z,\bm v):=\frac{\partial}{\partial s}\zeta_p(s,\bm v,z)|_{s=0}. \end{align*} \end{dfn}
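In particular, taking $m=0$ in (\ref{interpolation}) gives $\zeta_p(0,\bm v,z)=\zeta(0,\bm v,z)$, while the derivatives at $s=0$ may differ; the symbol $Y_p$ introduced below is built from this difference.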
The construction of $\zeta_p(s,\bm v,z)$ is due to Cassou-Nogu\`es \cite{CN1}. The author defined and studied $L\Gamma_p(z,\bm v)$ in \cite{Ka1}. See \cite[\S 2]{Ka3} for a short survey.
\begin{dfn} Let $F$ be a totally real field, $\mathfrak f$ an integral ideal, $D=\coprod_{j \in J}C(\bm v_j)$ $(\bm v_j \in \mathcal O_{F,+}^{r(j)})$ a Shintani domain $\bmod E_+$. We denote by $\mathrm{Hom}(F,\mathbb R)$ {\rm (}resp.\ $\mathrm{Hom}(F,\mathbb C_p)${\rm)} the set of all embeddings of $F$ into $\mathbb R$ {\rm(}resp.\ $\mathbb C_p${\rm )}. Since we fixed embeddings $\overline{\mathbb Q} \hookrightarrow \mathbb C$, $\overline{\mathbb Q} \hookrightarrow \mathbb C_p$, we may identify \begin{align*} \mathrm{Hom}(F,\mathbb R)=\mathrm{Hom}(F,\mathbb C_p). \end{align*} \begin{enumerate} \item We denote by $C_\mathfrak f$ the narrow ideal class group modulo $\mathfrak f$, by $H_\mathfrak f$ the narrow ray class field modulo $\mathfrak f$. In particular, the Artin map induces \begin{align*} C_\mathfrak f \cong \mathrm{Gal}(H_\mathfrak f/F). \end{align*} \item Let $\pi \colon C_\mathfrak f \rightarrow C_{(1)}$ be the natural projection. For each $c \in C_\mathfrak f$, we take an integral ideal $\mathfrak a_c$ satisfying \begin{align*} \mathfrak a_c\mathfrak f \in \pi(c). \end{align*} \item For $c \in C_\mathfrak f$, $\bm v \in \mathcal O_F^r$, we put \begin{align*} R(c,\bm v):=R(c,\bm v,\mathfrak a_c):=\{\bm x \in (\mathbb Q\cap (0,1])^r \mid \mathcal O_F \supset (\bm x\,{}^t\bm v) \mathfrak a_c\mathfrak f\in c\}. \end{align*} \item For $c \in C_\mathfrak f$, $\iota \in \mathrm{Hom}(F,\mathbb R)$, we define \begin{align*} G(c,\iota):=G(c,\iota,D,\mathfrak a_c):=\sum_{j \in J} \sum_{\bm x \in R(c,\bm v_j)} \log \Gamma(\iota(\bm x \,{}^t \bm v_j), \iota(\bm v_j)). \end{align*} \item For $\iota \in \mathrm{Hom}(F,\mathbb R)$ $(=\mathrm{Hom}(F,\mathbb C_p))$, we put \begin{align*}
\mathfrak p_\iota:=\{z \in \mathcal O_F \mid |\iota(z)|_p<1\}. \end{align*} Note that the prime ideal $\iota(\mathfrak p_\iota)$ corresponds to the $p$-adic topology on $\iota(F) \subset \mathbb C_p$. \item Assume that $\mathfrak p_\iota \mid \mathfrak f$. For $c \in C_\mathfrak f$, $\iota \in \mathrm{Hom}(F,\mathbb R)$, we define \begin{align*} G_p(c,\iota):=G_p(c,\iota,D,\mathfrak a_c):=\sum_{j \in J} \sum_{\bm x \in R(c,\bm v_j)} L\Gamma_p(\iota(\bm x \,{}^t \bm v_j), \iota(\bm v_j)). \end{align*} Note that $(\iota(\bm x \,{}^t \bm v_j), \iota(\bm v_j))$ satisfies the assumption {\rm (\ref{cond})} whenever $\mathfrak p_\iota \mid \mathfrak f$, $\bm x \in R(c,\bm v_j)$. \end{enumerate} \end{dfn}
The following map $[\ ]_p$ is well-defined by \cite[Lemma 5.1]{KY1}.
\begin{dfn} We denote by $\overline{\mathbb Q} \log_p \overline{\mathbb Q}^\times$ {\rm (}resp.\ $\overline{\mathbb Q} \log \overline{\mathbb Q}^\times${\rm )} the $\overline{\mathbb Q}$-subspace of $\mathbb C_p$ {\rm (}resp.\ $\mathbb C${\rm )} generated by $\log_p b$ {\rm (}resp.\ $\pi$, $\log b${\rm )} with $b \in \overline{\mathbb Q}^\times$. We define a $\overline{\mathbb Q}$-linear map $[\ ]_p$ by \begin{align*} [\ ]_p \colon \overline{\mathbb Q} \log \overline{\mathbb Q}^\times \rightarrow \overline{\mathbb Q} \log_p \overline{\mathbb Q}^\times, \quad a\log b \mapsto a\log_p b, \quad a\pi \mapsto 0\quad (a,b \in \overline{\mathbb Q},\ b\neq 0). \end{align*} \end{dfn}
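For instance, $[\tfrac{1}{2}\log 2+3\pi]_p=\tfrac{1}{2}\log_p 2$, and the relation $\log 6=\log 2+\log 3$ is sent to $\log_p 6=\log_p 2+\log_p 3$; the well-definedness amounts to the fact that every $\overline{\mathbb Q}$-linear relation among $\pi$ and the $\log b$ $(b \in \overline{\mathbb Q}^\times)$ yields the corresponding relation among the $\log_p b$.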
\begin{lmm} Let $H$ be an intermediate field of $H_\mathfrak f/F$, $\mathfrak q$ a prime ideal of $F$, relatively prime to $\mathfrak f$, splitting completely in $H/F$. Then we have \begin{align*}
\sum_{c \in C_{\mathfrak f\mathfrak q},\ \mathrm{Art}(\overline c)|_H=\tau}G(c,\iota) \in \overline{\mathbb Q}\log \overline{\mathbb Q}^\times \quad (\tau \in \mathrm{Gal}(H/F)). \end{align*} Here $c$ runs over all ideal classes whose image under the composite map $C_{\mathfrak f\mathfrak q} \rightarrow C_\mathfrak f \rightarrow \mathrm{Gal}(H_\mathfrak f/F) \rightarrow \mathrm{Gal}(H/F)$ is equal to $\tau$. \end{lmm}
\begin{proof} We put $W(c,\iota):=W(\iota(c))$ in \cite[(4.3)]{KY1}, $V(c,\iota):=V(\iota(c))$ in \cite[(1.6)]{KY1}, and $X(c,\iota):=G(c,\iota)+W(c,\iota)+V(c,\iota)$. Here we consider the ideal class group $C_{\iota(\mathfrak f)}$ of $\iota(F)$ modulo $\iota(\mathfrak f)$. Then $\iota(c)$ denotes the image of $c\in C_\mathfrak f$ in $C_{\iota(\mathfrak f)}$ under the natural map. By the definition \cite[(4.3)]{KY1} and \cite[Appendix I, Theorem]{KY2}, we have $W(c,\iota),V(c,\iota) \in \overline{\mathbb Q}\log \overline{\mathbb Q}^\times$. Moreover \cite[Lemma 5.5]{KY1} states that \begin{align*} \sum_{c \in C_{\mathfrak f\mathfrak q}} \chi_\mathfrak q(c) X(c,\iota) \in \overline{\mathbb Q}\log \overline{\mathbb Q}^\times \quad (\chi \in \hat C_\mathfrak f,\ \chi([\mathfrak q])=1). \end{align*} Here $\chi_\mathfrak q$, $[\mathfrak q]$ denote the composite map $C_{\mathfrak f\mathfrak q} \rightarrow C_\mathfrak f \stackrel{\chi}\rightarrow \mathbb C^\times$, the ideal class $\in C_\mathfrak f$ of $\mathfrak q$, respectively. Therefore, when $H$ is the fixed subfield under $(\frac{H_\mathfrak f/F}{\mathfrak q})$, it follows from the orthogonality of characters. The general case follows from this case immediately. \end{proof}
\begin{dfn} \label{Yp} Let $H$ be an intermediate field of $H_\mathfrak f/F$. Assume that $\mathfrak p_\iota \nmid \mathfrak f$ and that $\mathfrak p_\iota$ splits completely in $H/F$. Then we define \begin{align*}
Y_p(\tau,\iota):=\sum_{c \in C_{\mathfrak f\mathfrak p_\iota},\ \mathrm{Art}(\overline c)|_H=\tau}G_p(c,\iota)-[\sum_{c \in C_{\mathfrak f\mathfrak p_\iota},\ \mathrm{Art}(\overline c)|_H=\tau}G(c,\iota)]_p \quad (\tau \in \mathrm{Gal}(H/F)). \end{align*} When $\iota=\mathrm{id}$, we drop the symbol $\iota$: $Y_p(\tau):=Y_p(\tau,\mathrm{id})$. \end{dfn}
By \cite[Proposition 5.6]{KY1} (and the orthogonality of characters), we see that $Y_p(\tau,\iota)$ depends only on $H,\mathfrak f,\tau,\iota$, not on $D$, $\mathfrak a_c$'s. We formulated a conjecture \cite[Conjecture A$'$]{KY1}, which is equivalent to the following Conjecture \ref{KYc} by \cite[Proposition 6-(ii)]{Ka3}.
\begin{cnj} \label{KYc} Let $H_\mathfrak f/H/F$ be as above: we assume that \begin{center} $\mathfrak p_\iota$ does not divide $\mathfrak f$ and splits completely in $H/F$. \end{center} We take a lift $\tilde\iota\colon H\rightarrow \mathbb C_p$ of $\iota\colon F\rightarrow \mathbb C_p$
and put $\mathfrak p_{H,\tilde\iota}:=\{z\in \mathcal O_H \mid |\tilde\iota(z)|_p<1\}$. Let $\alpha_{H,\tilde\iota}$ be a generator of the principal ideal $\mathfrak p_{H,\tilde\iota}^{h_H}$, where $h_H$ denotes the class number. Then we have \begin{align*} Y_p(\tau,\iota)=\frac{-1}{h_H}\sum_{c\in C_\mathfrak f}\zeta(0,c^{-1})\log_p \tilde\iota\left(\alpha_{H,\tilde\iota}^{\tau \mathrm{Art}(c)}\right). \end{align*} \end{cnj}
\begin{rmk} Roughly speaking, the above conjecture states a relation between the ratios $[$ $p$-adic multiple gamma functions $:$ multiple gamma functions $]$ and Stark units associated with the finite place $\mathfrak p_\iota$. We also studied a relation between the same ratios and Stark units associated with real places in {\rm \cite{Ka3}}. We found a more significant relation between the ratios $[$ $p$-adic gamma function $:$ gamma function $]$ and cyclotomic units in {\rm \cite{Ka2}}. \end{rmk}
We rewrite the definition of $Y_p$ for later use.
\begin{dfn} \label{zetaR} Let $R$ be a subset of $F_+$. We assume that $R$ can be expressed in the following form: \begin{align*} R=\coprod_{i=1}^k\{(\bm x_i+\bm m) \,{}^t \bm v_i \mid \bm m \in \mathbb Z_{\geq 0}^{r_i}\} \quad (\bm x_i \in \mathbb Q_+^{r_i},\ \bm v_i \in F_+^{r_i}). \end{align*} \begin{enumerate} \item We define \begin{align*} \zeta_\iota(s,R)&:=\sum_{z \in R} \iota(z)^{-s}:=\sum_{i=1}^k \zeta(s,\iota(\bm v_i),\iota(\bm x_i \,{}^t \bm v_i)), \\
L\Gamma_\iota(R)&:=\frac{\partial}{\partial s}\zeta_\iota(s,R)|_{s=0}=\sum_{i=1}^k \log \Gamma(\iota(\bm x_i \,{}^t \bm v_i),\iota(\bm v_i)). \end{align*} \item Additionally we assume that each $(\iota(\bm x_i \,{}^t \bm v_i),\iota(\bm v_i))$ satisfies {\rm (\ref{cond})}. Then there exists the $p$-adic interpolation function \begin{align*} \zeta_{\iota,p}(s,R):= \sum_{i=1}^k \zeta_p(s,\iota(\bm v_i),\iota(\bm x_i \,{}^t \bm v_i)) \end{align*} of $\zeta_{\iota}(s,R)$. We define \begin{align*}
L\Gamma_{\iota,p}(R):=\frac{\partial}{\partial s}\zeta_{\iota,p}(s,R)|_{s=0}=\sum_{i=1}^k L\Gamma_p(\iota(\bm x_i \,{}^t \bm v_i),\iota(\bm v_i)). \end{align*} \end{enumerate} When $\iota=\mathrm{id}$, we drop the symbol $\iota$. \end{dfn}
It follows that, for any Shintani domain $D$ $\bmod E_+$ and for any integral ideals $\mathfrak a_c$ satisfying $\mathfrak a_c\mathfrak f \in \pi(c)$, we have \begin{align} \label{rewrite}
Y_p(\tau,\iota)=\sum_{c \in C_{\mathfrak f\mathfrak p_\iota},\ \mathrm{Art}(\overline c)|_H=\tau}L\Gamma_{\iota,p}(R_c)
-[\sum_{c \in C_{\mathfrak f\mathfrak p_\iota},\ \mathrm{Art}(\overline c)|_H=\tau}L\Gamma_\iota(R_c)]_p, \end{align} where we put $R_c:=\{z \in D \mid \mathcal O_F \supset z \mathfrak a_c \mathfrak f\mathfrak p_\iota \in c\}$. We will use the following properties of the classical or $p$-adic multiple gamma functions in the proof of Theorem \ref{main}.
\begin{prp} \label{eZ} \begin{enumerate} \item Let $R$ be as in {\rm Definition \ref{zetaR}-(i)}, $\alpha \in F_+$. Then we have \begin{align*} L\Gamma_\iota(R)-L\Gamma_\iota(\alpha R)=\zeta_\iota(0,R)\log \iota(\alpha). \end{align*} \item Let $R$ be as in {\rm Definition \ref{zetaR}-(ii)}, $\alpha \in F_+$. Then we have \begin{align*} L\Gamma_{\iota,p}(R)-L\Gamma_{\iota,p}(\alpha R)=\zeta_\iota(0,R)\log_p \iota(\alpha). \end{align*} \end{enumerate} \end{prp}
\begin{proof} The assertions follow from $\zeta_\iota(s,\alpha R)=\iota(\alpha)^{-s}\zeta_\iota(s,R)$ immediately. \end{proof}
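For instance, for the first assertion, differentiating $\zeta_\iota(s,\alpha R)=\iota(\alpha)^{-s}\zeta_\iota(s,R)$ at $s=0$ gives
\begin{align*}
L\Gamma_\iota(\alpha R)=-\log\iota(\alpha)\,\zeta_\iota(0,R)+L\Gamma_\iota(R),
\end{align*}
which is exactly {\rm (i)}; the second assertion follows in the same way, with $\log_p$ in place of $\log$.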
We also recall Shintani's multiple zeta functions in {\rm \cite[(1.1)]{Sh}} which we need in subsequent sections.
\begin{dfn} \label{Sz} \begin{enumerate} \item Let $A=(a_{ij})$ be an $(l\times r)$-matrix with $a_{ij} \in \mathbb R_{+}$, $\bm x \in \mathbb R_{+}^r$, $\bm \chi=(\chi_1,\dots,\chi_r) \in (\mathbb C^\times)^r$ with
$|\chi_i|\leq 1$. Then Shintani's multiple zeta function is defined as \begin{align*} \zeta(s,A,\bm x,\bm \chi):=\sum_{(m_1,\dots,m_r) \in \mathbb Z_{\geq 0}^r} \left(\prod_{j=1}^r\chi_j^{m_j}\right)\left(\prod_{i=1}^l \left(\sum_{j=1}^ra_{ij}(m_j+x_j)\right)\right)^{-s}. \end{align*} This series converges for $\mathrm{Re}(s)>\frac{r}{l}$, has a meromorphic continuation to the whole $s$-plane, and is analytic at $s=0$. \item Let $\bm x$, $\bm \chi$ be as in {\rm (i)}. For $\bm v=(v_1,\dots,v_r) \in F_+^r$, we consider two kinds of Shintani's multiple zeta functions: \begin{enumerate} \item Shintani's multiple zeta function with $l=1$: \begin{align*} \zeta(s,\bm v,\bm x,\bm \chi)=\sum_{(m_1,\dots,m_r) \in \mathbb Z_{\geq 0}^r}\left(\prod_{j=1}^r\chi_j^{m_j}\right)\left(\sum_{j=1}^r v_j(m_j+x_j)\right)^{-s}. \end{align*} Here we consider $v_i \in F_+ \stackrel{\mathrm{id}}\hookrightarrow \mathbb R_{+}$. \item Let $A$ be the $(n\times r)$-matrix whose row vectors are $\iota(\bm v)$ $(\iota \in \mathrm{Hom}(F,\mathbb R),\ n:=[F:\mathbb Q])$. Then we put \begin{align*} \zeta_N(s,\bm v,\bm x,\bm \chi):=\zeta(s,A,\bm x,\bm \chi) =\sum_{(m_1,\dots,m_r) \in \mathbb Z_{\geq 0}^r}\left(\prod_{j=1}^r\chi_j^{m_j}\right)N\left(\sum_{j=1}^r v_j(m_j+x_j)\right)^{-s}. \end{align*} \end{enumerate} \item Let $R$ be as in {\rm Definition \ref{zetaR}-(i)}. We define \begin{align*} \zeta_N(s,R):=\sum_{z \in R} N z^{-s}:=\sum_{i=1}^k \zeta_N(s,\bm v_i,\bm x_i,(1,\dots,1)). \end{align*} \end{enumerate} \end{dfn}
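In the simplest case $l=r=1$, $A=(a)$ with $a \in \mathbb R_+$, and $\bm\chi=(1)$, this is just a rescaled Hurwitz zeta function:
\begin{align*}
\zeta(s,(a),x,(1))=\sum_{m \geq 0}\bigl(a(m+x)\bigr)^{-s}=a^{-s}\zeta(s,x),
\end{align*}
where $\zeta(s,x)$ denotes the Hurwitz zeta function.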
\section{$p$-adic measures associated with zeta values}
We consider the following two kinds of integration.
\begin{dfn} Let $K$ be a finite extension of $\mathbb Q_p$, $O$ the ring of integers of $K$, $P$ the maximal ideal of $O$. \begin{enumerate} \item We say $\nu$ is a $p$-adic measure on $O$ if for each open compact subset $U \subset O$, it takes the value $\nu(U)\in K$ satisfying \begin{enumerate} \item $\nu(U\coprod U')=\nu(U)+\nu(U')$ for disjoint open compact subsets $U,U'$.
\item $|\nu(U)|_p$'s are bounded. \end{enumerate} We say a $p$-adic measure $\nu$ is a $\mathbb Z$-valued measure if $\nu(U) \in \mathbb Z$. \item Let $\nu$ be a $p$-adic measure, $f\colon O\rightarrow O$ a continuous map. We define \begin{align*} \int_Of(x)d\nu(x):=\lim_{\leftarrow}\sum_{\overline a \in O/P^m}\nu(f^{-1}(a+P^m))f(a) \in \lim_{\leftarrow} O/P^m=O. \end{align*} \item Let $\nu$ be a $\mathbb Z$-valued measure, $f\colon (O-P^e) \rightarrow (O-P^e)$ a continuous map $(e \in \mathbb N)$. We define \begin{align*} \times\hspace{-1.1em}\int_{O-P^e}f(x)d\nu(x):=\lim_{\leftarrow}\prod_{\overline a\in (O-P^e)/(1+P^m)}f(a)^{\nu(f^{-1}(a+P^m))} \in O. \end{align*} \end{enumerate} \end{dfn}
We recall the setting in \cite{Da}. Let $F$ be a totally real field of degree $n$, $\mathfrak f$ an integral ideal of $F$, $\mathfrak p:=\mathfrak p_{\mathrm{id}}$ the prime ideal corresponding to the $p$-adic topology on $F$ induced by $\mathrm{id}\colon F \hookrightarrow \mathbb C_p$. We assume that $\mathfrak p \nmid \mathfrak f$.
\begin{dfn}[{\cite[Definitions 3.8, 3.9]{Da}}] Let $\eta$ be a prime ideal of $F$. \begin{enumerate} \item We say $\eta$ is good for a cone $C(v_1,\dots,v_r)$ if $v_i \in \mathcal O_F-\eta$ for all $i$ and if $N\eta$ is a rational prime (i.e., the residue degree $=1$). \item We say $\eta$ is good for a Shintani set $D$ if $D$ can be expressed as a finite disjoint union of cones for which $\eta$ is good. \end{enumerate} \end{dfn}
\begin{dfn}[{\cite[Definitions 3.13, 3.16, Conjecture 3.21]{Da}}] \label{assump} We take an element $\pi \in \mathcal O_{F,+}$, a prime ideal $\eta$, and a Shintani domain $\mathcal D_\mathfrak f$ $\bmod E_{\mathfrak f,+}$ satisfying the following conditions. \begin{enumerate} \item Let $e$ be the order of $\mathfrak p$ in $C_\mathfrak f$. We fix a generator $\pi$ of $\mathfrak p^e$ satisfying $\pi\in \mathcal O_{F,+}$, $\pi \equiv 1 \bmod \mathfrak f$. \item $N\eta \geq n+2$ and $(N\eta,\mathfrak f \mathfrak p)=1$. \item The residue degree of $\eta$ is $1$ and the ramification degree of $\eta$ is at most $N\eta-2$. \item $\eta$ is ``simultaneously'' good for $\mathcal D_\mathfrak f,\pi^{-1}\mathcal D_\mathfrak f$ in the following sense:
There exist vectors $\bm v_j \in (\mathcal O_{F,+}- \eta)^{r(j)}$, units $\epsilon_j\in E_{\mathfrak f,+}$ $(j\in J',\ |J'|<\infty)$ satisfying \begin{align*} \mathcal D_\mathfrak f=\coprod_{j\in J'} C(\bm v_j), \quad \pi^{-1}\mathcal D_\mathfrak f=\coprod_{j\in J'} \epsilon_j C(\bm v_j). \end{align*} \end{enumerate} \end{dfn}
\begin{rmk}
Dasgupta {\rm \cite{Da}} took a suitable set $T$ of prime ideals instead of one prime ideal $\eta$. In this article, we assume that $|T|=1$ for simplicity. \end{rmk}
We denote by $F_\mathfrak p,\mathcal O_{F_\mathfrak p}$ the completion of $F$ at $\mathfrak p$, the ring of integers of $F_\mathfrak p$ respectively.
\begin{dfn}[{\cite[Definitions 3.13, 3.17]{Da}}] \label{ueta} Let $\pi,\eta,\mathcal D_\mathfrak f$ be as in {\rm Definition \ref{assump}}, $\mathfrak b$ a fractional ideal of $F$ relatively prime to $\mathfrak f \mathfrak pN\eta$. We put \begin{align*} F^\times_\mathfrak f:=\{z\in F^\times\mid z\equiv 1\ {\bmod}^*\ \mathfrak f\}. \end{align*} \begin{enumerate} \item For an open compact subset $\mathcal U \subset \mathcal O_{F_\mathfrak p}$, a Shintani set $D$, we put \begin{align*} \nu(\mathfrak b,D,\mathcal U) &:=\zeta_N(0,F^\times_\mathfrak f\cap\mathfrak b^{-1}\cap D\cap\mathcal U), \\ \nu_\eta(\mathfrak b,D,\mathcal U)&:= \nu(\mathfrak b,D,\mathcal U)-N\eta \nu(\mathfrak b\eta^{-1},D,\mathcal U). \end{align*} Here $\zeta_N(s,R)$ is defined in {\rm Definition \ref{Sz}}. By {\rm \cite[Proposition 3.12]{Da}} we see that \begin{itemize} \item When $\eta$ is good for $D$, we have $\nu_\eta(\mathfrak b,D,\mathcal U) \in \mathbb Z[N\eta^{-1}]$. \item When $\eta$ is good for $D$ and $N\eta \geq n +2$, we have $\nu_\eta(\mathfrak b,D,\mathcal U) \in \mathbb Z$. \end{itemize} \item Assume that $\eta$ is good for $D$ and that $\eta \nmid p$. We define a $p$-adic measure $\nu_\eta(\mathfrak b,D)$ on $\mathcal O_{F_\mathfrak p}$ by \begin{align*} \nu_\eta(\mathfrak b,D)(\mathcal U):=\nu_\eta(\mathfrak b,D,\mathcal U). \end{align*} Under the assumption of {\rm Definition \ref{assump}}, $\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f)$ is a $\mathbb Z$-valued measure. \item For $\tau\in \mathrm{Gal}(H_\mathfrak f/F)$, we put \begin{align*} \zeta_\mathfrak f(s,\tau)& :=\sum_{\mathfrak a\subset\mathcal O_F,\ (\frac{H_\mathfrak f/F}{\mathfrak a})=\tau,\ (\mathfrak{a,f})=1} N\mathfrak a^{-s}, \\ \zeta_{\mathfrak f,\eta}(s,\tau)&:=\zeta_\mathfrak f(s,\tau) -N\eta^{1-s}\zeta_\mathfrak f(s,\tau (\tfrac{H_\mathfrak f/F}{\eta^{-1}} )). \end{align*} Here $H_\mathfrak f$ denotes the narrow ray class field modulo $\mathfrak f$. \item We define \begin{align*} \epsilon_\eta(\mathfrak b,\mathcal D_\mathfrak f,\pi)&:=\prod_{\epsilon\in E_{\mathfrak f,+}} \epsilon^{\nu_\eta(\mathfrak b,\epsilon\mathcal D_\mathfrak f\cap\pi^{-1}\mathcal D_\mathfrak f,\mathcal O_{F_\mathfrak p})} \in E_{\mathfrak f,+}, \\ u_\eta(\mathfrak b,\mathcal D_\mathfrak f)&:= \epsilon_\eta(\mathfrak b,\mathcal D_\mathfrak f,\pi) \pi^{\zeta_{\mathfrak f,\eta}(0,(\frac{H_\mathfrak f/F}{\mathfrak b}))} \times\hspace{-1.1em}\int_{\mathbf O}x\ d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f,x)\in F_\mathfrak p^\times, \end{align*} where $\mathbf O:=\mathcal O_{F_\mathfrak p}-\pi\mathcal O_{F_\mathfrak p}$. The product in the first line is actually a finite product since $\nu_\eta(\mathfrak b,\epsilon\mathcal D_\mathfrak f\cap\pi^{-1}\mathcal D_\mathfrak f,\mathcal O_{F_\mathfrak p})=0$ for all but finitely many $\epsilon\in E_{\mathfrak f,+}$. \end{enumerate} \end{dfn}
\begin{cnj}[{\cite[Conjecture 3.21]{Da}}] \label{Dc} Let $\pi,\eta,\mathcal D_\mathfrak f$ be as in {\rm Definition \ref{assump}}, $H$ the fixed subfield of $H_\mathfrak f$ under $(\frac{H_\mathfrak f/F}{\mathfrak p})$. \begin{enumerate} \item Let $\tau \in \mathrm{Gal}(H/F)$. For a fractional ideal $\mathfrak b$ relatively prime to $\mathfrak f \mathfrak pN\eta$ satisfying $(\frac{H/F}{\mathfrak b})=\tau$, we put \begin{align*} u_\eta(\tau):=u_\eta(\mathfrak b,\mathcal D_\mathfrak f). \end{align*} Then $u_\eta(\tau)$ depends only on $\mathfrak f,\tau,\eta$, not on the choices of $\mathcal D_\mathfrak f,\mathfrak b$. \item For any $\tau \in \mathrm{Gal}(H/F)$, $u_\eta(\tau)$ is a $\mathfrak p$-unit of $H$ satisfying $u_\eta(\tau)\equiv 1 \bmod \eta$. \item For any $\tau,\tau' \in \mathrm{Gal}(H/F)$, we have $u_\eta(\tau\tau')=u_\eta(\tau)^{\tau'}$. \end{enumerate} \end{cnj}
\section{The main results}
We keep the notation in the previous sections: Let $F$ be a totally real field of degree $n$, $H_\mathfrak f$ the narrow ray class field modulo $\mathfrak f$. We assume that the prime ideal $\mathfrak p$ corresponding to the $p$-adic topology on $F$ does not divide $\mathfrak f$. Let $H$ be the fixed subfield of $H_\mathfrak f$ under $(\frac{H_\mathfrak f/F}{\mathfrak p})$. For $\tau\in \mathrm{Gal}(H/F)$, let $Y_p(\tau):=Y_p(\tau,\mathrm{id})$ be as in Definition \ref{Yp}. For a fractional ideal $\mathfrak b$ relatively prime to $\mathfrak f \mathfrak pN\eta$, let $u_\eta(\mathfrak b,\mathcal D_\mathfrak f)$ be as in Definition \ref{ueta}-(iv).
\begin{thm} \label{main} We have \begin{align*} \log_p(u_\eta(\mathfrak b,\mathcal D_\mathfrak f)) =-Y_p((\tfrac{H/F}{\mathfrak b})) +N\eta\,Y_p((\tfrac{H/F}{\mathfrak b\eta^{-1}})) . \end{align*} \end{thm}
\begin{crl} \label{crlofmain} {\rm Conjecture \ref{KYc}} implies \begin{align*} u_\eta(\mathfrak b,\mathcal D_\mathfrak f)^{h_H} \equiv \prod_{\sigma\in\mathrm{Gal}(H/F)}\alpha_H^{\zeta_{\mathfrak f,\eta}(0,\sigma^{-1}) (\frac{H/F}{\mathfrak b}) \sigma}\bmod \ker\log_p, \end{align*} where $\mathfrak p_H$, $h_H$, $\alpha_H$ are the prime ideal of $H$ corresponding to the $p$-adic topology on $H$, the class number of $H$, a generator of $\mathfrak p_H^{h_H}$. \end{crl}
We prepare some lemmas in order to prove this theorem.
\begin{lmm}[{\cite[Th\'eor\`eme 13]{CN2}}] \label{CN} Let $\bm v \in F_+^r$, $z\in F_+$, $\bm \xi=(\xi_1,\dots,\xi_r)$ with $\xi_i$ roots of unity, $\neq 1$. For $k\in \mathbb Z_{\geq 0}$, we have \begin{align*} \zeta_N(-k,\bm v,z,\bm \xi) &=\sum_{\bm m=(m_1,\dots,m_r)\in \mathbb N^r}\frac{\sum_{\bm l=(l_1,\dots,l_r),\ 1\leq l_i\leq m_i} \begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}N(z-\bm l\,{}^t \bm v)^k}{(\bm 1-\bm \xi)^{\bm m}}, \\ \zeta(-k,\bm v,z,\bm \xi) &=\sum_{\bm m=(m_1,\dots,m_r)\in \mathbb N^r}\frac{\sum_{\bm l=(l_1,\dots,l_r),\ 1\leq l_i\leq m_i} \begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}(z-\bm l\,{}^t \bm v)^k}{(\bm 1-\bm \xi)^{\bm m}}. \end{align*} Here we put $(\bm 1-\bm \xi)^{\bm m}:=\prod_{i=1}^{r}(1-\xi_i)^{m_i}$, $\begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}:=\prod_{i=1}^r\left((-1)^{l_i-1}\binom{m_i-1}{l_i-1}\right)$ with the binomial coefficient $\binom{m_i-1}{l_i-1}$. The sum over $\bm m$ is actually a finite sum since we have $\sum_{\bm l}\begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}N(z-\bm l\,{}^t\bm v)^k =\sum_{\bm l}\begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}(z-\bm l\,{}^t\bm v)^k=0$ if $m_i$ is large enough. \end{lmm}
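As a sanity check, consider the simplest case $r=1$, $k=0$: the inner sum $\sum_{l=1}^{m}(-1)^{l-1}\binom{m-1}{l-1}$ vanishes for $m\geq 2$ and equals $1$ for $m=1$, so both right-hand sides reduce to
\begin{align*}
\frac{1}{1-\xi_1},
\end{align*}
which is indeed the well-known value at $s=0$ of the corresponding Lerch-type series $\sum_{m\geq 0}\xi_1^m\,(z+m v_1)^{-s}$.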
\begin{lmm} \label{key} Let $\nu_\eta(\mathfrak b,D,\mathcal U)$ be as in {\rm Definition \ref{ueta}-(i)}. Assume that $\eta$ is good for $D$. Then we have \begin{align*} \nu_\eta(\mathfrak b,D,\mathcal U) =\zeta(0,F^\times_\mathfrak f\cap\mathfrak b^{-1}\cap D\cap\mathcal U) -N\eta\,\zeta(0,F^\times_\mathfrak f\cap\mathfrak b^{-1}\eta\cap D\cap\mathcal U). \end{align*} \end{lmm}
\begin{proof} It is enough to show the statement when \begin{itemize} \item $D$ is a cone $C(\bm v)$ with $\bm v=(v_1,\dots,v_r)$, $v_i \in \mathcal O_F-\eta$. \item $\mathcal U$ is of the form $a+\mathfrak p^m \mathcal O_{F_\mathfrak p}$ ($m \in \mathbb N$, $a \in \mathcal O_{F_\mathfrak p}$). \end{itemize} Put $R:=F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap C(\bm v) \cap(a+\mathfrak p^m \mathcal O_{F_\mathfrak p})$. By definition we have \begin{align} \nu_\eta(\mathfrak b,C(\bm v),a+\mathfrak p^m \mathcal O_{F_\mathfrak p}) &=\zeta_N(0,R)-N\eta\,\zeta_N(0,\{z\in R \mid \mathrm{ord}_\eta z>0\}) \notag \\ &=[\sum_{z\in R}Nz^{-s}-N\eta \sum_{z\in R,\ \mathrm{ord}_\eta z>0}Nz^{-s}]_{s=0}. \label{eq1} \end{align} Let $L$ be a positive integer satisfying $L\in \mathfrak f\mathfrak p^m\mathfrak b^{-1}$, $(\eta,L)=1$. Then we have \begin{align*} \sum_{z\in R}Nz^{-s}&=\sum_{\bm x\in R_a}\sum_{\bm m \in \mathbb Z_{\geq 0}^r}N( (\bm x+\bm m)\,{}^t (L\bm v))^{-s}, \\ R_a&:=\{\bm x \in (\mathbb Q\cap (0,1])^r \mid \bm x\,{}^t (L\bm v) \in F^\times_\mathfrak f \cap\mathfrak b^{-1} \cap(a+\mathfrak p^m \mathcal O_{F_\mathfrak p})\}. \end{align*} Since $N\eta$ is a rational prime, the following homomorphism is a surjection. \begin{align*} \mathbb Z\rightarrow \mathbb Z/N\eta \cong \mathcal O_F/\eta \cong {\mathcal O_F}_{(\eta)}/\eta{\mathcal O_F}_{(\eta)}. \end{align*} Here we denote the localization of $\mathcal O_F$ at $\eta$ by ${\mathcal O_F}_{(\eta)}$. Hence for each $\bm x\in R_a$, there exists an integer $n_{\bm x}$ satisfying $\bm x\,{}^t (L\bm v) \equiv n_{\bm x} \bmod \eta{\mathcal O_F}_{(\eta)}$. Similarly we take $n_i$ satisfying $Lv_i \equiv n_i \bmod \eta{\mathcal O_F}_{(\eta)}$ and put $\bm n_{L\bm v}:=(n_1,\dots,n_r)$. Then the following are equivalent: \begin{align*} \mathrm{ord}_\eta ((\bm x+\bm m)\,{}^t (L\bm v))>0 \Leftrightarrow n_{\bm x}+\bm m\,{}^t \bm n_{L\bm v} \equiv 0 \bmod N\eta. \end{align*} Let $\zeta$ be a primitive $N\eta$th root of unity. We put $\xi_{\bm x}:=\zeta^{n_{\bm x}}$, $\xi_i:=\zeta^{n_i}$, $\bm \xi_{L\bm v}:=(\xi_1,\dots,\xi_r)$. Note that $\xi_i\neq 1$ for any $i$. Then we have \begin{align*} \sum_{\lambda=1}^{N\eta-1}(\xi_{\bm x}\bm \xi_{L\bm v}^{\bm m})^\lambda= \begin{cases} -1 & (\mathrm{ord}_\eta ((\bm x+\bm m)\,{}^t (L\bm v))=0), \\ N\eta-1 & (\mathrm{ord}_\eta ((\bm x+\bm m)\,{}^t (L\bm v))>0). \end{cases} \end{align*} Here we put $\bm \xi_{L\bm v}^{\bm m}:=\prod_{i=1}^r \xi_i^{m_i}$. It follows that \begin{align} \sum_{\bm x \in R_a}\sum_{\lambda=1}^{N\eta-1}\xi_{\bm x}^\lambda \zeta_N(s,L\bm v,\bm x,\bm \xi_{L\bm v}^\lambda )&= \sum_{\bm x \in R_a}\sum_{\bm m \in \mathbb Z_{\geq 0}^r}\sum_{\lambda=1}^{N\eta-1} (\xi_{\bm x}\bm \xi_{L\bm v}^{\bm m})^\lambda N((\bm x+\bm m)\,{}^t (L\bm v))^{-s} \notag \\ &=-\sum_{z\in R}Nz^{-s}+N\eta \sum_{z\in R,\ \mathrm{ord}_\eta z>0}Nz ^{-s}. \label{eq2} \end{align} Similarly we obtain \begin{align} \sum_{\bm x \in R_a}\sum_{\lambda=1}^{N\eta-1}\xi_{\bm x}^\lambda \zeta(s,L\bm v,\bm x,\bm \xi_{L\bm v}^\lambda ) &=-\sum_{z\in R}z^{-s}+N\eta \sum_{z\in R,\ \mathrm{ord}_\eta z>0}z ^{-s} \notag \\ &=-\zeta(s,R)+N\eta\,\zeta(s,\{z\in R \mid \mathrm{ord}_\eta z>0\}). \label{eq3} \end{align} By Lemma \ref{CN}, we have for $\bm x \in R_a$ \begin{align} \zeta_N(0,L\bm v,\bm x,\bm \xi_{L\bm v}^\lambda )=\zeta(0,L\bm v,\bm x,\bm \xi_{L\bm v}^\lambda ). \label{eq4} \end{align} Then the assertion follows from (\ref{eq1}), (\ref{eq2}), (\ref{eq3}), (\ref{eq4}). \end{proof}
Dasgupta's $p$-adic integration $\int d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f,x)$ is originally associated with special values of multiple zeta functions ``with the norm'' $\zeta_N(\cdots)$. By the above Lemma, we can rewrite it in terms of special values of multiple zeta functions ``without the norm'' $\zeta(\cdots)$. This observation is one of the main discoveries in this paper.
\begin{lmm} \label{intzeta} Let $\nu_\eta(\mathfrak b,D,\mathcal U)$, $\mathbf O=\mathcal O_{F_\mathfrak p}-\pi\mathcal O_{F_\mathfrak p}$ be as in {\rm Definition \ref{ueta}}. Assume that $\eta$ is good for $D$. Then we have for $k\in \mathbb Z_{\geq 0}$, $s \in \mathbb Z_p$ \begin{align} \int_{\mathbf O}x^k d\nu_\eta(\mathfrak b, D,x) &=\zeta(-k,F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap D \cap \mathbf O) -N\eta\,\zeta(-k,F^\times_\mathfrak f \cap\mathfrak b^{-1}\eta\cap D \cap\mathbf O), \notag \\ \int_{\mathbf O}\langle x\rangle^{-s} d\nu_\eta(\mathfrak b, D,x) &=\zeta_p(s,F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap D \cap \mathbf O) -N\eta\,\zeta_p(s,F^\times_\mathfrak f \cap\mathfrak b^{-1}\eta\cap D \cap\mathbf O). \label{pmzitopm} \end{align} Here we put $\langle x\rangle:=p^{-\mathrm{ord}_p x}\theta_p(x)^{-1}x$ by using $\mathrm{ord}_p,\theta_p$ in {\rm (\ref{ordtheta})}. \end{lmm}
\begin{proof} It is enough to show the statement when $D=C(\bm v)$ with $\bm v=(v_1,\dots,v_r)$, $v_i \in \mathcal O_F-\eta$. By definition we can write \begin{align*} \int_{\mathbf O}x^k d\nu_\eta(\mathfrak b, C(\bm v),x) =\lim_{\leftarrow }\sum_{\overline a\in \mathbf O/(1+\mathfrak p^m\mathcal O_{F_\mathfrak p})} a^k\nu_\eta(\mathfrak b,C(\bm v),a(1+\mathfrak p^m\mathcal O_{F_\mathfrak p})). \end{align*} By Lemmas \ref{CN}, \ref{key}, we have \begin{align*} a^k\nu_\eta(\mathfrak b,C(\bm v),a(1+\mathfrak p^m\mathcal O_{F_\mathfrak p})) =-\sum_{\bm m\in \mathbb N^r}\sum_{\lambda=1}^{N\eta-1}\sum_{\bm x \in R_a} \frac{\xi_{\bm x}^\lambda \sum_{1\leq l_i \leq m_i}\begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}a^k}{(\bm 1-\bm \xi_{L\bm v}^\lambda)^{\bm m}}, \end{align*} where $L,R_a,\xi_{\bm x},\bm \xi_{L\bm v}$ are as in the proof of Lemma \ref{key}. On the other hand, by Lemma \ref{CN} again, we obtain \begin{align*} &\zeta(-k,F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap C(\bm v)\cap\mathbf O) -N\eta\,\zeta(-k,F^\times_\mathfrak f \cap\mathfrak b^{-1}\eta\cap C(\bm v)\cap\mathbf O) \\ &=-\sum_{\overline a \in \mathbf O/(1+\mathfrak p^m\mathcal O_{F_\mathfrak p})}\sum_{\bm x \in R_a} \sum_{\bm m \in \mathbb N^r}\sum_{\lambda=1}^{N\eta-1}\frac{\xi_{\bm x}^\lambda \sum_{1\leq l_i \leq m_i} \begin{Bmatrix}\bm m\\\bm l\end{Bmatrix}((\bm x-\bm l)\,{}^t (L\bm v))^k}{(\bm 1-\bm \xi_{L\bm v}^\lambda )^{\bm m}}. \end{align*} By definition, we see that $L\in \mathfrak p^m$, $\bm x\,{}^t (L\bm v) \equiv a \bmod \mathfrak p^m\mathcal O_{F_\mathfrak p}$ for $\bm x \in R_a$. It follows that \begin{align*} a ^k \equiv ((\bm x-\bm l)\,{}^t (L\bm v))^k \bmod \mathfrak p^m \mathcal O_{F_\mathfrak p} \quad (\bm x \in R_a), \end{align*} Hence the first assertion is clear. The second assertion follows from the $p$-adic interpolation property (\ref{interpolation}). \end{proof}
\begin{proof}[Proof of {\rm Theorem \ref{main}}] For a fractional ideal $\mathfrak b$, a Shintani set $D$, an open compact subset $\mathcal U \subset \mathcal O_{F_\mathfrak p}$, and for $*=\emptyset,p$, we put \begin{align*} L\Gamma_*(\mathfrak b,D,\mathcal U)&:=L\Gamma_*(F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap D\cap\mathcal U), \\ L\Gamma_{\eta,*} (\mathfrak b,D,\mathcal U) &:=L\Gamma_*(\mathfrak b,D,\mathcal U)-N\eta\,L\Gamma_*(\mathfrak b\eta^{-1},D,\mathcal U) \end{align*} whenever each function is well-defined. It suffices to show the following three equalities: \begin{align} \log_p(\epsilon_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\pi) \pi^{\zeta_{\mathfrak f,\eta}(0,(\frac{H_\mathfrak f/F}{\mathfrak b}))}) &=[L\Gamma_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O)]_p, \label{Cl1} \\ \log_p(\times\hspace{-1.1em}\int_{\mathbf O}x\ d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f ,x)) &=-L\Gamma_{\eta,p}(\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O), \label{Cl2} \\ L\Gamma_p(\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O)-[L\Gamma(\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O)]_p &=Y_p((\tfrac{H/F}{\mathfrak b})). \label{Cl3} \end{align} Let $\bm v_j$, $\epsilon_j$ ($j \in J'$) be as in Definition \ref{assump}-(iv). Since $\mathcal D_\mathfrak f,\pi^{-1}\mathcal D_\mathfrak f$ are fundamental domains of $F\otimes \mathbb R_+/E_{\mathfrak f,+}$, we see that \begin{align*} \epsilon\mathcal D_\mathfrak f\cap\pi^{-1}\mathcal D_\mathfrak f =\coprod_{j \in J',\ \epsilon_j =\epsilon} \epsilon_j C(\bm v_j) \quad (\epsilon \in E_{\mathfrak f,+}) \end{align*} Namely we have \begin{align*} \epsilon_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\pi)= \prod_{j\in J'}\epsilon_j ^{\nu_\eta(\mathfrak b,\epsilon_jC(\bm v_j),\mathcal O_{F_\mathfrak p})}. \end{align*}
By Lemma \ref{key}, we can write \begin{align*} &\nu_\eta(\mathfrak b,\epsilon_jC(\bm v_j),\mathcal O_{F_\mathfrak p}) \\ &=\zeta(0,F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap\epsilon_j C(\bm v_j)\cap\mathcal O_{F_\mathfrak p})- N\eta\,\zeta(0,F^\times_\mathfrak f \cap\mathfrak b^{-1}\eta\cap\epsilon_j C(\bm v_j)\cap \mathcal O_{F_\mathfrak p}). \end{align*} Therefore by Proposition \ref{eZ}-(i) we obtain \begin{align} \label{lg1} \log_p(\epsilon_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\pi)) &=[\sum_{j\in J'}L\Gamma_\eta (\mathfrak b,C(\bm v_j),\mathcal O_{F_\mathfrak p}) -\sum_{j\in J'}L\Gamma_\eta (\mathfrak b,\epsilon_jC(\bm v_j) ,\mathcal O_{F_\mathfrak p})]_p \notag \\ &=[L\Gamma_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\mathcal O_{F_\mathfrak p}) -L\Gamma_\eta (\mathfrak b,\pi^{-1}\mathcal D_\mathfrak f ,\mathcal O_{F_\mathfrak p})]_p. \end{align} We easily see that \begin{align*} \zeta_{\mathfrak f,\eta}(0,(\tfrac{H_\mathfrak f/F}{\mathfrak b})) &=\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f ,\mathcal O_{F_\mathfrak p}), \\ \pi(F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap\pi^{-1}\mathcal D_\mathfrak f \cap\mathcal O_{F_\mathfrak p}) &=F^\times_\mathfrak f \cap\pi\mathfrak b^{-1}\cap\mathcal D_\mathfrak f \cap\pi\mathcal O_{F_\mathfrak p}. \end{align*} Hence, by Proposition \ref{eZ}-(i) again, we get \begin{align} \label{lg2} \log_p(\pi^{\zeta_{\mathfrak f,\eta}(0,(\frac{H_\mathfrak f/F}{\mathfrak b}))}) =[L\Gamma_\eta (\mathfrak b,\pi^{-1}\mathcal D_\mathfrak f ,\mathcal O_{F_\mathfrak p}) -L\Gamma_\eta (\pi^{-1}\mathfrak b,\mathcal D_\mathfrak f ,\pi\mathcal O_{F_\mathfrak p})]_p. \end{align} Since $(F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap\mathcal D_\mathfrak f \cap\mathbf O) \coprod (F^\times_\mathfrak f \cap\pi\mathfrak b^{-1}\cap\mathcal D_\mathfrak f \cap\pi\mathcal O_{F_\mathfrak p}) =F^\times_\mathfrak f \cap\mathfrak b^{-1}\cap\mathcal D_\mathfrak f \cap\mathcal O_{F_\mathfrak p}$, we have \begin{align} \label{lg3} L\Gamma_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O) =L\Gamma_\eta (\mathfrak b,\mathcal D_\mathfrak f ,\mathcal O_{F_\mathfrak p}) -L\Gamma_\eta (\pi^{-1}\mathfrak b,\mathcal D_\mathfrak f ,\pi\mathcal O_{F_\mathfrak p}). \end{align} Then the assertion (\ref{Cl1}) follows from (\ref{lg1}), (\ref{lg2}), (\ref{lg3}).
Next, differentiating (\ref{pmzitopm}) at $s=0$, we obtain \begin{align*} -\int_{\mathbf O}\log_p x\, d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f ,x) =L\Gamma_{\eta,p}(\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O). \end{align*} By definition, we have $\log_p(\times\hspace{-0.9em}\int_{\mathbf O}x\ d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f ,x)) =\int_{\mathbf O}\log_p x\, d\nu_\eta(\mathfrak b,\mathcal D_\mathfrak f ,x)$. Hence the assertion (\ref{Cl2}) is clear.
Finally we prove (\ref{Cl3}). Let $D$ be a Shintani domain $\bmod E_+$. For each $c \in C_{\mathfrak f \mathfrak p}$, we take an integral ideal $\mathfrak a_c$ satisfying $\mathfrak a_c\mathfrak f \in \pi(c)$, and put $R_c:=\{z \in D \mid \mathcal O_F \supset z \mathfrak a_c \mathfrak f\mathfrak p \in c\}$. By (\ref{rewrite}) we can write \begin{align*}
Y_p((\tfrac{H/F}{\mathfrak b}))=\sum_{c \in C_{\mathfrak f\mathfrak p},\ \mathrm{Art}(\overline c)|_H=(\frac{H/F}{\mathfrak b})}L\Gamma_p(R_c)
-[\sum_{c \in C_{\mathfrak f\mathfrak p},\ \mathrm{Art}(\overline c)|_H=(\frac{H/F}{\mathfrak b})}L\Gamma(R_c)]_p. \end{align*} Since $H$ is the fixed subfield under $(\frac{H_\mathfrak f/F}{\mathfrak p})$, we may replace \begin{align*}
\sum_{c \in C_{\mathfrak f\mathfrak p},\ \mathrm{Art}(\overline c)|_H=(\frac{H/F}{\mathfrak b})}\cdots= \sum_{k=0}^{e-1}\sum_{c \in C_{\mathfrak f\mathfrak p},\ \overline c=[\mathfrak b\mathfrak p^{-k}]}\cdots, \end{align*} where $\overline c$ denotes the image under $C_{\mathfrak f\mathfrak p}\rightarrow C_\mathfrak f$, $[\mathfrak a]$ denotes the ideal class in $C_\mathfrak f$ of a fractional ideal $\mathfrak a$. On the other hand, we can write for $*=\emptyset,p$ \begin{align*} \begin{split} L\Gamma_*(\mathfrak b,\mathcal D_\mathfrak f ,\mathbf O)= \sum_{k=0}^{e-1}L\Gamma_*(\mathfrak b,\mathcal D_\mathfrak f ,\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times). \end{split} \end{align*} Therefore it suffices to show that we have for each $k$ \begin{align} &(\sum_{c \in C_{\mathfrak f\mathfrak p},\ \overline c=[\mathfrak b\mathfrak p^{-k}]}L\Gamma_p(R_c)) -L\Gamma_p(\mathfrak b,\mathcal D_\mathfrak f ,\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) \notag \\ &=[(\sum_{c \in C_{\mathfrak f\mathfrak p},\ \overline c=[\mathfrak b\mathfrak p^{-k}]}L\Gamma(R_c)) -L\Gamma(\mathfrak b,\mathcal D_\mathfrak f ,\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times)]_p. \label{Cl32} \end{align} We fix $k$. Whenever $\overline c=[\mathfrak b\mathfrak p^{-k}]$, $\pi(c) \in C_{(1)}$ is constant, so we may put $\mathfrak a_c$ to be a fixed integral ideal $\mathfrak a_0$. Then we have \begin{align*} \coprod_{c \in C_{\mathfrak f\mathfrak p},\ \overline c=[\mathfrak b\mathfrak p^{-k}]}R_c= \{z \in (\mathfrak a_0 \mathfrak f\mathfrak p)^{-1} \cap D \mid (z \mathfrak a_0 \mathfrak f\mathfrak p, \mathfrak f\mathfrak p)=1,\ [z\mathfrak a_0 \mathfrak f\mathfrak p]= [\mathfrak b\mathfrak p^{-k}] \text{ in } C_\mathfrak f\}. \end{align*} Let $\alpha_0 \in F_+$ be a generator of the principal ideal $(\mathfrak a_0\mathfrak f\mathfrak p)(\mathfrak b\mathfrak p^{-k})^{-1}$. Then the following are equivalent: \begin{center} $[z\mathfrak a_0 \mathfrak f\mathfrak p]= [\mathfrak b\mathfrak p^{-k}] \Leftrightarrow [(z\alpha_0)]=[(1)] \Leftrightarrow \exists \epsilon \in E_+$ s.t.\ $z\epsilon\alpha_0 \equiv 1 \bmod \mathfrak f$. \end{center} Hence, taking a representative set $E_0$ of $E_+/E_{\mathfrak f ,+}$, we can write \begin{align*} &\{z \in (\mathfrak a_0 \mathfrak f\mathfrak p)^{-1} \cap D \mid (z \mathfrak a_0 \mathfrak f\mathfrak p, \mathfrak f\mathfrak p)=1,\ [z\mathfrak a_0 \mathfrak f\mathfrak p]= [\mathfrak b\mathfrak p^{-k}] \text{ in } C_\mathfrak f\} \\ &=\coprod_{\epsilon \in E_0}(\epsilon\alpha_0)^{-1} (F_\mathfrak f ^\times\cap\mathfrak b^{-1}\cap\epsilon\alpha_0D\cap \mathfrak p^k\mathcal O_{F_\mathfrak p}^\times). \end{align*} Namely we have for $*=\emptyset,p$ \begin{align} \sum_{c \in C_{\mathfrak f\mathfrak p},\ \overline c=[\mathfrak b\mathfrak p^{-k}]}L\Gamma_*(R_c) =\sum_{\epsilon \in E_0}L\Gamma_*((\epsilon\alpha_0)^{-1} (F_\mathfrak f ^\times\cap\mathfrak b^{-1}\cap\epsilon\alpha_0D\cap \mathfrak p^k\mathcal O_{F_\mathfrak p}^\times)). \label{YpR0} \end{align} On the other hand, $\mathcal D_\mathfrak f ':=\coprod_{\epsilon \in E_0} \epsilon\alpha_0 D$ becomes another Shintani domain $\bmod E_{\mathfrak f,+}$, and we can write for $*=\emptyset,p$ \begin{align} \label{R0lg} L\Gamma_*(\mathfrak b,\mathcal D_\mathfrak f ',\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times)=\sum_{\epsilon \in E_0} L\Gamma_*(F_\mathfrak f ^\times\cap\mathfrak b^{-1}\cap\epsilon\alpha_0D\cap \mathfrak p^k\mathcal O_{F_\mathfrak p}^\times). 
\end{align} Then the assertion (\ref{Cl32}), replacing $\mathcal D_\mathfrak f$ with $\mathcal D_\mathfrak f'$, follows from (\ref{YpR0}), (\ref{R0lg}) and Proposition \ref{eZ}. We conclude the proof of (\ref{Cl3}) by showing that \begin{align*} L\Gamma_p(\mathfrak b,\mathcal D_\mathfrak f ,\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) -L\Gamma_p(\mathfrak b,\mathcal D_\mathfrak f ',\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) =[L\Gamma(\mathfrak b,\mathcal D_\mathfrak f ,\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) -L\Gamma(\mathfrak b,\mathcal D_\mathfrak f ',\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times)]_p. \end{align*} Note that the independence on the choice of $\mathcal D_\mathfrak f$ is also discussed in \cite[\S 5.2]{Da} under certain conditions. Similarly to \cite[Chap.\ III, Lemma 3.13]{Yo}, we see that there exist cones $C(\bm v_j)$ and units $u_j \in E_{\mathfrak f,+}$ ($j\in J''$) which satisfy \begin{align*} \mathcal D_\mathfrak f =\coprod_{j\in J''} C(\bm v_j),\quad \mathcal D_\mathfrak f '=\coprod_{j\in J''} u_jC(\bm v_j). \end{align*} Therefore it suffices to show that \begin{align*} &L\Gamma_p(\mathfrak b,C(\bm v_j),\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) -L\Gamma_p(\mathfrak b,u_jC(\bm v_j),\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) \\ &=[L\Gamma(\mathfrak b,C(\bm v_j),\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times) -L\Gamma(\mathfrak b,u_jC(\bm v_j),\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times)]_p. \end{align*} It follows from Proposition \ref{eZ} since $F_\mathfrak f ^\times\cap\mathfrak b^{-1}\cap u_jC(\bm v_j)\cap\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times =u_j(F_\mathfrak f ^\times\cap\mathfrak b^{-1}\cap C(\bm v_j)\cap\mathfrak p^k\mathcal O_{F_\mathfrak p}^\times)$. \end{proof}
\end{document}
\begin{document}
\title{On the spread of outerplanar graphs} \begin{abstract}
The spread of a graph is the difference between the largest and most negative eigenvalue of its adjacency matrix. We show that for sufficiently large $n$, the $n$-vertex outerplanar graph with maximum spread is a vertex joined to a linear forest with $\Omega(n)$ edges. We conjecture that the extremal graph is a vertex joined to a path on $n-1$ vertices. \end{abstract}
\section{Introduction} The {\em spread} of a square matrix $M$ is defined to be \[
S(M) := \max_{i,j}|\lambda_i - \lambda_j|, \] where the maximum is taken over all pairs of eigenvalues of $M$. That is, $S(M)$ is the diameter of the spectrum of $M$. The spread of general matrices has been studied in several papers (e.g. \cite{B, Deutsch, JKW, Mirsky, NylenTam, Thompson, WZL}). In this paper, given a graph $G$ we will study the spread of the adjacency matrix of $G$, and we will call this quantity the {\em spread of $G$} and denote it by $S(G)$. Since the adjacency matrix of an $n$-vertex graph is real and symmetric, it has a full set of real eigenvalues which we may order as $\lambda_1 \geq \cdots \geq \lambda_n$. In this case, the spread of $G$ is given simply by $\lambda_1 - \lambda_n$.
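For example, the star $K_{1,n-1}$ has eigenvalues $\pm\sqrt{n-1}$ together with $0$ of multiplicity $n-2$, so
\[
S(K_{1,n-1})=\sqrt{n-1}-\left(-\sqrt{n-1}\right)=2\sqrt{n-1};
\]
this graph will reappear below as a convenient lower-bound construction.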
The study of the spread of graphs was introduced in a systematic way by Gregory, Hershkowitz, and Kirkland in \cite{GHK}. Since then, the spread of graphs has been studied extensively. A problem in this area with an extremal flavor is to maximize or minimize the spread over a fixed family of graphs. This problem has been considered for trees \cite{AP}, graphs with few cycles \cite{FWG, PBA, WS}, the family of all $n$-vertex graphs \cite{variable, BRTU, alex, stanic, stevanovic, john}, bipartite graphs \cite{BRTU}, graphs with a given matching number \cite{LZZ} or girth \cite{WZS} or size \cite{LM}.
We also note that spreads of other matrices associated with a graph have been considered extensively (e.g. \cite{ADLR, FF, FWL, GZ, LMG, LL, OLNK, YRY, YL,YZLWS}), but in this paper we will focus on the adjacency matrix. This paper examines the question of maximizing the spread of an $n$-vertex outerplanar graph. A graph is {\em outerplanar} if it can be drawn in the plane with no crossings and such that all vertices are incident with the unbounded face. Similarly to Wagner's theorem characterizing planar graphs, a graph is outerplanar if and only if it does not contain either $K_{2,3}$ or $K_4$ as a minor. Maximizing the spread of this family of graphs is motivated by the extensive history on maximizing eigenvalues of planar or outerplanar graphs, for example \cite{BR, CV, CR, DM, EZ, HS, LN, SW, Rowlinson, yuan1, yuan2}.
Our main theorem comes close to determining the outerplanar graph of maximum spread. Let $P_k$ denote the path on $k$ vertices and $G\vee H$ the join of $G$ and $H$. A linear forest is a disjoint union of paths.
\begin{theorem}\label{thm: main} For $n$ sufficiently large, any graph which maximizes spread over the family of outerplanar graphs on $n$ vertices is of the form $K_1 \vee F$ where $F$ is a linear forest with $\Omega(n)$ edges. \end{theorem}
We leave it as an open problem to determine whether or not $F$ should be a path on $n-1$ vertices, and we conjecture that this is the case.
\begin{conjecture} For $n$ sufficiently large, the unique $n$-vertex outerplanar graph of maximum spread is $K_1 \vee P_{n-1}$. \end{conjecture}
\section{Preliminaries} Let $G$ be an $n$-vertex outerplanar graph of maximum spread and let $A$ be its adjacency matrix. We will frequently assume that $n$ is sufficiently large. We will use the characterization that a graph is outerplanar if and only if it does not contain $K_{2,3}$ or $K_4$ as a minor. In particular, $G$ does not contain $K_{2,3}$ as a subgraph. Given a vertex $v\in V(G)$, the neighborhood of $v$ will be denoted by $N(v)$ and its degree by $d_v$. If $f,g:\mathbb{N} \to \mathbb{R}$ we will use $f = \mathcal{O}(g)$ to mean that there exists a constant $c$ such that $f(n) \leq cg(n)$ for $n$ sufficiently large. $f = \Omega(g)$ means that $g = \mathcal{O}(f)$ and $f = \Theta(g)$ means that $f = \mathcal{O}(g)$ and $g = \mathcal{O}(f)$. We will occasionally have sequences of inequalities where we will abuse notation and mix inequality symbols with $\mathcal{O}(\cdot)$ and $\Theta(\cdot)$. \\
Let the eigenvalues of $A$ be represented as $\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n$. For any disconnected graph, adding an edge between the connected components will not decrease $\lambda_1$ and will also not decrease $-\lambda_n$. Therefore, without loss of generality, we may assume that $G$ is connected. By the Perron-Frobenius theorem we may assume that the eigenvector $\mathbf{x}$ corresponding to $\lambda_1$ has $\mathbf{x}_u > 0$ for all $u$.
Furthermore, we will normalize $\mathbf{x}$ so that it has maximum entry equal to $1$, and let $\mathbf{x}_w=1$ where $w$ is a vertex attaining the maximum entry in $\mathbf{x}$. Note there may be more than one such vertex, in which case we can arbitrarily choose and fix $w$ among all such vertices. The other eigenvector of interest to us corresponds to $\lambda_n$, call it $\mathbf{z}$. We will also normalize $\mathbf{z}$ so that its entry of largest absolute value has absolute value $1$ and let $w'$ correspond to a vertex with maximum absolute value in $\mathbf{z}$ (so $\mathbf{z}_{w'}$ equals $1$ or $-1$).
We will use the following well-known variational characterizations of the largest and smallest eigenvalues: \begin{align}
\lambda_1=\max\limits_{\mathbf{x}'\neq 0}&\frac{(\mathbf{x}')^tA\mathbf{x}'}{(\mathbf{x}')^t\mathbf{x}'}=\frac{\mathbf{x}^tA\mathbf{x}}{\mathbf{x}^t\mathbf{x}}\label{Rayleigh max}\\
\lambda_n=\min\limits_{\mathbf{z}'\neq 0}&\frac{(\mathbf{z}')^tA\mathbf{z}'}{(\mathbf{z}')^t\mathbf{z}'}=\frac{\mathbf{z}^tA\mathbf{z}}{\mathbf{z}^t\mathbf{z}}\label{Rayleigh min} \end{align}\\
An important consequence of equations \ref{Rayleigh max} and \ref{Rayleigh min} and the Perron-Frobenius Theorem is that for any proper subgraph $H$ of the connected graph $G$, we have $\lambda_1(A(G))>\lambda_1(A(H))$. Finally, we will use the following theorem from \cite{TT}.
\begin{theorem}\label{taittobin} For $n$ large enough, $K_1 \vee P_{n-1}$ has maximum spectral radius over all $n$-vertex outerplanar graphs. \end{theorem}
\section{Vertex of Maximum Degree} The main goal of this section is to prove that $G$ has a vertex of degree $n-1$. This is stated in the following theorem. \begin{theorem}\label{vertex degree n-1} For $n$ large enough, we have $d_w=n-1$. \end{theorem} As a first step, we will get preliminary upper and lower bounds on the largest and smallest eigenvalues of $G$. First we obtain an upper bound on the spectral radius.
\begin{lemma}\label{lambda_1 bounds}
$\lambda_1 \leq \sqrt{n}+1$. \end{lemma} \begin{proof} We define the graph $G_1$ to be the graph $K_1 \vee P_{n-1}$. By Theorem \ref{taittobin}, we know that any outerplanar graph on sufficiently many vertices cannot have a spectral radius larger than that of $G_1$. Now define $G_2$ as $G_1$ with another edge joining the endpoints of the path, so $G_2 = K_1 \vee C_{n-1}$. Clearly $G_1$ is a subgraph of $G_2$. Putting all this together gives us \begin{align*}
\lambda_1(G)\leq\lambda_1(G_1)<\lambda_1(G_2)= \sqrt{n}+1, \end{align*} where the last equality can be calculated using an equitable partition with two parts (the dominating vertex and the cycle).
\end{proof}
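To spell out the equitable partition computation: partitioning $V(G_2)$ into the dominating vertex and the $n-1$ cycle vertices gives an equitable partition with quotient matrix
\[
B=\begin{pmatrix} 0 & n-1\\ 1 & 2\end{pmatrix},
\]
whose eigenvalues are $1\pm\sqrt{n}$. Since the spectral radius of a connected graph is always an eigenvalue of the quotient matrix of an equitable partition, this gives $\lambda_1(G_2)=\sqrt{n}+1$.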
Next we bound $|\lambda_n|$. \begin{lemma}\label{lambda_n crude bound}
For $n$ sufficiently large, $\sqrt{n-1}-2\leq |\lambda_n| \leq \sqrt{n-1}+2$. \end{lemma} \begin{proof}
The upper bound on $|\lambda_n|$ follows immediately from Lemma \ref{lambda_1 bounds}, the well-known fact $\lambda_1\geq |\lambda_n|$ for any graph, and the inequality $\sqrt{n}+1\leq\sqrt{n-1}+2$. Now to get the lower bound, since $G$ is the outerplanar graph on $n$ vertices that maximizes spread, we have \begin{align*}
S(G)&\geq S(K_{1,n-1})\\
&=\lambda_1(K_{1,n-1})-\lambda_n(K_{1,n-1})\\
&=\sqrt{n-1}-(-\sqrt{n-1})\\
&=2\sqrt{n-1}. \end{align*} So \[ 2\sqrt{n-1} \leq \lambda_1(G) - \lambda_n(G) \leq \sqrt{n}+1 - \lambda_n(G) < \sqrt{n-1}+2 - \lambda_n(G). \] Hence we have \[ -\lambda_n \geq \sqrt{n-1}-2. \] \end{proof} Essentially the same proof also gives a lower bound for $\lambda_1$. \begin{corollary}\label{lambda_1 lower bound} For $n$ large enough we have $\lambda_1\geq \sqrt{n-1}-2$. \end{corollary} We shall use Lemma \ref{lambda_n crude bound} to obtain a lower bound on the degree of each vertex. \begin{lemma}\label{d_u lower bound z_u}
Let $u$ be an arbitrary vertex in $G$. Then $d_u>|\mathbf{z}_u|n-\mathcal{O}(\sqrt{n})$ and $d_u>\mathbf{x}_u n-\mathcal{O}(\sqrt{n})$. \end{lemma} \begin{proof} We will show the first part explicitly. When $\lambda_n$ is the smallest eigenvalue for our graph, we have \begin{align*}
|\lambda_n^2\mathbf{z}_u|&=\left|\sum\limits_{y\sim u}\sum\limits_{v\sim y} \mathbf{z}_v\right|\\
&\leq d_u+\sum\limits_{y\sim u}\sum_{\substack{v\sim y\\ v\not=u}}|\mathbf{z}_v|.
\end{align*}
Recall that an outerplanar graph cannot contain $K_{2,3}$ as a subgraph. This implies every vertex of $G$ has at most two neighbors in $N(u)$, meaning that each vertex of $G$ is counted at most twice in the double sum. Hence $\sum\limits_{y\sim u}\sum\limits_{v\sim y}|\mathbf{z}_v|\leq 2\sum\limits_{v\in V(G)}|\mathbf{z}_v|$. Note \[|\lambda_n| |\mathbf{z}_v|\leq \sum\limits_{v'\sim v}|\mathbf{z}_{v'}|\leq \sum\limits_{v'\sim v} 1=d_v.\] So we have
$$\sum_{y\sim u} \sum_{v\sim y } |\mathbf{z}_v| \leq 2\sum_{v\in V(G)} |\mathbf{z}_v|\leq \frac{2}{|\lambda_n|}\sum\limits_{v\in V(G)}d_v = \frac{4e(G)}{|\lambda_n|}\leq \frac{4(2n-3)}{|\lambda_n|},$$ as $e(G)\leq 2n-3$ by outerplanarity. Combining and using Lemma \ref{lambda_n crude bound}, we have
\[
(\sqrt{n-1}-2)^2|\mathbf{z}_u |\leq |\lambda_n^2 \mathbf{z}_u| \leq d_u + \frac{4(2n-3)}{|\lambda_n|} \leq d_u + \frac{8n}{\sqrt{n-1}-2},
\]
for $n$ sufficiently large. Isolating $d_u$ gives the result. A similar proof can be written to justify the lower bound with respect to $\mathbf{x}$ and we omit these details.
\end{proof}
\begin{lemma}\label{lmac3}
We have $d_w>n-\mathcal{O}(\sqrt{n})$ and $d_{w'}>n-\mathcal{O}(\sqrt{n})$. For every other vertex $u$ we get $|\mathbf{z}_u|, \mathbf{x}_u = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right)$, for $n$ sufficiently large. \end{lemma} \begin{proof}
The bound on $d_w$ and $d_{w'}$ follows immediately from the previous lemma and the normalization that $|\mathbf{z}_{w'}| = \mathbf{x}_w=1$. Now consider any other vertex $u$. Since $G$ contains no $K_{2,3}$, the vertex $u$ can have at most $2$ common neighbors with $w$; as all but $\mathcal{O}(\sqrt{n})$ vertices of $G$ are neighbors of $w$, this gives $d_u = \mathcal{O}(\sqrt{n})$. By Lemma \ref{d_u lower bound z_u}, we have that there are constants $c_1$ and $c_2$ such that \[ c_1\sqrt{n} > d_u\mathbf{x}_u - c_2\sqrt{n}, \] and \[
c_1\sqrt{n} > d_u|\mathbf{z}_u| - c_2\sqrt{n}, \] for sufficiently large $n$. This implies the result. \end{proof}
If $w$ and $w'$ were distinct vertices, then for sufficiently large $n$ they would share many neighbors, contradicting outerplanarity. Hence we immediately have the following important fact. \begin{corollary}\label{w and w'} For $n$ sufficiently large we have $w=w'$. \end{corollary} Hence from now on, we will denote the vertex in Corollary \ref{w and w'} by $w$, and furthermore for the remainder of the paper we will also assume that $\mathbf{z}_w = 1$ without loss of generality. Before deriving our next result quantifying the other entries of $\mathbf{z}$, we first need to define an important vertex set. \begin{definition}\label{B def} Recall that $w$ is the fixed vertex of maximum degree in $G$. Let $B=V(G)\backslash(N(w)\cup\{w\})$. \end{definition} Now we consider the $\mathbf{z}_u$ eigenvector entries of vertices in $B$. At the moment we know that each eigenvector entry for a vertex in $B$ has order at most $\frac{1}{\sqrt{n}}$. The next lemma shows that in fact the sum of all of the eigenvector entries of $B$ has this order.
\begin{lemma}\label{lsum}For $n$ large enough, we have that
$ \sum\limits_{u\in B}|\mathbf{z}_u|$ and $ \sum\limits_{u\in B}\mathbf{x}_u$ are each $ \mathcal{O}\left(\frac{1}{\sqrt{n}}\right)$. \end{lemma}
\begin{proof} Let $u\in B$. Since $u$ is not adjacent to $w$, all of the neighbors of $u$ have eigenvector entry of order at most $1/\sqrt{n}$ and the size of $B$ is also of order at most $1/\sqrt{n}$ by Lemma \ref{lmac3}. Hence there is a constant $C$ such that \[
|\lambda_n||\mathbf{z}_u| \leq \sum_{v\sim u}|\mathbf{z}_v| \leq \frac{C d_u}{\sqrt{n}}, \]
and $|B| \leq C\sqrt{n}$. Now
$$ \sum_{u\in B} |\mathbf{z}_u| \leq \frac{1}{|\lambda_n|}\sum_{u\in B}\left(\frac{Cd_u}{\sqrt{n}}\right)\leq \frac{C}{|\lambda_n| \sqrt{n}}(e(B, V(G)\setminus B)+2e(B)).$$
Each vertex in $B$ is connected to at most two vertices in $N(w)$ and is not adjacent to $w$, so $e(B, V(G)\setminus B)\leq 2|B|\leq 2C\sqrt{n}$. The graph induced on $B$ is outerplanar, so $e(B)\leq 2|B|-3<2C\sqrt{n}$. Finally, using Lemma \ref{lambda_n crude bound}, we get the required result. A slightly modified version of this argument proves the bound on $ \sum\limits_{u\in B}\mathbf{x}_u$. \end{proof}
We will use Lemma \ref{lsum} to show that $B$ is empty, and this will complete the proof of Theorem \ref{vertex degree n-1}. First we define the following alteration of $G$. Let $t$ be an arbitrary vertex in $B$ (if it exists).
\begin{definition}\label{G star} Let $G^*$ be the graph defined such that its adjacency matrix $A^*$ satisfies $$A^*_{ij}=\begin{cases} 1 &$if $ i=w, j=t\\ 0 &$if $ i=t, j\not=w\\ A_{ij} &$otherwise$ \end{cases}$$ That is, to get $G^*$ from $G$ we add an edge from $t$ to $w$ and remove all other edges incident with $t$. In particular, the only neighbor of the $t$ vertex in $G^*$ is $w$. \end{definition}
\begin{lemma}\label{B empty} For large enough $n$, $B$ is empty. \end{lemma}
\begin{proof} Assume for contradiction that $B$ is nonempty. Then there is a vertex $t$ such that $t\not\sim w$. Define $G^*$ as in Definition \ref{G star}. Furthermore, we define the vector $\mathbf{z}^*$ which slightly modifies $\mathbf{z}$ as follows. \[ \mathbf{z}^*_u = \begin{cases} \mathbf{z}_u & \mbox{if $u\not=t$}\\
-|\mathbf{z}_u| & \mbox{if $u=t$} \end{cases} \] That is, if $\mathbf{z}_t<0$ then $\mathbf{z}^*$ is the same vector as $\mathbf{z}$ and otherwise we flip the sign of $\mathbf{z}_t$. Note that $(\mathbf{z}^*)^T \mathbf{z}^* = \mathbf{z}^T\mathbf{z}$. Now \begin{align*} S(G^*) - S(G) &\geq \left( \frac{\mathbf{x}^T A^*\mathbf{x}}{\mathbf{x}^T\mathbf{x}}- \frac{(\mathbf{z^*})^T A^*\mathbf{z^*}}{\mathbf{z}^T\mathbf{z}}\right) - \left(\frac{\mathbf{x}^T A\mathbf{x}}{\mathbf{x}^T\mathbf{x}}- \frac{\mathbf{z}^T A\mathbf{z}}{\mathbf{z}^T\mathbf{z}} \right)\\ &=\frac{2\mathbf{x}_t}{\mathbf{x}^T \mathbf{x}}\left(1-\sum\limits_{v\sim t}\mathbf{x}_v\right)+\frac{2\mathbf{z}_t}{\mathbf{z}^T \mathbf{z}}\left(\mathrm{sgn}(\mathbf{z}_t)+\sum\limits_{v\sim t}\mathbf{z}_v\right) \\
&\geq \frac{2\mathbf{x}_t}{\mathbf{x}^T \mathbf{x}}\left(1-\sum\limits_{v\sim t}\mathbf{x}_v\right)+\frac{2|\mathbf{z}_t|}{\mathbf{z}^T \mathbf{z}}\left(1-\left|\sum\limits_{v\sim t}\mathbf{z}_v\right|\right), \end{align*} where $\mathrm{sgn}(\mathbf{z}_t)$ equals $1$ if $\mathbf{z}_t > 0$ and $-1$ otherwise.
\[
\left|\sum\limits_{v\sim t}\mathbf{z}_v \right| \leq \sum_{v\sim t}|\mathbf{z}_v| \leq \sum_{\substack{v\sim t\\v\not\in B}}|\mathbf{z}_v| + \sum_{v\in B} |\mathbf{z}_v|. \] There are at most $2$ terms in the first sum, and so by Lemmas \ref{lmac3} and \ref{lsum}, we have \[
\left|\sum\limits_{v\sim t}\mathbf{z}_v \right| = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right). \] Similarly, we have \[ \sum\limits_{v\sim t}\mathbf{x}_v = \mathcal{O}\left(\frac{1}{\sqrt{n}}\right). \]
This implies $1-\sum\limits_{v\sim t}\mathbf{x}_v > 0$ and $1-\left|\sum\limits_{v\sim t}\mathbf{z}_v\right|>0$ for $n$ large enough, which implies that
\[
\frac{2\mathbf{x}_t}{\mathbf{x}^T \mathbf{x}}\left(1-\sum\limits_{v\sim t}\mathbf{x}_v\right)+\frac{2|\mathbf{z}_t|}{\mathbf{z}^T \mathbf{z}}\left(1-\left|\sum\limits_{v\sim t}\mathbf{z}_v\right|\right) > 0
\]
for $n$ large enough. Moreover, $G^*$ is obtained from $G$ by deleting all edges incident with $t$ and then attaching $t$ to $w$ as a pendant vertex, so $G^*$ is still outerplanar. Hence $S(G^*) > S(G)$, contradicting the assumption that $G$ maximizes the spread over $n$-vertex outerplanar graphs. \end{proof}
We have finally achieved our goal for this section. Theorem \ref{vertex degree n-1} follows immediately from the definition of $B$ and the fact that it is empty, as implied by Lemma \ref{B empty}.
\section{Determining Graph Structure} By Theorem \ref{vertex degree n-1}, the vertex $w$ has degree $n-1$, or equivalently, $K_{1,n-1}$ is a spanning subgraph of $G$. Since $G$ is $K_{2,3}$-free, the graph induced by the neighborhood of $w$ has maximum degree at most $2$. Furthermore, this induced subgraph cannot contain a cycle, since otherwise $G$ would contain a wheel graph, which has a $K_4$-minor (contract the cycle down to a triangle). Any graph of maximum degree at most $2$ that does not contain a cycle is a disjoint union of paths. Therefore, $G = K_1 \vee F$ where $F$ is a disjoint union of paths. Our next task is to study $F$. To this end, we denote the number of edges in $F$ by $m$. Our main theorem, Theorem \ref{thm: main}, is proved if we can show that $m = \Omega(n)$. Before doing this, we need a more accurate estimate for the eigenvector entries.
\begin{lemma}\label{z_u} For any $u\not=w$, we have $\mathbf{z}_u= \frac{1}{\lambda_n}+ \frac{d_u-1}{\lambda_n^2}+\Theta\left(\frac{1}{n^{3/2}}\right)$. \end{lemma} \begin{proof} Since $d_w=n-1$ and $G$ contains no $K_{2,3}$, we have $d_u\in \{1,2,3\}$ for all $u\neq w$. Hence \begin{align*} \lambda_n \mathbf{z}_u&=\sum\limits_{y \sim u}\mathbf{z}_y\\ &=\mathbf{z}_w+\sum_{\substack{y\sim u\\ y\not=w}} \mathbf{z}_y \\ &=1-\Theta\left(\frac{1}{\sqrt{n}}\right) \end{align*} by Lemma \ref{lmac3} and our normalization. In particular, $\mathbf{z}_u=\frac{1}{\lambda_n}+\Theta\left(\frac{1}{n}\right)$ for every $u\neq w$.
Note that as $\lambda_n\mathbf{z}_u=1+\sum_{\substack{y\sim u\\ y\not=w}}\mathbf{z}_y>0$ for $n$ large enough and $\lambda_n<0$, we must have $\mathbf{z}_u<0$. Next we repeat the argument to improve our estimate, \begin{align*}
\lambda_n \mathbf{z}_u&=\mathbf{z}_w+\sum_{\substack{y\sim u\\ y\not=w}} \mathbf{z}_y\\
&=1+(d_u-1)\left(\frac{1}{\lambda_n}+\Theta\left(\frac{1}{n}\right)\right).
\shortintertext{Now we apply our bounds on $\lambda_n$ in Lemma \ref{lambda_n crude bound} to get}
\mathbf{z}_u&= \frac{1}{\lambda_n}+ \frac{d_u-1}{\lambda_n^2}+\Theta\left(\frac{1}{n^{3/2}}\right). \end{align*} \end{proof} We can use equivalent reasoning to obtain a very similar approximation for the $\mathbf{x}_u$ entries, \begin{lemma}\label{x_u}For any $u\not=w$, we have
$\mathbf{x}_u= \frac{1}{\lambda_1}+\frac{d_u-1}{\lambda_1^2}+\Theta(\frac{1}{n^{3/2}})$. \end{lemma} \begin{proof} By the same logic as in the proof of Lemma \ref{z_u}, we have \begin{align*}\lambda_1 \mathbf{x}_u&=\sum\limits_{y \sim u}\mathbf{x}_y\\ &=\mathbf{x}_w+\sum_{\substack{y\sim u\\ y\not=w}} \mathbf{x}_y\\ &=1+\Theta\left(\frac{1}{\sqrt{n}}\right) \text{ by Lemma \ref{lmac3}}. \end{align*} So $\mathbf{x}_u=\frac{1}{\lambda_1}+\Theta\left(\frac{1}{n}\right)$. Next we repeat the argument to improve our estimate, \begin{align*}
\lambda_1 \mathbf{x}_u&=\mathbf{x}_w+\sum_{\substack{y\sim u\\ y\not=w}} \mathbf{x}_y\\
&=1+(d_u-1)\left(\frac{1}{\lambda_1}+\Theta \left(\frac{1}{n}\right)\right) . \end{align*} So $\mathbf{x}_u= \frac{1}{\lambda_1}+\frac{d_u-1}{\lambda_1^2}+\Theta\left(\frac{1}{n^{3/2}}\right)$ according to our bounds on $\lambda_1$ in Lemma \ref{lambda_1 bounds}. \end{proof} Using Lemma \ref{z_u}, we can now get tighter estimates on $\lambda_n$.
\begin{lemma}\label{refined lambda_n bound} We have that $\lambda_n= -\sqrt{n-1}+\frac{m}{n-1} + \Theta \left(\frac{m}{n^{3/2}}\right)$. \end{lemma} \begin{proof} We first define the vector \[\mathbf{y_2}=\begin{bmatrix}1\\ -\frac{1}{\sqrt{n-1}}\\ -\frac{1}{\sqrt{n-1}}\\ \vdots\\ -\frac{1}{\sqrt{n-1}}\end{bmatrix}\] where $\mathbf{y_2}$ has $n$ entries and $w$ corresponds to the 1 entry. As $\lambda_n$ is the minimum Rayleigh quotient we have \begin{align*}
\lambda_n &\leq \frac{\mathbf{y_2}^TA\mathbf{y_2}}{\mathbf{y_2}^T\mathbf{y_2}}\\
&=\frac{2\sum\limits_{i \sim j} (\mathbf{y}_{2})_i(\mathbf{y}_{2})_j}{2}\\
&=(n-1)\left(-\frac{1}{\sqrt{n-1}}\right)+m\left(\frac{1}{n-1}\right)\\
&=-\sqrt{n-1}+\frac{m}{n-1}. \end{align*} To show the lower bound on $\lambda_n$, we use the fact that $\lambda_n$ is attained as the Rayleigh quotient of its eigenvector $\mathbf{z}$. So
\[\lambda_n= \frac{\mathbf{z}^TA\mathbf{z}}{\mathbf{z}^T\mathbf{z}}
=\frac{2\sum\limits_{i\sim j}\mathbf{z}_i\mathbf{z}_j}{\mathbf{z}^T\mathbf{z}}
= \frac{2\sum\limits_{w\sim k}\mathbf{z}_w\mathbf{z}_k}{\mathbf{z}^T\mathbf{z}}+\frac{2\sum\limits_{{i\sim j \, ,\, i,j\neq w}}\mathbf{z}_i\mathbf{z}_j}{\mathbf{z}^T\mathbf{z}}.\]
Note the first term is the Rayleigh quotient for the star subgraph centered at the vertex $w$ of maximum degree. Hence it is bounded from below by $-\sqrt{n-1}$. More specifically, we have
\[
\frac{2\sum\limits_{w\sim k}\mathbf{z}_w\mathbf{z}_k}{\mathbf{z}^T\mathbf{z}} = \frac{\mathbf{z}^T A(K_{1,n-1})\mathbf{z}}{\mathbf{z}^T\mathbf{z}} \geq \lambda_n(A(K_{1,n-1})) = -\sqrt{n-1}.
\]
Applying Lemma \ref{z_u} we have
\begin{align*}
\lambda_n &\geq -\sqrt{n-1}+\frac{2\sum\limits_{\substack{i\sim j\\ i,j\not=w}}\left(\frac{1}{\lambda_n}+ \frac{d_i-1}{\lambda_n^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)\left(\frac{1}{\lambda_n}+ \frac{d_j-1}{\lambda_n^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+\sum_{\ell\not= w}\left( \frac{1}{\lambda_n} + \frac{d_\ell-1}{\lambda_n^2}+ \Theta\left(\frac{1}{n^{3/2}}\right)\right)^2}
\end{align*}
For $\ell \not=w$ we have $0\leq d_\ell-1 \leq 2$. Note that for $n$ large enough we have that $\frac{1}{\lambda_n} + \frac{d_u-1}{\lambda_n^2} + \Theta\left(\frac{1}{n^{3/2}}\right) < 0$, and so we have a lower bound if we replace all $d_\ell-1$ terms in the numerator by $2$ and in the denominator by $0$, and so we have
\begin{align*}
\lambda_n &\geq -\sqrt{n-1}+\frac{2\sum\limits_{\substack{i\sim j\\ i,j\not=w}}\left(\frac{1}{\lambda_n}+ \frac{2}{\lambda_n^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)\left(\frac{1}{\lambda_n}+ \frac{2}{\lambda_n^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+\sum\limits_{\ell\not= w} \left(\frac{1}{\lambda_n} + \Theta\left(\frac{1}{n^{3/2}}\right)\right)^2}
\\&=
-\sqrt{n-1} + \frac{2m\left(\frac{1}{\lambda_n^2} + \Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+(n-1)\left(\frac{1}{\lambda_n^2} + \Theta\left(\frac{1}{n^{3/2}}\right)\right)}.
\end{align*}
Since $-\sqrt{n-1}-2\leq \lambda_n \leq -\sqrt{n-1}+2$ by Lemma \ref{lambda_n crude bound}, we have that $\frac{1}{\lambda_n^2} = \frac{1}{n-1} + \Theta\left(\frac{1}{n^{3/2}}\right)$. Hence we have
\begin{align*}
\lambda_n &\geq -\sqrt{n-1} + \frac{2m\left(\frac{1}{n-1} + \Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+(n-1)\left(\frac{1}{n-1} +\Theta\left(\frac{1}{n^{3/2}}\right)\right)}\\
&= -\sqrt{n-1}+ \frac{\frac{2m}{n-1} + \Theta\left(\frac{m}{n^{3/2}}\right)}{2 + \Theta\left(\frac{1}{n^{1/2}}\right)} \\
&= -\sqrt{n-1}+ \left(\frac{m}{n-1} + \Theta\left(\frac{m}{n^{3/2}}\right)\right)\left(1 + \Theta\left(\frac{1}{\sqrt{n}}\right)\right)\\
&= -\sqrt{n-1}+\frac{m}{n-1}+\Theta\left(\frac{m}{n^{3/2}}\right),
\end{align*}
completing the proof.
\end{proof}
We now prove a very similar result for $\lambda_1$.
\begin{lemma}\label{refined lambda_1 bound} $\lambda_1=\sqrt{n-1}+\frac{m}{n-1}+\Theta\left(\frac{m}{n^{3/2}}\right)$. \end{lemma} \begin{proof} First we prove the lower bound. Define the vector \[\mathbf{y_3}=\begin{bmatrix}1\\ \frac{1}{\sqrt{n-1}}\\ \frac{1}{\sqrt{n-1}}\\ \vdots\\ \frac{1}{\sqrt{n-1}}\end{bmatrix},\] where $w$ corresponds to the entry 1. As $\lambda_1$ is the maximum possible Rayleigh quotient we have \begin{align*}
\lambda_1 &\geq \frac{\mathbf{y_3}^TA\mathbf{y_3}}{\mathbf{y_3}^T\mathbf{y_3}}\\
&=\frac{2\sum\limits_{i \sim j} (\mathbf{y}_3)_i(\mathbf{y}_3)_j}{2}\\
&=(n-1)\left(\frac{1}{\sqrt{n-1}}\right)+m\left(\frac{1}{n-1}\right)\\
&=\sqrt{n-1}+\frac{m}{n-1}. \end{align*} For the upper bound, as in the previous lemma, we use that $\lambda_1$ is the Rayleigh quotient of its eigenvector $\mathbf{x}$. So \begin{align*}
\lambda_1&= \frac{\mathbf{x}^TA\mathbf{x}}{\mathbf{x}^T\mathbf{x}}\\
&=\frac{2\sum\limits_{i\sim j}\mathbf{x}_i\mathbf{x}_j}{\mathbf{x}^T\mathbf{x}}\\
&= \frac{2\sum\limits_{w\sim k}\mathbf{x}_w\mathbf{x}_k}{\mathbf{x}^T\mathbf{x}}+\frac{2\sum\limits_{\substack{i\sim j \\ i,j\neq w}}\mathbf{x}_i\mathbf{x}_j}{\mathbf{x}^T\mathbf{x}}.
\end{align*}
As before, the first term satisfies
\[
\frac{\mathbf{x}^T A(K_{1,n-1})\mathbf{x}}{\mathbf{x}^T\mathbf{x}} \leq \lambda_1(A(K_{1,n-1})) = \sqrt{n-1}.
\]
Applying Lemma \ref{x_u} and using $0\leq d_u-1 \leq 2$ for all $u\not=w$ we have
\begin{align*}
\lambda_1 &\leq \sqrt{n-1}+\frac{2\sum_{\substack{i\sim j\\ i,j\not=w}}\left(\frac{1}{\lambda_1}+ \frac{d_i-1}{\lambda_1^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)\left(\frac{1}{\lambda_1}+ \frac{d_j-1}{\lambda_1^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+\sum_{\ell\not= w}\left(\frac{1}{\lambda_1} + \frac{d_\ell-1}{\lambda_1^2}+ \Theta\left(\frac{1}{n^{3/2}}\right)\right)^2}\\
&\leq \sqrt{n-1}+\frac{2\sum_{\substack{i\sim j\\ i,j\not=w}}\left(\frac{1}{\lambda_1}+ \frac{2}{\lambda_1^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)\left(\frac{1}{\lambda_1}+ \frac{2}{\lambda_1^2}+\Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+\sum_{\ell\not= w} \left(\frac{1}{\lambda_1} + \Theta\left(\frac{1}{n^{3/2}}\right)\right)^2}.
\end{align*}
Using $\sqrt{n-1}-2\leq \lambda_1 \leq \sqrt{n-1}+2$, we have
\begin{align*}
\lambda_1 &\leq \sqrt{n-1} + \frac{2m\left(\frac{1}{n-1} + \Theta\left(\frac{1}{n^{3/2}}\right)\right)}{1+(n-1)\left(\frac{1}{n-1} +\Theta\left(\frac{1}{n^{3/2}}\right)\right)}\\
&= \sqrt{n-1}+ \frac{\frac{2m}{n-1} + \Theta\left(\frac{m}{n^{3/2}}\right)}{2 + \Theta\left(\frac{1}{n^{1/2}}\right)} \\
&= \sqrt{n-1}+ \left(\frac{m}{n-1} + \Theta\left(\frac{m}{n^{3/2}}\right)\right)\left(1 + \Theta\left(\frac{1}{\sqrt{n}}\right)\right)\\
&= \sqrt{n-1}+\frac{m}{n-1}+\Theta\left(\frac{m}{n^{3/2}}\right),
\end{align*}
completing the proof.
\end{proof}
We are now in a position to prove our main theorem.
\begin{proof}[Proof of Theorem \ref{thm: main}] As before, let $G_1$ be the graph $K_1\vee P_{n-1}$ and let $A_1$ be its adjacency matrix with the dominating vertex corresponding to the first row and column. Since $G$ is spread-extremal, we must have $S(G) \geq S(G_1)$. We lower bound $S(G_1)$ using the vectors \[\mathbf{v}_1=\begin{bmatrix}-1 + \sqrt{n}\\ 1\\ 1\\ \vdots\\ 1\end{bmatrix} \quad \mbox{and} \quad \mathbf{v}_2=\begin{bmatrix}-1-\sqrt{n}\\ 1\\ 1\\ \vdots\\ 1\end{bmatrix}.\] Using these vectors and the Rayleigh principle, we have that \begin{align*} S(G_1) &\geq \frac{ \mathbf{v}_1^T A_1 \mathbf{v}_1}{\mathbf{v}_1^T\mathbf{v}_1} - \frac{ \mathbf{v}_2^T A_1 \mathbf{v}_2}{\mathbf{v}_2^T\mathbf{v}_2} \\ & = \frac{2(n-1)(\sqrt{n}-1) + 2(n-2)}{2n-2\sqrt{n}} - \frac{2(n-1)(-\sqrt{n}-1) + 2(n-2)}{2n+2\sqrt{n}}\\ & = \frac{2(n-1)(\sqrt{n}-1) + 2(n-1)}{2n-2\sqrt{n}} -\frac{2}{2n-2\sqrt{n}} \\& - \frac{2(n-1)(-\sqrt{n}-1) + 2(n-1)}{2n+2\sqrt{n}}+ \frac{2}{2n+2\sqrt{n}}\\ &= \frac{n-1}{\sqrt{n}-1} + \frac{n-1}{\sqrt{n}+1} - \frac{2\sqrt{n}}{n^2-n} \geq 2\sqrt{n} - \frac{1}{n}, \end{align*} where the last inequality holds for $n$ large enough. Combining this with Lemmas \ref{refined lambda_n bound} and \ref{refined lambda_1 bound}, we have \[ 2\sqrt{n} - \frac{1}{n} \leq \lambda_1(G) - \lambda_n(G) = \left(\sqrt{n-1}+\frac{m}{n-1}+\Theta\left(\frac{m}{n^{3/2}}\right)\right) - \left( -\sqrt{n-1}+\frac{m}{n-1}+\Theta\left(\frac{m}{n^{3/2}}\right)\right). \] Therefore there exists a constant $C$ such that for $n$ large enough we have \[ 2\sqrt{n} - \frac{1}{n} \leq 2\sqrt{n-1} + \frac{C\cdot m}{n^{3/2}}. \] Since $\sqrt{n}- \sqrt{n-1} = \Omega(n^{-1/2})$, rearranging gives the final result.
\end{proof}
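The asymptotic estimates above are easy to sanity-check numerically. The following short Python script (a minimal sketch, not part of the proof; it assumes NumPy is available) computes the extreme eigenvalues of the candidate graph $K_1\vee P_{n-1}$, compares them with the approximations $\pm\sqrt{n-1}+\frac{m}{n-1}$ (here $m=n-2$ is the number of edges avoiding the dominating vertex), and checks that the spread exceeds $2\sqrt{n}-\frac{1}{n}$.
\begin{verbatim}
import numpy as np

def join_with_path(n):
    # Adjacency matrix of K_1 v P_{n-1}: vertex 0 is dominating,
    # vertices 1, ..., n-1 form a path.
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1
    for i in range(1, n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

for n in (100, 400, 1600):
    ev = np.linalg.eigvalsh(join_with_path(n))
    lam1, lamn = ev[-1], ev[0]
    m = n - 2                                    # edges avoiding the dominating vertex
    print(n,
          lam1 - (np.sqrt(n - 1) + m / (n - 1)),     # error in the lambda_1 estimate
          lamn - (-np.sqrt(n - 1) + m / (n - 1)),    # error in the lambda_n estimate
          (lam1 - lamn) - (2 * np.sqrt(n) - 1 / n))  # spread minus the claimed lower bound
\end{verbatim}
Both error columns should shrink roughly like $n^{-1/2}$, and the last column should stay positive.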
\end{document}
\begin{document}
\title{Pythagoras numbers of orders in biquadratic fields}
\author{Jakub Kr\'asensk\'y$^{1,\ast}$}
\author{Martin Ra\v{s}ka$^1$}
\author{Ester Sgallov\'a$^1$}
\address{$^1$Charles University, Faculty of Mathematics and Physics, Department of Algebra,\newline
Sokolovsk\'{a}~83, 18600 Praha 8, Czech Republic}
\email{ [email protected] $^{\ast}$, [email protected], [email protected]}
\thanks{$^{\ast}$corresponding author}
\begin{abstract}
We examine the Pythagoras number $\P(\O_K)$ of the ring of integers $\O_K$ in a totally real biquadratic number field $K$. We show that the known upper bound $7$ is attained in a large and natural infinite family of such fields. In contrast, for almost all fields $\BQ{5}{s}$ we prove $\P(\O_K)=5$. Further we show that $5$ is a lower bound for all but seven fields $K$ and $6$ is a lower bound in an asymptotic sense.
\noindent \textsc{Keywords.} Sum of squares, Pythagoras number, biquadratic number field, ring of integers
\end{abstract}
\setcounter{tocdepth}{2}
\renewcommand{\contentsname}{}
\maketitle
\thispagestyle{empty}
\tableofcontents
\section{Introduction}
The study of sums of squares has a long tradition. Already Diophantus was aware that every natural number is a sum of four squares, a fact proven by Lagrange in 1770. Maa{\ss} \cite{Ma} proved that for any totally positive integer of $\Q(\!\sqrt5)$, three squares are sufficient. Siegel \cite{Si} later showed that this is a rather exceptional property: $\Q$ and $\Q(\!\sqrt5)$ are the only totally real number fields where each totally positive algebraic integer is a sum of integral squares.
To remedy this fact, it is customary to consider only those elements of a commutative ring $R$ which can be written as a sum of squares; these elements form a set often denoted by $\sum R^2$. For $\alpha \in \sum R^2$ we define its \emph{length} $\ell(\alpha)$ as the minimal number of squares by which we can represent $\alpha$. For example, $\ell(7)=4$ in $\Z$. The \emph{Pythagoras number} of $R$ is then
\[
\P(R) = \sup\{\ell(\alpha) : \alpha \in \textstyle{\sum} R^2 \},
\]
i.e.\ the smallest number $n$ such that any sum of squares is in fact a sum of $n$ squares. In this terminology, $\P(\Z)=4$ and $\P(\O_{\Q(\!\sqrt5)})=3$. The Pythagoras number of an order in a number field $K$ is always finite; in fact, if the number field $K$ is not totally real, then $\P(\O)\leq 5$ for any order $\O$, and $\P(\O_K)\leq 4$ for the maximal one (see \cite{Pf}). In the most difficult case of orders of totally real number fields, little is known. Scharlau \cite{Sch} has shown that the Pythagoras number of such an order can be arbitrarily large, but conjectured that it can be bounded by a function depending only on the degree of $K$. This was recently proven by Kala and Yatsyna \cite{KY}; for small degrees $2 \leq d \leq 5$, this bound is $d+3$. (See also Subsection \ref{ss:sqrt5upper} where we prove a stronger version of their result.)
\subsection{Historical overview}
The case of orders in real quadratic fields was completely solved, the main contribution being made by Peters \cite{Pe}. Since we exploit this result repeatedly and since some parts of the proof are either folklore or difficult to find, we collect them separately in Section \ref{se:quadratic}.
Besides providing an upper bound for the Pythagoras number, Peters also fully characterised which numbers are sums of squares in the orders $\O$ of real quadratic fields -- in other words, he determined the set $\sum \O^2$. It is well known (see e.g.\ \cite{KY}, Lemma 6) that the only \emph{local} conditions for a number $\alpha\in\O_K$ to be a sum of integral squares is that it is totally positive and it is a square modulo $2\O_K$; it seems to be folklore that the same holds for non-maximal orders as well. In a maximal order, every number satisfying these necessary conditions is in fact \emph{locally} a sum of four squares: In non-dyadic completions, three squares suffice, while in a dyadic completion $K_{\mathfrak{p}}$, three squares suffice for each square-mod-2 if and only if $[K_{\mathfrak{p}}:\Q_2]$ is even (\cite{Sch2}, Kapitel 0, Lemma 1). However, these local conditions usually do not suffice to characterise the set $\sum \O^2$: The only real quadratic orders where every totally positive square-mod-2 is \enquote{globally} a sum of squares, are $\Z[\sqrt2]$, $\Z[\sqrt3]$ and $\Z\bigl[\frac{1+\sqrt5}{2}\bigr]$; trivially, the same holds for $\Z$. In Scharlau's dissertation \cite{Sch2}, it is proved that only two other totally real orders of degree at most $4$ share this property, namely the maximal orders of $\Q\bigl(\!\sqrt{\frac{5+\sqrt5}{2}}\bigr)$ and of $\BQ25$. As a corollary of our Theorem \ref{th:mainSqrt5} we shall see that in these two orders, five squares suffice, which is an improvement of the upper bound $7$ provided for quartic orders by \cite{KY} (however, in the latter order we suspect that the correct value is $3$, see Conjecture \ref{co:conjecture}).
This failure of local-global principle is connected with the fact that in totally real fields, the genus of the quadratic form $x^2+y^2+z^2$ rarely consists only of one equivalence class \cite{Pe2}; therefore, it is difficult to prove any global results about sums of squares in these cases (already proving the tight upper bound for $\P(\Z[\sqrt3])$ was tricky, see the discussion below Theorem \ref{th:quadratic}). Nevertheless, there are some recent results for totally real orders. Tinková \cite{Ti} examined the Pythagoras number of orders in simplest cubic fields; one of the present authors \cite{KrCubic} exhibited an exceptional case with very different behaviour. Collinet \cite{Co} and Kala and Yatsyna \cite{KY2} investigated rings of the form $\O_K\bigl[\frac{1}{n}\bigr]$ where $K$ is totally real. For real quadratic orders, B.\ M.\ Kim examined sums of non-vanishing squares with J.\ Y.\ Kim in \cite{KK}, and sums of distinct squares with P.-S.\ Park in \cite{KP}. The papers \cite{KY, KY3} also show an application of the Pythagoras number for the construction of a universal quadratic form. A generalisation of the Pythagoras number, the so-called $g$-invariants, are the subject of recent study as well, see e.g.\ papers by Icaza and Chan \cite{CI, Ic}, Sasaki \cite{SaEng} or Yatsyna and one of the present authors \cite{KrY}. We shall need them in Subsection \ref{ss:sqrt5upper} and provide a definition there.
Studying Pythagoras numbers of fields is easier. For number fields, the local-global principle yields $\P(\,\cdot\,)\leq 4$, so more complicated fields are studied. The case of non-formally real fields is also well understood \cite{Pf}, while the theory for formally real fields is rich -- by Hoffmann \cite{Ho}, every $n\in\N$ appears as the Pythagoras number of such a field -- and still a vivid research area. From many recent papers we mention those of Becher, Grimm, Van Geel, Leep, Hu, Tikhonov and Yanchevskii \cite{BGV, BL, BV, Hu, TVY}.
\subsection{Our results}
This paper examines the Pythagoras numbers of orders $\O$ in totally real biquadratic fields, i.e.\ in fields of the form $\BQ{p}{q}$ where $p,q>1$ are distinct square-free integers. Since their degree is $4$, the above-mentioned result by Kala and Yatsyna gives $\P(\O) \leq 7$. This result and the knowledge of the Pythagoras numbers of orders in quadratic fields were essentially the only tools at our disposal. Our treatment is mostly elementary (the only exception is Subsection \ref{ss:sqrt5upper} where we use some local-global considerations) but some tricks are required to obtain a variety of well-rounded results:
\begin{theorem} \label{th:main7infinitely}
There exist infinitely many totally real biquadratic fields $K$ with $\P(\O_K)=7$.
\end{theorem}
\begin{proof}
Theorem \ref{th:(B1)P=7} provides a large and natural infinite family of such fields: It suffices to pick two coprime square-free numbers $p \equiv 2$, $q \equiv 3 \pmod{4}$ greater than $7$, and consider $K = \BQ{p}{q}$.
\end{proof}
The assumption that $K$ is generated by roots of coprime integers is a very common one: For example, multiquadratic fields generated by roots of coprime integers were used both by Scharlau in \cite{Sch} and by Kala and Svoboda in \cite{KS}, in the former case to exhibit orders of arbitrarily large Pythagoras number, in the latter to construct maximal orders without a universal quadratic form of a given rank. Therefore investigating the coprime case was important, and besides the above theorem it also yielded Proposition \ref{pr:coprime}.
Nevertheless, in the present paper we do not restrict to the coprime case, and prove surprisingly strong lower bounds which are valid for each totally real biquadratic field. In particular, the following two statements show that the Pythagoras number of $\O_K$ in most totally real biquadratic fields is not much smaller than the known upper bound $7$:
\begin{theorem} \label{th:main2parts}
Let $K$ be a totally real biquadratic field. Then:
\begin{enumerate}
\item $\P(\O_K)\geq 5$ for all but at most seven cases. \label{it:main(1)}
\item Fix a square-free positive $n > 7$. Then $\P(\O_K) \geq 6$ for all but finitely many totally real biquadratic fields $K \ni \sqrt n$.
\end{enumerate}
\end{theorem}
\begin{remark}
The seven fields probably satisfying $\P(\O_K)<5$ are $\BQ23$, $\BQ25$, $\BQ35$, $\BQ27$, $\BQ37$, $\BQ56$, $\BQ57$, see also Conjecture \ref{co:conjecture}.
\end{remark}
\begin{proof}
To prove the first statement is the sole purpose of Section \ref{se:P>=5}; the proof of the second statement is to be found in Section \ref{se:P>=6}.
\end{proof}
The proofs of Theorems \ref{th:main7infinitely} and \ref{th:main2parts} are based on cleverly finding families of elements with length $7$ ($5$ or $6$, resp.). To handle all fields, it was necessary to divide them into several families and for each family to figure out the best approach. One difficulty is that most biquadratic fields are not generated by square roots of coprime integers, another was the necessity to handle fields containing square roots of $2, 3, 5, 6, 7$ or $13$. One type of fields containing $\sqrt5$ was particularly difficult and required a very different approach, see Subsection \ref{ss:strongAsymptotics}; the developed technique is applicable to other cases, possibly beyond biquadratic fields, and may deserve attention on its own.
For some computations, especially in Theorem \ref{th:main2parts} (\ref{it:main(1)}), we have written a computer program, starting from scratch. It was primarily used to find elements of needed length for some fields, typically edge cases of our propositions containing square roots of small numbers. On the other hand, it also proved to be useful for finding patterns and establishing conjectures about Pythagoras numbers, which we were usually either able to prove or we formulate them below in Conjecture \ref{co:conjecture}. The used algorithms are described in Subsection \ref{ss:algorithms}, their implementation in Python is available at \url{https://github.com/raskama/number-theory/tree/main/biquadratic}.
Going in the other direction, i.e.\ proving an upper bound, is generally much more difficult. Nevertheless, after generalising the method of Kala and Yatsyna from \cite{KY}, we were able to apply it for quartic fields containing $\sqrt5$ by using a result of Sasaki \cite{SaEng, SaJapan} on $2$-universal quadratic forms.
\begin{theorem} \label{th:mainSqrt5}
Let $K$ be a quadratic extension of $\Q(\!\sqrt{5})$ and $\O$ any order in $K$ containing $\frac{1+\sqrt5}{2}$. Then $\P(\O)\leq 5$.
\end{theorem}
\begin{proof}
We prove this as Theorem \ref{th:upperboundSqrt5}.
\end{proof}
Besides requiring the use of the so-called $g$-invariants, part of the difficulty in proving the above theorem is that the crucial Theorem \ref{th:sasaki} is only available in Sasaki's Japanese paper \cite{SaJapan}. Moreover, the proof contained therein is very brief. For this reason we include a full proof of this result as well.
Note that by Theorem \ref{th:main2parts} (\ref{it:main(1)}), this improved upper bound is mostly optimal for maximal orders:
\begin{corollary}
Let $K$ be a totally real biquadratic field containing $\sqrt5$. Then $\P(\O_K)=5$ except possibly when $K = \BQ25$, $\BQ35$, $\BQ56$ or $\BQ57$.
\end{corollary}
The conclusion that there are infinitely many non-maximal orders with Pythagoras number at most $5$ is also not to be overlooked. One can put it in contrast with the following theorem, which is our main result about non-maximal orders, to see that their Pythagoras number is a rich topic. Although we have a complete proof of the theorem, we did not include it in this paper to shorten the text. It will be published separately.
\begin{theorem} \label{th:mainNonmaximal}
Let $K$ be a given totally real biquadratic field. There are infinitely many orders $\O \subset K$ satisfying $\P(\O)=7$.
More generally, any totally real biquadratic order $\O_0$ contains an order $\O \subsetneq \O_0$ with $\P(\O)=7$.
\end{theorem}
\begin{proof}
The full proof will be published separately in the article \cite{Kr}. Here we only announce that the orders in question are of the form $\Z + \Z\sqrt{S_0T_0} + \Z\sqrt{M_0T_0} + \Z\sqrt{M_0S_0}$ where $M_0,S_0,T_0 \geq 5$ are any numbers such that $M_0S_0$, $M_0T_0$ and $S_0T_0$ are not squares -- compare this with the notation $m_0, s_0, t_0$ introduced in Subsection \ref{ss:convention}.\notabene{The corresponding element of length seven is $7 + (1+\sqrt{S_0T_0})^2 + (1+\sqrt{M_0T_0})^2 + (1+\sqrt{M_0S_0})^2$.}
\end{proof}
It is interesting that while the proof of Theorem \ref{th:main7infinitely} worked with square roots of coprime integers, in the case of non-maximal orders we required the order to be generated by square roots of three integers where each two have a common divisor greater than four -- a quite weak condition which nevertheless excludes coprime numbers. This illustrates that while at the first glance the proofs of lower bounds on Pythagoras numbers may all seem the same, they in fact use a variety of ideas.
Although our conclusions are not as complete as Peters' on quadratic fields (see Section \ref{se:quadratic}), they give a very good idea about the behaviour of Pythagoras numbers of orders in biquadratic fields and thus in totally real fields in general. Regarding maximal orders in biquadratic fields, all the listed results seem to point towards the following unifying conjecture:\footnote{Based on a preprint of this paper, part (2) of Conjecture \ref{co:conjecture} was proved by He and Hu \cite{HH}.}
\begin{conjecture} \label{co:conjecture}
Let $K$ be a totally real biquadratic field.
\begin{enumerate}
\item If $K$ contains none of $\sqrt2$ and $\sqrt5$, then $\P(\O_K) \geq 6$ holds with finitely many exceptions.\label{it:con1}
\item If $K$ contains $\sqrt2$ or $\sqrt5$, then $\P(\O_K) \leq 5$.\label{it:con2}
\item The inequality $\P(\O_K)<5$ holds precisely for the following seven fields:\\ For $K = \BQ23, \BQ25, \BQ35$, where $\P(\O_K)=3$, and\\ for $K =\BQ27, \BQ37, \BQ56, \BQ57$, where $\P(\O_K)=4$. \label{it:con3}
\end{enumerate}
\end{conjecture}
We collected evidence for this conjecture in Subsection \ref{ss:conjecture}. Here let us only recall that (\ref{it:con2}) is already proven for $\sqrt5$ in Theorem \ref{th:mainSqrt5}, and that the only missing part of (\ref{it:con3}) are upper bounds for the seven exceptional fields: All necessary lower bounds are contained in Theorem \ref{th:main2parts} (\ref{it:main(1)}) and Proposition \ref{pr:comp_examples}.
Note that while the quadratic fields $F$ generated by $\sqrt2$, $\sqrt3$ and $\sqrt5$ all have $\P(\O_F)=3$ (see Theorem \ref{th:quadratic}), their quadratic extensions behave very differently -- compare the first two statements of the above Conjecture. A possible explanation is that in general, if $K/L$ is a quadratic extension, the value of $\P(\O_K)$ is not directly dependent on $\P(\O_L)$ but rather on the invariant $g_{\O_L}(2)$, see Subsection \ref{ss:sqrt5upper}.
\subsection{Structure of the paper}
The structure of this paper is as follows: Section \ref{se:prelims} introduces biquadratic fields, repeats their important properties and fixes some notation and important conventions. Section \ref{se:quadratic} concisely presents the known result characterising $\P(\O_F)$ for a quadratic field $F$. In Section \ref{se:lemmata} we collected auxiliary results which are needed throughout the text. Section \ref{se:coprime} deals with biquadratic fields generated by roots of coprime integers, its main but not only result being Theorem \ref{th:main7infinitely}. Section \ref{se:P>=5} contains most of the proof of the first part of Theorem \ref{th:main2parts}, with the hardest case postponed until the next section. Although Section \ref{se:sqrt5} focuses on fields containing $\sqrt5$, its methods for producing both lower and upper bounds are fairly general; in particular, as necessary ingredients, we introduce the $g$-invariants and use them to generalise the upper bound of \cite{KY}, and also provide a full proof of a result known by Sasaki \cite{SaJapan}. In Section \ref{se:P>=6} we proceed to prove the other half of Theorem \ref{th:main2parts} and also to discuss the evidence which supports Conjecture \ref{co:conjecture}.
\section{Preliminaries} \label{se:prelims}
In this section, we explain the used terminology and notation. First we recall a few basic notions from number theory. Then, in Subsection \ref{ss:biquadratic}, we introduce the totally real biquadratic fields, and in Subsection \ref{ss:convention} we fix a very important convention about the meaning of letters $m,s,t$ used \emph{throughout the paper}.
For a number field $K$, we use $\O_K$ to denote its ring of integers. An \emph{order} in $K$ is any subring $\O \subset \O_K$ which has $K$ as its field of fractions; equivalently, $\O$ is both a subring of $K$ and a $\Z$-module of rank $[K : \Q]$. We often call $\O_K$ the \emph{maximal order} in $K$. A number field is \emph{totally real} if the images of all its embeddings into $\C$ are contained in $\R$. For $\alpha \in K$, its \emph{trace} is $\Tr (\alpha)=\sum \sigma_i(\alpha)$, where $ \sigma_i $ runs over all embeddings of $K$ into $\C$. Purely for notational convenience, we will also use the \textit{absolute trace} $\atr$, which is the trace divided by the degree $[K : \Q]$.
Let $K$ be totally real and $\alpha \in K$. If $\sigma_i(\alpha) > 0$ in all embeddings $\sigma_i$, then $\alpha$ is \emph{totally positive}, denoted by $\alpha \succ 0$. If $\alpha=0$ is allowed, we write $\alpha \succcurlyeq 0$ and call it \emph{totally nonnegative}; further, $\alpha \succcurlyeq \beta$ simply means $\alpha - \beta \succcurlyeq 0$. By summing the inequality over all embeddings, it follows that $\Tr \alpha \geq \Tr \beta$ and $\atr \alpha \geq \atr \beta$. It is clear from the definition that any square and thus also any sum of squares is totally nonnegative.
The Pythagoras number $\P(\,\cdot\,)$ of a ring and the length $\ell(\,\cdot\,)$ of an element were already defined in the Introduction. We point out that if $\alpha = x_1^2 + \cdots + x_n^2$, then $\alpha \succcurlyeq x_i^2$ for every $i$ and thus $\atr \alpha \geq \atr x_i^2$.\notabene{A very interesting fact, which is nevertheless never used in this paper, is that the squares-mod-$2$ form a ring (Scharlau denotes them by $R^{(2)}$ in a ring $R$).}
\subsection{Biquadratic fields} \label{ss:biquadratic}
A \emph{(totally real) biquadratic field} is a number field $K=\BQ{p}{q}$ where $p,q>1$ are two distinct square-free integers. In this paper, we usually omit the words \enquote{totally real}. Let us list a few basic facts about such fields. Further details including the proof of the explicit form of $\O_K$ can be found in \cite{Wi}.
Denote $r = \frac{pq}{\gcd(p,q)^2}$. The numbers $p,q,r$ are the unique square-free rational integers (not counting $1$) which are squares in $K$; one can freely interchange them in the sense that $K=\BQ{p}{q}=\BQ{p}{r}=\BQ{q}{r}$. The only subfields of $K$ are $\Q$, $\Q(\!\sqrt{p})$, $\Q(\!\sqrt{q})$ and $\Q(\!\sqrt{r})$.
The degree $[K :\Q] = 4$, the most natural $\Q$-basis of $K$ being $(1,\sqrt{p},\sqrt{q},\sqrt{r})$. The field $K$ is a Galois extension of $\Q$, with the following $\Q$-automorphisms: If $\alpha = g_0+g_1\sqrt{p}+g_2 \sqrt{q} + g_3 \sqrt{r}$ with $g_i \in \Q$, then
\[\begin{array}{lll}
\sigma_1(\alpha)=g_0+g_1\sqrt{p}+g_2 \sqrt{q} + g_3 \sqrt{r}, \\
\sigma_2(\alpha)=g_0-g_1\sqrt{p}+g_2 \sqrt{q} - g_3 \sqrt{r}, \\
\sigma_3(\alpha)=g_0+g_1\sqrt{p}-g_2 \sqrt{q} - g_3 \sqrt{r}, \\
\sigma_4(\alpha)=g_0-g_1\sqrt{p}-g_2 \sqrt{q} + g_3 \sqrt{r}. \\
\end{array}\]
In particular, $\Tr \alpha = 4g_0$ and $\atr \alpha = g_0$, which is the reason why the latter is often more convenient to work with. Note that $\atr (g_0+g_1\sqrt{p}+g_2\sqrt{q}+g_3\sqrt{r})^2 = g_0^2 + pg_1^2+qg_2^2+rg_3^2$. We also see that such a field is indeed totally real.
It is well known that the integral basis of $\O_{\Q(\!\sqrt{n})}$ depends on the value of $n$ modulo $4$. A similar statement holds for biquadratic fields: After possibly interchanging the roles of $p$, $q$ and $r$ (so that $p \equiv r \pmod4$), the field $K$ falls into one of the following five distinct types. For each type we present the appropriate integral basis.
\[\begin{array}{lll}
\text{(B1)}&\O_K=\Span_{\Z}\Bigl\{1, \sqrt{p}, \sqrt{q}, \frac{\sqrt{p}+\sqrt{r}}2\Bigr\} & \text{if } p\equiv 2,\ q\equiv 3\!\!\!\pmod{4}, \\
\text{(B2)}&\O_K=\Span_{\Z}\Bigl\{1, \sqrt{p}, \frac{1+\sqrt{q}}2, \frac{\sqrt{p}+\sqrt{r}}{2} \Bigr\} & \text{if } p\equiv 2,\ q\equiv 1\!\!\!\pmod{4}, \\
\text{(B3)}&\O_K=\Span_{\Z}\Bigl\{1, \sqrt{p}, \frac{1+\sqrt{q}}2, \frac{\sqrt{p}+\sqrt{r}}{2}\Bigr\} & \text{if } p\equiv 3,\ q\equiv 1\!\!\!\pmod{4},\\
\text{(B4a)}& \O_K=\Span_{\Z}\Bigl\{1, \frac{1+\sqrt{p}}2,\frac{1+\sqrt{q}}2, \frac{1+\sqrt{p}+\sqrt{q}+\sqrt{r}}{4}\Bigr\} & \text{if } p\equiv q\equiv 1,
\gcd(p,q)\equiv1\!\!\!\pmod{4}, \\
\text{(B4b)}& \O_K=\Span_{\Z}\Bigl\{1, \frac{1+\sqrt{p}}2,\frac{1+\sqrt{q}}2, \frac{1-\sqrt{p}+\sqrt{q}+\sqrt{r}}{4}\Bigr\} & \text{if } p\equiv q\equiv 1, \gcd(p,q)\equiv3\!\!\!\pmod{4}. \\
\end{array}\]
We shall refer to this distinction often, speaking e.g.\ about \enquote{fields of type (B1)}. Note that the role of $p$ and $r$ is freely interchangeable and in the fields of type (B4), all three letters are interchangeable.
We conclude this section by introducing another point of view on biquadratic fields. Usually, a biquadratic field is given by specifying two of the three values $p,q,r$. However, there is another, more symmetrical way of uniquely describing a biquadratic field: One can specify the values of $p_0, q_0$ and $r_0$, where $p_0=\gcd(q,r)$, $q_0=\gcd(p,r)$, $r_0=\gcd(p,q)$. These three numbers are pairwise coprime and square-free (but one of them is allowed to be $1$); any triple with these properties corresponds to a unique biquadratic field and vice versa. One easily sees that $p=q_0r_0$, $q=p_0r_0$ and $r=p_0q_0$.
\subsection{An important convention} \label{ss:convention}
In this subsection, we collect conventions used throughout this paper. Most of them are very natural; however, one of them could easily be overlooked and cause confusion, and is therefore highlighted by a frame. The same convention was already used in \cite{KTZ}.
Unless otherwise stated, $K$ is a totally real biquadratic field, and any biquadratic field is understood to be totally real. If we write $\BQ{p}{q}$, we usually understand $p,q>1$, square-free and distinct, but in important statements, we say it explicitly. When $K=\BQ{p}{q}$, the letter $r$ automatically stands for $r = \frac{pq}{\gcd(p,q)^2}$. We also denote $p_0 = \gcd(q,r)$ etc., see the end of the previous subsection.
In many statements, we use the letters $m,s,t$ rather than $p,q,r$ for the numbers whose square roots generate $K$. They have a precise meaning and are not interchangeable:
\begin{notation} \label{no:mst}
Let $K$ be a biquadratic field. Then the letters $m,s,t$ denote the three square-free rational integers which are squares in $K$, in the following order:
\[
1 < m < s < t.
\]
We also use $m_0 = \gcd(s,t)$, $s_0=\gcd(m,t)$ and $t_0=\gcd(m,s)$; the above inequality translates into $m_0>s_0>t_0\geq 1$.
\end{notation}
For example, if $K=\BQ23$, then $m=2$, $s=3$, $t=6$. The above convention is \emph{always} in use, so e.g.\ if we write $K = \BQ{7}{s}$, it automatically means (since $7=t$ is impossible) that $m=7 < s < t=7s$; in particular, $K$ is none of the fields $\BQ27$, $\BQ37$, $\BQ57$ and $\BQ67$ and $s$ can be any square-free number satisfying $7 \nmid s$, $s>7$.
\section{Pythagoras numbers of orders in quadratic fields} \label{se:quadratic}
As we noted in the Introduction, it is difficult to find references for some parts of the characterisation of $\P(\O)$ where $\O$ is an order in a real quadratic field. Therefore we collect them below:
\begin{theorem}[Peters; Cohn and Pall; Dzewas; Kneser; Maa\ss] \label{th:quadratic}
Let $\O$ be an order in a real quadratic number field. Then
\[
\P(\O) =
\begin{cases}
3 & \text{for $\O=\Z[\sqrt2]$, $\Z[\sqrt3]$ and $\Z\bigl[\frac{1+\sqrt5}{2}\bigr]$},\\
4 & \text{for $\O=\Z[\sqrt6]$, $\Z[\sqrt7]$ and the non-maximal order $\Z[\sqrt5]$},\\
5 & \text{otherwise.}
\end{cases}
\]
The maximal length is attained for example by:
\begin{itemize}
\item Length $3$: $1 + \sqrt2^2+(1+\sqrt{2})^2$,\; $2 + (2+\sqrt3)^2$, $2 + \bigl(\frac{1+\sqrt5}{2}\bigr)^2$;\;
\item Length $4$: $3+(1+\sqrt6)^2$,\; $3+(1+\sqrt7)^2$,\; $3 + (1+\sqrt5)^2$;
\item Length $5$: $3+\bigl(\frac{1+\sqrt{13}}{2}\bigr)^2+\bigl(1+\frac{1+\sqrt{13}}{2}\bigr)^2$ in $\Z\bigl[\frac{1+\sqrt{13}}{2}\bigr]$; in all the remaining cases $7 + (1+f\sqrt{n})^2$ or $7 + \bigl(f\frac{1+\sqrt{n}}{2}\bigr)^2$ in $\Z[f\sqrt{n}]$ and $\Z\bigl[f\frac{1+\sqrt{n}}{2}\bigr]$, respectively. Here $n$ is square-free and the integer $f$, called the conductor, can take any positive value.
\end{itemize}
\end{theorem}
In the following paragraphs, we explain in which reference to find which part of the proof of this theorem.
As far as we know, the upper bound $\P(\,\cdot\,) \leq 5$ is due to Peters \cite{Pe}, and in all but the seven exceptional cases (i.e.\ maximal orders in $\Q(\!\sqrt{n})$, $n=2,3,5,6,7,13$ and the non-maximal order $\Z[\sqrt5]$), he also exhibits the element of length $5$.
The sufficiency of three squares for $n=2$ and $n=5$ stems from the fact that over these fields, the genus of $x^2+y^2+z^2$ contains only one class (Peters cites Dzewas \cite{Dz} but the results are at least partially older, see e.g.\ \cite{Ma}). Showing the same for $\Z[\sqrt3]$ is more involved, since there the genus contains two non-equivalent forms; however, using a nice trick one sees that anything represented by the other form is also represented by three squares. Scharlau, in his dissertation \cite{Sch2}, Kapitel 0, Satz III (iv), attributes this result to \enquote{Kneser, unpublished}.
The sufficiency of four squares in $\Z[\sqrt6]$ and $\Z[\sqrt7]$ is contained in Section 4e of \cite{CP}, together with the curious fact that $12+2\sqrt{13}$ is in fact the \emph{only} (up to conjugation and multiplication by squares of units) element of length $5$ in $\Z\bigl[\frac{1+\sqrt{13}}{2}\bigr]$. This explains why Peters did not find this element (he was clearly unaware of Cohn's and Pall's paper). The sufficiency of four squares in $\Z[\sqrt5]$ is not stated in \cite{CP} explicitly, but it follows by the same method as for $\sqrt6$ and $\sqrt7$, namely using Theorems 2 and 6 and checking elements of small norms.\notabene{It was not that simple! After understanding Theorem 6 it seems to turn out that one must check the totally positive elements of $\Z[2\sqrt5]$ with norm up to $225$ and elements with norms divisible by $16$ up to $400$. Fortunately, by Theorem 2 it suffices to check one element of each norm. But already finding a representative of each admissible norm was quite some work!}
Note that Peters' upper bound $5$ is one instance of the bound given by Kala and Yatsyna in \cite{KY}, which is in turn generalised by our Proposition \ref{pr:upperbound}. Also observe that proving all the listed lower bounds is straightforward -- in Subsection \ref{ss:algorithms} we explain that determining the length of a given element in any fixed order is only a computational task, and in quadratic orders this is easily done with pen and paper, even keeping $f$ and $n$ as parameters in the last two cases. We show a slight variation of this in Observation \ref{ob:betterQuadratic}.
Let us remark that the seven problematic orders for which the element $7 + (1+f\sqrt{n})^2$ or $7 + \bigl(1 + f\frac{1+\sqrt{n}}{2}\bigr)^2$ has length less than $5$, namely the rings of integers in $\Q(\!\sqrt{n})$ for $n=2,3,5,6,7,13$, together with $\Z[\sqrt5]$, are exactly those where $\ell(7)<4$. In particular, $7 = \bigl(\frac{1+\sqrt{13}}{2}\bigr)^2+\bigl(\frac{1-\sqrt{13}}{2}\bigr)^2$.
We shall sometimes exploit parts of the above theorem, namely the information that a particular element has length $5$ (or sometimes $3$ or $4$) in the ring of integers of a quadratic subfield. The knowledge of the quadratic case was the starting point of our examination since it provides a hint on how to construct elements with big length.
\begin{observation} \label{ob:betterQuadratic}
Peters' choice $7 + \bigl(f\frac{1+\sqrt{n}}{2}\bigr)^2$ for the element of length $5$ is certainly possible, but for non-maximal orders, it is not the simplest one. In this observation, we suggest an alternative. Denote $N = f^2n$; then every quadratic order is either $\Z[\sqrt{N}]$ or $\Z\bigl[\frac{1+\sqrt{N}}{2}\bigr]$ with $N \equiv 1 \pmod4$, where $N$ is no longer square-free, only non-square. Then in $\Z[\sqrt{N}]$ with $N \geq 8$, the element $7 + (1+\sqrt{N})^2$ has length $5$ (this was proven by Peters), while in $\Z\bigl[\frac{1+\sqrt{N}}{2}\bigr]$ with $N \equiv 1 \pmod4$, $N \geq 17$, we claim that
\[
\ell\Bigl(7 + \Bigl(\frac{1+\sqrt{N}}{2}\Bigr)^2\Bigr) = 5.
\]
We provide the proof of this fact since it is a good and much simpler illustration of what happens later in some of our proofs. Write $\frac{29+N}{4} + \frac{\sqrt{N}}{2} = \sum \bigl(\frac{a_i+b_i\sqrt{N}}{2}\bigr)^2$, $a_i \equiv b_i \pmod2$. This is equivalent to the two equalities $29 + N = \sum a_i^2 + N\sum b_i^2$ and $1 = \sum a_ib_i$. Since changing signs inside of the squares does not affect anything, assume $b_i \geq 0$. Clearly there is a nonzero $b_i$, so $\sum b_i^2 \geq 1$. If $\sum b_i^2 \geq 3$, the first equality yields $29 + N \geq 3N$, a contradiction. If $\sum b_i^2 = 2$, we can assume $b_1=b_2=1$; then $a_1$ and $a_2$ are odd and all other $a_i, b_i$ are even, which contradicts the equality $1 = \sum a_ib_i$.
Therefore $\sum b_i^2 = 1$, say $b_1=1$ and $b_i=0$ otherwise. We have $1 = \sum a_ib_i = a_1$, so the first summand is $\bigl(\frac{1+\sqrt{N}}{2}\bigr)^2$ and all the others are squares of rational integers. Since their sum is $7$, there must be at least four of them, which concludes the proof.
\end{observation}
Let us note that while the Pythagoras numbers of real quadratic orders are known, the effort to understand sums of squares in these orders continues: See e.g.\ the already cited papers by B.\ M.\ Kim, J.\ Y.\ Kim, P.-S.\ Park \cite{KK, KP}; one of the present authors recently investigated \cite{Ra} the following question: When does $\sum \O_F^2$ contain all totally positive elements of $m\O_F$ for a given $m \in \N$ and real quadratic field $F$?
\section{Lemmata and the use of computers} \label{se:lemmata}
The first part of this section contains some general auxiliary results about arithmetics of biquadratic number fields. In the second part we explain how one can use a simple algorithm to prove a lower bound for the Pythagoras number of a given order $\O$. Since we used these self-written computer programs whenever a general \enquote{pen-and-paper} proof failed, in the third part we provide a list of statements proven in this way, to be referenced in later sections.
\subsection{Some arithmetics of biquadratic number fields}
Given an algebraic integer $\alpha$ in a quadratic subfield $F$ of our biquadratic field $K$, it is quite possible that $\alpha$ is not a square in $F$ but becomes a square in $K$; a typical example is $2 + \sqrt3 = \bigl(\frac12 (\sqrt{2}+\sqrt{6})\bigr)^2$. (Bear in mind that $\alpha$ is a square in $F$, resp.\ $K$, if and only if it is a square in $\O_F$, resp.\ $\O_K$.) However, the resulting square root of $\alpha$ satisfies a strong condition:
\begin{lemma}\label{le:KTZ}
Let $K=\BQ{n_1}{n_2}$ with $n_1, n_2>1$ square-free and distinct; put $n_3 = \frac{n_1n_2}{\gcd(n_1,n_2)^2}$.
If $\beta=x+y\sqrt{n_1}+z\sqrt{n_2}+w\sqrt{n_3} \in K$ (with $x,y,z,w \in \Q$) satisfies $\beta^2\in\Q(\!\sqrt{n_1})$, then either $x=y=0$ or $z=w=0$.
\end{lemma}
\begin{proof}
This is Lemma 4.1 of \cite{KTZ}; as far as we know, the original reference is \cite{CLSTZ}.
\end{proof}
The fields of type (B4) are the only ones where quarter-integers (i.e.\ odd multiples of $\frac14$) occur as coefficients of algebraic integers in the vector space basis. The following lemma shows how such elements behave when squared. Remember that in (B4), the elements $p,q,r$ are freely interchangeable.
\begin{lemma}\label{le:quarterSquares}
Let $K=\BQ{p}{q}$ be a biquadratic field of type (B4) and $\gamma \in \O_K$ be of the form $\frac{a+b\sqrt{p}+c\sqrt{q}+d\sqrt{r}}{4}$ with $a,b,c,d$ odd. Then $\gamma^2$ is of the same form.
\end{lemma}
\begin{proof}
This is just a straightforward computation which uses the fact that $p\equiv q \equiv r \equiv 1 \pmod4$. It suffices to check one of the four coefficients of $\gamma^2$ in the vector space basis $(1,\sqrt{p},\sqrt{q},\sqrt{r})$; if one of them is an odd multiple of $\frac14$, then the others must be as well, since $\gamma^2$ is an algebraic integer. We choose the first coefficient:
$\atr\gamma^2 = \frac1{16}(a^2+b^2p+c^2q+d^2r)$. Checking that this is in $\frac{\Z}{4}$ but not in $\frac{\Z}{2}$ is equivalent to proving that $a^2+b^2p+c^2q+d^2r \equiv 4 \pmod8$. And since $\mathrm{odd}^2 \equiv 1 \pmod8$, we actually want
\[
1+p+q+r \equiv 4 \pmod{8},
\]
which is readily checked.
\end{proof}
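As a concrete illustration (not needed for the proof), the following Python sketch verifies the lemma on sampled elements of the maximal order for the hypothetical type (B4a) choice $p=5$, $q=13$ (so $r=65$). It multiplies exactly with rational arithmetic, using the rules $\sqrt{p}\sqrt{q}=\sqrt{r}$, $\sqrt{p}\sqrt{r}=p\sqrt{q}$ and $\sqrt{q}\sqrt{r}=q\sqrt{p}$, valid here since $p$ and $q$ are coprime, and checks that all four coordinates of $\gamma^2$ are again odd multiples of $\frac14$.
\begin{verbatim}
from fractions import Fraction
from itertools import product

p, q = 5, 13          # hypothetical type (B4a) example: gcd(p, q) = 1, hence r = pq
r = p * q

def mul(x, y):
    # Product of x0 + x1*sqrt(p) + x2*sqrt(q) + x3*sqrt(r) with y (coprime p, q).
    x0, x1, x2, x3 = x
    y0, y1, y2, y3 = y
    return (x0*y0 + p*x1*y1 + q*x2*y2 + r*x3*y3,
            x0*y1 + x1*y0 + q*(x2*y3 + x3*y2),
            x0*y2 + x2*y0 + p*(x1*y3 + x3*y1),
            x0*y3 + x3*y0 + x1*y2 + x2*y1)

def odd_quarter(c):
    # Is c an odd multiple of 1/4, i.e. in (1/4)Z but not in (1/2)Z?
    return (4 * c).denominator == 1 and (2 * c).denominator > 1

# gamma = w + x*(1+sqrt(p))/2 + y*(1+sqrt(q))/2 + z*(1+sqrt(p)+sqrt(q)+sqrt(r))/4
# runs over algebraic integers; its coordinates are all odd quarters iff z is odd.
for w, x, y, z in product(range(-2, 3), range(-2, 3), range(-2, 3), (-3, -1, 1, 3)):
    gamma = (Fraction(4*w + 2*x + 2*y + z, 4), Fraction(2*x + z, 4),
             Fraction(2*y + z, 4), Fraction(z, 4))
    assert all(odd_quarter(c) for c in gamma)              # gamma has the assumed form
    assert all(odd_quarter(c) for c in mul(gamma, gamma))  # and so does gamma^2
print("checked", 5 * 5 * 5 * 4, "elements")
\end{verbatim}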
\subsection{The use of computers} \label{ss:algorithms}
While it is very difficult to find an upper bound for the Pythagoras number of an order and while this paper is mostly about finding lower bounds for \emph{infinite families} of number field orders, the task of finding some lower bound for one given order is easily algorithmically solved. Here we explain the ideas behind such an algorithm and in the next subsection we list some results obtained by using it. The algorithm has a reasonable running time only for elements with small traces, so it becomes hardly usable if we adjoin square roots of large numbers $m,s$; luckily, to fill the gaps in our proofs, we mostly needed to deal with fields where $m,s<100$. Implementation in Python is available at \url{https://github.com/raskama/number-theory/tree/main/biquadratic}.
While we describe the algorithm specifically for full rings of integers, it works for non-maximal orders as well, as long as one knows their integral basis.
Given an $\alpha \in \O_K$, we want to determine whether it is a sum of squares in $\O_K$, and if so, then how many squares are needed. It is in fact possible to find all representations of $\alpha$ as a sum of squares as follows:
\begin{enumerate}
\item If $\alpha$ is not totally positive, then it is not a sum of squares.
\item If $x^2$ occurs in a representation of $\alpha$ as a sum of squares, then $\alpha \succcurlyeq x^2$; in particular, $\atr\alpha \geq \atr x^2$. Therefore we identify all possible summands $x^2$ as follows:
\begin{enumerate}
\item Suppose $x = \frac{a+b\sqrt{m}+c\sqrt{s}+d\sqrt{t}}{4}$ and bear in mind that depending on the integral basis, there are congruence conditions imposed on the integers $a,b,c,d$. Find all quadruples which respect these conditions and satisfy
\[
\frac{1}{16}(a^2 + b^2m + c^2s + d^2t) \leq \atr\alpha;
\]
clearly there are only finitely many of them since $\atr\alpha$ is a given constant.
\item From the thus obtained list of possible $x^2$, omit those which do not satisfy $\alpha \succcurlyeq x^2$. In this way, the following finite set is obtained:
\[
S_{\alpha} = \{x^2 : x \in \O_K, \alpha \succcurlyeq x^2\}.
\]
\end{enumerate}
\item Now it remains to check whether $\alpha$ can be written as a sum of any number of elements of the finite set $S_{\alpha}$. This can be done straightforwardly in many ways, e.g.\ by recursion: Try one element $x^2 \in S_{\alpha}$ and then check whether there are any elements $y^2 \in S_{\alpha}$ satisfying $\alpha - x^2 \succcurlyeq y^2$; if there are, go over them one by one, etc.
\end{enumerate}
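To make the recursion concrete, here is a minimal self-contained Python sketch of this procedure, restricted for brevity to a real quadratic order $\Z[\sqrt{N}]$ (elements are stored as pairs $(a,b)$ meaning $a+b\sqrt{N}$, so there are only two embeddings and no congruence conditions); the biquadratic version differs only in the integral basis and the number of embeddings. The value $N=6$ and the test element are a hypothetical illustration chosen so that the expected answer is known from Theorem \ref{th:quadratic}.
\begin{verbatim}
import math

N = 6                              # hypothetical example: the order Z[sqrt(6)]

def mul(x, y):                     # (a + b*sqrt(N)) * (c + d*sqrt(N))
    a, b = x
    c, d = y
    return (a * c + b * d * N, a * d + b * c)

def sub(x, y):
    return (x[0] - y[0], x[1] - y[1])

def nonneg(a, b):                  # exact test for a + b*sqrt(N) >= 0
    if a >= 0 and b >= 0:
        return True
    if a <= 0 and b <= 0:
        return (a, b) == (0, 0)
    return a * a >= b * b * N if a >= 0 else b * b * N >= a * a

def totally_nonneg(x):
    return nonneg(x[0], x[1]) and nonneg(x[0], -x[1])

def candidate_squares(alpha):      # step (2): the set S_alpha, without the zero square
    A = max(alpha[0], 0)           # absolute trace of alpha
    cands = set()
    for u in range(math.isqrt(A) + 1):
        for v in range(math.isqrt(A // N) + 1):
            for x in ((u, v), (u, -v)):
                sq = mul(x, x)
                if sq != (0, 0) and totally_nonneg(sub(alpha, sq)):
                    cands.add(sq)
    return cands

def is_sum_of(alpha, k):           # step (3): recursion over at most k squares
    if alpha == (0, 0):
        return True
    if k == 0:
        return False
    return any(is_sum_of(sub(alpha, sq), k - 1) for sq in candidate_squares(alpha))

def length(alpha, cap=8):          # the length of alpha, or None
    if not totally_nonneg(alpha):  # step (1)
        return None
    for k in range(cap + 1):
        if is_sum_of(alpha, k):
            return k
    return None

# 3 + (1 + sqrt(6))^2 = 10 + 2*sqrt(6) should have length 4 (see Theorem th:quadratic).
print(length((10, 2)))
\end{verbatim}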
This algorithm is very useful if one has a guess about an element which will require many squares in its representation. If one wants to get an idea about the probable value of $\P(\O_K)$, one can use the following modification, which determines the lengths of all elements up to a given trace:
\begin{enumerate}
\item Choose a constant $T$ and find the set
\[
S^{(1)} = \{x^2 : x \in \O_K, \atr x^2 \leq T\}.
\]
This is done similarly as in the first algorithm.
\item Compute all sums of two elements of $S^{(1)}$ and reject those whose absolute trace exceeds $T$, thus forming the set
\[
S^{(2)} = \{x_1^2+x_2^2 : x_1,x_2 \in \O_K, \atr (x_1^2+x_2^2) \leq T\}.
\]
\item By iteratively adding all elements of $S^{(1)}$, compute the sequence of sets
\[
S^{(n)} = \{x_1^2 + \cdots + x_n^2 : x_i \in \O_K, \atr (x_1^2 + \cdots + x_n^2) \leq T\}.
\]
\item Stop as soon as $S^{(n+1)}=S^{(n)}$. Then $\P(\O_K)\geq n$; in particular, any element of $S^{(n)} \setminus S^{(n-1)}$ has length $n$.
\end{enumerate}
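The modification is just as easy to prototype. The sketch below (again a self-contained illustration for a quadratic order rather than a biquadratic one; the choices $N=7$ and $T=11$ are hypothetical) builds the sets $S^{(1)}\subseteq S^{(2)}\subseteq\cdots$ and stops once they stabilise; note that the zero square lies in $S^{(1)}$, which is what makes the sets nested and justifies the stopping criterion.
\begin{verbatim}
import math

N, T = 7, 11                 # hypothetical example: Z[sqrt(7)], absolute-trace bound T

S1 = set()                   # step (1): all squares with absolute trace at most T
for u in range(math.isqrt(T) + 1):
    for v in range(math.isqrt(T // N) + 1):
        if u * u + v * v * N <= T:
            S1.add((u * u + v * v * N, 2 * u * v))
            S1.add((u * u + v * v * N, -2 * u * v))

levels = [{(0, 0)}, S1]      # levels[n] = S^(n)
while levels[-1] != levels[-2]:          # steps (2)-(4): iterate until stable
    levels.append({(a + c, b + d)
                   for (a, b) in levels[-1] for (c, d) in S1
                   if a + c <= T})

n = len(levels) - 2          # the last index where a new element appeared
witness = next(iter(levels[n] - levels[n - 1]))
print("P(O) >=", n, "witnessed by", witness)   # for N = 7, T = 11 this should print 4
\end{verbatim}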
As $S^{(1)}$ has $O(T^2)$ elements, the time and space complexity of this algorithm is $O(T^{2(n+1)})$, where $n$ is the expected $\P(\O_K)$ as above. To reduce the space complexity, it is e.g.\ possible to restrict ourselves to finding all sums of $n$ squares with a specific trace (and do this for every trace up to $T$).
Checking whether a given element $\alpha$ is a sum of $n$ squares can be done with time complexity $O(T^{2\ceil{n/2}})$, where $T = \atr \alpha$, since one can construct $S^{(\floor{n/2})}$ and $S^{(\ceil{n/2})}$ and, for every element in $S^{(\floor{n/2})}$, check whether its complementary element is in $S^{(\ceil{n/2})}$.
These time restrictions proved to be sufficient for our needs and also for producing hypotheses about infinite families of $\O_K$. In particular, the results of Subsection \ref{ss:235} were observed after running this algorithm over several hundreds of fields of similar type.
\subsection{The computed data}
Here we collected those lemmata needed in later proofs which were checked by an algorithm explained in the previous subsection. Of course, in principle they could be verified by a straightforward but very tedious computation with pen and paper.
\begin{lemma}\label{le:comp}
Let $K = \BQ{m}{s}$ be a totally real biquadratic field with the usual Convention \ref{no:mst} that $m<s<t$ are square-free. To simplify the notation, put
\[
\alpha_0 = \begin{cases}
7 + (1+\sqrt{m})^2 & \text{if $m \not\equiv 1 \pmod 4$},\\
7 + \Bigl(\frac{1+\sqrt{m}}{2}\Bigr)^2 & \text{if $m \equiv 1 \pmod 4$}.
\end{cases}
\]
Then:
\begin{enumerate}
\item If $(m,s)$ is one of $(17,19)$, $(17,21)$, $(17,22)$, $(21,22)$ and $(21,23)$, then $\ell(\alpha_0)=5$. \label{it:mIs1}
\item If $(m,s)$ is one of $(10,11)$, $(10,17)$, $(10,19)$, $(11,14)$, $(11,17)$, $(11,21)$; $(14,15)$, $(14,17)$, $(14,19)$, $(15,17)$, $(15,21)$, $(15,22)$, $(19,21)$, $(19,22)$, $(22,23)$, $(23,26)$ -- i.e.\ if $K$ lies in $\K_1 \cup \K_2 \cup \K_3$ in the notation of Subsection \ref{ss:P>=5mEquiv23} and $(m,s) \neq (10,13)$, $(11,13)$, $(30,35)$ -- then $\ell(\alpha_0)=5$. \label{it:mNot1Gen}
\item If $(m,s)$ is $(10,13)$ or $(11,13)$, then $3+(\frac{1+\sqrt{13}}{2})^2+(1+\frac{1+\sqrt{13}}{2})^2$ has length $5$; if $(m,s) = (30,35)$, the same holds for $7 + (1+\sqrt{42})^2$. \label{it:threeexceptions}
\item If $(m,s)$ is one of $(22,26)$, $(26,30)$, $(19,23)$, then $15 + (1+\sqrt{m})^2$ has length $5$. \label{it:mNot1SpecGood15}
\item If $(m,s)$ is one of $(30,38)$, $(34,42)$, $(38,46)$, $(31,39)$, $(35,43)$, $(39,47)$, $(43,51)$, $(47,55)$; $(34,46)$, $(31,43)$, $(35,47)$, $(39,51)$, $(43,55)$, then $28 + (1+\sqrt{m})^2$ has length~$5$. \label{it:mNot1SpecGood28}
\item If $(m,s)$ is one of $(10, 14)$, $(11,15)$, $(15,19)$; $(14,22)$, $(22,30)$, $(26,34)$, $(11,19)$, $(15,23)$, $(23,31)$; $(10,22)$, $(14,26)$, $(22,34)$, $(26,38)$, $(11,23)$, $(19,31)$, $(23,35)$,
then $7+\bigl(\frac{\sqrt{m}+\sqrt{s}}{2} \bigr)^2$ has length $5$. \label{it:mNot1SpecBad}
\item If $m=6$ and $s$ is one of $13$, $17$, $29$, $37$, $41$; $7$, $11$, $19$; $26$, $34$; $14$, $22$, $38$, $46$, $62$, $70$, $86$, then the $\alpha$ defined in Proposition \ref{pr:sqrt6} has length $5$. \label{it:sqrt6}
\item If $m=6$ and $s=10$, then $3+\bigl(\sqrt{6} -\frac{\sqrt{6}+\sqrt{10}}{2}\bigr)^2+\bigl(2+\frac{\sqrt{6}+\sqrt{10}}{2}\bigr)^2$ has length $5$. \label{it:sqrt6spec}
\item If $m=7$ and $s$ is one of $13$, $17$, $29$, $33$, $37$, $41$; $10$; $15$, $19$, $23$, $31$, $39$, $43$, $47$, $51$, then the $\alpha$ defined in Proposition \ref{pr:sqrt7} has length $5$. \label{it:sqrt7}
\item If $m=7$ and $s=11$, then $3+\bigl(\frac{\sqrt{7}+\sqrt{11}}{2}\bigr)^2+\bigl(1+\frac{\sqrt{7}+\sqrt{11}}{2}\bigr)^2$ has length $5$.\notabene{By the way, an analogy of this holds for all the fields of same type contained in the previous statement.}
\label{it:sqrt7spec}
\item If $m=2$ and $s$ is one of $11$, $15$, $17$, $21$, $29$, then the $\alpha$ defined in Proposition \ref{pr:sqrt2} has length $5$. \label{it:sqrt2}
\item If $m=2$ and $s=13$, then $3+(1+\sqrt{2})^2+\Bigl(\sqrt{2}+\frac{1+\sqrt{13}}{2}\Bigr)^2+\Bigl(2+\frac{1+\sqrt{13}}{2}+\frac{\sqrt{2}+\sqrt{26}}{2}\Bigr)^2$ has length $5$. \label{it:sqrt2spec}
\item If $m=3$ and $s$ is one of $13$, $17$, $29$, $37$, $41$; $10$; $11$, $19$, $23$, $31$, $35$, $43$, then the $\alpha$ defined in Proposition \ref{pr:sqrt3} has length $5$. \label{it:sqrt3}
\item If $m=5$ and $s = 13$ or $s=17$, then the $\alpha$ defined in Proposition \ref{pr:sqrt5sIs1} has length $5$. \label{it:sqrt5sIs1}
\item If $m=5$ and $s\not\equiv 1 \pmod4$, $7 < s \leq 3253$, then the $\alpha$ defined in Proposition \ref{pr:sqrt5sNot1} has length $5$. \label{it:sqrt5sNot1}
\item If $m=13$ and $s$ is one of $17$, $21$, $29$, $33$, $37$, $41$, then the $\alpha$ defined in Proposition \ref{pr:sqrt13P>=6} has length $6$. \label{it:sqrt13P>=6}
\item In fields $\BQ{13}{n}$ with $n=6,7,10,11$, the element $12+2\sqrt{13} + (1+\sqrt{n})^2$ has length $6$. \label{it:sqrt13=s}
\end{enumerate}
\end{lemma}
\begin{proof}
As we pointed out, proving that an element $\alpha$ is not a sum of $n$ squares in a given field is just a straightforward computational task. Therefore, we proved this lemma by implementing the above-described algorithm in Python.
One small exception is part (\ref{it:sqrt5sNot1}), but only for technical reasons: To reduce computation time, we verified the statement by computer only for $7 < s \leq 499$, while the remaining cases were handled by refining the proof of Proposition \ref{pr:ourStrongAsymptotics} -- see Observation \ref{ob:sqrt5Improvement}.
\end{proof}
During our computations, only seven fields stood out as possible candidates for $\P(\O_K) < 5$ (for all other fields, we were able to find an element of length $5$ with a relatively small trace). Specifically, for $K = \BQ{2}{3},\BQ{2}{5},\BQ{3}{5}$ we suspect $\P(\O_K)=3$ and for $K =\BQ27, \BQ37, \BQ56, \BQ57$ we suspect $\P(\O_K)=4$.
The following proposition collects our results about these seven fields: We present an element of the suspected maximal length as well as bounds up to which the computation was completed. The example elements $\alpha_0$ are chosen with the lowest possible trace (many elements with the desired length can be found).
\begin{proposition}\label{pr:comp_examples}
If $(K, l, \alpha_0)$ is one of the triples specified below, then $\ell(\alpha_0) = l$ and $\ell(\alpha) \leq l$ for all $\alpha \in \sum \O_K^2$ such that $\Tr \alpha \leq 500$.
\begin{enumerate}
\item $K= \BQ23$, $l = 3$, $\alpha_0 = 6+\sqrt{2}+\sqrt{6}$;
\item $K= \BQ25$, $l = 3$, $\alpha_0 = 6+\sqrt{5}$;
\item $K= \BQ35$, $l = 3$, $\alpha_0 = 3+\frac{1+\sqrt{5}}{2}$;
\item $K= \BQ27$, $l = 4$, $\alpha_0 = 10+2\sqrt{2}+\sqrt{7}$;
\item $K= \BQ37$, $l = 4$, $\alpha_0 = 8+\sqrt{3}+\sqrt{7}$;
\item $K= \BQ56$, $l = 4$, $\alpha_0 = 10+2\sqrt{6}+\frac{1+\sqrt{5}}{2}$;
\item $K= \BQ57$, $l = 4$, $\alpha_0 = 11+2\sqrt{7}+\frac{1+\sqrt{5}}{2}$.
\end{enumerate}
\end{proposition}
Note that a nontrivial upper bound shall be proven only for fields containing $\sqrt5$, and in general, obtaining an upper bound even for a single field is difficult: Therefore, although we are convinced that e.g.\ $\P(\O_K)=3$ for $K=\BQ23$, we know only $3 \leq\P(\O_K)\leq 7$.
The following proposition collects some of our computed data for fields containing $\sqrt3$. Based on it, we conjecture that there are only finitely many fields $K=\BQ{3}{s}$ with $\P(\O_K)\leq 5$; this is in sharp contrast with the situation in fields containing $\sqrt2$ or $\sqrt5$ (see the conjecture in Subsection \ref{ss:conjecture} and compare this with the situation in quadratic fields, described in Section \ref{se:quadratic}).
\begin{proposition}\label{pr:sqrt3shock}
For $K=\BQ{3}{s}$ where $s=17$, $s=22$ or $26 \leq s \leq 55$, we have $\P(\O_K) \geq 6$. If $s$ is odd, the same holds also for $55 < s \leq 79$.
In all these cases, there exists $\alpha \in \O_K$ with $\ell(\alpha)=6$ and $\Tr(\alpha)\leq 1000$. For example, the following elements have length $6$:
\[
87 + 18\sqrt3 - 20\sqrt{17} - 4\sqrt{51}; \quad 150 + 45\sqrt3 - 14\sqrt{26}; \quad \frac12(199 - 50\sqrt3 + 14\sqrt{31} - 19\sqrt{93}).
\]
\end{proposition}
Although our program showed that for $s = 58, 62, 70$ or $74$, elements with $\Tr(\alpha)\leq 1100$ have length at most five, we believe that $\P(\O_K)\geq 6$ holds even in those fields (and in fact for all $K \ni \sqrt3$ with $s \geq 26$). A hint towards this is also the fact that for $s=46$, all elements with $\Tr(\alpha)\leq 900$ have $\ell(\alpha)\leq 5$. The computer we used is not a particularly strong one, since numerical experiments were not our priority.
\section{Results for fields with coprime \texorpdfstring{$m, s$}{m, s}} \label{se:coprime}
In this section we examine the specific fields $K = \BQ{m}{s}$ where $m$ and $s$ are coprime, i.e.\ $t = ms$. They are easier to work with than the general case (and therefore also often occur in works of other authors); in particular, if we restrict to the fields of type (B1), we were able to prove that the Pythagoras number of $\O_K$ is $7$ (with a few exceptions); see Theorem \ref{th:(B1)P=7}. This proves one of the main results of this paper, Theorem \ref{th:main7infinitely}. The other results of this section can be put neatly together as follows:
\begin{proposition} \label{pr:coprime}
Let $p,q \notin\{2,3,5,6,7,13\}$ be two coprime square-free positive integers which are not both congruent to $3$ modulo $4$.\notabene{I didn't try that much, but I've managed to solve the problematic case $m\equiv s \equiv 3$ only if $s>32+3m$. In such a case, the length of $7 + (1+\sqrt{m})^2 + \bigl(\frac{\sqrt{m}+\sqrt{s}}{2}\bigr)^2$ is $6$. It's quite possible that the same element works in all other cases as well!} Then $\P(\O_K)\geq 6$ for $K=\BQ{p}{q}$.
\end{proposition}
\begin{proof}
This is just a combination of Propositions \ref{pr:B1coprime}, \ref{pr:B23comprime} and \ref{pr:B4coprime}.
\end{proof}
As a side remark, the inequalities for $p,q$ in all the following propositions are optimal in the sense that if they are violated, then the given $\alpha$ has length less than $6$. To see this, observe that the inequalities only exclude fields containing square roots of $2$, $3$, $5$, $6$, $7$ or $13$, and in these fields $\ell(7)<4$, see Section \ref{se:quadratic}. On the other hand, at least for fields containing $\sqrt{13}$ another element of length $6$ can be found, see Proposition \ref{pr:sqrt13P>=6}.
We start with a result for fields of type (B1); besides being one third of the above proposition, it will also serve as a lemma for the upcoming Theorem \ref{th:(B1)P=7}.
\begin{proposition}[Type (B1)] \label{pr:B1coprime}
Let $K=\BQ{p}{q}$ where $p\equiv 2$ and $q\equiv3 \pmod{4}$ are coprime square-free integers such that $p,q\geq 10$. Then $\P(\O_K)\geq 6$.
In particular, the following element has length $6$:
\[
\alpha = 7+(1+\sqrt{p})^2+(1+\sqrt{q})^2.
\]
\end{proposition}
\begin{proof}
In this case, the integral basis is given by $\O_K=\Span_{\Z}\bigl\{1, \sqrt{p}, \sqrt{q}, \frac{\sqrt{p}+\sqrt{pq}}2\bigr\}$. Suppose
\[
\alpha = \sum \Bigl(a_i + b_i\sqrt{p} + c_i\sqrt{q} + d_i\frac{\sqrt{p}+\sqrt{pq}}2\Bigr)^2.
\]
This means
\begin{align*}
9+p+q+2\sqrt{p}+2\sqrt{q} = \sum\limits \Bigl(&\bigl(a_i^2+b_i^2p+c_i^2q+d_i^2\frac{p+pq}{4}+b_id_ip\bigr)\\
&+ (2a_ib_i+a_id_i+c_id_iq)\sqrt{p}
+ \bigl(2a_ic_i+b_id_ip+d_i^2\frac{p}{2}\bigr)\sqrt{q}\\
&+ \left(2b_ic_i+c_id_i+a_id_i\right)\sqrt{pq}\Bigr).
\end{align*}
By comparing coefficients, one gets the following conditions:
\begin{align}
9+p+q & = \sum a_i^2+q\sum c_i^2+p\sum\biggl(\!\Bigl(b_i+\frac{d_i}{2}\Bigr)^2+q\frac{d_i^2}{4}\biggr), \label{eq:B1c:1}\\
2 & = 2\sum a_ib_i+\sum a_id_i+q\sum c_id_i, \label{eq:B1c:2} \\
2 & = 2\sum a_ic_i+p\sum b_id_i+\frac{p}{2}\sum d_i^2, \label{eq:B1c:3}\\
0 & = 2\sum b_ic_i+\sum c_id_i+\sum a_id_i. \label{eq:B1c:4}
\end{align}
Without loss of generality $a_i\geq0$ and we will prove the following statements:
\begin{enumerate}[(i)]
\item There is a nonzero $b_i$ and a nonzero $c_j$.
\item All the coefficients $d_i$ are zero.
\item There is exactly one nonzero coefficient $|b_i|=1$ and one nonzero coefficient $|c_j|=1$.
\item If $c_i\neq 0$, then $b_i=0$ and vice versa.
\item If $c_i$ is nonzero, then $a_i=c_i=1$, the same for $b_i$.
\end{enumerate}
The proofs follow:
\textbf{(i)} Suppose $b_i = 0$ for all $i$.
Then from (\ref{eq:B1c:4}) we get $\sum c_id_i = -\sum a_id_i$. \\
Together with (\ref{eq:B1c:2}), this yields
\[
2 = q\sum c_id_i - \sum c_id_i = (q-1) \sum c_id_i,
\]
which is impossible, as $q\geq 10$ and $\sum c_id_i$ is a rational integer.
If $c_i = 0$ for all $i$, we proceed similarly: from (\ref{eq:B1c:3}) it follows that
$2 = \frac{p}{2}\sum (2b_id_i+d_i^2)$,
which is again impossible, as $p\geq 10$ and $\sum (2b_id_i+d_i^2)$ is a rational integer.
\textbf{(ii)} Suppose there is some $|d_i|> 0$. Since there is also some nonzero $c_i$, from (\ref{eq:B1c:1}) we get
$9+p+q \geq q+ \frac{pq}{4}$.
This is not possible for $p,q\geq 10$.
\textbf{(iii)}
We already know that there is at least one nonzero $c_j$ and $b_i$. On the other hand, equation (\ref{eq:B1c:1}) now yields
\[
9+p+q \geq q\sum c_i^2+p\sum b_i^2.
\]
If $\sum c_i^2>1$ or $\sum b_i^2>1$ this is clearly impossible as $p,q\geq 10$.
\textbf{(iv)} This is an easy consequence of $0 = 2\sum b_ic_i$ (\ref{eq:B1c:4}) and the previous statement.
\textbf{(v)} These are again direct consequences of $1 = \sum a_ib_i$ (\ref{eq:B1c:2}) and $1 = \sum a_ic_i$ (\ref{eq:B1c:3}) and our assumption that $a_i\geq 0$.
From the previous statements, it can be seen that the only way to write $\alpha$ as a sum of squares is $\alpha = (1+\sqrt{p})^2+(1+\sqrt{q})^2+\sum x_i^2$, where $x_i \in \Z$. This concludes the proof, as $7$ is not a sum of three squares of rational integers: by Legendre's three-square theorem, no integer of the form $8b+7$ is.
\end{proof}
Exploiting the previous proposition, we are able to prove one of our main results. Observe that it covers \emph{all} fields with coprime $m,s \geq 10$ of type (B1), since if $m,s$ are coprime, they cannot both be $2$ modulo $4$.
\begin{theorem}[Type (B1)] \label{th:(B1)P=7}
Let $K=\BQ{p}{q}$ where $p\equiv 2$ and $q\equiv3 \pmod{4}$ are coprime square-free integers such that $p,q\geq 10$. Then $\P(\O_K)=7$.
In particular, the following element has length $7$:
\[
\alpha = 7 +(1+\sqrt{p})^2 +(1+\sqrt{q})^2 + \Bigl(\frac{\sqrt{p}+\sqrt{pq}}{2}\Bigr)^2.
\]
\end{theorem}
\begin{proof}
It suffices to prove that $\ell(\alpha)=7$; the rest is clear since $7$ is an upper bound on the Pythagoras number for every order of degree $4$. For a field of type (B1), we have $\O_K=\Span_{\Z}\bigl\{1, \sqrt{p}, \sqrt{q}, \frac{\sqrt{p}+\sqrt{pq}}2\bigr\}$. Suppose that $\alpha =
9+p+q+\frac{p+pq}{4}+2\sqrt{p}+\left(2+\frac{p}{2}\right)\sqrt{q}$ is a sum of an arbitrary number of squares.
Analogously to the proof of Proposition \ref{pr:B1coprime}, we denote the squares as $\bigl(a_i + b_i\sqrt{p} + c_i\sqrt{q} + d_i\frac{\sqrt{p}+\sqrt{pq}}2\bigr)^2$. By comparing coefficients, one gets the following conditions:
\begin{align}
9+p+q+\frac{p+pq}{4} & = \sum a_i^2+q\sum c_i^2+p\sum \biggl(\!\Bigl(b_i+\frac{d_i}{2}\Bigr)^2+\frac{d_i^2q}{4}\biggr), \label{eq:B1_7:1}\\
2 & = 2\sum a_ib_i+\sum a_id_i+q\sum c_id_i, \label{eq:B1_7:2}\\
2+\frac{p}{2} & = 2\sum a_ic_i+p\sum b_id_i+\frac{p}{2}\sum d_i^2, \label{eq:B1_7:3}\\
0 & = 2\sum b_ic_i+\sum c_id_i+\sum a_id_i. \label{eq:B1_7:4}
\end{align}
Without loss of generality, all $d_i\geq0$, and we will prove, step by step, the following lemmata:
\begin{enumerate}[(i)]
\item There exists a nonzero $b_i$ and a nonzero $c_j$.
\item There is exactly one nonzero $d_i$, say $d_1$. Moreover, $d_1 = 1$ and $b_1 \in \{-1,0\}$.
\item There exists a nonzero $b_i$ for $i>1$.
\item There is exactly one $j$ such that $c_j$ is nonzero, and exactly one $k>1$ such that $b_k$ is nonzero. For these indices, $|c_j|=1$ and $|b_k|=1$. Also $\sum a_i^2 = 9$.
\item $a_1 = b_1 = c_1 = 0$. Thus, the first summand is $\bigl(\frac{\sqrt{p}+\sqrt{pq}}{2}\bigr)^2$.
\end{enumerate}
The last lemma implies that $\ell(\alpha)=7$ since $7+(1+\sqrt{p})^2+(1+\sqrt{q})^2$ has length $6$ by Proposition \ref{pr:B1coprime}.
Now let us present the proofs of the lemmata:
\textbf{(i)} This can be proven in the same way as the first part of the proof of Proposition \ref{pr:B1coprime}.
\textbf{(ii)} If $d_i^2\geq 4$ for some $i$ or there are at least two $d_i = 1$, the RHS of (\ref{eq:B1_7:1}) is at least $q+\frac{pq}{2}$ (since there is a nonzero $c_i$); however, $q+\frac{pq}{2}>9+p+q+\frac{p+pq}{4}$ for $p,q \geq 10$.
On the other hand, comparing parities of LHS and RHS of (\ref{eq:B1_7:3}), there must exist at least one odd $d_i$, implying there is exactly one nonzero $d_i$, without loss of generality $d_1 = 1$.
Now if $b_1 \notin \{-1,0\}$, then $p\bigl( b_1+\frac{d_1}{2}\bigr)^2\geq \frac{9}{4}p$, which, taken together with the other known estimates (nonzero $c_i$ and $d_1=1$), makes (\ref{eq:B1_7:1}) impossible for $p\geq 10$.
\textbf{(iii)} If $b_1=0$, it is clear from \textbf{(i)}, so assume $b_1=-1$ and $b_i=0$ for $i>1$.
Then from (\ref{eq:B1_7:2}) we get $2 = -2a_1+a_1+c_1q$, from (\ref{eq:B1_7:4}) also $0 = -2c_1+c_1+a_1$. By adding these two equalities, we get $2 = c_1(q-1)$, which is impossible.
\textbf{(iv)} This almost immediately follows from earlier results and (\ref{eq:B1_7:1}): With our knowledge, the equality can be rewritten as
\[
9+p+q+\frac{p+pq}{4} = \sum a_i^2 + q\sum c_i^2 + p \frac{1+q}{4} + p\sum_{i \geq 2} b_i^2.
\]
Since there is at least one nonzero $c_i$, we know $\sum c_i^2 \geq 1$, and similarly $\sum_{i \geq 2} b_i^2 \geq 1$. Thus the RHS is at least $p+q+\frac{pq+p}{4}$, which is LHS minus $9$. From this it is clear that both aforementioned sums must be exactly $1$ (implying that exactly one summand in each of them is $1$) and $\sum a_i^2 =9$.
\textbf{(v)}
From (\ref{eq:B1_7:3}) we get $2-pb_1 = \sum 2a_ic_i$. This is impossible for $b_1\neq 0$, as one easily deduces from $p\geq 10$ and the fact that there is exactly one nonzero $|c_i|=1$ and that $\sum a_i^2 =9$.
If $c_1\neq 0$, from (\ref{eq:B1_7:2}) we get $2-c_1q - a_1 =\sum_{i>1} 2a_ib_i$.
Since $\sum a_i^2 = 9$ and $q\geq 10$, this is again impossible as there is only one nonzero $b_i$ for $i>1$, equal to $\pm1$. Thus $c_1=0$.\notabene{For $q=11$ it might seem possible at first glance, but only if there were two $a_i$ with $|a_i|=3$, which contradicts $\sum a_i^2 = 9$.}
From (\ref{eq:B1_7:4}) we get $a_1 = -\sum_{i>1}2b_ic_i$, so the only other possibility besides $a_1=0$ is $a_1 = \pm2$.
If $a_1=-2$, equations (\ref{eq:B1_7:2}), (\ref{eq:B1_7:3}), (\ref{eq:B1_7:4}) read as $2 = \sum_{i>1}a_ib_i$, $1= \sum_{i>1} a_ic_i$, $1 = \sum_{i>1} b_ic_i$. The last one shows that $b_i \neq 0$ and $c_j\neq 0$ must happen for $i=j$, but then the first two equations imply different values of $a_i$, which is a contradiction.
If $a_1=2$, we get $0 = \sum_{i>1}a_ib_i$, $1= \sum_{i>1} a_ic_i$, $-1 = \sum_{i>1} b_ic_i$, which leads to the same contradiction.
\end{proof}
The following propositions handle the fields of type (B2,3) and (B4). They can be proved using the exact same five steps as in the proof of Proposition \ref{pr:B1coprime}, each step analogously requiring either straightforward trace inequalities or a simple parity argument. As all these ideas have already been illustrated, we omit the proofs.
One thing to mention is that instead of using the integral basis explicitly, one can work with elements of the form $\frac{a_i}{4}+\frac{b_i}{4}\sqrt{p}+\frac{c_i}{4}\sqrt{q}+\frac{d_i}{4}\sqrt{pq}$ and impose congruence relations upon the coefficients. The structure of the proofs would remain unchanged, and the presentation may even become clearer, especially for fields of type (B4). This alternative approach is often used throughout the rest of the paper.
\begin{proposition}[Type (B2,3)] \label{pr:B23comprime}
Let $K=\BQ{p}{q}$ where $p\equiv 2,3$ and $q\equiv 1 \pmod{4}$ are coprime square-free integers such that $p\geq 10$, $q\geq 17$. Then $\P(\O_K)\geq 6$.
In particular, the following element has length $6$:
\[
\alpha = 7+(1+\sqrt{p})^2+\Bigl(\frac{1+\sqrt{q}}{2}\Bigr)^2.
\]
\end{proposition}
\begin{proposition}[Type (B4)] \label{pr:B4coprime}
Let $K=\BQ{p}{q}$ where $p \equiv q \equiv 1 \pmod{4}$ are coprime square-free integers such that $p,q\geq17$. Then $\P(\O_K)\geq 6$.
In particular, the following element has length $6$:
\[
\alpha = 7+\Bigl(\frac{1+\sqrt{p}}{2}\Bigr)^2+\Bigl(\frac{1+\sqrt{q}}{2}\Bigr)^2.
\]
\end{proposition}
Note that in the last proposition, the field is in fact always of type (B4a) since $\gcd(p,q) \equiv 1 \pmod4$. Let us also remark that the omitted case $p \equiv q \equiv 3$ (see Proposition \ref{pr:coprime}) seems to be somewhat harder, but we believe that $\P(\O_K)\geq 6$ holds even in that situation.
\section{Pythagoras number is at least 5 (up to seven exceptions)} \label{se:P>=5}
In this section we prove the first half of Theorem \ref{th:main2parts}, i.e.\ that $\P(\O_K) \geq 5$ unless $K = \BQ23$, $\BQ25$, $\BQ35$ (where we expect $\P(\O_K)=3$) or $K = \BQ37$, $\BQ27$, $\BQ56$, $\BQ57$ (where we expect $\P(\O_K)=4$).
The structure of the proof is as follows: The fields which contain $\sqrt2$, $\sqrt3$ or $\sqrt5$ are handled in Subsection \ref{ss:235}, with the most difficult case postponed until Subsection \ref{ss:strongAsymptotics}; the fields containing $\sqrt6$ or $\sqrt7$ are discussed in Subsection \ref{ss:67}. The fields containing $\sqrt{13}$ are postponed until Proposition \ref{pr:sqrt13P>=6}, where we prove that the Pythagoras number is in fact at least $6$. For the fields which contain none of these six numbers, the easier case of $m \equiv 1 \pmod4$ is solved in Subsection \ref{ss:P>=5mEquiv1}, while Subsection \ref{ss:P>=5mEquiv23} solves a majority of the fields where $m \not\equiv 1 \pmod4$. Among these, the fields with $s=m+4$, $s=m+8$ or $s=m+12$ turn out to be the most problematic; a solution for them is provided in Subsection \ref{ss:m+4,m+8,m+12}. A formal proof of the whole Theorem \ref{th:main2parts} (\ref{it:main(1)}), including references to specific propositions rather than subsections, is situated at the very end of Section \ref{se:P>=5}.
Since the proof is a bit lengthy, we first provide a hint of how to easily obtain the weaker estimate $\P(\O_K)\geq 4$: By trace considerations, it is simple to show that if $\ell(7)<4$, then $K$ must contain $\sqrt2$, $\sqrt3$, $\sqrt5$, $\sqrt6$, $\sqrt7$ or $\sqrt{13}$. For fields with $m=6$, the element $10+2\sqrt{6}=1+1+1+(1+\sqrt{6})^2$ is a sum of fewer than four squares only in $\BQ6{10}$, and there one easily finds another element which is not a sum of three squares. For fields with $m=7$ or $m=13$, one obtains similar results for $11+2\sqrt{7}=1+1+1+(1+\sqrt{7})^2$ and $\frac{13}{2}+\frac{1}{2}\sqrt{13}=1+1+1+\big(\frac{1+\sqrt{13}}{2}\big)^2$, respectively; only the field $\BQ{7}{11}$ has to be handled separately. It then remains to find a proof for fields containing $\sqrt{2}$, $\sqrt{3}$ or $\sqrt{5}$. This is easily done by the same method which we use in Subsection \ref{ss:67} for proving $\P(\O_K)\geq 5$ for $m=6, 7$; a more general explanation of this strategy is provided later in Subsection \ref{ss:P>=6idea}.
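For instance, the identities used above for $m=7$ and $m=13$ are immediate to verify:
\[
1+1+1+(1+\sqrt{7})^2 = 3 + 8 + 2\sqrt{7} = 11+2\sqrt{7},
\qquad
1+1+1+\Bigl(\tfrac{1+\sqrt{13}}{2}\Bigr)^2 = 3 + \tfrac{14+2\sqrt{13}}{4} = \tfrac{13}{2}+\tfrac{\sqrt{13}}{2}.
\]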
The rest of the section is devoted solely to proving the result advertised in its title. Remember that we keep the notation $1<m<s<t$ for the three square-free integers whose square roots belong to $K$.
\subsection{The cases where \texorpdfstring{$m \equiv 1 \pmod4$}{m is 1 mod 4}} \label{ss:P>=5mEquiv1}
The following proposition shows that for $m \equiv 1 \pmod4$, the simplest element of length $5$ in $\Z\bigl[\frac{1+\sqrt{m}}{2}\bigr]$ has length $5$ also in $\O_K$:
\begin{proposition} \label{pr:mIs1}
Let $K = \BQ{m}{s}$ be a totally real biquadratic field with $m \equiv 1 \pmod{4}$ where $m \neq 5,13$. Then $\P(\O_K)\geq 5$.
In particular, $\ell(\alpha_0)=5$ holds for
\[
\alpha_0 = 7 + \Bigl(\frac{1+\sqrt{m}}{2}\Bigr)^2 = 7 + \frac{1+m}{4} + \frac{\sqrt{m}}{2}.
\]
\end{proposition}
\begin{proof}
Peters \cite{Pe} proved that $\alpha_0$ cannot be expressed as a sum of four squares in $\O_{\Q(\!\sqrt{m})}$, see Theorem \ref{th:quadratic}. Therefore, if $\alpha_0 = \sum_{i=1}^{4}x_i^2$ in $\O_K$, we can assume $x_1 \notin \Q(\!\sqrt{m})$.
First suppose that $x_i \in \Q(\!\sqrt{m})$ for $i\neq 1$. Then $x_1^2 \in \Q(\!\sqrt{m})$, hence by Lemma \ref{le:KTZ} $x_1$ has the form $k\sqrt{s}+l\sqrt{t}$ for $k,l\in\Q$. However, such elements have a large trace: to minimise the trace of $x_1^2$, one has to choose either $x_1 = \sqrt{s}$ or $x_1=\frac{\sqrt{s}\pm\sqrt{t}}{2}$. Therefore, by comparing traces of $\alpha_0$ and $x_1^2$, we obtain one of the inequalities
\[
7 + \frac{1+m}{4} \geq s \quad \text{or} \quad 7 + \frac{1+m}{4} \geq \frac{s+t}{4}.
\]
The first inequality is nonsense since $s\geq m+1$. As for the second, it clearly implies $t \leq 28$ which holds only for finitely many biquadratic fields; and none of them satisfies $m \equiv 1 \pmod{4}$.
Therefore we can assume $x_1,x_2\notin \Q(\!\sqrt{m})$. We distinguish two cases. In the first, both $x_1$ and $x_2$ are of the form $\frac{a+b\sqrt{m}+c\sqrt{s}+d\sqrt{t}}{2}$ for $a,b,c,d \in \Z$, i.e.\ \enquote{no quarter-integers are involved}. Then $\atr(x_1^2+x_2^2) \geq 2\atr\bigl(\frac{1+\sqrt{s}}{2}\bigr)^2$, yielding
\[
7 + \frac{1+m}{4} \geq 2\frac{1+s}{4},
\]
i.e.\ $27 + m \geq 2s$. This, together with our conditions on $m$, holds only for the five fields $\BQ{17}{19}$, $\BQ{17}{21}$, $\BQ{17}{22}$, $\BQ{21}{22}$, $\BQ{21}{23}$; these fields were handled by our computer program, see Lemma \ref{le:comp} (\ref{it:mIs1}).
In the other case, one of $x_i$ is of the form $\frac{a+b\sqrt{m}+c\sqrt{s}+d\sqrt{t}}{4}$ with $a,b,c,d$ odd. Then, by Lemma \ref{le:quarterSquares}, $x_i^2$ is of the same form; hence, to get $\alpha_0$, at least one other summand must take the same form. Comparing traces, this yields
\[
7 + \frac{1+m}{4} \geq 2 \frac{1+m+s+t}{16},
\]
i.e.\ $57+m\geq s+t$, implying $56\geq t$. This is impossible, since quarter-integers can only occur for fields of type (B4), and the smallest value of $t$ in such a field is $65$.
\end{proof}
\subsection{The cases where \texorpdfstring{$m \not\equiv 1 \pmod4$}{m is not 1 mod 4}} \label{ss:P>=5mEquiv23}
If we consider the fields where $m \not\equiv 1 \pmod4$, it is natural to ask whether $\alpha_0 = 7 + (1+\sqrt{m})^2$ satisfies $\ell(\alpha_0)=5$. Usually, the answer is positive. However, in contrast with the analogous question for $m\equiv 1$, there are infinitely many fields where this is not the case:
\begin{observation} \label{ob:m+4etcIsAProblem}
In a field $\BQ{m}{s}$ where $s=m+4$, $s=m+8$ or $s=m+12$, four squares suffice to represent $\alpha_0 = 7 + (1+\sqrt{m})^2 = (8+m) + 2\sqrt{m}$. More specifically, denote $y_1 = 1 + \frac{\sqrt{m}+\sqrt{s}}{2}$ and $y_2 = 1 + \frac{\sqrt{m}-\sqrt{s}}{2}$. Then
\[
\alpha_0 = \begin{cases}
y_1^2+y_2^2+2^2 & \text{if $s = m + 4$},\\
y_1^2+y_2^2+1^2+1^2 & \text{if $s = m + 8$},\\
y_1^2+y_2^2 & \text{if $s = m + 12$},
\end{cases}
\]
as one easily sees from the general identity $y_1^2+y_2^2 = 2 + \frac{m+s}{2} + 2\sqrt{m}$.
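Indeed, expanding the squares gives
\[
y_1^2+y_2^2 = 2 + \bigl(\sqrt{m}+\sqrt{s}\bigr) + \bigl(\sqrt{m}-\sqrt{s}\bigr)
+ \frac{(\sqrt{m}+\sqrt{s})^2+(\sqrt{m}-\sqrt{s})^2}{4}
= 2 + 2\sqrt{m} + \frac{m+s}{2},
\]
so that $\alpha_0 - y_1^2 - y_2^2 = 6 - \frac{s-m}{2}$ equals $4$, $2$ or $0$ in the three cases.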
\end{observation}
The three families of fields which were shown to be problematic in Observation \ref{ob:m+4etcIsAProblem} will be handled in the next subsection. The aim of this subsection is to prove that when one excludes these three families (and of course supposes $m \neq 2,3,6,7$), then five squares are indeed needed for $\alpha_0$ in all but three fields.
\begin{proposition} \label{pr:mNot1Pis5}
Let $K = \BQ{m}{s}$ be a totally real biquadratic field with $m \not\equiv 1 \pmod{4}$ such that $\P(\O_F)=5$ for $F=\Q(\!\sqrt{m})$. (Namely $m \neq 2,3,6,7$.) Assume further that $s \neq m+4,m+8,m+12$. Then $\P(\O_K)\geq 5$.
In particular, the length of
\[
\alpha_0 = 7 + (1+\sqrt{m})^2 = (8+m) + 2\sqrt{m}
\]
is $5$ unless $K=\BQ{10}{13}$, $\BQ{11}{13}$ or $\BQ{30}{35}$. In the first two of these three fields, $\ell\Bigl(3+(\frac{1+\sqrt{13}}{2})^2+(1+\frac{1+\sqrt{13}}{2})^2\Bigr)=5$; in the last, $\ell\Bigl(7 + (1+\sqrt{42})^2\Bigr)=5$.
\end{proposition}
\begin{proof}
We start by defining three sets of exceptional fields for which the following proof does not work and which must be handled separately. For convenience, denote by $\K$ the set of all fields for which we aim to prove the theorem, i.e.\ fields where $m \not\equiv 1 \pmod4$, $m \neq 2,3,6,7$ and $s \neq m+4,m+8,m+12$. The problematic fields are then defined by the following inequalities:
\begin{align*}
\K_1 &= \{K \in \K : 31+3m \geq 4s\},\\
\K_2 &= \{K \in \K : 31+2m \geq s+t\},\\
\K_3 &= \{K \in \K : 31+m \geq 2s\}.
\end{align*}
Clearly $\K_1 \subset \K_3$. It is also not very difficult to see that all these sets are finite; this is done in Lemma \ref{le:listsOfExceptions}. Once the lists are proven to be finite, a computer program can be applied to handle the fields they contain; this was done in Lemma \ref{le:comp} (\ref{it:mNot1Gen}) and (\ref{it:threeexceptions}). (Note that the three specific fields listed in the statement of Proposition \ref{pr:mNot1Pis5} belong to $\K_2$ or $\K_3$.) Having gotten this out of the way, let us now proceed with the general proof:
There are several possibilities regarding the integral basis; however, we shall perform the proof for all of them simultaneously. We assume $\alpha_0 = \sum x_i^2$, where
\[
x_i = \frac{a_i + b_i\sqrt{m} + c_i\sqrt{s} + d_i\sqrt{t}}{2}, \qquad \text{with $a_i+b_i+c_i+d_i \equiv 0 \pmod2$}.
\]
Depending on the integral basis, further conditions are imposed on the coefficients, but we shall recall those directly when using them. As usual, the core of the proof is a comparison of traces; however, in this case it will not give us enough information on its own. Therefore we start with another observation, which will make the trace comparison more efficient.
Since $\alpha_0$ is not a sum of four squares in $\Z[\sqrt{m}]$, we can assume that at least one of $x_i$ is not in $\Z[\sqrt{m}]$. Now we proceed to prove a sequence of claims which further restrict the possible values of all the coefficients $b_i,c_i,d_i$:
\begin{enumerate}[label=(\roman*)]
\item At least one product $a_kb_k \neq 0$. In particular, $\sum a_i^2 + m\sum b_i^2 \geq 1 + m$.
\item With the exception of fields $K \in \K_1$, we have $\sum c_i^2 + \sum d_i^2 \leq 3$. In particular, $c_i$ and $d_i$ only take the values $0, \pm1$.
\item With the same fields $K \in \K_1$ excluded, there are at least two $x_i \notin \Z[\sqrt{m}]$. This also means $\sum c_i^2 + \sum d_i^2 \geq 2$.
\item If $K \notin \K_1 \cup \K_2$, the following holds: Either all $d_i = 0$, or $\sum b_i^2 + \sum c_i^2 + \sum d_i^2 \leq 3$.
\item If $K \notin \K_1 \cup \K_3$, we have $\sum b_i^2 + \sum c_i^2 + \sum d_i^2 \leq 4$. In particular, $\sum b_i^2 \leq 2$, so $b_i$ can only take values $0, \pm 1$.
\end{enumerate}
The proofs follow:
\textbf{(i)} Comparing the coefficients in front of $\sqrt{m}$ yields the equality $2 = 2\bigl(\sum \frac{a_ib_i}{4} + m_0\sum\frac{c_id_i}{4}\bigr)$. Assuming that $a_ib_i=0$ for all indices, we get $4 = m_0 \sum c_id_i$, implying $m_0 \mid 4$. Since $m_0$ is square-free and at least $3$, this is impossible.
\textbf{(ii)} By comparing traces, exploiting the previous claim, we get
\[
32 + 4m = 4\atr\alpha = \sum a_i^2 + m\sum b_i^2 + s\sum c_i^2 + t\sum d_i^2 \geq 1 + m + s\Bigl(\sum c_i^2 + \sum d_i^2\Bigr).
\]
For the sake of contradiction, assume $\sum c_i^2 + \sum d_i^2 \geq 4$. This yields $31 + 3m \geq 4s$, which is the definition of $K \in \K_1$.
\textbf{(iii)} We already know that at least one $x_i \notin \Z[\sqrt{m}]$. Suppose it were the only one -- say, $x_1$. Then $x_1^2 \in \Z[\sqrt{m}]$, which, by Lemma \ref{le:KTZ}, means $x_1 = \frac{c_1\sqrt{s}+d_1\sqrt{t}}{2}$. \notabene{One can note that since $m\not\equiv 1$, this is an algebraic integer if and only if the field is of type (B1) with $s\equiv t \equiv m_0 \equiv 2 \pmod4$. But the given argument makes no use of this.}
Combining the previous claim with $c_1\equiv d_1 \pmod2$, we get $|c_1|=|d_1|=1$. Thus there are only two possibilities: $x_1^2 = \frac{s+t}{4} \pm \frac{m_0}{2}\sqrt{m}$. Although for $m_0 \equiv 2 \pmod4$ this indeed belongs to $\Z[\sqrt{m}]$, it can never belong to $\Z[2\sqrt{m}]$. Thus $\alpha_0 - x_1^2$ cannot be a sum of squares in $\Z[\sqrt{m}]$, a contradiction.
\textbf{(iv)} To arrive at a contradiction, assume that $\sum b_i^2 + \sum c_i^2 + \sum d_i^2 \geq 4$ where at least one $d_i$ is nonzero. Since $K \notin \K_1$, by the previous claim we also know $\sum c_i^2 + \sum d_i^2 \geq 2$. Putting these pieces of information together, we have $m\sum b_i^2 + s\sum c_i^2 + t \sum d_i^2 \geq 2m + s + t$. Thus
\[
32 + 4m \geq 1 + m\sum b_i^2 + s\sum c_i^2 + t \sum d_i^2 \geq 1 + 2m + s + t,
\]
i.e.\ $31 + 2m \geq s + t$. This is the definition of $K \in \K_2$.
\textbf{(v)} For the sake of contradiction, assume $\sum b_i^2 + \sum c_i^2 + \sum d_i^2 \geq 5$. By (iii) we also know $\sum c_i^2 + \sum d_i^2 \geq 2$. Putting these pieces of information together, we have $m\sum b_i^2 + s\sum c_i^2 + t \sum d_i^2 \geq 3m + 2s$. Thus
\[
32 + 4m \geq 1 + m\sum b_i^2 + s\sum c_i^2 + t \sum d_i^2 \geq 1 + 3m + 2s,
\]
i.e.\ $31 + m \geq 2s$. This is the definition of $K \in \K_3$. Now $\sum b_i^2 \leq 2$ follows using claim (iii).
Now, assuming $K\notin \K_1 \cup \K_2 \cup \K_3$, the five proven claims provide very strong restrictions for the possible values of $|b_i|,|c_i|,|d_i|$. One sees that all of them are only zeros or ones. To simplify the following discussion, denote by $B,C,D$ the number of nonzero $b_i,c_i,d_i$. If $D \neq 0$, then by claim (iv) $B+C+D \leq 3$ and by claims (i) and (iii) $B\geq 1$ and $C+D \geq 2$. This gives rise to two possibilities: $B=C=D=1$ or $B=1, C=0, D=2$. On the other hand, if $D = 0$, then claims (i) and (v) give $1 \leq B \leq 2$ and claims (ii) and (iii) yield $2 \leq C \leq 3$. These are four more possibilities.
Of these latter four possibilities, three can be excluded based on examining the integral bases: Since $D=0$, all $d_i$ are zero, hence even. Since $B,C \neq 0$, there are odd values of $b_i$ and of $c_i$. Unless $B=C=2$, the values of $B$ and $C$ are different, so there is some index $i$ where exactly one of $b_i$ and $c_i$ is odd; therefore $a_i$ is odd as well. Hence $a_i,b_i,c_i$ all attain odd values in our summands, while $d_i$ does not. However, this requires at least two of the elements $\frac{1+\sqrt{m}}{2}$, $\frac{1+\sqrt{s}}{2}$, $\frac{\sqrt{m}+\sqrt{s}}{2}$ to be algebraic integers, and this is impossible (it holds only for fields of type (B4), but there $m \equiv 1 \pmod4$).
So of the cases where $D=0$, only $B=C=2$ remains to be solved. Here all $d_i$ are even, while $b_i$ and $c_i$ also attain odd values. In order not to get a contradiction as in the last paragraph, they must attain these values simultaneously, so the integral basis must contain $\frac{\sqrt{m}+\sqrt{s}}{2}$. This happens if and only if $m \equiv s \pmod4$. Now we compare traces one last time, using also the existence of some $a_k\neq0$:
\[
32 + 4m = \sum a_i^2 + 2m + 2s \geq 1 + 2m + 2s,
\]
which can be rewritten as $s - m \leq \frac{31}{2} < 16$. The condition $m \equiv s \pmod4$ now yields $s = m+4, m+8$ or $m+12$, in which cases nontrivial decompositions indeed exist.
The last two cases to be handled are $B=C=D=1$ and $B=1, C=0, D=2$. Let us tackle the latter. There are two odd $d_i$, while only one $b_i$ is odd and all $c_i$ are even. To satisfy the parity condition $a_i+b_i+c_i+d_i \equiv 0 \pmod2$ for both indices where $d_i \neq 0$, at least one $a_i$ must also be odd. This leads to a contradiction similar to the one we already saw: For $m\not\equiv 1$, at most one of the elements $\frac{1+\sqrt{m}}{2}$, $\frac{1+\sqrt{t}}{2}$, $\frac{\sqrt{m}+\sqrt{t}}{2}$ can be an algebraic integer.
The last remaining case is $B=C=D=1$. (Here the parity condition is easily satisfied: Either one or three of $a_i$ are odd.) By claim (iii) there are two different summands $x_i\notin \Z[\sqrt{m}]$, so we can assume $c_1=1$, $d_2=1$ (and all the other $c_i, d_i = 0$). There is exactly one nonzero $b_i$, let us denote it by $b_k = \pm 1$. First we realise that $k=1$ or $k=2$: If not, then to satisfy the parity condition, all of $a_1,a_2,a_k$ are odd and all the elements $\frac{1+\sqrt{m}}{2}$, $\frac{1+\sqrt{s}}{2}$, $\frac{1+\sqrt{t}}{2}$ are integral; this is nonsense. So indeed $k=1$ or $k=2$. To conclude, let us consider three of the four equations obtained by comparing $\alpha_0$ with $\sum x_i^2$ coefficient by coefficient:
\begin{align*}
4 &= \sum a_ib_i + m_0\sum c_id_i = \pm a_k\\
0 &= \sum a_ic_i + s_0\sum b_id_i = a_1 + s_0 b_2\\
0 &= \sum a_id_i + t_0\sum b_ic_i = a_2 + t_0 b_1.
\end{align*}
If $k=2$, then $b_1=0$, so the last equation gives $a_k=a_2=0$, which contradicts the first equation. Similarly, if $k=1$, then the second equation contradicts the first.
Thus the proof is concluded (except for the cases $\K_1$, $\K_2$, $\K_3$, which are handled below).
\end{proof}
Now we handle the exceptional cases for which the inequalities in the previous proof were not good enough, thus completing the proof.
\begin{lemma}\label{le:listsOfExceptions}
Denote by $\K$ the set of all biquadratic fields where $m \not\equiv 1 \pmod4$, $m \neq 2,3,6,7$ and $s \neq m+4,m+8,m+12$. Then the following sets are finite:
\begin{align*}
\K_1 &= \{K \in \K : 31+3m \geq 4s\},\\
\K_2 &= \{K \in \K : 31+2m \geq s+t\},\\
\K_3 &= \{K \in \K : 31+m \geq 2s\}
\end{align*}
and for each $K \in \K_1 \cup \K_2 \cup \K_3$, the claim of the previous proposition holds: Namely, $\alpha_0 = 7 + (1+\sqrt{m})^2$ has length $5$ in $\O_K$ unless $K$ is one of the three exceptional fields; these three fields contain another element of length $5$, defined in the previous proposition.
\end{lemma}
\begin{proof}
Clearly $\K_1 \subset \K_3$, so it suffices to consider the larger sets $\K_2$ and $\K_3$. We already explained in Subsection \ref{ss:algorithms} that in a \emph{fixed} order $\O_K$, computing the length of a given element is just a computational task. Thus to prove our result, it suffices to provide a full list of fields in $\K_2$ and $\K_3$; the rest of the proof can be handled by a simple computer program (or a very patient student). The results of this computation are already prepared in Lemma \ref{le:comp} (\ref{it:mNot1Gen}) and (\ref{it:threeexceptions}).
We start with $\K_3$. The defining inequality implies $30 \geq s$, so we have $10 \leq m \leq 29$. It suffices to go through all square-free $m \not\equiv 1 \pmod4$ in this interval and for each of them to list all square-free $s$ satisfying $m < s \leq \frac{31+m}{2}$. We also omit cases where $s=m+4,m+8,m+12$. The full list of the eighteen pairs $(m,s)$ such that $\BQ{m}{s} \in \K_3$ follows:
\begin{align*}
&(10,11), (10,13), (10,17), (10,19); (11,13), (11,14), (11,17), (11,21); (14,15), (14,17),\\
& (14,19); (15,17), (15,21), (15,22); (19,21), (19,22); (22,23); (23,26).
\end{align*}
We omitted the fields $(10,15)$ and $(14,21)$, since we require $t$ to be the largest of the three numbers $m,s,t$, while these fields contain $\sqrt6$. Therefore, they do not belong to $\K$ and are handled in Subsection \ref{ss:67}.
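The enumeration just described is easy to script. The following Python sketch (a hypothetical stand-in for the program behind Lemma \ref{le:comp}, shown only for illustration) reproduces the eighteen pairs, including the exclusion of $(10,15)$ and $(14,21)$ via the requirement that $t$ be the largest radicand.
\begin{verbatim}
from math import gcd, isqrt

def squarefree(n):
    return all(n % (d * d) for d in range(2, isqrt(n) + 1))

pairs = []
for m in range(10, 30):                          # 10 <= m <= 29
    if m % 4 == 1 or not squarefree(m):
        continue
    for s in range(m + 1, (31 + m) // 2 + 1):    # 2s <= 31 + m
        if not squarefree(s) or s - m in (4, 8, 12):
            continue
        if m * s // gcd(m, s) ** 2 <= s:         # t = ms/gcd(m,s)^2 must exceed s
            continue
        pairs.append((m, s))

print(len(pairs), pairs)                         # 18 pairs, as listed above
\end{verbatim}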
The finiteness of $\K_2$ is less obvious. However, using the notation with $t_0<s_0<m_0$, one can write
\[
31 \geq s + t - 2m = m_0t_0 + m_0s_0 - 2s_0t_0 \geq m_0t_0 + m_0(t_0+1) - 2(m_0-1)t_0 = m_0 + 2t_0,
\]
so the values of $m_0$ and hence also $s_0$ and $t_0$ are bounded. With a little care, it is quite fast and simple to find a complete list by hand. \notabene{The most convenient way of listing all triples $(t_0,s_0,m_0)$ which correspond to a field in $\K_2$ is probably to observe that the above inequality implies $t_0 \leq \frac{29}{3} < 10$, so since $t_0$ is square-free, one has $t_0 \in \{1,2,3,5,6,7\}$. Going over these possibilities one by one, one has to look for solutions $m,s$ (square-free multiples of $t_0$ with $\gcd(m,s)=t_0$) of the inequality obtained by plugging the given value of $t_0$ into $31 \geq s + m\bigl(\frac{s}{t_0^2} - 2\bigr)$. For example, if $t_0=2$, then the values $m=10$ and $s=14$ are the smallest possible, and they satisfy the inequality -- so they would belong to $\K_2$, were it not for the fact that $s=m+4$ is forbidden. Otherwise $m \geq 14$ and $s \geq 22$ (other even numbers are not square-free), but $s + m\bigl(\frac{s}{4} - 2\bigr) \geq 22 + 14\bigl(\frac{22}{4}-2\bigr) = 71 > 31$, so there are no other solutions with $t_0=2$.}
It turns out that, due to the restriction $m \geq 10$, the set $\K_2$ contains only two fields:
\[
\K_2 = \bigl\{\BQ{15}{21}, \BQ{30}{35} \bigr\}.
\]
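The finiteness argument can likewise be turned into a short search; the following sketch (again only an illustration, not the program of Lemma \ref{le:comp}) brute-forces the defining inequality of $\K_2$ and recovers exactly these two fields.
\begin{verbatim}
from math import gcd, isqrt

def squarefree(n):
    return all(n % (d * d) for d in range(2, isqrt(n) + 1))

K2 = []
for m in range(10, 300):            # generous; 31 >= m_0 + 2t_0 bounds m far below this
    if m % 4 == 1 or not squarefree(m):
        continue
    for s in range(m + 1, m + 16):  # t > s and s + t <= 31 + 2m force s <= m + 15
        if not squarefree(s) or s - m in (4, 8, 12):
            continue
        t = m * s // gcd(m, s) ** 2
        if t > s and s + t <= 31 + 2 * m:
            K2.append((m, s))

print(K2)                           # [(15, 21), (30, 35)]
\end{verbatim}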
Once we have made the lists explicit, we can apply our computer program. This was successfully done, see Lemma \ref{le:comp} (\ref{it:mNot1Gen}) and (\ref{it:threeexceptions}). In the three fields given by $(10,13)$, $(11,13)$ and $(30,35)$, the original $\alpha_0$ turns out to be a sum of four or even three squares, hence we had to find another element of length $5$.
\end{proof}
\subsection{The exceptional cases \texorpdfstring{$s=m+4, m+8, m+12$}{s=m+4, m+8, m+12}} \label{ss:m+4,m+8,m+12}
In this subsection we want to prove $\P(\O_K)\geq 5$ for the three families of fields given by $s=m+4$, $s=m+8$ and $s=m+12$ (keeping the conditions $m \not\equiv 1 \pmod4$, $m \neq 2,3,6,7$). For these fields, the proof from the previous subsection cannot be used, since $\alpha_0 = 7 + (1+\sqrt{m})^2$ can be written as a sum of four or less squares, see Observation \ref{ob:m+4etcIsAProblem}. To find a solution of this problem, let us first examine why the main proof failed in these cases.
The problem lies in the identity
\begin{align*}
\alpha_0 - \Bigl(1 + \frac{\sqrt{m}+\sqrt{s}}{2}\Bigr)^2-\Bigl(1 + \frac{\sqrt{m}-\sqrt{s}}{2}\Bigr)^2 &= (7 + 1 + m + 2\sqrt{m}) - \bigl(2 + \frac{m+s}{2} + 2\sqrt{m}\bigr)\\
&= 7 - 1 - \frac{s-m}{2},
\end{align*}
where $7 - 1 - \frac{s-m}{2}$ is either $7 - 3 = 4$, $7 - 5 = 2$ or $7 - 7 = 0$, depending on the concrete family. All the three values, $4$, $2$ and $0$, are sums of at most two squares in $\Z$.
In this, we were actually rather unlucky: Most positive numbers require three or even four squares! So one solution which suggests itself is to replace $7$ by another number $n$ which shares the crucial property of $7$ (namely, it is not a sum of three squares in $\Z$), but also solves the previous problem since $n - 3$, $n - 5$ or $n - 7$ (again depending on the family we are examining) is not a sum of two squares.
The next number after $7$ which is not a sum of three squares is $15$; and while $15-5 = 1^2 + 3^2$ and $15 - 7 = 2^2 + 2^2$ are sums of two squares, $15 - 3$ is not. Therefore the first family, $s = m + 4$, could be solved using the element $15 + (1 + \sqrt{m})^2$. The next number which is not a sum of three squares, $23$, does not help, but for $28$ one sees that both $28 - 5 = 23$ and $28 - 7 = 21$ cannot be represented by two squares. Thus, in the following proposition, we solve the remaining two families thanks to the element $28 + (1+\sqrt{m})^2$.
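This little search is easy to mechanise. The following sketch (an illustration only, not the authors' program) lists the integers $n\leq 39$ that are not sums of three squares, together with those of $n-3$, $n-5$, $n-7$ that are not sums of at most two squares; the lines for $n=7,15,23,28$ match the discussion above.
\begin{verbatim}
from math import isqrt

def two_squares(n):                 # is n = a^2 + b^2 with a, b >= 0?
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

def three_squares(n):
    return any(two_squares(n - a * a) for a in range(isqrt(n) + 1))

for n in range(1, 40):
    if not three_squares(n):        # equivalently, n is of the form 4^a(8b + 7)
        print(n, [k for k in (3, 5, 7) if not two_squares(n - k)])
# output: 7 [], 15 [3], 23 [], 28 [5, 7], 31 [3, 7], 39 []
\end{verbatim}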
\begin{proposition} \label{pr:m+4etc}
Let $K = \BQ{m}{s}$ where $m \not\equiv 1 \pmod4$, $m \neq 2,3,6,7$, and $s=m+4$, $s=m+8$ or $s=m+12$. Then $\P(\O_K)\geq 5$.
In particular,
\[
\alpha = \begin{cases}
15 + (1+\sqrt{m})^2 & \text{if $s = m + 4$},\\
28 + (1+\sqrt{m})^2 & \text{if $s = m + 8$ or $m + 12$}\\
\end{cases}
\]
has length $5$ unless $K$ is one of the following $16$ fields: $m = 10, 11, 15$ and $s= m+4$; $m = 14, 22, 26, 11, 15, 23$ and $s= m+8$; $m = 10, 14, 22, 26, 11, 19, 23$ and $s= m+12$. In these exceptional fields, $7 + \bigl(\frac{\sqrt{m}+\sqrt{s}}{2}\bigr)^2$ has length $5$.\notabene{It seems that this element actually solves \emph{all} such fields, not only the $16$ exceptions; it is probably just better than $7 + (1+\sqrt{m})^2$, as soon as $s-m$ is small. Why? One must ask which of $\atr (1+\sqrt{m})^2 = 1 + m$ and $\atr \Bigl(\frac{\sqrt{m}+\sqrt{s}}{2}\Bigr)^2 = \frac{m+s}{4}$ is smaller. That depends on an inequality like $3m \leq s$. Equivalently, $3s_0 \leq m_0$. $\ldots$ Yes, similar thoughts are behind the distinction of $12$ families in the last section.}
\end{proposition}
\begin{proof}
The proof goes along the same lines as that of Proposition \ref{pr:mNot1Pis5}, only at the end one must prove that it is impossible to have two summands of the form $x_i = \frac{a_1+b_1\sqrt{m}+\sqrt{s}\pm \sqrt{t}}{2}$. We just go through the claims which have to be proven on the way, to see which exceptional fields must be handled separately. (See Lemma \ref{le:comp}, not only for the 16 exceptional fields listed in (\ref{it:mNot1SpecBad}) but also the fields included in (\ref{it:mNot1SpecGood15}) and (\ref{it:mNot1SpecGood28}), for which the statement holds but the following proof breaks down.)
As always, we suppose $\alpha = \sum x_i^2$ with $x_i = \frac{a_i+b_i\sqrt{m}+c_i\sqrt{s}+d_i\sqrt{t}}{2}$; since $m\equiv s \not\equiv 1 \pmod{4}$, the conditions are $a_i\equiv d_i$ and $b_i \equiv c_i \pmod2$ (and in the cases where $t \equiv 3$, $a_i$ and $d_i$ must both be even). It is important to note $t = \frac{ms}{t_0^2}$, where $t_0$, as the greatest common divisor of $m$ and $s$, divides $4$, $8$ or $12$, respectively. Together with $t_0$ being square-free, one sees that if $m$ and $s$ are even, then $t_0 = 2$ or $6$, the latter case occurring only if $s = m+12$ and $3 \mid m$. Similarly, if $m$ and $s$ are odd (thus $3$ modulo $4$), then $t_0 =1$ or $3$, the latter case again occurring only if $s=m+12$ and $3 \mid m$. From this, it is easy to see whether $t \equiv 1$ or $t \equiv 3 \pmod4$, determining the integral basis.
\textbf{(A) $s=m+4$:} The three fields with $m \leq 15$ had to be handled separately (Lemma \ref{le:comp} (\ref{it:mNot1SpecBad})), since there $\alpha$ is a sum of fewer than five squares already in $\Z[\sqrt{m}]$. From now on suppose $m>15$. Then at least one $x_i$ is not in $\Z[\sqrt{m}]$. Also, by comparing coefficients in front of $\sqrt{m}$, we see that at least one $a_ib_i \neq 0$. Suppose now that there is a nonzero $d_i$. For odd $m$ (hence $m\geq 19$) we have $\frac{d_i^2 t}{4} = \frac{d_i^2}{4}m(m+4) \geq \frac14 m(19+4) = \frac{23}{4}m$, while for even $m$ (hence $m\geq 22$) we use the fact that $t = \frac{m(m+4)}{4} \equiv 3 \pmod4$, which forces $d_i$ to be even, to deduce $\frac{d_i^2 t}{4} \geq \frac{2^2}{4}\frac{m(m+4)}{4} \geq m \frac{22+4}{4} = \frac{13}{2}m$. Either way, $\frac{d_i^2 t}{4}$ is larger than $\atr \alpha = 16 + m$, which is a contradiction. Hence, all $d_i$ are zero.
Now this implies that all $a_i$ must be even, and one can forget about the integral basis since $t$ no longer plays a role. If there were only one summand with $c_i \neq 0$, its square would have to belong to $\Z[\sqrt{m}]$, so by Lemma \ref{le:KTZ} it would be $\frac{c_i}{2}\sqrt{s}$ with $c_i$ even; we also know that at least one $b_j$ is nonzero, and since $c_j = 0$, this $b_j$ is even. Putting this together, if there is only one summand with $c_i \neq 0$, comparing traces yields $16 + m \geq \frac{b_j^2}{4}m + \frac{c_i^2}{4}s \geq m + s = 2m + 4$, which is impossible.
Therefore there are at least two summands with $c_i \neq 0$. This also means $\sum c_i^2 \geq 2$, and due to the congruence conditions $\sum b_i^2 + \sum c_i^2 \geq 4$. The next step is to show that $\sum b_i^2 + \sum c_i^2 > 4$ is impossible; since each $b_i^2 + c_i^2$ is even, this would mean $\sum b_i^2 + \sum c_i^2 \geq 6$, and indeed, comparing traces and exploiting $\sum b_i^2 + \sum c_i^2 \geq 6$ and $\sum c_i^2 \geq 2$ leads to a contradiction for $m > 26$ (leaving a few cases to be checked separately, see Lemma \ref{le:comp} (\ref{it:mNot1SpecGood15})).
Now, finally, we know $\sum b_i^2 + \sum c_i^2 = 4$, which together with the existence of nonzero $b_i$ and nonzero $c_j$ implies that one can assume $c_1=c_2=1$, $|b_1|=|b_2|=1$, and all other $b_i$, $c_i$, $d_i$ are zeros. As a result of this, the equations obtained by comparing coefficients of $\alpha$ and $\sum x_i^2$ take the simple form (with the notation $A_i=\frac{a_i}{2} \in \Z$):
\begin{align*}
16 + m &= \sum A_i^2 + m\Bigl(\frac14 + \frac14\Bigr) + s\Bigl(\frac14 + \frac14\Bigr) = \sum A_i^2 + m + 2,\\
2 &= 2\sum \frac{A_ib_i}{2} = A_1b_1 + A_2b_2,\\
0 &= \sum A_ic_i = A_1 + A_2,\\
0 &= \sum b_ic_i = b_1 + b_2.
\end{align*}
The last three equations together with $|b_1|=1$ show that (after possibly switching $x_1$ and $x_2$) $A_1=b_1=1$, $A_2=b_2=-1$, which turns the first equation into
\[
12 = \sum_{i\geq 3} A_i^2.
\]
From this, we see that it is necessary (and sufficient) that there are at least three more summands $A_3$, $A_4$, $A_5$: indeed, $12$ is not a sum of two integer squares, while $12=2^2+2^2+2^2$. We have proven that $\alpha$ is not a sum of fewer than five squares.
\textbf{(B) $s = m+8$:} For the cases where $m \leq 28$, another choice of $\alpha$ had to be made (Lemma \ref{le:comp} (\ref{it:mNot1SpecBad})), since this $\alpha$ is a sum of fewer than five squares in $\Z[\sqrt{m}]$. Otherwise the proof is just the same as for $s=m+4$. Proving that all $d_i$ must be zero is slightly complicated by the fact that both for odd and even $m$ we get $t \equiv 1$, so an odd $d_i$ is not excluded by the integral basis. For odd $m$ we have $t=m(m+8)$, so a nonzero $d_{i_0}$ would give $29+m = \atr\alpha \geq \frac{d_{i_0}^2}{4}t \geq \frac{m(m+8)}{4}$, which is false for $m\geq 19$; for even $m$ we have $t=\frac{m(m+8)}{4}$ and $m \geq 30$, so one still gets a contradiction if $d_{i_0}\neq 0$:
\[
29 + m \geq \frac{{d_{i_0}}^2}{4}t \geq \frac{1}{4}\frac{m(m+8)}{4} \geq m \frac{38}{16} \geq m + \frac{11}{8}m > m + m \geq m + 30.
\]
After that, one goes through the same steps as for $s=m+4$, so we omit most of them. The inequality $\sum b_i^2 + \sum c_i^2 > 4$ gives a contradiction for $m > 48$, so the cases with $m \leq 48$ must be handled separately (Lemma \ref{le:comp} (\ref{it:mNot1SpecGood28})). For $m > 48$, we eventually come to the conclusion that there must be the summands $x_1^2 = \Bigl(1 + \frac{\sqrt{m}+\sqrt{s}}{2}\Bigr)^2$ and $x_2^2 = \Bigl(-1 + \frac{-\sqrt{m}+\sqrt{s}}{2}\Bigr)^2$, and the other summands are squares of rational integers $A_i$ satisfying $\sum_{i\geq 3} A_i^2 = \alpha - x_1^2 - x_2^2 = 23$. Thus there are at least four such summands -- so any representation of $\alpha$ as a sum of squares which uses $x_1 \notin \Z[\sqrt{m}]$ actually needs at least \emph{six} squares.
\textbf{(C) $s = m+12$:} Here the proof is absolutely the same as in the previous case (even including the fact that for $m \leq 28$ there is a representation of $\alpha$ as a sum of fewer than five squares in $\Z[\sqrt{m}]$).
The only step where we have to be careful is showing that all $d_i$ are zero. If $s = m+12$, then $t_0$ can take the values $1,2,3,6$, and especially in the last case, $t = \frac{m(m+12)}{36}$ is not \emph{much larger} than $m$, which is what the estimates make use of. However, if $t_0$ is $2$ or $6$, then $t \equiv 3\pmod4$, so $\frac{d_i^2}{4}t \geq t$; and since for $m > 29$ the first field with $s=m+12$ and $t_0=6$ is actually $m=66$, $s=78$ (because $54$ is not square-free, and in the field $\BQ{30}{42}$ one has $t=m+12$, since $s=35$), one gets the following contradiction:
\[
29 + m \geq \frac{d_i^2}{4}t \geq t = \frac{m(m+12)}{36} \geq m \frac{78}{36} > 2m,
\]
implying $29 > m$. For $t_0 = 1$ and $t_0 = 2$ one gets the inequality $\frac{d_i^2}{4}t > 2m$ directly, leading to the same contradiction. For $t_0 = 3$ the same estimate requires $m > 60$; the only field with $t_0=3$ and $29 < m \leq 60$ is $\BQ{39}{51}$, which is in any case covered by the separate verification of the fields with $m \leq 44$ in Lemma \ref{le:comp} (\ref{it:mNot1SpecGood28}) below. Hence all $d_i$ must indeed be zero.
To disprove $\sum b_i^2 + \sum c_i^2 > 4$, one must assume $m > 44$ (the remaining cases being handled in Lemma \ref{le:comp} (\ref{it:mNot1SpecGood28})). With this assumption, one again concludes that one of the summands is $x_1^2 = \Bigl(1 + \frac{\sqrt{m}+\sqrt{s}}{2}\Bigr)^2$, another is $x_2^2 = \Bigl(-1 + \frac{-\sqrt{m}+\sqrt{s}}{2}\Bigr)^2$, and the other summands are squares of rational integers $A_i$. They must satisfy $\sum_{i\geq 3} A_i^2 = \alpha - x_1^2 - x_2^2 = 21$, so there must be at least three of them. Any representation of $\alpha$ as a sum of squares therefore needs at least five squares.
\end{proof}
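As a cross-check of the sixteen exceptional fields in the statement of Proposition \ref{pr:m+4etc}, note that they are exactly the fields of the three families lying below the bounds used in parts (A)--(C) above ($m\leq 15$ for $s=m+4$, and $m\leq 28$ for $s=m+8$ and $s=m+12$). A minimal Python sketch (an illustration, not the program of Lemma \ref{le:comp}) enumerating them:
\begin{verbatim}
from math import gcd, isqrt

def squarefree(n):
    return all(n % (d * d) for d in range(2, isqrt(n) + 1))

def in_family(m, s):
    # m square-free, m % 4 != 1, m not in {2,3,6,7}; s square-free;
    # t = ms/gcd(m,s)^2 must remain the largest of the three radicands
    return (squarefree(m) and m % 4 != 1 and m not in (2, 3, 6, 7)
            and squarefree(s) and m * s // gcd(m, s) ** 2 > s)

bounds = {4: 15, 8: 28, 12: 28}     # thresholds on m from parts (A), (B), (C)
for k, bound in bounds.items():
    fields = [(m, m + k) for m in range(2, bound + 1) if in_family(m, m + k)]
    print(f"s = m + {k}:", fields)  # 3 + 6 + 7 = 16 fields in total
\end{verbatim}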
\subsection{Fields containing \texorpdfstring{$\sqrt6$ or $\sqrt7$}{root of 6 or 7}} \label{ss:67}
In this subsection we prove that, save a few exceptions, rings of integers containing $\sqrt6$ or $\sqrt7$ have Pythagoras number at least $5$. More specifically, we prove Propositions \ref{pr:sqrt7} and \ref{pr:sqrt6}. In these statements, the meaning of $s$ is fixed by Convention \ref{no:mst}: We have $m=7$ ($m=6$, resp.), $s>m$ square-free, and the condition $t>s$ translates into $7 \nmid s$ ($3 \nmid s$, resp.).
\begin{proposition} \label{pr:sqrt7}
Let $K=\BQ{7}{s}$.
Then $\P(\O_K)\geq 5$.
In particular, if $s\neq 11$, then $\ell(\alpha)=5$ holds for $\alpha = \alpha_0 + w^2$, where $\alpha_0 = 1^2+1^2+1^2+(1+\sqrt7)^2 = 11 + 2\sqrt7$ and
\[
w = \begin{cases}
\frac{1+\sqrt{s}}{2} & \text{if $s \equiv 1 \pmod4$},\\
1+\sqrt{s} & \text{if $s \equiv 2 \pmod4$},\\
\frac{\sqrt{7}+\sqrt{s}}{2} & \text{if $s \equiv 3 \pmod4$}.\\
\end{cases}
\]
For $s=11$, the element $3+\bigl(\frac{\sqrt{7}+\sqrt{11}}{2}\bigr)^2+\bigl(1+\frac{\sqrt{7}+\sqrt{11}}{2}\bigr)^2$ has length $5$.
\end{proposition}
\begin{proof}
Suppose that $\alpha = \sum x_i^2$; since $\alpha \notin \Q(\!\sqrt7)$, we can assume $x_1^2 \notin \Q(\!\sqrt7)$. It is known (and easy to check) that $\ell(\alpha_0)=4$ in $\Z[\sqrt7]$, see Theorem \ref{th:quadratic}. Therefore, it suffices to prove that $x_1 = \pm w$ and $x_i \in \Z[\sqrt7]$ for $i \neq 1$; indeed, in such a case $\alpha_0 = \sum_{i\neq 1} x_i^2$ requires at least four summands.
To achieve this, denote $x_i = \frac{a_i+b_i\sqrt{7}+c_i\sqrt{s}+d_i\sqrt{7s}}{2}$; there are congruence conditions imposed on the coefficients, but these depend on the type of the field and we do not need them just yet. Comparing traces yields
\begin{equation}\label{eq:ineq7}
11 + \atr w^2 = \atr \alpha = \sum \frac{a_i^2 + 7b_i^2 + sc_i^2 + 7sd_i^2}{4} \geq \frac{s}{4}\Bigl(\sum c_i^2 + 7 \sum d_i^2\Bigr).
\end{equation}
Since $x_1 \notin \Z[\sqrt7]$, the value of $\sum c_i^2 + 7 \sum d_i^2$ is nonzero. To examine this quantity further, we must distinguish the three possible types of fields:
\textbf{(A) $s \equiv 1$:} The field is of type (B2,3) with $q=s$, which means that $a_i\equiv c_i$ and $b_i \equiv d_i \pmod2$. Therefore the smallest nonzero value of $\sum c_i^2 + 7 \sum d_i^2$ is $1$. Suppose first, for the sake of contradiction, that $\sum c_i^2 + 7 \sum d_i^2 > 1$. Then we have at least $\sum c_i^2 + 7 \sum d_i^2 \geq 2$. Plugging this, together with $\atr w^2 = \frac{1+s}{4}$, into the inequality \eqref{eq:ineq7}, yields
\[
11 + \frac{1+s}{4} \geq \frac{s}{4}2,
\]
i.e.\ $45 \geq s$. This holds only for six fields, handled separately in Lemma \ref{le:comp} (\ref{it:sqrt7}); otherwise we get a contradiction.
Thus we proved $\sum c_i^2 + 7\sum d_i^2 = 1$; together with $x_1 \notin \Q(\!\sqrt7)$ this means $|c_1|=1$ and all other $c_i, d_i = 0$. In particular, $x_i \in \Z[\sqrt7]$ for $i \neq 1$. By choosing the sign, we may assume $x_1 = \frac{a_1 + b_1\sqrt7 + \sqrt{s}}{2}$. This yields
\[
\alpha = x_1^2 + \underbrace{\cdots\cdots}_{\in\Q(\!\sqrt7)} = \underbrace{\cdots\cdots}_{\in\Q(\!\sqrt7)} + \frac{a_1}{2}\sqrt{s} + \frac{b_1}{2}\sqrt{7s}.
\]
Comparing this with $\alpha = \bigl(11 + \frac{1+s}{4}\bigr) + 2\sqrt7 + \frac{\sqrt{s}}{2}$, we conclude $a_1=1$, $b_1=0$. Thus indeed $x_1 = \frac{1+\sqrt{s}}{2} = w$.
\textbf{(B) $s \equiv 2$:} The proof is the same as in the previous case, only with different congruence conditions: The field is of type (B1) with $q=7$, which means that $a_i, b_i$ are even and $c_i \equiv d_i \pmod2$. Therefore the smallest nonzero value of $\sum c_i^2 + 7 \sum d_i^2$ is $4$. Suppose first, for the sake of contradiction, that $\sum c_i^2 + 7 \sum d_i^2 > 4$. Then we have at least $\sum c_i^2 + 7 \sum d_i^2 \geq 8$, since either some $d_j$ is nonzero, giving $c_j^2+7d_j^2 \geq 1 + 7$, or there are at least two nonzero even $c_i$. Plugging this, together with $\atr w^2 = 1+s$, into the inequality \eqref{eq:ineq7}, yields
\[
11 + (1+s) \geq \frac{s}{4}8,
\]
i.e.\ $12 \geq s$. This holds only for $s=10$, handled separately in Lemma \ref{le:comp} (\ref{it:sqrt7}); otherwise we get a contradiction.
Thus $\sum c_i^2 + 7\sum d_i^2 = 4$; as before, this means that $x_i \in \Z[\sqrt7]$ for $i\neq 1$ and (after possibly changing the sign) $x_1 = \frac{a_1 + b_1\sqrt7}{2} + \sqrt{s}$. Hence
\[
\alpha = x_1^2 + \underbrace{\cdots\cdots}_{\in\Q(\!\sqrt7)} = \underbrace{\cdots\cdots}_{\in\Q(\!\sqrt7)} + a_1\sqrt{s} + b_1\sqrt{7s}.
\]
Comparing this with $\alpha = (11 + 1+s) + 2\sqrt7 + 2\sqrt{s}$, we conclude $a_1=2$, $b_1=0$. Thus indeed $x_1 = 1+\sqrt{s} = w$.
\textbf{(C) $s \equiv 3$:} This is analogous to the previous two cases, so we provide only stepping stones. This time the field is of type (B2,3) with $q=7s$, so $a_i \equiv d_i$ and $b_i\equiv c_i \pmod2$; as in the first case, the minimal nonzero value of $\sum c_i^2 + 7\sum d_i^2$ is $1$. The assumption $\sum c_i^2 + 7\sum d_i^2 \geq 2$ leads to $51 \geq s$; this holds only for nine fields, of which eight are handled in Lemma \ref{le:comp} (\ref{it:sqrt7}) and $s=11$ in Lemma \ref{le:comp} (\ref{it:sqrt7spec}). Thus $\sum c_i^2 + 7\sum d_i^2 = 1$, which straightforwardly leads to $x_i \in \Z[\sqrt7]$ for $i \neq 1$ and $\pm x_1 = \frac{a_1 + b_1\sqrt{7} + \sqrt{s}}{2}$; comparing coefficients then yields $\pm x_1 = \frac{\sqrt{7}+\sqrt{s}}{2} = w$ as needed.
\end{proof}
By straightforwardly applying the same method, we can prove the analogous result for $\sqrt6$. The only difference is that there are more cases to handle, due to the fact that $s$ is not necessarily coprime with $6$. However, once the case distinction is made and the integral basis fixed, the proofs are completely analogous to the one above. Note that we had to include $m=6$ explicitly in the statement to exclude the one possibility $t=6$, $s=3$, $m=2$.
\begin{proposition} \label{pr:sqrt6}
Let $K=\BQ{6}{s}$ with $m=6$. Then $\P(\O_K)\geq 5$.
In particular, if $s\neq 10$, then $\ell(\alpha)=5$ holds for $\alpha = \alpha_0 + w^2$, where $\alpha_0 = 1^2+1^2+1^2+(1+\sqrt6)^2 = 10 + 2\sqrt6$ and
\[
w = \begin{cases}
\frac{1+\sqrt{s}}{2} & \text{if $s \equiv 1 \pmod4$},\\
\frac{\sqrt{6}+\sqrt{s}}{2} & \text{if $s \equiv 2 \pmod4$},\\
1+\sqrt{s} & \text{if $s \equiv 3 \pmod4$}.\\
\end{cases}
\]
For $s=10$, the element $3+\bigl(\sqrt{6} -\frac{\sqrt{6}+\sqrt{10}}{2}\bigr)^2+\bigl(2+\frac{\sqrt{6}+\sqrt{10}}{2}\bigr)^2$ has length $5$.
\end{proposition}
\begin{proof}
This is a direct analogue of the previous proof. If $s \equiv 1,3 \pmod4$, then $\gcd(6,s)=1$, so there is nothing new and one gets the inequalities $s \leq 41$ and $s \leq 22$, respectively, which leaves $5+3$ fields to be checked separately in Lemma \ref{le:comp} (\ref{it:sqrt6}). If $s \equiv 2$, one has $\gcd(6,s)=2$ and $s = 2m_0$, which makes the estimates slightly less efficient; moreover, one must distinguish between fields with $m_0 \equiv 1 \pmod4$, which are of type (B1), and fields with $m_0 \equiv 3 \pmod4$, which are of type (B2,3). The case $s=10$ is dealt with in Lemma \ref{le:comp} (\ref{it:sqrt6spec}), while the $2+7$ fields which are not excluded by the analogously obtained inequalities $s \leq 2\cdot 23$ in the former case and $s \leq 2\cdot 46$ in the latter are again solved in Lemma \ref{le:comp} (\ref{it:sqrt6}).
\end{proof}
\subsection{Fields containing \texorpdfstring{$\sqrt2$, $\sqrt3$ or $\sqrt5$}{root of 2, 3 or 5}} \label{ss:235}
Finally, we shall handle the fields with $m=2,3,5$. Here we find the result quite surprising; the Pythagoras number of $\O_{\Q(\!\sqrt{m})}$ is only three, but it still turns out that save a few exceptions, the Pythagoras number of $\O_K$ where $K$ contains $\Q(\!\sqrt{m})$ is at least five. This is in contrast with the situation in $\Z$: $\P(\Z)=4$, while in orders of quadratic extensions of $\Z$ the Pythagoras number grows only by (at most) one. Let us also remark that for $m=5$, the Pythagoras number is actually \emph{at most} five, and for $m=2$ we conjecture the same, while for $m=3$ it seems that usually $\P(\O_K)=6$ -- see Theorem \ref{th:mainSqrt5} and Conjecture \ref{co:conjecture}.
The proofs in this subsection are reasonably straightforward (although some tricks like using Lemmata \ref{le:KTZ} and \ref{le:quarterSquares} are required) and not even very lengthy; only guessing the appropriate choice of $\alpha$ would be difficult. We used data from our computer programs (see Subsection \ref{ss:algorithms}) to observe which choice might be the right one. In the case of $K = \BQ5{s}$ with $s \not\equiv 1 \pmod4$, we were unable to find such a suitable $\alpha$;\notabene{If such a choice exists at all, it is \emph{much} less obvious than those which are scattered throughout this subsection.} therefore we had to develop stronger tools -- the proof of this case is given in Subsection \ref{ss:strongAsymptotics}.
\begin{proposition} \label{pr:sqrt5sIs1}
Let $K=\BQ{5}{s}$ for $s \equiv 1 \pmod{4}$. Then $\P(\O_K)\geq 5$.
In particular, the following element has length $5$:
\[
\alpha = 1^2 + 1^2 + \Bigl(\frac{1+\sqrt5}{2}\Bigr)^2 + \Bigl(\frac{1+\sqrt{s}}{2}\Bigr)^2 + \Bigl(\frac{-\sqrt5+\sqrt{s}}{2}\Bigr)^2.
\]
\end{proposition}
\begin{proof}
Let us suppose that
\[
\alpha = \Bigl(5 + \frac{s}{2}\Bigr) + \frac{\sqrt5}{2} + \frac{\sqrt{s}}{2} - \frac{\sqrt{5s}}{2} = \sum x_i^2.
\]
First assume that at least one of $x_i$ is of the form $\frac{a_i+b_i\sqrt5+c_i\sqrt{s}+d_i\sqrt{5s}}{4}$ for $a_i,b_i,c_i,d_i$ odd. Then, by Lemma \ref{le:quarterSquares}, $x_i^2$ is of the same form, which implies that at least one other $x_j$ must have the same property as well. Then by comparing traces we obtain
\[
5 + \frac{s}{2} \geq \atr(x_i^2 + x_j^2) \geq 2 \cdot \frac{1+5+s+5s}{16} = \frac{3+3s}{4},
\]
i.e.\ $17 \geq s$. This is a contradiction unless $s=13$ or $s=17$, which cases are handled in Lemma \ref{le:comp} (\ref{it:sqrt5sIs1}).
So we have shown that all summands are in fact of the form
\[
x_i = \frac{a_i+b_i\sqrt5+c_i\sqrt{s}+d_i\sqrt{5s}}{2} \qquad \text{where $a_i+b_i+c_i+d_i$ is even}.
\]
Our next aim is to prove that $d_i=0$ for each $i$. Again, this is achieved by comparing traces: If at least one $d_i$ is nonzero, we get
\[
5 + \frac{s}{2} \geq 5s \sum \frac14 d_i^2 \geq \frac{5s}{4};
\]
this holds only for $s<7$, so in our situation all $d_i$ must indeed be zero.
What can be the value of $\sum c_i^2$? It cannot be zero, since $\alpha \notin \Q(\!\sqrt5)$. The cases $\sum c_i^2 = 1$ and $\sum c_i^2 = 2$ require the most care; now we shall prove that $\sum c_i^2 \geq 3$ is impossible. Indeed, it would yield
\[
5 + \frac{s}{2} \geq s \frac14 \sum c_i^2 \geq \frac{3s}{4},
\]
i.e.\ $s \leq 20$, which is a contradiction, since the cases $s=13$ and $s=17$ have already been settled above.
Next we exclude the case $\sum c_i^2 = 1$. Without loss of generality we assume $x_1 = \frac{a_1+b_1\sqrt{5} + \sqrt{s}}{2}$ with $a_1 \not\equiv b_1 \pmod2$. All the other summands belong to $\Q(\!\sqrt5)$, so we must have $\alpha - x_1^2 \in \Q(\!\sqrt5)$. After comparing the third and fourth coefficient of
\[
x_1^2 = \frac{a_1^2 + 5b_1^2 + s}{4} + \frac{a_1b_1}{2}\sqrt5 + \frac{a_1}2\sqrt{s} + \frac{b_1}{2}\sqrt{5s}
\]
with those of $\alpha$, we see $a_1 = 1$, $b_1 = -1$. This is a contradiction, since one of them must be even.
So we have proven that $\sum c_i^2 =2$ is necessary. Without loss of generality we write $x_1 = \frac{a_1+b_1\sqrt{5} + \sqrt{s}}{2}$ and $x_2 = \frac{a_2+b_2\sqrt{5} + \sqrt{s}}{2}$ with $a_1 \not\equiv b_1$ and $a_2\not\equiv b_2 \pmod2$; the remaining $x_i$ are in $\Q(\!\sqrt5)$. Comparing traces, we get the equality
\[
5 + \frac{s}{2} = \frac14 \Bigl( \sum a_i^2 + 5 \sum b_i^2 + s \cdot 2 \Bigr),
\]
i.e.\ $20 = \sum a_i^2 + 5\sum b_i^2$. This yields a short list of possible combinations of $x_i$, and it is a simple matter to check by hand that the only one which indeed gives $\alpha$ is the one by which $\alpha$ was defined:
Since $\alpha - (x_1^2+x_2^2) \in \Q(\!\sqrt5)$, we compare coefficients:
\[
x_1^2+x_2^2 = \underbrace{\cdots\cdots}_{\in \Q(\!\sqrt5)} + \frac{a_1+a_2}{2}\sqrt{s} + \frac{b_1+b_2}{2}\sqrt{5s},
\]
so $a_1+a_2=1$ and $b_1+b_2=-1$. The latter equality together with $\sum b_i^2 \leq 4$ clearly gives only one solution: $b_1=-1, b_2=0$ (or vice versa). Then $a_1$ is even, and $a_2 = 1 - a_1$ is odd.
To satisfy the inequality $a_1^2 + a_2^2 \leq 20 - 5b_1^2 = 15$, the only options are $a_1 = -2, 0, 2$. Thus there are only three possibilities:
\begin{itemize}
\item $x_1 = \frac{-2-\sqrt{5}+\sqrt{s}}{2}$, $x_2 = \frac{3 + \sqrt{s}}{2}$,
\item $x_1 = \frac{-\sqrt{5}+\sqrt{s}}{2}$, $x_2 = \frac{1 + \sqrt{s}}{2}$,
\item $x_1 = \frac{2-\sqrt{5}+\sqrt{s}}{2}$, $x_2 = \frac{-1 + \sqrt{s}}{2}$.
\end{itemize}
In the first case, $\alpha - x_1^2 - x_2^2 = \frac12 - \frac{\sqrt5}2$, which is not totally positive, hence cannot be a sum of squares -- a contradiction. Similarly, in the third case $\alpha - x_1^2 - x_2^2 = \frac52 + \frac{3\sqrt5}2$, which is also not totally positive, since its conjugate $\frac{5-3\sqrt5}{2}$ is negative.
Hence we have proven that $x_1 = \frac{-\sqrt{5}+\sqrt{s}}{2}$, $x_2 = \frac{1 + \sqrt{s}}{2}$, which gives $\sum_{i \geq 3} x_i^2 = \frac{7+\sqrt5}{2}$. We already know that $x_i \in \Q(\!\sqrt5)$ for $i \geq 3$, so there must be at least three more summands since $\ell\bigl(\frac{7+\sqrt5}{2}\bigr)=3$ in $\Z\bigl[\frac{1+\sqrt5}{2}\bigr]$ by Theorem \ref{th:quadratic}. This concludes the proof.
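The elimination of the three candidate pairs can also be checked mechanically. The following sketch (an illustration only, assuming the Python package \texttt{sympy}) computes $\alpha - x_1^2 - x_2^2$ in each case; only the middle choice leaves the totally positive remainder $\frac{7+\sqrt5}{2}$.
\begin{verbatim}
import sympy as sp

s = sp.symbols('s', positive=True)
r5, rs = sp.sqrt(5), sp.sqrt(s)

alpha = 1 + 1 + ((1 + r5)/2)**2 + ((1 + rs)/2)**2 + ((-r5 + rs)/2)**2

candidates = {                      # the three possibilities listed above
    'first' : ((-2 - r5 + rs)/2, (3 + rs)/2),
    'second': ((-r5 + rs)/2,     (1 + rs)/2),
    'third' : ((2 - r5 + rs)/2,  (-1 + rs)/2),
}
for name, (x1, x2) in candidates.items():
    rest = sp.simplify(sp.expand(alpha - x1**2 - x2**2))
    print(name, rest)
# expected: 1/2 - sqrt(5)/2,  7/2 + sqrt(5)/2,  5/2 + 3*sqrt(5)/2
\end{verbatim}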
\end{proof}
Now we solve both types of biquadratic fields containing $\sqrt2$.
\begin{proposition} \label{pr:sqrt2}
If $K=\BQ{2}{s}$ for $s \neq 3,5,7$, then $\P(\O_K)\geq 5$.
In particular, depending on the value of $s$ modulo $4$, the following element has length $5$:
\[
\alpha = \begin{cases}
1^2 + (1-\sqrt2)^2 + (2-\sqrt2)^2 + \Bigl(\frac12 + \sqrt2 + \frac{\sqrt{s}}{2}\Bigr)^2 + \Bigl(\frac{-1+\sqrt2-\sqrt{s}+\sqrt{2s}}{2}\Bigr)^2 & \text{if $13\neq s \equiv 1$};\\
1^2 + \sqrt2^2 + (1-\sqrt2)^2 + \Bigl(1 + \frac{-\sqrt2 - \sqrt{2s}}{2}\Bigr)^2 + \Bigl(\frac{\sqrt2}{2}-\sqrt{s}+\frac{\sqrt{2s}}{2}\Bigr)^2 & \text{if $s \equiv 3$};\\
1 + \sqrt2^2+(1+\sqrt{2})^2+\Bigl(\sqrt{2}+\frac{1+\sqrt{13}}{2}\Bigr)^2+\Bigl(2+\frac{1+\sqrt{13}}{2}+\frac{\sqrt{2}+\sqrt{26}}{2}\Bigr)^2 & \text{if $s = 13$}.\\
\end{cases}
\]
\end{proposition}
\begin{proof}
The proof is analogous to that of Proposition \ref{pr:sqrt5sIs1}. One handles the cases $s \equiv 1$ and $s \equiv 3$ separately and arrives at inequalities $s \leq 30$ and $s \leq 16$, respectively; thus, to get a full contradiction, one needs to cover the cases $s=13,17,21,29$ and $s=11,15$, which is done in Lemma \ref{le:comp} (\ref{it:sqrt2}) and (\ref{it:sqrt2spec}).
\end{proof}
Finally, we turn our attention to fields containing $\sqrt3$. As a side note, remember that, unlike with $\sqrt2$ and $\sqrt5$, the lower bound $5$ is known not to always be optimal -- see Proposition \ref{pr:sqrt3shock}.
\begin{proposition} \label{pr:sqrt3}
If $K=\BQ{3}{s}$ for $s \neq 5, 7$, then $\P(\O_K)\geq 5$.
In particular, the following element has length $5$:
\[
\alpha = \begin{cases}
1^2 + 1^2 + (2+\sqrt3)^2 + \Bigl(\frac{1+\sqrt{s}}{2} \Bigr)^2 + \Bigl(1 + \frac{1+\sqrt{s}}{2} \Bigr)^2 & \text{if $s \equiv 1 \pmod4$};\\
1^2 + 1^2 + (2+\sqrt3)^2 + \Bigl(\frac{\sqrt{s}+\sqrt{3s}}{2} \Bigr)^2 + \Bigl(1 + \frac{\sqrt{s}+\sqrt{3s}}{2} \Bigr)^2 & \text{if $s \equiv 2 \pmod4$};\\
1^2 + 1^2 + (2+\sqrt3)^2 + \Bigl(\frac{\sqrt3+\sqrt{s}}{2} \Bigr)^2 + \Bigl(1 + \frac{\sqrt3+\sqrt{s}}{2} \Bigr)^2 & \text{if $s \equiv 3 \pmod4$}.\\
\end{cases}
\]
\end{proposition}
\begin{proof}
Again, the proof is analogous to that of Proposition \ref{pr:sqrt5sIs1}. Each of the three cases must be done separately. In the first and in the third, one arrives at the inequality $s \leq 46$, while the second yields $s \leq 10$. These $5+1+6$ fields must be examined separately, see Lemma \ref{le:comp} (\ref{it:sqrt3}).
\end{proof}
It is interesting that while the element $1^2 + 1^2 + (2+\sqrt3)^2$ used above is not a sum of two squares, it has another representation as a sum of three squares: $1^2 + (1+\sqrt3)^2 + (1+\sqrt3)^2$ (both expressions equal $9+4\sqrt3$).
Now we have (almost) all the pieces needed to prove the main result of Section \ref{se:P>=5}. The structure of the proof should be clear from the structure of this section, but for clarity, we repeat it here:
\begin{proof}[Proof of Theorem \ref{th:main2parts} (\ref{it:main(1)})]
First suppose that $K$ contains none of $\sqrt2$, $\sqrt3$, $\sqrt5$, $\sqrt6$, $\sqrt7$ and $\sqrt{13}$. If $m \equiv 1 \pmod4$, we use Proposition \ref{pr:mIs1}. If $m \not\equiv 1 \pmod4$, most fields are covered in Proposition \ref{pr:mNot1Pis5}, and the exceptional cases where $s$ is equal to $m+4$, $m+8$ or $m+12$ are handled in Proposition \ref{pr:m+4etc}.
On the other hand, suppose that $m \in \{2,3,5,6,7,13\}$ and $K$ is not one of the seven exceptional fields. The cases $m=2$ and $m=3$ are covered in Proposition \ref{pr:sqrt2} and Proposition \ref{pr:sqrt3}, respectively. If $m=5$ and $s \equiv 1 \pmod4$, Proposition \ref{pr:sqrt5sIs1} is used, while for $s \not\equiv 1$, Proposition \ref{pr:sqrt5sNot1} applies. Finally, for $m=6$ ($7$ or $13$, resp.) one exploits Proposition \ref{pr:sqrt6} (\ref{pr:sqrt7} or \ref{pr:sqrt13P>=6}, resp.).
\end{proof}
\section{Fields containing \texorpdfstring{$\sqrt5$}{root of 5}} \label{se:sqrt5}
In this section, we take a closer look at fields containing $\sqrt5$. They are special since for most of them, we were able to determine the exact value of the Pythagoras number, namely $5$. (The four exceptional fields are $\BQ25$, $\BQ35$, $\BQ56$ and $\BQ57$; for the first two we expect $\P(\O_K)=3$, while for the latter two $\P(\O_K)=4$.)
In Subsection \ref{ss:strongAsymptotics}, we continue where we left off in the previous section by proving that $\P(\O_K)\geq 5$ for all but finitely many fields $K=\BQ{5}{s}$ where $s \equiv 2,3 \pmod4$. This turned out to be much more involved than the seemingly analogous cases handled in Subsection \ref{ss:235}; the method which we developed for proving Proposition \ref{pr:sqrt5GeneralStrongAsymptotics} deserves further investigation and can be employed in many other situations.
Subsection \ref{ss:sqrt5upper} is untypical for this text as it is the only part where we prove an upper bound on the Pythagoras number. To achieve that, we first introduce the reader to the so-called $g$-invariants $g_{R}(n)$ of a ring $R$ and then explain how they, in general, provide upper bounds for Pythagoras numbers in field extensions, generalising a result by Kala and Yatsyna \cite{KY}. Finally, we prove that $g_{\O_F}(2)=5$ holds for $F=\Q(\!\sqrt5)$, a result published in \cite{SaJapan}.
\subsection{Lower bound in the most difficult case}\label{ss:strongAsymptotics}
The aim of this subsection is to show that with finitely many exceptions, $\P(\O_K) \geq 5$ holds for all biquadratic fields $K=\BQ{5}{s}$. Recall the notation $1<m<s<t$, which in this case means $s \neq 2,3$, $s$ is square-free, $5 \nmid s$.
Since the case $s\equiv 1\pmod4$ was already handled in Proposition \ref{pr:sqrt5sIs1}, we achieve our goal by proving the following:
\begin{proposition} \label{pr:sqrt5sNot1}
Let $K= \BQ{5}{s}$ where $s\not\equiv 1 \pmod4$, $s \neq 6, 7$.
Then $\P(\O_K)\geq 5$.
In particular, $\ell(\alpha)=5$ holds for
\[
\alpha = 1^2 + \Bigl(\frac{1+\sqrt5}{2}\Bigr)^2 + \Bigl(\frac{1+\sqrt5}{2}\Bigr)^2 + (\fss + \sss)^2 + (\css + \sss)^2.
\]
The symbols $\floor{x}$ and $\ceil{x}$ denote the largest integer $n$ satisfying $n\leq x$ and the smallest integer $n$ satisfying $x\leq n$, respectively.
\end{proposition}
\begin{proof}
The core of the proof is contained in Proposition \ref{pr:ourStrongAsymptotics}. Once that result is proven, two tasks remain: First, one has to take care of the fields with $s \leq 3253$ where the proposition does not apply; as usual, this was done by our computer program, see Lemma \ref{le:comp} (\ref{it:sqrt5sNot1}). \notabene{At the cost of making the proof of Proposition \ref{pr:ourStrongAsymptotics} even more technical, it is possible to improve the bounds, see Observation \ref{ob:sqrt5Improvement}. Thus we actually used the computer only to check fields with $s \leq 499$.}
Second, one has to exploit that for $s \geq 3254$, Proposition \ref{pr:ourStrongAsymptotics} allows only the following type of decomposition:
\[
\alpha = x_1^2+x_2^2 + \beta = \Bigl(\frac{a_1+b_1\sqrt5}{2} + \sqrt{s}\Bigr)^2 + \Bigl(\frac{a_2+b_2\sqrt5}{2} + \sqrt{s}\Bigr)^2 + \beta, \quad\text{where $\beta \in \sum \O_{\Q(\!\sqrt5)}^2$.}
\]
We shall show that $b_1=b_2=0$ and $\frac12 a_1 = \fss$ and $\frac12 a_2 = \css$ (or vice versa). This suffices since then $\beta = 1^2 + \bigl(\frac{1+\sqrt5}{2}\bigr)^2 + \bigl(\frac{1+\sqrt5}{2}\bigr)^2$ which has length $3$ in $\O_{\Q(\!\sqrt5)}$ (remember that by Proposition \ref{pr:ourStrongAsymptotics} we can only use summands from this subfield).
To achieve this, we expand brackets:
\[
\Bigl(\frac{a_1+b_1\sqrt5}{2} + \sqrt{s}\Bigr)^2 + \Bigl(\frac{a_2+b_2\sqrt5}{2} + \sqrt{s}\Bigr)^2 = \underbrace{\cdots\cdots}_{\in\Q(\!\sqrt5)} + (a_1+a_2)\sss + (b_1+b_2)\sqrt{5s};
\]
comparing this with $\alpha = \underbrace{\cdots\cdots}_{\in\Q(\!\sqrt5)} + 2\bigl(\fss+\css\bigr)\sss$, one sees that $a_1+a_2 = 2(\fss+\css)$ and $b_1+b_2=0$.
To conclude the proof, one has to compare traces. As we explain below, the assumption $\atr \alpha \geq \atr (x_1^2 + x_2^2)$ together with the equalities $a_1+a_2 = 2(\fss+\css)$ and $b_1+b_2=0$ and parity conditions $a_1\equiv b_1$ and $a_2\equiv b_2 \pmod2$ leaves only five possible choices (if we fix that either $a_1>a_2$, or $a_1=a_2$ and $b_1\geq b_2$); namely, $(a_1,a_2,b_1,b_2)$ can either take the correct value $(2\css, 2\fss, 0, 0)$, or one of the following four:
\begin{align*}
&\bigl(2\css+2,2\fss-2,0,0\bigr); \qquad \bigl(\css+\fss,\css+\fss,1,-1\bigr);\\
&\bigl(\css+\fss+2,\css+\fss-2,\pm 1, \mp 1\bigr).
\end{align*}
For the four incorrect possibilities, one indeed gets $\beta = \alpha - (x_1^2+x_2^2) \in \Q(\!\sqrt5)$ with nonnegative trace, but in none of the cases is $\beta$ totally nonnegative. This concludes the proof.
To help the reader, we explain how to compare the traces efficiently. By evaluating the traces, one gets
\[
2s + \fss^2 + \css^2 + 4 \geq \frac{a_1^2+a_2^2 + 5(b_1^2+b_2^2)}{4} + 2s;
\]
on the right-hand side we use the general identity $2x^2 + 2y^2 = (x+y)^2+(x-y)^2$ together with the known values of $a_1+a_2$ and $b_1+b_2$ to obtain
\[
\fss^2 + \css^2 + 4 \geq \frac{(a_1-a_2)^2 + 5(b_1-b_2)^2}{8} + \frac{(\fss + \css)^2}{2}.
\]
Surprisingly, this can be rewritten (for $\sqrt{s}\notin \Z$) as $36 \geq (a_1-a_2)^2 + 5(b_1-b_2)^2$, which, together with the parity conditions, leaves only the five possibilities listed above.
\end{proof}
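The final enumeration can also be double-checked mechanically. The following small Python sketch (ours, purely illustrative and not part of the proof; the value $s=11$ and the finite search ranges are ad hoc choices) lists all tuples $(a_1,a_2,b_1,b_2)$ with $a_1+a_2=2(\fss+\css)$, $b_1+b_2=0$, satisfying the parity conditions and the inequality $36 \geq (a_1-a_2)^2+5(b_1-b_2)^2$, normalised as above; it returns exactly the five possibilities.
\begin{verbatim}
import math

def candidates(s):
    # s is assumed square-free and not a perfect square
    f = math.isqrt(s)              # floor of sqrt(s)
    c = f + 1                      # ceiling of sqrt(s)
    out = []
    for b1 in range(-3, 4):
        b2 = -b1                   # b_1 + b_2 = 0
        for a1 in range(f + c - 10, f + c + 11):
            a2 = 2 * (f + c) - a1  # a_1 + a_2 = 2(floor + ceiling)
            if (a1 - b1) % 2 or (a2 - b2) % 2:
                continue           # parity: a_i and b_i have the same parity
            if (a1 - a2) ** 2 + 5 * (b1 - b2) ** 2 > 36:
                continue           # the trace inequality
            if a1 > a2 or (a1 == a2 and b1 >= b2):
                out.append((a1, a2, b1, b2))
    return out

print(candidates(11))              # exactly five tuples, among them (8, 6, 0, 0)
\end{verbatim}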
Before proving Proposition \ref{pr:ourStrongAsymptotics}, we start with a general result which provides very strong information about the possible decompositions of a whole family of elements; this family also contains $\alpha$. Results of this type could be proved for extensions of fields other than $\Q(\!\sqrt5)$, but in no other case were we forced to use them. Avoiding them was practical, since arguments of this type leave a rather large number of exceptional fields to be checked separately (compare part (\ref{it:sqrt5sNot1}) of Lemma \ref{le:comp} with the other parts of the same lemma).
\begin{proposition} \label{pr:sqrt5GeneralStrongAsymptotics}
Let $\alpha_0,A_1,A_2 \in \O_{\Q(\!\sqrt5)}$ and $n \in \N$. Then there exists a bound $S(n)$ such that for $s>S(n)$, $s\not\equiv 1 \pmod4$, (square-free, $5 \nmid s$), the following holds:
Denote $K = \BQ{5}{s}$. Whenever the number
\[
\alpha = \alpha_0 + \bigl(\floor{\sqrt{s}} + A_1 + \sqrt{s}\bigr)^2 + \bigl(\floor{\sqrt{s}} + A_2 + \sqrt{s}\bigr)^2
\]
is represented as a sum of at most $n$ squares in $\O_K$, then exactly two of the squares belong to $(\O_{\Q(\!\sqrt5)} + \sss)^2$ and the remaining ones lie in $(\O_{\Q(\!\sqrt5)})^2$.
\end{proposition}
\begin{proof}
First we observe that the absolute values of two of the conjugates of $\alpha$ are bounded by a constant $k = k(\alpha_0,A_1,A_2)$ independent of $s$; this is clearly true for the two automorphisms of $K$ sending $\sqrt{s} \mapsto -\sqrt{s}$, since the conjugates of $\alpha_0$, $A_1$ and $A_2$ are only constants and $\floor{\sqrt{s}}-\sqrt{s}$ is bounded. From this one easily sees that if $\alpha = \sum x_i^2$ with $x_i = a_i+b_i\sqrt5+c_i\sss+d_i\sqrt{5s}$ \textbf{(where the coefficients belong to $\Q$, not necessarily $\Z$)}, then
\begin{align*}
|a_i-b_i\sqrt5-c_i\sss+d_i\sqrt{5s}| &< \sqrt{k},\\
|a_i+b_i\sqrt5-c_i\sss-d_i\sqrt{5s}| &< \sqrt{k}.
\end{align*}
The triangle inequality then yields
\begin{equation}\label{eq:2ineq}
|a_i - c_i\sss| < \sqrt{k} \qquad \text{ and } \qquad |b_i\sqrt5 - d_i\sqrt{5s}| < \sqrt{k}.
\end{equation}
We shall exploit this later. Now we need to find any constants $C,D \in \Q$ and $S_1 \in\N$ such that for $s > S_1$, we have $|c_i| \leq C$ and $|d_i| \leq D$. This is easily done by comparing traces, since $\atr\alpha$ is $4s + O(\sqrt{s})$ while $\atr x_i^2 \geq c_i^2s$ and $\atr x_i^2 \geq d_i^25s$; thus by choosing a large enough $S_1$ we can even deduce $c_i^2 \leq 4$ and $5d_i^2 \leq 4$. (Since $c_i,d_i \in \frac{\Z}{2}$, we have $C=2$ and $D=\frac12$; but the exact value of these constants is not important.)
From now on consider only $s>S_1$. Our plan is to compare the asymptotic behaviour of $\atr\alpha$ and $\sum \atr x_i^2$. We already mentioned that $\atr\alpha$ can be written as $4s + x(s)$, where $|x(s)| \leq k_1 \sss + k_2$; here $k_1$ and $k_2$ are explicit constants depending only on $\alpha_0,A_1,A_2$.
So it remains to analyse $\sum\atr x_i^2$, where $\atr x_i^2 = a_i^2+5b_i^2+sc_i^2+5sd_i^2$. Thanks to \eqref{eq:2ineq} we can replace the first two terms: There is a $\delta \in (-\sqrt{k},\sqrt{k})$ such that
\[
a_i^2 = (c_i\sss + \delta)^2 = sc_i^2 + 2\delta c_i\sss + \delta^2 = sc_i^2 + y,
\]
where $|y| \leq (2\cdot\sqrt{k} C)\sss + (\sqrt{k})^2$. Similarly $5b_i^2 = 5sd_i^2 + z$ where $|z| \leq (2\cdot\sqrt{k} D\sqrt{5})\sss + (\sqrt{k})^2$. All in all,
\[
\atr x_i^2 = (sc_i^2 + y) + (5sd_i^2 + z) + sc_i^2 + 5sd_i^2 = 2(c_i^2 + 5d_i^2)s + (y+z) = 2(c_i^2 + 5d_i^2)s + O(\sss).
\]
So far the number of squares in the decomposition played no role; what we have proved actually holds for any $x_i^2 \preccurlyeq \alpha$. Now, in the last step, the constants start depending on $n$:
\begin{align*}
\sum_{i=1}^n \atr x_i^2 = \sum_{i=1}^n \Bigl( 2(c_i^2 + 5d_i^2)s + O(\sss) \Bigr) &= 2\Bigl(\sum_{i=1}^n c_i^2 + 5d_i^2\Bigr)s + n O(\sss)\\
&= 2\Bigl(\sum_{i=1}^n c_i^2 + 5d_i^2\Bigr)s + O(\sss).
\end{align*}
Since this also has to be equal to $\atr \alpha = 4s + O(\sqrt{s})$, we finally conclude that for $s>S(n)$ the coefficients in front of $s$ are equal:
\[
2 = \sum_{i=1}^n (c_i^2 + 5 d_i^2).
\]
Since $c_i,d_i$ are either both in $\frac{\Z}{2} \setminus \Z$ or both in $\Z$, one easily sees that each nonzero summand in this sum is either $1^2+0 = 1$ or $\bigl(\frac12\bigr)^2 + 5\bigl(\frac12\bigr)^2 = \frac32$. However, to obtain $2$ as a sum of these numbers, the only choice is $1+1$. Therefore for each $i$ we have either $c_i=d_i=0$, or $|c_i|=1$, $d_i=0$, and the latter case happens for exactly two indices $i_1,i_2$. Since $x_i^2=(-x_i)^2$, we can choose $c_{i_1}=c_{i_2}=1$, which concludes the proof.
\end{proof}
The result we need to complete the proof of Proposition \ref{pr:sqrt5sNot1} is just the previous Proposition with concrete choices $\alpha_0=1 + \bigl(\frac{1+\sqrt5}{2}\bigr)^2 + \bigl(\frac{1+\sqrt5}{2}\bigr)^2 = 4+\sqrt5$, $A_1=0$, $A_2=1$, and with the effectively computed bound $S(4)$. The main reason for proving the general version first was to make the proof easier to read. Now, going through the previous proof step by step, computing the bounds explicitly (and being slightly more careful in order to get reasonably small values) yields the following:
\begin{proposition} \label{pr:ourStrongAsymptotics}
Denote $K = \BQ{5}{s}$ for $s \geq 3254$, $s\not\equiv 1 \pmod4$, (square-free, $5 \nmid s$). Then the following holds:
Whenever the number
\[
\alpha = 1^2 + \Bigl(\frac{1+\sqrt5}{2}\Bigr)^2 + \Bigl(\frac{1+\sqrt5}{2}\Bigr)^2 + \bigl(\floor{\sqrt{s}} + \sss\bigr)^2 + \bigl(\css + \sss\bigr)^2
\]
is represented as a sum of four squares in $\O_K$, then exactly two of the squares belong to $(\O_{\Q(\!\sqrt5)} + \sss)^2$ and the remaining two lie in $(\O_{\Q(\!\sqrt5)})^2$.
\end{proposition}
\begin{proof}
The reader has already seen the general version of this proof in Proposition \ref{pr:sqrt5GeneralStrongAsymptotics}. Thus we only include concrete values of some bounds as stepping stones.
To avoid fractions, we denote $\phi = \frac{1+\sqrt5}{2}$ and $\overline{\phi} = \frac{1-\sqrt5}{2}$.
If $\alpha'$ is the image of $\alpha$ under the automorphism of $K$ sending $\sqrt5 \mapsto -\sqrt5$, $\sss \mapsto -\sss$, and $\alpha''$ its image under the automorphism $\sqrt5 \mapsto \sqrt5$, $\sss \mapsto -\sss$, then clearly
\[
\alpha' = 1 + 2\overline{\phi}^2 + (\fss -\sss)^2 + (\css-\sss)^2 < 1 + 2\overline{\phi}^2 + 1 = 5 - \sqrt5
\]
and similarly
\[
\alpha'' < 2 + 2\phi^2 = 5 + \sqrt5.
\]
Thus
\begin{align*}
|a_i-b_i\sqrt5-c_i\sss+d_i\sqrt{5s}| &< \sqrt{5-\sqrt5},\\
|a_i+b_i\sqrt5-c_i\sss-d_i\sqrt{5s}| &< \sqrt{5+\sqrt5}.
\end{align*}
By the triangle inequality
\begin{equation}\label{eq:2ineqVAR}
|a_i - c_i\sss| < \kappa \qquad \text{ and } \qquad |b_i\sqrt5 - d_i\sqrt{5s}| < \kappa,
\end{equation}
where $\kappa = \frac12 \Bigl(\! \sqrt{5-\sqrt5}+\sqrt{5+\sqrt5}\Bigr)$.
Now we shall express the asymptotic behaviour of $\atr\alpha$. Denote $\{s\}=\sss - \fss$. Then
\begin{align*}
\atr\alpha &= 1 + 2\cdot\frac32 + \fss^2 + s + \css^2 + s\\
&= 4 + 2s + (\sss-\{s\})^2 + (\sss + (1-\{s\}))^2\\
&= 4 + 4s - 2\{s\}\sss + 2(1-\{s\})\sss + \{s\}^2 + (1-\{s\})^2\\
&= 4 + 4s + \lambda\sss + \mu\\
&= 4s + \lambda\sss + (4+\mu),
\end{align*}
where $\lambda \in (-2,2)$ and $\mu \in (\frac12,1)$. Thus we got
\begin{equation}\label{eq:traceAsymp}
\mathopen|\atr\alpha - 4s\mathclose| < 2\sss + 5.
\end{equation}
We first use this inequality together with $\atr\alpha \geq s(c_i^2+5d_i^2)$ to find $S_1$ such that for $s>S_1$ we have $|c_i| \leq 2$ and $|d_i| \leq \frac12$. Suppose that one of these inequalities does not hold, i.e.\ either $|c_i| \geq \frac52$ or $|d_i|\geq 1$. Then $\atr\alpha \geq 5s$, giving $s < 2\sss + 5$, which is a contradiction already for $s \geq 12$.
From now on consider only $s\geq 12$. From \eqref{eq:2ineqVAR} we know that there exists $\delta_i \in (-\kappa,\kappa)$ such that
\[
a_i^2 = (c_i\sss + \delta_i)^2 = sc_i^2 + 2\delta_i c_i\sss + \delta_i^2 = sc_i^2 + y_i,
\]
where $|y_i| \leq (2\kappa C)\sss + \kappa^2$. Similarly $5b_i^2 = 5sd_i^2 + z_i$ where $|z_i| \leq (2\kappa D\sqrt5)\sss + \kappa^2$. All in all,
\[
\atr x_i^2 = (sc_i^2 + y_i) + (5sd_i^2 + z_i) + sc_i^2 + 5sd_i^2 = 2(c_i^2 + 5d_i^2)s + w_i,
\]
where $|w_i| = |y_i+z_i| < \kappa(4+\sqrt5)\sss + 2\kappa^2$.
Now, assuming that there are only four summands, and summing over all $i$, this yields
\[
\sum_{i=1}^4 \atr x_i^2 = 2\Bigl(\sum_{i=1}^4 c_i^2 + 5d_i^2\Bigr)s + \sum_{i=1}^4 w_i,
\]
hence, since the left-hand side is equal to $\atr\alpha$, we conclude
\[
\Bigl|\atr\alpha - 2\Bigl(\sum_{i=1}^4 c_i^2 + 5d_i^2\Bigr)s \Bigr| < 4\bigl(\kappa(4+\sqrt5)\sss + 2\kappa^2\bigr).
\]
Finally we can draw conclusions about the value of $\Sigma := \sum_{i=1}^4 (c_i^2 + 5d_i^2)$: Comparing the last inequality with \eqref{eq:traceAsymp}, we obtain
\[
|s(4 - 2\Sigma)| < \bigl(2 + 4\kappa(4+\sqrt5)\bigr)\sss + (5 + 8\kappa^2).
\]
One easily sees that $\Sigma$ can only take values from $\frac{\Z}{2}$, since the same is true for each of the four summands. Therefore $\Sigma \neq 2$ would mean $|s(4-2\Sigma)| \geq s$. Hence if $\Sigma \neq 2$, then the previous inequality implies
\[
s < \bigl(2 + \kappa(16+4\sqrt5)\bigr)\sss + (5 + 8\kappa^2) \approx 56.29\sss + 42.89.
\]
This inequality holds only for $s \leq 3253$.
Thus for $s \geq 3254$, it is necessary that $\Sigma = \sum_{i=1}^4 (c_i^2 + 5d_i^2) = 2$. Drawing the conclusion that without loss of generality $c_1=c_2=1$ and the other coefficients $c_i$, $d_i$ are zero is easy (and was done in the previous proof already).
\end{proof}
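The numerical constants appearing at the end of the proof are easy to recompute; the following sketch (ours, only a sanity check and not part of the proof) evaluates $\kappa$, the coefficients $\approx 56.29$ and $\approx 42.89$, and the largest $s$ satisfying the final inequality, and does the same for the relaxed coefficient used in Observation \ref{ob:sqrt5Improvement} below.
\begin{verbatim}
import math

sqrt5 = math.sqrt(5)
kappa = (math.sqrt(5 - sqrt5) + math.sqrt(5 + sqrt5)) / 2   # kappa as in the proof

def largest_s(A, B):
    # largest integer s with  s < A*sqrt(s) + B   (A, B > 0)
    x = (A + math.sqrt(A * A + 4 * B)) / 2      # positive root of t^2 - A*t - B
    return math.floor(x * x)

B0 = 5 + 8 * kappa ** 2            # approx. 42.89
A1 = 2 + kappa * (16 + 4 * sqrt5)  # approx. 56.29
print(largest_s(A1, B0))           # 3253

A2 = 2 + 2 * kappa * (2 + sqrt5)   # the coefficient from the Observation below
print(largest_s(A2, B0))           # 499
\end{verbatim}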
There are many ways to improve the bound $3254$ significantly. In our presentation, we preferred the simplicity of the proof over the optimality of the result. However, in order to save computation time in proving Lemma \ref{le:comp} (\ref{it:sqrt5sNot1}), it is better to use the following stronger bound, for which we only provide a sketch of the proof:
\begin{observation} \label{ob:sqrt5Improvement}
The condition $s \geq 3254$ in Proposition \ref{pr:ourStrongAsymptotics} can be relaxed to $s \geq 500$.
To see this, one must go over the corresponding proof more slowly and more carefully. First show independently by comparing traces that $\Sigma = \sum_{i=1}^4 (c_i^2 + 5d_i^2) \leq 4$ unless $s \leq 32$. From this inequality deduce all possible values of $(c_1, d_1, \ldots, c_4, d_4)$. Use this to observe that $\sum \bigl(|c_i| + \sqrt5 |d_i|\bigr) \leq 2 + \sqrt5$. \notabene{This in fact holds even without the restriction that there are only four summands. The reason is that $c_i^2+5d_i^2 \neq 0$ can anyway hold at most four times.}
This makes it possible to avoid the rough estimates $|c_i|\leq 2$, $|d_i| \leq \frac12$ and to get a stronger inequality than the one in the proof:
\[
\Bigl|\atr\alpha - 2\Bigl(\sum_{i=1}^4 c_i^2 + 5d_i^2\Bigr)s \Bigr| < 2\kappa(2+\sqrt5)\sss + 4\cdot 2\kappa^2,
\]
leading\notabene{In fact, by a nice trick it can be shown that $2\kappa^2$ can be replaced by exactly $5$. But it would make the proof even longer, and it yields only a slight improvement: $s < 466.4$.} directly to
\[
|s(4 - 2\Sigma)| < \bigl(2 + 2\kappa(2+\sqrt5)\bigr)\sss + (5 + 8\kappa^2).
\]
This implies $\Sigma = 2$ unless $s < 499.8$.\notabene{By the way, I'm really curious whether the \emph{whole} bound can be done independently of the number of summands. I can improve $2\kappa^2$ to $5$, but after that I must multiply it by the number of summands. Yes, I can prove that the summands outside $\Q(\!\sqrt5)$ are at most 4, but what about the others?}
\end{observation}
\subsection{Upper bound} \label{ss:sqrt5upper}
Before stating a general result (Proposition \ref{pr:upperbound}) which provides an upper bound for Pythagoras numbers, we must introduce a generalisation of the Pythagoras number, the so-called \emph{$g$-invariants} of a ring. For $\Z$, they were introduced by Mordell, see \cite{Mo2}, their study being called \enquote{quadratic Waring's problem}. For recent results on them, see e.g.\ \cite{CI, KrY}.
Let $R$ be a commutative ring. A homogeneous polynomial in $n$ variables over $R$ of degree two is called an $n$-ary \emph{quadratic form}; if it is of degree one, we call it a \emph{linear form}. A quadratic form $Q$ \emph{represents} $\alpha \in R$ if there exists a nonzero $\vec{x} = (x_1, \ldots, x_n) \in R^n$ such that $\alpha = Q(x_1, \ldots, x_n) = Q(\vec{x})$. If a form represents only totally positive elements, it is \emph{totally positive definite}; a \emph{totally positive semidefinite} form may also represent zero. The simplest totally positive definite $n$-ary quadratic form, sum of $n$ squares, is traditionally denoted by $I_n$. Thus, the fact that a number $\alpha\in R$ is a sum of, say, five squares can be rephrased as \enquote{$\alpha$ is represented by $I_5$ over $R$}. More generally, let $Q$ be an $n$-ary quadratic form and $A$ a $k$-ary quadratic form, $1 \leq k \leq n$. We say that $A$ is \emph{represented} by $Q$ (or $Q$ \emph{represents} $A$) if there exist linear forms $\L_1, \ldots, \L_n : R^k \to R$ such that
\[
Q\bigl(\L_1(\vec{x}), \ldots, \L_n(\vec{x}) \bigr) = A(\vec{x}).
\]
Clearly, $Q$ represents $\alpha\in R$ if and only if it represents the unary form $\alpha x^2$.
Two quadratic forms are \emph{equivalent} if one can be obtained from the other by an invertible substitution.
Let us now consider the set $\mathrm{Sq}_k$ of all $k$-ary quadratic forms which can be written as a sum of squares of linear forms, i.e.\ which are represented by $I_n$ for some, possibly large, $n$. Then, similarly to the definition of the Pythagoras number, we put
\[
g_R(k) = \min\{n : I_n \text{ represents all forms in } \mathrm{Sq}_k\}.\] If no such $n$ exists, we put $g_R(k) = \infty$; however, if $R$ is an order in a number field, then $g_R(k)$ is finite for every $k$. (For maximal orders, see \cite{Ic}; the general statement is \cite{KrY}, Corollary 1.3.)
In particular, $g_R(1) = \P(R)$.
Very few exact values of $g_R(k)$ are known. Mordell and Ko proved that $g_{\Z}(k) = k+3$ for $2 \leq k \leq 5$, see e.g.\ \cite{Ko}; much later, Kim and Oh showed $g_{\Z}(6)=10$ in \cite{KO}. In Theorem \ref{th:sasaki}, we prove $g_{\O_F}(2)=5$ for $F=\Q(\!\sqrt5)$; this was probably proven by Sasaki and published in \cite{SaJapan}. For discussion of values of $g_R(k)$ where $R$ is a global field, local ring or a nonreal maximal order, see the introduction of \cite{Ic}.
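To ground the definition, here is a quick numerical illustration (ours, not taken from the cited papers) of the simplest case $k=1$ over $\Z$: by Lagrange's theorem $g_{\Z}(1)=\P(\Z)=4$, i.e.\ four squares always suffice and are sometimes necessary.
\begin{verbatim}
def lengths_up_to(N):
    # L[n] = least number of integer squares summing to n (simple dynamic programming)
    INF = float("inf")
    L = [0] + [INF] * N
    for n in range(1, N + 1):
        a = 1
        while a * a <= n:
            L[n] = min(L[n], L[n - a * a] + 1)
            a += 1
    return L

L = lengths_up_to(2000)
print(max(L[1:]))                              # 4: four squares always suffice up to 2000
print([n for n in range(1, 50) if L[n] == 4])  # [7, 15, 23, 28, 31, 39, 47]
\end{verbatim}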
The $g$-invariants provide upper bounds on the Pythagoras number. Kala and Yatsyna showed in \cite{KY}, Corollary 3.3, that $g_{\Z}(k)$ is an upper bound for $\P(\O)$ where $\O$ is any order of degree $k$; this is the upper bound $\P(\O) \leq 7$ which holds for all biquadratic orders. We exploit their idea to prove the following generalisation:
\begin{proposition}\label{pr:upperbound}
Let $R \supset S$ be commutative rings such that $R$ is a free module over $S$ of rank $k$. Then
\[
\P(R) \leq g_{S}(k).
\]
\end{proposition}
\begin{proof}
To simplify the notation, we assume $k=2$ (which is the case we shall need); the general proof is entirely analogous. Let $\beta_1,\beta_2 \in R$ be an integral basis of $R$ over $S$. Now take any $\alpha \in R$ and assume it is a sum of $N_\alpha \in \N$ squares; we show that it can be rewritten as a sum of $g = g_{S}(2)$ squares. We have
\[
\alpha = \sum_{i=1}^{N_\alpha} (a_i\beta_1 + b_i\beta_2)^2.
\]
Define a binary quadratic form $Q_{\alpha}$ over $S$ by
\[
Q_{\alpha}(X,Y) = \sum_{i=1}^{N_\alpha} (a_iX + b_iY)^2;
\]
since it is a sum of squares of linear forms over $S$, it can be written as
\[
Q_{\alpha}(X,Y) = \sum_{i=1}^{g} (c_iX + d_iY)^2.
\]
This is an equality of two polynomials, so it is valid after replacing $X,Y$ by elements of any commutative ring containing $S$. In particular, we can plug in the integral basis of $R$:
\[
\alpha = \sum_{i=1}^{N_\alpha}(a_i\beta_1 + b_i\beta_2)^2 = \sum_{i=1}^g (c_i\beta_1 + d_i\beta_2)^2.
\]
This is the desired representation of $\alpha$ as a sum of $g$ squares.
\end{proof}
\begin{remark}
The condition that $R$ is free as an $S$-module is not necessary; it suffices that it is generated by $k$ elements. Since any torsion-free module of rank $r$ over a Dedekind domain is generated by $r+1$ elements, one gets the weaker estimate $\P(\O) \leq g_{\O_F}(r+1)$ for any field extension $L/F$ and any order $\O_F \subset \O \subset \O_L$. An interesting question is whether $r+1$ can be replaced by $r$ (as in the case where $\O_F$ is a PID).\footnote{This question was one of the starting points for the recent preprint \cite{KrY}. There, it is resolved as follows: If $\O_F$ is not a PID, it is more natural to use another version of the $g$-invariant (there denoted as $G_{\O_F}$) which is defined in terms of general quadratic lattices instead of just quadratic forms. In this setting, the inequality $\P(\O) \leq G_{\O_F}(r)$ indeed holds, and one also gets several other interesting inequalities for both versions of the invariant, vastly generalising our Proposition \ref{pr:upperbound}.}
\end{remark}
For the rest of this section, denote $F = \Q(\!\sqrt5)$. With the previous proposition, in order to obtain an upper bound $\P(\O) \leq 5$ for $\O \supset \O_F$, it remains to prove $g_{\O_F}(2)=5$. This is Theorem \ref{th:sasaki}. To prove it, we need a lemma from Sasaki's paper \cite{SaEng}:
\begin{lemma}[\cite{SaEng}, Lemma 12] \label{le:sasaki}
Let $F = \Q(\!\sqrt5)$ and let $Q$ be a binary quadratic form over $\O_F$. Then $Q$ is a sum of four squares of linear forms if and only if $Q$ is
\begin{enumerate}
\item totally positive semidefinite,
\item classical (i.e.\ the coefficient in front of $XY$ is in $2\O_F$),
\item not equivalent to the form $G(X,Y) = 2X^2 + 2XY + 2\frac{1+\sqrt5}{2}Y^2$ over the completion $(\O_F)_{(2)}$, where $(2)$ is the dyadic place.
\end{enumerate}
\end{lemma}
Its proof is based on the fact that the genus of $I_4$ in $F$ contains only one class, so it suffices to determine which binary forms are represented locally. Let us note that the conditions of $Q$ being totally positive semidefinite and classical are implicit in Sasaki's formulation, and that instead of $G$, he uses the form $2X^2 + 2\bigl(\frac{1+\sqrt5}{2}\bigr)XY + 2Y^2$, which is equivalent to $G$ over $(\O_F)_{(2)}$.
\begin{remark}
Due to 93:11 of \cite{OMeara}, the form $G$ from the previous statement can be characterised as the unique (up to equivalence) unimodular binary form over $(\O_F)_{(2)}$ which is anisotropic and takes only values from $2(\O_F)_{(2)}$.
\end{remark}
Since the following result of Sasaki, which we need, is, to the best of our knowledge, only available in Japanese in \cite{SaJapan}, we provide both its full statement and its proof.
\begin{theorem}[Sasaki]\label{th:sasaki}
Let $F=\Q(\!\sqrt5)$. Then
\[
g_{\O_F}(2)=5.
\]
\end{theorem}
\begin{proof}
The easier inequality $g_{\O_F}(2) \geq 5$ can be proven either directly or using Lemma \ref{le:sasaki} by exhibiting any binary form which is a sum of squares and is equivalent to $G$ over the dyadic place. One such form\notabene{This form was rather randomly chosen; it is quite possible that even $G$ itself is a sum of squares in $\O_F$.} is $2X^2 + 2XY + \bigl(2\frac{1+\sqrt5}{2} + 16\bigr)Y^2 = (X+Y)^2 + X^2 + \bigl(2\frac{1+\sqrt5}{2} + 15\bigr)Y^2$; it is a sum of (five) squares since $2\frac{1+\sqrt5}{2} + 15$, as a totally positive element of $\O_{\Q(\!\sqrt5)}$, is a sum of (three) squares, and its equivalence to $G$ over the dyadic place follows e.g.\ from the previous remark.
Now we shall take any binary quadratic form $Q(X,Y) = \sum_i \L_i^2(X,Y) = \sum_i (a_iX + b_iY)^2$ where $a_i,b_i \in \O_F$ and prove that it can be rewritten as a sum of five squares. Clearly, any form which is a sum of squares is totally positive semidefinite and classical; therefore, if $Q$ is not equivalent to $G$ over $(\O_F)_{(2)}$, even four squares suffice by Lemma \ref{le:sasaki}. Thus we may assume that $Q$ is equivalent to $G$ over $(\O_F)_{(2)}$. This means that $Q$ represents only elements of $2\O_F$.
If one of $\L_i^2$ represents an element outside $2\O_F$, then so does $Q - \L_i^2$; therefore $Q - \L_i^2$ satisfies the conditions of Lemma \ref{le:sasaki} and can be represented by four squares, showing that $Q$ is a sum of five squares. It remains to show that it is absurd for all $\L_i^2$ to represent only elements of $2\O_F$: By plugging in $(X,Y) = (1,0)$ and $(0,1)$, one sees that both $a_i$ and $b_i$ lie in $2\O_F$; thus $\L_i^2$ takes only values from $4\O_F$. This contradicts the fact that $\sum \L_i^2$ represents $2$ over $(\O_F)_{(2)}$.
\end{proof}
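The computation hidden in the first paragraph of the proof can be confirmed symbolically. In the following sketch (ours, purely illustrative) the element $2\frac{1+\sqrt5}{2}+15 = 16+\sqrt5$ is written as the sum of the three squares $\bigl(2-\frac{1+\sqrt5}{2}\bigr)^2$, $\bigl(2-\frac{1+\sqrt5}{2}\bigr)^2$ and $(2+\sqrt5)^2$; this particular decomposition was found by a small search and is of course only one possible choice.
\begin{verbatim}
import sympy as sp

X, Y = sp.symbols('X Y')
phi = (1 + sp.sqrt(5)) / 2

# the binary form exhibited in the proof
Q = 2*X**2 + 2*X*Y + (2*phi + 16)*Y**2

# five linear forms over Z[phi] whose squares sum to Q
forms = [X + Y, X, (2 - phi)*Y, (2 - phi)*Y, (2 + sp.sqrt(5))*Y]

print(sp.expand(Q - sum(f**2 for f in forms)))   # 0
\end{verbatim}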
Finally we can put the pieces together:
\begin{theorem} \label{th:upperboundSqrt5}
Let $K$ be a quadratic extension of $\Q(\!\sqrt5)$ and $\O$ any order in $K$ containing $\frac{1+\sqrt5}{2}$. Then $\P(\O)\leq 5$.
\end{theorem}
\begin{proof}
By combining Proposition \ref{pr:upperbound} and Theorem \ref{th:sasaki}, one sees that $\P(R)\leq 5$ holds for any commutative ring $R$ which is a free module over $\O_{\Q(\!\sqrt5)} = \Z\bigl[\frac{1+\sqrt5}{2}\bigr]$ of rank $2$.
Since the class number of $\Q(\!\sqrt5)$ is $1$, any finitely generated torsion-free module over $\Z\bigl[\frac{1+\sqrt5}{2}\bigr]$ is free. In particular, any ring which is a free $\Z$-module of rank $r$ and contains $\frac{1+\sqrt5}{2}$ is a free $\Z\bigl[\frac{1+\sqrt5}{2}\bigr]$-module of rank $\frac{r}{2}$. This concludes the proof.
\end{proof}
We reiterate that for biquadratic fields, the result is not only applicable to maximal orders; it also holds for any order of the form $\Z\bigl[\frac{1+\sqrt5}{2},\alpha\bigr]$ where $\alpha \notin \Q(\!\sqrt5)$ is an algebraic integer of degree $2$:
\begin{corollary}
For any biquadratic number field $K$ containing $\sqrt5$, there are infinitely many orders $\O\subset K$ satisfying $\P(\O) \leq 5$.
\end{corollary}
Moreover, Theorem \ref{th:upperboundSqrt5} applies not only to biquadratic fields. For fields which are not totally real, it gives nothing new (recall that any order which is not totally real has $\P(\O) \leq 5$ by \cite{Pf}); but it can be applied to other totally real quartic orders. Notably, it includes the maximal order $\Z\Bigl[\!\sqrt{\frac{5+\sqrt5}{2}}\Bigr]$, which, along with $\Z\bigl[\sqrt2,\frac{1+\sqrt5}{2}\bigr]$, is one of the only two totally real quartic orders in which the local-global principle for sums of squares is satisfied (by Scharlau's dissertation \cite{Sch2}).
\section{Six is almost always a lower bound} \label{se:P>=6}
In this section, we follow up the previous results by showing that $\P(\O_K)=5$ is still not the typical behaviour. We prove the second part of Theorem \ref{th:main2parts}, namely:
\begin{theorem} \label{th:P>=6}
Let $F = \Q(\!\sqrt{m})$ be a real quadratic field with $\P(\O_F) = 5$, i.e.\ $m \neq 2,3,5,6,7$. Then there are only finitely many totally real biquadratic fields $K \supset F$ with $\P(\O_K)\leq 5$.
\end{theorem}
\begin{proof}
For $m \neq 13$, it is a direct consequence of Proposition \ref{pr:P>=6witnesses} which also contains the appropriate definition of the element $\alpha$ with length $6$. The case $m=13$ is handled separately (but using the same idea) in Proposition \ref{pr:sqrt13P>=6}.
\end{proof}
The proof is concluded in Subsection \ref{ss:P>=6proof}, while Subsection \ref{ss:P>=6idea} explains the underlying main idea which may be exploited in many other situations besides biquadratic fields with a given quadratic subfield.
The case $m = 13$ is handled separately since $\Q(\!\sqrt{13})$ has very few elements of length $5$ (see Theorem \ref{th:quadratic}); also, the obtained Proposition \ref{pr:sqrt13P>=6} is used in Section \ref{se:P>=5}. However, it should be noted that the main idea of the proof remains the same as in the other cases.
Now we provide an important part of the proof -- for each biquadratic field with $m \neq 2,3,5,6,7,13$, we define the appropriate $\alpha$ which usually has length $6$. We always have $\alpha = \alpha_0 + w^2$ with $\alpha_0$ as in Section \ref{se:P>=5}; however, to find the appropriate definition of $w$, we must distinguish $12$ families of fields. The simple method by which this terrifying-looking case distinction was constructed is explained in the next two subsections.
The list of all the mutually disjoint families, and for each of them the appropriate choice of $\alpha_0$ and $w$, follows. To be concise, we write only e.g.\ \enquote{Type (B1)} to mean \enquote{The field $K$ is of type (B1)}. The letter $q$ has the same meaning as in Subsection \ref{ss:biquadratic}, i.e.\ in (B1) it just means $q \equiv 3 \pmod4$ and in (B2,3) it means $q \equiv 1 \pmod4$.
\begin{enumerate}
\item If $m \not\equiv 1 \pmod{4}$, we put $\alpha_0 = 7 + (1+\sqrt{m})^2$ and: \label{it:(1)}
\begin{enumerate}
\item Type (B1): \label{it:(a)}
\begin{enumerate}
\item $q=m$, i.e.\ $m\equiv 3 \pmod{4}$ \label{it:(i)}
\begin{enumerate}
\item $s_0>3t_0$: Put $w = 1+\sqrt{s}$.\label{it:(A)}
\item $s_0<3t_0$: Put $w = 1 + \frac{\sqrt{s}+\sqrt{t}}{2}$.
\end{enumerate}
\item $q=s$, i.e.\ $s\equiv 3 \pmod{4}$
\begin{enumerate}
\item $s_0>4t_0$: Put $w = 1+\sqrt{s}$.\label{it:1aiiA}
\item $s_0<4t_0$: Put $w = \frac{\sqrt{m}+\sqrt{t}}{2}$.\label{it:1aiiB}
\end{enumerate}
\item $q=t$, i.e.\ $t\equiv 3 \pmod{4}$: Put $w = \frac{\sqrt{m}+\sqrt{s}}{2}$.\label{it:(iii)}
\end{enumerate}
\item Type (B2,3):
\begin{enumerate}
\item $q=s$, i.e.\ $s\equiv 1 \pmod{4}$: Put $w = \frac{1+\sqrt{s}}{2}$.
\item $q=t$, i.e.\ $t\equiv 1 \pmod{4}$: Put $w = \frac{\sqrt{m}+\sqrt{s}}{2}$.
\end{enumerate}
\end{enumerate}
\item If $m \equiv 1 \pmod{4}$, we put $\alpha_0 = 7 + \bigl(\frac{1+\sqrt{m}}{2}\bigr)^2$ and: \label{it:(2)}
\begin{enumerate}
\item Type (B2,3):
\begin{enumerate}
\item $s_0>3t_0$: Put $w = 1+\sqrt{s}$.
\item $s_0<3t_0$: Put $w = 1 + \frac{\sqrt{s}+\sqrt{t}}{2}$.
\end{enumerate}
\item Type (B4): \label{it:(2b)}
\begin{enumerate}
\item $s_0>3t_0$: Put $w = \frac{1+\sqrt{s}}{2}$.
\item $s_0<3t_0$: \label{it:(2bii)}
\begin{enumerate}
\item Type (B4a): Put $w = \frac{1+\sqrt{m}+\sqrt{s}+\sqrt{t}}{4}$. \label{it:(2biiA)}
\item Type (B4b): Put $w = \frac{1+\sqrt{m}+\sqrt{s}-\sqrt{t}}{4}$.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{enumerate}
Let us note that these families indeed cover all possibilities, since $s_0=4t_0$ cannot happen for $s_0$ square-free, and (as $s_0$ and $t_0$ are coprime) $s_0=3t_0$ is possible only for $s_0=3$, $t_0=1$, leading to $m=3$, which violates the assumption $\P(\O_{\Q(\!\sqrt{m})})=5$.
We conclude this part by formulating what remains to be proven.
\begin{proposition}\label{pr:P>=6witnesses}
Let $F=\Q(\!\sqrt{m})$ be a given field as in Theorem \ref{th:P>=6}, with $m\neq 13$, and for each totally real biquadratic field $K \supset F$ define $\alpha = \alpha_0+w^2$ as above. Then, with finitely many exceptions, $\ell(\alpha)=6$ in $\O_K$.
\end{proposition}
\begin{proof}
For a given $m$, there are only finitely many biquadratic fields $K$ where $m$ is not the smallest of the three square-free integers whose square roots lie in $K$; disregarding them, we may assume that the usual Convention \ref{no:mst} holds. Clearly, every $m$ corresponds to only finitely many choices of $(t_0,s_0)$.
After realising this, it suffices to prove twelve separate lemmata, where for example the first of them, with code name \eqref{it:(A)}, claims: \enquote{Let $s_0>t_0\geq 1$ be two coprime square-free numbers such that $s_0t_0 \equiv 3 \pmod4$ and $s_0>3t_0$. Put $\alpha_0=7+(1+\sqrt{s_0t_0})^2$. If we take any large enough square-free $m_0$ coprime with $s_0t_0$, and then denote $K=\BQ{s_0t_0}{m_0t_0}$ and put $w=1+\sqrt{m_0t_0}$, then $\ell(\alpha_0+w^2) = 6$ in $\O_K$.}
Another of these twelve statements, Lemma \ref{le:oneOf12}, is formulated and proven in Subsection \ref{ss:P>=6proof}. The others are omitted, since all the twelve proofs are very similar, and after understanding one of them, it is not difficult to write any of the others.
\end{proof}
\subsection{The idea} \label{ss:P>=6idea}
Here we suggest a general strategy which applies in many situations when one examines $\P(\O_K)$ where $K$ runs over extensions of a given field $F$ with an already known Pythagoras number $\P(\O_F)$. This strategy was applied to obtain the appropriate definition of $\alpha$ for the situation of Theorem \ref{th:P>=6}, but it can be tried for any field, not just the biquadratic ones. The aim is to find conditions under which $\P(\O_K) \geq \P(\O_F) + 1$ can be proven: For example, $\P(\Z)=4$, and indeed, $\P(\O_{\Q(\!\sqrt{n})}) = 5$ holds for almost all positive $n$. Or, as we have already seen in Subsection \ref{ss:67}: For $F = \Q(\!\sqrt6)$ or $\Q(\!\sqrt7)$, where $\P(\O_F)=4$, almost all biquadratic fields $K \supset F$ have $\P(\O_K) \geq 5$.
The \enquote{witness} $\alpha \in \O_K$, which conjecturally has length $\P(\O_F)+1$, is constructed as follows: First, one takes $\alpha_0 \in \O_F$ of length $\P(\O_F)$ in $\O_F$. (It is convenient to use one with the minimal trace, but any $\alpha_0$ can work.) It is reasonable to hope that in most extensions $K \supset F$, $\alpha_0$ will also require $\P(\O_F)$ squares. For example, the only real quadratic fields where $7$ can be written as a sum of fewer than four integral squares are those generated by $\sqrt2$, $\sqrt3$, $\sqrt5$, $\sqrt6$, $\sqrt7$ and $\sqrt{13}$; and a similar statement for our situation where $F$ is quadratic and $K$ biquadratic was proven in Subsections \ref{ss:P>=5mEquiv1} and \ref{ss:P>=5mEquiv23}. However, we need to find an element with length $\P(\O_F)+1$. This is often achieved by choosing $w^2 \in \O_K \setminus \O_F$ whose trace is minimal among such elements. Then one defines $\alpha = \alpha_0 + w^2$ and hopes to prove that any representation of $\alpha$ as a sum of squares uses $w^2$ as one of the summands. This has a good chance of success if $\Tr \alpha_0$ is much smaller than $\Tr w^2$: In such a case, $w^2$ will be one of the very few squares totally smaller than or equal to $\alpha$ and not belonging to $\O_F$.
This strategy was very successful in Peters' examination of quadratic fields as extensions of $\Z$: The choice $\alpha_0 = 7$ and $w = (1+\sqrt{m})$ or $w = \bigl(\frac{1+\sqrt{m}}{2}\bigr)$, see Theorem \ref{th:quadratic}, led to an element of length $5$ in all but six maximal orders, and the analogy for non-maximal orders succeeded every time except for $\Z[\sqrt5]$, see also Observation \ref{ob:betterQuadratic}.
Let us note, however, that there is no guarantee that this strategy will indeed work -- as we have seen in Subsection \ref{ss:m+4,m+8,m+12}, there are infinitely many biquadratic fields where even the weaker statement $\ell(\alpha_0)=5$ fails. This is also why the proof in the next subsection had to be carefully divided into twelve different branches depending on the integral basis and on $m$, and why it was necessary to perform the proof separately for each of these twelve families. The most troublesome situation occurs if the choice of $w^2$ is not unique: In such a case it might be difficult to show that the rejected candidates for $w^2$ cannot be used in the decomposition of $\alpha$ instead of $w^2$. But the strategy turned out to be successful in all twelve families.
We conclude this subsection by illustrating how it applies in the exceptional case $m=13$.
\begin{proposition}\label{pr:sqrt13P>=6}
Let $K=\BQ{13}{s}$. Then $\P(\O_K)\geq 6$.
In particular, $\ell(\alpha)=6$ holds for $\alpha = \alpha_0 + w^2$, where $\alpha_0 = 3+\bigl(\frac{1+\sqrt{13}}{2}\bigr)^2+\bigl(1+\frac{1+\sqrt{13}}{2}\bigr)^2 = 12 + 2\sqrt{13}$ and
\[
w = \begin{cases}
\frac{1+\sqrt{s}}{2} & \text{if $s \equiv 1 \pmod4$},\\
1+\sqrt{s} & \text{if $s \equiv 2,3 \pmod4$}.
\end{cases}
\]
\end{proposition}
\begin{proof}
The proof is an incarnation of the strategy explained above: Since $\ell(\alpha_0)=5$ in $\O_{\Q(\!\sqrt{13})}$, see Theorem \ref{th:quadratic}, it suffices to show by trace considerations that $w^2$ must be one of the summands and all the others must belong to $\Q(\!\sqrt{13})$. See also the analogous proofs in Subsection \ref{ss:67}.
More specifically: If $s\not\equiv 1$ and we assume a decomposition of $\alpha$ which is not of the form $w^2 + \cdots$ where the remaining summands are squares in $\Q(\!\sqrt{13})$, we arrive at $s \leq 13$, an immediate contradiction. For $s\equiv 1$, the obtained inequality is $s \leq 49$, which leaves only six fields; these were solved in Lemma \ref{le:comp} (\ref{it:sqrt13P>=6}).
\end{proof}
Note that the statement of the proposition actually holds even for fields $\BQ{13}{n}$ for $n=6,7,10,11$, see Lemma \ref{le:comp} (\ref{it:sqrt13=s}); the element $\alpha_0 + (1+\sqrt{n})^2$ has length $6$. This is interesting, especially for fields containing $\sqrt{6}$ and $\sqrt{7}$, for which we in general know only $\P(\O_K)\geq 5$; it is further evidence that fields with Pythagoras number less than $6$ are very rare, see Conjecture \ref{co:conjecture}.
\subsection{The proof} \label{ss:P>=6proof}
In this subsection we provide the proof of Theorem \ref{th:P>=6}. The general idea was explained at length in the previous subsection: One starts with $\alpha_0 \in \Q(\!\sqrt{m})$ which has the minimal trace among elements of length $5$, namely $\alpha_0 = 7 + \bigl(\frac{1+\sqrt{m}}{2}\bigr)^2$ if $m\equiv 1 \pmod4$ and $7 + (1+\sqrt{m})^2$ otherwise. Then one takes $w^2 \in \O_K \setminus \Q(\!\sqrt{m})$ which has minimal trace, to define $\alpha = \alpha_0 + w^2$, as the list above Proposition \ref{pr:P>=6witnesses} shows. This $w$ depends not only on the integral basis and on whether $m=q$ or $m \neq q$, but sometimes also on the (in)validity of inequalities like $3s > t$.
To solve this last problem, one had to consider all the finitely many decompositions $m = s_0t_0$ for the given $m$. Clearly, if we prove that for each fixed choice of $t_0$ and $s_0$ there are only finitely many $m_0$ such that the biquadratic field $K$ given by $m=t_0s_0, s=t_0m_0, t=s_0m_0$ has $\P(\O_K)\leq 5$, then we are done.
Since the proofs are essentially the same in all twelve families, we provide the proof for only one of them. We have chosen the most difficult branch, where some small additional tricks were needed.
\begin{lemma}[Branch \eqref{it:(2biiA)}] \label{le:oneOf12}
If the field $K=\BQ{m}{s}$ is of type (B4a), $m \neq 5,13$ and $s_0<3t_0$ holds, and $m_0$ is large enough,\notabene{If we needed a rough estimate: due to congruences we have $3t_0-s_0 \geq 2$, hence $m_0 > \frac{117+5m}{2}$ definitely suffices. If one follows this idea further, one sees that this condition is violated only $O(m)$ times. And for each $m$, one has to consider the number of decompositions $m=s_0t_0$, which is half of the number of divisors of $m$. $\ldots$ This could lead to showing that not only are there finitely many exceptions for each $m$, but that there are actually only $O(m\log m)$ of them, or some such bound. Interesting, but no time and space for it.} then the following element has length $6$:
\begin{align*}
\alpha &= \alpha_0 + w^2 = 7 + \frac{1+m}{4} + \frac{\sqrt{m}}{2} + \Bigl(\frac{1+\sqrt{m}+\sqrt{s}+\sqrt{t}}{4}\Bigr)^2\\
&= \Bigl(7 + \frac{5+5m+s+t}{16}\Bigr) + \frac{5+m_0}{8}\sqrt{m} + \frac{1+s_0}{8}\sqrt{s} + \frac{1+t_0}{8}\sqrt{t}.
\end{align*}
\end{lemma}
\begin{remark}
In this family, the condition is $m_0 > \frac{117+5m}{3t_0-s_0}$, as one sees from the proof.
\end{remark}
\begin{proof}
Suppose $\alpha = \sum x_i^2$. Since $\alpha_0$ cannot be written as a sum of less than five squares in $\O_{\Q(\!\sqrt{m})}$, it suffices to prove that one of the $x_i^2$ must be equal to $w^2$ and all other $x_i \in \Q(\!\sqrt{m})$.
Write $x_i = \frac{a_i+b_i\sqrt{m}+c_i\sqrt{s}+d_i\sqrt{t}}{4}$; since $K$ is of type (B4a), we see that $a_i,b_i,c_i,d_i$ are either all even or all odd, and $a_i+b_i+c_i+d_i \equiv 0 \pmod4$. Since $\alpha\notin \Q(\!\sqrt{m})$, at least one of the summands is not in $\Q(\!\sqrt{m})$, hence at least one of all the coefficients $c_i,d_i$ is nonzero. Due to the parity condition, either $\max\{\sum c_i^2,\sum d_i^2\} \geq 4$, or $\sum c_i^2 \geq 1$ and $\sum d_i^2 \geq 1$.
Comparing traces of $\alpha$ and $\sum x_i^2$, one gets
\[
7 + \frac{5+5m+s+t}{16} = \frac1{16}\Bigl(\sum a_i^2 + m\sum b_i^2 + s\sum c_i^2 + t\sum d_i^2\Bigr) \geq \frac{1}{16}\Bigl(s\sum c_i^2 + t\sum d_i^2\Bigr),
\]
which can be rewritten as
\[
\frac{117+5m}{16} + m_0\frac{t_0+s_0}{16} \geq m_0 \frac{1}{16}(t_0\sum c_i^2 + s_0\sum d_i^2).
\]
If $(t_0\sum c_i^2 + s_0\sum d_i^2)>t_0 + s_0$, then by choosing $m_0$ large enough we obtain a contradiction. Therefore the expression must be at most $t_0+s_0$. Hence $\sum c_i^2 \geq 4$ or $\sum d_i^2 \geq 4$ is impossible because of our assumption on $s_0$ and $t_0$, since then $(t_0\sum c_i^2 + s_0\sum d_i^2) \geq 4t_0 > t_0 + s_0$. Thus necessarily $\sum c_i^2 = \sum d_i^2 = 1$. Moreover, the unique nonzero $c_i$ and the unique nonzero $d_i$ must occur in the same summand: if $c_i$ is odd, then by the parity condition $d_i$ is odd as well, hence nonzero. Therefore we may assume $|c_1|=|d_1|=1$ and $c_i,d_i=0$ otherwise.
We have just proven $x_i\in\Q(\!\sqrt{m})$ for $i\neq 1$. Since $(-x_1)^2 = x_1^2$, we may assume $c_1=1$; hence there are two possibilities: Either $x_1 = \frac{a_1+b_1\sqrt{m}+\sqrt{s}+\sqrt{t}}{4}$, or $x_1 = \frac{a_1+b_1\sqrt{m}+\sqrt{s}-\sqrt{t}}{4}$. In both cases, we will use the fact that $\alpha - x_1^2 \in \Q(\!\sqrt{m})$ to determine the values of $a_1$ and $b_1$.
In the first case,
\[
x_1^2 = \Bigl(\frac{a_1+b_1\sqrt{m}+\sqrt{s}+\sqrt{t}}{4}\Bigr)^2 \in \Q(\!\sqrt{m}) + \frac{a_1+b_1s_0}{8}\sqrt{s} + \frac{a_1+b_1t_0}{8}\sqrt{t};
\]
by comparing the coefficients in front of $\sqrt{s}$ and $\sqrt{t}$ with those of $\alpha$, we obtain two equations $a_1+b_1s_0=1+s_0$ and $a_1+b_1t_0=1+t_0$. Since $t_0\neq s_0$, there is a unique solution: $a_1=b_1=1$. Therefore we indeed arrived at $x_1 = \frac{1+\sqrt{m}+\sqrt{s}+\sqrt{t}}{4}$, which is what we needed.
It remains to be shown that the other case is impossible. Again we write
\[
x_1^2 = \Bigl(\frac{a_1+b_1\sqrt{m}+\sqrt{s}-\sqrt{t}}{4}\Bigr)^2 \in \Q(\!\sqrt{m}) + \frac{a_1-b_1s_0}{8}\sqrt{s} + \frac{-a_1+b_1t_0}{8}\sqrt{t};
\]
this time we obtain the equations $a_1-b_1s_0=1+s_0$ and $-a_1+b_1t_0=1+t_0$. Again, the solution is unique: $a_1 = \frac{t_0+s_0+2s_0t_0}{t_0-s_0}$, $b_1 = \frac{t_0+s_0+2}{t_0-s_0}$. However, in order that $x_1 \in \O_K$, it is necessary that $a_1$ and $b_1$ are odd integers. But it is easy to check that if $s_0\equiv t_0 \equiv 1 \pmod4$, which are the conditions of (B4a), then $b_1 = \frac{t_0+s_0+2}{t_0-s_0} = 1 + 2\frac{s_0+1}{t_0-s_0}$ is either even, or not an integer. This concludes the proof.
\end{proof}
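The two linear systems at the end of the proof, and the parity obstruction in the second case, can also be confirmed with a computer algebra system; the following sketch (ours, purely illustrative) uses sympy, treating $s_0$ and $t_0$ first symbolically and then as concrete values $\equiv 1 \pmod 4$.
\begin{verbatim}
import sympy as sp

a, b, s0, t0 = sp.symbols('a b s_0 t_0')

# first case: the unique solution is a = b = 1
print(sp.solve([a + b*s0 - (1 + s0), a + b*t0 - (1 + t0)], [a, b]))

# second case: a = (t0 + s0 + 2*s0*t0)/(t0 - s0), b = (t0 + s0 + 2)/(t0 - s0)
sol = sp.solve([a - b*s0 - (1 + s0), -a + b*t0 - (1 + t0)], [a, b])
print(sp.simplify(sol[a] - (t0 + s0 + 2*s0*t0)/(t0 - s0)))   # 0
print(sp.simplify(sol[b] - (t0 + s0 + 2)/(t0 - s0)))         # 0

# for s_0, t_0 both congruent to 1 mod 4, the value b is never an odd integer,
# so the second case cannot occur (checked here for values below 62)
for S in range(1, 62, 4):
    for T in range(1, 62, 4):
        if T != S:
            val = sp.Rational(T + S + 2, T - S)
            assert not (val.is_integer and val % 2 == 1)
print("parity check passed")
\end{verbatim}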
\subsection{A conjecture} \label{ss:conjecture}
We suspect that it is in fact possible to prove a stronger result, listed already in the Introduction as the first part of Conjecture \ref{co:conjecture}. However, we did not try proving it in order not to make this paper longer than it already is. Moreover, the potential proof would not only have many branches, but probably some of them would require new ideas. As the last part of this paper, let us repeat the Conjecture \ref{co:conjecture} and list some evidence towards it:
\begin{conjecture*}
Let $K$ be a totally real biquadratic field.
\begin{enumerate}
\item If $K$ contains none of $\sqrt2$ and $\sqrt5$, then $\P(\O_K) \geq 6$ holds with finitely many exceptions.
\item If $K$ contains $\sqrt2$ or $\sqrt5$, then $\P(\O_K) \leq 5$.
\item The inequality $\P(\O_K)<5$ holds precisely for the following seven fields:\\ For $K = \BQ23, \BQ25, \BQ35$, where $\P(\O_K)=3$, and\\ for $K =\BQ27, \BQ37, \BQ56, \BQ57$, where $\P(\O_K)=4$.
\end{enumerate}
\end{conjecture*}
Part (\ref{it:con2}) was already proven for $\sqrt5$; for $\sqrt2$, it is an observation based on computer calculations: Our program never found an element with length $6$ or $7$. Of course we do not have a proof for even one concrete field, let alone for all extensions of $F = \Q(\!\sqrt2)$. We suggest that in analogy with Theorem \ref{th:sasaki}, one should try to prove $g_{\O_{\Q(\sqrt2)}}(2)=5$. This seems feasible, since the crucial point in the proof of Lemma \ref{le:sasaki}, namely that the genus of $I_4$ contains only one class, remains valid for $\Q(\!\sqrt2)$.\footnote{This strategy of the proof was successfully carried out by He and Hu in \cite{HH} before the present paper was published. Thus, part (2) of the Conjecture is now in fact a theorem.}
Let us turn to (\ref{it:con3}). The fact that these seven are the only possible exceptional fields is proven as Theorem \ref{th:main2parts} (\ref{it:main(1)}), and an element of the suspected maximal length in each of them was already given in Proposition \ref{pr:comp_examples}. Thus, it only remains to prove a stronger upper bound for these seven fields. Proposition \ref{pr:comp_examples} also contains some numerical evidence -- namely, the conjecture is valid for elements with trace up to $500$.
A further point in favour of this part of the conjecture is that it holds locally -- by the result \cite{Sch2}, Kapitel 0, Lemma 1, already mentioned in the Introduction, four integral squares are sufficient in all completions $K_{\mathfrak{p}}$, and three squares suffice unless $\mathfrak{p}$ is dyadic and $[K_{\mathfrak{p}}:\Q_2]$ is odd. Since biquadratic fields are Galois extensions of $\Q$ of even degree, one can show that such a problematic dyadic place can exist only for fields of type (B4), where $2$ does not ramify. None of the seven listed fields are of this type.
Regarding part (\ref{it:con1}), again it is based on computer experiments. There would be two separate tasks in proving it. First, one has to handle the cases not included in Theorem \ref{th:P>=6}, namely fields containing $\sqrt3$, $\sqrt6$ or $\sqrt7$. Here we observed from our data that even for quite small values of $s$, the corresponding $\O_K$ usually contained an element of length $6$. (For instance, for $K \ni \sqrt3$, Proposition \ref{pr:sqrt3shock} suggests that $\P(\O_K)\geq 6$ holds for all $s\geq 26$ as well as for $s=17$ and $s=22$, leaving only nine exceptions.) To prove this, one would either have to find in the computed data a family of such elements (as in Subsection \ref{ss:235} for length $5$ and $K \ni \sqrt2, \sqrt3$ or $\sqrt5$); or alternatively, if no such family could be found, one could prove an analogy of Proposition \ref{pr:sqrt5GeneralStrongAsymptotics} and construct a family of counterexamples in this way.
And the second task would be to prove a stronger version of Theorem \ref{th:P>=6}: If $m$ is chosen large enough, then there are no exceptional fields. It would be ideal to show that the corresponding $\alpha = \alpha_0 + w^2$ \emph{always} requires six squares. However, this is not true; at least some infinite families of exceptional fields \emph{can} be found: $s=m+4$, $s=m+8$ and $s=m+12$. Possibly they can be handled similarly as in Subsection \ref{ss:m+4,m+8,m+12}. However, it is clear that solving all cases would be a tedious task requiring both patience and clever ideas.
A further indication that the inequality $\P(\O_K)\geq 6$ indeed holds for all but finitely many fields (excluding $m=2$ or $5$) is that it is almost true in the case of $m,s$ coprime (see Proposition \ref{pr:coprime}). As a final piece of evidence we list a result which was shown to us by M.\ Tinková (including a proof) and will be part of an upcoming paper of hers. Since it contains no phrase \enquote{up to finitely many exceptions}, it is a stronger version of Theorem \ref{th:P>=6} for two of the twelve families, namely those coded by \eqref{it:(A)} and \eqref{it:1aiiA}, as well as an alternative for a third family, \eqref{it:1aiiB}.
\begin{theorem}[Tinková]\label{th:MagdaNotCoprime}
Let $K=\BQ{p}{q}$ where $p,q$ are square-free positive integers such that $p\equiv2$ and $q\equiv3 \pmod4$. Moreover, let $q_0> r_0\geq 3$ and $p_0 > 3r_0$. Then $\P(\O_K)\geq 6$.
In particular, $\alpha=7+(1+\sqrt{p})^2+(1+\sqrt{q})^2$ has length $6$.
\end{theorem}
\addtocontents{toc}{~
{}\par}
\end{document}
\begin{document}
\begin{center}
{\Large{\sc{Symmetries between two Ramsey properties}}} \end{center}
\begin{center} \small{Lorenz Halbeisen\footnote{The author wishes to thank the {\em Swiss National Science Foundation\/} for supporting him.}\\
Universit\'e de Caen\\
France} \end{center}
\begin{abstract} In this article we compare the well-known Ramsey property with a dual form of it, the so-called dual-Ramsey property (which was first suggested by Carlson and Simpson). Even though the two properties are different, it can be shown that all classical results known for the Ramsey property also hold for the dual-Ramsey property. We will also show that the dual-Ramsey property is closed under a generalized Suslin operation (the analogous result for the Ramsey property was proved by Matet). Further, we compare two notions of forcing, the Mathias forcing and a dual form of it, and give some symmetries between them. Finally we give some relationships between the dual-Mathias forcing and the dual-Ramsey property. \end{abstract}
\section{Notations and definitions}
Most of our set-theoretic notation and forcing notation is standard and can be found in \cite{Jech} or \cite{Kunen}. An exception is that we will write $A^B$ for the set of all functions from $B$ to $A$, instead of\ \ ${}^B\hspace{-1.5mm}A$, because we never use ordinal arithmetic. $A^{<\omega}$ is the set of all partial functions $f$ from $\omega$ to $A$ such that the cardinality of dom$(f)$ is finite.
First we will give the definitions of the sets we will consider as the real numbers.
Let $[x]^\kappa :=\{y\subseteq x: |y|=\kappa\}$ and
$[x]^{<\kappa} :=\{y\subseteq x: |y|<\kappa\}$, where $|y|$ denotes the cardinality of $y$. For $x\in [\omega]^\omega$, we will consider $[x]^{<\omega}$ as the set of strictly increasing, finite sequences in $x$ and $[x]^{\omega}$ as the set of strictly increasing, infinite sequences in $x$. For $x\in\reals$ and $n\in\omega$ let $x(n)$
be such that $x(n)\in x$ and $|x(n)\cap x|=n$.
We can consider $[\omega]^\omega$ also as the set of infinite $0$-$1$-sequences (denoted by $2^{\omega}$) or as the set of all infinite sequences in $\omega$ (denoted by ${\omega}^{\omega}$).
\subsubsection*{The Ellentuck topology}
We define a topology on $\reals$. Let $X\in\reals$ and $s\in [\omega]^{<\omega}$ such that $\mmax(s)<\mmin(X)$; then $[s,X]^\omega:=\{Y\in\reals :Y\subs (s\cup X)\wedge s\subs Y\}$. Now let the basic open sets on $\reals$ be the sets $[s,X]^\omega$. These sets are called the {\em Ellentuck neighborhoods.} The topology induced by the Ellentuck neighborhoods is called the {\em Ellentuck topology\/}.
\subsubsection*{Relations on the set of partitions}
A {\em partition\/} $\pp$ (of $\omega$) is a subset of ${\cal P} (\omega)$ such that the following holds: \begin{tabbing} iii)' \= \kill i)\> if $b\in\pp$ then $b\neq\emptyset$\\ ii)\> if $b_1,b_2\in\pp$ and $b_1\neq b_2$ then $b_1\cap b_2 =\emptyset$\\ iii)\> $\bigcup \pp =\omega$. \end{tabbing}
A partition means always a partition of $\omega$. If $\pp$ is a partition and $b\in\pp$ then we call $b$ a block of $\pp$. If a partition has infinitely many blocks (or equivalently if $\pp$ is infinite) we call $\pp$ an infinite partition. The set of all infinite partitions is denoted by $\parto$.
If $\pp$ is a partition, $b\in\pp$ and $n,m\in\omega$ both belong to $b$, then we write $\natural_{\pp}(n,m)$. On the other hand with $\{\{n,m\}\in [\omega]^2: \natural_{\pp}(n,m)\}$ we can reconstruct the partition $\pp$.
A {\em partial partition\/} $\pp'$ is a subset of ${\cal P} (\omega)$ such that (i) and (ii) hold but instead of (iii) we have \begin{tabbing} iii)$\prime$ \= \kill iii)${}'$\> $\bigcup \pp' =:\mdom(\pp')\subseteq\omega$. \end{tabbing}
Note that a partition is always also a partial partition. If $\mdom(\pp')\in\omega$ then $\pp'$ is a partition of some $n\in\omega$. The set of all partial partitions $\pp'$ where $\mdom(\pp')\in\omega$ is denoted by $\NN$. For $s\in\NN$, $s^*$ denotes the partial partition $s\cup\{\{ \mdom (s)\}\}$.
Let $X_1,X_2$ be two partial partitions. We say that $X_1$ is {\em coarser\/} than $X_2$, or that $X_2$ is {\em finer\/} than $X_1$, and write $X_1\sqsubseteq X_2$ if for all blocks $b\in X_1$ the set $b\cap\mbox{dom}(X_2)$ is the union of some sets $b_i\cap\mbox{dom}(X_1)$, where each $b_i$ is a block of $X_2$. Let $X_1\sqcap X_2$ denote the finest partial partition which is coarser than $X_1$ and $X_2$ such that $\mbox{dom}(X_1\sqcap X_2)= \mbox{dom}(X_1)\cup\mbox{dom}(X_2)$.
If $f\in[\omega]^{<\omega}$ is a finite subset of $\omega$, then $\{f\}$ is a partial partition with $\mbox{dom}(\{f\})=f$. For two partial partitions $X_1$ and $X_2$ we write $X_1\sqsubseteq^* X_2$ if there is a finite set $f\subseteq\mbox{dom}(X_1)$ such that $X_1\sqcap\{f\}\sqsubseteq X_2$ and say that $X_1$ is coarser${}^*$ than $X_2$. If $X_1\sqsubseteq^* X_2$, $X_2\sqsubseteq^* X_1$ and $\mbox{dom}(X_1)=\mbox{dom}(X_2)$, then we write $X_1\stackrel{*}{=} X_2$.
Let $\pp_1,\pp_2$ be two partial partitions. If each block of $\pp_1$ can be written as the intersection of a block of $\pp_2$ with $\mdom(\pp_1)$, then we write $\pp_1\seg\pp_2$. Note that $\pp_1\seg\pp_2$ implies $\mdom(\pp_1)\subseteq\mdom(\pp_2)$.
If $\pp$ is a partial partition, then $\mMin (\pp)$\index{$\mMin ()$} denotes the set $\{n\in\omega :\exists b\in\pp (n=\mmin(b))\}$, where $\mmin(b):=\bigcap b$. If we order the blocks of $\pp$ by their least element, then $\pp (n)$ denotes the $n$th block in this ordering and $\pp (n)(k)$ denotes the $k$th element (in the natural ordering) belonging to this block.
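For readers who prefer to experiment, the relations just introduced are easy to implement for finite objects. The following toy Python sketch (ours; it treats only the simplest situation of two partial partitions with the same finite domain) checks the relation \enquote{coarser} and computes the common coarsening $X_1\sqcap X_2$ by merging intersecting blocks.
\begin{verbatim}
def is_coarser(X1, X2):
    # X1 coarser than X2 (same domain): every block of X2 lies in a block of X1
    return all(any(b2 <= b1 for b1 in X1) for b2 in X2)

def meet(X1, X2):
    # finest partition coarser than both X1 and X2 (same domain):
    # merge blocks that intersect, transitively, via a tiny union-find
    parent = {x: x for b in X1 | X2 for x in b}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for block in X1 | X2:
        block = sorted(block)
        for x in block[1:]:
            parent[find(x)] = find(block[0])
    blocks = {}
    for x in parent:
        blocks.setdefault(find(x), set()).add(x)
    return {frozenset(b) for b in blocks.values()}

X1 = {frozenset({0, 1}), frozenset({2, 3}), frozenset({4})}
X2 = {frozenset({0}), frozenset({1}), frozenset({2, 4}), frozenset({3})}
print(is_coarser(X1, X2))   # False: the block {2, 4} of X2 meets two blocks of X1
print(meet(X1, X2))         # two blocks: {0, 1} and {2, 3, 4}
\end{verbatim}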
\subsubsection*{The dual Ellentuck topology}
We define a topology on the set of partitions as follows. Let $X\in\parto$ and $s\in\NN$ such that $s\ceq X$. Then ${\open{s}{X}}:=\{Y\in\parto :s\seg Y\wedge Y\ceq X\}$ and $\partX :={\open{\emptyset}{X}}$. Now let the basic open sets on $\parto$ be the sets ${\open{s}{X}}$ (where $X$ and $s$ as above). These sets are called the {\em dual Ellentuck neighborhoods.} The topology induced by the dual Ellentuck neighborhoods is called the {\em dual Ellentuck topology\/} (cf.~\cite{dual}).
\subsubsection*{Two notions of forcing}
The {\em Mathias forcing\/} $\M$ is defined as follows: \[\begin{array}{c} \langle s,S\rangle\in \M \Leftrightarrow s\in {[\omega ]}^{<\omega} \ \wedge\ S\in {[\omega ]}^{\omega}\ \wedge\ \mbox{max($s$)$<$min($S$)},\\ \langle s,S\rangle\leq\langle t,T\rangle \Leftrightarrow \ t\subs s\ \wedge\ S\subseteq T\ \wedge\ \forall n\in (s\setminus t) (n\in T). \end{array}\]
If $\langle s,S\rangle$ is an $\M$-condition, then we call $s$ the {\em{stem}\/} of the condition. The Mathias forcing $\M$ has a lot of combinatorial properties (which can be found in \cite{happy} and \cite{delta1-2} or in \cite{Halb}). Note that we can consider an $\M$-condition $\la s,S\ra$ as an Ellentuck neighborhood $[s,S]^\omega$ and $\la s,S\ra \leq\la t,T\ra$ if and only if $[s,S]^\omega\subs [t,T]^\omega$.
The {\em dual-Mathias forcing\/} $\dM$ is defined similarly to the Mathias forcing $\M$, using the dual Ellentuck topology instead of the Ellentuck topology. So, $$\langle s,X\rangle\in\dM\ \Leftrightarrow\ {\open{s}{X}}\ \mbox{is a dual Ellentuck neighborhood}$$ and $$\langle s,X\rangle\leq\langle t,Y\rangle\ \Leftrightarrow\ {\open{s}{X}}\subseteq{\open{t}{Y}}.$$ If $\langle s,X\rangle$ is an $\dM$-condition, then we call $s$ again the {\em{stem}\/} of the condition. Because the dual-Mathias forcing is very close to the usual Mathias forcing, it also has some nice properties similar to those of $\M$.
\subsubsection*{Two Ramsey properties}
The classical Ramsey property is a property of sets of infinite subsets of $\omega$ (of sets of reals). A set $A\subseteq\reals$ has the {\em{Ramsey property}\/} (or is {\em{Ramsey}}) if $\exists X\in\reals([X]^\omega\subseteq A\vee [X]^\omega\cap A=\emptyset).$ If there exists an $X$ such that $[X]^\omega\cap A=\emptyset$ we call $A$ a {\em{Ramsey null}} set. A set $A\subseteq\reals$ is {\em{completely Ramsey}\/} if for every Ellentuck neighborhood $[s,Y]^\omega$ there is an $X\in [s,Y]^\omega$ such that $[s,X]^\omega\subs A$ or $[s,X]^\omega\cap A=\emptyset$. If we are always in the latter case, then we call $A$ {\em{completely Ramsey null.}}
The {\em{dual-Ramsey property}\/} deals with sets of infinite partitions of $\omega$. A set $A\subseteq\parto$ has the {\em{dual-Ramsey property}\/} (or is {\em{dual-Ramsey}}) if $\exists X\in\parto(\partX\subseteq A\vee \partX\cap A=\emptyset).$ If there exists an $X$ such that $\partX\cap A=\emptyset$ we call $A$ a {\em{dual-Ramsey null}} set. A set $A\subseteq\parto$ is {\em{completely dual-Ramsey}\/} if for every dual Ellentuck neighborhood ${\open{s}{Y}}$ there is an $X\in {\open{s}{Y}}$ such that ${\open{s}{X}}\subs A$ or ${\open{s}{X}}\cap A=\emptyset$. If we are always in the latter case, then we call $A$ {\em{completely dual-Ramsey null.}}
Now we can start to give some symmetries between the two Ramsey properties and between the two Mathias forcings.
\section{Basic facts} \label{sec:basicfacts}
In this section we give the tools to consider sets of partitions as sets of reals and to compare the two Ramsey properties. We will give also some basic facts and well-known results concerning the dual-Ramsey property and dual-Mathias forcing. Further we give some symmetries between Mathias forcing and the dual-Mathias forcing.
To compare the two Ramsey properties we first show that we can consider each $A\subs\reals$ as a set of infinite partitions of $\omega$ and vice versa. For this we define some arithmetical relations and functions.
Let $n,m\in\omega$; then $\mbox{div}(n,m):=\mbox{max}(\{k\in\omega:k\cdot m\leq n\})$. For $\{n,m\}\in[\omega]^2$ let $\flat\{n,m\}:=\frac{1}{2} ({\mbox{max}(\{n,m\})^2-\mbox{max}(\{n,m\})})+\mbox{min}(\{n,m\})$. Consider $\flat\{n,m\}$ as undefined for $n=m$.
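Both functions are elementary arithmetic; the following Python sketch (purely illustrative) implements them and shows that $\flat$ enumerates the two-element subsets of $\omega$:
\begin{verbatim}
# div(n, m) = max{k : k*m <= n}, i.e. integer division (for m >= 1).
def div(n, m):
    return n // m

# flat({n, m}) = (max^2 - max)/2 + min, undefined for n = m.
def flat(n, m):
    assert n != m, "flat is undefined for n = m"
    hi, lo = max(n, m), min(n, m)
    return (hi * hi - hi) // 2 + lo

# flat enumerates the two-element subsets of omega:
# {0,1} -> 0, {0,2} -> 1, {1,2} -> 2, {0,3} -> 3, ...
print([flat(n, m) for m in range(1, 5) for n in range(m)])
# prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
\end{verbatim}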
Let $x\in [\omega]^\omega$; then $\mbox{trans}(x)\subseteq\omega$ is such that $n\not\in\mbox{trans}(x)$ {\em iff\/} there is a finite sequence $s$ of natural numbers of length $l+1$ such that $$n=\flat\{s(0),s(l)\}\ \ \mbox{and}\ \ \forall k\in\{1,\ldots\!,l\} (\flat\{s(k-1),s(k)\}\not\in x).$$ Note that $\mbox{trans}(x)\subseteq x$. If $x\in [\omega]^\omega$, then we can consider $x$ as a partition with $$\natural_x (n,m)\ \mbox{if and only if}\ n=m\ \mbox{or} \ \flat\{n,m\}\not\in\mbox{trans}(x).$$ The {\em corresponding partition\/} of a real $x\in [\omega]^\omega$ is denoted by $\cp (x)$. Note that $\cp (x)\in\parto$ {\em{iff}\/} $\forall k\exists n>k\forall m<n(\neg\natural_x (n,m))$ and further if $y\subseteq x$, then $\cp (y)\ceq\cp (x)$.
We encode a partition $\pp$ of $\omega$ by a real $\pac (\pp)$ (the {\em partition code\/} of $\pp$) as follows. $$\pac (\pp):=\{k\in\omega :\exists n m(k=\flat\{n,m\}\wedge \neg\natural_{\pp}(n,m))\}.$$ Note that if $\pp_1\ceq\pp_2$ then $\pac (\pp_1)\subseteq\pac (\pp_2)$. With these definitions we get the following fact.
\begin{fct} \label{fact:finer} The dual Ellentuck topology is finer than the topology of the Baire space. \end{fct}
\proof Let $s\in \omega^{<\omega}$ and $U_s =\{f\in\omega^\omega :s\subset f\}$ be a basic open set in the Baire space $\omega^\omega$. Because there is a bijection between $\omega^\omega$ and $\reals$, we can write $U_s$ as a set $V_{s'}=\{r\in\reals :s'\subset r\wedge \mmin(r\setminus s')>\mmax(s')\}$. Now $\cp [V_{s'}]\cap\parto$ (where $\cp [V_{s'}]:=\{\cp (r): r\in V_{s'}\}$) is open with respect to the dual Ellentuck topology. Therefore the dual Ellentuck topology is finer than the topology of the Baire space.\eor
\rmk A similar result is true for the Ellentuck topology (cf.\,\cite{Ellentuck}).
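As a small concrete illustration of the coding of partitions by reals introduced above, the following Python sketch computes the partition code restricted to partitions of a finite initial segment of $\omega$ (this finite truncation is not part of the formal definition and is used for illustration only):
\begin{verbatim}
# Partition code of a partition of {0,...,N-1}, given as a list of blocks:
# pac collects flat{n,m} for all pairs n,m lying in different blocks.
flat = lambda n, m: (max(n, m) ** 2 - max(n, m)) // 2 + min(n, m)

def pac(blocks):
    block_of = {n: i for i, b in enumerate(blocks) for n in b}
    return sorted(flat(n, m)
                  for n in block_of for m in block_of
                  if n < m and block_of[n] != block_of[m])

fine   = [{0, 2}, {1, 3}]     # a partition of {0,1,2,3} into two blocks
coarse = [{0, 1, 2, 3}]       # its one-block coarsening
print(pac(fine))              # [0, 2, 3, 5]
print(pac(coarse))            # []  -- coarsening shrinks the partition code
\end{verbatim}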
\begin{fct} \label{fct:RamseyBaire} A set $C\subs\parto$ is completely dual-Ramsey if and only if $C$ has the Baire property with respect to the dual Ellentuck topology and it is completely dual-Ramsey null if and only if it is meager with respect to the dual Ellentuck topology. \end{fct}
\proof This is proved in \cite{dual}. \eor
\rmk The analogous result is known for the Ramsey property with respect to the Ellentuck topology (cf.\,\cite{Ellentuck}).
\subsubsection*{Some symmetries between the two Mathias forcings}
If $\mathfrak{g}$ is $\dM$-generic over $V$ and $\mathfrak{g}'\in (\mathfrak{g})^{\omega}$, then also $\mathfrak{g}'$ is $\dM$-generic over $V$ (cf.~\cite{dual} Theorem 5.5). From this it follows immediately that $\dM$ is proper and therefore does not collapse $\aleph_1$. (For the definition of properness consider e.g.~\cite{tools}.)
Further, for any $\dM$-condition $\langle s,X\rangle$ and any sentence $\Phi$ of the forcing language $\dM$ there is an $\dM$-condition $\langle s,Y\rangle\leq \langle s,X\rangle$ such that $\langle s,Y\rangle\force_{\dM}\Phi$ or $\langle s,Y\rangle\force_{\dM}\neg\Phi$ (cf.~\cite{dual} Theorem 5.2). This property is called {\em{pure decision}}.
\rmk The similar results for Mathias forcing $\M$ can be found in \cite{happy} (or in \cite{multiple}).
We can write the dual-Mathias forcing as a two step iteration where the first step is the forcing notion $\fU$.
Let $\fU$ be the partial order defined as follows: $$p\in{\fU}\ \Leftrightarrow\
p\in\parto,$$ $$p\leq q \ \Leftrightarrow\ p\ceq^* q.$$
We can also write the Mathias forcing as a two step iteration, where the first step is the forcing notion $\bU$. Let $\J :=[\omega]^{<\omega}$ be the ideal of finite sets and let $\langle {\reals}/{\J},\leq\ \rangle =: \bU$ be the partial order defined as follows. $p\in{\bU}\ \Leftrightarrow\ p\in\reals$ and $p\leq q \ \Leftrightarrow\ p\setminus q\in\J$ (this is $p\subs^* q$).
\begin{fct}\label{fct:o-closed} The forcing notion $\fU$ is $\aleph_0$-closed and if $\fD$ is $\fU$-generic over $V$, then $\mMin (\fD)$ is a Ramsey ultrafilter in $V[\fD]$. \end{fct}
\proof Let $X_1\geq X_2\geq\ldots$ be a decreasing sequence in $\fU$. Choose a sequence $f_i$ $(i\in\omega)$ of finite sets of natural numbers, such that $X_{i+1}\puni \{f_i\}\ceq X_i$. Define $y_0:=X_0(0)$ and $y_n:=X_n(k)$ where $k:=3+\bigcup_{i<n}(\bigcup f_i)$. Now $Y:=\{y_i:i\in\omega\} \cup (\omega\setminus\bigcup_{i\in\omega}y_i)$ is coarser${}^*$ than each $X_i$ $(i\in\omega)$ and therefore $\fU$ is $\aleph_0$-closed.
Now we claim that the set $\{\mMin(\pp):\pp\in\fD\}$ is a Ramsey ultrafilter in $V[\fD]$. Remember that a forcing notion which is $\aleph_0$-closed adds no new reals to $V$ (cf.\cite{Jech} Lemma 19.6). Take a $\pi\in 2^{[\omega]^2}$ and a $Y\in\parto$; then by the Ramsey Theorem (cf.\cite{Jech} Lemma 29.1) for $\mMin(Y)\in\reals$ there exists an infinite $r\subseteq \mMin(Y)$ such that $\pi$ is constant on $[r]^2$. Now let $X:=\{b:b\in Y\wedge b\cap r\neq\emptyset\}\cup\bigcup\{b:b\in Y\wedge b\cap r=\emptyset\}$; then $X\ceq Y$ and $\mMin(X)=r$. Thus
$H_{\pi}:=\{X\in\parto: \pi |_{[\mMin(X)]^2}\ \mbox{is constant}\}$ is dense in $\fU$ and hence $H_{\pi}\cap\fD\neq\emptyset$. \eor
\rmk It is easy to see that the forcing notion $\bU$ is $\aleph_0$-closed. Further we have that if $D$ is $\bU$-generic over $V$, then $D$ is a Ramsey ultrafilter in $V[D]$.
The forcing notion $\fU$ is stronger than the forcing notion $\bU$.
\begin{fct} \label{fct:U-generic} If $\fD$ is $\fU$-generic, then the set $\{\mMin(\pp):\pp\in\fD\}$ is $\bU$-generic. \end{fct}
\proof Let $A\subseteq\reals$ be a maximal anti-chain in $\bU$, i.e., $A$ is a maximal almost disjoint family. Then the set $D_A :=\{\pp\in\fU:\exists a\in A(\mMin(\pp)\subseteq^* a)\}$ is dense in $\fU$. \eor
We define now the second step of the two step iteration.
Let $\fF\subs\parto$. The partial order $\Pf$ is defined as follows. \begin{center} {$\langle s,X\rangle\in \Pf \Leftrightarrow s\in\NN \ \wedge\ X\in\fF\ \wedge\ {\open{s}{X}}\ \mbox{is a dual Ellentuck neighborhood},$}\\ {$\langle s,X\rangle\leq\langle t,Y\rangle \Leftrightarrow \ {\open{s}{X}}\subseteq{\open{t}{Y}}$.} \end{center}
\rmk For ${\cal F}\subs\reals$ we can define the partial order $\Pcf$ similarly.
\begin{fct} \label{fct:fequi} Let $\tilde{\fD}$ be the canonical $\fU$-name for the $\fU$-generic object; then $${\fU}*\Padname \approx \dM.$$ \end{fct}
\proof \[\begin{array}{ccl} {\fU}*\Padname &\ =\ &\{\langle p,\langle \tilde{s},\tilde{X}\rangle \rangle :p\in {\fU}\wedge p{\force}_{\fU} \langle \tilde{s},\tilde{X}\rangle\in\Padname\}\\
&=&\{\langle p,\langle \tilde{s},\tilde{X}\rangle
\rangle : p\in\parto\wedge p{\force}_{\fU}
(\tilde{X}\in\tilde{\fD}\wedge
\tilde{s}\ceq\tilde{X})\}. \end{array}\]
Now the embedding \[\begin{array}{rccl} h:\ \ &\dM&\ \longrightarrow\ &\ {\fU}*\Padname\\
&\langle s,X\rangle& \longmapsto &\langle X,
\langle\check{s},\check{X}
\rangle\rangle \end{array}\] is a dense embedding (see \cite{tools} Definition 0.8):
\begin{enumerate} \item It is easy to see that $h$ preserves the order relation $\leq$. \item Let $\langle p,\langle \tilde{s},\tilde{X}\rangle\rangle\in{\fU}*
\Padname$.
Because ${\fU}$ is $\aleph_0$-closed, there is a condition
$q\leq p$ and $s\in\NN,\ X\in\parto$
such that $q{\force}_{\fU}\check{s}=\tilde{s}\wedge
\check{X}=\tilde{X}$.
Evidently, $\langle q,\langle\check{s},\check{X}\rangle\rangle\in
{\fU}*\Padname$ is stronger than $\langle p,\langle
\tilde{s},\tilde{X}\rangle\rangle$. Let $Z:=q\puni X$
and let $Z'\ceq^* Z$ be such that $s\ceq Z'$. Now we have
$h(\langle s,Z'\rangle)\leq \langle p,\langle
\tilde{s},\tilde{X}\rangle\rangle$. \eor \end{enumerate}
\rmk Let $\tilde{D}$ be the canonical $\bU$-name for the $\bU$-generic object, then ${\bU}*{\operatorname{\mathbf{P}}}_{\tilde{D}}\approx \M.$
The dual-Mathias forcing is stronger than the Mathias forcing.
\begin{fct} \label{fct:add-Mathias} The dual-Mathias forcing adds Mathias reals. \end{fct}
\proof Let $\DD$ be ${\fU}$-generic over $V$; then by Fact~\ref{fct:U-generic}, $D:=\{\mMin(X):X\in\DD\}$ is ${\bU}$-generic over $V$. Now we define $h:\Pad\rightarrow\Pd$ as follows. \[\begin{array}{rccl} h:\ \ &\Pad&\ \longrightarrow\ &\ \Pd\\
&\langle s,X\rangle& \longmapsto &\langle \mMin (s),
\mMin (X)\setminus\mMin (s)\rangle \end{array}\] For $h$ the following is true.
\\ (i) If $q_1,q_2\in\Pad$, $q_1\leq q_2$, then $h(q_1)\leq h(q_2)$.\\ (ii) $\forall q\in\Pad\forall p'\leq h(q)\exists q'\in\Pad$ such that $q$ and $q'$ are compatible and $h(q')\leq p'$.
\\ Therefore with \cite{multiple} Part~I, Lemma~2.7 we finally get $V^{\M}\subseteq V^{\dM}$. \eor
\section{On the dual-Ramsey property} \label{sec:dualRam}
In this section we will show that the dual-Ramsey property is closed under a generalized Suslin operation. As a corollary we will get the already known result that analytic sets are completely dual-Ramsey.
Let $\fJ\subseteq {\cal P}(\parto )$ be the set of all completely dual-Ramsey null sets. Further let $\add (\fJ)$ be the smallest cardinal $\kappa$ such that there exists a family ${\cal F}=\{J_\alpha\in\fJ :\alpha <\kappa\}$ with $\bigcup{\cal F}\not\in\fJ$ and let $\cov (\fJ)$ be the smallest cardinal $\kappa$ such that there exists a family ${\cal F}=\{J_\alpha\in\fJ :\alpha <\kappa\}$ with $\bigcup{\cal F}=\parto$. In \cite{Lori} it is shown that $\cov (\fJ)=\add (\fJ)= \fH$, where $\fH$ is the dual-shattering cardinal. Further it is shown that $\fH >\omega_1$ is relatively consistent with ZFC.\\ Let $\Seq (\kappa ):=\kappa^{<\omega}$ and for $f\in\kappa^{\omega}$, $n\in\omega$, let $\bar{f}(n)$ denote the finite sequence $\langle f(0), f(1),\ldots\!,\linebreak[2] f(n-1)\rangle$. The generalized Suslin operation ${\cal A}_{\kappa}$ (for a cardinal $\kappa$) is defined as follows: $${\cal A}_{\kappa}\{Q_s: s\in\Seq (\kappa )\}:= \bigcup\limits_{f\in\kappa^{\omega}}\bigcap\limits_{n\in\omega} Q_{\bar{f}(n)}\,.$$ In Theorem~\ref{thm:suslin} below we will show that for each cardinal $\kappa <\fH$, the completely dual-Ramsey sets are closed under the operation ${\cal A}_{\kappa}$. But first we give some other results.
A set $R\subseteq\parto$ is {\em dual Ellentuck meager\/} if $R$ is meager with respect to the dual Ellentuck topology. Remember that a set is dual Ellentuck meager if and only if it is completely dual-Ramsey null and a set is completely dual-Ramsey if and only if it has the Baire property with respect to the dual Ellentuck topology.\\ If ${\open{s}{X}}$ is a dual Ellentuck neighborhood then we say that $R$ is dual Ellentuck meager in ${\open{s}{X}}$ if $R\cap {\open{s}{X}}$ is dual Ellentuck meager. By \cite{dual} Theorem~4.1, $R$ is dual Ellentuck meager in ${\open{s}{X}}$ if for all ${\open{t}{Y}}\subseteq {\open{s}{X}}$ there exists a partition $Z\in{\open{t}{Y}}$ such that ${\open{t}{Z}}\cap R=\emptyset$.
Let $R\subseteq\parto$ and $M:=\bigcup\{ {\open{s}{X}}: R$ is dual Ellentuck meager in ${\open{s}{X}}\}$. Further let $M(R):=M\cap R$. We first show that \begin{lm}\label{lm:R} If ${\open{s}{X}}$ is a dual Ellentuck neighborhood such that ${\open{s}{X}}\subseteq M$, then $R$ is dual Ellentuck meager in ${\open{s}{X}}$. \end{lm}
\proof If ${\open{s}{X}}\subseteq M$, then ${\open{s}{X}}= \bigcup\{{\open{t}{Y}}\subseteq {\open{s}{X}}: R$ is dual Ellentuck meager in ${\open{t}{Y}}\}$. Let $N:=\bigcup\{{\open{u}{Z}}\subseteq {\open{s}{X}}: R\cap{\open{u}{Z}}=\emptyset\}$. Because $N$ is an open set, $N$ is completely dual-Ramsey. Therefore, for any ${\open{t}{Y}}\subseteq {\open{s}{X}}$ there exists a partition $Y' \in{\open{t}{Y}}$ such that ${\open{t}{Y'}}\subseteq N$ or ${\open{t}{Y'}}\cap N=\emptyset$. If we are in the latter case, then because ${\open{t}{Y'}}\subseteq {\open{s}{X}}$, we find a ${\open{u}{Y''}}\subseteq {\open{t}{Y'}}$ such that $R$ is dual Ellentuck meager in ${\open{u}{Y''}}$. Hence, there exists a ${\open{u}{Z}}\subseteq {\open{u}{Y''}}$ such that ${\open{u}{Z}} \cap R=\emptyset$, which contradicts ${\open{t}{Y'}}\cap N=\emptyset$. So we are always in the former case, which implies that $R$ is dual Ellentuck meager in ${\open{s}{X}}$. \eop
With this result, we can easily prove the following \begin{lm} \label{lm:MR} The set $M(R)$ is dual Ellentuck meager. \end{lm}
\proof Take a dual Ellentuck neighborhood ${\open{s}{X}}$ and let $S:=\bigcup\{{\open{t}{Z}}\subseteq{\open{s}{X}}:R$ is dual Ellentuck meager in ${\open{t}{Z}}\}$. Then $S$ as the union of open sets is open and a subset of ${\open{s}{X}}$. Because ${\open{s}{X}}$ is also closed (in the dual Ellentuck topology), the set $C:={\open{s}{X}}\setminus S$ is closed. By \cite{dual} Theorem~4.1 the sets $C$ and $S$ both are completely dual-Ramsey. Therefore we find for every ${\open{s'}{X'}}\subseteq{\open{s}{X}}$ a partition $Y\in{\open{s'}{X'}}$ such that ${\open{s'}{Y}}\subseteq S$ or ${\open{s'}{Y}}\subseteq C$. Now if ${\open{s'}{Y}}\subseteq S$, then by Lemma~\ref{lm:R}, $R$ is dual Ellentuck meager in ${\open{s'}{Y}}$ and if ${\open{s'}{Y}}\subseteq C$, then ${\open{s'}{Y}}\cap M(R)=\emptyset$. To see this, assume there is an $H\in M(R)\cap{\open{s'}{Y}}$. Because $H\in M(R)$ there exists a dual Ellentuck neighborhood ${\open{t}{Z}}$ such that $H\in{\open{t}{Z}}$ and $R$ is dual Ellentuck meager in ${\open{t}{Z}}$. Because $H\in{\open{t}{Z}}$ and $H\in{\open{s'}{Y}}$ there is a dual Ellentuck neighborhood ${\open{u}{U}}\subseteq{\open{t}{Z}}\cap{\open{s'}{Y}}$. But with ${\open{u}{U}}\subseteq{\open{t}{Z}}$ it follows that $R$ is dual Ellentuck meager in ${\open{u}{U}}$ and therefore ${\open{u}{U}}\subseteq S$, a contradiction to ${\open{u}{U}}\subseteq{\open{s'}{Y}}\subseteq C$.
Therefore, in both cases $M(R)$ is dual Ellentuck meager in ${\open{s'}{Y}}\subseteq{\open{s'}{X'}}$ and because ${\open{s}{X}}$ and ${\open{s'}{X'}}\subseteq{\open{s}{X}}$ were arbitrary, the set $M(R)$ is dual Ellentuck meager in each dual Ellentuck neighborhood. Hence, the set $M(R)$ is dual Ellentuck meager. \eop
\begin{cor} \label{cor:MR} The set $R\cup (\parto\setminus M)$ has the dual Ellentuck Baire property. \end{cor}
\proof Because $M$ is open, $\parto\setminus M$ is closed and $R\cup (\parto\setminus M)=(R\cap M)\cup (\parto\setminus M)=M(R)\cup (\parto\setminus M)$ which is the union of a meager set and a closed set and therefore has the dual Ellentuck Baire property. \eop
\begin{thm} \label{thm:MR2} If $R\subseteq\parto$, then we can construct a set $A\supseteq R$ which has the dual Ellentuck Baire property and whenever $Z\subseteq A\setminus R$ has the dual Ellentuck Baire property, then $Z$ is dual Ellentuck meager. \end{thm}
\proof Let $A:=R\cup (\parto\setminus M)$ where $M:=\bigcup\{ {\open{s}{X}}: R$ is dual Ellentuck meager in ${\open{s}{X}}\}$. By Lemma~\ref{lm:MR} and Corollary~\ref{cor:MR} we know that $A$ has the dual Ellentuck Baire property. Now let $Z\subseteq A\setminus R$ with the dual Ellentuck Baire property. If $Z$ is not dual Ellentuck meager, then there exists a dual Ellentuck neighborhood ${\open{u}{U}}$ such that ${\open{u}{U}}\setminus Z$ and therefore ${\open{u}{U}}\cap R$ are dual Ellentuck meager. Hence, $R$ is dual Ellentuck meager in ${\open{u}{U}}$ and therefore ${\open{u}{U}}\subseteq M$. Since ${\open{u}{U}}\cap Z\neq\emptyset$ and $Z\cap M=\emptyset$, there is a $Y\in{\open{u}{U}}$ such that $Y\not\in M$, which contradicts the fact that $R$ is dual Ellentuck meager in ${\open{u}{U}}$. \eop
Now we can prove the following \begin{thm}\label{thm:suslin} Let $\kappa <\fH$ be a cardinal number and for each $s\in\Seq (\kappa )$ let $Q_s\subseteq\parto$. If all the sets $Q_s$ are completely dual-Ramsey, then the set $${\cal A}_{\kappa}\{Q_s:s\in\Seq (\kappa )\}$$ is completely dual-Ramsey too. \end{thm}
\proof Let $\{Q_s:s\in\Seq (\kappa )\}$ be a set of completely dual-Ramsey sets and let $A:={\cal A}_{\kappa}\{Q_s:s\in\Seq (\kappa )\}$. For two sequences $s,f\in\kappa^{\leq\omega}$ we write $s\subseteq f$
if $s$ is an initial segment of $f$. If $s\in\kappa^{<\omega}$ is a finite sequence, then $|s|$ denotes the length of $s$. Without loss of generality we may assume that $Q_s\supseteq Q_t$ whenever $s\subseteq t$.\\ For $s\in\Seq (\kappa )$ let $$A_s:=\bigcup\begin{Sb} f\in\kappa^{\omega} \\ s\subseteq f \end{Sb} \bigcap\begin{Sb}
n\in\omega \\ n\geq |s| \end{Sb} Q_{\bar{f}(n)}.$$ For $s\in\Seq (\kappa )$ we have $A_s\subseteq Q_s$, $A_s=\bigcup_{\alpha <\kappa} A_{s{\join{\alpha}}}$ and $A=A_{\emptyset}$. By Theorem~\ref{thm:MR2}, for each $s\in\Seq (\kappa )$ we find a $B_s\supseteq A_s$ which is completely dual-Ramsey and if $Z\subseteq B_s\setminus A_s$ has the dual-Ramsey property, then $Z$ is dual-Ramsey null. Because $Q_s\supseteq A_s$ is completely dual-Ramsey, we may assume that $B_s\subseteq Q_s$ and therefore $$A={\cal A}_{\kappa}\{B_s:s\in\Seq (\kappa )\}.$$ Let $B:=B_{\emptyset}$. Note that $A=\bigcup_{\alpha<\kappa} A_{\langle\alpha\rangle}\subseteq\bigcup_{\alpha<\kappa} B_{\langle\alpha\rangle}$ and therefore $B\subseteq \bigcup_{\alpha<\kappa} B_{\langle\alpha\rangle}$. Now we show that $$B\setminus A\subseteq \bigcup\limits_{\alpha<\kappa} B_{\langle\alpha\rangle}\subseteq \bigcup \limits_{f\in\kappa^{\omega}}\bigcap\limits_{n\in\omega} B_{\bar{f}(n)}\subseteq\bigcup\limits_{s\in\Seq(\kappa)}(B_s \setminus\bigcup\limits_{\alpha<\kappa}B_{s{\join{\alpha}}})\,.$$ Assume $x\not\in\bigcup_{s}(B_s \setminus\bigcup_{\alpha<\kappa}B_{s{\join{\alpha}}})$. If we have for all $\alpha<\kappa$, that $x\not\in B_{\langle\alpha\rangle}$, then $x\not\in B$. And if there exists an $\alpha_0<\kappa$ such that $x\in B_{\langle \alpha_0\rangle}$, because $x\not\in\bigcup_{s} (B_s\setminus\bigcup_{\alpha<\kappa}B_{s{\join{\alpha}}})$ we find an $\alpha_1$ such that $x\in B_{\langle\alpha_0,\alpha_1\rangle}$ and finally we find an $f\in\kappa^{\omega}$ such that for all $n\leq\omega$: $x\in B_{\bar{f}(n)}$. But this implies that $x\in A$. Now because $B_s\setminus\bigcup_{\alpha<\kappa}B_{s{\join{\alpha}}} \subseteq B_s\setminus\bigcup_{\alpha<\kappa}A_{s{\join{\alpha}}} =B_s\setminus A_s$ and because $\bigcup_{\alpha<\kappa}B_{s{\join{\alpha}}}$ is the union of less than $\fH$ completely dual-Ramsey sets, $B_s\setminus\bigcup_{\alpha<\kappa}B_{s{\join{\alpha}}}$ is completely dual-Ramsey and as a subset of $B_s\setminus A_s$, it is completely dual-Ramsey null. Therefore, $B\setminus A$ as a subset of the union of less than $\fH$ completely dual-Ramsey null sets is completely dual-Ramsey null and because $B$ is completely dual-Ramsey, $A$ is completely dual-Ramsey too. \eop
\rmk A similar result holds also for the Ramsey property and is proved by Matet in \cite{Matet}.
As a corollary we get a result which was first proved by Carlson and Simpson (cf.\,\cite{dual}).
\begin{cor} \label{cor:analytic} Every analytic set is completely dual-Ramsey. \end{cor}
\proof This follows from Theorem~\ref{thm:suslin} and the fact that each analytic set $A\subseteq\reals$ can be written as $$A={\cal A}\{Q_s: s\in\Seq (\omega )\}$$ where each $Q_s\subseteq\reals$ is a closed set in the Baire space. \eop
\rmk For a similar result cf.\,\cite{Ellentuck} or \cite{Silver}.
\section{Game-families and the forcing notion P${}_{\fF}$} \label{sec:gamefamilies}
First we define a game and game-families. Then we show that, for game-families $\fF$, the forcing notion $\Pf$ has pure decision and if $X$ is $\Pf$-generic and $Y\in (X)^{\omega}$, then $Y$ is $\Pf$-generic too.
We call a family $\fF\subseteq\parto$ {\em non-principal}, if for all $X\in\fF$ there is a $Y\in\fF$ such that $Y\ceq X$ and $\neg (Y\ass X)$. A family $\fF$ is {\em closed under refinement}, if $X\ceq Y$ and $X\in\fF$ implies that $Y\in\fF$. Further it is {\em closed under finite changes\/} if for all $s\in\NN$ and $X\in\fF$, $X\kap s\in\fF$.
In the sequel $\fF$ is always a non-principal family which is closed under refinement and finite changes.
If $s\in\NN$ and $s\ceq X\in\fF$, then we call the dual Ellentuck neighborhood ${\open{s}{X}}$ an $\fF$-dual Ellentuck neighborhood and write ${\opf{s}{X}}$ to emphasize that $X\in\fF$. A set $\cO\subs\parto$ is called $\fF$-open if $\cO$ can be written as the union of $\fF$-dual Ellentuck neighborhoods.
For $s\in\NN$ remember that $s^* =s\cup\{\{\mdom (s)\}\}$.
Fix a family $\fF\subs\parto$ (which is non-principal and closed under refinement and finite changes). Let $X\in\fF$ and $s\in\NN$ be such that $s\ceq X$. We associate with ${\opf{s}{X}}$ the following game. (This type of game was suggested first by Kastanas in \cite{Kastanas}.)
$$\begin{array}{lcccccccc} {\operatorname{I}} &\ \ \ & \la X_0\ra & &\la X_1\ra & &\la X_2\ra & & \\
& & & & & & & & \ldots \\ {\operatorname{II}}& & &\la t_0,Y_0\ra & &\la t_1,Y_1\ra & &\la t_2,Y_2\ra & \end{array}$$
All the $X_i$ of player I and the $Y_i$ of player II must be elements of the family $\fF$. Player I plays $\la X_0\ra$ such that $X_0\in {\opf{s}{X}}$, then II plays $\la t_0,Y_0\ra$ such that $Y_0\in {\opf{s}{X_0}}$,
$s\seg t_0^*\seg Y_0$ and $|t_0|=|s|$. For $n\geq 1$, the $n$th move of player I is $\la X_n \ra$ such that $X_n\in {\opf{t_{n-1}^*}{Y_{n-1}}}$ and then player II plays $\la t_n,Y_n\ra$ such that $Y_n\in {\opf{t_{n-1}^*}{X_{n}}}$, $t_{n-1}^*\seg t_n^*\seg Y_n$
and $|t_n|=|t_{n-1}|+1$. Player I wins iff the only $Y$ with $t_n\seg Y$ (for all $n$) is in $\fF$. We denote this game by $\GF$ {\em {starting with}} $\la s,X\ra$.
A non-principal family $\fF$ which is closed under refinement and finite changes is a {\em game-family} if player II has no winning strategy in the game $\GF$.
A family $\fF\subs\parto$ is called a {\em filter} if for any $X,Y\in\fF$, also $X\kap Y\in\fF$. A filter which is also a game-family is called a {\em game-filter}. Note that $\parto$ is a game-family but not a game-filter. (But it is consistent with ZFC that game-filters exist, as Theorem~\ref{thm:gamefilter} will show).
Let $\cO\subs\parto$ be an $\fF$-open set. Call ${\opf{s}{X}}$ {\em good\/} (with respect to $\cO$), if for some $Y\in {\opf{s}{X}}\cfF$, ${\opf{s}{Y}}\subs\cO$; otherwise call it {\em bad}. Note that if ${\opf{s}{X}}$ is bad and $Y\in {\opf{s}{X}}\cfF$, then ${\opf{s}{Y}}$ is bad, too. We call ${\opf{s}{X}}$ {\em ugly}
if ${\opf{t^*}{X}}$ is bad for all $s\seg t^*\ceq X$ with $|t|=|s|$. Note that if ${\opf{s}{X}}$ is ugly, then ${\opf{s}{X}}$ is bad, too.
To prove the following two lemmas, we will in fact follow the proof of Lemma~19.15 in \cite{Kechris}.
\begin{lm}\label{lm:ugly} Let $\fF$ be a game-family and $\cO\subs\parto$ an $\fF$-open set. If ${\opf{s}{X}}$ is bad (with respect to $\cO$), then there exists a $Z\in {\opf{s}{X}}$ such that ${\opf{s}{Z}}$ is ugly. \end{lm}
\proof We begin by describing a strategy for player II in the game $\GF$ starting with $\la s,X\ra$. Let $\la X_n\ra$ be the $n$th move of player I and $t_n$ be such that $s\seg t_{n}$, $|t_{n}|=|s|+n$
and $t_{n}^*\seg X_n$. Let $\{t_{n}^i:i\leq m\}$ be an enumeration of all $t$ such that $s\seg t\ceq t_{n}$, $|t|=|s|$ and $\mdom (t)=\mdom (t_{n})$. Further let $Y^{-1}:=X_{n}$. Now choose for each $i\leq m$ a partition $Y^i\in\fF$ such that $Y^i\ceq Y^{i-1}$, $t_{n}^*\seg Y^i$ and ${\opf{(t_{n}^i)^*}{Y^i}}$ is bad or ${\opf{(t_{n}^i)^*}{Y^i}}\subs\cO$. Finally, let $Y_{n}:=Y^m$ and let player II play $\la t_{n},Y_{n}\ra$.
Because player II has no winning strategy, player I can play so that the only $Y$ with $t_n\seg Y$ (for all $n$) belongs to $\fF$. Let $S_Y:=\{t^*\ceq Y: s\seg t\wedge |t|=|s|\}$; then (because of the strategy of player II), for all $t\in S_Y$ we have either ${\opf{t^*}{Y}}$ is bad or ${\opf{t^*}{Y}} \subs\cO$. Now let $C_0:=\{t\in S_Y: {\opf{t}{Y}}$ is bad$\}$ and $C_1:=\{t\in S_Y:{\opf{t^*}{Y}}\subs\cO\}=S_Y\setminus C_0$. By a result of \cite{HalMat}, there exists a partition $Z\in {\opf{s}{Y}}\cfF$, such that $S_Z\subs C_0$ or $S_Z\subs C_1$. If we are in the latter case, we have ${\opf{s}{Z}}\subs\cO$, which contradicts that ${\opf{s}{X}}$ is bad. So we must have $S_Z\subs C_0$, which implies that ${\opf{s}{Z}}$ is ugly and completes the proof of the Lemma.
\eop
\begin{lm}\label{lm:2ndstep} If $\fF$ is a game-family and $\cO\subs\parto$ is an $\fF$-open set, then for every $\fF$-dual Ellentuck neighborhood ${\opf{s}{X}}$ there exists a $Y\in {\opf{s}{X}}\cap\fF$ such that ${\opf{s}{Y}}\subs\cO$ or ${\opf{s}{Y}}\cap\cO\cap\fF =\emptyset$. \end{lm}
\proof If ${\opf{s}{X}}$ is good, then we are done. Otherwise we consider the game $\GF$ starting with $\la s,X\ra$. Let $\la X_0\ra$ be the first move of player I. Because ${\opf{s}{X_0}}$ is bad, by Lemma~\ref{lm:ugly} we can choose $Y' \in {\opf{s}{X_0}}\cfF$ such that ${\opf{s}{Y'}}$ is ugly. Let $t_0$ be such that
$s\seg t_0^*\seg Y'$ and $|t_0|=|s|$. Now we choose $Y_0 \in {\opf{t_0^*}{Y'}}\cfF$ such that ${\opf{t_0^*}{Y_0}}$ is ugly, which is possible because ${\opf{t_0}{Y'}}$ is ugly and therefore ${\opf{t_0^*}{Y'}}$ is bad. Note that for all $t$ with $s\seg t\ceq t_{0}$ and $\mdom (t)=\mdom (t_{0})$ we have ${\opf{t^*}{Y_0}}$ is ugly. Now player II plays $\la t_0,Y_0\ra.$ \\
Let $\la X_{n+1}\ra$ be the $(n+1)$th move of player I. By the strategy of player II we have ${\opf{t^*}{X_{n+1}}}$ is ugly for all $t$ with $s\seg t\ceq t_{n}$ and $\mdom (t)=\mdom (t_{n})$. Let $t_{n+1}$ be such that $|t_{n+1}|=|t_n|+1=|s|+n$ and $t_n^*\seg t_{n+1}^*\seg X_{n+1}$. Let $\{t_{n+1}^i:i\leq m\}$ be an enumeration of all $t$ such that $s\seg t\ceq t_{n+1}$ and $\mdom (t)=\mdom (t_{n+1})$. Further let $Y^{-1}:=X_{n+1}$. Now choose for each $i\leq m$ a partition $Y^i\in\fF$ such that $Y^i\ceq Y^{i-1}$, $t_{n+1}^*\seg Y^i$ and ${\opf{(t_{n+1}^i)^*}{Y^i}}$ is ugly. (This is possible because we know that ${\opf{t^*}{X_{k}}}$ is ugly for all $k\leq n$ and $t$ with $s\seg t\ceq t_{k}$ and $\mdom (t)=\mdom (t_{k})$, which implies that ${\opf{(t_{n+1}^i)^*}{X_{n+1}}}$ is bad.) Finally, let $Y_{n+1}:=Y^m$ and let player II play $\la t_{n+1},Y_{n+1}\ra$.
Because player II has no winning strategy, player I can play so that the only $Y$ with $t_n\seg Y$ (for all $n$) belongs to $\fF$. We claim that ${\opf{s}{Y}}\cap\cO\cap\fF =\emptyset.$ Let $Z\in {\opf{s}{Y}}\cap\cO\cap\fF$. Because $\cO$ is $\fF$-open we find a $t\seg Z$ such that ${\opf{t^*}{Z}}\subs\cO$. Because $t^*\ceq Y$ we know by the strategy of player II that ${\opf{t^*}{Y}}$ is bad. Hence, there is no $Z\in {\opf{t^*}{Y}}$ such that ${\opf{t^*}{Z}}\subs\cO$. This completes the proof. \eop
Now we give two properties of the forcing notion $\Pf$ (where $\Pf$ is defined as in section~\ref{sec:basicfacts} and $\fF$ is a game-family). Note that for $\fF=\parto$ (which is obviously a game-family) the forcing notion $\Pf$ is the same as dual-Mathias forcing. The first property of the forcing notion $\Pf$ we give is called {\em pure decision.}
\begin{thm}\label{thm:pd} Let $\fF$ be a game-family and let $\Phi$ be a sentence of the forcing language $\Pf$. For any $\Pf$-condition ${\opf{s}{X}}$ there exists a $\Pf$-condition ${\opf{s}{Y}}\leq {\opf{s}{X}}$ such that ${\opf{s}{Y}}\force_{\Pf} \Phi$ or ${\opf{s}{Y}}\force_{\Pf} \neg\Phi$. \end{thm}
\proof With respect to $\Phi$ we define $\cO_1:=\{Y:{\opf{t}{Y}}{\force}_{\Pf} \Phi\ \mbox{for some $t\seg Y\in\fF$}\}$ and $\cO_2:=\{Y:{\opf{t}{Y}}{\force}_{\Pf} \neg\Phi\ \mbox{for some $t\seg Y\in\fF$}\}$. Clearly $\cO_1$ and $\cO_2$ are both $\fF$-open and $\cO_1\cup\cO_2$ is even dense (with respect to the partial order $\Pf$). Because $\fF$ is a game-family, by Lemma~\ref{lm:2ndstep} we know that for any ${\opf{s}{X}}\in\Pf$ there exists $Y\in{\opf{s}{X}}\cfF$ such that either ${\opf{s}{Y}} \subs\cO_1$ or ${\opf{s}{Y}}\cap\cO_1\cap\fF =\emptyset$. In the former case we have ${\opf{s}{Y}}{\force}_{\Pf}\Phi$ and we are done. In the latter case we find $Y'\in {\opf{s}{Y}}\cfF$ such that ${\opf{s}{Y'}}\subs\cO_2$. (Otherwise we would have ${\opf{s}{Y'}}\cap (\cO_2\cup\cO_1)\cap\fF =\emptyset$, which is impossible by the density of $\cO_1\cup\cO_2$.) Hence, ${\opf{s}{Y'}}{\force}_{\Pf}\neg\Phi$. \eop
Let $\fF$ be a game-family, $\fG$ be $\Pf$-generic and define $X_\fG:=\bigcap\fG$. Now $X_\fG$ is an infinite partition and $\fG=\{{\opf{s}{Z}}: s\seg X_\fG\ceq Z\}$. Therefore we can consider the partition $X_\fG\in\parto$ as a $\Pf$-generic object. Further we have $\fG\subs\Pf$ is $\Pf$-generic if and only if $X_\fG \in\bigcup D$ for all $D\subs\Pf$ which are dense in $\Pf$. Note that if $D$ is dense in $\Pf$, then $\bigcup D$ is $\fF$-open.
The next theorem shows in fact that if $\fF$ is a game-family, then $\Pf$ is proper.
\begin{thm}\label{thm:proper} Let $\fF\subs\parto$ be a game-family. If $X_0\in\parto$ is $\Pf$-generic over $V$ and $Y_0\in (X_0)^{\omega}\cap V[X_0]$, then $Y_0$ is also $\Pf$-generic over $V$. \end{thm}
\proof Take an arbitrary dense set $D\subs\Pf$, i.e. for all ${\opf{s}{X}}$ there exists a ${\opf{t}{Y}}\subs {\opf{s}{X}}$ such that ${\opf{t}{Y}}\in D$. Let $D'$ be the set of all ${\opf{s}{Z}}$ such that ${\opf{t}{Z}}\subs \bigcup D$ for all $t\ceq s$ with $\mdom (t)=\mdom (s)$.
First we show that $D'$ is dense in $\Pf$. For this take an arbitrary ${\opf{s}{W}}$ and let $\{t_i: 0\leq i\leq m\}$ be an enumeration of all $t\in\NN$ such that $t\ceq s$ and $\mdom (t)=\mdom (s)$. Because $D$ is dense in $\Pf$ and $\bigcup D$ is $\fF$-open, we find for every $t_i$ a $W'\in\fF$ such that $t_i\ceq W'$ and ${\opf{t_i}{W'}}\subs\bigcup D$. Moreover, if we define $W_{-1}:=W$, we can choose for every $i\leq m$ a partition $W_i\in\fF$ such that $W_i\ceq W_{i-1}$, $s\seg W_i$ and ${\opf{t_i}{W_i}}\subs\bigcup D$. Now ${\opf{s}{W_m}}\in D'$ and because ${\opf{s}{W_m}}\subs{\opf{s}{W}}$, $D'$ is dense in $\Pf$.
Since $D'$ is dense and $X_0\in\parto$ is $\Pf$-generic, there exists a ${\opf{s}{Z}}\in D'$ such that $s\seg X_0\ceq Z$. Because $Y_0\in (X_0)^{\omega}$ we have $t\seg Y_0\ceq Z$ for some $t\ceq s$ and because ${\opf{t}{Z}}\subs\bigcup D$, we get $Y_0\in\bigcup D$. Hence, $Y_0\in\bigcup D$ for every dense $D\subs\Pf$, which completes the proof. \eop
\rmk A similar result is proved in \cite{happy} and \cite{Matet3}.
\section{On the dual-Mathias forcing and game-filters} \label{sec:dMathias}
Now we show that it is consistent with ZFC that game-filters exist. (Remember that a game-filter $\fF$ is a game-family which is also a filter and a game-family is a non-principal family which is closed under refinement and finite changes such that player II has no winning strategy in the game $\GF$.) Further we show that the dual-Mathias forcing $\dM$ is flexible and with this result we can prove that if $V$ is $\Sigma^1_4$-$\dM$-absolute, then $\oV$ is inaccessible in L.
In the sequel let $\fU$ be the forcing notion we defined in section~\ref{sec:basicfacts}.
\begin{thm}\label{thm:gamefilter} If $\fD$ is $\fU$-generic over $V$, then $\fD$ is a game-filter in $V[\fD ]$ with respect to the game $\cG (\fD )$. \end{thm}
\proof Because $\fD$ is $\fU$-generic over $V$, we know that $\fD \subs\parto$ is a non-principal family in $V[\fD ]$ which is closed under refinement and finite changes, and for $X,Y\in\fD$ we also have $X\kap Y\in\fD$. It remains to show that player II has no winning strategy in the game $\cG (\fD )$.
Let $\tsig$ be a $\fU$-name for a strategy for player II in the game $\cG (\nameDD )$, where $\nameDD$ is the canonical $\fU$-name for the $\fU$-generic object. Let us assume that player II will follow this strategy. We may assume that $$\one\force_{\fU}``\tsig\ \mbox{is a strategy for II in the game} \ \cG (\nameDD )".$$ If $$Z\force_{\fU}\tsig (\la\tX_0\ra,\la\tit_0,\tY_0\ra,\ldots,\la\tX_n\ra) =\la\tit_n,\tY_n\ra,$$ then for $n\geq 1$ we get
$$Z\force_{\fU} (|\tit_n|=|\tit_{n-1}|+1\wedge\tit_{n-1}^*\seg\tit_{n}^*\seg \tY_n\ceq\tX_n\wedge\tY_n\in\nameDD)$$ and for $n=0$ we have
$$Z\force_{\fU} (|\tit_0|=|\ts|\wedge\ts\seg\tit_{0}^*\seg \tY_0\ceq\tX_0\ceq\tX\wedge\tY_0\in\nameDD)$$ where $\la\ts,\tX\ra$ is the starting point of $\cG (\nameDD )$.
Now let $\la\ts,\tX\ra$ (the starting point of the game $\cG (\nameDD )$) be such that ${\open{\ts}{\tX}}$ is a $\fU$-name for a dual Ellentuck neighborhood and let $Z_0\in\parto\cap V$ be a $\fU$-condition in $V$ such that $Z_0\force_{\fU} \tX\in\nameDD$. Therefore, $Z_0\force_{\fU}``{\open{\ts}{\tX}}$ is a $\nameDD$-dual Ellentuck neighborhood". By Fact~\ref{fct:o-closed} we know that the forcing notion $\fU$ adds no new reals (and therefore no new partitions) to $V$. So, we find a $Z_0^{\prime}\ceq^* Z_0$ and a dual Ellentuck neighborhood ${\open{s}{X}}$ in $V$ such that $$Z_0^{\prime}\force_{\fU}\la\ts,\tX\ra =\la\chs,\chX\ra$$ where $\chs$ and $\chX$ are the canonical $\fU$-names for $s$ and $X$. Because $Z_0^{\prime}\force_{\fU}\chX\in\nameDD$, we must have $Z_0^{\prime}\leq X$, which is the same as $Z_0^{\prime}\ceq^* X$. Finally put $X_0\in\parto$ such that $X_0\ass Z_0^{\prime}$ and $X_0\in {\open{s}{X}}$. Player I plays now $\la\chX_0\ra$. Since player II follows the strategy $\tsig$, player II plays now $\tsig (\la\chX_0\ra )=:\la\tit_0,\tY_0\ra$. Again by Fact~\ref{fct:o-closed} there exists a $Z_1\ceq^* X_0$ and a dual Ellentuck neighborhood ${\open{t_0}{Y_0}}$ in $V$ such that $$Z_1\force_{\fU}\la\tit_0,\tY_0\ra =\la\cht_0,\chY_0\ra.$$ And again by $Z_1\force_{\fU}\chY_0\in\nameDD$ we find $X_1\ass Z_1$ such that $t_0^*\seg X_1\ceq Y_0$. Player I plays now $\la\chX_1\ra$.
In general, if $\tsig (\la\tX_0\ra,\la\tit_0,\tY_0\ra,\ldots,\la\tX_n\ra) =\la\tit_n,\tY_n\ra$, then player I can play $\chX_{n+1}$ such that $X_n\force_{\fU}\la\tit_n,\tY_n\ra = \la\cht_n,\chY_n\ra$ and $t_n^*\seg X_{n+1}\ceq Y_n$. For $n\geq m$ we also have $X_n\ceq X_m$. Let $Y\in\parto$ be such that $t_n\seg Y$ (for all $n$); then $$Y\force_{\fU} ``\mbox{the only $\tY$ such that $\tit_n\seg\tY$ (for all $n$) is in $\nameDD$}".$$ Hence, the strategy $\tsig$ is not a winning strategy for player II and because $\tsig$ was an arbitrary strategy, player II has no winning strategy at all. \eop
\rmk A similar result is in fact proved in \cite{happy} (cf. also \cite{Matet}).
As a corollary we get that the forcing notion $\Pad$ (where ${\fD}$ is $\fU$-generic over $V$) has pure decision in $V[\fD]$.
\begin{cor}\label{cor:pd} Let $\fD$ be $\fU$-generic over $V$. Then the forcing notion $\Pad$ has pure decision in $V[\fD ]$. \end{cor}
\proof This follows from Theorem~\ref{thm:pd} and Theorem~\ref{thm:gamefilter}. \eop
Corollary~\ref{cor:pd} follows also from the facts that the dual-Mathias forcing has pure decision (cf.\,\cite{dual}) and that it can be written as a two step iteration as in section~\ref{sec:basicfacts}.
\rmk If $D$ is $\bU$-generic over $V$, then $\Pd$ has pure decision in $V[D]$ (cf.\,\cite{happy}).
\subsubsection*{Some more properties of $\dM$}
Let $\Pp$ be a notion of forcing in the model $V$. We say that $V$ is $\Sigma^1_n$-$\Pp$-absolute if for every $\Sigma^1_n$-sentence $\Phi$ with parameters in $V$ the following is true: $$V\models\Phi\ \mbox{if and only if}\ V[G]\models\Phi,$$ where $G$ is any $\Pp$-generic object over $V$.
Now we will show that if $V$ is $\Sigma^1_4$-$\dM$-absolute, then $\oV$ is inaccessible in $\LL$. For this we will first translate the dual-Mathias forcing into a tree forcing notion.
If $s$ is a partial partition of some natural number $n\in\omega$, then we can consider $s$ as a subset of ${\cal P}(n)$ or, equivalently, as a finite set of finite sets of natural numbers. Let $t$ be a finite set of natural numbers; then $\sharp t$ is such that for all $k\in\omega:$ $\mbox{div}(\sharp t,2^k)$ is odd $\Leftrightarrow k\in t$. (Remember that $\mbox{div}(n,m):=\mbox{max}(\{k\in\omega:k\cdot m\leq n\})$.) Now $\sharp s$ is such that for all $k\in\omega:$
$\mbox{div}(\sharp s,2^k)$ is odd $\Leftrightarrow k=\sharp t$ for some $t\in s$. (In fact $\sharp s$ is defined for any finite set of finite sets of natural numbers.) If $s\in\NN$, then $|s|$ denotes the cardinality of $s$, which is the number of blocks of $s$.
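Unwinding the definition, $\sharp t$ is simply the natural number whose binary expansion has ones exactly at the positions belonging to $t$, and $\sharp s$ iterates this coding once more. A short Python sketch (for illustration only):
\begin{verbatim}
# Coding finite sets of naturals, and finite sets of such sets, by naturals.
def sharp_set(t):                        # t a finite set of naturals
    return sum(2 ** k for k in t)        # div(#t, 2^k) is odd iff k in t

def sharp_partial_partition(s):          # s a finite set of finite sets
    return sum(2 ** sharp_set(t) for t in s)

t = {0, 2}                               # #t = 1 + 4 = 5
s = [{0, 2}, {1}]                        # a partial partition of 3
print(sharp_set(t))                      # 5
print(sharp_partial_partition(s))        # 2**5 + 2**2 = 36
\end{verbatim}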
For $s\in\NN$ with $|s|=k$ let $\bar{s}$ be the finite sequence $\langle n_1,\ldots,n_k\rangle$
where $n_i:=\sharp s_i$ and $s_i\in\NN$ is such that $|s_i|=i$ and $s_i^*\seg s^*$.
Now let $p={\open{s}{X}}$ be an $\dM$-condition. Without loss of generality we may assume that $s^*\ceq X$. The tree ${\mathfrak{t}}_p\subseteq\omega^{<\omega}$ is defined as follows. $$\sigma\in {\mathfrak{t}}_p\ \Leftrightarrow\ \exists t\in\NN ((t^*\seg s^* \vee s\seg t)\wedge t^*\ceq X \wedge \sigma = \bar{t}).$$ \begin{fct} \label{fct:easy} Let $p,q$ be two $\dM$-conditions. Then ${\mathfrak{t}}_p$ is a subtree of ${\mathfrak{t}}_q$ if and only if $p\leq q$.\eor \end{fct}
Finally let $T_{\dM}:= \{ {\mathfrak{t}}_p :p\in \dM \}$; then $T_{\dM}$ is a set of trees. We stipulate that ${\mathfrak{t}}_p\leq {\mathfrak{t}}_q$ if ${\mathfrak{t}}_p$ is a subtree of ${\mathfrak{t}}_q$. Then (by Fact~\ref{fct:easy}) forcing with ${\fT}_{\dM}:=\langle T_{\dM},\leq\rangle$ is the same as forcing with $\dM$.
Now we will give the definition of a flexible forcing notion $\Pp$. But first we have to give some other definitions.
A set $T\subseteq {\omega}^{<\omega}$ is called a {\em{Laver-tree}\/} if $$T\ \mbox{is a tree and}\ \exists\tau\in T\forall\sigma\in T (\sigma\subseteq\tau \vee (\tau\subseteq\sigma\ \wedge\
|\{n:\sigma^{\frown}n\in T\}| =\omega)).$$ (We call $\tau$ the stem of $T$. For $\sigma\in T$ we let succ${}_T (\sigma ):=\{n:\sigma^{\frown}n\in T\}$ (the successors of $\sigma$ in $T$) and $T_\rho :=\{\sigma\in T: \sigma\subseteq\rho\ \vee\ \rho \subseteq\sigma\}$.)
For a Laver-tree $T$, we say $A\subseteq T$ is a {\em{front}\/}
if $\sigma\neq\tau$ in $A$ implies $\sigma\not\subseteq\tau$ and for all $f\in [T]$ there is an $n\in\omega$ such that $f|_n\in A$.
The meanings of $p\leq \lval\Phi\rval$ and $p\cap \lval\Phi\rval$ are $U_p\subseteq \lval\Phi\rval$ and $U_p\cap \lval\Phi\rval$, respectively.
\begin{enumerate} \item We say a forcing notion $\Pp$ is {\em{Laver-like}\/} if there is a $\Pp$-name $\tilde{r}$ for a dominating real such that\\ (i) the complete Boolean
algebra generated by the family $\{\lval\tilde{r}(i)=n\rval : i,n
\in\omega\}$ equals r.o.~($\Pp$), and\\ (ii) for each condition $p\in \Pp$ there exists a Laver-tree $T\subseteq {\omega}^{<\omega}$ so that $$\forall\sigma\in T\Bigl(p(T_{\sigma}):= \prod\limits_{n\in\omega} \sum\limits_{\tau\in T_{\sigma}}
\{p\cap\lval\tilde{r}|_{\lg (\tau )}=\tau\rval :\lg (\tau )=n\}\in {\mbox{r.o.~}}(\Pp)\setminus \{\mbox{\bf 0}\}\Bigr).$$ We express this by saying $p(T)\neq\emptyset$, where $p(T):=p(T_{stem(T)})$.
\item If $\tilde{r}$ is a $\Pp$-name that witnesses that $\Pp$ is Laver-like, we say that $\Pp$ has {\em{strong fusion}\/} if for countably many open dense sets $D_n\subseteq \Pp$ and for $p\in \Pp$, there is a Laver-tree $T$ such that $p(T)\neq\emptyset$ and for each $n$:
$$\{\sigma\in T: p(T)\cap\lval\tilde{r} |_{\lg (\sigma )}= \sigma\rval\in D_n\}$$ contains a front.
\item A Laver-like $\Pp$ is {\em{closed under finite changes}\/} if given a $p\in \Pp$ and Laver trees $T$ and $T'$ so that for all
$\sigma\in T':\ |\mbox{succ}_T(\sigma )\setminus\mbox{succ}_{T'}(\sigma )| <\omega$, if $p(T)\neq\emptyset$, then $p(T')\neq\emptyset$, too. \end{enumerate}
Now we call $\Pp$ a {\em{flexible}\/} forcing notion {\em{iff}\/} $\Pp$ is Laver-like, has strong fusion and is closed under finite changes.
With this definition we can show (as a further symmetry between the forcing notions $\M$ and $\dM$), that the dual-Mathias forcing $\dM$ is flexible.
\begin{lm} \label{lm:dual-flexible} The dual-Mathias forcing $\dM$ is flexible. \end{lm}
\proof By $\dM\approx {\fT}_{\dM}$ it is enough to prove that the forcing notion ${\fT}_{\dM}$ is flexible. Let ${\tilde{r}}$ be the canonical ${\fT}_{\dM}$-name for the ${\fT}_{\dM}$-generic object. By the definition of $\sharp$ and the construction of ${\fT}_{\dM}$, ${\tilde{r}}$ is a name for a dominating real. The rest of the proof is similar to the proof that Mathias forcing is
flexible, which is given in \cite{Halb}. \eop
If all $\Sigma^1_n$-sets in $V$ with parameters in $V\cap W$ have the Ramsey property ${\cal R}$ or the dual-Ramsey property $\dR$, we will write $V\models\Sigma^1_n({\cal R})_W$ or $V\models\Sigma^1_n(\dR)_W$, respectively. If $V=W$, then we omit the index $W$. The notations for $\Delta^1_n$-sets and $\Pi^1_n$-sets are similar. Further ${\cal B}$ stands for the Baire property and ${\cal L}$ stands for Lebesgue measurable.
Now we can prove the following \begin{thm} If $V$ is $\Sigma^1_4$-$\dM$-absolute, then $\oV$ is inaccessible in $\LL$. \end{thm}
\proof To prove the corresponding result for Mathias forcing (cf.\,\cite{Halb})
we used only that $\M$ is flexible and that, if $V$ is $\Sigma^1_4$-$\M$-absolute, then $V\models\Sigma^1_2({\cal R})$, which is the same as $\Sigma^1_3$-$\M$-absoluteness (cf.\,\cite{Halb}).
Therefore it is enough to prove that $\Sigma^1_3$-$\dM$-absoluteness implies $\Sigma^1_3$-$\M$-absoluteness. It follows immediately from Fact~\ref{fct:add-Mathias} that $V\subseteq V^{\M}\subseteq V^{\dM}$. Now because $\Sigma^1_3$-formulas are upwards absolute, this completes the proof. \eop
\section{Iteration of dual-Mathias forcing}
In this section we will build two models in which every $\Sigma^1_2$-set is dual-Ramsey. In the first model $2^{\aleph_0}=\aleph_1$ and in the second model $2^{\aleph_0}=\aleph_2$. With the result that dual-Mathias forcing has the Laver property we can show that $\Sigma^1_2(\dR)$ implies neither $\Sigma^1_2({\cal L})$ nor $\Sigma^1_2({\cal B})$.
In the sequel we will use the same notations as in section~\ref{sec:dMathias}.
First we give a result similar to Theorem~1.15 of \cite{delta1-2}. \begin{lm} \label{lm:one-step} Let $\fD$ be $\fU$-generic over $V$. If $\fw$ is $\Pad$-generic over $V[\fD]$, then $V[\fD][\fw]\models\Sigma^1_2(\dR)_V$. \end{lm}
\proof Let $\Ww$ be the canonical name for the $\Pad$-generic object $\fw$ over $V[{\fD}]$ and let $\varphi (X)$ be a $\Sigma^1_2$-formula with parameters in $V$. By Theorem~\ref{thm:gamefilter} and Corollary~\ref{cor:pd}, the forcing notion $\Pad$ has pure decision. So, there exists a $\Pad$-condition $p\in V[\fD]$ with empty stem (therefore $p\in\fD$), such that {$V[\fD]$}$\models ``p{\force}_{\Pad}\varphi (\Ww)"$ or {$V[\fD]$}$\models ``p{\force}_{\Pad}\neg\varphi (\Ww)"$. Assume the former case holds. Because $\fw\ceq^* q$ for all $q\in\fD$, there is an $f\in [\omega]^{<\omega}$ such that ${\fw}\puni\{f\}\ceq p$. By Theorem~\ref{thm:gamefilter} and Theorem~\ref{thm:proper} we know that if $X$ is $\Pad$-generic over $V[\fD]$ and $X'\in\partX\cap V[{\fD}][{\fw}]$, then $X'$ is also $\Pad$-generic over $V[\fD]$. Hence every $\fw'\ceq {\fw}\puni\{f\}\ceq p$ is $\Pad$-generic over $V[\fD]$ and therefore $V[\fD][\fw']\models\varphi ({\fw'})$. Because $\Sigma^1_2$-formulas are absolute, we get $V[\fD][\fw]\models\varphi (\fw')$. So, $V[{\fD}][{\fw}]\models\exists X\forall Y\in\partX \varphi (Y)$. The case when {$V[\fD]$}$\models ``p{\force}_{\Pad}\neg\varphi ({\Ww})"$ is similar. Hence, we finally have $V[{\fD}][{\fw}]\models\Sigma^1_2(\dR)_V$. \eop
\rmk The proof of the analogous result can be found in \cite{delta1-2}.
Because G\"odel's constructible universe $L$ has a $\Delta^1_2$-well-ordering of the reals, $L$ is neither a model for $\Delta^1_2 (\dR)$ nor a model for $\Delta^1_2 ({\cal R})$. But we can build a model in which $2^{\aleph_0}=\aleph_1$ and all $\Sigma^1_2$-sets are dual-Ramsey.
\begin{thm} \label{thm:iteration} If we make an $\omega_1$-iteration of dual-Mathias forcing with countable support starting from $\LL$, we get a model in which every $\Sigma^1_2$-set of reals is dual-Ramsey and $2^{\aleph_0}=\aleph_1$. \end{thm}
\proof This follows immediately from Fact~\ref{fct:fequi}, Lemma~\ref{lm:one-step} and the fact that the dual-Mathias forcing is proper.\eop
\rmk The proof of a similar result can be found in \cite{sigma1-2}.
We can build also a model in which $2^{\aleph_0}=\aleph_2$ and all $\Sigma^1_2$-sets are dual-Ramsey.
\begin{thm} \label{thm:iteration2} If we make an $\omega_2$-iteration of dual-Mathias forcing with countable support starting from $\LL$, we get a model in which every $\Sigma^1_2$-set of reals is dual-Ramsey and $2^{\aleph_0}=\aleph_2$. \end{thm}
\proof In \cite{Lori} it is shown that an $\omega_2$-iteration of dual-Mathias forcing with countable support starting from $\LL$ yields a model in which $2^{\aleph_0}=\aleph_2$ and the union of fewer than $\aleph_2$ completely dual-Ramsey sets is completely dual-Ramsey. Now because each $\Sigma^1_2$-set can be written as the union of $\aleph_1$ analytic sets (and analytic sets are completely dual-Ramsey) all the $\Sigma^1_2$-sets are dual-Ramsey. \eop
\rmk A similar result is true because an $\omega_2$-iteration of Mathias forcing with countable support starting from $\LL$ yields a model in which $\mathfrak{h}=\aleph_2$ (cf.\,\cite{ShSp}), and $\mathfrak{h}$ can be considered as the additivity of the ideal of completely Ramsey null sets (cf.\,\cite{Plewik}).
For the next result we have to give first the definition of the Laver property.
A {\em cone\/} $\bar{A}$ is a sequence $\la A_k:k\in\omega\ra$ of finite subsets of $\omega$ with $|A_k|<2^k$. We say that $\bar{A}$ {\em covers\/} a function $f\in\omega^{\omega}$ if for all $k>0$: $f(k)\in A_k$. For a function $H\in\omega^{\omega}$, we write $\Pi H$ for the set $\{f\in\omega^{\omega}:\forall k>0(f(k)<H(k))\}$. Now a forcing notion $\Pp$ is said to have the {\em Laver property\/} iff for every $H\in\omega^{\omega}$ in $V$, $$\one\force_{\Pp} ``\forall f\in\Pi H\,\exists\bar{A}\in V (\mbox{$\bar{A}$ is a cone covering $f$})".$$
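The quantifier structure of this definition can be made concrete by a small Python sketch, truncated to a finite horizon (the truncation and the names below are for illustration only):
\begin{verbatim}
# A (finite piece of a) cone is a list A = [A_0, A_1, ...] of finite sets
# with |A_k| < 2^k; it covers f if f(k) lies in A_k for all k > 0.
def is_cone(A):
    return all(len(A[k]) < 2 ** k for k in range(len(A)))

def covers(A, f):
    return all(f(k) in A[k] for k in range(1, len(A)))

H = lambda k: 2 * k + 1                  # a bound H as in the definition
f = lambda k: 0                          # some f with f(k) < H(k) for k > 0
A = [set()] + [set(range(k)) for k in range(1, 6)]
print(all(f(k) < H(k) for k in range(1, 6)))   # True: f lies in Pi H
print(is_cone(A), covers(A, f))                # True True
\end{verbatim}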
Like Mathias forcing, the dual-Mathias forcing has the Laver property as well and therefore adds no Cohen reals (cf.\,\cite{tools} and \cite{book}).
\begin{lm} \label{lm:Laver} The forcing notion $\dM$ has the Laver property. \end{lm}
\proof Given $f,H\in {\omega}^{\omega}$ such that for all $k>0$: $f(k)<H(k)$, let $\langle s,X\rangle$ be an $\dM$-condition. Because $\dM$ has pure decision and $f(1)<H(1)$, we find a $Y_0\in {\open{s}{X}}$ such that $\langle s,Y_0\rangle$ decides $f(1)$. Set $s_0:=s$. Suppose we have constructed
$s_n\in\NN$ and $Y_n\in\parto$ such that $s\seg s_n$, $|s_n|=|s|+n$ and ${\open{s_n}{Y_n}}$ is a dual Ellentuck neighborhood. Choose $Y_{n+1}\in {\open{s_{n}}{Y_n}}$ such that for all $h\in\NN$ with $s\seg h\ceq s_{n}$ and $\mdom (h)=\mdom (s_{n})$: $\langle h,Y_{n+1}\rangle$ decides $f(k)$ for all $k<2^{n+1}$. Further let $s_{n+1}\in\NN$ be such that $s_n\seg s_{n+1}$,
$|s_{n+1}|=|s_n|+1=|s|+n+1$ and $s_{n+1}\seg Y_{n+1}$. Finally let $Y$ be such that for all $n\in\omega$: $s_n\seg Y$. Evidently, the $\dM$-condition $\langle s,Y\rangle$ is stronger than the given $\dM$-condition $\langle s,X\rangle$ (or equal). Now if $k,n\in\omega$ such that $2^n\leq k<2^{n+1}$, then let $\{h_j:j\leq m\}$ be an enumeration of all $s\seg h\ceq s_n$ with $\mdom (h)=\mdom (s_n)$. It is clear that $m<2^{2^n}$. Further let $A_k:=\{l\in\omega : \exists j\leq m(\langle h_j,Y\rangle {\force}_{\dM} f(k)=l)\}$; then
$|A_k|\leq m<2^{2^n}$ and because $2^n\leq k$, we have $|A_k|<2^k$. If we define $A_0:=\{l\in\omega :\langle s,Y\rangle {\force}_{\dM} f(0)=l\}$ then the $\dM$-condition $\langle s,Y\rangle$ forces that ${\bar{A}}:=\langle A_k: k\in\omega\rangle$ is a cone for $f$. \eop
Using these results we can prove the following \begin{thm} \label{thm:Ramsey-Baire} $\Sigma^1_2(\dR)$ implies neither $\Sigma^1_2({\cal L})$ nor $\Sigma^1_2({\cal B})$. \end{thm}
\proof Because a forcing notion with the Laver property adds no Cohen reals and because the Laver property is preserved under countable support iterations of proper forcings (with the Laver property), in the model constructed in Theorem~\ref{thm:iteration} no real is Cohen over $\LL$. Therefore $\Delta^1_2({\cal B})$ fails in this model, and because $\Sigma^1_2({\cal L})$ implies $\Sigma^1_2({\cal B})$ (by \cite{sigma1-2}), $\Sigma^1_2({\cal L})$ must fail in this model as well. \eop
\rmk For the analogous result cf.\,\cite{delta1-2}.
\section{Appendix}
Although the Ramsey property and the dual-Ramsey property are very similar, we can show that the two Ramsey properties are different. \begin{thm} \label{thm:different} With the axiom of choice we can construct a set which is Ramsey but not dual-Ramsey. \end{thm}
\proof First we construct a set $C\subseteq\reals$ which is not dual-Ramsey. The relation $``\ass "$ is an equivalence-relation on $\parto$. (Remember that $X\ass Y$ if and only if there are $f,g\in
[\omega]^{<\omega}$ such that $X\puni \{f\}\ceq Y$ and $Y\puni \{g\}\ceq X$.) Now choose from each equivalence class $X^{*}$ an element $A_X$ and let $h_X:=|f|+|g|$ be of least cardinality, where $f$ and $g$ are such that $X\puni\{f\}\ceq A_X$ and $A_X\puni \{g\}\ceq X$. Further define: \[ F(X):= \left\{ \begin{array}{ll}
1 & \mbox{if } h_X \mbox{ is odd,}\\
0 & \mbox{otherwise.}
\end{array}\right. \] Then the set $\{X\in\parto: F(X)=1\}$ is evidently not dual-Ramsey and therefore also the set $C:=\{x\in\reals :\exists X\in\parto (F(X)=1\wedge x=\pac (X))\}$ is not dual-Ramsey.
Now define $r:=\{\flat\{k,k+1\}:k\in\omega\}$, then $\cp (r)=\{\omega\}\not\in\parto$ and hence $[r]^{\omega}\cap C=\emptyset$. So, the set $C$ is Ramsey. \eop We can show that the dual-Ramsey property is stronger than the Ramsey property.
\begin{lm} \label{lm:stronger} If $V\models\Sigma^1_n(\dR)$ then $V\models\Sigma^1_n(\cal R)$. \end{lm}
\proof Let $\varphi (x)$ be a $\Sigma^1_n$-formula with parameters in $V$. Let $\psi (y)$ be defined as follows. $$\psi (y)\ \mbox{{\em iff}}\ \ \exists x(x=\mMin (\cp (y))\wedge\varphi (x)).$$ We see that $\psi (y)$ is also a $\Sigma^1_n$-formula (with the same parameters as $\varphi$). Now if there is an $X\in\parto$ such that for all $Y\in\partX$, $\psi (\pac (Y))$ holds, then for all $y\in [x]^{\omega}$ where $x=\mMin (X)$, $\varphi (y)$ holds. The case where for all $Y\in\partX$, $\neg\psi (\pac (Y))$ holds, is similar. \eop
With these results and all the symmetries we found between the two Ramsey properties and between the Mathias forcing and the dual-Mathias forcing, it is na\-tural to ask whether there is a pro\-perty which is equivalent to ``every $\Sigma^1_2$-set of reals has the dual-Ramsey property". Another interesting open problem, which surely would give us a lot of information about the relationship between the two Ramsey properties, would be the following question: $$\mbox{Is}\ \Sigma^1_2 ({\cal R})\ \mbox{equivalent to}\ \Sigma^1_2 (\dR)\mbox{?}$$
{\bf Acknowledgements:}\ I would like to thank Pierre Matet for all the fruitful discussions concerning the results in this paper, and the referee for valuable comments on an earlier version of this paper.
Eidgen\"ossische Technische Hochschule\\ Departement Mathematik\\ ETH-Zentrum\\ 8092 Z\"urich\\ Switzerland
E-mail: halbeis@math.ethz.ch
\end{document}
\begin{document}
\title{Analysis of Quantum Network Coding for Realistic Repeater Networks}
\author{Takahiko Satoh}
\email{[email protected]} \affiliation{ Graduate School of Media and Governance, 5322 Endo, Keio University, Fujisawa, Kanagawa 252-0882, Japan} \author{Kaori Ishizaki}
\email{[email protected]} \altaffiliation[Current address: ]{HAL Laboratories} \affiliation{ Graduate School of Media and Governance, 5322 Endo, Keio University, Fujisawa, Kanagawa 252-0882, Japan} \author{Shota Nagayama}
\email{[email protected]} \affiliation{ Graduate School of Media and Governance, 5322 Endo, Keio University, Fujisawa, Kanagawa 252-0882, Japan} \author{Rodney Van Meter}
\email{[email protected]} \affiliation{ Graduate School of Media and Governance, 5322 Endo, Keio University, Fujisawa, Kanagawa 252-0882, Japan} \affiliation{ Faculty of Environment and Information Studies, Keio University, 5322 Endo, Fujisawa, Kanagawa 252-0882, Japan} \date{\today}
\begin{abstract} Quantum repeater networks have attracted attention for the implementation of long-distance and large-scale sharing of quantum states. Recently, researchers extended classical network coding, which is a technique for throughput enhancement, into quantum information. The utility of quantum network coding (QNC) has been shown under ideal conditions, but it has not been studied previously under conditions of noise and shortage of quantum resources. We analyzed QNC on a butterfly network, which can create end-to-end Bell pairs at twice the rate of the standard quantum network repeater approach. The joint fidelity of creating two Bell pairs has a small penalty for QNC relative to entanglement swapping. It will thus be useful when we care more about throughput than fidelity. We found that the output fidelity drops below 0.5 when the initial Bell pairs have fidelity $F < 0.90$, even with perfect local gates. Local gate errors have a larger impact on quantum network coding than on entanglement swapping. \end{abstract}
\maketitle
\section{Introduction} Researchers are striving to produce quantum communication technology for long-range transmission of quantum information and sharing of distributed quantum states~\cite{Lloyd_2004,kimble2008quantum,van-meter14}. Quantum information requires a network specialized for quantum communication. Quantum information may enable new functions not achievable using classical information. For example, quantum key distribution creates a shared random sequence of bits between two parties~\cite{bb84,qcb}. Because quantum information cannot in general be measured without disturbing the state and cannot be cloned~\cite{no_cloning}, statistical tests can prove the absence of an eavesdropper, guaranteeing the secrecy of the bit values. QKD technology is already realized at a commercial level for urban-scale, complex-topology networks~\cite{SECOQC,Sasaki_11}.
Besides QKD, other distributed security functions~\cite{byzantine}, general purpose distributed quantum computing and blind quantum computing~\cite{broadbent2010measurement} have been proposed as uses of long distance quantum communication. In addition, the realization of inter-continental and inter-major city QKD is also desired.
Thus, there is a growing need for large-scale quantum networks, but the current quantum network protocol suffers from a distance limit set by the probability of correctly receiving a photon through an exponentially lossy channel and other factors. In order to solve this problem, quantum repeaters have been proposed~\cite{repeater} and many of the components have been experimentally demonstrated~\cite{duan:RevModPhys.82.1209,hucul2014modular}. A quantum repeater has multiple important roles: to create and share physical entanglement pairs (Bell pairs) between nearest neighbors over short distances, to perform purification of Bell pairs, and to create one long Bell pair by connecting two entangled pairs using entanglement swapping~\cite{repeater,Briegel_2007,VanMeter_2009_2,VanMeter_2009,Munro_2010,VanMeter_2010,entanglement_swapping}. Long range, complex quantum networks can be realized by arranging a number of quantum repeaters and links. However, the cost of quantum communication per unit of quantum information (e.g. qubit) is very high compared with classical communication.
Quantum network coding (QNC) may contribute to solving this problem. Network coding~\cite{network_coding} is known as a bottleneck elimination method in classical networks. For example, Fig.~\ref{classic} shows simultaneous transmission over the directed classical butterfly network using network coding. \begin{figure}
\caption{The classical network coding scheme on the butterfly network. The problem is to send a bit of information $a$ from sender node $S1$ to target node $T2$ and $b$ from $S2$ to $T1$ simultaneously. It is clearly impossible to solve this problem using simple routing. XOR operations at relay node $R1$ and target nodes $T1$, $T2$ solve this problem.}
\label{classic}
\end{figure} Two bits can be sent in one use of each link even though each individual transmission would result in conflicts for access to individual links. The butterfly network is the simplest case showing a throughput bottleneck which can be alleviated using quantum network coding. Verifying the behavior on this graph can show that quantum network coding can give an advantage over simple routing schemes in some circumstances. It is expected that network coding allows a similar bottleneck resolution in a quantum network. In recent years, a number of researchers have studied quantum network coding~\cite{quantum_coding,quantum_coding2,quantum_coding3,quantum_coding4,kobayashi_coding,kobayashi_coding2,kobayashi_coding3}. However, all of these studies presuppose the use of pure states and perfect local gates. The effects of errors and resource shortages are unknown. In this paper, we aim to determine the usefulness of quantum network coding using mixed states.
First, we assume Pauli errors on the Bell pairs that are our initial resources. We investigate the error propagation in the QNC procedure and
calculate the change of fidelities step by step in our coding scheme. These calculations enable us to compare the communication efficiency between QNC
and entanglement swapping as used in many quantum repeater designs. Furthermore, we calculate error thresholds for practical QNC on the butterfly graph and find that initial resource fidelities of $F\geq 0.9$ are required to achieve a final fidelity above $0.5$.
Next, we assume Pauli errors on every CNOT
gate, single qubit rotation, measurement, and quantum memory storage time step and calculate the final fidelities using Monte Carlo simulation to assess the complete protocol.
The rest of this paper is organized as follows. In Section~\ref{protocol_qnc}, we present the quantum network coding protocol for quantum repeaters and the operations it requires. In Section~\ref{error_analysis}, we analyze the quantum network coding and entanglement swapping schemes in the presence of X and Z errors on the initial Bell pairs. In Section~\ref{incorporating errors}, we incorporate local gate errors and evaluate the complete protocol by Monte Carlo simulation. Section~\ref{conclusion} concludes the paper.
\section{Quantum network coding} \label{protocol_qnc} Let us review the concept of quantum network coding for quantum repeaters by examining the butterfly graph in Fig.~\ref{quantumcoding_pre}~\cite{repeater_coding}. \begin{figure}
\caption{Initial resources and final states for QNC. Each
repeater node contains two or three qubits entangled with neighbors
into Bell pairs as shown. Our goal is to establish Bell pairs between nodes
$S1$ \& $T1$, and $S2$ \& $T2$.}
\label{quantumcoding_pre}
\end{figure} Quantum network coding, like classical network coding, shifts the location of required communication away from the single bottleneck link, to other links in the network, reducing demand on the bottleneck link. We assume that the performance of all links is the same, and that the number of times that the most-used link must be used to complete our operation determines our ultimate performance. We begin with $\lvert \Psi^{+} \rangle$ Bell pairs across the seven links as shown. In this section, we use the ket vector notation to describe the pure state with fidelity $F=1$. In the following sections, we will use the ket vector to describe mixed states, as discussed at the beginning of Section~\ref{error_analysis}.
\subsection{Encoding operations} To describe the QNC protocol, we first introduce the following three encoding operators. They consist of CNOT gate operations, $\hat{Z}$ basis measurement operators, and single-qubit rotations conditioned on the measurement results. The CNOTs are executed between a Control qubit and a Bell pair, where we designate one member of the Bell pair the Resource qubit, and the other the Target qubit. The Control qubit $C (C_1, C_2)$ and the Resource qubit $R (R_1, R_2)$ exist on the same repeater.
An $\hat{X}$ or $\hat{Z}$ rotation is performed on the Target qubit $T (T_1, T_2)$ if and only if the measurement result is positive. Our operations are \begin{eqnarray} {\bf Con}^{C}_{R\rightarrow T}&=& \hat{X}^{S_{1}}_{T} \hat{P}^{\pm ,S_{1}}_{\hat{Z},R} {\bf CNOT}^{(C,R)}\\ \label{eq_con} {\bf Add}^{C_{1},C_{2}}_{R\rightarrow T}&=& \hat{X}^{S_{1}}_{T} \hat{P}^{\pm ,S_{1}}_{\hat{Z},R} {\bf CNOT}^{(C_{2},R)}{\bf CNOT}^{(C_{1},R)} \label{eq_add} \end{eqnarray} \begin{eqnarray} {\bf Fanout}^{C}_{R_{1}\rightarrow T_{1},R_{2}\rightarrow T_{2}}&=& \hat{X'}^{S_{2}}_{T_{2}} \hat{X}^{S_{1}}_{T_{1}} \hat{P'}^{\pm ,S_{2}}_{\hat{Z},R_{2}} \hat{P}^{\pm ,S_{1}}_{\hat{Z},R_{1}}\nonumber\\ && {\bf CNOT}^{(C,R_{2})} {\bf CNOT}^{(C,R_{1})} \label{eq_fanout} \end{eqnarray} where $\hat{P}^{\pm}$ is the projective measurement operator \begin{eqnarray} \hat{P}^{\pm}_{\hat{X}}= \frac{1}{2}\left(\bold{1}\pm\hat{X}\right), \hat{P}^{\pm}_{\hat{Z}}= \frac{1}{2}\left(\bold{1}\pm\hat{Z}\right), \end{eqnarray} $\hat{X}$ and $\hat{Z}$ are the normal Pauli operators, and $S_{1}$ and $S_{2}$ are measurement outcomes of the operator $\hat{P}^{\pm}_{\hat{X}}$ and $\hat{P}^{\pm}_{\hat{Z}}$.
These operations correspond to the bit transfer, add and fanout operations in a classical network coding protocol~\cite{network_coding}. Fig.~\ref{encoding_circuit} shows quantum circuits for
${\bf Con}^{A}_{B\rightarrow C}$, ${\bf Add}^{D,E}_{F\rightarrow G}$, and
${\bf Fanout}^{H}_{I\rightarrow J,K\rightarrow L}$. \begin{figure}
\caption{(a) Connection operation between the qubit $A$ and the Bell pair $BC$. (b) Add operation between the qubits $D$, $E$ and the Bell pair $FG$. (c) Fanout operation between the qubit $H$ and the Bell pairs $IJ$ and $KL$.}
\label{connection_circuit}
\label{add_circuit}
\label{fanout_circuit}
\label{encoding_circuit}
\end{figure}
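For concreteness, the following minimal statevector sketch (an illustration written for these operator definitions, not a full protocol simulator) applies ${\bf Con}^{A}_{C\rightarrow D}$ to two ideal Bell pairs; the qubit ordering and the choice of which measurement outcome triggers the $\hat{X}$ correction are assumptions of the sketch. Both measurement branches reproduce the state $(\lvert 000\rangle + \lvert 111\rangle)_{ABD}/\sqrt{2}$, the Connection output used throughout Section~\ref{error_analysis}.

\begin{verbatim}
# Minimal statevector sketch of Con^A_{C->D}: CNOT(control A, target C),
# a Z-basis measurement of the resource qubit C, and a conditional X on the
# target qubit D.  Qubit order is (A, B, C, D); |Psi+> = (|00>+|11>)/sqrt(2)
# as in the paper.  The correction branch chosen below is an assumption.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(n, control, target):
    """CNOT on an n-qubit register (qubit 0 = most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for col in range(dim):
        bits = [(col >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        row = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[row, col] = 1.0
    return U

def project_z(state, n, qubit, outcome):
    """Project `qubit` onto |outcome> and renormalize."""
    P = np.diag([1.0 - outcome, float(outcome)])
    post = kron_all([P if q == qubit else I2 for q in range(n)]) @ state
    prob = float(post @ post)
    return prob, post / np.sqrt(prob)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
state = np.kron(bell, bell)                   # |Psi+>_AB (x) |Psi+>_CD

state = cnot(4, control=0, target=2) @ state  # CNOT(A -> C)
for outcome in (0, 1):
    prob, post = project_z(state, 4, qubit=2, outcome=outcome)
    if outcome == 1:                          # assumed correction convention
        post = kron_all([I2, I2, I2, X]) @ post
    expected = np.zeros(16)                   # GHZ on (A, B, D), C fixed
    expected[int(f"00{outcome}0", 2)] = 1 / np.sqrt(2)
    expected[int(f"11{outcome}1", 2)] = 1 / np.sqrt(2)
    print(outcome, round(prob, 2), np.allclose(post, expected))
\end{verbatim}

Entanglement swapping uses the same primitive, followed by removal of the leftover qubit, as in the operator ${\bf ES}$ defined below.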
\subsection{Removal operations} We also introduce the following two removal operators. These operators are unique to quantum network coding protocols, because we have to remove unnecessary entangled qubits before the end of the procedure. To remove these qubits without causing changes on the remaining system, we use $\hat{X}$ basis measurements and feedforward operations based on the measurement results. Our operations are \begin{eqnarray} {\bf Rem}_{R\rightarrow T}&=& \hat{Z}^{S_{1}}_{T} \hat{P}^{\pm,S_{1}}_{\hat{X},R}\\ {\bf RemAdd}_{R\rightarrow T_{1},T_{2}}&=& \hat{Z}^{S_{2}}_{T_{2}} \hat{Z}^{S_{2}}_{T_{1}} \hat{P}^{\pm,S_{2}}_{\hat{X},R}. \end{eqnarray} {\bf Rem} removes the qubits used as target qubits in the connection and fanout operations, and {\bf RemAdd} removes the qubits used as target qubits in the add operations in QNC protocol.
\subsection{QNC} Here, we introduce the protocol operator {\bf QNC} to describe the complete procedure for QNC. All operations in this procedure are LOCC as shown above. \begin{eqnarray} {\bf QNC}\lvert \psi^{\bf{QNC}}_{init} \rangle &=&{\bf Rem}_{H\rightarrow E}{\bf
Rem}_{D\rightarrow A}{\bf RemAdd}_{J\rightarrow {D,H}}\nonumber\\ &&{\bf Rem}_{N\rightarrow J} {\bf Rem}_{L\rightarrow J} {\bf CNOT}^{(L,B)}\nonumber\\ && {\bf CNOT}^{(N,F)}{\bf Fanout}^{J}_{K\rightarrow L, M\rightarrow N} \nonumber\\ && {\bf Add}^{{D,H}}_{I\rightarrow J} {\bf Con}^{E}_{G\rightarrow H} {\bf Con}^{A}_{C\rightarrow D} \lvert \psi_{init} \rangle\\ &=& \lvert\psi^{\bf{QNC}}_{final}\rangle = \lvert \Psi^{+} \rangle_{AF} \otimes \lvert \Psi^{+} \rangle_{BE} . \end{eqnarray} Here, \begin{eqnarray} \lvert \psi^{\bf{QNC}}_{init} \rangle &=& \lvert \Psi^{+} \rangle_{AB} \otimes \lvert \Psi^{+} \rangle_{CD} \otimes \lvert \Psi^{+} \rangle_{EF} \otimes \lvert \Psi^{+} \rangle_{GH} \nonumber \\ && \otimes \lvert \Psi^{+} \rangle_{IJ} \otimes \lvert \Psi^{+} \rangle_{KL} \otimes \lvert \Psi^{+} \rangle_{MN} . \end{eqnarray} When we perform {\bf QNC} on the seven Bell pairs, we can create two crossed Bell pairs as a result. In this state, we can perform quantum teleportation between repeaters in opposite corners simultaneously, as shown in Fig.~\ref{quantumcoding_pre}. The total circuit of {\bf QNC} is shown in Fig.~\ref{circuit}.
\subsection{QNC versus entanglement swapping} To compare this QNC protocol with the existing repeater protocols, we also introduce the protocol operator ${\bf 2ES}$. In this procedure, we perform two entanglement swapping operations using three Bell pairs. \begin{eqnarray} {\bf 2ES}\lvert \psi_{init}^{\bf{2ES}} \rangle &=& {\bf ES}^{(C,J)}_{(M,N)}{\bf ES}^{(C,D)}_{(I,J)}\lvert \psi_{init}^{\bf{2ES}} \rangle
\\ &=& \lvert \psi_{final} ^{\bf{2ES}}\rangle = \lvert \Psi^{+} \rangle_{CN}. \end{eqnarray} Here, \begin{eqnarray} {\bf ES}^{(C,D)}_{(I,J)} &=& {\bf Rem}_{D\rightarrow C} {\bf Con}^{D}_{I\rightarrow J} \\ \lvert \psi^{\bf{2ES}}_{init} \rangle &=& \lvert \Psi^{+} \rangle_{CD} \otimes \lvert \Psi^{+} \rangle_{IJ} \otimes \lvert \Psi^{+} \rangle_{MN} . \end{eqnarray} Entanglement swapping between two Bell pairs can generate one long Bell pair~\cite{entanglement_swapping}. {\bf Rem} removes the leftover qubit for this operation.
Next, we discuss the bottleneck problem on the butterfly network. In this case, we cannot perform ${\bf 2ES}$ two times and share two target Bell pairs between $AF$ and $BE$ without remaking Bell pairs as shown in Fig.~\ref{quantumcoding_swap}. Bell pair $IJ$ is the bottleneck limiting the performance. \begin{figure}
\caption{Conceptual diagram of
communication using entanglement swapping. Simultaneous execution
of Phase A and Phase B is not possible. Re-sharing of a Bell-pair
is needed between $R1$ and $R2$. The AB and EF Bell pairs are
unused in this protocol.}
\label{quantumcoding_swap}
\end{figure} One approach to solving this bottleneck problem is link multiplexing~\cite{repeater-muxing}. In this scheme, an approach such as time division multiplexing is proposed to solve the bottleneck problem on a dumbbell network with few shared Bell pairs. To compare 2ES and network coding, we adopt this scheme. Note that network coding generates the two goal Bell pairs while consuming seven Bell pairs in one cycle, whereas entanglement swapping consumes only six Bell pairs but requires two cycles because of the resource conflict. When we assume the time necessary to share Bell pairs between nearest neighbor repeaters and the memory lifetime of Bell pairs are similar, it is hard to share extra Bell pairs between bottleneck repeaters.
\section{Errors on the initial Bell pairs} \label{error_analysis} To elucidate the advantage of QNC, if any, we compare the communication fidelity of QNC and 2ES. Before tackling the more general problem including gate errors, we investigate the propagation of X and Z errors present in the initial seven Bell pairs in Fig.~\ref{quantumcoding_pre}. We define $\epsilon_{qubit,\hat{X}(\hat{Z})}$ as $\hat{X} (\hat{Z})$ rotation error with probability $p$. Due to the symmetry of Bell pairs, we do not need to distinguish between an error on qubit $A$ and one on qubit $B$. For example, we describe those two types of errors on Bell pair $\lvert\Psi^{+}\rangle_{AB} $ as follows: \begin{eqnarray} \epsilon^{}_{A,\hat{X}} \lvert\Psi^{+}\rangle_{AB} &=& F\lvert\Psi^{+}\rangle_{AB} + (1-F)\lvert\Phi^{+}\rangle_{AB}\\ \epsilon^{}_{A,\hat{Z}} \lvert\Psi^{+}\rangle_{AB} &=& F\lvert\Psi^{+}\rangle_{AB} + (1-F)\lvert\Psi^{-}\rangle_{AB}. \end{eqnarray} Here, fidelity $F = 1-p = \langle \psi \rvert \rho \lvert \psi \rangle$ where $\lvert \psi \rangle$ is the desired pure state. In this paper, for simplicity of representation, we retain the ket notation even for mixed states. The above should be understood to represent \begin{eqnarray} \rho &=& \sqrt{\epsilon^{}_{A,\hat{X}}} \lvert\Psi^{+}\rangle \langle \Psi^{+} \rvert_{AB} \sqrt{\epsilon^{}_{A,\hat{X}}} \\ &=& F\lvert\Psi^{+}\rangle \langle\Psi^{+}\rvert_{AB} + (1-F)\lvert\Phi^{+}\rangle \langle\Phi^{+}\rvert_{AB}. \end{eqnarray}
In this section, we assume that we can perform single qubit rotation, CNOT gate, and projective measurement perfectly with success probability $1$. Gate errors will be incorporated in Sec.~\ref{incorporating errors}.
\subsection{Z errors} Here, we discuss Z errors on our initially shared Bell pairs. Z errors propagate via a CNOT gate from target qubit to control qubit. \subsubsection{Connection} First, we investigate the Z error propagation in the Connection operation. When we perform Connection ${\bf Con}^{B}_{C\rightarrow D}$ between Bell pairs $AB$ and $CD$ with probabilistic Z errors on qubits A and C, the Z error on measured qubit C causes a similar error on qubit B. Then, the initial state $\lvert \psi^{'\bf{Con}}_{init}\rangle$
can be described as follows: \begin{eqnarray} \lvert \psi^{'\bf{Con}}_{init} \rangle &=& \epsilon^{(A)}_{A,\hat{Z}} \lvert \Psi^{+} \rangle_{AB} \otimes \epsilon^{(C)}_{C,\hat{Z}}\lvert \Psi^{+} \rangle_{CD}. \end{eqnarray} After the Connection operation, the final state $\epsilon^{(C)}_{B,\hat{Z}} \epsilon^{(A)}_{A,\hat{Z}} \lvert \psi^{{\bf Con}}_{final} \rangle$ becomes \begin{eqnarray} \vert000\rangle_{ABD} + \sum^{0,1}_{S_{AB}}\sum^{0,1}_{S_{CD}} p_{S_{AB}} p_{S_{CD}}{(-1)}^{S}\vert111\rangle_{ABD}. \end{eqnarray} Here, $\epsilon^{(P)}_{Q,\hat{Z}}$ denotes a Z error on qubit $Q$ resulting from the original Z error on qubit $P$. $S$ is calculated as follows: \begin{eqnarray} S = S_{AB} + S_{CD}. \end{eqnarray} $S_{AB}$ and $S_{CD}$ are $1$ if the corresponding Bell pair includes a Z error, otherwise they are $0$. When we assume the initial fidelity of each Bell pair $F_{AB} = F_{CD}=F$, the result is a phase flip error ($S=1$) with probability $2F(1-F)$, otherwise $S=0$. We show pre-operation and post-operation fidelities in Fig.~\ref{connection-pf}. \begin{figure}
\caption{Fidelity against Bell pair Z errors only during
the Connection operation. The horizontal axis corresponds to the initial fidelity of each Bell pair. The vertical axis corresponds to the final fidelity of the system. Local gates are assumed to be perfect.}
\label{connection-pf}
\end{figure}
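A quick numerical check of this claim, under the assumption that the plotted output fidelity equals the probability of no net phase flip, $F^{2}+(1-F)^{2}$, is:

\begin{verbatim}
# With independent Z errors of probability 1-F on pairs AB and CD, the
# Connection output has a phase flip iff exactly one pair is flipped,
# so (assumed) F_out = F**2 + (1-F)**2.
for F in (0.80, 0.85, 0.90, 0.95, 1.00):
    print(F, round(F**2 + (1 - F)**2, 4))
\end{verbatim}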
\subsubsection{Add} Second, we investigate the error propagation in the Add operation. For example, we perform ${\bf Add}^{F,H}_{I\rightarrow J}$ with three Bell pairs $EF$, $GH$, and $IJ$. The initial state $\lvert \psi^{'\bf{Add}}_{init}\rangle$
can be described as follows: \begin{eqnarray} \!\!\!\!\!\!\!\lvert \psi^{'\bf{Add}}_{init} \rangle \!=\! \epsilon^{(I)}_{I,\hat{Z}} \epsilon^{(G)}_{G,\hat{Z}} \epsilon^{(E)}_{E,\hat{Z}} \lvert \Psi^{+} \rangle_{EF} \!\otimes\! \lvert \Psi^{+} \rangle_{GH} \otimes \lvert \Psi^{+} \rangle_{IJ}. \end{eqnarray} After the Add operation, the final state $\epsilon^{(I)}_{H,\hat{Z}} \epsilon^{(I)}_{F,\hat{Z}} \epsilon^{(G)}_{G,\hat{Z}} \epsilon^{(E)}_{E,\hat{Z}} \lvert \psi^{{\bf Add}}_{final} \rangle$ becomes \begin{widetext} \begin{eqnarray} \sum^{0,1}_{S_{EF}}\!\sum^{0,1}_{S_{GH}}\!\sum^{0,1}_{S_{IJ}}\! p_{S_{EF}} p_{S_{GH}} p_{S_{IJ}} (\vert0000\rangle \!+\! {\!(- \! 1)}^{S_{0}}\!\vert1111\rangle)_{EFGH}\vert0\rangle_{J} \!+\! {\!(- \!1)}^{S_{1}}\!(\vert0011\rangle \!+\! {\!(- \!1)}^{S_{0}}\!\vert1100\rangle)_{EFGH}\vert1\rangle_{J}. \end{eqnarray} \end{widetext} Here, $\lvert \psi^{{\bf Add}}_{final} \rangle$ corresponds to the state in Eq.~\ref{eq_add}. Each $S_{i}$ is calculated as follows: \begin{eqnarray} S_{0} &=& S_{EF}+S_{GH},\\ S_{1} &=& S_{GH}+S_{IJ}. \end{eqnarray} When all $S_{i}\neq 1$ where $i \in \{0,1\}$, which occurs with probability $F^3+(1-F)^3$, then the final state is error free.
\subsubsection{Fanout} Third, we investigate the error propagation in the Fanout operation. When we perform ${\bf Fanout}^{J}_{K\rightarrow L, M\rightarrow N}$ with three Bell pairs $IJ$, $KL$ and $MN$, the initial state $\lvert \psi'^{\bf{Fanout}}_{init} \rangle$
can be described as follows: \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\lvert \psi'^{\bf{Fanout}}_{init} \rangle \!\!=\!\! \epsilon^{(I)}_{I,\hat{Z}} \epsilon^{(K)}_{K,\hat{Z}} \epsilon^{(M)}_{M,\hat{Z}} \!\lvert \Psi^{+} \rangle_{IJ} \!\otimes\! \lvert \Psi^{+} \rangle_{KL} \!\otimes\! \lvert \Psi^{+} \rangle_{MN}\!. \end{eqnarray} After the Fanout operation, the final state $\epsilon^{(I)}_{I,\hat{Z}} \epsilon^{(K)}_{J,\hat{Z}} \epsilon^{(M)}_{J,\hat{Z}} \lvert \psi^{{\bf Fanout}}_{final} \rangle$ becomes \begin{widetext} \begin{eqnarray}
\sum^{0,1}_{S_{IJ}}\sum^{0,1}_{S_{KL}}\sum^{0,1}_{S_{MN}} p_{S_{IJ}}p_{S_{KL}}p_{S_{MN}}(\vert00\rangle_{LN} + {(-1)}^{S_{0}}\vert11\rangle_{LN} ) \lvert0\rangle_{J} +{(-1)}^{S_{1}}(\vert01\rangle_{LN} + {(-1)}^{S_{0}}\vert10\rangle_{LN})\vert1\rangle_{J}. \end{eqnarray} \end{widetext} Here, $\lvert \psi^{{\bf Fanout}}_{final} \rangle$ corresponds to the state in Eq.~\ref{eq_fanout}. Each $S_{i}$ is calculated as follows \begin{eqnarray} S_{0} &=& S_{EF}+S_{GH},\\ S_{1} &=& S_{GH}+S_{IJ}. \end{eqnarray} When all $S_{i}=0$ where $i \in \{0,1\}$, which occurs with probability $F^3$, then the final state is error free.
\subsubsection{Removal and Removal-Add} In the Removal and Removal-Add operations, we perform an X basis measurement on the qubit to be removed. When a Z error exists on the measured qubit, the measurement result flips. Removal and Removal-Add thus move a Z error from the measured qubit to the feedforward qubit(s). We show this error propagation below: \begin{eqnarray} {\bf Rem}_{R\rightarrow T}\epsilon^{(R)}_{R,\hat{Z}}\lvert \psi_{init}^{\bf{Rem}}\rangle &=& \epsilon^{(R)}_{T,\hat{Z}}\lvert \psi^{\bf{Rem}}_{final}\rangle ,\\ \!\!\!\!\!\!\!\!\!\!{\bf R\!e\!m\!A\!d\!d}_{R\rightarrow T_{1},T_{2}}\epsilon^{(R)}_{R,\hat{Z}}\lvert \psi^{\bf{R\!e\!m\!A\!d\!d}}_{init}\rangle &=& \epsilon^{(R)}_{T_{1},\hat{Z}}\epsilon^{(R)}_{T_{2},\hat{Z}}\lvert \psi^{\bf{R\!e\!m\!A\!d\!d}}_{final}\rangle . \end{eqnarray}
To conclude the above discussion, we show the location of errors which cause Z errors on final Bell pairs in Fig.~\ref{zerror-comb}. \begin{figure}
\caption{Z error propagation. The left figure
shows the five Bell pairs that affect the final Bell pair AF. The
right figure shows the five Bell pairs that affect the final Bell pair BE.}
\label{zerror-comb}
\end{figure}
\subsubsection{Comparison} To compare QNC and 2ES, we first calculate the final fidelity after the complete QNC sequence. When we assume each initial Bell pair has a Z error on one qubit with probability $1-F$, the initial state $\lvert \psi_{init}^{'\bf{QNC}}\rangle$ and final state $\lvert \psi_{final}^{'\bf{QNC}}\rangle$ become \begin{eqnarray} \!\!\!\!\!\lvert \psi_{init}^{'\bf{QNC}}\rangle &=& \epsilon^{(M)}_{M,\hat{Z}} \epsilon^{(K)}_{K,\hat{Z}} \epsilon^{(I)}_{I,\hat{Z}} \epsilon^{(G)}_{G,\hat{Z}} \epsilon^{(E)}_{E,\hat{Z}} \epsilon^{(C)}_{C,\hat{Z}} \epsilon^{(A)}_{A,\hat{Z}}\lvert \psi_{init}^{\bf{QNC}}\rangle ,\\ \!\!\!\!\!\lvert \psi_{final}^{'\bf{QNC}}\rangle &=& \sum_{m}^{0,1}\sum_{n}^{0,1} P_{m,n} \hat{Z}^{m}_{A}
\hat{Z}^{n}_{B} \lvert \psi_{final}^{\bf{QNC}}\rangle . \end{eqnarray} where $m$ and $n$ are the absence $(0)$ or presence $(1)$ of Z errors on the final $AF$ and $BE$ Bell pairs, respectively (or equivalently on the $A$ and $B$ qubits after use of the Bell pairs for e.g. teleportation). The probability of each case $P_{m,n}$ is \begin{eqnarray} P_{0,0} &=& F^7 + 5F^5(1-F)^2 + 12F^4(1-F)^3 \nonumber \\ & & + 7F^3(1-F)^4 + 4F^2(1-F)^5 + 3F(1-F)^6, \\ P_{0,1} &=& P_{1,0}= 2F^6(1-F) + 6F^5(1-F)^2 + 8F^4(1-F)^3 \nonumber \\ & & + 8F^3(1-F)^4 + 6F^2(1-F)^5 + 2F(1-F)^6, \\ P_{1,1} &=& 3F^6(1-F) + 4F^5(1-F)^2 + 7F^4(1-F)^3 \nonumber\\ & & +12F^3(1-F)^4 + 5F^2(1-F)^5 + (1-F)^7.\label{eq_probmn} \label{result_prob} \end{eqnarray} Each of the 128 combinations in Fig.~\ref{chartZ} occurs with probability $F^{(7-w)}(1-F)^{w}$ where $w$ is the Hamming weight of the bitstring. \begin{figure}
\caption{Chart of Z errors. Each seven-bit string indicates the presence $(1)$ or absence $(0)$ of a Z error on the Bell pairs $AB$, $CD$, $\ldots$, $MN$, respectively. The style of the string indicates the errors present in the final state. Black roman type means no error, gray (italic) means a Z error on $AF$ ($BE$), and gray italic means errors on both Bell pairs.}
\label{chartZ}
\end{figure}
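The coefficients of Eq.~(\ref{result_prob}) can be reproduced by brute-force enumeration of these $2^7$ patterns. The sketch below is a check written for this discussion (not the simulation code of Section~\ref{incorporating errors}); the two parity sets encode our reading of Fig.~\ref{zerror-comb}, namely that the final $AF$ pair acquires a Z error when an odd number of the pairs $AB$, $CD$, $IJ$, $KL$, $MN$ carry Z errors, and $BE$ when an odd number of $EF$, $GH$, $IJ$, $KL$, $MN$ do.

\begin{verbatim}
# Brute-force reproduction of P_{m,n}: enumerate Z-error patterns on the
# seven initial Bell pairs; each weight-w pattern has probability
# F**(7-w) * (1-F)**w.  The parity sets below are an assumption (our
# reading of Fig. "zerror-comb").
from itertools import product
import numpy as np

pairs = ["AB", "CD", "EF", "GH", "IJ", "KL", "MN"]
affects_AF = {"AB", "CD", "IJ", "KL", "MN"}
affects_BE = {"EF", "GH", "IJ", "KL", "MN"}

def joint_probs(F):
    P = np.zeros((2, 2))                 # P[m, n]: Z error on AF (m), BE (n)
    for flips in product((0, 1), repeat=7):
        w = sum(flips)
        prob = F ** (7 - w) * (1 - F) ** w
        m = sum(f for f, p in zip(flips, pairs) if p in affects_AF) % 2
        n = sum(f for f, p in zip(flips, pairs) if p in affects_BE) % 2
        P[m, n] += prob
    return P

print(np.round(joint_probs(0.90), 3))    # approx [[0.516 0.148] [0.148 0.189]]
\end{verbatim}

At $F=0.9$ the recovered joint probabilities agree with the entries of Table~\ref{aleatory_Z}.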
Next, we calculate the final fidelity in the 2ES scheme. When we assume each initial resource Bell pair has a Z error on one qubit with probability $1-F$, the initial state $\lvert \psi_{init}^{'\bf{2ES}}\rangle$ and final state $\lvert \psi_{final}^{'\bf{2ES}}\rangle$ become as follows: \begin{eqnarray} \lvert \psi_{init}^{'\bf{2ES}}\rangle &=& \epsilon^{(M)}_{M,\hat{Z}} \epsilon^{(I)}_{I,\hat{Z}} \epsilon^{(C)}_{C,\hat{Z}} \lvert \psi_{init}^{\bf{2ES}}\rangle ,\\ \lvert \psi_{final}^{'\bf{2ES}}\rangle &=& \sum_{m}^{0,1} P_{m} \hat{Z}^{m}_{C} \lvert \Psi^{+}\rangle_{CN} . \end{eqnarray} Here, we show the probability of each case $P_{m}$ below: \begin{eqnarray} P_{0} &=& F^3 + 3F(1-F)^2 ,\\ P_{1} &=& 3F^2(1-F) + (1-F)^3. \end{eqnarray}
We show the relationship between the input fidelity and the output fidelity of our network coding protocol and 2-entanglement swapping in Fig.~\ref{SwappingvsQNC_Z}. \begin{figure}
\caption{Comparison of Swapping and QNC with Z
errors only. Both
show a substantial penalty compared to the fidelity of a single
Bell pair (the $x=y$ line).}
\label{SwappingvsQNC_Z}
\end{figure} Here, the final state with $F_{output}<0.5$ has no practical use for quantum communication. When $F_{input}\leq 0.87$, the 2ES protocol falls below $F_{out}=0.5$. When $F_{input}\leq 0.9$, the QNC protocol also falls below $F_{out}=0.5$.
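These thresholds can be recomputed from the closed forms above under the assumption, as in the abstract, that the quantity compared is the joint fidelity of producing both end-to-end Bell pairs: $P_{0,0}(F)$ for QNC and $P_{0}(F)^{2}$ for two independent rounds of 2ES.

\begin{verbatim}
# Joint no-Z-error probability for both output Bell pairs (an assumption
# about what the figure plots): 2ES needs two rounds, so its joint value
# is P0(F)**2; QNC's is P00(F) from Eq. (result_prob).
import numpy as np

def p0_2es(F):
    return F**3 + 3 * F * (1 - F)**2

def p00_qnc(F):
    q = 1 - F
    return (F**7 + 5*F**5*q**2 + 12*F**4*q**3 + 7*F**3*q**4
            + 4*F**2*q**5 + 3*F*q**6)

for F in np.arange(0.85, 1.0001, 0.01):
    print(f"F_in={F:.2f}  2ES={p0_2es(F)**2:.3f}  QNC={p00_qnc(F):.3f}")
# The 2ES value crosses 0.5 near F_in = 0.87-0.88 and the QNC value near
# F_in = 0.90, consistent with the thresholds quoted above.
\end{verbatim}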
\subsection{Classical correlation} Next, we discuss the classical correlation between two final Bell states. When we assume the input fidelity $F=0.90$, the probability of the possible resulting states of both the AF and BE Bell pairs is shown in Table~\ref{aleatory_Z} by the formula~(\ref{result_prob}). \begin{table}[htb] \begin{center}
\begin{tabular}{c|c|c|c}
&$\lvert \Psi_{BE}^{+} \rangle $ & $\lvert \Psi_{BE}^{-} \rangle$ & \\ \hline $\lvert \Psi_{AF}^{+} \rangle $ & a & b & e \\ & 0.516 & 0.148 & 0.664 \\ \hline $\lvert \Psi_{AF}^{-} \rangle$ & c & d & f \\ & 0.148 & 0.189 & 0.336 \\ \hline & g & h & \\ & 0.664 & 0.336 & \end{tabular} \end{center} \caption{The correlation between $\lvert \Psi_{AF} \rangle$ and
$\lvert \Psi_{BE} \rangle$ for input fidelity $F=0.9$, Z errors only,
and perfect local gates.} \label{aleatory_Z} \end{table} The correlation coefficient is \begin{equation} \phi = \frac{ad-bc}{\sqrt{efgh}} \fallingdotseq 0.339. \label{coefficient} \end{equation} The two output Bell pairs are unentangled with each other under this error model, but their error probabilities are classically correlated. This correlation is weak, despite the overlap of three Bell pairs between the left and right halves of Fig.~\ref{zerror-comb}.
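A short arithmetic check of Eq.~(\ref{coefficient}) from the closed-form probabilities at $F=0.9$:

\begin{verbatim}
# Phi coefficient from the table entries a, b, c, d derived from P_{m,n}
# at F = 0.9 (Eq. (result_prob)).
import math

F = 0.9
q = 1 - F
a = (F**7 + 5*F**5*q**2 + 12*F**4*q**3
     + 7*F**3*q**4 + 4*F**2*q**5 + 3*F*q**6)
b = c = (2*F**6*q + 6*F**5*q**2 + 8*F**4*q**3
         + 8*F**3*q**4 + 6*F**2*q**5 + 2*F*q**6)
d = 1 - a - b - c
e, f, g, h = a + b, c + d, a + c, b + d
print(round((a*d - b*c) / math.sqrt(e*f*g*h), 3))   # 0.339
\end{verbatim}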
\subsection{X errors} Next, we discuss X errors on the initially shared Bell pairs. X errors propagate via CNOT gate from control qubit to target qubit. \subsubsection{Connection} First, we investigate the error propagation in Connection, when we perform Connection ${\bf Con}^{B}_{C\rightarrow D}$ between Bell pairs $AB$ and $CD$ with probabilistic X errors on qubits B and D. The initial state $\lvert \psi^{''\bf{Con}}_{init}\rangle$
can be described as follows: \begin{eqnarray} \lvert \psi^{''\bf{Con}}_{init} \rangle &=& \epsilon^{(D)}_{D,\hat{X}} \epsilon^{(B)}_{B,\hat{X}} \lvert \Psi^{+} \rangle_{AB} \otimes \lvert \Psi^{+} \rangle_{CD}. \end{eqnarray} After the Connection operation, the final state $\epsilon^{(D)}_{D,\hat{X}} \epsilon^{(B)}_{D,\hat{X}} \epsilon^{(B)}_{B,\hat{X}} \lvert \psi^{{\bf Con}}_{final} \rangle$ becomes \begin{eqnarray}
\!\!\!\sum^{0,1}_{S_{AB}}\!\sum^{0,1}_{S_{CD}}p_{S_{AB}}p_{S_{CD}}{\hat{X}_{A}}^{S_{AB}}{\hat{X}_{D}}^{S_{CD}}{(\vert000\rangle
+\vert111\rangle)}_{ABD}. \end{eqnarray} Here, $\epsilon^{(P)}_{Q,\hat{X}}$ denotes an X error on qubit $Q$ from the original X error on qubit $P$.
When we assume the initial fidelity of each Bell pair is $F_{AB} = F_{CD}=F$, each $S_{i}=1$ with probability $2F(1-F)$, otherwise it is $0$. The fidelities of the input and output states in the Connection operation are plotted in Fig.~\ref{connection-pfxa}. \begin{figure}
\caption{Fidelity against X errors during Connection.}
\label{connection-pfxa}
\end{figure}
\subsubsection{Add} Second, we investigate the X error propagation in the Add operation. When we perform ${\bf Add}^{F,H}_{I\rightarrow J}$ on three Bell pairs containing X errors, $\lvert \Psi^{+}\rangle_{EF}$, $\lvert \Psi^{+} \rangle_{GH}$, and $\lvert\Psi^{+}\rangle_{IJ}$, the initial state $\lvert \psi''^{{\bf
Add}}_{init} \rangle$ and the final state $\lvert \psi''^{{\bf
Add}}_{final} \rangle$ can be described as follows: \begin{eqnarray} \!\!\!\!\!\lvert \psi^{''\bf{Add}}_{init} \rangle \!=\! \epsilon^{(I)}_{I,\hat{X}} \epsilon^{(G)}_{G,\hat{X}} \epsilon^{(E)}_{E,\hat{X}} \lvert \Psi^{+} \rangle_{EF} \!\!\otimes\!\! \lvert \Psi^{+} \rangle_{GH} \!\!\otimes\!\! \lvert \Psi^{+} \rangle_{IJ}. \end{eqnarray} After the Add operation, the final system $\epsilon^{(I)}_{J,\hat{X}} \epsilon^{(G)}_{G,\hat{X}} \epsilon^{(E)}_{E,\hat{X}} \lvert \psi^{{\bf Add}}_{final} \rangle$
becomes \begin{widetext} \begin{equation}
\sum^{0,1}_{S_{EF}}\sum^{0,1}_{S_{GH}}\sum^{0,1}_{S_{IJ}} p_{S_{EF}}p_{S_{GH}}p_{S_{IJ}} \hat{X}^{S_{IJ}}_{J}\hat{X}^{S_{GH}}_{G} \hat{X}^{S_{EF}}_{E} ((\vert0000\rangle +\vert1111\rangle)_{EFGH}\vert0\rangle_{J}
+(\vert0011\rangle + \vert1100\rangle)_{EFGH}\vert1\rangle_{J}). \end{equation} \end{widetext} When all Bell pairs' fidelities are equal to $F$, the final state's fidelity becomes $F^3$. \subsubsection{Fanout} Third, we investigate the X error propagation in the Fanout operation. When we perform ${\bf Fanout}^{L}_{M\rightarrow N, O\rightarrow P}$ with three Bell pairs $KL$, $MN$, and $OP$, the initial state $\lvert \psi''^{\bf{Fanout}}_{init} \rangle$
can be described as follows: \begin{eqnarray} \!\!\!\!\lvert \psi''^{\bf{F\!a\!n\!o\!u\!t}}_{init} \rangle \!\!=\!\! \epsilon^{(K)}_{K,\hat{X}}\! \epsilon^{(M)}_{M,\hat{X}}\! \epsilon^{(O)}_{O,\hat{X}}\! \lvert \!\Psi^{+}\!\rangle_{KL}\!\!\otimes\!\! \lvert\! \Psi^{+}\! \rangle_{MN} \!\!\otimes\!\! \lvert\! \Psi^{+}\! \rangle_{OP}. \end{eqnarray} After Fanout, the final system $\epsilon^{(K)}_{K,\hat{X}} \epsilon^{(M)}_{N,\hat{X}} \epsilon^{(O)}_{P,\hat{X}} \lvert \psi^{{\bf Fanout}}_{final} \rangle$ becomes \begin{widetext} \begin{eqnarray} \sum^{0,1}_{S_{KL}}\sum^{0,1}_{S_{MN}}\sum^{0,1}_{S_{OP}} p_{S_{KL}}p_{S_{MN}}p_{S_{OP}}\hat{X}^{S_{KL}}_{K}\hat{X}^{S_{MN}}_{N} \hat{X}^{S_{OP}}_{P} (\vert0000\rangle + \vert1111\rangle)_{KLNP}. \end{eqnarray} \end{widetext} Here, $\lvert \psi^{{\bf Fanout}}_{final} \rangle$ corresponds to the state in Eq.~(\ref{eq_fanout}). Each $S_{i}=1$ with probability $p$, otherwise it is $0$. When all initial Bell pairs' fidelity are equally $F$, final state's fidelity becomes $F^3-(1-F)^3$.
\subsubsection{Removal, Removal-Add} In Removal and Removal-Add operations, X errors on measured qubits do not change the measurement results. We describe these facts as follows: \begin{eqnarray} {\bf Rem}_{Q\rightarrow R}\epsilon^{(Q)}_{Q,\hat{X}}\lvert \psi_{init}\rangle &=& \lvert \psi_{final}\rangle ,\\ {\bf RemAdd}_{S\rightarrow T,U}\epsilon^{(S)}_{S,\hat{X}}\lvert \psi_{init}\rangle &=& \lvert \psi_{final}\rangle . \end{eqnarray}
To conclude the above discussion we show the X error propagation in Fig.~\ref{combination}. \begin{figure}
\caption{X error propagation. The left figure shows the five Bell pairs that affect the final Bell pair AF. The right figure shows the five Bell pairs that affect the final Bell pair BE.}
\label{combination}
\end{figure}
\subsubsection{Comparison} X error relations between the input states and the final state in the 2ES and QNC protocols can be described as follows. When we assume each initial resource Bell pair has an X error on one qubit with probability $1-F$, so that the probabilities $P_{m,n}$ are as in Eq.~\ref{eq_probmn}, the initial state $\lvert \psi_{init}^{''\bf{QNC}}\rangle$ and final state $\lvert \psi_{final}^{''\bf{QNC}}\rangle$ become as follows: \begin{eqnarray} \!\!\!\!\lvert \psi_{init}^{''\bf{QNC}}\rangle &\!=\!& \epsilon^{(M)}_{M,\hat{X}} \!\epsilon^{(K)}_{K,\hat{X}} \!\epsilon^{(I)}_{I,\hat{X}}\! \epsilon^{(G)}_{G,\hat{X}} \!\epsilon^{(E)}_{E,\hat{X}} \!\epsilon^{(C)}_{C,\hat{X}} \!\epsilon^{(A)}_{A,\hat{X}}\!\lvert \psi_{init}^{\bf{QNC}}\rangle \\ \!\!\!\!\lvert \psi_{final}^{''\bf{QNC}}\rangle &=& \sum_{m}^{0,1}\sum_{n}^{0,1} P_{m,n} \hat{X}^{m}_{A} \hat{X}^{n}_{B} \lvert \psi_{final}^{\bf{QNC}}\rangle \\ \!\!\!\!&=&\lvert \psi_{final}^{'\bf{QNC}}\rangle \end{eqnarray} Thus, the final fidelities of the QNC protocol with X or Z errors are the same. When we assume each Bell pair in our initial resource set has an X error on one qubit with probability $p$, the initial state $\lvert \psi_{init}^{''\bf{2ES}}\rangle$ and final state $\lvert \psi_{final}^{''\bf{2ES}}\rangle$ become as follows: \begin{eqnarray} \lvert \psi_{init}^{''\bf{2ES}}\rangle &=& \epsilon^{(M)}_{M,\hat{X}} \epsilon^{(I)}_{I,\hat{X}} \epsilon^{(C)}_{C,\hat{X}} \lvert \psi_{init}^{\bf{2ES}}\rangle ,\\ \lvert \psi_{final}^{''\bf{2ES}}\rangle &=& \sum_{m}^{0,1} P_{m} \hat{X}^{m}_{C} \lvert \Psi^{+}\rangle_{CN} \nonumber\\ &=& \lvert \psi_{final}^{'\bf{2ES}}\rangle . \end{eqnarray}
Although the fidelity is the same, the locations of errors which cause X or Z errors on the final Bell pairs are different. As a result, the relationship between input fidelity and output fidelity of our network coding protocol and of 2-entanglement swapping is the same as in the Z error case shown in Fig.~\ref{SwappingvsQNC_Z}.
\subsection{General Pauli error model} Finally, we model more general errors on our initial resource Bell pairs as Pauli errors occurring during the CNOT gates ${\bf
CNOT}^{(control,target)}_{\varepsilon}$ in the initial part of the total circuit in Fig.~\ref{circuit}. We define the following errors $\varepsilon$ on control and target qubits in every CNOT gate: \begin{eqnarray} {\bf CNOT}^{(A,B)}_{\varepsilon} \lvert \psi^{\bf{CNOT}}_{input} \rangle &=& \varepsilon_{A}\otimes\varepsilon_{B} \lvert \psi^{\bf{CNOT}}_{output} \rangle\\
\varepsilon_{A}\otimes\varepsilon_{B} &=&\sum_{i=0}^{3} p_{i} \sigma_{A}^{i} \otimes \sum_{j=0}^{3}p_{j}\sigma_{B}^{j}. \end{eqnarray} Here, $p_0 p_0 = 1-p = F$ and $p_i p_j = \frac{p}{15}$ except for both $i=0$ and $j=0$. $\sigma^{0},..,\sigma^{3}$ denote $\hat{I}$, $\hat{X}$, $\hat{Y}$, and $\hat{Z}$ respectively.
We investigate the relation between the fidelity of the input states and that of our output state. Following the above setting, our initially shared seven Bell pairs $\lvert\psi_{init}^{\varepsilon,\bf{QNC}}\rangle$ include Pauli errors. Each Bell pair, which is a combination of sixteen possible error conditions, becomes a mixture of four states. For example, we describe the state of Bell pair $AB$ below: \begin{eqnarray} {\bf CNOT}^{(A,B)}_{\varepsilon} H_{A} \lvert 00 \rangle_{AB} &=& \varepsilon_{A} \otimes \varepsilon_{B} \lvert \Psi^{+}_{AB} \rangle\\ &=& \left(1-\frac{4p}{5}\right) \lvert \Psi^{+}_{AB} \rangle +\frac{4p}{15} \lvert \Psi^{-}_{AB} \rangle \nonumber\\ &&+\frac{4p}{15} \lvert \Phi^{+}_{AB} \rangle +\frac{4p}{15} \lvert \Phi^{-}_{AB} \rangle. \end{eqnarray} This expression arises because of the symmetric effect of some errors on Bell pairs, as in the following equations: \begin{eqnarray} \lvert \Psi^{+}_{AB} \rangle &=& (\hat{I}_{A}\otimes\hat{I}_{B})\lvert \Psi^{+}_{AB} \rangle = (\hat{X}_{A}\otimes\hat{X}_{B})\lvert \Psi^{+}_{AB} \rangle \nonumber\\ &=&(\hat{Y}_{A}\otimes\hat{Y}_{B})\lvert \Psi^{+}_{AB} \rangle =(\hat{Z}_{A}\otimes\hat{Z}_{B})\lvert \Psi^{+}_{AB} \rangle ,\\ \lvert \Phi^{+}_{AB} \rangle &=& (\hat{I}_{A}\otimes\hat{X}_{B})\lvert \Psi^{+}_{AB} \rangle = (\hat{X}_{A}\otimes\hat{I}_{B})\lvert \Psi^{+}_{AB} \rangle \nonumber\\ &=&(\hat{Y}_{A}\otimes\hat{Z}_{B})\lvert \Psi^{+}_{AB} \rangle =(\hat{Z}_{A}\otimes\hat{Y}_{B})\lvert \Psi^{+}_{AB} \rangle ,\\ \lvert \Psi^{-}_{AB} \rangle
&=&(\hat{I}_{A}\otimes\hat{Z}_{B})\lvert \Psi^{+}_{AB} \rangle = (\hat{Z}_{A}\otimes\hat{I}_{B})\lvert \Psi^{+}_{AB} \rangle \nonumber\\ &=&(\hat{X}_{A}\otimes\hat{Y}_{B})\lvert \Psi^{+}_{AB} \rangle =(\hat{Y}_{A}\otimes\hat{X}_{B})\lvert \Psi^{+}_{AB} \rangle ,\\ \lvert \Phi^{-}_{AB} \rangle &=& (\hat{X}_{A}\otimes\hat{Z}_{B})\lvert \Psi^{+}_{AB} \rangle = (\hat{Z}_{A}\otimes\hat{X}_{B})\lvert \Psi^{+}_{AB} \rangle \nonumber\\ &=&(\hat{I}_{A}\otimes\hat{Y}_{B})\lvert \Psi^{+}_{AB} \rangle =(\hat{Y}_{A}\otimes\hat{I}_{B})\lvert \Psi^{+}_{AB} \rangle . \end{eqnarray}
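These weights, $(1-\frac{4p}{5}, \frac{4p}{15}, \frac{4p}{15}, \frac{4p}{15})$, can be verified by applying the two-qubit Pauli channel directly to the Bell density matrix; the sketch below uses the Bell-state labels of this paper, $\lvert\Psi^{\pm}\rangle=(\lvert00\rangle\pm\lvert11\rangle)/\sqrt{2}$ and $\lvert\Phi^{\pm}\rangle=(\lvert01\rangle\pm\lvert10\rangle)/\sqrt{2}$.

\begin{verbatim}
# Check of the Bell-mixture weights produced by the two-qubit Pauli channel
# with p_0 p_0 = 1 - p and p_i p_j = p/15 otherwise.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
paulis = [I, X, Y, Z]

p = 0.10
prob = [[1 - p if (i, j) == (0, 0) else p / 15 for j in range(4)]
        for i in range(4)]

psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |Psi+>
rho = sum(prob[i][j] * np.kron(paulis[i], paulis[j])
          @ np.outer(psi, psi.conj())
          @ np.kron(paulis[i], paulis[j]).conj().T
          for i in range(4) for j in range(4))

bells = {"Psi+": [1, 0, 0, 1], "Psi-": [1, 0, 0, -1],
         "Phi+": [0, 1, 1, 0], "Phi-": [0, 1, -1, 0]}
for name, v in bells.items():
    b = np.array(v, dtype=complex) / np.sqrt(2)
    print(name, round(float((b.conj() @ rho @ b).real), 4))
# Expect Psi+: 1 - 4p/5 = 0.92 and 4p/15 = 0.0267 for each of the others.
\end{verbatim}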
Based on the above, we assume all Pauli errors exist on the target qubits of CNOT gates in our initial resources. We show the relationship between errors on initial states and final state in Table~\ref{epsilon_i}. For example, in the upper left corner of the table, the $\hat{I}_A \hat{X}_B$ entry indicates that an $X$ error on the initial Bell pair $AB$ results in an error-free Bell pair $AF$ and an $X$ error on the Bell pair $BE$, so that the final state is $\ket{\Psi^+}_{AF}\ket{\Phi^+}_{BE}$. \begin{table}[htbp] \caption{\label{epsilon_i} The relationship between errors on initial Bell
pairs and final states. Columns correspond to the type of errors on
underlined qubits of the initial Bell pairs.} \begin{ruledtabular}
\begin{tabular}{llll} Bell pair & $\hat{X}$& $\hat{Y}$& $\hat{Z}$\\ \hline $\lvert \Psi^{+}\rangle_{A\underline{B}}$ & $\hat{I}_{A}\hat{X}_{B}$ & $\hat{Z}_{A} \hat{X}_{B}$ & $\hat{Z}_{A} \hat{I}_{B}$ \\ $\lvert \Psi^{+}\rangle_{C\underline{D}}$ & $\hat{X}_{F}\hat{X}_{B}$ & $\hat{Z}_{A} \hat{X}_{F} \hat{X}_{B}$ & $\hat{Z}_{A} \hat{I}_{B}$ \\ $\lvert \Psi^{+}\rangle_{E\underline{F}}$ & $\hat{X}_{F}\hat{I}_{E}$ & $\hat{X}_{F} \hat{Z}_{E}$ & $\hat{I}_{F} \hat{Z}_{E}$ \\ $\lvert \Psi^{+}\rangle_{G\underline{H}}$ & $\hat{X}_{F}\hat{X}_{B}$ & $\hat{X}_{F} \hat{X}_{B}\hat{Z}_{E}$ & $\hat{I}_{F} \hat{Z}_{E}$ \\ $\lvert \Psi^{+}\rangle_{I\underline{J}}$ & $\hat{X}_{F}\hat{X}_{B}$ & $\hat{Z}_{A} \hat{X}_{F} \hat{X}_{B} \hat{Z}_{E}$ & $\hat{Z}_{F} \hat{Z}_{E}$ \\ $\lvert \Psi^{+}\rangle_{K\underline{L}}$ & $\hat{I}_{A}\hat{X}_{B}$ & $\hat{Z}_{A} \hat{X}_{B} \hat{Z}_{E}$ & $\hat{Z}_{A} \hat{Z}_{E}$ \\ $\lvert \Psi^{+}\rangle_{M\underline{N}}$ & $\hat{X}_{F}\hat{I}_{B}$ & $\hat{Z}_{A}\hat{X}_{F} \hat{Z}_{B}$ & $\hat{Z}_{A} \hat{Z}_{B}$ \\
\end{tabular} \end{ruledtabular} \end{table}
We show the relationship between the input fidelity and the output fidelity of our network coding protocol and 2-entanglement swapping in Fig.~\ref{SwappingvsQNC_Pauli}. \begin{figure}
\caption{Joint fidelity of the two output Bell
pairs. We compare Swapping and QNC with the general Pauli error model. Both
show a substantial penalty compared to the fidelity of a single
Bell pair.}
\label{SwappingvsQNC_Pauli}
\end{figure} Here, the final state with $F_{output}<0.5$ has no practical use for quantum communication. When $F_{input}\leq 0.88$, the 2ES protocol falls below $F_{out}=0.5$. When $F_{input}\leq 0.9$, the QNC protocol also falls below $F_{out}=0.5$.
\section{Incorporating gate errors} \label{incorporating errors} In this section, we investigate the error propagation caused by local gates in each encoding step, as shown in Fig.~\ref{circuit}. We introduce $\bf {Con_{\varepsilon}}$, $\bf {Add_{\varepsilon}}$, $\bf {Fanout_{\varepsilon}}$, and $\bf {QNC_{\varepsilon}}$, which use $\bf{ CNOT_{\varepsilon}}$ within their operations. Furthermore, the following error $\epsilon$ occurs on
all qubits in every measurement, single qubit gate, and waiting time. \begin{eqnarray} \epsilon =\sum^{3}_{i=0}p_i \sigma^{i} \end{eqnarray} Here, $p_0 = F$ and $p_i = \frac{p}{3}$ whenever $i \neq 0$. In subsections~\ref{inc_1} through \ref{inc_5}, we give a step by step qualitative analysis, then in subsection~\ref{inc_t} we present the results of our Monte Carlo simulation of the complete circuit. \subsection{Errors in Step 1} \label{inc_1} In step 1, the CNOT gate in Connection causes the following errors $\varepsilon^{(1)}$: \begin{eqnarray} {\bf Con}^{E}_{\varepsilon, G\rightarrow H}{\bf Con}^{A}_{\varepsilon,
C\rightarrow D}\lvert \psi_{init} \rangle = \varepsilon^{(1)} \lvert \psi_{final} \rangle . \end{eqnarray} When we assume the initial resources and CNOT gates in other steps do not include errors, we can describe the relationship between errors in this step and final states as shown in Table~\ref{epsilon_1}.
\begin{table}[htbp] \caption{\label{epsilon_1} The relationship between errors caused by
CNOT gates in Step 1 and final states. Columns correspond to the
type of errors on underlined qubits.} \begin{ruledtabular}
\begin{tabular}{llll} Qubit(underlined) & $\hat{X}$& $\hat{Y}$& $\hat{Z}$\\ \hline $ \bf{CNOT}^{(\underline A ,C)}$ & $\hat{X}_{A}$ & $\hat{Y}_{A}$ & $\hat{Z}_{A}$ \\ $ \bf{CNOT}^{(A,\underline{C})}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{I}$ \\ $ \bf{CNOT}^{(\underline E,G)}$ & $\hat{X}_{E}$ & $\hat{Y}_{E}$ & $\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(E,\underline G)}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{I}$ \\
\end{tabular} \end{ruledtabular} \end{table}
\subsection{Errors in Step 2.} In step 2, the CNOT gate in Add causes the following errors $\varepsilon^{(2)}$: \begin{eqnarray} {\bf Add}^{D,H}_{\varepsilon, I\rightarrow J} \lvert \psi_{(1)}\rangle = \varepsilon^{(2)} \lvert \psi_{final} \rangle .
\end{eqnarray} When we assume the initial resources and CNOT gates in other steps do not include errors, we can describe the relationship between errors in this step and final states as shown in Table~\ref{epsilon_2}.
\begin{table}[htbp] \caption{\label{epsilon_2} The relationship between errors caused by
CNOT gates in Step 2 and final states. Columns correspond to the
type of errors on underlined qubits.} \begin{ruledtabular}
\begin{tabular}{llll} Qubit(underlined) & $\hat{X}$& $\hat{Y}$& $\hat{Z}$\\ \hline $ \bf{CNOT}^{(\underline D ,I)}$ & $\hat{I}$ & $\hat{Z}_{A}$ & $\hat{Z}_{A}$ \\ $ \bf{CNOT}^{(D,\underline I)}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{X}_{B} \hat{Z}_{E} \hat{X}_{F}$ & $\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(\underline H,I)}$ & $\hat{I}$ & $\hat{Z}_{E}$ & $\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(H,\underline I)}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{X}_{B} \hat{X}_{F}$ & $\hat{I}$ \\
\end{tabular} \end{ruledtabular} \end{table}
\subsection{Errors in Step 3.} In step 3, the CNOT gate in Fanout causes the following errors $\varepsilon^{(3)}$: \begin{eqnarray} {\bf Fanout}^{J}_{\varepsilon, K\rightarrow L,M\rightarrow N} \lvert \psi_{(2)}\rangle =
\varepsilon^{(3)} \lvert \psi_{final} \rangle . \end{eqnarray} When we assume the initial resources and CNOT gates in other steps do not include errors, we can describe the relationship between errors in this step and final states as shown in Table~\ref{epsilon_3}.
\begin{table}[htbp] \caption{\label{epsilon_3} The relationship between errors caused by
CNOT gates in Step 3 and final states. Columns correspond to the
type of errors on underlined qubits.} \begin{ruledtabular}
\begin{tabular}{llll} Qubit(underlined) & $\hat{X}$& $\hat{Y}$& $\hat{Z}$\\ \hline $ \bf{CNOT}^{(\underline J ,K)}$ & $\hat{X}_{F}$ & $\hat{Z}_{A}\hat{Z}_{E}\hat{X}_{F}$ & $\hat{Z}_{A}\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(J,\underline K)}$ & $\hat{X}_{B}$ & $\hat{X}_{B}$ & $\hat{I}$ \\ $ \bf{CNOT}^{(\underline J,M)}$ & $\hat{I}$ & $\hat{Z}_{A}\hat{Z}_{E}$ & $\hat{Z}_{A}\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(J,\underline M)}$ & $\hat{X}_{F} $ & $ \hat{X}_{F}$ & $\hat{I}$ \\
\end{tabular} \end{ruledtabular} \end{table}
\subsection{Errors in Step 4.} In step 4, the CNOT gate operations cause the following errors $\varepsilon^{(4)}$: \begin{eqnarray} {\bf CNOT}^{(N,F)}_{\varepsilon} {\bf CNOT}^{(L,B)}_{\varepsilon}
\lvert \psi_{(3)}\rangle = \varepsilon^{(4)} \lvert \psi_{final} \rangle . \end{eqnarray} When we assume the initial resources and CNOT gates in other steps do not include errors, we can describe the relationship between errors in this step and final states as shown in Table~\ref{epsilon_4}.
\begin{table} \caption{\label{epsilon_4} The relationship between errors caused by
CNOT gates in Step 4 and final states. Columns correspond to the
type of errors on underlined qubits.} \begin{ruledtabular}
\begin{tabular}{llll} Qubit(underlined) & $\hat{X}$& $\hat{Y}$& $\hat{Z}$\\ \hline $ \bf{CNOT}^{(\underline L ,B)}$ & $\hat{I}$ & $\hat{Z}_{A}\hat{Z}_{E}$& $ \hat{Z}_{A}\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(L,\underline B)}$ & $\hat{X}_{B}$ & $\hat{Y}_{B}$ & $\hat{Z}_{B}$ \\ $ \bf{CNOT}^{(\underline N,F)}$ & $\hat{I}$ & $\hat{Z}_{A}\hat{Z}_{E}$ & $\hat{Z}_{A}\hat{Z}_{E}$ \\ $ \bf{CNOT}^{(N,\underline F)}$ & $\hat{X}_{F}$ & $\hat{Y}_{F}$ & $\hat{Z}_{F}$ \\
\end{tabular} \end{ruledtabular} \end{table}
\subsection{Errors in Steps 5--7.} \label{inc_5} In these steps, no additional errors are added to the system.
\subsection{Simulations for total errors} \label{inc_t} Using these results, the final state can be described as follows: \begin{eqnarray} \bf{QNC}_{\varepsilon}\lvert\psi_{init}^{\bf{QNC}}\rangle &=& \varepsilon^{(4)} \varepsilon^{(3)} \varepsilon^{(2)} \varepsilon^{(1)} \varepsilon^{init} \lvert \psi_{final} \rangle \\ &=&\lvert \psi^{'''}_{final} \rangle \end{eqnarray} We then show the relation between the input fidelity of the Bell pairs, the accuracy of local operations, and the output fidelity in Fig.~\ref{final_result}.
\begin{figure}
\caption{Comparison of Swapping and QNC with
gate errors incorporated. Output fidelities correspond to the case
with no error on either final Bell pair. (a) Initial fidelity
$F=0.95$. (b) Initial fidelity $F=0.98$.}
\label{final_result_95}
\label{final_result_98}
\label{final_result}
\end{figure} To calculate these fidelities, we used Monte Carlo simulations. In each simulation, the fidelities of the Bell pairs are fixed to $F=0.95$ or $F=0.98$. The accuracy of local operations is varied from $F=0.980$ to $F=1.000$ in steps of $\Delta F=0.001$. For each parameter set, the simulation runs until we accumulate twenty thousand errors on the final states (up to a maximum of one hundred million trials).
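A skeleton of such a simulation, with the same stopping rule (but smaller caps) and with only the initial-Bell-pair Z-error model of Section~\ref{error_analysis} as the per-trial error sampler, is sketched below; per-gate Pauli errors would be sampled analogously using the tables above.

\begin{verbatim}
# Monte Carlo skeleton with the paper's stopping rule: accumulate a fixed
# number of erroneous final states, capped at a maximum number of trials.
# The per-trial model here is only our reading of the initial Bell-pair
# Z-error propagation; gate errors are omitted in this sketch.
import random

def one_trial(F, rng):
    pairs = ["AB", "CD", "EF", "GH", "IJ", "KL", "MN"]
    flipped = {p for p in pairs if rng.random() > F}
    err_AF = len(flipped & {"AB", "CD", "IJ", "KL", "MN"}) % 2 == 1
    err_BE = len(flipped & {"EF", "GH", "IJ", "KL", "MN"}) % 2 == 1
    return err_AF or err_BE

def joint_fidelity(F, target_errors=20_000, max_trials=10**7, seed=1):
    rng = random.Random(seed)
    errors = trials = 0
    while errors < target_errors and trials < max_trials:
        trials += 1
        errors += one_trial(F, rng)
    return 1 - errors / trials, trials

fid, n = joint_fidelity(0.95)
print(f"estimated joint fidelity at F=0.95: {fid:.3f} ({n} trials)")
\end{verbatim}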
\begin{figure*}
\caption{Complete circuit for QNC. Numbers refer
to the steps of the QNC procedure. The initial Bell pair creation is
modeled as a Hadamard gate followed by a CNOT, with a separate error probability from the rest of the circuit.}
\label{circuit}
\end{figure*}
\section{Conclusion} \label{conclusion} We have shown the propagation of errors in quantum network coding protocols using the example of the butterfly network. We have also shown the error threshold of quantum network coding in noisy quantum repeater networks using Monte Carlo simulations. We can see that QNC is more sensitive to local gate errors than entanglement swapping; in the case of the butterfly network, 2ES tolerates about twice the local error rate of QNC. From these results, we see that each scheme is suitable for different purposes. 2ES is useful when quantum resources are abundant or a lower communication rate is acceptable. Quantum network coding is useful when quantum resources are limited or a high communication rate is required. The choice of scheme therefore depends on the environment of the quantum network and the quantum application used. We hope quantum network coding will be used in actual future repeater networks.
\end{document}
\begin{document}
\begin{spacing}{2}
\title{Growth Mixture Modeling with Measurement Selection}
\author{Abby Flynt\\ Assistant Professor\\ Department of Mathematics\\ Bucknell University\\ Lewisburg, PA 17837, USA\\ [email protected]\\ \and Nema Dean\\ Lecturer\\ School of Mathematics and Statistics\\ University of Glasgow\\ Glasgow G12 8QQ, UK\\
[email protected]\\
\date{} }
\normalsize \date{} \maketitle
\abstract{Growth mixture models are an important tool for detecting group structure in repeated measures data. Unlike traditional clustering methods, they explicitly model the repeat measurements on observations, and the statistical framework they are based on allows for model selection methods to be used to select the number of clusters. However, the basic growth mixture model makes the assumption that all of the measurements in the data have grouping information/separate the clusters. In other clustering contexts, it has been shown that including non-clustering variables in clustering procedures can lead to poor estimation of the group structure both in terms of the number of clusters and cluster membership/parameters. In this paper, we present an extension of the growth mixture model that allows for incorporation of stepwise variable selection based on the work done by \citet{maugis09} and \citet{raftery06}. Results presented on a simulation study suggest that the method performs well in correctly selecting the clustering variables and improves on recovery of the cluster structure compared with the basic growth mixture model. The paper also presents an application of the model to a clinical study dataset and concludes with a discussion and suggestions for directions of future work in this area.}
\keywords{Cluster analysis, growth mixture model, repeated measurements, longitudinal data, measurement selection}
\section{Introduction} Cluster analysis is the search for group structure in multivariate data where little or no \emph{a priori} information about groups is available \citep{everitt11}. There are a wide variety of different types of cluster analysis approaches available. These can be broadly categorized into three classes: algorithmic (which includes k-means \citep{macqueen67} and hierarchical clustering \citep{ward63}), non-parametric (mode-hunting, cluster tree approaches, \citep{wishart69}, Section 11 in \citep{hartigan75} and \citep{hartigan81}) and parametric (finite mixture model clustering \citep{titterington85}). Finite mixture model clustering (also commonly known as model-based clustering) is becoming more popular in many application areas due to the wide availability of software, ease of interpretation of output and limited number of subjective decisions necessary for its application.
This paper will focus on growth mixture modeling which is a special case of finite mixture model clustering. Growth mixture modeling is a framework that allows for cluster detection in situations with repeated measurements. It was first introduced by \citet{muthen99} and a good review can be found in \citet{ram2009}. The growth mixture model (GMM) framework allows for modeling of the repeated measurements either directly or as a regression model of outcome measurements on explanatory measurements.
One assumption made by the GMM is that all of the repeated measurements are important to the group structure. Figure \ref{fig:intro} (a) shows an example where this is the case, where each repeated measure/time point has a different mean for each group. In cases where the GMM at each time point is a separate regression, this would mean each component in the GMM had a different intercept and/or slope for each group at each repeated measure. However, if the levels or slopes/intercepts at some measurements do not vary across groups, we may have a situation similar to Figure \ref{fig:intro} (b). In the latter example, only the last two repeated measurements are important for separating the three groups. So the assumption of all measurements having clustering information may not be true and if noise variables are included, it has been shown in other contexts \citep{raftery06, rusakov05} that this can be detrimental to the performance of the clustering method. It is also the case that from a substantive point of view, knowing which measurements/time points differentiate between groups may be of interest in itself. For example, Figure \ref{fig:intro} (b) could be a case where a medication does have differing impacts on groups of patients but this difference effect is delayed and not visible in the first two measurements/time points. This paper applies the variable selection method proposed in \citet{raftery06} and \citet{maugis09} to the GMM to simultaneously allow for selection of the repeated measurements that drive the clustering with application of the clustering itself. Simultaneous or ``wrapper'' selection of clustering variables along with cluster model estimation is generally preferable to either ``filter'' approaches that select variables prior to clustering or post hoc approaches that select variables after the cluster model has been estimated. There is something of a chicken-and-egg problem with variable selection in clustering, as the variables included often drive the clustering found, and the clustering estimated defines the variables of interest. As such, a simultaneous approach is to be preferred to try to tackle both problems at the same time.
\begin{figure}
\caption{(a) 3 groups, where all time-points along x-axis are important for separating groups; (b) 3 groups, where only 3rd and 4th time-points are important for separating groups. Different groups have different line types and colors.}
\label{fig:intro}
\end{figure}
We begin with the methods in Section \ref{sec:methods}, where in Section \ref{sec:gmm}, we introduce the general GMM and discuss its properties and estimation. We then summarize the variable selection framework of \citet{raftery06} in Section \ref{sec:varsel}. The specific variable selection for both a basic GMM (without cluster-specific measurement regression) and the regression GMM is explained in \ref{sec:varselgmm}. These are applied in Section \ref{sec:results} to a variety of different settings in a simulation study, with the results presented in Section \ref{sec:simstudy}, followed by results on the Pittsburgh 600 dataset in Section \ref{sec:data}. The paper wraps up with a summary of the main results, some caveats and future directions for further research on this topic in Section \ref{sec:discuss}.
\section{Methods}\label{sec:methods} \subsection{Finite Mixture Model Clustering}\label{sec:fmm} One of the first major analyses using a finite mixture model was introduced by \citet{pearson94}. Finite mixture model clustering assumes that instead of a single homogeneous population, data may come from a heterogeneous population made up of homogeneous sub-populations. Rather than directly model the overall population with a single density, each sub-population is modeled with its own density. The overall population density can then be expressed as a weighted sum of the component/sub-population densities, with the weights set to the proportions each sub-population makes up of the overall population.
So, if we have an individual (multivariate) observation $\bd{y}$, the mixture density with $K$ components/sub-populations is given by:
\begin{equation} f(\bd{y}|\bd{\theta}) = \sum_{k=1}^K\pi_kf_k(\bd{y}|\bd{\theta}_k), \label{eqn:mixmod}\end{equation} where \[ \pi_k \geq 0, \,\,\, \sum_{k=1}^K\pi_k = 1.\] Here, the $\pi_k$'s are the mixture proportions and the $\bd{\theta}_k$ are the sets of parameters for each component density. If $\bd{y}$ is continuous, then $f_k$ is often taken to be Gaussian and then $\bd{\theta}_k = (\bd{\mu}_k, \bd{\Sigma}_k)$, a set of component specific mean vectors, $\bd{\mu}_k$, and component specific covariance matrices, $\bd{\Sigma}_k$. Clustering using finite mixture models with Gaussian component densities is commonly known as model-based clustering (see \citet{fraley98} for details).
If we have a single variable $y$ that is related to another variable $x$ or a vector of variables $\bd{x}=(x_1, \ldots, x_p)$, then we can cluster the relationship between $y$ and $\bd{x}$ via a finite mixture of regression models as given by:
\begin{equation} f(y|\bd{x},\bd{\theta}) = \sum_{k=1}^K\pi_k f_k(y|\bd{x},\bd{\theta}_k)\label{eqn:mixreg}\end{equation}
Here, instead of having the marginal distributions of $y$ in the component densities (as with $f_k(\textbf{y}|\bd{\theta}_k)$ in \eqref{eqn:mixmod}), we have conditional component densities of $y$ given $\mathbf{x}$, $f_k(y|\mathbf{x},\bd{\theta}_k)$, which represent component level regressions giving an overall finite mixture of regressions.
For continuous $y$, $f_k$ is usually assumed to be Gaussian distributed, with $\bd{\theta}_k = (\bd{\beta}_k,\sigma_k^2)$, a set of component specific regression parameter vectors $\bd{\beta}_k = (\beta_{0k},\beta_{1k},\ldots,\beta_{pk})$ and component specific variance $\sigma_k^2$. The $\pi_k$'s remain as before. Regression relationships which are more complex than linear can be used, and for a non-continuous $y$, more complex models like generalized linear models (see \cite{grun08} for details) or splines (see \cite{james03}) can be used.
Estimation of the finite mixture model can be performed via either frequentist or Bayesian inferential methods. In the frequentist setting, the EM algorithm \citep{dempster77} or variants thereof \citep{gupta11, mclachlan08} is commonly used for estimation, where the observed data is augmented by missing data, in this case an identifier of which mixture component each observation is generated from. \citet{fraley02} gives details of model-based clustering estimation via EM.
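To make the EM mechanics concrete for the regression mixture in \eqref{eqn:mixreg}, the following minimal sketch (an illustration only, not the software used for the analyses in this paper) alternates the computation of responsibilities (E step) with weighted least squares and variance updates (M step) for Gaussian components; the toy data and starting values are assumptions of the sketch.

\begin{verbatim}
# Minimal EM sketch for a K-component mixture of linear regressions with
# Gaussian errors: component-specific (beta_k, sigma2_k) and weights pi_k.
import numpy as np

def em_mixreg(X, y, K, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xd = np.column_stack([np.ones(n), X])        # add intercept
    resp = rng.dirichlet(np.ones(K), size=n)     # random soft start
    beta, sigma2, pi = np.zeros((K, d + 1)), np.ones(K), np.full(K, 1.0 / K)
    for _ in range(n_iter):
        for k in range(K):                       # M step: weighted LS
            w = resp[:, k]
            W = Xd * w[:, None]
            beta[k] = np.linalg.solve(Xd.T @ W + 1e-8 * np.eye(d + 1), W.T @ y)
            res = y - Xd @ beta[k]
            sigma2[k] = (w @ res**2) / w.sum()
            pi[k] = w.mean()
        dens = np.empty((n, K))                  # E step: responsibilities
        for k in range(K):
            res = y - Xd @ beta[k]
            dens[:, k] = (pi[k] / np.sqrt(2 * np.pi * sigma2[k])
                          * np.exp(-0.5 * res**2 / sigma2[k]))
        resp = dens / dens.sum(axis=1, keepdims=True)
    return pi, beta, sigma2, resp

rng = np.random.default_rng(1)                   # toy data: two crossing lines
x = rng.uniform(0, 10, size=400)
z = rng.integers(0, 2, size=400)
y = np.where(z == 0, 1 + 2 * x, 8 - x) + rng.normal(0, 0.5, size=400)
pi, beta, sigma2, resp = em_mixreg(x[:, None], y, K=2)
print(np.round(pi, 2), np.round(beta, 2))
\end{verbatim}

Since EM only finds a local optimum, several random starts would be used in practice; the sketch should recover the two regression lines up to label switching.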
\subsubsection{Choice of number of components}\label{sec:no comps} Choosing the number of mixture components that best fits the data can be posed as a model choice question for finite mixture model clustering methods, in contrast to traditional algorithmic methods. Each number of components defines a different model, so we can score the fit of each model, then choose the model (and associated number of components) that scores best. One of the most commonly used scoring mechanisms is the Bayesian Information Criterion, BIC \citep{schwarz78}. For a particular model $M$ with number of independent parameters $\nu$ and number of data points, $n$, the BIC can be defined as \begin{equation} BIC(M) = -2\log(\mbox{maximized likelihood of model }M) + \nu\log(n) \label{eqn:bic}\end{equation} This essentially looks at how well the model fits the data, via the maximized log likelihood, and penalizes the model complexity by the number of parameters required, weighted by the log of the number of observations. The best model will be the one with the \emph{smallest} BIC score.
Papers such as \cite{biernacki97} and \cite{biernacki99} have discussed using classification likelihood based criteria for model selection in the context of mixture modeling, but by far the most common measure used is the BIC (see \cite{fraley98} and \cite{fraley02} for further details), which has consistency results for selecting the correct order of the mixture model for Gaussian components under certain conditions (\cite{keribin00}). Thus, this is the criterion for component selection we look to use for the remainder of this paper. The approach could also be easily adapted to use a different method for choosing the orders of the mixtures.
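As an illustration of BIC-based selection of the number of components, the following sketch uses scikit-learn's Gaussian mixture implementation with diagonal component covariances (the structure exploited by the growth mixture model later) rather than the software used for the analyses in this paper; scikit-learn's \texttt{bic} method follows the $-2\log(\mbox{likelihood}) + \nu\log(n)$ convention, so smaller is again better.

\begin{verbatim}
# Choosing K by BIC with a diagonal-covariance Gaussian mixture.
# The toy data mimic Figure 1(b): two groups that differ only at the
# last two of five time points.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
g1 = rng.normal(loc=[0, 0, 0, 0, 0], scale=1.0, size=(100, 5))
g2 = rng.normal(loc=[0, 0, 0, 3, 3], scale=1.0, size=(100, 5))
Y = np.vstack([g1, g2])

bics = {}
for K in range(1, 6):
    gm = GaussianMixture(n_components=K, covariance_type="diag",
                         n_init=5, random_state=0).fit(Y)
    bics[K] = gm.bic(Y)
print(bics, "-> chosen K =", min(bics, key=bics.get))
\end{verbatim}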
\subsubsection{Assignment of observations to components}\label{sec:compassign} For each observation $y$, instead of a hard assignment to a particular cluster, mixture model clustering can produce a vector of posterior class membership probabilities via Bayes' rule, using the fitted model parameters:
\begin{equation} \hat{p}_{sm} = P(\mbox{component } = m|y_s, \bd{x}_s, \bd{\hat{\theta}}) = \frac{\hat{\pi}_m f_m(y_s|\bd{x}_s,\bd{\hat{\theta}}_m)}{ \sum_{k=1}^K\hat{\pi}_k f_k(y_s|\bd{x}_s,\bd{\hat{\theta}}_k)} \label{eqn:classprob}\end{equation}
These class membership probabilities are one of the advantages of mixture model clustering. They can give a measure of uncertainty in the assignment of observations to components, which is not available from a hard clustering. They can also give an indication of the degree of overlap between components.
If a hard classification is required, the \textit{maximum a posteriori} (MAP) mapping can be used. This assigns an object to the mixture component that has the highest value in the class membership probabilities. The MAP classification is simply: \[ \mbox{Component for subject } s = \arg\max_{m}\hat{p}_{sm} \]
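Both quantities are simple to compute once parameter estimates are available; a small sketch with assumed univariate Gaussian components illustrates \eqref{eqn:classprob} and the MAP rule.

\begin{verbatim}
# Posterior class-membership probabilities and the MAP assignment for a
# fitted univariate Gaussian mixture; the parameter values are assumed
# purely for illustration.
import numpy as np

pi_hat = np.array([0.6, 0.4])     # assumed mixture proportions
mu_hat = np.array([0.0, 3.0])     # assumed component means
sd_hat = np.array([1.0, 1.0])     # assumed component standard deviations

def posterior(y):
    dens = (pi_hat / (sd_hat * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((y - mu_hat) / sd_hat) ** 2))
    return dens / dens.sum()

for y in (0.5, 1.5, 2.5):
    p = posterior(y)
    print(y, np.round(p, 3), "MAP component:", int(np.argmax(p)) + 1)
\end{verbatim}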
\subsubsection{Connecting a fitted mixture model to clustering}\label{sec:mixclust}
Once the best mixture model for the data has been selected, the most common approach to assigning clusters is that each fitted density component represents a cluster. This paper therefore assumes each GMM component found represents a cluster, however alternative approaches are discussed in Section \ref{sec:discuss}.
\subsection{Growth Mixture Model}\label{sec:gmm} The growth mixture model is a mixture model applicable to data with multiple measurements, e.g. individuals recorded at multiple points over time. The assumption of conditional independence is made between different (sets of) repeated measurements, conditioned on component membership. This reduces the multivariate component density $f_k$ to a product of univariate component densities. If we have $S$ subjects/individuals, each with $N_s$ repeated measurements, the $k^{th}$ component density decomposes to the following product for multivariate observation $\bd{y}_s$:
\begin{equation} f_k(\bd{y}_s|\bd{\theta}) = \prod_{n=1}^{N_s}f_{nk}(y_{sn}|\bd{\theta}_{nk}) ,\label{eqn:gmm}\end{equation} or for the case of repeated measurements on an outcome variable $y$ and set of covariates $\bd{x}$ we have:
\begin{equation} f_k(\bd{y}_s|\bd{x}_s,\bd{\theta}) = \prod_{n=1}^{N_s}f_{nk}(y_{sn}|\bd{x}_{sn},\bd{\theta}_{nk}) ,\label{eqn:gmmreg}\end{equation} where $n$ indexes the repeated measurements in each subject.
This represents an assumption of independence between repeated measurements, conditional on the component membership. Latent class analysis makes a similar assumption of (component) conditional independence between categorical variables. Note that equation \eqref{eqn:gmm}, without covariates in the Gaussian setting, is a special case of the model-based clustering model with covariance parameterization set to be a diagonal matrix within each component (where the diagonal elements are not required to be equal within or across components).
This is the ``VVI'' model in the parlance of {\verb+mclust+} (the model-based clustering package \citep{mclust,fraley02} for the {\verb+R+} software language \citep{Rlanguage}), where the two ``V''s indicate that the volume and shape of the clusters are allowed to vary across clusters and the ``I'' indicates that the orientation of the cluster ellipsoids is the identity, i.e. the ellipsoid axes are parallel to the variable axes.
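As an illustration, the no-covariate Gaussian model of equation \eqref{eqn:gmm} can be fitted directly with this parameterization; in the sketch below, {\verb+Y+} is a hypothetical subjects-by-measurements data matrix.
\begin{verbatim}
## Diagonal-covariance ("VVI") Gaussian mixture over 1 to 5 components,
## with the number of components chosen by BIC inside mclust.
library(mclust)
fit <- Mclust(Y, G = 1:5, modelNames = "VVI")
head(fit$z)                # posterior class-membership probabilities
table(fit$classification)  # MAP cluster sizes
\end{verbatim}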
We can use EM estimation along with the standard methods discussed in Sections \ref{sec:no comps}, \ref{sec:compassign} and \ref{sec:mixclust} to produce a clustering model in this framework.
\subsubsection{Mixed mode and missing data}\label{sec:missdata} The conditional independence assumption means that the outcome variables can be mixed mode data, i.e. of different types (continuous or categorical, binary versus count, etc.), since the maximization within the M step of the EM algorithm then takes place over each repeated measurement separately.
Thus, multiple outcome variables of different types (continuous/categorical) measured repeatedly can be modeled jointly rather than separately. Outcomes that change from one type of variable to another over the time course, e.g. due to discretization, such as a blood pressure measurement that is converted into a high blood pressure status (yes/no), can also be modeled in the same framework. The E step of the EM algorithm, essentially given by equation \eqref{eqn:classprob}, can easily be adapted to this situation as well, since it involves evaluating each density separately before the product is taken.
Similarly, if data are missing for some of the repeated measurements for some subjects, the data from these subjects can still be used in the estimation of the model parameters, and component memberships can still be calculated for these subjects. As a result of the conditional independence assumption, the maximization for a particular repeated measurement can take place over only the subjects with no missing data on that measurement. Likewise, equation \eqref{eqn:classprob} can be updated in the following way:
\[ \hat{p}_{sm} = P(\mbox{component } = m|y_s, \bd{x}_s, \bd{\hat{\theta}}) = \frac{\hat{\pi}_m f_m(y_s|\bd{x}_s,\bd{\hat{\theta}}_m)}{ \sum_{k=1}^K\hat{\pi}_k f_k(y_s|\bd{x}_s,\bd{\hat{\theta}}_k)}, \] where
\[ f_k(y_s|\bd{x}_s,\bd{\hat{\theta}}_k) = \prod_{n: \,y_{sn}, \bd{x}_{sn} \mbox{\scriptsize{not missing}}}f_{nk}(y_{sn}|\bd{x}_{sn},\bd{\hat{\theta}}_{nk}).\]
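A minimal {\verb+R+} sketch of this per-subject component density, taking the product over the non-missing measurements only, is given below for the no-covariate Gaussian case; {\verb+mu_k+} and {\verb+sd_k+} are hypothetical vectors of per-measurement parameters for component $k$.
\begin{verbatim}
## Component-k density for one subject, skipping missing measurements (NA).
## y_s, mu_k, sd_k: vectors of length N_s (one entry per repeated measurement).
f_k_subject <- function(y_s, mu_k, sd_k) {
  obs <- !is.na(y_s)
  prod(dnorm(y_s[obs], mean = mu_k[obs], sd = sd_k[obs]))
}
\end{verbatim}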
\subsection{Variable Selection Framework}\label{sec:varsel} The variable selection procedure proposed in \citet{raftery06} and \citet{dean10} is ideal for application to the GMM. It reduces the variable selection problem to considering whether a single variable is useful for clustering or not. This is then combined with a search procedure (e.g. stepwise, backward, forward, headlong, etc.) to select a subset of the original variables as useful for clustering. The resulting framework allows for simultaneous variable selection and cluster estimation which is generally preferable to filter approaches.
\subsubsection{Variable Selection Comparison Models}\label{sec:varselmod} To check whether a proposal variable is useful for clustering, two opposing models are posited (useful for clustering versus not) and their fits are compared to see which model has the stronger evidence. At each stage we have three (potential) sets in a partition of the variables/measurements $\bd{y}$: \begin{itemize} \item $y^{(proposal)}$ - the (single) variable being proposed as a clustering variable \item $\bd{y}^{(current)}$ - the current set of selected clustering variables \item $\bd{y}^{(not\, selected)}$ - all other variables (neither currently proposed nor selected as clustering variables). \end{itemize} The model for $y^{(proposal)}$ being useful for clustering, $M_1$, is a product of two sub-models:
\[ M_1(\bd{y}) = M_{clust}(y^{(proposal)},\bd{y}^{(current)})\times M_{not\,clust}(\bd{y}^{(not\, selected)}|y^{(proposal)},\bd{y}^{(current)}),\] where $M_{clust}$ indicates a clustering model was fitted to the set of variables in parentheses and $M_{not\,clust}$ is a non-clustering model.
The model for $y^{(proposal)}$ \emph{not} being useful for clustering, $M_2$, is a product of three sub-models:
\[ M_2(\bd{y}) = M_{clust}(\bd{y}^{(current)})\times M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})\times M_{not\,clust}(\bd{y}^{(not\, selected)}|y^{(proposal)},\bd{y}^{(current)}).\]
Different approaches have been taken with respect to the sub-model, $M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})$, for the relationship between the proposal variable and the current clustering variables in the model where the proposal variable does not have a clustering role. \begin{enumerate} \item In the original paper, \citet{raftery06}, where model-based clustering with Gaussian components was considered, this took the form of a linear regression of $y^{(proposal)}$ on the full set of current clustering variables $\bd{y}^{(current)}$.
\item In the \citet{dean10} paper, because conditional independence was assumed in the LCA model \citep{lazarsfeld68} applied to the categorical variables under consideration, it made sense to assume complete independence of $y^{(proposal)}$ from $\bd{y}^{(current)}$. This results in a reduction of $M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})$ to $M_{not\, clust}(y^{(proposal)})$.
\item A compromise between these two extremes was proposed by \citet{maugis09} where variable selection was run on the regression of $y^{(proposal)}$ on the full set of current clustering variables $\bd{y}^{(current)}$ (allowing $y^{(proposal)}$ to depend on all, a subset of or none of the set $\bd{y}^{(current)}$). \label{maugis-model} \end{enumerate}
The modeling choices for the GMM case for $M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})$, based on approach \ref{maugis-model}, are discussed further in Section \ref{sec:varselgmm}.
For a proposed variable $y^{(proposal)}$, once models $M_1$ and $M_2$ have been estimated from the data, a method for evaluating the evidence for one versus the other is needed. The obvious choice for doing so is to examine the Bayes factor of model 1 versus model 2, $B_{12}$
\[ B_{12} = \frac{p(\bd{y}|M_1)}{p(\bd{y}|M_2)}, \]
where $p(\bd{y}|M_i)$ is the integrated likelihood of model $M_i$. These integrated likelihoods are not available in closed form for finite mixture models, so an approximation of the Bayes factor based on the BIC is used instead: \begin{equation} \log(B_{12}) \approx -\tfrac{1}{2}\left(BIC(M_1) - BIC(M_2)\right). \label{eqn:bicdiff}\end{equation} Since lower values of the BIC, as defined in equation \eqref{eqn:bic}, indicate a better fit, a negative difference $BIC(M_1) - BIC(M_2)$ in equation \eqref{eqn:bicdiff} indicates more evidence in support of model $M_1$ than $M_2$, suggesting that the proposal variable $y^{(proposal)}$ is useful for clustering. Conversely, if the difference is positive, there is more evidence supporting $M_2$ than $M_1$, suggesting that the proposal variable $y^{(proposal)}$ is not useful for clustering. The natural default value of the threshold on this BIC difference for deciding whether $y^{(proposal)}$ should be included in the set of clustering variables is 0. This value can be altered if stronger evidence is believed to be necessary before a proposal variable is included in the set of clustering variables.
For $M_1$ we have:
\[ BIC(M_1) = BIC( M_{clust}(y^{(proposal)},\bd{y}^{(current)}))+ BIC(M_{not\,clust}(\bd{y}^{(not\, selected)}|y^{(proposal)},\bd{y}^{(current)})) \] For $M_2$ we have:
\begin{eqnarray*} BIC(M_2) &=& BIC(M_{clust}(\bd{y}^{(current)})) + BIC(M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})) \\
&&+ BIC(M_{not\,clust}(\bd{y}^{(not\, selected)}|y^{(proposal)},\bd{y}^{(current)})) \end{eqnarray*}
We see that $M_{not\,clust}(\bd{y}^{(not\, selected)}|y^{(proposal)},\bd{y}^{(current)})$ appears in both models, so it cancels in the difference, giving us: \begin{align}
BIC_{\mathit{diff}}(y^{(proposal)}) &= BIC(M_{clust}(y^{(proposal)},\bd{y}^{(current)})) \label{eqn:bicdiffcalc}
\\
& \quad - \left( BIC(M_{clust}(\bd{y}^{(current)})) + BIC(M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)}))\right) \nonumber \end{align}
So, for each proposal variable $y^{(proposal)}$, the cluster model is fit on the current set of clustering variables $\bd{y}^{(current)}$ together with the proposal variable $y^{(proposal)}$, and the BIC score of that model, $BIC(M_{clust}(y^{(proposal)},\bd{y}^{(current)}))$, is calculated. The cluster model is also fit on only the current set of clustering variables $\bd{y}^{(current)}$, giving $BIC(M_{clust}(\bd{y}^{(current)}))$. Finally, either the regression (with or without variable selection) of $y^{(proposal)}$ on $\bd{y}^{(current)}$ is fit, giving $BIC(M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)}))$, or a single component cluster model is fit on $y^{(proposal)}$, giving $BIC(M_{not\,clust}(y^{(proposal)}))$. These quantities are plugged into $BIC_{\mathit{diff}}(y^{(proposal)})$ to give a value that can be used to make a decision about the proposed clustering variable. This is then combined with a search algorithm, such as those presented in the next section, to produce a set of selected clustering variables.
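A minimal {\verb+R+} sketch of this comparison for a single proposal variable is given below; {\verb+fit_gmm+} and {\verb+bic_reg+} are hypothetical helper functions returning the (smaller-is-better) BIC of the clustering sub-model and of the non-clustering sub-model, respectively.
\begin{verbatim}
## BIC difference for one proposal column p of the data matrix Y, given the
## indices `current` of the currently selected clustering variables.
bic_diff_for <- function(Y, p, current) {
  bic_M1 <- fit_gmm(Y[, c(current, p), drop = FALSE])    # clust(prop, current)
  bic_M2 <- fit_gmm(Y[, current, drop = FALSE]) +        # clust(current)
            bic_reg(Y[, p], Y[, current, drop = FALSE])  # notclust(prop | current)
  bic_M1 - bic_M2   # > 0 favours "not useful for clustering"
}
\end{verbatim}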
\subsubsection{Variable Selection Search Algorithm}\label{sec:varselsearch} Given the nature of the data examined, where variables correspond to repeated measurements or time points, the number of variables is not expected to be large. This means that there will be no issues with fitting a GMM using the full set of measurements. Therefore we describe two backward search algorithms, for which a short code sketch is given below, that will be used in producing the results for Section \ref{sec:results}. \\ \\ \noindent\textbf{\underline{Basic Greedy Backward Search}} \\ \\ The standard greedy backward search proceeds as follows: \begin{enumerate} \item Start with all variables in the $\bd{y}^{(current)}$ set \item Take each variable from $\bd{y}^{(current)}$ individually in turn as $y^{(proposal)}$: \begin{itemize} \item Fit models $M_1$ and $M_2$ \item Calculate $BIC_{\mathit{diff}}$ using equation \eqref{eqn:bicdiffcalc} \end{itemize} \item Choose the variable with the largest $BIC_{\mathit{diff}}$ value \item If the variable's $BIC_{\mathit{diff}}$ is larger than the chosen threshold (usually 0), then remove this variable from the set of clustering variables and return to step 2. Otherwise, halt the algorithm. \end{enumerate} \noindent\textbf{\underline{Greedy Backward Search with Monotonicity}} \\ \\ Given that one of the common forms of data to which the GMM is applied is repeated measurements with a temporal ordering, a greedy backward search with monotonicity may be of interest.
This type of search proceeds as follows: \begin{enumerate} \item Start with all variables in the $\bd{y}^{(current)}$ set \item Take the \emph{earliest} and the \emph{latest} variable from $\bd{y}^{(current)}$ individually in turn as $y^{(proposal)}$: \begin{itemize} \item Fit models $M_1$ and $M_2$ \item Calculate $BIC_{\mathit{diff}}$ using equation \eqref{eqn:bicdiffcalc} \end{itemize} \item Choose the variable with the largest $BIC_{\mathit{diff}}$ value \item If the variable's $BIC_{\mathit{diff}}$ is larger than the chosen threshold (usually 0), then remove this variable from the set of clustering variables and return to step 2. Otherwise, halt the algorithm. \end{enumerate}
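The following minimal {\verb+R+} sketch implements both searches using the function {\verb+bic_diff_for+} sketched at the end of Section \ref{sec:varselmod}; restricting the candidate proposals to the first and last elements of the current set gives the monotone variant.
\begin{verbatim}
## Greedy backward search over the columns of Y; returns the indices of the
## selected clustering time points.
backward_search <- function(Y, threshold = 0, monotone = FALSE) {
  current <- seq_len(ncol(Y))
  while (length(current) > 1) {
    cand  <- if (monotone) c(current[1], current[length(current)]) else current
    diffs <- sapply(cand, function(p) bic_diff_for(Y, p, setdiff(current, p)))
    if (max(diffs) <= threshold) break   # no variable worth removing
    current <- setdiff(current, cand[which.max(diffs)])
  }
  current
}
\end{verbatim}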
In the case of a limited number of variables/measurements, the greedy backward search should be the most efficient approach (provided there is at least some clustering structure present). However, it is perfectly feasible to use forward or forward-and-backward stepwise searches, as well as headlong searches (as described in \citet{dean10}), instead of greedy approaches as wrappers for the framework from Section \ref{sec:varselmod}.
\subsection{Growth Mixture Model with Variable Selection}\label{sec:varselgmm}
This section presents the details, in the framework of growth mixture modeling with variable selection, for model $M_1(\bd{y})$, where $y^{(proposal)}$ is useful for clustering, and model $M_2(\bd{y})$, where $y^{(proposal)}$ is {\it not} useful for clustering. As discussed in Section \ref{sec:varselmod}, we can ignore $M_{not\,clust}(\bd{y}^{(not\, selected)}|y^{(proposal)},\bd{y}^{(current)})$ as it appears in both models. We focus on $M_{clust}(y^{(proposal)},\bd{y}^{(current)})$, $M_{clust}(\bd{y}^{(current)})$ and $M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})$. Both $M_{clust}$ models come directly from equations \eqref{eqn:gmm} and \eqref{eqn:gmmreg}, depending on whether or not covariates are included.
Let $\mathcal{V}\subseteq\{1,\ldots,N_s\}$ be the set of indices of the current clustering variables. For example, if there were 5 repeated measurements and the third and fifth were current clustering variables, then $\mathcal{V} = \{3, 5\}$. Additionally, let $p$ denote the index of the proposed clustering variable.
Without covariates, we have
$$M_{clust}(y^{(proposal)},\bd{y}^{(current)}) =\prod \limits_{s=1}^S \sum \limits_{k=1}^K \pi_k \prod \limits_{v \in \mathcal{V} \cup p} f_{vk}(y_{sv}|\bd{\theta}_{vk}),$$
and
$$M_{clust}(\bd{y}^{(current)}) = \prod \limits_{s=1}^S \sum \limits_{k=1}^K \pi_k \prod \limits_{v \in \mathcal{V}} f_{vk}(y_{sv}|\bd{\theta}_{vk}).$$
For both of these clustering models, $f_{vk}$ is Gaussian with $\bd{\theta}_{vk} = (\bd{\mu}_{vk}, \bd{\Sigma}_{vk})$.
The addition of covariates gives us
$$M_{clust}(y^{(proposal)},\bd{y}^{(current)}) = \prod \limits_{s=1}^S \sum \limits_{k=1}^K \pi_k \prod \limits_{v \in \mathcal{V} \cup p} f_{vk}(y_{sv}|\bd{x}_{sv}, \bd{\theta}_{vk}),$$
and
$$M_{clust}(\bd{y}^{(current)}) = \prod \limits_{s=1}^S \sum \limits_{k=1}^K \pi_k \prod \limits_{v \in \mathcal{V}} f_{vk}(y_{sv}|\bd{x}_{sv}, \bd{\theta}_{vk}),$$
where again, $f_{vk}$ is Gaussian, but for this finite mixture of regression models, $\bd{\theta}_{vk}$ is a set of component specific regression parameters as described in Section \ref{sec:fmm}.
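A minimal {\verb+R+} sketch of this clustering likelihood with covariates is given below, assuming, purely for illustration, a single covariate with an intercept and a slope per measurement and component; {\verb+pi_hat+}, {\verb+beta_hat+} and {\verb+sd_hat+} are hypothetical parameter objects.
\begin{verbatim}
## Observed-data log likelihood of the covariate GMM restricted to the
## measurement indices in V (current clustering variables, plus the proposal).
loglik_gmm <- function(Y, X, V, pi_hat, beta_hat, sd_hat) {
  K <- length(pi_hat)
  comp <- sapply(1:K, function(k) {
    f_k <- Reduce("*", lapply(V, function(v)
      dnorm(Y[, v],
            mean = beta_hat[[k]][[v]][1] + beta_hat[[k]][[v]][2] * X[, v],
            sd   = sd_hat[[k]][v])))
    pi_hat[k] * f_k
  })
  sum(log(rowSums(comp)))   # sum over subjects of log sum_k pi_k f_k
}
\end{verbatim}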
The model for $M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)})$ without covariates is given by
$$M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)}) = \prod \limits_{s=1}^S f_p(y_{s}^{(proposal)}|\bd{y}_s^{(current)*},\bd{\theta}_p), $$
where $\bd{y}^{(current)*}$ is a selection of $\bd{y}^{(current)}$ chosen using a regression variable selection based on BIC \citep{raftery95}.
The addition of covariates requires us to first fit the regression of $y^{(proposal)}$ on $\bd{x}^{(proposal)}$; we then select $\bd{y}^{(current)*}$ by regressing the residuals from this first regression on $\bd{y}^{(current)}$. This gives a final model of
$$M_{not\, clust}(y^{(proposal)}|\bd{y}^{(current)}) = \prod \limits_{s=1}^S f_p(y_{s}^{(proposal)}|\bd{x}_s^{(proposal)}, \bd{y}_s^{(current)*},\bd{\theta}_p).$$
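A minimal {\verb+R+} sketch of this two-stage construction is given below; {\verb+y_prop+}, {\verb+x_prop+} and the data frame {\verb+Y_current+} are hypothetical stand-ins for $y^{(proposal)}$, $\bd{x}^{(proposal)}$ and $\bd{y}^{(current)}$.
\begin{verbatim}
## Stage 1: regress the proposal variable on its own covariate.
stage1 <- lm(y_prop ~ x_prop)
## Stage 2: BIC-based stepwise selection (k = log(n)) of the current
## clustering variables, using the stage-1 residuals as the response.
dat    <- data.frame(res = resid(stage1), Y_current)
stage2 <- step(lm(res ~ ., data = dat), k = log(nrow(dat)), trace = 0)
sel    <- attr(terms(stage2), "term.labels")
## Final non-clustering model: proposal on its covariate plus the selection.
final  <- lm(reformulate(c("x_prop", sel), response = "y_prop"),
             data = data.frame(y_prop, x_prop, Y_current))
BIC(final)   # contributes to BIC(M_2)
\end{verbatim}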
\section{Results}\label{sec:results} The starting values for EM estimation in all the following sections were produced using the k-means algorithm on all $\bd{y}$ values for the measurements being considered in the model. Using 50 random initializations, this was found to produce reasonable sets of starting values in general, and was computationally quick. Other starting value schemes could, of course, be incorporated.
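For reference, a minimal {\verb+R+} sketch of this initialization is given below; {\verb+Y+} is a hypothetical subjects-by-measurements matrix and {\verb+K+} the number of components being fitted.
\begin{verbatim}
## k-means with 50 random initializations; the hard labels are turned into an
## indicator matrix used as initial responsibilities for the EM algorithm.
km <- kmeans(Y, centers = K, nstart = 50)
z0 <- outer(km$cluster, seq_len(K), "==") * 1
\end{verbatim}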
Only complete data (no missing values) were generated in the simulations in Section \ref{sec:simstudy} and complete cases were used in the dataset in Section \ref{sec:data}. The presented methodology can be extended to include missing values using the ideas in Section \ref{sec:missdata}, but evaluating the impact of this was not considered to be the goal of this paper.
\subsection{Simulation Study}\label{sec:simstudy}
For each simulated data set, we compare the final GMM after variable selection with the GMM using all of the variables. We can compare the accuracy of clustering by looking at the difference in the Adjusted Rand Index (ARI, \citet{hubert85}) compared to the true simulated partition for each model. Additionally, we can compare the accuracy of estimation by looking at the difference in the Root Mean Square Error (RMSE, \citet{steel60}) for each model. We include the RMSE for the full set of variables based on the model using the selected variables and the model using all variables. Note that the RMSE for the model estimated using the selected variables includes estimation of the non-selected variables assuming a one-group model. We also provide results on the number of non-clustering variables chosen and the number of clusters chosen by each model.
All results are for 20 time points with 3 groups.
(Twenty time points was chosen as a reasonable upper limit for the number of variables that would be expected in a typical application. The method will, of course, run faster on a smaller number of time points.) The non-clustering time points all have intercept 0 and slope 1, with standard deviation 0.5 and standard normally distributed explanatory variables. All simulations are repeated 50 times and average results are reported in Tables \ref{tab1}, \ref{tab2}, \ref{tab3} and \ref{tab4}.
Tables \ref{tab1}, \ref{tab2} and \ref{tab3} all refer to clusterings with roughly equal sized clusters (in terms of membership/mixture probabilities) with varying degrees of separation in the two clustering time points. In the first simulation summarized in Table \ref{tab1}, the first clustering variable has slope parameters (1, 3, -2) and the second clustering variable has slope parameters (1, 2.5, -0.5) over the three groups respectively. 98\% of the simulations correctly selected both the clustering variables and 43\% of the simulations correctly chose 3 groups. Additionally, 78\% of the simulations had higher ARI and 66\% of the simulations had lower RMSE for the model using only the selected clustering variables. In the second and third simulation, both clustering variables share the same slopes for the three groups, with Table \ref{tab2} having slopes (1, 2.5, -0.5) and Table \ref{tab3} having slopes (1, 3, -2). We see an improvement in all of our measures of evaluation for these two simulations.
Finally in Table \ref{tab4}, the membership/mixture probabilities place 70\% of the observations in one group and split the remaining 30\% into the other two groups. The slopes for the clustering variables match that of the third simulation. Even with unbalanced groups, the clustering variables are selected correctly 96\% of the time and three groups are chosen 56\% of the time. It is worth noting that clustering of all the time points resulted in only 2 groups being chosen for every repetition of the simulation, demonstrating the detrimental effect that the inclusion of non-clustering variables can have on cluster estimation. The ARI was higher for 72\% of the simulations and the RMSE was smaller for 90\% of the simulations for the model using only the selected clustering variables.
Overall, as can be seen from the tables, the variable selection procedure almost always selects the clustering variables, but also often includes some additional variables. However, as can be seen from the ARI, in terms of group structure recovery, most of the time it is better to just select the cluster model on the basis of the selected variables rather than the full variable set. Similarly, if good estimation performance is the goal, again in terms of RMSE, it is usually better to fit the model based on the selected variables alone. \begin{table}[htbp] \begin{tabular}{lrrr} \hline \hline \multicolumn{4}{c}{\textbf{Simulation parameters}} \\ Group mixture weights&Group 1 = 0.3& Group 2 = 0.3& Group 3 = 0.4 \\ Clustering time points&5 and 15&& \\ \hline Clustering time points slopes&Group 1&Group 2&Group 3 \\ \hline Time point 5&1&3&-2 \\ Time point 15&1&2.5&-0.5 \\ \hline \hline \multicolumn{4}{c}{\textbf{Simulation results}} \\ Clustering time point selected&Time point 5& Time point 15&Both time points \\ \hline \% simulations&98\%&100\%&98\% \\ \hline \end{tabular} \begin{tabular}{lrrrrrrrrr} \hline Number of non-clustering&&&&&&&&& \\ time points selected&0&1&2&3&4&6&7&8&17 \\ \hline \% simulations&26\%&26\%&6\%&8\%&8\%&8\%&12\%&4\%&2\% \\ \hline \hline Number of groups chosen&2&3&4&&&&&& \\ \hline \% of simulations for &&&&&&&&& \\ model with \emph{selected} variables&14\%&42\%&44\%&&&&&& \\ \% of simulations for&&&&&&&&& \\ model with \emph{all} variables&70\%&12\%&18\%&&&&&& \\ \hline \end{tabular} \begin{tabular}{rrrrrrr} \hline \multicolumn{7}{c}{Summary statistics for difference between ARI for clustering with selected } \\ \multicolumn{7}{c}{variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.4969& 0.01145 &0.15330& 0.18270& 0.30260&0.7489& \\ \hline \multicolumn{7}{l}{78\% of simulations had a higher ARI for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \hline \multicolumn{7}{c}{Summary statistics for difference between RMSE for clustering with selected} \\ \multicolumn{7}{c}{ variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.64120 & -0.22790 &-0.11790 &-0.12330 &0.04693&0.92630& \\ \hline \multicolumn{7}{l}{66\% of simulations had lower RMSE for all time points for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \end{tabular} \caption{First simulations set} \label{tab1} \end{table}
\begin{table}[htbp] \begin{tabular}{lrrr} \hline \hline \multicolumn{4}{c}{\textbf{Simulation parameters}} \\ Group mixture weights&Group 1 = 0.3& Group 2 = 0.3& Group 3 = 0.4 \\ Clustering time points&5 and 15&& \\ \hline Clustering time points slopes&Group 1&Group 2&Group 3 \\ \hline Time point 5&1&2.5&-0.5 \\ Time point 15&1&2.5&-0.5 \\ \hline \hline \multicolumn{4}{c}{\textbf{Simulation results}} \\ Clustering time point selected&Time point 5& Time point 15&Both time points \\ \hline \% simulations&100\%&100\%&100\% \\ \hline \end{tabular} \begin{tabular}{lrrrrrrrrr} \hline Number of non-clustering&&&&&&&&& \\ time points selected&0&1&2&3&4&5&6&14&17 \\ \hline \% simulations&66\%&20\%&6\%&2\%&1\%&1\%&2\%&1\%&1\% \\ \hline \hline Number of groups chosen&2&3&4&&&&&& \\ \hline \% of simulations for &&&&&&&&& \\ model with \emph{selected} variables&19\%&52\%&29\%&&&&&& \\ \% of simulations for&&&&&&&&& \\ model with \emph{all} variables&94\%&6\%&0\%&&&&&& \\ \hline \end{tabular} \begin{tabular}{rrrrrrr} \hline \multicolumn{7}{c}{Summary statistics for difference between ARI for clustering with selected } \\ \multicolumn{7}{c}{variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.428100 &-0.002086& 0.063560 &0.050580 &0.130200 &0.275100 \\ \hline \multicolumn{7}{l}{77\% of simulations had a higher ARI for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \hline \multicolumn{7}{c}{Summary statistics for difference between RMSE for clustering with selected} \\ \multicolumn{7}{c}{ variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.5284& -0.3689 &-0.3232& -0.2634 &-0.1757 &0.3829 \\ \hline \multicolumn{7}{l}{94\% of simulations had lower RMSE for all time points for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \end{tabular} \caption{Second simulations set} \label{tab2} \end{table}
\begin{table}[htbp] \begin{tabular}{lrrr} \hline \hline \multicolumn{4}{c}{\textbf{Simulation parameters}} \\ Group mixture weights&Group 1 = 0.3& Group 2 = 0.3& Group 3 = 0.4 \\ Clustering time points&5 and 15&& \\ \hline Clustering time points slopes&Group 1&Group 2&Group 3 \\ \hline Time point 5&1&3&-2 \\ Time point 15&1&3&-2 \\ \hline \hline \multicolumn{4}{c}{\textbf{Simulation results}} \\ Clustering time point selected&Time point 5& Time point 15&Both time points \\ \hline \% simulations&100\%&100\%&100\% \\ \hline \end{tabular} \begin{tabular}{lrrrrrrrrr} \hline Number of non-clustering&&&&&&&&& \\ time points selected&0&1&2&3&4&5&6&7& \\ \hline \% simulations&45\%&23\%&7\%&7\%&7\%&2\%&5\%&5\%& \\ \hline \hline Number of groups chosen&2&3&4&&&&&& \\ \hline \% of simulations for &&&&&&&&& \\ model with \emph{selected} variables&2\%&77\%&20\%&&&&&& \\ \% of simulations for&&&&&&&&& \\ model with \emph{all} variables&61\%&34\%&5\%&&&&&& \\ \hline \end{tabular} \begin{tabular}{rrrrrrr} \hline \multicolumn{7}{c}{Summary statistics for difference between ARI for clustering with selected } \\ \multicolumn{7}{c}{variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.15790& -0.01606 &0.17880& 0.13660& 0.21700& 0.55370& \\ \hline \multicolumn{7}{l}{73\% of simulations had a higher ARI for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \hline \multicolumn{7}{c}{Summary statistics for difference between RMSE for clustering with selected} \\ \multicolumn{7}{c}{ variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -1.143000 &-0.531900& -0.479300 &-0.388600& -0.047100& -0.005398& \\ \hline \multicolumn{7}{l}{100\% of simulations had lower RMSE for all time points for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \end{tabular} \caption{Third simulations set} \label{tab3} \end{table}
\begin{table}[htbp] \begin{tabular}{lrrr} \hline \hline \multicolumn{4}{c}{\textbf{Simulation parameters}} \\ Group mixture weights&Group 1 = 0.7& Group 2 = 0.15& Group 3 = 0.15 \\ Clustering time points&5 and 15&& \\ \hline Clustering time points slopes&Group 1&Group 2&Group 3 \\ \hline Time point 5&1&2.5&-0.5 \\ Time point 15&1&2.5&-0.5 \\ \hline \hline \multicolumn{4}{c}{\textbf{Simulation results}} \\ Clustering time point selected&Time point 5& Time point 15&Both time points \\ \hline \% simulations&96\%&96\%&96\% \\ \hline \end{tabular} \begin{tabular}{lrrrrrrrrr} \hline Number of non-clustering&&&&&&&&& \\ time points selected&0&1&2&3&4&6&17& \\ \hline \% simulations&54\%&22\%&10\%&6\%&4\%&2\%&2\%&& \\ \hline \hline Number of groups chosen&1&2&3&4&&&&& \\ \hline \% of simulations for &&&&&&&&& \\ model with \emph{selected} variables&4\%&20\%&56\%&20\%&&&&& \\ \% of simulations for&&&&&&&&& \\ model with \emph{all} variables&0\%&100\%&0\%&0\%&&&&& \\ \hline \end{tabular} \begin{tabular}{rrrrrrr} \hline \multicolumn{7}{c}{Summary statistics for difference between ARI for clustering with selected } \\ \multicolumn{7}{c}{variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.199900& -0.001656 &0.106100& 0.117800& 0.211500& 0.609400 & \\ \hline \multicolumn{7}{l}{72\% of simulations had a higher ARI for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \hline \multicolumn{7}{c}{Summary statistics for difference between RMSE for clustering with selected} \\ \multicolumn{7}{c}{ variables versus clustering with all variables} \\ Minimum&1$^{st}$ quartile&Median&Mean&3$^{rd}$ quartile&Maximum& \\ \hline -0.27330 &-0.15030& -0.10200 &-0.09932& -0.05620& 0.08910& \\ \hline \multicolumn{7}{l}{90\% of simulations had lower RMSE for all time points for the model using only the } \\ \multicolumn{7}{l}{selected variables} \\ \hline \end{tabular} \caption{Fourth simulations set} \label{tab4} \end{table}
\subsection{Application}\label{sec:data}
In this section, we apply variable selection to the Pittsburgh 600, a longitudinal data set composed of five depression studies, each with a different treatment protocol (\citet{thase1997}). Subjects were diagnosed as clinically depressed upon entrance to the study and there are 26 weekly scores indicating their current level of depression. In addition to their score, we have several explanatory variables, including the subjects' sex, age and medication status.
Because most trajectories in this study have intermittent missingness (which was not incorporated into our method, as it was not the focus of this manuscript), we have taken a subset of four time points that provided a large number of complete trajectories. There were 74 subjects that had complete observations at Weeks 1, 9, 14 and 25 of the study. Figure~\ref{fig:trajectories} shows the trajectories of these subjects over time. We can see that overall there is a decreasing trend, showing a reduction in clinical depression scores.
\begin{figure}
\caption{Selected subjects in Pittsburgh 600 study}
\label{fig:trajectories}
\end{figure}
Using subject age as the explanatory variable, variable selection chose Week 14 as the only time point that is useful for clustering. Clustering on Week 14 produces two classes, containing 46 and 28 trajectories as seen in Figure~\ref{fig:clusters}. In Figure~\ref{fig:week14}, we see that the two classes obtained from the growth mixture model are really driven by different intercepts.
\begin{figure}
\caption{Clusters resulting from GMM fit only to Week 14}
\label{fig:clusters}
\end{figure}
\begin{figure}
\caption{Clusters at Week 14}
\label{fig:week14}
\end{figure}
When a growth mixture model is fit to all four time points, two groups of size 43 and 31 are selected. The clustering can be seen in Figure~\ref{fig:fullclustering}. When we compare these two clustering solutions, the ARI is 0.487, indicating that the solutions are not very similar. It is worth noting that the variable selection growth mixture model procedure gives nearly identical results when the explanatory variable is changed to medication status (whether or not each subject is on a medication) or to which of the five studies the subject belongs to. Using all of the time points for the growth mixture model with either alternative explanatory variable, however, produces more variation between the clusterings.
\begin{figure}
\caption{Clusters resulting from GMM fit to all four time points}
\label{fig:fullclustering}
\end{figure}
\section{Discussion}\label{sec:discuss} This paper presents a framework to incorporate variable selection into the growth mixture model for settings where one is clustering repeated measurements over time, with possible covariates. The simulation results suggest, from both an estimation and an interpretation point of view, that it is preferable to apply variable selection when using a GMM: it results in better estimation of the number of groups present in the data and superior estimation of the parameters in the resulting regressions.
A run on simulated data with 20 time points (2 representing true clustering variables), 3 groups and 400 datapoints takes 4987 seconds, i.e. just over an hour on an iMac with a 3.06 GHz Intel Core Duo with 8 GB 1067 MHz DDR3 memory. This represents software written only in R which has not been optimized for speed. Some of the routines could be transferred to C++ and run much faster as a result. Since the search algorithm loops over the number of time points, datasets with smaller numbers of time points would be expected to take less time as well.
The clustering found using the selected variables can often vary from that found using the full original set of variables/measurements as seen in the application in Section \ref{sec:data}. Which cluster model is of more interest can depend on the context of the problem. If one is interested in the grouping on all variables, a priori, then the variable selection is interesting only in the sense of highlighting variables that best separate the groups, not in the sense of estimating the clustering itself. If this is so, then variable selection can be performed to find these variables but the clustering model estimation is best done on the full set. Or indeed an alternative post-hoc variable selection method could be used after the clustering has already been performed. In other situations, it may be the case that both variable selection and clustering on the selected variables is the goal. It may also still be of interest to compare the clustering found on the selected variables versus the clustering on the original full set of variables.
An added advantage of variable selection in the context of GMMs is that, since many of the associated degeneracy and local optima issues are related to dimensionality, any reduction in dimension will hopefully lead to a reduction in the occurrence of such issues, although, of course, this cannot be guaranteed.
The framework presented here is extremely flexible and can be extended to facilitate estimation using incomplete observations as mentioned previously.
This framework could also be adapted to the generalized linear model framework in addition to the linear model approach presented in this paper, or the generalized additive model clustering case as seen in \citet{james03}. It would also be possible to extend the model to allow an autoregressive time dependence instead of the conditional independence presented, either by using previous time points as covariates in the covariate GMM setting or using the framework presented in \citet{mcnicholas12}.
In addition to variable selection in the clustering sense, it would also be possible to incorporate variable selection in the regression for the outcome variables on their covariates in the GMM setting.
A classification or supervised version of our methodology would be easy to implement since it essentially involves an observed version of the missing data (group labels) in the EM, and a single M step for parameter estimation followed by a single E step for estimating group membership probabilities for new data. A hybrid semi-supervised approach, similar to \citet{murphy10}, could also be considered.
Recent work by \citet{baudry10}, \citet{hennig10}, \citet{scrucca16}, and \citet{Melnykov2016} has examined different ways of combining components to create multi-component clusters, as opposed to the ``one component equals one cluster'' approach mentioned in Section \ref{sec:mixclust}. This is useful in cases where the underlying cluster distribution is different from the assumed component distribution. Particularly in cases with skewness or heavy tails, setting each component as a cluster can lead to an overestimate of the number of groups in the data. It may be possible for this to happen in the GMM case as well, particularly when there are outliers, so future work could look at adapting the methods for combining marginal density components to combining conditional density components instead.
Issues with this method can arise from difficulties in finding good starting values for the clustering. The k-means algorithm does a good job in most cases, but other methods could be used in its place should problems arise. There can also be issues when a cluster is assigned only one or a pair of trajectories, as this can result in singular matrices during estimation of the cluster and regression parameters. Including a noise component in the clustering might be a possible way to deal with this issue (as a single-observation cluster could be argued to be an outlier/noise observation). Bayesian estimation, used as an alternative to the EM-based estimation advocated in this paper, could also help with such regularization issues.
This paper presents an initial look at variable selection in the growth mixture model framework, which will hopefully stimulate interesting further research in this area. R code for fitting the models described in this paper is available on request from the authors.
\end{spacing}
\end{document}
\begin{document}
\title[Bernoulli problem]{A Bernoulli problem with non constant gradient boundary constraint}
\author[C. Bianchini]{Chiara Bianchini}
\address{C. Bianchini, Institut Elie Cartan, Universit\'e Henri Poincar\'e Nancy, Boulevard des Aiguillettes B.P. 70239, F-54506 Vandoeuvre-les-Nancy Cedex, France} \email{[email protected]}
\date{}
\keywords{Bernoulli problem, convexity} \subjclass{35R35, 35J66, 35J70.}
\begin{abstract} We present in this paper a result about existence and convexity of solutions to a free boundary problem of Bernoulli type, with non constant gradient boundary constraint depending on the outer unit normal. In particular we prove that, in the convex case, the existence of a subsolution guarantees the existence of a classical solution, which is proved to be convex.
\end{abstract}
\maketitle
\section{Introduction}
Consider an annular condenser with a constant potential difference equal to one, such that one of the two plates is given and the other one has to be determined in such a way that the intensity of the electrostatic field is constant on it. If $\Omega\setminus\overline K$ represents the condenser, whose plates are $\Omega$ and $K$ (with $\overline K\subseteq\Omega$), and $u$ is the electrostatic potential, it holds $\Delta u=0$ in $\Omega\setminus\overline K$ and $|Du|=\text{constant}$ on either $\partial\Omega$ or $\partial K$, depending on which of them represents the unknown plate.
This gives rise to the classical Bernoulli problems (interior and exterior), where the involved differential operator is the Laplacian $\Delta$, which expresses the linearity of the electrical conduction law. However, some physical situations can be better modeled by general power flow laws, thus yielding the $\text{p}$-Laplacian as the governing operator.
Moreover, one can consider the possibility to have a non constant prescribed intensity of the electric field on the free boundary. In particular, as the intensity of the electrostatic field $\overrightarrow{E}$ on an equipotential surface is related to its outer unit normal vector, through the curvature of that surface, one can assume $|\overrightarrow{E}|$ to depend on the outer unit normal vector $\nu(x)$ of the unknown boundary. In view of these considerations, we deal here with the following problem.
Given a domain $\Omega\subseteq\mathbb{R}^N$, a real number $\text{p}>1$ and a smooth function $g:S^{N-1}\to \mathbb{R}$ such that \begin{equation}\label{gcC} c\le g(\textsf{v})\le C\quad\text{ for every }\textsf{v}\in S^{N-1}, \end{equation} for some $C>c>0$, find a function $u$ and a domain $K$, contained in $\Omega$, such that \begin{equation}\label{Bint} \begin{cases} \Delta_{\text{p}} u(x)=0 \quad &\text{in }\Omega\setminus\overline K,\\ u=0 \quad &\text{on }\partial\Omega,\\ u=1, \quad &\text{on }\partial K,\\
|Du(x)|=g(\nu(x)), \quad &\text{on }\partial K,
\end{cases} \end{equation} where $\nu(x)=\nu_K(x)$ is the outer unit normal to $\partial K$ at $x\in\partial K$.
Here and later $\Delta_{\text{p}}$ is the $\text{p}$-Laplace operator for $\text{p}>1$, that is $$
\Delta_{\text{p}} u=\text{\sf div}(|Du|^{\text{p}-2}Du)\,. $$
If $u$ is a solution to (\ref{Bint}), throughout the paper we will tacitly extend $u$ by $1$ in $K$, so that a solution $u$ to (\ref{Bint}) is defined, and continuous, in the whole $\Omega$.
The boundary condition $|Du(x)|=g(\nu(x))$ on $\partial K$ has to be understood in a classical way: $$
\lim_{\substack{y\to x \\ y\in \Omega\setminus\overline K}}|Du(y)|=|Du(x)|. $$ Moreover, in the convex case, that is when $\Omega$ is a convex set, we are allowed to consider classical solutions (justified by \cite{L}, since $K$ inherits the convexity of $\Omega$, as shown later).
Notice that, given $K$ in (\ref{Bint}), the function $u$ is uniquely determined since it represents the capacitary potential of $\Omega\setminus\overline K$; on the other hand, given the function $u$, the free boundary $\partial K$ is determined as $\partial K=\partial\{x\in\mathbb{R}^N\ :\ u(x)\ge 1\}$. Hence, we will speak of {\em a solution} to (\ref{Bint}) referring indifferently to the set $K$ or to the corresponding potential function $u$ (or to both), and we will indicate the class of solutions as $\mathscr{F}(\Omega,g)$, where $\Omega$ is the given domain and $g$ is the gradient boundary datum.
The original interior Bernoulli problem corresponds to the case $\text{p}=2$, that is the Laplace operator, with constant gradient boundary constraint $g(\nu(x))\equiv \tau$. In general, given a domain $\Omega\subseteq \mathbb{R}^N$ and $\tau>0$, the classical interior Bernoulli problem consists in finding a domain $K$, with $\overline K \subseteq \Omega$, and a function $u$ such that \begin{equation}\label{Bintconstant} \begin{cases} \Delta_{\text{p}} u(x)=0 \quad &\text{in }\Omega\setminus\overline K,\\ u=0 \quad &\text{on }\partial\Omega,\\ u=1, \quad &\text{on }\partial K,\\
|Du|=\tau, \quad &\text{on }\partial K.
\end{cases} \end{equation} Easy examples show that Problem (\ref{Bintconstant}), and hence Problem (\ref{Bint}), need not have a solution for every given domain $\Omega$ and for every positive constant $\tau$. Many authors have considered the classical problem, both from the point of view of existence and of geometric properties of the solutions. In particular we recall the pioneering work of Beurling \cite{B} and several other contributions such as \cite{A2}, \cite{AC}, \cite{F}, \cite{FR}, \cite{La}. The treatment of the nonlinear case is more recent and mainly due to Henrot and Shahgholian (see for instance \cite{HS1}, \cite{HS2}; see also \cite{AM}, \cite{BS}, \cite{GK}, \cite{MPS} and references therein). The uniqueness problem was solved later in \cite{CT} for $\text{p}=2$ and in \cite{BS} for $\text{p}>1$. Here we summarize some of the known results.
\begin{equation}\label{Bresume}
\begin{minipage}{0.85\linewidth} Let $\Omega\subseteq\mathbb{R}^N$ be a convex $C^1$ bounded domain. There exists a positive constant $\Lab{\text{p}}=\Lab{\text{p}}(\Omega)$, named \emph{Bernoulli constant}, such that Problem (\ref{Bintconstant}) has a solution if and only if $\tau\ge\Lab{\text{p}}$; in such a case there is at least one which is $C^{2,\alpha}$ and convex. In particular for $\tau=\Lab{\text{p}}$ the solution is unique.
\end{minipage} \end{equation}
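As a simple illustration of (\ref{Bresume}), consider the radially symmetric case $N=2$, $\text{p}=2$, $\Omega=B(0,R)$ and $K=B(0,r)$ with $0<r<R$: the capacitary potential and the gradient constraint are then explicit, namely $$ u(x)=\frac{\log\left(R/|x|\right)}{\log\left(R/r\right)}, \qquad |Du|\equiv\frac{1}{r\log(R/r)}\ \text{ on }\partial B(0,r), $$ and minimizing the latter quantity over $r\in(0,R)$ (the minimum is attained at $r=R/e$) shows that no concentric solution exists for $\tau<e/R$, while at least one exists for $\tau\ge e/R$; the value $e/R$ is precisely the Bernoulli constant $\Lab{2}(B(0,R))$ of the disc.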
In this paper we consider Problem (\ref{Bint}) in the convex case, that is when the given domain is a convex set, and we prove that the convexity is inherited by the unknown domain without making additional assumptions on the function $g$. More precisely, let us indicate by $\mathscr{F}^-(\Omega,g(\nu))$ the class of the so called \emph{subsolution} to Problem (\ref{Bint}); essentially, $v$ and $K$ are subsolutions if $v$ solves $$ \begin{cases} \Delta_{\text{p}} v\ge 0 &\qquad\text{ in }\Omega\setminus\overline K\\ v=0 &\qquad\text{ on }\partial\Omega\\
v=1,\ |Dv(x)|\le g(\nu(x)) &\qquad\text{ on }\partial K\,; \end{cases} $$ (see Section \ref{secsubsol} for more details).
Our main theorem is the following.
\begin{theorem}\label{gsub-sol}
Let $\Omega\subseteq\mathbb{R}^N$ be a convex $C^1$ domain, and $g:S^{N-1}\to\mathbb{R}$ be a continuous function such that (\ref{gcC}) holds. If $\mathscr{F}^-(\Omega,g(\nu))$ is non empty, then there exists a $C^1$ convex domain $K$ with $\overline K\subseteq\Omega$ such that the $\text{p}$-capacitary potential $u$ of $\Omega\setminus\overline K$ is a classical solution to the interior Bernoulli problem (\ref{Bint}). \end{theorem}
The idea of a non constant boundary gradient condition has been developed in the literature by many authors, who considered the case of a space variable dependent constraint, ${\textrm{\sf{a}}}:\Omega\to(0,+\infty)$. We refer to \cite{AC},\cite{ACF},\cite{MW} for a functional approach, and to \cite{A2}, \cite{A3}, \cite{HS4} for the subsolution method. In particular, an analogous result to Theorem \ref{gsub-sol} has been proved in \cite{HS4} where the authors considered a Bernoulli problem with non constant gradient boundary datum ${\textrm{\sf{a}}}(x)$. For a given convex domain $\Omega\subseteq \mathbb{R}^N$, and a positive function ${\textrm{\sf{a}}}:\Omega\to(0,\infty)$, such that $$ c\le{\textrm{\sf{a}}}(x)\le C, \text{ for every } x\in \Omega, $$ for some $C>c>0$, with \begin{equation}\label{acvx} \frac 1{{\textrm{\sf{a}}}} \text{ convex in }\Omega, \end{equation} they consider the problem \begin{equation}\label{Bintaaa} \begin{cases} \Delta_{\text{p}} u(x)=0 \quad &\text{in }\Omega\setminus\overline K,\\ u=0 \quad &\text{on }\partial\Omega,\\ u=1, \quad &\text{on }\partial K,\\
|Du(x)|={\textrm{\sf{a}}}(x) \quad &\text{on }\partial K.
\end{cases} \end{equation}
and they prove that, if a subsolution to the problem exists, then there exists a classical solution and, moreover, the convexity of the given domain transfers to the free boundary.
Notice that in Problem (\ref{Bintaaa}) the function ${\textrm{\sf{a}}}$ is required to be given in the whole $\Omega$, while in (\ref{Bint}) the function $g$ is defined only on the unit sphere $S^{N-1}$. Moreover, while in Problem (\ref{Bintaaa}) the convexity property (\ref{acvx}) is required for the boundary constraint ${\textrm{\sf{a}}}$, in Problem (\ref{Bint}) no additional assumptions on $g$ are needed.
\section{Preliminaries}
\subsection{Notations}
In the $N$-dimensional Euclidean space, $N\geq 2$, we denote by $|\cdot|$ the Euclidean norm; for $K\subseteq\mathbb{R}^N$, we denote by $\overline K$ its closure and by $\partial K$ its boundary, while $\text{\sf conv}(K)$ is its convex hull. $\mathscr{H}^m$ indicates the $m$-dimensional Hausdorff measure.
We denote by $B(x_0,r)$ the ball in $\mathbb{R}^N$ of center $x_0$ and radius $r>0$: $B(x_0,r)=\{x\in\mathbb{R}^N\,:\,|x-x_0|<r \}$; in particular $B$ denotes the unit ball $B(0,1)$ and we set $\omega_N=\mathscr{H}^N(B)$. Let us define $$
S^{N-1}=\partial B=\{x\in\mathbb{R}^N\,:\,|x|= 1\}; $$ hence $\mathscr{H}^{N-1}(S^{N-1})=N\omega_N$.
We set $$
\Lambda_m=\{ \lambda=(\lambda_1,...,\lambda_m)\ |\ \lambda_i\ge0, \sum_{i=1}^m\lambda_i=1 \}. $$
Given an open set $\Omega\subseteq \mathbb{R}^N$, and a function $u$ of class $C^2(\Omega)$, $Du=(u_{x_1},\dots,u_{x_N})$ and $D^2u=(u_{x_ix_j})_{i,j=1}^N$ denote its gradient and its Hessian matrix respectively.
\subsection{Quasi-concave and $Q^2_-$ functions}
An upper semicontinuous function $u:\mathbb{R}^N\to\mathbb{R}\cup\{\pm\infty\}$ is said to be \emph{quasi-concave} if it has convex superlevel sets, or, equivalently, if $$ u\left( (1-\lambda)x_0+\lambda x_1 \right)\ge \min\{ u(x_0),u(x_1) \}, $$ for every $\lambda\in[0,1]$, and every $x_0, x_1\in\mathbb{R}^N$. If $u$ is defined only in a proper subset $\Omega$ of $\mathbb{R}^N$, we extend $u$ as $-\infty$ in $\mathbb{R}^N\setminus\Omega$ and we say that $u$ is quasi-concave in $\Omega$ if such an extension is quasi-concave in $\mathbb{R}^N$. Obviously, if $u$ is concave then it is quasi-concave.
By definition a quasi-concave function determines a family of monotone decreasing convex sets; on the other hand, a continuous family of monotone decreasing convex sets, whose boundaries completely cover the first element, can be seen as the family of super-level sets of a quasi-concave function.
We use a local strengthened version of quasi-concavity, which was introduced and studied in \cite{LS}: let $u$ be a function defined in an open set $\Omega\subset\mathbb{R}^N$; we say that $u$ is a $Q^2_-$ function at a point $x\in \Omega$ (and we write $u\in Q^2_-(x)$) if: \begin{enumerate} \item $u$ is of class $C^2$ in a neighborhood of $x$; \item its gradient does not vanish at $x$;
\item the principal curvatures of $\{ y\in\mathbb{R}^N\ |\ u(y)= u(x)\}$ with respect to the normal $-\frac{Du(x)}{|Du(x)|}$ are positive at $x$. \end{enumerate}
In other words, a $C^2$ function $u$ is $Q^2_-$ at a regular point $\bar{x}$ if its level set $\{x\,:\,u(x)=u(\bar{x})\}$ is a regular convex surface (oriented according to $-Du$), whose Gauss curvature does not vanish in a neighborhood of $\bar{x}$. By $u\in Q^2_-(\Omega)$ we mean $u\in Q^2_-(x)$ for every $x\in\Omega$.
\subsection{Quasi concave envelope}
If $u$ is an upper semicontinuous function, we denote by $u^*$ its \emph{quasi-concave envelope}. Roughly speaking, $u^*$ is the function whose superlevel sets are the closed convex hulls of the corresponding superlevel sets of $u$. It turns out that $u^*$ is also upper semicontinuous.
Let us indicate by $\ot{t}$ the superlevel set of $u$ of value $t$, i.e. $$
\ot{t}=\{ x\in\mathbb{R}^N\ |\ u(x)\ge t \}, $$ and let $\Omega^*(t)=\overline{\text{\sf conv}(\ot{t})}$. Then $u^*$ is the function defined by its superlevel sets in the following way: $$
\Omega^*(t)=\{ x\in\mathbb{R}^N\ |\ u^*(x)\ge t \} \qquad \text{ for every }t\in\mathbb{R}\,, $$ that is $$
u^*(x)=\sup \{ t\in\mathbb{R}\, |\, x\in\Omega^*(t) \}. $$ Equivalently, as shown in \cite{CS}, \begin{eqnarray*}
u^*(x)=\max\left\{ \min\{ u(x_1),...,u(x_{N+1}) \} \ :\ x_i\in\overline{\Omega\setminus\overline K}, \exists \lambda\in\Lambda_{N+1}, \ x=\sum_{i=1}^{N+1}\lambda_ix_i \right\}. \end{eqnarray*}
Notice that $u^*$ is the smallest upper semicontinuous quasi-concave function greater than $u$, hence in particular $u^*\ge u$. Moreover, if $u$ satisfies $\Delta_{\text{p}} u=0$ in a convex ring $\Omega\setminus\overline K$ (that is $\Omega,K$ convex with $\overline{K}\subseteq\Omega$), then it holds $\Delta_{\text{p}} u^*\ge 0$ in $\Omega\setminus\overline K$ in the viscosity sense (see for instance \cite{CS}).
\subsection{Subsolutions}\label{secsubsol}
In his pioneering work \cite{B}, Beurling introduced the notion of sub-solution for the classical Problem (\ref{Bintconstant}). This concept was further developed by Acker \cite{A2} and then generalized by Henrot and Shahgholian \cite{HS2}, \cite{HS4} to the case $\text{p}>1$, both for constant and for non constant gradient boundary constraint.
Following the same idea, let us introduce the class of {sub-solutions} to the generalized Bernoulli Problem (\ref{Bint}). Let $\Omega$ be a subset of $\mathbb{R}^N$; $\mathscr{F}^-(\Omega,g)$ is the class of functions $v$ that are Lipschitz continuous on $\overline{\Omega}$ and such that \begin{equation}\label{Asubsol} \begin{cases} \Delta_{\text{p}} v \ge 0 &\qquad\text{in } \{ v<1\}\cap \Omega\\ v =0 &\qquad\text{on }\partial \Omega\\
|Dv(x)| \le g(\nu(x)) &\qquad\text{on }\partial\{v<1 \}\cap \Omega\,. \end{cases} \end{equation} If $v\in\mathscr{F}^-(\Omega,g)$ we call it a {\em subsolution}.
As in the definition of solutions, we say that a set $K$ is a {\em subsolution}, and we possibly write $K\in\mathscr{F}^-(\Omega,g)$ or $(v,K)\in\mathscr{F}^-(\Omega,g)$, if $K=\{x\in\Omega\,:\,v(x)\ge 1\}$ for some $v\in\mathscr{F}^-(\Omega,g)$.
In the standard case $g\equiv\tau$, for some positive constant $\tau$, it is known that the class of subsolutions and that of solutions are equivalent; indeed, in \cite{HS2} it is proved that, if $\Omega$ is a $C^1$ convex domain and $\mathscr{F}^-(\Omega,\tau)$ is not empty, then there exists a classical solution to (\ref{Bintconstant}). In particular it is proved that $$ \widetilde{K}(\Omega,\tau)= \bigcup_{C\in\mathscr{F}^-(\Omega,\tau)}C, \qquad \widetilde{u}=\sup_{v\in\mathscr{F}^-(\Omega,\tau)}v, $$ solve Problem (\ref{Bintconstant}) and hence, recalling (\ref{Bresume}), it follows as a trivial consequence: $$ \Lab{\text{p}}(\Omega)=\inf\{\tau\ :\ \mathscr{F}(\Omega,\tau)\neq\emptyset\}=\inf\{\tau\ :\ \mathscr{F}^-(\Omega,\tau)\neq\emptyset\}. $$
Regarding the proof of Theorem \ref{gsub-sol}, it is clear that an analogous relation between subsolutions and solutions holds true also in the non constant case, that is: $$ \widetilde{K}(\Omega,g)= \bigcup_{C\in\mathscr{F}^-(\Omega,g)}C, \qquad \widetilde{u}=\sup_{v\in\mathscr{F}^-(\Omega,g)}v, $$ solve Problem (\ref{Bint}) and they are called the \emph{maximal solution} to (\ref{Bint}).
\section{Proof of the main result}
In order to give a proof of Theorem \ref{gsub-sol}, some preliminary steps are needed; they are collected in the following propositions and lemmas.
\begin{proposition}\label{gcap}\label{gconvU}
Let $\Omega$ be a regular $C^1$ convex subset of $\mathbb{R}^N$; let $u_0,u_1\in\mathscr{F}^-(\Omega,g)$ with $K_0=\{u_0=1\}$ and $K_1=\{u_1=1\}$. Define $K=K_0\cup K_1$, $K^*=\overline{\text{\sf conv}(K)}$. Then $v\in\mathscr{F}^-(\Omega,g)$, where $v$ is the $\text{p}$-capacitary potential of $\Omega\setminus K^*$.
Moreover $$
|Dv(x)| \le g(\nu_{K^*}(y_x)), $$ for every $x\in\Omega\setminus K^*$ and $y_x\in\partial K^*$ such that $\nu_{K^*}(y_x)= -Dv(x)/|Dv(x)| =\nu_{\ot{t}}(x)$, where $\ot{t}$ is the superlevel set of $v$ of level $t=v(x)$. \end{proposition}
\begin{proof}
Let $u^*$ be the quasi-concave envelope of $u=\max\{u_0,u_1\}$; it satisfies in the viscosity sense $$ \begin{cases} \Delta_{\text{p}} u^* \ge 0 &\qquad\text{ in }\Omega\setminus K^*\\ u^*=0 &\qquad\text{ on }\partial\Omega\\ u^*=1 &\qquad\text{ on }\partial K^*, \end{cases} $$ and hence, by the viscosity comparison principle, \begin{equation}\label{vu*}
|Dv|\le |Du^*|\text{ on }\partial K^*. \end{equation} Consider $y\in\partial K^*$; then either $y\in \partial K^*\cap\partial K$ or $y\in\partial K^*\setminus \partial K$.
Assume $y\in\partial K^*\cap\partial K$, so that $\nu_{K}(y)=\nu_{K^*}(y)$. Then either $y\in\partial K_0$ or $y\in\partial K_1$, and hence $|Du^*(y)|\le|Du_0(y)|$ or $|Du^*(y)|\le|Du_1(y)|$; in both cases $$
|Dv(y)|\le|Du_i(y)|\le g\big(\nu_{K}(y)\big)=g\big(\nu_{K^*}(y)\big), $$ as $u_0,u_1\in\mathscr{F}^-(\Omega,g)$.
Now assume $y\in\partial K^*\setminus\partial K$. By Proposition 3.1 in \cite{CS} there exist $x_1,...,x_N\in \partial(K_0\cup K_1)$ such that $x_1,...,x_l\in\partial K_0$, $x_{l+1},...,x_N\in\partial K_1$ (with $0\le l\le N$) and $\lambda\in\Lambda_N$ such that $$ \nu_{K_0}(x_i)=\nu_K(x_i)\text{ parallel to }\nu_{K_1}(x_j)=\nu_K(x_j)\text{ parallel to }\nu_{K^*}(y), $$ for $i=1,...,l$, $j=l+1,...,N$ and $y=\sum_{i=1}^N\lambda_ix_i$. Moreover, thanks to Proposition 2.2 in \cite{BLS} it holds $$
|Du^*(y)|=\left( \sum_{k=1}^N \frac{\lambda_k}{|Du_{i_k}(x_k)|} \right)^{-1}\le \left( \sum_{k=1}^N \frac{\lambda_k}{g(\nu_{K_{i_k}}(x_k))} \right)^{-1} =\left( \sum_{k=1}^N \frac{\lambda_k}{g(\nu_{K^*}(y))} \right)^{-1}=g(\nu_{K^*}(y)), $$ where $i_k\in\{0,1\}$. Hence, by (\ref{vu*}), $v\in\mathscr{F}^-(\Omega,g)$.
Notice that, as $\Omega,K^*$ are convex, the function $v$ is quasi-concave; in particular, thanks to Lewis's result \cite{L}, $v\in \textrm{Q}^2_-(\Omega\setminus K^*)$. For every $x\in\Omega\setminus K^*$, let $\nu_{\ot{t}}(x)$ be the outer unit normal vector to the superlevel set $\{v(y)\ge v(x)\}$; hence, by Lemma 4.1 in \cite{BS}, it holds $$
|Dv(x)|\le g(\nu_{K^*}(y_x)), $$ where $y_x\in\partial K^*$ is such that $\nu_{K^*}(y_x)=\nu_{\ot{t}}(x)$. \end{proof}
For the sake of completeness we rewrite here two lemmas in \cite{HS4} which are particularly useful in the proof of Theorem \ref{gsub-sol}.
\begin{lemma}[\cite{HS4}]\label{HS4_211}
Let $D_R=\{x_1<1\}\setminus B_R$, where $B_R=B(x_R,R)$ and $x_R=(-R,0,\dots,0)$. Assume $l>0$ and let $u_R$ solve $$ \begin{cases} \Delta_{\text{p}} u=0 &\qquad\text{ in }D_R\\ u=l &\qquad\text{ on }\{x_1=1\}\\ u=0 &\qquad\text{ on }\partial B_R. \end{cases} $$
Then for any $\varepsilon>0$ there exists $R$ sufficiently large such that $|Du_R|\le l+\varepsilon$ on $\partial B_R$. \end{lemma}
\begin{lemma}[\cite{HS4}]\label{lemmablowup}
Let $u$ be the $\text{p}$-capacitary potential of the convex ring $\Omega\setminus\overline K$, with $|Du|\le C$ uniformly in $\Omega\setminus\overline K$. Then any converging blow-up sequence $$ u_{r_j}(x)=\frac 1{r_j}\ \left(1-u(r_j x)\right), $$
at any boundary point gives a linear function $u_0=\alpha x_1^+$, after a suitable rotation and translation, where $\alpha=|Du(O)|$ and $O$ denotes the origin. \end{lemma}
Following the idea of the proof of Theorem 1.2 in \cite{HS4}, we now present the proof of Theorem \ref{gsub-sol}.
\begin{proof}[Proof of Theorem \ref{gsub-sol}.]
Let us consider $u=\sup\{v\ : v\in\mathscr{F}^-(\Omega,g)\}$, and let $u_n$ be a maximizing sequence. Notice that, thanks to Proposition \ref{gconvU}, we can assume $\{u_n\}$ to be an increasing sequence of the $\text{p}$-capacitary potentials of convex rings $\Omega\setminus\overline{K_n}$, with $|Du_n(x)|\le g(\nu_{K_n}(x))$ on $\partial K_n$ for every $n$. Let $K$ be the increasing limit of $K_n$; hence $K$ is convex and, as uniform limit of $\text{p}$-harmonic functions, $u$ is the $\text{p}$-capacitary potential of $\Omega\setminus\overline K$, with $|Du(x)|\le g(\nu_K(x))$ on $\partial K$.
We need to show that in fact $|Du(x)|=g(\nu_K(x))$; we will prove it by contradiction, constructing a function $w\in\mathscr{F}^-(\Omega,g)$ such that $w\ge u$ with $w>u$ at some point. Let us recall that $\nu(x)$ denotes the outer unit normal vector to $\partial K$ at $x$.
Let us assume by contradiction that there exists a point $y\in\partial K$ such that $$
\alpha=|Du(y)|<g(\nu(y)) $$ and assume $y$ to be the origin $O$ with outer unit normal $\nu$ parallel to the first axis. Let $\delta$ be such that \begin{equation}\label{alpha} \alpha + 3\delta < g(\nu). \end{equation} By Lemma \ref{lemmablowup} the sequence $$ u_{r_j}=\frac 1{r_j}\left(1-u(r_jx)\right), $$ converges to $u_0(x)=\alpha x_1^+$, hence for every $\eta>0$, \begin{equation}\label{u>} u(x)>1-\alpha x_1^+ -\eta r_j, \end{equation} if $r_j$ is small enough, for $x=(x_1,...,x_N)\in B(O,r_j)$.
Consider $$ w_R(x)=w_{R,\varepsilon}(x)= \left(\alpha+\frac{\delta}2\right) \left( \frac{u_R-\varepsilon}{\alpha+\delta/2-\varepsilon} \right)^+, $$ where $u_R$ is as in Lemma \ref{HS4_211} and $l=\alpha+\delta/2$. Then there exist $\varepsilon_0,R_0>0$ such that for $\varepsilon\le\varepsilon_0$ and $R\ge R_0$, \begin{equation}\label{DwR}
|Dw_R|\le \alpha+2\delta,\text{ on }\partial\{u_R\le\varepsilon\}=\{w_R=0\}. \end{equation} Moreover, there exist $\delta_1,\delta_2>0$ such that $$ w_R>\alpha x_1^+ +\delta_2\quad\text{ on }\partial B(O,1)\cap\{x_1>-\delta_1\}, $$ in particular we can fix $\delta_1$ small enough such that $\{u_R=\varepsilon\}\cap\partial B(O,1)\subseteq\{x_1>-2\delta_1\}$, and choose $$ 0<\delta_2=2\inf\{u_R(x)-\alpha {x_1}^+\ :\ x\in \partial B(O,1)\cap\{x_1>-\delta_1\}\}. $$ Let $\tilde{w}(x)=1-r_j w_R(x/r_j)$; notice that, as $u_R$ is quasi-convex, $\tilde w$ is quasi-concave. Moreover, for $r_j$ sufficiently small, recalling (\ref{u>}), we have $$ \tilde{w}< 1- \alpha x_1^+ -\delta_2 r_j<u\quad\text{ on }\partial B(O,r_j). $$
Define $$ w(x)= \begin{cases} \max\{u(x),\tilde{w}(x)\}&\qquad\text{ in }B(O,r_j),\\ u(x) &\qquad\text{ in }\mathbb{R}^N\setminus B(O,r_j), \end{cases} $$ and $W=\{\tilde{w}=1\}=r_j\{w_R=1\}$; observe that on $\partial B(O,r_j)$, $w=\tilde{w}$. By (\ref{alpha}) and (\ref{DwR}), for every $x\in W$ it holds $$
|D\tilde{w}(x)|\le \alpha +2\delta < g(\nu)-\delta. $$
Notice that $\{u_R=0\}=\partial B(x_R,R)$ and for every $x\in\partial\{u_R=0\}$ it holds $$ \lim_{R\to\infty}\nu_{B_R}(x)=\nu=(1,0,...,0). $$ Moreover $\lim_{\varepsilon\to 0}\{u_R=\varepsilon\}=\{u_R=0\}$ as limit in the Hausdorff metric of {convex sets}. Hence, by continuity, for sufficiently large $R$ and sufficiently small $\varepsilon$, we have $$
|g(\nu)-g(\nu_{W}(z))|\le \delta, $$ for every $z\in W\cap B(O,r_j)$, and hence, $$
|D\tilde{w}(x)|< g(\nu)-\delta \le g(\nu_W(x)), $$ for every $x\in W\cap B(O,r_j)$.
Then $w\in\mathscr{F}^-(\Omega,g)$ and, since $w>u$ at some points, we get a contradiction with the maximality of $u$. Therefore $|Du|=g(\nu)$ on $\partial K$. \end{proof}
\section{Final remarks}
\begin{remark} {\rm Notice that in the non-constant case no characterization of the functions $g$ for which $\mathscr{F}^-(\Omega,g)$ is nonempty is known. However, in some trivial cases the existence or non-existence of a solution can be easily deduced from the characterization of existence for the standard problem in (\ref{Bresume}). Indeed, if $g$ satisfies $$ \min_{\nu\in S^{N-1}} g(\nu)\ge\Lab{\text{p}}(\Omega),\qquad\text{ then } \quad\mathscr{F}(\Omega,\Lab{\text{p}}(\Omega))\subseteq\mathscr{F}^-(\Omega,g), $$ and hence $\mathscr{F}^-(\Omega,g)\neq\emptyset$; on the other hand, if $$ M=\max_{\nu\in S^{N-1}} g(\nu)<\Lab{\text{p}}(\Omega),\qquad\text{ then }\quad\mathscr{F}^-(\Omega,g)\subseteq\mathscr{F}^-(\Omega,M)=\emptyset, $$ and hence problem (\ref{Bint}) has no solutions. } \end{remark}
\begin{remark}[Concavity property of Bernoulli Problems (\ref{Bint})] {\rm As in the classical case, geometric properties for the maximal solutions to (\ref{Bint}) can be proved. Indeed, following the argument in \cite{BS}, it is possible to define a combination of the Bernoulli Problems (\ref{Bint}) in the Minkowski sense and to prove that Problem (\ref{Bint}) has a concave behaviour with respect to this combination. More precisely: fix $\lambda\in[0,1]$; let $\Omega_0,\Omega_1$ be two given convex domains and $g_0,g_1:S^{N-1}\to\mathbb{R}^+$ two continuous functions (both bounded away from zero). We define $\Omega_\lambda$ as the Minkowski combination of $\Omega_0,\Omega_1$, that is $$
\Omega_\lambda=(1-\lambda)\Omega_0+\lambda\Omega_1=\{z=(1-\lambda)k_0+\lambda k_1 \ |\ k_0\in \Omega_0,k_1\in \Omega_1 \}, $$ and $g_\lambda$ as the harmonic mean of $g_0$ and $g_1$, that is $$ \frac 1{g_\lambda(\nu)}=\frac{(1-\lambda)}{g_0(\nu)}+\frac{\lambda}{g_1(\nu)}. $$ Consider Problem (\ref{Bint}) for $\Omega_0,g_0$ and $\Omega_1,g_1$, respectively; we define their \emph{combined problem} of ratio $\lambda$ as the Bernoulli problem of type (\ref{Bint}), with given set $\Omega_\lambda$ and gradient boundary constraint $g_\lambda(\nu)$. Following the proof of Proposition 7.1 in \cite{BS} we can prove that if $\mathscr{F}^-(\Omega_i,g_i)$, $i=0,1,$ are nonempty, then so is $\mathscr{F}^-(\Omega_\lambda,g_\lambda)$. More precisely, let $(\widetilde{K}(\Omega_i,g_i),u_i)$ be the maximal solutions, for $i=0,1$ and let $u_{\la}$ be the Minkowski combination of $u_0$ and $u_1$ of ratio $\lambda$, that is $$ \{u_{\la}\ge t\}=(1-\lambda)\{u_0\ge t\}+\lambda\{u_1\ge t\}; $$ (see for instance \cite{BS} for more detailed definitions and properties). The function $u_{\la}$ belongs to $\mathscr{F}^-(\Omega_\lambda,g_\lambda)$ and hence, by Theorem \ref{gsub-sol}, Problem (\ref{Bint}) for $\Omega_\lambda$ and $g_\lambda$ admits a solution $(\widetilde K(\Omega_\lambda,g_\lambda),\tilde u_\lambda)$ which satisfies $$ (1-\lambda)\widetilde K(\Omega_0,g_0(\nu))+\lambda\widetilde K(\Omega_1,g_1(\nu))\subseteq \widetilde K(\Omega_\lambda, g_\lambda). $$ } \end{remark}
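For a concrete (hypothetical) instance of the harmonic mean used above: if $g_0\equiv 1$, $g_1\equiv 2$ and $\lambda=\frac12$, then $\frac{1}{g_\lambda}=\frac12\cdot 1+\frac12\cdot\frac12=\frac34$, that is $g_\lambda\equiv\frac43$.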
\begin{remark}[{A flop in the unbounded case}] {\rm It would be natural to try to extend Theorem \ref{gsub-sol} to the unbounded case with an approximation method, considering a sequence of given domains $\Omega_R=\Omega\cap B(O,R)$ as $R$ grows. As the sequence $\{\Omega_R\}$ is monotone increasing, by the comparison principle $\widetilde{K}(\Omega_R,g)$ also increases and hence it converges to a convex set. Unfortunately, this approach fails in the limit process: it turns out that $\widetilde{K}(\Omega_R,g)$ may converge to the whole given set $\Omega$, which means that the limit of the maximal solutions degenerates.
More precisely assume for simplicity $\Omega=\mathbb{R}^N$, so that $\Omega_R=B_R=B(O,R)$ (or, analogously $\Omega=H^-$ the half space $\{ x_N\le 0 \}$ and take $\Omega_R=B_R=B(x_R,R)$, where $x_R=(0,...,0,-R)$). If $R$ is sufficiently large, then the Bernoulli constant of $B_R$, $\Lab{\text{p}}(B_R)=C_N/R$ (see \cite{BS} for example) is smaller than $c_0$ and hence $$ B_r\subseteq \widetilde{K}(B_R,c_0)\subseteq\widetilde{K}(B_R,g(\nu)), $$ where $B_r=B(O,r)$ is the unique solution to Problem (\ref{Bint}) corresponding to $\Omega=B_R$ and $g(\nu)\equiv\Lab{\text{p}}(B_R)$.
Hence for sufficiently large $R$, $\mathscr{F}^-(B_R,g)$ is not empty and Theorem \ref{gsub-sol} gives a sequence of quasi-concave $\text{p}$-capacitary potentials $\{u^R\}$ which solve Problem (\ref{Bint}) in $\Omega_R\setminus\overline{K_R}$, where $K_R=B_r$. By easy computations one can check that $r=R/c_N$, for some constant $c_N$ depending only on the dimension; hence the sequence of interior domains $\{K_R\}_{R>0}$ is unbounded as $R$ tends to infinity. This implies that the limit of the maximal solutions $(K_R,u^R)$ is not a solution to the limit problem. } \end{remark}
\end{document}
\begin{document}
\title{Optimal Multi-Unit Mechanisms with Private Demands}
\begin{abstract}
In the multi-unit pricing problem, multiple units of a single item are for sale. A buyer's valuation for $n$ units of the item is $v \min \{ n, d\}$, where the per-unit valuation $v$ and the capacity $d$ are private information of the buyer. We consider this problem in the Bayesian setting, where the pair $(v,d)$ is drawn jointly from a given probability distribution. In the \emph{unlimited supply} setting, the optimal (revenue-maximizing) mechanism is a pricing scheme, i.e., a menu of lotteries. In this paper we show that under a natural regularity condition on the probability distributions, which we call \emph{decreasing marginal revenue}, the optimal pricing is in fact \emph{deterministic}. It is a price curve,
offering $i$ units of the item for a price of $p_i$, for every integer $i$. Further, we show that the revenue as a function of the prices $p_i$ is a \emph{concave} function, which implies that the optimum price curve can be found in polynomial time. This gives a rare example of a natural multi-parameter setting where we can show such a clean characterization of the optimal mechanism. We also give a more detailed characterization of the optimal prices for the case where there are only two possible demands.
\end{abstract}
\section{Introduction}
We study a pricing problem that is motivated by the following examples. A cloud computing platform such as Amazon EC2 sells virtual machines to clients, each of whom needs a different number of virtual machine hours. Similarly, cloud storage providers such as Dropbox have customers that require different amounts of storage. Software companies such as Microsoft sell software subscriptions that can have different levels of service. The levels could be the number of different documents you are allowed to create, or the number of hours you are allowed to use the software. Companies like Google and Microsoft sell API calls to artificial intelligence software, such as face recognition, to other software developers. Video and mobile games are increasingly designed in such a way that one can pay for better access to certain features. Spotify and iTunes sell music subscriptions, and different people listen to different numbers of songs in a month. Cellphone service providers like AT\&T and Verizon offer cellular phone call minutes and data. People have widely varying amounts of data consumption. Dating apps provide paid services where a certain number of messages sent by a client can be ``promoted''.
Pricing is an important component in all these examples. The aim of this paper is to understand how to price such goods, for a monopolist seller who aims to maximize her revenue. The following common features (to a first degree of approximation) of these examples are crucial for our model. \begin{itemize}
\item The marginal cost of offering a higher level of service is essentially a constant (and in many cases zero). Most of the cost is a fixed cost.
\item The valuations for the different levels of service are roughly linear, subject to a cap.
\end{itemize} Based on this, we consider the following problem. There is a single good with multiple units of it for sale. Equivalently, there is a single service with various levels of service. For ease of presentation we simply refer to `goods' and `units' from now on. The marginal cost to the seller for procuring another unit of the good is a constant. There is a population of buyers, each of whom has a {\em linear} valuation for consuming a number of units of the good, subject to a \emph{cap} (which we refer to as \emph{demand} henceforth). The type/private information of a buyer is determined by the per-unit valuation and the demand. Buyers try to maximize their utility, which is quasi-linear, i.e., the valuation minus their payment. The question is: what is the revenue-maximizing pricing scheme?
The standard approach in mechanism design \cite{Myerson} is Bayesian: assume that the types are drawn from a given distribution and find/characterize the incentive compatible (IC) mechanism that maximizes the expected revenue when the buyer types are drawn from this distribution. The optimization is over randomized mechanisms, which in our case corresponds to pricing lotteries. Here's a simple example that shows that lotteries can obtain better revenue than any deterministic pricing scheme. We represent the type of a buyer with a pair $(v,d)$, where $v$ is the per unit valuation and $d$ is the demand. \begin{example} [Deterministic pricing is not optimal]
\label{eg:randopt}
Suppose that there are 3 types of buyers, all occurring with equal probability $\frac{1}{3}$. These types are $t_1 = \left( 1 , 3 \right)$, $t_2 = \left( 1 , 2 \right)$ and $t_3 = \left( 6 , 1 \right)$. Consider the lottery (which happens to be optimal for this case) that offers the following options: \begin{enumerate}
\item 3 units at a price of 3, or
\item a lottery that gives $2$ units at a price of 2 with probability $\frac{3}{4}$, and nothing otherwise. \end{enumerate} Buyers of type $t_1$ and $t_3$ buy 3 units, whereas buyers of type $t_2$ buy the lottery (paying the price of $2$ only when the units are delivered, i.e., an expected payment of $1.5$), for a total expected revenue of $\frac{7.5}{3}$. Consider the two deterministic prices that are in the support of the lottery. The first one offers 3 units at a price of 3 and 2 units at a price of 2. In this case, a type $t_3$ buyer will switch to buying 2 units instead of 3 since her demand is only 1. Thus you get a revenue of $\frac{7}{3}$. The $1/4$ probability of not getting anything in the lottery makes the $t_3$ buyer pay for 3 units. The other price in the support of the lottery only offers 3 units at a price of 3. Buyers of type $t_1$ and $t_3$ buy this option whereas $t_2$ will not buy anything, resulting in a revenue of $\frac{6}{3} = 2$. It can in fact be argued that the revenue of $\frac{7}{3}$ is optimal for deterministic prices.\footnote {Any deterministic mechanism will have $3$, possibly distinct, prices, one for each possible number of units. Consider the price per unit for $2$ units $q_2$, and the price per unit for $3$ units $q_3$. If both $q_2$ and $q_3$ are strictly bigger than $1$ then $t_1$ and $t_2$ will not buy; the revenue in this case is at most $\frac{6}{3} = 2$. If $q_3 \leq 1$ and $q_2 > 1$, then $t_2$ will not buy (the price for one unit is larger than $q_2$, so buying one unit is not an option); in this case the maximum price we could charge for one unit is $3$, and therefore the maximum revenue attainable is $\frac{6}{3} = 2$. If $q_3 \leq 1$ and $q_2 \leq 1$, setting them equal to $1$ only increases revenue, for a maximum of $\frac{7}{3}$. The last case ($q_3 > 1, q_2 < 1$) is infeasible. } \end{example}
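The revenue computations in Example~\ref{eg:randopt} can be reproduced with a few lines of Python; the sketch below assumes that the lottery price is charged only upon allocation and that ties are broken in favour of the more expensive option, as in the accounting above.
\begin{verbatim}
# Types (v, d), each occurring with probability 1/3.
types = [(1, 3), (1, 2), (6, 1)]

# A menu option is (allocation probability, units, price paid upon allocation).
lottery_menu = [(1.0, 3, 3.0), (0.75, 2, 2.0)]
det_menu_1   = [(1.0, 3, 3.0), (1.0, 2, 2.0)]   # first price in the support
det_menu_2   = [(1.0, 3, 3.0)]                  # second price in the support

def expected_revenue(menu):
    total = 0.0
    for v, d in types:
        # Pick the option with the highest expected utility; the second key
        # breaks ties in favour of the higher expected payment.
        best = max(menu + [(0.0, 0, 0.0)],
                   key=lambda o: (o[0] * (v * min(o[1], d) - o[2]),
                                  o[0] * o[2]))
        total += best[0] * best[2]
    return total / len(types)

print(expected_revenue(lottery_menu))  # 2.5    = 7.5/3
print(expected_revenue(det_menu_1))    # 2.33.. = 7/3
print(expected_revenue(det_menu_2))    # 2.0    = 6/3
\end{verbatim}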
It appears that the optimal mechanism is usually randomized for small examples with discrete support. This phenomenon is quite common. While \citet{Myerson} showed that the optimal mechanism for single dimensional types is deterministic under quite a general assumption about the prior distribution called regularity, even slight multi dimensional generalizations end up in randomized mechanisms as optimal \cite{thanassoulis2004haggling,pavlov2011optimal,hart2015maximal}. However, practical considerations force a seller to stick to deterministic mechanisms for the most part. (This is true for all the applications listed above.) Moreover, the optimal randomized mechanism sometimes doesn't even have a finite description \cite{DaskalakisDT13}. Hence it is important to understand the structure of the optimal deterministic mechanism. In this paper we offer two insights in this regard.
\textbf{Our first contribution is to identify a natural condition that guarantees that the optimal (randomized\footnote{We use the convention that the optimal mechanism is always randomized. When we wish to restrict ourselves to deterministic mechanisms, we will use ``optimal deterministic mechanism/pricing''.}) mechanism is deterministic}.
We call the condition we need \emph{decreasing marginal revenue} (DMR), in accordance with previous literature \cite{che1998standard}. Regularity requires that the \emph{virtual value function} is monotone, or equivalently, that the revenue function is concave in the \emph{quantile} space. DMR instead requires that \emph{in the value space}, the marginal revenue is decreasing, or equivalently that the revenue function is concave. In other words, a probability distribution with CDF $F$ is DMR, if the function $v(1-F(v))$, specifying the expected revenue of posting a price $v$, is concave. The condition we need for the optimal pricing to be deterministic is that the marginal distributions for $v$, conditioned on a given demand, are all DMR. We will provide a more detailed analysis of the DMR condition below. We also give a detailed description of the optimal prices in case there are only two distinct demands in the distribution. We note that the case of 2 distinct levels of service is quite common (e.g., {\em limited} and {\em premium}).
\paragraph{Closely Related Work} \citet{malakhov2009optimal} consider the same problem (more generally in the multiple bidder case), but made 2 strong assumptions: (1) that the buyers cannot report a higher demand, and (2) that the distribution satisfies the following: the Myerson virtual value\footnote{
The Myerson virtual value given a distribution with CDF $F$ and PDF $f$ is $\phi(v) = v - \frac{1-F(v)}{f(v)}$.
In our case, we define the virtual value of a type $(v,d)$ by applying the same definition using the marginal distribution on $v$, conditioned on $d$, and denoted by $F_d$ and $f_d$.
$\phi(v,d) = v - \frac{1-F_d(v)}{f_d(v)}$. } is monotone in both the value and the demand. This essentially results in the problem separating out into a 1 dimensional problem, one for each $d$. The non-triviality in the 2 dimensional problem comes because buyers can misreport their demands. The first assumption disallows reporting a higher demand. The second assumption makes reporting a lower demand never profitable, without having to do anything extra. When specialized to the case of a single buyer, it implies that a deterministic pricing is optimal, since the same is true for the 1 dimensional case.
The recent work of \citet{fiat2016fedex} solves the single buyer problem, with only the first assumption above, that buyers cannot report a higher demand. This is a significant improvement, since the second assumption above, which requires something quite strong about the correlation between value and demand, is the more problematic one. \citet{fiat2016fedex} consider what they call the ``FedEx'' problem, which too has a 2 dimensional type space, where one of them is a value $v$, and the other is a ``deadline'' $d$. The seller offers a service, such as delivering a package, at various points of time, and the buyer's valuation is $v$ for any time that is earlier than her deadline $d$. In their model, a higher $d$ corresponds to an inferior product, as opposed to our model where higher $d$ is superior. The other difference is that in their model, all times earlier than $d$ have the same valuation and times later than $d$ have a zero valuation, whereas in our model, the valuation stays the same for higher $d$s but degrades gracefully as $d$ decreases.
Despite these differences, the relevant IC constraints are \emph{syntactically} identical. As was also observed earlier by \citet{malakhov2009optimal}, without loss of generality, one can reduce the set of IC constraints under consideration to only ``local'' constraints, such as the ones where a buyer of type $(v,d)$ reports $(v,d-1)$. This IC constraint is exactly the same for both our problem and the FedEx problem. This is surprising because, as we observed above, what $d-1$ means in both cases is semantically different. (See Section \ref{sec:structure} for an explanation.) On the other hand, a buyer with deadline $d$ can be made to never report a $d'>d$, by making sure that she is always given the service at her reported deadline. Thus, the FedEx problem is the same as our problem, with the assumption that the buyers are not allowed to report a higher demand. \citet{fiat2016fedex} characterize the optimal mechanism, without any assumptions on the prior distribution.
\paragraph{Comparison.} We do not make the assumption that the buyers cannot report higher demands. Consider the case that there are just 2 different $d$s in the distribution, with $d_1 < d_2$, and ask when it is optimal to offer each level of service at the monopoly reserve price (say, $r_1$ and $r_2$, respectively) of the corresponding marginal distributions over values. The answer for the FedEx problem is: when $r_1 \geq r_2$, which just says that $d_1$ should cost more than $d_2$. In our case, the answer is that $r_1 \leq r_2$ and $r_1 \geq \frac{d_1}{d_2} r_2$. Clearly $d_1$ units should cost less, but not too little either, since in that case some buyers with demand $d_2$ will actually prefer $d_1$ units. This points to the added difficulty in our problem: we need to worry about a buyer opting for a bundle that could be of any size, but in the FedEx problem a buyer would never consider later time slots. In addition, the new IC constraints we need to consider are of the form where $(v,d)$ reports $(\frac d {d+1} v, d+1)$. These are ``diagonal'' IC constraints, as compared to the ``vertical'' ones in the FedEx problem, where a buyer of type $(v,d)$ reports $(v,d-1)$. These are harder to handle and the techniques used in the FedEx problem, such as constructing an optimal dual, seem difficult to extend to this case.
\paragraph{The DMR Condition} We are not the first to make this assumption: \citet{che1998standard} made the exact same assumption for a very similar problem, of selling a single item to a single buyer with budget constraints, rather than demand or capacity constraints. The optimal mechanism there could still be randomized. \citet{fiat2016fedex} too show that the DMR condition is more natural than the usual notion of regularity for their setting. In particular, they show that to derive the optimal mechanism, one needs to \emph{iron}\footnote{Ironing is a technique introduced by \cite{Myerson} where the virtual value function is transformed so that it becomes monotone. This corresponds to transforming the corresponding revenue function into a concave function.} in the value space, rather than the quantile space as usual. DMR is precisely when no ironing is needed in the value space. As a result they too obtain that the optimal mechanism is deterministic under DMR. The same assumption was also made by \citet{kleinberg2003value} in the context of \emph{dynamic pricing}; see Section \ref{sec:concavityintro} for more discussion on dynamic pricing.
A simple class of DMR distributions is Uniform$[a,b]$ for any non-negative reals $a$ and $b$. More generally, any distribution with finite support and monotone non-decreasing probability density is DMR.\footnote{The second derivative of the revenue function is $-2f(v) - vf'(v)$, which is negative if $f'(v)\geq 0$.} Another standard class of demand distributions that satisfies DMR is a \emph{constant elasticity} distribution.\footnote{As the name suggests, the elasticity of demand for such a distribution is constant over the support. Such distributions are commonly used in Industrial Organization since they can be easily estimated by measuring elasticity anywhere on the support \cite{wolfstetter1999topics,berry1995automobile}.} (See Example \ref{ex:regularity comparisons} for the definition.) The DMR condition is different from the regularity condition of \citet{Myerson}, which requires that the function $\phi(v) = v - \frac{1-F(v)}{f(v)}$ is monotone non-decreasing in $v$. The example below shows that DMR and regularity are incomparable conditions.
\begin{example}[DMR vs. regularity]\label{ex:regularity comparisons} Consider the class of \emph{constant elasticity} distributions with cumulative density $F(v) = 1 - (v/a)^{1/\epsilon}$ for any $a> 0$ and $\epsilon < 0$, supported on $[a,\infty)$. A special case is when $a = 1$ and $\epsilon = -1$, in which case $F(v) = 1-1/v$, known as the \emph{equal revenue distribution}. The corresponding revenue function $v(v/a)^{1/\epsilon}$ is concave if $\epsilon \leq -1$. However, the function $\phi(v) = v - \frac{1-F(v)}{f(v)}$ simplifies to $v (1+\epsilon)$, which is monotone \emph{decreasing} for $\epsilon <-1$. Therefore such a distribution is DMR, but not regular, for $\epsilon < -1$. On the other hand, the exponential distribution is regular but not DMR. Calculations for this example are straightforward and deferred to Appendix~\ref{app:appendix}. \end{example}
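The calculations for this example are deferred to the appendix in the text; they can also be checked symbolically, e.g.\ with the following Python/sympy sketch (the comments indicate the expected conclusions).
\begin{verbatim}
import sympy as sp

v, a = sp.symbols('v a', positive=True)
eps = sp.Symbol('epsilon', negative=True)

# Constant elasticity: F(v) = 1 - (v/a)**(1/eps) on [a, infinity), eps < 0.
F = 1 - (v/a)**(1/eps)
f = sp.diff(F, v)
revenue = v*(1 - F)                          # v*(v/a)**(1/eps)
print(sp.simplify(v - (1 - F)/f))            # equals v*(epsilon + 1):
                                             # decreasing when epsilon < -1
print(sp.simplify(sp.diff(revenue, v, 2)))   # sign <= 0 iff epsilon <= -1 (DMR)

# Exponential(1): regular (virtual value v - 1) but not DMR,
# since (v*exp(-v))'' = (v - 2)*exp(-v) > 0 for v > 2.
print(sp.simplify(sp.diff(v*sp.exp(-v), v, 2)))
\end{verbatim}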
The class of DMR distributions is well-behaved in the sense that it is closed under convex combinations. In particular, the distribution that results from drawing a sample from a DMR distribution with probability $\alpha$, and from another DMR distribution with probability $1-\alpha$, is a DMR distribution.\footnote{The cumulative density of a distribution that samples from $F_1$ with probability $\alpha$, and from $F_2$ otherwise, is $F(v) = \alpha F_1(v) + (1-\alpha) F_2(v)$. Therefore, the revenue function of the convex combination is the convex combinations of the revenue functions of $F_1$ and $F_2$, and is concave if $F_1$ and $F_2$ are DMR.} On the other hand, it is known that regular distributions are not closed under convex combinations \cite{sivan2013vickrey}.
We show that the DMR condition is necessary, by giving a distribution with monotone hazard rate\footnote{The function $\frac{1-F(v)}{f(v)}$ is monotone non-increasing.}, a condition stronger than regularity, for which a deterministic pricing is not optimal. \begin{example} [MHR distributions where deterministic pricing is not optimal] The marginal distributions of Example~\ref{eg:randopt} for $d=1,2$ and $3$ are point masses at $6$, $1$ and $1$ respectively. Replace them with normal distributions $\mathcal{N}(6-\epsilon,\sigma)$, $\mathcal{N}(1-\epsilon,\sigma)$ and $\mathcal{N}(1-\epsilon,\sigma)$, respectively, truncated at $0$ and $V$, for some $V > 6$, and some $\epsilon > 0$. Truncated normal distributions satisfy the monotone hazard rate condition. For any $\delta > 0$, we can choose $\sigma$ and $\epsilon$ small enough, such that the revenue of the optimal deterministic and randomized mechanisms from Example~\ref{eg:randopt} changes by less than $\delta$. Furthermore, running these mechanisms on the new distributions yields essentially the same revenue. \end{example}
\paragraph{Our Approach.} Our approach is to show that any mechanism can be converted to a deterministic one with higher revenue, which we perform in two steps. First, we convert a mechanism so that a type with demand $d$ and with highest valuation receives a deterministic allocation of $d$ units, without reducing revenue. In order to do so, we first argue that without loss of generality, any type $(v,d)$ is assigned a lottery over $d$ units or no allocation (that is, there is no chance of receiving $d' \neq d$ units). Then we show that the randomized allocation of highest values can be converted to a deterministic allocation, without reducing revenue. Our first step holds generally and does not require the DMR condition. Second, we argue that a mechanism resulting from the first step can be converted to a deterministic mechanism. In particular, we remove all non-deterministic allocations from the mechanism, and allow types to choose only among the remaining deterministic allocations. Removal of allocations can only decrease (or keep fixed) the utility function of the mechanism pointwise. However, since the highest type of each demand was assigned a deterministic allocation, the utility of such a type remains unchanged. A technical lemma shows that under the DMR condition, we can improve revenue by pointwise lowering utility whilst fixing the utility of highest types.
\subsection{Concavity of the revenue function}\label{sec:concavityintro} Our first result implies that the optimal pricing scheme is a price vector, which offers each number of units for a given price. \textbf{Our second contribution is to show that the revenue as a function of the price vector is concave}, under the same assumption of DMR. This implies that the optimal prices can be found efficiently using the ellipsoid or other cutting plane methods \cite{khachiyan1980polynomial,vaidya1989new,lee2015faster}. Note that DMR is a condition on the marginal distributions of values, and does not immediately imply concavity as a function of the vector of prices. Note also that when we define concavity, we consider a deterministic pricing scheme where the price vector is a convex combination of two other deterministic prices, and not the corresponding lottery. This is best illustrated with the same instance as in Example \ref{eg:randopt}, which also shows that the revenue function need not always be concave. \begin{example}[Revenue function is not concave]
Consider the instance from Example \ref{eg:randopt}, and the convex combination of the two prices in the support of the lottery, using the same convex combination of $(3/4, 1/4)$ as before. Recall that the first price vector is 3 units at price 3 and 2 units at price 2 for a revenue of 7, and the second is a price of 3 for either 2 or 3 units, for a revenue of 6.
The convex combination offers 3 units at a price of 3, and 2 units at a price of $9/4$,
with a revenue of $5.25$: type $t_1$ buys 3 units, type $t_2$ buys nothing, and type $t_3$ now buys 2 units at $9/4$.
The corresponding convex combination of the revenues, $\frac{3}{4}\cdot 7+\frac{1}{4}\cdot 6 = 6.75$, is strictly larger, and hence the revenue function is not concave.
\end{example}
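These numbers can be checked with the following short Python sketch (only the 2- and 3-unit bundles are offered, and ties are broken in favour of the more expensive purchase, as in Example~\ref{eg:randopt}).
\begin{verbatim}
# Total revenue (over the three equally likely types) of a deterministic
# menu offering 2 units at p2 and 3 units at p3 (1 unit is not offered).
types = [(1, 3), (1, 2), (6, 1)]

def rev(p2, p3):
    total = 0.0
    for v, d in types:
        # (utility, payment); buying nothing is (0, 0); ties favour the seller.
        options = [(0.0, 0.0),
                   (v * min(2, d) - p2, p2),
                   (v * min(3, d) - p3, p3)]
        u, pay = max(options)
        total += pay
    return total

print(rev(2, 3))       # 7
print(rev(3, 3))       # 6
print(rev(9/4, 3))     # 5.25 < 0.75*7 + 0.25*6 = 6.75
\end{verbatim}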
\paragraph{Techniques and Difficulties} In order to show that the revenue function is concave, we first give a closed form formula for the revenue function \emph{region-wise}. We divide the price space into different regions such that a region determines the order in which a buyer with a certain demand actually ends up buying a lower sized bundle. For instance, a region might determine that for all the buyers with demand 10, as their value decreases from $\infty$ down to 0, the bundle size they actually buy goes from 10 to 7 to 3 to 0; the exact transition points of course depend on the prices. We then show that the closed form formula for each of the regions is a concave function, implying that the revenue is piecewise concave. This in general does not imply that the revenue function is concave everywhere. One might surmise that the revenue function is the minimum of each of these functions, which would show that it is concave everywhere, but that is unfortunately not true. In fact, there is a partial order over these functions such that some of them are always higher than the others. We show a somewhat surprising property, that at the boundaries of the regions where they intersect, not only do the different functions agree (which they should, for the revenue function to be even continuous), but also their gradients agree! Showing this involves arguing that the equalities that hold at an intersection imply a whole set of other equalities such that disparate terms in the two gradients cancel out.
\paragraph{Dynamic Pricing} As a corollary, we obtain that under the DMR assumption, there is an efficient \emph{dynamic pricing} scheme, defined as follows. Consider a repeated setting where in each round $\tau \in \{1,2,\ldots, T\},$ the seller posts a price vector $\mathbf{p}^{\tau}$, a buyer is drawn from a fixed distribution, and buys her utility maximizing bundle. The seller does not know the distribution of buyer types, and has to only use the purchase information in previous rounds to set the price. The goal is to approach the optimal revenue as $T$ goes to infinity. Given that the distribution satisfies the DMR assumption, our result on the concavity of the revenue curve implies that this is a special case of the ``convex bandits'' problem \cite{agarwal2011stochastic,bubeck2016kernel}. The results of \citet{bubeck2016kernel} imply that there exists a dynamic pricing scheme such that the average revenue per round converges to the optimal revenue at the rate of $ \frac{n^{9.5}}{\sqrt{T}}$, where $n$ is the number of units. These bounds are quite strong, since the best known bounds for the dynamic pricing problem in general scale \emph{exponentially} in $n$; the concavity of the revenue function is an assumption often made to escape this curse of dimensionality \cite{besbes2012blind,talluri2006theory}. We show that this assumption can be weakened to an assumption about the concavity of only the 1 dimensional revenue functions for each $d$. The same assumption was made by \citet{kleinberg2003value} to get a $1/\sqrt{T}$ regret for the case of a \emph{single} item.
\subsection{Other Related work}
The seminal work of \citet{Myerson} settled the optimal mechanism design problem for selling to multiple buyers with single parameter type spaces. Since then, it has been discovered that multi-dimensional type spaces are a lot more difficult to analyze, and this remains to this day the foremost challenge in mechanism design. The optimal mechanism becomes randomized for even slight generalizations \cite{thanassoulis2004haggling,pavlov2011optimal,hart2015maximal}. Following \citet{Myerson}, some early work solved very special cases of this \cite{laffont1987optimal,mcafee1988multidimensional}. \citet{manelli2006bundling} showed conditions under which bundling all the items was optimal when there were either 2 or 3 heterogeneous items. Success with reasonably general settings had been limited.
There has been a recent spate of results in the algorithmic game theory community characterizing optimal mechanisms for special cases, and all of these consider a single buyer. \citet{DaskalakisDT13} use optimal transport theory to give sufficient conditions of optimality for an additive buyer with independent item valuations: when ``selling only the grand bundle'' is optimal and examples where a continuum of lotteries is the unique optimal mechanism. \citet{GiannakopoulosK14} identify a (deterministic) optimal auction for an additive buyer whose valuations are i.i.d. from $U[0,1]$, for up to 6 items. \citet{HaghpanahH14} identify conditions under which either ``selling only the favorite item'' for a unit-demand buyer or selling only the grand bundle for an additive buyer is optimal. \citet{DaskalakisDT15} identify necessary \emph{and} sufficient conditions for selling only the grand bundle to be optimal for an additive buyer. The FedEx problem \cite{fiat2016fedex} that we described earlier also falls in this line of work. Our paper contributes to this line of work by identifying a reasonably general setting where the optimal mechanism is in fact deterministic, and can be computed efficiently. All these results use linear or convex program duality, to construct a witness (dual optimal solution) of optimality. We also frame our problem as a mathematical program, but argue about the primal directly, which we find gives more intuition.\footnote{We did try to construct the optimal duals explicitly, but were not able to construct such duals in general. Constructing such duals is likely to facilitate characterizing the optimal mechanism for all distributions.}
The lack of characterizations of optimal mechanisms in general settings has been addressed by seeking computational results instead. (We refer the reader to \citet{hartline2013mechanism} for a thorough overview of this line of work.) A sequence of papers by \citet{CaiDW12a,CaiDW12b,CaiDW13a,CaiDW13b} showed that for finite (multi-dimensional) type spaces, the mechanism design problem can be reduced to a related algorithm design problem, thus essentially resolving the computational question for this case.
Most of these assume a finite support and the computation time is polynomial in the size of the support. This is different from our model which assumes a continuous distribution.
Yet another approach to cope with the complexity of optimal mechanisms has been to show that simple auctions approximate optimal ones. In this line of work, two classes of valuations have been widely studied, unit demand valuations \cite{ChawlaHK07,BriestCKW10,ChawlaHMS10,ChawlaMS10,Alaei11}, and additive valuations \cite{HartN12,li2013revenue,BabaioffILW14,Yao15}. A unified approach to both has been presented in \citet{cai2016duality}, and these approaches have been extended to more general valuations in \citet{RubinsteinW15,chawla2016mechanism,CaiZ16}. Most of these make some sort of assumption about independence of values for different items. Our model differs in this aspect: either we see it as a special case of a unit demand problem (each buyer wants one of several bundles) in which case the values are highly correlated, or as a problem with 2 dimensional type space $(v,d)$, and we allow arbitrary correlations between the $v$ and $d$. Also, the goal in our paper is a characterization of the optimal mechanism as opposed to identifying simple but approximately optimal mechanisms.
\section{Model and Main Results}\label{sec:model} We consider a {\em multi-unit mechanism} with a single buyer with {\em private} demand. In a multi-unit mechanism, there are infinitely many units of a single item for sale. The type $t$ of a buyer is specified by her per unit value $v\in \mathbb{R}_+$ and her demand $d\in \mathbb{Z}_+$. The valuation of such a buyer for $m\in \mathbb{Z}_+$ units of the item is $v \cdot \min \left\{ m , d \right\}$. Both $v$ and $d$ are private information of the buyer, making this a multi-parameter setting.
We restrict our attention to direct revelation mechanisms, which ask the buyer to report her type $t=(v,d)$. The mechanism is allowed to be randomized, so the output is an allocation $A \in \mathbb{Z}_+$ and a payment $P\in \mathbb{R}_+$, both of which are random variables (and functions of the reported type $(v,d)$).
We require the mechanism to be incentive compatible, in expectation over the randomization of the mechanism. Formally, a mechanism is said to be EIC if for all valid types $(v,d)$ and $(v',d')$, the utility of the type $(v,d)$ from reporting its type truthfully is at least the utility it would get from reporting type $(v',d')$, \[ \textstyle \E \left[ v\left(\min \left\{ A(v,d) , d \right\} - \min \left\{ A(v',d') , d \right\} \right) - P(v,d) + P(v',d') \right] \geq 0 ,\] where the expectation is taken over the randomization of the mechanism. We assume that $(0,0)$ is always a valid type declaration, so this includes, as a special case, an expected individual rationality (EIR) condition, which requires that each type must get a non-negative utility from reporting its type truthfully: \begin{align*} \textstyle \E \left[ v \min \left\{ A(v,d) , d \right\} - P(v,d) \right] \geq 0. \end{align*} \noindent By linearity of expectation, we may assume w.l.o.g. that the payment is deterministic, and we denote this deterministic payment by $p(t)$. A stronger notion of individual rationality is \emph{ex-post} individual rationality, which requires that the utility of a type is non-negative for every realization of the randomization of the mechanism. However, in the lemma below we show that any EIR mechanism can be converted to an \emph{ex-post} individually rational mechanism, which guarantees non-negative utility for every realization of the randomization of the mechanism. The argument is standard and is deferred. As a result of the lemma, we will only focus on the EIR constraint in what follows. (All the missing proofs in the rest of the paper are in Appendix~\ref{app:appendix}.)
\begin{restatable}{lemma}{expost}
For every EIC and EIR mechanism, there exists an EIC and ex-post IR mechanism with the same expected payment \emph{for any type}.
\end{restatable}
When there are no supply constraints that bind across buyers, or equivalently there is a single buyer, an alternate interpretation of such a mechanism is as a \emph{menu of lotteries}. A lottery is a pair of a probability distribution over $\mathbb{Z}_+$ and a price, corresponding to the randomized allocation and payment. The buyer chooses the lottery that maximizes her expected utility from among a menu. In general this menu could be of infinite size. We call this the {\em multi-unit pricing} problem.
Consider a distribution over the type space, with a density function $f$. The \emph{Bayesian optimal} mechanism w.r.t. this distribution is the EIC (and EIR) mechanism that maximizes the expected revenue when the types are drawn from this distribution: \[ \textstyle \E_{t\sim f} \left[ p(t)\right] . \] Our goal is to characterize the Bayesian optimal mechanism. We make two assumptions: \begin{itemize}
\item The support of the distribution in the demand dimension is finite. We denote by $k$ the size of this support. In other words, there are $k$ different demands possible.
\item Let $f_d$ and $F_d$ denote the PDF and the CDF of the marginal distribution on values conditioned on the demand being $d$. Then $ v(1-F_d(v))$ is \textbf{concave} in $v$ for any given $d$. We call this property \emph{decreasing marginal revenue} (DMR). This is equivalent to the fact that $v f_d(v) - \left(1-F_d(v)\right)$ is a non-decreasing function of $v$. This is closely related to the usual definition of regularity, which requires monotonicity of this function divided by $f_d(v)$. \end{itemize}
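To spell out the equivalence claimed in the second item (assuming, as above, that $F_d$ has density $f_d$):
\[ \textstyle
\frac{d}{dv}\Big[v\big(1-F_d(v)\big)\Big] = \big(1-F_d(v)\big) - v f_d(v),
\]
so $v(1-F_d(v))$ is concave exactly when this derivative is non-increasing, i.e., when $v f_d(v) - \left(1-F_d(v)\right)$ is non-decreasing.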
We now state our first main theorem. \begin{restatable}{theorem}{deterministic} \label{thm:deterministic}
The Bayesian optimal multi-unit pricing with linear valuations, private demands, finitely many demands and DMR distributions is deterministic. \end{restatable} A deterministic mechanism is simply a menu with a deterministic allocation of each possible bundle of $d$ units, for $d$ in the support of $f$. Let $d_1 < d_2 < \cdots < d_k$ be the demands in this support. We denote the prices for the corresponding bundles by $p_1 , p_2, \ldots, p_k$. A buyer can get $d_i$ units by paying $p_i $ for any $i\in [k]$. A buyer with type $t = (v,d)$ chooses to buy the bundle that maximizes her utility $ v \min\{ d, d_i\} - p_i.$ Let $\mathbf{p} $ denote the vector of unit prices $(p_1, \ldots, p_k)$. We assume without loss of generality that the domain of $\mathbf{p}$ is such that $p_1 \leq p_2 \leq \cdots \leq p_k$. We denote by $\Rev(\mathbf{p})$ the (expected) revenue of this mechanism. Our second main theorem is Theorem~\ref{thm:concavity}. Due to this theorem, the optimal mechanism can be found efficiently, since maximizing a concave function can be done in polynomial time. \begin{restatable}{theorem}{concavity}\label{thm:concavity}
$\Rev(\mathbf{p})$ is a concave function if the marginal distributions are DMR for all $d$. \end{restatable}
\paragraph{Dynamic pricing} Consider the following online problem. In each round $\tau \in \{1,2,\ldots, T\}$, for some $T \in \mathbb{Z}_+$, the following takes place. \begin{enumerate}
\item The seller posts a price vector $\mathbf{p}^{\tau}$.
\item A buyer of type $(v^\tau, d^\tau)$ is drawn independently from the distribution $f$.
\item The buyer buys her utility maximizing bundle $x^\tau \in \arg\max_{\{i: d_i \leq d^\tau\}} v^\tau d_i - p^\tau_i $.
\item The seller observes only $x^\tau$. \end{enumerate} Assume, for the sake of notational convenience, that $d_0 = 0 $ and $p^\tau_0 = 0$ for all $\tau$, so $x^\tau = 0$ when the buyer doesn't buy anything. The goal of the seller is to maximize her average (or equivalently, total) revenue \[ \frac 1 T \sum_{\tau=1}^{T} p_{x^{\tau}}^\tau. \] We evaluate the performance of a dynamic pricing scheme by its \emph{regret}, which is the difference between the optimal expected revenue and the average expected revenue of the pricing scheme. We assume that the values are bounded, and that $v_{\max}$ is the maximum value. The results of \citet{bubeck2016kernel,bubeck2017personal} imply the following as a corollary of Theorem \ref{thm:concavity}. \begin{corollary} There is a dynamic pricing scheme where the regret is \[ \frac{\tilde{O}(n^{9.5}) d_k v_{\max}} {\sqrt{T}}.\] \end{corollary}
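To make the protocol concrete, the following Python sketch (with hypothetical demands $d_1<d_2<d_3$ and a Uniform$[0,1]$ value marginal, which is DMR) simulates only the environment and the buyer's best response, and estimates the per-round revenue of a fixed price vector by Monte Carlo; an actual regret-minimizing scheme, such as the convex-bandit method cited above, would in addition update $\mathbf{p}^{\tau}$ between rounds.
\begin{verbatim}
import random

ds = [1, 2, 3]        # hypothetical demands d_1 < d_2 < d_3

def buyer_response(p, v, d):
    # Utility-maximizing bundle among {i : d_i <= d}; 0 means "buy nothing".
    best_i, best_u = 0, 0.0
    for i, (di, pi) in enumerate(zip(ds, p), start=1):
        if di <= d and v * di - pi > best_u:
            best_i, best_u = i, v * di - pi
    return best_i

def average_revenue(p, sample_type, T=100000):
    total = 0.0
    for _ in range(T):
        v, d = sample_type()
        i = buyer_response(p, v, d)
        total += p[i - 1] if i > 0 else 0.0
    return total / T

# Values uniform on [0,1] (a DMR marginal), demand uniform on ds.
sample = lambda: (random.random(), random.choice(ds))
print(average_revenue([0.3, 0.5, 0.6], sample))
\end{verbatim}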
\section{Deterministic mechanisms are optimal} \label{sec:structure} In this section we prove our first main theorem. Throughout this section, we assume, for the sake of convenience, that the support of the distribution in the value space is $\subseteq [0,\bar{V}]$.
\deterministic*
\paragraph{Allocating only the demanded:} We first use a reduction that might actually introduce randomization: w.l.o.g. we may assume that $A(v,d)$ is supported on $\{0,d\}$. A buyer who reports a demand of $d$ is either allocated exactly $d$ units or none at all. The reduction replaces any allocation of $d' <d $ units with an allocation of $d$ units with probability $d'/d$ while retaining the same payment, and argues that this does not violate any EIC constraints. This may seem to go counter to our eventual conclusion that deterministic pricing is optimal; there are easy examples where a deterministic optimal pricing allocates $d' < d$ units.
Nonetheless, what we will show in the end is that the allocation probabilities for a buyer with demand $d$ should be exactly equal to $d'/d $ for some other (lower) demand $d'$. We can then reduce in the other direction: this is equivalent to deterministically allocating $d'$ units.
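To see why no EIC or EIR constraint is violated by this reduction (a sketch; the full argument is in the appendix), consider a type $(v,\hat d)$ that reports a demand $d$ and was originally allocated $d'\le d$ units. The reduction changes its expected value from $v\min(\hat d,d')$ to $\frac{d'}{d}\, v\min(\hat d,d)$, and
\[
\frac{d'}{d}\, v\min\left(\hat d, d\right) \;\le\; v\min\left(\hat d, d'\right),
\]
with equality whenever $\hat d\ge d$, in particular for the truthful report $\hat d=d$. Hence the truthful utility is unchanged while the utility of every misreport weakly decreases, and payments are untouched.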
\begin{restatable}{lemma}{Support}\label{lem:support}
For every feasible Bayesian mechanism, there exists another mechanism, with revenue at least as large, such that $A(v,d)$ is supported on $\{0,d\}$.
\end{restatable}
Let $t_i = (v_i,d_i)$ and $t_j = (v_j,d_j)$ be any two types. We write $\ut{v_i,d_i}{v_j,d_j}$, or just $\ut{t_i}{t_j}$, for the utility of an agent with type $t_i$ when she reports type $t_j$. From now on, we assume that the mechanism allocates $d_i$ units to $t_i$, with some probability $w\left(t_i\right)$, and for some price $p(t_i)$. Using this, $\ut{t_i}{t_j}$ can be rewritten as $ v_i \min\left( d_i , d_j \right) w\left(t_j\right) - p(t_j)$. We write $w_d$ for the allocation probability as a function of $v$ when the reported demand is $d$.
\paragraph{Local IC constraints are sufficient:} We now show that it is sufficient to consider a subset of IC constraints; the others are implied by these. The first set of constraints are ``horizontal'' constraints, where you fix $d$ and only change $v$. Further, the horizontal constraints can be replaced by monotonicity and a payment identity \`a la Myerson: \[ \textstyle \p{v,d} = vd w_{d}(v) - d \int_0^v \w[d]{z} dz + \p{0,d}. \] \noindent We now argue that in the optimal mechanism we must have $\p{0,d} = 0$ for all $d$. Incentive compatibility requires that $\p{0,d} = \p{0,d'}$ for all $d,d'$, since otherwise the type with higher payment would prefer to report being the other type and pay less (such a type gets no utility from allocation). The next step is to show that a mechanism where $\p{0,\cdot}<0$ cannot be optimal. To see this, construct another mechanism which subtracts $\p{0,\cdot}$ from the payment of \emph{all types} (i.e., increases every payment by $-\p{0,\cdot}>0$). The new mechanism respects all the EIC and EIR constraints (utility of type $(0,d)$ is zero for all $d$), and has higher revenue. As a result, the payment identity simplifies to: \begin{equation}\label{eq:paymentid} \textstyle \p{v,d} = vd w_{d}(v) - d \int_0^v \w[d]{z} dz. \end{equation} In addition to the local horizontal constraints considered above, there are the local ``vertical'' constraints, which are of two types; a type with demand $d_i$ reports $d_{i +1}$ or $d_{i-1}$. In either case, we only need to consider a particular misreport of the value $v'$, and this value is such that $\ut{v,d}{v',d'} = \ut{v',d'}{v',d'}$. The following lemma characterizes such $v'$, which can be verified by an easy calculation.
\begin{restatable}{lemma}{utilandX}\label{lem:utilandX}
$\ut{v,d_i}{v\frac{d_i}{d_j},d_j} = \ut{v\frac{d_i}{d_j},d_j}{v\frac{d_i}{d_j},d_j}$ for $j>i$, and \\
$\ut{v,d_i}{v,d_j} = \ut{v,d_j}{v,d_j}$
for $j<i$. \end{restatable}
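For completeness, the first identity is immediate from the definition $\ut{t_i}{t_j} = v_i \min\left( d_i , d_j \right) w\left(t_j\right) - p(t_j)$: for $j>i$,
\[
\ut{v,d_i}{v\frac{d_i}{d_j},d_j} = v d_i\, w\!\left(v\frac{d_i}{d_j},d_j\right) - p\!\left(v\frac{d_i}{d_j},d_j\right) = \left(v\frac{d_i}{d_j}\right) d_j\, w\!\left(v\frac{d_i}{d_j},d_j\right) - p\!\left(v\frac{d_i}{d_j},d_j\right) = \ut{v\frac{d_i}{d_j},d_j}{v\frac{d_i}{d_j},d_j},
\]
using $\min(d_i,d_j)=d_i$; the second identity follows in the same way, since $\min(d_i,d_j)=d_j$ for $j<i$.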
The next lemma formalizes our discussion above on sufficiency of local EIC constraints. The first condition of the lemma is the local horizontal constraint, and the next two are local vertical constraints. The lemma follows by showing that the EIC constraint where $(v,d)$ misreports $(v',d')$ is implied by a sequence of EIC constraints, where you iteratively use the vertical constraints to change the report of $d$ by $\pm1$ until you get to $d'$, and then use the horizontal constraint to change the report to $v'$.
\begin{restatable}{theorem}{localtoglobal} \label{thm:local_to_global} A mechanism satisfying the following conditions is EIC: $\forall d_i$ and $\forall v$, \begin{enumerate} \item $w_{d_i}$ is monotone non-decreasing, and $p(v,d_i)$ is given by Equation~\eqref{eq:paymentid}. \item $\ut{v,d_{i+1}}{v,d_{i+1}} \geq \ut{v,d_{i+1}}{v,d_i}$ \item $\ut{v,d_i}{v,d_i} \geq \ut{v,d_i}{v \frac{d_i}{d_{i+1}}, d_{i+1}}$ \end{enumerate}
\end{restatable}
It is interesting to compare this global-to-local reduction with that used in the FedEx problem. Syntactically, for the FedEx problem just the first 2 constraints above are sufficient, but the semantics are different. In the FedEx problem the $d$'s are the deadlines, and a larger $d$ signifies an inferior product, whereas in our problem a larger $d$ is a superior product. That the EIC constraints still look the same for misreporting a lower $d$ is due to the other difference between the problems:
utility scales linearly with $d$ in our problem, but remains constant in the FedEx problem. Thus in both problems, the valuation for an item of type $d'<d$ is the same for types $(v,d)$ and $(v,d')$.
\iffalse \begin{proof} We will show that for all pairs of types $t_i =(v_i,d_i)$ and $t_j = (v_j,d_j)$, with $d_i \geq d_j + 1$, $t_i$ does not want to report $t_j$ and vice versa: \begin{itemize} \item $\displaystyle \ut{t_i}{t_i} \geq \ut{v_i,d_i}{v_i,d_i - 1} = \ut{v_i,d_i - 1}{v_i,d_i - 1}$ \\ $\qquad \geq \ut{v_i,d_i-1}{v_i,d_i - 2} = \ut{v_i,d_i-2}{v_i,d_i - 2} $\\ $\qquad \dots $\\ $\qquad \geq \ut{v_i,d_j}{v_i,d_j} \geq \ut{v_i,d_j}{v_j, d_j} = \ut{v_i,d_i}{v_j, d_j} = \ut{t_i}{t_j}$
\item $\ut{t_j}{t_j} \geq \ut{t_j}{v_j \frac{d_j}{d_j + 1},d_j+1} = v_j d_j \w{v_j \frac{d_j}{d_j + 1},d_j+1} - \p{v_j \frac{d_j}{d_j + 1},d_j+1}$ \\ $= v_j \frac{d_j}{d_j+1} (d_j+1) \w{v_j \frac{d_j}{d_j + 1},d_j+1} - \p{v_j \frac{d_j}{d_j + 1},d_j+1}$ \\ $= \ut{v_j \frac{d_j}{d_j + 1},d_j+1}{v_j \frac{d_j}{d_j + 1},d_j+1}$
Applying this argument repeatedly gives us: $\ut{t_j}{t_j} \geq \ut{v_j \frac{d_j}{d_i},d_i}{v_j \frac{d_j}{d_i},d_i}.$ Using truthfulness for a fixed $d$, we have that the right hand side is at least $\ut{v_j \frac{d_j}{d_i},d_i}{v_i,d_i} = v_j \frac{d_j}{d_i} d_i \w{v_j \frac{d_j}{d_i},d_i} - \p{v_j \frac{d_j}{d_i},d_i}$, which is just $\ut{t_j}{t_i}$. \end{itemize} \end{proof} \fi
\paragraph{Mathematical Program for the optimal mechanism:}
We now write a mathematical program that captures the optimal mechanism. It will turn out to be convenient to use the following as variables of the program. Let $U_{d_i}(v) := \int_0^v \w[d_i]{z} dz$. Notice that $d_i U_{d_i}(v)$ is just the utility of a type $(v,d_i)$ when reporting the truth. Our objective is to maximize revenue, i.e. $\sum_{d_i = 1}^D \int_0^{\bar{V}} \p{v,d_i} f(v,d_i) dv$. Let $\phi_{d}(v) := v - \frac{1 - F_{d}(v)}{f_{d}(v)}$ be the standard Myerson virtual value function. Using the payment identity (\ref{eq:paymentid}) and integration by parts \`a la Myerson, we can rewrite this objective in terms of the $U_{d_i}(v) $ variables as:
\begin{align} \textstyle Rev &= \sum_{d_i = 1}^{k} \int_0^{\bar{V}} \w[d_i]{v} \phi_{d_i}(v) f_{d_i}(v) dv = \sum_{d_i = 1}^{k} \int_0^{\bar{V}} U'_{d_i}(v) \phi_{d_i}(v) f_{d_i}(v) dv \nonumber \\
&= \sum_{d_i = 1}^{k} U_{d_i}\left( \bar{V} \right) \phi_{d_i}(\bar{V}) f_{d_i}(\bar{V}) - \int_0^{\bar{V}} U_{d_i}(v) \left( \phi_{d_i}(v) f_{d_i}(v) \right)' dv.\label{re:revenue utility} \end{align}
Using this, and Theorem~\ref{thm:local_to_global}, we can restate the Bayesian optimal mechanism design problem as the following program. We define $U'_{d}$ to be the left derivative of $U_{d}$, which will be convenient to think of as $w_{d}$, the probability of allocation. Note that since the distribution over types is continuous, whether we allocate or not to any particular type $(v,d)$ does not affect revenue. The first constraint is equivalent to saying that the allocation is monotone non decreasing, and the second constraint says that the allocation probability is between $0$ and $1$.
\begin{talign} \textrm{max} & \sum_{i=1}^k U_{d_i}\left( \bar{V} \right) \phi_{d_i}(\bar{V}) f_{d_i}(\bar{V}) - \int_0^{\bar{V}} U_{d_i}(v) \left( \phi_{d_i}(v) f_{d_i}(v) \right)' dv & \label{eq:mathprog} \notag \\ \textrm{subject to} &:\notag\\ & U_{d_i}(v) \text{ is concave}& \forall i \in [k] \notag \\ & 1 \geq U'_{d_i}(v) \geq 0 & \forall i \in [k], \forall v\\ & U_{d_i}(0) = 0 & \forall i \in [k] \notag\\ & d_i U_{d_i}(v) \geq d_{i+1} U_{d_{i+1}}\left( v \frac{d_i}{d_{i+1}}\right) & \forall i \in [k-1] \notag\\ & d_{i+1} U_{d_{i+1}}(v) \geq d_i U_{d_i}\left( v \right) & \forall i \in [k-1] \notag \end{talign}
\iffalse \paragraph{Tight constraints of optimal solutions:} We now state a lemma that is crucial for our characterization. It is seemingly simple, stating that certain constraints must be tight at optimality in the above program, but is quite powerful as we will see later. As a corollary of this lemma we obtain that the allocation probabilities of certain pairs of types must be in proportion to their demands. The astute reader will observe that this is closely related to what we claimed we would prove at the beginning of Section~\ref{sec:structure}, about the allocation probabilities being ratios of demands. \fi
\input{newproof}
\iffalse
\begin{restatable}{lemma}{shootingup}\label{lem:shootinup}
For every feasible solution to the mathematical program (\ref{eq:mathprog}), there exists another feasible solution, with revenue at least as large, such that for all $i \in [k]$,
either (1) $U_{d_i}' (v) = 1$,
(2) $d_i U_{d_i} (v) = d_{i+1} U_{d_{i+1}}(v \tfrac{d_i}{d_{i+1}}),$ (3) $d_i U_{d_i}(v) = d_{i-1} U_{d_{i-1}}(v),$ or (4) $U_{d_i}(v) =0.$
For each of these cases, the range of $v$ where it happens is a sub interval of $[0,\bar V]$.
Case (4) happens in the left most interval.
Case (1) happens for the right most interval. \end{restatable}
\begin{proof} Consider a feasible solution to (\ref{eq:mathprog}), and fix any $i\in [k]$. Suppose that for all $j\neq i$, we fix $U_{d_j}$ to be equal to this solution. We further fix the value of $U_{d_i}(\bar{V)}$ to be equal to its value in this solution. Having fixed these quantities, we consider the projection of the optimization program (\ref{eq:mathprog}) where the only variables are $U_{d_i}(v)$ for all $v < \bar{V}$. The constraints are the same as in (\ref{eq:mathprog}), plus the fact that $U_{d_i}$ should agree with the value chosen for $U_{d_i}(\bar{V})$.
Let $\ell(\cdot)$ be the linear function with slope 1 such that $\ell(\bar{V}) = U_{d_i}(\bar{V})$. Given that $U_{d_i}$ has slope at most 1 and that it should agree with $U_{d_i}(\bar{V})$, it follows that $\ell$ is a lower bound on $U_{d_i}$. We claim that the following function is a pointwise minimum among all functions that satisfy the constraints of the projected program. \[ \textstyle \max\left\{\frac{d_{i+1}}{d_i} U_{d_{i+1}}\left( v \frac{d_i}{d_{i+1}}\right) ,
\frac{d_{i-1}}{d_i} U_{d_{i-1}} \left( v \right),
\ell(v), 0 \right\} . \] Note that each of the functions inside the $\max$ is a concave function, and hence this function is concave too. The slopes of $U_{d_{i+1}}$ and $U_{d_{i-1}}$ are in $[0,1]$, and the slope of $\ell$ is 1 by definition, therefore the slope of this function is also in $[0,1]$. The value of this function at $\bar{V}$ is $U_{d_i}(\bar{V})$, because $\ell(\bar{V})$ is equal to this value and no other function is higher (since in that case the given solution to (\ref{eq:mathprog}) would become infeasible as well). The other constraints of (\ref{eq:mathprog}) are satisfied by definition, hence this function is feasible for the restricted program. Clearly no other function can be lower than this function at any point without violating one of the constraints. Now since DMR implies that $\left( \phi_{d}(v) f_{d}(v) \right)'$ is always non-negative, this is the unique optimal solution to the restricted program, and therefore the original given solution must agree with this. It is clear that this function satisfies the conclusion of the lemma as required. \end{proof}
\begin{corollary}\label{cor:derivatives_left_and_right} $U'_{d_i}(v)$ is either $0$,$1$, $U'_{d_{i+1}}\left( v \tfrac{d_i}{d_{i+1}} \right)$ or $\frac{d_{i-1}}{d_i} U'_{d_{i-1}}\left( v\right)$. \end{corollary}
We first consider the case $k=2$ (Subsection~\ref{subsec:d=2}), to see how Lemma \ref{lem:shootinup} and Corollary \ref{cor:derivatives_left_and_right} can be used to characterize the optimum mechanism. We then prove the result for general $k$ (Subsection~\ref{subsec:general_d}). \fi
\section{Concavity of the revenue function} In this section we prove Theorem~\ref{thm:concavity}. Recall that the demands in the support of the distribution are $d_1 < d_2 < \cdots < d_k$, and that for all $i \in [k]$, $p_i$ denotes the price for the bundle of $d_i$ units, and $\mathbf{p}$ denotes the vector of all $p_i$s. Without loss of generality, we may assume that the domain of $\mathbf{p}$ is \[ 0 \leq p_1 \leq p_2 \leq \cdots \leq p_k . \] With this, we may assume that a buyer with demand $d_i$ only buys a bundle $d_j$ for $j\leq i$. We restate Theorem~\ref{thm:concavity} for convenience. \concavity*
\paragraph{Characterizing optimal bundles:} The revenue is determined by what the optimal bundle for each type is, given a price $\mathbf{p}$. To analyze this, we first consider when a given type prefers a bundle of $d_j$ units to one of $d_l$ units, for $j\neq l \in [k]$. The following quantity turns out to be the threshold at which the preference changes. \[ \forall j,l \in [k] : j>l, \enspace D_{j,l} \defeq \frac{p_j-p_{l}} {d_j-d_{l}} \enspace. \] For convenience, we also define $D_{j,0} \defeq p_j/d_j$ for all $j\in [k]$. \begin{restatable}{lemma}{preferbundles}\label{lem:preferbundles}
For all $i\geq j>l \in [k]$, a buyer of type $(v,d_i)$ prefers a bundle of $d_j$ units to a bundle of $d_l$ units if and only if $v > D_{j,l}$. Both bundles are equally preferable precisely when $v = D_{j,l}$. \end{restatable} \begin{proof}
The buyer prefers $d_j$ units over $d_l$ units if and only if
$ v d_j - p_j > v d_l - p_l ,$
that is, $v (d_j - d_l) > p_j - p_l$. Since $d_j > d_l$, this is equivalent to $v > D_{j,l}$, and the two bundles give equal utility precisely when $v = D_{j,l}$. \end{proof}
Before we proceed further, we note the following property for future reference.
\begin{restatable}{lemma}{dijorder}\label{lem:dijorder}
For all $i\geq j\geq l \in [k]$, $D_{i,l}$ is a convex combination of (and hence is always in between) $D_{i,j}$ and $D_{j,l}$.
\end{restatable}
\begin{proof}
It is easy to check the following identity:
$ D_{i,l} = \frac{1}{d_i - d_l}\left((d_i - d_j )D_{i,j} + (d_j - d_l)D_{j,l} \right).$
Since $d_l \leq d_j \leq d_i$, the two coefficients are non-negative and sum to one, so this indeed expresses $D_{i,l}$ as a convex combination of $D_{i,j}$ and $D_{j,l}$.
\end{proof}
We next consider how the optimum bundle changes for a given $d_i$, as $v$ decreases from $\bar V$ to 0. For high enough $v$, the optimum bundle for type $(v,d_i)$ should be $d_i$ units. As $v$ decreases, the optimal bundle is going to switch at the threshold $\max_{j <i}\{D_{i,j} \} $ (to something in the $\arg \max$). Similarly, as $v$ decreases further, the optimal bundle is going to switch again and so on. In fact, these sequences for different $d_i$s are not independent and we can capture each such sequence of optimum bundles by
a single vector $\sigma\in \mathbb{Z}^k$ such that the $i^{\rm th}$ co-ordinate $\sigma(i) \in \arg \max_{j <i}\{D_{i,j}\}$. Given such a $\sigma,$ for each $i$, the sequence of optimal bundles for types with demand $d_i$ is given by the directed path $\mathcal{P}_\sigma(i)$, defined as the (unique) longest path starting from $i$ in the directed graph on $[k]$ with edges $(i,\sigma(i))$. (The path ends when $\sigma(i) = 0$ for some $i$.)
In fact, there is a closed form formula for the revenue function provided we know what the resulting $\sigma$ is. Towards this, it is going to be more useful to consider the inverse of this map from $\mathbf{p}$ to $\sigma$: given any $\sigma\in \mathbb{Z}^k$ such that $\sigma(i) \in \{0\} \cup [i-1]$, we define $\Delta_\sigma$ to be the set of all prices for which the sequence of optimal bundles described above is given by $\mathcal{P}_\sigma(i)$. Formally, $$ \Delta_{\sigma} \defeq \left\{ \mathbf{p}: \forall i, \sigma(i) \in \arg \max_{j <i}\{D_{i,j} \} \right\} .$$
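For concreteness, consider $k=2$ and write $\sigma = (\sigma(1),\sigma(2))$. Then $\sigma(1) = 0$ always, and $\sigma(2) \in \{0,1\}$; a short calculation shows that $D_{2,1} \geq D_{2,0}$ if and only if $p_1/d_1 \leq p_2/d_2$, so that \[ \Delta_{(0,1)} = \left\{ \mathbf{p} : p_1/d_1 \leq p_2/d_2 \right\} \qquad \text{and} \qquad \Delta_{(0,0)} = \left\{ \mathbf{p} : p_1/d_1 \geq p_2/d_2 \right\} . \]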
\paragraph{Revenue function formula:} We are now ready to give a closed form formula for the revenue function within each $\Delta_{\sigma}$. For ease of notation we let $F_i$ denote the conditional CDF $F_{d_i}$, and let $q_i$ denote the probability that the buyer has a demand $d_i$. We also use $\sigma^2(i) $ to denote $\sigma(\sigma(i))$. We now define the following revenue function corresponding to $\sigma$, which captures $\Rev(\mathbf{p})$ in $\Delta_{\sigma}$: \[ \textstyle \Rev_\sigma(\mathbf{p}) \defeq \sum_i q_i\left(p_i \left(1 - F_i (D_{i,\sigma(i)}) \right) + \sum_{j \in \mathcal{P}_\sigma(i)} p_{\sigma(j)} \left( F_i(D_{j,\sigma(j)}) - F_i (D_{\sigma(j),\sigma^2(j)}) \right) \right) .\]
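Here and below we adopt the convention that $p_0 = 0$, so that the term corresponding to the last vertex of each path vanishes. For instance, in the two-demand example above with $\sigma = (0,1)$, the formula reads \[ \Rev_{(0,1)}(\mathbf{p}) = q_1\, p_1\bigl(1-F_1(D_{1,0})\bigr) + q_2\Bigl( p_2\bigl(1-F_2(D_{2,1})\bigr) + p_1\bigl(F_2(D_{2,1}) - F_2(D_{1,0})\bigr)\Bigr) , \] where $D_{1,0} = p_1/d_1$ and $D_{2,1} = (p_2-p_1)/(d_2-d_1)$: buyers with demand $d_1$ buy the $d_1$ bundle when their value exceeds $D_{1,0}$, while buyers with demand $d_2$ buy the $d_2$ bundle when their value exceeds $D_{2,1}$ and the $d_1$ bundle when their value lies in $[D_{1,0}, D_{2,1}]$.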
\begin{lemma}\label{lem:revenuedef} $ \Rev(\mathbf{p}) = \Rev_\sigma(\mathbf{p}) $ for all $\mathbf{p} \in \Delta_\sigma$. \end{lemma}
\begin{proof}
Suppose $\mathbf{p} \in \Delta_\sigma$. Consider all buyer types with demand $d_i$. Among these, all types with value $v > D_{i,\sigma(i)}$ prefer to buy the bundle of $d_i$ units over any other bundle,
by Lemma~\ref{lem:preferbundles}, and because $\mathbf{p} \in \Delta_\sigma$. These contribute $q_ip_i \left(1 - F_i (D_{i,\sigma(i)}) \right)$ to the revenue.
Now consider all types with value $v\in [D_{\sigma(j),\sigma^2(j)}, D_{j,\sigma(j)}]$ for some $j \in \mathcal{P}_\sigma(i)$. We need to prove that these prefer a bundle of $d_{\sigma(j)}$ over any other bundle $d_l$, so that they contribute to the revenue exactly $q_i p_{\sigma(j)} \left( F_i(D_{j,\sigma(j)}) - F_i (D_{\sigma(j),\sigma^2(j)}) \right) $, and the lemma follows.
As characterized by Lemma~\ref{lem:preferbundles}, this follows from the following.
\begin{itemize}
\item If $ l < \sigma(j)$, then $v\geq D_{\sigma(j),\sigma^2(j)}\geq D_{\sigma(j), l}$. This holds because $\mathbf{p} \in \Delta_\sigma$.
\item If $i \geq l > \sigma(j) $, then $v \leq D_{j,\sigma(j)}\leq D_{l,\sigma(j)}$. We prove this in the rest of the proof.
\end{itemize}
We first prove that $\forall j \in \mathcal{P}_\sigma(i)$,
$ D_{j,\sigma(j)} \geq D_{\sigma(j), \sigma^2(j)} .$
This follows from the fact that $ D_{j,\sigma^2(j)}$ is in between $ D_{j,\sigma(j)}$ and $ D_{\sigma(j), \sigma^2(j)}$ (Lemma~\ref{lem:dijorder}),
and that $ D_{j,\sigma^2(j)}\leq D_{j,\sigma(j)}$ (since $\mathbf{p} \in \Delta_\sigma$).
We now prove the following: $\forall j \in \mathcal{P}_\sigma(i)$, and $l \in( \sigma(j),j]$, we have that
$ D_{l, \sigma(j)} \geq D_{j, \sigma(j)}.$
This follows from the fact that if $l \in (\sigma(j),j ]$, then $D_{j,\sigma(j)}$ is in between $D_{j,l}$ and $D_{l,\sigma(j)}$ (from Lemma~\ref{lem:dijorder}), and $D_{j,l}\leq D_{j,\sigma(j)}$.
Now by a repeated application of
the fact $ D_{j,\sigma(j)} \geq D_{\sigma(j), \sigma^2(j)}, $
we get the same conclusion for all $j$ and $l$ such that $i \geq l > \sigma(j)$. \end{proof}
\paragraph{Concavity of~ $\Rev_\sigma$:} We next show that each of the $\Rev_\sigma$s by itself is a concave function. We do this by showing that $\Rev_\sigma$ can be written as a positive linear combination of linear functions, and compositions of the functions $v (1- F_d(v))$ with linear functions. Since the $v(1 -F_d(v))$ functions are concave by assumption, and such compositions and positive linear combinations preserve concavity, $\Rev_\sigma$ is concave too. \begin{restatable}{lemma}{revsigmaconcave}\label{lem:revsigmaconcave}
For all $\sigma$, $\Rev_\sigma(\mathbf{p})$ is a concave function. \end{restatable} \begin{proof}
We can rewrite $\Rev_\sigma$ as follows: the first equality is obtained by collecting, for each $j$ on the path, the two terms containing $F_i(D_{j,\sigma(j)})$, and the second uses the identity $p_j - p_{\sigma(j)} = D_{j,\sigma(j)} \left( d_j - d_{\sigma(j)} \right)$, which is just the definition of $D_{j,\sigma(j)}$. \[ \textstyle \Rev_\sigma = \sum_i q_i\left(p_i - \sum_{j \in \mathcal{P}_\sigma(i)}F_i(D_{j,\sigma(j)}) \left( p_j - p_{\sigma(j)} \right) \right) \] \[ \textstyle = \sum_i q_i\left(p_i - \sum_{j \in \mathcal{P}_\sigma(i)} F_i(D_{j,\sigma(j)}) D_{j,\sigma(j)} \left(d_j - d_{\sigma(j)}\right) \right).\]
We assumed that $v (1-F_i(v))$ is concave, which implies that $-vF_i(v)$ is concave.
$D_{j,\sigma(j)}$ is a linear function of $ \mathbf{p}$ for all $j$.
Since composition of linear functions with concave functions is concave, it follows that
$- F_i(D_{j,\sigma(j)}) D_{j,\sigma(j)} $
is concave. Now $\Rev_\sigma$ is a positive linear combination of concave functions, which makes it concave too. \end{proof}
\paragraph{Stitching the $\Rev_\sigma$s together:} Lemmas~\ref{lem:revenuedef} and \ref{lem:revsigmaconcave} imply that $\Rev$ is piecewise concave, i.e., inside each $\Delta_\sigma$ it is concave. In general this does not imply that such a function is concave everywhere. One property that would imply that $\Rev$ is concave everywhere is $\Rev$ being equal to $\min_\sigma \Rev_\sigma$ (the minimum of concave functions is concave). Unfortunately, this is not true. In fact, there is a partial order over the $\sigma$s that determines when one $\Rev_\sigma$ is always greater than another. We show a different, and somewhat surprising, property of the $\Rev_\sigma$s that also implies that $\Rev$ is concave. We show that at the boundaries between two regions not only do the corresponding $\Rev_\sigma$s agree (which they should, for $\Rev$ to even be continuous), but their gradients agree as well!
\begin{lemma}\label{lem:boundary}
For all $\sigma, \sigma'$, $\mathbf{p}$ such that $\mathbf{p} \in \Delta_\sigma \cap \Delta_{\sigma'}$, we have that
\[ \Rev_\sigma(\mathbf{p}) = \Rev_{\sigma'} (\mathbf{p}) \text{ and } \nabla \Rev_\sigma(\mathbf{p}) = \nabla \Rev_{\sigma'} (\mathbf{p}). \] \end{lemma}
\begin{proof}
We first argue that it is sufficient to prove Lemma \ref{lem:boundary} for the case where $\sigma$ and $\sigma'$ disagree in exactly one co-ordinate, i.e., there is some $i^*$ such that $\sigma(i^*) \neq \sigma'(i^*)$, and $\forall j \neq i^*$, $\sigma(j) = \sigma'(j)$. Suppose we have done that. Now consider any two $\sigma$ and $\sigma'$, and a sequence $\sigma = \sigma_1,\sigma_2,\ldots, \sigma_n = \sigma'$ such that for any $i$, $\sigma_i $ and $\sigma_{i+1}$ differ in exactly one co-ordinate, where $\sigma_i$ agrees with $\sigma$ in that co-ordinate and $\sigma_{i+1}$ agrees with $\sigma'$. The fact that $\mathbf{p} \in \Delta_\sigma \cap \Delta_{\sigma'}$ implies that for all co-ordinates $j$ such that $\sigma(j) \neq \sigma'(j)$, both $\sigma(j)$ and $\sigma'(j) \in \arg \max _{j'<j}\{D_{j,j'} \}.$ Similarly, $\mathbf{p} \in \Delta_{\sigma_i} \cap \Delta_{\sigma_{i+1}}$ requires the same condition, but only for the co-ordinate that they differ in, and therefore $\mathbf{p} \in \cap_{i=1}^n \Delta_{\sigma_i}$. Since we know Lemma \ref{lem:boundary} holds when the two $\sigma$s differ in at most one co-ordinate, it now follows that $\Rev$ and $\nabla \Rev$ at $\mathbf{p}$ are the same for all $\sigma_i$s and hence for $\sigma$ and $\sigma'$ as well.
Now we prove Lemma \ref{lem:boundary} when $\sigma$ and $\sigma'$ differ at exactly one co-ordinate, $i^*$. We consider the portions of the paths $\mathcal{P}_\sigma(i^*) $ and $\mathcal{P}_{\sigma'}(i^*) $ that are disjoint, and refer to these disjoint portions as simply $\mathcal{P}\subseteq \mathcal{P}_\sigma(i^*)$ and $\mathcal{P}'\subseteq\mathcal{P}_{\sigma'}(i^*) $. Both of these paths start at $i^*$ and end at $\hat{i}$. Note that once the two paths merge, they remain the same for the rest of the way. If the paths don't merge, then we let $\hat{i} = 0$. The critical fact we use is that along these paths the $D$s are all the same, which is stated as the following claim.
\begin{claim}\label{lem:zero_on_path} All $j, j' \in \mathcal{P} \cup \mathcal{P}'$ s.t. $j> j'$ have the same $D_{j,j'}$. \end{claim} \begin{proof} We prove the claim by induction, where we add one node at a time in the following order. We start the base case with $i^*, \sigma(i^*)$ and $\sigma'(i^*)$. At any point let $j$ and $j'$ be the last points on $\mathcal{P}$ and $\mathcal{P}'$ that we have added so far. In the inductive step, if $j> j'$, we add $\sigma(j)$ and otherwise we add $\sigma'(j')$. We stop when all nodes in $\mathcal{P} \cup \mathcal{P}'$ have been added.
For the base case, let $j = \sigma(i^*)$ and $j' = \sigma'(i^*)$. Without loss of generality, assume that $j > j'$. By \autoref{lem:dijorder}, we get that $D_{i^*,j'}$ is between $D_{i^*,j}$ and $D_{j,j'}$. Since $D_{i^*,j} = D_{i^*,j'}$, from the definition of $i^*$, we get $D_{i^*,j} = D_{j,j'} = D_{i^*,j'}$.
For the inductive step, let $j \in \mathcal{P}$ and $j' \in \mathcal{P}' $ be the last points that we have added so far, and again without loss of generality $j > j'$.
Let $a = \sigma(j)$. If $a=j'$ we are done. There are two cases: $a > j'$ and $a<j'$.
In the former case, we have $D_{j,a} \geq D_{j,j'}$ from the definition of $a$. From \autoref{lem:dijorder}, $D_{j,j'}$ must be in between $D_{j,a}$ and $D_{a,j'}$, therefore $D_{j,j'}\geq D_{a,j'}$. Let $i'\in \mathcal{P}'$ be the predecessor of $j'$, i.e., $\sigma'(i') = j'$. Due to the order in which we added the nodes, it must be that $i'> j$. By definition, $D_{i',j'} \geq D_{i',a}$, and by \autoref{lem:dijorder} $D_{i',j'}$ must be in between $D_{i',a} $ and $D_{a,j'}$, therefore $D_{a,j'}\geq D_{i',j'}$. By the inductive hypothesis, we have that $D_{i',j'} = D_{j,j'} $ and hence they both must be equal to $D_{a,j'}$.
Now consider any $i\neq j'$ that we have already added. It must be that $i > a$, and hence $D_{i,j'}$ must be in between $D_{i,a}$ and $D_{a,j'}$, but from the argument in the previous paragraph and the inductive hypothesis, we have that $D_{i,j'} = D_{a,j'}$, and hence they must both be equal to $D_{i,a}$. This completes the induction for this case.
The latter case of $a < j'$ is identical. \end{proof}
\paragraph{Continuing the proof of Lemma~\ref{lem:boundary}:}
To show that the $\Rev_\sigma$s agree on the boundary, consider the difference $\Rev_\sigma(\mathbf{p}) - \Rev_{\sigma'} (\mathbf{p})$. For all $i < i^*$, and more generally for all $i$ such that $i^* \notin \mathcal{P}_{\sigma}(i)$, nothing changes, therefore all those terms cancel out. Moreover, even for $i$ such that $i^* \in \mathcal{P}_{\sigma}(i)$, the only terms that do not cancel out are those with $j \in \mathcal{P} \cup \mathcal{P}'$. Therefore, we get:
\[ \textstyle \Rev_\sigma(\mathbf{p}) - \Rev_{\sigma'} (\mathbf{p})
= \sum_{i \geq i^*: i^* \in \mathcal{P}_\sigma(i)} q_i \left( \sum_{j \in \mathcal{P}} p_{\sigma(j)} \left( F_i(D_{j,\sigma(j)}) - F_i (D_{\sigma(j),\sigma^2(j)}) \right) \right. \] \[ \textstyle \left. \qquad - \sum_{j \in \mathcal{P}'} p_{\sigma'(j)} \left( F_i(D_{j,\sigma'(j)}) - F_i (D_{\sigma'(j),(\sigma')^2(j)}) \right) \right), \] which is zero by Claim~\ref{lem:zero_on_path}.
For the second part of the proof, we'll show that the gradient of $\Rev_\sigma - \Rev_{\sigma'}$ is zero. We only need to consider the partial derivatives w.r.t. $p_j$ for $j \in \mathcal{P} \cup \mathcal{P}'$ (modulo some corner cases).
Fix a $j \in \mathcal{P}$, and consider the terms in $\frac{\partial (\Rev_\sigma - \Rev_{\sigma'})}{\partial p_j}$ corresponding to
some $i \geq i^*$ such that $i^* \in \mathcal{P}_{\sigma}(i)$, in the outer summation.
Let the path $\mathcal{P}_{\sigma}(i)$ be such that $a \in \mathcal{P}_{\sigma}(i)$, $b = \sigma(a)$, $j = \sigma(b)$, $c = \sigma(j)$ and $d = \sigma(c)$. \[ i \rightarrow \ldots \rightarrow i^* \rightarrow \ldots \rightarrow a \rightarrow b \rightarrow j \rightarrow c \rightarrow d \rightarrow \ldots \] Then the terms under consideration are \begin{align*} & \frac{\partial}{\partial p_j} q_i \left( p_{b} \left( F_i(D_{a,b}) - F_i(D_{b,j}) \right) + p_{j} \left( F_i(D_{b,j}) - F_i(D_{j,c}) \right) + p_{c} \left( F_i(D_{j,c}) - F_i(D_{c,d}) \right) \right) = \\ &= q_i \left( \frac{p_b}{d_b-d_j} f_i(D_{b,j}) - \frac{p_j}{d_b-d_j} f_i(D_{b,j}) + F_i(D_{b,j}) - F_i(D_{j,c})- \frac{p_j}{d_j-d_c} f_i(D_{j,c}) + \frac{p_c}{d_j-d_c} f_i(D_{j,c}) \right) \\ &= q_i \left( D_{b,j} f_i(D_{b,j}) - D_{j,c} f_i(D_{j,c}) + F_i(D_{b,j}) - F_i(D_{j,c}) \right). \end{align*} By Claim~\ref{lem:zero_on_path}, $D_{b,j} = D_{j,c}$, and therefore these terms are zero. The cases when $i = i^*$, or $i^* = a,b,j$, or $c,d=0$, or $j \in \mathcal{P}_{\sigma'}(i)$ are identical. \end{proof}
We are now ready to prove the main theorem of this section, which is simply arguing how this agreement of gradients implies that $\Rev$ is concave everywhere. \begin{proof}[Proof of Theorem~\ref{thm:concavity}]
Consider any two prices $\mathbf{p}_1$ and $\mathbf{p}_2$, and the line segment joining the two. We will argue that $\Rev$ is concave along this line segment, which then implies the Theorem.
From Lemmas~\ref{lem:revenuedef} and~\ref{lem:revsigmaconcave}, we have that this line segment is itself divided into many intervals, and within each interval,
$\Rev$ is a concave function. Further, from Lemma~\ref{lem:boundary}, we have that these concave functions agree at the intersections of the intervals, and the
gradients agree too. Thus $\Rev$ is differentiable along the line segment, and its derivative is monotone non-increasing: within each interval this follows from concavity, and at the common endpoint of two consecutive intervals the one-sided derivatives agree. This implies that $\Rev$ is concave along the line. \end{proof}
We now discuss how to find the optimal prices, one for each possible number of items $d_i$; by Theorem~\ref{thm:deterministic_auctions_are_optimal}, the resulting pricing is guaranteed to be an optimal auction. We do so by using the concavity of $\rev{p_1,\dots, p_{k}}$.
\subsection{Warm up: 2 prices}
Say the bundle of $d_1$ items costs $p_1 d_1$ and the bundle of $d_2$ items costs $p_2 d_2$; that is, in this subsection $p_i$ denotes a per-unit price. Without loss of generality, $p_1 d_1 \leq p_2 d_2$.
\begin{align*} Rev(p_1,p_2) &= p_1d_1 \left( \text{Probability that $d_1$ items are bought} \right) \\ &+ p_2d_2 \left( \text{Probability that $d_2$ items are bought} \right) \end{align*}
Since $p_1d_1 \leq p_2d_2$, an agent with type $d_1$ never buys $d_2$ items. An agent with type $d_2$ might buy $d_1$ items, if her value is between $p_1$ and $\frac{p_2d_2-p_1d_1}{d_2- d_1}$. Now, if $p_1 > p_2$, then $p_1 > \frac{p_2d_2-p_1d_1}{d_2- d_1}$ (indeed, this inequality rearranges to $p_1 d_2 > p_2 d_2$, i.e., to $p_1 > p_2$), and therefore the probability that $d_2$ items are bought is $P_2 - F_2\left( p_2 \right)$. The revenue in this case is $$ Rev_1( p_1,p_2 ) = p_1d_1 \left( P_1 - F_1(p_1) \right) + p_2d_2 \left( P_2 - F_2(p_2) \right).$$ If $p_1 \leq p_2$ then the probability that $d_2$ items are bought is $P_2 - F_2\left( \frac{p_2d_2-p_1d_1}{d_2- d_1} \right).$ The revenue in this case is
\begin{align*} Rev_2 \left( p_1,p_2 \right) &= p_1d_1 \left( P_1 - F_1(p_1) + F_2(\frac{p_2d_2-p_1d_1}{d_2- d_1}) - F_2(p_1) \right) \\ &\qquad + p_2d_2 \left( P_2 - F_2(\frac{p_2d_2-p_1d_1}{d_2- d_1}) \right) \end{align*}
We will show that $\rev{p_1,p_2}$ is concave by showing that it is the minimum of two concave functions. We will show that (1) $Rev_1$ and $Rev_2$ are concave (Claim~\ref{claim:concave2}), and that (2) $\rev{p_1,p_2}$ is the minimum of $Rev_1$ and $Rev_2$ (Claim~\ref{claim:is_min}).
\begin{claim}\label{claim:is_min} $\rev{p_1,p_2} = \min \{ Rev_1 \left( p_1,p_2 \right), Rev_2 \left( p_1,p_2 \right) \}$ \end{claim}
\begin{proof} Consider the following two cases: \begin{itemize} \item $p_2 \geq p_1$. In this case, $\rev{p_1,p_2} = Rev_2(p_1,p_2)$. Consider $Rev_1 \left( p_1,p_2 \right) - Rev_2 \left( p_1,p_2 \right)$: \begin{align*} Rev_1 \left( p_1,p_2 \right) - Rev_2 \left( p_1,p_2 \right) &= p_1d_1 \left( P_1 - F_1(p_1) \right) + p_2d_2 \left( P_2 - F_2(p_2) \right) \\ &\quad - p_1d_1 \left( P_1 - F_1(p_1) + F_2(\frac{p_2d_2-p_1d_1}{d_2- d_1}) - F_2(p_1) \right) \\ &\quad - p_2d_2 \left( P_2 - F_2(\frac{p_2d_2-p_1d_1}{d_2- d_1}) \right) \\ &= p_2d_2 \left( F_2(\frac{p_2d_2-p_1d_1}{d_2- d_1}) - F_2(p_2) \right) \\ &\quad + p_1d_1 \left( F_2(p_1) - F_2(\frac{p_2d_2-p_1d_1}{d_2- d_1}) \right) \end{align*}
Since $p_2 \geq p_1$, by re-arranging we can get that $\frac{p_2d_2-p_1d_1}{d_2- d_1} \geq p_2 \geq p_1$. Moreover, we have $p_2 = \frac{d_1}{d_2}p_1 + \frac{d_2-d_1}{d_2} \frac{p_2d_2-p_1d_1}{d_2- d_1}$. We know that $(\phi_2 f_2)' \geq 0$, which implies that the revenue $R_2(p) = p d_2 \left( 1 - F_2(p) \right)$ from selling $d_2$ items at a per-unit price of $p$ is a concave function. Therefore: \begin{gather*} R_2 \left( p_2 \right) \geq \frac{d_1}{d_2}R_2 \left( p_1 \right) + \frac{d_2-d_1}{d_2} R_2 \left( \frac{p_2d_2-p_1d_1}{d_2- d_1} \right) \\ p_2d_2 \left( 1 - F_2(p_2) \right) \geq \frac{d_1}{d_2} p_1d_2 \left( 1 - F_2(p_1) \right) + \frac{d_2-d_1}{d_2} \frac{p_2d_2-p_1d_1}{d_2- d_1} d_2 \left( 1 - F_2( \frac{p_2d_2-p_1d_1}{d_2- d_1} ) \right) \\ p_2d_2 \left( F_2( \frac{p_2d_2-p_1d_1}{d_2- d_1} ) - F_2(p_2) \right) \geq p_1d_1 \left( F_2( \frac{p_2d_2-p_1d_1}{d_2- d_1} ) - F_2(p_1) \right), \end{gather*} which is exactly $Rev_1 \left( p_1,p_2 \right) \geq Rev_2 \left( p_1,p_2 \right).$
Thus, we have that \[ \min \{ Rev_1 \left( p_1,p_2 \right), Rev_2 \left( p_1,p_2 \right) \} = Rev_2 \left( p_1,p_2 \right) = \rev{p_1,p_2}. \] \item $p_2 \leq p_1$. In this case, $\rev{p_1,p_2} = Rev_1(p_1,p_2)$. Using a similar proof, we can show that $Rev_1 \left( p_1,p_2 \right) - Rev_2 \left( p_1,p_2 \right) \leq 0$.
\end{itemize}
\end{proof}
\appendix \section{Deferred Proofs}\label{app:appendix} The proof below contains the calculations for Example~\ref{ex:regularity comparisons}. \begin{proof} We first show that the constant elasticity distribution with cumulative distribution function $F(v) = 1 - (v/a)^{1/\epsilon}$ is DMR. Recall that DMR is equivalent to concavity of the revenue function. To verify concavity, we calculate the second derivative of the revenue function and show that it is non-positive. \begin{align*} R'(v) &= (\frac{v}{a})^{1/\epsilon} + \frac{v}{a\epsilon}(\frac{v}{a})^{1/\epsilon -1}.\\ R''(v) &= (\frac{v}{a})^{1/\epsilon -1}\frac{2}{a\epsilon} + (\frac{v}{a})^{1/\epsilon -2}\frac{v}{a^2\epsilon}(1/\epsilon -1) \\ & = (\frac{v}{a})^{1/\epsilon -2}(\frac{2v}{a^2\epsilon} + \frac{v}{a^2\epsilon}(1/\epsilon - 1)) \\ & = (\frac{v}{a})^{1/\epsilon -2}\frac{v}{a^2\epsilon}(1+1/\epsilon) \leq 0. \end{align*}
Now consider regularity. Note that the probability density function $f(v) = \frac{-1}{\epsilon a} (v/a)^{1/\epsilon - 1}$. Recall that a distribution is regular if the function $\phi(v)$ is monotone non-decreasing in $v$. \begin{align*} \phi(v) &= v - \frac{1-F(v)}{f(v)}\\ &= v - \frac{(v/a)^{1/\epsilon}}{\frac{-1}{a\epsilon}(v/a)^{1/\epsilon-1}}\\ &= v - \frac{v/a}{-1/(a\epsilon)} = v (1+\epsilon), \end{align*}
which is monotone \emph{decreasing} since by assumption $\epsilon < -1$; hence the constant elasticity distribution is not regular.
We finally argue that the exponential distribution, defined as $F(v) = 1 - e^{-v}$, is not DMR but is regular. The revenue function is $R(v) = ve^{-v}$, its first derivative is $R'(v) = (1-v)e^{-v}$, and its second derivative is $R''(v) = (v-2)e^{-v}$, which is positive for $v > 2$, violating concavity. However, as commonly known, this distribution is regular since $\phi(v) = v - \frac{1-F(v)}{f(v)} = v- \frac{e^{-v}}{e^{-v}} = v -1$ is monotone non-decreasing in $v$. \end{proof}
\expost*
\begin{proof}
Consider an EIC and EIR mechanism. First note that we can assume that for each type $(v,d)$, the randomized allocation $A(v,d)$ does not assign a number of units more than $d$. If this is not true, replace any assignment of more than $d$ units with the assignment of $d$ units. Note that this change does not change the utility of truthful reporting, and cannot improve utility of non-truthful reporting. Therefore the resulting mechanism is EIC and EIR. Now consider a type $(v,d)$, its \emph{realized} allocation $A(v,d)$, and its expected payment $p(v,d)$, and construct a randomized payment $\tilde{p}(v,d)$ as follows
\begin{align*}
\tilde{p}(v,d) = \frac{p(v,d) A(v,d)}{\E \left[ A(v,d)\right]}.
\end{align*}
\noindent Note that the expected payment of the type stays the same,
\begin{align*}
\E \left[ \tilde{p}(v,d)\right] = p(v,d) \frac{\E \left[ A(v,d)\right]}{\E \left[ A(v,d)\right]} = p(v,d).
\end{align*}
\noindent As a result, the modified mechanism stays EIC. In addition, the ex-post utility of the type from the realized allocation of $A(v,d)$ units is
\begin{align*}
vA(v,d)- \frac{p(v,d)A(v,d)}{\E \left[ A(v,d)\right]} ,
\end{align*}
which is non-negative if and only if
\begin{align*}
v \E \left[ A(v,d)\right] - p(v,d) \geq 0,
\end{align*}
which holds by EIR. \end{proof}
\Support*
\begin{proof}
Let $x^i_m= \Prob [A(t_i) \geq m]$ denote the probability that type $t_i$ is allocated $m$ or more units.
Let $h_i$ be such that $x^i_{h_i} > 0$ and $x^i_{h_i+1} = 0$. Set $w\left(t_i\right) = \frac{\sum_{m=1}^{h_i} x^i_m}{d_i}$, and
consider an alternate allocation given by a random variable $B(t_i)$
which is $d_i $ with probability $w(t_i)$ and is 0 otherwise.
Then, $\Prob [B (t_i) \geq m] = y^i_m = w\left(t_i\right)$ for all $m \leq d_i$.
The utility of $t_i$ when reporting $t_i$ remains unchanged under this alternate allocation:
\[ \ut{t_i}{t_i} = v_i \sum_{m=1}^{d_i} x^i_m - p\left( t_i \right) = v_i d_i w\left(t_i\right) - p\left( t_i \right) = v_i \sum_{m=1}^{d_i} y^i_m - p\left( t_i \right)\]
and so does $\ut{t_j}{t_i}$ for all $t_j$ with $d_j \geq d_i$. When $d_j < d_i$, it is easy to check that $\sum_{m=1}^{d_j} x^i_m \geq \sum_{m=1}^{d_j} y^i_m = d_j w\left(t_i\right)$, since $x^i_m \geq x^i_{m+1}$ for all $m$ (the first $d_j$ terms of a non-increasing sequence have average at least that of the first $d_i$ terms), and thus the utility of $t_j$ when reporting $t_i$ can only decrease.
Thus, when changing the allocation from $A$ to $B$, the EIC constraints are still satisfied, and total revenue remains unchanged. \end{proof}
\utilandX*
\begin{proof}
$\displaystyle u(v,d_i \rightarrow v \frac{d_{i}}{d_j}, d_{j} ) = v d_i w(v \frac{d_{i}}{d_{j}}, d_{j}) - p(v \frac{d_{i}}{d_{j}}, d_{j}) $\\
$\displaystyle = (v \frac{d_i}{d_{j}}) d_{j} w(v \frac{d_{i}}{d_{j}}, d_{j}) - p(v \frac{d_{i}}{d_{j}}, d_{j}) = u(v \frac{d_{i}}{d_{j}}, d_{j} \rightarrow v \frac{d_{i}}{d_{j}}, d_{j} ) = d_{j} U_{d_{j}} (v \frac{d_i}{d_{j}})$.\\
The second part is identical. \end{proof}
\localtoglobal*
\begin{proof} We will show that for all pairs of types $t_i =(v_i,d_i)$ and $t_j = (v_j,d_j)$, with $d_i \geq d_j + 1$, $t_i$ does not want to report $t_j$ and vice versa: \begin{itemize} \item $\displaystyle \ut{t_i}{t_i} \geq \ut{v_i,d_i}{v_i,d_i - 1} = \ut{v_i,d_i - 1}{v_i,d_i - 1}$ \\ $\qquad \geq \ut{v_i,d_i-1}{v_i,d_i - 2} = \ut{v_i,d_i-2}{v_i,d_i - 2} $\\ $\qquad \dots $\\ $\qquad \geq \ut{v_i,d_j}{v_i,d_j} \geq \ut{v_i,d_j}{v_j, d_j} = \ut{v_i,d_i}{v_j, d_j} = \ut{t_i}{t_j}$
\item $\ut{t_j}{t_j} \geq \ut{t_j}{v_j \frac{d_j}{d_j + 1},d_j+1} = v_j d_j \w{v_j \frac{d_j}{d_j + 1},d_j+1} - \p{v_j \frac{d_j}{d_j + 1},d_j+1}$ \\ $= v_j \frac{d_j}{d_j+1} (d_j+1) \w{v_j \frac{d_j}{d_j + 1},d_j+1} - \p{v_j \frac{d_j}{d_j + 1},d_j+1}$ \\ $= \ut{v_j \frac{d_j}{d_j + 1},d_j+1}{v_j \frac{d_j}{d_j + 1},d_j+1}$
Applying this argument repeatedly gives us: $\ut{t_j}{t_j} \geq \ut{v_j \frac{d_j}{d_i},d_i}{v_j \frac{d_j}{d_i},d_i}.$ Using truthfulness for a fixed $d$, we have that the RHS is at least $\ut{v_j \frac{d_j}{d_i},d_i}{v_i,d_i} = v_j \frac{d_j}{d_i} d_i \w{v_j \frac{d_j}{d_i},d_i} - \p{v_j \frac{d_j}{d_i},d_i}$, which is just $\ut{t_j}{t_i}$. \end{itemize} \end{proof}
\iffalse
\monotonePaths* \begin{proof} There are two cases: either the edge exiting $(v,d_i)$ is to $(v,d_{i-1})$ or $\left(v\frac{d_i}{d_{i+1}} , d_{i+1} \right)$.
In the first case, let $(v,d_j)$ be the first vertex such that there is an edge from $(v,d_j)$ to $(v\frac{d_j}{d_{j+1}} , d_{j+1})$. Also, since $(v,d_j)$ is the first such vertex, there must be an edge from $(v,d_{j+1})$ to $(v,d_j)$. The existence of these edge gives $d_{j+1} U_{d_{j+1}} \left( v \right) = d_j U_{d_j}(v) = d_{j+1} U_{d_{j+1}} \left( v \frac{d_j}{d_{j+1}} \right)$. The only way both equations are possible is if $U_{d_{j+1}}(.)$ is zero in the range $[ v \frac{d_j}{d_{j+1}} , v ]$, which implies that $d_i U_{d_i}(v) = 0$. The latter case is identical. \end{proof}
\preferbundles* \begin{proof}
The buyer prefers $d_j$ units over $d_l$ units if and only if
$ v d_j - p_j > v d_l - p_l .$
Rearranging, we get the lemma. \end{proof}
\dijorder* \begin{proof}
It is easy to check the following identity.
$ \textstyle D_{i,l} = \frac{1}{d_i - d_l}\left((d_i - d_j )D_{i,j} + (d_j - d_l)D_{j,l} \right).$ \end{proof}
\revsigmaconcave* \begin{proof}
We rewrite $\Rev_\sigma$ as
\[ \Rev_\sigma = \sum_i q_i\left(p_i - \sum_{j \in \mathcal{P}_\sigma(i)}F_i(D_{j,\sigma(j)}) \left( p_j - p_{\sigma(j)} \right) \right) \]
\[= \sum_i q_i\left(p_i - \sum_{j \in \mathcal{P}_\sigma(i)} F_i(D_{j,\sigma(j)}) D_{j,\sigma(j)} \left(j - \sigma(j)\right) \right) . \]
We assumed that $v (1-F_i(v))$ is concave, which implies that $-vF_i(v)$ is concave.
$D_{j,\sigma(j)}$ is a linear function of $ \mathbf{p}$ for all $j$.
Since composition of linear functions with concave functions is concave, it follows that
$- F_i(D_{j,\sigma(j)}) D_{j,\sigma(j)} $
is concave. Now $\Rev_\sigma$ is a positive linear combination of concave functions, which makes it concave too. \end{proof}
\fi
\section{Detailed Characterization for $k = 2$} \label{app:detailed d2}
We first complete the case analysis that shows that the optimal mechanism is deterministic for $k = 2$.
We have the following cases for $v_1$ and $v_2$: \begin{description}
\item {$v_1 \leq v_2$:}
Thus, for all $v \leq v_1 \leq v_2$, we have that $ U_{d_1}(v) = \frac{d_2}{d_1} U_{d_2}\left( v \frac{d_1}{d_2} \right) = U_{d_1} \left( v \frac{d_1}{d_2} \right),$ which implies that $U_{d_1}(v) = 0$, and therefore $U_{d_2}(v) = 0$. For all $v \geq v_1$, we have that $U_{d_1}'(v)=1$, i.e. $\w[d_1]{v} = 1$;
this corresponds to a posted price of $d_1 v_1$ for a bundle of $d_1$ units. For $v \in \left[ v_1 , v_2 \right]$, we have $ U_{d_2}(v) = \frac{d_1}{d_2} U_{d_1}(v) = \frac{d_1}{d_2}(v-v_1)$. This implies that the allocation function $\w[d_2]{v}$ is equal to $\frac{d_1}{d_2}$; for a price of $d_1 v_1$, we offer a bundle of $d_2$ units with probability $\frac{d_1}{d_2}$. For $v \geq v_2$, we have a posted price of $d_2 v_2 - (v_2 - v_1)d_1$ for a bundle of $d_2$ units. The same allocation rule can be induced by just two menu entries (and no randomization): $d_1$ units cost $v_1 d_1$ and $d_2$ units cost $v_2 d_2 - (v_2 - v_1)d_1$; see the verification after this case analysis.
\item {$v_2 \leq v_1$ and $v_1 d_1 \leq v_2 d_2$:}
As before, for all $v \leq v_2$, $U_{d_2}(v) = U_{d_1}(v) = 0$, and the bundle of $d_2$ units has a posted price of $v_2 d_2$. For $v \in \left[ v_2 , v_1 \right]$, we have $ U_{d_1}(v) = \frac{d_2}{d_1} U_{d_2} \left( v \frac{d_1}{d_2} \right) \leq \frac{d_2}{d_1} U_{d_2} \left( v_2 \right) = 0$. For $v \geq v_1$, $U'_{d_1}(v) = 1$; this is a posted price of $d_1 v_1$ for $d_1$ units.
\item {$v_2 \leq v_1$ and $v_1 d_1 > v_2 d_2$:}
Once again, for all $v \leq v_2$, $U_{d_2}(v) = U_{d_1}(v) = 0$, and the bundle of $d_2$ units has a posted price of $d_2 v_2$. For $v \in \left[ v_2 , \frac{d_2}{d_1} v_2 \right]$, we have $U_{d_1}(v) = \frac{d_2}{d_1} U_{d_2} \left( v \frac{d_1}{d_2} \right) = 0$. For $v \in \left[ \frac{d_2}{d_1} v_2 , v_1 \right]$, $U'_{d_1}(v) = 1$; offer a bundle of $d_1$ units for a price of $d_1 \frac{d_2}{d_1} v_2 = d_2 v_2$. This corresponds to selling only the $d_2$ bundle for a price of $v_2d_2$. \end{description}
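To verify the last claim of the first case, note that under the two-entry menu a buyer of type $(v,d_2)$ prefers the $d_1$ entry to the $d_2$ entry if and only if \[ v d_1 - v_1 d_1 \;\geq\; v d_2 - \bigl(v_2 d_2 - (v_2 - v_1) d_1\bigr), \] which rearranges to $v \leq v_2$. Hence for $v \in [v_1, v_2]$ she buys the $d_1$ entry and obtains utility $(v - v_1) d_1 = d_2 U_{d_2}(v)$, while for $v \geq v_2$ she buys the $d_2$ entry, matching (in expectation) the allocation and the utilities described above.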
We now characterize the optimal thresholds $v_1$ and $v_2$. Let $v_1$ and $v_2$ be the values after which $(.,d_1)$ and $(.,d_2)$ type agents are allocated $d_1$ and $d_2$ units respectively. Then, the optimal mechanism posts a price for $d_1$ units and a price for $d_2$ units that is either: (1) $v_1 d_1$ and $v_2 d_2 - (v_2 - v_1)d_1$, (2) $v_1 d_1$ and $v_2 d_2$, or (3) $v_2 d_2$ for both. This is equivalent to the maximum of:
\begin{itemize} \item $\max v_1 d_1 \left( 2 - F_1(v_1) - F_2(v_1) \right) + v_2 (d_2 - d_1) \left( 1 - F_2(v_2) \right)$\\ subject to $v_2 \geq v_1$.
\item $\max v_1 d_1 \left( 1 - F_1(v_1) \right) + v_2 d_2 \left( 1 - F_2(v_2) \right)$\\ subject to $v_1 \geq v_2$ and $v_1 \leq \frac{d_2}{d_1} v_2$.
\item $\max v_2 d_2 \left( 2 - F_2(v_2) - F_1(v_2 \frac{d_2}{d_1}) \right)$.
\end{itemize}
Let $\vstar{1}$ and $\vstar{2}$ be the optimal choices for $v_1$ and $v_2$. Also, let $\hat{v}_1$ and $\hat{v}_2$ be the monopoly pricing solutions, i.e. $\hat{v_i} = arg\max v d_i \left( 1 - F_i(v) \right)$.
Then, we have the following options for $\vstar{1}$ and $\vstar{2}$: \begin{enumerate} \item $\vstar{1} = \hat{v}_1$ and $\vstar{2} = \hat{v}_2$ (unconstrained version of the second bullet) \item $\vstar{1} = arg\max v d_1 \left( 2 - F_1(v) - F_2(v) \right)$ and $\vstar{2} = \hat{v}_2$ (unconstrained version of the first bullet) \item $\vstar{1} = \vstar{2} = arg\max v \left( d_1 (1 - F_1(v)) + d_2 (1-F_2(v)) \right)$ \item $\frac{d_1}{d_2} \vstar{1} = \vstar{2} = arg\max v d_2 \left( 2 - F_2(v) - F_1(v \frac{d_2}{d_1}) \right)$ \end{enumerate}
This corresponds to the following: compute $\hat{v}_1$ and $\hat{v}_2$. If $\frac{d_2}{d_1} \hat{v}_2 \geq \hat{v}_1 \geq \hat{v}_2$ we're done. Otherwise, compute $arg\max v d_1 \left( 2 - F_1(v) - F_2(v) \right)$. If it is at most $\hat{v}_2$, then pick the best option out of 2,3 and 4. If not, pick the best out of 3 and 4.
Alternatively, the optimal $\vstar{1}$ and $\vstar{2}$ can be computed as follows: check whether the monopoly prices $\hat{v}_1$ and $\hat{v}_2$ satisfy the IC constraints. If they do, then we are done. If they do not, it must be that either $\hat{v}_1 < \hat{v}_2$, or $\hat{v}_1 > \frac{d_2}{d_1} \hat{v}_2$.
In the former case, compute the \textit{best per unit price} $q$, i.e. a price $q$ such that $d_1$ units cost $qd_1$ and $d_2$ units cost $qd_2$. This corresponds to the solution of the first bullet.
In the latter case, compute the \textit{best bundle price}, i.e., the best price $p$ that is the same for both $d_1$ and $d_2$ units. This corresponds to the solution of the third bullet. The better of the two prices $p$ and $q$ is optimal, and given that, $\vstar{1}$ and $\vstar{2}$ can be easily calculated.
\iffalse \section{Existence of Optimum} \label{app:existence}
In this section we show that a solution to the mathematical program~\ref{eq:mathprog} always exists. In the program we optimize some functional $Rev(U_{d_1}, U_{d_2}, \dots , U_{d_k} )$, over all functions $U_{d_i}$ that have a bounded domain $[0,\bar{V}]$, bounded range, are continuous, concave, whose first derivative is bounded between zero and one, and satisfy some additional linear constraints (the IC constraints).
\begin{definition}
A sequence $\{ f_n \}_{n \in \mathbb{N}}$ of continuous functions on an interval $I = [a, b]$ is uniformly bounded if there is a number $M$ such that $|f_n(x)| \leq M$, for every function $f_n$ in the sequence and every $x \in [ a,b ]$, where $M$ is independent of $x$ and $n$. \end{definition}
\begin{definition}
A sequence $\{ f_n \}_{n \in \mathbb{N}}$ of continuous functions on an interval $I = [a, b]$ is equicontinuous if , for every $\epsilon > 0$, there exists $\delta > 0$ such that $\left| f_{n}(x)-f_{n}(y) \right|< \epsilon$
whenever $| x - y | < \delta$ for all functions $f_n$ in the sequence. $\delta$ may depend on $\epsilon$ but not $x$,$y$ or $n$. \end{definition}
\begin{definition}
A sequence $\{ f_n \}_{n \in \mathbb{N}}$ of continuous functions on an interval $I = [a, b]$ converges uniformly to a function $f$, if for every $\epsilon > 0$, there exists a number $N(\epsilon)$, such that $\left| f_n(x) - f(x) \right| < \epsilon$, for all $n > N(\epsilon)$, where $N(\epsilon)$ is independent of $x$. \end{definition}
The Arzel\`{a}-Ascoli theorem states that: \begin{theorem}[Arzel\`{a}-Ascoli] A uniformly bounded and equicontinuous sequence of real-valued continuous functions $\{ f_n \}_{n \in \mathbb{N}}$, defined on a closed and bounded interval $[a, b]$ of the real line, has a subsequence $\{ f_m \}_{m \in \mathbb{N}}$ that converges uniformly. \end{theorem}
Let $C$ be the class of functions we're optimizing over in the program~\ref{eq:mathprog}. \anote{we might need something slighty more general, like functions in $\mathbb{R}^k$, but the theorem still applies.}
Let $J[f]$ be some functional. Let $OPT$ be the least upper bound of $C$ with respect to $J$.
Define the following sequence of functions: $f_0$ is some function in $C$. If $J(f_i) < OPT$, pick a function $f_{i+1} \in C$ such that $Rev(f_{i+1}) > Rev(f_i)$. $\{ f_n \}$ converges to a function $f^*$, not necessarily in $C$, such that $J(f^*) = OPT$.
\begin{claim} $\{ f_n \}$ is uniformly bounded and equicontinuous. \end{claim}
\begin{proof} Uniform boundedness is implied by the bounded range. Equicontinuity is implied by the bounded derivative. \end{proof}
By the Arzel\`{a}-Ascoli theorem, $\{ f_n \}$ converges uniformly to $f^*$. Furthermore, $f^*$ is continuous. \anote{it remains to show that $f^* \in C$. We might have to go and check every single constraint}
\fi
\end{document}
\begin{document}
\title{Santa Claus Schedules Jobs on Unrelated Machines}
\begin{abstract}
One of the classic results in scheduling theory is the
$2$-approximation algorithm by Lenstra, Shmoys, and Tardos for the
problem of scheduling jobs to minimize makespan on unrelated
machines, i.e., job $j$ requires time $p_{ij}$ if processed on
machine $i$. More than two decades after its introduction it is
still the algorithm of choice even in the restricted model where
processing times are of the form $p_{ij} \in \{p_j, \infty\}$. This
problem, also known as the restricted assignment problem, is NP-hard
to approximate within a factor less than $1.5$ which is also the
best known lower bound for the general version.
Our main result is a polynomial time algorithm that estimates the
optimal makespan of the restricted assignment problem within a
factor $33/17 + \epsilon \approx 1.9412 + \epsilon$, where $\epsilon
> 0$ is an arbitrarily small constant. The result is obtained by
upper bounding the integrality gap of a certain strong linear
program, known as configuration LP, that was previously successfully
used for the related Santa Claus problem. Similar to the strongest
analysis for that problem our proof is based on a local search
algorithm that will eventually find a schedule of the mentioned
approximation guarantee, but is not known to converge in polynomial
time. \end{abstract}
\section{Introduction} Scheduling on unrelated machines is the model where we are given a set $\ensuremath{\mathcal{J}}$ of jobs to be processed without interruption on a set $\ensuremath{\mathcal{M}}$ of unrelated machines, where the time a machine $i\in \ensuremath{\mathcal{M}}$ needs to process a job $j\in \ensuremath{\mathcal{J}}$ is specified by a machine and job dependent processing time $p_{ij} \geq 0$. When considering a scheduling problem the most common and perhaps most natural objective function is makespan minimization. This is the problem of finding
a schedule, also called an assignment, $\sigma: \ensuremath{\mathcal{J}}\mapsto \ensuremath{\mathcal{M}}$ so as to minimize the time $\max_{i\in \ensuremath{\mathcal{M}}} \sum_{j\in \sigma^{-1}(i)} p_{ij}$ required to process all the jobs.
A classic result in scheduling theory is Lenstra, Shmoys, and Tardos' $2$-approximation algorithm for this basic problem~\cite{LS90}. Their approach is based on several nice structural properties of the extreme point solutions of a natural linear program and has become a text book example of such techniques (see, e.\,g.,~\cite{V01}). Complementing their positive result they also proved that the problem is NP-hard to approximate within a factor less than $1.5$ even in the restricted case when $p_{ij} \in \{p_j, \infty\}$ (i.\,e., when job $j$ has processing time $p_j$ or $\infty$ for each machine). This problem is also known as the restricted assignment problem and, although it looks easier than the general version, the algorithm of choice has been the same $2$-approximation algorithm as for the general version.
Despite being a prominent open problem in scheduling theory, there has been very little progress on either the upper or lower bound since the publication of~\cite{LS90} over two decades ago. One of the biggest hurdles for improving the approximation guarantee has been to obtain a good lower bound on the optimal makespan. Indeed, the considered linear program has been useful for generalizations such as introducing job and machine dependent costs~\cite{ST93,Singh08} but is known to have an integrality gap of~$2-1/|\ensuremath{\mathcal{M}}|$ even in the restricted case. We note that Shchepin and Vakhania~\cite{SV05} presented a rounding achieving this gap slightly improving upon the approximation ratio of $2$.
In a relatively recent paper, Ebenlendr et al.~\cite{EKS08} overcame this issue in the special case of the restricted assignment problem where a job can be assigned to at most two machines. Their strategy was to add more constraints to the studied linear program, which allowed them to prove a $1.75$-approximation algorithm for this special case that they named Graph Balancing. The name arises naturally when interpreting the restricted assignment problem as a hypergraph with a vertex for each machine and a hyperedge $\Gamma(j) = \{i\in \ensuremath{\mathcal{M}}:p_{ij} = p_j\}$ for each job $j\in \ensuremath{\mathcal{J}}$ that is incident to the machines it can be assigned to. As pointed out by the authors of~\cite{EKS08} it seems difficult to extend their techniques to hold for more general cases. In particular, it can be seen that the considered linear program has an integrality gap of $2$ when we allow jobs that can be assigned to $3$ machines.
In this paper we overcome this obstacle by considering a certain strong linear program, often referred to as configuration LP. In particular, we obtain the first asymptotic improvement on the approximation factor of $2$.
\begin{theorem}
\label{thm:mainintro}
There is a polynomial time algorithm that estimates the optimal
makespan of the restricted assignment problem within a factor of
$33/17 + \epsilon\approx 1.9412 + \epsilon$, where $\epsilon>0$ is
an arbitrarily small constant. \end{theorem} We note that our proof gives a local search algorithm to also find a schedule with performance guarantee
$\frac{33}{17}$ but it is not known to converge in polynomial time.
Our techniques are based on the recent development on the related Santa Claus problem. In the Santa Claus problem we are given the same input as in the considered scheduling problem but instead of wanting to minimize the maximum we wish to maximize the minimum, i.\,e., to find an assignment $\sigma$ so as to maximize $\min_{i\in \ensuremath{\mathcal{M}}} \sum_{j\in
\sigma^{-1}(i)} p_{ij}$. The playful name now follows from associating the machines with kids and jobs with presents. Santa Claus' problem then becomes to distribute the presents so as to make the least happy kid as happy as possible.
The problem was first considered under this name by Bansal and Sviridenko~\cite{BS06}. They formulated and used the configuration LP to obtain an $O(\log\log \log |\ensuremath{\mathcal{M}}| / \log \log |\ensuremath{\mathcal{M}}|)$-approximation algorithm for the restricted Santa Claus problem, where $p_{ij} \in \{p_j, 0\}$. They also proved several structural properties that were later used by Feige~\cite{Feige08} to prove that the integrality gap of the configuration LP is in fact constant in the restricted case. The proof is based on repeated use of Lov\'{a}sz local lemma and was only recently turned into a polynomial time algorithm~\cite{HSS10}.
The approximation guarantee obtained by combining~\cite{Feige08} and~\cite{HSS10} is a large constant and the techniques do not seem applicable to the considered problem. This is because the methods rely on structural properties that are obtained by rounding the input and such a rounding applied to the scheduling problem would rapidly eliminate any advantage obtained over the current approximation ratio of $2$.
Instead, our techniques are mainly inspired by a paper of Asadpour et al.~\cite{AFS08} who gave a tighter analysis of the configuration LP for the restricted Santa Claus problem. More specifically, they proved that the integrality gap is lower bounded by $1/4$ by designing a local search algorithm that eventually finds a solution with the mentioned approximation guarantee, but is not known to converge in polynomial time.
Similar to their approach, we formulate the configuration LP and show that its integrality gap is upper bounded by $33/17$ by designing a local search algorithm. As the configuration LP can be solved in polynomial time up to any desired accuracy~\cite{BS06}, this implies Theorem~\ref{thm:mainintro}. Although we cannot prove that the local search converges in polynomial time, our results imply that the configuration LP gives a polynomial time computable lower bound on the optimal makespan that is strictly better than two. We emphasize that all the results related to hardness of approximation remain valid even for estimating the optimal makespan.
Before proceeding, let us mention that unlike the restricted assignment problem, the special case of uniform machines and that of a fixed number of machines are both significantly easier to approximate and are known to admit polynomial time approximation schemes~\cite{HS88,HS76,JP2001}. Also scheduling jobs on unrelated machines to minimize weighted completion time instead of makespan has a better approximation algorithm with performance guarantee $1.5$~\cite{SS02}. The Santa Claus problem has also been studied under the name Max-Min Fair Allocation and there have been several recent results for the general version of the problem (see e.g.~\cite{AS10,BCV09,CCK09}).
Compared to~\cite{AFS08}, our analysis is more complex and relies on the special structure of the dual of the linear program. To illustrate the main techniques, we have therefore chosen to first present the analysis for the case of only two job sizes (Section~\ref{sec:simple})
followed by the general case in Section~\ref{sec:mainalgo}.
\section{Preliminaries} \label{sec:prelim} As we consider the restricted case with $p_{ij} \in \{p_j, \infty\}$, without ambiguity, we refer to $p_j$ as the size of job $j$. For a subset $\ensuremath{\mathcal{J}}' \subseteq \ensuremath{\mathcal{J}}$ we let $p\left(\ensuremath{\mathcal{J}}'\right) = \sum_{j\in
\ensuremath{\mathcal{J}}'} p_j$ and often write $p(j)$ for $p(\{j\})$, which of course equals $p_j$.
We now give the definition of the configuration LP for the restricted assignment problem. Its intuition is that a solution to the scheduling problem with makespan $T$ assigns a set of jobs, referred to as a configuration, to each machine of total processing time at most $T$. Formally, we say that a subset $C \subseteq \ensuremath{\mathcal{J}}$ of jobs is a configuration for a machine $i\in \ensuremath{\mathcal{M}}$ if it can be assigned without violating a given target makespan $T$, i.e., $C\subseteq \{j: i\in \Gamma(j)\}$ and $p(C) \leq T$. Let $\mathcal{C}(i, T)$ be the set of configurations for machine $i\in \ensuremath{\mathcal{M}}$ with respect to the target makespan $T$. The configuration LP has a variable $x_{i,C}$ for each configuration $C$ for machine $i$ and two sets of constraints:
\begin{equation*} \text{\ensuremath{\mbox{[C-LP]}}} \qquad \begin{aligned}[t]
\sum_{C \in \conff{i,T}}x_{i, C} & \leq 1 & i \in \ensuremath{\mathcal{M}}\\
\sum_{C\ni j}\sum_{i} x_{i,C} & \geq 1 & j \in \ensuremath{\mathcal{J}} \\[-2mm]
x & \geq 0 & \end{aligned} \qquad \qquad \end{equation*}
The first set of constraints ensures that each machine is assigned at most one configuration and the second set of constraints says that each job should be assigned (at least) once.
Note that if \ensuremath{\mbox{[C-LP]}}{} is feasible with respect to some target makespan $T_0$ then it is also feasible with respect to all $T\geq T_0$. Let $OPT_{LP}$ denote the minimum over all such values of $T$. Since an optimal schedule of makespan $OPT$ defines a feasible solution to \ensuremath{\mbox{[C-LP]}}{} with $T=OPT$, we have $OPT_{LP} \leq OPT$. To simplify notation we will assume throughout the paper that $OPT_{LP} =1$ and denote $\conff{i,1}$ by $\conff{i}$. This is without loss of generality since it can be obtained by scaling processing times.
Although \ensuremath{\mbox{[C-LP]}}{} might have exponentially many variables, it can be solved (and $OPT_{LP}$ can be found by binary search) in polynomial time up to any desired accuracy $\epsilon >0$~\cite{BS06}. The strategy of~\cite{BS06} is to design a polynomial time separation oracle for the dual and then solve it using the ellipsoid method. To obtain the dual, we associate a dual variable $y_i$ with $i\in \ensuremath{\mathcal{M}}$ for each constraint from the first set of constraints and a dual variable $z_j$ with $j \in \ensuremath{\mathcal{J}}$ for each constraint from the second set of constraints. Assuming that the objective of $\ensuremath{\mbox{[C-LP]}}{}$ is to maximize an objective function with zero coefficients then gives the dual:
\begin{equation*} \text{Dual of \ensuremath{\mbox{[C-LP]}}} \qquad \begin{aligned}[t]
\min & \sum_{i\in \ensuremath{\mathcal{M}}} y_i - \sum _{j \in \ensuremath{\mathcal{J}}} z_j &\\
y_i&\geq \sum_{j\in C} z_j & i\in \ensuremath{\mathcal{M}}, C\in \conff{i} \\
y,z & \geq 0 \end{aligned} \end{equation*} Let us remark that, given a candidate solution $(y^*,z^*)$, the separation oracle has to find a violated constraint, if any, in polynomial time, and this amounts to solving $|\ensuremath{\mathcal{M}}|$ knapsack problems: for each $i\in \ensuremath{\mathcal{M}}$ solve the knapsack problem with capacity $1$ and an item with weight $p_{j}$ and profit $z^*_j$ for each $j\in \ensuremath{\mathcal{J}}$ with $i\in \Gamma(j)$. By rounding job sizes as explained in~\cite{BS06}, we can thus solve \ensuremath{\mbox{[C-LP]}}{} in polynomial time up to any desired accuracy.
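That is, some constraint associated with machine $i$ is violated by $(y^*,z^*)$ if and only if \[ \max\Bigl\{ \textstyle\sum_{j\in C} z^*_j \;:\; C \subseteq \{ j \in \ensuremath{\mathcal{J}} : i \in \Gamma(j)\},\ p(C) \leq 1 \Bigr\} \; > \; y^*_i , \] and the maximum on the left-hand side is exactly the optimum of the knapsack problem described above.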
\section{Overview of Techniques: Jobs of Two Sizes} \label{sec:simple} We give an overview of the main techniques used by considering the simpler case when we have jobs of two sizes: \emph{small} jobs of size $\epsilon$ and \emph{big} jobs of size $1$. Already for this case all previously considered linear programs have an integrality gap of $2$. In contrast we show the following for~\ensuremath{\mbox{[C-LP]}}.
\begin{theorem}
\label{thm:simple}
If an instance of the scheduling problem only has jobs of sizes
$\epsilon\geq 0$ and $1$ then \ensuremath{\mbox{[C-LP]}}{} has integrality gap at most
$5/3+\epsilon$. \end{theorem}
Throughout this section we let $R=2/3 + \epsilon$. The proof strategy is to design a local search algorithm that returns a solution with makespan at most $1+R$, assuming the \ensuremath{\mbox{[C-LP]}}{} is feasible. The algorithm starts with a partial schedule $\sigma$ with no jobs assigned to any machine. It will then repeatedly call a procedure, summarized in Algorithm~\ref{algo:SimpleExtSched}, that extends the schedule by assigning a new job until all jobs are assigned. When assigning a new job we need to ensure that $\sigma$ will still have a makespan of at most $1+R$. This might require us to also update the schedule $\sigma$ by moving already assigned jobs. For an example, consider Figure~\ref{fig:idea} where we have a partial schedule and wish to assign a new big job $j_{new}$. In the first step we try to assign $j_{new}$ to $M_1$ but discover that $M_1$ has too high load, i.e., the set of jobs assigned to $M_1$ have total processing time such that assigning $j_{new}$ to $M_1$ would violate the target makespan $1+R$. Therefore, in the second step we try to move jobs from $M_1$ to $M_2$ but $M_2$ has also too high load. Instead, we try to move $j_{new}$ to $M_3$. As $M_3$ already has a big job assigned we need to first reassign it. We try to reassign it to $M_4$ in the fourth step. In the fifth step we manage to move small jobs from $M_4$ to $M_3$, which makes it possible to also move the big job assigned to $M_3$ to $M_4$ and finally assign $j_{new}$ to $M_3$.
\begin{figure*}
\caption{Possible steps when moving jobs to assign a new job $j_{new}$. Big and small jobs depicted in dark and light grey, respectively.}
\label{fig:idea}
\end{figure*}
\paragraph{Valid schedule and move.} As alluded to above, the
algorithm always maintains a valid partial schedule by moving already
assigned jobs. Let us formally define these concepts.
\begin{definition}
\label{def:fissched}
A \emph{partial schedule} is an assignment $\sigma : \ensuremath{\mathcal{J}} \mapsto \ensuremath{\mathcal{M}} \cup
\{TBD\}$ with the meaning that a job $j$ with $\sigma(j) = TBD$ is not assigned.
A partial schedule is \emph{valid} if each machine $i\in \ensuremath{\mathcal{M}}$ is
assigned at
most one big job and $p(\sigma^{-1}(i)) \leq 1+R$.
\end{definition}
That $i$ is assigned at most one big job is implied here by
$p(\sigma^{-1}(i)) \leq 1+R$ (two big jobs alone would have total size $2 > 1+R$), but it will be used for the general case in
Section~\ref{sec:mainalgo}. Note also that with this notation a
normal schedule is just a partial schedule $\sigma$ with
$\sigma^{-1}(TBD) = \emptyset$.
\begin{definition}
A move is a tuple $(j,i)$ of a job $j\in \ensuremath{\mathcal{J}}$ and a machine $i\in
\Gamma_{\sigma}(j)$, where $\Gamma_{\sigma}(j) = \Gamma(j)
\setminus\{\sigma(j)\}$ denotes the machines to which $j$ can be
assigned apart from $\sigma(j)$.
\end{definition}
The main steps of the algorithm are the following. At the start it
will try to choose a valid assignment of $j_{new}$ to a machine,
i.e., one that can be made without violating the target makespan. If no
such assignment exists then the algorithm adds the set of jobs that
blocked the assignment of $j_{new}$ to the set of jobs we wish to
move. It then repeatedly chooses a move of a job $j$ from the set of
jobs blocking the assignment of $j_{new}$. If the move of $j$
is valid then, intuitively, this makes more space for
$j_{new}$. Otherwise, the set of jobs blocking the move of $j$ is
added to the set of jobs we wish to move, and the procedure continues
to move jobs recursively in subsequent iterations.
To ensure that we will be able to eventually assign a new job
$j_{new}$ it is important which moves we choose. The algorithm will
choose between certain moves that we call potential moves, defined so
as to guarantee (i) that the procedure terminates and that (ii) if no
potential move exists then we shall be able to prove that the dual of
\ensuremath{\mbox{[C-LP]}}{} is unbounded, contradicting the feasibility of the primal.
For this reason, we need to remember to which machines we have
already tried to move jobs and which jobs we wish to move. We next
describe how the algorithm keeps track of its history and how this
affects which move we choose. We then describe the types of
potential moves that the algorithm will choose between.
\paragraph{Tree of blockers.} To remember its history, Algorithm~\ref{algo:SimpleExtSched} maintains a dynamic tree $\ensuremath{\mathcal{T}}$ of
so-called \emph{blockers} that ``block'' moves we wish to
do. The blockers of $\ensuremath{\mathcal{T}}$ have both a tree structure and a linear structure. The
linear structure is simply the order in which the blockers were added
to $\ensuremath{\mathcal{T}}$. To distinguish between the two, we use \emph{child} and
\emph{parent} to refer to the tree structure, and \emph{after} and \emph{before} to refer
to the linear structure. We also use the convention that the blockers
$B_0, B_1, \dots, B_t$ of $\ensuremath{\mathcal{T}}$ are indexed according to the
linear order.
\begin{definition}
A \emph{blocker} $B$ is a pair consisting of a subset $\ensuremath{\mathcal{J}}(B)\subseteq
\ensuremath{\mathcal{J}}$ of jobs and a machine $\ensuremath{\mathcal{M}}(B)$, where $\ensuremath{\mathcal{M}}(B)$ takes the value $\bot$ if no
machine is assigned to the blocker.
\end{definition}
To simplify notation, we refer to the machines and jobs in
$\ensuremath{\mathcal{T}}$ by $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$ and $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$,
respectively. We will conceptually distinguish between \emph{small}
and \emph{big} blockers and use $\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ and
$\ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})$ to refer to the subsets of $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$
containing the machines in small and big blockers, respectively.
Formally, this convention adds one bit to the description of each
blocker, recording whether it is small or big.
The algorithm starts by initializing the tree $\ensuremath{\mathcal{T}}$ with a
special small blocker $B$ as root. Blocker $B$ is special in the
sense that it is
the only blocker with no machine assigned, i.e., $\ensuremath{\mathcal{M}}(B) = \bot$.
Its job set $\ensuremath{\mathcal{J}}(B)$ includes the job $j_{new}$ we wish to assign.
The next step of the procedure is to repeatedly try to move jobs,
until we can eventually assign $j_{new}$. During its execution, the procedure
also updates $\ensuremath{\mathcal{T}}$ based on which move is chosen, so that \begin{enumerate} \item $\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ contains those machines to which the algorithm will not try to move any jobs;
\item $\ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})$ contains those
machines to which the algorithm will not try to move any big jobs;
\item $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$
contains those jobs that the algorithm wishes to move. \end{enumerate}
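The following Python sketch (hypothetical names, for illustration only) records the information the tree $\ensuremath{\mathcal{T}}$ has to provide: the linear order of the blockers, the machine and jobs of each blocker, whether it is small or big, and the removal of a blocker together with all blockers added after it (the parent pointers giving the tree structure are omitted here).
\begin{verbatim}
from dataclasses import dataclass
from typing import Optional, Set, List

@dataclass
class Blocker:
    machine: Optional[int]   # None plays the role of "bot" for the root
    jobs: Set[int]
    kind: str                # "small" or "big"

class BlockerTree:
    def __init__(self, j_new):
        # special small root blocker: no machine, only the job j_new
        self.blockers: List[Blocker] = [Blocker(None, {j_new}, "small")]

    def machines(self, kind=None):
        # M(T), M_S(T) or M_B(T), depending on the argument
        return {B.machine for B in self.blockers
                if B.machine is not None
                and (kind is None or B.kind == kind)}

    def jobs(self):
        # J(T): all jobs the algorithm wishes to move
        return set().union(*(B.jobs for B in self.blockers))

    def add(self, blocker):
        self.blockers.append(blocker)   # linear order = insertion order

    def remove_from(self, blocker):
        # remove `blocker` and every blocker added after it
        self.blockers = self.blockers[:self.blockers.index(blocker)]
\end{verbatim}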
\paragraph{Potential moves.} For a move $(j,i)$ to be useful it should be of some job $j\in \ensuremath{\mathcal{J}}( \ensuremath{\mathcal{T}})$, as this set contains the jobs we wish to move to make space for the unassigned job $j_{new}$. In addition, the move $(j,i)$ should have the potential to succeed and be to a machine $i$ to which $j$ is allowed to be moved according to $\ensuremath{\mathcal{T}}$. We refer to such moves as \emph{potential} moves and to a subset of them as \emph{valid} moves. The difference is that for a potential move to succeed it might be necessary to recursively move other jobs, whereas a valid move can be performed immediately.
With this intuition, let us now define these concepts
formally.
\begin{definition}
A move $(j,i)$ of a job $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is a potential
\begin{description}\itemsep0mm \item[\textnormal{small move}:]
if $j$ is small and $i \not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$; \item [\textnormal{big-to-small move:}] if $j$ is big, $i \not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}}), p(S_i) \leq R,$ and no big job is assigned to $i$; \item [\textnormal{big-to-big move:} ] if $j$ is big, $i \not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}}), p(S_i) \leq R,$ and a big job is assigned to $i$; \end{description} where $S_i=\{j\in \sigma^{-1}(i): j \mbox{ is small with $\Gamma_\sigma(j) \subseteq \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$}\}$. A potential move $(j,i)$ is \emph{valid} if the update $\sigma(j) \leftarrow i$ results in a valid schedule. \end{definition} Note that $S_i$ refers to those small jobs assigned to $i$ with no potential moves with respect to the current tree. The condition $p(S_i) \leq R$ for big moves enforces that we do not try to move big jobs to machines where the load cannot decrease to at most $R$ without removing a blocker already present in $\ensuremath{\mathcal{T}}$.
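As an illustration, the classification above could be computed as follows; this Python sketch is only meant to mirror the definition, with \texttt{M\_S} and \texttt{M\_T} standing for the sets $\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ and $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$, \texttt{eps} the processing time of a small job, and the move assumed to satisfy $i \in \Gamma_\sigma(j)$.
\begin{verbatim}
def classify(j, i, sigma, p, Gamma, M_S, M_T, R, eps):
    # Returns "valid", "small", "big-to-small", "big-to-big",
    # or None if (j, i) is not a potential move.
    on_i = [q for q, m in sigma.items() if m == i]
    S_i = [q for q in on_i
           if p[q] == eps and set(Gamma[q]) - {i} <= M_S]
    if p[j] == eps:                                      # j is small
        kind = "small" if i not in M_S else None
    elif i not in M_T and sum(p[q] for q in S_i) <= R:   # j is big
        kind = ("big-to-big" if any(p[q] == 1 for q in on_i)
                else "big-to-small")
    else:
        kind = None
    if kind is None:
        return None
    # a potential move is valid if reassigning j keeps the schedule valid
    # (with two job sizes and R < 1 the load check also enforces that at
    # most one big job is assigned to i)
    if p[j] + sum(p[q] for q in on_i) <= 1 + R:
        return "valid"
    return kind
\end{verbatim}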
The algorithm's behavior depends on the type of the chosen potential move, say $(j,i)$ of a job $j \in \ensuremath{\mathcal{J}}(B)$ for some blocker $B$: \begin{itemize} \item If $(j,i)$ is a valid move then the schedule is updated by $\sigma(j) \leftarrow i$. Moreover, $\ensuremath{\mathcal{T}}$ is updated by removing $B$ and all blockers added after $B$. This will allow us to prove that the procedure terminates; the intuition is that $B$ blocked some move $(j',i')$ that is now more likely to succeed after $j$ has been reassigned.
\item If $(j,i)$ is a potential small or big-to-small move that is not
valid then the algorithm adds a small blocker $B_S$ as a child to
$B$ that consists of the machine $i$ and contains all jobs assigned
to $i$ that are not already in $\ensuremath{\mathcal{T}}$. Note that since
$B_S$ is a small blocker, the algorithm will afterwards not try to
move any further jobs to $i$. The intuition is that assigning more jobs to $i$
would make it less likely that $j$ can be assigned to $i$ in the future.
\item If $(j,i)$ is a potential big-to-big move
then the algorithm adds a big blocker $B_B$ as a child to $B$ that
consists of the machine $i$ and the big job that is assigned to $i$.
Since $B_B$ is a big blocker, this prevents us from trying to
assign more big jobs to $i$ but still allows us to try to
assign small jobs. The intuition is
that this does not prevent us from assigning $j$ to $i$ once the big
job currently assigned to $i$ has been reassigned. \end{itemize} We remark that the rules for updating $\ensuremath{\mathcal{T}}$ are such that a job can be in at most one blocker, whereas a machine can be in at most two blockers (this happens if it is first added in a big blocker and then in a small blocker).
Returning to the example in Figure~\ref{fig:idea}, we can see that after Step~$4$, $\ensuremath{\mathcal{T}}$ consists of the special root blocker with two children, which in turn have a child each. Machines $M_1, M_2$ and $M_4$ belong to small blockers whereas $M_3$ belongs to a big blocker. Moreover, the moves chosen in the first, second, and the third step are big-to-small, small, and big-to-big, respectively, and from Step $5$ to $6$ a sequence of valid moves is chosen.
\paragraph{Values of moves.} In a specific iteration there might be several potential moves available. For the analysis it is important that they are chosen in a specific order. Therefore, we assign a vector in $\mathbb{R}^2$ to each move and Algorithm~\ref{algo:SimpleExtSched} will then choose the move with minimum lexicographic value.
\begin{definition} \label{def:simpleval} If we let $L_i=\sigma^{-1}(i)$ then a potential move $(j,i)$ has value $$ \ensuremath{\textnormal{Val}}(j,i) = \left\{ \begin{array}{ll} (0, 0) & \mbox{if valid,} \\ (1,p(L_i)) & \mbox{if small move,} \\ (2,p(L_i)) & \mbox{if big-to-small,} \\ (3, 0) & \mbox{if big-to-big,} \end{array} \right. $$ \end{definition}
Note that as the algorithm chooses moves of minimum lexicographic value, it always chooses a valid move if available and a potential small move before a potential move of a big job.
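A sketch of how Definition~\ref{def:simpleval} and the selection rule could look in code (again with hypothetical helper names; \texttt{classify} refers to the sketch above):
\begin{verbatim}
def value(move_type, i, sigma, p):
    p_L_i = sum(p[q] for q, m in sigma.items() if m == i)   # p(L_i)
    return {"valid":        (0, 0),
            "small":        (1, p_L_i),
            "big-to-small": (2, p_L_i),
            "big-to-big":   (3, 0)}[move_type]

def choose_move(potential_moves, sigma, p):
    # potential_moves: iterable of (move_type, j, i) triples;
    # tuples are compared lexicographically, matching the definition
    return min(potential_moves,
               key=lambda t: value(t[0], t[2], sigma, p))
\end{verbatim}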
\paragraph{The algorithm.} Algorithm~\ref{algo:SimpleExtSched} summarizes the algorithm discussed above in a concise form. Given a valid partial schedule $\sigma$ and an unscheduled job $j_{new}$, we prove that the algorithm preserves a valid schedule by moving jobs until it can assign $j_{new}$. Repeating the procedure, choosing an unassigned job in each iteration until all jobs are assigned, then yields Theorem~\ref{thm:simple}.
\algsetup{ linenosize=\small, linenodelimiter=: } \begin{algorithm*}[hbtp] \caption{SimpleExtendSchedule($\sigma, j_{new}$)} \label{algo:SimpleExtSched}
\begin{algorithmic}[1]
\STATE Initialize \ensuremath{\mathcal{T}}{} with the root $\ensuremath{\mathcal{J}}(B) = \{j_{new}\}$ and $\ensuremath{\mathcal{M}}(B) = \bot$\; \WHILE{$\sigma(j_{new})$ is TBD} \STATE Choose a potential move $(j,i)$ with $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ of minimum lexicographic value\; \STATE Let $B$ be the blocker in $\ensuremath{\mathcal{T}}$ such that $j \in \ensuremath{\mathcal{J}}(B)$\; \IF{$(j,i)$ is valid} \STATE Update the schedule by $\sigma(j) \leftarrow i$\; \STATE Update $\ensuremath{\mathcal{T}}$ by removing $B$ and all blockers added after $B$\; \ELSIF{$(j,i)$ is either a potential small move or a potential big-to-small move} \STATE Add a small blocker $B_S$ as child to $B$ with $\ensuremath{\mathcal{M}}(B_S) = i$ and
$\ensuremath{\mathcal{J}}(B_S) = \sigma^{-1}(i) \setminus \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$\; \ELSE[$(j,i)$ is a big-to-big move] \STATE Let $j_B$ be the big job such that $\sigma(j_B) = i$\; \STATE Add a big blocker $B_B$ as a child to $B$ with $\ensuremath{\mathcal{J}}(B_B) = \{j_B\}$ and $\ensuremath{\mathcal{M}}(B_B) = i$\; \ENDIF \ENDWHILE \RETURN $\sigma$\; \end{algorithmic} \end{algorithm*}
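The outer loop described above then amounts to the following sketch, where \texttt{simple\_extend\_schedule} stands for Algorithm~\ref{algo:SimpleExtSched} (the name and interface are hypothetical):
\begin{verbatim}
def schedule_all(jobs, simple_extend_schedule):
    sigma = {j: None for j in jobs}      # start with no jobs assigned
    for j_new in jobs:
        if sigma[j_new] is None:
            sigma = simple_extend_schedule(sigma, j_new)
    return sigma                         # makespan at most 1 + R
\end{verbatim}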
\subsection{Analysis} Since the algorithm only updates $\sigma$ when a valid move is chosen, the schedule stays valid throughout the execution. It remains to verify that the algorithm terminates and that there always is a potential move to choose.
Before proceeding with the proofs, we need to introduce some notation. When arguing about $\ensuremath{\mathcal{T}}$ we will let \begin{itemize} \item $\ensuremath{\mathcal{T}}_t$ be the subtree of $\ensuremath{\mathcal{T}}$ induced by the blockers $B_0, B_1, \dots, B_t$; \item $S(\ensuremath{\mathcal{T}}_t) = \{j\in \ensuremath{\mathcal{J}}: \mbox{$j$ is small with $\Gamma_\sigma(j)
\subseteq \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}}_t)$}\}$; we often refer to $S(\ensuremath{\mathcal{T}})$ simply as $S$; and \item $S_i(\ensuremath{\mathcal{T}}_t) = \sigma^{-1}(i) \cap S(\ensuremath{\mathcal{T}}_t)$ (we often
refer to $S_i(\ensuremath{\mathcal{T}})$ as $S_i$). \end{itemize} The set $S(\ensuremath{\mathcal{T}}_t)$ contains the small jobs with no potential moves with respect to $\ensuremath{\mathcal{T}}_t$. Therefore no job in $S(\ensuremath{\mathcal{T}}_t)$ has been reassigned since $\ensuremath{\mathcal{T}}_t$ became a subtree of $\ensuremath{\mathcal{T}}$, i.e., since $B_t$ was added. We can thus omit the dependence on $\sigma$ when referring to $S(\ensuremath{\mathcal{T}}_t)$ and $S_i(\ensuremath{\mathcal{T}}_t)$ without ambiguity. A related observation that will be useful throughout the analysis is the following: no job in a blocker $B$ of $\ensuremath{\mathcal{T}}$ has been reassigned after $B$ was added, since such a reassignment would have caused the algorithm to remove $B$ (and all blockers added after $B$).
We now continue by first proving that there always is a potential move to choose if \ensuremath{\mbox{[C-LP]}}{} is feasible, followed by the proof in Section~\ref{sec:simpletermination} that the procedure terminates.
\subsubsection{Existence of potential moves} \label{sec:simpleexistence} We prove that the algorithm never gets stuck if the \ensuremath{\mbox{[C-LP]}}{} is feasible.
\begin{lemma} \label{lemma:simplefis}
If \ensuremath{\mbox{[C-LP]}}{} is feasible then Algorithm~\ref{algo:SimpleExtSched} can
always choose a potential move. \end{lemma} \begin{proof}
Suppose that the algorithm has reached an iteration where no
potential move is available.
We will show that this implies that the dual of \ensuremath{\mbox{[C-LP]}}{}
is unbounded, and we can thus deduce, as required, that the primal is
infeasible whenever no potential move is available.
As each solution $(y,z)$ of the dual can be scaled by any scalar
$\alpha \geq 0$ to obtain a new solution $(\alpha y, \alpha z)$, any
solution such that $\sum_{i\in \ensuremath{\mathcal{M}}} y_i < \sum_{j\in \ensuremath{\mathcal{J}}} z_j$
implies unboundedness. We proceed by defining such a solution $(y^*,
z^*)$: \[ z_j^* = \begin{cases}
2/3 & \mbox{if $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is big,}\\
p_j = \epsilon & \mbox{if $j \in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})\cup S$ is small,} \\
0 & \mbox{otherwise,} \end{cases} \]
and \[ y^*_i = \begin{cases}
1 & \mbox{if } i \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}}),\\
\sum_{j\in \sigma^{-1}(i)} z_j^* & \mbox{otherwise.}
\end{cases} \] Let us first verify that $(y^*, z^*)$ is indeed a feasible solution. \begin{claim} \label{claim:simplefis}
Assuming no potential moves are available, $(y^*, z^*)$ is a
feasible solution. \end{claim} \begin{proofclaim}
We need to verify that $y_i^* \geq \sum_{j\in C} z_j^*$ for each
$i\in \ensuremath{\mathcal{M}}$ and each $C \in \conff{i}$. Recall that the total
processing time of the jobs in a configuration is at most $1$. Also
observe that $z_j^* = 0$ for jobs not in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})\cup S$ and
we can thus omit such jobs when verifying the constraints.
Since $z_j^* \leq p_j$ for all $j\in \ensuremath{\mathcal{J}}$, no
constraint involving the variable $y_i^*$ for $i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$
is violated. Indeed, for such a machine $i$ we have $ y_i^* = 1$ and
$\sum_{j \in C} z_j^* \leq \sum_{j \in C} p_j \leq
1$ for any $C \in \conff{i}$.
As a move $(j,i)$ of a small job $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is a
potential move whenever $i \not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$, and no potential moves exist by assumption, no small job in
$\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ can be moved to a machine in
$\ensuremath{\mathcal{M}} \setminus \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$. Also, by definition, no small job in $S$ can
be moved to a machine $i\not\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$. This, together with the fact
that a big job $j_B$ has processing time $1$ and is thus alone in a
configuration, gives us that a constraint involving $y_i^*$ for
$i\not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ can only be violated if $y_i^* <
z_{j_B}^* = 2/3$ for some big job $j_B \in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ with a move to $i$.
As each machine $i \in \ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})$
has a big job from $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ assigned to it, we have $y_i^* \geq 2/3$ for those machines. Now
consider the final case when $i \not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$. If a big job in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$
has a move to $i$ then, since it is not a potential move, $p(S_i)
> R \geq 2/3$. As $z^*_j = p_j = \epsilon$ for the small jobs in $S_i$, we then have $y_i^* = \sum_{j\in \sigma^{-1}(i)} z_j^* \geq p(S_i) \geq 2/3$, as required.
We can thus conclude that no constraint is violated and $(y^*, z^*)$ is a feasible solution. \end{proofclaim}
Having proved that $(y^*,z^*)$ is a feasible solution, the proof of Lemma~\ref{lemma:simplefis} is now completed by showing that the value of the solution is negative. \begin{claim} \label{claim:simplenegative} We have that $\sum_{i\in \ensuremath{\mathcal{M}}} y^*_i < \sum_{j\in \ensuremath{\mathcal{J}}} z^*_j$. \end{claim} \begin{proofclaim}
By the definition of $y^*$,
\begin{equation}
\label{eq:simpleval}
\sum_{i\in \ensuremath{\mathcal{M}}} y^*_i =
\sum_{i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} 1 + \sum_{i \not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})}
\sum_{j\in\sigma^{-1}(i)} z_j^*
\end{equation}
We proceed by bounding $\sum_{i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} 1$ from above by
$$\sum_{i \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} \sum_{j\in \sigma^{-1}(i)} z_j^*.$$
Let $B_0, B_1, \dots, B_\ell$ be the blockers of $\ensuremath{\mathcal{T}}$ and
consider a small blocker $B_t$ for some $t=1, \dots, \ell$. By the
definition of Algorithm~\ref{algo:SimpleExtSched}, $B_t$ was added in an
iteration when either a potential small or big-to-small move
$(j_0,i_t)$ was chosen with $\ensuremath{\mathcal{M}}(B_t) = i_t$. Suppose first that
$(j_0,i_t)$ was a potential small move. Then as it was not valid,
$ p(j_0) + p(\sigma^{-1}(i_t)) > 1+R.$ This inequality, together
with the facts that $i_t$ is assigned at most one big job $j_B$ and that every job assigned to $i_t$ belongs to $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})\cup S$, gives us
that if $(j_0, i_t)$ is a small move then \begin{equation} \label{eq:simplecount} \sum_{j\in \sigma^{-1}(i_t)} z_j^* = p(\sigma^{-1}(i_t)) - \left(p(j_B) - z_{j_B}^*\right) \geq 1+R-p(j_0) - \frac{1}{3} = \frac{4}{3}. \end{equation}
On the other hand, if $(j_0, i_t)$ is a potential big-to-small move then, as it was not valid, \begin{equation} \label{eq:simplecount2} \frac{2}{3} \leq R < p(\sigma^{-1}(i_t)) = \sum_{j\in \sigma^{-1}(i_t)} z_j^*, \end{equation} where the equality follows from the fact that $i_t$ is only assigned small jobs (since we assumed $(j_0, i_t)$ was a big-to-small move).
From~\eqref{eq:simplecount} and~\eqref{eq:simplecount2} we can see that $\sum_{i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} 1$ is bounded from above by $\sum_{i \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} \sum_{j\in \sigma^{-1}(i)} z_j^*$ provided that the number of small blockers added because of small moves is at least the number of small blockers added because of big-to-small moves.
We prove this by showing that if $B_t$ is a small blocker added because of a potential big-to-small move then $B_{t+1}$ must be a small blocker added because of a small move. Indeed, the definition of a potential big-to-small move $(j_0, i_t)$ and the fact that it was not valid imply that \begin{equation*}
p\left(S_{i_t}(\ensuremath{\mathcal{T}}_t)\right) \leq R \qquad \mbox{and} \qquad p\left(\sigma^{-1}(i_t)\right) > R. \end{equation*}
As there are no big jobs assigned to $i_t$ (using that $(j_0, i_t)$ was a big-to-small move), the above inequalities give us that there is always a potential small move of a small job assigned to $i_t$ with respect to $\ensuremath{\mathcal{T}}_{t}$. In other words, we have that $B_t$ was not the last blocker added to $\ensuremath{\mathcal{T}}$ and as small potential moves have the smallest lexicographic value (apart from valid moves), $B_{t+1}$ must be a small blocker added because of a small move.
We can thus ``amortize'' the load of $B_{t+1}$ to increase the load of $B_t$. Indeed, if we let $\ensuremath{\mathcal{M}}(B_{t+1}) = i_{t+1}$ then~\eqref{eq:simplecount} and~\eqref{eq:simplecount2} yield $ \sum_{j\in \sigma^{-1}(i_t)} z_j^* +\sum_{j\in \sigma^{-1}(i_{t+1})} z_j^* \geq 2. $
Pairing each small blocker $B_t$ added because of a big-to-small move with the small blocker $B_{t+1}$ added because of a small move as above allows us to deduce that $|\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})| \leq \sum_{i \in
\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} \sum_{j\in \sigma^{-1}(i)} z_j^*$. Combining this inequality with~\eqref{eq:simpleval} yields $$ \sum_{i\in \ensuremath{\mathcal{M}}} y^*_i \leq \sum_{i\in \ensuremath{\mathcal{M}}} \sum_{j\in \sigma^{-1}(i)} z_j^* \leq \sum_{j\in \ensuremath{\mathcal{J}}} z_j^* - z_{j_{new}}^* < \sum_{j\in \ensuremath{\mathcal{J}}} z_j^*, $$ as required; here the second inequality holds since $j_{new}$ is unassigned and $z_j^* \geq 0$ for every job, and the strict inequality holds since $z_{j_{new}}^* > 0$. \end{proofclaim}
We have proved that, assuming there are no potential moves, there is a solution $(y^*, z^*)$ to the dual that is feasible (Claim~\ref{claim:simplefis}) and has negative value (Claim~\ref{claim:simplenegative}). In other words, \ensuremath{\mbox{[C-LP]}}{} cannot be feasible if no potential move can be chosen, which completes the proof of the lemma.
\end{proof}
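For illustration, the dual certificate used in the proof above can be written down mechanically from a ``stuck'' state; the following Python sketch (hypothetical interface) simply mirrors the definitions of $z^*$ and $y^*$.
\begin{verbatim}
def dual_certificate(jobs, machines, sigma, p, J_T, S, M_S, eps, big):
    # J_T = J(T), S as defined above, M_S = M_S(T)
    z = {}
    for j in jobs:
        if j in J_T and big(j):
            z[j] = 2.0 / 3.0
        elif (j in J_T or j in S) and not big(j):
            z[j] = eps                       # equals p_j for small jobs
        else:
            z[j] = 0.0
    y = {}
    for i in machines:
        if i in M_S:
            y[i] = 1.0
        else:
            y[i] = sum(z[j] for j, m in sigma.items() if m == i)
    # Lemma: if no potential move exists, sum(y) < sum(z),
    # so the dual is unbounded and [C-LP] is infeasible.
    return y, z
\end{verbatim}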
\subsubsection{Termination} \label{sec:simpletermination}
We continue by proving that Algorithm~\ref{algo:SimpleExtSched} terminates. As the algorithm only terminates when a new job is assigned, Theorem~\ref{thm:simple} follows from Lemma~\ref{lemma:simpleterm} together with Lemma~\ref{lemma:simplefis} since then we can, as already explained, repeat the procedure until all jobs are assigned.
The intuition behind termination, assuming there always is a potential move, is the following. Every time the algorithm chooses a potential move that is not valid, a new blocker is added to the tree, and since each machine can be in at most two blockers, the tree contains at most $2|\ensuremath{\mathcal{M}}|$ blockers. Hence the algorithm must choose a valid move after at most
$2|\ensuremath{\mathcal{M}}|$ steps. Such a move may trigger further valid moves, and each valid move makes some previously blocked potential move more ``likely'' to succeed. We can then guarantee progress by measuring this ``likeliness'' in terms of the lexicographic value of the move.
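The progress measure used in the proof below can be pictured as follows: the state of $\ensuremath{\mathcal{T}}$ is summarized by the sequence of values of the moves that created its blockers, padded with an ``infinity'' entry, and these sequences are compared lexicographically (a small self-contained sketch):
\begin{verbatim}
INF = (float("inf"), float("inf"))

def signature(blocker_values):
    # blocker_values: Val(B_0), Val(B_1), ..., Val(B_l) in linear order
    return tuple(blocker_values) + (INF,)

# Python compares tuples lexicographically, so for example:
assert signature([(1, 0.5)]) < signature([(1, 0.7)])          # smaller load
assert signature([(1, 0.5), (3, 0)]) < signature([(1, 0.5)])  # new blocker
\end{verbatim}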
\begin{lemma} \label{lemma:simpleterm}
Assuming there is always a potential move to choose, Algorithm~\ref{algo:SimpleExtSched} terminates. \end{lemma} \begin{proof}
To prove that the procedure terminates we associate a vector, for
each iteration, with the dynamic tree $\ensuremath{\mathcal{T}}$. We will then show
that the lexicographic order of these vectors decreases.
The vector associated to $\ensuremath{\mathcal{T}}$ is defined as follows. Let $B_0,
B_1, \dots, B_\ell$ be the blockers of $\ensuremath{\mathcal{T}}$. With
blocker $B_i$ we will associate the value vector, denoted by
$\ensuremath{\textnormal{Val}}(B_i)$, of the move that was chosen in the iteration when $B_i$
was added. The vector associated with $\ensuremath{\mathcal{T}}$ is then simply $$ (\ensuremath{\textnormal{Val}}(B_0), \ensuremath{\textnormal{Val}}(B_1), \dots, \ensuremath{\textnormal{Val}}(B_\ell), \infty). $$ If the algorithm adds a new blocker then the lexicographic value clearly decreases, since the old vector has $\infty$ in the position where the new vector has the finite value of the added blocker. It remains to verify what happens when blockers are removed from $\ensuremath{\mathcal{T}}$. In that case, let the algorithm run until it either chooses a potential move that is not valid or terminates; as blockers are removed in every intermediate iteration, we eventually reach one of these two cases. If the algorithm terminates we are obviously done.
Instead, suppose that starting with $\sigma$ and $\ensuremath{\mathcal{T}}$ the algorithm does a sequence of steps where blockers are removed until we are left with an updated schedule $\sigma_k$ and a tree of blockers $\ensuremath{\mathcal{T}}_k$ with $k+1\leq\ell$ blockers, at which point a potential move $(j', i')$ that is not valid is chosen. As whenever a blocker is removed all blockers added after it are removed as well, $\ensuremath{\mathcal{T}}_k$ equals the subtree of $\ensuremath{\mathcal{T}}$ induced by $B_0, B_1, \dots, B_k$.
We will thus concentrate on comparing the lexicographic value of $(j', i')$ with that of $B_{k+1}$. Recall that $\ensuremath{\textnormal{Val}}(B_{k+1})$ equals the value of the move that was chosen when $B_{k+1}$ was added, say $(j_{t}, i_{k+1})$ for $j_{t} \in \ensuremath{\mathcal{J}}(B_{t})$ with $0\leq t \leq k$ and $ \ensuremath{\mathcal{M}}(B_{k+1}) = i_{k+1}$.
A key observation is that since blocker $B_{k+1}$ was removed but not $B_k$, the most recent move was of a job $j_{k+1}\in \ensuremath{\mathcal{J}}(B_{k+1})$ and we have $\sigma(j_{k+1}) = i_{k+1}$ and $\sigma_k(j_{k+1}) \neq i_{k+1}$. Moreover, as $(j_t, i_{k+1})$ was a potential move when $B_{k+1}$ was added, it is a potential move with respect to $\ensuremath{\mathcal{T}}_k$ (using that $S_{i_{k+1}}\left(\ensuremath{\mathcal{T}}_k\right)$ has not changed). Using these observations we now show that the lexicographic value of $(j_t, i_{k+1})$ has decreased. As the algorithm always chooses the move of minimum lexicographic value, this will imply $\ensuremath{\textnormal{Val}}(j',i') < \ensuremath{\textnormal{Val}}(B_{k+1})$ as required.
If $(j_t, i_{k+1})$ was a small or big-to-small move then $B_{k+1}$ was a small blocker. As no jobs were moved to $i_{k+1}$ after $B_{k+1}$ was added and $j_{k+1}$ has just been moved away from $i_{k+1}$, we have $p(\sigma_k^{-1}(i_{k+1})) < p(\sigma^{-1}(i_{k+1}))$, so the lexicographic value of $(j_t, i_{k+1})$ has decreased.
Otherwise, if $(j_t, i_{k+1})$ was a big-to-big move then $j_{k+1}$ must be a big job. As we only have jobs of two sizes, the move of the big job $j_{k+1}$ away from $i_{k+1}$ implies that $(j_t, i_{k+1})$ is now a valid move, which contradicts the assumption that the algorithm has chosen a potential but not valid move $(j', i')$. \hide{ together with the value of moves
\ensuremath{\textnormal{Val}}(B_{k+1})$ by distinguishing three cases.
\begin{itemize} \item{\emph{$(j_{t}, i_{k+1})$ was a small move:}} Then $B_{k+1}$ is a small
blocker and no jobs have been moved to $i_{k+1}$ in the iterations
after $B_{k+1}$ was added. Clearly, $(j_{t}, i_{k+1})$ is also a
potential small move with respect to $\sigma_k$ and $\ensuremath{\mathcal{T}}_k$. Furthermore by the key observation, $$ \ensuremath{\textnormal{Val}}(j', i') \leq \ensuremath{\textnormal{Val}}(j_{t}, i_{k+1}) = (1, p(\sigma_k^{-1}(i_{k+1}))) < (1, p(\sigma^{-1}(i_{k+1}))) = \ensuremath{\textnormal{Val}}(B_{k+1}), $$ as required. \item{\emph{$(j_{t}, i_{k+1})$ was a big-to-small move:}} This case follows in
the same manner as the previous case but we need to verify that
$(j_{t}, i_{k+1})$ is still a potential move with respect to
$\ensuremath{\mathcal{T}}_k$ and $\sigma_k$.
As $(j_{t}, i_{k+1})$ was a potential move with respect to
$\ensuremath{\mathcal{T}}_k$ when $B_{k+1}$ was added, we have by
Observation~\ref{obs:simple} that no jobs in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}}_k)$ have
been reassigned and thus that the inequality $$ p(S_{i_{k+1}}) \leq R$$ is still satisfied with respect to $\sigma_k$ and $\ensuremath{\mathcal{T}}_k$. Indeed, $S_{i_{k+1}} = \{j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}}_k) \cap \sigma^{-1}(i_{k+1}): j \mbox{ is small}\}$ only depends on the jobs in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}}_k)$ that have not been reassigned.
We have thus that $(j_t, i_{k+1})$ is also a potential small move with respect to $\sigma_k$ and $\ensuremath{\mathcal{T}}_k$. Similar to the previous case we have thus $\ensuremath{\textnormal{Val}}(j',i') \leq \ensuremath{\textnormal{Val}}(j_t, i_{k+1}) = (2, p(\sigma_k^{-1}(i_{k+1}))) < (2, p(\sigma^{-1}(i_{k+1}))) = \ensuremath{\textnormal{Val}}(B_{k+1}), $ as required.
\item{\emph{$(j_{t}, i_{k+1})$ was a big-to-big move:}} By the arguments of the previous case we have that $(j_{t}, i_{k+1})$ is still a potential move with respect to $\sigma_k$ and $\ensuremath{\mathcal{T}}_k$. Since $j_{k+1}\in \ensuremath{\mathcal{J}}(B_{k+1})$ must be a big job we have that $i_{k+1}$ is not assigned any big job with respect to $\sigma_k$. Hence, $(j_{t}, i_{k+1})$ will in the next iteration either be a valid or potential big-to-small move. In either case $\ensuremath{\textnormal{Val}}(j', i') \leq \ensuremath{\textnormal{Val}}(j_{t}, i_{k+1}) < \ensuremath{\textnormal{Val}}(B_{k+1})$ as required.
\end{itemize}
}
We have thus proved that the vector associated with $\ensuremath{\mathcal{T}}$ strictly decreases in lexicographic order. As there are at most $2|\ensuremath{\mathcal{M}}|$ blockers in $\ensuremath{\mathcal{T}}$ (one small and one big blocker for each machine) and the vector associated with a blocker can take only finitely many values, we conclude that the algorithm terminates. \end{proof}
\section{Proof of Main Result} \label{sec:mainalgo} In this section we extend the techniques presented in Section~\ref{sec:simple} to prove our main result, i.e., that there is a polynomial time algorithm that estimates the optimal makespan of the restricted assignment problem within a factor of $\frac{33}{17} + \epsilon \approx 1.9412 +\epsilon$, where $\epsilon > 0$ is an arbitrarily small constant. More specifically, we shall show the following theorem, which clearly implies Theorem~\ref{thm:mainintro}. As mentioned, the small loss of $\epsilon$ is due to the fact that the known polynomial time algorithms only solve \ensuremath{\mbox{[C-LP]}}{} up to any desired accuracy. \begin{theorem} \label{thm:main} The \ensuremath{\mbox{[C-LP]}}{} has integrality gap at most $\frac{33}{17}$.
\end{theorem} Throughout this section we let $R = \frac{16}{17}$. The proof closely follows the proof of Theorem~\ref{thm:simple}, i.e., we design a local search algorithm that returns a solution with makespan at most $1+R$, assuming the \ensuremath{\mbox{[C-LP]}}{} is feasible. The strategy is again to repeatedly call a procedure, now summarized in Algorithm~\ref{algo:ExtSched}, that extends a given partial schedule by assigning a new job while maintaining a valid schedule. Recall that a partial schedule is valid if each machine $i\in \ensuremath{\mathcal{M}}$ is assigned at most one big job and $p(\sigma^{-1}(i)) \leq 1+R$, where now $R= \frac{16}{17}$. We note that the requirement that a machine is assigned at most one big job is a genuine restriction here.
Since we now allow jobs of arbitrary sizes, we need to define what big and small jobs are. In addition, we have medium jobs and partition the big jobs into large and huge jobs. \begin{definition}
A job $j$ is called \emph{big} if $p_j \geq {11}/{17}$,
\emph{medium} if $11/17 > p_j > 9/17$, and \emph{small} if $p_j \leq
9/17$. Let the sets $\ensuremath{\mathcal{J}}_B, \ensuremath{\mathcal{J}}_M, \ensuremath{\mathcal{J}}_S$ contain the big, medium,
and small jobs, respectively. We will also call a big job $j$ \emph{huge}
if $p_j \geq 14/17$ and otherwise \emph{large}. \end{definition} The job sizes are chosen so as to optimize the achieved approximation ratio with respect to the analysis and were obtained by solving a linear program.
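A small sketch of the job classes (using exact fractions; purely illustrative):
\begin{verbatim}
from fractions import Fraction as F

def job_class(p_j):
    if p_j >= F(11, 17):                                  # big
        return "huge" if p_j >= F(14, 17) else "large"
    if p_j > F(9, 17):
        return "medium"
    return "small"

assert job_class(F(1, 2)) == "small"      # 1/2 <= 9/17
assert job_class(F(10, 17)) == "medium"
assert job_class(F(12, 17)) == "large"
assert job_class(F(15, 17)) == "huge"
\end{verbatim}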
We shall also need to extend and change some of the concepts used in Section~\ref{sec:simple}. The final goal is still that the procedure shall choose potential moves so as to guarantee (i) that the procedure terminates and that (ii) if no potential move exists then we shall be able to prove that the dual of \ensuremath{\mbox{[C-LP]}}{} is unbounded.
The main difficulty compared to the case of two job sizes in Section~\ref{sec:simple} is the following. A key step of the analysis in the case with only small and big jobs was that if small jobs blocked the move of a big job then we were guaranteed that one of the small jobs on the blocking machine had a potential small move. This allowed us to amortize the load when analyzing the dual. In the general case, this might not be possible when a medium job is blocking the move of a huge job. Therefore, we need to introduce a new type of blocker, called a \emph{medium blocker}, that plays a similar role to big blockers but, instead of containing a single big job, contains at least one medium job that blocks the move of a huge job. Somewhat unintuitively, for technical reasons we allow large, but not medium or huge, jobs to be moved to machines in medium blockers.
Let us now point out the modifications needed starting with the tree $\ensuremath{\mathcal{T}}$ of blockers.
\paragraph{Tree of blockers.} Similar to Section~\ref{sec:simple}, Algorithm~\ref{algo:ExtSched} remembers its history by using the dynamic tree $\ensuremath{\mathcal{T}}$ of blockers. As already mentioned, it will now also have medium blockers, which play a similar role to big blockers but, instead of containing a single big job, contain a set of medium jobs. We
use $\ensuremath{\mathcal{M}_M}(\ensuremath{\mathcal{T}})$ to refer to the subset of $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$ containing the machines in medium blockers.
As in the case of two job sizes, Algorithm~\ref{algo:ExtSched} initializes the tree \ensuremath{\mathcal{T}}{} with the special small blocker as root, consisting of the job $j_{new}$. The next step of the procedure is to repeatedly choose valid and potential moves until we can eventually assign $j_{new}$. During its execution, the procedure now updates $\ensuremath{\mathcal{T}}$ based on which move is chosen, so that \begin{enumerate} \item $\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ contains those machines to which the algorithm will not try to move any jobs; \item $\ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})$ contains those machines to which the algorithm will not try to move any huge, large, or medium jobs; \item $\ensuremath{\mathcal{M}_M}(\ensuremath{\mathcal{T}})$ contains those machines to which the algorithm will not try to move any huge or medium jobs;
\item $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$
contains those jobs that the algorithm wishes to move. \end{enumerate} We can see that small jobs are treated as in Section~\ref{sec:simple}, i.e., they can be moved to all machines apart from those in $\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$. Similar to big jobs in that section, huge and medium jobs can only be moved to a machine not in $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$. The difference lies in how large jobs are treated: they may be moved not only to machines outside $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$ but also to machines that are only contained in medium blockers, i.e., machines in $\ensuremath{\mathcal{M}_M}(\ensuremath{\mathcal{T}}) \setminus (\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}}) \cup \ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}}))$.
\paragraph{Potential moves.} As before a potential move $(j,i)$ will be called \emph{valid} if the update $\sigma(j) \leftarrow i$ results in a valid schedule, but the definition of potential moves needs to be extended to include the different job sizes. The definition for small jobs remains unchanged: a move $(j,i)$ of a small job $j \in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is a potential small move if $i \not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$. A subset of the potential moves of medium and large jobs will also be called potential small moves.
\begin{definition}
Consider a move $(j,i)$ of a medium or large job $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ such that $i \not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$ if $j$ is medium, and $i \not \in \ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}}) \cup \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ if $j$ is large. Such a move is a potential
\begin{description}
\item[\textnormal{small move:}] if $i$ is not assigned a big job.
\item[\textnormal{medium/large-to-big:}] if $i$ is assigned a big job.
\end{description} \end{definition} Note that the definition takes into account the convention that large jobs are allowed to be assigned to machines not in $\ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}}) \cup \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$, whereas medium jobs are only allowed to be assigned to machines not in $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$. The reason for distinguishing between whether or not $i$ is assigned a big (huge or large) job will become apparent in the analysis. The idea is that if a machine $i$ that is not assigned a big job blocks a move of a medium or large job, then the dual variables $z^*$ will satisfy $\sum_{j \in\sigma^{-1}(i)} z_j^* \geq 1$. We cannot guarantee this if $i$ is assigned a big job, since when setting the dual variables we round down the sizes of big jobs, which might decrease the sum by as much as $6/17$.
It remains to define the potential moves of huge jobs. \begin{definition} A move $(j, i)$ of a huge job $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ to a machine $i \not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$ is a potential \begin{description} \item[\textnormal{huge-to-small move}:] if no big job is
assigned to $i$ and $p(j) +
p(S_i \cup M_i) \leq 1+R$; \item [\textnormal{huge-to-big move:}] if a big job is assigned to $i$ and $p(j) +
p(S_i \cup M_i) \leq 1+R$;
\item [\textnormal{huge-to-medium move:}] if $p(j) + p(S_i) \leq 1+R$ and
$p(j) + p(S_i\cup M_i) > 1+R$; \end{description} where $S_i= \{j\in \sigma^{-1}(i): j \mbox{ is small with }\Gamma_\sigma(j) \subseteq \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})\}$ and $M_i = \ensuremath{\mathcal{J}}_M \cap \sigma^{-1}(i)$. \end{definition} Again $S_i$ denotes the set of small jobs assigned to $i$ with no potential moves with respect to the current tree. The set $M_i$ contains the medium jobs currently assigned to $i$. Moves huge-to-small and huge-to-big correspond to the moves big-to-small and big-to-big in Section~\ref{sec:simple}, respectively. The constraint $p(j) + p(S_i \cup M_i) \leq 1+R$ says that such a move should only be chosen if it can become valid by not moving any medium jobs assigned to $i$. The additional move called huge-to-medium covers the case when moving the medium jobs assigned to $i$ is necessary for the move to become valid.
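The classification of the moves of a huge job can be sketched as follows (hypothetical interface; \texttt{S\_i} and \texttt{M\_i} are the job sets of the definition, passed in as lists, and the move is assumed to go to a machine outside $\ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$):
\begin{verbatim}
def classify_huge_move(p_j, i_has_big_job, S_i, M_i, p, R):
    p_S = sum(p[q] for q in S_i)
    p_SM = p_S + sum(p[q] for q in M_i)
    if p_j + p_SM <= 1 + R:
        return "huge-to-big" if i_has_big_job else "huge-to-small"
    if p_j + p_S <= 1 + R:
        return "huge-to-medium"
    return None   # not a potential move
\end{verbatim}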
Similar to before, the behavior of the algorithm depends on the type of the chosen potential move. Valid moves and potential small moves are treated exactly as in Section~\ref{sec:simple}; the huge-to-small move is treated as the big-to-small move was; and the medium/large-to-big and huge-to-big moves are both treated as the big-to-big move was in Section~\ref{sec:simple}. It remains to specify what Algorithm~\ref{algo:ExtSched} does in the case when a potential huge-to-medium move (that is not valid) is chosen, say $(j,i)$ of a job $j \in \ensuremath{\mathcal{J}}(B)$ for some blocker $B$. In that case the algorithm adds a medium blocker $B_M$ as a child to $B$ that consists of the machine $i$ and the medium jobs assigned to $i$. This prevents other huge or medium jobs from being assigned to $i$. Also note that the constraints $p(j) + p(S_i) \leq 1+R$ and $p(j) + p(S_i \cup M_i) > 1+R$ imply that there is at least one medium job assigned to $i$.
We remark that the rules for updating $\ensuremath{\mathcal{T}}$ are again such that a job can be in at most one blocker, whereas a machine can now be in at most three blockers (this can happen if it is first added in a medium blocker, then in a big blocker, and finally in a small blocker). \paragraph{Values of moves.}
As in Section~\ref{sec:simple}, it is important in which order the moves are chosen. Therefore, we assign a value vector to each potential move and Algorithm~\ref{algo:ExtSched} then chooses, in each iteration, the move with the smallest lexicographic value.
\begin{definition} \label{def:diffval} If we let $L_i=\sigma^{-1}(i)$ then a potential move $(j,i)$ has value $$ \ensuremath{\textnormal{Val}}(j,i) = \left\{ \begin{array}{ll} (0, 0) & \mbox{ if valid,} \\ (p(j),p(L_i)) & \mbox{ if small move,} \\ (2,0) & \mbox{ if medium/large-to-big,} \\ (3, p(L_i)) & \mbox{ if huge-to-small,} \\ (4, 0) & \mbox{ if huge-to-big,} \\
(5, |L_i\cap \ensuremath{\mathcal{J}}_M|) & \mbox{ if huge-to-medium.} \end{array} \right. $$ \end{definition} Note that as before, the algorithm chooses a valid move if available and a potential small move before any other potential move. Moreover, it chooses the potential small move of the smallest job available.
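Definition~\ref{def:diffval} translates directly into a table, as in the following sketch (\texttt{p\_j}, \texttt{p\_L\_i} and \texttt{n\_medium} stand for $p(j)$, $p(L_i)$ and $|L_i\cap \ensuremath{\mathcal{J}}_M|$; the move-type strings are the hypothetical labels used in the earlier sketches):
\begin{verbatim}
def value(move_type, p_j, p_L_i, n_medium):
    return {"valid":               (0, 0),
            "small":               (p_j, p_L_i),
            "medium/large-to-big": (2, 0),
            "huge-to-small":       (3, p_L_i),
            "huge-to-big":         (4, 0),
            "huge-to-medium":      (5, n_medium)}[move_type]
\end{verbatim}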
\paragraph{The algorithm.}
Algorithm~\ref{algo:ExtSched} summarizes the algorithm concisely using the concepts described previously. Given a valid partial schedule $\sigma$ and an unscheduled job $j_{new}$, we shall prove that it preserves a valid schedule by moving jobs until it can assign $j_{new}$. Repeating the procedure until all jobs are assigned then yields Theorem~\ref{thm:main}.
\begin{algorithm*}[hbtp] \caption{ExtendSchedule($\sigma, j_{new}$):} \label{algo:ExtSched} \begin{algorithmic}[1] \STATE Initialize \ensuremath{\mathcal{T}}{} with the root $\ensuremath{\mathcal{J}}(B) = \{j_{new}\}$ and $\ensuremath{\mathcal{M}}(B) = \bot$\;
\WHILE{$\sigma(j_{new})$ is TBD}
\STATE Choose a potential move $(j,i)$ with $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ of minimum lexicographic value\;
\STATE Let $B$ be the blocker in $\ensuremath{\mathcal{T}}$ such that $j \in \ensuremath{\mathcal{J}}(B)$\;
\IF{$(j,i)$ is valid}
\STATE Update the schedule by $\sigma(j) \leftarrow i$\;
\STATE Update $\ensuremath{\mathcal{T}}$ by removing $B$ and all blockers added after $B$\;
\ELSIF{$(j,i)$ is either a potential small or huge-to-small move}
\STATE Add a small blocker $B_S$ as child to $B$ with $\ensuremath{\mathcal{M}}(B_S) = i$ and
$\ensuremath{\mathcal{J}}(B_S) = \sigma^{-1}(i)\setminus \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$;
\ELSIF{$(j,i)$ is either a potential large/medium-to-big or huge-to-big move}
\STATE Let $j_B$ be the big job such that $\sigma(j_B) = i$\;
\STATE Add a big blocker $B_B$ as a child to $B$ in \ensuremath{\mathcal{T}}{} with
$\ensuremath{\mathcal{J}}(B_B) = \{j_B\}$ and $\ensuremath{\mathcal{M}}(B_B) = i$\;
\ELSE[$(j,i)$ \emph{is a potential huge-to-medium move}]
\STATE Add a medium blocker $B_M$ as child to $B$ with $\ensuremath{\mathcal{M}}(B_M) = i$ and $\ensuremath{\mathcal{J}}(B_M) = \sigma^{-1}(i) \cap \ensuremath{\mathcal{J}}_M$\;
\ENDIF \ENDWHILE \RETURN $\sigma$\;
\end{algorithmic} \end{algorithm*}
\subsection{Analysis}
Like Algorithm~\ref{algo:SimpleExtSched} for the case of two job sizes, Algorithm~\ref{algo:ExtSched} only updates the schedule when a valid move is chosen, so the schedule stays valid throughout the execution. It remains to verify that the algorithm terminates and that there always is a potential move to choose.
The analysis is similar to that of the simpler case but involves more case distinctions.
When arguing about $\ensuremath{\mathcal{T}}$ we again let
\begin{itemize} \item $\ensuremath{\mathcal{T}}_t$ be the subtree of $\ensuremath{\mathcal{T}}$ induced by the blockers $B_0, B_1, \dots, B_t$; \item $S(\ensuremath{\mathcal{T}}_t) = \{j\in \ensuremath{\mathcal{J}}: \mbox{$j$ is small with $\Gamma_\sigma(j)
\subseteq \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}}_t)$}\}$; we often refer to $S(\ensuremath{\mathcal{T}})$ simply as $S$; and \item $S_i(\ensuremath{\mathcal{T}}_t) = \sigma^{-1}(i) \cap S(\ensuremath{\mathcal{T}}_t)$ (we often
refer to $S_i(\ensuremath{\mathcal{T}})$ as $S_i$). \end{itemize} As in the case of two job sizes, the set $S(\ensuremath{\mathcal{T}}_t)$ contains the small jobs with no potential moves with respect to $\ensuremath{\mathcal{T}}_t$. Therefore no job in $S(\ensuremath{\mathcal{T}}_t)$ has been reassigned since $\ensuremath{\mathcal{T}}_t$ became a subtree of $\ensuremath{\mathcal{T}}$, i.e., since $B_t$ was added. We can thus again omit without ambiguity the dependence on $\sigma$ when referring to $S(\ensuremath{\mathcal{T}}_t)$ and $S_i(\ensuremath{\mathcal{T}}_t)$. The following observation will again be used throughout the analysis: no job in a blocker $B$ of $\ensuremath{\mathcal{T}}$ has been reassigned after $B$ was added, since such a reassignment would have caused the algorithm to remove $B$ (and all blockers added after $B$).
In Section~\ref{sec:existence} we start by presenting the proof that
the algorithm can always choose a potential move if
\ensuremath{\mbox{[C-LP]}}{} is feasible. We then present the proof that the algorithm
always terminates in Section~\ref{sec:termination}. As already
noted, by repeatedly calling Algorithm~\ref{algo:ExtSched} until all
jobs are assigned we will, assuming \ensuremath{\mbox{[C-LP]}}{} is feasible, obtain a
schedule of the jobs with makespan at most $1+R$, which completes the
proof of Theorem~\ref{thm:main}.
\subsubsection{Existence of potential moves} \label{sec:existence} We prove that the algorithm never gets stuck if the \ensuremath{\mbox{[C-LP]}}{} is feasible. \begin{lemma}
\label{lemma:existence}
If \ensuremath{\mbox{[C-LP]}}{} is feasible then Algorithm~\ref{algo:ExtSched} can
always pick a potential move. \end{lemma} \begin{proof}
Suppose that the algorithm has reached an iteration where no potential move is available.
Similar to the proof of Lemma~\ref{lemma:simplefis} we will show
that this implies that the dual is unbounded and hence the primal is
not feasible. We will do so by defining a solution $(y^*, z^*)$ to
the dual with $\sum_{i\in \ensuremath{\mathcal{M}}} y^*_i < \sum_{j\in \ensuremath{\mathcal{J}}}
z^*_j$. Define $z^*$ and $y^*$ by
$$
z^*_j = \begin{cases}
11/17, & \mbox{if $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is big},\\
9/17, & \mbox{if $j \in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is medium}, \\
p_j, & \mbox{if $j \in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})\cup S$ is small}, \\
0, & \mbox{otherwise,}
\end{cases} $$ and $$
y^*_i = \begin{cases}
1 & \text{if $i \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$}, \\
\sum_{j\in \sigma^{-1}(i)} z_j^* & \text{otherwise.}
\end{cases}
$$
A natural interpretation of $z^*$ is that we have rounded down processing times where big jobs with processing times in $[11/17,1]$ are rounded down to $11/17$; medium jobs with processing times in $[9/17, 11/17]$ are rounded down to $9/17$; and small jobs are left unchanged.
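For instance (with purely illustrative numbers), consider a machine $i \notin \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ that is assigned one big job of size $15/17$ from $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ and small jobs from $S$ of total size $4/17$; then
$$ y_i^* = \frac{11}{17} + \frac{4}{17} = \frac{15}{17}, $$
i.e., the rounding lowers the contribution of $i$ by $4/17$ compared with its actual load of $19/17$.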
The proof of the lemma is now completed by showing that $(y^*, z^*)$ is a feasible solution (Claim~\ref{claim:compfis}) and that the objective value is negative (Claim~\ref{claim:compneg}).
\begin{claim}
\label{claim:compfis}
Assuming no potential moves are available, $(y^*, z^*)$ is a feasible solution.
\end{claim}
\begin{proofclaim}
We need to verify that $y_i^* \geq \sum_{j\in C} z_j^*$ for each
$i\in \ensuremath{\mathcal{M}}$ and each $C\in \conff{i}$. Recall that the total
processing time of the jobs in a configuration is at most $1$. Also
observe as in the simpler case that $z_j^*=0$ for jobs not in
$\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}}) \cup S$ and we can thus omit them when verifying the
constraints.
As in the previous case, no constraint involving the variable
$y_i^*$ for $i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ is violated. Indeed, for such a
machine $i$ we have $y_i^* = 1$ and $\sum_{j\in C} z_j^* \leq \sum_{j\in C} p_j
\leq 1$ for any configuration $C \in
\conff{i}$.
We continue by distinguishing between the remaining cases when
$i\not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}}), i\in \ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})\setminus
\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$, and $i\in \ensuremath{\mathcal{M}_M}(\ensuremath{\mathcal{T}})\setminus
(\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}}) \cup \ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}}))$.
\paragraph{$i \not \in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$:} A move $(j,i)$ to machine
$i\not\in \ensuremath{\mathcal{M}}(\ensuremath{\mathcal{T}})$ is a potential move if $j\in
\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is a small, medium, or large job. By assumption we
have thus that no such jobs in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ have any moves to
$i$. We also have, by definition, that no job in $S$ has a move to
$i\not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$.
Now consider the final case when a huge job $j_H \in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ has a
move $(j_H,i)$. Then since it is not a potential huge-to-small, huge-to-medium, or huge-to-big move $$p(j_H) + p(S_i) > 1+R \qquad \mbox{and hence} \qquad p(S_i) > R.$$
Since $p(j_H) \geq {14}/{17}$ but $z_{j_H}^* = 11/17$, a configuration $C\in \conff{i}$ with $j_H\in C$ satisfies \begin{equation} \label{eq:mastertwo} \sum_{j\in C} z_j^* \leq z_{j_H}^* + (1-p(j_H)) \leq \frac{11}{17} + \left(1- \frac{14}{17}\right) = \frac{14}{17}, \end{equation} which is less than $ \frac{16}{17} = R < p(S_i) \leq \sum_{j\in \sigma^{-1}(i)} z_j^* \leq y^*_i$, where we used that $S_i$ only contains small jobs for the second inequality.
\paragraph{$i \in \ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})\setminus \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$:} A move to machine $i\not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})$ is a potential move if $j\in \ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})$ is a small job. By the assumption that there are no potential moves and by the definition of $S$, we have thus that there is no small job in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}}) \cup S$ with a move to $i$. In other words, all small jobs with
$z_j^* >0$ that can be moved to $i$ are already assigned to $i$. Now let $j_B$ be the big job such that $\sigma(j_B) = i$ which must exist since machine $i$ is contained in a big blocker. As both medium and big (large and huge) jobs have processing time strictly greater than $1/2$, a configuration $C\in \conff{i}$ can contain at most one such job $j_0$. Since $z_{j_0}^* \leq 11/17$ and $z_{j_B}^* = 11/17$ we have that a configuration $C\in \conff{i}$ with $j_0\in C$ cannot violate feasibility. Indeed,
\begin{equation}
\label{eq:master}
\sum_{j\in C} z_j^* \leq z_{j_0}^* + \sum_{j\in
\sigma^{-1}(i)\setminus j_B} z_j^* \leq z_{j_B}^* + \sum_{j\in
\sigma^{-1}(i)\setminus j_B} z_j^* = y^*_i.
\end{equation}
\paragraph{$i\in \ensuremath{\mathcal{M}_M}(\ensuremath{\mathcal{T}})\setminus (\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})\cup
\ensuremath{\mathcal{M}_{{B}}}(\ensuremath{\mathcal{T}})):$} By the definition of $S$ and the assumption that there are no potential
moves, no small or large jobs in $\ensuremath{\mathcal{J}}(\ensuremath{\mathcal{T}})\cup S$ have any moves to
$i$. As $i$ is contained in a medium blocker, there is a medium job $j_M$ such that $\sigma(j_M) = i$.
As both medium and huge jobs have processing time strictly greater
than $1/2$, a configuration $C\in \conff{i}$ can contain at
most one such job $j_0$.
If $j_0$ is medium then $z_{j_0}^* \leq z_{j_M}^*$ and we have by
Inequality~\eqref{eq:master} (substituting $j_B$ with $j_M$) that
the constraint is not violated. Otherwise if $j_0$ is huge then
from~\eqref{eq:mastertwo}
(substituting $j_H$ with $j_0$) we have that $\sum_{j\in C} z_j^* \leq
\frac{14}{17}$ for any configuration $C\in \conff{i}$ with $j_0 \in C$.
We shall now show that $y_i^* \geq \frac{14}{17}$ which completes
this case. If $i$ is assigned more than one medium job then $y_i^* \geq 1$.
Instead suppose that the only medium job assigned to $i$ is
$j_M$. Since the medium blocker $B_M$, with $\ensuremath{\mathcal{M}}(B_M) =
i$ and $\ensuremath{\mathcal{J}}(B_M) = \{j_M\}$, was added to $\ensuremath{\mathcal{T}}$ because the algorithm chose a
potential huge-to-medium move, we have $$ p(S_i(\ensuremath{\mathcal{T}}')) + p(j_M) > R,$$ where $\ensuremath{\mathcal{T}}'$ refers to the tree of blockers at the time when $B_M$ was added.
As each blocker in $\ensuremath{\mathcal{T}}'$ is also in $\ensuremath{\mathcal{T}}$, $p(S_i) \geq p(S_i(\ensuremath{\mathcal{T}}'))$ and the above inequality yields $$ y_i^* \geq p(S_i) + p(j_M) - (p(j_M) - z_{j_M}^*) \geq R - \left(\frac{11}{17} - \frac{9}{17}\right) \geq \frac{14}{17}, $$ as required.
We have thus verified all the cases which completes the proof of the claim. \end{proofclaim}
Having verified that $(y^*, z^*)$ is a feasible solution, the proof is now completed by showing that the value of the solution is negative.
\begin{claim} \label{claim:compneg} We have that $\sum_{i\in \ensuremath{\mathcal{M}}} y^*_i < \sum_{j \in \ensuremath{\mathcal{J}}} z_j^*$. \end{claim} \begin{proofclaim}
By the definition of $y^*$, \begin{equation} \label{eq:diffval} \sum_{i\in \ensuremath{\mathcal{M}}} y^*_i = \sum_{i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} 1 + \sum_{i \not \in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} \sum_{j\in\sigma^{-1}(i)} z_j^*. \end{equation} Similar to the case with two job sizes in Section~\ref{sec:simple}, we proceed by bounding $\sum_{i\in \ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} 1$ from above by $\sum_{i \in
\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} \sum_{j\in \sigma^{-1}(i)} z_j^*$.
Let $B_0, B_1, \dots, B_\ell$ be the blockers of $\ensuremath{\mathcal{T}}$ and
consider a small blocker $B_t$ for some $t=1, \dots, \ell$ with $\ensuremath{\mathcal{M}}(B_t) = i_t$.
By the definition of the procedure, small blocker $B_t$ has been added in an iteration when either a potential small or huge-to-small move $(j_0,i_t)$ was chosen. Let us distinguish between the three cases when $(j_0, i_t)$ was a small move of a small job, $(j_0, i_t)$ was a small move of a medium or large job, and $(j_0, i_t)$ was a huge-to-small move.
\paragraph{$(j_0, i_t)$ was a small move of a small job:} Since the move was not valid, $$ p(j_0) + p(\sigma^{-1}(i_t)) > 1+R. $$ As a big job with processing time in $[11/17, 1]$ is rounded down to $11/17$ and a medium job with processing time in $[9/17, 11/17]$ is rounded down to $9/17$, the sum $\sum_{j\in \sigma^{-1}(i_t)} z_j^*$ will depend on the number of big and medium jobs assigned to $i_t$. Indeed, let $L_{i_t} = \sigma^{-1}(i_t) \cap \ensuremath{\mathcal{J}}_B$ and $M_{i_t} = \sigma^{-1}(i_t) \cap \ensuremath{\mathcal{J}}_M$ be the big and medium jobs assigned to $i_t$, respectively. Then, on the one hand, $$ \sum_{j\in \sigma^{-1}(i_t)} z_j^* \geq 1+R - p(j_0) -
\left(1-\frac{11}{17}\right)|L_{i_t}| - \left(\frac{11}{17} - \frac{9}{17}\right)|M_{i_t}|, $$ which equals
$$ \frac{33}{17} - \frac{6}{17}\left(|L_{i_t}| + \frac{|M_{i_t}|}{3}\right) - p(j_0). $$
On the other hand, if (i) $|L_{i_t}| \geq 2$, (ii) $|L_{i_t}| \geq 1$ and $|M_{i_t}| \geq 1$, or (iii)
$|M_{i_t}| \geq 3$ then clearly $$\sum_{j\in \sigma^{-1}(i_t)} z_j^* \geq \frac{11}{17} + \frac{9}{17} \geq \frac{20}{17}. $$ Combining these two bounds we get (since $j_0$ is small and thus $p(j_0) \leq 9/17$) that \begin{align} \label{eq:jsmall} \sum_{j\in \sigma^{-1}(i_t)} z_j^* &\geq \min \left[\frac{27}{17}- p(j_0), \frac{20}{17}\right] \geq \frac{18}{17} & \mbox{if $(j_0,i_t)$ was a small move of a small job.} \end{align}
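To see how these two bounds interact, consider the extreme instance (an illustrative computation, using $1+R = 33/17$ as in the expression above) where $|L_{i_t}| = 1$, $|M_{i_t}| = 0$ and $p(j_0) = 9/17$: the first bound gives $$ \sum_{j\in \sigma^{-1}(i_t)} z_j^* \;\geq\; \frac{33}{17} - \frac{6}{17} - \frac{9}{17} \;=\; \frac{18}{17}, $$ so~\eqref{eq:jsmall} is tight in this configuration, while in every case covered by (i)--(iii) the second bound of $20/17$ applies.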
\paragraph{$(j_0, i_t)$ was a small move of a medium or large job:} Since the move was not valid, we have again $$ p(j_0) + p(\sigma^{-1}(i_t)) > 1+R. $$ By the definition of potential small moves, there is no big job assigned to $i_t$. If more than one medium job is assigned to $i_t$ then clearly $\sum_{j\in \sigma^{-1}(i_t)} z_j^* \geq 1$.
Otherwise, at most one medium job is assigned to $i_t$, and its value may have been rounded down by at most $\frac{11}{17}-\frac{9}{17}$; using the inequality $p(j_0) + p(\sigma^{-1}(i_t)) > 1+R$ we derive $$ \sum_{j\in \sigma^{-1}(i_t)} z_j^* \geq 1+ R -p(j_0) - \left(\frac{11}{17}- \frac{9}{17}\right) = \frac{31}{17} - p(j_0)\geq 1, $$ where for the final inequality we used that the processing time of medium and large jobs is at most ${14}/{17}$. Summarizing again, we have \begin{align} \sum_{j\in \sigma^{-1}(i_t)} z_j^* &\geq 1 & \mbox{if $(j_0,i_t)$ was a potential small move of a medium or large job.} \end{align}
\paragraph{$(j_0, i_t)$ was a huge-to-small move:} Similar to the case of two job sizes in Section~\ref{sec:simple} where we considered the big-to-small move, we will show that we can amortize the cost from the blocker $B_{t+1}$. Indeed, since the move $(j_0, i_t)$ was not valid and by the definition of a potential huge-to-small move we have \begin{equation} \label{eq:diffar} p(j_0) + p(\sigma^{-1}(i_t)) > 1+ R \qquad \mbox{and} \qquad p(j_0) + p\left(S_{i_t}(\ensuremath{\mathcal{T}}_t) \cup M_{i_t}\right) \leq 1+R, \end{equation} where we let $M_{i_t}$ contain the medium jobs assigned to $i_t$ at the time when $B_t$ was added. Recall that $S_{i_t}(\ensuremath{\mathcal{T}}_t)$ contains those small jobs that have no potential moves with respect to the tree of blockers $\ensuremath{\mathcal{T}}_t$ after $B_t$ was added. As $B_t$ has not been removed from $\ensuremath{\mathcal{T}}$ both these sets have not changed. In particular, since $i_t$ is not assigned a big job (using the definition of huge-to-small move) we have that there must be a small job $j'\in \sigma^{-1}(i_t) \setminus (S_{i_t}(\ensuremath{\mathcal{T}}_t) \cup M_{i_t})$ that has a potential small $(j', i')$ move with respect to $\ensuremath{\mathcal{T}}_t$.
The above discussion implies that $t< \ell$. Moreover, the value of $(j',i')$ equals $(p(j'), p(\sigma^{-1}(i')))$. As the procedure always chooses a potential move with minimum lexicographic value we have that $B_{t+1}$ is a small blocker added because a potential small move was chosen of a small job with processing time at most $p(j')$.
If we let $\ensuremath{\mathcal{M}}(B_{t+1}) = i_{t+1}$ we have thus by~\eqref{eq:jsmall} \begin{equation} \label{eq:nyy} \sum_{j\in \sigma^{-1}(i_{t+1})}z_j^* \geq \min \left[\frac{27}{17}- p(j'), \frac{20}{17}\right] \geq \frac{18}{17}. \end{equation} We proceed by showing that we can use this fact to again amortize the cost as done in the simpler analysis, i.e., that \begin{equation} \label{eq:ammortize} \sum_{j\in \sigma^{-1}(i_{t})}z_j^* + \sum_{j\in \sigma^{-1}(i_{t+1})} z_j^* \geq 2. \end{equation} For this reason, let us distinguish between three subcases depending on the number of medium jobs assigned to $i_t$. \begin{description} \item[\textnormal{$i_t$ is assigned at least two medium jobs:}] Since medium jobs have value $9/17$ in the dual this clearly implies that $\sum_{j\in \sigma^{-1}(i_t)} z_j^* \geq 18/17$ so no amortizing is needed and~\eqref{eq:ammortize} holds in this case.
\item[\textnormal{$i_t$ is assigned one medium job:}] Let $j_M$ denote
the medium job assigned to $i_t$. In addition we have that the
small job $j'$ is assigned to $i_t$. We have thus that $\sum_{j\in
\sigma^{-1}(i_t)} z_j^* \geq 9/17 + p(j')$. So if $p(j') \geq 7/17$ then
we can amortize because $\sum_{j\in \sigma^{-1}(i_{t+1})} z_j^*$ is at least
$18/17$.
Now consider the case when $p(j') < 7/17$. By the assumption that only one medium job is assigned to $i_t$ and since $p(j_0) + p(\sigma^{-1}(i_t)) \geq 1+ R$ we have $$ \sum_{j\in \sigma^{-1}(i_t)} z_j^* \geq R - \left(\frac{11}{17} -
\frac{9}{17}\right) = \frac{14}{17}. $$ Moreover, as the case when $p(j') < 7/17$ is considered, we have from~\eqref{eq:nyy} that $$ \sum_{j\in \sigma^{-1}(i_{t+1})} z_j^* \geq \frac{20}{17}, $$which implies that~\eqref{eq:ammortize} is valid also in this case.
\item[\textnormal{$i_t$ is not assigned any medium jobs:}] As $i_t$
in this case is not assigned any medium or big jobs and the
huge-to-small move $(j_0,i_t)$ was not valid, $$
\sum_{j\in \sigma^{-1}(i_t)} z_j^* > R \geq \frac{16}{17}. $$ That we can amortize now follows from~\eqref{eq:nyy}, which guarantees that $ \sum_{j\in \sigma^{-1}(i_{t+1})} z_j^* \geq \frac{18}{17}. $ \end{description}
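Summarizing the three subcases, the amortization~\eqref{eq:ammortize} reduces to the arithmetic $$ \frac{18}{17}+\frac{18}{17} \geq 2, \qquad \frac{9}{17}+\frac{7}{17}+\frac{18}{17} = 2, \qquad \frac{14}{17}+\frac{20}{17} = 2, \qquad \frac{16}{17}+\frac{18}{17} = 2, $$ where the sums correspond, in order, to at least two medium jobs on $i_t$, one medium job on $i_t$ with $p(j')\geq 7/17$, one medium job on $i_t$ with $p(j')< 7/17$, and no medium job on $i_t$.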
We have thus shown that a blocker $B_t$ added because of a huge-to-small move can be paired with the following blocker $B_{t+1}$
that was added because of a potential small move of a small job. As seen above, this gives us that we can amortize the cost exactly as done in the simpler case, and we deduce that $|\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})| \leq \sum_{i \in
\ensuremath{\mathcal{M}_{{S}}}(\ensuremath{\mathcal{T}})} \sum_{j\in \sigma^{-1}(i)} z_j^*$. Combining this inequality with~\eqref{eq:diffval} yields $$ \sum_{i\in \ensuremath{\mathcal{M}}} y^*_i \leq \sum_{i\in \ensuremath{\mathcal{M}}} \sum_{j\in \sigma^{-1}(i)} z_j^* = \sum_{j\in \ensuremath{\mathcal{J}}} z_j^* - z_{j_{new}}^* < \sum_{j\in \ensuremath{\mathcal{J}}} z_j^*, $$ as required. \end{proofclaim}
We have proved that there is a solution $(y^*, z^*)$ to the dual that is feasible (Claim~\ref{claim:compfis}) and has negative value (Claim~\ref{claim:compneg}) assuming there are no potential moves. In other words, the \ensuremath{\mbox{[C-LP]}}{} cannot be feasible if no potential moves can be chosen which completes the proof of the lemma. \end{proof}
\subsubsection{Termination} \label{sec:termination}
As the algorithm only terminates when a new job is assigned, Theorem~\ref{thm:main} follows from the lemma below together with Lemma~\ref{lemma:existence} since then we can, as already explained, repeat the procedure until all jobs are assigned.
\begin{lemma}
Assuming there is always a potential move to choose,
Algorithm~\ref{algo:ExtSched} terminates. \end{lemma} \begin{proof}
As in the proof of Lemma~\ref{lemma:simpleterm}, we consider the $i$'th iteration of
the algorithm and associate with $\ensuremath{\mathcal{T}}$ the vector $$ (\ensuremath{\textnormal{Val}}(B_1), \ensuremath{\textnormal{Val}}(B_2), \dots, \ensuremath{\textnormal{Val}}(B_\ell), \infty), $$ where the blockers are ordered in the order they were added and $\ensuremath{\textnormal{Val}}(B_t)$ equals the value of the move chosen when $B_t$ was added. We shall now prove that the vector associated with $\ensuremath{\mathcal{T}}$ decreases lexicographically no matter which step is chosen in the next iteration.
If a new blocker is added in the $(i+1)$'th iteration then the lexicographic value clearly decreases, since the old vector has $\infty$ in the position where the new vector has the finite value of the chosen move. It remains to verify what happens when blockers are removed from $\ensuremath{\mathcal{T}}$. In that case let the algorithm run until it chooses a potential move that is not valid or terminates. As blockers will be removed in each iteration until it either terminates or chooses a potential move that is not valid, we will eventually reach one of these cases. If the algorithm terminates we are obviously done.
Instead, suppose that starting with $\sigma$ and $\ensuremath{\mathcal{T}}$ the algorithm does a sequence of steps where blockers are removed until we are left with an updated schedule $\sigma_k$, a $\ensuremath{\mathcal{T}}_k$ with $k+1< \ell$ blockers, and a potential move $(j', i')$ that is not valid is chosen. As a blocker $B$ is removed if a blocker added earlier is removed, we have that $\ensuremath{\mathcal{T}}_k$ equals the subtree of $\ensuremath{\mathcal{T}}$ induced by $B_0, B_1, \dots, B_k$.
We will thus concentrate on comparing the lexicographic value of $(j', i')$ with that of $B_{k+1}$. The value of $B_{k+1}$ equals the value of the move chosen when $B_{k+1}$ was added, say $(j_{t}, i_{k+1})$ for
$j_{t} \in \ensuremath{\mathcal{J}}(B_t)$ with $1 \leq t \leq k$ and $\ensuremath{\mathcal{M}}(B_{k+1}) = i_{k+1}$.
A key observation is that since blocker $B_{k+1}$ was removed but not $B_k$, the most recent move was of a job $j_{k+1}\in \ensuremath{\mathcal{J}}(B_{k+1})$ and we have $\sigma(j_{k+1}) = i_{k+1}$ and $\sigma_k(j_{k+1}) \neq i_{k+1}$. Moreover, as $(j_t, i_{k+1})$ was a potential move when $B_{k+1}$ was added, it is a potential move with respect to $\ensuremath{\mathcal{T}}_k$ (using that $S_{i_{k+1}}\left(\ensuremath{\mathcal{T}}_k\right)$ has not changed). Using these observations we now show that the lexicographic value of $(j_t, i_{k+1})$ has decreased. As the algorithm always picks the move of minimum lexicographic value, this will imply $\ensuremath{\textnormal{Val}}(j',i') < \ensuremath{\textnormal{Val}}(B_{k+1})$ as required.
If $(j_t, i_{k+1})$ was a small or huge-to-small move then $B_{k+1}$ was a small blocker. As no jobs were moved to $i_{k+1}$ after $B_{k+1}$ was added, $p(\sigma_k^{-1}(i_{k+1})) < p(\sigma^{-1}(i_{k+1}))$ and we have that the lexicographic value of $(j_t, i_{k+1})$ has decreased.
If $(j_t, i_{k+1})$ was a medium/large-to-big (or huge-to-big) move then $j_{k+1}$ must be a big job and hence machine $i_{k+1}$ no longer has a big job assigned. This implies that the move $(j_t, i_{k+1})$ is now a potential small (or huge-to-small, using that no medium jobs are moved to machines in big blockers) move with smaller lexicographic value.
Finally, if $(j_t, i_{k+1})$ was a huge-to-medium move then $j_{k+1}$
must be a medium job and, since no medium jobs have potential moves to machines in medium blockers, we have that $|\sigma_k^{-1}(i_{k+1}) \cap \ensuremath{\mathcal{J}}_M| < |\sigma^{-1}(i_{k+1}) \cap \ensuremath{\mathcal{J}}_M|$, which implies that the lexicographic value of $(j_t, i_{k+1})$ also decreased in this case.
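In each of the three cases the lexicographic value of $(j_t, i_{k+1})$ decreased, so that $\ensuremath{\textnormal{Val}}(j',i') \leq \ensuremath{\textnormal{Val}}(j_t, i_{k+1}) < \ensuremath{\textnormal{Val}}(B_{k+1})$. The vector associated with the resulting tree of blockers is then of the form $$ (\ensuremath{\textnormal{Val}}(B_1), \dots, \ensuremath{\textnormal{Val}}(B_k), \ensuremath{\textnormal{Val}}(j',i'), \infty), $$ which agrees with the previous vector in the first $k$ entries and is strictly smaller in entry $k+1$, so it is lexicographically smaller.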
We have thus proved that the vector associated with $\ensuremath{\mathcal{T}}$ always decreases. As there are at most $3|\ensuremath{\mathcal{M}}|$ blockers in $\ensuremath{\mathcal{T}}$ (one small, one medium, and one big blocker for each machine) and the value associated with a blocker can only take finitely many values, we conclude that the algorithm eventually terminates. \end{proof}
\section{Conclusions} We have shown that the configuration LP gives a polynomial time computable lower bound on the optimal makespan that is strictly better than two. Our techniques are mainly inspired by recent developments on the related Santa Claus problem and also give a local search algorithm that finds a schedule with the same performance guarantee, although it is not known to converge in polynomial time.
Similar to the Santa Claus problem, this raises the open question whether there is an efficient rounding of the configuration LP that matches the bound on the integrality gap (see also~\cite{FFeige08} for a comprehensive discussion on open problems related to the difference between estimation and approximation algorithms). Another interesting direction is to improve the upper or lower bound on the integrality gap for the restricted assignment problem: we show that it is no worse than $33/17$ and it is only known to be no better than $1.5$ which follows from the NP-hardness result. One possibility would be to find a more elegant generalization of the techniques, presented in Section~\ref{sec:simple} for two job sizes, to arbitrary processing times (instead of the exhaustive case distinction presented in this paper).
To obtain a tight analysis, it would be natural to start with the special case of graph balancing for which the $1.75$-approximation algorithm by Ebenlendr et al.~\cite{EKS08} remains the best known. We remark that the restriction $p_{ij} \in \{p_j, \infty\}$ is necessary as the integrality gap of the configuration LP for the general case is known to be $2$ even if a job can be assigned to at most $2$ machines~\cite{VW10}.
\end{document}
\begin{document}
\title{CONVERGENCE OF VARIATIONAL APPROXIMATION SCHEMES FOR ELASTODYNAMICS WITH POLYCONVEX ENERGY}
\abstract{We consider a variational scheme developed by S.~Demoulini, D.~M.~A. Stuart and A.~E. Tzavaras [Arch.\;Rat.\;Mech.\;Anal.\;157\;(2001)] that approximates the equations of three dimensional elastodynamics with polyconvex stored energy.
We establish the convergence of the time-continuous interpolates constructed in the scheme to a solution of polyconvex elastodynamics before shock formation. The proof is based on a relative entropy estimation for the time-discrete approximants in an environment of $L^p$-theory bounds, and provides an error estimate for the approximation before the formation of shocks.}\\
\noindent {\bf keywords}: nonlinear elasticity, polyconvexity, variational approximation scheme. \\
\noindent AMS Subject Classification: 35L70 74B20 74H20
\section{Introduction}
The equations of nonlinear elasticity are the system \begin{equation*} \label{ELASTINTRO1} y_{tt}=\mathrm{div} \, \frac{\partial{W}}{\partial{F}}(\nabla{y}) \end{equation*} where $y:\Omega\times\mathbb{R}^{+}\to\mathbb{R}^3$ stands for { the motion}, and we have employed the constitutive theory of hyperelasticity, i.e.~the Piola-Kirchhoff stress tensor $S$ is expressed as the gradient, $S(F)=\frac{\partial{W}}{\partial{F}}(F)$, of a stored energy function $W(F)$. The equations \eqref{ELASTINTRO1} are often recast as a system of conservation laws, \begin{equation}\label{ELASTINTRO2} \begin{aligned} \partial_{t}v_i &= \partial_{\alpha} \frac{\partial{W}}{\partial{F_{i\alpha}}}(F)\\ \partial_{t}F_{i\alpha}&=\partial_{\alpha}v_i, \end{aligned} \end{equation} for the velocity $v=\partial_t y$ and the deformation gradient $F=\nabla y$. The differential constraints \begin{equation*} \partial_{\beta} F_{i\alpha} - \partial_{\alpha} F_{i\beta} = 0 \end{equation*} are propagated from the kinematic equation \eqref{ELASTINTRO2}$_2$ and are an involution, \cite{Dafermos86}.
The requirement of frame indifference imposes that $W(F):M_{+}^{3\times3}\to[0,\infty)$ be invariant under rotations. This renders the assumption of convexity of $W$ too restrictive \cite{Tr}, and convexity has been replaced by various weaker conditions familiar from the theory of elastostatics, see \cite{Ball77, BCO,Ball02} for a recent survey. A commonly employed assumption is that of polyconvexity, postulating that \begin{equation*} W(F)=G\circ\Phi(F) \end{equation*} where $\Phi(F):=(F,\mathrm{cof\, }{F},\det{F})$ is the vector of null-Lagrangians and $G=G(F,Z,w)=G(\Xi)$ is a convex function of $\Xi\in \mathbb{R}^{19}$; this encompasses certain physically realistic models \cite[Section 4.9, 4.10]{Ciarlet}. Starting with the work of Ball~\cite{Ball77}, substantial progress has been achieved for handling the lack of convexity of~$W$ within the existence theory of elastostatics.
For the elastodynamics system, local existence of classical solutions has been established in \cite{DafermosHrusa85}, \cite[Theorem 5.4.4]{Dafermos10} for rank-1 convex stored energies, and in \cite[Theorem 5.5.3]{Dafermos10} for polyconvex stored energies. The existence of global weak solutions is an open problem, except in one space dimension, see \cite{DiPerna83}. Construction of entropic measure valued solutions has been achieved in \cite{DST} using a variational approximation method associated to a time-discretized scheme. Various results on uniqueness of smooth solutions within the class of entropy weak and even dissipative measure valued solutions are available for the elasticity system \cite{Dafermos86,LT,Dafermos10,DST2}.
The objective of the present work is to show that the approximation scheme of \cite{DST} converges to the classical solution of the elastodynamics system before the formation of shocks. To formulate the problem we outline the scheme in \cite{DST} and refer to Section 2 for a detailed presentation. The null-Lagrangians $\Phi^A(F)$, $A=1,\dots,19$ satisfy \cite{Qin98} the nonlinear transport identities \begin{equation*} \partial_{t} {\Phi^{A}(F)}={\partial_\alpha }{\biggl(\frac{\partial{\Phi^A}}{\partial{F_{i\alpha}}}(F)v_i}\biggr) \, . \end{equation*} This allows to view the system \eqref{ELASTINTRO2} as constrained evolution of the extended system \begin{equation}\label{EXTSYSINTRO} \begin{aligned} \partial_{t} v_i &=\partial_{\alpha}\Bigl(\pd {G}{\Xi{_A}}(\Xi) \,\,\pd{\Phi^{A}}{F_{i\alpha}}(F)\Bigr)\\ \partial_{t}\Xi{_{A}}&=\partial_{\alpha}\Bigl(\pd{\Phi^{A}}{F_{i \alpha}}(F)\,v_{i}\Bigr). \\ \end{aligned} \end{equation} The extension \eqref{EXTSYSINTRO} has the properties: if $F(\cdot,0)$ is a gradient and $\Xi(\cdot,0)=\Phi(F(\cdot,0))$, then $F(\cdot,t)$ remains a gradient and $\Xi(\cdot,t)=\Phi(F(\cdot,t))$ for all $t$. The extended system is endowed with the entropy identity \begin{equation*}
\partial_t\biggl(\frac{|v|^2}{2}+G(\Xi)\biggr)-\partial_{\alpha}\biggl(v_i\,\pd{G}{\Xi_A}(\Xi)\,\pd{\Phi^A}{F_{i\alpha}}(F)\biggr)=0 \end{equation*} the entropy is convex and the system \eqref{EXTSYSINTRO} is thus symmetrizable.
{ For periodic solutions $v,\Xi$} (on the torus { $\mathbb{T}^3$}) a variational approximation method based on the time-discretization of \eqref{EXTSYSINTRO} is proposed in \cite{DST}: Given a time-step $h>0$ and initial data $(v^0,\Xi^0)$ the scheme provides the sequence of iterates $(v^j,\Xi^j)$, $j\geqslant 1$, by solving \begin{equation}\label{DISCEXTSYSINTRO} \begin{aligned} \frac{v^j_i-v^{j-1}_i}{h}&=\partial_{\alpha}\Bigl(\pd{G}{\Xi{_{A}}}(\Xi^j)\,\,\pd{\Phi^{A}}{F_{i\alpha}}\,\BBR{F^{j-1}}\Bigr)\\ \frac{\BBR{\Xi^j-\Xi^{j-1}}_{A}}{h} &=\partial_{\alpha}\Bigl(\pd{\Phi^{A}}{F_{i\alpha}}\,\BBR{F^{j-1}}\,v^j_i\Bigr). \end{aligned} \quad \text{in } \mathcal{D}'({\mathbb{T}^3}) \end{equation} This problem is solvable using variational methods and the iterates $(v^j,\Xi^j)$ give rise to a time-continuous approximate solution $\Theta^{(h)}=(V^{(h)},\Xi^{(h)})$. It is proved in \cite{DST} that the approximate solution generates a measure-valued solution of the equations of polyconvex elastodynamics.
In this work we consider a smooth solution of the elasticity system \linebreak $\bar{\Theta} \!= \!(\bar{V},\bar{\Xi})$ defined on $[0,T] \!\times \!{\mathbb{T}^3}$ and show that the approximate solution $\Theta^{(h)}$ constructed via the iterates $(v^j,\Xi^j)$ of \eqref{DISCEXTSYSINTRO} converges to $\bar{\Theta}=(\bar{V},\bar{\Xi})$ at a convergence rate $O(h)$. The method of proof is based on the relative entropy method developed for convex entropies in \cite{Dafermos79,DiPerna79} and adapted for the system of polyconvex elasticity in \cite{LT} using the embedding to the system \eqref{EXTSYSINTRO}. The difference between $\Theta^{(h)}$ and $\bar{\Theta}$ is controlled by monitoring the evolution of the relative entropy \begin{equation*}
\eta^r=\frac{1}{2}|V^{(h)}-\bar{V}|^2+ G(\Xi^{(h)})-G(\bar{\Xi}) -\nabla{G(\bar{\Xi})}(\Xi^{(h)}-\bar{\Xi}) \,. \end{equation*} We establish control of the function \begin{equation*}
{\mathcal{E}(t)} := \int_{{\mathbb{T}^3}} \,\Bigl( (1+|F^{(h)}|^{p-2}+|\bar{F}|^{p-2})
|F^{(h)}-{\bar{F}}|^2+|\Theta^{(h)}-\bar{\Theta}|^2 \Bigr) \, dx \end{equation*} and prove the estimation \begin{equation*} \begin{aligned} {\mathcal{E}(t)} \leqslant C \Bigl( \mathcal{E}(0) +h \Bigr), \quad t\in [0,T] \end{aligned} \end{equation*} which provides the result. There are two novelties in the present work: (a) In adapting the relative entropy method to the subject of time-discretized approximations. (b) In employing the method in an environment where $L^p$-theory needs to be used for estimating the relative entropy.
{ This work is a first step towards implementing a finite element method based on the variational approximation. To do that, one has to devise appropriate finite element spaces that preserve the involution structure. This is the subject of a future work.}
The paper is organized as follows. In Section 2 we present the variational approximation scheme and state the Main Theorem. In Section 3 we derive the relative entropy identity \eqref{RENTID} and, finally, in Section 4 we carry out the cumbersome estimations for the terms in the relative entropy identity and conclude the proof of Main Theorem via Gronwall's inequality.
\section{The variational approximation scheme and statement of the Main Theorem}
We assume that the stored energy $W:\mddplus{3} \to \mathbb{R}$ is \emph{polyconvex}: \begin{equation}\label{POLYCONVEXITY} W(F)=G\circ\Phi(F) \end{equation} with \begin{equation*} G=G(\Xi)=G(F,Z,w):\mdd{3}\times\mdd{3}\times\mathbb{R} \,\cong\, \mathbb{R}^{19} \to \mathbb{R} \end{equation*} uniformly convex and \begin{equation}\label{PHIDEF} \Phi(F) = (F, \mathrm{cof\, }{F}, \det{F}). \end{equation}
\subsection*{Assumptions} We work with periodic boundary conditions, i.e.~the spatial domain $\Omega$ is taken to be {the} three dimensional torus $\mathbb{T}^3$. The indices $i,\alpha, \dots$ generally run over $1,\dots,3$ while $A,B,\dots$ run over $1,\dots,19$. We use the notation $L^p=L^p(\mathbb{T}^3)$ and $W^{1,p}=W^{1,p}(\mathbb{T}^3)$. Finally, we impose the following convexity and growth assumptions on $G$: \begin{litemize}{(H4)}
\item[(H1)] $G\in C^3(\mdd{3}\times \mdd{3} \times \mathbb{R}; [0,\infty))$ is of the form \begin{equation}\label{GDECOMP} G(\Xi)=H(F) + R(\Xi) \end{equation} with $H\in C^3(\mdd{3}; [0,\infty))$ and $R\in C^3(\mdd{3}\times \mdd{3} \times \mathbb{R}; [0,\infty))$ strictly convex satisfying \begin{equation*}
\kappa |F|^{p-2}|z|^{2} \, \leqslant \, z^{T}\nabla^2H(F)z \,
\leqslant \, \kappa' |F|^{p-2}|z|^{2}, \quad \forall z \in \mathbb{R}^9 \end{equation*} and $\gamma I \leqslant \nabla^2R \leqslant \gamma' I\,$ for some fixed $\gamma,\gamma',\kappa,\kappa'>0$ and $p\in {[6,\infty)}$.
\item[(H2)] $G(\Xi)\,\geqslant\,c_1|F|^p+c_2|Z|^2+c_3|w|^2-c_4$.
\item[(H3)] $G(\Xi)\,\leqslant\,c_5(|F|^p+|Z|^2+|w|^2+1)$.
\item[(H4)] $|G_{F}|^{\frac{p}{p-1}}+
|G_Z|^{\frac{p}{p-2}}+|G_w|^{\frac{p}{p-3}}
\,\leqslant\,c_6\BBR{|F|^p+|Z|^2+|w|^2+1}.$
\item[(H5)] $\BBA{\frac{\partial^3H}{\partial{F_{i\alpha}}\partial{F_{ml}}\partial{F_{rs}}}}
\leqslant c_7 |F|^{p-3}$ \quad and \quad $\BBA{\frac{\partial^3R}{\partial{\Xi_A} \partial{\Xi_B} \partial{\Xi_D}}} \leqslant c_8$.
\end{litemize}
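To fix ideas, one admissible choice of $G$ (an illustrative example; nothing in the sequel depends on this particular form) is \begin{equation*} G(F,Z,w) \,=\, \tfrac{1}{p}|F|^p \,+\, \tfrac{\gamma}{2}\bigl(|F|^2+|Z|^2+w^2\bigr), \qquad p \in [6,\infty), \ \gamma>0, \end{equation*} corresponding to $H(F)=\tfrac{1}{p}|F|^p$ and $R(\Xi)=\tfrac{\gamma}{2}|\Xi|^2$ in \eqref{GDECOMP}. Indeed, $z^{T}\nabla^2H(F)z=|F|^{p-2}|z|^2+(p-2)|F|^{p-4}(F\cdot z)^2$, so the bounds in (H1) hold with $\kappa=1$ and $\kappa'=p-1$, while $\nabla^2R=\gamma I$ and (H2)--(H5) follow by direct computation.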
\subsection*{Notations} To simplify notation we write \begin{equation*} \begin{aligned} G_{,A}\,(\Xi)&=\pd{G}{\Xi_{A}}(\Xi),& \quad R_{,A}\,(\Xi)&=\pd{R}{\Xi_{A}}(\Xi),&\\ H_{,i\alpha}\,(F)&=\pd{H}{F_{i\alpha}}(F),& \quad \Phi^{A}_{,i\alpha}\,(F)&=\pd{\Phi^{A}}{F_{i\alpha}}(F).& \end{aligned} \end{equation*} In addition, for each $i,\alpha =1,2,3$ we set \begin{equation}\label{GCOMPFLD} \begin{aligned} g_{i\alpha}(\Xi,F^*)=\pd{G}{\Xi_{A}}\,(\Xi) \,\pd{\Phi^{A}}{F_{i\alpha}}\,(F^*), \quad F^* \in \mathbb{R}^9, \,\Xi \in \mathbb{R}^{19} \end{aligned} \end{equation} (where we use {the} summation convention over repeated indices) and denote the corresponding fields $g_i: \mathbb{R}^{19}\times\mathbb{R}^9\to\mathbb{R}^3$ by \begin{equation*} g_i(\Xi,F^*):=(g_{i1},g_{i2},g_{i3})(\Xi,F^*). \end{equation*}
\subsection{Time-discrete variational scheme}
The equations of elastodynamics \eqref{ELASTINTRO1} for polyconvex stored-energy \eqref{POLYCONVEXITY} can be expressed as a system of conservation laws, \begin{equation}\label{PXELASTSYS} \begin{aligned} \partial_{t} v_i&=\partial_{\alpha}\biggl(\pd {G}{\Xi_{A}}(\Phi(F)) \pd{\Phi^{A}}{F_{i\alpha}}(F)\biggr)\\ \partial_{t} F_{i\alpha}&=\partial_{\alpha} v_i \end{aligned} \end{equation} which is equivalent to \eqref{ELASTINTRO1} subject to the differential constraints \begin{equation}\label{GRADCONSTR} \partial_{\beta} F_{i\alpha} - \partial_{\alpha} F_{i\beta} = 0 \end{equation} that are an involution \cite{Dafermos86}: if they are satisfied for $t=0$ then \eqref{PXELASTSYS} propagates \eqref{GRADCONSTR} so that they are satisfied for all times. Thus the system \eqref{PXELASTSYS} is equivalent to the system \eqref{ELASTINTRO1} whenever $F(\cdot,0)$ is a gradient.
The components of $\Phi(F)$ defined by \eqref{PHIDEF} are null-Lagrangians and satisfy \begin{equation*} \partial_{\alpha}\biggl(\pd{\Phi^A}{F_{i\alpha}}(\nabla{u})\biggr) = 0, \quad A=1,\dots,19 \end{equation*} for any smooth $u(x):\mathbb{R}^3 \to\mathbb{R}^3$. Therefore, if $(v,F)$ are smooth solutions of~\eqref{PXELASTSYS}, the null-Lagrangians $\Phi^A(F)$ satisfy the transport identities \cite{DST} \begin{equation}\label{NLTID} \partial_t \Phi^A(F) = \partial_{\alpha} \biggl(\pd{\Phi^A}{F_{i\alpha}}(F)v_i \biggr), \quad \forall F \ \mbox{with} \ \partial_{\beta}F_{i\alpha}=\partial_{\alpha}F_{i\beta}. \end{equation} Due to the identities \eqref{NLTID} the system of polyconvex elastodynamics \eqref{PXELASTSYS} can be embedded into the enlarged system \cite{DST} \begin{equation}\label{EXTSYS} \begin{aligned} \partial_{t}v_i&=\partial_{\alpha}\biggl(\pd {G}{\Xi_A}(\Xi) \,\,\pd{\Phi^{A}}{F_{i\alpha}}(F)\biggr)\\ \partial_{t}\Xi_{A}&=\partial_{\alpha}\biggl(\pd{\Phi^{A}}{F_{i \alpha}}(F)\,v_{i}\biggr). \end{aligned} \end{equation} The extension has the following properties: \begin{litemize}{(E\,3)}
\item[(E\,1)] If $F(\cdot,0)$ is a gradient then $F(\cdot,t)$ remains a gradient for all $t$.
\item[(E\,2)] If $F(\cdot,0)$ is a gradient and $\Xi(\cdot,0)=\Phi(F(\cdot,0))$, then $F(\cdot,t)$ remains a gradient and $\Xi(\cdot,t)=\Phi(F(\cdot,t))$ for all $t$. In other words, the system of polyconvex elastodynamics can be viewed as a constrained evolution of \eqref{EXTSYS}.
\item[(E\,3)] The enlarged system admits a convex entropy \begin{equation}\label{ENTDEF}
\eta(v,\Xi) = \tfrac{1}{2} |v|^2 +G(\Xi), \quad (v,\Xi) \in \mathbb{R}^{22} \end{equation} and thus is symmetrizable (along the solutions that are gradients). \end{litemize}
Based on the time-discretization of the enlarged system \eqref{EXTSYS} S.~Demoulini, D.~M.~A.~Stuart and A.~E.~Tzavaras \cite{DST} developed a variational approximation scheme which, for the given initial data \begin{equation*} \Theta^0:=(v^0, \Xi^0)=(v^0,F^0,Z^0,w^0) \in L^2 \times L^p \times L^2 \times L^2 \end{equation*} and fixed $h>0$, constructs the sequence of successive iterates \begin{equation*} \Theta^j:=(v^j, \Xi^j)=(v^j,F^j,Z^j,w^j) \in L^2 \times L^p \times L^2 \times L^2, \quad j \geqslant 1 \end{equation*} with the following properties (see \cite[Lemma 1, Corollary 2]{DST}):
\begin{litemize}{(P\,5)}
\item[(P\,1)] The iterate $(v^j,\Xi^j)$ is the unique minimizer of the functional \begin{equation*} \mathcal{J}(v,\Xi) = \int_{\mathbb{T}^3}
\biggl(\tfrac{1}{2}|v-v^{j-1}|^2 + G(\Xi)\biggr) dx \end{equation*} over the weakly closed affine subspace \begin{equation*} \begin{aligned} \mathcal{C} = &\biggl\{ (v,\Xi)\in L^2 \times L^p \times L^2 \times L^2: \ \mbox{such that} \ \forall \varphi \in C^{\infty}(\mathbb{T}^3)\\ & \, \int_{\mathbb{T}^3} \biggl(\frac{\Xi_{A} - \Xi^{j-1}_{A}}{h}\biggr) \varphi \,dx = - \int_{\mathbb{T}^3} \biggl(\pd{\Phi^{A}}{F_{i\alpha}}(F^{j-1}) v_i\biggr) \partial_{\alpha} \varphi \, dx \biggr\}. \end{aligned} \end{equation*}
\item[(P\,2)] For each $j\geqslant 1$ the iterates satisfy \begin{equation}\label{DISCEXTSYS} \begin{aligned} \frac{v^j_i-v^{j-1}_i}{h}&=\partial_{\alpha}\biggl(\pd{G}{\Xi_{A}}(\Xi^j)\,\,\pd{\Phi^{A}}{F_{i\alpha}}(F^{j-1})\biggr)\\ \frac{\Xi^j_A-\Xi^{j-1}_A}{h}&=\partial_{\alpha}\biggl(\pd{\Phi^{A}}{F_{i\alpha}}(F^{j-1})\,v^j_i\biggr) \end{aligned} \quad \mbox{in} \ \mathcal{D}'(\mathbb{T}^3). \end{equation}
\item[(P\,3)] If $F^0$ is a gradient, then so is $F^j\,$
for all $j \geqslant 1$.
\item[(P\,4)] Iterates $v^j$, $j\geqslant 1$ have higher regularity: $v^j \in W^{1,p}(\mathbb{T}^3)$ for all $j\geqslant 1$.
\item[(P\,5)]There exists $E_0>0$ determined by the initial data such that \begin{equation}\label{ITERBOUND} \begin{aligned}
\sup_{j \geqslant \, 0} \Bigl( \| v^j\|^2_{L^2_{dx}} +\int_{\mathbb{T}^3} G(\Xi^j) \,dx \Bigr) + \sum_{j=1}^{\infty}
\|\Theta^j - \Theta^{j-1}\|_{L^2_{dx}}^2 \leqslant E_0. \end{aligned} \end{equation}
\end{litemize}
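The two equations in (P\,2) reflect the variational structure in (P\,1): formally (a sketch; see \cite[Lemma 1]{DST} for the rigorous statement), \eqref{DISCEXTSYS}$_2$ is the constraint defining $\mathcal{C}$, while \eqref{DISCEXTSYS}$_1$ is the Euler--Lagrange equation of $\mathcal{J}$ on $\mathcal{C}$. Indeed, an admissible variation $(\delta v,\delta\Xi)$ satisfies $\delta\Xi_A = h\,\partial_{\alpha}\bigl(\Phi^{A}_{,i\alpha}(F^{j-1})\,\delta v_i\bigr)$, and stationarity of $\mathcal{J}$ at the minimizer gives \begin{equation*} 0 \,=\, \int_{\mathbb{T}^3} \Bigl( (v^j_i-v^{j-1}_i)\,\delta v_i + h\, G_{,A}(\Xi^j)\,\partial_{\alpha}\bigl(\Phi^{A}_{,i\alpha}(F^{j-1})\,\delta v_i\bigr)\Bigr)\,dx, \end{equation*} which is the weak form of \eqref{DISCEXTSYS}$_1$.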
Given the sequence of spatial iterates $(v^j,\Xi^j)$, $j\geqslant 1$ we define (following~\cite{DST}) the time-continuous, piecewise linear interpolates $\Theta^{(h)}:=(V^{(h)}, \Xi^{(h)})$ by \begin{equation}\label{CONTINTP} \begin{aligned} V^{(h)}(t)&=\sum^{\infty}_{j=1}\mathcal{X}^j(t) \Bigl(v^{j-1}+\frac{t-h(j-1)}{h}(v^j-v^{j-1})\Bigr)\\ \Xi^{(h)}(t)&=\bigl(F^{(h)},Z^{(h)},w^{(h)}\bigr)(t)\\ &=\sum^{\infty}_{j=1}\mathcal{X}^j(t)\Bigl( \Xi^{j-1}+\frac{t-h(j-1)}{h}(\Xi^j - \Xi^{j-1})\Bigr), \end{aligned} \end{equation} and the piecewise constant interpolates $\theta^{(h)}:=(v^{(h)}, \xi^{(h)})$ and $\tilde{f}^{(h)}$ by \begin{equation}\label{CONSTINTP} \begin{aligned} v^{(h)}(t)&=\sum^{\infty}_{j=1}\mathcal{X}^j(t)v^j\\ \xi^{(h)}(t)&=(f^{(h)},z^{(h)},\omega^{(h)})(t)=\sum^{\infty}_{j=1}\mathcal{X}^j(t)\Xi^j\\ \tilde{f}^{(h)}(t)&=\sum^{\infty}_{j=1}\mathcal{X}^j(t)F^{j-1}, \end{aligned} \end{equation} where $\mathcal{X}^j(t)$ is the characteristic function of the interval $I_j:=[(j-1)h,jh)$. Notice that $\tilde{f}^{(h)}$ is the time-shifted version of $f^{(h)}$ and it is used later in defining a relative entropy flux, as well as the time-continuous equations \eqref{TCDSYS}.
Our main objective is to prove convergence of the interpolates $(V^{(h)},F^{(h)})$ obtained via the variational scheme to the solution of polyconvex elastodynamics as long as the limit solution remains smooth. This is achieved by employing the extended system \eqref{EXTSYS} and proving convergence of the time-continuous approximates $\Theta^{(h)}=(V^{(h)},\Xi^{(h)})$ to the solution $\bar{\Theta}=(\bar{V},\bar{\Xi})$ of the extension \eqref{EXTSYS} as long as $\bar{\Theta}$ remains smooth.
\newtheorem*{MThm}{Main Theorem} \begin{MThm} Let $W$ be defined by \eqref{POLYCONVEXITY} with $G$ satisfying {\rm (H1)--(H5)}. Let $\Theta^{(h)}=(V^{(h)}, \Xi^{(h)})$, $\theta^{(h)}=(v^{(h)}, \xi^{(h)})$ and $\tilde{f}^{(h)}$ be the interpolates defined via \eqref{CONTINTP}, \eqref{CONSTINTP} and induced by the sequence of spatial iterates \begin{equation*} \Theta^j=(v^j,\Xi^j)=(v^j,F^j,Z^j,w^j) \in L^2 \times L^p \times L^2 \times L^2, \quad j \geqslant 0 \end{equation*} which satisfy {\rm (P1)--(P5)}. Let $\bar{\Theta}=(\bar{V}, \bar{\Xi})=(\bar{V}, \bar{F},\bar{Z},\bar{w})$ be { the} smooth solution of \eqref{EXTSYS} defined on $\mathbb{T}^3 \times [0,T]$ and emanating from the data $\bar{\Theta}^0=(\bar{V}^0, \bar{F}^0,\bar{Z}^0,\bar{w}^0)$. Assume also that ${F}^0,\bar{F}^0$ are gradients. Then: \begin{itemize} \item[\rm (a)] The relative entropy $\eta^r=\eta^r(\Theta^{(h)},\bar{\Theta})$ defined by \eqref{RENTDEF} satisfies \eqref{RENTID}. Furthermore, there exist constants $\mu,\mu'>0$ such that \begin{equation*} \mu \, \mathcal{E}(t) \leqslant \int_{\mathbb{T}^3} \,\eta^r(x,t)\, dx \leqslant \mu' \mathcal{E}(t), \quad t\in [0,T] \end{equation*}
where \begin{equation*} \mathcal{E}(t) := \int_{\mathbb{T}^3} \,\Bigl(
(1+|F^{(h)}|^{p-2}+|\bar{F}|^{p-2})
|F^{(h)}-{\bar{F}}|^2+|\Theta^{(h)}-\bar{\Theta}|^2 \Bigr) \, dx. \end{equation*} \item[\rm (b)] There exists $\varepsilon>0$ and $C=C(T,\bar{\Theta},E_0,\mu,\mu',\varepsilon)>0$ such that for all $h\in(0,\varepsilon)$ \begin{equation*} \begin{aligned} \mathcal{E}(\tau) \leqslant C \,\bigl(\mathcal{E}(0) +h \bigr), \quad \tau\in [0,T] . \end{aligned} \end{equation*} Moreover, if the data satisfy $\,\mathcal{E}^{(h)}(0) \to 0$ as $h\downarrow 0$, then \begin{equation*}
\sup_{t\in[0,T]}\int_{\mathbb{T}^3} \BBR{|\Theta^{(h)} -
\bar{\Theta}|^2 +
|F^{(h)}-\bar{F}|^2(1+|F^{(h)}|^{p-2}+|\bar{F}|^{p-2})}dx \rightarrow 0 \end{equation*} as $h\downarrow 0$. \end{itemize} \end{MThm}
\begin{corollary*} Let $\Theta^{(h)}=(V^{(h)},\Xi^{(h)})$ be as in the Main Theorem. Let $(\bar{V},\bar{F})$ be a smooth solution of \eqref{PXELASTSYS} with $\bar{F}(\cdot,0)$ a gradient and $\bar{\Theta}=(\bar{V},\Phi(\bar{F}))$. Assume that the initial data satisfy $\Theta^{(h)}(\cdot,0)=\bar{\Theta}(\cdot,0)$. Then \begin{equation*}
\sup_{t\in[0,T]} \BBR{ \|V^{(h)} - \bar{V} \|^2_{L^2(\mathbb{T}^3)} + \|\Xi^{(h)} - \Phi(\bar{F}) \|^2_{L^2(\mathbb{T}^3)} +
\|F^{(h)}-\bar{F}\|^p_{L^p(\mathbb{T}^3)} } = O(h). \end{equation*} \end{corollary*}
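The Corollary follows from part {\rm (b)} of the Main Theorem (a sketch, using that $(\bar{V},\Phi(\bar{F}))$ is a smooth solution of \eqref{EXTSYS}, cf.~(E\,2)): since $\Theta^{(h)}(\cdot,0)=\bar{\Theta}(\cdot,0)$ we have $\mathcal{E}(0)=0$, hence $\mathcal{E}(t)\leqslant Ch$ for all $t\in[0,T]$, and the quantities appearing in the Corollary are controlled by $\mathcal{E}(t)$ because $|V^{(h)}-\bar{V}|^2+|\Xi^{(h)}-\Phi(\bar{F})|^2 = |\Theta^{(h)}-\bar{\Theta}|^2$ and \begin{equation*} |F^{(h)}-\bar{F}|^{p} \,=\, |F^{(h)}-\bar{F}|^{p-2}\,|F^{(h)}-\bar{F}|^{2} \,\leqslant\, 2^{p-3}\bigl(|F^{(h)}|^{p-2}+|\bar{F}|^{p-2}\bigr)\,|F^{(h)}-\bar{F}|^{2}. \end{equation*}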
\begin{remark} The smooth solution $\bar{\Theta}=(\bar{V},\bar{\Xi})$ to the extended system \eqref{EXTSYSINTRO} is provided beforehand. A natural question arises whether such a solution exists. We briefly discuss the existence theory for \eqref{ELASTINTRO2} on the torus $\mathbb{T}^3$. In \cite{DafermosHrusa85} energy methods are used to establish local (in time) existence of smooth solutions to certain initial-boundary value problem that apply to the system of nonlinear elastodynamics \eqref{ELASTINTRO1} with rank-1 convex stored energy. More precisely, for a bounded domain $\Omega\subset \mathbb{R}^n$ with the smooth boundary $\partial \Omega$ the authors establish (\cite[Theorem 5.2]{DafermosHrusa85}) the existence of the unique {motion} $y( \cdot,t)$ satisfying \eqref{ELASTINTRO1} in $\Omega \times [0,T]$ together with boundary conditions $y(x,t)=0$ on $\partial\Omega \times [0,T]$ and initial conditions $y(\cdot,0)=y_0$ and $y_t(\cdot,0)=y_1$ whenever $T>0$ is small enough and the initial data lie in a compact set. One may get a counterpart of this result for solutions on $\mathbb{T}^3$ since the methods in \cite{DafermosHrusa85} are developed in the abstract framework: a quasi-linear partial differential equation is viewed as an abstract differential equation with initial value problem set on an interpolated scale of separable Hilbert spaces ${\{H_{\gamma}\}}_{\gamma\in[0,m]}$ with $m \geqslant 2$. To be precise, the spaces satisfy $H_{\gamma}=[H_0, H_m]_{\gamma/m}$ and the desired solution $u(t)$ of an abstract differential equation is assumed to be taking values in $H_m \bigcap V$, where $V$, a closed subspace of $H_1$, is designated to accommodate the boundary conditions (cf.~\cite[Section~2]{DafermosHrusa85}). By choosing appropriate spaces, namely \begin{equation*}
H_{\gamma} = \left[L^2(\mathbb{T}^3), W^{m,2}(\mathbb{T}^3)
\right]_{\gamma/m} \quad \mbox{and} \quad
V=H_1=W^{1,2}(\mathbb{T}^3), \end{equation*} and requiring strong ellipticity (cf.~\cite[Section 5]{DafermosHrusa85}) for the stored energy one may apply \cite[Theorem 4.1]{DafermosHrusa85} to conclude the local existence of smooth solutions on the torus $\mathbb{T}^3$ to the system of elastodynamics \eqref{ELASTINTRO1} and hence to \eqref{ELASTINTRO2}. Since strong polyconvexity implies strong ellipticity \cite{Ball77}, the same conclusion holds for the case of polyconvex energy which is used here. \end{remark}
\begin{remark} The framework for existence of measure-valued solutions for the polyconvex elasticity system (see (H1)--(H4) of \cite{DST}) and that of uniqueness of classical solutions within the class of measure-valued solutions (see \cite{DST2}) is more general than the framework used in the Main Theorem. This discrepancy is due to the relative entropy being best adapted to an $L^2$ setting and to technical difficulties connected to the estimations of the time-step approximants of \eqref{DISCEXTSYS}. Our approach, based on using the ``distance'' function in \eqref{DISTDEF} as a substitute for the relative entropy, simplifies the estimations but limits applicability to stored energies \eqref{POLYCONVEXITY}, \eqref{GDECOMP} with
$L^p$-growth for $F$ but only $L^2$-growth in $\mathrm{cof\, } F$ and $\det F$.
\end{remark}
\section{Relative entropy identity}
In the rest of the paper, we suppress the dependence on $h$ to simplify notation and, {\it cf.}~the Main Theorem, assume: \begin{itemize} \item[{(1)}] $\Theta=(V,\Xi)$, $\theta=(v,\xi)$, $\tilde{f}$ are the approximates defined by \eqref{CONTINTP} and \eqref{CONSTINTP}.
\item[{(2)}] $\bar{\Theta}=(\bar{V},\bar{\Xi})=(\bar{V},\bar{F},\bar{Z},\bar{w})$ is a smooth solution of \eqref{EXTSYS} defined on \linebreak $\mathbb{T}^3 \times [0,T]$ where $T>0$ is finite. \end{itemize}
The goal of this section is to derive an identity for the relative entropy of the two solutions. To this end, we define the relative entropy \begin{equation}\label{RENTDEF} \eta^r(\Theta,\Bar{\Theta}):=\eta(\Theta)-\eta(\bar{\Theta})-\nabla\eta(\bar{\Theta})(\Theta-\bar{\Theta}) \end{equation} and the associated relative entropy flux, which will turn out to be \begin{equation} \label{RENTFLUX} \begin{aligned} q^r_{\alpha}(\theta,\Bar{\Theta},\tilde{f}):=(v_i-\bar{V}_i)\bigl(G_{,A}(\xi)-G_{,A}(\bar{\Xi})\bigr) \, \Phi_{,i\alpha}^A(\tilde{f}), \quad \alpha = 1,2,3. \end{aligned} \end{equation}
We now state two elementary lemmas used in our further computations. The first one extends the null-Lagrangian properties while the second one provides the rule for the divergence of the product in the non-smooth case.
\begin{lemma}[null-Lagrangian properties]\label{DIVPHIZLMM} Assume $q>2$ and $r \geqslant \tfrac{q}{q-2}$. Then, if $u \in W^{1,q}(\mathbb{T}^3;\mathbb{R}^3)$, $z\in W^{1,r}(\mathbb{T}^3)$, we have \begin{equation*} \begin{aligned} \partial_{\alpha}\biggl(\PHD\BBR{\nabla{u}}\biggr)&=0\\ \partial_{\alpha}\biggl(\PHD(\nabla{u})z\biggr) &= \PHD(\nabla{u})\,\partial_{\alpha}z \end{aligned} \quad \mbox{in} \ \mathcal{D}'(\mathbb{T}^3) \end{equation*} for each $i=1,\dots,3$ and $A=1,\dots,19$. \end{lemma}
\begin{lemma}[product rule]\label{DIVPDLMM} Let $q \in (1,\infty)$ and $q'=\frac{q}{q-1}$. Assume \begin{equation*} f\in W^{1,q}(\mathbb{T}^3),\ h\in L^{q'}(\mathbb{T}^3;\mathbb{R}^3) \quad \mbox{and} \quad \mathrm{div} \,h \in L^{q'}(\mathbb{T}^3). \end{equation*} Then $fh\in L^1(\mathbb{T}^3;\mathbb{R}^3)$, $\mathrm{div}\,(fh)\in L^1(\mathbb{T}^3)$ and \begin{equation*} \mathrm{div}\,(fh)=f\mathrm{div}\,h+\nabla{f}h \quad \mbox{in} \ \mathcal{D}'(\mathbb{T}^3). \end{equation*} \end{lemma}
\begin{lemma}[relative entropy identity]\label{RENTIDLMM} For almost all $t\in [0,T]$ \begin{equation}\label{RENTID} \partial_t \eta^r-\mathrm{div}\,q^r =Q-\frac{1}{h} \sum_{j=1}^{\infty} \mathcal{X}^j(t) D^j+S \quad \mbox{in} \ \mathcal{D}'(\mathbb{T}^3) \end{equation} where \begin{equation}\label{QTERM} \begin{aligned} Q &:= \partial_{\alpha}(G_{,A}(\bar{\Xi})) \bigl(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F})\bigr) \bigl(V_i-\bar{V}_i\bigr)\\[1pt] &\phantom{:=}\; +\partial_{\alpha}\bar{V}_i \bigl(G_{,A}(\Xi)-G_{,A}(\bar{\Xi})\bigr) \bigl(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F})\bigr)\\[1pt] &\phantom{:=}\;+\partial_{\alpha}\bar{V}_i \bigl(G_{,A}(\Xi)-G_{,A}(\bar{\Xi})-G_{,AB}(\bar{\Xi})(\Xi-\bar{\Xi})_B \bigr)\Phi_{,i\alpha}^A(\bar{F}) \end{aligned} \end{equation} estimates the difference between the two solutions, \begin{equation}\label{DTERM} D^j:=\bigl(\nabla{\eta}(\theta)-\nabla{\eta}(\Theta)\bigr) \delta\Theta^j, \end{equation} where $\delta\Theta^j:=\Theta^j - \Theta^{j-1}$, are the dissipative terms, and \begin{equation}\label{STERM} \begin{aligned} S := & \, \partial_{\alpha}(G_{,A}(\bar{\Xi}))\Bigl[ \,\Phi_{,i\alpha}^A(\bar{F}) \bigl(v_i-V_i\bigr)
+ \bigl(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F})\bigr) \bigl(v_i-V_i\bigr)\\
&+ \bigl(\Phi_{,i\alpha}^A(\tilde{f})-\Phi_{,i\alpha}^A(F)\bigr) \bigl(v_i-V_i\bigr)
+ \bigl(\Phi_{,i\alpha}^A(\tilde{f})-\Phi_{,i\alpha}^A(F)\bigr) \bigl(V_i-\bar{V_i}\bigr) \Bigr]\\ &+\partial_{\alpha}\bar{V_i}\Bigl[ \bigl(G_{,A}(\xi)-G_{,A}(\Xi) \bigr)\Phi_{,i\alpha}^A(\bar{F})\\
&+ \bigl(G_{,A}(\xi)-G_{,A}(\Xi)\bigr) \bigl(\Phi_{,i\alpha}^A(\tilde{f})-\Phi_{,i\alpha}^A(F)\bigr)\\
&+ \bigl(G_{,A}(\xi)-G_{,A}(\Xi)\bigr) \bigl(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F})\bigr) \\
&+ \bigl(G_{,A}(\Xi)-G_{,A}(\bar{\Xi})\bigr) \bigl(\Phi_{,i\alpha}^A(\tilde{f})-\Phi_{,i\alpha}^A(F)\bigr)\Bigr] \end{aligned} \end{equation} is the error term. \end{lemma}
\begin{proof} Notice that by \eqref{CONTINTP} for almost all $t \geqslant 0$ \begin{equation}\label{TIMEDVVXI} \begin{aligned} \partial_{t}V(\cdot,t) &=\sum_{j=1}^{\infty}\mathcal{X}^j(t)\frac{\delta v^j}{h}, \quad \delta v^j:={v^j-v^{j-1}}\\ \partial_{t}\Xi(\cdot,t)&=\sum_{j=1}^{\infty}\mathcal{X}^j(t)\frac{\delta \Xi^j}{h},\quad \delta \Xi^j:=\Xi^j - \Xi^{j-1}. \end{aligned} \end{equation} Hence by \eqref{GCOMPFLD}, \eqref{DISCEXTSYS} and \eqref{TIMEDVVXI} we obtain for almost all $t \geqslant 0$ \begin{equation}\label{TCDSYS} \begin{aligned} \partial_{t} V_i(\cdot,t)&=\mathrm{div} \bigl( g_i(\xi,\tilde{f})\bigr)\\[0pt] \partial_{t} \Xi_{A}(\cdot,t)&=\partial_{\alpha}\bigl( \hspace{1pt} \Phi^A_{,i\alpha}(\tilde{f})\,v_i\bigr) \end{aligned}\quad \mbox{in} \ \mathcal{D}'(\mathbb{T}^3). \end{equation} Since $(\bar{V},\bar{\Xi})$ is the smooth solution of \eqref{EXTSYS}, using \eqref{GCOMPFLD} we also have \begin{equation}\label{EXTSYS2} \begin{aligned} \partial_{t}\bar{V}_i &= \mathrm{div} \bigl( \hspace{1pt} g_i(\bar{\Xi},\bar{F}) \bigr) \\ \partial_{t}\bar{\Xi}_{A}&=\partial_{\alpha}\bigl(\Phi^A_{,i\alpha}(\bar{F})\,\bar{V}_{i}\bigr) \end{aligned} \quad \mbox{in} \ \mathbb{T}^3 \times [0,T]. \end{equation}
Further in the proof we will perform a series of calculations that hold for smooth functions. A technical difficulty arises, since the iterates $(v^j,\Xi^j)$, $j\geqslant 1$ satisfying \eqref{DISCEXTSYS} are, in general, not smooth. To bypass this we employ Lemmas~\ref{DIVPHIZLMM} and \ref{DIVPDLMM} that provide the null-Lagrangian property and product rule in the smoothness class appropriate for the approximates $\Theta \!=\! (V,\Xi)$, $\theta\!= \!(v,\xi)$, $\tilde{f}$.
By assumption $F^0$ and $\bar{F}^0$ are gradients. Hence using (P\,3) we conclude that $F^j$, $j\geqslant 1$ are gradients. Furthermore, from (E1) it follows that $\bar{F}$ remains a gradient
for all $t$. Thus, recalling \eqref{CONTINTP}, \eqref{CONSTINTP}, we have \begin{equation}\label{FGRADPROP} \mbox{$F$, $f$, $\tilde{f}$ and $\bar{F}$ are gradients for all $t \in [0,T]$}. \end{equation} We also notice that by \eqref{PHIDEF}, \eqref{GCOMPFLD}, and (H4) we have for all $F^{*}\in \mathbb{R}^9$, $\Xi^{\circ}\in \mathbb{R}^{19}$
\begin{equation}\label{gEST} \begin{aligned}
\bigl|g_{i\alpha}\bigl(\Xi^{\circ},F^*\bigr)\bigr|^{p'}
&\leqslant C_g\Bigl(\,\Bigl|\pd{G}{F_{i\alpha}}\Bigr|^{\frac{p}{p-1}}+\bigl|F^*\bigr|^{\frac{p}{p-1}}
\Bigl|\pd{G}{Z_{k\gamma}}\Bigr|^{\frac{p}{p-1}}+\bigl|F^*\bigr|^{\frac{2p}{p-1}}
\Bigl|\pd{G}{w}\Bigr|^{\frac{p}{p-1}}\Bigr)\\[1pt]
&\leqslant C_g'\Bigl(\,|F^*|^p+\Bigl|\pd{G}{F_{i\alpha}}\Bigr|^{\frac{p}{p-1}}+
\Bigl|\pd{G}{Z_{k\gamma}}\Bigr|^{\frac{p}{p-2}}+\Bigl|\pd{G}{w}\Bigr|^{\frac{p}{p-3}}\Bigl)\\[1pt]
&\leqslant C_g''\Bigl(|F^*|^p+|F^{\circ}|^p+|Z^{\circ}|^2+|w^{\circ}|^2+1\Bigr) \end{aligned} \end{equation} where $p\in{[6,\infty)}$ and $p'=\tfrac{p}{p-1}$. Hence (H2), (P4)--(P5), \eqref{CONSTINTP}$_1$ and Lemmas~\ref{DIVPHIZLMM}, \ref{DIVPDLMM} along with \eqref{TCDSYS}$_1$ imply \begin{equation}\label{DIVPRODUCTS} \begin{aligned} \mathrm{div} \bigl(v_ig_i(\xi,\tilde{f})\bigr) &= {v_i}{\partial_{t} V_i} + \nabla{v_i} g_i(\xi,\tilde{f})\\ \mathrm{div}\bigl(\bar{V}_i g_i(\xi,\tilde{f})\bigr) &= \bar{V}_i \partial_t V_i + \nabla{\bar{V}_i} g_i\,(\xi,\tilde{f})\\ \mathrm{div}\bigl(v_ig_i(\bar{\Xi},\tilde{f})\bigr) &= v_i \Phi_{,i\alpha}^A(\tilde{f})\,\partial_\alpha(G_{,A}(\bar{\Xi}))+\nabla{v_i} g_i(\bar{\Xi},\tilde{f})\\ \mathrm{div}\bigl(\bar{V}_ig_i(\bar{\Xi},\tilde{f})\bigr) &= \bar{V}_i \Phi_{,i\alpha}^A(\tilde{f})\,\partial_\alpha(G_{,A}(\bar{\Xi}))+\nabla{\bar{V}_i} g_i(\bar{\Xi},\tilde{f}).\\ \end{aligned} \end{equation} Similarly, by (P4), Lemma \ref{DIVPHIZLMM}, \eqref{TCDSYS}$_2$ and \eqref{FGRADPROP} we have the identity \begin{equation}\label{NLPAPPL1} \partial_t\Xi_A(t)=\Phi^A_{,i\alpha}(\tilde{f})\,\partial_{\alpha}v_i.\\ \end{equation}
Thus, using \eqref{ENTDEF}, \eqref{DIVPRODUCTS}$_1$ and \eqref{NLPAPPL1}, we compute \begin{equation*} \begin{aligned} \partial_t \bigl(\eta(\Theta)\bigr) & = V_i \partial_t{V_i}+G_{,A}(\Xi) \partial_t\Xi_A\\[0pt] &=(V_i-v_i) \partial_t V_i+(G_{,A}(\Xi)-G_{,A}(\xi)) \partial_t \Xi_A + \mathrm{div}\bigl({v_i g_i(\xi,\tilde f)}\bigr)\\[0pt] &= \frac{1}{h}\sum_{j=1}^{\infty} \mathcal{X}^j(t) \bigl(\nabla{\eta(\Theta)}-\nabla{\eta(\theta)}\bigr) \delta\Theta^j + \mathrm{div}\bigl({v_i g_i(\xi,\tilde f)}\bigr). \end{aligned} \end{equation*} Furthermore, by \eqref{DIVPRODUCTS}$_2$ we have
$ \partial_t\bigl(\bar{V}_i(V_i-\bar{V}_i)\bigr) = \partial_t\bar{V}_i (V_i-\bar{V}_i) + \bar{V}_i \partial_t V_i - \bar{V}_i \hspace{1pt} \partial_t \bar{V}_i
=\partial_t \bar{V}_i (V_i-\bar{V}_i) + \mathrm{div}\bigl(\bar{V}_ig_i(\xi,\tilde{f})\bigr) - \nabla{\bar{V}_i} g_i(\xi,\tilde{f}) - \tfrac{1}{2} \partial_t \bar{V}^2 $
while using \eqref{NLPAPPL1} we obtain \begin{equation*} \begin{aligned} \partial_t (G_{,A}(\bar{\Xi})(\Xi-\bar{\Xi})_A)&= \partial_t (G_{,A}(\bar{\Xi}))(\Xi-\bar{\Xi})_A + G_{,A}(\bar{\Xi}) \partial_t \Xi_A - \partial_t (G(\bar{\Xi}))\\ &= \partial_t (G_{,A}(\bar{\Xi}))(\Xi-\bar{\Xi})_A + \nabla{v_i} g_i(\bar{\Xi},\tilde{f})- \partial_t (G(\bar{\Xi})). \end{aligned} \end{equation*} Next, notice that by \eqref{GCOMPFLD} and \eqref{RENTFLUX} we have \begin{equation}\label{RENTFULX1}
q^r = v_i g_i(\xi,\tilde{f}) - \bar{V}_i g_i(\xi,\tilde{f}) -
v_i g_i(\bar{\Xi},\tilde{f}) + \bar{V}_i
g_i(\bar{\Xi},\tilde{f}). \end{equation} Hence by \eqref{ENTDEF}, \eqref{RENTDEF}, \eqref{DTERM}, \eqref{DIVPRODUCTS} and the last four identities we obtain \begin{equation}\label{RELENTID1} \partial \eta^r - \mathrm{div} \, q^r = -\frac{1}{h}\sum_{j=1}^{\infty} \mathcal{X}^j(t) D^j + J \end{equation} where \begin{equation*} \begin{aligned} J:= & - \mathrm{div}\bigl(\bar{V}_i g_i(\bar{\Xi},\tilde{f})\bigr)+ \nabla{\bar{V}_i} g_i(\xi,\tilde{f})
+ \mathrm{div}\bigl(v_i g_i(\bar{\Xi},\tilde{f})\bigr) - \nabla{v_i} g_i(\bar{\Xi},\tilde{f}) \\
& - \partial_t \bar{V}_i (V_i-\bar{V}_i) - \partial_t (G_{,A}(\bar{\Xi}))(\Xi-\bar{\Xi})_A. \end{aligned} \end{equation*}
Consider now the term $J$. From \eqref{EXTSYS2}, \eqref{FGRADPROP} and Lemma \ref{DIVPHIZLMM} it follows that
$ \partial_t \bar{V}_i = \Phi^A_{,i\alpha}(\bar{F}) \partial_{\alpha} (G_{,A}(\Bar{\Xi})),$
$ \partial_t (G_{,A}(\bar{\Xi})) = G_{,AB}(\Bar{\Xi}) \Phi^{B}_{,i\alpha}(\bar{F}) \partial_{\alpha} \bar{V}_i. $
Then, \eqref{DIVPRODUCTS}$_{3,4}$ along with the last two identities and the fact that $G_{,AB}=G_{,BA}$ implies \begin{equation}\label{JTERM} \begin{aligned} J&= \partial_{\alpha}\bar{V}_i \Bigl(g_{i\alpha}(\xi,\tilde{f})-g_{i\alpha}(\bar{\Xi},\tilde{f})\Bigr)\\[0pt] &\quad+\partial_{\alpha}(G_{,A}(\bar{\Xi}))\Bigl(\Phi_{,i\alpha}^A(\tilde{f})(v_i-\bar{V}_i)-\Phi_{,i\alpha}^A(\bar{F})(V_i-\bar{V}_i)\Bigr)\\[0pt] &\quad-G_{,AB}(\bar{\Xi})(\Xi-\bar{\Xi})_A \Phi_{,i\alpha}^B(\bar{F})\, \partial_{\alpha}{\bar{V}}_i\\[0pt] &=\partial_{\alpha}\bar{V}_i \Bigl(g_{i\alpha}(\xi,\tilde{f})-g_{i\alpha}(\bar{\Xi},\tilde{f}) - g_{i\alpha}(\Xi,\bar{F}) + g_{i\alpha}(\bar{\Xi},\bar{F}) \Bigr)\\[0pt] &\quad+\partial_{\alpha}(G_{,A}(\bar{\Xi}))\Bigl(\Phi_{,i\alpha}^A(\tilde{f})(v_i-\bar{V}_i)-\Phi_{,i\alpha}^A(\bar{F})(V_i-\bar{V}_i)\Bigr)\\[0pt] &\quad+\partial_{\alpha}\bar{V}_i\Bigl(G_{,A}(\Xi)-G_{,A}(\bar{\Xi})-G_{,AB}(\bar{\Xi})(\Xi-\bar{\Xi})_B\Bigr)\Phi_{,i\alpha}^A(\bar{F})\\ &=:J_1+J_2+J_3. \end{aligned} \end{equation} Using \eqref{GCOMPFLD} we rearrange the term $J_1$ as follows: \begin{equation}\label{J1} \begin{aligned} J_1 &= \partial_{\alpha}\bar{V}_i \Bigl[ \bigl(G_{,A}(\xi)\!-\! G_{,A}(\bar{\Xi})\bigr) \Phi^A_{,i\alpha}(\tilde{f}) \!-\! \bigl(G_{,A}(\Xi)\!-\! G_{,A}(\bar{\Xi})\bigr) \Phi^A_{,i\alpha}(\bar{F})\Bigr]\\[0pt] &= \partial_{\alpha}\bar{V}_i \Bigl[ \bigl(G_{,A}(\xi)\!-\! G_{,A}(\Xi)\bigr) \bigl( \Phi^A_{,i\alpha}(\tilde{f}) \!-\! \Phi^A_{,i\alpha}(F) \bigr ) \\
& \quad +\! \bigl(G_{,A}(\xi)\!-\! G_{,A}(\Xi)\bigr) \bigl( \Phi^A_{,i\alpha}(F) \!- \!\Phi^A_{,i\alpha}(\bar{F}) \bigr )
\!+ \!\bigl(G_{,A}(\xi)\!- \!G_{,A}(\Xi)\bigr) \Phi^A_{,i\alpha}(\bar{F})\\
& \quad + \!\bigl(G_{,A}(\Xi)\!-\! G_{,A}(\bar{\Xi})\bigr) \bigl( \Phi^A_{,i\alpha}(\tilde{f}) \!-\! \Phi^A_{,i\alpha}(F) \bigr ) \\
& \quad + \!\bigl(G_{,A}(\Xi)\!-\! G_{,A}(\bar{\Xi})\bigr) \bigl( \Phi^A_{,i\alpha}(F) \!-\! \Phi^A_{,i\alpha}(\bar{F}) \bigr ) \Bigr]. \end{aligned} \end{equation} We also modify the term $J_2$ writing it in the following way: \begin{equation}\label{J2} \begin{aligned} J_2 & = \partial_{\alpha}(G_{,A}(\bar{\Xi})) \Bigl[ \Phi_{,i\alpha}^A(\tilde{f}) (v_i-\bar{V}_i) - \Phi_{,i\alpha}^A(\bar{F})(V_i-\bar{V}_i) \Big]\\[0pt] & = \partial_{\alpha}(G_{,A}(\bar{\Xi})) \Bigl[ \bigl(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F}) \bigr) \bigl(V_i-\bar{V}_i\bigr) \\
& \quad + \bigl(\Phi_{,i\alpha}^A(\tilde{f})-\Phi_{,i\alpha}^A(F) \bigr) \bigl(V_i- \bar{V}_i\bigr)
+ \bigl(\Phi_{,i\alpha}^A(\tilde{f})-\Phi_{,i\alpha}^A(F) \bigr) \bigl(v_i- V_i\bigr) \\
& \quad + \bigl(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F}) \bigr) \bigl(v_i- V_i\bigr)
+ \Phi_{,i\alpha}^A(\bar{F}) \bigl(v_i- V_i\bigr) \Bigr]. \end{aligned} \end{equation} By \eqref{JTERM}--\eqref{J2} we have $ J = J_1 + J_2 + J_3 = Q+S$. Hence by \eqref{RELENTID1} we get \eqref{RENTID}. \qed\end{proof}
\section{Proof of the Main Theorem}
The identity \eqref{RENTID} is central to our paper. In this section, we estimate each of its terms and complete the proof via Gronwall's inequality.
\subsection{A function $d(\cdot,\cdot)$ equivalent to the relative entropy }
\begin{definition*} Let $\Theta_1=(V_1,\Xi_1),\Theta_2=(V_2,\Xi_2) \in\mathbb{R}^{22}$. We set \begin{equation}\label{DISTDEF} d(\Theta_1,\Theta_2) =
\BBR{1+|F_1|^{p-2}+|F_2|^{p-2}}\BBA{F_1-{F_2}}^2+\BBA{\Theta_1-{\Theta_2}}^2 \end{equation} where $(F_1,Z_1,w_1)=\Xi_1,(F_2,Z_2,w_2)=\Xi_2\in \mathbb{R}^{19}$. \end{definition*}
The goal of this section is to show that the relative entropy $\eta^r$ can be equivalently represented by the function $d(\cdot,\cdot)$. Before we establish this relation, we prove an elementary lemma used in our further calculations: \begin{lemma}\label{AVESTLMM} Assume $q \geqslant 1$. Then for all $u,v\in \mathbb{R}^n$ and $\bar{\beta} \in [0,1]$ \begin{equation}\label{AVEST2}
\int_{0}^{\bar{\beta}}\int_{0}^{1}(1-\beta)\BBA{u+\alpha(1-\beta)(v-u)}^q d\alpha \,d\beta \, \geqslant\, c' \bar{\beta} \bigl(|u|^q+|v|^q \bigr) \end{equation} with constant $c'>0$ depending only on $q$ and $n$. \end{lemma}
\begin{proof} Observe first that \begin{equation} \label{AVEST1}
\int_{0}^{1}|u+\alpha(v-u)| \, d\alpha\,\geqslant\,
\bar{c}\BBR{|u|+|v|}, \quad \forall u,v \in \mathbb{R}^n \end{equation} with $\bar{c}=\frac{1}{4\sqrt{n}}$. Then, applying Jensen's inequality and using \eqref{AVEST1}, we get \begin{equation*} \begin{aligned}
&\int_{0}^{\bar{\beta}}\int_{0}^{1}(1-\beta)\bigl|u+\alpha(1-\beta)(v-u)\bigr|^q d\alpha\,d\beta\\
& \geqslant \,\int_{0}^{\bar{\beta}}(1-\beta)\biggl(\int_{0}^{1}\bigl|u+\alpha\bigl((1-\beta)v+\beta u-u\bigr)\bigr|\,d\alpha\biggr)^q d\beta\\
&\geqslant \, \bar{c}^q \int_{0}^{\bar{\beta}}(1-\beta) \bigl( { |u|+|(1-\beta)v+\beta u}| \bigr) ^q d\beta\\
& \geqslant \, \frac{\bar{c}^q}{2} \bigl(|u|^q+|v|^q\bigr) \int_{0}^{\bar{\beta}}(1-\beta)^{q+1} \,d\beta.\\
\end{aligned} \end{equation*} Since $q\geqslant 1$ and $(1-\bar{\beta})\in [0,1]$, we have
$ \int_{0}^{\bar{\beta}}(1-\beta)^{q+1}d\beta \, = \, \frac{1 - (1-\bar{\beta})^{q+2} }{q+2} \, \geqslant \, \frac{\bar{\beta}}{q+2}. $
Combining the last two inequalities we obtain \eqref{AVEST2}. \qed\end{proof}
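For completeness, we indicate one way to verify \eqref{AVEST1} (a sketch): for scalars $a,b\in\mathbb{R}$, the integral $\int_{0}^{1}|a+\alpha(b-a)|\,d\alpha$ equals $\tfrac{1}{2}(|a|+|b|)$ when $ab\geqslant 0$ and $\tfrac{a^2+b^2}{2(|a|+|b|)}$ otherwise, and in both cases it is at least $\tfrac{1}{4}(|a|+|b|)$. For $u,v\in\mathbb{R}^n$ we then use $|w|\geqslant n^{-1/2}\sum_{i}|w_i|$ componentwise to obtain \begin{equation*} \int_{0}^{1}\bigl|u+\alpha(v-u)\bigr|\,d\alpha \;\geqslant\; \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\int_{0}^{1}\bigl|u_i+\alpha(v_i-u_i)\bigr|\,d\alpha \;\geqslant\; \frac{1}{4\sqrt{n}}\bigl(|u|+|v|\bigr). \end{equation*}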
\begin{lemma}[$\eta^r$-equivalence]\label{RENTEQUIVLMM} There exist constants $\mu,\mu'>0$ such that \begin{equation}\label{RENTEQUIVD} \begin{aligned} \mu \, d(\Theta_1,\Theta_2) \, \leqslant \,\eta^r(\Theta_1,\Theta_2) \, \leqslant \, \mu' d(\Theta_1,\Theta_2) \end{aligned} \end{equation} for every $\Theta_1=(V_1,\Xi_1), \Theta_2=(V_2,\Xi_2) \in\mathbb{R}^{22}$. \end{lemma}
\begin{proof} Notice that \begin{equation}\label{RENTEST1} \begin{aligned} \eta^r( \Theta_1,\Theta_2)&=\eta (\Theta_1) -\eta({\Theta_2})-\nabla\eta({\Theta_2})(\Theta_1-{\Theta_2})\\ &=\int_{0}^{1}
\int_{0}^{1} s(\Theta_1-{\Theta_2})^T \bigl(\nabla^2\eta(\hat{\Theta})\bigr)(\Theta_1-{\Theta_2})\,ds\, d\tau.\\ \end{aligned} \end{equation} where
$ \hat{\Theta}=({\hat{V},\hat{\Xi}})=(\hat{V},\hat{F},\hat{Z},\hat{w}) :={\Theta_2}+\tau s(\Theta_1-\Theta_2),$
$\tau,s \in [0,1].$
Observe next that \begin{equation}\label{GRADG} \nabla_{\Xi} G = \begin{bmatrix} \nabla_{F} H \quad {0} \quad {0} \end{bmatrix} + \nabla_{\Xi} R \end{equation} and therefore by \eqref{ENTDEF} \begin{equation}\label{ETAHESS} \begin{aligned} &(\Theta_1\!-\!\Theta_2)^T \nabla^2\eta(\hat{\Theta}) (\Theta_1\!-\!\Theta_2)\\
&\!=\!|V_1\!-\!V_2|^2\! +\!(\Xi_1\!-\!\Xi_2)^T\nabla^2R(\hat{\Xi})(\Xi_1\!-\!\Xi_2)
\!+\!(F_1\!-\!F_2)^T\nabla^2H(\hat{F})(F_1\!-\!F_2). \end{aligned} \end{equation} Then (H1), \eqref{RENTEST1} and \eqref{ETAHESS} imply \begin{equation}\label{RENTEST2} \begin{aligned} \tfrac{1}{2}\BBA{V_1-V_2}^2+\tfrac{\gamma}{2}\BBA{\Xi_1-\Xi_2}^2
&+\kappa \BBA{F_1-F_2}^2 \int_{0}^{1}\int_{0}^{1}s|\hat{F}|^{p-2}ds\,d\tau\\
\leqslant\,\eta^r( & \Theta_1 ,\Theta_2)\; \leqslant \\
\tfrac{1}{2}\BBA{V_1-V_2}^2+\tfrac{\gamma'}{2}\BBA{\Xi_1-\Xi_2}^2
&+\kappa' \BBA{F_1-F_2}^2 \int_{0}^{1}\int_{0}^{1}s|\hat{F}|^{p-2}ds\,d\tau.\\ \end{aligned} \end{equation} We now consider the integral term in \eqref{RENTEST2}. Recall that $\hat{F} = F_2 + \tau s (F_1 - F_2)$. Then, estimating from above, we get \begin{equation*}
\int_{0}^{1}\int_{0}^{1}s|\hat{F}|^{p-2}ds\,d\tau \leqslant 2^{p-3}\BBR{|F_1|^{p-2}+|{F_2}|^{p-2}} \end{equation*} while for the estimate from below we use Lemma \ref{AVESTLMM} (applied with $u=F_2$, $v=F_1$, $q=p-2$, $\alpha=\tau$, $s=1-\beta$, and $\bar{\beta}=1$) and obtain \begin{equation*}
\int_{0}^{1}\int_{0}^{1}s|\hat{F}|^{p-2}ds\,d\tau \geqslant \, c'\BBR{|F_1|^{p-2}+|{F_2}|^{p-2}}. \end{equation*} Combining \eqref{RENTEST2} with the two last inequalities we obtain \eqref{RENTEQUIVD}. \qed\end{proof}
Observe that the smoothness of $\Bar{\Theta}$ implies that there exists $M=M(T)>0$ such that \begin{equation}\label{MBOUND} \begin{aligned}
\quad M \, \geqslant \, |\bar{\Theta}| + |\nabla_x\Bar{\Theta}| +
|\partial_t\Bar{\Theta}|, \quad (x,t) \in \mathbb{T}^3 \times [0,T]. \end{aligned} \end{equation}
\begin{lemma}[$\mathcal{E}$-equivalence]\label{RENTINTEQUIVLMM}
The relative entropy $\eta^r$ and function $d$ satisfy \begin{equation*} \eta^r(\Theta,\bar{\Theta}), \, d(\Theta,\bar{\Theta})\in L^{\infty}\BBR{[0,T];L^1}. \end{equation*} Moreover, \begin{equation*} \mu \, \mathcal{E}(t) \, \leqslant \int_{\mathbb{T}^3} \, \eta^r\bigl(\Theta(x,t),\bar{\Theta}(x,t)\bigr)\,dx \, \leqslant \, \mu' \mathcal{E}(t), \quad t\in[0,T] \end{equation*} where \begin{equation*} \mathcal{E}(t):=\int_{\mathbb{T}^3} d\bigl(\Theta(x,t),\bar{\Theta}(x,t)\bigr) \,dx \end{equation*} and constants $\mu,\mu'>0$ are defined in Lemma \ref{RENTEQUIVLMM}.
\end{lemma} \begin{proof} Fix $t\in[0,T]$. Then there exists $j\geqslant 1$ such that $t\in I_j$. Hence \eqref{CONTINTP}, \eqref{DISTDEF}, \eqref{MBOUND} and (H2) imply for $p\in{[6,\infty)}$ \begin{equation}\label{DISTEST1} \begin{aligned} d(\Theta(\cdot,t),\bar{\Theta}(\cdot,t))
&\leqslant C \Bigl( 1+|F|^p + |Z|^2+|w|^2+|V|^2\Bigr)\\
&\leqslant C \Bigl( 1+G(\Xi^{j-1}) + G(\Xi^{j})+|v^{j-1}|^2 +
|v^j|^2\Bigr) \end{aligned} \end{equation} with $C=C(M)>0$ independent of $h$, $j$ and $t$. Hence \eqref{ITERBOUND} and \eqref{DISTEST1} imply \begin{equation}\label{DISTEST2} \int_{\mathbb{T}^3} \, d(\Theta(\cdot,t),\bar{\Theta}(\cdot,t))\, dx \, \leqslant C'(1+E_0), \quad \forall t\in [0,T] \end{equation} for some $C'=C'(M)>0$. Then \eqref{RENTEQUIVD} and \eqref{DISTEST2} imply the lemma. \qed\end{proof}
\subsection{Estimate for the term $Q$ on $t\in[0,T]$} \begin{lemma}[$Q$-bound]\label{QTERMESTLMM} There exists $\lambda=\lambda(M) > 0$ such that \begin{equation}\label{QTERMEST}
\BBA{Q(x,t)} \leqslant \lambda \, d(\Theta,\bar{\Theta}), \quad (x,t) \in \mathbb{T}^3 \times [0,T]
\end{equation} where the term $Q$ is defined by \eqref{QTERM}. \end{lemma}
\begin{proof} Let $C=C(M)>0$ be a generic constant. Notice that for all $F_1, F_2 \in \mdd{3}$ \begin{equation}\label{PHIDIFFPROP}
\bigl|\Phi_{,i\alpha}^A(F_1)-\Phi_{,i\alpha}^A(F_2)\bigr|\,\leqslant\,\left\{ \begin{aligned} &\,0,& &A=1,\dots,9\\
&|F_1-F_2|,& & A=10,\dots,18\\
&3\bigl(|F_1|+|F_2|\bigr)|F_1-F_2|, & & A=19 \end{aligned} \right. \end{equation} and hence \begin{equation} \label{PHIDIFFPROP2}
|\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\bar{F})|\,\leqslant\,C\BBR{1+|F|}\BBA{F-\bar{F}},\quad A=1,\dots,19. \end{equation} Then, using \eqref{MBOUND} and \eqref{PHIDIFFPROP2} we estimate the first term of $Q$: \begin{equation}\label{QTEST1}
\bigl|\partial_{\alpha} (G_{,A}(\Bar{\Xi}))
(\Phi_{,i\alpha}^A(F)-\Phi_{,i\alpha}^A(\Bar{F}))(V_i-\bar{V}_i)\bigr|
\leqslant C \big(
(1+|F|^2)|F-\bar{F}|^2+|V-\bar{V}|^2 \big).
\end{equation}
Observe now that \eqref{GRADG} and \eqref{PHIDIFFPROP}$_1$ imply for all $\Xi_1,\Xi_2\in\mathbb{R}^{22}$, $F_3,F_4 \in \mathbb{R}^{9}$ \begin{equation}\label{GPHIDIFF} \begin{aligned} &(G_{,A}(\Xi_1) - G_{,A}(\Xi_2))(\Phi_{,i\alpha}^A(F_3) - \Phi_{,i\alpha}^A(F_4))\\ &=(R_{,A}(\Xi_1)-R_{,A}(\Xi_2))(\Phi_{,i\alpha}^A(F_3)-\Phi_{,i\alpha}^A(F_4)). \end{aligned} \end{equation} Thus, by (H1), \eqref{PHIDIFFPROP2} and \eqref{GPHIDIFF} we obtain the estimate for the second term: \begin{equation}\label{QTEST2}
\bigl|\partial_{\alpha} \Bar{V_i}
(G_{,A}(\Xi)\!-\!G_{,A}(\bar{\Xi}))(\Phi_{,i\alpha}^A(F)\!-\!\Phi_{,i\alpha}^A(\bar{F}))\bigr|
\!\leqslant\! C \big(|\Xi\!-\!\bar{\Xi}|^2\!+\!(1\!+\!|F|^2)|F\!-\!\bar{F}|^2\big).
\end{equation}
Finally, we define for each $A=1,\dots,19$ \begin{equation}\label{JADEF} \begin{aligned} J_A&:=G_{,A}(\Xi)-G_{,A}(\bar{\Xi})-G_{,AB}(\bar{\Xi})\BBR{\Xi-\bar{\Xi}}_B\\ &\phantom{:}=\int_{0}^{1}\int_{0}^{1}s(\Xi-\bar{\Xi})^T \nabla^2 G_{,A} (\hat{\Xi})(\Xi-\bar{\Xi}) \, ds \,d\tau \end{aligned} \end{equation} where
$ \hat{\Xi} = (\hat{F},\hat{Z},\hat{w}):= \bar{\Xi}+\tau s (\Xi -\bar{\Xi}),$
$\tau,s\in[0,1].$
By \eqref{GDECOMP} and (H5) we have for each $A=1,\dots,19$ \begin{equation}\label{GAHESSPROP} \begin{aligned}
\bigl|(\Xi-\bar{\Xi})^T \nabla^2 G_{,A}(\hat{\Xi})(\Xi-\bar{\Xi})\bigr| \leqslant C
\big(|F-\Bar{F}|^2|\hat{F}|^{p-3} + |\Xi - \Bar{\Xi}|^2 \big). \end{aligned} \end{equation} Then by \eqref{MBOUND} and \eqref{JADEF}, \eqref{GAHESSPROP} we obtain the estimate for the third term: \begin{equation}\label{QTEST3} \begin{aligned}
|\partial_{\alpha}\bar{V}_i \, \Phi_{,i\alpha}^A(\bar{F}) \, J_A|
&\leqslant C\Bigl(|\Xi\!-\!\bar{\Xi}|^2\!+\!|F\!-\!\bar{F}|^2 \!\!\int_{0}^{1}\!\!\!\int_{0}^{1}\!\!
|\bar{F}\! + \!\tau s(F \!-\!\bar{F})|^{p-3}ds\,d\tau \Bigr)\\
&\leqslant C\Bigl(|\Xi\!-\!\bar{\Xi}|^2\!+\!|F\!-\!\bar{F}|^2(1\!+\!|F|^{p-3})\Bigr). \end{aligned} \end{equation}
Thus by \eqref{DISTDEF}, \eqref{QTEST1}, \eqref{QTEST2} and \eqref{QTEST3} we conclude for $p \in {[6,\infty)}$ \begin{equation*} \BBA{Q(x,t)} \,\leqslant\,C
\Bigl(|\Theta-\bar{\Theta}|^2+(1+|F|^{p-2})|F-\bar{F}|^2\Bigr) \leqslant C\,d(\Theta,\Bar{\Theta}). \tag*{\qed} \end{equation*}
\end{proof}
\subsection{Estimates for the terms $D^j$ and $S$ on $t \in I_j'\subset [0,T]$} In this section, we consider $j \geqslant 1$ such that $(j-1)h < T$ and estimate the dissipative and error terms for $t\in I'_j$ where \begin{equation*} I_j'\,:= I_j\bigcap \;[0,T] = [(j-1)h,jh)\bigcap \;[0,T]. \end{equation*}
\begin{lemma}[$D^j$-bound]\label{DTERMESTLMM} Let $D^j$ be the term defined by \eqref{DTERM}. Then \begin{equation}\label{DINTEGRB} D^j \in L^{\infty}\bigl(I_j'\,; L^1(\mathbb{T}^3)\bigr) \end{equation} and there exists a constant $C_D>0$ independent of $h$ and $j$ such that
for all times $\tau\in \bar{I}_j':=[(j-1)h,jh]\bigcap\,[0,T]$ \begin{equation} \label{DINTXTjEST}
\int_{(j-1)h}^{\tau} \int_{\mathbb{T}^3} \biggl(\frac{1}{h}D^j \biggr) \, dx \, dt
\,\geqslant\, a(\tau)\, C_D \int_{\mathbb{T}^3}
\Bigl( |{\delta\Theta}^j|^2 +\bigl(|F^{j-1}|^{p-2}+|F^j|^{p-2} \bigr)
|\delta F^j|^2 \Bigr)\, dx \,\geqslant\, 0
\end{equation} with \begin{equation}\label{ADEF} \begin{aligned} \quad a(\tau):=\frac{\tau-h(j-1)}{h}\in [0,1], \quad \tau \in \bar{I}_j'. \end{aligned} \end{equation} \end{lemma}
\begin{proof} By (H1), \eqref{ENTDEF} and the definition of $D^j$ we have for $t\in I'_j$ \begin{equation} \label{DDISSP} \begin{aligned} D^j =\BBR{v-V} {\delta v^j} +\bigl(\nabla H(f)-\nabla H(F)\bigr){\delta F^j} +\bigl(\nabla R(\xi)-\nabla R(\Xi)\bigl) {\delta\Xi^j}.\\ \end{aligned} \end{equation} Consider each of the three terms in \eqref{DDISSP}. Notice that, by \eqref{CONTINTP}, \eqref{CONSTINTP}, we have \begin{equation}\label{DIFFAPPROX1} \begin{aligned} v(\cdot,t)-V(\cdot,t)&=\BBR{1-a(t)}\delta v^j \\ \xi(\cdot,t)-\Xi(\cdot,t)&=\BBR{1-a(t)}\delta \Xi^j. \end{aligned} \end{equation} Using \eqref{DIFFAPPROX1} we compute \begin{equation}\label{VRHDISSP} \begin{aligned}
\bigl( v-V \bigr) {\delta v^j} &= \BBR{1-a(t)}|\delta v^j|^2 \\ \bigl(\nabla R (\xi)-\nabla R (\Xi)\bigr) {\delta\Xi^j} &= \BBR{1-a(t)}\int_{0}^{1} (\delta\Xi^j)^T \nabla^2R (\Hat{\Xi})\,(\delta\Xi^j) \,ds\\ \bigl(\nabla H (f)-\nabla H(F)\bigr) \delta F^j &=\BBR{1-a(t)}\int_{0}^{1} (\delta F^j)^{T}\nabla^2H(\hat{F})\,(\delta F^j) \,ds \end{aligned} \end{equation} where
$\Hat{\Xi} =(\hat{F},\hat{Z},\hat{w}):=s{\xi}(\cdot,t)+(1-s)\Xi(\cdot,t), $
$s\in[0,1].$
Then (H1), \eqref{DDISSP} and~\eqref{VRHDISSP} together with the fact that $\BBR{1-a(t)}\in[0,1]$ imply \begin{equation}\label{DABSEST1T} \begin{aligned}
\BBA{D^j(\cdot,t)} \leqslant \biggl( |\delta v^j|^2 +
\gamma'|\delta \Xi^j|^2 +\kappa'|\delta F^j|^2\int_{0}^{1}|\Hat{F}(s,t)|^{p-2}ds\biggr). \end{aligned} \end{equation} Consider now the last two terms in \eqref{DABSEST1T}. Recalling that $\Hat{F} = sf + (1-s)F$ and using (H2) together with \eqref{CONTINTP}, \eqref{CONSTINTP} we obtain \begin{equation*} \begin{aligned}
&\gamma'|\delta \Xi^j|^2 + \kappa'|\delta F^j|^2\int_{0}^{1}|\Hat{F}(s,t)|^{p-2}ds \\
&\,\leqslant C \BBR{ 1 + |F^{j-1}|^p+|F^j|^p + |Z^{j-1}|^2+|Z^j|^2 + |w^{j-1}|^2+|w^j|^2 }\\ \end{aligned} \end{equation*} for some $C>0$ independent of $h$, $j$ and $t$. Thus, combining the last inequality with (H2), the growth estimate \eqref{ITERBOUND} and \eqref{DABSEST1T}, we conclude \begin{equation*} \begin{aligned} \int_{\mathbb{T}^3}\,\BBA{D^j(x,t)}\,dx \, \leqslant \, \nu' \bigl( 1+E_0 \bigr), \quad \forall t\in I_j' \end{aligned} \end{equation*} for some $\nu'>0$ independent of $h$, $j$ and $t$. This proves \eqref{DINTEGRB}.
Let us now estimate $D^j$ from below. By \eqref{DDISSP}, \eqref{VRHDISSP} and (H1) we obtain \begin{equation}\label{DEST} \begin{aligned} D^j(\cdot,t)
& \, \geqslant \,\nu\BBR{1-a(t)} \Bigl(|\delta \Theta^j|^2+|\delta F^j|^2\int_{0}^{1}|\hat{F}(s,t)|^{p-2}ds \Bigr) \geqslant \,0\\ \end{aligned} \end{equation} for $\nu=\min(1,\gamma,\kappa)>0$. Notice that
$$ \hat{F}(s,t) =sf(t)+(1-s)F(t) = F^j+(1-s)(1-a(t))(F^{j-1}-F^j). $$
Then, by making use of Lemma $\ref{AVESTLMM}$ we obtain for $\tau \in \bar{I}'_j$ \begin{equation*} \begin{aligned}
&\int_{(j-1)h}^{\tau} \Big(\BBR{1-a(t)} |\delta F^j|^2\int_{0}^{1}|\Hat{F}(s,t)|^{p-2}ds \Big) \, dt\\ &=h
|\delta F^j|^2 \int_{0}^{a(\tau)}\int_{0}^{1}(1-\beta)
|F^j+\alpha(1-\beta)(F^{j-1}-F^j)|^{p-2} d\alpha\,d\beta \\
&\geqslant h \, a(\tau)\,c' \big(|F^{j-1}|^{p-2}+|F^{j}|^{p-2}\big) |\delta F^j|^2 \end{aligned} \end{equation*} where we used the change of variables $\alpha = 1-s$ and $\beta=a(t)$. Similarly, we get \begin{equation*} \begin{aligned}
\int_{(j-1)h}^{\tau} \BBR{1-a(t)}|{\delta\Theta}^j|^2 \, dt \, =
\, h \,|{\delta\Theta}^j|^2 \int_{0}^{a(\tau)}(1-\beta) \, d\beta
\, \geqslant \, \frac{h \, a(\tau)}{2} |{\delta\Theta}^j|^2 . \end{aligned} \end{equation*} Then \eqref{DEST} and the last two estimates imply \eqref{DINTXTjEST} for $C_D=\min(\nu c',\frac{\nu}{2}) > 0$. \qed\end{proof}
\begin{lemma}[$S$-bound]\label{STERMESTLMM} Let $S$ be the term defined by \eqref{STERM}. Then \begin{equation}\label{SINTEGRB} S \in L^{\infty}\bigl(I_j'\,;L^1(\mathbb{T}^3)\bigr) \end{equation} and there exists a constant $C_S>0$ independent of $h$ and $j$ such that for any $\varepsilon>0$ and all $\tau \in \bar{I}'_j$ \begin{equation}\label{SINTEST} \begin{aligned} &\int_{(j-1)h}^{\tau} \int_{\mathbb{T}^3} \BBA{S(x,t)} \,dx\,dt\,\\
& \leqslant C_S\biggl[ \, a(\tau)(h+\varepsilon)
\int_{\mathbb{T}^3} \Bigl( |{\delta\Theta}^j|^2
+(|F^{j-1}|^{p-2}+|F^{j}|^{p-2})|\delta F^j|^2 \Bigr) dx \\
& \quad + \frac{a(\tau)h^2}{\varepsilon}(3+2E_0)+\int_{(j-1)h}^{\tau}\int_{\mathbb{T}^3} \,d(\Theta,\bar{\Theta}) \,dx \, dt \, \biggr] \end{aligned} \end{equation} with $a(\tau)$ defined by \eqref{ADEF}. \end{lemma}
\begin{proof} As before, we let $C=C(M)>0$ be a generic constant and remind the reader that all estimates are done for $t\in I'_j$.
Observe that \eqref{CONTINTP}$_2$, \eqref{CONSTINTP}$_3$ and \eqref{ADEF} imply
$ F(\cdot,t)-\tilde{f}(\cdot,t)=a(t) \hspace{1pt} \delta F^j. $
Hence by \eqref{CONTINTP}$_2$, \eqref{CONSTINTP}$_3$, \eqref{PHIDIFFPROP}, \eqref{ADEF} and the identity above we get the estimate \begin{equation}\label{PHIDIFFPROP3}
\bigl|\Phi_{,i\alpha}^{A}(\tilde{f})- \Phi_{,i\alpha}^{A}(F)
\bigr| \leqslant C \bigl(1+|\tilde{f}|+|F| \bigr) |F-\tilde{f}|
\leqslant C \BBR{1+|F^{j-1}|+|F^{j}|} |\delta F^j|.
\end{equation} Thus \eqref{PHIDIFFPROP2}, \eqref{ADEF}, \eqref{DIFFAPPROX1}$_1$, \eqref{PHIDIFFPROP3} and Young's inequality imply \begin{equation}\label{SESTTM1} \begin{aligned}
&\bigl| \Phi_{,i\alpha}^A(\bar{F})(v_i - V_i)\bigr|
+ \bigl|(\Phi_{,i\alpha}^{A} (F)-\Phi_{,i\alpha}^A(\bar{F}))(v_i-V_i)\bigr|\\
& +\bigl|(\Phi_{,i\alpha}^{A} (\tilde{f}) - \Phi_{,i\alpha}^{A}(F))(v_i-V_i) \bigr|
+\bigl|(\Phi_{,i\alpha}^{A} (\tilde{f}) - \Phi_{,i\alpha}^A(F))(V_i-\bar{V}_i)\bigr| \\
&\leqslant C \Bigl( |\delta v^j|+
(1+|F|^2)|F-\bar{F}|^2 + |\delta v^j|^2
+ (1+|F^{j-1}|^2+|F^{j}|^2)
|\delta F^j|^2 \\
& \quad + |V-\bar{V}|^2 \Bigr). \end{aligned} \end{equation} We also notice that for all $F_1, F_2 \in \mdd{3}$ \begin{equation*} \begin{aligned} H_{,i\alpha}(F_1)-H_{,i\alpha}(F_2)&= \int_{0}^{1}\frac{\partial^{2}H}{\partial F_{i\alpha}\partial F_{lm}}\bigl(sF_1+(1-s)F_2\bigr)(F_{1}-F_{2})_{lm} \, ds.\\
\bigl|\Phi_{,i\alpha}^A(\bar{F}) \BBR{G_{,A}(\xi)\!-\!G_{,A}(\Xi)}\bigr|
&\!\leqslant C\bigl(|\nabla H(f)\!-\!\nabla H(F)| \!+\!|\nabla R(\xi)\!-\!\nabla R(\Xi)| \bigr)\\
& \!\leqslant C\Bigl(|f \!-\! F|\!\! \int_{0}^1 \!\!\!|sf \!+\!(1\!-\!s)F|^{p-2}ds
\!+\! |\xi\!-\!\Xi|\Bigr)\\
&\!\leqslant C\bigl((|F^{j-1}|^{p-2}\!+\!|F^j|^{p-2})|\delta F^j| \!+ \!|\delta\Xi^j| \bigr). \end{aligned} \end{equation}
Next, by (H1), \eqref{PHIDIFFPROP2}, \eqref{GPHIDIFF}, \eqref{ADEF}, \eqref{DIFFAPPROX1}$_2$ and \eqref{PHIDIFFPROP3} we obtain \begin{equation}\label{SESTTM3} \begin{aligned}
&\bigl| (G_{,A}(\xi)\!-\!G_{,A}(\Xi)) (\Phi_{,i\alpha}^A(\tilde{f})\!-\!\Phi_{,i\alpha}^A(F)) \bigr| \\
&+\bigl| (G_{,A}(\xi)\!-\!G_{,A}(\Xi)) (\Phi_{,i\alpha}^A(F)\!-\!\Phi_{,i\alpha}^A(\bar{F})) \bigr|\\
&+\bigl| (G_{,A}(\Xi)\!-\!G_{,A}(\bar{\Xi}))(\Phi_{,i\alpha}^A(\tilde{f})\!-\!\Phi_{,i\alpha}^A(F)) \bigr|\\
&\leqslant C \Bigl( |\delta \Xi^j|^2 \!+\! (1\!+\!|F^{j-1}|^2\!+\!|F^{j}|^2)|\delta F^j|^2
\!+\! (1\!+\!|F|^2)|F\! -\!\bar{F}|^2 \!+\! |\Xi \!-\! \bar{\Xi}|^2 \Bigr). \end{aligned} \end{equation}
Finally, \eqref{STERM}, \eqref{MBOUND}, and the estimates \eqref{SESTTM1}-\eqref{SESTTM3} imply for $p\in{[6,\infty)}$ \begin{equation}\label{SEST1} \begin{aligned}
|S(\cdot,t)| \leqslant & C_S \biggl[ (|F^{j-1}|^{p-2}+|F^{j}|^{p-2})|\delta F^j|^2 + |{\delta\Theta}^j|^2 \\
&+(|F^{j-1}|^{p-2}+|F^{j}|^{p-2})|\delta F^j| + |\delta \Theta^j| + d(\Theta,\bar{\Theta}) \biggr] \end{aligned} \end{equation} for some $C_S>0$ independent of $h$, $j$ and $t$. Then, by \eqref{ITERBOUND} and \eqref{DISTEST1} we conclude that the right hand side of \eqref{SEST1} is in $L^{\infty}\BBR{I_j'\,; L^1(\mathbb{T}^3)}$ which proves \eqref{SINTEGRB}.
We now pick any $\varepsilon>0$. Then, employing Young's inequality, we obtain
$$
(|F^{j-1}|^{p-2}\!+ |F^j|^{p-2})|\delta F^j|
\!\leqslant \!\frac{h}{\varepsilon}\! \BBR{|F^{j-1}|^{p-2}\!+\!|F^j|^{p-2}}
\!+\frac{\varepsilon}{h}\!\BBR{|F^{j-1}|^{p-2}\!+\!|F^j|^{p-2}}|\delta F^j|^2
$$
and, similarly, $|\delta \Theta^j| \leqslant
\frac{h}{\varepsilon}+\frac{\varepsilon}{h} |\delta\Theta^j|^2$. Thus \eqref{SEST1} and the last two estimates imply \begin{equation}\label{SEST2} \begin{aligned} \BBA{S(\cdot,t)} \, \leqslant &\, C_S \Bigl[\bigl(1+\frac{\varepsilon}{h}\bigr)\bigl( \,
|{\delta\Theta}^j|^2 +(|F^{j-1}|^{p-2}+|F^{j}|^{p-2})|\delta F^j|^2 \,\bigr)\\
&+\frac{h}{\varepsilon}\bigl(1+|F^{j-1}|^{p-2}+|F^j|^{p-2}\bigr) + d(\Theta,\bar{\Theta})\Bigr]. \end{aligned} \end{equation} Finally, we integrate \eqref{SEST2} and use (H2) along with \eqref{ITERBOUND} to get \eqref{SINTEST}. \qed\end{proof}
\subsection{Conclusion of the proof via Gronwall's inequality}
We now estimate the left hand side of the relative entropy identity \eqref{RENTID}:
\begin{lemma}[LHS estimate]\label{LHSESTLMM} Let $\eta^r$, $q^r$ be the relative entropy and relative entropy flux, respectively, defined by \eqref{RENTDEF} and \eqref{RENTFLUX}. Then \begin{equation}\label{RENTIDLHSINTEGRB} \Bigl(\partial_t \eta^r-\mathrm{div}\,q^r \Bigr) \in L^{\infty}\BBR{[0,T], L^1(\mathbb{T}^3) } \end{equation} and there exists $\bar{\varepsilon}>0$ such that for all $h \in (0,\bar{\varepsilon})$ and $\tau \in [0,T]$ \begin{equation}\label{RENTIDLHSEST} \begin{aligned} \int_{0}^{\tau}\int_{\mathbb{T}^3} \,\bigl(\partial_t\,\eta^r-\mathrm{div}\,q^r \bigr)\,dx \,dt \leqslant C_I\Bigl(\tau h +\int_{0}^{\tau}\int_{\mathbb{T}^3} d(\Theta,\bar{\Theta}) \,dx\,dt\Bigr). \end{aligned} \end{equation} for some constant $C_I=C_I(M,E_0,\bar{\varepsilon})>0$. \end{lemma}
\begin{proof} Lemma \ref{RENTEQUIVLMM}, \eqref{QTERMEST}, \eqref{DINTEGRB}, and \eqref{SINTEGRB} imply that the right hand side of the relative entropy identity \eqref{RENTID} is in $L^{\infty}\BBR{[0,T]; L^1(\mathbb{T}^3)}$. This proves \eqref{RENTIDLHSINTEGRB}.
Notice that the constants $C_D$ and $C_S$ (that appear in Lemmas \ref{DTERMESTLMM} and \ref{STERMESTLMM}, respectively) are independent of $h$, $j$. Then set $\bar{\varepsilon}:= \frac{C_D}{2C_S}$. Take now $h\in(0,\bar{\varepsilon})$ and $\tau\in[0,T]$. Using Lemmas \ref{QTERMESTLMM}, \ref{DTERMESTLMM} and \ref{STERMESTLMM} (with $\varepsilon=\bar{\varepsilon}$) along with the fact that $-C_D+C_S(h+\bar{\varepsilon}) \leqslant 0 \,$ we get \begin{equation*} \begin{aligned} \int_{0}^{\tau}\hspace{-2pt}\int_{\mathbb{T}^3} \Bigl(-\frac{1}{h}\sum_{j=1}^{\infty} \mathcal{X}^j(t) \hspace{1pt}
D^j+|S|+|Q|\,\Bigr)dx dt \, \leqslant\, C_I\Bigl(\tau h +\int_{0}^{\tau} \hspace{-2pt}\int_{\mathbb{T}^3} d(\Theta,\bar{\Theta}) dx dt\Bigr) \end{aligned} \end{equation*} with $C_I:= 3\max \big(C_S \frac{1+E_0}{\bar{\varepsilon}},C_S+\lambda \big)>0$. Hence by \eqref{RENTID} and the estimate above we obtain \eqref{RENTIDLHSEST}. \qed\end{proof}
Observe that (P4), (P5), \eqref{RENTFLUX}, \eqref{TIMEDVVXI}, \eqref{gEST}, \eqref{DIVPRODUCTS}, and \eqref{RENTFULX1} imply \begin{equation*} \mathrm{div}\,q^r\in L^{\infty}\BBR{[0,T];L^1(\mathbb{T}^3)} \end{equation*} and hence by \eqref{RENTIDLHSINTEGRB} \begin{equation}\label{RENTINTEGRB} \partial_t \eta^r \in L^{\infty}\BBR{[0,T];L^1(\mathbb{T}^3)}. \end{equation}
Take now an arbitrary $h\in(0,\bar{\varepsilon})$ and $\tau \in [0,T]$. Due to periodic boundary conditions (by a density argument) we have $\int_{\mathbb{T}^3} \bigl(\mathrm{div}\,q^r(x,s)\bigr) \,dx=0$ for a.e. $s\in[0,T]$ and hence \begin{equation*} \int_{0}^{\tau}\int_{\mathbb{T}^3} \mathrm{div}\,q^r \,dx\,dt=0. \end{equation*} Finally, by construction, for each fixed $\bar{x}\in \mathbb{T}^3$ the function $\eta^r(\bar{x},t):[0,T] \to \mathbb{R}$ is absolutely continuous with the weak derivative $\partial_t\eta^r(\bar{x},t)$. Then, by \eqref{RENTINTEGRB} and Fubini's theorem we have \begin{equation*} \int_{0}^{\tau}\int_{\mathbb{T}^3}\partial_t \eta^r dx\,dt=\int_{\mathbb{T}^3} \BBS{\int_{0}^{\tau} \partial_t \eta^r(x,t) \,dt} dx =\int_{\mathbb{T}^3} \Bigl(\eta^r(x,\tau) - \eta^r(x,0)\Bigr) dx. \end{equation*} Thus by Lemma \ref{RENTINTEQUIVLMM}, \eqref{RENTIDLHSINTEGRB}-\eqref{RENTINTEGRB} and the two identities above we obtain \begin{equation}\label{PSIGROWTH} \mathcal{E}(\tau)\,\leqslant\,\bar{C}\Bigl(\mathcal{E}(0)+\int_{0}^{\tau}\mathcal{E}(t)\,dt+h\Bigr) \end{equation} with $\bar{C} \!:=\! \tfrac{T}{\mu}\max(C_I,\mu')$ independent of $\tau, h$. Since $\tau\!\in\![0,T]$ is arbitrary, by~\eqref{PSIGROWTH} and Gronwall's inequality we conclude \begin{equation*} \mathcal{E}(\tau)\,\leqslant\,\bar{C} \bigl(\mathcal{E}(0)+h\bigr)
e^{\bar{C}T}, \quad \forall\tau\in [0,T]. \end{equation*} In particular, if $\mathcal{E}^{(h)}(0)\to 0$ as $h\downarrow 0$, then $\sup_{\tau\in[0,T]} \bigl(\mathcal{E}^{(h)}(\tau)\bigr) \to 0$ as $h\downarrow 0$. \par
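For the reader's convenience, we record the version of Gronwall's inequality used above: if $\phi:[0,T]\to[0,\infty)$ is bounded and measurable and satisfies $\phi(\tau)\,\leqslant\, A+B\int_{0}^{\tau}\phi(t)\,dt$ for all $\tau\in[0,T]$ and some constants $A,B\geqslant 0$, then $\phi(\tau)\leqslant A\,e^{B\tau}$ on $[0,T]$. Here it is applied with $\phi=\mathcal{E}$, $A=\bar{C}\bigl(\mathcal{E}(0)+h\bigr)$ and $B=\bar{C}$, which is legitimate since $\mathcal{E}\in L^{\infty}([0,T])$ by Lemma \ref{RENTINTEQUIVLMM}.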
\end{document}
\begin{document}
\begin{abstract} We study the problem of a policymaker who aims at taming the spread of an epidemic while minimizing its associated social costs. The main feature of our model lies in the fact that the disease's transmission rate is a diffusive stochastic process whose trend can be adjusted via costly confinement policies. We provide a complete theoretical analysis, as well as numerical experiments illustrating the structure of the optimal lockdown policy. In all our experiments the latter is characterized by three distinct periods: the epidemic is first let freely evolve, then vigorously tamed, and finally a less stringent containment should be adopted. Moreover, the optimal containment policy is such that the product ``reproduction number $\times$ percentage of susceptible'' is kept after a certain date strictly below the critical level of one, although the reproduction number is let oscillate above one in the last more relaxed phase of lockdown. {{Finally, an increase in the fluctuations of the transmission rate is shown to give rise to an earlier beginning of the optimal lockdown policy, which is also diluted over a longer period of time.}} \end{abstract}
\maketitle
{\textbf{Keywords}}: SIR model; optimal stochastic control; viscosity solution; epidemic; lockdown.
{\textbf{MSC2010 subject classification}}: 93E20, 49N90, 92D30, 97M40.
{\textbf{JEL classification}}: C61, I18.
\section{Introduction} \label{introduction}
During the current Covid-19 pandemic, policymakers are dealing with the trade-off between safeguarding public health and damming the negative economic impact of severe lockdowns. The fight against the virus is made especially hard by the absence of a vaccination and the consequent random horizon of any policy, as well as by the extraordinariness of the event. In particular, the lack of data from the past, the difficulty of rapidly and accurately tracking infected, and super-spreading events such as mass gatherings, give rise to a random behavior of the transmission rate/reproduction number of the virus (see, e.g., \cite{Hotz}\footnote{Refer also to the website \url{https://stochastik-tu-ilmenau.github.io/COVID-19/index.html}}). In this paper we propose and study a model for the optimal containment of infections due to an epidemic in which both the time horizon and the transmission rate of the disease are stochastic.
In the last months, the scientific literature experienced an explosion in the number of works where the statistical analysis and the mathematical modeling of epidemic models is considered, as well as the economic and social impact of lockdown policies is investigated.
A large bunch of papers provides numerical studies related to the Covid-19 epidemics in the setting of classical epidemic models or of generalization of them. Among many others, we refer to \cite{Alvarez}, that studies numerically optimal containment policies in the context of a \emph{Susceptible-Infected-Recovered} (SIR) model (cf.\ \cite{KermackMcKendrick}); \cite{Kantner} which also allows for seasonal effects; \cite{Toda}, which estimates the transmission rate in various countries for a SIR model with given and fixed transmission rate; \cite{Asprietal}, which combines a careful numerical study with an elegant theoretical study of optimal lockdown policies in the SEAIRD (susceptible (S), exposed (E), asymptomatic (A), infected (I), recovered (R), deceased (D)) model; \cite{Erhan}, where a detailed numerical analysis is developed for a SIR model of the Covid-19 pandemic in which herd immunity, behavior-dependent transmission rates, remote workers, and indirect externalities of lockdown are explicitly considered; \cite{Acemoglu}, where -- in the context of a multi-group SIR model -- it is investigated the effect of lockdown policies which are targeted to different social groups (especially, the ``young'', the ``middle-aged'' and the ``old''); \cite{Gollier}, in which a multi-risk SIR model with heterogeneous citizens is calibrated on the Covid-19 pandemic in order to study the impact on incomes and mortality of age-specific confinements and Polymerase chain reaction (PCR) tests; \cite{Favero}, which calibrates and tests a SEIRD model (susceptible (S), exposed (E), infected (I), recovered (R), deceased (D)) of the spread of Covid-19 in an heterogeneous economy where different age and sectors are related to distinct risks.
A theoretical study of the optimal confinement policies in epidemic models is usually challenging because of the nonlinear structure of the underlying dynamical system. The first results on a control-theoretic approach to confinement policies are perhaps those presented in Chapter 4 of \cite{Behncke}, where it is shown that the optimal policy depends only on the shadow price difference between infected and susceptible. In the context of an optimal timing problem, \cite{Jacco} uses a continuous-time Markov chain model to study the value and optimal exercise decision of two (sequential) options: the option to intervene on the epidemic and, after intervention has started, the option to end the containment policies. Control-theoretic analysis are also presented in the recent \cite{KruseStrack} and \cite{Micloetal}. In \cite{Micloetal} the authors study a deterministic SIR model in which the social planner acts in order to keep the transmission rate below its natural level with the ultimate aim not to overwhelm the national health-care system.
The minimization of a social cost functional is instead considered in \cite{KruseStrack}, in the context of a deterministic SIR model over a finite time-horizon. The resulting control problem is tackled via the Pontryagin maximum principle and then a thorough numerical illustration is also provided.
Inspired by the deterministic problems of \cite{KruseStrack} and \cite{Micloetal} (see also \cite{Acemoglu, Alvarez}, among others), and motivated by the need of incorporating random fluctuations in the disease's transmission rate, in this paper we consider a stochastic control-theoretic version of the classical SIR model of Kermack and McKendrick \cite{KermackMcKendrick}. A population with finite size is divided in three different groups: healthy people that are susceptible to the disease, infected individuals, and people that have recovered (and are not anymore susceptible) or dead. However, differently to the classical SIR model, we suppose that disease's transmission rate is time-dependent and stochastic. In particular, it evolves as a general diffusion process whose trend can be adjusted by a social planner through policies like social restrictions and lockdowns. The randomness in the transmission rate is modeled by a Wiener process representing all those factors affecting the transmission rate and that are not under the direct control of the regulator. The social planner faces the trade-off between the expected social and economic costs (e.g., drops in the gross domestic product) arising from severe restrictions and the expected costs induced by the number of infections that -- if uncontrolled -- might strongly impact on the national health-care system and, more in general, on the social well-being. The social planner aims at minimizing those total expected costs up to the time at which a vaccination against the disease is discovered. In our model, such a time is also random and independent of the Wiener process.
We provide a complete theoretical study of our model by showing that the minimal cost function (value function) is a classical twice-continuously differentiable solution to its corresponding Hamilton-Jacobi-Bellman (HJB) equation, and by identifying an optimal control in feedback form\footnote{The aforementioned regularity of the value function is a remarkable result on its own. Indeed, although the state process is degenerate (as the Wiener process only affects the dynamics of the transmission rate), we can show that the so-called H\"ormander's condition (cf.\ \cite{Nualart}) holds true for any choice of the model's parameters. This then ensures the existence of a smooth probability transition density for the underlying (uncontrolled) stochastic process, which in turn enables to prove substantial regularity of the value function.}. From a technical point of view, the main difference between the models in \cite{Acemoglu, Alvarez, Erhan, KruseStrack, Micloetal} and ours, is that we deal with a stochastic version of the SIR model, instead of a deterministic one. As a matter of fact, in the aforementioned works the transmission rate is a deterministic control variable, while it is a controlled stochastic state variable in our paper. Moreover, our formulation is also different from that of other stochastic SIR models where the random transmission rate is chosen in such a way that only the levels of infected and susceptible people become affected by noise, with the transmission rate itself not being a state variable (see, e.g., \cite{Jiang,Tornatore} and references therein). To the best of our knowledge, ours is the first work considering the transmission rate as a diffusive stochastic state variable and providing the complete theoretical analysis of the resulting control problem.
{{In addition to its theoretical value,}} the determination of an optimal control in feedback form allows us to perform numerical experiments aiming at showing some implications of our model. For the numerical analysis we specialize the dynamics of the transmission rate, that we take to be mean-reverting and bounded between $0$ and some $\gamma>0$ (cf.\ \eqref{eq:ZOUbis}). In this case study, the containment policies employed by the social planner have the effect of modifying the long-run mean of the transmission rate, towards which the process converges at an exponential rate. Moreover, we take a separable social cost function (cf.\ \eqref{costexample}). This is quadratic both in the regulator's effort and in the percentage of infected people.
An interesting effect which is in fact common to all our numerical experiments is that the optimal lockdown policy is characterized by three distinct periods. In the first phase it is optimal to let the epidemic freely evolve, then the social restrictions should be stringent, and finally should be gradually relaxed in a third period. We also investigate which is the effect of the maximal level $L$ of allowed containment measures (i.e., the lockdown policy can take values in $[0,L]$) on the final percentage of recovered, which in fact turns out to be decreasing with respect to $L$. This then suggests that the case $L=1$ -- which leads in a shorter period to the definitive containment of the disease with the smallest percentage of final recovered -- might be thought of as optimal in the trade-off between social costs and final number of recovered.
We observe that if the epidemic spread is left uncontrolled, then its reproduction number $(\mathcal{R}_t)_t$ fluctuates around $1.8$ and the final percentage of recovered (i.e.\ the total percentage of infected during the disease) is approximately $72\%$ of the society after circa 7 months (in all our simulations the initial infected were $1\%$ of the population). On the other hand, when $L=1$, under the optimal policy we have a relative reduction of circa $30\%$ of the total percentage of recovered individuals, and the reproduction number drops below $0.6$ in the period of severe lockdown (circa 60 days). Moreover, the optimal containment is such that the so-called ``herd immunity'' is reached as the product $\mathcal{R}_tS_t$ (reproduction number $\times$ percentage of susceptible) becomes strictly smaller than the critical level of one, even if $\mathcal{R}_t$ oscillates at around $1.7$ in the last more relaxed phase of lockdown. {{Finally, we observe that an increase of the fluctuations of the transmission rate $\beta$ have the effect of anticipating the beginning of the lockdown policies, of diluting the actions over a longer period, and of keeping a larger level of containment in the long run. This can be explained by thinking that an higher uncertainty in the transmission rate induces the policymaker to act earlier and over a longer period in order to prevent positive larger shocks of $\beta$.}}
The rest of the paper is organized as follows. In Section \ref{sec:SIR} we set up the model and the social planner problem. In Section \ref{sec:mainresult} we develop the control-theoretic analysis and provide the regularity of the minimal cost function and an optimal control in feedback form. In Section \ref{sec:numerics} we present our numerical examples, while concluding remarks are made in Section \ref{sec:concl}. Finally, Appendix \ref{sec:app} collects the proof of some technical results needed in Section \ref{sec:mainresult}.
\section{Problem Formulation} \label{sec:setting}
\subsection{The Stochastic Controlled SIR Model} \label{sec:SIR}
We model the spread of the infection by relying on a generalization of the classical SIR model that dates back to the work by Kermack and McKendrick \cite{KermackMcKendrick}. The society has population $N$ and it consists of three different groups. The first group is formed by those people who are healthy, but susceptible to the disease; the second group contains those who are infected, while the last cohort consists of those who are recovered or dead. In line with the classical SIR model, we assume that, once recovered, an individual stays healthy for ever. We denote by $S_t$ the percentage of individuals who are susceptible at time $t\geq0$, by $I_t$ the percentage of infected, and by $R_t$ the fraction of recovered or dead. Clearly, $S_t + I_t + R_t=1$ for all $t\geq0$.
The fraction of infected people grows at a rate which is proportional to the fraction of society that it is still susceptible to the disease. In particular, letting $\beta_t$ be the instantaneous transmission rate of the disease, during an infinitesimal interval of time $\mathrm{d} t$, each infected individual generates $\beta_t S_t$ new infected individuals. It thus follows that the percentage of healthy individuals that get infected within $\mathrm{d} t$ units of time is $I_t \beta_t S_t.$
Notice that the instantaneous transmission rate $\beta_t$ measures the disease's rate of infection, as well as the average number of contacts per person per unit of time. In this regard, $\beta_t$ can thus be influenced by a social planner via policies that effectively cap social interaction, such as social distancing and lockdown.
During an infinitesimal interval of time $\mathrm{d} t$, the fraction of infected is reduced by $\alpha I_t \mathrm{d} t$, since infected individuals either recover from the disease or die because of it at rate $\alpha>0$.
According to the previous considerations, the dynamics of $S_t$ and $I_t$ can be thus written as \begin{equation} \label{eq:S} \mathrm{d} S_t = - \beta_t S_t I_t \mathrm{d} t, \quad t>0, \qquad S_0= x, \end{equation} and \begin{equation} \label{eq:I} \mathrm{d} I_t = \big(\beta_t S_t I_t - \alpha I_t\big) \mathrm{d} t, \quad t>0, \qquad I_0= y, \end{equation} where $(x,y) \in (0,1)^2$ are given initial values such that\footnote{The choice of considering $x+y<1$ -- i.e.\ of having an initial strictly positive percentage of recovered -- is only done in order to deal with an open set in the subsequent mathematical formulation of the problem. As a matter of fact, such a condition is not restrictive from the technical point of view as our results still apply if $x+y<\ell$, for some $\ell>1$, thus covering the case $x+y=1$ as well.} $x+y \in (0,1)$.
Notice that for any $t\geq0$ and for any choice of $(\beta_t)_t$ we can write \begin{equation} \label{eq:SolSI} S_t = x e^{-\int_0^t \beta_u I_u \mathrm{d} u} \qquad \text{and} \qquad I_t = y e^{-\alpha t + \int_0^t \beta_u S_u \mathrm{d} u}, \end{equation} and therefore $S_t>0$ and $I_t>0$ for all $t\geq0$. Moreover, summing up \eqref{eq:S} and \eqref{eq:I} we have $\mathrm{d}(S_t + I_t) = -\alpha I_t \, \mathrm{d} t <0$ for all $t>0$, which then implies that $S_t + I_t < 1$ for all $t\geq0$.
We depart from the classical SIR model by assuming that the transmission rate $\beta_t$ is time-varying, stochastic, and may be controlled. More precisely, we let $(\Omega, \mathcal{F}, \mathbb{F}:=(\mathcal{F}_t)_t, \P)$ be a complete filtered probability space with filtration $\mathbb{F}$ satisfying the usual conditions, and we define on that a one-dimensional Brownian motion $(W_t)_t$. For a given and fixed $L\geq 0$, and for any $(\xi_t)_t$ belonging to $$\mathcal{A}:=\big\{\xi:\, \Omega \times [0,\infty) \to [0,L],\,\, (\xi_t)_t\,\, \mathbb{F}-\text{progressively measurable}\big\},$$ we assume that the transmission rate evolves according to the stochastic differential equation \begin{equation} \label{eq:Z} \mathrm{d} \beta_t = b(\beta_t,\xi_t) \mathrm{d} t + \sigma(\beta_t)\mathrm{d} W_t, \quad t>0, \qquad \beta_0= z >0. \end{equation} The process $(\xi_t)_t$ influences the trend of the transmission rate and it should be interpreted as any effort devoted by the social planner to the decrease of the transmission rate. In this sense, $\xi=0$ corresponds to the case of no effort done to decrease the disease, whereas the case $\xi=L$ corresponds to the maximal effort. To fix ideas, $\xi_t$ may represent a percentage of social/working lockdown at time $t$ and $L$ corresponds to the maximal implementable value of such lockdown (e.g. $60\%$, etc.). On the other hand, the Brownian motion $(W_t)_t$ models any shock affecting the transmission rate and which is not under the control of the social planner.
Regarding the dynamics of $(\beta_t)_t$ we make the following \textbf{standing assumption}. {{ \begin{assumption} \label{assZ} \hspace{10cm} \begin{itemize} \item[(i)] For every $\xi\in\mathcal{A}$, there exists a unique strong solution to \eqref{eq:Z} and it lies in an open interval $\mathcal{I}\subseteq(0,\infty)$.
\item[(ii)] $b:\mathcal{I}\times [0,L] \to \R$ is bounded, infinitely many times continuously differentiable with respect to its first argument, and has bounded derivatives of any order; that is, there exists $K_b>0$ such that
$$\sup_{n \in \mathbb{N}} \sup_{(z,\xi) \in \mathcal{I}\times [0,L]}\big|\frac{\partial^n}{\partial z^n}b(z,\xi)\big| \leq K_b.$$
\item[(iii)] $\sigma:\mathcal{I} \to (0,\infty)$ is bounded, infinitely many times continuously differentiable with respect to its first argument, and has bounded derivatives of any order; that is, there exists $K_{\sigma}>0$ such that
$$\sup_{n \in \mathbb{N}} \sup_{z \in \mathcal{I}}|\sigma^{(n)}(z)| \leq K_{\sigma}.$$
\end{itemize} \end{assumption} }}
A reasonable dynamics of the transmission rate $(\beta_t)_t$ is the mean-reverting \begin{equation} \label{eq:ZOU} \mathrm{d} \beta_t = \vartheta\Big(\widehat{\beta}\big(L-\xi_t\big) - \beta_t\Big) \mathrm{d} t + \sigma\beta_t(\gamma - \beta_t)\mathrm{d} W_t, \quad t>0, \qquad \beta_0= z \in (0,\gamma), \end{equation} for some $\vartheta, \gamma, \sigma>0$, $\widehat{\beta} \in (0,\gamma)$. In this case, it can be shown that $0$ and $\gamma$ are unattainable by the diffusion $(\beta_t)_t$, which then takes values in the interval $\mathcal{I}=(0,\gamma)$ for any $t\geq 0$. The level $\widehat{\beta}$ can be seen as the natural transmission rate of the disease, towards which the transmission rate reverts at rate $\vartheta$ when $\xi\equiv0$. Finally, the level $\gamma$ is the maximal possible transmission rate of the disease, and $\sigma$ is a measure of the fluctuations of $(\beta_t)_t$ around $\widehat{\beta}$. We will employ this dynamics in our numerical illustrations (cf.\ Section \ref{sec:numerics} below). {{Notice that dynamics \eqref{eq:ZOU} fulfills all the requirements of Assumption \ref{assZ}; this is shown, for the sake of completeness, in Proposition \ref{prop:betadyn} in Appendix \ref{sec:app}. Moreover, if $\xi_t \equiv L$, then the transmission rate defined through \eqref{eq:ZOU} reaches $0$ asymptotically, as its drift is negative and its diffusion coefficient stays bounded. Hence, under the maximal lockdown policy, the disease is asymptotically eradicated.}}
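To make the effect of the control on \eqref{eq:ZOU} concrete, the following minimal Python sketch generates an Euler--Maruyama path of $(\beta_t)_t$ for a constant effort level $\xi\in[0,L]$; the parameter values are illustrative placeholders only and are not the calibration used in Section \ref{sec:numerics}.
\begin{verbatim}
import numpy as np

def simulate_beta(z0=0.25, theta=10.0, beta_hat=0.2, gamma=0.6, sigma=0.15,
                  L=1.0, xi=0.0, T=300.0, dt=0.01, seed=0):
    """Euler-Maruyama path of the transmission rate under a constant effort xi."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    beta = np.empty(n + 1)
    beta[0] = z0
    for k in range(n):
        drift = theta * (beta_hat * (L - xi) - beta[k])  # reversion to beta_hat*(L-xi)
        diffusion = sigma * beta[k] * (gamma - beta[k])
        dW = rng.normal(0.0, np.sqrt(dt))
        beta[k + 1] = beta[k] + drift * dt + diffusion * dW
        # numerical safeguard: the exact diffusion stays in (0, gamma), but the
        # Euler discretization may overshoot, so we clip the path back inside
        beta[k + 1] = min(max(beta[k + 1], 1e-8), gamma - 1e-8)
    return beta
\end{verbatim}
Raising $\xi$ lowers the reversion level $\widehat{\beta}(L-\xi)$ of the simulated paths, which is precisely the channel through which lockdown policies act on the transmission rate in our model.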
{{ \begin{remark} \label{rem:micromodel} Uncertainty comes into our model through the transmission rate. A fully probabilistic setting, in which the transitions from $S$ to $I$ to $R$ are driven by Poisson processes, can be formulated by following the discussion of Example 2 in \cite{Greenwood-Gordillo} or Section 3 in \cite{Allen2017} (see also the recent \cite{Donsimoni}). We describe informally such a model in the sequel.
Assume that the fraction of susceptible and infected $(S_t,I_t)_t$ follow a continuous-time Markov chain with state space $\mathcal{S}:=\{(s,i) \in \{0,\frac{1}{N},\dots,1\}^2: i + s < 1\}$, where $N$ denotes the fixed population's size. For a small time interval $\Delta t$ let
$$p_{(s,i), (s+k,i+j)}(\Delta t) := \P\big( (S_{t+\Delta t}, I_{t+\Delta t}) = (s+k,i+j) \big| (S_{t}, I_{t}) = (s,i)\big),$$ and assume that \begin{equation*} p_{(s,i), (s+k,i+j)}(\Delta t)=\left\{ \begin{array}{lr} \displaystyle \beta_t I_t S_t \Delta t + o(\Delta t)\,\,\qquad \qquad \quad \,\,\,\,\, \mbox{if $(k,j)=(-\frac{1}{N},\frac{1}{N})$}\\[+14pt] \displaystyle \alpha I_t \Delta t + o(\Delta t) \,\,\qquad \qquad \qquad \,\,\,\,\,\,\, \mbox{if $(k,j)=(0,-\frac{1}{N})$}\\[+14pt] \displaystyle \big(1 - \alpha I_t - \beta_t I_t S_t\big)\Delta t + o(\Delta t) \,\,\,\,\,\, \mbox{if $(k,j)=(0,0)$}\\[+14pt] \displaystyle o(\Delta t) \,\,\qquad \qquad \qquad \qquad \qquad\,\,\,\,\,\,\, \mbox{otherwise}. \end{array} \right. \end{equation*} It thus follows that the increments $\Delta S_t := S_{t+\Delta t} - S_{t}$ and $\Delta I_t := I_{t+\Delta t} - I_{t}$ can be written as \begin{equation} \label{eq:DeltaS} \Delta S_t = - \beta_t I_t S_t \Delta t + \Delta N^{(1)}_t \end{equation} and \begin{equation} \label{eq:DeltaI} \Delta I_t = \Big(\beta_t I_t S_t - \alpha I_t\Big) \Delta t - \Delta N^{(1)}_t + \Delta N^{(2)}_t, \end{equation} where $\Delta N^{(1)}_t$ and $\Delta N^{(2)}_t$ are conditionally centered Poisson increments with zero mean and conditional variances $\beta_t I_t S_t \Delta t$ and $\alpha I_t \Delta t$, respectively.
On the other hand, let $\Delta t$ be such that $\Delta t = \sum_{i=1}^n \Delta t_i$, where $\Delta t_i = t_i - t_{i-1}$, $i=1,\dots,n$, $t_0=t$ and $t_n=t + \Delta t$. Then, the increment of the transmission rate (in absence of any social planner's intervention) can be written as $$\Delta \beta_t = \sum_{i=1}^n \Delta \beta_{t_i},$$ where $\Delta \beta_t:=\beta_{t + \Delta t} - \beta_t$ and $\Delta \beta_{t_i}= \beta_{t_i} - \beta_{t_{i-1}}$. If $\Delta t_i$ are sufficiently small, we can reasonably argue that the random variables $\Delta \beta_{t_i}$ on the interval $\Delta t$ are independent and identically distributed. For $n$ sufficiently large, the Central Limit Theorem implies that $\Delta \beta_t$ has an approximate Gaussian distribution. We assume that such a mean is $b(\beta_t,0) \Delta t$ and the variance $\sigma^2(\beta_t) \Delta t$, so that \begin{equation} \label{eq:Deltabeta} \Delta \beta_t = b(\beta_t,0) \Delta t + \sigma(\beta_t) \Delta W_t, \end{equation} for a standard Gaussian increment $\Delta W_t$.
For $\Delta t$ sufficiently small, the previous \eqref{eq:DeltaS}, \eqref{eq:DeltaI} and \eqref{eq:Deltabeta} define a fully probabilistic model in which uncertainty in $S$ and $I$ is driven by jump processes and, indirectly, through the diffusive transmission rate $\beta$. Then, allowing for a governmental control of $\beta$ as in \eqref{eq:ZOU} would result into an interesting stochastic control problem with jump-diffusive dynamics that we leave for future research. \end{remark}
\begin{remark} \label{rem:impulse} Another modeling feature that needs some discussion regards the nature of the control rule in \eqref{eq:ZOU}. In our formulation, the policymaker adjusts $\xi$ continuously over time with the aim of decreasing the trend of the transmission rate. However, motivated by the real-world strategies employed during the Covid-19 crisis, one can very well imagine a model where regulatory constraints are introduced once the reproduction number $\mathcal{R}_t =\beta_t/\alpha$ becomes larger than a certain value, say $\mathcal{R}^{\star}$. Within this setting, a natural question would be: which is the optimal $\mathcal{R}^{\star}$ and the optimal size of interventions? A possible answer to this question could be found by proposing a model where the policymaker instantaneously reduces the level of $\beta$ via lockdown policies and faces proportional and/or fixed costs for its actions. This would gives rise to a singular or impulsive stochastic control problem; see \cite{AlvarezL}, \cite{Ferrari} and \cite{Belak} and references therein. Given the underlying multi-dimensional setting, we expect that the optimal trigger level $\mathcal{R}^{\star}$ would be a function of the current values of $(S_t,I_t)$. However, the proof of such a conjecture would require the thorough study of a complex (non convex) three-dimensional degenerate singular/impulse stochastic control problem that clearly requires techniques different from those employed in this work. \end{remark} }}
\subsection{The Social Planner Problem} \label{sec:PB}
The epidemic generates social costs, that we assume to be increasing with respect to the fraction of the population that is infected. These costs might arise because of lost gross domestic product (GDP) due to inability of working, because of an overstress of the national health-care system etc. The social planner thus employs policies $(\xi_t)_t$ in the form, e.g., of social distancing or lockdown in order to adjust the growth rate of the transmission rate $\beta$, with the aim of effectively flattening the curve of the infected percentage of the society. Such actions however come with a cost, which increases with the amplitude of the effort. Assuming that a vaccination against the disease is discovered at a random time $\tau$ exponentially distributed with parameter $\lambda_o>0$ and independent of $(W_t)_t$\footnote{We are implicitly requiring that the underlying probability space $(\Omega, \mathcal{F}, \mathbb{F}:=(\mathcal{F}_t)_t, \P)$ is rich enough to accommodate also such an exponential time $\tau$.} (see also Remark \ref{rem:tau} below), the social planner aims at solving \begin{equation} \label{eq:Probl0} \inf_{\xi \in \mathcal{A}}\E\bigg[\int_0^{\tau} {e^{-\delta t}} C\big(I_t, \xi_t\big) \mathrm{d} t\bigg]. \end{equation} Here, $\delta\geq 0$ measures the social planner's time preferences, and $C:[0,1]\times [0,L] \to [0,\infty)$ is a running cost function measuring the negative impact of the disease on the public health as well as the economic/social costs induced by lockdown policies. The following requirements are satisfied by $C$. \begin{assumption} \label{assC} \hspace{10cm} \begin{itemize} \item[(i)] $(y,\xi) \mapsto C(y,\xi)$ is convex and continuous on $[0,1]\times[0,L]$. \item[(ii)] For any $y\in [0,1]$ we have that $\xi \mapsto C(y,\xi)$ is nondecreasing. \item[(iii)] For any $\xi \in [0,L]$ we have that $y \mapsto C(y,\xi)$ is nondecreasing. \item[(iv)] There exists $K>0$ such that for any $\xi\in[0,L]$ we have that
$$|C(y,\xi) - C(y',\xi)| \leq K|y-y'|, \quad \forall (y,y')\in [0,1]^2.$$
\item[(v)] $y\mapsto C(y,\xi)$ is semiconcave\footnote{{{A function $f:\mathbb{R}^n \to \R$, $n\geq 1$, is called \emph{semiconvex} if there exists a constant $K\geq 0$ such that $f(x) + \frac{K}{2}|x|^2$ is convex; it is \emph{semiconcave} if $-f$ is semiconvex.} }} on $[0,1]$, uniformly with respect to $\xi\in[0,L]$; that is, there exists $K>0$ such that for any $\xi\in[0,L]$ and any $\mu \in [0,1]$ one has
$$\mu C(y,\xi) + (1-\mu) C(y',\xi) - C\big(\mu y + (1-\mu) y',\xi\big) \leq K\mu(1-\mu)|y-y'|^2, \quad \forall (y,y')\in [0,1]^2.$$ \end{itemize} \end{assumption} Without loss of generality, we also take $C(0,0)=0$. Convexity of $y\mapsto C(y,\xi)$ captures the fact that the social costs from the disease might be higher if a large share of the population is infected since, for example, the social health-care system is overwhelmed. The fact that $\xi\mapsto C(y,\xi)$ is convex describes that marginal costs of actions are increasing because, e.g., an additional lockdown policy might have a larger impact on an already stressed society. Finally, the Lipschitz and semiconcavity property of $C(\cdot,\xi)$ are technical requirements that will be important in the next section.
An application of Fubini's theorem, employing the independence of $\tau$ and $(W_t)_t$, allows us to rewrite the problem defined in \eqref{eq:Probl0} as \begin{equation} \label{eq:Probl} \inf_{\xi \in \mathcal{A}}\E\bigg[\int_0^{\tau} {e^{-\delta t}} C\big(I_t, \xi_t\big) \mathrm{d} t\bigg] = \inf_{\xi \in \mathcal{A}}\E\bigg[\int_0^{\infty} e^{-\lambda t} C\big(I_t, \xi_t\big) \mathrm{d} t\bigg], \end{equation} where $\lambda:=\lambda_o+\delta$.
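For the reader's convenience, we spell out this identity. Since $\P(\tau>t)=e^{-\lambda_o t}$, $C$ is nonnegative, and $\tau$ is independent of the noise driving $(I_t,\xi_t)_t$, Tonelli's theorem yields \begin{equation*} \E\bigg[\int_0^{\tau} e^{-\delta t} C\big(I_t, \xi_t\big) \mathrm{d} t\bigg] = \int_0^{\infty} e^{-\delta t}\, \E\Big[\mathbf{1}_{\{\tau>t\}}\, C\big(I_t, \xi_t\big)\Big]\, \mathrm{d} t = \int_0^{\infty} e^{-(\delta+\lambda_o) t}\, \E\Big[C\big(I_t, \xi_t\big)\Big]\, \mathrm{d} t = \E\bigg[\int_0^{\infty} e^{-\lambda t} C\big(I_t, \xi_t\big) \mathrm{d} t\bigg]. \end{equation*}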
{{ \begin{remark} \label{rem:tau} The assumption that a vaccination against the disease is discovered at an exponential random time $\tau$, independent of $(W_t)_t$, has the technical important effect of leading to a time-homogeneous social planner problem (cf.\ \eqref{eq:Probl} above). From a modeling point of view, such a requirement is clearly debatable, as it presupposes that the decision maker does not take into account the scientific progress in the epidemic's treatment. In order to take care of this, we now propose an alternative more realistic formulation which, however, comes at the cost of substantially increasing the mathematical complexity of the social planner problem.
Suppose that the social planner has full information about the current technological level $Q_t$ achieved in the disease's treatment and assume, for example, that this evolves according to the SDE: $$\mathrm{d} Q_t = \mu(Q_t) \mathrm{d} t + \eta(Q_t) \mathrm{d} B_t, \quad Q_0=q \in \R_+,$$ for suitable $\mu$ and $\eta$, and for a standard Brownian motion $(B_t)_t$ independent of $(W_t)_t$. The process $(B_t)_t$ models all the exogenous shocks affecting the technological achievements (e.g., new scientific discoveries in related fields), while $\mu$ measures the instantaneous trend of the research. Define then a continuous-time Markov chain $(M_t)_t$ with two states, $0$ and $1$, where $0$ means that the vaccination is not available and $1$ that a treatment has been instead found. We assume that $1$ is an absorbing state and that the Markov chain has transition rate from state $0$ to state $1$ given by $(\lambda(t, Q_t))_t$. Here, $\lambda: \R_+ \times \R_+ \mapsto \R_+$ is such that $\Lambda_t:= \int_0^t \lambda(s,Q_s) \mathrm{d} s < \infty$, a.s.\ for any $t \geq 0$, and it is nondecreasing in its second argument. This latter condition clearly means that the larger the technological level is, the faster the disease is treated.
Within this setting, the problem can then still be written as $$\inf_{\xi \in \mathcal{A}}\E\bigg[\int_0^{\tau} e^{-\delta t} C\big(I_t, \xi_t\big) \mathrm{d} t\bigg],$$ where $(I_t)_t$ and $(\xi_t)_t$ are as defined above in this section, but $$\tau:=\inf\{t \geq 0:\, M_t =1 \}.$$ The independence of $Q$ with respect to $W$ then leads to the equivalent formulation $$V(x,y,z,q):=\inf_{\xi \in \mathcal{A}}\E\bigg[\int_0^{\infty} e^{-\delta t - \Lambda_t } C\big(I_t, \xi_t\big) \mathrm{d} t \bigg],$$ which defines a four-dimensional stochastic control problem. Clearly, this problem is much more challenging than \eqref{eq:Probl} and its analysis, requiring different techniques and results, is left for future research.
Another interesting future work might concern an extension of the previous model in which the social planner can also increase the technological level $Q$ by supporting the research of a vaccination. Assuming that such an investment comes at proportional cost, this problem can be modeled in terms of an intricate stochastic control problem where the transition rate $\lambda$ of the Markov chain is controlled through a singular control. \end{remark} }}
In order to tackle Problem \eqref{eq:Probl} with techniques from dynamic programming, it is convenient to keep track of the initial values of $(S_t,I_t,\beta_t)_t$. We therefore set $$\mathcal{O}:=\big\{(x,y,z)\in \R^3:\,\, (x,y)\in (0,1)^2,\,\, x+y < 1,\,\, z\in \mathcal{I}\big\},$$ and, when needed, we stress the dependency of $(S_t,I_t,\beta_t)$ with respect to $(x,y,z)\in \mathcal{O}$ and $\xi\in\mathcal{A}$ by writing $(S^{x,y,z;\xi}_t,I^{x,y,z;\xi}_t,\beta^{z;\xi}_t)$. Indeed, due to \eqref{eq:SolSI} and the autonomous nature of \eqref{eq:Z}, we have that $S_t$ and $I_t$ depend on $(x,y,z)$ and on $\xi$ through $\beta_t$, while $\beta_t$ depends only on $z$ and directly on $\xi$. We shall also simply set $(S^{x,y,z}_t,I^{x,y,z}_t,\beta^{z}_t):=(S^{x,y,z;0}_t,I^{x,y,z;0}_t,\beta^{z;0}_t)$ to denote the solutions to \eqref{eq:S}, \eqref{eq:I}, and \eqref{eq:Z} when $\xi \equiv 0$.
Then, for any $(x,y,z)\in \mathcal{O}$, we introduce the problem's value function \begin{equation} \label{eq:V} V(x,y,z) := \inf_{\xi \in \mathcal{A}}\E\bigg[\int_0^{\infty} e^{-\lambda t} C\big(I^{x,y,z;\xi}_t, \xi_t\big) \mathrm{d} t\bigg]. \end{equation} The latter is well defined given that $C$ is nonnegative. In the next section we will show that $V$ solves the corresponding dynamic programming equation in the classical sense, and we also provide an optimal control in feedback form.
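Before turning to the theoretical analysis, we note that, for any fixed control, the expected cost in \eqref{eq:V} can be approximated by plain Monte Carlo combined with an Euler--Maruyama discretization of \eqref{eq:S}, \eqref{eq:I} and \eqref{eq:ZOU}. The following self-contained Python sketch does this for a constant lockdown level and an illustrative separable quadratic cost $C(y,\xi)=c_1y^2+c_2\xi^2$; all numerical values, and this specific cost, are placeholder assumptions and do not coincide with the calibration of \eqref{costexample} in Section \ref{sec:numerics}.
\begin{verbatim}
import numpy as np

def discounted_cost(xi=0.0, x0=0.99, y0=0.01, z0=0.25,
                    alpha=0.1, theta=10.0, beta_hat=0.2, gamma=0.6,
                    sigma=0.15, L=1.0, lam=0.05, c1=1.0, c2=0.5,
                    T=400.0, dt=0.05, n_paths=200, seed=0):
    """Monte Carlo estimate of E[int_0^infty e^{-lam t} C(I_t, xi) dt] for a
    constant control xi, horizon truncated at T, C(y, xi) = c1*y^2 + c2*xi^2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    disc = np.exp(-lam * np.arange(n) * dt) * dt    # discretized discount weights
    costs = np.empty(n_paths)
    for p in range(n_paths):
        S, I, beta = x0, y0, z0
        running = np.empty(n)
        for k in range(n):
            running[k] = c1 * I**2 + c2 * xi**2     # assumed running cost
            S_new = S - beta * S * I * dt           # Euler step for (S, I)
            I_new = I + (beta * S * I - alpha * I) * dt
            dW = rng.normal(0.0, np.sqrt(dt))
            beta += theta * (beta_hat * (L - xi) - beta) * dt \
                    + sigma * beta * (gamma - beta) * dW
            beta = min(max(beta, 1e-8), gamma - 1e-8)
            S, I = S_new, I_new
        costs[p] = np.dot(disc, running)
    return costs.mean(), costs.std() / np.sqrt(n_paths)
\end{verbatim}
Comparing the output for, e.g., $\xi\in\{0,\,0.5,\,1\}$ already displays the trade-off encoded in \eqref{eq:V}; the optimal policy constructed below is, of course, a time- and state-dependent feedback rather than a constant.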
\section{The Solution to the Social Planner Problem} \label{sec:mainresult}
We introduce the differential operator $\mathcal{L}$ acting on functions belonging to the class $C^{1,1,2}(\R^3)$:\\ \begin{equation} \label{eq:L} \big(\mathcal{L}\varphi\big)(x,y,z) := xyz\big(\varphi_y - \varphi_x\big)(x,y,z) - \alpha y \varphi_y(x,y,z) + \frac{1}{2}\sigma^2(z) \varphi_{zz}(x,y,z). \end{equation}\\ Next, for any $(y,z,p) \in (0,1) \times \mathcal{I} \times \R$, define\\ \begin{equation} \label{eq:Cstar} C^{\star}(y,z,p):= \inf_{\xi \in [0,L]}\Big(C(y,\xi) + b(z,\xi)p\Big), \end{equation}\\ which is continuous on $[0,1] \times \mathcal{I} \times \R$. Indeed, by Assumptions \ref{assZ}-(ii) and \ref{assC}-(iv), there exists a constant $\overline{K}>0$ such that \begin{align*}
& |C^{\star}(y',z',p') - C^{\star}(y,z,p)| \leq \sup_{\xi \in [0,L]}\Big(|C(y',\xi)-C(y,\xi)| + |b(z',\xi)-b(z,\xi)||p'| + |b(z,\xi)||p'-p| \Big) \nonumber \\
& \leq \overline{K}\big(|y'-y| + |z'-z||p'| + |p'-p|\big). \end{align*}
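As a simple illustration of \eqref{eq:Cstar} (the precise cost \eqref{costexample} used in the numerical part is specified in Section \ref{sec:numerics}; here we only take a cost of that separable quadratic type), consider $C(y,\xi)=c_1y^2+c_2\xi^2$ with $c_1,c_2>0$, together with the drift $b(z,\xi)=\vartheta\big(\widehat{\beta}(L-\xi)-z\big)$ of \eqref{eq:ZOU}. Minimizing the resulting strictly convex function of $\xi$ over $[0,L]$ gives \begin{equation*} C^{\star}(y,z,p) = c_1y^2 + \vartheta\big(\widehat{\beta}L - z\big)p + c_2\,\xi^{\star}(p)^2 - \vartheta\widehat{\beta}\,p\,\xi^{\star}(p), \qquad \xi^{\star}(p)=\min\Big\{L,\max\Big\{0,\tfrac{\vartheta\widehat{\beta}\,p}{2c_2}\Big\}\Big\}, \end{equation*} with $p$ playing the role of the derivative $V_z$ appearing in the Hamilton--Jacobi--Bellman equation below: the candidate feedback lockdown is inactive when $V_z\leqslant 0$, increases linearly in $V_z$, and is capped at the maximal level $L$.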
By the dynamic programming principle, we expect that $V$ should solve (in a suitable sense) the Hamilton-Jacobi-Bellman (HJB) equation \\ \begin{equation} \label{eq:HJB} \lambda v(x,y,z) = (\mathcal{L}v)(x,y,z) + C^{\star}(y,z,v_z(x,y,z)), \quad (x,y,z) \in \mathcal{O}. \end{equation}\\ In order to show that $V$ indeed solves \eqref{eq:HJB} in the classical sense, we start with the following important preliminary results. Their proofs are standard in the literature of stochastic control (see, e.g., \cite{Pham, YongZhou1999}), upon employing Assumptions \ref{assZ} and \ref{assC}. \begin{proposition} \label{prop:HJB-semiconc} There exists $K>0$ such that, for each $\textbf{q}:=(x,y,z),\,\textbf{q}':=(x',y',z') \in \mathcal{O}$ \begin{itemize} \item[(i)]
$0 \leq V(\textbf{q}) \leq K$ {and} $|V(\textbf{q})-V(\textbf{q}')| \leq K|\textbf{q} - \textbf{q}'|$; i.e., $V$ is bounded and Lipschitz continuous on $\mathcal{O}$; \item[(ii)] for any $\mu \in [0,1]$ and for some $K>0$
$$\mu V(\textbf{q}) + (1-\mu) V(\textbf{q}') - V\big(\mu \textbf{q} + (1-\mu) \textbf{q}'\big) \leq K\mu(1-\mu)|\textbf{q}-\textbf{q}'|^2;$$ i.e., $V$ is semiconcave on $\mathcal{O}$. \end{itemize}
Moreover, $V$ is a viscosity solution to the HJB equation \eqref{eq:HJB}. \end{proposition}
\begin{proof} The first claim of (i) above follows from the fact that $C$ is nonnegative and bounded on $[0,1]^2$; the second claim of (i) is due to Proposition 3.1 in \cite{YongZhou1999}, whose proof can be easily adapted to our stationary setting. Analogously, the semiconcavity property of (ii) can be obtained by arguing as in Proposition 4.5 of \cite{YongZhou1999}. Finally, Theorem 5.2 of \cite{YongZhou1999} (again, easily adapted to our stationary setting) or Proposition 4.3.2-(2) of \cite{Pham} lead to the viscosity property. \end{proof}
The semiconcavity of $V$, together with the fact that $V$ solves the HJB equation \eqref{eq:HJB} in the viscosity sense, yields the following directional regularity result. \begin{proposition} \label{prop:Vz} $V_z$ exists and is continuous on $\mathcal{O}$. \end{proposition} \begin{proof} Let $(\bar{x},\bar{y},\bar{z})\in\mathcal{O}$. By semiconcavity of $V$, the left and right derivatives of $V$ along the direction $z$ at $(\bar{x},\bar{y},\bar{z})$ exist; we denote them, respectively, by $V_z^-(\bar{x},\bar{y},\bar{z})$ and $V_z^+(\bar{x},\bar{y},\bar{z})$. Moreover, again by semiconcavity, we have the inequality $V_z^-(\bar{x},\bar{y},\bar{z})\geq V_z^+(\bar{x},\bar{y},\bar{z})$. Assume, by contradiction, that $V$ is not differentiable with respect to $z$ at $(\bar{x},\bar{y},\bar{z})$, that is, $V_z^-(\bar{x},\bar{y},\bar{z})> V_z^+(\bar{x},\bar{y},\bar{z})$. Then we can apply Lemma \ref{lemma:app} in the appendix (after a translation, to $W:=V(\,\cdot\,+(\bar{x},\bar{y},\bar{z}))-V(\bar{x},\bar{y},\bar{z})$) and find a sequence of functions $(\tilde{\varphi}^n)_n\subset C^{2}(\mathcal{O})$ such that \begin{equation} \label{eq:varphinb} \begin{cases} \tilde{\varphi}^n(\bar{x},\bar{y},\bar{z})=V(\bar{x},\bar{y},\bar{z}),\\
\tilde{\varphi}^n\geq V \ \mbox{in a neighborhood of} \ (\bar{x},\bar{y},\bar{z}),\\ |D\tilde{\varphi}^n(\bar{x},\bar{y},\bar{z})|\leq \tilde{L}<\infty,\\ \tilde{\varphi}^n_{zz}(\bar{x},\bar{y},\bar{z})\stackrel{n\to\infty}{\longrightarrow}-\infty. \end{cases} \end{equation} Then, the viscosity subsolution property of $V$ (cf.\ Proposition \ref{prop:HJB-semiconc}) yields $$ \lambda V(\bar{x},\bar{y},\bar{z})\leq (\mathcal{L}\tilde{\varphi}^n)(\bar{x},\bar{y},\bar{z})+ C^\star(\bar{y},\bar{z},\tilde{\varphi}^n_z(\bar{x},\bar{y},\bar{z})). $$ Taking the limit as $n\to \infty$ and using \eqref{eq:varphinb} we get a contradiction. We have thus proved that $V_z$ exists at each arbitrary $(x,y,z)\in\mathcal{O}$.
Now we show that $V_z$ is continuous. Take a sequence $(\bm q^n)_n\subset\mathcal{O}$ such that $\bm q^n\to \bm q\in\mathcal{O}$, and let $\bm\eta^n=(\eta^n_x,\eta^n_y, \eta^n_z)\in D^+V(\bm q^n)$, the latter being nonempty due to the semiconcavity of $V$. Since $V_z$ exists at each point of $\mathcal{O}$, we have $\eta_z^n=V_z(\bm q^n)$. Since $V$ is semiconcave, the supergradient $D^+V$ is locally bounded as a set-valued map, and therefore there exists a subsequence $(\bm q^{n_k})_k$ such that $\bm\eta^{n_k}\to\bm\eta=(\eta_x,\eta_y, \eta_z)$. By \cite[Prop.\ 3.3.4-(a)]{CS}, we have $\bm\eta\in D^+V(\bm q)$, and again, since $V_z$ exists, we have $\eta_z=V_z(\bm q)$. Hence, we have proved that from any sequence $(\bm q^n)_n\subset\mathcal{O}$ converging to $\bm q$, we can extract a subsequence $(\bm q^{n_k})_k\subset\mathcal{O}$ such that $V_z(\bm q^{n_k})\to V_z(\bm q)$. By usual arguments on subsequences, the claim follows. \end{proof}
We can now prove the main theoretical result of our paper, which ensures that $V$ is actually a classical solution to the HJB equation \eqref{eq:HJB}. In turn, this provides a way to construct an optimal control in feedback form.\footnote{Notice that, in order to define a candidate optimal control in feedback form, one actually only needs the existence of the derivative $V_z$. For instance, in the deterministic problem tackled in \cite{FedericoTacconi} only the regularity of the directional derivative is exploited to prove a verification theorem in the context of viscosity solutions. However, here we can improve the regularity of $V$ due to the stochastic nature of our problem, and therefore prove a classical verification theorem.} \begin{theorem} \label{thm:main} The following holds: \begin{itemize} \item[(i)] $V \in C^{2}(\mathcal{O})$ and solves the HJB equation \eqref{eq:HJB} in the classical sense. \item[(ii)] Let \begin{equation} \label{eq:OCrule} \widehat{\xi}(x,y,z):=\argmin_{\xi\in[0,L]}\Big(C(y,\xi) + \big(b(z,\xi)-b(z,0)\big)V_z(x,y,z)\Big), \ \ \ \ \ (x,y,z)\in \mathcal{O}. \end{equation} If the system of equations\footnote{Since $b$ is bounded, by the method of Girsanov's transformation, the system has a weak solution, which is also unique in law (see \cite[Ch.\,5,\,Propositions\,3.6 and 3.10]{KS} and also \cite[Ch.\,5,\,Remark\,3.7]{KS}). For the sake of brevity, we do not investigate further existence and uniqueness of strong solutions, even if this might be done by employing finer results (e.g., see the seminal paper \cite{V}).} \begin{equation} \label{system-OC} \begin{cases} \mathrm{d} S_t = - \beta_t S_t I_t \mathrm{d} t, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \qquad S_0= x,\\ \mathrm{d} I_t = \big(\beta_t S_t I_t - \alpha I_t\big) \mathrm{d} t, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ I_0= y,\\ \mathrm{d} \beta_t = b(\beta_t,\widehat\xi(S_t,I_t,\beta_t)) \mathrm{d} t + \sigma(\beta_t)\mathrm{d} W_t, \ \ \ \ \ \beta_0= z, \end{cases} \end{equation} admits a unique strong solution $(S^\star_t, I^\star_t, \beta^{\star}_t)_t$, then the control\\ \begin{equation} \label{eq:OC} \xi^{\star}_t:=\widehat\xi\big(S^\star_t, I^\star_t, \beta^{\star}_t\big), \end{equation}\\ is optimal for \eqref{eq:V} and $(\beta^{\star}_t)_t$ is the optimally controlled transmission rate; that is, $$V(x,y,z)=\E\bigg[\int_0^{\infty} e^{-\lambda t} C\big(I^{\star}_t, \xi^{\star}_t\big) \mathrm{d} t\bigg].$$ \end{itemize} \end{theorem}
\begin{proof} \emph{Proof of (i) - Step 1.} Recall \eqref{eq:Cstar} and define $F(x,y,z):=C^{\star}(y,z,V_z(x,y,z))$. Due to Proposition \ref{prop:Vz} and the continuity of $C^{\star}$ on $(0,1) \times \mathcal{I} \times \R$, we have that $F$ is continuous on $\mathcal{O}$. Moreover, since $C$ is bounded on $[0,1]\times[0,L]$, $V_z$ is bounded on $\mathcal{O}$ by Proposition \ref{prop:HJB-semiconc}-(i), and $b(\cdot,\xi)$ is bounded (cf.\ Assumption \ref{assZ}-(ii)), there exists $K>0$ such that \begin{equation} \label{eq:growF}
|F(x,y,z)| \leq K, \quad \forall (x,y,z) \in \mathcal{O}. \end{equation}
Set now \begin{equation} \label{eq:v} v(x,y,z):=\E\bigg[\int_0^{\infty}e^{-\lambda t} F\big(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t\big) \mathrm{d} t\bigg], \quad (x,y,z)\in \mathcal{O}. \end{equation}
Although not uniformly elliptic, the differential operator $\mathcal{L}$ defined in \eqref{eq:L} is hypoelliptic, meaning that the so-called H\"ormander's condition is satisfied (cf.\ the proof of Proposition \ref{prop:A1} in the appendix and equation \eqref{eq:Hormander} therein). In fact, by Proposition \ref{prop:A1} in the appendix, for any $\textbf{q}:=(x,y,z)\in \mathcal{O}$ the (uncontrolled) process $(\textbf{Q}^{\textbf{q}}_t)_t:=(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t)$ admits a transition density $p(t,\textbf{q},\cdot)$, $t>0$, which is absolutely continuous with respect to the Lebesgue measure in $\R^3$, infinitely many times differentiable, and satisfying the Gaussian estimates \eqref{eq:Gauss1} and \eqref{eq:Gauss2}. As a consequence, by Fubini's theorem we can write $$v(x,y,z)=\int_0^{\infty}e^{-\lambda t} \Big(\int_{\mathcal{O}}F\big(x', y', z'\big) p(t, x, y, z; x', y', z') \mathrm{d} x' \mathrm{d} y' \mathrm{d} z' \Big) \mathrm{d} t,$$ and recalling \eqref{eq:growF}, and applying the dominated convergence theorem, one shows that $v \in C^{2}(\mathcal{O})$.
For $(x,y,z) \in \mathcal{O}$, let now $\tau_n:=\inf\{t\geq0:\,|(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t)|\geq n\}$, $n\in \mathbb{N}$, and notice that the strong Markov property yields
$$e^{-\lambda (t \wedge \tau_n)}v(S^{x,y,z}_{t\wedge\tau_n},I^{x,y,z}_{t\wedge\tau_n},\beta^{z}_{t\wedge\tau_n}) + \int_0^{t\wedge\tau_n} e^{-\lambda u} F\big(S^{x,y,z}_u, I^{x,y,z}_u, \beta^z_u\big) \mathrm{d} u = \E\bigg[\int_0^{\infty}e^{-\lambda t} F\big(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t\big) \mathrm{d} t\,\Big|\, \mathcal{F}_{t\wedge\tau_n}\bigg].$$ Since $v \in C^{2}(\mathcal{O})$, we can apply It\^o's formula to the first addend on the left-hand side of the latter, take expectations, observe that the stochastic integral has zero mean (by definition of $\tau_n$ and the fact that $v_z$ is continuous), and finally find \begin{equation} \label{eq:eqv-1} \E\bigg[\int_0^{t \wedge \tau_n}e^{-\lambda u} \big(\mathcal{L}v + F - \lambda v\big)\big(S^{x,y,z}_u, I^{x,y,z}_u, \beta^z_u\big) \mathrm{d} u\bigg] + v(x,y,z) = \E\bigg[\int_0^{\infty}e^{-\lambda t} F\big(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t\big) \mathrm{d} t\bigg]; \end{equation} that is, by \eqref{eq:v}, $$\E\bigg[\int_0^{t \wedge \tau_n}e^{-\lambda u} \big(\mathcal{L}v + F - \lambda v\big)\big(S^{x,y,z}_u, I^{x,y,z}_u, \beta^z_u\big) \mathrm{d} u\bigg] = 0.$$ Dividing now both left and right-hand sides of the latter by $t$, invoking the (integral) mean-value theorem, letting $t\downarrow 0$, and using that $t\mapsto (S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t)$ is continuous, we find that $v$ is a classical solution to \begin{equation} \label{eq:eqv-2} \lambda \varphi = \mathcal{L}\varphi + F \quad \text{on} \quad \mathcal{O}. \end{equation}
\emph{Proof of (i) - Step 2.} Let $(x,y,z) \in \mathcal{O}$, and $(\mathcal{K}_n)_n$ be an increasing sequence of open bounded subsets of $\mathcal{O}$ such that $\bigcup_{n\in\mathbb{N}}\mathcal{K}_n=\mathcal{O}$. Defining the stopping time $$\rho_n:=\inf\{t\geq0:\,\big(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t\big) \notin \mathcal{K}_n\}, \ \ \ \ n\in \mathbb{N},$$ we set \begin{equation} \label{eq:defvhat} \widehat{v}_n(x,y,z):= \E\bigg[\int_0^{\rho_n} e^{-\lambda u} F\big(S^{x,y,z}_u, I^{x,y,z}_u, \beta^z_u\big) \mathrm{d} u + e^{-\lambda \rho_n}V\big(S^{x,y,z}_{\rho_n},I^{x,y,z}_{\rho_n},\beta^{z}_{\rho_n}\big)\bigg]. \end{equation} If $(x,y,z) \notin \mathcal{K}_n$, then $\widehat{v}_n(x,y,z)=V(x,y,z)$ as $\rho_n=0$ a.s. Take then $(x,y,z) \in \mathcal{K}_n$. By the same arguments as in Step 1 and considering that $V$ is continuous on $\mathcal{K}_n$, the function $\widehat{v}_n$ is a solution to \begin{equation} \label{eq:eqvhat-1} \lambda \varphi = \mathcal{L}\varphi + F, \quad \text{on} \quad \mathcal{K}_n, \qquad \varphi=V \quad \text{on} \quad \partial\mathcal{K}_n. \end{equation} Since also $V$ is a viscosity solution to the same equation and since uniqueness of viscosity solutions holds for such a problem (cf., e.g., \cite{CIL}), we have $\widehat{v}_n = V$ on $\overline{\mathcal{K}}_n$. Because $\rho_n \uparrow \infty$ as $n \uparrow \infty$ (the boundary of $\mathcal{O}$ being unattainable for $\big(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t\big)$), by taking limits as $n \uparrow \infty$ in \eqref{eq:defvhat} we find that $$V(x,y,z) = \lim_{n\uparrow\infty}\widehat{v}_n(x,y,z) = v(x,y,z), \quad (x,y,z) \in \mathcal{O},$$ where the last equality follows by dominated convergence upon recalling that $V$ is bounded. But then $V=v$ on $\mathcal{O}$, and therefore $V \in C^{2}(\mathcal{O})$ and solves \eqref{eq:eqv-2} by \emph{Step 1}. That is, $V$ is a classical solution to the HJB equation \eqref{eq:HJB}.
\emph{Proof of (ii).} The optimality of \eqref{eq:OC} follows by a standard verification theorem based on an application of It\^o's formula and the proved regularity of $V$ (see, e.g., Chapter 3.5 in \cite{Pham}). \end{proof}
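For completeness, here is a condensed sketch of the verification inequality behind (ii); it is only an outline under the standing assumptions (in particular, the boundedness of $V$, $V_z$ and $\sigma$, and a localization argument taking care of the fact that $V\in C^{2}(\mathcal{O})$ only on the open set $\mathcal{O}$), and we refer to \cite{Pham} for full details. For $(x,y,z)\in\mathcal{O}$, $\xi\in\mathcal{A}$ and $T>0$, writing $(S_t,I_t,\beta_t):=(S^{x,y,z;\xi}_t,I^{x,y,z;\xi}_t,\beta^{z;\xi}_t)$, It\^o's formula applied to $e^{-\lambda t}V(S_t,I_t,\beta_t)$ and the HJB equation \eqref{eq:HJB} (recall that $C^{\star}$ is an infimum over $\xi\in[0,L]$) give
\begin{align*}
\E\Big[e^{-\lambda T}V\big(S_T,I_T,\beta_T\big)\Big] &= V(x,y,z) + \E\bigg[\int_0^{T} e^{-\lambda t}\Big(\mathcal{L}V + \big(b(\beta_t,\xi_t)-b(\beta_t,0)\big)V_z - \lambda V\Big)\big(S_t,I_t,\beta_t\big)\,\mathrm{d} t\bigg]\\
&\geq V(x,y,z) - \E\bigg[\int_0^{T} e^{-\lambda t} C\big(I_t,\xi_t\big)\,\mathrm{d} t\bigg].
\end{align*}
Letting $T\uparrow\infty$ and using the boundedness of $V$ yields $V(x,y,z)\leq \E\big[\int_0^{\infty} e^{-\lambda t} C(I_t,\xi_t)\,\mathrm{d} t\big]$ for every $\xi\in\mathcal{A}$; for $\xi=\xi^{\star}$ the inequality above becomes an equality, since $\widehat{\xi}$ attains the infimum defining $C^{\star}$, and optimality follows.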
\section{A Case Study with Numerical Illustrations} \label{sec:numerics}
In this section we illustrate numerically the results of our model, with the aim of providing qualitative properties of the optimal containment policies in a case study.
We use a mean-reverting model for the dynamics of $\beta$, i.e. \begin{equation} \label{eq:ZOUbis} \mathrm{d} \beta_t = \vartheta\Big(\widehat{\beta}\big(L - \xi_t\big) - \beta_t\Big) \mathrm{d} t + \sigma\beta_t(\gamma - \beta_t)\mathrm{d} W_t, \quad t>0, \qquad \beta_0= z \in (0,\gamma), \end{equation} for some $L, \vartheta, \gamma, \sigma>0$, $\widehat{\beta} \in (0,\gamma)$. {{Notice that such a choice of the dynamics of $\beta$ fulfills all the requirements of Assumption \ref{assZ} (see Proposition \ref{prop:betadyn} in Appendix \ref{sec:app})}}. Moreover, we assume that the social planner has a quadratic cost function of the form \begin{equation} \label{costexample} C(y, \xi)=\left(\frac{y}{\bar{y}}\right)^2 +\frac{1}{2}\xi^2. \end{equation} {{The latter can be interpreted as a Taylor approximation of any smooth, convex, separable cost function with global minimum at $(0,0)$.}} In \eqref{costexample}, $\bar{y} \in (0,1)$ represents, e.g., the maximal percentage of infected people that the health-care system can handle.
Notice that in this case for any $(x,y,z)\in \mathcal{O}$ one has (cf.\ \eqref{eq:OCrule}) \begin{equation} \label{eq:OCrule-bis} \widehat{\xi}(x,y,z) = \begin{cases} L, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{if}\,\, V_z(x,y,z) > \frac{L}{\vartheta \widehat{\beta}},\\ \vartheta \widehat{\beta}V_z(x,y,z), \ \ \ \ \ \ \ \ \ \ \ \text{if}\,\, V_z(x,y,z) \in [0, \frac{L}{\vartheta \widehat{\beta}}], \\ 0, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{if}\,\, V_z(x,y,z) < 0. \end{cases} \end{equation}
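For the reader's convenience, here is a short verification of \eqref{eq:OCrule-bis} (an elementary computation, spelled out only for completeness). For the dynamics \eqref{eq:ZOUbis} one has $b(z,\xi)-b(z,0)=-\vartheta\widehat{\beta}\,\xi$, so that, the term $(y/\bar{y})^2$ being independent of $\xi$, the minimization in \eqref{eq:OCrule} reduces to the one-dimensional convex problem
$$ \min_{\xi\in[0,L]}\Big\{\frac{1}{2}\xi^2 - \vartheta\widehat{\beta}\,\xi\, V_z(x,y,z)\Big\}, $$
whose unconstrained minimizer is $\xi=\vartheta\widehat{\beta}\,V_z(x,y,z)$; projecting the latter onto the interval $[0,L]$ gives exactly the three cases in \eqref{eq:OCrule-bis}.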
Our numerical scheme is based on a recursion for the nonlinear equation $$\big(\lambda - \mathcal{L}\big)v(x,y,z) = C^{\star}(y,z,v_z(x,y,z)), \quad (x,y,z) \in \mathcal{O},$$ which is solved by the value function in the classical sense (cf.\ Theorem \ref{thm:main}). Namely, starting from $v^{[0]} \equiv 0$ we use the recursive algorithm $$\big(\lambda - \mathcal{L}\big)v^{[n+1]} = C^{\star}(y,z,v^{[n]}_z), \quad n \geq 0,$$ and these linear equations are solved by Monte Carlo methods based on the Feynman-Kac formula $$v^{[n+1]}(x,y,z) = \E\bigg[\int_0^{\infty}e^{-\lambda t} C^{\star}\big(I^{x,y,z}_t, \beta^{z}_t, v^{[n]}_z(S^{x,y,z}_t, I^{x,y,z}_t, \beta^{z}_t)\big) \mathrm{d} t\bigg], \quad (x,y,z)\in \mathcal{O}.$$ Such an approach is needed because of the lack of appropriate boundary conditions for the HJB equation, as the boundary $\partial\mathcal{O}$ is unattainable for the underlying controlled dynamical system.
{{We take a day as a unit of time.}} In our experiments we assume that the average length of an infection equals $18$ days, so that $\alpha=\frac{1}{18}$ (see also \cite{Alvarez}, \cite{Erhan}, and \cite{KruseStrack}), the level of the maximal possible transmission rate of the disease is $\gamma=0.16$, the natural transmission rate of the disease is $\widehat{\beta}=0.1$, towards which the transmission rate $(\beta_t)_t$ reverts at rate $\vartheta=0.1$ when $\xi\equiv0$, {{and $\sigma=1$, so that the fluctuations of $(\beta_t)_t$ are (at most) of order $10^{-2}$}}. Furthermore, we set $\lambda=1/365$\footnote{{{Our choice of the value of $\lambda=\lambda_o + \delta$ can be justified by assuming that it takes at least a year to develop a vaccine (i.e.\ $1/\lambda_o \geq 365$) and that the intertemporal discount rate of the social planner $\delta$ is negligible with respect to vaccination discovery rate. Indeed, a typical value for the annual discount rate $\delta$ is $5\%$ which is clearly such that $\frac{0.05}{365} \ll \frac{1}{365}$.}}}, and we fix $\bar{y}={0.1}$ in \eqref{costexample}. Finally, in all simulations we assume that at day zero about $1\%$ of the population is infected.
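To make the recursive scheme above concrete, we include a minimal Python sketch of a single Feynman-Kac sweep. It is an illustration only and not the implementation used for the figures below: the time integral is truncated at a finite horizon, the uncontrolled dynamics are discretized by a crude Euler-Maruyama step, $C^{\star}$ is evaluated through the explicit minimizer in \eqref{eq:OCrule-bis}, and in a full implementation $v^{[n]}_z$ would be obtained from the previous iterate on a grid (e.g.\ by finite differences or regression); the function names and the numerical tolerances are ours.
\begin{verbatim}
import numpy as np

# Parameter values of the case study (a day is the unit of time);
# L = 1 corresponds to unconstrained containment.
alpha, theta, beta_hat, gamma, sigma = 1/18, 0.1, 0.1, 0.16, 1.0
lam, L, y_bar = 1/365, 1.0, 0.1

def c_star(y, p):
    # inf over xi in [0, L] of (y/y_bar)^2 + xi^2/2 - theta*beta_hat*xi*p,
    # attained at the clipped first-order condition xi = theta*beta_hat*p.
    xi = np.clip(theta * beta_hat * p, 0.0, L)
    return (y / y_bar) ** 2 + 0.5 * xi ** 2 - theta * beta_hat * xi * p

def feynman_kac_sweep(x, y, z, vz, n_paths=2000, dt=0.5, horizon=5 * 365, seed=0):
    # Monte Carlo estimate of E[ int_0^horizon e^{-lam*t} C*(I_t, vz(S_t,I_t,beta_t)) dt ]
    # along the uncontrolled dynamics (xi = 0), discretized by Euler-Maruyama.
    rng = np.random.default_rng(seed)
    S = np.full(n_paths, float(x))
    I = np.full(n_paths, float(y))
    B = np.full(n_paths, float(z))
    value = np.zeros(n_paths)
    for k in range(int(horizon / dt)):
        value += np.exp(-lam * k * dt) * c_star(I, vz(S, I, B)) * dt
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S, I, B = (S - B * S * I * dt,
                   I + (B * S * I - alpha * I) * dt,
                   # crude safeguard keeping the Euler iterate inside (0, gamma)
                   np.clip(B + theta * (beta_hat * L - B) * dt
                           + sigma * B * (gamma - B) * dW, 1e-6, gamma - 1e-6))
    return value.mean()

# First iterate of the recursion: v^[0] = 0, hence v_z^[0] = 0.
print(feynman_kac_sweep(0.99, 0.01, 0.1, vz=lambda S, I, B: np.zeros_like(S)))
\end{verbatim}
One would then iterate such sweeps over a three-dimensional grid of initial conditions and differentiate the resulting iterate in $z$, so as to approximate $V$ and the feedback map $\widehat{\xi}$.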
{{In all the subsequent pictures we show the mean paths of the considered quantities, together with their $95\%$ confidence intervals. The Monte Carlo average has been computed by employing $6000$ independent simulations.}}
In Section \ref{optimal}, we compare the optimal social planner policy with the case of no restrictions; in Section \ref{limitedContainment} we consider strategies in which the containment measures are limited to a fixed percentage $L \in [0,1]$ and provide a comparison between them; in Section \ref{sec:sigma} we study the effect of the fluctuations of the transmission rate on the problem's solution.
\subsection{The Optimal Social Planner Policy} \label{optimal}
We compare the optimal social planner policy with the case of no restrictions (see Figure \ref{fig1}). Under the optimal social planner policy, severe lockdown measures (larger than $40\%$) are imposed for a period of about 63 days, starting on day 79; a gradual reopening phase then follows. The final percentage of recovered individuals is about $50\%$, in contrast to the $72\%$ reached in the case of no restrictions.
Furthermore, the cases of optimal lockdown and no lockdown show a substantial difference in the evolution of the reproduction number $\mathcal{R}_t:=\frac{\beta_t}{\alpha}$: when lockdown policies are at work, in the most restrictive period, the latter is significantly decreased to values around $0.6$. Another relevant quantity to analyze is $\mathcal{R}_t S_t$. Indeed, recalling \eqref{eq:I}, it is easy to see that the percentage of infected naturally decays at the exponential rate $\alpha(1-\mathcal{R}_t S_t)>0$ if $\mathcal{R}_t S_t$ is maintained strictly below $1$. We observe that, under the suboptimal action ``no lockdown'', $\mathcal{R}_t S_t$ lies below one from day $85$ on. On the other hand, the optimal containment policy is such that $\mathcal{R}_t S_t <1$ from day $75$ on. As a consequence, $\mathcal{R}_t$ can be allowed to oscillate strictly above one (actually, around $1.7$) during the final phase of partial reopening, so that the negative impact of lockdowns on economic growth can be partially dammed.
\begin{figure}
\caption{Comparison between the optimal social planner policy (upper panel) and the case of no restrictions (lower panel). The figures in the first column show the (average) evolution of the containment policy through the value of the optimal control $\xi_t$; the ones in the second column show the (average) evolution of the instantaneous reproduction number $\mathcal{R}_t=\frac{\beta_t}{\alpha}$; the ones in the third column show (average) evolution of the percentage of susceptible (in blue), infected (in red) and recovered (in green) individuals; the ones in the fourth column show the (average) evolution of the product $\mathcal{R}_t\cdot S_t$.}
\label{fig1}
\end{figure}
\subsection{The Optimal Social Planner Policy with Limited Containment} \label{limitedContainment}
In many countries, a vigorous lockdown may not always be feasible, especially over long periods. Further, as pointed out by the recent literature (see, for instance, \cite{Asprietal}), gradual policies of longer duration but more moderate containment exhibit large welfare benefits, comparable to the ones obtained by a drastic lockdown. For this reason, we consider a strategy in which the containment measures are limited to a fixed percentage $L \in [0,1]$. Notice that $L=0.7$ in \cite{Alvarez}, and $L \in \{0.7, 1\}$ in \cite{Acemoglu} and \cite{Erhan}. A comparison of the optimal social planner policy with limited containment $L \in \{0.2, 0.4, 0.6, 0.8\}$ is shown in Figure \ref{fig2} and a summary is contained in Table \ref{tab1}. \begin{figure}
\caption{Comparison between the optimal social planner policy with limited containment $L$. The figures in the first row show the evolution of the (average) containment policy through the value of the optimal control $\xi_t$; the figures in the second row show the (average) evolution of the instantaneous reproduction number $\mathcal{R}_t=\frac{\beta_t}{\alpha}$; the figures in the third row show the evolution of the (average) percentage of susceptible (in blue), infected (in red) and recovered (in green) individuals; the figures in the fourth row show the (average) evolution of the product $\mathcal{R}_t\cdot S_t$. The limited level of containment varies with the columns: the first column treats the case $L=0.8$, the second column the case $L=0.6$,
the third column the case $L=0.4$ and the last column the case $L=0.2$.}
\label{fig2}
\end{figure}
\begin{table}[htb] \begin{center} \begin{tabular}{lcccccccccc} & & $L=0.8$ & & $L=0.6$ & & $L=0.4$ & & $L=0.2$\\ \hline First day of containment && {54} && {54} && {54} && {54}\\
Recovered && {52\%} && {58\%} && {61\%} && {68\%}\\
\hline \end{tabular} \caption{Optimal social planner policy with different values of limited containment $L$.}\label{tab1} \end{center} \end{table}
Clearly, the larger $L$ is, the smaller are the social costs (by definition of the value function).
Our experiment shows that for $L=0.2, 0.4, 0.6, 0.8$, the final percentage of recovered (hence the total amount of infected) on average ranges from $52\%$ (case $L=0.8$) up to $68\%$ (case $L=0.2$). In all the cases, the optimal containment starts at the maximal rate and the first day of containment is substantially the same (around day $54$).
Different ceilings $L$ on the containment strategies also affect the values and the size of the fluctuations of the reproduction number $\frac{\beta_t}{\alpha}$: smaller values of $L$ correspond to milder variations of the reproduction number $\mathcal{R}_t$, of size about $0.3$, whereas larger values of $L$ lead to rapid changes of $\mathcal{R}_t$, which reaches levels smaller than $1$ (less than $0.8$ for $L=0.8$ and less than $0.6$ for $L=1$). In all the cases, $\mathcal{R}_t S_t$ lies strictly below $1$ after a certain date, which is decreasing with respect to $L$ (see the last column in Figures \ref{fig1} and \ref{fig2}). Notice that without any containment policies, $\mathcal{R}_t S_t$ decreases in time due to a natural ``herd-immunity'' effect. On the other hand, when lockdowns are in place, we observe a faster decrease of $\mathcal{R}_t S_t$, which is forced by the policymaker's initial vigorous actions. The final relaxation of the latter then allows for an increase of $\mathcal{R}_t S_t$, which is nevertheless kept below the critical level of $1$. Such an effect is monotone decreasing with respect to $L$.
\subsection{The Role of Uncertainty} \label{sec:sigma}
{{ The main new feature of our model is to consider a (controlled) stochastic transmission rate in the framework of the classical SIR model. In this section we study numerically how an increase of the fluctuations of the transmission rate affects the optimal solution. In particular, in Figure \ref{fig3} the volatility $\sigma$ takes values $1$, $5$ and $10$, thus leading to fluctuations of $\beta$ of order $10^{-2}$, $5 \times 10^{-2}$, and $10^{-1}$, respectively (indeed, recall that $\sigma(\beta)=\sigma \beta(\gamma-\beta)$ attains its maximum at $\gamma/2$ and $\gamma=0.16$). \begin{figure}
\caption{Comparison between the optimal social planner policy with different $\sigma$, when $L=1$. The figures in the first row show the evolution of the (average) containment policy through the value of the optimal control $\xi_t$; the figures in the second row show the evolution of the (average) percentage of susceptible (in blue), infected (in red) and recovered (in green) individuals. The level of $\sigma$ varies with the columns: the first column treats the case $\sigma=1$, the second column the case $\sigma=5$,
the third column the case $\sigma=10$.}
\label{fig3}
\end{figure}
We observe from Figure \ref{fig3} that larger fluctuations of $\beta$ have the effect of bringing forward the beginning of the lockdown policies, and of spreading the actions over a longer period. Indeed, when $\sigma=5$ and $\sigma=10$, the optimal lockdown policy starts around day $46$ and $42$, respectively, in contrast to day $54$ in the case $\sigma=1$. Moreover, when $\sigma$ increases, the maximal employed lockdown intensity decreases and the level of containment stabilizes at a larger value in the long run. This can be explained by noting that an increase in the fluctuations of the transmission rate induces the policymaker to act earlier and over a longer period in order to prevent large positive shocks of $\beta$. However, in order to limit the social costs resulting from a longer period of restrictions, the maximal intensity of the lockdown policy is reduced.
Moreover, such a spreading of the optimal lockdown policy gives rise to an increase of the final percentage of recovered (which is about $58\%$ and $60\%$ when $\sigma =5$ and $\sigma=10$, respectively, and about $50\%$ when $\sigma =1$). }}
\section{Conclusions} \label{sec:concl}
We have studied the problem of a policymaker who, during an epidemic, is challenged to optimally balance the safeguard of public health and the negative economic impact of severe lockdowns. The policymaker can implement containment policies in order to reduce the trend of the disease's transmission rate, which evolves stochastically in continuous time. In the context of the SIR model, our theoretical analysis allows us to identify the minimal social cost function as a classical solution to the corresponding dynamic programming equation, as well as to provide an optimal control in feedback form.
In a case study in which the transmission rate is a (controlled) mean-reverting diffusion process, numerical experiments show that the optimal lockdown policy is characterized by three distinct phases: the epidemic is first left to evolve freely, then vigorously tamed, and finally contained in a less stringent way. Interestingly, in the last period the epidemic's reproduction number is allowed to oscillate strictly above one, although the product ``reproduction number $\times$ percentage of susceptible'' is kept strictly below the critical level of one. Hence, under the optimal containment policy, the percentage of infected decreases naturally at an exponential rate and the social planner is then allowed to substantially relax the lockdown in order not to incur excessively heavy economic costs. {{Moreover, we show that an increase in the fluctuations of the transmission rate gives rise to an earlier start of the optimal lockdown policy, which is also spread over a longer period of time.
We believe that our work is only a first step in enriching the SIR model with a stochastic controlled component and in understanding the policymaker's problem of optimally balancing the safeguard of public health and social wealth. There is still much to be done in order to incorporate other features, such as the partial observability of the transmission rate or the role of public investment in the discovery of a vaccine (see Remark \ref{rem:tau} on this). We leave the analysis of the resulting challenging problems for future work.}}
\appendix
\section{Technical results} \label{sec:app}
\begin{lemma}\label{lemma:app} Let $\mathcal{O}'$ be an open neighborhood of $\bm 0=(0,0,0)\in\R^3$. Let $W:\mathcal{O}'\to \R$ be a semiconcave function such that $W(\bm 0)=0$ and $W_z^-(\bm 0)>W_z^+(\bm 0)$. Then there exists a sequence of functions $(\varphi^n)_n\subset C^{2}(\mathcal{O}')$ such that \begin{equation} \label{eq:varphin} \begin{cases}
\varphi^n(\mathbf{0})=W(\bm 0)=0,\\ \varphi^n\geq W \ \mbox{in a neighborhood of} \ \bm 0,\\ |D\varphi^n(\bm 0)|\leq L<\infty,\\ \varphi^n_{zz}(\mathbf{0})\stackrel{n\to\infty}{\longrightarrow}-\infty. \end{cases} \end{equation} \end{lemma}
\begin{proof} Since $W$ is semiconcave, there exists $C_0\geq 0$ such that $$ \widehat{W}: \mathcal{O}'\to \R, \ \ \ \widehat{W}(x,y,z):= W(x,y,z)-{C}_0\big(x^2+y^2+z^2\big), $$ is concave. Fix such a $C_0$.
Since $W_z^-(\bm 0)> W_z^+(\bm 0)$, also $\widehat{W}_z^-(\bm 0)>\widehat{W}_z^+(\bm 0)$, and it clearly suffices to prove the claim for $\widehat{W}$. By \cite[Theorem 23.4]{Rock}, it follows that there exist $$ \bm{\eta}=(\eta_x,\eta_y,\eta_z), \ \bm{\zeta}=(\zeta_x,\zeta_y,\zeta_z) \in D^+\widehat{W}(\bm 0) \quad \text{such that} \quad \eta_z> \zeta_z.$$ Set $$g(\bm q):=\langle \bm \eta,\bm q \rangle \wedge \langle \bm \zeta, \bm q\rangle$$ and notice that $\widehat{W}(\bm 0)=0=g(\bm 0)$ and that, by concavity, $$ \widehat{W}(\bm q )\leq g(\bm q) \ \ \forall \bm q\in\mathcal{O}'. $$ Define $$A:=\mbox{Span}\{\bm\eta-\bm\zeta\}^\perp,$$ and denote by $\Pi:\R^3\to A$ the orthogonal projection onto $A$. Given $\textbf{q}\in \R^3$ we then have the decomposition $$
\textbf{q}=\Pi \textbf{q}+ \frac{\bm{\eta}-\bm{\zeta}}{|\bm\eta-\bm\zeta|}\,s, \ \ \ s=\frac{\langle \textbf{q},\bm{\eta}-\bm{\zeta}\rangle}{|\bm\eta-\bm\zeta|}. $$ Define, for $\bm q\in\mathcal{O}'$, $$ \varphi^n(\bm q):=g(\Pi \bm q)+\psi^n(s), $$ where $$
\psi^n:\R\to\R, \ \ \psi^n(s)=-\frac{n}{2}s^2+\frac{1}{2}\frac{\langle \bm\eta+\bm\zeta, \bm\eta-\bm\zeta\rangle}{|\bm\eta-\bm\zeta|} s. $$ This sequence realizes \eqref{eq:varphin}. Indeed, the first two properties hold by construction; in particular the second one is due to the fact that
we have $$ g(\bm q)=g(\Pi\bm q)+\begin{cases}
\frac{\langle \bm \zeta, \bm\eta-\bm\zeta\rangle}{|\bm\eta-\bm\zeta|}s \ \ \ \mbox{if} \ s\geq 0, \\\\
\frac{\langle \bm \eta, \bm\eta-\bm\zeta\rangle}{|\bm\eta-\bm\zeta|}s \ \ \ \mbox{if} \ s<0. \end{cases} $$
As for the last two properties, we notice that $$
D \varphi^n(\bm q)= \Pi\bm \eta \ (=\Pi\bm \zeta)+ \frac{\bm \eta -\bm \zeta}{|\bm \eta-\bm \zeta|}\frac{\mathrm{d}\psi^n}{\mathrm{d} s}(s), $$ so $$
\varphi^n_z(\bm q)= \langle \Pi\bm \eta, (0,0,1)\rangle + \left\langle \frac{\bm \eta -\bm \zeta}{|\bm \eta-\bm \zeta|},(0,0,1)\right\rangle\frac{\mathrm{d}\psi^n}{\mathrm{d} s}(s) = \langle \Pi\bm \eta, (0,0,1)\rangle + \frac{\eta_z -\zeta_z}{|\bm \eta-\bm \zeta|}\frac{\mathrm{d}\psi^n}{\mathrm{d} s}(s), $$ $$
\varphi^n_{zz}(\bm q)= \frac{\eta_z -\zeta_z}{|\bm \eta-\bm \zeta|}\frac{\mathrm{d}^2\psi^n}{\mathrm{d} s^2}(s), $$ from which the last two properties in \eqref{eq:varphin} follow, since $\eta_z>\zeta_z$ and $\frac{\mathrm{d}^2\psi^n}{\mathrm{d} s^2}\equiv -n$. \end{proof}
Denote by $\textbf{q}=(q_1,q_2,q_3):=(x,y,z)$ an arbitrary point of $\mathcal{O}$. For any multi-index $\alpha:=(\alpha_1,\alpha_2,\alpha_3) \in \mathbb{N}^3$ we denote by $|\alpha| = \sum_{i=1}^3 \alpha_i$ and $D^{\alpha}_{\textbf{q}}=\partial^{|\alpha|}/\partial_{q_1}^{\alpha_1}\dots \partial_{q_3}^{\alpha_3}$, with the convention that $\partial^{0}$ is the identity.
\begin{proposition} \label{prop:A1} For any $\textbf{q} \in \mathcal{O}$ the (uncontrolled) process $(\textbf{Q}^{\textbf{q}}_t)_t:=(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t)$ admits a transition density $p$ which is absolutely continuous with respect to the Lebesgue measure in $\R^3$, infinitely many times differentiable, and satisfies the Gaussian estimates \begin{equation} \label{eq:Gauss1}
p(t, \textbf{q}; \textbf{q}') \leq \frac{C_0(t)(1 + |\textbf{q}|)^{m_0}}{t^{\frac{n_0}{2}}} e^{- \frac{D_0(t)|\textbf{q}'- \textbf{q}|^2}{t}}, \quad \forall t>0,\, \textbf{q}'=(x',y',z') \in \mathcal{O}, \end{equation} \begin{equation} \label{eq:Gauss2}
|D^{\alpha}_{\textbf{q}}p(t, \textbf{q}; \textbf{q}')| \leq \frac{C_{\alpha}(t)(1 + |\textbf{q}|)^{m_{\alpha}}}{t^{\frac{n_{\alpha}}{2}}} e^{- \frac{D_{\alpha}(t)|\textbf{q}'- \textbf{q}|^2}{t}}, \quad \forall t>0,\, \textbf{q}'=(x',y',z') \in \mathcal{O}. \end{equation}
Here, $C_0$, $D_0$, $C_{\alpha}$, and $D_{\alpha}$ are increasing functions of time. \end{proposition}
\begin{proof} Given $f,g \in C^1(\R^3;\R^3)$, define the Lie bracket $$[f,g]:=\sum_{j=1}^3 \Big(\frac{\partial g}{\partial q_j} f_j - \frac{\partial f}{\partial q_j} g_j\Big).$$ Then, for any given and fixed $\textbf{q}\in \mathcal{O}$, we set \begin{eqnarray*} \mu(\textbf{q}):= \left(\begin{array}{c} -xyz\\xyz-\alpha y\\ b(z,0) \end{array}\right) \quad \textrm{ and } \quad \Sigma(\textbf{q}):= \left(\begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \sigma(z) \end{array}\right) \end{eqnarray*} and, denoting by $\Sigma_i$, $i=1,2,3$, the columns of the matrix $\Sigma$, we construct recursively the set of functions $L_0:=\{\Sigma_1, \Sigma_2, \Sigma_3\}$, $L_{k+1}:= \{[\mu,\varphi], [\Sigma_1,\varphi], [\Sigma_2,\varphi], [\Sigma_3,\varphi]\,:\, \varphi \in L_k\}$, $k\geq0$. We also define $L_{\infty}:=\cup_{k\geq0}L_k$. We say that the H\"ormander condition holds true at $\textbf{q}\in \mathcal{O}$ if \begin{equation} \label{eq:Hormander} \Span\big\{\varphi(\textbf{q}),\, \varphi \in L_{\infty}\big\} = \R^3. \end{equation} Direct calculations (neglecting vector fields that vanish identically) show that \begin{eqnarray*} L_0(\textbf{q})=\{\Sigma_3\}(\textbf{q})=\left\{\left(\begin{array}{c} 0\\0\\ \sigma(z) \end{array}\right)\right\} \end{eqnarray*}
\begin{eqnarray*} L_1(\textbf{q}) = \big\{[\mu,\Sigma_3]\big\}(\textbf{q}) = \left\{ \left(\begin{array}{c} xy\sigma(z)\\ -xy\sigma(z) \\ \sigma_z(z)b(z,0)-\sigma(z) b_z(z,0) \end{array}\right)\right\} \end{eqnarray*} and \begin{eqnarray*} && L_2(\textbf{q})= \big\{ [\mu,[\mu,\Sigma_3]], [\Sigma_3,[\mu,\Sigma_3]]\big\}(\textbf{q})\\ && = \left\{\left(\begin{array}{c}
xy(2b(z,0) \sigma_z(z) - \alpha \sigma(z) - \sigma(z) b_z(z,0))\\ xy (\sigma(z)b_z(z,0) - 2 b(z,0) \sigma_z(z))\\ b(z,0)^2 \sigma_{zz}(z) - \sigma(z) b(z,0) b_{zz}(z,0) + \sigma(z) b_z(z,0)^2 - \sigma_z(z) b(z,0) b_z(z,0)
\end{array}\right),\right.\\ &&\left.\left(\begin{array}{c} xy \sigma(z)\sigma_z(z)\\ -xy \sigma(z) \sigma_z(z)\\ b(z,0) \sigma(z) \sigma_{zz}(z) - \sigma(z)^2 b_{zz}(z,0) - b(z,0) \sigma_z(z)^2 + \sigma(z) b_z(z,0) \sigma_z(z)
\end{array}\right) \right\} \end{eqnarray*} Hence, the $3\times3$ matrix whose columns are $\Sigma_3(\textbf{q})$, $[\mu,\Sigma_3](\textbf{q})$ and $[\mu,[\mu,\Sigma_3]](\textbf{q})$ has determinant $-\alpha x^2 y^2 \sigma^3(z) < 0$, so that these three vectors are linearly independent. Therefore \eqref{eq:Hormander} holds true on $\mathcal{O}$, given the arbitrariness of $\textbf{q}$.
Therefore, by Theorem 2.3.3 in \cite{Nualart}, for any $t>0$ the uncontrolled process $(S^{x,y,z}_t, I^{x,y,z}_t, \beta^z_t)_t$ admits a transition density $p$ which is absolutely continuous with respect to the Lebesgue measure in $\R^3$, and infinitely many times differentiable. Moreover, Theorem 9 and Remark 11 in \cite{Bally} (see also \cite{KuStr}) show that $p$ satisfies the Gaussian estimates \eqref{eq:Gauss1} and \eqref{eq:Gauss2}. This completes the proof. \end{proof}
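The bracket computations above are elementary but tedious, and they can be double-checked symbolically. The following small Python/\texttt{sympy} sketch (ours, not part of the original argument) treats $\sigma$ and $b(\cdot,0)$ as generic smooth functions of $z$ and reproduces the determinant used in the proof:
\begin{verbatim}
import sympy as sp

x, y, z, a = sp.symbols('x y z alpha', positive=True)
sig = sp.Function('sigma')(z)        # generic diffusion coefficient sigma(z)
b0 = sp.Function('b')(z)             # uncontrolled drift b(z, 0)
q = sp.Matrix([x, y, z])

mu = sp.Matrix([-x*y*z, x*y*z - a*y, b0])   # drift of (S, I, beta)
S3 = sp.Matrix([0, 0, sig])                 # third column Sigma_3 of Sigma

def bracket(f, g):                   # Lie bracket [f, g] = (Dg) f - (Df) g
    return g.jacobian(q) * f - f.jacobian(q) * g

L1 = bracket(mu, S3)                 # [mu, Sigma_3]
L2 = bracket(mu, L1)                 # [mu, [mu, Sigma_3]]

M = sp.Matrix.hstack(S3, L1, L2)
print(sp.simplify(M.det()))          # expected: -alpha*x**2*y**2*sigma(z)**3
\end{verbatim}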
{{ \begin{proposition} \label{prop:betadyn} The dynamics of $\beta$ as in \eqref{eq:ZOUbis} satisfy Assumption \ref{assZ}. \end{proposition} \begin{proof} For any $z \in \R$ and $\xi \in [0,L]$, recall that $b(z,\xi)=\vartheta(\widehat{\beta}(L-\xi) - z)$ and define
\begin{equation} \label{sigmatilde} \widetilde{\sigma}(z)=\left\{ \begin{array}{lr} \displaystyle 0\,\,\qquad \qquad \ \ \mbox{if $z \in (-\infty, 0]$},\\[+14pt] \displaystyle \sigma z (\gamma -z)\,\,\quad \mbox{if $z \in (0,\gamma)$},\\[+14pt] \displaystyle 0\,\,\qquad \qquad \ \ \mbox{if $z \in [\gamma,\infty)$}. \end{array} \right. \end{equation} Then, for $(\xi_t)_t \in \mathcal{A}$, introduce the stochastic differential equation \begin{equation} \label{eq:betatilde} \mathrm{d} \widetilde{\beta}_t = b(\widetilde{\beta}_t, \xi_t) \mathrm{d} t + \widetilde{\sigma}(\widetilde{\beta}_t)\mathrm{d} W_t, \quad \widetilde{\beta}_0=z \in \R. \end{equation} Because $b$ and $\widetilde{\sigma}$ are Lipschitz-continuous and have sublinear growth, uniformly with respect to $\xi$, for any $(\xi_t)_t \in \mathcal{A}$ there exists a unique strong solution to \eqref{eq:betatilde} starting at $z \in \R$ (see, e.g., Theorem 7 in Chapter V of \cite{Protter}). We denote such a solution by $(\widetilde{\beta}^{\xi,z}_t)_t$. Since $\xi \mapsto b(z,\xi)$ is decreasing, by Theorem 54 in Chapter V of \cite{Protter}, we have \begin{equation}\label{comparison}
\widetilde{\beta}^{L,z}_t\leq \widetilde{\beta}^{\xi,z}_t \leq \widetilde{\beta}^{0,z}_t, \ \ \ \ \forall t\geq 0 \ \mbox{a.s.},
\end{equation} where $(\widetilde{\beta}^{L,z}_t)_t$ and $(\widetilde{\beta}^{0,z}_t)_t$ are, respectively, the solutions to \eqref{eq:betatilde} with $\xi_t \equiv L$ and with $\xi_t \equiv 0$. On the other hand, by Feller's test for explosion (cf.\ Proposition 5.22 in Chapter 5.5 of \cite{KS}), it can be checked that $\widetilde{\beta}^{L,z}_t >0$ for all $t\geq 0$ a.s.\ and that $\widetilde{\beta}^{0,z}_t < \gamma$ for all $t\geq 0$ a.s. Hence, by \eqref{comparison}, we get $\widetilde{\beta}^{\xi,z}_t \in (0,\gamma)$ for all $t\geq 0$ a.s. This proves that the SDE \eqref{eq:ZOUbis} admits a unique strong solution which lies within the open interval $\mathcal{I}=(0,\gamma)$ (cf.\ Assumption \ref{assZ}-(i)).
Given the boundedness of the interval $\mathcal{I}=(0,\gamma)$, using the expression of $b$ and \eqref{sigmatilde} it is straightforward to verify that (ii) and (iii) of Assumption \ref{assZ} are satisfied as well. \end{proof} }}
\indent \textbf{Acknowledgments.} The authors thank Frank Riedel, Mauro Rosestolato, three anonymous Referees and the Guest Editors for interesting comments and suggestions.
\end{document}
\begin{document}
\markboth{J.-F. Babadjian \& V. Millot} {Homogenization of variational problems in manifold valued $BV$-spaces}
\title{HOMOGENIZATION OF VARIATIONAL PROBLEMS\\ IN MANIFOLD VALUED $BV$-SPACES} \author{Jean-Fran\c{c}ois BABADJIAN\footnote{{\it Current adress:} CMAP, Ecole Polytechnique, 91128 Palaiseau, France. {\it E-mail:} \texttt{[email protected]}}}
\address{Laboratoire Jean Kuntzmann\\ Universit\'e Joseph Fourier\\ BP 53\\ 38041 Grenoble Cedex 9, France.\\ \emph{\tt{[email protected]}}}
\author{Vincent MILLOT}
\address{Universit\'e Paris Diderot - Paris 7\\ CNRS, UMR 7598 Laboratoire Jacques-Louis Lions\\ F-75005 Paris, France.\\ \emph{\tt{[email protected]}}}
\maketitle
\begin{abstract} {\bf Abstract.} This paper extends the result of \cite{BM} on the homogenization of integral functionals with linear growth defined for Sobolev maps taking values in a given manifold. Through a $\Gamma$-convergence analysis, we identify the homogenized energy in the space of functions of bounded variation. It turns out to be finite for $BV$-maps with values in the manifold. The bulk and Cantor parts of the energy involve the tangential homogenized density introduced in \cite{BM}, while the jump part involves an homogenized surface density given by a geodesic type problem on the manifold. \end{abstract}
\keywords{Homogenization, $\Gamma$-convergence, manifold valued maps, functions of bounded variation.}
\ccode{Mathematics Subject Classification 2000: 74Q05; 49J45; 49Q20.}
\section{Introduction}
\noindent In this paper we extend our previous result \cite{BM} concerning the homogenization of integral functionals with linear growth involving manifold valued mappings. More precisely, we are interested in energies of the form \begin{equation}\label{mainfunct}\int_\O f\left(\frac{x}{\e},\nabla u\right)dx\,,\quad u : \O \to \mathcal{M}\subset{\mathbb{R}}^d\,,\end{equation} where $\O \subset {\mathbb{R}}^N$ is a bounded open set, $f:{\mathbb{R}}^N \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ is a periodic integrand in the first variable with linear growth in the second one, and ${\mathcal{M}}$ is a smooth submanifold. Our main goal is to find an effective description of such energies as $\varepsilon\to 0$. To this aim we perform a $\Gamma$-convergence analysis which is an appropriate approach to study asymptotics in variational problems (see \cite{DM} for a detailed description of this subject). For energies with superlinear growth, the most general homogenization result has been obtained independently in \cite{Br,M} in the nonconstrained case, and in \cite{BM} in the setting of manifold valued maps.
The functional in \eqref{mainfunct} is naturally defined for maps in the Sobolev class $W^{1,1}$. However, if one wants to apply the Direct Method in the Calculus of Variations, it becomes necessary to extend the original energy to a larger class of functions (possibly singular) in which the existence of minimizers is ensured. In the nonconstrained case, this class is exactly the space of functions of bounded variation and the problem of finding an integral representation for the extension, the so-called {\it ``relaxed functional"}, has been widely investigated, see {\it e.g.}, \cite{Serr,GMSbv,DMsc,AMT,AmPal,FR,ADM,FM,FM2,BFF} and \cite{Bouch,DAG} concerning homogenization in $BV$-spaces.
Many models from material science involve vector fields taking their values in a manifold. This is for example the case in the study of equilibria for liquid crystals, in ferromagnetism or for magnetostrictive materials. It then became necessary to understand the behaviour of integral functionals of the type (\ref{mainfunct}) under this additional constraint. In the framework of Sobolev spaces, it was the object of \cite{DFMT,AL,BM}. For $\varepsilon$ fixed, the complete analysis in the linear growth case has been performed in \cite{AEL} assuming that the manifold is the unit sphere of ${\mathbb{R}}^d$. Using a different approach, the arbitrary manifold case has been recently treated in \cite{Mucci} where a further isotropy assumption on the integrand is made. We will present in the Appendix the analogue of \cite{AEL} for a general integrand and a general manifold.
We finally mention that the topology of ${\mathcal{M}}$ does not play an important role here. This is in contrast with a slightly different problem originally introduced in \cite{BCL,BBC}, where the starting energy is assumed to be finite only for smooth maps. In this direction, some recent results in the linear growth case can be found in \cite{GM,GM1} where the study is performed within the framework of Cartesian Currents \cite{GMS}. When the manifold ${\mathcal{M}}$ is topologically nontrivial, it shows the emergence in the relaxation process of non local effects essentially related to the non density of smooth maps (see \cite{B,BZ}). \vskip5pt
Throughout this paper we consider a compact and connected smooth submanifold ${\mathcal{M}}$ of ${\mathbb{R}}^d$ without boundary. The classes of maps we are interested in are defined as $$BV(\O;{\mathcal{M}}):=\big\{ u \in BV(\O;{\mathbb{R}}^d) : \; u(x) \in {\mathcal{M}} \text{ for ${\mathcal{L}}^N$-a.e. }x \in \O\big\}\,,$$ and $W^{1,1}(\O;{\mathcal{M}})=BV(\O;{\mathcal{M}}) \cap W^{1,1}(\O;{\mathbb{R}}^d)$. For a smooth ${\mathcal{M}}$-valued map, it is well known that first order derivatives belong to the tangent space of ${\mathcal{M}}$, and this property has a natural extension to $BV$-maps with values in ${\mathcal{M}}$, see Lemma \ref{manifold}. \vskip5pt
The function $f : {\mathbb{R}}^N \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ is assumed to be a Carath\'eodory integrand satisfying \begin{itemize} \item[$(H_1)$] for every $\xi \in {\mathbb{R}}^{d \times N}$ the function $f(\cdot,\xi)$ is $1$-periodic, {\it i.e.} if $\{e_1,\ldots,e_N\}$ denotes the canonical basis of ${\mathbb{R}}^N$, one has $f(y+e_i,\xi)=f(y,\xi)$ for every $i=1,\ldots,N$ and $y \in {\mathbb{R}}^N$;\\ \item[$(H_2)$] there exist $0<\a \leq \b < +\infty$ such that
$$\a |\xi| \leq f(y,\xi)\leq \b(1+|\xi|) \quad \text{ for a.e. }y \in {\mathbb{R}}^N \text{ and all } \xi \in {\mathbb{R}}^{d \times N}\,;$$ \item[$(H_3)$]there exists $L>0$ such that
$$|f(y,\xi)-f(y,\xi')| \leq L |\xi-\xi'|\, \quad \text{ for a.e. }y \in {\mathbb{R}}^N \text{ and all } \xi,\, \xi' \in {\mathbb{R}}^{d \times N}\,.$$ \end{itemize} For $\e>0$, we define the functionals ${\mathcal{F}}_\e:L^1(\O;{\mathbb{R}}^d) \to [0,+\infty]$ by $${\mathcal{F}}_\e(u):=\begin{cases} \displaystyle \int_\O f\left(\frac{x}{\e},\nabla u\right) dx & \text{if }u \in W^{1,1}(\O;\mathcal{M})\,,\\[8pt] +\infty & \text{otherwise}\,. \end{cases}$$ \vskip5pt
We have proved in \cite{BM} the following representation result on $W^{1,1}(\O;{\mathcal{M}})$.
\begin{theorem}[\cite{BM}]\label{babmilp=1} Let ${\mathcal{M}}$ be a compact and connected smooth submanifold of ${\mathbb{R}}^d$ without boundary, and $f:{\mathbb{R}}^N \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$~be a Carath\'eodory function satisfying $(H_1)$ to $(H_3)$. Then the family $\{{\mathcal{F}}_\e\}_{\e>0}$ $\G$-converges for the strong $L^1$-topology at every $u \in W^{1,1}(\O;{\mathcal{M}})$ to ${\mathcal{F}}_{\rm hom} : W^{1,1}(\O;{\mathcal{M}}) \to [0,+\infty)$, where $${\mathcal{F}}_{\rm hom}(u):= \int_\O Tf_{\rm hom}(u,\nabla u)\, dx\,,$$ and $Tf_{\rm hom}$ is the tangentially homogenized energy density defined for every $s\in {\mathcal{M}}$ and $\xi\in [T_s({\mathcal{M}})]^N$ by \begin{equation}\label{Tfhom} Tf_{\rm hom}(s,\xi)=\lim_{t\to+\infty}\inf_{\varphi} \bigg\{ - \hskip -1em \int_{(0,t)^N} f(y,\xi+ \nabla \varphi(y))\, dy : \varphi \in W^{1,\infty}_0((0,t)^N;T_s(\mathcal{M})) \bigg\}. \end{equation} \end{theorem} \vskip5pt
Note that the previous theorem is not really satisfactory since the domain of the $\G$-limit is obviously larger than the Sobolev space $W^{1,1}(\O;{\mathcal{M}})$. In view of the studies performed in \cite{GM,Mucci}, the domain is exactly given by $BV(\O;{\mathcal{M}})$. Under the additional (standard) assumption, \begin{itemize} \item[$(H_4)$]there exist $C>0$ and $0<q<1$ such that
$$|f(y,\xi)-f^\infty(y,\xi)| \leq C (1+|\xi|^{1-q}) \quad \text{ for a.e. }y \in {\mathbb{R}}^N \text{ and all } \xi \in {\mathbb{R}}^{d \times N}\,,$$ where $f^\infty:{\mathbb{R}}^N \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ is the recession function of $f$ defined by $$f^\infty(y,\xi):=\limsup_{t \to +\infty}\,\frac{f(y,t\xi)}{t}\,,$$ \end{itemize} we have extended Theorem~\ref{babmilp=1} to $BV$-maps, and our main result can be stated as follows.
\begin{theorem}\label{babmil2} Let ${\mathcal{M}}$ be a compact and connected smooth submanifold of ${\mathbb{R}}^d$ without boundary, and let $f:{\mathbb{R}}^N \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ be a Carath\'eodory function satisfying $(H_1)$ to $(H_4)$. Then the family $\{{\mathcal{F}}_\e\}$ $\G$-converges for the strong $L^1$-topology to the functional ${\mathcal{F}}_{\rm hom} : L^1(\O;{\mathbb{R}}^d) \to [0,+\infty]$ defined by $${\mathcal{F}}_{\rm hom}(u):= \begin{cases} \displaystyle \begin{multlined}[9cm] \,\int_\O Tf_{\rm hom}(u,\nabla u)\, dx + \int_{\O\cap S_u}\vartheta_{\rm hom}(u^+,u^-,\nu_u)\, d{\mathcal{H}}^{N-1}\,+ \\[-12pt]
+ \int_\O Tf^\infty_{\rm hom}\left(\tilde u,\frac{dD^cu}{d|D^cu|}\right)\, d|D^cu| \end{multlined} & \text{if }u \in BV(\O;{\mathcal{M}})\,,\\ & \\ \,+\infty & \text{otherwise}\,, \end{cases}$$ where $Tf_{\rm hom}$ is given in (\ref{Tfhom}), $Tf_{\rm hom}^\infty$ is the recession function of $Tf_{\rm hom}$ defined for every $s \in {\mathcal{M}}$ and every $\xi \in [T_s({\mathcal{M}})]^N$ by $$Tf_{\rm hom}^\infty(s,\xi):=\limsup_{t \to +\infty}\,\frac{Tf_{\rm hom}(s,t\xi)}{t}\, ,$$ and for all $(a,b,\nu) \in {\mathcal{M}} \times{\mathcal{M}} \times{\mathbb{S}^{N-1}}$, \begin{multline}\label{thetahom} \vartheta_{\rm hom}(a,b,\nu) := \lim_{t\to+\infty}\inf_\varphi \bigg\{\frac{1}{t^{N-1}} \int_{t\, Q_\nu} f^\infty(y,\nabla \varphi(y))\, dy : \varphi \in W^{1,1}(tQ_\nu;{\mathcal{M}})\,, \\
\varphi=a \text{ on }\partial (tQ_\nu)\cap\{x\cdot \nu>0\} \text{ and }\varphi=b \text{ on }\partial (tQ_\nu)\cap\{x\cdot \nu \leq 0\}\bigg\}\,, \end{multline} $Q_\nu$ being any open unit cube in ${\mathbb{R}}^N$ centered at the origin with two of its faces orthogonal to $\nu$. \end{theorem}
The paper is organized as follows. We first review in Section 2 standard facts about manifold valued Sobolev mappings and functions of bounded variation that will be used throughout the paper. The main properties of the energy densities $Tf_{\rm hom}$ and $\vartheta_{\rm hom}$ are the object of Section 3. A locality property of the $\Gamma$-limit is established in Section 4. The upper bound inequality in Theorem \ref{babmil2} is the object of Section~5. The lower bound is obtained
in Section 6 where the proof of the theorem is completed. Finally we state in the Appendix a relaxation result for general manifolds and integrands which extends \cite{AEL} and \cite{Mucci}.
\section{Preliminaries}
Let $\O$ be a generic bounded open subset of ${\mathbb{R}}^N$. We write ${\mathcal{A}}(\O)$ for the family of all open subsets of $\O$, and $\mathcal B(\O)$ for the $\sigma$-algebra of all Borel subsets of $\O$. We also consider a countable subfamily ${\mathcal{R}}(\O)$ of ${\mathcal{A}}(\O)$ made of all finite unions of cubes with rational edge length centered at rational points of ${\mathbb{R}}^N$. Given $\nu \in {\mathbb{S}^{N-1}}$, $Q_\nu$ stands for an open unit cube in ${\mathbb{R}}^N$ centered at the origin with two of its faces orthogonal to $\nu$ and $Q_\nu(x_0,\rho):= x_0 + \rho \,Q_\nu$. Similarly $Q:=(-1/2,1/2)^N$ is the unit cube in ${\mathbb{R}}^N$ and $Q(x_0,\rho):= x_0 + \rho \,Q$. We denote by $h^\infty$ the recession function of a generic scalar function $h$, {\it i.e.}, $$h^\infty(\xi):=\limsup_{t\to+\infty}\,\frac{h(t\xi)}{t}\,.$$
The space of vector valued Radon measures in $\O$ with finite total variation is denoted by ${\mathcal{M}}(\O;{\mathbb{R}}^m)$. We shall follow \cite{AFP}
for the standard notation on functions of bounded variation. We only recall Alberti's Rank One Theorem, which states that for $|D^c u|$-a.e.
$x \in \O$, $$A(x):=\frac{dD^cu}{d|D^cu|}(x)$$ is a rank one matrix.
In this paper, we are interested in Sobolev and $BV$ maps taking their values into a given manifold. We consider a connected smooth submanifold ${\mathcal{M}}$ of ${\mathbb{R}}^d$ without boundary. The tangent space of ${\mathcal{M}}$ at $s \in {\mathcal{M}}$ is denoted by $T_s({\mathcal{M}})$, ${\rm co}({\mathcal{M}})$ stands for the convex hull of ${\mathcal{M}}$, and $\pi_1({\mathcal{M}})$ is the fundamental group of ${\mathcal{M}}$.
It is well known that if $u \in W^{1,1}(\O;{\mathcal{M}})$, then $\nabla u(x) \in [T_{u(x)}({\mathcal{M}})]^N$ for ${\mathcal{L}}^N$-a.e. $x \in \O$. The analogue statement for $BV$-maps is given in Lemma \ref{manifold} below.
\begin{lemma}\label{manifold} For every $u \in BV(\O;{\mathcal{M}})$, \begin{align} \label{aplimM}&\tilde u(x) \in {\mathcal{M}}\text{ for every } x \in \O\setminus S_u\,;\\[0.2cm] \label{jumpM}&u^\pm(x) \in {\mathcal{M}}\text{ for every }x \in J_u\,;\\[0.2cm] \label{gradM}&\nabla u(x) \in [T_{u(x)}({\mathcal{M}})]^N \text{ for } {\mathcal{L}}^N\text{-a.e. }x \in \O\,;\\[0.2cm]
\label{cantM}&\displaystyle A(x):=\frac{dD^c u}{d|D^c u|}(x) \in [T_{\tilde u(x)}({\mathcal{M}})]^N \text{ for }|D^c u|\text{-a.e. }x \in \O\,. \end{align} \end{lemma}
\begin{proof} We first show (\ref{aplimM}). By definition of the space $BV(\O;{\mathcal{M}})$, $u(y)\in{\mathcal{M}}$ for a.e. $y\in \O$. Therefore for any
$x\in\O\setminus S_u$, we have $|u(y)-\tilde u(x)|\geq \text{dist}(\tilde u(x),{\mathcal{M}})$ for a.e. $y\in\O$. By definition of $S_u$, this yields $\text{dist}(\tilde u(x),{\mathcal{M}})=0$, {\it i.e.}, $\tilde u(x)\in {\mathcal{M}}$. Arguing as for the approximate limit points, one obtains (\ref{jumpM}).
Now it remains to prove \eqref{gradM} and \eqref{cantM}. We introduce the function $\Phi:{\mathbb{R}}^d\to{\mathbb{R}}^d$ defined by $$\Phi(s)=\chi(\delta^{-1}\text{dist}(s,{\mathcal{M}})^2)\, \big(s-\Pi_{\mathcal{M}}(s)\big)\,,$$
where $\Pi_{\mathcal{M}}$ denotes the nearest point projection onto ${\mathcal{M}}$, $\chi\in {\mathcal{C}}_c^\infty({\mathbb{R}};[0,1])$ with $\chi(t)=1$ for $|t|\leq 1$, $\chi(t)=0$ for $|t|\geq2$, and $\delta>0$ is small enough so that $\Pi_{\mathcal{M}}$ is well defined and smooth on $\{{\rm dist}(\cdot,{\mathcal{M}})^2<2\delta\}$, whence $\Phi \in {\mathcal{C}}^1({\mathbb{R}}^d;{\mathbb{R}}^d)$. Note that for every $s \in {\mathcal{M}}$, $\Phi(s)=0$ and $\nabla\Phi(s)={\rm Id}-P_s$, where $P_s$ denotes the orthogonal projection of ${\mathbb{R}}^d$ onto $T_s({\mathcal{M}})$, so that \begin{equation}\label{ker} \text{Ker}\,\nabla\Phi(s)=T_s({\mathcal{M}})\,. \end{equation} By the Chain Rule formula in $BV$ (see, {\it e.g., } \cite[Theorem~3.96]{AFP}), $\Phi\circ u\in BV(\O;{\mathbb{R}}^d)$ and \begin{align*} D(\Phi\circ u)=&\,\nabla\Phi(u)\nabla u \,\mathcal{L}^N\res \, \O+\nabla \Phi(\tilde u) D^cu +\big(\Phi(u^+)-\Phi(u^-)\big)\otimes\nu_u\,\mathcal{H}^{N-1}\res \, J_u\\
=&\,\nabla\Phi(u)\nabla u \,\mathcal{L}^N\res \, \O+\nabla \Phi(\tilde u)A|D^cu|\,, \end{align*} thanks to \eqref{jumpM}. On the other hand, $\Phi\circ u=0$ a.e. in $\O$ since $u(x)\in{\mathcal{M}}$ for a.e. $x\in\O$. Therefore we have that
$D(\Phi\circ u)\equiv 0$. Since $\mathcal{L}^N\res \, \O$ and $|D^c u|$ are mutually singular measures, we infer that $\nabla\Phi(u(x))\nabla u(x)=0$ for $\mathcal{L}^N$-a.e. $x\in\O$
and $\nabla \Phi(\tilde u(x))A(x)=0$ for $|D^cu|$-a.e. $x\in\O$. Hence \eqref{gradM} and \eqref{cantM} follow from \eqref{ker} together with \eqref{aplimM}. \end{proof}
In \cite{B,BZ}, density results for smooth maps in manifold valued Sobolev spaces have been established. In the following theorem, we summarize these results in the $W^{1,1}$ case only. Let $\mathcal S$ be the family of all finite unions of subsets contained in an $(N-2)$-dimensional submanifold of ${\mathbb{R}}^N$.
\begin{theorem}\label{density}Let ${\mathcal{D}}(\O;{\mathcal{M}}) \subset W^{1,1}(\O;{\mathcal{M}})$ be defined by $${\mathcal{D}}(\O;{\mathcal{M}}):=\begin{cases} W^{1,1}(\O;{\mathcal{M}})\cap{\mathcal{C}}^\infty(\O;{\mathcal{M}}) & \text{if $\,\pi_1({\mathcal{M}})=0$}\,,\\[10pt] \big\{ u \in W^{1,1}(\O;{\mathcal{M}})\cap {\mathcal{C}}^\infty(\O \setminus \Sigma;{\mathcal{M}}) \text{ for some } \Sigma \in \mathcal S \big\}
& \text{otherwise}\,. \end{cases}$$ Then ${\mathcal{D}}(\O;{\mathcal{M}})$ is dense in $W^{1,1}(\O;{\mathcal{M}})$ for the strong $W^{1,1}(\O;{\mathbb{R}}^d)$-topology. \end{theorem}
We now present a useful projection technique (taken from \cite{Dem} for ${\mathcal{M}}={\mathbb{S}^{d-1}}$). It was first introduced in \cite{HKL,HL}, and makes use of an averaging device going back to \cite{FF}. We sketch the proof for the convenience of the reader.
\begin{proposition}\label{proj} Let ${\mathcal{M}}$ be a compact connected $m$-dimensional smooth submanifold of ${\mathbb{R}}^d$ without boundary, and let $v \in W^{1,1}(\O;{\mathbb{R}}^d) \cap {\mathcal{C}}^\infty(\O\setminus \Sigma;{\mathbb{R}}^d)$ for some $\Sigma\in\mathcal{S}$ such that $v(x) \in {\rm co}({\mathcal{M}})$ for a.e. $x \in \O$. Then there exists $w \in W^{1,1}(\O;{\mathcal{M}})$ satisfying $w=v$ a.e. in $\big\{x \in \O\setminus\Sigma: v(x) \in {\mathcal{M}} \big\}$ and \begin{equation}\label{1127}
\int_\O |\nabla w|\, dx \leq C_\star \int_\O |\nabla v|\,dx\,,\end{equation} for some constant $C_\star >0$ which only depends on $d$ and ${\mathcal{M}}$. \end{proposition}
\begin{proof} According to \cite[Lemma 6.1]{HL} (which holds for $p=1$), there exist a compact Lipschitz polyhedral set $X\subset{\mathbb{R}}^d$ of codimension greater than or equal to $2$, and a locally Lipschitz map $\pi:{\mathbb{R}}^d \setminus X \to {\mathcal{M}}$ such that \begin{equation}\label{gradproj}
\int_{B^d(0,R)} |\nabla \pi(s)|\, ds <+\infty \quad \text{ for every }R<+\infty\,. \end{equation} Moreover, in a neighborhood of ${\mathcal{M}}$ the mapping $\pi$ is smooth of constant rank equal to $m$.
We argue as in the proof of \cite[Theorem 6.2]{HL}. Let $B$ be an open ball in ${\mathbb{R}}^d$ containing ${\mathcal{M}} \cup X$, and let $\d>0$ be small enough so that the nearest point projection on ${\mathcal{M}}$ is a well defined smooth mapping in the $\d$-neighborhood of ${\mathcal{M}}$. Fix $\sigma < \inf\{\d,\text{dist}({\rm co}({\mathcal{M}}),\partial B)\}$, and for $a \in B^d(0,\sigma)$ define the translates $B_a:=a+B$ and $X_a:=a+X$, and the projection $\pi_a:B_a \setminus X_a \to {\mathcal{M}}$ by $\pi_a(s):=\pi(s-a)$. Since $\pi$ has full rank and is smooth in a neighborhood of ${\mathcal{M}}$, by the Inverse Function Theorem the number \begin{equation}\label{gradlambda}
\Lambda:=\sup_{a \in B^d(0,\sigma)} {\rm Lip}\big({\pi_a}_{|{\mathcal{M}}}\big)^{-1} \end{equation} is finite and only depends on ${\mathcal{M}}$. Using Sard's lemma, one can show that $\pi_a \circ v \in W^{1,1}(\O;{\mathcal{M}})$ for ${\mathcal{L}}^d$-a.e. $a \in B^d(0,\sigma)$. Then Fubini's theorem together with the Chain Rule formula yields \begin{multline*} \int_{B^d(0,\sigma)}
\int_{\O} |\nabla (\pi_a \circ v)(x)|\, d{\mathcal{L}}^N(x) \, d{\mathcal{L}}^d(a) \leq\,\\
\leq \int_\O |\nabla v(x)|
\left(\int_{B^d(0,\sigma)}|\nabla \pi(v(x)-a)| \, d{\mathcal{L}}^d(a) \right)\, d{\mathcal{L}}^N(x)
\leq \,\\
\leq \left(\int_{B} |\nabla \pi(s)|\, d{\mathcal{L}}^d(s)\right)
\left(\int_\O |\nabla v(x)|\,d{\mathcal{L}}^N(x)\right)\,. \end{multline*} Therefore we can find $a \in B^d(0,\sigma)$ such that
\begin{equation}\label{gradpia}\int_\O |\nabla (\pi_a \circ v)|\, dx
\leq C{\mathcal{L}}^d\left(B^d(0,\sigma)\right)^{-1} \int_\O |\nabla v|\,dx\,,\end{equation} where we used (\ref{gradproj}). To conclude, it suffices to set $w:=\big({\pi_a}_{|{\mathcal{M}}}\big)^{-1} \circ \pi_a \circ v$, and (\ref{1127}) arises as a consequence of (\ref{gradlambda}) and (\ref{gradpia}). \end{proof}
\section{Properties of homogenized energy densities}
In this section we present the main properties of the energy densities $Tf_\text{hom}$ and $\vartheta_\text{hom}$ defined in (\ref{Tfhom}) and (\ref{thetahom}). In particular we will prove that $\vartheta_\text{hom}$ is well defined in the sense that the limit in (\ref{thetahom}) exists.
\subsection{The tangentially homogenized bulk energy}\label{thbe}
We start by considering the bulk energy density $Tf_{\rm hom}$ defined in \eqref{Tfhom}. As in \cite{BM} we first construct a new energy density $g:{\mathbb{R}}^N\times{\mathbb{R}}^d\times{\mathbb{R}}^{d\times N}\to[0,+\infty)$ satisfying $$g(\cdot,s,\xi)=f(\cdot,\xi)\quad \text{and}\quad g_{\rm hom}(s,\xi)=Tf_{\rm hom}(s,\xi)\quad \text{for $s\in{\mathcal{M}}$ and $\xi\in[T_s({\mathcal{M}})]^N$}\,.$$ Hence, upon extending $Tf_\text{hom}$ by $g_\text{hom}$ outside the set $\big\{(s,\xi) \in {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}:\; s \in {\mathcal{M}},\, \xi \in [T_s({\mathcal{M}})]^N\big\}$, we will tacitly assume $Tf_\text{hom}$ to be defined over the whole ${\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}$. We proceed as follows. \vskip5pt
For $s \in {\mathcal{M}}$ we denote by $P_s:{\mathbb{R}}^d \to T_s({\mathcal{M}})$ the orthogonal projection from ${\mathbb{R}}^d$ onto $T_s({\mathcal{M}})$, and we set $$\mathbf{P}_s(\xi):=(P_s(\xi_1),\ldots,P_s(\xi_N)) \quad \text{for $\xi=(\xi_1,\ldots,\xi_N)\in{\mathbb{R}}^{d\times N}\,$.}$$ For $\delta_0>0$ fixed, let $\mathcal U:=\big\{s \in{\mathbb{R}}^d\,:\,\text{dist}(s,{\mathcal{M}})<\d_0\big\}$ be the $\d_0$-neighborhood of ${\mathcal{M}}$. Choosing $\delta_0>0$ small enough, we may assume that the nearest point projection $\Pi: \mathcal U \to {\mathcal{M}}$ is a well defined Lipschitz mapping. Then the map $s \in \mathcal U \mapsto P_{\Pi(s)}$ is Lipschitz. Now we introduce a cut-off function $\chi \in {\mathcal{C}}^\infty_c({\mathbb{R}}^d;[0,1])$ such that $\chi(s)=1$ if $\text{dist}(s,{\mathcal{M}}) \leq \delta_0/2$, and $\chi(s)=0$ if $\text{dist}(s,{\mathcal{M}}) \geq 3\delta_0/4$, and we define $$\mathbb{P}_s(\xi):=\chi(s) \mathbf{P}_{\Pi(s)}(\xi)\quad \text{for $(s,\xi) \in {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}\,$.}$$ Given the Carath\'eodory integrand $f:{\mathbb{R}}^N \times {\mathbb{R}}^{d\times N} \to [0,+\infty)$ satisfying assumptions $(H_1)$ to $(H_3)$, we construct the new integrand $g: {\mathbb{R}}^N\times{\mathbb{R}}^d\times{\mathbb{R}}^{d\times N}\to[0,+\infty)$ as \begin{equation}\label{defig}
g(y,s,\xi):=f(y,\mathbb{P}_s(\xi))+|\xi-\mathbb{P}_s(\xi)|\,. \end{equation}
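For instance, in the model case ${\mathcal{M}}={\mathbb{S}^{d-1}}$ one has $T_s({\mathbb{S}^{d-1}})=s^\perp$ and $\Pi(s)=s/|s|$ on $\mathcal U$, so that, on the set $\{\chi>0\}$,
$$\mathbb{P}_s(\xi)=\chi(s)\bigg(\xi_1-\frac{(s\cdot\xi_1)\,s}{|s|^2}\,,\ldots,\,\xi_N-\frac{(s\cdot\xi_N)\,s}{|s|^2}\bigg)\,,$$
and, for $s\in{\mathbb{S}^{d-1}}$, $g(y,s,\xi)=f(y,\mathbf{P}_s(\xi))+|\xi-\mathbf{P}_s(\xi)|$ penalizes the normal component of $\xi$ on top of $f$ evaluated at its tangential part.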
We summarize in the following lemma the main properties of $g$.
\begin{lemma}\label{defg} The integrand $g$ as defined in (\ref{defig}) is a Carath\'eodory function satisfying \begin{equation}\label{idfg} g(y,s,\xi)=f(y,\xi)\quad\text{and}\quad g^\infty(y,s,\xi)=f^\infty(y,\xi)\quad \text{for $s \in {\mathcal{M}}$ and $\xi \in [T_s({\mathcal{M}})]^N\,$,} \end{equation} and \begin{itemize} \item[(i)] $g$ is $1$-periodic in the first variable; \item[(ii)] there exist $0<\alpha'\leq \b'$ such that \begin{equation}\label{pgrowth}
\alpha'|\xi|\leq g(y,s,\xi)\leq
\b'(1+|\xi|)\quad\text{for every $(s,\xi)\in {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}$ and a.e. $y \in {\mathbb{R}}^N$}\,;\end{equation} \item[(iii)] there exist $C>0$ and $C'>0$ such that \begin{equation}\label{moduluscont}
|g(y,s,\xi)-g(y,s',\xi)| \leq C|s-s'|\; |\xi|\,,\end{equation} \begin{equation}\label{lipg}
|g(y,s,\xi) - g(y,s,\xi')| \leq C'|\xi-\xi'|\end{equation} for every $s$, $s' \in {\mathbb{R}}^d$, every $\xi$, $\xi' \in {\mathbb{R}}^{d \times N}$ and a.e. $y \in {\mathbb{R}}^N$; \item[(iv)] if in addition $(H_4)$ holds, there exist $0<q<1$ and $C''>0$ such that \begin{equation}\label{grec}
|g(y,s,\xi) - g^\infty(y,s,\xi)| \leq C''(1+|\xi|^{1-q}) \end{equation} for every $(s,\xi) \in {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}$ and a.e. $y \in {\mathbb{R}}^N\,$. \end{itemize} \end{lemma} \vskip5pt
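Let us point out that \eqref{idfg} can be read directly from \eqref{defig}: if $s \in {\mathcal{M}}$ then $\chi(s)=1$ and $\Pi(s)=s$, so that $\mathbb{P}_s(\xi)=\mathbf{P}_s(\xi)=\xi$ whenever $\xi \in [T_s({\mathcal{M}})]^N$. By linearity of $\mathbb{P}_s$ we then get, for such $(s,\xi)$ and every $t>0$,
$$g(y,s,t\xi)=f(y,t\xi)+|t\xi-t\xi|=f(y,t\xi)\,,$$
and \eqref{idfg} follows by taking $t=1$ and, for the recession functions, by dividing by $t$ and letting $t\to+\infty$.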
We can now state the properties of $Tf_\text{hom}$ and the relation between $Tf_\text{hom}$ and $g_\text{hom}$ through the homogenization procedure.
\begin{proposition}\label{properties1} Let $f:{\mathbb{R}}^N \times {\mathbb{R}}^{d\times N} \to [0,+\infty)$ be a Carath\'eodory integrand satisfying $(H_1)$ to $(H_3)$. Then the following properties hold: \begin{itemize} \item[(i)] for every $s \in {\mathcal{M}}$ and $\xi \in [T_s({\mathcal{M}})]^N$, \begin{equation}\label{identhomform} Tf_{\rm hom}(s,\xi)=g_{\rm hom}(s,\xi)\,, \end{equation} where $$g_{\rm hom}(s,\xi):=\lim_{t\to+\infty}\inf_{\varphi}\bigg\{ - \hskip -1em \int_{(0,t)^N} g(y,s,\xi+\nabla \varphi(y))\, dy: \varphi \in W^{1,\infty}_0((0,t)^N;{\mathbb{R}}^d) \bigg\}$$ is the usual homogenized energy density of $g$ (see, e.g., \cite[Chapter~14]{BD});\\ \item[(ii)] the function $Tf_{\rm hom}$ is tangentially quasiconvex, {\it i.e.}, for all $s \in {\mathcal{M}}$ and all $\xi \in [T_s({\mathcal{M}})]^N$, $$Tf_{\rm hom}(s,\xi) \leq \int_Q Tf_{\rm hom}(s,\xi + \nabla \varphi(y))\, dy$$ for every $\varphi \in W^{1,\infty}_0(Q;T_s({\mathcal{M}}))$. In particular $Tf_{\rm hom}(s,\cdot)$ is rank one convex;\\ \item[(iii)] there exists $C>0$ such that \begin{equation}\label{pgT}
\a|\xi|\leq Tf_{\rm hom}(s,\xi) \leq \b(1+|\xi|)\,, \end{equation} and \begin{equation}\label{plipT}
|Tf_{\rm hom}(s,\xi)-Tf_{\rm hom}(s,\xi')| \leq C|\xi-\xi'| \end{equation} for every $s \in {\mathcal{M}}$ and $\xi$, $\xi' \in [T_s({\mathcal{M}})]^N$; \item[(iv)] there exists $C_1>0$ such that \begin{equation}\label{hyp4}
|Tf_{\rm hom}(s,\xi)-Tf_{\rm hom}(s',\xi)| \leq C_1|s-s'|(1+|\xi|)\,, \end{equation} for every $s$, $s' \in {\mathbb{R}}^d$ and $\xi \in {\mathbb{R}}^{d \times N}$. In particular $Tf_{\rm hom}$ is continuous;\\ \item[(v)] if in addition $(H_4)$ holds, there exist $C_2>0$ and $0<q<1$ such that \begin{equation}\label{hyp5}
|Tf^{\,\infty}_{\rm hom}(s,\xi)-Tf_{\rm hom}(s,\xi)| \leq C_2(1+|\xi|^{1-q})\,, \end{equation} for every $(s,\xi)\in {\mathbb{R}}^d\times {\mathbb{R}}^{d \times N}$. \end{itemize} \end{proposition}
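When ${\mathcal{M}}={\mathbb{S}^{d-1}}$, for instance, $T_s({\mathbb{S}^{d-1}})=s^\perp$ is a hyperplane of ${\mathbb{R}}^d$, and claim {\it (ii)} simply asserts that, for each $s\in{\mathbb{S}^{d-1}}$, the restriction of $Tf_{\rm hom}(s,\cdot)$ to $[s^\perp]^N$ is quasiconvex in the classical sense upon identifying $s^\perp$ with ${\mathbb{R}}^{d-1}$:
$$Tf_{\rm hom}(s,\xi) \leq \int_Q Tf_{\rm hom}(s,\xi + \nabla \varphi(y))\, dy \qquad \text{for every }\varphi \in W^{1,\infty}_0(Q;s^\perp)\text{ and }\xi \in [s^\perp]^N\,.$$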
\begin{remark}\label{reminfty} Observe that, if $f$ satisfies assumption $(H_3)$, then $f^\infty$ satisfies $(H_3)$ as well. In particular the function $f^\infty$ is Carath\'eodory, $1$-periodic in the first variable, and positively 1-homogeneous with respect to the second variable. In view of the growth and coercivity condition $(H_2)$, one gets that
\begin{equation}\label{finfty1gc} \a |\xi| \leq f^\infty(y,\xi) \leq
\b|\xi| \quad \text{ for all }\xi \in {\mathbb{R}}^{d \times N} \text{ and a.e. }y \in {\mathbb{R}}^N\,. \end{equation} Then, as for $f^\infty$, the function $g^\infty$ is Carath\'eodory, $1$-periodic in the first variable, and positively 1-homogeneous with respect to the second variable. Moreover,
$$\alpha'|\xi|\leq g^\infty(y,s,\xi)\leq
\b'|\xi|\quad\text{for every $(s,\xi)\in {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}$ and a.e. $y \in {\mathbb{R}}^N$}\,, $$ and $g^\infty$ satisfies estimates analogous to \eqref{moduluscont} and \eqref{lipg}. Hence we may apply classical homogenization results to $g^\infty$. In addition, in view of \eqref{idfg}, claim {\it (i)} in Proposition \ref{properties1} holds for $f^\infty$ and $g^\infty$, and we have $$T(f^\infty)_{\rm hom}(s,\xi)=(g^\infty)_{\rm hom}(s,\xi) \quad \text{for every $s \in {\mathcal{M}}$ and $\xi \in [T_s({\mathcal{M}})]^N\,$.}$$ In particular $T(f^\infty)_{\rm hom}$ will be tacitly extended by $(g^\infty)_{\rm hom}$. \end{remark}
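To fix ideas, a model integrand compatible with the above growth conditions is the periodically weighted area-type density
$$f(y,\xi):=a(y)\sqrt{1+|\xi|^2}\,,\qquad f^\infty(y,\xi)=a(y)|\xi|\,,$$
where $a$ is measurable, $1$-periodic and $\a\leq a(\cdot)\leq\b$. In this case $0\leq f(y,\xi)-f^\infty(y,\xi)=a(y)\big(\sqrt{1+|\xi|^2}-|\xi|\big)\leq \b$, so that a decay estimate of the type \eqref{grec} (with $f$ in place of $g$) holds with any exponent $0<q<1$.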
\noindent {\bf Proof of Proposition \ref{properties1}.} The proofs of claims {\it (i)-(iii)} can be obtained exactly as in \cite[Proposition 2.1]{BM} and we shall omit them. It remains to prove {\it (iv)} and {\it (v)}.
Fix $s,s'\in{\mathbb{R}}^d$ and $\xi \in {\mathbb{R}}^{d \times N}$. For any $\eta>0$, we may find $k \in {\mathbb{N}}$ and $\varphi \in W_0^{1,\infty}((0,k)^N;{\mathbb{R}}^d)$ such that $$- \hskip -1em \int_{(0,k)^N} g(y,s,\xi+\nabla \varphi)\, dy \leq g_\text{hom}(s,\xi) + \eta\,.$$ We infer from \eqref{pgrowth}
that $\alpha'|\xi|\leq g_{\rm hom}(s,\xi)\leq \beta'(1+|\xi|)$ and consequently \begin{equation}\label{varphi}
- \hskip -1em \int_{(0,k)^N}|\nabla \varphi|\, dy \leq C(1+|\xi|)\,, \end{equation} for some constant $C>0$ depending only on $\a'$ and $\b'$. Then from (\ref{identhomform}) and \eqref{moduluscont} it follows that \begin{multline*} Tf_\text{hom}(s',\xi)-Tf_\text{hom}(s,\xi)= g_\text{hom}(s',\xi)-g_\text{hom}(s,\xi)\leq\\ \leq - \hskip -1em \int_{(0,k)^N} \big(g(y,s',\xi+\nabla \varphi)-g(y,s,\xi+\nabla
\varphi)\big)\, dy+ \eta \leq\\ \leq C |s-s'|- \hskip -1em \int_{(0,k)^N}|\xi
+\nabla \varphi|\, dy+\eta\leq C|s-s'|(1+|\xi|)+\eta\,. \end{multline*} We deduce relation (\ref{hyp4}) inverting the roles of $s$ and $s'$, and sending $\eta$ to zero. In particular, we obtain that $Tf_\text{hom}$ is continuous as a consequence of (\ref{hyp4}) and (\ref{plipT}).
To show (\ref{hyp5}), let us consider sequences $t_n \nearrow +\infty$, $k_n \in {\mathbb{N}}$ and $\varphi_n\in W^{1,\infty}_0((0,k_n)^N;T_s({\mathcal{M}}))$ such that \begin{equation}\label{comptfhoninf} Tf_\text{hom}^{\, \infty}(s,\xi)=\lim_{n \to +\infty}\frac{Tf_\text{hom}(s,t_n\xi)}{t_n}\,, \end{equation} and $$- \hskip -1em \int_{(0,k_n)^N}f(y,t_n \xi + t_n \nabla \varphi_n)\, dy \leq Tf_\text{hom}(s,t_n\xi)+\frac{1}{n}\,.$$ Then $(H_2)$ and (\ref{pgT}) yield
\begin{equation}\label{varphi_n}- \hskip -1em \int_{(0,k_n)^N}|\nabla \varphi_n|\, dy \leq C(1+|\xi|)\,, \end{equation} for some constant $C>0$ depending only on $\a$ and $\b$. Using $(H_4)$ and \eqref{comptfhoninf}, we derive that \begin{align*}
Tf_\text{hom}(s,\xi) - Tf_\text{hom}^{\,\infty}(s,\xi)
\leq & \liminf_{n \to +\infty} \Bigg\{- \hskip -1em \int_{(0,k_n)^N}\bigg|
f(y,\xi+\nabla\varphi_n)- f^{\,\infty}(y,\xi +\nabla \varphi_n)\bigg|\, dy\, +\\
&\,+ - \hskip -1em \int_{(0,k_n)^N}\bigg| f^{\,\infty}(y,\xi+\nabla \varphi_n)-\frac{ f(y,t_n \xi + t_n \nabla
\varphi_n)}{t_n}\bigg|\, dy\Bigg\}\\ \leq & \liminf_{n \to +\infty} \Bigg\{C
- \hskip -1em \int_{(0,k_n)^N}\!(1+|\xi+\nabla \varphi_n|^{1-q})\, dy\\
&+ \frac{C}{t_n}- \hskip -1em \int_{(0,k_n)^N}\!(1+t_n^{1-q}|\xi+\nabla
\varphi_n|^{1-q})\, dy\Bigg\}\,, \end{align*} where we have also used the fact that $f^{\,\infty}(y,\cdot)$ is positively homogeneous of degree one in the last inequality. Then (\ref{varphi_n}) and H\"older's inequality lead to \begin{equation}\label{firstineq}
Tf_\text{hom}(s,\xi) - Tf_\text{hom}^{\,\infty}(s,\xi) \leq C(1+|\xi|^{1-q})\,. \end{equation} Conversely, given $k\in {\mathbb{N}}$ and $\varphi \in W_0^{1,\infty}((0,k)^N;T_s({\mathcal{M}}))$, we deduce from $(H_2)$ that
$$\frac{f(\cdot,t(\xi+\nabla\varphi(\cdot)))}{t} \leq \b(1+|\xi+\nabla
\varphi|) \in L^1((0,k)^N)$$ whenever $t > 1$. Then Fatou's lemma implies \begin{equation*} Tf_\text{hom}^\infty(s,\xi) \leq \limsup_{t \to +\infty} - \hskip -1em \int_{(0,k)^N}\frac{f(y,t\xi+t \nabla \varphi)}{t}\, dy \leq - \hskip -1em \int_{(0,k)^N}f^\infty(y,\xi+ \nabla \varphi)\, dy\,. \end{equation*} Taking the infimum over all admissible $\varphi$'s and letting $k\to+\infty$, we infer \begin{equation}\label{remdensinf} Tf_\text{hom}^\infty(s,\xi) \leq T(f^\infty)_\text{hom}(s,\xi)\,. \end{equation} For $\eta>0$ arbitrary small, consider $k \in {\mathbb{N}}$ and $\varphi \in W^{1,\infty}_0((0,k)^N;T_s({\mathcal{M}}))$ such that $$- \hskip -1em \int_{(0,k)^N}f(y,\xi + \nabla \varphi)\, dy \leq Tf_\text{hom}(s,\xi)+\eta\,.$$ In view of $(H_2)$ and (\ref{pgT}), it turns out that \eqref{varphi} holds with constant $C>0$ only depending on $\a$ and $\b$. Then it follows from \eqref{remdensinf} that \begin{multline*} Tf_\text{hom}^\infty(s,\xi) - Tf_\text{hom}(s,\xi) \leq T(f^\infty)_\text{hom}(s,\xi) - Tf_\text{hom}(s,\xi)\leq\\
\leq - \hskip -1em \int_{(0,k)^N}|f^\infty(y,\xi + \nabla \varphi)- f(y,\xi
+ \nabla \varphi)|\, dy+\eta \leq C- \hskip -1em \int_{(0,k)^N}(1+|\xi + \nabla \varphi|^{1-q})\, dy+\eta\,, \end{multline*} where we have used $(H_4)$ in the last inequality. Using H\"older's inequality, relation (\ref{varphi}) together with the arbitrariness of $\eta$ yields \begin{equation}\label{secineq}Tf_\text{hom}^\infty(s,\xi) -
Tf_\text{hom}(s,\xi) \leq C(1+|\xi|^{1-q})\,.\end{equation} Gathering (\ref{firstineq}) and (\ref{secineq}) we conclude the proof of (\ref{hyp5}). \prbox
\subsection{The homogenized surface energy}\label{sectsurf}
\noindent We now present the homogenized surface energy density $\vartheta_\text{hom}$. We start by introducing some useful notation.
Given an orthonormal basis $\nu=(\nu_1,\ldots,\nu_N)$ of ${\mathbb{R}}^N$ and $(a,b) \in {\mathcal{M}} \times {\mathcal{M}}$, we define $$Q_\nu:=\Big\{\alpha_1\nu_1+\ldots+\alpha_N\nu_N\;:\;\alpha_1,\ldots,\alpha_N \in(-1/2,1/2)\Big\}\,,$$
and for $x \in {\mathbb{R}}^N$, we set $\|x\|_{\nu,\infty}:=\sup_{i \in
\{1,\ldots,N\}}|x\cdot\nu_i|$, $x_\nu:=x\cdot \nu_1$ and $x':=(x\cdot\nu_2)\nu_2 +\ldots+(x\cdot \nu_N)\nu_N$ so that $x$ can be identified to the pair $(x',x_\nu)$. Let $u_{a,b,\nu}:Q_\nu \to {\mathcal{M}}$ be the function defined by $$u_{a,b,\nu}(x):=\begin{cases} a & \text{if } x_\nu>0\,,\\[5pt] b & \text{if } x_\nu \leq 0\,. \end{cases}$$ We introduce the class of functions $${\mathcal{A}}_t(a,b,\nu) :=\Big\{\varphi \in W^{1,1}(t Q_\nu;{\mathcal{M}}): \varphi=u_{a,b,\nu} \text{ on }\partial(tQ_\nu)\Big\}\,.$$ We have the following result.
\begin{proposition}\label{limitsurfenerg} For every $(a,b,\nu_1)\in{\mathcal{M}}\times{\mathcal{M}}\times{\mathbb{S}^{N-1}}$, there exists the limit \begin{eqnarray*} \vartheta_{\rm hom}(a,b,\nu_1) &:=& \lim_{t\to+\infty}\,\inf_\varphi \left\{\frac{1}{t^{N-1}} \int_{t Q_\nu} f^\infty(y,\nabla \varphi(y))\, dy : \varphi \in {\mathcal{A}}_t(a,b,\nu) \right\}\,, \end{eqnarray*} where $\nu=(\nu_1,\ldots,\nu_N)$ is any orthonormal basis of ${\mathbb{R}}^N$ with first element equal to $\nu_1$ (the limit being independent of such a choice). \end{proposition}
The proof of Proposition \ref{limitsurfenerg} is quite indirect and is based on an analogous result for a similar surface energy density $\tilde \vartheta_\text{hom}$ (see \eqref{surfen2} below). We will prove in Proposition \ref{limsurf2} that the two densities coincide. \vskip5pt
Given $a$ and $b\in {\mathcal{M}}$, we introduce the family of geodesic curves
between $a$ and $b$ by $$\mathcal{G}(a,b):=\bigg\{\g\in {\mathcal{C}}^{\infty}({\mathbb{R}};{\mathcal{M}}):\;\g(t)=a\text{ if } t\geq 1/2,\, \g(t)=b\text{ if }
t\leq -1/2\,,\;\int_{\mathbb{R}}|\dot \g|\, dt=\mathbf d_{\mathcal{M}}(a,b)\bigg\}\,,$$ where $\mathbf d_{\mathcal{M}}$ denotes the geodesic distance on ${\mathcal{M}}$. We define for $\e>0$ and $\nu=(\nu_1,\ldots,\nu_N)$ an orthonormal basis of ${\mathbb{R}}^N$, $$\mathcal{B}_\e(a,b,\nu):=\Big\{u \in W^{1,1}(Q_\nu;{\mathcal{M}})\;:\;u(x)=\g(x_\nu/\e) \text{ on }\partial Q_\nu\text{ for some } \g\in\mathcal{G}(a,b)\Big\}\,.$$
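Observe that $\mathcal{G}(a,b)\neq\emptyset$: since ${\mathcal{M}}$ is compact and connected, we may pick a minimizing geodesic $\sigma\in{\mathcal{C}}^\infty([0,1];{\mathcal{M}})$ with $\sigma(0)=b$, $\sigma(1)=a$ and length $\mathbf d_{\mathcal{M}}(a,b)$, and any nondecreasing $\varphi\in{\mathcal{C}}^\infty({\mathbb{R}};[0,1])$ with $\varphi\equiv 0$ on $(-\infty,-1/2]$ and $\varphi\equiv 1$ on $[1/2,+\infty)$ yields an element $\g:=\sigma\circ\varphi$ of $\mathcal{G}(a,b)$, since
$$\int_{\mathbb{R}}|\dot\g|\, dt=\int_{\mathbb{R}}|\dot\sigma(\varphi(t))|\,\dot\varphi(t)\, dt=\int_0^1|\dot\sigma(u)|\, du=\mathbf d_{\mathcal{M}}(a,b)\,.$$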
\begin{proposition}\label{limsurf2} For every $(a,b)\in{\mathcal{M}}\times{\mathcal{M}}$ and every orthonormal basis $\nu=(\nu_1,\ldots,\nu_N)$ of ${\mathbb{R}}^N$, there exists the limit \begin{equation}\label{surfen2} \tilde\vartheta_{\rm hom}(a,b,\nu) := \lim_{\e\to 0}\,\inf_u \left\{ \int_{Q_\nu} f^\infty\left(\frac{x}{\e},\nabla u\right) dx : u \in \mathcal{B}_\e(a,b,\nu) \right\}\,. \end{equation} Moreover $\tilde\vartheta_{\rm hom}(a,b,\nu)$ only depends on $a$, $b$ and $\nu_1$. \end{proposition}
\begin{proof} The proof follows the scheme of the one in \cite[Proposition~2.2]{BDV}. We fix $a$ and $b\in {\mathcal{M}}$. For every $\e>0$ and every orthonormal basis $\nu=(\nu_1,\ldots,\nu_N)$ of ${\mathbb{R}}^N$, we set $$ I_\e(\nu)= I_\e(a,b,\nu):= \inf \left\{ \int_{Q_\nu} f^\infty\left(\frac{x}{\e},\nabla u\right) dx : u \in \mathcal{B}_\e(a,b,\nu) \right\}\,. $$ We divide the proof into several steps.\vskip5pt
{\bf Step 1.} Let $\nu$ and $\nu'$ be two orthonormal bases of ${\mathbb{R}}^N$ with equal first vector, {\it i.e.}, $\nu_1=\nu'_1$. Suppose that $\nu$ is a rational basis, {\it i.e.}, for all $i\in\{1,\ldots,N\}$ there exists $\gamma_i\in{\mathbb{R}}\setminus\{0\}$ such that $v_i:=\gamma_i\nu_i\in{\mathbb{Z}}^N$. Similarly to Step 1 of the proof of \cite[Proposition 2.2]{BDV}, we readily obtain that \begin{equation}\label{complimsupinf} \limsup_{\e \to0}\, I_\e (\nu')\leq \liminf_{\e \to0}\, I_\e(\nu)\,. \end{equation}
{\bf Step 2.} Let $\nu$ and $\nu'$ be two orthonormal rational bases of ${\mathbb{R}}^N$ with equal first vector. By Step~1 we immediately obtain that the limits $\displaystyle \lim_{\e\to0}I_\e(\nu)$ and $\displaystyle\lim_{\e\to0}I_\e(\nu')$ exist and are equal.
\vskip5pt
{\bf Step 3.} We claim that for every $\sigma>0$ there exists $\delta>0$ (independent of $a$ and $b$) such that if $\nu$ and $\nu'$ are two orthonormal bases of ${\mathbb{R}}^N$ with
$|\nu_i-\nu'_i|<\delta$ for every $i=1,\ldots,N$, then $$\liminf_{\e\to0} I_\e(\nu)-K\sigma\leq \liminf_{\e\to0}I_\e(\nu')\leq\limsup_{\e\to0}I_\e(\nu')\leq \limsup_{\e\to0} I_\e(\nu)+K\sigma$$ where $K$ is a positive constant which only depends on ${\mathcal{M}}$, $\beta$ and $N$.
We use the notation $Q_{\nu,\eta}:=(1-\eta)Q_\nu$ where $0<\eta<1$. Let $\sigma>0$ be fixed and let $0<\eta<1$ be such that \begin{equation}\label{condeta} \eta<\frac{1}{34}\quad\text{and}\quad\max\bigg\{1-(1-\eta)^{N-1}\,,\; \frac{(1-\eta)^{N-1}(1-2\eta)^{N-1}}{(1-3\eta)^{N-1}}-(1-2\eta)^{N-1}\bigg\}<\sigma. \end{equation} Consider $\delta_0>0$ (that may be chosen so that $\d_0 \leq \eta/(2\sqrt{N})$) such that for every $0<\delta\leq \delta_0$ and every pair $\nu$ and $\nu'$ of orthonormal bases of ${\mathbb{R}}^N$
satisfying $|\nu_i-\nu'_i|\leq \delta$ for $i=1,\ldots,N$, one has \begin{equation}\label{approxnu} Q_{\nu,3\eta}\subset Q_{\nu',2\eta}\subset Q_{\nu,\eta}\,, \end{equation}
and $\{x\cdot\nu_1'=0\}\cap \partial Q_{\nu,\eta} \subset \{|x\cdot\nu_1|\leq 1/8\}$.
Given $\e>0$ small, we consider $u_\e\in\mathcal{B}_{\e}(a,b,\nu')$ such that $$\int_{Q_{\nu'}}f^\infty\left(\frac{x}{\e},\nabla u_\e\right)dx\leq I_{\e}(\nu') +\sigma\,,$$
where $u_\e(x)=\g_\e(x_{\nu'}/\e)$ for $x\in\partial Q_{\nu'}$. Now we construct $v_\e\in\mathcal{B}_{(1-2\eta)\e}(a,b,\nu)$ satisfying the boundary condition $v_\e(x)=\g_\e\big(x_\nu/(1-2\eta)\e\big)$ for $x\in\partial Q_{\nu}$. Consider $F_\eta:{\mathbb{R}}^N \to {\mathbb{R}}$,
$$F_\eta(x):=\bigg(\frac{1- 2\|x'\|_{\nu,\infty}}{\eta}\bigg)\frac{x_{\nu'}}{1-2\eta}+ \bigg(\frac{\eta-1+
2\|x'\|_{\nu,\infty}}{\eta}\bigg)\frac{x_\nu}{1-2\eta}\,,$$ and define $$v_\e(x):=\begin{cases} \displaystyle u_\e\bigg(\frac{x}{1-2\eta}\bigg) & \text{if }x\in Q_{\nu',2\eta},\\[10pt] \displaystyle \g_\e\bigg(\frac{x_{\nu'}}{(1-2\eta)\e}\bigg) &\text{if } x\in Q_{\nu,\eta}\setminus Q_{\nu',2\eta}\,,\\[8pt] \displaystyle a & \displaystyle\text{if } x \in Q_\nu \setminus Q_{\nu,\eta} \text{ and }x_\nu \geq \frac{1}{4}\,,\\[8pt]
\displaystyle \gamma_\e\bigg(\frac{F_\eta(x)}{\e}\bigg) & \text{if }x \in A_\eta :=\big\{x: |x_\nu|\leq 1/4\big\}\cap(Q_\nu\setminus Q_{\nu,\eta})\,,\\[8pt] \displaystyle b & \displaystyle\text{if } x \in Q_\nu \setminus Q_{\nu,\eta} \text{ and }x_\nu \leq -\frac{1}{4}\,. \end{cases}$$ We can check that $v_\e$ is well defined for $\e$ small enough and that $v_\e\in\mathcal{B}_{(1-2\eta)\e}(a,b,\nu)$. Therefore \begin{align}\label{Ebis} I_{(1-2\eta)\e}(\nu)\leq & \int_{Q_\nu}f^\infty\bigg(\frac{x}{(1-2\eta)\e},\nabla v_\e\bigg)\, dx\nonumber\\ =&\int_{Q_{\nu',2\eta}} f^\infty\bigg(\frac{x}{(1-2\eta)\e},\nabla v_\e\bigg)\,dx+\int_{Q_{\nu,\eta}\setminus Q_{\nu',2\eta}}f^\infty\bigg(\frac{x}{(1-2\eta)\e},\nabla v_\e\bigg)\, dx\nonumber\\ &+\int_{A_{\eta}}f^\infty\bigg(\frac{x}{(1-2\eta)\e},\nabla v_\e\bigg)\, dx=:\,I_1+I_2+I_3\,. \end{align} We now estimate these three integrals. First, we easily get that \begin{equation}\label{E1bis}I_1= (1-2\eta)^{N-1}\int_{Q_{\nu'}}f^\infty\left(\frac{y}{\e},\nabla u_\e\right) dy\leq I_\e(\nu')+\sigma\,. \end{equation} In view of \eqref{approxnu} we have $Q_{\nu,\eta}\subset (1-\eta)(1-2\eta)(1-3\eta)^{-1}Q_{\nu'}=:D_{\eta}$. Then we infer from the growth condition (\ref{finfty1gc}) together with Fubini's theorem that \begin{eqnarray}\label{E2bis}
I_2 & \leq & \beta\int_{D_\eta\setminus Q_{\nu',2\eta}}|\nabla v_\e|\, dx = \frac{\beta}{(1-2\eta)\e} \int_{(D_\eta\setminus Q_{\nu',2\eta})\cap\{|x_{\nu'}|\leq (1-2\eta)\e/2\}}
\bigg|\, \dot \g_\e\bigg(\frac{x_{\nu'}}{(1-2\eta)\e}\bigg)\bigg|\, dx\nonumber\\ &= & \beta\mathcal{H}^{N-1}\big((D_\eta\setminus Q_{\nu',2\eta})\cap \{x_{\nu'}=0\}\big)\frac{1}{(1-2\eta)\e}\int_{-(1-2\eta)\e/2}^{(1-2\eta)\e/2}
\bigg|\, \dot \g_\e\bigg(\frac{t}{(1-2\eta)\e}\bigg)\bigg|\, dt\nonumber\\ &=& \beta \mathbf d_{\mathcal{M}}(a,b)\bigg(\frac{(1-\eta)^{N-1}(1-2\eta)^{N-1}}{(1-3\eta)^{N-1}}-(1-2\eta)^{N-1}\bigg)\,. \end{eqnarray}
Now it remains to estimate $I_3$. To this purpose we first observe that \eqref{approxnu} yields \begin{equation}\label{Feta1} \|\nabla F_\eta\|_{L^\infty(A_\eta;{\mathbb{R}}^N)}\leq C\,, \end{equation} for some absolute constant $C>0$, and
\begin{equation}\label{Feta2}|\nabla F_\eta(x)\cdot \nu_1|\geq 1\quad\text{for a.e. $x\in A_\eta\,$.} \end{equation} Hence, thanks to the growth condition (\ref{finfty1gc}), (\ref{Feta1}) and (\ref{Feta2}), we get that \begin{multline*}
I_3 \leq \beta\int_{A_\eta}|\nabla v_\e|\, dx \leq
\frac{C\beta}{\e} \int_{A_\eta}\bigg|\, \dot
\g_\e\bigg(\frac{F_\eta(x)}{\e}\bigg)\bigg |\, dx \leq \frac{C \beta}{\e} \int_{A_\eta}\bigg|\, \dot
\g_\e\bigg(\frac{F_\eta(x)}{\e}\bigg)\bigg |\, |\nabla F_\eta(x)\cdot\nu_1| \,dx=\\
=C\beta\int_{A'_\eta}\bigg(\frac{1}{\e}\int_{-1/4}^{1/4}\bigg|\,
\dot \g_\e\bigg(\frac{F_\eta(t\nu_1+x')}{\e}\bigg)\bigg |\, |\nabla F_\eta(t\nu_1+x')\cdot\nu_1|\, dt\bigg)\, d\mathcal{H}^{N-1}(x')\,, \end{multline*} where we have set $A'_\eta:=A_\eta\cap\{x_\nu=0\}$, and used Fubini's theorem in the last equality. Changing variables $s=(1/\e)F_{\eta}(t\nu_1+x')$, we obtain that for $\mathcal{H}^{N-1}$-a.e. $x' \in A'_\eta$,
$$\frac{1}{\e}\int_{-1/4}^{1/4}\bigg|\, \dot \g_\e\bigg(\frac{F_\eta(t\nu_1+x')}{\e}\bigg)\bigg|\,
|\nabla F_\eta(t\nu_1+x')\cdot\nu_1|\, dt\leq \int_{\mathbb{R}} |\dot
\g_\e(s)|\,ds=\mathbf d_{\mathcal{M}}(a,b)\,.$$ Consequently, \begin{equation}\label{E3bis}I_3\leq C\beta\, \mathcal{H}^{N-1}(A'_\eta)\, \mathbf d_{\mathcal{M}}(a,b)=C\beta \big(1-(1-\eta)^{N-1})\, \mathbf d_{\mathcal{M}}(a,b)\,.\end{equation} In view of (\ref{Ebis}), (\ref{condeta}) and estimates (\ref{E1bis}), (\ref{E2bis}) and (\ref{E3bis}), we conclude that $$I_{(1-2\eta)\e}(\nu)\leq I_\e(\nu')+K\sigma\,,$$ where $K=1+\beta \Delta(1+C)$, $\Delta$ is the diameter of ${\mathcal{M}}$ and $C$ is the constant given by (\ref{Feta1}). Finally, letting $\e\to0$ we derive $$\liminf_{\e\to 0}I_\e(\nu)\leq \liminf_{\e\to0}I_\e(\nu')+K\sigma\,, \text{ and }
\limsup_{\e\to 0}I_\e(\nu)\leq \limsup_{\e\to0}I_\e(\nu')+K\sigma\,. $$ The symmetry of the roles of $\nu$ and $\nu'$ allows us to exchange them, thus concluding the proof of Step~3.\vskip5pt
{\bf Step 4.} Let $\nu$ and $\nu'$ be two orthonormal bases of ${\mathbb{R}}^N$ with equal first vector. Similarly to Step~4 of the proof of \cite[Proposition 2.2]{BDV}, by Steps 2 and 3 we readily obtain that the limits $\displaystyle\lim_{\e\to0}I_\e(\nu)$ and $\displaystyle\lim_{\e\to0}I_\e(\nu')$ exist and are equal. \end{proof}
\noindent{\bf Proof of Proposition \ref{limitsurfenerg}.} We use the notation of the previous proof. Given $\e>0$ and an orthonormal basis $\nu=(\nu_1,\ldots,\nu_N)$ of ${\mathbb{R}}^N$, we set \begin{align*} J_\e(\nu)=J_\e(a,b,\nu) : = & \inf \left\{ \int_{Q_\nu} f^\infty\left(\frac{x}{\e},\nabla u\right)\, dx : u \in \mathcal{A}_1(a,b,\nu) \right\} \\ =&\inf\bigg\{\e^{N-1}\int_{\frac{1}{\e}Q_\nu}f^\infty(y,\nabla \varphi)\,dy\;:\;\varphi\in \mathcal{A}_{1/\e}(a,b,\nu) \bigg\}\,.\end{align*} We claim that \begin{equation}\label{idsurfen} \lim_{\e\to0} J_{\e}(\nu)=\lim_{\e\to 0} I_\e(\nu)\,. \end{equation} For $0<\e<1$ we set $\tilde \e=\e/(1-\e)$, and we consider $u_{\tilde \e}\in\mathcal{B}_{\tilde \e}(a,b,\nu)$ satisfying $$\int_{Q_\nu}f^{\infty}\left(\frac{x}{\tilde\e},\nabla u_{\tilde\e}\right)dx\leq I_{\tilde \e}(\nu)+ \e\,, $$ where $u_{\tilde \e}(x)=\g_{\tilde \e}(x_\nu/\tilde \e)$ if $x\in\partial Q_\nu$, for some $\g_{\tilde \e}\in\mathcal{G}(a,b)$. We define for every $x\in Q_\nu$, $$v_\e(x):=\begin{cases} \displaystyle u_{\tilde \e}\left(\frac{x}{1-\e}\right) & \text{if $x\in Q_{\nu,\e}\,$,}\\[10pt]
\displaystyle \g_{\tilde\e}\bigg(\frac{x_\nu}{1-2\|x'\|_{\nu,\infty}}\bigg) &\text{otherwise}\,. \end{cases}$$ One may check that $v_\e\in\mathcal{A}_1(a,b,\nu)$, and hence $$J_\e(\nu)\leq \int_{Q_\nu}f^\infty\left(\frac{x}{\e},\nabla v_\e\right)dx=\int_{Q_{\nu,\e}}f^\infty\left(\frac{x}{\e},\nabla v_\e\right)dx +\int_{Q_\nu\setminus Q_{\nu,\e}}f^\infty\left(\frac{x}{\e},\nabla v_\e\right)dx=I_1+I_2\,.$$ We now estimate these two integrals. First, we have \begin{equation}\label{2016}I_1=(1-\e)^{N-1}\int_{Q_\nu}f^\infty\left(\frac{y}{\tilde\e},\nabla u_{\tilde\e}\right)dy \leq (1-\e)^{N-1}\big(I_{\tilde\e}(\nu)+\e\big)\,.\end{equation} In view of the growth condition (\ref{finfty1gc}), \begin{align*}
I_2&\leq \beta\int_{Q_\nu\setminus Q_{\nu,\e}}\bigg|\dot \g_{\tilde
\e}\bigg(\frac{x_\nu}{1-2\|x'\|_{\nu,\infty}}\bigg)\bigg|
\bigg(\frac{1}{1-2\|x'\|_{\nu,\infty}}+\frac{|x_\nu||\nabla(\|x'\|_{\nu,\infty})|}{(1-2\|x'\|_{\nu,\infty})^2}\bigg)\,dx\\
&\leq 2\beta\int_{(Q_\nu\setminus Q_{\nu,\e})\cap\{|x_\nu|\leq
(1-2\|x'\|_{\nu,\infty})/2\}}\bigg|\dot \g_{\tilde
\e}\bigg(\frac{x_\nu}{1-2\|x'\|_{\nu,\infty}}\bigg)\bigg|
\bigg(\frac{1}{1-2\|x'\|_{\nu,\infty}}\bigg)\,dx\,, \end{align*} where we have used the facts that $\dot \g_{\tilde
\e}(x_\nu/(1-2\|x'\|_{\nu,\infty}))=0$ in the set $\{|x_\nu|>
(1-2\|x'\|_{\nu,\infty})/2\}$ and
$\|\nabla(\|x'\|_{\nu,\infty})\|_{L^\infty(Q_\nu;{\mathbb{R}}^N)}\leq 1$. Setting $Q'_\nu=Q_\nu\cap\{x_\nu=0\}$ and $Q'_{\nu,\e}=Q_{\nu,\e}\cap\{x_\nu=0\}$, we infer from Fubini's theorem that \begin{multline}\label{2017}
I_2\leq 2\beta\int_{Q'_\nu\setminus Q'_{\nu,\e}}\bigg(\int_{-(1-2\|x'\|_{\nu,\infty})/2}^{(1-2\|x'\|_{\nu,\infty})/2}
\bigg|\dot \g_{\tilde
\e}\bigg(\frac{t}{1-2\|x'\|_{\nu,\infty}}\bigg)\bigg|
\bigg(\frac{1}{1-2\|x'\|_{\nu,\infty}}\bigg)\, dt\bigg)\, d\mathcal{H}^{N-1}(x')\leq\\ \leq 2\beta\, \mathcal{H}^{N-1}(Q'_\nu\setminus Q'_{\nu,\e})\, \mathbf d_{\mathcal{M}}(a,b)\leq 2\beta \mathbf d_{\mathcal{M}}(a,b) \big(1-(1-\e)^{N-1}\big)\,. \end{multline} In view of the estimates (\ref{2016}) and (\ref{2017}) obtained for $I_1$ and $I_2$, we derive that \begin{equation}\label{2019} \limsup_{\e\to0}J_\e(\nu)\leq \lim_{\e\to0} I_\e(\nu)\,.\end{equation}
Conversely, given $0<\e<1$, we consider $\tilde u_\e\in\mathcal{A}_1(a,b,\nu)$ such that $$\int_{Q_\nu}f^\infty\left(\frac{x}{\e},\nabla\tilde u_\e\right)dx\leq J_\e(\nu)+\e\,,$$ and $\g\in\mathcal{G}(a,b)$ fixed. We define for $x\in Q_\nu$, $$w_\e(x):=\begin{cases} \displaystyle \tilde u_\e\left(\frac{x}{1-\e}\right) &\text{if $x\in Q_{\nu,\e}$,}\\[10pt] \displaystyle
\g\bigg(\frac{x_\nu}{(1-\e)(2\|x'\|_{\nu,\infty}-1+\e)}\bigg)&\text{otherwise.} \end{cases}$$ We can check that $w_\e\in\mathcal{B}_{(1-\e)\e}(a,b,\nu)$, and arguing as previously we infer that \begin{eqnarray*}I_{(1-\e)\e}(\nu) & \leq & \int_{Q_{\nu,\e}}\!f^\infty\left(\frac{x}{(1-\e)\e}\,,\nabla w_\e\right)\, dx+ \int_{Q_\nu\setminus Q_{\nu,\e}}f^\infty\left(\frac{x}{(1-\e)\e}\,,\nabla w_\e\right)dx\\ &\leq & (1-\e)^{N-1}\big(J_\e(\nu)+\e\big) +2\beta \mathbf d_{\mathcal{M}}(a,b)\big(1-(1-\e)^{N-1}\big) \,. \end{eqnarray*} Consequently, $\displaystyle\lim_{\e\to0}I_\e(\nu)\leq \liminf_{\e\to0}J_\e(\nu)$, which, together with (\ref{2019}), completes the proof of Proposition \ref{limitsurfenerg}. \prbox \vskip5pt
We now state the following properties of the surface energy density.
\begin{proposition}\label{contsurfenerg} The function $\vartheta_\text{hom}$ is continuous on ${\mathcal{M}} \times {\mathcal{M}} \times {\mathbb{S}^{N-1}}$ and there exist constants $C_1>0$ and $C_2>0$ such that \begin{equation}\label{propsurf}
|\vartheta_{\rm hom}(a_1,b_1,\nu_1) - \vartheta_{\rm hom}(a_2,b_2,\nu_1)| \leq C_1
(|a_1-a_2|+ |b_1-b_2|)\,, \end{equation} and \begin{equation}\label{propsurf2}
\vartheta_{\rm hom}(a_1,b_1,\nu_1)\leq C_2|a_1-b_1| \end{equation} for every $a_1,b_1,a_2,b_2\in {\mathcal{M}}$ and $\nu_1 \in {\mathbb{S}^{N-1}}$. \end{proposition}
\begin{proof} We use the notation of the previous proof. By Proposition \ref{limitsurfenerg} together with steps 3 and 4 of the proof of Proposition \ref{limsurf2}, we get that $\vartheta_\text{hom}(a,b,\cdot)$ is continuous on ${\mathbb{S}^{N-1}}$ uniformly with respect to $a$ and $b$. Hence it is enough to show that (\ref{propsurf}) holds to get the continuity of $\vartheta_\text{hom}$. \vskip5pt
{\bf Step 1.} We start with the proof of \eqref{propsurf}. Fix $\nu_1 \in {\mathbb{S}^{N-1}}$ and let $\nu=(\nu_1,\nu_2,\ldots,\nu_N)$ be any orthonormal basis of ${\mathbb{R}}^N$. For every $\e>0$, let $\tilde \e:= \e/(1-\e)$ and consider $\g_{\tilde \e} \in \mathcal G(a_1,b_1)$ and $u_{\tilde \e} \in \mathcal B_{\tilde \e}(a_1,b_1,\nu)$ such that $u_{\tilde \e}(x)=\g_{\tilde \e}(x_\nu/\tilde \e)$ for $x\in\partial Q_\nu$ and $$\int_{Q_\nu} f^\infty\left(\frac{x}{\tilde \e},\nabla u_{\tilde \e}\right) dx \leq I_{\tilde \e}(a_1,b_1,\nu) + \e\,.$$ We shall now carefully modify $u_{\tilde \e}$ in order to get another function $v_\e \in \mathcal A_1(a_2,b_2,\nu)$. We will proceed as in the proofs of Propositions \ref{limitsurfenerg} and \ref{limsurf2}. Let $\g_a \in \mathcal G(a_2,a_1)$ and $\g_b \in \mathcal G(b_2,b_1)$, and define $$v_\e(x):=\left\{ \begin{array}{lll} \displaystyle u_{\tilde\e}\left(\frac{x}{1-\e}\right) & \text{ if } & x \in Q_{\nu,\e}\,,\\[0.3cm]
\displaystyle \g_{\tilde\e}\left(\frac{x_\nu}{1-2\|x'\|_{\nu,\infty}} \right) & \text{ if } & \displaystyle x \in A_1\,,\\[0.3cm]
\displaystyle \g_a\left(\frac{2\|x\|_{\nu,\infty}-1}{\e}+\frac{1}{2} \right) & \text{ if } & \displaystyle x \in A_2:=(Q_\nu\setminus Q_{\nu,\e})\cap\{x_\nu\geq \e/2\}\,,\\[0.3cm]
\displaystyle \g_b\left(\frac{2\|x\|_{\nu,\infty}-1}{\e}+\frac{1}{2} \right) & \text{ if } & \displaystyle x \in A_3:=(Q_\nu\setminus Q_{\nu,\e})\cap\{x_\nu\leq -\e/2\}\,,\\[0.3cm]
\displaystyle \g_a\left(\frac{2\|x'\|_{\nu,\infty}-1}{2x_\nu}+\frac{1}{2} \right) & \text{ if } & \displaystyle x \in A_4:=\left\{ 0 < x_\nu \leq \frac{\e}{2}\,,\;
\frac{1}{2}-x_\nu \leq \|x'\|_{\nu,\infty} < \frac{1}{2}\right\}\,,\\[0.3cm]
\displaystyle \g_b\left(\frac{1-2\|x'\|_{\nu,\infty}}{2x_\nu}+\frac{1}{2} \right) & \text{ if } & \displaystyle x \in A_5:=\left\{ -\frac{\e}{2} < x_\nu
\leq 0\,,\; \frac{1}{2}+x_\nu \leq \|x'\|_{\nu,\infty} < \frac{1}{2}\right\}\,, \end{array}\right.$$ with $$ A_1:=\left\{ \frac{1-\e}{2} \leq
\|x'\|_{\nu,\infty} < \frac{1}{2}
\text{ and } |x_\nu|\leq -\|x'\|_{\nu,\infty}+\frac{1}{2}\right\}\,.$$ One may check that the function $v_\e$ has been constructed in such a way that $v_\e \in {\mathcal{A}}_1(a_2,b_2,\nu)$, and thus \begin{equation}\label{Je}J_\e(a_2,b_2,\nu) \leq \int_{Q_\nu}f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx\,. \end{equation} Arguing exactly as in the proof of Proposition \ref{limsurf2}, one can show that \begin{equation}\label{cunu}\int_{Q_{\nu,\e}} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \leq I_{\tilde \e}(a_1,b_1,\nu) + \e\,,\end{equation} and \begin{eqnarray}\label{A1}\int_{A_1} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \leq C\mathbf d_{\mathcal{M}}(a_1,b_1)(1-(1-\e)^{N-1}))\,. \end{eqnarray} Now we only estimate the integrals over $A_2$ and $A_4$, the ones over $A_3$ and $A_5$ being very similar. Define the Lipschitz function $F_\e:{\mathbb{R}}^{N} \to {\mathbb{R}}$ by
$$F_\e(x):=\frac{2\|x\|_{\nu,\infty}-1}{\e}+\frac{1}{2}\,.$$ Using the growth condition (\ref{finfty1gc}) together with Fubini's theorem, and the fact that $A_2 \subset F_\e^{-1}\big([-1/2,1/2)\big)$, we derive \begin{multline*} \int_{A_2} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \leq
\b \int_{A_2} |\dot\g_a (F_\e(x))|\, |\nabla F_\e(x)|\, dx
\leq\\
\leq \b \int_{F_\e^{-1}([-1/2,1/2))} |\dot\g_a
(F_\e(x))|\, |\nabla F_\e(x)|\, dx
\leq \b \int_{-1/2}^{1/2}|\,\dot \g_a(t)| \, {\mathcal{H}}^{N-1}(F_\e^{-1}\{t\})\, dt \,, \end{multline*} where we used the Coarea formula in the last inequality. We observe that for every $t\in(-1/2,1/2)$, $F^{-1}_\e\{t\}=\partial Q_{\nu,\frac{\e(1-2t)}{2}}$ so that ${\mathcal{H}}^{N-1}(F_\e^{-1}\{t\})\leq {\mathcal{H}}^{N-1}(\partial Q)$. Therefore \begin{eqnarray}\label{A2}\int_{A_2} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \leq \b {\mathcal{H}}^{N-1}(\partial Q)\mathbf d_{\mathcal{M}}(a_1,a_2)\,.\end{eqnarray} Define now $G : {\mathbb{R}}^N\setminus\{x_\nu=0\} \to {\mathbb{R}}$ by
$$G(x):=\frac{2\|x'\|_{\nu,\infty}-1}{2x_\nu}+\frac{1}{2}\,.$$ The growth condition (\ref{finfty1gc}) and Fubini's theorem yield \begin{multline*} \int_{A_4} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \,\leq \\ \leq \b \int_{0}^{\e/2} \left(\int_{G(\cdot,x_\nu)^{-1}([-1/2,1/2))}
|\dot\g_a (G(x',x_\nu))|\, |\nabla G(x',x_\nu)|\, d{\mathcal{H}}^{N-1}(x') \right)dx_\nu\,. \end{multline*}
As $|\nabla_{x'} G(x)| = 1/x_\nu$ and
$|\nabla_{x_\nu} G(x)| \leq 1/x_\nu$ for a.e. $x \in A_4$, it follows that $|\nabla G(x)| \leq 2 |\nabla_{x'} G(x)|$ for a.e. $x \in A_4$. Hence \begin{multline*} \int_{A_4} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \,\leq \\ \leq 2\b \int_{0}^{\e/2} \left(\int_{G(\cdot,x_\nu)^{-1}([-1/2,1/2))}
|\dot\g_a (G(x',x_\nu))|\, |\nabla_{x'} G(x',x_\nu)|\, d{\mathcal{H}}^{N-1}(x') \right)dx_\nu\,. \end{multline*} For every $x_\nu \in (0,\e/2)$ the function $G(\cdot,x_\nu):{\mathbb{R}}^{N-1} \to {\mathbb{R}}$ is Lipschitz, and thus the Coarea formula implies \begin{eqnarray}\label{A4} \int_{A_4} f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx & \leq &
2\b \int_0^{\e/2} \left( \int_{-1/2}^{1/2}|\, \dot\g_a(t)|\, {\mathcal{H}}^{N-2}(\{x': G(x',x_\nu)=t\})\, dt \right)dx_\nu\nonumber\\ &\leq & C\e\, \mathbf d_{\mathcal{M}}(a_1,a_2)\,, \end{eqnarray} where, as previously, we used the estimate ${\mathcal{H}}^{N-2}(\{x': G(x',x_\nu)=t\})\leq {\mathcal{H}}^{N-2}\big(\partial (\frac{-1}{2},\frac{1}{2})^{N-1}\big)$. Gathering (\ref{Je}) to (\ref{A4}) and considering the analogous estimates for the integrals over $A_3$ and $A_5$ (with $b_1$ and $b_2$ instead of $a_1$ and $a_2$), we infer that $$J_\e(a_2,b_2,\nu) \leq \int_{Q_\nu}f^\infty\left(\frac{x}{\e},\nabla v_\e\right) dx \leq I_{\tilde\e}(a_1,b_1,\nu) + C\big(\e + \mathbf d_{\mathcal{M}}(a_1,a_2) + \mathbf d_{\mathcal{M}}(b_1,b_2)\big)\,.$$ Taking the limit as $\e \to 0$, we get, in light of Propositions \ref{limitsurfenerg} and \ref{limsurf2}, that $$\vartheta_\text{hom}(a_2,b_2,\nu) \leq \vartheta_\text{hom}(a_1,b_1,\nu) + C \big( \mathbf d_{\mathcal{M}}(b_1,b_2) + \mathbf d_{\mathcal{M}}(a_1,a_2) \big)\,.$$ Since the geodesic distance on ${\mathcal{M}}$ is equivalent to the Euclidean distance, we conclude, possibly exchanging the roles of $(a_1,b_1)$ and $(a_2,b_2)$, that (\ref{propsurf}) holds. \vskip5pt
{\bf Step 2.} We now prove \eqref{propsurf2}. Given an arbitrary orthonormal basis $\nu=(\nu_1,\ldots,\nu_N)$ of ${\mathbb{R}}^N$, let $\g \in \mathcal G(a_1,b_1)$ and define $u_\e(x):=\g(x_\nu/\e)$. Obviously $u_\e \in \mathcal B_\e(a_1,b_1,\nu)$. Using (\ref{idsurfen}) together with the growth condition (\ref{finfty1gc}) satisfied by $f^\infty$, we derive that $$\vartheta_\text{hom}(a_1,b_1,\nu_1) \leq \liminf_{\e \to 0}\int_{Q_\nu}f^\infty\left(\frac{x}{\e},\nabla u_\e\right)dx \leq \liminf_{\e \to 0} \frac{\b}{\e}\int_{Q_\nu}
\left|\dot\g\left(\frac{x\cdot\nu_1}{\e}\right)\right|\, dx=\b\mathbf d_{{\mathcal{M}}}(a_1,b_1)\,.$$ Then (\ref{propsurf2}) follows from the equivalence between $\mathbf d_{{\mathcal{M}}}$ and the Euclidean distance. \end{proof}
\section{Localization and integral representation on partitions}\label{BV}
In this section we first show that the $\Gamma$-limit defines a measure. Then we prove an abstract integral representation result on partitions in sets of finite perimeter. These two facts will allow us to obtain the upper bound on the $\Gamma$-limit in the next section.
\subsection{Localization}
We consider an arbitrary given sequence $\{\e_n\} \searrow 0^+$ and we localize the functionals $\{{\mathcal{F}}_{\e_n}\}_{n\in{\mathbb{N}}}$ on the family ${\mathcal{A}}(\O)$, {\it i.e.}, for every $u \in L^1(\O;{\mathbb{R}}^d)$ and every $A \in {\mathcal{A}}(\O)$, we set $${\mathcal{F}}_{\e_n}(u,A):= \begin{cases} \displaystyle \int_A f\left(\frac{x}{\e_n},\nabla u\right) dx & \text{if }u \in W^{1,1}(A;{\mathcal{M}})\,,\\[8pt] +\infty & \text{otherwise}\,. \end{cases}$$ Next we define for $u\in L^1(\O;{\mathbb{R}}^d)$ and $A \in {\mathcal{A}}(\O)$, \begin{equation*} {\mathcal{F}}(u,A):= \inf_{\{u_n\}} \bigg\{ \liminf_{n \to +\infty}\, {\mathcal{F}}_{\e_n} (u_n,A)\, :\, u_n \to u \text{ in }L^1(A;{\mathbb{R}}^d) \bigg\}\,. \end{equation*} Note that ${\mathcal{F}}(u,\cdot)$ is an increasing set function for every $u\in L^1(\O;{\mathbb{R}}^d)$ and that ${\mathcal{F}}(\cdot,A)$ is lower semicontinuous with respect to the strong $L^1(A;{\mathbb{R}}^d)$-convergence for every $A\in {\mathcal{A}}(\O)$.
Since $L^1(A;{\mathbb{R}}^d)$ is separable, \cite[Theorem~8.5]{DM} and a diagonalization argument yield the existence of a subsequence (still denoted $\{\e_n\}$) such that ${\mathcal{F}}(\cdot,A)$ is the $\G$-limit of ${\mathcal{F}}_{\e_n}(\cdot,A)$ for the strong $L^1(A;{\mathbb{R}}^d)$-topology for every $A \in {\mathcal{R}}(\O)$ (or $A=\O$). \vskip5pt
We have the following locality property of the $\G$-limit which, in the $BV$ setting, parallels \cite[Lemma 3.1]{BM}.
\begin{lemma}\label{measbis}
For every $u \in BV(\O;{\mathcal{M}})$, the set function ${\mathcal{F}}(u,\cdot)$ is the restriction to ${\mathcal{A}}(\O)$ of a Radon measure absolutely continuous with respect to ${\mathcal{L}}^N+|Du|$. \end{lemma}
\begin{proof} Let $u \in BV(\O;{\mathcal{M}})$ and $A \in {\mathcal{A}}(\O)$. By \cite[Theorem 3.9]{AFP}, there exists a sequence $\{u_n\} \subset W^{1,1}(A;{\mathbb{R}}^d) \cap {\mathcal{C}}^\infty(A;{\mathbb{R}}^d)$ such that $u_n \to u$ in
$L^1(A;{\mathbb{R}}^d)$ and $\int_A |\nabla u_n|\, dx \to |Du|(A)$. Moreover, $u_n(x)\in {\rm co}({\mathcal{M}})$ for a.e. $x \in A$ and every $n \in {\mathbb{N}}$. Applying Proposition \ref{proj} to $u_n$, we obtain a new sequence $\{w_n\} \subset W^{1,1}(A;{\mathcal{M}})$ satisfying
$$\int_A |\nabla w_n|\, dx \leq C_\star \int_A |\nabla u_n|\, dx,$$ for some constant $C_\star>0$ depending only on ${\mathcal{M}}$ and $d$. By construction of $w_n$, we have that $w_n \to u$ in $L^1(A;{\mathbb{R}}^d)$. Taking $\{w_n\}$ as an admissible sequence, we deduce, in light of the growth condition $(H_2)$, that \begin{equation*}
{\mathcal{F}}(u,A) \leq \beta\big({\mathcal{L}}^N(A)+C_\star|Du|(A)\big)\,. \end{equation*}
We now prove that \begin{equation*} {\mathcal{F}}(u,A) \leq {\mathcal{F}}(u,B) + {\mathcal{F}}(u,A \setminus \overline C) \end{equation*} for every $A$, $B$ and $C \in {\mathcal{A}}(\O)$ satisfying $\overline C \subset B \subset A$. Then the measure property of ${\mathcal{F}}(u,\cdot)$ can be obtained as in the proof of \cite[Lemma 3.1]{BM} with minor modifications. For this reason, we shall omit it.
Let $R \in {\mathcal{R}}(\O)$ such that $C \subset\subset R \subset\subset B$ and consider $\{u_n\} \subset W^{1,1}(R;{\mathcal{M}})$ satisfying $u_n \to u$ in $L^1(R;{\mathbb{R}}^d)$ and \begin{equation}\label{u_nbis} \lim_{n \to +\infty} {\mathcal{F}}_{\e_n}(u_n,R) = {\mathcal{F}}(u,R)\,. \end{equation} Given $\eta>0$ arbitrary, there exists a sequence $\{v_n\} \subset W^{1,1}(A \setminus \overline C;{\mathcal{M}})$ such that $v_n \to u$ in $L^1(A\setminus \overline C;{\mathbb{R}}^d)$ and \begin{equation}\label{v_nbis} \liminf_{n \to +\infty}\, {\mathcal{F}}_{\e_n}(v_n,A \setminus \overline C) \leq {\mathcal{F}}(u,A \setminus \overline C) + \eta\,. \end{equation} By Theorem \ref{density}, we can assume without loss of generality that $u_n \in {\mathcal{D}}(R;{\mathcal{M}})$ and $v_n \in {\mathcal{D}}(A \setminus \overline C;{\mathcal{M}})$. Let $L:=\text{dist}(C,\partial R)$ and define for every $i \in \{0,\ldots,n\}$, $$R_i:=\bigg\{x \in R:\, \text{dist}(x,\partial R) >\frac{iL}{n}\bigg\}\,.$$ Given $i \in \{0,\ldots,n-1\}$, let $S_i:=R_i \setminus \overline{R_{i+1}}$ and consider a cut-off function $\zeta_i \in
{\mathcal{C}}^\infty_c(\O;[0,1])$ satisfying $\zeta_i(x) = 1$ for $x\in R_{i+1}$, $\zeta_i(x) = 0$ for $x\in \O\setminus R_{i}$ and $|\nabla
\zeta_i|\leq 2n/L$. Define $$z_{n,i}:=\zeta_i u_n + (1-\zeta_i)v_n \in W^{1,1}(A;{\mathbb{R}}^d)\,.$$ If $\pi_1({\mathcal{M}}) \neq 0$, $z_{n,i}$ is smooth in $A\setminus \Sigma_{n,i}$ with $\Sigma_{n,i} \in \mathcal S$, while $z_{n,i}$ is smooth in $A$ if $\pi_1({\mathcal{M}}) = 0$. Observe that $z_{n,i}(x) \in {\rm co}({\mathcal{M}})$ for a.e. $x \in A$ and actually, $z_{n,i}$ fails to be ${\mathcal{M}}$-valued exactly in the set $S_i$. To get an admissible sequence, we project $z_{n,i}$ on ${\mathcal{M}}$ using Proposition \ref{proj}. It yields a sequence $\{w_{n,i}\} \subset W^{1,1}(A;{\mathcal{M}})$ satisfying $w_{n,i} =z_{n,i}$ a.e. in $A \setminus S_i$, \begin{equation}\label{L1}
\int_A|w_{n,i} -u |\, dx \leq \int_A |z_{n,i} - u|\, dx+C {\mathcal{L}}^N(S_i)\,, \end{equation} for some constant $C>0$ depending only on the diameter of ${\rm co}({\mathcal{M}})$, and
$$\int_{S_i} |\nabla w_{n,i}|\, dx \leq C_\star \int_{S_i} |\nabla z_{n,i}|\, dx
\leq C_\star \int_{S_i}\left(|\nabla u_n| + |\nabla v_n| +
\frac{2n}{L} |u_n - v_n|\right)\, dx\,.$$ Arguing exactly as in the proof of \cite[Lemma 3.1]{BM}, we now find an index $i_n \in \{0,\ldots,n-1\}$ such that \begin{multline}\label{nin} {\mathcal{F}}_{\e_n}(w_{n,i_n},A) \leq {\mathcal{F}}_{\e_n}(u_n,R)+{\mathcal{F}}_{\e_n}(v_n,A \setminus \overline C)\,+\\
+ C_0 \int_{R \setminus \overline C}|u_n - v_n|\, dx +
\frac{C_0}{n}\sup_{k \in {\mathbb{N}}}\int_{R \setminus \overline C}(1+|\nabla u_k|+ |\nabla v_k|)\, dx\,, \end{multline} for some constant $C_0$ independent of $n$.
A well-known consequence of the Coarea formula (see, {\it e.g.}, \cite[Lemma 3.2.34]{Federer}) yields \begin{equation}\label{vanslice} {\mathcal{L}}^N(S_{i_n}) = \int_{i_n L/n}^{(i_n+1)L/n} {\mathcal{H}}^{N-1}(\{x \in R: \text{dist}(x,\partial R)=t\})\, dt \to 0 \quad\text{as $n \to +\infty$\,.} \end{equation} As a consequence of (\ref{L1}) and \eqref{vanslice}, $w_{n,i_n} \to u$ in $L^1(A;{\mathbb{R}}^d)$. Taking the $\liminf$ in (\ref{nin}) and using (\ref{u_nbis}) together with (\ref{v_nbis}), we derive $${\mathcal{F}}(u,A) \leq {\mathcal{F}}(u,R) + {\mathcal{F}}(u,A \setminus \overline C)+\eta \leq {\mathcal{F}}(u,B) + {\mathcal{F}}(u,A \setminus \overline C)+\eta\,.$$ The conclusion follows from the arbitrariness of $\eta$. \end{proof}
\begin{remark}\label{measconstr} In view of Lemma \ref{measbis}, for every $u\in BV(\O;{\mathcal{M}})$, the set function ${\mathcal{F}}(u,\cdot)$ can be uniquely extended to a Radon measure on $\O$. Such a measure is given by $${\mathcal{F}}(u,B):= \inf \big\{{\mathcal{F}}(u,A)\,:\,A\in{\mathcal{A}}(\O),\,B\subset A\big\}\,, $$ for every $B\in\mathcal{B}(\O)$ (see, \emph{e.g.}, \cite[Theorem 1.53]{AFP}). \end{remark}
\subsection{Integral representation on partitions}
Besides the locality of ${\mathcal{F}}(u,\cdot)$, another key point of the analysis is to prove an abstract integral representation on partitions. Similarly to {\it e.g.} \cite[Lemma 3.7]{BDV}, using $(H_1)$ we easily obtain the translation invariance property of the $\G$-limit, the proof of which is omitted.
\begin{lemma}\label{invtrans} For every $u \in BV(\O;{\mathcal{M}})$, every $A \in {\mathcal{A}}(\O)$ and every $y\in{\mathbb{R}}^N$ such that $y+A \subset \O$, we have $${\mathcal{F}}(\tau_yu,y+A)={\mathcal{F}}(u,A) \,,$$ where $(\tau_yu)(x):=u(x-y)$. \end{lemma}
We are now in position to prove the integral representation of the $\G$-limit on partitions.
\begin{proposition}\label{reppart} There exists a unique function $K:{\mathcal{M}}\times{\mathcal{M}}\times{\mathbb{S}^{N-1}}\to[0,+\infty)$ continuous in the last variable and such that \newline \emph{(i)} $K(a,b,\nu)=K(b,a,-\nu)$ for every $(a,b,\nu)\in{\mathcal{M}}\times{\mathcal{M}}\times{\mathbb{S}^{N-1}}$, \newline \emph{(ii)} for every finite subset $T$ of ${\mathcal{M}}$, \begin{equation}\label{intsurfbor} {\mathcal{F}}(u,S)=\int_{S} K(u^+,u^-,\nu_u)\, d{\mathcal{H}}^{N-1}\,, \end{equation} for every $u\in BV(\O;T)$ and every Borel subset $S$ of $\O\cap S_u\,$. \end{proposition}
\begin{proof} The proof follows the argument of \cite[Proposition 4.2]{BDV}, which is based on the general result \cite[Theorem 3.1]{AB}, taking into account Lemmas \ref{measbis} and \ref{invtrans} and Remark \ref{measconstr}. We omit any further details. \end{proof}
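For instance, if $a$, $b\in{\mathcal{M}}$ with $a\neq b$ and $E$ is a set of finite perimeter in $\O$, the two-valued map $u$ equal to $a$ on $E$ and to $b$ on $\O\setminus E$ belongs to $BV(\O;\{a,b\})$, $S_u$ coincides, up to an ${\mathcal{H}}^{N-1}$-negligible set, with $\O\cap\partial^*E$, and \eqref{intsurfbor} reads
$${\mathcal{F}}(u,\O\cap S_u)=\int_{\O\cap\partial^*E} K(a,b,\nu_E)\, d{\mathcal{H}}^{N-1}\,,$$
where $\nu_E$ denotes the measure-theoretic unit normal to $E$ pointing towards $E$; in view of {\it (i)}, replacing $\nu_E$ by $-\nu_E$ simply amounts to exchanging $a$ and $b$ in the integrand.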
\section{The upper bound}
\noindent We now address the $\G$-$\limsup$ inequality. The upper bound on the diffuse part will be obtained using an extension of the relaxation result of \cite{AEL} (see Theorem \ref{relax} in the Appendix) together with the partial representation of the $\G$-limit already established in $W^{1,1}$ (see Theorem~\ref{babmilp=1}). The estimate of the jump part relies on the integral representation on partitions in sets of finite perimeter stated in Proposition \ref{reppart}. \vskip5pt
In view of the measure property of the $\G$-limit, we may write for every $u\in BV(\O;{\mathcal{M}})$, \begin{equation}\label{decompupbd} {\mathcal{F}}(u,\O)={\mathcal{F}}(u,\O\setminus S_u)+{\mathcal{F}}(u,\O\cap S_u)\,. \end{equation} Hence the desired upper bound ${\mathcal{F}}(u,\O)\leq {\mathcal{F}}_{\rm hom}(u)$ will follow by estimating separately the two terms on the right-hand side of \eqref{decompupbd}.
\begin{lemma}\label{upperboundBV} For every $u \in BV(\O;{\mathcal{M}})$, we have
$${\mathcal{F}}(u,\O\setminus S_u) \leq \int_\O Tf_{\rm hom}(u,\nabla u)\, dx+ \int_\O Tf^\infty_{\rm hom}\left(\tilde u,\frac{dD^cu}{d|D^cu|}\right)\, d|D^cu|\,.$$ \end{lemma}
\begin{proof} Let $A \in {\mathcal{A}}(\O)$ and $\{u_n\} \subset W^{1,1}(A;{\mathcal{M}})$ be such that $u_n \to u$ in $L^1(A;{\mathbb{R}}^d)$. Since ${\mathcal{F}}(\cdot,A)$ is sequentially lower semicontinuous for the strong $L^1(A;{\mathbb{R}}^d)$ convergence, it follows from Theorem~\ref{babmilp=1} that $${\mathcal{F}}(u,A) \leq \liminf_{n \to +\infty}\, {\mathcal{F}}(u_n,A)=\liminf_{n \to +\infty}\, \int_A Tf_\text{hom}(u_n,\nabla u_n)\, dx\,.$$ Since the sequence $\{u_n\}$ is arbitrary, we deduce $${\mathcal{F}}(u,A) \leq \inf\left\{\liminf_{n \to +\infty} \int_A Tf_\text{hom}(u_n,\nabla u_n)\, dx : \{u_n\} \subset W^{1,1}(A;{\mathcal{M}}), \, u_n \to u\text{ in }L^1(A;{\mathbb{R}}^d)\right\}.$$ According to Proposition \ref{properties1}, the energy density $Tf_\text{hom}$ is a continuous and tangentially quasiconvex function which fulfills the assumptions of Theorem \ref{relax}. Hence \begin{equation}\label{FUA}
{\mathcal{F}}(u,A) \leq \int_A Tf_\text{hom}(u,\nabla u)\, dx+ \int_A Tf^\infty_\text{hom}\left(\tilde u,\frac{dD^cu}{d|D^cu|}\right)
d|D^cu| + \int_{S_u \cap A} H(u^+,u^-,\nu_u)\, d{\mathcal{H}}^{N-1} \end{equation} for some function $H:{\mathcal{M}} \times {\mathcal{M}} \times {\mathbb{S}^{N-1}} \to [0,+\infty)$. By outer regularity, (\ref{FUA}) holds for every $A \in \mathcal B(\O)$. Taking $A=\O \setminus S_u$, we obtain
$${\mathcal{F}}(u,\O \setminus S_u) \leq \int_\O Tf_\text{hom}(u,\nabla u)\, dx+ \int_\O Tf^\infty_\text{hom}\left(\tilde u,\frac{dD^cu}{d|D^cu|}\right)\, d|D^cu|\,, $$ and the proof is complete. \end{proof}
To prove the upper bound of the jump part, we first need to compare the energy density $K$ obtained in Proposition \ref{reppart} with the expected density $\vartheta_\text{hom}$.
\begin{lemma}\label{upbdsurf} We have $K(a,b,\nu_1)\leq \vartheta_{\rm hom}(a,b,\nu_1)$ for every $(a,b,\nu_1)\in {\mathcal{M}}\times{\mathcal{M}}\times\mathbb{S}^{N-1}$. \end{lemma}
\begin{proof} We will partially proceed as in the proof of Proposition \ref{limsurf2} and we refer to it for the notation. Consider an orthonormal basis $\nu=(\nu_1,\ldots,\nu_N)$ of ${\mathbb{R}}^N$. We shall prove that $K(a,b,\nu_1)\leq \vartheta_{\rm hom}(a,b,\nu_1)$. Since $K$ and $\vartheta_{\rm hom}$ are continuous in the last variable, we may assume that $\nu$ is a rational basis, {\it i.e.}, for all $i \in \{1,\ldots,N\}$, there exists $\g_i \in {\mathbb{R}}\setminus \{0\}$ such that $v_i:= \g_i \nu_i \in {\mathbb{Z}}^N$, and the general case follows by density. \vskip5pt
Given $0<\eta<1$ arbitrary, by Proposition \ref{limitsurfenerg} and \eqref{idsurfen} we can find $\e_0>0$, $u_0\in\mathcal{B}_{\e_0}(a,b,\nu)$ and $\gamma_{\e_0}\in\mathcal{G}(a,b)$ such that $u_0(x)=\gamma_{\e_0}(x\cdot\nu_1/\e_0)$ and $$\int_{Q_{\nu}} f^{\infty}\bigg(\frac{x}{\e_0},\nabla u_0\bigg)\, dx\leq \vartheta_{\rm hom}(a,b,\nu_1)+\eta\,.$$ For every $\lambda=(\lambda_2,\ldots,\lambda_N)\in{\mathbb{Z}}^{N-1}$, we set $x_n^{(\lambda)}:=\e_n\sum_{i=2}^N\lambda_iv_i$ and $Q_{\nu,n}^{(\lambda)}:=x^{(\lambda)}_n+(\e_n/\e_0)Q_\nu$. We define the set $\Lambda_n$ by \begin{multline*} \Lambda_n :=\Bigg\{\lambda\in{\mathbb{Z}}^{N-1}\;:\;Q_{\nu,n}^{(\lambda)}\subset Q_{\nu} \text{ and } x_n^{(\lambda)}\in \sum_{i=2}^N l_i\left(\frac{\e_n}{\e_0}+\e_n \gamma_i\right)\nu_i+\e_n P\\ \text{ for some }(l_2,\ldots,l_N)\in{\mathbb{Z}}^{N-1}\Bigg\}\,, \end{multline*} where $$P:=\Big\{\a_2v_2 + \ldots + \a_N v_N : \, \a_2,\ldots,\a_N \in [-1/2,1/2)\Big\}.$$ Next consider $$u_n(x)=\begin{cases} \displaystyle u_0\bigg(\frac{\e_0(x-x_n^{(\lambda)})\cdot\nu_1}{\e_n}\bigg) & \text{if $x\in Q_{\nu,n}^{(\lambda)}$ for some $\lambda\in\Lambda_n$}\,,\\[10pt] \displaystyle \gamma_{\e_0}\bigg(\frac{x\cdot\nu_1}{\e_n}\bigg) & \text{otherwise}\,. \end{cases}$$ Note that $u_n\in W^{1,1}(Q_\nu;{\mathcal{M}})$, $\{\nabla u_n\}$ is bounded in $L^1(Q_\nu;{\mathbb{R}}^{d \times N})$, and $u_n\to u^{a,b}_{\nu_1}$ in $L^1(Q_\nu;{\mathbb{R}}^d)$ as $n\to+\infty$ with $u^{a,b}_{\nu_1}$ given by \begin{equation*} u_{\nu_1}^{a,b}(x):=\begin{cases} a & \text{if } x\cdot\nu_1 \geq 0\,, \\ b & \text{if } x\cdot\nu_1 <0\,, \end{cases} \quad \Pi_{\nu_1}:=\big\{x\in{\mathbb{R}}^N : x\cdot\nu_1=0\big\}\,. \end{equation*} Arguing as in Step 1 of the proof of \cite[Proposition 2.2]{BDV}, we obtain that \begin{equation}\label{pasidee1} \limsup_{n\to+\infty}\, \int_{Q_\nu}f^\infty\bigg(\frac{x}{\e_n},\nabla u_n\bigg)\, dx\leq \int_{Q_\nu} f^\infty\bigg(\frac{x}{\e_0},\nabla u_0\bigg)\, dx \leq \vartheta_{\rm hom}(a,b,\nu_1)+\eta\,. \end{equation}
For $\rho>0$ define $A_\rho:=Q_\nu\cap\{|x\cdot\nu_1|<\rho\}$. By construction the sequence $\{u_n\}$ is admissible for ${\mathcal{F}}(u^{a,b}_{\nu_1},A_\eta)$ so that \begin{multline}\label{pasidee1b} {\mathcal{F}}\big(u^{a,b}_{\nu_1},A_\eta\cap\Pi_{\nu_1}\big)\leq{\mathcal{F}}(u^{a,b}_{\nu_1},A_\eta)\leq \liminf_{n\to+\infty} \,\int_{A_\eta}f\bigg(\frac{x}{\e_n},\nabla u_n\bigg)\, dx\leq \\ \leq\beta \mathcal{L}^N(A_\eta)+\liminf_{n\to+\infty}\,\int_{A_{\e_n}}f\bigg(\frac{x}{\e_n},\nabla u_n\bigg)\, dx\leq \liminf_{n\to+\infty}\,\int_{A_{\e_n}}f\bigg(\frac{x}{\e_n},\nabla u_n\bigg)\, dx +\beta\eta\,, \end{multline} where we have used $(H_2)$ and the fact that $\nabla u_n=0$ outside $A_{\e_n}$. On the other hand, Proposition~\ref{reppart} yields \begin{equation}\label{pasidee2} {\mathcal{F}}\big(u^{a,b}_{\nu_1},A_\eta\cap\Pi_{\nu_1}\big)=\int_{A_\eta\cap\Pi_{\nu_1}}K(a,b,\nu_1)\, d{\mathcal{H}}^{N-1}= K(a,b,\nu_1)\,. \end{equation} Using $(H_4)$, the boundedness of $\{\nabla u_n\}$ in $L^1(Q_\nu;{\mathbb{R}}^{d \times N})$, the fact that $f^\infty(\cdot,0)\equiv 0$, and H\"older's inequality, we derive \begin{eqnarray}\label{pasidee3}
\bigg|\int_{A_{\e_n}}f\bigg(\frac{x}{\e_n},\nabla u_n\bigg) \,dx-
\int_{Q_\nu}f^\infty\bigg(\frac{x}{\e_n},\nabla u_n\bigg)\,dx\bigg|& \leq & C\int_{A_{\e_n}}(1+|\nabla u_n|^{1-q})\,dx\nonumber\\
& \leq & C\big(\e_n+\e_n^q\|\nabla u_n\|^{1-q}_{L^1(Q_\nu;{\mathbb{R}}^{d \times N})}\big)\to 0\, \end{eqnarray} as $n \to \infty$. Gathering \eqref{pasidee1}, \eqref{pasidee1b}, \eqref{pasidee2} and \eqref{pasidee3}, we obtain $K(a,b,\nu_1)\leq \vartheta_{\rm hom}(a,b,\nu_1)+(\beta+1)\eta$ and the conclusion follows from the arbitrariness of $\eta$. \end{proof}
We are now in position to prove the upper bound on the jump part of the energy. The argument is based on Lemma \ref{upbdsurf} together with an approximation procedure of \cite{AMT}. In view of Lemma \ref{upperboundBV} and \eqref{decompupbd}, this will complete the proof of the upper bound ${\mathcal{F}}(u,\O)\leq {\mathcal{F}}_{\rm hom}(u)$.
\begin{corollary}\label{upbdjp} For every $u\in BV(\O;{\mathcal{M}})$, we have $${\mathcal{F}}(u,\O\cap S_u)\leq \int_{\O\cap S_u} \vartheta_{\rm hom}(u^+,u^-,\nu_u)\, d{\mathcal{H}}^{N-1}\,.$$ \end{corollary}
\begin{proof} First assume that $u$ takes a finite number of values, {\it i.e.}, $u\in BV(\O;T)$ for some finite subset $T\subset {\mathcal{M}}$. Then the conclusion directly follows from Proposition \ref{reppart} together with Lemma~\ref{upbdsurf}.
Fix an arbitrary function $u\in BV(\O;{\mathcal{M}})$ and an open set $A\in{\mathcal{A}}(\O)$. For $\delta_0>0$ small enough, let $\mathcal U:=\big\{s \in{\mathbb{R}}^d\,:\,\text{dist}(s,{\mathcal{M}})<\d_0\big\}$ be the $\d_0$-neighborhood of ${\mathcal{M}}$ on which the nearest point projection $\Pi: \mathcal U \to {\mathcal{M}}$ is a well defined Lipschitz mapping. We extend $\vartheta_{\rm hom}$ to a function $\hat \vartheta_{\rm hom}$ defined on ${\mathbb{R}}^d \times {\mathbb{R}}^d \times\mathbb{S}^{N-1}$ by setting $$\hat \vartheta_{\rm hom}(a,b,\nu):=\chi(a)\chi(b)\vartheta_{\rm hom}\bigg(\Pi(a),\Pi(b), \nu\bigg)\,, $$ where $\chi \in {\mathcal{C}}^\infty_c({\mathbb{R}}^d;[0,1])$ is a cut-off function satisfying $\chi(s)=1$ if $\text{dist}(s,{\mathcal{M}}) \leq \delta_0/2$, and $\chi(s)=0$ if $\text{dist}(s,{\mathcal{M}}) \geq 3\delta_0/4$. In view of Proposition \ref{contsurfenerg}, we infer that $\hat\vartheta_{\rm hom}$ is continuous and satisfies
$$|\hat\vartheta_{\rm hom}(a_1,b_1,\nu)-\hat\vartheta_{\rm hom}(a_2,b_2,\nu)|\leq C\big(|a_1-a_2|+|b_1-b_2|\big)\,,$$ and
$$\hat\vartheta_{\rm hom}(a_1,b_1,\nu)\leq C|a_1-b_1|\,,$$ for every $a_1$, $b_1$, $a_2$, $b_2\in{\mathbb{R}}^d$, $\nu\in{\mathbb{S}^{N-1}}$, and some constant $C>0$. Therefore we can apply Step~2 in the proof of \cite[Proposition 4.8]{AMT} to obtain a sequence $\{v_n\}\subset BV(\O;{\mathbb{R}}^d)$ such that, for every $n\in {\mathbb{N}}$, $v_n\in BV(\O;T_n)$ for some finite set $T_n\subset {\mathbb{R}}^d$, $v_n\to u$ in $L^\infty(\O;{\mathbb{R}}^d)$ and \begin{eqnarray*}
\limsup_{n\to+\infty}\,\int_{A\cap S_{v_n}}\hat\vartheta_{\rm hom}(v_n^+,v_n^-,\nu_{v_n})\, d{\mathcal{H}}^{N-1} & \leq & C|Du|(A\setminus S_u)+ \int_{A\cap S_{u}}\hat\vartheta_{\rm hom}(u^+,u^-,\nu_{u})\, d{\mathcal{H}}^{N-1}\\
& = & C|Du|(A\setminus S_u)+ \int_{A\cap S_{u}}\vartheta_{\rm hom}(u^+,u^-,\nu_{u})\, d{\mathcal{H}}^{N-1}\,. \end{eqnarray*}
Hence we may assume without loss of generality that for each $n\in {\mathbb{N}}$, $\|v_n - u\|_{L^\infty(\O;{\mathbb{R}}^d)} < \d_0/2$, and thus $\text{dist}(v_n^\pm(x),{\mathcal{M}}) \leq |v_n^\pm(x)-u^\pm(x)|< \d_0/2$ for ${\mathcal{H}}^{N-1}$-a.e. $x \in S_{v_n}$. In particular, we can define $$u_n:=\Pi(v_n)\,, $$ and then $u_n\in BV(\O;{\mathcal{M}})$, $u_n\to u$ in $L^1(\O;{\mathbb{R}}^d)$. Moreover, one may check that for each $n\in\mathbb{N}$, $S_{u_n}\subset S_{v_n}$ so that ${\mathcal{H}}^{N-1}\big(S_{u_n}\setminus(J_{u_n}\cap J_{v_n})\big)\leq {\mathcal{H}}^{N-1}(S_{u_n}\setminus J_{u_n})+ {\mathcal{H}}^{N-1}(S_{v_n}\setminus J_{v_n})=0$, and $$u_n^\pm(x)=\Pi(v_n^\pm(x)) \quad \text{ and } \quad \nu_{u_n}(x)=\nu_{v_n}(x) \quad\text{ for every $x\in J_{u_n}\cap J_{v_n}$}\,.$$ Consequently, \begin{multline}\label{estilimsurf} \limsup_{n\to+\infty}\, \int_{A\cap S_{u_n}}\vartheta_{\rm hom}(u_n^+,u_n^-,\nu_{u_n})\, d{\mathcal{H}}^{N-1} \,\leq \\ \leq \limsup_{n\to+\infty}\int_{A\cap S_{v_n}} \hat\vartheta_{\rm hom}(v_n^+,v_n^-,\nu_{v_n})\, d{\mathcal{H}}^{N-1}\,\leq\\
\leq C|Du|(A\setminus S_u)+ \int_{A\cap S_{u}}\vartheta_{\rm hom}(u^+,u^-,\nu_{u})\, d{\mathcal{H}}^{N-1}\,. \end{multline} Since $u_n$ takes a finite number of values, Proposition \ref{reppart} and Lemma \ref{upbdsurf} yield \begin{equation}\label{estisurfn} {\mathcal{F}}(u_n,A\cap S_{u_n})\leq \int_{A\cap S_{u_n}}\vartheta_{\rm hom}(u_n^+,u_n^-,\nu_{u_n})\, d{\mathcal{H}}^{N-1}\,, \end{equation} and, in view of Lemma \ref{measbis}, \begin{equation}\label{estidiffn} {\mathcal{F}}(u_n,A\setminus S_{u_n}) \leq C{\mathcal{L}}^N(A)\,. \end{equation} Combining \eqref{estilimsurf}, \eqref{estisurfn} and \eqref{estidiffn}, we deduce \begin{eqnarray*} \limsup_{n\to+\infty} \,{\mathcal{F}}(u_n,A) & = & \limsup_{n\to+\infty}\big({\mathcal{F}}(u_n,A\setminus S_{u_n})+{\mathcal{F}}(u_n,A\cap S_{u_n})\big)\\
& \leq & \int_{A\cap S_{u}}\vartheta_{\rm hom}(u^+,u^-,\nu_{u})\, d{\mathcal{H}}^{N-1}+ C\big({\mathcal{L}}^N(A)+|Du|(A\setminus S_u)\big)\,. \end{eqnarray*} On the other hand, ${\mathcal{F}}(\cdot,A)$ is lower semicontinuous with respect to the strong $L^1(A;{\mathbb{R}}^d)$-convergence, and thus $\displaystyle{\mathcal{F}}(u,A)\leq \liminf_{n\to+\infty}\,{\mathcal{F}}(u_n,A)$ which leads to $${\mathcal{F}}(u,A)\leq \int_{A\cap S_{u}}\vartheta_{\rm hom}(u^+,u^-,\nu_{u})\, d{\mathcal{H}}^{N-1}+
C\big({\mathcal{L}}^N(A)+|Du|(A\setminus S_u)\big)\,.$$ Since $A$ is arbitrary, the above inequality holds for any open set $A\in{\mathcal{A}}(\O)$ and, by Remark \ref{measconstr}, it also holds if $A$ is any Borel subset of $\O$. Then taking $A=\O\cap S_u$ yields the desired inequality. \end{proof}
\section{The lower bound}
\noindent We address in this section the $\G$-$\liminf$ inequality. Using the blow-up method, we follow the approach of \cite{FM2}, estimating separately the Cantor part and the jump part, while the bulk part is obtained exactly as in the $W^{1,1}$ analysis, see \cite[Lemma 5.2]{BM}.
\begin{lemma}\label{lowerboundBV} For every $u \in BV(\O;{\mathcal{M}})$, we have ${\mathcal{F}}(u,\O)\geq {\mathcal{F}}_{\rm hom} (u)$. \end{lemma}
\begin{proof} Let $u \in BV(\O;{\mathcal{M}})$ and $\{u_n\} \subset W^{1,1}(\O;{\mathcal{M}})$ be such that $${\mathcal{F}}(u,\O)= \lim_{n \to +\infty}\int_\O f \left(\frac{x}{\e_n},\nabla u_n \right)dx\,.$$ Define the sequence of nonnegative Radon measures $$\mu_n:=f \left(\frac{\cdot}{\e_n},\nabla u_n \right){\mathcal{L}}^N \res\, \O\,.$$ Up to the extraction of a subsequence, we can assume that there exists a nonnegative Radon measure $\mu \in {\mathcal{M}}(\O)$ such that $\mu_n \xrightharpoonup[]{*} \mu$ in ${\mathcal{M}}(\O)$. By the Besicovitch Differentiation Theorem, we can split $\mu$ into the sum of four mutually singular nonnegative measures $\mu=\mu^a + \mu^j+\mu^c+\mu^s$ where $\mu^a \ll \mathcal L^N$,
$\mu^j \ll {\mathcal{H}}^{N-1}\res\, S_u$ and $\mu^c \ll |D^c u|$. Since we have $\mu(\O) \leq {\mathcal{F}}(u,\O)$, it is enough to check that \begin{equation}\label{lambda^a} \frac{d\mu}{d{\mathcal{L}}^N}(x_0) \geq Tf_\text{hom}(u(x_0),\nabla u(x_0))\quad \text{ for }{\mathcal{L}}^N\text{-a.e. }x_0 \in \O\,, \end{equation} \begin{equation}\label{lambda^c}
\frac{d\mu}{d|D^c u|}(x_0) \geq Tf_\text{hom}^\infty\left(\tilde u(x_0),\frac{dD^c u}{d|D^c u|}(x_0)\right)\quad \text{ for }|D^c u|\text{-a.e. }x_0 \in \O\,, \end{equation} and \begin{equation}\label{lambda^j} \frac{d\mu}{d{\mathcal{H}}^{N-1}\res\, S_u}(x_0) \geq \vartheta_\text{hom}(u^+(x_0),u^-(x_0),\nu_u(x_0))\quad \text{ for }{\mathcal{H}}^{N-1}\text{-a.e. }x_0 \in S_u\,. \end{equation} The proof of (\ref{lambda^a}) follows the one in \cite[Lemma 5.2]{BM}, and we shall omit it. The proofs of (\ref{lambda^c}) and (\ref{lambda^j}) are postponed to the remainder of this section. \end{proof}
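\noindent For the reader's convenience, let us record why \eqref{lambda^a}, \eqref{lambda^c} and \eqref{lambda^j} are indeed sufficient (we only sketch this standard argument). Since $\mu^a$, $\mu^j$, $\mu^c$ and $\mu^s$ are nonnegative and mutually singular, the Besicovitch Differentiation Theorem yields
$${\mathcal{F}}(u,\O)\geq \mu(\O)\geq \int_\O \frac{d\mu}{d{\mathcal{L}}^N}\, dx+\int_\O \frac{d\mu}{d|D^c u|}\, d|D^c u|+\int_{\O\cap S_u} \frac{d\mu}{d{\mathcal{H}}^{N-1}\res\, S_u}\, d{\mathcal{H}}^{N-1}\,,$$
and once \eqref{lambda^a}, \eqref{lambda^c} and \eqref{lambda^j} hold, the right-hand side is bounded from below by ${\mathcal{F}}_{\rm hom}(u)$ in view of the integral representation of ${\mathcal{F}}_{\rm hom}$.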
\noindent {\bf Proof of \eqref{lambda^c}.} The lower bound on the density of the Cantor part will be achieved in three steps. We shall use the blow-up method to reduce the study to constant limits, and then a truncation argument as in the proof of \cite[Lemma 5.2]{BM}, to replace the starting sequence by a uniformly converging one. \vskip5pt
{\bf Step 1.} Choose a point $x_0 \in \O$ such that \begin{equation}\label{cantor2}
\lim_{\rho \to 0^+}- \hskip -1em \int_{Q(x_0,\rho)}|u(x)-\tilde u(x_0)|\, dx=0\,, \end{equation} \begin{equation}\label{cantor3}
A(x_0):=\lim_{\rho \to 0^+}\frac{Du(Q(x_0,\rho))}{|Du|(Q(x_0,\rho))}\in [T_{\tilde u(x_0)}({\mathcal{M}})]^N\;\text{ is a rank one matrix with }\;|A(x_0)|=1\,, \end{equation} \begin{equation}\label{cantor1}
\frac{d\mu}{d|D^cu|}(x_0) \quad \text{exists and is finite and}\quad
\frac{d|Du|}{d|D^cu|}(x_0)=1\,, \end{equation} \begin{equation}\label{cantor4}
\lim_{\rho \to 0^+}\frac{|Du|(Q(x_0,\rho))}{\rho^{N-1}}=0 \quad\text{and}\quad
\lim_{\rho \to 0^+}\frac{|Du|(Q(x_0,\rho))}{\rho^N}=+\infty\,, \end{equation} \begin{equation}\label{cantor5}
\liminf_{\rho \to 0^+}\,\frac{|Du|(Q(x_0,\rho) \setminus Q(x_0,\tau\rho))}{|Du|(Q(x_0,\rho))}\leq 1-\tau^N\quad\text{for every $0<\tau<1$}\,. \end{equation}
It turns out that $|D^c u|$-a.e. $x_0\in\O$ satisfies these properties. Indeed, (\ref{cantor1}) is immediate, while
(\ref{cantor2}) is a consequence of the fact that $S_u$ is $|D^c u|$-negligible. Property (\ref{cantor3}) comes from Alberti's Rank One Theorem together with Lemma \ref{manifold}, (\ref{cantor4}) from \cite[Proposition~3.92~(a),~(c)]{AFP} and (\ref{cantor5}) from \cite[Lemma~2.13]{FM2}. Write $A(x_0)=a \otimes \nu$ for some $a \in T_{\tilde u(x_0)}({\mathcal{M}})$ and $\nu \in {\mathbb{S}^{N-1}}$. Upon rotating the coordinate axes, one may assume without loss of generality that $\nu=e_N$. To simplify the notation, we set $s_0:=\tilde u(x_0)$ and $A_0:=A(x_0)$. \vskip5pt
Fix $t\in(0,1)$ arbitrarily close to $1$, and in view of \eqref{cantor5}, find a sequence $\rho_k \searrow 0^+$ such that \begin{equation}\label{cantor5bis}
\limsup_{k\to+\infty}\,\frac{|Du|(Q(x_0,\rho_k) \setminus Q(x_0,t\rho_k))}{|Du|(Q(x_0,\rho_k))}\leq 1-t^N\,. \end{equation} Now fix $t<\gamma<1$ and set $\gamma':=(1+\gamma)/2$. Using (\ref{cantor1}), we derive \begin{multline}\label{infmu}
\frac{d\mu}{d|D^cu|}(x_0) =\lim_{k \to +\infty}\frac{\mu(Q(x_0,\rho_k))}{|Du|(Q(x_0,\rho_k))}\geq
\limsup_{k \to +\infty}\,\frac{\mu(\overline{Q(x_0,\gamma'\rho_k)})}{|Du|(Q(x_0,\rho_k))}\geq\\ \geq \limsup_{k\to+\infty}\,\limsup_{n\to+\infty}\,
\frac{1}{|Du|(Q(x_0,\rho_k))}\int_{Q(x_0,\gamma' \rho_k)}f\bigg(\frac{x}{\e_n},\nabla u_n\bigg)dx\,. \end{multline} Arguing as in the proof of \cite[Lemma 5.2]{BM} with minor modifications, we construct a sequence $\{\bar v_{n}\} \subset W^{1,\infty}(Q(0,\rho_k);{\mathbb{R}}^d)$ satisfying $\bar v_{n} \to u(x_0+\cdot)$ in $L^1(Q(0,\rho_k);{\mathbb{R}}^d)$ and \begin{equation}\label{passuv} \limsup_{n\to+\infty}\, \int_{Q(x_0,\gamma' \rho_k)}f\bigg(\frac{x}{\e_n},\nabla u_n\bigg)\, dx\geq \limsup_{n\to+\infty}\, \int_{Q(0,\gamma \rho_k)}g\bigg(\frac{x}{\e_n}, \bar v_n, \nabla \bar v_n\bigg)\, dx\,, \end{equation} where $g$ is given by \eqref{defg}. Setting $w_{n,k}(x):=\bar v_{n}(\rho_k\, x)$, a change of variable together with \eqref{infmu} and \eqref{passuv} yields \begin{equation}\label{c1}
\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty} \limsup_{n
\to +\infty}\, \frac{\rho_k^N}{|Du|(Q(x_0,\rho_k))} \int_{\gamma Q} g\left(\frac{\rho_k\, x}{\e_n},w_{n,k}, \frac{1}{\rho_k} \nabla w_{n,k}\right)dx\,. \end{equation} Then we infer from (\ref{cantor2}) that \begin{equation}\label{c2}
\lim_{k \to +\infty} \lim_{n \to +\infty}\int_Q|w_{n,k}-s_0|\, dx=0\,, \end{equation} and \begin{multline}\label{c3} \lim_{k \to +\infty} \lim_{n \to
+\infty}\frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))}\int_Q
\bigg|w_{n,k}(x)-u(x_0+\rho_k x)\\
- \int_Q \big(w_{n,k}(y)-u(x_0+\rho_k y)\big)\, dy\bigg|\, dx =0\,. \end{multline} By \eqref{c1}, \eqref{c2} and (\ref{c3}), we can extract a diagonal sequence $n_k \to+\infty$ such that $\d_k:=\e_{n_k}/\rho_k\to 0$, $w_k:=w_{n_k,k}\to s_0$ in $L^1(Q;{\mathbb{R}}^d)$,
$$\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty}\,
\frac{\rho_k^N}{|Du|(Q(x_0,\rho_k))} \int_{\gamma Q} g\left(\frac{x}{\d_k},w_k, \frac{1}{\rho_k} \nabla w_k\right)dx\,,$$ and \begin{equation}\label{c5}
\lim_{k \to +\infty}\frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))}\int_Q
\bigg|w_k(x)-u(x_0+\rho_k\, x) - \int_Q \big(w_k(y)-u(x_0+\rho_k\, y)\big)\, dy\bigg|\,dx =0\,. \end{equation} \vskip5pt
{\bf Step 2.} Now we reproduce the truncation argument used in Step 2 of the proof of \cite[Lemma~5.2]{BM} with minor modifications (make use of (\ref{cantor4}) and \cite[Lemma~2.12]{FM2} instead of \cite[Lemma~2.6]{FM}, see \cite{FM2} for details). Setting $a_k:=\int_Q w_k(y)\, dy$, it yields a sequence of cut-off functions $\{\zeta_k\} \subset {\mathcal{C}}^\infty_c({\mathbb{R}};[0,1])$ such that
$\zeta_k(\tau)=1$ if $|\tau| \leq s_k$, $\zeta_k(\tau)=0$ if
$|\tau|\geq t_k$ for some
$$\|w_k-a_k\|^{1/2}_{L^1(Q;{\mathbb{R}}^d)}<s_k<t_k<\|w_k-a_k\|^{1/3}_{L^1(Q;{\mathbb{R}}^d)}\,,$$
for which $\overline w_k:=a_k +\zeta_k(|w_k-a_k|)(w_k-a_k)\in W^{1,1}(Q;{\mathbb{R}}^d)$ satisfies $\overline w_k \to s_0$ in $L^\infty(Q;{\mathbb{R}}^d)$ and \begin{equation}\label{c6}
\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty}\,
\frac{\rho_k^N}{|Du|(Q(x_0,\rho_k))} \int_{\gamma Q} g\left(\frac{x}{\d_k},\overline w_k, \frac{1}{\rho_k} \nabla \overline w_k\right)dx\,. \end{equation} In view of the coercivity condition (\ref{pgrowth}), (\ref{cantor1}) and (\ref{c6}),
$$\sup_{k \in {\mathbb{N}}}\, \frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))} \int_{\gamma Q} |\nabla \overline w_k|\, dx<+\infty\,.$$
Therefore, (\ref{moduluscont}), (\ref{c6}) and $\|\overline w_k -s_0\|_{L^\infty(Q;{\mathbb{R}}^d)}\to 0$ lead to
$$\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty}\,
\frac{\rho_k^N}{|Du|(Q(x_0,\rho_k))} \int_{\gamma Q} g\left(\frac{x}{\d_k},s_0, \frac{1}{\rho_k} \nabla \overline w_k\right)dx\,.$$ Next we define the three following sequences for every $x\in Q$, $$\left\{\begin{array}{l}
\displaystyle \overline u_k(x):=\frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))} \bigg(u(x_0+\rho_k \, x) -\int_Q u(x_0 +\rho_k\, y)\, dy \bigg)\,,\\[0.4cm]
\displaystyle z_k(x):=\frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))} \big(w_k(x) -a_k \big)\,,\\[0.4cm]
\displaystyle \overline z_k(x):=\frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))} \big(\overline w_k(x) - a_k \big)\,. \end{array}\right.$$
As a consequence of (\ref{c5}) we have $\|z_k - \overline u_k
\|_{L^1(Q;{\mathbb{R}}^d)} \to 0$, and since
$$\int_Q \overline u_k(x)\, dx = 0 \quad \text{ and } \quad |D\overline u_k|(Q)=1\,,$$ it follows that the sequence $\{\overline u_k\}$ is bounded in $BV(Q;{\mathbb{R}}^d)$ and thus relatively compact in $L^1(Q;{\mathbb{R}}^d)$. Hence $\{\overline u_k\}$ is equi-integrable, and consequently so is $\{z_k\}$. Up to a subsequence, $\overline u_k$ converges in $L^1(Q;{\mathbb{R}}^d)$ to some function $v\in BV(Q;{\mathbb{R}}^d)$, and then $z_k\to v$ in $L^1(Q;{\mathbb{R}}^d)$. By \cite[Theorem~3.95]{AFP} the limit $v$ is representable by $$v(x)=a\, \theta(x_N)$$ for some increasing function $\theta\in BV((-1/2,1/2);{\mathbb{R}})$ (recall that we assume $A_0=a \otimes e_N$).
By construction, $\overline w_k$ coincides with $w_k$ in the set
$\{|w_k-a_k| \leq s_k\}$. Hence \begin{multline}\label{ei}
\|\overline z_k - z_k\|_{L^1(Q;{\mathbb{R}}^d)} = \frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))} \int_{\{|w_k-a_k|> s_k\}}|w_k(x)-\overline w_k(x)|\, dx\leq \\
\leq \frac{\rho_k^{N-1}}{|Du|(Q(x_0,\rho_k))} \int_{\{|w_k-a_k|> s_k\}}|w_k(x)-a_k|\, dx= \int_{\{|w_k-a_k|> s_k\}} |z_k(x)|\, dx\,. \end{multline} By Chebyshev's inequality, we have
\begin{equation}\label{ln}{\mathcal{L}}^N(\{|w_k-a_k|> s_k\}) \leq \frac{1}{s_k}\int_Q|w_k(x)-a_k|\, dx
\leq \|w_k-a_k\|^{1/2}_{L^1(Q;{\mathbb{R}}^d)}\to 0\,, \end{equation}
and thus (\ref{ei}), (\ref{ln}) and the equi-integrability of $\{z_k\}$ imply $\|\overline z_k -
z_k\|_{L^1(Q;{\mathbb{R}}^d)} \to 0$. Therefore $\overline z_k \to v$ in $L^1(Q;{\mathbb{R}}^d)$, and setting
$\a_k:=|Du|(Q(x_0,\rho_k))/\rho_k^N \to +\infty$, \begin{equation}\label{tilde}
\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty}\, \frac{1}{\a_k} \int_{\gamma Q} g\left(\frac{x}{\d_k},s_0,\a_k \nabla \overline z_k\right)dx\,. \end{equation} Using (\ref{grec}) and the positive $1$-homogeneity of the recession function $g^\infty(y,s,\cdot)$, we infer that \begin{align*}
\int_{\gamma Q} \left| \frac{1}{\a_k}\, g\left(\frac{x}{\d_k},s_0,\a_k \nabla \overline z_k\right) -
g^\infty\left(\frac{x}{\d_k},s_0, \nabla \overline z_k\right)\right| dx &\leq \frac{C}{\a_k} \int_{\gamma Q} (1+ \a_k^{1-q} |\nabla \overline z_k|^{1-q})\, dx\\
&\leq C\big(\a_k^{-1} + \a_k^{-q} \|\nabla \overline z_k\|^{1-q}_{L^1(\gamma Q;{\mathbb{R}}^{d \times N})}\big) \to 0\,, \end{align*} where we have used H\"older's inequality and the boundedness of $\{\nabla \overline z_k\}$ in $L^1(\gamma Q;{\mathbb{R}}^{d \times N})$ (which follows from (\ref{pgrowth}) and \eqref{tilde}). Consequently,
$$\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty}\, \int_{\gamma Q} g^\infty \left(\frac{x}{\d_k},s_0,\nabla \overline z_k\right)dx\,.$$ \vskip5pt
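We also point out, for completeness, that the H\"older step invoked above is nothing but the elementary estimate (with exponents $\frac{1}{1-q}$ and $\frac1q$, and using ${\mathcal{L}}^N(\gamma Q)\leq 1$)
$$\int_{\gamma Q}|\nabla \overline z_k|^{1-q}\, dx\leq \big({\mathcal{L}}^N(\gamma Q)\big)^{q}\bigg(\int_{\gamma Q}|\nabla \overline z_k|\, dx\bigg)^{1-q}\leq \|\nabla \overline z_k\|^{1-q}_{L^1(\gamma Q;{\mathbb{R}}^{d \times N})}\,,$$
which, after multiplication by $C\a_k^{-q}$, gives the second term in the previous bound. \vskip5pt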
{\bf Step 3.} Extend $\theta$ continuously to ${\mathbb{R}}$ by the values of its traces at $\pm 1/2$. Define $v_k(x)=v_k(x_N):=a \theta * \varrho_k (x_N)$ where $\varrho_k$ is a sequence of (one dimensional) mollifiers. Then $v_k \to v$ in $L^1(Q;{\mathbb{R}}^d)$ and thus, since $\overline u_k - v_k \to 0$ in $L^1(Q;{\mathbb{R}}^d)$, it follows that (up to a subsequence) \begin{equation}\label{D} D\overline u_k(\tau Q) - Dv_k(\tau Q) \to 0 \end{equation} for ${\mathcal{L}}^1$-a.e. $\tau \in (0,1)$. Fix $\tau\in (t,\gamma)$
for which \eqref{D} holds. Since $\|\bar z_k -v_k\|_{L^1(Q;{\mathbb{R}}^d)}\to 0$, one can use a standard cut-off function argument (see \cite[p. 29--30]{FM2}) to modify the sequence $\{\overline z_k\}$ and produce a new sequence $\{\overline\varphi_k\} \subset W^{1,\infty}(\tau Q;{\mathbb{R}}^d)$ satisfying $\overline \varphi_k \to v$ in $L^1(\tau Q;{\mathbb{R}}^d)$, $\overline \varphi_k=v_k$ on a neighborhood of $\partial (\tau Q)$ and \begin{equation}\label{may}
\frac{d\mu}{d|D^cu|}(x_0) \geq \limsup_{k \to +\infty} \int_{\tau Q} g^\infty \left(\frac{x}{\d_k},s_0,\nabla \overline \varphi_k\right)dx\,. \end{equation} A simple computation shows that \begin{equation}\label{D2} D\overline u_k (\tau Q)=\frac{Du(Q(x_0,\tau
\rho_k))}{|Du|(Q(x_0,\rho_k))}\quad \text{ and }\quad Dv_k(\tau Q)=\tau^N \,A_k\,, \end{equation} where $A_k \in {\mathbb{R}}^{d \times N}$ is the matrix given by $$A_k:= a \otimes e_N \frac{\theta *\varrho_k (\tau/2)- \theta *\varrho_k (-\tau/2)}{\tau}\,.$$ We observe that $A_k$ is bounded in $k$ since $\theta$ has bounded variation.
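For the reader's convenience, we note that the second identity in \eqref{D2} follows from the elementary computation
$$Dv_k(\tau Q)=\int_{\tau Q} a\otimes e_N\, (\theta*\varrho_k)'(x_N)\, dx= \tau^{N-1}\, a\otimes e_N\,\big(\theta*\varrho_k(\tau/2)-\theta*\varrho_k(-\tau/2)\big)=\tau^N A_k\,,$$
while the first one follows from the scaling relation $D\big[u(x_0+\rho_k\,\cdot)\big](\tau Q)=\rho_k^{1-N}\,Du(Q(x_0,\tau\rho_k))$ applied to the definition of $\overline u_k$.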
Let $m_k:=[\tau/\delta_k]+1\in {\mathbb{N}}$, and define for $x=(x',x_N)\in \delta_km_k Q$, $$\varphi_k(x):=\begin{cases} \overline \varphi_k(x)-A_kx & \text{if $x\in \tau Q$}\,,\\
v_k(x_N)-A_k\, x & \text{if $|x_N|\leq \tau/2$ and $|x'|\geq \tau/2$}\, ,\\ v_k(\tau/2)-A_k(x',\tau/2) & \text{if $x_N\geq \tau/2$}\,,\\ v_k(-\tau/2)-A_k(x',-\tau/2) & \text{if $x_N\leq -\tau/2$}\,. \end{cases}$$ One may check that $\varphi_k\in W^{1,\infty}(\delta_km_kQ;{\mathbb{R}}^d)$, $\varphi_k$ is $\delta_km_k$-periodic, and that \begin{equation}\label{periodization}
\limsup_{k \to +\infty} \int_{\tau Q} g^\infty \left(\frac{x}{\d_k},s_0,\nabla \overline \varphi_k\right)dx= \limsup_{k \to +\infty} \int_{\delta_km_k Q} g^\infty \left(\frac{x}{\d_k},s_0,A_k+\nabla \varphi_k\right)dx\,. \end{equation} Setting $\phi_k(y):=\tau^{N}\delta_k^{-1}\varphi_k(\delta_k y)$ for $y\in m_k Q$, we have $\phi_k\in W^{1,\infty}_{\#}(m_kQ;{\mathbb{R}}^d)$, and a change of variables yields \begin{align} \nonumber\int_{\delta_km_k Q} g^\infty \left(\frac{x}{\d_k},s_0,A_k+\nabla \varphi_k\right)dx&= \tau^{-N}\delta_k^Nm_k^N- \hskip -1em \int_{m_k Q} g^\infty \left(y,s_0,\tau^{N}A_k+\nabla \phi_k\right)dy \\ \label{compper} &\geq \tau^{-N}\delta_k^Nm_k^N (g^\infty)_{\rm hom}(s_0,\tau^N A_k)\,, \end{align} since $(g^\infty)_\text{hom}$ can be computed as follows (see Remark \ref{reminfty} and {\it e.g., } \cite[Remark~14.6]{BD}), \begin{eqnarray*} (g^\infty)_\text{hom}(s,\xi) = \inf \left\{- \hskip -1em \int_{(0,m)^N} g^\infty(y,s,\xi +\nabla \phi(y))\, dy : m\in {\mathbb{N}},\, \phi \in W^{1,\infty}_\#((0,m)^N;{\mathbb{R}}^d) \right\}\,. \end{eqnarray*} Gathering \eqref{may}, \eqref{periodization} and \eqref{compper}, we derive \begin{equation*}
\frac{d\mu}{d|D^cu|}(x_0) \geq\limsup_{k \to +\infty} \,(g^\infty)_\text{hom}(s_0,\tau^N A_k)\,. \end{equation*} In view of (\ref{D}), (\ref{D2}), (\ref{cantor5bis}) and (\ref{cantor3}), we have \begin{multline*}
\limsup_{k \to +\infty}|\tau^N A_k -A_0|=\limsup_{k \to +\infty}|Dv_k(\tau Q) - A_0|=\limsup_{k \to +\infty}|D\overline u_k(\tau Q)-A_0|=\\
=\limsup_{k \to +\infty}\left| \frac{Du(Q(x_0,\tau \rho_k))}{|Du|(Q(x_0,\rho_k))}-A_0\right| \leq \limsup_{k \to +\infty}
\frac{|Du|(Q(x_0,\rho_k)\setminus Q(x_0,\tau\rho_k))}
{|Du|(Q(x_0,\rho_k))}\leq 1- t^{N}\,. \end{multline*} By Remark \ref{reminfty}, $(g^\infty)_\text{hom}(s_0,\cdot)$ is Lipschitz continuous, and consequently
$$\frac{d\mu}{d|D^cu|}(x_0) \geq (g^\infty)_\text{hom}(s_0,A_0)-C(1-t^{N})\,.$$ From the arbitrariness of $t$, we finally infer that
$$\frac{d\mu}{d|D^cu|}(x_0) \geq (g^\infty)_\text{hom}(s_0,A_0)\,. $$ Since $s_0 \in {\mathcal{M}}$ and $A_0 \in [T_{s_0}({\mathcal{M}})]^N$, Remark \ref{reminfty} and \eqref{remdensinf} yield $(g^\infty)_\text{hom}(s_0,A_0)= T(f^\infty)_\text{hom}(s_0,A_0)\geq Tf_\text{hom}^\infty(s_0,A_0)$, and the proof is complete. \prbox \vskip10pt
\noindent{\bf Proof of \eqref{lambda^j}.} The strategy used in this part follows the one already used for the bulk and Cantor parts. It still rests on the blow-up method together with the projection argument in Proposition~\ref{proj}. \vskip5pt
{\bf Step 1.} Let $x_0 \in S_u$ be such that \begin{equation}\label{jump1}
\lim_{\rho \to 0^+}- \hskip -1em \int_{Q_{\nu_u(x_0)}^\pm(x_0,\rho)} |u(x)-u^\pm(x_0)|\, dx=0\,, \end{equation} where $u^\pm(x_0) \in {\mathcal{M}}$, \begin{equation}\label{jump2} \lim_{\rho \to 0^+}\frac{{\mathcal{H}}^{N-1}(S_u \cap Q_{\nu_u(x_0)}(x_0,\rho))}{\rho^{N-1}}=1\,, \end{equation} and such that the Radon-Nikod\'ym derivative of $\mu$ with respect to ${\mathcal{H}}^{N-1}\res \, S_u$ exists and is finite. By Lemma~\ref{manifold}, Theorem 3.78 and Theorem 2.83 (i) in \cite{AFP} (with cubes instead of balls), it turns out that ${\mathcal{H}}^{N-1}$-a.e. $x_0\in S_u$ satisfies these properties. Set $s_0^\pm:=u^\pm(x_0)$, $\nu_0:=\nu_u(x_0)$.
Up to a further subsequence, we may assume that $(1+|\nabla u_n|) {\mathcal{L}}^N \res\, \O \xrightharpoonup[]{*} \lambda$ in ${\mathcal{M}}(\O)$ for some nonnegative Radon measure $\lambda \in {\mathcal{M}}(\O)$. Consider a sequence $\rho_k \searrow 0^+$ satisfying $\mu(\partial Q_{\nu_0}(x_0,\rho_k))=\lambda(\partial Q_{\nu_0}(x_0,\rho_k))=0$ for each $k \in {\mathbb{N}}$. Using (\ref{jump2}) we derive \begin{multline*} \frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0)=\lim_{k \to +\infty}\frac{\mu(Q_{\nu_0}(x_0,\rho_k))}{{\mathcal{H}}^{N-1}(S_u \cap Q_{\nu_0}(x_0,\rho_k))} =\lim_{k \to +\infty}\frac{\mu(Q_{\nu_0}(x_0,\rho_k))}{\rho_k^{N-1}}=\\ = \lim_{k \to +\infty}\lim_{n \to +\infty}\frac{1}{\rho_k^{N-1}}\int_{Q_{\nu_0}(x_0,\rho_k)} f\left(\frac{x}{\e_n},\nabla u_n \right)dx\,. \end{multline*} Thanks to Theorem \ref{density}, one can assume without loss of generality that $u_n \in \mathcal D(\O;{\mathcal{M}})$ for each $n \in {\mathbb{N}}$. Arguing exactly as in Step 1 of the proof of \cite[Lemma 5.2]{BM} (with $Q_{\nu_0}(x_0,\rho_k)$ instead of $Q(x_0,\rho_k)$) we obtain a sequence $\{v_n\} \subset \mathcal D(Q_{\nu_0}(0,\rho_k);{\mathcal{M}})$ such that $v_n \to u(x_0+\cdot)$ in $L^1(Q_{\nu_0}(0,\rho_k);{\mathbb{R}}^d)$ as $n \to +\infty$, and $$\frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0) \geq \limsup_{k \to +\infty}\,\limsup_{n \to +\infty}\, \frac{1}{\rho_k^{N-1}}\int_{Q_{\nu_0}(0,\rho_k)} f\left(\frac{x}{\e_n},\nabla v_n \right)dx$$ (note that the construction process to obtain $v_n$ from $u_n$ does not affect the manifold constraint). Changing variables and setting $w_{n,k}(x)=v_n(\rho_k\, x)$ leads to $$\frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0) \geq \limsup_{k \to +\infty}\,\limsup_{n \to +\infty} \rho_k\int_{Q_{\nu_0}} f\left(\frac{\rho_k \, x}{\e_n},\frac{1}{\rho_k}\nabla w_{n,k} \right)dx\,.$$ Defining $$u_0(x):=\begin{cases} s_0^+ & \text{ if } x\cdot \nu_0 > 0\,,\\ s_0^- & \text{ if } x\cdot \nu_0 \leq 0\,, \end{cases}$$ we infer from (\ref{jump1}) that $$\lim_{k \to +\infty}\lim_{n \to +\infty}
\int_{Q_{\nu_0}}|w_{n,k}-u_0|\, dx=0\,.$$ By a standard diagonal argument, we find a sequence $n_k \nearrow +\infty$ such that
$\d_k:=\e_{n_k}/\rho_k\to 0$, $w_k:=w_{n_k,k} \in \mathcal D(Q_{\nu_0};{\mathcal{M}})$ converges to $u_0$ in $L^1(Q_{\nu_0};{\mathbb{R}}^d)$, and \begin{equation}\label{dhnwj} \frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0) \geq \limsup_{k \to +\infty}\, \rho_k\int_{Q_{\nu_0}}f\left(\frac{x}{\d_k},\frac{1}{\rho_k}\nabla w_k \right)dx\,. \end{equation} According to $(H_4)$ and the positive $1$-homogeneity of $f^\infty(y,\cdot)$, we have \begin{align}\label{dhnwj2}
\nonumber\int_{Q_{\nu_0}}\left|\rho_k\, f\left(\frac{x}{\d_k},\frac{1}{\rho_k}\nabla w_k \right) -
f^\infty\left(\frac{x}{\d_k},\nabla w_k \right)\right| dx & \leq C\rho_k\int_{Q_{\nu_0}} (1 + \rho_k^{q-1}
|\nabla w_k|^{1-q})\, dx\\
&\leq C\left(\rho_k +\rho_k^q \|\nabla w_k\|^{1-q}_{L^1(Q_{\nu_0};{\mathbb{R}}^{d \times N})}\right)\,, \end{align} where we have used H\"older's inequality and $0<q<1$. From (\ref{dhnwj}) and the coercivity condition $(H_2)$, it follows that $\{\nabla w_k\}$ is uniformly bounded in $L^1(Q_{\nu_0};{\mathbb{R}}^{d \times N})$. Gathering (\ref{dhnwj}) and (\ref{dhnwj2}) yields \begin{equation}\label{j1} \frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0) \geq \limsup_{k \to +\infty}\, \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla w_k \right)dx\,. \end{equation} \vskip5pt
{\bf Step 2.} Now it remains to modify the value of $w_k$ on a neighborhood of $\partial Q_{\nu_0}$ in order to get an admissible test function for the surface energy density. We argue as in \cite[Lemma~5.2]{AEL}. Using the notations of Subsection~\ref{sectsurf}, we consider $\g \in \mathcal G(s_0^+,s_0^-)$, and set $$\psi_k(x):=\g\left(\frac{x\cdot\nu_0}{\d_k} \right)\,.$$ Using a De Giorgi type slicing argument, we shall modify $w_k$ in order to get a function which matches $\psi_k$ on $\partial Q_{\nu_0}$. To this end, define
$$r_k:=\|w_k-\psi_k\|^{1/2}_{L^1(Q_{\nu_0};{\mathbb{R}}^d)}\,, \quad M_k:=k[1+\|w_k\|_{W^{1,1}(Q_{\nu_0};{\mathbb{R}}^d)}
+\|\psi_k\|_{W^{1,1}(Q_{\nu_0};{\mathbb{R}}^d)}]\,, \quad \ell_k:=\frac{r_k}{M_k}\,.$$ Since $\psi_k$ and $w_k$ converge to $u_0$ in $L^1(Q_{\nu_0};{\mathbb{R}}^d)$, we have $r_k \to 0$, and one may assume that $0<r_k<1$. Set $$Q^{(i)}_k:=(1-r_k+i\, \ell_k) Q_{\nu_0} \quad \text{ for }i=0,\ldots,M_k\,.$$ For every $i \in \{1,\ldots,M_k\}$, consider a cut-off function $\varphi^{(i)}_k \in {\mathcal{C}}^\infty_c(Q^{(i)}_k;[0,1])$ satisfying
$\varphi^{(i)}_k=1$ on $Q^{(i-1)}_k$ and $|\nabla
\varphi^{(i)}_k|\leq c/\ell_k$. Define $$z^{(i)}_k:=\varphi^{(i)}_k w_k + (1-\varphi^{(i)}_k)\psi_k\in W^{1,1}(Q_{\nu_0};{\mathbb{R}}^d)\,,$$ so that $z_k^{(i)}=w_k$ in $Q^{(i-1)}_k$, and $z_k^{(i)}=\psi_k$ in $Q_{\nu_0} \setminus Q^{(i)}_k$. Since $z^{(i)}_k$ is smooth outside a finite union of sets contained in some $(N-2)$-dimensional submanifolds and $z^{(i)}_k(x) \in {\rm co}({\mathcal{M}})$ for a.e. $x\in Q_{\nu_0}$, one can apply Proposition \ref{proj} to obtain new functions $\hat z^{(i)}_k \in W^{1,1}(Q_{\nu_0};{\mathcal{M}})$ such that $\hat z^{(i)}_k= z^{(i)}_k$ on $(Q_{\nu_0} \setminus Q^{(i)}_k) \cup Q^{(i-1)}_k$, and \begin{align*}
\int_{Q^{(i)}_k \setminus Q^{(i-1)}_k}|\nabla \hat z^{(i)}_k|\, dx
&\leq C_\star \int_{Q^{(i)}_k \setminus Q^{(i-1)}_k}|\nabla z^{(i)}_k|\, dx\\
&\leq C_\star \int_{Q^{(i)}_k \setminus Q^{(i-1)}_k}\left(|\nabla w_k|+|\nabla \psi_k| + \frac{1}{\ell_k}|w_k-\psi_k| \right)\, dx\,. \end{align*} In particular $\hat z^{(i)}_k \in \mathcal B_{\d_k}(s_0^+,s_0^-,\nu_0)$, and by the growth condition (\ref{finfty1gc}), \begin{multline*} \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla \hat z^{(i)}_k\right)dx \leq \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla w_k
\right)dx+C\int_{Q_{\nu_0} \setminus Q^{(i)}_k} |\nabla \psi_k|\, dx\,+\\
+C\int_{Q^{(i)}_k\setminus Q^{(i-1)}_k} \left(|\nabla w_k|+|\nabla
\psi_k| + \frac{1}{\ell_k}|w_k-\psi_k| \right)\, dx\,. \end{multline*} Summing up over all $i=1,\ldots,M_k$ and dividing by $M_k$, we get that \begin{multline*} \frac{1}{M_k}\sum_{i=1}^{M_k}\int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla \hat z^{(i)}_k\right)dx \leq \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla w_k \right)dx\,+\\
+ C\int_{Q_{\nu_0} \setminus Q^{(0)}_k} |\nabla \psi_k|\, dx
+\frac{C}{k}+C\|w_k-\psi_k\|^{1/2}_{L^1(Q_{\nu_0};{\mathbb{R}}^d)}\,. \end{multline*}
Since $$\int_{Q_{\nu_0} \setminus Q^{(0)}_k}|\nabla \psi_k|\, dx \leq \mathbf d_{{\mathcal{M}}}(s_0^+,s_0^-) {\mathcal{H}}^{N-1}((Q_{\nu_0} \setminus Q^{(0)}_k) \cap \{x\cdot \nu_0=0\}) \to 0$$ as $k \to +\infty$, there exists a sequence $\eta_k \to 0^+$ such that $$\frac{1}{M_k}\sum_{i=1}^{M_k}\int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla \hat z^{(i)}_k\right)dx \leq \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla w_k \right)dx +\eta_k\,.$$ Hence, for each $k \in {\mathbb{N}}$ we can find some index $i_k \in \{1,\ldots,M_k\}$ satisfying \begin{equation}\label{j11} \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla \hat z_k^{(i_k)}\right)dx \leq \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla w_k \right)dx +\eta_k\,. \end{equation} Gathering (\ref{j1}) and (\ref{j11}), we obtain that $$\frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0) \geq \limsup_{k \to +\infty} \int_{Q_{\nu_0}}f^\infty\left(\frac{x}{\d_k},\nabla \hat z_k^{(i_k)} \right)dx\,.$$ Since $\hat z_k^{(i_k)} \in \mathcal B_{\d_k}(s_0^+,s_0^-,\nu_0)$, we infer from Proposition \ref{limitsurfenerg}, Proposition \ref{limsurf2} and \eqref{idsurfen} that $$\frac{d\mu}{d{\mathcal{H}}^{N-1}\res \, S_u}(x_0) \geq \vartheta_\text{hom}(s_0^+,s_0^-,\nu_0)\,,$$ which completes the proof. \prbox
\subsection{Proof of Theorem \ref{babmil2}}
\noindent {\bf Proof of Theorem \ref{babmil2}. } In view of $(H_2)$ and the closure of the pointwise constraint under strong $L^1$-convergence, ${\mathcal{F}}(u)<+\infty$ implies $u \in BV(\O;{\mathcal{M}})$. In view of \eqref{decompupbd}, Lemma \ref{upperboundBV}, Corollary \ref{upbdjp} and Lemma~\ref{lowerboundBV}, the subsequence $\{{\mathcal{F}}_{\e_{n}}\}$ $\Gamma$-converges to ${\mathcal{F}}_\text{hom}$ in $L^1(\O;{\mathbb{R}}^d)$. Since the $\G$-limit does not depend on the particular choice of the subsequence, we get in light of \cite[Proposition~8.3]{DM} that the whole sequence $\G$-converges. \prbox
\section{Appendix} \vskip5pt
\noindent We present in this appendix a relaxation result already proved in \cite{AEL} for ${\mathcal{M}}={\mathbb{S}^{d-1}}$, and in \cite{Mucci} for isotropic integrands. The proof can be obtained following the one of \cite[Theorem~3.1]{AEL}, replacing the standard projection on the sphere (used in Lemma 5.2, Proposition~6.2 and Lemma~6.4 of \cite{AEL}) by the projection on ${\mathcal{M}}$ of \cite{HL} as in Proposition \ref{proj}. Since we only make use of the upper bound on the diffuse part, we will just highlight the differences in the main steps leading to it. \vskip5pt
Assume that ${\mathcal{M}}$ is a smooth, compact and connected submanifold of ${\mathbb{R}}^d$ without boundary, and let $f : \O \times {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ be a continuous function satisfying:
\begin{itemize} \item[$(H_1')$] $f$ is tangentially quasiconvex, {\it i.e.}, for all $x \in \O$, all $s \in {\mathcal{M}}$ and all $\xi \in [T_s({\mathcal{M}})]^N$, $$f(x,s,\xi) \leq \int_Q f(x,s,\xi + \nabla \varphi(y))\, dy \quad \text{for every $\varphi \in W^{1,\infty}_0(Q;T_s({\mathcal{M}}))\,$;}$$
\item[$(H_2')$] there exist $\a>0$ and $\b>0$ such that
$$\a |\xi| \leq f(x,s,\xi) \leq \b(1+|\xi|) \quad \text{ for every }(x,s,\xi) \in \O \times {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}\,;$$
\item[$(H_3')$] for every compact set $K \subset \O$, there exists a continuous function $\omega : [0,+\infty) \to [0,+\infty)$ satisfying $\omega(0)=0$ and
$$|f(x,s,\xi) - f(x',s',\xi)| \leq \omega(|x-x'| + |s-s'|)
(1+|\xi|)$$ for every $x$, $x' \in \O$, $s$, $s' \in {\mathbb{R}}^d$ and $\xi \in {\mathbb{R}}^{d \times N}$;
\item[$(H_4')$] there exist $C>0$ and $q \in (0,1)$ such that
$$|f(x,s,\xi) - f^\infty(x,s,\xi)| \leq C(1+|\xi|^{1-q}), \quad \text{ for every }(x,s,\xi) \in \O \times {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N}\,,$$ where $f^\infty : \O \times {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ is the recession function of $f$ defined by $$f^\infty(x,s,\xi):=\limsup_{t \to +\infty} \frac{f(x,s,t\xi)}{t}\, .$$ \end{itemize}
Consider the functional $F:L^1(\O;{\mathbb{R}}^d) \to [0,+\infty]$ given by $$F(u):=\left\{ \begin{array}{ll} \displaystyle \int_\O f(x,u,\nabla u)\, dx & \text{ if }u \in W^{1,1}(\O;{\mathcal{M}}),\\[0.4cm] +\infty & \text{ otherwise}, \end{array}\right.$$ and its relaxation for the strong $L^1(\O;{\mathbb{R}}^d)$-topology $\overline F:L^1(\O;{\mathbb{R}}^d) \to [0,+\infty]$ defined by $$\overline F(u):=\inf_{\{u_n\}} \left\{ \liminf_{n \to +\infty} F(u_n) : u_n \to u \text{ in }L^1(\O;{\mathbb{R}}^d)\right\}\,.$$ Then the following integral representation result holds:
\begin{theorem} \label{relax} Let ${\mathcal{M}}$ be a smooth, compact and connected submanifold of ${\mathbb{R}}^d$ without boundary, and let $f:\O \times {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ be a continuous function satisfying $(H_1')$ to $(H_4')$. Then for every $u \in L^1(\O;{\mathbb{R}}^d)$, \begin{equation}\label{reprel} \overline F(u)= \begin{cases} \displaystyle \begin{multlined}[8.5cm] \,\int_\O f(x,u,\nabla u)dx + \int_{\O\cap S_u}K(x,u^+,u^-,\nu_u)d{\mathcal{H}}^{N-1}\,+ \\[-15pt]
+ \int_\O f^\infty\bigg(x,\tilde u,\frac{dD^cu}{d|D^cu|}\bigg)\, d|D^cu| \end{multlined} & \text{\it if }\,u \in BV(\O;{\mathcal{M}})\,,\\ & \\ \,+\infty & \text{\it otherwise}\,, \end{cases} \end{equation} where for every $(x,a,b,\nu) \in \O \times {\mathcal{M}} \times {\mathcal{M}} \times {\mathbb{S}^{N-1}}$, \begin{multline*} K(x,a,b,\nu) := \inf_\varphi \bigg\{\int_{Q_\nu} f^\infty(x,\varphi(y),\nabla \varphi(y))\, dy : \varphi \in W^{1,1}(Q_\nu;{\mathcal{M}}),\; \varphi=a \text{ on } \{y\cdot \nu=1/2\},\\ \varphi=b \text{ on }\{y\cdot \nu=-1/2\} \text{ {\it and} } \varphi \text{ \it is $1$-periodic in the }\nu_2,\ldots,\nu_{N} \text{ directions} \bigg\}\,, \end{multline*} $\{\nu,\nu_2,\ldots,\nu_N\}$ forms any orthonormal basis of ${\mathbb{R}}^N$, and $Q_\nu$ stands for the open unit cube in ${\mathbb{R}}^N$ centered at the origin associated to this basis. \end{theorem}
\noindent{\bf Sketch of the Proof.} The proof of the lower bound ``$\geq$'' in \eqref{reprel} can be obtained as in \cite[Lemma~5.2]{BM} and Lemma \ref{lowerboundBV}, using standard techniques to handle the dependence on the space variable. The lower bounds for the bulk and Cantor parts rely on the construction of a suitable function $\tilde f: \O \times {\mathbb{R}}^d \times {\mathbb{R}}^{d \times N} \to [0,+\infty)$ replacing $f$, as already done in Section \ref{thbe}. On the other hand, the jump part rests on the projection on ${\mathcal{M}}$ of \cite{HL} as in Proposition \ref{proj} instead of the standard projection on the sphere used in \cite[Proposition~5.2]{AEL}. \vskip5pt
\noindent To obtain the upper bound, we localize as usual the functionals setting for every $u \in L^1(\O;{\mathbb{R}}^d)$ and $A \in {\mathcal{A}}(\O)$, $$F(u,A):=\begin{cases} \displaystyle \int_A f(x,u,\nabla u)\, dx & \text{ if }u \in W^{1,1}(A;{\mathcal{M}})\,,\\ +\infty & \text{ otherwise}\,, \end{cases}$$ $$\overline F(u,A):=\inf_{\{u_n\}} \left\{ \liminf_{n \to +\infty} F(u_n,A) : u_n \to u \text{ in }L^1(A;{\mathbb{R}}^d)\right\}\,.$$ Arguing as in the proof of Lemma \ref{measbis}, we obtain that for every $u \in BV(\O;{\mathcal{M}})$, the set function
$\overline F(u,\cdot)$ is the restriction to ${\mathcal{A}}(\O)$ of a Radon measure absolutely continuous with respect to ${\mathcal{L}}^N+|Du|$. Hence it uniquely extends into a Radon measure on $\Omega$ (see Remark \ref{measconstr}), and it suffices to prove that for any $u\in BV(\O;{\mathcal{M}})$, \begin{equation}\label{jppartrel} \overline F(u,\Omega\cap S_u)\leq \int_{\Omega \cap S_u}K(x,u^+,u^-,\nu_u)\, d{\mathcal{H}}^{N-1}\,, \end{equation} \begin{equation}\label{contpartrel} \frac{d\overline F(u,\cdot)}{d {\mathcal{L}}^N}(x_0)\leq f(x_0,u(x_0),\nabla u(x_0))\quad \text{for ${\mathcal{L}}^N$-a.e. $x_0\in \Omega$}\,, \end{equation} \begin{equation}\label{cantpartrel}
\frac{d\overline F(u,\cdot)}{d |D^cu|}(x_0)\leq f^\infty\bigg(x_0,\tilde u(x_0),\frac{dD^c u}{d|D^cu|}(x_0)\bigg)\quad \text{for $|D^cu|$-a.e. $x_0\in \Omega$}\,. \end{equation} \vskip5pt
\noindent{\it Proof of \eqref{jppartrel}.} Concerning the jump part, one can proceed as in \cite[Lemma~6.5]{AEL}. A slight difference lies in the third step of its proof where one needs to approximate in energy an arbitrary $u\in BV(\O;{\mathcal{M}})$ by a sequence $\{u_n\}\subset BV(\O;{\mathcal{M}})$ such that for each $n$, $u_n$ assumes a finite number of values. This can be performed as in the proof of Corollary \ref{upbdjp} using the regularity properties of $K$ stated in \cite[Lemma~4.1]{AEL} for ${\mathcal{M}}=\mathbb{S}^{d-1}$. \vskip5pt
\noindent{\it Proof of \eqref{contpartrel}.} Let $x_0 \in \O$ be a Lebesgue point for $u$ and $\nabla u$ such that $u(x_0) \in {\mathcal{M}}$, $\nabla u(x_0) \in [T_{u(x_0)}({\mathcal{M}})]^N$,
$$\lim_{\rho \to 0^+} - \hskip -1em \int_{Q(x_0,\rho)} |u(x) - u(x_0)|(1+|\nabla u(x)|)\, dx=0\,,\quad \lim_{\rho \to 0^+}\frac{|D^s u|(Q(x_0,\rho))}{\rho^N}=0\,,$$ and
$$\frac{d |Du|}{d{\mathcal{L}}^N}(x_0) \quad \text{ and }\quad \frac{d\overline F(u,\cdot)}{d{\mathcal{L}}^N}(x_0)$$ exist and are finite. Note that ${\mathcal{L}}^N$-a.e. $x_0 \in \O$ satisfy these properties. We select a sequence $\rho_k
\searrow 0^+$ such that $Q(x_0,2\rho_k)\subset \O$ and $|Du|(\partial Q(x_0,\rho_k)) =0$ for each $k \in {\mathbb{N}}$. Next consider a sequence of standard mollifiers $\{\varrho_n\}$, and define $u_n :=\varrho_n * u \in W^{1,1}(Q(x_0,\rho_k);{\mathbb{R}}^d) \cap {\mathcal{C}}^\infty(Q(x_0,\rho_k);{\mathbb{R}}^d)$. In the sequel, we shall argue as in the proof of Proposition \ref{proj} and we refer to it for the notation. Fix $\d>0$ small enough such that $\pi:{\mathbb{R}}^d\setminus X\to {\mathcal{M}}$ is smooth in the $\d$-neighborhood of ${\mathcal{M}}$.
Since $u_n$ takes its values in ${\rm co}({\mathcal{M}})$, we can reproduce the proof of Proposition \ref{proj} to find $a_n^k \in{\mathbb{R}}^d$ with $|a_n^k|<\delta/4$
such that, setting $p_n^k:=(\pi_{a_n^k}|_{{\mathcal{M}}})^{-1} \circ \pi_{a_n^k}$, one has $w_n^k:=p_n^k\circ u_n \in W^{1,1}(Q(x_0,\rho_k);{\mathcal{M}})$ and \begin{equation}\label{Ank}
\int_{A_n^k} |\nabla w_n^k|\, dx \leq C_* \int_{A_n^k}|\nabla u_n|\, dx\,, \end{equation} where $A_n^k$ denotes the open set $A_n^k:=\big\{x \in Q(x_0,\rho_k) : {\rm dist}(u_n(x),{\mathcal{M}}) >\d /2\big\}$.
Furthermore, since $ \pi$ is smooth in the $\d$-neighborhood of ${\mathcal{M}}$ and $|a_n^k|<\d/4$, there exists a constant $C_\d>0$ independent of $n$ and $k$ such that \begin{equation}\label{dn}
|\nabla^2 p_n^k(s)|+|\nabla p_n^k(s)| \leq C_\d \text{ for every $s \in {\mathbb{R}}^d$ satisfying $\text{dist}(s,{\mathcal{M}})\leq \d/2$}\,, \end{equation} and consequently, \begin{equation}\label{cAnk}
|\nabla w_n^k| \leq C_\d |\nabla u_n| \quad \text{${\mathcal{L}}^N$-a.e. in }Q(x_0,\rho_k) \setminus A_n^k\,. \end{equation} Since $u(x) \in {\mathcal{M}}$ for ${\mathcal{L}}^N$-a.e. $x \in \O$, it follows that $${\mathcal{L}}^N(A_n^k) \leq \frac{2}{\d} \int_{Q(x_0,\rho_k)} \text{dist}(u_n,{\mathcal{M}})\, dx \leq
\frac{2}{\d} \int_{Q(x_0,\rho_k)} |u_n-u|\, dx \xrightarrow[n \to +\infty]{} 0\,,$$ and then (\ref{dn}) yields \begin{multline*}
\int_{Q(x_0,\rho_k)}|w_n^k - u|\, dx = \int_{
A_n^k}|w_n^k - u|\, dx + \int_{Q(x_0,\rho_k)
\setminus A_n^k}|p_n^k(u_n) - p_n^k(u)|\, dx\leq\\
\leq {\rm diam}({\mathcal{M}}) {\mathcal{L}}^N(A_n^k) + C_\d
\int_{Q(x_0,\rho_k)}|u_n-u|\, dx \xrightarrow[n \to +\infty]{} 0\,. \end{multline*} Hence $w_n^k \to u$ in $L^1(Q(x_0,\rho_k);{\mathbb{R}}^d)$ as $n \to +\infty$ so that we are allowed to take $w_n^k$ as competitor, {\it i.e.}, $$\overline F(u,Q(x_0,\rho_k)) \leq \liminf_{n \to +\infty} \int_{Q(x_0,\rho_k)}f(x,w_n^k,\nabla w_n^k)\, dx\,.$$ At this stage we can argue exactly as in \cite[Lemma~6.4]{AEL} to prove that for any $\eta>0$ there exists $\lambda=\lambda(\eta)>0$ such that \begin{multline}\label{1numb} \overline F(u,Q(x_0,\rho_k)) \leq \liminf_{n \to
+\infty}\bigg\{\int_{Q(x_0,\rho_k)}f(x_0,u(x_0),\nabla u_n)\, dx+C\int_{Q(x_0,\rho_k)}|\nabla u_n - \nabla w_n^k|\, dx\, +\\
+ C(\eta +\lambda \rho_k) \int_{Q(x_0,\rho_k)}(1+|\nabla u_n|)\, dx
+C\lambda\int_{Q(x_0,\rho_k)}|w_n^k-u(x_0)|(1+|\nabla w_n^k|)\, dx \bigg\}\,. \end{multline} The first and third terms on the right-hand side of \eqref{1numb} can be treated as in the proof of \cite[Theorem~2.16]{FM2}. Concerning the remaining terms, we proceed as follows. Using (\ref{Ank}), (\ref{dn}) and (\ref{cAnk}), we get that \begin{multline}\label{2numb}
\int_{Q(x_0,\rho_k)}|w_n^k-u(x_0)||\nabla w_n^k|\, dx
\leq{\rm diam}({\mathcal{M}}) \int_{A_n^k}|\nabla w_n^k|\, dx\,+\\ + \int_{Q(x_0,\rho_k)
\setminus A_n^k}|p_n^k(u_n) - p_n^k(u(x_0))| |\nabla w_n^k| \, dx
\leq C\int_{A_n^k}|\nabla u_n|\, dx\,+\\
+ C_\d \int_{Q(x_0,\rho_k) \setminus A_n^k}|u_n - u(x_0)| |\nabla u_n| \,dx
\leq C_\d \int_{Q(x_0,\rho_k)}|u_n - u(x_0)| |\nabla u_n|\, dx\,, \end{multline} where $C_\d>0$ still denotes some constant depending on $\d$ but independent of $k$ and $n$. Arguing in a similar way, we also derive \begin{equation}\label{3}
\int_{Q(x_0,\rho_k)}|\nabla u_n - \nabla w_n^k|\, dx \leq C_\delta
\int_{Q(x_0,\rho_k)} |u_n -u(x_0)| |\nabla u_n|\, dx+
\int_{Q(x_0,\rho_k) \setminus A_n^k} |L_n^k \nabla u_n|\, dx\,, \end{equation} where $L_n^k:={\rm Id} - \nabla p_n^k(u(x_0)) \in {\rm Lin}({\mathbb{R}}^{d\times d},{\mathbb{R}}^{d \times d})$. Gathering \eqref{1numb}, \eqref{2numb} and (\ref{3}) we finally obtain that \begin{multline}\label{4} \overline F(u,Q(x_0,\rho_k)) \leq \liminf_{n \to
+\infty}\bigg\{\int_{Q(x_0,\rho_k)}f(x_0,u(x_0),\nabla u_n)\, dx+C\int_{Q(x_0,\rho_k) \setminus A_n^k} |L_n^k \nabla u_n|\, dx\,+\\
+C(\eta +\lambda \rho_k) \int_{Q(x_0,\rho_k)}(1+|\nabla u_n|)\, dx
+C_\d\lambda\int_{Q(x_0,\rho_k)}|u_n-u(x_0)|(1+|\nabla u_n|)\, dx \bigg\}\,. \end{multline} Now we can follow the argument in \cite[Lemma~6.4]{AEL} to conclude that $$\frac{d\overline F(u,\cdot)}{d{\mathcal{L}}^N}(x_0) \leq f(x_0,u(x_0),\nabla u(x_0))\,,$$ which completes the proof of \eqref{contpartrel}. \vskip5pt
\noindent {\it Proof of \eqref{cantpartrel}.} Once again the proof parallels the one in \cite[Lemma~6.4]{AEL}. We first proceed as in the previous reasoning leading to \eqref{4}. Then we can exactly follow the argument of \cite[Lemma~6.4]{AEL} to obtain \eqref{cantpartrel}. \prbox
\vskip15pt
\noindent{\bf Acknowledgement. }The authors wish to thank Roberto Alicandro, Pierre Bousquet, Giovanni Leoni and Domenico Mucci for several interesting discussions on the subject. This work was initiated while V. Millot was visiting the department of {\it Functional Analysis and Applications} at S.I.S.S.A.; he thanks G. Dal Maso and the whole department for the warm hospitality. The research of J.-F. Babadjian was partially supported by the Marie Curie Research Training Network MRTN-CT-2004-505226 ``Multi-scale modelling and characterisation for phase transformations in advanced materials'' (MULTIMAT). V. Millot was partially supported by the Center for Nonlinear Analysis (CNA) under the National Science Foundation Grant No. 0405343.
\end{document}
\begin{document}
\theoremstyle{plain}\newtheorem{teo}{Theorem}[section] \theoremstyle{plain}\newtheorem{prop}[teo]{Proposition} \theoremstyle{plain}\newtheorem{lem}[teo]{Lemma} \theoremstyle{plain}\newtheorem{cor}[teo]{Corollary} \theoremstyle{definition}\newtheorem{defin}[teo]{Definition} \theoremstyle{remark}\newtheorem{rem}[teo]{Remark} \theoremstyle{definition}\newtheorem{example}[teo]{Example}
\theoremstyle{plain}\newtheorem*{teon}{Theorem}
\begin{abstract} Let $(M,g)$ be a (complete) Riemannian surface, and let $\Omega\subset M$ be an open subset whose closure is homeomorphic to a disk. We prove that if $\partial\Omega$ is smooth and it satisfies a strong concavity assumption, then there are at least two distinct orthogonal geodesics in $\overline\Omega=\Omega \bigcup\partial\Omega$. Using the results given in \cite{GGP1}, we then obtain a proof of the existence of two distinct {\em brake orbits\/} for a class of Hamiltonian systems. In our proof we shall use recent deformation results proved in \cite{esistenza}. \end{abstract}
\maketitle
\renewcommand{\contentsline}[4]{\csname nuova#1\endcsname{#2}{#3}{#4}} \newcommand{\nuovasection}[3]{
\hbox to \hsize{\vbox{\advance\hsize by -1cm\baselineskip=12pt\parfillskip=0pt\leftskip=3.5cm\noindent\hskip -2cm #1\leaders\hbox{.}\hfil\hfil\par}$\,$#2\hfil}} \newcommand{\nuovasubsection}[3]{
\hbox to \hsize{\vbox{\advance\hsize by -1cm\baselineskip=12pt\parfillskip=0pt\leftskip=4cm\noindent\hskip -2cm #1\leaders\hbox{.}\hfil\hfil\par}$\,$#2\hfil}}
\tableofcontents
\section{Introduction} \label{sec:intro} In this paper we will use a non-smooth version of the Ljusternik--Schnirelman theory to prove the existence of multiple orthogonal geodesic chords in a Riemannian manifold with boundary. This fact, together with the results in \cite{GGP1}, gives a multiplicity result for brake orbits of a class of Hamiltonian systems. Let us recall a few basic facts and notations from \cite{GGP1}.
\begin{subsection}{Geodesics in Riemannian Manifolds with Boundary} \label{sub:geodes} Let $(M,g)$ be a smooth (i.e., of class $C^2$) Riemannian manifold with $\mathrm{dim}(M)=m\ge2$, let $\mathrm{dist}$ denote the distance function on $M$ induced by $g$; the symbol $\nabla$ will denote the covariant derivative of the Levi-Civita connection of $g$, as well as the gradient differential operator for smooth maps on $M$. The Hessian $\mathrm H^f(q)$ of a smooth map $f:M\to\mathbb R$ at a point $q\in M$ is the symmetric bilinear form $\mathrm H^f(q)(v,w)=g\big((\nabla_v\nabla f)(q),w\big)$ for all $v,w\in T_qM$; equivalently, $\mathrm H^f(q)(v,v)=\frac{\mathrm d^2}{\mathrm ds^2}\big\vert_{s=0} f(\gamma(s))$, where $\gamma:\left]-\varepsilon,\varepsilon\right[\to M$ is the unique (affinely parameterized) geodesic in $M$ with $\gamma(0)=q$ and $\dot\gamma(0)=v$. We will denote by $\Ddt$ the covariant derivative along a curve, in such a way that $\Ddt\dot\gamma=0$ is the equation of the geodesics. A basic reference on the background material for Riemannian geometry is \cite{docarmo}.
Let $\Omega\subset M$ be an open subset; $\overline\Omega=\Omega\bigcup\partial \Omega$ will denote its closure. In this paper we will use a somewhat strong concavity assumption for compact subsets of $M$, that we will call ``strong concavity'' below, and which is stable under $C^2$-small perturbations of the boundary.
If $\partial \Omega$ is a smooth embedded submanifold of $M$, let $\mathrm I\!\mathrm I_{\mathfrak n}(x):T_x(\partial\Omega)\times T_x(\partial\Omega)\to\mathbb R$ denote the {\em second fundamental form of $\partial\Omega$ in the normal direction $\mathfrak n\in T_x(\partial\Omega)^\perp$}. Recall that $\mathrm I\!\mathrm I_{\mathfrak n}(x)$ is a symmetric bilinear form on $T_x(\partial\Omega)$ defined by: \[\phantom{\qquad v,w\in T_x(\partial\Omega),}\mathrm I\!\mathrm I_{\mathfrak n}(x)(v,w)=g(\nabla_vW, \mathfrak n),\qquad v,w\in T_x(\partial\Omega),\] where $W$ is any local extension of $w$ to a smooth vector field along $\partial\Omega$.
\begin{rem}\label{thm:remphisecond} Assume that it is given a \emph{signed distance function} for $\partial\Omega$, i.e., a smooth function $\phi:M\to\mathbb R$ with the property that $\Omega=\phi^{-1}\big(\left]-\infty,0\right[\big)$ and $\partial\Omega=\phi^{-1}(0)$, with $\mathrm d\phi\ne0$ on $\partial\Omega$.\footnote{One can choose $\phi$ such that $\vert\phi(q)\vert=\mathrm{dist}(q,\partial\Omega)$ for all $q$ in a (closed) neighborhood of $\partial\Omega$.} The following equality between the Hessian $\mathrm H^\phi$ and the second fundamental form\footnote{ Observe that, with our definition of $\phi$, then $\nabla\phi$ is a normal vector to $\partial\Omega$ pointing {\em outwards\/} from $\Omega$.} of $\partial\Omega$ holds: \begin{equation}\label{eq:seches} \phantom{\quad x\in\partial\Omega,\ v\in T_x(\partial\Omega);}\mathrm H^\phi(x)(v,v)= -\mathrm I\!\mathrm I_{\nabla\phi(x)}(x)(v,v),\quad x\in\partial\Omega,\ v\in T_x(\partial\Omega);\end{equation} Namely, if $x\in\partial\Omega$, $v\in T_x(\partial\Omega)$ and $V$ is a local extension around $x$ of $v$ to a vector field which is tangent to $\partial\Omega$, then $v\big(g(\nabla\phi,V)\big)=0$ on $\partial\Omega$, and thus: \[\mathrm H^\phi(x)(v,v)=v\big(g(\nabla\phi,V)\big)-g(\nabla\phi,\nabla_vV)=-\mathrm I\!\mathrm I_{\nabla\phi(x)}(x)(v,v).\]
For convenience, we will fix throughout the paper a function $\phi$ as above. We observe that, although the second fundamental form is defined intrinsically, there is no canonical choice for the function $\phi$ describing the boundary of $\Omega$ as above. \end{rem}
\begin{defin}\label{thm:defstrongconcavity} We will say that $\overline\Omega$ is {\em strongly concave\/} if $\mathrm I\!\mathrm I_{\mathfrak n}(x)$ is negative definite for all $x\in\partial\Omega$ and all inward pointing normal directions $\mathfrak n$. \end{defin}
\noindent Observe that if $\overline\Omega$ is strongly concave, geodesics starting tangentially to $\partial\Omega$ remain \emph{inside} $\Omega$.
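As an elementary sanity check of the sign conventions in Remark~\ref{thm:remphisecond} and Definition~\ref{thm:defstrongconcavity} (not needed in the sequel), we consider the model case of the round ball.
\begin{example} Let $M=\mathbb R^m$ with the Euclidean metric, let $\Omega=\{x\in\mathbb R^m:\vert x\vert<1\}$, and take $\phi(x)=\vert x\vert-1$ near $\partial\Omega$. For $x\in\partial\Omega$ and $v\in T_x(\partial\Omega)$, $v\ne0$, one computes $\mathrm H^\phi(x)(v,v)=\vert v\vert^2$, so that \eqref{eq:seches} gives $\mathrm I\!\mathrm I_{\nabla\phi(x)}(x)(v,v)=-\vert v\vert^2$ for the outward normal $\nabla\phi(x)=x$, and hence $\mathrm I\!\mathrm I_{\mathfrak n}(x)(v,v)=\vert v\vert^2>0$ for the inward pointing normal $\mathfrak n=-x$. Thus the closed unit ball is convex and \emph{not} strongly concave; consistently, straight lines tangent to the unit sphere do not enter $\Omega$. \end{example}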
\begin{rem}\label{thm:newremopencondition} Strong concavity is evidently a {\em $C^2$-open condition}. Then, by \eqref{eq:seches}, if $\overline\Omega$ is compact, we deduce the existence of $\delta_0>0$ such that $\mathrm H^\phi(x)(v,v)<0$ for all $x\in\phi^{-1}\big([-\delta_0,\delta_0]\big)$ and for all $v\in T_xM$, $v\ne0$, such that $g\big(\nabla\phi(x),v\big)=0$.
A simple contradiction argument based on Taylor expansion shows that, under the above condition, one has $\nabla\phi(q)\ne 0$ for all $q\in\phi^{-1}([-\delta_0,\delta_0])$. \end{rem} \begin{rem}\label{thm:remopencondition} Let $\delta_0$ be as above. The strong concavity condition gives us the following property of geodesics, that will be used systematically throughout the paper: \begin{equation}\label{eq:1.1bis} \begin{matrix} \text{for any geodesic $\gamma:[a,b]\to\overline\Omega$ with $\phi(\gamma(a))=\phi(\gamma(b))=0$}\\ \text{and $\phi(\gamma(s))<0$ for all $s\in \left]a,b\right[$, there exists $\overline s\in \left]a,b\right[$ such that $\phi\big(\gamma(\overline s)\big)<-\delta_0$.} \end{matrix} \end{equation} Such a property is easily proved by looking at the minimum point of the map $s\mapsto\phi(\gamma(s))$. \end{rem}
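\noindent For the reader's convenience, we spell out the easy argument behind \eqref{eq:1.1bis}. Let $s_*\in\left]a,b\right[$ be a minimum point of $s\mapsto\phi(\gamma(s))$, and note that $\dot\gamma(s_*)\ne0$ since $\gamma$ is nonconstant. As $\gamma$ is a geodesic,
$$\frac{\mathrm d}{\mathrm ds}\Big\vert_{s=s_*}\phi(\gamma(s))=g\big(\nabla\phi(\gamma(s_*)),\dot\gamma(s_*)\big)=0, \qquad \frac{\mathrm d^2}{\mathrm ds^2}\Big\vert_{s=s_*}\phi(\gamma(s))=\mathrm H^\phi(\gamma(s_*))\big(\dot\gamma(s_*),\dot\gamma(s_*)\big)\ge0\,;$$
if it were $\phi(\gamma(s_*))\ge-\delta_0$, then, since also $\phi(\gamma(s_*))<0$, Remark~\ref{thm:newremopencondition} would give $\mathrm H^\phi(\gamma(s_*))\big(\dot\gamma(s_*),\dot\gamma(s_*)\big)<0$, a contradiction. Hence $\phi\big(\gamma(s_*)\big)<-\delta_0$, which is precisely \eqref{eq:1.1bis}.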
The main objects of our study are geodesics in $M$ having image in $\overline\Omega$ and with endpoints orthogonal to $\partial\Omega$, that will be called {\em orthogonal geodesic chords}:
\begin{defin}\label{thm:defOGC} A geodesic $\gamma:[a,b]\to M$ is called a {\em geodesic chord\/} in $\overline\Omega$ if $\gamma\big(\left]a,b\right[\big)\subset\Omega$ and $\gamma(a),\gamma(b)\in\partial\Omega$; by a {\em weak geodesic chord\/} we will mean a geodesic $\gamma:[a,b]\to M$ with image in $\overline\Omega$ and endpoints $\gamma(a),\gamma(b)\in\partial\Omega$ and such that $\gamma(s_{0})\in \partial\Omega$ for some $s_{0} \in ]a,b[$. A (weak) geodesic chord is called {\em orthogonal\/} if $\dot\gamma(a^+)\in (T_{\gamma(a)}\partial\Omega)^\perp$ and $\dot\gamma(b^-)\in (T_{\gamma(b)}\partial\Omega)^\perp$, where $\dot\gamma(\,\cdot\,^\pm)$ denote the one-sided derivatives.
\end{defin}
For shortness, we will write \textbf{OGC} for ``orthogonal geodesic chord'' and \textbf{WOGC} for ``weak orthogonal geodesic chord''.
In the central result of this paper we will give a lower estimate on the number of distinct orthogonal geodesic chords; we recall here some results in this direction available in the literature. In \cite{bos}, Bos proved that if $\partial\Omega$ is smooth, $\overline\Omega$ convex and homeomorphic to the $m$-dimensional disk, then there are at least $m$ distinct OGC's for $\overline\Omega$. Such a result is a generalization of a classical result by Ljusternik and Schnirelman (see \cite{LustSchn}), where the same result was proven for convex subsets of $\mathbb R^m$ endowed with the Euclidean metric. Bos' result was used in \cite{gluckziller} to prove a multiplicity result for brake orbits under a certain ``non-resonance condition''. Counterexamples show that, if one drops the convexity assumption, the lower estimate for orthogonal geodesic chords given in Bos' theorem does not hold.
Motivated by the study of a certain class of Hamiltonian systems (see Subsection \ref{sub:brakehom}), in this paper we will study the case of sets with strongly concave boundary. A natural conjecture is that, also in the concave case, one should have at least $m$ distinct orthogonal geodesic chords in an $m$-disk, but at this stage, this seems to be a quite hard result to prove. Having this goal in mind, in this paper we give a positive answer to our conjecture in the special case when $m=2$. Our central result is the following: \begin{teo}\label{thm:main} Let $\Omega$ be an open subset of $M$ with smooth boundary $\partial\Omega$, such that $\overline\Omega$ is strongly concave and homeomorphic to the $m$--dimensional disk. Then, there are at least two geometrically distinct\footnote{ By {\em geometrically distinct\/} curves we mean curves having distinct images as subsets of $\overline\Omega$.} orthogonal geodesic chords in $\overline\Omega$. \end{teo}
A similar multiplicity result was proved in \cite{arma}, assuming that $\overline\Omega$ is homeomorphic to the $m$--dimensional annulus.
\subsection{Reduction to the case without WOGC} Although weak orthogonal geodesic chords are perfectly acceptable solutions of our initial geometrical problem, our suggested construction of a variational setup works well only in a situation where one can exclude {\em a priori\/} the existence in $\overline\Omega$ of orthogonal geodesic chords $\gamma:[a,b]\to\overline\Omega$ for which there exists $s_0\in\left]a,b\right[$ such that $\gamma(s_0)\in\partial\Omega$.
One does not lose generality in assuming that there are no such WOGC's in $\overline\Omega$ by recalling the following result from \cite{GGP1}:
\begin{prop} \label{thm:noWOGC} Let $\Omega\subset M$ be an open set whose boundary $\partial\Omega$ is smooth and compact and with $\overline\Omega$ strongly concave. Assume that there are only a finite number of (crossing) orthogonal geodesic chords in $\overline\Omega$. Then, there exists an open subset $\Omega'\subset\Omega$ with the following properties: \begin{enumerate} \item\label{itm:nowogcs1} $\overline{\Omega'}$ is diffeomorphic to $\overline\Omega$ and it has smooth boundary; \item\label{itm:nowogcs2} $\overline{\Omega'}$ is strongly concave; \item\label{itm:nowogcs3} the number of (crossing) OGC's in $\overline{\Omega'}$ is less than or equal to the number of (crossing) OGC's in $\overline\Omega$ ; \item\label{itm:nowogcs4} there are no (crossing) WOGC's in $\overline{\Omega'}$. \end{enumerate} \end{prop} \begin{proof} See \cite[Proposition~2.6]{GGP1} \end{proof}
\begin{rem}\label{thm:remmain} In view of the result of Proposition~\ref{thm:noWOGC}, it suffices to prove Theorem~\ref{thm:main} under the further assumption that: \begin{equation}\label{eq:ipotesi} \text{ there are no WOGC's in }\overline\Omega. \end{equation} For this reason, we will henceforth assume \eqref{eq:ipotesi}. \end{rem}
\subsection{On the curve shortening method in concave manifolds} Multiplicity of OGC's in the case of compact manifolds having convex boundary is typically proven by applying a curve-shortening argument. From an abstract viewpoint, the curve-shortening process can be seen as the construction of a flow in the space of paths, along whose trajectories the length or energy functional is decreasing.
In this paper we will follow the same procedure, with the difference that both the space of paths and the shortening flow have to be defined appropriately.
Shortening a curve having image in a closed convex subset $\overline\Omega$ of a Riemannian manifold produces another curve in $\overline\Omega$; in this sense, we think of the shortening flow as being ``inward pushing'' in the convex case. As opposed to the convex case, the shortening flow in the concave case will be ``outwards pushing'', and this fact requires that one consider only those portions of a curve that remain inside $\overline\Omega$ when it is stretched outwards. This type of analysis has been carried out in \cite{esistenza}, and we shall employ here many of the results proved in \cite{esistenza}.
The concavity condition plays a central role in the variational setup of our construction. ``Variational criticality'' relatively to the energy functional will be defined in terms of ``outwards pushing'' infinitesimal deformations of the path space (see Definition~\ref{thm:defvariatcrit}). The class of variationally critical portions contains properly the set of portions consisting of crossing OGC's; such curves will be defined as ``geometrically critical'' paths (see Definition~\ref{thm:defgeomcrit}). In order to construct the shortening flow, an accurate analysis of all possible variationally critical paths is required (Section~\ref{sub:description}), and the concavity condition will guarantee that such paths are \emph{well behaved} (see Lemma~\ref{thm:leminsez4.4}, Proposition~\ref{thm:regcritpt} and Proposition~\ref{thm:irregcritpt}).
Once a reasonable classification of variationally critical points is obtained, the shortening flow is constructed by techniques which are typical of the pseudo-gradient vector field approach. The crucial property of the shortening procedure is that its flow lines move away from critical portions which are not OGC's, in the same way that the integral lines of a pseudo-gradient vector field move away from points that are not critical. A technical description of the abstract \emph{minimax} framework that we will use is given in Subsection~\ref{sub:discussion}. \end{subsection}
\begin{subsection}{Brake and Homoclinic Orbits of Hamiltonian Systems} \label{sub:brakehom} The result of Theorem~\ref{thm:main} can be applied to prove a multiplicity result for brake orbits and homoclinic orbits, as follows.
Let $p=(p_i)$, $q=(q^i)$ be coordinates on $\mathbb R^{2m}$, and let us consider a {\em natural\/} Hamiltonian function $H\in C^2\big(\mathbb R^{2m},\mathbb R\big)$, i.e., a function of the form \begin{equation}\label{eq:hamfun} H(p,q)=\frac 12 \sum_{i,j=1}^m a^{ij}(q)p_ip_j+V(q), \end{equation} where $V\in C^2\big(\mathbb R^{m},\mathbb R\big)$ and $A(q)=\big(a^{ij}(q)\big)$ is a positive definite quadratic form on $\mathbb R^m$: \[\sum_{i,j=1}^m a^{ij}(q)p_ip_j\ge\nu(q)\vert p\vert^2\] for some continuous function $\nu:\mathbb R^m\to\mathbb R^+$ and for all $(p,q)\in\mathbb R^{2m}$.
The corresponding Hamiltonian system is: \begin{equation}\label{eq:HS} \left\{ \begin{aligned} &\dot p=-\frac{\partial H}{\partial q}\\ &\dot q=\frac{\partial H}{\partial p}, \end{aligned} \right. \end{equation} where the dot denotes differentiation with respect to time.
For all $q\in\mathbb R^m$, denote by $\mathcal L(q):\mathbb R^m\to\mathbb R^m$ the linear isomorphism whose matrix with respect to the canonical basis is $\big(a_{ij}(q)\big)$, which is the inverse of $\big(a^{ij}(q)\big)$; it is easily seen that, if $(p,q)$ is a solution of class $C^1$ of \eqref{eq:HS}, then $q$ is actually a map of class $C^2$ and \begin{equation}\label{eq:pintermsofq} p=\mathcal L(q)\dot q. \end{equation} With a slight abuse of language, we will say that a $C^2$-curve $q:I\to \mathbb R^m$ ($I$ interval in $\mathbb R$) is a solution of \eqref{eq:HS} if $(p,q)$ is a solution of \eqref{eq:HS} where $p$ is given by \eqref{eq:pintermsofq}. Since the system \eqref{eq:HS} is autonomous, i.e., time independent, then the function $H$ is constant along each solution, and it represents the total energy of the solution of the dynamical system. There exists a large amount of literature concerning the study of periodic solutions of autonomous Hamiltonian systems having energy $H$ prescribed (see for instance \cite{LiuLong,LZ,Long,rab} and the references therein).
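For completeness, we recall the elementary computations behind these two facts: the second equation in \eqref{eq:HS} gives $\dot q=\frac{\partial H}{\partial p}=A(q)p$, which is \eqref{eq:pintermsofq} after inverting $A(q)$, and along any solution of \eqref{eq:HS}
$$\frac{\mathrm d}{\mathrm dt}H\big(p(t),q(t)\big)=\frac{\partial H}{\partial p}\cdot\dot p+\frac{\partial H}{\partial q}\cdot\dot q= -\frac{\partial H}{\partial p}\cdot\frac{\partial H}{\partial q}+\frac{\partial H}{\partial q}\cdot\frac{\partial H}{\partial p}=0\,,$$
so that the total energy is indeed constant in time.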
\subsection{The Seifert conjecture in dimension $2$} We will be concerned with a special kind of periodic solutions of \eqref{eq:HS}, called {\em brake orbits}. A brake orbit for the system \eqref{eq:HS} is a non-constant periodic solution $\mathbb R\ni t\mapsto\big(p(t),q(t)\big) \in\mathbb R^{2m}$ of class $C^2$ with the property that $p(0)=p(T)=0$ for some $T>0$. Since $H$ is even in the variable $p$, a brake orbit $(p,q)$ is $2T$-periodic, with $p$ odd and $q$ even about $t=0$ and about $t=T$. Clearly, if $E$ is the energy of a brake orbit $(p,q)$, then $V\big(q(0)\big)=V\big(q(T)\big)=E$.
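The symmetry properties of brake orbits stated above can be justified, for instance, by the following standard argument, sketched here for completeness. If $(p,q)$ solves \eqref{eq:HS}, set $\tilde p(t)=-p(-t)$ and $\tilde q(t)=q(-t)$. Since $H$ is even in $p$, the function $\partial H/\partial q$ is even and $\partial H/\partial p$ is odd in the variable $p$, hence
\[
\dot{\tilde p}(t)=\dot p(-t)=-\frac{\partial H}{\partial q}\big(\tilde p(t),\tilde q(t)\big),\qquad
\dot{\tilde q}(t)=-\dot q(-t)=\frac{\partial H}{\partial p}\big(\tilde p(t),\tilde q(t)\big),
\]
so that $(\tilde p,\tilde q)$ is again a solution of \eqref{eq:HS}. Since $p(0)=0$, we have $(\tilde p,\tilde q)(0)=(p,q)(0)$, and uniqueness for the Cauchy problem gives $p(-t)=-p(t)$ and $q(-t)=q(t)$; the same argument applied at $t=T$ yields the symmetry about $T$, and hence the $2T$-periodicity.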
The link between brake orbits and orthogonal geodesic chords is established in \cite[Theorem~5.9]{GGP1}. Using that theorem and Theorem~\ref{thm:main}, we immediately get the following: \begin{teo}\label{thm:1.7} Let $H\in C^2\big(\mathbb R^{2m},\mathbb R\big)$ be a natural Hamiltonian function as in \eqref{eq:hamfun}, $E\in\mathbb R$ and \[\Omega_E=V^{-1}\big(\left]-\infty,E\right[\big).\] Assume that $\mathrm dV(x)\ne0$ for all $x\in\partial\Omega_E$ and that $\overline{\Omega}_E$ is homeomorphic to an $m$-disk. Then, the Hamiltonian system \eqref{eq:HS} has at least two geometrically distinct brake orbits having energy $E$. \end{teo}
Multiplicity results for brake orbits in the even, convex case are obtained e.g. in \cite{LZ,LZZ,Z1,Z2,Z3}.
In \cite{seifert}, Seifert conjectured the existence of at least $m$ brake orbits, and it is well known that such a lower estimate for the number of brake orbits cannot be improved. Indeed, consider the Hamiltonian: \[
H(q,p)=\tfrac12|p|^2+\sum_{i=1}^m\lambda_i^2 q_i^2,\qquad (q,p)\in\mathbb R^{2m}, \] where $\lambda_i\not=0$ for all $i$. If $E>0$ and the squared ratios $\left({\lambda_i}/{\lambda_j}\right)^2$ are irrational for all $i\ne j$, then the only periodic solutions of \eqref{eq:HS} with energy $E$ are the $m$ brake orbits moving along the axes of the ellipsoid with equation \[ \sum_{i=1}^m\lambda_i^2q_i^2=E. \] The result in \cite{LZ} is a proof of the Seifert conjecture (in any dimension) under the assumption that the potential is convex and even. Theorem~\ref{thm:1.7} gives a proof of the Seifert conjecture in dimension $m=2$, without any assumption on the potential. \end{subsection}
\section{Main ideas of the proof}\label{main} In this section we will give an outline of the paper, describing the functional framework and the main ideas of the proofs.
\subsection{Presentation of the proof of Theorems~\ref{thm:main} and \ref{thm:1.7}}\label{sec:outline} The proof of our multiplicity result will be carried out in the following way. Set $W = \{x \in \mathbb R^m: V(x) < E\}$.
\begin{itemize} \item
Using the well known Maupertuis--Jacobi variational principle, see e.g. \cite[Proposition 4.1]{GGP1}, brake orbits for the given Hamiltonian system are characterized, up to a reparameterization, as geodesics with endpoints on $\partial W$ with respect to a certain Riemannian metric, the so--called Jacobi metric on $W$, which is singular on $\partial W$ and is given by $g_*(v,v)=\big(E-V(x)\big)g_0(v,v)$, where $g_0(v,v) = \frac 12\sum_{i,j=1}^m a_{ij}(x)\,v_i v_j$ (an explicit instance is written out right after this list);
\item by means of the Jacobi metric and the induced ``distance from the boundary'' function, one gets rid of the metric singularity on the boundary, and the problem is reduced to the search for geometrically distinct geodesics, orthogonal to the boundary of a Riemannian manifold which is homeomorphic to the $m$--dimensional disk and whose boundary satisfies a strong \emph{concavity} condition, cf.\ \cite{GGP1};
\item a minimax argument will be applied to a suitable class of homotopies and to a particular nonsmooth functional (for the classical minimax theory cf e.g. \cite{MW, struwe}). \end{itemize}
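Purely as an illustration of the first step (the specific example below is included only for concreteness and is not needed in the sequel), consider the harmonic oscillator Hamiltonian recalled in the previous subsection, for which $a_{ij}(x)=\delta_{ij}$ and $V(x)=\sum_{i=1}^m\lambda_i^2x_i^2$. In that case the Jacobi metric on $W=\big\{x:\sum_{i=1}^m\lambda_i^2x_i^2<E\big\}$ reads
\[
g_*(v,v)=\Big(E-\sum_{i=1}^m\lambda_i^2x_i^2\Big)\,\frac12\,\Vert v\Vert_E^2,
\]
and it degenerates exactly on the boundary ellipsoid $\sum_{i=1}^m\lambda_i^2x_i^2=E$; this degeneracy is precisely the phenomenon that the second step is designed to handle.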
\subsection{Abstract Ljusternik--Schnirelman theory}\label{sub:discussion}
For the minimax theory we shall use the following topological invariant. Consider a topological space $\mathcal X$ and a subset $\mathcal Y \subset \mathcal X$. We shall use a suitable version of the relative category in $\mathcal X\text{\ mod\ }\mathcal Y$ (see \cite{FH,FW}) as topological invariant, which is defined as follows.
Let $\mathcal D\subset\mathcal X$ be a closed subset, and assume that there exist $k>0$ and open subsets $A_0,A_1,\ldots, A_k$ of $\mathcal X$ such that: \begin{itemize} \item[(a)] $\mathcal D\subset\bigcup_{i=0}^k A_i$; \item[(b)] for any $i=1,\ldots,k$ there exists a homotopy $h_i$ sending $A_i$ to a single point moving in $\mathcal X$, while the homotopy $h_0$ sends $A_0$ into $\mathcal Y$, moving $A_0 \cap\mathcal Y$ within $\mathcal Y$.
\end{itemize} The minimal integer $k$ with the above properties is the relative category of $\mathcal D$ in $\mathcal X\text{\ mod\ }\mathcal Y$ and it will be denoted by $\mathrm{cat}_{\mathcal X,\mathcal Y}(\mathcal D)$. We shall use it with $\mathcal X = \mathbb{S}^{m-1}\times \mathbb{S}^{m-1}$ and $\mathcal Y=\{(A,B)\in \mathbb{S}^{m-1}\times\mathbb{S}^{m-1} \,:\,A=B\}\equiv \Delta^{m-1}$, where $\mathbb{S}^{m-1}$ is the $(m-1)$--dimensional unit sphere.
In \cite{esistenza} a different relative category is considered. There, the maps $h_i$ were assumed to send the $A_i$'s to a single point moving outside the set $\Delta^{m-1}$; moreover, a quotient of the product $\mathbb S^{m-1}\times\mathbb S^{m-1}$, obtained by identifying the pairs $(A,B)$ and $(B,A)$, was used. Its numerical value is $m$, but unfortunately this notion of relative category is not compatible with the definition of the functional $\mathcal F$ used in the minimax argument, and no multiplicity result can be obtained. In order to have a relative category which fits with the properties of the functional $\mathcal F$, one must relax the assumptions on the maps $h_i$, and require that they take values in the whole space $\mathbb{S}^{m-1}\times \mathbb{S}^{m-1}$. This gives a lower numerical value for this new notion of relative category, which is less than or equal to $2$, as we can see by the same topological arguments used in \cite{G}. This suggests that it is more convenient to use a relative category without the symmetry given by the identification of the pairs $(A,B)$ and $(B,A)$. With this definition, we have:
\begin{lem}\label{thm:estimatecat} For any $m \geq2$, $\mathrm{cat}_{\mathcal X,\mathcal Y}(\mathcal X)\geq 2$. \end{lem} The proof of Lemma~\ref{thm:estimatecat} uses the notion of cuplength in cohomology, and it will be given in Appendix~\ref{sec:appendixrelLScat}. Note that, in fact, Lemma~\ref{thm:estimatecat} implies the equality $\mathrm{cat}_{\mathcal X,\mathcal Y}(\mathcal X)=2$, as it is easy to show that, in any dimension, $\mathrm{cat}_{\mathcal X,\mathcal Y}(\mathcal X) \leq 2$.
The problem of finding orthogonal geodesic chords in a domain $\overline\Omega$ of a Riemannian manifold $M$ with non-convex boundary $\partial\Omega$ cannot be cast in a standard smooth variational context, due mainly to the fact that the classical shortening flow on the set of curves in $\overline\Omega$ with endpoints on the boundary produces stationary curves that are not ``classical geodesics''. In order to overcome this problem, our strategy will be to reproduce the ``ingredients'' of the classical smooth theory in a suitable non-smooth context. More precisely, we will define the following objects: \begin{itemize} \item a metric space $\mathfrak M$, that consists of curves of class $H^{1}$ having image in an open neighborhood of $\overline\Omega$ in $M$, and whose endpoints remain outside $\Omega$;
\item a compact subset $\mathfrak C$ of $\mathfrak M$ which is homeomorphic to the set of chords in the unit disk $\mathbb D^m$ with both endpoints in $\mathbb{S}^{m-1}$ (and therefore homeomorphic to $\mathcal X = \mathbb{S}^{m-1}\times \mathbb{S}^{m-1}$); \item the class of the closed $\mathcal R$--invariant subsets $\mathcal D$ of $\mathfrak C$; \item a family ${\mathcal H}$ consisting of pairs $(\mathcal D,h)$, where $\mathcal D$ is a closed subset of $\mathfrak C$ and $h:[0,1]\times\mathcal D\to\mathfrak M$ is a homotopy whose properties will be described in Section~\ref{sec:homotopies}; \item a functional $\mathcal F :{\mathcal H}\rightarrow \mathbb R^{+}$, constructed starting from the classical energy functional used for the geodesic problem. \end{itemize}
We will define suitable notions of critical values for the functional $\mathcal F$, in such a way that distinct critical values determine geometrically distinct orthogonal geodesic chords in $\overline\Omega$.
Denote by $\star$ the operation of \emph{concatenation of homotopies}, see \eqref{eq:defconcatenationhomotop}. We shall say that a real number $c$ is a \emph{topological regular value} of $\mathcal F$ if there exists $\bar\varepsilon>0$ such that for all $(\mathcal D,h)\in\mathcal H$ satisfying $\mathcal F(\mathcal D,h) \leq c + \bar\varepsilon$ there exists a homotopy $\eta$ such that $(\mathcal D,\eta \star h)\in\mathcal H$,
satisfying \[ \mathcal F(\mathcal D,\eta \star h)\le c-\bar\varepsilon. \]
A \emph{topological critical value} of $\mathcal F$ is a real number which is not a regular value.
Once this set up has been established, the proof of multiplicity of critical points of $\mathcal F$ is carried out along the lines of the standard relative Ljusternik--Schnirelman theory, as follows. Denote by $\mathfrak C_0$ the set of constant curves in $\mathfrak C$ (which is homeomorphic to $\mathcal Y$). For $i=1,2$, set: \begin{equation}\label{eq:defGammai} \Gamma_i=\big\{\mathcal D\subset\mathfrak C:\; \mathcal D \,\text{is closed},\,\mathrm{cat}_{{\mathfrak C},{\mathfrak C_0}} ({\mathcal D})\ge i\big\}, \end{equation} and define \begin{equation}\label{eq:defci} c_i=\inf_{\substack{\mathcal D\in\Gamma_i\\ (\mathcal D,h)\in\mathcal H}} \mathcal F(\mathcal D,h). \end{equation} As observed in Remark \ref{rem:6.6}, $\mathcal H$ is not empty since $(\mathfrak C,\mathrm{I}_{\mathfrak C})$ belongs to the class $\mathcal H$, where we denote by $\mathrm{I}_{\mathfrak C}:[0,1]\times\mathfrak C\to\mathfrak C$ the map $\mathrm{I}_{\mathfrak C}(\tau,x)=x$ for all $\tau$ and all $x$. Moreover, by Lemma \ref{thm:estimatecat}, $\mathfrak C \in \Gamma_i$ for any $i=1,2$, and from this we deduce that each $c_i$ is a finite real number.
By the very definition, one sees immediately that each $c_i$ is a topological critical value of $\mathcal F$; moreover, since $\Gamma_{2}\subset\Gamma_1$, we have $c_1 \leq c_2$.
The crucial point of the construction is the proof of some ``deformation lemmas'' for the sublevels of $\mathcal F$ using the homotopies in $\mathcal H$, in order to obtain that the $c_i$'s are energy values of geometrically distinct orthogonal geodesic chords parameterized in $[0,1]$.
The first deformation lemma
tells us that the topological critical values of $\mathcal F$ correspond to orthogonal geodesic chords, in the sense that if $c$ is a topological critical value for $\mathcal F$ then it is a \emph{geometrical critical value} (cf. Definition~\ref{thm:defgeomcrit}): there exists an orthogonal geodesic chord $\gamma$ (parameterized in the interval $[0,1]$) such that $\frac12\int_0^1 g(\dot \gamma,\dot \gamma)ds = c$. Indeed if $c > 0$ is not a geometrical critical value, there exists $\epsilon > 0$ such that for any $(\mathcal D,h) \in \mathcal H$ satisfying $\mathcal F(\mathcal D,h) \leq c+\epsilon$, there exists a homotopy $\eta$ such that $(\mathcal D, \eta \star h) \in \mathcal H$ and $\mathcal F(\mathcal D,\eta\star h) \leq c-\epsilon$ (cf. \ref{thm:firstdeflemma}).
The second deformation lemma (cf. \ref{thm:prop9.3}) says that a similar deformation exists also for geometrical critical values, provided that a suitable contractible neighborhood is removed.
More precisely, in our case, given a geometrical critical value $c > 0$, assuming there is only a finite number of orthogonal geodesic chords in $\overline\Omega$ having energy $c$, we will prove the existence of $\bar\varepsilon>0$ such that, for all $(\mathcal D,h)\in\mathcal H$ with $\mathcal F(\mathcal D,h)\le c+\bar\varepsilon$ there exists an open subset $\mathcal A\subset\mathfrak C$ and a homotopy $\eta$ such that: \begin{itemize} \item[(i)] $(\mathcal D\setminus\mathcal A,\eta \star h)\in\mathcal H$; \item[(ii)] $\mathcal F\big(\mathcal D\setminus\mathcal A,\eta \star h\big)\le c-\bar\varepsilon$; \item[(iii)] $\mathcal A$ is contractible in ${\mathfrak C}$ (hence, $\mathrm{cat}_{\mathcal X,\mathcal Y}(\mathcal D\setminus{\mathcal A})\ge \mathrm{cat}_{\mathcal X,\mathcal Y}(\mathcal D)-1$). \end{itemize}
Moreover, we also see that low sublevels of the functional $\mathcal F$ consist of curves that can be deformed on $\partial \Omega$, obtaining that $c_i>0$ for any $i=1,2$, while by the two fundamental Deformation Lemmas above we have: \begin{itemize} \item[(a)] $c_i$ is a geometrical critical value; \item[(b)] $c_1<c_2$, assuming the existence of only a finite number of orthogonal geodesic chords in $\overline\Omega$. \end{itemize} Note that if $c=c_1=c_2$ we can get a contradiction in the following way. Choose $\bar\varepsilon>0$ as in the second deformation Lemma, and take $(\mathcal D,h)\in\mathcal H$ such that $\mathcal D\in\Gamma_2$ and $\mathcal F(\mathcal D,h)\le c_2+\bar\varepsilon$. Let $\mathcal A\subset\mathfrak C$ and $\eta$ be as above. Then, $\mathcal F(\mathcal D\setminus\mathcal A,\eta \star h)\le c_1-\bar\varepsilon$, which is absurd, because $\mathcal D\setminus\mathcal A\in\Gamma_1$ and $(\mathcal D\setminus\mathcal A,\eta \star h) \in \mathcal H$.
The argument proves the existence of at least 2 distinct geometrical critical values; the crucial point is that distinct geometrical critical values produce geometrically distinct orthogonal geodesic chords (cf. Proposition \ref{thm:distinct}). Then, using the results in \cite{GGP1}, we obtain the existence of at least two geometrically distinct brake orbits.
\section{The functional framework} \label{sub:varframe}
Throughout the paper, $(M,g)$ will denote a Riemannian manifold of class $C^2$ having dimension $m$; all our constructions will be made in suitable (relatively) compact subsets of $M$, and for this reason it will not be restrictive to assume, as we will, that $(M,g)$ is complete. Furthermore, we will work mainly in open subsets $\Omega$ of $M$ whose closure is homeomorphic to an $m$--dimensional disk, and in order to simplify the exposition we will assume that, indeed, $\overline\Omega$ is embedded topologically in $\mathbb R^m$, which will allow us to use an auxiliary linear structure in a neighborhood of $\overline\Omega$. We will also assume that $\overline\Omega$ is strongly concave in $M$.
The symbol $H^1\big([a,b],\mathbb R^m\big)$ will denote the Sobolev space of all absolutely continuous curves in $\mathbb R^m$ whose weak derivative is square integrable. Similarly, $H^1\big([a,b],M\big)$ will denote the infinite dimensional Hilbert manifold consisting of all absolutely continuous curves $x:[a,b]\to M$ such that $\varphi\circ x\vert_{[c,d]}\in H^1\big([c,d],\mathbb R^m\big)$ for every chart $\varphi:U\subset M\to \mathbb R^m$ of $M$ such that $x\big([c,d]\big) \subset U$. By $H^1_0\big(\left]a,b\right[,\mathbb R^m\big)$ we will denote the subset of $H^1\big([a,b],\mathbb R^m\big)$ consisting of the curves $x$ with $x(a)=x(b)=0$. For $A \subset \mathbb R^m$ and $a < b$ we set \begin{equation}\label{eq:defH1abA} H^1\big([a,b],A\big)=\big\{x\in H^1\big([a,b],\mathbb R^m\big):x(s)\in A\ \text{for all $s\in[a,b]$}\big\}. \end{equation} The Hilbert space norm $\Vert\cdot\Vert_{a,b}$ of $H^1\big([a,b],\mathbb R^m\big)$ (equivalent to the usual one) will be defined by: \begin{equation}\label{eq:norma-ab} \Vert x\Vert_{a,b}=\left(\frac{\Vert x(a)\Vert_E^2+\int_a^b\Vert\dot x(s)\Vert^2_E\,\text ds}2\right)^{1/2}, \end{equation} where $\Vert\cdot\Vert_E$ is the Euclidean norm in $\mathbb R^m$. Note that by \eqref{eq:norma-ab} \begin{equation}\label{eq:stima1.8} \Vert x\Vert_{L^\infty([a,b],\mathbb R^m)}\le\Vert x\Vert_{a,b}, \end{equation} and this simplifies some estimates in the proofs of the deformation Lemmas (cf. \cite{esistenza}). We shall use also the space $H^{2,\infty}$ which consists of differentiable curves with absolutely continuous derivative and having bounded weak second derivative.
\begin{rem}\label{rem:rep} In the development of our results, we will consider curves $x$ with variable domain $[a,b]\subset[0,1]$. In this situation, by $H^1$-convergence of a sequence $x_n:[a_n,b_n]\to M$ to a curve $x:[a,b]\to M$ we will mean that $a_n$ tends to $a$, $b_n$ tends to $b$ and $\widehat x_n:[a,b]\to M$ is $H^1$-convergent to $x$ in $H^1\big([a,b],M\big)$ as $n\to\infty$, where $\widehat x_n$ is the unique affine reparameterization of $x_n$ on the interval $[a,b]$. One defines similarly the notion of $H^1$--weak convergence and of uniform convergence for sequences of curves with variable domain. \end{rem}
It will be useful also to consider the flows $\eta^{+}(\tau,x)$ and $\eta^{-}(\tau,x)$ on the Riemannian manifold $M$ defined by \begin{equation}\label{eq:flusso+} \left\{\begin{array}{l}\dfrac{\mathrm d{\eta^+}}{\mathrm d\tau}(\tau)= \dfrac{\nabla \phi(\eta^+)}{\Vert \nabla \phi(\eta^+) \Vert^{2}} \\[.5cm] {\eta}^{+}(0)=x \in\big \{y \in M: -\delta_0 \leq \phi(y) \leq \delta_0\big\}, \end{array} \right. \end{equation}
and
\begin{equation}\label{eq:flusso-} \left\{\begin{array}{l}\dfrac{\mathrm d{\eta}^{-}}{\mathrm d\tau}(\tau)= \dfrac{-\nabla \phi(\eta^-)}{\Vert \nabla \phi(\eta^-) \Vert^{2}} \\[.5cm] {\eta}^{-}(0)=x \in\big \{y \in M: -\delta_0 \leq \phi(y) \leq \delta_0\big\}, \end{array} \right. \end{equation} where $\Vert \cdot \Vert$ is the norm induced by $g$.
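We observe, in passing, that the normalization of the right--hand sides in \eqref{eq:flusso+} and \eqref{eq:flusso-} makes $\phi$ vary with unit speed along the flow lines; this elementary remark is spelled out here only for the reader's convenience. Since $g(\nabla\phi,\nabla\phi)=\Vert\nabla\phi\Vert^2$, one has
\[
\frac{\mathrm d}{\mathrm d\tau}\,\phi\big(\eta^{\pm}(\tau,x)\big)
= g\Big(\nabla\phi\big(\eta^{\pm}\big),\frac{\mathrm d\eta^{\pm}}{\mathrm d\tau}\Big)
= \pm\,\frac{\Vert\nabla\phi(\eta^{\pm})\Vert^{2}}{\Vert\nabla\phi(\eta^{\pm})\Vert^{2}}
= \pm 1,
\]
so that $\phi\big(\eta^{\pm}(\tau,x)\big)=\phi(x)\pm\tau$ as long as the flow line remains in the region where $\nabla\phi$ does not vanish.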
\begin{rem}\label{rem:flusso-well-defined} Note that $\eta^{+}(\tau,x)$ and $\eta^{-}(\tau,x)$ are well defined, because $\nabla \phi \not=0$ on the strip $\phi^{-1}\big([-\delta_0,\delta_0]\big)$. Moreover, using $\eta^+$ and $\eta^-$ we can show that there exists a homeomorphism between $\phi^{-1}\big([-\delta_0,\delta_0]\big)$ and $\big\{y \in \mathbb{R}^m: 1-\delta_0 \leq \Vert y \Vert_E \leq 1+\delta_0\big\}$. Therefore we must have $\delta_0 < 1$, since $\overline\Omega$ is homeomorphic to the unit $m$--dimensional disk. \end{rem}
Now, fix a convex $C^2$--real map $\chi$ defined in $[0,1+\delta_0]$ such that $\chi(s)=s-1$ for any $s \in [1-\delta_0,1+\delta_0]$, $\chi'(s) > 0$ for any $s \in [0,1+\delta_0]$, $\chi''(0)=0$ and consider the map: \begin{equation}\label{eq:phisuldisco} \phi_{\mathbb{D}^m}(z) = \chi(\Vert z \Vert_E), \end{equation} where ${\mathbb{D}^m}$ denotes the $m$--dimensional disk. Note that $\phi_{\mathbb{D}^m}$ satisfies the properties of the map $\phi$ described in Remark \ref{thm:remphisecond} for the set $\overline \Omega = {\mathbb{D}^m}$ and the Riemannian structure given by the Euclidean metric.
We have the following \begin{lem}\label{thm:riconduco-sfera} There exists a homeomorphism $\Psi: \phi^{-1}(]-\infty,\delta_0]) \rightarrow \phi_{\mathbb{D}^m}^{-1}(]-\infty,\delta_0])$, which is of class $C^1$ on $\phi^{-1}([-\delta_0,\delta_0])$, such that \begin{equation}\label{eq:corrispondenza} -\phi(y)= 1-\Vert \Psi(y) \Vert_E \quad\forall\;y \in \phi^{-1}([-\delta_0,\delta_0]). \end{equation} \end{lem}
\begin{proof} Consider any homeomorphism $\psi : \overline\Omega \rightarrow \mathbb{D}^m$ and the flow $\eta^-$ given in \eqref{eq:flusso-}. Note that for all $y \in \phi^{-1}([-\delta_0,0])$ there exists a unique $y_0 \in \partial \Omega$ and $\tau \in [-\delta_0,0]$ such that \[ y = \eta^{-}(\tau,y_0). \] For any $y \in \phi^{-1}([-\delta_0,0])$ we set \[ \psi_0^-(y)=(1+\tau)\psi(y_0), \] obtaining a diffeomorphism between $\phi^{-1}([-\delta_0,0])$ and $\phi_{\mathbb{D}^m}^{-1}([-\delta_0,0])$. Similarly, using the flow $\eta^+$ starting from $\partial\Omega$, we can define $\psi_0^+$ on $\phi^{-1}([0,\delta_0])$ such that $\psi_0^+(y)=\psi_0^-(y)$ on $\partial \Omega$, obtaining a diffeomorphism $\psi_0$ between $\phi^{-1}([-\delta_0,\delta_0])$ and $\phi_{\mathbb{D}^m}^{-1}([-\delta_0,\delta_0])$. Now, we just have to extend $\psi_0$ as homeomorphism to all $\phi^{-1}(]-\infty,\delta_0])$.
Towards this goal, set \[ P_0 = \psi_0^{-1}, \] which is well defined on $\phi_{\mathbb{D}^m}^{-1}([-\delta_0,\delta_0])$, \[
\widehat{P_0} = P_{0}\big|_{\phi_{\mathbb{D}^m}^{-1}(-\delta_0)}, \] and \[ Q = \psi\vert_{\phi^{-1}(-\delta_0)} \circ \widehat{P_0} \] which is a homeomorphism on $\phi_{\mathbb{D}^m}^{-1}(-\delta_0)$.
Now extend $Q$ to all $\big\{z \in \mathbb{R}^m: \Vert z \Vert_E \leq 1-\delta_0\big\}$ by setting: \[ \tilde Q(z) = \begin{cases} \dfrac{\Vert z \Vert_E}{1-\delta_0}Q\big(\frac{1-\delta_0}{\Vert z \Vert_E}z\big)& \text{ if } z \neq 0\\ 0& \text{ if } z = 0. \end{cases} \] Finally, the desired homeomorphism $\Psi$ is obtained by setting: \[ \Psi^{-1}(z)= \begin{cases} \psi_0^{-1}(z) &\text{ if } z \in \phi_{\mathbb{D}^m}^{-1}([-\delta_0,\delta_0])\\ \psi^{-1}(\tilde Q(z)) &\text{ if }z \in \phi_{\mathbb{D}^m}^{-1}(]-\infty,-\delta_0]). \end{cases} \] \qedhere \end{proof}
Throughout the paper we shall use also the following constant: \begin{equation}\label{eq:1.9fabio} K_0=\max_{x\in\phi^{-1}(]-\infty,\delta_0])}\Vert\nabla\phi(x)\Vert. \end{equation}
\subsection{Path space and maximal intervals} \label{sec:pathspace} In this subsection we will describe the set of curves $\mathfrak M$, which will be the ambient space of our minimax framework, and the set $\mathfrak C \subset \mathfrak M$ homeomorphic to $\mathbb{S}^{m-1} \times \mathbb{S}^{m-1}$, which encodes all the topological information about $\mathfrak M$.
Let $\delta_0>0$ be as in Remark \ref{thm:remopencondition}. Consider first the following set of paths
\begin{equation}\label{eq:defM} \mathfrak M_0=\Big\{x\in H^1\big([0,1],\phi^{-1}(\left]-\infty,\delta_0\right[)\big): \phi(x(0))\ge 0,\,\phi(x(1))\ge 0\Big\}, \end{equation} see Figure \ref{fig:3bis}. \begin{figure}
\caption{ Curves (the dotted lines) representing typical elements of the path space $\mathfrak M_0$.}
\label{fig:3bis}
\end{figure}
This is a subset of the Hilbert space $H^1\big([0,1],\mathbb R^m\big)$, and it will be topologized with the induced metric.
The following result will be used systematically throughout the paper: \begin{lem}\label{thm:lemmacazzatina} If $x\in\mathfrak M_0$ and $[a,b]\subset[0,1]$ is such that $x(a)\in\partial\Omega$ and there exists $\bar s\in[a,b]$ such that $\phi(x(\bar s))\le-\delta<0$, then: \begin{equation}\label{eq:2.9fabio} b-a\ge\frac{\delta^2}{K_0^2}\left(\int_a^bg(\dot x,\dot x)\,\mathrm d\sigma\right)^{-1} , \end{equation} and \begin{equation}\label{eq:aggiuntalemma2.1}
\sup\{|\phi(x(s))|:s \in [a,b]\} \leq \sqrt2K_0\left(\frac{b-a}2 \int_a^bg(\dot x,\dot x) \, \mathrm d\sigma\right)^{\frac12}
\le K_0\int_a^bg(\dot x,\dot x)^{\frac12} \mathrm d\sigma\\ \le K_0\sqrt{b-a} \left( \int_a^bg(\dot x,\dot x) \,\mathrm d\sigma\right)^{\frac12}, \end{multline*} from which \eqref{eq:aggiuntalemma2.1} follows. Moreover, the same estimate shows that, if there exists $\bar s\in[a,b]$ such that $\phi(x(\bar s))\le-\delta<0$, then \eqref{eq:2.9fabio} holds. \end{proof}
For all $x\in\mathfrak M_0$, let $\mathcal I_{x}^{0}$ and $\mathcal I_x$ denote the following collections of closed subintervals of $[0,1]$: \[ \mathcal I_{x}^{0} = \big\{[a,b]\subset [0,1]: x([a,b]) \subset \overline\Omega, \ x(a),x(b) \in \partial\Omega \big\}, \] \[ \mathcal I_x=\big\{[a,b]\in \mathcal I_{x}^{0} : \text{$[a,b]$ is maximal with respect to the above property}\big\}. \]
\begin{rem}\label{simple-sc} It is immediate to verify the following semicontinuity property. Suppose $x_n \rightarrow x$ in $\mathfrak M$, $[a,b] \in {\mathcal I}_x$ and $[a_n,b_n] \in {\mathcal I}_{x_n}$ with $[a_n,b_n] \cap [a,b] \neq \emptyset$ for all $n$. Then \[a \leq \liminf_{n \rightarrow \infty}a_n \leq \limsup_{n \rightarrow \infty}b_n \leq b.\] \end{rem}
\begin{rem}\label{rem:noROGC} Note that if $\gamma: [0,1]\rightarrow \overline\Omega$ is an OGC, then $\gamma(1-\cdot)\not\equiv \gamma$. Indeed, if by contradiction $\gamma(1-t)=\gamma(t)$ for all $t$, then $\dot \gamma(\frac12)=0$, and by the conservation law of the energy $\gamma$ would be constant. \end{rem}
The following Lemma allows us to describe the subset $\mathfrak C$ of ${\mathfrak M}_{0}$ which carries all the topological properties of ${\mathfrak M}_{0}$.
\begin{lem}\label{thm:corde} There exists a continuous map $G:\partial\Omega\times\partial\Omega\to H^1([0,1],\overline\Omega)$ such that \begin{enumerate} \item\label{corde1} $G(A,B)(0)=A,\,\,G(A,B)(1)=B$. \item\label{corde2} $A\not=B\,\Rightarrow\, G(A,B)(s)\in\Omega\,\forall s\in ]0,1[$. \item\label{corde3} $G(A,A)(s)=A\,\forall s\in [0,1]$.
\item\label{corde5} Suppose that there exists $s_0 \in [0,1] : \phi(G(A,B)(s_0)) > -\delta_0$. Then the set $\{s \in [0,1]: \phi(G(A,B)(s)) \in [-\delta_0,0]\}$ consists of two intervals where \\ $\phi(G(A,B)(\cdot))$ is strictly monotone. \end{enumerate} \end{lem}
\begin{proof} Let $\Psi: \phi^{-1}([-\infty,\delta_0]) \rightarrow \phi_{\mathbb{D}^m}^{-1}([-\infty,\delta_0])$ be the homeomorphism of Lemma \ref{thm:riconduco-sfera}. Define \[ \hat G(A,B)(s) = \Psi^{-1}\big((1-s)\Psi(A)+s\Psi(B)\big),\,A,B\in \overline\Omega. \]
In general, if $\overline\Omega$ is only homeomorphic to the disk $\mathbb D^m$, the above definition produces curves that in principle are only continuous. In order to produce curves with $H^1$-regularity, we use a broken geodesic approximation argument. Towards this goal, note that if the curve \[ (1-s)\Psi(A) + s\Psi(B) \] intersects $\phi_{\mathbb{D}^m}^{-1}(-\delta_0)$, this happens at the instants \[ 0<s_A \leq s_B < 1, \] with $s_A, s_B$ depending continuously on $A,B$ respectively.
Denote by $\varrho(\overline\Omega,g)$ the infimum of the injectivity radii of all points of
$\overline\Omega$ with respect to the metric $g$ (cf. \cite{docarmo}). By compactness, there exists $N_0\in\mathbb N$ with the property that $\mathrm{dist}\big(\hat G(A,B)(a),\hat G(A,B)(b)\big)\le\varrho(\overline\Omega,g)$ whenever
$\vert a-b\vert\le\frac1{N_0}$ (where $\mathrm{dist}$ denotes the distance induced by $g$).
Finally, for all $\hat G(A,B)$, denote by $\gamma_{A,B}$ the broken geodesic obtained as concatenation of the curves $\gamma_k:[s_A+\frac{k-1}{N_0}(s_B-s_A),s_A+\frac{k}{N_0}(s_B-s_A)]\to M$ given by the unique minimal geodesic in $(M,g)$ from $\hat G(A,B)(s_A+\frac{k-1}{N_0}(s_B-s_A))$ to $\hat G(A,B)(s_A+\frac{k}{N_0}(s_B-s_A))$, $k=1,\ldots,N_0$. Moreover, we set \[ \gamma_{A,B}(s) =\hat G(A,B)(s) \text{ if }s \in [0,s_A] \cup [s_B,1]. \]
Since the minimal geodesic in any convex normal neighborhood depends continuously (with respect to the $C^2$--norm) on its endpoints, $\gamma_{A,B}$ depends continuously on $(A,B)$ in the $H^{1}$--norm. Moreover, thanks to \eqref{eq:corrispondenza}, $\gamma_{A,B}$ satisfies \eqref{corde1}--\eqref{corde3} provided that $N_0$ is sufficiently large.
Now, using the flow $\eta^{-}$ of \eqref{eq:flusso-}, defined also in a neighborhood of $\phi^{-1}([-\delta_0,\delta_0])$, we can modify $\gamma_{A,B}$ in $[s_A,s_B]$, obtaining $G$ such that $\phi(G(A,B)(s)) > -\delta_0$ for any $s \in ]s_A,s_B[$. Then, thanks to \eqref{eq:corrispondenza}, $G$ also satisfies property \eqref{corde5}. \end{proof}
We set \begin{equation}\label{eq:2.6bis} \begin{split} &\mathfrak C=\big\{G(A,B)\,:\,A,B\in\partial\Omega\big\},\\ &\mathfrak C_0=\{G(A,A)\,:\,A\in\partial\Omega\}.\end{split}\end{equation}
\begin{rem}\label{rem:conseguenze-homeo} Note that $\mathfrak C$ is homeomorphic to $\mathbb{S}^{m-1}\times \mathbb{S}^{m-1}$ by a homeomorphism mapping $\mathfrak C_0$ onto $\{(A,A)\,:\,A\in \mathbb{S}^{m-1}\}$. \end{rem}
Define now the following constant: \begin{equation}\label{eq:defM0} M_0=\sup_{x\in\mathfrak C}\int_0^1g(\dot x,\dot x)\,\mathrm dt. \end{equation} Since $\mathfrak C$ is compact and the integral in \eqref{eq:defM0} is continuous in the $H^1$-topology, then $M_0<+\infty$.
Finally we define the following subset of $\mathfrak M_0$: \begin{equation}\label{eq:defMM} \mathfrak M=\Big\{x\in\mathfrak M_0: \frac12\int_a^bg(\dot x,\dot x)\,\mathrm dt< M_0 \quad \forall [a,b] \in \mathcal I_x \Big\}. \end{equation}
We shall work in $\mathfrak M$ using flows in $H^1\big([0,1],\mathbb R^m\big)$ for which $\mathfrak M$ is invariant.
\section{Geometrically critical values and variationally critical portions}\label{sec:functional}
In this section we will introduce two different notions of {\em criticality\/} for curves in $\mathfrak M$.
\begin{defin}\label{thm:defgeomcrit} A number $c\in\left]0,M_0\right[$ will be called a {\em geometrically critical value} if there exists an OGC $\gamma$ parameterized in $[0,1]$ such that $\frac12\int_0^1g(\dot\gamma,\dot\gamma)\,\text dt=c$. A number which is not geometrically critical will be called a {\em geometrically regular value}.
\end{defin}
It is important to observe that, in view of obtaining multiplicity results, distinct geometrically critical values yield geometrically distinct orthogonal geodesic chords: \begin{prop}\label{thm:distinct} Let $c_1\ne c_2$, $c_1,c_2>0$ be distinct geometrically critical values with corresponding OGC $x_1,x_2$. Then $x_1\big([0,1]\big)\ne x_2\big([0,1]\big)$. \end{prop} \begin{proof} The OGC's $x_1$ and $x_2$ are parameterized in the interval $[0,1]$. Assume by contradiction that $x_1([0,1])=x_2([0,1])$. Since \[ x_{i}(]0,1[) \subset \Omega \text{ for any }i=1,2, \] we have \[\{x_1(0),x_1(1)\}= \{x_2(0),x_2(1)\}.\] Up to reversing the orientation of $x_2$, we can assume $x_1(0)=x_2(0)$. Since $x_1$ and $x_2$ are OGC's, $\dot x_1(0)$ and $\dot x_2(0)$ are parallel, but the condition $c_1\ne c_2$ says that $\dot x_1(0)\ne\dot x_2(0)$. Then there exists $\lambda > 0, \lambda\not=1$ such that $\dot x_2(0) =\lambda \dot x_1(0)$ and therefore, by the uniqueness of the Cauchy problem for geodesics we have $x_2(s)=x_1(\lambda s)$. Up to exchanging $x_1$ with $x_2$, we can assume $\lambda > 1$. Since $x_2(\frac{1}{\lambda}) = x_1(1) \in\partial\Omega$, the transversality of $\dot x_2(\frac{1}{\lambda})$ to $\partial\Omega$ implies the existence of $\bar s \in ]\frac{1}{\lambda},1]$ such that $x_2(\bar s) \not\in\overline\Omega$, getting a contradiction. \end{proof}
A notion of criticality will now be given in terms of variational vector fields. For $x\in \mathfrak M$, let $\mathcal V^+(x)$ denote the following closed convex cone of $T_x H^1\big([0,1],\mathbb R^m\big)$: \begin{equation}\label{eq:defV+(x)} \mathcal V^+(x)=\big\{V\in T_x H^1\big([0,1],\mathbb R^m\big):g\big(V(s), \nabla\phi\big(x(s)\big)\big)\ge0\ \text{for $x(s)\in\partial\Omega$}\big\}; \end{equation} vector fields in $\mathcal V^+(x)$ are interpreted as infinitesimal variations of $x$ by curves stretching ``outwards'' from the set $\overline\Omega$.
\begin{defin}\label{thm:defvariatcrit} Let $x\in\mathfrak M$ and $[a,b] \subset [0,1]$; we say that $x\vert_{[a,b]}$ is a $\mathcal V^{+}$--\emph{variationally critical portion\/} of $x$ if $x\vert_{[a,b]}$ is not constant and if \begin{equation} \label{eq:varcriticality-plus}\int_a^bg\big(\dot x,\Ddt V\big)\,\mathrm dt\ge0, \quad\forall\,V\in\mathcal V^+(x). \end{equation} \end{defin}
Similarly, for $x\in\mathfrak M$ we define the cone: \begin{equation}\label{eq:defV-(x)} \mathcal V^-(x)=\big\{V\in T_x H^1\big([0,1],\mathbb R^m\big):g\big(V(s), \nabla\phi\big(x(s)\big)\big)\le0\ \text{for $x(s)\in \partial \Omega$}\big\}, \end{equation} and we give the following
\begin{defin}\label{thm:defvariatcrit-} Let $x\in\mathfrak M$ and $[a,b] \subset [0,1]$; we say that $x\vert_{[a,b]}$ is a $\mathcal V^{-}$--\emph{variationally critical portion\/} of $x$ if $x\vert_{[a,b]}$ is not constant and if \begin{equation} \label{eq:varcriticality}\int_a^bg\big(\dot x,\Ddt V\big)\,\mathrm dt\ge0, \quad\forall\,V\in\mathcal V^-(x). \end{equation} \end{defin}
The integral in \eqref{eq:varcriticality-plus} and \eqref{eq:varcriticality} gives precisely the first variation of the geodesic action functional in $(M,g)$ along $x\vert_{[a,b]}$. Hence, variationally critical portions are interpreted as those curves $x\vert_{[a,b]}$ whose geodesic energy is {\em not decreased\/} by infinitesimal variations through curves stretching outwards from the set $\overline\Omega$. The use of outwards pushing infinitesimal variations is motivated by the concavity of $\overline\Omega$. Indeed, in the convex case it is customary to use a curve shortening method in $\overline \Omega$, which can be seen as the use of a flow constructed by infinitesimal variations of $x$ in $\mathcal V^-(x)$, keeping the endpoints of $x$ on $\partial\Omega$.
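For the reader's convenience, we recall the (formal, smooth-case) computation behind this interpretation: if $x_\epsilon$ is a variation of $x\vert_{[a,b]}$ with variational vector field $V=\partial_\epsilon x_\epsilon\vert_{\epsilon=0}$, then, by the symmetry of the Levi--Civita connection,
\[
\frac{\mathrm d}{\mathrm d\epsilon}\bigg\vert_{\epsilon=0}\frac12\int_a^b g(\dot x_\epsilon,\dot x_\epsilon)\,\mathrm dt
=\int_a^b g\Big(\dot x,\Ddt V\Big)\,\mathrm dt,
\]
which is exactly the quantity appearing in \eqref{eq:varcriticality-plus} and \eqref{eq:varcriticality}.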
Flows obtained as integral flows of convex combinations of vector fields in $\mathcal V^+(x)$ play, in a certain sense, the leading role in our variational approach. However we shall use also integral flows of convex combinations of vector fields in $\mathcal V^-(x)$ to avoid certain variationally critical portions that do not correspond to OGC's.
Clearly, we are interested in determining the existence of geometrically critical values. In order to use a variational approach we will first have to take into consideration the more general class of $\mathcal V^{+}$--variationally critical portions. A central issue in our theory consists in studying the relations between $\mathcal V^{+}$--variationally critical portions $x\vert_{[a,b]}$ and OGC's. From now on, $\mathcal V^{+}$--variationally critical portions will simply be called variationally critical portions.
\section{Classification of variationally critical portions} \label{sub:description} Let us now take a look at what variationally critical portions look like. In the first place, let us point out that regular variationally critical portions are OGC's. In order to prove this, the following Lemma is crucial. Its proof can be found in \cite{esistenza}.
\begin{lem}\label{thm:leminsez4.4} Let $x\in\mathfrak M$ be fixed, and let $[a,b]\subset[0,1]$ be such that $x\vert_{[a,b]}$ is a (non--constant) variationally critical portion of $x$, with $x(a),x(b)\in\partial\Omega$ and $x\big([a,b]\big)\subset\overline\Omega$. Then: \begin{enumerate} \item\label{itm:numfinint} $x^{-1}\big(\partial\Omega)\cap[a,b]$ consists of a finite number of closed intervals and isolated points; \item\label{itm:constconcomp} $x$ is constant on each connected component of $x^{-1}\big(\partial\Omega)\cap[a,b]$; \item\label{itm:piecC2} $x\vert_{[a,b]}$ is piecewise $C^2$, and the discontinuities of $\dot x$ may occur only at points in $\partial\Omega$; \item\label{itm:portgeo} each $C^2$ portion of $x\vert_{[a,b]}$ is a geodesic in $\overline\Omega$. \item\label{itm:addizionale} $\inf\{\phi(x(s))\,:\,s\in[a,b]\}<-\delta_0$. \end{enumerate} \end{lem}
Using the previous Lemmas, we can now prove the following: \begin{prop}\label{thm:regcritpt} Assume that there are no WOGC's in $\overline\Omega$. Let $x\in\mathfrak M$ and $[a,b]\in {\mathcal I_x^0}$ be such that $x\vert_{[a,b]}$ is a variationally critical portion of $x$ and such that the restriction of $x$ to $[a,b]$ is of class $C^1$. Then, $x\vert_{[a,b]}$ is an orthogonal geodesic chord in $\overline\Omega$. \end{prop} \begin{proof} $C^1$--regularity, together with \eqref{itm:numfinint} and \eqref{itm:constconcomp} of Lemma \ref{thm:leminsez4.4}, show that $x^{-1}(\partial\Omega)\cap[a,b]$ consists only of a finite number of isolated points. Then, by the $C^1$ regularity on $[a,b]$ and parts \eqref{itm:piecC2}--\eqref{itm:portgeo} of Lemma \ref{thm:leminsez4.4}, $x$ is a geodesic on the whole interval $[a,b]$. Moreover an integration by parts argument shows that $\dot x(a)$ and $\dot x(b)$ are orthogonal to $T_{x(a)}\partial\Omega$ and $T_{x(b)}\partial\Omega$ respectively. Finally, since there are no WOGC's on $\overline\Omega$, $x\vert_{[a,b]}$ is an OGC. \end{proof}
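The integration by parts invoked in the previous proof can be sketched as follows (we gloss over the routine localization of the variations near the endpoints). Since $x\vert_{[a,b]}$ is a geodesic, for every $V\in\mathcal V^+(x)$ one has
\[
\int_a^b g\Big(\dot x,\Ddt V\Big)\,\mathrm dt
=\Big[g(\dot x,V)\Big]_a^b-\int_a^b g\Big(\Ddt \dot x,V\Big)\,\mathrm dt
=g\big(\dot x(b),V(b)\big)-g\big(\dot x(a),V(a)\big),
\]
and the criticality condition \eqref{eq:varcriticality-plus} forces this boundary term to be nonnegative. Testing with variations whose values at the endpoints are vectors tangent to $\partial\Omega$ (for which both signs are admissible in the definition of $\mathcal V^+(x)$) yields $g(\dot x(a),w)=0$ for all $w\in T_{x(a)}\partial\Omega$ and $g(\dot x(b),w)=0$ for all $w\in T_{x(b)}\partial\Omega$, i.e.\ $\dot x(a)\perp T_{x(a)}\partial\Omega$ and $\dot x(b)\perp T_{x(b)}\partial\Omega$.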
Variationally critical portions $x\vert_{[a,b]}$ of class $C^1$ will be called {\em regular variationally critical portions}; those critical portions that do not belong to this class will be called {\em irregular}. Irregular variationally critical portions of curves $x\in\mathfrak M$ are further divided into two subclasses, described in the Proposition below, whose proof can be obtained using Lemma \ref{thm:leminsez4.4} as done for the proof of Proposition \ref{thm:regcritpt}. \begin{prop}\label{thm:irregcritpt} Assume that there are no WOGC's in $\overline\Omega$. Let $x\in\mathfrak M$ and let $[a,b]\in {\mathcal I_x^0}$ be such that $x\vert_{[a,b]}$ is an irregular variationally critical portion of $x$. Then, there exists a subinterval $[\alpha,\beta]\subset [a,b]$ such that $x\vert_{[a,\alpha]}$ and $x\vert_{[\beta,b]}$ are constant (in $\partial\Omega$), $\dot x(\alpha^+)\in T_{x(\alpha)}(\partial\Omega)^\perp$, $\dot x(\beta^-)\in T_{x(\beta)}(\partial\Omega)^\perp$, and one of the two mutually exclusive situations occurs: \begin{enumerate} \item \label{itm:cusp} there exists a finite number of intervals $[t_1,t_2]\subset\left]\alpha,\beta\right[$ such that $x\big([t_1,t_2]\big)\subset\partial\Omega$ and that are maximal with respect to this property; moreover, $x$ is constant on each such interval $[t_1,t_2]$, and $\dot x(t_1^-)\ne\dot x(t_2^+)$; \item \label{itm:stop} $x\vert_{[\alpha,\beta]}$ is an OGC in $\overline\Omega$. \end{enumerate} \end{prop}
Irregular variationally critical portions in the class described in part \eqref{itm:cusp} will be called {\em of first type}, those described in part \eqref{itm:stop} will be called {\em of second type}. An interval $[t_1,t_2]$ as in part \eqref{itm:cusp} will be called a {\em cusp interval\/} of the irregular critical portion $x$.
\begin{rem}\label{rem:rem4.10} We observe here that, due to the strong concavity assumption, if $x\in\mathfrak M$ is an irregular variationally critical point of first type and $[t_1,t_2],[s_1,s_2]$ are cusp intervals for $x$ contained in $[a,b]$ with $t_2<s_1$, then \[\text{there exists}\ s_0\in\left]t_2, s_1\right[\text{\ with\ }\phi(x(s_0))< -\delta_0,\] (see Remark~\ref{thm:remopencondition}). This implies that the number of cusp intervals of irregular variationally critical portions $x\vert_{[a,b]}$ is uniformly bounded (see Lemma \ref{thm:lemmacazzatina}).
We also remark that at each cusp interval $[t_1,t_2]$ of $x$, the vectors $\dot x(t_1^-)$ and $\dot x(t_2^+)$ may not be orthogonal to $\partial\Omega$. If $x\vert_{[a,b]}$ is an irregular critical portion of the first type, and if $[t_1,t_2]$ is a cusp interval for $x$, we will set \begin{equation}\label{eq:4.15} \Theta_x(t_1,t_2) = \text{the (unoriented) angle between the vectors\ } \dot x(t_1^-) \text{\ and\ } \dot x(t_2^+); \end{equation} observe that $\Theta_x(t_1,t_2)\in\left]0,\pi\right]$. \end{rem}
\begin{rem}\label{thm:rem4.11bis}We observe that if $[t_1,t_2]$ is a cusp interval for $x$, then the tangential components of $\dot x(t_1^-)$ and of $\dot x(t_2^+)$ along $\partial\Omega$ are equal; this is easily obtained with an integration by parts argument. It follows that if $\Theta_x(t_1,t_2)>0$, then $\dot x(t_1^-)$ and $\dot x(t_2^+)$ cannot be both tangent to $\partial\Omega$. \end{rem}
We will denote by $\mathcal Z$ the set of all curves having variationally critical portions: \[\mathcal Z=\big\{x\in\mathfrak M:\exists\,[a,b]\subset[0,1]\ \text{such that $x\vert_{[a,b]}$ is a variationally critical portion of $x$} \big\};\] the following compactness property holds for $\mathcal Z$: \begin{prop}\label{thm:Zrcompatto} If $x_n$ is a sequence in $\mathcal Z$ and $[a_n,b_n]\in{\mathcal I_{x_n}^0}$ is such that ${x_n}\vert_{[a_n,b_n]}$ is a (non-constant) variationally critical portion of $x_n$, then, up to subsequences, as $n\to\infty$ $a_n$ converges to some $a$, $b_n$ converges to some $b$, with $0\le a<b\le1$, and the sequence of paths $x_n:[a_n,b_n]\to\overline\Omega$ is $H^1$-convergent (in the sense of Remark \ref{rem:rep}) to some curve $x:[a,b]\to\overline \Omega$ which is variationally critical. \end{prop} \begin{proof} By Lemma~\ref{thm:lemmacazzatina}, $b_n-a_n$ is bounded away from $0$, which implies the existence of subsequences converging in $[0,1]$ to $a$ and $b$ respectively, and with $a<b$. If $x_n$ is a sequence of regular variationally critical portions, then the conclusion follows easily observing that $x_n$, and thus $\widehat x_n$ (its affine reparameterization in $[a,b]$) is a sequence of geodesics with image in a compact set and having bounded energy.
For the general case, one simply observes that the number of cusp intervals of each $x_n$ is bounded uniformly in $n$, and the argument above can be repeated by considering the restrictions of $x_n$ to the complement of the union of all cusp intervals. Finally, using partial integration of the term $\int_a^b g(\dot x,\Ddt V)\,\text dt$, one observes that it is nonnegative for all $ V\in\mathcal V^+(x)$, hence $x$ is variationally critical. \end{proof}
\begin{rem}\label{rem:rem4.12} We point out that the first part of the proof of Proposition \ref{thm:Zrcompatto} shows that if $x_n\in\mathcal Z$ and $[a_n,b_n]\in {\mathcal I_{x_n}^0}$ is an interval such that ${x_n}\vert_{[a_n,b_n]}$ is an OGC, then, up to subsequences, there exists $[a,b]\subset[0,1]$ and $x:[a,b]\to\overline\Omega$ such that ${x_n}\vert_{[a_n,b_n]}\to x\vert_{[a,b]}$ in $H^1$ and $x$ is an OGC. \end{rem}
Since we are assuming that there are no WOGC in $\overline\Omega$, by Lemma~\ref{thm:leminsez4.4}, Proposition~\ref{thm:regcritpt}, Proposition~\ref{thm:irregcritpt} and Proposition~\ref{thm:Zrcompatto}, we obtain immediately the following result.
\begin{cor}\label{thm:cor4.11bis}There exists $d_0>0$ such that for any irregular variationally critical portion $x\vert_{[a,b]}$ of first type with $[a,b] \in \mathcal I_x^0$, there exists a cusp interval $[t_1,t_2]\subset[a,b]$ for $x$ such that \[ \Theta_x(t_1,t_2)\ge d_0. \] \end{cor}
\section{The notion of topological non-essential interval}\label{V-}
As observed in \cite{esistenza}, we need three different types of flows, whose formal definition will be given below. ``Outgoing flows'' are applied to paths that are \emph{far} from variationally critical portions (cf.\ Definition \ref{thm:defvariatcrit}). ``Reparameterization flows'' are applied to curves that are \emph{close} to irregular variationally critical portions of second type. ``Ingoing flows'' are used to avoid irregular variationally critical portions of first type. In order to describe this type of homotopies, we introduce the notion of \textit{topological non-essential interval}, which is a key point in defining the admissible homotopies. The possibility of avoiding irregular variationally critical portions of first type is based on the following {\em regularity\/} property of the variationally critical portions with respect to ingoing directions.
\begin{lem}\label{thm:lem5.3} Let $y\in H^1\big([a,b],\overline\Omega\big)$ be such that: \begin{equation}\label{eq:eq5.6} \int_a^bg\big(\dot y,\Ddt V)\,\mathrm dt\ge0,\qquad\forall\,V\in\mathcal V^-(y) \ \text{with}\ V(a)=V(b)=0. \end{equation} Then, $y\in H^{2,\infty}([a,b],\overline\Omega)$ and in particular it is of class $C^1$. \end{lem} \begin{proof} See for instance \cite[Lemma 3.2]{London}. \end{proof} \begin{rem}\label{rem:4.16bis} Note that, under the assumption of strong concavity, the set \[C_y=\big\{s\in[a,b]\,:\,\phi(y(s))=0\big\}\] consists of a finite number of intervals. On each one of these intervals, $y$ is of class $C^2$, and it satisfies the ``constrained geodesic'' differential equation \begin{equation}\label{eq:4.30bis} \Dds\dot y(s)=-\left[\frac1{g(\nu(y(s)),\nabla\phi(y(s)))}H^\phi(y(s))[\dot y(s),\dot y(s)]\right]\nu(y(s)). \end{equation} \end{rem}
\begin{rem}\label{rem:4.17} For every $\delta\in\left]0,\delta_0\right]$ we have the following property: for any $x\in\mathfrak M$ and $[a,b] \in {\mathcal I}_x$ such that $x\vert_{[a,b]}$ is an irregular variationally critical portion of first type, there exists an interval $[\alpha,\beta]\subset[a,b]$ and a cusp interval $[t_1,t_2]\subset[\alpha,\beta]$ such that: \begin{equation}\label{eq:alphabeta} \Theta_x(t_1,t_2)\ge d_0, \text{ and } \phi(x(\alpha))=\phi(x(\beta))=-\delta, \end{equation} where $d_0$ is given in Corollary \ref{thm:cor4.11bis}.
Note that $g\big(\nabla\phi(x(\alpha)),\dot x(\alpha)\big)>0$ and $g\big(\nabla\phi(x(\beta)),\dot x(\beta)\big)<0$, by the strong concavity assumption. \end{rem}
For the remainder of the paper we will denote by \[\pi:\phi^{-1}\big([-\delta_0,0]\big)\longrightarrow\phi^{-1}(0)\] the retraction onto $\partial\Omega$ obtained from the inverse of the exponential map of the normal bundle of $\phi^{-1}(0)$. By Remark \ref{thm:rem4.11bis}, a simple contradiction argument shows that the following properties are satisfied by irregular variationally critical portions of first type (see also Corollary \ref{thm:cor4.11bis}):
\begin{lem}\label{thm:lem4.18} There exists $\bar\gamma>0$ and $\delta_1\in\left]0,\delta_0\right[$ such that, for all $\delta\in\left]0,\delta_1\right]$, for any $x\in{\mathfrak M}$ such that $x\vert_{[a,b]}$ is an irregular variationally critical portion of first type, and for any interval $[\alpha,\beta] \subset [a,b]$ that contains a cusp interval $[t_1,t_2]$ satisfying \eqref{eq:alphabeta}, the following inequality holds: \begin{equation}\label{eq:4.31} \max\Big\{\Vert x(\beta)-\pi(x(\alpha))\Vert_{E},\, \Vert x(\alpha)-\pi(x(\beta))\Vert_{E}\Big\}\ge(1+2\bar\gamma) \Vert\pi(x(\beta))-\pi(x(\alpha))\Vert_{E}, \end{equation} \end{lem} \noindent (recall that $\Vert \cdot \Vert_{E}$ denotes the Euclidean norm).
The following Lemma says that curves satisfying \eqref{eq:4.31} and those that satisfy \eqref{eq:eq5.6} are contained in \emph{disjoint} closed subsets; in other words, curves satisfying \eqref{eq:4.31} are far from being critical with respect to $\mathcal V^-$. In particular, the set of irregular variationally critical portions of first type consists of curves at which the value of the energy functional can be decreased by deforming in the directions of $\mathcal V^-$.
Let $\bar\gamma$ be as in Lemma~\ref{thm:lem4.18}. \begin{lem}\label{thm:lem4.19} There exists $\delta_2\in\left]0,\delta_0\right[$ with the following property: for any $\delta\in\left]0,\delta_2\right]$, for any $[a,b]\subset\mathbb R$ and for any $y\in H^1([a,b],\overline\Omega)$ satisfying \eqref{eq:eq5.6} and \[ \phi(y(a))=\phi(y(b))=-\delta, \phi(y(\bar t))=0 \text{ for some }\bar t\in ]a,b[, \] the following inequality holds: \begin{equation}\label{eq:4.32} \max\Big\{\Vert y(b)-\pi(y(a))\Vert_{E},\,\Vert y(a)-\pi(y(b))\Vert_{E}\Big\} \le \left(1+\frac{\bar\gamma}2\right)\Vert\pi(y(b))-\pi(y(a))\Vert_{E}. \end{equation} \end{lem}
\begin{proof} See \cite{esistenza}. \end{proof} Using vector fields in $\mathcal V^-(x),\,x\in\mathfrak M$, we can build a flow moving away from the set of irregular variationally critical portions of first type, without increasing the energy functional. To this aim let $\pi,\,\bar\gamma,\,\delta_1,\,\delta_2$ be chosen as in Lemma \ref{thm:lem4.18} and \ref{thm:lem4.19}, and set \begin{equation}\label{eq:4.37} \bar\delta=\min\{\delta_1,\delta_2\}. \end{equation} Let us give the following: \begin{defin}\label{thm:defcostapp} Let $x\in\mathfrak M$, $[a,b]\in \mathcal I_x^{0}$ and $[\alpha,\beta]\subset[a,b]$. We say that \emph{$x$ is $\bar\delta$-close to $\partial\Omega$ on $[\alpha,\beta]$} if the following situation occurs: \begin{enumerate} \item\label{itm:app1} $\phi(x(\alpha))=\phi(x(\beta))=-\bar\delta$; \item\label{itm:app2} $\phi(x(s))\ge-\bar\delta$ for all $s\in[\alpha,\beta]$; \item\label{itm:app3} there exists $s_0\in\left]\alpha,\beta\right[$ such that $\phi(x(s_0))>-\bar\delta$; \item $[\alpha,\beta]$ is minimal with respect to properties \eqref{itm:app1}, \eqref{itm:app2} and \eqref{itm:app3}.
\end{enumerate} If $x$ is $\bar\delta$-close to $\partial\Omega$ on $[\alpha,\beta]$, the \emph{maximal proximity} of $x$ to $\partial\Omega$ on $[\alpha,\beta]$ is defined to be the quantity \begin{equation}\label{eq:maxprox} \mathfrak p^x_{\alpha,\beta}=\max_{s\in[\alpha,\beta]}\phi(x(s)).\end{equation} \end{defin} Given an interval $[\alpha,\beta]$ where $x$ is $\bar\delta$-close to $\partial\Omega$, we define the following constant, which is a sort of measure of how much the curve $x\vert_{[\alpha,\beta]}$ fails to flatten along $\partial\Omega$: \begin{defin}\label{thm:defflat}
The \emph{bending constant} of $x$ on $[\alpha,\beta]$ is defined by: \begin{equation}\label{eq:bendconst} \mathfrak b^x_{\alpha,\beta}=
\frac{\max\big\{ \Vert x(\beta)-\pi(x(\alpha))\Vert_{E},\Vert x(\alpha)-\pi(x(\beta))\Vert_{E}\big\}}{\Vert \pi(x(\alpha))-\pi(x(\beta))\Vert_{E}}\in\mathbb R^+\cup\{+\infty\}, \end{equation} where $\pi$ denotes the projection onto $\partial\Omega$ along orthogonal geodesics. \end{defin}
We observe that $\mathfrak b^x_{\alpha,\beta}=+\infty$ if and only if $x(\alpha)=x(\beta)$.
Let $\bar\gamma$ be as in Lemma~\ref{thm:lem4.18}. If the bending constant of a path $y\vert_{[\alpha,\beta]}$ is greater than or equal to $1+\bar\gamma$, then the energy functional in the interval $[\alpha,\beta]$ can be decreased in a neighborhood of $y\vert_{[\alpha,\beta]}$ keeping the endpoints $y(\alpha)$ and $y(\beta)$ fixed, and moving away from $\partial \Omega$ (cf. \cite{esistenza}).
In order to prove this, we first need the following \begin{defin}\label{def:summary-interval} An interval $[\tilde\alpha,\tilde\beta]$ is called a \emph{summary interval} for $x \in \mathfrak M$ if it is the smallest interval contained in $[a,b]\in\mathcal I_{x}^{0}$ and containing all the intervals $[\alpha,\beta]$ such that \begin{itemize} \item $x$ is $\bar\delta$--close to $\partial\Omega$ on $[\alpha,\beta]$, \item $\mathfrak b^{x}_{\alpha,\beta} \geq 1 + \bar \gamma$. \end{itemize} \end{defin} The following result is proved in \cite{esistenza}: \begin{prop}\label{thm:prop4.20} There exist positive constants $\sigma_0\in\left]0,{\bar\delta}/2\right[$, $\varepsilon_0 \in\left]0,\bar \delta - 2\sigma_0\right[$, $\rho_0,\,\theta_0$ and $\mu_0$ such that for all $y\in\mathfrak M$, for all $[a,b]\in \mathcal I_y$
and for every summary interval $[\tilde\alpha,\tilde\beta]$ for $y$ containing an interval $[\alpha,\beta]$ such that \[ y \text{ is $\bar\delta$--close to $\partial\Omega$ on }[\alpha,\beta], \, \mathfrak b^y_{\alpha,\beta}\ge1+\bar\gamma, \, \mathfrak p^y_{\alpha,\beta}\ge-2\sigma_0, \] there exists $V_y\in H_0^1\big([\tilde\alpha,\tilde\beta],\mathbb R^m\big)$ with the following property:
for all $z\in H^1([\tilde\alpha,\tilde\beta],\mathbb R^m)$ with $\Vert z-y\Vert_{\tilde \alpha, \tilde \beta}\le\rho_0$ we have: \begin{enumerate} \item\label{itm:teruno} $V_y(s)=0$ for all $s\in[\tilde\alpha,\tilde\beta]$ such that $\phi(z(s))\le-\bar\delta + \varepsilon_0$; \item\label{itm:terdue} $g\big(\nabla\phi(z(s)),V_y(s)\big)\le-\theta_0\Vert V_y\Vert_{\tilde\alpha,\tilde\beta}$, if $s\in[\tilde\alpha,\tilde\beta]$ and $\phi(z(s))\in[-2\sigma_0,2\sigma_0]$; \item\label{itm:tertre} $\int_{\tilde\alpha}^{\tilde\beta} g(\dot z,\Ddt V_y)\,\mathrm dt\le-\mu_0\Vert V_y\Vert_{\tilde\alpha,\tilde\beta}.$ \end{enumerate} \end{prop}
\begin{rem}\label{rem:sigma1} As observed in \cite{esistenza}, in order to define flows that move away from curves having topologically non-essential intervals (defined below), it will be necessary to fix a constant $\sigma_1 \in\left]0,\sigma_0\right[$ such that \[ \sigma_1 \leq \frac27\rho_0\theta_0, \] where $\rho_0, \, \theta_0$ are given by Proposition \ref{thm:prop4.20}. \end{rem}
Proposition \ref{thm:prop4.20} and Remark \ref{rem:sigma1} are crucial ingredients for the
definition of the class of the admissible homotopies, whose elements will avoid irregular variationally critical points of first type. The description of this class is based on the notion of topologically non-essential interval given below.
Let $\bar\delta$ be as in \eqref{eq:4.37}, $\bar\gamma$ as in Lemma \ref{thm:lem4.18} and $\sigma_1$ as in Remark \ref{rem:sigma1}. \begin{defin}\label{def:4.21} Let $y\in \mathfrak M$ be fixed. An interval $[\alpha,\beta]\subset[a,b]\in{\mathcal I}_{y}$ is called a \emph{topologically non-essential interval (for $y$)} if $y$ is $\bar\delta$-close to $\partial\Omega$ on $[\alpha,\beta]$, with $\mathfrak p^y_{\alpha,\beta}\ge-\sigma_1$ and $\mathfrak b^y_{\alpha,\beta}\ge(1+\tfrac32\bar\gamma)$. \end{defin}
\begin{rem}\label{rem:4.22} By Lemma \ref{thm:lem4.18}, the intervals $[\alpha,\beta]$ containing cusp intervals $[t_1,t_2]$, with $\Theta_x(t_1,t_2)\ge d_0$, of curves $x$ which are irregular variationally critical portions of first type are topologically non-essential intervals with $\mathfrak p_{\alpha,\beta}^x=0$ and $\mathfrak b_{\alpha,\beta}^x\ge 1+2\bar\gamma$. This fact will allow us to move away from the set of irregular variationally critical portions of first type without increasing the value of the energy functional. \end{rem}
\section{The admissible homotopies}\label{sec:homotopies}
In the present section we shall list the properties of the admissible homotopies used in our minimax argument. The notion of topological critical value used in this paper depends on the choice of the admissible homotopies.
We shall consider continuous homotopies $h:[0,1]\times\mathcal D\to\mathfrak M$ where $\mathcal D$ is a closed subset of $\mathfrak C$. It should be observed, however, that
totally analogous definitions apply also when $\mathcal D$ is any closed subset of $\mathfrak M$,
not necessarily contained in $\mathfrak C$.
Recall that $\mathfrak C$ is described in \eqref{eq:2.6bis}. First of all, we require that:
\begin{equation}\label{eq:numero1} h(0,\cdot) \text{ is the inclusion of } \mathcal D \text{ in }\mathfrak M. \end{equation}
The homotopies that we shall use are of three types: outgoing homotopies, reparameterizations and ingoing homotopies. They can be described in the following way.
\begin{defin}\label{thm:tipoA} Let $0 \leq \tau' < \tau'' \leq 1$. We say that $h$ is of type $A$ in $[\tau',\tau'']$ if it satisfies the following property: \begin{enumerate} \item\label{unico} for all $\tau_0 \in [\tau',\tau'']$, for all $s_0 \in [0,1]$, for all $x \in \mathcal D$, if $\phi\big(h(\tau_0,x)(s_0)\big) = 0$, then $\tau \mapsto \phi\big(h(\tau,x)(s_0)\big)$ is strictly increasing in a neighborhood of $\tau_0$. \end{enumerate} \end{defin}
\begin{rem}\label{rem:monotonia-intervalli} It is relevant to observe that, by property \eqref{unico} of Definition \ref{thm:tipoA}, if $[a_{\tau},b_{\tau}]$ denotes any interval in $\mathcal I_{h(\tau,\gamma)}$ we have: \begin{equation*}\label{eq:4.35f}
\tau' \le\tau_1<\tau_2\le\tau'' \text{ and } [a_{\tau_1},b_{\tau_1}]\cap [a_{\tau_2},b_{\tau_2}]\ne\emptyset \Longrightarrow [a_{\tau_2},b_{\tau_2}] \subset [a_{\tau_1},b_{\tau_1}].
\end{equation*} \end{rem}
In the next Definition we describe the admissible homotopies consisting of suitable reparameterizations $\Lambda(\tau,\gamma)$. The deformation parameter $\tau$ moves in a fixed interval $[\tau',\tau'']$.
\begin{defin}\label{thm:tipoB} Let $0 \leq \tau' < \tau'' \leq 1$. We say that $h$ is of type $B$ in $[\tau',\tau'']$ if it satisfies the following property: there exists a continuous map $\Lambda : [\tau',\tau''] \times \mathcal D \rightarrow H^1\big([0,1],[0,1]\big)$ such that \begin{itemize} \item $\Lambda(\tau,\gamma)(0)=0,\,\Lambda(\tau,\gamma)(1)=1, \, \forall \tau \in [\tau',\tau''],\,\forall \gamma \in \mathcal D$;
\item $s\mapsto\Lambda(\tau,\gamma)(s) \text{ is strictly increasing in }[0,1], \; \forall \tau \in [\tau',\tau''],\forall \gamma \in \mathcal D$;
\item $\Lambda(\tau',\gamma)(s) = s \text{ for any }\gamma \in \mathcal D, s \in [0,1]$;
\item $h(\tau,\gamma)(s) = (\gamma \circ \Lambda(\tau,\gamma))(s) \; \forall \tau \in [\tau',\tau''], \forall s \in [0,1], \forall \gamma \in \mathcal D$. \end{itemize}
\end{defin}
\begin{defin}\label{thm:tipoC} Let $0 \leq \tau' < \tau'' \leq 1$. We say that $h$ is of type $C$ in $[\tau',\tau'']$ if it satisfies the following properties: \begin{enumerate} \item\label{eq:CI} $h(\tau',\gamma)(s) \not \in \Omega \Rightarrow h(\tau,\gamma)(s) = h(\tau',\gamma)(s)$ for any $\tau \in [\tau',\tau'']$; \item\label{eq:CIbis} $h(\tau',\gamma)(s) \in \Omega \Rightarrow h(\tau,\gamma)(s) \in \Omega$ for any $\tau \in [\tau',\tau'']$; \end{enumerate} \end{defin}
The interval $[0,1]$ where $\tau$ varies will be partitioned in the following way: \begin{multline}\label{eq:partizione} \text{There exists a partition of the interval } [0,1],\; 0=\tau_{0} < \tau_1 < \ldots <\tau_k =1 \text{ such that}\\ \text{ on any interval } [\tau_i, \tau_{i+1}], i=0,\ldots,k-1, \text{ the homotopy $h$ is either of type A, or B, or C.} \end{multline}
Homotopies of type A will be used away from variationally critical portions, homotopies of type B near variationally critical portions of second type, while homotopies of type C will be used near variationally critical portions of first type.
Now, in order to move away from topologically non-essential intervals (cf.\ Definition \ref{def:4.21}), we need the following further property:
\begin{multline}\label{eq:numero8} \text{if }[a,b]\in {\mathcal I}_{h(\tau,\gamma)} \text{ then for all } [\alpha,\beta]\subset[a,b] \text{ topologically non-essential it holds }\\ \phi(h(\tau,\gamma)(s))\le-\frac{\sigma_1}2 \text{ for all }s \in[\alpha,\beta], \end{multline} where ${\sigma_1}$ is defined in Remark \ref{rem:sigma1}.
We finally define the following class of admissible homotopies: \begin{multline}\label{eq:class0} \mathcal H =\big \{(\mathcal D, h): \mathcal D \text{ is a closed subset of $\mathfrak C$ and } \\ h:[0,1] \times \mathcal D \rightarrow \mathfrak M \text{ satisfies \eqref{eq:numero1}, \eqref{eq:partizione} and \eqref{eq:numero8}}\big\}. \end{multline}
\begin{rem}\label{rem:6.6} Obviously, it is crucial to have $\mathcal H \not= \emptyset$. Thanks to Lemma \ref{thm:corde}, we see that any $G(A,B)$ does not have topologically non-essential intervals and, denoting by $I_{\mathfrak C}$ the constant identity homotopy, we have $(\mathfrak C,I_{\mathfrak C}) \in \mathcal H$. \end{rem}
In order to introduce the functional for our minimax argument, we set for any $(\mathcal D,h) \in \mathcal H$, \begin{equation}\label{eq:funzionepreparatoria} {\mathcal F}(\mathcal D,h) = \sup\Big\{\tfrac{b-a}2\int_{a}^{b}g(\dot y,\dot y)\,\mathrm ds: y=h(1,x), x\in{\mathcal D}, [a,b] \in {\mathcal I}_y\Big\}. \end{equation}
\begin{rem}\label{rem:7.0} It is interesting to observe that the integral $\tfrac{(b-a)}2\int_a^bg(\dot y,\dot y)\,\text dt$ coincides with $\tfrac12\int_0^1g(\dot y_{a,b},\dot y_{a,b})\, \mathrm dt$, where $y_{a,b}$ is the affine reparameterization of $y$ on the interval $[0,1]$. \end{rem}
\begin{rem} Note also that, by the definition of $\mathcal H$, we have \begin{equation}\label{eq:7.0bis} \mathcal F(\mathcal D,h) < \frac{M_0}{2},\qquad\forall (\mathcal D,h)\in\mathcal H. \end{equation} \end{rem}
Given continuous maps $h_1:[0,1]\times F_1\to\mathfrak M$ and $h_2:[0,1]\times F_2\to\mathfrak M$ such that $h_1(1,F_1)\subset F_2$, we define the \emph{concatenation} of $h_1$ and $h_2$ as the continuous map $h_2\star h_1:[0,1]\times F_1\to\mathfrak M$ given by \begin{equation}\label{eq:defconcatenationhomotop} h_2\star h_1(t,x)= \begin{cases} h_1(2t,x),&\text{if\ } t\in[0,\tfrac12],\\ h_2(2t-1,h_1(1,x)),&\text{if\ } t\in[\tfrac12,1]. \end{cases} \end{equation} \section{Deformation Lemmas}\label{sec:second}
The first deformation result that we use is the analogue of the first (classical) Deformation Lemma. By the same proof as in \cite{esistenza} (without any use of symmetry properties of the flows) we obtain:
\begin{prop}[First Deformation Lemma] \label{thm:firstdeflemma} Let $c \in\left ]0,M_0\right[$ be a geometrically regular value (cf. Definition \ref{thm:defgeomcrit}). Then, $c$ is a topologically regular value of $\mathcal F$, namely there exists $\varepsilon=\varepsilon(c)>0$ with the following property: for all $(\mathcal D,h)\in {\mathcal H}$ with \[ \mathcal F(\mathcal D,h)\leq c+\varepsilon \] there exists a continuous map $\eta\in C^0\big([0,1]\times h(1,\mathcal D),\mathfrak M\big)$ such that $(\mathcal D,\eta\star h)\in{\mathcal H}$ and \[ \mathcal F(\mathcal D,\eta\star h)\leq c-\varepsilon. \] \end{prop}
\begin{rem} Let us briefly recall the main idea behind the proof of Proposition~\ref{thm:firstdeflemma}, which is discussed in detail in reference~\cite{esistenza}. As in the classical Deformation Lemma, if $c$ is a regular value, then one shows there exist $\varepsilon > 0$ and a flow carrying the sublevel $c+\varepsilon$ inside the sublevel $c-\varepsilon$. The technical issue here is the fact that we need flows under which pieces of curves which are outside $\overline \Omega$ remain outside of $\overline \Omega$. This is obtained as follows. Using suitable pseudo-gradient vector fields, we first move away from curves having topologically non-essential intervals. Near irregular variationally critical portions of second type, the desired flow is obtained by using reparameterizations, as described in Definition \ref{thm:tipoB}. Finally, we use flows described in Definition \ref{thm:tipoA} in order to move outside $\overline \Omega$ when we are far from variationally critical portions of any type. A suitable partition of unity argument, needed to combine these different flows, allows us to define the required homotopy that carries the sublevel $c+\varepsilon$ into the sublevel $c-\varepsilon$ if there are no OGC's having energy $c$. \end{rem}
We shall find positive geometrically critical levels using the following lemma, which is a simple consequence of Lemma \ref{thm:lemmacazzatina}. \begin{lem}\label{lem:topological} Suppose that \[ \mathcal F(\mathcal D,h) < \frac12\left(\frac{3\delta_0}{4K_0}\right)^2. \] Then there exists a homotopy $\eta$ such that $(\eta \star h)(1,\gamma)(s)\in \partial \Omega$ for all $\gamma \in \mathcal D$ and all $s \in [0,1]$. \end{lem}
In order to obtain an analogue of the classical Second Deformation Lemma, we first need to describe neighborhoods of critical curves that must be removed in order to make the functional $\mathcal F$ decrease. We shall assume that the number of OGC's is finite; obviously such an assumption is not restrictive.
For every $[a,b] \subset [0,1]$ and $\omega$ OGC parameterized in the interval $[0,1]$, we denote by $\omega_{a,b}$ the OGC $\omega$ affinely reparameterized on the interval $[a,b]$. We shall consider only intervals $[a,b]$ such that \begin{equation}\label{eq:limitazione-ab} \int_a^bg(\dot \omega_{a,b},\dot \omega_{a,b})\mathrm ds \leq M_0. \end{equation}
Since we are assuming that the number of OGC's is finite we can choose a positive $r_*$ sufficiently small so that \begin{multline}\label{eq:rstar1} \Vert \omega^{1}_{a,b} - \omega^{2}_{a,b}\Vert_{a,b} > 2r_{*}, \text{ for any } [a,b] \subset [0,1] \text{ satisfying \eqref{eq:limitazione-ab}}, \\ \text{ for any $\omega^{1},\omega^{2}$ OGC's parameterized in $[0,1]$} \text{ and such that $\omega^{1} \neq \omega^{2}$.} \end{multline}
Note that \eqref{eq:rstar1} holds even if $\omega^{2}(s) = \omega^{1}(1-s)$, because $\omega^{1}$ is not constant. Moreover, since for any OGC $\omega$ we have $\omega(0)\ne\omega(1)$ (by uniqueness in the geodesic Cauchy problem), if $r_*$ is sufficiently small we have: \begin{multline}\label{eq:rstar2} \text{for any OGC $\omega$ parameterized in $[0,1]$,} \\ \{y \in \partial \Omega: \mathrm{dist}_{E}(y,\omega(0))\leq r_* \} \cap \{y \in \partial \Omega: \mathrm{dist}_{E}(y,\omega(1))\leq r_* \} = \emptyset. \end{multline} (Recall that $\mathrm{dist}_{E}$ denotes the Euclidean distance in $\mathbb{R}^m$.)
Also note that $r_*$ can be chosen so small that
\begin{multline}\label{eq:rstar3} \text{for any OGC $\omega$, the sets} \\ \text{ $\{\pi (y)\,:\,\mathrm{dist}_{E}(y,\omega(0))<2r_*\}$ and $\{\pi(y)\,:\,\mathrm{dist}_{E}(y,\omega(1))<2r_*\}$} \\ \text{are contractible in $\partial\Omega$}, \end{multline} where $\pi:\phi^{-1}\big([-\delta_0,0]\big)\longrightarrow\phi^{-1}(0)$ is the retraction onto $\partial\Omega$ obtained by the gradient flow for $\phi$.
For any $(\mathcal D,h) \in \mathcal H$, and $\omega$ an orthogonal geodesic chord parameterized in $[0,1]$, we set, for any $r \in ]0,r_*]$, \begin{multline}\label{eq:9.1} \mathcal U(\mathcal D,h,\omega,r)=\big\{x = h(1,y): y \in \mathcal D \text{ and there exists }[a,b] \in \mathcal I_x \text{ such that }\\ \Vert x\vert_{[a,b]} - \omega_{a,b} \Vert_{a,b} \leq r \big\}. \end{multline}
If $[a,b]$ satisfies the above property with $r=r_*$, we say that $x\vert_{[a,b]}$ is $r_*$--close to $\omega_{a,b}$. Note that $\mathcal U(\mathcal D,h,\omega,r)$ is closed in $\mathfrak M$ and we have \begin{multline}\label{eq:9.2} {\mathcal U(\mathcal D,h,\omega_1,r_*)}\cap {\mathcal U(\mathcal D,h,\omega_2,r_*)}=\emptyset,\quad \forall\,(\mathcal D,h)\in\mathcal H,\\ \forall\,\omega_1,\omega_2\text{\ OGC's parameterized in $[0,1]$} \text{ and such that } \omega_1 \neq \omega_2. \end{multline} Now, if $c > 0$ is a geometrically critical value, we set \[ E_c = \{\omega \text{ OGC}: \int_0^1 g(\dot \omega,\dot \omega)\,\mathrm ds = c \} \] and, for any $r \in ]0,r_*]$, \[ \mathcal U_{r}(\mathcal D,h,c)=\bigcup_{\omega \in E_c} \mathcal U(\mathcal D, h,\omega,r). \]
\begin{rem}\label{thm:chiusura} Fix $\varepsilon > 0$ so that $c-\varepsilon > 0$ and consider \begin{multline}\label{eq:Ac} {\mathcal A}_{c,\varepsilon} = \{y \in \mathcal D: x = h(1,y) \in \mathcal U_{r_*}(\mathcal D,h,c), \text{ and there exists $[a,b] \in \mathcal I_x$ such that }\\ x\vert_{[a,b]} \text{ is $r_*$--close to }\omega_{a,b} \text{ and }\frac{b-a}2 \int_a^bg(\dot x,\dot x)\,\mathrm ds \in [c-\varepsilon,c+\varepsilon]\}. \end{multline} \end{rem}
Again, by the same proof as in \cite{esistenza}, we obtain the following \begin{prop}[Second Deformation Lemma]\label{thm:prop9.3} Let $c \geq \frac12\big(\frac{3\delta_0}{4K_0}\big)^2$ be a geometrically critical value. Then, there exists $\varepsilon_*=\varepsilon_*(c) > 0$ such that, for all $(\mathcal D,h)\in\mathcal H$ with \[ \mathcal F(\mathcal D,h)\leq c+\varepsilon_* \] there exists a continuous map $\eta:[0,1]\times h(1,\mathcal D)\to\mathfrak M$ such that $(\mathcal D,\eta\star h) \in\mathcal H$ and \[ \mathcal F\big(\mathcal D\setminus {\mathcal A}_{c,\varepsilon_*},\,\eta\star h \big) \leq c-\varepsilon_*. \] \end{prop}
Then, to conclude the proof of Theorem \ref{thm:main} by minimax arguments, we just need the following topological result.
\begin{prop}\label{thm:prop9.4} Suppose that there is only one orthogonal geodesic chord and let $\varepsilon_*$ be given by Proposition \ref{thm:prop9.3}. Then, there exists $\varepsilon \in ]0,\varepsilon_*]$ such that the set ${\mathcal A}_{c,\varepsilon}$ given in \eqref{eq:Ac} satisfies the following property: there exist an open subset $\widehat {{\mathcal A}_{c,\varepsilon}}\subset\mathfrak C$ containing ${{\mathcal A}_{c,\varepsilon}}$ and a continuous map $h_{c,\varepsilon}:[0,1]\times \widehat {{\mathcal A}_{c,\varepsilon}} \to\mathfrak C$ such that \begin{enumerate}
\item\label{itm:prop9.4-1} $h_{c,\varepsilon}(0,y)=y$, for all $y\in \widehat {{\mathcal A}_{c,\varepsilon}}$;
\item\label{itm:prop9.4-4} $h_{c,\varepsilon}(1,\widehat {{\mathcal A}_{c,\varepsilon}})=\{y_0\}$ for some $y_0\in\mathfrak C$. \end{enumerate} \end{prop}
\begin{proof}[Proof of Proposition \ref{thm:prop9.4}]
By the Second Deformation Lemma, we deduce the existence of $\varepsilon$ such that $\mathcal A_{c,\varepsilon}$ consists of the disjoint union of a finite number of closed sets $C_i$ consisting of curves $x$ with the same number of intervals $[a,b]\in \mathcal I_x$ such that $x\vert_{[a,b]}$ is $r_*$--close to $\omega_{a,b}$.
On any $C_i$, arguing as in \cite{arma}, thanks to the transversality properties of OGC's, we can construct continuous maps $\alpha(x)$ and $\beta(x)$ having the following properties: \begin{itemize} \item $\alpha(x) < \beta(x)$,
\item $\mathrm{dist}_{E}(x(\alpha(x)),\omega(0))<2r_*$ or $\mathrm{dist}_{E}(x(\alpha(x)),\omega(1))<2r_*$, \item $\mathrm{dist}_{E}(x(\beta(x)),\omega(0))<2r_*$ or $\mathrm{dist}_{E}(x(\beta(x)),\omega(1))<2r_*$, \item if $[a,b] \in \mathcal I_x$ is such that $b \leq \alpha(x)$ or $a \geq \beta(x)$ then $x\vert_{[a,b]}$ is not close to $\omega_{a,b}$. \end{itemize} Then, as in the First Deformation Lemma, since $\omega$ is the unique OGC, we see that we can continuously retract any $x\vert_{[0,\alpha(x)]}$ and $x\vert_{[\beta(x),1]}$ onto $\partial \Omega$. Then, moving $x(0)$ along $x$ until we reach $x(\alpha(x))$ and $x(1)$ along $x$ until we reach $x(\beta(x))$, we obtain the desired homotopy on ${\mathcal A}_{c,\varepsilon}$. Finally, since $\mathfrak C$ is an ANR (\emph{absolute neighborhood retract}, cf.\ \cite{Palais}), we can immediately extend the obtained homotopy to a suitable open set $\widehat{{\mathcal A}_{c,\varepsilon}}$ containing ${{\mathcal A}_{c,\varepsilon}}$ and satisfying the required properties. \end{proof}
\section{Proof of the main Theorem \ref{thm:main}}
The topological invariant that will be employed in the minimax argument is the relative category $\mathrm{cat}$ defined in Section~\ref{main}; recall from Lemma~\ref{thm:estimatecat} that: \begin{equation}\label{eq:10.1} \mathrm{cat}_{\mathfrak C,{\mathfrak C_0}}({\mathfrak C})\ge 2. \end{equation}
Denote by $\mathfrak D$ the class of closed $\mathcal R$--invariant subsets of $\mathfrak C$. Define, for any $i=1,2$, \begin{equation}\label{eq:10.2} \Gamma_i=\big\{\mathcal D\in\mathfrak D\,:\,\mathrm{cat}_{{\mathfrak C},{\mathfrak C}_0}(\mathcal D)\ge i\big\}. \end{equation} Set \begin{equation}\label{eq:10.3} c_i=\inf_{\stackrel{\mathcal D\in\Gamma_i,}{(\mathcal D,h)\in\mathcal H}}\mathcal F(\mathcal D,h). \end{equation}
\begin{rem}\label{rem:10.2} If $\mathrm I_{\mathfrak C}:[0,1]\times\mathfrak C\to\mathfrak M$ denotes the map $\mathrm I_{\mathfrak C}(\tau,x)=x$ for all $\tau$ and all $x$, then the pair $(\mathfrak C,\mathrm I_{\mathfrak C})\in \mathcal H$. Since ${\mathfrak C}\in\Gamma_i$ for any $i$ (see \eqref{eq:10.1}), we get: \[ c_i\le\mathcal F(\mathfrak C,\mathrm I_{\mathfrak C}) < M_0. \] Moreover $\mathcal F\ge 0$, therefore $0\le c_i< M_0$ for any $i$ (recall also the definition of $\mathcal F$ and $M_0$). \end{rem}
We have the following lemmas involving the real numbers $c_i$.
\begin{lem}\label{thm:lem10.3} The following statements hold: \begin{enumerate} \item\label{itm:lem10.3-1} $c_1\ge \frac{1}{2}\left(\tfrac{3\delta_0}{4K_0}\right)^2$; \item\label{itm:lem10.3-2} $c_1\le c_2$. \end{enumerate} \end{lem}
\begin{lem}\label{thm:lem10.4} For all $i=1,2$, $c_i$ is a geometrically critical value. \end{lem}
\begin{lem}\label{thm:lem10.5} Assume that there is only one OGC in $\overline\Omega$. Then, \begin{equation}\label{eq:10.4} c_1<c_2. \end{equation} \end{lem}
\begin{proof}[Proof of Lemma \ref{thm:lem10.3}] Let us prove \eqref{itm:lem10.3-1}. Assume by contradiction $c_1<\frac12 \left(\tfrac{3\delta_0}{4K_0}\right)^2$, and take $\varepsilon>0$ such that $c_1+\varepsilon<\frac12 \left(\tfrac{3\delta_0}{4K_0}\right)^2$. By \eqref{eq:10.2}--\eqref{eq:10.3} there exist $\mathcal D_\varepsilon\in\Gamma_1$ and $(\mathcal D_\varepsilon,h_\varepsilon)\in\mathcal H$ such that \[ \mathcal F(\mathcal D_\varepsilon,h_\varepsilon)\le c_1+\varepsilon <\frac12\left(\tfrac{3\delta_0}{4K_0}\right)^2. \] Let $h_0$ be the homotopy sending any curve $x$ to the constant curve $x(\frac12)$, and take $\eta_\varepsilon$ given by Lemma \ref{lem:topological} with $h$ replaced by $h_\varepsilon$.
Then: \[ (h_0\star \eta_\varepsilon \star h_\varepsilon)(1,\mathcal D_\varepsilon) \text{ consists of constant curves in } \partial \Omega, \] (and $h_0\star \eta_\varepsilon \star h_\varepsilon$ does not move the constant curves in $\mathcal D_{\varepsilon}$). Then there exists a homotopy $K_\varepsilon:[0,1]\times\mathcal D_\varepsilon\to{\mathfrak C}$ such that $K_\varepsilon(0,\cdot)$ is the identity, $K_\varepsilon(1,\mathcal D_\varepsilon)\subset{\mathfrak C}_0$ and \[ K_\varepsilon(\tau,\mathcal D_\varepsilon\cap\widetilde{\mathfrak C}_0)\subset{\mathfrak C}_0,\,\forall\tau\in[0,1]. \] Then $\mathrm{cat}_{{\mathfrak C},{\mathfrak C}_0}(\mathcal D_\varepsilon)=0$, in contradiction with the definition of $\Gamma_1$.
To prove \eqref{itm:lem10.3-2}, observe that, by \eqref{eq:10.3}, for any $\varepsilon>0$ there exist $\mathcal D\in\Gamma_2$ and $(\mathcal D,h)\in\mathcal H$ such that \[ \mathcal F(\mathcal D,h)\le c_2+\varepsilon. \] Since $\Gamma_2\subset\Gamma_1$, by the definition of $c_1$ we deduce $c_1\le c_2+\varepsilon$, and \eqref{itm:lem10.3-2} is proved, since $\varepsilon$ is arbitrary. \end{proof}
\begin{proof}[Proof of Lemma \ref{thm:lem10.4}] Assume by contradiction that $c_i$ is not a geometrically critical value for some $i$. Take $\varepsilon=\varepsilon(c_i)$ as in Proposition~\ref{thm:firstdeflemma}, and $(\mathcal D_\varepsilon,h)\in\mathcal H$ such that \[ \mathcal F(\mathcal D_\varepsilon,h)\le c_i+\varepsilon. \] Now let $\eta$ be as in Proposition~\ref{thm:firstdeflemma} and take $h_\varepsilon=\eta\star h$. By the same Proposition, \[ \mathcal F(\mathcal D_\varepsilon,h_\varepsilon)\le c_i-\varepsilon, \] in contradiction with \eqref{eq:10.3}, because $(\mathcal D_\varepsilon,h_\varepsilon)\in\mathcal H$. \end{proof}
\begin{proof}[Proof of Lemma \ref{thm:lem10.5}] Assume by contradiction that \eqref{eq:10.4} does not hold. Then \[ c\equiv c_1=c_2. \] Take $\varepsilon_*=\varepsilon_*(c)$ as in Proposition \ref{thm:prop9.3}, $\mathcal D_2\in\Gamma_2$ and $(\mathcal D_2,h)\in\mathcal H$, such that \[ \mathcal F(\mathcal D_2,h)\le c+\varepsilon_*. \]
Let $\mathcal A = \widehat{{\mathcal A}_{c,\varepsilon}}$ be the open set given by Proposition \ref{thm:prop9.4}. Then, by the definition of $\Gamma_1$ and simple properties of the relative category, \[ \mathcal D_1\equiv\mathcal D_2\setminus {\mathcal A}\in\Gamma_1. \] Now let $\eta$ be as in Proposition \ref{thm:prop9.3}. We have \[ \mathcal F(\mathcal D_2\setminus {\mathcal A},\eta\star h)\leq c-\varepsilon_*, \] in contradiction with the definition of $c_1$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}] It follows immediately from Lemmas \ref{thm:lem10.3}--\ref{thm:lem10.5} and Proposition \ref{thm:distinct}. \end{proof}
\appendix \section{An estimate on the relative category} \label{sec:appendixrelLScat} Let $n\geq 1$ be an integer; $\mathbb S^n$ is the $n$-dimensional sphere, and $\Delta^n\subset\mathbb S^n\times\mathbb S^n$ is the diagonal. We want to estimate the relative Ljusternik--Schnirelman category of the pair $(\mathbb S^n\times\mathbb S^n,\Delta^n)$, and to this aim we will prove an estimate on the relative cuplength of the pair.
For a topological space $X$ and an integer $k\ge0$, we will denote by $H^k(X)$ and $\widetilde H^k(X)$ respectively the $k$-th singular cohomology and the $k$-th reduced singular cohomology group of $X$. For a topological pair $(X,Y)$, $H^k(X,Y)$ is the $k$-th relative singular cohomology group of the pair; in particular, $H^k(X,\emptyset)=H^k(X)$. Given $\alpha\in H^p(X,Y)$ and $\beta\in H^q(X,Z)$, $\alpha\cup\beta\in H^{p+q}(X,Y\cup Z)$ will denote the cup product of $\alpha$ and $\beta$; recall that $\alpha\cup\beta=(-1)^{pq}\beta\cup\alpha$.
The notion of relative cuplength, recalled here, will also be used.
\begin{defin}\label{def:A.4} The number $\mathrm{cuplength}(X,Y)$ is the largest positive integer $k$ for which there exist $\alpha_0\in H^{q_0}(X,Y)$ ($q_0\ge 0$) and $\alpha_i\in H^{q_i}(X)$, $i=1,\ldots,k$, such that \[ q_i\ge 1,\quad\forall\; i=1,\ldots,k, \] and \[ \alpha_0\cup\alpha_1\cup\ldots\cup\alpha_k\ne 0 \text{\ in\ } H^{q_0+q_1+\ldots+q_k}(X,Y), \] where $\cup$ denotes the cup product. \end{defin}
As for the absolute Lusternik--Schnirelmann category, we have the following estimate of the relative category by means of the relative cuplength, cf.\ e.g.\ \cite{FH,FW}. \begin{prop}\label{thm:propA2} $\mathrm{cat}_{\mathbb S^n \times \mathbb S^n,\Delta^n}(\mathbb S^n \times \mathbb S^n) \geq \mathrm{cuplength}(\mathbb S^n\times\mathbb S^n,\Delta^n) + 1$.\qed \end{prop}
Therefore, to prove that $\mathrm{cat}_{\mathbb S^n \times \mathbb S^n,\Delta^n}(\mathbb S^n \times \mathbb S^n) \geq 2$ it will be sufficient to prove the following
\begin{prop}\label{thm:propA3} For all $n\ge1$, $\mathrm{cuplength}(\mathbb S^n\times\mathbb S^n,\Delta^n)\ge1$. \end{prop} \begin{proof} The statement is equivalent to proving the existence of $p\ge0$, $q\ge1$, $\alpha\in H^p(\mathbb S^n\times\mathbb S^n,\Delta^n)$ and $\beta\in H^q(\mathbb S^n\times\mathbb S^n)$ such that $\alpha\cup\beta\ne0$. This will follow immediately from the Lemma below. \end{proof} \begin{lem} For $n\ge1$, the group $H^{2n}(\mathbb S^n\times\mathbb S^n, \Delta^n)$ is isomorphic to $\mathds Z$, and the map $H^n(\mathbb S^n\times\mathbb S^n,\Delta^n) \times H^n(\mathbb S^n\times\mathbb S^n)\ni(\alpha,\beta)\mapsto\alpha\cup\beta\in H^{2n}(\mathbb S^n\times\mathbb S^n, \Delta^n)$ is surjective. \end{lem} \begin{proof} It is well known that $H^k(\mathbb S^n)\cong\mathds Z$ for $k=0,n$, and $H^k(\mathbb S^n)=0$ if $k\ne0,n$. It follows $H^n(\mathds S^n\times\mathds S^n)\cong\bigoplus_{k=0}^nH^k(\mathbb S^n)\otimes H^{n-k}(\mathbb S^n)\cong\mathds Z\oplus\mathds Z$. If $\omega$ is a generator of $H^n(\mathbb S^n)$, then the two generators of $H^n(\mathbb S^n\times\mathbb S^n)\cong\mathds Z\oplus\mathds Z$ are $\pi_1^*(\omega)$ and $\pi_2^*(\omega)$, where $\pi_1,\pi_2:\mathbb S^n\times\mathbb S^n\to\mathbb S^n$ are the projections.
For the computation of $H^n(\mathbb S^n\times\mathbb S^n,\Delta^n)$, we use the long exact sequence of the pair $(\mathbb S^n\times\mathbb S^n,\Delta^n)$ in reduced cohomology: \[\cdots\longrightarrow\widetilde H^{n-1}(\Delta^n)\longrightarrow H^n(\mathbb S^n\times\mathbb S^n,\Delta^n)\longrightarrow\widetilde H^n(\mathbb S^n\times\mathbb S^n)\stackrel{\mathfrak i^*}\longrightarrow\widetilde H^n(\Delta^n)\longrightarrow\cdots\] Since $\Delta^n$ is homeomorphic to $\mathbb S^n$, then $\widetilde H^{n-1}(\Delta^n)=0$. Thus, the group $H^n(\mathbb S^n\times\mathbb S^n,\Delta^n)$ can be identified with the subgroup of $\widetilde H^n(\mathbb S^n\times\mathbb S^n)$ given by the kernel of the map $\mathfrak i^*:\widetilde H^n(\mathbb S^n\times\mathbb S^n)\to\widetilde H^n(\Delta^n)$. This map takes each of the two generators $\pi_i^*(\omega)$, $i=1,2$, to $\omega$ (here we identify $\Delta^n$ with $\mathbb S^n$), so that $H^n(\mathbb S^n\times\mathbb S^n,\Delta^n)$ is the subgroup of $\widetilde H^n(\mathbb S^n\times\mathbb S^n)$ generated by $\pi_1^*(\omega)-\pi_2^*(\omega)$, which is isomorphic to $\mathds Z$.
Finally, let us compute $H^{2n}(\mathbb S^n\times\mathbb S^n,\Delta^n)$ using again the long exact sequence of the pair $(\mathbb S^n\times\mathbb S^n,\Delta^n)$ in reduced cohomology: \[\cdots\longrightarrow\widetilde H^{2n-1}(\Delta^n)\longrightarrow H^{2n}(\mathbb S^n\times\mathbb S^n,\Delta^n)\longrightarrow\widetilde H^{2n}(\mathbb S^n\times\mathbb S^n)\stackrel{\mathfrak i^*}\longrightarrow\widetilde H^{2n}(\Delta^n)\longrightarrow\cdots\] Clearly, $\widetilde H^{2n}(\Delta^n)=0$, and if $n>1$, also $\widetilde H^{2n-1}(\Delta^n)=0$. When $n=1$, we have $\widetilde H^{2n-1}(\Delta^n)=\widetilde H^1(\Delta^1)\cong\mathds Z$; however, the connecting map $\widetilde H^1(\Delta^1)\to H^2(\mathbb S^1\times\mathbb S^1,\Delta^1)$ is identically zero, because the previous map of the exact sequence $\widetilde H^1(\mathbb S^1\times\mathbb S^1)\to\widetilde H^1(\Delta^1)$ is clearly surjective.\footnote{The map $\widetilde H^1(\mathbb S^1\times\mathbb S^1)\to\widetilde H^1(\Delta^1)$ is induced by the diagonal inclusion of $\mathbb S^1$ into $\mathbb S^1\times\mathbb S^1$. It takes both generators $\pi_1^*(\omega)$ and $\pi_2^*(\omega)$ of $H^1(\mathbb S^1\times\mathbb S^1)$ to the generator $\omega$ of $H^1(\mathbb S^1)\cong H^1(\Delta^1)$.} In both cases, $n=1$ or $n>1$, we obtain $H^{2n}(\mathbb S^n\times\mathbb S^n,\Delta^n)\cong\widetilde H^{2n}(\mathbb S^n\times\mathbb S^n)\cong\mathds Z$. A generator of $\widetilde H^{2n}(\mathbb S^n\times\mathbb S^n)$ is $\pi_1^*(\omega)\cup\pi_2^*(\omega)$.
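For the reader's convenience, the two values appearing in the conclusion below can be computed directly; this is a short sketch using only the graded commutativity of the cup product and the vanishing $\omega\cup\omega=0$, which holds because $\omega\cup\omega\in H^{2n}(\mathbb S^n)=0$:
\begin{align*}
\big(\pi_1^*(\omega)-\pi_2^*(\omega)\big)\cup\pi_1^*(\omega)&=\pi_1^*(\omega\cup\omega)-\pi_2^*(\omega)\cup\pi_1^*(\omega)=-(-1)^{n^2}\,\pi_1^*(\omega)\cup\pi_2^*(\omega)=(-1)^{n+1}\,\pi_1^*(\omega)\cup\pi_2^*(\omega),\\
\big(\pi_1^*(\omega)-\pi_2^*(\omega)\big)\cup\pi_2^*(\omega)&=\pi_1^*(\omega)\cup\pi_2^*(\omega)-\pi_2^*(\omega\cup\omega)=\pi_1^*(\omega)\cup\pi_2^*(\omega),
\end{align*}
where we used $(-1)^{n^2}=(-1)^n$.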
In conclusion, using the above identifications, the map $H^n(\mathbb S^n\times\mathbb S^n,\Delta^n) \times H^n(\mathbb S^n\times\mathbb S^n)\ni(\alpha,\beta)\mapsto\alpha\cup\beta\in H^{2n}(\mathbb S^n\times\mathbb S^n, \Delta^n)$ reads as the bilinear map $\mathds Z\times(\mathds Z\oplus\mathds Z)\to\mathds Z$ that takes $\big(1,(1,0)\big)$ to $(-1)^{n+1}$ and $\big(1,(0,1)\big)$ to $1$. This is clearly surjective. \end{proof} From Proposition~\ref{thm:propA2} and Proposition~\ref{thm:propA3} we get: \begin{cor}\label{thm:estrelcategory} For all $n\ge1$, $\mathrm{cat}_{\mathbb S^n \times \mathbb S^n,\Delta^n}(\mathbb S^n \times \mathbb S^n)\ge2$.\qed \end{cor}
\end{document}
\begin{document}
\title{An $O({\log n\over \log\log n})$ Upper Bound on the Price of Stability for Undirected Shapley Network Design Games} \author{ Jian Li}
\maketitle
\begin{center} University of Maryland, College Park \end{center}
\begin{abstract}
In this paper, we consider the Shapley network design game on undirected networks. In this game, we have an edge-weighted undirected network $G(V,E)$ and $n$ selfish players, where player $i$ wants to choose a path from source vertex $s_i$ to destination vertex $t_i$. The cost of each edge is equally split among the players who pass it. The price of stability is defined as the ratio of the cost of the best Nash equilibrium to that of the optimal solution. We present an $O(\log n/\log\log n)$ upper bound on the price of stability for the single sink case, i.e., $t_i=t$ for all $i$. \end{abstract}
\paragraph{Keywords:} Price of Stability, Shapley Network Design Game, network design game.
\section{Introduction} We consider the Shapley network design game, also called the network design game with fair cost allocation, introduced in \cite{FOCS04:pos}. In this game, we are given a network and $n$ selfish players, where player $i$ wants to go from source vertex $s_i$ to destination vertex $t_i$. The cost of each edge is shared in a fair manner among the players who pass it. We are interested in stable states of the network where no player has an incentive to deviate from its current strategy, which can be modeled by Nash equilibria. The price of stability, defined as the ratio of the cost of the best Nash equilibrium to that of an optimal solution, is used to measure the inefficiency of Nash equilibria. We imagine a network where the traffic is initially designed by a central network coordinator. However, the coordinator is unable to prevent the network users from selfishly deviating from the designated paths. Therefore, in this scenario, the best Nash equilibrium is an obvious solution to propose. In this sense, we can think of the price of stability as the degree of degradation of the solution quality required for the outcome to be stable.
The price of stability was first studied by Schulz and Stier-Moses \cite{SODA03:SM} and was given this name in Anshelevich et al. \cite{FOCS04:pos}, where the Shapley network design game was also first explored. They showed that a pure-strategy Nash equilibrium always exists and that the price of stability of this game is at most the $n$th harmonic number $H(n)$, and they also provided an example showing that this upper bound is the best possible in directed networks. For undirected networks, Anshelevich et al. \cite{FOCS04:pos} presented a tight bound of $4/3$ on the price of stability for the single-source, two-player case. However, whether there is a tighter bound for arbitrarily many players in undirected networks was left as an open question.
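For intuition, here is a sketch of the directed instance from \cite{FOCS04:pos} showing that $H(n)$ is tight (we only outline it; the precise construction is in that paper). There are $n$ players with a common sink $t$. Player $i$ has two options: a private edge $s_i\to t$ of cost $1/i$, or a zero-cost edge $s_i\to v$ to a common vertex $v$ followed by a shared edge $v\to t$ of cost $1+\epsilon$. The optimal solution routes all players through $v$, at cost $1+\epsilon$. However, if a nonempty set of $k$ players uses the shared edge and $i$ is the largest index among them, then $k\le i$ and this player pays $(1+\epsilon)/k\ge (1+\epsilon)/i>1/i$, so it prefers its private edge. Hence the unique Nash equilibrium is the profile in which every player uses its private edge, of total cost $H(n)$, and the price of stability tends to $H(n)$ as $\epsilon\to 0$.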
Fiat et al. \cite{icalp06:pos} improved the upper bound to $O(\log\log n)$ for a special case where each node of the network has a player and they are required to connect to a common destination. Chen and Roughgarden \cite{SPAA06:ho_tim} considered the weighted version of the game where each player has a weight and the cost of an edge is shared among the players who pass it in proportion to their weights. As opposed to the ordinary Nash equilibrium considered before, Albers \cite{SODA08:susan} investigated the situation where coordination among players is allowed and showed nearly matching upper and lower bounds on the price of stability with respect to the notion of {\em strong Nash equilibrium}.
\noindent\textbf{Our results:} We prove that for undirected graphs with a distinguished destination to which all players must connect, the price of stability of the Shapley network design game is $O({\log n\over \log\log n})$, where $n$ is the number of players.
\section{Preliminaries} \label{sec:prel}
We first introduce notation and formally state the problem. We are given an undirected network $G(V,E)$ and $n$ selfish players. Player $i$ has to choose a path from source vertex $s_i$ to destination vertex $t_i$. Let $\mathcal{P}_i$ denote the set of simple $s_i$--$t_i$ paths. The cost of an edge $e$, $c_e$, is shared equally by all players who pass $e$. An {\em outcome} of the game is specified by a set of $n$ paths, one chosen by each player. For an outcome $(P_1,P_2,\ldots,P_n)$ with $P_i\in \mathcal{P}_i$, the cost assigned to player $i$ is $c_i(P_1,P_2,\ldots,P_n)=\sum_{e\in P_i}{c_e\over f_e}$, where $f_e$ is the number of paths that include $e$. We define the cost of the outcome as $$ c(P_1,P_2,\ldots,P_n)=\sum_i c_i(P_1,P_2,\ldots,P_n)=\sum_{e\in \cup_i P_i}c_e. $$
Let $P_{-i}$ denote the vector of paths chosen by the players other than $i$. An outcome $(P_1,P_2,\ldots,P_n)$ is a \emph{Nash equilibrium} if for every player $i$, $c_i(P_i,P_{-i})=\min_{\tilde{P}_i\in \mathcal{P}_i}c_i(\tilde{P}_i,P_{-i})$.
The \emph{price of stability} is defined as the ratio of the cost of the best Nash equilibrium of the game to that of an optimal solution. We note that an optimal solution is a min-cost Steiner forest satisfying all connectivity requirements $(s_i,t_i)$.
We consider the following potential function, also used in \cite{FOCS04:pos}, that maps every outcome into a numeric value.
\begin{equation} \label{eq:nd} \Phi(P_1,\ldots,P_k)=\sum_{e\in E}\sum_{i=1}^{f_e}{c_e\over i}=\sum_{e\in E}c_e\cdot H(f_e) \end{equation} where $f_e$ denotes the number of paths $P_i$ that include edge $e$ and $H(n)=1+{1\over 2}+{1\over 3}+\ldots+{1\over n}$ is the $n$'th Harmonic number.
The most important property of the potential function is that if a single player $i$ changes its strategy then the difference between the potential of the new state and that of the original state is exactly the change in the cost of player $i$ \cite{FOCS04:pos}.
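For completeness, here is a short verification of this property (a sketch, with $f_e$ denoting the congestion under the outcome $(P_i,P_{-i})$, i.e., before the switch). If player $i$ switches from $P_i$ to $\tilde P_i$ while the other paths are unchanged, then $f_e$ increases by one on the edges $e\in\tilde P_i\setminus P_i$, decreases by one on the edges $e\in P_i\setminus\tilde P_i$, and is unchanged elsewhere, so by (\ref{eq:nd}) $$ \Phi(\tilde P_i,P_{-i})-\Phi(P_i,P_{-i})=\sum_{e\in\tilde P_i\setminus P_i}{c_e\over f_e+1}-\sum_{e\in P_i\setminus\tilde P_i}{c_e\over f_e} =c_i(\tilde P_i,P_{-i})-c_i(P_i,P_{-i}), $$ where the last equality holds because the terms of $c_i$ over the common edges $e\in P_i\cap\tilde P_i$ are the same in both outcomes.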
In a finite game, \emph{better-response dynamics} is the following process: if the current outcome is not a Nash equilibrium, there exists a player who can decrease its cost by switching its strategy; the player updates its strategy to an arbitrary superior one, and this is repeated until a Nash equilibrium is reached. While better-response dynamics need not terminate in general, it must terminate in finitely many steps in Shapley network design games, since the potential $\Phi$ strictly decreases during the process and no outcome appears twice in a finite game.
\section{An $O({\log n\over\log\log n})$ Upper Bound for the Single Sink Case} \label{sec:main}
We assume the network is connected and all players share the same destination $t$. It is easy to see that an optimal solution is a Steiner tree with terminals $\{s_i\}_{i=1,\ldots,n}\cup\{t\}$. Suppose the outcome $NASH=(P^N_1,\ldots,P^N_n)$ is a Nash equilibrium which is obtained by better-response dynamics from an optimal solution $OPT=(P^O_1,\ldots,P^O_n)$. We can assume without loss of generality that $NASH$ is also a tree. The property of the potential function ensures that $\Phi(NASH)\leq \Phi(OPT)$. We denote the paths of $NASH$ and those of $OPT$ by $\{P^N_i\}_{i=1,\ldots,n}$ and $\{P^O_i\}_{i=1,\ldots,n}$, respectively. We also denote the tree of $NASH$ by $T_N=\cup_i P^N_i$ and that of $OPT$ by $T_O=\cup_i P^O_i$. Let $|NASH|$ and $|OPT|$ be their costs, respectively.
Let $f_e^N$ denote the number of paths that include edge $e$ in $NASH$. Let $f^N(i)=\sum_{e: f_e^N=i}c_e$ and $g^{N}(j)=\sum_{e:f_e^N\geq j}c_e=\sum_{i\geq j}f^N(i)$. It is easy to see $|NASH|=\sum_i f^N(i)=g^N(1)$.
For ease of discussion, we create a dummy player $0$ residing at $s_0=t$. We can see that this player has no influence on either $NASH$ or $OPT$. First we consider the tree $T_O=\cup_i P^O_i$. Doubling all edges of $T_O$ yields a connected multigraph in which every vertex has even degree, and hence an Eulerian tour. Traversing this tour gives a sequence $S$ of vertices of $T_O$. Suppose $\phi$
is the permutation of $\{s_i\}_{i=0,\ldots,n}$ given by the order of their first appearance in $S$. It is easy to see that $\sum_{i=0}^n d(\phi(i),\phi(i+1\textrm{ mod }n+1))\leq 2|T_O|=2|OPT|$, where $d(u,v)$ is the length of the shortest path between $u$ and $v$.
For any two players $i$ and $j$, let $LCA(i,j)$ be the least common ancestor of $s_i$ and $s_j$ in the tree $T_N$ (taking $t$ as the root). We let $P^j_i$ be the subpath of $P^N_i$ starting from $s_i$ and ending at $LCA(i,j)$. From the definition of Nash equilibrium, we know the cost of player $i$ in $NASH$ is at most the cost of first reaching $s_j$ along a shortest path and then following the path $P^N_j$ to $t$. Since $P^N_i$ and $P^N_j$ coincide between $LCA(i,j)$ and $t$, the corresponding terms cancel and we obtain $$ \sum_{e\in P^j_i}{c_e\over f^N_e} \leq d(s_i,s_j)+ \sum_{e\in P^i_j}{c_e\over f^N_e+1}. $$ Similarly, we have $$ \sum_{e\in P^i_j}{c_e\over f^N_e} \leq d(s_i,s_j)+ \sum_{e\in P^j_i}{c_e\over f^N_e+1}. $$ Adding these together and using ${1\over f^N_e}-{1\over f^N_e+1}={1\over f^N_e(f^N_e+1)}$, we get $$ \sum_{e\in P^j_i}{c_e\over f^N_e(f^N_e+1)}+\sum_{e\in P^i_j}{c_e\over f^N_e(f^N_e+1)}\leq 2d(s_i,s_j). $$ We denote the left hand side of the last inequality by $A(i,j)$. We have
$\sum_{i=0}^n A(\phi(i),\phi(i+1\textrm{ mod }n+1))\leq 2\sum_{i=0}^n d(\phi(i),\phi(i+1\textrm{ mod }n+1)) \leq 4|OPT|$.
Now we prove \begin{equation} \label{eq:na} \sum_{i=0}^n A(\phi(i),\phi(i+1\textrm{ mod }n+1))\geq 2\sum_{e\in T_N} {c_e\over f^N_e(f^N_e+1)} =2\sum_{i}{1\over i(i+1)} f^N(i) \end{equation}
Actually, since every term in the sums is non-negative, it suffices to prove that every $e\in T_N$ lies in $P_{\phi(i)}^{\phi(i+1\textrm{ mod }n+1)}\cup P^{\phi(i)}_{\phi(i+1\textrm{ mod }n+1)}$ for at least two indices $i$ with $0\leq i \leq n$. It is easy to see that $P_i^j\cup P^j_i$ is the unique path from $s_i$ to $s_j$ in $T_N$.
For any $e\in T_N$, let $T^1_{N,e}$ and $T^2_{N,e}$ be the two trees obtained by deleting $e$ from $T_N$. It is easy to see that $T^i_{N,e}\cap \{s_0,\ldots,s_n\}\ne \emptyset$ for $i=1,2$: the component not containing the root contains a leaf of $T_N$, which must be the source of some player, while the component containing the root contains $s_0=t$. Since the cyclic sequence $\phi(0),\phi(1),\ldots,\phi(n),\phi(0)$ starts and ends at the same vertex, it switches between $T^1_{N,e}$ and $T^2_{N,e}$ an even number of times, and this number is positive because both components contain some $s_j$; hence there are at least two indices $i$ such that $\phi(i)$ and $\phi(i+1\textrm{ mod }n+1)$ lie in different components, and for each such $i$ the edge $e$ must lie on the unique path from $\phi(i)$ to $\phi(i+1\textrm{ mod }n+1)$ in $T_N$.
We let $n_{1/2}=\max\{i| g^N(i)\geq {1\over 2}\cdot |NASH|\}$. We can see the following. $$ \begin{array}{ll} \Phi(NASH)& =\sum_i f^N(i)H(i)\geq \sum_{i\geq n_{1/2}}f^N(i)H(i)\geq H(n_{1/2})\sum_{i\geq n_{1/2}}f^N(i) \\
& =H(n_{1/2})g^N(n_{1/2}) \geq {1\over 2}H(n_{1/2})|NASH|. \end{array} $$
Since $\Phi(NASH)\leq \Phi(OPT)\leq H(n)|OPT|$, we have \begin{equation} \label{eq:eq1}
|NASH|\leq {2H(n)\over H(n_{1/2})}\cdot|OPT|. \end{equation} From (\ref{eq:na}), we can get $$ \begin{array}{ll}
2|OPT| & \geq \sum_{i}{1\over i(i+1)} f^N(i) \geq \sum_{i\leq n_{1/2}}{1\over i(i+1)} f^N(i)\geq {1\over n_{1/2}(n_{1/2}+1)} \sum_{i\leq n_{1/2}}f^N(i) \\
& \geq {1\over 2n_{1/2}(n_{1/2}+1)} \sum_i f^N(i) ={1\over 2n_{1/2}(n_{1/2}+1)}|NASH|. \end{array} $$ Here the last inequality holds because, by the maximality of $n_{1/2}$, we have $g^N(n_{1/2}+1)< {1\over 2}|NASH|$ and hence $\sum_{i\leq n_{1/2}}f^N(i)=|NASH|-g^N(n_{1/2}+1)\geq {1\over 2}|NASH|$. Thus, we have \begin{equation} \label{eq:eq2}
|NASH|\leq 4n_{1/2}(n_{1/2}+1)\cdot |OPT| . \end{equation}
Combining inequalities (\ref{eq:eq1}) and (\ref{eq:eq2}), we have
$|NASH|\leq \min\{ {2H(n)\over H(n_{1/2})},4n_{1/2}(n_{1/2}+1)\}\cdot |OPT|$
for the value of $n_{1/2}$ determined above. Since the first bound is decreasing in $n_{1/2}$ while the second is increasing, for every possible value of $n_{1/2}$ the minimum of the two is $O({\log n\over \log\log n})$; the two bounds balance at $n_{1/2}=\Theta\big(\sqrt{{\log n\over \log\log n}}\big)$, and the short computation below makes this explicit. Therefore, we have proved ${|NASH|\over |OPT|}\leq O({\log n\over \log\log n})$.
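To make the final balancing step explicit, here is a short computation (a sketch; the threshold $m^*$ below is introduced only for this purpose, all logarithms are natural, and we use $H(m)\geq \ln m$ and $H(n)\leq \ln n+1$). Put $m^*=\sqrt{\ln n/\ln\ln n}$. If $n_{1/2}\leq m^*$, then (for $n$ large enough, so that $m^*\geq 1$) $$ 4n_{1/2}(n_{1/2}+1)\leq 4m^*(m^*+1)\leq 8(m^*)^2={8\ln n\over \ln\ln n}=O\Big({\log n\over \log\log n}\Big). $$ If instead $n_{1/2}> m^*$, then for $n$ large enough $H(n_{1/2})\geq \ln m^*=\tfrac12(\ln\ln n-\ln\ln\ln n)=\Omega(\log\log n)$, so $$ {2H(n)\over H(n_{1/2})}\leq {2(\ln n+1)\over \tfrac12(\ln\ln n-\ln\ln\ln n)}=O\Big({\log n\over \log\log n}\Big). $$ In both cases $\min\big\{{2H(n)\over H(n_{1/2})},\,4n_{1/2}(n_{1/2}+1)\big\}=O\big({\log n\over \log\log n}\big)$, as claimed.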
\end{document}
\begin{document}
\title{Elliptic fibrations on toric $K3$ hypersurfaces and mirror symmetry derived from Fano polytopes}
\begin{abstract} We determine the N\'eron-Severi lattices of $K3$ hypersurfaces with large Picard number in toric three-folds derived from Fano polytopes. Such $K3$ surfaces are important objects of research in mirror symmetry. On each $K3$ surface, we introduce a particular elliptic fibration which induces a Mordell-Weil group of rank $1$. In the proof of the main theorem, we show that the N\'eron-Severi lattice of each $K3$ surface is generated by a general fibre, sections and appropriately selected components of the singular fibres of our elliptic fibration. Our argument gives a down-to-earth proof of the Dolgachev conjecture for Fano polytopes, which is a conjecture on mirror symmetry for $K3$ surfaces. \end{abstract}
\footnote[0]{Keywords: $K3$ surfaces ; elliptic fibrations ; mirror symmetry ; Fano polytopes. } \footnote[0]{Mathematics Subject Classification 2020: Primary 14J28 ; Secondary 14J27, 14J33, 52B10.}
\setlength{\baselineskip}{14 pt}
\section*{Introduction} In this paper, we determine the N\'eron-Severi lattices and the transcendental lattices of $K3$ surfaces which are given as hypersurfaces in toric three-folds derived from three-dimensional Fano polytopes. These lattices are important from the viewpoint of mirror symmetry. Our argument gives a down-to-earth proof of a conjecture in mirror symmetry for $K3$ surfaces. The proof of our main theorem is based on precise studies of appropriate Jacobian elliptic fibrations on our $K3$ surfaces.
Mirror symmetry is a phenomenon originally discovered by physicists studying superstring theory. It suggests non-trivial relationships between geometric objects and provides many interesting problems in algebraic geometry. In particular, many mathematicians have intensively studied Calabi-Yau hypersurfaces in toric varieties coming from reflexive polytopes since the publication of \cite{B}. Calabi-Yau hypersurfaces in toric three-folds are $K3$ surfaces. There is a famous conjecture on mirror symmetry for such toric $K3$ hypersurfaces by Dolgachev \cite{D}. This conjecture is formulated in terms of the following lattices of $K3$ surfaces. The $2$-cohomology group $H^2(S,\mathbb{Z})$ of a $K3$ surface $S$ gives an even unimodular lattice $L_{K3}$ of signature $(3,19).$ The N\'eron-Severi lattice ${\rm NS}(S)$ is a sublattice of $H^2(S,\mathbb{Z})$ defined as $H^2(S,\mathbb{Z}) \cap H^{1,1}(S,\mathbb{R})$. We call $\rho={\rm rank}({\rm NS} (S))$ the Picard number of $S$. Then, ${\rm NS}(S)$ is of signature $(1,\rho-1)$. We will identify $H^2(S,\mathbb{Z})$ with the $2$-homology group $H_2(S,\mathbb{Z})$ by the Poincar\'e duality. Then, ${\rm NS}(S)$ is regarded as the sublattice of $H_2(S,\mathbb{Z})$ generated by divisors on $S$. The transcendental lattice ${\rm Tr}(S)$ is the orthogonal complement of ${\rm NS}(S)$ in $H^2(S,\mathbb{Z})$. Here, arithmetic techniques for even lattices are powerful tools to study the above lattices (for example, see \cite{Ni}). Thus, mirror symmetry for toric $K3$ hypersurfaces is an interesting research theme involving researchers from various fields.
Fano polytopes are special reflexive polytopes (see Definition \ref{dfFano}). There exists a smooth Fano $n$-fold attached to each $n$-dimensional Fano polytope. Since Fano polytopes have nice combinatorial properties, many researchers have studied them extensively (for example, see \cite{BToric}, \cite{BToric4}, \cite{WW}, \cite{Sa}, and \cite{Ob}). In particular, three-dimensional Fano polytopes are classified into $18$ types up to ${\rm GL}_3(\mathbb{Z})$-action. The number of vertices of such a polytope is one of $4,5,6,7$ or $8$. See Table 1. Here, each column of the matrix in the table gives the coordinates of a vertex of the corresponding polytope.
\begin{longtable}{lll} \caption{Fano polytopes $P_k$ and Lattices $L_k$}\\ \hline
$k$ & $P_k$ & $L_k$
\\
\hline
\endhead
$1$ & $ \left( \begin{array}{cccccc} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \\ \end{array}\right)$ & $(4)$
\\
$2$ & $ \left( \begin{array}{cccccc} 1 & 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 1 & 0 & -1 \\ \end{array}\right)$ &$\begin{pmatrix} 0 & 3 \\ 3 & 2 \end{pmatrix}$
\\
$3$ & $ \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 \\ 0 & 0 & 1 & -1 & -1 \\ \end{array}\right)$ & $\begin{pmatrix} -2 & 2 \\ 2 & -2 \end{pmatrix}$
\\
$4$ &$ \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 \\ 0 & 0 & 1 & -1 & -2 \\ \end{array}\right)$ & $\begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}$
\\
$5$ & $ \left( \begin{array}{cccccc} 1 & 0 & 0 & -1 & -1 \\ 0 & 1 & 0 & -1 & -1 \\ 0 & 0 & 1 & 0 & -1 \\ \end{array}\right)$ & $\begin{pmatrix} 0 & 3 \\ 3 & -2 \end{pmatrix}$
\\
$6$ &$ \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ \end{array}\right)$ & $\left(\begin{array}{ccc} 0 & 2 & 2 \\ 2 & 0 & 2 \\ 2 & 2 & 0 \\ \end{array}\right)$
\\
$7$ &$\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & -1 & -1 & -1 \\ \end{array}\right) $ & $\left(\begin{array}{ccc} -2 & 1 & 1 \\ 1 & 0 & 2 \\ 1 & 2 & 0 \\ \end{array}\right)$
\\
$8$ &$\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 & 0 \\ 0 & 0 & 1 & -1 & 1 & -1 \\ \end{array}\right)$ &$\left(\begin{array}{ccc} -2 & 1 & 3 \\ 1 & 0 & 2 \\ 3 & 2 & 0 \\ \end{array}\right)$
\\
$9$ & $\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 & -1 \\ 0 & 0 & 1 & -1 & 0 & 0 \\ \end{array}\right)$ & $\left(\begin{array}{ccc} 0 & 1 & 2 \\ 1 & -2 & 2 \\ 2 & 2 & 0 \\ \end{array}\right)$
\\
$10$ & $\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & -1 & -1 \\ 0 & 0 & 1 & -1 & -1 & -1 \\ \end{array}\right)$ & $\left(\begin{array}{ccc} -2 & 1 & 1 \\ 1 & -2 & 2 \\ 1 & 2 & 0 \\ \end{array}\right)$
\\
$11$ & $\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & -1 & -1 & -1 \\ \end{array}\right)$ & $\left(\begin{array}{ccc} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & 2 \\ \end{array}\right)$
\\
$12$ & $\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & -1 \\ 0 & 1 & 0 & 0 & 1 & -1 \\ 0 & 0 & 1 & -1 & -1 & 0 \\ \end{array}\right)$ & $\left(\begin{array}{ccc} -2 & 2 & 2 \\ 2 & -2 & 1 \\ 2 & 1 & 2 \\ \end{array}\right)$
\\
$13$ & $\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & -1 & -1\\ 0 & 1 & 0 & 0 & -1 & 0 & -1\\ 0 & 0 & 1 & -1 & 0 & 0 & 0\\ \end{array}\right)$ & $\left(\begin{array}{cccc} 0 & 1 & 1 & 1 \\ 1 & -2 & 0 & 2 \\ 1 & 0 & -2 & 2 \\ 1 & 2 & 2 & -2 \\ \end{array}\right)$
\\
$14$ & $\left( \begin{array}{ccccccc} 1 & 0 & 0 & -1 & 0 & -1 & -1\\ 0 & 1 & 0 & 0 & -1 & 0 & -1\\ 0 & 0 & 1 & -1 & 0 & 0 & 0\\ \end{array}\right)$ & $\left(\begin{array}{cccc} 0 & 1 & 1 & 1 \\ 1 & -2 & 0 & 2 \\ 1 & 0 & -2 & 1 \\ 1 & 2 & 1 & -2 \\ \end{array}\right)$
\\
$15$ & $\left( \begin{array}{ccccccc} 1 & 0 & 0 & 1 & 0 & -1 & -1\\ 0 & 1 & 0 & 0 & -1 & 0 & -1\\ 0 & 0 & 1 & -1 & 0 & 0 & 0\\ \end{array}\right)$ &$\left(\begin{array}{cccc} 0 & 1 & 1 & 1 \\ 1 & -2 & 0 & 2 \\ 1 & 0 & -2 & 3 \\ 1 & 2 & 3 & -2 \\ \end{array}\right)$
\\
$16$ & $\left( \begin{array}{ccccccc} 1 & 0 & 0 & -1 & 0 & -1 & -1\\ 0 & 1 & 0 & -1 & -1 & 0 & -1\\ 0 & 0 & 1 & -1 & 0 & 0 & 0\\ \end{array}\right)$ & $\left(\begin{array}{cccc} 0 & 1 & 1 & 1 \\ 1 & -2 & 0 & 1 \\ 1 & 0 & -2 & 1 \\ 1 & 1 & 1 & -2 \\ \end{array}\right)$
\\
$17$ & $\left( \begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & -1 & 1 & -1 \\ 0 & 1 & 0 & 0 & -1 & 0 & -1 & 1 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\ \end{array}\right)$ & $\left(\begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & -2 & 2 & 2 & 0 \\ 1 & 2 & -2 & 0 & 2 \\ 1 & 2 & 0 & -2 & 0 \\ 1 & 0 & 2 & 0 & -2 \\ \end{array}\right)$
\\ $18$ & $\left( \begin{array}{cccccccc} 1 & 0 & 0 & 1 & 0 & -1 & 1 & -1 \\ 0 & 1 & 0 & -1 & -1 & 0 & -1 & 1 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\ \end{array}\right)$ & $\left(\begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & -2 & 2 & 1 & 0 \\ 1 & 2 & -2 & 0 & 3 \\ 1 & 1 & 0 & -2 & 0 \\ 1 & 0 & 3 & 0 & -2 \\ \end{array}\right)$
\\ \hline \end{longtable}
From a three-dimensional Fano polytope $P$, we obtain another polytope $P^\circ$, which is called the polar dual. The duality between $P$ and $P^\circ$ gives rise to two different families of $K3$ surfaces. Namely, we have the family of $S_{P^\circ}$ with small Picard number and the family of $S_P$ with large Picard number. Dolgachev \cite{D} conjectures that $(S_P , S_{P^\circ})$ gives a mirror pair. This means that
there is a non-trivial relationship between the lattices of $S_P$ and those of $S_{P^\circ}$ (for detail, see Section 1). Here, the $K3$ surface $S_{P^\circ}$ is a hypersurface in a smooth toric Fano three-fold. One can apply algebrogeometric techniques to smooth toric varieties. Then, it is possible to determine ${\rm NS}(S_{P^\circ})$ systematically. Indeed, Koike \cite{K2} and Mase \cite{Mase} determine the N\'eron-Severi lattices for all $S_{P^\circ}$ (see Proposition \ref{PropLatticeDual}). On the other hand, to the best of the authors' knowledge, it is not easy to determine the N\'eron-Severi lattice of $S_P$ for every $P$, because its Picard number is large and its ambient toric variety has singularities.
The subjects of the present paper are the $K3$ surfaces $S_P$. Here, we note that Narumiya and Shiga \cite{NS} determine the lattice structure of $S_P$ if the number of vertices of $P$ is $4$. Also, if the number of vertices of $P$ is $5$, the second author \cite{Na} determines the lattice structure of $S_P$ (see also \cite{IIT}). In this paper, we will determine the lattice structure of $S_{P}$ for every three-dimensional Fano polytope $P$ with $6,7$ or $8$ vertices. Our main theorem is given as follows.
\begin{thm}\label{ThmMain} Let $P_k$ ($k\in \{1,\ldots,18\}$) be the Fano polytope in Table 1. Then, the transcendental lattice ${\rm Tr}(S_{P_k})$ is isometric to $U\oplus L_k$, where $U$ is the hyperbolic lattice of rank $2$ and $L_k$ is the lattice whose Gram matrix is given in Table 1. \end{thm}
Indeed, we will determine the structure of ${\rm NS} (S_{P_k})$ $(k\in \{6,\ldots,18\})$ in Theorem \ref{ThmNSEvident}. Theorem \ref{ThmMain} immediately follows from Theorem \ref{ThmNSEvident} (see Corollary \ref{CorTr}). Also, this main theorem establishes that the Dolgachev conjecture holds for Fano polytopes.
In our proof, it is important to study divisors on $S_P$ with large Picard number. For the cases of $S_P$, unlike the cases of $S_{P^\circ}$, algebrogeometric techniques for toric divisors are not so effective. Therefore, our mathematical argument is quite different from those of \cite{K2} or \cite{Mase}. We will explicitly take a divisor $F$ on each $S_P$ whose self-intersection number is $0$. Then, we obtain an elliptic fibration $\pi_P: S_P \rightarrow \mathbb{P}^1 (\mathbb{C})$ with a general fibre $F$. Each fibration $\pi_P$ has sections $\mathbb{P}^1(\mathbb{C}) \rightarrow S_P$. Such a fibration is called a Jacobian elliptic fibration. We will make full use of arithmetic properties of even lattices and Mordell-Weil groups in order to determine ${\rm NS}(S_P)$ in Sections 4 and 5.
As a matter of fact, our proof of the main theorem relies heavily on nice properties of the fibration $\pi_P$. Let us summarize our argument. For each $P$, we will introduce a sublattice $E_P$ of ${\rm NS}(S_P)$ generated by a general fibre, sections and appropriately selected irreducible components of the singular fibres of $\pi_P$ (see Tables 8 and 9). The lattice $E_P$ will be called the evident lattice attached to $\pi_P.$
The absolute value $|\det (E_P)|$ of the discriminant of our lattice $E_P$ is equal to $|\det({\rm NS}(S_{P^\circ}))|$, which is already calculated in \cite{K2} or \cite{Mase}. Moreover, our lattice $E_P$ defines a period mapping for the family of $S_P$ as in Section 3. By using these properties, we will prove that $E_P$ is equal to ${\rm NS}(S_P)$ in Theorem \ref{ThmNSEvident}.
We note that each $S_P$ has several Jacobian elliptic fibrations. However, not every Jacobian elliptic fibration is useful to determine the lattice structure in question. Our fibration $\pi_P$ is necessary for our proof of the main theorem, because it has particular properties as we saw in the above paragraph. It is highly non-trivial to find such a fibration. Actually, our $\pi_P$ for every $P$ was explicitly constructed by the first author with the greatest care and it first appeared in his master's thesis \cite{M1}.
The authors expect that the $K3$ surfaces $S_P$ with large Picard number have various applications. For example, the second author shows that the period mapping for the family of $K3$ surfaces $S_{P_4}$, which is derived from the polytope $P_4$ with $5$ vertices,
has good arithmetic properties (see \cite{NaC}). Hence, it would be natural to expect that the $S_P$ for other $P$ are also interesting objects of research. In particular, it is desirable to comprehend the structures of the Mordell-Weil groups attached to our elliptic fibrations in order to study the arithmeticity of our $K3$ surfaces. In Section 5, we will determine the structure of the Mordell-Weil group ${\rm MW}(\pi_P,O)$ attached to $\pi_P$ for each Fano polytope $P$. In particular, the rank of ${\rm MW}(\pi_P,O)$ is equal to $1$.
\section{Mirror pairs of toric $K3$ hypersurfaces from Fano polytopes}
\subsection{Preliminaries and notations}
We start this paper with surveying the construction of toric varieties from reflexive polytopes. For detailed arguments or proofs, see \cite{B}, \cite{Od} and \cite{CLS}.
Let $M$ be a free abelian group of rank $n$. We identify $M$ with $\mathbb{Z}^n.$ Let $N$ be the dual of $M$: $N={\rm Hom}(M,\mathbb{Z}).$ We set $M_\mathbb{R}=M\otimes_\mathbb{Z} \mathbb{R}$ and $N_\mathbb{R}=N\otimes_\mathbb{Z} \mathbb{R}.$ The non-degenerate pairing $N_\mathbb{R} \times M_\mathbb{R}\rightarrow \mathbb{R}$ is denoted by $\langle n,m\rangle$ for $n\in N_\mathbb{R}$ and $m\in M_\mathbb{R}$. In this paper, $m\in M_\mathbb{R}$ ($n\in N_\mathbb{R}$, resp.) will be usually regarded as an $n$-component column (row, resp.) vector and $\langle n,m\rangle$ is regarded as the canonical inner product of $\mathbb{R}^n$. Let $\mathbb{T}$ be a torus: $\mathbb{T} \simeq (\mathbb{C}^\times)^n$. For $m=(m_1,\ldots, m_n)\in M$, let $\chi^m$ be a character $\mathbb{T} \rightarrow \mathbb{C}^\times$ defined by $\chi^m (t_1,\ldots, t_n) = t_1^{m_1}\cdots t_n^{m_n}$.
Let $P \subset M_\mathbb{R}$ be an $n$-dimensional convex lattice polytope. Here, a lattice polytope means a polytope whose vertices are integral. Let $\mathfrak{F}$ be a face of $P$ and $\mathfrak{F}_0$ be a facet which contains $\mathfrak{F}$. We can take the unique primitive vector $n_{\mathfrak{F}_0} \in N$ which gives the normal vector of $\mathfrak{F}_0$
pointing towards the interior of $P$. Let $\sigma_{\mathfrak{F}}\subset N_\mathbb{R}$ be the cone generated by $n_{\mathfrak{F}_0}$ for all facets $\mathfrak{F}_0$ satisfying $\mathfrak{F} \subset \mathfrak{F}_0.$ Then, $
\Sigma(P)=\{\sigma_\mathfrak{F} | \mathfrak{F} \text{ is a face of } P\} $ gives a complete rational fan in $N_\mathbb{R}$. The fan $\Sigma(P)$ determines an $n$-dimensional toric variety $X_{\Sigma(P)}$, which will be shortly denoted by $X_P$. In this paper, we call $X_P$ the toric variety determined by $P$.
If an $n$-dimensional convex polytope $P\subset M_\mathbb{R}$ contains the origin as an interior point,
the polar dual $P^\circ \subset N_\mathbb{R}$ of $P$ is defined by $\{n\in N_\mathbb{R} | \langle n,m\rangle \geq -1 \text{ for all } m\in P \}$. We can see that $P^\circ$ is also an $n$-dimensional convex polytope and contains the origin as an interior point. Moreover, it holds that $(P^\circ)^\circ =P.$ If a convex lattice polytope $P$ contains the origin $0\in M$ in its interior and $P^\circ$ gives a lattice polytope, then $P$ is called a reflexive polytope. Batyrev \cite{B} proves that the $n$-dimensional toric variety $X_P$ is a Fano variety with Gorenstein singularities whose anti-canonical sections are given by \begin{align}\label{LatticePtsGenerators} H^0(X_P, \mathcal{O}_{X_P} (-K_{X_P}) ) =\bigoplus_{m\in P\cap M} \mathbb{C} \chi^m. \end{align}
Then, a generic member of the linear system $|-K_{X_P}|$ defines an $(n-1)$-dimensional Calabi-Yau hypersurface in $X_P$ as in \cite{B} Section 3.
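As a small illustration of polar duality (an elementary example added for the reader's convenience), consider the Fano polytope $P_1$ of Table 1, namely the simplex with vertices $e_1,e_2,e_3$ and $-e_1-e_2-e_3$. Since $P_1$ is the convex hull of its vertices, its polar dual is cut out by the inequalities $\langle n,v\rangle \geq -1$ at the four vertices $v$, that is,
\begin{align*}
P_1^\circ=\{\, n\in N_\mathbb{R} \,:\, n_1\geq -1,\ n_2\geq -1,\ n_3\geq -1,\ n_1+n_2+n_3\leq 1 \,\},
\end{align*}
which is the lattice simplex with vertices $(3,-1,-1)$, $(-1,3,-1)$, $(-1,-1,3)$ and $(-1,-1,-1)$. In particular, $P_1^\circ$ is again a lattice polytope, so $P_1$ is reflexive.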
Let $P\subset M_\mathbb{R}$ be a reflexive polytope. For a face $\mathfrak{F}\subset P^\circ \subset N_\mathbb{R}$, let $\mathbb{R}_{\geq 0} \mathfrak{F}$ be the cone $\{\lambda x | x\in \mathfrak{F}, \lambda \in \mathbb{R}_{\geq 0} \}$. Then, the set $\Sigma[P^\circ]$ of $\mathbb{R}_{\geq 0} \mathfrak{F}$ for all faces $\mathfrak{F}$ of $P^\circ$ gives a complete fan in $N_\mathbb{R}$ such that $X_{\Sigma[P^\circ]} = X_{\Sigma(P)}=X_P$ (\cite{B} Proposition 2.1.1, Corollary 4.1.11).
We introduce Fano polytopes as follows. This terminology is due to \cite{BToric4}, \cite{Sa} and \cite{Ob}.
\begin{df}\label{dfFano} An $n$-dimensional convex integral polytope $P\subset M_\mathbb{R}$ is called a Fano polytope, if $P$ satisfies the following conditions: \begin{itemize}
\item[(i)] $0 \in M_\mathbb{R}$ is an interior point of $P$,
\item[(ii)] the vertices of every facet of $P$ give a $\mathbb{Z}$-basis of $M$.
\end{itemize} \end{df}
If $P$ is a Fano polytope, $X_{P^\circ}=X_{\Sigma(P^\circ)}=X_{\Sigma[P]}$ is a smooth toric Fano $n$-fold.
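For instance, for the polytope $P_1$ of Table 1 the fan $\Sigma[P_1]$ has the four rays spanned by $e_1,e_2,e_3,-e_1-e_2-e_3$, and its maximal cones are spanned by the triples of these vectors; this is the fan of the projective space, so $X_{P_1^\circ}=X_{\Sigma[P_1]}\simeq \mathbb{P}^3(\mathbb{C})$ and $S_{P_1^\circ}$ is a generic quartic surface in $\mathbb{P}^3(\mathbb{C})$. This is consistent with the entry $L_1=(4)$ of Table 1, since the N\'eron-Severi lattice of a generic quartic surface is generated by the hyperplane class $h$, with $h^2=4$. (We record this example only as an illustration; it is not used in the sequel.)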
\subsection{Dolgachev conjecture for toric $K3$ hypersurfaces}
In this paper, we will study the case $n=3$. In this case, we will consider $K3$ hypersurfaces corresponding to the anti-canonical sections of three-dimensional toric varieties. We need to study lattice theoretic properties of $K3$ surfaces. Throughout, let $U$ be the unimodular hyperbolic lattice of rank $2$.
Also, let $E_8(-1)$ be the negative definite even unimodular lattice of rank $8$, namely the $E_8$ root lattice with its bilinear form multiplied by $-1$, so that its roots have self-intersection number $-2$.
Then, the $K3$ lattice $L_{K3}$, which is isomorphic to the $2$-homology group of a $K3$ surface, is isometric to the unimodular lattice
$II_{3,19}=U^{\oplus 3}\oplus E_8(-1)^{\oplus 2}.$ For a $K3$ surface $S$, its N\'eron-Severi lattice is denoted by ${\rm NS}(S)$ and its transcendental lattice is denoted by ${\rm Tr}(S).$
For a three-dimensional reflexive polytope $P$, a generic member of $|-K_{X_P}|$ defines a $K3$ surface $S_P$. Also, we obtain another $K3$ surface $S_{P^\circ}$ from a generic member of $|-K_{X_{P^\circ}}|$. Let $L_P$ ($L_{P^\circ}$, resp.) be the lattice in $L_{K3}$ which is isometric to the sublattice generated by the elements of $\iota^* ({\rm NS}(X_P))$ ($\iota_\circ ^* ({\rm NS}(X_{P^\circ}))$, resp.), where $\iota^* : H^2(X_P,\mathbb{Z}) \rightarrow H^2(S_P,\mathbb{Z})$ ($\iota_\circ^* : H^2(X_{P^\circ},\mathbb{Z}) \rightarrow H^2(S_{P^\circ},\mathbb{Z})$, resp.) is the pull back of $\iota: S_P \hookrightarrow X_P$ ($\iota_\circ : S_{P^\circ} \hookrightarrow X_{P^\circ}$, resp.). Let $L_P^\perp$ ($L_{P^\circ}^\perp $, resp.) be the orthogonal complement of $L_P$ ($L_{P^\circ}$, resp.) in $L_{K3}$.
The following conjecture is famous and important to study mirror symmetry for toric $K3$ hypersurfaces.
\begin{conj}(The Dolgachev conjecture, \cite{D} Conjecture (8.6)) The notation being as above, the lattice $L_{P^\circ}^\perp$ has an orthogonal decomposition $ L_{P^\circ}^\perp \simeq U \oplus \widecheck{L}_{P^\circ} $ with the summand $U$ and there is a primitive embedding $L_P \hookrightarrow \widecheck{L}_{P^\circ}$. Moreover, $L_P = \widecheck{L}_{P^\circ}$ holds if and only if $L_{P^\circ} \simeq {\rm NS}(S_{P^\circ})$. \end{conj}
We will study the cases for three-dimensional Fano polytopes. As in Table 1, three-dimensional Fano polytopes are classified into 18 types up to the action of ${\rm GL}_3(\mathbb{Z})$ by \cite{BToric} and \cite{WW}. Suppose $P$ is a three-dimensional Fano polytope. One can determine the structure of the N\'eron-Severi lattice ${\rm NS}(S_{P^\circ})$ of the $K3$ surface $S_{P^\circ}$ on the basis of algebrogeometric techniques, like intersection theory for divisors of toric manifolds. We already have the following fact.
\begin{prop} (\cite{K2} or \cite{Mase})\label{PropLatticeDual}
For $k\in \{1,\ldots, 18\},$
let $P_k$ be the Fano polytope of Table 1.
Then, the N\'eron-Severi lattice ${\rm NS}(S_{P_k^\circ})$ is isometric to the lattice $L_k$ of signature $(1,\ell_k-4)$ of Table 1, where $L_k$ is equal to the lattice $\iota_\circ^* ({\rm NS} (X_{P_k^\circ}))$. \end{prop}
In practice, \cite{K2} proves the proposition by applying the following algebrogeometric result.
\begin{prop} (\cite{Mo} Theorem 7.5)\label{PropMoishezon} Let $X$ be a three-dimensional smooth projective algebraic variety. Let $\iota : S\hookrightarrow X$ be an embedding of a general hyperplane section. Then, the pull-back $\iota^*: {\rm NS}(X) \rightarrow {\rm NS}(S)$ is a surjective mapping if and only if $X $ and $S$ satisfy the following (i) or (ii): \begin{itemize}
\item[(i)] $b_2(X)=b_2(S)$,
\item[(ii)] $h^{2,0} (S) > h^{2,0}(X)$.
\end{itemize} \end{prop}
We have $h^{2,0}(S_{P^\circ})=1$. Also, a smooth Fano three-fold $X_{P^\circ}$ satisfies $h^{2,0}(X_{P^\circ})=0$. So, one can apply Proposition \ref{PropMoishezon} and calculate the intersection form of ${\rm NS}(S_{P^\circ})$ from that of ${\rm NS}(X_{P^\circ})$.
\begin{rem} \label{RemCheck}
For each $k$, we can see that the orthogonal complement $L_k^\perp$ admits a decomposition in the form $U\oplus \widecheck{L}_k$, where $\widecheck{L}_k$ is a lattice of signature $(1,22-\ell_k)$.
\end{rem}
Furthermore, according to \cite{Kob} Corollary 4.3.6,
Proposition \ref{PropLatticeDual} implies the isomorphism $L_{P_k}\simeq {\rm NS}(S_{P_k})$.
Therefore, in order to see that the Dolgachev conjecture holds for three-dimensional Fano polytopes,
it is enough to prove
\begin{align}\label{mirrorK3}
{\rm Tr}(S_P) \simeq U \oplus {\rm NS}(S_{P^\circ}) =U \oplus L_{P^\circ}
\end{align}
for each three-dimensional Fano polytope $P$.
Recall that a pair $(S,S')$ of $K3$ surfaces is usually called a mirror pair if $ {\rm Tr}(S) \simeq U \oplus {\rm NS}(S'). $ The relation (\ref{mirrorK3}) means that $(S_P, S_{P^\circ})$ is a mirror pair.
\subsection{$K3$ hypersurfaces with explicit parameters}
For each Fano polytope $P_k$ in Table 1, let $S_k=S_{P_k}$ be a generic member of the linear system $ |-K_{X_{P_k}}| $ in the toric three-fold $X_{P_k}$. Due to Table 1, Proposition \ref{PropLatticeDual} and (\ref{mirrorK3}), it is enough to prove the relation \begin{align} {\rm Tr}(S_k) \simeq U \oplus L_{k} \end{align} for $k\in \{1,\ldots, 18\}$ to see that the Dolgachev conjecture holds for the Fano polytopes. As in Table 1, the number of vertices of a three-dimensional Fano polytope is one of $4,5,6,7$ or $8$.
The N\'eron-Severi lattice of $S_1$, which is derived from the unique Fano polytope $P_1$ with $4$ vertices, was determined by Narumiya and Shiga \cite{NS}. The motivation and the method of their paper are based on the study of the monodromy group of the Picard-Fuchs equation for the family of $S_1$. The second author was inspired by \cite{NS}. Also, he was interested in a numerical approach due to T. Ishige,
who studied in detail the monodromy group of the Picard-Fuchs system for $K3$ surfaces derived from the Fano polytope $P_3$. Motivated by these earlier studies,
the second author \cite{Na} determined the N\'eron-Severi lattices coming from the Fano polytopes with $5$ vertices and studied the monodromy groups of the Picard-Fuchs systems. His proof is based on techniques of Mordell-Weil lattices introduced in \cite{Shioda}. In these works, it is established that the Dolgachev conjecture is true for the Fano polytopes with $4$ or $5$ vertices.
\begin{rem} In \cite{IIT}, one can find the above-mentioned numerical approach of Ishige to the study of the monodromy group. We remark that there are several other computer-assisted studies of Picard-Fuchs systems coming from Fano polytopes. For example, Nakayama and Takayama \cite{NT} give an approximation algorithm to compute
Picard-Fuchs systems attached to three-dimensional Fano polytopes via techniques of $D$-modules. \end{rem}
In the present paper, we will see that the Dolgachev conjecture is true for all the Fano polytopes with $6,7$ or $8$ vertices. Namely, we will study the toric $K3$ hypersurfaces $S_k$ $(k\in \{6,\ldots, 18\}).$ In this subsection, we will give their explicit forms.
Suppose a three-dimensional Fano polytope $P_k$ is given by \begin{align}\label{polytopeP} P_k=\begin{pmatrix} p_{11} & \cdots & p_{1\ell_k} \\ p_{21} & \cdots & p_{2\ell_k} \\ p_{31} & \cdots & p_{3\ell_k} \end{pmatrix}, \end{align}
where $\ell_k$ is the number of vertices of $P_k$. According to (\ref{LatticePtsGenerators}), the family of the $K3$ hypersurfaces $S_k$ is explicitly given by \begin{align}\label{EquationA}
\Big\{\sum_{i=1}^{\ell_k} c_i t_1^{p_{1i}} t_2^{p_{2i}} t_3^{p_{3i}}=0 \Big| \hspace{1mm} c_i \in \mathbb{C}\Big\}. \end{align} Each Fano polytope in Table 1 contains the origin as the unique inner lattice point and the standard simplex generated by ${}^t (1,0,0), {}^t (0,1,0), {}^t (0,0,1)$. From the polytope $P_k$ of (\ref{polytopeP}), we set \begin{align} \widetilde{P}_k= \begin{pmatrix} 1&1&\cdots &1 \\ p_{10}&p_{11}& \cdots &p_{1\ell_k} \\ p_{20}&p_{21}& \cdots &p_{2\ell_k} \\ p_{30}&p_{31}& \cdots &p_{3\ell_k} \end{pmatrix} \end{align}
where ${\small \begin{pmatrix} p_{10} & p_{11} & p_{12} & p_{13} \\ p_{20} & p_{21} & p_{22} & p_{23} \\ p_{30} & p_{31} & p_{32} & p_{33} \end{pmatrix} =\begin{pmatrix} 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix} }$.
The matrix $\widetilde{P}_k$ is regarded as a surjective linear mapping $\mathbb{R}^{\ell_k+1} \rightarrow \mathbb{R} \oplus M_\mathbb{R} $
which induces the exact sequence
\begin{align}\label{Sequence1} 0 \rightarrow \mathbb{Z}^{\ell_k-3} \overset{K_k} \longrightarrow \mathbb{Z}^{\ell_k+1} \overset{\widetilde{P}_k}\longrightarrow \mathbb{Z} \oplus M \rightarrow 0,
\end{align} where $K_k=(\kappa_{1}, \ldots, \kappa_{\ell_k-3})$ is an $(\ell_k+1)\times (\ell_k-3)$-matrix with entries in $\mathbb{Z}$ such that $\kappa_j = {}^t (\kappa_{j0},\ldots , \kappa_{j\ell_k})$ $(j=1,\ldots, \ell_k-3)$ satisfies \begin{align*} \kappa_{j,j'+3}= \begin{cases} & 1 \quad (j'=j),\\ & 0 \quad (j'\not=j). \end{cases} \end{align*} Then, by putting $ \displaystyle x=\frac{c_1 t_1}{c_0}, y=\frac{c_2 t_2}{c_0}$ and $\displaystyle z=\frac{c_3 t_3}{c_0}, $ the equation in (\ref{EquationA}) is transformed to \begin{align}\label{EqStack} S_k(\lambda_1,\ldots, \lambda_{\ell_k-3}): \quad 1 + x + y + z + \sum_{i=1}^{\ell_k-3} \lambda_i x^{p_{1,i+3}} y^{p_{2,i+3}} z^{p_{3,i+3}} =0. \end{align} Here, we put \begin{align}\label{lambda} \lambda_i = \prod_{h=0}^{\ell_k} c_h^{\kappa_{i+3, h}}. \end{align}
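In practice, the matrix $K_k$ can be computed as an integral basis of the kernel of $\widetilde{P}_k$ in (\ref{Sequence1}). The following is a minimal computational sketch (in Python, using SymPy); the matrix \texttt{P\_tilde} below is only a hypothetical example of the shape described above and is not one of the matrices $\widetilde{P}_k$ obtained from Table 1.
\begin{verbatim}
from sympy import Matrix, lcm

# Hypothetical matrix of the shape of \widetilde{P}_k (here l = 6), used only as
# an illustration: first row all ones, columns 1..3 the standard simplex, as in
# the text.
P_tilde = Matrix([
    [1, 1, 1, 1,  1,  1,  1],
    [0, 1, 0, 0, -1,  0,  0],
    [0, 0, 1, 0,  0, -1,  0],
    [0, 0, 0, 1,  0,  0, -1],
])

# Integral basis of ker(P_tilde): clear denominators of SymPy's rational basis.
columns = []
for v in P_tilde.nullspace():
    columns.append(v * lcm([entry.q for entry in v]))
K = Matrix.hstack(*columns)

print(K.T)             # each column of K is an integral relation among the
                       # columns of P_tilde
print(P_tilde * K)     # the zero matrix, as required by the exact sequence
\end{verbatim}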
By clearing fractions of (\ref{EqStack}) for the cases of $k\in \{6,\ldots,18\}$, we obtain explicit defining equations of $S_k(\lambda_1,\ldots,\lambda_{\ell_k -3})$ as in Table 2.
\begin{longtable}{ll} \caption{Defining equations of toric $K3$ hypersurfaces $S_k$}
\\ \hline
$k$ & Defining equations of $S_k (\lambda_1,\ldots,\lambda_{\ell_k -3})$
\\
\hline
\endhead
$6$ & $x y z (x + y + z+1) +\lambda_1 x y + \lambda_2 x z + \lambda_3 y z = 0$ \\
$7$ & $xyz(x+y+z+1)+\lambda_1xy+\lambda_2x+\lambda_3y= 0$\\
$8$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 x z^2+\lambda_3 y=0$\\
$9$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 x z+\lambda_3 z=0$\\
$10$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 x+\lambda_3=0$\\
$11$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 x y^2+\lambda_3=0$\\
$12$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 x y^2+\lambda_3 z=0$\\
$13$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 x z+\lambda_3 y z+\lambda_4 z=0$\\
$14$ & $x y z (x + y + z+1) +\lambda_1 y + \lambda_2 x z + \lambda_3 y z + \lambda_4 z=0$\\
$15$ & $ x y z
(x+y+z+1)+\lambda_1 x^2 y+\lambda_2 x z+\lambda_3 y z+\lambda_4 z=0$\\
$16$ & $x y z (x+y+z+1)+ \lambda_1+\lambda_2 x z+\lambda_3 y z+\lambda_4 z=0$\\
$17$ & $x y z (x+y+z+1)+\lambda_1 x y+\lambda_2 y z+\lambda_3 x z+\lambda_4 x^2 z+\lambda_5 y^2 z=0$\\
$18$ & $x y z (x+y+z+1)+\lambda_1 x^2+\lambda_2 y z+\lambda_3 x z+\lambda_4 x^2
z+\lambda_5 y^2 z=0$\\ \hline\\ \end{longtable}
Now, let us characterize the above parameters $\lambda_1,\ldots, \lambda_{\ell_k-3}$ coming from the polytope $P_k$. The matrix $K_k$ of (\ref{Sequence1}) is called the Gale transform of $\widetilde{P}_k$. Each row of the matrix $K_k$ in (\ref{Sequence1}) is a $\mathbb{Z}$-vector of $\mathbb{R}^{\ell_k-3}$. Let $\Sigma'(P_k)$ be a fan in $\mathbb{R}^{\ell_k-3}$ whose one-dimensional cones are generated by these vectors. This fan $\Sigma'(P_k)$ is regarded as the secondary fan coming from the set of lattice points of the Fano polytope $P_k$ (for detail, see \cite{GKZ}; see also \cite{BFS} Section 4). Then, the tuple of the parameters $\lambda_1,\ldots, \lambda_{\ell_k-3}$ of (\ref{lambda}) gives a system of coordinates of the torus $(\mathbb{C}^\times)^{\ell_k-3}$ of the $(\ell_k-3)$-dimensional toric variety determined by the secondary fan $\Sigma' (P_k)$. Namely, $(\mathbb{C}^\times)^{\ell_k-3}$ is regarded as ${\rm Spec}\left(\mathbb{C}\left[\lambda_1^{\pm},\ldots, \lambda_{\ell_k-3}^{\pm}\right]\right).$
Thus, for each $k \in \{6,\ldots, 18\}$, we obtain a family \begin{align}\label{Family}
\mathscr{F}_k: \{S_k (\lambda_1,\ldots, \lambda_{\ell_k-3}) \text{ of (\ref{EqStack})} \hspace{1mm} | \hspace{1mm} (\lambda_1,\ldots, \lambda_{\ell_k-3})\in (\mathbb{C}^\times)^{\ell_k-3} \} \rightarrow (\mathbb{C}^\times)^{\ell_k-3} \end{align} of toric $K3$ hypersurfaces. We remark that such a family can also be obtained via a morphism of fans precisely studied in \cite{Lafforgue}.
\section{Jacobian elliptic fibrations and evident lattices}
Let $S$ be a $K3$ surface.
We call a triple $(S,\pi,C)$ an elliptic $K3$ surface,
if $C$ is a curve, $\pi:S\rightarrow C$ is a proper mapping such that the fibres $\pi^{-1}(p)$ are elliptic curves for almost all $p\in C$ and $S$ is relatively minimal. We suppose that an elliptic fibration $\pi:S\rightarrow C$ has a section $O:C\rightarrow S$ such that $O$ is the identity element of the Mordell-Weil group ${\rm MW}(\pi,O)$ of sections of $\pi$ (see Section 5). We call such an elliptic fibration $\pi$ a Jacobian elliptic fibration on $S$.
In this section, we introduce an appropriate Jacobian elliptic fibration on the $K3$ surface $S_k (\lambda_1,\ldots, \lambda_{\ell_k-3})$ in Table 2 for each $k\in\{6,\ldots, 18\}$. From now on, $S_k (\lambda_1,\ldots, \lambda_{\ell_k-3})$ will be shortly denoted by $S_k(\lambda)$.
\begin{prop}\label{PropEquationJac} For each $k\in \{6,\ldots, 18\},$ there is a Jacobian elliptic fibration $\pi_k=\pi_k^{(\lambda)}:S_k(\lambda) \rightarrow \mathbb{P}^1 (\mathbb{C})$
defined by the equation \begin{align}\label{EquationJac} z_1^2 =4 y_1^3 + a_1 (x_1) y_1^2 + a_2(x_1) y_1 + a_3(x_1), \end{align} where $a_j (x_1)$ $(j=1,2,3)$ are the polynomials given in Table 3. Here, the fibration $\pi_k$ is given by $(x_1,y_1,z_1)\mapsto x_1$ and the section $O$ is given by $x_1 \mapsto (x_1, \infty, \infty)$. \end{prop}
{\small \begin{longtable}{llll}
\caption{\normalsize{$a_1(x_1), a_2(x_1)$ and $a_3(x_1)$ of (\ref{EquationJac})}}
\\ \hline
$k$ & $a_1(x_1) $ &$a_2(x_1)$ & $a_3(x_1)$
\\
\hline
\endhead
$6$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) + x_1^2 (-4 \lambda_1 - 4 \lambda_3 + (1 + x_1)^2)$ & $4 \lambda_1 \lambda_3 x_1^4$ & $0$ \\
$7$ & $x_1 (-4 \lambda_2 + x_1 (-4 \lambda_1 + (1 + x_1)^2))$ & $-2 \lambda_3 x_1^4 (1 + x_1)$ & $\lambda_3^2 x_1^6$ \\
$8$ & $x_1 (x_1 (1 + x_1)^2 - 4 \lambda_1 (\lambda_2 + x_1))$ & $-2 \lambda_3 x_1^3 (1 + x_1) (\lambda_2 + x_1)$ & $\lambda_3^2 x_1^4 (\lambda_2 + x_1)^2$ \\
$9$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) $ & $4 \lambda_1 \lambda_3 x_1^3$ & $0$ \\
& $+ x_1 (-4 \lambda_3 + x_1 (-4 \lambda_1 + (1 + x_1)^2))$ & & \\
$10$ & $\lambda_1^2 + 2 \lambda_1 x_1 (1 + x_1) + x_1 (-4 \lambda_2 + x_1 (1 + x_1)^2)$ & $-2 \lambda_3 x_1^2 (\lambda_1 + x_1 + x_1^2)$ & $\lambda_3^2 x_1^4$ \\
$11$ & $x_1^2 (1 - 4 \lambda_1 + (2 - 4 \lambda_2) x_1 + x_1^2)$ & $-2 \lambda_3 x_1^3 (1 + x_1)$ & $\lambda_3^2 x_1^4$\\
$12$ & $(\lambda_1 + x_1 + x_1^2)^2$ & $-2 \lambda_3 x_1^2 (\lambda_2 + x_1) (\lambda_1 + x_1 + x_1^2)$ & $\lambda_3^2 x_1^4 (\lambda_2 + x_1)^2$\\
$13$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) $ & $4 \lambda_1 x_1^3 (\lambda_4 + \lambda_3 x_1)$ & $0$\\
& $+ x_1 (-4 \lambda_4 + x_1 (-4 \lambda_1 - 4 \lambda_3 + (1 + x_1)^2))$ & & \\
$14$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) $ & $-2 \lambda_1 x_1^3 (\lambda_2 + x_1 + x_1^2)$ & $\lambda_1^2 x_1^6$\\
& $+ x_1 (-4 \lambda_4 + x_1 (-4 \lambda_3 + (1 + x_1)^2))$ && \\
$15$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) $ & $-2 \lambda_1 x_1^2 (\lambda_4 + \lambda_3 x_1) (\lambda_2 + x_1 + x_1^2)$ & $\lambda_1^2 x_1^4 (\lambda_4 + \lambda_3 x_1)^2$\\
&$+ x_1 (-4 \lambda_4 + x_1 (-4 \lambda_3 + (1 + x_1)^2))$ && \\
$16$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) $ & $-2 \lambda_1 x_1^2 (\lambda_2 + x_1 + x_1^2)$ & $\lambda_1^2 x_1^4$\\
& $+ x_1 (-4 \lambda_4 + x_1 (-4 \lambda_3 + (1 + x_1)^2))$ && \\
$17$ & $\lambda_3^2 + 2 \lambda_3 x_1 (1 + x_1) $ & $4 \lambda_1 x_1^3 (\lambda_4 + x_1) (\lambda_2 + \lambda_5 x_1)$ & $0$\\
& $+x_1 (-4 \lambda_2 (\lambda_4 + x_1) $ & & \\
&$\hspace{7mm}+ x_1 (1 - 4 \lambda_1 - 4 \lambda_4 \lambda_5 + 2 x_1 - 4 \lambda_5 x_1 + x_1^2))$ & & \\
$18$ & $\lambda_2^2 + 2 \lambda_2 x_1 (1 + x_1) $ & $-2 \lambda_1 x_1^3 (\lambda_5 + x_1) (\lambda_2 + x_1 + x_1^2)$ & $\lambda_1^2 x_1^6 (\lambda_5 + x_1)^2$\\
&$+
x_1 (-4 \lambda_3 (\lambda_5 + x_1) $ && \\
& $\hspace{7mm} + x_1 ((1 + x_1)^2 - 4 \lambda_4 (\lambda_5 + x_1)))$ & & \\ \hline\\ \end{longtable} }
\begin{proof} By performing the birational transformation $(x,y,z)\mapsto (x_1,y_1,z_1)$ given by $$ x=x(x_1,y_1,z_1), \quad y=y(x_1,y_1,z_1), \quad z=z(x_1,y_1,z_1), $$
as in Table 4, 5 and 6, we have the assertion. {\small \begin{longtable}{llll} \caption{Rational functions $ x(x_1,y_1,z_1)$ }
\\
\\
\hline
$k$ & $x(x_1,y_1,z_1) $
\\
\hline
\endhead
$6$ &
$\frac{2 y_1 (-\lambda_3 x_1^2 + y_1)}{x_1 (\lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 + z_1)}$
\\
$7$ &
$\frac{2 y_1^2}{x_1 (-\lambda_3 x_1^3 + x_1 y_1 + x_1^2 y_1 - z_1)}$ \\
$8$ &
$\frac{2 y_1^2}{x_1 (-\lambda_2 \lambda_3 x_1^2 - \lambda_3 x_1^3 + x_1 y_1 + x_1^2 y_1 + z_1)}$ \\
$9$ &
$\frac{2 y_1 (-\lambda_3 x_1 + y_1)}{x_1 (\lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 - z_1)}$ \\
$10$ &
$\frac{2 y_1^2}{x_1 (-\lambda_3 x_1^2 + \lambda_1 y_1 + x_1 y_1 + x_1^2 y_1+z_1)}$
\\
$11$ &
$\frac{2 y_1^2}{x_1 (-\lambda_3 x_1^2 + x_1 y_1 + x_1^2 y_1 + z_1)}$
\\
$12$ &
$\frac{2 y_1^2}{x_1 (-\lambda_2 \lambda_3 x_1^2 - \lambda_3 x_1^3 + \lambda_1 y_1 + x_1 y_1 + x_1^2 y_1 + z_1)}$
\\
$13$ & $\frac{2 y_1 (-\lambda_4 x_1 - \lambda_3 x_1^2 + y_1)}{x_1 (\lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 - z_1)}$ \\
$14$ & $\frac{2 (\lambda_4 x_1 + \lambda_3 x_1^2 - y_1) y_1}{x_1 (\lambda_1 x_1^3 - \lambda_2 y_1 - x_1 y_1 - x_1^2 y_1 -
z_1)}$
\\
$15$ &
$\frac{(\lambda_4 + \lambda_3 x_1)( -\lambda_1 \lambda_4 x_1^2 - \lambda_1 \lambda_3 x_1^3 + \lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 + z_1)}{2 y_1 (-\lambda_4 x_1 - \lambda_3 x_1^2 + y_1)}$ \\
$16$ &
$\frac{2 y_1 (-\lambda_4 x_1 - \lambda_3 x_1^2 + y_1)}{x_1 (-\lambda_1 x_1^2 + \lambda_2 y_1 + x_1 y_1 +
x_1^2 y_1 + z_1)}$
\\
$17$ &
$\frac{2 y_1 (-\lambda_2 \lambda_4 x_1 - \lambda_2 x_1^2 - \lambda_4 \lambda_5 x_1^2 - \lambda_5 x_1^3 + y_1)}{(\lambda_4 +
x_1) (\lambda_3 y_1 + x_1 y_1 + x_1^2 y_1 + z_1)}$
\\
$18$ & $x_1$
\\
\hline\\ \end{longtable} }
{\small \begin{longtable}{llll} \caption{Rational functions $ y(x_1,y_1,z_1)$ }
\\ \\
\hline
$k$ & $y(x_1,y_1,z_1)$
\\
\hline
\endhead
$6$ & $x_1$
\\
$7$ & $x_1$
\\
$8$ &
$x_1$
\\
$9$ &
$x_1$
\\
$10$ &
$-\frac{-\lambda_3 x_1^2 + \lambda_1 y_1 + x_1 y_1 + x_1^2 y_1+z_1}{2 x_1 y_1}$
\\
$11$ &
$x_1$
\\
$12$ &
$\frac{\lambda_2 \lambda_3 x_1^2 + \lambda_3 x_1^3 - \lambda_1 y_1 - x_1 y_1 - x_1^2 y_1 - z_1}{2 (\lambda_2 + x_1) y_1}$
\\
$13$ &
$x_1$
\\
$14$ &
$x_1$
\\
$15$ &
$x_1$
\\
$16$ &
$x_1$
\\
$17$ &
$x_1$
\\
$18$ &
$\frac{\lambda_1 \lambda_5 x_1^3 + \lambda_1 x_1^4 - \lambda_2 y_1 - x_1 y_1 - x_1^2 y_1 + z_1}{2 (\lambda_5 + x_1) y_1}$
\\
\hline\\ \end{longtable} }
{\small \begin{longtable}{llll} \caption{Rational functions $ z(x_1,y_1,z_1)$ }
\\ \\
\hline
$k$ & $z(x_1,y_1,z_1)$
\\
\hline
\endhead
$6$ &
$- \frac{\lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 + z_1}{2 x_1 (-\lambda_3 x_1^2 + y_1)}$ \\
$7$ &
$-\frac{-\lambda_3 x_1^3 + x_1 y_1 + x_1^2 y_1 - z_1}{2 x_1 y_1}$ \\
$8$ &
$\frac{\lambda_2 \lambda_3 x_1^2 + \lambda_3 x_1^3 - x_1 y_1 - x_1^2 y_1 - z_1}{2 (\lambda_2 + x_1) y_1}$ \\
$9$ &
$\frac{\lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 - z_1}{2 x_1 (\lambda_3 x_1 - y_1)}$ \\
$10$ &
$x_1$ \\
$11$ & $-\frac{-\lambda_3 x_1^2 + x_1 y_1 + x_1^2 y_1 + z_1}{2 x_1 y_1}$\\
$12$ & $x_1$\\
$13$ &
$\frac{\lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 - z_1}{2 x_1 (\lambda_4 x_1 + \lambda_3 x_1^2 - y_1)}$ \\
$14$
& $-\frac{\lambda_1 x_1^3 - \lambda_2 y_1 - x_1 y_1 - x_1^2 y_1 - z_1}{
2 x_1 (\lambda_4 x_1 + \lambda_3 x_1^2 - y_1)}$\\
$15$ &
$-\frac{-\lambda_1 \lambda_4 x_1^2 - \lambda_1 \lambda_3 x_1^3 + \lambda_2 y_1 + x_1 y_1 + x_1^2 y_1 + z_1}{
2 x_1 (-\lambda_4 x_1 - \lambda_3 x_1^2 + y_1)}$\\
$16$ &
$\frac{-\lambda_1 x_1^2 + \lambda_2 y_1 + x_1 y_1 +
x_1^2 y_1 + z_1}{2 x_1 (\lambda_4 x_1 + \lambda_3 x_1^2 - y_1)}$\\
$17$ &
$\frac{\lambda_3 y_1 + x_1 y_1 +
x_1^2 y_1 + z_1}{2 x_1 (\lambda_2 \lambda_4 x_1 + \lambda_2 x_1^2 + \lambda_4 \lambda_5 x_1^2 + \lambda_5 x_1^3 - y_1)}$\\
$18$ &
$-\frac{\lambda_1 x_1^2 (\lambda_5 + x_1)}{y_1}$\\
\hline\\ \end{longtable} } \end{proof}
\begin{rem}
We can naturally extend the family $\mathscr{F}_k$ to $\tilde{\mathscr{F}}_k : \{S_k(\lambda_1,\ldots, \lambda_{\ell_k -3}) \text{ of } (\ref{EqStack}) |(\lambda_1,\ldots, \lambda_{\ell_k -3})\in \mathbb{C}^{\ell_k -3} \} \rightarrow \mathbb{C}^{\ell_k -3}$. From Table 3, one can find inclusion relationships among our families: $\tilde{\mathscr{F}}_7 $ ($\tilde{\mathscr{F}}_9$, $\tilde{\mathscr{F}}_{10}$, resp.) is the subfamily of $\tilde{\mathscr{F}}_{14}$ ($\tilde{\mathscr{F}}_{13}$, $\tilde{\mathscr{F}}_{16}$, resp.) over the locus $\{\lambda_2=0\}$ ($\{\lambda_3=0\}$, $\{\lambda_3=0\}$, resp.). \end{rem}
\begin{rem}
There are several Jacobian elliptic fibrations on $S_k$ for each $k\in \{6,\ldots,18\}$.
Our fibration $\pi_k$ on $S_k$ was carefully chosen in the first author's master thesis \cite{M1}.
In fact, $\pi_k$ enables us to determine the lattice structure of $S_k$. Namely, our mathematical arguments in Sections 3, 4 and 5 heavily depend on the properties of $\pi_k$.
\end{rem}
The following proposition is obtained by direct observation.
\begin{prop} For each $k\in \{6,\ldots,18\},$ the singular fibres of the elliptic fibration $\pi_k^{(\lambda)}$, induced from the equation (\ref{EquationJac}), on a generic member $S_k(\lambda)$ of the family $\mathscr{F}_k$ are as listed in Table 7. Also, there are non-trivial sections $\mathbb{P}^1 (\mathbb{C}) \rightarrow S_k(\lambda)$ of $\pi_k^{(\lambda)}$ given by $x_1 \mapsto (x_1,y_1,z_1)$ as in Table 7. \end{prop}
{\small \begin{longtable}{lll}
\caption{\normalsize{Singular fibres and non-trivial sections of $\pi_k$ }}
\\
\\ \hline
$k$ & Kodaira type of singular fibres & Non-trivial sections $x_1 \mapsto (x_1,y_1,z_1)$
\\
\hline
\endhead
$6$ &$I_8 + I_8 +8 I_1$ & $O': x_1\mapsto (x_1,0,0)$ \\
& & $Q: x_1\mapsto (x_1,\lambda_1 x_1^2,\lambda_1 x_1^2 (\lambda_2 + x_1 +x_1^2))$\\
$7$ & $I_3^* +I_8 +7 I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_3 x_1^3)$ \\
$8$ & $I_1^* + I_3 + I_8 + 6 I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_3 x_1^2 (\lambda_2+x_1) )$ \\
$9$ & $I_6 + I_{10} +8 I_1$ & $O': x_1\mapsto (x_1,0,0)$ \\
&& $Q: x_1\mapsto (x_1,\lambda_1 x_1^2,\lambda_1 x_1^2 (\lambda_2+x_1 + x_1^2))$ \\
$10$ & $I_5 +I_{11}+8 I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_3 x_1^2)$ \\
$11$ &$IV^* + I_9 + 7 I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_3 x_1^2)$ \\
$12$ &$I_6 + I_3 + I_9 + 6 I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_3 x_1^2 (\lambda_2+x_1))$ \\
$13$ &$I_6 + I_2 +I_8 + 8I_1 $ & $O': x_1\mapsto (x_1,0,0)$ \\
& & $Q: x_1\mapsto (x_1,x_1(\lambda_4+ \lambda_3 x_1),x_1(\lambda_4+\lambda_3 x_1) (\lambda_2+x_1 + x_1^2))$ \\
$14$ &$I_7 +I_8 + 9 I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_1 x_1^3)$ \\
$15$ &$I_5 +I_3 +I_8 +8I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_1 x_1^2 (\lambda_4 + \lambda_3 x_1))$ \\
$16$ &$I_5 + I_{10} + 9I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_1 x_1^2 )$ \\
$17$ &$I_6 + I_2 +I_2 + I_6 + 8 I_1$ & $O': x_1\mapsto (x_1,0,0)$ \\
& & $Q: x_1\mapsto (x_1,x_1(\lambda_2+ \lambda_5 x_1) (\lambda_4+x_1),x_1 (\lambda_2+\lambda_5 x_1)(\lambda_4 +x_1)(\lambda_3+x_1 +x_1^2))$ \\
$18$ & $I_7 +I_3 +I_5 + 9I_1$ & $Q: x_1\mapsto (x_1,0,\lambda_1 x_1^3 (\lambda_5 +x_1) )$ \\ \hline\\ \end{longtable} }
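As a consistency check (not used in the proofs below), the configurations of Table 7 are compatible with the fact that the Euler numbers of the singular fibres of an elliptic $K3$ surface add up to $e(K3)=24$, a fibre of type $I_n$ contributing $n$, a fibre of type $I_n^*$ contributing $n+6$ and a fibre of type $IV^*$ contributing $8$. The following is a minimal bookkeeping sketch in Python; the fibre lists are transcribed from Table 7.
\begin{verbatim}
# Euler numbers of the Kodaira fibre types occurring in Table 7.
euler = {'I1': 1, 'I2': 2, 'I3': 3, 'I5': 5, 'I6': 6, 'I7': 7, 'I8': 8,
         'I9': 9, 'I10': 10, 'I11': 11, 'I1*': 7, 'I3*': 9, 'IV*': 8}

# Singular fibre configurations of pi_k, transcribed from Table 7.
fibres = {
     6: ['I8', 'I8'] + 8 * ['I1'],
     7: ['I3*', 'I8'] + 7 * ['I1'],
     8: ['I1*', 'I3', 'I8'] + 6 * ['I1'],
     9: ['I6', 'I10'] + 8 * ['I1'],
    10: ['I5', 'I11'] + 8 * ['I1'],
    11: ['IV*', 'I9'] + 7 * ['I1'],
    12: ['I6', 'I3', 'I9'] + 6 * ['I1'],
    13: ['I6', 'I2', 'I8'] + 8 * ['I1'],
    14: ['I7', 'I8'] + 9 * ['I1'],
    15: ['I5', 'I3', 'I8'] + 8 * ['I1'],
    16: ['I5', 'I10'] + 9 * ['I1'],
    17: ['I6', 'I2', 'I2', 'I6'] + 8 * ['I1'],
    18: ['I7', 'I3', 'I5'] + 9 * ['I1'],
}

# Each total must equal e(K3) = 24.
for k, config in fibres.items():
    assert sum(euler[f] for f in config) == 24, k
print('all configurations sum to 24')
\end{verbatim}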
From now on, for a $K3$ surface $S$, let us regard ${\rm NS}(S)$ as a sublattice of the $2$-homology group $H_2(S,\mathbb{Z}).$ For $P\in {\rm MW}(\pi,O)$, let $(P) \in {\rm NS}(S)$ denote the corresponding divisor.
For our elliptic fibrations $\pi_k$, a general fibre is denoted by $F$. For each $k\in \{6,\ldots, 18\}$, the dual graph of the sections and the fibres of $\pi_k$ is illustrated in Table 8. Here, each dot $\bullet$ ($\odot$, resp.) stands for a rational nodal curve (a general fibre of the elliptic fibration, resp.).
\begin{longtable}{lll} \caption{Dual graphs of the elliptic fibration $\pi_k$ coming from (\ref{EquationJac})}
\\ \\ \hline
$k$ & Dual graphs & Divisors in singular fibres
\\
\hline
\endhead
\raisebox{8.6em}{$6$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(-1,2)--(-1,4)--(0,5)--(1,4)--(1,2)--cycle;
\draw (3,1)--(8,1);
\draw (5,4)--(7,3); \draw (5,4)--(1,3);
\draw (3,5)--(8,5);
\draw (8,1)--(7,2)--(7,4)--(8,5)--(9,4)--(9,2)--cycle;
\draw (3,1)--(0,1);
\draw (3,5)--(0,5);
\draw (3,1)--(3,3.35);
\draw (3,5)--(3,3.65);
\draw (5,4)--(3,2.5);
\draw (0,1)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (-1,4)node{$\bullet$};
\draw (0,5)node{$\bullet$};
\draw (1,4)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (3,1)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (8,1)node{$\bullet$};
\draw (7,2)node{$\bullet$};
\draw (7,3)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (8,5)node{$\bullet$};
\draw (9,2)node{$\bullet$};
\draw (9,3)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (1,4)node[left]{$a_3$};
\draw (0,5)node[left]{$a_4$}; \draw (-1,4)node[left]{$a_5$};
\draw (-1,3)node[left]{$a_6$};
\draw (-1,2)node[left]{$a_7$};
\draw (0,1)node[left]{$a_0$}; \draw (5,4)node[below]{$(Q)$};
\draw (3,1)node[below]{$(O)$};
\draw (3,5)node[above]{$(O')$};
\draw (8,1)node[right]{$b_0$};
\draw (7,2)node[right]{$b_1$};
\draw (7,3)node[right]{$b_2$};
\draw (7,4)node[right]{$b_3$};
\draw (8,5)node[above]{$b_4$};
\draw (9,4)node[right]{$b_5$};
\draw (9,3)node[right]{$b_6$};
\draw (9,2)node[right]{$b_7$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_6^{-1}(0)=a_0+a_1+\cdots +a_7,\\ \pi_6^{-1}(\infty)=b_0+b_1+\cdots +b_7. \end{matrix}$}}\\
\raisebox{8.8em}{$7$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (-1,0)--(0,1)--(0,4)--(-1,5);
\draw (0,4)--(1,5);
\draw (0,1)--(1,0);
\draw (1,0)--(6,0);
\draw (3,0)--(3,5);
\draw (1,5)--(3,5);
\draw (3,5)--(5,4);
\draw (6,0)--(5,1)--(5,4)--(6,5)--(7,4)--(7,1)--cycle;
\draw (-1,0)node{$\bullet$};
\draw (0,1)node{$\bullet$};
\draw (0,2)node{$\bullet$};
\draw (0,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (-1,5)node{$\bullet$};
\draw (1,5)node{$\bullet$};
\draw (1,0)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (6,0)node{$\bullet$};
\draw (5,1)node{$\bullet$};
\draw (5,2.5)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (6,5)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2.5)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (-1,0)node[left]{$a_1$};
\draw (0,1)node[left]{$a_2$};
\draw (0,2)node[left]{$a_3$}; \draw (0,3)node[left]{$a_4$};
\draw (0,4)node[left]{$a_5$};
\draw (-1,5)node[left]{$a_6$};
\draw (1,5)node[left]{$a_7$};
\draw (1,0)node[left]{$a_0$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (6,0)node[right]{$b_0$};
\draw (5,1)node[right]{$b_1$};
\draw (5,2.5)node[right]{$b_2$};
\draw (5,4)node[right]{$b_3$};
\draw (6,5)node[right]{$b_4$};
\draw (7,1)node[right]{$b_7$};
\draw (7,2.5)node[right]{$b_6$};
\draw (7,4)node[right]{$b_5$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_7^{-1}(0)=a_0+a_1+\cdots +a_7,\\ \pi_7^{-1}(\infty)=b_0+b_1+\cdots +b_7. \end{matrix}$}}\\
\raisebox{8.8em}{$8$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(1,2)--(1,3)--(0,4);
\draw (1,3)--(2,4);
\draw (1,2)--(2,1);
\draw (2,1)--(3,0)--(8,0);
\draw (3,0)--(3,5);
\draw (2,4)--(3,5);
\draw (3,5)--(4.3,3);
\draw (3,5)--(7,4);
\draw (5,2)--(4.3,3)--(5.7,3)--cycle;
\draw (3,0)--(5,2);
\draw (8,0)--(7,1)--(7,4)--(8,5)--(9,4)--(9,1)--cycle;
\draw (0,1)node{$\bullet$};
\draw (2,1)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (2,4)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (4.3,3)node{$\bullet$};
\draw (5.7,3)node{$\bullet$};
\draw (8,0)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2.5)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (8,5)node{$\bullet$};
\draw (9,1)node{$\bullet$};
\draw (9,2.5)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (0,1)node[left]{$a_1$};
\draw (1,2)node[left]{$a_2$};
\draw (1,3)node[left]{$a_3$}; \draw (2,4)node[left]{$a_5$};
\draw (0,4)node[left]{$a_4$};
\draw (2,1)node[left]{$a_0$};
\draw (5,2)node[right]{$b_0$};
\draw (4.5,3)node[above]{$b_1$};
\draw (5.7,3)node[right]{$b_2$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (8,0)node[right]{$c_0$};
\draw (7,1)node[right]{$c_1$};
\draw (7,2.5)node[right]{$c_2$};
\draw (7,4)node[right]{$c_3$};
\draw (7,5)node[right]{$c_4$};
\draw (9,1)node[right]{$c_7$};
\draw (9,2.5)node[right]{$c_6$};
\draw (9,4)node[right]{$c_5$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_8^{-1}(0)=a_0+a_1+\cdots +a_5,\\ \hspace{-5mm} \pi_8^{-1} (-\lambda_2)=b_0 +b_1 +b_2, \\ \pi_8^{-1}(\infty)=c_0+c_1+\cdots +c_7. \end{matrix}$}}\\
\raisebox{8.8em}{$9$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(-1,2)--(-1,3)--(0,4)--(1,3)--(1,2)--cycle;
\draw (3,0)--(8,0);
\draw (5,2)--(7,3); \draw (5,2)--(1,2);
\draw (3,5)--(8,5);
\draw (8,0)--(7,1)--(7,4)--(8,5)--(9,4)--(9,1)--cycle;
\draw (3,0)--(0,1);
\draw (3,5)--(0,4);
\draw (3,1.8)--(3,0);
\draw (3,2.1)--(3,5);
\draw (5,2)--(3,2.5);
\draw (0,1)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (8,0)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2)node{$\bullet$};
\draw (7,3)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (8,5)node{$\bullet$};
\draw (9,1)node{$\bullet$};
\draw (9,2)node{$\bullet$};
\draw (9,3)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (0,4)node[left]{$a_3$};
\draw (-1,3)node[left]{$a_4$};
\draw (-1,2)node[left]{$a_5$};
\draw (0,1)node[left]{$a_0$}; \draw (5,2)node[below]{$(Q)$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(O')$};
\draw (8,0)node[right]{$b_0$};
\draw (7,1)node[right]{$b_1$};
\draw (7,2)node[right]{$b_2$};
\draw (7,3)node[right]{$b_3$};
\draw (7,4)node[right]{$b_4$};
\draw (8,5)node[above]{$b_5$};
\draw (9,4)node[right]{$b_6$};
\draw (9,3)node[right]{$b_7$};
\draw (9,2)node[right]{$b_8$};
\draw (9,1)node[right]{$b_9$};
\draw (3,2.88)node[right]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_9^{-1}(0)=a_0+a_1+\cdots +a_5,\\ \pi_9^{-1}(\infty)=b_0+b_1+\cdots +b_9. \end{matrix}$}}\\
\raisebox{8.8em}{$10$} &
{\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(1,2)--(1,3)--(-1,3)--(-1,2)--cycle;
\draw (0,1)--(3,0)--(6,0);
\draw (3,0)--(3,5);
\draw (1,3)--(3,5);
\draw (3,5)--(5,4);
\draw (6,0)--(5,1)--(5,5)--(7,5)--(7,1)--cycle;
\draw (0,1)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (6,0)node{$\bullet$};
\draw (5,1)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (5,3)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (5,5)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2)node{$\bullet$}; \draw (7,3)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (7,5)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (0.7,3)node[above]{$a_2$}; \draw (-1,3)node[left]{$a_3$};
\draw (-1,2)node[left]{$a_4$};
\draw (0,1)node[left]{$a_0$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (6,0)node[right]{$b_0$};
\draw (5,1)node[right]{$b_1$};
\draw (5,2)node[right]{$b_2$}; \draw (5,3)node[right]{$b_3$};
\draw (5,4)node[right]{$b_4$};
\draw (5,5)node[above]{$b_5$};
\draw (7,1)node[right]{$b_{10}$};
\draw (7,2)node[right]{$b_9$};
\draw (7,3)node[right]{$b_8$};
\draw (7,4)node[right]{$b_7$};
\draw (7,5)node[right]{$b_6$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{10}^{-1}(0)=a_0+a_1+\cdots +a_4,\\ \hspace{2mm} \pi_{10}^{-1}(\infty)=b_0+b_1+\cdots +b_{10}. \end{matrix}$}}\\
\raisebox{8.8em}{$11$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,0)--(0,4);
\draw (0,2)--(-2,2);
\draw (3,0)--(3,5);
\draw (3,5)--(5,4);
\draw (0,0)--(3,0);
\draw (0,4)--(3,5);
\draw (3,0)--(6,0);
\draw (6,0)--(5,1)--(5,4)--(7,4)--(7,1)--cycle;
\draw (0,0)node{$\bullet$};
\draw (0,1)node{$\bullet$};
\draw (0,2)node{$\bullet$};
\draw (0,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (-1,2)node{$\bullet$}; \draw (-2,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (6,0)node{$\bullet$};
\draw (5,1)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (5,3)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2)node{$\bullet$}; \draw (7,3)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (0,2)node[right]{$a_2$};
\draw (0,1)node[left]{$a_1$}; \draw (0,3)node[left]{$a_3$};
\draw (0,4)node[left]{$a_4$};
\draw (0,0)node[left]{$a_0$};
\draw (-1,2)node[above]{$a_5$};
\draw (-2,2)node[above]{$a_6$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (6,0)node[right]{$b_0$};
\draw (5,1)node[right]{$b_1$};
\draw (5,2)node[right]{$b_2$}; \draw (5,3)node[right]{$b_3$};
\draw (5,4.4)node[right]{$b_4$};
\draw (7,1)node[right]{$b_{8}$};
\draw (7,2)node[right]{$b_7$};
\draw (7,3)node[right]{$b_6$};
\draw (7,4)node[right]{$b_5$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{11}^{-1}(0)=a_0+a_1+\cdots +a_6,\\ \pi_{11}^{-1}(\infty)=b_0+b_1+\cdots +b_8. \end{matrix}$}}\\
\raisebox{8.8em}{$12$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(-1,2)--(-1,3)--(0,4)--(1,3)--(1,2)--cycle;
\draw (3,0)--(8,0);
\draw (3,0)--(3,5);
\draw (3,5)--(4.3,3);
\draw (3,5)--(7,3);
\draw (5,2)--(4.3,3)--(5.7,3)--cycle;
\draw (3,0)--(5,2);
\draw (8,0)--(7,1)--(7,4)--(9,4)--(9,1)--cycle;
\draw (3,0)--(0,1);
\draw (3,5)--(1,3);
\draw (0,1)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (4.3,3)node{$\bullet$};
\draw (5.7,3)node{$\bullet$};
\draw (8,0)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2)node{$\bullet$};
\draw (7,3)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (9,1)node{$\bullet$};
\draw (9,2)node{$\bullet$};
\draw (9,3)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (0,4)node[left]{$a_3$}; \draw (-1,3)node[left]{$a_4$};
\draw (-1,2)node[left]{$a_5$};
\draw (0,1)node[left]{$a_0$};
\draw (5,2)node[right]{$b_0$};
\draw (4.5,3)node[above]{$b_1$};
\draw (5.7,3)node[right]{$b_2$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (8,0)node[right]{$c_0$};
\draw (7,1)node[right]{$c_1$};
\draw (7,2)node[right]{$c_2$};
\draw (7,3)node[right]{$c_3$};
\draw (7,4)node[above]{$c_4$};
\draw (9,4)node[right]{$c_5$};
\draw (9,3)node[right]{$c_6$};
\draw (9,2)node[right]{$c_7$};
\draw (9,1)node[right]{$c_8$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{12}^{-1}(0)=a_0+a_1+\cdots +a_5,\\ \hspace{-5mm} \pi_{12}^{-1} (-\lambda_2)=b_0 +b_1 +b_2, \\ \pi_{12}^{-1}(\infty)=c_0+c_1+\cdots +c_8. \end{matrix}$}} \\
\raisebox{8.8em}{$13$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(-1,2)--(-1,3)--(0,4)--(1,3)--(1,2)--cycle;
\draw (3,0)--(9,0);
\draw (9,0)--(8,1)--(8,3)--(9,4)--(10,3)--(10,1)--cycle;
\draw (3,0)--(0,1);
\draw (3,5)--(0,4);
\draw (3,2.9)--(3,0);
\draw (3,3.1)--(3,5);
\draw (5,4)--(1,2);
\draw (3,0)--(6,1);
\draw (5,4)--(6,2);
\draw (6,1)--(6,2);
\draw (4.6,3.4)--(6,2);
\draw (3,5)--(4.2,3.8);
\draw (3,5)--(9,4);
\draw (3,2.5)--(5,4);
\draw (5,4)--(8,2);
\draw (0,1)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (6,1)node{$\bullet$}; \draw (6,2)node{$\bullet$};
\draw (9,0)node{$\bullet$};
\draw (8,1)node{$\bullet$};
\draw (8,2)node{$\bullet$};
\draw (8,3)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (10,1)node{$\bullet$};
\draw (10,2)node{$\bullet$};
\draw (10,3)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (0,4)node[left]{$a_3$};
\draw (-1,3)node[left]{$a_4$};
\draw (-1,2)node[left]{$a_5$};
\draw (0,1)node[left]{$a_0$}; \draw (5,4)node[right]{$(Q)$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(O')$};
\draw (6,1)node[right]{$b_0$};
\draw (6,2)node[right]{$b_1$};
\draw (9,0)node[right]{$c_0$};
\draw (8,1)node[right]{$c_1$};
\draw (8,2)node[right]{$c_2$};
\draw (8,3)node[right]{$c_3$};
\draw (9,4)node[right]{$c_4$};
\draw (10,3)node[right]{$c_5$};
\draw (10,2)node[right]{$c_6$};
\draw (10,1)node[right]{$c_7$};
\draw (3,2.3)node[right]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{13}^{-1}(0)=a_0+a_1+\cdots +a_5,\\ \hspace{-12.5mm} \pi_{13}^{-1} (-\frac{\lambda_4}{\lambda_3})=b_0 +b_1, \\ \pi_{13}^{-1}(\infty)=c_0+c_1+\cdots +c_7. \end{matrix}$ }}\\
\raisebox{8.8em}{$14$} &
{\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(1,2)--(1,4)--(-1,4)--(-1,2)--cycle;
\draw (0,1)--(3,0)--(6,0);
\draw (3,0)--(3,5);
\draw (1,4)--(3,5);
\draw (3,5)--(5,4);
\draw (6,0)--(5,1)--(5,4)--(6,5)--(7,4)--(7,1)--cycle;
\draw (0,1)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,4)node{$\bullet$};
\draw (-1,4)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (6,0)node{$\bullet$};
\draw (5,1)node{$\bullet$};
\draw (5,2.5)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (6,5)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2.5)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (0.8,4)node[above]{$a_3$}; \draw (-1,4)node[left]{$a_4$};
\draw (-1,3)node[left]{$a_5$};
\draw (-1,2)node[left]{$a_6$};
\draw (0,1)node[left]{$a_0$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (6,0)node[right]{$b_0$};
\draw (5,1)node[right]{$b_1$};
\draw (5,2.5)node[right]{$b_2$};
\draw (5,4)node[right]{$b_3$};
\draw (6,5)node[right]{$b_4$};
\draw (7,1)node[right]{$b_7$};
\draw (7,2.5)node[right]{$b_6$};
\draw (7,4)node[right]{$b_5$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{14}^{-1}(0)=a_0+a_1+\cdots +a_6,\\ \pi_{14}^{-1}(\infty)=b_0+b_1+\cdots +b_7. \end{matrix}$}} \\
\raisebox{8.8em}{$15$} & {\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(1,2)--(1,3)--(-1,3)--(-1,2)--cycle;
\draw (0,1)--(3,0);
\draw (3,0)--(8,0);
\draw (3,0)--(3,5);
\draw (3,5)--(4.3,3);
\draw (3,5)--(7,4);
\draw (5,2)--(4.3,3)--(5.7,3)--cycle;
\draw (3,0)--(5,2);
\draw (1,3)--(3,5);
\draw (8,0)--(7,1)--(7,4)--(8,5)--(9,4)--(9,1)--cycle;
\draw (0,1)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (4.3,3)node{$\bullet$};
\draw (5.7,3)node{$\bullet$};
\draw (8,0)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2.5)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (8,5)node{$\bullet$};
\draw (9,1)node{$\bullet$};
\draw (9,2.5)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (0.9,3)node[above]{$a_2$};
\draw (-1,3)node[left]{$a_3$}; \draw (-1,2)node[left]{$a_4$};
\draw (0,1)node[left]{$a_0$};
\draw (5,2)node[right]{$b_0$};
\draw (4.5,3)node[above]{$b_1$};
\draw (5.7,3)node[right]{$b_2$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (8,0)node[right]{$c_0$};
\draw (7,1)node[right]{$c_1$};
\draw (7,2.5)node[right]{$c_2$};
\draw (7,4)node[right]{$c_3$};
\draw (7,5)node[right]{$c_4$};
\draw (9,1)node[right]{$c_7$};
\draw (9,2.5)node[right]{$c_6$};
\draw (9,4)node[right]{$c_5$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{15}^{-1}(0)=a_0+a_1+\cdots +a_4,\\ \hspace{-5mm} \pi_{15}^{-1} (-\lambda_2)=b_0 +b_1 +b_2, \\ \pi_{15}^{-1}(\infty)=c_0+c_1+\cdots +c_7. \end{matrix}$}}\\
\raisebox{8.8em}{$16$} &
{\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(1,2)--(1,3)--(-1,3)--(-1,2)--cycle;
\draw (0,1)--(3,0)--(6,0);
\draw (3,0)--(3,5);
\draw (1,3)--(3,5);
\draw (3,5)--(5,4);
\draw (6,0)--(5,1)--(5,4)--(6,5)--(7,4)--(7,1)--cycle;
\draw (0,1)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (6,0)node{$\bullet$};
\draw (5,1)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (5,3)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (6,5)node{$\bullet$};
\draw (7,1)node{$\bullet$};
\draw (7,2)node{$\bullet$}; \draw (7,3)node{$\bullet$};
\draw (7,4)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (0.7,3)node[above]{$a_2$}; \draw (-1,3)node[left]{$a_3$};
\draw (-1,2)node[left]{$a_4$};
\draw (0,1)node[left]{$a_0$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (6,0)node[right]{$b_0$};
\draw (5,1)node[right]{$b_1$};
\draw (5,2)node[right]{$b_2$}; \draw (5,3)node[right]{$b_3$};
\draw (5,4)node[right]{$b_4$};
\draw (6,5)node[right]{$b_5$};
\draw (7,1)node[right]{$b_9$};
\draw (7,2)node[right]{$b_8$};
\draw (7,3)node[right]{$b_7$};
\draw (7,4)node[right]{$b_6$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{16}^{-1}(0)=a_0+a_1+\cdots +a_4,\\ \pi_{16}^{-1}(\infty)=b_0+b_1+\cdots +b_9. \end{matrix}$}} \\
\raisebox{8.8em}{$17$} &
{\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(-1,2)--(-1,3)--(0,4)--(1,3)--(1,2)--cycle;
\draw (3,0)--(9,1);
\draw (9,1)--(8,2)--(8,3)--(9,4)--(10,3)--(10,2)--cycle;
\draw (3,0)--(0,1);
\draw (3,5)--(0,4);
\draw (3,2.9)--(3,0);
\draw (3,3.1)--(3,5);
\draw (5,4)--(1,2);
\draw (3,0)--(6,1);
\draw (5,4)--(6,2);
\draw (6,1)--(6,2);
\draw (3,0)--(4.5,1);
\draw (4.5,1)--(4.5,2);
\draw (4.55,3.45)--(4.7,3.3);
\draw (4.9,3.1)--(6,2);
\draw (3,5)--(4.2,3.8);
\draw (3,5)--(9,4);
\draw (3,2.5)--(5,4);
\draw (5,4)--(8,2);
\draw (3,5)--(3.7,3.6);
\draw (4,3)--(4.5,2);
\draw (3.84,3.32)--(3.88,3.24);
\draw (5,4)--(4.5,2);
\draw (0,1)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (0,4)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,4)node{$\bullet$};
\draw (4.5,1)node{$\bullet$}; \draw (4.5,2)node{$\bullet$};
\draw (6,1)node{$\bullet$}; \draw (6,2)node{$\bullet$};
\draw (9,1)node{$\bullet$};
\draw (8,2)node{$\bullet$};
\draw (8,3)node{$\bullet$};
\draw (9,4)node{$\bullet$};
\draw (10,2)node{$\bullet$};
\draw (10,3)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (0,4)node[left]{$a_3$};
\draw (-1,3)node[left]{$a_4$};
\draw (-1,2)node[left]{$a_5$};
\draw (0,1)node[left]{$a_0$}; \draw (5,4)node[right]{$(Q)$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(O')$};
\draw (4.5,1)node[right]{$b_0$};
\draw (4.5,2)node[right]{$b_1$};
\draw (6,1)node[right]{$c_0$};
\draw (6,2)node[right]{$c_1$};
\draw (9,1)node[right]{$d_0$};
\draw (8,2)node[right]{$d_1$};
\draw (8,3)node[right]{$d_2$};
\draw (9,4)node[right]{$d_3$};
\draw (10,3)node[right]{$d_4$};
\draw (10,2)node[right]{$d_5$};
\draw (3,2.1)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{17}^{-1}(0)=a_0+a_1+\cdots +a_5,\\ \hspace{-12.5mm} \pi_{17}^{-1} (-\lambda_4)=b_0 +b_1, \\
\hspace{-11mm} \pi_{17}^{-1} (-\frac{\lambda_2}{\lambda_5})=c_0 +c_1,\\ \pi_{17}^{-1}(\infty)=d_0+d_1+\cdots +d_5. \end{matrix}$}}\\
\raisebox{8.8em}{$18$} &
{\normalsize \begin{tikzpicture}[scale=0.5]
\draw (0,1)--(-1,2)--(-1,4)--(1,4)--(1,2)--cycle;
\draw (3,0)--(8,1);
\draw (3,0)--(3,5);
\draw (3,5)--(4.3,3);
\draw (3,5)--(7,3);
\draw (5,2)--(4.3,3)--(5.7,3)--cycle;
\draw (3,0)--(5,2);
\draw (8,1)--(7,2)--(7,3)--(9,3)--(9,2)--cycle;
\draw (3,0)--(0,1);
\draw (3,5)--(1,4);
\draw (0,1)node{$\bullet$};
\draw (-1,2)node{$\bullet$};
\draw (-1,3)node{$\bullet$};
\draw (-1,4)node{$\bullet$};
\draw (1,4)node{$\bullet$};
\draw (1,3)node{$\bullet$};
\draw (1,2)node{$\bullet$};
\draw (3,0)node{$\bullet$};
\draw (3,5)node{$\bullet$};
\draw (5,2)node{$\bullet$};
\draw (4.3,3)node{$\bullet$};
\draw (5.7,3)node{$\bullet$};
\draw (8,1)node{$\bullet$};
\draw (7,2)node{$\bullet$};
\draw (7,3)node{$\bullet$};
\draw (9,2)node{$\bullet$};
\draw (9,3)node{$\bullet$};
\draw (3,2.5)node{$\odot$};
\draw (1,2)node[left]{$a_1$};
\draw (1,3)node[left]{$a_2$};
\draw (1,4)node[above]{$a_3$}; \draw (-1,4)node[left]{$a_4$};
\draw (-1,3)node[left]{$a_5$};
\draw (-1,2)node[left]{$a_6$};
\draw (0,1)node[left]{$a_0$};
\draw (5,2)node[right]{$b_0$};
\draw (4.5,3)node[above]{$b_1$};
\draw (5.7,3)node[right]{$b_2$};
\draw (3,0)node[below]{$(O)$};
\draw (3,5)node[above]{$(Q)$};
\draw (8,1)node[right]{$c_0$};
\draw (7,2)node[right]{$c_1$};
\draw (7,3)node[above]{$c_2$};
\draw (9,3)node[right]{$c_3$};
\draw (9,2)node[right]{$c_4$};
\draw (3,2.5)node[left]{$F$}; \end{tikzpicture} }& \raisebox{5.0em}{{\normalsize $\begin{matrix} \pi_{18}^{-1}(0)=a_0+a_1+\cdots +a_6,\\ \hspace{-5mm} \pi_{18}^{-1} (-\lambda_5)=b_0 +b_1 +b_2, \\ \pi_{18}^{-1}(\infty)=c_0+c_1+\cdots +c_4. \end{matrix}$}}\\ \hline\\ \end{longtable}
\begin{prop}\label{PropEvident} For each $k\in \{6,\ldots, 18\},$ let $\ell_k$ be the number of vertices of the Fano polytope $P_k$,
$L_k$ be the lattice given in Table 1
and $S_k$ be a generic member of the family $\mathscr{F}_k$. Then, there is a sublattice $E_k$ of ${\rm NS}(S_k)$, which we call the evident lattice of $S_k$, satisfying ${\rm rank} (E_k) =23 - \ell_k$ and $|\det (E_k)| = |\det(L_k)|.$ The generators of $E_k$ are explicitly given in Table 9. \end{prop}
\begin{longtable}{llll}
\caption{\normalsize{Evident lattices $E_k$ }}
\\
\\ \hline
$k$ & $E_k$ & ${\rm rank}(E_k)$ & $|\det(E_k)|=|\det (L_k)|$
\\
\hline
\endhead
$6$ &$\langle F, (O), (Q), (O'), a_2 ,\ldots, a_7,b_1,\ldots,b_7 \rangle_\mathbb{Z}$ & \hspace{3.9mm} $17$ &\hspace{12.9mm} $ 16$ \\
$7$ & $\langle F, (O), (Q), a_1 ,\ldots, a_7,b_1,\ldots,b_7 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $17$ &\hspace{12.9mm} $12$\\
$8$ & $\langle F, (O), (Q), a_1 ,\ldots, a_5,b_1, b_2, c_1\ldots,c_7 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $17$ &\hspace{12.9mm} $20$ \\
$9$ & $\langle F, (O), (Q), (O'), a_1 ,\ldots,a_5, b_1,\ldots ,b_8 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $17$ &\hspace{12.9mm} $16$ \\
$10$ &$\langle F, (O),(Q), a_1 ,\ldots, a_4,b_1, \ldots,b_{10} \rangle_\mathbb{Z}$&\hspace{3.9mm} $17$ &\hspace{12.9mm} $14$ \\
$11$ &$\langle F, (O), (Q),a_1 ,\ldots, a_6,b_1, \ldots,b_{8} \rangle_\mathbb{Z}$ &\hspace{3.9mm} $17$ &\hspace{12.9mm} $12$ \\
$12$ &$\langle F, (O), (Q), a_1 ,\ldots, a_5, b_1, c_1\ldots,c_7 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $17$ &\hspace{12.9mm} $18$ \\
$13$ &$\langle F, (O), (Q),(O'), a_1 ,\ldots,a_4, b_1, c_1,\ldots ,c_7 \rangle_\mathbb{Z} $ &\hspace{3.9mm} $16$ &\hspace{12.9mm} $28$ \\
$14$ &$\langle F, (O), (Q),a_1 ,\ldots, a_6,b_1, \ldots,b_{7} \rangle_\mathbb{Z}$ &\hspace{3.9mm} $16$ &\hspace{12.9mm} $23$ \\
$15$ &$\langle F, (O),(Q), a_1 ,\ldots,a_4, b_1,b_2, c_1,\ldots ,c_7 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $16$ &\hspace{12.9mm} $31$ \\
$16$ &$\langle F, (O), (Q),a_1 ,\ldots, a_4,b_1, \ldots,b_9 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $16$ &\hspace{12.9mm} $20$ \\
$17$ &$\langle F, (O), (Q), (O'), a_1 ,\ldots, a_4,b_1,c_1,d_1, \ldots,d_5 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $15$ &\hspace{12.9mm} $48$ \\
$18$ & $\langle F, (O),(Q), a_1 ,\ldots,a_6, b_1,b_2, c_1,\ldots ,c_4 \rangle_\mathbb{Z}$ &\hspace{3.9mm} $15$ &\hspace{12.9mm} $44$ \\ \hline\\ \end{longtable}
\begin{proof} For an elliptic $K3$ surface $(S,\pi, \mathbb{P}^1(\mathbb{C}))$, a general fibre $F$ and a section $R$ of $\pi$ satisfy $(F \cdot F)=0$, $(R\cdot R)=-2$ and $(F \cdot R)=1$ (see \cite{Shioda}). Then, we can calculate the intersection matrix of the evident lattice $E_k$ for each $k$. (One can find explicit forms of the intersection matrices in \cite{M1}.) \end{proof}
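The rank and determinant computations behind Proposition \ref{PropEvident} and Table 9 are thus linear algebra on Gram matrices assembled from these intersection numbers and from the configurations in Tables 7 and 8. The following is a minimal computational sketch (in Python, using SymPy); the $3\times 3$ Gram matrix below, formed by a general fibre $F$, the zero section $(O)$ and a further section $(Q)$ with an assumed value $((O)\cdot (Q))=2$, is only a hypothetical illustration and is not one of the Gram matrices of the lattices $E_k$.
\begin{verbatim}
from sympy import Matrix

# Hypothetical Gram matrix, for illustration only: generators F, (O), (Q) with
# (F.F) = 0, (F.(O)) = (F.(Q)) = 1, ((O).(O)) = ((Q).(Q)) = -2 and an assumed
# intersection ((O).(Q)) = 2.  The Gram matrices of the evident lattices E_k are
# assembled in the same way from Tables 7 and 8 (see [M1] for explicit forms).
G = Matrix([
    [0,  1,  1],
    [1, -2,  2],
    [1,  2, -2],
])

print(G.rank(), abs(G.det()))   # rank and |det| of the lattice spanned by
                                # F, (O), (Q); here this prints 3 and 8
\end{verbatim}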
\section{Period mappings and Picard numbers}
For $k\in \{6,\ldots, 18\},$ take a generic point $\lambda^0 =(\lambda_1^0,\ldots, \lambda_{\ell_k-3}^0) \in (\mathbb{C}^\times)^{\ell_k-3}.$ We call $S_k^0=S_k (\lambda^0)$ a reference surface of the family $\mathscr{F}_k$ of (\ref{Family}). Let us introduce a marking for $S_k(\lambda^0)$ using the evident lattice $E_k$ in Proposition \ref{PropEvident}. By construction, $E_k$ is a sublattice of the N\'eron-Severi lattice ${\rm NS}(S_k^0)$. Let $\widehat{E}_k$ be the primitive closure of $E_k$ in ${\rm NS}(S_k^0)$: $\widehat{E}_k = (E_k \otimes_\mathbb{Z} \mathbb{Q}) \cap {\rm NS}(S_k^0)$. Take a basis $\{\Gamma_{k,1},\ldots, \Gamma_{k,23 -\ell_k}\}$ of $\widehat{E}_k$.
Let us identify $H_2 (S_k^0,\mathbb{Z}) $ with $L_{K3}$.
This identification gives an isometry
$\psi^0: H_2 (S_k^0,\mathbb{Z}) \rightarrow L_{K3}$.
Put $\mathcal{E}_k = \psi^0 \left(\widehat{E}_k \right)$ and $\gamma_{k,j} =\psi^0 \left(\Gamma_{k,j} \right)$ ($j \in\{1,\ldots, 23-\ell_k\}$). Then, $\mathcal{E}_k$ satisfies $\left(\psi^0\right)^{-1} (\mathcal{E}_k) \subset{\rm NS}(S_k^0)$. We can extend $\left\{\gamma_{k,1},\ldots, \gamma_{k,23-\ell_k}\right\}$ to a basis $\left\{\gamma_{k,1},\ldots, \gamma_{k,22} \right\}$ of $L_{K3}$.
Let $\left\{ \delta_{k,1},\ldots, \delta_{k,22} \right\}$ be the dual basis of $\left\{\gamma_{k,1},\ldots, \gamma_{k,22}\right\}$ with respect to the unimodular lattice $L_{K3}$.
There is a sufficiently small neighborhood $\mathcal{U}$ of $\lambda^0$ in $(\mathbb{C}^\times)^{\ell_k-3}$ with the trivialization
$\tau: \{S_k(\lambda) | \lambda\in \mathcal{U}\} \rightarrow S_k^0 \times \mathcal{U}$. Letting $\beta: S_k^0 \times \mathcal{U} \rightarrow S_k^0$ be the projection, set $r=\beta \circ \tau.$ Then, for $\lambda\in \mathcal{U}$,
$\left(r^{(\lambda)}_k \right) ' := r|_{S_k(\lambda)}: S_k(\lambda)\rightarrow S_k^0 $ gives a $\mathcal{C}^\infty $-isomorphism of compact complex surfaces. We obtain an isometry $\psi^{(\lambda)}= \psi^0 \circ \left(r^{(\lambda)}_k \right) '_{*} : H_2(S_k(\lambda),\mathbb{Z}) \rightarrow L_{K3}$ for $\lambda \in \mathcal{U}$. We call the pair $(S_k(\lambda),\psi^{(\lambda)})$ an $S$-marked $K3$ surface on $\mathcal{U}$. For $\lambda^1$ and $\lambda^2\in \mathcal{U}$, if there exists a biholomorphic mapping $f : S_k (\lambda^1) \rightarrow S_k(\lambda^2)$ satisfying
$\left( \psi^{\left( \lambda^2 \right)} \circ f_* \circ \left(\psi^{\left( \lambda^1 \right)} \right)^{-1} \right)\Big|_{\mathcal{E}_k} = {\rm id}_{\mathcal{E}_k}$, we say that $\left( S_k (\lambda^1),\psi^{\left( \lambda^1 \right)} \right)$ and $\left( S_k (\lambda^2), \psi^{\left( \lambda^2 \right)} \right)$ are equivalent as $S$-marked $K3$ surfaces.
\begin{rem} Let us make precise the meaning of ``generic''. In fact, there are certain loci in $(\mathbb{C}^\times)^{\ell_k -3}$ over which the elliptic $K3$ surfaces degenerate. For instance, for each $k$ one can find a locus on which two singular fibres of type $I_1$ (see Table 7) collapse into a fibre of type $I_2$. We take a generic point $\lambda^0 \in (\mathbb{C}^\times)^{\ell_k -3}$ avoiding such loci. Similarly, we take the neighborhood $\mathcal{U}$ of $\lambda^0$ so that it does not meet them. \end{rem}
Letting the notation be as above, we denote by $\textbf{A}_k$ the intersection matrix of the lattice generated by $\left\{ \delta_{k,23-\ell_k+1 },\ldots, \delta_{k,22}\right\}$. Set $\mathcal{D}_{\mathcal{E}_k}
=\{ \xi \in \mathbb{P}^{\ell_k-2} (\mathbb{C}) | \xi \textbf{A}_k {}^t \xi=0, \xi \textbf{A}_k {}^t \overline{\xi}>0\}$. We remark that $\mathcal{D}_{\mathcal{E}_k}$ has two connected components. Let $\mathcal{D}_k$ be a connected component of $\mathcal{D}_{\mathcal{E}_k}$. Then, we have the local period mapping $\Phi_k : \mathcal{U} \rightarrow \mathcal{D}_k$ given by \begin{align}\label{PhiInt} \lambda \mapsto \left(\int_{(\psi^{(\lambda)})^{-1}\left(\gamma_{k,23-\ell_k +1} \right) } \omega_k : \cdots : \int_{(\psi^{(\lambda)})^{-1}\left(\gamma_{k,22} \right) } \omega_k\right). \end{align} Here, $\omega_k$ is the unique holomorphic $2$-form on $S_k$ up to a constant factor.
Let $\pi: S\rightarrow \mathbb{P}^1 (\mathbb{C})$ be an elliptic fibration of a $K3$ surface $S$ with a general fibre $F$. If $\pi$ is defined by the Weierstrass equation $z^2 =4 y^3 -g_2(x) y - g_3(x)$, where $\pi $ is given by $(x,y,z)\mapsto x$, then the $j$-invariant is given by $j(x)=\frac{g_2^3(x)}{g_2^3(x)-27 g_3^2(x)}$. Suppose $(S,\pi,\mathbb{P}^1(\mathbb{C}))$ and $(S',\pi',\mathbb{P}^1(\mathbb{C}))$ are two elliptic surfaces such that there exists a biholomorphic mapping $f: S \rightarrow S'$ and $\varphi \in {\rm Aut}(\mathbb{P}^1(\mathbb{C}))$ satisfying $\varphi \circ \pi = \pi' \circ f$. Then, they are said to be isomorphic as elliptic fibrations. In this case, if $(S,\pi,\mathbb{P}^1(\mathbb{C}))$ ($(S',\pi',\mathbb{P}^1(\mathbb{C}))$, resp.) is defined by the Weierstrass equation with the $j$-invariant $j (x)$ ($j' (x)$, resp.), then $j' \circ \varphi = j$ holds (see \cite{Kod} or \cite{Ne}). Moreover, if $\pi^{-1} (x)$ is a singular fibre, then $\pi'^{-1} (\varphi (x))$ is a singular fibre of the same type.
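The passage from (\ref{EquationJac}) to such a Weierstrass form is the standard completion of the cube $y_1 = Y - a_1(x_1)/12$. The following is a minimal symbolic sketch (in Python, using SymPy) of this step and of the $j$-invariant in the normalization above; here $a_1,a_2,a_3$ are kept as abstract symbols, and for an actual $k$ one would substitute the polynomials of Table 3.
\begin{verbatim}
from sympy import symbols, expand, Poly, simplify

Y, y1 = symbols('Y y1')
a1, a2, a3 = symbols('a1 a2 a3')    # placeholders for the entries of Table 3

# Right hand side of the cubic relation; the shift y1 = Y - a1/12 removes the
# quadratic term and produces the Weierstrass form  z1^2 = 4 Y^3 - g2 Y - g3.
cubic = 4*y1**3 + a1*y1**2 + a2*y1 + a3
shifted = Poly(expand(cubic.subs(y1, Y - a1/12)), Y)

c3, c2, c1, c0 = shifted.all_coeffs()
assert c3 == 4 and c2 == 0
g2, g3 = -c1, -c0

# j-invariant in the normalization of the text; its denominator g2^3 - 27 g3^2
# is (up to a constant) the discriminant whose zeros locate the singular fibres.
j = simplify(g2**3 / (g2**3 - 27*g3**2))
print(simplify(g2), simplify(g3))
\end{verbatim}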
\begin{lem}\label{LemPsi} For $k\in \{6,\ldots,18\}$, let $\lambda^0$ be a general point of $(\mathbb{C}^\times)^{\ell_k -3}$ and $\mathcal{U} $ be a neighborhood of $\lambda^0$ as above. For each $k$, let $\pi_k^{(\lambda)}$ be the Jacobian elliptic fibration given by the equation (\ref{EquationJac}). Take $\lambda^1$ and $\lambda^2 \in \mathcal{U}$. If $\left(S_k(\lambda^1), \pi_k^{\left( \lambda^1 \right)}, \mathbb{P}^1 (\mathbb{C}) \right)$ and $\left(S_k(\lambda^2), \pi_k^{\left( \lambda^2 \right)}, \mathbb{P}^1 (\mathbb{C}) \right)$ are isomorphic as elliptic surfaces, then $\lambda^1 = \lambda^2$ holds. \end{lem}
\begin{proof} For each $k$, the elliptic fibration $\pi_k^{(\lambda)}$ has singular fibres for $\lambda \in \mathcal{U}$ as in Tables 7 and 8. By a direct calculation, we can transform the equation (\ref{EquationJac}) to a Weierstrass form and obtain the $j$-invariant $j_k^{(\lambda)}(x_1).$ There is a polynomial $\mathcal{P}_k ^{(\lambda)} (x_1) \in \mathbb{C}(\lambda)[x_1]$ such that $x_1 =\xi \in \mathbb{P}^1 (\mathbb{C}) - \{0,\infty\}$ is a root of $\mathcal{P}_k^{(\lambda)}(x_1)$ if and only if $\left(\pi_k^{(\lambda)} \right)^{-1} (\xi) $ gives a singular fibre. We can explicitly obtain this polynomial. In practice, $\mathcal{P}_k^{(\lambda)} (x_1) $ appears as an appropriate factor of the discriminant of the right hand side of (\ref{EquationJac}) with respect to the variable $y_1$. Note that $\mathcal{P}_k^{(\lambda)}(x_1)$ divides the denominator of $j_k^{(\lambda)}(x_1).$
Suppose that $f:S_k (\lambda^1) \rightarrow S_k(\lambda^2)$ is a biholomorphic mapping which induces an isomorphism of elliptic surfaces. Namely, there exists $\varphi \in {\rm Aut}(\mathbb{P}^1(\mathbb{C}))$ satisfying $\varphi \circ \pi_k^{\left( \lambda^1 \right)} = \pi_k^{\left( \lambda^2 \right)} \circ f$. The Kodaira type of the singular fibre $\left(\pi_k^{\left( \lambda^1 \right)} \right)^{-1} (0) $ ($\left(\pi_k^{\left( \lambda^1 \right)} \right)^{-1} (\infty) $, resp.) is the same as that of $\left(\pi_k^{\left( \lambda^2 \right)} \right)^{-1} (0) $ ($\left(\pi_k^{\left( \lambda^2 \right)} \right)^{-1} (\infty) $, resp.). Hence, it follows that $\varphi$ must have the form $x_1\mapsto \varphi(x_1)=\mu x_1$ for some $\mu \in \mathbb{C}^\times$. In particular, it holds that $j_k^{\left(\lambda^2 \right)}(x_1) = j_k^{\left( \lambda^1 \right)} (\mu x_1).$ Also, each root $\xi$ of $\mathcal{P}_k^{\left( \lambda^1 \right)}(x_1)$ is sent by $\varphi$ to a root of $\mathcal{P}_k^{\left( \lambda^2 \right)}(x_1)$ such that the Kodaira type of $\left(\pi_k^{\left( \lambda^1 \right)} \right)^{-1} (\xi) $ is the same as that of $\left(\pi_k^{\left( \lambda^2 \right)} \right)^{-1} (\mu \xi) $. Considering the above situation and observing the explicit forms of $\mathcal{P}_k^{(\lambda)}(x_1)$ and $j_k^{(\lambda)} (x_1)$ for each $k$, we can see that $\mu=1$ and $\lambda^1 = \lambda^2$. \end{proof}
\begin{lem}\label{LemSEquiv} For $k\in \{6,\ldots,18\}$, let $\lambda^0$ be a general point of $(\mathbb{C}^\times)^{\ell_k -3}$ and
$\mathcal{U} $ be a neighborhood of $\lambda^0$ as above.
Take the $S$-marked $K3$ surface $\left(S_k(\lambda), \psi^{(\lambda)} \right)$ for $\lambda \in \mathcal{U}$ as above. For $\lambda^1$ and $\lambda^2 \in \mathcal{U}$, $\left(S_k(\lambda^1),\psi^{\left( \lambda^1 \right)} \right)$ and $\left(S_k(\lambda^2),\psi^{\left( \lambda^2 \right)} \right)$ are equivalent as $S$-marked $K3$ surfaces if and only if $\lambda^1 = \lambda^2$ holds. \end{lem}
\begin{proof} For each $k$, let $F_k^{\left( \lambda^j \right)}$ $(j\in \{ 1,2 \})$ be a general fibre for the elliptic fibration $\pi_k^{\left( \lambda^j \right)}$. We assume that a biholomorphic mapping $f: S_k(\lambda^1) \rightarrow S_k(\lambda^2)$ induces an equivalence of $S$-marked $K3$ surfaces. Then, we have
$\left(\psi^{\left( \lambda^2 \right)} \circ f_* \circ \left( \psi^{\left( \lambda^1 \right)} \right)^{-1} \right)\Big|_{\mathcal{E}_k} = {\rm id}_{\mathcal{E}_k}$. This implies $f_* \left(F_k^{\left( \lambda^1 \right)}\right) = F_k^{\left( \lambda^2 \right)}$. Hence, $F_k^{\left( \lambda^2 \right)}$ is also a general fibre of the elliptic fibration $\pi_k^{\left( \lambda^1 \right)} \circ f^{-1}$ on $S_k(\lambda^2)$. Note that $K_{S_k(\lambda^2)}=0, H^1(S_k(\lambda^2),\mathcal{O}_{S_k(\lambda^2)})=0$ and $\left(F_k^{\left( \lambda^2 \right)}\cdot F_k^{\left( \lambda^2 \right)} \right)=0$ hold. Due to the Riemann-Roch theorem, we have $H^1 \left(S_k (\lambda^2),\mathcal{O}_{S_k(\lambda^2)} \left(F_k^{\left( \lambda^2 \right)}\right) \right)=0$ and ${\rm dim}H^0 \left(S_k(\lambda^2), \mathcal{O}_{S_k(\lambda^2) } \left(F_k^{\left( \lambda^2 \right)}\right)\right) =2$. Therefore, $1$ and $\pi_k^{\left( \lambda^2 \right)}$ generate $H^0 \left(S_k(\lambda^2),\mathcal{O}_{S_k(\lambda^2)} \left(F_k^{\left( \lambda^2 \right)}\right)\right)$ over $\mathbb{C}$. It follows that
$\pi_k^{\left( \lambda^2 \right)} $ coincides with $\pi_k^{\left( \lambda^1 \right)} \circ f^{-1}$ up to an action of ${\rm Aut}(\mathbb{P}^1(\mathbb{C}))$. Thus, $\left(S_k(\lambda^1),\psi^{\left( \lambda^1 \right)}\right)$ is equivalent to $\left(S_k(\lambda^2),\psi^{\left( \lambda^2 \right)} \right)$
as $S$-marked $K3$ surfaces if and only if $\left(S_k(\lambda^1),\pi_k^{\left( \lambda^1 \right)}, \mathbb{P}^1(\mathbb{C})\right)$ is isomorphic to $\left(S_k(\lambda^2),\pi_k^{\left( \lambda^2 \right)}, \mathbb{P}^1(\mathbb{C}) \right)$ as elliptic surfaces. According to Lemma \ref{LemPsi}, we have the assertion. \end{proof}
\begin{thm}\label{ThmPic} For each $k\in \{6,\ldots,18\},$ the Picard number of a generic member of $\mathscr{F}_k$ of (\ref{Family}) is equal to ${\rm rank}(E_k)=23-\ell_{k}$, where $E_k$ is the evident lattice in Proposition \ref{PropEvident}. \end{thm}
\begin{proof} For each $k\in \{6,\ldots, 18\}$, the lattice $E_k$ is a sublattice of ${\rm NS}(S_k(\lambda))$ for generic $\lambda$. Therefore, it holds that ${\rm rank}({\rm NS}(S_k(\lambda))) \geq 23-\ell_k$. Let us assume that the rank of ${\rm NS}(S_k(\lambda))$ is strictly greater than $23-\ell_k$. Then, the dimension of the image of our period mapping $\Phi_k$ of (\ref{PhiInt}) must be smaller than $\ell_k-3$. On the other hand, letting $\mathcal{U}$ be a small open set as in the above argument, by virtue of the local Torelli theorem (\cite{D} Section 3) for $K3$ surfaces polarized by the lattice $\mathcal{E}_k$, $\Phi_k (\lambda^1)=\Phi_k (\lambda^2)$ $(\lambda^1,\lambda^2\in \mathcal{U})$ holds if and only if $\left(S_k(\lambda^1),\psi^{\left( \lambda^1 \right)}\right)$ is equivalent to $\left(S_k(\lambda^2),\psi^{\left( \lambda^2 \right)} \right)$ as $S$-marked $K3$ surfaces. Together with Lemma \ref{LemSEquiv}, this shows that $\mathcal{U}\ni \lambda \mapsto \Phi_k (\lambda)\in \mathcal{D}_k$ is an injection, so the image of $\Phi_k$ has dimension $\ell_k-3$. This contradicts the assumption. \end{proof}
\section{Lattice structures}
Let $L$ be a non-degenerate even lattice. Setting $L^\vee ={\rm Hom}(L,\mathbb{Z}),$ the group $\mathscr{A}_L = L^\vee /L$ is a finite abelian group. The length $l(\mathscr{A}_L)$ of $\mathscr{A}_L$ is defined as the minimum number of generators of $\mathscr{A}_L.$ The intersection form of $L$ induces a quadratic form $q_L:\mathscr{A}_L \rightarrow \mathbb{Q}/2\mathbb{Z}.$ This $q_L$ is called the discriminant form of $L$. We note that $q_{L_1 \oplus L_2} \simeq q_{L_1} \oplus q_{L_2}$ holds. If the signature of $L$ is $(s,t)$, the triple $(s,t,q_L)$ is called the invariant of $L$.
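As a simple illustration of these notions, if $L=\mathbb{Z}v$ is of rank one with $(v\cdot v)=2N$ for a non-zero integer $N$, then $L^\vee=\frac{1}{2N}\mathbb{Z}v$, so $\mathscr{A}_L\simeq \mathbb{Z}/2N\mathbb{Z}$, $l(\mathscr{A}_L)=1$ and $q_L\left(\frac{k}{2N}v\right)=\frac{k^2}{2N} \in \mathbb{Q}/2\mathbb{Z}$.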
\begin{prop} (\cite{Ni} Proposition 1.6.1)\label{PropLatticeOrthogonal} Let $L$ be a non-degenerate even lattice. Suppose $L$ is a primitive lattice in a unimodular lattice $\widehat{L}$. Letting $L^\perp$ be the orthogonal complement of $L$ in $\widehat{L}$, $q_{L^\perp} \simeq -q_{L}$ holds.
\end{prop}
\begin{prop} (\cite{Ni} Corollary 1.13.1) \label{PropQI} Let $L$ be a non-degenerate even lattice with the invariant $(s,t,q_L).$ Suppose $s>0$, $t>0$ and $l(\mathscr{A}_L)\leq {\rm rank}(L)-2$ hold. Then, $L$ is the unique lattice with the invariant $(s,t,q_L)$ up to isometry. \end{prop}
\begin{prop} (\cite{Mor} Lemma 2.3) \label{PropMor} Let $L_1$ and $L_2$ be even lattices with the same invariant. Let $\widehat{L}$ be an even lattice which is uniquely determined by its invariant in the sense of Proposition \ref{PropQI}. Suppose that there is a primitive embedding $L_1 \hookrightarrow \widehat{L}.$ Then, there exists a primitive embedding $L_2 \hookrightarrow \widehat{L}.$ \end{prop}
We will apply these arithmetic results for lattices to our N\'eron-Severi lattices of toric $K3$ hypersurfaces.
\begin{lem}\label{LemFinAbel} For $k\in\{6,\ldots, 18\},$ let $L_k$ be the lattice of Table 1 and $E_k$ be the evident lattice of Proposition \ref{PropEvident}. The finite abelian group $\mathscr{A}_{E_k}$ is isomorphic to the group $\mathscr{A}_{L_k}.$ More precisely, letting $\mathcal{G}_k$ be the finite abelian group given in Table 10, one has $\mathscr{A}_{E_k} \simeq \mathscr{A}_{L_k} \simeq \mathcal{G}_k$. Moreover, $q_{E_k} \simeq - q_{L_k}$ holds. \end{lem}
\begin{longtable}{ll} \caption{Finite abelian groups $\mathcal{G}_k$}
\\ \\ \hline
$k$ & $\mathcal{G}_k$
\\
\hline
\endhead
$6$ & $(\mathbb{Z}/4\mathbb{Z})\oplus (\mathbb{Z}/2\mathbb{Z}) \oplus (\mathbb{Z}/2 \mathbb{Z})$ \\
$7$ & $\mathbb{Z}/12 \mathbb{Z}$\\
$8$ & $\mathbb{Z}/10 \mathbb{Z}$ \\
$9$ & $\mathbb{Z}/16 \mathbb{Z}$ \\
$10$ & $\mathbb{Z}/14 \mathbb{Z}$\\
$11$ & $\mathbb{Z}/12 \mathbb{Z}$\\
$12$ & $\mathbb{Z}/18 \mathbb{Z}$\\
$13$ & $(\mathbb{Z}/14 \mathbb{Z})\oplus (\mathbb{Z}/2 \mathbb{Z})$ \\
$14$ & $\mathbb{Z}/23 \mathbb{Z}$\\
$15$ & $ \mathbb{Z}/31 \mathbb{Z}$\\
$16$ & $(\mathbb{Z}/10\mathbb{Z})\oplus (\mathbb{Z}/2 \mathbb{Z})$ \\
$17$ & $(\mathbb{Z}/12 \mathbb{Z}) \oplus (\mathbb{Z}/2 \mathbb{Z})\oplus (\mathbb{Z}/2 \mathbb{Z})$ \\
$18$ & $\mathbb{Z}/44 \mathbb{Z}$\\ \hline\\ \end{longtable}
\begin{proof} For each $k\in\{6,\ldots, 18\},$ let $\{v_1,\ldots,v_{\ell_k-3}\}$ be the basis of $L_k$ corresponding to the intersection matrix of Table 1, where $\ell_k$ is the number of vertices of $P_k$. Then, we can find an element $\alpha \in L_k^\vee / L_k$ of the form $\alpha = \sum_{i=1}^{\ell_k-3} c_i v_i $ $(c_i\in \mathbb{Q}/\mathbb{Z})$, such that $\alpha $ generates a cyclic group which is a direct summand of $\mathcal{G}_k$. In fact, the coefficients $c_1,\ldots,c_{\ell_{k} -3 }$ are given in Table 11.
\begin{longtable}{llll} \caption{Coefficients $c_1,\ldots,c_{\ell_k-3}$ and discriminants $q_{L_k}(\alpha) $}
\\ \\ \hline
$k$ & Cyclic groups & Coefficients $c_1,\ldots,c_{\ell_k-3}$ & $q_{L_k} (\alpha)$
\\
\hline
\endhead
$6$ &$ \mathbb{Z}/4\mathbb{Z}$ & $\frac{3}{4},\frac{1}{4},\frac{1}{4}$ & $\frac{7}{4}$
\\
&$\mathbb{Z}/2\mathbb{Z}$ & $0, \frac{1}{2}, 0$ & $0$
\\
&$\mathbb{Z}/2\mathbb{Z}$ & $\frac{1}{2}, \frac{1}{2}, 0 $ & $1$
\\
$7$ &$ \mathbb{Z}/12\mathbb{Z}$ & $\frac{1}{6}, \frac{11}{12}, \frac{5}{12}$ & $\frac{23}{12}$
\\
$8$ & $ \mathbb{Z}/20\mathbb{Z}$ & $\frac{1}{10},\frac{7}{20},\frac{19}{20}$ & $\frac{39}{20}$
\\
$9$ & $ \mathbb{Z}/16\mathbb{Z}$ & $\frac{3}{8},\frac{1}{8},\frac{15}{16}$ & $\frac{31}{16}$
\\
$10$ & $ \mathbb{Z}/14\mathbb{Z}$ & $\frac{2}{7},\frac{5}{14},\frac{3}{14}$ & $ \frac{3}{14}$
\\
$11$ & $ \mathbb{Z}/12\mathbb{Z}$ & $\frac{7}{12},\frac{11}{12},\frac{1}{4} $ & $\frac{19}{12}$
\\
$12$ & $ \mathbb{Z}/18\mathbb{Z}$ & $\frac{13}{18},\frac{8}{9},\frac{1}{3} $ & $\frac{31}{18}$
\\
$13$ & $ \mathbb{Z}/14\mathbb{Z}$ & $\frac{2}{7},\frac{5}{7},\frac{3}{14},\frac{1}{14}$ & $\frac{12}{7}$
\\
& $ \mathbb{Z}/2\mathbb{Z}$ & $0,\frac{1}{2},0,\frac{1}{2}$ & $0$
\\
$14$ &$ \mathbb{Z}/23\mathbb{Z}$ & $\frac{9}{23},\frac{17}{23},\frac{5}{23},\frac{1}{23}$ & $\frac{40}{23}$
\\
$15$ & $ \mathbb{Z}/31\mathbb{Z}$ & $ \frac{13}{31},\frac{5}{31},\frac{12}{31},\frac{14}{31} $ & $\frac{44}{31}$
\\
$16$ & $ \mathbb{Z}/10\mathbb{Z}$ & $\frac{3}{10},\frac{1}{5},\frac{7}{10},\frac{1}{10} $ & $\frac{17}{10}$
\\
& $ \mathbb{Z}/2\mathbb{Z}$ & $\frac{1}{2},\frac{1}{2},0,\frac{1}{2} $ & $\frac{1}{2}$\\
$17$ & $ \mathbb{Z}/12\mathbb{Z}$ & $\frac{1}{6},\frac{5}{6},\frac{1}{3},\frac{5}{12},\frac{5}{12} $ & $\frac{17}{12}$
\\
& $ \mathbb{Z}/2\mathbb{Z}$ & $0,0,\frac{1}{2},0,\frac{1}{2} $ & $0$
\\
&$ \mathbb{Z}/2\mathbb{Z}$ & $0,\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2}$ & $1$
\\
$18$ &$ \mathbb{Z}/44\mathbb{Z}$ & $\frac{13}{44},\frac{5}{44},\frac{27}{44},\frac{31}{44},\frac{25}{44}$ & $\frac{57}{44}$
\\ \hline\\ \end{longtable}
On the other hand, let $\{w_1,\ldots,w_{23-\ell_k}\}$ be the basis of $E_k$ described in Table 9. We can take $\beta\in E_k^\vee/E_k$ with $\beta = \sum_{i=1}^{23-\ell_k} d_i w_i $ $(d_i\in \mathbb{Q}/\mathbb{Z})$ such that $\beta$ generates a direct summand of $\mathcal{G}_k$. Here, the coefficients $d_1,\ldots,d_{23-\ell_k}$ are given in Table 12.
\begin{longtable}{llll} \caption{Coefficients $d_1,\ldots,d_{23-\ell_k}$ and discriminants $q_{E_k}(\beta)$}
\\ \\ \hline
$k$ &Cyclic groups & Coefficients $d_1,\ldots,d_{23-\ell_k}$ & $q_{E_k} (\beta)$
\\
\hline
\endhead
$6$ &$ \mathbb{Z}/4\mathbb{Z}$ & $\frac{1}{2},\frac{1}{4},\frac{1}{2},\frac{1}{4},0,\frac{1}{2},0,\frac{1}{4},\frac{1}{2},\frac{3}{4},\frac{3}{4},\frac{1}{2},\frac{3}{4},0,0,0,0 $ & $\frac{1}{4}$
\\
&$ \mathbb{Z}/2\mathbb{Z}$& $0,0,0,0,0,0,0,0,0,0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2} $ & $0$
\\
&$ \mathbb{Z}/2\mathbb{Z}$ & $0,\frac{1}{2},\frac{1}{2},0,0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2},\frac{1}{2},0,0,0,0,0,0 $ & $1$
\\
$7$ &$ \mathbb{Z}/12\mathbb{Z}$ & $\frac{2}{3}, \frac{1}{3}, \frac{2}{3}, \frac{5}{6}, \frac{2}{3}, \frac{1}{2}, \frac{1}{3}, \frac{1}{6}, \frac{1}{12}, \frac{11}{12}, \frac{11}{12}, \frac{5}{6}, \frac{3}{4}, 0, \frac{1}{4}, \frac{1}{2}, \frac{3}{4}$ & $\frac{1}{12}$
\\
$8$ &$ \mathbb{Z}/20\mathbb{Z}$ & $\frac {1} {5}, \frac {3} {5}, \frac {2} {5}, \frac {7} {10}, \frac {2} {5}, \frac {1} {10}, \frac {1} {20}, \frac {3} {4}, \frac {3}{5}, \frac {4} {5}, \frac {3} {4}, \frac {1} {2}, \frac {1} {4},\frac {3} {5}, \frac {19} {20}, \frac {3} {10}, \frac {13} {20}$& $\frac{1}{20}$
\\
$9$ &$ \mathbb{Z}/16\mathbb{Z}$ & $\frac{3}{4},\frac{3}{8},\frac{9}{16},\frac{1}{16},\frac{1}{2},\frac{7}{16},\frac{3}{8},\frac{1}{4},\frac{1}{8},\frac{5}{8},\frac{1}{4},\frac{7}{8},\frac{15}{16},0,0,0,0$ & $\frac{1}{16}$
\\
$10$ &$ \mathbb{Z}/14\mathbb{Z}$ & $ \frac{5}{7},\frac{5}{14},\frac{9}{14},\frac{11}{14},\frac{4}{7},\frac{5}{7},\frac{6}{7},\frac{1}{2},0,\frac{1}{2},0,\frac{6}{7},\frac{5}{7},\frac{4}{7},\frac{3}{7},\frac{2}{7},\frac{1}{7} $ & $\frac{25}{14} $
\\
$11$ &$ \mathbb{Z}/12\mathbb{Z}$ & $ \frac{1}{2},\frac{3}{4},\frac{1}{4},\frac{1}{4},\frac{1}{2},\frac{5}{12},\frac{1}{3},\frac{1}{3},\frac{1}{6},\frac{11}{12},\frac{5}{6},\frac{3}{4},\frac{2}{3},\frac{1}{3},0,\frac{2}{3},\frac{1}{3} $ & $\frac{5}{12}$
\\
$12$ &$ \mathbb{Z}/18\mathbb{Z}$& $\frac{1}{3},\frac{2}{3},\frac{1}{3},\frac{1}{18},\frac{1}{9},\frac{5}{6},\frac{5}{9},\frac{5}{18},\frac{2}{9},\frac{1}{9},\frac{1}{3},\frac{2}{3},0,0,0,0,0 $ & $ \frac{5}{18}$
\\
$13$ &$ \mathbb{Z}/14\mathbb{Z}$& $\frac{3}{7},\frac{5}{7},\frac{13}{14},\frac{5}{14},\frac{2}{7},\frac{9}{14},0,0,\frac{1}{7},0,0,\frac{1}{14},\frac{1}{7},\frac{6}{7},\frac{4}{7},\frac{2}{7}$ & $\frac{2}{7}$
\\
&$ \mathbb{Z}/2\mathbb{Z}$ & $0,0,0,0,0,0,0,0,0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2} $ & $0$
\\
$14$ &$ \mathbb{Z}/23\mathbb{Z}$ & $\frac{6}{23},\frac{3}{23},\frac{20}{23},\frac{18}{23},\frac{13}{23},\frac{8}{23},\frac{6}{23},\frac{4}{23},\frac{2}{23},\frac{1}{23},\frac{2}{23},\frac{3}{23},\frac{7}{23},\frac{11}{23},\frac{15}{23},\frac{19}{23} $ & $\frac{6}{23}$
\\
$15$ &$ \mathbb{Z}/31\mathbb{Z}$ & $ \frac{28}{31},\frac{14}{31},\frac{17}{31},\frac{4}{31},\frac{8}{31},\frac{26}{31},\frac{13}{31},\frac{1}{31},\frac{16}{31},\frac{30}{31},\frac{29}{31},\frac{28}{31},\frac{10}{31},\frac{23}{31},\frac{5}{31},\frac{18}{31}$ & $\frac{18}{31}$
\\
$16$ &$ \mathbb{Z}/10\mathbb{Z}$& $0,\frac{1}{2},\frac{1}{2},\frac{3}{10},\frac{3}{5},\frac{2}{5},\frac{1}{5},\frac{1}{10},\frac{1}{5},\frac{3}{10},\frac{2}{5},0,\frac{3}{5},\frac{1}{5},\frac{4}{5},\frac{2}{5}$ & $\frac{3}{10}$
\\
&$ \mathbb{Z}/2\mathbb{Z}$ & $0,0,0,0,0,0,0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2},0,\frac{1}{2} $ & $\frac{3}{2}$ \\
$17$ &$ \mathbb{Z}/12\mathbb{Z}$ & $\frac{1}{3},\frac{1}{6},\frac{1}{4},\frac{7}{12},\frac{5}{6},\frac{5}{12},0,0,\frac{5}{12},\frac{5}{12},\frac{1}{2},\frac{3}{4},0,\frac{2}{3},\frac{1}{3}$ & $\frac{7}{12}$
\\
&$ \mathbb{Z}/2\mathbb{Z}$& $0, 0, 0, 0, 0, 0, 0, 0, 0, \frac{1}{2}, \frac{1}{2}, 0, \frac{1}{2}, 0, \frac{1}{2} $ & $0$
\\
&$ \mathbb{Z}/2\mathbb{Z}$& $0,0,0,0,0,0,0,0,\frac{1}{2},\frac{1}{2},0,0,0,0,0 $ & $1$
\\
$18$ &$ \mathbb{Z}/44\mathbb{Z}$ & $\frac{1}{22},\frac{1}{44},\frac{43}{44},\frac{3}{11},\frac{6}{11},\frac{9}{11},\frac{5}{44},\frac{9}{22},\frac{31}{44},\frac{7}{22},\frac{29}{44},\frac{17}{44},\frac{17}{22},\frac{2}{11},\frac{13}{22}$ & $\frac{31}{44}$
\\ \hline\\ \end{longtable}
By direct calculations, we can check that $$ q_{L_k} (\alpha)+q_{E_k}(\beta) \equiv 0 \quad (\text{mod } 2) $$ holds for the generators $\alpha$ and $\beta$ of each pair of corresponding direct summands of $\mathcal{G}_k$. This yields $q_{E_k} \simeq -q_{L_k}$. \end{proof}
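For example, for $k=7$, Tables 11 and 12 give $q_{L_7}(\alpha)=\frac{23}{12}$ and $q_{E_7}(\beta)=\frac{1}{12}$, so that $q_{L_7}(\alpha)+q_{E_7}(\beta)=2\equiv 0$ $(\text{mod } 2)$; the remaining cases are verified in the same way from the tables above.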
\begin{thm}\label{ThmNSEvident} For each $k\in \{6, \ldots, 18\},$ letting $S_k$ be a generic member of the family $\mathscr{F}_k$ of (\ref{Family}), the N\'eron-Severi lattice ${\rm NS}(S_k)$ is isometric to the evident lattice $E_k$ of Proposition \ref{PropEvident}. \end{thm}
\begin{proof} According to Proposition \ref{PropLatticeDual}, the lattice $L_k$ for each $k\in \{6,\ldots,18\}$ is isometric to the N\'eron-Severi lattice of the $K3$ surface $S_{P_k^\circ}$. Hence, there exists a primitive embedding $L_k \hookrightarrow L_{K3}$, and its orthogonal complement $L_k^\perp$ is a primitive sublattice of $L_{K3}$. Here, recalling Remark \ref{RemCheck},
$L_k^\perp$ has the form $U\oplus \widecheck{L}_k$. Therefore, $\widecheck{L}_k$ is a primitive sublattice of $L_{K3}$. The invariant of $\widecheck{L}_k$ can be calculated from the fact $q_{\widecheck{L}_k} \simeq q_{U\oplus \widecheck{L}_k} \simeq q_{L_k^\perp} \simeq -q_{L_k}$ (cf.\ Proposition \ref{PropLatticeOrthogonal}) and Table 11. Then, together with Lemma \ref{LemFinAbel},
we can see that the invariant of $\widecheck{L}_k$ coincides with that of $E_k$ for each $k$.
By virtue of Proposition \ref{PropMor},
there exists a primitive embedding $E_k \hookrightarrow L_{K3}$. Namely, $L_{K3}/E_k$ is a free abelian group.
On the other hand, $E_k$ is a sublattice of ${\rm NS}(S_k)$. From Theorem \ref{ThmPic} and the fact that ${\rm NS}(S_k)$ is a primitive sublattice of $L_{K3}$, it follows that $L_{K3}/{\rm NS}(S_k)$ is a free abelian group whose rank is equal to that of $L_{K3}/E_k .$ Therefore, we obtain $E_k={\rm NS}(S_k).$ \end{proof}
This theorem shows that the primitive closure $\widehat{E}_k$ of $E_k$, which is introduced in Section 3, is equal to $E_k$ for each $k\in\{6,\ldots,18\}$.
From the proof of the theorem, we obtain the following corollary.
\begin{cor}\label{CorTr} For each $k\in \{6,\ldots,18\},$ let $S_k$ be a generic member of the family $\mathscr{F}_k$ of (\ref{Family}) and $L_k$ be the lattice in Table 1. Then, the transcendental lattice ${\rm Tr}(S_k)$ is isometric to $U\oplus L_k$. \end{cor}
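Note that, consistently with this corollary, ${\rm rank}({\rm Tr}(S_k))=22-{\rm rank}({\rm NS}(S_k))=22-(23-\ell_k)=\ell_k-1$, which equals ${\rm rank}(U\oplus L_k)=2+(\ell_k-3)$.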
\begin{rem}\label{RemHes}
The lattice structure of $S_6$ is determined in \cite{DG} Proposition 5.4.
In \cite{DG}, the authors study $K3$ surfaces
arising from the Hessian of a general cubic surface which does not admit a Sylvester form.
Such a Hessian $K3$ surface is birationally equivalent to our $S_6$
(see also \cite{KHes}).
\end{rem}
\section{Mordell-Weil groups for our elliptic fibrations}
Let $\pi : S \rightarrow C$ be a Jacobian elliptic fibration with zero section $O$. We assume that $\pi$ has singular fibres. We let ${\rm MW}(\pi,O)$ denote the Mordell-Weil group of sections of $\pi$. For all $P\in {\rm MW}(\pi,O)$ and $v\in C$, we have $(P\cdot \pi^{-1}(v))=1$; in particular, the section $P$ intersects an irreducible component of multiplicity $1$ in every fibre $\pi^{-1}(v)$. Set $
R = \{ v\in C \mid \pi^{-1}(v) \text{ is a singular fibre of } \pi \}. $ For all $v\in R$, we have an expression $ \pi^{-1}(v)=\Theta_{v,0}+ \sum_{j=1}^{m_v-1} \mu _{v,j} \Theta_{v,j}, $ where $m_v$ is the number of irreducible components of $\pi^{-1}(v)$, $\Theta_{v,j}$ $(j=0,\ldots,m_v -1)$ are the irreducible components of $\pi^{-1}(v)$, $\mu_{v,j}$ denotes the multiplicity of $\Theta_{v,j}$, and $\Theta_{v,0}$ is the component such that $\Theta_{v,0} \cap O \not= \emptyset$. Let $F$ be a general fibre of $\pi$. The lattice $
T=\langle F, O, \Theta_{v,j} \mid v\in R, 1\leq j \leq m_v-1 \rangle_{\mathbb{Z}} \subset {\rm NS}(S) $ is called the trivial lattice for $\pi$. For the trivial lattice $T$, we set $\hat{T}=(T \otimes _{\mathbb{Z}} \mathbb{Q}) \cap {\rm NS}(S)$.
\begin{prop}\label{propMW} (\cite{Shioda}) {\rm (1)} There is an isomorphism ${\rm MW}(\pi,O) \simeq {\rm NS}(S)/T$ of groups.
{\rm (2)} The rank of ${\rm MW}(\pi,O)$ is equal to ${\rm rank} ({\rm NS}(S))-2- \sum_{v\in R} (m_v -1).$
{\rm (3)} Let ${\rm MW}(\pi,O)_{tor}$ be the torsion part of ${\rm MW}(\pi,O)$. Then, $ {\rm MW}(\pi,O)_{tor} \simeq \hat{T}/T. $ \end{prop}
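Note that the trivial lattice $T$ has rank $2+\sum_{v\in R}(m_v-1)$, so that assertion (2) follows directly from assertion (1).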
\begin{thm}\label{ThmMW}
Let $\pi_k$ $(k\in \{6,\ldots,18\})$ be the elliptic fibration given in Proposition \ref{PropEquationJac}. Then, the rank of the Mordell-Weil group ${\rm MW}(\pi_k,O)$ is equal to $1$. Also, for each $k$, the free part of ${\rm MW}(\pi_k,O)$ is generated by the non-trivial section $Q$ in Table 7. Moreover, if $k\in \{6,9,13,17\}$, the section $O'$ in Table 7 generates the torsion part of ${\rm MW}(\pi_k, O)$. \end{thm}
\begin{proof} By applying Proposition \ref{propMW} (2) to our Jacobian elliptic fibration $\pi_k$ for each $k$, whose singular fibres are listed in Table 8, we can see that ${\rm rank}({\rm MW}(\pi_k,O))=1.$ Note that $Q\not\in {\rm MW}(\pi_k,O)_{tor}.$ Also, according to Theorem \ref{ThmNSEvident}, together with Proposition \ref{PropEvident}, we obtain a basis of ${\rm NS}(S_k)$ for each $k$ consisting of $Q$, $O' \in {\rm MW}(\pi_k,O)_{tor}$ and appropriate members of $T$. Hence, by virtue of Proposition \ref{propMW} again, the section $Q$ must generate the free part of ${\rm MW}(\pi_k,O)$, which is isomorphic to $\mathbb{Z}$. \end{proof}
Thus, we have proved Theorem \ref{ThmMW} on the basis of Theorem \ref{ThmNSEvident}. We remark that
the first author gives another proof of Theorem \ref{ThmNSEvident} and Theorem \ref{ThmMW} in \cite{M1} and \cite{M2}. In fact, he first proves Theorem \ref{ThmMW} by applying detailed techniques of Mordell-Weil lattices (see \cite{Shioda}) to our elliptic fibrations $\pi_k$, and then shows that Theorem \ref{ThmNSEvident} follows from Theorem \ref{ThmMW}. The argument in the present paper is simpler than that proof.
\section*{Acknowledgment} The second author is supported by JSPS Grant-in-Aid for Scientific Research (22K03226, 18K13383) and MEXT LEADER.
\begin{center} \hspace{7.7cm}\textit{Tomonao Matsumura}\\ \hspace{7.7cm}\textit{Core Concept Technologies Inc.,}\\ \hspace{7.7cm}\textit{11F DaiyaGate Ikebukuro, }\\ \hspace{7.7cm}\textit{1-16-15 Minamiikebukuro, Toshima-ku, Tokyo}\\ \hspace{7.7cm}\textit{171-0022, Japan}\\
\hspace{7.7cm}\textit{(E-mail: [email protected])}
\end{center}
\begin{center} \hspace{7.7cm}\textit{Atsuhira Nagano}\\ \hspace{7.7cm}\textit{Faculty of Mathematics and Physics,}\\
\hspace{7.7cm} \textit{Institute of Science and Engineering,}\\ \hspace{7.7cm}\textit{Kanazawa University,}\\ \hspace{7.7cm}\textit{Kakuma, Kanazawa, Ishikawa}\\ \hspace{7.7cm}\textit{920-1192, Japan}\\
\hspace{7.7cm}\textit{(E-mail: [email protected])}
\end{center}
\end{document}
\begin{document}
\begin{frontmatter} \title{Joint exceedances of random products}
\runtitle{Joint exceedances of random products}
\begin{aug} \author{\fnms{Anja} \snm{Jan\ss en}\thanksref{a}\ead[label=e1]{[email protected]}} \and \author{\fnms{Holger} \snm{Drees}\thanksref{a}\ead[label=e2]{[email protected]}} \thankstext{a}{Both authors were supported by the German Research Foundation DFG, grant no JA 2160/1.}
\address[a]{University of Hamburg, Department of Mathematics, SPST, Bundesstr.~55, D-20146 Hamburg, Germany\\ \printead{e1,e2}}
\runauthor{A. Jan\ss en and H. Drees}
\affiliation{University of Hamburg}
\end{aug}
\begin{abstract} We analyze the joint extremal behavior of $n$ random products of the form $\prod_{j=1}^m X_j^{a_{ij}}, 1 \leq i \leq n,$ for non-negative, independent regularly varying random variables $X_1, \ldots, X_m$ and general coefficients $a_{ij} \in \mathbb{R}$. Products of this form appear for example if one observes a linear time series with gamma type innovations at $n$ points in time. We combine arguments of linear optimization and a generalized concept of regular variation on cones to show that the asymptotic behavior of joint exceedance probabilities of these products is determined by the solution of a linear program related to the matrix $\mathbf{A}=(a_{ij})$. \end{abstract}
\begin{keyword} \kwd{extreme value theory} \kwd{linear programming} \kwd{M-convergence} \kwd{random products} \kwd{regular variation} \end{keyword}
\end{frontmatter} \section{Introduction}
The tail behavior of products of powers of heavy-tailed positive random variables is of crucial importance in many applications, particularly in finance, but also, for example, in network modeling. In stochastic volatility time series, the log-volatilities are usually modelled as a linear time series
\begin{equation*}
\log \sigma_t=\sum_{i=0}^\infty \alpha_i \eta_{t-i}, \;\;\; t \in \mathbb{Z}.
\end{equation*}
If the innovations $\eta_i, i \in \mathbb{Z},$ have an exponential or gamma type tail, then the volatility $\sigma_t$ at time $t$ is a product of powers of the regularly varying random variables $X_i:=e^{\eta_i}, i \in \mathbb{Z}$, with exponents depending on $t$. To assess the risk of a volatile market at different time points $t_1,\ldots, t_n$, one thus has to analyze probabilities of the type
$ P(\prod_{j=1}^\infty X_j^{a_{ij}}>x,\;1 \leq i \leq n)$ for suitable exponents $a_{ij}$. Using a new Breiman type result, it was shown in \cite{JaDr15} that these probabilities also determine the risk of jointly large losses over different periods.
Similarly, in a credit risk model for $n$ risks with $k$ independent factors $Z_1,\ldots, Z_k$, the $i$-th risk is often modeled as a multiple of $\exp(\sum_{j=1}^k a_{ij} Z_j+Y_i)$ with $Y_i, 1 \leq i \leq n,$ denoting the idiosyncratic part (cf.\ \cite{EmHaMi14}). If the $Z_j, 1 \leq j \leq k,$ and $Y_i, 1 \leq i \leq n,$ have an exponential or gamma type tail, the analysis of the joint tail risk again leads to probabilities of the above type.
In network modeling, both transmission durations $L$ and rates $R$ arising from one source may be modeled by regularly varying random variables with different indices $\alpha_L$ and $\alpha_R$ (see, e.g., \cite{MauReRo02}). The total volume of traffic from one source can then be expressed as $X_L^{\alpha_L} X_R^{\alpha_R}$ for random variables $X_L,X_R$ which are regularly varying with index $-1$. If one wants to determine the probability that different sources contribute large volumes in the same period, then again probabilities of the above type arise. Moreover, one may introduce dependencies between $X_L$ and $X_R$ for the same or for different sources by modeling them as products of (partially) identical factors with different exponents.
As the example of log-volatilities demonstrates, the analysis of the joint tail behavior of power products is equivalent to the corresponding analysis for linear combinations of random variables with exponential type tails such that exponentials of these random variables are regularly varying. Hence the results given below allow a tail analysis in such settings too.
Motivated by these examples, we analyze the asymptotic behavior of the probability of joint exceedances of power products, i.e. of
\begin{equation}\label{Eq:powerproductsmot}
P\left(\prod_{j=1}^m X_j^{a_{ij}}>c_i x, 1 \leq i \leq n\right), \;\;\; c_i>0, 1 \leq i \leq n,
\end{equation}
where $X_j$ are independent, non-negative regularly varying random variables and $a_{ij}$ are coefficients which may be negative. We restrict the analysis to the case of a finite number $m$ of factors, but extensions to an infinite number of factors, using a Breiman-type argument, are possible. In \cite{JaDr15}, such probabilities were investigated under the restrictive assumption that no coefficient is negative. It was shown there that these probabilities behave asymptotically like a multiple of $\prod_{i=1}^m P(X_i>x^{\kappa_i})$, where $\boldsymbol{\kappa}=(\kappa_1, \ldots, \kappa_m)$ is the solution to a linear program determined by the matrix $\mathbf{A}=(a_{ij})$. While the restriction to non-negative coefficients $a_{ij}$ seems acceptable e.g.\ for multi-factor models, it is quite severe for log-volatility time series. Moreover, essentially only the case $n=2$ was considered in \cite{JaDr15}, and the techniques employed do not easily generalize to higher dimensions, which further limits the applicability of the established results.
Using a recently introduced abstract concept of regular variation on cones based on the notion of $\mathbb{M}$-convergence (see \cite{HuLi06}, \cite{LiReRo14}), we can avoid all these drawbacks. To this end, we first introduce a non-standard form of regular variation on the cone $(0, \infty)^m$ for the random vector $(X_1, \ldots, X_m)$, from which one may conclude the asymptotics of probabilities of the type $P((X_1/x^{\kappa_1}, \ldots, X_m/x^{\kappa_m})\in B)$ for suitable coefficients $\kappa_1, \ldots, \kappa_m$ and sets $ B \subset(0, \infty)^m$ that are bounded away from the boundary of the cone. Unfortunately, in general the sets $M=\{x\in\mathbb{R}^m: \prod_{j=1}^m x_j^{a_{ij}}>c_i, 1 \leq i \leq n\}$ pertaining to the probabilities \eqref{Eq:powerproductsmot} are not of this type. Hence, quite involved arguments are needed to prove that the parts of $M$ close to the boundary of the cone are asymptotically negligible. To this end, auxiliary results are proved in Section \ref{Sec:auxresults} which are of interest on their own. In particular, Proposition \ref{Lem:momentsconverge} can be seen as a multivariate version of the direct half of Karamata's Theorem, cf.\ Remark \ref{Rem:Karamata}.
The outline of this paper is as follows: In Section \ref{Sec:background} an abstract notion of regular variation on cones is briefly recalled, with a view towards the later application of this concept. Our main results are stated in Section \ref{Sec:mainsec}, with Theorem \ref{Th:maintheorem} being the central conclusion. Proofs of the results are given in Section \ref{Sec:proof} while some auxiliary results needed in the proofs are gathered in Section \ref{Sec:auxresults}.
\subsection*{Notations and conventions}
We write bold letters for vectors, i.e. $\mathbf{x}$ is short for $(x_1, \ldots, x_n) \in \mathbb{R}^n$ if it is clear that $\mathbf{x}$ is of dimension $n$. The $i$-th component of $\mathbf{x}$ is denoted by $x_i$. We write $\mathbf{0}$ and $\mathbf{1}$ for a (column) vector of suitable dimension which consists of only zeros or only ones. Inequalities for vectors are meant to hold componentwise. We denote the complement of a set $A$ by $A^c$ and its boundary by $\partial A$. For $\mathbf{x} \in \mathbb{R}^n$ and $A \subset \mathbb{R}^n$ we set $d(\mathbf{x},A)=\inf_{\mathbf{a} \in A}\|\mathbf{x}-\mathbf{a}\|$, where $\| \cdot \|$ denotes the Euclidean norm. Similarly, we set $d(A,B)=\inf_{\mathbf{a} \in A, \mathbf{b} \in B}\|\mathbf{a}-\mathbf{b}\|$. For $A \subset \mathbb{R}^n$ and $r>0$ set $A^r:=\{\mathbf{x} \in \mathbb{R}^n: d(\mathbf{x},A)<r\}$. Denote the Borel sigma algebra on $\mathbb{R}^n$ by $\mathbb{B}^n$ and for a set $A \in \mathbb{B}^n$ write $\mathbb{B}^n \cap A=\{B \in \mathbb{B}^n: B \subset A\}$. We write $\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\cdot)$ for the Lebesgue measure on $\mathbb{B}^n$.
\section{General regular variation on cones}\label{Sec:background}
In the following, we will make frequent use of an extension of the concept of multivariate regular variation which was introduced in \cite{HuLi06} and \cite{LiReRo14}. For some $m \in \mathbb{N}$, let $\otimes:(0,\infty)\times[0,\infty)^m \to [0,\infty)^m, (\lambda,\mathbf{x})\mapsto \lambda\otimes\mathbf{x} $ be a ``multiplication-like'' mapping with the following two properties: \begin{itemize}
\item[(A1)] the mapping $\otimes$ is continuous,
\item[(A2)] $1 \otimes \mathbf{x}=\mathbf{x}$ and for $\lambda_1,\lambda_2>0$ we have $\lambda_1\otimes(\lambda_2 \otimes \mathbf{x})=(\lambda_1 \cdot \lambda_2)\otimes \mathbf{x}$ for all $\mathbf{x}\in [0,\infty)^m$. \end{itemize}
Consider a closed subcone $\mathbb{C}$ of $[0,\infty)^m$ w.r.t.\ this mapping, that is, $\lambda\otimes \mathbb{C} := \{\lambda\otimes \mathbf{x}: \mathbf{x}\in\mathbb{C}\}\subset \mathbb{C} $ for all $\lambda>0$. We assume that the following condition holds: \begin{itemize}
\item[(A3)] $d(\mathbf{x},\mathbb{C})< d(\lambda \otimes \mathbf{x},\mathbb{C})$ if $\lambda>1$ and $\mathbf{x} \in \mathbb{O}$. \end{itemize} The complement $\mathbb{O}:=[0,\infty)^m \setminus \mathbb{C}$ is an open cone, which is assumed not to be empty.
The notion of regular variation on $\mathbb{O}$ w.r.t.\ $\otimes$, which is introduced below, rests on the definition of convergence in the space $\mathbb{M}_{\mathbb{O}}$ of Borel measures on $(\mathbb{O}, \mathbb{B}^m \cap \mathbb{O})$ whose restrictions to $[0,\infty)^m \setminus \mathbb{C}^r$ are finite for each $r>0$. Denote by $\mathcal{C}^+(\mathbb{O})$ the class of non-negative, bounded and continuous functions $f$ on $\mathbb{O}$ vanishing on $\mathbb{C}^r$ for some $r > 0$. We endow $\mathbb{M}_{\mathbb{O}}$ with the topology that is generated by open sets of the form
\begin{equation*} \left\{\nu \in \mathbb{M}_{\mathbb{O}}: \left|\int f_i d\nu - \int f_i d\mu\right|<\epsilon, 1\le i\le k\right\} \end{equation*} with $\mu \in \mathbb{M}_{\mathbb{O}}, f_i \in \mathcal{C}^+(\mathbb{O}), i=1, \ldots, k,$ and $\epsilon>0$. A Portmanteau Theorem (cf.\ \cite{LiReRo14}, Theorem 2.1) shows that convergence of measures $\nu_n$ to a measure $\nu$ in this topology is equivalent to the convergence $\nu_n(A)\to\nu(A)$ for all Borel sets $A$ in $\mathbb{O}$ which are bounded away from $\mathbb{C}$ and for which $\nu(\partial A)=0$.
\begin{definition}[see \cite{LiReRo14}, Definitions 3.1 and 3.2]\label{Def:RVkappa} A measure $\nu \in \mathbb{M}_{\mathbb{O}}$ is called\emph{ regularly varying on $\mathbb{O}$ with respect to the mapping $\otimes$} if there exists an increasing, regularly varying function $c:[0,\infty)\to (0,\infty)$ and a nonzero measure $\mu \in \mathbb{M}_{\mathbb{O}}$ such that \begin{equation*}c(x) \nu(x \otimes \cdot) \to \mu(\cdot) \; \mbox{in} \; \mathbb{M}_{\mathbb{O}} \;\;\; \mbox{as}\; x \to \infty.\end{equation*} \end{definition}
\begin{lemdef}[see \cite{LiReRo14}, Theorem 3.1]\label{LemDef:index} If $\nu$ is regularly varying on $\mathbb{O}$ in the sense of Definition \ref{Def:RVkappa} with limit measure $\mu$, then there exists an $\alpha \geq 0$ such that \begin{equation}\label{Eq:homlimit}\mu(\lambda \otimes A)=\lambda^{-\alpha} \mu(A) \end{equation} for all $\lambda>0$ and Borel sets $A \subset \mathbb{O}$. We call $-\alpha$ the {\em index of regular variation of the measure $\nu$} in Definition \ref{Def:RVkappa}. The value of $\alpha$ in \eqref{Eq:homlimit} is equal to the index of regular variation of the normalizing function $c$ in Definition \ref{Def:RVkappa}. \end{lemdef} \begin{proof} Equation \eqref{Eq:homlimit} is stated in \cite{LiReRo14}, Theorem 3.1. By this and (A2), we have, for all $\lambda>0$ and all Borel sets $A \subset \mathbb{O}$ which are bounded away from $\mathbb{C}$ and for which $\mu(\partial A)=0$ and $\mu(A)>0$, that \begin{eqnarray*}
\lim_{x \to \infty} \frac{c(\lambda x)}{c(x)}=\lim_{x \to \infty} \frac{\nu(x \otimes A)}{\nu((\lambda x) \otimes A)}=\lim_{x \to \infty} \frac{\nu(x \otimes A)}{\nu(x \otimes (\lambda \otimes A))}= \frac{\mu(A)}{\mu(\lambda\otimes A)}=\lambda^\alpha. \end{eqnarray*} Therefore, $c$ is (univariate) regularly varying with index $\alpha$. \end{proof}
Definition \ref{Def:RVkappa} unifies several different concepts of regular variation of a random vector $\mathbf{X}=(X_1, \ldots, X_m)$ with values in $[0,\infty)^m$ and distribution $\nu$.
\begin{example}[Multivariate regular variation] \label{ex:MRV}
If $\otimes$ denotes the usual scalar multiplication $\lambda\otimes\mathbf{x}=\lambda\mathbf{x}$ and $\mathbb{C}:=\{\mathbf{0}\}$, then $\mathbb{O}=[0,\infty)^m \setminus\{\mathbf{0}\}$ and Definition \ref{Def:RVkappa} reads as $$ c(x)P\left(\left(\frac{X_1}{x}, \ldots, \frac{X_m}{x}\right) \in A \right)\to \mu(A) $$ as $x \to \infty$ for all $\mu$-continuity sets $A\subset\mathbb{B}^m\cap [0,\infty)^m$ bounded away from $\mathbf{0}$. This is the classical regular variation of $\mathbf{X}$ (see e.g.\ \cite{Res07}, Section 6.1.4). \end{example}
\begin{example}[Ledford-Tawn-model] \label{ex:LT} If $\mathbb{C}:=[0,\infty)^m\setminus(0,\infty)^m$, then regular variation on $\mathbb{O}=(0,\infty)^m$ w.r.t.\ the usual scalar multiplication has been considered by \cite{LT97} in the bivariate case $m=2$ (after suitable marginal standardization). It is equivalent to the convergence $$ \tilde c(x)P\left(\left(\frac{X_1}{x}, \ldots, \frac{X_m}{x}\right) \in A \right)\to \mu(A) $$ as $x \to \infty$ for all $\mu$-continuity sets $A\subset\mathbb{B}^m\cap [0,\infty)^m$ bounded away from both axes in the case $m=2$ resp.\ from $\{\mathbf{x}: x_i=0$ for some $1\le i\le m\}$ in the general case.
Note that a random vector $\mathbf{X}$ may be regularly varying in the classical sense of Example \ref{ex:MRV} and in the present sense with different normalizing functions $c$ resp.\ $\tilde c$. If $c(x)=o(\tilde c(x))$ as $x\to\infty$, then $\mathbf{X}$ is said to exhibit hidden regular variation (cf.\ \cite{Res07}, Section 9.4.1). \end{example}
Here, we consider a different mapping $\otimes$ and different cones as well. Let, for $\boldsymbol{\kappa} \in [0,\infty)^m$, \begin{align*} \otimes_{\boldsymbol{\kappa}}: \, & (0,\infty) \times [0,\infty)^m \to [0,\infty)^m, \\
& (\lambda,(x_1, \ldots,x_m)) \mapsto \lambda \otimes_{\boldsymbol{\kappa}} (x_1, \ldots, x_m):=(\lambda^{\kappa_1}x_1, \ldots, \lambda^{\kappa_m}x_m). \end{align*} We want to analyze the asymptotic behavior of $m$-dimensional non-negative random vectors that have extreme values in $n \in\{1,\ldots, m\}$ of their components. For ease of notation, assume that the first $n$ components of $\boldsymbol{\kappa}$ are positive and the last $m-n$ components are equal to zero, so that $x^{\kappa_i}\to \infty$ as $x\to\infty$ only for $1\le i\le n$. Define the cones $\mathbb{C}_n=([0,\infty)^n\setminus (0,\infty)^n)\times [0,\infty)^{m-n}$ and $\mathbb{O}_n=[0,\infty)^m \setminus \mathbb{C}_n=(0,\infty)^n\times[0,\infty)^{m-n}$ w.r.t.\ the mapping $ \otimes_{\boldsymbol{\kappa}}$. Since $$ d(\mathbf{x},\mathbb{C}_n)=\min\{x_1, \ldots, x_n\} < \min\{\lambda^{\kappa_1}x_1, \ldots, \lambda^{\kappa_n} x_n\}=d(\lambda \otimes_{\boldsymbol{\kappa}} \mathbf{x},\mathbb{C}_n) $$ for all $\lambda>1$ and $\mathbf{x} \in \mathbb{O}_n$, the assumptions (A1)--(A3) are satisfied. Note that in the case $n=m$ and $\boldsymbol{\kappa}=\boldsymbol{1}$, the regular variation on $\mathbb{O}_n$ w.r.t.\ $\otimes_{\boldsymbol{\kappa}}$ is equivalent to the concept of regular variation considered in Example \ref{ex:LT}.
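As a simple illustration, for $m=2$ and $\boldsymbol{\kappa}=(1,0)$ (so that $n=1$), one has $\lambda\otimes_{\boldsymbol{\kappa}}(x_1,x_2)=(\lambda x_1, x_2)$, $\mathbb{C}_1=\{0\}\times[0,\infty)$ and $\mathbb{O}_1=(0,\infty)\times[0,\infty)$.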
\begin{lemma}\label{lem:vectorisrv} Let $\boldsymbol{\kappa} \in [0,\infty)^m$ with $\kappa_i>0, 1 \leq i \leq n,$ and $\kappa_i=0, n<i\leq m,$ for some $n \leq m$. Furthermore, let $X_1, \ldots, X_m$ be independent, non-negative random variables such that $X_1, \ldots, X_n$ are regularly varying with index $-1$. Then, $P^{(X_j)_{1 \leq j \leq m }}$ is regularly varying on $\mathbb{O}_n$ w.r.t.\ $\otimes_{\boldsymbol{\kappa}}$ with index $-\alpha=-\sum_{i=1}^m\kappa_i$.
\end{lemma} \begin{proof}
From the independence of $X_1, \ldots, X_m$ and the regular variation of $X_1, \ldots, X_n$ it follows that \begin{align} & \nonumber \lim_{x \to \infty} \frac{P^{(X_j)_{1 \leq j \leq m}}\left(x \otimes_{\boldsymbol{\kappa}} \left(\left(\bigtimes\limits_{1 \leq i \leq n}(a_i,\infty)\right) \times \left(\bigtimes\limits_{n< j \leq m}[b_j,\infty)\right)\right)\right)}{\prod_{i=1}^n P(X_i>x^{\kappa_i})}\\ \nonumber &= \lim_{x \to \infty} \frac{\prod_{i=1}^n P(X_i>a_i x^{\kappa_i})\prod_{j=n+1}^m P(X_j\geq b_j)}{\prod_{i=1}^n P(X_i>x^{\kappa_i})}\\ \label{Eq:definemu} &= \prod_{i=1}^n a_i^{-1} \prod_{j=n+1}^m P(X_j\geq b_j)=:\mu\left(\left(\bigtimes\limits_{1 \leq i \leq n}(a_i,\infty)\right) \times \left(\bigtimes\limits_{n< j \leq m}[b_j,\infty)\right)\right) \end{align} for all $a_i>0, 1 \leq i \leq n,$ and $b_j\geq 0, n<j \leq m$. Since these limits are finite, we have shown that the family of measures $c(x) P^{(X_j)_{1 \leq j \leq m }}\left(x \otimes_{\boldsymbol{\kappa}} \cdot\right)$, $ x>0$, is relatively compact in $\mathbb{M}_{\mathbb{O}_n}$ (cf.\ Theorems~2.4 and 2.5 in \cite{LiReRo14}), where $c(x)=\left(\prod_{i=1}^n P(X_i>x^{\kappa_i})\right)^{-1}$. Furthermore, all accumulation points of this family agree on a generating $\pi$-system. Thus, $P^{(X_j)_{1 \leq j \leq m }}$ is regularly varying on $\mathbb{O}_n$ w.r.t.\ $\otimes_{\boldsymbol{\kappa}}$. The index of regular variation follows from Lemma and Definition~\ref{LemDef:index}, since $c$ is regularly varying with index $\sum_{i=1}^n\kappa_i=\sum_{i=1}^m\kappa_i$. \end{proof}
\section{Joint extremal behavior of random power products}\label{Sec:mainsec} In the following, let $X_1, \ldots, X_m, m \in \mathbb{N},$ be independent, non-negative random variables, not necessarily with the same distribution. We will give asymptotics for the joint exceedance probabilities \eqref{Eq:powerproductsmot} of $n \leq m$ ``power products''
\begin{equation}\label{Eq:powerprods} \prod_{j=1}^m X_j^{a_{ij}}, \;\;\; 1 \leq i \leq n,
\end{equation} over the same threshold $x$ as $x \to \infty$ for rather general values of $a_{ij} \in \mathbb{R}, 1 \leq i \leq n, 1 \leq j \leq m$. A product may take the value $+\infty$ if $X_j=0$ and $a_{ij}<0$ for some $i,j$, but throughout we use the convention that $+\infty \cdot 0=0$ and $0^0=1$.
In order to derive our results we make some assumptions about the tail behavior of the $X_j, 1 \leq j \leq m$. We assume that all or at least the ``relevant'' (in a sense specified below) $X_j, 1 \leq j \leq m,$ are regularly varying with index $-1$. We will see that the joint extremal behavior of the products in \eqref{Eq:powerprods} is closely related to the solution of the linear optimization problem \begin{equation}\label{Eq:LOP} \mbox{ find } \mathbf{x} \geq \mathbf{0} \mbox{ such that } \mathbf{A}\mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=1}^m x_i \to \min! \end{equation} with $\mathbf{A}=(a_{ij})_{1\le i\le n, 1\le j\le m} \in \mathbb{R}^{n \times m}$. Before we give proofs for the asymptotic behavior of the joint exceedances, we want to motivate the connection of our question to the linear optimization problem in \eqref{Eq:LOP}. To this end, assume for simplicity that $a_{ij}\geq 0$ for all $1 \leq i \leq n, 1 \leq j \leq m$. Let $\mathbf{y} \geq \mathbf{0}$ be a feasible solution to \eqref{Eq:LOP}, i.e.\ $\mathbf{A}\mathbf{y} \geq \mathbf{1}$. Note that for all $x\geq 1$ \begin{equation*} X_j>x^{y_j}, \;\; 1 \leq j \leq m \;\;\; \Rightarrow \;\;\; \prod_{j=1}^m X_j^{a_{ij}}\geq x^{\sum_{j=1}^m a_{ij}y_j}\geq x, \;\;\; 1 \leq i \leq n,\end{equation*} by \eqref{Eq:LOP} and thus \begin{equation}\label{Eq:lowerboundheuristic} P\left(\prod_{j=1}^m X_j^{a_{ij}}\geq x, \;1 \leq i \leq n\right)\geq \prod_{j=1}^m P(X_j>x^{y_j}). \end{equation}
If all $X_j, 1 \leq j \leq m,$ are independent and regularly varying with index $-1$, the right hand side is a regularly varying function in $x$ with index $\alpha=-\sum_{j=1}^m y_j$. Now, the smaller the value of $|\alpha|$, the slower the decay of the function on the right hand side as $x \to \infty$. So, heuristically, minimizing $|\alpha|$, and thus the value of $\sum_{j=1}^m y_j$, yields the most likely combination of extremal events of the individual $X_j$ which leads to joint extremal behavior of the power products \eqref{Eq:powerprods}. We will see in Theorem \ref{Th:maintheorem} that the right hand side of \eqref{Eq:lowerboundheuristic} is not only a lower bound for the joint exceedance probabilities but also, under some additional assumptions on the real-valued matrix $\mathbf{A}$, tail equivalent to them. For a general (not necessarily non-negative) matrix $\mathbf{A}$, the next theorem gives upper and lower bounds for the order of decay of the joint exceedance probabilities.
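As a side remark, the linear program \eqref{Eq:LOP} is easy to solve numerically. The following Python sketch (an illustration only; the helper \texttt{optimal\_kappa} and the $2\times 2$ toy matrix at the end are ours and not part of the model) computes an optimal $\boldsymbol{\kappa}$ for a given coefficient matrix $\mathbf{A}$ by means of \texttt{scipy.optimize.linprog}.
\begin{verbatim}
# Minimal sketch: solve the linear program
#     minimize 1^T x   subject to   A x >= 1,  x >= 0,
# whose optimal solution kappa governs the exceedance asymptotics.
import numpy as np
from scipy.optimize import linprog

def optimal_kappa(A):
    n, m = A.shape
    # linprog minimizes c^T x subject to A_ub x <= b_ub with the default
    # bounds x >= 0; rewrite A x >= 1 as (-A) x <= -1.
    res = linprog(c=np.ones(m), A_ub=-A, b_ub=-np.ones(n), method="highs")
    if not res.success:
        raise ValueError("no feasible solution of the linear program")
    return res.x, res.fun

# toy example (hand-checkable): optimal kappa = (2/3, 2/3), value 4/3
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
kappa, value = optimal_kappa(A)
print(kappa, value)
\end{verbatim}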
\begin{theorem}\label{Th:bounds} Let $X_1, \ldots, X_m$ be independent non-negative random variables. Let $\boldsymbol{\kappa}=(\kappa_1, \ldots, \kappa_m)^T$ be an optimal solution to \eqref{Eq:LOP}. \begin{itemize} \item[(a)] Assume that all $X_j, 1 \leq j \leq m,$ are regularly varying with index -1. Then for all $\epsilon>0$, \begin{equation}\label{Eq:lowerbound} x^{-\sum_{i=1}^m\kappa_i-\epsilon} = o\left(P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, \;1 \leq i \leq n\right)\right), \;\;\; x \to \infty. \end{equation} \item[(b)] Assume that $E(X_j^{1-\delta})<\infty, 1 \leq j \leq m,$ for all $\delta \in (0,1)$, and additionally that there exists $c>0$ such that $P(X_j \geq c)=1$ for all $1 \leq j \leq m $ with $\kappa_j=0$. Then for all $\epsilon>0$, \begin{equation}\label{Eq:upperbound} P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, \;1 \leq i \leq n\right)= o(x^{-\sum_{i=1}^m\kappa_i+\epsilon}), \;\;\; x \to \infty. \end{equation} \end{itemize} \end{theorem} \begin{remark} In contrast to the other results of this and the following section, the assumptions of Theorem \ref{Th:bounds} (b) do not include regular variation of at least some of the $X_j, 1 \leq j \leq m$. Note, however, that regular variation with index $-1$ of $X_1, \ldots, X_m$ implies that $E(X_j^{1-\delta})<\infty, 1 \leq j \leq m,$ for all $\delta \in (0,1)$. \end{remark}
The proof is given in Section \ref{Sec:boundsproof}. Under some additional assumptions about the structure of $\mathbf{A}$, the following Theorem \ref{Th:maintheorem} gives precise asymptotics for the joint exceedance probabilities of the random power products. \begin{theorem}\label{Th:maintheorem}
Let $\mathbf{A}=(a_{ij}) \in \mathbb{R}^{n \times m}, n \leq m,$ be such that the optimal solution $\boldsymbol{\kappa}$ to the linear optimization problem \eqref{Eq:LOP} is unique and non-degenerate (i.e.\ it has $n$ positive components) and denote by $\mathbf{A}_{\boldsymbol{\kappa}} \in \mathbb{R}^{n \times n}$ the matrix which is derived from $\mathbf{A}$ by deleting all columns $1 \leq j \leq m$ for which $\kappa_j=0$. Then this matrix is invertible.
Let $X_1, \ldots, X_m$ be independent non-negative random variables and assume that there exists $\epsilon>0$ such that for \begin{align} \nonumber 1 \leq j \leq m \mbox{ with } \kappa_j>0:& \;\;\; X_j \mbox{ is regularly varying with index } -1 \\ \label{Eq:Cond2Main} 1 \leq j \leq m \mbox{ with } \kappa_j=0:& \;\;\; \begin{array}{cc} E\left(X_j^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j+\epsilon}\right)<\infty & \mbox{ and }\\ \hspace{0.3cm} E\left(X_j^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j-\epsilon}\right)<\infty & \end{array}. \end{align} Then \begin{eqnarray}\label{Eq:Maintheorem1} && \lim_{x \to \infty} \frac{P\left(\prod_{j=1}^m X_j^{a_{ij}}>c_i x, \;1 \leq i \leq n\right)}{\prod\limits_{\{1 \leq j \leq m: \kappa_j>0\}}P(X_j>x^{\kappa_j})}\\
\label{Eq:Maintheorem2} &=&|\det \mathbf{A}_{\boldsymbol{\kappa}}|^{-1}\frac{\prod_{i=1}^n c_i^{-(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_i}}{\prod_{i=1}^n (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_i}\prod_{j:\kappa_j=0} E\left(X_j^{ (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j}\right)=:\mu\left(\bigtimes_{1 \leq i \leq n} (c_i, \infty)\right) \end{eqnarray} for all $c_i>0, 1 \leq i \leq n$. In particular, the distribution of $(\prod_{j=1}^m X_j^{a_{ij}})_{1 \leq i \leq n}$ is regularly varying on $(0,\infty)^n$ w.r.t.\ scalar multiplication (in the sense of Example \ref{ex:LT}). The normalizing function can be chosen as $c(x)= (\prod_{\{1 \leq j \leq m: \kappa_j>0\}}P(X_j>x^{\kappa_j}))^{-1}$ and the corresponding limit measure is $\mu$ as above. The index of regular variation is equal to $-\sum_{j=1}^m \kappa_j=-\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{1}$. \end{theorem} \begin{remark} Given the expression in \eqref{Eq:Maintheorem2}, it is obviously necessary to assume that $E\left(X_j^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j}\right)<\infty$ for all $1 \leq j \leq m$ with $\kappa_j=0$ in order to ensure that the limit in \eqref{Eq:Maintheorem1} is finite. The actual assumption about the moments of those $X_j$ with $\kappa_j=0$ which is stated in Theorem \ref{Th:maintheorem} is similar to the assumption in Breiman's lemma (cf.\ \cite{Br65}) in that we need a little more than the finiteness of the moments $E\left(X_j^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j}\right)$ in order to apply a dominated convergence theorem. Furthermore, note that for $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j>0$ only the first moment assumption is necessary, the second follows for $\epsilon>0$ chosen small enough. On the other hand, if $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j<0$, then only the second assumption is necessary. Only in the case $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j=0$ both assumptions are necessary. \end{remark}
\begin{remark}\label{Rem:dualcoefs} \begin{itemize} \item[(a)] Under the given assumptions the vector $\mathbf{1}^T \mathbf{A}_{\boldsymbol{\kappa}}^{-1}$ which appears in the statement of Theorem \ref{Th:maintheorem} is the transpose of the unique optimal solution to the so-called dual problem to \eqref{Eq:LOP}, which is given by \begin{equation}\label{Eq:DualLO} \mbox{find } \mathbf{x} \geq \mathbf{0} \; \mbox{such that } \mathbf{A}^T \mathbf{x} \leq \mathbf{1}, \;\;\; \sum_{i=1}^m x_i \to \max!, \end{equation} see the proof of Theorem \ref{Th:maintheorem} and \eqref{Eq:identitykappahat} for details. This implies that $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})^T =\mathbf{A}^T \left(\mathbf{1}^T \mathbf{A}_{\boldsymbol{\kappa}}^{-1}\right)^T \leq \mathbf{1}$ and by the assumed uniqueness and non-degeneracy of the solution even $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j<1$ for those $j$ with $\kappa_j=0$, cf.\ the remark after \eqref{Eq:alldualcases} in the proof of Theorem \ref{Th:maintheorem}. Therefore, the assumptions in \eqref{Eq:Cond2Main} are always satisfied if all $X_j, 1 \leq j \leq m,$ are regularly varying with index $-1$ and bounded away from 0 (or their distributions concentrate sufficiently little mass around 0).
\item[(b)] For a general linear program of the form \begin{equation}\label{Eq:LOPgeneral} \mbox{ find } \mathbf{x} \geq \mathbf{0} \mbox{ such that } \mathbf{A}\mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{j=1}^m z_j x_j \to \min! \end{equation} with optimal solution $\boldsymbol{\kappa}$, the value of $z_j-(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j$ is sometimes called the reduced cost of variable $1 \leq j \leq m$. If $\kappa_j=0$ in the optimal solution, then this solution is not affected by a change of $z_j$ in the objective function in \eqref{Eq:LOPgeneral} as long as $z_j>(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j$, cf.\ Section 2.5.1 in \cite{Si96}. In the context of Theorem \ref{Th:maintheorem}, the values of $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j$ for $j$ with $\kappa_j=0$ can be interpreted in a similar way, since the left or right tail behavior of these $X_j$ does not influence the extremal behavior of the random products (except for a possible change in the multiplicative constant of the limit) as long as there exists $\epsilon>0$ such that \begin{eqnarray*}\begin{array}{cc}
P(X_j>x)= O(x^{-(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j-\epsilon}), & \mbox{ if } (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j\geq 0 \\
P(X_j^{-1}>x)= O(x^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j-\epsilon}), & \mbox{ if } (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j\leq 0
\end{array}. \end{eqnarray*} \end{itemize} \end{remark}
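For instance, for the $3\times 4$ matrix $\mathbf{A}$ of the example at the end of this section, the vector $\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}=(1,3/2,7/4)$ is feasible for \eqref{Eq:DualLO} with objective value $1+3/2+7/4=17/4=\sum_{j=1}^4\kappa_j$, and is therefore the optimal solution of the dual problem.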
The proof of Theorem \ref{Th:maintheorem} is given in detail in Section \ref{Sec:proof}, but we want to lay out briefly the main idea here. To this end, assume w.l.o.g. that the first $n$ components of $\boldsymbol{\kappa}$ are positive and the last $m-n$ components are equal to zero. The main idea is to use the regular variation of the measure $\nu:=\bigotimes_{i=1}^m P^{X_i}$ (where $P^{X_i}$ stands for the law of $X_i$) on $\mathbb{O}_n$ w.r.t.\ $\otimes_{\boldsymbol{\kappa}}$, cf. Lemma~\ref{lem:vectorisrv}. Furthermore, we show that under our assumptions the equality $\mathbf{A} \boldsymbol{\kappa}=\mathbf{1}$ holds. Then, \begin{eqnarray}\nonumber && P\left(\prod_{j=1}^m X_j^{a_{ij}}>c_i x, \;1 \leq i \leq n\right)\\ \label{Eq:isRVproblem} &=&P\left(\prod_{j=1}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{a_{ij}}>c_i, \;1 \leq i \leq n\right)=\nu(x \otimes_{\kappa} M) \end{eqnarray} for \begin{equation*}M = M(\mathbf{A},\mathbf{c}) := \left\{\mathbf{x} \in [0, \infty)^m: \prod_{j=1}^m x_j^{a_{ij}}>c_i, \; 1 \leq i \leq n\right\}.\end{equation*} The next step is to apply the Portmanteau Theorem (cf.\ Theorem 2.1 in \cite{LiReRo14}) to show convergence of the right hand side in \eqref{Eq:isRVproblem} under suitable normalization as $x \to \infty$. Note, however, that the set $M$ is not bounded away from $\mathbb{C}_n$ (cf.\ Section \ref{Sec:background}), so we cannot directly apply this argument. As an intermediate step, we therefore have to show that we can replace $M$ by $M \cap (\delta,\infty)^n\times[0,\infty)^{m-n}, \delta>0,$ in \eqref{Eq:isRVproblem} and that under the necessary normalization the difference is negligible as $\delta \searrow 0$.
The following example illustrates the statements of Theorem \ref{Th:maintheorem} and in particular the role of negative coefficients $a_{ij}$. It also demonstrates applications of Theorem \ref{Th:maintheorem} for the extreme value analysis of time series. \begin{example} Let $(Y_t)_{t \in \mathbb{Z}}$ be a log-linear time series of the form \begin{equation*} \ln(Y_t)=\ln(X_{t})-0.5\ln(X_{t+1}), \;\;\; t \in \mathbb{Z}, \end{equation*} where $X_t, t \in \mathbb{Z},$ are i.i.d.\ and regularly varying with index $-1$. Using Theorem \ref{Th:maintheorem}, we can derive the asymptotics for the probability that three consecutive extreme observations of similar magnitude occur, i.e.\ for $P(Y_1>c_1x, Y_2>c_2x, Y_3>c_3x)$, $c_i>0, i=1,2,3.$ Rewrite this probability as \begin{equation*} P(X_1X_{2}^{-0.5}>c_1x, X_2X_{3}^{-0.5}>c_2x, X_3X_{4}^{-0.5}>c_3x)=P\left(\prod_{j=1}^4 X_j^{a_{ij}}>c_i x, \;\; 1 \leq i \leq 3\right) \end{equation*} with \begin{equation*} \mathbf{A}=(a_{ij})_{1 \leq i \leq 3, 1 \leq j \leq 4}= \left( \begin{array}{cccc}
1 & -0.5 & 0 & 0\\
0 & 1 & -0.5 & 0 \\
0 & 0 & 1 & -0.5 \end{array} \right). \end{equation*} The optimal solution to \eqref{Eq:LOP} is then given by $\boldsymbol{\kappa}=(7/4, 3/2, 1, 0)$ and this solution is unique and non-degenerate. Furthermore, \begin{equation*}(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_4=\left(\mathbf{1}^T\left( \begin{array}{ccc}
1 & -0.5 & 0 \\
0 & 1 & -0.5 \\
0 & 0 & 1 \end{array} \right)^{-1} \mathbf{A}\right)_4=\left((1,3/2,7/4)\mathbf{A}\right)_4=-7/8. \end{equation*} Let us first additionally assume that $E(X_4^{-7/8-\epsilon})<\infty$ for some $\epsilon>0$. Then all assumptions of Theorem \ref{Th:maintheorem} are satisfied and the random vector $(Y_1, Y_2, Y_3)$ is regularly varying on $(0,\infty)^3$ with respect to scalar multiplication. The index of regular variation is equal to $-\sum_{j=1}^4 \kappa_j=-17/4$ and the limit measure $\mu$ is given by $$\mu(\times_{i=1}^3 (c_i,\infty))= \frac{8}{21}c_1^{-1}c_2^{-3/2}c_3^{-7/4}E(X_4^{-7/8}).$$ Note that the negative exponents do influence the solution of the optimization problem and hence the index of regular variation. If, for instance, in the matrix $\mathbf{A}$ $-0.5$ is replaced with $-0.25$ everywhere, the optimal solution is given by $(21/16,5/4,1,0)$ and the index of regular variation equals $-57/16$.
If the assumption $E(X_4^{-7/8-\epsilon})<\infty$ is not satisfied, Theorem \ref{Th:maintheorem} may still be helpful. For instance, let us assume that $X_4^{-1}$ is regularly varying with index $-1/2$, so that the above moment assumption does not hold. But since this assumption implies that $X_4^{-1/2}$ is regularly varying with index $-1$, we can write the above joint exceedance probability as $P(\prod_{j=1}^4 \tilde{X}_j^{\tilde{a}_{ij}}>c_i x, \;\; 1 \leq i \leq 3)$ for $\tilde{X}_j=X_j, 1 \leq j \leq 3$, $\tilde{X}_4=X_4^{-1/2}$ and \begin{equation*} \tilde{\mathbf{A}}=(\tilde{a}_{ij})_{1 \leq i \leq 3, 1 \leq j \leq 4}= \left( \begin{array}{cccc}
1 & -0.5 & 0 & 0\\
0 & 1 & -0.5 & 0 \\
0 & 0 & 1 & 1 \end{array} \right). \end{equation*} If we replace $\mathbf{A}$ in \eqref{Eq:LOP} by $\tilde{\mathbf{A}}$, then the optimal solution is given by $\tilde{\boldsymbol{\kappa}}=(3/2,1,0,1)$ and this solution is unique and non-degenerate. Furthermore, \begin{equation*}(\mathbf{1}^T\tilde{\mathbf{A}}_{\tilde{\boldsymbol{\kappa}}}^{-1} \tilde{\mathbf{A}})_3=\left(\mathbf{1}^T\left( \begin{array}{ccc}
1 & -0.5 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \end{array} \right)^{-1} \tilde{\mathbf{A}}\right)_3=\left((1,3/2,1)\tilde{\mathbf{A}}\right)_3=1/4. \end{equation*} Since $E(X_3^{1/4+\epsilon})<\infty$ for $\epsilon \in (0,3/4)$ by our above assumptions, we can apply Theorem \ref{Th:maintheorem} also in this case to obtain again that $(Y_1, Y_2, Y_3)$ is regularly varying on $(0,\infty)^3$ with respect to scalar multiplication. But now the index of regular variation is equal to $-\sum_{j=1}^4 \tilde{\kappa}_j=-7/2$ and the limit measure $\tilde{\mu}$ is given by $$\tilde{\mu}(\times_{i=1}^3 (c_i,\infty))= \frac{2}{3}c_1^{-1}c_2^{-3/2}c_3^{-1}E(X_3^{1/4}).$$ \end{example}
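As a numerical cross-check (a self-contained, illustrative sketch only, in the same spirit as the snippet following \eqref{Eq:lowerboundheuristic}), the quantities of Theorem \ref{Th:maintheorem} can be recomputed for the first matrix $\mathbf{A}$ of the above example; one recovers $\boldsymbol{\kappa}=(7/4,3/2,1,0)$, $\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}=(1,3/2,7/4)$, $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_4=-7/8$ and the constant $8/21$ of the limit measure $\mu$.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, -0.5,  0.0,  0.0],
              [0.0,  1.0, -0.5,  0.0],
              [0.0,  0.0,  1.0, -0.5]])
n, m = A.shape

# optimal kappa of  min 1^T x  s.t.  A x >= 1, x >= 0
res = linprog(c=np.ones(m), A_ub=-A, b_ub=-np.ones(n), method="highs")
kappa = res.x                            # approx (1.75, 1.5, 1.0, 0.0)

pos = kappa > 1e-9                       # support of the optimal solution
A_kappa = A[:, pos]                      # invertible n x n submatrix
w = np.ones(n) @ np.linalg.inv(A_kappa)  # 1^T A_kappa^{-1} = (1, 1.5, 1.75)
reduced = w @ A                          # last entry approx -0.875 = -7/8
const = 1.0 / (abs(np.linalg.det(A_kappa)) * np.prod(w))   # 8/21

print(kappa, w, reduced[~pos], const)
\end{verbatim}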
\section{Proofs and auxiliary results} \subsection{Proof of Theorem \ref{Th:bounds}} \label{Sec:boundsproof} \begin{proof} We start with the proof of (a). The optimal solution $\boldsymbol{\kappa}$ to \eqref{Eq:LOP} lies in the closure of \begin{equation*} N(\mathbf{A}):=\left\{\mathbf{z} \in \mathbb{R}^m: \mathbf{A}\mathbf{z} > \mathbf{1}\right\}. \end{equation*} Since the ray $\{\mathbf{z} \in \mathbb{R}^m: \mathbf{z}=(1+\delta)\boldsymbol{\kappa}, \delta >0\}$ is a subset of the open set $N(\mathbf{A})$, for all $\epsilon>0$ there exists $\epsilon' >0$ such that \begin{equation*} \bigotimes_{j=1}^m \left(\kappa_j\left(1+\frac{\epsilon}{2\sum_{j=1}^m\kappa_j}\right)-\epsilon',\kappa_j \left(1+\frac{\epsilon}{2\sum_{j=1}^m\kappa_j}\right)+\epsilon'\right) \subset N(\mathbf{A}).\end{equation*} Thus, for $x > 1$, \begin{eqnarray*} && P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, 1 \leq i \leq n\right)\\ &\geq& P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, 1 \leq i \leq n, \mbox{ and } X_j>0, 1 \leq j \leq m\right)\\ &=& P\left(\left(\frac{\ln(X_j)}{\ln(x)}\right)_{1 \leq j \leq m} \in N(\mathbf{A})\right)\\ &\geq & \prod_{j=1}^m P\left(x^{\kappa_j\left(1+\frac{\epsilon}{2\sum_{j=1}^m\kappa_j}\right)-\epsilon'}<X_j <x^{\kappa_j\left(1+\frac{\epsilon}{2\sum_{j=1}^m\kappa_j}\right)+\epsilon'}\right). \end{eqnarray*} By the regular variation of $X_1, \ldots, X_m$, the expression on the right-hand side is of larger order than \begin{equation*} \prod_{j=1}^m x^{-\kappa_j\left(1+\frac{\epsilon}{2 \sum_{j=1}^m\kappa_j}\right)+\epsilon'-\frac{\epsilon}{2m}}= x^{-\sum_{j=1}^m\kappa_j-\frac{\epsilon}{2}+m\epsilon'-\frac{\epsilon}{2}} \geq x^{-\sum_{j=1}^m\kappa_j-\epsilon}, \;\;\; x \geq 1,\end{equation*} which proves (a).
For the proof of (b) let us for simplicity assume that $c\geq 1$ so that $P(X_j\geq 1)=1$ for those $1 \leq j \leq m$ with $\kappa_j=0$. The modifications for general $c>0$ (substitute $X_j/c$ for $X_j$) are simple. Let $\tilde{\mathbf{A}}$ be as in Lemma \ref{Lem:makepositive} (a) (see Section \ref{Sec:auxresults} below), so that we have \begin{align*} & & \prod_{j=1}^m X_j^{a_{ij}}>x, & \;\;\;1 \leq i \leq n&\\ &\Rightarrow& \prod_{j=1}^m X_j^{\tilde{a}_{ij}}>x, & \;\;\;1 \leq i \leq n&\\ &\Rightarrow& \prod_{j=1}^m \max(X_j,1)^{\tilde{a}_{ij}}>x, & \;\;\;1 \leq i \leq n,& \end{align*} where we have used that $\tilde{a}_{ij}>0$ for all $1 \leq j \leq m$ with $\kappa_j>0$ and $X_j\geq 1$ for $1 \leq j \leq m$ with $\kappa_j=0$. The last inequalities imply that for $x>1$ \begin{align*} && \sum_{j=1}^m \tilde{a}_{ij}\frac{\ln(\max(X_j,1))}{\ln(x)}&>1, \;\;\;1 \leq i \leq n \\ & \Rightarrow & \sum_{j=1}^m \frac{\ln(\max(X_j,1))}{\ln(x)}&\geq \sum_{j=1}^m \kappa_j , \end{align*} because otherwise $\boldsymbol{\kappa}$ could not be an optimal solution to \eqref{Eq:posLP}, in contrast to Lemma \ref{Lem:makepositive} (a) and our assumptions. Thus, for $x>1$, \begin{equation}\label{Eq:upperboundproof} P\left(\prod_{j=1}^m X_j^{a_{ij}}>x,\;\;1 \leq i \leq n\right)\leq P\left(\prod_{j=1}^m\max(X_j,1)\geq x^{\sum_{j=1}^m\kappa_j}\right). \end{equation} By our assumptions, for all $\delta \in (0,1)$, \begin{equation*} E\left(\left(\prod_{j=1}^m\max(X_j,1)\right)^{1-\delta}\right)= \prod_{j=1}^m E\left(\max(X_j,1)^{1-\delta}\right) <\infty \end{equation*} and by the Markov inequality and \eqref{Eq:upperboundproof} we conclude \begin{equation*} P\left(\prod_{j=1}^m X_j^{a_{ij}}>x,\;\;1 \leq i \leq n\right)=O\left(x^{-(\sum_{j=1}^m\kappa_j)(1-\delta)}\right)\end{equation*} for all $\delta \in (0,1)$. Choosing $\delta < \epsilon/\sum_{j=1}^m \kappa_j$ yields \eqref{Eq:upperbound}. \end{proof}
\subsection{Proof of Theorem \ref{Th:maintheorem}}\label{Sec:proof} In order to prove Theorem \ref{Th:maintheorem}, we first deal with a setting that covers a slightly more general case for the solution of the linear program \eqref{Eq:LOP} than the one assumed in the statement of Theorem \ref{Th:maintheorem}. The proof of this result is by induction on the number of positive components in the unique optimal solution to \eqref{Eq:LOP}. Several auxiliary results needed for the proof can be found in Section \ref{Sec:auxresults}.
\begin{proposition}\label{Pr:mainprop} Let $\mathbf{A}=(a_{ij}) \in \mathbb{R}^{n \times m}$ with $n, m \in \mathbb{N}$ be such that the solution $\boldsymbol{\kappa}$ to the linear optimization problem \eqref{Eq:LOP} is unique with $\mathbf{A}\boldsymbol{\kappa}=\mathbf{1}$. Define $J=\{j \in \{1, \ldots, m\}: \kappa_j>0\}$.
Let $X_1, \ldots, X_m$ be independent non-negative random variables. Assume that \begin{eqnarray*} \mbox{ for } j \in J& : & X_j \mbox{ is regularly varying with index $-1$}, \\ \mbox{ for } j \in \{1, \ldots, m\} \setminus J& : & P(X_j\geq 1)=1 \mbox{ and } E(X_j^{1-\delta})<\infty \mbox{ for all } \delta \in (0,1). \end{eqnarray*} Then, \begin{eqnarray}\nonumber && \lim_{x \to \infty} \frac{P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, \;1 \leq i \leq n\right)}{\prod\limits_{j \in J}P(X_j>x^{\kappa_j})}\\ \nonumber &=&\int\limits_{M(\mathbf{A})}\prod_{j \in J}x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{j \in J}) \otimes P^{(X_j)_{j \notin J}}(\mbox{d}(x_j)_{j \notin J}) \in [0,\infty) \end{eqnarray} with $M(\mathbf{A}):=\{(x_1, \ldots, x_m) \in [0,\infty)^m: \prod_{j=1}^m x_j^{a_{ij}}>1, \; 1 \leq i \leq n\}$. \end{proposition} \begin{proof} The proof is by induction on the number $l$ of positive components in the unique optimal solution $\boldsymbol{\kappa}$. Note that $\boldsymbol{\kappa} \geq 0$ and that at least one component of $\boldsymbol{\kappa}$ has to be positive in order to satisfy $\mathbf{A} \boldsymbol{\kappa} \geq \mathbf{1}$. In the following, we assume w.l.o.g. that the first $l \in \mathbb{N}$ components of $\boldsymbol{\kappa}$ are positive and the last $m-l$ components are equal to zero (if this is not the case, interchange the $X_i$'s and the corresponding columns of $\mathbf{A}$ accordingly).
We start now with the case $l=1$, i.e. $\kappa_1>0$ and $\kappa_j=0$ for $2 \leq j \leq m$. Our assumptions imply that $a_{i1}\kappa_1=1$ for all $1 \leq i \leq n$, i.e. $a_{11}=\ldots=a_{n1}$ and $\kappa_1=a_{i1}^{-1}, 1 \leq i \leq n$. Thus, \begin{equation}\label{Eq:inductionstart}P\left(\prod_{j=1}^mX_j^{a_{ij}}>x, \;\; 1 \leq i \leq n\right)=P\left(X_1\min_{1 \leq i \leq n}\left(\prod_{j=2}^mX_j^{a_{ij}\kappa_1}\right)>x^{\kappa_1}\right). \end{equation} If the linear program \begin{equation}\label{Eq:LOPwithoutfirstcolumn} \mbox{find } \mathbf{x} \in [0, \infty)^{m-1} \; \mbox{such that } \left(\begin{array}{ccc} a_{12} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n2} & \cdots & a_{nm} \end{array}\right)\mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=1}^{m-1}x_i \to \min! \end{equation} has no feasible solution, then \begin{equation*} \min_{1 \leq i \leq n}\sum_{j=2}^ma_{ij}x_{j-1}<1, \;\;\; \forall \, \mathbf{x} \in [0, \infty)^{m-1}, \end{equation*} and thus \begin{equation}\label{Eq:smallerthane} \min_{1 \leq i \leq n} \left(\prod_{j=2}^mX_j^{a_{ij}\kappa_1}\right)<e^{\kappa_1} \;\;\; \mbox{a.s.}, \end{equation} because $\ln(X_2), \ldots, \ln(X_m) \geq 0$ almost surely by our assumptions. On the other hand, if there exists a feasible solution to \eqref{Eq:LOPwithoutfirstcolumn}, then there exists $\epsilon>0$ such that all feasible solutions $\mathbf{x}$ to \eqref{Eq:LOPwithoutfirstcolumn} satisfy $\sum_{i=1}^{m-1} x_i>\kappa_1+\epsilon$, since otherwise there would exist a solution $\mathbf{x'}$ $=(0,x_1', \ldots, x_{m-1}')^T$ $\neq (\kappa_1, 0, \ldots, 0)^T$ to \eqref{Eq:LOP} with $\sum_{j=1}^mx_j'\leq \kappa_1$, in contradiction to our assumptions. Hence, an optimal solution to \eqref{Eq:LOPwithoutfirstcolumn} exists with $\sum_{i=1}^{m-1} x_i>\kappa_1+\epsilon$. By Theorem \ref{Th:bounds} (b) we have \begin{equation*} P\left(\min_{1 \leq i \leq n} \left(\prod_{j=2}^mX_j^{a_{ij}\kappa_1}\right)>x\right)=o\left(x^{-1-\epsilon/(2\kappa_1)}\right), \;\;\; x \to \infty. \end{equation*} So, whether there exists a solution to \eqref{Eq:LOPwithoutfirstcolumn} or not, we have \begin{equation*}E\left(\left(\min_{1 \leq i \leq n} \left(\prod_{j=2}^mX_j^{a_{ij}\kappa_1}\right)\right)^{1+\epsilon/(4\kappa_1)}\right)<\infty \end{equation*} and we may thus apply Breiman's Lemma, cf.\ \cite{Br65}, to derive the asymptotic behavior of \eqref{Eq:inductionstart} as $x \to \infty$. This gives us \begin{eqnarray*} && \lim_{x \to \infty} \frac{P\left(\prod_{j=1}^mX_j^{a_{ij}}>x, \;\; 1 \leq i \leq n\right)}{P(X_1>x^{\kappa_1})}\\ &=& E\left(\min_{1 \leq i \leq n} \left(\prod_{j=2}^mX_j^{a_{ij}\kappa_1}\right)\right) \\ &=& \int\limits_{\{\mathbf{x}\in [0,\infty)^m:x_1>\max_{1 \leq i \leq n} \left(\prod_{j=2}^mx_j^{-a_{ij}\kappa_1}\right)\}}x_1^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}x_1)\otimes P^{(X_j)_{2 \leq j \leq m}}(\mbox{d}(x_j)_{2 \leq j \leq m}) \\ &=& \int\limits_{\{\mathbf{x}\in [0,\infty)^m:\prod_{j=1}^m x_j^{a_{ij}}>1, 1 \leq i \leq n\}}x_1^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}x_1)\otimes P^{(X_j)_{2 \leq j \leq m}}(\mbox{d}(x_j)_{2 \leq j \leq m}), \end{eqnarray*} which concludes the proof in the case $l=1$.
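To fix ideas, consider the following small special case (the concrete numbers are chosen purely for illustration and are not used elsewhere): let $n=1$, $m=2$ and $\mathbf{A}=(2,1)$, so that $\boldsymbol{\kappa}=(1/2,0)^T$ is the unique optimal solution to \eqref{Eq:LOP} and $\mathbf{A}\boldsymbol{\kappa}=\mathbf{1}$. If $X_1$ is regularly varying with index $-1$ and $X_2\geq 1$ a.s.\ with $E(X_2^{1-\delta})<\infty$ for all $\delta \in (0,1)$, the limit just derived reads \begin{equation*} \lim_{x \to \infty} \frac{P\left(X_1^2X_2>x\right)}{P\left(X_1>x^{1/2}\right)} =\int\limits_{\{(x_1,x_2)\in[0,\infty)^2:\,x_1^2x_2>1\}}x_1^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}x_1)\otimes P^{X_2}(\mbox{d}x_2) =\int\limits_{[0,\infty)}\int\limits_{x_2^{-1/2}}^{\infty}x_1^{-2}\,\mbox{d}x_1\, P^{X_2}(\mbox{d}x_2)=E\left(X_2^{1/2}\right), \end{equation*} which is exactly the conclusion of Breiman's Lemma after the substitution $x \mapsto x^{1/2}$.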
For the induction step, assume that Proposition \ref{Pr:mainprop} holds for all matrices $\mathbf{A}^\ast \in \mathbb{R}^{n^\ast \times m^\ast}, n^\ast, m^\ast \in \mathbb{N},$ for which the corresponding linear program \eqref{Eq:LOP} (with $\mathbf{A}$ replaced by $\mathbf{A}^\ast$) has a unique solution $\boldsymbol{\kappa}^\ast$ and for which $\mathbf{A}^\ast\boldsymbol{\kappa}^\ast=\mathbf{1}$ and at most $l-1 \geq 1$ components of $\boldsymbol{\kappa}^\ast$ are positive. In the following, assume that $\boldsymbol{\kappa}$ has $l$ positive components, again w.l.o.g. the first $l$ ones.
Define the map $\otimes_{\boldsymbol{\kappa}}$ as in Section \ref{Sec:background}. From Lemma \ref{lem:vectorisrv} we get that $P^{(X_j)_{1 \leq j \leq m }}$ is regularly varying on $\mathbb{O}_n$ with respect to $\otimes_{\boldsymbol{\kappa}}$.
Now, with $\mathbf{A}\boldsymbol{\kappa}=\mathbf{1}$ we have \begin{eqnarray*} P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, \;1 \leq i \leq n\right)&=&P\left(\prod_{j=1}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{a_{ij}}>1, \;1 \leq i \leq n\right)\\ &=& P(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} M), \end{eqnarray*} where $\mathbf{X}=(X_1, \ldots, X_m)$ and \begin{equation*}M = M(\mathbf{A}) := \left\{\mathbf{x} \in [0, \infty)^m: \prod_{j=1}^m x_j^{a_{ij}}>1, \; 1 \leq i \leq n\right\}.\end{equation*} For $\delta>0$ write \begin{eqnarray}\nonumber \frac{P(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} M)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}&=&\frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} \left(M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)\right)\right)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}\\ \label{Eq:boundedandunboundedset} && \hspace{-1cm}+\,\frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} \left(M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)^c\right)\right)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}. \end{eqnarray} We will first show that the second summand in \eqref{Eq:boundedandunboundedset} tends to zero when first $x \to \infty$ and then $\delta \searrow 0$. Note that \begin{eqnarray} \nonumber && \lim_{\delta \searrow 0} \limsup_{x \to \infty} \frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} \left(M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)^c\right)\right)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})} \\ \label{Eq:summandstozero} &\leq & \sum_{k=1}^l \lim_{\delta \searrow 0} \limsup_{x \to \infty} \frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} M, X_k\leq \delta x^{\kappa_k}\right)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}. \end{eqnarray} We will show that all summands in \eqref{Eq:summandstozero} equal zero. To this end, note first that we may apply Lemma \ref{Lem:makepositive} (b) to the matrix $\mathbf{A}$, i.e. there exists a matrix $\tilde{\mathbf{A}}$ such that $\boldsymbol{\kappa}$ as above is the unique optimal solution to the linear program \eqref{Eq:posLP} with $\tilde{a}_{ij}>0$ for $1 \leq i \leq n$ and $1 \leq j \leq l$ and $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}$. We have \begin{align}\nonumber P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} M, X_k\leq \delta x^{\kappa_k}\right)&= P\left(\prod_{j=1}^m X_j^{a_{ij}}>x, \;1 \leq i \leq n, X_k \leq \delta x^{\kappa_k}\right) \\ \label{Eq:tildebound} &\leq P\left(\prod_{j=1}^m X_j^{\tilde{a}_{ij}}>x, \;1 \leq i \leq n, X_k \leq \delta x^{\kappa_k}\right) \end{align} by Lemma \ref{Lem:makepositive} (b). For ease of notation, we restrict ourselves to the analysis for the summand $k=1$ in \eqref{Eq:summandstozero}. 
For $C>0$, use \eqref{Eq:tildebound}, $\tilde{a}_{i1}>0, 1 \leq i \leq n,$ and $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}$ to write \begin{eqnarray} \nonumber && \frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} M, X_1\leq \delta x^{\kappa_1}\right)}{\prod_{i=1}^l P(X_i>x^{\kappa_i})} \\ \nonumber &\leq& \frac{P\left(\prod_{j=1}^m X_j^{\tilde{a}_{ij}}>x, \;1 \leq i \leq n, X_1 \leq \delta x^{\kappa_1}\right)}{\prod_{i=1}^l P(X_i>x^{\kappa_i})} \\ \nonumber &= & \frac{\int\limits_{[0,\infty)}P\left(\frac{X_1}{x^{\kappa_1}}>z^{-1}, X_1\leq \delta x^{\kappa_1}\right)P^{\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{\tilde{a}_{ij}/\tilde{a}_{i1}}}(\mbox{d}z)}{\prod_{i=1}^l P(X_i>x^{\kappa_i})} \\ \nonumber &\leq & \frac{\int\limits_{(\delta^{-1}, x^{\kappa_1}/C]}P\left(\frac{X_1}{x^{\kappa_1}}>z^{-1}\right)/P(X_1>x^{\kappa_1})P^{\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{\tilde{a}_{ij}/\tilde{a}_{i1}}}(\mbox{d}z)}{\prod_{i=2}^l P(X_i>x^{\kappa_i})} \\ \nonumber && +\, \frac{\int\limits_{(x^{\kappa_1}/C, \infty)}P\left(\frac{X_1}{x^{\kappa_1}}>z^{-1}, X_1\leq \delta x^{\kappa_1}\right)P^{\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{\tilde{a}_{ij}/\tilde{a}_{i1}}}(\mbox{d}z)}{\prod_{i=1}^l P(X_i>x^{\kappa_i})} \\ \label{Eq:summandsIandII} &=:& I(x,\delta,C) + II(x,\delta,C). \end{eqnarray} We deal first with $I=I(x,\delta,C)$. Use $\tilde{a}_{ij}/\tilde{a}_{i1}>0, 1 \leq i \leq n, 2 \leq j \leq l,$ and $\kappa_j=0$ and $P(X_j \geq 1)=1, l< j \leq m,$ to obtain \begin{equation}\label{Eq:replacelnbylnplus} \min_{1 \leq i \leq n}\sum_{j=2}^m\frac{\tilde{a}_{ij}}{\tilde{a}_{i1}} \ln\left(\frac{X_j}{x^{\kappa_j}}\right) \leq \min_{1 \leq i \leq n}\sum_{j=2}^m\frac{\tilde{a}_{ij}}{\tilde{a}_{i1}} \left(\ln\left(\frac{X_j}{x^{\kappa_j}}\right)\right)_+ \;\; \mbox{a.s.} \end{equation} Choose $\epsilon \in (0,1)$ according to Lemma \ref{Lem:boundedbysum} such that the expression on the right hand side of \eqref{Eq:replacelnbylnplus} is a.s.\ bounded by \begin{equation*}\frac{1-\epsilon}{1+\epsilon} \sum_{j=2}^m \left(\ln\left(\frac{X_j}{x^{\kappa_j}}\right)\right)_+. \end{equation*} For this $\epsilon>0$, there exists $C>0$ such that \begin{equation*} \frac{P\left(\frac{X_1}{x^{\kappa_1}}>z^{-1}\right)}{P(X_1>x^{\kappa_1})}\leq (1+\epsilon)z^{1+\epsilon} \;\;\; \forall\, 1\leq z\leq x^{\kappa_1}/C, \, x > C\end{equation*} by Potter's bounds applied to $x \mapsto P(X_1>x)$ (cf.\ \cite{BiGoTe87}, Theorem 1.5.6). 
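For the reader's convenience, let us briefly indicate how this follows from the cited result (this is merely a restatement of Potter's bounds in our notation and involves no new assumptions): since $x \mapsto P(X_1>x)$ is regularly varying with index $-1$, for every $\epsilon>0$ there is a constant $C_0>0$ such that \begin{equation*} \frac{P(X_1>u)}{P(X_1>v)}\leq (1+\epsilon)\max\left(\left(\frac{u}{v}\right)^{-1+\epsilon},\left(\frac{u}{v}\right)^{-1-\epsilon}\right), \;\;\; u,v \geq C_0. \end{equation*} Applying this with $u=x^{\kappa_1}z^{-1}$ and $v=x^{\kappa_1}$ yields the bound $(1+\epsilon)z^{1+\epsilon}$ for $z\geq 1$, and the restrictions $u,v\geq C_0$ translate into $z \leq x^{\kappa_1}/C$ and $x>C$ for a suitably chosen $C>0$.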
So, for $\delta\leq 1$, the numerator of $I(x, \delta, C)$ is bounded by \begin{eqnarray*} && \int_{(\delta^{-1}, \infty)}(1+\epsilon)z^{1+\epsilon} P^{\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{\tilde{a}_{ij}/\tilde{a}_{i1}}}(\mbox{d}z)\\ &= & \int_{\tilde{M}(\tilde{\mathbf{A}}, \delta)} (1+\epsilon)\min_{1 \leq i \leq n}\prod\limits_{j=2}^m z_j^{(1+\epsilon)\tilde{a}_{ij}/\tilde{a}_{i1}}P^{\left(\frac{X_j}{x^{\kappa_j}}\right)_{2 \leq j \leq m}}(\mbox{d}\mathbf{z})\\ &\leq & \int_{\tilde{M}(\tilde{\mathbf{A}}, \delta)} (1+\epsilon)\prod_{j=2}^m \max(1,z_j)^{1-\epsilon} P^{\left(\frac{X_j}{x^{\kappa_j}}\right)_{2 \leq j \leq m}}(\mbox{d}\mathbf{z})\\ &\leq & \sum_{(\beta_j)_{2 \leq j \leq m} \in \{0,1-\epsilon\}^{m-1}}\int_{\tilde{M}(\tilde{\mathbf{A}}, \delta)} (1+\epsilon)\prod_{j=2}^m z_j^{\beta_j} P^{\left(\frac{X_j}{x^{\kappa_j}}\right)_{2 \leq j \leq m}}(\mbox{d}\mathbf{z}) \end{eqnarray*} with \begin{equation*} \tilde{M}(\tilde{\mathbf{A}}, \delta):=\left\{(z_2, \ldots, z_m) \in [0,\infty)^{m-1}:\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m z_j^{\tilde{a}_{ij}/\tilde{a}_{i1}}>\delta^{-1}\right\}.\end{equation*} Note that $\sum_{k=2}^m \tilde{a}_{ik}\kappa_k>0, 1 \leq i \leq n,$ by our assumptions about $\tilde{\mathbf{A}}$ and $\boldsymbol{\kappa}$ and let \begin{equation}\label{Eq:Mdeltadef} D(\delta):=\min\limits_{1 \leq i \leq n}\delta^{-\tilde{a}_{i1}/\sum_{k=2}^m\tilde{a}_{ik}\kappa_k}>0 \end{equation} and \begin{equation}\label{Eq:atildeprime} \tilde{\mathbf{A}}':=(\tilde{a}_{ij}')_{1 \leq i \leq n, 2 \leq j \leq m}=\left(\frac{\tilde{a}_{ij}}{\sum_{k=2}^m\tilde{a}_{ik}\kappa_k}\right)_{1 \leq i \leq n, 2 \leq j \leq m}. \end{equation} Hence, \begin{eqnarray*}
\tilde{M}(\tilde{\mathbf{A}}, \delta) &\subset & \left\{(z_2, \ldots, z_m) \in [0,\infty)^{m-1}:\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m z_j^{\tilde{a}_{ij}'}>D(\delta)\right\}.\\ \end{eqnarray*} Note that a feasible solution to the linear program \begin{equation}\label{Eq:LOPtildeprime} \mbox{find } \mathbf{x}=(x_2, \ldots, x_m)^T \geq \mathbf{0} \; \mbox{such that } \tilde{\mathbf{A}}' \mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=2}^mx_i \to \min! \end{equation} is given by $\tilde{\boldsymbol{\kappa}}'=(\kappa_2, \ldots, \kappa_m)^T$ with $\tilde{\mathbf{A}}'\tilde{\boldsymbol{\kappa}}'=\mathbf{1}$. Furthermore, this is also the unique optimal solution to \eqref{Eq:LOPtildeprime}, because if there were another feasible solution $(x_2, \ldots, x_m)^T$ to it with $\sum_{j=2}^m x_j \leq \sum_{j=2}^m\kappa_j$, then $\mathbf{x}':=(\kappa_1, x_2, \ldots, x_m)^T$ would be a feasible solution to \eqref{Eq:posLP} as well because of \begin{equation*}\tilde{\mathbf{A}}\mathbf{x}'=\left(\left(\tilde{a}_{i1} \kappa_1+\sum_{j=2}^m\tilde{a}_{ij}x_j\right)_{1 \leq i \leq n}\right)^T\geq \left(\left(\tilde{a}_{i1}\kappa_1+\sum_{k=2}^m\tilde{a}_{ik}\kappa_k\right)_{1 \leq i \leq n}\right)^T = \mathbf{1}, \end{equation*} as $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}$. This would contradict our assumption about the uniqueness of $\boldsymbol{\kappa}$ and Lemma \ref{Lem:makepositive} (b). Thus, \begin{eqnarray*} I&\leq& \sum_{(\beta_j)_{2 \leq j \leq m} \in \{0,1-\epsilon\}^{m-1}}\frac{\int\limits_{\left\{\mathbf{z}:\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m z_j^{\tilde{a}'_{ij}}>D(\delta)\right\}}(1+\epsilon)\prod_{j=2}^m z_j^{\beta_j} P^{\left(\frac{X_j}{x^{\kappa_j}}\right)_{2 \leq j \leq m}}(\mbox{d}\mathbf{z})}{\prod_{i=2}^l P(X_i>x^{\kappa_i})} \\ &=& \sum_{(\beta_j)_{2 \leq j \leq m} \in \{0,1-\epsilon\}^{m-1}} \frac{\int\limits_{\left\{\mathbf{z}:\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m z_j^{\tilde{a}'_{ij}}>1\right\}}\prod_{j=2}^m z_j^{\beta_j} P^{\left(\frac{X_j}{(D(\delta)x)^{\kappa_j}}\right)_{2 \leq j \leq m}}(\mbox{d}\mathbf{z})}{\prod_{i=2}^l P(X_i>(D(\delta)x)^{\kappa_i})} \\ && \cdot \frac{\prod_{i=2}^l P(X_i>(D(\delta)x)^{\kappa_i})}{\prod_{i=2}^l P(X_i>x^{\kappa_i})}(1+\epsilon)\prod_{j=2}^l D(\delta)^{\beta_j \kappa_j}. \end{eqnarray*} Now, by Proposition \ref{Lem:momentsconverge} combined with the induction hypothesis and the properties of $\tilde{\mathbf{A}}'$, the first factor of each summand in the above expression converges to the finite expression \begin{equation*} \int\limits_{M(\tilde{\mathbf{A}}')}\prod_{j=2}^m x_j^{\beta_j} \prod_{j=2}^l x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{2 \leq j \leq l}) \otimes P^{(X_j)_{l< j \leq m}}(\mbox{d}(x_j)_{l< j \leq m}),\end{equation*} as $x \to \infty,$ whereas the remainder of the expression converges to \begin{equation*}(1+\epsilon)\prod_{j=2}^l D(\delta)^{-\kappa_j(1-\beta_j)} \end{equation*} with $\kappa_j(1-\beta_j)>0$ for $2 \leq j \leq l$ by our assumptions. The first limit does not depend on the value of $\delta>0$, while the second converges to 0 as $\delta \searrow 0$, since then $D(\delta) \to \infty$ by \eqref{Eq:Mdeltadef}. We have thus shown that \begin{equation*} \lim_{\delta \searrow 0}\limsup_{x \to \infty}I(x,\delta,C)=0\end{equation*} for $C$ large enough. Let us now deal with $II=II(x,\delta,C)$ from \eqref{Eq:summandsIandII}. 
We have \begin{align}\nonumber II &\leq \frac{P\left(\min\limits_{1 \leq i \leq n}\prod\limits_{j=2}^m \left(\frac{X_j}{x^{\kappa_j}}\right)^{\tilde{a}_{ij}/\tilde{a}_{i1}}>x^{\kappa_1}/C\right)}{\prod_{i=1}^l P(X_i>x^{\kappa_i})}\\ \nonumber &\leq \frac{P\left(\prod\limits_{j=2}^m \max(1,X_j)^{\tilde{a}_{ij}}>x/C^{\tilde{a}_{i1}}, \;\; 1 \leq i \leq n\right)}{\prod_{i=1}^l P(\max(1,X_i)>x^{\kappa_i})}\cdot \frac{\prod_{i=1}^l P(\max(1,X_i)>x^{\kappa_i})}{\prod_{i=1}^l P(X_i>x^{\kappa_i})} \\
\label{Eq:boundforII} &\leq \frac{P\left(\prod\limits_{j=2}^m \max(1,X_j)^{\tilde{a}_{ij}}>x D'(C), \;\; 1 \leq i \leq n\right)}{\prod_{i=1}^l P(\max(1,X_i)>x^{\kappa_i})}\cdot \frac{\prod_{i=1}^l P(\max(1,X_i)>x^{\kappa_i})}{\prod_{i=1}^l P(X_i>x^{\kappa_i})}, \end{align} where we set \begin{equation*} D'(C):=\min_{1 \leq i \leq n} C^{-\tilde{a}_{i1}}>0 \end{equation*} for abbreviation. Set \begin{equation*} \tilde{\mathbf{A}}'':=(\tilde{a}_{ij})_{1 \leq i \leq n, 2 \leq j \leq m} \in \mathbb{R}^{n \times (m-1)} \end{equation*} and consider the linear program \begin{equation}\label{Eq:LOPtildeprimeprime} \mbox{find } \mathbf{x}=(x_2, \ldots, x_m)^T \geq \mathbf{0} \; \mbox{such that } \tilde{\mathbf{A}}'' \mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=2}^mx_i \to \min! \end{equation} If this linear program has no feasible solution then \begin{equation*}\min_{1 \leq i \leq n}\prod_{j=2}^m\max(1,X_j)^{\tilde{a}_{ij}}<e \;\; \mbox{a.s.} \end{equation*} (cf.\ \eqref{Eq:smallerthane} for analogous reasoning), so the first factor in \eqref{Eq:boundforII} equals 0 for $x$ large enough. On the other hand, if there exists a feasible solution to \eqref{Eq:LOPtildeprimeprime}, then there exists $\epsilon>0$ such that all feasible solutions $(x_2, \ldots, x_m)^T$ to \eqref{Eq:LOPtildeprimeprime} satisfy $\sum_{j=2}^m x_j>\sum_{j=1}^m\kappa_j + \epsilon$, since otherwise there would exist a solution $\mathbf{x'}$ $=(0,x_2, \ldots, x_m)^T$ $\neq \boldsymbol{\kappa}$ to \eqref{Eq:posLP} with $\sum_{j=1}^mx_j'\leq \sum_{j=1}^m\kappa_j$, in contradiction to our assumptions and Lemma \ref{Lem:makepositive} (b). In the latter case, the numerator of the first factor in \eqref{Eq:boundforII} is of smaller order than $x^{-\sum_{j=1}^m\kappa_j-\epsilon/2}$ as $x \to \infty$ by Theorem \ref{Th:bounds} (b), while the denominator is regularly varying in $x$ with index $-\sum_{j=1}^m\kappa_j$ and the second factor in \eqref{Eq:boundforII} equals 1 for $x \geq 1$. So, in both cases, and for all $C>0$ \begin{equation*} \lim_{\delta \searrow 0}\limsup_{x \to \infty}II(x,\delta,C)=\limsup_{x \to \infty}II(x,\delta,C)=0,\end{equation*} and the first summand in \eqref{Eq:summandstozero} is equal to zero. All other summands can be treated analogously.
Taken together, we have shown that \begin{equation}\label{Eq:unboundedisfinite} \lim_{\delta \searrow 0} \limsup_{x \to \infty} \frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} \left(M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)^c\right)\right)}{\prod_{i=1}^l P(X_i>x^{\kappa_i})}=0. \end{equation} With $\mu(\cdot)$ as defined in \eqref{Eq:definemu} we have \begin{eqnarray*}
&& \lim_{x \to \infty} \frac{P\left(\prod_{j=1}^mX_j^{a_{ij}}>x, 1 \leq i \leq n\right)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}= \lim_{x \to \infty} \frac{P(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} M)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}\\
&=& \lim_{\delta \searrow 0} \lim_{x \to \infty} \frac{P\left(\mathbf{X} \in x \otimes_{\boldsymbol{\kappa}} \left(M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)\right)\right)}{\prod_{i=1}^lP(X_i>x^{\kappa_i})}\\
&=& \lim_{\delta \searrow 0} \mu\left(M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)\right)\\
&=& \lim_{\delta \searrow 0} \int\limits_{M \cap \left((\delta,\infty)^l \times [0,\infty)^{m-l}\right)}\prod\limits_{j=1}^l x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{1 \leq j \leq l})\otimes P^{(X_j)_{l < j \leq m}}(\mbox{d}(x_j)_{l< j \leq m}) \\
&=& \int\limits_{M}\prod\limits_{j=1}^l x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{1 \leq j \leq l})\otimes P^{(X_j)_{l< j \leq m}}(\mbox{d}(x_j)_{l< j \leq m}), \end{eqnarray*} by monotone convergence, where we used that \begin{equation*} M \cap\left((0,\infty)^l \times [0,\infty)^{m-l}\right)=M, \end{equation*} because $\kappa_j>0$ for $1 \leq j \leq l$ implies that for every such $j$ there exists at least one $1 \leq i \leq n$ with $a_{ij}>0$. But then $\min_{1 \leq i \leq n} \prod_{k=1}^m x_k^{a_{ik}}=0$ as soon as $x_j=0$ for some $1 \leq j \leq l$, so $M \subset \left((0,\infty)^l \times [0,\infty)^{m-l}\right)$.
This concludes the proof of Proposition \ref{Pr:mainprop}. \end{proof} \begin{proof}[Proof of Theorem \ref{Th:maintheorem}] Let us again assume w.l.o.g.\ that the first $n \geq 1$ components of $\boldsymbol{\kappa}$ are positive and the last $m-n\geq 0$ components are equal to zero. We start with some implications of our assumptions about the matrix $\mathbf{A}$. Since the optimal solution $\boldsymbol{\kappa}$ to \eqref{Eq:LOP} is unique, it must be a vertex of the polyhedron defined by $\{\mathbf{x} \geq \mathbf{0}:\mathbf{A}\mathbf{x}\geq \mathbf{1}\}$, cf.\ \cite{Si96}, Theorem 1.5. Each vertex of $\{\mathbf{x} \geq \mathbf{0}:\mathbf{A}\mathbf{x}\geq \mathbf{1}\}$ corresponds to a so-called basic feasible solution (cf.\ \cite{Si96}, Theorem 1.2) of the standard form linear program \begin{equation}\label{Eq:LOPstandard} \mbox{find } \mathbf{x} \in \mathbb{R}^{m+n}\; \mbox{such that } (\mathbf{A};(-1)\cdot \mathbf{E}_n)\mathbf{x} =\mathbf{1}, \;\; \mathbf{x} \geq \mathbf{0},\;\;\; \sum_{i=1}^m x_i \to \min!, \end{equation} where the matrix $(\mathbf{A};(-1)\cdot \mathbf{E}_n) \in \mathbb{R}^{n \times (m+n)}$ consists of the columns of $\mathbf{A}$ in its first $m$ columns and of the columns of the $n$-dimensional unit matrix $\mathbf{E}_n$, multiplied by $-1$, in its last $n$ columns. The basic feasible solutions of \eqref{Eq:LOPstandard} can be found by choosing $n$ linearly independent columns of $(\mathbf{A};(-1)\cdot \mathbf{E}_n)$ with indices $B \subset \{1, \ldots, m+n\}$, denoting the resulting matrix by $(\mathbf{A};(-1)\cdot \mathbf{E}_n)_B$ and deriving $\mathbf{s}_B=((s_j)_{j \in B})^T:=((\mathbf{A};(-1)\cdot \mathbf{E}_n)_B)^{-1}\mathbf{1}$. If $\mathbf{s}_B \geq \mathbf{0}$, then we call \begin{equation*}\mathbf{x}_B=(x_1, \ldots, x_{m+n})^T \;\;\; \mbox{with} \;\; \begin{cases}
x_j=s_j, \;\; &\mbox{if } j \in B, \\
x_j=0, \;\; &\mbox{if } j \in \{1, \ldots, m+n\} \setminus B
\end{cases} \end{equation*} a basic feasible solution to \eqref{Eq:LOPstandard}. The corresponding solution to \eqref{Eq:LOP} is given by the first $m$ components of $\mathbf{x}_B$, the remaining last $n$ components of $\mathbf{x}_B$ are called slack variables. Since we assumed that the first $n$ components of $\boldsymbol{\kappa}$ are positive and that the optimal solution is unique, it can only correspond to the basic feasible solution with $B=\{1, \ldots, n\}$ which implies that $\mathbf{A}_{\boldsymbol{\kappa}}$, the matrix which consists of only the first $n$ columns of $\mathbf{A}$, is invertible and $(\kappa_1, \ldots, \kappa_n)^T=(\mathbf{A}_{\boldsymbol{\kappa}})^{-1}\mathbf{1}$ which leads to $\mathbf{A}\boldsymbol{\kappa}=\mathbf{1}$. Thus, the assumptions about $\mathbf{A}$ of Theorem \ref{Th:maintheorem} are a special case of the assumptions about $\mathbf{A}$ of Proposition \ref{Pr:mainprop}. Furthermore, the optimal value of \eqref{Eq:LOP} equals \begin{equation}\label{Eq:optval} \sum_{j=1}^m\kappa_j=\sum_{j=1}^n\kappa_j=\mathbf{1}^T(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})\mathbf{1}. \end{equation}
Since we assumed $\boldsymbol{\kappa}$ to be unique and non-degenerate, the optimal solution to the dual problem \eqref{Eq:DualLO} is unique and non-degenerate as well, cf.\ \cite{Si96}, Theorem 2.11. Furthermore, the optimal solution $\hat{\boldsymbol{\kappa}}$ to \eqref{Eq:DualLO} is in our case given by \begin{equation}\label{Eq:identitykappahat} \hat{\boldsymbol{\kappa}}=(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1}, \end{equation} cf.\ \cite{Si96}, Theorem 2.2. This explains Remark~\ref{Rem:dualcoefs}~(a).
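To illustrate these identities in a small example (the concrete matrix is hypothetical and only serves as an illustration), take $n=m=2$ and \begin{equation*} \mathbf{A}=\mathbf{A}_{\boldsymbol{\kappa}}=\left(\begin{array}{cc} 2 & 1 \\ 1 & 2 \end{array}\right), \;\;\; \mathbf{A}_{\boldsymbol{\kappa}}^{-1}=\frac{1}{3}\left(\begin{array}{cc} 2 & -1 \\ -1 & 2 \end{array}\right). \end{equation*} Then the unique and non-degenerate optimal solutions to \eqref{Eq:LOP} and \eqref{Eq:DualLO} are $\boldsymbol{\kappa}=\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{1}=(1/3,1/3)^T$ and $\hat{\boldsymbol{\kappa}}=(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1}=(1/3,1/3)^T$, and the optimal value \eqref{Eq:optval} equals $\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{1}=2/3$.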
Again w.l.o.g.\ assume in the following that $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j=0$ for $n<j \leq n'$ with $n \leq n' \leq m$ and that $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j\neq 0$ for $n'<j \leq m$. Define now for $1 \leq j \leq m$ \begin{equation}\label{Eq:DefXtilde} \hat{X}_j=\begin{cases}
X_j, & \mbox{ for } j \leq n, \\
X_j^{\epsilon}, & \mbox{ for } n< j \leq n', \\
X_j^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j+\epsilon}, & \mbox{ for } j>n', (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j > 0, \\
X_j^{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j-\epsilon}, & \mbox{ for } j>n', (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j < 0,
\end{cases} \end{equation} with $\epsilon>0$ as in the statement of Theorem \ref{Th:maintheorem} and set furthermore $\hat{\mathbf{A}}$ with \begin{equation}\label{Eq:DefAtilde}\hat{a}_{ij}=\begin{cases}
a_{ij}, & \mbox{ for } 1 \leq i \leq n, j \leq n, \\
\frac{a_{ij}}{\epsilon}, & \mbox{ for } 1 \leq i \leq n, n<j \leq n', \\
\frac{a_{ij}}{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j+\epsilon}, & \mbox{ for } 1 \leq i \leq n, j>n', (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j > 0, \\
\frac{a_{ij}}{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j-\epsilon}, & \mbox{ for } 1 \leq i \leq n, j>n', (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j < 0.
\end{cases} \end{equation}
Obviously, this leads to
\begin{equation}\label{Eq:firsttransfo}P\left(\prod_{j=1}^mX_j^{a_{ij}}>c_ix, \; 1 \leq i \leq n\right)=P\left(\prod_{j=1}^m\hat{X}_j^{\hat{a}_{ij}}>c_ix, 1 \leq i \leq n\right). \end{equation} In order to apply Proposition \ref{Pr:mainprop} we will distinguish between events where $\hat{X}_j\geq 1$ and those where $\hat{X}_j<1$ for $n< j \leq n'$. Therefore, for $J \subset \{n+1, \ldots, n'\}$ denote the event $\{\hat{X}_j \geq 1 \mbox{ for } j \in \{n+1, \ldots, n'\} \setminus J, \hat{X}_j < 1 \mbox{ for } j \in J\}$ by $B(J)$, and write \begin{eqnarray} \nonumber && P\left(\prod_{j=1}^m \hat{X}_j^{\hat{a}_{ij}}>c_i x, \; 1 \leq i \leq n\right)\\
\label{Eq:condsummands}&=& \sum_{J \subset \{n+1, \ldots, n'\}} P\left(\prod_{j=1}^m \hat{X}_j^{\hat{a}_{ij}}>c_ix, \; 1 \leq i \leq n \, \Bigg| \, B(J)\right)P(B(J)). \end{eqnarray}
For $J$ with $P(B(J))>0$ define now independent random variables $\hat{X}_j^{(J)}, 1 \leq j \leq m$, with \begin{equation}\label{Eq:DefXJtilde} P^{\hat{X}_j^{(J)}}=\begin{cases}
P^{\hat{X}_j}, & \mbox{ for } j \in \{1, \ldots, n, n'+1, \ldots, m\}, \\
P^{\hat{X}_j^{-1}|\hat{X}_j<1}, & \mbox{ for } j \in J, \\
P^{\hat{X}_j|\hat{X}_j\geq 1}, & \mbox{ for } j \in \{n+1, \ldots, n'\} \setminus J. \\
\end{cases} \end{equation} Furthermore, set $\hat{\mathbf{A}}^{(J)}$ with \begin{equation}\label{Eq:DefAJtilde} \hat{a}_{ij}^{(J)}=\begin{cases}
\hat{a}_{ij}, & \mbox{ for } 1 \leq i \leq n, j \in \{1, \ldots, m\} \setminus J, \\
-\hat{a}_{ij}, & \mbox{ for } 1 \leq i \leq n, j \in J.
\end{cases} \end{equation} By independence of the $\hat{X}_j$'s and of the $\hat{X}_j^{(J)}$'s, the first factor of each summand in \eqref{Eq:condsummands} is equal to \begin{equation}\label{Eq:probwithX^{(J)}} P\left(\prod_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n\right) \end{equation} for all $J\subset \{n+1, \ldots, n'\}$ with $P(B(J))>0$. The vector $\boldsymbol{\kappa}$ is a basic feasible solution to \begin{equation}\label{Eq:LO(J)} \mbox{find } \mathbf{x} \geq \mathbf{0} \; \mbox{such that } \hat{\mathbf{A}}^{(J)} \mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=1}^mx_i \to \min!, \end{equation} because $\hat{\mathbf{A}}^{(J)}\boldsymbol{\kappa}=\mathbf{1}$, as the first $n$ columns of $\hat{\mathbf{A}}^{(J)}$ are identical to those of $\mathbf{A}$. Furthermore, \begin{equation}\label{Eq:alldualcases} (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\hat{\mathbf{A}}^{(J)})_j= \begin{cases} (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j=1, & \mbox{ if } j \leq n, \\ \epsilon^{-1}(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j=0, & \mbox{ if } j \in \{n+1, \ldots, n'\}\setminus J,\\ \epsilon^{-1}(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}(-\mathbf{A}))_j=0, & \mbox{ if } j \in J,\\ \frac{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1} \mathbf{A})_j}{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j+\epsilon}\in (0,1), & \mbox{ if } j>n', (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j> 0, \\ \frac{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1} \mathbf{A})_j}{(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j-\epsilon} \in (0,1), & \mbox{ if } j>n', (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j< 0,
\end{cases} \end{equation} which proves that $\boldsymbol{\kappa}$ is the unique optimal solution to \eqref{Eq:LO(J)}, because $1-(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\hat{\mathbf{A}}^{(J)})_j$ is strictly positive for all non-basic variables $n<j\leq m$ (cf.\ the analogue of Theorem 1.6 and the remark after the proof of this theorem in \cite{Si96} for a linear minimization problem instead of a maximization problem).
Let now $J \subset \{n+1, \ldots, n'\}$ and let $\tilde{\hat{\mathbf{A}}}^{(J)}$ be the matrix described in Lemma \ref{Lem:makepositive} (c), corresponding to $\hat{\mathbf{A}}^{(J)}$ with $\tilde{\hat{a}}_{ij}^{(J)}>0$ for all $1 \leq i \leq n, j \in \{1, \ldots, n, n'+1, \ldots, m\}$. Then, for $c>0$, \begin{align} \nonumber & P\left(\prod_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n\right)\\ \nonumber = & P\left(\prod_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n, \hat{X}_j^{(J)} \geq c, n'< j \leq m\right) \\ \label{Eq:twoXhatsummands} + & \, P\left(\prod_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n, \exists \; n'< j \leq m:\hat{X}_j^{(J)} < c \right) \end{align} and for the second summand we have \begin{eqnarray*}
&& P\left(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n, \exists \; n'< j \leq m:\hat{X}_j^{(J)} < c\right)\\
&\leq & \sum_{\emptyset \neq K \subset \{n'+1, \ldots, m\}} P\Bigg(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\tilde{\hat{a}}^{(J)}_{ij}}>x \min\limits_{1 \leq k \leq n}c_k, \\
&& \hspace{3cm} \hat{X}_j^{(J)}<c, j \in K, \hat{X}_j^{(J)}\geq c, j \in \{n'+1, \ldots, m\} \setminus K\Bigg)\\
&\leq & \sum_{\emptyset \neq K \subset \{n'+1, \ldots, m\}}P\Bigg(\prod\limits_{j \in \{1, \ldots, n'\}} \left(\hat{X}_j^{(J)}\right)^{\tilde{\hat{a}}^{(J)}_{ij}}\prod\limits_{j \in \{n'+1, \ldots, m\} \setminus K} \max\left(1,\hat{X}_j^{(J)}\right)^{\tilde{\hat{a}}^{(J)}_{ij}}\\
&& > x \min\limits_{1 \leq k \leq n}c_k\left( \min\limits_{1 \leq i \leq n} c^{-\sum\limits_{j \in K}\tilde{\hat{a}}_{ij}^{(J)}}\right), \; 1 \leq i \leq n\; \Bigg). \end{eqnarray*} Set \begin{equation*} D(c)=D(c,c_1, \ldots, c_n):=\left(\min\limits_{1 \leq k \leq n}c_k\right)\left( \min\limits_{1 \leq i \leq n} c^{-\sum\limits_{j \in K}\tilde{\hat{a}}_{ij}^{(J)}}\right)>0\end{equation*} for abbreviation and note that by our assumptions and Lemma \ref{Lem:makepositive} (c) $\boldsymbol{\kappa}$ is the unique optimal solution to \begin{equation*} \mbox{find } \mathbf{x} \geq \mathbf{0} \; \mbox{such that } \tilde{\hat{\mathbf{A}}}^{(J)} \mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=1}^mx_i \to \min!,\end{equation*} with $\tilde{\hat{\mathbf{A}}}^{(J)}\boldsymbol{\kappa}=\mathbf{1}$, that $P^{\hat{X}_j^{(J)}}=P^{X_j}, 1 \leq j \leq n$, that $\hat{X}_{n+1}^{(J)}, \ldots, \hat{X}_{n'}^{(J)}\geq 1$ a.s. with $E(\hat{X}_j^{(J)})<\infty, n<j \leq n',$ and that $E(\max(1,\hat{X}_j^{(J)}))<\infty, n'<j \leq m$. Therefore, apply Proposition \ref{Pr:mainprop} to obtain for $\emptyset \neq K \subset \{n'+1, \ldots, m\}$ \begin{eqnarray*} && \frac{P\Big(\prod\limits_{j \in \{1, \ldots, n'\}} \left(\hat{X}_j^{(J)}\right)^{\tilde{\hat{a}}^{(J)}_{ij}}\prod\limits_{j \in \{n'+1, \ldots, m\} \setminus K} \max\left(1,\hat{X}_j^{(J)}\right)^{\tilde{\hat{a}}^{(J)}_{ij}}>D(c) x, \; 1 \leq i \leq n\Big)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})} \\ & \to & (D(c))^{-\sum_{j=1}^n\kappa_j} D(J), \end{eqnarray*} as $x \to \infty$ for some finite constant $D(J)$ which does not depend on $c$. As $D(c) \to \infty$ for $c \searrow 0$ we conclude from \eqref{Eq:twoXhatsummands} that \begin{align} \nonumber & \lim_{x \to \infty} \frac{P\left(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n\right)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})}\\ \label{Eq:limitctozero}&= \lim_{c \searrow 0}\lim_{x \to \infty} \frac{P\left(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n, \hat{X}_j^{(J)} \geq c, n'< j \leq m\right)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})} \end{align} Let now \begin{equation*} c_i(c)=c_i c^{-\sum_{j={n'+1}}^m \hat{a}_{ij}^{(J)}}=c_i c^{-\sum_{j={n'+1}}^m \hat{a}_{ij}}, \;\;\; 1 \leq i \leq n. \end{equation*} Then \begin{eqnarray} \nonumber && \lim_{x \to \infty} \frac{P\left(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n, \hat{X}_j^{(J)} \geq c, n'< j \leq m\right)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})} \\ \label{Eq:limitwithcs} &=& \lim_{x \to \infty} \prod_{j=n'+1}^m P(\hat{X}_j^{(J)} \geq c)\left(\prod_{j=1}^n P(X_j>x^{\kappa_j})\right)^{-1}\\
\nonumber && P\left(\prod\limits_{j=1}^{n'} \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}\prod\limits_{j=n'+1}^{m} \left(\frac{\hat{X}_j^{(J)}}{c}\right)^{\hat{a}^{(J)}_{ij}}>c_i(c) x, \; 1 \leq i \leq n \Bigg| \min\limits_{n'<j\leq m} \hat{X}_j^{(J)} \geq c \right). \end{eqnarray} The last factor in the above expression can be written as \begin{equation*} P\left(\prod\limits_{j=1}^{m} \left(\hat{X}_j^{(J,c)}\right)^{\hat{a}^{(J)}_{ij}}>c_i(c) x, \; 1 \leq i \leq n\right) \end{equation*} where $\hat{X}_j^{(J,c)}, 1 \leq j \leq m,$ denote independent random variables with \begin{equation*} P^{\hat{X}_j^{(J,c)}}=\begin{cases}
P^{\hat{X}_j^{(J)}}, & \mbox{ for } 1 \leq j \leq n', \\
P^{c^{-1}\hat{X}_j^{(J)}|\hat{X}_j^{(J)}\geq c}, & \mbox{ for } n'< j \leq m.
\end{cases} \end{equation*} Set \begin{equation*} \hat{c}_j(c)=\begin{cases} \prod_{i=1}^n c_i(c)^{(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_{ji}}, & 1 \leq j \leq n, \\ 1, & n<j \leq m, \end{cases}\end{equation*} which implies that \begin{equation*} \prod_{j=1}^m \hat{c}_j(c)^{\hat{a}_{ij}^{(J)}}=\prod_{j=1}^m \hat{c}_j(c)^{a_{ij}}=\exp\left(\sum_{j=1}^n a_{ij}\sum_{k=1}^n(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_{jk} \ln(c_k(c))\right)=c_i(c), \;\; 1 \leq i \leq n.\end{equation*} Then we have \begin{eqnarray}\label{Eq:limitstep1} && \frac{P\left(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J,c)}\right)^{\hat{a}_{ij}^{(J)}}>c_i(c) x, \;1 \leq i \leq n\right)}{\prod_{j=1}^n P(X_j>x^{\kappa_j})}\\ \label{Eq:twocfactors} &=& \frac{P\left(\prod\limits_{j=1}^m \left(\frac{\hat{X}_j^{(J,c)}}{\hat{c}_j(c)}\right)^{\hat{a}_{ij}^{(J)}}>x, \;1 \leq i \leq n\right)}{\prod_{j=1}^n P(X_j>\hat{c}_j(c)x^{\kappa_j})} \cdot \frac{\prod_{j=1}^n P(X_j>\hat{c}_j(c)x^{\kappa_j})}{\prod_{j=1}^n P(X_j>x^{\kappa_j})}. \end{eqnarray} As $x \to \infty$, in view of \eqref{Eq:identitykappahat}, the second factor of \eqref{Eq:twocfactors} converges to \begin{eqnarray}&&\nonumber \prod_{j=1}^n (\hat{c}_j(c))^{-1}=\prod_{j=1}^n\prod_{i=1}^n (c_i(c))^{-(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_{ji}}=\prod_{i=1}^n (c_i(c))^{-((\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1})_i}=\prod_{i=1}^n (c_i(c))^{-\hat{\kappa}_i}\\ \label{Eq:limitofcs} &=&\prod_{i=1}^n\left(c_i^{-\hat{\kappa}_i}c^{\hat{\kappa}_i \sum_{j=n'+1}^m\hat{a}_{ij}^{(J)}}\right)=c^{\sum_{i=1}^n\sum_{j=n'+1}^m \hat{a}_{ij}^{(J)}\hat{\kappa}_i}\prod_{i=1}^n c_i^{-\hat{\kappa}_i}. \end{eqnarray}
Note that $P^{\hat{X}_j^{(J,c)}}=P^{X_j}$ for $1 \leq j \leq n$ and \begin{equation*}\frac{\hat{X}_j^{(J,c)}}{\hat{c}_j(c)}=\hat{X}_j^{(J,c)}\geq 1 \mbox{ a.s.}, \;\; E\left(\frac{\hat{X}_j^{(J,c)}}{\hat{c}_j(c)}\right)<\infty \mbox{ for } n<j\leq m.\end{equation*} Hence, we can apply Proposition \ref{Pr:mainprop} to see that the first factor of \eqref{Eq:twocfactors} converges to \begin{equation*} \int_{M(\hat{\mathbf{A}}^{(J)})} \prod_{j=1}^nx_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{1 \leq j \leq n}) \otimes P^{(\hat{X}_j^{(J,c)})_{n< j \leq m}}(\mbox{d}(x_j)_{n< j \leq m}).\end{equation*} Write $\mathbf{x}_1=(x_1, \ldots, x_n), \mathbf{x}_2=(x_{n+1}, \ldots, x_m)$ for abbreviation and use the substitution $\mathbf{y}=\ln(\mathbf{x}_1):=(\ln(x_1), \ldots, \ln(x_n))$ to see that the above expression equals \begin{align} \nonumber & \int\limits_{[0,\infty)^{m-n}}\int\limits_{\left\{\mathbf{y} \in \mathbb{R}^n:\exp(\mathbf{A}_{\boldsymbol{\kappa}}\mathbf{y})>(\prod_{j=n+1}^m x_j^{-\hat{a}^{(J)}_{ij}})_{1 \leq i \leq n}\right\}} \hspace{-1cm} \exp(-\sum_{i=1}^n y_i)\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}\mathbf{y})P^{(\hat{X}_j^{(J,c)})_{n< j \leq m}}(\mbox{d}\mathbf{x}_2)\\
\nonumber &= \int\limits_{[0,\infty)^{m-n}}\int\limits_{\left\{\mathbf{z} \in \mathbb{R}^n:\exp(\mathbf{z})>(\prod_{j=n+1}^m x_j^{-\hat{a}^{(J)}_{ij}})_{1 \leq i \leq n}\right\}}|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1} \\ \nonumber & \hspace{1cm}\exp\left(-\sum_{k=1}^n\sum_{l=1}^n (\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_{kl}z_l\right)\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}\mathbf{z})\, P^{(\hat{X}_j^{(J,c)})_{n< j \leq m}}(\mbox{d}\mathbf{x}_2) \\
\nonumber &= |\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}\int\limits_{[0,\infty)^{m-n}}\prod_{l=1}^n \int\limits_{(\ln(\prod_{j=n+1}^m x_j^{-\hat{a}^{(J)}_{lj}}),\infty)} \hspace{-1cm} \exp\left(-z_l\sum_{k=1}^n (\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_{kl}\right)\mbox{d}z_l P^{(\hat{X}^{(J,c)}_j)_{n< j \leq m}}(\mbox{d}\mathbf{x}_2) \\
\nonumber &= \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}}{\prod_{i=1}^n ((\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1})_i}\int_{[0,\infty)^{m-n}}\prod_{l=1}^n\prod_{j=n+1}^m x_j^{\hat{a}_{lj}^{(J)}\sum_{k=1}^n (\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_{kl}}P^{(\hat{X}^{(J,c)}_j)_{n< j \leq m}}(\mbox{d}\mathbf{x}_2) \\
\nonumber &= \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n+1}^m E\left(\left(\hat{X}_j^{(J,c)}\right)^{\sum_{l=1}^n \hat{a}^{(J)}_{lj} ((\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1})_l}\right)\\
\label{Eq:finalsecondfactor} &= \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n'+1}^m \left( E\left(\left(\hat{X}_j^{(J)}\right)^{\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i}\Big|\hat{X}_j^{(J)}\geq c \right)c^{-\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i}\right), \end{align} where we used in the final step that $\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i=(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\hat{\mathbf{A}}^{(J)})_j=0$ for $n<j \leq n'$, cf.\ \eqref{Eq:alldualcases}. Combine \eqref{Eq:limitofcs} and \eqref{Eq:finalsecondfactor} to see that the expression in \eqref{Eq:twocfactors} converges to
\begin{equation}\label{Eq:limitforstep1} \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}\prod_{i=1}^n c_i^{-\hat{\kappa}_i}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n'+1}^m E\left(\left(\hat{X}_j^{(J)}\right)^{\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i}\Big|\hat{X}_j^{(J)}\geq c \right) \end{equation} as $x \to \infty$. Now, \eqref{Eq:limitforstep1} together with \eqref{Eq:limitctozero} and \eqref{Eq:limitwithcs} yields that \begin{eqnarray*} && \lim_{x \to \infty} \frac{P\left(\prod\limits_{j=1}^m \left(\hat{X}_j^{(J)}\right)^{\hat{a}^{(J)}_{ij}}>c_i x, \; 1 \leq i \leq n\right)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})} \\
&=& \lim_{c \searrow 0}\frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}\prod_{i=1}^n c_i^{-\hat{\kappa}_i}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n'+1}^m E\left(\left(\hat{X}_j^{(J)}\right)^{\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i}\mathds{1}_{\{\hat{X}_j^{(J)}\geq c\}}\right)\\
&=& \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}\prod_{i=1}^n c_i^{-\hat{\kappa}_i}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n'+1}^m E\left(\left(\hat{X}_j^{(J)}\right)^{\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i}\right)\\
&=& \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}\prod_{i=1}^n c_i^{-\hat{\kappa}_i}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n'+1}^m E\left(X_j^{\sum_{i=1}^n a_{ij} \hat{\kappa}_i}\right), \end{eqnarray*} where we used $\sum_{i=1}^n \hat{a}^{(J)}_{ij} \hat{\kappa}_i=((\hat{\mathbf{A}}^{(J)})^T (\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1})_j=(\mathbf{1}^T \mathbf{A}_{\boldsymbol{\kappa}}^{-1}\hat{\mathbf{A}}^{(J)})_j>0, n'<j \leq m$ (cf.\ \eqref{Eq:alldualcases}) in the penultimate equality and \eqref{Eq:DefXtilde}, \eqref{Eq:DefAtilde}, \eqref{Eq:DefXJtilde} and \eqref{Eq:DefAJtilde} in the final equality. This expression no longer depends on $J \subset \{n+1, \ldots, n'\}$ and therefore \eqref{Eq:firsttransfo}, \eqref{Eq:condsummands} and \eqref{Eq:probwithX^{(J)}} lead to \begin{eqnarray*}
&& \lim_{x \to \infty} \frac{P\left(\prod_{j=1}^m X_j^{a_{ij}}>c_i x, \; 1 \leq i \leq n\right)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})}=\lim_{x \to \infty} \frac{P\left(\prod_{j=1}^m \hat{X}_j^{\hat{a}_{ij}}>c_i x, \; 1 \leq i \leq n\right)}{\prod_{j=1}^nP(X_j>x^{\kappa_j})}\\
&=& \frac{|\det(\mathbf{A}_{\boldsymbol{\kappa}})|^{-1}\prod_{i=1}^n c_i^{-\hat{\kappa}_i}}{\prod_{i=1}^n \hat{\kappa}_i}\prod_{j=n'+1}^m E\left(X_j^{\sum_{i=1}^n a_{ij} \hat{\kappa}_i}\right) \\
&=& |\det \mathbf{A}_{\boldsymbol{\kappa}}|^{-1}\frac{\prod_{i=1}^n c_i^{-(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_i}}{\prod_{i=1}^n (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_i}\prod_{j:\kappa_j=0} E\left(X_j^{ (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j}\right), \end{eqnarray*} and so the limit in \eqref{Eq:Maintheorem1} equals the expression in \eqref{Eq:Maintheorem2}. By Theorems~2.4 and 2.5 of \cite{LiReRo14}, this shows that $c(x) P\left((x^{-1}\prod_{j=1}^m X_j^{a_{ij}})_{1 \leq i \leq n} \in \cdot \right)$, $ x>0$, with $c(x)=(\prod_{j=1}^nP(X_j>x^{\kappa_j}))^{-1}$, is relatively compact in $\mathbb{M}_{(0,\infty)^n}$. Furthermore, all accumulation points of this family agree on a generating $\pi$-system. Thus, $P^{(\prod_{j=1}^m X_j^{a_{ij}})_{1 \leq i \leq n}}$ is regularly varying on $(0,\infty)^n$ w.r.t.\ scalar multiplication, cf.\ Example~\ref{ex:LT}. The index of regular variation follows from Lemma and Definition~\ref{LemDef:index} since $c$ is regularly varying with index $-\sum_{j=1}^n\kappa_j=-\sum_{j=1}^m\kappa_j=-\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{1}$, cf.\ \eqref{Eq:optval}. \end{proof}
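\begin{remark} As a quick plausibility check of the limit just derived (a hypothetical special case, not needed elsewhere), let $n=m=1$ and $\mathbf{A}=(a)$ with $a>0$. Then $\boldsymbol{\kappa}=(1/a)$, $\mathbf{A}_{\boldsymbol{\kappa}}=(a)$ and $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1})_1=1/a$, so the limiting constant above equals $a^{-1}c_1^{-1/a}/(1/a)=c_1^{-1/a}$. This agrees with the direct computation $P(X_1^a>c_1x)=P(X_1>c_1^{1/a}x^{1/a})\sim c_1^{-1/a}P(X_1>x^{1/a})$ as $x \to \infty$, which follows from the regular variation of $P(X_1>\cdot)$ with index $-1$. \end{remark}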
\subsection{Auxiliary results}\label{Sec:auxresults} In the following, we collect two lemmas and a proposition which are needed for the proofs in Sections \ref{Sec:boundsproof} and \ref{Sec:proof}. \begin{lemma}\label{Lem:makepositive} Let $\boldsymbol{\kappa}=(\kappa_1, \ldots, \kappa_m)^T$ be an optimal solution to \eqref{Eq:LOP}. \begin{itemize} \item[(a)] There exists a matrix $\tilde{\mathbf{A}}=(\tilde{a}_{ij}) \in \mathbb{R}^{n \times m}$ such that \begin{itemize} \item the columns $j$ in $\tilde{\mathbf{A}}$ for which $\kappa_j>0$ have all positive entries, \item $\boldsymbol{\kappa}$ is an optimal solution to the linear program \begin{equation}\label{Eq:posLP} \mbox{find } \mathbf{x} \geq \mathbf{0} \; \mbox{such that } \tilde{\mathbf{A}} \mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=1}^mx_i \to \min! \end{equation} \item for all $x, x_1, x_2, \ldots, x_m\geq 0$, \begin{equation} \label{Eq:Upperprobbound}\prod_{j=1}^m x_j^{a_{ij}}>x, \;1 \leq i \leq n \; \Rightarrow \; \prod_{j=1}^m x_j^{\tilde{a}_{ij}}>x, \;1 \leq i \leq n. \end{equation} \end{itemize} \item[(b)] Moreover, if the assumptions of Proposition \ref{Pr:mainprop} hold, then the matrix $\tilde{\mathbf{A}}$ can be chosen such that additionally \begin{itemize} \item $\boldsymbol{\kappa}$ is the \emph{unique} optimal solution to the linear program \eqref{Eq:posLP}, \item $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}.$ \end{itemize} \item[(c)] If the assumptions of Theorem \ref{Th:maintheorem} hold, then there exists a matrix $\tilde{\mathbf{A}}=(\tilde{a}_{ij}) \in \mathbb{R}^{n \times m}$ such that \begin{itemize} \item the columns $j$ in $\tilde{\mathbf{A}}$ for which $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j>0$ have all positive entries, \item $\boldsymbol{\kappa}$ is the unique optimal solution to the linear program \eqref{Eq:posLP}, \item $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}$, \item for all $x, x_1, x_2, \ldots, x_m\geq 0$ \eqref{Eq:Upperprobbound} holds. \end{itemize} \end{itemize} \end{lemma} \begin{proof} First note that if $a_{ij}>0$ for all $1 \leq i \leq n$ and all $j$ such that $\kappa_j>0$ (cases (a) and (b)) or $(\mathbf{1}^T\mathbf{A}^{-1}_{\boldsymbol{\kappa}}\mathbf{A})_j>0$ (case (c)), then we may simply set $\tilde{\mathbf{A}}=\mathbf{A}$. So, assume the contrary in the following. Set $J:=\{j \in \{1, \ldots, m\}:\kappa_j>0\}$. Since we have assumed an optimal solution $\boldsymbol{\kappa}$ to \eqref{Eq:LOP}, there also exists an optimal (not necessarily unique) solution $\hat{\boldsymbol{\kappa}}=(\hat{\kappa}_1, \ldots, \hat{\kappa}_n)^T$ to the dual problem \eqref{Eq:DualLO} and this solution satisfies $\sum_{i=1}^n\hat{\kappa}_i=\sum_{j=1}^m\kappa_j$, cf.\ Theorem 2.2 in \cite{Si96}. Furthermore, by the Complementary Slackness Theorem (cf.\ \cite{Si96}, Theorem 2.4) we have $(\mathbf{A}^T\hat{\boldsymbol{\kappa}})_j=1$ for all $j \in J$.
For assertions (a) and (b) let $a_{\min}:=-(\min_{1\leq i \leq n, j \in J} a_{ij})+\epsilon$ for some $\epsilon>0$. By our assumptions, $a_{\min}$ is positive. Define \begin{equation*} \tilde{\mathbf{A}}=(\tilde{a}_{ij})_{1\leq i \leq n, 1 \leq j \leq m} \;\; \mbox{with} \;\; \tilde{a}_{ij}=\frac{a_{ij}+a_{\min}\sum_{k=1}^n a_{kj}\hat{\kappa}_k}{1+a_{\min}\sum_{k=1}^m\kappa_k}.\end{equation*} As seen above, we have $\sum_{k=1}^na_{kj}\hat{\kappa}_k=1$ and thus $\tilde{a}_{ij}>0$ for $j \in J$ and all $1\leq i\leq n$.
Note that \begin{eqnarray} \nonumber \tilde{\mathbf{A}}\boldsymbol{\kappa}&=& \left(1+a_{\min}\sum_{i=1}^m\kappa_i\right)^{-1}\left((a_{ij}+a_{\min})_{1 \leq i \leq n, j \in J}\right)((\kappa_j)_{j \in J})^T \\ \label{Eq:kappafeasiblefortilde} &\geq & \left(1+a_{\min}\sum_{i=1}^m\kappa_i\right)^{-1}\left(\mathbf{1}+a_{\min}\sum_{i=1}^m\kappa_i \mathbf{1}\right)=\mathbf{1}, \end{eqnarray} so $\boldsymbol{\kappa}$ is a feasible solution to \eqref{Eq:posLP}. Furthermore, if there existed a $\boldsymbol{\kappa}'\geq \mathbf{0}$ with $\tilde{\mathbf{A}}\boldsymbol{\kappa}'\geq \mathbf{1}$ and $\sum_{i=1}^m\kappa_i'<\sum_{i=1}^m\kappa_i$, then \begin{equation}\label{Eq:implicationkappaprime} \sum_{j=1}^m\left(a_{ij}+a_{\min}\sum_{k=1}^na_{kj}\hat{\kappa}_k\right)\kappa_j' \geq 1+a_{\min}\sum_{k=1}^m\kappa_k, \;\;\; 1 \leq i \leq n, \end{equation} and thus \begin{eqnarray} \nonumber \sum_{j=1}^m a_{ij}\kappa_j' &\geq & 1+a_{\min}\sum_{k=1}^m\kappa_k-a_{\min}\sum_{k=1}^n\sum_{j=1}^m a_{kj}\hat{\kappa}_k\kappa_j' \\ \nonumber &\geq & 1+a_{\min}\left(\sum_{k=1}^m\kappa_k-\sum_{j=1}^m \kappa_j'\right) \\ \label{Eq:kappaprimealsofeasible} &\geq& 1, \;\;\; 1 \leq i \leq n, \end{eqnarray} where we used in the penultimate inequality that $\sum_{k=1}^n a_{kj}\hat{\kappa}_k\leq 1$ and $\kappa_j'\geq 0, 1 \leq j \leq m$. But this implies that $\boldsymbol{\kappa}'$ with $\sum_{i=1}^m\kappa_i'<\sum_{i=1}^m\kappa_i$ would also be a feasible solution to \eqref{Eq:LOP}, in contradiction to the assumed optimality of $\boldsymbol{\kappa}$. Thus, $\boldsymbol{\kappa}$ is also an optimal solution to \eqref{Eq:posLP}.
We are thus left to show \eqref{Eq:Upperprobbound} for the proof of (a). For $x, x_1, x_2, \ldots, x_m \geq 0$ such that $\prod_{j=1}^m x_j^{a_{ij}}>x, 1 \leq i \leq n,$ we have \begin{equation*} \prod_{j=1}^m x_j^{a_{ij}a_{\min}\hat{\kappa}_i}\geq x^{a_{\min}\hat{\kappa}_i}, \;\;\; 1 \leq i \leq n, \end{equation*} with strict inequality if $\hat{\kappa}_i>0$, which must be the case for at least one $1 \leq i \leq n$. So by multiplication of left hand sides and right hand sides we obtain \begin{align*} &&\left(\prod_{j=1}^m x_j^{a_{ij}}\right)\prod_{k=1}^n\prod_{j=1}^m x_j^{a_{kj}a_{\min}\hat{\kappa}_k}&>x \prod_{k=1}^nx^{a_{\min} \hat{\kappa}_k}=x^{1+a_{\min}\sum_{k=1}^n\hat{\kappa}_k}, \;\; 1 \leq i \leq n \\ &\Leftrightarrow & \prod_{j=1}^mx_j^{a_{ij}+a_{\min}\sum_{k=1}^na_{kj}\hat{\kappa}_k}&>x^{1+a_{\min}\sum_{k=1}^n\hat{\kappa}_k}, \;\; 1 \leq i \leq n\\ &\Leftrightarrow & \prod_{j=1}^mx_j^{\frac{a_{ij}+a_{\min}\sum_{k=1}^na_{kj}\hat{\kappa}_k}{1+a_{\min}\sum_{k=1}^n\hat{\kappa}_k}}&>x, \;\; 1 \leq i \leq n\\ &\Leftrightarrow & \prod_{j=1}^mx_j^{\tilde{a}_{ij}}&>x, \;\; 1 \leq i \leq n. \end{align*} Thus, \eqref{Eq:Upperprobbound} holds.
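To illustrate the construction in a small hypothetical example (the numbers are only meant as an illustration): for \begin{equation*} \mathbf{A}=\left(\begin{array}{cc} 2 & 1 \\ -1 & 3 \end{array}\right) \end{equation*} the optimal solutions to \eqref{Eq:LOP} and \eqref{Eq:DualLO} are $\boldsymbol{\kappa}=(2/7,3/7)^T$ and $\hat{\boldsymbol{\kappa}}=(4/7,1/7)^T$, so that $J=\{1,2\}$ and $a_{\min}=1+\epsilon$. Choosing, say, $\epsilon=2/5$ gives $a_{\min}=7/5$, $1+a_{\min}\sum_{k=1}^m\kappa_k=2$ and \begin{equation*} \tilde{\mathbf{A}}=\frac{1}{2}\left(\begin{array}{cc} 2+\frac{7}{5} & 1+\frac{7}{5} \\ -1+\frac{7}{5} & 3+\frac{7}{5} \end{array}\right) =\left(\begin{array}{cc} 1.7 & 1.2 \\ 0.2 & 2.2 \end{array}\right), \end{equation*} which has only positive entries and satisfies $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}$, in accordance with (a) and (b).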
For the proof of (b), we use that the additional assumption implies that $\boldsymbol{\kappa}$ is the \emph{unique} optimal solution to \eqref{Eq:LOP} and that $\mathbf{A}\boldsymbol{\kappa}=\mathbf{1}$. Similarly to \eqref{Eq:kappafeasiblefortilde}, one shows that $\tilde{\mathbf{A}}\boldsymbol{\kappa}=\mathbf{1}$. Furthermore, if there existed a $\boldsymbol{\kappa}'\neq \boldsymbol{\kappa}$ with $\tilde{\mathbf{A}}\boldsymbol{\kappa}'\geq \mathbf{1}$ and $\sum_{j=1}^m\kappa_j'\leq \sum_{j=1}^m\kappa_j$, then one shows analogously to \eqref{Eq:kappaprimealsofeasible} that the optimal solution $\boldsymbol{\kappa}$ to \eqref{Eq:LOP} would not be unique. This shows that $\boldsymbol{\kappa}$ is the unique optimal solution to \eqref{Eq:posLP} and proves (b).
For the proof of (c), we use that the additional assumption implies that $\boldsymbol{\kappa}$ is the \emph{unique} optimal solution to \eqref{Eq:LOP} and that $\hat{\boldsymbol{\kappa}}=(\mathbf{A}_{\boldsymbol{\kappa}}^{-1})^T\mathbf{1}$ is the unique optimal solution to \eqref{Eq:DualLO}, cf.\ the beginning of the proof of Theorem \ref{Th:maintheorem}. For some $\epsilon>0$, let \begin{eqnarray*}a_{\min}^{(c)}&:=&-\min_{1 \leq i \leq n, j: (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j>0}\frac{a_{ij}}{ (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j}+\epsilon\\ &=&-\min_{1 \leq i \leq n, j: (\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j>0}\frac{a_{ij}}{ \sum_{k=1}^na_{kj}\hat{\kappa}_k}+\epsilon \end{eqnarray*}
which is positive by our assumptions. Define \begin{equation*} \tilde{\mathbf{A}}^{(c)}=(\tilde{a}_{ij}^{(c)})_{1\leq i \leq n, 1 \leq j \leq m} \;\; \mbox{with} \;\; \tilde{a}_{ij}^{(c)}=\frac{a_{ij}+a_{\min}^{(c)}\sum_{k=1}^na_{kj}\hat{\kappa}_k}{1+a_{\min}^{(c)}\sum_{k=1}^m\kappa_k}.\end{equation*} Thus we have $\tilde{a}_{ij}^{(c)}>0$ for those $j$ with $(\mathbf{1}^T\mathbf{A}_{\boldsymbol{\kappa}}^{-1}\mathbf{A})_j>0$ and all $1\leq i\leq n$. The rest of the proof of assertion (c) follows analogously to the proofs of (a) and (b), which did not depend on the value of $a_{\min}>0$. \end{proof} \begin{lemma}\label{Lem:boundedbysum} Let the assumptions of Proposition \ref{Pr:mainprop} hold and assume in addition that $a_{ij}>0$ for all $1\leq i \leq n$ and those $1 \leq j \leq m$ for which $\kappa_j>0$. Then for all $j$ with $\kappa_j>0$ there exists $\epsilon>0$ such that \begin{equation}\label{Eq:boundedbysum} \min_{1 \leq i \leq n}\sum\limits_{1 \leq k \leq m, k \neq j}\frac{a_{ik}}{a_{ij}}x_k \leq (1-\epsilon)\sum\limits_{1 \leq k \leq m, k \neq j}x_k \end{equation} for all $(x_k)_{1 \leq k \leq m, k \neq j} \in [0,\infty)^{m-1}$. \end{lemma} \begin{proof} For ease of notation and w.l.o.g., let us assume that $\kappa_1>0$ and treat only the case $j=1$. For $(x_k)_{2 \leq k \leq m}=\mathbf{0}$ the inequality holds for all $\epsilon>0$. The rest of the proof is by contradiction. Let $(\epsilon_l)_{l \in \mathbb{N}}$ be a sequence such that $\epsilon_l>0$ for all $l \in \mathbb{N}$ and $\epsilon_l \searrow 0$ as $l \to \infty$. Assume that for each $l$ there exists $(x_k^{(l)})_{2 \leq k \leq m} \in [0,\infty)^{m-1}\setminus\{\mathbf{0}\}$ such that \begin{align} \nonumber &&\min_{1 \leq i \leq n}\sum_{j=2}^m \frac{a_{ij}}{a_{i1}}x_j^{(l)} &\geq (1-\epsilon_l)\sum_{k=2}^m x_k^{(l)} & \\ \label{Eq:epsilonbound}&\Leftrightarrow & \sum_{j=2}^m a_{ij}\frac{x_j^{(l)}}{\sum_{k=2}^m x_k^{(l)}} &\geq (1-\epsilon_l)a_{i1}, & 1 \leq i \leq n, \end{align} where we used that $a_{i1}>0, 1 \leq i \leq n,$ by our assumption. Define now \begin{equation*}\tilde{\boldsymbol{\kappa}}^{(l)}=(\tilde{\kappa}_1^{(l)}, \ldots, \tilde{\kappa}_m^{(l)})^T=\left(0,\kappa_2+\frac{x_2^{(l)}}{\sum_{k=2}^m x_k^{(l)}}\kappa_1, \ldots, \kappa_m+\frac{x_m^{(l)}}{\sum_{k=2}^m x_k^{(l)}}\kappa_1\right)^T\geq \mathbf{0}\end{equation*} for all $l \in \mathbb{N}$. We have \begin{equation*} \sum_{j=1}^m\tilde{\kappa}_j^{(l)}=\sum_{j=1}^m \kappa_j \end{equation*} for all $l \in \mathbb{N}$. Furthermore, \begin{eqnarray} \nonumber \sum_{j=1}^m a_{ij}\tilde{\kappa}_j^{(l)}&=& \sum_{j=2}^m a_{ij}\left(\kappa_j+\frac{x_j^{(l)}}{\sum_{k=2}^m x_k^{(l)}}\kappa_1\right)\\ \nonumber &=& \sum_{j=2}^m a_{ij} \kappa_j + \sum_{j=2}^ma_{ij}\frac{x_j^{(l)}}{\sum_{k=2}^m x_k^{(l)}}\kappa_1 \\ \label{Eq:contraepsilon}& \geq & 1-a_{i1}\kappa_1 +(1-\epsilon_l)a_{i1}\kappa_1=1-\epsilon_la_{i1}\kappa_1, \end{eqnarray} for all $l \in \mathbb{N}$, where we used $\mathbf{A}\boldsymbol{\kappa}\geq \mathbf{1}$ and \eqref{Eq:epsilonbound} in the last step. As $l \to \infty$, the bounded sequence $\tilde{\boldsymbol{\kappa}}^{(l)}$ must have an accumulation point $\tilde{\boldsymbol{\kappa}} \neq \boldsymbol{\kappa}$ (because $\tilde{\kappa}_1=0<\kappa_1$) and $\tilde{\boldsymbol{\kappa}} \geq \mathbf{0}$. 
But \begin{equation*} \sum_{j=1}^m\tilde{\kappa}_j=\sum_{j=1}^m\kappa_j \;\;\; \mbox{and} \;\;\; \sum_{j=1}^ma_{ij}\tilde{\kappa}_j\geq 1 \end{equation*} by \eqref{Eq:contraepsilon}, so our optimal solution $\boldsymbol{\kappa}$ would not be unique, in contradiction to our assumptions. Thus, for some $\epsilon>0$, the inequality \eqref{Eq:boundedbysum} holds for all $(x_j)_{2 \leq j \leq m} \in [0,\infty)^{m-1}$. \end{proof} \begin{proposition}\label{Lem:momentsconverge} Assume that \begin{align}\nonumber& \lim_{y \to \infty} \frac{P\left((X_j)_{1 \leq j \leq m} \in y \otimes_{\boldsymbol{\kappa}}M(\mathbf{A})\right)}{\prod\limits_{j: \kappa_j>0}P(X_j>y^{\kappa_j})}\\ \label{Eq:Lemma34i} &=\int\limits_{M(\mathbf{A})}\prod_{j:\kappa_j>0}x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{\{j:\kappa_j>0\}}) \otimes P^{(X_j)_{\{j:\kappa_j=0\}}}(\mbox{d}(x_j)_{\{j:\kappa_j=0\}}) \in [0,\infty) \end{align} holds for all $X_1, \ldots, X_m$ and all matrices $\mathbf{A}$ which satisfy the assumptions of Proposition \ref{Pr:mainprop}, with $\boldsymbol{\kappa}$ being the unique solution to \eqref{Eq:LOP} and $M(\mathbf{A})$ as in Proposition \ref{Pr:mainprop}. Then also \begin{align}&\nonumber \lim_{y \to \infty} \frac{\int\limits_{M(\mathbf{A})} \prod_{j=1}^m x_j^{\beta_j} P^{(X_j/y^{\kappa_j})_{\{1 \leq j \leq m\}}}(\mbox{d}\mathbf{x})}{\prod\limits_{j: \kappa_j>0}P(X_j>y^{\kappa_j})}\\ \label{Eq:Lemma34ii}&=\int\limits_{M(\mathbf{A})}\prod_{j=1}^m x_j^{\beta_j} \prod_{j:\kappa_j>0} x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{\{j:\kappa_j>0\}}) \otimes P^{(X_j)_{\{j:\kappa_j=0\}}}(\mbox{d}(x_j)_{\{j:\kappa_j=0\}}) \in [0,\infty) \end{align} for all $X_1, \ldots, X_m, \mathbf{A}$ and $\boldsymbol{\kappa}$ as above and all $\beta_j \in [0,1), 1 \leq j \leq m$. \end{proposition} \begin{remark}\label{Rem:Karamata} As the preceding proposition is used in the induction step of the proof of Proposition \ref{Pr:mainprop}, the convergence \eqref{Eq:Lemma34i} had to be assumed. However, since Proposition \ref{Pr:mainprop} shows that \eqref{Eq:Lemma34i} holds for all $X_1, \ldots, X_m$ and all matrices $\mathbf{A}$ which satisfy the assumptions, the convergence in \eqref{Eq:Lemma34ii} follows. The result may thus be regarded as a multivariate version of the direct half of Karamata's Theorem. \end{remark} \begin{proof}[Proof of Proposition \ref{Lem:momentsconverge}] Define independent random variables $X'_j, 1 \leq j \leq m,$ such that $X'_j$ has $P^{X_j}$-density $x \mapsto \mathds{1}_{[0,\infty)}(x)x^{\beta_j}(E(X_j^{\beta_j}))^{-1}, 1 \leq j \leq m$. This is possible because all $\beta_j \in [0,1)$ and thus $E(X_j^{\beta_j})<\infty$ by our assumptions. For those $1 \leq j \leq m$ with $\kappa_j>0$ the random variable $X'_j$ is regularly varying with index $-(1-\beta_j)$, because \begin{equation*} \lim_{x \to \infty}\frac{P(X'_j>x)}{x^{\beta_j}P(X_j>x)}=\lim_{x \to \infty}\frac{\int_x^\infty y^{\beta_j}P^{X_j}(\mbox{d}y)}{E(X_j^{\beta_j})x^{\beta_j}P(X_j>x)}=(E(X_j^{\beta_j})(1-\beta_j))^{-1} \end{equation*} for all such $j$ by Karamata's Theorem (cf.\ \cite{Cl83}, Lemma 1.1). Thus, for $1 \leq j \leq m$ with $\kappa_j>0$ the random variable $\tilde{X}_j:=(X_j')^{1-\beta_j}$ is regularly varying with index $-1$ and \begin{equation}\label{Eq:asymptildeX} \lim_{x \to \infty}\frac{P(\tilde{X}_j>x^{1-\beta_j})}{x^{\beta_j}P(X_j>x)}=(E(X_j^{\beta_j})(1-\beta_j))^{-1}. \end{equation} For $1 \leq j \leq m$ with $\kappa_j=0$ we have $\tilde{X}_j \geq 1$ a.s. 
because we assumed $X_j \geq 1$ a.s. Furthermore, we have for all $\delta \in (0,1]$ and all $j$ with $\kappa_j=0$ that \begin{equation*} E(\tilde{X}_j^{1-\delta})=\frac{\int_1^\infty x^{(1-\delta)(1-\beta_j)}x^{\beta_j}P^{X_j}(\mbox{d}x)}{E(X_j^{\beta_j})}=\frac{E(X_j^{1-(1-\beta_j)\delta})}{E(X_j^{\beta_j})}<\infty.\end{equation*} Thus, the random variables $\tilde{X}_j, 1 \leq j \leq m,$ satisfy the assumptions of Proposition \ref{Pr:mainprop}. Set now \begin{equation*} \tilde{\mathbf{A}}=(\tilde{a}_{ij}):=((1-\beta_j)^{-1}a_{ij}) \in \mathbb{R}^{n \times m}.\end{equation*} Then $\tilde{\boldsymbol{\kappa}}:=((1-\beta_j)\kappa_j)_{1 \leq j \leq m}$ is the unique solution to the linear program \begin{equation*} \mbox{find } \mathbf{x} \geq \mathbf{0} \; \mbox{such that } \tilde{\mathbf{A}} \mathbf{x} \geq \mathbf{1}, \;\;\; \sum_{i=1}^mx_i \to \min! \end{equation*} and $\tilde{\mathbf{A}}\tilde{\boldsymbol{\kappa}}=\mathbf{1}$. Set \begin{equation*} M(\tilde{\mathbf{A}})=\left\{(x_1, \ldots, x_m): \prod_{j=1}^m x_j^{\tilde{a}_{ij}}>1, 1 \leq i \leq n\right\}.\end{equation*} Then,
\begin{align*} & &(\tilde{X}_j)_{1 \leq j \leq m} \in y &\otimes_{\tilde{\boldsymbol{\kappa}}}M(\tilde{\mathbf{A}})& \\ & \Leftrightarrow & \prod_{j=1}^m \left(\frac{\tilde{X}_j}{y^{\tilde{\kappa}_j}}\right)^{\tilde{a}_{ij}}>1, & \;\;\;1 \leq i \leq n,&\\ & \Leftrightarrow & \prod_{j=1}^m \left(\frac{(X_j')^{1-\beta_j}}{y^{(1-\beta_j)\kappa_j}}\right)^{(1-\beta_j)^{-1}a_{ij}}>1, &\;\;\;1 \leq i \leq n,&\\ & \Leftrightarrow & (X_j')_{1 \leq j \leq m} \in y &\otimes_{\boldsymbol{\kappa}}M(\mathbf{A}),& \end{align*} and so \begin{eqnarray*} P\left((\tilde{X}_j)_{1 \leq j \leq m} \in y \otimes_{\tilde{\boldsymbol{\kappa}}}M(\tilde{\mathbf{A}})\right)&=&\int\limits_{y \otimes_{\boldsymbol{\kappa}}M(\mathbf{A})} P^{(X_j')_{1 \leq j \leq m} }(\mbox{d}\mathbf{x})\\ &=&\int\limits_{y \otimes_{\boldsymbol{\kappa}}M(\mathbf{A})} \prod_{j=1}^m \frac{x_j^{\beta_j}}{E(X_j^{\beta_j})}P^{(X_j)_{1 \leq j \leq m} }(\mbox{d}\mathbf{x})\\ &=&\int\limits_{M(\mathbf{A})} \prod_{j=1}^m \frac{(y^{\kappa_j}x_j)^{\beta_j}}{E(X_j^{\beta_j})}P^{(X_j/y^{\kappa_j})_{1 \leq j \leq m} }(\mbox{d}\mathbf{x}). \end{eqnarray*} Thus, \begin{align} \nonumber & \lim_{y \to \infty} \frac{\int\limits_{M(\mathbf{A})} \prod_{j=1}^m x_j^{\beta_j} P^{(X_j/y^{\kappa_j})_{1 \leq j \leq m}}(\mbox{d}\mathbf{x})}{\prod\limits_{j: \kappa_j>0}P(X_j>y^{\kappa_j})}\\ \label{Eq:twofinalfactors}&= \lim_{y \to \infty} \frac{P\left((\tilde{X}_j)_{1 \leq j \leq m} \in y \otimes_{\tilde{\boldsymbol{\kappa}}}M(\tilde{\mathbf{A}})\right)}{\prod\limits_{j: \kappa_j>0}P(\tilde{X_j}>y^{\tilde{\kappa}_j})}\cdot \frac{\prod\limits_{j=1}^mE(X_j^{\beta_j})\prod\limits_{j: \kappa_j>0}P(\tilde{X_j}>y^{\tilde{\kappa}_j})}{\prod\limits_{j=1}^my^{\kappa_j\beta_j}\prod\limits_{j: \kappa_j>0}P(X_j>y^{\kappa_j})}, \end{align} where the first factor converges to \begin{equation}\label{Eq:limitAtilde} \int\limits_{M(\tilde{\mathbf{A}})}\prod_{j:\kappa_j>0}x_j^{-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(x_j)_{\{j:\kappa_j>0\}}) \otimes P^{(\tilde{X}_j)_{\{j:\kappa_j=0\}}}(\mbox{d}(x_j)_{\{j:\kappa_j=0\}}) \end{equation} by the assumption. Substitute $(y_1, \ldots, y_m):=(x_1^{(1-\beta_1)^{-1}}, \ldots, x_m^{(1-\beta_m)^{-1}})$ and note that $(x_1, \ldots, x_m) \in M(\tilde{\mathbf{A}})$ is equivalent to $(y_1, \ldots, y_m) \in M(\mathbf{A})$, so the expression in \eqref{Eq:limitAtilde} equals \begin{eqnarray*} &&\int\limits_{M(\mathbf{A})}\prod_{j:\kappa_j>0}(1-\beta_j)y_j^{\beta_j-2}\ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(y_j)_{\{j:\kappa_j>0\}}) \otimes P^{(X_j')_{\{j:\kappa_j=0\}}}(\mbox{d}(y_j)_{\{j:\kappa_j=0\}}) \\ &=&\int\limits_{M(\mathbf{A})}\left(\prod_{j:\kappa_j>0}(1-\beta_j)y_j^{\beta_j-2}\right)\left(\prod_{j:\kappa_j=0}E(X_j^{\beta_j})^{-1}y_j^{\beta_j}\right)\\ && \hspace{3cm} \ensuremath{\lambda\hspace{-1.1ex}\lambda}(\mbox{d}(y_j)_{\{j:\kappa_j>0\}}) \otimes P^{(X_j)_{\{j:\kappa_j=0\}}}(\mbox{d}(y_j)_{\{j:\kappa_j=0\}}). \end{eqnarray*} The second factor in \eqref{Eq:twofinalfactors} converges to \begin{equation*}\frac{\prod_{j:\kappa_j=0}E(X_j^{\beta_j})}{\prod_{j:\kappa_j>0}(1-\beta_j)}\end{equation*} by \eqref{Eq:asymptildeX}. Taken together, this yields the statement of the proposition. \end{proof}
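\begin{remark} As an elementary illustration of \eqref{Eq:asymptildeX} (included only as a sanity check and not used elsewhere), let $X_j$ be standard Pareto, i.e.\ $P(X_j>x)=x^{-1}$ for $x \geq 1$, and let $\beta_j \in [0,1)$. Then $E(X_j^{\beta_j})=(1-\beta_j)^{-1}$ and \begin{equation*} P(\tilde{X}_j>x^{1-\beta_j})=P(X_j'>x)=\frac{\int_x^\infty y^{\beta_j-2}\,\mbox{d}y}{E(X_j^{\beta_j})}=x^{\beta_j-1}, \end{equation*} so that $P(\tilde{X}_j>x^{1-\beta_j})/(x^{\beta_j}P(X_j>x))=1=(E(X_j^{\beta_j})(1-\beta_j))^{-1}$, as asserted. \end{remark}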
\end{document}
\begin{document}
\newcommand\J{\mathfrak{J}} \newcommand\A{\mathbb{A}} \newcommand\C{\mathbb{C}} \newcommand\G{\mathbb{G}} \newcommand\N{\mathbb{N}} \newcommand{\mathbb{T}}{\mathbb{T}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand\sO{\mathcal{O}} \newcommand\sE{{\mathcal{E}}} \newcommand\tE{{\mathbb{E}}} \newcommand\sF{{\mathcal{F}}} \newcommand\sG{{\mathcal{G}}} \newcommand\GL{{\mathrm{GL}}}
\newcommand{\mathrm H}{\mathrm H} \newcommand\mM{{\mathrm M}} \newcommand\fS{\mathfrak{S}} \newcommand\fP{\mathfrak{P}} \newcommand\fQ{\mathfrak{Q}} \newcommand\Qbar{{\bar{\mathbb{Q}}}} \newcommand\sQ{{\mathcal{Q}}} \newcommand\sP{{\mathbb{P}}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{H}}{\mathbb{H}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathfrak{D}}{\mathfrak{D}} \newcommand\gP{\mathfrak{P}} \newcommand\Gal{{\mathrm {Gal}}} \newcommand\SL{{\mathrm {SL}}}
\newcommand{\legendre}[2] {\left(\frac{#1}{#2}\right)} \newcommand\iso{{\> \simeq \>}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{J}}{\mathcal{J}} \newtheorem{thm}{Theorem} \newtheorem{theorem}[thm]{Theorem} \newtheorem{cor}[thm]{Corollary} \newtheorem{conj}[thm]{Conjecture} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \theoremstyle{definition} \newtheorem{definition}[thm]{Definition} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \newtheorem{example}[thm]{Example} \newtheorem{claim}[thm]{Claim} \newtheorem{lem}[thm]{Lemma}
\theoremstyle{definition} \newtheorem{dfn}{Definition}
\theoremstyle{remark}
\theoremstyle{remark} \newtheorem*{fact}{Fact}
\makeatletter \def\imod#1{\allowbreak\mkern10mu({\operator@font mod}\,\,#1)} \makeatother \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathfrak{a}}{\mathfrak{a}} \newcommand{\mathfrak{b}}{\mathfrak{b}} \newcommand{\mathfrak{H}}{\mathfrak{H}} \newcommand{\mathfrak{h}}{\mathfrak{h}} \newcommand{\mathfrak{m}}{\mathfrak{m}} \newcommand{\mathfrak{n}}{\mathfrak{n}} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{\mathfrak{q}}{\mathfrak{q}} \newcommand{\mathfrak{F}}{\mathfrak{F}} \newcommand{\mathrm{B}}{\mathrm{B}} \newcommand{\mathrm{G}}{\mathrm{G}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathrm{H}}{\mathrm{H}} \newcommand{\mathrm{H}}{\mathrm{H}} \newcommand{\mathrm{Z}}{\mathrm{Z}} \newcommand{\Gamma}{\Gamma} \newcommand{\mathrm{cyc}}{\mathrm{cyc}} \newcommand{\mathrm{Fil}}{\mathrm{Fil}}
\newcommand{\mathrm{p}}{\mathrm{p}} \newcommand{\mathrm{PGL}}{\mathrm{PGL}}
\newcommand{{\mathcal{X}}}{{\mathcal{X}}} \newcommand{\textrm{Sp}}{\textrm{Sp}} \newcommand{\textrm{ab}}{\textrm{ab}}
\newcommand{\longrightarrow}{\longrightarrow} \newcommand{\rightarrow}{\rightarrow} \newcommand{\hookrightarrow}{\hookrightarrow} \newcommand{\twoheadrightarrow}{\twoheadrightarrow}
\newcommand{\rho_{f,\wp}|_{G_p}}{\rho_{f,\wp}|_{G_p}} \newcommand{{\rho}_{f,\wp}}{{\rho}_{f,\wp}}
\newcommand{\langle}{\langle} \newcommand{\rangle}{\rangle}
\newcommand{\mo}[1]{|#1|}
\newcommand{\hw}[1]{#1+\frac{1}{2}} \newcommand{\mcal}[1]{\mathcal{#1}} \newcommand{\trm}[1]{\textrm{#1}} \newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\car}[1]{|#1|} \newcommand{\pmat}[4]{ \begin{pmatrix} #1 & #2 \\ #3 & #4 \end{pmatrix}} \newcommand{\bmat}[4]{ \begin{bmatrix} #1 & #2 \\ #3 & #4 \end{bmatrix}} \newcommand{\pbmat}[4]{\left \{ \begin{pmatrix} #1 & #2 \\ #3 & #4 \end{pmatrix} \right \}} \newcommand{\psmat}[4]{\bigl( \begin{smallmatrix} #1 & #2 \\ #3 & #4 \end{smallmatrix} \bigr)} \newcommand{\bsmat}[4]{\bigl[ \begin{smallmatrix} #1 & #2 \\ #3 & #4 \end{smallmatrix} \bigr]}
\makeatletter \def\imod#1{\allowbreak\mkern10mu({\operator@font mod}\,\,#1)} \makeatother \title{The Heisenberg covering of the Fermat curve}
\author{Debargha Banerjee} \address{Indian Institute of science, education and research, Pune, India} \author{Lo\"ic Merel} \address{Universit\'e Paris Cit\'e and Sorbonne Universit\'e, CNRS, IMJ-PRG, F-75013 Paris, France.} \thanks{The author was partially supported by the SERB grant MTR/2017/000357 and CRG/2020/000223. The first named author is deeply indebted to Professor Yuri Manin for several stimulating conversation at the MPIM} \begin{abstract} For $N$ integer $\ge1$, K. Murty and D. Ramakrishnan defined the $N$-th Heisenberg curve, as the compactified quotient $X'_N$ of the upper half-plane by a certain non-congruence subgroup of the modular group. They ask whether the Manin-Drinfeld principle holds, namely if the divisors supported on the cusps of those curves are torsion in the Jacobian. We give a model over ${\bf Z}[\mu_N,1/N]$ of the $N$-th Heisenberg curve as covering of the $N$-th Fermat curve. We show that the Manin-Drinfeld principle holds for $N=3$, but not for $N=5$. We show that the description by generator and relations due to Rohrlich of the cuspidal subgroup of the Fermat curve is explained by the Heisenberg covering, together with a higher covering of a similar nature. The curves $X_N$ and the classical modular curves $X(n)$, for $n$ even integer, both dominate $X(2)$, which produces a morphism between jacobians $J_N\rightarrow J(n)$. We prove that the latter has image $0$ or an elliptic curve of $j$-invariant $0$. In passing, we give a description of the homology of $X'_{N}$.
\end{abstract}
\subjclass[2010]{Primary: 11D11, Secondary: 11F11, 11G05, 11G30} \keywords{Fermat's curves, Modular symbols, Heisenberg curves} \maketitle
\setcounter{tocdepth}{1} \tableofcontents{}
\section{Introduction} \label{intro} Let $\Gamma$ be a subgroup of finite index of $\SL_2(\mathbb{Z})$. This subgroup acts by homographies on the complex upper half-plane $\mathfrak{H}$. Consider the corresponding modular curve $Y_\Gamma=\Gamma\backslash \mathfrak{H}$, and its compactification $X_\Gamma$, obtained by adding the cusps. We say that $X_\Gamma$ {\it satisfies the Manin-Drinfeld principle} if any cuspidal ({\it i.e.} supported on the cusps) divisor of degree $0$ is torsion in the Jacobian of $X_\Gamma$. Manin and Drinfeld proved that this is the case when $\Gamma$ is a congruence subgroup.
For a subgroup of finite index (not necessarily a congruence subgroup), K. Murty and Ramakrishnan ~\cite{MR983619} give an analytic criterion for the Manin-Drinfeld principle to be satisfied. As an illustrative example, Murty and Ramakrishnan consider modular curves attached to certain subgroups of $\Gamma(2)$: Fermat curves, and what they propose to call Heisenberg curves. We revisit those examples. In \cite{BanerjeeMerel}, we reconsider this question and give an analytic criterion of a different nature, but also based on Eisenstein series ; this is unconnected to the present work, which is purely algebraic in nature.
The Heisenberg curves are defined as follows from the complex analytic point of view. Let $A$ and $B$ be the classes of the matrices $\begin{pmatrix}
1 & 2 \\
0 & 1 \\ \end{pmatrix}$ and $\begin{pmatrix}
1 & 0 \\
2 & 1 \\ \end{pmatrix}$ respectively in ${\bar\Gamma}(2)=\Gamma(2)/\{\pm1\}$. These matrices generate freely the group ${\bar\Gamma}(2)$. Let $C=ABA^{-1}B^{-1}$.
Let $N$ be an integer $>0$. Denote by $\Phi_N$ the kernel of the morphism ${\bar\Gamma}(2)\rightarrow(\mathbb{Z}/N\mathbb{Z})^2$ which to $A$ associates $(1,0)$ and to $B$ associates $(0,1)$. The corresponding modular curve is the {\it Fermat modular curve} and is denoted by $X_{N}$. It can be identified with the complex points of the Fermat curve $F_N$ (see for instance ~\cite{MR0441978}, ~\cite{MR681120}). Let $\Phi_N'$ be the subgroup of $\Phi_N$ generated by $A^N$, $B^N$, $C^N$ and the third term $[{\bar\Gamma}(2),[{\bar\Gamma}(2),{\bar\Gamma}(2)]]$ in the descending central series of ${\bar\Gamma}(2)$. An exact sequence of groups follows $$ 1\rightarrow \Phi_N'\rightarrow {\bar\Gamma}(2)\rightarrow H_N\rightarrow 1, $$ where $H_N$ is a central extension of $({\mathbb{Z}/N\mathbb{Z}})^{2}$ by ${\mathbb{Z}/N\mathbb{Z}}$ (coinciding with the $\mathbb{Z}/N\mathbb{Z}$-points of the Heisenberg group). The {\it $N$-th Heisenberg modular curve}, in the sense of Murty and Ramakrishnan, is $X'_N=X_{\Phi'_N}$.
Let $\mathbb{Q}(\mu_{N})$ be a cyclotomic extension of $\mathbb{Q}$ generated by $N$-th roots of unity. Denote by $\mathbb{Z}[\mu_{N}]$ its ring of integers. The covering $X'_{N}\rightarrow X_{N}$ extends to a morphism $F'_{N}\rightarrow F_{N}$ of curves over $\mathbb{Q}(\mu_{N})$ that we call the {\it Heisenberg covering of the Fermat curve}.
\begin{theorem} \label{Modelthm} Suppose $N$ is an odd integer. The Heisenberg modular curve $X'_N$ extends to a smooth projective scheme ${\mathcal{F}}'_N$ of relative dimension one over ${\rm Spec}({\mathbb{Z}}[\mu_{N},1/N])$ given by the following model: $$ X^{N}+Y^{N}=Z^{N} $$ and, for every primitive $N$-th root of unity $\zeta$ in $\mathbb{Q}(\mu_{N})$ $$ \prod_{j=1}^{(N-1)/2}(Y-\zeta^{-j}Z)^{j}T_{\zeta}^{N}=\prod_{j=1}^{(N-1)/2}(Y-\zeta^{j}Z)^{j}U_{\zeta}^{N}. $$
\end{theorem}
It seems to have been known to Deligne (see a comment in ~\cite{MR983619}) that the generic fiber $F'_{N}$ of ${\mathcal{F}}'_N$ can be defined over ${\mathbb{Q}}$, an assertion for which we provide a proof.
Rohrlich ~\cite{MR0441978} (see also V\'elu ~\cite{MR582434}) has determined the cuspidal subgroup of $F_N$. In particular, he has shown that any cuspidal divisor on $F_N$ is of order dividing $N$. This description plays a key role in justifying the existence of the Heisenberg covering. We show that, by going further in the descending central series of $\Gamma(2)$, $X'_N$ is covered by a modular curve $X''_N$, in such a way that $X''_N$ is still an abelian covering of the Fermat curve $X_{N}$. We do not describe algebraically $X''_N$.
We note that there has been a considerable interest in the cuspidal group of the Fermat curve. For instance, in ~\cite{MR488287}, p. 39, Mazur draws (or rather ``stretches'') an analogy between Fermat curves and modular curves. Such an analogy is somewhat strengthened by the fact that the Heisenberg covering is analogous to the familiar Shimura covering $X_1(N)\rightarrow X_0(N)$ between modular curves.
Like Murty and Ramakrishnan, our goal had been to illustrate our study of non-congruence subgroups by examining Heisenberg curves. We can not determine in general whether such curves satisfy the Manin-Drinfeld principle. But we can show easily that the principle holds for $N=3$. Furthermore, for $N=3$, we study the connection between $F'_{3}$ and various modular curves. By contrast, for $N=5$:
\begin{theorem} \label{theoremN=5} There exists a cuspidal divisor on $X'_5$ whose class in the jacobian of $X'_5$ is of infinite order. \end{theorem}
Let $\bar\Gamma'(2)$ be the subgroup of index $3$ of $\bar\Gamma(2)$ obtained by pulling back the $2$-Sylow subgroup of ${\rm PSL}_2(\mathbb{Z}/3\mathbb{Z})$. Let $\Gamma$ be a congruence subgroup of $\bar\Gamma(2)$. Consider the correspondence $X'_N\rightarrow X_\Gamma$ obtained by combining pulling back to the modular curve $X_{\Gamma\cap \Phi'_N}$ with pushing to $X_\Gamma$. It produces a morphism of abelian varieties between the jacobians of these Riemann surfaces, $\theta_{N,\Gamma} : J'_N\rightarrow J_\Gamma$. In view of the following statement, we can hardly have any hope of establishing a limited form of the Manin-Drinfeld principle for Heisenberg curves using the classical Manin-Drinfeld theorem for congruence subgroups.
\begin{thm} \label{theoremcomparison} The morphism $\theta_{N,\Gamma}$ is zero if and only if either $3 \nmid N$ or $\Gamma$ is not contained in $\bar\Gamma'(2)$. If $3 \mid N$ and $\Gamma$ is contained in $\bar\Gamma'(2)$, the image of $\theta_{N,\Gamma}$ is isogenous to an elliptic curve with $j$-invariant $0$. Furthermore, when $\Gamma$ is contained in $\bar\Gamma'(2)$, $\theta_{3,\Gamma}$ has finite kernel. \end{thm}
The proof of theorem \ref{theoremcomparison} is a translation of a group theoretic statement: any term of the lower central series of $\bar\Gamma(2)$ is essentially dense in the odd adelic completion of $\bar\Gamma(2)$ (see Proposition \ref{oddadelic}). In addition to these results of algebraic nature, we give a combinatorial description of the homology of the Riemann surface $X'_{N}$, by a method similar to Manin's presentation, but following the variant introduced in \cite{MR1405312}. This might be of interest in its own right, but it does not help us in establishing our other results. It would be intriguing to develop the theory of modular symbols on the Heisenberg curves following Shokurov \cite{MR0563089}, \cite{MR582162}, \cite{MR571104}.
\section{Heisenberg Groups and the lower central series of $\bar\Gamma(2)$} \label{}
\subsection{The Heisenberg group} The Heisenberg group is the algebraic group of $3\times 3$ unipotent upper triangular matrices. Set:
$x=\begin{pmatrix}
1 & 1 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\ \end{pmatrix}$, $y=\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 1\\
0 & 0 & 1\\ \end{pmatrix}$ and $z=\begin{pmatrix}
1 & 0 & 1\\
0 & 1 & 0\\
0 & 0 & 1\\ \end{pmatrix}$.
Those elements satisfy the relations $xz=zx$, $yz=zy$ and $z=xyx^{-1}y^{-1}=[x,y]$. Thus one obtains a presentation of the Heisenberg group over the integers. Note the formula, for $a$, $b$, $c\in{\bf Z}$
\[ x^{a}z^{b}y^{c}=\begin{pmatrix}
1 & a & b\\
0 & 1 & c\\
0 & 0 & 1\\ \end{pmatrix}. \]
From that perspective, the group law is given by $$ x^{a}z^{b}y^{c}x^{a'}z^{b'}y^{c'}=x^{a+a'}z^{b+b'+a'c}y^{c+c'}. $$
The Heisenberg group $H_{\bf Z}$ over ${\bf Z}$ can be identified with ${\bf Z}^{3}$ with the previous group law. The abelianization of $H_{\bf Z}$ is freely generated by the images of $x$ and $y$ and is thus isomorphic to ${\bf Z}^{2}$. Thus $H_{\bf Z}$ is a central extension of ${\bf Z}^{2}$ by ${\bf Z}$.
Let $M$ and $N$ be positive integers. Let $L$ be a common divisor of $M$ and $N$. Let $H_{M,N,L}$ be the quotient group of $H_{\bf Z}$ spanned by $x$ and $y$ with relations $xz=zx$, $yz=zy$, $z=xyx^{-1}y^{-1}$ and $x^{M}=y^{N}=z^{L}=1$. Such a group can be identified (as a set) with ${\bf Z}/M{\bf Z}\times {\bf Z}/L{\bf Z}\times {\bf Z}/N{\bf Z}$ via the inverse of the map $(a,b,c)\mapsto x^{a}z^{b}y^{c}$.
Note that the map $H_{M,N,L}\rightarrow {\bf Z}/M{\bf Z}\times {\bf Z}/N{\bf Z}$ comes from the abelianization.
Let $M'$, $N'$ and $L'$ be integers $\ge1$ such that $M|M'$, $N|N'$, and $L|L'$. The canonical group homomorphism $H_{M',N',L'}\rightarrow H_{M,N,L}$ is surjective. Its kernel is generated by $\{x^{M},y^{N}\}$. Since $[x^{M},y^{N}]=z^{NM}$, this kernel is abelian if and only if $L'|NM$.
\begin{prop} Let $T$ be the lowest common multiple of $M$ and $N$. The group $H_{M,N,L}$ is of exponent $T$ if $T$ is odd or $T/L$ is even. It is of exponent $2T$ otherwise. In particular $H_{N,N,N}$, for $N$ odd, and $H_{N,N,N/2}$, for $N$ even, are of exponent $N$. \end{prop} \begin{proof} Let $\alpha$ and $\beta$ be elements of the subgroups of $H_{M,N,L}$ generated by $x$ and $y$ respectively. Set $e=T$ if $T$ is odd or $T/L$ is even, and $e=2T$ otherwise. Since $[\alpha,\beta]$ belongs to the center of $H_{M,N,L}$, it is of order dividing $L$. Moreover, one has $(\alpha\beta)^{n}=\alpha^{n}[\alpha,\beta]^{n(n-1)/2}\beta^{n}$, which is trivial for $n=e$. By this calculation, one has $(xy)^{n}=x^{n}z^{n(n-1)/2}y^{n}$, which shows that $xy$ is of order exactly $e$. \end{proof}
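The following instance is included only as an elementary illustration of the dichotomy in the proposition. In $H_{2,2,2}$ one has $T=2$ and $T/L=1$, which is odd, so the exponent is $2T=4$: indeed $$ (xy)^{2}=x^{2}z^{\pm1}y^{2}=z^{\pm1}\neq 1, \qquad (xy)^{4}=z^{\pm2}=1, $$ so $xy$ has order $4$, while $x$, $y$ and the central element $z$ have order $2$.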
Since the group ${\bar\Gamma}(2)=\Gamma(2)/\{\pm1\}$ is freely generated by $A$ and $B$, one gets a surjective group homomorphism ${\bar\Gamma}(2)\rightarrow H_{\bf Z}$ which sends $A$ and $B$ to $x$ and $y$ respectively. Its kernel is $[{\bar\Gamma}(2),[{\bar\Gamma}(2),{\bar\Gamma}(2)]]$. Every (necessarily nilpotent) finite quotient group of ${\bar\Gamma}(2)$ which factorizes through ${\bar\Gamma}(2)/[{\bar\Gamma}(2),[{\bar\Gamma}(2),{\bar\Gamma}(2)]]$ is isomorphic to one of the groups $H_{M,N,L}$.
Denote by $\Gamma_{M, N, L}$ the kernel of the map : ${\bar\Gamma}(2)\rightarrow H_{\bf Z}\rightarrow H_{M,N,L}$.
\subsection{The lower central series} Recall that the {\it lower central series} $(G_k)_{k\ge1}$ of a group $G$ is defined recursively by $G_1=G$ and $G_{k+1}=[G_k,G]$. The quotient $G_k/G_{k+1}$ is then an abelian group. When $G$ is a free group generated by the family $(t_i)_{i\in I}$, $G_k/G_{k+1}$ is a free abelian group generated (not freely) by the classes of the commutators of weight $k$ on the generators, {\it i.e.} by the $[t_{i_1},...,t_{i_k}]$, where the indices run through any sequence $\{1,2,...,k\}\rightarrow I$ \cite[Theorem 10.2.3]{MR0103215}. When, furthermore, $I$ is finite of cardinality $m$, the rank $r_k$ of $G_k/G_{k+1}$ is given by Witt's formula, involving the necklace polynomial, $$
r_k(G)=\frac{1}{k}\sum_{d|k}\mu(d)m^{k/d}, $$ where $\mu$ is the M\"obius function. In particular the lower central series $({\bar\Gamma}(2)_k)_{k\ge1}$ of ${\bar\Gamma}(2)$ satisfies $r_1({\bar\Gamma}(2))=2$, $r_2({\bar\Gamma}(2))=1$, $r_3({\bar\Gamma}(2))=2$ and $r_4({\bar\Gamma}(2))=3$. The corresponding generators of ${\bar\Gamma}(2)_k/{\bar\Gamma}(2)_{k+1}$ for $k=1$, $2$, $3$ are the classes of $\{A,B\}$, $\{C\}$, and $\{[C,A],[C,B]\}$ respectively. Therefore one has a surjective group morphism $\phi_1$ : ${\bar\Gamma}(2)\rightarrow \mathbb{Z}^2$ such that $\phi_1(A)=(1,0)$ and $\phi_1(B)=(0,1)$. Its kernel is ${\bar\Gamma}(2)_2$. Moreover, one gets a group isomorphism: ${\bar\Gamma}(2)_1/{\bar\Gamma}(2)_{3}\simeq H_{\bf Z}$.
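As an illustration of Witt's formula (a routine verification, recorded here only for convenience), take $m=2$ and $k=4$: $$ r_4=\frac{1}{4}\left(\mu(1)2^{4}+\mu(2)2^{2}+\mu(4)2\right)=\frac{1}{4}(16-4+0)=3, $$ in agreement with the value $r_4({\bar\Gamma}(2))=3$ given above.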
Next we have the surjective group morphism $\phi_2$ : ${\bar\Gamma}(2)_2\rightarrow \mathbb{Z}$, such that $\phi_2(C)=1$. Its kernel is ${\bar\Gamma}(2)_3$. We can now describe $\phi_3$ : ${\bar\Gamma}(2)_3\rightarrow \mathbb{Z}^2$ such that $\phi_3([C,A])=(1,0)$ and $\phi_3([C,B])=(0,1)$. Something interesting happens at that stage.
For $k$ integer $\ge 2$, the extension $1\rightarrow G_{k}/G_{k+1}\rightarrow G_{k-1}/G_{k+1}\rightarrow G_{k-1}/G_k\rightarrow 1$ is central. Consequently, since $r_2({\bar\Gamma}(2))=1$, the group ${\bar\Gamma}(2)_2/{\bar\Gamma}(2)_4$ is abelian, and free of rank $3$. Of course, the extension $1\rightarrow {\bar\Gamma}(2)_{2}/{\bar\Gamma}(2)_{4}\rightarrow {\bar\Gamma}(2)_{1}/{\bar\Gamma}(2)_{4}\rightarrow {\bar\Gamma}(2)_{1}/{\bar\Gamma}(2)_2\rightarrow 1$ is not central.
\begin{prop} There exists a group isomorphism $\psi$ : ${\bar\Gamma}(2)_2/{\bar\Gamma}(2)_4\simeq \mathbb{Z}^3$ given by $C\mapsto (0,0,1)$, $[C,A]\mapsto (1,0,0)$, and $[C,B]\mapsto(0,1,0)$. One has, for $\gamma\in {\bar\Gamma}(2)$ and $\delta\in\bar\Gamma(2)_2$, the formula $$ \psi(\gamma\delta\gamma^{-1})=(-\phi_1(\gamma)\phi_2(\delta),0)+\psi(\delta). $$ or equivalently $$ \psi([\delta,\gamma])=(\phi_1(\gamma)\phi_2(\delta),0). $$ In particular, one has, for $i$, $j\in\mathbb{Z}$, the formula $\psi(A^iB^jC^kB^{-j}A^{-i})=(-ki,-kj,k)$. \end{prop}
\begin{proof} Given that $\{[C,A],[C,B]\}$ is a basis of the $\mathbb{Z}$-module ${\bar\Gamma}(2)_3/{\bar\Gamma}(2)_4$, and $C$ is a basis of the $\mathbb{Z}$-module ${\bar\Gamma}(2)_2/{\bar\Gamma}(2)_3$, any lifting of $C$ modulo ${\bar\Gamma}(2)_3$ to a class $C'$ modulo ${\bar\Gamma}(2)_4$ gives a basis $\{[C,A],[C,B], C'\}$ of ${\bar\Gamma}(2)_2/{\bar\Gamma}(2)_4$. The choice $C=C'$ is evidently suitable, but is somewhat arbitrary.
One has $$ \psi(\gamma\delta\gamma^{-1})=\psi(\gamma\delta C^{-\phi_2(\delta)}\gamma^{-1}C^{\phi_2(\delta)}\delta^{-1}\delta C^{-\phi_2(\delta)}\gamma C^{\phi_2(\delta)}\gamma^{-1}). $$ Since $\delta C^{-\phi_2(\delta)}\in\bar\Gamma(2)_3$, the factor $\gamma\delta C^{-\phi_2(\delta)}\gamma^{-1}C^{\phi_2(\delta)}\delta^{-1}$ belongs to $\bar\Gamma(2)_4$; hence its image by $\psi$ vanishes. So we get \begin{equation} \label{psi} \psi(\gamma\delta\gamma^{-1})=\psi(\delta)-\phi_2(\delta)\psi(C)+\phi_2(\delta)\psi(\gamma C\gamma^{-1}), \end{equation} which translates into $$ \psi([\delta,\gamma])=\phi_2(\delta)\psi([C,\gamma]). $$ It remains to determine $\psi([C,\gamma])$. Let $\gamma_1$, $\gamma_2\in {\bar\Gamma}(2)$. One has \begin{equation} \label{doubleconjugacy} (\gamma_1\gamma_2)\delta(\gamma_1\gamma_2)^{-1}=[\gamma_1,\gamma_2][\gamma_2,[\gamma_1,\delta]][\gamma_1,\delta][\gamma_2,\delta]\delta[\gamma_2,\gamma_1]. \end{equation} Since $[\gamma_2,[\gamma_1,\delta]]$ is a commutator of degree $4$, it is in the kernel of $\psi$. It follows that the map $\gamma\mapsto \psi(\gamma\delta\gamma^{-1})-\psi(\delta)$ is a group homomorphism from $\bar\Gamma(2)$ to $\mathbb{Z}^2\times\{0\}\subset\mathbb{Z}^3$. For $\delta=C$, $\gamma=A$ (resp. $\gamma=B$), one has $\psi(\gamma\delta\gamma^{-1})-\psi(\delta)=-(\phi_1(\gamma),0)$, hence the latter equality is true for all $\gamma\in\bar\Gamma(2)$. Thus one gets $$ \psi([C,\gamma])=(\phi_1(\gamma),0), $$ which gives the main formula. It remains to apply this to $\gamma=A^iB^j$ and $\delta=C^k$ to obtain the final formula.
\end{proof}
The exact sequence $1\rightarrow {\bar\Gamma}(2)_{3}/{\bar\Gamma}(2)_{4}\rightarrow {\bar\Gamma}(2)_{1}/{\bar\Gamma}(2)_{4}\rightarrow {\bar\Gamma}(2)_{1}/{\bar\Gamma}(2)_3\rightarrow 1$ identifies to a central group extension $$ 0\rightarrow \mathbb{Z}^2\rightarrow H'_\mathbb{Z}\rightarrow H_\mathbb{Z}\rightarrow 1. $$
\subsection{The groups $\Phi_N$, $\Phi'_N$ and $\Phi''_N$} We denote the group $\Gamma_{N,N,1}$ by $\Phi_N$. It is the kernel of the group homomorphism $\bar\phi_1$ : ${\bar\Gamma}(2)\rightarrow (\mathbb{Z}/N\mathbb{Z})^2$, which maps $A$ to $(1,0)$ and $B$ to $(0,1)$ (and thus $C$ to $(0,0)$). A system of representatives for the cosets $\Phi_N \backslash {\bar\Gamma}(2)$ is given by $A^i B^j$ with $i,j \in \{0,1,\ldots,N-1\}$. The structure of $\Phi_N$ is probably well known, but we could not find a complete reference for a presentation of $\Phi_N$.
\begin{prop}
The group $\Phi_N$ is generated by $U=\{A^N,B^N, A^iB^jCB^{-j}A^{-i}/0\le i,j\le N-1\}$. Moreover, writing $T_{i,j}$ for the generator $A^iB^jCB^{-j}A^{-i}$, the group $\Phi_N$ has presentation $\langle A^N, B^N, T_{i,j}\ (0\le i,j\le N-1) \mid [A^N,B^N]=\prod_{i=0}^{N-1}\prod_{j=0}^{N-1}T_{N-1-i,j}\rangle$. \end{prop} \begin{proof}
The group $\Phi_N$ is generated by $U=\{A^N,B^N, A^iB^jCB^{-j}A^{-i}/0\le i,j\le N-1\}$ ~\cite{MR681120}, ~\cite{MR0441978}. By the Nielsen-Schreier theorem, $\Phi_N$, being a subgroup of index $N^2$ of a free group on two generators, is free on $N^2+1$ generators. A relation between the $N^2+2$ exhibited generators in $U$ presents itself: \begin{equation} \label{relation} A^NB^NA^{-N}B^{-N}=\prod_{i=0}^{N-1}\prod_{j=0}^{N-1}A^{N-1-i}B^jCB^{-j}A^{i+1-N}. \end{equation} One of the exhibited generators in $U$ can be expressed in terms of the $N^2+1$ remaining elements of $U$. Thus we get a presentation by generators and relations of $\Phi_N$. \end{proof}
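For the reader's convenience, here is the rank count used above (a routine application of the Nielsen--Schreier formula): a subgroup of index $d$ of a free group of rank $r$ is free of rank $1+d(r-1)$, so with $d=N^{2}$ and $r=2$ one gets $$ 1+N^{2}(2-1)=N^{2}+1 $$ free generators for $\Phi_N$, whereas $U$ has $N^{2}+2$ elements; hence exactly one relation is to be expected.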
Set $N'=N$ if $N$ is odd, and $N'=N/2$ if $N$ is even.
We set $\Phi'_N=\Gamma_{N,N,N'}$. It is the subgroup of $\Phi_N$ obtained as the kernel of the morphism $\bar\phi_2$ : $\Phi_N\rightarrow \mathbb{Z}/N'\mathbb{Z}$ which vanishes on $A^N$ and $B^N$ and takes the value $1$ on $A^iB^jCB^{-j}A^{-i}$ for $i$, $j$ integers.
Alternately, $\Phi'_N$ is the kernel of the composed map ${\bar\Gamma}(2)\rightarrow H_{\bf Z}\rightarrow H_{N,N,N'}$. Thus the composed map $[{\bar\Gamma}(2),{\bar\Gamma}(2)]\rightarrow \mathbb{Z}\rightarrow \mathbb{Z}/N'\mathbb{Z}$ extends to a map $\bar\phi_2$: $\Phi_N\rightarrow \mathbb{Z}/N'\mathbb{Z}$, with kernel $\Phi'_N$. It vanishes on $A^N$ and $B^N$.
\begin{prop} The composed map $[{\bar\Gamma}(2),{\bar\Gamma}(2)]\rightarrow \mathbb{Z}^3\rightarrow (\mathbb{Z}/N'\mathbb{Z})^3$ extends to a group homomorphism $$ \bar\psi : \Phi_N\rightarrow (\mathbb{Z}/N'\mathbb{Z})^3 $$ which vanishes on $A^N$ and $B^N$ and, for $\gamma\in\bar\Gamma(2)$ and $\delta\in\Phi_N$, satisfies \begin{equation} \label{barpsi} \bar\psi(\gamma\delta\gamma^{-1})=(-\bar\phi_1(\gamma)\bar\phi_2(\delta),0)+\bar\psi(\delta). \end{equation} In particular, for $i$, $j$, $k\in\mathbb{Z}$, one has \begin{equation} \label{psigen} \bar\psi(A^iB^jC^kB^{-j}A^{-i})=(-ik,-jk,k). \end{equation} \end{prop} \begin{proof} It follows from the presentation of $\Phi_N$ that formula \ref{psigen} defines a morphism $\Phi_N\rightarrow (\mathbb{Z}/N\mathbb{Z})^3$ (it vanishes on the exhibited relation \ref{relation} between the generators). Such a morphism coincides with the composed map $[{\bar\Gamma}(2),{\bar\Gamma}(2)]\rightarrow \mathbb{Z}^3\rightarrow (\mathbb{Z}/N\mathbb{Z})^3$, by the formula we established on $\psi$.
The formula \ref{barpsi} is valid whenever $\delta\in\bar\Gamma(2)_2$, by \ref{psi}. Since $\Phi_N$ is generated by $\bar\Gamma(2)_2$ together with $A^N$ and $B^N$, it remains to prove \ref{barpsi} for $\delta=A^N$ and $\delta=B^N$. Consider the case $\delta=B^N$ for instance (the other case is similar). Let us use the formula \ref{doubleconjugacy} again. We get, for $\gamma_1$, $\gamma_2\in\bar\Gamma(2)$, (leaving out opposite terms) $$ \bar\psi(\gamma_1\gamma_2 B^N\gamma_2^{-1}\gamma_1^{-1})=\bar\psi([\gamma_2,[\gamma_1,B^N]])+\bar\psi([\gamma_1,B^N])+\bar\psi([\gamma_2,B^N])+\bar\psi(B^N). $$ We have $\bar\psi(B^N)=0$, and $\bar\psi([\gamma_2,[\gamma_1,B^N]])$ is the reduction modulo $N$ of $\psi([\gamma_2,[\gamma_1,B^N]])$. One has $\psi([\gamma_2,[\gamma_1,B^N]])=-(\phi_1(\gamma_2)\phi_2([\gamma_1,B^N]),0)$. But $\phi_2([\gamma_1,B^N])\in N\mathbb{Z}$. So we have proved that the map $\gamma\mapsto \bar\psi(\gamma B^N\gamma^{-1})$ is a group homomorphism. It remains to prove that the formula \ref{barpsi} holds for $\gamma=A$ and $\gamma=B$ (still in the configuration where $\delta=B^N$). Only the case $\gamma=A$ is of interest. We have $$ \bar\psi(AB^NA^{-1})=\bar\psi(AB^NA^{-1}B^{-N}). $$ As $AB^NA^{-1}B^{-N}\in\bar\Gamma(2)_2$, $\bar\psi(AB^NA^{-1}B^{-N})$ is the reduction modulo $N$ of $\psi(AB^NA^{-1}B^{-N})$. We use the identity $$ AB^NA^{-1}B^{-N}=(ABA^{-1})^NB^{-N}=(ABA^{-1}B^{-1}B)^NB^{-N}=CBCB^{-1}B^2CB^{-2}...B^{N-1}CB^{1-N}. $$ The right-hand side is a product of generators of $\Phi_N$. We can apply \ref{psigen}: $$ \psi(AB^NA^{-1}B^{-N})=\sum_{i=0}^{N-1}\psi(B^iCB^{-i})=\sum_{i=0}^{N-1}(0,-i,1)=(0,-N(N-1)/2,N). $$ Since $N'={\rm gcd}(N,N(N-1)/2)$, we have indeed that $\bar\psi(AB^NA^{-1}B^{-N})=0$. \end{proof}
\begin{remark} Let $N''=N'$ if $N$ is prime to $3$, and $N''=N'/3$ otherwise. The appearance of the denominator $2$ (for $N'$) and now $3$ (for $N''$) is related to Bernoulli numbers. We suspect that ultimately it is related to the mixed Tate motives that have been discovered by Deligne in his study of the nilpotent completion of the fundamental group of the projective line deprived of three points \cite{MR1012168}. \end{remark}
We define $\Phi''_N$ as the kernel of the composed map $\Phi_{N}\rightarrow(\mathbb{Z}/N'\mathbb{Z})^{3}\rightarrow (\mathbb{Z}/N''\mathbb{Z})^{2}\times(\mathbb{Z}/N'\mathbb{Z})$, where the first map is $\bar\psi$ and the second reduces the first two coordinates modulo $N''$.
\begin{cor} The exact sequence $$ 0\rightarrow \Phi_N'/\Phi_N''\rightarrow {\bar\Gamma}(2)/\Phi_N''\rightarrow {\bar\Gamma}(2)/\Phi_N'\rightarrow 0 $$ makes the group ${\bar\Gamma}(2)/\Phi_N''$ a central extension of the Heisenberg group $H_{N,N,N'}$ by $(\mathbb{Z}/N''\mathbb{Z})^2$. \end{cor} \begin{proof} It follows from formula \ref{barpsi}, as $\bar\phi_2$ vanishes on $\Phi'_N$. \end{proof} We denote by $H'_{\mathbb{Z}/N\mathbb{Z}}$ the group ${\bar\Gamma}(2)/\Phi_N''$.
\begin{prop} The group $H'_{\mathbb{Z}/N\mathbb{Z}}$ is of exponent $N$. In other words, for every $\gamma\in\bar\Gamma(2)$, one has $\gamma^N\in\Phi''_N$. \end{prop} \begin{proof} It relies on relations for commutators, that, we presume, are well known. Let $G$ be a group. Suppose $G_4$ is trivial. Then $G_3$ is contained in the center of $G$. Let $\alpha$, $\beta\in G$. Set $\gamma=[\alpha,\beta]$, $\alpha'=[\gamma^{-1},\alpha]$ and $\beta'=[\gamma^{-1},\beta]$. Let $n$ be an integer. One has the relation \begin{equation} \label{betanalpha} \beta^n\alpha=\alpha\gamma^{-n}\beta^n\alpha'^n\beta'^{-n(n-1)/2}. \end{equation} We prove it by induction on $n$. Indeed, it holds for $n=0$. Suppose it holds for some value of $n$. We have $$ \beta^{n+1}\alpha=\beta\alpha\gamma^{-n}\beta^n\alpha'^n\beta'^{-n(n-1)/2}=\gamma^{-1}\alpha\beta\gamma^{-n}\beta^n\alpha'^n\beta'^{-n(n-1)/2}. $$ We use the relations $\gamma^{-1}\beta=[\gamma^{-1},\beta]\beta\gamma^{-1}$ and $\gamma^{-1}\alpha=[\gamma^{-1},\alpha]\alpha\gamma^{-1}$. Thus we get $$ \beta^{n+1}\alpha=\alpha\gamma^{-n-1}\beta^{n+1}\alpha'^{n+1}\beta'^{-n-n(n-1)/2}, $$ which is the desired formula. We pass to the next step. We have $$ (\alpha\beta)^n=\alpha^n\gamma^{-n(n-1)/2}\beta^n\alpha'^{-n(n-4)(n+1)}\beta'^{n(n-1)(n+1)/6}. $$ We proceed again by induction on $n$. We suppose the formula holds for a certain value of $n$. We get $$ (\alpha\beta)^{n+1}=\alpha^n\gamma^{-n(n-1)/2}\beta^n\alpha'^{-n(n-4)(n+1)}\beta'^{n(n-1)(n+1)/6}\alpha\beta. $$ Using the formula \ref{betanalpha}, we get $$ (\alpha\beta)^{n+1}=\alpha^n\gamma^{-n(n-1)/2}\alpha\gamma^{-n}\beta^{n+1}\alpha'^{-n(n-4)(n+1)+n}\beta'^{n(n-1)(n+1)/6-n(n-1)/2}. $$ We now use the formula $\gamma^{-1}\alpha=[\gamma^{-1},\alpha]\alpha\gamma^{-1}$ and get $$ (\alpha\beta)^{n+1}=\alpha^{n+1}\gamma^{-n(n-1)/2-n}\beta^{n+1}\alpha'^{-n(n-4)(n+1)+n-n(n-1)/2}\beta'^{n(n-1)(n+1)/6}, $$ which is the desired formula.
Consider the case where $G=H'_{\mathbb{Z}/N\mathbb{Z}}$. Since this group is generated by the classes of $A$ and $B$, which are of order $N$, it follows that all elements of $H'_{\mathbb{Z}/N\mathbb{Z}}$ are of order divisible by $N$. \end{proof}
\begin{remark} Let $p$ be prime number. Stallings introduced the lower $p$-central series $(S_{k})_{k\ge1}$ as a particular case for $N=p$ of the following construction. One has $S_1=G$, and, for $k\ge2$, $S_{k+1}=[G,S_k](S_k)^N$, where the latter expression is the subgroup of $G$ generated by $[G,S_k]$ and $(S_k)^N$. Note that, when $G= {\bar\Gamma}(2)$, one has $S_2=[{\bar\Gamma}(2),{\bar\Gamma}(2)]{\bar\Gamma}(2)^N=\Phi_N$ and $S_3=[{\bar\Gamma}(2),\Phi_N]\Phi_N^N$. Note that $S_3\subset \Phi_N'\subset S_2$. Since $A^N$ and $B^N$ do not belong to $S_3$, the groups $S_3$ and $\Phi'_N$ do not coincide. \end{remark}
\subsection{Odd adelic completions} Recall that, for $k\ge 1$, ${\bar\Gamma}(2)_k$ is the $k$-th term in the lower central series of ${\bar\Gamma}(2)$. Let $D_3=\{\pm {\rm Id}, \pm \begin{pmatrix}
0 & -1 \\
1 & 0 \\ \end{pmatrix}, \pm\begin{pmatrix}
-1 & 1 \\
1 & 1 \\ \end{pmatrix}, \pm\begin{pmatrix}
1 & 1 \\
1 & -1 \\ \end{pmatrix}\}$ in ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$ be the index $3$, $2$-Sylow subgroup of ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$. It is isomorphic to the Klein group.
Recall that the derived subgroup of ${\rm PSL}_{2}(\mathbb{Z})$ is the projective congruence subgroup of level $6$ whose image in ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$ is $D_{3}$, and whose image in ${\rm PSL}_{2}(\mathbb{Z}/2\mathbb{Z})$ is cyclic of order $3$.
Let \[ \hat\mathbb{Z}_{\rm odd}=\varprojlim_{n\,{\rm odd}} \mathbb{Z}/n\mathbb{Z}\simeq\prod_{p\ne2}\mathbb{Z}_{p} \]
be the profinite completion of $\mathbb{Z}$ away from the prime $2$. Let $\hat D_{{\rm odd}}$ be the inverse image of $D_{3}$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$.
\begin{prop} The image of ${\bar\Gamma}(2)_2$ in ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$ is equal to $D_{3}$. For $p$ prime, $p>3$, its image modulo $p$ is ${\rm PSL}_{2}(\mathbb{Z}/p\mathbb{Z})$. \end{prop} \begin{proof} Indeed, the images of ${\bar\Gamma}(2)$ and ${\rm PSL}_{2}(\mathbb{Z})$ in ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$ coincide by weak approximation. Thus ${\bar\Gamma}(2)_2$ modulo $3$ coincides with the derived subgroup of ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$, which in turn is the reduction modulo $3$ of the derived subgroup of ${\rm PSL}_{2}(\mathbb{Z})$. The second assertion is proved similarly. \end{proof}
\begin{prop} \label{oddadelic} Let $k$ be an integer $\ge2$. The closure of ${\bar\Gamma}(2)_k$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ is equal to $\hat D_{{\rm odd}}$. \end{prop} \begin{proof} We prove this first for $k=2$. Let $n$ be an odd integer divisible by $3$. Let $D_n$ be the inverse image of $D_{3}$ in ${\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})$. The image of ${\bar\Gamma}(2)_2$ modulo $n$ coincides with the image of the derived subgroup of ${\rm PSL}_{2}(\mathbb{Z})$ modulo $n$, which is a subgroup of index $3$ of ${\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})$. Such a subgroup can only be $D_n$. Thus we obtain the proposition for $k=2$.
We prove the proposition for $k=3$. Let $I_n$ be the image of ${\bar\Gamma}(2)_3$ in ${\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})$. Note that we have an exact sequence \[ 1\rightarrow K_n \rightarrow I_n\rightarrow D_3\rightarrow 1. \] Since we have an exact sequence \[ 1\rightarrow \pm1+3{\rm M}_2( \mathbb{Z}/\frac{n}{3}\mathbb{Z})_0 \rightarrow {\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})\rightarrow {\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})\rightarrow 1 \] where ${\rm M}_2( \mathbb{Z}/\frac{n}{3}\mathbb{Z})_0$ is the subgroup of ${\rm M}_2( \mathbb{Z}/\frac{n}{3}\mathbb{Z})$ made of matrices of trace $0$, the equality $I_n=D_n$ would follow from the inclusion $1+3{\rm M}_2( \mathbb{Z}/\frac{n}{3}\mathbb{Z})_0\subset K_n$. It remains to establish the latter inclusion. Since ${\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})$ is equal to its derived subgroup when $n$ is prime to $6$, by the Chinese remainder theorem, it is enough to prove it when $n$ is a power of $3$.
It follows from the equality $[(1+p{\rm M}_2(\mathbb{Z}_p))_{0},{\rm SL}_2(\mathbb{Z}_p)]=(1+p{\rm M}_2(\mathbb{Z}_p))_{0}$, valid for any prime $p$, and the fact that the closure of ${\bar\Gamma}(2)_2$ in ${\rm PSL}_2(\mathbb{Z}_3)$ contains $(1+3{\rm M}_2(\mathbb{Z}_3))_{0}$.
The general case $k\ge3$ of the proposition is now immediate. Indeed, since ${\bar\Gamma}(2)_3$ and ${\bar\Gamma}(2)_2$ have the same image modulo $n$, those images are the second and third respectively derived subgroups of ${\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})$. Thus the lower central series of ${\rm PSL}_{2}(\mathbb{Z}/n\mathbb{Z})$ stabilizes to the image modulo $n$ of ${\bar\Gamma}(2)_k$ for any $k\ge 3$. \end{proof}
\begin{prop} The closure of $\Phi_{N}$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ is ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ if $3$ does not divide $N$. It is $\hat D_{{\rm odd}}$ if $3$ divides $N$. \end{prop} \begin{proof} If $3$ does not divide $N$, $\Phi_N$ contains a non-trivial upper triangular matrix (for instance $A^N$) which is not the identity modulo $3$. Its closure in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ contains a maximal proper subgroup of index $3$, namely $\hat D_{{\rm odd}}$, and an element which is not in that subgroup. Therefore the closure is ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$. If $3$ divides $N$, since both $A^{N}$ and $B^{N}$ reduce to the identity modulo $3$, the images of $\Phi_N$ and ${\bar\Gamma}(2)_2$ coincide in ${\rm PSL}_{2}(\mathbb{Z}/3\mathbb{Z})$. Hence the result.
\end{proof}
\begin{prop} \label{closureHeisenberg} The closure of $\Phi'_{N}$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ is ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ if $3$ does not divide $N$. It is $\hat D_{{\rm odd}}$ if $3$ divides $N$. \end{prop} \begin{proof} The closures of ${\bar\Gamma}(2)_2$ and of ${\bar\Gamma}(2)_3$ coincide modulo $n$ for every $n$ divisible by $3$, as we have just established. Thus the closure of $\Phi'_N$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ contains the closure of ${\bar\Gamma}(2)_2$. Since the group $\Phi'_{N}$ contains the matrices $A^N$ and $B^N$, its closure contains $\Phi_N$. Thus the closure of $\Phi'_N$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$ is equal to the closure of $\Phi_{N}$ in ${\rm PSL}_{2}(\hat\mathbb{Z}_{\rm odd})$.
\end{proof} Let $\bar\Gamma'(2)=\bar\Gamma(2)\cap \hat D_{{\rm odd}}$. It is a subgroup of index $3$ of $\bar\Gamma(2)$. \begin{cor} Let $\Gamma$ be a congruence subgroup of $\bar\Gamma(2)$. One has $\Gamma\Phi'_N= \bar\Gamma'(2)$ if $\Gamma\subset\bar\Gamma'(2)$ and $3$ divides $N$. One has $\Gamma\Phi'_N= \bar\Gamma(2)$ otherwise.
Let $n$ be an even integer. In particular, one has $\Gamma(n)\Phi'_{N}=\bar\Gamma(2)$ if $3$ does not divide $n$ or does not divide $N$. One has $\Gamma(n)\Phi'_{N}=\bar\Gamma'(2)$ if $3$ divides both $n$ and $N$. \end{cor}
\section{The associated Riemann surfaces}
\subsection{The Riemann surface $X_{M, N,L}$} Denote by $X_{M, N,L}$ the compactified modular curve defined by $\Gamma_{M, N, L}$. \begin{prop} The genus $g_{M, N,L}$ of the curve $X_{M, N,L}$ is given by the following formulas. Denote by $T$ the lowest common multiple of $M$ and $N$. Suppose $T$ is even and $T/L$ is odd, then one has \[ g_{M, N,L}:=g(X_{M, N,L})=(NML-NL-ML-NML/2T)/2+1. \] Suppose $T$ is odd or $T/L$ is even, then one has \[ g_{M, N,L}:=g(X_{M, N,L})=(NML-NL-ML-NML/T)/2+1. \]
\end{prop} \begin{proof} We use Riemann-Hurwitz formula for the morphism $X_{N,M,L}\rightarrow X(2)$. Since $\Gamma(2)$ has no elliptic elements, the ramification points of this morphism reside entirely among the cusps.
Concerning the cusps above $0$ (resp. $\infty$), note that the stabilizer of the rational number $0$ (resp. $\infty$) in $P\Gamma(2)$ is generated by $B$ (resp. $A$). As the morphism $X_{N,M,L}\rightarrow X(2)$ is Galois, the ramification index is independent of the chosen cusp, and is the order of the orbit of $B$ (resp. of $A$) acting on $\Gamma_{M,N,L}\backslash\Gamma(2)$. This is $N$ (resp. $M$) by definition of $\Gamma_{M,N,L}$. Concerning the cusps above $1$, the stabilizer in $P\Gamma(2)$ of the rational number $-1$ is generated by $A^{-1}B$. It remains to determine the order of $A^{-1}B$ in $\Gamma_{M,N,L}\backslash\Gamma(2)=H_{M,N,L}$.
The abelianization provides a map : $H_{M,N,L}\rightarrow {\bf Z}/M{\bf Z}\times {\bf Z}/N{\bf Z}$ which sends $A^{-1}B$ to $(1,-1)$. The latter element is of order $T$, which leads us to examine the order of $(A^{-1}B)^{T}$ in $H_{M,N,L}$. Denote by $D=[A^{-1},B]$. It belongs to and generates the center of $H_{M,N,L}$. It is of order $L$. Note the formula $A^{-1}B=DBA^{-1}$. Thus one has $(A^{-1}B)^{T}=D^{k}A^{-T}B^{T}$ in $H_{M,N,L}$, where $k$ is the number of factors $A^{-1}$ to the right of a factor $B$ in the $(A^{-1}B)^{T}$ written as a product of $2T$ factors. One has $k=1+2+...+(T-1)=T(T-1)/2$. This is why the order of $(A^{-1}B)^{T}$ is $1$ if $L$ is odd or if $T/L$ is even. This order is $2$ otherwise.
Thus the ramification index $e$ of any cusp above $1$ is equal to $T$ if $L$ is odd or if $T/L$ is even. It is $2T$ otherwise. We can now apply the Riemann-Hurwitz formula for the dominant morphism $X_{M,N,L}\rightarrow X(2)$ :
\[ 2 g_{M, N,L}-2=-2d+\sum_{x \in X_{M, N, L}}(e_x-1). \]
We have $d=|H_{M,N,L}|=MNL$. One gets
\[ 2 g_{M, N,L}-2=-2MNL+NL(M-1)+ML(N-1)+NML(1-1/e) \] and \[ g_{M, N,L}=(NML-NL-ML-NML/e)/2+1. \] The proposition follows from the calculation of $e$.
\end{proof}
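For instance (an elementary check of the formulas, recorded only for convenience): for $(M,N,L)=(2,2,2)$ one has $T=2$ and $T/L=1$ odd, so \[ g_{2,2,2}=(8-4-4-8/4)/2+1=0, \] while for $(M,N,L)=(3,3,3)$ one has $T=3$ odd, so \[ g_{3,3,3}=(27-9-9-27/3)/2+1=1. \] Both values agree with the lists of curves of genus $0$ and $1$ given below.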
\subsection{The Riemann surfaces $X_N$, $X'_N$ and $X''_N$} All three groups $\Phi_N$, $\Phi_N'$ and $\Phi_N''$ act on the upper half-plane $\mathbb{H}$. We denote respectively by $X_N$, $X'_N$ and $X''_N$ the corresponding completed modular curves.
\begin{prop} Both morphisms $X'_N\rightarrow X_N$ and $X''_N\rightarrow X'_N$ (and therefore $X''_N\rightarrow X_N$ as well) are unramified. The covering $X'_{N}\rightarrow X_{N}$ is cyclic of degree $N'$. The covering $X''_{N}\rightarrow X'_{N}$ is Galois with group $(\mathbb{Z}/N''\mathbb{Z})^{2}$. The covering $X''_{N}\rightarrow X_{N}$ is abelian with Galois group $(\mathbb{Z}/N''\mathbb{Z})^{2}\times(\mathbb{Z}/N'\mathbb{Z})$. \end{prop} \begin{proof} The first statement needs only to be established for the morphism $X''_N\rightarrow X_N$. The ramification points can only reside at the cusps. To show that those cusps are unramified, we need only to show that their width in $X''_N$ is equal to their width, equal to $N$, in $X_N$. Since the covering $X''_N\rightarrow X_1$ is Galois, the width of a cusp of $X_N''$ depends only on its image in the set of cusps $\{0,1,\infty\}$ of $X_1$. We just need to look at the order of $A$, $B$ and $A^{-1}B$ in ${\bar\Gamma}(2)/\Phi''_N$. All three are of order $N$ in the latter quotient.
The other assertions follow immediately from the properties of the groups $\Phi_N$, $\Phi_N'$ and $\Phi_N''$.
\end{proof} The Riemann surface $X_N$ is isomorphic to the Riemann surface obtained from the complex points of the Fermat curve. The covering $X_N'\rightarrow X_N$ is obtained from what we call the {\it Heisenberg covering of the Fermat curve} by passing to the complex numbers.
We obtain the genera $g_N$ and $g_N'$ of the curves $X_N$ and $X_N'$ by our formulas for the genus of $X_{N,M,L}$. One has $g_N=g_{N,N,1}=(N-1)(N-2)/2$. Furthermore, if $N$ is odd, one has $g'_{N}=g_{N,N,N}=(N^{3}-3N^{2}+2)/2=(N-1)(N^2-2N-2)/2$. If $N$ is even, one gets $g'_{N}=g_{N,N,N'}=(N^3-3N^2+4)/4=(N-2)^2(N+1)/4$.
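As a purely illustrative consistency check, for $N=3$ these formulas give $g_3=1$ and $g'_3=(27-27+2)/2=1$, in accordance with the fact, recorded above, that $X_{3,3,1}$ and $X_{3,3,3}$ are of genus $1$; for $N=2$ one has $N'=1$, so $X'_2=X_2$ and $g'_2=g_2=0$, in accordance with the genus $0$ case $(2,2,1)$.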
The genus $g_N''$ of the curve $X_N''$ can be deduced from the Riemann-Hurwitz formula. Since we have an unramified covering $X''_N\rightarrow X'_N$ of degree $N''^2$, one has $$ g''_N=N''^2g'_N-N''^2+1. $$
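For instance, for $N=3$ one has $N''=1$, so that $\Phi''_3=\Phi'_3$, $X''_3=X'_3$ and indeed $g''_3=g'_3=1$; this elementary check is only included as an illustration of the formula.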
The curve $X'_{N}$ possesses $NN'$ cusps above each of the cusps $0$, $1$ and $\infty$ of $X(2)$.
\subsection{Some cases of small genus}
\begin{prop} The genus $g$ of the curve $X_{M, N,L}$ is equal to $0$ for the following values of $(N,M,L)$, and only for those values : $(N,1,1)$, $(1,M,1)$, $(2,2,1)$, $(2,2,2)$. \end{prop} \begin{proof} The formula just established gives : $2-2g=-L(MN-N-M-MN/e)$. Thus $g=0$ implies that $L=1$ or $2$.
If $g=0$ and $L=1$, then one has $MN-N-M-MN/e=-2$ and $e={\rm lcm}(M,N)$. Thus ${\rm gcd}(M,N)$ divides $2$. If ${\rm gcd}(M,N)=1$, then one has $MN-N-M-1=-2$, and thus $(M-1)(N-1)=0$ that is $M=1$ or $N=1$. If ${\rm gcd}(M,N)=2$, one has $MN-N-M-2=-2$ and thus $(M-1)(N-1)=1$; therefore one has $(M,N)=(2,2)$.
If $g=0$ and $L=2$, then one has $MN-N-M-MN/e=-1$. If $4$ divides neither $M$ nor $N$, one has $e=2{\rm lcm}(M,N)$. Then ${\rm gcd}(M,N)$ divides $2$, and is equal to $2$. Then one has $MN-N-M-1=-1$, and thus $(M-1)(N-1)=1$; as $L=2$ divides $M$ and $N$, the cases $M=1$ and $N=1$ are in any case excluded, and one has $(M,N)=(2,2)$.
\end{proof} \begin{prop} The genus $g$ of the curve $X_{M, N,L}$ is equal to $1$ for the following values of $(N,M,L)$, and only for those values (up to permutation of $N$ and $M$) : $(3,2,1)$, $(4,2,1)$, $(4,2,2)$, $(3,3,1)$, $(3,3,3)$.
The Jacobian varieties of those curves are elliptic curves endowed with automorphisms of order $3$, $4$, $4$, $3$ and $3$ respectively. Consequently the $j$-invariants of those curves are $0$, $1728$, $1728$, $0$ and $0$ respectively. \end{prop} \begin{proof} Consider again the formula : $2-2g=-L(MN-N-M-MN/e)$. Thus $g=1$ amounts to $MN-N-M-MN/e=0$. One has $MN-N-M-MN/e=(N-2)(M-1)-2+M(1-N/e)$. We can suppose that $N>1$ and $M>1$ (otherwise $g=0$).
If $N=2$, one gets $M(1-2/e)=2$. If $L=1$, then $e={\rm lcm}(M,2)$. Thus $M-1=2$ or $M-2=2$. One has $(N,M,L)=(2,3,1)$ or $(N,M,L)=(2,4,1)$.
If $N=2$ and $L=2$, then $e=2M$ or $e=M$. If $e=M$, then $M-2=2$. If $e=2M$, then $M-1=2$ (absurd since $L|M$). One has $(N,M,L)=(2,4,2)$.
If $N=3$, then one has $M-3+M(1-3/e)=0$ and $e={\rm lcm}(M,3)$. One can suppose that $M>2$. Thus one has $M=3$ and $L=1$ or $L=3$. One has $(N,M,L)=(3,3,1)$ or $(N,M,L)=(3,3,3)$.
The cases where $M=2$ or $M=3$ are treated similarly. If $N>3$ and $M>3$, one has $(N-2)(M-1)-2+M(1-N/e)>0$, which precludes $g=1$.
The automorphisms come from the action of the image of $A$ in $H_{N,M,L}$, which stabilizes a cusp and therefore induces an automorphism of the elliptic curve.
\end{proof}
We can derive some information on the Manin-Drinfeld principle in the genus $1$ cases.
\begin{prop} Divisors of the form $(CP)-(P)$, where $P$ is any point (in particular a cusp) of $X_{3,3,3}$ (resp. $X_{4,2,2}$) and $C$ acts via the map $\bar\Gamma(2)\rightarrow H_{N,M,L}$, are of order dividing $3$ (resp. $2$). \end{prop} \begin{proof}
The canonical morphism $X_{3,3,3}\rightarrow X_{3,3,1}$ is of degree $3$. It gives rise to an isogeny of degree $3$ on the Jacobians by Albanese functoriality. Thus the kernel of this isogeny is of order $3$. Moreover any divisor of the form $(CP)-(P)$ is in the kernel of the isogeny, since $C$ acts trivially on $X_{3,3,1}$. The case of $X_{4,2,2}$ is treated in the same way, using the degree $2$ morphism $X_{4,2,2}\rightarrow X_{4,2,1}$.
\end{proof}
\subsection{The curve $X'(2)$} Recall that the group $\bar\Gamma'(2)$ is the subgroup of index $3$ of $\bar\Gamma(2)$ that is the inverse image of the $2$-Sylow subgroup of ${\rm PSL}_2(\mathbb{Z}/3\mathbb{Z})$. Denote by $X'(2)=X_{\Gamma'(2)}$ the corresponding modular curve.
\begin{prop} The curve $X'(2)$ is of genus $1$ and its $j$-invariant is $0$. \end{prop} \begin{proof} Consider the morphism of degree $d=3$ : $X'(2)\rightarrow X(2)$. Since none of the matrices $A$ (generator of the stabilizer of the cusp $\infty$), $B$ (generator of the stabilizer of the cusp $0$), and $AB^{-1}$ (generator of the stabilizer of the cusp $1$) belongs to $\bar\Gamma'(2)$, the morphism is totally ramified at each of the three cusps of $X(2)$, and ramified only over those points. The Riemann-Hurwitz formula expresses the genus $g$ of $X'(2)$ as $(2g-2)=-2d+\sum_P(e_P-1)=-6+6=0$ (where $P$ runs through points of ramification and $e_P$ designates the ramification index at that point), hence $g=1$.
The curve $X'(2)$ has an automorphism (the class of $A$ in $\bar\Gamma(2)/\bar\Gamma'(2)$) of order $3$ which leaves fixed the cusp $\infty$. Since it is of genus $1$, it is an elliptic curve with an automorphism of order $3$. It necessarily has $j$-invariant $0$. \end{proof}
\begin{remark} Since $\Gamma_{3,3,3}=\Phi'_3$ is a subgroup of index $3$ of $\Gamma'(2)$, one has a morphism $X_{3,3,3}\rightarrow X'(2)$ of degree $3$. Both curves involved are of genus $1$, so we have an isogeny of degree $3$. Note that $\bar\Gamma'(2)$ is a congruence subgroup of level $12$ and a subgroup of index $3$ of the derived subgroup $\tilde\Gamma$ of ${\rm PSL}_2(\mathbb{Z})$. The latter subgroup defines a modular curve $X_{\tilde\Gamma}$ of genus $1$, which happens to have $j$-invariant $0$. Thus we get an isogeny $X'(2)\rightarrow X_{\tilde\Gamma}$ of degree $3$. \end{remark}
We now prove theorem \ref{theoremcomparison}.
\begin{proof} The correspondence is obtained by composing pushing to $X_{\Gamma\Phi'_N}$ and pulling back to $X_\Gamma$. By Proposition \ref{closureHeisenberg}, one has $\Gamma\Phi'_N=\bar\Gamma(2)$ except if $3$ divides $N$ and $\Gamma$ is contained in $\bar\Gamma'(2)$. If $\Gamma\Phi'_N=\bar\Gamma(2)$, then $\theta_{N,\Gamma}$ factorizes through the jacobian of $X(2)$, which is $0$. Otherwise, namely if $3$ divides $N$ and $\Gamma$ is contained in $\bar\Gamma'(2)$, $\theta_{N,\Gamma}$ factorizes through the surjective map $J'_N\rightarrow J'(2)$. Since $J'(2)$ is the jacobian of a curve of genus $1$ and $j$-invariant $0$, it is an elliptic curve of $j$-invariant $0$. Moreover the map $J'(2)\rightarrow J_\Gamma$ has finite kernel. The result follows. \end{proof}
It is well known that the modular curve $X_0(27)$ has $j$-invariant $0$, and that the Fermat curve $F_3$ is a model for $X_0(27)$. We might expect a connection between $X_0(27)$ and $X'_3$. Let $\Gamma=\bar\Gamma(2)\cap \Gamma_0(27)$. It is a congruence subgroup. But it is not contained in $\bar\Gamma'(2)$. Thus, if we apply theorem \ref{theoremcomparison} to the group $\Gamma$, we obtain, counterintuitively, the $0$-morphism $J'_3\rightarrow J_{\Gamma}$. {\it A fortiori}, if we push forward $J_\Gamma\rightarrow J_0(27)$ we obtain $0$.
\subsection{Mixed homology groups}
In \cite{MR3951413}, the homology group of $X_N$ relative to the whole set of cusps is thoroughly studied, by the method of Manin ~\cite{MR0314846}. We found it fruitful in \cite{BanerjeeMerel} to consider the following slightly different point of view: for $\Gamma$ a subgroup of finite index of $\bar\Gamma(2)$, the corresponding modular curve $X_\Gamma$ covers $X(2)$, which admits three cusps: $\Gamma(2)0$, $\Gamma(2)1$ and $\Gamma(2)\infty$. Let $\partial_\Gamma^+$ (resp. $\partial_\Gamma^-$) be the set of cusps above $\Gamma(2)0\cup\Gamma(2)\infty$ (resp. $\Gamma(2)1$). It is thus possible to consider the mixed homology group $\mathrm H_1(X_{\Gamma}-\partial_{\Gamma}^{-}, \partial_{\Gamma}^{+}; {\mathbb{Z}})$ (and its dual $\mathrm H_1(X_{\Gamma}-\partial_{\Gamma}^{+}, \partial_{\Gamma}^{-}; {\mathbb{Z}})$). One gets a group isomorphism \[ \xi_{\Gamma}^{+} : \mathbb{Z}[\Gamma\backslash\bar\Gamma(2)]\rightarrow \mathrm H_1(X_{\Gamma}-\partial_{\Gamma}^{-}, \partial_{\Gamma}^{+}; {\mathbb{Z}}) \]
which, for $g\in\bar \Gamma(2)$, associates to $\Gamma g$ the class $\xi_\Gamma(g)$ in $\mathrm H_1(X_{\Gamma}-\partial_{\Gamma}^{-}, \partial_{\Gamma}^{+}; {\mathbb{Z}})$ of a path from $g0$ to $g\infty$ in the upper half-plane.
Consider now the case where $\Gamma=\Phi'_N$. To simplify notations, set $\partial^{+}=\partial_{\Phi'_N}^{+}$ and $\partial^{-}=\partial_{\Phi'_N}^{-}$. Recall that we have a group isomorphism $\Phi_N'\backslash \Gamma(2)\rightarrow H_{N,N,N'}$ which, for $(a,b,c)\in({\mathbb{Z}})^{3}$, to $\Phi'_NA^aC^cB^b$ associates $x^az^cy^b$. We thus get a group isomorphism \[ \mathbb{Z}[ H_{N,N,N'}]\simeq \mathrm H_1(X'_N-\partial^-,\partial^+;\mathbb{Z}). \]
For $(a,b,c)\in({\mathbb{Z}}/N{\mathbb{Z}})^{2}\times(\mathbb{Z}/N'\mathbb{Z})$, set $i(a,b,c)=\xi^{+}_{\Phi'_N}(\Phi'_{N}A^{a}C^{c}B^{b})$.
Thus, $H_{N,N,N'}$ acts on the curve $X'_{N}$, and transitively on the sets of cusps of $X'_{N}$ above $0$, $1$ and $\infty$ respectively. Thus the stabilizer of a cusp is cyclic of order $N$.
The long exact sequence of relative homology yields: \[ 0 \rightarrow \mathrm H_1(X'_N-\partial^-; \mathbb{Z}) \rightarrow \mathrm H_1(X'_N-\partial^-,\partial^+;\mathbb{Z}) \xrightarrow{\delta_N} \mathbb{Z}[\partial^+]^0 \rightarrow 0 \] where $\delta_N$ is the boundary map. Similarly we have a dual exact sequence: \[ 0 \rightarrow \mathbb{Z}[\partial^-]^0\xrightarrow{\delta_N^*}\mathrm H_1(X'_N-\partial^-,\partial^+;\mathbb{Z}) \rightarrow \mathrm H_1(X'_N,\partial^+;\mathbb{Z}) \rightarrow 0 \] where $\delta_N^*$ is the dual boundary map. It induces \[ 0 \rightarrow \mathbb{Z}[\partial^-]^0\xrightarrow{\delta_N^*}\mathrm H_1(X'_N-\partial^-;\mathbb{Z}) \rightarrow \mathrm H_1(X'_N;\mathbb{Z}) \rightarrow 0. \] Hence $\mathrm H_1(X'_N;\mathbb{Z}) $ can be described as a subquotient of the group $\mathrm H_1(X'_N-\partial^-,\partial^+;\mathbb{Z})$. We will make this explicit by spelling out the maps $\delta_N$ and $\delta_N^*$.
The sets of cusps of $X'_N$ lying above $\infty$, $0$ and $1$ are respectively $\Phi_N'\backslash \Gamma(2)/A^{\mathbb{Z}}$, $\Phi_N' \backslash \Gamma(2)/B^{\mathbb{Z}}$ and $\Phi_N' \backslash \Gamma(2)/(AB^{-1})^{\mathbb{Z}} $. All three sets can be understood as follows.
\begin{prop} \label{bijections} We have three bijective maps as follows: \[ \Phi_N'\backslash \Gamma(2)/A^{\mathbb{Z}} \rightarrow (\mathbb{Z}/N\mathbb{Z})\times (\mathbb{Z}/N'\mathbb{Z}) \] given by $x^{a} z^cy^{b}\mapsto (b,c+ab)$, \[ \Phi_N' \backslash \Gamma(2)/B^{\mathbb{Z}} \rightarrow (\mathbb{Z}/N\mathbb{Z})\times (\mathbb{Z}/N'\mathbb{Z}) \] given by $x^a z^c y^{b}\mapsto (a,c)$, and \[ \Phi_N' \backslash \Gamma(2)/(AB^{-1})^{\mathbb{Z}} \rightarrow (\mathbb{Z}/N\mathbb{Z})\times (\mathbb{Z}/N'\mathbb{Z}) \] given by $x^a z^c y^{b}\mapsto (a+b,c-b(b+1)/2)$. \end{prop} \begin{proof} The first two identifications are straightforward. We establish the third one. Let $k$ be an integer. One finds by induction on $k$, $x^a z^c y^{b}(xy^{-1})^{k}=x^{a+k}z^{c-kb+k(k-1)/2}y^{b-k}$. Since $(a+k)+(b-k)=a+b$ and \[ c-kb+k(k-1)/2-(b-k)(b-k+1)/2=c-b(b+1)/2, \] the map passes indeed to the quotient $\Phi_N' \backslash \Gamma(2)/(AB^{-1})^{\mathbb{Z}}$. It is surjective (take $b=0$). Since there are $NN'$ cusps above $1$, it is bijective.
\end{proof}
Denote by $j_\infty(b,c)$ the cusp $\Phi_N'A^{a}C^{c-ab}B^bA^{N\mathbb{Z}}$, for any $a\in \mathbb{Z}$, by $j_0(a,c)$ the cusp $\Phi_N'A^{a}C^{c}B^{b}B^{N\mathbb{Z}}$, for any $b\in \mathbb{Z}$, and $j_{1}(d,c)$ the cusp $\Phi_N'A^{a}C^{c-b(b+1)/2}B^{b}(AB^{-1})^{N\mathbb{Z}}$, for any $a$, $b\in \mathbb{Z}$ such that $a+b=d$. With these conventions we can express $\delta_{N}$.
\begin{prop} \label{boundary} Let $a$, $b$, $c\in\mathbb{Z}$. One has $\delta_N(i(a,b,c))=j_\infty(b,c-ab)-j_0(a,c)$. \end{prop} \begin{proof} The boundary of the modular symbol $\{A^{a}C^{c}B^{b}0, A^{a}C^{c}B^{b}\infty\}$ is $[\Phi'_{N}A^{a}C^{c}B^{b}A^{\mathbb{Z}}]-[\Phi'_{N}A^{a}C^{c}B^{b}B^{\mathbb{Z}}]$, which translates immediately into the claimed formula. \end{proof}
We use \cite[Proposition 5]{BanerjeeMerel} to determine $\delta_N^*$.
\begin{prop} \label{boundarydual} One has, for $d\in\mathbb{Z}/N\mathbb{Z}$ and $c\in\mathbb{Z}/N'\mathbb{Z}$, \[ \delta_N^*(j_{1}(d,c))=\sum_{a,b\in\mathbb{Z}/N\mathbb{Z}, a+b=d}i(a,b+1,c-b(b+1)/2)-i(a,b,c-b(b+1)/2). \] \end{prop} \begin{proof} We just need to translate the third statement \cite[Proposition 5]{BanerjeeMerel}. With the notations of that proposition, we have $w_{1}=N$. It remains to use the third bijection of proposition \ref{bijections} and the definition of $i$.
\end{proof} Let $S_{N}$ be the subgroup of $\mathbb{Z}[(\mathbb{Z}/N\mathbb{Z})^{2}\times\mathbb{Z}/N'\mathbb{Z}]$ formed by the elements of the form $\sum_{a,b,c}\lambda_{a,b,c}[a,b,c]$, satisfying, for every $(a,c)\in(\mathbb{Z}/N\mathbb{Z})\times (\mathbb{Z}/N'\mathbb{Z})$, the relation $\sum_{b\in\mathbb{Z}/N\mathbb{Z}}\lambda_{a,b,c}=0$ and, for every $(b,c)\in(\mathbb{Z}/N\mathbb{Z})\times (\mathbb{Z}/N'\mathbb{Z})$, the relation $\sum_{a\in\mathbb{Z}/N\mathbb{Z}}\lambda_{a,b,c+ab}=0$. By Proposition \ref{boundary}, its image by $i$ has boundary $0$.
Let $R_{N}$ be the subgroup of $\mathbb{Z}[(\mathbb{Z}/N\mathbb{Z})^{2}\times\mathbb{Z}/N'\mathbb{Z}]$ spanned by elements of the form \[ e_{c,d}=\sum_{a,b\in\mathbb{Z}/N\mathbb{Z}, a+b=d}\left([a,b+1,c-b(b+1)/2]-[a,b,c-b(b+1)/2]\right), \] for $(d,c)\in(\mathbb{Z}/N\mathbb{Z})\times (\mathbb{Z}/N'\mathbb{Z})$. By Proposition \ref{boundarydual}, it is a subgroup of $S_{N}$. Thus we get a presentation by generators and relations of the homology of $X'_{N}$.
\begin{cor} The map $i$ produces an exact sequence \[ 0\rightarrow R_{N}\rightarrow S_{N}\rightarrow\mathrm H_1(X'_N;\mathbb{Z}) \rightarrow 0. \]
\end{cor}
\begin{remark} By exchanging the roles of $\partial^{+}$ and $\partial^{-}$, it is possible to give a dual presentation of $\mathrm H_1(X'_N;\mathbb{Z}) $. We leave this to the reader. \end{remark}
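As an elementary consistency check of the corollary (added here only for illustration), one can compute $\mathrm{rank}\,S_{N}-\mathrm{rank}\,R_{N}$ for a small value of $N$ and compare it with $2g$, where $g$ is the genus of $X'_{N}$. The following minimal sympy sketch does this for $N=N'=3$; the encoding of the indices is ours, and the expected value is $2$, consistent with the fact, used later in the paper, that $X'_{3}$ has genus $1$.
\begin{verbatim}
from sympy import Matrix

N = 3                                    # odd, so N' = N
cells = [(a, b, c) for a in range(N) for b in range(N) for c in range(N)]
idx = {t: k for k, t in enumerate(cells)}

rows = []                                # linear conditions defining S_N
for a in range(N):
    for c in range(N):                   # sum over b of lambda_{a,b,c} = 0
        r = [0] * N**3
        for b in range(N):
            r[idx[(a, b, c)]] = 1
        rows.append(r)
for b in range(N):
    for c in range(N):                   # sum over a of lambda_{a,b,c+ab} = 0
        r = [0] * N**3
        for a in range(N):
            r[idx[(a, b, (c + a * b) % N)]] = 1
        rows.append(r)
rank_S = N**3 - Matrix(rows).rank()

gens = []                                # the generators e_{c,d} of R_N
for c in range(N):
    for d in range(N):
        r = [0] * N**3
        for a in range(N):
            b = (d - a) % N
            s = (c - b * (b + 1) // 2) % N
            r[idx[(a, (b + 1) % N, s)]] += 1
            r[idx[(a, b, s)]] -= 1
        gens.append(r)
rank_R = Matrix(gens).rank()

print(rank_S - rank_R)    # expected: 2, i.e. twice the genus of X'_3
\end{verbatim}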
\ \section{The Heisenberg covering and its models} \label{Fermat} In this section, we assume $N$ to be odd. Therefore $N'=N$. \subsection{Modular functions} Let $z\in\mathbb{H}$. For $q_2=\exp{(\pi i z)}$, consider the classical $\lambda$-function \cite{MR983619}: \[ \lambda(z)=16 q_2 \prod_{n \geq 1} \left( \frac{1+q_2^{2n}}{1+q_2^{2n-1}}\right)^8, 1-\lambda(z)=\prod_{n \geq 1} \left( \frac{1-q_2^{2n-1}}{1+q_2^{2n-1}}\right)^8. \] From the above expression, it is clear that $\lambda(1)=1$ and $(1-\lambda(1))=0$. The $N$-th roots \[ x:=\sqrt[N]{\lambda}, y:=\sqrt[N]{1-\lambda} \] define modular units for $\Phi_{N}$. We recover thus the familiar model of the Fermat curve.
Since the $\lambda$ function identifies $\sP^1-\{0,1,\infty\}$ to $Y(2)$, a covering of $\sP^1-\{0,1,\infty\}$ can be understood as a covering of $Y(2)$, {\it i.e.} a modular curve.
\subsection{Reminder on Fermat curves} The $N$-th Fermat curve $F_{N}$ is given by the projective model: \[ X^N+Y^N=Z^N. \] Fermat curves and their points at infinity (cusps) are studied extensively by Rohrlich \cite{MR2626317}, \cite{MR0441978}, V\'elu \cite{MR582434} and Posingies~\cite{Posingies}. In particular, these authors consider the map \[ \beta_N:F_N \rightarrow \sP^1 \] given by $(X:Y:Z) \rightarrow (X^N:Z^N)$. The map $\beta_N$ is of degree $N^{2}$. It is ramified only above the points $0,1,\infty$. The corresponding ramification points are given by $a_j=(0:\zeta^j:1)$, $b_j=(\zeta^j:0:1)$, $c_j=(\epsilon\zeta^j:1:0)$, for $j\in\mathbb{Z}/N\mathbb{Z}$.
Recall that $\zeta$ is a primitive $N$-th root of unity and $\epsilon$ is a square root of $\zeta$. Each of the above points has ramification index $N$ over $\sP^1$. For all $j\in\mathbb{Z}/N\mathbb{Z}$, the cusps $a_j$, $b_j$, $c_j$ are all defined over the cyclotomic field $\mathbb{Q}(\mu_N)$. Among them, only $a_0$, $b_0$ and $c_0$ are defined over $\mathbb{Q}$.
\subsection{The cuspidal subgroup of the Fermat curve}
The divisors of the following modular functions are given by: \[ \mathrm{div}(x-\zeta^j)=N b_j-\sum_i c_i, \quad \mathrm{div}(y-\zeta^j)=N a_j-\sum_i c_i, \quad \mathrm{div}(x-\epsilon\zeta^j y)=N c_j-\sum_i c_i. \]
Rohrlich \cite{MR0441978} has determined the structure of the cuspidal group of the Jacobian of $F_{N}$ (see also V\'elu's alternative proof and description \cite{MR582434} ). Since every cuspidal divisor on $F_{N}$ is annihilated by $N$ in the Jacobian, the cuspidal group is a quotient of $\mathbb{Z}/N\mathbb{Z}[\partial_{\Phi_{N}}]^{0}$. The additional relations are given as follows. Recall that $N$ is odd.
\begin{thm}[Rohrlich ~\cite{MR0441978}] The group $\mathcal{P}$ of principal divisors is spanned by the following set $$ \{\sum_{i=0}^{N-1}[a_{i}]-[P],\sum_{i=0}^{N-1}[b_{i}]-[P],\sum_{i=0}^{N-1}[c_{i}]-[P], $$ $$ \sum_{i=0}^{N-1}i([a_{i}]-[b_{i}]),\sum_{i=0}^{N-1}i([a_{i}]-[c_{i}]),\sum_{i=0}^{N-1}i^{2}([a_{i}]+[b_{i}]+[c_{i}]-3[P])\}, $$ where $P$ is any cusp of $F_{N}$. Thus the cuspidal subgroup of the Jacobian of $F_N$ is a free $\mathbb{Z}/N\mathbb{Z}$-module of rank $3N-7$.
\end{thm}
We set \[ D_{A}=\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}[a_{i}], \] (resp. $D_{B}=\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}[b_{i}]$, resp. $D_{C}=\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}[c_{i}]$) and \[ f_{A}=\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{\{i\}}. \] \begin{cor} The class of $D_A$ is of order $N$ in the cuspidal subgroup of the Jacobian of $F_N$. Moreover it is congruent to $D_B$ and $D_C$ modulo $\mathcal{P}$ . \end{cor} \begin{proof} Indeed the divisor of $f_A$ is $ND_A$. By Rohrlich's theorem, $D_A$ is of order $N$. The congruence of $D_A$, $D_B$ and $D_C$ modulo $\mathcal{P}$ follows from Rohrlich's relations. \end{proof}
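For the reader's convenience, here is the divisor computation behind the first claim of the proof; it only uses the divisors listed above and the fact that $\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}=0$ for $N$ odd:
\[
\mathrm{div}(f_{A})=\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}\,\mathrm{div}(-y+\zeta^{i})
=\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}\Big(N[a_{i}]-\sum_{j}[c_{j}]\Big)
=N\sum_{i\in\mathbb{Z}/N\mathbb{Z}}\{i\}[a_{i}]=ND_{A}.
\]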
\subsection{The covering as a function field extension} By Rohrlich's theorem, the order of $D_{A}$ in the cuspidal group is $N$, therefore an $N$-th root of $f_{A}$ defines a cyclic covering $G\rightarrow F_{N}$ (a provisional notation, since $G$ will be shown to be equal to $F_N'$) of degree $N$. Such a morphism is indeed unramified, since the divisor of $f_{A}$ belongs to $N\mathbb{Z}[\partial_{\Phi_{N}}]^{0}$. Since $D_{A}-D_{B}$ is a principal divisor, exchanging the roles of $X$ and $Y$ would give the same covering. A similar reasoning with respect to $D_{C}$ holds. The cyclic covering $G\rightarrow F_{N}$ translates into a covering of Riemann surfaces $W\rightarrow X_{N}$.
Consider now the third term $[\Gamma(2),[\Gamma(2),\Gamma(2)]]$ in the lower central series of $\Gamma(2)$. It is not a subgroup of finite index of $\Gamma(2)$, but one can still consider the Riemann surface $X_{[\Gamma(2),[\Gamma(2),\Gamma(2)]]}$.
\begin{prop} The covering $W\rightarrow X_{N}$ factorizes through $X_{[\Gamma(2),[\Gamma(2),\Gamma(2)]]}$. \end{prop} \begin{proof}
Let $U\in \Gamma(2)$ and $V\in [\Gamma(2),\Gamma(2)]$. Let $g$ be an $N$-th root of $f_{A}$. One has to prove that $g$ is invariant under $[U,V]$, that is $g_{|UV}=g_{|VU}$. Note first that $y_{|U}=\zeta^{r}y$, with $r\in {\mathbb{Z}}$. One has $$
g^{N}_{|U}=\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-\zeta^{r}y+\zeta^{i})^{\{i\}}=\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i-r})^{\{i\}}=\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{\{i+r\}}, $$ where the second equality uses $\sum_{i}\{i\}=0$. Write $\{i+r\}=\{i\}+r+t_{i}$, with $t_{i}\in N\mathbb{Z}$. Thus we get $$
g^{N}_{|U}=g^N\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{r}\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{t_{i}}. $$ The last factor is obviously an $N$-th power, since every exponent $t_{i}$ is a multiple of $N$.
By the description of Rohrlich of the cuspidal subgroup of $F_{N}$, the factor $\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{r}$ is an $N$-th power in the function field of $F_{N}$. So there exists a function $h$ on $F_{N}$, and an integer $s$ such that $g_{|U}=g\zeta^{s}h$. Note that $h_{|V}=h$ and $g_{|V}=\zeta^{q}g$, with $q\in\mathbb{Z}$. Therefore, one has: $$
g_{|UV}=g_{|V}\zeta^{s}h_{|V}=\zeta^{s+q}gh=\zeta^{q}g_{|U}=g_{|VU}. $$
\end{proof}
\begin{prop} Any covering of degree $N$ of the Fermat modular curve $X_{N}$ that factors through $X_{[\Gamma(2),[\Gamma(2),\Gamma(2)]]}$, factors through the Heisenberg covering. \end{prop} \begin{proof} Such a covering corresponds to a cyclic quotient of order $N$ of $\Phi_{N}$. Denote by $\Gamma$ the corresponding cocyclic subgroup of $\Phi_{N}$. Since the covering is unramified at the cusps, $\Gamma$ contains the matrices $A^{N}$ and $B^{N}$. Recall that $\Phi_{N}$ is generated by $A^{N}$, $B^{N}$ and the commutator subgroup of $\Gamma(2)$. Since the covering factors through $X_{[\Gamma(2),[\Gamma(2),\Gamma(2)]]}$, the group $\Gamma$ contains $[\Gamma(2),[\Gamma(2),\Gamma(2)]]$. In particular, $\Gamma$ contains $[A,C]$ and $[B,C]$. Thus the image of $C$ in $\Phi_{N}/\Gamma$ is of order $N$. Consequently, $\Gamma$ is the group generated by $[\Gamma(2),[\Gamma(2),\Gamma(2)]]$, $A^{N}$, $B^{N}$, $C^{N}$.
\end{proof}
\begin{cor} Let $\zeta$ be a primitive $N$-th root of unity in $\mathbb{Q}(\mu_N)$. The function \[ f_{A}'=\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{\{i\}} \] admits an $N$-th root in the function field of $F'_N$.
\end{cor} \begin{proof} Indeed, the covering of $F_{N}$ defined by an $N$-th root of $f_{A}'$ is cyclic of order $N$ and factorizes through $X_{[\Gamma(2),[\Gamma(2),\Gamma(2)]]}$. Consequently, it is $F'_{N}$. \end{proof}
Let $K$ be the function field over the complex numbers of the curve given in the introduction. In inhomogeneous form, it is generated by the variables $X$, $Y$ and $T_{\zeta}$, for every primitive $N$-th root of unity $\zeta$, with the relations $$ X^{N}+Y^{N}=1 $$ and, for every primitive $N$-th root of unity $\zeta$ (in ${\bf Q}(\mu_{N})$), $$ \prod_{j=1}^{(N-1)/2}(Y-\zeta^{-j})^{j}T_{\zeta}^{N}=\prod_{j=1}^{(N-1)/2}(Y-\zeta^{j})^{j}. $$
\begin{cor} The function field of $F'_{N}$ over $\C$ is isomorphic to $K$. \end{cor} \begin{proof} Indeed, the function field of $F'_{N}$ is the subfield of $K$ generated by an $N$-th root of $f_{A}$ over the function field of $F_{N}$. Let $\zeta'$ be a primitive $N$-th root of unity. By the preceding corollary, $T_{\zeta'}$ belongs to $K$. Thus all of $K$ is contained in the function field of $F'_{N}$. \end{proof} Thus we have shown that the curve $F'_N$ extends the Riemann surface $X'_N$. \begin{cor} The Riemann surfaces $X'_N$ and $F'_N({\C})$ are isomorphic. \end{cor}
\subsection{The Heisenberg covering}
Denote by $G$ the Galois group of the field extension $K|\mathbb{Q}(X^N)$. It sits in an exact sequence $$ 1\rightarrow H_N\rightarrow G\rightarrow (\mathbb{Z}/N\mathbb{Z})^\times\rightarrow 1 $$ by the transitive action of $G$ on $N$-th roots of unity, and the fact that the Heisenberg covering is defined over $\mathbb{Q}(\zeta)$. Recall that for $i\in (\mathbb{Z}/N\mathbb{Z})$, $\{i\}$ denotes the representative of $i$ in $\{-(N-1)/2,...., (N-1)/2\}$.
\begin{prop} For $\sigma\in G$, there exists $u$, $v$, $s\in (\mathbb{Z}/N\mathbb{Z})$ and $r\in (\mathbb{Z}/N\mathbb{Z})^\times$ such that $\sigma(X)=\zeta^uX$, $\sigma(Y)=\zeta^vY$, $\sigma(\zeta)=\zeta^r$. For $\rho$ a representative of $r$ in $\mathbb{Z}$, $$ \sigma(T^\rho)=\zeta^sTX^{v}\prod_{i\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^i)^{(\rho\{i/\rho\}-\{i\})/N}. $$ Furthermore, $(u,v,s,r)\in (\mathbb{Z}/N\mathbb{Z})^{3}\times (\mathbb{Z}/N\mathbb{Z})^\times$ characterizes $\sigma$. \end{prop} \begin{proof} The first three identities are evident. A simple calculation establishes the last one. Indeed $$ \sigma(T^\rho)^N=\sigma(T^N)^\rho=\sigma(\prod_{i\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^i)^{\{i\}})^\rho=\prod_{i\in (\mathbb{Z}/N\mathbb{Z})}(-\zeta^vY+\zeta^{\rho i})^{\{i\}\rho}. $$ By factoring $T^N$ and $\prod_i \zeta^{v\{i\}}$, and replacing the variable $i$ by $j=\rho i-v$, one gets $$ \sigma(T^\rho)^N=T^N\prod_i\zeta^{v\{i\}}\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^{j})^{\rho\{(j+v)/\rho\}}\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^j)^{-\{j\}}. $$ We use $\prod_i \zeta^{v\{i\}}=1$ and we get $$ \sigma(T^\rho)^N=T^N\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^{j})^{\rho\{(j+v)/\rho\}-\{j\}-v}\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^j)^{v}. $$ Using the identity $X^N=(1-Y^N)=\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^{j})$, we get $$ \sigma(T^\rho)^N=T^NX^{Nv}\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^{j})^{\rho\{(j+v)/\rho\}-\{j\}-v}. $$ The desired formula follows by taking $N$-th roots. \end{proof} With the notation of the proposition, we write $\sigma=\sigma_{u,v,r,s}$. Murty and Ramakrishnan note that Deligne showed that $F'_N$ can be defined over $\mathbb{Q}$, without giving a reference. We spell out this assertion.
\begin{prop} The curve $F'_N$ can be defined over $\mathbb{Q}$. More precisely, the surjective map $G\rightarrow (\mathbb{Z}/N\mathbb{Z})^\times$ admits the map $r\mapsto \sigma_{0,0,r,0}$ as a section. Denote by $S$ the corresponding subgroup of $G$. It maps isomorphically onto $(\mathbb{Z}/N\mathbb{Z})^\times$ and acts faithfully on the $N$-th roots of unity. Consequently, the field of invariants under $S$ in $K$ is the function field of a curve over $\mathbb{Q}$, and defines $F'_N$ over $\mathbb{Q}$. The Heisenberg covering $F'_N\rightarrow F_N$ is then also defined over $\mathbb{Q}$. \end{prop} \begin{proof} Group-theoretic arguments about the structure of $G$ as an extension easily imply the existence of such a section. We will show that our explicit map is indeed a section. Let $r$, $r'\in (\mathbb{Z}/N\mathbb{Z})^\times$ and $\rho$ and $\rho'$ be representatives of $r$ and $r'$ respectively in $\mathbb{Z}$. We need to check that $\sigma_{0,0,r',0}\sigma_{0,0,r,0}=\sigma_{0,0,rr',0}$, which needs to be verified only by applying both sides to $T$, or equivalently to $T^{\rho\rho'}$, and to $\zeta$. This is trivial for $\zeta$. Here is the computation on $T^{\rho\rho'}$. We first simplify the formula of the previous proposition $$ \sigma_{0,0,r,0}(T^{\rho\rho'})=T^{\rho'}\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^j)^{(\rho\{j/\rho\}-\{j\})\rho'/N}. $$ Thus one has $$ \sigma_{0,0,r',0}\sigma_{0,0,r,0}(T^{\rho\rho'})=T\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^j)^{(\rho'\{j/\rho'\}-\{j\})/N}\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^{\rho'j})^{(\rho\{j/\rho\}-\{j\})\rho'/N}. $$ A change of variable in the second product of the right-hand side gives \[ \sigma_{0,0,r',0}\sigma_{0,0,r,0}(T^{\rho\rho'})=T\prod_{j\in (\mathbb{Z}/N\mathbb{Z})}(-Y+\zeta^j)^{(\rho\rho'\{j/\rho\rho'\}-\{j\})/N}=\sigma_{0,0,rr',0}(T^{\rho\rho'}). \]
\end{proof} \subsection{Automorphisms} The group $(\mathbb{Z}/N\mathbb{Z})^{2}\simeq\Gamma(2)/\Phi_{N}$ acts on $F_{N}$ : $(i,j)\in(\mathbb{Z}/N\mathbb{Z})^{2}$ acts by the rule $(x,y)\mapsto(\zeta^{i}x,\zeta^{j}y)$. Such an action is defined on $\mathbb{Q}(\mu_{N})$. It lifts to an action of $H_{\mathbb{Z}/N\mathbb{Z}}\simeq\Gamma(2)/\Phi_{N}'$ on $F_{N}'$.
\begin{lemma} The action of $H_{\mathbb{Z}/N\mathbb{Z}}$ on $F_{N}'$ is defined over $\mathbb{Q}(\mu_{N})$. \end{lemma} \begin{proof} For $U\in H_{\mathbb{Z}/N\mathbb{Z}}$, one needs to check that the action of $U$ on a $\mathbb{Q}(\zeta)$-rational function is still $\mathbb{Q}(\zeta)$-rational. It suffices to verify this for $g$, an $N$-th root of $f_{A}$. We repeat a previous calculation and get $$
g^{N}_{|U}=g^N\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{r}\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{t_{i}}, $$
with $t_{i}\in N\mathbb{Z}$. Both factors $\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{r}$ and $\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-y+\zeta^{i})^{t_{i}}$ are $N$-th powers (of, say, $g_{1}$ and $g_{2}$) in the function field of $F_{N}$ over $\mathbb{Q}(\zeta)$. We thus find that $g_{|U}$ is equal to $\zeta^{p}gg_{1}g_{2}$, which belongs to the function field of $F'_{N}$ over $\mathbb{Q}(\zeta)$.
\end{proof}
\begin{remark} We do not give an explicit algebraic model of the curve $X''_{N}$. But it can be obtained by taking $N''$-th roots of the functions whose divisors are $\sum_{i=0}^{N-1}i^{2}([a_{i}]-[P])$, $\sum_{i=0}^{N-1}i^{2}([b_{i}]-[P])$ and $\sum_{i=0}^{N-1}i^{2}([c_{i}]-[P])$.
\end{remark} \subsection{Regular integral models of Heisenberg curves} We relate the Heisenberg covering to the model given in the introduction. Note that the inhomogeneous version of the projective model is given by: $$ X^{N}+Y^{N}=1 $$ and, for every primitive $N$-th root of unity $\zeta$ (in a fixed cyclotomic extension of ${\bf Q}$) $$ \prod_{j=1}^{(N-1)/2}(Y-\zeta^{-j})^{j}T_{\zeta}^{N}=\prod_{j=1}^{(N-1)/2}(Y-\zeta^{j})^{j}. $$ We now prove Theorem~\ref{Modelthm} (in the spirit of {\it e.g.} \cite[Proposition 1.1.13]{Curilla}). \begin{proof} We have indeed a scheme of relative dimension $1$ over ${\rm Spec}({\mathbb{Z}}[\mu_{N},1/N])$. We just have to establish the smoothness. We use the Jacobian criterion ({\it e.g.} \cite[p. 130, Theorem 2.19]{MR1917232}).
Since the Heisenberg covering is obtained by taking the $N$-th root of a function which does not vanish outside the zero locus of $XY$, all points away from $X=0$ and $Y=0$ are regular in all characteristics prime to $N$. It remains to establish the regularity of a point $P_{\zeta}$ of the form $(X,Y)=(0,\zeta)$ or $(X,Y)=(\zeta,0)$ with $\zeta$ a primitive $N$-th root of unity. Suppose $(X,Y)=(0,\zeta)$. For points of this form, one has $T_{\zeta}=0$. We set \[ F_{\zeta}=\prod_{j=1}^{(N-1)/2}(Y-\zeta^{-j})^{j}T_{\zeta}^{N}-\prod_{j=1}^{(N-1)/2}(Y-\zeta^{j})^{j} \] and compute the partial derivative at $P_{\zeta}$. One obtains \[ \frac{\partial F_{\zeta}}{\partial Y}(P_{\zeta})=- \prod_{j=2}^{(N-1)/2}(\zeta-\zeta^{j})^{j}, \] which is a cyclotomic unit that belongs to ${\bf Z}[\mu_{N},1/N]^{\times}$. Thus the jacobian matrix is non-zero over any fiber in characteristic prime to $N$. This is sufficient to establish the regularity of $P_{\zeta}$ over ${\rm Spec}({\bf Z}[\mu_{N},1/N])$ since the structural morphism is of relative dimension $1$. This reasoning applies to the zero locus of $Y$, by exchanging the roles of $X$ and $Y$. \end{proof}
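The key computation in this proof, the value of $\partial F_{\zeta}/\partial Y$ at $P_{\zeta}$, is easily checked symbolically. The following minimal sketch (purely illustrative, assuming only sympy) does so for $N=5$; other small odd $N$ can be tested the same way.
\begin{verbatim}
from sympy import symbols, prod, diff, simplify, exp, I, pi

Y, T = symbols('Y T')
N = 5
zeta = exp(2*I*pi/N)
M = (N - 1)//2
F = prod((Y - zeta**(-j))**j for j in range(1, M + 1)) * T**N \
    - prod((Y - zeta**j)**j for j in range(1, M + 1))
dF = diff(F, Y).subs({Y: zeta, T: 0})             # partial derivative at P_zeta
expected = -prod((zeta - zeta**j)**j for j in range(2, M + 1))
print(simplify(dF - expected))                    # expected output: 0
\end{verbatim}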
\subsection{Cusps} The term cusp refers to the cusps of the modular curve $X_\Gamma$ attached to a subgroup $\Gamma$ of $\Gamma(2)$. These points are not intrinsic to the corresponding algebraic curves. In the cases of interest to us, we have a morphism $X_N\rightarrow X(2)$, given by the function $x^N$. Thus the cusps of $F_N$ are the $3N$ points above the points $0$, $1$, and $\infty$ described above.
Since the morphism $F'_N\rightarrow F_N$ is unramified, the modular curve $F'_N$ possesses $3N^2$ cusps.
\begin{prop} If $N$ is prime to $3$, the cusps of $F'_N$ are defined over $\mathbb{Q}(\zeta)$. If $3$ divides $N$, the cusps of $F'_N$ are defined over the cyclotomic field generated by the $3N$-th roots of unity. \end{prop} \begin{proof} Let $a$ be a cusp of $F'_{N}$ above $a_{0}$. It is defined over the field generated by $g(a)$ and $\mathbb{Q}(\zeta)$, where $g$ is an $N$-th root of $f_{A}$. One has $$ f_{A}(a_{0})=\prod_{i\in\mathbb{Z}/N\mathbb{Z}}(-1+\zeta^{i})^{\{i\}}=\prod_{i=1}^{(N-1)/2}\frac{(-1+\zeta^{i})^{i}}{(-1+\zeta^{-i})^{i}}=\prod_{i=1}^{(N-1)/2}(-\zeta^{i})^{i}. $$ Thus we get $$ f_{A}(a_{0})=(-1)^{\sum_{i=1}^{(N-1)/2}i}\zeta^{\sum_{i=1}^{(N-1)/2}i^2}=(-1)^{(N^2-1)/8}\zeta^{(N-1)(N+1)N/24}, $$ which is a $6$-th root of unity.
Suppose $N$ is prime to $3$. Then $g(a)$ is an $N$-th root of unity, up to sign. Since the group $H_{N}$ acts transitively on the cusps above $\infty$, and its action is $\mathbb{Q}(\zeta)$-rational, we deduce that all the cusps above $\infty$ are defined over $\mathbb{Q}(\zeta)$. A similar reasoning applies to the cusps above $0$, and above $1$.
A similar reasoning applies when $3$ divides $N$. Indeed $g(a)$ is a $3N$-th root of unity, up to sign.
\end{proof}
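The closed form obtained for $f_{A}(a_{0})$ in the proof can be confirmed numerically. The short check below (purely illustrative, in plain Python) evaluates $\prod_{i=1}^{(N-1)/2}\big((-1+\zeta^{i})/(-1+\zeta^{-i})\big)^{i}$ for several odd values of $N$, compares it with $(-1)^{(N^{2}-1)/8}\zeta^{N(N^{2}-1)/24}$, and verifies that the result is a sixth root of unity.
\begin{verbatim}
import cmath

for N in (3, 5, 7, 9, 11):                       # odd values of N
    zeta = cmath.exp(2j * cmath.pi / N)
    lhs = 1 + 0j
    for i in range(1, (N - 1)//2 + 1):
        lhs *= ((-1 + zeta**i) / (-1 + zeta**(-i)))**i
    rhs = (-1)**((N*N - 1)//8) * zeta**((N*(N*N - 1))//24)
    assert abs(lhs - rhs) < 1e-9 and abs(lhs**6 - 1) < 1e-9
print("closed form and sixth-root-of-unity property confirmed")
\end{verbatim}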
The cusps of $F'_N$ above the cusp $\infty$ (resp. $0$, resp. $1$) of $X(2)$ coincide with the classes $\Gamma\backslash\Gamma(2)\infty$ (resp. $\Gamma\backslash\Gamma(2)0$, resp. $\Gamma\backslash\Gamma(2)1$), which in turn can be identified with the double classes $\Gamma\backslash\Gamma(2)/A^{\mathbb{Z}}$ (resp. $\Gamma\backslash\Gamma(2)/B^{\mathbb{Z}}$, resp. $\Gamma\backslash\Gamma(2)/(AB^{-1})^{\mathbb{Z}}$).
\subsection{About the Manin-Drinfeld principle for $F'_3$} Recall that $g_{3}=1$ and observe that $F'_{3}$ has 27 cusps. Fix one cusp $P_{0}$ of $F'_{3}$, which becomes thus an elliptic curve $(F'_{3}, P_{0})$. Since a cyclic group of order $3$ acts on $F'_{3}$ and stabilizes $P_{0}$, the elliptic curve $(F'_{3}, P_{0})$ admits an automorphism of order $3$. Thus the $j$-invariant of $(F'_{3}, P_{0})$ is $0$. \begin{prop} Divisors supported on the cusps of $F'_{3}$ above $\infty$ (resp. $0$, resp. $1$) are of order dividing $3$ in the Jacobian of $F'_{3}$. Furthermore, cuspidal divisors of degree $0$ are torsion in the Jacobian of $F'_3$, and of order dividing $9$. \end{prop} \begin{proof} Let $X$ be a compact connected Riemann surface of genus $1$. Let $J$ be the Jacobian of $X$. Recall the exact sequence $$ 0\rightarrow J({\bf C})\rightarrow {\rm Aut}(X)\rightarrow {\rm Aut}(J)\rightarrow 0, $$ where ${\rm Aut}$ denotes the automorphisms over ${\bf C}$. The first map associates to the class of a divisor $D$ the translation by $D$ in $X$.
For $X=F'_{3}$, the group ${\rm Aut}(J)$ is cyclic of order $6$. One gets a group homomorphism $H_{{\bf Z}/3{\bf Z}}\rightarrow {\rm Aut}(J)$, whose kernel is of order $9$: the image is a quotient of $H_{{\bf Z}/3{\bf Z}}$, hence of exponent dividing $3$, inside a cyclic group of order $6$, so it has order $1$ or $3$; and it cannot be trivial, since the non-commutative group $H_{{\bf Z}/3{\bf Z}}$ does not embed into the abelian group $J({\bf C})$. The kernel, being of order $9$ and exponent $3$, is therefore isomorphic to $({\bf Z}/3{\bf Z})^{2}$. Since this kernel identifies with a subgroup of the one-dimensional complex torus $J({\bf C})$, the latter subgroup is $J({\bf C})[3]$. We have proved that the orbit of any cusp $Q$ under $H_{{\bf Z}/3{\bf Z}}$ contains $Q+J({\bf C})[3]$. But these sets are both of cardinality $9$, and are therefore equal. Since $H_{{\bf Z}/3{\bf Z}}$ acts transitively on the cusps above $\infty$ (resp. $0$, resp. $1$), the first statement of the proposition is proved.
About the second statement, it is sufficient to prove this for a divisor of the form $(\alpha)-(\beta)$ where $\alpha$ and $\beta$ are cusps of $F'_3$ not above the same point of $\{0,1,\infty\}$. Without loss of generality, say they are above $0$ and $\infty$ respectively. Let $a$ and $b$ be the cusps of $F_3$ below $\alpha$ and $\beta$ respectively.
We have $$ 3((\alpha)-(\beta))=(3(\alpha)-\sum_{\alpha'}(\alpha'))+(\sum_{\alpha'}(\alpha')-\sum_{\beta'}(\beta'))+(\sum_{\beta'}(\beta')-3(\beta)), $$ where $\alpha'$ (resp. $\beta'$) runs through the cusps of $F'_3$ above $a$ (resp. $b$). By the first statement of the proposition and the torsion properties of the cuspidal subgroup of the $F_3$, each of the three terms of the right-hand side is of order dividing $3$. \end{proof}
Recall that the dessin for $X'_{N}$ is a graph with the following additional structure: the vertices are bicolored (white and black) and the set of edges attached to any given vertex is endowed with a cyclic ordering (a transitive action of $\mathbb{Z}$). The vertices are the cusps of $X'_{N}$ above $0$ and $\infty$. The edges form the coset space $\Phi'_{N}\backslash\Gamma(2)\simeq H_{\mathbb{Z}/N\mathbb{Z}}$, which is in bijection with $(\mathbb{Z}/N\mathbb{Z})^{3}$. The edge associated with $\Phi'_{N}g$ has extremities $\Phi'_{N}g0$ and $\Phi'_{N}g\infty$. The cyclic ordering of the edges attached to the vertices $\Phi'_{N}g0$ (resp. $\Phi'_{N}g\infty$) is given by the action of $B$ (resp. $A^{-1}$). To sum up, the dessin can be drawn on $X'_{N}$.
To be more concrete, the set of edges is in bijection with $\mathbb{Z}/N\mathbb{Z}\times\mathbb{Z}/N'\mathbb{Z}\times\mathbb{Z}/N\mathbb{Z}$, via the map $\Phi'_{N}A^{a}C^{c}B^{b}\mapsto (a,c,b)$. The edge thus labeled $(a,c,b)$ is connected to the edge labeled $(a,c,b+1)$ via a black vertex (cusp above $0$) and the edge labeled $(a,c,b)$ is connected to the edge labeled $(a-1,c-ab,b)$ via a white vertex (cusp above $\infty$). In the figure below (drawn for $N=3$), the line segments represent the arcs on $X'_3$ above the geodesic arc from $0$ to $\infty$ in the upper half-plane.
We illustrate all this for $N=3$. In that case, the genus of $X'_{3}$ is equal to $1$. According to these rules, the drawing (dessin) for $X_3'$ is given as follows. \begin{center} \begin{tikzpicture}[mynode/.style={font=\color{red}\sffamily,circle,inner sep=1pt,minimum size=.5cm}, scale=0.6] \node[circle,draw,radius=1em][mynode=black,fill=black] (v1) at (1,-8,0) {}; \node[circle,draw,radius=1em][mynode=black,fill=black] (v2) at (-8,-3,0) { }; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v3) at (1,-3,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v4) at (10,-3,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v5) at (-1.5,-1,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v6) at (3.5,-1,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v7) at (-4,1,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v8) at (1,1,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v9) at (6,1,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v10) at (-4,3,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v11) at (1,3,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v12) at (6,3,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v13) at (-1.5,4.5,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v14) at (3.5,4.5,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v15) at (-8,6,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=black] (v16) at (1,6,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v17) at (10,6,0) {}; \node[circle,draw,radius=.01em][mynode=black,fill=white] (v18) at (1,12,0) {}; \draw [black,thick] (v15) to[out=70,in=70, distance=17cm ] (v4); \draw [black,thick] (v17) to[out=-70,in=-70, distance=17cm ] (v2); \draw[black,very thick] (v2)--(v3); \draw[black,very thick] (v4)--(v3); \draw[black,very thick] (v15)--(v16); \draw[black,very thick] (v17)--(v16); \draw[black,very thick] (v10)--(v18); \draw[black,very thick] (v10)--(v7); \draw[black,very thick] (v16)--(v18); \draw[black,very thick] (v12)--(v18); \draw[black,very thick] (v12)--(v9); \draw[black,very thick] (v1)--(v7); \draw[black,very thick] (v1)--(v3); \draw[black,very thick] (v1)--(v9); \draw[black,very thick] (v5)--(v7); \draw[black,very thick] (v5)--(v8); \draw[black,very thick] (v6)--(v8); \draw[black,very thick] (v11)--(v8); \draw[black,very thick] (v13)--(v10); \draw[black,very thick] (v13)--(v11); \draw[black,very thick] (v14)--(v11); \draw[black,very thick] (v14)--(v12); \draw[black,very thick] (v6)--(v9); \draw [black,thick] (v2) to[out=90,in=80, distance=6cm ] (v14); \draw [black,thick] (v15) to[out=-110,in=-70, distance=9cm ] (v6); \draw [black,thick] (v13) to[out=90,in=70, distance=11cm ] (v4); \draw [black,thick] (v17) to[out=-30,in=-30, distance=7cm ] (v5); \end{tikzpicture} \end{center}
As the group $\Phi_{3}$ is not a congruence subgroup \cite{MR1091611}, $\Phi'_{3}$ is {\it a fortiori} not a congruence subgroup. The latter fact can alternatively be derived from Wolfart's criterion and an examination of the dessin. Indeed, the width of each cusp is equal to $6$ and $[\Gamma(2):\Phi'_3]=27$. By Wolfart's criterion, to decide whether $\Phi'_3$ is a congruence subgroup it is enough to check whether $\Gamma(6) \subset \Phi'_3 \subset \Gamma(2)$. However, $[\Gamma(2):\Gamma(6)]=24$ is not divisible by the index $27$.
\subsection{About the failure of the Manin-Drinfeld principle for $F'_5$} Suppose $N=5$. We show that the Manin-Drinfeld theorem fails for the Heisenberg covering ${\mathcal{F}}'_5$ of ${\mathcal{F}}_5$. Consider the scheme ${\mathcal{C}}_5$ over ${\rm \Spec}({\bf Z}[\mu_{5},1/5])$ given by the system of equations $$ T_\zeta^5=\frac{(-Y+\zeta)(-Y+\zeta^2)^2}{(-Y+\zeta^{-1})(-Y+\zeta^{-2})^2}, $$ where $\zeta$ runs through the primitive $5$-th roots of unity in $\mathbb{Q}(\mu_{5})$. It is smooth, as can be shown by applying the Jacobian criterion. One has an obvious morphism of schemes ${\mathcal{F}}'_5\rightarrow {\mathcal{C}}_5$.
Denote by $C_{5}$ the generic fiber of ${\mathcal{C}}_5$. Over $\C$, $C_5$ identifies to a modular curve as follows. Consider the morphism $\bar\Gamma(2)\rightarrow H_{5,5,5}$ and the inverse image $\Gamma'$ of the subgroup of $H_{5,5,5}$ generated by $B$. Then $\Gamma'$ defines a corresponding modular curve isomorphic to the Riemann surface $C_5(\C)$.
The curve $C_{5}$ possesses $19$ cusps, given by the following planar coordinates $(Y,T_{\zeta})$ : $(0,\epsilon)$, $(\infty, \epsilon)$, $(1, -\delta)$, $(\zeta,0)$, $(\zeta^2,0)$, $(\zeta^{-1},\infty)$, $(\zeta^{-2},\infty)$, where $\epsilon$ and $\delta$ run through the $5$-th roots of unity. Set $T=T_{\zeta}$. The function fields of $F'_{5}$ and $C_{5}$ are ${\bf Q}(\zeta,X,Y,T)$ and ${\bf Q}(\zeta,Y,T)$ respectively.
The obvious morphism $\pi$ : $F'_5\rightarrow C_5$ sends the cusps of $F'_5$ to the cusps of $C_5$. We show that $C_5$ does not satisfy the Manin-Drinfeld principle. Since ${\mathcal{C}}_5$ is smooth, the Jacobian of $C_5$ extends to an abelian scheme ${\mathcal{J}}_5$ over ${\rm \Spec}({\bf Z}[\mu_5,1/5])$.
\begin{prop} There exists a divisor of infinite order supported on the cusps of $C_{5}$. \end{prop} \begin{proof} We suppose that all cuspidal divisors are torsion in $C_{5}$. Our proof is organized around the following calculation. One has \begin{align} \label{fivroot} T^5+1=\frac{(1-Y)(2Y^2+(2-2(\zeta^2+\zeta^{-2})-\zeta-\zeta^{-1})Y+2)}{(-Y+\zeta^{-1})(-Y+\zeta^{-2})^2}. \end{align} Let $y_1$ and $y_2$ be the roots of the polynomial $2Y^2+(2-2(\zeta^2+\zeta^{-2})-\zeta-\zeta^{-1})Y+2$. Let $\zeta_1$ be a $5$-th root of unity in ${\bf Z}[\mu_5]$. Consider the function $T+\zeta_1$, which divides $T^5+1$. The divisor of the function $T+\zeta_1$ is (in terms of planar coordinates for $(Y,T)$) : $(1,-\zeta_1)+(y_1,-\zeta_1)+(y_2,-\zeta_1)-D$, where $D=(\zeta^{-1},\infty)+2(\zeta^{-2},\infty)$. Apparently fortuitously, this divisor is cuspidal in the fibers at $11$ and at $2$ of $C_{5}$.
\begin{lemma} Let $\zeta_1$ and $\zeta_2$ be distinct primitive $5$-th roots of unity in $\mathbb{Z}[\mu_5]$. The divisors $3(1,\zeta_1)-3(1,\zeta_2)$ and $3(1,\zeta_1)-D$ are principal in any fiber above $11$ of $C_5$. \end{lemma} \begin{proof} In characteristic $11$, the polynomial $2Y^2+(2-2(\zeta^2+\zeta^{-2})-\zeta-\zeta^{-1})Y+2$ has the providential property of having a double root equal to $1$. Therefore the divisor of the function $T+\zeta_1$ over $\mathbb{F}_{11}$ is cuspidal and equal to $3(1,\zeta_1)-D$. It follows that the function $(T+\zeta_1)/(T+\zeta_2)$ has divisor $3(1,\zeta_1)-3(1,\zeta_2)$. \end{proof}
We return to the proof of the proposition. By our Manin-Drinfeld assumption in $C_{5}$, the divisor $3(1,\zeta_1)-D$ is torsion in the Jacobian of $C_{5}$.
It extends to a torsion point of ${\mathcal{J}}_5$, whose order is determined in any special fiber at a prime $\pi$ of residual characteristic $p$, provided $p-1>e$, where $e$ is the ramification index at $\pi$ of the extension ${\mathbb{Q}}(\mu_5)|{\mathbb{Q}}$ \cite{MR482230}. This applies for any prime except perhaps $p=2$ and $p=5$. The calculation for $p=11$ ensures that the divisors $3(1,\zeta_1)-3(1,\zeta_2)$ and $3(1,\zeta_1)-D$ are principal in $C_{5}$.
Therefore those divisors are principal in any special fiber of ${\mathcal{C}}_5$. Consider any fiber $\bar{C}$ of ${\mathcal{C}}_5$ above $2$. In that fiber, the function $T-\zeta=T+\zeta$ has divisor, in view of Equation~\ref{fivroot}, $(0,\zeta)+(1,\zeta)+(\infty,\zeta)-D_{\zeta}$, which is principal. Thus, by taking $\zeta_{1}=\zeta$, the divisor \[ (3(1,\zeta)-D_{\zeta})-((0,\zeta)+(1,\zeta)+(\infty,\zeta)-D_{\zeta})=2(1,\zeta)-(0,\zeta)-(\infty,\zeta) \] is principal in $\bar C$. Thus there exists $f:{\bar C}\rightarrow \sP^{1}$ of degree $2$. So $\bar{C}$ is hyperelliptic. The principality of the divisor $3(1,\zeta_1)-3(1,\zeta_2)$ ensures that there is a degree $3$ morphism $\bar C\rightarrow \sP^{1}$. By Castelnuovo-Severi type inequalities, this imposes that the genus of $\bar {C}$ is $\le2$. But the genus of $\bar C$ equals the genus of $C_{5}$; it is equal to $6$ and we have reached a contradiction.
\end{proof}
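After clearing the common denominator, Equation~\ref{fivroot} amounts to the polynomial identity $(-Y+\zeta)(-Y+\zeta^{2})^{2}+(-Y+\zeta^{-1})(-Y+\zeta^{-2})^{2}=(1-Y)\big(2Y^{2}+(2-2(\zeta^{2}+\zeta^{-2})-\zeta-\zeta^{-1})Y+2\big)$ in $\mathbb{Q}(\mu_{5})[Y]$. The following quick numerical sanity check (purely illustrative, not a proof) evaluates both cubics at four sample points, which determines them uniquely.
\begin{verbatim}
import cmath

zeta = cmath.exp(2j * cmath.pi / 5)
c = 2 - 2*(zeta**2 + zeta**-2) - zeta - zeta**-1
for Y in (0.0, 1.0, -1.3 + 0.7j, 2.4 - 1.9j):    # four points pin down a cubic
    lhs = (-Y + zeta)*(-Y + zeta**2)**2 + (-Y + zeta**-1)*(-Y + zeta**-2)**2
    rhs = (1 - Y)*(2*Y**2 + c*Y + 2)
    assert abs(lhs - rhs) < 1e-9
print("the two sides of the identity agree")
\end{verbatim}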
\end{document}
\begin{document}
\title{Efficient entanglement criteria for discrete, continuous, and hybrid variables} \author{Manuel Gessner} \email{[email protected]} \affiliation{QSTAR, CNR-INO and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy} \affiliation{Istituto Nazionale di Ricerca Metrologica, Strada delle Cacce 91, I-10135 Torino, Italy} \author{Luca Pezz\`e} \affiliation{QSTAR, CNR-INO and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy} \author{Augusto Smerzi} \affiliation{QSTAR, CNR-INO and LENS, Largo Enrico Fermi 2, I-50125 Firenze, Italy} \date{\today}
\begin{abstract} We provide a method to construct entanglement criteria for arbitrary multipartite systems of discrete or continuous variables and hybrid combinations of both. While any set of local operators generates a sufficient condition for entanglement of arbitrary quantum states, a suitable set leads to a necessary and sufficient criterion for pure states. The criteria are readily implementable with existing technology and reveal entanglement that remains undetected by the respective state-of-the-art methods for discrete and continuous variables. \end{abstract}
\maketitle
\textit{Introduction.}---Determining whether an unknown quantum state is entangled or not is a highly complex and in general unsolved problem \cite{Braunstein,Horodecki,GuehneToth}. The fundamental role of entanglement in quantum physics renders this issue directly relevant for various fields ranging from quantum information to condensed-matter physics \cite{Horodecki,Fazio,NielsenChuang}. A large amount of theoretical methods to characterize entanglement has been proposed \cite{Braunstein,Horodecki,Mintert,GuehneToth,Huber,Guehne,Sperling,Gisin2,Dicke}, however only few of them can be formulated in terms of a feasible operational recipe, involving only a given set of accessible operators and a limited number of measurements. These are crucial requirements to render such methods relevant for scalable experimental applications in a broad range of scenarios.
The development of methods for systems of either discrete or continuous variables has led to two distinct approaches, which are widely popular in present-day experiments \cite{Dicke,Chip,Strobel,Bohnet,Peise,Treps,Lucke,Schmied}. Continuous-variable systems, on the one hand, are typically analyzed with separability criteria involving uncertainty relations \cite{Duan,Simon,Giovannetti,Braunstein}. They provide a complete characterization of the entanglement of bipartite Gaussian states in terms of variances of suitably defined operators \cite{Duan,Simon}. These criteria have been further sharpened by means of entropic uncertainty relations \cite{Walborn} or moments of arbitrary order \cite{SV}, enabling them to detect a larger amount of non-Gaussian states, but their application remains limited to bipartite systems.
On the other hand, a convenient method to detect entanglement in the discrete (e.g. multi-qubit) case is based on the violation of spin-squeezing inequalities \cite{Wineland92, Sorensen,TothPRA2009}. A larger class of entangled states, including the spin-squeezed ones, can be detected by the Fisher information \cite{PS09}, a fundamental quantity in estimation theory \cite{Helstrom}. However, as this method is specifically designed to detect only those states that lead to a metrological advantage -- only one of many applications of entangled states --, certain entangled states remain undetected, including some pure states \cite{Hyllus}.
In this article, we provide a unified approach to entanglement detection in arbitrary multipartite quantum systems, which proves to be more efficient than standard strategies for either discrete \cite{GuehneToth,PS09,Hyllus} or continuous variables \cite{Duan,Simon,Giovannetti,Walborn}. In particular, every pure entangled state can be detected. Furthermore, the method can be readily adapted to a wide range of present-day experimental setups, since any set of accessible local observables can be used to construct a separability criterion. This flexibility also opens up the possibility to detect hybrid entanglement between discrete and continuous variables, whose prospects as a platform for implementations of quantum information is currently being explored \cite{Bellini,Laurat,Andersen}.
Specifically, we show that all separable quantum states of $N$ parties must satisfy the following inequality: \begin{equation} \label{eq.sepbound} F_{\hat{M}} \bigg[ \hat{\rho}_{\rm sep}, \sum_{i=1}^N \hat{A}_i \bigg] \leq 4\sum_{i=1}^N\mathrm{Var} \left( \hat{A}_i \right)_{\hat{\rho}_{\rm sep}}. \end{equation} Here $\hat{A}_i$ is a local observable for the $i$th party, $\mathrm{Var}(\hat{A})_{\hat{\rho}}=\langle\hat{A}^2\rangle_{\hat{\rho}} -\langle\hat{A}\rangle_{\hat{\rho}}^2$ denotes the variance and the quantum expectation values are given as $\langle \hat{A}\rangle_{\hat{\rho}}=\mathrm{Tr}[\hat{A}\hat{\rho}]$. The quantity appearing on the left-hand side of Eq.~(\ref{eq.sepbound}) is the Fisher information \cite{FIfootnote,Helstrom}. It quantifies how sensitively changes of the parameter $\theta$ are detected when the initial state $\hat{\rho}$ is transformed by the unitary evolution $\hat{\rho}(\theta)=e^{-i\sum_j\hat{A}_j\theta} \hat{\rho} e^{i\sum_j\hat{A}_j\theta}$ and then observed by measurements of the observable $\hat{M}$ \cite{Helstrom,Giovannetti06,Paris,GiovannettiReview,Varenna}. It is furthermore experimentally accessible \cite{Strobel}, see also \cite{Bohnet,Monroe,FIPhoton}, without any knowledge of the full quantum state \cite{PezzePRE,frowis,hauke}. The bound~(\ref{eq.sepbound}) holds for arbitrary observables $\hat{M}$, rendering the criterion robust against imperfect implementations of the measurement \cite{PezzePRE}.
Since Eq.~(\ref{eq.sepbound}) represents a necessary criterion for separability, its violation is a sufficient criterion for entanglement. The appearance of state-dependent variances on the right-hand side makes this criterion highly versatile since it holds for arbitrary local observables $\hat{A}_i$, independent of the Hilbert-space structure.
Equation~(\ref{eq.sepbound}) expresses the trade-off for separable states $\hat{\rho}_{\rm sep}$ between the state's sensitivity quantified by the Fisher information and the variances of the local operators $\hat{A}_i$ generating the transformation. If quantum correlations between the parties are present, then the bound~(\ref{eq.sepbound}) can be violated. In fact, for arbitrary quantum states $\hat{\rho}$, a different bound, $F_{\hat{M}} [ \hat{\rho}, \sum_{i=1}^N \hat{A}_i ] \leq 4\sum_{i,j}\mathrm{Cov}(\hat{A}_i,\hat{A}_j)_{\hat{\rho}}$ holds, where $\mathrm{Cov}(\hat{A},\hat{B})_{\hat{\rho}}=\langle\hat{A}\hat{B}+\hat{B}\hat{A}\rangle_{\hat{\rho}} / 2 -\langle\hat{A}\rangle_{\hat{\rho}}\langle\hat{B}\rangle_{\hat{\rho}}$. The difference between this bound and Eq.~(\ref{eq.sepbound}) lies entirely in the absence of covariances between the subsystems $(i\neq j)$ in~(\ref{eq.sepbound}).
To demonstrate Eq.~(\ref{eq.sepbound}) we first notice that the Fisher information can be maximized over all possible observables $\hat{M}$ \cite{BraunsteinPRL1994}: We have $F_{\hat{M}} \leq F_{Q}$, where $F_{Q} [ \hat{\rho}, \sum_{i=1}^N \hat{A}_i] = \max_{\hat{M}} F_{\hat{M}} [ \hat{\rho}, \sum_{i=1}^N \hat{A}_i]$ is a saturable bound (i.e. optimal observables can be constructed) called the quantum Fisher information \cite{Helstrom}. For an arbitrary separable state $\hat{\rho}_{\rm sep} = \sum_{\gamma}p_{\gamma} \ket{\varphi_\gamma} \bra{\varphi_\gamma}$,
where $\ket{\varphi_\gamma} = |\varphi^{\gamma}_1\rangle\otimes\dots\otimes|\varphi^{\gamma}_N\rangle$ is a product state, $p_\gamma > 0$, $\sum_\gamma p_\gamma=1$ and $\ket{ \varphi^{\gamma}_i }$ is the state of the $i$th party, the chain of inequalities \begin{subequations} \label{eq.sepbound.c} \begin{align}
F_Q\bigg[\hat{\rho}_{\mathrm{sep}},\sum_{i=1}^N \hat{A}_i\bigg]&\leq\sum_{\gamma}p_{\gamma}F_Q\bigg[\ket{\varphi_{\gamma}},\sum_{i=1}^N\hat{A}_i\bigg] \label{eq.proof1} \\ &=4\sum_{\gamma}p_{\gamma}\mathrm{Var}\bigg(\sum_{i=1}^N\hat{A}_i\bigg)_{\ket{\varphi_{\gamma}}} \label{eq.proof2} \\
&=4\sum_{\gamma}p_{\gamma} \sum_{i=1}^N \mathrm{Var}\big(\hat{A}_i\big)_{|\varphi_i^{\gamma}\rangle} \label{eq.proof3} \\ & \leq 4\sum_{i=1}^N \mathrm{Var}\big(\hat{A}_i\big)_{\hat{\rho}_{\mathrm{sep}}} \label{eq.proof4} \end{align} \end{subequations} holds. In Eq.~(\ref{eq.proof1}) we used the convexity of the quantum Fisher information \cite{Varenna} and, in Eq.~(\ref{eq.proof2}), the general expression
$F_Q\big[\ket{\psi},\hat{M}\big] = 4 (\Delta \hat{M})^2$, valid for pure states $|\psi\rangle$ and Hermitian operators $\hat{M}$ \cite{BraunsteinPRL1994}. We have $\mathrm{Var}\big(\sum_{i=1}^N\hat{A}_i\big)_{\hat{\rho}}=\sum_{i,j=1}^N\mathrm{Cov}(\hat{A}_i,\hat{A}_j)_{\hat{\rho}}$ and $\mathrm{Cov}(\hat{A},\hat{A})_{\hat{\rho}} = \mathrm{Var}(\hat{A})_{\hat{\rho}}$. Equation~(\ref{eq.proof3}) is then obtained by noticing that
$\mathrm{Cov}(\hat{A}_i,\hat{A}_j)_{|\varphi_1^{\gamma}\rangle\otimes\dots\otimes|\varphi_N^{\gamma}\rangle}=0$ when $i\neq j$. Therefore, $\mathrm{Var}\big(\sum_{i=1}^N\hat{A}_i\big)_{|\varphi_1^{\gamma}\rangle\otimes\dots\otimes|\varphi_N^{\gamma}\rangle}
= \sum_{i=1}^N \mathrm{Var}\big(\hat{A}_i\big)_{|\varphi_i^{\gamma}\rangle}$. Finally, the last inequality (\ref{eq.proof4}) follows from the concavity of the variance, see also Ref.~\cite{Guehne04}.
In the above derivation, no assumption is made about the local operators $\hat{A}_i$. In fact, any choice of $\hat{A}_i$ leads to a sufficient criterion for entanglement. However, certain choices of operators may be better suited than others to detect the entanglement of a given state $\hat{\rho}$. In order to construct the strongest possible criterion, we decompose each of the individual $\hat{A}_i$ in terms of an accessible set of operators $\hat{\mathbf{A}}_i=(\hat{A}_i^{(1)},\hat{A}_i^{(2)},\dots)^T$ in the Hilbert space $\mathcal{H}_{i}$ of the $i$th party ($i=1,...,N$). Thus, the local operators $\hat{A}_i$ are replaced by the expressions $\sum_{m=1}c_i^{(m)}\hat{A}_i^{(m)}=\mathbf{c}_i\cdot\hat{\mathbf{A}}_i$, where the $\mathbf{c}_i=(c^{(1)}_i,c^{(2)}_i,\dots)$ are vectors of coefficients. In this case, the full generator of the unitary transformation $\hat{A}(\mathbf{c})=\sum_{i=1}^N\mathbf{c}_i\cdot\hat{\mathbf{A}}_i$ is characterized by the combined vector $\mathbf{c}=(\mathbf{c}_1,\dots,\mathbf{c}_N)^T$. According to Eq.~(\ref{eq.sepbound}), the quantity \begin{equation} \label{eq.W} W[\hat{\rho},\hat{A}(\mathbf{c})]=F_Q[\hat{\rho},\hat{A}(\mathbf{c})]-4\sum_{i=1}^N\mathrm{Var}(\mathbf{c}_i\cdot\hat{\mathbf{A}}_i)_{\hat{\rho}} \end{equation} must be non-positive for arbitrary choices of $\mathbf{c}$ whenever the state $\hat{\rho}$ is separable. We can now maximize $W[\hat{\rho},\hat{A}(\mathbf{c})]$ by variation of $\mathbf{c}$ to obtain an optimized entanglement witness for the state $\hat{\rho}$, given the sets of available operators contained in $\mathcal{A}=\{\hat{\mathbf{A}}_1,\dots,\hat{\mathbf{A}}_N\}$.
To this aim let us first express the quantum Fisher information in matrix form as
$F_Q[\hat{\rho},\hat{A}(\mathbf{c})]=\mathbf{c}^TQ^{\mathcal{A}}_{\hat{\rho}}\mathbf{c}$, where the spectral decomposition $\hat{\rho}=\sum_kp_k|\Psi_k\rangle\langle\Psi_k|$ defines $\left(Q^{\mathcal{A}}_{\hat{\rho}}\right)^{mn}_{ij}=2\sum_{k,l}\frac{(p_k-p_l)^2}{p_k+p_l}\langle\Psi_k|\hat{A}_i^{(m)}|\Psi_l\rangle\langle\Psi_l|\hat{A}_j^{(n)}|\Psi_k\rangle$ element-wise; the sum extends over all pairs with $p_k+p_l\neq 0$. The indices $i$ and $j$ refer to different parties (i.e. different Hilbert spaces), while the indices $m$ and $n$ label the respective local sets of observables within each Hilbert space. Similarly, for the list of operators $\mathcal{A}$, we can express the elements of the covariance matrix of $\hat{\rho}$ as $(\Gamma^{\mathcal{A}}_{\hat{\rho}})^{mn}_{ij}=\mathrm{Cov}(\hat{A}_i^{(m)},\hat{A}_j^{(n)})_{\hat{\rho}}$. Note that only the block-diagonal elements $(i=j)$ of this matrix appear on the right-hand side of Eq.~(\ref{eq.sepbound}). If the above covariance matrix is evaluated after replacing $\hat{\rho}$ with $\Pi(\hat{\rho})=\hat{\rho}_1\otimes\dots\otimes\hat{\rho}_N$ where $\hat{\rho}_i$ is the reduced density operator, obtained from $\hat{\rho}$ by tracing over all parties except the $i$th one, all of those inter-party correlations $(i\neq j)$ are removed, while the local terms $(i=j)$ remain unchanged. Hence, we arrive at the following expression for the local variances, $\sum_{i=1}^N\mathrm{Var}\left(\mathbf{c}_i\cdot\hat{\mathbf{A}}_i\right)_{\hat{\rho}} = \mathbf{c}^T \Gamma^{\mathcal{A}}_{\Pi(\hat{\rho})} \mathbf{c}$. Combining this with the expression for the quantum Fisher matrix, the separability criterion reads $W[\hat{\rho},\hat{A}(\mathbf{c})]=\mathbf{c}^T\left(Q^{\mathcal{A}}_{\hat{\rho}}-4\Gamma^{\mathcal{A}}_{\Pi(\hat{\rho})} \right)\mathbf{c}\leq 0$. Since this condition must be satisfied for arbitrary vectors $\mathbf{c}$, it can be formulated independently of $\mathbf{c}$, as \begin{align} \label{eq.boundrho} Q^{\mathcal{A}}_{\hat{\rho}}-4\Gamma^{\mathcal{A}}_{\Pi(\hat{\rho})}\leq 0. \end{align} An entanglement witness is therefore found when the matrix $Q^{\mathcal{A}}_{\hat{\rho}}- 4\Gamma^{\mathcal{A}}_{\Pi(\hat{\rho})}$ has at least one positive eigenvalue. The criterion~(\ref{eq.boundrho}) can be equivalently stated as $\lambda_{\max}(Q^{\mathcal{A}}_{\hat{\rho}}- 4\Gamma^{\mathcal{A}}_{\Pi(\hat{\rho})})\leq 0$, where $\lambda_{\max}(M)$ denotes the largest eigenvalue of the matrix $M$.
For pure states $\hat{\rho}=|\Psi\rangle\langle\Psi|$, the quantum Fisher matrix coincides, up to a factor of four, with the covariance matrix, i.e., $Q^{\mathcal{A}}_{|\Psi\rangle}=4\Gamma^{\mathcal{A}}_{|\Psi\rangle}$ \cite{BraunsteinPRL1994}. Thus, according to Eq.~(\ref{eq.boundrho}), every pure separable state must satisfy the condition \begin{align}\label{eq.boundpure}
\Gamma^{\mathcal{A}}_{|\Psi\rangle}-\Gamma^{\mathcal{A}}_{\Pi(|\Psi\rangle)}\leq 0. \end{align} Conversely, if Eq.~(\ref{eq.boundpure}) is satisfied, then $\mathrm{Cov}(\hat{A}_i^{(m)},\hat{A}_j^{(n)})_{\ket{\Psi}}=0$,
or equivalently $\langle\hat{A}_i^{(m)}\hat{A}_j^{(n)}\rangle_{|\Psi\rangle}=\langle\hat{A}_i^{(m)}\rangle_{|\Psi\rangle}\langle\hat{A}_j^{(n)}\rangle_{|\Psi\rangle}$ for all $i\neq j$ and all $n,m$ \cite{footnote1}. If, additionally, each local set $\hat{A}_i^{(1)},\hat{A}_i^{(2)},\dots$ forms a complete set of observables, able to span the entire Hilbert space $\mathcal{H}_i$, for $i=1,...,N$, then this statement is only compatible
with a product state, $|\Psi\rangle=|\varphi_1\rangle\otimes\dots\otimes|\varphi_N\rangle$. Hence, for each entangled pure state, a set of operators $\mathcal{A}$ can be found, such that the criterion~(\ref{eq.boundpure}) is violated. This means, the criterion~(\ref{eq.boundpure}) becomes necessary and sufficient for separability of pure states, while Eq.~(\ref{eq.boundrho}) is always a necessary criterion for arbitrary states.
\begin{figure}\label{fig.dephasedsup}
\end{figure}
\textit{Application to continuous-variable entanglement.}---Let us first illustrate the applicability of the separability criterion derived here for the detection of mode-entanglement with continuous variables. A natural but arbitrary choice for the local observables $\hat{\mathbf{A}}_i$ are the phase-space operators $(\hat{x}_i,\hat{p}_i)^T$, such that the list of accessible observables $\mathcal{A}$ is given by $\mathcal{R}=\{\hat{x}_1,\hat{p}_1,\hat{x}_2,\hat{p}_2,\dots,\hat{x}_N,\hat{p}_N\}$ (we henceforth drop the vector notation for clarity). Gaussian states are fully characterized in terms of their covariance matrix $\Gamma^{\mathcal{R}}_{\hat{\rho}}$ \cite{Braunstein}, and their bipartite entanglement is efficiently captured by separability criteria based on Heisenberg's uncertainty relation \cite{Duan,Giovannetti}. The strongest criterion of this kind follows from the sharpest uncertainty relation, which is formulated in terms of entropic quantities \cite{Walborn}. To test the limits of these criteria, they are applied to non-Gaussian states, such as a partially dephased superposition state of the form \begin{align}\label{eq.dephsup}
\hat{\rho}_s=N(\alpha,s)&\big[|\alpha,\alpha\rangle\langle \alpha,\alpha|+|-\alpha,-\alpha\rangle\langle -\alpha,-\alpha|\\
&+(1-s)\left(|-\alpha,-\alpha\rangle\langle \alpha,\alpha|+|\alpha,\alpha\rangle\langle -\alpha,-\alpha|\right)\big]\notag,
\end{align}
where $|\alpha\rangle$ is a coherent state, $0\leq s\leq 1$ is a parameter and $N(\alpha,s)$ a normalization constant. The entropic separability criterion, and with it all other uncertainty-based criteria \cite{Duan,Simon,Giovannetti}, are unable to detect the entanglement of $\hat{\rho}_s$ for small values of $\alpha$ \cite{Walborn}. The criterion~(\ref{eq.boundrho}) -- using only the local operators contained in $\mathcal{R}$ -- detects the entanglement of the state $\hat{\rho}_s$ for all values of $s$ and $\alpha$, except at $s=1$ where the state is separable. This is illustrated in Fig.~\ref{fig.dephasedsup} for values of $\alpha$ in the undetected region of the entropic criterion.
The position and momentum observables contained in $\mathcal{R}$ represent one of many possible sets of local operators that can be used to construct a separability criterion from Eq.~(\ref{eq.sepbound}) in a continuous-variable system. Another choice of local observables is given by the local number operators $\hat{n}_i$. In quantum optical experiments, the local number fluctuations are accessible in a variety of platforms \cite{Bakr10,Schindler,Lucke}, and comparison with the quantum state's sensitivity to a collective phase shift $\exp(i\theta\hat{N})$, generated by $\hat{N}=\sum_{i=1}^N\hat{n}_i$ leads to the experimentally convenient separability criterion $F_Q[\hat{\rho}_{\mathrm{sep}},\hat{N}]\leq 4\sum_{i=1}^N\mathrm{Var}(\hat{n}_i)_{\hat{\rho}_{\mathrm{sep}}}$.
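As a simple illustration of this criterion (a worked example added here, not taken from the original discussion), consider the two-mode Fock-space state $|\psi\rangle=(|0,0\rangle+|1,1\rangle)/\sqrt{2}$. The total photon number takes the values $0$ and $2$ with equal probability, so $F_Q[|\psi\rangle,\hat{N}]=4\,\mathrm{Var}(\hat{N})_{|\psi\rangle}=4$ for this pure state, while each reduced state yields $\mathrm{Var}(\hat{n}_i)_{|\psi\rangle}=1/4$. Hence
\[
F_Q\big[|\psi\rangle,\hat{N}\big]=4 \;>\; 4\sum_{i=1}^2 \mathrm{Var}(\hat{n}_i)_{|\psi\rangle}=2,
\]
and the entanglement of $|\psi\rangle$ is detected by measurements of photon-number fluctuations alone.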
The various criteria obtained from Eq.~(\ref{eq.sepbound}) for different choices of local operators may also be combined to generate bounds whose verification does not require direct measurements of the local variances. Consider for example a continuous variable system of $N$ parties (modes), for which $F_Q[\hat{\rho}_{\mathrm{sep}},\hat{X}]$ and $F_Q[\hat{\rho}_{\mathrm{sep}},\hat{P}]$ have been independently probed, with $\hat{P}=\sum_{i=1}^N\hat{p}_i$ and $\hat{X}=\sum_{i=1}^N\hat{x}_i$. The sum of the two corresponding inequalities~(\ref{eq.sepbound}) then yields the criterion \begin{align}\label{eq.combinedbound} F_Q[\hat{\rho}_{\mathrm{sep}},\hat{X}]+F_Q[\hat{\rho}_{\mathrm{sep}},\hat{P}]&\leq 4\sum_{i=1}^N\left(\mathrm{Var}(\hat{x}_i)+\mathrm{Var}(\hat{p}_i)\right)_{\hat{\rho}_{\mathrm{sep}}}\notag\\ &\leq 4\sum_{i=1}^N\left(\langle\hat{x}_i^2\rangle_{\hat{\rho}_{\mathrm{sep}}}+\langle\hat{p}_i^2\rangle_{\hat{\rho}_{\mathrm{sep}}}\right)\notag\\ &= 4\sum_{i=1}^N(2n_i+1)=4(2n+N), \end{align} where $n_i=\langle\hat{n}_i\rangle_{\hat{\rho}}$ is the average particle number in mode $i$ with $\hat{n}_i=\hat{a}^{\dagger}_i\hat{a}_i$ and $\hat{a}_j=(\hat{x}_j+i\hat{p}_j)/\sqrt{2}$. The second inequality is saturated if and only if $\langle \hat{x}_i\rangle_{\hat{\rho}_{\mathrm{sep}}}=\langle \hat{p}_i\rangle_{\hat{\rho}_{\mathrm{sep}}}=0$, for all $i=1,\dots,N$. If the number of modes $N$ and the total number of particles $n=\sum_{i=1}^Nn_i$ are known from independent measurements, Eq.~(\ref{eq.combinedbound}) can be used as an entanglement witness without measurement of the local variances.
\begin{figure}\label{fig.shotnoiseentanglement}
\end{figure}
\textit{Application to discrete-variable entanglement.}---From Eq.~(\ref{eq.sepbound}) it is possible to derive state-independent upper bounds, using $4 \mathrm{Var}(\hat{A})_{\hat{\rho}}\leq (\lambda_{\max}(\hat{A})-\lambda_{\min}(\hat{A}))^2$, which holds for all $\hat{\rho}$; here $\lambda_{\min/\max}(\hat{A})$ denote the minimal and maximal eigenvalues of $\hat{A}$. This yields the state-independent separability bound \begin{align}\label{eq.stateindep} F_Q\bigg[\hat{\rho}_{\rm sep},\hat{A}(\mathbf{c})\bigg]&\leq \sum_{i=1}^N\left(\lambda_{\max}(\mathbf{c}_i\cdot\hat{\mathbf{A}}_i)-\lambda_{\min}(\mathbf{c}_i\cdot\hat{\mathbf{A}}_i)\right)^2.
\end{align} In the special case of $N$ qubit systems, we recover the shot-noise bound $F_Q[\hat{\rho}_{\rm sep},\sum_{i=1}^N \mathbf{c}_i\cdot\hat{\boldsymbol{\sigma}}_i/2]\leq N$,
whose violation identifies entangled states of $N$ qubits that are useful for sub-shot-noise interferometry \cite{PS09} when $\mathbf{c}_i=\mathbf{n}\in\mathbb{R}^3$ is a unit vector (leading to $|\mathbf{c}|^2=N$). Here, $\hat{\boldsymbol{\sigma}}_i=(\hat{\sigma}_i^{(x)},\hat{\sigma}^{(y)}_i,\hat{\sigma}^{(z)}_i)$ is the vector of Pauli matrices for the $i$th qubit. Certain entangled $N$-qubit states, however, cannot be detected by the shot-noise criterion, even if the state is further optimized by means of local unitary manipulations \cite{Hyllus}, i.e., by optimization of the $\mathbf{c}_i$ under the normalization constraints $|\mathbf{c}_i|^2=1$. More generally, without respecting these constraints, we obtain the separability criterion $\lambda_{\max}(Q^{\mathcal{S}}_{\hat{\rho}_{\rm sep}})\leq 1$, where $\mathcal{S}=\{\hat{\boldsymbol{\sigma}}_1/2,\dots,\hat{\boldsymbol{\sigma}}_N/2\}$. Yet, the entanglement of states of the form \begin{align}\label{eq.shotnoiseent}
|\Psi_q\rangle=\sqrt{q}|0\rangle^{\otimes N}+\sqrt{1-q}e^{i\varphi}|1\rangle^{\otimes N}, \end{align}
will be overlooked by any of these state-independent bounds when $q\leq(1-\sqrt{(N-1)/N})/2$ or $q\geq (1+\sqrt{(N-1)/N})/2$ and $N\geq 3$ \cite{Hyllus}. In contrast, the stronger state-dependent criterion~(\ref{eq.boundrho}), which for pure states reduces to~(\ref{eq.boundpure}), is necessary and sufficient for all pure states, since the Pauli matrices span a complete set of qubit observables---together with the identity operator which commutes with all operators and can therefore be omitted. Figure~\ref{fig.shotnoiseentanglement} shows that, indeed, the stronger criterion~(\ref{eq.boundrho}) (red continuous line) detects the entanglement of $|\Psi_q\rangle$ for arbitrary values of $0< q< 1$, while the optimized quantum Fisher information (blue dashed line) does not overcome the shot-noise bound (gray dotted line) in the intervals mentioned above (gray shaded areas). Choosing an orientation along the $z$-axis, i.e., $\mathbf{c}_i=(0,0,1)^T$ for all $i=1,\dots,N$, yields the largest positive values of the witness $W$ [Eq.~(\ref{eq.W})], for all parameters $q$, while it maximizes the quantum Fisher information only outside of the gray-shaded parameter regime.
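To make the construction concrete, the following minimal numerical sketch (not part of the original text; it assumes only numpy, and the helper names are ad hoc) evaluates the witness matrix of Eq.~(\ref{eq.boundrho}) for the state of Eq.~(\ref{eq.shotnoiseent}) with $N=3$, $\varphi=0$ and $q=0.05$, i.e., inside the parameter region where the shot-noise bound detects nothing. Since the state is pure, $Q^{\mathcal{S}}_{\hat{\rho}}=4\Gamma^{\mathcal{S}}_{\hat{\rho}}$, so the witness matrix equals $4(\Gamma^{\mathcal{S}}_{\hat{\rho}}-\Gamma^{\mathcal{S}}_{\Pi(\hat{\rho})})$, and a positive largest eigenvalue certifies entanglement.
\begin{verbatim}
import numpy as np

# Pauli matrices divided by 2: the local operators of the set S
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
paulis = [sx, sy, sz]

def local_op(op, site, n):
    # embed a single-qubit operator at position `site` of an n-qubit register
    mats = [np.eye(2, dtype=complex)] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n, q = 3, 0.05     # q lies in the region missed by the shot-noise bound
psi = np.zeros(2**n, dtype=complex)
psi[0], psi[-1] = np.sqrt(q), np.sqrt(1 - q)   # sqrt(q)|00..0> + sqrt(1-q)|11..1>

ops = [local_op(p, i, n) for i in range(n) for p in paulis]  # 3n local operators

def cov(A, B, v):
    # symmetrized covariance <AB+BA>/2 - <A><B> in the pure state v
    m = lambda X: np.real(v.conj() @ X @ v)
    return np.real(v.conj() @ (A @ B + B @ A) @ v) / 2 - m(A) * m(B)

G = np.array([[cov(A, B, psi) for B in ops] for A in ops])  # covariance matrix
G_pi = np.zeros_like(G)                    # keep only the local 3x3 blocks
for i in range(n):
    G_pi[3*i:3*i+3, 3*i:3*i+3] = G[3*i:3*i+3, 3*i:3*i+3]

# pure state: Q = 4*G, so the witness matrix of the criterion is 4*(G - G_pi)
print(np.linalg.eigvalsh(4 * (G - G_pi)).max())  # > 0: entanglement detected
\end{verbatim}
For these parameters the script prints $8q(1-q)=0.38$, which is positive, so the entanglement is detected in a regime where, as discussed above, the optimized quantum Fisher information does not exceed the shot-noise bound.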
\textit{Entanglement depth, hybrid variables and experiments.}---The state-independent bounds~(\ref{eq.stateindep}) can be generalized to distinguish among the hierarchical classes of $k$-partite entanglement \cite{FImulti}. Consider an $N$-mode state which contains at most $k$-partite entanglement, i.e., $\hat{\rho}_{k\rm-prod}=\sum_\gamma p_\gamma |\varphi^\gamma_1\rangle\langle\varphi^\gamma_1|\otimes|\varphi^\gamma_2\rangle\langle\varphi^\gamma_2|\otimes\cdots$, where each of the states $|\varphi^\gamma_l\rangle$ describes $N^{(\gamma)}_l\leq k$ modes with $\sum_lN^{(\gamma)}_l=N$ for all $\gamma$. An upper bound for the quantum Fisher information is given by \begin{align}\label{eq.kpartite} F_Q[\hat{\rho}_{k\rm-prod},\hat{A}(\mathbf{c})] \leq \Delta_{\max}^2(sk^2+r^2), \end{align} where $s=\lfloor N/k\rfloor$, $r=N-sk$, and the maximum spectral span of all local operators is denoted by $\Delta_{\max}=\max_i\{\lambda_{\max}(\mathbf{c}_i\cdot\hat{\mathbf{A}}_i)-\lambda_{\min}(\mathbf{c}_i\cdot\hat{\mathbf{A}}_i)\}$. The above result is obtained following Refs.~\cite{FImulti} together with $\lambda_{\max}(\sum_i\hat{A}_i)\leq \sum_i\lambda_{\max}(\hat{A}_i)$ and $\lambda_{\min}(\sum_i\hat{A}_i)\geq \sum_i\lambda_{\min}(\hat{A}_i)$, which are ensured by Weyl's inequality \cite{Weyl}, and thereby generalizes the $N$-qubit result of Refs.~\cite{FImulti} to the case of unequal, arbitrary subsystems. Hence, whenever the spectrum of the local operators is bounded, Eq.~(\ref{eq.kpartite}) provides a criterion not only to test if any entanglement is present, but also how many of the $N$ modes are entangled. Besides finite-dimensional systems \cite{FImulti,Varenna}, there may also be applications to continuous variables if further knowledge about the system limits the spectral range $\Delta_{\max}$. For example, if a gas is contained in a trap of extension $L$, the spectral span of the position operator cannot exceed $L$. In such a system, any observation of $F_Q[\hat{\rho},\hat{X}]>L^2(sk^2+r^2)$ indicates entanglement among more than $k$ modes.
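For instance (a worked example added here for illustration), for $N=7$ qubits with $\hat{\mathbf{A}}_i=\hat{\boldsymbol{\sigma}}_i/2$ and unit vectors $\mathbf{c}_i$, one has $\Delta_{\max}=1$, and for $k=3$ Eq.~(\ref{eq.kpartite}) gives $s=2$, $r=1$ and the bound $F_Q\leq 2\cdot 3^2+1^2=19$. Any measured value $F_Q>19$ therefore certifies entanglement among more than three of the qubits, while the maximal value $F_Q=N^2=49$ is reached by the Greenberger-Horne-Zeilinger state.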
Entanglement detection protocols have traditionally been developed for homogeneous systems of either discrete or continuous variables \cite{Braunstein,GuehneToth}. Nevertheless, hybrid correlations between the two are generated in many different experiments \cite{Haroche,Wineland,Bellini,Laurat,Berkeley14,Hefei15}, and their potential for quantum information processing is recognized \cite{Andersen}. One of the advantages of the separability criterion~(\ref{eq.sepbound}) is its independence of the Hilbert space structure and dimension, allowing for the possibility of witnessing hybrid entanglement. As a simple example, consider the composite system of a two-level atom coupled to a single harmonic oscillator mode \cite{Haroche,Wineland}. Correlated states such as $|\phi_n\rangle=(|0,n\rangle+|1,n+1\rangle)/\sqrt{2}$, where $|n\rangle$ denotes a Fock state of $n$ excitations, are produced in ion-trap experiments \cite{Berkeley14}. A suitably designed hybrid criterion such as $F_Q[\hat{\rho}_{\mathrm{sep}},\hat{\sigma}^{(x)}+\hat{x}]\leq 4\mathrm{Var}(\hat{\sigma}^{(x)})_{\hat{\rho}_{\mathrm{sep}}}+4\mathrm{Var}(\hat{x})_{\hat{\rho}_{\mathrm{sep}}}$ is able to reveal this entanglement. Recall from Eq.~(\ref{eq.boundpure}) that for pure states, separability requires the absence of inter-party covariances. The entanglement of the state $|\phi_n\rangle$ can thus be attributed to the coherences that lead to $\langle \hat{\sigma}^{(x)}\hat{x}\rangle_{|\phi_n\rangle}\neq \langle\hat{\sigma}^{(x)}\rangle_{|\phi_n\rangle}\langle \hat{x}\rangle_{|\phi_n\rangle}$, and ultimately cause the violation of the separability criterion above.
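As a concrete, purely illustrative check (it assumes NumPy and a truncated Fock space, neither of which appears in the original text), the sketch below evaluates both sides of this hybrid criterion for the pure state $|\phi_n\rangle$, using the standard pure-state identity $F_Q[|\psi\rangle,\hat{A}]=4\,\mathrm{Var}(\hat{A})_{|\psi\rangle}$.
\begin{verbatim}
import numpy as np

def hybrid_check(n, dim=None):
    # Compare F_Q = 4 Var(sigma_x + x) with 4 Var(sigma_x) + 4 Var(x)
    # for |phi_n> = (|0,n> + |1,n+1>)/sqrt(2) on a truncated Fock space.
    dim = dim if dim is not None else n + 3
    a = np.diag(np.sqrt(np.arange(1, dim)), 1)     # annihilation operator
    x = (a + a.T) / np.sqrt(2)                     # position quadrature
    Sx = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(dim))
    X = np.kron(np.eye(2), x)
    e = np.eye(dim)
    psi = (np.kron([1.0, 0.0], e[n]) + np.kron([0.0, 1.0], e[n + 1])) / np.sqrt(2)
    var = lambda O: psi @ O @ O @ psi - (psi @ O @ psi) ** 2
    return 4 * var(Sx + X), 4 * var(Sx) + 4 * var(X)

print(hybrid_check(2))  # first value exceeds the second: entanglement detected
\end{verbatim}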
Before we conclude, let us briefly discuss the experimental implementation of our proposed entanglement criteria. In order to measure the witness~(\ref{eq.W}), two quantities need to be obtained. On the one hand, the variances of the local operators $\mathbf{c}_i\cdot\hat{\mathbf{A}}_i$ need to be measured. Single-site \textit{addressing} is not needed to achieve this. Instead, only the much less demanding resolved imaging of the individual constituents is required. Such measurements are possible in current experiments with, e.g., multi-mode photonic states \cite{Fabre,Bellini,Laurat,Treps}, trapped ions \cite{Monz,Schindler}, as well as with cold atoms distinguished by multi-well potentials or internal states \cite{Strobel,Lucke,Peise,BECfootnote}, and under quantum-gas microscopes \cite{Bakr10,Bloch2011,FermiMicroscope1,FermiMicroscope2,FermiMicroscope3,Greiner}. On the other hand, the Fisher information can be extracted, e.g., following the method of Ref.~\cite{Strobel} by determining the impact of a collective unitary operation with no need for local measurements, see also \cite{Bohnet,Monroe,PezzePRE,hauke,frowis}. Measurements of the Fisher information can be completely avoided by using the lower bound $F_Q[\hat{\rho},\hat{A}]\geq |\langle [\hat{A},\hat{B}]\rangle_{\hat{\rho}}|^2/\mathrm{Var}(\hat{B})_{\hat{\rho}}$, which holds for arbitrary operators $\hat{A}$ and $\hat{B}$ \cite{Varenna}. Together with the separability condition $W[\hat{\rho},\hat{A}(\mathbf{c})]\geq 0$ from Eq.~(\ref{eq.W}), we obtain the simple criterion, \begin{align}
\mathrm{Var}(\hat{A}(\mathbf{c}))_{\Pi(\hat{\rho})}\mathrm{Var}(\hat{B})_{\hat{\rho}}\geq \frac{|\langle [\hat{A}(\mathbf{c}),\hat{B}]\rangle_{\hat{\rho}}|^2}{4}, \end{align} whose violation indicates entanglement of $\hat{\rho}$.
\textit{Conclusions.}---We have introduced a family of entanglement criteria that are applicable to multipartite systems of discrete and/or continuous variables. The criteria are based on a comparison of the Fisher information, which expresses the sensitivity of the quantum state to a collective unitary transformation, with the sum of variances of the local operators that generate this transformation. We have illustrated the applicability with examples of spin and continuous-variable systems, showing in both cases that our criteria detect the entanglement of states that remain undetected by commonly employed state-of-the-art methods. In particular, we have constructed entanglement criteria that are necessary and sufficient for all pure states. Since any set of accessible local operators can be used to generate a sufficient entanglement criterion, the strategy presented here allows for versatile adaptations to a variety of experiments, and can be readily implemented with current technology.
\end{document}
\begin{document}
\captionsetup[figure]{labelfont={bf},name={Fig.},labelsep=period} \title{\bf \Large Stochastic stability of a system of perfect integrate-and-fire inhibitory neurons}
\author{Timofei Prasolov\\Heriot-Watt University}
\maketitle
\begin{abstract} We study a system of perfect integrate-and-fire inhibitory neurons. It is a system of stochastic processes which interact through receiving an instantaneous increase at the moments they reach certain thresholds. In the absence of interactions, these processes behave as spectrally positive L\'evy processes. Using the fluid approximation approach, we prove convergence to the stationary distribution in total variation. \end{abstract}
\begin{quotation} \noindent \textit{Keywords:} spiking neural network, L\'evy process, stability, fluid limits.\\ \noindent \textit{AMS classification:} 60B10, 60G51. \end{quotation}
\section{Introduction}
We analyse the stochastic stability of a model of a neural network. Our model is inspired by the \textit{stochastic integrate-and-fire neuron} model. The original model of membrane potentials was introduced by Lapicque (1907) and has been developed over the years (for a review of the model see, e.g., Burkitt (2006a,b)). In this model, at any time $t$, the internal state of a neuron $i$ is given by its membrane potential $Z_i(t)$, which evolves according to a stochastic differential equation $$dZ_i(t) = F(Z_i(t), I(t), t)dt + \sigma(Z_i(t), I(t), t)dW_i(t),$$ where $F$ is a drift function, $\sigma$ the diffusion coefficient, $I$ is the neuronal input, and $W_i$ is a Brownian motion (see, e.g., Gerstner and Kistler (2002)). The process $W_i(t)$ represents combined internal and external noise. The process $I$ models firings of neurons' potentials (or ``spikes''): whenever a potential $Z_i(t)$ reaches a certain threshold, it resets to a base level, and the neuron sends signals to other neurons.
A large number of experiments have given us an understanding of the dynamics of a single neuron. For example, Hodgkin and Huxley (1952) found three different types of ion current flowing through a neuron's membrane, and introduced a detailed model of a membrane potential. To give a basic description, without any input, the neuron is at rest, corresponding to a constant membrane potential. Given a small change, the membrane potential returns to the resting position. If the membrane potential is given a big enough increase, it reaches a certain threshold, and exhibits a pulse-like excursion that will affect connected neurons. After the pulse, the membrane potential does not directly return to the resting potential, but goes below it. This is connected to the fact that a neuron cannot have two spikes one right after another.
Within a neural network, neurons form a connected graph with synapses between them. When a presynaptic neuron fires a spike it sends a signal through a synapse to a postsynaptic neuron. A neuron is called \textit{inhibitory} if its signals predominantly move the membrane potentials away from a threshold; and \textit{excitatory} if they move potentials toward a threshold. In this paper, we consider a model containing inhibitory neurons only. It is important to point out that the effect of a signal depends on the potential of the receiving neuron. For example, if the membrane potential of a postsynaptic neuron is lower than that of a corresponding inhibitory synapse, the effect of a signal will be reversed. Therefore, in the models where signals and potentials are assumed to be independent, it is important to assume that the potentials should not decrease too much.
One of the most classical models, introduced by Stein (1965), is the \textit{leaky integrate-and-fire neuron} model, where $$F(Z_i(t), I(t), t) = -\alpha Z_i(t) + I(t), \ \ \sigma(Z_i(t), I(t), t) = \sigma = \text{const.}$$ There are several variations of this model. For instance, nonlinear models have been considered, such as the quadratic model (see, e.g., Latham \textit{et al.} (2000)) where $-\alpha Z_i(t)$ is replaced with $a(Z_i(t) - z_{rest})(Z_i(t) - z_c)$, where $z_c > z_{rest}$. Another direction for generalisation of this model is the Spike Response Model (see, e.g., Gerstner and Kistler (2002), Chapter 4.2). In this model, the relation between the dynamics and the potential is determined by the time of the last spike. This allows one to explicitly forbid spikes to occur one right after another and to write the dynamics in integrated form.
In this paper we consider the \textit{perfect integrate-and-fire neuron} model, where $\alpha = 0$ and, therefore, the decay of the membrane potential over time is neglected. This restriction is a stepping stone towards more general results, and it allows us to write the model in integrated form.
In our model, the spikes and corresponding signals are represented by shifts from a threshold of a random length, independent of everything else. We analyse the system under certain conditions on the distribution of those shifts and prove stability. Instead of considering the recurrence of sets $[-k, H]^N$ (where $H$ is a threshold and $N$ is the number of neurons), we move each coordinate down and reflect the system to work with more convenient sets $[0, k+H]^N$. Thus, in our model, membrane potentials are nonnegative processes that jump to a random positive level after reaching zero. Signals from inhibitory neurons push membrane potentials away from the threshold, i.e. they are positive shifts. It is important to note that we assume that the travel time of signals between neurons is zero, which in general can cause uncertainty in the order of spikes. However, the inhibitory signals do not cause spikes right away, and we assume that the potentials $Z_i(t)$ almost surely do not reach their thresholds at the same time. We refer to Taillefumier \textit{et al.} (2012) for further discussion.
It is often assumed that the studied system of neurons is itself a part of a much larger system of neurons. The corresponding effect on our system is often modelled by a multivariate Brownian motion $W(t)$ with a drift (the drift guarantees the stability of a system of a single neuron). However, we can generalise it to a multivariate spectrally positive (i.e. with positive jumps) L\'evy process $X(t)$ to account for inhibitory signals. It is important for our analysis that the signals do not influence the dynamics of the process $Z(t)$ if it is away from the threshold, i.e. we have $dZ(t) = dX(t)$ if $Z_i(t) > 0$, for $i\in \{1, \ldots, N\}$. Nevertheless, the number of spikes $\eta_i(t)$ before time $t$ is essential to the stability analysis. The fact that $\eta_i(t)$ is not pathwise monotone with respect to signal sizes or the initial state brings certain difficulties in proving stability.
As was mentioned, a system of a single neuron is stable; however, for a general distribution of signals between neurons, ``partial stability'' can occur when only a strict subset of neurons (possibly random) stabilises, while membrane potentials of other neurons are ``pushed'' to infinity (which contradicts the physical setup). The latter is of independent mathematical interest and will not be discussed in the main body of the paper.
Under specific conditions on average signals and the drift $\mathbb{E} X(1)$, we prove the positive recurrence of the system using the fluid approximation approach, introduced by Rybko and Stolyar (1992) and Dai (1995) (see also Stolyar (1995)). Although this method is usually presented for queueing networks, it is quite universal and applies to our model too. Using results from Section 7 of Borovkov and Foss (1992) (see also Chapter VII of Asmussen (2003)), we prove \textit{Harris positive recurrence} and convergence to the stationary distribution in total variation. We refer to Foss and Konstantopoulos (2004) for an overview of some stochastic stability methods.
The \textit{stochastic integrate-and-fire neuron} model has received an increasing amount of attention in recent years. There are a number of papers considering mean-field limits of such systems. De Masi \textit{et al.} (2015) consider a model with identical inhibitory neurons, where each membrane potential has a drift to the average potential. Inglis and Talay (2015) consider general signals between neurons and describe signal transmissions through the use of the cable equation (instead of instant transmissions). Robert and Touboul (2016) consider a model where neurons do not have a fixed threshold and spikes occur as an inhomogeneous Poisson process, with intensity given as a function of the membrane potential, and prove ergodicity. Several authors have studied cases when excitatory signals lead to so-called \textit{blow-up phenomena} (e.g. C\'aceres \textit{et al.} (2011), Delarue \textit{et al.} (2015)).
This paper is structured as follows. In \sectn{model&results} we define our model, introduce auxiliary concepts and notations, and formulate our results. In particular, in \sectn{fluid_model} we introduce the fluid model and formulate related technical results. In \sectn{auxiliary_results} we prove important auxiliary results. In \sectn{lemma_positive_recurrence} we prove positive recurrence. In \sectn{lemma_mixing} we prove that our model satisfies the classical ``minorization'' condition. The Appendix includes the remaining auxiliary results and comments.
\section{Model and results}\label{sect:model&results}
We analyse a network of $N$ stochastic perfect integrate-and-fire inhibitory neurons. At any time $t$, the internal state of all neurons is given by a multidimensional process $Z(t)$ which represents the neurons' membrane potentials. Let $X(t)$ be an $N$-dimensional spectrally positive left-continuous L\'evy process with a finite mean, whose distribution has a non-degenerate absolutely continuous component. The process $X(t)$ represents combined internal and external noise. Let $\nu_i = - \mathbb{E} X_i(1) > 0$ and $X^0_i(t) = \nu_i t + X_i(t)$. While $Z(t) \in (0, \infty)^N$, membrane potentials evolve as the process $X(t)$, i.e. $dZ(t) = dX(t)$. Let $\{\{\xi^{(k)}_{ij}\}_{i, j=1}^N\}_{k=1}^\infty$ be i.i.d. random matrices, independent of everything else, with a.s. strictly positive elements.
\begin{Remark} One can allow the absolutely continuous component of the distribution of the process $X_i(t)$ to be degenerate (for example, take a sum of a Poisson process and a linear function $-at$) and, instead, require the distribution of the matrix $\{\xi^{(1)}_{ij}\}_{i, j=1}^N$ to have an absolutely continuous component. The main result of the paper would still hold and the proof would need only a few minor changes. \end{Remark}
Let $b_{ij} = \mathbb{E} \xi^{(1)}_{ij} < \infty$ and $S_{ij}(n) = \sum_{k=1}^{n} \xi^{(k)}_{ij},$ for $i, j \in \{1, 2, \ldots, N\}$. If the potential $Z_i(t)$ hits non-positive values for the $k$-th time, then instantaneously it increases to $\xi^{(k)}_{ii}$ and the membrane potential of every other neuron $j$ increases by $\xi^{(k)}_{ij}$. We call this event ``a spike of neuron $i$''.
Let $Z^{\textbf{z}}(t) = (Z^{\textbf{z}}_1(t), \ldots, Z^{\textbf{z}}_N(t)) \in \mathcal{Z} = [0, \infty)^N$ be the membrane potentials at time $t$ with an initial value $\textbf{z} = (z_1, \ldots, z_N)$. Let $T^{\textbf{z}}_{i0} = 0$ and $$T^{\textbf{z}}_{ik} = \inf\{t> T^{\textbf{z}}_{i(k-1)}: \ Z^{\textbf{z}}_i(t) \leq 0\}, \ \ \text{for $k\geq 1$,}$$ be the times when neuron $i$ reaches its threshold. Let $\eta^{\textbf{z}}_i(0) = 0$ and $\eta^{\textbf{z}}_i(t) = \max\{k: \ T^{\textbf{z}}_{ik} < t\}$ be the number of spikes of $Z^{\textbf{z}}_i(t)$ before time $t$. Then the dynamics of the system is given by \begin{equation}\label{eq:dynamic_equation} Z^{\textbf{z}}_i(t) = z_i + X_i(t) + \sum_{j=1}^N \sum_{k=1}^{\eta^{\textbf{z}}_j(t)} \xi^{(k)}_{ji} = z_i + X_i(t) + \sum_{j=1}^N S_{ji}(\eta^{\textbf{z}}_j(t)), \ \ i=1, \ldots, N. \end{equation}
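For illustration only, the following NumPy sketch (with hypothetical choices that are not used elsewhere in the paper: $X$ a Brownian motion with drift $-\nu$, deterministic signals $\xi^{(k)}_{ii}=H$ and $\xi^{(k)}_{ij}=w$ for $j\neq i$) simulates the dynamics \eq{dynamic_equation} with a crude Euler scheme.
\begin{verbatim}
import numpy as np

def simulate(N=3, nu=1.0, H=3.0, w=1.0, T=500.0, dt=1e-3, seed=0):
    # Euler scheme for the dynamics above: between spikes dZ = dX,
    # and a spike of neuron i adds H to Z_i and w to every other coordinate.
    rng = np.random.default_rng(seed)
    z = np.full(N, 5.0)                    # initial potentials
    spikes = np.zeros(N, dtype=int)
    for _ in range(int(T / dt)):
        z += -nu * dt + np.sqrt(dt) * rng.standard_normal(N)
        for i in np.where(z <= 0.0)[0]:    # "a spike of neuron i"
            kick = np.full(N, w)
            kick[i] = H
            z += kick
            spikes[i] += 1
    return z, spikes / T                   # final state, empirical spike rates

z, rates = simulate()
print(rates)          # roughly 0.2 for each neuron
print(1.0 / ((3.0 - 1.0) * (1.0 + 3 * 1.0 / (3.0 - 1.0))))  # stationary rate 0.2
\end{verbatim}
The empirical spike rates are close to the stationary rates $\nu\textbf{1}B^{-1}$ appearing in \rem{spike_rate} below.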
Before talking about stability of the system it is important to point out that, due to the negative drift, one can easily show that the potential of an isolated neuron is stable (this is also a special case of our main result).
\begin{Remark}\label{rem:partial_stability} There are examples of parameters $\nu_i$ and $b_{ij}$ such that there exists a subset of neurons which, after reaching stability, can ``push other neurons to infinity''. We do not discuss such cases of partial stability in the main body of the paper, however, we include a few comments in the Appendix. \end{Remark}
We assume that all potentials have the same drift $\nu$ and that signals from neuron $i = 1, \ldots, N$ to all other neurons have the same mean $w_i$. We have \begin{equation}\label{eq:simple_network} \nu_i = \nu >0, \ \ b_{ij} = \mathbb{E} \xi^{(1)}_{ij} = w_i > 0 \ \ \text{and} \ \ b_{ii} = H_i > w_i, \ \ \text{for $i=1, \ldots, N$ and $j\neq i$.} \end{equation}
\begin{Theorem}\label{thr:ergodicity} Assume condition \textnormal{\eq{simple_network}} to hold. Then the process $(Z^{\textbf{z}}_1(t), \ldots, Z^{\textbf{z}}_N(t))$ is Harris positive recurrent: there is a distribution $\pi$ such that
$$\sup_{A}|\mathbb{P}\{Z^{\textbf{z}}(t) \in A\} - \pi(A)| \to 0 , \ \text{as $t\to\infty$.}$$ \end{Theorem}
\begin{Remark}\label{rem:spike_rate} Given \textnormal{\eq{simple_network}}, matrix $B = (b_{ij})_{i, j=1}^N$ is invertible and $$(\textbf{1}B^{-1})_i = \frac{1}{\left( H_i - w_i \right) \left( 1 + \sum_{k=1}^{N}\frac{w_k}{H_k - w_k} \right)}, \ \ \text{for $i = 1, \ldots, N$,}$$
where $\textbf{1} = (1, \ldots, 1)$. Vector $\nu \textbf{1}B^{-1}$ represents rates of spikes when stability is achieved. In particular, for large $t > \nu^{-1}(1 + \sum_{k=1}^N w_k / (H_k - w_k))$ and for each sequence $\textbf{z}_n$, $\|\textbf{z}_n\| \to \infty$, there exists a subsequence $\textbf{z}_{n_k}$ such that
$$\frac{\eta^{\textbf{z}_{n_k}}(\|\textbf{z}_{n_k}\|(t+\Delta)) - \eta^{\textbf{z}_{n_k}}(\|\textbf{z}_{n_k}\|t)}{\|\textbf{z}_{n_k}\|\Delta} \Rightarrow \nu \textbf{1}B^{-1}, \ \text{for $\Delta >0$.}$$ \end{Remark}
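The closed form for $\textbf{1}B^{-1}$ is easy to check numerically; the following purely illustrative NumPy sketch compares it with a direct matrix inverse for randomly generated parameters satisfying \eq{simple_network}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 4
w = rng.uniform(0.5, 1.5, size=N)        # b_{ij} = w_i for j != i
H = w + rng.uniform(0.5, 2.0, size=N)    # b_{ii} = H_i > w_i
B = np.tile(w[:, None], (1, N))          # row i filled with w_i ...
np.fill_diagonal(B, H)                   # ... and H_i on the diagonal

lhs = np.ones(N) @ np.linalg.inv(B)      # the vector 1 B^{-1}
rhs = 1.0 / ((H - w) * (1.0 + np.sum(w / (H - w))))
print(np.allclose(lhs, rhs))             # True
\end{verbatim}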
We prove \thr{ergodicity} following two standard steps. For the reader's convenience, we formulate those steps as lemmas. Let $\tau^{\textbf{z}}(\varepsilon, B) = \inf\{t> \varepsilon: \ Z^{\textbf{z}}(t) \in B\}$ be the first hitting time of a set $B$ after time $\varepsilon$. The first step is the proof of positive recurrence which we achieve via the fluid approximation method. \begin{Lemma}\label{lem:positive_recurrence}
There exists $k_0 >0$ such that for $V = \{\textbf{z}\in \mathcal{Z}: \|\textbf{z}\| < k_0\}$ we have $$\sup_{\textbf{z}\in V} \mathbb{E}\tau^{\textbf{z}}(\varepsilon, V) < \infty.$$ \end{Lemma} In the second step, we show that our model satisfies the classical ``minorization'' condition. \begin{Lemma}\label{lem:minorant_measure} There exist a number $p>0$ and a probability measure $\psi$ such that for a uniformly distributed r.v. $U \in [1,2]$, independent of everything else, we have $$\inf_{\textbf{z} \in V} \mathbb{P}\{Z^{\textbf{z}}(U) \in B\} \geq p \psi(B).$$ \end{Lemma} Using Lemmas $1$ and $2$ we can prove that the conditions of Theorem 7.3 from Borovkov and Foss (1992) are satisfied, which gives us the result. The proof of \lemt{positive_recurrence} is based on the fluid approximation. We dedicate the following subsection to formulating the corresponding definitions and auxiliary results. We point out that we need to assume condition \eq{simple_network} only in the proof of \lemt{positive_recurrence} and in \rem{spike_rate}.
One of the difficulties of our model is the lack of path-wise monotonicity for the number of spikes $\eta^{\textbf{z}}(t)$ with respect to signals $\xi^{(k)}_{ij}$ or the initial state $\textbf{z}$. In general, making one neuron fire a spike earlier may lead to other spikes occurring later. However, there is a ``partial monotonicity'' which allows us to get an upper bound for the process $\eta^{\textbf{z}}(t)$ with useful properties.
Since all neurons are inhibitory, one way to increase the number of spikes is to remove all interactions between neurons. Let the process $\widetilde{Z}^{\textbf{z}}$ be the transformation of the process $Z^{\textbf{z}}$ by replacing signals $\xi^{(k)}_{ji}$, $j\neq i$, by $0$ for $k\geq 1$ (trajectories of $X(t)$ remain the same). The resulting process has a simpler dependence between coordinates and a greater number of spikes before any time $t>0$ than $Z^{\textbf{z}}$. For our convenience, we want to remove the dependence of the upper bound on $\textbf{z}$ (which is significant because we take $\textbf{z}$ large in the following lemmas) and make the time until the first spike have the same distribution as the rest of the waiting times. Let the process $\bar{Z}$ be the transformation of the process $\widetilde{Z}^{\textbf{z}}$, so that $\bar{Z}_i(0) \overset{d}{=} \xi^{(1)}_{ii}$, $1\leq i \leq N$. Let $\widetilde{\eta}^{\textbf{z}}$ and $\bar{\eta}$ be the number of spikes in processes $\widetilde{Z}^{\textbf{z}}$ and $\bar{Z}$, respectively.
\begin{Lemma}\label{lem:spikes_bound} We have $$\eta^{\textbf{z}}_i(t) \leq \widetilde{\eta}^{\textbf{z}}_i(t), \ \ \text{a.s.,}$$ $$\widetilde{\eta}^{\textbf{z}}_i(t) \overset{st.}{\leq} 1 + \bar{\eta}_i(t),$$ and $\bar{\eta}_i(t)$ is an undelayed renewal process, which satisfies the elementary renewal theorem and the SLLN $$\frac{\mathbb{E} \bar{\eta}_i(t)}{t} \to \frac{\nu_i}{b_{ii}} \overset{a.s.}{\leftarrow} \frac{\bar{\eta}_i(t)}{t}, \ \ \text{as $t\to\infty$.}$$ \end{Lemma}
\subsection{Fluid model and corresponding auxiliary results}\label{sect:fluid_model}
Let us define the fluid approximation model. Let $\rho(\textbf{x}, \textbf{y}) = \sum_{i=1}^N |x_i - y_i|$ be the metric on our space $\mathcal{Z}$ and $\|\textbf{x}\| = \rho(\textbf{x}, 0)$, for $\textbf{x}, \textbf{y} \in \mathcal{Z}$. For each $\textbf{z}\in\mathcal{Z}$, introduce a family of scaled processes
$$\widehat{Z}^{\textbf{z}} = \left\lbrace \widehat{Z}^{\textbf{z}}(t) = \frac{Z^{\textbf{z}}(\|\textbf{z}\|t)}{\|\textbf{z}\|}, \ t\geq 0\right\rbrace .$$ We call the family
$$\widehat{Z} = \{\widehat{Z}^\textbf{z}, \ \|\textbf{z}\| \geq 1\}$$
\textit{relatively compact (at infinity)} if, for each sequence $\widehat{Z}^{\textbf{z}_n}$, $\|\textbf{z}_n\| \to \infty$, there exists a subsequence $\widehat{Z}^{\textbf{z}_{n_k}}$ that converges weakly (in Skorokhod topology) to some limit process $\varphi^Z = \{\varphi^Z(t), \ t\geq 0\}$, which is called a \textit{fluid limit}. A family of such limits is called a \textit{fluid model}. The fluid model is \textit{stable} if there exists a finite constant $T$ such that $\|\varphi^Z(T)\| = 0$ a.s. for any fluid limit $\varphi^Z$ (there are several equivalent definitions of stability of a fluid model, see e.g. Stolyar (1995)). Based on stability of a fluid model, one can prove positive recurrence of the original Markov process following the lines of Dai (1995).
Using \lemt{spikes_bound} we prove the next result.
\begin{Lemma}\label{lem:relative_compact} The family of processes $\{Z^{\textbf{z}}, \ \textbf{z}\in\mathcal{Z}\}$ is such that \begin{itemize} \item for all $t>0$ and $\textbf{z}\in\mathcal{Z}$,
$$\mathbb{E}\|Z^\textbf{z} (t)\| < \infty$$ and moreover, for any K,
$$\sup_{\|\textbf{z}\|\leq K}\mathbb{E}\|Z^\textbf{z} (t)\| < \infty;$$ \item for all $0\leq u < t$, the family of random variables
$$\{\rho(\widehat{Z}^\textbf{z}(u), \widehat{Z}^\textbf{z}(t)), \ \|\textbf{z}\|\geq 1\}$$ is uniformly integrable and there exists a constant $C$ such that
$$\limsup_{\|\textbf{z}\|\to\infty} \mathbb{P}\{\sup_{u',t'\in [u, t]}\rho(\widehat{Z}^\textbf{z}(u'), \widehat{Z}^\textbf{z}(t')) > C(t-u) \} = 0.$$ \end{itemize} \end{Lemma}
With this result, one can follow the lines of the proof of Theorem 7.1 from Stolyar (1995) to obtain the following.
\begin{Corollary} The family of processes $\widehat{Z}$ is relatively compact and every fluid limit $\varphi^Z$ is an a.s. Lipschitz continuous function with Lipschitz constant $C+1$. \end{Corollary} Additionally, the function $\varphi^Z(t)$ is a.s. differentiable at almost every $t$. We call a time $t_0$ a \textit{regular point} if $\varphi^Z(t)$ is differentiable at $t_0$. Furthermore, we have \begin{equation}\label{eq:lipschitz_integration} \varphi^Z(t) - \varphi^Z(s) = \int_{s}^t \frac{d \varphi^Z}{du}(u) du, \ \ t > s > 0, \end{equation} where the derivative is defined arbitrarily (for example, set to zero) outside regular points.
Let $\widehat{\eta}^{\textbf{z}}(t) = \eta^{\textbf{z}}(\|\textbf{z}\|t) / \|\textbf{z}\|$. Following the lines of the proof of \lemt{relative_compact}, one can prove similar results for the family $\widehat{\eta} = \{\widehat{\eta}^\textbf{z}, \ \|\textbf{z}\| \geq 1\}$. Denote a fluid limit of $\widehat{\eta}$ as $\varphi^{\eta}$. If at time $t$ we have $\varphi^{\eta}_i(t) >0$, then for certain sequence $\textbf{z}_n$ the number of spikes $ \eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t)$ becomes large. If additionally, $ \eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t)$ converges to infinity a.s., then by the law of large numbers
$$\frac{S_{ij}(\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t))}{\|\textbf{z}_n\|} = \frac{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t)}{\|\textbf{z}_n\|}\frac{S_{ij}(\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t))}{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t)} \Rightarrow \varphi^\eta_i(t) b_{ij}, \ \text{as $n\to\infty$.}$$ If $\varphi^\eta_i(t) = 0$, then the number of spikes is not as large and, if we prove that the left-hand side of the last equation converges to zero, the resulting convergence will be of the same form.
Using this idea we get the following result.
\begin{Lemma}\label{lem:weak_convergence}
Let $\widehat{\eta}^{\textbf{z}_n}$ converge weakly to a fluid limit $\varphi^{\eta}$ for a sequence $\textbf{z}_n$, $\|\textbf{z}_n\| \to \infty$ as $n\to\infty$. Then we have weak convergence of processes
$$\left(\left(\frac{1}{\|\textbf{z}_n\|}\sum_{i=1}^N S_{ij}(\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t))\right)_{j=1}^N, \ t\geq 0\right) \overset{D}{\Rightarrow} (\varphi^{\eta}(t)B, \ t\geq 0).$$ \end{Lemma}
\section{Proofs of auxiliary results}\label{sect:auxiliary_results} In this section we prove our auxiliary results for a general matrix $B$ and parameters $\nu_i$.
\subsection{Proof of \lemt{spikes_bound}}
We prove that $T^{\textbf{z}}_{ik} \geq \widetilde{T}^{\textbf{z}}_{ik} $: \begin{eqnarray*} T^{\textbf{z}}_{ik} & =&\inf\{t> T^{\textbf{z}}_{i(k-1)}: \ Z^{\textbf{z}}_i(t) \leq 0\} = \inf\{t> T^{\textbf{z}}_{i(k-1)}: \ Z^{\textbf{z}}_i(t) = 0\}\\ & = & \inf\{t> T^{\textbf{z}}_{i(k-1)}: \ z_i + X_i(t) + \sum_{j=1}^N S_{ji}(\eta^{\textbf{z}}_j(t)) = 0\}\\ & = & \inf\{t> T^{\textbf{z}}_{i(k-1)}: \ z_i + X_i(t) + S_{ii}(k-1) + \sum_{j\neq i} S_{ji}(\eta^{\textbf{z}}_j(t)) = 0\}\\ & \geq & \inf\{t> T^{\textbf{z}}_{i(k-1)}: \ z_i + X_i(t) + S_{ii}(k-1) = 0\}. \end{eqnarray*}
Since $T^{\textbf{z}}_{i0} = \widetilde{T}^{\textbf{z}}_{i0} = 0$, by induction we have $$T^{\textbf{z}}_{ik} \geq \inf\{t> \widetilde{T}^{\textbf{z}}_{i(k-1)}: \ z_i + X_i(t) + S_{ii}(k-1) = 0\} = \widetilde{T}^{\textbf{z}}_{ik}.$$ Thus, we get $\eta^{\textbf{z}}_i(t) \leq \widetilde{\eta}^{\textbf{z}}_i(t)$. Since $\widetilde{\eta}^{\textbf{z}}_i(t) - 1$ has the same distribution as $\bar{\eta}_i(t-\widetilde{T}^{\textbf{z}}_{i1}) \leq \bar{\eta}_i(t)$, we have the second inequality.
The process $\bar{\eta}_i(t)$ is an undelayed renewal process with waiting times having the same distribution as $\tau_i = \inf\{t> 0: X_i (t) = -\xi^{(1)}_{ii}\}$. Using the strong law of large numbers, one can prove that $\mathbb{E} \tau_i = b_{ii}/\nu_i$ (see also Borovkov (1965) for a detailed proof). Therefore, the rest of the proof follows via standard arguments of renewal theory (see e.g. Feller (1971)).
\subsection{Proof of \lemt{relative_compact}} \textbf{Part 1.} Using \lemt{spikes_bound} and positivity of $\xi^{(k)}_{ij}$, we get $$
\|Z^\textbf{z} (t)\| = \sum_{i=1}^N |z_i + X_i(t) + \sum_{j=1}^N S_{ji}(\eta^{\textbf{z} }_j(t))| \leq \|\textbf{z}\| + \|X(t)\|
+\sum_{i=1}^N\sum_{j=1}^N S_{ji}(\widetilde{\eta}^{\textbf{z} }_j(t)). $$ We have $$ \{\widetilde{\eta}^{\textbf{z} }_i (t) = m\} = \{-\sum_{k=1}^m \xi^{(k)}_{ii} < z_i + \inf_{0\leq s\leq t} X_i(s) \leq -\sum_{k=1}^{m-1} \xi^{(k)}_{ii}\} $$ and, therefore, $$\widetilde{\eta}^{\textbf{z} }_i (t) = \inf\{m \in \mathbb{Z}^+ : \ \sum_{k=1}^m \xi^{(k)}_{ii} > -z_i - \inf_{0\leq s\leq t} X_i(s)\}.$$ Since $\{\{\xi^{(k)}_{ij}\}_{i, j=1}^N\}_{k=1}^\infty$ and $(X(t), \ t\geq 0)$ are independent, the random variable $\widetilde{\eta}^{\textbf{z} }_i (t)$ is a stopping time for the sequence $\{\{\xi^{(k)}_{ij}\}_{i, j=1}^N\}_{k=1}^\infty$. By Wald's identity, $$\mathbb{E} S_{ji}(\widetilde{\eta}^{\textbf{z} }_j(t)) = \mathbb{E}\sum_{k=1}^{\widetilde{\eta}^{\textbf{z} }_j(t)} \xi^{(k)}_{ji} = \mathbb{E}\widetilde{\eta}^{\textbf{z} }_j (t)b_{ji} < \infty.$$
\textbf{Part 2.} We have \begin{equation}\label{eq:distance_t-u} \begin{split}
\rho(\widehat{Z}^\textbf{z}(u), \widehat{Z}^\textbf{z}(t)) & = \sum_{i=1}^N \frac{|Z^{\textbf{z}}_i(\|\textbf{z}\|t)-Z^{\textbf{z}}_i(\|\textbf{z}\|u)|}{\|\textbf{z}\|} \leq (t-u)\sum_{i=1}^N \nu_i\\
& + \sum_{i=1}^N \frac{|X^0_i(\|\textbf{z}\|t)-X^0_i(\|\textbf{z}\|u)|}{\|\textbf{z}\|}
+ \sum_{i=1}^N\sum_{j=1}^N \frac{S_{ji}(\eta^{\textbf{z}}_j(\|\textbf{z}\|t)) - S_{ji}(\eta^{\textbf{z}}_j(\|\textbf{z}\|u))}{\|\textbf{z}\|}. \end{split} \end{equation} Process $X_i$ is a L\'evy process, from which we have
$$\mathbb{E} \frac{|X^0_i(\|\textbf{z}\|t)|}{\|\textbf{z}\|} \leq 2\sup_{0\leq s \leq t}\mathbb{E}|X^0_i(s)|, \ \text{for $\|\textbf{z}\| \geq 1$,}$$ and, therefore, the second summand in the right-hand side of \eq{distance_t-u} is uniformly integrable. By \lemt{spikes_bound}, we have
$$S_{ij}(\eta^{\textbf{z}}_i(\|\textbf{z}\|t)) - S_{ij}(\eta^{\textbf{z}}_i(\|\textbf{z}\|u)) \overset{st.}{\leq} S_{ij}(1 + \bar{\eta}_i(\|\textbf{z}\|(t-u))).$$
Since $S_{ij}(n)/n \to b_{ij}$ and $\bar{\eta}_i(\|\textbf{z}\|(t-u)) \to \infty$ a.s., we have
$$\frac{S_{ij}(1 + \bar{\eta}_i(\|\textbf{z}\|(t-u)))}{1 +\bar{\eta}_i(\|\textbf{z}\|(t-u))} \overset{a.s.}{\to } b_{ij},$$ and therefore \begin{equation*} \begin{split}
0 & \leq \frac{S_{ij}(\eta^{\textbf{z}}_i(\|\textbf{z}\|t)) - S_{ij}(\eta^{\textbf{z}}_i(\|\textbf{z}\|u))}{\|\textbf{z}\|} \overset{st.}{\leq} \frac{S_{ij}(1 + \bar{\eta}_i(\|\textbf{z}\|(t-u)))}{\|\textbf{z}\|} \\
& = \frac{1 +\bar{\eta}_i(\|\textbf{z}\|(t-u))}{\|\textbf{z}\|}\frac{S_{ij}(1 + \bar{\eta}_i(\|\textbf{z}\|(t-u)))}{1 +\bar{\eta}_i(\|\textbf{z}\|(t-u))} \to (t-u)\frac{\nu_i}{b_{ii}}b_{ij} \end{split} \end{equation*}
a.s. and in $L_1$, as $\|\textbf{z}\|\to\infty$. Then the distance on the left-hand side of \eq{distance_t-u} is bounded above by the sum of uniformly integrable random variables and therefore is also uniformly integrable.
Given
$$C>\sum_{i=1}^N\nu_i\left( 1 + \sum_{j=1}^N \frac{b_{ij}}{b_{ii}}\right) ,$$ there exists $\varepsilon >0$ such that for $\|\textbf{z}\|$ large
\begin{equation*} \begin{split}
\mathbb{P}\{\sup_{u',t'\in [u, t]}\rho(\widehat{Z}^\textbf{z}(u'), \widehat{Z}^\textbf{z}(t')) > C(t-u) \} & \leq \mathbb{P}\left\{\sup_{u',t'\in [u, t]}\sum_{i=1}^N \frac{|X^0_i(\|\textbf{z}\|t')-X^0_i(\|\textbf{z}\|u')|}{\|\textbf{z}\|} > \varepsilon(t-u) \right\}\\
& \leq 2N \mathbb{P}\left\{ \sup_{s\in [0, t-u]}\frac{|X^0_1(\|\textbf{z}\|s)|}{\|\textbf{z}\|} > \frac{\varepsilon}{2N}(t-u) \right\} \to 0, \end{split} \end{equation*} by Theorem 36.8 from Sato (1999).
\subsection{Proof of \lemt{weak_convergence}} By Skorokhod (1956), it is sufficient to prove that there is a convergence of finite-dimensional distributions on everywhere dense set of times $t$ and that a tightness condition holds. Tightness can be deduced from the second statement of \lemt{relative_compact}. We prove that \begin{equation}\label{eq:finite_dimensional_convergence}
\mathbb{P}\left\{\bigcap_{k=1}^K \bigcap_{i, j =1}^N \left\{\frac{S_{ij}(\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k))}{\|\textbf{z}_n\| } < y^k_{ij}\right\}\right\} \to \mathbb{P}\left\{\bigcap_{k=1}^K \bigcap_{i =1}^N \left\{\varphi^{\eta}_i(t_k) < \min_{1\leq j\leq N}\frac{y^k_{ij}}{b_{ij}}\right\}\right\} \end{equation} as $n\to\infty$, for appropriate $t\geq 0$ and $\textbf{y}\in (0, \infty)^{K N^2}$.
Define sets
$$C_{ij}^k(n) = \left\{ \frac{S_{ij}(\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k))}{\|\textbf{z}_n\| } < y^k_{ij} \right\},$$
$$D^k_i (n, m) = \{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k) > m\},$$
$$E^k_{ij} (n, \delta) = \left\{\left| \frac{S_{ij}(\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k))}{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k)} - b_{ij} \right| \leq \delta \right\},$$
$$F^{k \pm}_{i} (n, \delta) = \left\{\frac{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k)}{\|\textbf{z}_n\|} < \min_{1 \leq j \leq N} \frac{y^k_{ij}}{b_{ij} \mp \delta} \right\}, $$ where $\delta \in (0, \min_{i,j} b_{ij})$. We prove that $$\mathbb{P}\left\{F^{k -}_{i} (n, \delta)\right\} + o(1) \leq \mathbb{P}\left\{\bigcap_{j=1}^N C_{ij}^k(n)\right\} \leq \mathbb{P}\left\{F^{k +}_{i} (n, \delta)\right\} + o(1), \ \text{as $n\to \infty$.}$$ For any $\textbf{y}\in (\mathbb{R}^+)^{K N^2}$ such that $(\min_j (y^k_{ij}/b_{ij}))_{i=1}^N$ is a continuity point of the cdf of $(\varphi^{\eta}(t_k))_{k=1}^K$, there is a neighbourhood $\Delta$ of $\textbf{y}$ such that every point $\textbf{x} \in \Delta$ is also a continuity point. Thus, for $\delta$ small we have
\begin{equation*} \begin{split}
\mathbb{P}\left\{\bigcap_{k=1}^K \bigcap_{i=1}^N F^{k \pm}_{i} (n, \delta)\right\} & = \mathbb{P}\left\{\bigcap_{k=1}^K\bigcap_{i =1}^N\left\{\frac{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_k)}{\|\textbf{z}_n\| } < \min_{1 \leq j \leq N} \frac{y^k_{ij}}{b_{ij} \mp \delta}\right\}\right\} \\
& \to \mathbb{P}\left\{\bigcap_{k=1}^K\bigcap_{i =1}^N \left\{\varphi^{\eta}_i(t_k) < \min_{1\leq j\leq N}\frac{y^k_{ij}}{b_{ij} \mp \delta}\right\}\right\}, \ \text{as $n \to \infty$,} \end{split} \end{equation*} and, therefore, by letting $\delta$ converge to $0$, we get \eq{finite_dimensional_convergence}.
By the law of large numbers, we have $$\mathbb{P}\{D^k_i (n, m) \cap \overline{E^k_{ij} (n, \delta)}\} \to 0, \ \ \text{as $m\to\infty$,}$$ and $$\mathbb{P}\{\overline{C_{ij}^k(n)} \cap \overline{D^k_i (n, m)}\} \to 0, \ \ \text{as $n\to\infty$,}$$
if $m=o(\|\textbf{z}_n\|)$. Take $m=\sqrt{\|\textbf{z}_n\| }$.
From the definitions we have \begin{multline*} \left( \bigcap_{j=1}^N C_{ij}^k(n) \cap D^k_i (n, m) \cap E^k_{ij} (n, \delta)\right) \subseteq \left( F^{k +}_{i} (n, \delta) \cap D^k_i (n, m) \cap E^k_{ij} (n, \delta)\right)\\
= \left( F^{k +}_{i} (n, \delta) \cap D^k_i (n, m)\right) \setminus \left( F^{k +}_{i} (n, \delta) \cap D^k_i (n, m) \cap \overline{E^k_{ij} (n, \delta)}\right) \end{multline*} and $$ \left( F^{k +}_{i} (n, \delta) \cap D^k_i (n, m)\right) = F^{k +}_{i} (n, \delta) \setminus \left( F^{k +}_{i} (n, \delta) \cap \overline{D^k_i (n, m)}\right).$$
Since $m=o(\|\textbf{z}_n\|)$, we have $F^{k +}_{i} (n, \delta) \cap \overline{D^k_i (n, m)} = \overline{D^k_i (n, m)}$ for $n$ large. Combining these relations, we get
\begin{equation*} \begin{split} \mathbb{P}\left\{\bigcap_{j=1}^N C_{ij}^k(n)\right\} & = \mathbb{P}\left\{\bigcap_{j=1}^N C_{ij}^k(n) \cap D^k_i (n, m)\right\} +
\mathbb{P}\left\{\bigcap_{j=1}^N C_{ij}^k(n) \cap \overline{D^k_i (n, m)}\right\} \\ & \leq \mathbb{P}\left\{F^{k +}_{i} (n, \delta)\right\} - \mathbb{P}\left\{\overline{D^k_i (n, m)}\right\} + \mathbb{P}\left\{\bigcap_{j=1}^N C_{ij}^k(n) \cap \overline{D^k_i (n, m)}\right\} + o(1), \end{split} \end{equation*} as $n\to\infty$, and \begin{multline*} \mathbb{P}\left\{\overline{D^k_i (n, m)}\right\} - \mathbb{P}\left\{\bigcap_{j=1}^N C_{ij}^k(n) \cap \overline{D^k_i (n, m)}\right\} = \mathbb{P}\left\{\bigcup_{j=1}^N \overline{C_{ij}^k(n)} \cap \overline{D^k_i (n, m)}\right\} \to 0, \end{multline*} as $n\to\infty$. Following the same lines, replacing the set $F^{k+}_i(n, \delta)$ with the set $F^{k-}_i(n, \delta)$ and the relations $\subseteq$ and $\leq$ with $\supseteq$ and $\geq$, we get the lower bound.
\section{Proof of \lemt{positive_recurrence}}\label{sect:lemma_positive_recurrence}
We prove that under condition \eq{simple_network} fluid limits $\varphi^Z(t)$ are deterministic and uniquely defined by the initial value $\varphi^{Z}(0)$. Further, each coordinate of a fluid limit is a continuous piecewise linear function which reaches zero in finite time and then remains there.
Let sequence $\textbf{z}_n$, $\|\textbf{z}_n\| \to \infty$, be such that $$\widehat{Z}^{\textbf{z}_n} \overset{\mathcal{D}}{\Rightarrow} \varphi^Z \ \text{and} \ \widehat{\eta}^{\textbf{z}_n} \overset{\mathcal{D}}{\Rightarrow} \varphi^\eta.$$ By Corollary $1$, function $\varphi^Z$ is a.s. Lipschitz continuous.
Following the lines of the proof of \lemt{weak_convergence}, one can easily show that
$$\left(\widehat{Z}^{\textbf{z}_n}(t) - \frac{\textbf{z}_n}{\|\textbf{z}_n\|} - \frac{X(\|\textbf{z}_n\|t)}{\|\textbf{z}_n\|}, \ t\geq 0 \right) \overset{\mathcal{D}}{\Rightarrow} (\varphi^Z(t) - \varphi^Z(0) + \nu t \textbf{1} , \ t\geq 0).$$
Now, given \eq{dynamic_equation} and \lemt{weak_convergence}, we have $$(\varphi^\eta(t) B, \ t\geq 0) \overset{d}{=} (\varphi^Z(t) - \varphi^Z(0) + \nu t \textbf{1} , \ t\geq 0).$$
By \rem{spike_rate}, the matrix $B$ is invertible and we have $$(\varphi^\eta(t), \ t\geq 0) \overset{d}{=} \left( \left( \varphi^Z(t) - \varphi^Z(0) + \nu t \textbf{1}\right) B^{-1}, \ t\geq 0\right) .$$ Since $\varphi^\eta$ is a weak limit, we can assume without loss of generality $\varphi^\eta(t) = \left( \varphi^Z(t) - \varphi^Z(0) + \nu t \textbf{1}\right) B^{-1}$. Thus, $\varphi^\eta$ is differentiable wherever $\varphi^Z$ is.
Assume that $\|\varphi^Z(t_0)\| > 0$ and $t_0$ is a regular point (see \sectn{fluid_model}). Let $N_0 = \sharp\{i: \ \varphi^Z_i(t_0) = 0\} < N$. Then, with a proper reordering, $ \varphi^Z_i(t_0) = 0$, for $i\in\{1, \ldots, N_0\}$ and $ \varphi^Z_i(t_0) > 0$, for $i\in\{N_0 + 1, \ldots, N\}$. Since $\varphi^{Z}_i(t) \geq 0$ and $t_0$ is a regular point, from $\varphi^{Z}_i(t_0) = 0$ we get $(\varphi^{Z}_i)'(t_0) = 0$. We find the values of $$(\varphi^Z_i)'(t_0) = -\nu + H_i(\varphi^\eta_i)'(t_0) + \sum_{j\neq i}w_{j}(\varphi^\eta_j)'(t_0).$$ We prove that $(\varphi^\eta_i)'(t_0) = 0$ for $i> N_0$ (if a potential is very far from the threshold then the neuron does not have a spike for a long time) and, therefore, \begin{equation}\label{eq:zero_level_fluid} 0 = -\nu + (H_i-w_i)(\varphi^\eta_i)'(t_0)+ \sum_{j=1}^{N_0}w_{j}(\varphi^\eta_j)'(t_0), \ \ i=1,\ldots, N_0, \end{equation} \begin{equation}\label{eq:positive_level_fluid} (\varphi^Z_i)'(t_0) = -\nu + \sum_{j=1}^{N_0}w_j(\varphi^\eta_j)'(t_0), \ \ i=N_0+1, \ldots, N. \end{equation}
Let $$h = \min_{N_0 + 1 \leq i \leq N} \varphi^Z_i(t_0).$$ We prove that for any $\Delta < h/(4\nu)$ and $i\in[N_0 +1, N]$ equality $\varphi^{\eta}_i(t_0 + \Delta) = \varphi^{\eta}_i(t_0)$ holds. Since $\widehat{Z}_i^{\textbf{z}_n}(t_0) \Rightarrow \varphi^Z_i (t_0)$, we have $\widehat{Z}_i^{\textbf{z}_n}(t_0) > h/2 > 2\nu\Delta$ a.s. for $n$ large. We have \begin{equation*} \begin{split}
\mathbb{P}\{\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|(t_0 + \Delta)) > \eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_0) \} & \leq \mathbb{P}\{2\nu\Delta\|\textbf{z}_n\| + \inf_{0\leq s\leq \Delta} (X_i(\|\textbf{z}_n\|(t_0 + s)) - X_i(\|\textbf{z}_n\|t_0)) \leq 0\}\\
& \leq \mathbb{P}\{\nu\Delta\|\textbf{z}_n\| + \inf_{0\leq s\leq \Delta} (X^0_i(\|\textbf{z}_n\|(t_0 + s)) - X^0_i(\|\textbf{z}_n\|t_0)) \leq 0\}\\
& = \mathbb{P}\{\sup_{0\leq s\leq \Delta} X^0_i(\|\textbf{z}_n\|s) \geq \nu\Delta\|\textbf{z}_n\|\}. \end{split} \end{equation*}
Thus, by Theorem 36.8 from Sato (1999), we have convergence $\eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|(t_0 + \Delta)) - \eta^{\textbf{z}_n}_i(\|\textbf{z}_n\|t_0) \to 0$ in probability and convergence $\widehat{\eta}^{\textbf{z}_n}_i(t_0 + \Delta) - \widehat{\eta}^{\textbf{z}_n}_i(t_0) \to 0$ a.s., as $n\to\infty$. Thus, equality $\varphi^{\eta}_i(t_0 + \Delta) = \varphi^{\eta}_i(t_0)$ holds for $\Delta < h/(4\nu)$ and $(\varphi^{\eta}_i)'(t_0) = 0$.
Using \rem{spike_rate} we solve system \eq{zero_level_fluid} and get $$(\varphi^\eta_i)'(t_0) = \frac{\nu}{H_i - w_i} \frac{1}{1 + \sum_{k=1}^{N_0}\frac{w_k}{H_k - w_k}}, \ \ i=1,\ldots, N_0,$$ and therefore, $$(\varphi^Z_i)'(t_0) = -\nu + \frac{\nu}{1 + \sum_{k=1}^{N_0}\frac{w_k}{H_k - w_k}} \sum_{j=1}^{N_0}\frac{w_j}{H_j - w_j} = -\frac{\nu}{1 + \sum_{k=1}^{N_0}\frac{w_k}{H_k - w_k}}, \ \ i=N_0+1, \ldots, N.$$
Therefore, the process $\varphi^Z$ is deterministic and piecewise linear. We have $$\varphi_i^Z(0)\leq 1 \ \text{and} \ (\varphi^Z_i)'(t_0) \leq -\frac{\nu}{1 + \sum_{k=1}^{N}\frac{w_k}{H_k - w_k}}, \ \ i=1,\ldots, N,$$ for any regular point $t_0$ such that $\varphi_i^Z(t_0) >0$. Thus, from \eq{lipschitz_integration} we have that within the time interval $(0, \nu^{-1} (1 + \sum_{k=1}^{N}\frac{w_k}{H_k - w_k}))$ the process $\varphi^Z$ reaches zero and stays there.
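The solution of the linear system \eq{zero_level_fluid} used above can also be verified numerically; the purely illustrative NumPy sketch below does so for randomly generated parameters.
\begin{verbatim}
import numpy as np

nu, N0 = 1.0, 3
rng = np.random.default_rng(2)
w = rng.uniform(0.5, 1.5, N0)
H = w + rng.uniform(0.5, 2.0, N0)

# Equations: 0 = -nu + (H_i - w_i) d_i + sum_j w_j d_j, i = 1, ..., N0.
A = np.diag(H - w) + np.tile(w, (N0, 1))
d = np.linalg.solve(A, nu * np.ones(N0))

Sigma = np.sum(w / (H - w))
print(np.allclose(d, nu / ((H - w) * (1.0 + Sigma))))   # True
print(np.isclose(-nu + w @ d, -nu / (1.0 + Sigma)))     # True
\end{verbatim}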
Let $\tau^{\textbf{z}}(\varepsilon, B) = \inf\{t> \varepsilon: \ Z^{\textbf{z}}(t) \in B\}$. Since fluid limits are stable, there exists $k_0 >0$ such that for $V = \{\textbf{z}\in \mathcal{Z}: \|\textbf{z}\| < k_0\}$ we have $$\sup_{\textbf{z}\in V} \mathbb{E}\tau^{\textbf{z}}(\varepsilon, V) < \infty.$$
\section{Proof of \lemt{minorant_measure}}\label{sect:lemma_mixing} We prove existence of a lower bound for $\inf_{\textbf{z} \in V} \mathbb{P}\{Z^{\textbf{z}}(U) \in B\}$, where $V= \{\textbf{z}\in \mathcal{Z}: \|\textbf{z}\| < k_0\}$ (see the end of the previous section). By Theorem 19.2 from Sato (1999), the L\'evy process $X(t)$ can be represented as a sum $X^1(t) + X^2(t)$ of two independent processes, a jump process $X^1(t)$ and a Gaussian process $X^2(t)$ with drift. We consider cases where at least one coordinate is close enough to zero. If all the coordinates of $\textbf{z}$ are bounded away from zero, then the proof follows similar lines.
Since random variables $\xi^{(1)}_{ij}$, $i, j\in [1, N]$, are strictly positive, there are constants $k^+_1, k^-_1 >0$ such that $$p_1 = \mathbb{P}\{A_1\} \equiv \mathbb{P}\left\{(\xi^{(1)}_{ij})_{i, j=1}^N \in [k^-_1, k^+_1]^{N^2}\right\} >0.$$ Without loss of generality, we assume that $z_1 < k^-_1/6$.
First, we bound the jump process $X^1(t)$ in the time interval $[0, 2]$, which includes the time interval $[0, U]$, and take time instant $t_0 \leq 1/2$ such that $\mathbb{P}\{X^1_1(t_0) < k^-_1/6\} >0$. Denote $$A_2 = \left\{\max_{1\leq i \leq N} (X^1_i(2)) < k_2\right\} \cap \left\{X^1_1(t_0) < k^-_1/6\right\},$$ and take a constant $k_2 >0$ such that $p_2 = \mathbb{P}\{A_2\} >0.$ Next, we use the condition that the Gaussian process $X^2(t)$ is non-degenerate and none of its coordinates is a deterministic line. Thus, we denote \begin{equation*} \begin{split} A_3 & = \left\{\max_{1\leq i \leq N} \sup_{0\leq t \leq \frac{1}{2}} (X^2_i(t)) \leq k_3-k_2 - k^-_1\right\} \cap \left\{\min_{1\leq i \leq N} \inf_{0\leq t \leq \frac{1}{2}} (X^2_i(t)) \geq -\frac{k^-_1}{2}\right\}\\ & \cap \left\{\inf_{0\leq t \leq t_0} (X^2_1(t)) \leq -\frac{k^-_1}{3}\right\} \end{split} \end{equation*} and take a constant $k_3 > k_2 - k^-_1$ such that $p_3 = \mathbb{P}\{A_3\} >0$. One can show that, given $A_2 \cap A_3$, the first spike occurs up to time $t_0$ and the second one can occur only after time $1/2$.
Denote the new set $D = A_1 \cap A_2 \cap A_3$. From independence of $X^1$, $X^2$ and $(\xi^{(1)}_{ij})_{i, j =1}^N$ we have $\mathbb{P}\{D\} = p_1p_2p_3 >0$. We have \begin{equation}\label{eq:away_from_zero_after_spike} D \subseteq \left\{\frac{k^-_1}{2} \leq Z_i^{\textbf{z}}\left( \frac{1}{2}\right) + X_i^1(U) - X_i^1\left( \frac{1}{2}\right) \leq k^+_1 + k_2 + k_3, \ i = 1, \ldots, N\right\}. \end{equation} We restrict ourselves to events without the second spike up to time $U$. Denote
$$G_2(t_1, t_2, k) = \left\{\min_{1\leq i \leq N} \inf_{t_1\leq s \leq t_2} (X^2_i(s)) > -k\right\}.$$ Using \eq{away_from_zero_after_spike}, we get that, given $G_2\left( 1/2, U, k^-_1/2\right) \cap D$, the second spike occurs after time $U$.
Let $K = k^+_1 + k_2 + k_3$. We prove that, for any point $\textbf{y} \in (K, 2K)^N$ and a measurable set $\Delta \subset [0, K]^N$, there is a number $p>0$ such that $$ \mathbb{P}\left\{\left\{Z^{\textbf{z}}(U) \in \textbf{y} + \Delta\right\} \cap G_2\left( \frac{1}{2}, U, \frac{k^-_1}{2}\right) \cap D \right\} \\ \geq p \lambda(\Delta), $$ where $\lambda$ is the Lebesgue measure. Denote, $\widehat{y}_i = y_i - Z_i^{\textbf{z}}\left( 1/2\right) - \left( X_i^1(U) - X_i^1\left( 1/2\right)\right) $, for $i\in[1, N]$. Then we have $$\left\{Z^{\textbf{z}}(U) \in \textbf{y} + \Delta\right\} = \left\{ X^2(U) - X^2\left( \frac{1}{2} \right) \in\widehat{\textbf{y}} + \Delta\right\}.$$
Since $X^2$ is a Markov process, the events $ \left\{ X^2(U) - X^2\left( 1/2 \right) \in\widehat{\textbf{y}} + \Delta\right\} \cap G_2\left( 1/2, U, k^-_1/2\right)$ and $D$ are independent, conditioned on a value of $\widehat{\textbf{y}}$. Thus, we have \begin{equation*} \begin{split} &\mathbb{P}\left\{\left\{ X^2(U) - X^2\left( \frac{1}{2} \right) \in\widehat{\textbf{y}} + \Delta\right\} \cap G_2\left( \frac{1}{2}, U, \frac{k^-_1}{2}\right) \cap D \right\}\\ & = \mathbb{E}\left( \mathbb{P}\left\{\left\{ X^2(U) - X^2\left( \frac{1}{2} \right) \in\widehat{\textbf{y}} + \Delta\right\} \cap G_2\left( \frac{1}{2}, U, \frac{k^-_1}{2}\right) \mid \ \widehat{\textbf{y}}\right\} \mathbb{P}\left\{D \mid \ \widehat{\textbf{y}}\right\}\right) \end{split} \end{equation*}
Next, we need a technical lemma regarding a monotonicity property of the Brownian bridge. \begin{Lemma}\label{lem:brownian_bridge} For any $t, k>0$ and $\Delta \subset [0, \infty)^N$ we have $$\mathbb{P}\{G_2(0, t, k) \mid \ X^2(t) \in \Delta\} \geq \mathbb{P}\{G_2(0, t, k) \mid \ X^2(t) = \textbf{0}\} >0.$$ \end{Lemma}
Given $D$, we have $\widehat{y}_i \geq 0$, for $i \in [1, N]$, and we can use \lemt{brownian_bridge} to obtain \begin{multline*} \mathbb{P}\left\{\left\{ X^2(U) - X^2\left( \frac{1}{2} \right) \in\widehat{\textbf{y}} + \Delta\right\} \cap G_2\left( \frac{1}{2}, U, \frac{k^-_1}{2}\right) \mid \ \widehat{\textbf{y}}\right\}\\
\overset{a.s.}{\ge} \mathbb{P}\left\{ X^2(U) - X^2\left( \frac{1}{2} \right) \in\widehat{\textbf{y}} + \Delta\mid \ \widehat{\textbf{y}}\right\} \mathbb{P}\left\lbrace G_2\left( \frac{1}{2}, U, \frac{k^-_1}{2}\right) \mid \ X^2\left( U\right) -X^2\left( \frac{1}{2}\right) = \textbf{0} \right\rbrace. \end{multline*}
The density of $X^2(t)$ is bounded away from zero on any compact set, and $X^2$ and $U$ are independent. Therefore, there exists $p_4 >0$ such that, given $D$, for a measurable set $\Delta \subseteq [0, K]^N$ we have $$\mathbb{P}\left\{ X^2(U) - X^2\left( \frac{1}{2} \right) \in\widehat{\textbf{y}} + \Delta\mid \ \widehat{\textbf{y}}\right\} \overset{a.s.}{\geq} p_4\lambda(\Delta).$$ Denote $p'_4 = p_4\mathbb{P}\left\lbrace G_2\left( \frac{1}{2}, U, \frac{k^-_1}{2}\right) \mid \ X^2\left( U\right) -X^2\left( \frac{1}{2}\right) = \textbf{0} \right\rbrace >0$. Combining the above, we get that if $z_1 < k^-_1/6$ then $$\mathbb{P}\{Z^{\textbf{z}}(U) \in \textbf{y} + \Delta\} \geq p_1p_2p_3p'_4\lambda(\Delta),$$ for $\textbf{y} \in (K, 2K)^N$ and $\Delta \subseteq [0, K]^N$.
\appendix \section*{Appendix} \section*{Comments on \rem{partial_stability}} In a system of two inhibitory neurons it is sufficient for stability to assume that the signals are smaller on the average than the thresholds. However, in a system of three inhibitory neurons it is not enough. For the matrix $B=\begin{pmatrix} 8 & 2 & 6\\ 2 & 8 & 6\\ 6 & 6 & 8 \end{pmatrix}$ and drifts $\nu_i = 1$, $i=1,2,3$, the first two neurons can form a stable system that ``pushes'' the potential of the third neuron to infinity. Here is an example of sufficient conditions on the matrix $B$ and parameters $\nu_i$ to avoid such cases (we believe that they can be weakened). \begin{itemize} \item For every set $S \subseteq [1, N]$ the matrix $B^S = (b_{ij})_{i, j \in S}$ is invertible and $a^S_i = \left( (B^S)^{-1}f^S\right)_i >0 $, $i\in S$, where $f^S = (\nu_i)_{i\in S}$; \item $\sum_{i\in S} a^S_i \sum_{j\notin S} b_{ij} < \sum_{j\notin S} \nu_j.$ \end{itemize}
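A quick numerical check of the three-neuron example above (purely illustrative, assuming NumPy and assuming that the subsystem $\{1,2\}$ settles at the stationary spike rates determined by the balance equations as in \rem{spike_rate}):
\begin{verbatim}
import numpy as np

B = np.array([[8.0, 2.0, 6.0],
              [2.0, 8.0, 6.0],
              [6.0, 6.0, 8.0]])
nu = np.ones(3)

S = [0, 1]                                  # the subset {1, 2}
# Stationary balance of the subsystem: sum_{i in S} a_i b_{ij} = nu_j, j in S.
a = np.linalg.solve(B[np.ix_(S, S)].T, nu[S])
inflow_3 = a @ B[S, 2]                      # mean signal rate into neuron 3
print(a, inflow_3)                          # [0.1 0.1]  1.2
\end{verbatim}
Neuron 3 then receives signals at mean rate $1.2$ per unit time while drifting down only at rate $\nu_3 = 1$, so its potential grows linearly.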
\section*{Proof of \rem{spike_rate}}
We prove the remark by assuming that the system of equations $\textbf{x} B = \textbf{1}$ has a solution, and then we prove that the solution is unique and has all positive coordinates. Let us rewrite the system $\textbf{x} B = \textbf{1}$ as $$\sum_{j=1}^N x_j b_{ji} = H_i x_i + \sum_{j\neq i} w_j x_j = (H_i-w_i)x_i + \sum_{j=1}^N w_j x_j = 1 \ \ i= 1, \ldots, N.$$
Denote $M = \sum_{j=1}^N w_j x_j$ and get $$x_i = \frac{1 - M}{H_i - w_i}, \ \ i= 1, \ldots, N \ \Rightarrow \ M = (1-M)\sum_{i=1}^N\frac{w_i}{H_i - w_i} \ \Rightarrow \ M = \frac{\sum_{i=1}^N\frac{w_i}{H_i - w_i}}{1 + \sum_{i=1}^N\frac{w_i}{H_i - w_i}} < 1.$$ Thus, $M$ and, therefore, $\textbf{x}$ are uniquely defined through $\{H_i, w_i\}_{i=1}^N$, and $x_i > 0$, $i=1, \ldots, N$. The rest of the proof follows from the proof of \lemt{positive_recurrence}.
\section*{Proof of \lemt{brownian_bridge}}
First, take $X(t)$, $t\in [0,1]$, a standard one-dimensional Brownian motion. Then for the Brownian bridge $B_x(t) = xt + B_0(t)$ and $x\leq y$ we have \begin{equation*} \begin{split}
\mathbb{P}\{ \inf_{0\leq t \leq 1} (X(t)) \geq -k | \ X(1) = x\} & = \mathbb{P}\{ \inf_{0\leq t \leq 1} (xt + B_0(t)) \geq -k\}\\
& \leq \mathbb{P}\{ \inf_{0\leq t \leq 1} (yt + B_0(t)) \geq -k\}\\
& = \mathbb{P}\{ \inf_{0\leq t \leq 1} (X(t)) \geq -k | \ X(1) = y\}. \end{split} \end{equation*} For a general $N$-dimensional Brownian motion $X(t)$, $t\in[0, 1]$, with a non-singular covariance matrix $\Sigma$, there exists an invertible matrix $L$ and a vector $\textbf{v}$ such that $W(t) = LX(t) + \textbf{v}t$ is a vector of $N$ independent standard Brownian motions. Let $B_{\textbf{0}}(t)$ denote the corresponding $N$-dimensional Brownian bridge. Denote $A_k = [-k, \infty)^N$. Then for $\textbf{x}, \textbf{y}$, such that $x_i \leq y_i$, we have \begin{equation*} \begin{split}
&\mathbb{P}\left\{\min_{1 \leq i \leq N} \inf_{0\leq t \leq 1} (X_i(t)) \geq -k | \ X(1) = \textbf{x} \right\}\\
& = \mathbb{P}\left\{X(t) \in A_k, \ t\in[0, 1]| \ X(1) = \textbf{x}\right\} = \mathbb{P}\left\{W(t) \in LA_k + \textbf{v}t, \ t\in[0, 1] | \ W(1) = L\textbf{x} + \textbf{v}\right\}\\ & = \mathbb{P}\left\{L\textbf{x}t + \textbf{v}t + B_{\textbf{0}}(t) \in LA_k + \textbf{v}t, \ t\in[0, 1]\right\} = \mathbb{P}\left\{\textbf{x}t + L^{-1}B_{\textbf{0}}(t) \in [-k, \infty)^N, \ t\in[0, 1]\right\}\\
& \leq \mathbb{P}\left\{\textbf{y}t + L^{-1}B_{\textbf{0}}(t) \in [-k, \infty)^N, \ t\in[0, 1]\right\} = \mathbb{P}\left\{\min_{1 \leq i \leq N} \inf_{0\leq t \leq 1} (X_i(t)) \geq -k | \ X(1) = \textbf{y}\right\}. \end{split} \end{equation*} Using the properties of the Brownian bridge, one can replace the point $\textbf{y}$ with a measurable set $\Delta$ and prove the statement.
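A crude Monte Carlo illustration of the monotonicity used in \lemt{brownian_bridge} (purely illustrative, assuming NumPy; the time discretization slightly overestimates the probabilities):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
m, steps, k = 10000, 500, 0.5
t = np.linspace(0.0, 1.0, steps + 1)
W = np.cumsum(np.sqrt(1.0 / steps) * rng.standard_normal((m, steps)), axis=1)
W = np.hstack([np.zeros((m, 1)), W])
B0 = W - t * W[:, -1:]                      # standard Brownian bridge samples
for x in (-1.0, 0.0, 1.0):
    print(x, np.mean((x * t + B0).min(axis=1) >= -k))
# The estimated probabilities increase with the endpoint x.
\end{verbatim}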
\end{document}
\begin{document}
\begin{abstract} We discuss a definition of the Maslov index for Lagrangian pairs on $\mathbb{R}^{2n}$ based on spectral flow, and develop many of its salient properties. We provide two applications to illustrate how our approach leads to a straightforward analysis of the relationship between the Maslov index and the Morse index for Schr\"odinger operators on $[0,1]$ and $\mathbb{R}$. \end{abstract}
\title{The Maslov Index for Lagrangian pairs on $\mathbb{R}^{2n}$}
\section{Introduction} \label{introduction}
With origins in the work of V. P. Maslov \cite{Maslov1965a} and subsequent development by V. I. Arnol'd \cite{arnold67}, the Maslov index on $\mathbb{R}^{2n}$ is a tool for determining the nature of intersections between two evolving Lagrangian subspaces (see Definition \ref{lagrangian_subspace}). As discussed in \cite{CLM}, several equivalent definitions are available, and we focus on a definition for Lagrangian pairs based on the development in \cite{BF98} (using the definition of spectral flow introduced in \cite{P96}). We note at the outset that the theory associated with the Maslov index has now been extended well beyond the simple setting of our analysis (see, for example, \cite{BF98, F}); nonetheless, the Maslov index for Lagrangian pairs on $\mathbb{R}^{2n}$ is a useful tool, and a systematic development of its properties is certainly warranted.
As a starting point, we define what we will mean by a {\it Lagrangian subspace} of $\mathbb{R}^{2n}$.
\begin{definition} \label{lagrangian_subspace} We say $\ell \subset \mathbb{R}^{2n}$ is a Lagrangian subspace if $\ell$ has dimension $n$ and \begin{equation*} (Jx, y)_{\mathbb{R}^{2n}} = 0, \end{equation*} for all $x, y \in \ell$. Here, $(\cdot, \cdot)_{\mathbb{R}^{2n}}$ denotes Euclidean inner product on $\mathbb{R}^{2n}$, and \begin{equation*} J = \begin{pmatrix} 0 & -I_n \\ I_n & 0 \end{pmatrix}, \end{equation*} with $I_n$ the $n \times n$ identity matrix. We sometimes adopt standard notation for symplectic forms, $\omega (x,y) = (Jx, y)_{\mathbb{R}^{2n}}$. Finally, we denote by $\Lambda (n)$ the collection of all Lagrangian subspaces of $\mathbb{R}^{2n}$, and we will refer to this as the {\it Lagrangian Grassmannian}. \end{definition}
A simple example, important for intuition, is the case $n = 1$, for which $(Jx, y)_{\mathbb{R}^{2}} = 0$ if and only if $x$ and $y$ are linearly dependent. In this case, we see that any line through the origin is a Lagrangian subspace of $\mathbb{R}^2$. As a foreshadowing of further discussion, we note that each such Lagrangian subspace can be identified with precisely two points on the unit circle $S^1$.
More generally, any Lagrangian subspace of $\mathbb{R}^{2n}$ can be spanned by a choice of $n$ linearly independent vectors in $\mathbb{R}^{2n}$. We will generally find it convenient to collect these $n$ vectors as the columns of a $2n \times n$ matrix $\mathbf{X}$, which we will refer to as a {\it frame} for $\ell$. Moreover, we will often write $\mathbf{X} = {X \choose Y}$, where $X$ and $Y$ are $n \times n$ matrices.
Given any two Lagrangian subspaces $\ell_1$ and $\ell_2$, with associated frames $\mathbf{X}_1 = {X_1 \choose Y_1}$ and $\mathbf{X}_2 = {X_2 \choose Y_2}$, we can define the complex $n \times n$ matrix \begin{equation} \label{tildeW} \tilde{W} = - (X_1 + i Y_1) (X_1 - i Y_1)^{-1} (X_2 - i Y_2) (X_2 + i Y_2)^{-1}, \end{equation} which we will see in Section \ref{derivation_section} is unitary. (We will also verify in Section \ref{derivation_section} that $(X_1 - iY_1)$ and $X_2 + iY_2$ are both invertible, and that $\tilde{W}$ is independent of the choice of frames we take for $\ell_1$ and $\ell_2$.) Notice that if we switch the roles of $\ell_1$ and $\ell_2$ then $\tilde{W}$ will be replaced by $\tilde{W}^{-1}$, and since $\tilde{W}$ is unitary this is $\tilde{W}^*$. We conclude that the eigenvalues in the switched case will be complex conjugates of those in the original case.
\begin{remark} \label{tildeW_remark} We use the tilde to distinguish the $n \times n$ complex-valued matrix $\tilde{W}$ from the Souriau map (see equation (\ref{souriau}) below), which is a related $2n \times 2n$ matrix often---as here---denoted $W$. The general form of $\tilde{W}$ appears in a less general context in \cite{DZ, HS}. For the special case $\mathbf{X}_2 = {0 \choose I}$ (associated, for example, with Dirichlet boundary conditions for a Sturm-Liouville eigenvalue problem) we see that \begin{equation} \label{Dirichlet_form} \tilde{W} = (X_1 + iY_1) (X_1 - iY_1)^{-1}, \end{equation} which has been extensively studied, perhaps most systematically in \cite{At} (particularly Chapter 10). If we let $\tilde{W}_D$ denote (\ref{Dirichlet_form}) for $\mathbf{X}_1 = {0 \choose I}$ and for $j = 1, 2$ set \begin{equation*} \tilde{W}_j = (X_j + iY_j) (X_j - iY_j)^{-1}, \end{equation*} then our form for $\tilde{W}$ can be viewed as the composition map \begin{equation} \label{composition} - \tilde{W}_1 \tilde{W}_D (\tilde{W}_2 \tilde{W}_D)^{-1} = - \tilde{W}_1 (\tilde{W}_2)^{-1}. \end{equation} For a related observation regarding the Souriau map see Remark \ref{souriau_remark}. \end{remark}
Combining observations from Sections \ref{framework_section} and \ref{derivation_section}, we will establish the following theorem (cf. Lemma 1.3 in \cite{BF98}).
\begin{theorem} \label{intersection_theorem} Suppose $\ell_1, \ell_2 \subset \mathbb{R}^{2n}$ are Lagrangian subspaces, with respective frames $\mathbf{X}_1 = {X_1 \choose Y_1}$ and $\mathbf{X}_2 = {X_2 \choose Y_2}$, and let $\tilde{W}$ be as defined in (\ref{tildeW}). Then \begin{equation*} \dim \operatorname{ker} (\tilde{W} + I) = \dim (\ell_1 \cap \ell_2). \end{equation*} That is, the dimension of the eigenspace of $\tilde{W}$ associated with the eigenvalue $-1$ is precisely the dimension of the intersection of the Lagrangian subspaces $\ell_1$ and $\ell_2$. \end{theorem}
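Theorem \ref{intersection_theorem} can likewise be checked numerically on examples with a prescribed intersection. The sketch below (again assuming \texttt{numpy}; not part of the development) uses graph frames ${I \choose S_1}$, ${I \choose S_2}$ with $S_1, S_2$ symmetric, for which $\dim(\ell_1 \cap \ell_2) = \dim \operatorname{ker}(S_1 - S_2)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 2                       # dimension and prescribed intersection dimension

S1 = rng.standard_normal((n, n)); S1 = S1 + S1.T
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
d = np.concatenate([np.zeros(k), rng.uniform(1.0, 2.0, n - k)])
S2 = S1 + Q @ np.diag(d) @ Q.T    # S1 - S2 has a k-dimensional kernel

X1, Y1, X2, Y2 = np.eye(n), S1, np.eye(n), S2
W = -(X1 + 1j*Y1) @ np.linalg.inv(X1 - 1j*Y1) \
      @ (X2 - 1j*Y2) @ np.linalg.inv(X2 + 1j*Y2)

eig = np.linalg.eigvals(W)
print(np.sum(np.abs(eig + 1) < 1e-8), "eigenvalues at -1; expected", k)
\end{verbatim}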
Given a parameter interval $I = [a,b]$, which can be normalized to $[0,1]$, we consider maps $\ell:I \to \Lambda (n)$, which will be expressed as $\ell (t)$. In order to specify a notion of continuity, we need to define a metric on $\Lambda (n)$, and following \cite{F} (p. 274), we do this in terms of orthogonal projections onto elements $\ell \in \Lambda (n)$. Precisely, let $\mathcal{P}_i$ denote the orthogonal projection matrix onto $\ell_i \in \Lambda (n)$ for $i = 1,2$. I.e., if $\mathbf{X}_i$ denotes a frame for $\ell_i$, then $\mathcal{P}_i = \mathbf{X}_i (\mathbf{X}_i^t \mathbf{X}_i)^{-1} \mathbf{X}_i^t$. We take our metric $d$ on $\Lambda (n)$ to be defined by \begin{equation*}
d (\ell_1, \ell_2) := \|\mathcal{P}_1 - \mathcal{P}_2 \|, \end{equation*}
where $\| \cdot \|$ can denote any matrix norm. We will say that $\ell: I \to \Lambda (n)$ is continuous provided it is continuous under the metric $d$. Likewise, for $\mathcal{L} = (\ell_1, \ell_2) \in \Lambda (n) \times \Lambda (n)$ and $\mathcal{M} = (m_1, m_2) \in \Lambda (n) \times \Lambda (n)$, we take \begin{equation} \label{rho_metric} \rho(\mathcal{L}, \mathcal{M}) = (d(\ell_1, m_1)^2 + d(\ell_2, m_2)^2)^{1/2}. \end{equation}
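In computations, $d$ and $\rho$ are easy to evaluate directly from frames; a minimal sketch of ours (assuming \texttt{numpy}, with the spectral norm as the fixed matrix norm):
\begin{verbatim}
import numpy as np

def proj(F):
    # Orthogonal projection onto the column span of the frame F.
    return F @ np.linalg.inv(F.T @ F) @ F.T

def d(F1, F2):
    return np.linalg.norm(proj(F1) - proj(F2), 2)

def rho(pair1, pair2):
    return np.hypot(d(pair1[0], pair2[0]), d(pair1[1], pair2[1]))
\end{verbatim}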
Given two continuous maps $\ell_1 (t), \ell_2 (t)$ on a parameter interval $I$, we denote by $\mathcal{L}(t)$ the path \begin{equation*} \mathcal{L} (t) = (\ell_1 (t), \ell_2 (t)). \end{equation*} In what follows, we will define the Maslov index for the path $\mathcal{L} (t)$, which will be a count, including both multiplicity and direction, of the number of times the Lagrangian paths $\ell_1$ and $\ell_2$ intersect. In order to be clear about what we mean by multiplicity and direction, we observe that associated with any path $\mathcal{L} (t)$ we will have a path of unitary complex matrices as described in (\ref{tildeW}). We have already noted that the Lagrangian subspaces $\ell_1$ and $\ell_2$ intersect at a value $t_0 \in I$ if and only if $\tilde{W} (t_0)$ has -1 as an eigenvalue. In the event of such an intersection, we define the multiplicity of the intersection to be the multiplicity of -1 as an eigenvalue (since $\tilde{W}$ is unitary the algebraic and geometric multiplicities are the same). When we talk about the direction of an intersection, we mean the direction the eigenvalues of $\tilde{W}$ are moving (as $t$ varies) along the unit circle $S^1$ as they pass through $-1$ (we take counterclockwise as the positive direction). We note that the eigenvalues certainly do not all need to be moving in the same direction, and that we will need to take care with what we mean by a crossing in the following sense: we must decide whether to increment the Maslov index upon arrival or upon departure.
Following \cite{BF98, F, P96}, we proceed by choosing a partition $a = t_0 < t_1 < \dots < t_n=b$ of $I = [a,b]$, along with numbers $\epsilon_j \in (0,\pi)$ so that $\operatorname{ker}\big(\tilde{W} (t) - e^{i (\pi \pm \epsilon_j)} I\big)=\{0\}$ for $t_{j-1} < t < t_j$; that is, $e^{i(\pi \pm \epsilon_j)} \in {\mathbb{C}} \setminus \sigma(\tilde{W} (t))$, for $t_{j-1} < t < t_j$ and $j=1,\dots,n$. Moreover, for each $j=1,\dots,n$ and any $t \in [t_{j-1},t_j]$ there are only finitely many values $\theta \in [0,\epsilon_j]$ for which $e^{i(\pi+\theta)} \in \sigma(\tilde{W} (t))$.
Fix some $j \in \{1, 2, \dots, n\}$ and consider the value \begin{equation} \label{kdefined} k (t,\epsilon_j) := \sum_{0 \leq \theta < \epsilon_j} \dim \operatorname{ker} \big(\tilde{W} (t) - e^{i(\pi+\theta)}I \big), \end{equation} for $t_{j-1} \leq t \leq t_j$. This is precisely the number of eigenvalues of $\tilde{W} (t)$, counted with multiplicity, that lie on the arc \begin{equation*} A_j := \{e^{i t}: t \in [\pi, \pi+\epsilon_j)\}. \end{equation*} The stipulation that $e^{i(\pi\pm\epsilon_j)} \in {\mathbb{C}}\setminus \sigma(\tilde{W} (t))$, for $t_{j-1} < t < t_j$ asserts that no eigenvalue can enter $A_j$ in the clockwise direction or exit in the counterclockwise direction during the interval $t_{j-1} < t < t_j$. In this way, we see that $k(t_j, \epsilon_j) - k (t_{j-1}, \epsilon_j)$ is a count of the number of eigenvalues that entered $A_j$ in the counterclockwise direction minus the number that left in the clockwise direction over the interval $[t_{j-1}, t_j]$.
In dealing with the catenation of paths, it's particularly important to understand this quantity if an eigenvalue resides at $-1$ at either $t = t_{j-1}$ or $t = t_j$ (i.e., if an eigenvalue begins or ends at a crossing). If an eigenvalue moving in the counterclockwise direction arrives at $-1$ at $t = t_j$, then we increment the difference forward, while if the eigenvalue arrives at -1 from the clockwise direction we do not. On the other hand, suppose an eigenvalue resides at -1 at $t = t_{j-1}$ and moves in the counterclockwise direction. There is no change, and so we do not increment the difference, but we decrement the difference if the eigenvalue leaves in the clockwise direction. In summary, the difference increments forward upon arrivals in the counterclockwise direction, but not upon arrivals in the clockwise direction, and it decrements upon departure in the clockwise direction, but not upon departure in the counterclockwise direction.
We are now ready to define the Maslov index.
\begin{definition}\label{dfnDef3.6} Let $\mathcal{L} (t) = (\ell_1 (t), \ell_2 (t))$, where $\ell_1, \ell_2:I \to \Lambda (n)$ are continuous paths in the Lagrangian--Grassmannian. The Maslov index $\operatorname{Mas}(\mathcal{L};I)$ is defined by \begin{equation} \operatorname{Mas}(\mathcal{L};I)=\sum_{j=1}^n(k(t_j,\epsilon_j)-k(t_{j-1},\epsilon_j)). \end{equation} \end{definition}
\begin{remark} \label{cf} In \cite{CLM} the authors provide a list of six properties that entirely characterize the Maslov index for a pair of Lagrangian paths. Our definition satisfies their properties, except for the choice of normalization (their Property VI), which is reversed. In our notation, their normalization is specified for $n=1$ with reference to Lagrangian subspaces $\ell_1$ and $\ell_2$ with respective frames $\mathbf{X}_1 = {1 \choose 0}$ and $\mathbf{X}_2 = {\cos t \choose \sin t}$. For this choice, we have \begin{equation*} \tilde{W} (t) = - \frac{\cos t - i\sin t}{\cos t + i \sin t}, \end{equation*} for which we see immediately that $\tilde{W} (-\frac{\pi}{4}) = -i$, $\tilde{W} (0) = -1$, and $\tilde{W} (\frac{\pi}{4}) = i$. This path is monotonic, so the following three values are immediate: $\operatorname{Mas} (\ell_1, \ell_2; [-\frac{\pi}{4}, \frac{\pi}{4}]) = -1$, $\operatorname{Mas} (\ell_1, \ell_2; [-\frac{\pi}{4}, 0]) = 0$, and $\operatorname{Mas} (\ell_1, \ell_2; [0, \frac{\pi}{4}]) = -1$. (Cf. equation (1.7) in \cite{CLM}).
We also note two additional definitions of the Maslov index for paths. In Section 3 of \cite{rs93} the authors give a definition based on crossing forms, and in Section 3.5 of \cite{F} the author gives a definition based on a direct sum of the Lagrangian pairs. In Section \ref{derivation_section} (of the current paper) we clarify how these two definitions are related to our Definition \ref{dfnDef3.6}. \end{remark}
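The counting in Definition \ref{dfnDef3.6} is straightforward to implement once a partition and admissible values $\epsilon_j$ are chosen. The sketch below (ours, assuming \texttt{numpy}) reproduces the value $\operatorname{Mas} (\ell_1, \ell_2; [-\frac{\pi}{4}, \frac{\pi}{4}]) = -1$ from Remark \ref{cf}, using the partition $\{-\frac{\pi}{4}, 0, \frac{\pi}{4}\}$ and $\epsilon_1 = \epsilon_2 = 0.6\pi$, which one checks are admissible for $\tilde{W}(t) = -e^{-2it}$.
\begin{verbatim}
import numpy as np

def k_count(Wt, eps, tol=1e-9):
    # Number of eigenvalues of the unitary matrix Wt on the arc
    # { e^{i theta} : pi <= theta < pi + eps }, angles taken mod 2*pi.
    theta = np.mod(np.angle(np.linalg.eigvals(Wt)), 2*np.pi)
    return int(np.sum((theta >= np.pi - tol) & (theta < np.pi + eps - tol)))

def maslov(W_of_t, partition, epsilons):
    # The sum defining the Maslov index; the caller is responsible for
    # admissibility of the partition and of the epsilon_j.
    return sum(k_count(W_of_t(t1), e) - k_count(W_of_t(t0), e)
               for t0, t1, e in zip(partition[:-1], partition[1:], epsilons))

W_of_t = lambda t: np.array([[-np.exp(-2j*t)]])   # n = 1 example from the remark
print(maslov(W_of_t, [-np.pi/4, 0.0, np.pi/4], [0.6*np.pi, 0.6*np.pi]))   # -1
\end{verbatim}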
One of the most important features of the Maslov index is homotopy invariance, for which we need to consider continuously varying families of Lagrangian paths. To set some notation, we denote by $\mathcal{P} (I)$ the collection of all paths $\mathcal{L} (t) = (\ell_1 (t), \ell_2 (t))$, where $\ell_1, \ell_2:I \to \Lambda (n)$ are continuous paths in the Lagrangian--Grassmannian. We say that two paths $\mathcal{L}, \mathcal{M} \in \mathcal{P} (I)$ are homotopic provided there exists a family $\mathcal{H}_s$ so that $\mathcal{H}_0 = \mathcal{L}$, $\mathcal{H}_1 = \mathcal{M}$, and $\mathcal{H}_s (t)$ is continuous as a map from $[a,b] \times [0,1]$ into $\Lambda (n) \times \Lambda (n)$ (with respect to the metric $\rho$ defined in (\ref{rho_metric})).
The Maslov index has the following properties (see, for example, Theorem 3.6 in \cite{F}).
\noindent {\bf (P1)} (Path Additivity) If $a < b < c$ then \begin{equation*} \operatorname{Mas} (\mathcal{L};[a, c]) = \operatorname{Mas} (\mathcal{L};[a, b]) + \operatorname{Mas} (\mathcal{L}; [b, c]). \end{equation*}
\noindent {\bf (P2)} (Homotopy Invariance) If $\mathcal{L}, \mathcal{M} \in \mathcal{P} (I)$ are homotopic, with $\mathcal{L} (a) = \mathcal{M} (a)$ and $\mathcal{L} (b) = \mathcal{M} (b)$ (i.e., if $\mathcal{L}, \mathcal{M}$ are homotopic with fixed endpoints) then \begin{equation*} \operatorname{Mas} (\mathcal{L};[a, b]) = \operatorname{Mas} (\mathcal{M};[a, b]). \end{equation*}
\begin{remark} For (P1), the only issue regards cases in which there is an intersection at $t = b$. For example, suppose the intersection is an arrival in the clockwise direction, followed by departure in the same direction. Then at this intersection, $\operatorname{Mas} (\mathcal{L}; [a,c])$ decrements by 1, $\operatorname{Mas} (\mathcal{L}; [a,b])$ is unaffected, and $\operatorname{Mas} (\mathcal{L}; [b,c])$ decrements by 1. Other cases are similar.
Verification of (P2) requires more work, and we leave that discussion to an appendix. \end{remark}
\section{Framework for $W$ and $\tilde{W}$} \label{framework_section}
In Section \ref{derivation_section}, we will use the formulation of \cite{BF98, F} to derive our form of $\tilde{W}$, and in preparation for that we will briefly discuss the nature of this formulation. This material has all been covered in a much more general case in \cite{BF98, F}, and our motivation for including this section is simply to allow readers to understand this framework in the current setting.
We record at the outset an important property of Lagrangian frames.
\begin{proposition} \label{Lagrangian_property} A $2n \times n$ matrix $\mathbf{X} = {X \choose Y}$ is a frame for a Lagrangian subspace if and only if the columns of $\mathbf{X}$ are linearly independent, and additionally \begin{equation} \label{LP} X^t Y - Y^t X = 0. \end{equation} We refer to this relation as the Lagrangian property for frames. \end{proposition}
\begin{proof} To see this, we observe by definition that $\mathbf{X}$ is the frame of a Lagrangian subspace if and only if its columns are linearly independent, and each of its column pairs ${x_i \choose y_i}$, ${x_j \choose y_j}$ satisfies \begin{equation*} (J {x_i \choose y_i}, {x_j \choose y_j})_{\mathbb{R}^{2n}} = 0; \quad \text{i.e., } ({-y_i \choose x_i}, {x_j \choose y_j})_{\mathbb{R}^{2n}} = (x_i, y_j)_{\mathbb{R}^{n}} - (x_j, y_i)_{\mathbb{R}^{n}} = 0. \end{equation*} Observing that \begin{equation*} (X^t Y - Y^t X)_{i j} = (x_i, y_j)_{\mathbb{R}^{n}} - (x_j, y_i)_{\mathbb{R}^{n}}, \end{equation*} we obtain the claim. \end{proof}
\begin{remark} It is clear that the Lagrangian property can alternatively be expressed as \begin{equation*} \mathbf{X}^t J \mathbf{X} = 0. \end{equation*} \end{remark}
We next observe that for a given pair of Lagrangian subspaces $\mathcal{L} = (\ell_1, \ell_2) \in \Lambda (n) \times \Lambda (n)$ we can change our choice of frames without changing either the associated $\tilde{W}$ or the projection matrices $\mathcal{P}_1$ and $\mathcal{P}_2$.
\begin{proposition} \label{changing_frames} Suppose $\mathbf{X}_1 = {X_1 \choose Y_1}$ and $\mathbf{X}_2 = {X_2 \choose Y_2}$ are any two frames for the same Lagrangian subspace $\ell \subset \mathbb{R}^{2n}$. Then \begin{equation*} (X_1 + i Y_1) (X_1 - i Y_1)^{-1} = (X_2 + i Y_2) (X_2 - i Y_2)^{-1}, \end{equation*} and likewise \begin{equation*} \mathbf{X}_1 (\mathbf{X}_1^t \mathbf{X}_1)^{-1} \mathbf{X}_1^t = \mathbf{X}_2 (\mathbf{X}_2^t \mathbf{X}_2)^{-1} \mathbf{X}_2^t. \end{equation*} \end{proposition}
\begin{proof} Under our assumptions, there exists an invertible $n \times n$ matrix $M$ so that $\mathbf{X}_1 = \mathbf{X}_2 M$. In particular, we must have $X_1 = X_2 M$ and $Y_1 = Y_2 M$. But then \begin{equation*} \begin{aligned} (X_1 + i Y_1) (X_1 - i Y_1)^{-1} &= (X_2 M + i Y_2 M) (X_2 M - i Y_2 M)^{-1} \\ &= (X_2 + i Y_2) M M^{-1} (X_2 - i Y_2)^{-1} = (X_2 + i Y_2) (X_2 - i Y_2)^{-1}. \end{aligned} \end{equation*} Likewise, \begin{equation*} \begin{aligned} \mathbf{X}_1 (\mathbf{X}_1^t \mathbf{X}_1)^{-1} \mathbf{X}_1^t &= \mathbf{X}_2 M ((\mathbf{X}_2 M)^t \mathbf{X}_2 M)^{-1} (\mathbf{X}_2 M)^t \\ &= \mathbf{X}_2 M (M^t (\mathbf{X}_2^t \mathbf{X}_2) M)^{-1} M^t \mathbf{X}_2^t = \mathbf{X}_2 M M^{-1} (\mathbf{X}_2^t \mathbf{X}_2)^{-1} (M^t)^{-1} M^t \mathbf{X}_2^t \\ &= \mathbf{X}_2 (\mathbf{X}_2^t \mathbf{X}_2)^{-1} \mathbf{X}_2^t. \end{aligned} \end{equation*} \end{proof}
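A direct numerical check of Proposition \ref{changing_frames} (a sketch of ours, assuming \texttt{numpy}; the frame construction is for illustration only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 4

X = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = S + S.T
Y = S @ X                          # (X; Y) is a Lagrangian frame
M = rng.standard_normal((n, n))    # a generic M is invertible
Xm, Ym = X @ M, Y @ M              # second frame for the same subspace

lhs = (X + 1j*Y) @ np.linalg.inv(X - 1j*Y)
rhs = (Xm + 1j*Ym) @ np.linalg.inv(Xm - 1j*Ym)
print(np.allclose(lhs, rhs))       # same (X + iY)(X - iY)^{-1}

F1, F2 = np.vstack([X, Y]), np.vstack([Xm, Ym])
P = lambda F: F @ np.linalg.inv(F.T @ F) @ F.T
print(np.allclose(P(F1), P(F2)))   # same orthogonal projection
\end{verbatim}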
Next, we introduce a complex Hilbert space, which we will denote $\mathbb{R}_J^{2n}$. The elements of this space will continue to be real-valued vectors of length $2n$, but we will define multiplication by complex scalars $\gamma = \alpha + i \beta$ as \[ (\alpha + i\beta) u := \alpha u + \beta Ju, \quad u \in \mathbb{R}^{2n}, \alpha + i\beta \in {\mathbb{C}}, \] and we will define a complex scalar product \[ (u,v)_{\mathbb{R}^{2n}_J} := (u,v)_{\mathbb{R}^{2n}}-i\omega(u,v),\quad u,v\in\mathbb{R}^{2n} \] (recalling $\omega (u,v) = (Ju, v)_{\mathbb{R}^{2n}}$). It is important to note that, considered as a real vector space, $\mathbb{R}^{2n}_J$ is identical to $\mathbb{R}^{2n}$, and not its complexification $\mathbb{R}^{2n} \otimes_{{\mathbb{R}}} {\mathbb{C}}$. (In fact, $\mathbb{R}^{2n}_J \cong {\mathbb{C}}^n$ while $\mathbb{R}^{2n} \otimes_{{\mathbb{R}}} {\mathbb{C}} \cong {\mathbb{C}}^{2n}$.) However, it is easy to see that $\mathbb{R}^{2n}_J \cong \ell \otimes_{{\mathbb{R}}} {\mathbb{C}}$ for any Lagrangian subspace $\ell \in \Lambda(n)$, and we'll take advantage of this correspondence.
For a matrix $U$ acting on $\mathbb{R}^{2n}_J$, we denote the adjoint in $\mathbb{R}^{2n}_J$ by $U^{J*}$ so that \begin{equation*} (U u, v )_{\mathbb{R}^{2n}_J} =
(u, U^{J*} v )_{\mathbb{R}^{2n}_J}, \end{equation*} for all $u,v \in \mathbb{R}^{2n}_J$. We denote by $\mathfrak{U}_J$ the space of unitary matrices acting on $\mathbb{R}^{2n}_J$ (i.e., the matrices so that $U U^{J*} = U^{J*} U = I$). In order to clarify the nature of $\mathfrak{U}_J$, we note that we have the identity \begin{equation*} (U u, U v )_{\mathbb{R}^{2n}_J} = (u, v)_{\mathbb{R}^{2n}_J}, \end{equation*} from which \begin{equation*} (U u, U v)_{\mathbb{R}^{2n}} - i (J U u, U v)_{\mathbb{R}^{2n}} = (u, v)_{\mathbb{R}^{2n}} - i (J u, v)_{\mathbb{R}^{2n}}. \end{equation*} Equating real parts, we see that $U$ must be unitary as a matrix on $\mathbb{R}^{2n}$, while by equating imaginary parts we see that $UJ = JU$. We have, then, \begin{equation*}
\mathfrak{U}_J=\{U\in\mathbb{R}^{2n \times 2n}\,|\,U^tU=UU^t=I_{2n},\, UJ=JU\}. \end{equation*}
Fix some Lagrangian subspace $\ell_0 \subset \mathbb{R}^{2n}$, and notice that $J (\ell_0)$ is orthogonal to $\ell_0$; i.e., if $\mathbf{X}_0 = {X_0 \choose Y_0}$ is a frame for $\ell_0$ then $J \mathbf{X}_0 = {-Y_0 \choose X_0}$ is a frame for $J (\ell_0)$, and we have \begin{equation*} \begin{pmatrix} X_0^t & Y_0^t \end{pmatrix} \begin{pmatrix} -Y_0 \\ X_0 \end{pmatrix} = -X_0^t Y_0 + Y_0^t X_0 = 0, \end{equation*} by the Lagrangian property. In this way, we see that \begin{equation*} \mathbb{R}^{2n} = \ell_0 \oplus J(\ell_0), \end{equation*} so that given any $z \in \mathbb{R}^{2n}$ we can express $z$ uniquely as $z = x + Jy$ for some $x,y \in \ell_0$. We define the conjugate of $z$ in $\mathbb{R}^{2n}_J$ by \begin{equation*} \tau_0 z := x - Jy. \end{equation*} Notice that we can compute $\tau_{0} = 2 \Pi_{0} - I_{2n}$, where $\Pi_{0}$ is the orthogonal projection onto $\ell_0$. For any $U \in \mathfrak{U}_J$, we define \begin{equation} \label{capitalT} U^T := \tau_0 U^t \tau_0, \end{equation} which is also in $\mathfrak{U}_J$ (as follows easily from our next proposition).
\begin{proposition} \label{tau_properties} Let $\mathbf{X}_0 = {X_0 \choose Y_0}$ be a frame for a Lagrangian subspace $\ell_0 \subset \mathbb{R}^{2n}$. Then the matrix $X_0^t X_0 + Y_0^t Y_0$ is symmetric and positive definite, and if we set $M_0 := (X_0^t X_0 + Y_0^t Y_0)^{-1/2}$ we have \begin{equation*} \begin{aligned} \Pi_0 &= \begin{pmatrix} X_0 M_0^2 X_0^t & X_0 M_0^2 Y_0^t \\ Y_0 M_0^2 X_0^t & Y_0 M_0^2 Y_0^t \end{pmatrix} \\
\tau_0 &= \begin{pmatrix} 2 X_0 M_0^2 X_0^t - I & 2 X_0 M_0^2 Y_0^t \\ 2 Y_0 M_0^2 X_0^t & 2 Y_0 M_0^2 Y_0^t - I \end{pmatrix}, \end{aligned} \end{equation*} with additionally $\tau_0^t = \tau_0$, $\tau_0^2 = I$, and $J \tau_0 = - \tau_0 J$. \end{proposition}
\begin{proof} These claims can all be proven in a straightforward manner, using the following identities, which are established in the proof of Lemma 3.3 in \cite{HS}: \begin{equation} \label{lemma3.3} \begin{aligned} X_0 M_0^2 X_0^t + Y_0 M_0^2 Y_0^t &= I; \\ X_0 M_0^2 Y_0^t - Y_0 M_0^2 X_0^t &= 0. \end{aligned} \end{equation} Noting that \begin{equation*} \mathbf{X}_0^t \mathbf{X}_0 = \begin{pmatrix} X_0^t & Y_0^t \end{pmatrix} \begin{pmatrix} X_0 \\ Y_0 \end{pmatrix} = X_0^t X_0 + Y_0^t Y_0, \end{equation*} we see that \begin{equation*} \begin{aligned} \Pi_0 &= \mathbf{X}_0 (\mathbf{X}_0^t \mathbf{X}_0)^{-1} \mathbf{X}_0^t = \begin{pmatrix} X_0 \\ Y_0 \end{pmatrix} M_0^2 \begin{pmatrix} X_0^t & Y_0^t \end{pmatrix} \\ &= \begin{pmatrix} X_0 M_0^2 X_0^t & X_0 M_0^2 Y_0^t \\ Y_0 M_0^2 X_0^t & Y_0 M_0^2 Y_0^t \end{pmatrix}. \end{aligned} \end{equation*} The remaining claims follow in a straightforward manner. \end{proof}
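These identities are also easy to confirm numerically; the sketch below (ours, assuming \texttt{numpy}) forms $\Pi_0$ and $\tau_0$ from a Lagrangian frame and checks the block formula together with $\tau_0^t = \tau_0$, $\tau_0^2 = I$, and $J \tau_0 = -\tau_0 J$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
n = 4
X0 = rng.standard_normal((n, n))
S = rng.standard_normal((n, n)); S = S + S.T
Y0 = S @ X0                                    # Lagrangian frame (X0; Y0)
F = np.vstack([X0, Y0])
J = np.block([[np.zeros((n, n)), -np.eye(n)],
              [np.eye(n),        np.zeros((n, n))]])

Pi0 = F @ np.linalg.inv(F.T @ F) @ F.T         # orthogonal projection onto ell_0
tau0 = 2*Pi0 - np.eye(2*n)                     # tau_0 = 2 Pi_0 - I

M2 = np.linalg.inv(X0.T @ X0 + Y0.T @ Y0)      # M_0^2
print(np.allclose(Pi0, np.block([[X0 @ M2 @ X0.T, X0 @ M2 @ Y0.T],
                                 [Y0 @ M2 @ X0.T, Y0 @ M2 @ Y0.T]])))
print(np.allclose(tau0, tau0.T),
      np.allclose(tau0 @ tau0, np.eye(2*n)),
      np.allclose(J @ tau0, -tau0 @ J))
\end{verbatim}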
Now, given a second Lagrangian subspace $\ell$, let $U \in \mathfrak{U}_J$ satisfy \begin{equation} \label{U_relation} \ell = U (J (\ell_0)), \end{equation} or equivalently \begin{equation} \label{U_equivalent} U^t (\ell) = J(\ell_0). \end{equation} (Such a matrix $U$ is not uniquely defined.) We define \begin{equation*} W = U U^T = U \tau_0 U^t \tau_0, \end{equation*} and it follows from Proposition \ref{tau_properties} that $W \in \mathfrak{U}_J$.
\begin{lemma} \label{W_lemma} For $\ell_0$, $\ell$, and $W$ as above \begin{equation*} \operatorname{ker} (W+I) = (\ell \cap \ell_0) \oplus J(\ell \cap \ell_0). \end{equation*} \end{lemma}
\begin{proof} As a start, take any $z \in (\ell \cap \ell_0) \oplus J(\ell \cap \ell_0)$, and write $z = x + Jy$ for some $x, y \in \ell \cap \ell_0$. We compute \begin{equation*} \begin{aligned} Wz &= U \tau_0 U^t \tau_0 (x+Jy) \\ &= U \tau_0 U^t (x-Jy) \\ &= U \tau_0 (U^t x - JU^ty) \\ &\overset{*}{=} U (- U^t x - J U^t y) = - x - Jy = - z, \end{aligned} \end{equation*} where in obtaining the equality indicated with * we have observed from (\ref{U_relation}) and (\ref{U_equivalent}) that $U^t x \in J(\ell_0)$ and $J U^t y \in \ell_0$.
On the other hand, suppose $z \in \mathbb{R}^{2n}$ satisfies $Wz = -z$. We can write $z = x+Jy$ for some $x, y \in \ell_0$, and we would like to show that $x, y \in \ell$ so that in fact $x,y \in \ell \cap \ell_0$. We compute \begin{equation*} \begin{aligned} - (x+Jy) &= U \tau_0 U^t \tau_0 (x+Jy) = U \tau_0 U^t (x - Jy) \\ &= U \tau_0 (U^t x - U^t Jy), \end{aligned} \end{equation*} which implies \begin{equation*} - (U^t x + U^t Jy) = \tau_0 (U^t x - U^t Jy). \end{equation*} It's straightforward to see that this can only hold if $U^t Jy \in \ell_0$ and $U^t x \in J (\ell_0)$, which according to (\ref{U_relation}) and (\ref{U_equivalent}) implies that $x, y \in \ell$. \end{proof}
For a similar statement in a more general context, see equation (2.37) in \cite{F}.
The relationship between $\ell_0$, $\ell$, and $U \in \mathfrak{U}_J$ provides a natural and productive connection between the elements $\ell$ of the Lagrangian Grassmannian and elements $U \in \mathfrak{U}_J$. However, the associated unitary matrices are not uniquely specified, and consequently the spectrum of $U$ contains redundant information. For example, in the simple case of $\mathbb{R}^2$ this redundant information corresponds with our previous observation that each element $\ell \in \Lambda (1)$ corresponds with two points on $S^1$. We overcome this difficulty by defining a new (uniquely specified) unitary matrix $W$ in $\mathbb{R}^{2n}_J$ by $W=U U^T$.
We observe that the unitary condition $UJ = JU$ implies $U$ must have the form \begin{equation*} U = \begin{pmatrix} U_{11} & - U_{21} \\ U_{21} & U_{11} \end{pmatrix} = \begin{pmatrix} U_{11} & 0 \\ 0 & U_{11} \end{pmatrix} + J \begin{pmatrix} U_{21} & 0 \\ 0 & U_{21} \end{pmatrix}. \end{equation*} In addition, we have the scaling condition \begin{equation} \label{unitary_scaling} \begin{aligned} U_{11}^t U_{11} + U_{21}^t U_{21} &= I \\ U_{11} U_{11}^t + U_{21} U_{21}^t &= I \\ U_{11}^t U_{21} - U_{21}^t U_{11} &= 0 \\ U_{11} U_{21}^t - U_{21} U_{11}^t &= 0 \end{aligned} \end{equation} (from $U U^t = U^t U = I$). In this way, there is a natural one-to-one correspondence between matrices $U \in \mathfrak{U}_J$ and the $n \times n$ complex unitary matrices $\tilde{U} = U_{11} + i U_{21}$ (i.e., the $\tilde{U} \in \mathbb{C}^{n \times n}$ so that $\tilde{U}^* \tilde{U} = \tilde{U} \tilde{U}^* = I$). It follows that the matrix $W = U U^T$, which can be expressed as \begin{equation*} W = \begin{pmatrix} W_{11} & - W_{21} \\ W_{21} & W_{11} \end{pmatrix}, \end{equation*} has a natural corresponding matrix $\tilde{W} = W_{11} + i W_{21}$. We will see in section \ref{derivation_section} that our matrix $\tilde{W}$ in (\ref{tildeW}) is constructed in precisely this way.
\noindent {\bf Proof of Theorem \ref{intersection_theorem}.} Let $W$ and $\tilde{W}$ be as in the preceding paragraph, and suppose $z = x + Jy$, $x,y \in \ell_0$, is an eigenvector for $W$, associated to the eigenvalue $\lambda = -1$. If we write $x = {x_1 \choose x_2}$ and $y = {y_1 \choose y_2}$ then the equation $Wz = - z$ becomes \begin{equation*} \begin{aligned} W_{11} (x_1 - y_2) - W_{21} (x_2 + y_1) &= - (x_1 - y_2) \\ W_{21} (x_1 - y_2) + W_{11} (x_2 + y_1) &= - (x_2 + y_1). \end{aligned} \end{equation*} We see that if $w = u + iv$, with $u = x_1 - y_2$ and $v = x_2 + y_1$, then $\tilde{W} w = - w$. Moreover, $w$ cannot be trivial, because if $w = 0$ then $x_1 = y_2$ and $x_2 = -y_1$, so that \begin{equation*}
0 = \omega (x,y) = (Jx, y) = |x_1|^2 + |x_2|^2, \end{equation*} which would imply $x = 0$, and consequently $y=0$. This contradicts our assumption that $z$ is an eigenvector of $W$.
On the other hand, notice that if $w = u+iv$ is any eigenvector of $\tilde{W}$ associated to the eigenvalue $\lambda = -1$, then \begin{equation*} \begin{aligned} W_{11} u - W_{21} v &= - u \\ W_{11} v + W_{21} u &= - v. \end{aligned} \end{equation*} If we set $x = {x_1 \choose x_2} = {u \choose v}$ then $Wx = - x$, and since $WJ = JW$ we also have $W (Jx) = - Jx$, where $Jx = {-v \choose u}$ is linearly independent of $x$ (because $J$ has no real eigenvalues). We see that each eigenvector of $\tilde{W}$ associated to $\lambda = -1$ corresponds with a two-dimensional (real) eigenspace of $W$ associated to $\lambda = -1$. Since $\dim \operatorname{ker} (W+I) = 2 \dim (\ell_0 \cap \ell)$ (from Lemma \ref{W_lemma}), the theorem follows immediately.
$\square$
\section{Derivation of $W$ and $\tilde{W}$} \label{derivation_section}
In this section, we will use our general formulation from Section \ref{framework_section} to derive the form of $\tilde{W}$ expressed in (\ref{tildeW}). We begin by collecting some straightforward observations that will be used throughout our derivation.
\begin{lemma} If $\mathbf{X} = {X \choose Y}$ is a frame for a Lagrangian subspace $\ell \subset \mathbb{R}^{2n}$ then $X^t X + Y^t Y$ is a symmetric positive definite matrix, and the matrices $X-iY$ and $X+iY$ are both invertible. \end{lemma}
\begin{proof} First, if $\mathbf{X}$ is the frame for a Lagrangian subspace $\ell \subset \mathbb{R}^{2n}$ then the columns of $\mathbf{X}$ must be linearly independent. Positive definiteness (and hence invertibility) of $\mathbf{X}^t \mathbf{X} = X^t X + Y^t Y$ follows (see, e.g., p. 28 in \cite{Keener}), and this matrix is clearly symmetric.
Turning to invertibility of $X \pm iY$, we focus on $X+iY$, noting that if this matrix has zero as an eigenvalue then there will be a vector $w = u+iv$ so that $(X+iY)(u+iv) = 0$, which means \begin{equation} \label{UVsystem} \begin{aligned} Xu - Yv &= 0 \\ Yu + Xv &= 0. \end{aligned} \end{equation} If we multiply the first of these equations by $Y^t$ and the second by $X^t$ and subtract the results (recalling the Lagrangian property of frames (\ref{LP})) we obtain $(X^t X + Y^t Y) v = 0$. But we've already seen that $(X^t X + Y^t Y)$ is invertible, so we must have $v = 0$. Likewise, if we multiply the first equation in (\ref{UVsystem}) by $X^t$ and the second by $Y^t$ we find that $u = 0$, which contradicts our assumption that $w = u+iv$ is an eigenvector associated with zero. \end{proof}
To begin our construction of $\tilde{W}$, we let $\ell_1$ and $\ell_2$ denote two Lagrangian subspaces of $\mathbb{R}^{2n}$, with associated frames $\mathbf{X}_1 = {X_1 \choose Y_1}$ and $\mathbf{X}_2 = {X_2 \choose Y_2}$. As discussed in Section \ref{framework_section}, we proceed by associating this pair of Lagrangian subspaces with a matrix $U \in \mathfrak{U}_J$. In particular, $U$ should map $\ell_2^{\perp} = J(\ell_2)$ to $\ell_1$. In terms of frames, this asserts that \begin{equation*} \mathbf{X}_1 = U J \mathbf{X}_2, \end{equation*} where in order to ensure the unitary normalization $U_{11}^t U_{11} + U_{21}^t U_{21} = I$, we note that for each $i=1,2$ we can choose the frame $\mathbf{X}_i$ to be $X_i M_i \choose Y_i M_i$ for any $n \times n$ invertible matrix $M_i$. With this choice, we find that $U$ should solve \begin{equation} \label{Udefined} \begin{pmatrix} X_1 M_1 \\ Y_1 M_1 \end{pmatrix} = \begin{pmatrix} U_{11} & - U_{21} \\ U_{21} & U_{11} \end{pmatrix} \begin{pmatrix} - Y_2 M_2 \\ X_2 M_2 \end{pmatrix}. \end{equation} We will verify below that the choices \begin{equation*} M_i = (X_i^t X_i + Y_i^t Y_i)^{-1/2} \end{equation*} suffice. We can express (\ref{Udefined}) as \begin{equation} \begin{pmatrix} (X_1 M_1)^t \\ (Y_1 M_1)^t \end{pmatrix} = V \begin{pmatrix} U_{11}^t \\ U_{21}^t \end{pmatrix}; \quad V = \begin{pmatrix} - (Y_2 M_2)^t & - (X_2 M_2)^t \\ (X_2 M_2)^t & - (Y_2 M_2)^t \end{pmatrix}. \end{equation} Using identities of the form (\ref{lemma3.3}), we can check that $V$ is orthogonal, allowing us to solve for $U$ and see that \begin{equation*} U = \begin{pmatrix} X_1 M_1 & - Y_1 M_1 \\ Y_1 M_1 & X_1 M_1 \end{pmatrix} \begin{pmatrix} -M_2 Y_2^t & M_2 X_2^t \\ - M_2 X_2^t & - M_2 Y_2^t \end{pmatrix} =: U_1 U_2. \end{equation*}
We now compute \begin{equation*} W = U U^T = U \tau_2 U^t \tau_2 = U_1 U_2 \tau_2 U_2^t U_1^t \tau_2, \end{equation*} where $\tau_2$ denotes the conjugation operator obtained as in Section \ref{framework_section}, with $\ell_0$ replaced by $\ell_2$. As in Proposition \ref{tau_properties}, we have \begin{equation*} \tau_2 = \begin{pmatrix} 2 X_2 M_2^2 X_2^t - I & 2 X_2 M_2^2 Y_2^t \\ 2 Y_2 M_2^2 X_2^t & 2 Y_2 M_2^2 Y_2^t - I \end{pmatrix}, \end{equation*} and computing directly we can show that \begin{equation*} U_2 \tau_2 U_2^t = \begin{pmatrix} -I & 0 \\ 0 & I \end{pmatrix}. \end{equation*} Using this intermediate step, and computing directly again we arrive at \begin{equation*} \begin{aligned} U_1 U_2 \tau_2 U_2^t U_1^t \tau_2 & = \begin{pmatrix} X_1 M_1^2 X_1^t - Y_1 M_1^2 Y_1^t & - 2 X_1 M_1^2 Y_1^t \\ 2 X_1 M_1^2 Y_1^t & X_1 M_1^2 X_1^t - Y_1 M_1^2 Y_1^t \end{pmatrix} \\ & \quad \quad \times \begin{pmatrix} Y_2 M_2^2 Y_2^t - X_2 M_2^2 X_2^t & - 2 X_2 M_2^2 Y_2^t \\ 2 X_2 M_2^2 Y_2^t & Y_2 M_2^2 Y_2^t - X_2 M_2^2 X_2^t \end{pmatrix} =: W_1 W_2. \end{aligned} \end{equation*}
Last, we identify the matrix $\tilde{W}$, which we can compute as $\tilde{W} = \tilde{W}_1 \tilde{W}_2$. First, it's clear that \begin{equation} \label{W1alt} \begin{aligned} \tilde{W}_1 &= X_1 M_1^2 X_1^t - Y_1 M_1^2 Y_1^t + i 2 X_1 M_1^2 Y_1^t \\ &= (X_1 + i Y_1) M_1^2 (X_1^t + i Y_1^t), \end{aligned} \end{equation} where we've used the identity $X_1 M_1^2 Y_1^t = Y_1 M_1^2 X_1^t$ (see the proof of Proposition \ref{tau_properties}). Using the Lagrangian property (\ref{LP}), we see that \begin{equation} \label{useful1} \begin{aligned} (X_1 - i Y_1)^{-1} (X_1^t + i Y_1^t)^{-1} &= \Big((X_1^t + i Y_1^t) (X_1 - i Y_1) \Big)^{-1} \\ &= \Big(X_1^t X_1 + Y_1^t Y_1 + i (Y_1^t X_1 - X_1^t Y_1) \Big)^{-1} = M_1^2. \end{aligned} \end{equation} Continuing with our calculation of $\tilde{W}_1$, we conclude \begin{equation*} \begin{aligned} \tilde{W}_1 &= (X_1 + i Y_1) (X_1 - i Y_1)^{-1} (X_1^t + i Y_1^t)^{-1} (X_1^t + i Y_1^t) \\ &= (X_1 + i Y_1) (X_1 - i Y_1)^{-1}. \end{aligned} \end{equation*} Proceeding similarly, we find \begin{equation*} \tilde{W}_2 = - (X_2 - i Y_2) (X_2 + i Y_2)^{-1}, \end{equation*} from which the form of $\tilde{W}$ in (\ref{tildeW}) is immediate.
Using the argument leading to (\ref{useful1}), we obtain the identities \begin{equation} \label{useful2} \begin{aligned} (X_j - i Y_j)^{-1} &= M_j^2 (X_j^t + i Y_j^t) \\ (X_j + i Y_j)^{-1} &= M_j^2 (X_j^t - i Y_j^t), \end{aligned} \end{equation} for $j = 1, 2$. This provides us with the alternative form \begin{equation*} \tilde{W} = - (X_1 + i Y_1) M_1^2 (X_1^t + i Y_1^t) (X_2 - iY_2) M_2^2 (X_2^t - i Y_2^t). \end{equation*}
Using (\ref{W1alt}) (and the fact that $M_1^2$ is self-adjoint), we compute \begin{equation*} \begin{aligned} \tilde{W}_1 \tilde{W}_1^* &= (X_1 + i Y_1) M_1^2 (X_1^t + i Y_1^t) (X_1 - i Y_1) M_1^2 (X_1^t - i Y_1^t) \\ &= (X_1 + iY_1) M_1^2 (X_1^t - i Y_1^t) = I, \end{aligned} \end{equation*} verifying that $\tilde{W}_1$ is unitary. Likewise, $\tilde{W}_2$ is unitary, and so $\tilde{W}$ is unitary.
\begin{remark} \label{det_squared} We can now extend Arnol'd's $\text{Det}^2$ map to the current setting (see, for example, Section 1.3 in \cite{arnold67}). We define a map $\text{Det}^2: \Lambda (n) \times \Lambda (n) \to S^1$ as follows: given any Lagrangian pair $\ell_1, \ell_2 \in \Lambda (n)$ and respectively any frames $\mathbf{X}_1 = {X_1 \choose Y_1}$, $\mathbf{X}_2 = {X_2 \choose Y_2}$, we set \begin{equation} \begin{aligned} \text{Det}^2 (\ell_1, \ell_2) &:= \operatorname{det} \tilde{W} = - \operatorname{det} \Big{\{} \Big( (X_1 + i Y_1) M_1^2 (X_1^t + i Y_1^t) \Big) \cdot \Big((X_2 - iY_2) M_2^2 (X_2^t - i Y_2^t)\Big) \Big{\}} \\ &= - \frac{\operatorname{det}^2 (X_1 + i Y_1)}{\operatorname{det} (X_1^t X_1 + Y_1^t Y_1)} \cdot \frac{\operatorname{det}^2 (X_2 - i Y_2)}{\operatorname{det} (X_2^t X_2 + Y_2^t Y_2)}. \end{aligned} \end{equation} We have already seen that $\tilde{W}$ does not depend on the choice of frames, and so the map $\text{Det}^2$ is well-defined. \end{remark}
For some calculations, it's productive to observe that we can express our matrix $W$ in the coordinate-free form \begin{equation} \label{souriau} W = - (2\mathcal{P}_1 - I) (2 \mathcal{P}_2 - I), \end{equation} sometimes referred to as the {\it Souriau map}. Here, $\mathcal{P}_1$ and $\mathcal{P}_2$ are respectively orthogonal projections onto $\ell_1$ and $\ell_2$, and given particular frames $\mathbf{X}_i = {X_i \choose Y_i}$ we can express these as \begin{equation*} \mathcal{P}_i = \mathbf{X}_i (\mathbf{X}_i^t \mathbf{X}_i)^{-1} \mathbf{X}_i^t = \begin{pmatrix} X_i \\ Y_i \end{pmatrix} M_i^2 \begin{pmatrix} X_i^t & Y_i^t \end{pmatrix} = \begin{pmatrix} X_i M_i^2 X_i^t & X_i M_i^2 Y_i^t \\ Y_i M_i^2 X_i^t & Y_i M_i^2 Y_i^t \end{pmatrix}, \end{equation*} where $M_i = (X_i^t X_i + Y_i^t Y_i)^{-1/2}$. We see that \begin{equation*} 2\mathcal{P}_i - I_{2n} = \begin{pmatrix} 2 X_i M_i^2 X_i^t - I_n & 2 X_i M_i^2 Y_i^t \\ 2 Y_i M_i^2 X_i^t & 2 Y_i M_i^2 Y_i^t - I_n \end{pmatrix}. \end{equation*} Using the relations \begin{equation*} \begin{aligned} X_i M_i^2 X_i^t + Y_i M_i^2 Y_i^t &= I_n \\ X_i M_i^2 Y_i^t - Y_i M_i^2 X_i^t &= 0, \end{aligned} \end{equation*} and temporarily setting \begin{equation*} \begin{aligned} A_i &= X_i M_i^2 X_i^t - Y_i M_i^2 Y_i^t \\ B_i &= 2 X_i M_i^2 Y_i^t, \end{aligned} \end{equation*} we can check that \begin{equation*} \begin{aligned} (2 \mathcal{P}_1 - I_{2n}) (2 \mathcal{P}_2 - I_{2n}) &= \begin{pmatrix} A_1 & B_1 \\ B_1 & -A_1 \end{pmatrix} \begin{pmatrix} A_2 & B_2 \\ B_2 & -A_2 \end{pmatrix} \\ &= - \begin{pmatrix} A_1 & -B_1 \\ B_1 & A_1 \end{pmatrix} \begin{pmatrix} -A_2 & -B_2 \\ B_2 & -A_2 \end{pmatrix} = - W. \end{aligned} \end{equation*}
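The coordinate-free form is convenient for numerical checks; the sketch below (ours, assuming \texttt{numpy}) verifies that $W$ computed from (\ref{souriau}) has the block structure of an element of $\mathfrak{U}_J$ and that $W_{11} + i W_{21}$ reproduces $\tilde{W}$ from (\ref{tildeW}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
n = 4

def lag_frame(rng, n):
    X = rng.standard_normal((n, n))
    S = rng.standard_normal((n, n)); S = S + S.T
    return X, S @ X

X1, Y1 = lag_frame(rng, n); X2, Y2 = lag_frame(rng, n)
F1, F2 = np.vstack([X1, Y1]), np.vstack([X2, Y2])
P = lambda F: F @ np.linalg.inv(F.T @ F) @ F.T

W = -(2*P(F1) - np.eye(2*n)) @ (2*P(F2) - np.eye(2*n))    # Souriau map
W11, W21 = W[:n, :n], W[n:, :n]
print(np.allclose(W[:n, n:], -W21), np.allclose(W[n:, n:], W11))

tW = -(X1 + 1j*Y1) @ np.linalg.inv(X1 - 1j*Y1) \
       @ (X2 - 1j*Y2) @ np.linalg.inv(X2 + 1j*Y2)
print(np.allclose(W11 + 1j*W21, tW))
\end{verbatim}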
In order to clarify the relationship between $W$ and $\tilde{W}$, we recall that since $W \in \mathfrak{U}_J$ we have the correspondence \begin{equation*} W = \begin{pmatrix} W_{11} & -W_{21} \\ W_{21} & W_{11} \end{pmatrix}; \quad \iff \tilde{W} = W_{11} + i W_{21}. \end{equation*} We can easily check that $W$ and $\tilde{W}$ have precisely the same eigenvalues, and indeed we have \begin{equation*} \tilde{W} (u+iv) = e^{i\theta} (u+iv) \end{equation*} if and only if \begin{equation*} W {u+iv \choose v - iu} = e^{i\theta} {u+iv \choose v - iu} \quad \text{and} \quad W {-v + iu \choose u + iv} = e^{i\theta} {-v+iu \choose u + iv}. \end{equation*} I.e., $e^{i\theta}$ is an eigenvalue of $\tilde{W}$ with multiplicity $k$ if and only if it is an eigenvalue of $W$ with multiplicity $2k$. Notice that this simply generalizes our observations from the proof of Theorem \ref{intersection_theorem}.
\begin{remark} \label{souriau_remark} We are now in a position to observe that our composition relation from Remark \ref{tildeW_remark} corresponds with Corollary 2.45 in \cite{F}. In particular, if we let $\mathcal{P}_D$ denote projection onto the Dirichlet Lagrangian subspace (i.e., the Lagrangian subspace with frame ${0 \choose I}$), and we set \begin{equation*} \begin{aligned} W_{1D} &= - (2 \mathcal{P}_1 - I) (2 \mathcal{P}_D - I) \\ W_{D2} &= - (2 \mathcal{P}_D - I) (2 \mathcal{P}_2 - I), \end{aligned} \end{equation*} then Corollary 2.45 in \cite{F} asserts \begin{equation*} W = - W_{1D} W_{D2}, \end{equation*} which corresponds with the composition (\ref{composition}). (Here, $W$ is from (\ref{souriau}).) \end{remark}
\subsection{Relation to Furutani's Development} \label{Furutani}
In \cite{F} (Section 3.5), the author takes a different approach to computing the Maslov index for a pair of evolving Lagrangian subspaces, and we verify here that the two approaches are equivalent in the current setting. As a starting point, we denote by $H_{\omega}$ the symplectic Hilbert space obtained by equipping $\mathbb{R}^{2n}$ with the symplectic form $\omega (x, y) = (Jx, y)_{\mathbb{R}^{2n}}$, and likewise we denote by $H_{-\omega}$ the symplectic Hilbert space obtained by equipping $\mathbb{R}^{2n}$ with the symplectic form $- \omega (x, y) = (-Jx, y)_{\mathbb{R}^{2n}}$. Following \cite{F}, we denote the direct sum of these spaces \begin{equation*} \mathbb{H} = H_{\omega} \boxplus H_{-\omega}. \end{equation*}
Now let $\ell_1, \ell_2 \subset \mathbb{R}^{2n}$ denote two Lagrangian subspaces with associated frames $\mathbf{X}_1 = {X_1 \choose Y_1}$ and $\mathbf{X}_2 = {X_2 \choose Y_2}$. We can identify the direct sum $\ell_1 \oplus \ell_2$ with a subspace of $\mathbb{R}^{4n}$. For $z_1, z_2 \in \mathbb{R}^{4n}$, we set \begin{equation*} \omega_{\mathbb{J}} (z_1, z_2) = (\mathbb{J} z_1, z_2)_{\mathbb{R}^{4n}}; \quad \mathbb{J} = \begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix}. \end{equation*} It follows immediately from the assumption that $\ell_1$ and $\ell_2$ are Lagrangian subspaces in $\mathbb{R}^{2n}$ that \begin{equation*} \mathbf{Z} = \begin{pmatrix} \mathbf{X}_1 & 0_{2n \times n} \\ 0_{2n \times n} & \mathbf{X}_2 \end{pmatrix} \end{equation*} is a frame for a Lagrangian subspace in $\mathbb{R}^{4n}$. We denote this Lagrangian subspace $\ell$, and note that we can associate it with $\ell_1 \oplus \ell_2$.
In \cite{F}, the author detects intersections between $\ell_1$ and $\ell_2$ by identifying intersections between $\ell$ and the diagonal in $\mathbb{H}$: i.e., the Lagrangian subspace $\Delta \subset \mathbb{R}^{4n}$ with frame $\mathbf{Z}_{\Delta} = {I_{2n} \choose I_{2n}}$. The orthogonal projection associated with $\ell$ can be expressed as \begin{equation*} \mathcal{P}_{\mathbf{Z}} = \mathbf{Z} (\mathbf{Z}^t \mathbf{Z})^{-1} \mathbf{Z}^t = \begin{pmatrix} \mathcal{P}_1 & 0 \\ 0 & \mathcal{P}_2 \end{pmatrix}, \end{equation*} and likewise the orthogonal projection associated with $\Delta$ can be expressed as \begin{equation*} \mathcal{P}_{\Delta} = \frac{1}{2} \begin{pmatrix} I_{2n} & I_{2n} \\ I_{2n} & I_{2n} \end{pmatrix}. \end{equation*}
We can now compute the Souriau map for $\ell$ and $\Delta$ as \begin{equation*} \mathcal{W} = - (2 \mathcal{P}_{\mathbf{Z}} - I_{4n}) (2 \mathcal{P}_{\Delta} - I_{4n}) = \begin{pmatrix} 0 & I_{2n} - 2\mathcal{P}_1 \\ I_{2n} - 2\mathcal{P}_2 & 0 \end{pmatrix}. \end{equation*} We see that the eigenvalues of $\mathcal{W}$ will satisfy \begin{equation*} \operatorname{det} \begin{pmatrix} -\lambda I_{2n} & I_{2n} - 2\mathcal{P}_1 \\ I_{2n} - 2 \mathcal{P}_2 & -\lambda I_{2n} \end{pmatrix} = \operatorname{det} \Big(\lambda^2 I - (I_{2n} - 2\mathcal{P}_1) (I_{2n} - 2 \mathcal{P}_2)\Big). \end{equation*} We see that the values $- \lambda^2$ will be the eigenvalues of the Souriau map (\ref{souriau}).
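This spectral relationship is also easy to observe numerically (a sketch of ours, assuming \texttt{numpy}): for each eigenvalue $\lambda$ of $\mathcal{W}$, the value $-\lambda^2$ is an eigenvalue of the Souriau map $W$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n = 3

def lag_frame(rng, n):
    X = rng.standard_normal((n, n))
    S = rng.standard_normal((n, n)); S = S + S.T
    return np.vstack([X, S @ X])

F1, F2 = lag_frame(rng, n), lag_frame(rng, n)
P = lambda F: F @ np.linalg.inv(F.T @ F) @ F.T
I2n = np.eye(2*n)

W = -(2*P(F1) - I2n) @ (2*P(F2) - I2n)                    # 2n x 2n Souriau map
calW = np.block([[np.zeros((2*n, 2*n)), I2n - 2*P(F1)],
                 [I2n - 2*P(F2), np.zeros((2*n, 2*n))]])  # 4n x 4n map for (ell, Delta)

mu = np.linalg.eigvals(W)
lam = np.linalg.eigvals(calW)
print(all(np.min(np.abs(mu - (-l**2))) < 1e-8 for l in lam))
\end{verbatim}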
According to Lemma \ref{W_lemma} we have an intersection of $\ell_1$ and $\ell_2$ if and only if $-1$ is an eigenvalue of $W$, and the multiplicity of $-1$ as an eigenvalue of $W$ is twice the dimension of the intersection. In this case, we will have eigenvalues $\lambda$ of $\mathcal{W}$ satisfying $- \lambda^2 = -1$. We see that $\mathcal{W}$ has two corresponding eigenvalues $\lambda = -1, +1$, each with the same multiplicity for $\mathcal{W}$ as $-1$ has for $W$. Reversing the argument, we conclude that $-1$ is an eigenvalue of $W$ if and only if it is an eigenvalue of $\mathcal{W}$, and its multiplicity as an eigenvalue of these two matrices agrees.
Finally, we will be able to conclude that the spectral flow through $-1$ is the same for $W$ and $\mathcal{W}$ if the directions associated with crossings agree. Suppose $e^{i (\pi - \epsilon)}$ is an eigenvalue of $W$ for some small $\epsilon > 0$ (i.e., an eigenvalue rotated slightly clockwise from $-1$). If $\lambda$ is the associated eigenvalue of $\mathcal{W}$ then we will have $- \lambda^2 = e^{i (\pi - \epsilon)}$, and so $\lambda = e^{i (\pi - \frac{\epsilon}{2})}$, $e^{i (2\pi - \frac{\epsilon}{2})}$. If the eigenvalue of $W$ rotates through $-1$ then its counterpart $e^{i (\pi - \frac{\epsilon}{2})}$ will rotate through $-1$ in the same direction. Other cases are similar, and we see that indeed the directions associated with the crossings agree.
\section{Monotonicity} \label{monotonicity_section}
For many applications, such as the ones discussed in Section \ref{applications_section}, we have monotonicity in the following sense: as the parameter $t \in I$ varies in a fixed direction, the eigenvalues of $\tilde{W} (t)$ move monotonically around $S^1$. In this section, we develop a general framework for checking monotonicity in specific cases.
As a starting point, we take the following lemma from \cite{HS} (see also Theorem V.6.1 in \cite{At}):
\begin{lemma} [\cite{HS}, Lemma 3.11.] Let $\tilde{W} (t)$ be a smooth family of unitary $n \times n$ matrices on some interval $I$, satisfying the differential equation $\frac{d}{dt} \tilde{W} (t) = i \tilde{W} (t) \tilde{\Omega} (t)$, where $\tilde{\Omega} (t)$ is a continuous, self-adjoint and negative-definite $n \times n$ matrix. Then the eigenvalues of $\tilde{W} (t)$ move (strictly) monotonically clockwise on the unit circle as $t$ increases. \label{HS_monotonicity} \end{lemma}
In order to employ Lemma \ref{HS_monotonicity} we need to obtain a convenient form for $\frac{d \tilde{W}}{dt}$. For this, we begin by writing $\tilde{W} (t) = - \tilde{W}_1 (t) \tilde{W}_2 (t)$, where \begin{equation*} \begin{aligned} \tilde{W}_1 (t) &= (X_1 (t) + i Y_1 (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ \tilde{W}_2 (t) &= (X_2 (t) - i Y_2 (t)) (X_2 (t) + i Y_2 (t))^{-1}. \end{aligned} \end{equation*} For $\tilde{W}_1 (t)$ we have \begin{equation*} \begin{aligned} \frac{d \tilde{W}_1}{dt} & = (X_1' (t) + i Y_1' (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ & \quad - (X_1 (t) + i Y_1 (t)) (X_1 (t) - i Y_1 (t))^{-1} (X_1' (t) - i Y_1' (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ & = (X_1' (t) + i Y_1' (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ & \quad - \tilde{W}_1 (X_1' (t) - i Y_1' (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ & = \tilde{W}_1 \tilde{W}_1^* (X_1' (t) + i Y_1' (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ & \quad - \tilde{W}_1 (X_1' (t) - i Y_1' (t)) (X_1 (t) - i Y_1 (t))^{-1} \\ & = \tilde{W}_1 \Big{\{}\tilde{W}_1^* (X_1' (t) + i Y_1' (t)) - (X_1' (t) - i Y_1' (t)) \Big{\}} (X_1 (t) - i Y_1 (t))^{-1}, \end{aligned} \end{equation*} where we have liberally taken advantage of the fact that $\tilde{W}$ is unitary. Here, \begin{equation*} \begin{aligned} \{ \cdots \} &= (X_1 (t)^t + i Y_1 (t)^t)^{-1} (X_1 (t)^t - i Y_1 (t)^t) (X_1' (t) + i Y_1' (t)) - (X_1' (t) - i Y_1' (t)) \\ &= (X_1 (t)^t + i Y_1 (t)^t)^{-1} \Big[ (X_1 (t)^t - i Y_1 (t)^t) (X_1' (t) + i Y_1' (t)) \\ & \quad \quad - (X_1 (t)^t + i Y_1 (t)^t) (X_1' (t) - i Y_1' (t)) \Big], \end{aligned} \end{equation*} and \begin{equation*} [ \cdots ] = 2i (X_1 (t)^t Y_1' (t) - Y_1 (t)^t X_1'(t)). \end{equation*} We conclude that \begin{equation*} \frac{d \tilde{W}_1}{dt} = i \tilde{W}_1 (t) \tilde{\Omega}_1 (t), \end{equation*} where \begin{equation*} \tilde{\Omega}_1 (t) = 2 \Big((X_1 (t) - i Y_1 (t))^{-1}\Big)^* \Big(X_1 (t)^t Y_1' (t) - Y_1 (t)^t X_1'(t) \Big) \Big((X_1 (t) - i Y_1 (t))^{-1} \Big). \end{equation*}
Proceeding similarly for $\tilde{W}_2 (t)$ we find \begin{equation*} \frac{d \tilde{W}_2}{dt} = i \tilde{W}_2 (t) \tilde{\Omega}_2 (t), \end{equation*} where \begin{equation*} \tilde{\Omega}_2 (t) = - 2 \Big((X_2 (t) + i Y_2 (t))^{-1}\Big)^* \Big(X_2 (t)^t Y_2' (t) - Y_2 (t)^t X_2'(t) \Big) \Big((X_2 (t) + i Y_2 (t))^{-1} \Big). \end{equation*}
Combining these observations, we compute \begin{equation*} \begin{aligned} \frac{d \tilde{W}}{dt} &= - \frac{d \tilde{W}_1}{dt} \tilde{W}_2 - \tilde{W}_1 \frac{d \tilde{W}_2}{dt} \\ &= -i \tilde{W}_1 (t) \tilde{\Omega}_1 (t) \tilde{W}_2 (t) - i \tilde{W}_1 (t) \tilde{W}_2 (t) \tilde{\Omega}_2 (t) \\ &= i (- \tilde{W}_1 (t) \tilde{W}_2 (t)) \tilde{W}_2 (t)^* \tilde{\Omega}_1 (t) \tilde{W}_2 (t) + i (- \tilde{W}_1 (t) \tilde{W}_2 (t)) \tilde{\Omega}_2 (t) \\ &= i \tilde{W} (t) \Big{\{}\tilde{W}_2 (t)^* \tilde{\Omega}_1 (t) \tilde{W}_2 (t) + \tilde{\Omega}_2 (t) \Big{\}}. \end{aligned} \end{equation*} That is, we have \begin{equation*} \frac{d \tilde{W}}{dt} = i \tilde{W} (t) \tilde{\Omega} (t), \end{equation*} where \begin{equation*} \tilde{\Omega} (t) = \tilde{W}_2 (t)^* \tilde{\Omega}_1 (t) \tilde{W}_2 (t) + \tilde{\Omega}_2 (t). \end{equation*} We notice particularly that we can write \begin{equation*} \tilde{W}_2^* \tilde{\Omega}_1 \tilde{W}_2 = 2 \Big((X_1 - iY_1)^{-1} \tilde{W}_2\Big)^* (X_1^t Y_1' - Y_1^t X_1') \Big((X_1 - iY_1)^{-1} \tilde{W}_2\Big). \end{equation*}
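As a consistency check on these formulas (a sketch of ours, assuming \texttt{numpy}), one can compare $i \tilde{W}(t) \tilde{\Omega}(t)$ against a centered finite-difference approximation of $\frac{d \tilde{W}}{dt}$ along a concrete path; here $\ell_1 (t)$ has frame ${I \choose tA}$ with $A$ symmetric and $\ell_2$ is held fixed, so that $X_1' = 0$, $Y_1' = A$, and $\tilde{\Omega}_2 = 0$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
n, t0, h = 3, 0.37, 1e-6

A = rng.standard_normal((n, n)); A = A + A.T       # ell_1(t): frame (I; tA)
S2 = rng.standard_normal((n, n)); S2 = S2 + S2.T   # ell_2 fixed: frame (I; S2)
I = np.eye(n)
W2 = (I - 1j*S2) @ np.linalg.inv(I + 1j*S2)        # tilde{W}_2 (constant)

def tW(t):
    W1 = (I + 1j*t*A) @ np.linalg.inv(I - 1j*t*A)
    return -W1 @ W2

Z = np.linalg.inv(I - 1j*t0*A)                     # (X_1 - i Y_1)^{-1} at t0
Omega1 = 2 * Z.conj().T @ A @ Z                    # X_1^t Y_1' - Y_1^t X_1' = A here
Omega = W2.conj().T @ Omega1 @ W2                  # Omega_2 = 0 for constant ell_2

dW_fd = (tW(t0 + h) - tW(t0 - h)) / (2*h)
print(np.allclose(dW_fd, 1j * tW(t0) @ Omega, atol=1e-6))
\end{verbatim}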
We see that the nature of $\tilde{\Omega} (t)$ will be determined by the matrices $(X_1 (t)^t Y_1' (t) - Y_1 (t)^t X_1'(t))$ and $(X_2 (t)^t Y_2' (t) - Y_2 (t)^t X_2'(t))$. In order to check that these matrices are symmetric, we differentiate the Lagrangian property \begin{equation*} X_1 (t)^t Y_1 (t) - Y_1 (t)^t X_1 (t) = 0 \end{equation*} to see that \begin{equation*} X_1 (t)^t Y_1' (t) - Y_1 (t)^t X_1'(t) = Y_1'(t)^t X_1 (t) - X_1'(t)^t Y_1 (t). \end{equation*} Symmetry of $(X_1 (t)^t Y_1' (t) - Y_1 (t)^t X_1'(t))$ is immediate, and we proceed similarly for $(X_2 (t)^t Y_2' (t) - Y_2 (t)^t X_2'(t))$. We conclude that $\tilde{\Omega} (t)$ is self-adjoint.
Finally, for monotonicity, we need to check that $\tilde{\Omega} (t)$ is definite. We show how to do this in certain cases in Section \ref{applications_section}. For convenient reference, we summarize these observations into a lemma.
\begin{lemma} Suppose $\ell_1, \ell_2: I \to \Lambda (n)$ denote paths of Lagrangian subspaces with $C^1$ frames $\mathbf{X}_1 = {X_1 \choose Y_1}$ and $\mathbf{X}_2 = {X_2 \choose Y_2}$ (respectively). If the matrices \begin{equation*} - \mathbf{X}_1^t J \mathbf{X}_1' = X_1 (t)^t Y_1' (t) - Y_1 (t)^t X_1'(t) \end{equation*} and (noting the sign change) \begin{equation*} \mathbf{X}_2^t J \mathbf{X}_2' = - (X_2 (t)^t Y_2' (t) - Y_2 (t)^t X_2'(t)) \end{equation*} are both non-negative and at least one is positive definite then the eigenvalues of $\tilde{W} (t)$ rotate in the counterclockwise direction as $t$ increases. Likewise, if both of these matrices are non-positive, and at least one is negative definite then the eigenvalues of $\tilde{W} (t)$ rotate in the clockwise direction as $t$ increases. \end{lemma}
\subsection{Monotonicity at Crossings} \label{monotonicity_at_crossings}
We are often interested in the rotation of eigenvalues of $\tilde{W}$ through $-1$; i.e., the rotation associated with an intersection of our Lagrangian subspaces. Let $t_*$ denote the time of intersection. As discussed in \cite{HS}, if we let $\tilde{\mathcal{P}}$ denote projection onto $\operatorname{ker} (\tilde{W} + I)$, then the rotation of eigenvalues through $-1$ is determined by the eigenvalues of the matrix $\tilde{\mathcal{P}} \tilde{\Omega} (t_*) \tilde{\mathcal{P}}$. Notice that if $\tilde{v} \in \operatorname{ker} (\tilde{W} + I)$ we will have \begin{equation*} - (X_1 (t_*) + i Y_1 (t_*)) (X_1 (t_*) - i Y_1 (t_*))^{-1} (X_2 (t_*) - i Y_2 (t_*)) (X_2 (t_*) + i Y_2 (t_*))^{-1} \tilde{v} = - \tilde{v}, \end{equation*} and correspondingly \begin{equation*} (X_1 (t_*) - i Y_1 (t_*))^{-1} \tilde{W}_2 (t_*) \tilde{v} = (X_1 (t_*) + i Y_1 (t_*))^{-1} \tilde{v}. \end{equation*} Recalling relations (\ref{useful2}), we find that \begin{equation*} (X_1 (t_*) + i Y_1 (t_*))^{-1} \tilde{v} = M_1 (t_*)^2 (X_1 (t_*)^t - i Y_1 (t_*)^t) \tilde{v}. \end{equation*} We see that if $\tilde{\Omega} (t_*)$ acts on $\operatorname{ker} (\tilde{W}+I)$ we can replace it with \begin{equation*} \begin{aligned} \tilde{\Omega}_{\mathcal{P}} (t_*) &:= 2 \Big(M_1 (t_*)^2 (X_1 (t_*)^t - i Y_1 (t_*)^t) \Big)^* \Big(X_1 (t_*)^t Y_1' (t_*) - Y_1^t (t_*) X_1' (t_*) \Big) \\ & \quad \times M_1 (t_*)^2 (X_1 (t_*)^t - i Y_1 (t_*)^t) \\ &-2 \Big(M_2 (t_*)^2 (X_2 (t_*)^t - i Y_2 (t_*)^t) \Big)^* \Big(X_2 (t_*)^t Y_2' (t_*) - Y_2^t (t_*) X_2' (t_*) \Big) \\ & \quad \times M_2 (t_*)^2 (X_2 (t_*)^t - i Y_2 (t_*)^t). \end{aligned} \end{equation*}
If we express $\tilde{v} = v_1 + i v_2$, we can write \begin{equation*} \begin{aligned} (X_1 (t_*) - i Y_1 (t_*))^{-1} \tilde{W}_2 (t_*) \tilde{v} &= M_1 (t_*)^2 (X_1 (t_*)^t - i Y_1 (t_*)^t) (v_1 + iv_2) \\ &= M_1 (t_*)^2 \Big{\{}X_1 (t_*)^t v_1 + Y_1 (t_*)^t v_2 + i (X_1 (t_*)^t v_2 - Y_1 (t_*)^t v_1) \Big{\}} \\ &= M_1 (t_*)^2 \Big{\{}X_1 (t_*)^t v_1 + Y_1 (t_*)^t v_2 \Big{\}}. \end{aligned} \end{equation*} Here, we have observed that it follows from the Lagrangian property that $X_1 (t_*)^t v_2 - Y_1 (t_*)^t v_1 = 0$. Likewise, \begin{equation*} M_2 (t_*)^2 (X_2 (t_*)^t - i Y_2 (t_*)^t) \tilde{v} = M_2 (t_*)^2 \Big{\{}X_2 (t_*)^t v_1 + Y_2 (t_*)^t v_2 \Big{\}}. \end{equation*}
If we now write \begin{equation*} \tilde{\Omega}_{\mathcal{P}} (t_*) = \tilde{\Omega}^{(1)}_{\mathcal{P}} (t_*) + \tilde{\Omega}^{(2)}_{\mathcal{P}} (t_*), \end{equation*} then the quadratic form associated with $\tilde{\Omega}^{(1)}_{\mathcal{P}} (t_*)$ will take the form \begin{equation} \label{omega1} \begin{aligned} \Big(\tilde{\Omega}^{(1)}_{\mathcal{P}} (t_*) \tilde{v}, \tilde{v} \Big)_{\mathbb{C}^n} &= 2 \Big( (X_1 (t_*)^t Y_1' (t_*) - Y_1^t (t_*) X_1' (t_*)) M_1 (t_*)^2 \Big{\{}X_1 (t_*)^t v_1 + Y_1 (t_*)^t v_2 \Big{\}}, \\ &\quad \quad M_1 (t_*)^2 \Big{\{}X_1 (t_*)^t v_1 + Y_1 (t_*)^t v_2 \Big{\}} \Big)_{\mathbb{C}^n} , \end{aligned} \end{equation} and likewise the quadratic form associated with $\tilde{\Omega}^{(2)}_{\mathcal{P}} (t_*)$ (which carries the minus sign from the second term in our expression for $\tilde{\Omega}_{\mathcal{P}} (t_*)$) will take the form \begin{equation} \label{omega2} \begin{aligned} \Big(\tilde{\Omega}^{(2)}_{\mathcal{P}} (t_*) \tilde{v}, \tilde{v} \Big)_{\mathbb{C}^n} &= -2 \Big( (X_2 (t_*)^t Y_2' (t_*) - Y_2^t (t_*) X_2' (t_*)) M_2 (t_*)^2 \Big{\{}X_2 (t_*)^t v_1 + Y_2 (t_*)^t v_2 \Big{\}}, \\ & \quad \quad M_2 (t_*)^2 \Big{\{}X_2 (t_*)^t v_1 + Y_2 (t_*)^t v_2 \Big{\}} \Big)_{\mathbb{C}^n}. \end{aligned} \end{equation} We will use (\ref{omega1}) and (\ref{omega2}) in our next section in which we relate our approach to the development of \cite{rs93}, based on crossing forms.
\subsection{Relation to Crossing Forms} \label{crossing_forms_section}
In this section, we discuss the relation between our development and the crossing forms of \cite{rs93}. As a starting point, let $\ell_1 (t)$ denote a path of Lagrangian subspaces, and let $\ell_2$ denote a fixed {\it target} Lagrangian subspace. Let the respective frames be \begin{equation*} \mathbf{X}_1 (t) = {X_1 (t) \choose Y_1 (t)}; \quad \mathbf{X}_2 = {X_2 \choose Y_2}, \end{equation*} and let $t_*$ denote the time of a crossing; i.e., \begin{equation*} \ell_1 (t_*) \cap \ell_2 \ne \{0\}. \end{equation*} The corresponding matrix $\tilde{W} (t)$ will be \begin{equation*} \tilde{W} (t) = - (X_1 (t) + i Y_1 (t)) (X_1 (t) - i Y_1 (t))^{-1} (X_2 - i Y_2) (X_2 + i Y_2)^{-1}. \end{equation*} Our goal is to compare the information obtained by computing $\tilde{W}'(t_*)$ with the information we get from the crossing form at $t_*$.
Following \cite{rs93}, we construct the crossing form at $t_*$ as a map \begin{equation*} \Gamma (\ell_1, \ell_2; t_*): \ell_1 (t_*) \cap \ell_2 \to \mathbb{R} \end{equation*} defined as follows: given $v \in \ell_1 (t_*) \cap \ell_2$, we find $u \in \mathbb{R}^n$ so that $v = \mathbf{X}_1 (t_*) u$, and compute \begin{equation*} \begin{aligned} \Gamma (\ell_1, \ell_2; t_*) (v) &= (X_1 (t_*) u, Y_1' (t_*) u)_{\mathbb{R}^n} - (Y_1 (t_*) u, X_1' (t_*) u)_{\mathbb{R}^n} \\ &= \Big( (X_1 (t_*)^t Y_1' (t_*) - Y_1 (t_*)^t X_1' (t_*)) u, u \Big)_{\mathbb{R}^n}. \end{aligned} \end{equation*} Since $v \in \ell_1 (t_*) \cap \ell_2 \subset \ell_1 (t_*)$ the vector $u$ is uniquely defined and we can compute it in terms of the Moore-Penrose pseudo-inverse of $\mathbf{X}_1$, \begin{equation*} u = (\mathbf{X}_1^t \mathbf{X}_1)^{-1} \mathbf{X}_1^t v = M_1^2 (X_1^t v_1 + Y_1^t v_2), \end{equation*} where $v = {v_1 \choose v_2}$.
Comparing with (\ref{omega1}), and taking $\mathbf{X}_2$ in this setting to be $\mathbf{X}_2 (t_*)$ in the setting of (\ref{omega1}), we see that \begin{equation} \label{crossing_form_relation} \Gamma (\ell_1, \ell_2; t_*) (v) = \frac{1}{2} \Big(\tilde{\Omega}^{(1)}_{\mathcal{P}} (t_*) \tilde{v}, \tilde{v} \Big)_{\mathbb{C}^n}. \end{equation} When computing the Maslov index with crossing forms, the rotation of eigenvalues of $\tilde{W}$ through $-1$ is determined by the signature of the crossing form. We see from (\ref{crossing_form_relation}) that this information is encoded in the eigenvalues of $\tilde{\Omega}^{(1)}_{\mathcal{P}} (t_*)$.
Turning now to path pairs, we recall that in \cite{rs93} the crossing form for a pair of Lagrangian paths $\ell_1 (t)$ and $\ell_2 (t)$ is defined as \begin{equation*} \Gamma (\ell_1, \ell_2; t_*) = \Gamma (\ell_1, \ell_2 (t_*); t_*) - \Gamma (\ell_2, \ell_1 (t_*); t_*). \end{equation*} Here, $\ell_2 (t_*)$ is viewed as a constant Lagrangian subspace, so that our previous development can be applied to $\Gamma (\ell_1, \ell_2 (t_*); t_*)$, and similarly for $\Gamma (\ell_2, \ell_1 (t_*); t_*)$, in which case $\ell_1 (t_*)$ is viewed as a constant Lagrangian subspace. In the previous calculations, we have already checked that \begin{equation*} \Gamma (\ell_1, \ell_2 (t_*); t_*) (v) = \frac{1}{2} \Big(\tilde{\Omega}^{(1)}_{\mathcal{P}} (t_*) \tilde{v}, \tilde{v} \Big)_{\mathbb{C}^n}, \end{equation*} and we similarly find that \begin{equation*} \Gamma (\ell_2, \ell_1 (t_*); t_*) (v) = -\frac{1}{2} \Big(\tilde{\Omega}^{(2)}_{\mathcal{P}} (t_*) \tilde{v}, \tilde{v} \Big)_{\mathbb{C}^n}. \end{equation*} Combining these expressions, we see that the crossing form for the Lagrangian pair $(\ell_1 (t), \ell_2 (t))$ at a crossing point $t_*$ is \begin{equation*} \Gamma (\ell_1, \ell_2; t_*) = \frac{1}{2} \Big(\tilde{\Omega}_{\mathcal{P}} (t_*) \tilde{v}, \tilde{v} \Big)_{\mathbb{C}^n}. \end{equation*}
\section{Applications} \label{applications_section}
Although full applications will be carried out in separate papers, we indicate two motivating applications for completeness.
\noindent {\bf Application 1.} In \cite{HS}, the authors consider Schr\"odinger equations \begin{equation} \label{schrodinger_01} \begin{aligned} -y'' + V(x) y &= \lambda y \\ \alpha_1 y(0) + \alpha_2 y'(0) &= 0 \\ \beta_1 y(1) + \beta_2 y'(1) &= 0, \end{aligned} \end{equation} where $y$ takes values in $\mathbb{R}^n$, $V \in C([0, 1])$ is a symmetric, real, $n \times n$ matrix-valued potential, the coefficient matrices $\alpha_i, \beta_i$ are real $n \times n$ matrices with \begin{equation} \label{rank_condition} \text{rank} \begin{bmatrix} \alpha_1 & \alpha_2 \end{bmatrix} = n; \quad \text{rank} \begin{bmatrix} \beta_1 & \beta_2 \end{bmatrix} = n, \end{equation} and we assume separated, self-adjoint boundary conditions, for which we have \begin{equation} \label{sac} \begin{aligned} \alpha_1 \alpha_2^t - \alpha_2 \alpha_1^t &= 0; \\ \beta_1 \beta_2^t - \beta_2 \beta_1^t &= 0. \end{aligned} \end{equation} By a choice of scaling we can take, without loss of generality, \begin{equation*} \begin{aligned} \alpha_1 \alpha_1^t + \alpha_2 \alpha_2^t &= I; \\ \beta_1 \beta_1^t + \beta_2 \beta_2^t &= I. \end{aligned} \end{equation*}
In order to place this system in the current framework, we set $p = y$, $q = y'$, and $\mathbf{p} = {p \choose q}$, so that it can be expressed as a first-order system \begin{equation} \label{system01} \frac{d \mathbf{p}}{dx} = \mathbb{A} (x; \lambda) \mathbf{p}; \quad \mathbb{A} (x; \lambda) = \begin{pmatrix} 0 & I \\ V(x) - \lambda I & 0 \end{pmatrix}. \end{equation} Since $\text{rank} \begin{bmatrix} \alpha_1 & \alpha_2 \end{bmatrix} = n$, there exists an $n$-dimensional space of solutions to the left boundary condition \begin{equation*} \begin{bmatrix} \alpha_1 & \alpha_2 \end{bmatrix} \mathbf{p} (0) = 0 \end{equation*} (i.e., the kernel of $\begin{bmatrix} \alpha_1 & \alpha_2 \end{bmatrix}$). In particular, we see from (\ref{sac}) that we can take \begin{equation*} \mathbf{X}_1 (0,\lambda) = \begin{pmatrix} \alpha_2^t \\ - \alpha_1^t \end{pmatrix}. \end{equation*} By virtue of the Lagrangian property, we see that $\mathbf{X}_1 (0; \lambda)$ is the frame for a Lagrangian subspace.
Let $\mathbf{X}_1 (x, \lambda)$ be a path of frames created by starting with $\mathbf{X}_1 (0, \lambda)$ and evolving according to (\ref{system01}). In order to see that $\mathbf{X}_1 (x, \lambda)$ continues to be a frame for a Lagrangian subspace for all $x \in [0,1]$, we begin by setting \begin{equation*} Z(x,\lambda) = X_1(x,\lambda)^t Y_1(x,\lambda) - Y_1(x, \lambda)^t X_1(x, \lambda), \end{equation*} and noting that $Z(0,\lambda) = 0$. Also (using prime to denote differentiation with respect to $x$), \begin{equation*} \begin{aligned} Z' &= (X_1')^t Y_1 + X_1^t Y_1' - (Y_1')^t X_1 - Y_1^t X_1' \\ &= Y_1^t Y_1 + X_1^t (V(x) X_1 - \lambda X_1) - (V(x) X_1 - \lambda X_1)^t X_1 - Y_1^t Y_1 \\ &= 0, \end{aligned} \end{equation*} where we have observed $X_1' = Y_1$, $Y_1' = V(x) X_1 - \lambda X_1$, and have used our assumption that $V$ is symmetric. We see that $Z(x,\lambda)$ is constant in $x$, and since $Z(0,\lambda) = 0$ this means $Z(x,\lambda) = 0$ for all $x \in [0,1]$. We conclude from Proposition \ref{Lagrangian_property} (noting that the columns of $\mathbf{X}_1 (x, \lambda)$ remain linearly independent, since the solution operator for (\ref{system01}) is invertible) that $\mathbf{X}_1 (x, \lambda)$ is the frame for a Lagrangian subspace for all $x \in [0,1]$. As usual, we denote the Lagrangian subspace associated with $\mathbf{X}_1$ by $\ell_1$.
In this case, the second (``target") Lagrangian subspace is the one associated with the boundary conditions at $x = 1$. I.e., \begin{equation*} \mathbf{X}_2 = {X_2 \choose Y_2} = {\beta_2^t \choose -\beta_1^t}, \end{equation*} which is Lagrangian due to our boundary condition and the Lagrangian property. We denote the Lagrangian subspace associated with $\mathbf{X}_2$ by $\ell_2$. We find that \begin{equation*} \tilde{W} (x,\lambda) = - (X_1(x,\lambda) + i Y_1(x,\lambda)) (X_1(x,\lambda) - i Y_1(x,\lambda))^{-1} (\beta_2^t + i \beta_1^t) (\beta_2^t - i \beta_1^t)^{-1}. \end{equation*} For comparison with \cite{HS}, we observe that \begin{equation} \label{compareHS} (\beta_2^t + i \beta_1^t) (\beta_2^t - i \beta_1^t)^{-1} = \beta_2^t \beta_2 - \beta_1^t \beta_1 + 2 i (\beta_2^t \beta_1), \end{equation} and this right-hand side, along with the negative sign, is the form that appears in \cite{HS} (see p. 4517). In order to verify (\ref{compareHS}), we directly compute \begin{equation*} (\beta_2 + i \beta_1) (\beta_2^t - i \beta_1^t) = \beta_2 \beta_2^t + \beta_1 \beta_1^t + i (\beta_1 \beta_2^t - \beta_2 \beta_1^t) = I, \end{equation*} showing that \begin{equation*} (\beta_2^t - i \beta_1^t)^{-1} = (\beta_2 + i \beta_1). \end{equation*} But then \begin{equation*} \begin{aligned} (\beta_2^t + i \beta_1^t) (\beta_2^t - i \beta_1^t)^{-1} &= (\beta_2^t + i \beta_1^t) (\beta_2 + i \beta_1) \\ &= \beta_2^t \beta_2 - \beta_1^t \beta_1 + i (\beta_2^t \beta_1 + \beta_1^t \beta_2) \\ &= \beta_2^t \beta_2 - \beta_1^t \beta_1 + 2 i (\beta_2^t \beta_1). \end{aligned} \end{equation*} (These are the same considerations that led to (\ref{useful2}).)
Turning to the important property of monotonicity, we see that we can consider monotonicity as $x$ varies or as $\lambda$ varies (or, in principle, we could consider any other path in the $x$-$\lambda$ plane). We find that while monotonicity doesn't generally hold as $x$ varies (except in special cases, such as Dirichlet boundary conditions), it does hold generally as $\lambda$ varies. In order to see this, we observe that in light of Section \ref{monotonicity_section} we can write \begin{equation*} \frac{\partial \tilde{W}}{\partial \lambda} = i \tilde{W} \tilde{\Omega}, \end{equation*} where \begin{equation*} \tilde{\Omega} = 2 \Big((X_1-iY_1)^{-1} \tilde{W}_2 \Big)^* \Big(X_1^t \partial_{\lambda} Y_1 - Y_1^t \partial_{\lambda} X_1 \Big) \Big((X_1 - iY_1)^{-1} \tilde{W}_2 \Big), \end{equation*} and \begin{equation*} \tilde{W}_2 = (\beta_2^t + i \beta_1^t) (\beta_2^t - i \beta_1^t)^{-1}. \end{equation*} We see that monotonicity is determined by the matrix \begin{equation*} A (x, \lambda) = X_1 (x,\lambda)^t \partial_{\lambda} Y_1 (x,\lambda) - Y_1 (x,\lambda)^t \partial_{\lambda} X_1 (x,\lambda), \end{equation*} where our introduction of the notation $A(x,\lambda)$ is simply for the convenience of the next calculation. Differentiating with respect to $x$, we find \begin{equation*} \begin{aligned} A' &= (X_1')^t \partial_{\lambda} Y_1 + X_1^t \partial_{\lambda} Y_1' - (Y_1')^t \partial_{\lambda} X_1 - Y_1^t \partial_{\lambda} X_1' \\ &= Y_1^t \partial_{\lambda} Y_1 + X_1^t \partial_{\lambda} (V(x) X_1 - \lambda X_1) - (V(x) X_1 - \lambda X_1)^t \partial_{\lambda} X_1 - Y_1^t \partial_{\lambda} Y_1 \\ &= - X_1^t X_1. \end{aligned} \end{equation*}
Integrating on $[0,x]$, we find \begin{equation*} A (x,\lambda) = X_1(0,\lambda)^t \partial_{\lambda} Y_1 (0,\lambda) - Y_1(0,\lambda)^t \partial_{\lambda} X_1 (0,\lambda) - \int_0^x X_1(y,\lambda)^t X_1(y,\lambda) dy. \end{equation*} We observe that since $X_1(0,\lambda) = \alpha_2^t$ and $Y_1(0,\lambda) = -\alpha_1^t$, we have $\partial_{\lambda} X_1 (0,\lambda) = 0$ and $\partial_{\lambda} Y_1 (0,\lambda) = 0$, and so \begin{equation*} A(x, \lambda) = - \int_0^x X_1(y,\lambda)^t X_1(y,\lambda) dy, \end{equation*} which is negative definite. We conclude that $\tilde{\Omega}$ is negative definite, and so for any $x \in [0,1]$, as $\lambda$ increases the eigenvalues of $\tilde{W}$ rotate monotonically in the clockwise direction.
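The identity $A(x,\lambda) = - \int_0^x X_1(y,\lambda)^t X_1 (y,\lambda) \, dy$ can also be checked numerically, approximating $\partial_{\lambda} X_1$ and $\partial_{\lambda} Y_1$ by centered differences in $\lambda$; the potential and boundary data below are the same hypothetical choices used in the sketch above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n = 2
theta = np.array([0.3, 1.1])                        # hypothetical Robin-type data
alpha1, alpha2 = np.diag(np.cos(theta)), np.diag(np.sin(theta))
P0 = np.vstack([alpha2.T, -alpha1.T]).ravel()       # lambda-independent initial frame

def V(x):                                           # hypothetical symmetric potential
    return np.array([[2.0 + np.sin(x), 0.5],
                     [0.5, 1.0 + x]])

def frame(lam, xs):
    def rhs(x, p):
        P = p.reshape(2 * n, n)
        return np.vstack([P[n:], (V(x) - lam * np.eye(n)) @ P[:n]]).ravel()
    sol = solve_ivp(rhs, (0.0, 1.0), P0, t_eval=xs, rtol=1e-11, atol=1e-13)
    return sol.y.T.reshape(len(xs), 2 * n, n)

xs, lam, h = np.linspace(0.0, 1.0, 401), 1.5, 1e-5
F0, Fp, Fm = frame(lam, xs), frame(lam + h, xs), frame(lam - h, xs)
X1, Y1 = F0[-1, :n], F0[-1, n:]
dX1 = (Fp[-1, :n] - Fm[-1, :n]) / (2 * h)           # finite-difference d/dlambda
dY1 = (Fp[-1, n:] - Fm[-1, n:]) / (2 * h)
A = X1.T @ dY1 - Y1.T @ dX1
G = np.einsum('kij,kil->kjl', F0[:, :n], F0[:, :n]) # X1(y)^t X1(y) along [0,1]
dx = xs[1] - xs[0]
integral = dx * (0.5 * G[0] + G[1:-1].sum(axis=0) + 0.5 * G[-1])
print(np.max(np.abs(A + integral)))                 # small: A = -int_0^1 X1^t X1 dy
\end{verbatim}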
In order to summarize the result that these observations lead to, we will find it productive to fix $s_0 > 0$ (taken sufficiently small during the analysis) and $\lambda_{\infty} > 0$ (taken sufficiently large during the analysis), and to consider the rectangular path \begin{equation*} \Gamma = \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4, \end{equation*} where the paths $\{\Gamma_i\}_{i=1}^4$ are depicted in Figure \ref{F1} (taken from \cite{HS}).
\begin{figure}
\caption{Schematic of the path $\Gamma = \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4$}
\label{F1}
\end{figure}
Due to path additivity, \begin{equation*} \text{Mas} (\ell_1, \ell_2; \Gamma) = \text{Mas} (\ell_1, \ell_2; \Gamma_1) + \text{Mas} (\ell_1, \ell_2; \Gamma_2) + \text{Mas} (\ell_1, \ell_2; \Gamma_3) + \text{Mas} (\ell_1, \ell_2; \Gamma_4), \end{equation*} and by homotopy invariance the Maslov index around any closed path will be 0, so that \begin{equation*} \text{Mas} (\ell_1, \ell_2; \Gamma) = 0. \end{equation*}
In order to deal efficiently with our self-adjoint boundary conditions, we adapt an elegant theorem from \cite{BK} (see also an earlier version in \cite{Kuchment2004}).
\begin{theorem}[Adapted from \cite{BK}] \label{BKtheorem} Let $\alpha_1$ and $\alpha_2$ be as described in (\ref{rank_condition})-(\ref{sac}). Then there exist three orthogonal (and mutually orthogonal) projection matrices $P_D$ (the Dirichlet projection), $P_N$ (the Neumann projection), and $P_R = I - P_D - P_N$ (the Robin projection), and an invertible self-adjoint operator $\Lambda$ acting on the space $P_R \mathbb{R}^n$ such that the boundary condition \begin{equation*} \alpha_1 y(0) + \alpha_2 y'(0) = 0 \end{equation*} can be expressed as \begin{equation*} \begin{aligned} P_D y(0) &= 0 \\ P_N y'(0) &= 0 \\ P_R y'(0) &= \Lambda P_R y(0). \end{aligned} \end{equation*} Moreover, $P_D$ can be constructed as the projection onto the kernel of $\alpha_2$ and $P_N$ can be constructed as the projection onto the kernel of $\alpha_1$. Construction of the operator $\Lambda$ is discussed in more detail in \cite{BK}, and also in \cite{HS}. Precisely the same statement holds for $\beta_1$ and $\beta_2$ for the boundary condition at $x = 1$. \end{theorem}
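For concreteness, the projections $P_D$ and $P_N$ can be computed directly from the kernels of $\alpha_2$ and $\alpha_1$, exactly as in the theorem. The sketch below does this for a hypothetical mixed boundary condition with one Dirichlet, one Neumann, and one Robin direction (the construction of $\Lambda$ itself is not reproduced here; see \cite{BK} and \cite{HS}).
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

# hypothetical boundary data for n = 3: first component Dirichlet,
# second Neumann, third Robin (y'(0) = 2 y(0))
alpha1 = np.diag([1.0, 0.0, -2.0])
alpha2 = np.diag([0.0, 1.0, 1.0])
n = alpha1.shape[0]

def proj_onto(columns):
    """Orthogonal projection onto the column span (empty basis -> zero matrix)."""
    if columns.size == 0:
        return np.zeros((n, n))
    q, _ = np.linalg.qr(columns)
    return q @ q.T

P_D = proj_onto(null_space(alpha2))    # Dirichlet projection: onto ker(alpha2)
P_N = proj_onto(null_space(alpha1))    # Neumann projection:   onto ker(alpha1)
P_R = np.eye(n) - P_D - P_N            # Robin projection
print(np.round(P_D, 3), np.round(P_N, 3), np.round(P_R, 3), sep="\n\n")
\end{verbatim}
For this choice $P_D$, $P_N$, and $P_R$ come out as the coordinate projections onto the first, second, and third components, and on $P_R \mathbb{R}^3$ the boundary condition reads $y'(0) = 2 y(0)$, so that $\Lambda$ acts there as multiplication by $2$.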
We also take the following from \cite{HS}.
\begin{definition} \label{B_defined} Let $(P_{D_0}, P_{N_0}, P_{R_0}, \Lambda_0)$ denote the projection quadruplet associated with our boundary conditions at $x = 0$, and let $(P_{D_1}, P_{N_1}, P_{R_1}, \Lambda_1)$ denote the projection quadruplet associated with our boundary conditions at $x = 1$. We denote by $B$ the self-adjoint operator obtained by restricting $(P_{R_0} \Lambda_0 P_{R_0} - P_{R_1} \Lambda_1 P_{R_1})$ to the space $(\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1})$. \end{definition}
The main result of \cite{HS} is the following theorem.
\begin{theorem} \label{hs_main} For system (\ref{schrodinger_01}), let $V \in C([0, 1])$ be a symmetric matrix in $\mathbb{R}^{n \times n}$, and let $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$ be as in (\ref{rank_condition})-(\ref{sac}). In addition, let $Q$ denote projection onto the kernel of $B$, and make the non-degeneracy assumption $0 \notin \sigma (Q (V(0)-(P_{R_0} \Lambda_0 P_{R_0})^2) Q)$. Then we have \begin{equation*} \operatorname{Mor} (H) = - \operatorname{Mas} (\ell, \ell_1; \Gamma_2) + \operatorname{Mor} (B) + \operatorname{Mor} (Q(V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2)Q). \end{equation*} \end{theorem}
In order to clarify the nature of the terms $\operatorname{Mor} (B) + \operatorname{Mor} (Q(V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2)Q)$, we show here how they easily arise from a naive perturbation argument; for a rigorous treatment, the reader is referred to \cite{HS}.
First, we observe that a crossing at a point $(s, \lambda)$ corresponds with a solution to the system \begin{equation} \label{schrodinger_01s} \begin{aligned} -y'' + V(x) y &= \lambda y \\ \alpha_1 y(0) + \alpha_2 y'(0) &= 0 \\ \beta_1 y(s) + \beta_2 y'(s) &= 0. \end{aligned} \end{equation} Setting $\xi = x/s$ and $u(\xi) = y(x)$, we obtain the system \begin{equation} \label{schrodinger_01u} \begin{aligned} H(s) u := -u'' + s^2 V(s \xi) u &= s^2 \lambda u \\ \alpha_1 u(0) + \frac{1}{s} \alpha_2 u'(0) &= 0 \\ \beta_1 u(1) + \frac{1}{s}\beta_2 u'(1) &= 0. \end{aligned} \end{equation} Employing a straightforward energy estimate similar to the proof of Lemma 3.12 in \cite{HS}, we find that there exists a constant $c$ so that any eigenvalue of (\ref{schrodinger_01s}) satisfies \begin{equation*}
\lambda (s) \ge - \frac{c}{s} - \|V\|_{L^{\infty} (0,1)}. \end{equation*} This means that by taking $\lambda_{\infty}$ sufficiently large we can ensure that there are no crossings along the left shelf. In order to understand crossings along the bottom shelf we set $\tilde{\lambda} = s^2 \lambda (s)$ and take the naive expansions \begin{equation} \label{naive} \begin{aligned} \tilde{\lambda} (s) &= \tilde{\lambda}_0 + \tilde{\lambda}_1 s + \tilde{\lambda}_2 s^2 + \cdots \\ \phi(\xi; s) &= \phi_0 (\xi) + \phi_1 (\xi) s + \phi_2 (\xi) s^2 + \cdots, \end{aligned} \end{equation} where $\phi(\xi; s)$ is an eigenfunction corresponding with eigenvalue $\tilde{\lambda} (s)$. We emphasize that the spectral curves we are looking for will have the corresponding form \begin{equation} \lambda(s) = \frac{\tilde{\lambda}_0}{s^2} + \frac{\tilde{\lambda}_1}{s} + \tilde{\lambda}_2 + \dots. \end{equation}
Using Theorem \ref{BKtheorem}, we can express the boundary conditions for (\ref{schrodinger_01u}) as \begin{alignat*}{2} P_{D_0} u (0) &= 0; & \qquad P_{D_1} u (1) &= 0; \\
P_{N_0} u'(0) &= 0; & \qquad P_{N_1} u'(1) &= 0; \\
P_{R_0} u'(0) &= s \Lambda_0 P_{R_0} u(0); & \qquad P_{R_1} u'(1) &= s \Lambda_1 P_{R_1} u(1). \end{alignat*} Upon substitution of (\ref{naive}) into (\ref{schrodinger_01u}) with projection boundary conditions, we find that the zeroth order equation is $-\phi_0'' = \tilde{\lambda}_0 \phi_0$ with boundary conditions \begin{alignat*}{2} P_{D_0} \phi_0 (0) &= 0; & \qquad P_{D_1} \phi_0 (1) &= 0; \\
P_{N_0} \phi_0'(0) &= 0; & \qquad P_{N_1} \phi_0'(1) &= 0; \\
P_{R_0} \phi_0'(0) &= 0; & \qquad P_{R_1} \phi_0'(1) &= 0. \end{alignat*} Taking an $L^2 (0,1)$ inner product of this equation with $\phi_0$ we obtain \begin{equation*} \begin{aligned}
\tilde{\lambda}_0 \|\phi_0\|_{L^2 (0,1)}^2 &= - \langle \phi_0'', \phi_0 \rangle \\
&= \|\phi_0'\|_{L^2 (0,1)}^2 - (\phi_0' (1), \phi_0 (1))_{\mathbb{R}^n} + (\phi_0' (0), \phi_0 (0))_{\mathbb{R}^n}. \end{aligned} \end{equation*} Observing that \begin{equation} \label{working_with_projections} \begin{aligned} (\phi_0' (1), \phi_0 (1))_{\mathbb{R}^n} &= (\phi_0' (1), P_{D_1} \phi_0 (1) + P_{N_1} \phi_0 (1) + P_{R_1} \phi_0 (1))_{\mathbb{R}^n} \\ &= (P_{N_1} \phi_0' (1) + P_{R_1} \phi_0' (1), \phi_0 (1))_{\mathbb{R}^n} = 0, \end{aligned} \end{equation} and noting that similarly $(\phi_0' (0), \phi_0 (0))_{\mathbb{R}^n} = 0$, we see that \begin{equation*}
\tilde{\lambda}_0 \|\phi_0\|_{L^2 (0,1)}^2 = \|\phi_0'\|_{L^2 (0,1)}^2. \end{equation*} Clearly, we must have $\tilde{\lambda}_0 \ge 0$, and if $\tilde{\lambda}_0 > 0$ the associated spectral curve will lie in the right quarter-plane and will not cross into the Maslov Box. On the other hand, if $\tilde{\lambda}_0 = 0$
then $\|\phi_0'\|_{L^2 (0,1)} = 0$ and $\phi_0$ will be a constant function. In this case, the only requirement on the constant vector $\phi_0$ is (from the projection boundary conditions) \begin{equation*} \phi_0 \in (\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1}). \end{equation*}
Let $P$ denote the orthogonal projection onto the space $(\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1})$ and set \begin{equation*} B = P (P_{R_0} \Lambda_0 P_{R_0} - P_{R_1} \Lambda_1 P_{R_1}) P \end{equation*} (i.e., $B$ is the matrix defined in (\ref{B_defined})). Since $B$ is symmetric and maps $(\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1})$ to itself, we can create an orthonormal basis for $(\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1})$ from the eigenvectors of $B$. Moreover, let $Q$ denote the orthogonal projection onto $\operatorname{ker} B$ (as in the statement of Theorem \ref{hs_main}) and create an orthonormal basis for $\operatorname{ker} B$ from the eigenvectors of $Q (V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2) Q$.
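To make the two correction terms in Theorem \ref{hs_main} concrete, the following sketch assembles $B$ and $Q(V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2)Q$ from hypothetical projection data at the two endpoints and counts negative eigenvalues; the boundary data, $\Lambda_0$, $\Lambda_1$, and $V(0)$ below are invented solely for the illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

n = 3
# hypothetical boundary data, written directly in projection form:
# x = 0: all-Robin, with Lambda_0 = diag(1, -1, 1/2)
P_D0, P_R0 = np.zeros((n, n)), np.eye(n)
L0 = np.diag([1.0, -1.0, 0.5])
# x = 1: first component Dirichlet, Robin in the remaining two
P_D1, P_R1 = np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 1.0])
L1 = np.diag([0.0, 2.0, 0.5])
V0 = np.array([[0.5, 0.2, 0.0],                  # hypothetical V(0)
               [0.2, 3.0, 0.1],
               [0.0, 0.1, 0.1]])

def morse(M, tol=1e-10):
    """Number of negative eigenvalues of a symmetric matrix."""
    return int(np.sum(np.linalg.eigvalsh(M) < -tol))

N = null_space(np.vstack([P_D0, P_D1]))          # basis of ker P_D0 cap ker P_D1
M = P_R0 @ L0 @ P_R0 - P_R1 @ L1 @ P_R1
B = N.T @ M @ N                                  # B restricted to that intersection
w, U = np.linalg.eigh(B)
W = N @ U[:, np.abs(w) < 1e-10]                  # basis of ker B, back in R^n
R0 = P_R0 @ L0 @ P_R0
C = W.T @ (V0 - R0 @ R0) @ W                     # Q (V(0) - (P_R0 L0 P_R0)^2) Q
print(morse(B), morse(C))                        # here: 1 and 1
\end{verbatim}
With these choices $(\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1})$ is spanned by the second and third coordinate directions, $B$ has eigenvalues $-3$ and $0$, and the kernel direction contributes one more negative eigenvalue through $V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2$.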
Now, we are ready for the order 1 equation, assuming already that $\tilde{\lambda}_0 = 0$. For any $\phi_0$ selected from our chosen basis for $(\operatorname{ker} P_{D_0}) \cap (\operatorname{ker} P_{D_1})$, we obtain the equation $-\phi_1'' = \tilde{\lambda}_1 \phi_0$, with projection boundary conditions \begin{alignat}{2} \label{phi1bc} P_{D_0} \phi_1 (0) &= 0; & \qquad P_{D_1} \phi_1 (1) &= 0; \notag \\
P_{N_0} \phi_1'(0) &= 0; & \qquad P_{N_1} \phi_1'(1) &= 0; \\
P_{R_0} \phi_1'(0) &= \Lambda_0 P_{R_0} \phi_0; & \qquad P_{R_1} \phi_1'(1) &= \Lambda_1 P_{R_1} \phi_0. \end{alignat} Upon taking an $L^2 (0,1)$ inner product with $\phi_0$, we find \begin{equation*} \begin{aligned}
\tilde{\lambda}_1 |\phi_0|_{\mathbb{R}^n}^2 &= - \langle \phi_1'', \phi_0 \rangle \\ &= \Big((P_{R_0} \Lambda_0 P_{R_0} - P_{R_1} \Lambda_1 P_{R_1}) \phi_0, \phi_0 \Big)_{\mathbb{R}^n} = \Big(B \phi_0, \phi_0 \Big)_{\mathbb{R}^n}, \end{aligned} \end{equation*} using a calculation similar to (\ref{working_with_projections}). Since $\phi_0$ is an eigenvector for $B$, $\tilde{\lambda}_1$ will be an eigenvalue of $B$. If $\tilde{\lambda}_1 > 0$ this eigenvalue will be in the right half-plane for $s$ small and so won't cross into the Maslov Box. On the other hand, if $\tilde{\lambda}_1 < 0$ we will obtain a spectral curve with the asymptotic form $\lambda (s) \sim \frac{\tilde{\lambda}_1}{s}$, and (for $\lambda_{\infty}$ chosen sufficiently large) this will enter the Maslov Box through the bottom shelf. These crossings are precisely counted by the term $\operatorname{Mor} (B)$ in Theorem \ref{hs_main}.
Finally, if $\tilde{\lambda}_1 = 0$ we need to proceed with the next order of our perturbation argument. For this step, we note that we have $\tilde{\lambda}_0 = 0$ and $\tilde{\lambda}_1 = 0$, and that we now restrict to $\phi_0 \in \operatorname{ker} B$. Our second order perturbation equation is $-\phi_2'' + V(0) \phi_0 = \tilde{\lambda}_2 \phi_0$ subject to the conditions \begin{alignat*}{2} P_{D_0} \phi_2 (0) &= 0; & \qquad P_{D_1} \phi_2 (1) &= 0; \\
P_{N_0} \phi_2'(0) &= 0; & \qquad P_{N_1} \phi_2'(1) &= 0; \\
P_{R_0} \phi_2'(0) &= \Lambda_0 P_{R_0} \phi_1 (0); & \qquad P_{R_1} \phi_2'(1) &= \Lambda_1 P_{R_1} \phi_1 (1). \end{alignat*} We take an $L^2 (0,1)$ inner product of this equation with $\phi_0$ and compute \begin{equation*} \begin{aligned}
\tilde{\lambda}_2 |\phi_0|_{\mathbb{R}^{n}}^2 - (V(0) \phi_0, \phi_0)_{\mathbb{R}^{n}} &= - \langle \phi_2'', \phi_0 \rangle = - (\phi_2'(1), \phi_0)_{\mathbb{R}^{n}} + (\phi_2'(0), \phi_0)_{\mathbb{R}^{n}} \\ &= (P_{R_0} \Lambda_0 P_{R_0} \phi_1 (0) - P_{R_1} \Lambda_1 P_{R_1} \phi_1 (1), \phi_0)_{\mathbb{R}^{n}}. \end{aligned} \end{equation*} In order to understand this last inner product, we note that for $\tilde{\lambda}_1 = 0$ we have $\phi_1'' = 0$ with boundary conditions (\ref{phi1bc}). We can write $\phi_1 (x) = ax+b$ for constant vectors $a, b \in \mathbb{R}^n$, and the conditions $P_{R_0} \phi_1'(0) = \Lambda_0 P_{R_0} \phi_0$ and $P_{R_1} \phi_1'(1) = \Lambda_1 P_{R_1} \phi_0$ imply $P_{R_0} a = P_{R_0} \Lambda_0 P_{R_0} \phi_0$ and likewise $P_{R_1} a = P_{R_1} \Lambda_1 P_{R_1} \phi_0$. Noting also that $\phi_1 (1) - \phi_1 (0) = a$, we compute \begin{equation*} \begin{aligned} (P_{R_0} &\Lambda_0 P_{R_0} \phi_1 (0) - P_{R_1} \Lambda_1 P_{R_1} \phi_1 (1), \phi_0)_{\mathbb{R}^{n}} = (\phi_1 (0), P_{R_0} \Lambda_0 P_{R_0} \phi_0)_{\mathbb{R}^{n}} - (\phi_1 (1), P_{R_1} \Lambda_1 P_{R_1} \phi_0)_{\mathbb{R}^{n}} \\ &= (\phi_1 (0) - \phi_1 (1),P_{R_0} \Lambda_0 P_{R_0} \phi_0 )_{\mathbb{R}^{n}} = - (a, P_{R_0} \Lambda_0 P_{R_0} \phi_0 )_{\mathbb{R}^{n}} \\ &= - (P_{R_0} a, P_{R_0} \Lambda_0 P_{R_0} \phi_0 )_{\mathbb{R}^{n}} = - (P_{R_0} \Lambda_0 P_{R_0} \phi_0, P_{R_0} \Lambda_0 P_{R_0} \phi_0 )_{\mathbb{R}^{n}} \\ &=-((P_{R_0} \Lambda_0 P_{R_0})^2 \phi_0, \phi_0 )_{\mathbb{R}^{n}}. \end{aligned} \end{equation*} We see that \begin{equation*}
\tilde{\lambda}_2 |\phi_0|_{\mathbb{R}^{n}}^2 = \Big((V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2) \phi_0, \phi_0 \Big)_{\mathbb{R}^{n}}. \end{equation*} Recalling that we have selected the vectors $\phi_0$ to be orthonormal eigenvectors for the matrix $Q (V(0) - (P_{R_0} \Lambda_0 P_{R_0})^2) Q$, we see that we have a spectral curve entering the Maslov Box if and only if $\tilde{\lambda}_2$ is a negative eigenvalue of this matrix.
In principle, if $\tilde{\lambda}_2 = 0$ we can proceed to the next step in the perturbation argument, but this is the case that we have eliminated by our non-degeneracy assumption.
\noindent {\bf Application 2.} In \cite{HLS}, the authors consider Schr\"odinger equations on $\mathbb{R}$, \begin{equation} \label{main_hls} \begin{aligned} Hy &:= -y'' + V(x) y = \lambda y, \\ \operatorname{dom} (H) &= H^1 (\mathbb{R}), \end{aligned} \end{equation} where $y \in \mathbb{R}^n$ and $V \in C(\mathbb{R})$ is a symmetric matrix satisfying the following asymptotic conditions:
\noindent {\bf (A1)} The limits $\lim_{x \to \pm \infty} V(x) = V_{\pm}$ exist, and for all $M \in \mathbb{R}$, \begin{equation*}
\int_{-M}^{\infty} (1 + |x|) |V(x) - V_+| dx < \infty; \quad
\int_{-\infty}^M (1+|x|) |V(x) - V_-| dx < \infty. \end{equation*}
\noindent {\bf (A2)} The eigenvalues of $V_{\pm}$ are all non-negative.
As verified in \cite{HLS}, if $\lambda < 0$ then (\ref{main_hls}) will have $n$ linearly independent solutions that decay as $x \to -\infty$ and $n$ linearly independent solutions that decay as $x \to +\infty$. We express these respectively as \begin{equation*} \begin{aligned} \phi_{n+j}^- (x; \lambda) &= e^{\mu_{n+j}^- (\lambda) x} (r_j^- + \mathcal{E}_j^- (x;\lambda)) \\ \phi_j^+ (x; \lambda) &= e^{\mu_j^+ (\lambda) x} (r_{n+1-j}^+ + \mathcal{E}_j^+ (x;\lambda)), \end{aligned} \end{equation*} with also \begin{equation*} \begin{aligned} \partial_x \phi_{n+j}^- (x; \lambda) &= e^{\mu_{n+j}^- (\lambda) x} (\mu_{n+j}^- r_j^- + \tilde{\mathcal{E}}_j^- (x;\lambda)) \\ \partial_x \phi_j^+ (x; \lambda) &= e^{\mu_j^+ (\lambda) x} (\mu_j^+ r_{n+1-j}^+ + \tilde{\mathcal{E}}_j^+ (x;\lambda)), \end{aligned} \end{equation*} for $j = 1,2,\dots,n$, where the nature of the $\mu_j^{\pm}$, $r_j^{\pm}$, and $\mathcal{E}_j^{\pm} (x; \lambda), \tilde{\mathcal{E}}_j^{\pm} (x; \lambda)$ are developed in \cite{HLS}, but won't be necessary for this brief discussion, except for the observation that under assumptions (A1) and (A2) \begin{equation} \label{limits} \lim_{x \to \pm \infty} \mathcal{E}_j^{\pm} (x;\lambda) = 0; \quad \lim_{x \to \pm \infty} \tilde{\mathcal{E}}_j^{\pm} (x;\lambda) = 0. \end{equation}
If we create a frame $\mathbf{X}^- (x; \lambda) = {X^- (x; \lambda) \choose Y^- (x; \lambda)}$ by taking $\{\phi_{n+j}^-\}_{j=1}^n$ as the columns of $X^-$ and $\{{\phi_{n+j}^-}'\}_{j=1}^n$ as the respective columns of $Y^-$ then it is straightforward to verify that $\mathbf{X}^-$ is a frame for a Lagrangian subspace, which we will denote $\ell^-$ (see \cite{HLS}). Likewise, we can create a frame $\mathbf{X}^+ (x; \lambda) = {X^+ (x; \lambda) \choose Y^+ (x; \lambda)}$ by taking $\{\phi_j^+\}_{j=1}^n$ as the columns of $X^+$ and $\{{\phi_j^+}'\}_{j=1}^n$ as the respective columns of $Y^+$. Then $\mathbf{X}^+$ is a frame for a Lagrangian subspace, which we will denote $\ell^+$.
In either case, we can view the exponential multipliers $e^{\mu_j^{\pm} x}$ as expansion coefficients, and if we drop these off we retain frames for the same spaces. That is, we can create an alternative frame for $\ell^-$ by taking the expressions $r_j^- + \mathcal{E}_j^- (x;\lambda)$ as the columns of $X^-$ and the expressions $\mu_{n+j}^- r_j^- + \tilde{\mathcal{E}}_j^- (x;\lambda)$ as the corresponding columns for $Y^-$. Using (\ref{limits}) we see that in the limit as $x$ tends to $-\infty$ we obtain the frame $\mathbf{R}^- (\lambda) = {R^- \choose S^- (\lambda)}$, where \begin{equation*} \begin{aligned} R^- &= \begin{pmatrix} r_1^- & r_2^- & \dots & r_n^- \end{pmatrix} \\
S^- (\lambda) &= \begin{pmatrix} \mu_{n+1}^- r_1^- & \mu_{n+2}^- r_2^- & \dots & \mu_{2n}^- r_n^- \end{pmatrix}. \end{aligned} \end{equation*} As discussed in \cite{HLS}, $\mathbf{R}^-$ is the frame for a Lagrangian subspace, which we will denote $\ell^-_{\infty}$. Proceeding similarly with $\ell^+$, we obtain the asymptotic Lagrangian subspace $\ell^+_{\infty}$ with frame $\mathbf{R}^+ (\lambda) = {R^+ \choose S^+ (\lambda)}$, where \begin{equation} \label{RplusSplus} \begin{aligned} R^+ &= \begin{pmatrix} r_n^+ & r_{n-1}^+ & \dots & r_1^+ \end{pmatrix} \\
S^+ (\lambda) &= \begin{pmatrix} \mu_1^+ r_n^+ & \mu_2^+ r_{n-1}^+ & \dots & \mu_n^+ r_1^+ \end{pmatrix}. \end{aligned} \end{equation}
We can now construct $\tilde{W} (x,\lambda)$ in this case as \begin{equation} \label{tildeWmain} \tilde{W} (x; \lambda) = - (X^- (x; \lambda) + i Y^- (x; \lambda)) (X^- (x; \lambda) - i Y^- (x; \lambda))^{-1} (R^+ - i S^+ (\lambda)) (R^+ + i S^+ (\lambda))^{-1}. \end{equation} We will be interested in a closed path in the $x$-$\lambda$ plane, determined by a sufficiently large value $\lambda_{\infty}$. First, if we fix $\lambda = 0$ and let $x$ run from $-\infty$ to $+\infty$, we denote the resulting path $\Gamma_0$ (the {\it right shelf}). Next, we let $\Gamma_+$ denote a path in which $\lambda$ decreases from $0$ to $-\lambda_{\infty}$. (We can view this as a path corresponding with the limit $x \to +\infty$, but the limiting behavior will be captured by the nature of the Lagrangian subspaces; we refer to this path as the {\it top shelf}.) Continuing counterclockwise along our path, we denote by $\Gamma_{\infty}$ the path obtained by fixing $\lambda = -\lambda_{\infty}$ and letting $x$ run from $+\infty$ to $-\infty$ (the {\it left shelf}). Finally, we close the path in an asymptotic sense by taking a final path, $\Gamma_-$, with $\lambda$ running from $-\lambda_{\infty}$ to $0$ (viewed as the asymptotic limit as $x \to - \infty$; we refer to this as the {\it bottom shelf}).
The principal result of \cite{HLS} is as follows.
\begin{theorem} \label{hls_main_theorem} Let $V \in C(\mathbb{R})$ be a symmetric real-valued matrix, and suppose (A1) and (A2) hold. Then \begin{equation*} \operatorname{Mor}(H) = - \operatorname{Mas}(\ell^-, \ell^+_{\infty}; \Gamma_0). \end{equation*} \end{theorem}
\begin{remark} As discussed in Section 5 of \cite{HLS}, Theorem \ref{hls_main_theorem} can be extended to the case \begin{equation} \label{Es} H_s y := - y'' + s y' + V(x) y = \lambda y, \end{equation} for any $s \in \mathbb{R}$. This observation---for which the authors are indebted to \cite{BJ1995}---allows the application of these methods in the study of spectral stability for traveling wave solutions in Allen-Cahn equations. \end{remark}
\section*{Appendix}
In this brief appendix, we verify (P2) (homotopy invariance) for our definition of the Maslov index. We assume $\mathcal{L} (s,t) = (\ell_1 (s,t), \ell_2 (s,t))$ is continuous on a cartesian product of closed, bounded intervals $I \times J = [0, 1] \times [a, b]$, and that $\mathcal{L} (s, a) = \mathcal{L}_a$ for all $s \in I$ and likewise $\mathcal{L} (s,b) = \mathcal{L}_b$ for all $s \in I$, for some fixed $\mathcal{L}_a, \mathcal{L}_b \in \Lambda (n) \times \Lambda (n)$. We denote by $\tilde{W} (s,t)$ the matrix (\ref{tildeW}) associated with $\mathcal{L} (s, t)$. It's straightforward to see from our metric (\ref{rho_metric}) that continuity of $\mathcal{L}$ implies continuity of the associated frame $\mathbf{X} (s,t)$, which in turn (and along with non-degeneracy) implies continuity of $\tilde{W} (s, t)$. We know from Theorem II.5.1 in \cite{Kato} that the eigenvalues of $\tilde{W} (s,t)$ must vary continuously with $s$ and $t$. Moreover, we see from Theorem II.5.2 in the same reference that these eigenvalues can be tracked as $n$ continuous paths $\{\mu^k (s,t)\}_{k=1}^n$, which in our case will be restricted to $S^1$.
For notational convenience, let's fix $s_1, s_2 \in I$ suitably close together (in a manner that we make precise below) and set $\tilde{W}_1 (t) := \tilde{W} (s_1, t)$ and $\tilde{W}_2 (t) := \tilde{W} (s_2, t)$.
\begin{claim} Suppose $\mu (t)$ and $\nu (t)$ are any two continuous eigenvalue paths of $\tilde{W}_1 (t)$ and $\tilde{W}_2 (t)$ respectively, with $\mu (a) = \nu (a)$ and $\mu (b) = \nu (b)$. Then there exists $\epsilon > 0$ sufficiently small so that if \begin{equation*}
\max_{t \in J} |\mu (t) - \nu (t)| < \epsilon \end{equation*} then the spectral flow of $\mu (t)$ is the same as the spectral flow of $\nu (t)$. \end{claim}
\begin{proof} First, suppose neither $\mu (a)$ nor $\mu (b)$ is -1 (and so the same is true for $\nu (a)$ and $\nu (b)$). Take $\epsilon$ small enough so that $B_{\epsilon} (\mu (a))$ (the ball in $\mathbb{C}$ centered at $\mu (a)$ with radius $\epsilon$) does not contain -1, and similarly for $\mu (b)$. According to our hypothesis, we will have $\mu(t), \nu (t) \in B_{\epsilon} (\mu(t))$ for all $t \in J$, and so the spectral flows for $\mu (t)$ and $\nu (t)$ will both match the flow for $B_{\epsilon} (\mu (t))$.
Suppose next that $\mu (a) = -1$, but $\mu (b)$ is not. In this case, there must be a first time, $t_*$, at which $B_{\epsilon} (\mu (t_*))$ does not contain -1. By assumption, we must have $\nu (t_*) \in B_{\epsilon} (\mu (t_*))$, and this allows us to apply an argument on $[t_*, b]$ similar to our argument on $[a, b]$ in the previous paragraph. A similar argument holds if $\mu (b) = -1$, but $\mu (a)$ is not.
Last, suppose $\mu (a) = -1$ and $\mu (b) = -1$. If $\mu(t)$ and $\nu (t)$ are both -1 for all $t \in J$ then we're finished. If not, i.e., if there exists a time $t_*$ at which one or both of $\mu (t_*)$ and $\nu (t_*)$ is not $-1$, then we can apply one of the first two cases to complete the proof. \end{proof}
Since $I \times J$ is closed and bounded, the matrices $\tilde{W} (s, t)$ are uniformly continuous on $I \times J$. This means that given any $\tilde{\epsilon} > 0$ we can find $\delta > 0$ sufficiently small so that \begin{equation*}
|s_1 - s_2| < \delta \implies
\max_{t \in J} \|\tilde{W}_1 (t) - \tilde{W}_2 (t)\| < \tilde{\epsilon}. \end{equation*} Fix any $k \in \{1, 2, \dots, n\}$, and set $\mu^k_1 (t) = \mu^k (s_1, t)$ and $\mu^k_2 (t) = \mu^k (s_2, t)$. By eigenvalue continuity, this means we can take $\delta$ small enough to ensure that \begin{equation*}
\max_{t \in J} |\mu^k_1 (t) - \mu^k_2 (t)| < \epsilon \end{equation*} for all $k \in \{1, 2, \dots, n\}$. But since $\epsilon$ is arbitrary, we see from our claim that the flow associated with each of these eigenvalue pairs must be the same, and so the spectral flow for $\tilde{W}_1 (t)$ must agree with that of $\tilde{W}_2 (t)$.
Finally, then, by starting with $s_1 = 0$, and proceeding to $s_2 = \frac{\delta}{2}$, $s_3 = \delta$ etc., we see that the Maslov index will be the same at each step, and that since the steps have fixed length we eventually arrive at $s = 1$. This concludes the proof of property (P2).
{\it Acknowledgements.} Y. Latushkin was supported by NSF grant DMS-1067929, by the Research Board and Research Council of the University of Missouri, and by the Simons Foundation.
\end{document}
\begin{document}
\title{Persistence by Parts: Multiscale Feature Detection via Distributed Persistent Homology}
\begin{abstract}
A method is presented for the distributed computation of persistent homology, based on an extension of the generalized Mayer-Vietoris principle to filtered spaces. Cellular cosheaves and spectral sequences are used to compute global persistent homology based on local computations indexed by a scalar field. These techniques permit computation localized not merely by geography, but by other features of data points, such as density. As an example of the latter, the construction is used in the multi-scale analysis of point clouds to detect features of varying sizes that are overlooked by standard persistent homology. \end{abstract}
\section{Introduction} \label{intro}
\subsection{Cosheaves and computational persistence} \label{sec:coco}
Persistence has emerged as a central principle in Topological Data Analysis (TDA). At its core, persistence exploits the functoriality of homology to extract robust features from point cloud data. Given a filtered cell complex arising from such data, its homology is a persistence module -- a sequence of vector spaces and linear transformations. The representation theory of such objects leads to an unambiguous decomposition into indecomposables (persistent homology classes) \cite{CarlssonTopology09,Carlssonshape13,CSZigzag10} which, thanks to the Stability Theorem for persistent homology \cite{CEHStability07,CSG+structure12}, is robust with respect to perturbations of the original data points. For details, see \S\ref{sec:PH}.
There is no inherent obstruction to computing homology from local data: Mayer-Vietoris and spectral sequences lead the way. The story is more complicated for persistent homology, due to the algebra of persistence modules. The approach of this paper is to use the theory of cellular cosheaves to store and relate local homology information over the complex. This theory, like the algebraically dual theory of cellular sheaves \cite{CurrySheaves14}, is ideal for local-to-global operations.
The main result of this paper is an application of spectral sequences and the Mayer-Vietoris principle to compute persistent homology by breaking a filtered complex into pieces based on a scalar field on the point cloud. This has utility beyond the obvious idea of breaking up a complex into parts based on geographic proximity.
In its typical interpretation, homology classes which persist over large parameter intervals are the statistically significant features -- those which are not artefacts of noisy sampling. For \v{C}ech complexes of manifolds sampled with sufficiently high density, a single (practically unobtainable) homology computation yields truth \cite{NSWFinding08}. For the (more realistic) case of a filtered Vietoris-Rips complex, persistent homology is a good first approximation to truth. Questions of a more epistemological bent ({\em ``Are these really the important features?''}) are evident. Recent work of Berry and Sauer argues that persistent homology can erroneously label important small features as non-persistent, and they propose a {\em Continuous $k$-Nearest Neighbors} graph to estimate the geometry of a discretely sampled manifold \cite{BSConsistent19}. They prove that this method captures multiscale connectivity data and conjecture that it works in higher homologies.
As an application of our results, we show how to use a local density as the distribution parameter in order to retrieve persistent homology classes that are weighted according to the density of sampling. The idea is that when faced with geometrically small but tightly-sampled homology classes which may be of significance, one can compute persistent homology using distance as the (usual) parameter, and sampling density as the distribution field: see \S\ref{sec:multi}. The oft-expressed desire of doing multiparameter persistence on both distance and density \cite{CarlssonTopology09,GhristBarcodes08} is full of algebraic challenges \cite{CZtheory09,CSZComputing10}; in a sense, this paper gives a novel approach by separating out one parameter as a scalar field for distributed computation.
\subsection{Related and supporting work}
Algorithms for computing persistent homology now comprise a rich and intricate literature. The original algorithm for computing persistent homology \cite{ELZTopological02,EHComputational10} computes persistence pairs by reducing the boundary operator. Since then, variations of the original algorithm have been developed to improve computation \cite{OPT+roadmap}. In particular, parallelized and distributed algorithms have been proposed in order to improve memory usage and computation time. The spectral sequence algorithm from \cite{EHComputational10} reduces blocks of matrices at each phase, which results in persistence pairs of particular lengths. In \cite{BKRClear14}, the authors incorporate optimization techniques and construct a distributed version of the spectral sequence algorithm in both shared and distributed memory.
The above-mentioned algorithms distribute data with respect to ranges of filtration values. In \cite{LMParallel15}, the authors provide a distributed computation algorithm in which data is distributed with respect to spatial decomposition of the domain. They build a Mayer-Vietoris blowup complex, and its boundary matrix is reduced by reducing submatrices in parallel. Our work shares a similar philosophy, as we distribute data according to spatial decomposition of the domain and we operate on the foundation of the Mayer-Vietoris principle. Instead of using the geometric construction of the Mayer-Vietoris blowup complex, we use the algebraic construction of cosheaf homology to combine local information. Our approach adapts the use of cosheaf homology to compute homology from subspaces \cite{CGNDiscrete13}, with cosheaf morphisms and spectral sequences to take the filtration into account.
This work has its origins in the Ph.D. dissertation of the first author \cite{YoonCellular18}. After preparing this article, the preprint of Casas appeared \cite{CasasDistributing19}, which, influenced by \cite{YoonCellular18}, builds on and extends the results. In particular, Algorithm 2 of \cite{CasasDistributing19} recapitulates \S 4.2.3 of \cite{YoonCellular18}. Casas greatly extends the results of that thesis and this paper by not limiting the nerve of the distribution cover to 1-d; however, in the restricted case considered here, Casas' diagram chase is exactly that of \cite{YoonCellular18} and the present work.
\subsection{Problem statement and contributions} We address the following question.
\textit{Given a point cloud $P$ (a finite subset of Euclidean $\mathbb{R}^m$), compute the persistence module of $P$ from local persistence modules subordinate to a cover of $P$.}
Let $\mathbb{V}$ denote the persistence module obtained from $P$ --- a finite sequence of vector spaces and linear maps that encodes topological information about the Vietoris-Rips filtration associated to $P$. We use cellular cosheaf homology to assemble the relevant information gathered from subsets of $P$ subordinate to a cover. Morphisms of cellular cosheaves then allow us to incorporate persistence. We use spectral sequences to discover a hidden map among cosheaf homologies. Our main result, as stated in the following theorem, is a distributed construction of a persistence module $\mathbb{V}_{\Psi}$ that is isomorphic to the persistence module $\mathbb{V}$ of interest.
\begin{Theorem} The local $\mathbb{V}_{\Psi}$ and global $\mathbb{V}$ persistence modules are isomorphic. \end{Theorem}
We argue that this distributed computation can be used to filter and annotate persistent homology based on meta-data associated to the point cloud, using this meta-data to build a cover. We illustrate this idea in the case where the meta-data is a sampling density estimate, yielding a method for computing multiscale persistent homology, identifying significant features that are overlooked by standard persistent homology methods.
Section \ref{sec:background} contains a summary of persistent homology and cellular sheaf theory. We review a distributed computation method for homology using Leray cellular cosheaves in \S\ref{sec:DistributedHomology}. In \S\ref{sec:DistributedPH}, we construct a general distributed persistent homology computation mechanism subordinate to a cover with at most pairwise overlaps. Finally, in \S\ref{sec:multi}, we indicate the utility of this distributed computation in multiscale persistent homology, using our methods to identify persistent homology classes relative to a sampling density.
\section{Background} \label{sec:background}
Throughout this paper, we assume that every homology is computed with coefficients in a field $\mathbb{K}$.
\subsection{Persistent Homology} \label{sec:PH}
Given a point cloud, one can interrogate its global structure via persistent homology. Let $P$ be a finite collection of points in Euclidean $\mathbb{R}^m$. The \textbf{Vietoris-Rips complex} $\mathscr{R}^{\epsilon}$ is the simplicial complex whose $k$-simplices correspond to $(k+1)$-tuples of points from $P$ that have pairwise distance $\leq \epsilon$. For brevity, we use the term ``Rips complex'' to refer to the Vietoris-Rips complex.
Assume that $( \mathscr{R}^{i} )_{i=1}^N$ is a sequence of Rips complexes over $P$ for increasing parameter values $( \epsilon_i ) _{i=1}^N$, with inclusion maps between each pair of Rips complexes \[\mathscr{R}^1 \hookrightarrow \mathscr{R}^{2} \hookrightarrow \dots \hookrightarrow \mathscr{R}^{N}.\] By applying the homology functor, one obtains the following sequence of vector spaces \begin{equation} \label{OriginalPM} \mathbb{V}: H_{\bullet}(\mathscr{R}^{1}) \to H_{\bullet}(\mathscr{R}^{2}) \to \dots \to H_{\bullet}(\mathscr{R}^{N}). \end{equation} The above sequence is an instance of a \textbf{persistence module}. For the purpose of this paper, it suffices to consider a persistence module as a finite sequence of vector spaces and linear maps between them.
A \textbf{morphism of persistence modules} $\alpha:\mathbb{V} \to \mathbb{W}$ is a collection of linear maps $\alpha_i : V_i \to W_i$ such that the following diagram commutes. \[ \begin{tikzcd} V_1 \arrow[d, "\alpha_1"] \arrow[r]& V_2 \arrow[d, "\alpha_2"] \arrow[r]& \cdots \arrow[r] & V_n \arrow[d,"\alpha_n"] \\ W_1 \arrow[r] & W_2 \arrow[r] & \cdots \arrow[r] & W_n \end{tikzcd} \] When all $\alpha_i$'s are isomorphisms, then $\alpha$ is an isomorphism of persistence modules.
By the Structure Theorem \cite{ZCComputing05}, the persistence module from Equation (\ref{OriginalPM}) decomposes uniquely as \[ \mathbb{V} \cong \bigoplus\limits_{l=1}^N \mathbb{I}(b_l, d_l),\] where each $\mathbb{I}(b,d)$ is a simpler persistence module of the form \[ \mathbb{I}(b,d): 0 \to \dots 0 \to \mathbb{K} \xrightarrow{1} \mathbb{K} \xrightarrow{1} \dots \xrightarrow{1} \mathbb{K} \rightarrow 0 \to \dots \to 0.\] The indices $b$ and $d$ indicate, respectively, the first and last positions at which $\mathbb{K}$ appears. Each $\mathbb{I}(b,d)$ represents a homological feature with birth time $b$ and death time $d$. One can visualize such birth and death times of $\mathbb{I}(b,d)$ using a \textbf{barcode}. Given a persistence module $\mathbb{V}$, a barcode diagram, $\textit{barcode}(\mathbb{V})$, is a collection of bars that correspond to the intervals $(b,d)$ obtained from the decomposition of $\mathbb{V}$. In simple settings, long bars of $\textit{barcode}(\mathbb{V})$ capture significant homological features; shorter bars may be due to noise.
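For instance, the persistence module (\ref{OriginalPM}) and its barcode can be computed with standard software. The sketch below assumes the GUDHI Python package is available and prints the degree-1 intervals of a Rips filtration built on a noisy circle.
\begin{verbatim}
import numpy as np
import gudhi                      # assumed available (https://gudhi.inria.fr)

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 60)
P = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((60, 2))

rips = gudhi.RipsComplex(points=P, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)   # enough to see H_0 and H_1
st.persistence()                                 # computes all persistence pairs
for birth, death in st.persistence_intervals_in_dimension(1):
    print(f"H_1 bar: [{birth:.3f}, {death:.3f})")
\end{verbatim}
One long bar (the circle) should dominate a handful of short, noise-level bars.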
\subsection{Cellular Cosheaves} \label{sec:CellularCosheaves}
A cellular cosheaf is a certain assignment of algebraic structure to a cell complex \cite{CurrySheaves14}. Given a cell complex $\CellComplex{X}$, there is a face poset category whose objects are the cells of $\CellComplex{X}$ and whose morphisms $\tau\to\sigma$ correspond to the face relation $\tau \unlhd \sigma$ (that is, $\tau \subset \bar{\sigma}$).
Following \cite{ShepardCellular85,CurrySheaves14}, a \textbf{cellular cosheaf} $\sheaf{F}$ on a cell complex $\CellComplex{X}$ with values in category $D$ is a contravariant functor from the associated face poset category to $D$. In other words, $\sheaf{F}$ assigns to each cell $\sigma$ of $\CellComplex{X}$ an object $\sheaf{F}(\sigma)$ in $D$, and to each face relation $\tau \unlhd \sigma$ a morphism $\sheaf{F}(\tau \unlhd \sigma):\sheaf{F}(\sigma) \to \sheaf{F}(\tau)$ such that \begin{itemize} \item $\sheaf{F}(\tau \unlhd \tau) : \sheaf{F}(\tau) \to \sheaf{F}(\tau)$ is the identity, and \item if $\rho \unlhd \tau \unlhd \sigma$, then $\sheaf{F}( \rho \unlhd \sigma ) =\sheaf{F}(\rho \unlhd \tau) \circ \sheaf{F}(\tau \unlhd \sigma).$ \end{itemize}
A cellular cosheaf $\cosheaf{F}$ over a compact cell complex $X$ has a well-defined homology associated to its chain complex \begin{equation} \label{chaincomplex} C_{\bullet} \cosheaf{F} : \quad \cdots \xrightarrow{\partial_3} \bigoplus \limits_{\textrm{dim }\sigma=2 } \cosheaf{F}(\sigma) \xrightarrow{\partial_2} \bigoplus \limits_{\textrm{dim } \sigma =1}\cosheaf{F}(\sigma) \xrightarrow{\partial_1} \bigoplus \limits_{\textrm{dim } \sigma =0} \cosheaf{F}(\sigma) \xrightarrow{\partial_0} 0, \end{equation} where $C_n(\CellComplex{X}, \cosheaf{F})$ is the direct sum of the data over the $n$-cells of $X$. The boundary map $\partial_n : C_n(\CellComplex{X}, \cosheaf{F}) \to C_{n-1}(\CellComplex{X}, \cosheaf{F})$ is defined in the familiar manner as \[
\partial_n = \sum\limits_{\tau \unlhd \sigma} [\tau : \sigma] \cosheaf{F}(\tau \unlhd \sigma), \] where $[\tau:\sigma]$ is the incidence number.
The $n^{\textrm{th}}$ \textbf{cosheaf homology} of $\cosheaf{F}$ is the homology of this chain complex $H_n(C_{\bullet} \cosheaf{F}) = \ker \partial_n / \im \partial_{n+1}$.
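Over a field these dimensions can be computed from ranks alone: assembling $\partial_n$ as a block matrix from the maps $\cosheaf{F}(\tau \unlhd \sigma)$, one has $\dim H_n(C_{\bullet} \cosheaf{F}) = \dim C_n - \operatorname{rank} \partial_n - \operatorname{rank} \partial_{n+1}$. A minimal sketch, with floating-point matrices standing in for coefficients in $\mathbb{Q}$ and with the boundary matrices supplied by hand:
\begin{verbatim}
import numpy as np

def homology_dims(boundaries, dims):
    """Betti numbers of a chain complex over a field.

    boundaries[k] is the matrix of the boundary map from C_k to C_{k-1}
    (None for the zero map); dims[k] is dim C_k."""
    rank = lambda M: 0 if M is None else int(np.linalg.matrix_rank(M))
    out = []
    for k, d in enumerate(dims):
        r_k = rank(boundaries[k]) if k < len(boundaries) else 0
        r_up = rank(boundaries[k + 1]) if k + 1 < len(boundaries) else 0
        out.append(d - r_k - r_up)
    return out

# constant cosheaf K on a hollow triangle (three 0-cells, three 1-cells):
d1 = np.array([[-1.0, -1.0,  0.0],
               [ 1.0,  0.0, -1.0],
               [ 0.0,  1.0,  1.0]])
print(homology_dims([None, d1], [3, 3]))   # [1, 1]
\end{verbatim}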
Cosheaf homologies reflect global structures from locally encoded data. When comparing local data, cosheaf morphisms allow one to extract global changes from the local changes. Following \cite{BredonSheaf97}, a \textbf{cosheaf morphism} $\phi : \sheaf{F} \to \sheaf{F}'$ between cosheaves $\sheaf{F}$ and $\sheaf{F}'$ on $\CellComplex{X}$ is a collection of morphisms $\phi|_{\sigma} :\sheaf{F}(\sigma) \to \sheaf{F'}(\sigma)$ indexed by cells $\sigma \in X$ such that the following diagram commutes \[ \begin{tikzcd}
\sheaf{F}(\sigma) \arrow[d,"\sheaf{F}(\tau \unlhd \sigma)" , swap] \arrow[r,"\phi|_{\sigma}"] & \sheaf{F}'(\sigma) \arrow[d,"\sheaf{F}'( \tau \unlhd \sigma)" ]\\
\sheaf{F}(\tau) \arrow[r,"\phi|_{\tau}"] & \sheaf{F}'(\tau) \end{tikzcd} \] for every face relation $\tau \unlhd \sigma$.
Thus, a cosheaf morphism $\phi : \sheaf{F} \to \sheaf{F}'$ is a natural transformation from the functor $\sheaf{F}$ to $\sheaf{F}'$. As such, any cosheaf morphism $\phi : \cosheaf{F} \to \cosheaf{F}'$ induces morphisms on homology: \[
H_n(\phi) : H_n(C_{\bullet}\cosheaf{F}) \to H_n(C_{\bullet} \cosheaf{F}') . \]
In the special case of a cell complex $\CellComplex{X}$ that is homeomorphic to a closed interval, a cellular cosheaf $\cosheaf{F}$ on $\CellComplex{X}$ can be interpreted as a generalized persistence module, or \textbf{zigzag module} \cite{CSZigzag10}, of the form \[
V_1 \leftarrow V_2 \rightarrow V_3 \leftarrow \cdots \rightarrow V_m. \] By Gabriel's Theorem \cite{GabrielUnzerlegbare72}, such a cosheaf $\cosheaf{F}$ can be decomposed into a direct sum of \textbf{indecomposable cosheaves}, of the form \[
\mathscr{I} : 0 \leftarrow \cdots 0 \leftrightarrow \mathbb{K} \leftrightarrow \cdots \leftrightarrow \mathbb{K} \leftrightarrow 0 \cdots \rightarrow 0 , \] as in Figure~\ref{fig:DecomposedCosheaf}. \begin{figure}
\caption{Direct sum of indecomposable cosheaves}
\label{fig:DecomposedCosheaf}
\end{figure}
Note that there are four types of indecomposable cosheaves possible: cosheaves $\sheaf{I}_{[-]}$ whose left- and rightmost supports are 0-cells, cosheaves $\sheaf{I}_{]-[}$ whose left- and rightmost supports are 1-cells, cosheaves $\sheaf{I}_{[-[}$ with leftmost support a 0-cell and rightmost support a 1-cell, and the reversed $\sheaf{I}_{]-]}$.
\begin{Lemma}[\cite{CurrySheaves14}] \label{IndecomposableCosheaf} The indecomposable cosheaves $\cosheaf{I}$ satisfy \[
H_0(C_{\bullet}\cosheaf{I}_{[-]}) = \mathbb{K}, \quad H_1(C_{\bullet}\cosheaf{I}_{]-[})=\mathbb{K}, \quad H_i(C_{\bullet}\cosheaf{I}_{[-[})=H_i(C_{\bullet}\cosheaf{I}_{]-]})=0. \] \end{Lemma}
Thus, if cosheaf $\cosheaf{F}$ can be decomposed as $\cosheaf{F} \cong \oplus \cosheaf{I}$, then $H_i(C_{\bullet} \cosheaf{F}) \cong \oplus H_i(C_{\bullet} \cosheaf{I})$. Thus, $\dim H_0(C_{\bullet}\sheaf{F})$ counts the number of indecomposable cosheaves $\cosheaf{I}_{[-]}$ in the decomposition, whereas $\dim H_1(C_{\bullet}\sheaf{F})$ counts the number of indecomposable cosheaves $\cosheaf{I}_{]-[}$ in the decomposition. Lemma \ref{IndecomposableCosheaf} is the key to enriching the persistent homology barcodes in \S\ref{sec:multi}.
\section{Distributed computation of homology} \label{sec:DistributedHomology}
Our goal is to compute the persistence module \[
\mathbb{V}: H_{\bullet} (\mathscr{R}^1) \to H_{\bullet} (\mathscr{R}^2) \to \cdots \to H_{\bullet} (\mathscr{R}^N) \] in a distributed manner. To commence, we recall the local nature of homology computations, drawing on the classic results of Mayer-Vietoris, Leray \cite{BredonSheaf97}, and Serre \cite{McClearyUsers01}, in the language of sheaf cohomology \cite{CGNDiscrete13}. The following adapts the classic constructions from \cite{CGNDiscrete13} to analyzing point cloud data.
For $P$ a point cloud, denote by $\mathscr{R}^{\epsilon}$ the Rips complex built on $P$ for parameter $\epsilon$. Let $\mathscr{V}$ be a finite cover of $P$ and $N_{\mathscr{V}}$ the resulting nerve complex.\footnote{The simplicial complex that indexes intersections of cover elements.} To each simplex $\sigma \in N_{\mathscr{V}}$ is then associated $\mathscr{R}_{\sigma}^{\epsilon}$, the Rips complex built on the subset of points of $P$ indexed by $\sigma$ at proximity parameter $\epsilon$.
We will refer to the collection $\coprod\limits_{\dim \sigma=n } \mathscr{R}^{\epsilon}_{\sigma}$ as the ``Rips complexes over the $n$-simplices of $N_{\mathscr{V}}$'', referring to the entire collection $\coprod\limits_{\sigma \in N_{\mathscr{V}}} \mathscr{R}^{\epsilon}_{\sigma}$ as the \textbf{Rips system} of the nerve.
Define a cosheaf $\cosheaf{F}^{\epsilon}_n$ on $N_{\mathscr{V}}$ as follows. For each $\sigma \in N_{\mathscr{V}}$, let $\cosheaf{F}_{n}^{\epsilon} (\sigma)= H_n( \mathscr{R}^{\epsilon}_{\sigma} )$. For $\tau \unlhd \sigma$, let $\cosheaf{F}_n^{\epsilon}(\tau \unlhd \sigma)$ be the map induced by the inclusion $\mathscr{R}^{\epsilon}_{{\sigma}} \hookrightarrow \mathscr{R}^{\epsilon}_{\tau}$.
\begin{restatable}{Lemma}{discreteCGNlemma} \label{DiscreteCGNBound} Let $P$ be a point cloud with finite cover $\mathscr{V}$ having one-dimensional nerve $N_{\mathscr{V}}$. There exists a constant $\epsilon_*$ such that \begin{equation} \label{DistributedComputationK} H_n(\mathscr{R}^{\epsilon}) \cong H_0(C_{\bullet} \cosheaf{F}^{\epsilon}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{\epsilon}_{n-1}) \end{equation} for every $0<\epsilon<\epsilon_*$. \end{restatable}
\begin{proof} Theorem 5.7 in \cite{CGNDiscrete13} can be used to obtain the above isomorphism when the Rips system covers the full Rips complex: the technical work is in showing that this happens for sufficiently small $\epsilon$: see Appendix \ref{A:DiscreteCGNBound} for details. \end{proof}
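As a small end-to-end illustration of Lemma \ref{DiscreteCGNBound} in degree $n = 1$, the sketch below samples a circle, covers it by three overlapping vertical slabs (so the nerve is an interval), builds the degree-0 cosheaf $\cosheaf{F}^{\epsilon}_0$ from the connected components of the local Rips 1-skeleta (the induced maps on $H_0$ simply send a component of an overlap into the component containing it), and recovers the loop from $H_1(C_{\bullet} \cosheaf{F}^{\epsilon}_0)$. The cover, the parameter $\epsilon$, and the sample are hypothetical choices made only for this example.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

eps = 0.3
angles = 2 * np.pi * np.arange(40) / 40
P = np.c_[np.cos(angles), np.sin(angles)]        # 40 points on the unit circle

# cover by three vertical slabs; the nerve is the path v1 - e12 - v2 - e23 - v3
V = [np.where(P[:, 0] <= -0.2)[0],
     np.where(np.abs(P[:, 0]) <= 0.4)[0],
     np.where(P[:, 0] >= 0.2)[0]]
E = [(0, 1), (1, 2)]                             # edges of the nerve

def components(idx):
    """Component label of each point in the Rips 1-skeleton on P[idx]."""
    D = squareform(pdist(P[idx]))
    _, labels = connected_components(csr_matrix(D <= eps), directed=False)
    return labels

lab_v = [components(idx) for idx in V]
stalk_v = [lab.max() + 1 for lab in lab_v]       # dim F_0(v) = number of components
offset = np.concatenate([[0], np.cumsum(stalk_v)])

blocks, total_e = [], 0
for (a, b) in E:
    idx_e = np.intersect1d(V[a], V[b])
    lab_e = components(idx_e)
    ne = lab_e.max() + 1
    # extension maps F_0(v <= e): each component of R_e lands in a component of R_v
    Ma, Mb = np.zeros((stalk_v[a], ne)), np.zeros((stalk_v[b], ne))
    for j, p in enumerate(idx_e):
        Ma[lab_v[a][np.where(V[a] == p)[0][0]], lab_e[j]] = 1
        Mb[lab_v[b][np.where(V[b] == p)[0][0]], lab_e[j]] = 1
    col = np.zeros((offset[-1], ne))             # column block of the boundary map
    col[offset[b]:offset[b + 1]] += Mb
    col[offset[a]:offset[a + 1]] -= Ma
    blocks.append(col)
    total_e += ne

d1 = np.hstack(blocks)
r = np.linalg.matrix_rank(d1)
print("dim H_0(C.F_0) =", offset[-1] - r)        # 1: the sampled circle is connected
print("dim H_1(C.F_0) =", total_e - r)           # 1: the loop, seen only across patches
\end{verbatim}
Here each local Rips complex is a disjoint union of arcs, so $\cosheaf{F}^{\epsilon}_1$ vanishes and the single 1-cycle of $\mathscr{R}^{\epsilon}$ is accounted for entirely by $H_1(C_{\bullet} \cosheaf{F}^{\epsilon}_0)$, in agreement with Equation (\ref{DistributedComputationK}).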
The following two examples illustrate the difference between $H_0(C_{\bullet}\cosheaf{F}_1^{\epsilon})$ and $H_1(C_{\bullet}\cosheaf{F}_0^{\epsilon})$.
\begin{example} \label{SmallE} Let $P \subset \mathbb{R}^2$ be a point cloud covered by three sets $V_1, V_2, V_3$ with nerve an interval as illustrated in Figure \ref{fig:AllPoints}. \begin{figure}\label{fig:AllPoints}
\end{figure}
Consider $H_1(\mathscr{R}^{\epsilon})$ for some parameter $\epsilon$. The Rips complex $\mathscr{R}^{\epsilon}$ and the Rips system over the nerve $N_{\mathscr{V}}$ are illustrated in Figures \ref{fig:RipsComplexEx1} and \ref{fig:RipsSystemEx1}. Let $v_1, v_2, v_3$ denote the vertices of $N_{\mathscr{V}}$ that correspond to the cover sets $V_1, V_2, V_3$. Let $e_{12}$ and $e_{23}$ denote the edges of $N_{\mathscr{V}}$ that correspond to $V_1 \cap V_2$ and $V_2 \cap V_3$.
\begin{figure}
\caption{Rips complex $\mathscr{R}^{\epsilon}$.}
\caption{Rips complex and the associated Rips system.}
\label{fig:RipsComplexEx1}
\label{fig:RipsSystemEx1}
\label{fig:RipsSystem80}
\end{figure}
The two relevant cosheaves, $\cosheaf{F}_0^{\epsilon}$ and $\cosheaf{F}_1^{\epsilon}$, are illustrated in Figures \ref{fig:F02} and \ref{fig:F12}. The maps $\cosheaf{F}^{\epsilon}_0(v_1 \unlhd e_{12})$ and $\cosheaf{F}^{\epsilon}_0(v_2 \unlhd e_{12})$ are represented by the matrix $\begin{bmatrix} 1 & 1 \end{bmatrix} $. All other maps are identity maps.
\begin{figure}
\caption{Cosheaf $\cosheaf{F}_1^{\epsilon}$.}
\label{fig:F12}
\caption{Cosheaf $\cosheaf{F}_0^{\epsilon}$.}
\label{fig:F02}
\caption{The two relevant cosheaves for computing $H_1(\mathscr{R}^{\epsilon}).$}
\label{fig:Cosheaves}
\end{figure}
One can verify that Equation (\ref{DistributedComputationK}) holds by computing \begin{equation} \label{CosheafHomologies1} H_0(C_{\bullet}\cosheaf{F}_1^{\epsilon})=0, \quad H_1(C_{\bullet}\cosheaf{F}_0^{\epsilon}) =\mathbb{K}. \end{equation} \end{example}
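These dimensions can be checked by a direct rank computation. With the stalks ordered as $\cosheaf{F}^{\epsilon}_0(e_{12}) = \mathbb{K}^2$ and $\cosheaf{F}^{\epsilon}_0(e_{23}) = \mathbb{K}$ for the columns, $\cosheaf{F}^{\epsilon}_0(v_1), \cosheaf{F}^{\epsilon}_0(v_2), \cosheaf{F}^{\epsilon}_0(v_3)$ for the rows, and one choice of orientations for the incidence numbers, the boundary map of $C_{\bullet} \cosheaf{F}^{\epsilon}_0$ is the $3 \times 3$ matrix in the sketch below.
\begin{verbatim}
import numpy as np

# boundary map of C.(F_0^eps): blocks [1 1] and identities, signed by incidence
d1 = np.array([[ 1.0,  1.0,  0.0],
               [-1.0, -1.0,  1.0],
               [ 0.0,  0.0, -1.0]])
r = np.linalg.matrix_rank(d1)                    # rank 2
print("dim H_1(C.F_0) =", d1.shape[1] - r)       # 1, as in (CosheafHomologies1)
print("dim H_0(C.F_0) =", d1.shape[0] - r)       # 1
\end{verbatim}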
\begin{example} \label{LargeE}
Consider now a parameter $\epsilon'$ that is larger than the parameter from Example \ref{SmallE}. The Rips complex $\mathscr{R}^{\epsilon'}$ and the Rips system are illustrated in Figures \ref{fig:RipsComplexEx2} and \ref{fig:RipsSystemEx2}.
\begin{figure}
\caption{Rips complex $\mathscr{R}^{\epsilon'}$.}
\caption{Rips complex and the associated Rips system.}
\label{fig:RipsComplexEx2}
\label{fig:RipsSystemEx2}
\label{fig:RipsSystem140}
\end{figure}
One can compute the cosheaves $\cosheaf{F}_1^{\epsilon'}$ and $\cosheaf{F}_0^{\epsilon'}$, and obtain the following cosheaf homologies
\begin{equation} \label{CosheafHomologies2} H_0(C_{\bullet}\cosheaf{F}_1^{\epsilon'}) = \mathbb{K}, \quad H_1(C_{\bullet}\cosheaf{F}_0^{\epsilon'})=0. \end{equation}
To compare the cosheaf homologies from Equations (\ref{CosheafHomologies1}) and (\ref{CosheafHomologies2}) for the two parameters $\epsilon < \epsilon'$, note that both $H_1(\mathscr{R}^{\epsilon}) = \mathbb{K}$ and $H_1(\mathscr{R}^{\epsilon'}) = \mathbb{K}$. In Example \ref{SmallE}, the homology class of $H_1(\mathscr{R}^{\epsilon})$ is detected by $H_1(C_{\bullet}\cosheaf{F}_0^{\epsilon})$, while in Example \ref{LargeE}, the homology class of $H_1(\mathscr{R}^{\epsilon'})$ is detected by $H_0(C_{\bullet}\cosheaf{F}_1^{\epsilon'})$. The difference can be explained by comparing the Rips systems from Figures \ref{fig:RipsSystemEx1} and \ref{fig:RipsSystemEx2}. In Figure \ref{fig:RipsSystemEx2}, the Rips complex $\mathscr{R}^{\epsilon'}_{v_2}$ contains a non-trivial 1-cycle, while in Figure \ref{fig:RipsSystemEx1}, there is no such 1-cycle contained in any of the complexes $\mathscr{R}^{\epsilon}_{\sigma}$ for $\sigma \in N_{\mathscr{V}}$.
In general, $H_0(C_{\bullet}\cosheaf{F}_n^{\epsilon})$ detects $n$-cycles that already exist in $\mathscr{R}^{\epsilon}_{\sigma}$ for some $\sigma \in N_{\mathscr{V}}$. On the other hand, $H_1(C_{\bullet}\cosheaf{F}_{n-1}^{\epsilon})$ detects $n$-cycles of $\mathscr{R}^{\epsilon}$ that are not represented by cycles in $\mathscr{R}^{\epsilon}_{\sigma}$ for any $\sigma \in N_{\mathscr{V}}$. \end{example}
\section{Distributed computation of persistent homology} \label{sec:DistributedPH}
We restate the main question using the terminology introduced so far.
Given a point cloud $P$, one can build Rips complexes for increasing parameter values $(\epsilon_i)_{i=1}^N$, resulting in the following sequence of Rips complexes and inclusion maps \[ \mathscr{R}^1 \xhookrightarrow{\iota^1} \mathscr{R}^2 \xhookrightarrow{\iota^2} \cdots \xhookrightarrow{\iota^{N-1}} \mathscr{R}^N.\] By applying the homology functor of dimension $n$, one obtains the persistence module \begin{equation} \label{InterestPM} \mathbb{V}: H_{n}(\mathscr{R}^1) \xrightarrow{\iota^1_*} H_{n}(\mathscr{R}^2) \xrightarrow{\iota^2_*} \cdots \xrightarrow{\iota^{N-1}_*} H_{n}(\mathscr{R}^N). \end{equation} Assuming a covering $\mathscr{V}$ of the point cloud $P$ that has 1-d nerve (cf. Lemma \ref{DiscreteCGNBound}), we have the following isomorphisms \begin{equation} H_n(\mathscr{R}^i) \cong H_0( C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet}\cosheaf{F}^i_{n-1}) \end{equation} for every $i$ and $n$. Our main question is stated as the following.
\textit{ Can we construct a persistence module \[
\mathbb{V}_{\Psi} : H_0( C_{\bullet} \cosheaf{F}^1_n) \oplus H_1(C_{\bullet}\cosheaf{F}^1_{n-1}) \xrightarrow{\Psi^1} \cdots \xrightarrow{\Psi^{N-1}} H_0( C_{\bullet} \cosheaf{F}^{N}_n) \oplus H_1(C_{\bullet}\cosheaf{F}^{N}_{n-1}) \] isomorphic to the persistence module $\mathbb{V}$ from Equation (\ref{InterestPM})? }
In \S\ref{CosheafMorphismSection}, we show that the most naturally induced morphisms of cosheaf homologies are not enough to construct the desired persistence module $\mathbb{V}_{\Psi}$. In \S\ref{SpectralSequenceSection}, we use spectral sequences to construct a map $\psi^i : H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ that can be used to define the persistence module $\mathbb{V}_{\Psi}$. As it will be illustrated in \S\ref{SpectralSequenceSection}, there are multiple choices involved in defining the map $\psi^i$, and the construction of $\mathbb{V}_{\Psi}$ requires that the maps $\psi^i$ be defined consistently across parameters $(\epsilon_i)_{i=1}^N$. In \S\ref{Construction}, we provide an algorithm to construct the maps $\psi^i$ consistently across parameters $(\epsilon_i)_{i=1}^N$. Furthermore, we construct the persistence module \[
\mathbb{V}_{\Psi} : H_0( C_{\bullet} \cosheaf{F}^1_n) \oplus H_1(C_{\bullet}\cosheaf{F}^1_{n-1}) \xrightarrow{\Psi^1} \cdots \xrightarrow{\Psi^{N-1}} H_0( C_{\bullet} \cosheaf{F}^{N}_n) \oplus H_1(C_{\bullet}\cosheaf{F}^{N}_{n-1}). \] In \S\ref{IsoPersistenceModules}, we show that the persistence module $\mathbb{V}_{\Psi}$ is isomorphic to the persistence module $\mathbb{V}$ from Equation (\ref{InterestPM}).
\subsection{Cosheaf morphisms} \label{CosheafMorphismSection}
Given a pair of parameters $\epsilon_i < \epsilon_{i+1}$ and $\sigma \in N_{\mathscr{V}}$, there exist maps $\cosheaf{F}^i_n(\sigma) \to \cosheaf{F}^{i+1}_n(\sigma)$ induced by the inclusion $\mathscr{R}^i_{\sigma} \hookrightarrow \mathscr{R}^{i+1}_{\sigma}$. The collection of such maps is the cosheaf morphism $\phi^i_n : \cosheaf{F}^i_n \to \cosheaf{F}^{i+1}_n$. In particular, the cosheaf morphisms $\phi^i_n$ and $\phi^i_{n-1}$ induce the following morphisms on homology \begin{eqnarray} \label{InducedMaps} \begin{split} H_0(\phi^i_n) & : H_0(C_{\bullet}\cosheaf{F}^i_n) \to H_0(C_{\bullet}\cosheaf{F}^{i+1}_n),\\ H_1(\phi^i_{n-1}) & : H_1(C_{\bullet}\cosheaf{F}^i_{n-1}) \to H_1(C_{\bullet}\cosheaf{F}^{i+1}_{n-1}). \end{split} \end{eqnarray}
The maps $H_0(\phi^i_n)$ and $H_1(\phi^i_{n-1})$ are insufficient to construct a persistence module isomorphic to $\mathbb{V}$ from Equation (\ref{InterestPM}), as we now demonstrate. Using the maps $H_0(\phi^i_n)$ and $H_1(\phi^i_{n-1})$, one defines \[
\omega^i : H_0(C_{\bullet}\cosheaf{F}^i_n) \oplus H_1(C_{\bullet}\cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet}\cosheaf{F}^{i+1}_n) \oplus H_1(C_{\bullet}\cosheaf{F}^{i+1}_{n-1}) \] by $\omega^i ( u,v) = ( H_0(\phi^i_n)(u), H_1(\phi^i_{n-1})(v))$. The obvious attempt to reconstruct $\mathbb{V}$ is the persistence module \[
\mathbb{O} : H_0( C_{\bullet} \cosheaf{F}^1_n) \oplus H_1(C_{\bullet}\cosheaf{F}^1_{n-1}) \xrightarrow{\omega^1} \cdots \xrightarrow{\omega^{N-1}} H_0( C_{\bullet} \cosheaf{F}^{N}_n) \oplus H_1(C_{\bullet}\cosheaf{F}^{N}_{n-1}). \]
{\bf Claim:} $\mathbb{O}$ cannot be isomorphic to $\mathbb{V}$ from Equation (\ref{InterestPM}).
{\em Proof:} The putative isomorphisms $\Phi^i: H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_n(\mathscr{R}^i)$ would yield commutative diagrams \begin{equation} \label{CommutativeDiagram1} \begin{tikzcd} H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \arrow{d}{\omega^i} \arrow{r}{\Phi^i} & H_n(\mathscr{R}^i) \arrow{d}{\iota^i_*} \\ H_0(C_{\bullet} \cosheaf{F}^{i+1}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{i+1}_{n-1}) \arrow{r}{\Phi^{i+1}} & H_n(\mathscr{R}^{i+1}) \end{tikzcd} \end{equation}
However, Examples \ref{SmallE} and \ref{LargeE} illustrate a situation where it is impossible to find isomorphisms $\Phi^i$ and $\Phi^{i+1}$ that make Diagram \ref{CommutativeDiagram1} commute.
Assume that there exists an isomorphism $\Phi^i$, and let $(0,s) \in H_0(C_{\bullet} \cosheaf{F}^i_1) \oplus H_1(C_{\bullet} \cosheaf{F}^i_0)$ be the element such that $\Phi^i(0,s)$ represents the non-trivial $1$-cycle in Figure \ref{fig:RipsComplexEx1}. Then, $\iota^i_* \circ \Phi^i (0,s)$ must be the non-trivial 1-cycle in Figure \ref{fig:RipsComplexEx2}. On the other hand, with our current construction of $\omega^i$, we have $\omega^i(0,s)=0$, and hence, $\Phi^{i+1} \circ \omega^i (0,s) =0$ for any isomorphism $\Phi^{i+1}$. Thus, there are no isomorphisms $\Phi^i$ and $\Phi^{i+1}$ that make Diagram \ref{CommutativeDiagram1} commute.
The core reason why Diagram \ref{CommutativeDiagram1} fails to commute is that, as we increase the parameter from $\epsilon_i$ to $\epsilon_{i+1}$, a class detected by $H_1(C_{\bullet}\cosheaf{F}^i_{n-1})$ can become homologous in $\mathscr{R}^{i+1}$ to a cycle lying in a single $\mathscr{R}^{i+1}_{\sigma}$, and hence detected by $H_0(C_{\bullet}\cosheaf{F}^{i+1}_n)$. The current construction of $\omega^i$ fails to take this subtlety into account. This motivates our technique: we construct a map from $H_1(C_{\bullet}\cosheaf{F}^i_{n-1})$ to $H_0(C_{\bullet}\cosheaf{F}^{i+1}_n)$.
\subsection{Connecting morphism via spectral sequences} \label{SpectralSequenceSection}
We seek a map $\psi^i : H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ for the construction of the persistence module $\mathbb{V}_{\Psi}$.
The plan is to build $\psi^i$ as an extension of a map $\delta^i: \ker H_1(\phi^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$, using a spectral sequence type argument.
\begin{Theorem} \label{ConnectingHomThm2} Let $P$ be a point cloud with finite cover $\mathscr{V}$ having one-dimensional nerve $N_{\mathscr{V}}$. There exists a morphism $\delta^i : \ker H_1( \phi^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ induced by cosheaf morphisms $\phi^i_n$ and $\phi^i_{n-1}$. \end{Theorem}
\begin{proof}
\sloppy Consider the following commutative diagram. Let $\iota^i_n :\bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v) \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_v)$ denote the collection of inclusion maps of the Rips complexes over the vertices of $N_{\mathscr{V}}$, and let $\kappa^i_n : \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_e) \to \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_e)$ denote the collection of inclusion maps of the Rips complexes over the edges of $N_{\mathscr{V}}$. Let $e^i_n:\bigoplus\limits_{e \in N_{\mathscr{V}}}C_n(\mathscr{R}^i_e) \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v)$ denote the collection of inclusion maps.
The front and back faces of the cube in Diagram \ref{SS0} are the $0^{\textrm{th}}$ pages of the spectral sequence in the proof of Lemma \ref{DiscreteCGNBound} for parameters $\epsilon_i$ and $\epsilon_{i+1}$ respectively.
\begin{equation} \label{SS0} \begin{tikzcd}[column sep=small]
\quad &\quad \arrow{d} & \quad & \quad \arrow{d} \\ \quad \arrow{d} & \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_v) \arrow[dd,"\partial", near start] \arrow[from=rr,"e^{i+1}_n" near end] & \quad \arrow{d} &\bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_{e}) \arrow[dd,"\partial", near start] \\
\bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n}(\mathscr{R}^i_v) \arrow{ur}{\iota^i_n} \arrow[dd, "\partial", near start ] \arrow[from=rr, crossing over,"e^i_n" near end] & & \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n}(\mathscr{R}^i_{e}) \arrow{ur}{\kappa^i_{n}} \\
& \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}^{i+1}_v) \arrow{d} \arrow[from=rr, "e^{i+1}_{n-1}" near end] & & \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}^{i+1}_{e}) \arrow{d} \\
\bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}^i_v) \arrow{d} \arrow[from=rr,"e^i_{n-1}" near end] \arrow{ur}{\iota^i_{n-1}} & \quad & \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}^i_{e}) \arrow{ur}{\kappa^i_{n-1}} \arrow{d} \arrow[from=uu, crossing over,"\partial", near start] & \quad \\
\quad &\quad & \quad & \quad \end{tikzcd} \end{equation}
Computing the homology with respect to the boundary maps $\partial$ yields Diagram \ref{SS1}, in which the maps $\partial^i_n$ are the boundary maps of the chain complexes $C_{\bullet} \cosheaf{F}^i_n$ of the respective cosheaves.
\begin{equation} \label{SS1} \begin{tikzcd}[column sep=small]
\quad &\quad \dar[dashed,dash] & \quad & \quad \dar[dashed,dash] \\ \quad \arrow[d, dashed, dash] & \bigoplus\limits_{v \in N_{\mathscr{V}}} H_n(\mathscr{R}^{i+1}_v) \arrow[dd,dashed,dash] \arrow[from=rr,"\partial^{i+1}_n" near end] & \quad \arrow[d,dashed,dash] &\bigoplus\limits_{e \in N_{\mathscr{V}}} H_n(\mathscr{R}^{i+1}_{e}) \arrow[dd,dashed,dash] \\
\bigoplus\limits_{v \in N_{\mathscr{V}}} H_{n}(\mathscr{R}^i_v) \arrow{ur}{(\phi^{i}_n)_v} \arrow[dd, dashed,dash ] \arrow[from=rr, crossing over,"\partial^i_n" near end] & & \bigoplus\limits_{e \in N_{\mathscr{V}}} H_{n}(\mathscr{R}^i_{e}) \arrow{ur}{(\phi^{i}_{n})_e} \\
& \bigoplus\limits_{v \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^{i+1}_v) \arrow[d,dashed,dash] \arrow[from=rr, "\partial^{i+1}_{n-1}" near end] & & \bigoplus\limits_{e \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^{i+1}_{e}) \arrow[d,dashed,dash] \\
\bigoplus\limits_{v \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^i_v) \dar[dashed,dash] \arrow[from=rr,"\partial^i_{n-1}" near end] \arrow{ur}{(\phi^{i}_{n-1})_v} & \quad & \bigoplus\limits_{e \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^i_{e}) \arrow{ur}{(\phi^{i}_{n-1})_e} \dar[dashed,dash] \arrow[from=uu, crossing over, dashed, dash] & \quad \\
\quad &\quad & \quad & \quad \end{tikzcd} \end{equation}
Computing the homology with respect to these maps $\partial^i_n$ yields Diagram \ref{SS2} of cosheaf homologies.
\begin{equation} \label{SS2} \begin{tikzcd}
\quad &\quad \dar[dashed,dash] & \quad & \quad \dar[dashed,dash] \\ \quad \arrow[d, dashed, dash] & H_0(C_{\bullet} \cosheaf{F}^{i+1}_n) \arrow[dd,dashed,dash] \arrow[from=rr,dashed,dash] & \quad \arrow[d,dashed,dash] &H_1(C_{\bullet} \cosheaf{F}^{i+1}_n) \arrow[dd,dashed,dash] \\
H_0(C_{\bullet} \cosheaf{F}^i_n) \arrow{ur}{H_0(\phi^i_n)} \arrow[dd, dashed,dash ] \arrow[from=rr, crossing over,dashed,dash] & & H_1(C_{\bullet} \cosheaf{F}^i_n) \arrow{ur}{H_1(\phi^i_{n})} \\
& H_0(C_{\bullet} \cosheaf{F}^{i+1}_{n-1}) \arrow[d,dashed,dash] \arrow[from=rr, dashed,dash] & & H_1(C_{\bullet} \cosheaf{F}^{i+1}_{n-1}) \arrow[d,dashed,dash] \\
H_0(C_{\bullet} \cosheaf{F}^i_{n-1}) \dar[dashed,dash] \arrow[from=rr,dashed,dash] \arrow{ur}{H_0(\phi^i_{n-1})} & \quad & H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \arrow{ur}{H_1(\phi^i_{n-1})} \dar[dashed,dash] \arrow[from=uu, crossing over, dashed, dash] & \quad \\
\quad &\quad & \quad & \quad \end{tikzcd} \end{equation}
To continue, some notation is necessary to distinguish where homology classes reside. Let $\langle$ $\rangle$ and $\{$ $\}$ denote the homology classes that appear in Diagrams \ref{SS1} and $\ref{SS2}$ respectively. For example, if $\gamma \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}^i_e)$ and $\partial \gamma=0$, then $\langle \gamma \rangle$ denotes the homology class of $\gamma$ in $\bigoplus\limits_{e \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^i_e)$. Furthermore, if $\partial^i_{n-1} \langle \gamma \rangle=0$, then $\{ \langle \gamma \rangle \}$ denotes the homology class of $\langle \gamma \rangle$ in $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$.
With this notation in place, we define a map $\delta^i : \ker H_1(\phi^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ on a basis $\mathscr{B}^i_{\ker}$ of $\ker H_1(\phi^i_{n-1})$. For each basis element $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$, fix a coset representative $b^*$ of $\langle b \rangle$. Since $\{ \langle b \rangle \} \in \ker H_1(\phi^i_{n-1})$, we know that
$(\phi^{i}_{n-1})_e \langle b \rangle$ is trivial in $\bigoplus\limits_{e \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^{i+1}_e)$. Thus, there exists $\alpha^{i+1} \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_e)$ such that \begin{equation} \label{Alpha} \partial \alpha^{i+1}=\kappa^i_{n-1} b^* . \end{equation} Moreover,
since $\partial^i_{n-1} \langle b \rangle =0$, there exists $\beta^i \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_{v})$ such that
$\partial \beta^i=e_{n-1}^i b^*$.
With this, we now define \begin{equation} \label{deltai_def_basis} \delta^i \{ \langle b \rangle \} = \{ \langle -e^{i+1}_n \circ \alpha^{i+1} + \iota^i_n \circ \beta^i \rangle \}. \end{equation} One can check that $-e^{i+1}_n \circ \alpha^{i+1} + \iota^i_n \circ \beta^i$ represents an element in $H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$.
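Indeed, since the inclusions commute with the boundary maps and the cube in Diagram \ref{SS0} commutes,
\[ \partial\bigl( -e^{i+1}_n \alpha^{i+1} + \iota^i_n \beta^i \bigr) = -e^{i+1}_{n-1} \partial \alpha^{i+1} + \iota^i_{n-1} \partial \beta^i = -e^{i+1}_{n-1} \kappa^i_{n-1} b^* + \iota^i_{n-1} e^i_{n-1} b^* = 0, \]
so $\langle -e^{i+1}_n \alpha^{i+1} + \iota^i_n \beta^i \rangle$ is a well-defined class in $\bigoplus\limits_{v \in N_{\mathscr{V}}} H_n(\mathscr{R}^{i+1}_v)$.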
Extending linearly from the basis gives a morphism $\delta^i : \ker H_1(\phi^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$.
Note that the construction of $\delta^i$ involves a choice of basis $\mathscr{B}^i_{\ker}$ of $\ker H_1(\phi^i_{n-1})$, its coset representatives, and a choice of $\alpha^{i+1}$ and $\beta^i$ for each basis vector $\{ \langle b \rangle \}$ of $\ker H_1(\phi^i_{n-1})$. One can check that different choices of $\alpha^{i+1}$ do not affect the map $\delta^i$ (Appendix \ref{DeltaWellDefined}). However, different choices of $\beta^i$ do affect the map $\delta^i$. In \S\ref{Construction}, we construct the map $\delta^i$ by carefully choosing the basis $\mathscr{B}^i_{\ker}$, its coset representatives, and $\beta^i$. \end{proof}
Once the map $\delta^i$ is defined, we can extend it to $\psi^i:H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ as follows. Note that \begin{equation} \label{H1Decomposition} H_1 (C_{\bullet} \cosheaf{F}^i_{n-1}) = A^i \oplus \ker H_1(\phi^i_{n-1}) \end{equation} for some subspace $A^i$. Then every $\{ \langle y \rangle \} \in H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$ can be written uniquely as $\{ \langle y \rangle \}= \{ \langle y_1 \rangle \} + \{ \langle y_2 \rangle \}$, with $ \{ \langle y_1 \rangle \} \in A^i$ and $\{ \langle y_2 \rangle \} \in \ker H_1(\phi^i_{n-1})$. Extend the map $\delta^i$ to $\psi^i : H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ by \begin{equation} \label{PsiExtension} \psi^i \{ \langle y \rangle \} = \psi^i(\{ \langle y_1 \rangle \} + \{ \langle y_2 \rangle \} ) = \delta^i\{ \langle y_2 \rangle \}. \end{equation}
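To make the extension concrete in coordinates, the following is a minimal linear-algebra sketch in Python (using SymPy). It is an illustration of Equation (\ref{PsiExtension}) only: the matrices are stand-ins for $H_1(\phi^i_{n-1})$ and $\delta^i$, the helper name is ours, and this is not the construction carried out in \S\ref{Construction}, where the complement and the bases are chosen consistently across parameters.
\begin{verbatim}
# Illustration only: extend a map given on ker(phi) by zero on a complement,
# as in the extension psi of delta described above.
from sympy import Matrix, zeros

def extend_by_zero(phi, delta_on_ker):
    """phi: stand-in matrix for H_1(phi); the columns of delta_on_ker are the
    images under delta of the kernel basis returned by phi.nullspace().
    Returns the matrix of psi: delta on ker(phi), zero on a complement."""
    ker = phi.nullspace()                      # basis of ker(phi)
    k, n = len(ker), phi.cols
    basis = list(ker)
    for j in range(n):                         # greedily extend to a basis
        e = zeros(n, 1); e[j] = 1
        if Matrix.hstack(*basis, e).rank() > len(basis):
            basis.append(e)
    P = Matrix.hstack(*basis)                  # columns: kernel basis, then complement
    psi_new = Matrix.hstack(delta_on_ker, zeros(delta_on_ker.rows, n - k))
    return psi_new * P.inv()                   # matrix of psi in the standard basis

# Toy usage: phi has a one-dimensional kernel.
phi = Matrix([[1, 1], [0, 0]])
delta_on_ker = Matrix([2, 0, 1])               # image of the kernel basis vector
psi = extend_by_zero(phi, delta_on_ker)
assert psi * phi.nullspace()[0] == delta_on_ker
\end{verbatim}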
\subsection{Construction of distributed persistence module} \label{Construction}
In this section, we provide an algorithmic way of making choices across the parameters $(\epsilon_i)_{i=1}^N$ so that the maps $\delta^i$ and $\psi^i$ are defined consistently from one parameter to the next. The resulting collection of maps $(\psi^i)_{i=1}^{N-1}$ will then be used to construct the desired persistence module $\mathbb{V}_{\Psi}$.
We will inductively fix a basis $\mathscr{B}^i_{\ker}$ of $\ker H_1(\phi^i_{n-1})$ and extend it to a basis $\mathscr{B}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. We will define a set map $\Gamma^i : \mathscr{B}^i \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v)$ that consistently chooses $\beta^i$'s for each element of $\mathscr{B}^i$. We will then define $\delta^i$ on the basis $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$ by \begin{equation} \label{RealDelta} \delta^i \{ \langle b \rangle \} =\{ \langle \, -e^{i+1}_n \alpha^{i+1} + \iota^i_n \circ \, \Gamma^i \{ \langle b \rangle \} \, \rangle \} \end{equation} and extend the map linearly to $\ker H_1(\phi^i_{n-1})$. The map $\delta^i$ can then be extended to $\psi^i:H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ as Equation (\ref{PsiExtension}).
Note that the construction of $\delta^i: \ker H_1(\phi^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ requires a choice of $\beta^i$ for only the basis elements $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$. However, we choose such $\beta^i$ for every basis element $\{ \langle b \rangle \} \in \mathscr{B}^i$ because such choice can affect the construction of $\delta^j$ for $j > i$.
\subsubsection*{Base case}
Recalling Diagram \ref{SS0},
note that \[ H_1(C_{\bullet} \cosheaf{F}^1_{n-1}) = A^1 \oplus \ker H_1(\phi^1_{n-1})\] for some subspace $A^1$. Let $\mathscr{B}^1_A$ be a basis of $A^1$, and let $\mathscr{B}^1_{\ker}$ be a basis of $\ker H_1(\phi^1_{n-1})$. Then, \begin{equation} \label{BaseCaseB} \mathscr{B}^1 = \mathscr{B}^1_A \cup \mathscr{B}^1_{\ker} \end{equation} is a basis of $H_1(C_{\bullet} \cosheaf{F}^1_{n-1})$. For each basis element $\{ \langle b \rangle \} \in \mathscr{B}^1$, fix a coset representative $b^*$ of $\langle b \rangle$.
Define a set map $\Gamma^1: \mathscr{B}^1 \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^1_v)$ as following. For each $\{ \langle b \rangle \} \in \mathscr{B}^1$, let \begin{equation} \label{Correspondence1} \Gamma^1 \{ \langle b \rangle \} = \beta^1, \end{equation} where $\beta^1 \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^1_v)$ is any element satisfying $\partial \beta^1 = e^1_{n-1} b^*$.
Define $\delta^1$ on $\{ \langle b \rangle \} \in \mathscr{B}^1_{\ker}$ by \[ \delta^1 \{ \langle b \rangle \} =\{ \langle \, -e^{2}_n \alpha^{2} + \iota^1_n \circ \, \Gamma^1 \{ \langle b \rangle \} \, \rangle \}, \] where $\alpha^2 \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^{2}_e)$
is any element satisfying Equation (\ref{Alpha}) with $i=1$, that is, $\partial \alpha^2 = \kappa^1_{n-1} b^*$. Extend $\delta^1$ linearly to $\ker H_1(\phi^1_{n-1})$. The map $\delta^1$ can be used to define the map $\psi^1 : H_1(C_{\bullet} \cosheaf{F}^1_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^2_n)$ via Equation (\ref{PsiExtension}).
\subsubsection*{Inductive step}
\begin{itemize} \renewcommand{\labelitemi}{$\bullet$} \item \textbf{Inductive assumption.}
Note that \[ H_1(C_{\bullet} \cosheaf{F}^{i-1}_{n-1}) = A^{i-1} \oplus \ker H_1(\phi^{i-1}_{n-1})\] for some subspace $A^{i-1}$. \begin{itemize} \item Assume that there exists a basis $\mathscr{B}^{i-1}$ of $H_1(C_{\bullet} \cosheaf{F}^{i-1}_{n-1})$ that has the form \[\mathscr{B}^{i-1} = \mathscr{B}^{i-1}_A \cup \mathscr{B}^{i-1}_{\ker}, \] where $\mathscr{B}^{i-1}_A$ is a basis of $A^{i-1}$ and $\mathscr{B}^{i-1}_{\ker}$ is a basis of $\ker H_1(\phi^{i-1}_{n-1})$. \item Assume that for each basis element $\{ \langle b \rangle \} \in \mathscr{B}^{i-1}$, a coset representative $b^*$ of $\langle b \rangle$ has been fixed. \item Assume that there exists a set map $\Gamma^{i-1} : \mathscr{B}^{i-1} \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i-1}_v)$ such that \begin{equation} \label{InductionAssumption} \partial \Gamma^{i-1} \{ \langle b \rangle \} = e^{i-1}_{n-1} b^* \end{equation} for every $ \{ \langle b \rangle \} \in \mathscr{B}^{i-1}$. \end{itemize}
\item \textbf{Step 1. Fix a basis $\mathscr{C}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$ that is compatible with $\mathscr{B}^{i-1}$.}
\sloppy By assumption, the basis $\mathscr{B}^{i-1}$ of $H_1(C_{\bullet} \cosheaf{F}^{i-1}_{n-1})$ has the form $\mathscr{B}^{i-1} = \mathscr{B}^{i-1}_A \cup \mathscr{B}^{i-1}_{\ker}$. Without loss of generality, assume that \[ \mathscr{B}^{i-1}_A = \{ \, \{ \langle b_1 \rangle \}, \dots, \{ \langle b_t \rangle \} \, \}. \] One can show that $\{ \langle \kappa^{i-1}_{n-1} b_1 \rangle \}, \dots, \{ \langle \kappa^{i-1}_{n-1} b_t \rangle \}$ are linearly independent in $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$ (Appendix \ref{LinearIndependence}). Let \[ \mathscr{C}^i_{\im} = \{ \, \{ \langle \kappa^{i-1}_{n-1} b_1 \rangle \}, \dots, \{ \langle \kappa^{i-1}_{n-1} b_t \rangle \} \, \}.\] Extend $\mathscr{C}^i_{\im}$ to a basis $\mathscr{C}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. Let $\mathscr{C}^i_D$ denote the basis vectors of $\mathscr{C}^i$ that are not in $\mathscr{C}^i_{\im}$, i.e., \begin{equation} \label{BasisC} \mathscr{C}^i = \mathscr{C}^i_{\im} \cup \mathscr{C}^i_D. \end{equation}
If $\{ \langle c \rangle \} \in \mathscr{C}^i_{\im}$ such that $\{ \langle c \rangle \} = \{ \langle \kappa^{i-1}_{n-1} b \rangle \}$, then let $ c^*= \kappa^{i-1}_{n-1} b^*$ be the coset representative of $\langle c \rangle$. If $\{ \langle c \rangle \} \in \mathscr{C}^i_D$, fix any coset representative $c^*$ of $\langle c \rangle$.
\item \textbf{Step 2. Define a set map $\Gamma^i_{\mathscr{C}} : \mathscr{C}^i \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v)$.}
We will define a set map $\Gamma^i_{\mathscr{C}}: \mathscr{C}^i \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v)$ such that \begin{equation} \label{GammaOnCReq} \partial \Gamma^i_{\mathscr{C}} \{ \langle c \rangle \} = e^i_{n-1} c^* \end{equation} for every $\{ \langle c \rangle \} \in \mathscr{C}^i$. Define \begin{equation} \label{GammaOnC} \Gamma^i_{\mathscr{C}} \{ \langle c \rangle \} = \begin{cases} \iota^{i-1}_n \Gamma^{i-1} \{ \langle b \rangle \} & \text{if } \{ \langle c \rangle \} = \{ \langle \kappa^{i-1}_{n-1} b \rangle \} \in \mathscr{C}^i_{\im} \\ \text{any } \beta^i \text{ satisfying } \partial \beta^i = e^i_{n-1} c^* & \text{if } \{ \langle c \rangle \} \in \mathscr{C}^i_D \end{cases} \end{equation}
\item \textbf{Step 3. Fix a new basis $\mathscr{B}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$}
Note that \[ H_1(C_{\bullet} \cosheaf{F}^{i}_{n-1}) = A^{i} \oplus \ker H_1(\phi^{i}_{n-1})\] for some subspace $A^i$. Let $\mathscr{B}^i_A$ be a basis of $A^i$, and let $\mathscr{B}^i_{\ker}$ be a basis of $\ker H_1(\phi^{i}_{n-1})$. Then, \begin{equation} \label{BasisB} \mathscr{B}^i = \mathscr{B}^i_{A} \cup \mathscr{B}^{i}_{\ker} \end{equation} is a basis of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$.
The coset representative $b^*$ for each basis vector $\{ \langle b \rangle \} \in \mathscr{B}^i$ follows naturally from the coset representatives of $\mathscr{C}^i$. In other words, if $\mathscr{C}^i =\{ \, \{ \langle c_1 \rangle \} , \dots, \{ \langle c_l \rangle \} \, \}$ and $\{ \langle b \rangle \} \in \mathscr{B}^i$ is written as \[ \{ \langle b \rangle \} = d_1 \{ \langle c_1 \rangle \} + \dots + d_l \{ \langle c_l \rangle \}, \] then let \begin{equation} \label{BasisBC} b^* = d_1 c_1^* + \dots + d_l c_l^* \end{equation} be the coset representative of $\langle b \rangle$.
\item \textbf{Step 4. Define the set map $\Gamma^i : \mathscr{B}^i \to \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v)$. }
Given $\{ \langle b \rangle \} \in \mathscr{B}^i$, if $\{ \langle b \rangle \}$ is written as \[ \{ \langle b \rangle \} = d_1 \{ \langle c_1 \rangle \} + \cdots + d_l \{ \langle c_l \rangle \},\] where $\{ \langle c_1 \rangle \}, \dots, \{ \langle c_l \rangle \}$ are the basis vectors of $\mathscr{C}^i$, then define $\Gamma^i \{ \langle b \rangle \}$ as \[ \Gamma^i \{ \langle b \rangle \} = d_1 \Gamma^i_{\mathscr{C}} \{ \langle c_1 \rangle \} + \cdots + d_l \Gamma^i_{\mathscr{C}} \{ \langle c_l \rangle \}. \] One can check that \begin{equation} \label{KeyEq} \partial \Gamma^i \{ \langle b \rangle \} = e^i_{n-1} b^*. \end{equation}
\item \textbf{Step 5. Define the maps $\delta^i$ and $\psi^i$.}
For each $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$, let \[ \delta^i \{ \langle b \rangle \} = \{ \langle -e^{i+1}_n \alpha^{i+1} + \iota^{i}_n \circ \Gamma^i \{ \langle b \rangle \} \, \rangle \} , \] where $\alpha^{i+1} \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_e)$ is any element satisfying $\partial \alpha^{i+1}=\kappa^i_{n-1} b^*.$
Extend $\delta^i$ linearly to $\delta^i : \ker H_1(\phi^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$. One can then extend $\delta^i$ to a map $\psi^i : H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$ via Equation (\ref{PsiExtension}).
\end{itemize}
Going through Steps 1-5 defines $\delta^i$ and $\psi^i$ inductively for every $i$. We can then define the map \[ \Psi^i : H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{i}_{n-1}) \to H_0(C_{\bullet} \cosheaf{F}^{i+1}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{i+1}_{n-1})\] by \begin{equation} \label{PsiDefinition} \Psi^{i} ( \{ \langle x \rangle \}, \{ \langle y \rangle \}) = (\,H_0(\phi^{i}_n) \{ \langle x \rangle \} + (-1)^{n+1} \psi^{i} \{ \langle y \rangle \} \,,\, H_1(\phi^{i}_{n-1}) \{ \langle y \rangle \} \,), \end{equation} where $H_0(\phi^i_n)$ and $H_1(\phi^{i}_{n-1})$ are the maps defined in Equation (\ref{InducedMaps}). We can then define the persistence module $\mathbb{V}_{\Psi}$ \begin{equation} \label{PerPsi} \mathbb{V}_{\Psi} : H_0(C_{\bullet} \cosheaf{F}^1_n) \oplus H_1(C_{\bullet} \cosheaf{F}^1_{n-1}) \xrightarrow{\Psi^1} \cdots \xrightarrow{\Psi^{N-1}} H_0(C_{\bullet} \cosheaf{F}^N_n) \oplus H_1(C_{\bullet} \cosheaf{F}^N_{n-1}). \end{equation}
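In block form, with respect to the direct-sum decompositions above, Equation (\ref{PsiDefinition}) says that
\[ \Psi^{i} = \begin{pmatrix} H_0(\phi^i_n) & (-1)^{n+1}\,\psi^i \\ 0 & H_1(\phi^i_{n-1}) \end{pmatrix}, \]
so $\Psi^i$ differs from the block-diagonal map $\omega^i$ considered earlier precisely by the off-diagonal correction term $(-1)^{n+1}\psi^i$.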
\subsection{Isomorphism of persistence modules} \label{IsoPersistenceModules}
We show that the persistence module $\mathbb{V}_{\Psi}$ constructed in Equation (\ref{PerPsi}) is isomorphic to the persistence module $\mathbb{V}$ from Equation (\ref{InterestPM}). In order to do so, we will show that both $\mathbb{V}_{\Psi}$ and $\mathbb{V}$ are isomorphic to the following persistence module \[\mathbb{V}_{\textrm{Tot}} : H_n(\textrm{Tot}^1) \xrightarrow{\iota^1_{\textrm{Tot}}} H_n(\textrm{Tot}^2) \xrightarrow{\iota^2_{\textrm{Tot}}} \cdots \xrightarrow{\iota^{N-1}_{\textrm{Tot}}} H_n(\textrm{Tot}^{N}), \] where each $H_n(\textrm{Tot}^i)$ is the homology of the double complex from Diagram \ref{Spectral0'} for parameter $\epsilon_i$, and $\iota^i_{\textrm{Tot}}$ is the morphism induced by maps of double complexes. \begin{equation} \label{Spectral0'} \begin{tikzcd}
\arrow{d}{\partial} \vdots & \arrow{d}{\partial} \vdots \\ \dar{\partial} \bigoplus\limits_{v \in N_{\mathscr{V}}} C_2(\mathscr{R}^i_v) & \arrow{l}{e^i_2} \dar{\partial} \bigoplus\limits_{e \in N_{\mathscr{V}}} C_2( \mathscr{R}^i_e) \\
\dar{\partial} \bigoplus\limits_{v \in N_{\mathscr{V}}} C_1( \mathscr{R}^i_v ) & \arrow{l}{e^i_1} \dar{\partial} \bigoplus\limits_{e \in N_{\mathscr{V}}} C_1(\mathscr{R}^i_e)\\ \bigoplus\limits_{v \in N_{\mathscr{V}}} C_0(\mathscr{R}^i_v) & \arrow{l}{e^i_0} \bigoplus\limits_{e \in N_{\mathscr{V}}} C_0(\mathscr{R}^i_e) \\ \end{tikzcd} \end{equation} Let $H_n(\textrm{Tot}^i)$ denote the homology of the double complex. Note that a coset of $H_n(\textrm{Tot}^i)$ is represented by $[x,y]$, where $x \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v)$, $y \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}^i_e)$, $\partial y =0$ and $\partial x =(-1)^{n-1} e^i_{n-1} y$. A coset $[x,y]$ is trivial in $H_n(\textrm{Tot}^i)$ if there exist $p_{n+1} \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n+1}(\mathscr{R}^i_v)$ and $q_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_e)$ such that $\partial q_n = y$ and $\partial p_{n+1}+ (-1)^{n+1} e^i_n (q_n) =x$.
Given increasing parameter values $( \epsilon_i)_{i=1}^N$, one can construct a double complex for each parameter $\epsilon_i$. There is an inclusion map from the double complex associated with parameter $\epsilon_i$ to that of parameter $\epsilon_{i+1}$, as illustrated in Diagram \ref{SS0}; its components are the maps $\iota_n^i$ and $\kappa_n^i$. This inclusion of double complexes induces a morphism $\iota^i_{\textrm{Tot}} :H_n(\textrm{Tot}^i) \to H_n(\textrm{Tot}^{i+1})$, which can be written explicitly as \begin{equation} \label{InducedTotalHomology} \iota_{\textrm{Tot}}^i ([x,y]) = [ \iota^i_n(x), \kappa^i_{n-1}(y)]. \end{equation}
\begin{Lemma} \label{Theorem2} The persistence modules $\mathbb{V}_{\emph{Tot}}$ and $\mathbb{V}$ are isomorphic. \end{Lemma}
\begin{proof} We will define isomorphisms $\Phi^i_{\textrm{Tot}} : H_n(\textrm{Tot}^i) \to H_n(\mathscr{R}^i)$ such that the following diagram commutes. \begin{equation} \label{Diagram2} \begin{tikzcd} H_n(\textrm{Tot}^1) \arrow{d}{\Phi^1_{\textrm{Tot}}} \arrow{r}{\iota^1_{\textrm{Tot}}} & H_n(\textrm{Tot}^2) \arrow{d}{\Phi^{2}_{\textrm{Tot}}} \arrow{r}{\iota^2_{\textrm{Tot}}} & \cdots \arrow{r}{\iota^{N-1}_{\textrm{Tot}}} & H_n(\textrm{Tot}^N) \arrow{d}{\Phi^N_{\textrm{Tot}}} \\ H_n(\mathscr{R}^1) \arrow{r}{\iota^1_*} & H_n(\mathscr{R}^{2}) \arrow{r}{\iota^2_*} & \cdots \arrow{r}{\iota^{N-1}_*} & H_n(\mathscr{R}^{N}) \end{tikzcd} \end{equation}
For each parameter $\epsilon_i$, let $j^i_n : \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_v) \to C_n(\mathscr{R}^i)$ be a collection of inclusion maps. Define $\Phi^i_{\textrm{Tot}}$ by
\begin{equation} \label{PsiTotal} \Phi^i_{\textrm{Tot}} ([x,y])=[j^i_n(x)]. \end{equation} One can check that $\Phi^i_{\textrm{Tot}}$ is well-defined and bijective (Appendix \ref{ProofTheorem2}).
Given $[x, y] \in H_n(\textrm{Tot}^i)$, note that \[ \iota^i_* \circ \Phi^i_{\textrm{Tot}} [x,y]= \iota^i_* [ j^i_n(x) ] = [\iota^i \circ j^i_n(x)],\] \[ \Phi^{i+1}_{\textrm{Tot}} \circ \iota^i_{\textrm{Tot}} [x,y]= \Phi^{i+1}_{\textrm{Tot}} [\iota^i_n ( x) , \kappa^i_{n-1}(y)] = [j^{i+1}_n \circ \iota^i_n(x)].\] Then, $\iota^i_* \circ \Phi^i_{\textrm{Tot}} = \Phi^{i+1}_{\textrm{Tot}} \circ \iota^i_{\textrm{Tot}}$ because all the maps involved are inclusion maps. Thus, Diagram \ref{Diagram2} commutes. \end{proof}
We now show that $\mathbb{V}_{\Psi}$ and $\mathbb{V}_{\textrm{Tot}}$ are isomorphic persistence modules. Recall that the persistence module $\mathbb{V}_{\Psi}$ is defined as \begin{equation} \mathbb{V}_{\Psi} : H_0(C_{\bullet} \cosheaf{F}^1_n) \oplus H_1(C_{\bullet} \cosheaf{F}^1_{n-1}) \xrightarrow{\Psi^1} \cdots \xrightarrow{\Psi^{N-1}} H_0(C_{\bullet} \cosheaf{F}^N_n) \oplus H_1(C_{\bullet} \cosheaf{F}^N_{n-1}), \end{equation} where the maps $\Psi^i$ are defined as \begin{equation} \Psi^{i} ( \{ \langle x \rangle \}, \{ \langle y \rangle \}) = (\,H_0(\phi^{i}_n) \{ \langle x \rangle \} + (-1)^{n+1} \psi^{i} \{ \langle y \rangle \} \,,\, H_1(\phi^{i}_{n-1}) \{ \langle y \rangle \} \,), \end{equation} where $H_0(\phi^i_n)$ and $H_1(\phi^{i}_{n-1})$ are the maps defined in Equation (\ref{InducedMaps}). Recall from Equation (\ref{H1Decomposition}) that every $\{ \langle y \rangle \} \in H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$ can be written uniquely as $\{ \langle y \rangle \} = \{ \langle y_1 \rangle \} + \{ \langle y_2 \rangle \}$, where $\{ \langle y_1 \rangle \} \in A^i$ and $\{ \langle y_2 \rangle \} \in \ker H_1(\phi^i_{n-1})$. Then, we can express the map $\Psi^i$ explicitly using the maps from Diagram \ref{SS0} and $\delta^i$ as follows: \begin{equation} \label{PsiExplicit} \Psi^{i} ( \{ \langle x \rangle \}, \{ \langle y \rangle \}) = (\, \{ \langle \iota^i_n x \rangle \} + (-1)^{n+1} \delta^i \{ \langle y_2 \rangle\} \,,\,\{ \langle \kappa^i_{n-1} y \rangle \} \,). \end{equation}
\begin{Lemma} \label{Theorem1} The persistence modules $\mathbb{V}_{\Psi}$ and $\mathbb{V}_{\emph{Tot}}$ are isomorphic. \end{Lemma}
\begin{proof} We will define isomorphisms $\Phi^i : H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_n(\textrm{Tot}^i)$ such that the following diagram commutes. \begin{equation} \label{DiagramFirst} \begin{tikzcd} H_0(C_{\bullet} \cosheaf{F}^1_n) \oplus H_1(C_{\bullet} \cosheaf{F}^1_{n-1}) \arrow{d}{\Phi^1} \arrow{r}{\Psi^1} & \cdots \arrow{r}{\Psi^{N-1}} & H_0(C_{\bullet} \cosheaf{F}^N_n) \oplus H_1(C_{\bullet} \cosheaf{F}^N_{n-1}) \arrow{d}{\Phi^N} \\ H_n(\textrm{Tot}^1) \arrow{r}{\iota^1_{\textrm{Tot}}} & \cdots \arrow{r}{\iota^{N-1}_{\textrm{Tot}}} & H_n(\textrm{Tot}^N) \end{tikzcd} \end{equation}
We will define $\Phi^i:H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_n(\textrm{Tot}^i)$ by first defining \begin{align} \Phi^i_0: H_0(C_{\bullet} \cosheaf{F}^i_n) \to H_n( \textrm{Tot}^i) \label{PhiZero}, \\ \Phi^i_1: H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_n(\textrm{Tot}^i) \label{PhiOne}. \end{align}
Define $\Phi^i_0: H_0(C_{\bullet} \cosheaf{F}^i_n) \to H_n( \textrm{Tot}^i)$ by \begin{equation} \label{Phi0} \Phi^i_0 ( \{ \langle x \rangle \} ) = [ x, 0 ]. \end{equation} To define the map $\Phi^i_1$, we will define $\Phi^i_1$ on the basis $\mathscr{B}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. Recall the fixed basis $\mathscr{B}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$ in Equation (\ref{BasisB}). For each basis $\{ \langle b \rangle \} \in \mathscr{B}^i$, let \begin{equation} \label{Phi1} \Phi^i_1 ( \{ \langle b \rangle \} )= [ (-1)^{n+1} \Gamma^i \{ \langle b \rangle \}, b^* ], \end{equation} where the coset representative $b^*$ and the set map $\Gamma^i$ are defined according to the construction in \S\ref{Construction}. Extend $\Phi^i_1$ linearly to $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$.
We can now define $\Phi^i:H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_n(\textrm{Tot}^i)$ by \begin{equation} \label{Phi01} \Phi^i( \{ \langle x \rangle \}, \{ \langle y \rangle \}) = \Phi^i_0(\{\langle x \rangle \}) + \Phi^i_1 ( \{ \langle y \rangle \} ). \end{equation} One can check that $\Phi^i$ is well-defined and bijective (Appendix \ref{ProofTheorem1}).
To show that Diagram \ref{DiagramFirst} commutes, it suffices to show that the following diagram commutes for each element of $H_0(C_{\bullet} \cosheaf{F}^i_n)$ and $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. \begin{equation} \label{Commute1} \begin{tikzcd} H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \arrow{d}{\Phi^i} \arrow{r}{\Psi^i} & H_0(C_{\bullet} \cosheaf{F}^{i+1}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{i+1}_{n-1}) \arrow{d}{\Phi^{i+1}} \\ H_n(\textrm{Tot}^i) \arrow{r}{\iota^i_{\textrm{Tot}}} & H_n(\textrm{Tot}^{i+1}) \end{tikzcd} \end{equation}
\textbf{Case 1}: Given $\{ \langle x \rangle \} \in H_0(C_{\bullet} \cosheaf{F}^i_n)$, we know from Equations (\ref{InducedTotalHomology}) and (\ref{Phi0}) that \[ \iota^i_{\textrm{Tot}} \circ \Phi^i (\{ \langle x \rangle \},0) = \iota^i_{\textrm{Tot}}([x,0]) = [ \iota^i_n x, 0].\] On the other hand, we know from Equations (\ref{PsiDefinition}) and (\ref{Phi0}) that \[ \Phi^{i+1} \circ \Psi^i(\{ \langle x \rangle \},0) =\Phi^{i+1}(H_0(\phi^i_n)(x),0)= \Phi^{i+1} ( \{ \langle \iota^i_n x \rangle \},0) =[ \iota^i_n x,0].\] Thus, the Diagram \ref{Commute1} commutes for every $\{ \langle x \rangle \} \in H_0(C_{\bullet} \cosheaf{F}^i_n)$.
\textbf{Case 2}: To show that Diagram \ref{Commute1} commutes for every vector in $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$, we will show that Diagram \ref{Commute1} commutes for every basis element $\{ \langle b \rangle \} \in \mathscr{B}^i$ of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. Recall that $\mathscr{B}^i = \mathscr{B}^i_A \cup \mathscr{B}^i_{\ker}$. We consider two cases separately: the first, if $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$, and the second, if $\{ \langle b \rangle \} \in \mathscr{B}^i_A$.
\textbf{Case 2A}: Assume $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$. We know from Equations (\ref{InducedTotalHomology}) and (\ref{Phi1}) that \[ \iota^i_{\textrm{Tot}} \circ \, \Phi^i (0, \{ \langle b \rangle \}) = \iota^i_{\textrm{Tot}}([(-1)^{n+1} \Gamma^i \{ \langle b \rangle \}, b^* ]) = [(-1)^{n+1} \iota^i_n \circ \Gamma^i \{ \langle b \rangle \}, \kappa^i_{n-1}b^*]. \] On the other hand, from Equations (\ref{PsiExplicit}) and (\ref{Phi1}), \begin{align*} \Phi^{i+1} \circ \Psi^i ( 0, \{ \langle b \rangle \}) &= \Phi^{i+1} ((-1)^{n+1} \delta^i \{ \langle b \rangle \} , 0) \\ &= \Phi^{i+1}((-1)^{n+1} \{ \langle -e^{i+1}_n \alpha^{i+1} + \iota^i_n \circ \Gamma^i \{ \langle b \rangle \} \, \rangle \}, 0 )\\ &= [ -(-1)^{n+1} e^{i+1}_n \alpha^{i+1} + (-1)^{n+1}\iota^i_n \circ \Gamma^i \{ \langle b \rangle \},0]. \end{align*} Then, \[ \iota^i_{\textrm{Tot}} \circ \Phi^i (0, \{ \langle b \rangle \} )- \Phi^{i+1} \circ \Psi^i (0, \{ \langle b \rangle \} )= [(-1)^{n+1}e^{i+1}_n \alpha^{i+1}, \kappa^i_{n-1} b^* ]. \] Recall from Equation (\ref{Alpha}) that $\partial \alpha^{i+1}= \kappa^i_{n-1} b^*$. Thus, $\iota^i_{\textrm{Tot}} \circ \Phi^i ( 0, \{ \langle b \rangle \} ) - \Phi^{i+1} \circ \Psi^i (0, \{ \langle b \rangle \})$ is trivial in $H_n(\textrm{Tot}^{i+1})$, and Diagram \ref{Commute1} commutes for $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$.
\textbf{Case 2B}: If $\{ \langle b \rangle \} \in \mathscr{B}^i_A$, then again by Equations (\ref{InducedTotalHomology}) and (\ref{Phi1}), \[ \iota^i_{\textrm{Tot}} \circ \, \Phi^i (0, \{ \langle b \rangle \}) = \iota^i_{\textrm{Tot}}([(-1)^{n+1} \Gamma^i \{ \langle b \rangle \} , b^* ]) = [(-1)^{n+1} \iota^i_n \circ \Gamma^i \{ \langle b \rangle \}, \kappa^i_{n-1} b^* ]. \]
On the other hand, from Equations (\ref{PsiExplicit}) and (\ref{Phi1}), \begin{align*} \Phi^{i+1} \circ \Psi^i ( 0, \{ \langle b \rangle \}) &= \Phi^{i+1}(0, \{ \langle \kappa^i_{n-1} b \rangle \} ) \\ &= \Phi^{i+1}_1 ( \{ \langle \kappa^i_{n-1} b \rangle \} ) \\ &= [(-1)^{n+1} \Gamma^{i+1} \{ \langle \kappa^i_{n-1} b \rangle \}, \kappa^i_{n-1} b^* ] \\ &= [(-1)^{n+1} \iota^i_n \circ \, \Gamma^i \{ \langle b \rangle \}, \kappa^i_{n-1} b^*] \end{align*}
The last equality follows from the construction of $\Gamma^{i+1}$ in Equation (\ref{GammaOnC}). Thus, the diagram commutes for $\{ \langle b \rangle \} \in \mathscr{B}_A^i$.
Thus, Diagram \ref{DiagramFirst} commutes. \end{proof}
Lemmas \ref{Theorem2} and \ref{Theorem1} yield the desired result.
\begin{Theorem} \label{MainThm} The persistence modules $\mathbb{V}_{\Psi}$ and $\mathbb{V}$ are isomorphic. \end{Theorem}
\section{Application: Multiscale Persistence} \label{sec:multi}
There are a number of potential uses for distributed persistent homology computations. Perhaps scalable decentralized computation for large data sets is the most obvious: here the partition of the data set into patches is based on localization via coordinates (this is what appears in \cite{CasasDistributing19,LMParallel15}). However, even among small data sets, there are reasons for distributing the computation along a partition of the point cloud based on scalar fields other than coordinates. Data often comes with additional features, such as density estimates, distance to a landmark, and time dependence, that one may wish to examine. In this section, we apply the distributed computation method from \S\ref{sec:DistributedPH} to point cloud data based on density as a parameter. Section \ref{MultiscaleGeneral} provides a general framework for multiscale persistence, and Example \ref{DataDensity} illustrates the advantage of multiscale analysis when examining a dataset with varying density. In such situations, multiscale persistent homology allows the user to detect significant features that are overlooked by standard persistent homology methods.
\subsection{Multiscale Barcode Annotation} \label{MultiscaleGeneral}
We provide a general framework for computing multiscale persistent homology. Given a point cloud $P$, let $f: P \to \mathbb{R}$ be a (user-chosen)
density estimate. Construct a cover $\mathscr{V}$ of $f(P)$
with nerve $N_{\mathscr{V}}$ a compact interval.
Let $\mathbb{V}$ denote the persistence module \begin{equation} \label{UsualPH} \mathbb{V} : H_n(\mathscr{R}^1) \to H_n(\mathscr{R}^2) \to \dots \to H_n(\mathscr{R}^N) \end{equation} in the usual sense. Let $\textit{barcode}(\mathbb{V})$ denote the barcode of $\mathbb{V}$. If a bar of the barcode represents a feature $\gamma$ that consists of points in $f^{-1}(U)$ for some $U \in \mathscr{V}$, we say that the feature $\gamma$ lives in $U$. Moreover, we can annotate the bar with its corresponding set $U$. The goal of multiscale persistent homology is to annotate the bars of $\textit{barcode}(\mathbb{V})$ with their corresponding sets $U$ of $\mathscr{V}$.
An algorithmic summary of the annotation process is provided, followed by a detailed explanation of each step.
\begin{algorithm} \caption{Annotate $\textit{barcode}(\mathbb{V})$.} \label{TheAlgorithm} \begin{algorithmic}[1]
\State Compute persistence module $\mathbb{V}_*$ using distributed computation.
\State Label vector spaces of $\mathbb{V}_*$.
\State For each persistence module $\mathbb{W}_s$ of $\mathbb{V}_*=\bigoplus\limits_s \mathbb{W}_s$, annotate $\textit{barcode}(\mathbb{W}_s)$.
\State Using the annotations of $\textit{barcode}(\mathbb{V}_*)$, annotate $\textit{barcode}(\mathbb{V})$.
\State Return annotated $\textit{barcode}(\mathbb{V})$. \end{algorithmic} \end{algorithm}
\subsubsection*{Step 1. Compute persistence module $\mathbb{V}_*$}
Let $\mathbb{V}$ denote the persistence module of interest \begin{equation} \label{UsualPH2} \mathbb{V} : H_n(\mathscr{R}^1) \to H_n(\mathscr{R}^2) \to \dots \to H_n(\mathscr{R}^N). \end{equation}
Let $\epsilon_*$ be the upper bound from Lemma \ref{DiscreteCGNBound} such that \[H_n(\mathscr{R}^{\epsilon}) \cong H_0(C_{\bullet} \cosheaf{F}^{\epsilon}_{n}) \oplus H_1(C_{\bullet} \cosheaf{F}^{\epsilon}_{n-1})\]
for all $\epsilon < \epsilon_*$. Let $\mathbb{V}|_L$ denote the sequence of vector spaces and maps of $\mathbb{V}$ up to parameter $\epsilon_L $ such that $\epsilon_L< \epsilon_*$:
\[\mathbb{V} |_L : H_n(\mathscr{R}^1) \to H_n(\mathscr{R}^2) \to \dots \to H_n(\mathscr{R}^L).\] We can compute the persistence module \begin{equation} \label{TruncatedDistributed} \mathbb{V}_{\Psi} : H_0(C_{\bullet} \cosheaf{F}^1_n)\oplus H_1(C_{\bullet} \cosheaf{F}^1_{n-1}) \xrightarrow{\Psi^1} \dots \xrightarrow{\Psi^{L-1}} H_0(C_{\bullet} \cosheaf{F}^L_n)\oplus H_1(C_{\bullet} \cosheaf{F}^L_{n-1}) \end{equation}
isomorphic to $\mathbb{V}|_L$ using the distributed computation method from \S\ref{sec:DistributedPH}.
We will in fact compute a persistence module $\mathbb{V}_*\cong\mathbb{V}_{\Psi}$ that can reveal additional information about the barcode. Recall from \S\ref{sec:CellularCosheaves} that each cosheaf $\cosheaf{F}^{i}_n$ can be decomposed as $\cosheaf{F}^{i}_n \cong \oplus \cosheaf{I}^{i}_n$, where each $\cosheaf{I}^i_n$ is an indecomposable cosheaf over $N_{\mathscr{V}}$. In other words, there exists an isomorphism of cosheaves \begin{equation} \label{IsoCosheafMorphism} D^i_n : \cosheaf{F}^i_n \to \oplus \cosheaf{I}^i_n. \end{equation}
For each parameter $\epsilon_i$, there exists an isomorphism \[g^i: H_0(C_{\bullet} \cosheaf{F}^i_n)\oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_0(C_{\bullet} \oplus \cosheaf{I}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1})\] defined by \[ g^i ( \, \{ \langle x \rangle \}, \{ \langle y \rangle \} \,) = (\, H_0(D^i_n) \{ \langle x \rangle \}, \{ \langle y \rangle \} \, ),\] where $H_0(D^i_n) : H_0(C_{\bullet} \cosheaf{F}^i_n) \to H_0(C_{\bullet} \oplus \cosheaf{I}^i_n)$ is the isomorphism induced by $D^i_n$.
Define the persistence module $\mathbb{V}_*$ by \[\mathbb{V}_* : H_0(C_{\bullet} \oplus \cosheaf{I}^{1}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{1}_{n-1}) \xrightarrow{\Psi^1_*} \cdots \xrightarrow{\Psi^{L-1}_*} H_0(C_{\bullet} \oplus \cosheaf{I}^{L}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{L}_{n-1}), \] where the map $\Psi^i_*$ is defined by $\Psi^i_* = g^{i+1} \circ \Psi^i \circ (g^i)^{-1}$.
By construction, $\mathbb{V}_*$ is isomorphic to the persistence module $\mathbb{V}_{\Psi}$ and hence isomorphic to $\mathbb{V}|_L$. Our mechanism of decomposing the cosheaf $\cosheaf{F}^{i}_n$ into indecomposable cosheaves may seem like a cumbersome step. However, such a decomposition allows us to understand the cosheaf homologies in terms of the indecomposable cosheaves $\cosheaf{I}^i_n$.
\subsubsection*{Step 2. Label the vector spaces of $\mathbb{V}_*$}
For any parameter $\epsilon_i$, recall that \[\mathbb{V}_*^{i} = H_0(C_{\bullet} \oplus \cosheaf{I}^{i}_n) \oplus H_1(C_{\bullet} \cosheaf{F}^{i}_{n-1}).\] By Lemma \ref{IndecomposableCosheaf}, each component of $H_0(C_{\bullet} \oplus \cosheaf{I}^i_n)$ corresponds to an indecomposable cosheaf of the form $\cosheaf{I}^i_{[-]}$. We can thus annotate each component of $H_0(C_{\bullet} \oplus \cosheaf{I}^i_n)$ according to the support of the corresponding indecomposable $\cosheaf{I}^i_{[-]}$.
\begin{enumerate}[label = \textit{Case \arabic*}: , leftmargin=*, align=left] \item If the indecomposable $\cosheaf{I}^i_{[-]}$ is supported on a unique vertex $v \in N_{\mathscr{V}}$, then annotate the component by $U \in \mathscr{V}$, where $U$ is the open set corresponding to the vertex $v$. \item Let $v_l, v_r \in N_{\mathscr{V}}$ denote the leftmost and rightmost vertices, respectively, in the support of $\cosheaf{I}^i_{[-]}$. If $v_l, v_{l+1}, \dots, v_r$ are the vertices of $N_{\mathscr{V}}$ between $v_l$ and $v_r$, then the cosheaf $\cosheaf{I}^i_{[-]}$ represents a feature that lives in all of $U_l, U_{l+1}, \dots, U_r \in \mathscr{V}$. The user can annotate the corresponding component by $[U_l, U_r]$ or $U_l$ or $U_r$, depending on the user's goal. \end{enumerate}
For example, assume that \[ \mathbb{V}_*^{i} = H_0(C_{\bullet} \oplus \cosheaf{I}^{i}_{n}) \oplus H_1(C_{\bullet} \cosheaf{F}^{i}_{n-1})= \mathbb{K} \oplus \mathbb{K} \oplus \mathbb{K} \oplus \mathbb{K},\] where the first three components come from $H_0(C_{\bullet}\oplus \cosheaf{I}^{i}_{n})$ and the last component $\mathbb{K}$ comes from $H_1(C_{\bullet} \cosheaf{F}^{i}_{n-1})$. An example of cosheaf $\oplus \cosheaf{I}^{i}_{n}$ is illustrated in Figure \ref{fig:example_decomposition}.
\begin{figure}
\caption{An example decomposition of a cosheaf $\cosheaf{F}^i_{n} \cong \oplus\cosheaf{I}^i_n$}
\label{fig:example_decomposition}
\end{figure}
Then one can label the components of $H_0(C_{\bullet} \oplus \cosheaf{I}^{i}_{n})$ by $\mathbb{K}_1 \oplus \mathbb{K}_2 \oplus \mathbb{K}_{1,2}$, where each label corresponds to the support of the indecomposable cosheaf in Figure \ref{fig:example_decomposition}. Then, the vector space $\mathbb{V}^{i}_*$ can be labeled as $\mathbb{K}_1 \oplus \mathbb{K}_2 \oplus \mathbb{K}_{1,2} \oplus \mathbb{K}$.
\subsubsection*{Step 3. Annotate the $\textit{barcode}(\mathbb{W}_s)$ for each $\mathbb{W}_s$ of $\mathbb{V}_*=\bigoplus\limits_s \mathbb{W}_s$} Note that $\mathbb{V}_*$ can be expressed naturally as a sum of persistence modules as \[\mathbb{V}_* = \bigoplus\limits_s \mathbb{W}_s.\]
For example, the persistence module \[\mathbb{V}_*: \mathbb{R}^3 \xrightarrow{ \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 1 \end{bmatrix}} \mathbb{R}^3 \xrightarrow{ \begin{bmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{bmatrix}} \mathbb{R}^3\] is the sum of persistence modules \[ \mathbb{W}_1: \mathbb{R} \xrightarrow{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}} \mathbb{R}^2 \xrightarrow{ \begin{bmatrix} 1 & -1 \end{bmatrix}} \mathbb{R} \quad \textrm{and} \quad \mathbb{W}_2: \mathbb{R}^2 \xrightarrow{ \begin{bmatrix} 1 & 1 \end{bmatrix}} \mathbb{R} \xrightarrow{ \begin{bmatrix} 1 \\ 1 \end{bmatrix}} \mathbb{R}^2.\]
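As a sanity check on this example, the two matrices of $\mathbb{V}_*$ are exactly the block sums of the corresponding matrices of $\mathbb{W}_1$ and $\mathbb{W}_2$ (with the $\mathbb{W}_1$ block placed first); a short NumPy/SciPy snippet verifying this, included for illustration only, is given below.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

# The maps of V_* above are exactly the block sums of the maps of W_1 and
# W_2, with the W_1 block placed first.
M1 = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 1]])   # first map of V_*
M2 = np.array([[1, -1, 0], [0, 0, 1], [0, 0, 1]])  # second map of V_*

W1_first, W1_second = np.array([[1], [1]]), np.array([[1, -1]])
W2_first, W2_second = np.array([[1, 1]]), np.array([[1], [1]])

assert (block_diag(W1_first, W2_first) == M1).all()
assert (block_diag(W1_second, W2_second) == M2).all()
\end{verbatim}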
Moreover, $\textit{barcode}(\mathbb{V}_*)$ is the collection of barcodes $\textit{barcode}(\mathbb{W}_s)$. For each $\mathbb{W}_s$, compute $\textit{barcode}(\mathbb{W}_s)$. Let $\overline{b}$ be a bar of $\textit{barcode}(\mathbb{W}_s)$ born at parameter $\epsilon_i$. Consider the annotation of the components of $\mathbb{W}_s^i$ from Step 2. There are two cases to consider.
\begin{enumerate}[label = \textit{Case \arabic*}: , leftmargin=*, align=left] \item All components of $\mathbb{W}^i_s$ have been annotated by a unique set $U \in \mathscr{V}$. Bar $\overline{b}$ then represents some linear combination of features in $U$, so annotate $\overline{b}$ with $U$. \item The components of $\mathbb{W}^i_s$ have been annotated by two or more sets in $\mathscr{V}$, say $U_j$ and $U_k$. The user can decide to either not annotate the bar at all, to annotate the bar by $U_j$, or to annotate the bar by $U_k$, depending on the question of interest. \end{enumerate}
The result of Step 3 is an annotation of $\textit{barcode}(\mathbb{V}_*)$.
\subsubsection*{Step 4. Annotate $\textit{barcode}(\mathbb{V}$)} We can use the annotations of $\textit{barcode}(\mathbb{V}_*)$ to annotate $\textit{barcode}(\mathbb{V})$. Note that $\textit{barcode}(\mathbb{V}_*)$ can be obtained from $\textit{barcode}(\mathbb{V})$ by truncating $\textit{barcode}(\mathbb{V})$ at parameter $\epsilon_L$. Let $[b,d]$ be a bar of $\textit{barcode}(\mathbb{V}_*)$ that has been annotated by a set $U$ in Step 3. There are two cases to consider. \begin{enumerate}[label = \textit{Case \arabic*}: , leftmargin=*, align=left] \item If $d< \epsilon_L$, then annotate the bar $[b,d]$ of $\textit{barcode}(\mathbb{V})$ with $U$. \item If $d = \epsilon_L$ and $[b,d]$ is the unique bar of $\textit{barcode}(\mathbb{V}_*)$ with birth time $b$, then there exists a unique bar $[b, d']$ in $\textit{barcode}(\mathbb{V})$ with birth time $b$. Annotate the bar $[b,d']$ of $\textit{barcode}(\mathbb{V})$ with $U$. \end{enumerate}
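A schematic Python sketch of this transfer step is given below; it is an illustration of the two cases only (bars are represented as $(b,d)$ pairs, and the helper name is ours), not the code used for the computations in \S\ref{DataDensity}.
\begin{verbatim}
# Illustration only: transfer annotations from barcode(V_*) to barcode(V).
# `annotated_trunc` maps bars of barcode(V_*) to the sets U assigned in
# Step 3, `barcode_V` is a list of bars of barcode(V), and `eps_L` is the
# truncation parameter epsilon_L.
def transfer_annotations(barcode_V, annotated_trunc, eps_L):
    annotations = {}
    for (b, d), U in annotated_trunc.items():
        if d < eps_L:
            # Case 1: the bar also appears in barcode(V); copy its annotation.
            annotations[(b, d)] = U
        else:
            # Case 2: d == eps_L; if [b, d] is the unique truncated bar born
            # at b, annotate the unique bar of barcode(V) born at b.
            trunc_same_birth = [bar for bar in annotated_trunc if bar[0] == b]
            full_same_birth = [bar for bar in barcode_V if bar[0] == b]
            if len(trunc_same_birth) == 1 and len(full_same_birth) == 1:
                annotations[full_same_birth[0]] = U
    return annotations
\end{verbatim}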
The final result of the algorithm is an annotation of $\textit{barcode}(\mathbb{V})$. One can use this annotated barcode to perform finer data analysis.
\subsection{Example: Multiscale Persistence} \label{DataDensity}
Consider a situation where the size of a feature depends on the density of its constituent points, as illustrated in Figure \ref{fig:PointCloudExample}. Figure \ref{TotalBarcode} illustrates the corresponding barcode, which suggests that there is one significant feature. Standard persistent homology fails to detect the small but densely sampled features. Multiscale persistent homology can select the bars that correspond to small but densely sampled features and annotate them as significant.
\begin{figure}
\caption{A point cloud with varying density}
\label{fig:PointCloudExample}
\end{figure} \begin{figure}
\caption{Barcode from standard persistent homology in dimension $1$}
\label{TotalBarcode}
\end{figure}
Let $P$ denote the point cloud in Figure \ref{fig:PointCloudExample}, and let $f : P \to \mathbb{R}$ be the function mapping each point to its estimated density value. In our example, $f(p)$ represents the number of points in a radius $r$-ball centered at $p$.
We chose a covering $\mathscr{V}=\{U_s, U_d \}$ of $f(P)$ as follows. We first plotted a histogram of density values, as illustrated in Figure \ref{histogramPlot}, and decided to cover $f(P)$ with two sets, $U_s$ and $U_d$, where $U_s = (0,18)$ and $U_d=(8,26)$. We will refer to points in $f^{-1}(U_s)$ as the sparse points and to points in $f^{-1}(U_d)$ as the dense points. Figures \ref{fig:SparsePoints} and \ref{fig:DensePoints} illustrate the sparse and dense points. \begin{figure}
\caption{Histogram plot of estimated density values.}
\label{histogramPlot}
\end{figure}
\begin{figure}
\caption{Sparse points}
\label{fig:SparsePoints}
\caption{Dense points}
\label{fig:DensePoints}
\caption{Sparse and dense points.}
\label{fig:SparseDensePoints}
\end{figure}
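A minimal NumPy sketch of this preprocessing is given below; it is an illustration only. The radius $r$ is a user choice, the cutoffs are the cover $U_s=(0,18)$ and $U_d=(8,26)$ chosen above, and the synthetic point cloud in the usage lines is a stand-in for the data of Figure \ref{fig:PointCloudExample}.
\begin{verbatim}
import numpy as np

def density_estimate(P, r):
    """f(p) = number of points of P within distance r of p."""
    diff = P[:, None, :] - P[None, :, :]      # pairwise differences
    dist = np.linalg.norm(diff, axis=-1)      # pairwise distances
    return (dist < r).sum(axis=1)

def split_by_cover(P, f, U_s=(0, 18), U_d=(8, 26)):
    """Return the preimages f^{-1}(U_s) (sparse) and f^{-1}(U_d) (dense)."""
    sparse = P[(f > U_s[0]) & (f < U_s[1])]
    dense = P[(f > U_d[0]) & (f < U_d[1])]
    return sparse, dense

# Usage on a synthetic stand-in point cloud:
rng = np.random.default_rng(0)
P = rng.uniform(size=(200, 2))
f = density_estimate(P, r=0.1)
sparse_pts, dense_pts = split_by_cover(P, f)
\end{verbatim}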
We now follow Algorithm \ref{TheAlgorithm}. Let \[ \mathbb{V} : H_1(\mathscr{R}^1) \to \dots \to H_1(\mathscr{R}^N) \] be the persistence module obtained from the point cloud $P$. For this example, the maximum parameter is $\epsilon_N = 1.6$. Let $\epsilon_*$ be the upper bound of the parameter $\epsilon$ from Lemma \ref{DiscreteCGNBound} for which the isomorphism \[H_n(\mathscr{R}^{\epsilon}) \cong H_0(C_{\bullet} \cosheaf{F}^{\epsilon}_n) \oplus H_1( C_{\bullet} \cosheaf{F}^{\epsilon}_{n-1}) \] holds. For this example, $\epsilon_*$ is $0.0719$. Compute the persistence module \begin{equation} \label{DistributedTruncated} \mathbb{V}_* : H_0(C_{\bullet}\oplus \cosheaf{I}^{1}_1) \oplus H_1(C_{\bullet} \cosheaf{F}^{1}_{0}) \to \dots \to H_0(C_{\bullet} \oplus \cosheaf{I}^{\epsilon_*}_1) \oplus H_1(C_{\bullet}\cosheaf{F}^{\epsilon_*}_0) \end{equation} following Step 1 of Algorithm \ref{TheAlgorithm}.
Step 2 of Algorithm \ref{TheAlgorithm} labels the components of vector space $H_0(C_{\bullet} \oplus \cosheaf{I}^i_1)$ according to the support of the indecomposable cosheaves $\cosheaf{I}^i_{[-]}$.
Step 3 of Algorithm \ref{TheAlgorithm} results in an annotated version of $\textit{barcode}(\mathbb{V}_*)$, illustrated in Figure \ref{TruncatedBarcodeAnnotated}. The top two gray bars have been annotated by $U_s$. The two bars represent features in the sparse points. The remaining black bars have been annotated by $U_d$, and they represent features in the dense points.
\begin{figure}
\caption{Annotated $\textit{barcode}(\mathbb{V}_*)$. The top two gray bars have been annotated by $U_s$, and they represent features in the sparse points. The remaining black bars have been annotated by $U_d$, and they represent features in the dense points.}
\label{TruncatedBarcodeAnnotated}
\end{figure} Step 4 of Algorithm \ref{TheAlgorithm} allows us to transfer the annotation of $\textit{barcode}(\mathbb{V}_*)$ to $\textit{barcode}(\mathbb{V})$, resulting in an annotated version of $\textit{barcode}(\mathbb{V})$ illustrated in Figure \ref{TotalBarcodeAnnotated}. The two bars enclosed by the gray box are annotated by $U_s$, and the bars enclosed by the black box are annotated by $U_d$. \begin{figure}
\caption{Annotated $\textit{barcode}(\mathbb{V})$. The two bars enclosed by the gray box are annotated by $U_s$, and the bars enclosed by the black box are annotated by $U_d$.}
\label{TotalBarcodeAnnotated}
\end{figure}
The goal is to determine the small but significant features that consist of the denser points. Thus, we focus on the bars of Figure \ref{TotalBarcodeAnnotated} that have been annotated by $U_d$. By restricting our attention to only the bars that represent features in $U_d$, we are able to determine the significant features built among the dense points. By examining the bars annotated by $U_d$, one can conclude that there are eight significant bars.
Lastly, we return to $\textit{barcode}(\mathbb{V})$ and indicate the significant features. We then obtain the barcode in Figure \ref{AnnotatedBarcode}, where the black bars indicate significant features and the gray bars indicate noise. Note that we have one long black bar, which is deemed significant because of its length. We have eight additional shorter significant bars which were identified via Algorithm \ref{TheAlgorithm}.
\begin{figure}
\caption{Final annotation of $\textit{barcode}(\mathbb{V})$}
\label{AnnotatedBarcode}
\end{figure}
The Julia package {\em Eirene.jl} \cite{HenselmanGhrist16} for persistent homology computation identifies persistent homology generators in the point cloud. Using this, we identified the points of $P$ that constitute each significant feature. The eight significant short bars identified above indeed correspond to the eight small but densely sampled features in Figure \ref{fig:PointCloudExample}.
\begin{appendices}
\section{Proof of Lemma \ref{DiscreteCGNBound}} \label{A:DiscreteCGNBound}
\discreteCGNlemma*
\begin{proof} We first specify $\epsilon_*$. In what follows, use the convention that the minimum over an empty set is $\infty$.
Each $p \in P$ lies in either one or two elements of the cover $\mathscr{V}$. If $p$ lies in a unique $U \in \mathscr{V}$, then set \begin{equation} \label{KpEq1} K_p = \min_{\{ q \notin U \}} d(p,q). \end{equation} If $p$ lies in two sets of the cover, $p\in U\cap W$, then first let \begin{equation} \label{kp1} K_p' =\min_{ \{ q \notin U \cup W \} } d(p,q), \end{equation} and let \begin{equation} \label{kp2}
K_p'' = \min_{ \substack{ \{ q, q' | d(p,q) < K_p', \quad q\notin U, \\
\qquad d(p,q')<K_p', \quad q' \notin W \} } }
d(q, q'). \end{equation} Then set \[ K_p = \min \{ K_p', K_p'' \}. \] Let $\epsilon_* = \min_{p \in P} K_p$. Assume $\epsilon < \epsilon_*$. We assert Equation (\ref{DistributedComputationK}) by showing that the Rips system covers $\mathscr{R}^{\epsilon}$. Let $\omega = (v_0, \dots, v_l)$ be a simplex of $\mathscr{R}^{\epsilon}$ with vertices as listed. The pairwise distances thus satisfy $d(v_i, v_j) < \epsilon$.
If there exists a vertex of $\omega$, say $v_0$, that belongs to a unique $U\in\mathscr{V}$, then by construction, $\epsilon < \epsilon_* \leq K_{v_0} = \min_{\{ q \notin U \}} d(v_0, q)$ from Equation (\ref{KpEq1}). Thus, for any other vertex $v$ of $\omega$, we have $d(v_0,v) < \epsilon < K_{v_0}$, and hence all of $\omega$ is covered by $U$.
Otherwise, every vertex $v$ of $\omega$ is covered by two sets in $\mathscr{V}$. Without loss of generality, assume that $v_0 \in U \cap W$. Note that for any other vertex $v$ of $\omega$, we have \begin{equation} \label{ineq} d(v_0, v) < \epsilon < K_{v_0} \leq K_{v_0}', \end{equation} where $K_{v_0}'$ is given by Equation (\ref{kp1}). So $v\in U\cup W$ for every $v \in \omega$. In fact, we can show that all of $\omega$ lies in $U$ or in $W$. Suppose, to the contrary, that there exist distinct vertices, say $v_1$ and $v_2$, such that $v_1 \notin U$ and $v_2 \notin W$. By construction, $d(v_0, v_1) < \epsilon < K_{v_0}'$ and $d(v_0, v_2) < \epsilon < K_{v_0}'$. By the definition of $K_{v_0}''$ in Equation (\ref{kp2}), the pair $(v_1, v_2)$ is admissible, so $K_{v_0}'' \leq d(v_1, v_2)$. However, this contradicts the fact that $d(v_1, v_2) < \epsilon < K_{v_0} \leq K_{v_0}''$. Hence, $\omega$ is covered by some subcomplex $\mathscr{R}^{\epsilon}_{\sigma}$. Therefore, the Rips system covers $\mathscr{R}^{\epsilon}$, and Lemma \ref{DiscreteCGNBound} follows from the proof of the analogous result in \cite{CGNDiscrete13}. \end{proof}
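The bound $\epsilon_*$ is directly computable from the point cloud and the cover. The following Python sketch, included for illustration only, evaluates Equations (\ref{KpEq1})--(\ref{kp2}); here \texttt{membership[i]} is the set of cover elements containing the $i$-th point, \texttt{D} is the matrix of pairwise distances, and the convention that a minimum over an empty set is $\infty$ is handled by the \texttt{default} argument of \texttt{min}.
\begin{verbatim}
import numpy as np

def epsilon_star(D, membership):
    """Illustration only: epsilon_* = min_p K_p, with K_p as in the proof."""
    n = D.shape[0]
    INF = np.inf
    eps = INF
    for p in range(n):
        sets_p = membership[p]
        if len(sets_p) == 1:
            # p lies in a unique cover element U: Equation (KpEq1).
            U = next(iter(sets_p))
            outside = [q for q in range(n) if U not in membership[q]]
            K_p = min((D[p, q] for q in outside), default=INF)
        else:
            # p lies in exactly two cover elements U and W.
            U, W = sets_p
            out_both = [q for q in range(n)
                        if U not in membership[q] and W not in membership[q]]
            K_p1 = min((D[p, q] for q in out_both), default=INF)   # (kp1)
            not_U = [q for q in range(n)
                     if U not in membership[q] and D[p, q] < K_p1]
            not_W = [q for q in range(n)
                     if W not in membership[q] and D[p, q] < K_p1]
            K_p2 = min((D[q, qq] for q in not_U for qq in not_W),
                       default=INF)                                 # (kp2)
            K_p = min(K_p1, K_p2)
        eps = min(eps, K_p)
    return eps
\end{verbatim}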
\section{Independence of $\alpha^{i+1}$} \label{DeltaWellDefined}
\begin{Lemma} The construction of $\delta^i$ on a basis element $\{ \langle b \rangle \} \in \mathscr{B}^i_{\ker}$ in Equation (\ref{deltai_def_basis}) does not depend on the choice of $\alpha^{i+1}$. \end{Lemma} \begin{proof} Let $\alpha_1, \alpha_2 \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^{i+1}_e)$ be two different choices of $\alpha$ that satisfy Equation (\ref{Alpha}): $\partial \alpha = \kappa^i_{n-1} b^*$. Note that $\partial (\alpha_1-\alpha_2) =\kappa^i_{n-1} b^*-\kappa^i_{n-1} b^*=0$. So $\langle \alpha_1 - \alpha_2 \rangle $ represents an element in $\bigoplus\limits_{e \in N_{\mathscr{V}}} H_n(\mathscr{R}^{i+1}_e)$. Then, $ \langle e^{i+1}_n(\alpha_1-\alpha_2) \rangle = \partial^{i+1}_n \langle \alpha_1-\alpha_2 \rangle \in \im \partial^{i+1}_n$. So $\{ \langle e^{i+1}_n (\alpha_1 -\alpha_2) \rangle \}$ is trivial in $H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$. Hence, $ \{ \langle- e^{i+1}_n \alpha_1+ \iota^i_n \circ \beta^i \rangle \}= \{ \langle -e^{i+1}_n \alpha_2+\iota^i_n \circ \beta^i \rangle \}$ in $H_0(C_{\bullet} \cosheaf{F}^{i+1}_n)$. Thus, the map $\delta^i$ does not depend on the choice of $\alpha$. \end{proof}
\section{Obtaining basis $\mathscr{C}^i_{\im}$ from basis $\mathscr{B}^{i-1}_A$ of $A^{i-1}$} \label{LinearIndependence}
\begin{Lemma} Let $\mathscr{B}^{i-1}= \mathscr{B}^{i-1}_A \cup \mathscr{B}^{i-1}_{\ker}$ be a basis of $H_1(C_{\bullet} \cosheaf{F}^{i-1}_{n-1}) =A^{i-1} \oplus \ker H_1(\phi^{i-1}_{n-1})$. Let $\mathscr{B}^{i-1}_A = \{ \, \{ \langle b_1 \rangle \}, \dots, \{ \langle b_t \rangle \} \, \}$ be the basis of $A^{i-1}$. Then, \[ \{ \langle \kappa^{i-1}_{n-1} b_1 \rangle \} , \dots, \{ \langle \kappa^{i-1}_{n-1} b_t \rangle \} \] are linearly independent in $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. \end{Lemma}
\begin{proof} Assume the contrary, that \[ c_1 \{ \langle \kappa^{i-1}_{n-1} b_1 \rangle \} + \cdots + c_t \{ \langle \kappa^{i-1}_{n-1} b_t \rangle \} = \{ \langle 0 \rangle \} \] for some $c_1, \dots, c_t$ that are not all zero. Because $N_{\mathscr{V}}$ is one-dimensional, $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$ is a subspace of $\bigoplus\limits_{e \in N_{\mathscr{V}}} H_{n-1}(\mathscr{R}^i_e)$, so this implies that \[ c_1 \langle \kappa^{i-1}_{n-1} b_1 \rangle + \cdots + c_t \langle \kappa^{i-1}_{n-1} b_t \rangle = \langle 0 \rangle. \] Then, $\{ \langle c_1 b_1 + \cdots + c_t b_t \rangle \} \in \ker H_1(\phi^{i-1}_{n-1})$. Note that $\{ \langle c_1 b_1 + \cdots + c_t b_t \rangle \} = c_1 \{ \langle b_1 \rangle \} + \cdots + c_t \{ \langle b_t \rangle \} \in A^{i-1}$ as well, since each $\{ \langle b_j \rangle \}$ belongs to the basis $\mathscr{B}^{i-1}_A$ of $A^{i-1}$. Since the $\{ \langle b_j \rangle \}$ are linearly independent and the $c_j$ are not all zero, this element is nonzero. This contradicts the fact that $H_1(C_{\bullet} \cosheaf{F}^{i-1}_{n-1})$ is the direct sum of $\ker H_1(\phi^{i-1}_{n-1})$ and $A^{i-1}$. Thus, $\{ \langle \kappa^{i-1}_{n-1} b_1 \rangle \} , \dots, \{ \langle \kappa^{i-1}_{n-1} b_t \rangle \}$ are linearly independent. \end{proof}
\section{Details of proof of Lemma \ref{Theorem2}} \label{ProofTheorem2}
\begin{Lemma} The map $\Phi^i_{\textrm{Tot}} : H_n(\textrm{Tot}^i) \to H_n(\mathscr{R}^i) $ is well-defined and bijective. \end{Lemma} For clarity, we omit the superscript $i$ indicating the parameter $\epsilon_i$. \begin{proof} We first show that $\Phi_{\textrm{Tot}}: H_n(\textrm{Tot}) \to H_n(\mathscr{R})$ is a well-defined map. Assume that $[x_1, y_1] = [x_2,y_2]$ in $H_n(\textrm{Tot})$. One can build a commutative diagram whose rows are short exact sequences \begin{equation} \label{DoubleComplexSection} \begin{tikzcd} 0 &\arrow{l} C_{n}(\mathscr{R}) & \arrow{l}{j_{n}} \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n}(\mathscr{R}_v) & \arrow{l}{e_{n}} \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n}(\mathscr{R}_e) & \arrow{l} 0
\end{tikzcd} \end{equation} and whose vertical maps are the boundary operators $\partial_n$. Since $[x_1, y_1] =[x_2, y_2]$ in $H_n(\textrm{Tot})$, there exist $p_{n+1} \in \bigoplus \limits_{v \in N_{\mathscr{V}}} C_{n+1} (\mathscr{R}_v)$ and $q_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}_e)$ such that $y_2 - y_1 = \partial q_n$ and $x_2-x_1 = \partial p_{n+1} +(-1)^{n+1} e_n (q_n)$. Then, \begin{align*} j_n(x_2-x_1) =& j_n( \partial p_{n+1} + (-1)^{n+1}e_n (q_n)) \\ =& j_n (\partial p_{n+1}) \\ =& \partial (j_{n+1}(p_{n+1})). \end{align*} The second equality follows from the exactness of the rows of Diagram \ref{DoubleComplexSection}. Thus, $[j_n(x_2)] = [j_n(x_1)]$, and the map $\Phi_{\textrm{Tot}}$ is well-defined.
We now show that $\Phi_{\textrm{Tot}}$ is surjective. Let $[\gamma] \in H_n(\mathscr{R})$. Since the rows of Diagram \ref{DoubleComplexSection} are exact, there exists $x_n \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_n(\mathscr{R}_v)$ such that $\gamma = j_n (x_n)$. Then, \[ j_{n-1} \circ \partial x_n = \partial \circ j_n ( x_n) = \partial \gamma = 0.\] So $\partial x_n \in \ker j_{n-1}$. By exactness, there exists $y_{n-1} \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_{n-1}(\mathscr{R}_e)$ such that $\partial x_n = e_{n-1} ( y_{n-1})$. Moreover, \[ e_{n-2} \circ \partial y_{n-1} = \partial \circ e_{n-1} y_{n-1} = \partial \partial x_n = 0.\] Since $e_{n-2}$ is injective, we know that $\partial y_{n-1}=0$. Then, $[x_n, (-1)^{n-1} y_{n-1}] \in H_n(\textrm{Tot})$, and $\Phi_{\textrm{Tot}}[x_n, (-1)^{n-1} y_{n-1}] = [\gamma]$. Thus, $\Phi_{\textrm{Tot}}$ is surjective.
Lastly, we show that $\Phi_{\textrm{Tot}}$ is injective. Assume that $\Phi_{\textrm{Tot}}([x,y])=[j_n(x)] = 0$. Then, there exists $p_{n+1} \in C_{n+1}(\mathscr{R})$ such that $\partial p_{n+1} = j_n(x)$. Since $j_n$ and $j_{n+1}$ are surjective, there exists $p_{n+1}' \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n+1}(\mathscr{R}_v)$ such that $p_{n+1} =j_{n+1} (p_{n+1}')$. Then, \begin{align*} j_n(\partial p'_{n+1}-x) &= j_n \circ \partial p'_{n+1} - j_n(x) \\ &= \partial \circ j_{n+1}(p'_{n+1}) -j_n(x) \\ &= \partial p_{n+1} - j_n(x) = 0.
\end{align*}
Thus, $\partial p'_{n+1} - x \in \ker j_n$. From the exactness of rows of Diagram \ref{DoubleComplexSection}, there exists $q_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}_e)$ such that $\partial p'_{n+1} - x = e_n( q_n)$.
Note that $\partial ( \partial p'_{n+1} - x) = \partial e_n (q_n)$, and since $\partial \partial p'_{n+1}=0$, this equals $-\partial x = -(-1)^{n+1}e_{n-1}(y)$ by the definition of $H_n(\textrm{Tot})$. Then, $ e_{n-1} (\partial q_n)=\partial e_n( q_n) = - \partial x = -(-1)^{n+1}e_{n-1}(y)$. Since $e_{n-1}$ is injective, this implies that $\partial q_n = -(-1)^{n+1} y$. Let $q_n' = - (-1)^{n+1} q_n$, so that $\partial q_n' = y$.
So far, we found $p'_{n+1} \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n+1}(\mathscr{R}_v)$ and $q'_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}_e)$ such that $\partial q'_n = y$ and \[ x= \partial p'_{n+1} -e_n(q_n) = \partial p'_{n+1} - e_n ( - (-1)^{n+1} q'_n) = \partial p'_{n+1} +(-1)^{n+1} e_n q'_n.\] Thus, $[x,y]=0$ in $H_n(\textrm{Tot})$, and $\Phi_{\textrm{Tot}}$ is injective. \end{proof}
\section{Details of the proof of Lemma \ref{Theorem1}} \label{ProofTheorem1}
\begin{Lemma} The map $\Phi^i : H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1}) \to H_n(\textrm{Tot}^i)$ defined in Equation (\ref{Phi01}) is well-defined and bijective. \end{Lemma}
\begin{proof} We first show that $\Phi^i$ is well-defined. Assume that $( \{ \langle x \rangle \}, \{ \langle y \rangle \} ) = (\{ \langle x' \rangle \} , \{ \langle y' \rangle \})$ in $H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$, i.e., $\{ \langle x \rangle \} = \{ \langle x' \rangle \}$ in $H_0(C_{\bullet} \cosheaf{F}^i_n)$ and $\{ \langle y \rangle \} = \{ \langle y' \rangle \}$ in $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$.
Note that \[ \Phi^i_0 ( \{ \langle x \rangle \}) - \Phi^i_0 ( \{ \langle x' \rangle \}) = [ x- x', 0].\] Since $\{ \langle x \rangle \} = \{ \langle x' \rangle \}$ in $H_0(C_{\bullet} \cosheaf{F}^i_n)$, there exists $p_{n+1} \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n+1}(\mathscr{R}^i_v)$ such that $\partial p_{n+1}= x- x'$. Thus, $[x-x',0]$ is trivial in $H_n(\textrm{Tot})$, and $\Phi^i_0$ is a well-defined map.
By construction, $\Phi^i_1( \{ \langle y \rangle \}) = \Phi^i_1( \{ \langle y' \rangle \} )$. Thus, $\Phi^i$ is a well-defined map.
We now show that $\Phi^i$ is surjective. Given $[x, y] \in H_n(\textrm{Tot}^i)$, we know that $\partial y=0$, so $\{ \langle y \rangle \}$ is an element of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. If $\mathscr{B}=\{ \, \{ \langle b_1 \rangle \}, \dots, \{ \langle b_t \rangle \} \, \}$ is a basis of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$, and $b_1^*, \dots, b_t^*$ are the coset representatives of the basis, then $\{ \langle y \rangle \}$ can be written as \[ \{ \langle y \rangle \} = c_1 \{ \langle b_1^* \rangle \} + \cdots + c_t \{ \langle b_t^* \rangle \} \] for some $c_1, \dots, c_t$. That is, there exists $q_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n (\mathscr{R}^i_e)$ such that \begin{equation} \label{Rewrite} c_1 b_1^*+ \cdots + c_t b_t^* - y = \partial q_n. \end{equation}
Recall from Equation (\ref{KeyEq}) that \begin{equation} \label{BetaB} \partial \Gamma^i \{ \langle b \rangle \} = e^i_{n-1} (b^*) \end{equation} for every $\{ \langle b \rangle \} \in \mathscr{B}$. Let \begin{equation} \label{RnDef}
r_n = x-(-1)^{n+1}( c_1 \Gamma^i \{ \langle b_1^* \rangle \} + \cdots + c_t \Gamma^i \{ \langle b_t^* \rangle \})+(-1)^{n+1}e^i_n (q_n).
\end{equation}
We know that $\partial x = (-1)^{n-1} e^i_{n-1}(y)$, and, by commutativity of Diagram \ref{SS0}, we have $\partial e^i_n(q_n) = e^i_{n-1} (\partial q_n)$. Thus, it follows that \begin{align*} \partial r_n &= \partial x - (-1)^{n+1} \partial ( c_1 \Gamma^i \{ \langle b_1^* \rangle \} + \cdots + c_t \Gamma^i \{ \langle b_t^* \rangle \}) +(-1)^{n+1} \partial e^i_n (q_n) \\ &= (-1)^{n-1} e^i_{n-1}(y) - (-1)^{n+1} e^i_{n-1} (c_1 b_1^* + \cdots + c_t b_t^*) + (-1)^{n+1} e^i_{n-1} ( \partial q_n) \\ &=0. \end{align*} The second equality follows from Equation (\ref{BetaB}) and Diagram \ref{SS0}. The third equality follows from Equation (\ref{Rewrite}). Thus, $\{ \langle r_n \rangle \}$ represents an element of $H_0(C_{\bullet} \cosheaf{F}^i_n)$. Then, \begin{align*} \Phi^i( \{ \langle r_n \rangle \} ,\{ \langle y \rangle \}) &= \Phi^i_0 ( \{ \langle r_n \rangle \} ) + \Phi^i_1 ( \{ \langle y \rangle \} )\\ &= [r_n +(-1)^{n+1}( c_1 \Gamma^i \{ \langle b_1^* \rangle \} + \cdots + c_t \Gamma^i \{ \langle b_t^* \rangle \}), c_1 b_1^*+ \dots + c_t b_t^* ] \\ &=[x+(-1)^{n+1} e^i_n(q_n), y+ \partial q_n] \\ &=[x,y] + [(-1)^{n+1}e^i_n(q_n), \partial q_n] \\ &=[x,y] \end{align*} The third equality follows from Equations (\ref{Rewrite}) and (\ref{RnDef}). Thus, $\Phi^i$ is surjective.
Lastly, we show that $\Phi^i$ is injective. Let $( \{ \langle x \rangle \}, \{ \langle y \rangle \} ) \in H_0(C_{\bullet} \cosheaf{F}^i_n) \oplus H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. If $\mathscr{B}=\{ \, \{ \langle b_1 \rangle \}, \dots, \{ \langle b_t \rangle \} \, \}$ is a basis of $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$, and $b_1^*, \dots, b_t^*$ are the coset representatives of the basis, then $\{ \langle y \rangle \}$ can be written as \[ \{ \langle y \rangle \} = c_1 \{ \langle b_1^* \rangle \} + \cdots + c_t \{ \langle b_t^* \rangle \} \] for some $c_1, \dots, c_t$. Assume that \[\Phi^i (\{ \langle x \rangle \}, \{ \langle y \rangle \} ) = [x+(-1)^{n+1}( c_1 \Gamma^i \{ \langle b_1^* \rangle \} + \cdots + c_t \Gamma^i \{ \langle b_t^* \rangle \}), c_1 b_1^*+ \dots + c_t b_t^*]=0.\] Then, there exists $q_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_e)$ and $p_{n+1} \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n+1}(\mathscr{R}^i_v)$ such that \begin{equation} \label{trivial1} \partial q_n =c_1 b_1^*+ \dots + c_t b_t^*, \end{equation} \begin{equation} \label{trivial2} \partial p_{n+1} + (-1)^{n-1}e^i_n (q_n) =x+(-1)^{n+1}( c_1 \Gamma^i \{ \langle b_1^* \rangle \} + \cdots + c_t \Gamma^i \{ \langle b_t^* \rangle \}).\end{equation}
From Equation (\ref{trivial1}), we know $c_1 \{ \langle b_1^* \rangle \} + \dots + c_t \{ \langle b_t^* \rangle \} = \{ \langle y \rangle \}$ is trivial in $H_1(C_{\bullet} \cosheaf{F}^i_{n-1})$. Thus, $\Phi^i(\{ \langle x \rangle \} , \{ \langle y \rangle \}) = \Phi^i ( \{ \langle x \rangle \},0) = [x,0]$.
Since $[x,0]$ is trivial in $H_n(\textrm{Tot}^i)$ by assumption, there exist $a_n \in \bigoplus\limits_{e \in N_{\mathscr{V}}} C_n(\mathscr{R}^i_e)$ and $b_{n+1} \in \bigoplus\limits_{v \in N_{\mathscr{V}}} C_{n+1}(\mathscr{R}^i_v)$ such that \[ \partial a_n =0, \] \[ \partial b_{n+1} + (-1)^{n-1}e^i_n a_n =x. \] The above two equations imply that $\{ \langle x \rangle \}$ is trivial in $H_0(C_{\bullet} \cosheaf{F}^i_n)$ as well. Thus, $\Phi^i$ is injective.
\end{proof}
\end{appendices}
\end{document}
\begin{document}
\title{When does stabilizability imply the existence of infinite horizon optimal control in nonlinear systems?\thanks{
Submitted to the editors DATE.
\funding{The work was supported, in part, by JSPS KAKENHI Grant Numbers
JP26289128 and Nanzan University Pache Research Subsidy I-A-2 for the 2020 academic year.}}}
\begin{tcbverbatimwrite}{tmp_\jobname_abstract.tex} \begin{abstract} The paper addresses an existence problem for infinite horizon optimal control when the system under control is exponentially stabilizable or stable. Classes of nonlinear control systems for which infinite horizon optimal controls exist are identified in terms of stability, stabilizability, detectability and growth conditions. The result then applies to estimate the existence region of stable manifolds in the associated Hamiltonian systems. Applications of the results also include the analysis for turnpike property in nonlinear finite horizon optimal control problems by a geometric approach. \end{abstract}
\begin{keywords}
Optimal control, Stability, Stabilizability, Detectability, Stable manifold, Turnpike. \end{keywords}
\begin{AMS}
49K15, 49J15, 93D20, 93C10 \end{AMS} \end{tcbverbatimwrite} \input{tmp_\jobname_abstract.tex}
\section{Introduction} Optimal control problems (OCPs) are of significance from mathematical and engineering viewpoints, as applications and extensions of Calculus of Variations as well as design tools for systems describing engineering processes. There are two approaches to OCPs, one from \clred{the} sufficiency of optimality (Dynamic Programming \cite{Bellman:57:DP} developed by Bellman) and the other from necessity (Maximum Principle \cite{Pontryagin:62:MTOP} developed by Pontryagin). We refer to \cite{Athans:66:OC,Bryson:75:AOC,Cesari:83:OTA,Liberzon:12:CVOCT} for the theory of OCPs and to \cite{Bryson:96:cssmag} for a survey on OCPs from mathematical and engineering viewpoints. \clred{OCPs for infinite horizon are of special interest in engineering, such as linear quadratic regulator problems, since the stability issues are inherently involved in such problems.}
In the Dynamic Programming approach for infinite horizon OCPs, one derives a nonlinear partial differential equation, called {\em Hamilton-Jacobi-Bellman equation} (HJBE), the solution of which gives an optimal control as a feedback law. There is \clred{a} large amount of research on \clred{the} solution method for HJBEs, for which we refer to \cite{Al'brekht:61:jamm,Lukes:69:sicon,Navasca2007251,Aguilar2014527} for the Taylor expansion method and to \cite{Kreisselmeier:94:ieee-ac,Beard:97:automatica,Beard:98:ijc,Sakamoto:08:ieee-ac,Ohtsuka:11:ieee-ac,Sakamoto:13:automatica} for other numerical or algebraic approaches (see \cite{Beeler:00:jota} for a survey on the numerical methods for HJBE). Interestingly, when one applies these methods, no information on the solvability region is available and only local solvability around an equilibrium is examined, which amounts to the stabilizability and \clred{the detectability} of the linear part. For instance, swing-up and stabilization \clred{feedbacks} for the inverted pendulum and the acrobot are obtained in \cite{Horibe:17:ieee_cst,Horibe:19:ieee_cst} by numerically solving HJBEs. However, no theory a priori guarantees the solvability of the OCPs from such initial positions.
In this paper, motivated \clred{by} linear control theory, we wish to clarify under what conditions stabilizability (or stability) guarantees the solvability of infinite horizon OCPs. In our study, we restrict ourselves to \clred{an} affine nonlinear control system and to a cost functional that consists of a quadratic term on inputs and a nonnegative penalty function on states. We make full use of these structures to prove that if \clred{the} free dynamics is globally exponentially stable and \clred{the} input matrix is bounded, an optimal control exists globally and that if the control system is exponentially stabilizable, certain growth conditions at infinity are satisfied and detectability and coercivity conditions on the penalty function on the states are satisfied, then, an optimal control exists in the stabilizable region. \clred{ The results allow us to utilize a large number of works in nonlinear stability and stabilizability such as feedback linearization, the notion of zero dynamics and backstepping method in \cite{Nijmeijer:90:NDCS,Isidori:95:NCS,Khalil:92:NS,vanderSchaft:17:LGPTNC,Sepulchre:97:CNC} in the analysis for OCPs (see \S~\ref{sctn:applications}). }
Another motivation of the paper arises from turnpike phenomena in optimal control. It is often observed that under certain conditions the optimal control and the corresponding trajectory for finite (but long) horizon \clred{problems} are exponentially close to their steady-state optimum counterparts most of \clred{the} time in the control process except for the beginning and the end \clred{in} thin intervals. Turnpike is a metaphor used in econometrics \cite{Mckenzie:63:econometrica} for this behavior of optimally controlled systems as, when traveling from one place to a distant place, we always take \clred{the} highway to cover the distance at the best rate \cite{Dorfman:58:LPEA}. In control theory, this property was first observed in \cite{Wilde:72:ieeetac,Rockafellar:73:jota} as a dichotomy or saddle point property. We refer to \cite{Carlson:91:IHOC,Zaslavski:06:TPCVOC} for general accounts on turnpike theory in control systems. The turnpike phenomena are investigated from nonlinear control \cite{Anderson:87:automatica,Trelat:15:jde}, Hamilton-Jacobi theoretic \cite{Rapaport:04:esaim}, PDE \cite{Porretta:13:sicon,Porretta:16:INdAM,Gugat:16:syscon,Trelat:18:sicon,Zuazua:17:arc}, dissipative system theoretic \cite{Damm:14:sicon,Grune:16:syscon,Faulwasser:17:automatica,Berberich:18:ieeecst,Grune:18:sicon} and geometric (or dynamical system theoretic) \cite{sakamoto:20:prep_cdc} viewpoints. The framework in \cite{sakamoto:20:prep_cdc} to study the turnpike is based on stable and unstable manifolds of a Hamiltonian system associated with an OCP. It is shown there that the turnpike property holds if the stable and unstable manifolds satisfy certain conditions. Under some conditions on the linear part of systems, \clred{\S~\ref{sctn:stabl_manifold} of the present paper shows that} the problem of finding the solvability region for OCPs is equivalent to estimating the existence region of a stable manifold in the base space (control space) for associated Hamiltonian systems. \clred{This analysis of the stable manifold is closely related to the transversality condition for infinite horizon OCPs, which papers such as \cite{Aseev20041094,Aseev20071,Cannarsa20181188} consider under more general conditions than those in the present paper.}
The structure of the paper is as follows. In \S~\ref{sctn:existence_OC}, the Direct Method of Calculus of Variations is applied to show the existence of optimal control for exponentially stable and stabilizable cases. In \S~\ref{sctn:stabl_manifold}, the existence of costates $p(t)$ defined on $[0,\infty)$ with $p(\infty)=0$ is shown, which is equivalent to the existence of stable manifold. In \S~\ref{sctn:applications}, we show several classes of nonlinear systems in which the results in this paper are applicable. One of them is a class where stable and unstable manifolds in associated Hamiltonian systems exist with a canonical projection property to the base space, and therefore, the turnpike occurs in finite interval OCPs.
\section{Existence of optimal control}\label{sctn:existence_OC} Let us consider a nonlinear control system of the form \begin{equation}
\dot{x}=f(x)+g(x)u,\ x(0)=x_0,\label{eqn:n_sys} \end{equation} where $f:\mathbb{R}^n\to\mathbb{R}^n$ and $g:\mathbb{R}^n\to \mathbb{R}^{n\times m}$ are $C^2$ maps. Let us assume that $x=0$ is an equilibrium of $\dot x=f(x)$; $f(0)=0$. The OCP for (\ref{eqn:n_sys}) is to find a control input $u$ that minimizes a cost functional. In this paper, we consider a cost functional of the form \begin{equation}
J=\int_0^\infty |u(t)|^2/2+h(x(t))\,dt \label{eqn:cost} \end{equation} with control set $L^2((0,\infty);\mathbb{R}^m)$. The function $h:\mathbb{R}^n\to\mathbb{R}$ is a locally Lipschitz nonnegative function $h(x)\geqslant0$ with $h(0)=0$ that penalizes $x$ in the process of control. $J$ is a functional on $L^2((0,\infty);\mathbb{R}^m)$ taking values in $\mathbb{R}^+\cup \{+\infty\}$ and we denote its value for $u$ by $J(u)$. When a solution to (\ref{eqn:n_sys}) for a $u\in L^2((0,\infty);\mathbb{R}^m)$ has finite escape time, we set $J(u)=\infty$.
The first problem we tackle in the paper is to determine the conditions under which stability/stabilizability of (\ref{eqn:n_sys}) guarantees the existence of an optimal control for (\ref{eqn:n_sys})-(\ref{eqn:cost}). Throughout this section, the initial condition $x(0)=x_0$ for (\ref{eqn:n_sys}) is fixed. Since the input function is not assumed to be piecewise continuous, we use a generalized notion of solution for ordinary differential equations (ODEs). For an ODE, a Carath\'eodory solution is an absolutely continuous function defined on an interval $I\subset\mathbb{R}$ such that it satisfies the ODE except on a subset of $I$ which has zero Lebesgue measure (see, e.g., page 28 of \cite{Hale:73:ODE}). \begin{lemma} For each $u\in L^2((0,\infty);\mathbb{R}^m)$ and $t_0\geqslant0$, there exists a unique solution passing through $(t_0,x_0)$ in the sense of \clred{Carath\'eodory} for (\ref{eqn:n_sys}). \end{lemma} \begin{proof} Take an arbitrary compact set $K\subset \mathbb{R}^n$. Then, \clred{for $x\in K$,} \begin{align}
&|f(x)+g(x)u(t)|\leqslant \sup_{K}|f(x)|+\sup_{K}\|g(x)\|\,|u(t)|\label{eqn:carathe1}\\ & \begin{aligned}
|f(x_1)+g(x_1)u(t) - f(x_2)-g(x_2)u(t)|&\leqslant |f(x_1)-f(x_2)|+\|g(x_1)-g(x_2)\|\,|u(t)|\\
&\leqslant (M_f + M_g|u(t)|)|x_1-x_2|, \end{aligned}\label{eqn:carathe2} \end{align}
where $\|\cdot\|$ is the matrix induced norm and $M_f$, $M_g$ are constants such that \[
|f(x_1)-f(x_2)|\leqslant M_f|x_1-x_2|, \quad
\|g(x_1)-g(x_2)\|\leqslant M_g|x_1-x_2| \] for all $x_1$, $x_2$ in $K$. The right-hand sides of (\ref{eqn:carathe1}), (\ref{eqn:carathe2}) are locally integrable functions of $t$. Therefore, from Theorems~5.1 and 5.3 in \S~1 of \cite{Hale:73:ODE}, there exists a unique solution that is absolutely continuous. \end{proof} For $u\in L^2((0,\infty);\mathbb{R}^m)$, let us denote the corresponding solution for (\ref{eqn:n_sys}) by \clred{$x_u$}.
\begin{proposition}\label{prop:minimizer} Assume that for each bounded set $\mathscr{U}$ in $L^2((0,\infty);\mathbb{R}^m)$, the corresponding set
$\{x_u\,|\, u\in\mathscr{U} \}$ is bounded in $C^0([0,T];\mathbb{R}^n)$ for any $T>0$. Assume also that there exists a $u\in L^2((0,\infty);\mathbb{R}^m)$ such that $J(u)<+\infty$. Then, there exists a $\bar u \in L^2((0,\infty);\mathbb{R}^m)$ such that $J(\bar u)=\inf_{L^2((0,\infty);\mathbb{R}^m)}J$. \end{proposition}
\begin{proof} \underline{Step 1.} From the existence of a $u\in L^2((0,\infty);\mathbb{R}^m)$ such that $J(u)<\infty$, there exists a minimizing sequence $\{u_m\}\subset L^2((0,\infty);\mathbb{R}^m)$. For sufficiently large $m$, we have $J(u_m)<1+\inf_{L^2((0,\infty);\mathbb{R}^m)}J$ and therefore, $\{u_m\}$ is a bounded set. By Banach-Alaoglu Theorem, up to subsequence, $u_m$ weakly converges to a $\bar u \in L^2((0,\infty);\mathbb{R}^m)$. Let $x_m:=\clred{x_{u_m}}$. Then, from the assumption, $\{x_m\}$ is uniformly bounded. \clred{Take an arbitrary $T>0$ and} we will show that $x_m$ is equicontinuous \clred{on $[0,T]$}. Let us take arbitrary $t_1<t_2$ \clred{in $[0,T]$}. Then, \begin{align*}
|x_m(t_2)-x_m(t_1)|&\leqslant\int_{t_1}^{t_2}|f(x_m(s))|+\|g(x_m(s))\|\,|u_m(s)|\,ds\\
&\leqslant C_1|t_2-t_1|+C_2\int_{t_1}^{t_2}|u_m(s)|\,ds\\
&\leqslant C_1|t_2-t_1|+C_2\sqrt{|t_2-t_1|}\|u_m\|_{L^2((0,\infty);\mathbb{R}^m)}\\
&\leqslant C_1|t_2-t_1|+C_2 \sup_{m\in\mathbb{N}}\|u_m\|_{L^2((0,\infty);\mathbb{R}^m)}\sqrt{|t_2-t_1|}, \end{align*} where we have taken constants $C_1$, $C_2>0$, \clred{which may depend on $T$,} such that \[
|f(x_m(t))|<C_1, \quad \|g(x_m(t))\|<C_2 \quad \text{for }t\in [0,T], m\in\mathbb{N} \] by using the uniform boundedness of $x_m$. This proves that $x_m$ is equicontinuous. From Ascoli-Arzel\'a Theorem, up to subsequence $x_m$ uniformly converges to an $\bar x\in C^0([0,\infty);\mathbb{R}^n)$ on $[0,T]$.
Next, we prove that $\bar x = \clred{x_{\bar u}}$. Since \[ x_m(t)=x_0+\int_0^tf(x_m(s))+g(x_m(s))u_m(s)\,ds, \] and $x_m$ converges uniformly to $\bar x$, it suffices to prove that \[ \int_0^tg(x_m(s))u_m(s)\,ds\to \int_0^tg(\bar x(s))\bar u(s)\,ds \quad \text{as }m\to\infty \] from the uniqueness of solutions of initial value problems. First we note that \begin{align}
\left| \int_0^tg(x_m(s))u_m(s)-g(\bar x(s))u_m(s)\,ds \right|
&\leqslant \int_0^t\|g(x_m(s))-g(\bar x(s))\|\,|u_m(s)|\,ds\nonumber\\
&\leqslant \left(\int_0^t\|g(x_m(s))-g(\bar x(s))\|^2\,ds\right)^{1/2} \left(\int_0^t|u_m(s)|^2\,ds\right)^{1/2}\nonumber\\
&\leqslant\sup_{m\in\mathbb{N}}\|u_m\|_{L^2((0,\infty);\mathbb{R}^m)} \left(\int_0^t\|g(x_m(s))-g(\bar x(s))\|^2\,ds\right)^{1/2}\nonumber\\ &\to 0 \quad \text{as}\ m\to\infty. \label{ineq:conv_gmum} \end{align} Let $[\,\cdot\,]_j$ denote the $j$-th component of a vector for $j=1,\ldots,n$. Then, the operator \[ u\in L^2((0,\infty);\mathbb{R}^m)\mapsto \int_0^t[g(\bar x(s))u(s)]_j\,ds \] is a linear bounded functional for each $t\geqslant0$ since \begin{align*}
\left| \int_0^t[g(\bar x(s))u(s)]_j\,ds \right|
&\leqslant \int_0^t\|g(\bar x(s))\|\,|u(s)|\,ds\\
&\leqslant \left(\int_0^t\|g(\bar x(s))\|^2\,ds\right)^{1/2}\|u\|_{L^2((0,\infty);\mathbb{R}^m)}. \end{align*} Therefore, we have \begin{equation} \lim_{m\to\infty}\int_0^tg(\bar x(s))u_m(s)\,ds=\int_0^tg(\bar x(s))\bar u(s)\,ds\label{eqn:lim_gbar_um} \end{equation} from the weak convergence of $u_m$ in $L^2((0,\infty);\mathbb{R}^m)$. From (\ref{ineq:conv_gmum}) and (\ref{eqn:lim_gbar_um}), \begin{align*} \lim_{m\to\infty}\int_0^tg(x_m(s))u_m(s)\,ds &=\lim_{m\to\infty} \left[\int_0^t g(\bar x(s))u_m(s)\,ds +\int_0^t(g(x_m(s))-g(\bar x(s)))u_m(s)\,ds\right]\\ &=\int_0^tg(\bar x(s))\bar u(s)\,ds. \end{align*}
\noindent \underline{Step 2.} For a sufficiently large $m$, \begin{align*}
1+\inf_{L^2((0,\infty);\mathbb{R}^m)}J &\geqslant \int_0^\infty \frac{1}{2}|u_m(t)|^2+h(x_m(t))\,dt\\
&\geqslant \int_0^\infty h(x_m(t))\,dt=\|h(x_m)^{1/2}\|^2_{L^2((0,\infty);\mathbb{R})} \end{align*} and $\{h(x_m)^{1/2}\}$ is a bounded set in $L^2((0,\infty);\mathbb{R})$. Replacing $\int_0^\infty\,dt$ with $\int_0^T\,dt$, it is also a bounded set in $L^2((0,T);\mathbb{R})$ for all $T>0$. By the Banach-Alaoglu Theorem and the diagonal argument, up to subsequence, we have \begin{gather*}
\int_0^T|h(x_m(t))-h(\bar x(t))|^2\,dt\to0 \quad\text{as }m\to\infty \text{ for all }T>0, \] implying that \[ h(x_m)^{1/2}\to h(\bar x)^{1/2} \quad \text{strongly in }L^2((0,T);\mathbb{R}) \quad \text{for all }T>0. \] From the uniqueness of weak limit, we have $h(\bar x)=l$ on $[0,T]$ for all $T>0$. Thus, we have shown that \begin{equation} h(x_m)^{1/2}\to h(\bar x)^{1/2} \quad \text{weakly in }L^2((0,\infty);\mathbb{R}). \label{eqn:weaklimit_h} \end{equation} \noindent \underline{Step 3.} From the weak convergence of $u_m$ to $\bar u$ in $L^2((0,\infty);\mathbb{R}^m)$, $h(x_m)^{1/2}$ to $h(\bar x)^{1/2}$ in $L^2((0,\infty);\mathbb{R})$ and lower semi-continuity of norm for weak topology, \begin{gather*}
\|\bar u\|_{L^2((0,\infty);\mathbb{R}^m)}\leqslant \liminf_{m\to\infty}\|u_m\|_{L^2((0,\infty);\mathbb{R}^m)},\\
\|h(\bar x)^{1/2}\|_{L^2((0,\infty);\mathbb{R})}\leqslant
\liminf_{m\to\infty}\|h(x_m)^{1/2}\|_{L^2((0,\infty);\mathbb{R})}, \end{gather*} and it holds that \begin{align*}
J(\bar u) =\int_0^\infty \frac{1}{2}|\bar u(t)|^2+h(\bar x(t))\,dt &\leqslant \frac{1}{2}\liminf_{m\to\infty}\|u_m\|^2_{L^2((0,\infty);\mathbb{R}^m)}
+\liminf_{m\to\infty}\int_0^\infty h(x_m(t))\,dt\\
&\leqslant \liminf_{m\to\infty}J(u_m)=\inf_{L^2((0,\infty);\mathbb{R}^m)}J. \end{align*} This proves that $\bar u$ is an optimal control. \end{proof}
\subsection{Exponentially stable case}\label{sctn:exp_stable} In this section, we consider the case where the free dynamics $\dot x =f(x)$ in (\ref{eqn:n_sys}) is globally exponentially stable. That is, there exist constants \clred{$\mu>0$ and $K>0$ that are independent of $x_0$} such that the following estimate for the corresponding solution $x(t,x_0)$ holds \[
|x(t,x_0)|\leqslant K\clred{|x_0|}e^{-\mu t} \text{ for }t\geqslant0, \ x_0\in \mathbb{R}^n. \]
\begin{theorem}\label{prop:exp_stable} Assume that $g(x)$ is bounded in $\mathbb{R}^n$ and free dynamics $\dot x=f(x)$ is globally exponentially stable. Assume also that $Df(x)$ is bounded in $\mathbb{R}^n$.\footnote{ These assumptions are necessary to have \clred{Lyapunov function} $V(x)$ defined on $\mathbb{R}^n$.} Then, for each $x_0\in \mathbb{R}^n$, there exists an optimal control \clred{$\bar u$} for (\ref{eqn:n_sys})-(\ref{eqn:cost}) \clred{and it holds that $x_{\bar u}(t)\to0$ as $t\to\infty$}. \end{theorem} \begin{proof}
We prove that the assumptions in Proposition~\ref{prop:minimizer} are satisfied. Namely, we show that for any $u\in L^2((0,\infty);\mathbb{R}^m)$, the corresponding solution belongs to $H^1((0,\infty);\mathbb{R}^n)$ and hence, for any bounded set $\mathscr{U}\subset L^2((0,\infty);\mathbb{R}^m)$, the corresponding set $\{\clred{x_u}\,|\,u\in\mathscr{U}\}$ is bounded in $C^0([0,T];\mathbb{R}^n)$ for all $T>0$. \\ \underline{$x\in L^2((0,\infty);\mathbb{R}^n)$:} From the global exponential stability of $\dot x=f(x)$ and the boundedness of $Df(x)$, there exist a $C^1$ function $V:\mathbb{R}^n\to\mathbb{R}$\clred{,} positive constants $c_1$, $c_2$, $c_3$ and $c_4$ such that for all $x\in \mathbb{R}^n$ \begin{enumerate}[i)]
\item
$ c_1|x|^2\leqslant V(x)\leqslant c_2|x|^2$
\item
$DV(x)f(x)\leqslant -c_3|x|^2$
\item
$|DV(x)|\leqslant c_4|x|$ \end{enumerate} (see, e.g., page 180 of \cite{Khalil:92:NS}). \clred{Take a $u\in L^2((0,\infty);\mathbb{R}^m)$ and let $x(t)=x_u(t)$.}
Using the above inequalities and the boundedness of $\|g(x(t))\|$ on the trajectory, one can derive \[
\frac{d}{dt}V(x(t))\leqslant -cV(x(t))+c'|u(t)|^2, \ t\geqslant0, \] for some positive constants $c$, $c'$ (which depend on $g$). Applying Gronwall's inequality and the quadratic estimates on $V$, it follows that \[
|x(t)|^2\leqslant c|x_0|^2 e^{-c't}+K\int_0^te^{-c'(t-s)}|u(s)|^2\,ds, \ t\geqslant0 \] for some positive constants $c$, $c'$ and $K$ (that depend on $V$ and $g$). Since \[
\int_0^\infty\left( \int_0^t e^{-c'(t-s)}|u(s)|^2\,ds \right)\,dt
=\frac{1}{c'}\left( \int_0^\infty |u(t)|^2\,dt - \lim_{t\to\infty}e^{-c't}\int_0^t e^{c's}|u(s)|^2\,ds \right) \] and the second term on the right is 0, we have \begin{equation}
\int_0^\infty|x(t)|^2\,dt\leqslant \frac{c}{c'}|x_0|^2 +\frac{1}{c'}\|u\|_{L^2((0,\infty);\mathbb{R}^m)}^2.\label{ineq:x_in_L2} \end{equation} \underline{$\dot x\in L^2((0,\infty);\mathbb{R}^n)$:}
Applying Lemma~\ref{lemma:h_convergence} with $H(x)=|x|^2$, we know that $x(t)$ is bounded for $t\geqslant0$. Therefore, \begin{align*}
\int_0^\infty|\dot{x}(t)|^2\,dt &= \int_0^\infty |f(x(t))+g(x(t))u(t)|^2\,dt\\
&\leqslant 2\int_0^\infty |f(x(t))|^2+\|g(x(t))\|^2\,|u(t)|^2\,dt\\
&\leqslant 2M^2\int_0^\infty |x(t)|^2\,dt+2\sup_{t\geqslant0}\|g(x(t))\|^2
\int_0^\infty|u(t)|^2\,dt\\
&\leqslant 2M^2\|x\|_{L^2((0,\infty);\mathbb{R}^n)}^2+2\sup_{t\geqslant0}\|g(x(t))\|^2 \|u\|_{L^2((0,\infty);\mathbb{R}^m)}^2, \end{align*}
where $M>0$ is a constant satisfying $|f(x)|\leqslant M|x|$ in a bounded set that contains $\{x(t)\,|\,t\geqslant0\}$. This and (\ref{ineq:x_in_L2}) show that $x\in H^1((0,\infty);\mathbb{R}^n)$ with \begin{equation}
\|x\|_{H^1((0,\infty);\mathbb{R}^n)}\leqslant c_1+c_2\|u\|_{L^2((0,\infty);\mathbb{R}^m)}\label{ineq:x_h1norm_u} \end{equation} for some positive constants $c_1$, $c_2$.
Now, we have for $t\in[0,T]$, where $T>0$ is arbitrary, \begin{align*}
|x(t)|&\leqslant \int_0^t|\dot{x}(s)|\,ds +|x_0|\\
&\leqslant \sqrt{t}\|\dot{x}\|_{L^2((0,\infty);\mathbb{R}^n)}+|x_0|\\
&\leqslant \sqrt{T}\|x\|_{H^1((0,\infty);\mathbb{R}^n)}+|x_0|
\leqslant C_T+C'_T\|u\|_{L^2((0,\infty);\mathbb{R}^m)}, \qquad \text{from (\ref{ineq:x_h1norm_u})} \end{align*}
for some positive constants $C_T$, $C'_T$ that depend on $T$. This proves that the assumptions in Proposition~\ref{prop:minimizer} are satisfied and an optimal control exists. \clred{The last assertion of the theorem follows from the same estimate $\int_0^\infty|x_{\bar u}(t)|^2\,dt<\infty$ and Lemma~\ref{lemma:h_convergence} with $H(x)=|x|^2$.} \end{proof}
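As an illustration of the $L^2$ estimates used in the proof (the scalar system below is our own toy example, not taken from the paper), one may check numerically that an $L^2$ input produces an $L^2$ state trajectory converging to the origin when the free dynamics is globally exponentially stable and $g$ is bounded:
\begin{verbatim}
# Illustrative sketch (our own toy system, not from the paper):
#   xdot = -2*x + sin(x) + u
# has globally exponentially stable free dynamics and bounded g(x) = 1,
# so an L^2 input should yield an L^2 state that converges to 0.
import numpy as np
from scipy.integrate import solve_ivp

def u(t):
    return np.exp(-0.3 * t) * np.sin(5.0 * t)      # a simple L^2 input

def rhs(t, x):
    return -2.0 * x + np.sin(x) + u(t)

sol = solve_ivp(rhs, (0.0, 40.0), [3.0], max_step=0.01, dense_output=True)

t = np.linspace(0.0, 40.0, 4001)
x = sol.sol(t)[0]
print("x(40) =", x[-1])                              # close to 0
print("int |x|^2 dt ~", np.sum(x**2) * (t[1] - t[0]))  # finite
\end{verbatim}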
\subsection{Exponentially stabilizable case} In this subsection, we consider (\ref{eqn:n_sys}) for a more general case where there exists a $C^1$ exponentially stabilizing feedback control $u=k(x)$ with $k(0)=0$ for initial points in \clred{a neighborhood of $x=0$}. Let us recall zero-state detectability for nonlinear systems, which is often imposed in OCPs.
\begin{definition} System $\dot x=f(x)$ and output $y=h(x)$ (or, simply the pair $(f,h)$) is said to be {\em zero-state detectable for a neighborhood $U\subset \mathbb{R}^n$ of the origin} if the following holds. If a solution $x(t)$ with $x(0)\in U$ satisfies $h(x(t))=0$ for $t\geqslant 0$, then $x(t)\to0$ as $t\to\infty$. \end{definition}
\begin{proposition}\label{prop:detect}
Suppose that $h$ is a $C^1$ function which is coercive, namely, $h(x)\to\infty$ as $|x|\to\infty$. Take positive constants $R$, $\mu$ such that $h(x)>\mu$ if $|x|>R$. Assume also that the pair $(f,h)$ is zero-state detectable for an open set containing $|x|\leqslant R$. Let $\delta(t)\in L^2((0,\infty);\mathbb{R}^n)$.
Assume finally that solution $x_\delta(t)$ for $\dot x=f(x)+\delta(t)$ satisfies $h(x_\delta(t))\to0$ as $t\to\infty$. Then, $x_\delta(t)\to0$ as $t\to\infty$. \end{proposition} \begin{proof}
From the coercivity of $h$, $\{x_\delta(t)|\,t\geqslant 0\}$ is bounded in $\mathbb{R}^n$. Using the Bolzano-Weierstrass Theorem, there exists a subsequence $\{x_\delta(t_n)\}$, $t_n\to\infty$ ($n\to\infty$), such that $x_\delta(t_n)\to\bar x$ as $t_n\to\infty$. Define $\delta_n(t):=\delta(t_n+t)$, $f_n(t,x):=f(x)+\delta_n(t)$ and $\xi_n:=x_\delta(t_n)$. Note that $\xi_n\to\bar x$ as $n\to\infty$. We consider an initial value problem \[ \dot x =f_n(t,x),\quad x(0)=\xi_n. \] Let $\varphi_n(t)$ denote the solution of the above problem. One then notices that $\varphi_n(t)=x_\delta(t_n+t)$ and therefore $\varphi_n(t)$ is bounded for $t\geqslant0$ \clred{ uniformly in $n$}. Also, let $\varphi(t)$ be the solution on $[0,t_1]$, where $t_1>0$ is arbitrary, for \[ \dot x=f(x), \quad x(0)=\bar x. \] Write $f_0(t,x):=f(x)$. Then, for any bounded set $\bar D\subset\mathbb{R}^n$, there exists a constant $M>0$ such that for any $x$, $y\in \bar D$ it follows that \begin{gather*}
|f_n(t,x)-f_0(t,x)|\leqslant |\delta_n(t)|,\\
\int_0^t|\delta_n(s)|\,ds=\int_{t_n}^{t+t_n}|\delta(s)|\,ds\to0, \text{ as }n\to \infty \text{ for }t>0,\\
\int_{t_1}^{t_2}|\delta_n(t)|\,dt \leqslant \sqrt{t_2-t_1}\| \delta \|_{L^2((0,\infty);\mathbb{R}^n)} \text{ for }0\leqslant t_1\leqslant t_2,\\
|f_n(t,x)-f_n(t,y)|=|f(x)-f(y)|\leqslant M|x-y|. \end{gather*} Now all the assumptions in Proposition~\ref{prop:hale_lemma} are satisfied. Therefore, $\varphi_n\to\varphi$ uniformly on $[0,t_1]$ as $n\to\infty$. However, from the boundedness of $\varphi_n$, $\varphi$ is defined on $[0,\infty)$ and also bounded for $t\geqslant0$. Take a constant $M_h>0$ \clred{that is independent of $n$} such that in a bounded set that contains $\varphi_n(t)$ and $\varphi(t)$ for $t\geqslant0$, \[
|h(x)-h(y)|\leqslant M_h|x-y| \] holds for all $x$, $y$ in the bounded set. Then we have \[
0\leqslant h(\varphi(t))\leqslant M_h|\varphi_n(t)-\varphi(t)|+h(x_\delta(t_n+t)) \] for $t\geqslant0$. Taking the limit $n\to\infty$, we obtain \[ h(\varphi(t))=0 \quad \text{for}\quad t\geqslant0. \]
We have then $|\varphi(t)|\leqslant R$ for $t\geqslant0$. From the detectability of $(f,h)$ for the open set containing $|x|\leqslant R$, we have $\varphi(t)\to0$ as $t\to\infty$ and therefore $x_\delta(t)\to0$ as $t\to\infty$ since $\varphi_n\to\varphi$ uniformly on $[0,T]$ as $n\to\infty$ for any $T>0$. \end{proof}
\begin{theorem} \label{thm:minimizer_2}
Assume that system (\ref{eqn:n_sys}) is exponentially stabilizable by a $C^1$ feedback control for \clred{an open set} $U\subset\mathbb{R}^n$ and that $h$ is a $C^1$ function of $x\in\mathbb{R}^n$. Assume also that the pair $(f,h)$ is zero-state detectable for an open set containing $|x|\leqslant\rho$ for some $\rho>0$. Assume finally that there exist positive constants $p$, $c_f$, $c_g$ and a constant $0\leqslant\theta<1$ such that \begin{subequations} \begin{align}
|f(x)|&\leqslant c_f|x|^{p+\theta} \label{ineq:groth_f}\\
\|g(x)\|&\leqslant c_g|x|^{p/2+\theta}\label{ineq:groth_g} \end{align} \end{subequations} for sufficiently large $x\in\mathbb{R}^n$ and that there exists a positive constant $c_h$ such that \begin{equation}
h(x) \geqslant c_h|x|^p\label{ineq:groth_h} \end{equation}
for $x$ with $|x|>\rho$. Then, for $x_0\in U$, OCP (\ref{eqn:n_sys})-(\ref{eqn:cost}) has an optimal control $\bar u\in L^2((0,\infty);\mathbb{R}^m)$ \clred{and it holds that $x_{\bar u}(t)\to0$ as $t\to\infty$}. \end{theorem} \begin{proof}
Let $k(x)$ be a $C^1$ exponentially stabilizing feedback and let $u_e(t)=k(x(t))$. Then $J_e:=J(u_e)<+\infty$ since the closed loop solution satisfies the exponential decay and around the origin we have estimates on $k$ and $h$; $|k(x)|\leqslant M_k|x|$, $h(x)\leqslant M_h|x|$ for some constants $M_h,M_k>0$. There exists a minimizing sequence $\{u_m\}\subset L^2((0,\infty);\mathbb{R}^m)$, which is bounded in $L^2((0,\infty);\mathbb{R}^m)$ as in the proof of Proposition~\ref{prop:minimizer}, such that \[ \lim_{m\to\infty}J(u_m)=\inf_{L^2((0,\infty);\mathbb{R}^m)}J, \quad J(u_m) <J_e\ (m\in\mathbb{N}). \] Let \clred{$x_m=x_{u_m}$}. Then, we have $\int_0^\infty h(x_m(t))\,dt<\infty$. From Lemma~\ref{lemma:h_convergence} with $H(x)=h(x)$, $h(x_m(t))\to0$ as $t\to\infty$ and $x_m(t)$ is bounded for $t\geqslant0$. With the correspondence \[ \delta(t)\longleftrightarrow g(x_m(t))u_m(t), \quad x_\delta \longleftrightarrow x_m,\quad R\longleftrightarrow\rho, \] in Proposition~\ref{prop:detect}, we have $x_m\in C^0([0,\infty);\mathbb{R}^n)$, $x_m(t)\to0$ as $t\to\infty$.
We next prove that $x_m(t)$ is uniformly bounded. Let $L_m:= \sup_{t\geqslant0}|x_m(t)|$ and we prove that $\sup_{m\in\mathbb{N}}L_m<\infty$. Define $I_m\subset\mathbb{R}$ by \[
I_m:=\{t\geqslant0\,|\, |x_m(t)|\geqslant L_m/2\}. \] Assume that $\sup_{m\in\mathbb{N}}L_m=\infty$ and take $\{m_k\}\subset \mathbb{N}$ such that $L_{m_k}\to\infty$ as $m_k\to\infty$. Then, for $m_k$ sufficiently large, \begin{align*} J_e&\geqslant \int_0^\infty h(x_{m_k}(t))\,dt\\ &\geqslant \int_{I_{m_k}}h(x_{m_k}(t))\,dt\\
&\geqslant c_h\int_{I_{m_k}} |x_{m_k}(t)|^p\,dt\qquad \text{by (\ref{ineq:groth_h})}\\
&\geqslant c_h\left(\frac{L_{m_k}}{2}\right)^p|I_{m_k}|, \end{align*}
where $|\cdot|$ denotes length (Lebesgue measure) of interval. Thus we have \begin{equation}
|I_{m_k}|\leqslant C {L_{m_k}}^{-p},\label{ineq:I_m_length} \end{equation}
where $C>0$ is independent of $m_k$. Next we compute the length of trajectory connecting spheres $|x|=L_{m_k}$ and $|x|=L_{m_k}/2$, which is \begin{align*}
\int_{I_{m_k}}|\dot{x}_{m_k}(t)|\,dt &\leqslant
\int_{I_{m_k}}|f(x_{m_k}(t))|+\|g(x_{m_k})\|\,|u_{m_k}(t)|\,dt\\
&\leqslant |I_{m_k}|\sup_{L_{m_k}/2\leqslant |x|\leqslant L_{m_k}}|f(x)|
+\sup_{L_{m_k}/2\leqslant |x|\leqslant L_{m_k}}\|g(x)\|\int_{I_{m_k}}|u_{m_k}(t)|\,dt\\
&\leqslant |I_{m_k}|\sup_{L_{m_k}/2\leqslant |x|\leqslant L_{m_k}}|f(x)|\\
&\qquad +\sqrt{|I_{m_k}|}\sup_{L_{m_k}/2\leqslant |x|\leqslant L_{m_k}}\|g(x)\|\sup_{m\in\mathbb{N}}\|u_m\|_{L^2((0,\infty);\mathbb{R}^m)}\\
&\leqslant c_1|I_{m_k}|{L_{m_k}}^{p+\theta}+c_2\sqrt{|I_{m_k}|}{L_{m_k}}^{p/2+\theta}, \qquad \text{by (\ref{ineq:groth_f}) and (\ref{ineq:groth_g})} \end{align*} where $c_1$, $c_2$ are positive constants independent of $m_k$. Then, from (\ref{ineq:I_m_length}) it follows that \[
\int_{I_{m_k}}|\dot{x}_{m_k}(t)|\,dt\leqslant C{L_{m_k}}^{\theta}, \]
where $C>0$ is a constant independent of $m_k$. However, the left-hand side above grows at least as $O(L_{m_k})$ (the trajectory connects the spheres $|x|=L_{m_k}$ and $|x|=L_{m_k}/2$), which is a contradiction.
The rest of the proof is the same as Proposition~\ref{prop:minimizer}. \clred{The last assertion follows using Proposition~\ref{prop:detect} with $\delta(t)=g(x_{\bar u}(t))\bar{u}(t)$.} \end{proof}
\begin{remark} Although exponential stability of free dynamics $\dot x=f(x)$ implies exponential stabilizability, Theorem~\ref{thm:minimizer_2} does not include Theorem~\ref{prop:exp_stable} since in Theorem~\ref{prop:exp_stable} $h\equiv 0$ is allowed. \end{remark} \clred{ \begin{remark}
Theorem~\ref{thm:minimizer_2} cannot handle the case $h(x)=|Cx|^2$ with a constant matrix $C\in\mathbb{R}^{r\times n}$. This is due to the coercivity of $h$ used in the proof. \end{remark} } \section{Stable manifold analysis of associated Hamiltonian system}\label{sctn:stabl_manifold} In this section, we additionally assume that $h(x)$ is $C^2$ and $Dh(0)=0$. The second problem treated in the paper is to give estimates on a stable manifold of the associated Hamiltonian system for OCP (\ref{eqn:n_sys})-(\ref{eqn:cost}); \begin{subequations}\label{eqn:hsys_all} \begin{align}
\dot{x}&=f(x)-g(x)g(x)^\top p \label{eqn:hsys1}\\
\dot{p}&=-Df(x)^\top p +\frac{1}{2}D(p^\top g(x)g(x)^\top p)^\top -Dh(x)^\top \label{eqn:hsys2}. \end{align} \end{subequations}
\clred{The Hamiltonian system (\ref{eqn:hsys_all}) appears in optimal control theory in two ways. The first is from Pontryagin's minimum principle, where (\ref{eqn:hsys_all}) takes the form of \begin{gather*} \dot x=\frac{\partial H_D}{\partial p}(x(t),u^\ast (t), p(t))^\top,\quad \dot p=-\frac{\partial H_D}{\partial x}(x(t),u^\ast (t), p(t))^\top,\\ H_D(x(t),u^\ast(t), p(t)) = \min_{u\in\mathbb{R}^m}H_D(x(t),u, p(t)), \end{gather*}
for $H_D(x,u,p)=p^\top f(x)+p^\top g(x)u+\frac{1}{2}|u|^2+h(x)$. The minimization over $u\in\mathbb{R}^m$ can be expressed explicitly and (\ref{eqn:hsys_all}) is obtained. The second is from the method of characteristics for the HJBE \[ H(x,\partial V/\partial x)=0 \] obtained by Dynamic Programming, where $H(x,p)=\min_{u\in\mathbb{R}^m}H_D(x,u,p)$. }
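For completeness, we record the explicit minimization: since $H_D$ is strictly convex in $u$, setting $\partial H_D/\partial u = u^\top + p^\top g(x) = 0$ gives $u^\ast = -g(x)^\top p$, and hence
\[
H(x,p) = \min_{u\in\mathbb{R}^m} H_D(x,u,p) = p^\top f(x) - \frac{1}{2}\, p^\top g(x) g(x)^\top p + h(x).
\]
The canonical equations $\dot x = (\partial H/\partial p)^\top$, $\dot p = -(\partial H/\partial x)^\top$ for this $H$ are exactly (\ref{eqn:hsys1})-(\ref{eqn:hsys2}).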
Let us explain the motivation of the second problem in the paper using a simple example \[ \dot x=-x+x^2+u,\quad J=\int_0^\infty u(t)^2/2\,dt. \] This system is globally exponentially stabilizable by $u=-x^2$ and the associated Hamiltonian system is \begin{equation} \dot x= -x+x^2-p,\quad \dot p= p-2xp.\label{eqn:ex_hamsys} \end{equation} For every $x(0)<1$ the optimal control is $u=0$ (for $x(0)>1$ the control $u=0$ leads to a finite escape time), and the stable manifold for the Hamiltonian system (\ref{eqn:ex_hamsys}) exists only in $x<1$. In (\ref{eqn:ex_hamsys}), there are two \clred{equilibria}, $(x,p)=(0,0)$ and $(1,0)$, and heteroclinic orbits connecting them exist. This example indicates that a detectability condition is necessary to guarantee the global existence of the stable manifold.
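As an illustrative numerical check (not part of the argument above), the following sketch traces the stable manifold branch of (\ref{eqn:ex_hamsys}) by integrating backward in time from a point near the origin on the stable eigendirection of the linearization; the branch stays on $p=0$ and approaches $x=1$, consistent with the claim that the stable manifold exists only in $x<1$.
\begin{verbatim}
# Backward-in-time continuation of the stable eigendirection of the
# Hamiltonian system above at (0, 0).  The linearization has eigenvalue -1
# with eigenvector (1, 0), so we start at a small point on the x-axis; the
# computed branch is the heteroclinic orbit toward the equilibrium (1, 0)
# and never crosses x = 1.
import numpy as np
from scipy.integrate import solve_ivp

def ham_vf(t, z):
    x, p = z
    return [-x + x**2 - p, p - 2.0 * x * p]

def backward_vf(t, z):
    return [-f for f in ham_vf(t, z)]

z0 = [1e-3, 0.0]                                    # small step along (1, 0)
sol = solve_ivp(backward_vf, (0.0, 20.0), z0, rtol=1e-10, atol=1e-12)
print("endpoint of the traced branch:", sol.y[:, -1])   # approximately (1, 0)
\end{verbatim}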
\clred{It should also be noted that the asymptotic behavior of $p(t)$ is related to the transversality condition in the minimum principle, but, in infinite horizon OCPs, it does not hold in general that $p(\infty)=0$. For this issue, we refer to \cite{Halkin:74:econometrica}, which gives a counterexample with a singular costate (a situation that does not happen in our setting). In the following, we show that in (\ref{eqn:hsys_all}) $p(t)$ satisfies $p(\infty)=0$.}
\begin{proposition}\label{prop:p_exists_stable} Suppose that $h(x)$ is $C^2$ and $Dh(0)=0$. \begin{enumerate}[(i)]
\item Assume that the hypotheses in Theorem~\ref{prop:exp_stable} are satisfied. Then, for any $x_0\in\mathbb{R}^n$, the Hamiltonian system (\ref{eqn:hsys_all}) admits a unique solution $(x(t),p(t))$ defined on $[0,\infty)$ satisfying $x(0)=x_0$ and $(x(t),p(t))\to0$ as $t\to\infty$. \item \clred{Assume that all the hypotheses in Theorem~\ref{thm:minimizer_2} are satisfied. Assume additionally that the linear part of $(h(x), f(x))$ is detectable.} \clred{Then} for any $x_0\in U$, \clred{where $U$ is the open set for the exponential stabilization}, the Hamiltonian system (\ref{eqn:hsys_all}) admits a unique solution $(x(t),p(t))$ defined on $[0,\infty)$ satisfying $x(0)=x_0$ and $(x(t),p(t))\to0$ as $t\to\infty$. \end{enumerate} \end{proposition} \begin{proof} Let us write \begin{gather*}
f(x)=Ax+\varphi(x), \ \varphi(x)=O(|x|^2), \quad g(x)=B+\tilde{g}(x),\ \tilde{g}(x)=O(|x|)\\
h(x)=\frac{1}{2}|Cx|^2+\tilde{h}(x),\ \tilde{h}(x)= O(|x|^3),\ |x|\to0, \end{gather*} where $\varphi$, $\tilde g(x)$ and $\tilde h(x)$ are higher order $C^2$ maps. \clred{The proofs for (i) and (ii) are almost the same since $A$ is a Hurwitz matrix in (i) and the exponential stabilizability by $C^1$ feedback in (ii) implies that $(A,B)$ is stabilizable. One additionally needs the detectability of $(C,A)$ for (ii).}
The Hamiltonian system (\ref{eqn:hsys_all}) can be written as \begin{gather} \frac{d}{dt}\bmat{x\\p}=\mathrm{Ham}\bmat{x\\p}
+\bmat{\varphi(x) -\Phi(x)p\\ -D\varphi(x)^\top p +\frac{1}{2}D(p^\top g(x)g(x)^\top p)^\top -D\tilde{h}(x)^\top}\label{eqn:ham_all}\\ \intertext{where} \mathrm{Ham}=\bmat{A& -BB^\top\\-C^\top C&-A^\top}, \quad \Phi(x)=B\tilde{g}(x)^\top +\tilde{g}(x)B^\top+\tilde{g}(x)\tilde{g}(x)^\top.\nonumber \end{gather} \clred{ (\ref{eqn:ham_all}) is written as \[ \frac{d}{dt}\bmat{x\\p}=\mathrm{Ham}\bmat{x\\p}
+\bmat{\gamma_1(x,p)\\\gamma_2(x,p)} \]
with appropriately defined $\gamma_1(x,p)$, $\gamma_2(x,p)$. Note that $\gamma_j(x,p)=O((|x|+|p|)^2)$, $|x|+|p|\to0$, $j=1,2$. }
\clred{Since $(A,B)$ is stabilizable and $(C,A)$ is detectable}, the following Riccati equation has a solution $P_1\geqslant0$ \[ PA+A^\top P-PBB^\top P+C^\top C=0 \] with $\mathrm{Re}\,\lambda(A-BB^\top P_1)<0$. Also, take a solution $P_2\leqslant0$ for the following Lyapunov equation \[ P(A-BB^\top P_1)^\top +(A-BB^\top P_1)P=BB^\top. \] Using a symplectic transformation (see, e.g., \cite{Lukes:69:sicon,Sakamoto:02:sicon} for detail) \[ L=\bmat{I&P_2\\P_1& I+P_1P_2},\quad L^{-1}=\bmat{I+P_2P_1&-P_2\\-P_1&I}, \] the linear part $\mathrm{Ham}$ is block-diagonalized \[ L^{-1}\mathrm{Ham}L=\bmat{A-BB^\top P_1&0\\0&-(A-BB^\top P_1)^\top}. \] Let us introduce new coordinates $\xi$, $\eta$ by \begin{equation} \bmat{x\\p}=L\bmat{\xi\\\eta}=\bmat{\xi+P_2\eta\\P_1\xi+(P_1P_2+I)\eta} =\bmat{\xi+P_2\eta\\P_1 x+ \eta}.\label{eqn:xi_eta} \end{equation} The Hamiltonian system is then written, in the coordinates $(\xi, \eta)$, as \begin{equation} \frac{d}{dt}\bmat{\xi\\ \eta}=\bmat{(A-BB^\top P_1)&0\\0&-(A-BB^\top P_1)^\top}\bmat{\xi\\\eta}
+\bmat{\nu_1(\xi,\eta)\\\nu_2(\xi,\eta)},\label{eqn:ham_all_xi_eta} \end{equation}
where $\nu_j(\xi,\eta)=O((|\xi|+|\eta|)^2)$, $|\xi|+|\eta|\to0$, $j=1,2$. It is known (see, e.g., Page~107 of \cite{Chow:82_96:MBT}) that in (\ref{eqn:ham_all_xi_eta}), there exists a unique $C^1$ stable manifold $\eta=\theta(\xi)$ in a neighborhood of $(\xi,\eta)=(0,0)$ satisfying $\theta(0)=0$, $D\theta(0)=0$ such that the solution $\xi(t)$, $\eta(t)$ satisfies $\eta(t)=\theta(\xi(t))$ for $t\geqslant0$ and $\xi(t)$, $\eta(t)\to0$ provided that $\eta(0)=\theta(\xi(0))$
\footnote{For a dynamical system whose linearization at the equilibrium (denoted $0$) has no eigenvalues on the imaginary axis, a stable manifold exists around the equilibrium, which is defined as \[ \bigcup_{t\geqslant0}\varphi_{-t}(W_{\mathrm{loc}}^s(0)), \] where $\varphi_t(z)$ is the flow of the dynamical system starting from $z$ at $t=0$ and $W_{\mathrm{loc}}^s(0)$ is the set of points near the equilibrium from which the flow is bounded for $t\geqslant0$ (local stable manifold). }.
\clred{Take an $x_0$ for which an optimal control $u^\ast$ exists by either Theorem~\ref{prop:exp_stable} or Theorem~\ref{thm:minimizer_2} and let $x^\ast$ be the optimal trajectory, which satisfies $x^\ast(t)\to0$ as $t\to\infty$. For arbitrary large $t_1\geqslant0$, there exists a $\xi_1\in\mathbb{R}^n$ such that $x^\ast (t_1)=\xi_1+P_2\theta(\xi_1)$. This is shown from \[
\left.\frac{\partial}{\partial \xi}(\xi+P_2\theta(\xi))\right|_{\xi=0}=I, \]
which uses $\theta(0)=0$ and $D\theta(0)=0$, together with the Implicit Function Theorem, provided that $|x^\ast (t_1)|$ is sufficiently small. Let $(\xi(t),\eta(t))$ be the solution for (\ref{eqn:ham_all_xi_eta}) with $\xi(t_1)=\xi_1$, $\eta(t_1)=\theta(\xi_1)$ and define $(x(t),p(t))$ by (\ref{eqn:xi_eta}). Then, $(x(t),p(t))$ satisfies (\ref{eqn:hsys_all}) and therefore $x(t)$ is an optimal trajectory for $t\geqslant t_1$ (this is guaranteed, for instance, by the result in \cite{Lukes:69:sicon}). Moreover, from the principle of optimality, it follows that $x^\ast (t)=x(t)$ for $t\geqslant t_1$ since $x(t_1)=x^\ast (t_1)$. This shows that $p(t)$ defined above satisfies (\ref{eqn:hsys_all}) together with $x^\ast (t)$ and $p(t)\to0$ as $t\to\infty$.
To prove that $p(t)$ satisfying (\ref{eqn:hsys_all}) exists on $[0,\infty)$, we show that $p(t)$ can be extended to $[0,t_1]$. To do this, we equivalently consider (\ref{eqn:hsys2}) as the second equation from the necessary condition of optimality; \[\clred{ \dot p = -[Df(x^\ast (t))^\top +L(x^\ast (t),u^\ast (t))]p -Dh(x^\ast (t))^\top, } \] where \[ L(x,u)= \sum_{j=1}^m u_jDg_j(x)^\top,\quad g(x)=\bmat{g_1(x)&\cdots&g_m(x)}. \] } This is a linear equation for $p$ and solution exists in the sense of Carath\'eodory for all $t\in[0,t_1]$. \end{proof}
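The block-diagonalization used in the proof can also be checked numerically. The following sketch (the matrices $A$, $B$, $C$ below are a toy example of ours, not taken from the paper) computes $P_1$ from the Riccati equation and $P_2$ from the Lyapunov equation with SciPy and verifies that $L^{-1}\,\mathrm{Ham}\,L$ is block-diagonal with blocks $A-BB^\top P_1$ and $-(A-BB^\top P_1)^\top$.
\begin{verbatim}
# Numerical check of the symplectic block-diagonalization (toy data).
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator: (A, B) stabilizable
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                # (C, A) detectable

# P1 >= 0 solves P A + A^T P - P B B^T P + C^T C = 0 (stabilizing solution).
P1 = solve_continuous_are(A, B, C.T @ C, np.eye(B.shape[1]))
Acl = A - B @ B.T @ P1                    # Hurwitz closed-loop matrix

# P2 <= 0 solves Acl P + P Acl^T = B B^T.
P2 = solve_continuous_lyapunov(Acl, B @ B.T)

n = A.shape[0]
I = np.eye(n)
L = np.block([[I, P2], [P1, I + P1 @ P2]])
Ham = np.block([[A, -B @ B.T], [-C.T @ C, -A.T]])

D = np.linalg.solve(L, Ham @ L)           # L^{-1} Ham L
print(np.round(D, 8))                     # off-diagonal n x n blocks are ~0
\end{verbatim}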
\clred{ Under the assumptions in Proposition~\ref{prop:p_exists_stable}, the linearization of (\ref{eqn:hsys_all}) at the origin has no eigenvalues on the imaginary axis, and therefore, a stable manifold $\Lambda_S$ at the origin exists. The following theorem is a restatement of Proposition~\ref{prop:p_exists_stable} in terms of the stable manifold, which simply says that the stable manifold of (\ref{eqn:hsys_all}), when it is projected to the base $x$-space, covers the region of stability or stabilizability. } Let $\pi:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^n$ be the canonical projection; $\pi(x,p)=x$.
\begin{theorem}\label{thm:smani} Suppose that $h(x)$ is $C^2$ and $Dh(0)=0$. \begin{enumerate}[(i)]
\item Assume that the hypotheses in Theorem~\ref{prop:exp_stable} are satisfied. Then, for the Hamiltonian system (\ref{eqn:hsys_all}), the stable manifold $\Lambda_S$ \clred{satisfies} $ \pi(\Lambda_S)=\mathbb{R}^n$. \item \clred{Assume that all the hypotheses in Theorem~\ref{thm:minimizer_2} are satisfied. Assume additionally that the linear part of $(h(x), f(x))$ is detectable.} Then, for the Hamiltonian system (\ref{eqn:hsys_all}), the stable manifold $\Lambda_S$ \clred{satisfies} $ U\subset \pi(\Lambda_S)$, \clred{where $U$ is the open set for the exponential stabilization}. \end{enumerate} \end{theorem}
\section{Applications}\label{sctn:applications}
This section first provides a tool to weaken the growth conditions in Theorem~\ref{thm:minimizer_2}. Let $x=(x_1,x_2)$ with $x_1\in\mathbb{R}^{n_1}$, $x_2\in\mathbb{R}^{n_2}$, $n_1+n_2=n$. Rewrite (\ref{eqn:n_sys}) as \[ \frac{d}{dt}\bmat{x_1\\x_2}=\bmat{f_1(x_1,x_2)\\f_2(x_1,x_2)}+\bmat{g_1(x_1,x_2)\\g_2(x_1,x_2)}u, \] where $f_j:\mathbb{R}^n\to\mathbb{R}^{n_j}$, $g_j:\mathbb{R}^n\to\mathbb{R}^{n_j\times m}$, $j=1,2$.
Let $\varphi_R:\mathbb{R}^{n_2}\to\mathbb{R}$ be a $C^\infty$ cutoff function such that $\varphi_R(x_2)=1$ for $|x_2|<R$ and $\varphi_R(x_2)=0$ for $|x_2|\geqslant R+1$. Define $\tilde{f}_R(x_1,x_2):=f(x_1,\varphi_R(x_2)x_2)$ and $\tilde{g}_R(x_1,x_2):=g(x_1,\varphi_R(x_2)x_2)$.
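A minimal sketch of one admissible choice of $\varphi_R$ (the standard bump construction; any $C^\infty$ function with the two stated properties works) and of the truncated field $\tilde f_R$ is given below; the function $f$ used at the end is an arbitrary illustration of ours, not taken from the paper.
\begin{verbatim}
# One possible smooth cutoff varphi_R: equals 1 for |x2| <= R, 0 for |x2| >= R+1.
import math
import numpy as np

def _psi(s):
    # C-infinity bump ingredient: exp(-1/s) for s > 0, and 0 otherwise.
    return math.exp(-1.0 / s) if s > 0 else 0.0

def phi_R(x2, R):
    r = float(np.linalg.norm(np.atleast_1d(x2)))
    s = r - R
    return _psi(1.0 - s) / (_psi(1.0 - s) + _psi(s))

def f_tilde_R(f, x1, x2, R):
    # Truncated vector field f(x1, varphi_R(x2) * x2) from the proposition.
    return f(x1, phi_R(x2, R) * np.asarray(x2, dtype=float))

def f(x1, x2):
    # Arbitrary illustration of an f with superlinear dependence on x2.
    return -x1 + x1**2 * x2

print(phi_R(0.5, R=2.0), phi_R(4.0, R=2.0))   # 1.0 and 0.0
print(f_tilde_R(f, 1.0, 4.0, R=2.0))          # truncation suppresses the x2 term
\end{verbatim}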
\begin{proposition}\label{prop:minimizer_2_appl}Assume that (\ref{eqn:n_sys}) is $C^1$-exponentially stabilizable for \clred{an open set} $U\subset\mathbb{R}^n$ and that $h$, which is $C^1$, satisfies (\ref{ineq:groth_h}) for some positive constants $p$, $c_h$, $\rho$ and for $|x|>\rho$. Suppose that $(f,h)$ is zero-state detectable for an open set containing $|x|\leqslant\rho$. Assume finally the following. \begin{enumerate}[(i)]
\item For any $R>0$, there exist positive constants $c_f=c_f(R)$, $c_g=c_g(R)$ and $0\leqslant\theta<1$ which is independent of $R$ such that $\tilde{f}_R$ and $\tilde{g}_R$ satisfy (\ref{ineq:groth_f}) and (\ref{ineq:groth_g}), respectively, for sufficiently large $x\in\mathbb{R}^n$.
\item There exist constants $c_{f2}>0$, $c_{g2}>0$, $0\leqslant\theta_2<1$ such that
\begin{subequations}\label{ineq:groth_all_app} \begin{align}
|{f}_2(x_1,x_2)|&\leqslant c_{f2}|x_2|^{p+\theta_2} \label{ineq:groth_f_appl}\\
\|{g}_2(x_1,x_2)\|&\leqslant c_{g2}|x_2|^{p/2+\theta_2}\label{ineq:groth_g_appl} \end{align}
\end{subequations}
for all $(x_1,x_2)\in\mathbb{R}^n$. \end{enumerate} Then, an optimal control $u_R\in L^2((0,\infty);\mathbb{R}^m)$ exists for an OCP $\dot{x}=\tilde{f}_R(x)+\tilde{g}_R(x)u$, $x(0)=x_0\in U$ and cost functional (\ref{eqn:cost}). Moreover, for sufficiently large $R$, it is an optimal control for the original problem (\ref{eqn:n_sys})-(\ref{eqn:cost}). \end{proposition}
\begin{proof}
Suppose that $u_e=k(x)$ is the exponentially stabilizing feedback for $x_0\in U$ and let $x_e(t)$ be the corresponding solution. If $R>\max_{t\geqslant0}|x_e(t)|$, the corresponding solution is in the ball $|x|<R$ and the solution satisfies $|x_2|<R$ for $t\geqslant0$ and therefore, $u_e$ exponentially stabilizes $\dot x=\tilde{f}_R(x)+\tilde{g}_R(x)u$, $x(0)=x_0$. Let $J_e(x_0)$ be the value of (\ref{eqn:cost}) for this control. Next, we show that $(\tilde{f}_R, h)$ is zero-state detectable for a set containing $|x|\leqslant \rho$ for sufficiently large $R$. To this end, we assume that a solution $x_R(t)$ for $\dot x =\tilde{f}_R(x)$ satisfies $h(x_R(t))=0$ for $t\geqslant0$.
Then, $|x_R(t)|\leqslant\rho$ from (\ref{ineq:groth_h}). Therefore, for $R>\rho$, $x_R(t)$ is a solution to $\dot x=f(x)$ since $|x_{2R}(t)|\leqslant\rho<R$, where we denote $x_R(t)=(x_{1R}(t),x_{2R}(t))$, $x_{jR}(t)\in\mathbb{R}^{n_j}$, $j=1,2$. From the detectability assumption on $(f,h)$, we have $x_R(t)\to0$, $t\to\infty$. Now, the existence of the optimal control $u_R$ follows from Theorem~\ref{thm:minimizer_2}.
We prove that there exists an $R$ such that the corresponding optimal solution $\bar{x}_R(t)$ satisfies $|\bar{x}_{2R}(t)|<R$ for all $t\geqslant0$. If this is true, the cutoff function has no effect on the OCP defined by $\tilde{f}_R$, $\tilde{g}_R$ and (\ref{eqn:cost}) and therefore $u_R$ is an optimal control for the original problem (\ref{eqn:n_sys})-(\ref{eqn:cost}). Since $\bar{x}_{2R}(t)\to0$, $t\to\infty$ (by \clred{Theorems~\ref{prop:exp_stable}, \ref{thm:minimizer_2}}), define \begin{align*}
L_R &:= \max_{t\geqslant0}|\bar{x}_{2R}(t)|, \\
I_R &:= \left\{ t\geqslant0\, |\, |\bar{x}_{2R}(t)|\geqslant L_R/2 \right\}. \end{align*} We assume that $L_R\to\infty$ as $R\to\infty$ and derive a contradiction. We may assume that $R\leqslant L_R$. Then, for sufficiently large $R$, \begin{align*} J_e(x_0) &\geqslant \int_0^\infty h(\bar{x}_R(t))\,dt\\
&\geqslant c_h\int_{I_R}|\bar{x}_R(t)|^p\,dt \qquad \text{by (\ref{ineq:groth_h})}\\
&\geqslant c_h\int_{I_R}|\bar{x}_{2R}(t)|^p\,dt \geqslant c_h |I_R|\left(\frac{L_R}{2}\right)^p, \end{align*} \clred{where $c_h$ is independent of $R$,} and thus we have \begin{equation}
|I_R|\leqslant C{L_R}^{-p}\leqslant CR^{-p}, \qquad\text{$C$ independent of $R$.}\label{ineq:I_R} \end{equation} Next, we give an estimate on the length of $\bar{x}_{2R}(t)$ in $I_R$. To do this, we note from (\ref{ineq:groth_all_app}) that $\tilde{f}_{2R}$, $\tilde{g}_{2R}$ satisfy the estimates \clred{ \begin{equation}
|\tilde{f}_{2R}(x_1,x_2)| = O(R^{p+\theta_2}),\quad
\|\tilde{g}_{2R}(x_1,x_2)\| =O(R^{p/2+\theta_2}) \label{ineq:groth_all_app2} \end{equation} } for $(x_1,x_2)\in\mathbb{R}^n$. Now we compute the length \begin{align*}
\int_{I_R}\left|\frac{d}{dt}\bar{x}_{2R}(t)\right|\,dt &\leqslant
\int_{I_R}|\tilde{f}_{2R}(\bar{x}_R(t))| + \|\tilde{g}_{2R}(\bar{x}_R)\||u_R(t)|\,dt\\
&\leqslant C|I_R|R^{p+\theta_2} +C'\sqrt{|I_R|}\|u_R\|_{L^2((0,\infty);\mathbb{R}^m)}R^{p/2+\theta_2}\quad\text{by (\ref{ineq:groth_all_app2})}\\
&\leqslant C'' R^{\theta_2}\leqslant C'' (L_R)^{\theta_2} \quad\text{by (\ref{ineq:I_R})} \end{align*}
where $C''$ is a positive constant independent of $R$ since $\|u_R\|_{L^2((0,\infty);\mathbb{R}^m)}^2<2J_e(x_0)$.
If $L_R\to\infty$ as $R\to\infty$, the left side of the above grows at least as $O(L_R)$, which is a contradiction. If $R>\sup_{R>0}L_R$, then $|\bar{x}_{2R}(t)|<R$ for $t\geqslant0$. This choice of $R$ is possible for every $x_0\in U$ and the proposition has been proved. \end{proof}
\subsection{Feedback linearizable systems} Control system (\ref{eqn:n_sys}) is said to be {\em feedback linearizable} (see \cite{Khalil:92:NS} page 293 or \cite{Isidori:95:NCS} for more detail) in a domain $U\subset\mathbb{R}^n$ containing the origin if there exists a $C^1$ diffeomorphism $T:U\to\mathbb{R}^n$ such that $T(0)=0$ and the change of coordinates $y=T(x)$ transforms (\ref{eqn:n_sys}) into \[ \dot y = A y + B\beta(y)^{-1}[u-\alpha(y)] \] with $(A,B)$ controllable and $\alpha(y)$, $\beta(y)$ are $C^1$ with $\beta(y)$ being a nonsingular matrix for $y\in T(U)$. If (\ref{eqn:n_sys}) is feedback linearizable in $U$, the input transformation $u=\alpha(y)+\beta(y)v$ brings the system into $\dot y=Ay+Bv$. It is possible to prove that a feedback linearizable system in $U$ is exponentially stabilizable for $U$ in the following way. Take a matrix $K\in\mathbb{R}^{m\times n}$ and an open set $V\subset T(U)$ such that all the solutions of $\dot y=(A+BK)y$ starting from $V$ at $t=0$ stay in $T(U)$ for $t\geqslant0$ and converge to $y=0$ as $t\to\infty$. Also, from the controllability of $(A,B)$, for any $y_0\in T(U)$, there exists a piece-wise continuous control that brings $y_0$ to a point in $V$ within $T(U)$ in finite time. Hence, for any $x_0\in U$ there is a control $u_e$ with which an exponential estimate holds for the corresponding solution of (\ref{eqn:n_sys}).
\begin{ex}[\cite{Khalil:92:NS} page 296] A mathematical model of a synchronous generator connected to an infinite bus is \begin{align*}
\frac{d}{dt}\bmat{x_1\\x_2\\x_3}=\bmat{x_2\\ -a[(1+x_3)\sin(x_1+\delta)-\sin\delta]-bx_2\\
-cx_3 +d[\cos(x_1+\delta)-\cos\delta]}+\bmat{0\\0\\1}u \end{align*} where $a$, $b$, $c$ and $\delta$ are positive constants. It is feedback linearizable in $U:=\{-\delta<x_1<\pi-\delta\}\subset\mathbb{R}^3$. This system satisfies the growth conditions (\ref{ineq:groth_f}), (\ref{ineq:groth_g}) for $p\geqslant1$, $\theta=0$ in $\mathbb{R}^3$. Therefore, for any $h(x)$ satisfying $h(0)=0$, $Dh(0)=0$ and (\ref{ineq:groth_h}) for some $c_h>0$ and $\rho>0$, an optimal control exists for each initial value in $U$. \end{ex}
\begin{ex}[A 2-dimensional pendulum in \cite{Sakamoto:13:automatica,Horibe:17:ieee_cst,Oishi:17:ifac-wc}] \clred{A pendulum on a cart is a classical control experiment in linear and nonlinear control theories} and if the cart position is neglected, we have a system of the form \begin{align*} \dot{x}_1 &=x_2\\ \dot{x}_2 &= \frac{\sin x_1 -x_2^2\sin x_1\cos x_1}{1+\sin^2x_1}-\frac{\cos x_1}{1+\sin^2x_1}u \end{align*}
where $x_1$ is the angle of the pendulum from vertical (up). For $U:=\{|x_1|<\frac{\pi}{2}\}\subset\mathbb{R}^2$, the system is feedback linearizable and growth conditions (\ref{ineq:groth_f}), (\ref{ineq:groth_g}) are satisfied for $p\geqslant2$, $\theta=0$ and thus, for a suitable $h$, an optimal control exists in $U$. In \cite{Horibe:17:ieee_cst,Oishi:17:ifac-wc}, however, exponentially stabilizing feedback swing-up controls (namely $(x_1(0),x_2(0))=(\pi,0)$) are designed by computing stable manifolds of associated Hamiltonian systems. Moreover, it has been reported that for $h(x)=\varepsilon (|x_1|^2+|x_2|^2)$ with $\varepsilon$ small, a number of feedback swing-up stabilization controls exist for a single cost functional (\ref{eqn:cost}). Each control effectively uses reaction (or swing) of the pendulum and the more swings are used during the control, the smaller the cost value gets. In \cite{Horibe:17:ieee_cst}, a detailed account of this phenomenon (namely, the existence of multiple solutions to an HJBE) is shown using 3D figures of the stable manifold. However, it has not been confirmed whether or not an optimal control that minimizes the cost functional exists. Now, using Theorem~\ref{thm:minimizer_2}, we can answer this question by saying that as long as $\varepsilon>0$, there exists an optimal control that achieves the minimum value of the cost. The question of whether or not infinitely many swing-up controllers for the HJBE exist, when $h=0$, is still open. \end{ex}
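For the pendulum above, the local exponential stabilizability required by Theorem~\ref{thm:minimizer_2} can be made concrete by a simple LQR gain for the linearization at the upright equilibrium (this is our own minimal sketch; it is not the stable-manifold-based swing-up design of \cite{Horibe:17:ieee_cst,Oishi:17:ifac-wc}, and on the whole set $U$ one would instead use the feedback linearization mentioned above):
\begin{verbatim}
# Linearization of the pendulum at the upright equilibrium (x1, x2) = (0, 0):
#   d/dt [x1, x2] = A [x1, x2] + B u  with A = [[0, 1], [1, 0]], B = [[0], [-1]].
# An LQR gain for this pair gives a C^1 feedback that is exponentially
# stabilizing in a neighborhood of the origin.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [-1.0]])
Q, R = np.eye(2), np.eye(1)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # feedback u = -K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
# Both eigenvalues have negative real part.
\end{verbatim}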
\subsection{Systems with globally exponentially stable zero dynamics} It is known (see, e.g., \cite{Byrnes:91:ieee-ac}, \cite{Isidori:95:NCS}) that under suitable conditions such as a relative degree condition, system (\ref{eqn:n_sys}), when it is a single-input system, can be transformed by a smooth coordinate change and a smooth feedback transformation into \begin{subequations} \begin{align} &\dot\eta = q(\eta,\xi_1)\label{eqn:zero_dyn1}\\ & \dot{\xi}_1 = \xi_2, \ldots, \dot{\xi}_r = v \label{eqn:zero_dyn2} \end{align} \end{subequations} where $\xi_j(t)\in\mathbb{R}^1$, $j=1,\ldots,r$, $\eta(t)\in\mathbb{R}^{n-r}$ and $v\in\mathbb{R}^1$ is a new input. This system transformation is important for understanding the system structure from an input-output viewpoint, and $\dot\eta=q(\eta,0)$ is called the zero dynamics, describing the dynamics on which the system output $y=\xi_1$ is identically zero.
In this subsection, we are interested only in an OCP for (\ref{eqn:zero_dyn1})-(\ref{eqn:zero_dyn2}) itself, with input $v$ and a cost functional. Suppose that the zero dynamics $\dot\eta=q(\eta,0)$ is globally exponentially stable. Then, there exists a smooth feedback control that exponentially stabilizes (\ref{eqn:zero_dyn1})-(\ref{eqn:zero_dyn2}) for all initial conditions in $\mathbb{R}^n$. Suppose, in addition, that for any $R>0$ there is a $C=C(R)>0$ such that $q(\eta, \varphi_R(\xi_1)\xi_1)$ satisfies the growth condition \[
|q(\eta,\varphi_R(\xi_1)\xi_1)|\leqslant C|\eta|^p \] for sufficiently large $\xi_1$ and $\eta$, where $\varphi_R$ is the cutoff function and $p>0$ is a constant independent of $R$.
We consider the OCP for (\ref{eqn:zero_dyn1})-(\ref{eqn:zero_dyn2}) with $J=\int_0^\infty |v|^2/2 +h(\xi_1,\ldots,\xi_r,\eta)\,dt$, where $h\geqslant0$ is $C^1$ with $h(0)=0$, $Dh(0)=0$ and satisfies (\ref{ineq:groth_h}) for the $p>0$ above and some constants $c_h>0$, $\rho>0$. Assume, in addition, that $h$ is zero-state detectable with (\ref{eqn:zero_dyn1})-(\ref{eqn:zero_dyn2}) in $\mathbb{R}^n$. Then, from Proposition~\ref{prop:minimizer_2_appl}, \clred{an optimal control} exists for the OCP for any initial condition.
\begin{ex} Consider a 3-dimensional system \begin{align*} \dot x_1 &=-x_1 +x_1^2x_2\\ \dot x_2 &=x_3 \\ \dot x_3 &=u. \end{align*} This system is in the form (\ref{eqn:zero_dyn1})-(\ref{eqn:zero_dyn2}) with globally exponentially stable zero dynamics $\dot x_1=-x_1$ and is globally exponentially stabilizable. Consider also an OCP for this system with a quadratic cost functional $J=\frac{1}{2}\int_0^\infty u^2+x_1^2+x_2^2+x_3^2\,dt$. This problem does not satisfy the growth condition in Theorem~\ref{thm:minimizer_2}, but, with a cutoff function on $x_2$, the right side of the first equation $-x_1+\varphi_R(x_2)x_1^2x_2$ satisfies the hypotheses in Proposition~\ref{prop:minimizer_2_appl}. Therefore, a solution for this OCP exists \clred{for any $x_0\in\mathbb{R}^3$}. \end{ex}
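A minimal sketch of the bound used in this example, assuming (as is standard for such cutoffs) that $\varphi_R(x_2)=1$ for $|x_2|\leqslant R$ and $\varphi_R(x_2)=0$ for $|x_2|\geqslant2R$, so that $|\varphi_R(x_2)x_2|\leqslant2R$:
\[
\bigl|-x_1+\varphi_R(x_2)x_1^2x_2\bigr|\leqslant |x_1|+2R\,|x_1|^2\leqslant(1+2R)\,|x_1|^2\qquad(|x_1|\geqslant1),
\]
which is exactly a growth bound of the form $|q(\eta,\varphi_R(\xi_1)\xi_1)|\leqslant C(R)|\eta|^p$ with $p=2$ and $C(R)=1+2R$, as required in the preceding discussion.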
\subsection{Turnpike analysis} Let us consider a finite horizon OCP for (\ref{eqn:n_sys}) with \begin{equation}
J_T = \int_0^T |u(t)|^2/2 +h(x(t))\,dt,\label{cost:J_T} \end{equation} where $h(x)=x^\top C^\top Cx$, $C\in \mathbb{R}^{r\times n}$, and $x(T)=x_f$ is specified.
Suppose that $u_T$ is an optimal control and $x_T$ is the optimal trajectory. The optimal control $u_T$ is said to {\em enjoy the turnpike property} if for any $\varepsilon>0$, there exists an $\eta_\varepsilon>0$ such that \[
\left| \left\{t\geqslant0\,|\,|u_T (t)|+|x_T(t,x_0)|>\varepsilon \right\} \right| <\eta_\varepsilon \]
for all $T>0$, where $\eta_\varepsilon$ depends only on $\varepsilon$, $f$, $g$, $h$ and $x_0$, and $|\cdot|$ denotes the Lebesgue measure of the set. A necessary condition for optimality is the existence of a solution $(x(t),p(t))$, $0\leqslant t\leqslant T$, of the Hamiltonian system (\ref{eqn:hsys1})-(\ref{eqn:hsys2}) with $x(0)=x_0$, $x(T)=x_f$.
In \cite{sakamoto:20:prep_cdc}, it is shown that one of the sufficient conditions for the OCP to have a solution with the turnpike property is that, at an equilibrium of the Hamiltonian system, a stable manifold $\Lambda_S$ exists and satisfies $x_0\in \mathrm{Int}(\pi(\Lambda_S))$, and an unstable manifold $\Lambda_U$ exists and satisfies $x_f\in\mathrm{Int}(\pi(\Lambda_U))$, where $\mathrm{Int}$ denotes the interior of a set in $\mathbb{R}^n$ \clred{and $\pi$ is the canonical projection that appeared in \S~\ref{sctn:stabl_manifold}}. We show, in the following example, that those conditions can be checked using the results of the present paper.
\begin{ex} Backstepping design is one of the most popular and powerful feedback design methods, widely applied in practice (see, e.g., \cite{Byrnes:89:syscon,Krstic:95:NACD,Sepulchre:97:CNC}). In this example, we exhibit a class of nonlinear control systems for which, using backstepping stabilization, \clred{the} turnpike occurs for all initial and terminal conditions.
Let us consider a nonlinear control system of the form \begin{subequations}\label{eqns:backstepp} \begin{align} &\dot x_1 = f_1(x_1)+g_1(x_1)x_2 \label{eqn:backstepp1}\\ &\dot x_2 =f_2(x_1,x_2)+g_2(x_1,x_2)u,\label{eqn:backstepp2} \end{align} \end{subequations} where $x_1,\ x_2\in\mathbb{R}^{n}$ are the \clred{states} and $u\in\mathbb{R}^n$ is the control. We assume that $(0,0)$ is an equilibrium of (\ref{eqn:backstepp1})-(\ref{eqn:backstepp2}), that $f_j$, $g_j$, $j=1,2$ are smooth and that $g_1(x_1)$, $g_2(x_1,x_2)$ are invertible for all $x_1$, $x_2$.
The backstepping technique is a cascade design: (\ref{eqn:backstepp1}) is first stabilized with a virtual stabilizing input $x_2=\alpha(x_1)$, and then a feedback control is obtained that stabilizes the error system for $z:=x_2-\alpha(x_1)$. A typical backstepping stabilization under the above conditions proceeds as follows. First, obtain an $\alpha(x_1)$ that globally exponentially stabilizes (\ref{eqn:backstepp1}) with a Lyapunov function $V_1(x_1)$, such as $\alpha(x_1)=g_1(x_1)^{-1}[-f_1(x_1)-x_1]$. Next, in the new coordinates $(x_1,z)$ with $z=x_2-\alpha(x_1)$, the overall exponentially stabilizing feedback is given as \[
For the above $\alpha(x_1)$, one takes $V_1(x_1)=|x_1|^2/2$, and the global exponential stability of the closed-loop system is guaranteed with
$V(x_1,z)=V_1(x_1)+|z|^2/2$ and $\dot{V}=-|x_1|^2-|z|^2$. The time-reversed system of (\ref{eqn:backstepp1})-(\ref{eqn:backstepp2}) is \begin{subequations}\label{eqns:backstepp_rev} \begin{align} & x_1' = -f_1(x_1)-g_1(x_1)x_2 \label{eqn:backstepp_rev1}\\ &x_2' = -f_2(x_1,x_2)-g_2(x_1,x_2)u,\label{eqn:backstepp_rev2} \end{align} \end{subequations} where $(\cdot)' = \frac{d}{d\tau}$, $\tau =-t$. This system can also be globally exponentially stabilized via the backstepping method. Thus for OCPs with cost functionals satisfying the growth conditions in Theorem~\ref{thm:minimizer_2} or Proposition~\ref{prop:minimizer_2_appl} \clred{and the linear detectability condition}, one can give estimates on the existence regions of stable and unstable manifolds \clred{$\Lambda_S$, $\Lambda_U$} in associated Hamiltonian systems via Theorem~\ref{thm:smani} \clred{as $ \pi(\Lambda_S)=\mathbb{R}^n$, $\pi(\Lambda_U)=\mathbb{R}^n$. }
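Returning for a moment to the Lyapunov claim above, here is a minimal sketch of the computation behind $\dot V=-|x_1|^2-|z|^2$ (with the particular $\alpha$ and $u$ displayed there): since $f_1+g_1\alpha=-x_1$ and $DV_1(x_1)=x_1^\top$, the closed-loop dynamics in the coordinates $(x_1,z)$ read $\dot x_1=-x_1+g_1(x_1)z$ and $\dot z=\dot x_2-\dot\alpha=-g_1(x_1)^\top x_1-z$, whence
\begin{align*}
\dot V&=x_1^\top\dot x_1+z^\top\dot z
=-|x_1|^2+x_1^\top g_1(x_1)z-z^\top g_1(x_1)^\top x_1-|z|^2
=-|x_1|^2-|z|^2 .
\end{align*}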
\clred{The result in \cite{sakamoto:20:prep_cdc} guarantees that the turnpike occurs for all initial and terminal states. }
For example, a nonlinear system \begin{align*}
\dot{x}_1 &=x_1^2 +(1+x_1^2)x_2\\
\dot{x}_2 &= x_2^2 +u \end{align*} satisfies the conditions above and is globally exponentially stabilizable at the origin via a smooth backstepping feedback. For a cost functional \[ J = \frac{1}{2}\int_0^\infty u^2+x_1^2+x_2^2\,dt, \] one readily sees that Proposition~\ref{prop:minimizer_2_appl} can be applied \clred{(note that the growth conditions in Theorem~\ref{thm:minimizer_2} are not satisfied)}. Then, using Theorem~\ref{thm:smani} for this OCP, the stable manifold $\Lambda_S$ of the associated Hamiltonian system at the origin exists with $\pi(\Lambda_S)=\mathbb{R}^{2}$ and applying the same argument for the time-reversed OCP replacing $t$ by $\tau$, the unstable manifold $\Lambda_U$ of the Hamiltonian system at the origin exists with $\pi(\Lambda_U)=\mathbb{R}^{2}$. Therefore, we conclude that for all initial and terminal states the finite horizon OCP \[ J = \frac{1}{2}\int_0^T u^2+x_1^2+x_2^2\,dt, \] with sufficiently large $T$ has an optimal control that enjoys the turnpike property. \end{ex} \begin{remark} In \cite{sakamoto:20:prep_cdc}, a numerical example is worked out in detail in which turnpike occurs for all $x_0$ and for sufficiently small $x_f$ using the global exponential stabilizability. The example also exhibits the peaking phenomenon \cite{Sussmann:91:ieee_tac} during the turnpike. \end{remark}
{\bf Acknowledgment.} The author would like to thank the Chair of Computational Mathematics at University of Deusto for their support during his stay. He is also grateful to Dario Pighin for valuable discussions. In particular, the proofs of Proposition~\ref{prop:minimizer} and Lemmas~\ref{lemma:ode_affine} and \ref{lemma:h_convergence} owe much to him.
\appendix \section{Useful results} \setcounter{equation}{0} \renewcommand{\Alph{section}.\arabic{equation}}{\Alph{section}.\arabic{equation}} \renewcommand{\Alph{section}.\arabic{theorem}}{\Alph{section}.\arabic{theorem}}
This Appendix includes several preliminary propositions and lemmas for the proofs of the main results.
\begin{proposition}\label{prop:ode_new} Let $D\subset\mathbb{R}^{n+1}$ be an open set and let $F:D\to\mathbb{R}^n$, $(t,x)\in D \mapsto F(t,x)$, be a map which is measurable in $t$ and continuous in $x$. For positive $\alpha$, $\beta$, let us introduce notations $I_\alpha(t_0)$, $B_\beta(x_0)$ representing the sets \[
I_\alpha(t_0)=\{t\geqslant0\,|\, |t-t_0|\leqslant \alpha\}, \quad B_\beta(x_0)=\{x\in\mathbb{R}^n\,|\,|x-x_0|\leqslant\beta\}. \] Let $U\subset\mathbb{R}^n$ be a compact set and assume that one can take $\bar\beta>0$ such that \[ V:=[0,\infty)\times \bigcup_{x_0\in U}B_{\bar{\beta}}(x_0)\subset D. \] Assume also that there exist $0<\alpha$, $0<\beta\leqslant\bar\beta$ and functions $m(t)$, $k(t)$ such that the following are satisfied, \begin{subequations} \begin{gather}
|F(t,x)|\leqslant m(t) \quad \text{on }V,\quad \text{and}\quad \int_{I_\alpha(t_0)}m(t)\,dt \leqslant \beta \quad\text{for all } t_0\geqslant0, \label{ineq:ode_new1}\\
|F(t,x)-F(t,y)|\leqslant k(t)|x-y| \quad \text{on }V,\quad \text{and}\quad \int_{I_\alpha(t_0)}k(t)\,dt \leqslant\frac{1}{2} \quad\text{for all } t_0\geqslant0.\label{ineq:ode_new4} \end{gather} \end{subequations}
Then, for any $t_0\geqslant0$, $x_0\in U$, there exists a unique solution $x(t,t_0,x_0)$ to $\dot x=F(t,x)$ passing through $(t_0,x_0)$ defined on $I_\alpha(t_0)$ satisfying $|x(t,t_0,x_0)-x_0|\leqslant\beta$ for $t\in I_\alpha(t_0)$, where $\alpha$, $\beta$ are independent of $t_0$. This $x$ is a continuous map defined on $\bigcup_{t_0\geqslant0} (I_\alpha(t_0)\times \{t_0\})\times U$. \end{proposition} \begin{proof} Define a closed set $\mathscr{A}$ in $C^0(I_\alpha(0);\mathbb{R}^n)$ and an operator $T:\mathscr{A}\to C^0(I_\alpha(0);\mathbb{R}^n)$ by \begin{gather*}
\mathscr{A}=\{\varphi\in C^0(I_\alpha(0);\mathbb{R}^n)\,|\, \varphi(0)=0, \ |\varphi(t)|\leqslant\beta \text{ for } t\in I_\alpha(0)\}\\ (T\varphi)(t)=\int_{t_0}^{t_0+t}F(s,\varphi(s-t_0)+x_0)\,ds. \end{gather*} Then for $t\in I_\alpha(0)$, we have \begin{align*}
|(T\varphi)(t)| &\leqslant \int_{t_0}^{t_0+t}|F(s,\varphi(s-t_0)+x_0)|\,ds\\ &\leqslant \int_{t_0}^{t_0+t}m(s)\,ds\\ &\leqslant \int_{I_\alpha(t_0)}m(s)\,ds\leqslant\beta, \end{align*} where we have used $(s,\varphi(s-t_0)+x_0)\in V$ for $s\in [t_0,t_0+t]$ and (\ref{ineq:ode_new1}), implying that $T\varphi \in \mathscr{A}$. Since $(T\varphi)(0)=0$, we confirm that $T:\mathscr{A}\to \mathscr{A}$.
Next, take $\varphi, \bar \varphi \in \mathscr{A}$ and $t\in I_\alpha(0)$. Then, \begin{align*}
|(T\varphi)(t)-(T\bar\varphi)(t)| &\leqslant \int_{t_0}^{t_0+t}|F(s,\varphi(s-t_0)+x_0)-F(s,\bar\varphi(s-t_0)+x_0)|\,ds\\
&\leqslant \int_{t_0}^{t_0+t}k(s)|\varphi(s-t_0)-\bar\varphi(s-t_0)|\,ds\\
&\leqslant\|\varphi-\bar\varphi\|_{C^0(I_\alpha(0);\mathbb{R}^n)}\int_{I_\alpha(t_0)}k(s)\,ds\\
&\leqslant \frac{1}{2}\|\varphi-\bar\varphi\|_{C^0(I_\alpha(0);\mathbb{R}^n)}, \end{align*} where we have used $(s,\varphi(s-t_0)+x_0),\ (s,\bar\varphi(s-t_0)+x_0)\in V$ for $s\in [t_0,t_0+t]$ and (\ref{ineq:ode_new4}). Therefore, by the Contraction Mapping Theorem, there exists a unique $\varphi\in\mathscr{A}$ such that \[ \varphi(t)=\int_{t_0}^{t_0+t}F(s,\varphi(s-t_0)+x_0)\,ds. \] Defining $x(t)=\varphi(t-t_0)+x_0$, $x(t)$ is the unique solution of $\dot x=F(t,x)$ passing through $(t_0,x_0)$ defined on $I_\alpha(t_0)$, where $\alpha$ is independent of $t_0$.
Regarding $T$ as $T_{(t_0,x_0)}$, we have proved that $T_{(t_0,x_0)}$ is a uniform contraction with respect to $(t_0,x_0)$ in $[0,\infty)\times U$. Therefore, from the Uniform Contraction Mapping Theorem, the fixed point $\varphi$ is a continuous function of $(t_0,x_0)$, and hence $x(t,t_0,x_0)$ is a continuous function of $t\in I_\alpha(t_0)$ and $(t_0,x_0)\in [0,\infty)\times U$, where $\alpha$ is independent of $t_0$. \end{proof}
\begin{lemma}\label{lemma:ode_affine} \clred{S}uppose that $f:\mathbb{R}^n\to\mathbb{R}^n$ is $C^1$ and that $g:\clred{\mathbb{R}^n\to\ }\mathbb{R}^{n\times m}$ is Lipschitz continuous. Let $U\subset\mathbb{R}^n$ be a compact set and let \[ V:=[0,\infty)\times \bigcup_{x_0\in U}B_\beta(x_0). \] \clred{Let us consider (\ref{eqn:n_sys}) with $u\in L^2((0,\infty);\mathbb{R}^m)$.} Then, there exist $\alpha$, $\beta>0$ and $m(t)$, $k(t)$ such that (\ref{ineq:ode_new1})-(\ref{ineq:ode_new4}) in Proposition~\ref{prop:ode_new} \clred{with $F(t,x)=f(x)+g(x)u(t)$} are satisfied. Moreover, there exists an $M>0$ which is independent of $t_0$ such that \clred{the solution satisfies} \[
|x(t,t_0,x_0)|\leqslant M \quad \text{for }t\in I_\alpha(t_0), \ t_0\geqslant0. \] \end{lemma} \begin{proof} Let $B_{\beta,U}:=\bigcup_{x_0\in U}B_\beta(x_0)$. We show that the conditions in Proposition~\ref{prop:ode_new} are satisfied. First, \begin{align*}
|f(x)+g(x)u(t)|&\leqslant|f(x)|+\|g(x)\|\,|u(t)|\\
&\leqslant \sup_{x\in B_{\beta,U}}|f(x)|+\sup_{x\in B_{\beta,U}}\|g(x)\|\,|u(t)|=:m(t). \end{align*} Then, for $t_0\geqslant0$, \begin{align}
\int_{I_\alpha(t_0)}m(t)\,dt &\leqslant 2\alpha\sup_{x\in B_{\beta,U}}|f(x)|+\sup_{x\in B_{\beta,U}}\|g(x)\|\int_{I_{\alpha}(t_0)}|u(t)|\,dt\nonumber\\
&\leqslant 2\alpha \sup_{x\in B_{\beta,U}}|f(x)|+\sqrt{2\alpha}\sup_{x\in B_{\beta,U}}\|g(x)\|\|u\|_{L^2((0,\infty);\mathbb{R}^m)}\nonumber \end{align} by the Cauchy--Schwarz inequality, since $|I_\alpha(t_0)|\leqslant2\alpha$. Also, we have \begin{align}
|f(x)+g(x)u(t)-f(y)-g(y)u(t)|&\leqslant
|f(x)-f(y)|+\|g(x)-g(y)\|\,|u(t)|\nonumber\\
&\leqslant \Bigl(\sup_{x\in B_{\beta,U}}\|Df(x)\| +L_g|u(t)|\Bigr)|x-y| =: k(t)\,|x-y|,\nonumber \end{align} where $L_g$ is a Lipschitz constant for $g$ in $B_{\beta,U}$. Then, for $I_\alpha(t_0)$, $t_0\geqslant0$, \[
\int_{I_\alpha(t_0)}k(t)\,dt \leqslant 2\alpha \sup_{x\in B_{\beta,U}}\|Df(x)\|+\sqrt{2\alpha}\,L_g\|u\|_{L^2((0,\infty);\mathbb{R}^m)}. \] We can find $\alpha$, $\beta>0$ independently of $t_0$ satisfying \begin{gather*}
2\alpha \sup_{x\in B_{\beta,U}}|f(x)|+\sqrt{2\alpha}\sup_{x\in B_{\beta,U}}\|g(x)\|\|u\|_{L^2((0,\infty);\mathbb{R}^m)}<\beta,\\
2\alpha \sup_{x\in B_{\beta,U}}\|Df(x)\|+\sqrt{2\alpha}\,L_g\|u\|_{L^2((0,\infty);\mathbb{R}^m)}<\frac{1}{2}, \end{gather*}
showing that all the conditions in Proposition~\ref{prop:ode_new} are satisfied. Since $|x(t,t_0,x_0)-x_0|\leqslant\beta$ for $t\in I_{\alpha}(t_0)$, and since we can take an $r>0$ such that $|x_0|<r$ for all $x_0\in U$, we obtain \[
|x(t,t_0,x_0)|\leqslant |x_0|+\beta<r+\beta, \] where the right-hand side is independent of $t_0$. \end{proof}
\begin{lemma}\label{lemma:h_convergence} Let $H:\mathbb{R}^n\to\mathbb{R}$ be a nonnegative locally Lipschitz function, which is coercive. Let $x(t)$ be a solution to (\ref{eqn:n_sys}) such that $\int_0^\infty H(x(t))\,dt<\infty$ for a $u\in L^2((0,\infty);\mathbb{R}^m)$. Then, $x(t)$ is bounded for $t\geqslant0$ and $H(x(t))\to0$ as $t\to\infty$. \end{lemma}
\begin{proof} From the assumptions on $H$, there exist constants $R>0$, $\mu>0$ such that \begin{align}
|x|> R \ \Rightarrow \mu< H(x).\label{ineq:outsideR} \end{align} Take the compact set $U$ in Lemma~\ref{lemma:ode_affine} as $U=B_{R}(0)$. Take also $\alpha$ and $M$ from the same lemma. We note that if $x(\bar t)\in U$ for some $\bar t >0$, then, \begin{equation}
|x(t)|<M \quad \text{for }t\in I_\alpha(\bar t),\label{ineq:bunded_M} \end{equation} where $M$ and $\alpha$ are independent of $\bar t$. We next prove the following claim. If a nonnegative integrable function $\psi(t)$ satisfies $\int_0^\infty\psi(t)\,dt<\infty$, then, for any $l>0$ and $\varepsilon>0$, there exists a $T_{l,\varepsilon}>0$ such that for any $\tau >T_{l,\varepsilon}$, there exists a $\bar t\in [\tau-\varepsilon,\tau+\varepsilon]$ such that $\psi(\bar t)<l$. Suppose that the claim is not true. Then, there exist $l>0$ and $\varepsilon>0$ such that for any $T>0$, there exists a $t^*>T$ such that $\psi(t)\geqslant l$ for all $t\in [t^*-\varepsilon,t^*+\varepsilon]$. From this, we take a sequence $\{t_n\}$, $t_n>0$, $t_n +2\varepsilon<t_{n+1}$, $t_n\to\infty$ $(n\to\infty)$ such that $\psi(t)\geqslant l$ for all $t\in [t_n-\varepsilon,t_n+\varepsilon]$. Then we have \[
which is a contradiction. Now we apply this claim with $l=\mu$ and $\varepsilon=\alpha$. Then, there exists a $T_{\mu,\alpha}>0$ such that for any $\tau>T_{\mu,\alpha}$, there exists a $\bar t\in [\tau-\alpha,\tau+\alpha]$ such that $H(x(\bar t))\leqslant\mu$. Then, from (\ref{ineq:outsideR}), we have $|x(\bar t)|\leqslant R$, equivalently, $x(\bar t)\in U$. Then, from (\ref{ineq:bunded_M}), \begin{equation}
|x(t)|<M \quad \text{for }t\in I_\alpha(\bar t).\label{ineq:boundM2} \end{equation}
But, $\bar t\in [\tau-\alpha,\tau+\alpha]$ implies that $\tau\in I_\alpha(\bar t)$. Then, from (\ref{ineq:boundM2}), $|x(\tau)|<M$. We emphasize that $M$ and $\alpha$ are independent of $\bar t$ and $\tau$ is any sufficiently large time (larger than $T_{\mu,\alpha}$). Therefore, $x(t)$ is bounded on $[T_{\mu,\alpha},\infty)$. Since $x(t)$ is continuous, it is bounded on $[0,\infty)$. Moreover, $x(t)$ is uniformly continuous on $[0,\infty)$, since along the bounded trajectory $|x(t_2)-x(t_1)|\leqslant \int_{t_1}^{t_2}\bigl(|f(x(t))|+\|g(x(t))\|\,|u(t)|\bigr)dt\leqslant C(t_2-t_1)+C\sqrt{t_2-t_1}\,\|u\|_{L^2((0,\infty);\mathbb{R}^m)}$ by the Cauchy--Schwarz inequality, with $C>0$ depending only on the bound for $x(t)$. Hence $H(x(t))$ is uniformly continuous on $[0,\infty)$, $H$ being locally Lipschitz. From Barbalat's Lemma (see, e.g., \cite{Popov:73:HCS}), we have $H(x(t))\to0$ as $t\to\infty$. \end{proof}
\begin{remark}
Lemma~\ref{lemma:h_convergence} can be seen as a nonlinear finite-dimensional modification of the Datko-Pazy Theorem (see \cite{Pazy:83:SLOAPDE}, Page~116, Theorem~4.1). Namely, in addition to the hypotheses in Lemma~\ref{lemma:h_convergence}, assume that for any $\varepsilon>0$ there exists a $\delta_\varepsilon>0$ such that $|x|<\varepsilon$ whenever $H(x)<\delta_\varepsilon$. Then, $x(t)\to0$ as $t\to\infty$. \end{remark}
The following proposition is a generalization of Lemma~3.1 on Page~24 in \cite{Hale:73:ODE} to Carath\'eodory solutions.
\begin{proposition}\label{prop:hale_lemma} Let $D\subset \mathbb{R}^{n+1}$ be an open set. For $m=0,1,\ldots,$ let $f_m:D\to\mathbb{R}^n$ be maps such that $f_m(t,x)$ are measurable in $t$ for each $x$ and continuous in $x$ for each $t$. For $f_m(t,x)$, $m=0,1,2,\ldots,$ assume the following hypotheses. For any compact $\bar D\subset D$, there exist integrable $k_0(t)$, $k_m(t)$, $\bar{k}_m(t)$, $m=1,2,\ldots$, such that \begin{align*}
&|f_0(t,x)|\leqslant k_0(t)\quad \text{for}\quad (t,x)\in \bar D,\\
&|f_m(t,x)-f_0(t,x)|\leqslant k_m(t)\quad \text{for}\quad (t,x)\in \bar D,\\ &\int_Ik_m(t)\,dt \to 0 \ (m\to\infty)\quad\text{for any interval }I\subset\bar D,\\
& \int_I k_m(t)\,dt\leqslant C |I| \quad\text{for any interval }I\subset\bar D, \\ &\qquad \qquad\qquad\qquad \text{ where $C>0$ is a constant independent of $m$},\\
& |f_m(t,x)-f_m(t,y)|\leqslant\bar{k}_m(t)|x-y|\quad \text{for}\quad (t,x), (t,y)\in \bar D,\\ & \int_I\bar{k}_m(t)\,dt \quad \text{is bounded for }m\in\mathbb{N}, I\subset\bar D. \end{align*} Let $x_m\in\mathbb{R}^{n}$, $m=0,1,2,\ldots$, be points such that $x_m\to x_0$ when $m\to\infty$. Finally, let $\varphi_m$ be a solution of \[ \dot x=f_m(t,x) \] passing through $(t_0,x_m)$, $m=0, 1,2,\ldots$. If $\varphi_0(t)$ is the unique solution defined on a finite interval $[a,b]$, then, for sufficiently large $m$, $\varphi_m$ is defined on $[a,b]$ and converges uniformly to $\varphi_0$ on $[a,b]$ as $m\to\infty$. \end{proposition} \begin{proof}
Let $S:=\{(t,\varphi_0(t))\,|\,t\in[a,b]\}$ and $U$ be a compact set in $D$ which contains $S$ in its interior; $S\subset U\subset D$. By the hypotheses, there exist integrable $k_m(t)$ in $U$, $m=0,1,2,\ldots$, such that \begin{align}
&|f_0(t,x)|\leqslant k_0(t)\quad \text{for }(t,x)\in U, \label{ineq:f_0}\\
&|f_m(t,x)-f_0(t,x)|\leqslant k_m(t)\quad \text{for }(t,x)\in U, \ m=1,2,\ldots,\label{ineq:fm-f0}\\ &\int_I k_m(t)\,dt\to0 \quad \text{as }m\to\infty \text{ for } I\subset U, \label{eqn:int_km_converg}\\
&\int_I k_m(t)\,dt\leqslant C|I| \text{ for } I\subset U, \label{ineq:km_int_uniform_bdd} \end{align} where $C>0$ is a constant independent of $m$. Take $\alpha>0$, $\beta>0$ such that \begin{gather} I_\alpha(\bar t)\times B_{\beta}(\bar x) \subset U \text{ for }(\bar{t},\bar{x})\in S,\nonumber\\ \int_I k_0(s)+k_m(s)\,ds\leqslant\beta \text{ for } I\subset I_\alpha(t_0),\ m\in\mathbb{N}.\label{eqn:int_k0_km_beta} \end{gather}
For sufficiently large $m$, $I_\alpha(t_0)\times B_\beta(x_m)\subset U$ holds and $\varphi_m(t)$ is defined on $I_\alpha(t_0)$ satisfying $|\varphi_m(t)-x_m|\leqslant\beta$ for $t\in I_\alpha(t_0)$, which follows from \begin{align*}
|f_m(t,x)| &\leqslant |f_0(t,x)|+k_m(t)\quad\text{by (\ref{ineq:fm-f0})}\\
&\leqslant k_0(t)+k_m(t)\quad\text{by (\ref{ineq:f_0})} \end{align*} and (\ref{eqn:int_k0_km_beta}). This shows $(t,\varphi_m(t))\in I_\alpha(t_0)\times B_\beta(x_m)\subset U$ and $\varphi_m(t)$ is uniformly bounded. Moreover, for $t_1<t_2$ in $I_\alpha(t_0)$, \begin{align*}
|\varphi_m(t_2)-\varphi_m(t_1)| &\leqslant \int_{t_1}^{t_2}|f_m(s,\varphi_m(s))|\,ds\\ &\leqslant\int_{t_1}^{t_2}k_0(s)+k_m(s)\,ds\leqslant \int_{t_1}^{t_2}k_0(s)\,ds+C(t_2-t_1) \end{align*} by (\ref{ineq:km_int_uniform_bdd}), and therefore the family $\{\varphi_m\}$ is equicontinuous. By the Ascoli-Arzel\`a Theorem, up to a subsequence, we have \[ \varphi_m\to\bar{\varphi}\quad \text{uniformly on }I_\alpha(t_0) \] for some $\bar \varphi\in C(I_\alpha(t_0);\mathbb{R}^n)$. We will show $\bar \varphi=\varphi_0$ by using the integral equation for $\varphi_m$: \begin{equation} \varphi_m(t)=x_m+\int_{t_0}^tf_m(s,\varphi_m(s))\,ds.\label{eqn:int_eq_phim} \end{equation} Using the hypotheses, let $\bar{k}_m(t)$ be integrable functions on $I_\alpha(t_0)$, $m=1,2,\ldots$, such that \begin{gather}
|f_m(t,x)-f_m(t,y)|\leqslant \bar{k}_m(t)|x-y| \quad \text{for }(t,x), (t,y)\in U\label{ineq:fm_kbar}\\ \int_{t_0}^t\bar{k}_m(s)\,ds \text{ is bounded for }m
\text{ and }t\in I_\alpha(t_0).\label{eqn:int_kbar_bounded} \end{gather} Then, it follows that, for $t_0\leqslant t\leqslant t_0+\alpha$, \begin{align*}
&\left| \int_{t_0}^tf_m(s,\varphi_m(s))\,ds
- \int_{t_0}^tf_0(s,\bar{\varphi}(s))\,ds\right|\\
&\quad \leqslant\int_{t_0}^t|f_m(s,\varphi_m(s))-f_0(s,\bar{\varphi}(s))|\,ds\\ &\quad \leqslant\int_{t_0}^t
|f_m(s,\varphi_m(s))-f_m(s,\bar{\varphi}(s))|\,ds
+\int_{t_0}^t |f_m(s,\bar{\varphi}(s))-f_0(s,\bar{\varphi}(s))|\,ds \\
&\quad \leqslant\int_{t_0}^t \bar{k}_m(s)|\varphi_m(s)-\bar{\varphi}(s)|\,ds +\int_{t_0}^tk_m(s)\,ds\quad \text{by (\ref{ineq:fm-f0}), (\ref{ineq:fm_kbar})}\\
&\quad \leqslant \|\varphi_m-\bar{\varphi}\|_{C(I_\alpha(t_0);\mathbb{R}^n)}\int_{t_0}^{t}\bar{k}_m(s)\,ds +\int_{t_0}^tk_m(s)\,ds\\ &\quad \to0 \ (m\to\infty), \end{align*} where we have used the uniform convergence of $\varphi_m$ to $\bar \varphi$, (\ref{eqn:int_km_converg}) and (\ref{eqn:int_kbar_bounded}). Now, taking the limit $m\to\infty$ in (\ref{eqn:int_eq_phim}) shows that $\bar\varphi$ is a solution of $\dot x=f_0(t,x)$ passing through $(t_0,x_0)$. Thus, the uniqueness of the initial value problem implies that $\bar{\varphi}=\varphi_0$. This means that every convergent subsequence of $\{\varphi_m\}$ on $I_\alpha(t_0)$ converges to $\varphi_0$, and therefore $\varphi_m$ converges uniformly to $\varphi_0$ on $I_\alpha(t_0)$.
Replacing $x_m$ with $\varphi_m(t_0+\alpha)$, $x_0$ with $\varphi_0(t_0+\alpha)$ and $t_0$ with $t_0+\alpha$ and applying the above procedure, one obtains $\varphi_m(t)$ defined on $[t_0-\alpha, t_0+2\alpha]$ that is uniformly convergent to $\varphi_0$ there; arguing in the same way backward from $t_0-\alpha$, the interval of uniform convergence is extended to the left as well. Repeating this finitely many times, the uniform convergence of $\varphi_m(t)$ to $\varphi_0(t)$ on $[a,b]$ is proved. \end{proof}
\end{document}
\begin{document}
\title {The Carlson-type zero-density theorem for the Beurling $\zeta$ function\thanks{Support of this research was rejected
by Hungarian National Research, Development and Innovation Office, Project \# K-142564.}}
\author{Szil\' ard Gy. R\' ev\' esz}
\date{}
\maketitle
\begin{abstract} In a previous paper we proved a Carlson type density theorem for zeroes in the critical strip for Beurling zeta functions satisfying Axiom A of Knopfmacher. There we needed to invoke two additional conditions, the integrality of the norm (Condition B) and an ``average Ramanujan Condition'' for the arithmetical function counting the number of different Beurling integers of the same norm $m\in {\mathbb N}$ (Condition G).
Here we implement a new approach of Pintz, using the classic zero detecting sums coupled with Halász' method, but otherwise arguing in an elementary way, avoiding e.g. large sieve type inequalities or mean value estimates for Dirichlet polynomials. This way we give a new proof of a Carlson type density estimate--with explicit constants--avoiding any use of the two additional conditions needed earlier.
Therefore it is seen that the validity of a Carlson-type density estimate does not depend on any extra assumption--neither on the functional equation, present for the Selberg class, nor on growth estimates of coefficients, say of the ``average Ramanujan type''--but is a general property, presenting itself whenever the analytic continuation is guaranteed by Axiom A. \end{abstract}
{\bf MSC 2020 Subject Classification.} Primary 11M41; Secondary 11M36, 30B50, 30C15.
{\bf Keywords and phrases.} {\it Beurling prime number formula, Beurling zeta function, analytic continuation, zero of the Beurling zeta function, zero detecting sums, method of Halász, density estimates for zeta zeros.}
\section{Introduction}
\subsection{Beurling's theory of generalized integers and primes.} Beurling's theory fits well to the study of several mathematical structures. A vast field of applications of Beurling's theory is nowadays called \emph{arithmetical semigroups}, which are described in detail e.g. by Knopfmacher, \cite{Knopf}.
Here ${\mathcal G}$ is a unitary, commutative semigroup with a countable set of indecomposable generators, called the \emph{primes} of ${\mathcal G}$ and usually denoted as $p\in{\mathcal P}$ (with ${\mathcal P}\subset {\mathcal G}$ the set of all primes within ${\mathcal G}$), which freely generate the whole of ${\mathcal G}$: any element $g\in {\mathcal G}$ can be written, uniquely up to the order of the factors, in the form $g=p_1^{k_1}\cdot \dots \cdot p_m^{k_m}$; that is, essentially different such expressions represent different elements of ${\mathcal G}$, and each element has an essentially unique prime decomposition. Moreover, there is a \emph{norm} $|\cdot|~: {\mathcal G}\to {\mathbb R}_{+}$ so that the following hold.
First, the image of ${\mathcal G}$, $|{\mathcal G}|\subset {\mathbb R}_{+}$ is locally finite\footnote{Sometimes this property is mentioned as "discreteness", but what we mean is that any finite interval of ${\mathbb R}_{+}$ can contain the norm of only a finite number of elements of ${\mathcal G}$.}, hence the function \begin{equation}\label{Ndef}
{{\mathcal N}}(x):=\# \{g\in {\mathcal G}~:~ |g| \leq x\} \end{equation} exists as a finite, nondecreasing, right continuous, nonnegative integer valued function on ${\mathbb R}_{+}$.
Second, the norm is multiplicative, i.e. $|g\cdot h| = |g| \cdot
|h|$; it follows that $|e|=1$ for the unit element $e$ of ${\mathcal G}$, and that all other elements $g \in {\mathcal G}$ have norms strictly larger than 1.
Arithmetical functions can also be defined on ${\mathcal G}$. In this work we will use the M\"obius function $\mu$; for its definition, analogous to the classical case, see pages 36--37 in \cite{Knopf}. The generalized von Mangoldt function $\Lambda_{{\mathcal G}}(g)$ will appear below in \eqref{vonMangoldtLambda}.
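For the reader's convenience we recall that this definition mirrors the classical one:
\[
\mu(e)=1,\qquad \mu(p_1\cdots p_k)=(-1)^k \quad (p_1,\ldots,p_k\in{\mathcal P}\ \text{distinct}),\qquad \mu(g)=0 \quad \text{otherwise},
\]
i.e. $\mu(g)=0$ whenever $g$ is divisible by the square of a prime of ${\mathcal G}$.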
In this work we assume the so-called \emph{``Axiom A"} (in its normalized form to $\delta=1$) of Knopfmacher, see page 75 (and for the normalization pages 78--79) of his fundamental book \cite{Knopf}.
\begin{definition} It is said that ${{\mathcal N}}$ (or, loosely speaking, $\zeta$) satisfies \emph{Axiom A} -- more precisely, Axiom $(\kappa, A, \theta)$ with the suitable constants $\kappa, A>0$ and
$0<\theta<1$ -- if we have\footnote{The usual formulation uses the more natural version ${\mathcal R}(x):= {\mathcal N}(x)-\kappa x$. However, our version is more convenient with respect to the initial values at 1, as we here have ${\mathcal R}(1-0)=0$. All respective integrals of the form $\int_X$ will be understood as integrals from $X-0$, and thus we can avoid considering endpoint values in the partial integration formulae involving integration starting form 1. Alternatively, we could have taken also ${\mathcal N}(x):=\# \{g\in {\mathcal G}, |g|<x\}$ left continuous, and $$ {\mathcal R}(x):={\mathcal N}(x)-\begin{cases}\kappa x \qquad &\text{if}~ x>1 \\ 0 &\text{if} ~ x\le 1. \end{cases} $$ Also with this convention we would have ${\mathcal R}(1-0)=0$ for the remainder, but this seemed to be less convenient than our choice.} for the remainder term $$ {\mathcal R}(x):= {\mathcal N}(x)-\kappa (x-1) $$ the estimate \begin{align}\label{Athetacondi}
\left| {\mathcal R}(x) \right| \leq A x^{\theta} \quad ( x \geq 1 ). \end{align} \end{definition}
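For orientation, the classical arithmetical semigroup ${\mathcal G}=({\mathbb N},\cdot)$ with $|n|=n$ satisfies this normalized Axiom A with $\kappa=1$, $A=1$ and any $0<\theta<1$: here ${\mathcal N}(x)=\lfloor x\rfloor$, so
\[
{\mathcal R}(x)=\lfloor x\rfloor-(x-1)\in[0,1]\qquad\text{and hence}\qquad \left|{\mathcal R}(x)\right|\leq 1\leq x^{\theta}\quad(x\geq1).
\]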
It is clear that under Axiom A the Beurling zeta function \begin{equation}\label{zetadef}
\zeta(s):=\zeta_{{\mathcal G}}(s):=\int_1^{\infty} x^{-s} d{\mathcal N}(x) = \sum_{g\in{\mathcal G}} \frac{1}{|g|^s} \end{equation} admits a meromorphic, essentially analytic continuation $\kappa\frac{1}{s-1}+\int_1^{\infty} x^{-s} d{\mathcal R}(x)$ up to $\Re s >\theta$ with only one, simple pole at 1.
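Indeed, splitting $d{\mathcal N}(x)=\kappa\,dx+d{\mathcal R}(x)$ and integrating the remainder part by parts (the boundary terms vanish by ${\mathcal R}(1-0)=0$ and \eqref{Athetacondi}), one finds for $\Re s=\sigma>\theta$
\[
\int_1^{\infty}x^{-s}\,d{\mathcal R}(x)=s\int_1^{\infty}{\mathcal R}(x)\,x^{-s-1}\,dx,
\qquad
\left|s\int_1^{\infty}{\mathcal R}(x)\,x^{-s-1}\,dx\right|\leq A|s|\int_1^{\infty}x^{\theta-\sigma-1}\,dx=\frac{A|s|}{\sigma-\theta},
\]
so this integral converges absolutely and locally uniformly in the halfplane $\sigma>\theta$; compare also \eqref{zsgeneral} below.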
\subsection{Analytic theory of the distribution of Beurling generalized primes} The Beurling $\zeta$ function \eqref{zetadef} makes it possible to express the generalized von Mangoldt function \begin{equation}\label{vonMangoldtLambda}
\Lambda (g):=\Lambda_{{\mathcal G}}(g):=\begin{cases} \log|p| \quad & \textrm{if}\quad g=p^k, ~ k\in{\mathbb N} ~~\textrm{with some prime}~~ p\in{\mathcal P}\\ 0 \quad &\textrm{if}\quad g\in{\mathcal G} ~~\textrm{is not a prime power in} ~~{\mathcal G} \end{cases}, \end{equation} as coefficients of the logarithmic derivative of the zeta function \begin{equation}\label{zetalogder}
-\frac{\zeta'}{\zeta}(s) = \sum_{g\in {\mathcal G}} \frac{\Lambda(g)}{|g|^s}. \end{equation} Beurling's theory of generalized primes is mainly concerned with the analysis of the summatory function \begin{equation}\label{psidef}
\psi(x):=\psi_{{\mathcal G}}(x):=\sum_{g\in {\mathcal G},~|g|\leq x} \Lambda (g). \end{equation} The generalized PNT (Prime Number Theorem) is the asymptotic equality $\psi(x)\thicksim x$. The remainder term in this equivalence is denoted, as usual, \begin{equation}\label{Deltadef} \Delta(x):=\Delta_{{\mathcal G}}(x):=\psi(x)-x. \end{equation} In the classical case of prime number distribution, as well as regarding some extensions to primes in arithmetical progressions and distribution of prime ideals in algebraic number fields, the connection between location and distribution of zeta-zeroes and oscillatory behavior of the remainder term ${\Delta}(x)$ in the prime number formula $\psi(x)\thicksim x$ is well understood \cite{Kac1, K-97, K-98, Knapowski, Pintz1, Pintz2, Pintz9, PintzProcStekl, Rev1, Rev2, RevPh, RevAA, Stas1, Stas2, Stas-Wiertelak-1, Stas-Wiertelak-2, Turan1, Turan2}. On the other hand, in the generality of Beurling primes and zeta functions, investigations so far have focused mainly on four directions. First, better and better minimal conditions were sought in order to have a Chebyshev type formula $x\ll \psi(x) \ll x$, see e.g. \cite{Vindas12, Vindas13, DZ-13-2, DZ-13-3}. Understandably, as in the classical case, this relation requires only an analysis of the $\zeta$ function of Beurling in, and on the boundary of, the convergence halfplane. Second, conditions for the PNT to hold were sought, see e.g. \cite{Beur, K-98, DebruyneVindas-PNT, DebruyneVindas-RT, DSV, DZ-17, Zhang15-IJM, Zhang15-MM}. Again, this relies on the boundary behavior of $\zeta$ on the one-line $\sigma=1$. Third, rough (as compared to our knowledge in the natural prime number case) estimates and equivalences were worked out in the analysis of the connection between $\zeta$-zero distribution and error term behavior for $\psi(x)$, see e.g. \cite{H-5}, \cite{H-20}. Fourth, examples were constructed for arithmetical semigroups with very ``regular'' (such as satisfying the Riemann Hypothesis (RH) and error estimates $\psi(x)=x+O(x^{1/2+\varepsilon})$) and very ``irregular'' (such as having no better zero-free regions than \eqref{classicalzerofree} below and no better asymptotic error estimates than \eqref{classicalerrorterm}) behavior and zero- or prime distribution, see, e.g., \cite{H-15}, \cite{BrouckeDebruyneVindas}, \cite{DMV}, \cite{H-5}, \cite{Zhang7}. Here we must point out that the above citations are just examples, and are far from being a complete description of the otherwise formidable literature\footnote{E.g. a natural, but somewhat different direction, going back to Beurling himself, is the study of analogous questions in case the assumption of Axiom A is weakened to e.g. an asymptotic condition on ${\mathcal N}(x)$ with a product of $x$ and a sum of powers of $\log x$, or sum of powers of $\log x$ perturbed by almost periodic polynomials in $\log x$, or ${\mathcal N}(x)-cx$ periodic, see \cite{Beur}, \cite{Zhang93}, \cite{H-12}, \cite{RevB}.}. For a thorough analysis of these directions as well as for much more information, the reader may consult the monograph \cite{DZ-16}.
The main focus of our study, presented in the recent papers \cite{Rev-One} and \cite{Rev-Many}, was to establish connections, as precise as possible, between the distribution of the zeros of the Beurling zeta function $\zeta$ on the one hand, and order of magnitude estimates or oscillatory properties of $\Delta(x)$ on the other hand.
Apart from generality and applicability to, e.g., distribution of prime ideals in number fields, the interest in the Beurling theory was greatly boosted by a construction of Diamond, Montgomery and Vorhauer \cite{DMV}. They basically showed\footnote{Let us call attention to the very nice further sharpening of this breakthrough result, which appeared very recently \cite{BrouckeVindas}.} that under Axiom A RH may still fail; moreover, nothing better than the most classical zero-free region and error term \cite{V} of \begin{equation}\label{classicalzerofree} \zeta(s) \ne 0 \qquad \text{whenever}~~~ s=\sigma+it, ~~ \sigma > 1-\frac{c}{\log t}, \end{equation} and \begin{equation}\label{classicalerrorterm} \psi(x)=x +O(x\exp(-C\sqrt{\log x})) \end{equation} follows from \eqref{Athetacondi} at least if $\theta>1/2$.
\subsection{Carlson-type density estimates for the Beurling $\zeta$ function}
In \cite{Rev-D} we proved a Carlson-type density result for the zeros of the Beurling zeta function. This was needed for our studies of the Littlewood- and Ingham-type questions, treated in the Beurling context in our recent works \cite{Rev-One, Rev-Many}, for prior to \cite{Rev-D} no density estimates were known for the Beurling zeta function.
A predecessor of such results--the only one in the Beurling context which touched upon the topic of density-type estimates--was worked out by Kahane \cite{K-99}, who proved that under a suitable (strong) condition on the prime counting function, the number of Beurling zeta zeroes lying on some vertical line $\Re s=\sigma=a>\max(1/2,\theta)$, has finite upper density. That is already a nontrivial fact\footnote{This particular result enabled Kahane to draw deep number theoretical consequences regarding the oscillation (sign changes) of the error term in the prime number formula. Obviously, obtaining a much sharper result -- estimating the total number of zeroes in a full rectangle, not only on one individual vertical line, and with a quantity essentially below the order of $T$ when $a$ is getting close to $1$ -- provides an even stronger foothold for deriving number theoretical consequences.} because the total number $N(T)$ of zeroes with imaginary part not exceeding $T$ may grow in the order $T\log T$.
For deriving the below density theorem in \cite{Rev-D} we needed two additional assumptions, too. One was that the norm would actually map to the natural integers. Following Knopfmacher, this was called \emph{Condition B}. So we said that Condition B is satisfied, if $|\cdot|:{\mathcal G}\to{\mathbb N}$, that is, the norm $|g|$ of any element $g\in{\mathcal G}$ is a natural number. That was necessary mainly for using some large sieve type estimates from the classic book of Montgomery \cite{Mont}. Without this condition, the terms of the arising generalized Dirichlet polynomials, occurring in our proof, could not be controlled well, and such strong tools could not be used.
As is natural, we will write $\nu\in|{\mathcal G}|$ if there exists $g\in{\mathcal G}$ with $|g|=\nu$. Under Condition B we can introduce the arithmetical function $G(\nu):=\sum_{g\in{\mathcal G},~|g|=\nu} 1$, which is then a super-multiplicative arithmetical function on ${\mathbb N}$. The next condition, called \emph{Condition G} and also taken from \cite{Knopf}, was a so-called ``average Ramanujan condition'', meaning that the arithmetical function $G(\nu)$ is $O(\nu^{\varepsilon})$, at least on the (say $p$-th power) average.
Denote the number of zeroes of the Beurling zeta function in $[b,1]\times [-iT,iT]$ as \begin{equation}\label{NTest} N(b,T):=\#\{ \rho=\beta+i\gamma~:~ \zeta(\rho)=0, \,\beta\geq b,
|\gamma|\leq T \}. \end{equation}
The main result of the paper \cite{Rev-D} was the following. \begin{theorem}\label{th:density} Assume that ${\mathcal G}$ satisfies besides Axiom A also Conditions B and G, too. Then for any $\varepsilon>0$ there exists a constant $C=C(\varepsilon,{\mathcal G})$ such that for all sufficiently large $T$ and\footnote{Note that by the (standard) Lemma \ref{l:Littlewood} below, $N(\alpha,T)=O(T^{1+{\varepsilon}})$ for $\alpha>\theta$, always. Thus the statement is nontrivial only if $\alpha$ is close to $1$, more precisely when $\alpha> \frac{5-\theta}{6-2\theta}$.} $\alpha>(1+\theta)/2$ we have \begin{equation}\label{density} N(\alpha,T)\leq C T^{\frac{6-2\theta}{1-\theta}(1-\alpha)+{\varepsilon}}. \end{equation} \end{theorem}
Theorem \ref{th:density} was somewhat surprising, because we lack a functional equation, essential in the treatment of the Selberg class, where zero density estimates are known to hold \cite{KP}. However--as one referee pointed out to us--the functional equation is mainly used in the Selberg class to estimate $\zeta$, and so we could possibly succeed only because similar estimates can be derived directly from our extra conditions.
Two main questions naturally posed themselves after this result. First, whether such a density result can be obtained solely under Axiom A. Here we give an affirmative answer to this question\footnote{Thus we may surprise even the above mentioned referee.}.
\begin{theorem}\label{th:NewDensity} Let ${\mathcal G}$ be a Beurling system subject to Axiom $A$. Then for any $\sigma>(1+\theta)/2$ the number of zeroes of the corresponding Beurling zeta function $\zeta(s)$ admits a Carlson-type density estimate \begin{equation}\label{densityresult} N(\sigma,T) \le 1000 \frac{(A+\kappa)^4}{(1-\theta)^3 (1-\sigma)^4} T^{\frac{12}{1-\theta}(1-\sigma)} \log^5 T \end{equation} for all $T\ge T_0$, where also $T_0$ depends explicitly on the parameters $A, \kappa, \theta$ of Axiom A and on the value of $\eta:=1-\sigma$. In particular, for $\sigma>\frac{11+\theta}{12}$ we have $N(\sigma,T)=o(T)$. \end{theorem}
This means that even without any extra assumptions of order or regularity, just by the mere Axiom A (which is the natural assumption to guarantee the analytic continuation of $\zeta(s)$ to some larger halfplane $\Re s >\theta$ with $\theta<1$), a Carlson-type density theorem always holds true. So, in a philosophical sense, density results hold not because of some extra assumptions, but because of the basic analytical nature of the Beurling $\zeta$ function--in particular, a functional equation can be fully dispensed with for such a result to hold.
Second, in the case of the classical Riemann zeta function, advanced Vinogradov type estimates and the corresponding zero-free regions (and other advances) were all exploited to obtain even stronger density results, with $o(1-\sigma)$ exponents at least in the vicinity of $\sigma=1$, the very first such result being achieved by Halász and Turán \cite{Hal-Tur-I, Hal-Tur-II}. For further advances on this issue of key number theory importance see, e.g., \cite{Bombieri, HB, PintzBestDens} and in particular \cite{GGL} and \cite{Pintz-AMH-Dens}, whose direct and simplified method we follow here to a great extent. As the fundamental work \cite{DMV} demonstrated, however, Vinogradov-type strong estimates cannot be expected in the generality of Beurling systems, as in particular it can well happen that only the classical de la Vallée-Poussin-Landau type zero free region \eqref{classicalzerofree} exists and only the classical de la Vallée-Poussin error estimate \eqref{classicalerrorterm} holds true. So the natural question is whether, regarding density results, Carlson-style $O(T^{C(1-\sigma)+{\varepsilon}})$ estimates are optimal (at least ``in their nature'', leaving the still important question of the value of $C$ subject to further optimization), or whether some Halász-Turán type $T^{o(1-\sigma)+{\varepsilon}}$ improved density estimate can be expected, too.
This second question seems to have already been answered, too. Indeed, F. Broucke and G. Debruyne \cite{FGoral} informed us that by a nontrivial adaptation of the Diamond-Montgomery-Vorhauer construction, they might be able to construct a Beurling system, subject to Axiom A, but having $\Omega(T^{c(1-\sigma)})$ zeroes in some rectangle $[\sigma,1]\times [-iT,iT]$ with $\sigma$ arbitrarily close to $1$. That is, Carlson-type density estimates may be seen as, in general, the best possible in the generality of Beurling systems satisfying Axiom A.
\subsection{Consequences of density theorems regarding the error term of the PNT of Beurling}
In the recent papers \cite{Rev-One, Rev-Many} we investigated the questions of Littlewood and Ingham, together with their converses, about the connection between the location of $\zeta$-zeros and order resp. oscillation estimates for the error term ${\Delta}(x)$ in the PNT of Beurling. As said, to obtain our results we needed to use Theorem \ref{th:density} at several points. Therefore, several of our results were restricted to Beurling systems and zeta functions also satisfying Conditions B and G; moreover, the implied constants also depended on the somewhat implicit ones of these conditions and of the density result of Theorem \ref{th:density}.
Extending the generality of the density theorem serves also to extend several of our number theory results regarding the connection between the behavior of $\Delta(x)$ on the one hand and zero distribution of $\zeta(s)$ on the other hand. We discuss these ameliorated versions of our results from \cite{Rev-One, Rev-Many} in the last section, also pointing out the explicit dependence of constants on the main parameters $A, \kappa, \theta$ of the system.
\section{Lemmata on the Beurling $\zeta$ function}\label{sec:basics}
The following basic lemmas are just slightly more explicit forms of well-known basic estimates, like, e.g., 4.2.6.~Proposition, 4.2.8.~Proposition and 4.2.10.~Corollary of \cite{Knopf}. In \cite{Rev-MP} we elaborated on the proofs of some of them, only for the explicit handling of the arising constants in these estimates. Those which did not appear in \cite{Rev-MP} we briefly prove here, without claiming any originality.
\subsection{Basic estimates of the Beurling $\zeta$ and its partial sums.}
\begin{lemma}\label{l:oneperzeta} For any $s=\sigma+it$, $\Re s=\sigma >1$ we have \begin{equation}\label{zsintheright}
|\zeta(s)| \leq \zeta(\sigma) \le \frac{(A+\kappa)\sigma}{\sigma-1} \qquad (\sigma >1), \end{equation} and also \begin{equation}\label{reciprok}
|\zeta(s)| \geq \frac{1}{\zeta(\sigma)} > \frac{\sigma-1}{(A+\kappa)\sigma} \qquad (\sigma >1). \end{equation} \end{lemma}
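For the reader's convenience, a minimal sketch of these bounds: by partial integration (with ${\mathcal N}(1-0)=0$) and ${\mathcal N}(x)\leq\kappa(x-1)+Ax^{\theta}\leq(A+\kappa)x$,
\[
|\zeta(s)|\leq\zeta(\sigma)=\int_1^{\infty}\frac{d{\mathcal N}(x)}{x^{\sigma}}
=\sigma\int_1^{\infty}\frac{{\mathcal N}(x)}{x^{\sigma+1}}\,dx
\leq(A+\kappa)\,\sigma\int_1^{\infty}x^{-\sigma}\,dx=\frac{(A+\kappa)\sigma}{\sigma-1}\qquad(\sigma>1),
\]
while \eqref{reciprok} follows from the absolutely convergent expansion $1/\zeta(s)=\sum_{g\in{\mathcal G}}\mu(g)|g|^{-s}$ for $\sigma>1$, which gives $|1/\zeta(s)|\leq\zeta(\sigma)$.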
\begin{lemma}\label{l:zetaxs} Denote the ``partial sums" (partial Laplace transforms) of ${\mathcal N}|_{[1,X]}$ as $\zeta_X$ for arbitrary $X\geq 1$: \begin{equation}\label{zexdef} \zeta_X(s):=\int_1^X x^{-s} d{\mathcal N}(x). \end{equation} Then $\zeta_X(s)$ is an entire function and for $\sigma:=\Re s >\theta$ it admits \begin{equation}\label{zxrewritten} \zeta_X(s)= \begin{cases} \zeta(s)-\frac{\kappa X^{1-s}}{s-1}-\int_X^\infty x^{-s}d{\mathcal R}(x) & \textrm{for all} ~~~s\ne 1, \\ \frac{\kappa }{s-1}-\frac{\kappa X^{1-s}}{s-1}+\int_1^X x^{-s}d{\mathcal R}(x) & \textrm{for all} ~~~s\ne 1, \\ \kappa \log X + \int_1^X \frac{d{\mathcal R}(x)}{x} & \textrm{for}~~~ s=1, \end{cases} \end{equation} together with the estimate \begin{equation}\label{zxesti}
\left|\zeta_X(s) \right| \leq \zeta_X(\sigma) \leq \begin{cases} \min \left( \frac{\kappa X^{1-\sigma}}{1-\sigma} + \frac{A}{\sigma-\theta},~ \kappa X^{1-\sigma}\log X + \frac{A}{\sigma-\theta}\right) &\textrm{if} \quad \theta<\sigma < 1, \\ \kappa \log X + \frac{A}{1-\theta}& \textrm{if} \qquad \sigma =1, \\ \min\left( \frac{\sigma (A+\kappa)}{\sigma-1},~{\kappa}\log X + \frac{\sigma A}{\sigma-\theta} \right) &\textrm{if}\quad \sigma>1. \end{cases} \end{equation} Moreover, the above remainder terms can be bounded as follows. \begin{equation}\label{zxrlarge}
\left| \int_X^\infty x^{-s}d{\mathcal R}(x) \right| \leq A
\frac{|s|+\sigma-\theta}{\sigma-\theta} X^{\theta-\sigma}, \end{equation} and \begin{equation}\label{zxrlow}
\left| \int_1^X x^{-s}d{\mathcal R}(x) \right| \leq A \left(
|s|\frac{1-X^{\theta-\sigma}}{\sigma-\theta} + X^{\theta-\sigma}\right) \leq A \min \left(
\frac{|s|}{\sigma-\theta},~|s| \log X + X^{\theta-\sigma} \right). \end{equation} \end{lemma}
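Again only for convenience, we note that the tail bound \eqref{zxrlarge}, for instance, is immediate from \eqref{Athetacondi} by partial integration:
\[
\left|\int_X^{\infty}x^{-s}\,d{\mathcal R}(x)\right|
\leq\left|X^{-s}{\mathcal R}(X)\right|+|s|\int_X^{\infty}|{\mathcal R}(x)|\,x^{-\sigma-1}\,dx
\leq AX^{\theta-\sigma}+\frac{A|s|}{\sigma-\theta}\,X^{\theta-\sigma}
=A\,\frac{|s|+\sigma-\theta}{\sigma-\theta}\,X^{\theta-\sigma}.
\]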
\begin{lemma}\label{l:zetasigma} For any $X\ge 1$ and $s=\sigma+it$ with $\Re s=\sigma <1 $, it holds \begin{equation}\label{zetaXlogfree}
|\zeta_X(s)| \le \zeta_X(\sigma) \le \frac{A+\kappa}{1-\sigma} X^{1-\sigma} \end{equation} and also \begin{equation}\label{zetaXestimate}
|\zeta_X(s)| \le \zeta_X(\sigma) \le \frac{A+\kappa}{\sigma-\theta} X^{1-\sigma}\log X. \end{equation} \end{lemma} \begin{proof} We have \begin{align*}
|\zeta_X(s)| \le \zeta_X(\sigma)&=\int_1^X \frac{d {\mathcal N}(t)}{t^{\sigma}} = [{\mathcal N}(t)t^{-\sigma} ]_1^X +\sigma \int_1^X \frac{{\mathcal N}(t)}{t^{\sigma+1}} dt. \end{align*} The integrated first term is ${\mathcal N}(X) X^{-\sigma}\le (A+\kappa) X^{1-\sigma}$. For the left-over integral we find $$ \sigma \int_1^X \frac{{\mathcal N}(t)}{t^{\sigma+1}} dt \le \sigma \int_1^X \frac{A+\kappa}{t^{\sigma}} dt = (A+\kappa)\frac{\sigma}{1-\sigma} \left(X^{1-\sigma}-1\right)< (A+\kappa)\frac{\sigma}{1-\sigma} X^{1-\sigma}. $$ Adding the two estimates, we get \eqref{zetaXlogfree}.
The second part of the first line of \eqref{zxesti} directly implies \eqref{zetaXestimate}. \end{proof}
\subsection{Behavior of the Beurling zeta function in the critical strip}\label{sec:orderofgrowth}
\begin{lemma}\label{l:zkiss} We have \begin{equation}\label{zsgeneral}
\left|\zeta(s)-\frac{\kappa}{s-1}\right|\leq \frac{A|s|}{\sigma-\theta} \qquad \qquad\qquad (\theta <\sigma ,~ t\in{\mathbb R},~~ s\ne 1). \end{equation} In particular, for large enough values of $t$ it holds \begin{equation}\label{zsgenlarget}
\left|\zeta(s) \right|\leq \sqrt{2} \frac{(A+\kappa)|t|}{\sigma-\theta}
\qquad \qquad\qquad (\theta <\sigma \leq |t|), \end{equation} while for small values of $t$ we have \begin{equation}\label{zssmin1}
|\zeta(s)(s-1)-\kappa|\leq \frac{A|s||s-1|}{\sigma-\theta} \leq
\frac{100 A}{\sigma-\theta}\qquad (\theta <\sigma \leq 4,~|t|\leq 9). \end{equation} As a consequence, we also have \begin{equation}\label{polenozero}
\zeta(s)\ne 0 \qquad \textrm{for}\qquad |s-1| \leq \frac{\kappa(1-\theta)}{A+\kappa}. \end{equation} \end{lemma}
\begin{lemma}\label{l:zzzz} For arbitrary $X \ge 2$ and $s=\sigma+it$, $\theta<\Re s=\sigma<1$ and $t\ge 2$ the following estimates hold true: \begin{align}
|\zeta(s) -\zeta_X(s)| & \le \frac{1}{\sigma-\theta} \left( \kappa \frac{X^{1-\sigma}}{t}+ 2A \frac{t}{X^{\sigma-\theta}} \right) \le \frac{2A + \kappa}{\sigma-\theta} \left( \frac{X^{1-\sigma}}{t}+ \frac{t}{X^{\sigma-\theta}} \right), \label{zetaminuszetaX} \\
|\zeta(s)| & \le \frac{(2A+\kappa)(1-\theta)}{(1-\sigma)(\sigma-\theta)} t^{\frac{1-\sigma}{1-\theta}}, \label{zetaestimate}
\\ |\zeta(s)| & \le \frac{4A+3\kappa}{(\sigma-\theta)(1-\theta)}t^{\frac{1-\sigma}{1-\theta}} \log t. \label{zetawithlog} \end{align} In particular, by the triangle inequality, it also holds \begin{equation}\label{zetaXsecond}
|\zeta_X(s)| \le \min\left(\frac{(2A+\kappa)(1-\theta)}{(1-\sigma)(\sigma-\theta)} t^{\frac{1-\sigma}{1-\theta}}, \frac{4A+3\kappa}{(\sigma-\theta)(1-\theta)}t^{\frac{1-\sigma}{1-\theta}} \log t \right)+ \frac{2A + \kappa}{\sigma-\theta} \left(\frac{X^{1-\sigma}}{t}+ \frac{t}{X^{\sigma-\theta}} \right). \end{equation} \end{lemma} \begin{proof}
For estimating $\zeta(s)-\zeta_X(s)$ we combine the first line of \eqref{zxrewritten} and \eqref{zxrlarge} to infer \begin{align*}
|\zeta(s)-\zeta_X(s)|&\le \frac{\kappa}{|s-1|}X^{1-\sigma} + A \frac{|s|+\sigma-\theta}{\sigma-\theta} X^{\theta-\sigma} \le \frac{\kappa}{t} X^{1-\sigma}+ A\frac{t+2\sigma-\theta}{\sigma-\theta} X^{\theta-\sigma}. \end{align*} Taking into account $2\sigma-\theta <2 \le t$, this also furnishes \eqref{zetaminuszetaX}.
Further, substituting into the first form of \eqref{zetaminuszetaX} the particular choice $X:=t^{\frac{1}{1-\theta}} (\ge t \ge 2)$ leads to $$
|\zeta(s)-\zeta_X(s)|\le \frac{2A+\kappa}{\sigma-\theta} ~t^{\frac{1-\sigma}{1-\theta}}. $$ From here a trivial triangle inequality and \eqref{zetaXlogfree} of Lemma \ref{l:zetasigma} yield \eqref{zetaestimate}: \begin{align*}
|\zeta(s)| & \le |\zeta(s)-\zeta_X(s)|+ |\zeta_X(s)| \le \frac{2A+\kappa}{\sigma-\theta} ~t^{\frac{1-\sigma}{1-\theta}}+ \frac{A+\kappa}{1-\sigma} t^{\frac{1-\sigma}{1-\theta}} < (2A+\kappa) \left( \frac{1}{\sigma-\theta} +\frac1{1-\sigma}\right) t^{\frac{1-\sigma}{1-\theta}}. \end{align*} If we apply here \eqref{zetaXestimate} instead of \eqref{zetaXlogfree} then we get $$
|\zeta(s)| \le \frac{2A+\kappa}{\sigma-\theta} ~t^{\frac{1-\sigma}{1-\theta}}+ \frac{A+\kappa}{\sigma-\theta} t^{\frac{1-\sigma}{1-\theta}} \frac{1}{1-\theta} \log t < \frac{4A+3\kappa}{(\sigma-\theta)(1-\theta)}t^{\frac{1-\sigma}{1-\theta}} \log t, $$ whence \eqref{zetawithlog}, too.
\end{proof}
Let us introduce the notation \begin{align}\label{Mdef}
M(\sigma,T)&:=\max\{ |\zeta(s)|~:~ s=\sigma+it, \Re s=\sigma, 2\le |t|\le T\}. \end{align} The combination of \eqref{zsgeneral}, \eqref{zetaestimate} and \eqref{zetawithlog} leads to the following. \begin{corollary}\label{c:Mesti} For arbitrary $s=\sigma+it$ with $\Re s=\sigma \in (\theta,1)$, and for arbitrary $T\ge 0$ we have \begin{equation}\label{Mestimate} M(\sigma,T) \le \min\left(\frac{(2A+\kappa)(1-\theta)}{(1-\sigma)(\sigma-\theta)}, \frac{4A+3\kappa}{(\sigma-\theta)(1-\theta)} \log T \right) \max(1,T^{\frac{1-\sigma}{1-\theta}}) . \end{equation} \end{corollary}
\subsection{Estimates for the number of zeros of $\zeta$}\label{sec:zeros}
Denote the set of all $\zeta$-zeroes in the rectangle $[b,1]\times i[-T,T]$ as ${\mathcal Z}(b;T)$, and the set of $\zeta$-zeroes in $[b,1]\times i([-T,-R]\cup[R,T])$ as ${\mathcal Z}(b;R,T)$, while their cardinality is denoted by $N(b;T)$ and $N(b;R,T)$, respectively. Also, we will write ${\mathcal Z}_{+}(b;R,T)$ for the part of ${\mathcal Z}(b,;R,T)$ lying in the upper halfplane. \begin{lemma}\label{l:Littlewood} Let $\theta<b<1$ and consider any height $T\geq 5$. Then the number of zeta-zeros $N(b,T)$ satisfy \begin{equation}\label{zerosinth-corr} N(b,T)\le \frac{1}{b-\theta} \left\{\frac{1}{2} T \log T + \left(2 \log(A+\kappa) + \log\frac{1}{b-\theta} + 3 \right)T\right\}. \end{equation} \end{lemma}
\begin{lemma}\label{c:zerosinrange} Let $\theta<b<1$ and consider any heights $T>R\geq 5$.
Then $N(b;R,T)$ satisfies\footnote{Here and below in \eqref{zerosbetweenone} a formulation with slightly different numerical constants is presented correcting the original calculation of \cite{Rev-MP}. About the error made in \cite{Rev-MP} and the description of the corrections see \cite{Rev-D}.} \begin{equation}\label{zerosbetween} N(b;R,T) \leq\frac{1}{b-\theta} \left\{ \frac{4}{3\pi} (T-R) \left(\log\left(\frac{11.4 (A+\kappa)^2}{b-\theta}{T}\right)\right) + \frac{16}{3} \log\left(\frac{60 (A+\kappa)^2}{b-\theta}{T}\right)\right\}. \end{equation}
In particular, for the zeroes between $T-1$ and $T+1$ we have for $T\geq 6$ \begin{align}\label{zerosbetweenone} N(b;T-1,T+1) \leq \frac{1}{(b-\theta)} \left\{6.2 \log T + 6.2 \log\left( \frac{(A+\kappa)^2}{b-\theta}\right) + 24 \right\}. \end{align} \end{lemma}
\section{Proof of the density estimate of Theorem \ref{th:NewDensity} }\label{sec:ProofofNewDensity}
The constants $A$ and $\kappa$ frequently occur in our calculations and sometimes we take logarithms and reciprocals of their sum $A+\kappa$. To overcome technical distinctions and difficulties when e.g. the logarithm is negative, let us note that once a constant $A$ is admissible, then the possibly enlarged value $A^*:=\max(A,1-\kappa)$ is admissible, too, so that in what follows we will automatically consider that $A+\kappa\ge 1$.
At the outset we fix some parameter $\xi$ with $\theta<\xi<\sigma(<1)$, and also write $\eta:=1-\sigma$ and $\delta:=\sigma-\xi$, so that e.g. $1-\xi=\delta+\eta$. We will need later on the restriction $\xi>\frac{1+\theta}{2}$ anyway, so we assume it once and for all. Further, note that the statement \eqref{densityresult} of Theorem \ref{th:NewDensity} directly follows from Lemma \ref{l:Littlewood} if $\eta > \frac{1}{12}(1-\theta)$, so that we can assume in the following that $\eta\le (1-\theta)/12\le 1/12$. In the course of the proof we will finally specify $\delta$ as $\delta=1.5 \eta$, so that we will also have $\delta\le(1-\theta)/8\le 1/8$. The numerical estimates $\eta\le 1/12$ and $\delta\le 1/8$ will thus be capitalized on without further mention\footnote{Also we will use without notice that $\log u \le \frac{1}{e} u$ and $\log u \le \frac{2}{e} \sqrt{u}$, always.}. Let us also note that with these assumptions $\xi=1-2.5\eta>\frac{1+\theta}{2}$ is guaranteed, too.
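A quick check of the last assertion:
\[
\xi=\sigma-\delta=(1-\eta)-1.5\eta=1-2.5\eta,
\qquad
\xi-\frac{1+\theta}{2}=\frac{1-\theta}{2}-2.5\eta\geq\frac{1-\theta}{2}-\frac{2.5\,(1-\theta)}{12}=\frac{7(1-\theta)}{24}>0 .
\]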
Further, we take three large parameters $X>Y>e^{10}$ and $T>e^{10}$, and denote $\lambda:=\log Y$, $L:=\log T$. In fact, in two steps we will restrict $X$ first to be at least $e^2Y$, and then to be exactly this value, so that we can as well consider $X:=e^2Y$ right away. (We can also foretell that we will choose $X:=T^{\frac{3.5}{1-\theta}}$ at the end.)
The starting point of the proof is the following constant quantity, defined by a complex integral: $$ I:=\frac{1}{2\pi i} \int_{(\Re s=3)} \frac{1}{s} \exp\left(s^2/L+\lambda s\right)~ds=1+\frac{1}{2\pi i} \int_{(-L)} \frac{1}{s} \exp\left(s^2/L+\lambda s\right)~ds, $$ where the last formula obtains by the Residue Theorem upon shifting the line of integration to the left until $\Re s=-L$. From this second expression it follows\footnote{The exact value of $I$ is of no importance for us here, but it is clear that its limit, for either $L=\log T\to \infty$ or for $\lambda=\log Y\to \infty$ is 1.} \begin{equation}\label{Irhofirstform}
\left|I-1\right| \le \frac{1}{\pi} \int_0^\infty \frac{1}{L} e^{L-t^2/L-\lambda L} dt = \frac{1}{\pi L} e^{-(\lambda-1)L} \frac{\sqrt{\pi L}}{2} < T^{-(\lambda-1)} <0.1. \end{equation}
The very simple basic idea of the proof, in which we follow Pintz \cite{Pintz-AMH-Dens}, is to write here $\displaystyle 1=\frac{1}{\zeta(s+\rho)} \zeta(s+\rho)$, and thus to involve a $\zeta$-dependent (zero-detecting) expression in the simple mean $I$ defined above. Namely, for any complex number $\rho=\beta+i\gamma$ in the critical strip, i.e. with $\Re \rho=\beta \in (\theta,1)$, we now insert this trivial identity and reformulate as follows. \begin{align*} I:=\frac{1}{2\pi i} \int_{(3)} \frac{1}{s} \exp\left(s^2/L+\lambda s\right)~ds &= \frac{1}{2\pi i} \int_{(3)} \frac{1}{\zeta(s+\rho)} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+\lambda s\right)~ds \\
&= \frac{1}{2\pi i} \int_{(3)} \sum_{g \in {\mathcal G}} \frac{\mu(g)}{|g|^{s+\rho}} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+\lambda s\right)~ds \\
&= \sum_{g \in {\mathcal G}} \frac{\mu(g)}{|g|^{\rho}} \frac{1}{2\pi i} \int_{(3)} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+(\lambda-\log|g|) s\right)~ds. \end{align*} Later we will need three more restrictions on the choice of $\rho$: it will be restricted to the strip $\Re \rho=\beta \in (\sigma,1)$, its imaginary part will be chosen to satisfy $\Im \rho=\gamma \in [10L,T]$, and finally we will also assume that it is a zero of the Beurling zeta function $\zeta(s)$. That is, altogether we will have $\rho \in {\mathcal Z}_{+}(\sigma; 10L,T)$. However, for the moment we do not need these restrictions yet.
Defining for arbitrary $h\in {\mathbb R}$ the weight function $$ w(\rho,h):= \frac{1}{2\pi i} \int_{(3)} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+h s\right)~ds, $$ the last expression of $I$ takes the form \begin{equation}\label{Iwithw}
I=\sum_{g \in {\mathcal G}} \frac{\mu(g)}{|g|^{\rho}} w(\rho, \lambda-\log|g|). \end{equation} Our next aim is to evaluate the weighted quantity $w(\rho,h)$ for case when $h\le-2$. Then we shift the line of integration to the line $\Re s =-\frac12 h L$, which is positive (as $h<0$), whence the integrand is analytic between the old and new lines of integration, and in view of the fast decrease of the integrand towards $i\infty$, the formula $w(\rho,h)=\frac{1}{2\pi i} \int_{(-\frac12 h L)} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+h s\right)~ds$ is justified. Therefore, taking into account \eqref{zsintheright}, we obtain the estimate $$
|w(\rho,h)|\le \frac{1}{2\pi} (A+\kappa) \frac{\beta-hL/2}{\beta-hL/2-1} \frac{2}{L} \int_0^\infty e^{\frac14 h^2 L -t^2/L -\frac12 h^2 L} dt < (A+\kappa) e^{-\frac14 h^2 L}. $$
So we assume now $X\ge e^2 Y$, i.e. $\log X \ge \lambda +2$, and consider the $g \in {\mathcal G}$ with $|g|\ge X$, i.e. $h=\lambda-\log|g|\le -2$. The part with $|g| \ge X$ of the above sum \eqref{Iwithw} can be estimated as \begin{align}\label{Irhilargeg}
\bigg| \sum_{|g|\ge X} \frac{\mu(g)}{|g|^{\rho}} w(\rho, \lambda-\log|g|) \bigg| &\le \sum_{|g|\ge X} (A+\kappa) e^{-\frac14 (\log|g|-\lambda)^2 L} \\ \notag &=
(A+\kappa) \int_X^\infty e^{-\frac14 (\log x-\lambda)^2 L} d{\mathcal N}(x). \end{align} For the inner integral partial integration yields \begin{align}\label{Irhilargeg} \notag & = \left[ e^{-\frac14 (\log x-\lambda)^2 L} {\mathcal N}(x) \right]_X^\infty + \int_X^\infty \frac{L}{2} (\log x-\lambda) \frac{1}{x} e^{-\frac14 (\log x-\lambda)^2 L} {\mathcal N}(x) dx \\ \notag & \le (A+\kappa) \int_X^\infty \frac{L}{2} (\log x-\lambda) e^{-\frac14 (\log x-\lambda)^2 L} dx \\ & = (A+\kappa) \int_{\log(X/Y)}^\infty \frac{L}{2} ~y~ e^{-\frac14 y^2 L} e^y dy \\ \notag & \le (A+\kappa) \int_{\log(X/Y)}^\infty 2 \left(\frac{L}{2} y -1\right) e^{-\frac{L}{4} y^2+y} dy \\ &= 2(A+\kappa) e^{-\frac{L}{4} \log^2(X/Y)+\log(X/Y)} \le 2 (A+\kappa) e^{2-L} = 2e^2 (A+\kappa) \frac{1}{T},\notag \end{align} because the expression in the exponent is a decreasing function in $\log(X/Y)\ge 2$. Thus we are led to \begin{equation}\label{Irhoforlargegest}
\bigg| \sum_{|g|\ge X} \frac{\mu(g)}{|g|^{\rho}} w(\rho, \lambda-\log|g|) \bigg| \le 2e^2 (A+\kappa)^2 \frac{1}{T}<0.1, \end{equation} if $T \ge T_1:=200 (A+\kappa)^2$, say. Altogether from \eqref{Irhofirstform} and \eqref{Irhoforlargegest} we get for $T\ge T_1$ and $X\ge e^2 Y$ the estimate \begin{equation}\label{Irhosmallpart}
\bigg| I(\rho,X) -1 \bigg| \le 0.2, \qquad \textrm{where}\qquad I(\rho,X):=\sum_{|g|\le X} \frac{\mu(g)}{|g|^{\rho}} w(\rho, \lambda-\log|g|). \end{equation}
The evaluation of the terms in $I(\rho,X)$ (i.e. the ones with $h\ge -2$) will be done differently: instead of merging the two exponents $\lambda$ and $-\log|g|$ into the combined quantity $h$, we handle them separately. Recalling that $\xi \in (\theta,\sigma)$, and $\eta:=1-\sigma>0$, $\delta:=\sigma-\xi>0$, so that $1-\xi=\eta+\delta$, the line of integration in $w(\rho,h)$ is moved from the line with $\Re s=3$ to the left, to the line $\Re s=\xi-\beta$. In view of the fast decrease of the integrand towards $i\infty$, transferring the line poses no problem, but the strip $\xi-\beta<\Re s<3$ between the old and new vertical lines of integration may contain singularities of the integrand.
Namely, the function $\zeta(s+\rho)$ has a first order pole singularity at $s=1-\rho$, whose real part $1-\beta$ lies in the strip $\xi-\beta<\Re s <3$. As the residuum of $\zeta(s)$ at $s=1$ is $\kappa$, at $s=1-\rho$ the residuum of the whole integrand amounts to $\kappa \cdot \frac{1}{1-\rho} \exp((1-\rho)^2/L+(\lambda-\log|g|) (1-\rho))$, with absolute value not exceeding $\frac{\kappa }{\gamma} e^{1/L+(\lambda-\log|g|) (1-\beta) -\gamma^2/L}$. From here on, we will also use that $\Re \rho=\beta> \sigma=1-\eta$ (with $0<\eta \le1/12$) and $\Im \rho=\gamma \ge 10 L$. Then the above residuum is at most $$
\frac{\kappa }{\gamma} e^{1/L +(\lambda-\log|g|+2) \eta - 2 (1-\beta) -\gamma^2/L} \le \frac{\kappa e^{1/L+2\eta}}{\gamma |g|^\eta} Y^\eta e^{-10\gamma} < \frac{2 \kappa }{\gamma |g|^\eta} Y^\eta e^{-10\gamma} . $$
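For clarity, let us record the elementary facts used in the last two bounds: $h(1-\beta)\le (h+2)\eta-2(1-\beta)$ since $h+2\ge0$ and $1-\beta<\eta$; then $e^{(\lambda-\log|g|)\eta}=Y^{\eta}/|g|^{\eta}$, $\gamma^2/L\ge 10\gamma$ because $\gamma\ge 10L$, and finally $e^{1/L+2\eta}\le e^{0.1+1/6}<2$ in view of $L\ge 10$ and $\eta\le 1/12$.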
Further, when $s=0$, in principle there is another singularity of the integrand. Therefore, here we finally \emph{assume that $\rho$ is a zero of $\zeta(s)$}, so that vanishing of $\zeta(s+\rho)$ extinguishes the first order pole of the kernel $\frac{1}{s}\exp(s^2/L+hs)$ at $s=0$, making the point a removable singularity, harmless for the transfer of the line of integration. In sum, we take $\rho=\beta+i\gamma \in {\mathcal Z}_{+}(\sigma,10L,T)$, a zero with real part at least $\sigma$ and imaginary part between $10L$ and $T$.
Let us add up all the contributions arising from the residue of the translated zeta function $\zeta(s+\rho)$ at its pole at $s=1-\rho$. We get, using also $\beta>\sigma$, i.e. $\beta+\eta>1$, the estimate \begin{align}\label{eq:zsrpole}
\bigg| \sum_{|g|\le X}\frac{\mu(g)}{|g|^\rho}~ &\textrm{Res}\left[\frac{\zeta(s+\rho)}{s} \exp(s^2/L+(\lambda-\log|g|) s) ~ ;~ s=1-\rho \right] \bigg| \\& \le \frac{2 \kappa Y^\eta }{\gamma e^{10\gamma}} \int_{1}^X \frac{1}{t^{\beta+\eta}} d {\mathcal N}(t) \le \frac{2 \kappa Y^\eta }{\gamma e^{10\gamma}} \int_{1}^X \frac{1}{t} d {\mathcal N}(t) = \frac{2 \kappa Y^\eta }{\gamma e^{10\gamma}}\zeta_X(1) \le \frac{(A+\kappa)^2}{1-\theta} \frac{Y^\eta \log X}{L T^{100}}, \notag \end{align} applying Lemma \ref{l:zetaxs}, \eqref{zxesti} middle line in the last estimate.
The essential part of $I(\rho,X)$ is to be the integral on the changed path $\Re s = \xi-\beta \in (-1,0)$. More precisely, we will find that the essential contribution of the integral over the whole line comes from the part where $\Im s=t \in [-2L,2L]$. Denoting $D:=D(A,\kappa,\theta,\xi):=\frac{(A+\kappa)(1-\theta)}{(1-\xi)(\xi-\theta)}$, we get from Corollary \ref{c:Mesti} for any $h>-2$ \begin{align*}
\bigg| \frac{1}{2\pi i} \int_{\Re s=\xi-\beta, \atop |\Im s|\ge 2L} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+h s\right)~ds \bigg| & \le \frac{e^{1/L+(\xi-\beta)h}}{2\pi} 2\int_{2L}^\infty \frac{M(\xi,t+\gamma)}{t} e^{-t^2/L} dt \\ & \le \frac{e^{1/L+(\xi-\beta)h}}{\pi} \int_{2L}^\infty \frac{2D ~(t+\gamma)^{\frac{1-\xi}{1-\theta}} }{t} e^{-t^2/L} dt \\ & \le \frac{2 e^{1/L+1} D} {\pi} \int_{2L}^\infty (2\max(t,\gamma))^{\frac{1-\xi}{1-\theta}} \frac{1}{t} e^{-t^2/L} dt \\ & \le \frac{2 \cdot e^{1.1} \cdot \sqrt{2}}{\pi} D \left\{\int_{2L}^\gamma\gamma^{\frac{1-\xi}{1-\theta}} \frac{1}{t} e^{-t^2/L} dt +\int_\gamma^\infty t^{\frac{1-\xi}{1-\theta}} \frac{1}{t} e^{-t^2/L} dt\right\}, \end{align*} using also that $\xi>\frac{1+\theta}{2}\ge 1/2$, $h\ge -2$, $\beta<1$ and $L\ge 10$ in view of $T\ge e^{10}$. In the first integral we can estimate $\gamma^{\frac{1-\xi}{1-\theta}} \le \gamma \le T$ by assumption, and as $1/t < 2t/L$, we have $$ \int_{2L}^{\gamma} \gamma^{\frac{1-\xi}{1-\theta}} \frac{1}{t} e^{-t^2/L}dt \le \int_{2L}^{\infty} T \frac{2t}{L} e^{-t^2/L}dt = T^{-3}. $$ For the second integral we use that $10L \le \gamma \le t \le T$ and calculate as follows. $$ \int_\gamma^\infty t^{\frac{1-\xi}{1-\theta}} \frac{1}{t} e^{-t^2/L} dt \le \int_{10L}^\infty t \frac{1}{10 L} e^{-t^2/L} dt = \frac{1}{20} e^{-100L} \le T^{-100}. $$ The above estimates thus lead to $$
\bigg| \frac{1}{2\pi i} \int_{\Re s=\xi-\beta, |\Im s|\ge 2L} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+h s\right)~ds \bigg| \le 3D~ \frac{1}{T^3}. $$
Next, similarly to \eqref{eq:zsrpole} we add up all these contributions for all $|g|\le X$ in the sum for $I(\rho,X)$. With a reference to the first form in the first line of \eqref{zetaXestimate} and using again $\beta>\sigma$ we get \begin{align}\label{eq:farintegral}
\bigg| \sum_{|g|\le X} \frac{\mu(g)}{|g|^\rho} & \frac{1}{2\pi i} \int_{\Re s=\xi-\beta, |\Im s|\ge 2L} \frac{\zeta(s+\rho)}{s} \exp\left(s^2/L+h s\right)~ds \bigg|
\\ & \notag \le \sum_{|g|\le X} \frac{|\mu(g)|}{|g|^\beta} 3D \frac{1}{T^3} = 3D\frac{1}{T^3}\zeta_X(\beta) \le 3D \frac{1}{T^3} \left(\kappa X^{1-\beta} + \frac{A}{\beta-\theta} \right)\le \frac{3 D~(A+\kappa)}{\sigma-\theta} \frac{X^\eta }{T^3}. \end{align}
So assuming now the light condition that $\log X <T^{97}$ and collecting \eqref{eq:zsrpole} and \eqref{eq:farintegral} furnish that the residues and the integrals in $w(\rho,\lambda-\log|g|)$ restricted to $t=\Im s \not\in [-2L,2L]$ contribute at most $$ \frac{3~(A+\kappa)^2(1-\theta)}{(1-\xi)(\xi-\theta)(\sigma-\theta)} \frac{X^\eta }{T^3} \le 0.1, $$ provided that $T\ge T_2(\xi,\sigma,X):= \max\left(\log^{1/97}X;\left(\frac{30~(A+\kappa)^2(1-\theta)}{(1-\xi)(\xi-\theta)(\sigma-\theta)} \right)^{1/3} X^{\eta/3}\right)$. In other words, if we introduce the notations \begin{align*}
I(\rho,X,L)& := \sum_{|g|\le X} \frac{\mu(g)}{|g|^{\rho}} \frac{1}{2\pi i} \int_{-2L}^{2L} \frac{\zeta(\xi+i(t+\gamma))}{\xi-\beta+it} e^{(\xi-\beta+it)^2/L+(\lambda-\log|g|)(\xi-\beta+it)}dt,
\\& = \frac{1}{2\pi i} \int_{-2L}^{2L} \sum_{|g|\le X} \frac{\mu(g)}{|g|^{\xi+i(\gamma+t)}} \frac{\zeta(\xi+i(t+\gamma))}{\xi-\beta+it} e^{\frac{(\xi-\beta)^2-t^2+2it(\xi-\beta)}{L}+\lambda(\xi-\beta+it)} dt , \end{align*} then we have already derived for $T\ge T_1, T_2$ the estimate $$
\left| I(\rho,X,L) -1\right| \le 0.3. $$
Next we compute an upper estimation for the quantity $I(\rho,X,L)$. Since $|\gamma|\le T$ and $t\in[-2L,2L]$, in the integral we can estimate $\zeta(\xi+i(t+\gamma))$ by $M(\xi,2T)$. Therefore, writing $M:=M(\xi,2T)$ for short, \begin{align*}
\int_{-2L}^{2L} \bigg|\zeta(\xi+i(t+\gamma)) &
\frac{\exp((\xi-\beta)^2/L-t^2/L+2it(\xi-\beta)/L+\lambda(\xi-\beta+it))}{\xi-\beta+it} \bigg| dt \\& \le M e^{(\xi-\beta)^2/L+\lambda(\xi-\beta)} ~2\int_0^{2L} \frac{e^{-t^2/L}~dt}{\sqrt{(\xi-\beta)^2+t^2}} \le 2 M e^{\delta^2/L+\lambda(\xi-\sigma)} ~\int_0^{\infty} \frac{e^{-t^2/L}}{\max(\delta,t)}~dt \\ &\le 2 M ~1.01 ~Y^{\xi-\sigma} \left(\delta+ \log\frac{2\sqrt{L}}{\delta} + \frac{1}{e}\right) \le 4 M Y^{-\delta} \log L, \end{align*} if we assume $T>T_3(\delta):=e^{1/\delta}$, whence $\log(1/\delta)\le \log L$, too. Thus we must have for all $T \ge T_1, T_2, T_3$ $$
|I(\rho,X,L)| \le \frac{1}{2\pi} \max_{-2L\le t \le 2L} \left|\sum_{|g|\le X} \frac{\mu(g)}{|g|^{\xi+i(\gamma+t)}} \right| \cdot 4 M Y^{-\delta} \log L. $$
However, we have already seen that $|I(\rho,X,L)|$, appearing on the left hand side, is at least $0.7$. It follows that there exist some $\tau:=\tau(\rho) \in [-2L,2L]$ and a corresponding complex unit $\alpha:=\alpha(\rho,\tau):=e^{i\varphi(\rho,\tau)}$, where $\varphi:=\varphi(\rho,\tau):=\arg \left(\sum_{|g|\le X} \frac{\mu(g)}{|g|^{\xi+i(\gamma+\tau)}}\right)$, such that \begin{equation}\label{SumXrholower}
\alpha ~\sum_{|g|\le X} \frac{\mu(g)}{|g|^{\xi+i(\gamma+\tau)}} \ge \frac{2\pi \cdot 0.7}{4} \frac{ Y^\delta }{M \log L} > 1.1 \cdot \frac{ Y^\delta }{M \log L} \qquad (M:=M(\xi,2T)). \end{equation}
Now let us take a subset ${\mathcal S}$ of ${\mathcal Z}_{+}(\sigma;10L,T)$ of Beurling $\zeta$ zeros, numbered as ${\mathcal S}=\{ \rho_k=\beta_k+i\gamma_k ~:~k=1,\ldots,K\}$, so that $\#{\mathcal S}=K$. The corresponding parameter values from the above considerations will be denoted as $\tau_k:=\tau(\rho_k)$ and $\alpha_k:=\alpha(\rho_k,\tau_k)$, and we abbreviate $\omega_k:=\gamma_k+\tau_k$. Following Halász, we sum up the inequalities \eqref{SumXrholower} for all $\rho_k$ ($k=1,\ldots,K$), square both sides, and apply the Cauchy--Schwarz inequality. This yields \begin{align}\label{Keyinequalities} 1.2 \cdot K^2 & \cdot \notag \frac{ Y^{2\delta }}{M^2 \log^2 L}
\le \left( \sum_{k=1}^K \alpha_k \sum_{|g|\le X} \frac{\mu(g)}{|g|^{\xi+i(\gamma_k+\tau_k)}} \right)^2
= \left( \sum_{|g|\le X} \frac{\mu(g)}{\sqrt{|g|}} \sum_{k=1}^K \frac{\alpha_k }{|g|^{\xi-1/2+i\omega_k}} \right)^2
\\ \notag & \le \sum_{|g|\le X} \frac{\mu^2(g)}{|g|} \sum_{|g|\le X} \left|\sum_{k=1}^K \frac{\alpha_k}{|g|^{\xi-1/2+i\omega_k}} \right|^2
\le \zeta_X(1) \left( \sum_{|g|\le X} \sum_{k=1}^K \sum_{j=1}^K \frac{\alpha_k \overline{\alpha_j}}{|g|^{2\xi-1+i\omega_k-i\omega_j}} \right)
\\ \notag & = \zeta_X(1) \left( \sum_{k=1}^K \sum_{j=1, j\ne k}^K \alpha_k \overline{\alpha_j} \sum_{|g|\le X} \frac{1}{|g|^{2\xi-1+i\omega_k-i\omega_j}} + \sum_{k=1}^K \sum_{|g|\le X} \frac{1}{|g|^{2\xi-1}} \right) \\ & = \zeta_X(1)\left(\sum_{k=1}^K \sum_{j=1, j\ne k}^K \alpha_k \overline{\alpha_j} ~\zeta_X(2\xi-1+i(\omega_k-\omega_j))+ K \zeta_X(2\xi-1) \right) \notag
\\ & \le \frac{A+\kappa}{1-\theta} \log X \left(\sum_{k=1}^K \sum_{j=1, j\ne k}^K \left|\zeta_X(2\xi-1+i(\omega_k-\omega_j))\right|+ K (A+\kappa)\frac{X^{2-2\xi}}{2\xi-1-\theta} \log X\right), \end{align} taking into account the middle line of \eqref{zxesti} from Lemma \ref{l:zetaxs} and also \eqref{zetaXestimate} of Lemma \ref{l:zetasigma} in the last step.
The interesting terms come from the double sum. We assume, as will be justified below, that the distinct $\omega_k$ are at least $2$ apart, so that each $\zeta_X(2\xi-1+i(\omega_k-\omega_j))$ term in this double sum can be estimated by \eqref{zetaXsecond} from Lemma \ref{l:zzzz}. If we choose the log-free part from the first minimum expression in \eqref{zetaXsecond}, then the double sum is seen not to exceed \begin{align}\label{Doublesumwithlemma}
\frac{(2A+\kappa)}{(2\xi-1-\theta)} & \sum_{k=1}^K \sum_{j=1, j\ne k}^K \left\{ \frac{1-\theta}{2-2\xi}|\omega_k-\omega_j|^{\frac{2-2\xi}{1-\theta}} +
\left( \frac{X^{2-2\xi}}{|\omega_k-\omega_j|}+ \frac{|\omega_k-\omega_j|}{X^{2\xi-1-\theta}} \right)\right\}
\\ & \le \frac{(2A+\kappa)(1-\theta)}{(2\xi-1-\theta)(2-2\xi)} K^2 \left( T^{\frac{2-2\xi}{1-\theta}}+ \frac{T}{X^{2\xi-1-\theta}} \right) + \frac{2A+\kappa}{2\xi-1-\theta}\sum_{k=1}^K \sum_{j=1, j\ne k}^K \frac{X^{2-2\xi}}{|\omega_k-\omega_j|}. \notag \end{align}
Assume, as we may, that the $\omega_k$ are indexed according to increasing magnitude, and take the (minimal) separation between any two as a parameter $Q$. (We will see in a moment that the concrete set ${\mathcal S}$ admits such a separation.) Then the separation between $\omega_k$ and $\omega_j$ is at least $|k-j|Q$. Using this, we are led to $$
\sum_{k=1}^K \sum_{j=1, j\ne k}^K \frac{1}{|\omega_k-\omega_j|} \le 2~ \sum_{k=1}^K \sum_{j=k+1}^K \frac{1}{(j-k)Q} < \frac{2}{Q} K(1+\log K). $$ Assume further (as we will see in a moment from the concrete definition of the set ${\mathcal S}$ right below) that $K\le T/e$. Then $(1+\log K)\le \log T=L$, and we are led to \begin{equation}\label{reciproksum}
\sum_{k=1}^K \sum_{j=1, j\ne k}^K \frac{X^{2-2\xi} }{|\omega_k-\omega_j|} \le K \frac{2L}{Q} X^{2-2\xi}. \end{equation}
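For completeness, the display preceding \eqref{reciproksum} used the elementary harmonic sum bound $$ \sum_{j=k+1}^{K} \frac{1}{j-k} = \sum_{m=1}^{K-k}\frac{1}{m} \le 1+\log K \qquad (1\le k<K), $$ summed over $k=1,\ldots,K$ and multiplied by the factor $2/Q$.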
At this point we finally specify the set ${\mathcal S}$. As said, it will be a subset of ${\mathcal Z}_{+}(\sigma,10L,T)$, chosen with a maximal number of elements under the condition that the imaginary parts $\gamma_k=\Im \rho_k$ are at least $10L$ apart. In other words, take $\rho_1$ from ${\mathcal Z}_{+}(\sigma,10L,T)$ with minimal possible imaginary part $\gamma_1$, and then inductively choose $\rho_{k+1}$ with minimal imaginary part $\gamma_{k+1}$ not below $\gamma_k+10L$, once $\rho_k$ is already selected. The construction then terminates after at most $T/(10L)$ steps, justifying our assumption that $K\le T/e$. Also, recalling that $|\omega_k-\gamma_k|\le 2L$, we find that the separation between the elements of the sequence $(\omega_k)$ is at least $Q=6L$. (Note also that the indexing is in the natural, increasing order of the $\omega_k$.)
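Let us also verify the claimed separation. For $k\ne j$ we have $|\gamma_k-\gamma_j|\ge 10L$ by the construction, while $|\omega_k-\gamma_k|\le 2L$ and $|\omega_j-\gamma_j|\le 2L$ because $\tau_k,\tau_j\in[-2L,2L]$; hence $$ |\omega_k-\omega_j| \ge |\gamma_k-\gamma_j|-|\omega_k-\gamma_k|-|\omega_j-\gamma_j| \ge 10L-2L-2L=6L. $$ In particular the $\omega_k$ are at least $2$ apart, as was assumed before \eqref{Doublesumwithlemma}.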
Winding up the estimations done, from \eqref{Keyinequalities}, \eqref{Doublesumwithlemma} and \eqref{reciproksum} after a cancellation by $K$ we are led to \begin{align}\label{Keysimplified} 1.2 \cdot K \cdot \frac{Y^{2\delta }}{M^2 \log^2 L} & \le \frac{(A+\kappa)^2}{(1-\theta)(2\xi-1-\theta)} \log^2 X ~X^{2-2\xi}+ \frac{A+\kappa}{1-\theta} \log X \frac{2A+\kappa}{2\xi-1-\theta} \frac{1}{3} X^{2-2\xi} \notag \\& \qquad \qquad + \frac{(2A+\kappa)(A+\kappa)}{(2-2\xi)(2\xi-1-\theta)} \log X ~K ~ \left( T^{\frac{2-2\xi}{1-\theta}} + \frac{T}{X^{2\xi-1-\theta}} \right) \\ & \le \frac{1.1 \cdot (A+\kappa)^2}{(1-\theta)(2\xi-1-\theta)} \log^2 X ~X^{2-2\xi} +\frac{(A+\kappa)^2}{(1-\xi)(2\xi-1-\theta)} \log X ~\left( T^{\frac{2-2\xi}{1-\theta}} + \frac{T}{X^{2\xi-1-\theta}} \right)~K. \notag \end{align} Here we specify our parameters. We take $X:=T^{\frac{3.5}{1-\theta}}$ and $\delta:=1.5 \eta$ so that $1-\xi= \delta+\eta= 2.5 \eta$ and the condition $\xi>\frac{1+\theta}{2}$ will be met as long as $\eta<\frac15 (1-\theta)$.
Recall that at the outset we restricted the argument to $\eta\le (1-\theta)/12$ (as otherwise Lemma \ref{l:Littlewood} already furnished the assertion of the theorem). In view of this stringer condition the inequalities \begin{equation}\label{xietadelta} \xi-\theta = 1-2.5\eta-\theta \ge \frac{19}{24} (1-\theta) \quad \textrm{and} \quad 2\xi-1-\theta = 1-5\eta-\theta \ge \frac{7}{12} (1-\theta) \end{equation} obtain easily, so that \eqref{Keysimplified} entails \begin{align*} 1.2\cdot K\frac{T^{\frac{7\delta}{1-\theta}}e^{-4\delta}}{M^2 \log^2 L} & \le \frac{1.1 \cdot 12 \cdot (A+\kappa)^2}{7(1-\theta)^4} 3.5^2~\log^2 T ~T^{\frac{7(1-\xi)}{1-\theta}} +\frac{12~(A+\kappa)^2}{2.5\eta ~7~(1-\theta)^2} 3.5 \log T ~\left(T^{\frac{2(1-\xi)}{1-\theta}} + T^{\frac{(1-\theta)-3.5(1-5\eta-\theta)}{1-\theta}} \right)~K \notag \end{align*} or, cancelling by $T^{\frac{7\delta}{1-\theta}}$, but otherwise equivalently \begin{align}\label{Keyparamterized} 1.2\cdot K\frac{e^{-6\eta}}{M^2 \log^2 L} & \le \frac{23.1 \cdot (A+\kappa)^2}{(1-\theta)^4}~L^2 ~T^{\frac{7\eta}{1-\theta}} +\frac{12~(A+\kappa)^2}{5\eta (1-\theta)^2} L ~\left(T^{\frac{-5.5\eta}{1-\theta}} + T^{\frac{17.5\eta-2.5(1-\theta)}{1-\theta}} \right)~K. \end{align} Since $\eta<(1-\theta)/12$, and $17.5\eta<18\eta \le 1.5(1-\theta)$, here on the right hand side we have $T^{\frac{17.5\eta-2.5(1-\theta)}{1-\theta}} \le T^{-1}\le T^{\frac{-5.5\eta}{1-\theta}} T^{-1/2}$ so in view of $T\ge 200>14^2$ also $T^{\frac{17.5\eta-2.5(1-\theta)}{1-\theta}} \le \frac{1}{14}T^{\frac{-5.5\eta}{1-\theta}}$. Using this, and on the left hand side also $6\eta<(1-\theta)/2\le 1/2$ and $\log^2 L \le (2/e)^2 L$, we get $$ 1.34 \cdot \frac{K}{M^2 L} < 1.2\cdot K\frac{e^{-1/2} e^2/4}{M^2 L} \le 1.2\cdot K \frac{e^{-6\eta}}{M^2 \log^2 L} \le \frac{23.1 \cdot (A+\kappa)^2}{(1-\theta)^4}~L^2 ~T^{\frac{7\eta}{1-\theta}} +\frac{18~(A+\kappa)^2}{7\eta (1-\theta)^2} L ~T^{\frac{-5.5\eta}{1-\theta}}~K, $$ or in other words \begin{align}\label{Keycompressed} 1.34 ~K \le \frac{23.1 \cdot (A+\kappa)^2}{(1-\theta)^4}~L^3 M^2 ~T^{\frac{7\eta}{1-\theta}} +\frac{18~(A+\kappa)^2}{7\eta (1-\theta)^2} L^2 M^2 ~T^{\frac{-5.5\eta}{1-\theta}}~K . \end{align} Recall that $M=M(\xi,2T)$. A reference to Corollary \ref{c:Mesti} furnishes from here (also using $2^{\frac{2(1-\xi)}{1-\theta}} =2^{\frac{5\eta}{1-\theta}} \le \sqrt{2}$) $$ 1.34 ~K \le \frac{23.1 \cdot (A+\kappa)^2}{(1-\theta)^2}~L^3 \sqrt{2} \frac{(2A+\kappa)^2}{(1-\xi)^2(\xi-\theta)^2}~T^{\frac{12 \eta}{1-\theta}} +\frac{18~(A+\kappa)^2}{7\eta } L^2 \sqrt{2} \frac{(2A+\kappa)^2}{(1-\xi)^2(\xi-\theta)^2} ~T^{\frac{-0.5\eta}{1-\theta}}~K, $$ so that according to \eqref{xietadelta} we get $$ 1.34 K \le \frac{23.1 \cdot \sqrt{2} (A+\kappa)^4}{(1-\theta)^2}~L^3 \frac{16\cdot 12^2}{25 \eta^2 7^2 \eta^2}~T^{\frac{12 \eta}{1-\theta}} +\frac{18~\sqrt{2} ~(A+\kappa)^2}{7\eta } L^2 \frac{16 (A+\kappa)^2}{25 \eta^2 7^2 \eta^2} ~T^{\frac{-0.5\eta}{1-\theta}}~K. 
$$ For the second term we apply $L=\log T = \frac{5(1-\theta}{\eta} \log \left( T^{\frac{\eta}{5(1-\theta)}} \right) \le \frac{10}{e} \frac{1-\theta}{\eta} T^{\frac{0.1 \eta}{1-\theta}}$ and get $$ 1.34 K \le \frac{23.1 \cdot \sqrt{2} (A+\kappa)^4}{(1-\theta)^2}~L^3 \frac{16\cdot 12^2}{35^2 \eta^4 }~T^{\frac{12 \eta}{1-\theta}} +\frac{18~\sqrt{2} ~(A+\kappa)^4}{7\eta } \frac{1600 }{(35e)^2 \eta^4 } ~T^{\frac{-0.3\eta}{1-\theta}}~K, $$ or, computing the constants \begin{equation}\label{computingconstants} 1.34 K \le 62 \frac{(A+\kappa)^4}{(1-\theta)^2\eta^4}~L^3~T^{\frac{12 \eta}{1-\theta}} +\frac{0.65~(A+\kappa)^4}{\eta^5} ~T^{\frac{-0.3\eta}{1-\theta}}~K. \end{equation} Let $T_4:=T_4(A,\kappa,\theta,\eta):=(A+\kappa)^{\frac{40(1-\theta)}{\eta}}$ and $T_5:=T_5(\kappa,\theta,\eta):=\exp\left(\frac{25(1-\theta)}{\eta}\log\frac{1}{\eta}\right)$. Assuming $T\ge \max(T_4,T_5)$ we get $(A+\kappa)^4 \le T^{\frac{0.1\eta}{1-\theta}}$ and also $\frac{1}{\eta} \le T^{\frac{\eta}{25(1-\theta)}}$, so that on the right hand side the last term is estimated by a constant times $K$, more precisely $$ 1.34 K \le 62 \frac{(A+\kappa)^4}{(1-\theta)^2\eta^4}~L^3~T^{\frac{12 \eta}{1-\theta}} + 0.65 ~K, $$ whence $0.69 K \le 62 \frac{(A+\kappa)^4}{(1-\theta)^2\eta^4}~L^3~T^{\frac{12 \eta}{1-\theta}}$ and \begin{equation}\label{Kfinal} K \le 90 \frac{(A+\kappa)^4}{(1-\theta)^2\eta^4}~L^3~T^{\frac{12 \eta}{1-\theta}}. \end{equation}
It remains to compare $K$ to $N(\sigma,T)$. By construction, the union of the intervals $[\gamma_k,\gamma_k+10L]$ cover ${\mathcal Z}_{+}(\sigma,10L,T)$. Therefore, Lemmas \ref{l:Littlewood} and \ref{c:zerosinrange} furnish \begin{align}\label{Nfirstestimate} N(\sigma,T)& \le N(\sigma,10L)+ 2 \left( \sum_{k=1}^{K-1} N(\sigma,\gamma_k,\gamma_k+10L) + N(\sigma,\gamma_K,\min(\gamma_K,T)) \right) \notag \\ & \le N(\sigma,10L) + 2 K \max_{10L \le R \le T} N(\sigma,R,\min(R+10L,T)) \notag \\ &\le \frac{1}{\sigma-\theta} \left\{\frac{1}{2} 10L \log (10L) + \left(\log\frac{(A+\kappa)^2}{\sigma-\theta} + 3 \right)10L\right\} \notag \\ &\quad + 2 K\cdot \frac{1}{\sigma-\theta} \left\{ \frac{4}{3\pi} 10L \left(\log\left(\frac{11.4 (A+\kappa)^2}{\sigma-\theta}{T}\right)\right) + \frac{16}{3} \log\left(\frac{60 (A+\kappa)^2}{\sigma-\theta}{T}\right)\right\} \end{align} Let us also assume $T \ge T_6:=T_6(\theta):=\frac{1}{(1-\theta)^{100}}$, say. Then $L\ge 100 \log \frac{1}{1-\theta}$. Further, $L\ge \log T_4$ entails $L\ge 480 \log(A+\kappa)$, and $T\ge T_5$ entails $L\ge 300$, whence taking into account $\sigma-\theta\ge \frac{11}{12}(1-\theta)$, too we obtain $$ \log\left(\frac{(A+\kappa)^2}{\sigma-\theta}\right) \le \log\frac{12}{11} + \frac{L}{240}+\frac{L}{100} \le L\left( \frac{1}{3300}+\frac{1}{240}+\frac{1}{100} \right) \le 0.02 L. $$ Applying in the above this and $\log u/u \le 1/e$ or $\log u/u \le \log u_0/u_0$ whenever $u\ge u_0\ge e$
we are led to \begin{align*} N(\sigma,T) & \le \frac{12}{11(1-\theta)} \bigg\{ 50L \frac{\log 3000}{3000} L + (0.02 L +0.01 L) \cdot 10L \\& \qquad \qquad \qquad + 2 K\left[ \frac{40}{3\pi} L \left(L + \log 13 + 0.02 L \right) + \frac{16}{3} \left(\log 60 + 0.02 L +L\right) \right]\bigg\} \\& \le \frac{12}{11(1-\theta)} \bigg\{ 0.5 L^2 + 8.6 K\cdot L \left(L + 0.01 L + 0.02 L \right) + 10.8 K \left(4.1 + 1.02 L \right)\bigg\} \\ & \le \frac{1}{1-\theta} \bigg\{ L^2 + 10 K\cdot L^2 + 13 K L \bigg\} \le \frac{10.5}{1-\theta} (K+1) L^2. \end{align*} Using \eqref{Kfinal} in this last estimate we obtain \eqref{densityresult}. Therefore, the theorem is proved as soon as we check that all our conditions are met. The assumptions were always of the form $T\ge T_k$ with some explicit $T_k$, except for the condition about $T_2$, where we assumed $T\ge T_2(\xi,\sigma,X):= \max\left(\log^{1/97}X;\left(\frac{30~(A+\kappa)^2(1-\theta)}{(1-\xi)(\xi-\theta)(\sigma-\theta)} \right)^{1/3} X^{\eta/3}\right)$. The first part, $T^{97} \ge \log X= \frac{3.5}{1-\theta} \log T$ is obvious. Also the constant part of the second condition is easy, too, because $T\ge T_1=200 (A+\kappa)^2$ according to the first assumption with $T_1$, while $\frac{1-\theta}{(1-\xi)(\xi-\theta)(\sigma-\theta)} \le \frac{1-\theta}{2.5 \eta \cdot\frac{19}{24}(1-\theta) \cdot \frac{11}{12}(1-\theta)} \le \frac{0.3(1-\theta)}{\eta(1-\theta)^2}=0.3 \frac{1-\theta}{\eta } \cdot \frac{1}{(1-\theta)^2} \le \exp\left(\frac{0.3(1-\theta)}{\eta }\right) \frac{1}{(1-\theta)^2} \le T_5^{0.3/25} \cdot T_6^{1/50} \le T$ in view of \eqref{xietadelta} and $\sigma-\theta=1-\eta-\theta \ge \frac{11}{12}(1-\theta)$. That is, we surely have $T^2\ge \frac{30~(A+\kappa)^2(1-\theta)}{(1-\xi)(\xi-\theta)(\sigma-\theta)}$. What remains to check is if we have $T\ge X^{\eta}$, too. However, $\eta<(1-\theta)/12$, whence $X^{\eta}\le T^{\frac{3.5 \eta}{1-\theta}} \le T^{1/3}$, and the condition is met.
This concludes the proof of Theorem \ref{th:NewDensity} with $T_0:=\max(T_1,T_3,T_4,T_5,T_6)$.
\section{Some consequences of the extended effective Carlson-type density estimate}\label{sec:consequences}
As said, our detailed study of the distribution of zeroes of the Beurling zeta function was motivated by the goal to investigate the questions of Littlewood and Ingham--together with the sharpness of the obtained results--in the Beurling context. Satisfactorily precise results on these questions were obtained in \cite{Rev-One} and \cite{Rev-Many}. In the work on the Littlewood question in \cite{Rev-One}, we referred to the density theorem only in the heuristic considerations on which our construction for the proof of sharpness was based. However, we needed to use Theorem \ref{th:density} heavily in the arguments of \cite{Rev-Many}, hence our results there were restricted to Beurling systems satisfying Conditions B and G of this density result. Moreover, the constants depended also on these conditions and the somewhat implicit handling of them.
Here we made all effort to handle the dependence on Axiom A and the natural parameters of the problem (basically, the quantity $\eta:=1-\sigma$) in order to guarantee that the dependence in those order- and oscillation results can be made explicit, too. The arguments in this paper somewhat suffer from the clumsy explicit calculations with all the arising constants, while the result was not fully optimized\footnote{Nevertheless, our calculations seemed to suggest that the optimization of the exponent could not bring down $12$, that is $\frac{12(1-\sigma)}{1-\theta}$ in the exponent, too much--certainly not below 11. Our final choice was therefore to get a round, integer exponent with a somewhat less messy calculus.} regarding neither the $\log T$ power nor the exponent of $T$. One may also note that the dependence of the constant in \eqref{densityresult} on the value of $\eta$ can be eliminated by noting that the result holds true trivially if $\sigma$ is so large that $\zeta(s)$ has no zero at all in the rectangle. This is certainly true (and nothing more can in general be expected in view of \cite{DMV} !) if $\sigma>1-c/\log T$, that is, if $1/\eta > (1/c) \log T$. Using that provides the corollary that $N(\sigma,T)\le 1000 \frac{(A+\kappa)^4}{(1-\theta)^3} \frac{1}{c^4} T^{\frac{12}{1-\theta}(1-\sigma)} \log^9 T$, always. However, the value of $c$--even if it is plausible that it could be expressed explicitly by means of the main parameters $A, \kappa$ and $\theta$ of Axiom A--was not estimated yet. So in order to preserve the effective nature of our estimate we opted for saving its current form in the main result.
Below we give the resulting effective and generalized formulations of our main results from \cite{Rev-Many}, which one can obtain by replacing Theorem \ref{th:density} by the new Theorem \ref{th:NewDensity}. We leave it to the reader to check the details and to verify that our arguments go through even in these effective versions, now assuming only Axiom A without reference to any other conditions.
The first corollary answers Ingham's question in the Beurling context. For that, we recall the setup of Ingham. Denote by $\eta(t):(0,\infty)\to (0,1/2)$ a nonincreasing function and consider the domain \begin{equation}\label{eq:etazerofree} {\mathcal D}(\eta):=\{ s=\sigma+it \in{\mathbb C}~:~ \sigma>1-\eta(t),~ t>0\}. \end{equation} Following Ingham \cite{Ingham} consider also the derived function (the \emph{Legendre transform} of $\eta$ in logarithmic variables) \begin{equation}\label{omegadef} \omega_{\eta}(x):= \inf_{y>1} \left(\eta(y)\log x+\log y\right). \end{equation} Then we have the following generalized and effective version of Corollary 10 from \cite{Rev-Many}. \begin{theorem}\label{th:upperestimation} Let ${\mathcal G}$ be an arithmetical semigroup satisfying Axiom A. Then we have for any ${\varepsilon}>0$ and any sufficiently large $x>x_0({\varepsilon},A,\kappa,\theta)$ the estimate \begin{equation}\label{eq:uppertheorem} D(x) \le A_3({\varepsilon},A,\kappa,\theta) x\exp(-(1-{\varepsilon})\omega_\eta(x)). \end{equation} \end{theorem} Actually, sharper results with an $\omega$-function directly derived from the set of zeros (and not depending on a domain boundary function $\eta(t)$) hold also true, see in particular Theorem 10 in \cite{Rev-Many}, which generalize now similarly to the above to arbitrary ${\mathcal G}$ with Axiom A. Also, other PNT-and $\zeta(s)$-related quantities are estimated similarly in \cite{Rev-Many}, and now extend similarly as described above. However, we refrain from the technicalities necessary in formulating them here.
Finally, we address sharpness of the above Ingham type result, i.e., a generalization of Pintz' oscillation theorem \cite{Pintz2}. We present here the effective, generalized version of Theorem 6 from \cite{Rev-Many}. \begin{theorem}\label{th:Beurlingdomainosci} Let ${\mathcal G}$ be an arithmetical semigroup satisfying Axiom A, and consider a function $\eta(t)$, which is convex in logarithmic variables (i.e. $\eta(e^v)$ is convex). Further, consider the conjugate function $\omega_\eta$ defined above in \eqref{omegadef}. Assume that there are infinitely many zeroes of $\zeta(s)$ within the domain \eqref{eq:etazerofree}.
Then we have for any ${\varepsilon} >0$ the oscillation estimate ${\Delta}(x)=\Omega(x\exp(-(1+{\varepsilon})\omega_\eta(x)))$ with effective implied constants (depending only on ${\varepsilon}, A, \kappa$ and $\theta$). \end{theorem} As above, similar versions with an $\eta$-independent $\omega$ function also hold true, but we skip the exact details.
Let us offer some comments on the role of density results in these, seemingly different questions of Littlewood and Ingham, and the converse results showing sharpness of these results. As said, the original de la Vallée-Poussin argument for the classical zero-free region \eqref{classicalzerofree} and error term \eqref{classicalerrorterm} was worked out in greater generality by Landau \cite{L}, who derived the latter with any $C<\sqrt{c}$. Later Ingham generalized Landau's argument \cite{Ingham} to general zero-free regions and their corresponding conjugate functions $\omega_\eta$, but his result suffered from the same loss of precision (a factor halving the constant in the exponent, i.e., getting only $|{\Delta}(x)|\ll x\exp((\frac12-{\varepsilon})\omega(x))$). It turned out only much later \cite{Pintz2} that (some) density theorem needs to be invoked to obtain sharp conclusions in this direction (sharpness demonstrated by the exact converse i.e. oscillation result). In this regard, any density theorem with $o(1)$ exponent as $\sigma\to 1-$ suffices, so after Carlson's result \cite{Carlson} the necessary tools were more than available--still it took quite a while until number theorists realized what the sharp form of Landau's and Ingham's estimates would be. In fact, the realization of this possibility of sharpening the original estimates (so in Landau's case to get $C=2\sqrt{c}-{\varepsilon}$, e.g.) was prompted by the search for the sharp converse, i.e., oscillatory result, because until the error term estimate itself is not sharp, a precisely corresponding converse cannot be obtained either.
Let us finally mention a few related questions of varying degree of difficulty, which we consider interesting at this stage of development of the Beurling theory of arithmetical semigroups. As mentioned above, we would be interested to see--a possibly optimal in principle--effective zero-free region estimate of the form \eqref{classicalzerofree}. This would immediately imply by Theorem \ref{th:upperestimation} above, the respective effective error term estimate in the PNT. Note that in case of the Riemann zeta function many later tricky improvements were combined to sharpen the effective zero-free region, all being started by an insightful paper of Stechkin \cite{St}, and continued e.g. in \cite{Kadiri} and \cite{MT-JNT}. However, from Stechkin on, these authors capitalize on the fact that together with each root of the Riemann $\zeta$ function the symmetric (about the critical line $\Re s=1/2$) point is a zero, too, which essentially refers to the functional equation. In case we have no such information, only the classic method of Landau and optimization of the Landau extremal problem on the respective auxiliary nonnegative trigonometric polynomials, is possible. For that direction see \cite{AK} and \cite{Rev-L}.
Naturally, we don't think that the constant 12 in the exponent would be optimal--hopefully later development\footnote{After finalizing this paper, we were informed in email that F. Broucke
obtained Theorem \ref{th:NewDensity} with some better exponent. In particular, in the case when Condition G is also satisfied, he could also recover our earlier exponent $\frac{6-2\theta}{1-\theta}$. The announced proof would go along different, more classical lines, following the large sieve type Dirichlet polynomial mean value estimate approach. It is very remarkable if indeed these can go through with not necessarily natural numbers but only reals, possibly not even spaced uniformly. We were also informed by email that following F. Broucke also Bin Chen could derive this crucial new mean value estimate.} can bring it down.
Once we have a density theorem, much of classical number theory--e.g. estimates for primes in short intervals--may possibly be extended to Beurling systems. It is interesting whether certain regularity of the integers, such as Axiom A itself, ``forces'' the primes to admit some regularity in their short scale behavior, too. We look forward to interesting developments in this regard.
Another interesting question\footnote{The question was posed to us in an email by Hugh L. Montgomery, pointing out that Vinogradov himself claimed that his estimates ``in themselves'' provide the respective improved bounds, but never gave a convincing proof of this statement. The question is hard to interpret and thus to investigate (what does ``in itself'' mean when we work with a concrete function like the Riemann zeta?), but if we fix our setup to Beurling systems, then the question is meaningful (and challenging).} is whether assuming extra hypotheses, such as some Vinogradov type better estimates on $\zeta(s)$ either exclusively on $\Re s=1$ or possibly in some neighborhood of it, entails in itself the drastic improvement known about the prime distribution (as is proved in the case of the Riemann zeta and the natural prime numbers, but invoking in the proofs not only Vinogradov's estimates, but also further extra information, known only for that very case). This assertion can only be analysed if a generalized setup is fixed without additional special information characteristic of the natural number system and the Riemann zeta function. Beurling systems offer themselves as a very natural general setup for studying these issues.
Naturally, the same questions are there both for zero-free regions and also for density estimates.
\noindent \hspace*{5mm} \begin{minipage}{\textwidth} \noindent \hspace*{-5mm} Szilárd Gy.{} Révész\\ Alfréd Rényi Institute of Mathematics\\ Reáltanoda utca 13-15\\ 1053 Budapest, Hungary \\ {\tt [email protected]} \end{minipage}
\end{document}
\begin{document}
\title{Quantum estimation of tripartite coupling in Spin-Magnon-Mechanical Hybrid Systems} \author{Dong Xie} \email{[email protected]} \affiliation{College of Science, Guilin University of Aerospace Technology, Guilin, Guangxi 541004, People's Republic of China} \author{Chunling Xu} \affiliation{College of Science, Guilin University of Aerospace Technology, Guilin, Guangxi 541004, People's Republic of China}
\begin{abstract} Tripartite interactions play a fundamental role in quantum information processing and quantum technology. However, it is generally difficult to realize strong tripartite coupling. We investigate the estimation of a tripartite coupling strength in a hybrid setup composed of a single nitrogen-vacancy (NV) center and a micromagnet. A time-independent parametric drive can be utilized to increase the estimation precision of the tripartite coupling strength. By calculating the quantum Fisher information (QFI), we obtain the optimal estimation precision attainable by measuring the eigenstate of the tripartite system. At the critical position, the QFI is divergent because the preparation time of the eigenstate diverges. When the system is subject to dissipation, the QFI near the critical point of the driven-dissipative phase transition is obtained analytically. The direct intensity measurement is the optimal measurement near the dissipative phase transition point. In addition, we quantify the robustness of an imperfect measurement operator by the measurement noise susceptibility based on the error propagation formula. We find that the direct intensity measurement is sufficiently robust against small measurement disturbances from a coherent drive, but it can be disturbed by nonlinear anti-harmonic measurement noise, especially near the critical point. \end{abstract} \maketitle
\section{Introduction} In the field of quantum information processing, the coupling between quantum systems is an important foundation for realizing various tasks~\cite{lab1,lab2}. The pairwise coherent coupling between a two-level quantum system and a quantized field is described by the Jaynes-Cummings model, which is a paradigmatic regime in quantum optics~\cite{lab3,lab4}. In order to perform more complex tasks, the exploration of interactions beyond pairwise interactions is becoming increasingly important and appealing~\cite{lab5}. However, the realization and control of tripartite interactions are more difficult than those of pairwise interactions.
Most previous studies focus on using pairwise interactions to construct hybrid quantum setups based on magnons in microscopic magnets~\cite{lab6,lab7,lab8,lab9,lab10,lab11,lab12,lab13}, mechanical motion~\cite{lab14,lab15,lab16,lab17}, and nitrogen-vacancy (NV) centers in diamond~\cite{lab18,lab19,lab20,lab21}. Recently, it was shown that a tripartite interaction among single spins, magnons, and phonons can be obtained in a hybrid setup comprising a single NV center in diamond and a micromagnet~\cite{lab22}. The tripartite coupling can be enhanced by a parametric drive that amplifies the mechanical zero-point fluctuations of the vibration mode~\cite{lab23}.
Quantum estimation mainly aims at exploiting quantum resources to increase the precision of measurements~\cite{lab24}. Previous studies have focused on quantum estimation in pairwise interaction systems. For instance, the critical point of a quantum phase transition has been utilized to improve the estimation precision in the quantum Rabi system~\cite{lab25,lab26}. How to use critical resources to improve the measurement precision of a tripartite coupling strength is therefore of great significance.
In this article, we investigate the quantum estimation of the tripartite coupling strength in a hybrid setup comprising a single NV center in diamond and a micromagnet. A time-independent parametric drive is proposed to increase the estimation precision of the tripartite coupling strength. The optimal estimation precision is achieved by measuring the eigenstate of the tripartite system. The relative position between the NV center and the micromagnet provides an extra degree of freedom, which is used to reach the critical point. At the critical position, the QFI is divergent because the preparation time of the eigenstate diverges. When the system is subject to a dissipation process, the QFI near the critical point of the driven-dissipative phase transition is obtained analytically. The direct intensity measurement is the optimal measurement near the critical point. Realistic measurements are often imperfect, which reduces the estimation precision. We quantify the robustness of an imperfect measurement operator by the measurement noise susceptibility based on the error propagation formula. The direct intensity measurement is sufficiently robust against small measurement disturbances from a coherent drive. However, it can be disturbed by nonlinear anti-harmonic measurement noise, especially near the critical point.
This article is organized as follows. In Section II, we introduce the tripartite interaction Hamiltonian, which can be obtained in spin-magnon-mechanical hybrid systems, and the QFI based on the eigenstate is obtained. In Section III, the dissipative dynamical evolution is derived. In Section IV, the optimal estimation precision of the tripartite coupling is obtained in the presence of dissipation. In Section V, the direct intensity measurement is shown to be the optimal measurement at the critical point. The feasibility is discussed in Section VI. The measurement noise susceptibility is proposed to quantify the robustness of the measurement operator in Section VII.
\section{Spin-Magnon-Mechanical Hybrid Systems} We focus on the tripartite interaction Hamiltonian, which is described by ($\hbar=1$ throughout this article) \begin{align} H_{\textmd{Tri}}=\lambda (b+b^\dagger)(a+a^\dagger)\sigma_x, \tag{1} \end{align} where $\lambda$ is the tripartite coupling strength to be estimated. Such a tripartite interaction Hamiltonian can appear in spin-magnon-mechanical hybrid systems composed of a magnon mode [hosted, e.g., in a yttrium iron garnet (YIG) sphere] and an NV center in diamond, as shown in Fig.~\ref{fig.1}. \begin{figure}
\caption{Schematic diagram of a spin-magnon-mechanical hybrid system. $\lambda$ represents the tripartite coupling strength among the NV center spin with frequency $\omega_{NV}$, the magnon mode with frequency $\omega_K$, and the mechanical mode with frequency $\omega_m$. }
\label{fig.1}
\end{figure}
The free Hamiltonian of the magnon is given by the Kittel mode \begin{align} H_\textmd{K}=\omega_\textmd{K}a^\dagger a, \tag{2} \end{align} where $\omega_\textmd{K}=\gamma B_Z$ with the gyromagnetic ratio $\gamma$ and a large external magnetic field $B_Z$. The free Hamiltonian of the NV center in diamond, treated as a magnetic dipole, is given by \begin{align} H_{\textmd{NV}}=\omega_{\textmd{NV}}\sigma_z/2, \tag{3} \end{align}
where the Pauli operators $\sigma_i$ are defined in the basis $\{|g\rangle,|e\rangle\}$ and $\omega_{\textmd{NV}}$ denotes the resonance frequency of the NV center. The interaction Hamiltonian between the magnetic field of the magnon and the NV center is described by \begin{align} H_{\textmd{int}}=-(g_e\mu_B)\hat{\vec{B}}(\vec{r})\cdot\hat{\vec{S}}, \tag{4} \end{align} where the Land\'{e} factor $g_e$, Bohr magneton $\mu_B$, and spin operators $\hat{\vec{S}}=1/2(\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z)$.
We consider that the magnetic field generated by the magnon (YIG) is along the coordinate vector $\vec{e}_x$~\cite{lab22} \begin{align}
\hat{\vec{B}}(\vec{r})=-\frac{\mu_0\sqrt{3 |\gamma|M_sR^3/(8\pi )}}{3 r^3}\vec{e}_x, \tag{5} \end{align} where $R$ is a micromagnet of radius, $\mu_0$ is the permeability of vacuum, $M_s$ is the saturation magnetization, and $r$ is the distance between the NV center and the YIG sphere. $r$ can be reexpressed as $r=r_0+z$ with $r_0$ denoting the equilibrium part of the distance.
Expanding to first order in the coordinate $z$, and quantizing the center-of-mass vibration (phonon) as $\hat{z}=z_{zpf}(\hat{b}+\hat{b}^\dagger)$ with the zero-point fluctuation $z_{zpf}=\sqrt{1/(2M\omega_m)}$, we obtain the interaction Hamiltonian \begin{align} H_{\textmd{int}}=\lambda (b+b^\dagger)(a+a^\dagger)\sigma_x+g_0(a+a^\dagger)\sigma_x \tag{6} \end{align} with the pairwise coupling $g_0=\frac{\lambda r_0}{3z_{zpf}}$ and the tripartite spin-magnon-phonon coupling strength \begin{align}
\lambda=\frac{3g_e\mu_0\mu_B}{8\pi r_0^4}\sqrt{\frac{4\pi|\gamma|M_sR^3}{3M\omega_m}}, \tag{7} \end{align} where $\omega_m$ denotes the vibration frequency of the phonon mode, and $M$ is the effective mass of the phonon mode.
Then, the total Hamiltonian of spin-magnon-mechanical hybrid system is described as \begin{align} H_{\textmd{Tot}}=&\omega_Ka^\dagger a+\omega_mb^\dagger b+\omega_{NV}\sigma_z/2\nonumber\\ &+\lambda (b+b^\dagger)(a+a^\dagger)\sigma_x+g_0(a+a^\dagger)\sigma_x \tag{8} \end{align}
In order to enhance the tripartite coupling strength, the center-of-mass motion of the trapped diamond particle can be driven by an additional electrical potential~\cite{lab27} with the time-independent stiffness Hamiltonian \begin{align} H_\textmd{d}=-\Omega_p({b^2}^\dagger+b^2).\tag{9}\end{align} In the squeezed frame obtained by applying the unitary transformation $U_s(r)=\exp[r(b^2-{b^2}^\dagger)]$, the total Hamiltonian can be written as
\begin{align} H^s_{\textmd{Tot}}=&\omega_\textmd{K}a^\dagger a+\Delta_mb^\dagger b+\omega_{\textmd{NV}}\sigma_z/2+\nonumber\\ &\lambda_{e} (b+b^\dagger)(a+a^\dagger)\sigma_x+g_0(a+a^\dagger)\sigma_x \tag{10} \end{align} where $\Delta_m=(\omega_m-\Omega_p)/\cosh 2r$ and $\lambda_{e}=\lambda e^r$ with the squeezing parameter $r$ defined as $\tanh 2r=\Omega_p/(\omega_m-\Omega_p)$.
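It is instructive to make the enhancement explicit. From $\tanh 2r=\Omega_p/(\omega_m-\Omega_p)$ one readily finds, for $0<\Omega_p<\omega_m/2$, $$ \Delta_m=\frac{\omega_m-\Omega_p}{\cosh 2r}=\sqrt{\omega_m(\omega_m-2\Omega_p)}, \qquad e^{r}=\left(\frac{\omega_m}{\omega_m-2\Omega_p}\right)^{1/4}, $$ so that as $\Omega_p$ approaches $\omega_m/2$ the effective detuning $\Delta_m$ tends to zero while the enhanced coupling $\lambda_e=\lambda e^r$ grows; this is the large-squeezing regime used below.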
Applying the Schrieffer-Wolff transformation \begin{align} U=\exp[i\frac{1}{\omega_{\textmd{NV}}}[\lambda_e(b+b^\dagger)+g_0](a+a^\dagger)\sigma_y] \tag{11} \end{align} under the limit condition $\omega_\textmd{K}/\omega_{\textmd{NV}}\rightarrow0$~\cite{lab28}, we can achieve the decoupling Hamiltonian \begin{align} H_{\textmd{Tot}}^S=\omega_\textmd{K}a^\dagger a+\Delta_mb^\dagger b+\omega_{\textmd{NV}}\sigma_z/2+\nonumber\\ \frac{1}{\omega_{NV}}[\lambda_{e} (b+b^\dagger)+g_0]^2(a+a^\dagger)^2\sigma_z. \tag{12} \end{align}
For a large squeezing parameter $r$, $\Delta_m$ is close to 0. The $n$th eigenstate of the system is described as \begin{align}
|\psi_n(\xi)\rangle=S(\xi)|n\rangle|\downarrow\rangle|x_b\rangle\tag{13}\label{13}, \end{align} where $S(\xi)=\exp\{(\xi/2){a^2}^\dagger-(\xi^*/2){a^2}\}$ is the squeeze operator with the squeezing parameter $\xi=-\frac{1}{4}\ln\{1-\frac{4\Lambda^2}{\omega_{\textmd{NV}}\omega_\textmd{K}}\}$, and $\Lambda=\lambda_ex_b+g_0$.
When $\frac{4\Lambda^2}{\omega_{\textmd{NV}}\omega_\textmd{K}}=1$, the squeezing parameter $\xi$ is divergent. This indicates that a transition between the normal phase and the superradiant phase occurs. Then, we can obtain the critical positions $x_\pm=\pm\frac{\sqrt{\omega_{\textmd{NV}}\omega_\textmd{K}}}{2}-g_0$. When $x_-<x_b<x_+$, the system is in the normal phase; when $x_b<x_-$ or $x_b>x_+$, the system is in the superradiant phase~\cite{lab29}.
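A simple quantitative signature of this transition is the mean magnon number in the eigenstate \eqref{13}; by the standard transformation property of squeezed Fock states (recalled here only for orientation), $$ \langle\psi_n(\xi)|a^\dagger a|\psi_n(\xi)\rangle = n\cosh 2\xi+\sinh^2\xi, $$ which diverges as $\xi\rightarrow\infty$, i.e. as $x_b$ approaches one of the critical positions $x_\pm$.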
Our goal is to estimate the strength of the tripartite coupling $\lambda$. The measurement precision of $\lambda$ can be derived through the quantum Cram\'{e}r-Rao bound~\cite{lab30}
\begin{align} \delta \lambda\geq1/\sqrt{\mathcal{F}_\lambda},\tag{14}\label{14} \end{align} where $\mathcal{F}_\lambda$ denotes the QFI associated with the parameter $\lambda$.
For the pure state in Eq.~(\ref{13}), the QFI $\mathcal{F}_\lambda$ can be calculated by using the derivative of the wave function with respect to the unknown parameter
\begin{align}
\mathcal{F}_\lambda=4[\langle\partial_\lambda\psi_n(\xi)|\partial_\lambda\psi_n(\xi)\rangle-|\langle\psi_n(\xi)|\partial_\lambda\psi_n(\xi)\rangle|^2]\tag{15}\\ =\frac{\Lambda^2e^{2r}x_b^2(n^2+n+1)}{2(\frac{\omega_{NV}\omega_K}{4}-\Lambda^2)^2}\tag{16} \end{align}
For a large number of magnons, $n\gg1$, a measurement precision with Heisenberg scaling with respect to the number of magnons can be achieved, i.e., $\mathcal{F}_\lambda\propto n^2$.
At the critical point $\frac{\omega_{\textmd{NV}}\omega_\textmd{K}}{4}=\Lambda_c^2$, the QFI $\mathcal{F}_\lambda$ is divergent. The reason for this is that the preparation time of the eigenstate will diverge. In order to stay in the eigenstate, we assume that the coupling strength was adiabatically ramped from $\frac{\Lambda}{\sqrt{\omega_{\textmd{NV}}\omega_\textmd{K}}/2}=0$ to $\frac{\Lambda}{\sqrt{\omega_{\textmd{NV}}\omega_\textmd{K}}/2}<1$. The time of such an adiabatic sweep is given by
\begin{align} T\approx \left(2\gamma \omega_\textmd{K}\sqrt{1-{4\Lambda^2}/{\omega_{\textmd{NV}}\omega_\textmd{K}}}\right)^{-1}\tag{17}\label{17}, \end{align} where $\gamma\ll1$ for satisfying the adiabatic condition\cite{lab30a}. Using this time, the QFI $\mathcal{F}_\lambda$ can be repressed as \begin{align} \mathcal{F}_\lambda=\frac{8(\gamma\omega_KT)^4\Lambda^2e^{2r}x_b^2n^2}{\Lambda_c^4}\tag{18}\label{18}. \end{align} It shows that $\mathcal{F}_\lambda $ is proportional to the fourth power of time $T^4$ without any dissipation process.
\section{Dissipation dynamics} We consider that the total system is subject to dissipation processes and a coherent drive. The master equation can be written as \begin{align} \frac{d\rho}{dt}=&-i[H^S_{\textmd{Tot}}+H_{\textmd{cd}},\rho]+(\kappa_aD[a]+\kappa_bD[b]\nonumber\\ &+\kappa_{\sigma^-}D[\sigma^-])\rho,\tag{20}\label{20} \end{align} where the coherent driving Hamiltonian is $H_{\textmd{cd}}=iF(a+a^\dagger)$ with the driving strength $F$, and the dissipation superoperator is given by $D[c]\rho=2c\rho c^\dagger-c^\dagger c\rho-\rho c^\dagger c$ with $c=(a, b, \sigma^-)$. The Langevin equations corresponding to Eq.~(\ref{20}) can be derived from the formula~\cite{lab31,lab32}
\begin{align} \frac{d\mathcal{O}}{dt}=&i[H^S_{\textmd{Tot}}+H_{cd},\mathcal{O}]+\sum_{c=a, b, \sigma^-}\big\{-[\mathcal{O},c^\dagger](\kappa_cc-\sqrt{2\kappa_c}c_{in})\nonumber\\ &+(\kappa_cc^\dagger-\sqrt{2\kappa_c}c^\dagger_{in})[\mathcal{O},c]\big\},\tag{21}\label{21} \end{align} where $\mathcal{O}$ can be any operator and the input quantum Gaussian noise operators $c_{in}(t)$ at zero temperature are characterized as \begin{align} &\langle c_{in}(t) \rangle=\langle c^\dagger_{in}(t') \rangle=0,\ \ \langle c^\dagger_{in}(t) c_{in}(t') \rangle=0,\tag{22}\label{22}\\ &\langle c_{in}(t)c^\dagger_{in}(t') \rangle=\delta(t-t').\tag{23} \end{align}
Utilizing the above equations, we can obtain the evolution dynamics according to Eq.~(\ref{21}) \begin{align} \dot{X}_a&=-\kappa_aX_a+\omega_KP_a+A_{a+},\tag{23}\label{23}\\ \dot{P}_a&=-\kappa_aP_a-\omega_KX_a-2\sigma_x(\lambda e^rX_b+g)-F+A_{a-},\tag{24}\label{24}\\ \dot{X}_b&=-\kappa_bX_b+\omega_mP_b+A_{b+},\tag{25}\label{25}\\ \dot{P}_b&=-\kappa_bP_b-\omega_mX_b-2\sigma_x\lambda e^rX_a+A_{b-},\tag{26}\label{26}\\ \dot{\sigma_x}&=-\kappa_{\sigma^-}\sigma_x-\omega_z\sigma_y-2\sigma_zA_{\sigma^-+},\tag{27}\label{27}\\ \dot{\sigma_y}&=-\kappa_{\sigma^-}\sigma_y+\omega_z\sigma_x-2\sigma_zX_a(\lambda e^rX_b+g)+2\sigma_zA_{\sigma^--},\tag{28}\label{28}\\ \dot{\sigma_z}&=-2\kappa_{\sigma^{-}}\sigma_z-\kappa_{\sigma^{-}}+2\sigma_yX_a(\lambda e^rX_b+g)\nonumber\\ &\ +\sqrt{2\kappa_{\sigma^{-}}}(\sigma^-\sigma^\dagger_{in}+\sigma^+\sigma_{in}),\tag{29}\label{29}\ \end{align} where the operators are defined as $A_{c+}=\sqrt{2\kappa_c}(c_{in}+c^\dagger_{in})$, $X_c=c+c^\dagger$, and $P_c=i(c^\dagger-c)$.
In order to obtain the steady values, let the time derivatives of the expectation values of the above operators vanish, i.e., $\langle \dot{\mathbf{M}}\rangle=0$ with $\mathbf{M}=(X_a,P_a,X_b,P_b,\sigma_x,\sigma_y,\sigma_z)$. As a result, we can obtain the stable steady values \begin{align} &\langle X_a\rangle=\frac{F\omega_K}{\kappa_a^2+\omega_K^2}, \langle P_a\rangle=\frac{F\kappa_a}{\kappa_a^2+\omega_K^2},\tag{30}\\ &\langle \sigma_z\rangle=-1/2, \langle X_b\rangle=\langle P_b\rangle=\langle \sigma_x\rangle=\langle \sigma_y\rangle=0.\tag{31} \end{align} Then, we can obtain the linearized Langevin equations using the mean-field approximation by expanding an arbitrary operator $M$ in the form of $M=\langle M\rangle+\delta M$. After a careful derivation, we get \begin{align} \dot{\delta{X}_a}&=-\kappa_a\delta X_a+\omega_K\delta P_a+A_{a+},\tag{32}\\ \dot{\delta{P}_a}&=-\kappa_a\delta P_a-\omega_K \delta X_a-2g\delta\sigma_x+A_{a-},\tag{33}\\ \dot{\delta{X}_b}&=-\kappa_b\delta X_b+\omega_m\delta P_b+A_{b+},\tag{34}\\ \dot{\delta{P}_b}&=-\kappa_b\delta P_b-\omega_m\delta X_b-2\lambda e^r\langle X_a\rangle\delta\sigma_x+A_{b-},\tag{35}\\ \dot{\delta{\sigma_x}}&=-\kappa_{\sigma^-}\delta\sigma_x-\omega_z\delta\sigma_y+A_{\sigma^-+},\tag{36}\\ \dot{\delta{\sigma_y}}&=-\kappa_{\sigma^-}\delta\sigma_y+g\delta X_a+\lambda e^r\langle X_a\rangle\delta X_b+\omega_z\delta\sigma_x\nonumber\\ &\ \ -2g\langle X_a\rangle\delta\sigma_z-A_{\sigma^--},\tag{37}\\ \dot{\delta{\sigma_z}}&=-2\kappa_{\sigma^{-}}\delta\sigma_z+2g\langle X_a\rangle \delta\sigma_y.\tag{38} \end{align}
We consider that $\omega_{NV}\gg\omega_K\gg\omega_m$ and $(\kappa_{\sigma^-},\ \kappa_a)\gg\kappa_b$. As a consequence, we can adiabatically eliminate the modes $\delta a$ and $\delta\sigma_x$, $\delta\sigma_y$, $\delta\sigma_z$. By careful derivation, the mode $\delta\sigma_x$ can be obtained \begin{align} \delta \sigma_x\approx-\frac{\lambda \langle X_a\rangle}{\omega_{NV}}.\tag{39} \end{align}
The reduced evolution dynamic of the modes $\delta X_b$ and $\delta P_b$ is described by
\[
\left( \begin{array}{c} \dot{\delta X_b}\\ \dot{\delta P_b} \end{array} \right )= \mathbf{V}\left( \begin{array}{c} \delta X_b\\ \delta P_b \end{array} \right ) + \left( \begin{array}{c} A_{b+}\\ A_{b-} \end{array} \right ),\tag{40}\] where the evolution matrix $\mathbf{V}$ is given by
\[
\mathbf{V}=\left( \begin{array}{cc}
-\kappa_b & \omega_m\\ \omega & -\kappa_b
\end{array} \right ),\tag{41}\] where the parameter $\omega=2\lambda^2\langle X_a\rangle^2/\omega_{NV}-\omega_m$. The eigenvalues of $\mathbf{V}$ are \begin{align} E_\pm=-\kappa_b\pm\sqrt{\omega_m\omega}.\tag{42} \end{align} The characteristic time $\tau$ for the system to reach the steady state is set by the slowest-decaying eigenvalue $E_+$ of $\mathbf{V}$. Therefore, the characteristic time $\tau$ is expressed as \begin{align} \tau=1/(\kappa_b-\sqrt{\omega_m\omega}).\tag{43} \end{align}
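As a quick check of Eqs.~(42) and (43): the characteristic equation of $\mathbf{V}$ reads $(E+\kappa_b)^2=\omega_m\omega$, whose roots are indeed $E_\pm=-\kappa_b\pm\sqrt{\omega_m\omega}$. For $\omega>0$ the dynamics is stable, and a steady state exists, precisely when $\kappa_b>\sqrt{\omega_m\omega}$, and the relaxation towards it is governed by the slowest rate $|E_+|=\kappa_b-\sqrt{\omega_m\omega}=1/\tau$.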
Assuming that the mechanical mode has reached the steady state after a long-time evolution, the solutions read \begin{align} \delta X_b=\int_0^\infty e^{-\kappa_bt}[\cosh(\sqrt{\omega \omega_m}t)A_{b+}\nonumber\\ +\sqrt{\frac{\omega_m}{\omega}}\sinh(\sqrt{\omega \omega_m}t)A_{b-}]\,dt,\tag{44}\\ \delta P_b=\int_0^\infty e^{-\kappa_bt}[\cosh(\sqrt{\omega \omega_m}t)A_{b-}\nonumber\\ +\sqrt{\frac{\omega}{\omega_m}}\sinh(\sqrt{\omega \omega_m}t)A_{b+}]\,dt.\tag{45} \end{align} \section{The optimal estimation precision from the QFI} The corresponding covariance matrix $\mathcal{C}$ for the quadrature operators $q=(b+b^\dagger)/\sqrt{2}$ and $p=i(b^\dagger-b)/\sqrt{2}$ can be derived from the above equations: \begin{align} &\mathcal{C}_{11}=\frac{1}{2}(\langle X_b^2\rangle-\langle X_b\rangle^2)=\frac{2\kappa_b^2-\omega\omega_m+\omega_m^2}{2\Delta},\tag{46}\\ &\mathcal{C}_{22}=\frac{1}{2}(\langle P_b^2\rangle-\langle P_b\rangle^2)=\frac{2\kappa_b^2-\omega\omega_m+\omega^2}{2\Delta},\tag{47}\\ &\mathcal{C}_{12}=\frac{1}{2}[\langle (X_bP_b+P_bX_b)/2\rangle-\langle X_b\rangle\langle P_b\rangle]=\frac{\kappa_b(\omega+\omega_m)}{2\Delta},\tag{48} \end{align} where the parameter $\Delta=\kappa_b^2-\omega\omega_m$.
Since the effective Hamiltonian $H_{eff}=i\mathbf{V}$ is quadratic, the steady state of the system is Gaussian~\cite{lab33}. The QFI of a Gaussian state is given by~\cite{lab26,lab34} \begin{align} \mathcal{F}_\lambda=\frac{1}{2(1+P_\lambda^2)}\textmd{Tr}[(\mathcal{C}^{-1}\mathcal{C}'_\lambda)^2]+2\frac{P_\lambda^{'2}}{1-P_\lambda^4}\nonumber\\ +\langle\mathbf{X}^\top\rangle'_\lambda\mathcal{C}^{-1}\langle\mathbf{X}\rangle'_\lambda,\tag{49} \end{align} where $P_\lambda=\frac{1}{2\sqrt{\textmd{Det}\mathcal{C}}}$ and $\bullet'_\lambda$ denotes the derivative of $\bullet$ with respect to $\lambda$.
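As a minimal numerical sketch of Eq.~(49), assuming for illustration a hypothetical covariance model $\mathcal{C}(\lambda)$ with vanishing first moments (so the last term drops out) and derivatives taken by finite differences, one may write:
\begin{verbatim}
import numpy as np

def purity(C):
    # P_lambda = 1 / (2 sqrt(Det C)), as defined below Eq. (49)
    return 1.0 / (2.0 * np.sqrt(np.linalg.det(C)))

def qfi(C_of_lam, lam, h=1e-6):
    # Gaussian-state QFI of Eq. (49) for zero first moments.
    C  = C_of_lam(lam)
    dC = (C_of_lam(lam + h) - C_of_lam(lam - h)) / (2 * h)
    P  = purity(C)
    dP = (purity(C_of_lam(lam + h)) - purity(C_of_lam(lam - h))) / (2 * h)
    M  = np.linalg.solve(C, dC)          # C^{-1} C'_lambda
    return np.trace(M @ M) / (2 * (1 + P**2)) + 2 * dP**2 / (1 - P**4)

# Hypothetical covariance model, used only to exercise the formula.
C_toy = lambda lam: np.array([[0.5 + lam**2, 0.1 * lam],
                              [0.1 * lam,    0.5 + lam**2]])
print(qfi(C_toy, 0.3))
\end{verbatim}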
When $\Delta$ is close to 0, the QFI can be obtained analytically, \begin{align} \mathcal{F}_\lambda&=\frac{16\omega_m^2\lambda^2e^{4r}\langle X_a\rangle^4}{\omega_{NV}^2\Delta^2},\tag{50}\\ &=\frac{16\omega_m\lambda^2\tau^2e^{4r}\langle X_a\rangle^4}{\omega\omega_{NV}^2}.\tag{51} \end{align} Here, we see that the QFI is proportional to the square of the evolution time, whereas it is proportional to the fourth power of the evolution time in the absence of dissipation. This shows that the dissipation reduces the QFI from the fourth to the second power of the evolution time. From the quantum Cram\'{e}r-Rao bound, the measurement precision of $\lambda$ is given by \begin{align} \delta\lambda\geq\frac{\sqrt{\omega}\omega_{NV}}{4\sqrt{\omega_m}\lambda\tau e^{2r}\langle X_a\rangle^2}.\tag{52} \end{align} \section{The direct intensity measurement} The QFI gives the measurement precision attainable with the optimal measurement operator. However, the optimal measurement may not be easy to implement in an experiment.
Next, we calculate the measurement precision of some experimentally feasible operators by using the error propagation formula \begin{align}
\delta\lambda|_O=\frac{\Delta O}{|\langle\partial O/\partial \lambda\rangle|},\tag{53} \end{align} where $\Delta O=\sqrt{\langle O^2\rangle-\langle O\rangle^2}$ denotes the standard deviation of the measurement operator $O$. With a direct intensity measurement $n=b^\dagger b$, the measurement precision of $\lambda$ is given by \begin{align}
\delta\lambda|_{n}=\frac{\sqrt{(\mathcal{C}_{11}+\mathcal{C}_{22})^2-1}}{|\partial (\mathcal{C}_{11}+\mathcal{C}_{22})/\partial \lambda|}.\tag{54} \end{align} Close to the critical point, $\Delta\rightarrow0$, the precision obtained by the direct intensity measurement can be expressed analytically as \begin{align}
\delta\lambda|_{n}=\frac{\sqrt{\omega}\omega_{NV}}{4\sqrt{\omega_m}\lambda\tau e^{2r}\langle X_a\rangle^2}.\tag{55} \end{align} From this result, we see that the direct intensity measurement attains the precision of the optimal measurement, Eq.~(52). Therefore, the direct intensity measurement is the optimal measurement near the critical point.
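A minimal sketch of the error-propagation estimate of Eqs.~(53)--(54): here $\bar n(\lambda)$ is a hypothetical stand-in for $\mathcal{C}_{11}+\mathcal{C}_{22}$ (for illustration only), and the derivative is taken numerically:
\begin{verbatim}
import numpy as np

def dlam_intensity(nbar, lam, h=1e-6):
    # Eq. (54): delta_lambda = sqrt(nbar^2 - 1) / |d nbar / d lambda|
    n  = nbar(lam)
    dn = (nbar(lam + h) - nbar(lam - h)) / (2 * h)   # numerical derivative
    return np.sqrt(n**2 - 1.0) / abs(dn)

nbar_toy = lambda lam: 1.0 + 5.0 * lam**2   # hypothetical intensity model
print(dlam_intensity(nbar_toy, 0.4))
\end{verbatim}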
Another common measurement is homodyne detection of the quadrature operator $e^{i\theta}b+e^{-i\theta}b^\dagger$. Since the expectation value of the quadrature operator remains equal to 0, no information about $\lambda$ can be extracted by homodyne detection. \section{the measurement noise susceptibility} In this section, we investigate whether the measurement scheme is robust against measurement imperfections. In general, small measurement disturbances are difficult to avoid completely. In order to quantify the influence of measurement noise, we propose the measurement noise susceptibility based on the error propagation formula \begin{align}
\chi[P_M,N_M,\lambda]=\lim_{\epsilon\rightarrow0}\frac{1}{\epsilon}[1-\frac{\delta^2\lambda|_{P_M}}{\delta^2\lambda|_{(1-\epsilon)P_M+\epsilon N_M}}],\tag{56}\label{56} \end{align} where $P_M$ denotes the perfect measurement, $N_M$ the noise measurement, and $(1-\epsilon)P_M+\epsilon N_M$ the practical measurement. When the measurement noise does not have any effect on the measurement precision of $\lambda$, the measurement noise susceptibility $\chi[P_M,N_M,\lambda]=0$. Based on the measurement noise susceptibility, we investigate the robustness of the direct intensity measurement $P_d=b^\dagger b$.
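At finite $\epsilon$, Eq.~(\ref{56}) can be evaluated numerically once a model for the squared precision of the mixed measurement is specified; the sketch below uses purely hypothetical toy models only to illustrate the two limiting behaviours (robust versus susceptible):
\begin{verbatim}
def susceptibility(dlam2_mixed, eps=1e-4):
    # Finite-eps approximation of Eq. (56); dlam2_mixed(eps) returns
    # delta^2(lambda) of the measurement (1-eps)*P_M + eps*N_M.
    return (1.0 - dlam2_mixed(0.0) / dlam2_mixed(eps)) / eps

# Hypothetical models of the squared precision (illustration only):
robust      = lambda eps: 0.01 + 0.5 * eps**2       # noise enters at O(eps^2)
susceptible = lambda eps: 0.01 * (1.0 + 3.0 * eps)  # noise enters at O(eps)

print(susceptibility(robust))        # -> approximately 0
print(susceptibility(susceptible))   # -> approximately 3
\end{verbatim}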
When the noise measurement is a coherent drive $N_M=b^{\dagger j}+b^j$ with $j=1,2,\ldots$, we find that $\delta^2\lambda|_{(1-\epsilon)P_d+\epsilon N_M}=\frac{1}{\delta^2\lambda|_M+\epsilon^2/(1-\epsilon)^2\Delta^2N/|\langle\partial O/\partial \lambda\rangle|^2}$. Utilizing Eq.~(\ref{56}), we obtain the measurement noise susceptibility $\chi[P_d,N_M=b^{\dagger j}+b^j,\lambda]=0$. This shows that the direct intensity measurement is robust against a general coherent drive.
Then, we consider the case where the noise measurement is the anharmonic term $\xi(b^\dagger b)^2$. In order to deal with the higher-order terms, we utilize the decoupling relations~\cite{lab35,lab36} \begin{align} \langle ABC\rangle\approx&\langle AB\rangle\langle C\rangle+\langle A\rangle\langle BC\rangle+\langle AC\rangle\langle B\rangle\nonumber\\ &-2\langle A\rangle\langle B\rangle\langle C\rangle,\tag{57}\\ \langle ABCD\rangle\approx&\langle AB\rangle\langle CD\rangle+\langle AD\rangle\langle BC\rangle+\langle AC\rangle\langle BD\rangle\nonumber\\ &-2\langle A\rangle\langle B\rangle\langle C\rangle\langle D\rangle.\tag{58} \end{align} After an analytical derivation, we obtain the measurement noise susceptibility \begin{align} \chi[P_d,N_M=\xi(b^\dagger b)^2,\lambda]&=\nonumber\\ &\xi\left(2+8\langle b^\dagger b\rangle-\frac{2\langle b^\dagger b\rangle^3+2\langle b^\dagger b\rangle^2}{\partial \langle b^\dagger b\rangle/\partial \lambda}\right).\tag{59} \end{align} In general, the measurement noise susceptibility $\chi[P_d,N_M=\xi(b^\dagger b)^2,\lambda]$ is not equal to 0. As the system approaches the critical point, the measurement noise susceptibility diverges. This shows that the direct intensity measurement is more susceptible to the interference of the nonlinear anharmonic measurement noise at the critical point. Therefore, the strength $\xi$ of the noise measurement must be small to maintain the measurement precision.
\section{parameter feasibility} To realize the time-independent parametric drive, a proper electric potential can be provided by a static Paul trap~\cite{lab37}.
For practical considerations, the decay rate of the Kittel mode is about $\kappa_a\simeq 10~\textmd{MHz}$~\cite{lab38}. The decay rate of the mechanical mode is $\kappa_b=1~\textmd{Hz}$~\cite{lab39}. The decay rate of the NV center spin is about $\kappa_{\sigma^-}\approx1~\textmd{kHz}$~\cite{lab40}. Hence, the condition $(\kappa_{\sigma^-},\ \kappa_a)\gg\kappa_b$ is satisfied. The mechanical frequency is about $\omega_m\approx10~\textmd{kHz}$~\cite{lab41}. The frequency of the NV center spin can be $\omega_{\textmd{NV}}\approx 10~\textmd{GHz}$ and the frequency of the magnon is about $\omega_\textmd{K}\approx1~\textmd{GHz}$. Therefore, the condition $\omega_{NV}\gg\omega_K\gg\omega_m$ is also satisfied.
\section{conclusion} In this article, we study the quantum estimation of the tripartite coupling strength. At the critical point of the mechanical mode, the QFI diverges due to the divergent adiabatic preparation time. In the dissipative process, a coherent single-particle drive is utilized to obtain the driven-dissipative phase transition. The QFI around the critical point is obtained analytically, which shows that the dissipation reduces the QFI from the fourth to the second power of the evolution time. The direct intensity measurement is the optimal measurement near the critical point. In addition, we quantify the robustness of an imperfect measurement operator by the measurement noise susceptibility based on the error propagation formula. The direct intensity measurement is sufficiently robust against small measurement disturbances from a coherent drive. However, it can be disturbed by the nonlinear anharmonic measurement noise, especially near the critical point. Hence, it is necessary to reduce the disturbance from the nonlinear anharmonic measurement noise in the experiment.
\end{document}
\begin{document}
\title{The Magic of Logical Inference in Probabilistic Programming}
\author[B.~Gutmann, I.~Thon, A.~Kimmig, M.~Bruynooghe, L.~De~Raedt]{Bernd Gutmann,
Ingo Thon, Angelika Kimmig, Maurice Bruynooghe and Luc De~Raedt \\
Department of Computer Science, Katholieke Universiteit Leuven,\\
Celestijnenlaan 200A - bus 2402, 3001 Heverlee, Belgium\\
\{firstname.lastname\}@cs.kuleuven.be}
\maketitle
\begin{abstract}
Today, many different probabilistic programming languages exist and
even more inference mechanisms for these languages. Still, most
logic programming based languages use backward reasoning based on
SLD resolution for inference. While these methods are typically
computationally efficient, they often can neither handle infinite
and/or continuous distributions, nor evidence. To overcome these
limitations, we introduce distributional clauses, a variation and
extension of Sato's distribution semantics. We also contribute a
novel approximate inference method that integrates forward reasoning
with importance sampling, a well-known technique for probabilistic
inference. To achieve efficiency, we integrate two logic programming
techniques to direct forward sampling. Magic sets are used to focus
on relevant parts of the program, while the integration of backward
reasoning allows one to identify and avoid regions of the sample
space that are inconsistent with the evidence. \end{abstract}
\section{Introduction} The advent of statistical relational learning \cite{Getoor07,DeRaedtAPRIL08} and probabilistic programming \cite{NIPSWorkshop} has resulted in a vast number of different languages and systems such as PRISM~\cite{SatoKameya:01}, ICL~\cite{Poole08}, ProbLog~\cite{DeRaedt07-IJCAIa}, Dyna~\cite{Eisner05}, BLPs~\cite{Kersting08}, CLP($\mathcal{BN}$)~\cite{clpbn}, BLOG~\cite{Milch05}, Church~\cite{Goodman08}, IBAL~\cite{Pfeffer01}, and MLNs~\cite{Richardson:06}. While inference in these languages generally involves evaluating the probability distribution defined by the model, often conditioned on evidence in the form of known truth values for some atoms, this diversity of systems has led to a variety of inference approaches. Languages such as IBAL, BLPs, MLNs and CLP($\mathcal{BN}$) combine knowledge-based model construction to generate a graphical model with standard inference techniques for such models. Some probabilistic programming languages, for instance BLOG and Church, use sampling for approximate inference in generative models, that is, they estimate probabilities from a large number of randomly generated program traces. Finally, probabilistic logic programming frameworks such as ICL, PRISM and ProbLog, combine SLD-resolution with probability calculations.
So far, the second approach based on sampling has received little attention in logic programming based systems. In this paper, we investigate the integration of sampling-based approaches into probabilistic logic programming frameworks to broaden the applicability of these. Particularly relevant in this regard are the ability of Church and BLOG to sample from continuous distributions and to answer conditional queries of the form $p(q |e )$ where $e$ is the evidence. To accommodate (continuous and discrete) distributions, we introduce \emph{distributional clauses}, which define random variables together with their associated distributions, conditional upon logical predicates. Random variables can be passed around in the logic program and the outcome of a random variable can be compared with other values by means of special built-ins. To formally establish the semantics of this new construct, we show that these random variables define a basic distribution over facts (using the comparison built-ins) as required in Sato's distribution semantics~\cite{Sato:95}, and thus induces a distribution over least Herbrand models of the program. This contrasts with previous instances of the distribution semantics in that we no longer enumerate the probabilities of alternatives, but instead use arbitrary densities and distributions.
From a logic programming perspective, BLOG~\cite{milch:aistats2005} and related approaches perform \emph{forward reasoning}, that is, the samples needed for probability estimation are generated starting from known facts and deriving additional facts, thus generating a \emph{possible world}. PRISM and related approaches follow the opposite approach of \emph{backward reasoning}, where inference starts from a query and follows a chain of rules backwards to the basic facts, thus generating \emph{proofs}. This difference is one of the reasons for using sampling in the first approach: exact forward inference would require that all possible worlds be generated, which is infeasible in most cases. Based on this observation, we contribute a new inference method for probabilistic logic programming that combines sampling-based inference techniques with forward reasoning. On the probabilistic side, the approach uses rejection sampling~\cite{KollerFriedman09}, a well-known sampling technique that rejects samples that are inconsistent with the evidence. On the logic programming side, we adapt the \emph{magic set} technique~\cite{bancilhon} towards the probabilistic setting, thereby combining the advantages of forward and backward reasoning. Furthermore, the inference algorithm is improved along the lines of the \emph{SampleSearch} algorithm~\cite{Gogate09samplesearch:importance}, which avoids choices leading to a sample that cannot be used in the probability estimation due to inconsistency with the evidence. We realize this using a heuristic based on backward reasoning with limited proof length, the benefit of which is experimentally confirmed. This novel approach to inference creates a number of new possibilities for applications of probabilistic logic programming systems, including continuous distributions and Bayesian inference.
This paper is organized as follows: we start by reviewing the basic concepts in Section~\ref{sec:prelim}. Section~\ref{sec:semantics} introduces the new language and its semantics, Section~\ref{sec:algorithms} a novel forward sampling algorithm for probabilistic logic programs. Before concluding, we evaluate our approach in Section~\ref{sec:experiments}.
\section{Preliminaries} \label{sec:prelim} \subsection{Probabilistic Inference} \label{sec:probinf}
A discrete probabilistic model defines a probability distribution~$p(\cdot)$ over a set~$\Omega$ of basic outcomes, that is, value assignments to the model's random variables. This distribution can then be used to evaluate a conditional probability distribution $p(q|e) =\frac{p(q\wedge e)}{p(e)}$, also called \emph{target distribution}. Here, $q$ is a query involving random variables, and $e$ is the \emph{evidence}, that is, a partial value assignment of the random variables\footnote{If $e$ contains
assignments to continuous variables, $p(e)$ is zero. Hence, evidence
on continuous values has to be defined via a probability density
function, also called a sensor model.}. Evaluating this target distribution is called \emph{probabilistic
inference}~\cite{KollerFriedman09}. In probabilistic logic programming, random variables often correspond to ground atoms, and~$p(\cdot)$ thus defines a distribution over truth value assignments, as we will see in more detail in Sec.~\ref{sec:ds} (but see also~\citeNP{NIPSWorkshop}). Probabilistic inference then asks for the probability of a logical query being true given truth value assignments for a number of such ground atoms.
In general, the probability~$p(\cdot)$ of a query~$q$ is, in the discrete case, the sum over those outcomes $\omega\in \Omega$ that are consistent with the query. In the continuous case, the sum is replaced by a (multidimensional) integral and the distribution~$p(\cdot)$ by a (product of) densities $\mathbf{F}(\cdot)$. That is, \begin{equation}\label{eq:exact} p(q)= \sum_{\omega\in \Omega} p(\omega) \mymathbb{1}_{q}(\omega), \quad \quad \text{and}\quad \quad p(q)=\idotsint\limits_\Omega \mymathbb{1}_q(\omega) d\mathbf{F}(\omega) \end{equation} where $\mymathbb{1}_{q}(\omega)=1$ if $\omega\models q$ and $0$ otherwise. As is common (e.g.~\cite{wasserman04allof}), we will for convenience use the notation $\int x\,dF(x)$ for both discrete and continuous distributions.
As $\Omega$ is often very large or even infinite, exact inference based on the summation in~\eqref{eq:exact} quickly becomes infeasible, and inference has to resort to approximation techniques based on \emph{samples}, that is, randomly drawn outcomes $\omega\in
\Omega$. Given a large set of such samples $\{s_1,\ldots,s_N\}$ drawn from~$p(\cdot)$, the probability~$p(q)$ can be estimated as the fraction of samples where $q$ is true. If samples are instead drawn from the target distribution $p(\cdot|e)$, the latter can directly be estimated as \begin{equation*}
\hat{p}(q|e) := \frac{1}{N}\sum_{i=1}^N \mymathbb{1}_q(s_i) \enspace. \end{equation*}
However, sampling from~$p(\cdot|e)$ is often highly inefficient or infeasible in practice, as the evidence needs to be taken into account. For instance, if one would use the standard definition of conditional probability to generate samples from $p(\cdot)$, all samples that are not consistent with the evidence do not contribute to the estimate and would thus have to be discarded or, in sampling terminology, \emph{rejected}.
More advanced sampling methods therefore often resort to a so-called \emph{proposal distribution} which allows for easier sampling. The error introduced by this simplification then needs to be accounted for when generating the estimate from the set of samples. An example for such a method is \emph{importance sampling}, where each sample~$s_i$ has an associated \emph{weight}~$w_i$. Samples are drawn from an
\emph{importance distribution}~$\pi(\cdot |e)$, and weights are defined as $w_i=\frac{p(s_i|e)}{\pi(s_i|e)}$. The true target distribution can then be estimated as \begin{equation*}
\hat{p}(q|e) = \frac{1}{W} \sum_{i=1}^Nw_i\cdot \mymathbb{1}_q(s_i) \end{equation*} where $W=\sum_i w_i$ is a normalization constant. The simplest instance of this algorithm is \emph{rejection sampling} as already sketched above, where the samples are drawn from the prior distribution~$p(\cdot)$ and weights are $1$ for those samples consistent with the evidence, and $0$ for the others. Especially for evidence with low probability, rejection sampling suffers from a very high rejection rate, that is, many samples are generated, but do not contribute to the final estimate. This is known as the \emph{rejection
problem}. One way to address this problem is \emph{likelihood
weighted sampling}, which dynamically adapts the proposal distribution during sampling to avoid choosing values for random variables that cause the sample to become inconsistent with the evidence. Again, this requires corresponding modifications of the associated weights in order to produce correct estimates.
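To make the difference between rejection sampling and likelihood weighting concrete, the following Python sketch (a toy two-variable model chosen only for illustration, not one of the programs discussed later) estimates $p(q|e)$ both ways; both estimators converge to the exact value $27/41\approx 0.659$:
\begin{verbatim}
import random
random.seed(0)

# Toy model: x1 ~ Bern(0.3); x2 | x1 ~ Bern(0.9 if x1 else 0.2).
# Evidence e: x2 = true.  Query q: x1.

def rejection(n):
    kept, hits = 0, 0
    for _ in range(n):
        x1 = random.random() < 0.3
        x2 = random.random() < (0.9 if x1 else 0.2)
        if x2:                    # keep only samples consistent with e
            kept += 1
            hits += x1
    return hits / kept

def likelihood_weighting(n):
    w_sum, w_hits = 0.0, 0.0
    for _ in range(n):
        x1 = random.random() < 0.3
        w = 0.9 if x1 else 0.2    # weight = p(e | sampled x1); no rejections
        w_sum += w
        w_hits += w * x1
    return w_hits / w_sum

print(rejection(100000), likelihood_weighting(100000))
\end{verbatim}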
\subsection{Logical Inference} \label{sec:forward}
A (definite) clause is an expression of the form $\mathtt{h\mbox{ :- }
b_1, \ldots ,b_n}$, where $\mathtt{h}$ is called head and $\mathtt{b_1,\ldots,b_n}$ is the body. A program consists of a set of clauses and its semantics is given by its least Herbrand model. There are at least two ways of using a definite clause in a logical derivation. First, there is \emph{backward chaining}, which states that to prove a goal $\mathtt{h}$ with the clause it suffices to prove $\mathtt{b_1, \ldots,b_n}$; second, there is \emph{forward chaining}, which starts from a set of known facts $\mathtt{b_1, \ldots ,b_n}$ and the clause and concludes that $\mathtt{h}$ also holds~(cf.~\cite{nilsson:book}). Prolog employs backward chaining (SLD-resolution) to answer queries. SLD-resolution is very efficient both in terms of time and space. However, similar subgoals may be derived multiple times if the query contains recursive calls. Moreover, SLD-resolution is not guaranteed to always terminate (when searching depth-first). Using forward reasoning, on the other hand, one starts with what is known and employs the immediate consequence operator $T_P$ until a fixpoint is reached. This fixpoint is identical to the least Herbrand model.
\begin{definition}[$T_P$ operator]
\label{def:tpop}
Let $P$ be a logic program containing a set of definite clauses and
$ground(P)$ the set of all ground instances of these clauses.
Starting from a set of ground facts $S$ the $T_P$ operator returns
\begin{equation*}
T_P(S) = \{ \mathtt{h} \mid \mathtt{h \mbox{ :- } b_1,\ldots,b_n} \in
ground(P) \mbox{ and } \{ \mathtt{b_1}, \ldots , \mathtt{b_n}\}
\subseteq S\}
\end{equation*} \end{definition}
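For illustration, a naive implementation of the $T_P$ operator and of its least fixpoint for a small, already ground program can be sketched in Python as follows (clauses are pairs of a head and a list of body atoms; facts have an empty body):
\begin{verbatim}
program = [
    ("edge(a,b)", []), ("edge(b,c)", []),
    ("path(a,b)", ["edge(a,b)"]), ("path(b,c)", ["edge(b,c)"]),
    ("path(a,c)", ["edge(a,b)", "path(b,c)"]),
]

def T_P(S, clauses):
    # heads of all ground clauses whose body is contained in S
    return {h for (h, body) in clauses if all(b in S for b in body)}

def least_fixpoint(clauses):
    S = set()
    while True:
        S_next = T_P(S, clauses)
        if S_next == S:
            return S            # least Herbrand model of the program
        S = S_next

print(sorted(least_fixpoint(program)))
\end{verbatim}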
\subsection{Distribution Semantics} \label{sec:ds}
Sato's distribution semantics~\cite{Sato:95} extends logic programming to the probabilistic setting by choosing truth values of basic facts randomly. The core of this semantics lies in splitting the logic program into a set $F$ of \emph{facts} and a set $R$ of \emph{rules}. Given a probability distribution $P_F$ over the facts, the rules then allow one to extend $P_F$ into a distribution over least Herbrand models of the logic program. Such a Herbrand model is called a \emph{possible world}.
More precisely, it is assumed that $DB = F\cup R$ is ground and denumerable, and that no atom in $F$ unifies with the head of a rule in $R$. Each truth value assignment to $F$ gives rise to a unique least Herbrand model of $DB$. Thus, a probability distribution $P_F$ over $F$ can directly be extended into a distribution $P_{DB}$ over these models. Furthermore, Sato shows that, given an enumeration $f_1,f_2,\ldots$ of facts in $F$, $P_F$ can be constructed from a series of finite distributions $P_F^{(n)}(f_1=x_1,\ldots, f_n=x_n)$ provided that the series fulfills the so-called compatibility condition, that is, \begin{equation}
\label{eq:compat}
P_F^{(n)}(f_1=x_1,\ldots, f_n=x_n) =
\sum_{x_{n+1}}P_F^{(n+1)}(f_1=x_1,\ldots, f_{n+1}=x_{n+1}) \end{equation}
\section{Syntax and Semantics} \label{sec:semantics}
Sato's distribution semantics, as summarized in Sec.~\ref{sec:ds}, provides the basis for most probabilistic logic programming languages including PRISM~\cite{SatoKameya:01}, ICL~\cite{Poole08}, CP-logic~\cite{Vennekens09} and ProbLog~\cite{DeRaedt07-IJCAIa}. The precise way of defining the basic distribution $P_F$ differs among languages, though the theoretical foundations are essentially the same. The most basic instance of the distribution semantics, employed by ProbLog, uses so-called \emph{probabilistic facts}. Each ground instance of a \emph{probabilistic fact} directly corresponds to an independent random variable that takes either the value ``true'' or ``false''. These probabilistic facts can also be seen as binary switches, cf.~\cite{Sato:95}, which again can be extended to multi-ary switches or choices as used by PRISM and ICL. For switches, at most one of the probabilistic facts belonging to the switch is ``true'' according to the specified distribution. Finally, in CP-logic, such choices are used in the head of rules leading to the so-called \emph{annotated disjunction}.
Hybrid ProbLog~\cite{gutmann10ilp} extends the distribution semantics with continuous distributions. To allow for exact inference, Hybrid ProbLog imposes severe restrictions on the distributions and their further use in the program. Two sampled values, for instance, cannot be compared against each other. Only comparisons that involve one sampled value and one number constant are allowed. Sampled values may not be used in arithmetic expressions or as parameters for other distributions, for instance, it is not possible to sample a value and use it as the mean of a Gaussian distribution. It is also not possible to reason over an unknown number of objects as BLOG~\cite{Milch05} does, though this is the case mainly for algorithmic reasons.
Here, we alleviate these restrictions by defining the basic distribution $P_F$ over probabilistic facts based on both discrete and continuous random variables. We use a three-step approach to define this distribution. First, we introduce explicit random variables and corresponding distributions over their domains, both denoted by terms. Second, we use a mapping from these terms to terms denoting (sampled) outcomes, which, then, are used to define the basic distribution $P_F$ on the level of probabilistic facts. For instance, assume that an urn contains an unknown number of balls where the number is drawn from a Poisson distribution and we say that this urn contains many balls if it contains at least $10$ balls. We introduce a random variable $\mathtt{number}$, and we define $\mathtt{many} \mbox{ :- } \mathtt{dist\_gt(\simeq\!\!(number), 9).}$ Here, $\simeq\!\!(\mathtt{number})$ is the Herbrand term denoting the sampled value of $\mathtt{number}$, and $\mathtt{dist\_gt(\simeq\!\!(number), 9)}$ is a probabilistic fact whose probability of being true is the expectation that this value is actually greater than $9$. This probability then carries over to the derived atom $\mathtt{many}$ as well. We will elaborate on the details in the following.
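As a numerical sanity check of this introductory example, the probability that $\mathtt{many}$ is true, i.e.\ the expectation that a Poisson(6)-distributed value is greater than $9$, can be estimated by plain Monte Carlo sampling (a Python sketch, independent of the inference machinery developed below); the estimate is approximately $0.084$:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

# Estimate of P(many) = P(number > 9) with number ~ Poisson(6).
samples = rng.poisson(lam=6, size=200000)
print((samples > 9).mean())    # roughly 0.084
\end{verbatim}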
\subsection{Syntax} \label{sec:syntax}
In a logic program, following Sato, we distinguish between probabilistic facts, which are used to define the basic distribution, and rules, which are used to derive additional atoms.\footnote{A rule
can have an empty body, in which case it represents a deterministic
fact.} Probabilistic facts are not allowed to unify with any rule head. The distribution over facts is based on random variables, whose distributions we define through so called distributional clauses.
\begin{definition}[Distributional clause]
\label{def:disclause}
A \emph{distributional clause} is a definite clause with an atom
$\mathtt{h} \sim \mathcal{D}$ in the head where $\sim$ is a binary
predicate used in infix notation. \end{definition}
For each ground instance $(\mathtt{h}\sim \mathcal{D} \mbox{ :- } \mathtt{b_1,\ldots,b_n})\theta$ with $\theta$ being a substitution over the Herbrand universe of the logic program, the distributional clause defines a random variable $\mathtt{h}\theta$ and an associated distribution $\mathcal{D}\theta$. In fact, the distribution is only defined when $(\mathtt{b_1,\ldots,b_n})\theta$ is true in the semantics of the logic program. These random variables are terms of the Herbrand universe and can be used as any other term in the logic program. Furthermore, a term $\simeq\!\!(d)$ constructed from the reserved functor $\simeq\!\!/1$ represents the outcome of the random variable $d$. These functors can be used inside calls to special predicates in $dist\_rel =\{ dist\_eq/2, dist\_lt/2, dist\_leq/2, dist\_gt/2, dist\_geq/2\}$. We assume that there is a fact for each of the ground instances of these predicate calls. These facts are the \emph{probabilistic facts} of Sato's distribution semantics. Note that the set of probabilistic facts is enumerable as the Herbrand universe of the program is enumerable. A term $\simeq\!\!(d)$ links the random variable $d$ with its outcome. The probabilistic facts compare the outcome of a random variable with a constant or the outcome of another random variable and succeed or fail according to the probability distribution(s) of the random variable(s).
\begin{example}[Distributional clauses]
\label{ex:probrules}
\begin{align}
\mathtt{nballs \sim poisson(6)}.
& \label{lbl:ex1}\\
\mathtt{color(B)\sim\ [0.7:b, 0.3:g]}
&\mbox{ :- } \mathtt{between(1,\simeq\!\!(nballs),B).} \label{lbl:ex2}\\
\mathtt{ diameter(B,MD)\sim gamma(MD/20,20) }
&\mathtt{\mbox{ :- } between(1,\simeq\!\!(nballs),B),}\nonumber \\
&\quad\ \mathtt{mean\_diameter(\simeq\!\!(color(B)),MD).}\label{lbl:ex3}
\end{align}
The defined distributions depend on the following logical clauses:
\begin{align*}
\mathtt{mean\_diameter(C,5) } & \mathtt{\mbox{ :- } dist\_eq(C,b).}\\
\mathtt{mean\_diameter(C,10) } & \mathtt{\mbox{ :- } dist\_eq(C,g). }\\
\mathtt{ between(I,J,I)} & \mathtt{\mbox{ :- } dist\_leq(I,J).} \\
\mathtt{between(I, J, K)} & \mathtt{\mbox{ :- } dist\_lt(I,J), \ I1\ is\ I + 1, between(I1,J,K)}.
\end{align*}
The distributional clause~\eqref{lbl:ex1} models the number of balls
as a Poisson distribution with mean 6. The distributional
clause~\eqref{lbl:ex2} models a discrete distribution for the random
variable color(B). With probability 0.7 the ball is blue and green
otherwise. Note that the distribution is defined only for the values
B for which $between(1, \simeq\!\!(nballs), B)$ succeeds. Execution of
calls to the latter give rise to calls to probabilistic facts that
are instances of $dist\_leq(I,\simeq\!\!(nballs))$ and
$dist\_lt(I,\simeq\!\!(nballs))$. Similarly, the distributional
clause~\eqref{lbl:ex3} defines a gamma distribution that is also
conditionally defined. Note that the conditions in the distribution
depend on calls of the form $mean\_diameter(\simeq\!\!(color(n)), MD)$
with $n$ a value returned by between/3. Execution of this call
finally leads to calls $dist\_eq(\simeq\!\!(color(n)),b)$ and
$dist\_eq(\simeq\!\!(color(n)),g)$. \end{example}
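To illustrate the generative reading of Example~\ref{ex:probrules}, the following Python sketch samples one possible world by hand (it is only an illustration, not the inference algorithm of Section~\ref{sec:algorithms}; in particular, the gamma distribution is assumed to be parameterised by shape and scale, so that the mean diameter equals $\mathtt{MD}$):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

nballs = rng.poisson(6)                 # nballs ~ poisson(6)
world = {}
for b in range(1, nballs + 1):
    color = "b" if rng.random() < 0.7 else "g"   # color(B) ~ [0.7:b, 0.3:g]
    mean_diameter = 5 if color == "b" else 10    # mean_diameter/2
    diameter = rng.gamma(shape=mean_diameter / 20, scale=20)
    world[b] = (color, diameter)

print(nballs, world)
\end{verbatim}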
It seems feasible to allow $\simeq\!\!(d)$ terms everywhere and to have a simple program analysis insert the special predicates in the appropriate places by replacing $</2$, $>/2$, $\leq/2$, $\geq/2$ predicates by $dist\_rel/2$ facts. Extending unification is a bit harder, though: as long as a $\simeq\!\!(h)$ term is unified with a free variable, standard unification can be performed; an extension is required only when the other term is bound. In this paper, we assume that the special predicates $\mathtt{dist\_eq/2}$, $\mathtt{dist\_lt/2}$, $\mathtt{dist\_leq/2}$, $\mathtt{dist\_gt/2}$, and $\mathtt{dist\_geq/2}$ are used whenever the outcome of a random variable needs to be compared with another value, and that it is safe to use standard unification whenever a $\mathtt{\simeq\!\!(h)}$ term is used in another predicate.
For the basic distribution on facts to be well-defined, a program has to fulfill a set of validity criteria that have to be enforced by the programmer.
\begin{definition}[Valid program]
\label{def:validprog}
A program $P$ is called \emph{valid} if:
\begin{description}
\item[(V1)] In the relation $\mathtt{h} \sim \mathcal{D}$ that holds
in the least fixpoint of a program, there is a functional
dependency from $\mathtt{h}$ to $ \mathcal{D}$, so there is a
unique ground distribution $ \mathcal{D}$ for each ground random
variable $\mathtt{h}$.
\item[(V2)] The program is \emph{distribution-stratified}, that is,
there exists a function $rank(\cdot)$ that maps ground atoms to
$\mathbb{N}$ and that satisfies the following properties: (1)
for each ground instance of a distribution clause $\mathtt{h}\sim
\mathcal{D} \mbox{ :- } \mathtt{b_1, \ldots b_n}$ holds
    $rank(\mathtt{h}\sim \mathcal{D}) > rank(\mathtt{b_i})$ (for all
$i$). (2) for each ground instance of another program clause:
$\mathtt{h} \mbox{ :- } \mathtt{b_1, \ldots b_n}$ holds
$rank(\mathtt{h})\geq rank(\mathtt{b_i})$ (for all $i$). (3) for
each ground atom $\mathtt{b}$ that contains (the name of) a random
variable $\mathtt{h}$, $rank(\mathtt{b}) \geq rank(\mathtt{h} \sim
\mathcal{D})$ (with $\mathtt{h} \sim \mathcal{D}$ the head of the
distribution clause defining $\mathtt{h}$).
\item[(V3)] All ground probabilistic facts or, to be more precise,
the corresponding indicator functions are
\emph{Lebesgue-measurable}.
\item[(V4)] Each atom in the least fixpoint can be derived from a
finite number of probabilistic facts (\emph{finite support
condition}~\cite{Sato:95}).
\end{description} \end{definition}
Together, (V1) and (V2) ensure that a single basic distribution $P_F$ over the probabilistic facts can be obtained from the distributions of individual random variables defined in $P$. The requirement (V3) is crucial. It ensures that the series of distributions $P_F^{(n)}$ needed to construct this basic distribution is well-defined. Finally, the number of facts over which the basic distribution is defined needs to be countable. This is true, as we have a finite number of constants and functors: those appearing in the program.
\subsection{Distribution Semantics} \label{sec:distributionsemantics}
We now define the series of distributions $P_F^{(n)}$ where we fix an enumeration $f_1,f_2,\ldots$ of probabilistic facts such that $i < j \implies rank(f_i) \leq rank(f_j)$ where $rank(\cdot)$ is a \emph{ranking function} showing that the program is distribution-stratified. For each predicate $\mathtt{rel/2}\in dist\_rel$, we define an \emph{indicator function} as follows: \begin{align}
I^1_{rel}(X_1,X_2) & =
\begin{cases}
1 & \text{if } \mathtt{rel}(X_1,X_2) \text{ is true} \\
0 & \text{if } \mathtt{rel}(X_1,X_2) \text{ is false}
\end{cases} \end{align}
Furthermore, we set $I^0_{rel}(X_1,X_2) = 1.0 - I^1_{rel}(X_1,X_2)$. We then use the expected value of the indicator function to define probability distributions $P_F^{(n)}$ over finite sets of ground facts $f_1,\ldots,f_n$. Let $\{rv_1,\ldots rv_m\}$ be the set of random variables these $n$ facts depend on, ordered such that if $rank(rv_i)<rank(rv_j)$, then $i<j$ (cf.~(V2) in Definition~\ref{def:validprog}). Furthermore, let $f_i = rel_i(t_{i1},t_{i2})$, $x_j\in\{1,0\}$, and $\theta^{-1} = \{\simeq\!\!(rv_1)/V_1,\ldots, \simeq\!\!(rv_m)/V_m\}$. The latter replaces all evaluations of random variables on which the $f_i$ depend by variables for integration. \begin{align} \lefteqn{P_F^{(n)}(f_1 = x_1, \ldots , f_n = x_n) = \mathbb{E} [ I^{x_1}_{rel_1}(t_{11},t_{12}), \ldots , I^{x_n}_{rel_n}(t_{n1},t_{n2}) ]}\label{eq:series}\\ & = \idotsint \left( I^{x_1}_{rel_1}(t_{11}\theta^{-1}, t_{12}\theta^{-1})\cdots I^{x_n}_{rel_n}(t_{n1}\theta^{-1}, t_{n2})\theta^{-1}\right) d\mathcal{D}_{rv_1}(V_1)\cdots d\mathcal{D}_{rv_m}(V_m) \nonumber \end{align} \begin{example}[Basic Distribution] Let $f_1,f_2,\ldots = dist\_lt(\simeq\!\!(b1), 3), dist\_lt(\simeq\!\!(b2), \simeq\!\!(b1)), \ldots$. The second distribution in the series then is \begin{align*} \lefteqn{P_F^{(2)}(dist\_lt(\simeq\!\!(b1), 3)=x_1 ,dist\_lt(\simeq\!\!(b2), \simeq\!\!(b1))=x_2)}\\
&= \mathbb{E} [ I_{\mathtt{dist\_lt}}^{x_1} (\simeq\!\!(b1),3),I_{\mathtt{dist\_lt}}^{x_2} (\simeq\!\!(b2),\simeq\!\!(b1))]\\
&= \int\int \left(I_{\mathtt{dist\_lt} }^{x_1} (V1,3) ,I_{\mathtt{dist\_lt} }^{x_2} (V2,V1) \right)d\mathcal{D}_{b1}(V1)d\mathcal{D}_{b2}(V2) \end{align*} \end{example}
We are now able to prove the following proposition.
\begin{proposition}
\label{prop:adm}
Let $P$ be a valid program. $P$ defines a probability measure $P_P$
over the set of fixpoints of the $T_P$ operator. Hence, $P$ also
defines for an arbitrary formula $\mathtt{q}$ over atoms in its
Herbrand base the probability that $\mathtt{q}$ is true. \end{proposition}
\begin{proof}[Proof sketch]
It suffices to show that the series of distributions $P_F^{(n)}$
over facts (cf.~\eqref{eq:series}) is of the form that is
required in the distribution semantics, that is, these are
well-defined probability distributions that satisfy the
compatibility condition, cf.~\eqref{eq:compat}. This is a direct
consequence of the definition in terms of indicator functions and
the measurability of the underlying facts required for valid
programs. \end{proof}
\subsection{$T_P$ Semantics} \label{sec:tp}
In the following, we give a procedural view onto the semantics by extending the $T_P$ operator of Definition~\ref{def:tpop} to deal with probabilistic facts $\mathtt{dist\_rel(t_1,t_2)}$. To do so, we introduce a function \textsc{ReadTable}$(\cdot)$ that keeps track of the sampled values of random variables to evaluate probabilistic facts. This is required because interpretations of a program only contain such probabilistic facts, but not the underlying outcomes of random variables. Given a probabilistic fact $\mathtt{dist\_rel(t1,t2)}$, \textsc{ReadTable} returns the truth value of the fact based on the values of the random variables $h$ in the arguments, which are either retrieved from the table or sampled according to their definition $\mathtt{h}\sim \mathcal{D}$ as included in the interpretation and stored in case they are not yet available.
\begin{definition}[Stochastic $T_P$ operator]
\label{def:stp}
Let $P$ be a valid program and $ground(P)$ the set of all ground
instances of clauses in $P$. Starting from a set of ground facts
$S$ the $ST_P$ operator returns
\begin{align*}
ST_P(S) := \Big\{ \mathtt{h}\ \Big|\ &
\mathtt{h \mbox{ :- } b_1,\ldots,b_n} \in ground(P) \mbox{ and }
\forall\ \mathtt{b_i}: \mbox{ either } \mathtt{b_i}\in S \mbox{
or }\\
& \big(\mathtt{b_i}=dist\_rel(t1,t2) \wedge
(t_j=\simeq\!\!(h)\rightarrow (\mathtt{h}\sim\mathcal{D})\in S) \wedge\\
& \ \mbox{\textsc{ReadTable}}(b_i)=true \big) \Big\} \end{align*} \end{definition}
\textsc{ReadTable} ensures that the basic facts are sampled from their joint distribution as defined in Sec.~\ref{sec:distributionsemantics} during the construction of the standard fixpoint of the logic program. Thus, each fixpoint of the $ST_P$ operator corresponds to a possible world whose probability is given by the distribution semantics.
\section{Forward sampling using Magic Sets and backward reasoning} \label{sec:algorithms}
In this section we introduce our new method for probabilistic forward inference. To this aim, we first extend the magic set transformation to distributional clauses. We then develop a rejection sampling scheme using this transformation. This scheme also incorporates backward reasoning to check for consistency with the evidence during sampling and thus to reduce the rejection rate.
\subsection{Probabilistic magic set transformation} \label{sec:magic}
The disadvantage of forward reasoning in logic programming is that the search is not goal-driven, which might generate irrelevant atoms. The \emph{magic set} transformation~\cite{bancilhon,nilsson:book} focuses forward reasoning in logic programs towards a goal to avoid the generation of uninteresting facts. It thus combines the advantages of both reasoning directions.
\begin{definition}[Magic Set Transformation]
\label{def:magicsets}
If $P$ is a logic program, then we use $\textsc{Magic}(P)$ to denote the
smallest program such that if $\mathtt{A_0 \mbox{ :- } A_1,\ldots, A_n}
\in P$ then
\begin{itemize}
\item $\mathtt{A_0 \mbox{ :- } c(A_0), A_1, \ldots, A_n }\in \textsc{Magic}(P)$ and
\item for each $1\le i \le n$: $\mathtt{c(A_i) \mbox{ :- } c(A_0), A_1,\ldots, A_{i-1} }\in \textsc{Magic}(P)$
\end{itemize} \end{definition}
The meaning of the additional $\mathtt{c/1}$ atoms (c=call) is that they ``switch on'' clauses when they are needed to prove a particular goal. If the corresponding switch for the head atom is not true, the body is not true and thus cannot be proven. The magic transformation is both sound and complete. Furthermore, if the SLD-tree of a goal is finite, forward reasoning in the transformed program terminates. The same holds if forward reasoning on the original program terminates.
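To make the transformation concrete, the following Python sketch applies Definition~\ref{def:magicsets} to the usual path/edge program, with atoms kept as plain strings (an illustration only; variables are not renamed apart, which suffices for printing the transformed clauses):
\begin{verbatim}
def magic(program):
    # program: list of (head, [body atoms]); returns Magic(program)
    out = []
    for head, body in program:
        out.append((head, ["c(%s)" % head] + body))
        for i, b in enumerate(body):
            out.append(("c(%s)" % b, ["c(%s)" % head] + body[:i]))
    return out

program = [("path(X,Y)", ["edge(X,Y)"]),
           ("path(X,Y)", ["edge(X,Z)", "path(Z,Y)"])]

for head, body in magic(program):
    print("%s :- %s." % (head, ", ".join(body)))
\end{verbatim}
For instance, the recursive clause yields $\mathtt{c(path(Z,Y)) \mbox{ :- } c(path(X,Y)), edge(X,Z)}$, so that $\mathtt{path}$ facts are derived in the forward computation only for calls reachable from the seed $\mathtt{c(q)}$.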
We now extend this transformation to distributional clauses. The idea is that the distributional clause for a random variable $h$ is activated when there is a call to a probabilistic fact $dist\_rel(t_1, t_2)$ depending on $h$.
\begin{definition}[Probabilistic Magic Set Transformation]
\label{def:pmagicsets}
For program $P$, let $P_L$ be $P$ without distributional clauses.
$\textsc{M}(P)$ is the smallest program s.t. $\textsc{Magic}(P_L)
\subseteq \textsc{M}(P)$ and for each $\mathtt{h} \sim \mathcal{D}
\mathtt{ \mbox{ :- } b_1,\ldots,b_n} \in P$ and $\mathtt{rel} \in
\{\mathtt{eq, lt, leq, gt, geq}\}$:
\begin{itemize}
  \item $\mathtt{h} \sim \mathcal{D}\mbox{ :- }
    \mathtt{(c(dist\_rel(\simeq\!\!(h), X)); c(dist\_rel(X,
      \simeq\!\!(h)))),b_1,\ldots,b_n.} \in \textsc{M}(P)$.
  \item $\mathtt{c(b_i) \mbox{ :- } (c(dist\_rel(\simeq\!\!(h), X));
      c(dist\_rel(X, \simeq\!\!(h)))), b_1,\ldots, b_{i-1}. } \in \textsc{M}(P)$.
\end{itemize}
Then $\textsc{PMagic}(P)$ consists of:
\begin{itemize}
\item a clause $\mathtt{a\_p(t_1,\ldots,t_n)} \mbox{ :- }
\mathtt{c(p(t_1,\ldots,t_n)), p(t_1,\ldots,t_n)}$ for each
built-in predicate (including $\mathtt{dist\_rel/2}$ for
$\mathtt{rel} \in \{\mathtt{eq, lt, leq, gt, geq} \}$) used in
$\textsc{M}(P)$.
\item a clause $\mathtt{h} \mbox{ :- }
\mathtt{b_1',\ldots,b_n'}$ for each clause $\mathtt{h} \mbox{ :- } \mathtt{b_1,\ldots,b_n} \in
\textsc{M}(P)$ where $\mathtt{b_i'=a\_b_i}$ if $\mathtt{b_i}$ uses a built-in
predicate and else $\mathtt{b_i'=b_i}$.
\end{itemize} \end{definition}
Note that every call to a built-in $\mathtt{b}$ is replaced by a call to $\mathtt{a\_b}$; the latter predicate is defined by a clause that is activated when there is a call to the built-in ($\mathtt{c(b)}$) and that effectively calls the built-in. The transformed program computes the distributions only for random variables whose value is relevant to the query. These distributions are the same as those obtained in a forward computation of the original program. Hence we can show:
\begin{lemma}
Let $P$ be a program and $\textsc{PMagic}(P)$ its probabilistic
magic set transformation extended with a seed $c(q)$. The
distribution over $q$ defined by $P$ and by $\textsc{PMagic}(P)$ is
the same. \end{lemma}
\begin{proof}[Proof sketch]
In both programs, the distribution over $q$ is determined by the
distributions of the atoms $dist\_eq(t_1,t_2)$,
$dist\_leq(t_1,t_2)$, $dist\_lt(t_1,t_2)$, $dist\_geq(t_1,t_2)$, and
$dist\_gt(t_1,t_2)$ on which $q$ depends in a forward computation of
the program $P$. The magic set transformation ensures that these
atoms are called in the forward execution of $\textsc{PMagic}(P)$.
In $\textsc{PMagic}(P)$, a call to such an atom activates the
distributional clause for the involved random variable. As this
distributional clause is a logic program clause, soundness and
completeness of the magic set transformation ensures that the
distribution obtained for that random variable is the same as in
$P$. Hence also the distribution over $q$ is the same for both
programs. \end{proof}
\subsection{Rejection sampling with heuristic lookahead}
As discussed in Section~\ref{sec:probinf}, sampling-based approaches to probabilistic inference estimate the conditional probability
$p(q|e)$ of a query $q$ given evidence $e$ by randomly generating a large number of samples or possible worlds~(cf.~Algorithm~\ref{alg:sampling}). The algorithm starts by preparing the program $L$ for sampling by applying the \textsc{PMagic} transformation. In the following, we discuss our choice of subroutine \textsc{STPMagic} (cf.~Algorithm~\ref{alg:rejection}) which realizes likelihood weighted sampling. It is used in Algorithm~\ref{alg:sampling}, line~\ref{line:callWeightedsample}, to generate individual samples. It iterates the stochastic consequence operator of Definition~\ref{def:stp} until either a fixpoint is reached or the current sample is inconsistent with the evidence. Finally, if the sample is inconsistent with the evidence, it receives weight 0.
\begin{algorithm}[t]
\caption{Main loop for sampling-based inference to calculate the
conditional probability $p(q|e)$ for query $q$, evidence $e$ and
program $L$. }
\label{alg:sampling} \begin{algorithmic}[1] \Function{\textsc{Evaluate}}{$L$, $q$, $e$, $Depth$}
\State $L^*:=$\textsc{PMagic}$(L)\cup \{c(a) | a \in e \cup{q} \}$ \State $n^+ :=0$ \hspace{1cm} $n^- := 0$ \While{Not converged} \State $(I,w):=$\textsc{STPMagic}$(L^*,L,e,Depth)$\label{line:callWeightedsample} \State \textbf{if} $q\in I$ \textbf{then} $n^+:= n^+ + w$ \textbf{else} $n^-:= n^- + w$ \EndWhile \State \Return $n^+ / (n^+ + n^-)$ \EndFunction \end{algorithmic} \end{algorithm}
\begin{algorithm}[t]
\caption{Sampling one interpretation; used in
Algorithm~\ref{alg:sampling}. }
\label{alg:rejection}
\begin{algorithmic}[1]
\Function{STPMagic}{$L^*,L,e,Depth$}
\State $T_{pf}:=\varnothing$, $T_{dis}:=\varnothing$, $w:=1$, $I_{old}:=\varnothing$, $I_{new}:=\varnothing$
\Repeat \State $I_{old}:=I_{new}$ \ForAll{$(\mathtt{h \mbox{ :- } \
body)}\ \in L^*$} \State split body in $B_{PF}$ (prob. facts) and $B_L$ (the rest) \ForAll{grounding substitution $\theta$ such that $B_L\theta \subseteq I_{old}$} \State $s := true$, $w_d := 1$ \While {$s \wedge B_{PF}\neq \varnothing$} \State select and remove $pf$ from $B_{PF}$ \State $(b_{pf},w_{pf}) := $\textsc{ReadTable}$(pf\theta, I_{old}, T_{pf}, T_{dis}, L, e, Depth)$ \label{line:samplehead} \State $s := s \wedge b_{pf}$ \hspace{1cm} $w_d := w_d \cdot w_{pf}$ \EndWhile \If {$s$} \If {$h\theta \in e^-$} \Return $(I_{new}, 0)$ \Comment{check negative evidence }\EndIf \State $I_{new} := I_{new} \cup \{h\theta\}$ \hspace{1cm} $w := w \cdot w_d$ \EndIf \EndFor \EndFor
\Until {$I_{new}=I_{old} \vee w=0$} \Comment{Fixpoint or
impossible evidence} \If {$e^+ \subseteq I_{new}$} \Return $(I_{new}, w)$ \Comment{check
positive evidence} \Else \ \Return $(I_{new}, 0)$\EndIf
\EndFunction \end{algorithmic} \end{algorithm}
\begin{algorithm}[ht]
\caption{Evaluating a probabilistic fact $pf$; used in
Algorithm~\ref{alg:rejection}. \textsc{ComputePF}$(pf,T_{dis})$
computes the truth value and the probability of $pf$ according to
the information in $T_{dis}$.}
\label{alg:likelihoodweighting}
\begin{algorithmic}[1]
\Function{ReadTable}{$pf, I,T_{pf}, T_{dis}, L, e,Depth$} \If{$pf \notin T_{pf}$} \ForAll {random variable $h$ occurring in $pf$ where $h \notin T_{dis}$} \If{$h \sim D \notin I $} \Return $(false,0)$ \EndIf \If {not \Call{Sample}{$h, D, T_{dis}, I, L, e, Depth$}} \Return $(false,0)$ \label{line:sample} \EndIf \EndFor \State $(b,w) :=$ \Call{ComputePF}{$pf$,$T_{dis}$} \If {$(b \wedge (pf \in e^-)) \vee (\neg b \wedge (pf \in e^+)) $}\State\Return $(false,0)$ \Comment{inconsistent with evidence}\EndIf \State extend $T_{pf}$ with $(pf,b,w)$ \EndIf \State\Return $(b,w)$ as stored in $T_{pf}$ for $pf$
\EndFunction
\Procedure{Sample}{$\mathtt{h}, \mathcal{D},\ T_{dis}, \ I, \ L,\ e,\
Depth$}
\State $w_h:=1$, $\mathcal{D}' := \mathcal{D}$ \Comment{Initial
weight, temp. distribution}
\If{$\mathcal{D}' = [p_1:\mathtt{a_1} ,\ldots , p_n:\mathtt{a_n}]$} \Comment{finite distribution}
\For{$p_j:\mathtt{a_j} \in \mathcal{D}'$ where $\mathtt{dist\_eq}(\mathtt{h},\mathtt{a_j}) \in e^-$} \Comment{remove neg.~evidence}
\State $\mathcal{D}' := \Call{Norm}{\mathcal{D}' \setminus
\{p_j:\mathtt{a_j}\}}$, \hspace{1cm}$w_h:=w_h\times (1-p_j)$
\EndFor \If {$\exists v: \mathtt{dist\_eq(\simeq\!\!(h),v)} \in e^+ $ and $p:\mathtt{v} \in \mathcal{D}'$} \State $\mathcal{D}' := [1 : v]$, \hspace{1cm} $w_h:=w_h\times p$ \EndIf
\For{$p_j:\mathtt{a_j} \in \mathcal{D}'$} \Comment{remove choices that make
$e^+$ impossible}
\If{$\exists b\in e^+$: not \Call{MaybeProof}{$b,Depth , I \cup
\lbrace \mathtt{dist\_eq}(\mathtt{h},\mathtt{a_j}) \rbrace,L$} \textbf{or}\\
\hspace{1.8cm} $\exists b\in e^-$: not \Call{MaybeFail}{$b,Depth , I \cup
\lbrace \mathtt{dist\_eq}(\mathtt{h},\mathtt{a_j}) \rbrace,L$} }
\State $\mathcal{D}' := \Call{Norm}{\mathcal{D}' \setminus
\{p_j:\mathtt{a_j}\}}$, \hspace{1cm}$w_h:=w_h\times (1-p_j)$ \EndIf
\EndFor \EndIf
\State \textbf{if} {$\mathcal{D}'=\varnothing$} \Return false
\State Sample $x$ according to $\mathcal{D'}$, extend $T_{dis}$ with $(h,x)$ and \Return true \EndProcedure
\end{algorithmic} \end{algorithm}
Algorithm~\ref{alg:likelihoodweighting} details the procedure used in line~\ref{line:samplehead} of Algorithm~\ref{alg:rejection} to sample from a given distributional clause. The function \textsc{ReadTable} returns the truth value of the probabilistic fact, together with its weight. If the outcome is not yet tabled, it is computed. Note that \texttt{false} is returned when the outcome is not consistent with the evidence. Involved distributions, if not yet tabled, are sampled in line~\ref{line:sample}. In the infinite case, \textsc{Sample} simply returns the sampled value. In the finite case, it is directed towards generating samples that are consistent with the evidence. Firstly, all possible choices that are inconsistent with the negative evidence are removed. Secondly, when there is positive evidence for a particular value, only that value is left in the distribution. Thirdly, it is checked whether each left value is consistent with all other evidence. This consistency check is performed by a simple depth-bounded meta-interpreter. For positive evidence, it attempts a top-down proof of the evidence atom in the original program using the function \textsc{MaybeProof}. Subgoals for which the depth-bound is reached, as well as probabilistic facts that are not yet tabled are assumed to succeed. If this results in a proof, the value is consistent, otherwise it is removed. Similarly for negative evidence: in \textsc{MaybeFail}, subgoals for which the depth-bound is reached, as well as probabilistic facts that are not yet tabled are assumed to fail. If this results in failure, the value is consistent, otherwise it is removed. The $Depth$ parameter allows one to trade the computational cost associated with this consistency check for a reduced rejection rate.
Note that the modified distribution is normalized and the weight is adjusted in each of these three cases. The weight adjustment accounts for the fact that removed elements cannot be sampled; it is necessary because which elements are removed from the distribution sampled in \textsc{Sample} can depend on the distributions sampled so far (the clause bodies of the distributional clause instantiate the distribution).
\section{Experiments} \label{sec:experiments}
We implemented our algorithm in YAP Prolog and set up experiments to answer the questions \begin{itemize} \item[\textbf{Q1}] Does the lookahead-based sampling improve the
performance? \item[\textbf{Q2}] How do rejection sampling and likelihood weighting compare? \end{itemize}
To answer the first question, we used the distributional program in Figure~\ref{fig:experimentnogreen}, which models an urn containing a random number of balls. The number of balls is uniformly distributed between 1 and 10, and each ball is either red or green with equal probability. We draw a ball with replacement from the urn 8 times and observe its color. We also define the atom $\mathtt{nogreen(}D\mathtt{)}$ to be true if and only if we did not draw any green ball in draws 1 to $D$. We evaluated the query $P(\mathtt{dist\_eq(\simeq(color(\simeq(drawnball(1)))),red)}\ 
|\mathtt{nogreen(}D\mathtt{)})$ for $D=1,2,\ldots, 8$. Note that the evidence implies that the first drawn ball is red, hence that the probability of the query is 1; however, the number of steps required to prove that the evidence is inconsistent with drawing a green first ball increases with $D$, so the larger $D$ is, the larger $Depth$ has to be to reach a 100\% acceptance rate for the samples, as illustrated in Figure~\ref{fig:experimentnogreen}. It is clear that by increasing the depth limit, each sample will take longer to be generated. Thus, the $Depth$ parameter allows one to trade off the convergence speed of the sampling against the time each sample needs to be generated. Depending on the program, the query, and the evidence, there is an optimal depth for the lookahead.
\begin{figure}
\caption{ The program modeling the urn (left); rate of accepted
samples (right) for evaluating the query
$P(\mathtt{dist\_eq(\simeq(color(\simeq(drawnball(1)))),red)}\ |$
$\mathtt{nogreen(}D\mathtt{)})$ for $D=1,2,\ldots, 10$ and for
$Depth=1,2,\ldots,8$ using Algorithm~\ref{alg:sampling}. The
acceptance rate is calculated by generating 200 samples using our
algorithm and counting the number of sample with weight larger
than 0.}
\label{fig:experimentnogreen}
\end{figure}
To answer \textbf{Q2}, we used the standard example for BLOG~\cite{Milch05}. An urn contains an unknown number of balls where every ball can be either green or blue with $p=0.5$. When drawing a ball from the urn, we observe its color but do not know which ball it is. When we observe the color of a particular ball, there is a $20\%$ chance to observe the wrong one, e.g. green instead of blue. We have some prior belief over the number of balls in the urn. If 10 balls are drawn with replacement from the urn and we saw 10 times the color green, what is the probability that there are $n$ balls in the urn? We consider two different prior distributions: in the first case, the number of balls is uniformly distributed between 1 and 8, in the second case, it is Poisson-distributed with mean $\lambda=6$.
\begin{figure}
\caption{Results of urn experiment with forward reasoning. 10 balls
with replacement were drawn and each time green was
observed. Left: Uniform prior over \# balls, right: Poisson prior
$(\lambda=6)$.}
\label{fig:urnproblog}
\end{figure}
One run of the experiment corresponds to sampling the number $N$ of balls in the urn, the color for each of the $N$ balls, and for each of the ten draws both the ball drawn and whether or not the color is observed correctly in this draw. Once these values are fixed, the sequence of colors observed is determined. This implies that for a fixed number $N$ of balls, there are $2^N\cdot N^{10}$ possible proofs. In the case of the uniform distribution, exact PRISM inference can be used to calculate the probability for each given number of balls, with a total runtime of $0.16$ seconds for all eight cases. In the case of the Poisson distribution, this is only possible up to 13 balls; with more balls, PRISM runs out of memory. For inference using sampling, we generate 20,000 samples with the uniform prior, and 100,000 with the Poisson prior. We report average results over five repetitions. For these priors, PRISM generates 8,015 and 7,507 samples per second respectively, ProbLog backward sampling 708 and 510, BLOG 3,008 and 2,900, and our new forward sampling (with rejection sampling) 760 and 731. The results using our algorithm for both rejection sampling and likelihood weighting with $Depth=0$ are shown in Figure~\ref{fig:urnproblog}. As the graphs show, the standard deviation for rejection sampling is much larger than for likelihood weighting.
\section{Conclusions and related work} \label{sec:conclusions}
We have contributed a novel construct for probabilistic logic programming, the distributional clauses, and defined its semantics. Distributional clauses allow one to represent continuous variables and to reason about an unknown number of objects. In this regard this construct is similar in spirit to languages such as BLOG and Church, but it is strongly embedded in a logic programming context. This embedding allowed us to propose also a novel inference method based on a combination of importance sampling and forward reasoning. This contrasts with the majority of probabilistic logic programming languages which are based on backward reasoning (possibly enhanced with tabling \cite{SatoKameya:01,Mantadelis10iclp}). Furthermore, only few of these techniques employ sampling, but see \cite{Kimmig11} for a Monte Carlo approach using backward reasoning. Another key difference with the existing probabilistic logic programming approaches is that the described inference method can handle evidence. This is due to the magic set transformation that targets the generative process towards the query and evidence and instantiates only relevant random variables.
P-log~\cite{Baral09} is a probabilistic language based on Answer Set Prolog (ASP). It uses a standard ASP solver for inference and thus is based on forward reasoning, but without the use of sampling. Magic sets are also used in probabilistic Datalog~\cite{Fuhr00}, as well as in Dyna, a probabilistic logic programming language \cite{Eisner05} based on rewrite rules that uses forward reasoning. However, neither of them uses sampling. Furthermore, Dyna and PRISM require the exclusive-explanation assumption. This assumption states that no two different proofs for the same goal can be true simultaneously, that is, they have to rely on at least one basic random variable with a different outcome. Distributional clauses (and the ProbLog language) do not impose such a restriction. Other related work includes MCMC-based sampling algorithms such as the approach for SLP~\cite{cussensmc}. Church's inference algorithm is based on MCMC too, and also BLOG is able to employ MCMC. At least for BLOG it seems to be required to define a domain-specific proposal distribution for fast convergence. With regard to future work, it would be interesting to consider evidence on continuous distributions, as evidence is currently restricted to finite distributions. Program analysis and transformation techniques to further optimize the program w.r.t. the evidence and query could be used to increase the sampling speed. Finally, the implementation could be optimized by memoizing some information from previous runs and then using it to prune and sample more rapidly.
\end{document}
\begin{document}
\title[Quintasymptotic primes and ideal topologies]{Asymptotic behaviour of integral closures, quintasymptotic primes and ideal topologies} \author[Reza Naghipour and Peter Schenzel]{Reza Naghipour$^*$ and Peter Schenzel}
\address{Department of Mathematics, University of Tabriz, Tabriz, Iran; and School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box: 19395-5746, Tehran, Iran.} \email{[email protected]} \email{[email protected]} \address{Martin-Luther-Universit\"at Halle-Wittenberg, Fachbereich Mathematik und Informatik, D-06099 Halle (Saale), Germany} \email{[email protected]} \thanks{ 2010 {\it Mathematics Subject Classification}: 13E05, 13B2.\\ The first author thanks the Martin-Luther-Universit\"at Halle-Wittenberg for the hospitality and facilities offered during the preparation of this paper.\\ $^*$Corresponding author: e-mail: {\it [email protected]} (Reza Naghipour)} \subjclass{} \keywords{Integral closure, ideal topologies, local cohomology, quintasymptotic prime.}
\begin{abstract} Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R$. It is shown that the sequences $\Ass _R R/(I^n)_a^{(N)}$, $\Ass _R (I^n)_a^{(N)}/ (I^{n+1})^{(N)}_a$ and $\Ass _R (I^n)_a^{(N)}/ (I^n)_a, n= 1,2, \dots,$ of associated prime ideals, are increasing and ultimately constant for large $n$. Moreover, it is shown that, if $S$ is a multiplicatively closed subset of $R$, then the topologies defined by $(I^n)_a^{(N)}$ and $S((I^n)_a^{(N)}), \,{n\geq1}$, are equivalent if and only if $S$ is disjoint from the quintasymptotic primes of $I$. By using this, we also show that, if $(R, \mathfrak{m})$ is local and $N$ is quasi-unmixed, then the local cohomology module $H^{\dim N}_I(N)$ vanishes if and only if there exists a multiplicatively closed subset $S$ of $R$ such that $\mathfrak{m} \cap S \neq \emptyset$ and that the topologies induced by $(I^n)_a^{(N)}$ and $S((I^n)_a^{(N)}), \, {n\geq1},$ are equivalent. \end{abstract} \maketitle
\section {Introduction} The important concept of integral closure of an ideal of a commutative Noetherian ring (with identity), developed by D. G. Northcott and D. Rees in \cite{NR}, is fundamental to a considerable body of recent and current research both in commutative algebra and algebraic geometry. Let $R$ be a commutative ring (with identity), $I$ an ideal of $R$. In the case when $R$ is Noetherian, we denote by $(I)_a$ the integral closure of $I$, i.e., $(I)_a$ is the ideal of $R$ consisting of all elements $x\in R$ which satisfy an equation $x^n+ r_1x^{n-1}+ \cdots + r_n= 0$, where $r_i\in I^i, i=1, \ldots, n$.
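To illustrate the definition, take $R=k[x, y]$, a polynomial ring over a field $k$, and $I=(x^2, y^2)$. The element $xy$ satisfies the equation $(xy)^2- x^2y^2= 0$ with $x^2y^2\in I^2$, so $xy\in (I)_a$ although $xy\notin I$; in fact one can check that $(I)_a=(x^2, xy, y^2)$.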
In \cite{R} L.J. Ratliff, Jr., has shown that (when $R$ is Noetherian), the sequence of associated prime ideals \[ \Ass_R R/(I^n)_a, n= 1,2, \ldots , \] is increasing and ultimately constant; we use the notation $A^*_a(I)$ to denote $\Ass_R R/(I^n)_a$ for large $n$.
The notion of integral closures of ideals of $R$ relative to a Noetherian $R$-module $N$,
was initiated by R.Y. Sharp et al., in \cite{STY}.
An element $x\in R$ is said to be {\it integrally dependent on $I$ relative to} $N$ if there exists a positive integer $n$ such that $x^{n}N \subseteq \sum_{i=1}^n x^{n-i}I^iN.$ Then the set \begin{center}
$I^{(N)}_a=\{x\in R\,|\, x$ is integrally dependent on $I$ relative to $N$\} \end{center}
is an ideal of $R$, called the {\it integral closure of $I$ relative to} $N$; in the case $N=R$, $I^{(N)}_a$ is the classical integral closure $I_a$ of $I$.
It is clear that $I\subseteq I^{(N)}_a.$ We say that $I$ is {\it integrally closed} relative to $N$ if $I= I^{(N)}_a.$
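As a simple illustration, let $R=k[x, y]$ be a polynomial ring over a field $k$, let $N=R/(y)$ and let $I=(x^2)$. Then $yN=0\subseteq IN$, so $y\in I^{(N)}_a$; on the other hand, working in $N\cong k[x]$ one checks that no element outside $(x^2, y)$ is integrally dependent on $I$ relative to $N$. Hence $I^{(N)}_a=(x^2, y)$, which strictly contains the classical integral closure $(I)_a=(x^2)$.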
In the second section (among other things) we show that, when $R$ is a Noetherian ring and $N$ is a finitely generated $R$-module, the sequences \[ \Ass_R R/(I^n)^{(N)}_a, \,\,\, \Ass _R (I^n)_a^{(N)}/ (I^{n+1})^{(N)}_a \text{ and } \Ass _R (I^n)_a^{(N)}/ ((I+\Ann_R N)^n)_a, \,\, n= 1,2, \ldots, \] of associated primes, are ultimately constant; we let $A^*_a(I, N):=\Ass_R R/(I^n)^{(N)}_a$ and
$C^*_a(I, N):=\Ass _R (I^n)_a^{(N)}/ ((I+\Ann_R N)^n)_a$, for large $n$. Pursuing this point of view further we shall show that
$A^*_a(I+ \Ann_R N)\setminus C^*_a(I, N) \subseteq A^*_a(I, N).$
In \cite{Mc2}, McAdam studied the following interesting set of prime ideals of $R$ associated with $I$, \[ \bar{Q}^*(I)= \{\mathfrak{p} \in \Spec R : \text{ there exists a } \mathfrak{q} \in \mAss \hat{R}_{\mathfrak{p}} \text{ such that } \Rad (I\hat{R}_{\mathfrak{p}}+ \mathfrak{q})= \mathfrak{p}\hat{R}_{\mathfrak{p}}\}, \] and he called $\bar{Q}^*(I)$ the set of {\it quintasymptotic prime ideals} of $I$.
On the other hand, Ahn in \cite{Ah} extended the notion of the quintasymptotic prime ideals to a finitely generated module over $R.$ More precisely, if $N$ is a finitely generated $R$-module then a prime ideal $\mathfrak{p}$ of $R$ is said to be a {\it quintasymptotic prime ideal} of $I$ with respect to $N$ whenever there exists a $\mathfrak{q}\in \mAss_{\hat{R}_\mathfrak{p}}\hat{N}_\mathfrak{p}$ such that $\Rad(I\hat{R}_{\mathfrak{p}}+ \mathfrak{q})= \mathfrak{p}\hat{R}_{\mathfrak{p}}.$ The set of all {\it quintasymptotic prime ideals} of $I$ with respect to $N$ is denoted by $\bar{Q}^*(I,N).$
In the third section, for a multiplicatively closed subset $S$ of $R$, we examine the equivalence between the topologies defined by the filtrations $\{(I^n)_a^{(N)}\}_{n\geq1}$, $\{S((I^n)_a^{(N)})\}_{n\geq1}$, $\{S(((I+ \Ann_RN)^n)_a)\}_{n\geq 1}$ and $\{S((I+ \Ann_RN)^n)\}_{n\geq 1}$ by using the quintasymptotic prime ideals of $I$ with respect to $N$. Some of these results have been established by Schenzel in \cite{Sc, Sc1}, McAdam in \cite{Mc2} and Mehrvarz et al. in \cite{MNS}, in the special case when $N=R$.
A typical result in this direction is the following: \begin{theorem} Let $N$ be a finitely generated module over a Noetherian ring $R$ and let $I$ be an ideal of $R$. Let $S$ be a multiplicatively closed subset of $R$. Then the topologies defined by $(I^n)_a^{(N)}$, $S((I^n)_a^{(N)})$, $S(((I+ \Ann_RN)^n)_a)$ and $S((I+ \Ann_RN)^n), \, {n\geq 1},$ are equivalent if and only if $S$ is disjoint from each of the quintasymptotic prime ideals of $I$ with respect to $N$. \end{theorem}
The proof of Theorem 1.1 is given in Theorem 3.11. One of our tools for proving Theorem 1.1 is the following, which is a characterization of the quintasymptotic prime ideals of $I$ with respect to $N$. In the following, we use $I_a^{\langle N \rangle}$ to denote the union of the ideals $(I_a^{(N)}:_Rs)$, where $s$ varies over $R\backslash \bigcup \{\frak p\in \mAss_RN/IN\}$; in particular, for every integer $k\geq1$ and every prime ideal $\mathfrak{p}$ of $R$, \[ (\mathfrak{p}^k)^{\langle N \rangle}_a= \bigcup _{s\in R\backslash \mathfrak{p}}((\mathfrak{p}^k)^{(N)}_a:_Rs). \]
\begin{proposition} Let $R$ be a Noetherian ring and let $N$ be a finitely generated $R$-module. Let $I \subseteq \mathfrak{p}$ be ideals of $R$ such that $\mathfrak{p}\in \Supp(N).$ Then $\mathfrak{p}\in \bar{Q}^*(I, N)$ if and only if there exists an integer $k \geq 0$ such that for all integers $m \geq 0$ \[ (I^m)^{(N)}_a :_R \langle \mathfrak{p}\rangle \nsubseteq (\mathfrak{p}^k)^{\langle N \rangle}_a. \] \end{proposition}
Finally in this section we derive the following consequence of Theorem 1.1. \begin{corollary} Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R.$ Then the following conditions are equivalent: \begin{itemize} \item[(i)] $\bar{Q}^*(I, N)= \mAss_R N/IN.$
\item[(ii)] The topologies defined by $\{(I^n)^{(N)}_a \}_{n \geq 0}$ and $\{(I^n)^{\langle N \rangle}_a \}_{n \geq 0}$ are equivalent. \end{itemize} \end{corollary}
For any ideal $I$ of $R$ and any $R$-module $N$, the i-th {\it local cohomology module of $N$ with respect to} $I$ is defined by \[ H^i_I(N):= \varinjlim \Ext^i_R(R/I^n,N). \] We refer the reader to \cite{BS} for basic properties of local cohomology modules. The purpose of the fourth section is to characterize the equivalence between the topologies defined by $(I^n)_a^{(N)}$ and $S((I^n)_a^{(N)}), \, {n\geq1}$ in terms of the top local cohomology module $H^{\dim N}_I(N)$. This will generalize the main result of Marti-Farre \cite{MF}, as an extension of the main results of Call \cite[Corollary 1.4]{Ca},
Call-Sharp \cite{CS} and Schenzel \cite[Corollary 4.3]{Sc2}. \begin{theorem}
If $(R, \mathfrak{m})$ is a local (Noetherian) ring and $N$ a finitely generated quasi-unmixed $R$-module of dimension $d$, then $H^d_I(N)= 0$ if and only if there exists a multiplicatively closed subset $S$ of $R$ such that $\mathfrak{m} \cap S \neq \emptyset$ and the topologies induced by $(I^n)_a^{(N)}$ and $S((I^n)_a^{(N)}), \, {n\geq1}$ are equivalent. \end{theorem}
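Note that when $I$ is $\mathfrak{m}$-primary, one has $H^d_I(N)= H^d_{\mathfrak{m}}(N)\neq 0$ by Grothendieck's non-vanishing theorem; hence, in the situation of Theorem 1.4, no multiplicatively closed subset $S$ with the stated properties can exist for such $I$, and the theorem is of interest mainly for ideals $I$ with $\dim N/IN>0$.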
The result in Theorem 1.4 is proved in Theorem 4.1. Pursuing this point of view further, we show that the support of the $(d-1)$-th local cohomology module of a finitely generated $R$-module $N$ is always finite ($d= \dim N$), which strengthens and generalizes corresponding results of Marley (\cite[Corollaries 2.4 and 2.5]{M}) and of Naghipour-Sedghi (\cite[Corollary 3.3]{NS}). \begin{theorem} Assume that $R$ is a Noetherian ring. Let $N$ be a finitely generated $R$-module of dimension $d$ and $I$ an ideal of $R.$ Then $\Supp(H^d_I(N)) \subseteq \bar{Q}^*(I, N).$ Moreover, if $(R, \frak m)$ is local, then
$$\Supp(H^{d-1}_I(N)) \subseteq \bar{Q}^*(I,N) \cup \{\mathfrak{m}\}.$$ \end{theorem} The proof of Theorem 1.5 is given in Corollaries 4.2 and 4.3.\\
Throughout the paper, all rings are commutative, with identity, unless otherwise specified. We shall use $R$ to denote such a ring, $I$ an ideal of $R$, and $N$ a non-zero module over $R.$ If $(R, \mathfrak{m})$ is a Noetherian local ring and $N$ a finitely generated $R$-module, then $\hat{R}$ (resp. $\hat{N}$) denotes the completion of $R$ (resp. $N$) with respect to the $\mathfrak{m}$-adic topology. Then $N$ is said to be {\it quasi-unmixed } if for every $\mathfrak{p}\in \mAss_{\hat{R}}\hat{N}$, the condition $\dim\hat{R}/\mathfrak{p}= \dim N$ is satisfied. For any ideal $J$ of $R$, the radical of $J$, denoted by $\Rad (J)$, is defined to be the set $\{ a\in R:\, a^n\in J \text{ for some }n \in \mathbb{N}\}.$ Moreover we use $V(J)$ to denote the set of prime ideals of $R$ containing $J.$ Finally, for any $R$-module $L$ we shall use $\mAss_RL$ to denote the set of minimal elements of $\Ass_RL.$ For any unexplained notation or terminology we refer the reader to \cite{Ma} and \cite{Na}.
\section{Asymptotic behaviour of integral closures of ideals} The purpose of this section is to study the asymptotic behaviour of the integral closure of ideals with respect to a finitely generated module $N$ over a Noetherian ring $R$. More precisely we show that the sequences \[ \{\Ass_R R/(I^n)^{(N)}_a\}_{n\geq1},\,\, \{\Ass _R (I^n)_a^{(N)}/ (I^{n+1})^{(N)}_a\}_{n\geq1}, \,\,\, \{\Ass _R (I^n)_a^{(N)}/ ((I+\Ann_RN)^n)_a\}_{n\geq1}, \] of associated prime ideals, are ultimately constant; and pursuing this point of view further we show that
$A^*_a(I+ \Ann_R N)\setminus C^*_a(I, N) \subseteq A^*_a(I, N).$
\begin{lemma} \label{2.1} Let $R$ be a ring (not necessarily Noetherian), and $N$ a Noetherian $R$-module. Then for any ideal $I$ of $R$, the following statements hold: \begin{itemize} \item[(i)] $I_a + {\rm Ann}_RN \subseteq I^{(N)}_a.$
\item[(ii)] $0^{(N)}_a= \Rad( \Ann_RN).$
\item[(iii)] $I^{(N)}_a/\Ann_RN = (I+ \Ann_RN / \Ann_RN)_a;$ so that $\Rad (I^{(N)}_a) = \Rad (I+ \Ann_RN).$
\item[(iv)] For any multiplicatively closed subset $S$ of $R$,
$(S^{-1}I)^{(S^{-1}N)}_a = S^{-1}(I^{(N)}_a).$
\item[(v)] $\bigcap_{n\geq1}(I^n)^{(N)}_a= \bigcap\{\mathfrak{p}\in
\mAss_RN\,|\, \mathfrak{p}+ I\neq R\}.$ In particular, if $R$ is local then $$\bigcap_{n\geq1}(I^n)^{(N)}_a=\Rad( \Ann_RN).$$
\item[(vi)] $(I^{(N)}_a J^{(N)}_a)^{(N)}_a = (IJ)^{(N)}_a$, where $J$ is a second ideal of $R$. \end{itemize} \end{lemma}
\begin{proof} (i) and (ii) follow from the definition. For the proof of (iii) see \cite[Remark 1.6]{STY}. (iv) follows from (iii) and \cite[Lemma 2.3]{S}. To show (v) use (iii) and \cite[Lemma 3.11]{Mc1}. Finally, in order to prove (vi), use (iii) and the fact that $(KL)_a = (K_aL_a)_a$ for all ideals $K$ and $L$ of $R.$ \end{proof}
The following corollary extends McAdam's result \cite[Lemma 1.4]{Mc2}. \begin{corollary} \label{2.6} Let $R$ and $N$ be as in Lemma {\rm2.1}. Let $I$ be an integrally closed ideal with respect to $N.$ Then $I$ has a primary decomposition each primary component of which is integrally closed with respect to $N.$ \end{corollary} \begin{proof} The result follows from Lemma \ref{2.1} and \cite[Lemma 1.4]{Mc2}. \end{proof}
\begin{lemma} \label{2.2} Let $R$ be a ring (not necessarily Noetherian), and $N$ a Noetherian $R$-module. If $I$ is an ideal of $R$ that is not contained in any of the minimal prime ideals of $\Ann_RN$, then \begin{align*} ((I^n)^{(N)}_a :_R I^m )&= ((I^n)^{(N)}_a :_R (I^m)_a) \\
&=((I^n)^{(N)}_a :_R (I^m)^{(N)}_a)\\
& = (I^{n-m})^{(N)}_a \end{align*} for all integers $n\,\geq m\,\geq \,0.$
\end{lemma}
\begin{proof} In view of Lemma \ref{2.1} it is enough to show that \[ ((I^n)^{(N)}_a :_R I^m) \subseteq (I^{n-m})^{(N)}_a. \] To this end, let $x\in R$ be such that $I^m x \subseteq (I^n)^{(N)}_a.$ To simplify notation let us denote the Noetherian ring $R/{\rm Ann}_RN$ by $\widetilde{R};$ the natural image of $x$ in $\widetilde{R}$ is denoted by $\widetilde{x};$ and for each ideal $J$ of $R$ write $\widetilde{J}$ for the ideal $J+ \Ann_RN/\Ann_RN.$ Then by Lemma \ref{2.1} \[ \widetilde{x} \in ((\widetilde{I}^n)_a :
_{\widetilde{R}} (\widetilde{I})^m). \] Now, it follows from \cite[Lemma 11.27]{Mc1} that $x\in (I^{n-m})^{(N)}_a,$ as desired. \end{proof}
We are now ready to state and prove the main results of this section. Namely, we prove that the sequences of associated primes $\{\Ass_R R/(I^n)^{(N)}_a\}_{n\geq1}$, $\{\Ass_R(I^n)^{(N)}_a/(I^{n+1})^{(N)}_a\}_{n\geq1}$ and $\{\Ass_R(I^n)^{(N)}_a/((I+\Ann_RN)^n)_a\}_{n\geq 1}$ are increasing and become eventually constant.
\begin{theorem} \label{2.3} Let $I$ denote an ideal of a ring $R$ (not necessarily Noetherian), and let $N$ be a Noetherian $R$-module. Then the sequence of associated primes \[ \Ass_R R/(I^n)^{(N)}_a , n= 1,2, \ldots, \] is increasing and ultimately constant. Moreover, if $I$ is not contained in any of the minimal prime ideals of $\Ann_RN$, then the sequence \[ \Ass_R (I^n)^{(N)}_a/ (I^{n+1})^{(N)}_a, n= 1,2, \ldots, \] is also increasing and eventually constant. \end{theorem}
\begin{proof} First assume that $n\geq 1$ is an integer and $\mathfrak{p}\in \Spec R.$ In order to simplify notation, we will use $\widetilde{R}$ to denote the commutative Noetherian ring $R/\Ann_R N$ and for each ideal $J$ of $R$ we will write $\widetilde{J}$ for the ideal $J+ \Ann_RN/\Ann_RN$ of $\widetilde{R}.$ Then by Lemma \ref{2.1}, it is easy to see that $\mathfrak{p}\in \Ass_R R/(I^n)^{(N)}_a$ if and only if $\widetilde{\mathfrak{p}}\in \Ass_{\widetilde{R}} \, \widetilde{R} /(\widetilde{I}^n)_a.$ Hence it follows that \[ \bigcup _{n\geq 1}\Ass_R R/ (I^n)^{(N)}_a= \{\mathfrak{q}\cap R\,
|\, \mathfrak{q} \in A^*_a(\widetilde{I})\}. \] As, by Ratliff's Theorem \cite{R}, $A^*_a(\widetilde{I})$ is finite it follows that the set $ \bigcup _{n\geq 1}\Ass_R R/ (I^n)^{(N)}_a$ is a finite set. Moreover, since the sequence $\{\Ass_{\widetilde{R}} \, \widetilde{R} /(\widetilde{I}^n)_a\}_{n\geq 1}$
is increasing, it turns out that the sequence $\{\Ass_R R/(I^n)^{(N)}_a\}_{n\geq 1}$ is increasing, and therefore ultimately constant.
In order to prove the second part, we
assume that $I$ is not contained in any of the minimal prime ideals of $\Ann_RN$. Suppose that $\mathfrak{p}\in \Ass_R R/(I^{n+1})^{(N)}_a.$ Then there exists an $x \in R \setminus(I^{n+1})^{(N)}_a$ such that $\mathfrak{p}= ((I^{n+1})^{(N)}_a:_R x).$ Since $I \subseteq \mathfrak{p}$ it follows that $Ix \subseteq (I^{n+1})^{(N)}_a.$ Hence by Lemma \ref{2.2}, $x \in (I^n)^{(N)}_a.$ Whence we get $\mathfrak{p}\in \Ass_R\,(I^n)^{(N)}_a/(I^{n+1})^{(N)}_a.$ Therefore, when $I$ is not contained in any of the minimal prime ideals of $\Ann_RN$, it follows that \[ \Ass_R R/(I^{n+1})^{(N)}_a= \Ass_R\,(I^n)^{(N)}_a/(I^{n+1})^{(N)}_a \] for all integers $n\geq 0.$ This finally completes the proof. \end{proof}
\begin{theorem} \label{2.5} Suppose that $R$ is a Noetherian ring. Let $N$ denote a finitely generated $R$-module and let $I$ be an ideal of $R$ that is not contained in any of the minimal prime ideals of $\Ann_RN$. Then the sequence \[ \{\Ass_R(I^n)^{(N)}_a/(I^n)_a\}_{n\geq 1} \] is increasing and eventually constant. \end{theorem}
\begin{proof} Assume first that $\mathfrak{p}\in \Ass_R(I^n)^{(N)}_a/(I^n)_a$ for some integer $n \geq 1.$ Without loss of generality we may assume that $(R, \mathfrak{p})$ is a local ring. There exists an element $x\in (I^n)^{(N)}_a$ such that $\mathfrak{p}= ((I^n)_a:_R x).$ Then, in view of Lemma \ref{2.2}, $((I^{n+1})_a:_R Ix)$ is a proper ideal of $R$ and $\mathfrak{p}\subseteq(I^{n+1})_a:_R Ix.$ Thus $\mathfrak{p}= ((I^{n+1})_a:_R Ix).$ Because of $Ix \subseteq (I^{n+1})^{(N)}_a$ it follows that $\mathfrak{p}\in \Ass_R(I^{n+1})^{(N)}_a/(I^{n+1})_a.$ Therefore the sequence $\{\Ass_R(I^n)^{(N)}_a/(I^n)_a\}_{n\geq 1}$ is increasing. As \[ \bigcup_{n\geq1} \Ass_R(I^n)^{(N)}_a/(I^n)_a \subseteq A^*_a(I), \] and $A^*_a(I)$ is finite, we deduce that it is eventually constant for large $n.$ \end{proof}
\begin{corollary} Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R$ that is not contained in any of the minimal prime ideals of $\Ann_RN$. Then the sequence
\[ \{\Ass_R (I^n)^{(N)}_a/((I+ \Ann_RN)^n)_a\}_{n\geq 1} \] is increasing and ultimately constant. \end{corollary}
\begin{proof} This follows from Theorem \ref{2.5} with $I$ replaced by $I+\Ann_RN$, since the integral closures with respect to $N$ of the powers of $I$ and of $I+\Ann_RN$ coincide. \end{proof} \begin{definition} \label{4.4} Suppose that $R$ is a Noetherian ring. Let $I$ be an ideal of $R.$ Let $N$ denote a finitely generated $R$-module.
The eventual constant values of the sequences \[ \{\Ass_R R/(I^n)^{(N)}_a \}_{n\geq1} \text{ and } \{\Ass_R(I^n)^{(N)}_a/((I+ \Ann_R N)^n)_a \}_{n\geq1} \] will be denoted by $A^*_a(I, N)$ and $C^*_a(I, N)$ respectively.
It is easy to see that $A^*_a(I, N)$ and $C^*_a(I, N)$ are stable under localization. Moreover, $\mAss_RN/IN\subseteq A^*_a(I, N)$ and $A^*_a(0, N)=\mAss_R N.$
\end{definition}
\begin{proposition} \label{4.5} Let $R$ be a Noetherian ring, $I$ an ideal of $R$ and $N$ a finitely generated $R$-module. Then
$A^*_a(I+ \Ann_R N)\setminus C^*_a(I, N) \subseteq A^*_a(I, N).$
\end{proposition}
\begin{proof} Let $\mathfrak{p}\in A^*_a(I+ \Ann_R N)\setminus C^*_a(I, N).$ Because $A^*_a(I+ \Ann_R N),$ $C^*_a(I, N)$ and $A^*_a(I, N)$ behave well under localization we may assume that $(R, \mathfrak{p})$ is a local ring. Let $\mathfrak{p} = (((I+ \Ann_R N)^n)_a:_R x)$ for some $x\in R$ and for large $n.$ Because $\mathfrak{p} \subseteq ((I^n)^{(N)}_a:_R x)$ and $\mathfrak{p} \not\in C^*_a(I, N)$ it follows that $\mathfrak{p}= ((I^n)^{(N)}_a:_R x).$ Hence $\mathfrak{p} \in A^*_a(I, N)$, as required. \end{proof}
\begin{remark} \label{4.6} Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R$ that is not contained in any of the minimal prime ideals of $\Ann_RN$ and
$(I^nR_\mathfrak{p})^{(N_\mathfrak{p})}_a = ((IR_\mathfrak{p}+\Ann_{R_\mathfrak{p}}N_\mathfrak{p})^n)_a$,
for large $n$ and for all $\mathfrak{p} \in C^*_a(I, N).$ Then $(I^n)^{(N)}_a = ((I+\Ann_RN)^n)_a$ for large $n$. For this,
let $C^*_a(I, N)= \{\mathfrak{p}_1, \ldots, \mathfrak{p}_t\}.$ Choose an integer $k$ such that \[ C^*_a(I, N)= \Ass_R (I^n)^{(N)}_a /((I+ \Ann_RN)^n)_a \] for all $n \geq k$, and let $(I^nR_{\mathfrak{p}_i})^{(N_{\mathfrak{p}_i})}_a= ((IR_{\mathfrak{p}_i}+ \Ann_{R_\mathfrak{p_i}} N_\mathfrak{p_i})^n)_a$ for all $i= 1, \ldots, t.$ Now, if
$(I^n)^{(N)}_a\neq ((I+ \Ann_RN)^n)_a,$ then there exists $x\in (I^n)^{(N)}_a\setminus ((I+ \Ann_RN)^n)_a,$ and so $((I+ \Ann_RN)^n)_a:_R x$ is a proper ideal of $R$. As $R$ is Noetherian, there is $r \in R$ such that $\mathfrak{p}:=((I+ \Ann_RN)^n)_a:_R rx$ is a prime ideal of $R,$ and hence $\mathfrak{p}\in C^*_a(I, N).$ Thus $(I^nR_\mathfrak{p})^{(N_\mathfrak{p})}_a= ((IR_\mathfrak{p}+ \Ann_{R_\mathfrak{p}}N_\mathfrak{p})^n)_a$; so that $x/1 \in ((I+ \Ann_RN)^n)_aR_\mathfrak{p}.$ Hence there is $s \in R \setminus \mathfrak{p}$ such that $sx \in ((I+ \Ann_RN)^n)_a.$ That is $s \in ((I+ \Ann_RN)^n)_a:_R x \subseteq \mathfrak{p}$, which is a contradiction.
\end{remark}
\section{Quintasymptotic primes and ideal topologies}
In this section we study the equivalence of the topologies defined by $(I^n)_a^{(N)}$, $S((I^n)_a^{(N)})$, $S(((I+ \Ann_RN)^n)_a)$ and $S((I+ \Ann_RN)^n), \, {n\geq 1},$ by using the quintasymptotic prime ideals of $I$ with respect to $N$. The main results are Proposition 3.10 and Theorem 3.11. As a consequence we show that $\bar{Q}^*(I, N)= \mAss_R N/IN$ if and only if the topologies $(I^n)^{(N)}_a$
and $(I^n)^{\langle N \rangle}_a, \, {n \geq 1},$ are equivalent. We begin with the following elementary result.
\begin{lemma} \label{2.7} Let $R$ be a Noetherian ring and $N$ a finitely generated $R$-module. Let $T$ be a faithfully flat Noetherian ring extension of $R.$ Then, for any ideal $I$ of $R,$ \[ (IT)^{(N\otimes_RT)}_a \cap R= I^{(N)}_a.\] \end{lemma}
\begin{proof} Let $x\in (IT)^{(N\otimes_RT)}_a\cap R.$ Then, in view of \cite[Corollary 1.5]{STY}, there is an integer $n\geq1$ such that $$(IT+Tx)^{n+1}(N\otimes_RT)= IT(IT+Tx)^n(N\otimes_RT).$$ Hence \[ (I+Rx)^{n+1}(N\otimes_RT)= I(I+Rx)^n(N\otimes_RT). \] Therefore $$(I+Rx)^{n+1}N \otimes_RT= I(I+Rx)^nN \otimes_RT.$$ Now, by faithful flatness, we deduce that $(I+Rx)^{n+1}N= I(I+Rx)^nN$; hence $x\in I^{(N)}_a$, by \cite[Corollary 1.5]{STY}. Therefore the conclusion follows, since the opposite inclusion is clear by the faithful flatness of $T$ over $R$.
\end{proof}
\begin{remark} \label{2.8} Before continuing, let us fix two notations, which are employed by Schenzel in \cite{Sc3} and McAdam in \cite{Mc2} in the case $N= R$, respectively.
For any multiplicatively closed subset $S$ of $R$ and for each ideal $J$ of $R$ we use $S(J)$ to denote the ideal $\cup_{s\in S}(J:_Rs).$ Note that \[ \Ass_RR/S(J)= \{ \mathfrak{p}\in \Ass_R R/J \,:\,\mathfrak{p}\cap S=\emptyset\}. \] In the case $N$ is a finitely generated $R$-module and $S=R\backslash \bigcup \{\frak p\in \mAss_RN/JN\}$, we use $J_a^{\langle N \rangle}$ to denote the ideal $S(J_a^{(N)})$. In particular, for every integer $k\geq1$ and every prime ideal $\mathfrak{p}$ of $R$, we have \[ (\mathfrak{p}^k)^{\langle N \rangle}_a= \bigcup _{s\in R\backslash \mathfrak{p}}((\mathfrak{p}^k)^{(N)}_a:_Rs). \] \end{remark}
\begin{proposition} \label{2.9} Let $R$ be a Noetherian ring and let $N$ be a finitely generated $R$-module. \begin{itemize} \item[(i)] If $(R, \mathfrak{m})$ is local and $\mathfrak{p}\in \mAss_RN,$ then there exists an element $x\in R$ not in $\mathfrak{p}$ such that for every ideal $J$ of $R$ for which $\mathfrak{m}$ is minimal over $J+ \mathfrak{p}$, either $x\in J+ \Ann_RN$ or $\mathfrak{m}\in \Ass_RR/J+ \Ann_RN$.
\item[(ii)] If $\mathfrak{p}\in \Spec R$ and $\mathfrak{q}\in \mAss_RN$ with $\mathfrak{q}\subseteq\mathfrak{p},$ then there is an integer $k\geq1$ such that $\mathfrak{p}\in \Ass_RR/J+ \Ann_RN$ for any ideal $J$ of $R$ with $J\subseteq (\mathfrak{p}^k)^{\langle N \rangle}_a$ and $\mathfrak{p}\in \mAss_RR/J+ \mathfrak{q}.$ \end{itemize} \end{proposition}
\begin{proof} In order to show (i), let $$\frak q_1\cap \dots \cap \frak q_t= \Ann_RN,$$ be an irredundant primary decomposition of the ideal $\Ann_RN$ with $\frak q_1$ \, $\frak p$-primary. (Note that $\mathfrak{p}\in \Ass_RR/\Ann_RN$.) It follows from $\mathfrak{p}\in \mAss_RN$ that $\bigcap _{i=2}^t\frak q_i\nsubseteq \frak p$. Hence there exists $x\in \bigcap _{i=2}^t\frak q_i$ such that $x\not\in \frak p$. Now, let $J$ be any ideal of $R$ such that ${\rm Rad}(J+ \mathfrak{p})= \mathfrak{m}$ and that $\mathfrak{m}\not\in \Ass_RR/J+ \Ann_RN$. It is enough for us to show that $x\in J+ \Ann_RN$. To this end, let $$Q_1\cap \dots \cap Q_l=J+\Ann_RN,$$
be an irredundant primary decomposition of the ideal $J+\Ann_RN$, where $Q_i$ is a $\frak p_i$-primary ideal for all $i=1, \dots, l$. Then $\frak m\neq \frak p_i$, for all $i=1, \dots, l$, and so it follows from ${\rm Rad}(J+ \mathfrak{p})= \mathfrak{m}$ that $\frak p\nsubseteq \frak p_i$. Hence $\frak p^v\nsubseteq \frak p_i$, where $v\geq1$ is an integer such that $\frak p^v\subseteq \frak q_1$. Therefore, since $x\frak q_1\subseteq \Ann_RN$, it follows that $x\frak p^v\subseteq Q_i$, for all $i=1, \dots, l$. Consequently $x\in Q_1\cap \dots \cap Q_l$, so that $x\in J+\Ann_RN,$ as required.
In order to prove (ii), without loss of generality, we may assume that $(R, \mathfrak{p})$ is local. Then $(\mathfrak{p}^k)^{\langle N \rangle}_a= (\mathfrak{p}^k)^{(N)}_a.$ Now, let $x$ be as in (i). Then, in view of Lemma \ref{2.1}(v), there exists an integer $k\geq1$ such that $x \not\in (\mathfrak{p}^k)^{(N)}_a.$ Therefore if $J$ is an ideal of $R$ such that $J\subseteq (\mathfrak{p}^k)^{\langle N \rangle}_a$ and $\mathfrak{p}\in \mAss_RR/J+ \mathfrak{q}$, then $x\not\in J+\Ann_RN,$ and so it follows from (i) that $\mathfrak{p}\in \Ass_RR/J+ \Ann_RN$. \end{proof}
\begin{proposition} \label{2.10} Let $I$ be an ideal of a Noetherian ring $R$ and $S$ a multiplicatively closed subset of $R.$
Then for any finitely generated $R$-module $N$, \[ \bigcap_{n\geq1}S((I^n)^{(N)}_a)= \bigcap \{\mathfrak{p}\in \mAss_RN\,\mid\,(I+ \mathfrak{p})\cap S= \emptyset \}.\] \end{proposition}
\begin{proof} Let $x\in \bigcap_{n\geq1}S((I^n)^{(N)}_a).$ Then, for all $n\geq1$, there exists $s\in S$ such that $sx \in (I^n)^{(N)}_a.$ Now let $\mathfrak{p}\in \mAss_R N$ be such that $(\mathfrak{p}+ I)\cap S= \emptyset.$ Then, it follows from Lemma \ref{2.1}(v) that $x\in \mathfrak{p}.$
Conversely, suppose that $x\in \mathfrak{p}$ for all $\mathfrak{p}\in \mAss_RN$ with $(\mathfrak{p}+ I)\cap S= \emptyset.$ Then, by virtue of Lemma \ref{2.1}(v), $x/1\in (S^{-1}I^n)^{(S^{-1}N)}_a$ for all $n\geq1.$ Hence, in view of Lemma \ref{2.1}(iv), $x/1\in S^{-1}((I^n)^{(N)}_a)$, and so $sx \in (I^n)^{(N)}_a$ for some $s\in S.$ Consequently, we have $x\in S((I^n)^{(N)}_a)$, as required. \end{proof}
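In particular, taking $S=\{1\}$ in Proposition \ref{2.10} recovers Lemma \ref{2.1}(v): in this case $S((I^n)^{(N)}_a)= (I^n)^{(N)}_a$ and the condition $(I+ \mathfrak{p})\cap S= \emptyset$ simply means that $I+ \mathfrak{p}\neq R$.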
\begin{theorem} \label{2.11} Let $R$ be a Noetherian ring and let $N$ be a finitely generated $R$-module. Let $I$ and $J$ be ideals of $R.$ Then \[ \bigcap_{n\geq1}((I^n)^{(N)}_a:_R \langle J\rangle)= \bigcap\{\mathfrak{p}\in \mAss_RN\,\mid\,J\nsubseteq {\rm Rad}(I+ \mathfrak{p})\}. \] \end{theorem}
\begin{proof} In view of Theorem \ref{2.3} the set $A^*_a(I,N):= \bigcup _{n\geq1} \Ass_RR/(I^n)^{(N)}_a$ is finite. Let $A^*_a(I,N)= \{\mathfrak{p}_1,\ldots, \mathfrak{p}_t\}.$ Let $r$ be an integer such that $0\leq r\leq t$ and $J\nsubseteq\bigcup_{i=1}^r \mathfrak{p}_i$ but $J\subseteq \bigcap_{i=r+1}^t \mathfrak{p}_i.$ Then there exists an element $s\in J$ such that $s\not\in\bigcup_{i=1}^r \mathfrak{p}_i.$ Suppose $S= \{ s^{i} \,\mid\, i\geq0 \}.$ Then it is easily seen that $((I^n)^{(N)}_a:_R \langle J\rangle)= S((I^n)^{(N)}_a)$ for each integer $n\geq1.$ Now in view of Proposition \ref{2.10} it is enough to show that $J\subseteq {\rm Rad}(I+ \mathfrak{p})$ if and only if $s\in {\rm Rad}(I+ \mathfrak{p})$ for each $\mathfrak{p}\in \mAss_R N.$ To do this, as $s\in J$ one direction is clear. For the other direction, let $\mathfrak{q}$ be a
minimal prime ideal over $I+ \mathfrak{p}.$ Then, as $s\in {\rm Rad}(I+ \mathfrak{p})$ and $I+ \mathfrak{p}\subseteq \frak q$, we have $s\in\mathfrak{q}$, and hence in view of the choice of $s$, it suffices to show that $\mathfrak{q}\in A^*_a(I,N).$ By virtue of Lemma \ref{2.1}, we may assume that $R$ is local with maximal ideal $\mathfrak{q}.$ Let $x$ be as in the Proposition \ref{2.9}. Then by Lemma \ref{2.1}, there is an integer $n\geq1$ such that $x\not\in (I^n)^{(N)}_a.$ Now it is easy to see that $\mathfrak{q}$ is minimal over $ (I^n)^{(N)}_a+\mathfrak{p}.$ Therefore, it follows from Proposition \ref{2.9} that $\mathfrak{q}\in \Ass_R R/(I^n)^{(N)}_a,$ and so $\mathfrak{q}\in A^*_a(I,N)$, as required.\\ \end{proof}
\begin{corollary} Let $R$ be a Noetherian ring and $I$ an ideal of $R$. Let $N$ be a finitely generated $R$-module and $\mathfrak{p}\in \mAss_RN$. Then $\mAss_RR/(I+\frak p) \subseteq A^*_a(I,N).$ \end{corollary} \begin{proof} The assertion follows from the last argument in the proof of Theorem \ref{2.11}. \end{proof}
\begin{corollary}\label{3.7} Let $(R,\mathfrak{m})$ be a local (Noetherian) ring and let $N$ be a finitely generated $R$-module. Then, for any proper ideal $I$ of $R$,
\[ \bigcap_{n\geq1}((I^n)^{(N)}_a:_R \langle \mathfrak{m}\rangle)= \bigcap\{\mathfrak{p}\in \mAss_RN\,\mid\,{\rm Rad}(I+ \mathfrak{p})\subsetneqq \mathfrak{m}\}. \] \end{corollary} \begin{proof}
The assertion follows from Theorem \ref{2.11}.\\ \end{proof} \begin{proposition} \label{2.13} Let $(R,\mathfrak{m})$ be a local (Noetherian) ring, $I$ a proper ideal of $R$ and $N$ a finitely generated $R$-module. Then the following conditions are equivalent: \begin{itemize} \item[(i)] For all $\mathfrak{p}\in \mAss_RN, \Rad(I+\mathfrak{p})\neq\mathfrak{m}.$
\item[(ii)] $\bigcap_{n\geq1}(((I+\Ann_RN)^n)_a:_R\langle \mathfrak{m}\rangle)\subseteq {\rm Rad}(\Ann_RN).$
\item[(iii)] $\bigcap_{n\geq1}((I+\Ann_RN)^n:_R\langle \mathfrak{m}\rangle)\subseteq {\rm Rad}(\Ann_RN).$
\item[(iv)] $\bigcap_{n\geq1}((I^n)^{(N)}_a:_R \langle \mathfrak{m}\rangle)= \Rad(\Ann_RN).$ \end{itemize} \end{proposition}
\begin{proof} (i) $\Longrightarrow$ (ii): In view of Corollary \ref{3.7}, \[ \bigcap_{n\geq1}((I^n)^{(N)}_a:_R \langle\mathfrak{m}\rangle)= \Rad(\Ann_RN). \] Hence as \[ \bigcap_{n\geq1}(((I+\Ann_RN)^n)_a:_R\langle \mathfrak{m}\rangle) \subseteq \bigcap_{n\geq1}((I^n)^{(N)}_a:_R \langle \mathfrak{m}\rangle), \]
it follows that (ii) holds. The implication (ii) $\Longrightarrow$ (iii) is trivial.
In order to show that (iii) $\Longrightarrow$ (iv), suppose the contrary, that is (iv) is not true. Then $$\Rad(\Ann_RN) \subsetneqq \bigcap_{n\geq1}((I^n)^{(N)}_a:_R \langle \mathfrak{m}\rangle).$$ Hence, according to Corollary 3.7, there exists $\mathfrak{p}\in \mAss_RN$ such that $\Rad(I+ \mathfrak{p})= \mathfrak{m}.$ Moreover, applying the assumption it is easily seen that $\Rad(I+ \Ann_RN) \neq \mathfrak{m}.$ Therefore $$\Rad((I+ \Ann_RN)^n:_R\langle \mathfrak{m} \rangle + \mathfrak{p})= \mathfrak{m},$$ for each integer $n\geq1$.
Now, let $x$ be as in the Proposition \ref{2.9}. Since $\mathfrak{m}\not\in \Ass_R R/((I+\Ann_RN)^n:_R\langle \mathfrak{m} \rangle)$ it follows that $$x \in \bigcap_{n\geq1}((I+\Ann_RN)^n:_R\langle \mathfrak{m}\rangle).$$
Thus $x\in {\rm \Rad}(\Ann_RN)$, i.e., $x\in \mathfrak{p}$ which is a contradiction.
Finally, the implication (iv) $\Longrightarrow$ (i) follows from Corollary 3.7. \end{proof}
\begin{theorem} \label{2.14} Let $(R,\mathfrak{m})$ be a local (Noetherian) ring, let $N$ be a finitely generated $R$-module, and let $I$ be an ideal of $R.$ Then the following conditions are equivalent: \begin{itemize} \item[(i)] $\bigcap_{n\geq1}((I^n \hat{R})^{(\hat{N})}_a:_{\hat{R}} \langle \mathfrak{m} \hat{R} \rangle)= \Rad (\Ann_ {\hat{R}} \hat{N}).$
\item[(ii)] For all integers $n\geq1$ there exists an integer $k\geq1$ such that \[ (I+\Ann_RN)^k:_R\langle \mathfrak{m}\rangle \subseteq (\mathfrak{m}^n)^{(N)}_a. \]
\item[(iii)] For all integers $n\geq1$ there exists an integer $k\geq1$ such that \[ ((I+\Ann_RN)^k)_a:_R\langle \mathfrak{m}\rangle \subseteq (\mathfrak{m}^n)^{(N)}_a. \]
\item[(iv)] For all integers $n\geq1$ there exists an integer $k\geq1$ such that \[ (I^k)^{(N)}_a:_R\langle \mathfrak{m} \rangle \subseteq (\mathfrak{m}^n)^{(N)}_a. \] \end{itemize} \end{theorem}
\begin{proof} Without loss of generality, by virtue of the faithful flatness of $\hat{R}$, we may assume that $(R,\mathfrak{m})$ is a complete local ring. Now suppose that (i) is satisfied. Then \[ \bigcap_{n\geq1}\bigl(((I^n)^{(N)}_a:_R \langle \mathfrak{m} \rangle)/ \Rad(\Ann_RN)\bigr)= 0. \] As $R/\Ann_RN$ is a complete local ring, Chevalley's Theorem (see \cite[Theorem 30.1]{Na}) says that for all $n\geq1$ there exists an integer $k\geq1$ such that \[ ((I^k)^{(N)}_a:_R \langle \mathfrak{m} \rangle)/\Rad(\Ann_RN)\subseteq (\mathfrak{m}/\Rad(\Ann_RN))^n. \] Therefore \[ ((I^k)^{(N)}_a:_R \langle \mathfrak{m}\rangle) \subseteq \mathfrak{m}^n+ \Rad(\Ann_RN) \subseteq (\mathfrak{m}^n)^{(N)}_a, \] and so the statement (iv) is shown to be true.
The conclusions (iv) $\Longrightarrow$ (iii) and (iii) $ \Longrightarrow$ (ii) are obviously true. So, in order to complete the proof, it is enough to show that (ii) $\Longrightarrow$ (i). To this end, suppose that for all $n\geq1$ there exists an integer $k\geq1$ such that $$(I+\Ann_RN)^k:_R\langle \mathfrak{m}\rangle \subseteq (\mathfrak{m}^n)^{(N)}_a.$$ Then in view of Lemma \ref{2.1} we have $$\bigcap_{k\geq1}((I+\Ann_RN)^k:_R \langle \mathfrak{m}\rangle) \subseteq \Rad(\Ann_RN).$$ Now use Proposition 3.8 to complete the proof. \end{proof}
We are now ready to prove the first main result of this section. In fact there is a characterization of the quintasymptotic prime ideals of $I$ with respect to $N$, which is a generalization of \cite[Proposition 3.5]{Mc2}. \begin{proposition} \label{3.1} Let $R$ be a Noetherian ring and let $N$ be a finitely generated $R$-module. Let $I \subseteq \mathfrak{p}$ be ideals of $R$ such that $\mathfrak{p}\in \Supp(N).$ Then the following conditions are equivalent: \begin{itemize} \item[(i)] $\mathfrak{p}\in \bar{Q}^*(I, N).$
\item[(ii)] There exists an integer $k\geq0$ such that $\mathfrak{p}\in \Ass_R R/J+\Ann_RN$ for any ideal $J$ of $R$ with $I\subseteq \Rad(J)$ and $J\subseteq (\mathfrak{p}^k)^{\langle N \rangle}_a.$
\item[(iii)] There exists an integer $k \geq 0$ such that for all integers $m \geq 0$ \[ (I+ \Ann_RN)^m :_R \langle \mathfrak{p}\rangle \nsubseteq (\mathfrak{p}^k)^{\langle N \rangle}_a. \]
\item[(iv)] There exists an integer $k \geq 0$ such that for all integers $m \geq 0$ \[ ((I+ \Ann_RN)^m)_a :_R \langle \mathfrak{p}\rangle \nsubseteq (\mathfrak{p}^k)^{\langle N \rangle}_a. \]
\item[(v)] There exists an integer $k \geq 0$ such that for all integers $m \geq 0$ \[ (I^m)^{(N)}_a :_R \langle \mathfrak{p}\rangle \nsubseteq (\mathfrak{p}^k)^{\langle N \rangle}_a. \] \end{itemize} \end{proposition}
\begin{proof} (i) $\Longrightarrow$ (ii): Let $\mathfrak{p}\in \bar{Q}^*(I, N).$ Then there exists a prime ideal $\mathfrak{q}\in \mAss_{\hat{R_{\mathfrak{p}}}}\hat{N_{\mathfrak{p}}}$ such that $\Rad(I\hat{R_{\mathfrak{p}}}+ \mathfrak{q})= \mathfrak{p}\hat{R_{\mathfrak{p}}}.$ Now, let $k$ be as in the Proposition \ref{2.9}(ii), applied to $\mathfrak{q}\in \mAss_{\hat{R_{\mathfrak{p}}}}\hat{N_{\mathfrak{p}}}.$ Let $J$ be any ideal of $R$ such that $I\subseteq \Rad(J)$ and $J\subseteq (\mathfrak{p}^k)^{\langle N \rangle}_a.$ Then $I\hat{R_{\mathfrak{p}}} \subseteq \Rad(J\hat{R_{\mathfrak{p}}})$ and $J\hat{R_{\mathfrak{p}}}\subseteq (\mathfrak{p}^k \hat{R_{\mathfrak{p}}})^{(\hat{N_{\mathfrak{p}}})}_a$ by virtue of Lemma \ref{2.7}. Since $\mathfrak{p}\hat{R_{\mathfrak{p}}}$ is the maximal ideal of $\hat{R_{\mathfrak{p}}}$, it follows that $(\mathfrak{p}\hat{R_{\mathfrak{p}}})^{(\hat{N_{\mathfrak{p}}})}_a= \mathfrak{p}\hat{R_{\mathfrak{p}}}$, and so $J\hat{R_{\mathfrak{p}}}$ is a proper ideal of $\hat{R_{\mathfrak{p}}}.$ Thus $\Rad(J\hat{R_{\mathfrak{p}}}+ \mathfrak{q})= \mathfrak{p}\hat{R_{\mathfrak{p}}}$. Hence Proposition \ref{2.9} shows that $\mathfrak{p}\hat{R_{\mathfrak{p}}}\in \Ass_{\hat {R_{\mathfrak{p}}}} \hat {R_{\mathfrak{p}}}/J\hat {R_{\mathfrak{p}}}+\Ann_{\hat {R_{\mathfrak{p}}}}\hat {N_{\mathfrak{p}}}$, and so by \cite[Theorem 23.2]{Ma} we have $\mathfrak{p}\in \Ass_R R/J+\Ann_RN$. That is (ii) holds.
The implication (ii) $\Longrightarrow$ (iii) follows easily from the fact that $$\mathfrak{p}\not\in \Ass_R R/((I+ \Ann_RN)^n :_R\langle \mathfrak{p}\rangle),$$ for all integers $n \geq 0.$
The conclusions (iii) $\Longrightarrow$ (iv) and (iv) $\Longrightarrow$ (v) are obviously true.
Finally, in order to complete the proof, we have to show the implication (v) $\Longrightarrow$ (i). To this end, suppose that there is an integer $k \geq 0$ such that $((I^m)^{(N)}_a :_R \langle \mathfrak{p}\rangle) \nsubseteq (\mathfrak{p}^k)^{\langle N \rangle}_a$, for all integers $m \geq 0$. Then by Lemma \ref{2.7}, $$((I^m\hat{R_{\mathfrak{p}}})^{(\hat{N_{\mathfrak{p}}})}_a :_ {\hat{R_{\mathfrak{p}}}} \langle \mathfrak{p}\hat{R_{\mathfrak{p}}}\rangle) \nsubseteq (\mathfrak{p}^k\hat{R_{\mathfrak{p}}})^{(\hat{N_{\mathfrak{p}}})}_a.$$ Hence, in view of Proposition \ref{2.13}, there is a $\mathfrak{q}\in \mAss_{\hat{R_{\mathfrak{p}}}} \hat{N_{\mathfrak{p}}}$ such that $\Rad(I\hat{R_{\mathfrak{p}}}+ \mathfrak{q})= \mathfrak{p}\hat{R_{\mathfrak{p}}}$, and so $\mathfrak{p}\in \bar{Q}^*(I, N)$, as required. \end{proof}
We are now ready to state and prove the second main theorem of this section, which is a characterization of the equivalence between the topologies $\{(I^n)_a^{(N)}\}_{n\geq1}$, $\{S((I^n)_a^{(N)})\}_{n\geq1}$, $\{S(((I+ \Ann_RN)^n)_a)\}_{n\geq 1}$ and $\{S((I+ \Ann_RN)^n)\}_{n\geq 1}$ in terms of the quintasymptotic primes of $I$ with respect to $N$. This will generalize the main result of McAdam \cite{Mc2}. \begin{theorem}\label{3.2} Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R.$ Then for any multiplicatively closed subset $S$ of $R$ the following are equivalent: \begin{itemize} \item[(i)] $S \subseteq R\setminus\bigcup\{\mathfrak{p}\in \bar{Q}^*(I, N)\}.$
\item[(ii)] The topologies defined by $\{S((I^n)^{(N)}_a)\}_{n \geq 0}$ and $\{(I^n)^{(N)}_a \}_{n \geq 0}$ are equivalent.
\item[(iii)] The topology defined by $\{S(((I+ \Ann_RN)^n)_a)\}_{n \geq 0}$ is finer than the topology defined by $\{(I^n)^{(N)}_a \}_{n \geq 0}.$
\item[(iv)] The topology defined by $\{S((I+ \Ann_R N)^n) \}_{n \geq 0}$ is finer than the topology defined by $\{(I^n)^{(N)}_a \}_{n\geq0}.$
\item[(v)] For all $\mathfrak{p}\in \Supp(N) \cap V(I)$ the topology defined by $\{S((I^n)^{(N)}_a)\}_{n\geq0}$ is finer than the topology defined by $\{(\mathfrak{p}^n)^{\langle N \rangle}_a)\}_{n \geq 0}.$
\item[(vi)] For all $\mathfrak{p}\in \Supp(N) \cap V(I)$ the topology defined by $\{S(((I+ \Ann_RN)^n)_a)\}_{n \geq 0}$ is finer than the topology defined by $\{(\mathfrak{p}^n)^{\langle N \rangle}_a)\}_{n \geq 0}.$
\item[(vii)] For all $\mathfrak{p}\in \Supp(N) \cap V(I)$ the topology defined by $\{S((I+\Ann_R N)^n) \}_{n \geq 0}$ is finer than the topology defined by $\{(\mathfrak{p}^n)^{\langle N \rangle}_a)\}_{n \geq 0}.$ \end{itemize} \end{theorem}
\begin{proof} In order to show (i) $\Longrightarrow$ (ii), let $\mathfrak{p}\in \Spec R$ with $I+ \Ann_RN \subseteq \mathfrak{p}$ and let $l \geq 1.$ We first show that there exists an integer $m \geq 1$ such that $S((I^m)^{(N)}_a) \subseteq (\mathfrak{p}^l)^{\langle N \rangle}_a.$ To do this, let $S'$ be the natural image of $S$ in $R_\mathfrak{p}.$ Since $\bar{Q}^*(I, N)$ behaves well under localization, we have $S' \subseteq R_\mathfrak{p}\setminus\cup\{\mathfrak{q}\in \bar{Q}^*(IR_\mathfrak{p}, N_\mathfrak{p})\}.$ Moreover, it is easy to see that $S'((I^mR_\mathfrak{p})^{(N_\mathfrak{p})}_a) \subseteq (\mathfrak{p}^lR_\mathfrak{p})^{\langle N_\mathfrak{p}\rangle }_a$ implies $S((I^m)^{(N)}_a) \subseteq (\mathfrak{p}^l)^{\langle N \rangle}_a.$ Therefore we may assume that $R$ is local with maximal ideal $\mathfrak{p}.$ Then $(\mathfrak{p}^n)^{\langle N \rangle}_a= (\mathfrak{p}^n)^{(N)}_a.$ In view of Lemma \ref{2.7} and \cite[Proposition 3.8]{Ah} we may assume in addition that $R$ is complete. Whence, in view of \cite[Lemma 3.5]{Ah}, for any $\mathfrak{q} \in \mAss_R N$, $S$ is disjoint from $I+ \mathfrak{q}.$ Therefore, by Proposition \ref{2.10} we have $ \bigcap_{n \geq 1} S((I^n)^{(N)}_a) = \Rad (\Ann_RN).$ Consequently $$ \bigcap_{n\geq1}S((I^n)^{(N)}_a)/ \Rad(\Ann_R N)= 0.$$ As the ring $R/\Rad(\Ann_RN)$ is complete, Chevalley's Theorem (see \cite[Theorem 30.1]{Na}) implies the existence of an integer $m \geq 1$ such that $$S((I^m)^{(N)}_a)/\Rad(\Ann_RN) \subseteq (\mathfrak{p}/\Rad(\Ann_RN))^l.$$ Hence \[ S((I^m)^{(N)}_a) \subseteq \mathfrak{p}^l+ \Rad(\Ann_RN) \subseteq (\mathfrak{p}^l)^{(N)}_a. \] Now, in view of Corollary \ref{2.6}, we can consider $\mathfrak{q}_1\cap \dots \cap \mathfrak{q}_n$ a minimal primary decomposition of $(I^l)^{(N)}_a$ where $\mathfrak{q}_i$ is $\mathfrak{p}_i$-primary and integrally closed with respect to $N$ for every $i= 1, \ldots, n.$ Then there exists an integer $l_i$ such that $\mathfrak{p}_i^{l_i} \subseteq \mathfrak{q}_i$ for $i= 1, \ldots, n$, and moreover for some $m_i$ we have $S((I^{m_i})^{(N)}_a)\subseteq (\mathfrak{p}_i^{l_i})^{\langle N \rangle}_a.$ Let $m= \max\{m_1,\ldots, m_n\}.$ Then we deduce that $S((I^m)^{(N)}_a)\subseteq (\mathfrak{p}_i^{l_i})^{\langle N \rangle}_a$ for each $1\leq i\leq n.$ On the other hand, we have $$(\mathfrak{p}_i^{l_i})^{\langle N \rangle}_a \subseteq \bigcup _{s\in R\setminus \mathfrak{p}_i}((\mathfrak{q}_i)^{(N)}_a:_Rs)= \mathfrak{q}_i,$$ and therefore $S((I^m)^{(N)}_a)\subseteq \bigcap_{i=1}^n \mathfrak{q}_i.$ This completes the proof of (ii).
The implications (ii) $\Longrightarrow$ (iii) $\Longrightarrow$ (iv) are obviously true. In order to prove the implication (iv)$\Longrightarrow$ (v), it is enough to show that \[ S \subseteq R\setminus \bigcup \{\mathfrak{p}\in \bar{Q}^*(I, N)\}. \] To do this, let $\mathfrak{p}\in \bar{Q}^*(I, N).$ Then, by Proposition \ref{3.1} there exists an integer $k \geq 0$ such that $((I+ \Ann_R N)^m :_R \langle \mathfrak{p}\rangle) \nsubseteq (\mathfrak{p}^k)^{\langle N \rangle}_a$ for all integers $m \geq 0.$ On the other hand, by the assumption there is an integer $l \geq 0$ such that $S((I+ \Ann_RN)^l) \subseteq (I^k)^{(N)}_a.$ Therefore,
$$(I+ \Ann_RN)^l :_R \langle \mathfrak{p}\rangle \nsubseteq S((I+ \Ann_R N)^l).$$ Then it is readily seen that $\mathfrak{p} \cap S = \emptyset$, as required.
The conclusions (v) $\Longrightarrow$ (vi) $\Longrightarrow$ (vii) are trivial. Finally, an argument similar to that used in the proof of the implication (iv) $\Longrightarrow$ (v) shows that (vii) $\Longrightarrow$ (ii) holds. \end{proof}
An immediate consequence of the Theorem \ref{3.2} is the following corollary.
\begin{corollary} \label{3.3} Let $R$ be a Noetherian ring, $N$ a finitely generated $R$-module and $I$ an ideal of $R.$ Then the following conditions are equivalent: \begin{itemize} \item[(i)] $\bar{Q}^*(I, N)= \mAss_R N/IN.$
\item[(ii)] The topologies defined by $\{(I^n)^{(N)}_a \}_{n \geq 0}$ and $\{(I^n)^{\langle N \rangle}_a \}_{n \geq 0}$ are equivalent. \end{itemize} \end{corollary} \begin{proof} Let $S=R\setminus\bigcup\{\mathfrak{p}\in \mAss_RN/IN\}.$ Then $S((I^n)^{(N)}_a )=(I^n)^{\langle N\rangle}_a $. Now, if $\bar{Q}^*(I, N)= \mAss_R N/IN,$ then $S=R\setminus\bigcup\{\mathfrak{p}\in \bar{Q}^*(I, N)\}.$ Hence Theorem \ref{3.2} implies that the topologies defined by $\{(I^n)^{(N)}_a \}_{n \geq 0}$ and $\{(I^n)^{\langle N \rangle}_a \}_{n \geq 0}$ are equivalent. Conversely, if these
topologies are equivalent, then it follows from Theorem \ref{3.2} that $S\subseteq R\setminus\bigcup\{\mathfrak{p}\in \bar{Q}^*(I, N)\}$, and so
$\bar{Q}^*(I, N)\subseteq\mAss_R N/IN$. On the other hand, using \cite[Lemma 3.5]{Ah}, one easily sees that $\mAss_R N/IN\subseteq \bar{Q}^*(I, N)$; and so
$\bar{Q}^*(I, N)= \mAss_R N/IN.$ This completes the proof. \end{proof}
\section{Local cohomology and ideal topologies} The purpose of this section is to establish the equivalence between the topologies defined by $\{(I^n)_a^{(N)}\}_{n\geq1}$ and $\{S((I^n)_a^{(N)})\}_{n\geq1}$ in terms of the vanishing of the top local cohomology module $H^{\dim N}_I(N)$. This will generalize the main result of Marti-Farre \cite{MF}, as an extension of the main results of Call \cite[Corollary 1.4]{Ca},
Call-Sharp \cite{CS} and Schenzel \cite[Corollary 4.3]{Sc2}.
\begin{theorem} \label{4.1} Let $(R, \mathfrak{m})$ be a local (Noetherian) ring, $N$ a finitely generated $R$-module of dimension $d$ and $I$ an ideal of $R.$ Consider the following conditions: \begin{itemize} \item[(i)] There exists a multiplicatively closed subset $S$ of $R$ such that $ \mathfrak{m} \cap S \neq\emptyset$ and that the topologies defined by $\{S((I^n)^{(N)}_a)\}_{n\geq0}$ and $\{(I^n)^{(N)}_a \}_{n\geq0}$ are equivalent.
\item[(ii)] $H^d_I(N)= 0.$ \end{itemize} Then \text{\rm(i)} $\Longrightarrow$ \text{\rm (ii)}; and these conditions are equivalent, whenever $N$ is quasi-unmixed. \end{theorem}
\begin{proof} We start with the proof of the implication (i) $\Longrightarrow$ (ii). By Theorem \ref{3.2} we have $S \subseteq R \setminus \bigcup \{\mathfrak{p}\in \bar{Q}^*(I, N)\}.$ Then $\mathfrak{m} \not\in \bar{Q}^*(I, N).$ Therefore for all $\mathfrak{q}\in \mAss_{\hat{R}}\hat{N}$ we have $\dim \hat{R}/I\hat{R}+ \mathfrak{q} >0.$ By the Lichtenbaum-Hartshorne Theorem (see \cite[Corollary 3.4]{DS}), it follows that $H^d_I(N)= 0.$
Now, assume that $N$ is quasi-unmixed and that (ii) holds. We show that (i) is true. To this end, let $S= R \setminus \bigcup \{\mathfrak{p} \in \bar{Q}^*(I, N)\}.$ Then, in view of Theorem \ref{3.2}, the topologies defined by $\{S((I^n)^{(N)}_a)\}_{n\geq0}$ and $\{(I^n)^{(N)}_a \}_{n\geq0}$ are equivalent. Hence, it is enough to show that $ \mathfrak{m} \cap S \neq\emptyset.$ Suppose to the contrary that $\mathfrak{m} \cap S = \emptyset.$ Then $\mathfrak{m}\in \bar{Q}^*(I, N).$ So there exists $ \mathfrak{q}\in \mAss_{\hat{R}}\hat{N}$ such that $\mathfrak{m}\hat{R}= \Rad(I\hat{R}+ \mathfrak{q}).$ As $N$ is quasi-unmixed it follows that $\dim \hat{R}/I\hat{R}+ \mathfrak{q}= 0$ for some $ \mathfrak{q} \in \mAss_{\hat{R}}\hat{N}$ such that $\dim \hat{R}/\mathfrak{q}= d.$ Now, use \cite[Corollary 3.4]{DS} to see that $H^d_I(N) \neq 0$, which is a contradiction. \end{proof}
The final results are strengthened and generalized versions of corresponding results of Marley (\cite[Corollaries 2.4 and 2.5]{M}) and Naghipour-Sedghi (\cite[Corollary 3.3]{NS}). \begin{corollary} \label{3.5} Assume that $R$ is a Noetherian ring. Let $N$ be a finitely generated $R$-module of dimension $d$ and $I$ an ideal of $R.$ Then $\Supp(H^d_I(N)) \subseteq \bar{Q}^*(I, N).$ Moreover, equality holds whenever $N$ is Cohen-Macaulay. \end{corollary}
\begin{proof} Let $\mathfrak{p}\in \Supp(H^d_I(N)).$ Then $H^d_{IR_\mathfrak{p}}(N_{\mathfrak{p}})\neq 0,$ and so $\dim N_{\mathfrak{p}} = d.$ Hence, in view of the Lichtenbaum-Hartshorne Theorem (cf. \cite[Corollary 3.4]{DS}) there exists $ \mathfrak{q}\in \mAss_{\hat{R_{\mathfrak{p}} }}\hat{N_{\mathfrak{p}}}$ such that $\mathfrak{p}\hat{R_{\mathfrak{p}}}= \Rad(I\hat{R_{\mathfrak{p}}}+ \mathfrak{q}).$ Thus $\mathfrak{p}\in \bar{Q}^*(I, N)$, and so $\Supp(H^d_I(N))\,\subseteq\,\bar{Q}^*(I,N).$
In order to prove the second assertion, let $\mathfrak{p}\in\bar{Q}^*(I, N)$. Then there exists $ \mathfrak{q}\in \mAss_{\hat{R_{\mathfrak{p}} }}\hat{N_{\mathfrak{p}}}$ such that $\mathfrak{p}\hat{R_{\mathfrak{p}}}= \Rad(I\hat{R_{\mathfrak{p}}}+\mathfrak{q}).$ Now, since $N$ is Cohen-Macaulay, we deduce that $\dim \hat{R_{\mathfrak{p}} }/\mathfrak q=\dim\hat{N_{\mathfrak{p}}}$. Whence, in view of the Lichtenbaum-Hartshorne Theorem, $H^d_{IR_\mathfrak{p}}(N_{\mathfrak{p}})\neq 0,$ and so $\mathfrak{p}\in \Supp(H^d_I(N)).$ \end{proof}
\begin{corollary} \label{3.6} Let $(R, \mathfrak{m})$ be a local (Noetherian) ring, $N$ a finitely generated $R$-module of dimension $d$ and $I$ an ideal of $R.$ Then \[ \Supp(H^{d-1}_I(N))\,\subseteq\,\bar{Q}^*(I, N) \cup\, \{\mathfrak{m}\}. \] Therefore $\Ass_R H^{d-1}_I(N)$ is a finite set. \end{corollary}
\begin{proof} Let $\mathfrak{p}\in \Supp(H^{d-1}_I(N))$ such that $\mathfrak{p}\neq \mathfrak{m}.$ Then $H^{d-1}_{IR_{\mathfrak{p}}}(N_{\mathfrak{p}})\neq 0$, and so $\dim N_{\mathfrak{p}} = d-1.$ Now, by the proof of Corollary \ref{3.5} we have $\mathfrak{p}\in \bar{Q}^*(I, N)$, as required. \end{proof}
\begin{center} {\bf Acknowledgments} \end{center} The authors are deeply grateful to the referee for providing numerous suggestions to improve the readability of the paper and for drawing the authors' attention to Lemma 2.3 and Theorem 2.5. Also, we would like to thank Dr. Monireh Sedghi for a careful reading of the original manuscript and helpful suggestions.
\end{document}
\begin{document}
\title[{An explicit formula for Szeg\H{o} kernels on the Heisenberg group}] {An explicit formula for Szeg\H{o} kernels on the Heisenberg group} \author[Hendrik Herrmann]{Hendrik Herrmann} \address{Mathematical Institute, University of Cologne, Weyertal 86-90, 50931 Cologne, Germany} \thanks{Hendrik Herrmann was partially supported by the CRC TRR 191: ``Symplectic Structures in Geometry, Algebra and Dynamics''. He would like to thank the Mathematical Institute, Academia Sinica, and the School of Mathematics and Statistics, Wuhan University, for hospitality, a comfortable accommodation and financial support during his visits in January and March - April, respectively.} \email{[email protected] or [email protected]} \author[Chin-Yu Hsiao]{Chin-Yu Hsiao} \address{Institute of Mathematics, Academia Sinica and National Center for Theoretical Sciences, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan} \thanks{Chin-Yu Hsiao was partially supported by Taiwan Ministry of Science of Technology project 104-2628-M-001-003-MY2 and the Golden-Jade fellowship of Kenda Foundation} \email{[email protected] or [email protected]} \author[Xiaoshan Li]{Xiaoshan Li} \address{School of Mathematics and Statistics, Wuhan University, Hubei 430072, China} \thanks{Xiaoshan Li was supported by National Natural Science Foundation of China (Grant No. 11501422).}
\email{[email protected]}
\dedicatory{In memory of Professor Qikeng Lu}
\begin{abstract} In this paper, we give an explicit formula for the Szeg\H{o} kernel for $(0,q)$ forms on the Heisenberg group $H_{n+1}$. \end{abstract}
\maketitle \tableofcontents
\section{Introduction}
Let $(X, T^{1,0}X)$ be a CR manifold of dimension $2n+1$, $n\geq1$, and let $\Box^{(q)}_b$ be the Kohn Laplacian acting on $(0,q)$ forms. The orthogonal projection $S^{(q)}:L^2_{(0,q)}(X)\rightarrow {\rm Ker\,}\Box^{(q)}_b$ onto ${\rm Ker\,}\Box^{(q)}_b$ is called the Szeg\H{o} projection, while its distribution kernel $S^{(q)}(x,y)$ is called the Szeg\H{o} kernel. The study of the Szeg\H{o} projection and kernel is a classical and important subject in several complex variables and CR geometry. When $X$ is compact, strongly pseudoconvex and $\Box^{(0)}_b$ has $L^2$ closed range, Boutet de Monvel-Sj\"ostrand~\cite{BM76} showed that $S^{(0)}(x,y)$ is a complex Fourier integral operator with complex phase. In particular, $S^{(0)}(x,y)$ is smooth outside the diagonal of $X\times X$ and there is a precise description of the singularity on the diagonal $x=y$, where $S^{(0)}(x,x)$ has a certain asymptotic expansion. The second-named author~\cite{Hsiao08} showed that if $X$ is compact, the Levi form is non-degenerate and $\Box^{(q)}_b$ has $L^2$ closed range for some $q\in\set{0,1,\ldots,n-1}$, then $S^{(q)}(x,y)$ is a complex Fourier integral operator.
When $X$ is non-compact or $\Box^{(q)}_b$ has no $L^2$ closed range, it is very difficult to study the Szeg\H{o} kernel. In this work, we give an explicit formula for the Szeg\H{o} kernel for $(0,q)$ forms on the Heisenberg group $H_{n+1}=\mathbb C^n\times \mathbb R$. Our results tell us that in the Heisenberg group case, the Szeg\H{o} kernel for $(0,q)$ forms is also a complex Fourier integral operator. Note that in the Heisenberg group case, $\Box^{(q)}_b$ may have no $L^2$ closed range.
Only a few examples of CR manifolds with explicit Szeg\H{o} kernels are known. Giving a closed formula for the Szeg\H{o} kernel is not only a problem of independent interest but is also significant for the general theory. For example, when $X$ is asymptotically flat, the explicit Szeg\H{o} kernel on the Heisenberg group was used in the positive mass theorem in CR geometry~\cite{CMY17},~\cite{HY13},~\cite{HY15}.
We now formulate our main results. We refer to Section~\ref{s:prelim} for some notations and terminology used here. Let $H_{n+1}=\mathbb C^{n}\times\mathbb R$ be the Heisenberg group. We use $x=(z, x_{2n+1})=(x_1,\ldots,x_{2n+1})$ to denote the coordinates on $H_{n+1}$, where $z=(z_1,\ldots,z_n)$, $z_j=x_{2j-1}+ix_{2j}$, $j=1,\ldots,n$. We denote by $T^{1, 0}H_{n+1}$ the CR structure on $H_{n+1}$ which is given by $$T^{1, 0}H_{n+1}:={\rm span}_{\mathbb C}\{Z_j: Z_j=\frac{\partial}{\partial z_j}-i\lambda_j\overline z_j\frac{\partial}{\partial x_{2n+1}}, j=1, \cdots, n\}$$
where $\lambda_j\in\mathbb R, \forall j$, are given real numbers. We denote by $T^{0, 1}H_{n+1}$ the complex conjugate of $T^{1, 0}H_{n+1}$. Set $T=-\frac{\partial}{\partial x_{2n+1}}$. Fix a Hermitian metric on $\mathbb C TH_{n+1}$ denoted by $\langle\cdot|\cdot\rangle$ such that \begin{equation*} \begin{split} &T\bot T^{1, 0}H_{n+1}\bot T^{0, 1}H_{n+1},\\
&\langle T|Z_j\rangle=0, \langle Z_j|Z_k\rangle =\langle\overline Z_j|\overline Z_k\rangle=\delta_{jk}, \forall j, k=1, \cdots, n. \end{split} \end{equation*} Take $d\mu_{H_{n+1}}:=2^{n}dx_1\wedge\cdots\wedge dx_{2n+1}$ be the volume form on $H_{n+1}$.
Denote by $T^{\ast 1,0}H_{n+1}$ and $T^{\ast0,1}H_{n+1}$ the dual bundles of $T^{1,0}H_{n+1}$ and $T^{0,1}H_{n+1}$, respectively. Define the vector bundle of $(0,q)$-forms by $\Lambda^qT^{\ast0,1}H_{n+1}$. Put \[\Lambda^{\bullet}T^{\ast0,1}H_{n+1}:=\oplus_{q=0}^n\Lambda^qT^{\ast0,1}H_{n+1}.\]
The Hermitian metric $\langle\cdot|\cdot\rangle$ on
$\mathbb CTH_{n+1}$ induces by duality a Hermitian metric on $\mathbb CT^\ast H_{n+1}$ and also on $\Lambda^{\bullet}T^{\ast0,1}H_{n+1}$. We shall also denote all these induced metrics by $\langle\cdot|\cdot\rangle$. We can check that the dual frame of $\{Z_j, \overline Z_j, -T;\, j=1,\ldots,n\}$ is $\{dz_j, d\overline z_j, \omega_0;\, j=1,2,\ldots,n\}$, where $dz_j=dx_{2j-1}+idx_{2j}$, $j=1,\ldots,n$, and \begin{equation}\label{e-gue170522} \omega_0(x)=dx_{2n+1}+\sum\limits_{j=1}^ni(\lambda_j\overline z_jdz_j-\lambda_jz_jd\overline z_j). \end{equation} Thus, one has \[\Lambda^qT^{\ast 0,1}H_{n+1}={\rm span}_{\mathbb C}\{d\overline z_{j_1}\wedge\cdots\wedge d\overline z_{j_q};\, 1\leq j_1<\cdots<j_q\leq n\}.\]
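A direct computation shows that $[Z_j, Z_k]= [\overline Z_j, \overline Z_k]= 0$ and $[Z_j, \overline Z_k]= 2i\lambda_j\delta_{jk}\frac{\partial}{\partial x_{2n+1}}= -2i\lambda_j\delta_{jk}T$ for all $j, k=1,\ldots,n$. In particular, the Levi form is diagonal in this frame, with eigenvalues given (up to a universal non-zero constant) by $\lambda_1,\ldots,\lambda_n$, so the CR structure is non-degenerate exactly when all $\lambda_j$ are non-zero (cf. Theorem~\ref{thm:1.1} below).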
Let $D\subset H_{n+1}$ be an open subset. Let $\Omega^{0,q}(D)$
denote the space of smooth sections of $\Lambda^qT^{\ast0, 1}H_{n+1}$ over $D$. Let $\Omega^{0,q}_0(D)$ be the subspace of $\Omega^{0,q}(D)$ whose elements have compact support in $D$. Let $(\,\cdot\,|\,\cdot\,)$ be the $L^2$ inner product on $\Omega^{0,q}_0(H_{n+1})$ induced by $\langle\,\cdot\,|\,\cdot\,\rangle$ and the volume form $d\mu_{H_{n+1}}$ and let $\norm{\cdot}$ denote the corresponding norm. Then for all $u, v\in\Omega^{0,q}_0(H_{n+1})$ \begin{equation}
(u|v)=\int_{H_{n+1}}\langle u| v\rangle d\mu_{H_{n+1}}. \end{equation}
Let $L^2_{(0,q)}(H_{n+1})$ be the completion of $\Omega^{0,q}_0(H_{n+1})$ with respect to $(\cdot|\cdot)$. We write $L^2(H_{n+1}):=L^2_{(0,0)}(H_{n+1})$. We extend $(\,\cdot\,|\,\cdot\,)$ to $L^2_{(0,q)}(H_{n+1})$
in the standard way. For $f\in L^2_{(0,q)}(H_{n+1})$, we denote $\norm{f}^2:=(\,f\,|\,f\,)$. Let \begin{equation} \label{e-suIV} \overline\partial_b:\Omega^{0,q}(H_{n+1})\rightarrow\Omega^{0,q+1}(H_{n+1}) \end{equation} be the tangential Cauchy-Riemann operator. We extend $\overline\partial_{b}$ to $L^2_{(0,r)}(H_{n+1})$, $r=0,1,\ldots,n$, by \begin{equation}\label{e-suVII} \overline\partial_{b}:{\rm Dom\,}\overline\partial_{b}\subset L^2_{(0,r)}(H_{n+1})\rightarrow L^2_{(0,r+1)}(H_{n+1})\,, \end{equation} where ${\rm Dom\,}\overline\partial_{b}:=\{u\in L^2_{(0,r)}(H_{n+1});\, \overline\partial_{b}u\in L^2_{(0,r+1)}(H_{n+1})\}$ and, for any $u\in L^2_{(0,r)}(H_{n+1})$, $\overline\partial_{b} u$ is defined in the sense of distributions. We also write \begin{equation}\label{e-suVIII} \overline{\partial}^{*}_{b}:{\rm Dom\,}\overline{\partial}^{*}_{b}\subset L^2_{(0,r+1)}(H_{n+1})\rightarrow L^2_{(0,r)}(H_{n+1}) \end{equation}
to denote the Hilbert space adjoint of $\overline\partial_{b}$ in the $L^2$ space with respect to $(\,\cdot\,|\,\cdot\, )$. Let $\Box^{(q)}_{b}$ denote the Gaffney extension of the Kohn Laplacian given by \begin{equation}\label{e-suIX} \begin{split} {\rm Dom\,}&\Box^{(q)}_{b}\\ &=\Big\{s\in L^2_{(0,q)}(H_{n+1});\, s\in{\rm Dom\,}\overline\partial_{b}\cap{\rm Dom\,}\overline{\partial}^{*}_{b},\, \overline\partial_{b}s\in{\rm Dom\,}\overline{\partial}^{*}_{b},\ \overline{\partial}^{*}_{b}s\in{\rm Dom\,}\overline\partial_{b}\Big\}\,,\\ \Box^{(q)}_{b}s&=\overline\partial_{b}\overline{\partial}^{*}_{b}s+\overline{\partial}^{*}_{b}\overline\partial_{b}s \:\:\text{for $s\in {\rm Dom\,}\Box^{(q)}_{b}$}\,.
\end{split} \end{equation} By a result of Gaffney, for every $q=0,1,\ldots,n$, $\Box^{(q)}_{b}$ is a positive self-adjoint operator (see \cite[Proposition\,3.1.2]{MM}). That is, $\Box^{(q)}_{b}$ is self-adjoint and the spectrum of $\Box^{(q)}_{b}$ is contained in $\overline\mathbb R_+$, $q=0,1,\ldots,n$. Let \begin{equation}\label{e-suXI-I} S^{(q)}:L^2_{(0,q)}(H_{n+1})\rightarrow{\rm Ker\,}\Box^{(q)}_b \end{equation}
be the orthogonal projection with respect to the $L^2$ inner product $(\,\cdot\,|\,\cdot\,)$ (Szeg\H{o} projection) and let \begin{equation}\label{e-suXI-II} S^{(q)}(x,y)\in D'(H_{n+1}\times H_{n+1},\Lambda^qT^{\ast 0,1}H_{n+1}\boxtimes(\Lambda^qT^{\ast 0,1}H_{n+1})^*) \end{equation} denote the distribution kernel of $S^{(q)}$ (Szeg\H{o} kernel). Put $\mathcal H^{q}_b(H_{n+1}):={\rm Ker\,}\Box^{(q)}_b$. Our first result is the following
\begin{theorem}\label{thm:1.1} If $\lambda_j=0$ for some $j$, then \[\mathcal H^{q}_b(H_{n+1})=\set{0}.\]
Suppose that all $\lambda_j$ are non-zero and let $n_{-}$ be the number of negative $\lambda_js$ and $n_{+}$ be the number of positive $\lambda_{j}s$. If $q\notin\set{n_-,n_+}$, then \[\mathcal H^{q}_b(H_{n+1})=\set{0}.\] \end{theorem}
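To illustrate the statement (this example is only meant as an illustration and is not used later): if all the $\lambda_j$ are positive, then $n_-=0$, $n_+=n$ and Theorem~\ref{thm:1.1} gives $\mathcal H^{q}_b(H_{n+1})=\set{0}$ for every $q$ with $1\leq q\leq n-1$; if, say, $n=2$, $\lambda_1>0$ and $\lambda_2<0$, then $n_-=n_+=1$ and only $\mathcal H^{1}_b(H_{n+1})$ can be non-trivial.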
In view of Theorem~\ref{thm:1.1}, we only need to consider the non-degenerate case, that is, all $\lambda_j$ are non-zero.
We now state our explicit formula for $S^{(q)}$ in the non-degenerate case. We introduce some notations and definitions. Consider the two functions \begin{equation}\label{phase1} \begin{split} &\varphi_-(x, y)\\
&=-x_{2n+1}+y_{2n+1}+i\sum_{j=1}^n|\lambda_j||z_j-w_j|^2+i\sum_{j=1}^n \lambda_j(\overline z_jw_j-z_j\overline w_j)\in C^\infty(H_{n+1}\times H_{n+1}),\\ &\varphi_+(x, y)\\
&=x_{2n+1}-y_{2n+1}+i\sum_{j=1}^n|\lambda_j||z_j-w_j|^2+i\sum_{j=1}^n \lambda_j(z_j\overline w_j-\overline z_jw_j)\in C^\infty(H_{n+1}\times H_{n+1}). \end{split} \end{equation} Let $\varphi\in\{\varphi_-,\varphi_+\}$ be one of the functions defined above. Fix $q=0,1,\ldots,n$. For $u\in\Omega^{0,q}_0(H_{n+1})$, by using integration by parts with respect to $y_{2n+1}$ several times, we can show that \[\lim_{\varepsilon\To0^+}\int_{H_{n+1}}\frac{1}{\bigr(-i(\varphi(x,y)+i\varepsilon)\bigr)^{n+1}}u(y)d\mu_{H_{n+1}}(y)\] exists for every $x\in H_{n+1}$, \begin{equation}\label{e-gue170525} \lim_{\varepsilon\To0^+}\int_{H_{n+1}}\frac{1}{\bigr(-i(\varphi(x,y)+i\varepsilon)\bigr)^{n+1}}u(y)d\mu_{H_{n+1}}(y)\in\Omega^{0,q}(H_{n+1}) \end{equation} and the operator \begin{equation}\label{e-gue170525I} \begin{split} I^{(q)}_{\varphi}: \Omega^{0,q}_0(H_{n+1})&\rightarrow\Omega^{0,q}(H_{n+1}),\\ u&\mapsto \lim_{\varepsilon\To0^+}\int_{H_{n+1}}\frac{1}{\bigr(-i(\varphi(x,y)+i\varepsilon)\bigr)^{n+1}}u(y)d\mu_{H_{n+1}}(y)\in\Omega^{0,q}(H_{n+1}) \end{split} \end{equation} is continuous. Moreover, we will show in Theorem~\ref{tema2} and Corollary~\ref{c-gue170528} that there is a constant $C>0$ such that \[\norm{I^{(q)}_{\varphi}u}\leq C\norm{u},\ \ \forall u\in\Omega^{0,q}_0(H_{n+1}).\] Thus, we can extend $I^{(q)}_{\varphi}$ to $L^2_{(0,q)}(H_{n+1})$ in the standard way and we have that \begin{equation}\label{e-gue170525II} I^{(q)}_{\varphi}: L^2_{(0,q)}(H_{n+1})\rightarrow L^2_{(0,q)}(H_{n+1}) \end{equation} is continuous. We first state our main result for positive case.
\begin{theorem}\label{main theorem 2} Assume that $\lambda_j>0$ for every $j=1,2,\ldots,n$. With the notations used above, we have \begin{equation}\label{e-gue170525rj} S^{(0)}=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!I^{(0)}_{\varphi_-}\ \ \mbox{on $L^2(H_{n+1})$}. \end{equation} \end{theorem}
We now consider non-positive case. Let $r\in\mathbb N$. For a multi-index $J=(j_1,\ldots,j_r)\in\{1,\ldots,n\}^r$ we set $l(J)=r$. We say that $J$ is strictly increasing if $1\leqslant j_1<j_2<\cdots<j_r\leqslant n$ and we put $d\overline z^J=d\overline z_{j_1}\wedge\cdots\wedge d\overline z_{j_r}$. Let $u\in\oplus^n_{q=0}L^2_{(0,q)}(H_{n+1})$. We have the representation \[u=\sideset{}{'}\sum_{1\leq l(J)\leq n}u_J(z)d\overline z^J+u_0(z),\ \ u_0(z)\in L^2(H_{n+1}),\] where $\sum^{'}$ means that the summation is performed only over strictly increasing multi-indices. Suppose that all $\lambda_j$ are non-zero and let $n_-$ be the number of negative $\lambda_js$. Assume that $n_->0$ and suppose that $\lambda_1<0,\ldots,\lambda_{n_-}<0$. Put \begin{equation}\label{e-gue170525r} \begin{split} J_{n_-}=(1,\ldots,n_-),\\ J_{n_+}=(n_-+1,\ldots,n)\ \ \mbox{if $n_-<n$}. \end{split} \end{equation} Consider the operator \begin{equation}\label{e-gue170522a} \begin{split} \tau_{-}: \oplus^n_{q=0}L^2_{(0,q)}(H_{n+1})&\rightarrow L^2_{(0,n_-)}(H_{n+1}),\\ u=\sideset{}{'}\sum_{1\leq l(J)\leq n}u_Jd\overline z^J+u_0(z)&\mapsto u_{J_{n_-}}d\overline z^{J_{n_-}}. \end{split} \end{equation} If $n_-<n$, set \begin{equation}\label{e-gue170522aI} \begin{split} \tau_{+}: \oplus^n_{q=0}L^2_{(0,q)}(H_{n+1})&\rightarrow L^2_{(0,n-n_-)}(H_{n+1}),\\ u=\sideset{}{'}\sum_{1\leq l(J)\leq n}u_Jd\overline z^J+u_0(z)&\mapsto u_{J_{n_+}}d\overline z^{J_{n_+}}. \end{split} \end{equation} If $n_-=n$, set \begin{equation}\label{e-gue170607} \begin{split} \tau_{+}: \oplus^n_{q=0}L^2_{(0,q)}(H_{n+1})&\rightarrow L^2(H_{n+1}),\\ u=\sideset{}{'}\sum_{1\leq l(J)\leq n}u_Jd\overline z^J+u_0(z)&\mapsto u_0(z). \end{split} \end{equation} Then \(\tau_-\) and \(\tau_{+}\) are continuous operators. Our main result for $(0,q)$ forms is the following
\begin{theorem}\label{main theorem 3} Suppose that all $\lambda_j$ are non-zero and let $n_-$ be the number of negative $\lambda_js$. Assume that $n_->0$. Let $q\in\set{n_-,n_+}$, where $n_+=n-n_-$. With the notations used above, we have \[S^{(q)}=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!I^{(n_-)}_{\varphi_-}\circ\tau_-+\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!I^{(n_+)}_{\varphi_+}\circ\tau_+\ \ \mbox{on $L^2_{(0,q)}(H_{n+1})$}.\] \end{theorem}
\begin{remark}\label{r-gur170525} With the notations and assumptions used in Theorem~\ref{main theorem 3}, suppose $q=n_-$ and $n_-\neq n_+$. Since $\tau_+u=0$ for every $u\in L^2_{(0,q)}(H_{n+1})$, we get \[S^{(q)}=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!I^{(n_-)}_{\varphi_-}\circ\tau_-.\] If $q=n_-=n_+$, the operators $I^{(n_-)}_{\varphi_-}\circ\tau_-$ and $I^{(n_+)}_{\varphi_+}\circ\tau_+$ are non-trivial. \end{remark}
\begin{remark}\label{r-gue170525I} It should be noticed that the operator $I^{(q)}_\varphi$ in \eqref{e-gue170525I} is a complex Fourier integral operator \[\int_0^\infty e^{it\varphi(x, y)}\frac{1}{n!}t^ndt\] with complex phase $\varphi$ and symbol $\frac{1}{n!}t^n$ (see the discussion in the beginning of Section~\ref{s-gue170528}). \end{remark}
In Section \ref{sec:relbergszeg} we show how \(S^{(0)}\) is related to a weighted Bergman kernel on \(\mathbb C^n\) (see Theorem \ref{thm:szegobergman}).
\section{Preliminaries}\label{s:prelim}
We shall use the following notations: $\mathbb N=\set{1,2,\ldots}$, $\mathbb N_0=\mathbb N\cup\set{0}$, $\mathbb R$ is the set of real numbers, $\overline\mathbb R_+:=\set{x\in\mathbb R;\, x\geq0}$. Let $m\in\mathbb N$. For a multi-index $\alpha=(\alpha_1,\ldots,\alpha_m)\in\mathbb N_0^m$, we denote by $\abs{\alpha}=\alpha_1+\ldots+\alpha_m$ its norm and by $l(\alpha)=m$ its length. $\alpha$ is strictly increasing if $\alpha_1<\alpha_2<\ldots<\alpha_m$.
Let $z=(z_1,\ldots,z_n)$, $z_j=x_{2j-1}+ix_{2j}$, $j=1,\ldots,n$, be coordinates of $\mathbb C^n$. Let $\alpha=(\alpha_1,\ldots,\alpha_n)\in\mathbb N_0^n$ be a multi-index. We write \[ \begin{split} &z^\alpha=z_1^{\alpha_1}\ldots z^{\alpha_n}_n\,,\quad\overline z^\alpha=\overline z_1^{\alpha_1}\ldots\overline z^{\alpha_n}_n\,,\\ &\frac{\partial}{\partial z_j}= \frac{1}{2}\Big(\frac{\partial}{\partial x_{2j-1}}-i\frac{\partial}{\partial x_{2j}}\Big)\,,\quad \frac{\partial}{\partial\overline z_j}=\frac{1}{2}\Big(\frac{\partial}{\partial x_{2j-1}}+i\frac{\partial}{\partial x_{2j}}\Big),\ \ j=1,\ldots,n. \end{split} \] For $j, s\in\mathbb Z$, set $\delta_{j,s}=1$ if $j=s$, $\delta_{j,s}=0$ if $j\neq s$.
Let $M$ be a $C^\infty$ paracompact manifold. We let $TM$ and $T^*M$ denote the tangent bundle of $M$ and the cotangent bundle of $M$, respectively. The complexified tangent bundle of $M$ and the complexified cotangent bundle of $M$ will be denoted by $\mathbb C TM$ and $\mathbb C T^*M$, respectively. Write $\langle\,\cdot\,,\cdot\,\rangle$ to denote the pointwise standard pairing between $TM$ and $T^*M$. We extend $\langle\,\cdot\,,\cdot\,\rangle$ bilinearly to $\mathbb C TM\times\mathbb C T^*M$. Let $G$ be a $C^\infty$ vector bundle over $M$. The fiber of $G$ at $x\in M$ will be denoted by $G_x$. Let $E$ be a vector bundle over a $C^\infty$ paracompact manifold $M_1$. We write $G\boxtimes E^*$ to denote the vector bundle over $M\times M_1$ with fiber over $(x, y)\in M\times M_1$ consisting of the linear maps from $E_y$ to $G_x$. Let $Y\subset M$ be an open set. From now on, the spaces of distribution sections of $G$ over $Y$ and smooth sections of $G$ over $Y$ will be denoted by $D'(Y, G)$ and $C^\infty(Y, G)$, respectively.
Let $G$ and $E$ be $C^\infty$ vector bundles over paracompact orientable $C^\infty$ manifolds $M$ and $M_1$, respectively, equipped with smooth densities of integration. If $A: C^\infty_0(M_1,E)\rightarrow D'(M,G)$ is continuous, we write $A(x, y)$ to denote the distribution kernel of $A$.
Let $H(x,y)\in D'(M\times M_1,G\boxtimes E^*)$. We write $H$ to denote the unique continuous operator $C^\infty_0(M_1,E)\rightarrow D'(M,G)$ with distribution kernel $H(x,y)$. In this work, we identify $H$ with $H(x,y)$.
\section{Proof of Theorem~\ref{thm:1.1}}
Consider the affine complex space $\mathbb C^n$. Let $z=(z_1,\ldots,z_n)=(x_1,\ldots,x_{2n})$ be complex coordinates of $\mathbb C^n$, $z_j=x_{2j-1}+ix_{2j}$, $j=1,\ldots,n$. Put $d\mu(z)=2^ndx_1\cdots dx_{2n}$. The following lemma follows from an elementary calculation; we omit the proof.
\begin{lemma}\label{lem:monomialintegrals} Fix $q\in\{0,1,2,\ldots,n\}$. Put \[\begin{split}
&\widetilde \lambda\abs{z}^2=\lambda_1|z_1|^2+\cdots+\lambda_q|z_q|^2-\lambda_{q+1}|z_{q+1}|^2-\cdots-\lambda_n|z_n|^2\ \ \mbox{if $1\leq q<n$},\\
&\widetilde \lambda\abs{z}^2=-\lambda_{1}|z_{1}|^2-\cdots-\lambda_n|z_n|^2\ \ \mbox{if $q=0$},\\
&\widetilde \lambda\abs{z}^2=\lambda_{1}|z_{1}|^2+\cdots+\lambda_n|z_n|^2\ \ \mbox{if $q=n$}. \end{split}\]
Consider the expression \(I(\alpha,\eta,\lambda)=\int_{\mathbb{C}^n}|z^\alpha|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)\) for \(\alpha\in\mathbb N^n_0\), \(\eta\in\mathbb{R}\).
If $\lambda_j=0$ for some \(j\in\{1,\ldots,n\}\), then \(I(\alpha,\eta,\lambda)=\infty\) for all \(\alpha\in\mathbb N^n_0\) and \(\eta\in\mathbb{R}\).
Assume that all $\lambda_j$ are non-zero and let $n_{-}$ be the number of negative $\lambda_js$ and $n_{+}$ be the number of positive $\lambda_{j}s$. If $q\not\in\{n_{-}, n_{+}\}$, then \(I(\alpha,\eta,\lambda)=\infty\) for all \(\alpha\in\mathbb N^n_0\) and \(\eta\in\mathbb{R}\).
\end{lemma}
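For the reader's convenience we record the elementary one-variable computation behind Lemma~\ref{lem:monomialintegrals}; it will not be used explicitly. For $c\in\mathbb R$, $k\in\mathbb N_0$ and $\zeta=x+iy$,
\[\int_{\mathbb C}\abs{\zeta}^{2k}e^{-2c\abs{\zeta}^2}\,2dxdy=4\pi\int_0^\infty r^{2k+1}e^{-2cr^2}dr=2\pi\int_0^\infty s^{k}e^{-2cs}ds=
\begin{cases}
\frac{2\pi\,k!}{(2c)^{k+1}}&\text{if $c>0$},\\
+\infty&\text{if $c\leq0$}.
\end{cases}\]
The integral \(I(\alpha,\eta,\lambda)\) is the product of $n$ such integrals, the $j$-th one with $k=\alpha_j$ and $c$ equal to $\eta$ times the coefficient of $\abs{z_j}^2$ in $\widetilde\lambda\abs{z}^2$; it is finite only if all these constants $c$ are positive, which is impossible under either hypothesis of the lemma.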
We pause and introduce some notations. Choose $\chi(\theta)\in C_0^\infty(\mathbb R)$
so that $\chi(\theta)=1$ when $|\theta|<1$ and
$\chi(\theta)=0$ when $|\theta|>2$
and set $\chi_j(\theta)=\chi(\frac{\theta}{j}),j\in\mathbb N$. For any $u(z, x_{2n+1})\in\Omega^{0, q}(H_{n+1})$ with $\|u\|<\infty$, let \begin{equation} \hat u_j(z, \eta)=\int_{\mathbb R} u(z, x_{2n+1})\chi_j(x_{2n+1})e^{-ix_{2n+1}\eta}dx_{2n+1}\in\Omega^{0, q}(H_{n+1}), j=1, 2,\ldots. \end{equation} From Parseval's formula, we have \begin{align*} &\int_{H_{n+1}}\!\abs{\hat u_j(z,\eta)-\hat u_t(z,\eta)}^2d\eta d\mu(z)\\ &=2\pi\int_{H_{n+1}}\!\abs{u(z,x_{2n+1})}^2\abs{\chi_j(x_{2n+1})-\chi_t(x_{2n+1})}^2dx_{2n+1} d\mu(z)\To0,\ \ \mbox{as $j,t\rightarrow\infty$}. \end{align*} Thus there is $\hat u(z, \eta)\in L^2_{(0, q)}(H_{n+1})$ such that $\hat u_j(z, \eta)\rightarrow \hat u(z, \eta)$ in $L^2_{(0, q)}(H_{n+1})$. We call $\hat u(z, \eta)$ the partial Fourier transform of $u(z, x_{2n+1})$ with respect to $x_{2n+1}$. Formally, \begin{equation} \hat u(z, \eta)=\int_{\mathbb R} e^{-i x_{2n+1} \eta}u(z, x_{2n+1})dx_{2n+1}. \end{equation} Moreover, we have \begin{equation}\label{neg1}
\int_{H_{n+1}}|\hat u(z, \eta)|^2d\mu(z)d\eta=2\pi \int_{H_{n+1}}|u(z, x_{2n+1})|^2d\mu(z)dx_{2n+1}. \end{equation}
From Fubini's theorem, $\int_{\mathbb C^n}|\hat u(z, \eta)|^2 d\mu(z)<\infty$, for a.e. $\eta\in\mathbb R$. More precisely, there is a measure zero set $A_0\subset\mathbb R$ such that $\int_{\mathbb C^n}|\hat u(z, \eta)|^2d\mu(z)<\infty, ~\forall \eta\not\in A_0.$
Similarly, let \begin{equation} \check u_j(z, \eta)=\frac{1}{2\pi}\int_{\mathbb R} u(z, x_{2n+1})\chi_j(x_{2n+1})e^{ix_{2n+1}\eta}dx_{2n+1}\in\Omega^{0, q}(H_{n+1}), j=1, 2,\ldots. \end{equation} Then $\check u_j(z, \eta)\rightarrow \check u(z, \eta)$ in $L^2_{(0, q)}(H_{n+1})$ for some $\check u(z,\eta)$. We call $\check u(z, \eta)$ the partial inverse Fourier transform of $u(z, x_{2n+1})$ with respect to $x_{2n+1}$.
Let $u, v\in L^2_{(0, q)}(H_{n+1})$. Assume that $\int |u(z, t)|^2dt<\infty$ and $\int |u(z, t)|dt<\infty$ for all $z\in\mathbb C^n.$ Then from Parseval's formula, it is not difficult to check that \begin{equation}\label{e-gue170527cr}
\int\int \langle \hat v(z, \eta)| u(z, \eta)\rangle d\mu(z)d\eta=\int\int \langle v(z, x_{2n+1})|\int e^{ix_{2n+1}\eta}u(z, \eta)d\eta\rangle d\mu(z)dx_{2n+1}. \end{equation}
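Let us briefly sketch why \eqref{e-gue170527cr} holds: by the definition of the partial Fourier transform and Fubini's theorem (which the above integrability assumptions on $u$ are used to justify), we have, at least formally,
\[\int\int \langle \hat v(z, \eta)| u(z, \eta)\rangle d\mu(z)d\eta=\int\int\int e^{-ix_{2n+1}\eta}\langle v(z, x_{2n+1})| u(z, \eta)\rangle dx_{2n+1}d\mu(z)d\eta=\int\int \langle v(z, x_{2n+1})|\int e^{ix_{2n+1}\eta}u(z, \eta)d\eta\rangle d\mu(z)dx_{2n+1}.\]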
\begin{lemma}\label{lem:equivkernel} Let $u=\sideset{}{'}\sum_{l(J)=q}u_Jd\overline{z}_J\in L^2_{(0,q)}(H_{n+1})$. We have
\begin{equation}\label{CR1} u=\sideset{}{'}\sum_{l(J)=q}u_Jd\overline{z}_J\in \mathcal{H}^q_b(H_{n+1})
\Leftrightarrow\begin{cases}
\left(\frac{\partial}{\partial z_j}-i\lambda_j\overline{z}_j\frac{\partial}{\partial x_{2n+1}}\right) u_J=0 &\text{if }j\in J,\\
\left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right) u_J=0 &\text{if }j\notin J.
\end{cases} \end{equation} in the sense of distributions. \end{lemma}
\begin{proof} Assume $u\in \mathcal H^q_b(H_{n+1})$. Then $\overline\partial_bu=0$ and $\overline\partial_b^\ast u=0.$
Choose $\chi(x)\in C_0^\infty(H_{n+1})$ such that $\chi\equiv1$ on $\{x\in H_{n+1}: |x|\leq 1\}$ and ${\rm supp}\chi\Subset\{x\in H_{n+1} : |x|<2\}.$ Set $\chi_j(x)=\chi(\frac{x}{j})$ and $v_j=u\chi_j(x)$. Then ${\rm supp}v_j\Subset \{x\in H_{n+1}: |x|<2j\}$. It is easy to see that $v_j\in{\rm Dom\,}\overline\partial_b\bigcap{\rm Dom\,}\overline\partial_b^\ast $, $v_j\rightarrow u$ in $L^2_{(0, q)}(H_{n+1})$ and $\overline\partial_b v_j=\overline\partial_b\chi_j\wedge u$. Since $$\max\limits_{x\in H_{n+1}}|\overline\partial_b\chi_j(x)|=\max\limits_{x\in H_{n+1}}|\sum_k (\frac{\partial}{\partial\overline z_k}+i\lambda_kz_k\frac{\partial}{\partial x_{2n+1}})\chi(\frac{x}{j})d\overline z_k|\leq c,$$ where $c$ is a constant which does not depend on $j$. Thus, by Dominated convergence theorem $\overline\partial_b v_j\rightarrow 0$ in $L^2_{(0, q+1)}(H_{n+1})$. Similarly, $\overline\partial_b^\ast v_j\rightarrow 0$ in $L^2_{(0, q-1)}(H_{n+1}).$ By Friedrichs Lemma (see Appendix D in~\cite{CS01}), for each $j=1,2,\ldots$, there is a $u_j\in \Omega^{0, q}_0(H_{n+1})$ such that \begin{equation}\label{04-24}
\|u_j-v_j\|\leq \frac{1}{j},~ \|\overline\partial_b u_j-\overline\partial_b v_j\|\leq \frac{1}{j}~\text{and}~\|\overline\partial_b^\ast u_j-\overline\partial_b^\ast v_j\|\leq \frac{1}{j}. \end{equation} From (\ref{04-24}) we have $u_j\rightarrow u$ in $L^2_{(0,q)}(H_{n+1})$, $\overline\partial_b u_j\rightarrow 0$ in $L^2_{(0,q+1)}(H_{n+1})$ and $\overline\partial_b^\ast u_j\rightarrow 0$ in $L^2_{(0,q-1)}(H_{n+1})$. We deduce that \begin{equation}\label{e-gue170527rcm}
(\,\Box^q_b u_j\,|\,u_j\,)=(\,\overline\partial_b u_j\,|\,\overline\partial_bu_j\,)+(\,\overline\partial_b^\ast u_j\,|\,\overline\partial_b^\ast u_j\,)\rightarrow0\ \ \mbox{as $j\rightarrow\infty$}. \end{equation} Write $u_j=\sideset{}{'}\sum_{l(J)=q}u_{jJ}d\overline z^J$. It is well-known that (see Chapter 10 in~\cite{CS01}) \begin{equation}\label{e-gue170527rm} \Box^q_b u_j=-\sideset{}{'}\sum_{l(J)=q}((\sum_{k\not\in J}Z_k\overline Z_k+\sum_{k\in J}\overline Z_k Z_k)u_{jJ})d\overline z_J, \end{equation} where $Z_k=\frac{\partial}{\partial z_k}-i\lambda_k\overline{z}_k\frac{\partial}{\partial x_{2n+1}}$, $k=1,\ldots,n$. From \eqref{e-gue170527rcm} and \eqref{e-gue170527rm}, we have \begin{equation}\label{04-24-2}
\sum_{k\not\in J}\|\overline Z_k u_{jJ}\|^2+\sum_{k\in J}\|Z_k u_{jJ}\|^2\rightarrow 0,\ \ \mbox{as $j\rightarrow\infty$},\ \ \forall J, l(J)=q. \end{equation} From (\ref{04-24-2}), the direction ``$\Rightarrow$'' in (\ref{CR1}) follows.
Let $u=\sideset{}{'}\sum_{l(J)=q}u_Jd\overline{z}_J\in L^2_{(0,q)}(H_{n+1})$ be arbitrary. Assume that \begin{equation}\label{e-gue170527ry} \left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right) u_J=0\ \ \text{, if }j\notin J \end{equation} and \begin{equation}\label{e-gue170527ryI} \left(\frac{\partial}{\partial z_j}-i\lambda_j\overline z_j\frac{\partial}{\partial x_{2n+1}}\right) u_J=0\ \ \text{if }j\in J. \end{equation} We have \begin{equation}\label{e-gue170527ryII} \overline\partial_bu=\sideset{}{'}\sum_{l(J)=q}\sum_{1\leq j\leq n, j\notin J}\left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right)u_Jd\overline z_j\wedge d\overline{z}_J. \end{equation} From \eqref{e-gue170527ryII} and \eqref{e-gue170527ry}, we get $\overline\partial_bu=0$. Moreover, from \eqref{e-gue170527ryI}, it is easy to see that \begin{equation}\label{e-gue170527ryIII}
(\,u\,|\,\overline\partial_bv\,)=0,\ \ \forall v\in\Omega^{0,q-1}_0(H_{n+1}). \end{equation} Let $v\in{\rm Dom\,}\overline\partial_b\bigcap L^2_{(0,q-1)}(H_{n+1})$. By using Friedrichs Lemma, we can find $v_j\in\Omega^{0,q-1}_0(H_{n+1})$, $j=1,2,\ldots$, such that $v_j\rightarrow v\in L^2_{(0,q-1)}(H_{n+1})$ and $\overline\partial_bv_j\rightarrow\overline\partial_bv$ in $L^2_{(0,q)}(H_{n+1})$. From this observation and \eqref{e-gue170527ryIII}, we see that
\[(\,u\,|\,\overline\partial_bv\,)=0\ \ \forall v\in{\rm Dom\,}\overline\partial_b\bigcap L^2_{(0,q-1)}(H_{n+1})\] and hence $\overline{\partial}^*_bu=0$. The direction ``$\Leftarrow$'' follows. We get the conclusion of Lemma \ref{lem:equivkernel}. \end{proof}
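For example, in the case $q=0$ Lemma~\ref{lem:equivkernel} says that $u\in\mathcal H^{0}_b(H_{n+1})$ if and only if $\overline Z_ju=\left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right)u=0$ for every $j=1,\ldots,n$ in the sense of distributions, that is, $\mathcal H^{0}_b(H_{n+1})$ consists of the CR functions in $L^2(H_{n+1})$.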
\begin{lemma}\label{fo1} Given $u=\sideset{}{'}\sum_{l(J)=q}u_Jd\overline{z}_J\in\mathcal{H}^q_b(H_{n+1})$ we have \begin{equation}\label{f1} \begin{cases}
\left(\frac{\partial}{\partial z_j}+\lambda_j\overline{z}_j\eta\right) \hat{u}_J(z,\eta)=0 &\text{if }j \in J,\\
\left(\frac{\partial}{\partial \overline{z}_j}-\lambda_jz_j\eta\right) \hat{u}_J(z,\eta)=0 &\text{if }j\notin J
\end{cases} \end{equation} for a.e. $\eta\in\mathbb R$ and all $J$, $l(J)=q$, in the sense of distributions. \end{lemma}
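Before giving the proof, let us point out that \eqref{f1} is simply \eqref{CR1} read on the Fourier transform side: formally, $\frac{\partial}{\partial x_{2n+1}}$ becomes multiplication by $i\eta$ under $u\mapsto\hat u$, so
\[\frac{\partial}{\partial z_j}-i\lambda_j\overline z_j\frac{\partial}{\partial x_{2n+1}}\longmapsto\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta,\qquad \frac{\partial}{\partial \overline z_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\longmapsto\frac{\partial}{\partial \overline z_j}-\lambda_jz_j\eta.\]
The point of the proof below is to justify this for $u\in L^2_{(0,q)}(H_{n+1})$, where $\hat u(\cdot,\eta)$ is only defined for almost every $\eta$.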
\begin{proof} Let $A_0\subset\mathbb R$ be as in the discussion after (\ref{neg1}). Let $\varphi\in C^\infty_0(\mathbb C^n)$. Fix $l(J)=q$ and fix $j=1,\ldots,n$ with $j\not\in J$. Put $h(\eta)=\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)}d\mu(z)$ if $\eta\not\in A_0$, $h(\eta)=0$ if $\eta\in A_0$. We can check that
\begin{equation}\label{4-24-3}
|h(\eta)|^2\leq\int_{\mathbb C^n}|\hat u_J(z, \eta)|^2d\mu(z)\int_{\mathbb C^n}|(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)|^2d\mu(z).
\end{equation}
For $R>0$, put $g_{R}(\eta)=h(\eta)\chi_{[-R, R]}(\eta)$, where $\chi_{[-R, R]}(\eta)=1$ if $-R\leq \eta\leq R$ and $\chi_{[-R, R]}(\eta)=0$ if $|\eta|>R.$ From (\ref{4-24-3}), we have
\begin{equation}
\int |g_R(\eta)|^2d\eta=\int_{-R}^R|h(\eta)|^2d\eta\leq C_R\|\hat u_J(z, \eta)\|^2<\infty,
\end{equation}
where $C_R>0$ is a constant.
Thus, $g_R(\eta)\in L^2(\mathbb R)\cap L^1(\mathbb R)$. From \eqref{e-gue170527cr}, we have
\begin{equation}
\begin{split}
&\int_{-R}^R|h(\eta)|^2d\eta=\int_{\mathbb R} h(\eta)\overline g_{R}(\eta)d\eta\\
&=\int\int\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)g_R(\eta)} d\eta d\mu(z)\\
&=\int\int u_J(z, x_{2n+1})\overline{\int_{\mathbb R}e^{ix_{2n+1}\eta}(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)g_R(\eta) d\eta}d\mu(z)dx_{2n+1}\\
&=\int_{H_{n+1}} u_J(z, x_{2n+1})\overline{\left(\frac{\partial}{\partial z_j}-i\lambda_j\overline z_j\frac{\partial}{\partial x_{2n+1}}\right)\left(\varphi(z)\int_{\mathbb R}e^{ix_{2n+1}\eta}g_R(\eta)d\eta\right)}d\mu_{H_{n+1}}.
\end{split}
\end{equation}
Put $S(z, x_{2n+1})=\varphi(z)\int_{\mathbb R}e^{ix_{2n+1}\eta}g_R(\eta)d\eta$. Let $u_j=\sideset{}{'}\sum_{l(J)=q}u_{jJ}d\overline z_J$, $j=1,2,\ldots$, be the sequence chosen in (\ref{04-24}). Then
\begin{equation}\label{e-gue170527yc}
\begin{split}
\int_{-R}^R|h(\eta)|^2d\eta &=\lim_{k\rightarrow\infty}\int_{H_{n+1}}u_{kJ}\overline Z_j \overline {S(z, x_{2n+1})}d\mu_{H_{n+1}}\\ &=-\lim_{k\rightarrow\infty}\int_{H_{n+1}}\overline Z_j u_{k J}\overline{S(z, x_{2n+1})}d\mu_{H_{n+1}}. \end{split}
\end{equation}
From (\ref{04-24-2}), \eqref{e-gue170527yc} and H\"older's inequality, we conclude that $\int_{-R}^R|h(\eta)|^2d\eta=0.$ Letting $R\rightarrow\infty$, we get $h(\eta)=0$ almost everywhere. Thus, we have proved that $\forall j\not\in J$ and for a given $\varphi(z)\in C_0^\infty(\mathbb C^n)$, $\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)}d\mu(z)=0$ for a.e. $\eta\in\mathbb R$.
Let us consider the Sobolev space $W^1(\mathbb C^n)$ of distributions in $\mathbb C^n$ whose derivatives of order $\leq 1$ are in $L^2.$ Since $W^1(\mathbb C^n)$ is separable and $C_0^\infty(\mathbb C^n)$ is dense in $W^1(\mathbb C^n)$, we can find $f_k\in C_0^\infty(\mathbb C^n)$, $k=1,2,\ldots$, such that $\{f_k\}^\infty_{k=1}$ is a dense subset of $W^1(\mathbb C^n)$. Moreover, we can find $\{f_k\}^\infty_{k=1}$ so that for all $g\in C_0^\infty(\mathbb C^n)$ with ${\rm supp}g\Subset B_r:=\{z\in\mathbb C^n: |z|<r\}, r>0$, we can find $f_{k_1}, f_{k_2}, \ldots, {\rm supp}f_{k_t}\Subset B_r, t=1, 2, \ldots, $ such that $f_{k_t}\rightarrow g$ in $W^1(\mathbb C^n)$, as $t\rightarrow\infty$.
Now, for each $k$, we can repeat the method above and find a measurable set $A_k\supseteqq A_0$, $|A_k|=0$ such that
$\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) f_k(z)}d\mu(z)=0, ~\forall \eta\not\in A_k.$
Put $A=\cup_k A_k.$ Then $|A|=0$ and for all $\eta\not\in A$ and all $k$
$$\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) f_k(z)}d\mu(z)=0.$$
Let $\varphi\in C_0^\infty(\mathbb C^n)$ with ${\rm supp}\varphi\Subset B_r.$ From the discussion above, we can find $f_{k_1}, f_{k_2}, \ldots, {\rm supp} f_{k_t}\Subset B_r$, $t=1, 2, \ldots, $ such that $f_{k_t}\rightarrow \varphi$ in $W^1(\mathbb C^n)$ , as $t\rightarrow \infty$. Then for $\eta\not\in A$,
\begin{equation}
\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)}d\mu(z)=\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) (\varphi(z)-f_{k_t}(z))}d\mu(z).
\end{equation}
Fix $\eta\notin A$. By H\"older's inequality
\begin{equation}
\begin{split}
&\left|\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) (\varphi(z)-f_{k_t}(z))}d\mu(z)\right|\\
&\leq C(\eta)\Bigl(\sum_{|\alpha|\leq1}\int_{\mathbb C^n}|\partial_{x}^\alpha(\varphi-f_{k_t})|^2d\mu(z)\Bigr)^{\frac{1}{2}}\rightarrow0,\ \ \mbox{as $t\rightarrow\infty$},
\end{split}
\end{equation}
where $C(\eta)>0$ is a constant.
Thus, for all $\eta\not\in A$,
\begin{equation}
\int_{\mathbb C^n}\hat u_J(z, \eta)\overline{(\frac{\partial}{\partial z_j}+\lambda_j\overline z_j\eta) \varphi(z)}d\mu(z)=0, ~\forall \varphi\in C_0^\infty(\mathbb C^n).
\end{equation}
We have proved the second case of (\ref{f1}) and the proof of the first case of (\ref{f1}) is the same. The lemma follows. \end{proof}
It should be mentioned that the partial Fourier transform technique used in the proof of Lemma~\ref{fo1} was inspired by~\cite{HM12}.
\begin{proof}[Proof of Theorem~\ref{thm:1.1}] Fix $q=0,1,\ldots,n$. Assume that \begin{equation}\label{e-gue170528ry} \begin{split} &\mbox{$\lambda_j=0$ for some $j\in\set{1,\ldots,n}$}\\ &\mbox{ or all $\lambda_j$ are non-zero but $q\notin\set{n_-,n_+}$},\ \end{split} \end{equation} where $n_{-}$ denotes the number of negative $\lambda_js$ and $n_{+}$ denotes the number of positive $\lambda_{j}s$.
Consider \(u=\sum_{l(J)=q}'u_Jd\overline{z}_J\in \mathcal{H}^q_b(H_{n+1})\). For every \(J\) with \(l(J)=q\), we have
\[((z,x_{2n+1})\mapsto u_J(z,x_{2n+1}))\in L^2(H_{n+1})\] and hence
\[((z,\eta)\mapsto \hat{u}_J(z,\eta))\in L^2(H_{n+1}),\]
where \(\hat{u}_J\) is the partial Fourier transform of \(u_J\) with respect to \(x_{2n+1}\). Since \(u\in \mathcal{H}^q_b(H_{n+1})\), by Lemma \ref{fo1}, for a.e. $\eta\in\mathbb R$, we have
\begin{align}\label{eq:fouriereq}\begin{cases}
\left(\frac{\partial}{\partial z_j}+\lambda_j\overline{z}_j\eta\right) \hat{u}_J=0 &\text{if }j \in J, \\
\left(\frac{\partial}{\partial \overline{z}_j}-\lambda_jz_j\eta\right) \hat{u}_J=0 &\text{if }j\notin J,
\end{cases}
\end{align}
for all \(l(J)=q\), $j=1,\ldots,n$. We will show \(u_J=0\) for all \(l(J)=q\).
So let \(J_0\) with \(l(J_0)=q\) be arbitrary. Without loss of generality we can assume \(J_0=\{1,\ldots,q\}\). In order to simplify the notation we write
\[\begin{split}
&\widetilde \lambda\abs{z}^2=\lambda_1|z_1|^2+\ldots+\lambda_q|z_q|^2-\lambda_{q+1}|z_{q+1}|^2-\ldots-\lambda_n|z_n|^2\ \ \mbox{if $q\geq1$},\\
&\widetilde \lambda\abs{z}^2=-\lambda_{1}|z_{1}|^2-\ldots-\lambda_n|z_n|^2\ \ \mbox{if $q=0$}. \end{split}\] Then (\ref{eq:fouriereq}) reduces to
\begin{equation}\label{fourier transform} \begin{cases}
\frac{\partial}{\partial z_j}\left(e^{\eta\widetilde\lambda|z|^2}\hat{u}_{J_0}(z,\eta)\right) =0 &\mbox{if $j\in J_0$, for a.e. $\eta\in\mathbb R$},\\
\frac{\partial}{\partial \overline{z}_j}\left(e^{\eta\widetilde\lambda|z|^2}\hat{u}_{J_0}(z,\eta)\right) =0 &\mbox{if $j\notin J_0$, for a.e. $\eta\in\mathbb R$}.
\end{cases}\end{equation}
For every $\eta\in\mathbb R$, let
\[\begin{split}
F_{J_0}(z_1,\ldots,z_{n},\eta):=e^{\eta\widetilde\lambda|z|^2}\hat{u}_{J_0}(\overline z_1,\ldots,\overline z_q,z_{q+1},\ldots,z_n,\eta)\ \ \mbox{if $q\geq1$},\\
F_{J_0}(z_1,\ldots,z_{n},\eta):=e^{\eta\widetilde\lambda|z|^2}\hat{u}_{J_0}(z_1,\ldots,z_q,z_{q+1},\ldots,z_n,\eta)\ \ \mbox{if $q=0$}.
\end{split}\]
From \eqref{fourier transform}, it is easy to see that \((z,\eta)\mapsto F_{J_0}(z,\eta)\) is holomorphic in \(z\) for almost every \(\eta\) and we have
\begin{equation}\label{e-gue170608c}
\int_{H_{n+1}}|F_{J_0}(z,\eta)|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)d\eta=\int_{H_{n+1}}|\hat{u}_{J_0}(z,\eta)|^2d\mu(z)d\eta<\infty
\end{equation}
which implies
\begin{equation}\label{e-gue170528}
\mbox{$\int_{\mathbb{C}^n}|F_{J_0}(z,\eta)|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)<\infty$ for almost every $\eta$}.
\end{equation}
Let $B$ be a negligible set of $\mathbb R$ such that for all $\eta\notin B$, $F_{J_0}(z,\eta)$ is holomorphic in $z$ and \eqref{e-gue170528} holds. For $\eta\notin B$, we have
\[F_{J_0}(z,\eta)=\sum_{\alpha\in\mathbb N^n_0}F_{J_0,\alpha}(\eta)z^\alpha.\]
Fix $\eta\notin B$ and fix an $\alpha_0\in\mathbb N^n_0$. It is easy to see that
\[|F_{J_0,\alpha_0}(\eta)|^2I(\alpha_0,\eta,\lambda)\leq \int_{\mathbb{C}^n}|F_{J_0}(z,\eta)|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)<\infty,\]
where \(I(\alpha_0,\eta,\lambda)=\int_{\mathbb{C}^n}|z^{\alpha_0}|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)\). Under assumption \eqref{e-gue170528ry} and using Lemma \ref{lem:monomialintegrals}, we find \(I(\alpha_0,\eta,\lambda)=\infty\) for all \(\eta\in\mathbb{R}\), which implies \(F_{J_0,\alpha_0}(\eta)=0\). Thus, for all \(\alpha\in\mathbb N^n_0\), \(F_{J_0,\alpha}(\eta)=0\) and hence \(\int_{\mathbb{C}^n}|F_{J_0}(z,\eta)|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)=0\) for almost every \(\eta\in\mathbb{R}\). By Fubini's theorem, \eqref{neg1} and \eqref{e-gue170608c}, we get
\[\|u_{J_0}\|^2=\frac{1}{2\pi}\|\hat{u}_{J_0}\|^2=\frac{1}{2\pi}\int_{H_{n+1}}|F_{J_0}(z,\eta)|^2e^{-2\eta\widetilde\lambda|z|^2}d\mu(z)d\eta=0.\] We have proved that \(u_J=0\), for all $l(J)=q$. Thus, $\mathcal{H}^q_b(H_{n+1})=\set{0}$ and Theorem~\ref{thm:1.1} follows.
\end{proof}
\section{Complex Fourier integral operators}\label{s-gue170528}
Let $\varphi\in\{\varphi_-,\varphi_+\}$ be one of the two functions defined in \eqref{phase1}, that is
\begin{equation*}
\begin{split}
&\varphi_-(x, y)\\
&=-x_{2n+1}+y_{2n+1}+i\sum_{j=1}^n|\lambda_j||z_j-w_j|^2+i\sum_{j=1}^n
\lambda_j(\overline z_jw_j-z_j\overline w_j)\in C^\infty(H_{n+1}\times H_{n+1}),\\
&\varphi_+(x, y)\\
&=x_{2n+1}-y_{2n+1}+i\sum_{j=1}^n|\lambda_j||z_j-w_j|^2+i\sum_{j=1}^n
\lambda_j(z_j\overline w_j-\overline z_jw_j)\in C^\infty(H_{n+1}\times H_{n+1}).
\end{split}
\end{equation*} Take $\chi\in C^\infty_0(\mathbb R)$ with $\chi(x)=1$ if $\abs{x}\leq 1$ and $\chi(x)=0$ if $\abs{x}>2$. Fix $q=0,1,\ldots,n$. For $u\in\Omega^{0,q}_0(H_{n+1})$, by using integration by parts with respect to $y_{2n+1}$ several times, we can show that \[\lim_{\varepsilon\To0^+}\int^\infty_0\int_{H_{n+1}}e^{it\varphi(x,y)}t^n\chi(\varepsilon t)u(y)d\mu_{H_{n+1}}(y)dt\] exists for every $x\in H_{n+1}$, \begin{equation}\label{e-gue170528ay} \lim_{\varepsilon\To0^+}\int^\infty_0\int_{H_{n+1}}e^{it\varphi(x,y)}t^n\chi(\varepsilon t)u(y)d\mu_{H_{n+1}}(y)dt\in\Omega^{0,q}(H_{n+1}) \end{equation} and the operator \begin{equation}\label{e-gue170528ayI} \begin{split} \int^\infty_0e^{it\varphi(x,y)}t^ndt: \Omega^{0,q}_0(H_{n+1})&\rightarrow\Omega^{0,q}(H_{n+1}),\\ u&\mapsto \lim_{\varepsilon\To0^+}\int^\infty_0\int_{H_{n+1}}e^{it\varphi(x,y)}t^n\chi(\varepsilon t)u(y)d\mu_{H_{n+1}}(y)dt \end{split} \end{equation} is continuous. The operator $\int^\infty_0e^{it\varphi(x,y)}t^ndt$ is a complex Fourier integral operator in the sense of \cite{GrSj94}. Again, by using integration by parts with respect to $y_{2n+1}$ several times, we can show that \begin{equation}\label{e-gue170528ayII} \begin{split} &\lim_{\varepsilon\To0^+}\int_{H_{n+1}}\int_0^\infty e^{-t\bigr(-i(\varphi+i\varepsilon)\bigr)}t^nu(y)dtd\mu_{H_{n+1}}(y)\\ &=\lim_{\varepsilon\To0^+}\int^\infty_0\int_{H_{n+1}}e^{it\varphi(x,y)}t^n\chi(\varepsilon t)u(y)d\mu_{H_{n+1}}(y)dt. \end{split} \end{equation} Note that \begin{equation}\label{e-gue170528ayIII} \int_0^\infty e^{-tx}t^mdt=m! x^{-m-1}\ \ \text{if}~ m\in\mathbb Z, m\geq0. \end{equation} From \eqref{e-gue170528ayIII} and \eqref{e-gue170528ayII}, we deduce that \[\begin{split} &\lim_{\varepsilon\To0^+}\int_{H_{n+1}}\frac{1}{\bigr(-i(\varphi(x,y)+i\varepsilon)\bigr)^{n+1}}u(y)d\mu_{H_{n+1}}(y)\\ &=\lim_{\varepsilon\To0^+}\frac{1}{n!}\int^\infty_0\int_{H_{n+1}}e^{it\varphi(x,y)}t^n\chi(\varepsilon t)u(y)d\mu_{H_{n+1}}(y)dt\end{split}\] and hence $I^{(q)}_\varphi=\frac{1}{n!}\int^\infty_0e^{it\varphi(x,y)}t^ndt$ on $\Omega^{0,q}_0(H_{n+1})$, where $I^{(q)}_\varphi$ is given by \eqref{e-gue170525I}. We need the following
\begin{theorem}\label{tema2} There is a constant $C>0$ such that for all $\varepsilon>0$ and $u\in\Omega^{0,q}_0(H_{n+1})$, we have \[\int_{H_{n+1}}\abs{\int_{H_{n+1}}\int^\infty_0\chi(\varepsilon t)e^{it\varphi(x,y)}t^nu(y)d\mu_{H_{n+1}}(y)dt}^2d\mu_{H_{n+1}}(x)\leq C\norm{u}^2.\] \end{theorem}
\begin{proof} We may assume that $\varphi=\varphi_-$. For the case $\varphi=\varphi_+$, the proof is the same. Let $u\in\Omega^{0,q}_0(H_{n+1})$ be a compactly supported \((0,q)\)-form and set \[\check u(y',t):=\frac{1}{2\pi}\int u(y',y_{2n+1})e^{ity_{2n+1}}dy_{2n+1},\]
where $y'=(y_1,\ldots,y_{2n})$. By Parseval's formula, we have \begin{equation}\label{e-I} \int\abs{\check u(y',t)}^2dt=\frac{1}{2\pi}\int\abs{u(y',y_{2n+1})}^2dy_{2n+1}. \end{equation} Let \[g_\varepsilon(z,t):=\int \chi(\varepsilon t)\check u(y',t)\chi_{[0,\infty)}(t)e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(y'), \] where $\chi_{[0,\infty)}(t)=1$ if $t\in[0,\infty)$, $\chi_{[0,\infty)}(t)=0$ if $t\notin[0,\infty)$, $\abs{\lambda}\abs{z-w}^2:=\sum^n_{j=1}\abs{\lambda_j}\abs{z_j-w_j}^2$, $\lambda(\overline zw-z\overline w):=\sum^n_{j=1}\lambda_j(\overline z_jw_j-z_j\overline w_j)$ and $d\mu(y')=2^ndy_1\cdots dy_{2n}$. Then we find \begin{equation}\label{e-II} \int_{H_{n+1}}\int^\infty_0\chi(\varepsilon t)e^{it\varphi(x,y)}t^nu(y)d\mu_{H_{n+1}}(y)dt=(2\pi)\int t^ng_\varepsilon(z,t)e^{-itx_{2n+1}}dt. \end{equation} By Parseval's formula again, we have \begin{equation}\label{e-III} \int\abs{\int t^ng_\varepsilon(z,t)e^{-itx_{2n+1}}dt}^2dx_{2n+1}=(2\pi)\int t^{2n}\abs{g_\varepsilon(z,t)}^2dt. \end{equation} From \eqref{e-I}, \eqref{e-II} and \eqref{e-III}, we have \begin{equation} \begin{split} &\int_{H_{n+1}}\abs{\int^\infty_0\chi(\varepsilon t)e^{it\varphi(x,y)}t^nu(y)d\mu_{H_{n+1}}(y)dt}^2d\mu(z)dx_{2n+1}\\ &=(2\pi)^2\int_{H_{n+1}}\abs{\int t^ng_\varepsilon(z,t)e^{-itx_{2n+1}}dt}^2dx_{2n+1}d\mu(z)\\ &=(2\pi)^3\int_{H_{n+1}} t^{2n}\abs{g_\varepsilon(z,t)}^2d\mu(z)dt\\ &=(2\pi)^3\int_{H_{n+1}} t^{2n}\chi^2(\varepsilon t)\abs{\int_{\mathbb{C}^n}\check u(y',t)\chi_{[0,\infty)}(t)e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(y')}^2d\mu(z)dt\\ &\leq C_1\int_{H_{n+1}} t^{2n}\chi^2(\varepsilon t)\Bigl(\int_{\mathbb{C}^n}\abs{\check u(y',t)\chi_{[0,\infty)}(t)}^2e^{-t\abs{\lambda}\abs{z-w}^2}d\mu(w)\int_{\mathbb{C}^n} e^{-t\abs{\lambda}\abs{z-w}^2}d\mu(w)\Bigr)d\mu(z)dt\\ &\leq C_2 \int_{H_{n+1}}\int_{\mathbb{C}^n} t^{n} \abs{\chi(\varepsilon t)\check u(y',t)\chi_{[0,\infty)}(t)}^2e^{-t\abs{\lambda}\abs{z-w}^2}d\mu(z)d\mu(w)dt\\ &\leq C_3 \int_{H_{n+1}}\abs{\check u(y',t)\chi_{[0,\infty)}(t)}^2d\mu(w)dt\\ &\leq C_3\int_{H_{n+1}}\abs{\check u(y',t)}^2d\mu(w)dt=\frac{C_3}{2\pi}\int_{H_{n+1}}\abs{u(y',y_{2n+1})}^2d\mu_{H_{n+1}}(y), \end{split} \end{equation} where $C_1>0, C_2>0, C_3>0$ are constants independent of $\varepsilon$ and $u$. The theorem follows. \end{proof}
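In the chain of estimates above we used twice, with the resulting factors absorbed into the constants $C_2$ and $C_3$, the elementary Gaussian integral: for $t>0$,
\[\int_{\mathbb{C}^n}e^{-t\abs{\lambda}\abs{z-w}^2}d\mu(w)=\prod_{j=1}^n\int_{\mathbb C}e^{-t\abs{\lambda_j}\abs{z_j-w_j}^2}\,2dudv=\prod_{j=1}^n\frac{2\pi}{t\abs{\lambda_j}}=\frac{(2\pi)^n}{t^{n}\abs{\lambda_1}\cdots\abs{\lambda_n}},\]
where $w_j=u+iv$ in the $j$-th factor, together with the same identity with the roles of $z$ and $w$ exchanged.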
From Theorem~\ref{tema2}, we deduce
\begin{corollary}\label{c-gue170528} There is a constant $C>0$ such that \[\norm{I^{(q)}_{\varphi}u}\leq C\norm{u},\ \ \forall u\in\Omega^{0,q}_0(H_{n+1}).\] Thus, we can extend $I^{(q)}_{\varphi}$ to $L^2_{(0,q)}(H_{n+1})$ in the standard way and we have that \[I^{(q)}_{\varphi}: L^2_{(0,q)}(H_{n+1})\rightarrow L^2_{(0,q)}(H_{n+1})\] is continuous. \end{corollary}
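We close this section by sketching the integration by parts argument used above and in the discussion of \eqref{e-gue170525}; the details are routine. Let $\varphi\in\{\varphi_-,\varphi_+\}$ and $u\in\Omega^{0,q}_0(H_{n+1})$. Since $\frac{\partial\varphi}{\partial y_{2n+1}}=\pm1$, we have $e^{it\varphi(x,y)}=(\pm it)^{-N}\partial^N_{y_{2n+1}}e^{it\varphi(x,y)}$ for all $t>0$ and $N\in\mathbb N$. Integrating by parts $N$ times in $y_{2n+1}$ and using $\abs{e^{it\varphi(x,y)}}=e^{-t\sum^n_{j=1}\abs{\lambda_j}\abs{z_j-w_j}^2}\leq1$ for $t\geq0$, we get
\[\abs{\int_{H_{n+1}}e^{it\varphi(x,y)}t^nu(y)d\mu_{H_{n+1}}(y)}\leq t^{n-N}\int_{H_{n+1}}\abs{\partial^N_{y_{2n+1}}u(y)}d\mu_{H_{n+1}}(y),\qquad t>0.\]
Taking $N\geq n+2$ for $t\geq1$ and using the trivial bound $t^n\int_{H_{n+1}}\abs{u}d\mu_{H_{n+1}}$ for $0\leq t\leq1$, the $t$-integrand in \eqref{e-gue170528ay} is dominated, uniformly in $\varepsilon$, by an integrable function of $t$, so the limits as $\varepsilon\To0^+$ exist by dominated convergence; differentiating under the integral sign in $x$ and repeating the argument yields the smoothness statement in \eqref{e-gue170528ay}.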
\section{Proof of Theorem~\ref{main theorem 2}}\label{s-gue170528e}
In this Section, we will prove Theorem~\ref{main theorem 2}. We assume that $\lambda_j>0$, for all $j=1,2,\ldots,n$. Put \begin{equation}\label{e-gue170528xr} \tilde S^{(0)}:=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!I^{(0)}_{\varphi_-}. \end{equation}
\begin{lemma}\label{l-gue170528xr} For every $u\in L^2(H_{n+1})$, we have $\tilde S^{(0)}u\in\mathcal H^{0}_b(H_{n+1})$. \end{lemma}
\begin{proof} Let $\chi\in C^\infty_0(\mathbb R)$ be as in the beginning of Section~\ref{s-gue170528}. Fix $\varepsilon>0$. We first show that for all $u\in C^\infty_0(H_{n+1})$, we have \begin{equation}\label{CR1a} I^{(0)}_{\varphi_-,\varepsilon}u:=\frac{1}{n!}\int_{H_{n+1}}\int_0^\infty e^{it\varphi_-(x, y)}t^nu(y)\chi(\varepsilon t)dtd\mu_{H_{n+1}}(y)\in \mathcal H^{0}_b(H_{n+1}). \end{equation} Let $u\in C^\infty_0(H_{n+1})$. For every $j=1, \cdots, n$, we have \begin{equation}\label{CCR2} \begin{split} &\overline Z_j\left(\int_{H_{n+1}}\int_0^\infty e^{it\varphi_-(x, y)}t^nu(y)\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\right)\\ =&\int_{H_{n+1}}\left(\frac{\partial}{\partial\overline z_j}+i\lambda_j z_j\frac{\partial}{\partial x_{2n+1}}\right)\varphi_-(x, y)\int_0^\infty it e^{it\varphi_-(x, y)}t^nu(y)\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\\ =&\int_{H_{n+1}}\left[i\abs{\lambda_j}(z_j-w_j)+i\lambda_jw_j-i\lambda_jz_j\right] \int_0^\infty it e^{it\varphi_-(x, y)}t^nu(y)\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\\ =&0. \end{split} \end{equation} Thus, we get the conclusion of (\ref{CR1a}). Since $\lim_{\varepsilon\To0^+}I^{(0)}_{\varphi_-,\varepsilon}u=I^{(0)}_{\varphi_-}u$ in $C^\infty(H_{n+1})$ topology, we deduce that $I^{(0)}_{\varphi_-}u\in\mathcal H^{0}_b(H_{n+1})$, for every $u\in C^\infty_0(H_{n+1})$.
Let $u\in L^2(H_{n+1})$ and take $u_j\in C^\infty_0(H_{n+1})$, $j=1,2,\ldots$, $u_j\rightarrow u$ in $L^2(H_{n+1})$ as $j\rightarrow\infty$. From Theorem~\ref{tema2}, we see that $I^{(0)}_{\varphi_-}u_j\rightarrow I^{(0)}_{\varphi_-}u$ in $L^2(H_{n+1})$ as $j\rightarrow\infty$ and hence $\overline\partial_b(I^{(0)}_{\varphi_-}u_j)\rightarrow\overline\partial_b(I^{(0)}_{\varphi_-}u)$ in $D'(H_{n+1})$ as $j\rightarrow\infty$. Thus, $I^{(0)}_{\varphi_-}u\in{\rm Ker\,}\overline\partial_b=\mathcal H^{0}_b(H_{n+1})$. The lemma follows. \end{proof}
We need
\begin{lemma}\label{l-gue170602} Let $g\in C^\infty_0(H_{n+1})$ and $u\in L^2(H_{n+1})$. Then, \[\begin{split}
&(\widetilde S^{(0)}u\,|\,g\,)\\ &=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}\int^\infty_0t^n\hat u(w,-t)\overline{\hat g(z,-t)}e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(z)d\mu(w)dt,\end{split}\] where $\abs{\lambda}\abs{z-w}^2=\sum^n_{j=1}\abs{\lambda_j}\abs{z_j-w_j}^2$, $\lambda(\overline zw-z\overline w)=\sum^n_{j=1}\lambda_j(\overline z_jw_j-z_j\overline w_j)$. \end{lemma}
\begin{proof} Let $u_j\in C^\infty_0(H_{n+1})$, $j=1,2,\ldots$, with $u_j\rightarrow u$ in $L^2(H_{n+1})$ as $j\rightarrow\infty$. We have \begin{equation}\label{e-gue170602}
\lim_{j\rightarrow\infty}(\widetilde S^{(0)}u_j\,|\,g\,)=(\widetilde S^{(0)}u\,|\,g\,). \end{equation} Let $\chi\in C^\infty_0(\mathbb R)$ be as in the beginning of Section~\ref{s-gue170528}. We have \begin{equation}\label{e-gue170602I} \begin{split}
(\widetilde S^{(0)}u_j\,|\,g\,)&=\lim_{\varepsilon\To0^+}c_0\int e^{it(-x_{2n+1}+y_{2n+1}+\hat\varphi(z,w))}t^n\chi(\varepsilon t)u_j(y)\overline g(x)d\mu_{H_{n+1}}(y)d\mu_{H_{n+1}}(x)dt\\ &=\lim_{\varepsilon\To0^+}c_0\int^\infty_0t^n\chi(\varepsilon t)\hat u_j(w,-t)\overline{\hat g(z,-t)}e^{it\hat\varphi(z,w)}d\mu(z)d\mu(w)dt\\ &=c_0\int^\infty_0t^n\hat u_j(w,-t)\overline{\hat g(z,-t)}e^{it\hat\varphi(z,w)}d\mu(z)d\mu(w)dt, \end{split} \end{equation} where $c_0=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}$ and $\hat\varphi(z,w)=i\abs{\lambda}\abs{z-w}^2+i\lambda(\overline zw-z\overline w)$. We have \begin{equation}\label{e-gue170602II} \begin{split} &\abs{\int^\infty_0t^n\Bigr(\hat u_j(w,-t)-\hat u(w,-t)\Bigr)\overline{\hat g(z,-t)}e^{it\hat\varphi(z,w)}d\mu(z)d\mu(w)dt}\\ &\leq \int^\infty_0t^n\abs{\hat u_j(w,-t)-\hat u(w,-t)}\abs{\overline{\hat g(z,-t)}e^{it\hat\varphi(z,w)}}d\mu(z)d\mu(w)dt\\ &\leq\int\Bigr(\int\abs{\hat u_j(w,-t)-\hat u(w,-t)}^2d\mu(w)dt\Bigr)^{\frac{1}{2}}\Bigr(\int \abs{t^n\overline{\hat g(z,-t)}e^{it\hat\varphi(z,w)}}^2d\mu(w)dt\Bigr)^{\frac{1}{2}}d\mu(z)\\
&\rightarrow 0\ \ \mbox{as $j\rightarrow\infty$}. \end{split} \end{equation}
From \eqref{e-gue170602II}, \eqref{e-gue170602I} and \eqref{e-gue170602}, the lemma follows. \end{proof}
We need
\begin{lemma}\label{l-gue170603}
Fix $t>0$. Let $g(z)\in C^\infty(\mathbb C^n)$ be any holomorphic function with $\int\abs{g(z)}^2e^{-2t\abs{\lambda}\abs{z}^2}d\mu(z)<+\infty$, where $\abs{\lambda}\abs{z}^2=\sum^n_{j=1}\abs{\lambda_j}\abs{z_j}^2$. Then,
\[e^{-t\abs{\lambda}\abs{z}^2}g(z)=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{\pi^{n}}\int_{\mathbb C^n}t^ne^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}e^{-t\abs{\lambda}\abs{w}^2}g(w)d\mu(w),\]
where $\lambda(\overline zw-z\overline w)=\sum^n_{j=1}\lambda_j(\overline z_jw_j-z_j\overline w_j)$.
\end{lemma}
\begin{proof}
We have
\begin{equation}\label{e-gue170603c}
\begin{split}
&\int_{\mathbb C^n}t^ne^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}e^{-t\abs{\lambda}\abs{w}^2}g(w)d\mu(w)\\
&=\int_{\mathbb C^n}t^ne^{-2t\abs{\lambda}\abs{z-w}^2}\Bigr(e^{-2t\abs{\lambda}\overline zw+t\abs{\lambda}\abs{z}^2}g(w)\Bigr)d\mu(w).
\end{split}
\end{equation}
From Cauchy integral formula, it is easy to see that
\begin{equation}\label{e-gue170603cI}
\begin{split}
\int t^ne^{-2t\abs{\lambda}\abs{z-w}^2}h(w)d\mu(w)&=h(z)\int t^ne^{-2t\abs{\lambda}\abs{z}^2}d\mu(z)\\
&=h(z)\frac{\pi^n}{\abs{\lambda_1}\cdots\abs{\lambda_n}},
\end{split}
\end{equation}
for every holomorphic function $h(z)\in C^\infty(\mathbb C^n)$ with $\int\abs{h(z)}^2e^{-2t\abs{\lambda}\abs{z}^2}d\mu(z)<+\infty$. From \eqref{e-gue170603cI} and \eqref{e-gue170603c}, the lemma follows.
\end{proof}
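Let us also indicate, for completeness, how \eqref{e-gue170603cI} can be seen. The constant is the Gaussian integral
\[\int_{\mathbb C^n} t^ne^{-2t\abs{\lambda}\abs{z}^2}d\mu(z)=t^n\prod_{j=1}^n\frac{2\pi}{2t\abs{\lambda_j}}=\frac{\pi^n}{\abs{\lambda_1}\cdots\abs{\lambda_n}},\]
and the identity itself is a mean value property: expanding $h(w)=\sum_{\beta\in\mathbb N^n_0}c_\beta(w-z)^\beta$ and noting that $\int_{\mathbb C^n}(w-z)^\beta e^{-2t\abs{\lambda}\abs{z-w}^2}d\mu(w)=0$ for every $\beta\neq0$ (the weight is invariant under $w_j-z_j\mapsto e^{i\theta_j}(w_j-z_j)$), only the term $\beta=0$ survives; the integrability assumption on $h$ allows one to justify the interchange of summation and integration.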
We need
\begin{lemma}\label{lemma CR}
Let $u\in \mathcal H^0_b(H_{n+1})$. Then $\hat u(z, -t)=0$ for a.e. $t\in (-\infty, 0)$. \end{lemma}
\begin{proof} Suppose $u\in\mathcal H^0_b(H_{n+1}).$ From \eqref{f1}, we see that \begin{equation}
\frac{\partial}{\partial\overline z_j}\left(\hat u(z, -t)e^{t\lambda|z|^2}\right)=0, \ \ \mbox{for a.e. $t\in\mathbb R$}. \end{equation}
Thus, $\hat u(z, -t)e^{t\lambda|z|^2}$ is a holomorphic function on $\mathbb C^n$, for a.e. $t\in\mathbb R$. From Parseval's formula, we can check that \begin{equation}
\int_{\mathbb R}\int_{\mathbb C^n}|\hat u(z, -t)e^{t\lambda|z|^2}|^2 e^{-2t\lambda|z|^2}d\mu(z)dt=(2\pi)\int_{H_{n+1}}|u(z, x_{2n+1})|^2d\mu(z)dx_{2n+1}<\infty. \end{equation} It follows that \begin{equation}
\int_{\mathbb C^n}|\hat u(z, -t)e^{t\lambda|z|^2}|^2e^{-2t\lambda|z|^2}d\mu(z)<\infty,\ \ \mbox{for a.e. $t\in\mathbb R$}. \end{equation}
Fix $t_0\in (-\infty,0)$ so that $\int_{\mathbb C^n}|\hat u(z, -t_0)e^{t_0\lambda|z|^2}|^2e^{-2t_0\lambda|z|^2}d\mu(z)<\infty$ and $\hat u(z, -t_0)e^{t_0\lambda|z|^2}$ is a holomorphic function on $\mathbb C^n$. We write $\hat u(z,-t_0)e^{t_0\lambda\abs{z}^2}=\sum_{\alpha\in\mathbb N^n_0}c_\alpha(t_0)z^\alpha$.
Fix an $\alpha_0\in\mathbb N^n_0$. It is easy to see that
\[|c_{\alpha_0}(t_0)|^2\int\abs{z^{\alpha_0}}^2e^{-2t_0\lambda\abs{z}^2}d\mu(z)\leq \int_{\mathbb{C}^n}|\hat u(z,-t_0)e^{t_0\lambda\abs{z}^2}|^2e^{-2t_0\lambda|z|^2}d\mu(z)<\infty.\]
Since $t_0<0$ and $\lambda_j>0$ for every $j$, we have $\int\abs{z^{\alpha_0}}^2e^{-2t_0\lambda\abs{z}^2}d\mu(z)=\infty$ and hence $c_{\alpha_0}(t_0)=0$ and thus
$\hat u(z,-t_0)=0$. The lemma follows. \end{proof}
Now, we can prove
\begin{theorem}\label{t-gue170603}
Let $u\in \mathcal H^0_b(H_{n+1})$. Then $\widetilde S^{(0)}u=u$. \end{theorem}
\begin{proof} Fix $g\in C^\infty_0(H_{n+1})$. We only need to prove that \begin{equation}\label{e-gue170603cr}
(\,\widetilde S^{(0)}u\,|\,g\,)=(\,u\,|\,g\,). \end{equation} From Lemma~\ref{l-gue170602}, we have \begin{equation}\label{e-gue170603crI} \begin{split}
&(\widetilde S^{(0)}u\,|\,g\,)\\ &=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}\int^\infty_0t^n\hat u(w,-t)\overline{\hat g(z,-t)}e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(z)d\mu(w)dt.\end{split}\end{equation} From Fubini's theorem, we have \begin{equation}\label{e-gue170603crII} \begin{split} &\int^\infty_0t^n\hat u(w,-t)\overline{\hat g(z,-t)}e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(z)d\mu(w)dt\\ &=\int^\infty_0\Bigr(\int t^n\hat u(w,-t)\overline{\hat g(z,-t)}e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(z)d\mu(w)\Bigr)dt\\ &=\int^\infty_0\Bigr(\int\Bigr(\int t^n\hat u(w,-t)\overline{\hat g(z,-t)}e^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}d\mu(w)\Bigr)d\mu(z)\Bigr)dt. \end{split} \end{equation}
From \eqref{f1} and Fubini's theorem, we see that there is a measure zero set $B\subset\mathbb R$ such that for all $t\notin B$, $\hat u(z, -t)e^{t\lambda|z|^2}$ is a holomorphic function on $\mathbb C^n$ and $\int_{\mathbb C^n}|\hat u(z, -t)e^{t\lambda|z|^2}|^2e^{-2t\lambda|z|^2}d\mu(z)<\infty$. From this observation and Lemma~\ref{l-gue170603}, we deduce that
\begin{equation}\label{e-gue170604}
\hat u(z,-t)=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{\pi^{n}}\int_{\mathbb C^n}t^ne^{-t\abs{\lambda}\abs{z-w}^2-t\lambda(\overline zw-z\overline w)}\hat u(w,-t)d\mu(w),
\end{equation}
for every $t\notin B$ with $t>0$ (only such $t$ occur in \eqref{e-gue170603crII}, where the $t$-integration is over $(0,\infty)$). From \eqref{e-gue170604}, \eqref{e-gue170603crII}, Lemma~\ref{lemma CR} and Parseval's formula, we get
\[(\widetilde S^{(0)}u\,|\,g\,)=\frac{1}{2\pi}\int\hat u(z,-t)\overline{\hat g(z,-t)}d\mu(z)dt=(\,u\,|\,g\,).\]
The theorem follows. \end{proof}
\begin{proof}[Proof of Theorem \ref{main theorem 2} ] Let $u\in L^2(H_{n+1})$. From Lemma~\ref{l-gue170528xr}, we see that $\widetilde S^{(0)}u\in\mathcal H^0_b(H_{n+1})$. To show that $\widetilde S^{(0)}=S^{(0)}$, we only need to show that $(I-\widetilde S^{(0)})u\perp\mathcal H^0_b(H_{n+1})$. We observe that $\widetilde S^{(0)}$ is self-adjoint, that is, \begin{equation}\label{e-gue170604w}
(\,\widetilde S^{(0)}g\,|\,h\,)=(\,g\,|\,\widetilde S^{(0)}h\,),\ \ \forall g, h\in L^2(H_{n+1}). \end{equation} Let $f\in\mathcal H^0_b(H_{n+1})$. From Theorem~\ref{t-gue170603} and \eqref{e-gue170604w}, we have
\[(\,(I-\widetilde S^{(0)})u\,|\,f\,)=(\,u\,|\,f\,)-(\,\widetilde S^{(0)}u\,|\,f\,)=(\,u\,|\,f\,)-(\,u\,|\,\widetilde S^{(0)}f\,)=(\,u\,|\,f\,)-(\,u\,|\,f\,)=0.\] The theorem follows. \end{proof}
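Let us also justify the self-adjointness \eqref{e-gue170604w} used above; the same argument gives \eqref{e-gue170604wf} in Section~\ref{s-gue170604}, since $\tau_-$ and $\tau_+$ are the orthogonal projections onto the $d\overline z^{J_{n_-}}$ and $d\overline z^{J_{n_+}}$ components. A direct computation shows that $\varphi_-(y,x)=-\overline{\varphi_-(x,y)}$ (and similarly for $\varphi_+$), so that
\[\overline{e^{it\varphi_-(y,x)}}=e^{-it\overline{\varphi_-(y,x)}}=e^{it\varphi_-(x,y)},\qquad t\geq0.\]
Hence the regularized kernels $\int^\infty_0e^{it\varphi_-(x,y)}t^n\chi(\varepsilon t)dt$ are Hermitian, and for $g,h\in C^\infty_0(H_{n+1})$ Fubini's theorem and the limit $\varepsilon\To0^+$ give $(\,\widetilde S^{(0)}g\,|\,h\,)=(\,g\,|\,\widetilde S^{(0)}h\,)$; the identity \eqref{e-gue170604w} for general $g,h\in L^2(H_{n+1})$ then follows from Corollary~\ref{c-gue170528} and density.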
\section{Proof of Theorem \ref{main theorem 3} }\label{s-gue170604}
In this Section, we will prove Theorem~\ref{main theorem 3}. We only prove the case when $q=n_{-}=n_+$. For the cases $q=n_-$, $n_-\neq n_+$ and $q=n_{+}$, $n_-\neq n_{+}$, the arguments are similar and simpler and therefore we omit the details.
Suppose that $\lambda_1<0,\ldots,\lambda_{n_-}<0$, $\lambda_{n_-+1}>0,\ldots,\lambda_n>0$. Let $q\in\set{n_-,n_+}$, where $n_+=n-n_-$. Let $J_{n_-}=(1,\ldots,q)$, $J_{n_+}=(q+1,\ldots,n)$. We first need
\begin{lemma}\label{lemma vanish} Let $u=\sideset{}{'}\sum_{l(J)=q}u_J d\overline z_J\in \mathcal H^{q}_b(H_{n+1})$. If $J\notin\set{J_{n_-}, J_{n_+}}$, then $u_J=0.$ \end{lemma} \begin{proof} Fix $J=(j_1, j_2, \cdots, j_q)\notin\set{J_{n_-}, J_{n_+}}$ with $j_1<j_2<\cdots<j_q.$ Set \begin{equation}
\hat\lambda|z|^2=\sum_{k\in J}\lambda_k|z_k|^2-\sum_{k\not\in J}\lambda_k|z_k|^2. \end{equation} Let
$F_J(z, \eta)=\hat u_J(\xi, \eta) e^{\eta\hat\lambda|z|^2}$, where $\xi_i=\overline z_i$ if $i\in J$ and $\xi_i=z_i$ if $i\not\in J.$ Then (\ref{f1}) implies that $F_J(z, \eta)$ is holomorphic, for a.e. $\eta\in\mathbb R$. Moreover, \begin{equation}
\int _{\mathbb C^n}|F_J(z, \eta)|^2e^{-2\eta\hat\lambda|z|^2}d\mu(z)<\infty,\ \ \mbox{for a.e. $\eta\in\mathbb R$}. \end{equation} From $J=(j_1, j_2, \cdots, j_q)\notin\set{J_{n_-}, J_{n_+}}$, by using Lemma \ref{lem:monomialintegrals}, we see that \begin{equation}\label{e-gue170604lc}
\int e^{-2\eta\hat\lambda|z|^2}d\mu(z)=\infty,\ \ \forall\eta\in\mathbb R. \end{equation} From \eqref{e-gue170604lc} and repeating the argument in the proof of Lemma \ref{lemma CR}, we deduce that $F_J(z, \eta)=0$, for a.e. $\eta\in\mathbb R, z\in\mathbb C^n$. From Parseval's formula, we deduce that $u_J=0$. \end{proof}
Put \begin{equation}\label{e-gue170604lcI} \tilde S^{(q)}:=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!\Bigr(I^{(q)}_{\varphi_-}\circ\tau_-+I^{(q)}_{\varphi_+}\circ\tau_+\Bigr). \end{equation}
\begin{lemma}\label{l-gue170604lc} For every $u\in L^2_{(0,q)}(H_{n+1})$, we have $\tilde S^{(q)}u\in\mathcal H^{q}_b(H_{n+1})$. \end{lemma}
\begin{proof} Let $\chi\in C^\infty_0(\mathbb R)$ be as in the beginning of Section~\ref{s-gue170528}. Fix $\varepsilon>0$. We first show that for all $u\in\Omega^{0,q}_0(H_{n+1})$, we have \begin{equation}\label{CR1q} \begin{split} I^{(q)}_{\varphi_-,\varepsilon}\circ\tau_-u:&=\frac{1}{n!}\int_{H_{n+1}}\int_0^\infty e^{it\varphi_-(x, y)}t^n(\tau_-u)(y)\chi(\varepsilon t)dtd\mu_{H_{n+1}}(y)\\ &=\frac{1}{n!}\int_{H_{n+1}}\int_0^\infty e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)d\overline z^{J_{n_-}}(y)\chi(\varepsilon t)dtd\mu_{H_{n+1}}(y)\in \mathcal H^{q}_b(H_{n+1}). \end{split} \end{equation} Let $u=\sideset{}{'}\sum_{l(J)=q}u_Jd\overline z^J\in\Omega^{0,q}_0(H_{n+1})$. For every $j=1, \cdots, n$, $j\notin J_{n_-}$, we have \begin{equation}\label{CCR2q} \begin{split} &\overline Z_j\left(\int_{H_{n+1}}\int_0^\infty e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)d\overline z^{J_{n_-}}\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\right)\\ =&\int_{H_{n+1}}\left(\frac{\partial}{\partial\overline z_j}+i\lambda_j z_j\frac{\partial}{\partial x_{2n+1}}\right)\varphi_-(x, y)\int_0^\infty it e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\\ =&\int_{H_{n+1}}\left[i\abs{\lambda_j}(z_j-w_j)+i\lambda_jw_j-i\lambda_jz_j\right] \int_0^\infty it e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)d\overline z^{J_{n_-}}\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\\ =&0. \end{split} \end{equation} Similarly, for every $j=1, \cdots, n$, $j\in J_{n_-}$, we have \begin{equation}\label{CCR2qa} \begin{split} &Z_j\left(\int_{H_{n+1}}\int_0^\infty e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)d\overline z^{J_{n_-}}\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\right)\\ =&\int_{H_{n+1}}\left(\frac{\partial}{\partial z_j}-i\lambda_j\overline z_j\frac{\partial}{\partial x_{2n+1}}\right)\varphi_-(x, y)\int_0^\infty it e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\\ =&\int_{H_{n+1}}\left[i\abs{\lambda_j}(\overline z_j-\overline w_j)-i\lambda_j\overline w_j+i\lambda_j\overline z_j\right] \int_0^\infty it e^{it\varphi_-(x, y)}t^nu_{J_{n_-}}(y)d\overline z^{J_{n_-}}\chi(\varepsilon t)dt d\mu_{H_{n+1}}(y)\\ =&0. \end{split} \end{equation} From \eqref{CCR2q}, \eqref{CCR2qa} and \eqref{CR1}, we get the conclusion of (\ref{CR1q}). Let $u\in\Omega^{0,q}_0(H_{n+1})$. Since $\lim_{\varepsilon\To0^+}I^{(q)}_{\varphi_-,\varepsilon}\circ\tau_-u=I^{(q)}_{\varphi_-}\circ\tau_-u$ in $\Omega^{0,q}(H_{n+1})$ topology, we deduce that $\overline\partial_bI^{(q)}_{\varphi_-}\circ\tau_-u=0$ and
$(\,I^{(q)}_{\varphi_-}\circ\tau_-u\,|\,\overline\partial_bv\,)=0$, for every $v\in\Omega^{0,q-1}_0(H_{n+1})$. By Friedrichs' lemma, we conclude that $(\,I^{(q)}_{\varphi_-}\circ\tau_-u\,|\,\overline\partial_bv\,)=0$, for every $v\in{\rm Dom\,}\overline\partial_b\bigcap L^2_{(0,q-1)}(H_{n+1})$ and hence $I^{(q)}_{\varphi_-}\circ\tau_-u\in\mathcal H^{q}_b(H_{n+1})$. Similarly, we can repeat the argument above with minor changes and deduce that $I^{(q)}_{\varphi_+}\circ\tau_+u\in\mathcal H^{q}_b(H_{n+1})$, for every $u\in\Omega^{0,q}_0(H_{n+1})$.
Let $u\in L^2_{(0,q)}(H_{n+1})$ and take $u_j\in\Omega^{0,q}_0(H_{n+1})$, $j=1,2,\ldots$, $u_j\rightarrow u$ in $L^2_{(0,q)}(H_{n+1})$ as $j\rightarrow\infty$. From Theorem~\ref{tema2}, we see that $I^{(q)}_{\varphi_-}\circ\tau_-u_j\rightarrow I^{(q)}_{\varphi_-}\circ\tau_-u$ in $L^2_{(0,q)}(H_{n+1})$ as $j\rightarrow\infty$, and $I^{(q)}_{\varphi_+}\circ\tau_+u_j\rightarrow I^{(q)}_{\varphi_+}\circ\tau_+u$ in $L^2_{(0,q)}(H_{n+1})$ as $j\rightarrow\infty$. Again, by using Friedrichs' lemma, we conclude that $\overline\partial_b\Bigl(I^{(q)}_{\varphi_-}\circ\tau_-+I^{(q)}_{\varphi_+}\circ\tau_+\Bigr)u=0$, $\overline{\partial}^*_b\Bigl(I^{(q)}_{\varphi_-}\circ\tau_-+I^{(q)}_{\varphi_+}\circ\tau_+\Bigr)u=0$ and hence $\Bigl(I^{(q)}_{\varphi_-}\circ\tau_-+I^{(q)}_{\varphi_+}\circ\tau_+\Bigr)u\in\mathcal H^{q}_b(H_{n+1})$. The lemma follows. \end{proof}
Let \begin{equation}\label{e-gue170605m} \hat{T}^{1, 0}H_{n+1}:={\rm span}_{\mathbb C}\{\frac{\partial}{\partial z_j}-i\abs{\lambda_j}\overline z_j\frac{\partial}{\partial x_{2n+1}}, j=1, \cdots, n\}. \end{equation} Then, $\hat{T}^{1, 0}H_{n+1}$ is a CR structure on $H_{n+1}$. Let $\hat\overline\partial_b$ be the tangential Cauchy-Riemann operator with respect to $\hat{T}^{1, 0}H_{n+1}$ and let $\hat S^{(0)}:L^2(H_{n+1})\rightarrow{\rm Ker\,}\hat\overline\partial_b$ be the associated Szeg\H{o} projection. Put \[\begin{split} &\hat\varphi(x, y)\\
&=-x_{2n+1}+y_{2n+1}+i\sum_{j=1}^n|\lambda_j||z_j-w_j|^2+i\sum_{j=1}^n \abs{\lambda_j}(\overline z_jw_j-z_j\overline w_j)\in C^\infty(H_{n+1}\times H_{n+1}).\end{split}\] From Theorem~\ref{main theorem 2}, we see that \begin{equation}\label{e-gue170604qa} \hat S^{(0)}=\frac{\abs{\lambda_1}\cdots\abs{\lambda_n}}{2\pi^{n+1}}n!I^{(0)}_{\hat\varphi}\ \ \mbox{on $L^2(H_{n+1})$}, \end{equation} where $I^{(0)}_{\hat\varphi}:L^2(H_{n+1})\rightarrow L^2(H_{n+1})$ is defined as in \eqref{e-gue170525I}, \eqref{e-gue170525II}.
Let $u(x)=u_{J_{n_-}}(z,x_{2n+1})d\overline z^{J_{n_-}}+u_{J_{n_+}}(z,x_{2n+1})d\overline z^{J_{n_+}}\in L^2_{(0,q)}(H_{n+1})$. Put \begin{equation}\label{e-gue170605} \begin{split} &v_{J_{n_-}}(z,x_{2n+1}):=u_{J_{n_-}}(\overline z_1,\ldots,\overline z_{n_-},z_{n_-+1},\ldots,z_n,x_{2n+1})\in L^2(H_{n+1}),\\ &v_{J_{n_+}}(z,x_{2n+1}):=u_{J_{n_+}}(z_1,\ldots,z_{n_-},\overline z_{n_-+1},\ldots,\overline z_n,-x_{2n+1})\in L^2(H_{n+1}). \end{split} \end{equation} It is straightforward to see that \begin{equation}\label{e-gue170605I} \begin{split} &(I^{(q)}_{\varphi_-}\circ\tau_- u)(z,x_{2n+1})=(I^{(0)}_{\hat\varphi}v_{J_{n_-}})(\overline z_1,\ldots,\overline z_{n_-},z_{n_-+1},\ldots,z_n,x_{2n+1})d\overline z^{J_{n_-}},\\ &(I^{(q)}_{\varphi_+}\circ\tau_+ u)(z,x_{2n+1})=(I^{(0)}_{\hat\varphi}v_{J_{n_+}})(z_1,\ldots,z_{n_-},\overline z_{n_-+1},\ldots,\overline z_n,-x_{2n+1})d\overline z^{J_{n_+}}. \end{split} \end{equation}
Now, we can prove
\begin{theorem}\label{t-gue170604lc}
Let $u\in \mathcal H^q_b(H_{n+1})$. Then $\widetilde S^{(q)}u=u$. \end{theorem}
\begin{proof} Let $u=\sideset{}{'}\sum_{l(J)=q}u_Jd\overline z^J\in \mathcal H^q_b(H_{n+1})$. From Lemma~\ref{lemma vanish}, we see that $u(x)=u_{J_{n_-}}(z,x_{2n+1})d\overline z^{J_{n_-}}+u_{J_{n_+}}(z,x_{2n+1})d\overline z^{J_{n_+}}$. Let \[v_{J_{n_-}}(z,x_{2n+1})\in L^2(H_{n+1}),\ \ v_{J_{n_+}}(z,x_{2n+1})\in L^2(H_{n+1})\] be as in \eqref{e-gue170605}. From \eqref{CR1}, we see that $\hat\overline\partial_bv_{J_{n_-}}=0$ and $\hat\overline\partial_bv_{J_{n_+}}=0$, where $\hat\overline\partial_b$ is the tangential Cauchy-Riemann operator with respect to the CR structure $\hat{T}^{1, 0}H_{n+1}$ in \eqref{e-gue170605m}. Hence, we find \begin{equation}\label{e-gue170605mI} \hat S^{(0)}v_{J_{n_-}}=v_{J_{n_-}},\ \ \hat S^{(0)}v_{J_{n_+}}=v_{J_{n_+}}, \end{equation} where $\hat S^{(0)}:L^2(H_{n+1})\rightarrow{\rm Ker\,}\hat\overline\partial_b$ is the Szeg\H{o} projection. From \eqref{e-gue170605mI}, \eqref{e-gue170604qa}, \eqref{e-gue170605} and \eqref{e-gue170605I}, we get $\widetilde S^{(q)}u=u$. The theorem follows. \end{proof}
\begin{proof}[Proof of Theorem \ref{main theorem 3}] Let $u\in L^2_{(0,q)}(H_{n+1})$. From Lemma~\ref{l-gue170604lc}, we see that $\widetilde S^{(q)}u\in\mathcal H^q_b(H_{n+1})$. To show that $\widetilde S^{(q)}=S^{(q)}$, we only need to show that $(I-\widetilde S^{(q)})u\perp\mathcal H^q_b(H_{n+1})$. We observe that $\widetilde S^{(q)}$ is self-adjoint, that is, \begin{equation}\label{e-gue170604wf}
(\,\widetilde S^{(q)}g\,|\,h\,)=(\,g\,|\,\widetilde S^{(q)}h\,),\ \ \forall g, h\in L^2_{(0,q)}(H_{n+1}). \end{equation} Let $f\in\mathcal H^q_b(H_{n+1})$. From Theorem~\ref{t-gue170604lc} and \eqref{e-gue170604wf}, we have
\[(\,(I-\widetilde S^{(q)})u\,|\,f\,)=(\,u\,|\,f\,)-(\,\widetilde S^{(q)}u\,|\,f\,)=(\,u\,|\,f\,)-(\,u\,|\,\widetilde S^{(q)}f\,)=(\,u\,|\,f\,)-(\,u\,|\,f\,)=0.\] The theorem follows. \end{proof} \section{Relations to weighted Bergman kernels on \(\mathbb{C}^n\)}\label{sec:relbergszeg} In this section we show how the Szeg\H{o} kernel on the Heisenberg group is related to a weighted Bergman kernel on \(\mathbb{C}^n\) (see Theorem \ref{thm:szegobergman}). The connection mainly depends on Lemma \ref{fo1}. We restrict ourselves to the case \(\lambda_1,\ldots,\lambda_n>0\) which reduces the problem to the study of the Bergman kernel for holomorphic functions. However, a generalization of the relation between Szeg\H{o}- and Bergman kernels to the case \(\lambda_1\leq\ldots\leq\lambda_q<0<\lambda_{q+1}\leq\ldots\leq\lambda_n\) is possible.
Let \(\psi\colon\mathbb{C}^n\rightarrow \mathbb{R}\) be a smooth function. We denote by \(L^2(\mathbb{C}^n,\psi)\) the weighted \(L^2\) space with norm
\[\|f\|_\psi^2=\int_{\mathbb{C}^n}|f(z)|^2e^{-2\psi(z)}d\mu(z)\] and let \(H^0_{\psi}(\mathbb{C}^n)=\mathcal{O}(\mathbb{C}^n)\cap L^2(\mathbb{C}^n,\psi)\) be the space of holomorphic functions with finite weighted \(L^2\)-norm. The Bergman kernel is a smooth function defined by \[P_\psi(z,w)=e^{-(\psi(z)+\psi(w))}\sum_j s_j(z)\overline{s_j(w)}\] where \(\{s_j\}\) is an orthonormal basis of \(H^0_{\psi}(\mathbb{C}^n)\). For example, we have the reproducing property \begin{align}\label{eq:reproducingBK} f(z)e^{-\psi(z)}=\int_{\mathbb{C}^n}P_\psi(z,w)f(w)e^{-\psi(w)}d\mu(w) \end{align} for any \(f\in H^0_\psi(\mathbb{C}^n)\).
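Indeed, writing $\langle\,\cdot\,,\cdot\,\rangle_\psi$ for the inner product of $L^2(\mathbb{C}^n,\psi)$, one has, at least formally,
\[\int_{\mathbb{C}^n}P_\psi(z,w)f(w)e^{-\psi(w)}d\mu(w)=e^{-\psi(z)}\sum_j s_j(z)\int_{\mathbb{C}^n}\overline{s_j(w)}f(w)e^{-2\psi(w)}d\mu(w)=e^{-\psi(z)}\sum_j\langle f,s_j\rangle_\psi\,s_j(z)=e^{-\psi(z)}f(z),\]
the last equality being the expansion of $f\in H^0_\psi(\mathbb{C}^n)$ with respect to the orthonormal basis $\{s_j\}$.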
Set \(\psi(w)=\sum_{j=1}^{n}\lambda_j|w_j|^2\) with \(\lambda_1,\ldots,\lambda_n>0\). Then we have \begin{align}\label{Eq:BKforT}
P_{t\psi}(z,w)=1_{(0,\infty)}(t)\frac{t^n}{\pi^n}\lambda_1\cdot\ldots\cdot\lambda_ne^{-t\sum_{j=1}^n \lambda_j|w_j-z_j|^2-t\sum_{j=1}^n \lambda_j(w_j\overline{z}_j-\overline{w}_jz_j)}. \end{align}
Now consider \(H_{n+1}=\mathbb{C}^n\times \mathbb{R}\). We define an operator \(\tilde{P}\colon L^2(H_{n+1})\rightarrow L^2(H_{n+1})\) by \(\tilde{P}(u)(x)=\hat{v}(z,x_{2n+1})\) with \[v(z,t)=\int_{\mathbb{C}^n}P_{t\psi}(z,w)\check{u}(w,t)d\mu(w)\]
and \(v\mapsto \hat{v}\) (or \(u\mapsto \check{u}\)) denotes the Fourier transform in the last argument (or its inverse), using coordinates \(x=(z,x_{2n+1})\) and \(y=(w,y_{2n+1})\). In other words we have: \begin{definition}\label{def:segobergman}
Given \(u\in L^2(H_{n+1})\), then
\begin{align*}
\tilde{P}(u)(x)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-itx_{2n+1}}\left(\int_{\mathbb{C}^n}P_{t\psi}(z,w)\left(\int_{\mathbb{R}}e^{ity_{2n+1}}u(w,y_{2n+1})dy_{2n+1}\right)d\mu(w)\right)dt,
\end{align*} where the order of integration is important. The integrals are all well defined by Lemma \ref{2017-06-29a1} and the extension of the Fourier transform to \(L^2(H_{n+1})\). \end{definition} \begin{remark}
Note that given \(u\in C_0^\infty(H_{n+1})\) we have
\begin{align*}
\tilde{P}&(u)(x)=\\ &\lim_{\varepsilon\rightarrow0}\frac{1}{2\pi}\int_{\mathbb{R}}\int_{H_{n+1}} \chi(\varepsilon t) e^{-it(x_{2n+1}-y_{2n+1})}P_{t\psi}(z,w)u(y)d\mu_{H_{n+1}}(y)dt,
\end{align*}
where $\chi\in C^\infty_0(\mathbb R)$ with $\chi(x)=1$ if $\abs{x}\leq 1$ and $\chi(x)=0$ if $\abs{x}>2$.
Hence we formally write \begin{equation} \begin{split} \tilde{P}(x,y) &=\int_{\mathbb{R}} \frac{1}{2\pi}e^{-it(x_{2n+1}-y_{2n+1})}P_{t\psi}(z,w)dt\\ &=\frac{\lambda_1\cdots\lambda_n}{2\pi^{n+1}}\int_0^\infty e^{it\varphi(x, y)}t^ndt \end{split} \end{equation} for the distribution kernel \(\tilde{P}(x,y)\) of \(\tilde{P}\), using \(x=(z,x_{2n+1})\) and \(y=(w,y_{2n+1})\). Thus, the operator $\tilde P$ is a complex Fourier integral operator in the sense of \cite{GrSj94}. \end{remark} We need to show that \(\tilde{P}\) is well defined in the sense that \(\tilde{P}(L^2(H_{n+1}))\subset L^2(H_{n+1})\). \begin{lemma}\label{lem:projectionL2}
Given \(u\in L^2(H_{n+1})\), one has \(\tilde{P}(u)\in L^2(H_{n+1})\) with \(\|\tilde{P}(u)\|\leq\|u\|\). \end{lemma} \begin{proof}
We have that the Fourier transform in the last argument and its inverse preserve \(L^2(H_{n+1})\). More precisely, given \(u\in L^2(H_{n+1})\) we find \(\hat{u},\check{u}\in L^2(H_{n+1})\) with \((2\pi)^{-1}\|\hat{u}\|=\|u\|=2\pi\|\check{u}\|\). Then the proof can be deduced from the following Lemma. \end{proof} \begin{lemma}\label{2017-06-29a1}
Given \(u\in L^2(H_{n+1})\), one has \(v\in L^2(H_{n+1})\) with \(\|v\|\leq\|u\|\), where
\[v(z,t)=\int_{\mathbb{C}^n}P_{t\psi}(z,w)u(w,t)d\mu(w)\]
for almost every \(t\in\mathbb{R}\). \end{lemma} \begin{proof}
Since \(u\in L^2(H_{n+1})\) we find \(u(\cdot,t)\in L^2(\mathbb{C}^n)\) for all \(t\in A\) for some \(A\subset \mathbb{R}\), such that \(\mathbb{R}\setminus A\) has zero measure. We find \(u(\cdot,t)e^{t\psi(\cdot)}\in L^2(\mathbb{C}^n,t\psi)\) and hence it has a unique decomposition
\[u(z,t)e^{t\psi(z)}=f(z,t)+g(z,t)\] with
\(f(\cdot,t)\in H^0_{t\psi}(\mathbb{C}^n)\) and \(g(\cdot,t)\in H^0_{t\psi}(\mathbb{C}^n)^{\perp}\) for all \(t\in A\). We write \(u(z,t)=f(z,t)e^{-t\psi(z)}+g(z,t)e^{-t\psi(z)}\) and using the properties of the Bergman kernel we find
\begin{align}\label{eq:represantf}
f(z,t)e^{-t\psi(z)}=\int_{\mathbb{C}^n}P_{t\psi}(z,w)u(w,t)d\mu(w)
\end{align}
for all \(t\in A\).
Combining (\ref{Eq:BKforT}) and (\ref{eq:represantf}) we find that
\[(z,t)\mapsto\begin{cases}
f(z,t)e^{-t\psi(z)}&, \text{ if } t\in A,\\
0 &, \text{ else}.
\end{cases}\]
and hence \((z,t)\mapsto 1_{A}(t)g(z,t)e^{-t\psi(z)}\) define measurable functions on \(H_{n+1}\).
We have
\[\int_{\mathbb{C}^n}f(z,t)\overline{g(z,t)}e^{-2t\psi(z)}d\mu(z)=0\] for all \(t\in A\) and hence
\[I_0:=\int_{\mathbb{R}}\int_{\mathbb{C}^n}f(z,t)\overline{g(z,t)}e^{-2t\psi(z)}d\mu(z)dt=0.\]
By positivity we have that
\[I_1:=\int_{\mathbb{R}}\int_{\mathbb{C}^n}|f(z,t)|^2e^{-2t\psi(z)}d\mu(z)dt \, \text{ and } \, I_2:=\int_{\mathbb{R}}\int_{\mathbb{C}^n}|g(z,t)|^2e^{-2t\psi(z)}d\mu(z)dt\]
exist in \([0,\infty]\).
We then write
\begin{align*}
I_1+I_2&=I_1+I_0+\overline{I_0}+I_2=\int_{\mathbb{R}}\int_{\mathbb{C}^n}|f(z,t)e^{-t\psi(z)}+g(z,t)e^{-t\psi(z)}|^2d\mu(z)dt\\
&=\int_{\mathbb{R}}\int_{\mathbb{C}^n}|u(z,t)|^2d\mu(z)dt=\int_{H_{n+1}}|u(z,t)|^2d\mu_{H_{n+1}}<\infty
\end{align*}
because \(u\in L^2(H_{n+1})\). Thus, we have \(I_1,I_2<\infty\).
Setting
\[v(z,t)=f(z,t)e^{-t\psi(z)}=\int_{\mathbb{C}^n}P_{t\psi}(z,w)u(w,t)d\mu(w)\]
for \(t\in A\) we have
\(\|v\|^2=I_1\leq \|u\|^2<\infty\) and hence \(v\in L^2(H_{n+1})\) with \(\|v\|\leq\|u\|\). \end{proof} \begin{lemma}\label{lem:projectionH}
One has \(\tilde{P}(L^2(H_{n+1}))\subset \mathcal{H}^0_{b}(H_{n+1})\). \end{lemma} \begin{proof}
Given \(u\in L^2(H_{n+1})\) we have \(h:=\tilde{P}(u)\in L^2(H_{n+1})\) with \(h=\lim_{\varepsilon\to 0}h_{\varepsilon}\) in \(L^2\) norm, where \(\{h_\varepsilon\}_{\varepsilon>0}\subset L^2(H_{n+1})\),
\[h_\varepsilon(z,x_{2n+1}):=\int_{\mathbb{R}}\chi(\varepsilon t)g(z,t)e^{-t\psi(z)}e^{-itx_{2n+1}}dt\]
for a.e. \((z,x_{2n+1})\in H_{n+1}\), $\chi\in C^\infty_0(\mathbb R)$, $\chi(x)=1$ if $\abs{x}\leq 1$, $\chi(x)=0$ if $\abs{x}>2$ and \(g(\cdot,t)\) is holomorphic for a.e.~\(t\in\mathbb{R}\).
A straightforward calculation shows that
\[\left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right)h_\varepsilon=0\, \, \text{for} \, 1\leq j\leq n, \, \varepsilon>0\]
holds in the sense of distributions and
thus we conclude \(\left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right)h=0\) for \(1\leq j\leq n\) in the sense of distributions which shows, by Lemma \ref{lem:equivkernel}, \(h\in\mathcal{H}^0_{b}(H_{n+1})\). \end{proof}
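\begin{remark}
We spell out the calculation referred to in the proof of Lemma \ref{lem:projectionH}. Since \(g(\cdot,t)\) is holomorphic and \(\frac{\partial\psi}{\partial\overline{z}_j}(z)=\lambda_jz_j\), one has, for fixed \(t\),
\[\left(\frac{\partial}{\partial \overline{z}_j}+i\lambda_jz_j\frac{\partial}{\partial x_{2n+1}}\right)\Bigl(g(z,t)e^{-t\psi(z)}e^{-itx_{2n+1}}\Bigr)
=\bigl(-t\lambda_jz_j+i\lambda_jz_j(-it)\bigr)g(z,t)e^{-t\psi(z)}e^{-itx_{2n+1}}=0,\]
and differentiating under the integral sign defining \(h_\varepsilon\) (which can be justified in the sense of distributions, using the compact support of \(\chi(\varepsilon\cdot)\)) yields the identity stated there.
\end{remark}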
\begin{theorem}\label{thm:szegobergman}
Let \(S^{(0)}\) denote the Szeg\H{o} projection for the space \(\mathcal{H}^0_b(H_{n+1})\) with distribution kernel \(S^{(0)}(x,y)\) and let \(\tilde{P}\) be the operator defined in Definition \ref{def:segobergman}. One has \(\tilde{P}=S^{(0)}\) and hence formally
\[S^{(0)}(x,y)=\frac{1}{2\pi}\int_{\mathbb{R}}e^{-it(x_{2n+1}-y_{2n+1})}P_{t\psi}(z,w)dt\]
where \(P_{t\psi}(z,w)\) denotes the weighted Bergman kernel on \(\mathbb{C}^n\) with respect to the weight \(\psi(z)=\sum_{j=1}^n\lambda_j|z_j|^2\), \(\lambda_1,\ldots,\lambda_n>0\), using the notation \(x=(z,x_{2n+1})\) and \(y=(w,y_{2n+1})\). \end{theorem} \begin{proof}
Using Lemma \ref{fo1} and (\ref{eq:reproducingBK}) one finds \(\tilde{P}(u)=u\) for all \(u\in \mathcal{H}^0_{b}(H_{n+1})\). Moreover, by Lemma \ref{lem:projectionH} we have that \(\tilde{P}\) satisfies \(\tilde{P}(L^2(H_{n+1}))\subset \mathcal{H}^0_{b}(H_{n+1})\). By Lemma \ref{lem:projectionL2} we find \(\|\tilde{P}\|\leq 1\). Thus \(\tilde{P}\) is the orthogonal projection onto \(\mathcal{H}^0_{b}(H_{n+1})\) and hence we have \(\tilde{P}=S^{(0)}\). \end{proof}
\begin{center} {\bf Acknowledgement} \end{center}
The authors would like to thank the Institute for Mathematics, National University of Singapore for hospitality, comfortable accommodation and financial support during their visits in May for the program ``Complex Geometry, Dynamical Systems and Foliation Theory''. A main part of this work was done when the first and third authors were visiting the Institute of Mathematics, Academia Sinica in January.
\end{document}
\begin{document}
\title{Bimonotone Brownian Motion}
\author{Malte Gerhold} \date{\today} \address{M.G., Institut f\"ur Mathematik und Informatik \\ Ernst Moritz Arndt Universit\"at Greifswald\\Walther-Rathenau-Stra\ss{}e 47 \\ 17487 Greifswald \\ Germany} \email{[email protected]} \urladdr{www.math-inf.uni-greifswald.de/index.php/mitarbeiter/282-malte-gerhold} \thanks{The author thanks Prof.\ Michael Sch{\"u}rmann for several remarks which helped to improve the readability of this paper. He also thanks Volkmar Liebscher for some enlightening discussions. }
\begin{abstract} We define bi-monotone independence, prove a bi-monotone central limit theorem and use it to study the distribution of bi-monotone Brownian motion, which is defined as the two-dimensional operator process with monotone and antimonotone Brownian motion as components. \end{abstract}
\keywords{noncommutative probability, bi-monotone independence, L{\'e}vy processes, central limit theorem, bi-monotone partitions} \subjclass[2010]{Primary: 46L53; Secondary: 60G51, 60F05, 05A18} \maketitle
\section{Introduction} \label{sec:introduction}
It is a key feature of noncommutative probability that there is not a unique notion of independence, but a number of different ones, each of which gives rise to a distinct theory with its own limit theorems and limit distributions, L{\'e}vy processes, de Finetti theorems, etc. Although weaker concepts have been considered, a full theory comparable to classical probability theory is only achieved with a \emph{universal product independence}, i.e.\ independence for a family of noncommutative random variables is defined as factorization of the joint distribution with respect to a universal product construction for distributions of noncommutative random variables; this product construction replaces the tensor product of probability measures in the classical case. One possibility is to use the tensor product of states, and this leads to the notion of independence most frequently used in quantum mechanics. The most famous example of an alternative product is the free product of states which leads to the notion of \emph{freeness}, and which has been studied extensively, exhibiting many connections to random matrix theory and the theory of operator algebras, cf.\ \cite{HiPe00,VDN92}. Based on work of Speicher \cite{Spe97} and Ben Ghorbal and Sch{\"u}rmann \cite{BGS02}, Muraki showed that besides the tensor product and the free product, there are only three more universal products of states, namely the Boolean, the monotone and the antimonotone product \cite{Mur03, Mur13b}. Extending Muraki's result to a purely algebraic setting, Gerhold and Lachs classified all universal products of linear functionals on associative algebras; these are all deformations of Muraki's five: the Boolean product admits a two-parameter deformation, while the others admit a one-parameter deformation \cite{GLa14}. Results on cumulants and L{\'e}vy processes for general universal product independences can be found in \cite{Fra06,MaSc16,GLS16}.
In 2014, Voiculescu introduced the notion of bi-freeness in a series of papers \cite{Voi14,Voi16a,Voi16b}. The main difference between bi-freeness and the universal product independences mentioned above is that Voiculescu does not consider a notion of independence for single random variables, but for pairs of random variables, or \emph{two-faced} random variables. This corresponds to considering only such algebras which are structured as a free product of subalgebras. Since universality only has to hold for algebra homomorphisms which respect the two-faced structure, the possibility arises to find universal products of states besides Muraki's five. Bi-freeness is built on the fact that there are two distinct natural realizations of the GNS-representation of a free product state, a left and a right one. The monotone product was introduced by Muraki \cite{Mur95,Mur96} and Lu \cite{Lu97}. In contrast to the free case, the monotone product is not symmetric, i.e.\ the monotone product $\mathbin{\vartriangleright}$ and its opposite, the anti-monotone product $\varphi_1\mathbin{\vartriangleleft}\varphi_2:=\varphi_2\mathbin{\vartriangleright}\varphi_1$, do not coincide. There is a ``left'' and a ``right'' product representation, which corresponds to the monotone product and the anti-monotone product, respectively. In this paper we will introduce a new universal product for states on two-faced $*$-algebras, the \emph{bi-monotone product}, and study its associated notion of independence for two-faced random variables (Section~\ref{sec:bimon-indep}), construct a bi-monotone Brownian motion on monotone Fock space (Section~\ref{sec:bimon-brown-moti}), introduce bi-monotone partitions (Section~\ref{sec:bi-monot-part}), and use these to give a combinatorial description of the distribution of bi-monotone Brownian motion via a bi-monotone central limit theorem (Sections~\ref{sec:bi-monotone-central} and \ref{sec:distr-bi-monot}).
\section{Preliminaries and notation} \label{sec:prel-notat}
For any natural number $n$, we denote by $[n]$ the set $\{1,\ldots, n\}$. If the set $[n]$ appears as an argument inside usual brackets, we omit the inner square bracket. By an \emph{algebra}, we always mean a not necessarily unital, complex, associative algebra. The free product of algebras $A_1,\ldots, A_n$ is denoted by $A_1\sqcup\cdots\sqcup A_n$.
We write
$A=A^{(1)}\sqcup\cdots\sqcup A^{(n)}$ if $A^{(1)},\ldots, A^{(n)}$ are all subalgebras of $A$ and the canonical homomorphism $A^{(1)}\sqcup\cdots\sqcup A^{(n)}\to A$ (i.e.\ the free product of the embeddings $\iota^{(i)}\colon A^{(i)}\hookrightarrow A$) is an isomorphism. An algebra $A$ with subalgebras $A^{(1)},\ldots, A^{(n)}$ such that
$A=A^{(1)}\sqcup\cdots\sqcup A^{(n)}$ is called an \emph{$n$-faced algebra}. Let $B_1,B_2$ be $n$-faced algebras. Then the free product $B_1\sqcup B_2$ is again an $n$-faced algebra with respect to the subalgebras $B^{(i)}:=B_1^{(i)}\sqcup B_2^{(i)}$. Recall that an \emph{augmented algebra} is a unital algebra $A$ with a character, i.e.\ a non-zero homomorphism to $\mathbb C$, whose kernel is called \emph{augmentation ideal} (cf.\ for example \cite{LoVa12}). For any algebra $B$, its unitization $\widetilde B$ is an augmented algebra with augmentation ideal $B$ and, conversely, every augmented algebra is isomorphic to the unitization of its augmentation ideal. Therefore, we will always denote an augmented algebra as $\widetilde A$ where $A$ denotes its augmentation ideal.
We say that $\widetilde A$ is an \emph{augmented $n$-faced algebra} if it is the unitization of an $n$-faced algebra $A$. In this case $\widetilde A= (A^{(1)}\sqcup\cdots\sqcup A^{(n)})^{\sim}=\widetilde{A}^{(1)}\sqcup_1\cdots \sqcup_1 \widetilde{A}^{(n)}$, where $\sqcup_1$ denotes the free product of unital algebras. If $A,B$ are $*$-algebras, then we consider $\widetilde A$ and $A\sqcup B$ as $*$-algebras in the obvious way.
A \emph{noncommutative probability space} is a pair $(A,\Phi)$, where $A$ is a unital $*$-algebra and $\Phi$ is a state on $A$. Let $B$ be an $n$-faced $*$-algebra and $(A,\Phi)$ a noncommutative probability space. A $*$-homomorphism $j\colon B\to A$ is called \emph{$n$-faced random variable}. We call $j$ augmented if $B$ is an augmented algebra and $j$ is unital, that is if $B=\widetilde{B'}$ and $j=\widetilde{j\restriction B'}$. Just as for algebras, we will write augmented random variables as $\widetilde \jmath$ from the start with $j$ its restriction to the augmentation ideal. An element $(b_1,\ldots, b_n)\in A^{n}$ gives rise to the $*$-algebra homomorphism $j_{b_1,\ldots, b_n}\colon\operatorname{*-alg}(b_1)\sqcup\cdots\sqcup \operatorname{*-alg}(b_n)\to A$ determined by $j_{b_1,\ldots, b_n}(b_i)=b_i$. In this sense, we will consider elements of $A^{n}$ as $n$-faced random variables. For an $n$-faced random variable $b=(b_1,\ldots b_n)\in A^n$ and an $m$-tuple $\delta\in [n]^{[m]}$ we define $b^{\delta}:=b_{\delta_1}\cdots b_{\delta_m}$. The numbers $\Phi(b^{\delta})$ are called \emph{moments} of $b$ and the collection of all moments is called \emph{distribution} of $b$.
In this paper we will only consider two-faced algebras. We write a two-faced algebra as $A=A^{\mathfrak{l}}\sqcup A^{\mathfrak{r}}$. $A^{\mathfrak{l}}$ is called \emph{left face} and $A^{\mathfrak{r}}$ is called \emph{right face} of $A$.
A \emph{pointed representation} of an algebra $A$ consists of a pre-Hilbert space $H$, an algebra homomorphism $\pi\colon A\to L(H)$ and a unit vector $\Omega\in H$. For every pointed representation $\pi$ we can define a linear functional $\varphi_\pi(a):=\langle\Omega, \pi(a)\Omega\rangle$. If $\pi$ is a unital $*$-representation, then $\varphi_\pi$ is a state. On the other hand, for any state $\Phi$ on $A$, the GNS-representation yields a pointed unital $*$-representation. Given a pointed representation on $H$ with unit vector $\Omega$, we denote by $P$ the projection onto $\mathbb C\Omega$ and by $\mathrm{id}$ the identity operator on $H$.
\section{Bimonotone independence} \label{sec:bimon-indep}
We define the bimonotone product first for pointed representations of two-faced algebras and afterwards for states on augmented two-faced $*$-algebras.
\begin{definition}
Let $A_1, A_2$ be two-faced algebras and $\pi_1,\pi_2$ pointed representations of $A_1,A_2$ on pre-Hilbert spaces $H_1,H_2$ respectively.
Then we define the pointed representation $\pi_1\mathbin{\bowtie}\pi_2$ of $A_1\sqcup A_2$ on $H_1\otimes H_2$ with unit vector $\Omega:=\Omega_1\otimes\Omega_2$ by
\begin{align*}
\pi_1\mathbin{\bowtie}\pi_2(a):=
\begin{cases}
\pi_1(a)\otimes \mathrm{id}& a\in A_1^{\mathfrak{l}}\\
\pi_1(a)\otimes P & a\in A_1^{\mathfrak{r}}\\
\mathrm{id}\otimes \pi_2(a) & a\in A_2^{\mathfrak{r}}\\
P\otimes\pi_2(a) & a\in A_2^{{\mathfrak{l}}}
\end{cases}
\end{align*}
Now suppose that $\varphi_1,\varphi_2$ are states on augmented two-faced $*$-algebras $\widetilde A_1,\widetilde A_2$ and $\pi_i$ are pointed $*$-representations of $A_i$ such that $\varphi_i(a)=\langle\Omega_i,\pi_i(a)\Omega_i\rangle$ for all $a\in A_i$. Then we put
\[\varphi_1\mathbin{\bowtie} \varphi_2(a):=\langle\Omega,\pi_1\mathbin{\bowtie}\pi_2(a) \Omega\rangle\]
for all $a\in A_1\sqcup A_2$ and $(\varphi_1\mathbin{\bowtie} \varphi_2)(1):=1$.
\end{definition}
Note that the definition is independent of the choice of the pointed representations, so we can always take $\widetilde \pi_i$ to be the GNS-representation of $\varphi_i$. By construction, $\varphi_1\mathbin{\bowtie} \varphi_2$ is a state, because $\pi:=\pi_1\mathbin{\bowtie}\pi_2$ is a pointed $*$-representation with $\varphi_1\mathbin{\bowtie} \varphi_2(a)=\langle\Omega, \widetilde{\pi}(a)\Omega\rangle$ for all $a\in \widetilde{A_1\sqcup A_2}$. But in general $(\pi_1\mathbin{\bowtie} \pi_2)^{\sim}$ will not be the GNS-representation of $\varphi_1\mathbin{\bowtie} \varphi_2$, because $\Omega$ is not necessarily cyclic.
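\begin{example}
To illustrate the definition, let $a\in A_1^{\mathfrak{l}}$ and let $b$ belong to one of the two faces of $A_2$. Using $P\pi_1(a)\Omega_1=\varphi_1(a)\Omega_1$, a direct computation from the definition of $\pi_1\mathbin{\bowtie}\pi_2$ gives
\begin{align*}
\varphi_1\mathbin{\bowtie}\varphi_2(aba)&=\bigl\langle\Omega_1\otimes\Omega_2,\bigl(\pi_1(a)P\pi_1(a)\otimes\pi_2(b)\bigr)\Omega_1\otimes\Omega_2\bigr\rangle=\varphi_1(a)^2\varphi_2(b) &&\text{if }b\in A_2^{\mathfrak{l}},\\
\varphi_1\mathbin{\bowtie}\varphi_2(aba)&=\bigl\langle\Omega_1\otimes\Omega_2,\bigl(\pi_1(a)^2\otimes\pi_2(b)\bigr)\Omega_1\otimes\Omega_2\bigr\rangle=\varphi_1(a^2)\varphi_2(b) &&\text{if }b\in A_2^{\mathfrak{r}},
\end{align*}
so already this simple word distinguishes the two faces of $A_2$.
\end{example}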
It is easy to check that the bimonotone product is associative. Indeed
\[\pi_1\mathbin{\bowtie}\cdots\mathbin{\bowtie}\pi_n(a)=
\begin{cases}
P^{\otimes k-1}\otimes \pi_k(a)\otimes \mathrm{id}^{\otimes n-k} & a \in A_k^{\mathfrak{l}}\\
\mathrm{id}^{\otimes k-1}\otimes \pi_k(a)\otimes P^{\otimes n-k} & a \in A_k^{\mathfrak{r}}
\end{cases}
\]
holds independently of the way one inserts parentheses. Associativity on the level of pointed representations clearly implies associativity on the level of states.
\begin{remark}
The bi-monotone product can also be defined for arbitrary linear functionals on algebras. Instead of pointed representations on pre-Hilbert spaces, one can use pointed representations on vector spaces $\widetilde V$ with a fixed decomposition $\widetilde V=\mathbb{C}\Omega\oplus V$. Then every linear functional admits a pointed representation such that $\Phi(a)\Omega=P\pi(a)\Omega$, where $P$ is the projection onto $\mathbb C\Omega$. If $\widetilde V_i=\mathbb{C}\Omega_i\oplus V_i$, then $\widetilde V_1\otimes \widetilde V_2=\mathbb{C}(\Omega_1\otimes\Omega_2)\oplus (\Omega_1\otimes V_2)\oplus (V_1\otimes \Omega_2)\oplus (V_1\otimes V_2)$ yields a direct sum decomposition of $\widetilde V_1\otimes \widetilde V_2$. Given linear functionals $\varphi_i$ on algebras $A_i$, we can find pointed representations $\pi_1,\pi_2$ as above and define
\[\varphi_1\mathbin{\check\mathbin{\bowtie}} \varphi_2(a):=\langle\Omega,\pi_1\mathbin{\bowtie}\pi_2(a) \Omega\rangle\]
for all $a\in A_1\sqcup A_2$.
\end{remark}
\begin{theorem}
The bi-monotone product of linear functionals is a positive $(1,2)$-u.a.u. product in the sense of \cite{MaSc16}, i.e. for all linear functionals $\varphi_i$ on two-faced algebras $A_i$ and all two-faced algebra homomorphisms $j_i\colon B_i\to A_i$ it holds that
\begin{itemize}
\item $(\varphi_1\mathbin{\check\mathbin{\bowtie}}\varphi_2)\circ \iota_1=\varphi_1$, $(\varphi_1\mathbin{\check\mathbin{\bowtie}}\varphi_2)\circ \iota_2=\varphi_2$
\item $(\varphi_1\mathbin{\check\mathbin{\bowtie}}\varphi_2)\mathbin{\check\mathbin{\bowtie}}\varphi_3=\varphi_1\mathbin{\check\mathbin{\bowtie}}(\varphi_2\mathbin{\check\mathbin{\bowtie}}\varphi_3)$
\item $(\varphi_1\mathbin{\check\mathbin{\bowtie}}\varphi_2)\circ (j_1\mathbin{\underline{\sqcup}} j_2)=(\varphi_1\circ j_1)\mathbin{\check\mathbin{\bowtie}}(\varphi_2\circ j_2)$
\item $\widetilde{\varphi_1\mathbin{\check\mathbin{\bowtie}}\varphi_2}$ is a state on $\widetilde A_1\sqcup_1\widetilde A_2$ whenever $\widetilde\varphi_1,\widetilde\varphi_2$ are states on augmented $*$-algebras $\widetilde A_1,\widetilde A_2$ respectively.
\end{itemize}
where $\iota_i\colon A_i\hookrightarrow A_1\sqcup A_2$ is the canonical embedding and $j_1\mathbin{\underline{\sqcup}} j_2:=(\iota_1\circ j_1)\sqcup(\iota_2\circ j_2)$.
\end{theorem}
\begin{proof}
Straightforward. Note that $\widetilde{\varphi_1\mathbin{\check\mathbin{\bowtie}}\varphi_2}=\widetilde\varphi_1\mathbin{\bowtie}\widetilde\varphi_2$.
\end{proof}
Since we are in the realm of universal products, there are associated notions of independence and L{\'e}vy processes for the bi-monotone product, cf.\ \cite{MaSc16} and \cite{GLS16}.
\begin{definition}
Let $j_1,\ldots, j_n$ be two-faced random variables. We call $j_1,\ldots, j_n$ \emph{bi-monotonely independent} if
\begin{align}
\Phi\circ\widetilde {\bigsqcup_i j_i}=(\Phi\circ \widetilde \jmath_1)\mathbin{\bowtie}\cdots\mathbin{\bowtie}(\Phi\circ \widetilde\jmath_n).\label{eq:bimon-ind}
\end{align}
Pairs of $*$-subalgebras are called bi-monotonely independent if their embedding homomorphisms are. Pairs of elements are called bi-monotonely independent if their generated $*$-algebras are.
\end{definition}
\begin{lemma}\label{lem:bimon-ind-isom}
Let $j_i\colon B_i\to A$ be two-faced random variables over a noncommutative probability space $(A,\Phi)$ with distributions $\varphi_i:=\Phi\circ\widetilde\jmath_i$. We denote by $\pi_i,\Pi$ pointed representations on pre-Hilbert spaces $H_i,K$ with unit vectors $\Omega_i,\Omega$ respectively such that
\[\Phi(a)=\langle\Omega, \Pi(a)\Omega\rangle,\quad \varphi_i(b)=\langle\Omega_i,\pi_i(b)\Omega_i\rangle\]
for all $a\in A$, $b\in B_i$, $i\in\{1,\ldots,n\}$. Assume that there exists an isometry
$i\colon H_1\otimes\cdots\otimes H_n\to K$ such that
\begin{itemize}
\item $i(\Omega_1\otimes\cdots\otimes\Omega_n)=\Omega$
\item $i\circ\bigl(\BIGOP{\mathbin{\bowtie}}_i \pi_i (b)\bigr)=\Pi(j_k(b))\circ i$ for all $b\in B_k$, $k\in\{1,\ldots, n\}$
\end{itemize}
Then $j_1,\ldots, j_n$ are bi-monotonely independent.
\end{lemma}
\begin{proof}
Since both sides of \eqref{eq:bimon-ind} are states, it is enough to prove equality for elements $a_1\cdots a_m\in \bigsqcup_i B_i$. Suppose $a_i\in B_{\varepsilon_i}$ for $i\in\{1,\ldots, m\}$ with $\varepsilon_1,\ldots, \varepsilon_m\in[n]$ and $\varepsilon_i\neq\varepsilon_{i+1}$ for $i=1,\ldots, m-1$. In this case we have
\begin{align*}
&\Phi\circ (j_1\sqcup\cdots\sqcup j_n) (a_1\cdots a_m)\\
&= \langle \Omega, \Pi\Bigl(\bigsqcup_i j_i(a_1\cdots a_m)\Bigr)\Omega\rangle\\
&= \langle i\Omega_1\otimes\cdots\otimes\Omega_n, \Pi(j_{\varepsilon_1}(a_1))\cdots\Pi(j_{\varepsilon_m}(a_m)) i\Omega_1\otimes\cdots\otimes\Omega_n\rangle\\
&= \langle \Omega_1\otimes\cdots\otimes\Omega_n, i^*i\Bigl(\BIGOP{\mathbin{\bowtie}}_i \pi_i(a_1)\Bigr) \cdots \Bigl(\BIGOP{\mathbin{\bowtie}}_i \pi_i(a_m)\Bigr) \Omega_1\otimes\cdots\otimes\Omega_n\rangle\\
&= \langle \Omega_1\otimes\cdots\otimes\Omega_n, \Bigl(\BIGOP{\mathbin{\bowtie}}_i \pi_i(a_1\cdots a_m)\Bigr) \Omega_1\otimes\cdots\otimes\Omega_n\rangle\\
&= \varphi_1\mathbin{\bowtie}\cdots \mathbin{\bowtie} \varphi_n (a_1\cdots a_m)\qedhere
\end{align*}
\end{proof}
\section{Bimonotone Brownian motion} \label{sec:bimon-brown-moti}
For a set $M$, put \(M^*:=\bigcup_{n\in\mathbb{N}_{0}} M^n\). In particular, \(\mathbb R^*:=\bigcup_{n\in\mathbb{N}_{0}} \mathbb{R}^n\). We view $\mathbb R^*$ as a measure space with the unique measure whose restriction to $\mathbb R^n$ is $n$-dimensional Lebesgue measure. Restriction of this measure to the subset $\Delta:=\{(t_1,\ldots,t_n)\in\mathbb R^*\mid t_1<\cdots<t_n\}$ makes $\Delta$ a measure space. We will also use the subspaces $\Delta_{[s,t]}:=\Delta \cap [s,t]^{*}$ for the interval $[s,t]$, $s<t$.
\begin{definition}
Put $\Gamma:=L^2(\Delta)$ and denote by $\Omega$ the function with value $1$ on the empty tuple $\Lambda:=()\in\mathbb{R}^0$ and which vanishes on all $(t_1,\ldots,t_n)$ with $n>0$. We call $\Gamma$ the \emph{monotone Fock space}. The unit vector $\Omega$ induces the vector state $\varphi$ with $\varphi(a)=\langle\Omega, a\Omega\rangle$ on $\mathcal A:=\mathcal B(\Gamma)$ called the \emph{vacuum state}. We also define the monotone Fock spaces over intervals as $\Gamma_{[s,t]}:=L^2(\Delta_{[s,t]})$ and view them as subspaces of $\Gamma$ with respect to the obvious embedding. \end{definition}
We define the following bounded operators on $\Gamma$:
\begin{definition}
For every $f\in L^2(\mathbb R)$, we define the \emph{left creation operator} $\lambda^*(f)$, \emph{left annihilation operator} $\lambda(f)$, \emph{right creation operator} $\rho^*(f)$, and \emph{right annihilation operator} $\rho(f)$ by
\begin{align*}
\lambda^*(f)(g)(t_1,\ldots, t_n)&:=f(t_1)g(t_2,\ldots, t_n), \quad(n>0);&\lambda^*(f)(g)(\Lambda)&:=0\\
\lambda(f)(g)(t_1,\ldots, t_n)&:=\int_{-\infty}^{t_1} \overline{f(\tau)}g(\tau,t_1,\ldots, t_n)\mathrm{d}\tau&&\\
\rho^*(f)(g)(t_1,\ldots, t_n)&:=g(t_1,\ldots, t_{n-1})f(t_n), \quad(n>0);&\rho^*(f)(g)(\Lambda)&:=0 \\
\rho(f)(g)(t_1,\ldots, t_n)&:=\int_{t_n}^{\infty} \overline{f(\tau)}g(t_1,\ldots, t_n,\tau)\mathrm{d}\tau&&
\end{align*}
respectively. \end{definition}
All these operators belong to $\mathcal B(\Gamma)$. It is easy to check that $\lambda^*(f)=(\lambda(f))^*$ and $\rho^*(f)=(\rho(f))^*$. We abbreviate $\lambda^*(1_{[s,t]})$ as $\lambda^*_{s,t}$ and similarly for the other three families of operators. For every interval $[s,t]$ let $B_{s,t}$ denote the two-faced $*$-algebra with $B_{s,t}^{\mathfrak{l}}=\operatorname{*-alg}(\lambda^*_{s,t})$ and $B_{s,t}^{\mathfrak{r}}=\operatorname{*-alg}(\rho^*_{s,t})$.
\begin{theorem}
The two-faced $*$-subalgebras $B_{t_1,t_2}$, $B_{t_2,t_3}$, \ldots, $B_{t_{n-1},t_n}$ are bi-monotonely independent for all choices of $t_1\leq \ldots \leq t_n$. \end{theorem}
\begin{proof}
There is a natural isometry $i\colon \Gamma_{[t_1,t_2]}\otimes\cdots\otimes \Gamma_{[t_{n-1},t_n]}\to \Gamma$ with
\[i(g_1\otimes \cdots\otimes g_{n-1})(\mathbf s)=
\begin{cases}
g_1(\mathbf s\cap[t_1,t_2])\cdots g_{n-1}(\mathbf s\cap[t_{n-1},t_n]), &\mathbf s \in \Delta_{[t_1,t_n]}\\
0, & \text{else}.
\end{cases}
\]
Obviously, $i(\Omega_{[t_1,t_2]}\otimes\cdots\otimes \Omega_{[t_{n-1},t_n]})=\Omega$. Since $\Gamma_{[t_i,t_{i+1}]}$ is invariant under $B_i$, we can define a pointed representation $\pi_i\colon B_i\to \mathcal {B}(\Gamma_{[t_i,t_{i+1}]})$ by restriction. For short, we write $\lambda/\rho^{(*)}_i$ for $\pi_i(\lambda/\rho^{(*)}_{t_i,t_{i+1}})$. So, by Lemma~\ref{lem:bimon-ind-isom} and the definition of the bi-monotone product of pointed representations, we are done if we can show
\begin{align*}
\lambda^*_{t_i,t_{i+1}}\circ i&=i\circ P^{\otimes i-1}\otimes \lambda_i^* \otimes \mathrm{id}^{n-1-i}\\
\lambda_{t_i,t_{i+1}}\circ i&=i\circ P^{\otimes i-1}\otimes \lambda_i \otimes \mathrm{id}^{n-1-i}\\
\rho^*_{t_i,t_{i+1}}\circ i&=i\circ \mathrm{id}^{\otimes i-1}\otimes \rho_i^* \otimes P^{n-1-i}\\
\rho_{t_i,t_{i+1}}\circ i&=i\circ \mathrm{id}^{\otimes i-1}\otimes \rho_i \otimes P^{n-1-i}.
\end{align*}
We check the first equality and leave the remaining ones to the reader, as the calculations are very similar.
Suppose that $s_1<\ldots <s_m$ and that $s_1,\ldots, s_k$ belong to $[t_1,t_{i}]$, $s_{k+1},\ldots,s_{r}$ belong to $[t_{i},t_{i+1}]$ and $s_{r+1},\ldots, s_m$ belong to $[t_{i+1},t_n]$. For all $g_k\in \Gamma_{[t_k,t_{k+1}]}$, $k\in\{1,\ldots, n-1\}$, we get, using the increasing order of $t_1,\ldots, t_n$ and $s_1,\ldots, s_m$,
\begin{align*}
&\lambda^*_{t_i,t_{i+1}}(i(g_1\otimes\cdots\otimes g_{n-1}))(s_1,\ldots, s_m)\\
&=1_{[t_i,t_{i+1}]}(s_1)\cdot(g_1\otimes\cdots\otimes g_{i-1})(s_2,\ldots, s_k)\cdot \\
& \hspace*{12em} g_{i}(s_{k+1},\ldots, s_r)\cdot(g_{i+1}\otimes\cdots\otimes g_{n-1})(s_{r+1},\ldots, s_m)\\
&=
\begin{cases}
0 & k>0\\
g_1(\Lambda)\cdots g_{i-1}(\Lambda) g_{i}(s_2,\ldots, s_{r}) (g_{i+1}\otimes\cdots\otimes g_{n-1})(s_{r+1},\ldots, s_m) & k=0, r>0\\
0 & k=0, r=0
\end{cases}\\
&= i\bigl(P(g_{1})\otimes\cdots\otimes P(g_{i-1})\otimes \lambda^*_{i}(g_{i})\otimes g_{i+1}\otimes\cdots\otimes g_{n-1}\bigr) (s_1,\ldots,s_m)\\
&=i\circ (P^{\otimes i-1}\otimes \lambda_i^* \otimes \mathrm{id}^{n-1-i}) (g_1\otimes\cdots\otimes g_{n-1})(s_1,\ldots, s_m).\qedhere
\end{align*} \end{proof}
Put $b_{s,t}^{\mathfrak{l}}:=\lambda^*_{s,t}+\lambda_{s,t}$ and $b_{s,t}^{\mathfrak{r}}:=\rho^*_{s,t}+\rho_{s,t}$.
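For later use we note that each $b_{s,t}^{p}$, $p\in\{\mathfrak{l},\mathfrak{r}\}$, is centered with respect to the vacuum state and that all second moments are given by the length of the interval: since $\lambda_{s,t}\Omega=\rho_{s,t}\Omega=0$ and $\lambda^*_{s,t}\Omega=\rho^*_{s,t}\Omega=1_{[s,t]}$ in the one-particle space, we get for all $p,q\in\{\mathfrak{l},\mathfrak{r}\}$
\[\langle\Omega, b^{p}_{s,t}\Omega\rangle=0,\qquad
\langle\Omega, b^{p}_{s,t}b^{q}_{s,t}\Omega\rangle=\langle b^{p}_{s,t}\Omega, b^{q}_{s,t}\Omega\rangle=\|1_{[s,t]}\|^2=t-s.\]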
\begin{theorem}
The $b_{s,t}=(b_{s,t}^{\mathfrak{l}},b_{s,t}^{\mathfrak{r}})$, $0\leq s\leq t$, form an additive L{\'e}vy process, i.e.
\begin{itemize}
\item $b_{r,s}+b_{s,t}=b_{r,t}$ for all $r\leq s\leq t$ and $b_{t,t}=0$ (increment property)
\item $\Phi(b_{s,t}^{\delta})=\Phi(b_{0,t-s}^{\delta})$ for all $s\leq t$ and all $\delta\in\{\mathfrak{l},\mathfrak{r}\}^*$ (stationarity of increments)
\item $b_{t_1,t_2}$, $b_{t_2,t_3}$, \ldots, $b_{t_{n-1},t_n}$ are bi-monotonely independent for all $0\leq t_1\leq \ldots \leq t_n$ (independence of increments).
\end{itemize} \end{theorem}
\begin{proof}
The increment property is obvious, independence follows from the previous theorem. Stationarity follows from the fact that the canonical unitary $U_s\colon\Gamma_{s,t}\to\Gamma_{0,t-s}$ with $U_s(f)(t_1,\ldots, t_n)=f(t_1+s,\ldots, t_n+s)$ fulfills $U_s(\Omega)=\Omega$ and $b_{s,t}= U_s^* b_{0,t-s}U_s$. \end{proof}
\section{Bi-monotone partitions} \label{sec:bi-monot-part}
\begin{definition}
A \emph{partition} of a set $X$ is a set $\pi$ of subsets of $X$, called \emph{blocks} of $\pi$, such that
\begin{itemize}
\item $V\neq\emptyset$ for all blocks $V\in\pi$
\item $V_1\cap V_2=\emptyset$ for all distinct blocks $V_1,V_2\in\pi$
\item $\displaystyle \bigcup_{V\in\pi} V=X$
\end{itemize}
A partition $\pi$ of $X$ is called a \emph{pair partition} if $\#V=2$ for all blocks $V\in\pi$.
An \emph{ordered partition} of a set $X$ is a partition $\pi$ of $X$ with a total order $\leq$ between the blocks (i.e.\ a total order on the set $\pi$). \end{definition}
The special kinds of partitions and ordered partitions we shall define now are important for the study of noncommutative independences, such as freeness (noncrossing partitions), Boolean (interval partitions) and monotone independence (monotone partitions). They appear in moment-cumulant formulas, in the universal coefficients of the corresponding universal product and in the central limit theorems.
\begin{definition}
Let $X$ be a totally ordered set.
A partition $\pi$ of $X$ is called
\begin{itemize}
\item \emph{noncrossing} if $a,c\in V$ and $b,d\in W$ for elements
$a<b<c<d$ of $X$ and blocks $V,W\in \pi$ implies $V=W$,
\item \emph{interval partition} if $a,c\in V$, $b\in W$ for elements $a<b<c$ of $X$ and blocks $V,W\in \pi$ implies $V=W$.
\end{itemize}
An ordered partition $\pi$ of $X$ is called
\begin{itemize}
\item \emph{monotone partition} if $a,c\in V$, $b\in W$ for
elements $a<b<c$ of $X$ and blocks $V,W\in \pi$ implies $V\geq W$.
\end{itemize} \end{definition}
We denote the set of all partitions, noncrossing partitions, interval partitions and monotone partitions of an ordered set $X$ by $\mathrm{P}_{\mathbin{\otimes}}(X)$, $\mathrm{P}_{\free}(X)$, $\mathrm{P}_{\bool}(X)$, and $\mathrm{P}_{\mathbin{\vartriangleright}}(X)$ respectively, according to their associated universal product. An (ordered) partition is called a \emph{pair partition} if all its blocks have cardinality 2. The set of all pair partitions of type $\odot$ is denoted by $\mathrm{PP}_\odot(X)$ for $\odot\in\{\mathbin{\otimes}, \free,\bool,\mathbin{\vartriangleright}\}$.
Partitions and ordered partitions can be nicely depicted in the following way: \begin{itemize} \item The elements of $X$ are written on the bottom line of the diagram, in ascending order if $X$ is ordered \item Each block is represented by a horizontal line with vertical ``legs'' connecting it down to the points of the block \item If $\pi$ is not ordered, the heights are chosen such that there are no overlaps and as few crossings as possible. \item If $\pi$ is ordered, the heights are chosen such that greater blocks have higher horizontal lines. \end{itemize}
Noncrossing and monotone partitions are simply recognized as the ones that have no crossing lines in the corresponding diagram.
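\begin{example}
For $X=[4]$ the noncrossing pair partitions are $\bigl\{\{1,2\},\{3,4\}\bigr\}$ and $\bigl\{\{1,4\},\{2,3\}\bigr\}$; the crossing pair partition $\bigl\{\{1,3\},\{2,4\}\bigr\}$ admits no monotone order, since the defining condition would force each of its two blocks to be greater than or equal to the other. The first partition contains no nesting, so both orders of its blocks are monotone, while for the second one the condition forces $\{1,4\}\geq\{2,3\}$, leaving exactly one admissible order. Hence there are $3=(2\cdot 2-1)!!$ monotone pair partitions of $[4]$.
\end{example}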
\begin{definition}
Let $X$ be a set.
A \emph{two-faced partition} of $X$ is a partition $\pi$ of $X$ together with a map $\delta\colon X \to\{\el,\er\}$. Similarly, an \emph{ordered two-faced partition} of $X$ is an ordered partition $\pi$ of $X$ together with a map $\delta\colon X \to\{\el,\er\}$.
The map $\delta$ is called \emph{pattern} of the (ordered) two-faced partition. Frequently, we will consider $\delta$ as an element of $\{\el,\er\}^X$ and write $\delta_x$ instead of $\delta(x)$. \end{definition}
An (ordered) two-faced partition can be drawn like a partition, but the points of $X$ are put on the bottom and on the top line and the vertical leg at $x\in X$ is drawn down to the bottom line if $\delta(x)=\er$ and up to the top line if $\delta(x)=\el$.
\begin{definition}
Let $X$ be an ordered set.
A two-faced partition of $X$ is called
\begin{itemize}
\item \emph{bi-noncrossing partition} if $a,c\in V$ and $b,d\in W$ for elements
$a<b<c<d$ of $X$ with $\delta(b)=\delta(c)$ and blocks $V,W\in \pi$ implies $V=W$. (cf.\ \cite{CNS15})
\end{itemize}
An ordered two-faced partition $\pi$ of $X$ is called
\begin{itemize}
\item \emph{bi-monotone partition} if $a,c\in V$, $b\in W$ for
elements $a<b<c$ of $X$ and blocks $V,W\in \pi$ implies $V\leq W$ if $\delta(b)=\er$ and $V\geq W$ if $\delta(b)=\el$.
\end{itemize} \end{definition}
Again, the bi-noncrossing and bi-monotone partitions exactly correspond to those diagrams which have no crossing lines. The sets of bi-noncrossing partitions, bi-monotone partitions, bi-noncrossing pair partitions, and bi-monotone pair partitions of $X$ with pattern $\delta\colon X\to\{\mathfrak{l},\mathfrak{r}\}$ are denoted by $\mathrm{P}_{\mathbin{\text{\FiveStarOutlineHeavy}}}(\delta)$, $\mathrm{P}_{\mathbin{\bowtie}}(\delta)$, $\mathrm{PP}_{\mathbin{\text{\FiveStarOutlineHeavy}}}(\delta)$, and $\mathrm{PP}_{\mathbin{\bowtie}}(\delta)$ respectively.
\begin{example}
The two-faced partition $(\pi,\delta)$ of $\{1,\dots, 6\}$ with
\[\pi=\Bigl\{\{1,4\},\{2,3\},\{5,6\}\Bigr\},\quad \delta=(\er,\er,\er,\el,\el,\el)\]
is bi-noncrossing, as it can be drawn in a noncrossing way, see Figure~\ref{bncpp-ex}.
\begin{figure}\label{bncpp-ex}
\end{figure}
There are three possible orders between the blocks such that the resulting ordered two-faced partition is bi-monotone. Indeed, the block $\{1,4\}$ has to be smaller than the block $\{2,3\}$, whereas $\{5,6\}$ can be in any position. This is easily seen in the corresponding diagrams in Figure~\ref{bmord-ex}.
\begin{figure}
\caption{bi-monotone orderings of a bi-noncrossing partition}
\label{bmord-ex}
\end{figure} \end{example}
\begin{remark}
Let $X$ be an ordered $2n$-set. Then one can show that for every pattern $\delta\colon X\to\{\mathfrak{l},\mathfrak{r}\}$, there are at most $(2n-1)!!$ bi-monotone pair partitions with pattern $\delta$. A complete proof requires quite a bit of combinatorial background which is not needed for the central limit theorem, so we will present it in a second paper. Here we only sketch the key idea.
A partition $\pi$ of an ordered set $X$ is called \emph{irreducible} if for all $a\neq\max X$ there is a block $V\in\pi$ with elements $b,c\in V$ such that $b\leq a< c$. For a set $M$ of partitions of $X$ we denote by $\mathrm{I}M$ the set of irreducible partitions in $M$. For monotone partitions (bi-monotone partitions with constant pattern), we know that $\#\mathrm{PP}_{\mathbin{\vartriangleright}}(X)=(2n-1)!!$. Also note that the highest block of an irreducible monotone pair partition has to be $\{\min X,\max X\}$, so deleting the highest block yields a bijection between $\mathrm{IPP}_{\mathbin{\vartriangleright}}(X)$ and $\mathrm{PP}_{\mathbin{\vartriangleright}}('X')$, where $'X':=X\setminus\{\min X,\max X\}$.
For a general pattern $\delta\colon X\to \{\mathfrak{l},\mathfrak{r}\}$, it is not difficult to see that
\[ \#\mathrm{PP}_{\mathbin{\bowtie}}(\delta)=\sum_{\substack{\delta=\delta_1\smile\cdots \smile\delta_k\\|\delta_i|\in 2\mathbb N}} \#\mathrm{IPP}_{\mathbin{\bowtie}}(\delta_1)\cdots\#\mathrm{IPP}_{\mathbin{\bowtie}}(\delta_k) \binom{|\delta|/2}{|\delta_1|/2,\ldots,|\delta_k|/2}. \]
On the right hand side the induction hypothesis can be applied to replace all partitions with shorter patterns by ordinary monotone partitions. Finally, one has to deal with the summand $\mathrm{IPP}_{\mathbin{\bowtie}}(\delta)$. The Dyck path description of noncrossing pair partitions (see e.g.\ \cite[Exercise 8.23]{NiSp06}) also works for bi-noncrossing pair partitions with fixed pattern, and one can use it to find an injection $\mathrm{IPP}_{\mathbin{\bowtie}}(\delta)\hookrightarrow\mathrm{PP}_{\mathbin{\bowtie}}('\delta')$, where $'\delta':=\delta\restriction{'X'}$.
Then the induction hypothesis yields
\begin{align*}
\#\mathrm{IPP}_{\mathbin{\bowtie}}(\delta)
\leq \#\mathrm{PP}_{\mathbin{\bowtie}}('\delta')\leq\#\mathrm{PP}_{\mathbin{\vartriangleright}}('X')=\#\mathrm{IPP}_{\mathbin{\vartriangleright}}(X).
\end{align*}
\end{remark}
\section{Bi-monotone central limit theorem} \label{sec:bi-monotone-central}
Our proof of the bi-monotone central limit theorem is based on the central limit theorem for singleton independence as given in {\cite[Lemma 2.4]{AHO98}}. For convenience of the reader, we recall the relevant part of the theorem. \begin{theorem}[cf.\ {\cite[Lemma 2.4]{AHO98}}]\label{theo:SCLT}
Let $J$ be a set and $(A,\Phi)$ a noncommutative probability space. Assume that $b^{(j)}=(b^{(j)}_n)_{n=1}^{\infty}$, $j\in J$, are sequences of elements in $A$ such that
\begin{itemize}
\item each $b_n^{(j)}$ is centered, i.e. $\Phi(b_n^{(j)})=0$,
\item the condition of boundedness of mixed moments is fulfilled, i.e. for each $k\in\mathbb N$ there exists a positive constant $\nu_k$ such that $\left\vert\Phi(b_{n_1}^{(j_1)}\cdots b_{n_k}^{(j_k)})\right\vert\leq\nu_k$ for any choice of $n_1,\ldots, n_k\in\mathbb N$ and $j_1,\ldots, j_k\in J$,
\item the singleton condition is satisfied, i.e. for any choice of $k\in\mathbb N$, $j_1,\ldots, j_k\in J$ and $n_1,\ldots, n_k\in\mathbb N$
\[\Phi(b_{n_1}^{(j_1)}\cdots b_{n_k}^{(j_k)})=0\] holds whenever there exists an index $n_s$ which is different from all other ones.
\end{itemize}
Write $S_N(b^{(j)}):=\sum_{n=1}^{N}b^{(j)}_n$. Then
\begin{multline}
\lim_{N\to \infty} \Phi\left(\frac{S_N(b^{(j_1)})}{N^{1/2}}\cdots \frac{S_N(b^{(j_{2n})})}{N^{1/2}}\right)\\
=\lim_{N\to\infty} N^{-n} \sum_{\substack{\pi\colon[2n]\to[n]\\\text{2 to 1}}}\sum_{\substack{\sigma\colon[n]\to[N]\\\text{order preserving}}}\Phi\left(b^{(j_1)}_{\sigma\circ\pi(1)}\cdots b^{(j_{2n})}_{\sigma\circ\pi(2n)}\right).\label{eq:2}
\end{multline} \end{theorem}
We will use it in the form of the following Corollary.
\begin{corollary}\label{cor-SCLT}
Let $b^{(j)}=(b^{(j)}_n)_{n=1}^{\infty}$, $j\in \{\mathfrak{l},\mathfrak{r}\}$, be sequences of two-faced random variables such that
\begin{itemize}
\item each $b^{(j)}_n$ is centered
\item the singleton condition is fulfilled
\item the distribution is \emph{spreadable}, i.e.\
\[\Phi(b_{n_1}^{(j_1)}\cdots b_{n_k}^{(j_k)})=\Phi(b_{\sigma(n_1)}^{(j_1)}\cdots b_{\sigma(n_k)}^{(j_k)})\]
for each order preserving $\sigma\colon\mathbb N\to\mathbb N$.
\end{itemize}
Then
\begin{align}
\lim_{N\to \infty} \Phi\left(\frac{S_N(b^{(j_1)})}{N^{1/2}}\cdots \frac{S_N(b^{(j_{2n})})}{N^{1/2}}\right)
=\frac{1}{n!}\sum_{\substack{\pi\colon[2n]\to[n]\\\text{2 to 1}}}\Phi\left(b^{(j_1)}_{\pi(1)}\cdots b^{(j_{2n})}_{\pi(2n)}\right).
\end{align} \end{corollary}
\begin{proof}
Spreadability implies boundedness of mixed moments.
There are exactly $\binom{N}{n}$ order preserving maps from $[n]$ to $[N]$, and by spreadability the value of $\Phi\left(b^{(j_1)}_{\sigma\circ\pi(1)}\cdots b^{(j_{2n})}_{\sigma\circ\pi(2n)}\right)$ is independent of $\sigma$.
Finally, note that
\[\lim_{N\to\infty} N^{-n}\binom{N}{n}=\frac{1}{n!}\lim_{N\to\infty}\frac{N(N-1)\cdots (N-n+1)}{N^n}=\frac{1}{n!}.\qedhere\] \end{proof}
\begin{lemma}\label{lem:bi-mon-clt-1}
Let $(B_1^{\mathfrak{l}}, B_1^{\mathfrak{r}}),\ldots, (B_n^{\mathfrak{l}},B_n^{\mathfrak{r}})$ be bi-monotonely independent pairs of subalgebras. Let $b_i\in B_{\varepsilon_i}^{\delta_i}$, $i\in\{1,\ldots, m\}$, with $\varepsilon_i\in\{1,\ldots, n\}$ and $\delta_i\in\{\mathfrak{r},\mathfrak{l}\}$. If the ordered two-faced partition $(\pi,\delta)$ with blocks $V_i:=\{k\mid \varepsilon_k=i\}$, $V_1<\ldots<V_n$, and pattern $\delta$ is bimonotone, then
\[\Phi(b_1\cdots b_m)=\Phi\Bigl(\prod^{\rightarrow}_{\varepsilon_k=1} b_k\Bigr)\cdots \Phi\Bigl(\prod^{\rightarrow}_{\varepsilon_k=n} b_k\Bigr). \] \end{lemma}
\begin{proof}
Let $\pi_i$ be pointed representations with $\Phi(b)=\langle\Omega_i,\pi_i(b)\Omega_i\rangle$ for all $b\in B_i$, $i\in[n]$. On the representation space of each $\pi_i$, we denote by $P$ the projection onto $\mathbb C\Omega_i$ and by $\mathrm{id}$ the identity operator.
For $b\in B_\varepsilon^{\delta}$ with $\varepsilon\in [n]$, $\delta\in\{\mathfrak{l}, \mathfrak{r}\}$, we write $\BIGOP{\mathbin{\bowtie}}_i \pi_i(b)$ as $T_1(b)\otimes\cdots \otimes T_n(b)$ with
\[T_i(b)=
\begin{cases}
P&\text{if ($i<\varepsilon$ and $\delta=\mathfrak{l}$) or ($i>\varepsilon$ and $\delta=\mathfrak{r}$)}\\
\mathrm{id}&\text{if ($i<\varepsilon$ and $\delta=\mathfrak{r}$) or ($i>\varepsilon$ and $\delta=\mathfrak{l}$)}\\
\pi_i(b)&\text{if $i=\varepsilon$}.
\end{cases}
\]
With this notation
\begin{align*}
\BIGOP{\mathbin{\bowtie}}_i \pi_i(b_1\cdots b_m)
&=\Bigl(\BIGOP{\mathbin{\bowtie}}_i \pi_i(b_1)\Bigr) \cdots \Bigl(\BIGOP{\mathbin{\bowtie}}_i \pi_i(b_m)\Bigr)\\
&=\bigl(T_1(b_1)\cdots T_1(b_m)\bigr)\otimes\cdots\otimes \bigl(T_n(b_1)\cdots T_n(b_m)\bigr).
\end{align*}
One checks that bimonotonicity of $(\pi,\delta)$ implies that $T_i(b_j)=\mathrm{id}$ whenever there are $\mu,\nu$ with $\mu<j<\nu$, $\varepsilon_\mu=\varepsilon_\nu=i$ and $\varepsilon_j\neq i$. Then it follows easily that \[\langle\Omega_i,T_i(b_1)\cdots T_i(b_m)\Omega_i\rangle=\Phi\Bigl(\prod^{\rightarrow}_{\varepsilon_k=i} b_k\Bigr)\] and therefore
\begin{align*}
\Phi(b_1\cdots b_m)
&=\left\langle\Omega_1\otimes\cdots\otimes\Omega_n, \BIGOP{\mathbin{\bowtie}}_i \pi_i(b_1\cdots b_m)\Omega_1\otimes\cdots\otimes\Omega_n\right\rangle\\
&=\Bigl\langle\Omega_1,T_1(b_1)\cdots T_1(b_m)\Omega_1\Bigr\rangle\cdots \Bigl\langle\Omega_n,T_n(b_1)\cdots T_n(b_m)\Omega_n\Bigr\rangle\\
&=\Phi\Bigl(\prod^{\rightarrow}_{\varepsilon_k=1} b_k\Bigr)\cdots \Phi\Bigl(\prod^{\rightarrow}_{\varepsilon_k=n} b_k\Bigr)
\end{align*}
as claimed.
\end{proof}
\begin{lemma}\label{lem:bi-mon-clt-2}
Let $(b_{1}^{(\mathfrak{l})},b_{1}^{(\mathfrak{r})}),\ldots, (b_{n}^{(\mathfrak{l})},b_{n}^{(\mathfrak{r})})\in A\times A$ be bi-monotonely independent and centered. For an ordered two-faced pair partition $(\pi,\delta)$ of $[2n]$ with blocks $V_1<\ldots<V_n$ and pattern $\delta=(\delta_1,\ldots, \delta_{2n})$, write $\varepsilon_k:=i$ if $k\in V_i$. Then it holds that
\[\Phi(b_\pi)=\Phi(b_{\varepsilon_1}^{(\delta_1)}\cdots b_{\varepsilon_{2n}}^{(\delta_{2n})})= 0\]
whenever $(\pi,\delta)$ is not bi-monotone. \end{lemma}
\begin{proof}
If $(\pi,\delta)$ is not bi-monotone, there are $\mu<j<\nu$ and $i\in[n]$ with $\varepsilon_\mu=\varepsilon_\nu=i$ and either
\begin{enumerate}
\item $\delta_j=\el$ and $\varepsilon_j>i$ or
\item $\delta_j=\er$ and $\varepsilon_j<i$.
\end{enumerate}
In either case, using the notation of the previous Lemma,
\begin{align*}
T_i(b_{\varepsilon_\mu}^{(\delta_\mu)})&=\pi_i(b_{\varepsilon_\mu}^{(\delta_\mu)})\\
T_i(b_{\varepsilon_j}^{(\delta_j)})&=P\\
T_i(b_{\varepsilon_\nu}^{(\delta_\nu)})&=\pi_i(b_{\varepsilon_\nu}^{(\delta_\nu)}),
\end{align*}
so
\[\langle\Omega_i,T_i(b_{\varepsilon_1}^{(\delta_1)})\cdots T_i(b_{\varepsilon_{2n}}^{(\delta_{2n})})\Omega_i\rangle=\Phi(b_{i}^{(\delta_\mu)})\Phi(b_{i}^{(\delta_\nu)})=0\qedhere\] \end{proof}
\begin{theorem}[Bi-monotone central limit theorem]\label{theo:bi-mon-clt}
Let $b^{(j)}=(b^{(j)}_n)_{n=1}^{\infty}$, $j\in \{\mathfrak{l},\mathfrak{r}\}$, be sequences of elements in a quantum probability space such that
\begin{itemize}
\item each $b^{(j)}_n$ is centered
\item the sequence of pairs $(b_n^{(\mathfrak{l})},b_n^{(\mathfrak{r})})_{n\in\mathbb N}$ is bi-monotonely independent
\item the pairs $(b_n^{(\mathfrak{l})},b_n^{(\mathfrak{r})})$ have the same distribution for all $n\in\mathbb N$.
\end{itemize}
Put $c_{p,q}:=\Phi(b_n^{(p)}b_n^{(q)})$ for $p,q\in\{\mathfrak{l},\mathfrak{r}\}$. Then for every pattern $\delta=(\delta_1,\ldots, \delta_{2n})$ we have
\begin{align*}
\lim_{N\to \infty} \Phi\left(\frac{S_N(b^{(\delta_1)})}{N^{1/2}}\cdots \frac{S_N(b^{(\delta_{2n})})}{N^{1/2}}\right)
=\frac{1}{n!}\sum_{\pi\in \mathrm{PP}_{\mathbin{\bowtie}}(\delta)} \prod_{\{k<l\}\in\pi} c_{\delta_k,\delta_l}.
\end{align*} \end{theorem}
\begin{proof}
The singleton condition and spreadability of the distribution follow from bimonotone independence. Therefore we can apply Corollary~\ref{cor-SCLT} to get
\begin{align*}
\lim_{N\to \infty} \Phi\left(\frac{S_N(b^{(\delta_1)})}{N^{1/2}}\cdots \frac{S_N(b^{(\delta_{2n})})}{N^{1/2}}\right)
=\frac{1}{n!}\sum_{\substack{\pi\colon[2n]\to[n]\\\text{2 to 1}}}\Phi\left(b^{(\delta_1)}_{\pi(1)}\cdots b^{(\delta_{2n})}_{\pi(2n)}\right).
\end{align*} With every 2-to-1 map $\pi\colon[2n]\to[n]$ we associate the ordered pair partition $\hat\pi$ with blocks $V_i:=\pi^{-1}(i)$, $V_1<\ldots<V_n$ and the two-faced ordered pair partition $(\hat\pi,\delta)$. Thus, we can combine Lemma~\ref{lem:bi-mon-clt-1} and Lemma~\ref{lem:bi-mon-clt-2} to get
\begin{align*}
\Phi\left(b^{(\delta_1)}_{\pi(1)}\cdots b^{(\delta_{2n})}_{\pi(2n)}\right)=
\begin{cases}
0&\text{if $(\hat\pi,\delta)$ is not bi-monotone}\\
\displaystyle\prod_{\{k<l\}\in\hat\pi} c_{\delta_k,\delta_l}& \text{if $(\hat\pi,\delta)$ is bi-monotone}
\end{cases}
\end{align*}
and the proof is finished. \end{proof}
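\begin{remark}
For the constant pattern $\delta=(\mathfrak{l},\ldots,\mathfrak{l})$ the bi-monotone pair partitions with pattern $\delta$ are precisely the monotone pair partitions of $[2n]$, so that $\#\mathrm{PP}_{\mathbin{\bowtie}}(\delta)=(2n-1)!!$. If in addition $c_{\mathfrak{l},\mathfrak{l}}=1$, the limit in Theorem~\ref{theo:bi-mon-clt} equals $\frac{(2n-1)!!}{n!}=\frac{1}{2^n}\binom{2n}{n}$, the $2n$-th moment of the centered arcsine distribution with variance $1$. In this sense Theorem~\ref{theo:bi-mon-clt} contains the monotone central limit theorem as a special case, and, by symmetry, the anti-monotone one for $\delta=(\mathfrak{r},\ldots,\mathfrak{r})$.
\end{remark}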
\section{Distribution of bi-monotone Brownian motion} \label{sec:distr-bi-monot}
Next, we will apply the bi-monotone central limit theorem in order to determine the distribution of bi-monotone Brownian motion.
\begin{theorem}
Put $b_{s,t}^{(\mathfrak{l})}:=\lambda^*_{s,t}+\lambda_{s,t}$ and $b_{s,t}^{(\mathfrak{r})}:=\rho^*_{s,t}+\rho_{s,t}$. Then
for every pattern $\delta=(\delta_1,\ldots, \delta_{2n})$, $n\in\mathbb N$, \[\langle\Omega,b_{0,1}^{(\delta_1)}\cdots b_{0,1}^{(\delta_{2n})} \Omega\rangle=\frac{1}{n!}\#\mathrm{PP}_{\mathbin{\bowtie}}(\delta).\] \end{theorem}
\begin{proof}
The proof is based on two simple observations:
\begin{itemize}
\item For any $N\in\mathbb N$, we can write $b_{0,1}^{(j)}=b_{0,\frac{1}{N}}^{(j)}+b_{\frac1N,\frac2N}^{(j)}+\cdots+b_{\frac{N-1}{N},1}^{(j)}$.
\item The two families
\[\left(b_{\frac{i}{N},\frac{i+1}{N}}^{(j)}\right)_{\substack{i\in\mathbb{N}\\j\in\{\mathfrak{l},\mathfrak{r}\}}} \quad\text{and}\quad \left(\frac{b_{i,i+1}^{(j)}}{\sqrt{N}}\right)_{\substack{i\in\mathbb{N}\\j\in\{\mathfrak{l},\mathfrak{r}\}}}\]
have the same distribution, i.e.
\[\Phi\left(b_{\frac{i_1}{N},\frac{i_1+1}{N}}^{(j_1)}\cdots b_{\frac{i_k}{N},\frac{i_k+1}{N}}^{(j_k)}\right)=N^{-\frac{k}{2}}\Phi\left(b_{i_1,i_1+1}^{(j_1)}\cdots b_{i_k,i_k+1}^{(j_k)}\right)\]
for all $i_1,\ldots, i_k\in\mathbb{N}$ and $j_1,\ldots, j_k\in\{\mathfrak{l},\mathfrak{r}\}$.
\end{itemize}
Now put $b_i^{(j)}:=b_{i,i+1}^{(j)}$. The sequences $(b_i^{(j)})$ consist of centered bounded operators and are bi-monotonely independent. Therefore, we can apply the bi-monotone central limit theorem. Since $c_{p,q}:=\Phi(b_{i}^{(p)}b_{i}^{(q)})=1$ for all $i\in\mathbb N$ and all $p,q\in\{\mathfrak{l},\mathfrak{r}\}$, the stated formula follows.
\end{proof}
\begin{corollary}
Put $b_{s,t}:=\lambda^*_{s,t}+\lambda_{s,t}+\rho^*_{s,t}+\rho_{s,t}$. Then
\[\langle\Omega,b_{0,1}^{2n}\Omega\rangle=\frac{1}{n!}\#\mathrm{PP}_{\mathbin{\bowtie}}(2n),\]
where $\mathrm{PP}_{\mathbin{\bowtie}}(2n)$ is the set of all bi-monotone pair partitions of $[2n]$ with arbitrary pattern. \end{corollary}
\begin{proof}
Since $b_{s,t}=b_{s,t}^{(\mathfrak{l})}+b_{s,t}^{(\mathfrak{r})}$, this follows directly from the previous theorem and
\[b_{s,t}^{2n}=(b_{s,t}^{(\mathfrak{l})}+b_{s,t}^{(\mathfrak{r})})^{2n}=\sum_{\delta\in\{\mathfrak{l},\mathfrak{r}\}^{2n}}b_{s,t}^{(\delta_1)}\cdots b_{s,t}^{(\delta_{2n})}.\qedhere\] \end{proof}
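\begin{remark}
As a consistency check, the lowest moments can be computed by hand on the monotone Fock space. With $e:=1_{[0,1]}$ one finds $b_{0,1}\Omega=2e$ and $b_{0,1}^2\Omega=4\cdot 1_{\Delta\cap[0,1]^2}+4\Omega$, hence
\[\langle\Omega,b_{0,1}^2\Omega\rangle=\|b_{0,1}\Omega\|^2=4=\frac{1}{1!}\#\mathrm{PP}_{\mathbin{\bowtie}}(2),\qquad
\langle\Omega,b_{0,1}^4\Omega\rangle=\|b_{0,1}^2\Omega\|^2=16\cdot\tfrac{1}{2}+16=24=\frac{1}{2!}\#\mathrm{PP}_{\mathbin{\bowtie}}(4),\]
in accordance with the values $4$ and $48$ listed below.
\end{remark}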
This corollary allows us to calculate the number of bi-monotone pair partitions. The numbers of bi-monotone pair partitions of a $2n$-set for $n=0,1,\ldots,10$ are:\\
1, 4, 48, 928, 24448, 811776, 32460032, 1516774912, 81064953344, 4876115246080, 325959895390976.\\
We do not know any explicit or recursive formula for these numbers.
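For small $n$ the values above can at least be checked by brute force, directly from the definition of bi-monotone pair partitions. The following Python sketch (given only as an illustration; the function names are ad hoc and it is not claimed that this is how the list above was produced) enumerates all patterns, pair partitions and block orders and tests the bi-monotonicity condition; swapping the roles of $\mathfrak{l}$ and $\mathfrak{r}$ in the test does not change the totals.
\begin{verbatim}
from itertools import permutations, product

def pair_partitions(elems):
    # Yield all pair partitions of the list elems as lists of pairs (a, c), a < c.
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, second in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, second)] + tail

def is_bimonotone(blocks, rank, delta):
    # blocks: list of pairs (a, c) with a < c; rank[i]: position of block i in the
    # total order; delta: tuple of 'l'/'r' labels indexed by the points 1, ..., 2n.
    # Convention used here: a point b with a < b < c, b in block j, block i = {a, c},
    # forces rank[i] < rank[j] if delta(b) = 'r' and rank[i] > rank[j] if delta(b) = 'l'.
    for i, (a, c) in enumerate(blocks):
        for j, (x, y) in enumerate(blocks):
            if i == j:
                continue
            for b in (x, y):
                if a < b < c:  # b is nested strictly between the legs of block i
                    if delta[b - 1] == 'r' and not rank[i] < rank[j]:
                        return False
                    if delta[b - 1] == 'l' and not rank[i] > rank[j]:
                        return False
    return True

def count_bimonotone_pair_partitions(n):
    # Total number of bi-monotone pair partitions of [2n], all patterns together.
    points = list(range(1, 2 * n + 1))
    total = 0
    for delta in product('lr', repeat=2 * n):
        for blocks in pair_partitions(points):
            for order in permutations(range(n)):
                rank = {i: order[i] for i in range(n)}
                if is_bimonotone(blocks, rank, delta):
                    total += 1
    return total

# Expected output: [1, 4, 48, 928]
print([count_bimonotone_pair_partitions(n) for n in range(4)])
\end{verbatim}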
Since $b_{0,1}$ is a bounded selfadjoint operator on a Hilbert space, its moments are the moments of a uniquely determined compactly supported probability measure on $\mathbb R$. It would be nice to have an explicit formula for this probability measure.
\end{document}
\begin{document}
\title{\sc Definable zero-sum stochastic games}
\author{ J\'{e}r\^{o}me BOLTE\footnote{TSE (GREMAQ, Universit\'e Toulouse Capitole), Manufacture des Tabacs, 21 all\'ee de Brienne, 31015 Toulouse Cedex 5, France. email: {\tt [email protected]}} , \; St\'ephane GAUBERT \footnote{INRIA \& Centre de Math\'ematiques Appliqu\'ees (CMAP), UMR 7641, \'Ecole Polytechnique, 91128 Palaiseau, France. email: {\tt [email protected]}
} \, \& Guillaume VIGERAL\footnote{Universit\'e Paris-Dauphine, CEREMADE, Place du Mar\'echal De Lattre de Tassigny, 75775 Paris cedex 16, France. email: {\tt [email protected]} }} \maketitle \renewcommand{\thefootnote}{}\footnotetext{ The first and second authors were partially supported by the PGMO Programme of Fondation Math\'ematique Jacques Hadamard and EDF. The third author was partially supported by the French Agence Nationale de la Recherche (ANR) ``ANR JEUDY: ANR-10-BLAN 0112''. This work was co-funded by the European Union under the 7th Framework Programme ``FP7-PEOPLE-2010-ITN'', grant agreement number 264735-SADCO. } \renewcommand{\thefootnote}{\arabic{footnote}}
\begin{abstract} Definable zero-sum stochastic games involve a finite number of states and action sets, reward and transition functions that are definable in an o-minimal structure. Prominent examples of such games are finite, semi-algebraic or globally subanalytic stochastic games.
We prove that the Shapley operator of any definable stochastic game with separable transition and reward functions is definable in the same structure. Definability in the same structure does not hold systematically: we provide a counterexample of a stochastic game with semi-algebraic data yielding a non semi-algebraic but globally subanalytic Shapley operator.
Our definability results on Shapley operators are used to prove that any separable definable game has a uniform value; in the case of polynomially bounded structures we also provide convergence rates. Using an approximation procedure, we actually establish that general zero-sum games with separable definable transition functions have a uniform value. These results highlight the key role played by the tame structure of transition functions. As particular cases of our main results, we obtain that stochastic games with polynomial transitions, definable games with finite actions on one side, definable games with perfect information or switching controls have a uniform value. Applications to nonlinear maps arising in risk sensitive control and Perron-Frobenius theory are also given. \end{abstract}
\noindent {\bf Keywords} Zero-sum stochastic games, Shapley operator, o-minimal structures, definable games, uniform value, nonexpansive mappings, nonlinear Perron-Frobenius theory, risk-sensitive control, tropical geometry.
\section{Introduction}
Zero-sum stochastic games have been widely studied since their introduction by Shapley \cite{Shap53} in 1953 (see the textbooks \cite{sorin02,FiVr97,MSZ,NeSo03} for an overview of the topic). They model long term interactions between two players with completely opposite interests; they appear in a wealth of domains including computer science, population dynamics or economics. In such games the players face, at each time $n$, a zero-sum game whose data are determined by the state of nature. The evolution of the game is governed by a stochastic process which is partially controlled by both players through their actions, and which determines, at each stage of the game, the state of nature and thus the current game faced by both players. We assume that the players know the payoff functions, the underlying stochastic process and the current state; they also observe at each stage the actions played by one another. They aim at optimizing their gain over time. This objective depends on specific choices of payoff evaluations and in particular on the choice of a distribution of discount/weighting factors over time.
We shall focus here on two kinds of payoff evaluations which are based on Ces\`aro and Abel means. For any finite horizon time $n$, one defines the ``repeated game'' in $n$ stages for which each player aims at optimizing his averaged gain over the time frame $t=1,\ldots,n$. Similarly for any discount rate $\lambda$, one defines the $\lambda$-discounted game for infinite horizon games; the corresponding evaluations are recalled below. Under minimal assumptions these games have values, and an important issue in Dynamic Games theory is the asymptotic study of these values (see Subsection~\ref{Defsto}). These aspects have been dealt with along two lines: \begin{itemize} \item The ``asymptotic approach'' consists in the study of the convergence of these values when the players become more and more patient -- that is when $n$ goes to infinity or $\lambda$ goes to 0. \item The ``uniform value approach'', for which one seeks to establish that, in addition, both players have near optimal strategies that do not depend on the horizon (provided that the game is played long enough). \end{itemize}
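To fix ideas, if $(g_t)_{t\geq 1}$ denotes the stream of stage payoffs to player 1, these two evaluations correspond, with the usual normalizations, to the Ces\`aro mean and to the Abel mean
\[ \frac{1}{n}\sum_{t=1}^{n}g_t \qquad \text{and} \qquad \lambda\sum_{t\geq 1}(1-\lambda)^{t-1}g_t, \]
respectively.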
The asymptotic approach is less demanding as there are games \cite{Zamir73} with no uniform value but for which the value does converge to a common limit; the reader is referred to \cite{MSZ} for a thorough discussion on those two approaches and their differences in zero-sum repeated games.
For the asymptotic approach, the first positive results were obtained in recursive games \cite{Everett57}, games with incomplete information \cite{AuMa95,MeZa71} and absorbing games \cite{Kohlberg74}. In 1976, Bewley and Kohlberg settled, in a fundamental paper \cite{BewlKohl76}, the case of games with finite sets of states and actions. Their proof is based on the observation that the discounted value, thought of as a function of the discount factor, is semi-algebraic, and that it has therefore a Puiseux series expansion.
Bewley-Kohlberg's result of convergence was later considerably strengthened by Mertens and Neyman who proved \cite{MertNeym81} the existence of a uniform value in this finite framework. Several types of improvements based on techniques of semi-algebraic geometry were developed in \cite{Ney03,Milman02}. Algorithms using an effective version of the Tarski-Seidenberg theorem were recently designed in order to compute either the uniform value \cite{CMH2008} or $\epsilon$-optimal strategies \cite{solanvieille2010}.
The semi-algebraic techniques used in the proof of Bewley and Kohlberg have long been considered to be specifically related to the finiteness of the action sets and it seemed that they could not be adapted to wider settings. In \cite{Sha08} the authors consider a special instance of polynomial games but their focus is computational and concerns mainly the estimation of discounted values for a fixed discount rate. In order to go beyond their result and to tackle more complex games, most researchers have used topological or analytical arguments, see e.g.\ \cite{MNR2009,Renault2012,Rosenberg2000,RoSo01,RoVi2000,Sorin03,Sorin04,SoVitoappear}. The common feature of most of these papers is to study the analytical properties of the so-called Shapley operator of the game in order to infer various convergence results of the values. This protocol, called the ``operator approach" by Rosenberg and Sorin, rests on Shapley's theorem, which ensures that the dynamic structure of the game is entirely represented by the Shapley operator.
Our paper can be viewed as a ``definable operator approach". In the spirit of Bewley-Kohlberg and Neyman, we first identify a class of potentially ``well-behaved games" through their underlying geometric features (definable stochastic games) and we investigate what these features imply for the Shapley operator (its definability and subsequent properties). By the use of the Mertens-Neyman result, this implies in turn the existence of a uniform value for a wide range of games (e.g.\ polynomial games).
Before giving a more precise account of our results, let us describe briefly the topologi\-cal/geo\-me\-tri\-cal framework used in this paper. The rather recent introduction of o-minimal structures as models for a tame topology (see \cite{Dries98}) is a major source of inspiration to this work. O-minimal structures can be thought of as
an abstraction of semi-algebraic geometry through an axiomatization of its most essential properties. An o-minimal structure consists indeed in a collection of subsets belonging to spaces of the form $\mathbb{R}^n$, where $n$ ranges over $\mathbb{N}$, called {\em definable sets}~(\footnote{Functions are called definable whenever their graph is definable.}). Among other things, this collection is required to be stable by linear projections and its ``one-dimensional" sets must be finite unions of intervals. Definable sets are then shown to share most of the qualitative properties of semi-algebraic sets like finiteness of the number of connected components or differential regularity up to stratification.
Our motivation for studying stochastic games in this framework is twofold. First, it appears that definability allows one to avoid highly oscillatory phenomena in a wide variety of settings: partial differential equations \cite{Simon83}, Hamilton-Jacobi-Bellman equations and control theory (see \cite{Trelat06} and references therein), continuous optimization \cite{Ioff09}. We strongly suspect that definability is a simple means to ensure the existence of a value for stochastic games.
Another very important motivation for working within these structures is their omnipresence in finite-dimensional models and applications~(see e.g.\ \cite{Ioff09} and the last section).
The aim of this article is therefore to consider stochastic games --with a strong focus on their asymptotic properties-- in this o-minimal framework. We always assume that the set of states is finite and we say that a stochastic game is definable in some o-minimal structure if all its data (action sets, payoff and transition functions) are definable in this structure. The central issue behind this work is probably: \begin{itemize} \item[ ($\mathcal{Q}$)] {\em Do definable stochastic games have definable Shapley operators }? \end{itemize} As we shall see, this question plays a pivotal role in the study of stochastic games. It seems however difficult to solve it in its full generality and we are only able to give here partial results. We prove in particular that any stochastic game with definable, separable reward and transition functions (e.g.\ polynomial games) yields a Shapley operator which is definable in the same structure. The separability assumption is important to ensure definability in the same structure; we indeed describe a rather simple semi-algebraic game whose Shapley operator is globally subanalytic but not semi-algebraic. The general question of knowing whether a definable game has a Shapley operator definable in a possibly larger structure remains fully open.
An important consequence of the definability of the Shapley operator is the existence of a uniform value for the corresponding game (Theorem~\ref{conv}). The proof of this result is based on the techniques and results of both \cite{Ney03} and \cite{MertNeym81}. For games having a Shapley operator definable in a polynomially bounded structure, we also show, in the spirit of Milman \cite{Milman02}, that the rate of convergence is of the form $O(\frac{1}{n^{\gamma}})$ for some positive $\gamma$.
These results are used in turn to study games with arbitrary continuous reward functions (not necessarily definable), separable and definable transition functions and compact action sets. Using the Stone-Weierstrass and Mertens-Neyman theorems, we indeed establish that such games have a uniform value (Theorem~\ref{t:value}). This considerably generalizes previous results; for instance, our central results imply that: \begin{itemize} \item definable games in which one player has finitely many actions, \item games with polynomial transition functions, \item games with perfect information and definable transition functions, \item games with switching control and definable transition functions, \end{itemize} have a uniform value.
The above results show that most of the asymptotic complexity of a stochastic game lies in its dynamics, i.e.\ in its transition function. This intuition has been reinforced by a recent follow-up work by Vigeral~\cite{Vipreprint} which shows, through a counterexample to the convergence of values, that the o-minimality of the underlying stochastic process is a crucial assumption. The example involves finitely many states, simple compact action sets, and continuous transition and payoff functions, but the transition functions are typically not definable since they oscillate infinitely many times on a compact set.
We also include an application to a class of maps arising in risk sensitive control~\cite{fleming,hernandez,akian} and in nonlinear Perron-Frobenius theory (growth minimization in population dynamics). In this context, one considers a self-map $T$ of the interior of the standard positive cone of $\mathbb{R}^d$, and looks for conditions ensuring the existence of the geometric growth rate $\lim_{k\to\infty}[T^k(e)]_i^{1/k}$, where $e$ is an arbitrary vector in the interior of this cone. This leads to examples of Shapley operators, namely, the conjugates of $T$ by ``log-glasses'' (i.e., log-log coordinates), that are definable in the log-exp structure. This is also motivated by tropical geometry~\cite{viro}. The latter can be thought of as a degenerate limit of classical geometry through log-glasses. This limit process is called ``dequantization''; the inverse process sends Shapley operators to (non-linear) Perron-Frobenius operators. This shows that the familiar o-minimal structure used in game theory, consisting of real semi-algebraic sets, is not the only useful one in the study of Shapley operators. We note that other o-minimal structures, like the one involving absolutely converging Hahn series constructed by van den Dries and Speissegger~\cite{vandendries98}, are also relevant in potential applications to tropical geometry. \label{vdd}
The paper is structured as follows. The first sections give a basic primer on the theory of o-minimal structures and on stochastic games. We introduce in particular definable zero-sum stochastic games and discuss several subclasses of games. The main result there is the following: if the Shapley operator of a game is definable in an o-minimal structure, this game has a uniform value. Since the Shapley operator is itself a one-shot game where the expectation of the future payoffs acts as a parameter, we study one-shot parametric games in Section~\ref{sec-one-shot}. We prove that the value of a parametric definable game is itself definable in two cases: either if the game is separable, or if the payoff is convex. These results are in turn used in Section~\ref{sectionvaluesto} to prove the existence of a uniform value for several classes of games including separably definable games. We finally point out an application to a class of ``log-exp'' maps arising in population dynamics (growth minimization problems) and in risk sensitive control.
\section{O-minimal structures}
O-minimal structures play a fundamental role in this paper; we recall here the basic results that we shall use throughout the article. Some references on the subject are
van den Dries \cite{Dries98}, van\,den\,Dries-Miller \cite{Dries-Miller96}, Coste \cite{Coste99}.
For a given $p$ in $\mathbb{N}$, the collection of subsets of $\mathbb{R}^p$ is denoted by ${\mathcal P}(\mathbb{R}^p)$. \begin{definition}[o-minimal structure, {\cite[Definition\,1.5]{Coste99}}] \label{d:omin}
An \NEW{o-minimal} structure on $(\mathbb{R},+,.)$ is a sequence of Boolean algebras ${\cal O}=(\mathcal{O}_{p})_{p\in \mathbb{N}}$ with $\mathcal{O}_{p}\subset{\mathcal P}(\mathbb{R}^p)$, such that for each $p\in\mathbb{N}$:
\begin{enumerate}\itemsep=1mm \item[(i)]
if $A$ belongs to $\mathcal{O}_{p}$, then $A\times\mathbb{R}$ and $\mathbb{R}\times A$ belong to $\mathcal{O}_{p+1}$ ; \item[(ii)]
if $\Pi:\mathbb{R}^{p+1}\rightarrow\mathbb{R}^p$ is the canonical projection onto $\mathbb{R}^p$ then for any $A$ in $\mathcal{O} _{p+1}$, the set $\Pi(A)$ belongs to $\mathcal{O}_{p}$ ; \item[(iii)]
$\mathcal{O}_{p}$ contains the family of real algebraic subsets of $\mathbb{R}^p$, that is, every set of the form \[ \{x\in\mathbb{R}^p:g(x)=0\}, \] where $g:\mathbb{R}^p\rightarrow\mathbb{R}$ is a real polynomial function ; \item[(iv)] the elements of $\mathcal{O}_{1}$ are exactly the finite unions of intervals. \end{enumerate} \end{definition}
\noindent A subset of $\mathbb{R}^p$ which belongs to an o-minimal structure $\cal O$ is said to be definable in $\cal O$ or simply definable. A mapping $F:S\subset \mathbb{R}^p\rightarrow \mathbb{R}^q$ is called definable (in ${\cal O}$), if its graph $\{(x,y)\in\mathbb{R}^p\times\mathbb{R}^q:y=F(x)\}$ is definable (in $\cal O$) as a subset of~$\mathbb{R}^p\times\mathbb{R}^q$. Similarly if $g:\mathbb{R}^p\rightarrow (-\infty,+\infty]$ (resp.\ $g:\mathbb{R}^p\rightarrow [-\infty,+\infty)$) is a real-extended-valued function, it is called definable (in $\mathcal O$), if its graph $\{(x,r)\in\mathbb{R}^p\times \mathbb{R}: g(x)=r\}$ is definable (in $\mathcal O$).
\begin{remark}\label{R.tame}\noindent The smallest o-minimal structure is given by the class $\cal SA$ of {\em real semi-algebraic }objects(\footnote{This is due to axiom (iii). Sometimes this axiom is weakened \cite{Dries98}, allowing smaller classes than ${\cal SA }$, for instance the structure of {\em semilinear sets}.}). We recall that a set $A \subset \mathbb{R}^p$ is called semi-algebraic if it can be written as $$A=\,\bigcup_{j=1}^{l}\,\bigcap_{i=1}^{k}\,\{\,x\in \mathbb{R}^p : g_{ij}(x)=0,\, h_{ij}(x)<0\},$$ where the $g_{ij}, h_{ij} : \mathbb{R}^p\rightarrow \mathbb{R}$ are real polynomial functions on $\mathbb{R}^p$. The fact that ${\cal SA }$ is an o-minimal structure stems from the Tarski-Seidenberg principle (see \cite{Boc98}) which asserts the validity of item (ii) in this class. \end{remark}
The following result is an elementary but fundamental consequence of the definition. \begin{proposition}[{\cite{Dries-Miller96}}]\label{p:image} Let $A\subset \mathbb{R}^p$ and $g:A\rightarrow\mathbb{R}^q$ be definable objects.\\ (i) Let $B\subset A$ be a definable set. Then $g(B)$ is definable.\\ (ii) Let $C\subset\mathbb{R}^q$ be a definable set. Then $g^{-1}(C)$ is definable. \end{proposition}
One can already guess from the above definition and proposition that definable sets behave qualitatively as semi-algebraic sets. The reader is referred to \cite{Dries-Miller96,Coste99} for a comprehensive account on the topic.
\begin{example}[max and min functions]\label{ex} In order to illustrate these stability properties, let us consider nonempty subsets $A,B$ of $\mathbb{R}^p,\mathbb{R}^q$ respectively, and $g:A\times B\rightarrow \mathbb{R}$ a definable function. Note that the projection axiom applied on the graph of $g$ ensures the definability of both $A$ and $B$. Set $h(x)=\inf_{y\in B} g(x,y)$ for all $x$ in $A$ and let us establish the definability of $h$; note that the domain of $h$, i.e.\ $\operatorname{dom} h=\{x\in A:h(x)> -\infty\}$ may be smaller than $A$ and possibly empty. The graph of $h$ is given by $$\mbox{graph}\, h:=\left\{ (x,r)\in A\times\mathbb{R}: \left(\forall y\in B, g(x,y)\geqslant r \right)\mbox{ and } \left(\forall \epsilon>0,\exists y\in B, g(x,y)<r+\epsilon \right) \right\}.$$
As explained below, the assertion \begin{equation}\label{FO1} \left(\left(\forall y\in B, g(x,y)\geqslant r \right)\mbox{ and } \left(\forall \epsilon>0,\exists y\in B, g(x,y)<r+\epsilon \right)\right), \end{equation} is called a first order definable formula, but the main point for the moment is to prove that such a formula necessarily describes a definable set.
Consider the sets \[T=\left\{(x,r)\in A\times \mathbb{R}: \forall \epsilon>0,\exists y\in B, g(x,y)<r+\epsilon \right\},\]
\[S_0=\left\{(x,y,r,\epsilon)\in A\times B\times \mathbb{R}\times (0,+\infty): g(x,y)-r-\epsilon<0\right\}.\] $S_0$ is definable by Proposition~\ref{p:image}(ii). We wish to prove that $T$ is definable.
Projecting $S_0$ via $\Pi(x,y,r,\epsilon)=(x,r,\epsilon)$, one obtains the definable set $S_1=\{(x,r,\epsilon)\in A\times \mathbb{R}\times (0,+\infty): \exists y \in B, g(x,y)-r-\epsilon<0\}.$ Introducing $\Pi'(x,r,\epsilon)=(x,r)$, we see that $T$ can be expressed as
$$\left(A\times \mathbb{R}\right) \setminus \Pi'\left(E\right)$$ \noindent with $E:=\left(A\times \mathbb{R} \times (0,+\infty)\right)\setminus S_1$.
Since projections and complements of definable sets are definable, $T$ is definable. Using this type of idea and Definition~\ref{d:omin}, we can prove similarly that
$$T'=\{(x,r)\in A\times \mathbb{R}: \forall y\in B, g(x,y)\geqslant r \}$$
is definable. Hence $\mbox{graph}\, h=T\cap T'$ is definable and thus $h$ is definable. \end{example}
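As a further illustration of the same mechanism (a standard fact, not needed in the sequel), the distance function to a definable set is definable: if $A\subset\mathbb{R}^p$ is a nonempty definable set, then
$$d_A(x)=\inf_{y\in A}\|x-y\|,\qquad x\in\mathbb{R}^p,$$
is the function $h$ of the example with $g(x,y)=\|x-y\|$ (a semi-algebraic, hence definable, function) and with the infimum taken over the definable set $A$, and is therefore definable.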
The most common method to establish the definability of a set is thus to interpret it as the result of a finite sequence of basic operations on definable sets (projection, complement, intersection, union). This idea is conveniently captured by the notion of a first order definable formula (when no confusion can occur we shall simply say first order formula). {\em First order definable formulas } are built inductively according to the following rules:
\begin{itemize}
\item If $A$ is a definable set, $x\in A$ is a first order definable formula
\item If $P(x_1,\ldots,x_p)$ and $Q(x_1,\ldots,x_q)$ are first order definable formulas then
({$\mbox{not }P$}), ($P\mbox{ and }Q$), and ($P\mbox{ or }Q$) are first order definable formulas.
\item Let $A$ be a definable subset of $\mathbb{R}^p$ and $P(x_1,\ldots,x_p,y_1,\ldots,y_q)$ a first order definable formula then both
$$\begin{array}{l}
(\exists x\in A, P(x,y))\\
(\forall x\in A, P(x,y))
\end{array}$$
are first order definable formulas.
\end{itemize}
Note that Proposition~\ref{p:image} ensures that $``g(x_1,\ldots,x_p)=0"$ or $``g(x_1,\ldots,x_p)<0"$ are first order definable formulas whenever $g:\mathbb{R}^p\rightarrow\mathbb{R}$ is definable (e.g.\ polynomial). Note also that \eqref{FO1} is, as announced earlier, a first order definable formula.\\ It is then easy to check, by induction, that: \begin{proposition}[\cite{Coste99}]\label{p:FO1} If $\Phi(x_1,\ldots,x_p)$ is a first order definable formula, then $\{(x_1,\ldots,x_p)\in\mathbb{R}^p:\Phi(x_1,\ldots,x_p)\}$ is a definable set. \end{proposition}
\begin{remark} A rigorous treatment of these aspects of o-minimality can be found in \cite{Mark02}. \end{remark}
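For instance, given a definable set $A\subset\mathbb{R}^p$, its closure is cut out by the first order definable formula
$$\overline{A}=\left\{x\in\mathbb{R}^p:\ \forall \epsilon>0,\ \exists y\in A,\ \|x-y\|<\epsilon\right\},$$
so that closures (and, by taking complements, interiors and boundaries) of definable sets are again definable.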
An easy consequence of the above proposition, which we shall use repeatedly and in various forms, is the following.
\begin{proposition}\label{defder} Let $\Omega$ be a definable open subset of $\mathbb{R}^n$ and $g:\Omega\rightarrow\mathbb{R}^m$ a definable differentiable mapping. Then its derivative $g'$ is definable. \end{proposition}
There exist many regularity results for definable sets \cite{Dries-Miller96}. In this paper, we essentially use the following fundamental lemma.
\noindent Let ${\cal O}$ be an o-minimal structure on $(\mathbb{R},+,.)$.
\begin{theoremofothers}[Monotonicity Lemma {\cite[Theorem\,4.1]{Dries-Miller96}}] Let $f : I\subset \mathbb{R} \rightarrow \mathbb{R}$ be a definable function and $k\in \mathbb{N}$. Then there exists a finite partition of $I$ into $l$ disjoint intervals $I_1,\ldots,I_l$ such that $f$ restricted to each nontrivial interval $I_j$, $j\in\{1,\ldots,l\}$ is $C^k$ and either strictly monotone or constant. \end{theoremofothers}
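As an illustration of the strength of this lemma, consider the function
$$f(t)=t\sin(1/t),\qquad t\in(0,1).$$
If $f$ were definable in some o-minimal structure, the monotonicity lemma would provide an interval $(0,\delta)$ on which $f$ is monotone or constant, which is impossible since $f$ has infinitely many local extrema near $0$. Oscillations of this kind are precisely what the definability assumptions used in this paper rule out (compare with the counterexample of Vigeral mentioned in the introduction).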
\noindent We end this section by giving examples of o-minimal structures (see \cite{Dries-Miller96} and references therein).
\noindent {\bf Examples} (a) {\bf (globally subanalytic sets)} There exists an o-minimal structure, that contains all sets of the form $\{(x,t)\in [-1,1]^p\times\mathbb{R}: f(x)=t\}$ where $f:[-1,1]^p\rightarrow \mathbb{R}$ ($p\in \mathbb{N}$) is an analytic function that can be extended analytically on a neighborhood of the box~$[-1,1]^p$. The sets belonging to this structure are called {\em globally subanalytic sets}; see \cite{Dries-Miller96} and also \cite{Bie88} for an account on subanalytic geometry.
For instance the functions $$\sin : [-a,a]\rightarrow \mathbb{R} $$ (where $a$ ranges over $\mathbb{R}_+$) are globally subanalytic, while $\sin :\mathbb{R}\rightarrow\mathbb{R}$
is not (else the set $\sin^{-1}(\{0\})$ would be finite by Proposition~\ref{p:image}(ii) and Definition~\ref{d:omin}(iv)).
\noindent (b) {\bf (log-exp structure)} There exists an o-minimal structure containing the globally subanalytic sets and the graph of $\exp\,: \mathbb{R}\rightarrow \mathbb{R}$.
We shall also use a more ``quantitative" characteristic of o-minimal structures.
\begin{definition}[Polynomially bounded structures]\label{d:polbon}
An o-minimal structure is called polynomially bounded if for every definable function $\psi:(a,+\infty)\rightarrow \mathbb{R}$ there exist a positive constant $C$ and an integer $N$ such that $|\psi(t)|\leqslant Ct^N$ for all $t$ sufficiently large. \end{definition}
The classes of semi-algebraic sets or of globally subanalytic sets are polynomially bounded \cite{Dries-Miller96}, while the log-exp structure is obviously not.
We have the following result in the spirit of the classical Puiseux development of semi-algebraic mappings, which will be used in the proof of Theorem~\ref{conv} below.
\begin{corollary}[\cite{Dries-Miller96}]\label{c:polbon} If $\epsilon>0$ and $\phi:(0,\epsilon)\rightarrow \mathbb{R} $ is definable in a polynomially bounded o-minimal structure, then there exist $c\in \mathbb{R}$ and $\alpha\in \mathbb{R}$ such that $$\phi(t)=ct^{\alpha}+o(t^{\alpha})\quad \mbox{as } t\rightarrow 0^+.$$ \end{corollary}
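As a simple illustration, the semi-algebraic function $\phi(t)=\sqrt{t+t^2}$ satisfies
$$\phi(t)=\sqrt{t}\,\sqrt{1+t}=t^{1/2}+\tfrac{1}{2}\,t^{3/2}+o(t^{3/2}),$$
so the corollary holds with $c=1$ and $\alpha=1/2$. By contrast, the function $\phi(t)=1/\ln(1/t)$, which is definable in the log-exp structure, tends to $0$ more slowly than every positive power of $t$, so it admits no expansion $ct^{\alpha}+o(t^{\alpha})$ with a nonzero leading coefficient; this is consistent with the fact that the log-exp structure is not polynomially bounded.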
\section{Stochastic games} \subsection{Definitions and fundamental properties}\label{Defsto}
\noindent {\bf Stochastic games: definition.} A stochastic game is determined by
\begin{itemize} \item Three sets: a finite set of {\em states} $\Omega$, with cardinality $d$, and two nonempty sets of {\em actions} $X\subset \mathbb{R}^{p}$ and $Y\subset \mathbb{R}^{q}$. \item A {\em payoff function} $g:\Omega\times X\times Y\rightarrow \mathbb{R}$ and a {\em transition} probability $\rho : \Omega\times X\times Y\rightarrow \Delta(\Omega)$, where $\Delta(\Omega)$ is the set of probabilities over $\Omega$. \end{itemize} Such a game is denoted by $(\Omega,X,Y,g,\rho)$. Unless explicitly specified, we will always assume the following, which guarantees that
the finite horizon and discounted values
do exist.
\begin{center} \fbox{\begin{minipage}{0.7\textwidth} Standing assumptions ({$\cal A$}): The reward function $g$ and the transition function $\rho$ are {\em continuous}; both action sets $X,Y$ are
{\em nonempty compact sets}.\end{minipage}} \end{center}
\noindent
{\bf Strategies and values.} The game is played as follows. At time $n=1$, the state $\omega_1$ is known by both players, player 1 (resp.\ 2) makes a move $x_1$ in $X$ (resp.\ $y_1$ in $Y$), the resulting payoff is $g_1:=g(x_1,y_1,\omega_1)$ and the couple $(x_1,y_1)$ is observed by the two players. The new state $\omega_2$ is drawn according to the probability distribution $\rho(\cdot|x_1,y_1,\omega_1)$, both players observe this new state and can thus play accordingly. This process goes on indefinitely and generates a stream of actions $x_i,y_i$, states $\omega_i$ and payoffs $g_i=g(x_i,y_i,\omega_i)$. Denote by $H_n=(\Omega\times X\times Y)^n\times \Omega$ the set of histories of length\footnote{This is the set of histories at the end of the $n$-th stage, with the convention that $n=0$ before the first stage.} $n$, $H=\cup_{n\in\mathbb{N}}H_n$ the set of all finite histories and $H_{\infty}=(\Omega\times X\times Y)^{\mathbb{N}}$ the set of infinite histories. A strategy for player 1 (resp.\ player 2) is a mapping $$\sigma:H\rightarrow \Delta(X)\quad(\mbox{resp.\ }\tau:H\rightarrow \Delta(Y)).$$ A triple $(\sigma,\tau,\omega_1)$ defines a probability measure on $H_{\infty}$ whose expectation is denoted $\mathbb{E}_{\sigma,\tau,\omega_1}$. The stream of payoffs corresponding to the triple $(\sigma,\tau,\omega_1)$ can be evaluated, at time $n$, as \begin{equation} \gamma_n(\sigma,\tau,\omega_1)=\frac{1}{n}\left(\mathbb{E}_{\sigma,\tau,\omega_1}\left(\sum_{i=1}^n g_i\right) \right). \end{equation} The corresponding game is denoted by $\Gamma_n$; Assumption ({$\cal A$}) allows us to apply Sion's Theorem~
\cite[Theorem A.7, p. 156]{sorin02}, which shows that this game has a value $v_n(\omega_1)$ or simply $(v_n)_1$.
When the sequence $v_n=((v_n)_1,\ldots,(v_n)_d)$ converges as $n$ tends to infinity the stochastic game is said to have an asymptotic value.
Another possibility for evaluating the stream of payoffs is to rely on a discount factor $\lambda\in]0,1]$ and to consider the game $\Gamma_{\lambda}$ with payoff \begin{equation} \gamma_\lambda(\sigma,\tau,\omega_1)=\mathbb{E}_{\sigma,\tau,\omega_1}\left(\lambda\sum_{i=1}^{+\infty}(1-\lambda)^{i-1}g_i\right). \end{equation} Applying once more Sion's result, this game has a value which we denote by $v_{\lambda}(\omega_1)$ or simply $(v_{\lambda})_1$. The vector $v_{\lambda}$ is defined as $v_{\lambda}=((v_{\lambda})_1,\ldots,(v_{\lambda})_d)$. One of the central questions of this paper is to find sufficient conditions to have $$\lim_{n\rightarrow +\infty} v_n=\lim_{\lambda\rightarrow 0,\,\lambda>0}v_{\lambda}.$$
\noindent {\bf Shapley operator and Shapley's theorem.} Let us now describe the fundamental result of Shapley which provides an interpretation of the value of the games $\Gamma_n$ as rescaled iterates of a nonexpansive mapping. In the same spirit, the discounted values $v_{\lambda}$ appear as fixed points of a family of contractions.
Let $(\Omega, X,Y, g,\rho)$ be an arbitrary stochastic game. The Shapley operator associated to such a game is a mapping $\Psi:\mathbb{R}^d\rightarrow \mathbb{R}^d$, whose $k$th component is defined through \begin{equation}\label{op_shapley}
\Psi_k(f_1,\ldots,f_d)=\max_{\mu \in \Delta(X)}\min_{\nu \in \Delta(Y)} \int_X\int_Y\left[ g(x,y,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|x,y,\omega_k)f_i\right]d\mu(x) \,d\nu(y). \end{equation} Observe as before, that the maximum and the minimum can be interchanged in the above formula. The space $\mathbb{R}^d$ can be
thought of as the set of {\em value functions ${\mathcal F}(\{1,\ldots,d\};\mathbb{R})$}, i.e.\ the functions which map $\{1,\ldots,d\}\simeq \Omega$ (set of states) to $\mathbb{R}$ (space of real values). It is known that a self-map $\Psi$ of $\mathbb{R}^d$ can be represented as the Shapley operator of some stochastic game -- which does not necessarily satisfy assumption ({$\cal A$}) --
if and only if it preserves the standard partial order of $\mathbb{R}^d$ and commutes with the addition of a constant~\cite{kolokoltsov}. Moreover, the transition probabilities can be even required to be degenerate (deterministic), see~\cite{singer00,mxpnxp0}.
\begin{theorem}[Shapley, \cite{Shap53}]$\;$\\ (i) For every positive integer $n$, the value $v_n$ of the game $\Gamma_n$ satisfies $v_n=\frac{1}{n}\Psi^n(0)$.\\ (ii) The value $v_{\lambda}$ of the discounted game $\Gamma_{\lambda}$ is characterized by the following fixed point condition \begin{equation}\label{fixed_point} v_{\lambda}=\lambda\Psi(\frac{1-\lambda}{\lambda}\,v_{\lambda}). \end{equation} \end{theorem}
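For finite state and action sets, Shapley's theorem translates directly into a numerical procedure: $v_n$ is obtained by iterating $\Psi$ from $0$, and $v_\lambda$ as $\lambda$ times the fixed point of the contraction $f\mapsto\Psi((1-\lambda)f)$. The following Python fragment is a minimal sketch of this procedure; it is not part of the original development, it assumes that \texttt{numpy} and \texttt{scipy} are available, and the helper names (\texttt{matrix\_game\_value}, \texttt{shapley\_operator}) are ours. The value of each one-shot matrix game is computed through the classical linear programming formulation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A (the row player maximizes)."""
    shift = 1.0 - A.min()                      # make all entries positive
    B = A + shift
    m, n = B.shape
    # min 1'p  s.t.  B'p >= 1, p >= 0 ;  the value of B is then 1/sum(p)
    res = linprog(np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return 1.0 / res.fun - shift

def shapley_operator(f, g, rho):
    """g[k]: payoff matrix in state k; rho[k][i, j, l] = P(state l | i, j, k)."""
    return np.array([matrix_game_value(g[k] + rho[k] @ f) for k in range(len(g))])

def v_n(n, g, rho):
    """v_n = Psi^n(0) / n  (item (i) of Shapley's theorem)."""
    f = np.zeros(len(g))
    for _ in range(n):
        f = shapley_operator(f, g, rho)
    return f / n

def v_lambda(lam, g, rho, tol=1e-10):
    """v_lambda = lam * V, where V is the fixed point of f -> Psi((1 - lam) f)."""
    V = np.zeros(len(g))
    while True:
        W = shapley_operator((1.0 - lam) * V, g, rho)
        if np.max(np.abs(W - V)) < tol:
            return lam * W
        V = W
\end{verbatim}
For a finite game encoded in this (hypothetical) format, \texttt{v\_n(n, g, rho)} and \texttt{v\_lambda(lam, g, rho)} approximate the quantities whose asymptotics are studied below.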
\noindent {\bf Uniform value.} A stochastic game is said to have a uniform value $v_{\infty}$ if both players can almost guarantee $v_{\infty}$ provided that the length of the $n$-stage game is large enough. Formally, $v_{\infty}$ is the uniform value of the game if for any $\epsilon>0$, there is a pair of strategies $(\sigma, \tau)$, one for each player, and a time $N$ such that, for every $n\geqslant N$, every starting state $\omega_1$ and all strategies $\sigma'$ and $\tau'$, \begin{eqnarray*} \gamma_n(\sigma,\tau',\omega_1)&\geqslant &v_{\infty}(\omega_1)-\epsilon,\\ \gamma_n(\sigma',\tau,\omega_1)&\leqslant &v_{\infty}(\omega_1)+\epsilon. \end{eqnarray*}
It is straightforward to establish that if a game has a uniform value $v_\infty$, then $v_n$ and $v_\lambda$ converge to $v_\infty$. The converse is not true however, as there are games with no uniform value but for which $v_n$ and $v_\lambda$ converge~\cite{MeZa71}.
\noindent {\bf Some subclasses of stochastic games.}\begin{itemize} \item {\em Markov Decision Processes} : they correspond to one-player stochastic games (the choice of Player 2 has no influence on payoffs or transitions). In this case the Shapley operator has the particular form
\begin{equation}
\Psi_k(f_1,\ldots,f_d)=\max_{x \in X} \left[g(x,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|x,\omega_k)f_i\right] \end{equation} \noindent for every $k=1,\ldots,d$.
\item {\em Games with perfect information }: each state is entirely controlled by one of the players (i.e.\ the action of the other player has no influence on the payoff in this state nor on transitions from this state). In that case, the Shapley operator has a specific form : for any state $\omega_k$ controlled by Player 1, \begin{equation}
\Psi_k(f_1,\ldots,f_d)=\max_{x \in X} \left[g(x,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|x,\omega_k)f_i\right], \end{equation} \noindent and for any state $\omega_k$ controlled by Player 2, \begin{equation}
\Psi_k(f_1,\ldots,f_d)=\min_{y \in Y} \left[g(y,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|y,\omega_k)f_i\right]. \end{equation}
\item {\em Games with switching control }: in each state the transition is entirely controlled by one of the players (i.e.\ the action of the other player has no influence on transitions from this state, but it may alter the payoff). In that case, the Shapley operator has a specific form: for any state $\omega_k$ where the transition is controlled by Player 1, \begin{equation}
\Psi_k(f_1,\ldots,f_d)=\max_{\mu \in \Delta(X)}\int_X\left[ \min_{y \in Y} g(x,y,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|x,\omega_k)f_i\right] d\mu(x), \end{equation} \noindent and for any state $\omega_k$ where the transition is controlled by Player 2, \begin{equation}
\Psi_k(f_1,\ldots,f_d)=\min_{\nu \in \Delta(Y)}\int_Y\left[\max_{x \in X} g(x,y,\omega_k)+\sum_{i=1}^d \rho(\omega_i|y,\omega_k)f_i \right] d\nu(y). \end{equation} \end{itemize}
\noindent
\begin{remark}\label{remarqueperfectinfo} Recall that we made assumption ({$\cal A$}) in order to prove the existence of $v_\lambda$ and $v_n$. For Markov decision processes and games with perfect information this existence is automatic whenever the payoff is bounded, hence there is no need to assume continuity of $g$ or~$\rho$. \end{remark}
\noindent {\bf Definable stochastic games.} Let $\cal O$ be an o-minimal structure. A stochastic game is called {\em definable} if both the payoff function and the probability transition are definable functions.\\
Observe in the above definition that the definability of $g$ implies that the action sets are also definable. Note also that the space $\Delta(\Omega)$ is naturally identified with the simplex of $\mathbb{R}^d$ and is thus a semi-algebraic set. Hence there is no possible ambiguity when we assume that transition functions are definable.
The questions we shall address in the sequel revolve around the following two:
\begin{enumerate} \item[(a)] Under which conditions is the Shapley operator of a definable game definable in the {\em same} o-minimal structure? \item[(b)] If the Shapley operator of a game is definable, what are the consequences in terms of game values? \end{enumerate}
In the next subsection we answer the second question in a satisfactory way: if a Shapley operator is definable, then $v_n$ and $v_\lambda$ converge to the same limit. The first question is more complex and will be partially answered in Section~\ref{sectionvaluesto}.
\subsection{Games with definable Shapley operator have a uniform value}
Let $\cal O$ be an o-minimal structure and $d$ be a positive integer. We recall the following definition: a subset $\mathcal{K}\subset\mathbb{R}^d$ is called a {\em cone} if it satisfies $\mathbb{R}_+\mathcal{K}\subset \mathcal{K}$.\\
Let $\|\cdot \|$ be a norm on $\mathbb{R}^d$. A mapping $\Psi :A\subset\mathbb{R}^d\rightarrow\mathbb{R}^d$ is called {\em nonexpansive} if
$$\|\Psi(f)-\Psi(g) \|\leqslant \|f-g \|,$$ whenever $f,g$ are in $A$. Let us recall that the Shapley operator of a stochastic game is nonexpansive with respect to the supremum norm (see \cite{sorin02}), norm which is defined as usual by $\|f\|_{\infty}=\max\{ |f_i|:i=1,\ldots,d\}$.
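For the reader's convenience, let us recall the short argument behind this last fact. From formula \eqref{op_shapley}, $\Psi$ is monotone ($f\leqslant f'$ componentwise implies $\Psi(f)\leqslant \Psi(f')$) and commutes with the addition of constants ($\Psi(f+c\mathbf{1})=\Psi(f)+c\mathbf{1}$ for all $c\in\mathbb{R}$, with $\mathbf{1}=(1,\ldots,1)$), because the weights $\rho(\omega_i|x,y,\omega_k)$ are nonnegative and sum to $1$. Hence
$$\Psi(f)\leqslant \Psi\big(f'+\|f-f'\|_{\infty}\mathbf{1}\big)=\Psi(f')+\|f-f'\|_{\infty}\mathbf{1},$$
and exchanging the roles of $f$ and $f'$ yields $\|\Psi(f)-\Psi(f')\|_{\infty}\leqslant\|f-f'\|_{\infty}$.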
The following abstract result is strongly motivated by the operator approach to stochastic games, i.e.\ the approach in terms of Shapley operator (see Sorin \cite{Sorin03}). It builds on the work of Bewley-Kohlberg \cite{BewlKohl76} and on its refinement by Neyman \cite[Th.~4]{Ney03}, who showed that the convergence of the iterates $\Psi^n(0)/n$ as $n\to \infty$ is guaranteed if the map $\lambda \to v_\lambda$ has bounded variation, and deduced part $(i)$ of the following theorem in the specific case of a semi-algebraic operator \cite[Th.~5]{Ney03}.
\begin{theorem}[Nonexpansive definable mappings]\label{conv} The vector space $\mathbb{R}^d$ is endowed with an arbitrary norm $\|\cdot \|$. Let $\mathcal{K}$ be a nonempty definable closed cone of $\mathbb{R}^d$ and $\Psi:\mathcal{K}\rightarrow\mathcal{K}$ a definable nonexpansive mapping. Then \begin{itemize} \item[(i)] There exists $v$ in $\mathcal{K}$ such that for all $f$ in $\mathcal{K}$, the sequence $\frac{1}{n}\Psi^n(f)$ converges to $v$ as $n$ goes to infinity. \item[(ii)] When in addition $\Psi$ is definable in a polynomially bounded structure, there exist $\theta \in ]0,1[$ and $c>0$ such that
$$\| \frac{\Psi^n(f)}{n}-v\| \leqslant \frac{c}{n^\theta}+\frac{\|f\|}{n},$$ for all $f$ in $\mathcal{K}$. \end{itemize}
\end{theorem} \begin{proof}{Proof.} For any $\lambda\in(0,1]$, we can apply the Banach fixed point theorem to define $V_\lambda$ as the unique fixed point of the map $\Psi((1-\lambda)\,\cdot)$ and set $v_\lambda=\lambda V_\lambda$ (recall that $\mathcal{K}$ is a cone). The graph of $\lambda\rightarrow V_\lambda$ is given by $\{(\lambda,\,f)\in (0,1]\times\mathcal{K}:\Psi((1-\lambda)f)-f=0\}$. Using Proposition~\ref{p:FO1}, we obtain that $\lambda\rightarrow V_{\lambda}$ and
$\lambda\rightarrow v_{\lambda}$ are definable in $\mathcal O$. Observe also that \begin{eqnarray*}
\|V_\lambda\|&=&\|\Psi((1-\lambda)V_\lambda)\|\\
&\leqslant&\|\Psi((1-\lambda)V_\lambda)-\Psi(0)\|+\|\Psi(0)\|\\
&\leqslant&\|(1-\lambda)V_\lambda\|+\|\Psi(0)\|
\end{eqnarray*}
\noindent so that $\lambda\|V_\lambda\|\leqslant\|\Psi(0)\|$, i.e.\ the curve $\lambda\rightarrow v_\lambda$ is bounded in norm by $\|\Psi(0)\|$. Applying
the monotonicity lemma to each component of this curve, we obtain that $v_{\lambda}$ is piecewise $C^1$ and has a limit as $\lambda$ goes to $0$, which we denote by $v=v_{0}$. In order to establish that
\begin{equation}\label{eq:int}
\int_0^1\|\frac{d}{d\lambda}v_\lambda\|\,d\lambda<+\infty,
\end{equation}
we first observe that there exists a constant $\kappa>0$ such that $\|\cdot\|\leqslant \kappa \|\cdot\|_1$ (all norms on $\mathbb{R}^d$ being equivalent).
It suffices thus to establish that \eqref{eq:int} holds for the specific case of the 1-norm. Applying simultaneously the monotonicity lemma to the coordinate functions of $v_{\lambda}$, we obtain the existence of $\epsilon \in (0,1)$ such that $v_{\lambda}$ is $C^1$ on $(0,\epsilon)$ and such that each coordinate is monotone on this interval.
This shows that \[
\int_0^{\epsilon} \left\|\frac{d}{d\lambda}v_\lambda\right\|_{1} d\lambda=\sum_{i=1}^d
\int_0^{\epsilon}|v'_{\lambda}(\omega_i)| d\lambda=\sum_{i=1}^d \left|(v_{\epsilon})(\omega_i)-(v_{0})(\omega_i)\right|=\|v_{\epsilon}-v_0\|_1, \]
so the integral is finite near $0$; arguing similarly on each of the finitely many remaining intervals of the partition provided by the monotonicity lemma, \eqref{eq:int} follows.
Let $\bar \lambda$ be such that $\lambda\rightarrow v_{\lambda}$ is $C^1$ on $(0,\bar\lambda)$. Let $\mu<\lambda$ be in $(0,\bar\lambda)$.
Then for any decreasing sequence $(\lambda_i)_{i\in \mathbb{N}}$ in $(\mu,\lambda)$, we have
\begin{eqnarray}\label{estim1}
\sum_{i=1}^{+\infty} \|v_{\lambda_{i+1}}-v_{\lambda_i}\|\leqslant \int_{\mu}^{\lambda}\left\|\frac{d}{ds} v_{s}\right\|ds. \end{eqnarray} Indeed
$\|v_{\lambda_{i+1}}-v_{\lambda_i}\|\leqslant\left\| \int_{\lambda_{i+1}}^{\lambda_i}\frac{d}{ds}v_{s}\,ds\right\|\leqslant \int_{\lambda_{i+1}}^{\lambda_i}\left\|\frac{d}{ds}v_{s}\right\|ds,$ so that the result follows by summation.
The map $\lambda\rightarrow v_\lambda$ is thus of bounded variation, and $(i)$ follows from Neyman's proof that the latter property implies the convergence of $\Psi^n(0)/n$ to the limit $v:=\lim_{\lambda\to 0^+} v_\lambda$~\cite{Ney03}. Some intermediary results in Neyman's proof
are necessary to establish the rate of convergence of $(ii)$; we thus include the
remaining part of the proof of $(i)$. First observe that
\begin{equation}\label{estim0}\|\frac{1}{n}\Psi^n(f)-\frac{1}{n}\Psi^n(0)\|\leqslant \frac{1}{n}\|f\|,\; \forall f\in \mathcal{K} \end{equation} for all positive integers $n$, so it suffices to establish the convergence result for $f=0$.
For $n$ in $\mathbb{N}$, define
\[ d_n:=\|nv_{1/n}-\Psi^n(0)\|=\|V_{1/n}-\Psi^n(0)\|, \] and let us prove that $n^{-1}d_n$ tends to zero as $n$ goes to infinity. If $n>0$, we have \begin{align}
d_{n}&=\|\Psi((n-1)v_{1/n})-\Psi^n(0)\|\nonumber \\
&\leqslant\|(n-1)v_{1/n}-\Psi^{n-1}(0)\|\nonumber\\
&\leqslant d_{n-1} + (n-1)\|v_{1/n}-v_{1/(n-1)}\|.\label{e-iterate} \end{align} Let \[
D_n := \sum_{i\geqslant n}\|v_{1/(i+1)}-v_{1/i}\| <\infty \enspace. \] Using~\eqref{e-iterate} and a discrete integration by parts, we get \begin{align} d_n&\leqslant
\sum_{i=1}^{n-1}
i \|v_{1/(i+1)}-v_{1/i}\|+d_1 \\ &= \sum_{i=1}^{n-1}
i (D_i - D_{i+1})+d_1 = \sum_{i=1}^{n-1}D_i -(n-1)D_n + d_1 \enspace . \label{e-abel} \end{align} Since $D_n$ tends to $0$ as $n\to\infty$, the Ces\`aro sum $n^{-1}\sum_{i=1}^{n-1}D_i$ also tends to $0$ as $n\to\infty$. Then, it follows from~\eqref{e-abel} that $n^{-1}d_n$ tends to $0$ as $n\to\infty$.
Finally $\left\|v_{1/n}-\frac{1}{n}\Psi^n(0)\right\|$ tends to 0 as $n$ goes to infinity. We know from the monotonicity lemma (or from the fact that $v_\lambda$ has bounded variation) that $v_{1/n}$ converges to some $v$. It follows that $\frac{1}{n}\Psi^n(0)$ also converges to $v$.
We now prove (ii). Assume that $\Psi$ is definable in a polynomially bounded structure and recall that the monotonicity lemma implies the existence of $\bar \lambda\in (0,1)$ such that
$\lambda\rightarrow v_\lambda$ is $C^1$ on the open interval $(0,\bar \lambda)$ and continuous on $[0,\bar\lambda)$. Since $\mathcal O$ is a polynomially bounded structure and since the derivative $\frac{d}{d\lambda}v_{\lambda}$ is also definable in $\cal O$ (see Proposition~\ref{defder}), there exist $\gamma$ and $c_1\geqslant 0$ such that $\|\frac{d}{d\lambda}v_{\lambda}\|= c_1\lambda^{-\gamma} + o(\lambda^{-\gamma})$ as $\lambda\rightarrow 0^+$ (see Corollary~\ref{c:polbon}). If we are able to deal with the case when $\gamma$ is positive, the other case follows trivially. Assume thus that $\gamma$ is positive; note that, since $\frac{d}{d\lambda}v_{\lambda}$ is integrable, we must also have $\gamma<1$. Let $c_2>0$ be such that
$$ \|\frac{d}{d\lambda} v_{\lambda}\|\leqslant c_2\lambda^{-\gamma},$$ for all positive $\lambda$ small enough. Let us now consider a positive integer $i$ which is sufficiently large; by using \eqref{estim1}, we have \begin{eqnarray}
i\|v_{1/i}-v_{1/(i+1)}\| & \leqslant & i \int_{\frac{1}{i+1}}^{\frac{1}{i}}\|\frac{d}{d\lambda}v_{\lambda}\|d\lambda\\ \nonumber
& \leqslant & i \int_{\frac{1}{i+1}}^{\frac{1}{i}}c_2\lambda^{-\gamma}d\lambda\\ \nonumber
& \leqslant & \int_{\frac{1}{i+1}}^{\frac{1}{i}}c_2\lambda^{-1}\lambda^{-\gamma}d\lambda\\ \nonumber &= & c_2 \left[\frac{1}{-\gamma}\lambda^{-\gamma}\right]_{\frac{1}{i+1}}^{\frac{1}{i}}\\
& = & \frac{c_2}{\gamma} \left((i+1)^{\gamma}-i^{\gamma}\right) \label{maj} \end{eqnarray} Replacing $c_2$ by a bigger constant, we may actually assume that \eqref{maj} holds for all positive integers $i$. Hence \begin{eqnarray*}
\|v_{\frac{1}{n}}-\frac{\Psi^n(0)}{n}\|=n^{-1}d_n &\leqslant &n^{-1}\sum_{i=1}^{n-1} i \|v_{1/(i+1)}-v_{1/i}\|+n^{-1}d_1\\
& \leqslant &n^{-1}\sum_{i=1}^{n-1} \frac{c_2}{\gamma} \left((i+1)^{\gamma}-i^{\gamma}\right)+n^{-1}d_1\\
& \leqslant & \frac{c_2}{\gamma}\frac{n^{\gamma}}{n}+n^{-1}d_1\\
&= & O\left(\frac{1}{n^{1-\gamma}}\right). \end{eqnarray*} Recalling the estimate~\eqref{estim0} and observing that
\begin{eqnarray*}
\| \frac{\Psi^n(0)}{n}-v\| & \leqslant & \| \frac{\Psi^n(0)}{n}-v_{\frac{1}{n}}\| +\| v_{\frac{1}{n}}-v\|\\
& \leqslant & O\left(\frac{1}{n^{1-\gamma}}\right)+\int_0^{\frac{1}{n}}\|\frac{d}{d\lambda}v_{\lambda}\|d\lambda\\
& \leqslant & O\left(\frac{1}{n^{1-\gamma}}\right)+\int_0^{\frac{1}{n}}c_2\frac{1}{\lambda^{\gamma}}d\lambda=O\left(\frac{1}{n^{1-\gamma}}\right) \end{eqnarray*} the conclusion follows by setting $\theta=1-\gamma$ ($\theta\in(0,1)$). \end{proof}
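A one-dimensional example (ours, and only meant as an illustration) may help to visualize the statement. Take $\mathcal{K}=\mathbb{R}_+$ and the semi-algebraic nonexpansive map $\Psi(t)=\sqrt{1+t^2}$. An immediate induction gives $\Psi^n(0)=\sqrt{n}$, so that
$$\frac{1}{n}\Psi^n(0)=\frac{1}{\sqrt{n}}\longrightarrow v=0
\qquad\mbox{and}\qquad
\Big|\frac{1}{n}\Psi^n(0)-v\Big|=\frac{1}{n^{1/2}},$$
in accordance with item (ii), which here holds with $\theta=1/2$ but with no larger exponent; on the discounted side, solving $V_\lambda=\sqrt{1+(1-\lambda)^2V_\lambda^2}$ gives $v_\lambda=\lambda V_\lambda=\sqrt{\lambda/(2-\lambda)}\rightarrow 0$.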
The above result and some of its consequences can be recast within game theory as follows. Point (iii) of the following corollary is essentially due to Mertens-Neyman \cite{MertNeym81}.
\begin{corollary}[Games values and Shapley operators]\label{c:shapdef} If the Shapley operator $\Psi$ of a stochastic game is definable, the following assertions hold true. \begin{enumerate} \item[(i)] The limits of $v_{\lambda}$ and $v_n$ exist and coincide, i.e.\ $$\lim_{n\rightarrow+\infty}v_n=\lim_{\lambda\rightarrow 0} v_{\lambda}:=v_{\infty}.$$ \item[(ii)] If $\Psi$ is definable in a polynomially bounded o-minimal structure, there exists $\theta \in (0,1]$ such that
$$\|v_n-v_{\infty}\|=O(\frac{1}{n^{\theta}}).$$ \item[(iii)] {\bf (Mertens-Neyman, \cite{MertNeym81})} The game has a uniform value. \end{enumerate} \end{corollary} \noindent \begin{proof}{Proof.} Since the Shapley operator of a game is nonexpansive for the supremum norm, the first two points are a mere rephrasing of the proof of Theorem~\ref{conv}. Concerning the last one, we note from the proof (see \eqref{eq:int}) that there exists an $L^1$ definable function $\phi:(0,1)\rightarrow \mathbb{R}_+$ such that \begin{equation}\label{eq:boundvar}
\|v_{\lambda}-v_{\mu}\|\leqslant\int_{\lambda}^{\mu}\phi(s)ds, \end{equation} whenever $\lambda<\mu$ are in $(0,1)$. Applying \cite[Theorem of p. 54]{MertNeym81}, the result follows (\footnote{In \cite{MertNeym81} the authors only consider {\em finite} stochastic games; however their proof relies only on the property~\eqref{eq:boundvar}. We are indebted to X.~Venel for his valuable advice on this aspect.}). \end{proof}
\begin{remark} The first two items of Corollary~\ref{c:shapdef} remain true if we do not assume that players observe the actions (since the value $v_\lambda$ does not depend on this observation). Similarly the third item remains true if players only observe the sequence of states and the stage payoffs. \end{remark}
\begin{remark}[Stationary strategies]\label{r:stat} When the action sets are infinite, we do not know in general if the correspondences of optimal stationary actions in the discounted game, $$\lambda\rightarrow X_\lambda(\omega_i),\:\: \lambda\rightarrow Y_\lambda(\omega_i),\,i=1,\ldots,d,$$ are definable. However, in the particular case of games with perfect information, the existence of optimal \emph{pure} stationary strategies ensures that for each state $\omega_i$ the above correspondences are indeed definable. \end{remark}
\begin{remark}[Regularity of definable Shapley operators] In the particular case of finite games, more is known: it is proved in \cite{Milman02} that the real $\theta$ in (ii) can be chosen depending only on the dimension (number of states and actions) of the game. These global aspects cannot be deduced directly from our abstract approach in Theorem~\ref{conv}. However we think that similar results could be derived for {\em definable families of Shapley operators induced by definable families of games } as those described in Section~\ref{sectionvaluesto}. \end{remark}
\begin{remark}[Semi-smoothness of Shapley operators]\label{qi} The definability of the Shapley operator
and its Lipschitz continuity imply by \cite[Theorem 1]{BDL09} its semi-smoothness. Since the works of Qi and Sun \cite{qiliqun}, the semi-smoothness condition has been identified as an essential ingredient behind the good local behavior of nonsmooth Newton's methods. We think that this type of regularity might help game theorists in designing/understanding algorithms for computing values of stochastic games. Interested readers are referred to \cite[Section 3.3]{FiVr97} for related topics and possible links with policy iteration methods. \end{remark} \section{Definability of the value function for parametric games}\label{sec-one-shot}
Let $\mathcal O$ be an o-minimal structure over $(\mathbb{R},+,.)$. The previous section showed the importance of proving the definability of the Shapley operator of a game.\\
Recall that the Shapley operator associates to each vector $f$ in $\mathbb{R}^d$, the values of $d$ zero-sum games
$$\max_{\mu\in \Delta(X)}\min_{\nu \in \Delta(Y)} \int_X\int_Y\left[ g(x,y,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|x,y,\omega_k)f_i\right]d\mu d\nu,$$ where $k$ ranges over $\{1,\ldots,d\}$.
Hence each coordinate function of the operator can be seen as the value of a static zero-sum game depending on a vector parameter $f$. In this section we thus turn our attention to the analysis of {\em parametric zero-sum games} with definable data.
Consider nonempty compact sets $X\subset \mathbb{R}^p,Y\subset\mathbb{R}^q$, an arbitrary nonempty set $Z\subset\mathbb{R}^d$ and a continuous payoff function $\mathfrak{g}:X\times Y \times Z \rightarrow \mathbb{R}.$ The sets $X$ and $Y$ are action spaces for players 1 and 2, whereas $Z$ is a {\em parameter space}. Denote by $\Delta(X)$ (resp.\ $\Delta(Y)$) the set of probability measures over $X$ (resp.\ $Y$). When $z\in Z$ is fixed, the mixed extension of $\mathfrak{g}$ over $\Delta(X)\times\Delta(Y)$ defines a zero-sum game $\Gamma(z)$ whose value is denoted by $V(z)$ (recall that the $\max$ and $\min$ commute by Sion's theorem):
\begin{eqnarray} V(z)&=&\max_{\mu\in \Delta(X)}\min_{\nu \in \Delta(Y)} \int_X\int_Y \mathfrak{g}(x,y,z) d\mu d\nu\\ &=& \min_{\nu \in \Delta(Y)} \max_{\mu\in \Delta(X)} \int_X\int_Y \mathfrak{g}(x,y,z) d\mu d\nu. \end{eqnarray}
In the sequel a parametric zero-sum game is denoted by $(X,Y,Z,\mathfrak{g})$; when the objects $X,Y,Z,\mathfrak{g}$ are definable, the game $(X,Y,Z,\mathfrak{g})$ is called {\em definable}.
The issue we would like to address in this section is: can we assert that the value function $V:Z\rightarrow \mathbb{R}$ is definable in $\mathcal O$ whenever the game $(X,Y,Z,\mathfrak{g})$ is definable in $\mathcal O$?
As shown in a forthcoming subsection, the answer to the previous question is negative in general; but, as we shall see, additional algebraic or geometric structure may ensure the definability of the value function.
\subsection{Separable parametric games}
The following type of games, and the ideas of convexification used in their study, seem to originate in the work of Dresher-Karlin-Shapley \cite{DresKarlShap50} (where these games appear as {\em polynomial-like games}).
When $x_1,\ldots,x_m$ are vectors in $\mathbb{R}^p$, the convex envelope of the family $\{x_1,\ldots,x_m\}$ is denoted by $$\mbox{co}\,\{x_1,\ldots,x_m\}.$$
\begin{definition}[Separable functions and games]{\rm Let $X\subset\mathbb{R}^p$, $Y\subset \mathbb{R}^q$, $Z\subset \mathbb{R}^d$ and\\ $\mathfrak{g}:X\times Y\times Z\rightarrow \mathbb{R}$ be as above.\\ (i) The function $\mathfrak{g}$ is called {\em separable} with respect to the variables $x,y$, if it is of the form \[ \mathfrak{g}(x,y,z)=\sum_{i=1}^{I}\sum_{j=1}^{J} m_{ij}(z)a_i(x,z) b_j(y,z). \] where $I,J$ are positive integers and the $a_i$, $b_j$, $m_{ij}$ are continuous functions. \\ The function $\mathfrak{g}$ is called {\em separably definable,} if in addition the functions $a_i$, $b_j$, $m_{ij}$ are definable.\\ (ii) A parametric game $(X,Y,Z,\mathfrak{g})$ is called {\em separably definable}, if its payoff function $\mathfrak{g}$ is itself separably definable.} \end{definition}
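For instance, every polynomial payoff is separable: if
$$\mathfrak{g}(x,y,z)=\sum_{\alpha,\beta} c_{\alpha\beta}(z)\,x^{\alpha}y^{\beta}$$
is a polynomial in $(x,y)$ (the sum running over finitely many monomials $x^{\alpha}$, $y^{\beta}$, with multi-indices when $x$ and $y$ are vectors) whose coefficients $c_{\alpha\beta}$ are continuous definable functions of the parameter $z$, one may take $a_{\alpha}(x,z)=x^{\alpha}$, $b_{\beta}(y,z)=y^{\beta}$ and $m_{\alpha\beta}(z)=c_{\alpha\beta}(z)$ in the above definition.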
\begin{proposition}[Separable definable parametric games]\label{p:sep} Let $(X,Y,Z,\mathfrak{g})$ be a separably definable zero-sum game. Then the value function $Z\ni z\rightarrow V(z)$ is definable in $\mathcal O$. \end{proposition} \begin{proof}{Proof.} Let us consider the correspondence
${\mathcal L}:Z \rightrightarrows \mathbb{R}^{I}$ defined by
$${\mathcal L}(z)=\operatorname{co} \{(a_1(x,z),\cdots,a_I(x,z)) : x\in X\}$$ and define
$\mathcal M:Z \rightrightarrows \mathbb{R}^J$ similarly by ${\mathcal M}(z)=\operatorname{co} \{(b_1(y,z),\cdots,b_J(y,z)) : y\in Y\}.$ Using Carath\'eodory's theorem, we observe that the graph of $\cal L$ is defined by a first order formula, as $(z,s)\in \mbox{graph}\,{\cal L}\subset Z\times \mathbb{R}^I$ if and only if \[ \exists (\lambda_1,\ldots,\lambda_{I+1})\in \mathbb{R}_+^{I+1},\exists (x_1,\ldots,x_{I+1})\in X^{I+1}, \sum_{i=1}^{I+1}\lambda_i=1, s=\sum_{i=1}^{I+1}\lambda_i a_i(x_i,z) \enspace .\]
This ensures the definability of $\cal L$ and $\cal M$. Let us introduce the definable matrix-valued function
$$Z\ni z\rightarrow M(z)=[m_{ij}(z)]_{1\leqslant i \leqslant I\,, 1\leqslant j \leqslant J}$$ and the mapping $$W(z)=\sup_{S\in{\mathcal L}(z)}\inf_{T\in{\mathcal M}(z)}\, SM(z)T^t.$$ Using again Proposition~\ref{p:FO1}, we obtain easily that $W$ is definable. Let us prove that $W=V$, which will conclude the proof. Using the linearity of the integral, we have \begin{eqnarray*} W(z)=\sup_{S\in{\mathcal L}(z)}\inf_{T\in{\mathcal M}(z)}SM(z)T^t &=&\sup_{S\in{\mathcal L}(z)}\inf_{y\in Y} \sum_{i=1}^I\sum_{j=1}^J m_{ij}(z)S_i \, b_j(y,z)\\ &\leqslant & \sup_{\mu\in\Delta(X)}\inf_{y\in Y} \int_X \mathfrak{g}(x,y,z) d\mu \\ &=&V(z). \end{eqnarray*} An analogous inequality for $\inf\sup$ and a minmax argument imply the result.
\subsection{Definable parametric games with convex payoff}
Scalar products on $\mathbb{R}^m$ spaces are denoted by $\langle\cdot,\cdot\rangle$. \\ We consider parametric games $(X,Y,Z,\mathfrak{g})$ such that:
\begin{equation}\label{e:convex} Y \mbox{ and the partial payoff }\mathfrak{g}_{x,z}:\left\{\begin{array}{lll} Y& \rightarrow & \mathbb{R}\\
y & \rightarrow & \mathfrak{g}(x,y,z)
\end{array}\right.\mbox{ are both convex, for every }(x,z)\in X\times Z.\end{equation} One could alternatively assume that $X$ is convex and that player 1 is facing a concave function $\mathfrak{g}_{y,z}$ for each $y,z$ fixed.
We recall some well-known concepts of convex analysis (see \cite{Rock70}). If $f:\mathbb{R}^p\rightarrow (-\infty,+\infty]$ is a convex function
its {\em subdifferential} $\partial f(x)$ at $x$ is defined by
$$x^*\in\partial f(x)\Leftrightarrow f(y)\geqslant f(x)+\langle x^*,y-x\rangle, \forall y\in \mathbb{R}^p,$$
whenever $f(x)$ is finite; else we set $\partial f(x)=\emptyset$.
When $C$ is a closed convex set and $x\in C$, the {\em normal cone} to $C$ at $x$ is given by
$$N_C(x):=\left\{v\in\mathbb{R}^p: \langle v,y-x\rangle\leqslant 0, \forall y\in C\right\}.$$
The indicator function of $C$, written $I_C$, is defined by $I_C(x)=0$ if $x$ is in $C$, $I_C(x)=+\infty$ otherwise. It is straightforward
to see that $\partial I_C=N_C$ (where we adopt the convention $N_C(x)=\emptyset$ whenever $x\notin C$).
\begin{proposition}\label{convexparametric} Let $(X,Y,Z,\mathfrak{g})$ be a zero-sum parametric game. Recall that $X\subset \mathbb{R}^p, Y\subset \mathbb{R}^q$ are nonempty compact sets and $\emptyset\neq Z\subset\mathbb{R}^d$ is arbitrary.
Assume that $Y$ and $\mathfrak{g}$ satisfy \eqref{e:convex}. Then \begin{enumerate} \item[(i)] The value $V(z)$ of the game coincides with $$\max_{\begin{array}{r}(x_1,\ldots,x_{q+1}) \in X^{q+1}\\ \lambda\in\Delta_{q+1}\end{array}}\;\min_{y\in Y} \: \, \sum_{i=1}^{q+1} \lambda_i \mathfrak{g}(x_i,y,z),$$ where $\Delta_{q+1}=\{(\lambda_1,\ldots,\lambda_{q+1}) \in \mathbb{R}_+^{q+1}:\sum_{i=1}^{q+1}\lambda_i=1\}$ denotes the simplex of $\mathbb{R}^{q+1}$. \item[(ii)] If the payoff function $\mathfrak{g}$ is definable then so is the value mapping $V$. \end{enumerate} \end{proposition}
\begin{proof}{Proof.} Item (ii) follows from the fact that (i) provides a first order formula that describes the graph of $V$.
Let us establish (i). In what follows $\partial $ systematically denotes the subdifferentiation with respect to the variable $y\in Y$, the other variables being fixed.
Fix $z$ in the parameter space. Let us introduce the following continuous function \begin{equation} \Phi(y,z)=\max_{x\in X} \mathfrak{g}(x,y,z). \end{equation} $\Phi(\cdot,z)$ is clearly convex and continuous. Let us denote by $\bar y$ a minimizer of $\Phi(\cdot,z)$ over $Y$. Using the sum rule for the subdifferential of convex functions, we obtain \begin{equation} \partial \Phi(\bar y,z)+N_Y(\bar y)\ni 0. \end{equation} Now from the envelope theorem (see \cite{Rock70}), we know that $\partial \Phi(\bar y,z)=\operatorname{co} \{\partial \mathfrak{g}(x,\bar y,z):x\in J(\bar y,z)\},$ where $J(y,z):=\{x \mbox{ in }X \mbox{ which maximizes } \mathfrak{g}(x,y,z)\mbox{ over }X\}$. Hence Carath\'eodory's theorem implies the existence of $\mu$ in the simplex of $\mathbb{R}^{q+1}$ and of $x_1,\ldots,x_{q+1}\in X$ such that
\begin{equation}\label{opt} \sum_{i=1}^{q+1}\mu_i\partial \mathfrak{g}(x_i,\bar y,z)+N_Y(\bar y)\ni 0. \end{equation} where, for each $i$, $x_i$ is a maximizer of $x\rightarrow \mathfrak{g}(x,\bar y,z)$ over the compact set $X$. Being given $x$ in $X$, the Dirac measure at $x$ is denoted by $\delta_x$. We now establish that $\bar x=\sum_{i=1}^{q+1}\mu_i\delta_{x_i}$ and $\bar y$ are optimal strategies in the game $\Gamma(z)$. Let $x$ be in $X$, we have \begin{eqnarray}\label{eq:saddle} \int_X \mathfrak{g}(s,\bar y,z) d\bar x(s)& = &\sum_i\mu_i\mathfrak{g}(x_i,\bar y,z)\\ \nonumber
& = & \sum_i\mu_i\mathfrak{g}(x_1,\bar y,z)\\ \nonumber
& = & \mathfrak{g}(x_1,\bar y,z)\\ \nonumber
&\geqslant & \mathfrak{g}(x,\bar y,z). \end{eqnarray} Using the sum rule for the subdifferential, we see that \eqref{opt} rewrites $$\partial \left(\sum_i\mu_i \mathfrak{g}(x_i,\cdot,z)+I_Y\right)(\bar y)\ni 0,$$ where $I_Y$ denotes the indicator function of $Y$. The above equation implies that $\bar y$ is a minimizer of the convex function $\sum_i\mu_i \mathfrak{g}(x_i,\cdot,z)$ over $Y$. This implies that \begin{eqnarray*} \int_X \mathfrak{g}(s,\bar y,z) d\bar x(s)& = &\sum_i\mu_i \mathfrak{g}(x_i,\bar y,z)\\
& \leqslant & \sum_i\mu_i \mathfrak{g}(x_i, y,z) \end{eqnarray*} for all $y$ in $Y$. Together with \eqref{eq:saddle}, this shows that $(\bar x,\bar y)$ is a saddle point of the mixed extension of $\mathfrak{g}$ with value $\int_X \mathfrak{g}(s,\bar y,z)d\bar x(s)$. To conclude, we finally observe that we also have $$\sum_{i=1}^{q+1}\mu_i \mathfrak{g}(x_i,\bar y,z)=\mathfrak{g}(x_1,\bar y,z) \geqslant \sum_{i=1}^{q+1}\lambda_i \mathfrak{g}(x'_i,\bar y,z)$$ for all $\lambda\in \Delta_{q+1}$ and $x'_1,\ldots,x'_{q+1}$ in $X$. Hence $\left((\mu,x_1,\ldots,x_{q+1}),\bar y\right)$ is a saddle point of the map $\left((\lambda,x'_1,\ldots,x'_{q+1}),y\right)\rightarrow \sum_{i=1}^{q+1}\lambda_i \mathfrak{g}(x'_i,y,z)$
with value $\sum_{i=1}^{q+1}\mu_i\mathfrak{g}(x_i,\bar y,z)=\int_X \mathfrak{g}(s,\bar y,z)d\bar x(s)$. \end{proof}
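For completeness, let us write down the first order formula mentioned in the proof of item (ii). Since all the maxima and minima appearing in item (i) are attained (by compactness of $X$, $Y$ and $\Delta_{q+1}$ and continuity of $\mathfrak{g}$), a pair $(z,r)\in Z\times\mathbb{R}$ belongs to the graph of $V$ if and only if
$$\Big(\exists (\lambda,x_1,\ldots,x_{q+1})\in\Delta_{q+1}\times X^{q+1},\ \forall y\in Y,\ \sum_{i=1}^{q+1}\lambda_i\mathfrak{g}(x_i,y,z)\geqslant r\Big)$$
and
$$\Big(\forall (\lambda,x_1,\ldots,x_{q+1})\in\Delta_{q+1}\times X^{q+1},\ \exists y\in Y,\ \sum_{i=1}^{q+1}\lambda_i\mathfrak{g}(x_i,y,z)\leqslant r\Big),$$
which is a first order definable formula as soon as $\mathfrak{g}$ is definable (Proposition~\ref{p:FO1}).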
\begin{remark}{\rm (a) Observe that the above proof actually yields optimal strategies for both players.\\ (b) An analogous result holds when we assume that $X$ is convex and $X\ni x\rightarrow \mathfrak{g}(x,y,z)$ is a concave function.} \end{remark}
\subsection{A semi-algebraic parametric game whose value function is not semi-algebraic} The following lemma is adapted from an example in McKinsey \cite[Ex.~10.12 p 204]{McKinsey} of a one-shot game played on the square where the payoff is a rational function yet the value is transcendental.
\begin{lemma}\label{lemmecontreexemple} Consider the semi-algebraic payoff function $$\mathfrak{g}(x,y,z)=\frac{(1+x)(1+yz)}{2(1+xy)^2}$$ where $(x,y,z)$ evolves in $[0,1]\times[0,1]\times(0,1]$. Then $$V(z)=\frac{z}{2\ln(1+z)}, \quad \forall z\in(0,1].$$
\end{lemma} \begin{proof}{Proof.} Fix $z$ in $(0,1]$. Player 1 can guarantee $V(z)$ by playing the probability density \[ \frac{dx}{\ln(1+z)(1+x)} \] on $[0,z]$ since for any $y\in[0,1]$, \[ \int_0^z \frac{\mathfrak{g}(x,y,z) dx}{\ln(1+z)(1+x)} = \frac{1+yz}{2\ln(1+z)} \int_0^z \frac{dx}{(1+xy)^2} =\frac{z}{2\ln(1+z)} \]
On the other hand, Player 2 can guarantee $V(z)$ by playing the probability density \[\frac{z\, dy}{\ln(1+z)(1+yz)} \]
on $[0,1]$ since for any $x\in[0,1]$, \[ \int_0^1 \frac{z\, \mathfrak{g}(x,y,z) dy}{\ln(1+z)(1+yz)} = \frac{z(1+x)}{2\ln(1+z)} \int_0^1 \frac{dy}{(1+xy)^2} =\frac{z}{2\ln(1+z)}. \qquad \] \end{proof}
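As a purely illustrative numerical sanity check of the computation above (this fragment is ours and not part of the argument), one can discretize the integral of the payoff against player 1's optimal density and verify that it does not depend on $y$ and equals $z/(2\ln(1+z))$:
\begin{verbatim}
import numpy as np

def check_value(z, n_x=200001, n_y=51):
    """Payoff guaranteed by player 1's density dx/(ln(1+z)(1+x)) on [0,z]."""
    V = z / (2.0 * np.log(1.0 + z))
    x = np.linspace(0.0, z, n_x)
    density = 1.0 / (np.log(1.0 + z) * (1.0 + x))
    payoffs = [np.trapz((1 + x) * (1 + y * z) / (2 * (1 + x * y) ** 2) * density, x)
               for y in np.linspace(0.0, 1.0, n_y)]
    return V, min(payoffs), max(payoffs)   # the three numbers should agree

print(check_value(0.5))   # approximately (0.6166, 0.6166, 0.6166)
\end{verbatim}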
We see in this example that the underlying objects of the initial game are semi-algebraic while the value function is not. Observe however that the value function is definable in a {\em larger structure} since it is globally subanalytic (the $\log$ function only appears through its restriction to compact sets). The question of the possible definability of the value function in a larger structure is exciting but seems difficult; it is certainly a matter for future research.
\section{Values of stochastic games}\label{sectionvaluesto} \subsection{Definable stochastic games}\label{propperfect} We start with a simple result. Recall that a stochastic game has perfect information if each state is controlled by only one of the players (see Section~\ref{Defsto}).
\begin{proposition}[Definable games with perfect information]\label{p:perfect} Definable games with perfect information and bounded payoff {\rm (\footnote{Recall that we do not need to assume continuity of $g$ and $\rho$ in that case, as stated in Remark \ref{remarqueperfectinfo}})} have a uniform value. \end{proposition} \begin{proof}{Proof.}
Let $\omega_k$ be any state controlled by the first player. The Shapley operator in this state can be written as \[
\Psi_k(f)=\sup_{x\in X} \left[ g(x,\omega_k)+ \sum_{i=1}^d \rho(\omega_i|x,\omega_k)f_i\right]. \] So $\Psi_k$ is the supremum, taken over a definable set, of a definable function, and is thus definable (see Example~\ref{ex}). The same is true if $\omega_k$ is controlled by the second player, so we conclude by Corollary~\ref{c:shapdef}.~ \end{proof}
A stochastic game $(\Omega,X,Y,g,\rho)$ is called {\em separably definable}, if both the payoff and the transition functions are separably definable. More precisely: \begin{enumerate} \item[(a)] $\Omega$ is finite and $X\subset\mathbb{R}^p$, $Y\subset\mathbb{R}^q$ are definable sets. \item[(b)] For each state $\omega$, the reward function $g(\cdot,\cdot,\omega)$ has a definable/separable structure, that is
$$g(x,y,\omega):=\sum_{i=1}^{I_{\omega}}\sum_{j=1}^{J_{\omega}}m_{i,j}^{\omega}\,a_i(x,\omega)\,b_j(y,\omega),\;\forall (x,y)\in X\times Y,$$ where ${I_{\omega}},{J_{\omega}}$ are positive integers, $m_{ij}^{\omega}$ are real numbers, $a_i(\cdot,\omega)$ and $b_j(\cdot,\omega)$ are continuous definable functions.
\item[(c)] For each couple of states $\omega,\omega'$, the transition function $\rho(\omega'|\cdot,\cdot,\omega)$ has a definable/separable structure, that is
$$\rho(\omega'|x,y,\omega):=\sum_{i=1}^{K_{(\omega,\omega')}}\sum_{j=1}^{L_{(\omega,\omega')}} n_{i,j}^{(\omega,\omega')}\;c_i(x,\omega,\omega')\,d_j(y,\omega,\omega'),\;\forall (x,y)\in X\times Y,$$ where $K_{(\omega,\omega')},L_{(\omega,\omega')}$ are positive integers, $n_{ij}^{(\omega,\omega')}$ are real numbers, $c_i(\cdot,\omega,\omega')$ and $d_j(\cdot,\omega,\omega')$ are continuous definable functions. \end{enumerate} The most natural examples of separably definable games are games with semi-algebraic action spaces and polynomial reward and transition functions.
\begin{theorem}[Separably definable games]\label{t:sepgamesunif} Separably definable games have a uniform value. \end{theorem} \begin{proof}{ Proof.} The coordinate functions of the Shapley operator yield $d$ parametric separable definable games. Hence the Shapley operator of the game, say $\Psi$, is itself definable by Proposition~\ref{p:sep}. Applying Corollary~\ref{c:shapdef} to $\Psi$, the result follows. \end{proof}
An important subclass of separably definable games is the class of definable games for which {\em one of the players} has a finite set of strategies. \begin{corollary}[Definable games finite on one side] Consider a definable stochastic game and assume that one of the players has a finite set of strategies. Then the game has a uniform value. \end{corollary} \begin{proof}{Proof.} It suffices to observe that the mixed extension of the game is both separable and definable, and to apply the previous theorem.\\ One could alternatively observe that the mixed extension fulfills the convexity assumptions of Proposition~\ref{convexparametric}. This shows that the Shapley operator of the game is definable, hence Corollary~\ref{c:shapdef} applies and yields the result. \end{proof}
The above theorems generalize in particular the results of Bewley-Kohlberg \cite{BewlKohl76}, Mertens-Neyman \cite{MertNeym81} on {\em finite }stochastic games.\\
As shown by the following result, it is not true in general that semi-algebraic stochastic games have a semi-algebraic Shapley operator. \begin{example}\label{exshap}
Consider the following stochastic game with two states $\{\omega_1,\omega_2\}$ and action sets $[0,1]$ for each player. The first state is absorbing with payoff $0$, while for the second state, the payoff is $$g(x,y,\omega_2)=\frac{1+x}{2(1+xy)^2}$$ and the transition probability is given by $$1-\rho(\omega_1|x,y,\omega_2)=\rho(\omega_2|x,y,\omega_2)=\frac{(1+x)y}{2(1+xy)^2},$$ for all $(x,y)$ in $[0,1]^2$.
This stochastic game is defined by semi-algebraic and continuous functions but neither the Shapley operator $\Psi$ nor the curve of values $(v_\lambda)_{\lambda\in(0,1]}$ are semi-algebraic mappings. \end{example} \begin{proof}{Proof.}
Notice first that $\rho(\omega_2|x,y,\omega_2)\in [0,1]$ for all $x$ and $y$, so the game is well defined. It is straightforward that $\Psi_1(f_1,f_2)=f_1$ and $\Psi_2(f_1,f_2)=f_1+V(f_2-f_1)$ (where $V$ is the value of the parametric game in Lemma \ref{lemmecontreexemple}), hence $\Psi$ is not semi-algebraic.
For any $\lambda\in]0,1[$ let $u_\lambda=\left(0,\frac{\lambda(e^\frac{1-\lambda}{2}-1)}{1-\lambda}\right)$, the identity $u_\lambda=v_\lambda$ will follow as we prove that $u_{\lambda}=\lambda\Psi(\frac{1-\lambda}{\lambda}\,u_{\lambda})$. This is clear for the first coordinate, and for the second, since $\frac{1-\lambda}{\lambda}\,u_{\lambda}=e^\frac{1-\lambda}{2}-1\in]0,1[$, Lemma \ref{lemmecontreexemple} implies that \begin{eqnarray*} \lambda \Psi_2(\frac{1-\lambda}{\lambda}\,u_{\lambda})&=&\lambda V(e^\frac{1-\lambda}{2}-1)\\ &=&\lambda \frac{e^\frac{1-\lambda}{2}-1}{1-\lambda}\\ &=&u_\lambda. \qquad \qquad \end{eqnarray*}
\end{proof}
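A quick numerical confirmation of the fixed-point identity $u_\lambda=\lambda\Psi(\frac{1-\lambda}{\lambda}\,u_{\lambda})$ used above can be done as follows (a sketch in Python/NumPy; the function names are ours).
\begin{verbatim}
# Check that u_lambda = lambda * Psi((1-lambda)/lambda * u_lambda); a sketch.
import numpy as np

def V(t):                      # value of the parametric game of the Lemma
    return t / (2 * np.log(1 + t))

def Psi(f):                    # Shapley operator of the two-state game
    f1, f2 = f
    return np.array([f1, f1 + V(f2 - f1)])

for lam in [0.1, 0.5, 0.9]:
    u = np.array([0.0, lam * (np.exp((1 - lam) / 2) - 1) / (1 - lam)])
    assert np.allclose(u, lam * Psi((1 - lam) / lam * u))
\end{verbatim}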
\begin{remark}{\rm As in Lemma~\ref{lemmecontreexemple}, one observes that both the Shapley operator $\Psi$ and the curve of values $(v_\lambda)_{\lambda\in(0,1]}$ are globally subanalytic. } \end{remark}
\subsection{Stochastic games with separable definable transitions}
This section establishes, by means of the Weierstrass density Theorem, that the assumptions we made on payoff functions can be brought down to mere continuity without altering our results on uniform values. From a conceptual viewpoint this shows that the essential role played by definability in our framework is to tame oscillations generated by the underlying stochastic process $\rho$.
\begin{theorem}[Games with separable definable transitions]\label{t:value}
Let $(\Omega, X,Y,g,\rho)$ be a stochastic game, and assume that: \begin{enumerate} \item[(i)] $\Omega$ is finite and $X, Y$ are definable, \item[(ii)] the reward function $g$ is an arbitrary continuous function, \item[(iii)] the transition function $\rho$ is definable and separable (e.g.\ polynomial). \end{enumerate} Then the game $(\Omega, X,Y,g,\rho)$ has a uniform value. \end{theorem}
As it appears below, the proof of the above theorem relies on the Mertens-Neyman uniform value theorem \cite{MertNeym81}, which we do not reproduce here. We shall however provide a complete proof of a weaker result in the spirit of the ``asymptotic approach'' of Rosenberg-Sorin: \begin{theorem}[Games with separable definable transitions -- weak version]\label{t:valuebis} We consider a stochastic game $(\Omega, X,Y,g,\rho)$ which is as in Theorem~\ref{t:value}.\\ Then the following limits exist and coincide: $$\lim_{n\rightarrow\infty}v_n=\lim_{\lambda \rightarrow 0}v_{\lambda}.$$ \end{theorem}
Before establishing the above results, we need some abstract results that allow us to deal with certain approximations of stochastic games. In the following proposition, the space $({\mathcal X},\|\cdot\|)$ denotes a real Banach space and $\mathcal{K}$ denotes a nonempty closed cone of ${\mathcal X}$. Given two mappings $\Phi_1,\Phi_2:\mathcal{K}\rightarrow\mathcal{K}$, we define their
supremum ``norm'' through
$$\| \Phi_1-\Phi_2 \|_{\infty}=\sup\left\{\|\Phi_1(f)-\Phi_2(f)\|: f\in \mathcal{K}\right\}.$$
Observe that the above value may be $+\infty$, so that $\|\cdot\|_\infty$ is not a norm; however, $\delta(\Phi_1,\Phi_2):= \|\Phi_1 -\Phi_2\|_{\infty}
/ (1+ \|\Phi_1 -\Phi_2\|_{\infty})$ does provide a proper metric (\footnote{We of course set: $\delta(\Phi_1,\Phi_2):=1$ whenever $\| \Phi_1-\Phi_2 \|_{\infty}=\infty$.}) on the space of mappings $\mathcal{K} \to \mathcal{K}$. We say that a sequence $\Psi_k:\mathcal{K} \rightarrow \mathcal{K}$ ($k\in\mathbb{N}$) converges uniformly to $\Psi:\mathcal{K}\rightarrow\mathcal{K}$ if $\| \Psi_k-\Psi \|_{\infty}$ tends to zero as $k$ goes to infinity, or equivalently, if it converges to $\Psi$ with respect to the metric $\delta$. The observation that the set of nonexpansive mappings $\Psi:\mathcal{K}\to \mathcal{K}$ such that the limit $\lim_{n\to \infty}\Psi^n(0)/n$ does exist is closed in the topology of uniform convergence was made in~\cite{gg04}.
\begin{proposition}\label{t:unif} Let $\Psi_k:\mathcal{K} \rightarrow \mathcal{K}$ be a sequence of nonexpansive mappings. Assume that\\
(i) There exists $\Psi:\mathcal{K} \rightarrow \mathcal{K}$ such that $\Psi_k$ converges uniformly to $\Psi$ as $k\rightarrow+\infty$,\\
(ii) for each fixed integer $k$, the sequence $\frac{1}{n}\Psi^n_k(0)$ has a limit $v^k$ in $\mathcal{K}$ as $n\rightarrow+\infty$.
Then the sequence $v^k$ has a limit $v$ in $\mathcal{K}$, $\Psi$ is nonexpansive and $\frac{1}{n}\Psi^n(0)$ converges to $v$ as $n$ goes to infinity.
\end{proposition}
\begin{proof}{Proof.} Take $\epsilon>0$. Note first that if $\Phi_1,\Phi_2$ are two nonexpansive mappings such that $\| \Phi_1-\Phi_2 \|_{\infty}\leqslant \epsilon$, we have $\| \Phi_1^n-\Phi_2^n \|_{\infty}\leqslant n\epsilon$. This follows indeed from an induction argument. The result obviously holds for $n=1$, so assume that $n\geqslant 2$ and that the inequality holds at $n-1$. For all $f$ in $\mathcal{K}$, we have \begin{eqnarray} \nonumber
\|\Phi_1^n(f)-\Phi_2^n(f)\|& \leqslant &\|\Phi_1(\Phi_1^{n-1}(f))-\Phi_1(\Phi_2^{n-1}(f))\| +\|\Phi_1(\Phi_2^{n-1}(f))-\Phi_2(\Phi_2^{n-1}(f))\|\\ \nonumber
& \leqslant &\|\Phi_1^{n-1}(f)-\Phi_2^{n-1}(f)\|+\epsilon\\
& \leqslant & n\epsilon.\label{ineq} \end{eqnarray}
Let us now prove that $v^k$ is a Cauchy sequence. Let $N>0$ be such that $\| \Psi_p-\Psi_q \|_{\infty}\leqslant \epsilon$, for all $p,q\geqslant N$. Then, for each $p,q\geqslant N$ and each positive integer $n$, we have \begin{equation*}
\|\frac{\Psi_p^n(0)}{n}-\frac{\Psi_q^n(0)}{n}\| \leqslant \epsilon. \end{equation*}
Letting $n$ go to infinity ($p$ and $q$ being fixed), one gets $\|v^p-v^q\| \leqslant\epsilon$ and thus $v^k$ converges to a vector $v$ belonging to $\mathcal{K}$.
Take $\epsilon>0$. Let $N$ be such that $\| \Psi_p-\Psi \|_{\infty}\leqslant \epsilon/3$ and $\|v^p-v\|<\epsilon/3$ for all $p\geqslant N$. Using \eqref{ineq}, one obtains $\|\Psi_p^n(0)-\Psi^n(0)\|\leqslant n\,\epsilon/3$ where $n>0$ is an arbitrary integer. Whence \begin{eqnarray*}
\|v-\frac{\Psi^n(0)}{n}\| & \leqslant & \|v-v^p\|+\|v^p-\frac{\Psi_p^n(0)}{n}\|+\|\frac{\Psi_p^n(0)}{n}-\frac{\Psi^n(0)}{n}\|\\
& \leqslant & \frac{2\epsilon}{3}+\|v^p-\frac{\Psi_p^n(0)}{n}\|,\\ \end{eqnarray*} for all $n>0$. The conclusion follows by choosing $n$ large enough. \end{proof}
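To illustrate the objects handled by Proposition~\ref{t:unif}, the following sketch (Python/NumPy; the data form a toy example of ours, not taken from the text) iterates a small one-player Shapley operator and displays the mean-payoff iterates $\frac{1}{n}\Psi^n(0)$.
\begin{verbatim}
# Mean-payoff iterates v_n = Psi^n(0)/n for a toy one-player Shapley operator
# (two states, two actions per state); a sketch only.
import numpy as np

g = np.array([[0.0, 1.0],                  # g[i, a]: payoff in state i, action a
              [2.0, 0.5]])
P = np.array([[[1.0, 0.0], [0.5, 0.5]],    # P[i, a, j]: transition probabilities
              [[0.0, 1.0], [0.9, 0.1]]])

def Psi(f):
    return np.max(g + np.einsum('iaj,j->ia', P, f), axis=1)

f, n = np.zeros(2), 2000
for _ in range(n):
    f = Psi(f)
print("v_n ~", f / n)                      # approximates the mean payoff vector
\end{verbatim}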
\noindent Similarly, we prove: \begin{proposition}\label{t:uniflambda} Let $\Psi_k:\mathcal{K} \rightarrow \mathcal{K}$ be a sequence of nonexpansive mappings. Assume that\\
(i) There exists $\Psi:\mathcal{K} \rightarrow \mathcal{K}$ such that $\Psi_k$ converges uniformly to $\Psi$ as $k\rightarrow+\infty$,\\
(ii) for each fixed integer $k$, the family of fixed points $v_\lambda^k:=\lambda \Psi_k\left(\frac{1-\lambda}{\lambda}v_\lambda^k\right)$ has a limit $v^k$ in $\mathcal{K}$ as $\lambda\rightarrow 0$.
Then the sequence $v^k$ has a limit $v$ in $\mathcal{K}$, $\Psi$ is nonexpansive and $v_\lambda:=\lambda \Psi\left(\frac{1-\lambda}{\lambda}v_\lambda\right)$ converges to $v$ as $\lambda$ goes to $0$.
\end{proposition} \begin{proof}{Proof.}
Take $\epsilon>0$. Let $N>0$ be such that $\| \Psi_p-\Psi_q \|_{\infty}\leqslant \epsilon$, for all $p,q\geqslant N$. Then, for each $p,q\geqslant N$ and any $\lambda\in]0,1]$, we have \begin{eqnarray*}
\|v_\lambda^p-v_\lambda^q\|& = &\lambda \left\|\Psi_p\left(\frac{1-\lambda}{\lambda}v_\lambda^p\right)-\Psi_q\left(\frac{1-\lambda}{\lambda}v_\lambda^q\right)\right\|\\
& \leqslant &\lambda \left\|\Psi_p\left(\frac{1-\lambda}{\lambda}v_\lambda^p\right)-\Psi_q\left(\frac{1-\lambda}{\lambda}v_\lambda^p\right)\right\| +\lambda \left\|\Psi_q\left(\frac{1-\lambda}{\lambda}v_\lambda^p\right)-\Psi_q\left(\frac{1-\lambda}{\lambda}v_\lambda^q\right)\right\|\\
& \leqslant &\lambda \epsilon +(1-\lambda)\|v_\lambda^p-v_\lambda^q\|. \end{eqnarray*}
Hence $\|v_\lambda^p-v_\lambda^q\|\leqslant\epsilon$.
Letting $\lambda$ tend to $0$, we get that $v^k$ is a Cauchy sequence, hence it converges to some $v$. Moreover, for any $p>N$, \begin{eqnarray*}
\|v-v_\lambda\| & \leqslant & \|v-v^p\|+\|v^p-v_\lambda^p\|+\|v_\lambda^p-v_\lambda\|\\
& \leqslant & 2\epsilon + \|v^p-v_\lambda^p\| \end{eqnarray*} for all $\lambda\in]0,1]$. Hence $v_\lambda$ converges to $v$. \end{proof}
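Similarly, the discounted fixed points $v_\lambda=\lambda\Psi\left(\frac{1-\lambda}{\lambda}v_\lambda\right)$ appearing in Proposition~\ref{t:uniflambda} can be computed by plain fixed-point iteration, since the map $v\mapsto\lambda\Psi(\frac{1-\lambda}{\lambda}v)$ is a $(1-\lambda)$-contraction. A sketch (Python/NumPy, with the same toy operator as in the previous sketch):
\begin{verbatim}
# Discounted values by fixed-point iteration; a sketch using a toy
# one-player Shapley operator.
import numpy as np

g = np.array([[0.0, 1.0], [2.0, 0.5]])
P = np.array([[[1.0, 0.0], [0.5, 0.5]],
              [[0.0, 1.0], [0.9, 0.1]]])
Psi = lambda f: np.max(g + np.einsum('iaj,j->ia', P, f), axis=1)

def v_discounted(lam, d=2, tol=1e-12):
    v = np.zeros(d)
    while True:
        v_new = lam * Psi((1 - lam) / lam * v)   # (1-lam)-contraction
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

for lam in [0.5, 0.1, 0.01]:
    print(lam, v_discounted(lam))                # v_lambda stabilizes as lam -> 0
\end{verbatim}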
\begin{proof}{[Proof of Theorem~\ref{t:valuebis}]} Let $k$ be a positive integer. {From} the Stone-Weierstrass theorem (see \cite{Choq66}), there exists a finite family $\{\pi_k(\cdot,\omega);\omega\in\Omega\}$ of real polynomial functions \begin{equation}\label{eqpi} \pi_k(x,y,\omega)=\sum_{i,\; j \mbox{ multi-index lower than } \, \delta^{\omega}_k} m^k_{ij}(\omega) \, x^i y^j \end{equation} with $\delta_k^{\omega}$ in $\mathbb{N}^*$, $m_{ij}^k(\omega)$ in $\mathbb{R}$ and $(x,y)$ in $X\times Y\subset\mathbb{R}^p\times \mathbb{R}^q$, such that
$$\sup_{\omega\in\Omega}\sup\left\{|\pi_k(x,y,\omega)-g(x,y,\omega)|:(x,y)\in X\times Y\right\}\;\leqslant\; \frac{1}{k}.$$ Consider now, for each positive $k$, the game given by $(\Omega,X,Y,\pi_k,\rho)$. Since this game is definable, Proposition~\ref{p:sep} applies and the game has a value. In other words, its Shapley operator $\Psi_k:\mathbb{R}^d\rightarrow\mathbb{R}^d$ (recall that the cardinality of $\Omega$ is $d$) is such that the sequence $\frac{1}{n}\Psi_k^n(0)$ has a limit as $n$ goes to $+\infty$. On the other hand, one easily sees that $$\Psi(f)-\frac{1}{k}\leqslant \Psi_k(f) \leqslant \Psi(f) +\frac{1}{k}$$ whenever $f$ is in $\mathbb{R}^d$ and $k$ is positive. This proves that $\Psi_k$ converges uniformly to $\Psi$. Thus by using Proposition~\ref{t:unif} and Proposition~\ref{t:uniflambda}, we obtain the existence of a common limit $v$ in $\mathbb{R}^d$ of the sequence $v_n=\frac{1}{n}\Psi^n(0)$ and of the family of fixed points $v_\lambda$. \end{proof}
Let us now establish the stronger version of our result.
\begin{proof}{[Proof of Theorem \ref{t:value}]} Let $k$ be a positive integer. As before we consider a finite family of real polynomial functions, $\{\pi_k(\cdot,\omega);\omega\in\Omega\}$,
such that \begin{equation}\label{eqpi2}
\sup_{\omega\in\Omega}\sup\left\{|\pi_k(x,y,\omega)-g(x,y,\omega)|:(x,y)\in X\times Y\right\}\;\leqslant\; \frac{1}{k}. \end{equation} Consider now, for each positive $k$, the game $\Gamma^k$ given by $(\Omega,X,Y,\pi_k,\rho)$. Since this game is definable, Theorem~\ref{t:sepgamesunif} applies and the game has a uniform value $v^k$. Hence, there exists an integer $N$ (depending on $k$) and a strategy $\sigma$ of Player 1 which is $\frac{1}{k}$-optimal in the $n$-stage game $\Gamma^k_n$ for any $n\geqslant N$. That is, for any strategy $\tau$ of Player 2 and any starting state $\omega$, \[ \gamma^k_n(\sigma,\tau,\omega)\geqslant v^k(\omega)-\frac{1}{k}. \] Hence by \eqref{eqpi2}, \begin{equation}\label{eqgammavk} \gamma_n(\sigma,\tau,\omega)\geqslant v^k(\omega)-\frac{2}{k}. \end{equation} Taking the infimum over all possible strategies $\tau$, we get that for every $\omega$ and every large $n$,
\[
v_n(\omega)\geqslant v^k(\omega)-\frac{2}{k}.
\] Using the dual inequality \begin{equation} v_n(\omega)\leqslant v^k(\omega)+\frac{2}{k}\label{eqvnvk} \end{equation} one gets that $\limsup v_n(\omega)-\liminf v_n(\omega)\leqslant \frac{4}{k}$. Hence $v_n$ converges to some $v$. Moreover, combining \eqref{eqgammavk} and \eqref{eqvnvk} yields \[ \gamma_n(\sigma,\tau,\omega)\geqslant v_n(\omega)-\frac{4}{k}\geqslant v(\omega)-\frac{5}{k} \] for $n$ sufficiently large. Hence $v$ is the uniform value of the game. \end{proof}
An immediate consequence of Theorem~\ref{t:value} is the following (\footnote{After this article was first submitted, examples were constructed in \cite{Ziliotto13, sorinvigeral13reversibility} that show that the definability assumption for the games described in this corollary cannot be removed.}) \begin{corollary}\label{coroswitch} Any game with a definable transition probability, and either switching control or finitely many actions on one side, has a uniform value. \end{corollary}
\subsection{Geometric growth in nonlinear Perron-Frobenius theory}\label{sec-geom} We finally point out an application of the present results to nonlinear Perron-Frobenius theory, in which Shapley operators do appear, albeit after a change of variables, using ``log-glasses''~\cite{viro}. In this setting, the mean payoff of the game determines the growth rate of a population model. The same Shapley operators arise in risk-sensitive control, where the mean payoff problem is also of interest. Whereas the importance of the o-minimal model of real semi-algebraic sets is well known in game theory~\cite{bewkohl,Ney03}, the present application shows that there are natural Shapley operators which are definable in a larger structure, the log-exp o-minimal model.
We denote by $C=\mathbb{R}_+^d$ the standard (closed) nonnegative cone of $\mathbb{R}^d$, equipped with the product ordering. We are interested in maps $T$ defined on the interior of $C$, satisfying some of the following properties. We say that $T$ is {\em order preserving} if \[ f\leqslant g\implies T(f)\leqslant T(g), \qquad \forall f,g\in \operatorname{int} C, \] that it is {\em positively homogeneous} (of degree $1$) if \[ T(\lambda f) = \lambda T(f), \qquad \forall f\in \operatorname{int}C, \, \forall \lambda >0, \] and {\em positively subhomogeneous} if \[ T(\lambda f) \leqslant \lambda T(f), \qquad \forall f\in \operatorname{int}C,\, \forall \lambda \geqslant 1. \] Let $\log: \operatorname{int} C\to \mathbb{R}^d$ denote the map which does $\log$ entrywise, and let $\exp:= \log^{-1}$. It is clear that $T$ is order-preserving and positively homogeneous if and only if the conjugate map \begin{align}\label{e-fromTtoShapley} \Psi:= \log \circ\, T \circ \exp \end{align} is order-preserving and commutes with the addition of a constant. These two properties hold if and only if $\Psi$ is a dynamic programming
operator associated to an undiscounted game with state space $\{1,\dots,d\}$, i.e.\ if $\Psi$ can be written as in~\eqref{op_shapley}, but with possibly noncompact sets of actions (see in particular~\cite{kolokoltsov}). Note also that if $T$ is order preserving and positively subhomogeneous, then, $\Psi$ is sup-norm nonexpansive.
In the setting of nonlinear Perron-Frobenius theory, we are interested in the existence of the geometric {\em growth rate} $\chi(T)$, defined by \begin{align} \chi(T):= \exp(\lim_{n\to \infty} n^{-1}\log T^n(e)) = \exp( \lim_{n\to \infty} n^{-1} \Psi^n(\log e))\label{e-corresp} \end{align} where $e$ is an arbitrary vector in the interior of $C$.
Problems of this nature arise in population dynamics. In this context, one considers a population vector $f(n) \in \operatorname{int}\mathbb{R}_+^d$, where $[f(n)]_i$ represents the number of individuals of type $i$ at time $n$, assuming a dynamics of the form $f(n)=T(f(n-1))$. Then, $[\chi(T)]_i= \lim_{n\to \infty} [T^n(f(0))]_i^{1/n}$ represents the geometric growth rate of individuals of type $i$.
\begin{corollary}[Geometric Growth]\label{cor-geom} Let $T$ be an order preserving and positively subhomogeneous self map of $\operatorname{int} C$ that is definable in the log-exp structure, and let $e$ be a vector in $\operatorname{int}C$. Then, the growth rate $\chi(T)$, defined by~\eqref{e-corresp}, does exist and is independent of the choice of~$e$. \end{corollary} \begin{proof}{Proof.} Apply Theorem~\ref{conv} to the operator~\eqref{e-fromTtoShapley}, which is nonexpansive in the sup-norm as well as definable in the log-exp structure, and use~\eqref{e-corresp}. \end{proof}
Here is now an application of Corollary~\ref{cor-geom} to a specific class of maps.
\begin{corollary}[Growth minimization] Assume that $T$ is a self-map of $\operatorname{int} C$ every coordinate of which can be written as \begin{align} [T(f)]_i = \inf_{p\in \mathcal{M}_i } \langle p,f\rangle \qquad 1\leqslant i\leqslant d, \label{e-T} \end{align} where $\mathcal{M}_i$ is a subset of $C$. Assume in addition that each set $\mathcal{M}_i$ is definable in the log-exp structure. Then, the growth rate $\chi(T)=\exp(\lim_{n\to \infty} n^{-1}\log T^n(e))$ does exist and is independent of the choice of $e\in \operatorname{int} C$. \end{corollary} \begin{proof}{Proof.} The map $T$ is obviously order preserving, positively homogeneous, and, by Proposition~\ref{p:FO1} or Example~\ref{ex}, it is definable in the log-exp structure as soon as every set $\mathcal{M}_i$ is definable in this structure. Hence, the result follows from Corollary~\ref{cor-geom}. \end{proof}
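As an illustration of the preceding corollary, the growth rate of a map of the form~\eqref{e-T} with finite sets $\mathcal{M}_i$ can be approximated numerically by iterating the conjugate operator $\Psi=\log\circ\, T\circ\exp$ of~\eqref{e-fromTtoShapley}. The following sketch (Python/NumPy; the sets $\mathcal{M}_i$ are a toy example of ours) does exactly this.
\begin{verbatim}
# Growth rate chi(T) = exp(lim Psi^n(log e)/n) for T_i(f) = min_{p in M_i} <p,f>;
# a sketch with finite toy sets M_i. The additive renormalization uses the fact
# that Psi commutes with adding constants, and avoids overflow in exp.
import numpy as np

M = [np.array([[2.0, 1.0], [3.0, 0.5]]),   # rows of M[i]: the vectors p available
     np.array([[0.5, 4.0], [1.0, 2.0]])]   # in coordinate i

def T(f):
    return np.array([np.min(Mi @ f) for Mi in M])

def Psi(h):                                 # conjugate operator, sup-norm nonexpansive
    return np.log(T(np.exp(h)))

h, shift, n = np.zeros(2), 0.0, 5000        # h = log e with e = (1, 1)
for _ in range(n):
    h = Psi(h)
    c = h.max(); h -= c; shift += c         # invariant: Psi^k(0) = h + shift
print("chi(T) ~", np.exp((h + shift) / n))  # coordinatewise geometric growth rate
\end{verbatim}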
Several motivations lead to considering maps of the form~\eqref{e-T}. The first motivation arises from discrete time controlled growth processes. As above, to each time $n\geqslant 1$ and state $1\leqslant i\leqslant d$ is attached a population $[f(n)]_i$. The control at time $n$ is chosen after observing the current state $1\leqslant i\leqslant d$. It consists in selecting a vector $p\in \mathcal{M}_i$. Then, the population at time $n$ becomes $[f(n)]_i =\langle p, f(n-1)\rangle$. The iterate $[T^n(e)]_i$ represents the minimal possible population at state $i$ and time $n$, with an initial population $e$. Then, the limit $\chi(T)$ represents the minimal possible growth rate. This is motivated in particular by some therapeutic problems (see e.g.~\cite{billy}), for which $\chi(T)$ yields a lower bound on the achievable growth rates.
Another motivation comes from risk sensitive control~\cite{fleming,hernandez} or from mathematical finance models with logarithmic utility~\cite{akian}. In this context, it is useful to consider the conjugate map $\Psi:= \log \circ \,T \circ \exp$, which has the following explicit representation \begin{align} [\Psi(h)]_i & = \inf_{p\in \mathcal{M}_i } \log(\sum_{1\leqslant j\leqslant d} p_j e^{h_j})
= \inf_{p\in \mathcal{M}_i } \sup_{q\in \Delta_d} (-S(q,p) + \langle q,h\rangle )\label{e-explicit} \end{align} where \[S(q,p):= \sum_{1\leqslant j \leqslant d} q_j \log(q_j/p_j) \] denotes the {\em relative entropy} or {\em Kullback-Leibler divergence}, and $\Delta_d:=\{q\in C\mid \sum_{1\leqslant j\leqslant d} q_j=1\}$ is the standard simplex. Then, $\log [\chi(T)]_i$ can be interpreted as the value of an ergodic risk sensitive problem, and it is also the value of a zero-sum game.
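The variational representation~\eqref{e-explicit} rests on the classical identity $\log\sum_j p_je^{h_j}=\sup_{q\in\Delta_d}\bigl(-S(q,p)+\langle q,h\rangle\bigr)$, the supremum being attained at $q_j\propto p_je^{h_j}$. A quick numerical confirmation (Python/NumPy sketch; the data are arbitrary):
\begin{verbatim}
# Check of the entropic variational formula behind the representation above.
import numpy as np

p = np.array([0.2, 0.5, 0.3])                        # a probability vector
h = np.array([1.0, -0.4, 2.5])

lhs = np.log(np.sum(p * np.exp(h)))
q = p * np.exp(h); q /= q.sum()                      # optimal q (Gibbs form)
rhs = -np.sum(q * np.log(q / p)) + np.dot(q, h)      # -S(q,p) + <q,h>
assert np.isclose(lhs, rhs)
print(lhs, rhs)
\end{verbatim}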
The case in which inf is replaced by sup in~\eqref{e-T}, i.e., $[T(f)]_i = \sup_{p\in \mathcal{M}_i } \langle p,f\rangle$, for $1\leqslant i\leqslant d$, which is also of interest, turns out to be simpler. Indeed, each coordinate of the operator $\Psi:= \log \circ \,T \circ \exp$ becomes convex (this can be easily seen from the representation analogous to~\eqref{e-explicit}, in which the infimum is now replaced by a supremum). More generally, the latter convexity property is known to hold if and only if $\Psi$ is the dynamic programming operator of a one player stochastic game~\cite{spectral,vigeral}. It has been shown by several authors~\cite{gg04,vigeral,Rena11} that for this class of operators (or games), the limit $\lim_{n\rightarrow+\infty} \Psi^n(f)/n$ does exist, from which the existence of the limit~\eqref{e-corresp} readily follows.
Finally, we note that we may consider more general hybrid versions of~\eqref{e-T},
for instance with a partition $\{1,\dots,d\}=I\cup J$ and \[ [T(f)]_i = \inf_{p\in \mathcal{M}_i } \langle p,f\rangle \qquad i\in I, \qquad [T(f)]_i = \sup_{p\in \mathcal{M}_i } \langle p,f\rangle \qquad i\in J \enspace . \] Then the existence of the growth rate, for such maps, also follows from Corollary~\ref{cor-geom}.
\noindent
\section*{Acknowledgments.} The authors would like to thank J. Renault, S. Sorin and X. Venel for their very useful comments.
\end{document}
\begin{document}
\title{Analog control of open quantum systems under arbitrary decoherence} \date{\today} \author{Jens Clausen} \email{[email protected]} \author{Guy Bensky, and Gershon Kurizki} \affiliation{Department of Chemical Physics,\\
Weizmann Institute of Science,\\
Rehovot,
76100, Israel} \begin{abstract} We derive and investigate a general non-Markovian equation for the time-dependence of a Hamiltonian that maximizes the fidelity of a desired quantum gate on any finite-dimensional quantum system in the presence of arbitrary bath and noise sources. The method is illustrated for a single-qubit gate implemented on a three-level system. \end{abstract} \pacs{03.65.Yz,
03.67.Pp,
02.70.-c } \keywords{ decoherence, open systems,quantum information, decoherence protection, quantum error correction, computational techniques, simulations } \maketitle
\section{Introduction}
The quest for strategies for combatting decoherence is of paramount importance to the control of open quantum systems, particularly for quantum information operations \cite{krausNielsen}. A prevailing \emph{unitary} strategy aimed at suppressing decoherence is dynamical decoupling (DD) \cite{vio98,KLi05,uhr07}, which consists, in the case of a qubit, in the application of strong and fast pulses alternating along orthogonal Bloch-sphere axes, e.g., $X$ and $Z$. In the frequency domain, where the decoherence rate can be described as overlap between the spectra of the pulse-driven (modulated) system and the bath \cite{kof01}, DD is tantamount to shifting the driven-system resonances beyond the bath cutoff frequencies. The DD efficacy can be enhanced for \emph{certain} bath spectra upon choosing the timings of the pulses so as to reduce the low-frequency parts in the system spectrum and thus its overlap with the low-frequency portion of the bath spectrum \cite{uhr07}. DD sequences are inherently \emph{binary}, i.e., their pulsed control parameters are discretely switched on or off. Realistically, the finiteness of pulse durations and spacings sets an upper limit on the speed and fidelity of DD-assisted quantum gate operations \cite{vio98,KLi05,uhr07}.
An alternative strategy formulated here in full generality is \emph{analog unitary control} of multidimensional systems subject to \emph{any} noise or decoherence. It is effected by a system Hamiltonian whose time-dependence is variationally tailored to optimally perform a desired gate operation. The vast additional freedom of non-discrete (smooth) Hamiltonian parametrization significantly enhances the efficacy of decoherence control under realistic constraints compatible with the non-Markov time scales required for such control. Its formulation meets the long-standing conceptual challenge of \emph{simultaneously} controlling non-commuting system operators subject to noise along \emph{orthogonal} axes. This is here achieved by working in an optimally rotated, different basis at each instant. The price we pay for such general optimal control is the need for at least partial knowledge of the bath or noise spectrum, which is \emph{experimentally} accessible \cite{Sag09} without the need for microscopic models. The goal is to \emph{minimize its overlap} with the spectrum of the controlled system, as was already shown for pure dephasing of qubits \cite{gkl08}.
\section{Gate Error}
We assume that the system Hamiltonian $\hat{H}_{\mathrm{S}}(t)$ implements a desired quantum gate operation at time $t$, and aim at designing it so as to minimize the decoherence and noise errors. The system-bath interaction $\hat{H}_{\mathrm{I}}$ then acquires time-dependence in the interaction picture under the action of $\hat{H}_{\mathrm{S}}(t)$ and the bath Hamiltonian $\hat{H}_{\mathrm{B}}$. Assuming factorized initial states of the system and the bath, $\hat{\varrho}_{\mathrm{tot}}(0)$ $\!=$ $\!\hat{\varrho}(0)\otimes\hat{\varrho}_{\mathrm{B}}$, tracing over the bath, and further assuming that $\mathrm{Tr}_{\mathrm{B}} [\hat{H}_{\mathrm{I}}(t)\hat{\varrho}_{\mathrm{B}}]$ $\!=$ $\!\hat{0}$, yields for the system state $\hat{\varrho}(t)$ the integrated (exact) deviation from the initial state (App.A), \begin{eqnarray}
\hat{\varrho}(t)&=&\!\hat{\varrho}(0)-\Delta\hat{\varrho}(t),
\nonumber\\
\Delta\hat{\varrho}(t)&=&
\!\!\int_{0}^{t}\!\!\!\mathrm{d}t_1\!\int_{0}^{t_1}\!\!\!\mathrm{d}t_2
\mathrm{Tr}_{\mathrm{B}}
[\hat{H}_{\mathrm{I}}(t_1),
[\hat{H}_{\mathrm{I}}(t_2),\hat{\varrho}_{\mathrm{tot}}(t_2)]].\quad \label{NZi} \end{eqnarray}
In what follows, we assume that up to $t$, the combined system-bath state changes only weakly compared to $\hat{H}_{\mathrm{I}}$, so that we approximate in (\ref{NZi}) $\hat{\varrho}_{\mathrm{tot}}(t_2)\approx\hat{\varrho}_{\mathrm{tot}}(0)$ in the integral. This means that the control is assumed effective enough to allow only \emph{small errors}, consistently with the first order approximation of the solutions of both the Nakajima-Zwanzig and the time-convolutionless master equations \cite{JPB,bookBreuer}.
To justify this assumption, we try to reduce the discrepancy between the states evolved for time $t$ in the presence and absence of the bath by minimizing $\langle\Delta\hat{\varrho}(t)\rangle$ $\!\equiv$
$\!\langle\Psi|\Delta\hat{\varrho}(t)|\Psi\rangle$ averaged over
\emph{all initial states} $|\Psi\rangle$ that are unknown in general. For a $d$-level system this averaging is tantamount to taking the expectation value with respect to the maximum entropy state $\hat{\varrho}_{\mathrm{S}}$ $\!=$ $\!d^{-1}\hat{I}$. Assuming that $\mathrm{Tr}_{\mathrm{S}}[\hat{H}_{\mathrm{I}}(t)\hat{\varrho}_{\mathrm{S}}]$ $\!=$ $\!\hat{0}$, we obtain our measure of decoherence (error) in the form of (App.A) \begin{eqnarray}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
&=&\!2\kappa\mathrm{Re}
\!\int_{0}^{t}\!\mathrm{d}t_1\!\int_{0}^{t_1}\!\mathrm{d}t_2
\bigl\langle
\hat{H}_{\mathrm{I}}(t_1)\hat{H}_{\mathrm{I}}(t_2)
\bigr\rangle_{\mathrm{SB}}
\nonumber\\
&=&\kappa
\Bigl\langle
\Bigl[\int_{0}^{t}\!\mathrm{d}t_1\hat{H}_{\mathrm{I}}(t_1)\Bigr]^2
\Bigr\rangle_{\mathrm{SB}}, \label{grate} \end{eqnarray} where $\kappa$ $\!=$ $\!1\!-\!(d\!+\!1)^{-1}$, $\langle\cdot\rangle_{\mathrm{SB}}$ $\!=$ $\!\mathrm{Tr}_{\mathrm{SB}}[(\cdot)\hat{\varrho}_{\mathrm{S}} \otimes\hat{\varrho}_{\mathrm{B}}]$. Hence, $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$ is always positive and proportional to the mean square of the interaction energy as observed in the interaction picture (by a co-rotating observer).
Since our aim is to suppress $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$
by system manipulations alone, we now separate system and bath parts by decomposing any interaction Hamiltonian in an orthogonal basis of system operators as
\hat{H}_{\mathrm{I}}(t)
=\sum_{j=1}^{d^2-1}\hat{B}_j(t)\hat{S}_j(t), \end{equation} where the Hermitian $\hat{B}_j$ and $\hat{S}_j$ are bath and system operators, respectively, assumed to obey $\langle\hat{B}_j(t)\rangle_{\mathrm{B}}$ $\!=$ $\!\mathrm{Tr}\hat{S}_j(t)$ $\!=$ $\!0$ and carry no explicit time dependence. In the interaction picture \begin{eqnarray}
\hat{B}_j(t)&=&\mathrm{e}^{\mathrm{i}\hat{H}_{\mathrm{B}}t}\hat{B}_j
\mathrm{e}^{-\mathrm{i}\hat{H}_{\mathrm{B}}t},
\nonumber\\
\hat{S}_j(t)&=&\hat{U}^\dagger(t)\hat{S}_j\hat{U}(t),
\nonumber\\ \label{Uop}
\hat{U}(t)&=&\mathrm{T}_+
\mathrm{e}^{-\mathrm{i}\int_{0}^{t}\!\mathrm{d}t^\prime
\hat{H}_{\mathrm{S}}(t^\prime)}. \end{eqnarray} We shall minimize $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$ for given, experimentally accessible \cite{Sag09}, bath correlations \begin{equation} \label{Phijkconst}
{\Phi}_{jk}(t)=\bigl\langle\hat{B}_j(t)\hat{B}_k\bigr\rangle_{\mathrm{B}}. \end{equation}
It is expedient to define the decoherence matrix \begin{equation} \label{Rmat1t2}
\underline{\bm{R}}(t_1,t_2)
=\underline{\bm{\epsilon}}^T(t_1)\underline{\bm{\Phi}}(t_1-t_2)
\underline{\bm{\epsilon}}(t_2), \end{equation} which obeys $\underline{\bm{R}}^\dagger(t_1,t_2)$ $\!=$ $\!\underline{\bm{R}}(t_2,t_1)$. It is the matrix product of the bath correlation matrix $\underline{\bm{\Phi}}$ formed from the coefficients ${\Phi}_{jk}$ in (\ref{Phijkconst}) and the system-modulation (rotation) matrix defined as \begin{eqnarray}
\hat{S}_j(t)&=&\sum_{k=1}^{d^2-1}{\epsilon}_{jk}(t)\hat{S}_k,
\nonumber\\
\quad{\epsilon}_{jk}(t)&=&\frac{1}{2}
\mathrm{Tr}[\hat{S}_j(t)\hat{S}_k], \label{Rjk} \end{eqnarray} where we have assumed that $\mathrm{Tr}(\hat{S}_j\hat{S}_k)$ $\!=$ $\!2\delta_{jk}$. The transformation (\ref{Rjk}) is at the heart of the treatment: it defines the instantaneous rotating frame where the system and bath are maximally decoupled, as shown below.
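To make the rotation matrix (\ref{Rjk}) concrete, the following sketch (Python/NumPy/SciPy; the control $\hat{U}$ and all names are our own toy choice, not taken from the text) evaluates ${\epsilon}_{jk}(t)$ for a qubit, with $\hat{S}_j$ the Pauli matrices.
\begin{verbatim}
# eps_{jk}(t) = Tr[S_j(t) S_k]/2 for a qubit under U(t) = exp(-i w t sx/2); sketch.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = [sx, sy, sz]

def eps(U):
    S_t = [U.conj().T @ Sj @ U for Sj in S]      # S_j(t) in the interaction picture
    return np.array([[0.5 * np.trace(Sjt @ Sk).real for Sk in S] for Sjt in S_t])

w, t = 1.0, 0.7
print(eps(expm(-0.5j * w * t * sx)))             # a rotation mixing the y,z components
\end{verbatim}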
We can now write (\ref{grate}) as (App.B) \begin{equation} \label{deltarho}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
=2\frac{\kappa}{d}\;\int_{0}^{t}\!\mathrm{d}t_1\int_{0}^{t}\!\mathrm{d}t_2
\mathrm{Tr}\underline{\bm{R}}(t_1,t_2). \end{equation} Alternatively, we can rewrite (\ref{deltarho}) as \begin{equation} \label{som}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
=4t\,\frac{\kappa}{d}\int_{0}^{\infty}\mathrm{d}\omega\,
\mathrm{Tr}[\underline{\bm{G}}_{}(\omega)\underline{\bm{F}_t}(\omega)], \end{equation} i.e., as the spectral overlap of two matrix-valued functions: the bath coupling spectral matrix $\underline{\bm{G}}_{}(\omega)$ $\!=$ $\!\int_{-\infty}^{\infty}\!\mathrm{d}t\;\mathrm{e}^{\mathrm{i}\omega{t}}\; \mathrm{Re}\underline{\bm{\Phi}}_{}(t)$, and the system-modulation spectral matrix at finite time $t$ [cf. (\ref{Rmat1t2})] $\underline{\bm{F}_t}(\omega)$ $\!=$ $\!\frac{1}{t}\underline{\bm{\epsilon}_t}(\omega) \underline{\bm{\epsilon}_t}^\dagger(\omega)$, $\underline{\bm{\epsilon}_t}(\omega)$ $\!=$ $\!\frac{1}{\sqrt{2\pi}}\int_{0}^t\mathrm{d}\tau\, \mathrm{e}^{\mathrm{i}\omega\tau}\underline{\bm{\epsilon}}(\tau)$. In (\ref{som}) we have made use of the fact that $\underline{\bm{\Phi}}(-t)$ $\!=$ $\!\underline{\bm{\Phi}}^\dagger(t)$, so that it is sufficient to integrate over positive frequencies.
Equation (\ref{som}) constitutes a generalization of the ``universal formula'' \cite{kof01} to arbitrary multidimensional systems and baths. It provides a major insight: the system and bath spectra (all matrix components) must be \emph{anticorrelated}, i.e., ${G}_{jk}(\omega)$ minima must coincide with $({F}_t)_{jk}(\omega)$ maxima and vice versa to minimize (\ref{som}), as illustrated below. It should be emphasized that for given $\omega$, both $\underline{\bm{G}}_{}(\omega)$ and $\underline{\bm{F}_t}(\omega)$ are positive matrices. Nevertheless, certain components ${G}_{jk}(\omega)$, $({F}_t)_{jk}(\omega)$ may be \emph{negative} if $d$ $\!>$ $\!2$ (i.e., not for qubits). This may allow us to `destructively interfere' their contributions, i.e., \emph{engineer} ``dark states'' \cite{gor06b} or ``decoherence-free'' subspaces \cite{Zan05}. These prospects of our general scheme will be explored elsewhere.
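The anticorrelation principle can already be visualized in the scalar (single-channel, pure-dephasing) limit of (\ref{som}), where $\underline{\bm{G}}$ and $\underline{\bm{F}_t}$ reduce to scalar spectra. The sketch below (Python/NumPy; the Lorentzian bath and cosine modulation are toy choices of ours) shows that the overlap is largest when the modulation spectrum sits on the bath peak and smallest when it is shifted away from it.
\begin{verbatim}
# Scalar overlap integral int G(w) F_t(w) dw for eps(tau) = cos(Omega*tau); sketch.
import numpy as np

t = 20.0
tau = np.linspace(0.0, t, 2001); dtau = tau[1] - tau[0]
w = np.linspace(0.0, 10.0, 1001); dw = w[1] - w[0]

def F_t(eps_tau):                            # F_t(w) = |eps_t(w)|^2 / t
    eps_w = (np.exp(1j * np.outer(w, tau)) * eps_tau).sum(axis=1) * dtau
    return np.abs(eps_w) ** 2 / (2 * np.pi * t)

G = 1.0 / (1.0 + (w - 2.0) ** 2)             # Lorentzian bath spectrum peaked at w = 2

for Omega in [0.0, 2.0, 6.0]:                # modulation frequency
    print(Omega, (G * F_t(np.cos(Omega * tau))).sum() * dw)
\end{verbatim}
In this toy example the overlap is smallest for $\Omega=6$, away from the bath peak at $\omega=2$, in line with the anticorrelation requirement stated above.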
\section{Decoherence Minimization}
Our goal is to find a system Hamiltonian $\hat{H}_{\mathrm{S}}({t_1})$, $0$ $\!\le$ $\!t_1$ $\!\le$ $\!t$, implementing a given unitary gate $\hat{U}(t)$ at a fixed time $t$ according to (\ref{Uop}). This requires minimizing (\ref{grate}) or (\ref{deltarho}), $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$ $\!\to$ $\!{min}$, i.e., minimizing the bath-induced state error in the interaction picture under $\hat{H}_{\mathrm{S}}(t)$. We may similarly account for the effects of \emph{modulation or control noise}, in addition to bath noise (App.C).
The major difficulty in minimizing (\ref{deltarho}) using (\ref{Uop})-(\ref{Rjk}) is that (\ref{Uop}) involves time-ordered integration for arbitrary bath and control axes. To circumvent this difficulty, we make use of $\hat{U}(t_1)$ instead of $\hat{H}_{\mathrm{S}}$, and assume a parametrization $\hat{U}[{f}_l({t_1}),{t_1}]$ in terms of a set of \emph{real parameters} ${{f}}_l({t_1})$, which may be combined to a vector $\underline{\bm{{f}}}({t_1})$. The number of parameters may vary, since the parametrization does not have to be complete. The boundary values $\underline{\bm{{f}}}(0)$ and $\underline{\bm{{f}}}(t)$ should be such that $\hat{U}(t_1\!=\!0)$ $\!=$ $\!\hat{I}$ and $\hat{U}(t_1\!=\!t)$ is the desired gate.
If the bath coupling spectrum $\underline{\bm{G}}(\omega)$ vanishes beyond some cutoff frequency, the overlap (\ref{som}) can be made arbitrarily small under sufficiently rapid modulation of the Hamiltonian, such that all components of $\underline{\bm{F}_t}(\omega)$ are shifted beyond this cutoff, thus achieving DD \cite{vio98,KLi05,uhr07}. Yet this may require a \emph{diverging} system energy. Furthermore, \emph{fidelity generally drops with modulation energy}, as discussed below. We therefore impose an energy constraint on the modulated system \begin{equation} \label{ES}
E_{\mathrm{S}}=\int_{0}^{t}\!\mathrm{d}{t_1}
\bigl\langle\hat{H}_{\mathrm{S}}^2({t_1})\bigr\rangle_{\mathrm{S}}={const}., \end{equation} where $\langle\cdot\rangle_{\mathrm{S}}$ $\!=$ $\!\mathrm{Tr}[(\cdot){d}^{-1}\hat{I}]$ [cf. (\ref{grate})]. An alternative constraint \begin{equation} \label{constraint}
E=\int_{0}^{t}\mathrm{d}{t_1}\;|\dot{\underline{\bm{{f}}}}({t_1})|^2
={const}. \end{equation} allows a simplified treatment. In general, $E$ accounts for the fact that the time dependence of a parametrization cannot be arbitrarily fast and hence bounds the modulated $\hat{H}_{\mathrm{S}}({t_1})$, thus also limiting $E_{\mathrm{S}}$.
The minimization of (\ref{deltarho}) subject to (\ref{constraint}) is an extremal problem in terms of $\underline{\bm{{f}}}$. Denoting by $\underline{\bm{\delta}}$ the total variation with respect to $\underline{\bm{{f}}}$, the stationary condition can be formulated in terms of a Lagrange multiplier $\lambda$ as $\underline{\bm{\delta}}\overline{\langle\Delta\hat{\varrho}(t)\rangle}$ $\!+$ $\!\lambda\underline{\bm{\delta}}E$ $\!=$ $\!0$. Then, using the parametrization in $\underline{\bm{R}}$ [Eq.~(\ref{Rmat1t2})], $\nabla\underline{\bm{\epsilon}}$ $\!\equiv$ $\!\{\frac{\partial}{\partial{{f}}_l}\underline{\bm{\epsilon}}(t_1)\}$, yields the Euler-Lagrange equation \begin{equation} \label{elel}
\ddot{\underline{\bm{{f}}}}(t_1)
=\lambda\underline{\bm{{g}}}(t_1),
\quad
\underline{\bm{{g}}}(t_1)\equiv\int_{0}^{t}\!\mathrm{d}t_2
\nabla\mathrm{Re}\mathrm{Tr}\underline{\bm{R}}(t_1,t_2), \end{equation} where $\lambda$ is related to the constraint (\ref{constraint}) on $E$ (App.D).
We conclude the general treatment by recapitulating on the steps to find the optimal modulation of $\hat{H}_{\mathrm{S}}({t_1})$: 1) After defining the `cycle time' $t$ and gate operation $\hat{U}(t)$, we declare a parametrization $\hat{U}[\underline{\bm{{f}}}({t_1}),{t_1}]$ which induces a parametrization $\underline{\bm{\epsilon}}[\underline{\bm{{f}}}({t_1}),{t_1}]$ that in turn yields $\underline{\bm{R}}(t_1,t_2)$ as a functional of $\underline{\bm{{f}}}$ via (\ref{Uop})-(\ref{Rjk}), using our knowledge of (\ref{Phijkconst}). 2) We now solve (\ref{elel}) for a given initial $\underline{\bm{{f}}}_{\mathrm{init}}(t_1)$ satisfying the boundary conditions, e.g., such that $\hat{U}[\underline{\bm{{f}}}_{\mathrm{init}}(t_1),{t_1}]$ $\!=$ $\![\hat{U}(t)]^{\frac{t_1}{t}}$, and calculate $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$. 3) The optimization is repeated for different values of $\lambda$ and $E_{\mathrm{S}}$ in (\ref{ES}) is calculated for each solution. Among all solutions for which $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$ falls below a desired threshold value, we choose the one corresponding to the lowest $E_{\mathrm{S}}$. 4) The chosen solution $\underline{\bm{{f}}}({t_1})$ is inserted into $\hat{U}[\underline{\bm{{f}}}({t_1}),{t_1}]$ in (\ref{Uop}), yielding the instantaneous control parameters \begin{equation} \label{instpar}
\hat{H}_{\mathrm{S}}({t_1})=\sum_j\omega_j({t_1})\hat{S}_j,\quad
\omega_j({t_1})=\frac{1}{2}\mathrm{Tr}[\hat{S}_j\hat{H}_{\mathrm{S}}({t_1})]. \end{equation}
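Step 4 amounts to reading off $\hat{H}_{\mathrm{S}}(t_1)=\mathrm{i}\dot{\hat{U}}(t_1)\hat{U}^\dagger(t_1)$ from the optimized parametrization. A minimal sketch (Python/NumPy/SciPy; the toy parametrization $\hat{U}$ and all names are ours) recovers $\omega_j(t_1)$ by finite differences for a qubit.
\begin{verbatim}
# Recover w_j(t1) = Tr[S_j H_S(t1)]/2 with H_S = i (dU/dt1) U^dagger; a sketch.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
S = [sx, sy, sz]

def U(t1):                          # toy parametrization: Euler angles (0, pi*t1, 0)
    return expm(-0.5j * np.pi * t1 * sy)

def omegas(t1, h=1e-6):
    H_S = 1j * ((U(t1 + h) - U(t1 - h)) / (2 * h)) @ U(t1).conj().T
    return [0.5 * np.trace(Sj @ H_S).real for Sj in S]

print(omegas(0.3))                  # ~ (0, pi/2, 0): a constant Rabi drive about y
\end{verbatim}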
\section{Application to a qubit}
To apply the general procedure to a qubit for which $\hat{S}_j$ $\!=$ $\!\hat{\sigma}_j$ ($j$ $\!=$ $\!x,y,z$) in (\ref{instpar}), we resort to the Euler rotation-angle parametrization, \begin{displaymath}
\hat{U}(t)=
\mathrm{e}^{-\frac{\mathrm{i}}{2}f_3(t)\hat{\sigma}_3}
\mathrm{e}^{-\frac{\mathrm{i}}{2}f_2(t)\hat{\sigma}_2}
\mathrm{e}^{-\frac{\mathrm{i}}{2}f_1(t)\hat{\sigma}_3}. \end{displaymath} In (\ref{instpar}), $\omega_3(t)$ is now the level splitting, whereas $\omega_{1(2)}(t)$ are Rabi flipping rates. We choose two examples of uncorrelated (i.e., diagonal) baths, namely, an Ohmic bath with \emph{different cutoffs} in $X$, $Y$, $Z$, and a Lorentzian noise spectrum superposed with a second Lorentzian such that a spectral `hole' is obtained at different frequencies in $X$, $Y$, and $Z$. The corresponding bath coupling spectra are shown in Fig.~\ref{F1}, along with our optimized modulation spectra, which are contrasted with Uhrig's DD pulse-sequence spectra \cite{uhr07} (App.E,F).
\begin{figure}\label{F1}
\end{figure}
The minimized gate error is shown in Fig.~\ref{F2} as a function of the energy constraint (\ref{ES}) for both baths.
\begin{figure}\label{F2}
\end{figure}
Its comparison with the gate error obtained using various DD pulse sequences reveals two differences. The first concerns the energy scale: in rectangular DD pulse sequences, each $\pi$-pulse of duration $T$ contributes an amount $\pi^2/(4T)$ to (\ref{ES}), which \emph{diverges} for ideal pulses, $T$ $\!\to$ $\!0$. By contrast, our approach assumes finite, much smaller $E_{\mathrm{S}}$. The second difference concerns energy monotonicity: DD-sequences are designed a priori, regardless of the bath-spectrum, and hence only significantly reduce the gate error if $E_{\mathrm{S}}$ has risen above some threshold which is needed to shift all system frequencies beyond the bath cutoffs \cite{uhr07}. In contrast, our approach starts to reduce the gate error as soon as $E_{\mathrm{S}}$ $\!>$ $\!0$, since it optimizes the use of the available energy, by anti-correlating the modulation and bath spectra.
We next consider the \emph{gate fidelity limitations as a function of $E_S$}
posed by \emph{leakage} \cite{WKB09} to levels \emph{outside} the relevant subspace (here a qubit). In a 3-level $\Lambda$-system, any off-resonant control field acting on the qubit levels $|1\rangle$, $|2\rangle$ causes leakage to the unwanted level $|3\rangle$ \cite{aga96}. Such leakage and the ensuing incoherent decay $|3\rangle$ $\!\to$ $\!|1\rangle$ incur gate errors that grow with $E_{\mathrm{S}}$ (App.G). This behaviour is illustrated in Fig.~\ref{F3}, which reveals that the leakage error becomes more dramatic as the $\pi$-pulse sequences become more energetic. If $\pi$-pulses are experimentally implemented as $(2m+1)\pi$-pulses with $m{\scriptstyle\gtrapprox}10^3$, it can therefore be expected that isolated manipulation of a subspace is difficult. These observations, together with the qubit-gate optimization, the general expressions (\ref{deltarho}) and (\ref{som}) for the gate error, and its minimization (\ref{elel}), are the main results of this work.
\begin{figure}\label{F3}
\end{figure}
\section{Conclusions}
A) We have expressed an \emph{arbitrary} gate error for finite-dimensional quantum systems as the spectral overlap between the driven-system and the bath spectra. B) We have derived a non-Markovian Euler-Lagrange equation for the time dependence of control parameters whose solution maximizes the gate fidelity. C) This solution leads to \emph{anticorrelation} of the system and bath spectra. Hence, while DD-based methods rely on shifting the \emph{entire} spectrum of the system beyond that of the bath, our optimization takes advantage of gaps or dips of the bath spectra. D) The treatment of a qubit demonstrates that our approach is significantly more economic in terms of energy investment than DD-based methods. Such energy saving may be crucial in terms of fidelity as excessive energies lead to \emph{leakage} into additional levels \cite{aga96}, or \emph{increase} the control noise \cite{Sag09}.
\acknowledgments
The support of EC (MIDAS), DIP, ISF and the Humboldt Award (G.K.) is acknowledged.
\appendix
\section{Derivation of the decoherence (error) expression (\ref{grate})}
The von Neumann equation for the total density operator of the bath and system combined in the interaction picture, \begin{equation}
\frac{\partial}{\partial{t}}\hat{\varrho}_{\mathrm{tot}}(t)
=-\mathrm{i}[\hat{H}_{\mathrm{I}}(t),\hat{\varrho}_{\mathrm{tot}}(t)], \label{vne} \end{equation} can be written in integrated form \begin{equation}
\hat{\varrho}_{\mathrm{tot}}(t)=\hat{\varrho}_{\mathrm{tot}}(0)
-\mathrm{i}\int_{0}^{t}\!\mathrm{d}t_1
[\hat{H}_{\mathrm{I}}(t_1),\hat{\varrho}_{\mathrm{tot}}(t_1)]. \label{vnei} \end{equation} Substituting (\ref{vnei}) back into (\ref{vne}) gives \begin{eqnarray}
\frac{\partial}{\partial{t}}\hat{\varrho}_{\mathrm{tot}}(t)
&=&-\mathrm{i}[\hat{H}_{\mathrm{I}}(t),\hat{\varrho}_{\mathrm{tot}}(0)]
\nonumber\\
&&-\!\int_{0}^{t}\!\mathrm{d}t_1
[\hat{H}_{\mathrm{I}}(t),
[\hat{H}_{\mathrm{I}}(t_1),\hat{\varrho}_{\mathrm{tot}}(t_1)]],\quad \end{eqnarray} and after tracing over the bath, \begin{equation} \label{ME}
\frac{\partial}{\partial{t}}\hat{\varrho}(t)
=-\!\int_{0}^{t}\!\mathrm{d}t_1\mathrm{Tr}_{\mathrm{B}}
[\hat{H}_{\mathrm{I}}(t),
[\hat{H}_{\mathrm{I}}(t_1),\hat{\varrho}_{\mathrm{tot}}(t_1)]]. \end{equation} Although we do not make use of the differential equation for the system state $\hat{\varrho}(t)$, it may be useful to mention that it can be obtained from (\ref{ME}) by neglecting the bath correlations, i.e., setting $\hat{\varrho}_{\mathrm{tot}}(t_1)$ $\!\approx$ $\!\hat{\varrho}_{\mathrm{B}}\otimes\hat{\varrho}(t_1)$, which yields the second-order Nakajima-Zwanzig equation \cite{bookBreuer}. Replacing $\hat{\varrho}_{\mathrm{tot}}(t_1)$ $\!\approx$ $\!\hat{\varrho}_{\mathrm{B}}\otimes\hat{\varrho}(t)$ instead yields the second-order time-convolutionless equation. For the averaging, we make use of \begin{equation} \label{dankert}
\overline{\langle\Psi|\hat{A}|\Psi\rangle\langle\Psi|\hat{B}|\Psi\rangle}
=\frac{\mathrm{Tr}\hat{A}\hat{B}+\mathrm{Tr}\hat{A}\mathrm{Tr}
\hat{B}}{d(d+1)} \end{equation} \cite{dan05} to write the covariance of two operators $\hat{A}$ and $\hat{B}$ as \begin{eqnarray}
Cov(\hat{A},\hat{B})&\equiv&
\overline{
\langle\Psi|\hat{A}\hat{B}|\Psi\rangle
-\langle\Psi|\hat{A}|\Psi\rangle\langle\Psi|\hat{B}|\Psi\rangle
}
\nonumber\\
&=&\kappa\bigl(\langle\hat{A}\hat{B}\rangle
-\langle\hat{A}\rangle\langle\hat{B}\rangle\bigr). \label{cov2} \end{eqnarray} Expressing now the double commutator in (\ref{NZi}) as $[\hat{H}_{\mathrm{I}}(t_1),[\hat{H}_{\mathrm{I}}(t_2), \hat{\varrho}_{\mathrm{tot}}]]$ $\!=$ $\!([\hat{H}_{\mathrm{I}}(t_1), \hat{H}_{\mathrm{I}}(t_2)\hat{\varrho}_{\mathrm{tot}}]$ $\!+$ $\!h.a.)$, and applying (\ref{cov2}), we obtain (\ref{grate}).
\section{Derivation of the spectral overlap error (\ref{som})}
The differential equation (\ref{ME}) for the system state can be written as [see comments following (\ref{ME})] \begin{equation} \label{MEop}
\frac{\partial}{\partial{t}}\hat{\varrho}(t)
\!=\!-\sum_{j,k=1}^{d^2-1}\!\int_{0}^{t}\!\mathrm{d}t_1\!
\bigl\{\!{\Phi}_{jk}(t\!-\!t_1)
[\hat{S}_j(t),\hat{S}_k(t_1)\hat{\varrho}(t)]\!+\!h.a.\!\bigr\}\!, \end{equation} while (\ref{NZi}) reads \begin{eqnarray}
\Delta\hat{\varrho}(t)&=&
\sum_{j,k=1}^{d^2-1}
\int_{0}^{t}\!\!\!\mathrm{d}t_1\!\int_{0}^{t_1}\!\!\!\mathrm{d}t_2
\nonumber\\
&&\times\,
\{{\Phi}_{jk}(t_1\!-\!t_2)[\hat{S}_j(t_1),\hat{S}_k(t_2)\hat{\varrho}(0)]
+h.a.\}\quad\quad \label{dr}
\\
&=&
\sum_{j,k=1}^{d^2-1}\int_{0}^{t}\!\mathrm{d}t_1\int_{0}^{t_1}\!\mathrm{d}t_2
\nonumber\\
&&\times\,
\left\{{R}_{jk}(t_1,t_2)
[\hat{S}_j,\hat{S}_k\hat{\varrho}(0)]+h.a.\right\}.
\nonumber \end{eqnarray}
We can define a decoherence operator \begin{equation} \label{Ropt1t2}
\hat{R}(t_1,t_2)
=\sum_{j,k=1}^{d^2-1}\hat{S}_j(t_1){\Phi}_{jk}(t_1-t_2)\hat{S}_k(t_2), \end{equation} which obeys $\hat{R}^\dagger(t_1,t_2)$ $\!=$ $\!\hat{R}(t_2,t_1)$. Assuming finite $d$, (\ref{grate}) then becomes \begin{eqnarray}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
&=&2\frac{\kappa}{d}\mathrm{Re}
\int_{0}^{t}\mathrm{d}t_1\int_{0}^{t_1}\mathrm{d}t_2
\mathrm{Tr}\hat{R}(t_1,t_2)
\nonumber\\
&=&\frac{\kappa}{d}\int_{0}^{t}\mathrm{d}t_1\int_{0}^{t}\mathrm{d}t_2
\mathrm{Tr}\hat{R}(t_1,t_2). \label{overl} \end{eqnarray} Alternatively, by defining the spectral counterparts of the ingredients of (\ref{Ropt1t2}): \begin{eqnarray} \label{bcs}
{G}_{jk}(\omega)
&=&\int_{-\infty}^{\infty}\!\mathrm{d}t\;\mathrm{e}^{\mathrm{i}\omega{t}}\;
\mathrm{Re}{\Phi}_{jk}(t),
\\ \label{Sjomega}
\hat{S}_j(\omega)&=&\frac{1}{\sqrt{2\pi}}\int_{0}^t\mathrm{d}\tau\,
\mathrm{e}^{\mathrm{i}\omega\tau}\hat{S}_j(\tau),
\\
{F}_{kj}(\omega)&=&\frac{1}{2t}
\mathrm{Tr}[\hat{S}_k(\omega)\hat{S}_j^\dagger(\omega)], \label{Fjkomega} \end{eqnarray} Equation (\ref{overl}) can be written as the following spectral overlap \begin{eqnarray}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
&=&2\frac{\kappa}{d}\int_{0}^{\infty}\mathrm{d}\omega\,
\sum_{j,k=1}^{d^2-1}\mathrm{Tr}
[\hat{S}_j^\dagger(\omega){G}_{jk}(\omega)\hat{S}_k(\omega)]\quad
\nonumber\\
&=&4t\,\frac{\kappa}{d}\int_{0}^{\infty}\mathrm{d}\omega\,
\sum_{j,k=1}^{d^2-1}{G}_{jk}(\omega){F}_{kj}(\omega). \label{soo} \end{eqnarray}
\section{Modulation Errors}
Since, in practice, a modulation can be realized only with finite accuracy, it is important to consider the effect of modulation errors. To do so, we add to $\hat{H}_{\mathrm{S}}(t)$ a small random Hamiltonian $\hat{H}_{\mathrm{N}}(t)$ which acts on the system variables and repeat the previous analysis without $\hat{H}_{\mathrm{N}}(t)$ in the interaction picture. In addition, we now perform an ensemble average (also denoted with an overbar) over different realizations of $\hat{H}_{\mathrm{N}}(t)$. Neglecting systematic errors, $\overline{\hat{H}_{\mathrm{N}}}(t)$ $\!=$ $\!0$, we can in analogy to (\ref{Phijkconst}) define a correlation matrix $\underline{\bm{\Phi}}^{\mathrm{N}}(t_1,t_2)$ with elements \begin{eqnarray}
{\Phi}^{\mathrm{N}}_{jk}(t_1,t_2)&=&\overline{h_j(t_1)h_k(t_2)},
\\
h_j(t)&=&\frac{1}{2}\mathrm{Tr}[\hat{H}_{\mathrm{N}}(t)\hat{S}_j], \end{eqnarray} which gives rise to a noise contribution \begin{equation}
\underline{\bm{R}}^{\mathrm{N}}(t_1,t_2)
=\underline{\bm{\epsilon}}^T(t_1)\underline{\bm{\Phi}}^{\mathrm{N}}(t_1,t_2)
\underline{\bm{\epsilon}}(t_2), \end{equation} that must be added to (\ref{Rmat1t2}) with $\underline{\bm{\epsilon}}(t)$ defined as before. Assuming ${\Phi}^{\mathrm{N}}_{jk}(t_2,t_1)$ $\!=$ $\!{\Phi}^{\mathrm{N}}_{kj}(t_1,t_2)$, we have $\underline{\bm{R}}^{\mathrm{N}\,\dagger}(t_1,t_2)$ $\!=$ $\!\underline{\bm{R}}^{\mathrm{N}}(t_2,t_1)$, and (\ref{deltarho}) now holds for $\overline{\overline{\langle\Delta\hat{\varrho}(t)\rangle}}$: the double overbar means that $\langle\Delta\hat{\varrho}(t)\rangle$ is averaged over both the initial states and the ensemble. This analysis accounts for modulation errors if we use a \emph{modified} correlation function containing both system-noise and bath contributions and refer to the \emph{ensemble} only.
\section{Euler-Lagrange Variational Analysis}
The minimization of (\ref{deltarho}) subject to (\ref{ES}) constitutes the \emph{original (unsimplified) extremal problem in terms of $\underline{\bm{{f}}}$}. The stationary condition corresponding to (\ref{ES}), \begin{equation}
\underline{\bm{\delta}}\overline{\langle\Delta\hat{\varrho}(t)\rangle}
+\lambda\underline{\bm{\delta}}E_{\mathrm{S}}=0, \end{equation} with variations fixed at the boundaries,
$\underline{\bm{\delta}}\underline{\bm{{f}}}({t_1})|_{{t_1}=0,t}=0$, yields an Euler-Lagrange equation \begin{equation} \label{ELG}
\mathrm{Re}\mathrm{Tr}\left[\ddot{\hat{U}}(t_1)\nabla\hat{U}^\dagger(t_1)
-\lambda\int_{0}^{t}\!\mathrm{d}t_2\nabla\hat{R}(t_1,t_2)\right]=0. \end{equation} Here $\nabla_l$ $\!=$ $\!\partial/\partial{{f}}_l(t_1)$ and the double dots denote a second derivative with regard to $t_1$. In order to obtain (\ref{ELG}), we have applied in (4) the relation \begin{equation} \label{subs}
\hat{H}_{\mathrm{S}}({t_1})
=\mathrm{i}\dot{\hat{U}}({t_1})\hat{U}^\dagger({t_1}). \end{equation}
The Lagrange multiplier in (\ref{elel}) can be shown to obey \begin{equation} \label{ele}
\lambda=\frac{\sqrt{b^2+a(E-c)}-b}{a}, \end{equation} where \begin{eqnarray}
a&=&\int_{0}^{t}\mathrm{d}t_1
\Bigl|\int_{0}^{t_1}\!\mathrm{d}t_2\,\underline{\bm{{g}}}(t_2)\Bigr|^2,
\\
b&=&\int_{0}^{t}\mathrm{d}t_1\int_{0}^{t_1}\!\mathrm{d}t_2\,
\dot{\underline{\bm{{f}}}}(0)\cdot\underline{\bm{{g}}}(t_2),
\\
c&=&t|\dot{\underline{\bm{{f}}}}(0)|^2. \end{eqnarray} Note that for $\dot{\underline{\bm{{f}}}}(0)$ $\!=$ $\!0$ we have $b$ $\!=$ $\!c$ $\!=$ $\!0$ and (\ref{ele}) reduces to $\lambda$ $\!=$ $\!\sqrt{E/a}$.
\section{Bloch Equation Analysis}
The state evolution of a qubit can be formulated in terms of the Bloch vector $\underline{\bm{r}}$ with components $r_j$ $\!=$ $\!\mathrm{Tr}[\hat{\sigma}_j\hat{\varrho}(t)]$, $j$ $\!=$ $\!1,2,3$, as the equation of a ``top'' forced by time-dependent torque \begin{equation} \label{BE3}
\dot{\underline{\bm{r}}}
=\underline{\bm{L}}_{-}\!\cdot\!\underline{\bm{r}}
+\underline{\bm{L}}_{+}\!\cdot\!(\underline{\bm{r}}-\underline{\bm{r}}_0). \end{equation} Here the matrix function \begin{equation} \label{BE3L2}
\underline{\bm{L}}=4\mathrm{Re}\int_{0}^{t}\!\mathrm{d}t_1
\{\underline{\bm{R}}^{T}(t,t_1)
-[\mathrm{Tr}\underline{\bm{R}}(t,t_1)]\bm{I}\} \end{equation} has been decomposed into its (anti)symmetric parts $\underline{\bm{L}}_{\pm}$ $\!=$ $\!(\underline{\bm{L}}$ $\!\pm$ $\!\underline{\bm{L}}^{T})/2$, while \begin{eqnarray}
\underline{\bm{r}}_0
&=&-\underline{\bm{L}}_{+}^{-1}\!\cdot\!\underline{\bm{b}},
\\
{b}_j&=&4\mathrm{Im}\int_{0}^{t}\!\mathrm{d}t_1
\mathrm{Tr}[\underline{\bm{\sigma}}_j\underline{\bm{R}}(t,t_1)], \end{eqnarray} is the quasi-steady state under the chosen time-dependent control. The term $\underline{\bm{L}}_{+}\!\cdot\!(\underline{\bm{r}}-\underline{\bm{r}}_0)$ accounts for the dynamically-modified relaxation of $\langle\hat{\sigma}_j\rangle$ at non-Markov time-dependent rates that are the eigenvalues of $\underline{\bm{L}}_{+}(t)$, reverting to the standard (Markov) rates $1/T_j$ in the limit of slow control. The term $\underline{\bm{L}}_{-}\!\cdot\!\underline{\bm{r}}$ $\!=$ $\!\underline{\bm{\Delta\omega}}\times\underline{\bm{r}}$, reflects a bath-induced energy shift \begin{equation}
{\Delta\omega}_j=2\mathrm{Re}\int_{0}^{t}\!\mathrm{d}t_1
\mathrm{Tr}[\underline{\bm{\sigma}}_j\underline{\bm{R}}(t,t_1)], \end{equation} since it represents a unitary evolution observed in the instantaneous interaction picture. The elements of the SO(3) generator matrices $\underline{\bm{\sigma}}_j$ can be calculated from \begin{equation}
2(\underline{\bm{\sigma}}_j)_{ik}
=\frac{\mathrm{Tr}([\hat{\sigma}_i,\hat{\sigma}_j]\hat{\sigma}_k)}
{2\mathrm{i}}. \end{equation} The optimized instantaneous control parameters $\omega_j(t)$ are obtained upon minimizing the departure of $\underline{\bm{r}}$ from its initial value and following the procedure in the main text leading to (\ref{instpar}). The results are illustrated in Fig.~\ref{F1S} (see also Fig.~\ref{F1} main text).
\begin{figure}
\caption{ Optimized time dependence of the control parameters of the system Hamiltonian; the solid red, dashed green, and dotted blue lines show $\omega_{1,2,3}$, respectively: (i) parameters referring to the graphs (a),(b),(c) in Fig.~1; (ii) parameters referring to graphs (d),(e),(f) in Fig.~1. }
\label{F1S}
\end{figure}
\section{Comparison with Uhrig's DD-sequence}
In Figs.~\ref{F1} and \ref{F2} of the main text we compare our results with the following DD sequences: \begin{itemize} \item[]{a)} Concatenated DD (CDD) \cite{KLi05} defined by \begin{equation}
p_{n+1}=p_nXp_nZp_nXp_nZ \end{equation} with $p_0$ $\!=$ $\!f_\tau$ denoting free evolution over time $\tau$, where $p^{\mathrm{CDD}}_1$ $\!=$ $\!(fXfZ)^2$ recovers periodic DD (PDD). \item[]{b)} Uhrig-DD (UDD) \cite{uhr07} defined (for $n$ pulses in $Z$) by \begin{equation}
p^{\mathrm{UDD}}_n=f_{t-\tau_n}{Z}f_{\tau_n-\tau_{n-1}}
{Z}\cdots{Z}f_{\tau_2-\tau_1}{Z}f_{\tau_1} \end{equation} with \begin{equation}
\tau_{j}=t\sin^2[\pi{j}/(2(n+1))],
p^{\mathrm{UDD}}_1=fZf \end{equation} recovers the spin echo (SE) and \begin{equation}
p^{\mathrm{UDD}}_2=(fZf)^2 \end{equation} the CPMG-sequence. \item[]{c)} Combined CDD and UDD in concatenated UDD (CUDD) \cite{uhr07} defined by concatenating according to \begin{equation}
p_{n+1}=p_nXp_nX \end{equation} an $m$-pulse UDD sequence \begin{equation}
p_0=p^{\mathrm{UDD}}_m \end{equation} for $m$ times. \end{itemize} The named basic sequences can be iterated, i.e., repeatedly applied.
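For completeness, the UDD timings of item b) are easy to tabulate; the following sketch (Python/NumPy; the function name is ours) reproduces the spin-echo and CPMG special cases.
\begin{verbatim}
# UDD pulse timings tau_j = t * sin^2(pi*j / (2*(n+1))), j = 1..n; a sketch.
import numpy as np

def udd_times(n, t=1.0):
    j = np.arange(1, n + 1)
    return t * np.sin(np.pi * j / (2 * (n + 1))) ** 2

print(udd_times(1))    # [0.5]        -> spin echo (SE)
print(udd_times(2))    # [0.25 0.75]  -> CPMG
print(udd_times(5))
\end{verbatim}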
\section{Leakage from a Subspace}
We can adapt our formalism to the situation where the
$d$-dimensional state space (to which the relevant quantum information is to be confined) is a \emph{subspace} of an $N$-dimensional system state space [12]. To do so, the averaging of the initial states $|\Psi\rangle$ is performed on the subspace, for which
$|\Psi\rangle$ $\!=$ $\!\hat{P}|\Psi\rangle$, where
$\hat{P}$ $\!=$ $\!\sum_{n=1}^{d}|\varphi_n\rangle\langle\varphi_n|$ is the associated projector. Applying (\ref{dankert}) to (\ref{dr}) and defining a matrix $\underline{\bm{\Gamma}}$ $\!=$ $\!\underline{\bm{\Gamma}}^\dagger$ with elements \begin{equation}
{\Gamma}_{ik}
=\frac{\mathrm{Tr}(\hat{S}_i\hat{P}\hat{S}_k)}{d}
-\frac{
\mathrm{Tr}(\hat{S}_i\hat{P}\hat{S}_k\hat{P})
+\mathrm{Tr}(\hat{S}_i\hat{P})\mathrm{Tr}(\hat{S}_k\hat{P})
}{d(d+1)}, \end{equation} generalizes (\ref{deltarho}) to \begin{equation} \label{deltarhogen}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
=\int_{0}^{t}\!\mathrm{d}t_1\int_{0}^{t}\!\mathrm{d}t_2
\mathrm{Tr}\left[\underline{\bm{R}}(t_1,t_2)\underline{\bm{\Gamma}}\right], \end{equation} which recovers (\ref{deltarho}) for $N$ $\!=$ $\!d$, where ${\Gamma}_{ik}$ $\!=$ $\!\frac{2}{d+1}\delta_{ik}$ $\!=$ $\!2\frac{\kappa}{d}\delta_{ik}$. Equivalently, \begin{eqnarray}
\overline{\langle\Delta\hat{\varrho}(t)\rangle}
&=&2\int_{0}^{\infty}\mathrm{d}\omega\,
\mathrm{Tr}\bigl[
\underline{\bm{\epsilon}_t}^\dagger(\omega)
\underline{\bm{G}}_{\mathrm{re}}(\omega)
\underline{\bm{\epsilon}_t}(\omega)
\mathrm{Re}\underline{\bm{\Gamma}}
\nonumber\\
&&\quad\quad\quad-\underline{\bm{\epsilon}_t}^\dagger(\omega)
\underline{\bm{G}}_{\mathrm{im}}(\omega)
\underline{\bm{\epsilon}_t}(\omega)
\mathrm{Im}\underline{\bm{\Gamma}}\bigr]
\nonumber\\
&=&t\,\int_{-\infty}^{\infty}\mathrm{d}\omega\,
\mathrm{Tr}[\underline{\bm{G}}_{\mathrm{tot}}(\omega)
\underline{\bm{F}_t^{\scriptscriptstyle{\Gamma}}}(\omega)], \label{somtot} \end{eqnarray} which replaces (9). While $\underline{\bm{G}}_{\mathrm{re}}(\omega)$ $\!=$ $\!\int_{-\infty}^{\infty}\!\mathrm{d}t\;\mathrm{e}^{\mathrm{i}\omega{t}}\; \mathrm{Re}\underline{\bm{\Phi}}_{}(t)$ is identical to $\underline{\bm{G}}_{}(\omega)$ in (9), here we also need $\underline{\bm{G}}_{\mathrm{im}}(\omega)$ $\!=$ $\!\int_{-\infty}^{\infty}\!\mathrm{d}t\;\mathrm{e}^{\mathrm{i}\omega{t}}\; \mathrm{Im}\underline{\bm{\Phi}}_{}(t)$, or the combined $\underline{\bm{G}}_{\mathrm{tot}}(\omega)$ $\!=$ $\!\int_{-\infty}^{\infty}\!\mathrm{d}t\;\mathrm{e}^{\mathrm{i}\omega{t}}\; \underline{\bm{\Phi}}_{}(t)$, whereas in (\ref{somtot}) $\underline{\bm{F}_t^{\scriptscriptstyle{\Gamma}}}(\omega)$ $\!=$ $\!\frac{1}{t}\underline{\bm{\epsilon}_t}(\omega)\underline{\bm{\Gamma}} \underline{\bm{\epsilon}_t}^\dagger(\omega)$ replaces $\underline{\bm{F}_t}(\omega)$ in (9).
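As a numerical illustration of the definition of $\underline{\bm{\Gamma}}$ (a sketch only; the choice $N=d=2$ with the Pauli operator basis is made here for concreteness), one can verify that $\underline{\bm{\Gamma}}$ reduces to $\frac{2}{d+1}\delta_{ik}$ when $\hat{P}$ is the full identity.
\begin{verbatim}
import numpy as np

def gamma_matrix(basis, P, d):
    """Gamma_{ik} = Tr(S_i P S_k)/d
                  - [Tr(S_i P S_k P) + Tr(S_i P) Tr(S_k P)] / (d (d + 1))."""
    n = len(basis)
    G = np.zeros((n, n), dtype=complex)
    for i, Si in enumerate(basis):
        for k, Sk in enumerate(basis):
            G[i, k] = (np.trace(Si @ P @ Sk) / d
                       - (np.trace(Si @ P @ Sk @ P)
                          + np.trace(Si @ P) * np.trace(Sk @ P)) / (d * (d + 1)))
    return G

# N = d = 2 with the Pauli basis: P is the identity and Gamma_{ik} = (2/3) delta_{ik}.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.round(gamma_matrix([s1, s2, s3], np.eye(2, dtype=complex), d=2).real, 3))
\end{verbatim}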
(\ref{deltarhogen}) encompasses both the \emph{internal} decoherence effects within the system-subspace associated with $\hat{P}$ and \emph{leakage} effects related to a population $\overline{\langle\hat{Q}\rangle}$ $\!=$ $\!\overline{\mathrm{Tr}\bigl[\hat{\varrho}(t)\hat{Q}\bigr]}$ of the orthogonal complement $\hat{Q}$ $\!=$ $\!\hat{I}$ $\!-$ $\!\hat{P}$, averaged over all initial states on $\hat{P}$, \begin{eqnarray}
\overline{\mathrm{Tr}\bigl[\hat{\varrho}(t)\hat{Q}\bigr]}
&=&\mathrm{Tr}\bigl[
\overline{\Delta\hat{\varrho}(t)}\hat{P}\bigr]
\\
&=&\int_{0}^{t}\!\mathrm{d}t_1\int_{0}^{t}\!\mathrm{d}t_2
\mathrm{Tr}\left[\underline{\bm{R}}(t_1,t_2)
\underline{\bm{\Gamma}_{\mathrm{L}}}\right], \end{eqnarray} $\underline{\bm{\Gamma}_{\mathrm{L}}}$ $\!=$ $\!\underline{\bm{\Gamma}_{\mathrm{L}}}^\dagger$ being a matrix with elements \begin{equation}
({\Gamma}_{\mathrm{L}})_{ik}
=\frac{\mathrm{Tr}(\hat{S}_i\hat{P}\hat{S}_k\hat{Q})}{d}. \end{equation} If leakage is disregarded in the procedure minimizing $\overline{\langle\Delta\hat{\varrho}(t)\rangle}$, it is likely that a stronger system modulation increases the population of $\hat{Q}$, giving rise to a significant surplus error. This is illustrated in Fig.3, where optimal and PDD-modulations originally designed within a two-level model [as shown in Fig.2(a)] are reconsidered for a two-level subspace of a three-level system. This is done by replacing the Pauli matrices $\hat{\sigma}_i$ with the corresponding Gell-Mann matrices $\hat{\gamma}_i$, multiplying $\hat{U}(t)$ with $\mathrm{e}^{-\mathrm{i}t{f}\hat{\gamma}_8}$ to separate the levels, and adding to $\hat{H}_{\mathrm{I}}$ a leakage term
$\hat{\gamma}_6\hat{B}_{\mathrm{L}}$. The latter gives rise to an additional bath correlation function $\Phi_{\mathrm{L}}$, assuming here that it can be described by a $1/\omega$-bath coupling spectrum. The total system space is hence spanned by the energy states $|1\rangle$,
$|2\rangle$, and $|3\rangle$, the projector onto the relevant subspace is
$\hat{P}$ $\!=$ $\!\sum_{n=1}^{2}|{n}\rangle\langle{n}|$, whereas
$\hat{Q}$ $\!=$ $\!|3\rangle\langle3|$, and the states $|\Psi\rangle$ used for averaging are arbitrary superpositions of $|1\rangle$ and $|2\rangle$. The time-independent ${f}$ is a parameter that controls the coupling to the ``leakage bath''. It reflects the fact that the energy of the leakage level
$|3\rangle$ induces a free evolution, which is shifted to high frequencies for sufficiently large ${f}$, when $|3\rangle$ is strongly energy-detuned from the other two levels, thus providing a ``natural'' dynamic decoupling of our $1/\omega$-coupling spectrum, and hence the vanishing of the surplus error induced by leakage, justifying the two-level system approximation.
\end{document}
\begin{document}
\title{On the Width of the Regular $n$-Simplex}
\author{Sariel Har-Peled\SarielThanks{Work on this paper
was partially supported by a NSF AF award
CCF-1907400.
}
\and
Eliot W. Robson
\EliotThanks{} }
\date{\today}
\maketitle
\begin{abstract}
Consider the regular $n$-simplex $\Mh{\Delta}_n$ -- it is formed by the
convex-hull of $n+1$ points in Euclidean space, with each pair of
points being in distance exactly one from each other. We prove an
exact bound on the width of $\Mh{\Delta}_n$ which is $\approx \sqrt{2/n}$.
Specifically,
\begin{math}
\mathrm{width}(\Mh{\Delta}_n) = \sqrt{\frac{2}{n + 1}}
\end{math}
if $n$ is odd, and
\begin{math}
\mathrm{width}(\Mh{\Delta}_n) = \sqrt{\frac{2(n+1)}{n(n+2)}}
\end{math}
if $n$ is even. While this bound is well known
\cite{gk-iojrc-92,a-wds-77}, we provide a self-contained
elementary proof that might (or might not) be of interest. \end{abstract}
\section{The width of the regular simplex}
A regular $n$-simplex $\Mh{\Delta}_n$ is a set of $n + 1$ points in Euclidean space such that every pair of points is in distance exactly $1$ from each other. For simplicity, it is easier to work with the simplex $\Mh{D}_n$ formed by the convex-hull of $\Mh{\mathcalb{e}}_1,\ldots, \Mh{\mathcalb{e}}_{n+1} \in \mathbb{R}^{n+1}$, where $\Mh{\mathcalb{e}}_i$ is the $i$th\xspace standard unit vector\footnote{That is, $\Mh{\mathcalb{e}}_i$ is $0$ in all
coordinates except the $i$th\xspace coordinate where it is $1$.}. Observe that $\bigl.\dwY{\smash{\Mh{\mathcalb{e}}_i}}{\smash{\Mh{\mathcalb{e}}_j}} = \sqrt{2}$, which implies that $\Mh{D}_n = \sqrt{2}\Mh{\Delta}_n$. All the vertices of $\Mh{D}_n$ lie on the hyperplane $\Mh{\mathcalb{h}} \equiv \sum_{i=1}^{n+1} x_i = \DotProdY{(x_1,\ldots,
x_{n+1})}{\mathds{1}}= 1$, where $\mathds{1} = (1, 1, \ldots , 1)$. In particular, the point $\cenX{n} = \mathds{1}/(n + 1) \in \Mh{D}_n$ is the center of $\Mh{D}_n$, and is in equal distance \begin{equation*}
\widehat{R_n}
=
\dY{\cenX{n}}{\Mh{\mathcalb{e}}_i}
=
\sqrt{n \frac{1}{(n+1)^2} + \pth{1-\frac{1}{n+1}}^2}
=
\sqrt{1 - \frac{2}{n+1} + \frac{n+1}{(n+1)^2}}
=
\sqrt{\frac{n}{n+1}}. \end{equation*} from all the vertices of $\Mh{D}_n$. Note that the largest ball one can place in $\Mh{\mathcalb{h}}$, which is still contained in $\Mh{D}_n$, is centered at $\cenX{n}$. Furthermore, it has radius \begin{equation*}
\widehat{r_n}
=
    \dY{\cenX{n}}{(0,\cenX{n-1})}
=
\sqrt{\frac{1}{(n+1)^2} + n \pth{\frac{1}{n} - \frac{1}{n+1}}^2}
=
\sqrt{\frac{n+1}{n(n+1)^2}}
=
\frac{1}{\sqrt{n(n+1)}}. \end{equation*}
For a unit vector $u$, and a set $\Mh{P} \subseteq \mathbb{R}^{n+1}$, let \begin{equation*}
\pwY{u}{\Mh{P}}
=
\max_{p \in \Mh{P}} \DotProdY{u}{p} - \min_{p \in \Mh{P}}
\DotProdY{u}{p} \end{equation*} denote the \emphi{projection width} of $\Mh{P}$ in the direction of $u$. The \emphi{width} of $\Mh{P}$ is the minimum projection width over all directions.
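As a quick numerical illustration (a sketch only, based on the vertex description of $\Mh{D}_n$ above), the projection width of $\Mh{D}_n$ in a direction $u$ is simply $\max_i u_i - \min_i u_i$, and a balanced $\pm 1$ direction already gives the value $2/\sqrt{n+1}$ that appears in the odd case below.
\begin{verbatim}
import numpy as np

def projection_width(points, u):
    """max_p <u,p> - min_p <u,p> over a finite point set (rows of `points`)."""
    proj = points @ u
    return proj.max() - proj.min()

n = 5                               # odd, so the width of D_n should be 2/sqrt(n+1)
V = np.eye(n + 1)                   # the vertices e_1, ..., e_{n+1} of D_n

z = np.array([1, 1, 1, -1, -1, -1]) / np.sqrt(n + 1)   # a balanced direction
rng = np.random.default_rng(0)
w = rng.standard_normal(n + 1)
w -= w.mean()                       # make w parallel to the hyperplane sum x_i = 1
w /= np.linalg.norm(w)

print(projection_width(V, z), 2 / np.sqrt(n + 1))  # equal
print(projection_width(V, w))                      # never smaller than 2/sqrt(n+1)
\end{verbatim}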
\paragraph{Energy of points.}
For a vector $v = (v_1 , \ldots, v_{n+1}) \in \mathbb{R}^{n+1}$, let \begin{math}
\avgX{v} = \sum_{i} v_i / (n+1), \end{math} and let \begin{equation*}
\widehat{v} = v - \avgX{v} \mathds{1}, \end{equation*} be the translation of $v$ by its centroid $\avgX{v} \mathds{1}$ so that $\DotProdY{\widehat{v}}{\mathds{1}} = 0$. The \emphi{energy} of $v$ is $\egX{v} = \norm{\widehat{v}}^2$. The energy is the minimum $1$-mean clustering price of the numbers $v_1 ,\ldots, v_{n+1}$. We need the following standard technical claim, which implies that if we move a value away from the centroid, the $1$-mean clustering price of the set goes up.
\begin{claim}
\clmlab{energy}
Let $v = (v_1 , \ldots, v_{n+1}) \in \mathbb{R}^{n+1}$ be a point, and
let $u$ be a point that is identical to $v$ in all coordinates
except the $i$th\xspace one, where $u_i > v_i \geq \avgX{v}$. Then
$\egX{u} > \egX{v}$. The same holds if $u_i < v_i < \avgX{v}$. \end{claim}
\begin{proof}
Let $\delta_i = (u_i - v_i )/(n + 1)$, and observe that
$\avgX{u} = \avgX{v} + \delta_i$. As such, we have
\begin{equation*}
\egX{u}
=
\sum\nolimits_j (u_j - \avgX{u})^2
=
    \sum\nolimits_j (v_j - \avgX{v}-\delta_i)^2
    - (v_i - \avgX{v} - \delta_i)^2 +
    (u_i - \avgX{v} - \delta_i)^2.
  \end{equation*}
  Since
  \begin{math}
    \sum\nolimits_j (v_j - \avgX{v}) = 0,
  \end{math}
  we have that
  \begin{equation*}
    \sum\nolimits_j (v_j -\avgX{v} -\delta_i)^2
    =
    \sum\nolimits_j (v_j -\avgX{v})^2
    -
    \pth{2\delta_i \sum\nolimits_j (v_j -\avgX{v})}
    +
    \sum\nolimits_j \delta_i^2
    =
    \egX{v}
    + (n+1) \delta_i^2.
  \end{equation*}
  Rearranging the above and using that $u_i > v_i\geq \avgX{v}$, together with
  $2\delta_i = 2(u_i-v_i)/(n+1) \leq u_i - v_i$ (so that
  $u_i + v_i - 2\avgX{v} - 2\delta_i \geq 2(v_i - \avgX{v}) \geq 0$), we have
  \begin{equation*}
    \egX{u} - \egX{v}
    =
    (n+1) \delta_i^2
    + (u_i - \avgX{v} - \delta_i)^2
    - (v_i - \avgX{v} - \delta_i)^2
    =
    (n+1) \delta_i^2
    + (u_i - v_i)
    (u_i + v_i -2\avgX{v} - 2\delta_i)
    \geq (n+1)\delta_i^2 > 0.
\end{equation*} \end{proof}
\begin{lemma}
\lemlab{w:odd}
For $n$ odd, the width of $\Mh{D}_n$ is $2/\sqrt{n+1}$, and this is
  realized by the projection width of all the directions in
$\Mh{\mathcal{H}} = \Set{\smash{v/ \sqrt{n + 1}}}{ v \in \{-1, +1\}^{n+1}
\text{ and } \DotProdY{v}{\mathds{1}} =0}\Bigr.$ (and no other
direction). \end{lemma}
\begin{proof}
Consider a unit vector $z$ that realizes the minimum width of
$\Mh{D}_n$ – here, in addition to $\norm{z} = 1$, we also require that
$\DotProdY{z}{\mathds{1}} = 0$, as one has to consider only directions
that are parallel to the hyperplane containing $\Mh{D}_n$. To this
end, let
\begin{math}
\beta = \max_i z_i
\end{math}
and
\begin{math}
\alpha = \min_i z_i
\end{math}
and observe that
\begin{equation*}
\mathrm{width}(\Mh{D}_n)
=
\pwY{z}{\Mh{D}_n}
=
\max_{i} \DotProdY{z}{\Mh{\mathcalb{e}}_i} - \min_{i}
\DotProdY{z}{\Mh{\mathcalb{e}}_i}
=
\beta - \alpha.
\end{equation*}
  Next, consider the point $u$, where for all $i$ we set
\begin{equation*}
u_i
=
\begin{cases}
\alpha & z_i < 0\\
\beta & z_i \geq 0.
\end{cases}
\end{equation*}
  A careful repeated application of \clmref{energy} implies that
$\egX{u} >\egX{z}$ if any coordinate of $z$ is not already either
$\alpha$ or $\beta$. But then the point $\widehat{u}$ has (i)
``width'' $\beta - \alpha$, (ii) $\norm{\widehat{u}} > 1$, and
(iii) $\DotProdY{\widehat{u}}{\mathds{1}}=0$. But this implies that the
projection width of $\Mh{D}_n$ on $\widehat{u} / \norm{\widehat{u}}$ is
$(\beta-\alpha)/ \norm{\widehat{u}} < \beta - \alpha$, which is a
contradiction to the choice of $z$.
Thus, it must be that all the coordinates of $z$ are either
$\alpha$ or $\beta$. Let $t$ be the number of coordinates of $z$
that are $\alpha$, and observe that
\begin{equation*}
\egX{z}
=
\norm{z}^2
=
t\alpha^2 + (n+1-t)\beta^2
=
1
\qquad\text{and}\qquad
\DotProdY{z}{\mathds{1}}= t\alpha + (n+1-t) \beta = 0.
\end{equation*}
This implies that $\beta = - \frac{t}{n+1-t}\alpha$, and thus
\begin{equation*}
t\alpha^2 + \frac{(n+1-t)t^2}{(n+1-t)^2} \alpha^2 = 1
\quad\implies\quad
\frac{t(n+1-t)+ t^2}{n +1 -t} \alpha^2 = 1
\quad\implies\quad
\alpha = -
\sqrt{\frac{n+1-t}{t(n+1)}}.
\end{equation*}
Thus, the width of $\Mh{D}_n$ is
\begin{math}
\beta - \alpha =
\bigl(1 + \tfrac{t}{n+1-t} \bigr) \sqrt{\frac{n+1-t}{t(n+1)}}
=
\sqrt{\frac{n+1}{t(n+1-t)}}.
\end{math}
The last quantity is minimized when the denominator is maximized,
which happens for $t = (n+1)/2$. Namely, the width of $\Mh{D}_n$ is
$ 2/\sqrt{n+1}$.
We have that $t\alpha + t\beta = 0$, which implies that
$\alpha = -\beta$ and thus $\beta = 1/\sqrt{n + 1}$. It follows
that $z \in \Mh{\mathcal{H}}$. \end{proof}
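A randomized sanity check of the lemma (a sketch only, and of course not a proof): sampling unit directions $z$ with $\DotProdY{z}{\mathds{1}}=0$ never produces a projection width below $2/\sqrt{n+1}$, and the sampled minima approach this value.
\begin{verbatim}
import numpy as np

def sampled_min_width(n, trials=50_000, seed=1):
    """Smallest value of max_i z_i - min_i z_i found among random unit
    directions z with <z, (1,...,1)> = 0."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(trials):
        z = rng.standard_normal(n + 1)
        z -= z.mean()
        z /= np.linalg.norm(z)
        best = min(best, z.max() - z.min())
    return best

for n in (3, 5, 7):
    print(n, round(sampled_min_width(n), 4), round(2 / np.sqrt(n + 1), 4))
\end{verbatim}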
\begin{lemma}
\lemlab{w:even}
For $n$ even, the width of $\Mh{D}_n$ is
$2 \sqrt{ \smash{\frac{n+1}{n(n+2)}}\bigr.}$. \end{lemma}
\begin{proof}
The proof of \lemref{w:odd} goes through with minor
modifications. The minimum value is realized by $t = n/2$. This
implies that
\begin{equation*}
\alpha
=
- \sqrt{\frac{n+1-t}{t(n+1)}}
=
- \sqrt{\frac{n+2}{n(n+1)}}
\qquad\text{and}\qquad
\beta
=
-\frac{t}{n+1-t}\alpha
=
\sqrt{\frac{n}{(n+1)(n+2)}}.
\end{equation*}
As a sanity
check, observe that
\begin{math}
t\alpha^2 + (n + 1 - t)\beta^2
=
\frac{n}{2}\frac{n+2}{n(n+1)} + \frac{n+2}{2}
\frac{n}{(n+1)(n+2)}
=
1.
\end{math}
Thus, the width of
$\Mh{D}_n$ is
\begin{equation*}
\beta - \alpha
=
\sqrt{\frac{n}{(n+1)(n+2)}}
+
\sqrt{\frac{n+2}{n(n+1)}}
=
\frac{2(n+1)}{\sqrt{n(n+1)(n+2)}}
=
2 \sqrt{\frac{n+1}{n(n+2)}}.
\end{equation*} \end{proof}
Using that $\Mh{\Delta}_n = \Mh{D}_n/ \sqrt{2}$ and rescaling the above bounds, we get the following.
\begin{corollary}
\corlab{bounds}
The width of the regular $n$-simplex $\Mh{\Delta}_n$ is
\begin{math}
\sqrt{2/(n + 1)}
\end{math}
if $n$ is odd. The width is
\begin{math}
\sqrt{\frac{2(n+1)}{n(n+2)}}
\end{math}
if $n$ is even. The inradius (i.e., radius of largest ball inside
$\Mh{\Delta}_n$) is
\begin{math}
r_n = 1/\sqrt{2n(n+1)},
\end{math}
and the circumradius (i.e.,
  radius of minimum ball enclosing $\Mh{\Delta}_n$) is
\begin{math}
R_n = \sqrt{\frac{n}{2(n+1)}}.
\end{math} \end{corollary}
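The closed-form expressions of the corollary are easy to confirm numerically from the scaling $\Mh{\Delta}_n = \Mh{D}_n/\sqrt{2}$; the following sketch (illustrative only) computes the circumradius and inradius directly and prints the width formula.
\begin{verbatim}
import numpy as np

def circum_and_inradius(n):
    """Circumradius and inradius of the regular n-simplex with unit edges,
    realized as the scaled standard simplex D_n / sqrt(2) in R^{n+1}."""
    V = np.eye(n + 1) / np.sqrt(2)               # vertices at mutual distance 1
    c = V.mean(axis=0)                           # centroid
    R = np.linalg.norm(V[0] - c)                 # circumradius
    r = np.linalg.norm(V[1:].mean(axis=0) - c)   # distance to a facet centroid
    return R, r

def width_formula(n):
    return np.sqrt(2 / (n + 1)) if n % 2 else np.sqrt(2 * (n + 1) / (n * (n + 2)))

for n in range(2, 8):
    R, r = circum_and_inradius(n)
    print(n,
          np.isclose(R, np.sqrt(n / (2 * (n + 1)))),
          np.isclose(r, 1 / np.sqrt(2 * n * (n + 1))),
          round(width_formula(n), 6))
\end{verbatim}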
\BibTexMode{
} \BibLatexMode{\printbibliography}
\end{document}
\begin{document}
\title{On the difference between the eccentric connectivity index and eccentric distance sum of graphs}
\author{ Yaser Alizadeh $^{a}$ \and Sandi Klav\v zar $^{b}$}
\date{}
\maketitle
\begin{center} $^a$ Department of Mathematics, Hakim Sabzevari University, Sabzevar, Iran\\ e-mail: {\tt [email protected]} \\
$^b$ Faculty of Mathematics and Physics, University of Ljubljana, Slovenia \\
e-mail: {\tt [email protected]}
\end{center}
\begin{abstract} The eccentric connectivity index of a graph $G$ is $\xi^c(G) = \sum_{v \in V(G)}\varepsilon(v)\deg(v)$, and the eccentric distance sum is $\xi^d(G) = \sum_{v \in V(G)}\varepsilon(v)D(v)$, where $\varepsilon(v)$ is the eccentricity of $v$, and $D(v)$ the sum of distances between $v$ and the other vertices. A lower and an upper bound on $\xi^d(G) - \xi^c(G)$ is given for an arbitrary graph $G$. Regular graphs with diameter at most $2$ and joins of cocktail-party graphs with complete graphs form the graphs that attain the two equalities, respectively. Sharp lower and upper bounds on $\xi^d(T) - \xi^c(T)$ are given for arbitrary trees. Sharp lower and upper bounds on $\xi^d(G)+\xi^c(G)$ for arbitrary graphs $G$ are also given, and a sharp lower bound on $\xi^d(G)$ for graphs $G$ with a given radius is proved. \end{abstract}
\noindent {\bf Key words:} eccentricity; eccentric connectivity index; eccentric distance sum; tree
\noindent {\bf AMS Subj.\ Class (2020)}: 05C12, 05C09, 05C92
\section{Introduction} \label{sec:intro}
In this paper we consider simple and connected graphs. If $G = (V(G), E(G))$ is a graph and $u, v \in V(G)$, then the {\em distance} $d_G(u, v)$ between $u$ and $v$ is the number of edges on a shortest $u,v$-path. The eccentricity of a vertex and its total distance are distance properties of central interest in (chemical) graph theory; they are defined as follows. The {\em eccentricity} $\varepsilon_G(v)$ of a vertex $v$ is the distance between $v$ and a farthest vertex from $v$, and the {\em total distance} $D_G(v)$ of $v$ is the sum of distances between $v$ and the other vertices of $G$. Even more fundamental property of a vertex in (chemical) graph theory is its degree (or valence in chemistry), denoted by $\deg_G(v)$. (We may skip the index $G$ in the above notations when $G$ is clear.) Multiplicatively combining two out of these three basic invariants naturally leads to the {\em eccentric connectivity index} $\xi^c(G)$, the {\em eccentric distance sum} $\xi^d(G)$, and the {\em degree distance} $DD(G)$, defined as follows: \begin{align*} \xi^c(G) & = \sum_{v \in V(G)}\varepsilon(v)\deg(v)\,. \\ \xi^d(G) & = \sum_{v \in V(G)}\varepsilon(v)D(v)\,. \\ DD(G) & = \sum_{v \in V(G)}\deg(v)D(v)\,. \end{align*} $\xi^c$ was introduced by Sharma, Goswami, and Madan~\cite{sha-1997}, $\xi^d$ by Gupta, Singh, and Madan~\cite{Gupta}, and $DD$ by Dobrynin and Kochetova~\cite{Dob1994} and by Gutman~\cite{Gut-1994}. These three topological indices are well investigated, selected contrubutions to the eccentric connectivity index are~\cite{Hauweele-2019, Ill-2011, xu-2016}, to the eccentric distance sum~\cite{chen-2018, Illic, xie-2019}, and to the degree distance~\cite{li-2015, Tom-2008, wang-2016}. The three invariants were also compared to other invariants, cf.~\cite{Dank2014, Das2011, das-2015, Das-2016, xu-2016b}. For information on additional topological indices based on eccentricity see~\cite{Madan-2010}.
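For small graphs the three indices are straightforward to evaluate directly from the definitions; the following Python sketch (illustrative only, using the {\tt networkx} package) computes $\xi^c$, $\xi^d$ and $DD$.
\begin{verbatim}
import networkx as nx

def topological_indices(G):
    """Eccentric connectivity index, eccentric distance sum, degree distance."""
    ecc = nx.eccentricity(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = {v: sum(dist[v].values()) for v in G}     # total distance D(v)
    xi_c = sum(ecc[v] * G.degree(v) for v in G)
    xi_d = sum(ecc[v] * D[v] for v in G)
    dd = sum(G.degree(v) * D[v] for v in G)
    return xi_c, xi_d, dd

print(topological_indices(nx.petersen_graph()))   # (60, 300, 450)
print(topological_indices(nx.path_graph(4)))      # (14, 52, 28) for P_4
\end{verbatim}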
In~\cite{hua-2018} the eccentric distance sum and the degree distance are compared, while in~\cite{xu-2017} the difference between the eccentric connectivity index and the (not defined here) connective eccentricity index is studied. The primary motivation for the present paper, however, are the papers~\cite{hua-2019, zhang-2019} in which $\xi^d(G) - \xi^c(G)$ was investigated. In~\cite{zhang-2019}, Zhang, Li, and Xu, besides other results on the two indices, determined sharp upper and lower bounds on $\xi^d(G) - \xi^c(G)$ for graphs $G$ of given order and diameter $2$. Parallel results were also derived for sub-classes of diameter $2$ graphs with specified one of the minimum degree, the connectivity, the edge-connectivity, and the independence number. Hua, Wang, and Wang~\cite{hua-2019} extended the last result to general graphs. More precisely, they characterized the graphs that attain the minimum value of $\xi^d(G) - \xi^c(G)$ among all connected graphs $G$ of given independence number. They also proved a related result for connected graphs with given matching number.
In this paper we continue the investigation along the lines of~\cite{hua-2019, zhang-2019} and proceed as follows. In the rest of this section definitions and some observations needed are listed. In Section~\ref{sec:difference-general}, we give a lower and an upper bound on $\xi^d(G) - \xi^c(G)$ and in both cases characterize the equality case. The upper bound involves the Wiener index, the first Zagreb index, as well as the degree distance of $G$. In Section~\ref{sec:trees} we focus on trees and first prove that among all trees $T$ with given order and diameter, $\xi^d(T)-\xi^c(T)$ is minimized on caterpillars. Using this result we give a lower bound on $\xi^d(T)-\xi^c(T)$ for all trees $T$ with given order, the bound being sharp precisely on stars. We also give a sharp upper bound on $\xi^d(T)-\xi^c(T)$ for trees $T$ with given order. In the last section we give a sharp lower bound and a sharp upper bound on $\xi^d(G)+\xi^c(G)$, compare $\xi^d(G)$ with $\xi^c(G)$ for graphs $G$ with not too large maximum degree, and give a sharp lower bound on $\xi^d(G)$ for graphs $G$ with a given radius.
\subsection{Preliminaries}
The order and the size of a graph $G$ will be denoted by $n(G)$ and $m(G)$, respectively. The star of order $n\ge 2$ is denoted by $S_n$; in other words, $S_n = K_{1,n-1}$. If $n\ge 2$, then the {\em cocktail party graph} $CP_{2n}$ is the graph obtained from $K_{2n}$ by removing a perfect matching. The {\em join} $G\oplus H$ of graphs $G$ and $H$ is the graph obtained from the disjoint union of $G$ and $H$ by connecting by an edge every vertex of $G$ with every vertex of $H$. The maximum degree of a vertex of $G$ is denoted by $\Delta(G)$. A graph $G$ is {\em regular} if all vertices have the same degree. The {\em first Zagreb index}~\cite{gutman-1972} $M_1(G)$ of $G$ is the sum of the squares of the degrees of the vertices of $G$. The {\em Wiener index}~\cite{wiener} $W(G)$ of $G$ is the sum of distances between all pairs of vertices in $G$.
The {\em diameter} ${\rm diam}(G)$ and the {\em radius} ${\rm rad}(G)$ of a graph $G$ are the maximum and the minimum vertex eccentricity in $G$, respectively. A graph $G$ is {\em self-centered} if all vertices have the same eccentricity. If this eccentricity is $d$, we further say that $G$ is {\em $d$-self-centered}. The {\em eccentricity} $\varepsilon(G)$ of $G$ is $$\varepsilon(G)=\sum_{v \in V(G)}\varepsilon(v)\,.$$ The eccentric connectivity index of $G$ can be equivalently written as \begin{equation} \label{eq:eci} \xi^c(G) = \sum_{uv \in E(G)}\big(\varepsilon(u) + \varepsilon(v)\big)\,, \end{equation} and the eccentric distance sum as \begin{equation} \label{eq:eds} \xi^d(G) = \sum_{ \{u,v\}\subseteq V(G)}(\varepsilon(u)+\varepsilon(v))d(u,v)\,. \end{equation}
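The equivalence of \eqref{eq:eci} and \eqref{eq:eds} with the defining formulas can be checked on small examples; the sketch below (illustrative only) does so with {\tt networkx}.
\begin{verbatim}
import networkx as nx
from itertools import combinations

def check_forms(G):
    ecc = nx.eccentricity(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = {v: sum(dist[v].values()) for v in G}
    xi_c_vertex = sum(ecc[v] * G.degree(v) for v in G)
    xi_c_edge = sum(ecc[u] + ecc[v] for u, v in G.edges())        # (eq:eci)
    xi_d_vertex = sum(ecc[v] * D[v] for v in G)
    xi_d_pairs = sum((ecc[u] + ecc[v]) * dist[u][v]
                     for u, v in combinations(G, 2))              # (eq:eds)
    assert xi_c_vertex == xi_c_edge and xi_d_vertex == xi_d_pairs
    return xi_c_vertex, xi_d_vertex

for G in (nx.balanced_tree(2, 2), nx.cycle_graph(7), nx.petersen_graph()):
    print(check_forms(G))
\end{verbatim}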
\section{The difference on general graphs} \label{sec:difference-general}
In this section we give some sharp upper and lower bounds on $\xi^d(G) - \xi^c(G)$ for an arbitrary graph $G$. The bounds are in terms of the eccentricity, the Wiener index, the first Zagreb index, the degree distance, the maximum degree, the size, and the order of $G$.
\begin{theorem} \label{thm:difference} If $G$ is a connected graph, then the following hold. \begin{enumerate}[(i)] \item $\xi^d(G) - \xi^c(G) \ge 2\big(n(G)-1-\Delta(G)\big)\varepsilon(G)$. Moreover, the equality holds if and only if $G$ is a regular graph with ${\rm diam}(G) \le 2$. \item $\xi^d(G) - \xi^c(G) \le 2n(G)\big(W(G)-m(G)\big) + M_1(G) - DD(G)$. Moreover, the equality holds if and only if $G\in \{P_4\} \cup \{CP_{2k}\oplus K_{n(G)-2k}:\ 0\le k\le n/2\}$. \end{enumerate} \end{theorem}
\noindent{\bf Proof.\ } (i) Let $v$ be a vertex of $G$. If $w$ is not adjacent to $v$, then $d(v,w)\ge 2$ and consequently $D(v) - \deg(v) \ge 2(n(G)-1-\Delta(G))$. Thus: \begin{eqnarray*}
\xi^d(G)-\xi^c(G) &=& \sum_{v \in V(G)}\varepsilon(v)\big(D(v)-\deg(v)\big)\\
& \ge & \sum_{v \in V(G)} 2\, \varepsilon(v)\big(n(G) -1-\Delta(G)\big) \\
& = & 2\,\varepsilon(G)\big( n(G)-1-\Delta(G)\big)\,.
\end{eqnarray*} The equality holds if and only if $D(v) - \deg(v) = 2(n(G)-1-\Delta(G))$ for every vertex $v$. As the last equality in particular holds for a vertex of maximum degree, we infer that $G$ must be regular. Then the condition $D(v) - \deg(v) = 2(n(G)-1-\Delta(G))$ simplifies to \begin{equation} \label{eq:D-delta} D(v) + \Delta(G) = 2n(G)-2\,. \end{equation} Suppose that ${\rm diam}(G) = d$, and let $x_i$, $i\in\{2,\ldots, d\}$, be the number of vertices at distance $i$ from $v$. Then $n(G) = 1 + \Delta(G) + x_2 + \cdots + x_d$ and $D(v) = \Delta(G) + 2x_2 + \cdots + dx_d$. Plugging these equalities into~\eqref{eq:D-delta} yields $$2\Delta(G) + 2x_2 + \cdots + dx_d = 2 + 2\Delta(G) + 2x_2 + \cdots + 2x_d - 2$$ which implies that $x_3 = \cdots = x_d = 0$, that is, ${\rm diam}(G) = 2$. Finally, if ${\rm diam}(G) = 2$, then $D(v) = \Delta(G) + 2(n(G)-\Delta(G)-1)$, so~\eqref{eq:D-delta} is fulfilled for every regular graph of diameter $2$. Clearly,~\eqref{eq:D-delta} is also fulfilled for graphs of diameter $1$, that is, complete graphs.
(ii) If $v \in V(G)$, then clearly $\varepsilon(v) \le n(G)-\deg(v)$. Then we deduce that \begin{eqnarray*} \xi^d(G) - \xi^c(G) &=& \sum_{v \in V(G)}\varepsilon(v)\big( D(v)-\deg(v) \big) \\ & \le & \sum_{v \in V(G)} \big( n(G)-\deg(v) \big)\big( D(v)-\deg(v) \big) \\ &=& n(G)\sum_{v \in V(G)}\big(D(v)-\deg(v) \big) + \sum_{v \in V(G)}\deg(v)^2 \\ & & - \sum_{v \in V(G)}\deg(v)D(v) \\ &=& 2n(G)\big(W(G)-m(G)\big) + M_1(G) - DD(G)\,. \end{eqnarray*} The equality in the above computation holds if and only if $\varepsilon(v) = n(G) - \deg(v)$ holds for all $v \in V (G)$. So suppose that $G$ is a graph for which $\varepsilon(v) = n(G) - \deg(v)$ holds for all $v \in V (G)$ and distinguish the following two cases.
Suppose first that ${\rm diam}(G)\ge 3$. Let $P$ be a diametral path in $G$ and let $v$ and $v'$ be its endpoints. Since $\varepsilon(v) = n(G) - \deg(v)$ and $|V(P) \setminus N[v]| = \varepsilon(v) - 1$, it follows that $n(G) = 1 + \deg(v) + |V(P) \setminus N[v]|$. The latter means that $V(G) = N[v]\cup V(P)$. Since ${\rm diam}(G) = \varepsilon(v) \ge 3$ it follows that $\deg(v') = 1$. Since we have also assumed that $\varepsilon(v') = n(G) - \deg(v')$ holds we see that $\varepsilon(v') = n(G) -1$ which in turn implies that $G$ is a path. Among the paths $P_n$, $n\ge 4$, the path $P_4$ is the unique one which fulfills the condition $\varepsilon(v) = n - \deg(v)$ for all $v \in V (P_n)$.
Suppose second that ${\rm diam}(G)\le 2$. Then $\varepsilon(v)\in \{1,2\}$ for every $v\in (G)$. Since $\varepsilon(v) = n(G) - \deg(v)$ it follows that $\deg(v)\in \{n(G)-1, n(G)-2\}$. Let $V_1 = \{v:\ \deg(v) = n(G) - 1\}$ and $V_2 = \{v:\ \deg(v) = n(G) - 2\}$. Then $V(G) = V_1 \cup V_2$. Clearly, the subgraph of $G$ induced by $V_1$ is complete, and there are all possible edges between $V_1$ and $V_2$. Moreover, the complement of the subgraph of $G$ induced by $V_2$ is a disjoint union of copies of $K_2$, which means that $V_2$ induces a cocktail party graph. In summary, $G$ must be of the form $CP_{2k}\oplus K_{n(G)-2k}$, where $0\le k\le n/2$. On the other hand, the condition $\varepsilon(v) = n(G) - \deg(v)$ clearly holds for each vertex of $CP_{2k}\oplus K_{n(G)-2k}$, hence these graphs together with $P_4$ from the previous case are precisely the graphs that attain the equality.
$\square$
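As a numerical illustration of Theorem~\ref{thm:difference} (a sketch only; the choice of test graphs is ours): the Petersen graph is $3$-regular with diameter $2$ and attains equality in (i), while $P_4$ attains equality in (ii).
\begin{verbatim}
import networkx as nx

def bound_data(G):
    n, m = G.number_of_nodes(), G.number_of_edges()
    ecc = nx.eccentricity(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = {v: sum(dist[v].values()) for v in G}
    W = sum(D.values()) // 2                      # Wiener index
    M1 = sum(G.degree(v) ** 2 for v in G)         # first Zagreb index
    DD = sum(G.degree(v) * D[v] for v in G)       # degree distance
    diff = sum(ecc[v] * (D[v] - G.degree(v)) for v in G)
    eps = sum(ecc.values())
    Delta = max(d for _, d in G.degree())
    lower = 2 * (n - 1 - Delta) * eps
    upper = 2 * n * (W - m) + M1 - DD
    return lower, diff, upper

print(bound_data(nx.petersen_graph()))   # (240, 240, 840): bound (i) is attained
print(bound_data(nx.path_graph(4)))      # (20, 38, 38):  bound (ii) is attained
\end{verbatim}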
\section{The difference on trees} \label{sec:trees}
In this section we turn our attention to $\xi^d(T) - \xi^c(T)$ for trees $T$, and in particular on extremal trees regarding this difference.
\begin{theorem} \label{caterpilar} Among all trees $T$ with given order and diameter, $\min\{ \xi^d(T)-\xi^c(T)\}$ is achieved on caterpillars. \end{theorem}
\noindent{\bf Proof.\ } Fix the order and diameter of trees to be considered. Let $T$ be an arbitrary tree that is not a caterpillar with this fixed order and diameter. Let $P$ be a diametral path of $T$ connecting $x$ to $y$. Then the eccentricity of each vertex $w$ of $T$ is equal to $\max\{d(w,x), d(w,y)\}$. Let $z\ne x,y$ be a vertex of $P$ and let $T_z$ be a maximal subtree of $T$ which contains $z$ but no other vertex of $P$. We may assume that $z$ can be selected such that $\varepsilon_{T_z}(z) = k \ge 2$, for otherwise $T$ is a caterpillar. Let $u$ be vertex of $T_z$ with $d(u,z)=k-1$ and let $v$ be the neighbor of $u$ with $d(v,z)=k-2$. Let $S = N(u)\setminus \{v\}$ and let $s = |S|$. Note that $s > 0$. Let now $T'$ be the tree obtained from $T$ by replacing the edges between $u$ and the vertices of $S$ with the edges between $v$ and the vertices of $S$.
\noindent {\bf Claim A}: $\xi^d(T) - \xi^c(T) > \xi^d(T') - \xi^c(T')$.\\ Set $X_d = \xi^d(T) - \xi^d(T')$ and $X_c = \xi^c(T) - \xi^c(T')$. To prove the claim it is equivalent to show that $X_d - X_c > 0$.
For a vertex $w \in V(G)\setminus (S \cup \{u\})$ we have $D_{T'}(w) = D_{T}(w) - s$ and $\varepsilon_{T'}(w)\le \varepsilon_{T}(w)$. Moreover if $w \in S$, then $\varepsilon_{T'}(w)= \varepsilon_{T}(w) -1$ and $D_T(w) = D_{T'}(w) + n-s-2$. With these facts in hand we can compute as follows. \begin{eqnarray*} X_d &=& \sum_{w \in V(T)}\varepsilon_T(w)D_T(w) - \sum_{w \in V(T')}\varepsilon_{T'}(w)D_{T'}(w) \\ &=& \varepsilon_{T}(u)D_{T}(u)- \varepsilon_{T'}(u)D_{T'}(u) + \varepsilon_{T}(v)D_{T}(v)-\varepsilon_{T'}(v)D_{T'}(v)\\ & &+ \sum_{w \in S}\varepsilon_T(w)D_T(w) - \varepsilon_{T'}(w)D_{T'}(w) \\ &&+ \sum_{w \in V(T)- ( S\cup \{u,v\})}\varepsilon_T(w)D_T(w) - \varepsilon_{T'}(w)D_{T'}(w)\\ &\ge& s(\varepsilon_T(v)-\varepsilon_T(u)) + \sum_{w \in V(T)- ( S\cup \{u,v\})} \varepsilon_T(w)s \\&& + \sum_{w \in S}\big( \varepsilon_T(w)D_T(w) - (\varepsilon_T(w)-1)(D_T(w)-n+2+s) \big) \\ &=& -s + \sum_{w \in V(T)- ( S\cup \{u,v\})} \varepsilon_T(w)s \\ && + \sum_{w \in S}\big( (D_T(w)-n+2+s ) - \varepsilon_T(w)(-n+2+s) \big) \\ &=& -s + \sum_{w \in V(T)\setminus ( S\cup \{u,v\})} \varepsilon_T(w)s + (n-s-2)\sum_{w \in S}\varepsilon_T(w)-1 + D_T(w)\\ &=& -s + \sum_{w \in V(T)\setminus ( S\cup \{u,v\})} \varepsilon_T(w)s \\ & & + s(n-s-2)\varepsilon_T(u) + s(D_T(u)+ n-2)\\ &=& s\big[ \varepsilon(T)-\varepsilon_T(u)(s+2)-s+1 + (n-s-2)\varepsilon_T(u)+ D_T(u)+ n-3) \big]\,. \end{eqnarray*} Similarly, but shorter, we get that $X_c = 2s$. Thus \begin{align*} X_d - X_c & \ge s\big[ \varepsilon(T)-\varepsilon_T(u)(s+2) \\
&\quad + (n-s-2)\varepsilon_T(u)+ D_T(u)+ n-s-4) \big] \\
& > 0\,. \end{align*} This proves Claim A. If $T'$ is not a caterpillar, we can repeat the construction as many times as required to arrive at a caterpillar. Since at each step the value of $\xi^d - \xi^c$ is decreased, the minimum of this difference is indeed achieved on caterpillars.
$\square$
\begin{theorem}\label{min} If $T$ is a tree of order $n\ge 3$, then \[\xi^d(T) - \xi^c(T)\ge 4 n^2-12 n+8\,.\] Moreover, equality holds if and only if $T = S_n$. \end{theorem}
\noindent{\bf Proof.\ } Let $n\ge 3$ be a fixed integer. By Theorem~\ref{caterpilar}, it suffices to consider caterpillars. More precisely, let $T$ be a caterpillar of order $n$ and with ${\rm diam}(T) = d \ge 3$. Then we wish to prove that $\xi^d(T) - \xi^c(T) > \xi^d(S_n) - \xi^c(S_n) = 4 n^2-12 n+8$. The latter equality is straightforward to check, for the strict inequality we proceed as follows.
Let $w, z\in V(T)$ be two adjacent vertices of eccentricities $d-1$ and $d-2$, respectively. Let $S = N(w)\setminus\{z\}$ and set $s = |S|$. As $\varepsilon(w) = d-1$, we have $s\ge 1$. Let further $S_1 = V(G) \setminus (S\cup \{w,z\})$. Construct now a tree $T'$ from $T$ by replacing the edges between $w$ and the vertices of $S$ with the edges between $z$ and the vertices of $S$. Note that $\deg_T(w) = \deg_{T'}(w)+s = 1+s$ and $\deg_T(z) = \deg_{T'}(z)- s$, while the other vertices have the same degree in $T$ and $T'$. Further, it is straightforward to verify the following relations: \begin{align*} D_T(w) & = D_{T'}(w)- s,\quad \varepsilon_T(w)= \varepsilon_{T'}(w); \\ D_T(z) & = D_{T'}(z)+ s,\quad \varepsilon_{T'}(z)\le\varepsilon_T(z)\le \varepsilon_{T'}(z)+1; \\ D_T(x) & = D_{T'}(x) + n-s-2,\quad \varepsilon_T(x)=\varepsilon_{T'}(x)+1\ (x \in S); \\ D_T(y) & = D_{T'}(y) + s,\quad \varepsilon_{T'}(y)\le \varepsilon_T(y)\le \varepsilon_{T'}(y) + 1\ (y \in S_1).\\ \end{align*} Setting $X_d = \xi^d(T)-\xi^d(T')$ we have: \begin{eqnarray*} X_d & = & \sum_{v \in \{w,z\}}D_T(v)\varepsilon_{T}(v)-D_{T'}(v)\varepsilon_{T'}(v) + \sum_{v \in S}D_{T}(v)\varepsilon_{T}(v)-D_{T'}(v)\varepsilon_{T'}(v)\\&&+ \sum_{v \in S_1 }D_T(v)\varepsilon_{T}(v)-D_{T'}(v)\varepsilon_{T'}(v) \\ &\ge & s(\varepsilon_{T'}(z)-\varepsilon_T(w)) + \sum_{v \in S}D_{T}(v)\varepsilon_{T}(v)- (D_{T}(v) -(n-s-2))(\varepsilon_{T}(v)-1)\\ &&+ \sum_{v \in S_1 }D_T(v)\varepsilon_{T}(v)-(D_{T}(v)-s)\varepsilon_{T}(v) \\ &\ge& -s + (n-s-2)\sum_{v \in S}\varepsilon_T(v)+ \sum_{v \in S}D_T(v) - s(n-s-2) + s \sum_{v \in S_1}\varepsilon_T(v)\\ &\ge & -s+ 3s(n-s-2) + s(2(n-s-2)+2s+1) - s(n-s-2) \\ & & + 3s(n-s-2)\\ &=& 5s(n-s-3) + 2s(n-1)\,. \end{eqnarray*} Similarly, setting $X_c = \xi^c(T)-\xi^c(T')$, we have \begin{eqnarray*} X_c &=& \sum_{v \in \{w,z\}} \big(\deg_T(v)\varepsilon_{T}(v)-\deg_{T'}(v)\varepsilon_{T'}(v)\big)\\ &&+\sum_{v \in S} \big( \deg_{T}(v)\varepsilon_{T}(v)-\deg_{T'}(v)\varepsilon_{T'}(v)\big)\\ &&+ \sum_{v \in S_1 } \big(\deg_T(v)\varepsilon_{T}(v)-\deg_{T'}(v)\varepsilon_{T'}(v)\big) \\ &\le & s\varepsilon_T(w) + \deg_T(z)\varepsilon_T(z) - (\deg_T(z)+s)(\varepsilon_T(z)-1) \\ && + s + \sum_{v \in S_1 }\deg_T(v)\varepsilon_{T}(v)-\deg_{T}(v)(\varepsilon_{T}(v)-1)\\ &=& 2s + \deg_T(z) + \sum_{v \in S_1}\deg(v)\\ &=& 2n -3. \end{eqnarray*} Therefore, \begin{eqnarray*} X_d - X_c \ge \big(5s(n-s-3) + 2s(n-1)\big) -\big(2n -3\big) > 0\,, \end{eqnarray*} that is, $\xi^d(T)-\xi^c(T) > \xi^d(T')-\xi^c(T')$. Repeating the above transformation until $S_n$ is constructed finishes the argument.
$\square$
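The value $\xi^d(S_n)-\xi^c(S_n)=4n^2-12n+8$, and its minimality among trees of a fixed order, can be checked numerically; the following sketch (illustrative only) uses {\tt networkx}.
\begin{verbatim}
import networkx as nx

def diff_xi(G):
    ecc = nx.eccentricity(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = {v: sum(dist[v].values()) for v in G}
    return sum(ecc[v] * (D[v] - G.degree(v)) for v in G)

for n in range(3, 9):
    star = nx.star_graph(n - 1)              # the star S_n on n vertices
    print(n, diff_xi(star), 4 * n ** 2 - 12 * n + 8)

# Minimum over the 11 non-isomorphic trees of order 7; it equals
# 4*49 - 84 + 8 = 120 and is attained by the star, as the theorem states.
print(min(diff_xi(T) for T in nx.nonisomorphic_trees(7)))
\end{verbatim}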
To bound the difference $\xi^d(T) - \xi^c(T)$ for an arbitrary tree $T$ from above, we first recall the following result.
\begin{lemma}\label{illic} \cite[Theorem 2.1]{Illic}
Let $w$ be a vertex of a graph $G$. For non-negative integers $p$ and $q$, let $G(p,q)$ denote the graph obtained from $G$ by attaching to vertex $w$ pendant paths $P = wv_1\cdots v_p$ and $Q = wu_1 \cdots u_q$ of lengths $p$ and $q$, respectively. Let $G(p+q,0)=G(p,q)- wu_1+v_pu_1$. If $r = \varepsilon_G(w)$ and $r\ge p\ge q \ge 1$, then
\begin{align*}
\xi^d(G(p+q,0))- \xi^d(G(p,q)) \ge &\frac{pq}{6}\big[ 6D_G(w)+p(2p-3) + q(2q-3) + 3pq -12r \\&+ 6n(G)(p+q+r+1) + 6\sum_{v \in V(G)}\varepsilon(v)\big]\,.
\end{align*} \end{lemma}
\begin{lemma}\label{xic} Let $G$, $p$, $q$, $G(p,q)$, and $G(p+q,0)$ be as in Lemma~\ref{illic}. Then $$\xi^c(G(p+q,0)) - \xi^c(G(p,q)) \le q(3p + 2m(G)-1)\,.$$ \end{lemma}
\noindent{\bf Proof.\ } Let $\deg'(v)$ and $\varepsilon'(v)$ (resp.\ $\deg(v)$ and $\varepsilon(v)$) denote the degree and the eccentricity of $v$ in $G(p+q,0)$ (resp. $G(p,q)$). Then we have: \begin{align*} \deg'(w) &= \deg(w)-1, \quad \varepsilon'(w) \le \varepsilon(w) + q; \\ \deg'(v_i)&=\deg(v_i), i\in [p-1],\quad \deg'(v_p)=\deg(v_p)+1; \\ \varepsilon'(v_i) &\le \varepsilon(v_i) + q, i\in [p]; \\ \deg'(u_j)&=\deg(u_j), \quad \varepsilon'(u_j) = \varepsilon(u_j)+ p; \\ \varepsilon'(x)&\le \varepsilon(x) + q, x \in V(G)\,. \end{align*} Moreover, the degrees of vertices in $G(p+q,0)$ do not decrease. Calculating the difference of contributions of vertices in $\xi^c$ for $G(p+q,0)$ and $G(p,q)$, we can estimate the difference $X_c = \xi^c(G(p+q,0)) - \xi^c(G(p,q))$ as follows: \begin{eqnarray*} X_c & \le & \sum_{w\neq x \in V(G)}\deg(x) q + \sum_{i=1}^{q} \deg(u_i)p + \sum_{i=1}^{ \lfloor \frac{p}{2} \rfloor}\deg(v_i)q\\ & & + \varepsilon(v_p) + (\deg(w)-1)(r+q)-\deg(w)r\\ & = & \big(2m(G)-\deg(w) \big)q + (2q-1)p + pq +p +q(\deg(w)-1) \\ & = & 2qm(G) + 3pq - q\,. \end{eqnarray*}
$\square$
\begin{theorem} If $T$ is a tree of order $n$, then $$\xi^d(T) - \xi^c(T) \le \begin{cases} \frac{25 n^4}{96}-\frac{n^3}{6}-\frac{89 n^2}{48}+\frac{19 n}{6}-\frac{45}{32}; & n\ odd\,, \\[5pt] \frac{25 n^4}{96}-\frac{n^3}{6}-\frac{43 n^2}{24}+\frac{19 n}{6}-2; & n\ even\,. \end{cases}$$ Moreover, equality holds if and only if $T=P_n$. \end{theorem}
\noindent{\bf Proof.\ } The right side of the above inequality is equal to $\xi^d(P_n)-\xi^c(P_n)$. (The value of $\xi^d(P_n)$ has been determined in~\cite{Illic}, while it is straightforward to deduce $\xi^c(P_n)$. Combining the two formulas, the polynomials from the right hand side of the inequality are obtained.) Suppose now that $T \neq P_n$. Then there is always a vertex $w$ of degree at least $3$ such that we can apply Lemmas~\ref{illic} and~\ref{xic}. Setting $$X_{dc} = \big(\xi^d(T(p+q,0) - \xi^c(p+q,0)\big)-\big(\xi^d(T(p,q) - \xi^c(p,q) \big)$$ we have: \begin{eqnarray*} X_{dc} & \ge& pqD_T(w) + \frac{pq}{6} \big( p(2p-3) + q(2q-3) \big)+ \frac{(pq)^2}{2} -2pqr \\ & & + pqn(T)(p+q+r+1) +pq\sum_{v \in V(T)}\varepsilon(v) - \big( 2qm(T) + 3pq - q \big)\\ & = & pq\big(D_T(w) + \sum_{v \in V(T)}\varepsilon(v) -3 \big) + \frac{pq}{6}\big(2p^2-3p+2q^2-3q+3pq \big)\\ & & + pqr(n(G)-2) + q\big(pn(T)(p+q+r)-2m(T)+1\big) > 0 \end{eqnarray*} and the result follows.
$\square$
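The two polynomials can be compared with $\xi^d(P_n)-\xi^c(P_n)$ computed directly; the sketch below (illustrative only, using exact rational arithmetic) confirms the stated values for small $n$.
\begin{verbatim}
import networkx as nx
from fractions import Fraction as F

def diff_xi(G):
    ecc = nx.eccentricity(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    D = {v: sum(dist[v].values()) for v in G}
    return sum(ecc[v] * (D[v] - G.degree(v)) for v in G)

def path_bound(n):
    n = F(n)
    if n % 2:                                # n odd
        return F(25, 96)*n**4 - n**3/6 - F(89, 48)*n**2 + F(19, 6)*n - F(45, 32)
    return F(25, 96)*n**4 - n**3/6 - F(43, 24)*n**2 + F(19, 6)*n - 2

for n in range(4, 10):
    assert diff_xi(nx.path_graph(n)) == path_bound(n)
    print(n, diff_xi(nx.path_graph(n)))
\end{verbatim}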
\section{Further comparison} \label{sec:further}
In this concluding section we give sharp lower and upper bounds on $\xi^d(G)+\xi^c(G)$, compare $\xi^d(G)$ with $\xi^c(G)$ for graphs $G$ with $\Delta(G) \le \frac{2}{3}(n-1)$, and give a sharp lower bound on $\xi^d(G)$ for graphs $G$ with a given radius.
\begin{theorem} If $G$ is a connected graph, then the following hold. \begin{enumerate}[(i)] \item $\xi^d(G) + \xi^c(G)\le 2(n(G)-1) \varepsilon(G) + 2 {\rm diam}(G) \big( W(G) + m(G) - 2{n(G) \choose 2} \big)$. \item $\xi^d(G) + \xi^c(G)\ge 2(n(G)-1)\varepsilon(G) + 2 {\rm rad}(G) \big( W(G) + m(G) - 2{n(G) \choose 2} \big) $. \end{enumerate} Moreover, each of the equalities holds if and only if $G$ is a self-centered graph. \end{theorem}
\noindent{\bf Proof.\ } (i) Partition the pairs of vertices of $G$ into neighbors and non-neighbors, and using~\eqref{eq:eci}, we can compute as follows: \begin{eqnarray*} \xi^d(G) &=& \sum_{ \{u,v\}\subseteq V(G)}(\varepsilon(u)+\varepsilon(v))d(u,v) \\ &=& \sum_{ uv\in E(G)}(\varepsilon(u)+\varepsilon(v)) +2 \sum_{ \{u,v\}\subseteq V(G) \atop d(u,v)\ge 2} (\varepsilon(u)+\varepsilon(v)) \\ & & + \sum_{ \{u,v\}\subseteq V(G) \atop d(u,v)\ge 2} \big(\varepsilon(u)+\varepsilon(v)\big)\big(d(u,v)-2\big) \\ &=& \xi^c(G) + \sum_{\{u,v\}\subseteq V(G)}(\varepsilon(u) + \varepsilon(v)) - 2 \xi^c(G) \\ && +\sum_{\{u,v\} \subseteq V(G)\atop d(u,v)\ge 2} \big(\varepsilon(u)+\varepsilon(v)\big)\big(d(u,v)-2\big) \\ & \le & -\xi^c(G) + 2(n(G)-1)\varepsilon(G) \\ && + 2{\rm diam}(G) \big( W(G) + m(G) - 2 \tbinom{n(G)}{2} \big)\,. \end{eqnarray*} The inequality above becomes equality if and only if $\varepsilon(v) = {\rm diam}(G)$ for every $v \in V(G)$. That is, the equality holds if and only if $G$ is a self-centered graph.
(ii) This inequality as well as its equality case are proved along the same lines as (i). The only difference is that the inequality $\varepsilon(u)+\varepsilon(v) \le 2{\rm diam}(G)$ is replaced by $\varepsilon(u)+\varepsilon(v) \ge 2{\rm rad}(G)$.
$\square$
In our next result we give a relation between $\xi^d(G)$ and $\xi^c(G)$ for graphs $G$ with maximum degree at most $\frac{2}{3}(n(G)-1)$.
\begin{theorem} If $G$ is a graph with $\Delta(G) \le \frac{2}{3}(n-1)$, then $\xi^d(G) \ge 2\xi^c(G)$. Moreover, the equality holds if and only if $G$ is $2$-self-centered, $\frac{2}{3}(n(G)-1)$-regular graph. \end{theorem}
\noindent{\bf Proof.\ } Set $n = n(G)$ and let $v$ be a vertex of $G$. Since $\deg(v) < n-1$ we have $\varepsilon(v) \ge 2$. Therefore $D(v) \ge 2(n-1) - \deg(v) $ with equality holding if and only if $\varepsilon(v) = 2$. Using the assumption that $\deg(v) \le \frac{2}{3}(n-1)$, equivalently, $ 2n-2 \ge 3\deg(v)$, we infer that $ \varepsilon(v)D(v) \ge 2\varepsilon(v)\deg(v)$. Summing over all vertices of $G$ the inequality is proved. Its derivation also reveals that the equality holds if and only if $\deg(v) =\frac{2}{3}(n-1)$ and $\varepsilon(v)=2$ for each vertex $v \in V(G)$.
$\square$
To conclude the paper we give a lower bound on the eccentric distance sum in terms of the radius of a given graph. Interestingly, the cocktail-party graphs are again among the extreme graphs.
\begin{theorem} If $G$ is a graph with ${\rm rad}(G) = r$, then $$\xi^d(G)\ge \big(n(G)-1 + \tbinom{r}{2}\big)\varepsilon(G)\,.$$ Equality holds if and only if $G$ is a complete graph or a cocktail-party graph. \end{theorem}
\noindent{\bf Proof.\ } Set $n = n(G)$ and let $v\in V(G)$. Let $P$ be a longest path starting in $v$. Separately considering the neighbors of $v$, the last $\varepsilon(v) - 2$ vertices of $P$, and all the other vertices, we can estimate that
\begin{eqnarray*}
D(v)&\ge& \deg(v) + (3 + \cdots + \varepsilon(v)) + 2\big(n - 1 - \deg(v) - (\varepsilon(v) -2) \big) \\
&=& 2n - \deg(v) + \frac{\varepsilon(v)^2-3\varepsilon(v)}{2}-1\,.
\end{eqnarray*} Since $n-\deg(v) \ge \varepsilon(v)$ for every vertex $v \in V(G)$, we have $D(v) \ge n+ \varepsilon(v) + \frac{\varepsilon(v)^2 - 3\varepsilon(v)}{2}-1$. Consequently, having the fact $\varepsilon(v)\ge r$ in mind, we get $D(v)\ge n-1 + {r \choose 2}$. Multiplying this inequality by $\varepsilon(v)$ and summing over all vertices of $G$ the claimed inequality is proved.
From the above derivation we see that the equality can hold only if $\varepsilon(v)=r=n-\deg(v)$ holds for every $v\in V(G)$. From the equality part of the proof of Theorem~\ref{thm:difference}(ii) we know that this implies ${\rm diam}(G)\le 2$. For the equality we must also have $D(v) = n-1 + {r \choose 2}$ for every $v$. If $r = 2$ this means that $D(v) = n$ and hence $\deg(v) = n-2$. It follows that $G$ is a cocktail-party graph. And if $r = 1$, then $G$ is a complete graph.
$\square$
\end{document}
\begin{document}
\title{Global Nonlinear Normal Modes in\\ the Fullerene Molecule $C_{60}$} \author{ Carlos Garc\'{\i}a-Azpeitia\thanks{ Departamento de Matem\'{a}ticas, Facultad de Ciencias, Universidad Nacional Aut\'{o}noma de M\'{e}xico, 04510 M\'{e}xico}, Wieslaw Krawcewicz\thanks{Center for Applied Mathematics at Guangzhou University, Guangzhou, 510006 China. } \footnotemark[4] \and Manuel Tejada-Wriedt\footnotemark[1] \thanks{Laboratorios de Nanomedicina. Nano Tutt S.A. de C.V. Tripoli 803, Portales, 03300, Mexico City, Mexico} , Haopin Wu\thanks{ Department of Mathematical Sciences University of Texas at Dallas Richardson, 75080 USA }} \maketitle
\begin{abstract}
\noindent In this paper we analyze nonlinear dynamics of the fullerene molecule. We prove the existence of global branches of periodic solutions emerging from an icosahedral equilibrium (nonlinear normal modes). We also determine the symmetric properties of the branches of nonlinear normal modes for maximal orbit types.
We find several solutions which are standing and rotating waves that propagate along the molecule with icosahedral, tetrahedral, pentagonal and triangular symmetries.
We complement our theoretical results with numerical computations using Newton's method.
\end{abstract}
\section{Introduction}\label{sec:1}
The fullerene molecule was discovered in 1985 by R. Smalley, R. Curl, J. Heath, S. O'Brien, and H. Kroto, and since then it has continued to attract a great deal of attention in the scientific community (e.g. the original work \cite{Kr} currently has 15 thousand citations). Its importance led in 1996 to awarding the Nobel prize in chemistry to Smalley, Curl and Kroto. Since its discovery, applications of the fullerene $C_{60}$ have been extensively explored in biomedical research due to its unique structure and physicochemical properties.
\begin{figure}
\caption{The fullerene molecule}
\label{fig:fullerene}
\end{figure}
The fullerene molecule is composed of $60$ carbon atoms at the vertices of a truncated icosahedron (see Figure \ref{fig:fullerene}). Various mathematical models for the fullerene molecule are built in the framework of classical mechanics. Many force fields have been proposed for the fullerene in terms of bond stretching, bond bending, torsion and van der Waals forces. Some force fields were optimized to duplicate the normal modes obtained using IR or Raman spectroscopy, but only few of these models reflect the nonlinear characteristics of the fullerene. For instance, the force field implemented in \cite{Wa} for carbon nanotubes and in \cite{Be} for the fullerene, assumes bond deformations that exceed very small fluctuations about equilibrium states, while the force fields proposed in \cite{We} and \cite{Wu} are designed only to consider small fluctuations in the fullerene model. Although the linear vibrational modes of fullerene can be measured using IR and Raman spectroscopy, increasing attention has been given to the mathematical study of other vibrational modes that cannot be measured experimentally (see \cite{He,Ho,Ji,Wa,We,Wu,Sc} and the large bibliography therein).
\paragraph{Mathematical Models.}
In molecular dynamics, a fullerene molecule model consists of a Newtonian system \begin{equation} \label{eq:1}m\ddot u=-\nabla V(u), \end{equation} where the vector $u(t)\in\widetilde{\mathscr V}:=\mathbb{R}^{180}$ represents the positions of 60 carbon atoms in space, $m$ is the carbon mass (which by rescaling the model can be assumed to be one) and $V (u)$ is the energy given by a force field. This force field is symmetric with respect to the action of the group $\Gamma:=I\times O(3)$, where $I$ denotes the full icosahedral group. This important property allows application of various equivariant methods to analyze its dynamical properties. In order to make the system \eqref{eq:1} reference point-depended, we define the subspace $\mathscr V$ of $\mathbb{R}^{180}$ by \begin{equation} \label{eq:space}\mathscr V:=\{x=(x_{1},x_{2},\dots, x_{60}): x_{k} \in\mathbb{R}^{3},\; \sum_{k=1}^{60} x_{k}=0\}, \end{equation} from which we exclude the collision orbits, i.e. we consider the restriction of the system \eqref{eq:1} to the set $\Omega_{o}:=\{x\in\mathscr V: x_{j}\not = x_{k}, \; j\not =k\}$.
\paragraph{Analysis of Nonlinear Molecular Vibrations.}
The mathematical analysis of a molecular model includes two objectives: identification of the \textit{normal frequencies} and the classification of different families of \textit{periodic solutions with various spatio-temporal symmetries} emerging from the equilibrium configuration of the molecule (nonlinear normal modes). Let us emphasize that the classification of normal modes
is a central problem of the molecular spectroscopy.
The study of periodic orbits in Hamiltonian systems can be traced back to Poincare and Lyapunov, who proved the existence of nonlinear normal modes (periodic orbits) near an elliptic equilibrium under non-resonant conditions. Later on, Weinstein (cf. \cite{Weinstein}) extended this result to the case with resonances. However, in general, the existence of nonlinear normal modes is not guaranteed under the presence of resonances. Indeed, Moser presented in \cite{Mos} an example of a Hamiltonian systems with resonances, where the linear system is full of periodic solutions, but the nonlinear system has none.
Let us point out that, due to the icosahedral symmetries, the fullerene molecule $C_{60}$ has resonances of multiplicities 3, 4 and 5, i.e. the existence of linear modes does not guarantee the existence of nonlinear normal modes in the fullerene $C_{60}$. Therefore, one needs a method that takes these symmetries into consideration. Standard methods for such analysis may use reductions to the $H$-fixed point spaces, normal form classification, center manifold theorem, averaging method and/or Lyapunov--Schmidt reduction.
\paragraph{Variational Reformulation.}
The problem of finding periodic solutions for the fullerene can be reformulated as a variational problem on the Sobolev space $H_{2\pi} ^{1}(\mathbb{R};{\mathscr V})$ (of $2\pi$-periodic ${\mathscr V}$-valued functions) with the functional \[
J_{\lambda}(u):=\int_{0}^{2\pi}\left[ \frac{1}{2}|\dot{u}(t)|^{2}-\lambda ^{2}V(u(t))\right] dt,\quad u\in H_{2\pi}^{1}(\mathbb{R};\Omega_{o}), \] where $V$ is the force field, $\lambda^{-1}$ the frequency and $u$ the renormalized $2\pi$-periodic solution. The existence of periodic solutions (with fixed frequency $\lambda^{-1}$) is equivalent to the existence of critical points of $J_{\lambda}$. It follows from the construction of the force field that the functional $J_{\lambda}$ is invariant under the action of the group \[ G:=\Gamma\times O(2)=(I\times O(3))\times O(2), \] which acts as permutations of atoms, rotations in space, and translations and reflection in time, respectively.
\begin{figure}
\caption{Local bifurcation.}
\label{fig:bif}
\end{figure} \paragraph{Gradient Equivariant Degree Method.}
To provide an alternative to the equivariant singularity theory (cf. \cite{Golubitsky}) and other geometric methods that have been used to analyze molecules (see \cite{Hoy3}, \cite{Mon1}, \cite{Mon2} and references), we proposed (see \cite{GaTe16,BeKr,BeGa}) the method based on the equivariant gradient degree (fundamental properties of the gradient equivariant degree are collected in Appendix \ref{sec:equi-degree}) -- a generalization of the Brouwer/Leray-Schauder degree that was developed in \cite{Geba} for the gradient maps (see also \cite{DaKr} and \cite{RY2}). The gradient equivariant degree is just one of many equivariant degrees that were introduced in the last three decades for various types of differential equations (see \cite{survey}, \cite{BaKr06}, \cite{IzVi03}, \cite{KW} and references therein).
To describe the main idea of this method, let us point out that the gradient equivariant degree satisfies all the standard properties expected from a degree theory (i.e. existence, additivity, homotopy and multiplicativity properties). The $G$-equivariant gradient degree $\nabla_{G}\text{-Deg}(\nabla J_{\lambda},\mathscr U)$ of $\nabla J_{\lambda}$ on $\mathscr U$ can be expressed elegantly as an element of the Euler ring $U(G)$ (which is the free $\mathbb{Z}$-module generated by the conjugacy classes $(H)$ of closed subgroups $H\leq G$) in the form \[ \nabla_{G}\text{-Deg}(\nabla J_{\lambda},\mathscr U)=n_{1}(H_{1})+n_{2} (H_{2})+\dots+n_{m}(H_{m}),\;\;\;n_{k}\in\mathbb{Z}, \] where $\mathscr U$ is a neighborhood of the $G$-orbit of the equilibrium $u_{o}$ (for some non-critical frequency $\lambda^{-1}$) and $(H_{j})$ are the orbit types in $\mathscr U$. The changes of $\nabla_{G}\text{-deg}(\nabla J_{\lambda},\mathscr U)$ when $\lambda^{-1}$ crosses a critical frequency $\lambda_{o}^{-1}$ allow to establish the existence of various families of orbits of periodic molecular vibrations and their symmetries emerging from an equilibrium. In fact, the equivariant topological invariant \begin{equation}\label{eq:top-inv} \omega_{G}(\lambda_{o}):=\nabla_{G}\text{-Deg\thinspace}(\nabla J_{\lambda _{-}},\mathscr U)-\nabla_{G}\text{-Deg\thinspace}(\nabla J_{\lambda_{+} },\mathscr U) \end{equation} provides a full topological characterization of the families of periodic solutions (together with their symmetries) emerging from an equilibrium at $\lambda_{o}$ (cf. \cite{DGR}). More precisely, for every non-zero coefficient $m_{j}$ in \[ \omega_{G}(\lambda_{o})=m_{1}(K_{1})+m_{2}(K_{2})+\dots m_{r}(K_{r}), \] there exists a global family of periodic molecular vibrations with symmetries at least $K_{j}$ (see Figure \ref{fig:bif} below). Moreover, if $(K_{j})$ is a maximal orbit type then this family has exact symmetries $K_{j}$.
\paragraph{Global Bifurcation Result.}
The so-called classical Rabinowitz Theorem \cite{Ra} establishes occurrence of a global bifurcation from purely local data for compact perturbations of the identity. Its main idea is that if the maximal connected set $\mathcal{C}$ bifurcating from a trivial solution is compact (i.e. bounded), then the sum of the local Leray-Schauder degrees at the set of bifurcation points of $\mathcal{C}$ is zero. Since such maximal connected set $\mathcal{C}$ is either unbounded or comes back to another bifurcation point, this result is also referred to as the \emph{global Rabinowitz alternative} (we refer to Nirenberg's book \cite{Ni} where a simplified proof of this statement is presented in Theorem 3.4.1).
The classical Rabinowitz's global bifurcation argument can be easily adapted in the equivariant setting for the gradient $G$-equivariant degree (cf. \cite{GolRyb}). That is, for any $G$-orbit of a compact (bounded) branch $\mathcal{C}$ in $\mathbb{R}_{+}\times H_{2\pi}^{1}(\mathbb{R};\Omega_{o})$ containing $(\lambda_{0},u_{o})$ we have \begin{equation} \sum_{k=0}^{m}\omega_{G}(\lambda_{k})=0,\label{eq:int-global} \end{equation} (see Figure \ref{fig:glob-bif}), where $\lambda_{k}^{-1}$ are the normal modes belonging to $\mathcal{C}$. In this context the \textbf{global property} means that a family of periodic solutions, represented by continuous branch $\mathcal{C}$ in $\mathbb{R}_{+}\times H_{2\pi}^{1}(\mathbb{R};\Omega_{o})$, is not compact or comes back to another bifurcation point from the equilibrium. The non-compactness of $\mathcal{C}$ implies that the norm or period of solutions from $\mathcal{C}$ goes to the infinity, $\mathcal{C}$ ends in a collision orbit or goes to a different equilibrium point.
\begin{figure}
\caption{Global bifurcation}
\label{fig:glob-bif}
\end{figure}
By applying formula \eqref{eq:int-global} one can establish an effective criterium allowing to determine the existence of the non-compact branches of nonlinear normal modes with particular (e.g. maximal) orbit types. To be more precise, it is sufficient to consider all the critical frequencies $\lambda_{k}^{-1}$ corresponding to the first Fourier mode and simply show, that for some of them, say $\lambda_{0}^{-1}$, the sum in \eqref{eq:int-global} can never be zero. For the fullerene molecule such non-compact global branches exist.
\paragraph{Novelty.}
In this paper, we apply equivariant gradient degree to the classification of the global nonlinear modes in a model of the fullerene molecule $C_{60}$. By taking advantage of various properties of the gradient equivariant degree, the approximate values (obtained numerically) of the normal frequencies can be used to determine (under plausible isotypical non-resonance assumption) the exact values of the topological invariants $\omega_{G}(\lambda_{o})$. In particular, the information contained in the topological invariants can be applied to obtain the presence of such global branches of periodic solutions with the maximal orbit types, as it is presented in our main theorem (Theorem \ref{th:main}).
Let us point out that (to the best of our knowledge) all the previous studies of the fullerene molecule have considered only the existence of the linear modes (which have constant frequency), while the occurrence of the non-linear normal modes have frequencies depending on the amplitudes of the oscillation. \emph{Such an analysis of nonlinear normal modes for the fullerene molecule was never done before}. We complement our results with numerical computations using Newton's method and pseudo-arclength procedure to continue some of these nonlinear normal modes.
It is important to notice that the icosahedral symmetries appear also in adenoviruses with icosahedral capsid, or other icosahedral molecules considered in \cite{We}. The methods presented here are applicable to these cases as well.
\vskip.3cm
\paragraph{Contents.}
The rest of the paper is arranged as follows. In section \ref{sec:fullerene} we present the model equations appropriate for studying the dynamics of the fullerene molecule. In subsection \ref{sec:coordinates}, we propose a new indexation for the fullerene atoms which greatly simplifies the description of symmetries in the molecule. Then, in subsection \ref{sec:forcefield} we discuss the choice of the force field for the fullerene molecule that seems to be the most appropriate in order to model nonlinear vibrations and in subsection \ref{sec:iso-symm} we describe the action of the group $I\times O(3)$ on the space $\mathscr V$. Then, in subsection \ref{sec:minimizer}, we find the minimizer of the potential $V$ among the configurations with icosahedral symmetries by applying Palais criticality principle. In subsection \ref{sec:iso-decomp}, we identify the $I$-isotypical decomposition of the space $\mathscr V$ and use it to determine the spectrum of the operator $\nabla^{2}V(u_{o})$ and the $I$-isotypical types of the corresponding eigenspaces. In Section \ref{sec:eq-bif} we prove the bifurcation of periodic solutions from the equilibrium configuration $u_{o}$ of the fullerene molecule. In Section \ref{sec:symmetries} we describe the symmetries of the periodic solutions. In addition to the theoretical results stated Theorem \ref{th:main}, several of these symmetric periodic solutions were obtained by numerical continuation, for which the numerical data is shown graphically. In Appendix \ref{sec:equi-degree}, we include a short review of the gradient degree, including computational algorithms, and the computations of the $I\times O(2)$-equivariant gradient degree.
\vskip.3cm
\section{Fullerene Model}
\label{sec:fullerene}
\subsection{Equations for Carbons}
\label{sec:coordinates}
In this subsection, we propose a new indexation for the fullerene atoms which greatly simplifies the description of symmetries in the molecule: to each atom we assign two indices -- one (a $5$-cycle in $A_{5}$) indicating on which face of the dodecahedron the atom is located, and a second one indicating its position within that face (as illustrated in Figure \ref{fig:fullerene}).
Let $A_{5}$ be the alternating group of permutations of the five elements $\{1,2,3,4,5\}$. The five conjugacy classes in $A_{5}$ are listed in Table \ref{tab:conjugacy}. The $60$ atoms of the $C_{60}$ molecule are arranged in $12$ pairwise disjoint pentagons. We introduce the following notation for the indices of the $60$ atoms (see Figure \ref{fig:fullerene}):
\begin{itemize} \item $\tau\in\mathcal{C}_{4}$ is used to denote each of the $12$ pentagonal faces.
\item $k\in\{1,...,5\}=:\mathbb{Z}[1,5]$ is used to denote each of the $5$ vertices in the $12$ pentagonal faces. \end{itemize}
\begin{table}[ptb] \vskip.4cm \par \begin{center} \begin{tabular}
[c]{|c|c|c|c|c|} \noalign{\vskip2pt\hrule\vskip2pt} $\mathcal{C}_{1}$ & $\mathcal{C}_{2}$ & $\mathcal{C}_{3}$ & $\mathcal{C}_{4}$ & $\mathcal{C}_{5}$\\ \noalign{\vskip2pt\hrule\vskip2pt} $(1)$ & $(12)(34)$, $(13)(24)$,$(14)(23)$ & $(123)$, $(132)$ & $(12345)$ & $(12354)$\\ & $(12)(35)$, $(13)(25)$,$(15)(23)$ & $(124)$, $(142)$ & $(12453)$ & $(12435)$\\ & $(12)(45)$, $(14)(25)$,$(15)(24)$ & $(125)$, $(152)$ & $(12534)$ & $(12543)$\\ & $(13)(45)$, $(14)(35)$, $(15)(34)$ & $(134)$, $(143)$ & $(13254)$ & $(13245)$\\ & $(23)(45)$, $(24)(35)$,$(25)(34)$ & $(135)$, $(153)$ & $(13542)$ & $(13524)$\\ & & $(145)$, $(154)$ & $(13425)$ & $(13452)$\\ & & $(234)$, $(243)$ & $(14235)$ & $(14253)$\\ & & $(235)$, $(253)$ & $(14352)$ & $(14325)$\\ & & $(245)$, $(254)$ & $(14523)$ & $(14532)$\\ & & $(345)$, $(354)$ & $(15243)$ & $(15234)$\\ & & & $(15432)$ & $(15423)$\\ & & & $(15324)$ & $(15342)$\\\hline \end{tabular} \end{center} \caption{Conjugacy classes of elements in $A_{5}$.} \label{tab:conjugacy} \end{table}
We define the set of indices as \[ \Lambda=\mathcal{C}_{4}\times\mathbb{Z}[1,5]~\text{.} \] With these notations each index $(\tau,k)\in\Lambda$ represents a face and a vertex in that face of the truncated icosahedron, as shown in Figure \ref{fig:fullerene}. The vectors that represent the positions of the carbon atoms are \[ u_{\tau,k}\in\mathbb{R}^{3}, \] and the vector collecting the $60$ positions is \[ u=(u_{\tau,k})_{(\tau,k)\in\Lambda}\in\left( \mathbb{R}^{3}\right) ^{60}\text{.} \]
The space $\left( \mathbb{R}^{3}\right) ^{60}$ is a representation of the group $I\times O(3)$, where $I=A_{5}\times\mathbb{Z}_{2}$ stands for the full icosahedral group. With this notation, the action of $I\times O(3)$ on $\left( \mathbb{R}^{3}\right) ^{60}$ has a simple description: the action of $\sigma\in A_{5}$ and of $-1\in\mathbb{Z}_{2}$ on $u$ is defined componentwise by \begin{equation} \rho(\sigma)u_{\tau,k}=u_{\sigma^{-1}\tau\sigma,\sigma^{-1}(k)}~,\qquad \rho(-1)u_{\tau,k}:=u_{\tau^{-1},k}~, \label{eq:act1} \end{equation} and the action of an element $A\in O(3)$ is defined by \[ Au=(Au_{\tau,k})_{(\tau,k)\in\Lambda}. \]
\subsection{Force Field}
\label{sec:forcefield}
The system for the fullerene molecule is given by \eqref{eq:1}. To describe the force field $\nabla V$ we recognize that:
\begin{itemize} \item The $60$ edges in the $12$ pentagonal faces represent single bonds. For these single bonds we define the function $S:\Lambda\rightarrow\Lambda,$ \[ S(\tau,k)=(\tau,\tau(k))~, \quad\tau\in\mathcal{C}_{4},\; k\in\mathbb{Z} [1,5]. \]
\item The $30$ remaining edges, which belong to the hexagonal faces and connect the different pentagonal faces, represent double bonds (a worked example of $S$ and $D$ is given right after this list). For these double bonds we define the function $D:\Lambda\rightarrow\Lambda$, \[ D(\tau,k)=(\sigma,k)\text{ with }\sigma=\left( k,\tau^{2}(k),\tau(k),\tau ^{4}(k),\tau^{3}(k)\right) . \]
\end{itemize}
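As a quick illustration of these definitions, take the face $\tau=b=(12345)$ and the vertex $k=1$. Then
\[
S(b,1)=(b,b(1))=(b,2),\qquad
D(b,1)=\big((1,b^{2}(1),b(1),b^{4}(1),b^{3}(1)),1\big)=\big((13254),1\big),
\]
so the single bond joins two consecutive vertices of the same pentagonal face, while the double bond joins the vertex labeled $1$ of the face $(12345)$ to the vertex labeled $1$ of the face $(13254)$. A direct computation with the formula for $D$ also shows that $D(D(\tau,k))=(\tau,k)$, i.e. $D$ is an involution, as expected for a pairing of atoms by double bonds.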
Using the above notation, the potential energy of the force field can be written as \[ V(u)=\sum_{\left( \tau,k\right) \in\Lambda}\left( U(\left\vert u_{\tau ,k}-u_{S(\tau,k)}\right\vert )+\frac{1}{2}U(\left\vert u_{\tau,k} -u_{D(\tau,k)}\right\vert )+U_{\left( \tau,k\right) }(u)\right) \text{,} \] where the coefficient $\frac12$ in front of the second term avoids double counting the double bonds (each of them appears twice in the sum), and the term $U_{\left( \tau,k\right) }(u)$ includes the bending and torsion energies.
Bond stretching is represented by the Morse potential \[ U(x)=E_{0}\left( \left( 1-e^{-\beta(x-r_{0})}\right) ^{2}-1\right) , \] where $r_{0}$ is the equilibrium bond length, $E_{0}$ is the bond energy and $\beta^{-1}$ is the width of the potential well. The term $U_{\left( \tau,k\right) }(u)$ includes bending and torsion energies given by \[ U_{\left( \tau,k\right) }(u)=E_{B}(\theta_{1})+E_{B}(\theta_{2} )+E_{B}(\theta_{3})+E_{T}(\phi_{1})+E_{T}(\phi_{2})+E_{T}(\phi_{3}), \] where the bending energy $E_{B}(\theta)$ around each atom in the molecule is governed by the hybridization of orbitals and is given by \[ E_{B}(\theta)=\frac{1}{2}k_{\theta}(\cos\theta-\cos\theta_{0})^{2}=\frac{1} {2}k_{\theta}(\cos\theta+1/2)^{2},\quad\theta_{0}:=2\pi/3, \] (here $\theta_{0}$ is the equilibrium angle and $k_{\theta}$ is the bending force constant), and the torsion energy $E_{T}(\phi)$ (which describes the energy change associated with rotation around a bond in a four-atom sequence) is given by \[ E_{T}(\phi)=\frac{1}{2}k_{\phi}\left( 1-\cos2\phi\right) =k_{\phi}\left( 1-\cos^{2}\phi\right) =k_{\phi}\sin^{2}\phi\text{.} \] The torsion energy reaches its maximum value at the angles $\phi=\pm\pi/2$.
Each carbon $\left( \tau,k\right) \in\Lambda$ has three bond angles $\theta_{1}$, $\theta_{2}$, $\theta_{3}$ determined by \begin{align*} \cos\theta_{1} & =\frac{u_{\tau,k}-u_{S(\tau,k)}}{\left\vert u_{\tau ,k}-u_{S(\tau,k)}\right\vert }\bullet\frac{u_{\tau,k}-u_{S^{-1}(\tau,k)} }{\left\vert u_{\tau,k}-u_{S^{-1}(\tau,k)}\right\vert },\\ \cos\theta_{2} & =\frac{u_{\tau,k}-u_{D(\tau,k)}}{\left\vert u_{\tau ,k}-u_{D(\tau,k)}\right\vert }\bullet\frac{u_{\tau,k}-u_{S(\tau,k)} }{\left\vert u_{\tau,k}-u_{S(\tau,k)}\right\vert }~,\\ \cos\theta_{3} & =\frac{u_{\tau,k}-u_{D(\tau,k)}}{\left\vert u_{\tau ,k}-u_{D(\tau,k)}\right\vert }\bullet\frac{u_{\tau,k}-u_{S^{-1}(\tau,k)} }{\left\vert u_{\tau,k}-u_{S^{-1}(\tau,k)}\right\vert }. \end{align*} Clearly, the bending energy at each atom $\left( \tau,k\right) \in\Lambda$ is $E_{B}(\theta_{1})+E_{B}(\theta_{2})+E_{B}(\theta_{3})$. Let \[ n=\frac{u_{D(\tau,k)}-u_{S(\tau,k)}}{\left\vert u_{D(\tau,k)}-u_{S(\tau ,k)}\right\vert }\times\frac{u_{D(\tau,k)}-u_{S^{-1}(\tau,k)}}{\left\vert u_{D(\tau,k)}-u_{S^{-1}(\tau,k)}\right\vert } \] be the unit normal vector to the plane passing through $u_{D(\tau,k)}$, $u_{S(\tau,k)}$ and $u_{S^{-1}(\tau,k)}$. Each carbon $\left( \tau,k\right) \in\Lambda$ also has three torsion angles $\phi_{1}$, $\phi_{2}$, $\phi_{3}$ determined by \begin{align*} \cos\phi_{1} & =n\bullet n_{1}~,\qquad n_{1}=\frac{u_{(\tau,k)} -u_{S(\tau,k)}}{\left\vert u_{(\tau,k)}-u_{S(\tau,k)}\right\vert }\times \frac{u_{(\tau,k)}-u_{S^{-1}(\tau,k)}}{\left\vert u_{(\tau,k)}-u_{S^{-1} (\tau,k)}\right\vert }~,\\ \cos\phi_{2} & =n\bullet n_{2}~,\qquad n_{2}=\frac{u_{(\tau,k)} -u_{D(\tau,k)}}{\left\vert u_{(\tau,k)}-u_{D(\tau,k)}\right\vert }\times \frac{u_{(\tau,k)}-u_{S^{-1}(\tau,k)}}{\left\vert u_{(\tau,k)}-u_{S^{-1} (\tau,k)}\right\vert }~,\\ \cos\phi_{3} & =n\bullet n_{3}~,\qquad n_{3}=\frac{u_{(\tau,k)} -u_{D(\tau,k)}}{\left\vert u_{(\tau,k)}-u_{D(\tau,k)}\right\vert }\times \frac{u_{(\tau,k)}-u_{S(\tau,k)}}{\left\vert u_{(\tau,k)}-u_{S(\tau ,k)}\right\vert }~. \end{align*} Then the torsion energy at each atom $\left( \tau,k\right) \in\Lambda$ is $E_{T}(\phi_{1})+E_{T}(\phi_{2})+E_{T}(\phi_{3})$.
For this work, we use the parameters given in \cite{Be}, namely $E_{0}=6.1322~eV$, $\beta=1.8502~A^{-1}$, $r_{0}=1.4322~A$, $k_{\theta}=10~eV$ and $k_{\phi}=0.346~eV$.
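For readers who wish to reproduce the computations, the following minimal sketch (in Python; it is only an illustration of the formulas of this subsection, not the code actually used for the computations in this paper) evaluates the three energy contributions with the parameters above; the bending and torsion terms are written directly in terms of $\cos\theta$ and $\cos\phi$.
\begin{verbatim}
import numpy as np

E0, beta, r0 = 6.1322, 1.8502, 1.4322   # eV, 1/Angstrom, Angstrom
k_theta, k_phi = 10.0, 0.346            # eV, eV

def U(x):
    # Morse bond-stretching energy U(x)
    return E0 * ((1.0 - np.exp(-beta * (x - r0)))**2 - 1.0)

def E_B(cos_theta):
    # bending energy (1/2) k_theta (cos(theta) + 1/2)^2, since cos(2 pi/3) = -1/2
    return 0.5 * k_theta * (cos_theta + 0.5)**2

def E_T(cos_phi):
    # torsion energy k_phi (1 - cos^2(phi)) = k_phi sin^2(phi)
    return k_phi * (1.0 - cos_phi**2)

# Example: a bond stretched by one percent of its equilibrium length,
# an unbent angle (cos = -1/2) and a maximally twisted torsion (cos = 0).
print(U(1.01 * r0), E_B(-0.5), E_T(0.0))
\end{verbatim}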
\subsection{Icosahedral Symmetries}
\label{sec:iso-symm} In order to fix a reference point for the system \eqref{eq:1} (i.e. to place the center of mass at the origin), we define the subspace \begin{equation} \mathscr V:=\{u\in\left( \mathbb{R}^{3}\right) ^{60}:\sum_{(\sigma ,k)\in\Lambda}u_{\sigma,k}=0\} \label{eq:V} \end{equation} and \[ \Omega_{o}=\left\{ u\in\mathscr V:u_{\sigma,k}\not =u_{\tau,j}\text{ for all }(\sigma,k)\neq(\tau,j)\right\} . \] Both $\mathscr V$ and $\Omega_{o}$ are flow-invariant for \eqref{eq:1}.
By the properties of the functions $S$ and $D$, the potential \begin{equation} V:\Omega_{o}\rightarrow\mathbb{R}~ \label{eq:mol} \end{equation} is well defined and $I$-invariant. Moreover, the potential $V$ is invariant under the rotations and reflections of the group $O(3)$, because the bond, bending and torsion energies depend only on distances and angles between atoms, which are preserved by orthogonal transformations. Therefore, the potential $V$ is $I\times O(3)$-invariant, \[ V((\sigma,A)u)=V(u),\qquad(\sigma,A)\in I\times O(3)~. \]
Let \begin{equation} \label{eq:infinitezimal}J_{1}:=\left[ \begin{array} [c]{ccc} 0 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0 \end{array} \right] ,\quad J_{2}:=\left[ \begin{array} [c]{ccc} 0 & 0 & -1\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{array} \right] ,\quad J_{3}:=\left[ \begin{array} [c]{ccc} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array} \right] \end{equation} be the three infinitesimal generators of rotations in $O(3)$, i.e., $e^{\phi J_{1}}$, $e^{\theta J_{2}}$ and $e^{\psi J_{3}}$, where $\phi$, $\theta$ and $\psi$ are the Euler angles. The angle between the outer normals of two adjacent pentagonal faces of a dodecahedron is $\theta_{0}=\arctan2$. Then, the rotation by $\pi$ that fixes a pair of antipodal edges is \begin{equation} A=e^{-\left( \theta_{0}/2\right) J_{2}}e^{\pi J_{3}}e^{\left( \theta _{0}/2\right) J_{2}}= \begin{bmatrix} -\frac{1}{\sqrt{5}} & 0 & \frac{2}{\sqrt{5}}\\ 0 & -1 & 0\\ \frac{2}{\sqrt{5}} & 0 & \frac{1}{\sqrt{5}} \end{bmatrix} \text{.} \end{equation} The rotation by $2\pi/5$ of the upper pentagonal face of a dodecahedron is \begin{equation} B=e^{\frac{2\pi}{5}J_{3}}= \begin{bmatrix} \frac{-1+\sqrt{5}}{4} & -\sqrt{\frac{5+\sqrt{5}}{8}} & 0\\ \sqrt{\frac{5+\sqrt{5}}{8}} & \frac{-1+\sqrt{5}}{4} & 0\\ 0 & 0 & 1 \end{bmatrix} ~\text{.} \end{equation} The subgroup of $O(3)$ generated by the matrices $A$ and $B$ is isomorphic to the icosahedral rotation group $A_{5}$. Indeed, the generators $A$ and $B$ satisfy the relations \[ A^{2}=B^{5}=(AB)^{3}=\text{\textrm{Id\,}}\ . \] On the other hand, the group $A_{5}$ is generated by \begin{equation} a=(23)(45),\qquad b=\left( 12345\right) \text{,} \end{equation} and these generators satisfy the same relations \[ a^{2}=b^{5}=(ab)^{3}=(1). \] Therefore, the explicit homomorphism $\rho:A_{5}\rightarrow\mathrm{SO}(3)$ defined on generators by $\rho(a):= A$ and $\rho( b):= B$ is the required isomorphism $A_{5}\simeq\rho(A_{5})\subset SO(3)$. We extend \[ \rho:A_{5}\times\mathbb{Z}_{2}\rightarrow O(3) \] with $\rho(-1)=-\text{\textrm{Id\,}}\in O(3)$, and consequently we obtain an explicit identification of the full icosahedral group $I$ with $\rho(I)\subset O(3)$.
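The relations above are easy to check numerically; the following minimal sketch (in Python, with the matrices $A$ and $B$ copied from the explicit formulas above) verifies that $A^{2}=B^{5}=(AB)^{3}=\mathrm{Id}$ up to round-off errors.
\begin{verbatim}
import numpy as np

s5 = np.sqrt(5.0)
A = np.array([[-1/s5,  0.0, 2/s5],
              [  0.0, -1.0,  0.0],
              [ 2/s5,  0.0, 1/s5]])
c, s = (s5 - 1.0)/4.0, np.sqrt((5.0 + s5)/8.0)   # cos(2 pi/5), sin(2 pi/5)
B = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

for name, M in [("A^2", A @ A),
                ("B^5", np.linalg.matrix_power(B, 5)),
                ("(AB)^3", np.linalg.matrix_power(A @ B, 3))]:
    assert np.allclose(M, np.eye(3)), name
print("relations A^2 = B^5 = (AB)^3 = Id verified")
\end{verbatim}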
\subsection{Icosahedral Minimizer}
\label{sec:minimizer}
Let $\tilde{I}$ be the icosahedral subgroup of $I\times O(3)$ given by \[ \tilde{I}=\left\{ (\sigma,\rho(\sigma))\in I\times O(3):\sigma\in I\right\} ~. \] The fixed point space \[ \Omega_{o}^{\tilde{I}}=\mathscr V^{\tilde{I}}\cap\Omega_{o}=\{u\in\Omega_{o}:u=(\sigma,\rho(\sigma))u\text{ for all }\sigma\in I\} \] consists of all the truncated icosahedral configurations. An equilibrium of the fullerene molecule can be found as a minimizer of $V$ among these configurations. More precisely, since $V$ is $I\times O(3)$-invariant, by the Palais criticality principle, a minimizer of the potential $V$ on the fixed-point space of $\tilde{I}$ is a critical point of $V$.
To find the minimizer among the configurations with symmetry group $\tilde{I}$, we parameterize the carbon positions by fixing the position of one of them. Let \[ u_{b,1}=(x,0,z), \quad x,z\in\mathbb{R}; \] then we have \[ u_{b,1}=(\sigma,\rho(\sigma))u_{b,1}=\rho(\sigma)u_{\sigma^{-1}b\sigma ,\sigma^{-1}(1)}\text{~,} \] and the relations \begin{equation} u_{\sigma^{-1}b\sigma,\sigma^{-1}(1)}=\rho(\sigma)^{-1}u_{b,1}=\rho (\sigma)^{-1}(x,0,z)^{T},\quad\sigma\in A_{5}, \label{rel} \end{equation} allow us to determine all the remaining components of the vector $u=\left( u_{\tau,k}\right) $.
Therefore, the formula for $u(x,z)$ given by (\ref{rel}) provides a parametrization of a connected component of $\Omega_{o}^{\tilde{I}}/O(3)$ with the domain \[ \mathcal{D}=\{(x,z)\in\mathbb{R}^{2}:0<z,\;x<Cz\}, \] where $C>0$ is a constant determined by the geometric restrictions. We define $v:\mathcal{D}\rightarrow\mathbb{R}$ by \[ v(x,z)=V(u(x,z)). \] Since $v(x,z)$ is exactly the restriction of $V$ to the fixed-point subspace $\Omega_{o}^{\tilde{I}}/O(3)$, by the symmetric criticality principle a critical point of $v:\mathcal{D}\rightarrow\mathbb{R}$ is also a critical point of $V$.
We implemented a numerical minimization procedure to find the minimizer $(x_{o},z_{o})$ of $v(x,z)$. We denote the truncated icosahedral minimizer corresponding to the fullerene $C_{60}$ by \[ u_{o}=u\left( x_{o},z_{o}\right) \in\mathscr V. \] The lengths of the single and double bonds for this minimizer are \begin{align*} d_{S}=\left\vert u_{b,1}-u_{S(b,1)}\right\vert =1.438084,\\ d_{D}=\left\vert u_{b,1}-u_{D(b,1)}\right\vert =1.420845\text{,} \end{align*} respectively. These values are in accordance with the distances measured in \cite{He}.
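The following is a minimal sketch of the reduced two-variable minimization (in Python, using SciPy), assuming two hypothetical helper routines: \texttt{build\_configuration(x, z)}, which assembles the $60$ positions $u(x,z)$ from \eqref{rel}, and \texttt{potential\_V(u)}, which evaluates the force-field energy of Subsection \ref{sec:forcefield}. It only indicates the structure of the computation actually performed.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def reduced_potential(xz):
    # v(x, z) = V(u(x, z)): restriction of V to icosahedral configurations
    x, z = xz
    u = build_configuration(x, z)   # hypothetical helper implementing (rel)
    return potential_V(u)           # hypothetical helper evaluating V(u)

# initial guess (placeholder values) chosen inside the domain D
result = minimize(reduced_potential, x0=np.array([1.0, 3.0]),
                  method="Nelder-Mead")
x_o, z_o = result.x                 # parameters of the icosahedral minimizer
\end{verbatim}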
An advantage of the notation $u=\left( u_{\tau,k}\right) $ is that it makes it easy to visualize the rotations $\rho(\sigma)$ in the truncated icosahedron (Figure \ref{fig:fullerene}). For these configurations we have $\rho(\sigma)u_{\tau,k}=\sigma^{-1}u_{\tau,k} =u_{\sigma\tau\sigma^{-1},\sigma(k)}$, so $\rho(\sigma)$ is identified with the rotation that sends the face $\tau$ to $\sigma\tau\sigma^{-1}$ and the vertex labeled $k$ to $\sigma(k)$; for example, under the $\pi $-rotation given by $\rho(a)=A$, the face $b=(12345)$ goes to the face $aba^{-1}=(13254)$, and the element $k=1$ to $a(1)=1$. In this manner, we conclude that the elements of the conjugacy classes $\mathcal{C}_{2}$, $\mathcal{C}_{3}$, $\mathcal{C}_{4}$ and $\mathcal{C}_{5}$ correspond to the $15$ rotations by $\pi$, the $20$ rotations by $2\pi/3$, the $12$ rotations by $2\pi/5$ and the $12$ rotations by $4\pi/5$, respectively.
\subsection{Isotypical Decomposition and Spectrum of $\nabla^{2}V(u_{o})$}
\label{sec:iso-decomp}
The space $\mathscr V$ is the orthogonal complement in $\widetilde {\mathscr V}=(\mathbb{R}^{3})^{60}$ of the subspace $\{(v,v,\dots,v): v\in\mathbb{R}^{3}\}$, and thus it is $\tilde{I}$-invariant. Since the system \eqref{eq:1}, with $u(t)\in\mathscr V$, is symmetric with respect to the action of $I\times O(3)$, the orbit of the equilibrium $u_{o}$ is a $3$-dimensional submanifold of $\mathscr V$. To describe the tangent space, we define \[ \mathcal{J}_{j}u=(J_{j}u_{\sigma,k}), \] where the $J_{j}$ are the three infinitesimal generators of rotations defined in \eqref{eq:infinitezimal}. Then, the slice $S_{o}$ to the orbit of $u_{o}$ is \[ S_{o}:=\{x\in\mathscr V:x\bullet\mathcal{J}_{j}u_{o}=0,\quad j=1,2,3\}. \] Since $u_{o}$ has isotropy group $\tilde{I}$, the slice $S_{o}$ is an orthogonal $\tilde{I}$-representation.
In this section we identify the $\tilde{I}$-isotypical components of $S_{o}$. In order to simplify the notation, hereafter we identify $\tilde{I}$ with the group $A_{5}\times\mathbb{Z}_{2}$, \[ \tilde{I}=A_{5}\times\mathbb{Z}_{2}~. \] Put $\varphi_{\pm}=\frac{1}{2}(1\pm\sqrt{5})$, and consider the permutations $a=(2,3)(4,5)$, $b=\left( 1,2,3,4,5\right) $ and $c=(1,2,3)$. The character table of $A_{5}$ is:
\[ \begin{tabular}
[c]{|cc|ccccc|}\hline Rep. & Character & $(1)$ & $a$ & $c$ & $b$ & $b^{2}$\\\hline $\mathcal{V}_{1}$ & $\chi_{1}$ & $1$ & $1$ & $1$ & $1$ & $1$\\ $\mathcal{V}_{2}$ & $\chi_{2}$ & $4$ & $0$ & $1$ & $-1$ & $-1$\\ $\mathcal{V}_{3}$ & $\chi_{3}$ & $5$ & $1$ & $-1$ & $0$ & $0$\\ $\mathcal{V}_{4}$ & $\chi_{4}$ & $3$ & $-1$ & $0$ & $\varphi_{+}$ & $\varphi_{-}$\\ $\mathcal{V}_{5}$ & $\chi_{5}$ & $3$ & $-1$ & $0$ & $\varphi_{-}$ & $\varphi_{+}$\\\hline \end{tabular} \ \ \ \ \ \] The character table of $\tilde{I}\simeq A_{5}\times\mathbb{Z}_{2}$ is obtained from the table of $A_{5}$. We denote the irreducible representations of $I$ by\ $\mathcal{V}_{\pm n}$ for $n=1,...,5$, where the element $-1\in \mathbb{Z}_{2}$ acts as $\pm\text{\textrm{Id\,}}$ in $\mathcal{V}_{\pm n}$ and elements $\gamma\in A_{5}$ act as they act on $\mathcal{V}_{n}$. Notice that all the representations $\mathcal{V}_{\pm n}$ are absolutely irreducible. Therefore, the character table for $A_{5}\times\mathbb{Z}_{2}$ is as follows: \begin{table}[ptb] \begin{center} \begin{tabular}
[c]{|c|cccccccccc|}\hline & (1) & $a$ & $c$ & $b$ & $b^{2}$ & $(-1)$ & $(a,-1) $ & $(c, -1) $ & $(b,-1)$ & $(b^{2},-1) $\\\hline $\chi_{1}$ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ $\chi_{2}$ & 4 & 0 & 1 & -1 & -1 & 4 & 0 & 1 & -1 & -1\\ $\chi_{3}$ & 5 & 1 & -1 & 0 & 0 & 5 & 1 & -1 & 0 & 0\\ $\chi_{4}$ & 3 & -1 & 0 & $\varphi_{+} $ & $\varphi_{-}$ & 3 & -1 & 0 & $\varphi_{+}$ & $\varphi_{-}$\\ $\chi_{5}$ & 3 & -1 & 0 & $\varphi_{-}$ & $\varphi_{+}$ & 3 & -1 & 0 & $\varphi_{-}$ & $\varphi_{+}$\\ $\chi_{-1}$ & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1\\ $\chi_{-2}$ & 4 & 0 & 1 & -1 & -1 & -4 & 0 & -1 & 1 & 1\\ $\chi_{-3}$ & 5 & 1 & -1 & 0 & 0 & -5 & -1 & 1 & 0 & 0\\ $\chi_{-4}$ & 3 & -1 & 0 & $\varphi_{+} $ & $\varphi_{-}$ & -3 & 1 & 0 & $-\varphi_{+} $ & $-\varphi_{-}$\\ $\chi_{-5}$ & 3 & -1 & 0 & $\varphi_{-}$ & $\varphi_{+} $ & -3 & 1 & 0 & $-\varphi_{-} $ & $-\varphi_{+}$\\\hline \end{tabular} \end{center} \caption{Character table for $A_{5}\times\mathbb{Z}_{2}$} \label{tab:I} \end{table}\vskip.3cm By comparing the character $\chi_{_{\mathscr V}}$ with the characters in Table \ref{tab:I}, we obtain the following $I$-isotypical decomposition of $\mathscr V$: \[ \mathscr V=\bigoplus_{n=1}^{5} \mathscr V_{n}\oplus\mathscr V_{-n}, \] where $\mathscr V_{\pm n}$ is modeled on $\mathcal{V}_{\pm n}$.
\vskip.3cm
We numerically computed the spectrum $\{\mu_{j}: j=1,2,\dots, 47\}$ of the Hessian $\nabla^{2}V(u_{o})$ at this minimizer $u_{o}$. Since $\nabla ^{2}V(u_{o}):\mathscr V\rightarrow\mathscr V$ is $\tilde{I}$-equivariant, \[
\nabla^{2}V(u_{o})|_{\mathscr{V}_{n}\cap E(\mu_{j})}=\mu_{j} \,\text{\textrm{Id}}:\mathscr{V}_{n}\cap E(\mu_{j})\rightarrow\mathscr{V}_{n} \cap E(\mu_{j}), \] where $E(\mu_{j})$ stands for the eigenspace corresponding to $\mu_{j}$. We found that each of the eigenspaces $E(\mu_{j})$ is an irreducible subrepresentation of $\mathscr V$, i.e. each eigenvalue $\mu_{j}$ has isotypical multiplicity one. Including the zero eigenspace, there are $47$ different components. Thus $\sigma(\nabla^{2}V(u_{o})|_{{S_{o}}})=\{\mu _{1},...,\mu_{46}\}$ with $\mu_{j}>0$, so \begin{equation} S_{o}=\bigoplus_{j=1}^{46}\mathcal{V}_{n_{j}} \quad\text{ and } \quad
\nabla^{2}V(u_{o})|_{\mathcal{V}_{n_{j}}}=\mu_{j}\,\text{\textrm{Id}.} \label{eq:S4-iso-S} \end{equation}
In order to determine in which isotypical component $\mathscr V_{n_{j}}$ the eigenspace $E(\mu_{j})$ is contained for a given eigenvalue $\mu_{j}$, we apply the isotypical projections \[ P_{{n}}v:=\frac{\dim(\mathcal{V}_{n})}{120}\sum_{g\in\tilde{I}}\chi _{n}(g)~gv,\quad v\in\mathscr V, \] where $\mathcal{V}_{n}$, $n=\pm1,\dots,\pm5$, is the irreducible representation identified by its character in Table \ref{tab:I}. Then the component $\mathscr V_{n_{j}}$ is clearly identified by the projection $P_{n_{j}}$.
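The projections $P_{n}$ are straightforward to implement once the $120$ matrices of the $\tilde I$-action on $\mathscr V$ and the characters from Table \ref{tab:I} are available; the following sketch (in Python, with these two inputs assumed to be given) is a direct transcription of the formula above.
\begin{verbatim}
import numpy as np

def isotypical_projection(v, group_matrices, chi, dim_Vn, group_order=120):
    # P_n v = (dim V_n / |I|) * sum over g of chi_n(g) * (g.v)
    Pv = np.zeros_like(v)
    for g, chi_g in zip(group_matrices, chi):
        Pv += chi_g * (g @ v)
    return (dim_Vn / group_order) * Pv

# An eigenvector w of the Hessian belongs to the component V_n exactly when
# np.allclose(isotypical_projection(w, mats, chi_n, dim_n), w) holds.
\end{verbatim}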
\begin{table}[t] \begin{center} \scalebox{.7}{ \begin{tabular}
[c]{|c|cccc|}\hline $j$ & \text{{\small Mult.}} & $\mu_{j}$ & $\lambda_j$&${\small n}_{j}$\\ \hline {\small 1} & {\small 5} & {\small 176.536} &{\small $ 0.075263 $}& -{\small 3}\\ {\small 2} & {\small 5} & {\small 176.366} &{\small $ 0.075300 $}& {\small 3}\\ {\small 3} & {\small 4} & {\small 164.083} &{\small $ 0.078067 $}& {\small 2}\\ {\small 4} & {\small 4} & {\small 160.292} &{\small $ 0.078985 $}& -{\small 2}\\ {\small 5} & {\small 3} & {\small 159.290} &{\small $ 0.079233$}& -{\small 5}\\ {\small 6} & {\small 5} & {\small 148.597} &{\small $ 0.082034 $}& {\small 3}\\ {\small 7} & {\small 3} & {\small 141.071} &{\small $ 0.084194 $}& -{\small 4}\\ {\small 8} & {\small 3} & {\small 140.573} &{\small $ 0.084343 $}& {\small 5}\\ {\small 9} & {\small 1} & {\small 135.632} &{\small $ 0.085866 $}& {\small 1}\\ {\small 10} & {\small 4} & {\small 134.935} &{\small $ 0.086087$}& -{\small 2}\\ {\small 11} & {\small 4} & {\small 129.544} &{\small $ 0.087860 $}& {\small 2}\\ {\small 12} & {\small 5} & {\small 125.431} &{\small $ 0.089289 $}& -{\small 3}\\ {\small 13} & {\small 3} & {\small 107.719} &{\small $ 0.096350 $}& {\small 4}\\ {\small 14} & {\small 5} & {\small 98.5525} &{\small $ 0.100732 $}& {\small 3}\\ {\small 15} & {\small 3} & {\small 93.4648} &{\small $ 0.103437 $}& -{\small 5}\\ {\small 16} & {\small 5} & {\small 87.7541} &{\small $0.106750 $}& -{\small 3}\\ {\small 17} & {\small 3} & {\small 83.9718} &{\small $ 0.109127 $}& -{\small 4}\\ {\small 18} & {\small 4} & {\small 71.6288} &{\small $ 0.118156 $} &{\small 2}\\ {\small 19} & {\small 5} & {\small 67.1181} &{\small $ 0.122062 $} &{\small 3}\\ {\small 20} & {\small 1} & {\small 59.3865} &{\small $ 0.129765$}& -{\small 1}\\ {\small 21} & {\small 3} & {\small 50.4797} &{\small $0.140748 $}& -{\small 5}\\ {\small 22} & {\small 4} & {\small 47.5646} &{\small $ 0.144997$}& -{\small 2}\\ {\small 23} & {\small 3} & {\small 42.2947} &{\small $ 0.153765$}& {\small 4}\\ \hline \end{tabular} \qquad\begin{tabular}
[c]{|c|cccc|}\hline $j$ & Mult. & ${\small \mu}_{j}$ & $\lambda_j$&$n_{j}$\\ \hline {\small 24} & {\small 3} & {\small 41.3918} &{\small $ 0.155433$}& {\small 5}\\ {\small 25} & {\small 4} & {\small 33.9885} &{\small $ 0.171528$}& -{\small 2}\\ {\small 26} & {\small 5} & {\small 28.8031} &{\small $ 0.186329 $}& -{\small 3}\\ {\small 27} & {\small 5} & {\small 27.4795} &{\small $ 0.190764 $}& {\small 3}\\ {\small 28} & {\small 3} & {\small 27.3153} &{\small $ 0.191336 $}& {\small 5}\\ {\small 29} & {\small 4} & {\small 25.5388} &{\small $ 0.197879 $}& -{\small 2}\\ {\small 30} & {\small 4} & {\small 22.7212} &{\small $ 0.209790$}& {\small 2}\\ {\small 31} & {\small 5} & {\small 19.4536} &{\small $ 0.226725 $}& -{\small 3}\\ {\small 32} & {\small 5} & {\small 19.3377} & {\small $ 0.227404 $}&{\small 3}\\ {\small 33} & {\small 3} & {\small 19.2379} &{\small $ 0.227993 $}& -{\small 5}\\ {\small 34} & {\small 4} & {\small 16.5356} &{\small $ 0.245918 $} &{\small 2}\\ {\small 35} & {\small 3} & {\small 16.5255} & {\small $ 0.245993 $}&{\small 5}\\ {\small 36} & {\small 3} & {\small 15.1033} & {\small $ 0.257314 $}&-{\small 4}\\ {\small 37} & {\small 3} & {\small 10.3908} & {\small $0.310224 $}&{\small 4}\\ {\small 38} & {\small 1} & {\small 10.2520} &{\small $ 0.312317 $} &{\small 1}\\ {\small 39} & {\small 5} & {\small 10.1098} &{\small $ 0.314506 $} &-{\small 3}\\ {\small 40} & {\small 3} & {\small 9.03077} &{\small $ 0.332765 $} &-{\small 4}\\ {\small 41} & {\small 4} & {\small 9.02666} &{\small $ 0.332841 $}& {\small 2}\\ {\small 42} & {\small 5} & {\small 6.99929} & {\small $ 0.377984 $}&{\small 3}\\ {\small 43} & {\small 5} & {\small 6.95354} & {\small $ 0.379225$}&-{\small 3}\\ {\small 44} & {\small 3} & {\small 5.42311} &{\small $0.429414 $} &-{\small 5}\\ {\small 45} & {\small 4} & {\small 5.26429} & {\small $ 0.435843$}&-{\small 2}\\ {\small 46} & {\small 5} & {\small 3.04384} &{\small $ 0.573177$} &{\small 3}\\ \hline \end{tabular}} \end{center} \caption{Eigenvalues $\mu_{j}$ of $\nabla^{2}V(u_{o})$ and critical numbers $\lambda_{j}$ according to their isotypical type $\mathcal{V}_{n_{j}}$.}\label{tab:crit-values} \end{table}
In Table \ref{tab:crit-values} we show the number $n_{j} \in\{-5,...,-1,1,...,5\}$ that identifies the irreducible representation corresponding to the eigenvalue $\mu_{j}$ for $j=1,\dots,46$. The numerical computations strongly indicate that all the eigenvalues $\mu_{j}$, $j=1,\dots,46$, are non-resonant.
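As a consistency check of Table \ref{tab:crit-values}, note that the listed multiplicities add up to the dimension of the slice, \[ \sum_{j=1}^{46}\dim E(\mu_{j})=174=3\cdot 60-3-3=\dim S_{o}, \] where the two subtracted $3$'s account for the translations removed in \eqref{eq:V} and for the tangent directions $\mathcal{J}_{j}u_{o}$ to the $O(3)$-orbit of $u_{o}$.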
\begin{remark} \textrm{The models proposed in \cite{Wa} and \cite{Be} consider the presence of van der Waals forces among the carbon atoms, which are modeled by the potential \[ W(x)=\varepsilon\left( \frac{\sigma^{12}}{x^{12}}-2\frac{\sigma^{6}}{x^{6} }\right) ~, \] where $\sigma=3.4681~A$ is the minimum energy distance and $\varepsilon =0.0115~eV$ is the depth of this minimum. Our numerical computations indicate that the models with van der Waals forces neither produce acceptable bond lengths between the atoms (as given in \cite{He}), nor spectra fitting the experimental data (cf. \cite{Ho}). On the other hand, the models of \cite{Wa} and \cite{Be} without van der Waals forces lead to results which correctly approximate the measurements in \cite{He} and \cite{Ho} (for the bond lengths $d_{S}$ and $d_{D}$ and the frequencies $\sqrt{\mu_{j}/m}$, which are within the range $100$ to $1800$ $cm^{-1}$). } \end{remark}
\section{Equivariant Bifurcation}
\label{sec:eq-bif}
In what follows, we are interested in finding non-trivial $T$-periodic solutions to \eqref{eq:1}, bifurcating from the $G$-orbit of the equilibrium point $u_{o}$. By normalizing the period, i.e. by making the substitution $v(t):=u\left( \lambda t\right) $ in \eqref{eq:1}, we obtain the system \begin{equation} \begin{cases} \ddot{v}=-\lambda^{2}\nabla V(v),\\ v(0)=v(2\pi),\;\;\dot{v}(0)=\dot{v}(2\pi), \end{cases} \label{eq:mol1} \end{equation} where $\lambda^{-1}=2\pi/T$ is the frequency.
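Indeed, writing \eqref{eq:1} as $\ddot{u}=-\nabla V(u)$ (which is the form suggested by the variational formulation used below), if $u$ is a $T$-periodic solution and $v(t):=u(\lambda t)$ with $\lambda=T/2\pi$, then \[ \ddot{v}(t)=\lambda^{2}\ddot{u}(\lambda t)=-\lambda^{2}\nabla V(u(\lambda t))=-\lambda^{2}\nabla V(v(t)), \] and $v$ is $2\pi$-periodic, which is exactly \eqref{eq:mol1}.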
\subsection{Equivariant Gradient Map}
Since $\mathscr V$ is an orthogonal $I\times O(3)$- representation, we can consider the first Sobolev space of $2\pi$-periodic functions from $\mathbb{R}$ to $\mathscr V$, i.e., \[
H_{2\pi}^{1}(\mathbb{R},\mathscr V):=\{u:\mathbb{R}\rightarrow\mathscr V\;:\;u(0)=u(2\pi),\;u|_{[0,2\pi]}\in H^{1}([0,2\pi];\mathscr V)\}, \] equipped with the inner product \[ \langle u,v\rangle:=\int_{0}^{2\pi}(\dot{u}(t)\bullet\dot{v}(t)+u(t)\bullet v(t))dt~. \]
Let $O(2)=SO(2)\cup\kappa SO(2)$ denote the group of $2\times2$-orthogonal matrices, where \[ \kappa=\left[ \begin{array} [c]{cc} 1 & 0\\ 0 & -1 \end{array} \right] ,\qquad\left[ \begin{array} [c]{cc} \cos\tau & -\sin\tau\\ \sin\tau & \cos\tau \end{array} \right] \in SO(2)~. \] It is convenient to identify a rotation with $e^{i\tau}\in S^{1} \subset\mathbb{C}$. Notice that $\kappa e^{i\tau}=e^{-i\tau}\kappa$, i.e., $\kappa$ as a linear transformation of $\mathbb{C}$ into itself, acts as complex conjugation.
Clearly, the space $H_{2\pi}^{1}(\mathbb{R},\mathscr V)$ is an orthogonal Hilbert representation of \[ G:=I\times O(3)\times O(2). \] Indeed, for $u\in H_{2\pi}^{1}(\mathbb{R},\mathscr V)$ and $(\sigma,A)\in I\times O(3)$ we have (see \eqref{eq:act1}) \begin{align} \left( (\sigma,A) u\right)(t) & =(\sigma,A)u(t),\label{eq:ac}\\ (e^{i\tau}u)(t) & =u(t+\tau),\nonumber\\ (\kappa u)(t) & =u(-t).\nonumber \end{align}
It is useful to identify a $2\pi$-periodic function $u:\mathbb{R}\rightarrow \mathscr V$ with a function $\widetilde{u}:S^{1}\rightarrow\mathscr V$ via the map $\mathfrak{e}:\mathbb{R}\rightarrow S^{1}$, $\mathfrak{e}(\tau)=e^{i\tau}$. Using this identification, we will write $H^{1}(S^{1},\mathscr V)$ instead of $H_{2\pi }^{1}(\mathbb{R},\mathscr V)$. Let \[ \Omega:=\{u\in H^{1}(S^{1},\mathscr V):u(t)\in\Omega_{o}\text{ for all }t\}. \] We define $J:\mathbb{R}\times\Omega\rightarrow\mathbb{R}$ by \begin{equation}
J(\lambda,u):=\int_{0}^{2\pi}\left[ \frac{1}{2}|\dot{u}(t)|^{2}-\lambda ^{2}V(u(t))\right] dt. \label{eq:var-1} \end{equation} Then, the system \eqref{eq:mol1} can be written as the following variational equation \begin{equation} \nabla_{u}J(\lambda,u)=0,\quad(\lambda,u)\in\mathbb{R}\times\Omega. \label{eq:bif1} \end{equation}
Consider $u_{o}\in\mathscr V$ -- the equilibrium point of \eqref{eq:mol} (i.e. the symmetric ground state) described in the previous section. Then $u_{o}$ is a critical point of $J$. We are interested in finding non-stationary $2\pi $-periodic solutions bifurcating from $u_{o}$, i.e. non-constant solutions to system \eqref{eq:bif1}. We consider the orbit $G(u_{o})$ of $u_{o}$ in $H^{1}(S^{1},\mathscr V)$ and denote by $\mathcal{S}_{o}$ the slice to $G(u_{o})$ in $H^{1}(S^{1},\mathscr V)$. We consider the $G_{u_{o}}$-invariant restriction $J:\mathbb{R}\times\left( \mathcal{S}_{o}\cap\Omega\right) \rightarrow\mathbb{R}$ of $J$ to the set $\mathcal{S}_{o}\cap\Omega$. This restriction will allow us to apply the Slice Criticality Principle (see Theorem \ref{thm:SCP}) in order to compute the gradient equivariant degree of $\nabla J_{\lambda}$ on a small neighborhood $\mathscr U$ of $G(u_{o})$, which is needed for the evaluation of the equivariant invariant $\omega_{G}(\lambda)$.
Consider the operator $L:H^{2}(S^{1};\mathscr V)\rightarrow L^{2} (S^{1};\mathscr V)$, given by \[ Lu=-\ddot{u}+u \] for $u\in H^{2}(S^{1},\mathscr V)$. Then the inverse operator $L^{-1}$ exists and is bounded. Let $j:H^{2}(S^{1};\mathscr V)\rightarrow H^{1}(S^{1},\mathscr V)$ be the natural embedding operator. Clearly, $j$ is a compact operator. Then, one can easily verify that \begin{equation} \nabla_{u}J(\lambda,u)=u-j\circ L^{-1}(\lambda^{2}\nabla V(u)+u), \label{eq:gradJ} \end{equation} where $u\in H^{1}(S^{1},\mathscr V)$. Consequently, the bifurcation problem \eqref{eq:bif1} is equivalent to the fixed-point equation $u=j\circ L^{-1}(\lambda^{2}\nabla V(u)+u)$, i.e. $\nabla_{u}J(\lambda,\cdot)$ is a compact perturbation of the identity. Moreover, we have \begin{equation} \nabla_{u}^{2}J(\lambda,u_{o})v=v-j\circ L^{-1}(\lambda^{2}\nabla^{2} V(u_{o})v+v)~, \label{eq:D2J} \end{equation} where $v\in H^{1}(S^{1},\mathscr V)$.
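To verify \eqref{eq:gradJ}, note that for $w\in L^{2}(S^{1},\mathscr V)$ and $v\in H^{1}(S^{1},\mathscr V)$, integration by parts and the $2\pi$-periodicity give \[ \langle j\circ L^{-1}w,v\rangle=\int_{0}^{2\pi}\Big(\tfrac{d}{dt}(L^{-1}w)\bullet\dot v+(L^{-1}w)\bullet v\Big)dt=\int_{0}^{2\pi}\big(LL^{-1}w\big)\bullet v\,dt=\int_{0}^{2\pi}w\bullet v\,dt, \] so that \[ \langle u-j\circ L^{-1}(\lambda^{2}\nabla V(u)+u),v\rangle=\int_{0}^{2\pi}\big(\dot u\bullet\dot v-\lambda^{2}\nabla V(u)\bullet v\big)dt=DJ(\lambda,u)v. \]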
Define \[ \mathscr A(\lambda):=\nabla_{u}^{2}J(\lambda,u_{o})|_{\mathcal{S}_{o} }:\mathcal{S}_{o}\rightarrow\mathcal{S}_{o}. \] By the implicit function theorem, $G(u_{o})$ is an isolated orbit of critical points whenever $\mathscr A(\lambda)$ is an isomorphism. Therefore, if a point $(\lambda_{o},u_{o})$ is a bifurcation point for \eqref{eq:bif1}, then $\mathscr A(\lambda_{o})$ cannot be an isomorphism. We define \[ \Lambda:=\{\lambda>0:\mathscr A(\lambda)\text{ is not an isomorphism}\}~, \] and call this set the \textit{critical set} for the trivial solution $u_{o}$.
\subsection{Critical Numbers}
Consider the $S^{1}$-action on $H^{1}(S^{1},\mathscr V)$, where $S^{1}$ acts on functions by shifting the argument (see \eqref{eq:ac}). Then, $(H^{1} (S^{1},\mathscr V))^{S^{1}}$ is the space of constant functions, which can be identified with the space $\mathscr V$, i.e., \[ H^{1}(S^{1},\mathscr V)=\mathscr V\oplus\mathscr W,\quad\mathscr W:=\mathscr V^{\perp}. \] Then, the slice $\mathcal{S}_{o}$ in $H^{1}(S^{1},\mathscr V)$ to the orbit $G(u_{o})$ at $u_{o}$ is exactly \[ \mathcal{S}_{o}=S_{o}\oplus\mathscr W. \] Consider the $S^{1}$-isotypical decomposition of $\mathscr W$, i.e., \[ \mathscr W=\overline{\bigoplus_{l=1}^{\infty}\mathscr W_{l}},\quad\mathscr W_{l}:=\{\cos(l\cdot)\mathfrak{a}+\sin(l\cdot)\mathfrak{b}:\mathfrak{a} ,\,\mathfrak{b}\in\mathscr V\} \] In a standard way, the space $\mathscr W_{l}$, $l=1,2,\dots$, can be naturally identified with the complexification $\mathscr V^{\mathbb{C}}$ on which $S^{1}$ acts by $l$-folding, \[ \mathscr W_{l}=\{e^{il\cdot}z:z\in\mathscr V^{\mathbb{C}}\}. \]
Since the operator $\mathscr A(\lambda)$ is $G_{u_{o}}$-equivariant with \[ G_{u_{o}}=\tilde{I}\times O(2), \] it is also $S^{1}$-equivariant and thus $\mathscr A(\lambda)(\mathscr W_{l})\subset\mathscr W_{l}$. Using the $\tilde{I}$-isotypical decomposition of $\mathscr V^{\mathbb{C}}$, we have the $G_{u_{o}}$-invariant decomposition \[ \mathscr W_{l}=\bigoplus_{j=1}^{46}\mathcal{V}_{{j},l},\quad \mathcal{V}_{{j},l}:=\{e^{il\cdot}z:z\in E(\mu_j)^\mathbb C \}, \] where $\mathcal{V}_{n_{j},l}=\mathcal V_{n_j}^\mathbb C=\mathbb C\otimes_\mathbb R \mathcal V_{n_j}$ is the $I\times O(2)$-irreducible representation with $O(2)$ acting on $\mathbb C$ by $l$-folding and complex conjugation.
We have \[
\mathscr A(\lambda)|_{\mathcal{V}_{{j},l}}=\left( 1-\frac{\lambda^{2} \mu_{j}+1}{l^{2}+1}\right) \text{\rm Id\,}. \]
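Indeed, for $v=e^{il\cdot}z\in\mathcal{V}_{{j},l}$ one has $Lv=(l^{2}+1)v$, i.e. $L^{-1}v=(l^{2}+1)^{-1}v$, while $\nabla^{2}V(u_{o})v=\mu_{j}v$ by \eqref{eq:S4-iso-S}; substituting into \eqref{eq:D2J} gives \[ \nabla_{u}^{2}J(\lambda,u_{o})v=v-\frac{\lambda^{2}\mu_{j}+1}{l^{2}+1}\,v. \]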
Thus $\mathscr A(\lambda_{o})|_{\mathcal{V}_{{j},l}}=0$ if and only if $\lambda_{o}^{2}=l^{2}/\mu_{j}$ for some $l=1,2,3,\dots$ and $j=1,2, \dots, 46$.
We will write \[ \lambda_{j,l}=\frac{l}{\sqrt{\mu_{j}}} \] to denote the critical numbers in $\Lambda$. Then the critical set for the equilibrium $u_{o}$ of system \eqref{eq:mol1} is \[ \Lambda= \{ \lambda_{j,l}:j=1,...,46,\quad l=1,2,3,\dots \} . \]
Let us point out that in the case of isotypical resonances, the critical numbers may not be uniquely identified by the indices $(j,l)$. The first and last critical numbers for $l=1$ are $\lambda_{1,1}\approx 0.075263$ and $\lambda_{46,1}\approx 0.573177$, respectively. We computed numerically (with precision $10^{-5}$) all the values $\lambda_{j,l}$ lying between $\lambda_{1,1}$ and $\lambda_{46,1}$. Within this precision, no two of these critical numbers coincide, i.e., there is no resonance with higher harmonic critical numbers between $\lambda_{1,1}$ and $\lambda_{46,1}$: \begin{equation}\label{eq:reson-no} \lambda_{1,1}<\lambda_{2,1}<\lambda_{3,1}<\lambda_{4,1} <...<\lambda_{5,7}<\lambda_{26,3}<\lambda_{21,4}<\lambda_{27,3} <\lambda_{46,1}. \end{equation} Therefore, a plausible assumption, supported by the numerical evidence, is that all the critical numbers $\lambda_{j,1}$, $j=1,...,46$, are isotypically non-resonant.
\subsection{Conjugacy Classes of Subgroups in $I\times O(2)$}\label{sec:conjugacy}
In order to simplify the notation, in what follows, instead of using the symbol $\tilde{I}$, we will write $I$. With this convention the isotropy group of $u_{o}$ is \[ G_{u_{o}}=I\times O(2). \] The notation introduced in this subsection is used to describe the conjugacy classes $(\mathscr H)$ of closed subgroups of $I\times O(2)$.
The representatives of the conjugacy classes of those subgroups of $A_5\times \mathbb Z_2$ which are proper nontrivial subgroups of $A_5$ (identified with $A_5\times\{1\}$) are: \begin{align*} \mathbb Z_2&=\{ ((1),1),\, ((12)(34),1)\},\\ \mathbb Z_3&=\{ ((1),1),\, ((123),1),\, ((132),1)\},\\ V_4&=\{((1),1),\, ((12)(34),1),\, ((13)(24),1),\, ((23)(14),1)\},\\ \mathbb Z_5&=\{((1),1), \,((12345),1),\, ((13524),1),\, ((14253),1),\, ((15432),1)\},\\ D_3&=\{((1),1),\, ((123),1),\, ((132),1),\, ((12)(45),1),\, ((13)(45),1),\, ((23)(45),1)\},\\ A_4&=\{((1),1),\, ((12)(34),1),\, ((13)(24),1),\, ((14)(23),1),\,((123),1),\, ((132),1),\, ((124),1),\\
&\hskip.5cm ((142),1),\, ((134),1),\, ((143),1),\, ((234),1),\, ((243),1)\},\\ D_5&=\{((1),1),\, ((12345),1),\, ((13524),1),\, ((14253),1),\, ((15432),1),\, ((12)(35),1),\, ((13)(45),1),\\ &\hskip.5cm ((14)(23),1),
\,((15)(24),1),\, ((25)(34),1)\}. \end{align*}
The representatives of the additional conjugacy classes of subgroups of $A_5\times \mathbb Z_2$ will be used to describe the symmetries of the nonlinear vibrations. Besides the product subgroups $H^{p}:=H\times\mathbb{Z}_{2}$, we also have the following twisted subgroups $H^\varphi$ of $A_5\times \mathbb Z_2$ (where $H$ is a subgroup of $A_5$): {\small\begin{align*} \mathbb Z_2^z&=\Big\{\big((1),1\big),\, \big((12)(34),-1\big)\Big\},\\ V_4^z&=\Big\{\big((1),1\big),\, \big((12)(34),-1\big),\, \big((13)(24),-1\big),\, \big((23)(14),1\big)\Big\},\\ D_3^z&=\Big\{\big((1),1\big),\, \big((123),1\big),\, \big((132),1\big),\,\big((12)(45),-1\big),\, \big((13)(45),-1\big),\\ &\hskip.5cm \, \big((23)(45),-1\big)\Big\},\\ D_5^z&=\Big\{\big((1),1\big),\, \big((12345),1\big),\, \big((13524),1\big),\, \big((14253),1\big),\, \big((15432),1\big),\\ &\hskip.5cm \, \big((12)(35),-1\big),\, \big((13)(45),-1\big),\, \big((14)(23),-1\big),\, \big((15)(24),-1\big),\\ &\hskip.5cm \, \big((25)(34),-1\big)\Big\}. \end{align*}} With these definitions the lattice of conjugacy classes of subgroups of $A_5\times \mathbb Z_2$ is shown in Figure \ref{fig:sub-A5}.
\begin{figure}
\caption{Lattice of conjugacy classes of subgroups in $A_5\times \mathbb Z_2$. The square boxes indicate that the related subgroup is normal in $A_5\times \mathbb Z_2$.}
\label{fig:sub-A5}
\end{figure}
The following result (see \cite{DaKr,Goursat}) provides a description of the subgroups of the product group $I\times O(2)$. Namely, for any subgroup $\mathscr H$ of the product group $I\times O(2)$, there exist subgroups $H\leq I$ and $K\leq O(2)$, a group $L$ and two epimorphisms $\varphi :H\rightarrow L$ and $\psi:K\rightarrow L$ such that \[\mathscr H=\{(h,k)\in H\times K:\varphi(h)=\psi(k)\}.\] In order to make the notation self-contained, we will assume that $L=K/\ker(\psi)$, so that $\psi:K\rightarrow L$ is simply the natural projection. On the other hand, the group $L$ can be naturally identified with a finite subgroup of $O(2)$, being either $D_{n}$ or $\mathbb{Z}_{n}$. Since we are interested in describing the conjugacy class of $\mathscr H$, we specify the epimorphisms only by indicating \[ Z=\text{Ker\thinspace}(\varphi)\quad\text{ and }\quad L=K/\ker(\psi). \]
Therefore, to identify $\mathscr H$ we will write \begin{equation} \mathscr H=:H{\prescript{Z}{}\times_{L}^{m}}K~, \label{eq:amalg} \end{equation} where $H$ and $Z$ are subgroups of $I$ and $m$ is a number used to distinguish non-conjugate subgroups sharing the same $H$, $Z$, $L$ and $K$. In the case that all the epimorphisms $\varphi$ with kernel $Z$ are conjugate, there is no need to use the number $m$ in \eqref{eq:amalg}, so we simply write $\mathscr H=H{\prescript{Z}{}\times_{L}K}$. In addition, in the case that all epimorphisms $\varphi$ from $H$ to $L$ are conjugate, we can also omit the symbol $Z$, i.e. we write $\mathscr H=H\times_{L}K$.
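For example, the orbit type ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})}$ appearing in Theorem \ref{th:main} stands for the conjugacy class of the subgroup \[ \amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{}=\{(h,k)\in V_{4}^{p}\times D_{2}:\varphi(h)=\psi(k)\}, \] where $\varphi:V_{4}^{p}\rightarrow\mathbb Z_{2}$ is an epimorphism with kernel $Z=\mathbb Z_{2}^{p}$ and $\psi:D_{2}\rightarrow L=\mathbb Z_{2}=D_{2}/\ker(\psi)$ is the natural projection. In particular, every element of the kernel $Z$ appears in this subgroup paired with the trivial element of $O(2)$ (a purely spatial symmetry), while the elements of $V_{4}^{p}\setminus Z$ appear only coupled with elements of $D_{2}$ outside $\ker(\psi)$; the dynamical meaning of these couplings is described in Section \ref{sec:symmetries}.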
\subsection{Bifurcation Theorem}
\begin{theorem} \label{th:main} Assume that the critical numbers $\lambda_{j,1}\in\Lambda$, $j=1,2,\dots,46$, for the system \eqref{eq:bif1} are isotypically non-resonant. Then, there exist multiple global bifurcations of periodic solutions from each critical number $\lambda_{j,1}$, corresponding to the irreducible representation $\mathcal{V}_{n_{j}}$ in Table \ref{tab:crit-values}:
\begin{itemize} \item For $n_{j}=1$ there exists a $G$-orbit of a branch of periodic solutions with the orbit type ${(\amal{{A_5^p}}{D_{1}}{}{}{})}$;
\item For $n_{j}=2$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3^p}}{})}$, ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})}$, ${(\amal{{A_4^p}}{D_{1}}{}{}{})}$, ${(\amal{{D_3^p}}{D_{1}}{}{}{})}$, ${(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{1}})}$, ${(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{2}})}$, ${(\amal{{D_3^p}}{D_{3}}{D_{3}}{{\mathbb Z_1^p}}{})}$;
\item For $n_{j}=3$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})}$, ${(\amal{{D_5^p}}{D_{1}}{}{}{})}$, ${(\amal{{D_3^p}}{D_{1}}{}{}{})}$, ${(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{1}})}$, ${(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{2}})}$, ${(\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{})}$;
\item For $n_{j}=4$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_5^p}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3^p}}{})}$, ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})}$, ${(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{1}})}$, ${(\amal{{D_3^p}}{D_{3}}{D_{3}}{{\mathbb Z_1^p}}{})}$,
\item For $n_{j}=5$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_5^p}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3^p}}{})}$, ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})}$, ${(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{2}})}$, ${(\amal{{D_3^p}}{D_{3}}{D_{3}}{{\mathbb Z_1^p}}{})}$;
\item For $n_{j}=-1$ there exists a $G$-orbit of a branch of periodic solutions with the orbit type ${(\amal{{A_5^p}}{D_{2}}{\mathbb Z_{2}}{{A_5}}{})}$;
\item For $n_{j}=-2$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{A_4^p}}{D_{2}}{\mathbb Z_{2}}{{A_4}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3^z}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3}}{})}$,\break ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})}$, ${(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{1}})}$, ${(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{2}})}$, ${(\amal{{D_3^p}}{D_{6}}{D_{6}}{{\mathbb Z_1}}{})}$,
\item For $n_{j}=-3$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{D_5}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3}}{})}$, ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})}$,\break ${(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{1}})}$, ${(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{2}})}$, ${(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})}$;
\item For $n_{j}=-4$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{D_5^z}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3^z}}{})}$, ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})}$,\break ${(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{1}})}$, ${(\amal{{D_3^p}}{D_{6}}{D_{6}}{{\mathbb Z_1}}{})}$;
\item For $n_{j}=-5$ there exist $G$-orbits of branches of periodic solutions with the orbit types ${(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{D_5^z}}{})}$, ${(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3^z}}{})}$, ${(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})}$,\break ${(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{2}})}$, ${(\amal{{D_3^p}}{D_{6}}{D_{6}}{{\mathbb Z_1}}{})}$. \end{itemize} \end{theorem}
\begin{proof} The critical numbers for system \eqref{eq:bif1} are $\lambda_{j,l}=l/\sqrt {\mu_{j}}$ for $l=1,2,3,\dots,$ and $j=1,2,\dots,46$, where the $\mu_{j}$ (together with their isotypical types) are listed in Table \ref{tab:crit-values}. We assume (based on the numerical evidence) that the critical frequencies $\lambda_{j,1}^{-1}$ are isotypically non-resonant. By the ideas explained in Section \ref{sec:1}, $\lambda_{o}:=\lambda_{j_{o},1}$ is then an isolated point of the critical set $\Lambda$. That is, there are $\lambda_{-}<\lambda_{o}<\lambda_{+}$ such that $[\lambda_{-},\lambda_{+}]\cap\Lambda=\{\lambda_{o}\}$. Moreover, there exists an isolating $G$-neighborhood $\mathscr U$ of $G(u_{o})$ such that no other critical orbits of $J_{\lambda_{\pm}}$ belong to $\overline{\mathscr U}$. Thus, we can define the topological invariant $\omega_{G}(\lambda_{o})$ by \eqref{eq:top-inv}. Then, by the properties of the gradient equivariant degree, if \[ \omega_{G}(\lambda_{o})=n_{1}(H_{1})+n_{2}(H_{2})+\dots+n_{m}(H_{m}) \] is non-zero, i.e. $n_{j}\not =0$ for some $j=1,2,\dots,m$, then there exists a bifurcating branch of nontrivial solutions to \eqref{eq:bif1} from the orbit $\{\lambda_{o}\}\times G(u_{o})$ with symmetries at least $(H_{j})$.
Next, by Theorem \ref{thm:SCP}, \[ \nabla_{G}\text{-deg}(\nabla J_{\lambda_{\pm}},\mathscr U)=\Theta\left( \nabla_{G_{u_{o}}}\text{-deg}(\nabla{ J}_{\lambda_{\pm}},\mathscr U\cap\mathscr S_{o})\right) , \] where $G=I\times O(3)\times O(2)$, $G_{u_{o}}=\tilde{I}\times O(2)$ and $\Theta:U(G_{u_{o}})\rightarrow U(G)$ is the homomorphism given by $\Theta(H)=(H)$. For convenience, in what follows we will omit the symbol $\Theta$. Moreover, by a standard linearization argument, we have \[ \nabla_{G_{u_{o}}}\text{-deg}(\nabla{J}_{\lambda_{\pm}},\mathscr U\cap \mathscr S_{o})=\nabla_{G_{u_{o}}}\text{-deg}(\mathscr A_{\lambda_{\pm}},\mathscr U\cap\mathscr S_{o}). \] By \eqref{eq:lin-GdegGrad}, since all the eigenvalues $\mu_{j}$ are isotypically simple, we have \begin{align*} \nabla_{G_{u_{o}}}\text{-deg}(\mathscr A_{\lambda_{-}},\mathscr U\cap \mathscr S_{o}) & =\prod_{\left\{ \left( j,l\right) \in\mathbb{N} ^{2}:\lambda_{j,l}<\lambda_{o}\right\} }\nabla\text{-deg}_{\mathcal{V} _{n_{j},l}},\\ \nabla_{G_{u_{o}}}\text{-deg}(\mathscr A_{\lambda_{+}},\mathscr U\cap \mathscr S_{o}) & =\nabla\text{-deg}_{\mathcal{V}_{n_{j_{o}},l}} \prod_{\left\{ \left( j,l\right) \in\mathbb{N}^{2}:\lambda_{j,l} <\lambda_{o}\right\} }\nabla\text{-deg}_{\mathcal{V}_{n_{j},l}}, \end{align*} where the $\nabla\text{-deg}_{\mathcal{V}_{n_{j},l}}$ are the gradient $I\times O(2)$-equivariant basic degrees listed in Appendix \ref{sec:basic}. Therefore, we obtain \[ \omega_{G}(\lambda_{o}):=\Big((I\times O(2))-\nabla\text{-deg}_{\mathcal{V} _{n_{j_{o}},1}}\Big)\prod_{\left\{ \left( j,l\right) \in\mathbb{N} ^{2}:\lambda_{j,l}<\lambda_{o}\right\} }\nabla\text{-deg}_{\mathcal{V} _{n_{j},l}}, \] where $(I\times O(2))$ is the unit element in $U(I\times O(2))$. For instance, the first equivariant invariants are given by \begin{align*} \omega_{G}(\lambda_{1,1}) & =(I\times O(2))-\nabla\text{-deg}_{\mathcal{V} _{3,1}}\\ \omega_{G}(\lambda_{2,1}) & =\nabla\text{-deg}_{\mathcal{V}_{3,1}} \ast\Big((I\times O(2))-\nabla\text{-deg}_{\mathcal{V}_{-3,1}}\Big)\\ \omega_{G}(\lambda_{3,1}) & =\nabla\text{-deg}_{\mathcal{V}_{3,1}}\ast \nabla\text{-deg}_{\mathcal{V}_{-3,1}}\ast\Big((I\times O(2))-\nabla \text{-deg}_{\mathcal{V}_{2,1}}\Big). \end{align*}
We will prove that a maximal orbit type $(H)$ appearing in the gradient $I\times O(2)$-basic degree $\nabla\text{-deg}_{\mathcal{V}_{n_{j_{o}},1}}$ with a non-zero coefficient $n_{H}$, \[ \nabla\text{-deg}_{\mathcal{V}_{n_{j_{o}},1}}=(I\times O(2))+n_{H}(H)+\dots, \] also appears in $\omega_{G}(\lambda_{o})$ with a non-zero coefficient. Hereafter, the dots indicate all the remaining terms, corresponding to orbit types strictly smaller than $(H)$. Notice that all such maximal orbit types (which are indicated in subsection \ref{sec:basic} by red color) belong to $\Phi _{0}(I\times O(2))$ (i.e. dim\thinspace$W(H)=0$), except for $(\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{})$ (in $\nabla\text{-deg} _{\mathcal{V}_{3,1}}$) and $(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})$ (in $\nabla\text{-deg}_{\mathcal{V}_{-3,1}}$).
Now, assume that $(H)$ is a maximal orbit type such that dim\thinspace$W(H)=0$ and \[ \nabla\text{-deg}_{\mathcal{V}_{n,1}}=(I\times O(2))+n_{H}(H)+\dots, \] with $n_{H}\not =0$. By maximality of $(H)$ in $\mathcal{V}_{n,1}$, formula \eqref{eq:rec-Brouwer} gives \[
n_{H}=\frac{(-1)^{k}-1}{|W(H)|},\quad k:=\text{dim\thinspace}\mathcal{V} _{n,1}^{H}. \]
Then $k$ must be odd and consequently $n_{H}=-1$ when $|W(H)|=2$ or $n_{H}=-2$
when $|W(H)|=1$. Suppose now that $\nabla\text{-deg}_{\mathcal{V}_{\bar{n},1} }$ is another (not necessarily different) basic degree containing a non-zero coefficient for $(H)$, i.e. \[ \nabla\text{-deg}_{\mathcal{V}_{\bar{n},1}}=(I\times O(2))+n_{H}(H)+\dots. \] Then, we have, \[ \nabla\text{-deg}_{\mathcal{V}_{n,1}}\ast\nabla\text{-deg}_{\mathcal{V} _{\bar{n},1}}=(I\times O(2))+2n_{H}(H)+n_{H}^{2}(H)^{2}+\dots \] However, by \eqref{eq:rec-coef}, we have $(H)\ast(H)=m_{H}(H)+\dots$, where \[
m_{H}:=\frac{|W(H)|\cdot|W(H)|}{|W(H)|}=|W(H)|. \] Thus \[ \nabla\text{-deg}_{\mathcal{V}_{n,1}}\ast\nabla\text{-deg}_{\mathcal{V} _{\bar{n},1}}=(I\times O(2))+\left( 2n_{H}+n_{H}^{2}m_{H}\right) (H)+\dots. \] One can easily check that \[ 2n_{H}+n_{H}^{2}m_{H}=0 \] in both cases: for $n_{H}=-1$ one has $m_{H}=|W(H)|=2$ and $-2+2=0$, while for $n_{H}=-2$ one has $m_{H}=|W(H)|=1$ and $-4+4=0$. Therefore, the coefficient of $(H)$ in the product $\nabla\text{-deg}_{\mathcal{V}_{n,1}}\ast\nabla\text{-deg} _{\mathcal{V}_{\bar{n},1}}$ is zero. Consequently, since either $\nabla_{G_{u_{o}}}\text{-deg}(\mathscr A_{\lambda_{-}},\mathscr U\cap\mathscr S_{o})$ or $\nabla_{G_{u_{o}}}\text{-deg}(\mathscr A_{\lambda_{+}},\mathscr U\cap\mathscr S_{o})$ (but not both) contains an even number of factors $\nabla\text{-deg}_{\mathcal{V}_{n_{i},1}}$ with a non-zero coefficient $n_{H}$ of $(H)$, it follows that their difference contains a non-zero coefficient $\pm n_{H}$ of $(H)$. Actually, the computation with GAP shows that $n_{H}=-1$, so in these cases we have \[ \omega_{G}(\lambda_{o})=\pm(H)+\dots. \]
Now, assume that $H=\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{}$ and consider $\nabla\text{-deg}_{\mathcal{V}_{3,1}}=(I\times O(2))+n_{H}(H)+\dots$. Then we have \[ \nabla\text{-deg}_{\mathcal{V}_{3,1}}\ast\nabla\text{-deg}_{\mathcal{V}_{3,1} }=(I\times O(2))+2n_{H}(H)+n_{H}^{2}(H)^{2}+\dots. \] By the functoriality property of the gradient equivariant degree, the inclusion $\psi:I\times S^{1}\rightarrow I\times O(2)$ induces the Euler homomorphism $\Psi:U(I\times O(2))\rightarrow U(I\times S^{1})$ such that $\Psi(\nabla\text{-deg}_{\mathcal{V}_{3,1}})$ is also a gradient equivariant basic degree (see \cite{DaKr}). This can be easily computed (cf. \cite{RR}) as follows: \begin{align*} \Psi(\nabla\text{-deg}_{\mathcal{V}_{3,1}}) & =(I\times S^{1})-(D_{5} ^{p})-(D_{3}^{p})-(A_{4}^{t_{1}}\times\mathbb Z_{2})-(A_{4}^{t_{2}}\times\mathbb Z_{2})\\ & -(V_{4}^{-}\times\mathbb Z_{2})-(\mathbb Z_{5}^{t_{1}}\times\mathbb Z_{2})-(\mathbb Z_{5}^{t_{2} }\times\mathbb Z_{2})+2(\mathbb Z_{2}^{p}). \end{align*} Thus $n_{H}=-1$. Notice that by \eqref{eq:Euler-hom}, $\Psi (\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{})=(A_{4}^{t_{1}}\times\mathbb Z_{2} )+(A_{4}^{t_{2}}\times\mathbb Z_{2})$. As was shown in \cite{RR}, $(A_{4}^{t_{i} }\times\mathbb Z_{2})\ast(A_{4}^{t_{j}}\times\mathbb Z_{2})=0$. Thus we have \[ 0=\Psi((H)\ast(H))=\Psi(m_{H}(H)+\dots)=m_{H}\Big((A_{4}^{t_{1}}\times \mathbb Z_{2})+(A_{4}^{t_{2}}\times\mathbb Z_{2})\Big), \] which implies $m_{H}=0$. Therefore, $(H)\ast(H)=0$ and, for $k\in\mathbb{N}$, \[ \left( \nabla\text{-deg}_{\mathcal{V}_{3,1}}\right) ^{k}=(I\times O(2))-k(H)+\dots. \] Clearly, $\omega_{G}(\lambda_{o})$ has a non-zero coefficient of $(H)$, \[ \omega_{G}(\lambda_{o})=(H)+\dots. \] For $(H)=(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})$ the proof is similar. This concludes the proof of our main theorem. \end{proof}
\begin{remark}\rm Since all the invariants are of the form $\omega_{G}(\lambda_{o})=(H)+...$ for $H=(\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{})$ and $H=(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})$, the sum of the $\omega_{G}$'s can never be zero, i.e. all the connected components $\mathcal{C}$ with symmetries $(\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{})$ and $(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})$ are non-compact. Similarly, notice that there is an odd number of irreducible subrepresentations $\mathcal V_{-n}$ in the isotypical component $\mathscr V_{-n}$, for $n=1$, $3$, $4$, $5$, and the topological invariant is $\omega_{G}(\lambda_{o})=\pm(H)+...$ (for a maximal group $(H)$). This excludes the possibility that all the branches with the orbit type $(H)$, bifurcating from the critical points $\lambda_{j,1}$ corresponding to $\mathcal V_{-n}$, are compact. Thus, for any maximal orbit type $(H)$ in $\mathcal V_{-n}$, $n=1,3,4,5$, there exists a non-compact branch $\mathcal{C}$ with orbit type $(H)$. \end{remark}
\begin{remark}\rm All the gradient basic degrees $\nabla\text{-deg}_{\mathcal{V}_{\pm n,1}}$, which were computed using GAP, are included in Appendix \ref{sec:equi-degree}. These degrees can be used to compute the exact value of the topological invariants $\omega_{G}(\lambda_{o})$ even in the case that $\lambda_{o}$ is isotypically resonant, so a bifurcation result can be established in the resonant case as well. For example, such a resonant case was studied in \cite{BeGa} (to classify the nonlinear modes in a tetrahedral molecule). \end{remark}
\section{Description of Symmetries and Numerical results}
\label{sec:symmetries}
For any maximal orbit type $(H)$ in $\mathcal V_{n,1}$ the element $-1\in\mathbb{Z}_{2}<I$ belongs to $H$, while for any maximal orbit type in $\mathcal V_{-n,1}$ the element $(-1,-1)\in\mathbb{Z}_{2}\times \mathbb Z_2<I\times S^1$ belongs to $H$. A solution $u(t)$ in the fixed point space of a group containing $-1\in\mathbb{Z}_{2}$ satisfies \[ u_{\tau,k}(t)=-u_{\tau^{-1},k}(t), \] while a solution in the fixed point space of a group containing $(-1,-1)\in\mathbb{Z}_{2}\times \mathbb Z_2$ satisfies \[ u_{\tau,k}(t)=-u_{\tau^{-1},k}(t+\pi). \] Since the isotropy groups in $\mathcal V_{n,1}$ and $\mathcal V_{-n,1}$ differ only in this element, we only need to describe the symmetries of the maximal groups for the representations $\mathcal{V}_{n,1}$.
The existence of the symmetry $\kappa\in O(2)$ in the maximal groups implies that the solutions are brake orbits, \[ u_{\tau,k}(t)=u_{\tau,k}(-t)\text{,} \] i.e., the velocities $\dot{u}$ of all the atoms are zero at the times $t=0,\pi$: $\dot{u}(0)=\dot{u}(\pi)=0$. We classify the maximal groups into two classes: the groups that contain the element $\kappa\in O(2)$ itself, and the groups in which $\kappa$ appears only coupled with a rotation of $I$. That is, if there is an element $\gamma\in\mathcal{C}_{2}$ such that $(\gamma,\kappa)$ belongs to a group of the second class, then the corresponding solutions have the symmetry \[ u_{\tau,k}(t)=\rho(\gamma)u_{\gamma\tau\gamma^{-1},\gamma^{-1}(k)}(-t). \]
The maximal orbit type that does not have a symmetry $(\gamma,\kappa)$ is $(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})$ which is the only maximal group (in $\mathscr V_{3,1}$) with Weyl group of dimension one. \vskip.3cm
\subsection{Standing Waves (Brake Orbits)}
In this category we consider the groups that contain the element $\kappa\in O(2)$, which generates the subgroup $D_{1}<O(2)$.
For the groups \[ {({A_{5}^{p}}\prescript{}{}\times_{{}}^{{}}D_{1}),({A_{4}^{p}} \prescript{}{}\times_{{}}^{{}}D_{1}),({D_{5}^{p}}\prescript{}{}\times_{{}} ^{{}}D_{1}),({D_{3}^{p}}\prescript{}{}\times_{{}}^{{}}D_{1})} \] the solutions have the following symmetries at all times: icosahedral symmetries for ${{A_{5}}}$, tetrahedral symmetries for ${{A_{4}}}$, pentagonal symmetries for ${{D_{5}}}$ and triangular symmetries for ${{D_{3}}}$.
For the group \[ {({D_{3}^{p}}\prescript{{\mathbb Z_3^p}}{}\times_{\mathbb{Z}_{2}}^{{}}D_{2})} \] the solutions are symmetric under the $2\pi/3$-rotations of $\mathbb{Z}_{3} <D_{3}<I$, while the reflection of $D_{3}<I$ is coupled with the $\pi$-time shift $-1\in\mathbb{Z}_{2}<S^{1}$. Therefore, the solutions on three faces have exactly the same dynamics, but these faces are not related by a reflection symmetry, as they are for the group ${({D_{3}^{p}}\prescript{}{}\times_{{}}^{{}}D_{1})}$.
For the group \[ {({V_{4}^{p}}\prescript{{\mathbb Z_2^p}}{}\times_{\mathbb{Z}_{2}}^{{}}D_{2} )}\text{,} \] the solutions are symmetric under the $\pi$-rotation generating $\mathbb{Z}_{2}<V_{4}$, while the other two $\pi$-rotations of $V_{4}<I$ are coupled with the $\pi$-time shift $-1\in D_{2}<S^{1}$.
These seven symmetries (including the analogous group ${(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_5^p}}{})}$ from Theorem \ref{th:main}, for which the $2\pi/5$-rotations of $\mathbb Z_{5}<D_{5}$ act as purely spatial symmetries) give solutions which are standing waves, in the sense that each symmetric face repeats exactly the same dynamics for all times.
\subsection{Discrete Rotating Waves}
In the groups \[ {({D_{5}^{p}}\prescript{{\mathbb Z_1^p}}{}\times_{D_{5}}^{1}D_{5})},{({D_{5}^{p} }\prescript{{\mathbb Z_1^p}}{}\times_{D_{5}}^{2}D_{5})}~, \] the spatial dihedral group $D_{5}<I$ is coupled with the temporal group $D_{5}<O(2)$. Therefore, in these solutions there are $5$ faces with the same dynamics, with a $2\pi/5$ time shift between consecutive faces. In addition, $\kappa$ is coupled with a $\pi$-rotation, i.e., there is an axis of symmetry in each face. In this sense, the solutions have the appearance of a discrete rotating wave with a $2\pi/5$ delay along consecutive faces. There are two such groups because there are two different conjugacy classes of $5$-cycles, $\mathcal{C}_{4}$ and $\mathcal{C}_{5}$, in $A_{5}$.
Similarly, in the solutions of the group \[ {({D_{3}^{p}}\prescript{{\mathbb Z_1^p}}{}\times_{D_{3}}^{{}}D_{3})~,} \] we have $3$ faces with the same dynamics, but with a $2\pi/3$-time shift, i.e., the solutions have the appearance of a discrete rotating wave in $3$ faces with a $2\pi/3$-time shift and each face has an axis of symmetry.
For the solutions of the group \[ (\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{}) \] we have $3$ faces with the same dynamics, with a $2\pi/3$ time shift. Moreover, in these solutions the inversion is coupled with a $\pi$ time shift. Therefore, there is a total of $6$ faces ($3$ faces and their inversions) that have the same dynamics, but with a $2\pi/6$ time shift. In these solutions the faces do not have an axis of symmetry; instead, there are two symmetries given by $\pi$-rotations.
\subsection{Numerical results}
In this section, we present the implementation of the numerical continuation of some families of periodic solutions. In order to compute the families of periodic solutions numerically, we use the Hamiltonian formulation \begin{equation} \dot{x}=J\nabla H(x),\qquad x=(q,p),\label{ODE} \end{equation} where $H(q,p)=\left\vert p\right\vert ^{2}/2+V(q)$ is the Hamiltonian and $J$ is the symplectic matrix \[ J=\left( \begin{array} [c]{cc} 0 & -I\\ I & 0 \end{array} \right) \text{.} \]
Since the Hamiltonian is invariant under the action of the group $\mathbb{R} ^{3}$ acting by translations, of $O(3)$ acting by rotations and of $\varphi\in S^{1}$ acting by time shifts, it satisfies the orthogonality relations \[ \left\langle \nabla H(x),A_{j}(x)\right\rangle =0, \] for $j=1,...,7$, where the $A_{j}$ are the generators of these group actions, \begin{align*} A_{j}(q,p) & =\partial_{\tau}|_{\tau=0}(q+\tau\mathcal{E}_{j},p)=(\mathcal{E} _{j},0),\qquad\mathcal{E}_{j}=(e_{j},...,e_{j}),\\ A_{j+3}(q,p) & =\partial_{\theta}|_{\theta=0}(e^{\theta\mathcal{J}_{j} }q,e^{\theta\mathcal{J}_{j}}p)=(\mathcal{J}_{j}q,\mathcal{J}_{j}p),\qquad \mathcal{J}_{j}=\mathrm{diag}(J_{j},...,J_{j}) \end{align*} for $j=1,2,3$, and \[
A_{7}(q,p)=\partial_{\varphi}|_{\varphi=0}(q,p)(t+\varphi)=J\nabla H\text{.} \]
\begin{remark} Actually, the conserved quantities $G_{j}$ are related to the generator fields $A_{j}$ by \[ A_{j}=J\nabla G_{j}. \] Using the Poisson bracket, the orthogonality relations are equivalent to \[ \{H,G_{j}\}=\left\langle \nabla H,J\nabla G_{j}\right\rangle =\left\langle \nabla H,A_{j}\right\rangle =0\text{.} \] The explicit conserved quantities are $G_{j}=-p\cdot\mathcal{E}_{j}$, $G_{j+3}=p^{T}\mathcal{J}_{j}q$, for $j=1,2,3$, and $G_{7}=H$. \end{remark}
To numerically continue a solution it is necessary to augment the differential equation with Lagrange multipliers $\lambda_{j}\in\mathbb{R}$ for $j=1,\ldots,7$, \begin{equation} \dot{x}=J\nabla H(x)+\sum_{j=1}^{7}\lambda_{j}JA_{j}(x)\text{.}\label{ODE2} \end{equation} The solutions of equation \eqref{ODE2} are solutions of the original equations of motion when the values of the seven parameters are zero. If the $A_{j}(x)$ are linearly independent, a solution $x$ to equation (\ref{ODE2}) is a solution to equation (\ref{ODE}) because \[ 0=\left\langle \dot{x},JA_{i}(x)\right\rangle =\sum_{j=1}^{7}\lambda _{j}\left\langle A_{j},A_{i}\right\rangle \] implies that $\lambda_{j}=0$ for $j=1,\ldots,7$.
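
The seven generator fields $A_{j}$ and the augmented field \eqref{ODE2} can be assembled as in the following sketch; the callable \texttt{ham\_field} is assumed to return $J\nabla H(x)$ (for instance the function from the previous sketch), and $J_{1},J_{2},J_{3}$ are the standard infinitesimal rotations about the coordinate axes.
\begin{verbatim}
import numpy as np

def symplectic_J(dim):
    I, Z = np.eye(dim), np.zeros((dim, dim))
    return np.block([[Z, -I], [I, Z]])

def rotation_generators():
    """Infinitesimal generators of rotations about the x, y, z axes."""
    J1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
    J2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
    J3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)
    return [J1, J2, J3]

def generators(x, n, ham_field):
    """List [A_1(x), ..., A_7(x)] of the generator fields as vectors in R^(6n)."""
    dim = 3 * n
    q, p = x[:dim], x[dim:]
    A = []
    for j in range(3):                      # translations: A_j = (E_j, 0)
        Ej = np.tile(np.eye(3)[j], n)
        A.append(np.concatenate([Ej, np.zeros(dim)]))
    for Jj in rotation_generators():        # rotations: A_{j+3} = (J_j q, J_j p)
        Jblk = np.kron(np.eye(n), Jj)
        A.append(np.concatenate([Jblk @ q, Jblk @ p]))
    A.append(ham_field(x))                  # time shift: A_7 = J grad H
    return A

def augmented_field(x, lam, n, ham_field):
    """Right-hand side of (ODE2): x' = J grad H(x) + sum_j lam_j J A_j(x)."""
    J = symplectic_J(3 * n)
    rhs = ham_field(x)
    for lj, Aj in zip(lam, generators(x, n, ham_field)):
        rhs = rhs + lj * (J @ Aj)
    return rhs
\end{verbatim}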
The period $T=2\pi\lambda$ can be obtained as a parameter in equation \eqref{ODE2} by rescaling time, \[ \dot{x}=TJ\nabla H+\sum_{j=1}^{7}\lambda_{j}JA_{j}\text{.} \] Let $\varphi_{t}(x)$ be the flow of this equation. We can define the time-one map (for the rescaled time) as \[ \varphi_{1}(x;\lambda_{1},...,\lambda_{7},T):V\times\mathbb{R}^{7} \times\mathbb{R}\rightarrow V\text{,} \] where the period $T$ is a parameter. Therefore, a fixed point of $\varphi _{1}(x)$ corresponds to a $T$-periodic solution of the Hamiltonian system.
To numerically continue the fixed points of $\varphi_{1}(x)$ it is necessary to implement Poincar\'{e} sections. For this we define the augmented map \begin{align*} F(q,p,\lambda_{1},...,\lambda_{7};T) & :V\times\mathbb{R}^{7}\times \mathbb{R}\rightarrow V\times\mathbb{R}^{7}\\ F & =\left( x-\varphi_{1}(x),A_{j}(\tilde{x})\cdot\left( x-\tilde {x}\right) \right) . \end{align*} Then a solution of $F=0$ is a $T$-periodic solution of the Hamiltonian system. The restrictions $A_{j}(\tilde{x} )\cdot\left( x-\tilde{x}\right) =0$ for $j=1,...,7$ represent the Poincar\'{e} sections, where $\tilde{x}$ is a previously computed solution in the family of solutions of $F=0$. This map is a local submersion except for bifurcation points, see \cite{MuAl}.
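
A minimal sketch of the time-one map $\varphi_{1}$ and of the augmented map $F$ is given below, with the period $T$ treated as a fixed parameter; \texttt{aug\_field} is assumed to return the rescaled augmented field $TJ\nabla H(x)+\sum_{j}\lambda_{j}JA_{j}(x)$, \texttt{gens} the list of generators $A_{j}$, and \texttt{x\_ref} a previously computed solution $\tilde{x}$ defining the Poincar\'{e} sections.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def time_one_map(x0, lam, T, aug_field):
    """phi_1: integrate the rescaled field from t = 0 to t = 1."""
    sol = solve_ivp(lambda t, x: aug_field(x, lam, T), (0.0, 1.0), x0,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def F(z, T, aug_field, gens, x_ref):
    """F(x, lam) = (x - phi_1(x), A_j(x_ref).(x - x_ref)); zeros are T-periodic orbits."""
    x, lam = z[:-7], z[-7:]
    residual = x - time_one_map(x, lam, T, aug_field)
    sections = [Aj @ (x - x_ref) for Aj in gens(x_ref)]
    return np.concatenate([residual, sections])

# For a fixed period T, a zero of F can be computed, e.g., by a Newton-type solver
# starting from the linearized-system approximation:
#     z = fsolve(F, z0, args=(T, aug_field, gens, x_ref))
\end{verbatim}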
\begin{figure}\label{fig-1}
\end{figure} \begin{figure}\label{fig-2}
\end{figure}
The map $\varphi_{1}(x)$ is computed numerically using a Runge-Kutta integrator. A first solution of $F=0$ is obtained by applying a Newton method to the approximate solution obtained from the linearized Hamiltonian system. The family of periodic solutions is computed numerically using a pseudo-arclength procedure to continue the solutions of $F=0$.
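
As an illustration of the continuation procedure only, the next sketch performs pseudo-arclength predictor-corrector steps on a toy one-parameter problem; the same scheme is applied to the augmented map $F$ above.
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def tangent(G, u, h=1e-6):
    """Unit tangent of the solution curve: null vector of the Jacobian DG(u)."""
    m, n = len(G(u)), len(u)
    Jac = np.empty((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        Jac[:, j] = (G(u + e) - G(u - e)) / (2 * h)
    return np.linalg.svd(Jac)[2][-1]     # right singular vector of the smallest sv

def arclength_step(G, u, t, ds):
    """Predictor u + ds*t, then a Newton correction orthogonal to the tangent."""
    pred = u + ds * t
    aug = lambda v: np.concatenate([G(v), [t @ (v - pred)]])
    u_new = fsolve(aug, pred)
    t_new = tangent(G, u_new)
    if t_new @ t < 0:                    # keep a consistent orientation on the branch
        t_new = -t_new
    return u_new, t_new

# toy example: trace the curve x^2 + mu - 1 = 0 in the (x, mu) plane
G = lambda v: np.array([v[0] ** 2 + v[1] - 1.0])
u = np.array([0.0, 1.0]); t = tangent(G, u)
for _ in range(5):
    u, t = arclength_step(G, u, t, ds=0.2)
\end{verbatim}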
We present the results of our numerical computations in Figures \ref{fig-1} and \ref{fig-2}. The positions of the atoms in space are shown in the right columns. The atoms with the same color have oscillations related by a rotation or inversion in $O(3)$. In addition, atoms with the same color but different texture describe oscillations that are related by the inversion coupled with a phase shift in time. In the left columns of Figures \ref{fig-1} and \ref{fig-2} we illustrate the norms of the atomic positions oscillating in time.
\vskip.3cm \appendix{\huge \bf Appendix}
\section{Equivariant Gradient Degree}\label{sec:equi-degree} For a Euclidean space $V$ we denote by $B(V)$ the open unit ball in $V$, and for $x$, $y\in V:=\mathbb R^n$ we will denote by $x\bullet y$ the standard inner product in $V$.
We assume that $G$ stands for a compact Lie group and that all considered subgroups of $G$ are closed. For a subgroup $H\le G$,
$N\left( H\right) $ stands for the {\it normalizer} of $H$ in $G$, and $W\left( H\right) =N\left( H\right) /H$ denotes the {\it Weyl group} of $H$ in $G$. We denote by $\left( H\right) $ the {\it conjugacy} class of $H$ in $G$ and use the notations $ \Phi\left( G\right) :=\left\{ \left( H\right) :H\;\;\text{is a subgroup of }\;G\right\}$ and $\Phi_{n}\left( G\right) :=\left\{ \left( H\right) \in\Phi\left( G\right) :\text{\textrm{dim\thinspace}}W\left( H\right) =n\right\}$. The set $\Phi\left( G\right) $ has a natural partial order given by: $\left( H\right) \leq\left( K\right) \;\;\Longleftrightarrow\;\;\exists _{g\in G}\;\;gHg^{-1}\le K$.
For a $G$-space $X$ and $x\in X$, we put $ G_{x}:=\left\{ g\in G:\;gx=x\right\}$ to denote the {\it isotropy group} of $x$, $G\left( x\right) :=\left\{ gx:\;g\in G\right\}$ to denote the orbit of $x$, and the conjugacy class $\left( G_{x}\right) :=\left\{ H\subset G:\;\exists_{g\in G}\;\;G_{x} =g^{-1}Hg\right\}$ will be called the orbit type
of $x$, and $\Phi(G;X):=\{(G_x): x\in X\}$ will stand for the set of all the orbit types in $X$. We also put $\Phi_n(G;X):=\Phi(G;X)\cap \Phi_n(G)$.
For a subgroup $H\le G$, we write $X^{H}:=\left\{ x\in X:\;G_{x}\ge H\right\}$ to denote the {\it $H$-fixed point space} of $H$. The orbit space for a $G$-space $X$ will be denoted by $X/G$.
As any compact Lie group admits only countably many non-equivalent real irreducible representations, given a compact Lie group $G $, we will assume that we have a complete list of all real
irreducible representations, denoted $\mathcal{V}_{i}$, $i=0 $, $1$, $\ldots $, which could also be identified by the {\it character list} $\{\chi_i\}$. We refer to \cite{BaKr06} for examples of such lists and the related notation. \vskip.3cm
Any finite-dimensional real $G$-representation $V$ can be decomposed into the direct sum of $G$-invariant subspaces \begin{equation} V=V_{0}\oplus V_{1}\oplus \dots \oplus V_{r}\text{,} \label{eq:Giso} \end{equation} called the $G$\textit{-}{\it isotypical decomposition of }$V$, where each isotypical component $V_{i}$ is \emph{modeled} on the irreducible $G$-representation $\mathcal{V}_{i}$, $i=0,$ $1,$ $\dots ,$ $r$, i.e., $V_{i}$ contains all the irreducible subrepresentations of $V$ which are equivalent to $\mathcal{V}_{i}$. \vskip.3cm Let $V$ be a $G$-representation, $\Omega\subset V$ an open $G$-invariant bounded set and $f:V\to V$ a continuous $G$-equivariant map such that for all $x\in \partial \Omega$ we have $f(x)\not=0$. Then we say that $f$ is an {\it $\Omega$-admissible} $G$-map and we call $(f,\Omega)$ an {\it admissible $G$-pair}. The set of all admissible $G$-pairs in $V$ will be denoted by $\mathcal M^G(V)$. We also put $\mathcal M^G:=\bigcup_{V}\mathcal M^G(V)$ (here $V$ denotes all possible $G$-representations) to denote the set of all admissible $G$-pairs. A map $f:V\to V$ is called a {\it gradient map} if there exists a continuously differentiable $\varphi:V\to \mathbb R$ such that $f=\nabla \varphi$. We denote by $\mathcal M_\nabla^G(V)$ the subset of $\mathcal M^G(V)$ consisting of all gradient maps and we define $\mathcal M_\nabla^G:=\bigcup_{V}\mathcal M_\nabla^G(V)$. In the set $\mathcal M^G(V)$ (resp. $\mathcal M_\nabla^G(V)$) we have the so-called {\it admissible homotopy} (resp. {\it admissible gradient homotopy}) relation: $(f_0,\Omega)$ and $(f_1,\Omega)$ are related if $f_0$ and $f_1$ are homotopic by a homotopy $h:[0,1]\times V\to V$ such that $h_t$ belongs to $\mathcal M^G(V)$ (resp. $\mathcal M_\nabla^G(V)$) for every $t\in [0,1]$.
\vskip.3cm \subsection{Euler and Burnside Rings}
The concept of the {\it Euler ring} was introduced by T. tom Dieck in \cite{tD}. Due to its topological nature, computations of the Euler ring $U(G)$, for a general compact group $G$, may be quite complicated. However, in our case of interest, when $G:=\Gamma\times O(2)$ with $\Gamma$ being a finite group, the Euler ring structure in $U(G)$ can be effectively described by using elementary techniques based on the reduction techniques and the properties of the Euler ring homomorphisms (see \cite{DaKr} for more details).
Let us recall the definition of the Euler ring $U(G)$. As a $\mathbb Z$-module, $U(G)$ is the free $\mathbb Z$-module generated by $\Phi(G)$, i.e. $U\left( G\right) :={\mathbb{Z}}\left[ \Phi\left( G\right) \right] $. The ring multiplication is defined on $U(G)$ on generators $\left( H\right) $, $\left( K\right) \in\Phi\left( G\right) $ by \begin{equation} \left( H\right) \ast\left( K\right) =\sum_{\left( L\right) \in \Phi\left( G\right) }n_{L}\left( L\right) ,\label{eq:Euler-mult} \end{equation} where \begin{equation} n_{L}:=\chi_{c}\left( \left( G/H\times G/K\right) _{L}/N\left( L\right) \right) \label{eq:Euler-coeff} \end{equation} with $\chi_{c}$ the Euler characteristic taken in Alexander-Spanier cohomology with compact support (cf. \cite{Spa}). We refer to \cite{BtD} for more details.
The ${\mathbb{Z}}$-module $A\left( G\right) :={\mathbb{Z}}\left[ \Phi _{0}\left( G\right) \right] $ equipped with the same multiplication as in $U\left( G\right)$ but restricted to generators from $\Phi_{0}\left( G\right) $ is called \emph{Burnside ring}, i.e., \[ \left( H\right) \cdot\left( K\right) =\sum_{\left( L\right) } n_{L}\left( L\right) ,\qquad\left( H\right) ,\,\left( K\right) ,\,\left( L\right) \in\Phi_{0}\left( G\right) , \] where $n_L$ stands for the number of $(L)$ orbits in $G/H\times G/K$, i.e. $n_{L}:=\left( \left( G/H\times G/K\right) _{L}/N\left( L\right) \right) =\left\vert \left( G/H\times G/K\right) _{L}/N\left(L\right)
\right\vert $ (here $|X|$ stands for the number of elements in the set $X$). We have the following recurrence formula {\small \begin{equation} n_{L}=\frac{n\left( L,K\right) \left\vert W\left( K\right) \right\vert n\left( L,\text{ }H\right) \left\vert W\left( H\right) \right\vert -{\displaystyle \sum_{\left( \widetilde L\right) >\left( L\right) }}n\left( L,\widetilde L\right) n_{\widetilde L}\left\vert W\left( \widetilde L\right) \right\vert }{\left\vert W\left( L\right) \right\vert }, \label{eq:rec-coef} \end{equation}} where \[ n(L,K)=\left\vert \frac{N(L,K)}{N(K)}\right\vert ,\quad N(L,K):=\{g\in G:gLg^{-1}\subset K\}, \] and $\left( H\right)$, $\left( K\right)$, $( L)$, $( \widetilde L) $ are taken from $\Phi_{0}\left( G\right) $.
Clearly, the structure of the Burnside ring $A(G)$ is significantly simpler and can be effectively computed. It is also possible to implement the G.A.P. routines in computer programs evaluating Burnside rings products. Notice that $A\left( G\right) $ is a ${\mathbb{Z}}$-submodule of $U\left( G\right) $, but not a subring. However (see \cite{BKR}), the projection $\pi_{0}:U\left( G\right) \rightarrow A\left( G\right) $ defined on generators $\left( H\right) \in\Phi\left( G\right) $ by \begin{equation}\label{eq:pi0} \pi_{0}\left( \left( H\right) \right) = \begin{cases} \left( H\right) & \text{ if }\;\left( H\right) \in\Phi_{0}\left( G\right) ,\\ 0 & \text{ otherwise,} \end{cases} \end{equation} is a ring homomorphism, i.e., \[ \pi_{0}\left( \left( H\right) \ast\left( K\right) \right) =\pi _{0}\left( \left( H\right) \right) \cdot\pi_{0}\left( \left( K\right) \right) ,\qquad\left( H\right) ,\,\left( K\right) \in\Phi\left( G\right) , \] where `$\cdot$' denotes the multiplication in the Burnside ring $A(G)$. The homomorphism $\pi_0$ allows to identify the Burnside ring $A\left( G\right) $ as a part of the Euler ring $U\left( G\right)$ and with the help of additional algorithms, the Euler ring structure for $G=\Gamma\times O(2)$ can be completely computed by elementary means (cf. \cite{DaKr}).
\subsection{Equivariant Gradient Degree}
The existence and properties of the so-called $G$-equivariant gradient degree are presented in the following result from \cite{Geba}: \begin{theorem} \label{thm:Ggrad-properties} There exists a unique map $\nabla_{G}\text{\rm -deg\,}:\mathcal{M}_{\nabla}^{G}\rightarrow U(G)$, which assigns to every $(\nabla\varphi,\Omega)\in\mathcal{M}_{\nabla}^{G}$ an element $\nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega)\in U(G)$, called the $G$\textit{-gradient degree} of $\nabla\varphi$ on $\Omega$, \begin{equation} \nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega)=\sum_{(H)\in\Phi(G)}n_{H}\,(H)=n_{H_{1}}(H_{1})+\dots+n_{H_{m}}(H_{m}), \label{eq:grad-deg} \end{equation} satisfying the following properties:
\begin{description} \item \textbf{(Existence)} If $\nabla_{G}\text{\rm -deg\,} (\nabla\varphi,\Omega)\not =0$, i.e.\ there is a non-zero coefficient $n_{H_{i}}$ in \eqref{eq:grad-deg}, then there exists $u_{0}\in\Omega$ such that $\nabla\varphi(u_{0})=0$ and $(G_{u_{0}})\geq(H_{i})$.
\item \textbf{(Additivity)} Let $\Omega_{1}$ and $\Omega_{2}$ be two disjoint open $G$-invariant subsets of $\Omega$ such that $(\nabla\varphi)^{-1} (0)\cap\Omega\subset\Omega_{1}\cup\Omega_{2}.$ Then, $\nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega)=\nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega_{1})+\nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega_{2}).$
\item \textbf{(Homotopy)} If $\nabla_{x}\psi:[0,1]\times V\rightarrow V$ is a $G$-gradient $\Omega$-admissible homotopy, then \[ \nabla_{G}\text{\rm -deg\,}(\nabla_{x}\psi,\Omega )=\text{\textit{constant}}. \]
\item \textbf{(Normalization)} Let $\varphi\in C_{G}^{2}(V,\mathbb{R})$ be a special $\Omega$-Morse function (cf. \cite{Geba}) such that $(\nabla\varphi)^{-1}(0)\cap \Omega=G(u_{0})$ and $G_{u_{0}}=H$. Then, \[ \nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega )=(-1)^{\mathrm{m}^{-}(\nabla^{2}\varphi(u_{0}))}\cdot(H), \] where \textquotedblleft$\mathrm{m}^{-}(\cdot)$\textquotedblright\ stands for the total dimension of all the eigenspaces corresponding to negative eigenvalues of a (symmetric) matrix.
\item \textbf{(Multiplicativity)} For all $(\nabla\varphi_{1},\Omega_{1})$, $(\nabla\varphi_{2},\Omega_{2})\in\mathcal{M}_{\nabla}^{G}$, \[ \nabla_{G}\text{\rm -deg\,}(\nabla\varphi_{1}\times\nabla \varphi_{2},\Omega_{1}\times\Omega_{2})=\nabla_{G}\text{\rm -deg\,}(\nabla\varphi_{1},\Omega_{1})\ast\nabla_{G}\text{\textrm{-deg\thinspace} }(\nabla\varphi_{2},\Omega_{2}) \] where the multiplication `$\ast$' is taken in the Euler ring $U(G)$.
\item \textbf{(Functoriality)}(cf. \cite{DaKr}) Suppose $G_0\le G$ is a subgroup of a compact Lie group $G$ such that $\text{\rm dim\,} G_0=\text{\rm dim\,} G$. Then any gradient admissible $G$-pair $(\nabla \varphi,\Omega)$ is also an admissible $G_0$-pair and we have \[ \Psi\left[ \nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\Omega) \right] = \nabla_{G_0}\text{\rm -deg\,}(\nabla\varphi,\Omega), \] where $\Psi:U(G)\to U(G_0)$ is the Euler ring homomorphism induced by the inclusion $\psi :G_0 \hookrightarrow G$ (see \cite{BKR}).
\end{description} \end{theorem}
Using a standard finite-dimensional approximation scheme, the $G$-equivariant gradient degree can be extended to admissible $G$-pairs in Hilbert $G$-representation. To be more precise, consider a Hilbert $G$-representation $\mathscr H$, a $G$-equivariant completely continuous gradient field $\nabla f:\mathscr H\to \mathscr H$ and an open bounded $G$-invariant set $\Omega\subset \mathscr H$, such that $\nabla f$ is $\Omega$-admissible. Then the pair $(\nabla f,\Omega)$ is called a {\it $G$-admissible pair} in $\mathscr H$. This degree admits the same properties as those listed in Theorem \ref{thm:Ggrad-properties} (cf. \cite{survey,DaKr}).
One of the most important properties of $G$-equivariant gradient degree $\nabla_G\text{-deg}(\nabla f,\Omega)$ is that it provides a full equivariant topological classification of the solution set for $\nabla f(x)=0$ and $x\in \Omega$. More precisely, in addition to the properties listed in Theorem \ref{thm:Ggrad-properties}, the equivariant gradient degree has also the so-called {\it Universality Property}, which says that two $B(V)$-admissible $G$-equivariant gradient maps $\nabla f_1$, $\nabla f_2:V\to V$ have the same gradient degrees if and only if they are $B(V)$-admissibly gradient homotopic.
Suppose that \[\nabla_G\text{-deg}(\nabla f,\Omega)=n_1(H_1)+n_2(H_2)+\dots +n_k(H_k)+\dots +n_m(H_m), \] and $n_k\not=0$. Then, by the existence property, there exists a solution $x_o\in \Omega$ of $\nabla f(x)=0$ with $(G_{x_o})\geq (H_k)$; put $H:=G_{x_o}$. In addition, if $(H_k)$ is a maximal orbit type in $\Omega$, then for any $\Omega$-admissible continuous deformation $\{\nabla f_t\}_{t\in [0,1]}$ (in the class of gradient maps) we obtain a continuum of solutions in $\Omega$ to $\nabla f_t(x)=0$ that starts at $x_o$ for $t=0$ and ends at a solution $x_1$ for $t=1$ with $G_{x_1}=H$. This property is called the {\it Continuation Property}.
\subsection{Degree on the Slice} Let $\mathscr H$ be a Hilbert $G$-representation. Suppose that the orbit $G(u_{o})$ of $u_{o}\in\mathscr H$ is contained in a finite-dimensional $G$-invariant subspace, so the $G$-action on that subspace is smooth and $G(u_{o})$ is a smooth submanifold of $\mathscr H$. In such a case we call the orbit $G(u_o)$ {\it finite-dimensional}. Denote by $S_{o}\subset\mathscr H$ the slice to the orbit $G(u_{o})$ at $u_{o}$. Denote by $V_{o}:=\tau_{u_{o}}G(u_{o})$ the tangent space to $G(u_{o})$ at $u_{o}$. Then $S_{o}=V_{o}^{\perp}$ and $S_{o}$ is a smooth Hilbert $G_{u_{o}}$-representation.
Then we have (cf. \cite{BeKr})
\begin{theorem} {\smc(Slice Principle)} \label{thm:SCP} Let $\mathscr{H}$ be a Hilbert $G$-representation, $\Omega$ an open $G$-invariant subset in $\mathscr H$, and $\varphi:\Omega\rightarrow\mathbb{R}$ a continuously differentiable $G$-invariant functional such that $\nabla \varphi$ is a completely continuous field. Suppose that $u_{o}\in\Omega$ and $G(u_{o})$ is a finite-dimensional isolated critical orbit of $\varphi$ with $S_{o}$ being the slice to the orbit $G(u_{o})$ at $u_o$, and $\mathcal{U}$ an isolated tubular neighborhood of $G(u_{o})$. Put $\varphi_{o}:S_{o}\rightarrow \mathbb{R}$ by $\varphi_{o}(v):=\varphi(u_{o}+v)$, $v\in S_{o}$. Then \begin{equation} \nabla_{G}\text{\rm -deg\,}(\nabla\varphi,\mathcal{U})=\Theta (\nabla_{G_{u_{o}}}\text{\rm -deg\,}(\nabla\varphi_{o},\mathcal{U}\cap S_{o})), \label{eq:SDP} \end{equation} where $\Theta:U(G_{u_{o}})\rightarrow U(G)$ is the homomorphism defined on generators by $\Theta((H))=(H)$, $(H)\in\Phi(G_{u_{o}})$. \end{theorem}
\subsection{$G$-Equivariant Gradient Degree of Linear Maps}
Let us establish a computational formula to evaluate the $G$-equivariant degree $\nabla_{G}\text{-deg\thinspace}(\mathscr A,B(V))$, where\textrm{ }$\mathscr A:V\rightarrow V$ is a symmetric $G $-equivariant linear isomorphism and $V$ is an orthogonal $G$-representation, i.e., $\mathscr A=\nabla\varphi$ for $\varphi(v)=\frac{1}{2}(\mathscr Av\bullet v)$, $v\in V$. Consider the $G$-isotypical decomposition \eqref{eq:Giso} of $V$ and put \[
\mathscr A_{i}:=\mathscr A|_{V_{i}}:V_{i}\rightarrow V_{i},\quad i=0,1,\dots,r. \] Then, by the multiplicativity property, \begin{equation} \nabla_{G}\mbox{-deg}(\mathscr A,B(V))=\prod_{i=0}^{r}\nabla_{G }\mbox{-deg}(\mathscr A_{i},B(V_{i})) \label{eq:deg-Lin-decoGrad} \end{equation} Take $\xi\in\sigma_{-}(\mathscr A)$, where $\sigma_{-}(\mathscr A)$ stands for the negative spectrum of $\mathscr A$, and consider the corresponding eigenspace $E(\xi):=\ker(\mathscr A-\xi\mbox{Id})$. Define the numbers $m_{i}(\xi)$ by \begin{equation} m_{i}(\xi):=\dim\left( E(\xi)\cap V_{i}\right) /\dim\mathcal{V}_{i}, \label{eq:m_j(mu)-gra} \end{equation} and the so-called {\it gradient $G$-equivariant basic degrees} by \begin{equation} \nabla\text{-deg}_{\mathcal{V}_{i}}:=\nabla_{G} \mbox{-deg}(-\mbox{Id\,},B(\mathcal{V}_{i})), \quad i=0,1,2,\dots \label{eq:basicGrad-deg0} \end{equation} Then \begin{equation} \nabla_{G}\text{\textrm{-deg}}(\mathscr A,B(V))=\prod_{\xi\in\sigma _{-}(\mathscr A)}\prod_{i=0}^{r}\left( {\nabla\mbox{\rm -deg}_{\mathcal{V}_{i}} }\right) ^{m_{i}(\xi)}. \label{eq:lin-GdegGrad} \end{equation}
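
The spectral data entering \eqref{eq:lin-GdegGrad} can be extracted numerically as in the following sketch, which assumes that the orthogonal projections onto the isotypical components $V_{i}$ and the dimensions of the irreducible representations $\mathcal{V}_{i}$ are available; the resulting multiplicities $m_{i}(\xi)$ are then the exponents of the basic degrees in \eqref{eq:lin-GdegGrad}.
\begin{verbatim}
import numpy as np

def negative_spectrum_multiplicities(A, projections, irr_dims, tol=1e-8):
    """For a symmetric equivariant isomorphism A, list the negative eigenvalues xi
    and the multiplicities m_i(xi) = dim(E(xi) cap V_i) / dim(V_i).
    `projections` are the orthogonal projections onto the isotypical components V_i
    (A commutes with them, so E(xi) splits along the V_i) and `irr_dims` are the
    dimensions of the corresponding irreducible representations."""
    eigvals, eigvecs = np.linalg.eigh(A)
    result = {}
    for xi in np.unique(np.round(eigvals[eigvals < -tol], 8)):
        E = eigvecs[:, np.abs(eigvals - xi) < 1e-6]        # basis of E(xi)
        mult = [int(round(np.linalg.matrix_rank(P @ E))) // d
                for P, d in zip(projections, irr_dims)]
        result[float(xi)] = mult
    return result

# The gradient degree of A on B(V) is then the product, over the negative
# eigenvalues xi and the indices i, of the basic degrees raised to m_i(xi),
# the product being taken in the Euler ring U(G).
\end{verbatim}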
\subsection{$\Gamma\times O(2)$-Equivariant Basic Degrees}\label{sec:grad-techniques}
In order to be able to effectively use the formula \eqref{eq:deg-Lin-decoGrad}, it is important to establish the exact values of the gradient $G$-equivariant basic degrees. A direct usage of topological definition (see \cite{Geba}) of the gradient $G$-equivariant degree to compute the basic degrees $\nabla\text{-deg}_{\mathcal V_i}$ (given by \eqref{eq:basicGrad-deg0}), may be very complicated for infinite compact Lie groups $G$. However, in the case of the group $G:=\Gamma\times O(2)$ ($\Gamma$ being a finite group), we have effective reduction techniques (see \cite{DaKr,RR}), using the homomorphism $\pi_0$ and the Euler ring homomorphism $\Psi: U(\Gamma\times O(2))\to U(\Gamma\times S^1)$, which allow to establish the exact values of the gradient $\Gamma\times O(2)$-equivariant basic degrees.
To be more precise, let us recall the {\it $G$-equivariant Brouwer degree}\break
$G\text{\rm -deg}( f,\Omega)\in A(G)$, which is defined for admissible $G$-pairs $(f,\Omega)\in \mathcal M^G(V)$ and has similar existence, additivity, homotopy and multiplicativity properties as the gradient degree. It can be computed by applying the following recurrence formula to the usual Brouwer degrees of maps $f^H:V^H\to V^H$, $(H)\in \Phi_0(G;V)$, i.e., \[ G\text{\rm -deg}(f,\Omega)=\sum_{(H)\in \Phi_0(G;V)} n_H(H), \] and \begin{equation}\label{eq:rec-Brouwer}
n_H=\frac{\deg(f^H,\Omega^H)-\sum_{(L)>(H)} n_L \,n(H,L) \,|W(L)|}{|W(H)|}. \end{equation}
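
The recurrence \eqref{eq:rec-Brouwer} can be evaluated over the finite poset of orbit types as in the following sketch; all group-theoretic data (the numbers $n(H,L)$, the orders $|W(H)|$ and the Brouwer degrees $\deg(f^{H},\Omega^{H})$) are hypothetical placeholders that in practice are produced from the group, e.g.\ by the G.A.P.\ routines mentioned below.
\begin{verbatim}
from fractions import Fraction

def equivariant_brouwer_degree(orbit_types, greater, n, weyl, brouwer_deg):
    """Return {(H): n_H} from the recurrence, processing larger orbit types first.
    greater[H] lists the orbit types (L) > (H); n(H, L), weyl[H] = |W(H)| and
    brouwer_deg[H] = deg(f^H, Omega^H) must be supplied by the user."""
    coeff = {}
    # if (L) > (H) then greater[L] is strictly smaller, so this order is admissible
    for H in sorted(orbit_types, key=lambda K: len(greater[K])):
        s = sum(coeff[L] * n(H, L) * weyl[L] for L in greater[H])
        coeff[H] = Fraction(brouwer_deg[H] - s, weyl[H])
    return coeff

# toy poset with two orbit types (K) > (H)
orbit_types = ["K", "H"]
greater = {"K": [], "H": ["K"]}
weyl = {"K": 1, "H": 2}
deg = {"K": -1, "H": 1}
print(equivariant_brouwer_degree(orbit_types, greater, lambda H, L: 1, weyl, deg))
# {'K': Fraction(-1, 1), 'H': Fraction(1, 1)}
\end{verbatim}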
In addition, for any gradient admissible $G$-pair $(\nabla\varphi,\Omega)$, the $G$-equivariant Brouwer degree $G\text{\rm -deg}(\nabla \varphi,\Omega)\in A(G)$ is well-defined and we have the following relation (see \cite{DaKr}) \begin{equation}\label{eq:degs} \pi_0\left( \nabla_G\text{-deg}(\nabla \varphi, \Omega) \right)=G\text{\rm -deg}(\nabla \varphi,\Omega). \end{equation} Moreover, the {\it Brouwer $G$-equivariant basic degrees} \[ \deg_{\mathcal V_i}:=G\text{\rm -deg}(-\text{\rm Id\,}, B(\mathcal V_i)), \quad i=0,1,2,3,\dots. \] satisfy \begin{equation}\label{eq:basic-pi0} \deg_{\mathcal V_i}=\pi_0\left[ \nabla\text{-deg}_{\mathcal V_i} \right], \quad i=0,1,2,3\dots \end{equation} Therefore, by \eqref{eq:basic-pi0}, \[ \nabla\text{-deg}_{\mathcal V_i}= \deg_{\mathcal V_i}+\sum_{(H)\in \Phi_1(G;\mathcal V_i)} x_H (H),\] with the integers $x_H$ that need to be determined by other means.
Since the gradient basic degrees $\widetilde{\deg}_{\mathcal V_i}$ for the group $\Gamma\times S^1$ are well-known, one can apply the Euler ring homomorphism $\Psi:U(\Gamma\times O(2))\to U(\Gamma\times S^1)$ to determine these coefficients (see \cite{DaKr}) by using the relation \[ \widetilde{\deg}_{\mathcal V_i}=\Psi\left[ \deg_{\mathcal V_i} \right] +\sum_{(H)\in \Phi_1(G;\mathcal V_i)} x_H\Psi (H), \] (here we assume that $S^1$ acts nontrivially on $\mathcal V_i$). The Euler ring homomorphism $\Psi: U(\Gamma\times O(2))\to U(\Gamma \times S^1)$ is defined on the generators by \begin{align} \Psi(H)&= \begin{cases} 2(K) &\text{ if }\;\; K=H \text{ and } K\sim K' \text{ in } \Gamma\times SO(2),\\ (K)+(K') &\text{ if }\;\; K=H \text{ and } K\not\sim K' \text{ in } \Gamma\times SO(2),\\ (K) &\text{ if }\;\; K\not=H, \end{cases}\label{eq:Euler-hom} \end{align} where $K:=H\cap \Gamma\times SO(2)$, $K':=\kappa H\kappa \cap \Gamma \times SO(2)$.
The formula \eqref{eq:rec-Brouwer} allows the usage of computational programs based on G.A.P. platform to obtain exact symbolic evaluation of the $G$-equivariant Brouwer degree of linear isomorphisms for a large class of classical groups and their products (see \cite{Pin}). \vskip.3cm
\subsection{Gradient $I\times O(2)$-Equivariant Basic Degrees}\label{sec:basic}
We used GAP programming (all the GAP routines are available for download at the website listed in \cite{Pin}) to classify all the conjugacy classes of closed subgroups in $G:=I\times O(2)$. We also computed the following basic gradient degrees (corresponding to the irreducible $G$-representations associated to the characters listed in Table \ref{tab:I}), where we use red color to indicate the maximal orbit types (in the class of $2\pi$-periodic functions $x:\mathbb R\to V$ with Fourier mode $1$).
{\small \begin{align*} \eqdeg{\nabla}_{\mathcal{V}_{1,1}}=\; & -{\color{red}(\amal{{A_5^p}}{D_{1}}{}{}{})} +(\amal{{A_5^p}}{O(2)}{}{}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{2,1}}=\; & -{\color{red}(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3^p}}{})} -{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})} +2(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1^p}}{})-{\color{red} (\amal{{A_4^p}}{D_{1}}{}{}{})}\\ & -{\color{red} (\amal{{D_3^p}}{D_{1}}{}{}{})}+2(\amal{{\mathbb Z_3^p}}{D_{1}}{}{}{})+2(\amal{{\mathbb Z_2^p}}{D_{1}}{}{}{})-2(\amal{{\mathbb Z_1^p}}{D_{1}}{}{}{})\\ & -{\color{red}(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{1}})} -{\color{red}(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{2}})} -{\color{red}(\amal{{D_3^p}}{D_{3}}{D_{3}}{{\mathbb Z_1^p}}{})} +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_1^p}}{})\\ & +(\amal{{D_3^p}}{D_{1}}{D_{1}}{{\mathbb Z_3^p}}{})+(\amal{{V_4^p}}{D_{1}}{D_{1}}{{\mathbb Z_2^p}}{})+(\amal{{A_5^p}}{O(2)}{}{}{})\\ & -(\amal{\mathbb Z_3^p}{\mathbb Z_1}{}{}{})-(\amal{\mathbb Z_2^p}{\mathbb Z_1}{}{}{})+(\amal{\mathbb Z_1^p}{\mathbb Z_1}{}{}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{3,1}}=\; & -{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})} +(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1^p}}{})-{\color{red} (\amal{{D_5^p}}{D_{1}}{}{}{})}-{\color{red}(\amal{{D_3^p}}{D_{1}}{}{}{})}\\ & +2(\amal{{\mathbb Z_2^p}}{D_{1}}{}{}{})-(\amal{{\mathbb Z_1^p}}{D_{1}}{}{}{})-{\color{red}(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{1}})} -{\color{red}(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{2}})}\\ & +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_1^p}}{})+(\amal{{\mathbb Z_2^p}}{D_{1}}{D_{1}}{{\mathbb Z_1^p}}{})+(\amal{{A_5^p}}{O(2)}{}{}{})\\ & -{\color{red} (\amal{A_4^p}{\mathbb Z_{3}}{\mathbb Z_3}{V_4^p}{})}-(\amal{\mathbb Z_2^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_1^p}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{4,1}}=\; & -{\color{red}(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_5^p}}{})} -{\color{red}(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3^p}}{})} -{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})} +3(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1^p}}{})\\ & -(\amal{{\mathbb Z_1^p}}{D_{1}}{}{}{})-{\color{red}(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{1}})} -{\color{red}(\amal{{D_3^p}}{D_{3}}{D_{3}}{{\mathbb Z_1^p}}{})} +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_1^p}}{})\\ & +(\amal{{\mathbb Z_2^p}}{D_{1}}{D_{1}}{{\mathbb Z_1^p}}{})+(\amal{{A_5^p}}{O(2)}{}{}{}) -(\amal{\mathbb Z_2^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_1^p}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{5,1}}=\; & -{\color{red}(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_5^p}}{})} -{\color{red}(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3^p}}{})} -{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^p}}{})} +3(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1^p}}{})\\ & -(\amal{{\mathbb Z_1^p}}{D_{1}}{}{}{})-{\color{red}(\amal{{D_5^p}}{D_{5}}{D_{5}}{{\mathbb Z_1^p}}{{2}})} -{\color{red}(\amal{{D_3^p}}{D_{3}}{D_{3}}{{\mathbb Z_1^p}}{})} +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_1^p}}{})\\ & +(\amal{{\mathbb Z_2^p}}{D_{1}}{D_{1}}{{\mathbb Z_1^p}}{})+(\amal{{A_5^p}}{O(2)}{}{}{}) -(\amal{\mathbb Z_2^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_1^p}{})\\ \eqdeg{\nabla}_{\mathcal{V}_{-1,1}}=\; & -{\color{red}(\amal{{A_5^p}}{D_{2}}{\mathbb Z_{2}}{{A_5}}{})} +(\amal{{A_5^p}}{O(2)}{}{}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{-2,1}}=\; & -{\color{red} (\amal{{A_4^p}}{D_{2}}{\mathbb 
Z_{2}}{{A_4}}{})}-{\color{red}(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3^z}}{})} -{\color{red} (\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3}}{})}-{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})} \\ & +2(\amal{{\mathbb Z_3^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_3}}{})+2(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^z}}{})+2(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2}}{})-2(\amal{{\mathbb Z_1^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1}}{})\\ & -{\color{red}(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{1}})} -{\color{red}(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{2}})} -{\color{red}(\amal{{D_3^p}}{D_{6}}{D_{6}}{{\mathbb Z_1}}{})} +(\amal{{D_3^p}}{D_{2}}{D_{2}}{{\mathbb Z_3}}{{\mathbb Z_3^p}})\\ & +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_2^z}}{{\mathbb Z_2^p}})+(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_2}}{{\mathbb Z_2^p}})+(\amal{{A_5^p}}{O(2)}{}{}{}) +(\amal{\mathbb Z_1^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_1}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{-3,1}}=\; & -{\color{red} (\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{D_5}}{})}-{\color{red} (\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3}}{})}-{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})} +(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^z}}{})\\ & +2(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2}}{})-(\amal{{\mathbb Z_1^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1}}{})-{\color{red}(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{1}})} -{\color{red}(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{2}})}\\ & +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_2^z}}{{\mathbb Z_2^p}})+(\amal{{\mathbb Z_2^p}}{D_{2}}{D_{2}}{{\mathbb Z_1}}{{\mathbb Z_1^p}})+(\amal{{A_5^p}}{O(2)}{}{}{})\\ & -{\color{red}(\amal{A_4^p}{\mathbb Z_{6}}{\mathbb Z_6}{V_4}{})} -(\amal{\mathbb Z_2^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_2^z}{}), \end{align*}
\begin{align*} \eqdeg{\nabla}_{\mathcal{V}_{-4,1}}=\; & -{\color{red}(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{D_5^z}}{})} -{\color{red}(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3^z}}{})} -{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})} +3(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^z}}{})\\ & -(\amal{{\mathbb Z_1^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1}}{})-{\color{red}(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{1}})} -{\color{red}(\amal{{D_3^p}}{D_{6}}{D_{6}}{{\mathbb Z_1}}{})} +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_2^z}}{{\mathbb Z_2^p}})\\ & +(\amal{{\mathbb Z_2^p}}{D_{2}}{D_{2}}{{\mathbb Z_1}}{{\mathbb Z_1^p}})+(\amal{{A_5^p}}{O(2)}{}{}{}) -(\amal{\mathbb Z_2^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_2^z}{}),\\ \eqdeg{\nabla}_{\mathcal{V}_{-5,1}}=\; & -{\color{red}(\amal{{D_5^p}}{D_{2}}{\mathbb Z_{2}}{{D_5^z}}{})} -{\color{red}(\amal{{D_3^p}}{D_{2}}{\mathbb Z_{2}}{{D_3^z}}{})} -{\color{red}(\amal{{V_4^p}}{D_{2}}{\mathbb Z_{2}}{{V_4^z}}{})} +3(\amal{{\mathbb Z_2^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_2^z}}{})\\ & -(\amal{{\mathbb Z_1^p}}{D_{2}}{\mathbb Z_{2}}{{\mathbb Z_1}}{})-{\color{red}(\amal{{D_5^p}}{D_{10}}{D_{10}}{{\mathbb Z_1}}{{2}})} -{\color{red}(\amal{{D_3^p}}{D_{6}}{D_{6}}{{\mathbb Z_1}}{})} +(\amal{{V_4^p}}{D_{2}}{D_{2}}{{\mathbb Z_2^z}}{{\mathbb Z_2^p}})\\ & +(\amal{{\mathbb Z_2^p}}{D_{2}}{D_{2}}{{\mathbb Z_1}}{{\mathbb Z_1^p}})+(\amal{{A_5^p}}{O(2)}{}{}{}) -(\amal{\mathbb Z_2^p}{\mathbb Z_{2}}{\mathbb Z_2}{\mathbb Z_2^z}{}), \end{align*} }
Let us point out that the computational algorithms for the basic degrees truncated to $A(I\times O(2))$ were established in \cite{DaKr}, and are now being effectively implemented into computer software for the equivariant gradient degree. Most of the maximal orbit types of periodic vibrations have finite Weyl groups, i.e., they appear in the basic gradient degrees truncated to $A(I\times O(2))$. Nevertheless, we have detected in the basic gradient degrees of the representations $\mathcal{V}_{\pm3,1}$ a maximal group with a Weyl group of dimension one.
\noindent\textbf{Acknowledgement.} C. Garc\'{\i}a was partially supported by PAPIIT-UNAM through grant IA105217. W. Krawcewicz acknowledges partial support from the National Science Foundation through grant DMS-1413223 and from the National Science Foundation of China through grant no. 11871171. \vskip.3cm
\end{document}
\begin{document}
\setcounter{page}{1} \title[Controlled Integral Frames for Hilbert $C^{\ast}$-Modules]{Controlled Integral Frames for Hilbert $C^{\ast}$-Modules}
\author[Hatim LABRIGUI$^{*}$ and Samir KABBAJ]{Hatim LABRIGUI$^1$$^{\ast}$ \MakeLowercase{and} Samir KABBAJ$^1$}
\address{$^{1}$Department of Mathematics, Ibn Tofail University, B.P. 133, Kenitra, Morocco} \email{\textcolor[rgb]{0.00,0.00,0.84}{ [email protected]; [email protected]}}
\subjclass[2010]{42C15,41A58}
\keywords{Integral Frames, Integral $\ast$-frame, Controlled Integral Frames, controlled Integral $\ast$-frame, $C^{\ast}$-algebra, Hilbert $\mathcal{A}$-modules.\\ \indent \\ \indent $^{*}$ Corresponding author} \maketitle \begin{abstract} The notion of controlled frames for Hilbert spaces was introduced by Balazs, Antoine and Grybos to improve the numerical efficiency of iterative algorithms for inverting the frame operator. The theory of controlled frames has developed considerably in recent years and has been extended from Hilbert spaces to Hilbert $C^{\ast}$-modules. In this paper we introduce and study the extension of this notion to integral frames for Hilbert $C^{\ast}$-modules, and we give some characterizations of integral frames in Hilbert $C^{\ast}$-modules. \end{abstract} \section{Introduction and preliminaries} The concept of frames in Hilbert spaces was introduced by Duffin and Schaeffer \cite{Duf} in 1952 to study some deep problems in nonharmonic Fourier series. After the fundamental paper \cite{13} by Daubechies, Grossman and Meyer, frame theory began to be widely used, particularly in the more specialized context of wavelet frames and Gabor frames \cite{Gab}.
Hilbert $C^{\ast}$-modules arose as a generalization of the notion of a Hilbert space. The basic idea was to consider modules over $C^{\ast}$-algebras instead of linear spaces and to allow the inner product to take values in the $C^{\ast}$-algebra \cite{28}. Continuous frames were defined by Ali, Antoine and Gazeau \cite{STAJP}; Gabardo and Han in \cite{14} referred to these kinds of frames as frames associated with measurable spaces. For more details, the reader can refer to \cite{MR1}, \cite{MR2} and \cite{ARAN}.\\ The goal of this article is to introduce and study the concept of controlled integral frames for Hilbert $C^{\ast}$-modules, and to give some characterizations of integral frames in Hilbert $C^{\ast}$-modules.
In the following we briefly recall the definitions and basic properties of $C^{\ast}$-algebras and Hilbert $\mathcal{A}$-modules. Our references for $C^{\ast}$-algebras are \cite{Dav,Con}. For a $C^{\ast}$-algebra $\mathcal{A}$, if $a\in\mathcal{A}$ is positive we write $a\geq 0$, and $\mathcal{A}^{+}$ denotes the set of positive elements of $\mathcal{A}$. \begin{definition}\cite{Con}.
Let $\mathcal{A}$ be a unital $C^{\ast}$-algebra and $\mathcal{H}$ be a left $\mathcal{A}$-module, such that the linear structures of $\mathcal{A}$ and $\mathcal{H}$ are compatible. $\mathcal{H}$ is a pre-Hilbert $\mathcal{A}$-module if $\mathcal{H}$ is equipped with an $\mathcal{A}$-valued inner product $\langle.,.\rangle_{\mathcal{A}} :\mathcal{H}\times\mathcal{H}\rightarrow\mathcal{A}$ which is sesquilinear, positive definite and respects the module action. In other words,
\begin{itemize}
\item [(i)] $ \langle x,x\rangle_{\mathcal{A}}\geq0 $, for all $ x\in\mathcal{H} $, and $ \langle x,x\rangle_{\mathcal{A}}=0$ if and only if $x=0$.
\item [(ii)] $\langle ax+y,z\rangle_{\mathcal{A}}=a\langle x,z\rangle_{\mathcal{A}}+\langle y,z\rangle_{\mathcal{A}},$ for all $a\in\mathcal{A}$ and $x,y,z\in\mathcal{H}$.
\item[(iii)] $ \langle x,y\rangle_{\mathcal{A}}=\langle y,x\rangle_{\mathcal{A}}^{\ast} $, for all $x,y\in\mathcal{H}$.
\end{itemize}
For $x\in\mathcal{H}, $ we define $||x||=||\langle x,x\rangle_{\mathcal{A}}||^{\frac{1}{2}}$. If $\mathcal{H}$ is complete with $||.||$, it is called a Hilbert $\mathcal{A}$-module or a Hilbert $C^{\ast}$-module over $\mathcal{A}$.\\
For every $a$ in $C^{\ast}$-algebra $\mathcal{A}$, we have $|a|=(a^{\ast}a)^{\frac{1}{2}}$ and the $\mathcal{A}$-valued norm on $\mathcal{H}$ is defined by $|x|=\langle x, x\rangle_{\mathcal{A}}^{\frac{1}{2}}$, for all $x\in\mathcal{H}$.
Let $\mathcal{H}$ and $\mathcal{K}$ be two Hilbert $\mathcal{A}$-modules, a map $T:\mathcal{H}\rightarrow\mathcal{K}$ is said to be adjointable if there exists a map $T^{\ast}:\mathcal{K}\rightarrow\mathcal{H}$ such that $\langle Tx,y\rangle_{\mathcal{A}}=\langle x,T^{\ast}y\rangle_{\mathcal{A}}$ for all $x\in\mathcal{H}$ and $y\in\mathcal{K}$.
We reserve the notation $End_{\mathcal{A}}^{\ast}(\mathcal{H},\mathcal{K})$ for the set of all adjointable operators from $\mathcal{H}$ to $\mathcal{K}$ and $End_{\mathcal{A}}^{\ast}(\mathcal{H},\mathcal{H})$ is abbreviated to $End_{\mathcal{A}}^{\ast}(\mathcal{H})$.
\end{definition}
The following lemmas will be used to prove our main results. \begin{lemma} \label{l1} \cite{Pas}.
Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module. If $T\in End_{\mathcal{A}}^{\ast}(\mathcal{H})$, then $$\langle Tx,Tx\rangle_{\mathcal{A}}\leq\|T\|^{2}\langle x,x\rangle_{\mathcal{A}}, \qquad x\in\mathcal{H}.$$ \end{lemma}
\begin{lemma} \label{l2} \cite{Ara}.
Let $\mathcal{H}$ and $\mathcal{K}$ be two Hilbert $\mathcal{A}$-modules and $T\in End_{\mathcal{A}}^{\ast}(\mathcal{H},\mathcal{K})$. Then the following statements are equivalent:
\begin{itemize}
\item [(i)] $T$ is surjective.
\item [(ii)] $T^{\ast}$ is bounded below with respect to the norm, i.e., there is $m>0$ such that $ m\|x\|\leq \|T^{\ast}x\|$, for all $x\in\mathcal{K}$.
\item [(iii)] $T^{\ast}$ is bounded below with respect to the inner product, i.e., there is $m'>0$ such that $ m'\langle x,x\rangle_{\mathcal{A}} \leq \langle T^{\ast}x,T^{\ast}x\rangle_{\mathcal{A}}$, for all $x\in\mathcal{K}$.
\end{itemize} \end{lemma} \begin{lemma}\cite{Deh}\label{l3}
Let $\mathcal{H}$ and $\mathcal{K}$ be two Hilbert $\mathcal{A}$-modules and $T\in End_{\mathcal{A}}^{\ast}(\mathcal{H},\mathcal{K})$.
\begin{itemize}
\item [(i)] If $T$ is injective and $T$ has closed range, then the adjointable map $T^{\ast}T$ is invertible and $$\|(T^{\ast}T)^{-1}\|^{-1}I_{\mathcal{H}}\leq T^{\ast}T\leq\|T\|^{2}I_{\mathcal{H}}.$$
\item [(ii)] If $T$ is surjective, then the adjointable map $TT^{\ast}$ is invertible and $$\|(TT^{\ast})^{-1}\|^{-1}I_{\mathcal{K}}\leq TT^{\ast}\leq\|T\|^{2}I_{\mathcal{K}}.$$
\end{itemize} \end{lemma} \begin{lemma} \label{l4} \cite{33}.
Let $(\Omega,\mu )$ be a measure space, $X$ and $Y$ two Banach spaces, $\lambda : X\longrightarrow Y$ a bounded linear operator and $f : \Omega\longrightarrow X$ a measurable function; then,
\begin{equation*}
\lambda (\int_{\Omega}fd\mu)=\int_{\Omega}(\lambda f)d\mu.
\end{equation*} \end{lemma} \begin{theorem}\cite{Ch}\label{t0}
Let $X$ be a Banach space, $U : X \longrightarrow X$ a bounded operator and $\|I-U\|<1$. Then $U$ is invertible. \end{theorem}
\section{Controlled Integral Frames in Hilbert $C^{\ast}$-Modules} Let $X$ be a Banach space, $(\Omega,\mu)$ a measure space, and $f:\Omega\to X$ a measurable function. The integral of the Banach-valued function $f$ has been defined by Bochner and others. Most properties of this integral are similar to those of the integral of real-valued functions (see \cite{32, 33}). Since every $C^{\ast}$-algebra and every Hilbert $C^{\ast}$-module is a Banach space, we can use this integral and its properties.
Let $(\Omega,\mu)$ be a measure space, let $\mathcal{H}$ and $\mathcal{K}$ be two Hilbert $C^{\ast}$-modules over a unital $C^{\ast}$-algebra $\mathcal{A}$, and let $\{\mathcal{H}_{w}\}_{w\in\Omega}$ be a family of submodules of $\mathcal{H}$. $End_{\mathcal{A}}^{\ast}(\mathcal{H},\mathcal{H}_{w})$ is the collection of all adjointable $\mathcal{A}$-linear maps from $\mathcal{H}$ into $\mathcal{H}_{w}$.
We define the following: \begin{equation*}
l^{2}(\Omega, \{\mathcal{H}_{w}\}_{\omega \in \Omega})=\left\{x=\{x_{w}\}_{w\in\Omega}: x_{w}\in \mathcal{H}_{w}, \left\|\int_{\Omega}\langle x_{w},x_{w}\rangle_{\mathcal{A}} d\mu(w)\right\|<\infty\right\}. \end{equation*}
For any $x=\{x_{w}\}_{w\in\Omega}$ and $y=\{y_{w}\}_{w\in\Omega}$, the $\mathcal{A}$-valued inner product is defined by $\langle x,y\rangle_{\mathcal{A}}=\int_{\Omega}\langle x_{w},y_{w}\rangle_{\mathcal{A}} d\mu(w)$ and the norm is defined by $\|x\|=\|\langle x,x\rangle_{\mathcal{A}}\|^{\frac{1}{2}}$. \\ In this case, the $l^{2}(\Omega,\{\mathcal{H}_{w}\}_{\omega \in \Omega})$ is a Hilbert $C^{\ast}$-module (see \cite{28}).
Let $GL^{+}(\mathcal{H})$ be the set of all positive bounded linear invertible operators on $\mathcal{H}$ with bounded inverse. \begin{definition}\cite{Ros1} Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module and $(\Omega , \mu)$ a measure space. A mapping $F: \Omega \longrightarrow \mathcal{H}$ is called an integral frame associated to $(\Omega , \mu)$ if: \\ \begin{itemize} \item [$\centerdot$ ] For all $x\in \mathcal{H}$, $w \longrightarrow \langle x,F_{\omega}\rangle_{\mathcal{A}} $ is a measurable function on $\Omega$. \item [$\centerdot$ ] There is a pair of constants $0<A\leq B$ such that, \begin{equation} A\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation} \end{itemize} \end{definition} \begin{definition}\cite{Ros1}
Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module and $(\Omega , \mu)$ a measure space. A mapping $F: \Omega \longrightarrow \mathcal{H}$ is called a $\ast$-integral frame associated to $(\Omega , \mu)$ if: \\
\begin{itemize}
\item [$\centerdot$ ] For all $x\in \mathcal{H}$, $w \longrightarrow \langle x,F_{\omega}\rangle_{\mathcal{A}} $ is a measurable function on $\Omega$,
\item [$\centerdot$ ] there exist two non-zero elements $A$, $B$ in $\mathcal{A}$ such that,
\begin{equation}
A\langle x,x\rangle_{\mathcal{A}} A^{\ast} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle x,x\rangle_{\mathcal{A}} B^{\ast}, \quad x\in \mathcal{H}.
\end{equation}
\end{itemize} \end{definition}
\section{Main Results}
\begin{definition} Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module, $(\Omega , \mu)$ a measure space and $C\in GL^{+}(\mathcal{H})$. A $C$-controlled integral frame in the $C^{\ast}$-module $\mathcal{H}$ is a map $F: \Omega \longrightarrow \mathcal{H}$ for which there exist $0<A\leq B<\infty$ such that, \begin{equation}\label{eqd1} A\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation} The elements $A$ and $B$ are called the $C$-controlled integral frame bounds.\\ If $A=B$, we call this a $C$-controlled integral tight frame.\\ If $A=B=1$, it is called a $C$-controlled integral Parseval frame.\\ If only the right hand inequality of \eqref{eqd1} is satisfied, we call $F$ a $C$-controlled integral Bessel mapping with bound $B$. \end{definition} \begin{example}
Let $\mathcal{H}=\left\{ X=\left(
\begin{array}{ccc}
a & 0 & 0 \\
0 & 0 & b
\end{array}
\right) \text{ / }a,b\in
\mathbb{C}
\right\} $, \\
and $\mathcal{A=}\left\{ \left(
\begin{array}{cc}
x & 0 \\
0 & y
\end{array}
\right) \text{ / }x,y\in
\mathbb{C}
\right\} $ which is a $C^{\ast}$-algebra.\\
We define the inner product :
\[
\begin{array}{ccc}
\mathcal{H}\times \mathcal{H} & \rightarrow & \mathcal{A} \\
(A,B) & \mapsto & A(\overline{B})^{t}
\end{array}
\]\\
This inner product makes $\mathcal{H}$ a $C^{\ast}$-module over $\mathcal{A}$.\\
Let $C$ be an operator defined by,
\begin{align*}
C: \mathcal{H} &\longrightarrow \mathcal{H} \\
X&\longrightarrow \alpha X
\end{align*}
where $\alpha$ is a real number strictly greater than zero.\\
It is clear that $C \in GL^{+}(\mathcal{H})$.\\
Let $\Omega = [0,1]$ be endowed with the Lebesgue measure; it is clearly a measure space.\\
We consider :
\begin{align*}
F:\qquad [0,1] &\longrightarrow \mathcal{H}\\
w&\longrightarrow F_{w}=\left(
\begin{array}{cccc}
w & 0 & 0 \\
0 & 0 & \frac{w}{2}
\end{array}
\right).
\end{align*}
In addition, for $X\in \mathcal{H}$, we have,
\begin{align*}
\int_{\Omega}\langle X,F_{w}\rangle_{\mathcal{A}} \langle CF_{w},X\rangle_{\mathcal{A}} d\mu(\omega)&=\int_{\Omega}\alpha w^{2}\left(
\begin{array}{cc}
|a|^{2} & 0 \\
0 & \frac{|b|^{2}}{4}
\end{array}
\right)d\mu(\omega)\\
&=\frac{\alpha}{3}\left(
\begin{array}{cc}
|a|^{2} & 0 \\
0 & \frac{|b|^{2}}{4}
\end{array}
\right).
\end{align*}
It is clear that,
\begin{align*}
\frac{1}{4}\langle X,X\rangle_{\mathcal{A}}\leq
\left(
\begin{array}{cc}
|a|^{2} & 0 \\
0 & \frac{|b|^{2}}{4}
\end{array}
\right)\leq
\left(
\begin{array}{cc}
|a|^{2} & 0 \\
0 & |b|^{2}
\end{array}
\right)=\langle X,X\rangle_{\mathcal{A}}.
\end{align*}
Then we have
\begin{equation*}
\frac{\alpha}{12}\langle X,X\rangle_{\mathcal{A}}\leq \int_{\Omega}\langle X,F_{w}\rangle_{\mathcal{A}} \langle CF_{w},X\rangle_{\mathcal{A}}d\mu(\omega)\leq \frac{\alpha}{3}\langle X,X\rangle_{\mathcal{A}},
\end{equation*}
which shows that $F$ is a $C$-controlled integral frame for the $C^{\ast}$-module $\mathcal{H}$ with bounds $\frac{\alpha}{12}$ and $\frac{\alpha}{3}$.
\end{example}
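
For the reader's convenience, the frame inequality of the preceding example can also be verified numerically. The following sketch (in Python, not part of the proof) approximates the integral by the trapezoidal rule and checks that both bounds hold in the sense of positive semidefinite matrices, up to the quadrature error.
\begin{verbatim}
import numpy as np

alpha = 2.0                       # any alpha > 0
a, b = 1.5 - 0.5j, -0.3 + 2.0j    # an arbitrary element X of H

def elem(u, v):                   # elements of H are 2x3 matrices [[u,0,0],[0,0,v]]
    return np.array([[u, 0, 0], [0, 0, v]], dtype=complex)

def inner(X, Y):                  # A-valued inner product <X,Y>_A = X * conj(Y)^T
    return X @ Y.conj().T

X = elem(a, b)
ws = np.linspace(0.0, 1.0, 2001)
integrand = np.array([inner(X, elem(w, w / 2)) @ inner(alpha * elem(w, w / 2), X)
                      for w in ws])
S = np.trapz(integrand, ws, axis=0)          # approximates the integral over [0,1]

lower = (alpha / 12) * inner(X, X)
upper = (alpha / 3) * inner(X, X)
print(np.linalg.eigvalsh(S - lower))   # >= 0 up to quadrature error (lower bound)
print(np.linalg.eigvalsh(upper - S))   # >= 0 up to quadrature error (upper bound)
\end{verbatim}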
\begin{definition}
Let $F$ be a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$. We define the frame operator $S_{C} : \mathcal{H} \longrightarrow \mathcal{H}$ for $F$ by,
\begin{equation*}
S_{C}x=\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} CF_{\omega} d\mu(\omega), \quad x\in \mathcal{H}.
\end{equation*} \end{definition} \begin{proposition} The frame operator $S_{C}$ is positive, selfadjoint, bounded and invertible. \end{proposition} \begin{proof} For all $x\in \mathcal{H}$, by lemma \ref{l4}, we have, \begin{equation*} \langle S_{C}x,x\rangle_{\mathcal{A}} = \langle \int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} CF_{\omega} d\mu(\omega),x\rangle_{\mathcal{A}}=\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle CF_{\omega} ,x\rangle_{\mathcal{A}} d\mu(\omega). \end{equation*} By the left-hand inequality of \eqref{eqd1}, we have, \begin{equation*} 0\leq A\langle x,x\rangle_{\mathcal{A}} \leq \langle S_{C}x,x\rangle_{\mathcal{A}}. \end{equation*} Then $S_{C}$ is a positive operator, hence it is selfadjoint.\\ From \eqref{eqd1}, we have, \begin{equation*} A\langle x,x\rangle_{\mathcal{A}} \leq\langle S_{C}x,x\rangle_{\mathcal{A}}\leq B\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation*} So, \begin{equation*} A.I \leq S_{C}\leq B.I. \end{equation*} Then $S_{C}$ is a bounded operator.\\ Moreover, \begin{equation*} 0 \leq I-B^{-1}S_{C} \leq \frac{B-A}{B}.I, \end{equation*} Consequently, \begin{equation*}
\|I-B^{-1}S_{C} \|=\underset{x \in \mathcal{H}, \|x\|=1}{\sup}\|\langle(I-B^{-1}S_{C})x,x\rangle_{\mathcal{A}} \|\leq \frac{B-A}{B}<1. \end{equation*} Theorem \ref{t0} then shows that $S_{C}$ is invertible. \end{proof}
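
In the simplest case $\mathcal{A}=\mathbb{C}$, so that $\mathcal{H}$ is an ordinary Hilbert space, the frame operator $S_{C}$ can be assembled by quadrature and the optimal bounds $A$ and $B$ can be read off from its spectrum; the following sketch uses the hypothetical frame mapping $F_{w}=(\cos w,\sin w)$ on $\Omega=[0,2\pi]$ and a hypothetical controller $C\in GL^{+}(\mathbb{R}^{2})$.
\begin{verbatim}
import numpy as np

C = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # a positive invertible controller
F = lambda w: np.array([np.cos(w), np.sin(w)])

ws = np.linspace(0.0, 2 * np.pi, 4001)
S_C = np.zeros((2, 2))
for w0, w1 in zip(ws[:-1], ws[1:]):         # midpoint rule for S_C x = int <x,F_w> C F_w dw
    wm = 0.5 * (w0 + w1)
    S_C += (w1 - w0) * np.outer(C @ F(wm), F(wm))

eigs = np.linalg.eigvalsh(0.5 * (S_C + S_C.T))   # S_C is selfadjoint (here S_C = pi*C)
A, B = eigs.min(), eigs.max()                    # so that A*I <= S_C <= B*I
print(A, B)
\end{verbatim}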
\begin{corollary}
Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module and $(\Omega,\mu)$ be a measure space. Let $F : \Omega \longrightarrow \mathcal{H}$ be a mapping. Assume that $S$ is the frame operator for $F$. Then the following statements are equivalent:
\begin{itemize}
\item [$(1)$]$F$ is an integral frame associated to $(\Omega, \mu)$ with integral frame bounds $A$ and $B$.
\item [$(2)$] We have $A.I \leq S \leq B.I$
\end{itemize} \end{corollary} \begin{proof} $(1) \Longrightarrow(2)$ Let $F$ be an integral frame associated to $(\Omega, \mu)$ with integral frame bounds $A$ and $B$; then, \begin{equation*} A\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation*} Since, \begin{equation}
Sx=\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} F_{\omega}d\mu(\omega).
\end{equation}
We have, \begin{equation*} \langle Sx,x\rangle_{\mathcal{A}} = \langle \int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} F_{\omega}d\mu(\omega),x\rangle_{\mathcal{A}} = \int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(\omega), \end{equation*} then \begin{equation*} \langle Ax,x\rangle_{\mathcal{A}} \leq\langle Sx,x\rangle_{\mathcal{A}} \leq \langle Bx,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation*} So, \begin{equation*} A.I \leq S \leq B.I. \end{equation*} $(2) \Longrightarrow(1)$ Let $x\in \mathcal{H}$, then, \begin{equation}\label {p1}
\|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\|=\|\langle Sx,x\rangle_{\mathcal{A}} \|\leq \|Sx\|\|x\|\leq B\|x\|^{2} \end{equation} Also, \begin{equation}\label{p2}
\|\langle Sx,x\rangle_{\mathcal{A}} \|\geq \|\langle Ax,x\rangle_{\mathcal{A}}\| \geq A\|x\|^{2}
A\|x\|^{2} \leq \|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\| \leq B\|x\|^{2} \end{equation*} Which ends the proof. \end{proof} \begin{theorem}
Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module, $(\Omega , \mu)$ be a measure space and let $F$ be a mapping from $\Omega$ to $\mathcal{H}$; then $F$ is an integral frame associated to $(\Omega , \mu)$ if and only if there exist $0<A\leq B<\infty$ such that,
\begin{equation}\label{eq01t1}
A\|x\|^{2} \leq\left\|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\right\|\leq B\|x\|^{2}, \quad x\in \mathcal{H}.
\end{equation} \end{theorem} \begin{proof} Let $F$ be an integral frame associated to $(\Omega , \mu)$ with bounds $A$ and $B$, then,
\begin{equation}
A\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}.
\end{equation}
Since the lower and upper bounds are positive, we have,
\begin{equation*}
A\|x\|^{2} \leq\left\|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\right\|\leq B\|x\|^{2}, \quad x\in \mathcal{H}.
\end{equation*}
Conversely, suppose \eqref{eq01t1} holds. By (\cite{Ros1}, Theorem 2.4), we have,
\begin{equation*}
\|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\|=\|\langle Sx,x\rangle_{\mathcal{A}}\|
=\|\langle S^{\frac{1}{2}}x,S^{\frac{1}{2}}x\rangle_{\mathcal{A}}\|=\|S^{\frac{1}{2}}x\|^{2}
\end{equation*}
Then,
\begin{equation*}
A\|x\|^{2} \leq\|S^{\frac{1}{2}}x\|^{2}\leq B\|x\|^{2} \quad x\in \mathcal{H}.
\end{equation*}
By lemma \ref{l2}, there exist constants $m, M>0$ such that,
\begin{equation}
m\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq M\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}.
\end{equation}
which ends the proof. \end{proof}
\begin{theorem} Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module, $C\in GL^{+}(\mathcal{H})$, $(\Omega , \mu)$ a measure space and $F$ a mapping from $\Omega$ to $\mathcal{H}$. Then $F$ is a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ if and only if there exist $0<A\leq B<\infty$ such that, \begin{equation}\label{eq1t1}
A\|x\|^{2} \leq\left\|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\right\|\leq B\|x\|^{2}, \quad x\in \mathcal{H}. \end{equation} \end{theorem} \begin{proof} $\Longrightarrow$) Obvious.\\ $\Longleftarrow$) Suppose there exist $0<A\leq B<\infty$ such that \eqref{eq1t1} holds.\\ On one hand, for all $x\in \mathcal{H}$ we have, \begin{align*}
A\|x\|^{2} &\leq\|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},x\rangle_{\mathcal{A}} d\mu (\omega)\|\\
&= \|\langle S_{C}x,x\rangle_{\mathcal{A}}\|\\
&= \|\langle S_{C}^{\frac{1}{2}}x,S_{C}^{\frac{1}{2}}x\rangle_{\mathcal{A}}\|\\
&= \|S_{C}^{\frac{1}{2}}x\|^{2}. \end{align*} By lemma \ref{l2}, there exists $m>0$ such that, \begin{equation}\label{tt1} m \langle x,x\rangle_{\mathcal{A}} \leq \langle S_{C}^{\frac{1}{2}}x,S_{C}^{\frac{1}{2}}x\rangle_{\mathcal{A}}=\langle S_{C}x,x\rangle_{\mathcal{A}}. \end{equation} On the other hand, for all $x\in \mathcal{H}$ we have, \begin{align*}
B\|x\|^{2}&\geq \|\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\|\\
&= \|\langle S_{C}x,x\rangle_{\mathcal{A}}\|\\
&= \|\langle S_{C}^{\frac{1}{2}}x,S_{C}^{\frac{1}{2}}x\rangle_{\mathcal{A}}\|\\
&= \|S_{C}^{\frac{1}{2}}x\|^{2}. \end{align*} By lemma \ref{l2}, there exists $m'>0$ such that, \begin{equation}\label{tt2} \langle S_{C}^{\frac{1}{2}}x,S_{C}^{\frac{1}{2}}x\rangle_{\mathcal{A}}=\langle S_{C}x,x\rangle_{\mathcal{A}} \leq m'\langle x,x\rangle_{\mathcal{A}}. \end{equation} From \eqref{tt1} and \eqref{tt2}, we conclude that $F$ is a $C$-controlled integral frame. \end{proof} \begin{proposition}
Let $C\in GL^{+}(\mathcal{H})$ and $F$ be a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$. Then $F$ is an integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A\|C^{\frac{1}{2}}\|^{-2}$ and $B\|C^{\frac{-1}{2}}\|^{2}$. \end{proposition} \begin{proof} Let $F$ be a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$.\\ On one hand, for all $x\in \mathcal{H}$, we have, \begin{align*} A\langle x,x\rangle_{\mathcal{A}} &\leq \langle S_{C}x,x\rangle_{\mathcal{A}} \\ &=\langle CSx,x\rangle_{\mathcal{A}}\\ &= \langle C^{\frac{1}{2}}Sx,C^{\frac{1}{2}}x\rangle_{\mathcal{A}} \\
&\leq \|C^{\frac{1}{2}}\|^{2}\langle Sx,x\rangle_{\mathcal{A}} \\ \end{align*} So, \begin{equation}\label{p10}
A\|C^{\frac{1}{2}}\|^{-2}\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w). \end{equation} On the other hand, for all $x\in \mathcal{H}$, we have, \begin{align*} \int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)&=\langle Sx,x\rangle_{\mathcal{A}} \\ &=\langle C^{-1}CSx,x\rangle_{\mathcal{A}} \\ &=\langle (C^{-1}CS)^{\frac{1}{2}}x,(C^{-1}CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|(C^{-1}CS)^{\frac{1}{2}}x\|^{2}\\
&\leq \|C^{\frac{-1}{2}}\|^{2} \|(CS)^{\frac{1}{2}}x\|^{2}\\
&= \|C^{\frac{-1}{2}}\|^{2}\langle (CS)^{\frac{1}{2}}x,(CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle (S_{C})^{\frac{1}{2}}x,(S_{C})^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle S_{C}x,x\rangle_{\mathcal{A}}\\
&\leq \|C^{\frac{-1}{2}}\|^{2}B\langle x,x\rangle_{\mathcal{A}}. \end{align*} Then, \begin{equation}\label{pp10}
\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq \|C^{\frac{-1}{2}}\|^{2}B\langle x,x\rangle_{\mathcal{A}}. \end{equation}
From \eqref{p10} and \eqref{pp10} we conclude that $F$ is an integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A\|C^{\frac{1}{2}}\|^{-2}$ and $B\|C^{\frac{-1}{2}}\|^{2}$. \end{proof} \begin{proposition}
Let $C\in GL^{+}(\mathcal{H})$ and $F$ be an integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$. Then $F$ is a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A\|C^{\frac{-1}{2}}\|^{-2}$ and $B\|C^{\frac{1}{2}}\|^{2}$. \end{proposition} \begin{proof} Let $F$ be an integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$. Then for all $x\in \mathcal{H}$, we have \begin{align*} A\langle x, x\rangle_{\mathcal{A}}&\leq \langle Sx,x\rangle_{\mathcal{A}} \\ &=\langle C^{-1}CSx,x\rangle_{\mathcal{A}} \\ &=\langle (C^{-1}CS)^{\frac{1}{2}}x,(C^{-1}CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|(C^{-1}CS)^{\frac{1}{2}}x\|^{2}\\
&\leq \|C^{\frac{-1}{2}}\|^{2} \|(CS)^{\frac{1}{2}}x\|^{2}\\
&= \|C^{\frac{-1}{2}}\|^{2}\langle (CS)^{\frac{1}{2}}x,(CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle (S_{C})^{\frac{1}{2}}x,(S_{C})^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle S_{C}x,x\rangle_{\mathcal{A}}\\ \end{align*} So, \begin{equation}
A\|C^{\frac{-1}{2}}\|^{-2}\langle x, x\rangle_{\mathcal{A}} \leq \langle S_{C}x,x\rangle_{\mathcal{A}} \end{equation} Hence, for all $x\in \mathcal{H}$, we have, \begin{align*} \langle S_{C}x,x\rangle_{\mathcal{A}}&=\langle CSx,x\rangle_{\mathcal{A}}\\ &=\langle C^{\frac{1}{2}}Sx,C^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&\leq \|C^{\frac{1}{2}}\|^{2}\langle Sx,x\rangle_{\mathcal{A}} \\
&\leq \|C^{\frac{1}{2}}\|^{2}B\langle x,x\rangle_{\mathcal{A}}. \end{align*}
Therefore we conclude that $F$ is a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A\|C^{\frac{-1}{2}}\|^{-2}$ and $B\|C^{\frac{1}{2}}\|^{2}$. \end{proof} \begin{theorem} Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module and $(\Omega , \mu)$ a measure space. Let $F$ be a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with the frame operator $S_{C}$ and bounds $A$ and $B$. Let $K\in End_{\mathcal{A}}^{\ast}(\mathcal{H})$ be a surjective operator such that $KC=CK$. Then $KF$ is a $C$-controlled integral frame for $\mathcal{H}$ with frame operator $KS_{C}K^{\ast}$. \end{theorem} \begin{proof} Let $F$ be a $C$-controlled integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$; then, \begin{equation*} A\langle K^{\ast}x,K^{\ast}x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle K^{\ast}x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},K^{\ast}x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle K^{\ast}x,K^{\ast}x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation*} By lemma \ref{l1} and lemma \ref{l3}, we obtain, \begin{equation*}
A\|(KK^{\ast})^{-1}\|^{-1}\langle x,x\rangle_{\mathcal{A}} \leq\int_{\Omega}\langle x,KF_{\omega}\rangle_{\mathcal{A}} \langle CK F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\|K\|^{2}\langle x,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}. \end{equation*} which shows that $KF$ is a $C$-controlled integral frame for $\mathcal{H}$.\\ Moreover, by lemma \ref{l4}, we have, \begin{equation*} KS_{C}K^{\ast}x=K\int_{\Omega}\langle K^{\ast}x,F_{\omega}\rangle_{\mathcal{A}} CF_{\omega} d\mu (\omega)= \int_{\Omega}\langle x,KF_{\omega}\rangle_{\mathcal{A}} CKF_{\omega} d\mu (\omega), \end{equation*} which ends the proof. \end{proof}
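As a quick sanity check of the previous statements, take $C=\alpha I$ with $\alpha>0$: then $S_{C}=\alpha S$ and $\|C^{\frac{1}{2}}\|^{2}=\|C^{\frac{-1}{2}}\|^{-2}=\alpha$, so the bounds produced above reduce to $(\frac{A}{\alpha},\frac{B}{\alpha})$ in one direction and $(\alpha A,\alpha B)$ in the other, exactly as predicted by the identity
\begin{equation*}
\langle S_{C}x,x\rangle_{\mathcal{A}}=\alpha\langle Sx,x\rangle_{\mathcal{A}}, \quad x\in \mathcal{H}.
\end{equation*}
Similarly, taking $K=C$ in the last theorem shows that $CF$ is again a $C$-controlled integral frame, with frame operator $CS_{C}C$.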
\section{Controlled $\ast$-integral frames} \begin{definition}
Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module and $(\Omega , \mu)$ a measure space. A $C$-controlled $\ast$-integral frame in the $C^{\ast}$-module $\mathcal{H}$ is a map $F: \Omega \longrightarrow \mathcal{H}$ for which there exist two strictly nonzero elements $A,B \in \mathcal{A}$ such that,
\begin{equation}\label{eqd2}
A\langle x,x\rangle_{\mathcal{A}} A^{\ast} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle x,x\rangle_{\mathcal{A}} B^{\ast}, \quad x\in \mathcal{H}.
\end{equation}
The elements $A$ and $B$ are called the $C$-controlled $\ast$-integral frame bounds.\\
If $A=B$, we call this a $C$-controlled $\ast$-integral tight frame.\\
If $A=B=1$, it is called a $C$-controlled $\ast$-integral Parseval frame.\\
If only the right hand inequality of \eqref{eqd2} is satisfied, we call $F$ a $C$-controlled $\ast$-integral Bessel mapping with bound $B$. \end{definition} \begin{example}
Let $\mathcal{H}=\mathcal{A}=\{(a_{n})_{n\in \mathbb{N}}\subset \mathbb{C}, \quad \sum_{n\geq 0}|a_{n}|< \infty\}$ \\
endowed with the product and the inner product defined as follows:
\[
\begin{array}{ccc}
\mathcal{A}\times \mathcal{A} & \rightarrow & \mathcal{A} \\
((a_{n})_{n\in \mathbb{N}},(b_{n})_{n\in \mathbb{N}}) & \mapsto & (a_{n})_{n\in \mathbb{N}}.(b_{n})_{n\in \mathbb{N}}=(a_{n}b_{n})_{n\in \mathbb{N}}
\end{array}
\]\\
and
\[
\begin{array}{ccc}
\mathcal{H}\times \mathcal{H} & \rightarrow & \mathcal{A} \\
((a_{n})_{n\in \mathbb{N}},(b_{n})_{n\in \mathbb{N}}) & \mapsto & \langle (a_{n})_{n\in \mathbb{N}},(b_{n})_{n\in \mathbb{N}}\rangle_{\mathcal{A}} =(a_{n}\overline{b_{n}})_{n\in \mathbb{N}}
\end{array}
\]\\
Let $\Omega = [0,+\infty[$ be endowed with the Lebesgue measure, which makes it a measure space.
\begin{align*}
F:\qquad [0,+\infty[ &\longrightarrow \mathcal{H}\\
w&\longrightarrow F_{w}=(F^{w}_{n})_{n\in \mathbb{N}},
\end{align*}
where
\begin{equation*}
F^{w}_{n} = \frac{1}{n+1} \quad \mbox{if} \quad n=[w] \qquad \mbox{and} \qquad F^{w}_{n} =0 \quad \mbox{otherwise},
\end{equation*}
where $[w]$ is the integer part of $w$.\\
We consider the measure space $(\Omega,\mu)$, where $\mu$ is the Lebesgue measure on $[0,+\infty[$, and the operator,
\begin{align*} C:\qquad\mathcal{H}&\longrightarrow \mathcal{H} \\ (a_{n})_{n\in \mathbb{N}} &\longrightarrow (\alpha a_{n})_{n\in \mathbb{N}},
\end{align*}
where $\alpha$ is a strictly positive real number.\\
It is clear that $C$ is invertible and that both $C$ and $C^{-1}$ are bounded. \\
So,
\begin{align*} &\int_{\Omega}\langle (a_{n})_{n\in \mathbb{N}},F_{w}\rangle_{\mathcal{A}}\langle CF_{w},(a_{n})_{n\in \mathbb{N}}\rangle_{\mathcal{A}}d\mu(w)\\ &=\int_{0}^{+\infty}(0,0,...,\frac{a_{[w]}}{[w]+1},0,...)\alpha (0,0,...,\overline{\frac{a_{[w]}}{[w]+1}},0,...)d\mu (w)\\
&=\alpha \sum_{p=0}^{+\infty}\int_{p}^{p+1}(0,0,...,\frac{|a_{[w]}|^{2}}{([w]+1)^{2 }},0,...)d\mu (w)\\
&=\alpha \sum_{p=0}^{+\infty}(0,0,...,\frac{|a_{p}|^{2}}{(p+1)^{2}},0,...)\\
&=\alpha (\frac{|a_{n}|^{2}}{(n+1)^{2 }})_{n\in \mathbb{N}}\\ &=\sqrt{\alpha}(1,\frac{1}{2},\frac{1}{3},...,\frac{1}{n},...)\langle (a_{n})_{n\in \mathbb{N}},(a_{n})_{n\in \mathbb{N}}\rangle_{\mathcal{A}}\sqrt{\alpha}(1,\frac{1}{2},\frac{1}{3},...,\frac{1}{n},...). \end{align*} This shows that $F$ is a $C$-controlled $\ast$-integral tight frame for $\mathcal{H}$ with bound $A=\sqrt{\alpha}(1,\frac{1}{2},\frac{1}{3},...,\frac{1}{n},...) \in \mathcal{A}$. \end{example} \begin{definition}
Let $F$ be a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$. We define the frame operator $S_{C} : \mathcal{H} \longrightarrow \mathcal{H}$ for $F$ by,
\begin{equation*}
S_{C}x=\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} CF_{\omega} d\mu(\omega), \quad x\in \mathcal{H}.
\end{equation*} \end{definition} \begin{proposition}
The frame operator $S_{C}$ is positive, selfadjoint, bounded and invertible. \end{proposition} \begin{proof} For all $x\in \mathcal{H}$, by lemma \ref{l4}, we have,
\begin{equation*}
\langle S_{C}x,x\rangle_{\mathcal{A}} = \langle \int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} CF_{\omega} d\mu(\omega),x\rangle_{\mathcal{A}}=\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle CF_{\omega} ,x\rangle_{\mathcal{A}} d\mu(\omega).
\end{equation*}
By the left-hand inequality in \eqref{eqd2}, we deduce that $S_{C}$ is a positive operator; in particular, it is selfadjoint.\\
From \eqref{eqd2}, we have,
\begin{equation*}\label{111}
A\langle x,x\rangle_{\mathcal{A}} A^{\ast} \leq\langle S_{C}x,x\rangle_{\mathcal{A}}\leq B\langle x,x\rangle_{\mathcal{A}} B^{\ast}, \quad x\in \mathcal{H}.
\end{equation*}
Theorem 2.5 in \cite{MNAZ} shows that $S_{C}$ is invertible.
Let $C\in GL^{+}(\mathcal{H})$ and $F$ be a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$. Then $F$ is a $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $\|C^{\frac{1}{2}}\|^{-1}A$ and $\|C^{\frac{-1}{2}}\|B$.
Let $F$ be a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$, with bounds $A$ and $B$.\\
On the one hand we have,
\begin{align*}
A\langle x,x\rangle_{\mathcal{A}} A^{\ast} &\leq \langle S_{C}x,x\rangle_{\mathcal{A}} \\
&=\langle CSx,x\rangle_{\mathcal{A}}\\
&= \langle C^{\frac{1}{2}}Sx,C^{\frac{1}{2}}x\rangle_{\mathcal{A}} \\
&\leq \|C^{\frac{1}{2}}\|^{2}\langle Sx,x\rangle_{\mathcal{A}}. \\
\end{align*}
So,
\begin{equation}\label{p10bis}
(\|C^{\frac{1}{2}}\|^{-1}A)\langle x,x\rangle_{\mathcal{A}} (\|C^{\frac{1}{2}}\|^{-1}A)^{\ast} \leq\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w).
\end{equation}
On the other hand, for all $x\in \mathcal{H}$, we have,
\begin{align*}
\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)&=\langle Sx,x\rangle_{\mathcal{A}} \\
&=\langle C^{-1}CSx,x\rangle_{\mathcal{A}} \\
&=\langle (C^{-1}CS)^{\frac{1}{2}}x,(C^{-1}CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|(C^{-1}CS)^{\frac{1}{2}}x\|^{2}\\
&\leq \|C^{\frac{-1}{2}}\|^{2} \|(CS)^{\frac{1}{2}}x\|^{2}\\
&= \|C^{\frac{-1}{2}}\|^{2}\langle (CS)^{\frac{1}{2}}x,(CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle (S_{C})^{\frac{1}{2}}x,(S_{C})^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle S_{C}x,x\rangle_{\mathcal{A}}\\
&\leq \|C^{\frac{-1}{2}}\|^{2}B\langle x,x\rangle_{\mathcal{A}} B^{\ast}.
\end{align*}
Then,
\begin{equation}\label{pp10bis}
\int_{\Omega}\langle x,F_{\omega}\rangle_{\mathcal{A}} \langle F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq (\|C^{\frac{-1}{2}}\|B)\langle x,x\rangle_{\mathcal{A}} (\|C^{\frac{-1}{2}}\|B)^{\ast}.
\end{equation}
From \eqref{p10bis} and \eqref{pp10bis} we conclude that $F$ is a $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $\|C^{\frac{1}{2}}\|^{-1}A$ and $\|C^{\frac{-1}{2}}\|B$. \end{proof} \begin{proposition}
Let $C\in GL^{+}(\mathcal{H})$ and $F$ be a $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$. Then $F$ is a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $\|C^{\frac{-1}{2}}\|^{-1}A$ and $\|C^{\frac{1}{2}}\|B$.
Let $F$ be a $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $A$ and $B$. Then for all $x\in \mathcal{H}$, we have
\begin{align*}
A\langle x, x\rangle_{\mathcal{A}} A^{\ast}&\leq \langle Sx,x\rangle_{\mathcal{A}} \\
&=\langle C^{-1}CSx,x\rangle_{\mathcal{A}} \\
&=\langle (C^{-1}CS)^{\frac{1}{2}}x,(C^{-1}CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|(C^{-1}CS)^{\frac{1}{2}}x\|^{2}\\
&\leq \|C^{\frac{-1}{2}}\|^{2} \|(CS)^{\frac{1}{2}}x\|^{2}\\
&= \|C^{\frac{-1}{2}}\|^{2}\langle (CS)^{\frac{1}{2}}x,(CS)^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle (S_{C})^{\frac{1}{2}}x,(S_{C})^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&=\|C^{\frac{-1}{2}}\|^{2}\langle S_{C}x,x\rangle_{\mathcal{A}}.\\
\end{align*}
So,
\begin{equation}
(\|C^{\frac{-1}{2}}\|^{-1}A)\langle x, x\rangle_{\mathcal{A}} (\|C^{\frac{-1}{2}}\|^{-1}A)^{\ast}\leq \langle S_{C}x,x\rangle_{\mathcal{A}}
\end{equation}
Hence, for all $x\in \mathcal{H}$,
\begin{align*}
\langle S_{C}x,x\rangle_{\mathcal{A}}&=\langle CSx,x\rangle_{\mathcal{A}}\\
&=\langle C^{\frac{1}{2}}Sx,C^{\frac{1}{2}}x\rangle_{\mathcal{A}}\\
&\leq \|C^{\frac{1}{2}}\|^{2}\langle Sx,x\rangle_{\mathcal{A}} \\
&\leq \|C^{\frac{1}{2}}\|^{2}B\langle x,x\rangle_{\mathcal{A}} B^{\ast} \\
&= (\|C^{\frac{1}{2}}\|B)\langle x,x\rangle_{\mathcal{A}} (\|C^{\frac{1}{2}}\|B)^{\ast}.
\end{align*}
Therefore we conclude that $F$ is a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with bounds $\|C^{\frac{-1}{2}}\|^{-1}A$ and $\|C^{\frac{1}{2}}\|B$.
Let $\mathcal{H}$ be a Hilbert $\mathcal{A}$-module and $(\Omega , \mu)$ a measure space. Let $F$ be a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ associated to $(\Omega , \mu)$ with the frame operator $S_{C}$ and bounds $A$ and $B$. Let $K \in End_{\mathcal{A}}^{\ast}(\mathcal{H})$ be a surjective operator such that $KC=CK$. Then $KF$ is a $C$-controlled $\ast$-integral frame for $\mathcal{H}$ with frame operator $KS_{C}K^{\ast}$.
For all $x\in \mathcal{H}$, by \eqref{eqd2} we have,
\begin{equation*}
A\langle K^{\ast}x,K^{\ast}x\rangle_{\mathcal{A}} A^{\ast} \leq\int_{\Omega}\langle K^{\ast}x,F_{\omega}\rangle_{\mathcal{A}} \langle C F_{\omega},K^{\ast}x\rangle_{\mathcal{A}} d\mu(w)\leq B\langle K^{\ast}x,K^{\ast}x\rangle_{\mathcal{A}} B^{\ast}, \quad x\in \mathcal{H}.
\end{equation*}
By lemma \ref{l1} and lemma \ref{l3}, we obtain,
\begin{equation*}
A\|(KK^{\ast})^{-1}\|^{-1}\langle x,x\rangle_{\mathcal{A}} A^{\ast} \leq\int_{\Omega}\langle x,KF_{\omega}\rangle_{\mathcal{A}} \langle CK F_{\omega},x\rangle_{\mathcal{A}} d\mu(w)\leq B\|K\|^{2}\langle x,x\rangle_{\mathcal{A}} B^{\ast}, \quad x\in \mathcal{H}.
\end{equation*}
which shows that $KF$ is a $C$-controlled $\ast$-integral frame for $\mathcal{H}$.\\
Moreover, by lemma \ref{l4}, we have,
\begin{equation*}
KS_{C}K^{\ast}x=K\int_{\Omega}\langle K^{\ast}x,F_{\omega}\rangle_{\mathcal{A}} CF_{\omega} d\mu (\omega)= \int_{\Omega}\langle x,KF_{\omega}\rangle_{\mathcal{A}} CKF_{\omega} d\mu (\omega)
\end{equation*}
which ends the proof.
\subsection*{Acknowledgment} The authors would like to express their gratitude to the reviewers for helpful comments and suggestions.
\end{document}
\begin{document}
\baselineskip=17pt
\title{Cuspidal divisor class groups of non-split Cartan modular curves}
\author{Pierfrancesco Carlucci\\ Dipartimento di Matematica\\
Universit\'a degli Studi di Roma Tor Vergata\\
Via della Ricerca Scientifica 1, 00133, Rome, Italy\\ E-mail: [email protected]} \date{30 April, 2016}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{2010 \emph{Mathematics Subject Classification}: Primary 11G16; Secondary 11B68, 13C20.}
\footnote{\emph{Key words and phrases}: Siegel Functions, Modular Units, Cuspidal Divisor Class Group, Non-Split Cartan Curves, Generalized Bernoulli Numbers.}
\renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0}
\begin{abstract}
I find an explicit description of modular units in terms of Siegel functions for the modular curves $X^+_{ns}(p^k) $ associated to the normalizer of a non-split Cartan subgroup of level $ p^k $ where $ p\not=2,3 $ is a prime. The Cuspidal Divisor Class Group $ \mathfrak{C}^+_{ns}(p^k) $ on $X^+_{ns}(p^k)$ is explicitly described as a module over the group ring $R = \mathbb{Z}[(\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\}] $. In this paper I give a formula involving generalized Bernoulli numbers $ B_{2,\chi} $ for $ |\mathfrak{C}^+_{ns}(p^k)| $. \end{abstract}
\section{Motivation and overview} Let $ X^+_{ns}(n) $ be the modular curve associated to the normalizer of a non-split Cartan subgroup of level $ n$. One noteworthy reason for studying these curves is Serre's uniformity problem over $ \mathbb{Q} $, stating that there exists a constant $ C >0 $ so that, if $ E $ is an elliptic curve over $ \mathbb{Q} $ without complex multiplication, then the Galois representation: $$ \rho_{E,p}: \mbox{Gal}(\overbar{\mathbb{Q}},\mathbb{Q}) \rightarrow \mbox{GL}_2(\mathbb{F}_p) $$ attached to the elliptic curve $ E $ is onto for all primes $ p>C$ (see \cite{Serre72} and \cite[p. 198]{KL}). If the Galois representation were not surjective, its image would be contained in one of the maximal proper subgroups of $ \mbox{GL}_2(\mathbb{F}_p) $. These subgroups are:\\
\noindent 1. A Borel subgroup;\\ 2. The normalizer of a split Cartan subgroup;\\ 3. The normalizer of a non-split Cartan subgroup; \\ 4. A finite list of exceptional subgroups. \\
Serre himself showed that if $ p > 13 $ the image of $ \rho_{E,p} $ is not contained in an exceptional subgroup. Mazur in \cite{Mazur} and Bilu-Parent-Rebolledo in \cite{BiPaRe} presented analogous results for Borel subgroups and split Cartan subgroups respectively. The elliptic curves over $ \mathbb{Q} $ for which the image of the Galois representation is contained in the normalizer of a non-split Cartan subgroup are parametrized by the non-cuspidal rational points of $X^+_{ns}(p) $. Thus the open case of Serre's uniformity problem can be reworded in terms of determining whether there exist $ \mathbb{Q}$-rational points on $X^+_{ns}(p) $, that do not arise from elliptic curves with complex multiplication.
This paper focuses on an aspect of the curves $X^+_{ns}(p^k) $ that has never been treated before: their Cuspidal Divisor Class Group $ \mathfrak{C}^+_{ns}(p^k)$, a finite subgroup of the Jacobian $ J^+_{ns}(p^k)$ whose support is contained in the set of cusps of $X^+_{ns}(p^k) $. Let $ \mathfrak{D}_{ns}^+(p^k) $ be the free abelian group generated by the cusps of $X^+_{ns}(p^k) $, let $ \mathfrak{D}_{ns}^+(p^k)_0 $ be its subgroup consisting of elements of degree 0 and let $ \mathfrak{F}_{ns}^+(p^k) $ be the group of divisors of modular units of $X^+_{ns}(p^k)$, i.e. those modular functions on $X^+_{ns}(p^k)$ in the modular function field $ F_{p^k} $, which have no zeros and poles in the upper-half plane. We define:
$$ \mathfrak{C}^+_{ns}(p^k):= \mathfrak{D}_{ns}^+(p^k)_0 / \mathfrak{F}_{ns}^+(p^k).$$ In \cite{Siegelgenerator} Kubert and Lang gave an explicit and complete description of the group of modular units of $ X(p^k) $ in terms of Siegel functions $ g_a(\tau) $ (see \cite{Lang:ef} or \cite{Siegel}) with $a \in \frac{1}{p^k}\mathbb{Z}^2 \setminus \mathbb{Z}^2 $. We will define the set of functions $$ \{ G^+_h(\tau)\}_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\})} $$ in terms of classical Siegel functions and we will prove the following result: \\
\noindent \textbf{Theorem \ref{powprod}} \textit{If $ p \not= 2,3 $, the group of modular units of the modular curve $ X^+_{ns}(p^k) $ consists (modulo constants) of power products:} $$ g(\tau)= \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_h}^{n^+_h}(\tau)} $$ \textit{where
$$ G^+_h(\tau) = \prod_{t \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) , \pm|t|= h}g_{[t]}(\tau) $$
\textit{and} $ d=\displaystyle\frac{12}{\gcd(12,p+1)} \mbox{ divides } \sum_{h}n^+_h$.}
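For instance, for $ p=5 $ one has $ d=\frac{12}{\gcd(12,6)}=2 $, so the sum of the exponents $ n^+_h $ must be even, while for $ p=11 $ one finds $ d=\frac{12}{\gcd(12,12)}=1 $ and the divisibility condition is vacuous.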
In \cite[Chapter 5]{KL} Kubert and Lang studied the Cuspidal Divisor Class Group on the modular curve $ X(p^k) $. Since their description utilizes the parametrization of the set of cusps of $ X(p^k) $ by the elements of the quotient $ C_{ns}(p^k)/\{\pm 1\} $, it appears natural to develop and extend their techniques to non-split Cartan modular curves. Kubert and Lang proved the following:\\
\noindent \textbf{Theorem \ref{CDCG}} \textit{If $ p\ge 5 $ consider $ R:= \mathbb{Z}[C_{ns}(p^k)/\{\pm 1\}] $ and let $ R_0 $ be the ideal of $ R $ consisting of elements of degree $0$. The Cuspidal Divisor Class Group $ \mathfrak{C}_{p^k} $ on $ X(p^k) $ is an $R-$module, more precisely there exists a Stickelberger element $ \theta \in \mathbb{Q}[C_{ns}(p^k)/\{\pm 1\}]$ such that, under the identification of the group $ C_{ns}(p^k)/\{\pm 1\} $ with the set of cusps at level $ p^k $, the ideal $ R \cap R \theta $ corresponds to the group of divisors of units in the modular function field $ F_{p^k} $ and: $$ \mathfrak{C}_{p^k} \cong R_0 / R \cap R \theta. $$} \noindent In this theorem the authors exhibited an isomorphism reminding to a classical result in cyclotomic fields theory. Let $ J $ be a fractional ideal of $ \mathbb{Q}(\zeta_m) $ and $ G= $\mbox{Gal}$( \mathbb{Q}(\zeta_m)/\mathbb{Q}) \cong (\mathbb{Z}/m\mathbb{Z})^*$. Consider $\mathbb{Z}[G]$ acting on the ideals and ideal classes in the natural way: if $ x= \sum_{\sigma}x_{\sigma} \sigma $ then $ J^x := \prod_{\sigma}(J^{\sigma})^{x_{\sigma}}.$ We have the following result:\\ \\ \textbf{Stickelberger's Theorem} \cite[pag. 333]{Washington} \textit{Define the Stickelberger element: $$ \theta = \sum_{a \scriptsize{\mbox{ mod }} m, (a,m)=1}\displaystyle \left\langle \frac{a}{m} \right\rangle \sigma_a^{-1} \in \mathbb{Q}[G].$$ The Stickelberger ideal $ \mathbb{Z}[G]\cap\theta\mathbb{Z}[G] $ annihilates the ideal class group of $ \mathbb{Q}(\zeta_m)$.}\\
\noindent Along these lines, the main result can be summarized as follows:\\
\noindent \textbf{Main Theorem \ref{main}}\textit{ Consider $ p \ge 5 $, $ H:=(\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\}$ and $ w $ a generator of $ H $. There exists a Stickelberger element
$$ \theta := \displaystyle\frac{p^k}{2} \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} {\displaystyle\sum_{ \pm|s|=w^i, s \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}} B_2 \left( \left\langle \frac{\frac{1}{2}(s+\overline{s}) }{p^k} \right\rangle \right) } w^{-i} \in \mathbb{Q}[H] $$ such that, under the identification of the group $ H $ with the set of cusps of $ X^+_{ns}(p^k) $, the ideal $ \mathbb{Z}[H]\theta \cap \mathbb{Z}[H] $ represents the group of divisors of units of $ X^+_{ns}(p^k)$. The Cuspidal Divisor Class Group on $ X^+_{ns}(p^k) $ is a module over $\mathbb{Z}[H]$ and, more precisely, we have: } $$ \mathfrak{C}^+_{ns}(p^k) \cong \mathbb{Z}_0[H] / (\mathbb{Z}[H]\theta \cap \mathbb{Z}[H]).
$$
From the previous statement we will show another result which has a counterpart in cyclotomic field theory. \\
\noindent \textbf{Theorem \ref{Cardinalità}} \textit{For any character $ \chi $ of $C_{ns}(p^k)/\{\pm I\}$ (identified with an even character of $ C_{ns}(p^k) $), we let:} $$ B_{2,\chi} = \sum_{\alpha \in C_{ns}(p^k)/\{\pm I\} } B_2 \left( \left\langle \frac{T(\alpha)}{p^k} \right\rangle \right) \chi(\alpha) $$ \textit{where $ B_2(t) = t^2 - t + \frac{1}{6} $ is the second Bernoulli polynomial and $T$ is a certain $ (\mathbb{Z}/p^k\mathbb{Z})$-linear map. Then we have:}
$$ |\mathfrak{C}^+_{ns}(p^k)| = \displaystyle 24\frac{ \displaystyle\prod_{}{\frac{p^k}{2}B_{2,\chi}}}{\gcd(12,p+1)(p-1)p^{k-1}} $$ \textit{where the product runs over all nontrivial characters $ \chi $ of $ C_{ns}(p^k)/\{\pm I\} $ such that $ \chi(M)=1 $ for every $ M \in C_{ns}(p^k) $
with $ \det M = \pm 1 $.\\ In particular, for $ k=1 $ let $ \omega$ be a generator of the character group of $ C_{ns}(p) $ and $ v $ a generator of $ \mathbb{F}_{p^2}^* $. Then:}
$$ |\mathfrak{C}^+_{ns}(p)| = \displaystyle \frac{24}{(p-1)\gcd(12,p+1)}\prod_{j=1}^{\frac{p-3}{2}}\frac{p}{2}B_{2,\omega^{(2p+2)j}} = $$
$$ = \displaystyle\frac{ 576 \left| \det\left[\displaystyle\frac{p}{2}\left(\displaystyle\sum_{l=0}^{p}B_2\left( \left\langle \frac{\frac{1}{2}\mbox{Tr}(v^{i-j+l\frac{p-1}{2}}) }{p} \right\rangle \right) - \frac{p+1}{6} \right) \right]_{1\le i,j \le \frac{p-1}{2}} \right|}{(p-1)^2 p (p+1)\gcd(12,p+1)}. $$
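For the reader who wishes to experiment with the determinant expression just displayed, the following short script (given purely as an illustration and not used anywhere in the proofs; the brute-force choices of the non-residue and of the generator $ v $ of $ \mathbb{F}_{p^2}^* $, as well as all variable names, are ours) evaluates it with exact rational arithmetic for a small prime $ p\ge 5 $:
\begin{verbatim}
# Illustrative sketch only: exact evaluation of the determinant expression
# for |C^+_ns(p)| given above.  The brute-force choices of the non-residue
# eps and of the generator v of F_{p^2}^*, and all names, are ours.
from fractions import Fraction
from math import gcd

def B2(t):
    # second Bernoulli polynomial B_2(t) = t^2 - t + 1/6
    return t*t - t + Fraction(1, 6)

def cuspidal_order(p):
    # quadratic non-residue mod p, by Euler's criterion
    eps = next(e for e in range(2, p) if pow(e, (p-1)//2, p) == p-1)
    # F_{p^2} = F_p(sqrt(eps)); an element a + b*sqrt(eps) is the pair (a, b)
    def mul(x, y):
        return ((x[0]*y[0] + eps*x[1]*y[1]) % p,
                (x[0]*y[1] + x[1]*y[0]) % p)
    def order(x):
        y, k = x, 1
        while y != (1, 0):
            y, k = mul(y, x), k + 1
        return k
    # brute-force generator of F_{p^2}^* (fine for small p)
    v = next((a, b) for a in range(p) for b in range(p)
             if (a, b) != (0, 0) and order((a, b)) == p*p - 1)
    # half[k] = (1/2)Tr(v^k) = a, where v^k = a + b*sqrt(eps)
    half, x = [], (1, 0)
    for _ in range(p*p - 1):
        half.append(x[0])
        x = mul(x, v)
    n = (p - 1) // 2
    def entry(i, j):
        s = sum(B2(Fraction(half[(i - j + l*n) % (p*p - 1)], p))
                for l in range(p + 1))
        return Fraction(p, 2) * (s - Fraction(p + 1, 6))
    M = [[entry(i, j) for j in range(1, n+1)] for i in range(1, n+1)]
    # exact determinant by Gaussian elimination over the rationals
    det = Fraction(1)
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c+1, n):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][k] - f*M[c][k] for k in range(n)]
    return 576*abs(det) / ((p-1)**2 * p * (p+1) * gcd(12, p+1))

print(cuspidal_order(5))
\end{verbatim}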
This theorem could be considered analogous to the relative class number formula \cite[Theorem 4.17]{Washington}: $$ h^-_m = Q w \prod_{\chi \mbox{ odd}} -\frac{1}{2}B_{1,\chi} $$ where $ Q=1 $ if $ m $ is a prime power and $ Q=2 $ otherwise, $ w $ is the number of roots of unity in $ \mathbb{Q}(\zeta_m) $ and we encounter the classical generalized Bernoulli numbers: $$ B_{1,\chi} := \sum_{a=1}^{m}\chi(a)B_1\left( \frac{a}{m} \right)= \frac{1}{m} \sum_{a=1}^{m} \chi(a)a \mbox{ for } \chi\not=1.$$
In the last section we will explicitly calculate $ |\mathfrak{C}^+_{ns}(p)| $ for some $ p \le 101 $. Consider the isogeny (cf. \cite[Paragraph 6.6]{Diamond:mf}):$$ {J_0^+}^{new}(p^2) \longrightarrow \mathop{\bigoplus_{f}} A'_{p,f} $$ where the sum is taken over the equivalence classes of newforms $ f\in S_2(\Gamma^+_0(p^2))$. From Theorems \ref{x1}, \ref{x2} and \ref{x3} we deduce that:
$$
|\mathfrak{C}^+_{ns}(p)| \mbox{ divides } \prod_f \mathop \textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q \mbox{ prime}, q \nmid |\mathfrak{C}^+_{ns}(p)|, \\ q \equiv \pm 1 \mbox{ mod }p \end{array} \end{scriptsize} } |A'_{p,f}(\mathbb{F}_{q})|. $$ Using the modular form database of W.Stein, we will find out that for $ p \le 31 $:
$$|\mathfrak{C}^+_{ns}(p)| = \prod_f \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }p \end{array} \end{scriptsize} } |A'_{p,f}(\mathbb{F}_{q})|. $$
\section{Galois groups of modular function fields} Following \cite[Chapter 1]{Silverman}, let
$ \mathbb{H} = \{x + iy \;| y > 0; x, y \in \mathbb{R} \} $ be the upper-half plane and \textit{n} a positive integer. The principal congruence subgroup of level \textit{n} is the subgroup of $ SL_2(\mathbb{Z}) $
defined as follows: \begin{center} $ \Gamma(n) := \left\{\begin{pmatrix} a&b\\c&d \end{pmatrix} \in SL_2({\mathbb{Z}}) : a\equiv d\equiv 1,~b\equiv c\equiv 0\mod n\right\}. $ \end{center} Then the quotient space $ \Gamma(n) \textbackslash \mathbb{H} $ is complex analytically isomorphic to an affine curve $ Y(n) $ that can be compactified by considering $ \mathbb{H}^*:= \mathbb{H} \cup \mathbb{Q} \cup \{\infty\} $ and by taking the extended quotient: \begin{center} $X(n)= \Gamma(n)\textbackslash \mathbb{H}^* = Y(n)\cup\Gamma(n)\textbackslash (\mathbb{Q}\cup\{\infty\}).$ \end{center} The points $ \Gamma(n) \tau $ in $ \Gamma(n)\textbackslash (\mathbb{Q}\cup\{\infty\}) $ are called the cusps of $ \Gamma(n) $ and can be described by the fractions \textit{s=$\frac{a}{c}$} with $0 \leq a \leq n-1 $, $0 \leq c \leq n-1 $ and gcd(\textit{a,c})=1. As a consequence, it is not difficult to infer that $ X(n) $ has
$\displaystyle \frac{1}{2} n^2 \displaystyle\prod_{p | n} \Big(1- \frac{1}{p^2}\Big) $ cusps. \\ Let $F_{n,\mathbb{C}}$ be the field of modular functions of level $n$. A classical result states that $F_{1,\mathbb{C}} = \mathbb{C}(j)$ where \textit{j} is Klein's \textit{j}-invariant. We shall now find generators for $F_{n,\mathbb{C}}$. Consider: \begin{center} $ f_0(w;\tau) = -2^7 3^5 \displaystyle\frac{g_2(\tau)g_3(\tau)}{\Delta(\tau)}\wp(w;\tau,1),$ \end{center} where $ \Delta$ is the modular discriminant, $ \wp$ is the Weierstrass elliptic function, $ \tau \in \mathbb{H} $, $w \in \mathbb{C}$ and $ g_2 = 60G_4 $ and $ g_3 = 140G_6 $ are constant multiples of the Eisenstein series:
$$ G_{2k}(\tau) = \sum_{{ (m,n)\in\mathbf{Z}^2\backslash(0,0)}} \frac{1}{(m+n\tau )^{2k}}.
$$ For $ r,s \in \mathbb{Z} $, not both divisible by \textit{n}, we define $ f_{r,s}= f_{0}(\frac{r\tau +s}{n}; \tau) $. Since the Weierstrass $ \wp$-function is elliptic with respect to the lattice $[\tau,1]$, it follows that $ f_{r,s} $ depends only on the residues of $r,s$ mod $n$. Thus, it is convenient to use a notation emphasizing this property. If $ a=(a_1,a_2) \in \mathbb{Q}^2$ but $a \not\in \mathbb{Z}^2$ we call the functions $ f_a(\tau)=f_0(a_1\tau+a_2;\tau)$ the Fricke functions. They depend only on the residue class of $a $ mod $ \mathbb{Z}^2 $.
\begin{thm} We have: $$ \mbox{ Gal}( F_{n,\mathbb{C}}, F_{1,\mathbb{C}})\cong SL_2(\mathbb{Z}/n\mathbb{Z})/ \{\pm I \}. $$ \end{thm} \begin{proof} There is a surjective homomorphism (see \cite[p. 279]{Diamond:mf} and \cite[p. 65]{Lang:ef}): \begin{center} $ \theta: SL_2(\mathbb{Z}) \longrightarrow $ Aut $(\mathbb{C}(X(n))), $ \end{center} \begin{center} $ \gamma \longmapsto $ $ (f \longmapsto f^{(\theta(\gamma))} = f \circ \gamma). $ \end{center} From Ker$(\theta) = \pm \Gamma(n) $ and the relations $ f_a (\gamma(\tau)) = f_{a\gamma}(\tau) $ it follows easily that Gal$( F_{n,\mathbb{C}}, F_{1,\mathbb{C}})\cong \Gamma(1)/\pm \Gamma(n) \cong SL_2(\mathbb{Z}/n\mathbb{Z})/ \{\pm I \} $. \end{proof} We say that a modular form in $ F_{n,\mathbb{C}} $ is defined over a field if all the coefficients of its $q$-expansion lie in that field and analogously for every $\mbox{Gal}( F_{n,\mathbb{C}}, F_{1,\mathbb{C}})$-conjugate of the form. Let: \begin{center} \begin{flushleft} $ F_n= $ function field on $ X(n) $ consisting of those functions which are defined over the \textit{n}-th cyclotomic field $ \mathbb{Q}_n = \mathbb{Q}(\zeta_n) $. \end{flushleft} \end{center}
\begin{thm}\label{fricke} The field $ F_n $ has the following properties:\\ (1) $ F_n $ is a Galois extension of $ F_1 = \mathbb{Q}(j) $.\\ (2) $ F_n = \mathbb{Q}(j,f_{r,s})
_{all (r,s)\in \frac{1}{n}\mathbb{Z}^2 \smallsetminus \mathbb{Z}^2 } $. \\ (3) For every $ \gamma \in GL_2(\mathbb{Z}/n\mathbb{Z})$ the map $ f_a \mapsto f_{a\gamma}$ gives an element of Gal$(F_n,\mathbb{Q}(j))$ which we write $\theta(\gamma) $. Then $ \gamma \mapsto \theta(\gamma) $ induces an isomorphism of $ GL_2(\mathbb{Z}/n\mathbb{Z})/{\pm I} $ to Gal$(F_n,\mathbb{Q}(j))$. The subgroup $ SL_2(\mathbb{Z}/n\mathbb{Z})/{\pm I} $ operates on a modular function by composition with the natural action of $ SL_2(\mathbb{Z})$ on the upper half-plane $\mathbb{H} $. \\ Furthermore the group of matrices $ \begin{pmatrix} 1&0\\0&d \end{pmatrix} $ operates on $ F_n $ as follows: \\ for $d \in (\mathbb{Z}/n\mathbb{Z})^* $ consider the automorphism $ \sigma_d $ of $ \mathbb{Q}_n $ such that $ \sigma_d(\zeta_n)= \zeta_n^d $. Then $ \sigma_d $ extends to $ F_n $ by operating on the coefficients of the power series expansions: \begin{center} $ \sigma_d (\sum{a_i q^{i/n}})= \sum{\sigma_d(a_i) q^{i/n}} $ with $ q=e^{2\pi i \tau}$. \\ \end{center} If $ (r,s) \in \frac{1}{n}\mathbb{Z}^2 \setminus \mathbb{Z}^2 $, we have: $\sigma_d(f_{r,s}(\tau))=f_{r,sd}(\tau). $
\end{thm} \begin{proof}
\cite[Theorem 6.6]{Shimura:af}
\end{proof}
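For instance, for a prime level $ n=p $ the theorem identifies Gal$(F_p,\mathbb{Q}(j))$ with $ GL_2(\mathbb{Z}/p\mathbb{Z})/\{\pm I\} $, a group of order $ \frac{(p^2-1)(p^2-p)}{2} $; for $ p=5 $ this order is $ 240 $.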
\section{Modular Units and Manin-Drinfeld Theorem} In this paper we will focus our attention on the modular units of $ X(n) $. In other words, the invertible elements of the integral closure of $ \mathbb{Q}[j] $ in $ F_n $. The only pole of \textit{j}$ (\tau) $ is at infinity. So, from the algebraic characterization of the integral closure as the intersection of all valuation subrings containing the given ring, the modular units in $ F_n $ are exactly the modular functions which have poles and zeros exclusively at the cusps of $ X(n) $.
Let $ \mathfrak{D}_n \simeq \bigoplus_{\footnotesize{\mbox{cusps}}} \mathbb{Z} $ be the free abelian group of rank $ {\frac{1}{2} n^2 \prod_{p | n} (1- \frac{1}{p^2})} $ generated by the cusps of $ X(n) $. Let $ \mathfrak{D}_{n,0} $ be its subgroup consisting of elements of degree $ 0 $ and let $ \mathfrak{F}_n $ be the subgroup generated by the divisors of modular units in the modular function field $ F_n $. The quotient group: \begin{center} $ \mathfrak{C}_n := \mathfrak{D}_{n,0} / \mathfrak{F}_n $ \end{center} is called the Cuspidal Divisor Class Group on $ X(n) $. The previous definition generalizes \textit{mutatis mutandis} to every modular curve $ X_\Gamma $ where $ \Gamma $ is a modular subgroup. Manin and Drinfeld proved that: \begin{thm}\label{mandrinf}
If $ \Gamma $ is a congruence subgroup then all divisors of degree 0 whose support is a subset of the set of cusps of $ X_\Gamma $ have a multiple that is a principal divisor. In other words, if $ x_1,x_2 \in X_\Gamma $ are cusps, then $ x_1 - x_2 $ has finite order in the Jacobian variety $ Jac(X_\Gamma) $. \end{thm} \begin{proof} Let $ x_1,x_2$ be two cusps in $ X_\Gamma $. Denote by $ \{x_1,x_2 \} \in (\Omega^1(X_\Gamma))^*$ the functional on the space of differentials of the first kind given by: $$ \{x_1, x_2 \}: \omega \mapsto \int_{x_1}^{x_2}\omega. $$
\textit{A priori} we have $ \{x_1,x_2\} \in H_1(X_\Gamma, \mathbb{R}) $. Manin and Drinfeld showed that it lies in $H_1(X_\Gamma, \mathbb{Q})$. Cf. \cite{Drinfeld}, \cite[Chapter IV]{Lang:mf} and \cite{Manin}. \end{proof}
\section{Siegel Functions and Cuspidal Divisor Class Groups}
Let $ n=p^k $ with $ p \ge 5 $ prime. Following \cite{KL} we will give an explicit description of modular units of $ X(n) $ and its cuspidal divisor class group. \\ Let $L$ a lattice in $ \mathbb{C} $. Define the Weierstrass sigma function: $$ \sigma_L (z) = z \displaystyle\prod_{ \begin{scriptsize} \begin{array}{c} \omega \in L \\ \omega \not=0 \end{array} \end{scriptsize} }{ \left( 1 - \frac{z}{\omega} \right) e^{z/\omega + \frac{1}{2} (z/\omega)^2 } }, $$ \noindent which has simple zeros at all non-zero lattice points. Define: $$ \zeta_L (z) = \frac{d}{dz} \log (\sigma_L(z)) = \displaystyle{\frac{1}{z} + \displaystyle\sum_{ \begin{scriptsize} \begin{array}{c} \omega \in L \\ \omega \not=0 \end{array} \end{scriptsize} }{\left[\frac{1}{z-\omega} +\frac{1}{\omega} + \frac{z}{\omega^2} \right]} }, $$ $$ \wp_L(z)= -\zeta'_L(z) = \frac{1}{z^2} + \displaystyle\sum_{ \begin{scriptsize} \begin{array}{c} \omega \in L \\ \omega \not=0 \end{array} \end{scriptsize} }{\left[\frac{1}{(z-\omega)^2} - \frac{1}{\omega^2} \right]}.
$$ \noindent If $ \omega \in L $, by virtue of the periodicity of $ \wp_L $ we obtain $ \frac{d}{dz} \zeta_L(z+\omega) = \frac{d}{dz}\zeta_L(z)$, whence follows the existence of a $ \mathbb{R}$-linear function $ \eta_L(z) $ such that: $$ \zeta_L(z+\omega) = \zeta_L(z) + \eta_L(\omega).$$ For $ L=[\tau,1] $ (with $\tau \in \mathbb{H} $) and $ a=(a_1,a_2) \in \mathbb{Q}^2 \smallsetminus \mathbb{Z}^2 $ we define the Klein forms: $$ \mathfrak{k}_a(\tau) = e^{-\eta_L(a_1\tau+a_2)}\sigma_L(a_1\tau+a_2).$$
Note that $ z=a_1\tau+a_2 \not\in L=[\tau,1] $ so we know directly from their definition that the Klein forms are holomorphic functions which have no zeros and poles on the upper half plane.
When $ \Gamma $ is a congruence subgroup and $ k $ is an integer, we will say that a holomorphic function $ f(\tau) $ on $ \mathbb{H}$ is a nearly holomorphic modular form for $ \Gamma $ of weight $ k $ if:\\ (i) $ f(\gamma(\tau))= (r\tau +s)^k f(\tau) $ for all $\gamma=\begin{pmatrix} p&q\\r&s \end{pmatrix} \in \Gamma $;\\ (ii) $ f(\tau) $ is meromorphic at every cusp.
\begin{prop}\label{Kleinforms}
Let $ a=(a_1,a_2) \in \mathbb{Q}^2 \smallsetminus \mathbb{Z}^2 $ and $ b=(b_1,b_2) \in \mathbb{Z}^2 $. The Klein Forms $ \mathfrak{k}_a(\tau) $ have the following properties: \\
(1) $ \mathfrak{k}_{-a}(\tau) = - \mathfrak{k}_a(\tau); $\\
(2) $ \mathfrak{k}_{a+b}(\tau) = \epsilon(a,b)\mathfrak{k}_a(\tau) $ with $ \epsilon(a,b)=(-1)^{b_1 b_2 + b_1 + b_2} e^{- \pi i (b_1 a_2 - b_2 a_1)}; $\\
(3) For every $\gamma=\begin{pmatrix} p&q\\r&s \end{pmatrix} \in SL_2(\mathbb{Z}) $ we have:
$$ \mathfrak{k}_a (\gamma(\tau)) = \mathfrak{k}_a \left(\frac{p\tau+q}{r\tau+s}\right) = \frac{\mathfrak{k}_{a\gamma}(\tau)}{r \tau + s} =
\frac{\mathfrak{k}_{(a_1 p + a_2 r , a_1 q + a_2 s )}(\tau)}{r \tau + s}; $$
(4) If $ n \ge 2 $ and $ a \in \frac{1}{n}\mathbb{Z}^2 \setminus \mathbb{Z}^2 $ then $ \mathfrak{k}_a(\tau) $ is a nearly holomorphic modular form for $ \Gamma(2n^2) $ of weight -1. \\
(5) Let $ n \ge 3 $ odd and $ \{m_a\}_{a \in \frac{1}{n}\mathbb{Z}^2 \setminus \mathbb{Z}^2} $ a family of integers such that $ m_a \not= 0 $ occurs only for finitely many $a$. Then the product of Klein form:
$$ \prod_{a \in \frac{1}{n}\mathbb{Z}^2 \setminus \mathbb{Z}^2} \mathfrak{k}_a^{m_a}(\tau) $$
is a nearly holomorphic modular form for $ \Gamma(n)$ of weight $ -\sum_a m_a $ if and only if:
$$ \sum_a {m_a (n a_1)^2 } \equiv \sum_a {m_a (n a_2)^2 } \equiv \sum_a {m_a (n a_1)(n a_2) \equiv 0 \mod n }. $$
\end{prop} \begin{proof} Property (2) is nothing more than a reformulation of the Legendre relation: $ \eta_{[\tau,1]}(1)\tau - \eta_{[\tau,1]}(\tau) = 2 \pi i $. Property (5) is discussed in \cite[Chapter 3, Paragraph 4]{KL}.\\ For more details see: \cite[Chapters 2 and 3]{KL} or \cite[Chapter 19]{Lang:ef}. \end{proof}
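To illustrate property (2), take $ b=(1,0) $: then $ \epsilon(a,b)=-e^{-\pi i a_2} $, so that $ \mathfrak{k}_{(a_1+1,a_2)}(\tau)=-e^{-\pi i a_2}\,\mathfrak{k}_{(a_1,a_2)}(\tau) $; when $ a \in \frac{1}{n}\mathbb{Z}^2 $ this factor is a $ 2n $-th root of unity, which explains the roots of unity appearing in Proposition \ref{indici} below.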
We are now ready to define the Siegel function: $$ g_a(\tau) = \mathfrak{k}_a(\tau)\Delta(\tau)^{1/12},$$
\noindent where $ \Delta(\tau)^{1/12}$ is taken to be the square of the Dedekind eta function $ \eta(\tau) $ (not to be mistaken for the aforementioned $\eta_L $):
\begin{center} $ \eta(\tau)^2 = 2 \pi i\, q^{1/12}\prod_{n=1}^{\infty}(1-q^n)^2 $ with $ q=e^{2\pi i \tau} .$ \end{center}
\begin{prop}\label{sigfun} The set of functions $ \{h_a(\tau)=g_a(\tau)^{12n} \} _{a \in \frac{1}{n}\mathbb{Z}^2 \setminus \mathbb{Z}^2} $ constitute a Fricke family. Just like the Fricke functions $ f_a(\tau) $ of Theorem \ref{fricke} we have: $ h_a(\tau) \in F_n $, for every $ \gamma \in SL_2({\mathbb{Z}}) $ we have $ h_a(\gamma(\tau))=h_{a\gamma}(\tau) $ and in addition if $ \sigma_d \in Gal(\mathbb{Q}_n, \mathbb{Q}) $ then $ \sigma_d (h_{a_1, a_2}(\tau)) = h_{a_1, d a_2}(\tau) $. In other words, the Siegel functions, raised to the appropriate power, are permuted by the elements of the Galois Group Gal$( F_n,\mathbb{Q}(j) )$.
\end{prop} \begin{proof}
\cite[Chapter 2]{KL} or \cite{Siegel}. \end{proof}
\begin{thm}\label{unitàmodulari} Assume that $ n=p^k $ for $ p \not=2,3.$ Then the units in $ F_n $ (modulo constants) consist of the power products: $$ \prod_{a \in \frac{1}{n}\mathbb{Z}^2 \setminus \mathbb{Z}^2} g_a^{m_a}(\tau) $$
with: $$ \sum_a {m_a (n a_1)^2 } \equiv \sum_a {m_a (n a_2)^2 } \equiv \sum_a {m_a (n a_1)(n a_2) \equiv 0 \mod n } $$ and $$ \sum_a{m_a} \equiv 0 \mod 12. $$ \noindent In addition, if $ k \ge 2 $ it is not restrictive to consider power products of Siegel functions $g_a$ with primitive index $a=(a_1,a_2)$, namely such that $ p^{k-1}a \not\in \mathbb{Z}^2 $.
\end{thm}
\begin{proof} See \cite{Siegelgenerator}, \cite[Theorem 3.2, Chapter 2]{KL}, \cite[Theorem 5.2, Chapter 3]{KL} and \cite[Theorem 1.1, Chapter 4]{KL} . The last assertion is a consequence of the distribution relations discussed in \cite[pp. 17-23]{KL}. \end{proof}
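As a simple instance of the criterion, take the single primitive index $ a=(\frac{1}{p^k},0) $ with exponent $ m_a=12p^k $: the three quadratic congruences are trivially satisfied modulo $ p^k $ and $ \sum_a m_a =12p^k \equiv 0 \mod 12 $, so $ g_{a}^{12p^k} $ is a unit in $ F_{p^k} $; its divisor will be computed explicitly in Proposition \ref{divisori}.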
Following \cite{KL} it will be useful to decompose Gal($ F_{p^{k}},\mathbb{Q}(j) $). Let $ \mathfrak{o}_p $ be the ring of integers in the unramified quadratic extension of the $p$-adic field $ \mathbb{Q}_p $. The group of units $ \mathfrak{o}_p^* $ acts on $ \mathfrak{o}_p $ by multiplication and, after choosing a basis of $ \mathfrak{o}_p $ over the $p$-adic ring $ \mathbb{Z}_p $, we obtain an embedding: $$ \mathfrak{o}_p^* \longrightarrow GL_2(\mathbb{Z}_p).$$
\noindent We call the image in $ GL_2(\mathbb{Z}_p) $ the Cartan Group at the prime $ p $ and indicate it by $C_p $. It is worth noting that the elements of $ \mathfrak{o}_p^* $, written in terms of a basis of $ \mathfrak{o}_p $ over $ \mathbb{Z}_p $, are characterized by the fact that at least one of the two coefficients is a unit.
Consider now $ GL_2(\mathbb{Z}_p) $ as operating on $\mathbb{Z}_p^2 $ on the left and denote by $ G_{p,\infty} $ the isotropy group of $ \begin{pmatrix} 1 \\ 0 \end{pmatrix} $. Obviously we have:
\begin{center} $ G_{p,\infty} = \left\{ \begin{pmatrix} 1&b\\0&d \end{pmatrix} b \in \mathbb{Z}_p , d \in \mathbb{Z}_p^* \right\}. $
\end{center}
Since $ C_p $ operates simply transitively on the set of primitive elements (that is, vectors whose coordinates are not both divisible by $ p $) we have the following decomposition:
$$ GL_2(\mathbb{Z}_p)/\{\pm I\} = ( C_p/\{\pm I\} ) G_{p,\infty}. $$ \noindent For each integer $ k $ we define the reduction of the Cartan Group $ C_p \mod p^k$: $$ C(p^k)= C_p / p^k C_p $$ \noindent and let $ G_\infty(p^k) $ the reduction of $ G_{p,\infty} \mod p^k $:
\begin{center} $ G_\infty(p^k) = \left\{ \begin{pmatrix} 1&b\\0&d \end{pmatrix} b \in \mathbb{Z}/p^k\mathbb{Z} , d \in (\mathbb{Z}/p^k\mathbb{Z})^* \right\}.$
\end{center} \noindent We can now reformulate the previous decomposition as follows:
\begin{center} Gal($ F_{p^k} $,$ \mathbb{Q}(j) $) $ \simeq GL_2( \mathbb{Z}/p^k\mathbb{Z})/\{\pm I\} = ( C(p^k)/\{\pm I\} ) G_{\infty}(p^k). $
\end{center}
\noindent The embedding: $$ F_{p^k} \hookrightarrow \mathbb{Q}(\zeta_{p^k})((q^{1/{p^k}})) $$ enables us to measure, for each modular function $ f(\tau) \in F_{p^k} $, its order at $ \Gamma(p^k) \infty $ in terms of the local parameter $ q^{1/{p^k}} $.
\begin{prop} If $ a \in \frac{1}{p^k} \mathbb{Z}^2 \setminus \mathbb{Z}^2 $, the $q$-expansion of the Siegel functions shows that: \begin{center}
ord$_\infty (g_a(\tau))^{12p^k} = 6p^{2k} B_2(\langle a_1 \rangle)$
\end{center}
\noindent where $ B_2(X) = X^2 -X + \frac{1}{6} $ is the second Bernoulli polynomial and $ \langle X \rangle $ is the fractional part of $ X $.
\end{prop} \begin{proof} \cite[Chapter 19]{Lang:ef}. \end{proof}
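For instance, for $ p^k=5 $ and $ a=(\frac{1}{5},a_2) $ one finds $ B_2(\frac{1}{5})=\frac{1}{25}-\frac{1}{5}+\frac{1}{6}=\frac{1}{150} $, so that ord$_\infty (g_a(\tau))^{60} = 6\cdot 25\cdot \frac{1}{150}=1$.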
For every automorphism $ \sigma \in $ Gal($ F_{p^{k}},\mathbb{Q}(j) $) and each $h(\tau) \in F_{p^k}$ we have the prime $ \sigma^{-1}(\infty) $ which is such that:
\begin{center} ord$_{\sigma^{-1}(\infty)} (h(\tau)) = $ ord$_\infty \sigma (h(\tau)) $ \end{center}
\noindent and if $ \sigma \in G_\infty(p^k) $:
\begin{center} ord$_\infty (h(\tau)) = $ ord$_\infty \sigma (h(\tau)), $ \end{center}
\noindent so we may identify the cusps of $ X(p^k) $ with the elements of the Cartan Group (viewing it as a subgroup of Gal($ F_{p^{k}},\mathbb{Q}(j) $)). From now on, we will indicate the cusp $ \sigma^{-1}(\infty) $ simply by $ \sigma^{-1} $. \\ We may also index the primitive Siegel function by elements of the Cartan Group. Following \cite{KL}, if $ \alpha \in C(p^k)/\{\pm I\} $ we put:
\begin{center}
$ g_\alpha = g_{e_1 \alpha } $ where $ e_1=(\frac{1}{p^k},0). $
\end{center} \noindent It should be noted that $ g_\alpha $ is defined only up to a root of unity (this follows from Proposition \ref{Kleinforms}, second claim). Nonetheless, $ g_\alpha ^{12p^k} $ is uniquely defined, as is its divisor:
\begin{prop}\label{divisori} We have: $$ \mbox{div } g_\alpha^{12p^k} = 6p^{2k} \sum_{\beta \in C(p^k)/\{\pm I\} } B_2 \left( \left\langle \frac{T(\alpha\beta^{-1})}{p^k} \right\rangle \right) \beta $$
\noindent where the map T on $ 2 \times 2 $ matrices is defined as follows: $$ T:\begin{pmatrix} a&b\\c&d \end{pmatrix} \mapsto a . $$
\end{prop}
\begin{proof} See \cite[Paragraph 5.1]{KL} \end{proof}
The first part of \cite{KL} culminates with the theorem below. The computation of the order of the cuspidal divisor class group on $ X(p^k) $ could be considered analogous to that in the study of cyclotomic fields: instead of the generalized Bernoulli numbers $ B_{1,\chi} $ encountered in the latter case, in the former we will define the second generalized Bernoulli numbers $ B_{2,\chi} $.
\begin{thm}\label{CDCG} Let $ p $ be a prime $ \ge 5 $. Let $ R:= \mathbb{Z}[C(p^k)/\{\pm 1\}] $ and let $ R_0 $ be the ideal of $ R $ consisting of elements of degree $ 0$. The Cuspidal Divisor Class Group $ \mathfrak{C}_{p^k} $ is an $R-$module; more precisely, there exists a Stickelberger element $$ \theta = \displaystyle \frac{p^k}{2}\sum_{\beta \in C(p^k)/\{\pm 1\} } B_2 \left( \left\langle\frac{T(\beta)}{p^k} \right\rangle \right) \beta^{-1} \in \mathbb{Q}[C(p^k)/\{\pm 1\}]$$ such that: $$ \mathfrak{C}_{p^k} \cong R_0 / R \cap R \theta. $$
For any character $ \chi $ of $C(p^k)/\{\pm I\}$ (identified with an even character of $ C(p^k) $) we let: $$ B_{2,\chi} = \sum_{\alpha \in C(p^k)/\{\pm I\} } B_2 \left( \left\langle \frac{T(\alpha)}{p^k} \right\rangle \right) \chi(\alpha). $$ \noindent The order of the cuspidal divisor class group on $ X(p^k) $ is:
$$ | \mathfrak{C}_{p^k} | = \frac{12 p^{3k}}{|C(p^k) |} \prod_{\chi \not= 1} \frac{p^k}{2} B_{2,\chi}. $$
\end{thm}
\begin{proof}
\cite[Chapter 5]{KL}.
\end{proof}
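Note that $ |C(p^k)|=p^{2k-2}(p^2-1) $ (see the explicit description of the Cartan group in the next section), so the factor in front of the product is simply $ \frac{12 p^{3k}}{|C(p^k)|}=\frac{12p^{k+2}}{p^2-1} $.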
\section{Non-split Cartan Groups}
Following \cite{BB} or \cite[p. 194]{SerreMordel}, let $n $ be a positive integer and let $ A $ be a finite free commutative algebra of rank $ 2 $ over $ \mathbb{Z}/n\mathbb{Z}$ with unit discriminant. Fixing a basis for $ A $ we can use the action of $ A^* $ on $ A $ to embed $ A^* $ in $ GL_2(\mathbb{Z}/n\mathbb{Z}) $. If for every prime $ p|n $ the $ \mathbb{F}_p $ algebra $ A/pA $ is isomorphic to $ \mathbb{F}_{p^2} $, the image of $ A^* $ just now described is called a non-split Cartan subgroup of $ GL_2(\mathbb{Z}/n\mathbb{Z}) $. Therefore, such a group $ G $ has the property that for every prime $p$ dividing $n$ the reduction of $G\mod p $ is isomorphic to $ \mathbb{F}_{p^2}^* $. All the non-split Cartan subgroups of $ GL_2({\mathbb{Z}/n\mathbb{Z}}) $ are conjugate and so are their normalizers.
In this paper we are interested in the case $ n=p^k$ and $ p \not= 2,3$. The cases $p=2$ and $ p=3 $ are essentially equal but require more cumbersome calculations (see \cite[Theorem 5.3, Chapter 3]{KL} and \cite[Theorem 1.3, Chapter 4]{KL}). Choose a squarefree integer $ \epsilon \equiv 3 \mod 4 $ and such that its reduction modulo $ p$ is a quadratic non-residue. If $ p \equiv 3 \mod 4 $, a canonical choice could be $ \epsilon = -1 $. Let $ K= \mathbb{Q}(\sqrt{\epsilon}) $ and $ \mathbf{O}_K = \mathbb{Z}[\sqrt{\epsilon}]$ its ring of integers. After choosing a basis for $ \mathbf{O}_K $ over $ \mathbb{Z} $ we can represent any element of $ (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ with its corresponding multiplication matrix in $ GL_2({\mathbb{Z}/p^k\mathbb{Z}}) $ with respect to the chosen basis. This embedding produces a non-split Cartan subgroup of $ GL_2({\mathbb{Z}/p^k\mathbb{Z}}) $ and we will denote it by $ C_{ns}(p^k) $. Notice that such a group is isomorphic to the already introduced $ C(p^k) $.
To describe the normalizer $ C_{ns}^+(p^k) $ of $ C_{ns}(p^k) $ in $ GL_2({\mathbb{Z}/p^k\mathbb{Z}}) $ it will suffice to consider the following group automorphism induced by conjugation by a fixed $c \in C_{ns}^+(p^k) $: $$ \phi_c : C_{ns}(p^k) \longrightarrow C_{ns}(p^k) $$ $$ x \longmapsto \phi_c(x) = cxc^{-1}. $$ \noindent The group automorphism $ \phi_c $ extends to a ring automorphism of $ (\mathbf{O}_K / p^k \mathbf{O}_K) \cong (\mathbb{Z}/p^k\mathbb{Z})[\sqrt{\epsilon}] $ so if $ \phi_c $ is not the trivial automorphism we necessarily have $ \phi_c(\sqrt{\epsilon})=-\sqrt{\epsilon} $.
\begin{prop}\label{strutturanonsplit} If $ p \not= 2 $ we have the following isomorphism: $$ C_{ns}(p^k) \simeq \mathbb{Z}/p^{k-1}\mathbb{Z} \times \mathbb{Z}/p^{k-1}\mathbb{Z} \times \mathbb{Z}/(p^2-1)\mathbb{Z}, $$ $$ C_{ns}^+(p^k) \simeq (\mathbb{Z}/p^{k-1}\mathbb{Z} \times \mathbb{Z}/p^{k-1}\mathbb{Z} \times \mathbb{Z}/(p^2-1)\mathbb{Z}) \rtimes_{\phi} \mathbb{Z}/2\mathbb{Z}. $$ \end{prop}
\begin{proof}
Let $ a_1 + \sqrt{\epsilon}a_2 \in (\mathbf{O}_K / p^k \mathbf{O}_K)$: it is invertible if and only if $ (a_1,a_2)$ is primitive, in other words $ p $ does not divide both $ a_1 $ and $ a_2 $; hence $ |C_{ns}(p^k)| = p^{2k-2}(p^2-1) $. Consider the reduction modulo $p$: $$ C_{ns}(p^k) \longrightarrow \mathbb{F}_{p^2}^* $$ $$ a_1 + \sqrt{\epsilon}a_2 \longmapsto \overbar{a_1} + \sqrt{\epsilon}\overbar{a_2}. $$
The map is surjective; let $B$ be its kernel:
$$ B:=\{x \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* \mbox{ such that } x \equiv 1\mod p \}. $$
$ |B|=p^{2k-2} $: it remains to check that $ B \simeq \mathbb{Z}/p^{k-1}\mathbb{Z} \times \mathbb{Z}/p^{k-1}\mathbb{Z} $. Let $ k\ge2 $ and $ p \not=2 $. First, we check that for all $ x \in \mathbf{O}_K $ we have $ (1+xp)^{p^{k-2}} \equiv 1+xp^{k-1} \mod p^k $. In case $ k=2 $ there is nothing to prove. We proceed by induction on $ k $: suppose the claim is true for some $ k\ge2 $. We have:
$$ (1+xp)^{p^{k-2}} = 1+xp^{k-1} + yp^k \quad \mbox{for some } y \in \mathbf{O}_K, $$ $$ (1+xp)^{p^{k-1}} = \sum_{j=0}^p {p \choose j}{(1+xp^{k-1})}^{p-j}({yp^k})^j \equiv (1+xp^{k-1})^p \mod p^{k+1}, $$
$$ (1+xp^{k-1})^p = \sum_{j=0}^p {p \choose j}(xp^{k-1})^j \equiv 1 + xp^k \mod p^{k+1}. $$
\noindent In conclusion: $ (1+xp)^{p^{k-1}} \equiv 1 + xp^k \mod p^{k+1} $. From the previous claim it follows that if $ h\le k-1$ is such that $ x \in p^h\mathbf{O}_K \setminus p^{h+1}\mathbf{O}_K $ then the reduction of $ 1+xp $ in $ B $ has order $ p^{k-1-h} $. So $B $ has $ p^{2k-2}-p^{2k-4}$ elements of order $ p^{k-1} $ and the proposition is proved. The second isomorphism follows immediately.
\end{proof}
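For example, for $ k=1 $ the proposition says that $ C_{ns}(p)\simeq \mathbb{F}_{p^2}^* $ is cyclic of order $ p^2-1 $ and that $ |C^+_{ns}(p)|=2(p^2-1) $; for $ p=5 $ these orders are $ 24 $ and $ 48 $.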
We present now the modular curves $ X_{ns}(n) $ and $ X_{ns}^+(n) $ associated to the subgroups $ C_{ns}(n) $ and $ C_{ns}^+(n) $. First of all, $ Y(n) $ (the non-cuspidal points of $ X(n) $) are isomorphism classes of pairs $(E,(P,Q)) $ where $ E $ is a complex elliptic curve and $ (P,Q) $ constitute a $ \mathbb{Z}/n\mathbb{Z}$-basis of the $n$-torsion subgroup $ E[n]$ with $ e_n(P,Q)=e^{2\pi i / n}$ where $ e_n $ is the Weil pairing discussed in details in \cite[Chapter 7]{Diamond:mf}. By definition, two pairs $(E,(P,Q)) $ and $(E',(P',Q')) $ are considered equivalent in $ Y(n) $ if and only if there exists an isomorphism between $ E $ and $ E'$ taking $ P $ to $ P' $ and $ Q $ to $ Q'$. Notice that the definition is well-posed since the Weil pairing is invariant under isomorphism, i.e. if $ f: E \rightarrow E' $ is an isomorphism of elliptic curves and $ e'_n $ is the Weil pairing on $ E' $ we have: $$ e'_n(f(P),f(Q))=e_n(P,Q). $$ \noindent Since $ GL_2(\mathbb{Z}/n\mathbb{Z}) $ acts on $ E[n] $ and since for every $ \gamma \in GL_2(\mathbb{Z}/n\mathbb{Z}) $ we have $ e_n(\gamma(P,Q)) = e_n(P,Q)^{\det \gamma} $, the group $ SL_2(\mathbb{Z}/n\mathbb{Z}) $ acts on $ Y(n) $ on the right in the following way: $$ \begin{pmatrix} a&b\\c&d \end{pmatrix} \cdot(E,(P,Q)) = (E,(aP+cQ,bP+dQ)). $$ Define:
$$ C'_{ns}(n) := C_{ns}(n) \cap SL_2(\mathbb{Z}/n\mathbb{Z}), $$ $$ C'^+_{ns}(n) := C^+_{ns}(n) \cap SL_2(\mathbb{Z}/n\mathbb{Z}), $$ $$ \Gamma_{ns}(n) := \{M \in SL_2({\mathbb{Z}}) \mbox{ such that } M \equiv M'\mbox{ mod } n\ \mbox{for some } M' \in C'_{ns}(n)\}, $$ $$ \Gamma^+_{ns}(n) := \{M \in SL_2({\mathbb{Z}}) \mbox{ such that } M \equiv M'\mbox{ mod } n\ \mbox{for some } M' \in C'^+_{ns}(n)\}. $$
\noindent A possible explicit description for these groups is: $$ C_{ns}(p^k)= \left\{M_s=\begin{pmatrix} a&b\\\epsilon b&a \end{pmatrix} \in GL_2({\mathbb{Z}/p^k \mathbb{Z}}) \mbox{ with } s=a+\sqrt{\epsilon}b \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* \right\}, $$ $$ C^+_{ns}(p^k) = \left\langle {\begin{pmatrix} a&b\\\epsilon b&a \end{pmatrix}} \in C_{ns}(p^k) \mbox{ , } {C=\begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix}} \right\rangle. $$
\noindent If $ s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ we define $ |s|:= s \bar{s} \in (\mathbb{Z} / p^k \mathbb{Z})^*$ where $ \bar{s} $ is the conjugate of $ s $. So we have:
$$ C'_{ns}(p^k)= \left\{M_s=\begin{pmatrix} a&b\\\epsilon b&a \end{pmatrix} \in C_{ns}(p^k) \mbox{ such that } |s|=|a+\sqrt{\epsilon}b|=1 \mbox{ mod } p^k \right\}, $$
$$ C'^+_{ns}(p^k) = C'_{ns}(p^k) \cup \left\{M_s C=\begin{pmatrix} a&-b\\\epsilon b&-a \end{pmatrix} \mbox{ with } |s|=|a+\sqrt{\epsilon}b|=-1 \mbox{ mod } p^k \right\}. $$
Points in $ Y_{ns}(n) $ are nothing but orbits of $Y(n)$ under the action of $ C'_{ns}(n) $ and similarly for $ Y_{ns}^+(n) $ and $ C'^+_{ns}(n) $. The above-mentioned action extends uniquely to $ X(n) $. The quotients $ X_{ns}(n) $ and $ X^+_{ns}(n) $ are isomorphic as Riemann surfaces to $ \mathbb{H}^*/\Gamma_{ns}(n) $ and $ \mathbb{H}^*/\Gamma^+_{ns}(n) $ respectively.
Using the identification of the cusps of $ X(p^k) $ with the elements of $ C(p^k)/\{\pm I\}$ explained in the previous section we obtain a shorter proof of the first claim of \cite[Proposition 7.10]{BB}:
\begin{prop}\label{numerocuspidi} We identify the cusps of $ X_{ns}(p^k) $ with $ (\mathbb{Z}/p^k\mathbb{Z})^* $ and the cusps of $ X^+_{ns}(p^k) $ with $ H=(\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\} $. So $ X_{ns}(p^k) $ has $ p^{k-1}(p-1) $ cusps and $ X^+_{ns}(p^k) $ has $ p^{k-1}\frac{p-1}{2} $ cusps. \end{prop}
\begin{proof}
We identify the cusps of $ X(p^k) $ with the elements of $ C(p^k)/\{\pm I\} \cong C_{ns}(p^k)/\{\pm I\} \cong (\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\} $. Bearing this in mind, it is clear that $ \pm M_r, \pm M_{r'} \in C_{ns}(p^k)/\{\pm I\} $ represent the same cusp in $ X_{ns}(p^k) $ if and only if there exists $ s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ with $ |s|=1 $ such that $ \pm r=\pm s r' $. But this is equivalent to say that $ |r|=|sr'|=|r'| $ or $ \det M_r = \det M_{r'} \mod p^k$ and consequently we may identify the cusps of $ X_{ns}(p^k) $ with $ (\mathbb{Z}/p^k\mathbb{Z})^* $. For the same reason $ \pm M_r, \pm M_{r'} \in C_{ns}(p^k)/\{\pm I\} $ are indistinguishable in $ X^+_{ns}(p^k) $ if and only if they were already indistinguishable in $ X_{ns}(p^k) $ or there exists $ s' \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ with $ |s'|=-1 $ such that $\pm r = \pm s'\overbar{r'} $ that is equivalent to say $ |r|=|s'\overbar{r'}|=-|r'| $ or $ \det M_r = - \det M_{r'} \mod p^k$. In conclusion we may identify the cusps of $ X^+_{ns}(p^k) $ with $ H=(\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\} $. \end{proof}
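The identification used in the proof is also easy to check numerically when $ k=1 $; the following sketch (an illustration only, with our own notation and a brute-force choice of the non-residue $ \epsilon $) enumerates $ (\mathbf{O}_K / p \mathbf{O}_K)^* $ and counts the values of $ |s| $ and of $ \pm|s| $:
\begin{verbatim}
# Illustrative check for k = 1: the cusps of X_ns(p) correspond to the values
# of the norm |s| = a^2 - eps*b^2 in (Z/pZ)^*, and those of X^+_ns(p) to the
# classes {h, -h}.  The choice of eps and all names are ours.
def cusp_counts(p):
    eps = next(e for e in range(2, p) if pow(e, (p-1)//2, p) == p-1)
    norms = {(a*a - eps*b*b) % p
             for a in range(p) for b in range(p) if (a, b) != (0, 0)}
    plus = {frozenset({h, (-h) % p}) for h in norms}
    return len(norms), len(plus)

for p in (5, 7, 11, 13):
    assert cusp_counts(p) == (p - 1, (p - 1)//2)   # p-1 and (p-1)/2 cusps
\end{verbatim}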
\noindent Furthermore, we can deduce that the covering $ \pi: X(p^k) \to X_{ns}(p^k) $ is not ramified above the cusps. So the ramification degree of a cusp of $ X_{ns}(p^k) $ under the covering projection $ \pi': X_{ns}(p^k) \to SL_2(\mathbb{Z})\textbackslash \mathbb{H}^* $ is equal to the one of a cusp of $ X(p^k) $ with respect to $ \pi'': X(p^k) \to SL_2(\mathbb{Z})\textbackslash \mathbb{H}^* $, that is $ p^k $. The same happens for $X^+_{ns}(p^k) $.
\section{Modular units on non-split Cartan curves} Let $ t \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) $: write it in the form $ t=a_1 + \sqrt{\epsilon}a_2 $ choosing $ a_1, a_2 \in \mathbb{Z} $ such that $ 0 \le a_1 \le \frac{p^k-1}{2} $, $ 0 \le a_2 \le p^k-1$ and $a_2 \le \frac{p^k-1}{2}$ if $ a_1=0$. Define: $$ [t] := \frac{1}{p^k}(a_1,a_2). $$
If $ s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ we define $ [s]:=[\{\pm s\}] $. Notice that if $ s,t \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $, $ |s|=1 $ and $\gamma_s \in \Gamma_{ns}(p^k) $ lifts $M_s $ we have:
$$ [t] \gamma_s - [ts] \in \mathbb{Z}^2 \mbox{ or } [t] \gamma_s + [ts] \in \mathbb{Z}^2 . $$ \noindent
Analogously if $ |s|=-1 $ and $ \gamma $ lifts $ M_sC $ to $ \Gamma^+_{ns}(p^k) $ we have:
$$ [t] \gamma - [\overbar{ts}] \in \mathbb{Z}^2 \mbox{ or } [t] \gamma + [\overbar{ts}] \in \mathbb{Z}^2. $$ These relations together with Proposition \ref{Kleinforms} imply:
\begin{prop}\label{indici}
The Klein forms: $ \mathfrak{k}_{[t]\gamma_s}(\tau) $ and $ \mathfrak{k}_{[ts]}(\tau) $ up to a $ 2p^k-$th root of unity represent the same function in the sense that:
$$ \mathfrak{k}_{[t]\gamma_s}(\tau) = c \mathfrak{k}_{[ts]}(\tau) $$ for some $ c \in \bm{\mu_{2p^k}} $.
Similarly, for the Klein forms $ \mathfrak{k}_{[t]\gamma}(\tau) $ and $ \mathfrak{k}_{[\overbar{ts}]}(\tau) $ we have:
$$ \mathfrak{k}_{[t]\gamma}(\tau) = c' \mathfrak{k}_{[\overbar{ts}]}(\tau) $$
for some $ c' \in \bm{\mu_{2p^k}} $. \end{prop}
\noindent
For $ h \in (\mathbb{Z}/p^k\mathbb{Z})^* $ we define the following complex-valued functions on $ \mathbb{H} $:
$$ T_h (\tau) := \prod_{t \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) , |t|=h}\mathfrak{k}_{[t]}(\tau), $$
$$ G_h(\tau) := T_h(\tau)(\Delta(\tau))^{p^{k-1}\frac{p+1}{24}} = \prod_{t \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) , |t|=h}g_{[t]}(\tau). $$
\noindent For $ h \in (\mathbb{Z}/p^k\mathbb{Z})^*/\{ \pm 1 \} $ consider:
$$ T^+_h (\tau) := \prod_{t \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) , \pm|t|= h}\mathfrak{k}_{[t]}(\tau), $$
$$ G^+_h(\tau) := T^+_h(\tau)(\Delta(\tau))^{p^{k-1}\frac{p+1}{12}} = \prod_{t \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) , \pm|t|= h}g_{[t]}(\tau) .$$
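Let us record why these particular powers of $ \Delta $ appear: the norm map $ (\mathbf{O}_K / p^k \mathbf{O}_K)^* \rightarrow (\mathbb{Z}/p^k\mathbb{Z})^* $ is surjective with fibres of cardinality $ p^{k-1}(p+1) $, so the product defining $ T_h $ has $ \frac{p^{k-1}(p+1)}{2} $ factors and the one defining $ T^+_h $ has $ p^{k-1}(p+1) $ factors. In both cases the exponent of $ \Delta $ equals the number of Klein forms divided by $ 12 $, i.e. each Klein form is paired with one factor $ \Delta(\tau)^{1/12} $, which is why $ G_h $ and $ G^+_h $ are products of Siegel functions.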
\begin{prop}\label{FunzioniG} Let $ p \not= 2,3 $ be a prime. Consider:
$$ g(\tau) = \prod_{x \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) } g_{[x]}^{m(x)}(\tau) $$ \noindent and suppose that it is a modular unit on $ X(p^k) $ (or equivalently that it satisfies the conditions of Theorem \ref{unitàmodulari}). If $ g(\tau) $ is a modular unit on $ X_{ns}(p^k) $ there exist integers $ \{n_h\}_{h \in (\mathbb{Z}/p^k\mathbb{Z})^* }$ such that: $$ g(\tau)= \prod_{h \in (\mathbb{Z}/p^k\mathbb{Z})^* }G_h^{n_h}(\tau) .$$
\noindent Similarly, if the function $ g(\tau) $ is a modular unit on $ X^+_{ns}(p^k) $, there exist integers $ \{n^{+}_h\}_{h \in (\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1 \}}$ such that: $$ g(\tau)= \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_h}^{n^+_h}(\tau)} .$$
\end{prop}
\begin{proof} We look for conditions on the exponents $ \{m(x)\}_{x \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\})} $ that guarantee: $$ \frac{g(\sigma^{-1}(\tau))}{g(\tau)} \in \mathbb{C} \mbox{ for every } \sigma \in \Gamma_{ns}(p^k) \mbox{ (respectively } \Gamma^+_{ns}(p^k) \mbox{)} . $$ \noindent From Proposition \ref{Kleinforms}, assertion (3), the fact that $ \Delta(\tau) $ is weakly modular of weight 12 and that by hypothesis $ 12 $ divides $ \sum{m(x)} $ we have: $$ g(\sigma^{-1}(\tau)) = (\Delta(\sigma^{-1} (\tau)))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[x]}^{m(x)}(\sigma^{-1}(\tau))} = $$ $$ = (\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[x]\sigma^{-1}}^{m(x)}(\tau)} . $$
\noindent By Proposition \ref{strutturanonsplit}, $ C'_{ns}(p^k)$ is a cyclic group with $ (p+1)p^{k-1} $ elements. Let $ M_r $ be a generator where $ r $ is a generator of:
$$ \{ s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* \mbox{ with } |s|= 1 \} .$$ \noindent
Every $S \in C'^+_{ns}(p^k) \setminus C'_{ns}(p^k) $ is of the form $ M_tC$ where $ t \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ and $ |t|=-1 $. Fix $ S $ and choose $ \gamma_r $ lifting $ M_r $ in $ \Gamma_{ns}(p^k) $ and $\gamma_t $ lifting $ M_t $ in $ GL_2(\mathbb{Z}) $ with $ \det \gamma_t=-1 $. Of course $ \gamma_tC $ lifts $ S $ in $ \Gamma^+_{ns}(p^k) $. \\ For every $ j $ we have that: $$ (([x] \mbox{ mod } \mathbb{Z}^2)/\{\pm 1\}) \longmapsto (([xr^j] \mbox{ mod } \mathbb{Z}^2)/\{\pm 1\}) \mbox{ and} $$ $$ (([x] \mbox{ mod } \mathbb{Z}^2)/\{\pm 1\}) \longmapsto (([\overbar{xr^jt}] \mbox{ mod } \mathbb{Z}^2)/\{\pm 1\}) $$
\noindent are permutations of the primitive elements in $ ((\frac{1}{p^k}\mathbb{Z})^2 \mod \mathbb{Z}^2 )/(\pm 1) $.
\noindent
As a consequence of these observations and Proposition \ref{indici}, taking $ \sigma=(\gamma_r)^j $ we have: $$ (\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[x]\gamma_r^{-j}}^{m(x)}(\tau)} = (\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[xr^j]\gamma_r^{-j}}^{m(xr^j)}(\tau)} = $$ $$ = c_j(\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[x]}^{m(xr^j)}(\tau)} = c_j \prod{g_{[x]}^{m(xr^j)}(\tau)} ,$$
\noindent where $ \{c_j\}_j $ are $ 2p^k-$th roots of unity. Taking $ \sigma = (\gamma_r)^j \gamma_t C $ we obtain: $$ (\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[x]C\gamma_t^{-1} \gamma_r^{-j}}^{m(x)}(\tau)} = (\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[\overbar{xr^jt}]C\gamma_t^{-1} \gamma_r^{-j}}^{m(\overbar{xr^jt})}(\tau)} = $$ $$ = d_j (\Delta(\tau))^{\frac{1}{12}\sum{m(x)}} \prod{\mathfrak{k}_{[x]}^{m(\overbar{xr^jt})}(\tau)} = d_j \prod{g_{[x]}^{m(\overbar{xr^jt})}(\tau)} ,$$
\noindent where $ \{d_j\}_j $ are $ 2p^k-$th roots of unity. Consider the following expression: $$ \frac{g(\gamma_r^{-1}(\tau))}{g(\tau)} = c_1 \prod{g^{m(xr)-m(x)}_{[x]}(\tau) } .$$ \noindent By the independence of Siegel functions \cite[p.42 or p.120]{KL} a product $ \prod {g_{[x]}^{l(x)}} $ is constant if and only if the exponents $ l(x)$ are all equal. So the previous quotient is constant if and only if:
$$ a(xr^j)= m(xr^{j+1})-m(xr^j) \mbox{ satisfy } a(xr^j)=a(xr^l) \mbox{ } \mbox{ for all } j,l \in \mathbb{Z}. $$
\noindent But $ (\gamma_r)^{\frac{p+1}{2} p^{k-1}} \equiv -I \mod p^k$ and $ r^{\frac{p+1}{2} p^{k-1}} = -1 \mod p^k $. So we have that $ \sum_{j=1}^{ \frac{p+1}{2} p^{k-1}}a(xr^j) =0 $ and consequently $ a(xr^j)=0 $ for every $ j $, which implies that $ m(xr^j)$ does not depend on $ j $.
Since $g(\tau)$ is $ \Gamma(p^k)-$invariant and every element in $ \Gamma_{ns}(p^k) $ can be written in the form $ \gamma\gamma_r^j $ with $ \gamma \in \Gamma(p^k) $, we conclude that if $ g(\sigma^{-1}(\tau))/g(\tau) \in \mathbb{C} \mbox{ for every } \sigma \in \Gamma_{ns}(p^k) $, then $ |x|=|y|$ implies $ m(x)=m(y)$. For each $ h $ invertible mod $p^k $ choose $ x $ with $ |x|=h $, put $ n_h:=m(x)$, and the first claim follows.
Consider now:
$$ \frac{g((C\gamma_t^{-1})(\tau))}{g(\tau)} = d_0 \prod{g^{m(\overbar{xt})-m(x)}_{[x]}(\tau) } .$$
\noindent If this quotient is constant the exponent of $ g_{[x]}(\tau) $ is equal to the exponent of the Siegel function $ g_{[\overbar{x}rt]}(\tau) $. So: $$ m(\overbar{xt})-m(x) = m(x\overbar{rt^2})-m(\overbar{x}rt) $$ or equivalently:
$ m(\overline{xt}) + m(\overbar{x}rt) = m(x) + m(x\overbar{rt^2}) $. But $ |rt\overbar{t^{-1}}|=1 $ so $ m(\overline{xt}) = m(\overbar{x}rt) $ and $ |\overbar{rt^2}|=1 $ so $ m(x) = m(x\overbar{rt^2}) $. Hence $ m(x)=m(\overbar{xt}) $ and observe that $ |x|=-|\overbar{xt}| $. So, in consideration of the previous result, we can conclude that $ g(\sigma^{-1}(\tau))/g(\tau) \in \mathbb{C} \mbox{ for every } \sigma \in \Gamma^+_{ns}(p^k) $ implies that if $ |x|=|y|$ or $ |x|=-|y| $ then $m(x)=m(y)$. For every $ h \in (\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\} $ choose $ x $ such that $ \pm|x|=h $ and define $ n^+_h := m(x) $ and the second claim follows. \end{proof}
\begin{prop}\label{Somme} The product: $$ \prod_{h \in (\mathbb{Z}/p^k\mathbb{Z})^* }T_h^{n_h}(\tau) $$ is a nearly holomorphic modular form for $ \Gamma(p^k) $ if and only if $p $ divides $ \sum_h n_h h$. \end{prop} \begin{proof} First of all, for every $ h $ invertible mod $ p^k $:
$$ \mbox{ (1) } \sum_ { \begin{scriptsize} \begin{array}{c} \pm s \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) \\ |\pm s|=h \end{array} \end{scriptsize} }\left(\frac{1}{2}(s+\overline{s})\right)^2 = \frac{h}{4}(p+1)p^{k-1} \mod p^k, $$ $$ \mbox{ (2) } \sum_{ \begin{scriptsize} \begin{array}{c} \pm s \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}
) \\ |\pm s|=h \end{array} \end{scriptsize} }\left(\frac{1}{2\sqrt{\epsilon}}(s-\overline{s})\right)^2 = - \frac{h}{4\epsilon}(p+1)p^{k-1} \mod p^k, $$
$$ \mbox{ (3) } \sum_{ \begin{scriptsize} \begin{array}{c} \pm s \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) \\ |\pm s|=h \end{array} \end{scriptsize} }\left(\frac{1}{2}(s+\overline{s})\right)\left(\frac{1}{2\sqrt{\epsilon}}(s-\overline{s})\right) = 0 \mod p^k . $$
We prove only the first assertion because the other statements can be shown by the same argument. Every $ s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ with $ |s|= h $ can be written as $ s=r^i\alpha_h $, where $ r $ is a generator of the subgroup $ \{ t \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* \mbox{ with } |t|= 1 \} $ and $ \alpha_h $ is a fixed element such that $ |\alpha_h|=h $.
$$ \sum_{|\pm s|=h}(\frac{1}{2}(s+\overline{s}))^2 = \sum_{i=0}^{\frac{p+1}{2}p^{k-1}-1}(\frac{1}{2}(r^i\alpha_h +\overline{ r^i\alpha_h}))^2 = $$ $$ = \displaystyle\frac{\alpha_h^2}{4}\sum_ {i=0}^{\frac{p+1}{2}p^{k-1}-1}(r^{2i}) +
\displaystyle\frac{\overline{\alpha_h^2}}{4}\sum_ {i=0}^{\frac{p+1}{2}p^{k-1}-1}(r^{-2i}) + \frac {\alpha_h\overbar{\alpha_h}}{4} (p+1)p^{k-1} $$ \noindent and the assertion (1) follows because: $$\displaystyle \sum_{i=0}^{\frac{p+1}{2}p^{k-1}-1}r^{2i}= \displaystyle \sum_{i=0}^{\frac{p+1}{2}p^{k-1}-1}r^{-2i} = \displaystyle\frac{1- r^{(p+1)p^{k-1}} }{1-r^2} =0 \mod p^k $$ \noindent To prove this proposition we apply Proposition \ref{Kleinforms} to the product $ \prod_{h}T_h^{n_h}(\tau) $. Considering that for every $ s= a_1+\sqrt{\epsilon}a_2 \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ we have: $$ p^k[ s] \equiv (a_1,a_2) \mbox{ mod } (p^k\mathbb{Z})^2 \mbox{ or } p^k[ s] \equiv -(a_1,a_2) \mbox{ mod } (p^k\mathbb{Z})^2 \mbox{ and:} $$ $$ (a_1,a_2) \equiv \left( \frac{1}{2}(s+\overline{s}),\frac{1}{2\sqrt{\epsilon}}(s-\overline{s}) \right) \mbox{ mod } (p^k\mathbb{Z})^2 $$ and reformulating condition (5) of Proposition \ref{Kleinforms} in terms of assertions (1),(2) and (3) we attain the desired result. \end{proof}
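\noindent As a purely numerical illustration (not part of the proof), the following Python sketch checks identity (1) for $ k=1 $ and a few small primes. The realisation of $ \mathbf{O}_K / p \mathbf{O}_K $ as $ \mathbb{F}_p(\sqrt{\epsilon}) $ for a quadratic non-residue $ \epsilon $, and the chosen test primes, are assumptions of the sketch only.
\begin{verbatim}
# Check of identity (1) for k = 1: the sum of ((s+sbar)/2)^2 over the
# classes with |s| = h equals h(p+1)/4 mod p.
def check_identity_one(p):
    eps = next(e for e in range(2, p) if pow(e, (p - 1) // 2, p) == p - 1)
    inv4 = pow(4, p - 2, p)
    for h in range(1, p):
        seen, total = set(), 0
        for a in range(p):
            for b in range(p):
                if (a, b) == (0, 0) or (a, b) in seen:
                    continue
                seen.add((a, b)); seen.add((-a % p, -b % p))  # work mod {+-1}
                if (a * a - eps * b * b) % p == h:            # norm condition
                    total += a * a                            # ((s+sbar)/2)^2
        assert total % p == (h * (p + 1) * inv4) % p, (p, h)
    print("identity (1) verified for p =", p)

for p in [5, 7, 11, 13]:
    check_identity_one(p)
\end{verbatim}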
From this proposition it follows immediately that the functions $ T^+_h({\tau}) $ are nearly holomorphic for $ \Gamma(p^k) $. We will examine them further in detail.
For every $ s= \begin{pmatrix} a&b\\c&d \end{pmatrix} \in SL_2({\mathbb{Z}}) $ define: $$ J_s(\tau)=(c\tau+d)^{-(p+1)p^{k-1}} , \tau \in \mathbb{H}. $$
\noindent \begin{prop}\label{piùmenodiedrale} For every prime $ p \equiv 3 $ mod $ 4 $, for every $ h \in ((\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\}) $ and for every $ s \in \Gamma^+_{ns}(p^k) $ we have: $$ T^+_h(s(\tau))=J_s(\tau) T^+_h(\tau) $$ in other words $ T^+_h(\tau) $ is a nearly holomorphic modular form for $ \Gamma^+_{ns}(p^k) $ of weight $-(p+1)p^{k-1} $.\\ \noindent If $ p \equiv 1 $ mod $ 4 $ and $ s \in \Gamma_{ns}(p^k) $ we have: $$ T^+_h(s(\tau))=J_s(\tau) T^+_h(\tau) $$ in other words $ T^+_h(\tau) $ is a nearly holomorphic modular form for $ \Gamma_{ns}(p^k) $ of weight $ -(p+1)p^{k-1} $.\\ \noindent If $ p \equiv 1 $ mod $ 4 $ and $ s \in \Gamma^+_{ns}(p^k) \setminus \Gamma_{ns}(p^k) $ we have: $$ T^+_h(s(\tau))= - J_s(\tau) T^+_h(\tau) .$$ \end{prop}
\begin{proof}
It is clear from Proposition \ref{Kleinforms} that for every $ s\in \Gamma^+_{ns}(p^k) $ there exists a $ 2p^k-$th root of unity $ c $ such that: $ T^+_h(s(\tau))=c T^+_h(\tau)J_s(\tau) $ so it is natural to define: $$ C_h(s) = \displaystyle\frac{T^+_h(s(\tau))}{T^+_h(\tau)J_s(\tau)} \in \bm{\mu_{2p^k}}. $$ \noindent On the one hand: $$ T^+_h((ss')(\tau)) = C_h(ss')T^+_h(\tau)J_{ss'}(\tau), $$ on the other hand: $$ T^+_h(s(s'(\tau)))= C_h(s)T^+_h(s'(\tau))J_s(s'(\tau)) = C_h(s)C_h(s')J_s(s'(\tau))J_{s'}(\tau)T^+_h(\tau). $$
\noindent Considering that $ J_{ss'}(\tau) = J_s(s'(\tau))J_{s'}(\tau) $ we have: $$ C_h(ss')=C_h(s)C_h(s'). $$ From Proposition \ref{Somme} we deduce easily that $ C_h(\pm\Gamma(p^k))=1 $ for every $ h $. So $C_h $ are characters of $ \Gamma^+_{ns}(p^k)/\pm \Gamma(p^k) $. Since this quotient is isomorphic to $ C'^+_{ns}(p^k)/\{\pm I\} $ and since for every $ \alpha \in C'^+_{ns}(p^k) \setminus C'_{ns}(p^k) $ we have $ \alpha^2 = -I $, by Proposition \ref{strutturanonsplit} we obtain that $ \Gamma^+_{ns}(p^k)/\pm \Gamma(p^k) $
is a dihedral group of $ (p+1)p^{k-1} $ elements. These observations entail \textit{ipso facto} that $ C_h(s) \in \{ \pm 1 \} $. As in Proposition \ref{FunzioniG} choose a matrix $ \gamma_r \in SL_2(\mathbb{Z}) $ lifting $ M_r \in C'_{ns}(p^k) $ where $ r $ generates the subgroup of $ (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ of elements of norm 1.\\ Choose $ \gamma = \begin{pmatrix} a&b\\c&d \end{pmatrix} $ in $ \Gamma^+_{ns}(p^k) \setminus \Gamma_{ns}(p^k)$ . It is not restrictive to suppose that $ a=d \mbox{ mod } 2 $. If this did not happen we would alternatively choose: $$ \gamma = \begin{pmatrix} a&b\\c&d \end{pmatrix} \begin{pmatrix} 1&p^k\\0&1 \end{pmatrix} = \begin{pmatrix} a&{ap^k+b}\\c&{cp^k+d} \end{pmatrix}.$$
If $ a\not\equiv d \mbox{ mod } 2$, then $ b $ and $ c $ are necessarily odd because $ ad-bc=1 $, so the new diagonal coefficients satisfy $ a \equiv cp^k+d \mbox{ mod } 2$. \\
Notice that $ \{ \gamma\gamma_r^j\gamma^{-1} \mbox{, } \gamma\gamma_r^j \}_{j=1,...,\frac{p+1} {2}(p^{k-1})}$ is a set of representatives of cosets for the quotient group $ \Gamma^+_{ns}(p^k)/\pm \Gamma(p^k) $. Furthermore if $ h \in (\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\} $ and $s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\} $ with $ \pm|s|=h $, there exists a $ 2p^k-$th root of unity $ c' $ such that:
$$ T^+_h(\tau) = c' \prod_{j=1}^{\frac{p+1}{2}p^{k-1}} {\mathfrak{k}_{[s]\gamma\gamma_r^j\gamma^{-1}}}(\tau)\prod_{j=1}^{\frac{p+1}{2}p^{k-1}} {\mathfrak{k}_{[s]\gamma\gamma_r^j}}(\tau). $$ \noindent We calculate:
$$ T^+_h(\gamma(\tau)) = c' J_{\gamma}(\tau) \prod_{j=1}^{\frac{p+1}{2}p^{k-1}} {\mathfrak{k}_{[s]\gamma\gamma_r^j}}(\tau)\prod_{j=1}^{\frac{p+1}{2}p^{k-1}} {\mathfrak{k}_{[s]\gamma\gamma_r^j\gamma}}(\tau) = $$
$$ = c' (-1)^{\frac{p+1}{2}} J_{\gamma}(\tau) \prod_{j=1}^{\frac{p+1}{2}p^{k-1}} {\mathfrak{k}_{[s]\gamma\gamma_r^j}}(\tau)\prod_{j=1}^{\frac{p+1}{2}p^{k-1}} {\mathfrak{k}_{-[s]\gamma\gamma_r^j\gamma}}(\tau) .$$ \noindent But $ \gamma\gamma_r^j\gamma^{-1} \equiv -\gamma\gamma_r^j\gamma $ mod $ p^k $ and $ \gamma^{-1}+\gamma $ (in agreement with the previous convention) has all even coefficients so: $$ [s]\gamma\gamma_r^j\gamma^{-1} - (-[s]\gamma\gamma_r^j\gamma) = [s]\gamma\gamma_r^j( \gamma^{-1}+\gamma) \in (2\mathbb{Z})^2 $$ and considering Proposition \ref{Kleinforms} part (2) we have: $$ \frac{{\mathfrak{k}_{-[s]\gamma\gamma_r^j\gamma}}(\tau) }{{\mathfrak{k}_{[s]\gamma\gamma_r^j\gamma^{-1}}}(\tau)} \in \bm{\mu_{p^k}}. $$ Therefore: $$ C_h(\gamma) = (-1)^{\frac{p+1}{2}}\prod_{j=1}^{\frac{p+1}{2}p^{k-1}}\frac{{\mathfrak{k}_{-[s]\gamma\gamma_r^j\gamma}}(\tau) }{{\mathfrak{k}_{[s]\gamma\gamma_r^j\gamma^{-1}}}(\tau)}, $$
\noindent so $ C_h(\gamma)(-1)^{\frac{p+1}{2}} \in \bm{\mu_{p^k}} \cap \{\pm 1\} = \{1\} $. Hence necessarily $ C_h(\gamma)=(-1)^{\frac{p+1}{2}} $ for every $ \gamma \in \Gamma^+_{ns}(p^k) \setminus \Gamma_{ns}(p^k) $ and the proposition follows.
\end{proof}
\begin{thm}\label{powprod} If $ p \not= 2,3 $ the subgroup of modular units in $ F_{p^k} $ of $ X^+_{ns}(p^k) $ consists (modulo constants) of power products: $$ g(\tau)= \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_h}^{n^+_h}(\tau)} $$ where $ d=\displaystyle\frac{12}{\gcd(12,p+1)} \mbox{ divides } \sum_{h}n^+_h $.
\end{thm}
\begin{proof}
By Proposition \ref{FunzioniG} and Theorem \ref{unitàmodulari}, every modular unit on $ X^+_{ns}(p^k) $ can be written in the form indicated above. In fact, $ d|\sum_h{n^+_h} $ is equivalent to saying that $ 12|(p+1)p^{k-1}\sum_h{n^+_h} $. \\By Proposition \ref{piùmenodiedrale}, all the functions of this form are modular units of $ X^+_{ns}(p^k) $. In fact, if $ p \equiv 3 \mbox{ mod } 4 $, the functions $ T^+_h(\tau) $ are nearly holomorphic modular forms for $ X^+_{ns}(p^k) $. If $ p \equiv 1 \mbox{ mod } 4 $, the functions $ T^+_h(\tau) $ themselves are not nearly holomorphic modular forms for $ X^+_{ns}(p^k) $, but the product $ \prod_{h }{{T^+_h}^{n^+_h}(\tau)} $ has this property, because $ \sum_h{n^+_h} $ is even in this case. \end{proof}
Notice that such an expression for $ g(\tau) $ is not unique, because the following product is constant: $$ \displaystyle \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\})}{G^+_h(\tau)} = \prod_{t \in (\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}}{g_{[t]}(\tau)}.$$
\begin{rem}\label{pseudoGalois} Let $g$ be a generator of $ (\mathbb{Z}/p^k\mathbb{Z})^* $. Choose $s \in (\mathbf{O}_K / p^k \mathbf{O}_K)^* $ with $ |s|=g $ and denote by $ \rho \in Gal(F_{p^k},\mathbb{Q}(j))$ the automorphism corresponding to the matrix $ M_s $ with respect to the isomorphism $ Gal(F_{p^k},\mathbb{Q}(j)) \cong GL_2(\mathbb{Z}/p^k\mathbb{Z})/{\pm I} $ described in Theorem \ref{fricke}. Let $ F^+_{ns}(p^k) $ be the subfield of $ F_{p^k} $ fixed by $ C'^+_{ns}(p^k)/\pm I $. Choose $\sigma \in Gal(F_{p^k},\mathbb{Q}(j)) $. From Galois theory we have: $$ Gal(F_{p^k}, \sigma ( F^+_{ns}(p^k)) ) = \sigma Gal(F_{p^k}, F^+_{ns}(p^k) )\sigma^{-1} $$ \noindent thus, saying that $ \sigma ( F^+_{ns}(p^k)) = F^+_{ns}(p^k) $ amounts to saying that $ \sigma $ belongs to the normalizer of $ C'^+_{ns}(p^k)/\pm I $, in other words we have: $\sigma \in C^+_{ns}(p^k) / \pm I $. Consider $ \sigma_1, \sigma_2 \in C^+_{ns}(p^k) / \pm I $. We have $ \sigma_1(f(\tau)) = \sigma_2 (f(\tau)) $ for every $ f(\tau) \in F^+_{ns}(p^k) $ if and only if $ \sigma_1\sigma_2^{-1} \in C'^+_{ns}(p^k)/\pm I $ or equivalently $ \det \sigma_1 = \det \sigma_2 $. So every automorphism $ \sigma\restriction_{F^+_{ns}(p^k)}: F^+_{ns}(p^k)\rightarrow F^+_{ns}(p^k)$ fixing $ \mathbb{Q}(j) $ can be written in the form $ \sigma = \rho^j $ for some $ 0 \le j \le \varphi(p^k)-1 $.
Notice that if $$ f(\tau)= \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_h}^{n^+_h}(\tau)} $$
and $$ h(\tau)= \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_{h(\pm g)}}^{n^+_h}(\tau)} $$ \noindent are modular units on $ X^+_{ns}(p^k) $, from Proposition \ref{sigfun} we have: $$ (\rho(f(\tau)))^{12p^k} = \rho(f(\tau)^{12p^k})= \rho \left(\prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_h}^{12p^kn^+_h}(\tau)}\right) =$$ $$= \prod_{h \in ((\mathbb{Z} / p^k \mathbb{Z})^*/\{\pm 1\}) }{{G^+_{h(\pm g)}}^{12p^kn^+_h}(\tau)} = (h(\tau))^{12p^k}. $$ So $ \rho(f(\tau)) = c h(\tau)$ for some $ c \in \mathbb{Q}(\zeta_{p^k})$ and all the functions $ \rho^j(f(\tau)) $ are modular units. Choosing $ j= \frac{1}{2} \varphi(p^k) $ we deduce that for every modular unit $ f(\tau) $ on $ X^+_{ns}(p^k) $ there exists $ c' \in \mathbb{Q}(\zeta_{p^k}) $ such that: $$ c'f(\tau) \in \mathbb{Q}\left(\cos \left(\frac{2\pi}{p^k}\right)\right)((q^{p^{-k}})) \mbox{ with } q=e^{2\pi i \tau}. $$
\end{rem}
\section{Cuspidal Divisor Class Group of non-split Cartan curves} Let $ p \ge 5 $ be a prime and let $ R = \mathbb{Z}[H] $ be the group ring of $ H=(\mathbb{Z}/p^k\mathbb{Z} )^*/\{ \pm 1 \} $ over $ \mathbb{Z} $. Let $ w $ be a generator of $ H $. For $ \alpha \in \mathbb{Z}/p^k\mathbb{Z} $, let $ a \in \mathbb{Z} $ be congruent to $ \alpha \mbox{ mod }p^k $. We define: $$ \left\langle \frac{\alpha}{p^k} \right\rangle:= \left\langle \frac{a}{p^k} \right\rangle .$$ Define the Stickelberger element:
$$ \theta= \displaystyle\frac{p^k}{2} \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} {\displaystyle\sum_{ \begin{scriptsize} \begin{array}{c} s \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) \\ \pm| s|=w^i \end{array} \end{scriptsize} } B_2 \left( \left\langle \frac{\frac{1}{2}(s+\overline{s}) }{p^k} \right\rangle \right) } w^{-i} \in \mathbb{Q}[H]. $$ Define the ideals: $$ R_0 := \Big{\{} \sum b_jw^j \in R \mbox{ such that } \deg \left( \sum b_jw^j\right)=\sum b_j=0 \Big{\}}, $$ $$ R_d := \Big{\{} \sum b_jw^j \in R \mbox{ such that } d \mbox{ divides } \deg\left(\sum b_jw^j\right)=\sum b_j \Big{\}}. $$ Now we can state the main result: \begin{mainthm}\label{main} The group generated by the divisors of modular units in $F_{p^k} $ of the curve $ X^+_{ns}(p^k) $ can be expressed both as $ R_d\theta $ and as the Stickelberger module $ R\theta \cap R $. The Cuspidal Divisor Class Group on $ X^+_{ns}(p^k) $ is a module over $\mathbb{Z}[H]$ and we have the following isomorphism: $$ \mathfrak{C}^+_{ns}(p^k) \cong R_0 / R_d \theta. $$ \end{mainthm} \begin{proof}
For every $ i \in \mathbb{Z}/\frac{\varphi(p^k)}{2}\mathbb{Z} $ define:
$$ a_i = \displaystyle\frac{p^k}{2}\sum_{ { \begin{scriptsize} \begin{array}{c} s \in ((\mathbf{O}_K / p^k \mathbf{O}_K)^*/\{\pm 1\}) \\ \pm| s|=w^i \end{array} \end{scriptsize} }} B_2 \left( \left\langle \frac{\frac{1}{2}(s+\overline{s}) }{p^k} \right\rangle \right). $$ \noindent We identify the cusps of $ X^+_{ns}(p^k) $ with the elements in $H= (\mathbb{Z}/p^k\mathbb{Z})^*/\{\pm 1\} $ as explained in Proposition \ref{numerocuspidi}. In consideration of Proposition \ref{divisori} we obtain: $$ \mbox{ div } {G^+_{w^j}}^d(\tau) = d \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} a_i w^{j-i}. $$ If $ p\not\equiv 11 \mbox{ mod } 12$, the function $ G^+_{w^j}(\tau) $ is not $ \Gamma^+_{ns}(p^k)-$invariant but with a slight abuse of notation we write: $$ \mbox{ div } G^+_{w^j}(\tau) = \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} a_i w^{j-i}. $$
It is clear that $\mbox{div }G^+_{w^i}(\tau) \in \mathbb{Q}[H] $ and $d \mbox{ div }G^+_{w^i}(\tau) \in R $. Consider the Stickelberger element: $$ \mbox{div } G^+_{\{\pm 1\}}(\tau) = \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} a_i w^{-i} = $$
$$ = \displaystyle\frac{p^k}{2} \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} {\displaystyle\sum_{ \pm|s|=w^i} B_2 \left( \left\langle \frac{\frac{1}{2}(s+\overline{s}) }{p^k} \right\rangle \right) } w^{-i} = \theta. $$ Notice that: $ \mbox{div } G^+_{w^j}(\tau) = w^j\theta $. By Theorem \ref{powprod}, a $ \Gamma^+_{ns}(p^k)-$invariant function $g(\tau) \in F_{p^k} $ is a modular unit of $ X^+_{ns}(p^k) $, if and only if $ \mbox{ div } g(\tau) \in R_d \theta $. By \cite[Proposition 2.3, Chapter 5]{KL} we have $R_d \theta =R\theta \cap R $. \end{proof} \noindent
\begin{rem} Following Remark \ref{pseudoGalois}, consider $ G:=\displaystyle\frac{C^+_{ns}(p^k) / \pm I}{C'^+_{ns}(p^k) / \pm I} \cong (\mathbb{Z}/p^k\mathbb{Z})^*$ and let $ \rho $ be a generator of $ G/\{\pm 1\} $ with $ \pm \det \rho = w $. We may identify the group $ H $ parameterizing the cusps of $ X^+_{ns}(p^k) $ with $ G/\{\pm 1\} $, observing that for every automorphism $ \rho^j \in G/\{\pm 1\} $ and each modular unit $h(\tau) \in F^+_{ns}(p^k)$ we have: \begin{center} ord$_{w^{-j}} (h(\tau)) = $ord$_{\rho^{-j}(\infty)} (h(\tau)) = $ ord$_\infty \rho^j (h(\tau)) $ \end{center} and $$ \mbox{div}(\rho^j(h(\tau))) = w^j\mbox{div}(h(\tau)). $$ If $ \sum a_j \rho^j \in \mathbb{Z}[G/\{\pm 1\}] \cong \mathbb{Z}[H] $ we define $$ \left(\sum a_j w^j\right) (h(\tau)) = \prod \rho^j(h(\tau))^{a_j} $$ and clearly we have: $$ \mbox{div} \left(\prod \rho^j(h(\tau))^{a_j}\right) = \left(\sum a_j w^j \right)\mbox{div}(h(\tau)) $$ so $ \mathfrak{C}^+_{ns}(p^k) $ has a natural structure of $ \mathbb{Z}[H]$-module, which emphasizes the analogy with the classical theory of cyclotomic fields recalled in the introductory section. \end{rem}
\noindent Define: $$ \theta' = \theta - \displaystyle\frac{(p+1)p^{2k-1}}{12}\sum_{i=1}^{\frac{p-1}{2}p^{k-1}}w^i $$ \noindent and observe that $ \theta' \in R$ and $ \deg(\theta') = - \displaystyle\frac{p^2-1}{24} p^{3k-2}. $
\begin{prop}\label{calcolorapido} We have: $$ R_0 \cap \left(R\theta' + p^{2k-1} R \sum_{i=1}^{\frac{p-1}{2}p^{k-1}} w^i \right) = R_d \theta .$$
\end{prop} \begin{proof} Let $ \alpha, \beta \in R $ be such that: $$ \alpha \theta' + p^{2k-1}\beta\sum_{i=1}^{\frac{p-1}{2}p^{k-1}} w^i \in R_0 .$$ \noindent Then $$ -\deg(\alpha)\frac{p^2-1}{24} p^{3k-2} + p^{2k-1} \deg(\beta) \displaystyle\frac{p-1}{2}p^{k-1} = 0 $$ \noindent implies $ (p+1)\deg(\alpha)= 12 \deg(\beta) $. This is equivalent to saying that $$ d=\displaystyle\frac{12}{\gcd(12,p+1)} \mbox{ divides }\deg(\alpha), $$ and in that case $$ \alpha \theta' + p^{2k-1}\beta\sum w^i = \alpha\theta' + \frac{(p+1)p^{2k-1}\deg(\alpha)}{12} \sum w^i = \alpha \theta .$$ \end{proof}
\begin{thm}\label{Cardinalità} We have:
$$ |\mathfrak{C}^+_{ns}(p^k)| = \displaystyle\frac{|\det A_{\theta'}|}{\frac{p^2-1}{24} p^{k-1} e} = \displaystyle 24\frac{ \displaystyle\prod{}^{} {\frac{p^k}{2}B_{2,\chi}}}{\gcd(12,p+1)(p-1)p^{k-1}}, $$
where $ A_{\theta'} $ is a circulant Toeplitz matrix, $ e= p^{3k-2}\frac{p-1}{2d} $ and the product runs over all nontrivial characters $ \chi $ of $ C(p^k)/{\pm I} $ such that $ \chi(M)=1 $ for every $ M \in C(p^k) $
with $ \det M = \pm 1 $. \end{thm}
\begin{proof} \noindent From Proposition \ref{calcolorapido} and the following isomorphism: $$ R_0 / \left(R_0 \cap \left(R\theta' + p^{2k-1} R \sum w^i \right)\right) \cong$$
$$ \cong \left(R_0 + R\theta' + p^{2k-1} R \sum w^i \right) / \left(R\theta' + p^{2k-1} R \sum w^i\right) $$ \noindent we deduce that
$$ \displaystyle|\mathfrak{C}^+_{ns}(p^k)|=\left(R_0 + R\theta' + p^{2k-1} R \sum w^i\right):\left(R\theta' + p^{2k-1} R \sum w^i\right). $$ From the following chain of consecutive inclusions: $$ R \supset R_0 + R\theta' + p^{2k-1} R \sum w^i \supset R\theta' + p^{2k-1} R \sum w^i \supset R\theta' $$ \noindent we obtain
$$ \displaystyle|\mathfrak{C}^+_{ns}(p^k)| = \displaystyle\frac{(R:R\theta')}{\left( R : \left(R_0 + R\theta' + p^{2k-1} R \sum w^i\right)\right)\left(\left(R\theta' + p^{2k-1} R \sum w^i\right) : R\theta'\right)}. $$ \noindent Define $$ e:= \gcd \left(\deg(\theta'),p^{2k-1}\deg\left(\sum w^i\right)\right) = $$ $$ = p^{3k-2}\gcd \left(\frac{p^2-1}{24},\frac{p-1}{2} \right) = p^{3k-2}\frac{p-1}{2d}. $$ \noindent It is clear that $$ R_0 + R\theta' + p^{2k-1} R \sum w^i = R_e, $$ where by $ R_e $ we mean the ideal of $ R $ consisting of elements whose degree is divisible by $ e$. So $$ \left(R : \left(R_0 + R\theta' + p^{2k-1} R \sum w^i\right)\right) = e. $$
\noindent Regarding $ \left(\left(R\theta' + p^{2k-1} R \sum w^i\right) : R\theta'\right) $, we observe that $$ \left(R\theta' + p^{2k-1} R \sum w^i\right) \big{/} R\theta' \cong \left(p^{2k-1} R \sum w^i\right) \big{/} \left(p^{2k-1} R \sum w^i \cap R\theta'\right). $$ \noindent But $ \prod_{h}{G^+_h}^{n^+_h} $ is constant if and only if all $ n^+_h $ are the same and so $$ \mbox{ div } \prod_{h}{G^+_h}^{n^+_h} = \left(\sum n^+_h h\right)\theta= \sum n^+_h h \left(\theta' + \frac{(p+1)p^{2k-1}}{12}\sum_{i=1}^{\frac{p-1}{2}p^{k-1}}w^i \right) =0$$ implies $$ \left(\sum n^+_h h \right)\theta' = - \frac{(p+1)p^{2k-1}}{12} \sum n^+_h \sum w^i \iff n^+_{w}=n^+_{w^2}=n^+_{w^3}=...=n^+_{\pm 1}. $$ \noindent But $$ \left( \sum w^i \right) \theta' = \deg (\theta') \sum w^i = - \displaystyle\frac{p^2-1}{24} p^{3k-2} \sum w^i $$ so: $$ \left(R\theta' + p^{2k-1} R \sum w^i\right) : R\theta' = \frac{p^2-1}{24} p^{k-1}. $$ The last index we need to compute is $ (R:R\theta') $. Write $ \theta'=\sum a'_i w^{-i} $ and $ a'_i=a_i - \frac{p+1}{12} p^{2k-1} $. Define the following matrix:
$$ A_{\theta'} =
\begin{pmatrix}
a'_{0} & a'_{1} & a'_{2} & \dots & a'_{\frac{p-1}{2}p^{k-1} -2} & a'_{\frac{p-1}{2}p^{k-1} -1} \\
a'_{\frac{p-1}{2}p^{k-1}-1} & a'_{0} & a'_{1} & \dots & a'_{\frac{p-1}{2}p^{k-1} -3} & a'_{\frac{p-1}{2}p^{k-1} -2}\\
a'_{\frac{p-1}{2}p^{k-1}-2} & a'_{\frac{p-1}{2}p^{k-1}-1} & a'_{0} & \dots & a'_{\frac{p-1}{2}p^{k-1} -4} & a'_{\frac{p-1}{2}p^{k-1} -3}\\ \hdotsfor{6} \\
a'_{2} & a'_{3} & a'_{4} & \dots & a'_{0} & a'_{1} \\
a'_{1} & a'_{2} & a'_{3} & \dots & a'_{\frac{p-1}{2}p^{k-1} -1} & a'_{0}
\end{pmatrix}.
$$
\noindent We have: $ (R:R\theta')=|\det A_{\theta'}| $. The matrix $ A_{\theta'} $ is a circulant Toeplitz matrix, in other words, the coefficients $ (A_{\theta'})_{i,j} $ depend only on $ i-j \mod \frac{p-1}{2}p^{k-1}$. This is the matrix of multiplication by $ \theta'$ in $ \mathbb{C}[H] $, so we easily deduce that for $ n=1,2,...,\frac{p-1}{2}p^{k-1} $ the eigenvalues of $ A_{\theta'}$ are: $$ \lambda_n = \sum_{j=1}^{\frac{p-1}{2}p^{k-1}}a'_j e^{\begin{Large}\frac{4 \pi i j n}{(p-1)p^{k-1}}\end{Large} } $$ with corresponding eigenvectors: $$ v_n = \sum_{j=1}^{\frac{p-1}{2}p^{k-1}} e^{\begin{Large}\frac{4 \pi i j n}{(p-1)p^{k-1}}\end{Large} } w^j .$$ \noindent Observe that $ \lambda_{\frac{p-1}{2}p^{k-1}} = \sum a'_i = \deg(\theta') = -\frac{p^2-1}{24}p^{3k-2} $ and that according to the definition of Theorem \ref{CDCG} the other $ \lambda_n $ correspond to the generalized Bernoulli numbers $\frac{p^k}{2} B_{2,\chi} $ where $ \chi $ runs over the nontrivial characters of $ C(p^k)/{\pm I} $ such that $ \chi(M)=1 $ for every $ M \in C(p^k) $ with $ \det M = \pm 1 $. \\ Gathering all this information together we obtain the desired result. \end{proof}
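\noindent The diagonalisation of $ A_{\theta'} $ used above is the standard fact that a circulant matrix is diagonalised by the discrete Fourier transform of its first row. The following Python sketch (an illustration only; the vector \texttt{c} is random test data, not the actual coefficients of $ \theta' $) checks this fact numerically.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 6
c = rng.integers(-5, 6, size=n)           # placeholder first row
# circulant matrix: the (i, j) entry depends only on i - j modulo n
A = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

det_direct = np.linalg.det(A)
det_fourier = np.prod(np.fft.fft(c))      # product of the eigenvalues

print(abs(det_direct), abs(det_fourier))  # agree up to rounding
\end{verbatim}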
\section{Explicit calculation}
In this section we examine the curve $ X^+_{ns}(p) $ in more detail. Denote by $ v $ a generator of the multiplicative group of $ \mathbb{F}_{p^2} $ and by $ \omega $ a generator of the character group $ \hat{\mathbb{F}_{p^2}^*} $, where we view $ C(p) \cong \mathbb{F}_{p^2}^* $. By Theorem \ref{Cardinalità}, in this case we have: $$ B_{2,\chi} = \sum_{x \in \mathbb{F}_{p^2}^* / \pm 1} B_2 \left( \left\langle \frac{\frac{1}{2}\mbox{Tr}(x)}{p} \right\rangle \right) \chi(x), $$
$$ |\mathfrak{C}^+_{ns}(p)| = \displaystyle \frac{24}{(p-1)\gcd(12,p+1)}\prod_{j=1}^{\frac{p-3}{2}}\frac{p}{2}B_{2,\omega^{(2p+2)j}} =
$$
$$ = \displaystyle\frac{ 576 \left| \det\left[\displaystyle\frac{p}{2}\left(\displaystyle\sum_{l=0}^{p}B_2\left( \left\langle \frac{\frac{1}{2}\mbox{Tr}(v^{i-j+l\frac{p-1}{2}}) }{p} \right\rangle \right) - \frac{p+1}{6} \right) \right]_{1\le i,j \le \frac{p-1}{2}} \right|}{(p-1)^2 p (p+1)\gcd(12,p+1)}. $$ \\ \noindent In the following table we show the factorization of the orders of cuspidal divisor class groups $ \mathfrak{C}^+_{ns}(p) $ for some primes $ p \le 101 $:
\begin{tab}\label{tab} \noindent \\ \begin{tabular}{rc} \toprule
$ p $ & $ |\mathfrak{C}^+_{ns}(p)| $ \\ \midrule 5 & $ 1 $ \\ 7 & $ 1 $ \\ 11 & $ 11 $ \\ 13 & $ 7 \cdot 13^2 $ \\ 17 & $ 2^4 \cdot 3 \cdot 17^3 $ \\ 19 & $ 3 \cdot 19^3 \cdot 487 $ \\ 23 & $ 23^4 \cdot 37181 $ \\ 29 & $ 2^6 \cdot 5 \cdot 7^2 \cdot 29^6 \cdot 43^2 $\\ 31 & $ 2^2 \cdot 5 \cdot 7 \cdot 11 \cdot 31^6 \cdot 2302381 $ \\ 37 & $ 3^4 \cdot 7^2 \cdot 19^3 \cdot 37^8 \cdot 577^2 $ \\ 41 & $ 2^6 \cdot 5^2 \cdot 7 \cdot 31^4 \cdot 41^9 \cdot 431^2 $ \\ 43 & $ 2^2 \cdot 19 \cdot 29 \cdot 43^9 \cdot 463 \cdot 1051 \cdot 416532733 $ \\ 53 & $ 3^2 \cdot 13^2 \cdot 53^{12} \cdot 96331^2 \cdot 379549^2 $ \\ 59 & $ 59^{14} \cdot 9988553613691393812358794271 $ \\ 67 & $ 67^{16} \cdot 193 \cdot 661^2 \cdot 2861 \cdot 8009 \cdot 11287 \cdot 9383200455691459 $ \\ 71 & $ 31 \cdot 71^{16} \cdot 113 \cdot 211 \cdot 281 \cdot 701^2 \cdot 12713 \cdot 13070849919225655729061 $ \\ 73 & $ 2^2 \cdot 3^4 \cdot 11^2 \cdot 37 \cdot 73^{17} \cdot 79^2 \cdot 241^2 \cdot 3341773^2 \cdot 11596933^2 $ \\ 83 & $ 83^{19} \cdot 17210653 \cdot 151251379 \cdot 18934761332741 \cdot 48833370476331324749419 $ \\ 89 & $ 2^2 \cdot 3 \cdot 5 \cdot 11^2 \cdot 13^2 \cdot 89^{21} \cdot 4027^2 \cdot 262504573^2 \cdot 15354699728897^2 $\\ 101 & $ 5^4 \cdot 17 \cdot 101^{24} \cdot 52951^2 \cdot 54371^2 \cdot 58884077243434864347851^2 $ \end{tabular} \end{tab}
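\noindent The following Python sketch illustrates how one might evaluate the determinant formula displayed before Table \ref{tab} with exact rational arithmetic. It is only a hedged illustration: the realisation of $ \mathbb{F}_{p^2} $ as $ \mathbb{F}_p(\sqrt{\epsilon}) $, the brute-force choice of the generator $ v $ and the exact indexing are our own reading of the formula, and the script has not been checked against the table.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def B2(x):                               # second Bernoulli polynomial
    return x * x - x + Fraction(1, 6)

def frac(x):                             # fractional part <x>
    return x - (x.numerator // x.denominator)

def det_exact(M):                        # exact Gaussian elimination
    M = [row[:] for row in M]; n = len(M); d = Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if M[r][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]; d = -d
        d *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            M[r] = [M[r][c] - f * M[k][c] for c in range(n)]
    return d

def cuspidal_order(p):
    # F_{p^2} = F_p(sqrt(eps)); elements stored as pairs (a, b) = a + b*sqrt(eps)
    eps = next(e for e in range(2, p) if pow(e, (p - 1) // 2, p) == p - 1)
    mul = lambda x, y: ((x[0]*y[0] + eps*x[1]*y[1]) % p,
                        (x[0]*y[1] + x[1]*y[0]) % p)
    def mult_order(x):
        y, m = x, 1
        while y != (1, 0):
            y, m = mul(y, x), m + 1
        return m
    v = next((a, b) for a in range(p) for b in range(1, p)
             if mult_order((a, b)) == p*p - 1)       # a generator of F_{p^2}^*
    powers, x = [], (1, 0)                           # powers[m] = v^m
    for _ in range(p*p - 1):
        powers.append(x); x = mul(x, v)
    def entry(i, j):                                 # 1 <= i, j <= (p-1)/2
        s = sum(B2(frac(Fraction(powers[(i - j + l*(p - 1)//2) % (p*p - 1)][0], p)))
                for l in range(p + 1))
        return Fraction(p, 2) * (s - Fraction(p + 1, 6))
    n = (p - 1) // 2
    M = [[entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    return Fraction(576) * abs(det_exact(M)) / ((p - 1)**2 * p * (p + 1) * gcd(12, p + 1))

print(cuspidal_order(11), cuspidal_order(13))  # compare with the first table entries
\end{verbatim}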
\noindent \\ We recall the following result of \cite{BB}:
\begin{thm}\label{Hurw}
The genus of $ X^+_{ns}(p) $ is given by:
$$ g(X^+_{ns}(p)) = \displaystyle\frac{1}{24}\left( p^2 - 10 p + 23 + 6\left(\frac{-1}{p}\right) + 4\left(\frac{-3}{p}\right) \right). $$ \end{thm} \begin{proof} It is a consequence of Hurwitz's formula \cite[Proposition 1.40]{Shimura:af}:
$$ g(\Gamma) = 1 + \frac{d}{12} - \frac{e_2}{4} - \frac{e_3}{3} - \frac{e_\infty}{2}. $$ \noindent In this case: $ d:= [SL_2(\mathbb{Z}):\Gamma_{ns}^+(p)] = \frac{p(p-1)}{2} $, $ e_\infty = \frac{p-1}{2} $ is the number of cusps (see Proposition \ref{numerocuspidi}), and $ e_2 $ and $ e_3 $ denote the numbers of elliptic points of period $2$ and $3$. We have (cf. \cite[Proposition 12]{Rebolledo}): $$ e_2 = \frac{p+1}{2} - \left(\frac{-1}{p}\right) \mbox{ and } e_3 = \frac{1}{2} - \frac{1}{2}\left(\frac{-3}{p}\right). $$ \end{proof}
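\noindent For instance, for $ p=11 $ the formula gives $ g(X^+_{ns}(11)) = \frac{1}{24}(121-110+23-6-4)=1 $, and for $ p=13 $ it gives $ g(X^+_{ns}(13)) = \frac{1}{24}(169-130+23+6+4)=3 $.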
By Theorem \ref{Hurw} we have $ g(X^+_{ns}(5))=g(X^+_{ns}(7))=0 $ so it will not be surprising to find out that $ \mathfrak{C}^+_{ns}(5)$ and $ \mathfrak{C}^+_{ns}(7) $ are trivial.
For $ 11 \le p \le 31 $ we provide further corroborative evidence for Table \ref{tab}. From \cite[p. 195]{SerreMordel} we have:
\begin{thm}\label{x1}
The modular curve $ X^+_{ns}(p) $ associated to the subgroup $ C_{ns}^+(p) $ is a projective non-singular modular curve which can be defined over $ \mathbb{Q} $. The cusps are defined over $ \mathbb{Q}(\cos(\frac{2\pi}{p})) $, the maximal real subfield of the $ p $-th cyclotomic field. \end{thm}
From \cite{Chen} we have the following result:
\begin{thm}\label{x2} The jacobian of $ X^+_{ns}(p) $ is isogenous to the new part of the Jacobian $ J_0^+(p^2) $ of $ X_0^+(p^2) $. \end{thm}
From \cite[Chapter 12]{Ribet} we have the following interesting corollary of the Eichler-Shimura relation \cite[p.~354]{Diamond:mf}:
\begin{thm}\label{x3} Let $q$ be a prime that does not divide $ N $ and let $f(x) $ be the characteristic polynomial of the Hecke operator $ T_q $ acting on $ S^{new}_2(\Gamma^+_0(N)) $. Then:
$$ |{J_0^+}^{new}(N)(\mathbb{F}_q)| = f(q+1).$$ \end{thm} \noindent
Choose a prime $ q \equiv \pm 1 \mbox{ mod }p$ that does not divide $ |\mathfrak{C}^+_{ns}(p)| $. From the previous theorems, the cuspidal divisor class group $ \mathfrak{C}^+_{ns}(p) $ injects into $ J^+_{ns}(p)(\mathbb{F}_q) $. So we expect that $ |\mathfrak{C}^+_{ns}(p)| $ divides $ |{J_0^+}^{new}(p^2)(\mathbb{F}_q)| = f_{q,p^2}(q+1)$, where $ f_{q,p^2} $ is the characteristic polynomial of the Hecke operator $T_q $ acting on $ S_2^{new}(\Gamma^+_0(p^2))$. From the modular form database of W.~Stein we have:\\
\noindent $ |{J_0^+}^{new}(11^2)(\mathbb{F}_{23})| = f_{23,121}(24)= 3 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{43})| = f_{43,121}(44)= 2^2 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{67})| = f_{67,121}(68)= 5 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{89})| = f_{89,121}(90)= 3^2 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{109})| = f_{109,121}(110)= 2 \cdot 5 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{131})| = f_{131,121}(132)= 2^2 \cdot 3 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{197})| = f_{197,121}(198)= 2 \cdot 3^2 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{199})| = f_{199,121}(200)= 2^2 \cdot 5 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{241})| = f_{241,121}(242)= 2 \cdot 11^2 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{263})| = f_{263,121}(264)= 2^3 \cdot 3 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{307})| = f_{307,121}(308)= 2^2 \cdot 7 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{331})| = f_{331,121}(332)= 3^3 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{353})| = f_{353,121}(354)= 3 \cdot 11^2 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{373})| = f_{373,121}(374)= 2 \cdot 11 \cdot 17 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{397})| = f_{397,121}(398)= 2^2 \cdot 3^2 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{419})| = f_{419,121}(420)= 2^2 \cdot 3^2 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{439})| = f_{439,121}(440)= 2^3 \cdot 5 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{461})| = f_{461,121}(462)= 2 \cdot 3 \cdot 7 \cdot 11 ,$\\
$ |{J_0^+}^{new}(11^2)(\mathbb{F}_{463})| = f_{463,121}(464)= 3^2 \cdot 5 \cdot 11 ,$\\
\noindent $ |{J_0^+}^{new}(13^2)(\mathbb{F}_{53})| = f_{53,169}(54)= 7 \cdot 13 ^2 \cdot 127 ,$\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{79})| = f_{79,169}(80)= 7 \cdot 13 ^2 \cdot 449 ,$\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{103})| = f_{103,169}(104)= 7 \cdot 13 ^2 \cdot 967 ,$\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{131})| = f_{131,169}(132)= 7 \cdot 13 ^5, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{157})| = f_{157,169}(158)= 7^2 \cdot 13 ^2 \cdot 503,$\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{181})| = f_{181,169}(182)= 7 \cdot 13 ^2 \cdot 4327, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{233})| = f_{233,169}(234)= 7 \cdot 13 ^2 \cdot 11731, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{311})| = f_{311,169}(312)= 7 \cdot 13 ^2 \cdot 26249, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{313})| = f_{313,169}(314)= 7 \cdot 13 ^2 \cdot 29443, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{337})| = f_{337,169}(338)= 7 \cdot 13 ^2 \cdot 35449, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{389})| = f_{389,169}(390)= 2^3 \cdot 7 \cdot 13 ^2 \cdot 71 \cdot 83, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{443})| = f_{443,169}(444)= 2^3 \cdot 7 \cdot 13 ^3 \cdot 643, $\\
$ |{J_0^+}^{new}(13^2)(\mathbb{F}_{467})| = f_{467,169}(468)= 7 \cdot 13 ^2 \cdot 93199, $\\
\noindent $ |{J_0^+}^{new}(17^2)(\mathbb{F}_{67})| = f_{67,289}(68)= 2^8 \cdot 3 \cdot 17^5 \cdot 71 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{101})| = f_{101,289}(102)= 2^4 \cdot 3^2 \cdot 7 \cdot 17^3 \cdot 19 \cdot 79 \cdot 181 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{103})| = f_{103,289}(104)= 2^7 \cdot 3^4 \cdot 17^4 \cdot 1601 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{137})| = f_{137,289}(138)= 2^6 \cdot 3^8 \cdot 17^4 \cdot 181 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{239})| = f_{239,289}(240)= 2^8 \cdot 3^2 \cdot 17^3 \cdot 373 \cdot 48871 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{271})| = f_{271,289}(272)= 2^5 \cdot 3^9 \cdot 5^3 \cdot 17^4 \cdot 53 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{307})| = f_{307,289}(308)= 2^6 \cdot 3^5 \cdot 5 \cdot 17^3 \cdot 23 \cdot 71 \cdot 1423 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{373})| = f_{373,289}(374)= 2^4 \cdot 3^4 \cdot 17^3 \cdot 23 \cdot 73 \cdot 101 \cdot 2789 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{409})| = f_{409,289}(410)= 2^7 \cdot 3^5 \cdot 17^3 \cdot 23 \cdot 53 \cdot 71 \cdot 359 ,$\\
$ |{J_0^+}^{new}(17^2)(\mathbb{F}_{443})| = f_{443,289}(444)= 2^5 \cdot 3^2 \cdot 13 \cdot 17^4 \cdot 19 \cdot 79 \cdot 15263 ,$\\
\noindent $ |{J_0^+}^{new}(19^2)(\mathbb{F}_{37})| = f_{37,361}(38)= 2 \cdot 3 \cdot 19^3 \cdot 37 \cdot 487 \cdot 5441 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{113})| = f_{113,361}(114)= 2^5 \cdot 3^7 \cdot 19^7 \cdot 487 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{151})| = f_{151,361}(152)= 2^3 \cdot 3^3 \cdot 17 \cdot 19^4 \cdot 487 \cdot 1459141 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{191})| = f_{191,361}(192)= 3^2 \cdot 11^5 \cdot 19^6 \cdot 73 \cdot 487 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{227})| = f_{227,361}(228)= 2^2 \cdot 3^4 \cdot 19^3 \cdot 487 \cdot 971 \cdot 7323581,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{229})| = f_{229,361}(230)= 3 \cdot 11 \cdot 17 \cdot 19^3 \cdot 467 \cdot 487 \cdot 2819^2 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{379})| = f_{379,361}(380)= 2^6 \cdot 3 \cdot 5^2 \cdot 19^3 \cdot 179 \cdot 487 \cdot 4019 \cdot 33247 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{419})| = f_{419,361}(420)= 2^6 \cdot 3^2 \cdot 5^3 \cdot 19^3 \cdot 487 \cdot 509^2 \cdot 16487 ,$\\
$ |{J_0^+}^{new}(19^2)(\mathbb{F}_{457})| = f_{457,361}(458)= 2^4 \cdot 3 \cdot 5^4 \cdot 19^3 \cdot 487 \cdot 521^2 \cdot 65629 ,$\\
\noindent $ |{J_0^+}^{new}(23^2)(\mathbb{F}_{47})| = f_{47,529}(48)= 2^3 \cdot 3^3 \cdot 7^4 \cdot 11 \cdot 13 \cdot 23^4 \cdot 8117 \cdot 37181 ,$\\
$ |{J_0^+}^{new}(23^2)(\mathbb{F}_{137})| = f_{137,529}(138)= 2^4 \cdot 3^6 \cdot 23^8 \cdot 2399 \cdot 37181 \cdot 75553 ,$\\
$ |{J_0^+}^{new}(23^2)(\mathbb{F}_{139})| = f_{139,529}(140)= 2^4 \cdot 3^8 \cdot 23^9 \cdot 107^2 \cdot 109 \cdot 37181 ,$\\
$ |{J_0^+}^{new}(23^2)(\mathbb{F}_{229})| = f_{229,529}(230)= 2^6 \cdot 11 \cdot 23^6 \cdot 43 \cdot 67 \cdot 37181 \cdot 325729 \cdot 1296721 ,$\\
$ |{J_0^+}^{new}(23^2)(\mathbb{F}_{277})| = f_{277,529}(278)= 2^8 \cdot 3^{10} \cdot 23^7 \cdot 113^2 \cdot 331 \cdot 7193 \cdot 37181 ,$\\
$ |{J_0^+}^{new}(23^2)(\mathbb{F}_{367})| = f_{367,529}(368)= 2^4 \cdot 23^5 \cdot 67^2 \cdot 193 \cdot 1847 \cdot 37181 \cdot 44617 \cdot 8643209 ,$\\
$ |{J_0^+}^{new}(23^2)(\mathbb{F}_{461})| = f_{461,529}(462)= 3^6 \cdot 7^4 \cdot 23^7 \cdot 43^2 \cdot 67 \cdot 199 \cdot 2857^2 \cdot 37181 ,$\\
\noindent $ |{J_0^+}^{new}(29^2)(\mathbb{F}_{59})| = f_{59,841}(60)= 2^8 \cdot 3^2 \cdot 5 \cdot 7^2 \cdot 11^2 \cdot 17 \cdot 23^2 \cdot 29^6 \cdot 43^2 \cdot 569 \cdot 967^2 \cdot 2999 \cdot 11695231 ,$\\
$ |{J_0^+}^{new}(29^2)(\mathbb{F}_{173})| = f_{173,841}(174)= 2^{10} \cdot 3^2 \cdot 5^2 \cdot 7^2 \cdot 29^6 \cdot 31 \cdot 41^2 \cdot 43^2 \cdot 89 \cdot 419^2 \cdot 719 \cdot 1061 \cdot 36571 \cdot 1269691 \cdot 1909421 ,$\\
$ |{J_0^+}^{new}(29^2)(\mathbb{F}_{233})| = f_{233,841}(234)= 2^{10} \cdot 3^2 \cdot 5 \cdot 7^2 \cdot 29^6 \cdot 43^2 \cdot 167^2 \cdot 211^2 \cdot 421 \cdot 1049 \cdot 3989 \cdot 317321 \cdot 422079165281099 ,$\\
$ |{J_0^+}^{new}(29^2)(\mathbb{F}_{347})| = f_{347,841}(348)= 2^8 \cdot 3^{12} \cdot 5^3 \cdot 7^2 \cdot 11 \cdot 23^2 \cdot 29^6 \cdot 31 \cdot 43^2 \cdot 71 \cdot 127^2 \cdot 967^2 \cdot 9601 \cdot 783719 \cdot 7292986801 ,$\\
$ |{J_0^+}^{new}(29^2)(\mathbb{F}_{349})| = f_{349,841}(350)= 2^8 \cdot 5^9 \cdot 7^2 \cdot 13^2 \cdot 19 \cdot 23 \cdot 29^7 \cdot 43^2 \cdot 83^2 \cdot 103 \cdot 211 \cdot 3786151 \cdot 92610181 \cdot 3477902249 ,$\\
$ |{J_0^+}^{new}(29^2)(\mathbb{F}_{463})| = f_{463,841}(464)= 2^{13} \cdot 5^7 \cdot 7^7 \cdot 29^6 \cdot 43^3 \cdot 59 \cdot 97^3 \cdot 461^3 \cdot 1459 \cdot 23656223369 \cdot 230667656992649 ,$\\
\noindent $ |{J_0^+}^{new}(31^2)(\mathbb{F}_{61})| = f_{61,961}(62) = 2^{10} \cdot 5 \cdot 7 \cdot 11 \cdot 31^{7} \cdot 137 \cdot 179 \cdot 1249 \cdot 10369 \cdot 26699 \cdot 38177 \cdot 2302381 \cdot 24080801 ,$\\
$ |{J_0^+}^{new}(31^2)(\mathbb{F}_{311})| = f_{311,961}(312)= 2^8 \cdot 3^2 \cdot 5 \cdot 7^2 \cdot 11 \cdot 31^7 \cdot 409 \cdot 3793^2 \cdot 51551^2 \cdot 162691 \cdot 2302381 \cdot 22340831^2 \cdot 24037019 ,$\\
$ |{J_0^+}^{new}(31^2)(\mathbb{F}_{373})| = f_{373,961}(374)= 2^4 \cdot 5 \cdot 7^2 \cdot 11^2 \cdot 13^2 \cdot 31^6 \cdot 251 \cdot 449 \cdot 2302381 \cdot 366424077359 \cdot 13600706515978033^2 ,$\\
$ |{J_0^+}^{new}(31^2)(\mathbb{F}_{433})| = f_{433,961}(434)= 2^6 \cdot 3^6 \cdot 5 \cdot 7 \cdot 11 \cdot 17 \cdot 31^{11} \cdot 89 \cdot 97 \cdot 191 \cdot 401 \cdot 1153 \cdot 54331 \cdot 126961 \cdot 2302381 \cdot 12958271 \cdot 53053053405791.$\\
\noindent For $ 11 \le p \le 23 $ we have: $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }p \end{array} \end{scriptsize} } |{J_0^+}^{new}(p^2)(\mathbb{F}_{q})| = |\mathfrak{C}^+_{ns}(p)|. $$ For $ p=29 $ and $ p=31 $ we have:
$$ \mathop\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }p \end{array} \end{scriptsize} } |{J_0^+}^{new}(p^2)(\mathbb{F}_{q})| = 4|\mathfrak{C}^+_{ns}(p)|.$$ We can improve the result by using the isogeny (cf.~\cite[Paragraph 6.6]{Diamond:mf}): $$ {J_0^+}^{new}(p^2) \longrightarrow \mathop{\bigoplus_{f}} A'_{p,f} $$ where the sum is taken over the equivalence classes of newforms $ f\in S_2(\Gamma^+_0(p^2)) $. Two forms $ f $ and $ g $ are declared equivalent if $ g=f^{\sigma} $ for some automorphism $ \sigma: \mathbb{C} \longrightarrow \mathbb{C}$. Denote by $ \mathbb{K}_f $ the number field of $ f $. We have: $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }29 \end{array} \end{scriptsize} } |A'_{29,f_1}(\mathbb{F}_{q})| = 7^2 \mbox{ where }\mathbb{K}_{f_1}= \mathbb{Q}(\sqrt{2}), $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }29 \end{array} \end{scriptsize} } |A'_{29,f_2}(\mathbb{F}_{q})| = 29 \mbox{ where }\mathbb{K}_{f_2}= \mathbb{Q}(\sqrt{5}), $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }29 \end{array} \end{scriptsize} } |A'_{29,f_3}(\mathbb{F}_{q})|= \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }29 \end{array} \end{scriptsize} } |A'_{29,f_4}(\mathbb{F}_{q})|= 2^3 \cdot 43 $$ where $\mathbb{K}_{f_3}= \mathbb{K}_{f_4} $ and $ [\mathbb{K}_{f_3}:\mathbb{Q}]=3$,
$$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }29 \end{array} \end{scriptsize} } |A'_{29,f_5}(\mathbb{F}_{q})|= 5 \cdot 29^2 \mbox { where } [\mathbb{K}_{f_5}:\mathbb{Q}]=6, $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }29 \end{array} \end{scriptsize} } |A'_{29,f_6}(\mathbb{F}_{q})|= 29^3 \mbox { where } [\mathbb{K}_{f_6}:\mathbb{Q}]=8, $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }31 \end{array} \end{scriptsize} } |A'_{31,g_1}(\mathbb{F}_{q})| = 2^2 \cdot 7 \mbox{ where }\mathbb{K}_{g_1}= \mathbb{Q}(\sqrt{2}), $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }31 \end{array} \end{scriptsize} } |A'_{31,g_2}(\mathbb{F}_{q})| = 5 \cdot 11 \mbox{ where }\mathbb{K}_{g_2}= \mathbb{Q}(\sqrt{5}), $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }31 \end{array} \end{scriptsize} } |A'_{31,g_3}(\mathbb{F}_{q})|= 2302381 \mbox { where } [\mathbb{K}_{g_3}:\mathbb{Q}]=8, $$ $$ \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }31 \end{array} \end{scriptsize} } |A'_{31,g_4}(\mathbb{F}_{q})|= 31^6 \mbox { where } [\mathbb{K}_{g_4}:\mathbb{Q}]=16. $$ So for $ p=29 $ and $ p=31 $ we have: $$ \prod_f \mathop
\textbf{\mbox{ gcd }}_{ \begin{scriptsize} \begin{array}{c} q < 500 \mbox{ prime}, \\ q \equiv \pm 1 \mbox{ mod }p \end{array} \end{scriptsize} } |A'_{p,f}(\mathbb{F}_{q})|= |\mathfrak{C}^+_{ns}(p)| $$ where the product runs over all equivalence classes of newforms.
\end{document}
\begin{document}
\begin{abstract} Consider a complete discrete valuation ring $\mathcal{O}$ with quotient field~$F$ and finite residue field. Then the inclusion map $\mathcal{O} \hookrightarrow F$ induces a map $\hat{\K}^\mathrm{M}_*\mathcal{O} \nach \hat{\K}^\mathrm{M}_*F$ on improved Milnor K-theory. We show that this map is an isomorphism in degrees greater than or equal to $3$. This implies the Gersten conjecture for improved Milnor \mbox{K-theory} for $\mathcal{O}$. This result is new in the $p$-adic case. \end{abstract}
\maketitle
\section{Introduction}
Let $\hat{\K}^\mathrm{M}_*$ denote the improved Milnor K-theory as introduced by \df{Gabber} \cite{gabber} and mainly developed by \df{Kerz} \cite{kerz}. For fields, it coincides with the usual Milnor K-theory. For $n\geq 1$ and a discrete valuation ring $\mathcal{O}$ with quotient field $F$ and residue field $\kappa$, there is the so-called Gersten complex
\begin{align*}
0 \Nach \hat{\K}^\mathrm{M}_n\mathcal{O} \Nach \hat{\K}^\mathrm{M}_nF \Nach \hat{\K}^\mathrm{M}_{n-1}\kappa \Nach 0.
\end{align*} This complex is known to be right-exact. Is it also exact on the left side? In the equicharacteristic case this was shown by \df{Kerz} \cite[Prop.~10~(8)]{kerz}.
In a special case of the mixed characteristic setting, we show the exactness on the left side by proving the following, slightly stronger, result (Theorem~\ref{thm_main_result}).
\begin{thm*} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field. Then for $n\geq 3$ the inclusion map $\iota\,\colon\,\mathcal{O}\hookrightarrow F$ induces an isomorphism
\begin{align*}
\iota_* \,\colon \, \hat{\K}^\mathrm{M}_n\mathcal{O} \Nach[\cong] \hat{\K}^\mathrm{M}_n F
\end{align*} on improved Milnor K-theory. \end{thm*}
In the \mbox{$p$-adic} case this is new. We prove the theorem as follows: we show that $\hat{\K}^\mathrm{M}_n\mathcal{O}$ is a divisible abelian group for $n\geq 3$ (Theorem \ref{KM_n_O_divisible}) using results from the appendix of \df{Milnor}'s book \cite{milnor}. Combining this with the unique divisibility of $\hat{\K}^\mathrm{M}_nF$ (proved by \df{Sivitskii} \cite{sivitskii}) and a comparison between algebraic and Milnor K-theory (proved by \df{Nesterenko} and \df{Suslin} \cite{suslin}) yields the theorem.
\emph{Acknowledgements.} I am grateful to my advisor Moritz Kerz for proposing me this interesting topic. Also, I thank Morten L\"uders for useful comments and fruitful discussions and Thomas Fiore for having a look at the language. I want to thank my unknown referee from AKT (though my submission was finally declined). Their reports helped a lot in clarifying the presentation of this paper's content. Last, but not least, I thank Chuck Weibel for helpful comments on the presentation of the paper's content.
\section{Milnor K-theory}
\begin{defin} \label{defin_milnor_k_fields} Let $A$ be a commutative ring with unit, $\T_* A^\times$ be the (non-commutative) tensor algebra of $A^\times$ over $\mathbf{Z}$, and $\StR_* A^\times$ the homogeneous ideal of $\T_* A^\times$ which is generated by the set $\{x\otimes y \in\T_2A^\times \, | \, x+y=1\}$ (the so-called \df{Steinberg relations}). Define the \df{Milnor K-theory} of $A$ to be the graded ring
\begin{align*}
\K^\mathrm{M}_* A := \T_* A^\times/\StR_* A^\times.
\end{align*} For $x_1,\ldots,x_n\in A^\times$ let $\{x_1,\ldots,x_n\}$ denote the image of $x_1\otimes\ldots\otimes x_n$ under the natural homomorphism $\T_nA^\times \nach \K^\mathrm{M}_nA$. Evidently, this yields a functor from commutative rings to abelian groups. \end{defin}
This notion behaves well if $A$ is a field or a local ring with infinite residue field \cite{kerz_gersten_conjecture}. But some nice properties do not hold for arbitrary commutative rings (e.g. that the natural map to algebraic K-theory is an isomorphism in degree 2). For local rings, this defect is remedied by a generalisation due to \df{Gabber} \cite{gabber}, the improved Milnor K-theory, which was mainly developed by \df{Kerz} \cite{kerz}.
\begin{defin} \label{def_rational_functions} Let $A$ be a local ring and $n \in \mathbf{N} := \mathbf{Z}_{\geq0}$. The subset
\begin{align*}
S := \bigl\{ \sum_{\underline{i}\in \mathbf{N}^n} a_{\underline{i}}\cdot \underline{t}^{\underline{i}} \in A[t_1,\ldots,t_n] \,\, \big| \,\, \skp{a_{\underline{i}}\, | \,\underline{i}\in \mathbf{N}^n} = A \bigr\}
\end{align*} of $A[t_1,\ldots,t_n]$ is multiplicatively closed, where $\underline{t}^{\underline{i}} = t_1^{i_1}\cdot\ldots\cdot t_n^{i_n}$. Define the \df{ring of rational functions} (in $n$ variables) to be $A(t_1,\ldots,t_n) := S^{-1} A[t_1,\ldots,t_n]$. We obtain maps $\iota \colon A \nach A(t)$ and $\iota_1,\iota_2 \colon A(t) \nach A(t_1,t_2)$ by mapping $t$ respectively to $t_1$ or $t_2$.
For $n\geq 0$ we define the $n$\df{-th improved Milnor K-theory} of $A$ to be
\begin{align*}
\hat{\K}^\mathrm{M}_nA := \ker\bigl[\K^\mathrm{M}_nA(t) \Nach[\delta^\mathrm{M}_n] \K^\mathrm{M}_nA(t_1,t_2) \bigr],
\end{align*} where $\delta^\mathrm{M}_n := \K^\mathrm{M}_n(\iota_1) - \K^\mathrm{M}_n(\iota_2)$. By definition, we have an exact sequence
\begin{align*} \label{improved_exact_sequence}
0 \Nach \hat{\K}^\mathrm{M}_nA \Nach[\iota_*] \K^\mathrm{M}_nA(t) \Nach[\delta^\mathrm{M}_n] \K^\mathrm{M}_nA(t_1,t_2).
\end{align*} In particular, for $n=0$ we have $\hat{\K}^\mathrm{M}_0 = \ker\bigl(\mathbf{Z}\Nach[0]\mathbf{Z}\bigr) = \mathbf{Z}$. As a direct consequence of the construction we obtain a natural homomorphism
\begin{align*}
\K^\mathrm{M}_*A \Nach \hat{\K}^\mathrm{M}_*A.
\end{align*} \end{defin}
We state some facts about Milnor K-theory of local rings. \begin{prop} \label{iKM_properties} Let $A$ be a local ring. Then: \begin{enumerate}
\item $\hat{\K}^\mathrm{M}_1A \cong A^\times$.
\item For $\alpha \in \hat{\K}^\mathrm{M}_nA$ and $\beta \in \hat{\K}^\mathrm{M}_mA$ we have $\alpha\beta = (-1)^{nm} \beta\alpha$, i.e. $\hat{\K}^\mathrm{M}_*A$ is graded-commutative.
\item For $x\in A^\times$ the equations $\{x,-x\} = 0$ and $\{x,x\} = \{x,-1\}$ hold in $\hat{\K}^\mathrm{M}_2A$.
\item There exists a natural homomorphism
\begin{align*}
\Phi_{\mathrm{MQ}}(A) \,\colon\, \hat{\K}^\mathrm{M}_*A \Nach \K_*A.
\end{align*}
where $\K_*$ denotes algebraic K-theory due to \df{Quillen}. In degree 2, this is an isomorphism.
\item If $A$ is a field, the natural homomorphism
\begin{align*}
\K^\mathrm{M}_*A \Nach \hat{\K}^\mathrm{M}_*A
\end{align*}
is an isomorphism.
\item If $A$ is a finite field, then $\hat{\K}^\mathrm{M}_nA \cong 0$ for $n\geq 2$. \end{enumerate} \end{prop} \begin{proof} Everything is due to \df{Kerz} and references within this proof refer to \cite{kerz} (unless stated otherwise). (i) and (ii) are [Prop.~10~(1), (2)]. The first statement of (iv) follows from [Thm.~7] and the existence of a natural map $\K^\mathrm{M}_*A \to \K_*A$. The second statement of (iv), as well as (iii), is [Prop.~10~(3)] combined with [Prop.~2]. (v) is [Prop.~10~(4)]. (vi) follows from (v) and \cite[Ex.~1.5]{milnor}. Precisely, (ii) and (iii) rely on the corresponding facts for $\K^\mathrm{M}_*$; for proofs of them see e.g. \cite[Prop.~7.1.1, Lem.~7.1.2]{gille}. \end{proof}
\begin{thm}[{\cite[Thm.~A, Thm.~B]{kerz}}] Let $A$ be a local ring. Then: \begin{enumerate}
\item The natural homomorphism
\begin{align*}
\K^\mathrm{M}_*A \Nach \hat{\K}^\mathrm{M}_*A
\end{align*} is surjective.
\item If $A$ has infinite residue field, the homomorphism
\begin{align*}
\K^\mathrm{M}_*A \Nach \hat{\K}^\mathrm{M}_*A
\end{align*}
is an isomorphism. \end{enumerate} \end{thm}
\begin{lemma} \label{KM_DVR_generators} Let $\mathcal{O}$ be a discrete valuation ring with quotient field $F$ and let $\pi\in\mathcal{O}$ be a uniformising element. For $n\geq 1$ we have
\begin{align*}
\K^\mathrm{M}_nF = \bigl< \{\pi,u_2,\ldots,u_n\},\{u_1,\ldots,u_n\} \, \big| \, u_1,\ldots,u_n \in \mathcal{O}^\times \bigr>.
\end{align*} \end{lemma} \begin{proof} This follows straightforwardly from Proposition~\ref{iKM_properties}~(iii) and the fact that every element $x\in F^\times$ can be written as $x=u\pi^k$ for suitable $u\in\mathcal{O}^\times$ and $k\in\mathbf{Z}$. \end{proof}
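For instance, for $n=2$ and $x=u\pi^a$, $y=v\pi^b$ with $u,v\in\mathcal{O}^\times$ and $a,b\in\mathbf{Z}$, bilinearity gives
\begin{align*}
\{x,y\} = \{u,v\} + b\{u,\pi\} + a\{\pi,v\} + ab\{\pi,\pi\},
\end{align*}
and since $\{\pi,\pi\}=\{\pi,-1\}$ and $\{u,\pi\}=-\{\pi,u\}$, this is indeed a $\mathbf{Z}$-linear combination of symbols of the two displayed types.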
\begin{thm}[{\cite[Lem.~2.1]{milnor_k}, cf. \cite[Prop.~7.1.4]{gille}}] \label{tame_symbol} Let $\mathcal{O}$ be a discrete valuation ring with quotient field $F$, discrete valuation $v\, \colon\, F^\times \nach \mathbf{Z}$, and residue field~$\kappa$. Then for every $n\geq 1$ there exists a unique homomorphism (the \df{tame symbol})
\begin{align*}
\partial \,\colon\, \K^\mathrm{M}_nF \Nach \K^\mathrm{M}_{n-1}\kappa
\end{align*} such that for every uniformising element $\pi\in\mathcal{O}$ and all units $u_2,\ldots,u_n\in\mathcal{O}^\times$ we have
\begin{align*}
\partial \bigl( \{\pi,u_2,\ldots,u_n\} \bigr) = \{\bar{u}_2,\ldots,\bar{u}_n\}.
\end{align*} We also write $\partial_\pi$ or $\partial_v$ to indicate the considered valuation. \end{thm}
\begin{prop} \label{tame_symol_exact_sequence} Let $\mathcal{O}$ be a discrete valuation ring with quotient field $F$ and residue field $\kappa$. For $n\geq 1$ we have an exact sequence of groups
\begin{align*}
0 \Nach V_n \Nach \K^\mathrm{M}_nF \Nach[\partial] \K^\mathrm{M}_{n-1}\kappa \Nach 0,
\end{align*} where $V_n$ is the subgroup of $\K^\mathrm{M}_nF$ generated by
\begin{align*}
\bigl\{ \{u_1,\ldots,u_n\} \, \big| \, u_1,\ldots,u_n\in\mathcal{O}^\times \bigr\}.
\end{align*} As a consequence, we have an exact sequence
\begin{align*}
\hat{\K}^\mathrm{M}_n\mathcal{O} \Nach[\iota_*] \hat{\K}^\mathrm{M}_nF \Nach[\hat{\partial}] \hat{\K}^\mathrm{M}_{n-1}\kappa \Nach 0
\end{align*} where $\hat{\partial}$ is induced by $\partial$ and the natural isomorphism of Proposition~\ref{iKM_properties}~(v). \end{prop} \begin{proof} For the exactness of the first sequence, see \cite[Prop.~1.7.1]{gille}. The exactness of the second sequence is shown by a diagram chase in the commutative diagram
\[ \begin{xy}
\xymatrix{
\K^\mathrm{M}_n\mathcal{O} \ar[r]^{\iota_*} \ar@{->>}[d]
& \K^\mathrm{M}_nF \ar[r]^\partial \ar[d]^-\cong
& \K^\mathrm{M}_{n-1}\kappa \ar[r] \ar[d]^-\cong
& 0 \\
\hat{\K}^\mathrm{M}_n\mathcal{O} \ar[r]^{\iota_*}
& \hat{\K}^\mathrm{M}_nF \ar[r]^\hat{\partial}
& \hat{\K}^\mathrm{M}_{n-1}\kappa \ar[r]
& 0 .
}
\end{xy} \] \end{proof}
\begin{prop}[{\cite[Thm.~1.3 a)]{gersten}, cf. \cite[IV.~Cor.~1.13; V.~Cor.~6.9.2]{weibel}}] \label{gersten_conjecture} Let $\mathcal{O}$ be a discrete valuation ring with quotient field $F$ and \emph{finite} residue field $\kappa$. Then for all $n\geq 0$ there is an exact sequence
\begin{align*}
0 \Nach \K_n\mathcal{O} \Nach \K_nF \Nach \K_{n-1}\kappa \Nach 0.
\end{align*} Furthermore, $\K_{2n}\kappa=0$ for $n\geq 1$ \cite{quillen}. \end{prop} In particular, by using Proposition~\ref{iKM_properties}~(iv), there is a short exact sequence
\begin{align} \label{gersten_sequence_milnor_2}
0 \Nach \hat{\K}^\mathrm{M}_2\mathcal{O} \Nach \hat{\K}^\mathrm{M}_2F \Nach \hat{\K}^\mathrm{M}_1\kappa \Nach 0.
\end{align}
The goal is to generalise the sequence \eqref{gersten_sequence_milnor_2} to arbitrary $n\geq 2$. We want to know whether for any discrete valuation ring $\mathcal{O}$ and all $n\geq 2$ the sequences
\begin{align} \label{gersten_sequenz}
0 \Nach \hat{\K}^\mathrm{M}_n\mathcal{O} \Nach[\iota_*] \hat{\K}^\mathrm{M}_nF \Nach[\partial] \hat{\K}^\mathrm{M}_{n-1}\kappa \Nach 0
\end{align} are exact. For algebraic K-theory, \df{Gersten} conjectured that the sequences analogous to \eqref{gersten_sequenz} are exact for all $n\geq 2$ \cite{gersten}. Thus we refer to this question as the Gersten conjecture for improved Milnor K-theory. In the case that $\mathcal{O}$ is complete and $\kappa$ is finite, this will be an immediate consequence of our main result, which relies on an analogous statement by \df{Nesterenko} and \df{Suslin} for local rings with infinite residue field and classical Milnor K-theory \cite[Thm.~4.1]{suslin}. A proof based on their result will be presented in the appendix.
\begin{prop}[{\cite[Prop.~10~(6)]{kerz}}] \label{MQM} Let $A$ be a local ring and $n\in\mathbf{N}$. Then there exists a map
\begin{align*}
\Phi_{\mathrm{MQ}}(A) \,\colon\, \hat{\K}^\mathrm{M}_nA \Nach \K_nA
\end{align*} as well as a map
\begin{align*}
\Phi_{\mathrm{QM}}(A) \,\colon\, \K_nA \Nach \hat{\K}^\mathrm{M}_nA
\end{align*} such that the composition
\begin{align*}
\hat{\K}^\mathrm{M}_nA \Nach[\Phi_{\mathrm{MQ}}(A)] \K_nA \Nach[\Phi_{\mathrm{QM}}(A)] \hat{\K}^\mathrm{M}_nA
\end{align*} is multiplication with $\chi_n := (-1)^{n-1}\cdot (n-1)!$. \end{prop}
\section{Divisibility of $\hat{\K}^\mathrm{M}_n\mathcal{O}$ for $n\geq 3$}
\label{section_div_of_iKM}
In this section we prove that $\hat{\K}^\mathrm{M}_n\mathcal{O}$ is divisible for a complete discrete valuation ring $\mathcal{O}$ with finite residue field and $n\geq 3$. This result will be the key ingredient for the proof of our main result. First, we examine the divisibility prime to $p$.
\begin{defin} For an abelian group $A$ and $m\in\mathbf{Z}$ we set $A/m := A/mA$, where $mA = \{ma \, | \, a\in A\}$. Thus $A/m\cong A\otimes_\mathbf{Z}\mathbf{Z}/m$. \end{defin}
\begin{lemma} \label{mod_m} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field~$F$ and finite residue field $\kappa$ of characteristic $p$. Furthermore, let $m\in\mathbf{Z}$ be such that $(p,m)=1$. Then the canonical projection $\mathcal{O} \twoheadrightarrow \kappa$ induces an isomorphism
\begin{align*}
\bigl(\K^\mathrm{M}_*\mathcal{O}\bigr)/m \Nach[\cong] \bigl(\K^\mathrm{M}_*\kappa\bigr)/m
\end{align*} of graded rings. \end{lemma} \begin{proof} The claim is trivial for $\K^\mathrm{M}_0$. From Hensel's Lemma we obtain
\begin{align*}
\mathcal{O}^\times/m \cong (U_1 \times \kappa^\times)/m \cong (U_1/m) \times (\kappa^\times/m) \cong \kappa^\times/m.
\end{align*} Here $U_1$ denotes the group of principal units of $\mathcal{O}$; it is a pro-$p$ group, hence $m$-divisible because $(p,m)=1$, so $U_1/m$ vanishes. This implies the claim for $\K^\mathrm{M}_1$. Now let $n\geq 2$. Then we have the following commutative diagram with exact rows
\begin{align} \label{diag1}
\begin{xy}
\xymatrix{
0 \ar[r] & \StR_n\mathcal{O}^\times \ar[r] \ar[d] & \T_n\mathcal{O}^\times \ar[r] \ar[d] & \K^\mathrm{M}_n\mathcal{O} \ar[r] \ar[d] & 0 \\
0 \ar[r] & \StR_n\kappa^\times \ar[r] & \T_n\kappa^\times \ar[r] & \K^\mathrm{M}_n\kappa \ar[r] & 0,
}
\end{xy}
\end{align} where $\T_*\mathcal{O}^\times$ and $\T_*\kappa^\times$ denote the tensor algebras of $\mathcal{O}^\times$ and $\kappa^\times$ and
\begin{align*}
\StR_n\mathcal{O}^\times & = \skp{x_1\otimes\ldots\otimes x_n \in \T_n\mathcal{O}^\times \, | \, \exists i\neq j \colon x_i+x_j=1} \\
\StR_n\kappa^\times & = \skp{\bar{x}_1\otimes\ldots\otimes\bar{x}_n \in \T_n\kappa^\times\, | \, \exists i\neq j \colon \bar{x}_i+\bar{x}_j=\bar{1}}.
\end{align*} Note that $\overline{y}=\overline{1}$ in $\kappa$ implies $y\in U_1$. Tensoring the diagram \eqref{diag1} with $\mathbf{Z}/m$, we obtain the following commutative diagram with exact rows
\begin{align} \label{diag2}
\begin{xy}
\xymatrix{
\StR_n\mathcal{O}^\times/m \ar[r] \ar[d]^\alpha & \T_n\mathcal{O}^\times/m \ar[r] \ar[d]^\beta
& \K^\mathrm{M}_n\mathcal{O}/m \ar[r] \ar[d]^\gamma & 0 \\
\StR_n\kappa^\times/m \ar[r] & \T_n\kappa^\times/m \ar[r]
& \K^\mathrm{M}_n\kappa/m \ar[r] & 0.
}
\end{xy}
\end{align} It is easy to check that $(\T_*A)/m\cong\T_*(A/m)$ for an arbitrary abelian group $A$; along with $\mathcal{O}^\times/m\cong\kappa^\times/m$ we see that $\beta$ is an isomorphism.
Now we show that $\alpha$ is surjective. To this end, let $\bar{x}_1 \otimes\ldots\otimes \bar{x}_n \in\StR_n\kappa^\times$ be a generator; without loss of generality $\bar{x}_1+\bar{x}_2=\bar{1}$. Choosing lifts $x_i\in\mathcal{O}^\times$ of the $\bar{x}_i$, we get $x_1+x_2=:u\in U_1$, hence $u^{-1} x_1+u^{-1} x_2=1$ in $\mathcal{O}$. Thus $\xi:= u^{-1} x_1\otimes u^{-1} x_2\otimes x_3\otimes\ldots\otimes x_n\in\StR_n\mathcal{O}^\times$ satisfies $\alpha(\xi) = \bar{x}_1 \otimes\ldots\otimes \bar{x}_n$, since $\bar{u}=\bar{1}$. So $\alpha$ is surjective.
Applying the five lemma to \eqref{diag2} demonstrates that $\gamma$ is an isomorphism. This concludes the proof. \end{proof}
\begin{cor} \label{divisible_coprime} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field $\kappa$ of characteristic $p$. Then for $n\geq 2$ the groups $\K^\mathrm{M}_n\mathcal{O}$ and $\hat{\K}^\mathrm{M}_n\mathcal{O}$ are $m$-divisible if $(p,m)=1$. \end{cor} \begin{proof} For $n\geq 2$ and $m\in\mathbf{Z}$ such that $(p,m)=1$ we have $(\K^\mathrm{M}_n\mathcal{O})/m \cong (\K^\mathrm{M}_n\kappa)/m = 0$, since $\K^\mathrm{M}_n\kappa=0$ for the finite field $\kappa$ (see Proposition~\ref{iKM_properties}~(iv)). Thus $\K^\mathrm{M}_n\mathcal{O}$ is $m$-divisible. The surjective homomorphism $\K^\mathrm{M}_n\mathcal{O} \twoheadrightarrow \hat{\K}^\mathrm{M}_n\mathcal{O}$ shows the claim for $\hat{\K}^\mathrm{M}_n\mathcal{O}$. \end{proof}
Next we want to understand multiplication by $p$ on Milnor K-theory. For that we need the following results.
\begin{prop} \label{delta} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field of characteristic~$p$.
\begin{enumerate}
\item Let $F$ contain all $p$-th roots of unity. Then $(\hat{\K}^\mathrm{M}_2F)/p$ is cyclic of order $p$.
\item If $F$ contains only the trivial $p$-th root of unity, $(\hat{\K}^\mathrm{M}_2F)/p$ is zero.
\end{enumerate} Here we tacitly use the fact that $\hat{\K}^\mathrm{M}_*F\cong \K^\mathrm{M}_*F$ from Proposition~2.3~(v). \end{prop} \begin{proof} This is proved in \cite{milnor}: (i) is [Cor.~A.12], and (ii) follows from [Lem.~A.6] together with [Lem.~A.4]. \end{proof}
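As a basic illustration of this dichotomy (recorded only as an example and not needed in the sequel): the field $\mathbf{Q}_2$ contains the second roots of unity $\pm 1$, whereas for odd $p$ the field $\mathbf{Q}_p$ contains only the trivial $p$-th root of unity. Accordingly,
\begin{align*}
(\hat{\K}^\mathrm{M}_2\mathbf{Q}_2)/2 \cong \mathbf{Z}/2 \qquad\text{and}\qquad (\hat{\K}^\mathrm{M}_2\mathbf{Q}_p)/p = 0 \quad\text{for odd } p,
\end{align*}
the nontrivial class in the first case being represented by the symbol $\{-1,-1\}$.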
To treat a special case in our proof below, we need a norm map for a non-\'{e}tale{} extension which is not covered literally by \df{Kerz}' work. However, everything needed is basically already contained in \cite{kerz_gersten_conjecture}.
\begin{prop} \label{norm} Let $A$ be a local and factorial domain and $\iota\colon A\hookrightarrow B$ an extension of local rings such that $B\cong A[X]/\skp{\pi}$ for a monic irreducible polynomial $\pi\in A[X]$. Then there exists a transfer morphism, the \df{norm map},
\begin{align*}
N_{B/A} \,\colon\, \hat{\K}^\mathrm{M}_*B \Nach \hat{\K}^\mathrm{M}_*A
\end{align*} satisfying the projection formula
\begin{align} \label{projection_formula}
N_{B/A}\bigl(\{\iota_*(x),y\}\bigr) = \bigl\{x,N_{B/A}(y)\bigr\}
\end{align} for all $x\in\hat{\K}^\mathrm{M}_*A$ and $y\in\hat{\K}^\mathrm{M}_*B$. \end{prop} \begin{proof} This will be done in the appendix, see \ref{norm_appendix}. \end{proof}
\begin{prop}[{Bass, Tate, cf.~\cite[Cor.~A.15]{milnor}}] \label{norm_surjective} Let $\mathcal{O} \subset \mathcal{O}'$ be complete discrete valuation rings with quotient fields $F \subset F'$ and finite residue fields. Then the norm map
\begin{align*}
N_{F'/F} \,\colon\, \hat{\K}^\mathrm{M}_2F' \twoheadrightarrow \hat{\K}^\mathrm{M}_2F
\end{align*} is surjective. \end{prop}
With this we can prove the missing $p$-divisibility.
\begin{thm} \label{KM_n_O_divisible} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field $\kappa$ of characteristic $p$, and let $n\geq 3$. Then $\hat{\K}^\mathrm{M}_n\mathcal{O}$ is divisible. \end{thm} \begin{proof} The homomorphism
\begin{align*}
\K^\mathrm{M}_3\mathcal{O} \otimes \K^\mathrm{M}_{n-3}\mathcal{O} &\Nach \K^\mathrm{M}_n\mathcal{O} \\
\{x_1,x_2,x_3\}\otimes\{x_4,\ldots,x_n\} &\mapsto \{x_1,\ldots,x_n\}
\end{align*} is surjective and for every $n\geq 0$ the homomorphism $\K^\mathrm{M}_n\mathcal{O} \nach \hat{\K}^\mathrm{M}_n\mathcal{O}$ is also surjective by Proposition \ref{iKM_properties}~(8). By the commutative diagram
\[ \begin{xy}
\xymatrix{
\K^\mathrm{M}_3\mathcal{O} \otimes_\mathbf{Z} \K^\mathrm{M}_{n-3}\mathcal{O} \ar@{->>}[r] \ar@{->>}[d]
& \K^\mathrm{M}_n\mathcal{O} \ar@{->>}[d]
\\
\hat{\K}^\mathrm{M}_3\mathcal{O} \otimes_\mathbf{Z} \hat{\K}^\mathrm{M}_{n-3}\mathcal{O} \ar[r]
& \hat{\K}^\mathrm{M}_n\mathcal{O},
}
\end{xy} \] we see that the lower map is also surjective and hence its target is divisible if $\hat{\K}^\mathrm{M}_3\mathcal{O}$ is divisible. Thus it suffices to show that multiplication with $\ell$ on $\hat{\K}^\mathrm{M}_3\mathcal{O}$ is surjective for every prime number $\ell$. Equivalently, we can show that $(\hat{\K}^\mathrm{M}_3\mathcal{O})/\ell=0$ for every prime $\ell$. The case $\ell\neq p$ is the assertion of Corollary~\ref{divisible_coprime}. All in all, it remains to show that $(\hat{\K}^\mathrm{M}_3\mathcal{O})/p=0$.
\noindent According to Proposition~\ref{gersten_conjecture} we have an exact sequence
\begin{align*}
0 \Nach \hat{\K}^\mathrm{M}_2\mathcal{O} \Nach[\iota_*] \hat{\K}^\mathrm{M}_2F \Nach[\hat{\partial}] \hat{\K}^\mathrm{M}_1\kappa \Nach 0.
\end{align*} Tensoring with $\mathbf{Z}/p$ leads to an exact sequence
\begin{align*}
\underbrace{\Tor_1^\mathbf{Z}(\mathbf{Z}/p,\hat{\K}^\mathrm{M}_1\kappa)}_{=0} \Nach (\hat{\K}^\mathrm{M}_2\mathcal{O})/p \Nach (\hat{\K}^\mathrm{M}_2F)/p \Nach \underbrace{\hat{\K}^\mathrm{M}_1\kappa/p}_{=0},
\end{align*} as $\hat{\K}^\mathrm{M}_1\kappa \cong \kappa^\times$ is a finite cyclic group with order prime to $p$. By Proposition~\ref{delta}, we get accordingly
\begin{align} \label{identification}
(\hat{\K}^\mathrm{M}_2\mathcal{O})/p \cong (\hat{\K}^\mathrm{M}_2F)/p \cong
\begin{cases} \mathbf{Z} /p & \text{(if }F\text{ contains the }p\text{-th roots of unity)}
\\ 0& \text{(else)}. \end{cases}
\end{align} If this group vanishes, we are done, since $\hat{\K}^\mathrm{M}_3\mathcal{O}$ is generated by products $\{x\}\cdot\{y,z\}$ with $\{y,z\}\in\hat{\K}^\mathrm{M}_2\mathcal{O}$. Thus we only have to consider the case in which $F$ contains the $p$-th roots of unity.
\noindent Now we suppose for the moment that $-1$ has a $p$-th root $\sqrt[p]{-1}$ in $F$. In fact, this is always the case if $p\neq 2$ or if $F$ is a finite extension of $\mathbf{F}_2((t))$. Let $\{u,v,w\}\in\hat{\K}^\mathrm{M}_3\mathcal{O}$ and let square brackets denote residue classes modulo $p$. If $[u,x]=0$ in $(\hat{\K}^\mathrm{M}_2\mathcal{O})/p\cong\mathbf{Z}/p$ for all $x\in\mathcal{O}^\times$, then
\begin{align*}
[u,v,w] = [u,v]\cdot[w] = 0 \in (\hat{\K}^\mathrm{M}_3\mathcal{O})/p.
\end{align*} Otherwise, there is an $x\in\mathcal{O}^\times$ such that $[u,x]$ is a generator of $(\hat{\K}^\mathrm{M}_2\mathcal{O})/p$. Let $[v,w]=k\cdot[u,x]$ for a suitable $k\in\mathbf{Z}$. Then
\begin{align*}
[u,v,w] = k\cdot[u,u,x] = k\cdot[-1,u,x] = kp\cdot[\sqrt[p]{-1},u,x] = 0 \in (\hat{\K}^\mathrm{M}_3\mathcal{O})/p.
\end{align*}
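\noindent For the reader's convenience we recall the standard identity behind the second equality: since $\{u,-u\}=0$ in Milnor K-theory,
\begin{align*}
0 = \{u,-u\} = \{u,-1\} + \{u,u\}, \qquad\text{hence}\qquad \{u,u\} = -\{u,-1\} = \{u,(-1)^{-1}\} = \{u,-1\}.
\end{align*}
Thus $[u,u,x]$ coincides with $[-1,u,x]$ up to a sign coming from graded-commutativity, and this sign is irrelevant here since the class is annihilated in the last step anyway; moreover $[-1]=p\cdot[\sqrt[p]{-1}]$ because $-1=(\sqrt[p]{-1})^p$.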
\noindent If, in contrast, $-1$ does not have a $p$-th root in $F$, we have $p=2$ and $F$ is a finite extension of $\mathbf{Q}_2$. In particular, $\mathcal{O}$ is a local and factorial domain (being a discrete valuation ring). We consider the finite field extension $F' := F[\sqrt{-1}]$. By \cite[ch.~1, \S 6~(ii)]{serre}, the valuation ring of $F'$ is $\mathcal{O}':=\mathcal{O}[\sqrt{-1}]$. This allows us to apply Proposition~\ref{norm}.
Being a finite extension of the complete field $F$, the field $F'$ is itself complete. The norm map $N_{F'/F} \colon \hat{\K}^\mathrm{M}_2F'\to \hat{\K}^\mathrm{M}_2F$ is surjective by Proposition~\ref{norm_surjective} and induces a surjective map $\tilde{N}_{F'/F} \colon (\hat{\K}^\mathrm{M}_2F')/p \twoheadrightarrow (\hat{\K}^\mathrm{M}_2F)/p$. This in turn induces a surjective map
\begin{align*}
\tilde{N}_{\mathcal{O}'/\mathcal{O}} \, \colon \,(\hat{\K}^\mathrm{M}_2\mathcal{O}')/p \twoheadrightarrow (\hat{\K}^\mathrm{M}_2\mathcal{O})/p
\end{align*} on the improved Milnor K-theory of the corresponding valuation rings, via the identification \eqref{identification} applied to both $\mathcal{O}$ and $\mathcal{O}'$. Let $[x,y,z]\in(\hat{\K}^\mathrm{M}_3\mathcal{O})/p$. Decompose $[x,y,z] = [x]\cdot[y,z]$ with $[x]\in(\hat{\K}^\mathrm{M}_1\mathcal{O})/p$ and $[y,z]\in(\hat{\K}^\mathrm{M}_2\mathcal{O})/p$. We find a preimage $\alpha\in(\hat{\K}^\mathrm{M}_2\mathcal{O}')/p$ of $[y,z]$ under $\tilde{N}_{\mathcal{O}'/\mathcal{O}}$. The projection formula \eqref{projection_formula}, reduced modulo $p$, yields
\begin{align*}
\tilde{N}_{\mathcal{O}'/\mathcal{O}}\bigl(\iota_*([x])\cdot \alpha\bigr) = [x]\cdot \tilde{N}_{\mathcal{O}'/\mathcal{O}}(\alpha) = [x]\cdot[y,z]=[x,y,z],
\end{align*} where $\iota\colon \mathcal{O}\hookrightarrow\mathcal{O}'$ is the inclusion. Thus the norm map $(\hat{\K}^\mathrm{M}_3\mathcal{O}')/p \to (\hat{\K}^\mathrm{M}_3\mathcal{O})/p$ is onto. By the previous argument, applied to $\mathcal{O}'$ (note that $\sqrt{-1}\in F'$), its domain is zero, hence so is its target. This concludes the proof. \end{proof}
\section{The isomorphism $\hat{\K}^\mathrm{M}_n\mathcal{O}\Nach[\cong]\hat{\K}^\mathrm{M}_n F$ for $n\geq 3$}
\label{section_main_result}
Our last ingredient is the following result by \df{Sivitskii}:
\begin{thm}[{\cite[p.~562]{sivitskii}}] \label{KMF_uniquely_divisible} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field, and let $n\geq 3$. Then $\K^\mathrm{M}_nF$ is an uncountable, uniquely divisible group. \end{thm}
The uncountability was found by \df{Bass} and \df{Tate} \cite{bass_tate}, and divisibility was shown by \df{Milnor} \cite[Ex.~1.7]{milnor_k}. The torsion-freeness was originally proved by \df{Sivitskii} \cite{sivitskii} by explicit calculations; there is also a more structural proof using the Norm Residue Theorem (formerly known as the Bloch--Kato conjecture and proved by \df{Rost} and \df{Voevodsky}); see \cite[VI.~Prop.~7.1]{weibel}. Now we are able to prove our main result.
\begin{thm} \label{thm_main_result} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field, and let $n\geq 3$. Then the inclusion $\iota\,\colon\,\mathcal{O}\hookrightarrow F$ induces an isomorphism
\begin{align*}
\iota_* \,\colon \, \hat{\K}^\mathrm{M}_n\mathcal{O} \Nach[\cong] \hat{\K}^\mathrm{M}_n F
\end{align*} on improved Milnor K-theory. \end{thm} \begin{proof} \noindent\textbf{Surjectivity.} Let $\kappa$ be the residue field of $\mathcal{O}$. According to Proposition~\ref{tame_symol_exact_sequence} we have an exact sequence
\begin{align*}
\hat{\K}^\mathrm{M}_n\mathcal{O} \Nach[\iota_*] \hat{\K}^\mathrm{M}_nF \Nach[\hat{\partial}] \hat{\K}^\mathrm{M}_{n-1}\kappa \Nach 0.
\end{align*} Because $\kappa$ is a finite field and $n-1\geq 2$, we see that $\hat{\K}^\mathrm{M}_{n-1}\kappa \cong \K^\mathrm{M}_{n-1}\kappa = 0$. Thus $\iota_*$ is surjective.
\noindent\textbf{Injectivity.} Consider the commutative diagram
\begin{align} \label{main_result_diagram}
\begin{xy}
\xymatrix{
\hat{\K}^\mathrm{M}_n\mathcal{O} \ar[rr]^{\iota_*} \ar[d]^{\Phi_{\mathrm{MQ}}(\mathcal{O})} \ar@(dl,ul)[dd]_{\chi_n}
&\hspace{0cm}
& \hat{\K}^\mathrm{M}_nF \ar@{^{(}->}[d]_{\Phi_{\mathrm{MQ}}(F)} \ar@(dr,ur)[dd]^{\chi_n}
\\
\K_n\mathcal{O} \ar@{^{(}->}[rr]^{\K_n(\iota)} \ar[d]^{\Phi_{\mathrm{QM}}(\mathcal{O})}
&& \K_nF \ar[d]_{\Phi_{\mathrm{QM}}(F)}
\\
\hat{\K}^\mathrm{M}_n\mathcal{O} \ar[rr]^{\iota_*}
&& \hat{\K}^\mathrm{M}_nF,
}
\end{xy}
\end{align} where $\K_n(\iota)$ is injective due to Proposition~\ref{gersten_conjecture}. Let $\alpha\in\ker(\iota_*)$. As $\hat{\K}^\mathrm{M}_n\mathcal{O}$ is divisible by Theorem~\ref{KM_n_O_divisible}, we find $\beta\in \hat{\K}^\mathrm{M}_n\mathcal{O}$ such that $\alpha=\chi_n \beta$, where $\chi_n = (-1)^{n-1} (n-1)!$. Hence
\begin{align*}
0 = \iota_*(\alpha) = \iota_*(\chi_n \beta) = \chi_n\iota_*(\beta).
\end{align*} We have $\beta\in\ker(\iota_*)$ because $\hat{\K}^\mathrm{M}_nF \cong \K^\mathrm{M}_nF$ is uniquely divisible, in particular torsion-free, by Theorem~\ref{KMF_uniquely_divisible}. Since $\K_n(\iota)$ is injective, a diagram chase shows $\ker(\iota_*)\subseteq\ker(\Phi_{\mathrm{MQ}}(\mathcal{O}))$, and the latter is clearly contained in the kernel of multiplication by $\chi_n$. We conclude that $\alpha = \chi_n \beta = 0$. So $\iota_*$ is injective, hence an isomorphism. \end{proof}
\begin{remark} In fact, one could simplify this proof by using only $2$-divisibility of $\hat{\K}^\mathrm{M}_3\mathcal{O}$. With that one could prove the theorem first for $n=3$ and obtain full divisibility of $\hat{\K}^\mathrm{M}_3\mathcal{O}$ for free from Theorem~\ref{KMF_uniquely_divisible}; afterwards one could carry out the proof for higher $n$. In residue characteristic $p\neq 2$, this would spare the proof of Theorem~\ref{KM_n_O_divisible}. \end{remark}
Immediately from Theorem~\ref{thm_main_result} together with Theorem~\ref{KMF_uniquely_divisible} (respectively their proofs) we obtain the following.
\begin{cor} Let $\mathcal{O}$ be a complete discrete valuation ring with finite residue field. Then: \begin{enumerate}
\item The canonical map $\hat{\K}^\mathrm{M}_*\mathcal{O} \to \K_*\mathcal{O}$ is injective.
\item $\hat{\K}^\mathrm{M}_n\mathcal{O}$ is uniquely divisible for $n\geq 3$. \end{enumerate} \end{cor}
Furthermore, we are now able to generalise the exact sequence \eqref{gersten_sequence_milnor_2}.
\begin{cor} \label{cor_gersten_conjecture} Let $\mathcal{O}$ be a complete discrete valuation ring with quotient field $F$ and finite residue field $\kappa$. Then we have for each $n\geq 1$ an exact sequence
\begin{align*}
0 \Nach \hat{\K}^\mathrm{M}_n\mathcal{O} \Nach[\iota_*] \hat{\K}^\mathrm{M}_nF \Nach \hat{\K}^\mathrm{M}_{n-1}\kappa \Nach 0.
\end{align*} \end{cor}
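For instance, in degree $n=1$ this is nothing but the familiar exact sequence
\begin{align*}
0 \Nach \mathcal{O}^\times \Nach F^\times \Nach[v] \mathbf{Z} \Nach 0,
\end{align*}
where $v$ denotes the valuation, since $\hat{\K}^\mathrm{M}_1\mathcal{O}\cong\mathcal{O}^\times$, $\hat{\K}^\mathrm{M}_1F\cong F^\times$ and $\hat{\K}^\mathrm{M}_0\kappa\cong\mathbf{Z}$.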
\appendix \section{The ring of rational functions}
The ring of rational functions plays a crucial role for the definition of improved Milnor K-theory. This section's content relies on \df{Kerz}'s paper \cite{kerz} and elaborates some of the proofs. We start by recalling Definition~\ref{def_rational_functions} from above.
\begin{defin} \label{def_ring_rational_functions_appendix} Let $A$ be a commutative ring and $n \in \mathbf{N}$. The subset
\begin{align*}
S := \bigl\{ \sum_{\underline{i}\in \mathbf{N}^n} a_{\underline{i}}\cdot \underline{t}^{\underline{i}} \in A[t_1,\ldots,t_n] \,\, \big| \,\, \skp{a_{\underline{i}}\, | \,\underline{i}\in \mathbf{N}^n} = A \bigr\}
\end{align*} is multiplicatively closed, where $\underline{t}^{\underline{i}} = t_1^{i_1}\cdot\ldots\cdot t_n^{i_n}$. Define the \df{ring of rational functions} (in $n$ variables) to be
\begin{align*}
A(t_1,\ldots,t_n) := S^{-1} A[t_1,\ldots,t_n].
\end{align*} We obtain maps $\iota \colon A \nach A(t)$ and $\iota_1,\iota_2 \colon A(t) \nach A(t_1,t_2)$ by mapping $t$ respectively to $t_1$ or $t_2$. \end{defin}
\begin{remark} For a field $F$ and $n\geq 1$ we have $F(t_1,\ldots,t_n) = \Frac(F[t_1,\ldots,t_n])$. \end{remark}
\begin{lemma}[{cf. \cite[p.~6]{kerz}}] \label{A(t)_local_infinite} Let $(A,\mathfrak{m},\kappa)$ be a local ring and $n\geq 1$. Then the ring $A(t_1,\ldots,t_n)$ is a local ring with maximal ideal $\mathfrak{m} A(t_1,\ldots,t_n)$ and infinite residue field $\kappa(t_1,\ldots,t_n)$. \end{lemma} \begin{proof} Set $B := A[t_1,\ldots,t_n]$ and $\mathfrak{n} := \mathfrak{m} A[t_1,\ldots,t_n]$. Obviously, $\mathfrak{n}\vartriangleleft B$ is a prime ideal and
\begin{align*}
S & = \bigl\{ \sum_{\underline{i}\in \mathbf{N}^n} a_{\underline{i}}\cdot \underline{t}^{\underline{i}} \in A[t_1,\ldots,t_n]
\, | \, \skp{a_{\underline{i}}\, | \,\underline{i}\in \mathbf{N}^n} = A \bigr\} \\
& = \bigl\{ \sum_{\underline{i}\in \mathbf{N}^n} a_{\underline{i}}\cdot \underline{t}^{\underline{i}} \in A[t_1,\ldots,t_n]
\, | \, \exists \underline{i}\in \mathbf{N}^n \colon a_{\underline{i}} \notin \mathfrak{m} \bigr\} \\
& = B \backslash \mathfrak{n},
\end{align*} hence $A(t_1,\ldots,t_n) = B_\mathfrak{n}$ is local with maximal ideal $\mathfrak{n}B_\mathfrak{n}$. Its residue field is
\begin{align*}
B_\mathfrak{n}/\mathfrak{n}B_\mathfrak{n}
\cong \bigl( B/\mathfrak{n} \bigr)_\mathfrak{n}
= \Frac\bigl( \kappa[t_1,\ldots,t_n]\bigr)
= \kappa(t_1,\ldots,t_n).
\end{align*} \end{proof}
\begin{lemma} \label{lem_going_up} Let $(A,\mathfrak{m})$ be a local ring and $A \hookrightarrow B$ an integral ring extension such that $\mathfrak{m} B\vartriangleleft B$ is a prime ideal. Then $B$ is also local with maximal ideal $\mathfrak{m} B$. \end{lemma} \begin{proof} According to the Going-Up theorem (see \cite[pt.~I, ch.~2, Thm.~5]{matsumura}) the maximal ideals of $B$ are precisely the prime ideals of $B$ lying over $\mathfrak{m}$. Since $\mathfrak{m} B$ is prime and $\mathfrak{m}\subseteq \mathfrak{m} B\cap A\subsetneq A$, the ideal $\mathfrak{m} B$ lies over $\mathfrak{m}$ and is therefore maximal. Furthermore, every prime ideal $\mathfrak{p}\vartriangleleft B$ with $\mathfrak{p}\cap A \supseteq \mathfrak{m}$ must contain $\mathfrak{m} B$ and hence equals $\mathfrak{m} B$. Thus $\mathfrak{m} B$ is the only maximal ideal of $B$. \end{proof}
\begin{lemma} \label{localisation_injecitvie} Let $A$ be a ring, $S\subset A$ a multiplicatively closed subset and let $\iota \colon A \to S^{-1} A$ be the canonical homomorphism. If $S$ does not contain any zero-divisors, then $\iota$ is injective. In this case a ring homomorphism $f_S \colon S^{-1} A\nach B$ is injective if and only if the composition $f := f_S\circ\iota \colon A \to B$ is injective. \end{lemma} \begin{proof} If $f$ is injective and $0 = f_S(\frac{a}{s})=f(a)f(s)^{-1}$, then $f(a)=0$ since $f(s)^{-1}$ is a unit, hence $a=0$ and thus $\frac{a}{s}=0$. The remaining assertions are straightforward and omitted. \end{proof}
\begin{prop} \label{ring_of_fractions_base_change} Let $A \hookrightarrow B$ be a flat, local (i.e.~the maximal ideal of $A$ generates the maximal ideal of $B$), and integral extension of local rings, and $n\geq 1$. Then the canonical homomorphism
\begin{align*}
\varphi \,\colon\, B \otimes_A A(t_1,\ldots,t_n) \Nach[\cong] B(t_1,\ldots,t_n)
\end{align*} is an isomorphism. In particular, the induced homomorphism $A(t_1,\ldots,t_n)\nach B(t_1,\ldots,t_n)$ is an extension of local rings with infinite residue fields. \end{prop} \begin{proof} For convenience, we write ``$t$'' for ``$t_1,\ldots,t_n$''. Let $\mathfrak{m} \vartriangleleft A$ and $\mathfrak{n} \vartriangleleft B$ be the maximal ideals, so that $\mathfrak{n}=\mathfrak{m} B$. We set $\mathfrak{m}_t := \mathfrak{m} A[t]$ and $\mathfrak{n}_t := \mathfrak{n} B[t] = \mathfrak{m} B[t]$. We use some facts about ring maps corresponding to the permanence properties of schemes as stated in \cite[Appendix~C]{gw}.
\noindent\textbf{Injectivity.} The flat base change $A[t]\hookrightarrow B[t]$ is injective. According to Lemma~\ref{localisation_injecitvie}, the localisation $B[t]\hookrightarrow B(t)$ is injective. Applying again Lemma~\ref{localisation_injecitvie}, this time to $A[t] \hookrightarrow A(t)$, and using the commutative diagram
\[ \begin{xy}
\xymatrix{
A[t] \ar@{^{(}->}[r] \ar@{^{(}->}[d] & B[t] \ar@{^{(}->}[d] \\
A(t) \ar[r] & B(t),
}
\end{xy} \] we see that $A(t) \hookrightarrow B(t)$ is injective, hence also $B\otimes_AA(t) \hookrightarrow B\otimes_AB(t)$ since $B$ is a flat $A$-module. Thus the commutative diagram
\[ \begin{xy}
\xymatrix{
B\otimes_AA(t) \ar[rr]^\varphi \ar@{^{(}->}[dr] && B(t) \ar@{^{(}->}[dl] \\
&B\otimes_AB(t)
}
\end{xy} \] tells us that $\varphi$ is injective.
\noindent\textbf{Surjectivity.} The base change $A(t)\hookrightarrow B\otimes_AA(t)$ is an integral extension. By Lemma~\ref{lem_going_up}, $B\otimes_AA(t)$ is a local ring with maximal ideal $\mathfrak{m}_t(B\otimes_AA(t))$. Furthermore, we have a commutative diagram
\[ \begin{xy}
\xymatrix{
B\otimes_AA(t) \ar[r]^-{\cong} \ar[d]_\varphi
& B\otimes_A \bigl( A[t]\otimes_{A[t]} A(t) \bigr) \ar[r]^{\cong}
& \bigl( B\otimes_A A[t] \bigr) \otimes_{A[t]} A(t) \ar[d]^-\cong
\\ B(t) && B[t] \otimes_{A[t]} A(t) \ar[ll]_{\varphi'}
}
\end{xy} \] Hence surjectivity for $\varphi$ is equivalent to surjectivity for $\varphi'$. Consider the homomorphism
\begin{align*}
\psi \colon B[t] &\Nach B[t]\otimes_{A[t]}A(t) \\
f &\mapsto f\otimes 1
\end{align*} If $f = \sum_{i=0}^d b_it^i \in B[t]\backslash\mathfrak{n}_t$, then there is a $j$ such that $b_j$ is a unit in $B$. Hence the residue class of $f$ in $\bigl(B\otimes_AA(t)\bigr)/\mathfrak{m}_t\bigl(B\otimes_AA(t)\bigr)\cong (B/\mathfrak{m}B)\otimes_\kappa\kappa(t)$ is nonzero, and since $B[t]\otimes_{A[t]}A(t)\cong B\otimes_AA(t)$ is local (see above), we get $\psi(f) \in\bigl(B[t]\otimes_{A[t]}A(t)\bigr)^\times$. Thus the universal property of localisation yields a homomorphism
\begin{align*}
\psi' \colon B(t) \Nach B[t]\otimes_{A[t]}A(t)
\end{align*} such that $\psi = \psi'\circ\iota$ where $\iota \colon B[t]\hookrightarrow B(t)$. The commutative diagram
\[ \begin{xy}
\xymatrix{
B[t] \ar[rr]^\iota \ar[dd]_{\id_{B[t]}} \ar[dr]^\psi
&& B(t) \ar[dl]_{\psi'} \ar@{.>}[dd]^{\id_{B(t)}} \\
& B[t]\otimes_{A[t]}A(t) \ar[dr]^{\varphi'} \\
B[t] \ar[rr]^\iota && B(t)
}
\end{xy} \] and the uniqueness for lifts to the localisation show $\varphi'\circ\psi'=\id_{B(t)}$. So $\varphi'$ is surjective. \end{proof}
\section{Milnor K-theory and algebraic K-theory}
This section is dedicated to \df{Kerz}'s result Proposition~\ref{MQM}, which we restate here.
\begin{thm}[{\cite[Prop.~10~(6)]{kerz}}] \label{MQM_appendix} Let $A$ be a local ring and $n\in\mathbf{N}$. Then there exists a natural map
\begin{align*}
\Phi_{\mathrm{MQ}}(A) \,\colon\, \hat{\K}^\mathrm{M}_nA \Nach \K_nA
\end{align*} as well as a natural map
\begin{align*}
\Phi_{\mathrm{QM}}(A) \,\colon\, \K_nA \Nach \hat{\K}^\mathrm{M}_nA
\end{align*} such that the composition
\begin{align*}
\hat{\K}^\mathrm{M}_nA \Nach[\Phi_{\mathrm{MQ}}(A)] \K_nA \Nach[\Phi_{\mathrm{QM}}(A)] \hat{\K}^\mathrm{M}_nA
\end{align*} is multiplication with $\chi_n := (-1)^{n-1}\cdot (n-1)!$. \end{thm}
As there are few details given in \cite{kerz}, we present more of them in this section. The statement relies on an analogous statement by \df{Nesterenko} and \df{Suslin} for local rings with infinite residue field and classical Milnor K-theory.
\noindent For any commutative ring $A$ there exists a graded-commutative product map $K_1(A) \otimes\ldots\otimes K_1(A) \nach \K_n(A)$ such that the Steinberg relations are satisfied \cite[IV.1.10.1]{weibel}. Thus there exists a graded map
\begin{align*}
\Phi_{\mathrm{MQ}}'(A) \,\colon\, \K^\mathrm{M}_*A \Nach \K_*A.
\end{align*} In the case of a local ring with an infinite residue field, there is also a map in the other direction.
\begin{thm}[{\cite[Thm.~4.1]{suslin}}] \label{suslin_appendix_thm} Let $A$ be a local ring with infinite residue field and $n\in\mathbf{N}$. Then there exists a natural map
\begin{align*}
\Phi_{\mathrm{QM}}'(A) \,\colon\, \K_nA \Nach \K^\mathrm{M}_n(A)
\end{align*} such that the composition
\begin{align*}
\K^\mathrm{M}_nA \Nach[\Phi_{\mathrm{MQ}}'(A)] \K_nA \Nach[\Phi_{\mathrm{QM}}'(A)] \K^\mathrm{M}_nA
\end{align*} is multiplication with $\chi_n = (-1)^{n-1}\cdot (n-1)!$. \end{thm}
We say a few words on the construction of the map $\Phi_{\mathrm{QM}}'(A)$. By \cite[Thm.~3.25]{suslin} there are natural isomorphisms
\begin{align*}
\Ho_n(\GL_n(A)) &\Nach[\cong] \Ho_n(\GL(A)) \text{ and} \\
\Ho_n(\GL_n(A))/\Ho_n(\GL_{n-1}(A)) &\Nach[\cong] \K^\mathrm{M}_nA
\end{align*} for a local ring $A$ with infinite residue field. So a homomorphism $\Phi_{\mathrm{QM}}'(A)$ can be defined as the composition of homomorphisms such that the following diagram commutes.
\small
\[ \begin{xy}
\xymatrix{
\K_nA \ar@{.>}[d]^{\Phi_{\mathrm{QM}}'(A)} & \ar@{=}[l] \pi_n(\BGL(A)^+) \ar[r]^-{\text{Hurewicz}}
& \Ho_n(\BGL(A)^+) \ar@{=}[r]& \Ho_n(\GL(A)) \\
\K^\mathrm{M}_nA && \ar[ll]_-\cong \Ho_n(\GL_n(A))/\Ho_n(\GL_{n-1}(A))
& \Ho_n(\GL_n(A)) \ar[l] \ar[u]^-\cong
}
\end{xy} \]
\normalsize
\noindent For passing from the case of local rings with infinite residue fields to the case of arbitrary local rings, we consider the construction of improved Milnor K-theory in a more general setting. Given a functor $E$ from rings to abelian groups, we can associate to it an \emph{improved} functor $\hat{E}$ given by
\begin{align*}
\hat{E}(A) := \ker\bigl[ E(A(t)) \Nach[\delta] E(A(t_1,t_2)) \bigr]
\end{align*} where $A(t)$ and $A(t_1,t_2)$ are rings of rational functions and $\delta$ is the map $E(\iota_1)-E(\iota_2)$ (cf.~Definition~\ref{def_ring_rational_functions_appendix}). This construction is essentially due to \df{Gabber} \cite{gabber} and was investigated by \df{Kerz} \cite{kerz}. The latter proved \cite[Prop.~9]{kerz} that the canonical map $E(A) \nach \hat{E}(A)$ is an isomorphism if one of the following two conditions holds:
\begin{enumerate}
\item The ring $A$ is local with infinite residue field.
\item The ring $A$ is local and $E$ admits compatible norm maps for all finite \'{e}tale{} extensions of local rings (cf. \cite[p.~5]{kerz}).
\end{enumerate}
Algebraic K-theory admits those norm maps (cf.~\cite[IV.6.3.2]{weibel} where they are called ``transfer maps''). Thus for any local ring $A$ we have a natural isomorphism
\begin{align} \label{quillen_hat_iso}
\K_*A \Nach[\cong] \hat{\K}_*A.
\end{align}
\begin{proof}[Proof of Theorem~\ref{MQM_appendix}] Let $A$ be a local ring. Then the rings $A(t)$ and $A(t_1,t_2)$ are local rings with infinite residue field, see Lemma~\ref{A(t)_local_infinite}. By naturality of the maps $\Phi_{\mathrm{MQ}}'$ and $\Phi_{\mathrm{QM}}'$ together with the isomorphism~\eqref{quillen_hat_iso}, we obtain the desired maps via the universal property of the kernel as indicated in the diagram
\[ \begin{xy}
\xymatrix{
& \hat{\K}^\mathrm{M}_*A \ar@{^{(}->}[r]^{\text{kernel}} \ar@{.>}[d]_{\Phi_{\mathrm{MQ}}(A)}
& \K^\mathrm{M}_*A(t) \ar[r]^{\delta^\mathrm{M}_*} \ar[d]^{\Phi_{\mathrm{MQ}}'(A(t))}
& \K^\mathrm{M}_*A(t_1,t_2) \ar[d]^{\Phi_{\mathrm{MQ}}'(A(t_1,t_2))}
\\
& \K_*A \ar@{.>}[d]_{\Phi_{\mathrm{QM}}(A)} \ar@{^{(}->}[r]^{\text{kernel}}
& \K_*A(t) \ar[d]^{\Phi_{\mathrm{QM}}'(A(t))} \ar[r]^{\delta^\mathrm{Q}_*}
& \K_*A(t_1,t_2) \ar[d]^{\Phi_{\mathrm{QM}}'(A(t_1,t_2))}
\\
& \hat{\K}^\mathrm{M}_*A \ar@{^{(}->}[r]^{\text{kernel}}
& \K^\mathrm{M}_*A(t) \ar[r]^{\delta^\mathrm{M}_*}
& \K^\mathrm{M}_*A(t_1,t_2).
}
\end{xy} \] By Theorem~\ref{suslin_appendix_thm}, the middle vertical composition is multiplication with the integer $\chi_n$, hence this also holds for the left vertical composition. \end{proof}
\section{The norm map} There is a norm map for Milnor K-theory of fields. More precisely, for every finite field extension $F \hookrightarrow F'$ there is a homomorphism $N_{F'/F} \colon \K^\mathrm{M}_*F' \nach \K^\mathrm{M}_*F$ of graded $\K^\mathrm{M}_*F$-modules such that the following two conditions hold.
\begin{enumerate}
\item \textbf{Functoriality}. For every tower $F\hookrightarrow F'\hookrightarrow F''$ of finite field extensions we have $N_{F/F}=\id$ and $N_{F'/F}\circ N_{F''/F'}=N_{F''/F}$.
\item \textbf{Reciprocity.} For $\alpha\in\K^\mathrm{M}_*F(t)$ we have
\begin{align*}
\sum_v N_{\kappa(v)/F}\circ\partial_v(\alpha) = 0
\end{align*}
where $v$ runs over all discrete valuations of $F(t)$ over $F$ and $\kappa(v)$ is the residue field of $v$.
\end{enumerate} We give a brief sketch of the construction of the norm map. For a detailed exposition we refer the reader to \cite[\S~7.3]{gille}.
\noindent For a finite field extension $F'/F$ such that $F'\cong F[X]/\skp{\pi}$ where $\pi$ is monic and irreducible, the norm map is defined via the split exact Bass-Tate sequence
\begin{align} \label{bass_tate_sequence}
0 \Nach \K^\mathrm{M}_nF \Nach \K^\mathrm{M}_nF(X) \overset{\bigoplus \partial_P}{\Nach} \bigoplus_P \K^\mathrm{M}_{n-1}(F[X]/\skp{P}) \Nach 0,
\end{align} where the sum is over all monic irreducible $P\in F[X]$ and $\bigoplus \partial_P$ is the well-defined sum of the tame symbols $\partial_P$ (with respect to the discrete valuation which is associated to the prime element $P$, see Theorem~\ref{tame_symbol}). Taking coproducts over all values of $n$, we obtain an exact sequence of $\K^\mathrm{M}_*F$-modules. Let $\partial_\infty$ be the tame symbol corresponding to the negative degree valuation on $F(X)$. Its residue field is isomorphic to $F$. The composition $\K^\mathrm{M}_*F \nach \K^\mathrm{M}_*F(X) \nach[\partial_\infty] \K^\mathrm{M}_{*-1}F$ vanishes since all elements in $F$ have degree zero. By the universal property of the cokernel, we obtain a map $N$ as indicated in the diagram (where all the maps have degree zero).
\begin{align} \label{eq_norm_classical}
\begin{xy}
\xymatrix{
0 \ar[r]
& \K^\mathrm{M}_*F \ar[r]^-{\iota_*} \ar[rd]_-0
& \K^\mathrm{M}_*F(X) \ar[r]^-{\bigoplus \partial_P} \ar[d]^-{\partial_\infty}
& \bigoplus_P \K^\mathrm{M}_{*-1}(F[X]/\skp{P}) \ar[r] \ar@{.>}[dl]^-N
& 0
\\
&& \K^\mathrm{M}_{*-1}F
}
\end{xy}
\end{align} By precomposing $N$ with the inclusion of the factor $\K^\mathrm{M}_{*-1}(F[X]/\skp{\pi})$, we get the \df{norm map}
\begin{align*}
N_{F'/F} \colon \K^\mathrm{M}_*F' \nach \K^\mathrm{M}_*F
\end{align*} of the extension $F'/F$.
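In the lowest degrees this construction recovers familiar maps; we record this only for orientation. On $\K^\mathrm{M}_0F'\cong\mathbf{Z}$ the norm is multiplication by the degree of the extension, and on $\K^\mathrm{M}_1F'\cong F'^\times$ it coincides with the usual norm map of fields:
\begin{align*}
N_{F'/F}\colon \K^\mathrm{M}_0F'\cong\mathbf{Z} \Nach \mathbf{Z}\cong\K^\mathrm{M}_0F,\ 1\mapsto [F':F], \qquad
N_{F'/F}\colon \K^\mathrm{M}_1F'\cong F'^\times \Nach F^\times\cong\K^\mathrm{M}_1F.
\end{align*}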
All the maps in diagram \eqref{eq_norm_classical} are (graded) $\K^\mathrm{M}_*F$-linear. This is clear for the map $\iota_*$ which is induced by the inclusion $\iota \colon F \nach F(X)$. With the explicit description of the tame symbol (see Theorem~\ref{tame_symbol}) we deduce linearity for the tame symbols $\partial : \K^\mathrm{M}_*F(X) \nach \K^\mathrm{M}_{*-1}\kappa$ where $\kappa=F$ (if $\partial=\partial_\infty$) or $\kappa=F[X]/\skp{P}$ (if $\partial=\partial_P$ for a monic irreducible $P\in F[X]$). Let $\pi$ be a uniformiser and $\mathcal{O}$ the valuation ring with respect to the valuation in question. Then $\K^\mathrm{M}_*F(X)$ is additively generated by the set
\begin{align*}
\Bigl\{ \{\pi,u_2,\ldots,u_n\},\{u_1,\ldots,u_n\} \, \big| \, n\geq 1, u_1,\ldots,u_n \in \mathcal{O}^\times \Bigr\}.
\end{align*} We note that $\partial(\{u_1,\ldots,u_n\})=0$ for all $u_1,\ldots,u_n\in\mathcal{O}^\times$, hence linearity for those elements is clear. Now let $x_1,\ldots, x_k\in F^\times$. Observe that $F^\times \subseteq \mathcal{O}^\times$ and that the map $F\nach\kappa$ is injective. Thus we see $\K^\mathrm{M}_*F$-linearity as follows.
\begin{align} \label{eq_tame_linearity}
\partial \bigl( \{x_1, \ldots, x_k\} \cdot \{\pi,u_2,\ldots,u_n\} \bigr)
&= \partial \bigl( \{x_1, \ldots, x_k, \pi,u_2,\ldots,u_n\} \bigr) \\
&= \partial \bigl( (-1)^k \{\pi, x_1, \ldots, x_k,u_2,\ldots,u_n\} \bigr) \nonumber \\
&= (-1)^k\{\bar{x}_1, \ldots, \bar{x}_k,\bar{u}_2,\ldots,\bar{u}_n\} \nonumber \\
&= (-1)^k\{x_1, \ldots, x_k\} \cdot \{\bar{u}_2,\ldots,\bar{u}_n\} \nonumber \\
&= (-1)^k\{x_1, \ldots, x_k\} \cdot \partial \bigl( \{\pi,u_2,\ldots,u_n\} \bigr) \nonumber
\end{align} The sign appears because $\partial$ has degree $-1$. Hence $\partial_\infty$ and also the direct sum $\bigoplus \partial_P$ in \eqref{eq_norm_classical} are $\K^\mathrm{M}_*F$-linear. Thus this also holds for the induced map $N$ and (since the inclusion of a factor is also linear) for the norm map $N_{F'/F}$ as well.
For an arbitrary finite field extension $F(\alpha_1,\ldots,\alpha_n)/F$, we define the norm map via a decomposition
\begin{align*}
F \subset F(\alpha_1) \subset F(\alpha_1,\alpha_2) \subset \ldots \subset F(\alpha_1,\ldots,\alpha_n).
\end{align*} This construction is due to \df{Bass} and \df{Tate} \cite[\S 5]{bass_tate}. Its independence of the generating family $(\alpha_1,\ldots,\alpha_n)$ was proven by \df{Kato} \cite{kato}.
\noindent \df{Kerz} extended this to the realm of finite \'{e}tale{} extensions of semi-local rings with infinite residue fields \cite{kerz_gersten_conjecture}. For improved Milnor K-theory, he also showed the existence of norm maps for finite \'{e}tale{} extensions of arbitrary local rings by reducing to the case of infinite residue fields and classical Milnor K-theory \cite{kerz}.
As a matter of fact, finite \'{e}tale{} extensions $A\hookrightarrow B$ of local rings are precisely those of the form $B \cong A[t]/\skp{\pi}$ with $\pi$ monic irreducible and $\mathrm{Disc}(\pi)\in A^\times$. Our aim is to drop the condition on the discriminant. This is possible by restricting to factorial domains, so that we can use a Bass-Tate-like sequence due to \df{Kerz}. This is essentially \df{Kerz}' work, even though it is not stated there in this form.
\begin{prop} \label{norm_appendix} Let $A$ be a local and factorial domain and $\iota\colon A\hookrightarrow B$ an extension of local rings such that $B\cong A[X]/\skp{\pi}$ for a monic irreducible polynomial $\pi\in A[X]$. Then there exists a norm map
\begin{align*}
N_{B/A} \,\colon\, \hat{\K}^\mathrm{M}_*B \Nach \hat{\K}^\mathrm{M}_*A
\end{align*} satisfying the projection formula
\begin{align} \label{projection_formula_appendix}
N_{B/A}\bigl(\{\iota_*(x),y\}\bigr) = \bigl\{x,N_{B/A}(y)\bigr\}
\end{align} for all $x\in\hat{\K}^\mathrm{M}_*A$ and $y\in\hat{\K}^\mathrm{M}_*B$. \end{prop} \begin{proof} References within this proof refer to \cite[\S 4]{kerz_gersten_conjecture} unless said otherwise. For semi-local domains with infinite residue field, there is a split exact sequence
\begin{align} \label{bass-tate-kerz}
0 \Nach \K^\mathrm{M}_nA \Nach \K^\mathrm{t}_nA \Nach[\bigoplus \partial] \bigoplus_P \K^\mathrm{M}_{n-1}(A[X]/\skp{P}) \Nach 0,
\end{align} where the sum is over all monic irreducible $P\in A[X]$, see [Thm.~4.4]. Here $\K^\mathrm{t}_nA$ is an appropriate group generated by so-called \emph{feasible} tuples of elements in $\Frac(A)(X)$. A tuple
\begin{align*}
\Bigl(\frac{p_1}{q_1},\ldots,\frac{p_n}{q_n}\Bigr)
\end{align*} with $p_i,q_i\in A[X]$ and all $p_i/q_i$ in reduced form is \df{feasible} iff the highest nonvanishing coefficients of $p_i,q_i$ are invertible in $A$ and for irreducible factors $u$ of $p_i$ or $q_i$ and $v$ of $p_j$ or $q_j$ ($i\neq j$), we have $u=av$ with $a\in A^\times$ or $(u,v)=1$, see [Def.~4.1]. Now $\K^\mathrm{t}_nA$ is defined by modding out relations for $A^\times$-linearity which yield a $\K^\mathrm{M}_*A$-module structure on $\K^\mathrm{t}_*A$, and the relations
\begin{align*}
(p_1,\ldots,p,1-p,\ldots,p_n)=0 \quad\text{and}\quad (p_1,\ldots,p,-p,\ldots,p_n)=0
\end{align*} for $p\in\Frac(A)(X)$, see [Def.~4.2].
The map $\bigoplus \partial_P$ is similar to the tame symbol appearing in \eqref{bass_tate_sequence}. First, for all monic irreducible $P\in A[X]$ there are maps $\partial_P\colon\K^\mathrm{t}_nA\nach\K^\mathrm{M}_{n-1}(A[X]/\skp{P})$ satisfying
\begin{align} \label{eq_improved_tame_formula}
\partial_P\bigl(\{P,p_2,\ldots,p_n\}\bigr) = \{\overline{p_2},\ldots,\overline{p_n}\}
\end{align} for $p_i\in A[X]$ such that $(P,p_i)=1$ [Proof of Theorem.~4.4, Step~2]. Then $\bigoplus \partial$ is the well-defined sum of those $\partial_P$. Analogously, we define a map $\partial_\infty\colon\K^\mathrm{t}_nA\nach\K^\mathrm{M}_{n-1}A$ corresponding to the negative degree valuation by using $X^{-1}$ as a uniformiser.
\noindent With this we can define norm maps in the case of an infinite residue field exactly as in the case of fields via a map $N$ as indicated in the following diagram.
\[ \begin{xy}
\xymatrix{
0 \ar[r]
& \K^\mathrm{M}_*A \ar[r]^i \ar[rd]_-0
& \K^\mathrm{t}_*A \ar[r]^-{\bigoplus \partial} \ar[d]^-{\partial_\infty}
& \bigoplus_P \K^\mathrm{M}_{*-1}(A[X]/\skp{P}) \ar[r] \ar@{.>}[dl]^-N
& 0.
\\
&& \K^\mathrm{M}_{*-1}A
}
\end{xy} \] Again, all the maps are $\K^\mathrm{M}_*A$-linear. For the map $i$ this follows directly from the construction above. For the tame symbols this holds as in \eqref{eq_tame_linearity} since they satisfy \eqref{eq_improved_tame_formula}. For the map $N$ this is a formal consequence. Again, we obtain a $\K^\mathrm{M}_*A$-linear norm map $N_{B/A} \colon \K^\mathrm{M}_*B \nach \K^\mathrm{M}_*A$ for $B=A[X]/\skp{\pi}$ by precomposing with the linear inclusion of the factor. The projection formula \eqref{projection_formula_appendix} is nothing else than the statement that the norm maps are $\K^\mathrm{M}_*A$-linear.
\noindent Now we treat the case of arbitrary residue fields. Let $A$ be an arbitrary factorial and local domain and $A\hookrightarrow B$ an extension of local rings such that $B\cong A[X]/\skp{\pi}$ for a monic irreducible polynomial $\pi\in A[X]$.
\textbf{Claim.} $(A[X]/\skp{\pi})(t_1,\ldots,t_k) \cong A(t_1,\ldots,t_k)[X]/\skp{\pi}$.
\noindent To see this, we first observe that the right-hand side is isomorphic to the tensor product $A(t_1,\ldots,t_k)\otimes_AA[X]/\skp{\pi}$. By Proposition~\ref{ring_of_fractions_base_change} above, the left-hand side is isomorphic to this tensor product as well.
If $A$ has residue field $\kappa$, then $A(t_1,\ldots,t_k)$ has residue field $\kappa(t_1,\ldots,t_k)$ by Lemma~\ref{A(t)_local_infinite}. Hence we can reduce to the situation of infinite residue fields. The diagram
\[ \begin{xy}
\xymatrix{
\K^\mathrm{M}_nA(t) \ar@{^{(}->}[r] \ar[d]^\delta
& \K^\mathrm{t}_nA(t) \ar@{->>}[r]^-{\partial} \ar[d]^\delta
& \bigoplus_P \K^\mathrm{M}_{n-1}(A(t)[X]/\skp{P}) \ar[d]^\delta
\\
\K^\mathrm{M}_nA(t_1,t_2) \ar@{^{(}->}[r]
& \K^\mathrm{t}_nA(t_1,t_2) \ar@{->>}[r]^-{\partial}
& \bigoplus_Q \K^\mathrm{M}_{n-1}(A(t_1,t_2)[X]/\skp{Q})
}
\end{xy} \] commutes where $\delta = (\iota_1)_*-(\iota_2)_*$ and $P$ and $Q$ run over all monic irreducible polynomials over $A(t)$ respectively over $A(t_1,t_2)$ (note that a monic irreducible polynomial over $A(t)$ is also monic irreducible over $A(t_1,t_2)$ both via $\iota_1$ and via $\iota_2$). Thus the right-hand square in the diagram
\[ \begin{xy}
\xymatrix{
\hat{\K}^\mathrm{M}_nB \ar@{^{(}->}[r] \ar@{.>}[d]^{N_{B/A}}
& \K^\mathrm{M}_nB(t) \ar[r]^-{\delta_n^\mathrm{M}} \ar[d]^{N_{B(t)/A(t)}}
& \K^\mathrm{M}_nB(t_1,t_2) \ar[d]^{N_{B(t_1,t_2)/A(t_1,t_2)}}
\\
\hat{\K}^\mathrm{M}_nA \ar@{^{(}->}[r]
& \K^\mathrm{M}_nA(t) \ar[r]^-{\delta_n^\mathrm{M}}
& \K^\mathrm{M}_nA(t_1,t_2)
}
\end{xy} \] commutes, which enables us to define the desired norm map $N_{B/A}$ via restriction; the projection formula then carries over from the case of infinite residue fields to the case of arbitrary residue fields. \end{proof}
\end{document}
| arXiv | {"id": "1509.01087.tex", "language_detection_score": 0.5517328977584839} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Exactly Solvable Many-Body Systems and Pseudo-Hermitian Point Interactions}
\authori{Shao-Ming Fei\footnote{E-mail: [email protected]}}
\addressi{Department of Mathematics, Capital Normal University, Beijing 100037\\ Institute of Applied Mathematics, University of Bonn, D-53115 Bonn}
\authorii{} \addressii{} \authoriii{} \addressiii{} \authoriv{} \addressiv{} \authorv{} \addressv{} \authorvi{} \addressvi{}
\headauthor{Shao-Ming Fei} \headtitle{Exactly Solvable Many-Body Systems and Pseudo-Hermitian Point Interactions} \lastevenhead{Shao-Ming Fei: Exactly Solvable Many-Body Systems and Pseudo-Hermitian Point Interactions}
\pacs{02.30.Ik, 11.30.Er, 03.65.Fd} \keywords{Point interactions, PT-symmetry, Integrability}
\refnum{A} \daterec{XXX} \issuenumber{0} \year{2003} \setcounter{page}{1} \firstpage{1}
\maketitle
\begin{abstract} We study Hamiltonian systems with point interactions and give a systematic description of the corresponding boundary conditions and spectral properties for self-adjoint systems, PT-symmetric systems and systems with real spectra. The integrability of one-dimensional many-body systems with these kinds of point (contact) interactions is investigated for both bosonic and fermionic statistics. \end{abstract}
The complex generalization of conventional quantum mechanics has been investigated extensively in recent years. In particular it has been shown that the standard formulation of quantum mechanics in terms of Hermitian Hamiltonians is overly restrictive and that a consistent physical theory of quantum mechanics can be built on a complex Hamiltonian that is not Hermitian but satisfies the less restrictive and more physical condition of space-time reflection symmetry (PT symmetry) \cite{bender}. It has been proven that if PT symmetry is not spontaneously broken, the dynamics of a non-Hermitian Hamiltonian system is still governed by unitary time evolution. A number of models with PT-symmetric and continuous interaction potentials have been constructed and studied \cite{models}. In this article we study Hamiltonian systems with singular interaction potentials at a point. We give a systematic and complete description of the boundary conditions and the spectral properties of self-adjoint systems, PT-symmetric systems and systems with real spectra. We then study the integrability of one-dimensional many-body systems with these kinds of point interactions.
\section{Self-adjoint point interactions}
Self-adjoint quantum mechanical models describing a particle moving in a local singular potential concentrated at one or a discrete number of points have been extensively discussed in the literature, see e.g. \cite{agh-kh,gaudin,AKbook} and references therein. One dimensional problems with contact interactions at, say, the origin ($x=0$) can be characterized by separated or nonseparated boundary conditions imposed on the wave function $\psi$ at $x=0$ \cite{kurasov}. The first model of this type with the pairwise interactions determined by $\delta$-functions was suggested and investigated in \cite{mcguire}. Intensive studies of this model applied to statistical mechanics (particles having boson or fermion statistics) are given in \cite{y,y1}.
Separated boundary conditions correspond to the cases where the perturbed operator is equal to the orthogonal sum of two self-adjoint operators in $L_{2} (-\infty,0]$ and $L_{2} [0,\infty)$. The family of point interactions for the one-dimensional Schr\"odinger operator $ - \frac{d^2}{dx^2}$ can be described by unitary $ 2 \times 2 $ matrices via von Neumann formulas for self-adjoint extensions of symmetric operators. The non-separated boundary conditions describing the self-adjoint extensions have the following form \begin{equation} \label{bound} \left( \begin{array}{c} \psi(+0)\\ \psi '(+0)\end{array} \right) = e^{i\theta} \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) \left( \begin{array}{c} \psi(-0)\\ \psi '(-0)\end{array} \right), \end{equation} where $ad-bc = 1$, $\theta, a,b,c,d \in {I\!\! R}$, and $\psi(x)$ is the wave function of a spinless particle with coordinate $x$. The values $\theta = b=0$, $a=d=1$ in (\ref{bound}) correspond to the case of a positive (resp. negative) $\delta$-function potential for $c>0$ (resp. $c<0$). For general $a,b,c$ and $d$, the properties of the corresponding Hamiltonian systems have been studied in detail, see e.g. \cite{kurasov,ch,abd}.
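For orientation we spell out the $\delta$-potential case explicitly: inserting $\theta=b=0$ and $a=d=1$ into (\ref{bound}) gives
\be \psi(+0)=\psi(-0), ~~~~ \psi'(+0)-\psi'(-0) = c\, \psi(0), \ee which is the familiar boundary condition describing the point potential $c\,\delta(x)$ of strength $c$.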
The separated self-adjoint boundary conditions are described by \be\label{bounds} \psi^\prime(+0) = h^+ \psi (+0)~, ~~~ \psi^\prime(-0) = h^- \psi (-0), \ee where $h^{\pm} \in {I\!\! R} \cup \{ \infty\}$. The values $h^+ = \infty$ or $h^- = \infty$ correspond to Dirichlet boundary conditions, and $h^+ = 0$ or $h^- = 0$ correspond to Neumann boundary conditions.
\section{PT-symmetric point interactions}
An operator is said to be PT-symmetric if it commutes with the product operator of the parity operator P and the time reversal operator T. It can be shown that the family of PT-symmetric second derivative operators with point interactions at the origin coincides with the set of restrictions of the second derivative operator to the domain of functions satisfying the boundary conditions at the origin \cite{afkpt}: \begin{equation} \label{bcond1} \left(\begin{array}{c} \psi (+0) \\ \psi' (+0) \end{array}\right) = B
\left(\begin{array}{c} \psi (-0) \\ \psi' (-0) \end{array}\right) \end{equation} for non-separated type, where $$ B = e^{i \theta} \left( \begin{array}{cc} \sqrt{1 +bc} \; e^{i\phi} & b \\ c & \sqrt{1+bc} \; e^{-i\phi} \end{array} \right), $$ the real parameters $ b \geq 0, c \geq -1/b$\footnote{If the parameter $b$ is equal to zero, then the second inequality should be neglected.}, $ \theta, \phi \in [0, 2 \pi)$; or corresponding to the separated type \begin{equation} \label{bcond2} h_0 \psi' (+0) = h_1 e^{i\theta} \psi (+0),~~~ h_0 \psi' (-0) =
- h_1 e^{-i\theta} \psi (-0) \end{equation} with the real phase parameter $ \theta \in [0,2\pi)$ and the parameter $ {\bf h} = (h_0, h_1)$ taken from the (real) projective space $ {\bf P}^1$.
The spectrum of any PT-symmetric second derivative operator with point interactions at the origin consists of the branch $[0,\infty)$ of the absolutely continuous spectrum and at most two (counting multiplicity) eigenvalues, which are either real and negative or complex conjugate to each other. The eigenvalues corresponding to PT-symmetric eigenfunctions are real and negative. Every eigenfunction corresponding to a real eigenvalue can be chosen either PT-symmetric or PT-antisymmetric.
The spectrum of the PT-symmetric second derivative operator with non-separated type point interaction at the origin is pure real if and only if the parameters appearing in (\ref{bcond1}) satisfy in addition at least one of the following conditions: \begin{equation} \label{bcondreal1} bc \sin^2 \phi \leq \cos^2 \phi; \end{equation} \begin{equation} \label{bcondreal2} bc \sin^2 \phi \geq \cos^2 \phi \; \; {\rm and}\; \; \cos \phi \geq 0. \end{equation}
\section{Point interactions with real spectra}
We further consider boundary conditions of non-separated type at the origin leading to second derivative operators with real spectrum. A general form of the boundary condition can be written as \begin{equation}\label{generalb} \left(\begin{array}{c} \psi (+0) \\ \psi' (+0) \end{array}\right) = \left( \begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array} \right) \left(\begin{array}{c} \psi (-0) \\ \psi' (-0) \end{array}\right), \end{equation} where $\alpha$, $\beta$, $\gamma$ and $\delta\in\Cb$. We suppose that the matrix $ B = \left( \begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array} \right) $ appearing in the boundary conditions (\ref{generalb}) is non-degenerate, i.e. $B \in GL(2, \Cb)$. Again it is easy to prove that the operator has the branch $[0, \infty)$ of absolutely continuous spectrum. To study its discrete spectrum we use the following {\it Ansatz} for the eigenfunction $$ \psi (x) = \left\{ \begin{array}{ll} c_1 e^{-ikx}, & x < 0 \\ c_2 e^{ikx}, & x > 0 \end{array} \right. , \; \; \Im k > 0, $$ corresponding to the energy $\lambda = k^2$. Substituting this function into the boundary conditions (\ref{generalb}) we get the dispersion equation $k^2 \beta + ik (\alpha + \delta) - \gamma = 0$.
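In more detail, the Ansatz has the boundary values $\psi(-0)=c_1$, $\psi'(-0)=-ikc_1$, $\psi(+0)=c_2$ and $\psi'(+0)=ikc_2$, so that (\ref{generalb}) becomes
\be c_2 = (\alpha - ik\beta)\, c_1, ~~~~ ik\, c_2 = (\gamma - ik\delta)\, c_1; \ee eliminating $c_1$ and $c_2$ (which are not both zero) yields the above quadratic equation for $k$.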
The set of coefficients $\alpha,~ \beta,~ \gamma$, and $\delta$ satisfying the condition $\Im k_{1,2} \leq 0$ can be parameterized by $8$ real parameters and leads to operators with purely absolutely continuous spectrum $[0, \infty)$. Purely imaginary solutions to the dispersion equation lead to a nontrivial discrete spectrum. Set $\tau = \alpha + \delta$. The solutions are purely imaginary if and only if the following conditions are satisfied: \begin{equation}\label{betaneq0} \tau = t e^{i\theta},~ \beta = b e^{i\theta},~ \gamma = c e^{i \theta},~4 \frac{c}{b} \leq \frac{t^2}{b^2}, \end{equation} for $\beta\neq 0$, where $ t,b,c $ are real numbers. If $\beta=0$, the spectrum is guaranteed to be real when $\alpha + \delta$ and $\gamma$ have the same phase, i.e., \begin{equation}\label{beta0} \tau = t e^{i\theta},~ \gamma = c e^{i\theta}. \end{equation}
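To see where the inequality in (\ref{betaneq0}) comes from, write $k=i\kappa$ with $\kappa$ real and divide the dispersion equation by the common phase $e^{i\theta}$; it then reduces to
\be b\, \kappa^2 + t\, \kappa + c = 0, \ee a quadratic equation with real coefficients, which has real roots precisely when $t^2\geq 4bc$, i.e. when $4\frac{c}{b} \leq \frac{t^2}{b^2}$ for $b\neq 0$.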
The real spectrum point interaction (\ref{betaneq0}) is parameterized by $6$ real parameters. The four-parameter family of self-adjoint (non-separated) boundary conditions (\ref{bound}) is contained in this $6$-parameter family. The family of PT-symmetric (non-separated) boundary conditions leading to operators with real spectrum is also included in the family of boundary conditions (\ref{betaneq0}) or (\ref{beta0}).
\section{Integrable many-body systems}
The self-adjoint boundary conditions (\ref{bound}) and (\ref{bounds}), PT-symmetric boundary conditions (\ref{bcond1}) and (\ref{bcond2}), and the real spectrum boundary conditions (\ref{generalb}) with the parameters satisfying (\ref{betaneq0}) or (\ref{beta0}) also describe two spinless particles moving in one dimension with contact interaction when they meet (i.e.\ at the relative coordinate $x=0$). When the particles have spin $s$ but without any spin coupling among the particles, $\psi$ represents any one of the components of the wave function. In the following we study the integrability of one-dimensional systems of $N$ identical particles with general contact interactions described by the non-separated boundary conditions that are imposed on the relative coordinates of the particles. We first consider the case of two particles ($N=2$) with coordinates $x_1$, $x_2$ and momenta $k_1$, $k_2$ respectively. Each particle has $n$ `spin' states, designated by $s_1$ and $s_2$ with $1\leq s_i\leq n$. For $x_1\neq x_2$, these two particles are free. The wave functions $\varphi$ are symmetric (resp. antisymmetric) with respect to the interchange $(x_1,s_1)\leftrightarrow(x_2,s_2)$ for bosons (resp. fermions). In the region $x_1<x_2$, from the Bethe ansatz the wave function is of the form \be\label{w1} \varphi=\alpha_{12}e^{i(k_1x_1+k_2x_2)}+\alpha_{21}e^{i(k_2x_1+k_1x_2)}, \ee where $\alpha_{12}$ and $\alpha_{21}$ are $n^2\times 1$ column matrices. In the region $x_1>x_2$, \be\label{w2} \varphi=(P^{12}\alpha_{12})e^{i(k_1x_2+k_2x_1)} +(P^{12}\alpha_{21})e^{i(k_2x_2+k_1x_1)}, \ee where according to the symmetry or antisymmetry conditions, $P^{12}=p^{12}$ for bosons and $P^{12}=-p^{12}$ for fermions, $p^{12}$ being the operator on the $n^2\times 1$ column that interchanges $s_1\leftrightarrow s_2$.
Set $k_{12} = (k_1 -k_2)/2$. In the center of mass coordinate $X=(x_1+x_2)/2$ and the relative coordinate $x=x_2-x_1$, we get, by substituting (\ref{w1}) and (\ref{w2}) into the boundary conditions (\ref{generalb}) at $x=0$, \be\label{a1} \left\{ \begin{array}{l} \alpha_{12}+\alpha_{21} =\alpha P^{12}(\alpha_{12}+\alpha_{21})+ i \beta k_{12}P^{12}(\alpha_{12}-\alpha_{21}),\\ ik_{12}(\alpha_{21}-\alpha_{12}) = \gamma P^{12}(\alpha_{12}+\alpha_{21})+i\delta k_{12}P^{12} (\alpha_{12}-\alpha_{21}). \end{array}\right. \ee Eliminating the term $P^{12}\alpha_{12}$ from (\ref{a1}) we obtain the relation \be\label{2112} \alpha_{21} = Y_{21}^{12} \alpha_{12}~, \ee where \be\label{a21a12} Y_{21}^{12} =\frac{ 2ik_{12}(\alpha\delta-\beta\gamma)P^{12}+ik_{12}(\alpha-\delta)+(k_{12})^2\beta+\gamma} {ik_{12}(\alpha+\delta) + (k_{12})^2\beta-\gamma}. \ee
For $N\geq 3$ and $x_1<x_2<...<x_N$, the wave function is given by \be\label{psi} \ba{rcl} \varphi&=&\alpha_{12...N}e^{i(k_1x_1+k_2x_2+...+k_Nx_N)} +\alpha_{21...N}e^{i(k_2x_1+k_1x_2+...+k_Nx_N)}\\ &&+(N!-2)~\mbox{other terms}. \ea \ee The columns $\alpha$ are of dimension $n^N\times 1$. The wave functions in the other regions are determined from (\ref{psi}) by the requirement of symmetry (for bosons) or antisymmetry (for fermions). Along any plane $x_i=x_{i+1}$, $i=1,2,\ldots,N-1$, from similar considerations we have
For consistency $Y$ must satisfy the Yang-Baxter equation with spectral parameter \cite{y,ma}, i.e., $$ Y^{m,m+1}_{ij}Y^{m+1,m+2}_{kj}Y^{m,m+1}_{ki} =Y^{m+1,m+2}_{ki}Y^{m,m+1}_{kj}Y^{m+1,m+2}_{ij}, $$ or \be\label{ybe1} Y^{mr}_{ij}Y^{rs}_{kj}Y^{mr}_{ki} =Y^{rs}_{ki}Y^{mr}_{kj}Y^{rs}_{ij} \ee if $m,r,s$ are all unequal, and \be\label{ybe2} Y^{mr}_{ij}Y^{mr}_{ji}=1,~~~~~~ Y^{mr}_{ij}Y^{sq}_{kl}=Y^{sq}_{kl}Y^{mr}_{ij} \ee if $m,r,s,q$ are all unequal. These Yang-Baxter relations are satisfied when \be\label{realsp} \alpha\delta-\beta\gamma=1,~~~~~\beta=0,~~~~~\alpha=\delta, \ee i.e., $\beta=0$, $\alpha=\delta=\pm 1$.
Therefore for self-adjoint contact interactions (\ref{bound}), the $N$-body system is integrable when $\theta =0$, $a=d=\pm 1$, $b=0$, $c$ arbitrary. The case $a=d=1$, $\theta =b=0$ corresponds to the usual $\delta$-function interactions, which has been investigated in \cite{y,y1}. The case $a=d=-1$, $\theta =b=0$, is related to a kind of anti-$\delta$ interactions \cite{adf}.
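Explicitly, substituting the integrability conditions (\ref{realsp}) with $\alpha=\delta=1$ and $\gamma=c$ into (\ref{y}) reduces the two-body scattering operator to
\be Y_{l_{i+1}l_i}^{ii+1}= \frac{2ik_{l_il_{i+1}}P^{ii+1}+c}{2ik_{l_il_{i+1}}-c}, \ee which is, up to conventions, the operator familiar from the Bethe ansatz solution of the $\delta$-interaction model \cite{y,y1}.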
For $N$-body systems with PT-symmetric contact interactions (\ref{bcond1}), the integrability condition (\ref{realsp}) implies that $\theta=\phi=b=0$, which is just the usual self-adjoint $\delta$-interaction.
From (\ref{beta0}) and (\ref{realsp}), we also find that a many-body system with contact interactions and real spectrum is integrable only when $\alpha=\delta=\pm 1$ and $\gamma=c$ for some $c\in{I\!\! R}$, which again gives the self-adjoint $\delta$-type interactions; see Fig.~1 for the relations among point interactions according to integrability.
\begin{figure}
\caption{Relations among point interactions according to integrability.}
\end{figure}
We have presented a complete picture of self-adjoint, PT-symmetric and real-spectrum point interactions, and of their corresponding integrability. We have considered here only the case of particles with pure contact interactions; possible contact couplings of the spins of two particles are not taken into account \cite{spin}. A further study along this direction would possibly give rise to more interesting integrable quantum many-body systems with various symmetries and spectral properties.
\end{document}
| arXiv | {"id": "0402185.tex", "language_detection_score": 0.7218218445777893} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Bounding the resources for thermalizing many-body localized systems}
\author{Carlo Sparaciari} \affiliation{Department of Computing, Imperial College London, London SW7 2AZ, U.K.} \affiliation{Department of Physics and Astronomy, University College London, London WC1E 6BT, U.K.} \author{Marcel Goihl} \affiliation{Dahlem Center for Complex Quantum Systems, Freie Universit{\"a}t Berlin, 14195 Berlin, Germany} \author{Paul Boes} \affiliation{Dahlem Center for Complex Quantum Systems, Freie Universit{\"a}t Berlin, 14195 Berlin, Germany} \author{Jens Eisert} \email{[email protected]} \affiliation{Dahlem Center for Complex Quantum Systems, Freie Universit{\"a}t Berlin, 14195 Berlin, Germany} \author{Nelly Huei Ying Ng} \affiliation{Dahlem Center for Complex Quantum Systems, Freie Universit{\"a}t Berlin, 14195 Berlin, Germany} \affiliation{School of Physical and Mathematical Sciences, Nanyang Technological University, 637371 Nanyang, Singapore}
\begin{abstract} Understanding under which conditions physical systems thermalize is a long-standing question in many-body physics. While generic quantum systems thermalize, there are known instances where thermalization is hindered, for example in many-body localized (MBL) systems. Here we introduce a class of stochastic collision models coupling a many-body system out of thermal equilibrium to an external heat bath. We derive upper and lower bounds on the size of the bath required to thermalize the system via such models, under certain assumptions on the Hamiltonian. We use these bounds, expressed in terms of the max-relative entropy, to characterize the robustness of MBL systems against externally-induced thermalization. Our bounds are derived within the framework of resource theories using the convex split lemma, a recent tool developed in quantum information. We apply our results to the disordered Heisenberg chain, and numerically study the robustness of its MBL phase in terms of the required bath size. \end{abstract}
\maketitle
\section{Introduction} When pushed out of equilibrium, closed interacting quantum many-body systems generically relax to an equilibrium state, where local subsystems can be described using thermal ensembles that only depend on the energy of the initial state. While this behaviour is plausible from the perspective of quantum statistical mechanics~\cite{Deutsch1991, Srednicki1994,Popescu06,1408.5148,ngupta_Silva_Vengalattore_2011}, it is far from clear which local properties are responsible for the emergence of thermalization. The discovery of many-body localization (MBL) offers a fresh perspective on this question, as this effect occurs in interacting many-body systems, preventing them from actually thermalizing~\cite{Oganesyan,Schreiber}. Examples of systems that are non-thermalizing, like integrable systems, have been known before, but these are fine-tuned, in the sense that small perturbations restore thermalization. Many-body localization is strikingly different in this respect, as its non-thermalizing behaviour appears to be robust to changes in the Hamiltonian~\cite{abanin_colloquium:_2019}. A related open question, recently considered in several papers, is whether MBL is stable with respect to its own dynamics when small ergodic regions are present~\cite{luitz_how_2017, ponte_pedro_thermal_2017,hetterich_noninteracting_2017,barisic_incoherent_2009,Goihl19}, or when the system is in contact with an actual external environment~\cite{nandkishore_spectral_2014, fischer_dynamics_2016,levi_robustness_2016,johri_many-body_2015}. This is a critical question both for the experimental realization of systems exhibiting MBL properties and for its fundamental implications for the process of thermalization in quantum systems. \par The main focus of this work is to investigate the robustness of the MBL phase under instances of external dissipative processes. To do so, we introduce a physically-realistic class of interaction models, describing the interaction between a many-body system and a finite-sized thermal environment. Within these models, the interactions are described in terms of energy-preserving stochastic collisions occurring between the system (or regions thereof) and sub-regions of the bath. During the interactions, system and bath can be either weakly or strongly coupled. For this class of processes, we are able to derive analytical bounds on the minimum size of the bath required to thermalize the many-body system. We apply these bounds to the setting where the system is in the MBL phase, so as to characterize the robustness of this phase with respect to the coupling with an external environment. It is worth noting, however, that the bounds obtained hold for general many-body systems out of thermal equilibrium. \par It is key to the approach taken here -- and one of the merits of this work -- that in order to arrive at our results, we make use of tools from quantum information theory, tools that might at first seem somewhat alien to the problem at hand, but which turn out to provide a powerful machinery. We demonstrate this by using a technical result known as the convex split lemma~\cite{anshu_quantum_2017,anshu_quantifying_2018} to derive quantitative bounds on the bath size required for a region of the spin lattice to thermalize.
Given that MBL phases are challenging to study theoretically, and most known results are numerical in nature \cite{Luitz,Devakul,PhysRevLett.118.017201,Prosen,Pollmann,Wahl,AugstineMBL}, our work provides a fresh approach in understanding such phases from an analytical perspective. The convex split lemma has originally been derived in the context of quantum Shannon theory, which is the study of compression and transmission rates of quantum information. Our main contribution is to connect this mathematical result to a class of thermodynamic models which can be used to describe thermalization processes in quantum systems. This gives rise to surprisingly stringent and strong results. Note, however, in the approach taken it is assumed that systems thermalize close to exactly, a requirement that will be softened in future work. \par As part of our results, we find that the max-relative entropy~\cite{datta_min-_2009,tomamichel_quantum_2016} and its smoothed version emerge as operationally significant measures, that quantify the robustness of the MBL phase in a spin lattice. The max-relative entropy is an element within a family of entropic measures that generalize the R{\'e}nyi divergences~\cite{erven_renyi_2014} to the quantum setting. In order to illustrate the practical relevance of our results, we consider a specific system exhibiting the MBL phase, namely the disordered Heisenberg chain. Employing exact diagonalization, we numerically compute the value of the max-relative entropy as a function of the disorder and of the size of the lattice region that we are interested in thermalizing. Our findings suggest that the MBL phase is robust to thermalization despite being coupled to a finite external bath, under our collision models, indicating that such models allow for a conceptual understanding of the MBL phase stability. This extends the narrative of Refs.~\cite{nandkishore_spectral_2014,fischer_dynamics_2016,levi_robustness_2016}, which find that MBL is thermalized when coupled to an infinite sized bath. Moreover, our numerical simulations show that the max-relative entropy signals the transition from the ergodic to the MBL phase.
\section{Results}
\subsection{Thermalization setting}\label{sec:setting} We first set up some basic notation. Given some Hamiltonian $H_S$ of a system $S$ with Hilbert space $\mc{H}_S$, we define the thermal state with respect to inverse temperature $\beta = 1/(k_B T)$ as the quantum state \begin{align}
\tau_\beta(H_S) := \frac{e^{-\beta H_S}}{\mathrm{Tr}(e^{-\beta H_S})}. \end{align} In what follows, we model the process of thermalization of $ S $ with an external heat bath $B$. In particular, let $H_S$ and $ H_B $ denote the Hamiltonian of $ S $ and $ B $ respectively. If $S$ is initially in a state $\rho$, then, for fixed $\beta$ and any $\epsilon > 0$, we say that a global process $\mc{E} : \BH{\mc{H}_S \otimes \mc{H}_B} \rightarrow \BH{\mc{H}_S \otimes \mc{H}_B}$ $\epsilon$-thermalizes the system $S$ if \begin{equation} \label{eq:def_thermalizing} \norm{\mc{E} \left( \rho \otimes \tau_\beta(H_B) \right) - \tau_\beta(H_S) \otimes \tau_\beta(H_B)}_1 \leq \epsilon, \end{equation} where $\norm{\cdot}_1$ is the trace norm. Intuitively, this corresponds to the situation where the process $\mc{E}$ acts on the compound of the initial state of the system $\rho$ and the bath state $\tau_\beta(H_B)$, and brings the system state close to its thermal state $\tau_\beta(H_S)$ while leaving the bath mostly invariant. We write \begin{equation} \rho\xrightarrow[H_B,\epsilon]{\mathcal{E}} \tau_\beta(H_S) \end{equation} if Eq.~\eqref{eq:def_thermalizing} holds. It is worth noting that $\epsilon$-thermalization requires the global system $SB$ to be close to thermal after the channel $\mc{E}$ is applied. Monitoring the bath as well is necessary in order to avoid the possibility of trivial thermalization processes in which the non-thermal state $\rho$ is simply swapped into the bath, which would merely move the excitation out of the considered region, rather than describing a physically realistic dissipation process. Thus, our notion of thermalization is different from previously considered ones, where for instance the sole system's evolution is considered. At the same time, and as mentioned before, it is a rather stringent measure, in that close to full global thermalization is required. \par Having introduced the basic notation and terminology, we now turn to the model used in this work. We consider a spin lattice $V$, where each site is described by a finite-dimensional Hilbert space $\mc{H}$. The Hamiltonian of the system is composed of local operators, i.~e., \begin{equation} \label{eq:lattice_ham} H_V = \sum_x H_x, \end{equation} where $x$ is labeling a specific subset of adjacent sites in the lattice and $H_x$ is the corresponding Hamiltonian operator whose support is limited to these sites. Within the lattice, we consider a local region $R \subseteq V$ with Hilbert space $\mc{H}_R$. We are interested in the stability of the MBL phase with respect to stochastic collisions between the lattice region $R$ and an external thermal bath $B$. In order to precisely re-cast this problem in terms of $\epsilon$-thermalization, we first need to detail our choices for the initial state of the region $R$, the Hamiltonians $H_B$ and $H_R$, and the class of maps $\mc{E}$ that model the thermalization process. \par Given an initial state vector of the lattice $\ket{\psi(0)}$, if we consider closed evolution generated by the Hamiltonian dynamics, then the global system is always in a pure state. 
However, if the local subsystems eventually equilibrate, then the equilibrium state is given by partially tracing over the global infinite-time average~\cite{gogolin_equilibration_2016}, \begin{equation} \label{eq:inf_time_average} \omega := \lim_{T \to \infty} \frac{1}{T} \int_0^T \mr{d} t \, \ket{\psi(t)}\bra{\psi(t)}, \end{equation} so that the state that describes region $R$ at the time of its first interaction with the bath is $\omega_R = \Trp{R^c}{\omega}$, where we trace out the remainder of the lattice $R^c = V \backslash R$. To define a valid Hamiltonian $H_R$, a natural approach is often to disregard interactions between $R$ and the rest of the lattice $R^c$. As such, we consider the Hamiltonian $H_R = \sum_{x: x \subseteq R} H_x$, which includes only terms $H_x$ whose support is contained in $R$, and denote the corresponding thermal state $\tau_\beta(H_R)$. It is worthwhile to point out that there is an alternative natural approach to defining thermal states of subsystems, namely $\hat\tau_\beta (H_R) = \mathrm{Tr}_{R^c}\left[\tau_\beta(H_V)\right]$, the reduction to $R$ of the thermal state of the full lattice~\cite{kliesch_locality_2014}. These two thermal states are close to each other whenever the interaction terms between $ R $ and $ R^c $ in $ H_V$ are small. In our later simulations, we check the values of the max-relative entropy using both versions of thermal states, and find that they produce similar values, implying that these two alternatives are not too dissimilar for the disordered Heisenberg chain, which is expected, given that interactions are 2-local. \par There is a large body of work on thermalization showing that a large class of many-body systems can effectively act as their own ``environment''~\cite{1408.5148,gogolin_equilibration_2016}, so that local observables tend to equilibrate towards the corresponding thermal values. However, there are exceptions to this, in particular whenever there are non-negligible interactions between subsystems, of which MBL systems are an example. Nevertheless, most such systems still do equilibrate, and this is the assumption we make throughout the paper, namely that $\omega$ as defined in Eq.~\eqref{eq:inf_time_average} exists.
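For concreteness, the objects just introduced can be evaluated directly for small systems. The following Python sketch (our own illustration, not part of the analysis of this paper; NumPy and SciPy are assumed, and all routine names are ours) constructs the Gibbs state, tests the $\epsilon$-thermalization criterion of Eq.~\eqref{eq:def_thermalizing} in trace norm, and computes the infinite-time average of Eq.~\eqref{eq:inf_time_average} as a dephasing in the energy eigenbasis, assuming a non-degenerate spectrum and that $R$ is the first tensor factor of the lattice Hilbert space.
\begin{verbatim}
# Illustrative sketch: Gibbs states, the epsilon-thermalization test,
# and the infinite-time average, for small dense matrices.
import numpy as np
from scipy.linalg import expm

def gibbs_state(H, beta):
    """tau_beta(H) = exp(-beta H) / Tr exp(-beta H)."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def trace_norm(A):
    """||A||_1, the sum of singular values of A."""
    return np.sum(np.linalg.svd(A, compute_uv=False))

def epsilon_thermalizes(channel, rho_S, H_S, H_B, beta, eps):
    """Apply `channel` to rho_S (x) tau_beta(H_B) and compare with
    tau_beta(H_S) (x) tau_beta(H_B) in trace norm (the criterion above)."""
    tau_S, tau_B = gibbs_state(H_S, beta), gibbs_state(H_B, beta)
    out = channel(np.kron(rho_S, tau_B))
    return trace_norm(out - np.kron(tau_S, tau_B)) <= eps

def infinite_time_average(H_V, psi0):
    """omega = sum_k |<E_k|psi0>|^2 |E_k><E_k|, valid for non-degenerate H_V."""
    evals, evecs = np.linalg.eigh(H_V)
    p = np.abs(evecs.conj().T @ psi0) ** 2
    return (evecs * p) @ evecs.conj().T

def reduce_to_region(omega, dim_R, dim_Rc):
    """omega_R = Tr_{R^c} omega, with R taken as the first tensor factor."""
    return np.einsum('ajbj->ab', omega.reshape(dim_R, dim_Rc, dim_R, dim_Rc))
\end{verbatim}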
We model the external thermal bath as a collection of $n-1$ copies of the region $R$ in thermal equilibrium. More formally, $B$ is a system with Hilbert space $\mc{H}_B = \mc{H}_R^{\otimes n-1}$ and Hamiltonian \begin{equation} \label{eq:H_B_copiesofregion} H_B = \sum_{i=1}^{n-1} H_R^{(i)}, \end{equation} where the operator $H_R^{(i)} = \mb{I}_1 \otimes \ldots \otimes \mb{I}_{i-1} \otimes H_R \otimes \mb{I}_{i+1} \otimes \ldots \otimes \mb{I}_{n-1}$ only acts non-trivially on the $i$-th subsystem of the bath. With this choice of Hamiltonian, the initial state on $B$ is $\tau_\beta(H_B) = \tau_{\beta}(H_R)^{\otimes n-1}$. Such a choice for the bath is crucial to make our problem analytically tractable, but is also physically relevant for experimental setups, where it is possible to engineer one dimensional systems which are then coupled to a bath per site \cite{bordia2016coupling,bordia2017probing} or a mixture of a system and bath species that interact via contact interactions \cite{abadal2018bath}. Moreover, we note that for the model of system-bath interactions that we introduce below, any state transition that can be realised on $R$ with any heat bath Hamiltonian $H_B$ can also be realized, for some $n$, with a Hamiltonian of the form \eqref{eq:H_B_copiesofregion}~\cite{horodecki2013fundamental}. We also refer the interested reader to Supplementary Note 3, for a further discussion on bath choices.
\begin{figure}
\caption{{\bf Thermalization setting.} Thermalization of a region $R$ of an equilibrated lattice $V$ via stochastic collisions with an external bath $B$ (red). The collisions are modelled as randomly distributed, energy-preserving unitary interactions between $R$ and $B$, where interactions could be either between single subsystems of $B$ (top) or multiple ones (bottom). Our framework is inspired by the resource theoretic framework of thermal operations~\cite{janzing_thermodynamic_2000,brandao_resource_2013}, which have been used to investigate a wide variety of questions, such as the notion of work for microscopic systems~\cite{horodecki2013fundamental,brandao2015second}, quantum fluctuation theorems~\cite{alhambra2016fluctuating}, the third law~\cite{masanes_general_2017,wilming2017third}, and several other topics~\cite{woods2019maximum,halpern2019quantum,alhambra_revivals_2019}.}
\label{fig:coupling}
\end{figure}
\par We now turn to our model of the system-bath interactions, described via the following master equation, \begin{equation} \label{eq:master_eq} \frac{\partial \, \rho_{RB}(t)}{\partial \, t} = \sum_{k} \frac{1}{r_k} \left[ U_{RB}^{(k)} \, \rho_{RB}(t) \, U_{RB}^{(k) \dagger} - \rho_{RB}(t) \right], \end{equation} where $\rho_{RB}$ is the global state on $R$ and $B$. This equation models a series of collisions, each described by a unitary operator $U_{RB}^{(k)} \in \BH{\mc{H}_R \otimes \mc{H}_B}$ acting non-trivially on the region $R$ and a subset of bath components, occurring at a given rate $r_k^{-1} > 0$ in time according to a Poissonian distribution (see Supplementary Note 1 for details). We consider elastic collisions, that is, we require that each unitary operator $U_{RB}^{(k)}$ conserves the global energy \begin{equation}\label{eq:UHcommute} [ U_{RB}^{(k)} , H_R + H_B ] = 0. \end{equation} It is worth noting that the above condition does not imply that the total energy of the lattice is conserved. In fact this is in general not the case, due to those operators $H_x$ in Eq.~\eqref{eq:lattice_ham} with support on both $R$ and $R^c$. However, since by assumption these operators have support on at most $k$ adjacent lattice sites, if the region $R$ is sufficiently large one expects these boundary terms to contribute less and less to the total energy of the region. For high temperatures, the notion that boundary terms are negligible has been established rigorously \cite{kliesch_locality_2014}. Note that Eq.~\eqref{eq:master_eq} is already in standard Lindblad form, and therefore describes a Markovian dynamical semi-group on $RB$. It is also important to note, however, that the process happening on the region $R$ is in principle non-Markovian, since $ B $ is modelled here explicitly, and can be a very small heat bath, which retains memory of the system's initial state. See Fig.~\ref{fig:coupling} for a graphical illustration of the setup.
In summary, the interaction model is general, in the sense that the unitary operators can act on a single subsystem (thus generating local Hamiltonian dynamics, for example), on two subsystems (these are the standard two-body interactions), or on many more subsystems. Long-range interaction terms are also allowed, with the only restriction being Eq.~\eqref{eq:UHcommute} -- in the hope that one may work towards a relaxation in the future. We are nonetheless neglecting the interaction terms between the region $ R $ and $ R^c $. While this assumption is critical for allowing us to apply our framework to this problem, we note that there are situations where it is physically relevant -- for example, if the interaction between $ R $ and $ B $ occurs on a much shorter timescale than between $ R $ and $ R^c $, which is allowed by our class of interactions, as the rates $ \lbrace r_k \rbrace_k $ can be chosen to be high enough. Moreover, our results on bath size are independent of the system-bath coupling strength. \par We are finally in a position to define our central measure of robustness to thermalization. Let $\mathfrak{E}_{n}$ denote the set of quantum channels (i.e., completely-positive trace-preserving maps acting on the quantum states of a system~\cite{nielsen_quantum_2010}) on $RB$ that can be generated via the above collision process for a bath of size $n-1$. Each channel in this set is realized through a different choice of unitary operations $\{ U_{RB}^{(k)} \}$, collision rates $\{ r_k^{-1} \}$, and final time $t$. Given the region initial state $\omega_R $, its Hamiltonian $ H_R $, and the bath Hamiltonian $ H_B $ defined as in Eq.~\eqref{eq:H_B_copiesofregion}, we define $n_\epsilon $ as the minimum integer such that there exists an element in $\mathfrak{E}_{n_\epsilon+1}$ that $\epsilon$-thermalizes $R$, \begin{equation}
n_\epsilon := \min \left\{ n \in \mb{N} \, \Bigg| \, \exists \, \mc{E} \in \mathfrak{E}_{n+1} \, : \, \omega_R \xrightarrow[\epsilon, H_B]{\mc{E}} \tau_{\beta}(H_R)\right\} -1. \end{equation} The integer $n_\epsilon$ then quantifies the smallest size of a thermal environment required to thermalize region $R$ under stochastic collisions, and hence provides a natural measure to quantify robustness of an MBL system against thermalization.
\subsection{Upper and lower bounds for thermalization} Our main results are upper and lower bounds on $n_\epsilon$ that are essentially tight for a wide range of Hamiltonians $H_R$. These bounds are stated using the (smooth) max-relative entropy, an entropic quantity that has received considerable attention in recent years in quantum information and communication theoretical research~\cite{konig2009operational,horodecki2013fundamental,bu2017maximum}. Once again, it is worth noting that our results are applicable in general for finite-dimensional quantum systems where equilibration occurs, of which MBL is a particularly interesting example. The max-relative entropy~\cite{datta_min-_2009} between two quantum states $\rho, \sigma \in \SH{\mc{H}}$ such that $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$ is defined as \begin{equation} \label{eq:max_rel_ent}
\Dmax{}\left( \rho \| \sigma \right) = \inf \left\{ \lambda \in \mb{R} \ : \ \rho \leq 2^{\lambda} \sigma \right\}, \end{equation} while the smooth max-relative entropy between the same two quantum states, for some $\epsilon > 0$, is defined as
\begin{equation} \label{eq:smooth_max_rel_ent}
\Dmax{\epsilon}(\rho \| \sigma) = \inf_{\tilde{\rho} \in B_{\epsilon}(\rho)} \Dmax{}\left( \tilde{\rho} \| \sigma \right), \end{equation} where $B_{\epsilon}(\rho)$ is the ball of radius $\epsilon$ around the state $\rho$ with respect to the distance induced by the trace norm.
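As a small numerical illustration (ours, not part of the original derivations), for full-rank $\sigma$ -- which is the case for Gibbs states of bounded Hamiltonians -- the non-smoothed max-relative entropy of Eq.~\eqref{eq:max_rel_ent} can be computed from the largest eigenvalue of $\sigma^{-1/2}\rho\,\sigma^{-1/2}$, as sketched below.
\begin{verbatim}
# Sketch: D_max(rho || sigma), assuming sigma has full rank.
import numpy as np

def d_max(rho, sigma):
    """log2 of the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2},
    i.e. the smallest lambda such that rho <= 2^lambda sigma."""
    evals, evecs = np.linalg.eigh(sigma)
    s_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.conj().T
    M = s_inv_sqrt @ rho @ s_inv_sqrt
    return float(np.log2(np.max(np.linalg.eigvalsh(M))))
\end{verbatim}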
\subsubsection{Upper bound} We first present and discuss the upper bound, which can be easily stated in terms of the quantities just introduced. \begin{theorem}[Upper bound on the size of the bath] \label{thm:upper} For a given Hamiltonian $H_R$, inverse temperature $\beta$, and a constant $\epsilon > 0$, we have that \begin{align}
n_\epsilon \leq \frac{1}{\epsilon^2} \, 2^{\Dmax{}\left( \omega_R \| \tau_{\beta}(H_R) \right)}. \end{align} \end{theorem} The above theorem provides a quantitative bound on the size of the thermal bath needed to $\epsilon$-thermalize a lattice region $R$, when the coupling is mediated by stochastic collisions. For this specific dynamics, the region can be $\epsilon$-thermalized if the size of the bath (the number of components) is proportional to the exponential of the max-relative entropy between the state of the region $\omega_R$ and its thermal state $\tau_{\beta}(H_R)$. \par Theorem~\ref{thm:upper} is proven in Supplementary Note 2. Here, we present a sketch of the proof in two steps. In the first step, we show that $\mathfrak{E}_n$ can be connected to so-called random unitary channels~\cite{audenaert_random_2008}. In the second step, we use this connection to find a particular channel in $\mathfrak{E}_n$ that achieves the upper bound of the above theorem. A central ingredient to the second step is a result known as convex split lemma~\cite{anshu_quantum_2017, anshu_quantifying_2018}. \par Turning to the first step, recall that a random unitary channel is a map of the form \begin{equation} \label{eq:ran_unit_chn} \mc{E}(\cdot) = \sum_k \, p_k \, U_k \, \cdot \, U_k^{\dagger}, \end{equation} where $\left\{ p_k \right\}_k$ is a probability distribution, and $\{U_k\}_k$ is a set of unitary operators. For a given number $n-1$ of bath subsystems, we define the class of energy-preserving random unitary channels $\mathfrak{R}_n$ as those random unitary channels on $RB$ for which each unitary operator $U_k \in \BH{\mc{H}_R \otimes \mc{H}_B}$ commutes with the Hamiltonian of the global system, i.~e., $\left[ U_k , H_R + H_B \right] = 0$. In Supplementary Note 1, we show that for any $n\geq1$, $\mathfrak{E}_n \subseteq \mathfrak{R}_n$, therefore allowing us to analyse any element of $ \mathfrak{E}_n $ as a random unitary channel. \par Turning to the second step, we use a stochastic collision model with a simple representation in terms of random unitary channels. Let us first recall that the thermal bath $B$ is described by $n-1$ copies of $\tau_{\beta}(H_R)$, the Gibbs state of the Hamiltonian $H_R$ at inverse temperature $\beta$. The collisions occur either between the region $R$ and one subsystem of the bath, or between two bath subsystems. The rate of collisions is uniform, and given by $r^{-1} >0$. During a collision involving the $i$-th and $j$-th subsystems of $RB$, the states of the two colliding components are swapped, so that the interaction is described by the unitary operator $U^{(i,j)}_{\text{swap}}$. The action of this operator over two quantum systems, described by the state vectors $\ket{\psi}_1$ and $\ket{\phi}_2$ respectively, is given by $U^{(1,2)}_{\mathrm{swap}} \ket{\psi}_1 \otimes \ket{\phi}_2 =\ket{\phi}_1 \otimes \ket{\psi}_2$. For an initial global state $\rho_{RB}^{(\text{in})} = \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-1}$, the steady state obtained through this process is \begin{equation} \label{eq:equilibrium_state} \rho_{RB}^{(\text{ss})} = \sum_{m=1}^n \frac{1}{n} \, \tau_{\beta}(H_R)^{\otimes m-1} \otimes \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-m}, \end{equation} where the a-thermality of the region has been uniformly hidden into the different components of the bath. 
It is worth noting that, under the stochastic collision model described above, the global system reaches its steady state exponentially quickly in the collision rate $r^{-1}$, see Supplementary Note 2 for more details. \par The mapping from the initial state of region and bath to the steady state is achieved by the following random unitary channel \begin{equation} \label{eq:equilibration_chn} \bar{\mc{E}}_n(\cdot) = \sum_{i=1}^n \frac{1}{n} \, U^{(1,i)}_{\text{swap}} \, \cdot \, U^{(1,i) \, \dagger}_{\text{swap}}, \end{equation} which uniformly swaps each of the bath subsystems with $R$. Such channels have been studied before in the context of entropy production~\cite{diosi2006exact,csiszar2007limit}, see also Ref.~\cite{scarani2002thermalizing} for a similar example. Since all subsystems share the same Hamiltonian $H_R$, it is easy to see that each one of the $U^{(1,i)}_{\text{swap}}$ commutes with the joint Hamiltonian, so that $\bar{\mc{E}}_n
\in \mathfrak{R}_n$. Finally, we can invoke the convex split lemma, see Supplementary Note 2, which allows us to show that, for any $ \epsilon >0 $, the channel $\bar{\mc{E}}_n$ can $\epsilon$-thermalize the region $R$ when the number of subsystems is $n = \epsilon^{-2} 2^{\Dmax{}\left( \omega_R \| \tau_{\beta}(H_R) \right)}$.
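To make this construction concrete, the following sketch (ours; illustrative names, feasible only for a small region dimension and modest $n$) builds the output of the channel $\bar{\mc{E}}_n$ of Eq.~\eqref{eq:equilibration_chn} on the product input, i.e.\ the convex-split state of Eq.~\eqref{eq:equilibrium_state}, and evaluates its trace distance to $\tau_{\beta}(H_R)^{\otimes n}$. For a single qubit out of equilibrium one expects this error to decay roughly like $\sqrt{2^{\Dmax{}\left( \omega_R \| \tau_{\beta}(H_R) \right)}/n}$, consistently with Theorem~\ref{thm:upper}.
\begin{verbatim}
# Sketch: trace distance between the convex-split state and tau^{(x) n}.
import numpy as np
from functools import reduce

def kron_list(ops):
    return reduce(np.kron, ops)

def convex_split_error(omega, tau, n):
    """|| (1/n) sum_m tau^{(m-1)} (x) omega (x) tau^{(n-m)} - tau^{(x) n} ||_1"""
    target = kron_list([tau] * n)
    state = np.zeros_like(target)
    for m in range(n):
        factors = [tau] * n
        factors[m] = omega        # the region state sits at position m
        state = state + kron_list(factors) / n
    diff = state - target
    return float(np.sum(np.abs(np.linalg.eigvalsh(diff))))
\end{verbatim}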
The collision model presented here encompasses a wide range of physically realistic thermalization processes. However, before turning to a lower bound, we note that, due to the particularly simple nature of \eqref{eq:equilibration_chn}, one can use the above construction to upper bound the required size of a bath for other thermalization models as well. For example, by noting that the channel~\eqref{eq:equilibration_chn} is permutation-symmetric (in the sense that it has permutation-invariant states as its fixed points), it follows that thermalization models with permutation-symmetric dynamics (i.e.~those allowing for any permutation symmetric channel) are also subject to the above upper bound.
\subsubsection{Lower bound and optimality results} We now turn to deriving a lower bound on $n_\epsilon$. This bound is obtained through a further assumption on the Hamiltonian $ H_R $, which we call the energy subspace condition (ESC).
\begin{definition}[Energy subspace condition] Given a Hamiltonian $ H_R $, we say that it fulfills the ESC iff for any $n \in \mb{N}$, given the set of energy levels $\left\{E_k \right\}_{k=1}^d$ of the Hamiltonian $H_R$, we have that for any vectors $m, m' \in \mb{N}^d $ with the same normalization factor, namely $\sum_k m_k = \sum_k m'_k = n $, \begin{equation} \label{item:opt_cond} \sum_k m_k E_k \neq \sum_k m'_k E_k . \end{equation} \end{definition}
Let us here briefly discuss the physical significance of the ESC. The ESC entails (but is not equivalent to) the condition that energy levels cannot be exact integer multiples of one another, which in particular implies full non-degeneracy. Furthermore, note that the ESC is an exact rather than an approximate condition: it can still hold even if energy levels are very close to each other.
Having energy levels that are exact integer multiples of one another is a very fine-tuned situation, which is broken as soon as randomness is introduced into the Hamiltonian \cite{Linden2009}. Let us for example consider how likely it is for MBL systems to have degenerate energy levels. In the ergodic phase, the level statistics are Wigner-Dyson, so non-degeneracy is enforced by level repulsion. On the other hand, in the strong MBL phase, level statistics are Poissonian, meaning that the probability density function is maximal for zero-energy gaps. Even so, exact degeneracy would correspond to a zero-volume integral of the Poissonian distribution (which is bounded from above), and therefore degenerate gaps still occur with zero probability.
It is clear, however, that the ESC is much more stringent than requiring non-degeneracy: if we require Eq.~\eqref{item:opt_cond} to be satisfied for all $n \in \mb{N}$, this implies that the energy levels of the Hamiltonian $H_R$ generically need to be irrational. Nevertheless, one can relax this condition by asking it to hold for all $n \leq N$, for some sufficiently large $N \in \mb{N}$, for example $N$ given by the upper bound on $ n_\epsilon $ in Theorem \ref{thm:upper}. We refer to this as the ESC being satisfied up to $ N $. In Section \ref{sec:disorderedHeisenberg}, we discuss how a paradigmatic MBL system relates to this condition.
We can now state the following theorem, proved in Supplementary Note 4, on the optimality of the channel associated with the convex split lemma.
\begin{theorem}[Optimal thermalization processes] \label{thm:csl_optimality} If $H_R$ satisfies the ESC and the state $\omega_R$ is diagonal in the energy eigenbasis, then the channel $\bar{\mc{E}}_n$ in Eq.~\eqref{eq:equilibration_chn} provides the optimal thermalization process, that is, for any $n \in \mb{N}$, \begin{equation} \bar{\mc{E}}_n \in \underset{\mc{E} \in \mathfrak{E}_n}{\mathrm{argmin}} \norm{ \mc{E} \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-1} \right) - \tau_{\beta}(H_R)^{\otimes n} }_1. \end{equation} \end{theorem}
Theorem~\ref{thm:csl_optimality} shows that, for Hamiltonians satisfying the ESC, the channel $\bar{\mc{E}}_n$ provides the optimal thermalization of $R$, that is, no other random energy-preserving channel acting on the same global system can achieve a smaller value of $\epsilon$ in Eq.~\eqref{eq:def_thermalizing}. The above result applies to initial states that are diagonal in the energy eigenbasis; this is in general not the case for the reduced state $\omega_R$ of the infinite-time average of Eq.~\eqref{eq:inf_time_average}, since it might have coherence in the eigenbasis of the reduced Hamiltonian $H_R$. For states with coherence, the channel of Eq.~\eqref{eq:equilibration_chn} is not necessarily optimal anymore, but we can still bound the difference in thermalization achieved by this channel and an optimal one, see Supplementary Note 4 for the proof.
\begin{theorem}[Thermalization bound for coherent states] \label{thm:csl_semi-optimality} Fix $n \in \mb{N}$, and assume that $H_R$ satisfies the ESC. Consider the channel $\mc{E}_{\mathrm{opt}} \in \mathfrak{E}_n$ achieving optimal thermalization $\epsilon_{\mathrm{opt}} = \norm{ \mc{E}_{\mathrm{opt}} \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-1} \right) - \tau_{\beta}(H_R)^{\otimes n} }_1$, and the decohering channel $\Delta( \cdot ) = \sum_E \Pi_E \cdot \Pi_E$, where $\Pi_E$ is the eigenprojector onto the energy subspace associated with $E$. We define the parameter $\delta = \norm{ \omega_R - \Delta(\omega_R) }_1$, quantifying the amount of coherence contained in the state of the region. Then, the thermalization achieved by the channel $\bar{\mc{E}}_n$ is bounded as \begin{equation} \norm{ \bar{\mc{E}}_n \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-1} \right) - \tau_{\beta}(H_R)^{\otimes n} }_1 \leq \epsilon_{\mathrm{opt}} + \delta. \end{equation} \end{theorem}
The above theorem provides a quantitative bound on the thermalization achieved by the channel $\bar{\mc{E}}_n$ when the input system has coherence in the energy eigenbasis. In the case of MBL systems, the eigenstates of the Hamiltonian are close to product states, see for instance Ref.~\cite{friesdorf_many-body_2015}, and therefore the reduced state of the infinite-time average $\omega_R$ is expected to have small and strongly-decaying coherence. Thus, Thm.~\ref{thm:csl_semi-optimality} shows that the stochastic collision model introduced in Section \ref{sec:setting} is able to effectively thermalize the region of an MBL system. From the two theorems stated above we derive the following corollary, providing a lower bound on the size of the thermal bath needed to thermalize a given quantum system.
\begin{corollary}[Lower bound to size of the bath] For a given $\beta$ and $\epsilon > 0$, and some Hamiltonian $H_R$ satisfying the ESC, we have \begin{equation} \label{eq:lower_bound}
n_\epsilon \geq 2^{\Dmax{2 \sqrt{\epsilon} +\delta}\left( \omega_R \| \tau_{\beta}(H_R) \right)}, \end{equation} where $\delta = \norm{\omega_R - \Delta(\omega_R)}_1$ and $\Delta(\omega_R)$ is the decohered version of the state $\omega_R$. \end{corollary}
Note that this lower bound is arising from the stringent model of thermalization of Eq.~(\ref{eq:def_thermalizing}), and that less stringent models will potentially lead to smaller lower bounds. When $H_R$ does not satisfy the ESC, it is easy to find counter-examples to the optimality of the channel $\bar{\mc{E}}$, as we show in Supplementary Note 4. The idea is that this channel is optimal only when it is able to produce a uniform distribution within each energy subspace of the global system, and this is possible if each subspace is fully characterized by a different frequency of single-system eigenvectors, which is exactly given by the ESC. Indeed, when the ESC is maximally violated,~i.~e., when the system Hamiltonian is completely degenerate, then no bath is required at all. See Sec.~III for a discussion of this and its relation to known bounds from randomness extraction.
\subsection{The disordered Heisenberg chain} \label{sec:disorderedHeisenberg} Our results from the previous section show that, for systems that satisfy the ESC, the max-relative entropy between the local state of a lattice region and its thermal state provides a natural measure for the robustness of that region to thermalization for a broad family of interactions. This includes many-body systems close to the transition between the ergodic and MBL phase, where both level repulsion and randomness effects favour a lack of exact degeneracies, so that it seems reasonable to expect for such systems to satisfy the ESC to sufficient order $n$. In this section, we use these results to study the robustness of the MBL phase to the thermal noise for a concrete system. Specifically, we consider the disordered Heisenberg chain, a one-dimensional spin-$\frac{1}{2}$ lattice system composed of $L$ sites, governed by the Hamiltonian \begin{equation} \label{eq:XXZChainH} H_V = \sum_i^L \left( \sigma_i^x \sigma_{i+1}^x + \sigma_i^y \sigma_{i+1}^y+ \sigma_i^z \sigma_{i+1}^z \right) + \Delta \sum_i^L h_i \sigma_i^z, \end{equation} where $\sigma^x, \sigma^y, \sigma^z \in \BH{\mb{C}^2}$ are the Pauli operators, $\Delta$ is the (dimensionless) disorder strength, and each parameter $h_i \in [-1,1]$ is drawn uniformly at random. We employ periodic boundary conditions. \par It has been demonstrated both theoretically~\cite{BAA} and experimentally~\cite{Schreiber} that this system undergoes a localization transition above the critical disorder strength $\Delta_c \approx 7$. The transition manifests itself in a breakdown of conductance~\cite{BAA,Schreiber}, and a slowdown of entanglement growth after a quench~\cite{Prosen,Pollmann}. Moreover, a phenomenological model in terms of quasi-local constants of motions exists which provides an explanation for the non-thermal behavior of the system~\cite{Serbyn,Huse}.
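For reference, a dense-matrix construction of the Hamiltonian in Eq.~\eqref{eq:XXZChainH} can be sketched as follows (our own illustration, feasible only for small $L$ as written; it is not the exact-diagonalization code used for the results reported here).
\begin{verbatim}
# Sketch: disordered Heisenberg chain with periodic boundary conditions.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    ops = [id2] * L
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def heisenberg_chain(L, Delta, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    h = rng.uniform(-1.0, 1.0, size=L)      # random on-site fields h_i
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L):
        j = (i + 1) % L                     # periodic boundary conditions
        for s in (sx, sy, sz):
            H += site_op(s, i, L) @ site_op(s, j, L)
        H += Delta * h[i] * site_op(sz, i, L)
    return H
\end{verbatim}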
To relate the model to our theoretical results, we note that when the region $ R $ is a single qubit ($|R|=1$) with a non-zero energy gap, the ESC is satisfied for all $ n\in \mb{N} $. For $ |R|=2 $, we have verified the ESC across a large range of disorder strengths $ \Delta \in [0.01,20] $, up to $N=25$ (all of the $\sim 2000$ realizations satisfy the ESC); we have additionally considered a small number of realizations for $N=35$, for all of which the ESC holds. As $ |R| $ increases, higher values of $ N $ become significantly harder to check numerically. However, we have verified that, for example, $|R|=4$ always satisfies the ESC up to $N=5$ (for 2000 generated realizations), while the condition is satisfied with high probability when $N=6$ ($90\%$ of the realizations).
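The numerical check just described can be sketched as follows (ours; it verifies the ESC up to a floating-point tolerance by enumerating all multisets of $n$ energy levels, and is therefore feasible only for small $|R|$ and $N$).
\begin{verbatim}
# Sketch: numerical test of the ESC up to order N, within tolerance tol.
import numpy as np
from itertools import combinations_with_replacement

def satisfies_esc(energies, N, tol=1e-10):
    """True if, for every n <= N, all n-fold sums of energy levels
    (with repetition) are pairwise distinct."""
    for n in range(1, N + 1):
        sums = sorted(sum(c) for c in
                      combinations_with_replacement(energies, n))
        if any(b - a < tol for a, b in zip(sums, sums[1:])):
            return False
    return True

# e.g. satisfies_esc(np.linalg.eigvalsh(H_R), N=5)
\end{verbatim}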
\begin{figure}
\caption{{\bf Max-relative entropy for the disordered Heisenberg chain.} {\bf a)} Max-relative entropy $\Dmax{} \left( \omega_R \| \tau_{\beta}(H_R) \right)$ as a function of subsystem size $|R|$ for a lattice of $L=15$ sites. The plots show an average over $100$ disorder realizations. The states were calculated employing exact diagonalization. For low values of disorder
$\Delta$ the max-relative entropy is almost constant as $|R|$ increases, while for higher values of
$\Delta$ it scales linearly in $|R|$, hinting toward a robustness of the MBL phase with respect to the class of interaction models we are considering. {\bf b)} The slope of the max-relative entropy as a function of the disorder $\Delta$ provides information on the phase transition. Indeed, we can see that this quantity abruptly increases in proximity of the expected phase transition from the ergodic to the MBL phase. The slope is obtained by a linear fit with error bars indicating least squares errors. The inset shows the derivative of the slope, with the grey lines indicate a possible transition region. }
\label{fig:Dmax}
\end{figure}
\par In our simulation, we choose as initial state vector $\ket{\Psi(0)}$ a variation of the N{\'e}el state
with support on the total-magnetization sectors $M=\pm 1, 0$. Our choice is motivated by the fact that this state, due to its increased overlap over different symmetric subspaces of the Hamiltonian, thermalizes more easily during the ergodic phase. For each random realization, we numerically compute the infinite time average of $\ket{\Psi}$ as defined in Eq.~\eqref{eq:inf_time_average}, using exact diagonalization. We then trace out part of the lattice so as to obtain the state $\omega_R$, describing the infinite time averaged state reduced to the region $R$. Notice that in the ergodic phase, when the disorder strength $\Delta < \Delta_c$, this state is expected to be close to thermal, with a temperature depending on the energy of the initial state of the lattice. However, when the disorder strength $\Delta$ passes its critical value, the state $\omega_R$ is not thermal any more \cite{Schreiber}. \par To numerically compute the max-relative entropy for this system, we use the Gibbs state of the reduced Hamiltonian $H_R$, obtained from the Hamiltonian in Eq.~\eqref{eq:XXZChainH}
by only considering terms with full support on the region $R$. The inverse temperature $\beta$ is obtained by constructing the global Gibbs state of the lattice and requiring its energy to equal that of the initial state vector $\ket{\Psi(0)}$. We compute $\Dmax{}(\omega_R\| \tau_{\beta}(H_R))$ for different disorder strengths $\Delta$ and different sizes of the region $R$; see Fig.~\ref{fig:Dmax}.
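For completeness, the way the inverse temperature is fixed can be sketched as follows (our own illustration; a simple bisection on the monotonically decreasing Gibbs energy is used, assuming the target energy lies within the bracketed positive-temperature range).
\begin{verbatim}
# Sketch: fit beta so that the global Gibbs state has the same mean energy
# as the initial state vector psi0.
import numpy as np

def fit_beta(H_V, psi0, beta_lo=1e-4, beta_hi=50.0, iters=60):
    evals = np.linalg.eigvalsh(H_V)
    e_target = float(np.real(psi0.conj() @ H_V @ psi0))   # <psi0|H_V|psi0>

    def gibbs_energy(beta):
        w = np.exp(-beta * (evals - evals.min()))  # shifted for stability
        return float(np.sum(w * evals) / np.sum(w))

    # The Gibbs energy decreases monotonically in beta; bisect on the bracket.
    for _ in range(iters):
        mid = 0.5 * (beta_lo + beta_hi)
        if gibbs_energy(mid) > e_target:
            beta_lo = mid
        else:
            beta_hi = mid
    return 0.5 * (beta_lo + beta_hi)
\end{verbatim}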
\par We find that in the ergodic phase the state is approximately thermal, and the max-relative entropy remains almost constant as $|R|$ increases. For large enough region sizes, the max-relative entropy starts increasing even in the ergodic case. However, this effect is due to the finite size of the lattice in our simulation, and it can be mitigated by increasing the number of lattice sites (at the expense of a higher computational cost). As $\Delta$ approaches the critical value, we find that the max-relative entropy scales linearly in the region size $|R|$, with a linear coefficient which increases with the disorder strength; see Fig.~\ref{fig:Dmax}.a. As a result, the size of the external thermal bath $n_{\epsilon}$ scales exponentially in the region size, due to the bounds we have obtained in the previous section. This exponential scaling in the size of the bath suggests a robustness of the MBL phase with respect to the dynamics given by Eq.~\eqref{eq:master_eq}, since the relative size of the bath ${n_{\epsilon}}/{|R|}$ needs to diverge as $|R|$ tends to infinity. In other words, for the MBL phase to be destroyed one needs, under the interaction models we consider, an exponentially large amount of thermal noise. It is worth noting that our characterization of robustness of the MBL phase to thermal noise is distinctly different from others found in the literature~\cite{fischer_dynamics_2016,nandkishore_spectral_2014, johri_many-body_2015,levi_robustness_2016}. Indeed, we couple the system with a finite-sized thermal bath, and quantify the robustness in terms of its size. Furthermore, our notion of thermalization accounts for the evolution of both system and bath, rather than focusing on the system only. Other works instead consider infinite thermal reservoirs and quantify the robustness as a function, for instance, of the coupling between system and environment. A promising experimental platform is provided by recent optical lattice experiments \cite{bordia2016coupling,bordia2017probing,abadal2018bath}. However, to connect to our findings one would need full state tomography on both system and bath, which is so far out of reach for these platforms.
\par We additionally study the first derivative of the max-relative entropy with respect to the region size $|R|$, as a function of disorder strength, shown in Fig.~\ref{fig:Dmax}.b. We find that, in the ergodic phase, the derivative remains constant and small. As $\Delta$ approaches the critical value, the derivative increases, and for $\Delta \gg \Delta_c$ the derivative becomes constant again. Thus, we find that the derivative of the max-relative entropy with respect to the region size is an order parameter for the MBL phase transition. We then use this order parameter to estimate the critical value $\Delta^{(L)}_c$ for the finite-length spin chain we are considering, obtaining a value of approximately $4.5$ for $L = 15$ sites. While the critical value for infinite-length spin chains is considered to be $\Delta_c \approx 7$, we find that our value, which we stress is obtained for a finite number of sites, is in good agreement with known results found in the literature using other measures~\cite{luitz_many-body_2015,gray_many-body_2018}.
\section{Discussion} We show that mathematical results originally developed to study quantum information processing may find applications in many-body physics, in this paper in particular for the study of MBL. We demonstrate this by applying the recently developed convex split lemma to derive upper and lower bounds for the size of the external thermal bath required to thermalize an MBL system. The class of interaction models between lattice and thermal bath is described by the master equation~\eqref{eq:master_eq}, and consists of stochastic energy-preserving collisions between the system and bath components. The bounds we obtain depend on the max-relative entropy between the state we aim to thermalize and its thermal state. \par We make use of these analytic results to study a specific and at the same time paradigmatic system exhibiting MBL features, known as the disordered Heisenberg chain. We show that the MBL phase in this system is in fact robust with respect to the thermalization processes considered here, and that the derivative of the max-relative entropy with respect to region size serves as an approximate order parameter of the ergodic-to-MBL transition. We emphasize that this is not in contradiction with previous results, where a breakdown of localization was reported~\cite{fischer_dynamics_2016, nandkishore_spectral_2014,levi_robustness_2016}, as the size of the baths considered in these works was unbounded. Resource-theoretic frameworks offer another potentially useful approach for studying thermalization with infinite-dimensional baths: the framework of elementary thermal operations~\cite{lostaglio2018elementary}, for instance, involves a single bosonic bath that is coupled only to two levels of the system of interest. One may then study the resources (the number of bosonic baths with different frequencies) required to achieve thermalization. Also, and more technically, it would be interesting to study the extent to which both the ESC and the requirement of exact commutation in our framework can be relaxed to hold only approximately, and how this in turn affects the lower bound of Corollary~\ref{eq:lower_bound}. We leave these questions to future work. \par The success of our application implies that, potentially, other information-theoretic tools could be employed to study the thermalization of MBL systems -- and non-equilibrium dynamics of many-body systems in more generality, for that matter. For instance, results in randomness extraction~\cite{trevisan_extracting_2000} might be useful to provide new bounds. In randomness extraction, a weakly random source is converted into an approximately uniform distribution, with the use of a seed (a small, uniformly distributed auxiliary system). In analogy, thermalization requires a non-thermal state to be mapped into an almost thermal state, with the help of an external bath (the seed). Thus, it seems possible that results from randomness extraction might be modified to study this setting, and to obtain bounds on the thermal seed. \par It has been shown that excited states of one-dimensional MBL systems are well approximated by matrix product states (MPS) with a low bond dimension~\cite{friesdorf_many-body_2015,Bauer} if the system features an information mobility gap. These states have several interesting properties, and in particular they feature an area law for the entanglement entropy which is logarithmic in the bond dimension~\cite{eisert_colloquium:_2010}.
Since our result is based on a particular entropic quantity, it might be possible to use the properties of MPS to derive a fully analytical bound on the robustness of these systems with respect to thermal noise. It is the hope that our work stimulates further cross-fertilization between the fields of quantum thermodynamics and the study of quantum many-body systems out of equilibrium.
\section{Acknowledgments}
We would like to thank {\'A}lvaro Martin Alhambra, Johnnie Gray, Volkher Scholz, and Henrik Wilming for helpful discussions. C.~S.\ is funded by EPSRC. N.~N. is funded by the Alexander von Humboldt foundation. P.~B. acknowledges funding from the Templeton Foundation. J.~E. has been supported by the DFG (FOR 2724, CRC 183), the FQXi, and the Templeton Foundation. We acknowledge the Freie Universit{\"a}t Berlin for covering the costs of offsetting the emission generated by this research.
\begin{figure}
\caption{Estimated climate footprint of this paper. Prototyping is not included in these calculations. Estimations have been calculated using the examples of Scientific CO$_2$nduct \cite{scicon2019} and are correct to the best of our knowledge.}
\end{figure}
\begin{thebibliography}{67}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Deutsch}(1991)}]{Deutsch1991}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Deutsch}},\ }\href {\doibase 10.1103/PhysRevA.43.2046} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {43}},\
\bibinfo {pages} {2046} (\bibinfo {year} {1991})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Srednicki}(1994)}]{Srednicki1994}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Srednicki}},\ }\href {\doibase 10.1103/PhysRevE.50.888} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume}
{50}},\ \bibinfo {pages} {888} (\bibinfo {year} {1994})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Popescu}\ \emph {et~al.}(2006)\citenamefont
{Popescu}, \citenamefont {Short},\ and\ \citenamefont {Winter}}]{Popescu06}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Popescu}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}},\ }\href
{\doibase 10.1038/nphys444} {\bibfield {journal} {\bibinfo {journal}
{Nature Phys.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {754}
(\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Eisert}\ \emph {et~al.}(2015)\citenamefont {Eisert},
\citenamefont {Friesdorf},\ and\ \citenamefont {Gogolin}}]{1408.5148}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Eisert}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Friesdorf}}, \
and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}},\ }\href
{\doibase doi:10.1038/nphys3215} {\bibfield {journal} {\bibinfo {journal}
{Nature Phys.}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {124}
(\bibinfo {year} {2015})},\ \Eprint {http://arxiv.org/abs/1408.5148}
{arXiv:1408.5148} \BibitemShut {NoStop}
\bibitem [{\citenamefont {Polkovnikov}\ \emph {et~al.}(2011)\citenamefont
{Polkovnikov}, \citenamefont {Sengupta}, \citenamefont {Silva},\ and\
\citenamefont {Vengalattore}}]{ngupta_Silva_Vengalattore_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Polkovnikov}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Sengupta}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Silva}}, \
and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Vengalattore}},\
}\href {\doibase https://doi.org/10.1103/RevModPhys.83.863} {\bibfield
{journal} {\bibinfo {journal} {Rev.\ Mod.\ Phys.}\ }\textbf {\bibinfo
{volume} {83}},\ \bibinfo {pages} {863} (\bibinfo {year} {2011})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Oganesyan}\ and\ \citenamefont
{Huse}(2007)}]{Oganesyan}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Oganesyan}}\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Huse}},\ }\href {\doibase 10.1103/PhysRevB.75.155111} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {75}},\
\bibinfo {pages} {155111} (\bibinfo {year} {2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Schreiber}\ \emph {et~al.}(2015)\citenamefont
{Schreiber}, \citenamefont {Hodgman}, \citenamefont {Bordia}, \citenamefont
{L{\"u}schen}, \citenamefont {Fischer}, \citenamefont {Vosk}, \citenamefont
{Altman}, \citenamefont {Schneider},\ and\ \citenamefont
{Bloch}}]{Schreiber}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Schreiber}}, \bibinfo {author} {\bibfnamefont {S.~S.}\ \bibnamefont
{Hodgman}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bordia}},
\bibinfo {author} {\bibfnamefont {H.~P.}\ \bibnamefont {L{\"u}schen}},
\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont {Fischer}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Vosk}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Altman}}, \bibinfo {author} {\bibfnamefont
{U.}~\bibnamefont {Schneider}}, \ and\ \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Bloch}},\ }\href {\doibase 10.1126/science.aaa7432}
{\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {349}},\ \bibinfo {pages} {842} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Abanin}\ \emph {et~al.}(2019)\citenamefont {Abanin},
\citenamefont {Altman}, \citenamefont {Bloch},\ and\ \citenamefont
{Serbyn}}]{abanin_colloquium:_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Abanin}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Altman}},
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Bloch}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Serbyn}},\ }\href {\doibase
10.1103/RevModPhys.91.021001} {\bibfield {journal} {\bibinfo {journal}
{Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages}
{021001} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Luitz}\ \emph {et~al.}(2017)\citenamefont {Luitz},
\citenamefont {Huveneers},\ and\ \citenamefont {De~Roeck}}]{luitz_how_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Luitz}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Huveneers}}, \
and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {De~Roeck}},\ }\href
{\doibase 10.1103/PhysRevLett.119.150602} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo
{pages} {150602} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ponte}\ \emph {et~al.}(2017)\citenamefont {Ponte},
\citenamefont {Laumann}, \citenamefont {Huse},\ and\ \citenamefont
{Chandran}}]{ponte_pedro_thermal_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Ponte}}, \bibinfo {author} {\bibfnamefont {C.~R.}\ \bibnamefont {Laumann}},
\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Huse}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Chandran}},\ }\href
{\doibase 10.1098/rsta.2016.0428} {\bibfield {journal} {\bibinfo {journal}
{Phil. Trans. Roy. Soc. A}\ }\textbf {\bibinfo {volume} {375}},\ \bibinfo
{pages} {20160428} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hetterich}\ \emph {et~al.}(2017)\citenamefont
{Hetterich}, \citenamefont {Serbyn}, \citenamefont {Domínguez},
\citenamefont {Pollmann},\ and\ \citenamefont
{Trauzettel}}]{hetterich_noninteracting_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Hetterich}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Serbyn}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Domínguez}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Pollmann}}, \ and\ \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Trauzettel}},\ }\href {\doibase
10.1103/PhysRevB.96.104203} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. B}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {104203}
(\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bari\v{s}i{\'c}}\ \emph {et~al.}(2009)\citenamefont
{Bari\v{s}i{\'c}}, \citenamefont {Prelov\v{s}ek}, \citenamefont
{Metavitsiadis},\ and\ \citenamefont {Zotos}}]{barisic_incoherent_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.~S.}\ \bibnamefont
{Bari\v{s}i{\'c}}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Prelov\v{s}ek}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Metavitsiadis}}, \ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Zotos}},\ }\href {\doibase 10.1103/PhysRevB.80.125118} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {80}},\
\bibinfo {pages} {125118} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Goihl}\ \emph {et~al.}(2019)\citenamefont {Goihl},
\citenamefont {Eisert},\ and\ \citenamefont {Krumnow}}]{Goihl19}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Goihl}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}}, \
and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Krumnow}},\ }\href
{\doibase 10.1103/PhysRevB.99.195145} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo
{pages} {195145} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nandkishore}\ \emph {et~al.}(2014)\citenamefont
{Nandkishore}, \citenamefont {Gopalakrishnan},\ and\ \citenamefont
{Huse}}]{nandkishore_spectral_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Nandkishore}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Gopalakrishnan}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\
\bibnamefont {Huse}},\ }\href {\doibase 10.1103/PhysRevB.90.064203}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo
{volume} {90}},\ \bibinfo {pages} {064203} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fischer}\ \emph
{et~al.}(2016{\natexlab{a}})\citenamefont {Fischer}, \citenamefont
{Maksymenko},\ and\ \citenamefont {Altman}}]{fischer_dynamics_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Fischer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Maksymenko}},
\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Altman}},\ }\href
{\doibase 10.1103/PhysRevLett.116.160401} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo
{pages} {160401} (\bibinfo {year} {2016}{\natexlab{a}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Levi}\ \emph {et~al.}(2016)\citenamefont {Levi},
\citenamefont {Heyl}, \citenamefont {Lesanovsky},\ and\ \citenamefont
{Garrahan}}]{levi_robustness_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Levi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Heyl}}, \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Lesanovsky}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.~P.}\ \bibnamefont {Garrahan}},\ }\href {\doibase
10.1103/PhysRevLett.116.237203} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{237203} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Johri}\ \emph {et~al.}(2015)\citenamefont {Johri},
\citenamefont {Nandkishore},\ and\ \citenamefont
{Bhatt}}]{johri_many-body_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Johri}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Nandkishore}},
\ and\ \bibinfo {author} {\bibfnamefont {R.~N.}\ \bibnamefont {Bhatt}},\
}\href {\doibase 10.1103/PhysRevLett.114.117401} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\
\bibinfo {pages} {117401} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Anshu}\ \emph {et~al.}(2017)\citenamefont {Anshu},
\citenamefont {Devabathini},\ and\ \citenamefont
{Jain}}]{anshu_quantum_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Anshu}}, \bibinfo {author} {\bibfnamefont {V.~K.}\ \bibnamefont
{Devabathini}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Jain}},\ }\href {\doibase 10.1103/PhysRevLett.119.120506} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {119}},\ \bibinfo {pages} {120506} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Anshu}\ \emph {et~al.}(2018)\citenamefont {Anshu},
\citenamefont {Hsieh},\ and\ \citenamefont {Jain}}]{anshu_quantifying_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Anshu}}, \bibinfo {author} {\bibfnamefont {M.-H.}\ \bibnamefont {Hsieh}}, \
and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Jain}},\ }\href
{\doibase 10.1103/PhysRevLett.121.190504} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {121}},\ \bibinfo
{pages} {190504} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Luitz}\ \emph
{et~al.}(2015{\natexlab{a}})\citenamefont {Luitz}, \citenamefont
{Laflorencie},\ and\ \citenamefont {Alet}}]{Luitz}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Luitz}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Laflorencie}},
\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Alet}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\
}\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {081103} (\bibinfo
{year} {2015}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Devakul}\ \emph {et~al.}(2017)\citenamefont
{Devakul}, \citenamefont {Khemani}, \citenamefont {Pollmann}, \citenamefont
{Huse},\ and\ \citenamefont {Sondhi}}]{Devakul}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Devakul}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Khemani}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pollmann}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Huse}}, \ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Sondhi}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phil. Trans. Roy. Soc. Lond. A}\ }\textbf
{\bibinfo {volume} {375}},\ \bibinfo {pages} {2108} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2017)\citenamefont {Yu},
\citenamefont {Pekker},\ and\ \citenamefont
{Clark}}]{PhysRevLett.118.017201}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Yu}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Pekker}}, \ and\
\bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont {Clark}},\ }\href
{\doibase 10.1103/PhysRevLett.118.017201} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {118}},\ \bibinfo
{pages} {017201} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {\v{Z}nidari\v{c}}\ \emph {et~al.}(2008)\citenamefont
{\v{Z}nidari\v{c}}, \citenamefont {Prosen},\ and\ \citenamefont
{Prelov\v{s}ek}}]{Prosen}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{\v{Z}nidari\v{c}}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Prosen}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Prelov\v{s}ek}},\ }\href {\doibase 10.1103/PhysRevB.77.064426} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{77}},\ \bibinfo {pages} {064426} (\bibinfo {year} {2008})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Bardarson}\ \emph {et~al.}(2012)\citenamefont
{Bardarson}, \citenamefont {Pollmann},\ and\ \citenamefont
{Moore}}]{Pollmann}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont
{Bardarson}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pollmann}},
\ and\ \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Moore}},\
}\href {\doibase 10.1103/PhysRevLett.109.017202} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\
\bibinfo {pages} {017202} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wahl}\ \emph {et~al.}(2019)\citenamefont {Wahl},
\citenamefont {Pal},\ and\ \citenamefont {Simon}}]{Wahl}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~B.}\ \bibnamefont
{Wahl}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Pal}}, \ and\
\bibinfo {author} {\bibfnamefont {S.~H.}\ \bibnamefont {Simon}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Nature Phys.}\ }\textbf
{\bibinfo {volume} {15}},\ \bibinfo {pages} {164} (\bibinfo {year}
{2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kshetrimayum}\ \emph {et~al.}()\citenamefont
{Kshetrimayum}, \citenamefont {Goihl},\ and\ \citenamefont
{Eisert}}]{AugstineMBL}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kshetrimayum}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Goihl}},
\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\
}\href@noop {} {\enquote {\bibinfo {title} {Time evolution of many-body
localized systems in two spatial dimensions},}\ }\bibinfo {note}
{arXiv:1910.11359}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Datta}(2009)}]{datta_min-_2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Datta}},\ }\href {\doibase 10.1109/TIT.2009.2018325} {\bibfield {journal}
{\bibinfo {journal} {IEEE Trans. Inf. Th.}\ }\textbf {\bibinfo {volume}
{55}},\ \bibinfo {pages} {2816} (\bibinfo {year} {2009})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Tomamichel}(2016)}]{tomamichel_quantum_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Tomamichel}},\ }\href {\doibase 10.1007/978-3-319-21891-5} {\emph {\bibinfo
{title} {Quantum {Information} {Processing} with {Finite} {Resources} -
{Mathematical} {Foundations}}}}\ (\bibinfo {publisher} {Springer, Cham},\
\bibinfo {year} {2016})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Erven}\ and\ \citenamefont
{Harremos}(2014)}]{erven_renyi_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~v.}\ \bibnamefont
{Erven}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Harremos}},\ }\href {\doibase 10.1109/TIT.2014.2320500} {\bibfield
{journal} {\bibinfo {journal} {IEEE Trans. Inf. Th.}\ }\textbf {\bibinfo
{volume} {60}},\ \bibinfo {pages} {3797} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gogolin}\ and\ \citenamefont
{Eisert}(2016)}]{gogolin_equilibration_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Gogolin}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Eisert}},\ }\href {\doibase 10.1088/0034-4885/79/5/056001} {\bibfield
{journal} {\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf {\bibinfo
{volume} {79}},\ \bibinfo {pages} {056001} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kliesch}\ \emph {et~al.}(2014)\citenamefont
{Kliesch}, \citenamefont {Gogolin}, \citenamefont {Kastoryano}, \citenamefont
{Riera},\ and\ \citenamefont {Eisert}}]{kliesch_locality_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Kliesch}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}},
\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Kastoryano}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Riera}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href {\doibase
10.1103/PhysRevX.4.031019} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {031019}
(\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bordia}\ \emph {et~al.}(2016)\citenamefont {Bordia},
\citenamefont {L\"uschen}, \citenamefont {Hodgman}, \citenamefont
{Schreiber}, \citenamefont {Bloch},\ and\ \citenamefont
{Schneider}}]{bordia2016coupling}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Bordia}}, \bibinfo {author} {\bibfnamefont {H.~P.}\ \bibnamefont
{L\"uschen}}, \bibinfo {author} {\bibfnamefont {S.~S.}\ \bibnamefont
{Hodgman}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schreiber}},
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Bloch}}, \ and\ \bibinfo
{author} {\bibfnamefont {U.}~\bibnamefont {Schneider}},\ }\href {\doibase
10.1103/PhysRevLett.116.140401} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{140401} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bordia}\ \emph {et~al.}(2017)\citenamefont {Bordia},
\citenamefont {L{\"u}schen}, \citenamefont {Scherg}, \citenamefont
{Gopalakrishnan}, \citenamefont {Knap}, \citenamefont {Schneider},\ and\
\citenamefont {Bloch}}]{bordia2017probing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Bordia}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {L{\"u}schen}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Scherg}}, \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Gopalakrishnan}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Knap}}, \bibinfo {author}
{\bibfnamefont {U.}~\bibnamefont {Schneider}}, \ and\ \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {Bloch}},\ }\href {\doibase
10.1103/PhysRevX.7.041047} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {041047}
(\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rubio-Abadal}\ \emph {et~al.}(2018)\citenamefont
{Rubio-Abadal}, \citenamefont {Choi}, \citenamefont {Zeiher}, \citenamefont
{Rui}, \citenamefont {Bloch},\ and\ \citenamefont {Gross}}]{abadal2018bath}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Rubio-Abadal}}, \bibinfo {author} {\bibfnamefont {J.-y.}\ \bibnamefont
{Choi}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zeiher}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hollerith}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Rui}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Bloch}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Gross}},\ }\href@noop {} {\enquote {\bibinfo {title} {Many-body
delocalization in the presence of a quantum bath},}\ } (\bibinfo {year}
{2018}),\ \bibinfo {note} {arxiv:1805.00056}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Horodecki}\ and\ \citenamefont
{Oppenheim}(2013)}]{horodecki2013fundamental}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Horodecki}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Oppenheim}},\ }\href {\doibase 10.1038/ncomms3059} {\bibfield {journal}
{\bibinfo {journal} {Nature Comm.}\ }\textbf {\bibinfo {volume} {4}},\
\bibinfo {pages} {2059} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Janzing}\ \emph {et~al.}(2000)\citenamefont
{Janzing}, \citenamefont {Wocjan}, \citenamefont {Zeier}, \citenamefont
{Geiss},\ and\ \citenamefont {Beth}}]{janzing_thermodynamic_2000}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Janzing}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Wocjan}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Zeier}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Geiss}}, \ and\ \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Beth}},\ }\href {\doibase
10.1023/A:1026422630734} {\bibfield {journal} {\bibinfo {journal} {Int. J.
Th. Phys.}\ }\textbf {\bibinfo {volume} {39}},\ \bibinfo {pages} {2717}
(\bibinfo {year} {2000})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Brand{\~a}o}\ \emph {et~al.}(2013)\citenamefont
{Brand{\~a}o}, \citenamefont {Horodecki}, \citenamefont {Oppenheim},
\citenamefont {Renes},\ and\ \citenamefont
{Spekkens}}]{brandao_resource_2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~G. S.~L.}\
\bibnamefont {Brand{\~a}o}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Horodecki}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Oppenheim}}, \bibinfo {author} {\bibfnamefont {J.~M.}\
\bibnamefont {Renes}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~W.}\
\bibnamefont {Spekkens}},\ }\href {\doibase 10.1103/PhysRevLett.111.250404}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {111}},\ \bibinfo {pages} {250404} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Brandao}\ \emph {et~al.}(2015)\citenamefont
{Brandao}, \citenamefont {Horodecki}, \citenamefont {Ng}, \citenamefont
{Oppenheim},\ and\ \citenamefont {Wehner}}]{brandao2015second}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~G. S.~L.}\
\bibnamefont {Brandao}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Ng}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Oppenheim}}, \ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wehner}},\ }\href
{\doibase 10.1073/pnas.1411728112} {\bibfield {journal} {\bibinfo {journal}
{PNAS}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {3275}
(\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alhambra}\ \emph {et~al.}(2016)\citenamefont
{Alhambra}, \citenamefont {Masanes}, \citenamefont {Oppenheim},\ and\
\citenamefont {Perry}}]{alhambra2016fluctuating}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {{\'A}.~M.}\
\bibnamefont {Alhambra}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Masanes}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Oppenheim}},
\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Perry}},\ }\href
{\doibase 10.1103/PhysRevX.6.041017} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages}
{041017} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Masanes}\ and\ \citenamefont
{Oppenheim}(2017)}]{masanes_general_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Masanes}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Oppenheim}},\ }\href {\doibase 10.1038/ncomms14538} {\bibfield {journal}
{\bibinfo {journal} {Nature Comm.}\ }\textbf {\bibinfo {volume} {8}},\
\bibinfo {pages} {14538} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wilming}\ and\ \citenamefont
{Gallego}(2017)}]{wilming2017third}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Wilming}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Gallego}},\ }\href {\doibase 10.1103/PhysRevX.7.041033} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume}
{7}},\ \bibinfo {pages} {041033} (\bibinfo {year} {2017})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Woods}\ \emph {et~al.}(2019)\citenamefont {Woods},
\citenamefont {Ng},\ and\ \citenamefont {Wehner}}]{woods2019maximum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont
{Woods}}, \bibinfo {author} {\bibfnamefont {N.~H.~Y.}\ \bibnamefont {Ng}}, \
and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wehner}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf
{\bibinfo {volume} {3}},\ \bibinfo {pages} {177} (\bibinfo {year}
{2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Halpern}\ \emph {et~al.}(2019)\citenamefont
{Halpern}, \citenamefont {White}, \citenamefont {Gopalakrishnan},\ and\
\citenamefont {Refael}}]{halpern2019quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~Y.}\ \bibnamefont
{Halpern}}, \bibinfo {author} {\bibfnamefont {C.~D.}\ \bibnamefont {White}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gopalakrishnan}}, \ and\
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Refael}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo
{volume} {99}},\ \bibinfo {pages} {024203} (\bibinfo {year}
{2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alhambra}\ and\ \citenamefont
{Wilming}(2020)}]{alhambra_revivals_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {{\'A}.~M.}\
\bibnamefont {Alhambra}}\ and\ \bibinfo {author} {\bibfnamefont
{H.}~\bibnamefont {Wilming}},\ }\href {\doibase 10.1103/PhysRevB.101.205107}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo
{volume} {101}},\ \bibinfo {pages} {205107} (\bibinfo {year}
{2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nielsen}\ and\ \citenamefont
{Chuang}(2010)}]{nielsen_quantum_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Nielsen}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont
{Chuang}},\ }\href {\doibase 10.1017/CBO9780511976667} {\emph {\bibinfo
{title} {Quantum {Computation} and {Quantum} {Information}}}}\ (\bibinfo
{publisher} {Cambridge University Press},\ \bibinfo {year}
{2010})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Konig}\ \emph {et~al.}(2009)\citenamefont {Konig},
\citenamefont {Renner},\ and\ \citenamefont
{Schaffner}}]{konig2009operational}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Konig}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Renner}}, \
and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Schaffner}},\ }\href
{\doibase 10.1109/TIT.2009.2025545} {\bibfield {journal} {\bibinfo
{journal} {IEEE Trans. Inf. Th.}\ }\textbf {\bibinfo {volume} {55}},\
\bibinfo {pages} {4337} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bu}\ \emph {et~al.}(2017)\citenamefont {Bu},
\citenamefont {Singh}, \citenamefont {Fei}, \citenamefont {Pati},\ and\
\citenamefont {Wu}}]{bu2017maximum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Bu}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Singh}}, \bibinfo
{author} {\bibfnamefont {S.-M.}\ \bibnamefont {Fei}}, \bibinfo {author}
{\bibfnamefont {A.~K.}\ \bibnamefont {Pati}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Wu}},\ }\href {\doibase
10.1103/PhysRevLett.119.150405} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {119}},\ \bibinfo {pages}
{150405} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Audenaert}\ and\ \citenamefont
{Scheel}(2008)}]{audenaert_random_2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~M.~R.}\
\bibnamefont {Audenaert}}\ and\ \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Scheel}},\ }\href {\doibase 10.1088/1367-2630/10/2/023011}
{\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo
{volume} {10}},\ \bibinfo {pages} {023011} (\bibinfo {year}
{2008})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Di{\'o}si}\ \emph {et~al.}(2006)\citenamefont
{Di{\'o}si}, \citenamefont {Feldmann},\ and\ \citenamefont
{Kosloff}}]{diosi2006exact}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Di{\'o}si}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Feldmann}},
\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kosloff}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Int. J. Quant.
Inf.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {99} (\bibinfo
{year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Csisz{\'a}r}\ \emph {et~al.}(2007)\citenamefont
{Csisz{\'a}r}, \citenamefont {Hiai},\ and\ \citenamefont
{Petz}}]{csiszar2007limit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Csisz{\'a}r}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Hiai}}, \
and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Petz}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys.}\ }\textbf
{\bibinfo {volume} {48}},\ \bibinfo {pages} {092102} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Scarani}\ \emph {et~al.}(2002)\citenamefont
{Scarani}, \citenamefont {Ziman}, \citenamefont {{\v{S}}telmachovi{\v{c}}},
\citenamefont {Gisin},\ and\ \citenamefont
{Bu{\v{z}}ek}}]{scarani2002thermalizing}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Scarani}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ziman}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{{\v{S}}telmachovi{\v{c}}}}, \bibinfo {author} {\bibfnamefont
{N.}~\bibnamefont {Gisin}}, \ and\ \bibinfo {author} {\bibfnamefont
{V.}~\bibnamefont {Bu{\v{z}}ek}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {88}},\
\bibinfo {pages} {097905} (\bibinfo {year} {2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Linden}\ \emph {et~al.}(2009)\citenamefont {Linden},
\citenamefont {Popescu}, \citenamefont {Short},\ and\ \citenamefont
{Winter}}]{Linden2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Linden}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}},
\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Short}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Winter}},\ }\href
{\doibase 10.1103/PhysRevE.79.061103} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. E}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo
{pages} {061103} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Friesdorf}\ \emph {et~al.}(2015)\citenamefont
{Friesdorf}, \citenamefont {Werner}, \citenamefont {Brown}, \citenamefont
{Scholz},\ and\ \citenamefont {Eisert}}]{friesdorf_many-body_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Friesdorf}}, \bibinfo {author} {\bibfnamefont {A.~H.}\ \bibnamefont
{Werner}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Brown}},
\bibinfo {author} {\bibfnamefont {V.~B.}\ \bibnamefont {Scholz}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href
{\doibase 10.1103/PhysRevLett.114.170505} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {114}},\ \bibinfo
{pages} {170505} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Basko}\ \emph {et~al.}(2006)\citenamefont {Basko},
\citenamefont {Aleiner},\ and\ \citenamefont {Altshuler}}]{BAA}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Basko}}, \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Aleiner}},
\ and\ \bibinfo {author} {\bibfnamefont {B.~L.}\ \bibnamefont {Altshuler}},\
}\href {\doibase 10.1016/j.aop.2005.11.014} {\bibfield {journal} {\bibinfo
{journal} {Ann. Phys.}\ }\textbf {\bibinfo {volume} {321}},\ \bibinfo {pages}
{1126} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Serbyn}\ \emph {et~al.}(2013)\citenamefont {Serbyn},
\citenamefont {Papi{\'c}},\ and\ \citenamefont {Abanin}}]{Serbyn}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Serbyn}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Papi{\'c}}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Abanin}},\
}\href {\doibase 10.1103/PhysRevLett.111.127201} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {111}},\ \bibinfo {pages} {127201} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Huse}\ \emph {et~al.}(2014)\citenamefont {Huse},
\citenamefont {Nandkishore},\ and\ \citenamefont {Oganesyan}}]{Huse}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont
{Huse}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Nandkishore}}, \
and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Oganesyan}},\ }\href
{\doibase 10.1103/PhysRevB.90.174202} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo
{pages} {174202} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Luitz}\ \emph
{et~al.}(2015{\natexlab{b}})\citenamefont {Luitz}, \citenamefont
{Laflorencie},\ and\ \citenamefont {Alet}}]{luitz_many-body_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont
{Luitz}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Laflorencie}},
\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Alet}},\ }\href
{\doibase 10.1103/PhysRevB.91.081103} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo
{pages} {081103} (\bibinfo {year} {2015}{\natexlab{b}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Gray}\ \emph {et~al.}(2018)\citenamefont {Gray},
\citenamefont {Bose},\ and\ \citenamefont {Bayat}}]{gray_many-body_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Gray}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bose}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bayat}},\ }\href
{\doibase 10.1103/PhysRevB.97.201105} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {97}},\ \bibinfo
{pages} {201105} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lostaglio}\ \emph {et~al.}(2018)\citenamefont
{Lostaglio}, \citenamefont {Alhambra},\ and\ \citenamefont
{Perry}}]{lostaglio2018elementary}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Lostaglio}}, \bibinfo {author} {\bibfnamefont {{\'A}.~M.}\ \bibnamefont
{Alhambra}}, \ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Perry}},\ }\href {\doibase 10.22331/q-2018-02-08-52} {\bibfield {journal}
{\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo
{pages} {52} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Trevisan}\ and\ \citenamefont
{Vadhan}(2000)}]{trevisan_extracting_2000}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Trevisan}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Vadhan}},\ }in\ \href {\doibase 10.1109/SFCS.2000.892063} {\emph {\bibinfo
{booktitle} {Proceedings 41st {Annual} {Symposium} on {Foundations} of
{Computer} {Science}}}}\ (\bibinfo {year} {2000})\ pp.\ \bibinfo {pages}
{32--42}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bauer}\ and\ \citenamefont {Nayak}(2013)}]{Bauer}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Bauer}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Nayak}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {J. Stat. Mech.}\
}\textbf {\bibinfo {volume} {P09005}} (\bibinfo {year} {2013})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Eisert}\ \emph {et~al.}(2010)\citenamefont {Eisert},
\citenamefont {Cramer},\ and\ \citenamefont
{Plenio}}]{eisert_colloquium:_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Eisert}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cramer}}, \
and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\
}\href {\doibase 10.1103/RevModPhys.82.277} {\bibfield {journal} {\bibinfo
{journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {82}},\ \bibinfo
{pages} {277} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Goihl}\ and\ \citenamefont
{Sweke}(2019)}]{scicon2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Goihl}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sweke}},\
}\href {https://scientific-conduct.github.io/} {\bibfield {journal}
{\bibinfo {journal} {Scientific CO$_2$nduct raising awareness for the
climate impact of science}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fischer}\ \emph
{et~al.}(2016{\natexlab{b}})\citenamefont {Fischer}, \citenamefont
{Maksymenko},\ and\ \citenamefont {Altman}}]{fischer2016dynamics}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Fischer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Maksymenko}},
\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Altman}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {160401} (\bibinfo
{year} {2016}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Scharlau}\ and\ \citenamefont
{Mueller}(2018)}]{scharlau2018quantum}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Scharlau}}\ and\ \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont
{Mueller}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {54} (\bibinfo
{year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Streltsov}\ \emph {et~al.}(2018)\citenamefont
{Streltsov}, \citenamefont {Kampermann}, \citenamefont {W\"olk},
\citenamefont {Gessner},\ and\ \citenamefont
{Bru{\ss}}}]{streltsov_maximal_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Streltsov}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Kampermann}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {W\"olk}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gessner}}, \ and\
\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Bru{\ss}}},\ }\href
{\doibase 10.1088/1367-2630/aac484} {\bibfield {journal} {\bibinfo
{journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo
{pages} {053058} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Boes}\ \emph {et~al.}(2018)\citenamefont {Boes},
\citenamefont {Wilming}, \citenamefont {Gallego},\ and\ \citenamefont
{Eisert}}]{boes_catalytic_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Boes}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Wilming}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Gallego}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Eisert}},\ }\href
{\doibase 10.1103/PhysRevX.8.041016} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages}
{041016} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \end{thebibliography}
\onecolumngrid
\title{Supplementary Information: Bounding the resources for thermalizing many-body localized systems}
\maketitle
\section*{Supplementary Note 1: Collisional master equation and random unitary channels}
In this section, we consider a large class of dynamical processes involving $n$ subsystems (for example, particles or molecules), where each of the subsystems has a given probability in time of interacting with another one (or with more than one at a time), and the interaction is fully general as long as it conserves the total energy. For simplicity, below we only consider two-body interactions, but extending the setting to $k$-body interactions is straightforward. Any such process can be described by the following master equation,
\begin{equation} \label{eq:stoch_process}
\frac{\partial \, \rho_n(t)}{\partial \, t} = \sum_{i, j} \lambda_{i,j} \left[ U_{i,j} \, \rho_n(t) \, U_{i,j}^{\dagger} - \rho_n(t) \right],
\end{equation}
where $\rho_n(t)$ is the state of the $n$ subsystems at time $t$, and $\lambda_{i,j} > 0$ is the rate at which the interaction $U_{i,j}$ between the $i$-th and $j$-th subsystem occurs. Each interaction is energy preserving, that is, $\left[ U_{i,j}, H^{(n)} \right] = 0$, where $H^{(n)}$ is the Hamiltonian of the global system. In the case in which only two-body interactions are present in the above equation, the total number of different unitaries is $N = \frac{1}{2} n (n-1)$. In the following, we re-label the pair of indices $(i,j)$ with a single index $k$ taking values between $1$ and $N$. At any given time $t > 0$, we show in the following subsections that the solution of the above process has the form
\begin{equation}
\rho_n(t) = \sum_{\mathbf{m}\in\mathbb{N}^N} P_t(\mathbf{m}) \cdot \rho(\mathbf{m}) ,
\end{equation}
where the distribution
\begin{equation}
P_t(\mathbf{m}) = \prod_{k=1}^N p^{(\lambda_k)}_t(m_k)
\end{equation}
is given by the product of $N$ Poisson distributions, each with a different mean value $\lambda_k t$. For a given $\mathbf{m} = \left( m_1 , m_2, \ldots, m_N \right)^T$ such that $m = \sum_{k=1}^N m_k$, the value of $P_t(\mathbf{m})$ is the probability that $m$ interactions occur during the time $t$, of which $m_k$ are described by the $k$-th unitary operator $U_k$. On the other hand, the state $\rho(\mathbf{m})$ is a uniform mixture of different states obtained from the initial state $\rho_n(t=0)$, by considering all different sequences of events giving rise to the same $\mathbf{m}$,
\begin{equation} \label{eq:m_state}
\rho(\mathbf{m}) = \frac{1}{\mathcal{N}_{\mathbf{m}}} \sum_{\pi} U_{\pi}^{\mathbf{m}} \, \rho_n(t=0) \, (U_{\pi}^{\mathbf{m}})^{\dagger},
\end{equation}
where the unitary operator is
\begin{equation} \label{eq:unitary_perm}
U_{\pi}^{\mathbf{m}} = \pi( \underbrace{U_1 \ldots U_1}_{m_1} \underbrace{U_2 \ldots U_2}_{m_2} \ldots \underbrace{U_N \ldots U_N}_{m_N} ),
\end{equation}
and $\pi$ is an element of the symmetric group $S_m$ which produces a different permutation of the $m$ unitaries describing the two-body interactions. Furthermore,
\begin{equation}
\mathcal{N}_{\mathbf{m}} := \binom{m}{m_1,\cdots,m_N}
\end{equation}
is the multinomial coefficient, which is precisely the number of different permutations. Notice that the unitary operator $U_{\pi}^{\mathbf{m}}$ commutes with the total Hamiltonian $H^{(n)}$, since it is obtained from two-body interactions which commute with $H^{(n)}$. In the special case in which the two-body operators $\left\{ U_{k} \right\}_k$ commute with each other, the permutation $\pi$ can be dropped, and the sum over permutations in Eq.~\eqref{eq:m_state} becomes trivial.
\par As a result, the evolution of the system at any finite time $t\geq0$ can be described by a convex mixture of energy-preserving unitary operations; specifically, for the dynamics given by the master equation~\eqref{eq:stoch_process}, the channel which maps the initial state into $\rho_n(t)$ is given by
\begin{equation}
\mathcal{E}(\cdot) = \sum_{\mathbf{m}\in\mathbb{N}^N} \frac{P_t(\mathbf{m})}{\mathcal{N}_{\mathbf{m}}} \sum_{\pi} U_{\pi}^{\mathbf{m}} \, \cdot \, (U_{\pi}^{\mathbf{m}})^{\dagger}.
\end{equation}
We can formally define a class of channels which are associated with this dynamics.
\begin{definition}[Convex mixtures of energy-preserving unitary operations]
\label{def:ruc}
For a given number of subsystems $n \in \mb{N}$ and a global Hamiltonian $H^{(n)}$, we define the class of maps $\mathfrak{E}_n$
as composed of all channels generated by the master equation~\eqref{eq:stoch_process}, for any finite time $t \geq 0$,
and for any choice of rates $\left\{ \lambda_{i,j} \right\}$ and of energy-preserving unitary operations $\left\{ U_{i,j} \right\}$. \end{definition}
This very general set of maps is at the core of the thermalization results we derive in Supplementary Note 2.
\subsection{Solution of a non-commuting Poisson process}
\begin{proposition}[Solution to non-commuting Poisson process]
\label{prop:solution_poisson_process}
Consider a finite-dimensional Hilbert space $\mc{H}$ and the following master equation
\begin{equation}
\label{eq:poisson_process}
\frac{\partial \, \rho(t)}{\partial \, t} = \sum_{k=1}^N \lambda_k \left[ U_k \rho(t) U_k^{\dagger} - \rho(t) \right],
\end{equation}
where $\rho(t) \in \SH{\mc{H}}$ and, for all $k$, the coefficient $\lambda_k > 0$ and $U_k$ is a unitary operator
acting on $\mc{H}$. The solution of this master equation is
\begin{equation}
\label{eq:solution_pp}
\rho(t) = \sum_{m=0}^{\infty} p_t^{(\bar\lambda)}(m) \, \rho(m),
\end{equation}
where $p_t^{(\bar{\lambda})}(m)$ is a Poisson distribution whose mean value is $\bar{\lambda} \, t$, with $\bar{\lambda} = \sum_k \lambda_k$.
The state $\rho(m)$ is obtained from the initial state $\rho(t=0)$ by applying $m$ times the channel $\Lambda$, i.e. $\rho(m) = \Lambda^{(m)} (\rho(t=0))$, where
\begin{equation}
\label{eq:lambda_map}
\Lambda(\cdot) = \sum_{k=1}^N \frac{\lambda_k}{\bar{\lambda}} U_k \, \cdot \, U_k^{\dagger}.
\end{equation} \end{proposition}
\begin{proof}
Let us first rearrange Eq.~\eqref{eq:poisson_process} in such a way that, on the right-hand side, only one operator is acting on
the state $\rho(t)$, that is
\begin{align*}
\frac{\partial \, \rho(t)}{\partial \, t}
&=
\bar{\lambda} \left(\sum_{k=1}^N \frac{\lambda_k}{\bar{\lambda}} U_k \rho(t) U_k^{\dagger} - \rho(t) \right) \\
&=
\bar{\lambda} \big( \Lambda( \rho(t) ) - \rho(t) \big),
\end{align*}
where the map $\Lambda$ is defined in Eq.~\eqref{eq:lambda_map}. To show that Eq.~\eqref{eq:solution_pp} is the
solution of the above differential equation, first notice that the Poisson distribution $p_t(m)$ is the sole time-dependent
object, since $\rho(m) = \Lambda^{(m)}(\rho_0) = \Lambda \circ \ldots \circ \Lambda (\rho_0)$, and $\rho_0 = \rho(t=0)$.
Thus, when taking the time derivative of $\rho(t)$, we can exploit the fact that
\begin{equation*}
\frac{\partial \, p_t(m)}{\partial \, t} = \bar{\lambda} \left[ p_t(m-1) - p_t(m) \right] .
\end{equation*}
which holds for all $m \geq 0$ with the convention $p_t(-1) = 0$. Then, the time derivative of the solution in Eq.~\eqref{eq:solution_pp} is given by
\begin{align*}
\frac{\partial \, \rho(t)}{\partial \, t}
&=
\sum_{m=0}^{\infty} \bar{\lambda} \left[ p_t(m-1) - p_t(m) \right] \Lambda^{(m)}(\rho_0)
=
\bar{\lambda}
\left[
\sum_{m=0}^{\infty} p_t(m) \Lambda^{(m+1)}(\rho_0)
-
\sum_{m=0}^{\infty} p_t(m) \Lambda^{(m)}(\rho_0)
\right] \\
&= \bar{\lambda}
\left[
\Lambda \left( \sum_{m=0}^{\infty} p_t(m) \rho(m) \right)
-
\sum_{m=0}^{\infty} p_t(m) \rho(m)
\right]
=
\bar{\lambda} \big( \Lambda( \rho(t) ) - \rho(t) \big),
\end{align*}
where in the second line we use the fact that $\Lambda$ is linear and continuous. \end{proof}
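As a sanity check of Prop.~\ref{prop:solution_poisson_process} (not part of the original argument), the following self-contained Python sketch compares the truncated Poisson mixture of Eq.~\eqref{eq:solution_pp} with a direct exponentiation of the generator in Eq.~\eqref{eq:poisson_process} for a randomly drawn example. It assumes only \texttt{numpy} and \texttt{scipy}, and all parameter values are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, N = 3, 2                      # Hilbert-space dimension, number of unitaries

def random_unitary(d):
    # QR decomposition of a random complex Gaussian matrix gives a unitary
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

U = [random_unitary(d) for _ in range(N)]
lam = rng.uniform(0.5, 1.5, size=N)          # rates lambda_k > 0
lam_bar = lam.sum()

A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho0 = A @ A.conj().T                        # random initial density matrix
rho0 /= np.trace(rho0)

t = 2.0

# (i) exact solution: exponentiate the Liouvillian superoperator
L = sum(l * (np.kron(u, u.conj()) - np.eye(d * d)) for l, u in zip(lam, U))
rho_exact = (expm(t * L) @ rho0.reshape(-1)).reshape(d, d)

# (ii) Poisson mixture of Eq. (solution_pp), truncated at M applications of Lambda
def Lambda(rho):
    return sum((l / lam_bar) * u @ rho @ u.conj().T for l, u in zip(lam, U))

M = 200
rho_sum = np.zeros((d, d), dtype=complex)
state, weight = rho0, np.exp(-lam_bar * t)   # weight starts at p_t(0)
for m in range(M + 1):
    rho_sum += weight * state
    state = Lambda(state)
    weight *= lam_bar * t / (m + 1)          # p_t(m+1) from p_t(m)

print(np.abs(rho_exact - rho_sum).max())     # agrees up to numerical round-off
\end{verbatim}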
A straightforward corollary of the above proposition concerns the relation between the maps in $\mathfrak{E}_n$ and the class of (energy-preserving) \emph{random unitary channels}~\cite{audenaert_random_2008}, defined as follows.
\begin{definition}[Energy preserving random unitary channels]
For a given number of subsystems $n \in \mb{N}$ and a global Hamiltonian $H^{(n)}$, we define the class of energy-preserving
random unitary channels $\mathfrak{R}_n$ as composed of all maps of the form
\begin{equation}
\mc{E}( \cdot ) = \sum_k p_k \, U_k \cdot U_k^{\dagger},
\end{equation}
where $\left\{ p_k \right\}_k$ is a probability distribution, and each unitary operator $U_k$ preserves the energy,
that is, $\left[ U_k , H^{(n)} \right] = 0$. \end{definition}
The corollary of Prop.~\ref{prop:solution_poisson_process} is then the following.
\begin{corollary}[$\mathfrak{E}_n$ as a subset of the energy-preserving random unitary channels]
\label{cor:E_sub_R}
For a given number of subsystems $n \in \mb{N}$ and a global Hamiltonian $H^{(n)}$, the set of maps $\mathfrak{E}_n$ is
a subset of the class of energy-preserving random unitary channels $\mathfrak{R}_n$. \end{corollary}
\subsection{Solution of several independent Poisson processes}
Notice that the solution of Eq.~\eqref{eq:poisson_process} can be modified so that the single Poisson distribution in Eq.~\eqref{eq:solution_pp} is replaced by a product of $N$ Poisson distributions, each of them governing the number of times a specific two-body interaction has been applied to the initial state up to time $t$. To show this, let us first introduce some useful notation. For the $k$-th two-body process, let $\lambda_k$ denote the corresponding rate, and collect the rates in the tuple $\vec\lambda = (\lambda_1,\cdots,\lambda_N)$. Consider the state $\rho(m) = \Lambda^{(m)}(\rho_0)$, which is the state after a total of $m$ such two-body interactions have occurred. Let $\mathcal{M}_m = \lbrace \mathbf{m}\in\mathbb{N}^N \, | \, \sum_k m_k = m \rbrace$ denote the set of $N$-dimensional tuples consisting of non-negative integers, such that the sum of all elements equals $m$. We can explicitly re-write $\rho(m)$ as
\begin{align}\label{eq:rho_m}
\rho(m) = \Lambda^{(m)}(\rho_0) = \sum_{\mathbf{m}\in\mathcal{M}_m} \mathcal{N}_{\mathbf{m}}\cdot C_{\vec\lambda}\cdot \rho(\mathbf{m}),
\end{align}
where $\mathcal{N}_{\mathbf{m}} := \binom{m}{m_1, \ldots, m_N}$ is the multinomial coefficient,
\begin{align}
C_{\vec\lambda} := \prod_{k=1}^N \left( \frac{\lambda_k}{\bar{\lambda}} \right)^{m_k}
\end{align}
is the product of the corresponding weights, and the state $\rho(\mathbf{m})$ is a mixture over all possible combinations of $m$ unitaries, where each unitary $U_k$ appears $m_k$ times,
\begin{equation}
\rho(\mathbf{m}) = \frac{1}{\mathcal{N}_{\mathbf{m}}} \sum_{\pi} U_{\pi}^{\mathbf{m}} \, \rho_0 \, (U_{\pi}^{\mathbf{m}})^{\dagger},
\end{equation}
where the unitary $U_{\pi}^{\mathbf{m}}$ has been defined in Eq.~\eqref{eq:unitary_perm}.
\par Let us now consider the corresponding Poisson distribution $p_t^{(\bar\lambda)}(m)$, with mean value $\bar\lambda t$. We want to see how this relates to the $N$ independent Poisson processes with mean values $\lambda_k t$. Noting that $\sum_k m_k = m$ and $\sum_k \lambda_k = \bar\lambda$, we can re-write this distribution as follows,
\begin{align}\label{eq:joint_prob_dist}
p_t^{(\bar\lambda)} (m) &= \left( \bar{\lambda} t \right)^m \frac{e^{- \bar{\lambda} t}}{m!} = \frac{1}{m!} \prod_{k=1}^N \left( \bar{\lambda} t \right)^{m_k} e^{- \lambda_k \, t} = \frac{1}{\mathcal{N}_{\mathbf{m}}}\cdot \prod_{k=1}^N \frac{\left( \bar{\lambda} t \right)^{m_k} e^{- \lambda_k \, t}}{m_k !} \nonumber\\ &= \frac{1}{\mathcal{N}_{\mathbf{m}}} \cdot \prod_{k=1}^N \left[ \left( \frac{\bar{\lambda}}{\lambda_k} \right)^{m_k} p^{(\lambda_k)}_t(m_k)\right] \nonumber\\ &= \frac{1}{\mathcal{N}_{\mathbf{m}}}\cdot \frac{1}{C_{\vec\lambda}}\cdot \prod_{k=1}^N p^{(\lambda_k)}_t(m_k),
\end{align}
by noting that each $p^{(\lambda_k)}_t(m_k)$ is a Poisson distribution with mean value $\lambda_k \, t$. If we now substitute Eq.~\eqref{eq:rho_m} and Eq.~\eqref{eq:joint_prob_dist} into Eq.~\eqref{eq:solution_pp}, we see that the coefficients $\mathcal{N}_{\mathbf{m}}$ and $C_{\vec\lambda}$ cancel out, and therefore, for a given time $t > 0$,
\begin{align}
\rho(t) &= \sum_{m=0}^{\infty} \sum_{\mathbf{m}\in\mathcal{M}_m} \left[ \prod_k p^{(\lambda_k)}_t(m_k) \right] \rho(\mathbf{m}) = \sum_{\mathbf{m}\in\mathbb{N}^N} \left[ \prod_k p^{(\lambda_k)}_t(m_k) \right] \rho(\mathbf{m}).
\end{align}
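The combinatorial identity in Eq.~\eqref{eq:joint_prob_dist} can also be checked directly. The short Python sketch below (an illustration, not part of the derivation) evaluates both sides for randomly drawn rates and tuples $\mathbf{m}$; the particular parameter values are arbitrary.
\begin{verbatim}
import numpy as np
from math import comb, factorial, exp

rng = np.random.default_rng(1)
N, t = 4, 0.7
lam = rng.uniform(0.2, 2.0, size=N)
lam_bar = lam.sum()

def poisson(mean, m):
    # Poisson weight with the given mean, evaluated at the integer m
    return mean ** m * exp(-mean) / factorial(m)

def multinomial(ms):
    # multinomial coefficient m!/(m_1! ... m_N!) as a product of binomials
    out, rest = 1, sum(ms)
    for mk in ms:
        out *= comb(rest, mk)
        rest -= mk
    return out

for _ in range(5):
    m_vec = [int(x) for x in rng.integers(0, 6, size=N)]
    m = sum(m_vec)
    C = np.prod([(lam[k] / lam_bar) ** m_vec[k] for k in range(N)])
    lhs = poisson(lam_bar * t, m) * multinomial(m_vec) * C
    rhs = np.prod([poisson(lam[k] * t, m_vec[k]) for k in range(N)])
    print(m_vec, lhs, rhs)   # the last two columns agree up to rounding
\end{verbatim}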
\section*{Supplementary Note 2: Upper and lower bounds on the size of the thermal environment}
In this section we prove the main results presented in the main text, namely the upper and lower bounds to the size of the external thermal bath required to thermalize a region of a many-body system. These bounds depend on the entropic quantity known as \emph{max-relative entropy}~\cite{datta_min-_2009}, defined for two quantum states $\rho, \sigma \in \SH{\mc{H}}$ such that $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$ as \begin{equation}
\Dmax{}\left( \rho \| \sigma \right) = \inf \left\{ \lambda \in \mb{R} \ : \ \rho \leq 2^{\lambda} \sigma \right\}. \end{equation} The smooth max-relative entropy between the same two quantum states, for $\epsilon > 0$, is defined as \begin{equation} \label{eq_supp:smooth_max_rel_ent}
\Dmax{\epsilon}(\rho \| \sigma) = \inf_{\tilde{\rho} \in B_{\epsilon}(\rho)} \Dmax{}\left( \tilde{\rho} \| \sigma \right), \end{equation} where $B_{\epsilon}(\rho)$ is the ball of radius $\epsilon$ around the state $\rho$ with respect to the distance induced by the trace norm. The main technical tool we use to derive our bounds is a result from quantum information theory known as the \emph{convex split lemma}, first introduced and proved in Ref.~\cite[Lemma~2.1 in Supp.~Mat.]{anshu_quantum_2017}.
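When $\sigma$ has full rank, the max-relative entropy can be evaluated as the logarithm of the largest eigenvalue of $\sigma^{-1/2}\rho\,\sigma^{-1/2}$. The following minimal Python sketch (illustrative only; the Hamiltonian, inverse temperature, and state are arbitrary choices) computes $\Dmax{}$ in this way.
\begin{verbatim}
import numpy as np

def dmax(rho, sigma):
    """Max-relative entropy D_max(rho||sigma) in bits, for full-rank sigma.

    D_max is the smallest lambda with rho <= 2^lambda sigma, i.e. the log of
    the largest eigenvalue of sigma^{-1/2} rho sigma^{-1/2}."""
    vals, vecs = np.linalg.eigh(sigma)
    s_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return np.log2(np.linalg.eigvalsh(s_inv_sqrt @ rho @ s_inv_sqrt).max())

# example: a qubit state compared with a thermal state tau_beta(H_R)
beta, E = 1.0, 1.5                            # hypothetical parameters
tau = np.diag(np.exp(-beta * np.array([0.0, E])))
tau /= np.trace(tau)
omega = np.array([[0.2, 0.1], [0.1, 0.8]])    # some state of the region R
print(dmax(omega, tau))
\end{verbatim}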
\begin{lemma}[Convex split lemma]
\label{lem:csl}
Consider a finite-dimensional Hilbert space $\mc{H}$ and two states $\rho, \sigma \in \SH{\mc{H}}$ such that
$\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$. Then the state $\rho^{(n)} \in \SH{\mc{H}^{\otimes n}}$ defined as
\begin{equation}
\label{eq:csl_state}
\rho^{(n)}
=
\frac{1}{n} \sum_{m=1}^n
\sigma^{\otimes m-1} \otimes \rho \otimes \sigma^{\otimes n-m},
\end{equation}
is such that its trace distance to the $n$-copy i.i.d.\ state $\sigma^{\otimes n}$ is upper-bounded as
\begin{equation}
\norm{\rho^{(n)} - \sigma^{\otimes n}}_1^2 \leq {\frac{2^{\Dmax{}\left( \rho \| \sigma \right)}}{n}}.
\end{equation} \end{lemma}
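As an illustration of Lemma~\ref{lem:csl} (not needed for the proofs), the Python sketch below builds the state of Eq.~\eqref{eq:csl_state} for a pair of qubit states and compares the squared trace norm of the difference with the stated bound; the particular states are arbitrary.
\begin{verbatim}
import numpy as np
from functools import reduce

def dmax(rho, sigma):
    # D_max(rho||sigma) in bits, for full-rank sigma
    vals, vecs = np.linalg.eigh(sigma)
    s = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return np.log2(np.linalg.eigvalsh(s @ rho @ s).max())

def trace_norm(A):
    # trace norm of a Hermitian matrix
    return np.abs(np.linalg.eigvalsh(A)).sum()

kron = lambda ops: reduce(np.kron, ops)

rho = np.array([[0.7, 0.1], [0.1, 0.3]])    # some qubit state rho
sigma = np.eye(2) / 2                       # maximally mixed reference state
k = 2 ** dmax(rho, sigma)

for n in range(1, 7):
    # convex-split state rho^(n): rho at position m, sigma everywhere else
    rho_n = sum(kron([sigma] * m + [rho] + [sigma] * (n - 1 - m))
                for m in range(n)) / n
    lhs = trace_norm(rho_n - kron([sigma] * n)) ** 2
    print(n, lhs, k / n)     # the left column stays below the right column
\end{verbatim}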
In the following, we consider a specific channel which, when acting on the $n$-subsystem state $\rho \otimes \sigma^{\otimes n-1}$, is able to produce the state $\rho^{(n)}$ given in Eq.~\eqref{eq:csl_state} of the above lemma. This channel belongs to the set of random unitary channels acting on $n$ subsystems, and is defined as
\begin{equation} \label{eq:channel_csl}
\bar{\mc{E}}_n(\cdot) = \frac{1}{n} \sum_{i=1}^n U^{(1,i)}_{\text{swap}} \, \cdot \, U^{(1,i) \, \dagger}_{\text{swap}},
\end{equation}
where $U^{(i,j)}_{\text{swap}}$ denotes the unitary swap between the $i$-th and the $j$-th subsystems.
\par We now specialise the setting to the one considered in the main text. We consider a region $R$ described by the Hilbert space $\mc{H}_R$, with Hamiltonian $H_R$ and state $\omega_R$, and an external bath $B$ composed of $n-1$ subsystems at inverse temperature $\beta$. The Hilbert space of the bath is $\mc{H}_B = \mc{H}_R^{\otimes n-1}$, with Hamiltonian $H_B = \sum_{i=1}^{n-1} H_R^{(i)}$, where the operator $H_R^{(i)}$ only acts non-trivially on the $i$-th subsystem of the bath. The state of the bath is thermal, and is thus given by $\tau_{\beta}(H_B) = \tau_{\beta}(H_R)^{\otimes n-1}$. We are interested in the process of thermalization of the region $R$ by means of collisional models described by master equations of the form given in Eq.~\eqref{eq:stoch_process}. For this specific setting, we say that a channel $\mc{E} \in \mathfrak{E}_n$ is able to $\epsilon$-thermalize the region $R$ if
\begin{equation} \label{eq:eps_therm_specific}
\norm{\mc{E} \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1} \right) - \tau_{\beta}(H_R)^{\otimes n}}_1 \leq \epsilon,
\end{equation}
that is, if the output of the channel is close, in trace distance, to the joint thermal state of the region and the bath.
\par The quantity we seek to bound is $n_\epsilon$, that is, the minimum number of subsystems needed to $\epsilon$-thermalize the region $R$, when the global dynamics is produced by a master equation of the form given in Eq.~\eqref{eq:stoch_process},
\begin{equation} \label{eq:opt_n_epsilon}
n_\epsilon := \min \left\{ n \in \mb{N} \, | \, \exists \, \mc{E} \in \mathfrak{E}_n \, : \, \norm{\mc{E} \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1} \right) - \tau_{\beta}(H_R)^{\otimes n}}_1 \leq \epsilon \right\}.
\end{equation}
It is worth noting that, for the current setting, the channel $\bar{\mc{E}}_n$ defined in Eq.~\eqref{eq:channel_csl} belongs to the class of maps $\mathfrak{E}_n$. Indeed, this channel can be obtained from a master equation describing stochastic pairwise collisions among the $n$ subsystems (the region $R$ and the bath subsystems), where each collision is described by a swap operator,
\begin{equation} \label{eq:master_eq_for_swap}
\frac{\partial \, \rho_{RB}(t)}{\partial \, t} = \sum_{i<j}^n \frac{1}{\tau} \left( U^{(i,j)}_{\text{swap}} \, \rho_{RB}(t) \, U^{(i,j) \, \dagger}_{\text{swap}} - \rho_{RB}(t) \right).
\end{equation}
For simplicity, in the above equation the collision rate is the same for all pairs of subsystems; however, the steady state, and therefore the resulting channel associated with it, does not depend on the specifics of these rates (since we consider the infinite-time limit). Using the result of Prop.~\ref{prop:solution_poisson_process}, it is easy to show that, for an initial state $\rho_{RB}(t=0) = \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1}$, the state at time $t$ is
\begin{equation}
\rho_{RB}(t) = e^{- n \frac{t}{\tau}} \, \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1} + \left( 1 - e^{- n \frac{t}{\tau}} \right) \rho_{RB}^{(n)},
\end{equation}
where $\rho_{RB}^{(n)} = \frac{1}{n} \sum_{m=1}^n \tau_{\beta}(H_R)^{\otimes m-1} \otimes \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-m}$ is the state in Eq.~\eqref{eq:csl_state}. Thus, we see that, under this stochastic collision model, the system approaches its steady state exponentially fast, at a rate $n/\tau$ set by both the collision rate $\tau^{-1}$ and the number of subsystems composing region and bath. Finally, notice that, since the Hamiltonian of each subsystem is the same, each swap operator trivially conserves the energy.
\par We are now able, with the help of Lemma~\ref{lem:csl}, to derive an upper bound on the quantity $n_{\epsilon}$. The upper bound is obtained by providing an explicit protocol able to $\epsilon$-thermalize the system, and by computing the number of subsystems $n$ needed for it.
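The statements in the previous paragraph can be checked numerically for a small example. The following Python sketch (illustrative only; the values of $n$, $\beta$, the gap, the collision rate, and $\omega_R$ are arbitrary) verifies that the channel of Eq.~\eqref{eq:channel_csl} maps $\omega_R \otimes \tau_\beta(H_R)^{\otimes n-1}$ to the convex-split state of Eq.~\eqref{eq:csl_state}, and that the solution of Eq.~\eqref{eq:master_eq_for_swap} interpolates exponentially between the initial state and that steady state.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from functools import reduce

def swap_unitary(n, i, j, d=2):
    # permutation matrix exchanging subsystems i and j of n d-level systems
    D = d ** n
    U = np.zeros((D, D))
    for col in range(D):
        digits = list(np.unravel_index(col, (d,) * n))
        digits[i], digits[j] = digits[j], digits[i]
        U[np.ravel_multi_index(tuple(digits), (d,) * n), col] = 1.0
    return U

kron = lambda ops: reduce(np.kron, ops)

n = 3                                          # region + 2 bath subsystems
beta, E, tau_c = 1.0, 1.0, 1.0                 # hypothetical parameters
tau_R = np.diag(np.exp(-beta * np.array([0.0, E])))
tau_R /= np.trace(tau_R)
omega_R = np.array([[0.3, 0.2], [0.2, 0.7]])   # some non-thermal region state

rho0 = kron([omega_R] + [tau_R] * (n - 1))

# the channel of Eq. (channel_csl): swaps between subsystem 1 and subsystem i
swaps_1i = [swap_unitary(n, 0, i) for i in range(n)]   # i = 0 is the identity
out = sum(U @ rho0 @ U.T for U in swaps_1i) / n

# convex-split state of Eq. (csl_state)
rho_cs = sum(kron([tau_R] * m + [omega_R] + [tau_R] * (n - 1 - m))
             for m in range(n)) / n
print(np.abs(out - rho_cs).max())              # agrees up to round-off

# time evolution under the all-pairs swap master equation (master_eq_for_swap)
D = 2 ** n
L = np.zeros((D * D, D * D))
for i in range(n):
    for j in range(i + 1, n):
        U = swap_unitary(n, i, j)
        L += (np.kron(U, U) - np.eye(D * D)) / tau_c
t = 0.8
rho_t = (expm(t * L) @ rho0.reshape(-1)).reshape(D, D)
p = np.exp(-n * t / tau_c)
print(np.abs(rho_t - (p * rho0 + (1 - p) * rho_cs)).max())  # ~ round-off
\end{verbatim}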
\begin{theorem} [Upper bound on $n_{\epsilon}$]\label{thm_supp:upper}
For a given Hamiltonian $H_R$, inverse temperature $\beta$, and a constant $\epsilon > 0$, we have that
\begin{align}
n_\epsilon \leq \frac{1}{\epsilon^2} \, 2^{\Dmax{}\left( \omega_R \| \tau_{\beta}(H_R) \right)}.
\end{align} \end{theorem}
\begin{proof}
Let us consider the action of the channel $\bar{\mc{E}}_n$, defined in Eq.~\eqref{eq:channel_csl}, on the initial state
of the global system (region and bath) $\omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1}$. It is easy to show that
the final state of this channel is given by
\begin{equation}
\bar{\mc{E}}_n \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1} \right)
=
\frac{1}{n} \sum_{m=1}^n
\tau_{\beta}(H_R)^{\otimes m-1} \otimes \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n-m},
\end{equation}
which takes the same form of the state in the convex split lemma, see Eq.~\eqref{eq:csl_state}. Then, it directly
follows from Lemma~\ref{lem:csl} that
\begin{equation}
\norm{\bar{\mc{E}}_n \left( \omega_R \otimes \tau_{\beta}(H_R)^{\otimes n - 1} \right)
-
\tau_{\beta}(H_R)^{\otimes n}}_1^2 \leq {\frac{2^{\Dmax{}\left( \omega_R \| \tau_{\beta}(H_R) \right)}}{n}}.
\end{equation}
For the above trace distance to be at most $\epsilon$, that is, for the region to $\epsilon$-thermalize,
it suffices to take a number of subsystems
\begin{equation}
n \geq \frac{1}{\epsilon^2} 2^{\Dmax{}\left( \omega_R \| \tau_{\beta}(H_R) \right)},
\end{equation}
which closes the proof. \end{proof}
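To give a feeling for the numbers involved, the short Python sketch below evaluates the sufficient bath size of Theorem~\ref{thm_supp:upper} for a single-qubit region; the Hamiltonian, inverse temperature, region state, and $\epsilon$ are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def dmax(rho, sigma):
    # D_max(rho||sigma) in bits, for full-rank sigma
    vals, vecs = np.linalg.eigh(sigma)
    s = vecs @ np.diag(vals ** -0.5) @ vecs.conj().T
    return np.log2(np.linalg.eigvalsh(s @ rho @ s).max())

beta, E, eps = 1.0, 2.0, 0.05                     # hypothetical parameters
tau = np.diag(np.exp(-beta * np.array([0.0, E])))
tau /= np.trace(tau)
omega = np.diag([0.05, 0.95])                     # region state far from thermal
n_sufficient = 2 ** dmax(omega, tau) / eps ** 2
print(int(np.ceil(n_sufficient)))  # this many subsystems suffice by the theorem
\end{verbatim}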
\par Let us now derive a lower bound to the quantity $n_{\epsilon}$. To do so, we first need to introduce two lemmata; the first one is just a slight modification of Ref.~\cite[Fact~4 in Supp.~Mat.]{anshu_quantifying_2018}, where we replace the quantum fidelity with the trace distance.
\begin{lemma}[Trace distance bound]
\label{lem:cq_trace_dist}
Consider two quantum states $\rho_A, \sigma_A \in \SH{\mc{H}_A}$, and let $\rho_{AB} \in \SH{\mc{H}_A \otimes \mc{H}_B}$
be a classical-quantum state such that $\rho_A = \Trp{B}{\rho_{AB}}$. Then, there exists a classical-quantum state
$\sigma_{AB} \in \SH{\mc{H}_A \otimes \mc{H}_B}$ such that $\sigma_A = \Trp{B}{\sigma_{AB}}$ and
\begin{equation}
\norm{\rho_{AB} - \sigma_{AB}}_1 \leq 2 \, \norm{\rho_A - \sigma_A}_1^{\frac{1}{2}}.
\end{equation}
Furthermore, $\mathrm{supp} ( \sigma_B ) \subseteq \mathrm{supp}( \rho_B )$. \end{lemma}
\begin{proof}
Under the hypotheses of this lemma, it was shown in Ref.~\cite[Fact~4 in Supp.~Mat.]{anshu_quantifying_2018} that
\begin{equation}
\label{eq:fid_equality}
F(\rho_{AB},\sigma_{AB}) = F(\rho_A,\sigma_A) ,
\end{equation}
where $F(\rho,\sigma) = \norm{\sqrt{\rho} \sqrt{\sigma}}_1$ is the quantum fidelity between $\rho$ and $\sigma$.
It is known that the trace distance between two states is linked to the quantum fidelity by the following chain of inequalities,
\begin{equation}
\label{eq:trace_dist_fid}
1 - F(\rho,\sigma) \leq \frac{1}{2} \norm{\rho - \sigma}_1 \leq \sqrt{1 - F(\rho,\sigma)^2}.
\end{equation}
Therefore, we have that
\begin{align}
\norm{\rho_{AB} - \sigma_{AB}}_1^2
\leq
4 \left( 1 - F(\rho_{AB}, \sigma_{AB})^2 \right)
=
4 \left( 1 - F(\rho_A, \sigma_A)^2 \right)
\leq
4 \norm{\rho_A - \sigma_A}_1
\end{align}
where the first inequality follows from the rhs of Eq.~\eqref{eq:trace_dist_fid}, the equality from
Eq.~\eqref{eq:fid_equality}, and the second inequality follows from the lhs of Eq.~\eqref{eq:trace_dist_fid} together with the bound $1 + F(\rho_A,\sigma_A) \leq 2$. \end{proof}
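The chain of inequalities in Eq.~\eqref{eq:trace_dist_fid} (the Fuchs--van de Graaf inequalities) can also be checked numerically. The Python sketch below does so for a few randomly drawn pairs of states and is included only as an illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def rand_state(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def psd_sqrt(rho):
    vals, vecs = np.linalg.eigh(rho)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

def fidelity(rho, sigma):          # F = || sqrt(rho) sqrt(sigma) ||_1
    return np.linalg.svd(psd_sqrt(rho) @ psd_sqrt(sigma), compute_uv=False).sum()

def trace_dist(rho, sigma):        # (1/2) || rho - sigma ||_1
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

for _ in range(5):
    rho, sigma = rand_state(3), rand_state(3)
    F, T = fidelity(rho, sigma), trace_dist(rho, sigma)
    print(1 - F <= T <= np.sqrt(1 - F ** 2))   # True: Eq. (trace_dist_fid)
\end{verbatim}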
We now recall and prove another result used in Ref.~\cite[Fact~6 in Supp.~Mat.]{anshu_quantifying_2018} that we use to lower bound the quantity $n_{\epsilon}$ in the next theorem.
\begin{lemma}[\cite{anshu_quantifying_2018}]
\label{lem:op_ineq_cq}
Consider a classical-quantum state $\rho_{AB} \in \SH{\mc{H}_A \otimes \mc{H}_B}$, where $B$ is the classical part.
Let $\Pi_B \in \BH{\mc{H}_B}$ be the projector onto the support of $\rho_B = \Trp{A}{\rho_{AB}}$. Then the
following operator inequality holds,
\begin{equation}
\rho_{AB} \leq \rho_A \otimes \Pi_B.
\end{equation} \end{lemma}
\begin{proof}
Since $\rho_{AB}$ is a classical-quantum state, there exists a probability distribution $\{ p_i \}_{i=1}^d$,
where $d$ is the dimension of the support of the classical part, and a set of states $\{ \rho_A^{(i)} \}_{i=1}^d$
in $\SH{\mc{H}_A}$ such that $\rho_{AB} = \sum_{i=1}^d p_i \, \rho_A^{(i)} \otimes \ket{i}\bra{i}_B$. The reduced state
on the quantum part of the system is $\rho_A = \sum_{i=1}^d p_i \, \rho_A^{(i)}$, and consequently we have that
$\rho_A \otimes \Pi_B = \sum_{i,j = 1}^d p_i \, \rho_A^{(i)} \otimes \ket{j}\bra{j}_B$. Then, it is easy to show that
the operator
\begin{equation}
\rho_A \otimes \Pi_B - \rho_{AB} = \sum_{i \neq j}^d p_i \, \rho_A^{(i)} \otimes \ket{j}\bra{j}_B,
\end{equation}
is positive semi-definite, since it is a sum of positive semi-definite operators with non-negative coefficients. \end{proof}
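A quick numerical illustration of Lemma~\ref{lem:op_ineq_cq} (not required for the argument) is given by the Python sketch below, which builds a random classical-quantum state and checks that $\rho_A \otimes \Pi_B - \rho_{AB}$ has no negative eigenvalues; the dimensions are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
dA, dB = 3, 4                          # quantum part A, classical part B

def rand_state(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = X @ X.conj().T
    return rho / np.trace(rho)

p = rng.dirichlet(np.ones(dB))         # distribution over the classical register
states = [rand_state(dA) for _ in range(dB)]
proj = [np.diag(np.eye(dB)[i]) for i in range(dB)]   # |i><i|_B

rho_AB = sum(p[i] * np.kron(states[i], proj[i]) for i in range(dB))
rho_A = sum(p[i] * states[i] for i in range(dB))
Pi_B = np.eye(dB)                      # all p_i > 0, so rho_B has full support

gap = np.kron(rho_A, Pi_B) - rho_AB
print(np.linalg.eigvalsh(gap).min())   # >= 0 up to machine precision
\end{verbatim}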
We are now in a position to derive a lower bound for the quantity $n_{\epsilon}$ defined in Eq.~\eqref{eq:opt_n_epsilon}. Our proof is inspired by the one used in Ref.~\cite[Sec.~3.2 in Supp.~Mat.]{anshu_quantifying_2018} to derive a converse to the convex split lemma.
\begin{theorem}[Lower bound on $n_{\epsilon}$]
\label{thm:lower}
For a given $\beta$ and $\epsilon > 0$, and a Hamiltonian $H_R$ satisfying the energy subspace condition
(see Def.~\ref{def:ESC}), we have
\begin{equation}
n_\epsilon \geq 2^{\Dmax{2 \sqrt{\epsilon}+\delta}\left( \omega_R \| \tau_{\beta}(H_R) \right)},
\end{equation}
where $\delta = \norm{\Delta(\omega_R) - \omega_R}_1$ quantifies the distance between the state of the region $\omega_R$
and its decohered version $\Delta(\omega_R)$. \end{theorem}
\begin{proof}
For the sake of simplicity, in the following we refer to the initial state as $\rho_{RB} = \omega_R \otimes
\tau_{\beta}(H_R)^{\otimes n_{\epsilon} - 1}$, and to the target state as $\tau_{RB} = \tau_{\beta}(H_R)^{
\otimes n_{\epsilon}}$. For a fixed parameter $\epsilon > 0$, let $\hat{\mc{E}} \in \mathfrak{E}_{n_{\epsilon}}$
be the (not necessarily unique) channel such that
\begin{equation}
\norm{ \hat{\mc{E}} \left( \rho_{RB} \right) - \tau_{RB} }_1
\leq \epsilon.
\end{equation}
Let us now introduce the channel $\Delta$, that decoheres the system in the energy eigenbasis of the total Hamiltonian $H = \sum_{i=1}^{n_{\epsilon}} H_R^{(i)}$. In the proof of Prop.~\ref{prop:csl_semi-optimality}, we show that the action of this channel commutes with that of any channel in $\mathfrak{E}_{n_{\epsilon}}$. Thus, using monotonicity of the trace distance under CPTP maps, we have that
\begin{equation}
\norm{\hat{\mc{E}} \circ \Delta \left( \rho_{RB} \right) - \tau_{RB} }_1
=
\norm{\Delta \circ \hat{\mc{E}} \left( \rho_{RB} \right) - \Delta ( \tau_{RB} ) }_1
\leq
\norm{ \hat{\mc{E}} \left( \rho_{RB} \right) - \tau_{RB} }_1
\leq \epsilon,
\end{equation}
where we additionally used the fact that $\tau_{RB}$ is diagonal in the energy eigenbasis. In Prop.~\ref{prop:csl_optimality} we show that, when the Hamiltonian $H_R$ satisfies the ESC and the input and target states are diagonal, the optimal thermalization is achieved via the channel $\bar{\mc{E}}_{n_{\epsilon}}$ of Eq.~\eqref{eq:channel_csl}. Thus, it holds that
\begin{equation}
\norm{\bar{\mc{E}}_{n_{\epsilon}} \circ \Delta \left( \rho_{RB} \right) - \tau_{RB} }_1
\leq
\epsilon.
\end{equation}
We now make use of the above bound on the trace distance, and of the specific form of the channel
$\bar{\mc{E}}_{n_{\epsilon}}$ to derive a lower bound on the quantity $n_{\epsilon}$. Let us first recall
that the channel
\begin{equation}
\bar{\mc{E}}_{n_{\epsilon}}(\cdot) = \sum_{i=1}^{n_{\epsilon}} \frac{1}{n_{\epsilon}} \,
U^{(1,i)}_{\text{swap}} \, \cdot \, U^{(1,i) \, \dagger}_{\text{swap}},
\end{equation}
where the unitary operator
$U^{(1,i)}_{\text{swap}} \in \BH{\mc{H}_R \otimes \mc{H}_B}$ swaps the state of the first subsystem (the
region $R$) with that of the $i$-th subsystem (belonging to the bath $B$). We can dilate this map by
introducing the following unitary operation,
\begin{equation}
V_{RBA} = \sum_{i=1}^{n_{\epsilon}} \, U^{(1,i)}_{\text{swap}} \otimes \ket{i}\bra{i}_A,
\end{equation}
which acts over the region, the bath, and an ancillary system $A$ of dimension $n_{\epsilon}$. Then,
for an ancillary system described by the state $\rho_A = \sum_{i=1}^{n_{\epsilon}} {n_{\epsilon}}^{-1}
\, \ket{i}\bra{i}_A$, we can define the global state $\tilde{\rho}_{RBA} = V_{RBA}
\left( \Delta(\rho_{RB}) \otimes \rho_A \right) V_{RBA}^{\dagger}$. This is a classical-quantum state,
and when the ancillary subsystem $A$ is traced out it coincides with $\bar{\mc{E}}_{n_{\epsilon}} \circ \Delta \left( \rho_{RB} \right)$.
\par
From Lemma~\ref{lem:cq_trace_dist} it follows that there exists a classical-quantum extension of the target state
$\tau_{RB}$, which we refer to as $\tau_{RBA}$ (where $A$ is the classical part of the state with dimension $n_{\epsilon}$),
such that $\norm{\tilde{\rho}_{RBA} - \tau_{RBA}}_1 \leq 2 \, \sqrt{\epsilon}$. Furthermore, since $\tau_{RBA}$ is
classical-quantum, we have that the operator inequality $\tau_{RBA} \leq \tau_{RB} \otimes \mb{I}_A$ holds, see
Lemma~\ref{lem:op_ineq_cq}. Using this operator inequality, the fact that $\rho_A = {\mb{I}_A}/{n_{\epsilon}}$,
and the definition of the max-relative entropy it follows that $\Dmax{}(\tau_{RBA} \| \tau_{RB} \otimes \rho_A)
\leq \log n_{\epsilon}$. By monotonicity of this measure with respect to CPTP maps, we have that
\begin{equation}
\label{eq:max_rel_as_lower_bound}
\Dmax{}\left( \Trp{BA}{ V_{RBA}^{\dagger} \, \tau_{RBA} \, V_{RBA}} \, \middle\| \,
\Trp{BA}{ V_{RBA}^{\dagger} \left( \tau_{RB} \otimes \rho_A \right) V_{RBA}}
\right) \leq \log n_{\epsilon}.
\end{equation}
Let us consider the first argument of the above max-relative entropy. Using the triangle inequality, we can map the problem to the decohered case,
\begin{align*}
\norm{ \Trp{BA}{ V_{RBA}^{\dagger} \, \tau_{RBA} \, V_{RBA}} - \omega_R }_1
&=
\norm{ \Trp{BA}{ V_{RBA}^{\dagger} \, \tau_{RBA} \, V_{RBA}} - \Delta(\omega_R) + \Delta(\omega_R) - \omega_R }_1 \\
&\leq
\norm{ \Trp{BA}{ V_{RBA}^{\dagger} \, \tau_{RBA} \, V_{RBA}} - \Delta(\omega_R)}_1
+
\norm{ \Delta(\omega_R) - \omega_R }_1.
\end{align*}
The first term of the above sum can be further simplified,
\begin{align*}
\norm{ \Trp{BA}{ V_{RBA}^{\dagger} \, \tau_{RBA} \, V_{RBA}} - \Delta(\omega_R)}_1
&\leq
\norm{ V_{RBA}^{\dagger} \, \tau_{RBA} \, V_{RBA} - \Delta(\rho_{RB}) \otimes \rho_A }_1 \\
&=
\norm{ \tau_{RBA} - V_{RBA} \left( \Delta(\rho_{RB}) \otimes \rho_A \right) V_{RBA}^{\dagger} }_1 \\
&=
\norm{ \tau_{RBA} - \tilde{\rho}_{RBA}}_1 \leq 2 \, \sqrt{\epsilon},
\end{align*}
where in the first inequality we use the monotonicity of the trace distance under partial trace, and the fact that $\Delta(\rho_{RB}) = \Delta(\omega_R) \otimes \tau_{\beta}(H_R)^{\otimes n_{\epsilon} - 1}$ since $\tau_{\beta}(H_R)$ has no coherence in the energy eigenbasis of $H_R$. The first equality follows from the unitary invariance of the trace distance, and the last inequality follows from how we have defined $\tau_{RBA}$. Thus, the initial state of the region $\omega_R$ is within a ball of radius $2 \, \sqrt{\epsilon} + \delta$ from the state in the first argument of the max-relative entropy in Eq.~\eqref{eq:max_rel_as_lower_bound}, where $\delta = \norm{ \Delta(\omega_R) - \omega_R }_1$.
\par
The state in the second argument of the max-relative entropy can instead be explicitly computed,
\begin{equation*}
\Trp{BA}{ V_{RBA}^{\dagger} \left( \tau_{RB} \otimes \rho_A \right) V_{RBA}}
=
\Trp{B}{ \bar{\mc{E}}_{n_{\epsilon}} \left( \tau_{RB} \right) }
=
\Trp{B}{ \tau_{RB} } = \tau_{\beta}(H_R),
\end{equation*}
where the first equality follows from the definition of $V_{RBA}$, while the second one from the fact
that $\tau_{RB} = \tau_{\beta}(H_R)^{\otimes n_{\epsilon}}$ is invariant under permutation. As a
result, we can replace Eq.~\eqref{eq:max_rel_as_lower_bound} with the following one,
\begin{equation}
\Dmax{2 \, \sqrt{\epsilon} + \delta}\left( \omega_R \, \| \, \tau_{\beta}(H_R) \right) \leq \log n_{\epsilon},
\end{equation}
which concludes the proof. \end{proof}
\section{Supplementary Note 3: Discussion on choice of bath used in model} Given the model studied in this work, a question arises whether the choice of bath is a suitable one, since this directly relates to the meaningfulness of the lower and upper bounds on bath size, which are derived in this work. It is worthwhile to note that, in the scientific literature that addresses thermalization in many-body systems (hence in particular MBL systems), mostly long-time limits of master equations are considered~\cite{fischer2016dynamics}. Such a setting would correspond to a bath of infinite size and no memory. On the other hand, other works on MBL thermalization use a specific finite bath~\cite{luitz_how_2017,Goihl19} which is modeled as part of the chain, with regions that have low disorder. These works so far feature only thermalization with respect to local observables, which are a much more lenient measure of thermalization and do not fully capture the non-local, non-thermal aspects of the system.
From the resource-theoretic point of view, not much has been said so far about the required bath sizes for arbitrary Hamiltonians and state transitions~\cite{scharlau2018quantum}. In all resource-theoretic settings to date, the final state of the bath is relatively unimportant, since it is always discarded, and the bath acts simply as a heat source used to thermalize systems. However, discarding the bath is not a suitable consideration in the current setting, as we mention in the main text, since full thermalization of the system can always be achieved with a bath composed of a single copy of the system in a thermal state. Clearly the resource-theoretic setting is still useful in general, since it does not simply characterize transformations achieving full thermalization, but also studies state transitions that allow work extraction (usually modeled as a transition between two non-equilibrium, non-thermal states).
A question of concern is what bath structure is minimally necessary to thermalize a system, given globally energy-preserving interactions. The populations of each energy level of the system need to be altered, which means that the bath must contain energy gaps that are present in the system. We mention in the main text two reasons for choosing this particular bath, namely tractability of the problem and its relevance in some experiments. However, one also observes that the level structure of our bath is such that it contains all features of the system Hamiltonian, and no additional/unnecessary features (such as energy gaps that are not present in the system), which gives good reason to think that our bath should not be unnecessarily large.
Nevertheless, one can of course also consider other bath models. One conceivable alternative is a qubit bath, namely a bath consisting of a collection of qubits whose energy gaps correspond to gaps of the system. A second example would be a collection of bosonic modes with frequencies corresponding to each energy gap present in the system~\cite{lostaglio2018elementary}; however, the dimension of such a bath would be infinite to begin with, and would automatically satisfy the lower bound derived in our work. The number of different required frequencies, however, might be an interesting research question for future work, should such bath models be of particular interest. Both of the above examples suffer from a particular disadvantage: since energy eigenstates of many-body systems are generally very non-local, the operations required to thermalize system+bath via energy-preserving operations would also be highly non-local and thus convoluted.
\section{Supplementary Note 4: Optimality of the stochastic swapping collision model} In this section we show that the stochastic collision model introduced in Eq.~\eqref{eq:channel_csl} allows us, under some assumptions on the system’s Hamiltonian and initial state, to obtain the optimal thermalization for a given number of subsystems $n \in \mb{N}$. Specifically, we show that within the class of energy-preserving random unitary channels acting on $n$ subsystems, the map $\bar{\mc{E}}_n$ provides the minimum value of $\epsilon$ in Eq.~\eqref{eq:eps_therm_specific}, when the state $\omega_R$ is diagonal in the energy eigenbasis, Prop.~\ref{prop:csl_optimality}. Additionally, we are able to bound the performance of the swapping collision model in the situation in which the state $\omega_R$ has coherence in the energy eigenbasis, Prop.~\ref{prop:csl_semi-optimality}, and we show that this channel is able to efficiently thermalize the system when the state has low coherence. In order to prove the above statements, we need to introduce the following lemmata. The first lemma concerns the power of energy-preserving random unitary channels in modifying the spectrum of a quantum state.
\begin{lemma}[Power of energy-preserving random unitary channels in modifying the spectrum of a quantum state]
\label{lem:preserve_weights}
Consider a Hilbert space $\mc{H}$, a Hamiltonian $H = \sum_E E \, \Pi_E$ where $\left\{ \Pi_E \right\}_E$ is the set of projectors
onto the energy subspaces, and a state $\rho \in \SH{\mc{H}}$. Given any channel of the form $\mathcal{E}(\cdot) = \sum_k p_k \, U_k \,
\cdot \, U_k^{\dagger}$, where $\left\{ p_k \right\}_k$ is a probability distribution and $\left\{ U_k \right\}_k$ is a set of energy
preserving unitaries $\left[ U_k , H \right] = 0$, we have that
\begin{equation}
\mathrm{Tr} \left[ \mathcal{E}(\rho) \, \Pi_E \right] = \mathrm{Tr} \left[ \rho \, \Pi_E \right] \quad \forall \, E.
\end{equation} \end{lemma}
\begin{proof}
Due to the fact that each unitary $U_k$ commutes with the Hamiltonian $H$, we have that $U_k^{\dagger} \, \Pi_E \, U_k = \Pi_E$
for all $k$ and $E$. Then,
\begin{align*}
\mathrm{Tr} \left[ \mathcal{E}(\rho) \, \Pi_E \right] &=
\sum_k p_k \mathrm{Tr} \left[ U_k \, \rho \, U_k^{\dagger} \, \Pi_E \right]
= \sum_k p_k \mathrm{Tr} \left[ \rho \, U_k^{\dagger} \, \Pi_E \, U_k \right]
= \sum_k p_k \mathrm{Tr} \left[ \rho \, \Pi_E \right] = \mathrm{Tr} \left[ \rho \, \Pi_E \right].
\end{align*} \end{proof}
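As an illustration only (our own numerical check, with a Hamiltonian and a random seed chosen arbitrarily), the following Python sketch builds a random mixture of block-diagonal (hence energy-preserving) unitaries and confirms that the subspace weights $\mathrm{Tr}[\rho\,\Pi_E]$ are unchanged.
\begin{verbatim}
# Numerical check: an energy-preserving random unitary channel leaves the
# weight Tr[rho Pi_E] of every energy subspace invariant.
import numpy as np

rng = np.random.default_rng(1)
energies = np.array([0.0, 0.0, 1.0, 1.0, 1.0])   # H = diag(energies), two subspaces
d = len(energies)

def random_state(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = G @ G.conj().T
    return M / np.trace(M).real

def random_energy_preserving_unitary():
    """Block-diagonal unitary in the energy eigenbasis, so that [U, H] = 0."""
    U = np.zeros((d, d), dtype=complex)
    for E in np.unique(energies):
        idx = np.where(energies == E)[0]
        G = rng.normal(size=(len(idx), len(idx))) + 1j * rng.normal(size=(len(idx), len(idx)))
        Q, _ = np.linalg.qr(G)
        U[np.ix_(idx, idx)] = Q
    return U

rho = random_state(d)
probs = rng.dirichlet(np.ones(4))
Us = [random_energy_preserving_unitary() for _ in range(4)]
out = sum(pk * U @ rho @ U.conj().T for pk, U in zip(probs, Us))

for E in np.unique(energies):
    Pi = np.diag((energies == E).astype(float))
    print(E, np.trace(rho @ Pi).real, np.trace(out @ Pi).real)   # the two weights agree
\end{verbatim}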
In the next lemma we consider a family of quantum states with fixed weights in different subspaces, and we explicitly construct a state in this family which minimizes the distance to a given state outside the family.
\begin{lemma}[Fixed weights subspaces]
\label{lem:opt_state}
Consider a Hilbert space $\mc{H}$ and a set of orthogonal projectors $\{\Pi_i\}_i$ on $\mc{H}$ such that $\sum_i \Pi_i =
\mb{I}$. Given a set of probabilities $\{p_i\}_i$, let $S = \left\{ \rho \in \SH{\mc{H}} \ | \ \mathrm{Tr} \left[\rho \, \Pi_i \right] = p_i \ \forall \, i \right\}$.
Furthermore, assume that a state $\sigma \in \mc S(\mc{H})$ has the form $ \sigma = \sum_i q_i \sigma_i $, where each
$ \Pi_i\sigma_i\Pi_i = \sigma_i $ and $ \mathrm{Tr}(\sigma_i) =1 $. Then, the state $ \bar{\rho} = \sum_i p_i \sigma_i$ minimizes
the trace distance to $\sigma$ over $S$, that is,
\begin{align}
\bar{\rho} \in \displaystyle\mathrm{argmin}_{\rho \in S} \norm{\rho - \sigma}_1.
\end{align} \end{lemma}
\begin{proof}
In order to find the optimal state in the family $S$, we introduce the following CPTP map, which describes a quantum
instrument, $\mathcal{E}( \cdot ) = \sum_i \mathrm{Tr} \left[ \, \cdot \, \Pi_i \right] \, \sigma_i$. It is easy to see that $\sigma$ is
left invariant by the above map, and that for any $\rho \in S$, $\mathcal{E}( \rho ) = \sum_i p_i \, \sigma_i$. Using the
monotonicity of the trace distance under CPTP maps, we find that for any $\rho \in S$,
\begin{equation*}
\left\| \rho - \sigma \right\|_1 \geq
\left\| \mathcal{E}( \rho) - \mathcal{E}( \sigma) \right\|_1 =
\left\| \sum_i p_i\, \sigma_i - \sigma \right\|_1.
\end{equation*} \end{proof}
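The minimality statement can also be probed numerically. The sketch below (a rough check of our own, with subspace sizes, weights and the sampling scheme chosen arbitrarily) samples states with the prescribed subspace weights and confirms that none of them is closer to $\sigma$ than $\bar{\rho}$.
\begin{verbatim}
# Numerical check: among states with prescribed subspace weights {p_i}, the
# state rho_bar = sum_i p_i sigma_i is closest in trace distance to
# sigma = sum_i q_i sigma_i (with sigma_i supported on the i-th subspace).
import numpy as np

rng = np.random.default_rng(2)
blocks = [2, 3, 2]                                # sizes of the orthogonal subspaces
d = sum(blocks)
slices, s = [], 0
for b in blocks:
    slices.append(slice(s, s + b)); s += b

def random_state(n):
    G = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    M = G @ G.conj().T
    return M / np.trace(M).real

def trace_norm(X):                                # ||X||_1 for Hermitian X
    return np.sum(np.abs(np.linalg.eigvalsh(X)))

def embed(i, M):                                  # place M on the i-th diagonal block
    out = np.zeros((d, d), dtype=complex)
    out[slices[i], slices[i]] = M
    return out

p = rng.dirichlet(np.ones(len(blocks)))           # prescribed weights of the family S
q = rng.dirichlet(np.ones(len(blocks)))
sigmas = [random_state(b) for b in blocks]

sigma = sum(q[i] * embed(i, S) for i, S in enumerate(sigmas))
rho_bar = sum(p[i] * embed(i, S) for i, S in enumerate(sigmas))
best = trace_norm(rho_bar - sigma)

for _ in range(200):                              # random members of S: A rho0 A with
    rho0 = random_state(d)                        # A = sum_i sqrt(p_i / w_i) Pi_i
    w = [np.trace(rho0[sl, sl]).real for sl in slices]
    A = np.zeros((d, d))
    for i, sl in enumerate(slices):
        A[sl, sl] = np.sqrt(p[i] / w[i]) * np.eye(blocks[i])
    rho = A @ rho0 @ A
    assert trace_norm(rho - sigma) >= best - 1e-9
print("rho_bar attains the minimal trace distance among the sampled states")
\end{verbatim}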
The next lemma we prove requires the system Hamiltonian to satisfy the following condition,
\begin{definition}[Energy subspace condition]
\label{def:ESC}
Given a Hamiltonian $H$, we say that it fulfills the ESC iff for any $n \in \mb{N}$, given the set of energy levels
$\left\{E_k \right\}_{k=1}^d$ of the Hamiltonian $H$, we have that for any two distinct vectors $m \neq m' \in \left(\mb{N}\cup\{0\}\right)^d $ with
the same total occupation, namely $\sum_k m_k = \sum_k m'_k = n $,
\begin{equation}
\label{item_supp:opt_cond}
\sum_k m_k E_k \neq \sum_k m'_k E_k .
\end{equation} \end{definition}
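Since the ESC involves all occupation vectors with a fixed total, it can be tested by direct enumeration for small systems and moderate $n$. The following Python sketch (our own helper; the energy levels and the cut-off $n_{\max}$ are arbitrary choices, and exact integer or rational energies should be used to avoid floating-point ties) checks the condition up to a given $n_{\max}$.
\begin{verbatim}
# Brute-force check of the energy subspace condition (ESC) up to n_max: no two
# distinct occupation vectors m != m' with sum_k m_k = sum_k m'_k = n may give
# the same total energy sum_k m_k E_k.
from itertools import combinations_with_replacement
from collections import Counter

def satisfies_esc(energies, n_max):
    for n in range(1, n_max + 1):
        seen = {}                                  # total energy -> occupation vector
        for combo in combinations_with_replacement(range(len(energies)), n):
            m = tuple(sorted(Counter(combo).items()))
            total = sum(energies[k] for k in combo)
            if total in seen and seen[total] != m:
                return False                       # two types share an energy subspace
            seen[total] = m
    return True

# Degenerate gaps (E2 - E1 = E4 - E3) violate the ESC already at n = 2:
print(satisfies_esc([0, 1, 5, 6], 2))              # False
# Rapidly growing levels pass the check up to the tested n:
print(satisfies_esc([0, 1, 10, 100], 3))           # True
\end{verbatim}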
The following lemma concerns the state $\rho^{(n)}$ introduced in Eq.~\eqref{eq:csl_state}. This is the central object of the convex split lemma, as well as of our Prop.~\ref{prop:csl_optimality}. We show that, under the above assumption on the Hamiltonian of the system, this state is uniformly distributed over each energy subspace.
\begin{lemma}[Uniform distribution over each energy subspace]
\label{lem:csl_uniform}
Consider a Hilbert space $\mc{H}$ of dimension $d$, a Hamiltonian $H^{(1)}$, and two states $\rho, \sigma \in \SH{\mc{H}}$.
For any $n \in \mb{N}$, consider the state $\rho^{(n)} \in \SH{\mc{H}^{\otimes n}}$ defined in Eq.~\eqref{eq:csl_state}. This state
is uniformly mixed over each energy subspace of the total Hamiltonian $H^{(n)} = \sum_{i=1}^n H^{(1)}_i$ if
\begin{enumerate}
\item $ H^{(1)} $ satisfies the ESC condition, and
\item \label{item:states} The states $\rho, \sigma$ are diagonal in the eigenbasis of $H^{(1)}$.
\end{enumerate} \end{lemma}
\begin{proof}
Let the eigenbasis of $H^{(1)}$ be $\left\{ \ket{i} \right\}_{i=1}^d$, and consider the basis of $\mc{H}^{\otimes{n}}$
given by
\begin{equation}\label{eq:totaleigenbasis}
\left\{\ket{\mathbf{i}} = \ket{i_1} \otimes \ldots \otimes \ket{i_n} \right\}_{i_1, \ldots, i_n = 1}^d,
\end{equation}
where $\mathbf{i}^T = (i_1, \dots, i_n)$. It is easy to see that the set of vectors in Eq.~\eqref{eq:totaleigenbasis}
form an eigenbasis of the total Hamiltonian $H^{(n)}$. Let us now introduce a way of partitioning the vectors
$ \mathbf{i} $, namely by characterizing them w.~r.~t.~the number of elements in $ \mathbf{i} $ corresponding to
each distinct energy eigenvalue on $ H^{(1)} $. For a simple example when $ H^{(1)} $ is a qubit, then this scheme
characterizes each basis vector $ |\mathbf{i}\rangle $ by the Hamming weight of $ \mathbf{i} $. Given a tuple
\begin{equation}
\label{eq:def_tuple_type}
\mathbf{n} := \left( n_1, n_2 , \ldots, n_d \right),
\end{equation}
let $I_{\mathbf{n}}$ denote the set of vectors $\mathbf{i}$ such that if $ \mathbf{i}\in I_{\mathbf{n}} $, then
$ \mathbf{i} $ contains $ n_i $ elements that are equal to $ i $. Furthermore, let $\Pi_{\mathbf{n}} =
\sum_{\mathbf{i} \in I_{\mathbf{n}}} \proj{\mathbf{i}}$ denote the projector onto this subspace.
\par
Note that basis vectors $ |\mathbf{i}\rangle,|\mathbf{i'}\rangle $, where $ \mathbf{i},\mathbf{i'}\in I_{\mathbf{n}}$
correspond to the same tuple ${\mathbf{n}}$, have the same energy. On the other hand, the ESC guarantees that if $ \mathbf{i},\mathbf{i'} $ are not in the same set $I_{\mathbf{n}}$, then they belong to different
energy subspaces. In other words, the set of $ \lbrace \Pi_\mathbf{n}\rbrace $ coincides with the set of energy
subspaces of the total Hamiltonian $H^{(n)}$. We now want to show that $\Pi_\mathbf{n} \rho^{(n)} \Pi_\mathbf{n}
\propto \Pi_{\mathbf{n}}$, namely that $ \rho^{(n)} $ is uniform in a fixed energy subspace. Since both $\rho$ and $\sigma$ are diagonal in the eigenbasis of $H^{(1)}$, the state $\rho^{(n)}$ is diagonal in the basis of Eq.~\eqref{eq:totaleigenbasis}, so it suffices to calculate the overlap
$ \bra{\mathbf{i}} \rho^{(n)}\ket{\bf i} $ and show that it does not depend on the particular basis vector $ \ket{\bf i} $,
but only on the tuple $ {\bf n} $ such that $ {\bf i} \in I_{\bf n} $.
\par
Given a particular $ \ket{\bf i} $ corresponding to the tuple $ {\bf n} $, notice that for each $ i = 1,\dots, d $, the
single-system state $\ket{i}$ appears in exactly $n_i$ of the $n$ subsystems; we denote the indices
of these subsystems by $\{ j^{(i)}_{\ell} \}_{\ell=1}^{n_i}$. Let us first observe that for all $ i=1,\dots,d $
and $ \ell = 1,\dots,n_i $, the following overlap holds,
\begin{equation}
\label{eq:overlap_elem_mixture}
\bra{\bf i} \sigma_1 \otimes \ldots \otimes \rho_{j^{(i)}_{\ell}} \otimes \ldots \otimes \sigma_n \ket{\bf i}
=
p_i \, Q_i,
\end{equation}
where $p_i = \bra{i} \rho \ket{i}$ and $Q_i = q_i^{n_i - 1} \prod_{j \neq i}^d q_j^{n_j}$, with $q_j = \bra{j} \sigma \ket{j}$.
Here, one can already observe that the particular location, characterized by $ \ell $, does not affect the overlap;
Eq.~\eqref{eq:overlap_elem_mixture} is fully characterized by $ i $. The state $ \rho^{(n)} $
can be rewritten as
\begin{align}\label{eq:tau_n}
\rho^{(n)} &= \frac{1}{n} \sum_{j=1}^n \sigma_1\otimes\cdots\otimes\rho_j\otimes\cdots\otimes\sigma_n
=\frac{1}{n} \sum_{i=1}^d\sum_{\ell =1}^{n_i} \sigma_1 \otimes \ldots \otimes \rho_{j^{(i)}_{\ell}} \otimes \ldots \otimes \sigma_n.
\end{align}
Using Eq.~\eqref{eq:overlap_elem_mixture} and \eqref{eq:tau_n} together, we find that \begin{align*}
\bra{\bf i} \rho^{(n)} \ket{\bf i}
&= \sum_{i=1}^d \sum_{\ell=1}^{n_i} \frac{1}{n} \,
\bra{\bf i} \sigma_1 \otimes \ldots \otimes \rho_{j^{(i)}_{\ell}} \otimes \ldots \otimes \sigma_n \ket{\bf i}
= \sum_{i=1}^d \frac{n_i}{n} p_i \, Q_i.
\end{align*}
Note that $ \lbrace p_i\rbrace $ and $ \lbrace Q_i\rbrace $ are fully determined by $ \rho $, $ \sigma $, $ H^{(1)} $ and
$ n $, which are fixed from the beginning. Therefore, for all ${\bf i} \in I_{\bf n}$, the overlap $ \bra{\bf i}\rho^{(n)}\ket{\bf i} $
depends only on $ \bf n $, which concludes the proof. \end{proof}
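For a concrete (and easily reproduced) instance, the Python sketch below takes a qubit Hamiltonian $H^{(1)}=\mathrm{diag}(0,1)$, which satisfies the ESC, together with two diagonal states of our own choosing, builds $\rho^{(3)}$ explicitly and prints its diagonal entries grouped by Hamming weight; entries within the same weight sector coincide, in agreement with the lemma.
\begin{verbatim}
# Numerical check of the lemma for a qubit: rho^(n) is uniform on each energy
# subspace (here: Hamming-weight sector) of the total Hamiltonian.
import numpy as np
from itertools import product

n = 3
rho = np.diag([0.9, 0.1])        # diagonal input state (our choice)
sigma = np.diag([0.7, 0.3])      # diagonal reference ("thermal") state (our choice)

def tensor(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

rho_n = sum(tensor([rho if j == i else sigma for j in range(n)])
            for i in range(n)) / n

diag = np.diag(rho_n)
for basis in product(range(2), repeat=n):
    idx = int("".join(map(str, basis)), 2)
    print(basis, "weight", sum(basis), "entry", round(float(diag[idx]), 6))
# Entries sharing the same Hamming weight are equal, i.e. rho^(n) is uniform there.
\end{verbatim}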
We are now able to prove the optimality of the stochastic collision model $\bar{\mc{E}}_n$, within the class of channels $\mathfrak{E}_n$, for the thermalization of a system in contact with an external bath.
\begin{proposition}[Optimality of the stochastic collision model for diagonal states]
\label{prop:csl_optimality}
Given $ n \in\mb{N}$, let $H^{(1)}$ be a Hamiltonian satisfying the ESC as defined in Def.~\ref{def:ESC}. Then, for any two states $\rho,
\sigma \in \SH{\mc{H}}$ diagonal in the energy eigenbasis, the channel $\bar{\mc{E}}_n$ of Eq.~\eqref{eq:channel_csl}
allows us to achieve the optimal thermalization, that is
\begin{align}
\bar{\mc{E}}_n \in \mathrm{argmin}_{\mathcal{E} \in \mathfrak{E}_n}
\norm{\mathcal{E}(\rho \otimes \sigma^{\otimes n-1})- \sigma^{\otimes n}}_1.
\end{align} \end{proposition}
\begin{proof}
Let us first notice that Cor.~\ref{cor:E_sub_R} tells us that the family of channels $\mathfrak{E}_n$ is a subset of the class of
energy-preserving random unitary channels. In Lem.~\ref{lem:preserve_weights}, we have shown that this latter class of channels
cannot modify the weight associated with each energy subspace in the initial state $\rho \otimes \sigma^{\otimes n-1}$. Thus,
the channels in $\mathfrak{E}_n$ can only modify the form of the distribution in each of these subspaces. Furthermore, in
Lem.~\ref{lem:opt_state} we showed that the state minimizing the trace distance to the target state $\sigma^{\otimes n}$ is the
one with the same distribution over each energy subspace and with no coherence in the energy eigenbasis (since the target state is diagonal).
It is easy to see that $\sigma^{\otimes n}$ has a uniform distribution over each energy subspace. Thus, a channel $\mathcal{E} \in \mathfrak{E}_n$
minimizes the trace distance $\norm{\mathcal{E}(\rho \otimes \sigma^{\otimes n-1})- \sigma^{\otimes n}}_1$ if its output
state is uniform over the energy subspaces. But in Lem.~\ref{lem:csl_uniform} we showed that, if the Hamiltonian $H^{(1)}$
satisfies ESC, then the state $\rho^{(n)} = \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1})$ is uniform over
the energy subspaces, which concludes the proof. \end{proof}
It is worth noting that the above proposition applies to diagonal states only. In the context of MBL systems, the thermal state $\tau_{\beta}(H_R)$ is (by construction) diagonal in the energy eigenbasis, but the same need not apply to the reduced state $\omega_R$ of the infinite-time average. For this reason, we introduce the following proposition, characterizing the limitations of the collisional model of Eq.~\eqref{eq:channel_csl} when the initial state of the region has coherence in the energy eigenbasis.
\begin{proposition}[Thermalization bound for the stochastic collision model]
\label{prop:csl_semi-optimality}
Given $n \in\mb{N}$, let $H^{(1)}$ be a Hamiltonian satisfying the ESC as defined in Def.~\ref{def:ESC}. Then, for any two states $\rho,
\sigma \in \SH{\mc{H}}$, where $\sigma$ is diagonal in the energy eigenbasis, the following bound on the thermalization achievable
via the channel $\bar{\mc{E}}_n$ of Eq.~\eqref{eq:channel_csl} holds,
\begin{align}
\label{eq:semi-opt_bound}
\norm{\bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1})- \sigma^{\otimes n}}_1
\leq
\norm{\mc{E}_{\mathrm{opt}}(\rho \otimes \sigma^{\otimes n-1})- \sigma^{\otimes n}}_1
+
\norm{\rho - \Delta(\rho)}_1,
\end{align}
where $\mc{E}_{\mathrm{opt}} \in \mathfrak{E}_n$ is the channel achieving the optimal thermalization, and $\Delta(\rho)$ is the
decohered version of the state $\rho$. \end{proposition}
\begin{proof}
As a first step, let us notice that the decohering channel $\Delta_n$, which removes coherence in the energy eigenbasis, is defined as $\Delta_n(\cdot) = \sum_E \Pi_E \, \cdot \, \Pi_E$, where $\Pi_E$ is the eigenprojector of the total Hamiltonian $H = \sum_{i=1}^n H^{(1)}_i$ associated with the energy $E$. Since the set of channels we are optimizing over is a subset of energy-preserving random unitary channels, see Cor.~\ref{cor:E_sub_R}, it is easy to show that the action of $\Delta_n$ commutes with that of any channel $\mc{E} \in \mathfrak{E}_n$,
\begin{align}
\label{eq:comm_relation_decoh}
\Delta \circ \mc{E}
&=
\sum_E \Pi_E \, \left( \sum_k p_k \, U_k \, \cdot \, U_k^{\dagger} \right) \, \Pi_E
=
\sum_{E,k} p_k \left( \Pi_E U_k \right) \, \cdot \, \left( \Pi_E U_k \right)^{\dagger}
=
\sum_{E,k} p_k \left( U_k \Pi_E \right) \, \cdot \, \left( U_k \Pi_E \right)^{\dagger} \nonumber \\
&=
\sum_k p_k \, U_k \left( \sum_E \Pi_E \, \cdot \, \Pi_E \right) U_k^{\dagger}
=
\mc{E} \circ \Delta,
\end{align}
where we used $\mc{E}(\cdot) = \sum_k p_k \, U_k \, \cdot \, U_k^{\dagger}$, and the third equality follows from the fact that $[U_k,H] = 0$ for all $k$. Furthermore, due to the form of the global Hamiltonian $H$ we have that, when $\sigma$ is diagonal in the energy eigenbasis, $\Delta_n(\rho \otimes \sigma^{\otimes n-1}) = \Delta_1(\rho) \otimes \sigma^{\otimes n-1}$, where $\Delta_1$ decoheres with respect to the energy eigenbasis of $H^{(1)}$. For simplicity, in the following we suppress the subscript from the map $\Delta_n$, since the number of subsystems the map acts on should be clear from its argument.
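The commutation relation in Eq.~\eqref{eq:comm_relation_decoh} is also easy to confirm numerically; in the Python sketch below (an illustration of ours, with an arbitrary Hamiltonian, seed and number of Kraus terms) the pinching map and a random energy-preserving random unitary channel are applied in both orders to a random state.
\begin{verbatim}
# Numerical check: the pinching (decohering) channel Delta commutes with any
# random unitary channel whose unitaries commute with H.
import numpy as np

rng = np.random.default_rng(3)
energies = np.array([0.0, 1.0, 1.0, 2.0])        # H = diag(energies)
d = len(energies)

def pinch(X):                                    # Delta(X) = sum_E Pi_E X Pi_E
    out = np.zeros_like(X)
    for E in np.unique(energies):
        P = np.diag((energies == E).astype(float))
        out += P @ X @ P
    return out

def block_unitary():                             # unitary with [U, H] = 0
    U = np.zeros((d, d), dtype=complex)
    for E in np.unique(energies):
        idx = np.where(energies == E)[0]
        G = rng.normal(size=(len(idx), len(idx))) + 1j * rng.normal(size=(len(idx), len(idx)))
        Q, _ = np.linalg.qr(G)
        U[np.ix_(idx, idx)] = Q
    return U

ps = rng.dirichlet(np.ones(3))
Us = [block_unitary() for _ in range(3)]
channel = lambda X: sum(p * U @ X @ U.conj().T for p, U in zip(ps, Us))

G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T; rho /= np.trace(rho).real

print(np.allclose(pinch(channel(rho)), channel(pinch(rho))))   # True
\end{verbatim}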
\par
Let us now consider the trace distance between the output of the channel $\bar{\mc{E}}_n$ and the target state,
\begin{align}
\label{eq:out_coh_bound}
\norm{\bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1
&=
\norm{\bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \Delta \circ \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1})
+ \Delta \circ \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1 \nonumber \\
&\leq
\norm{\bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \Delta \circ \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1})}_1
+
\norm{\Delta \circ \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1,
\end{align}
where we have used the triangle inequality. Let us consider the first term of the last line above,
\begin{align}
\label{eq:coh_dist_bound}
\norm{\bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \Delta \circ \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1})}_1
&=
\norm{\bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \bar{\mc{E}}_n(\Delta(\rho) \otimes \sigma^{\otimes n-1})}_1
\leq
\norm{\rho \otimes \sigma^{\otimes n-1} - \Delta(\rho) \otimes \sigma^{\otimes n-1}}_1 \nonumber \\
&=
\norm{\rho - \Delta(\rho)}_1
\end{align}
where the first equality follows from Eq.~\eqref{eq:comm_relation_decoh}, the inequality from the monotonicity of the
trace distance with respect to CPTP maps, and the last equality from the fact that $\sigma$ is a normalized state.
The second term of Eq.~\eqref{eq:out_coh_bound} can instead be bounded as follows,
\begin{align}
\label{eq:opt_bound_coh}
\norm{\Delta \circ \bar{\mc{E}}_n(\rho \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1
&=
\norm{\bar{\mc{E}}_n(\Delta(\rho) \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1
\leq
\norm{\mc{E}_{\mathrm{opt}}(\Delta(\rho) \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1 \nonumber \\
&=
\norm{\Delta \circ \mc{E}_{\mathrm{opt}}(\rho \otimes \sigma^{\otimes n-1}) - \Delta(\sigma^{\otimes n})}_1
\leq
\norm{\mc{E}_{\mathrm{opt}}(\rho \otimes \sigma^{\otimes n-1}) - \sigma^{\otimes n}}_1
\end{align}
where, again, the first equality follows from the fact that the actions of $\Delta$ and $\bar{\mc{E}}_n$ commute with
each other. The first inequality follows from Prop.~\ref{prop:csl_optimality}, where we have shown that $\bar{\mc{E}}_n$
is the optimal channel when the input and target states are diagonal; the channel $\mc{E}_{\mathrm{opt}} \in \mathfrak{E}_n$
is instead the optimal channel when the input state $\rho$ is not decohered. The second equality follows from
Eq.~\eqref{eq:comm_relation_decoh} and the fact that $\sigma$ is diagonal. The final inequality follows from the
monotonicity of the trace distance. Combining Eqs.~\eqref{eq:coh_dist_bound} and \eqref{eq:opt_bound_coh} into
Eq.~\eqref{eq:out_coh_bound} proves the proposition. \end{proof}
The above result provides information on the thermalization power of the channel $\bar{\mc{E}}_n$ for states with coherence in the energy eigenbasis. For systems in the MBL phase, one expects almost all the eigenstates of the Hamiltonian to be close to product states~\cite{friesdorf_many-body_2015}, and hence $\omega_R$ should have relatively small and strongly decaying off-diagonal terms in the eigenbasis of $H_R$. To support this statement, we numerically compute the coherence in the energy eigenbasis contained in the reduced state $\omega_R$ for the disordered Heisenberg chain we studied in the main text, see Supplementary Figure~\ref{fig:coh_reduced}. As a result, the above proposition tells us that the channel $\bar{\mc{E}}_n$ is able to efficiently thermalize an MBL system, provided that the ESC is satisfied.
\begin{figure}
\caption{Coherence in the energy eigenbasis of the reduced state $\omega_R$ for the disordered Heisenberg chain studied in the main text.}
\label{fig:coh_reduced}
\end{figure}
\par The other assumption in Props.~\ref{prop:csl_optimality} and \ref{prop:csl_semi-optimality} involves the energy gaps of the region Hamiltonian $H_R$. Due to the noise affecting the Hamiltonian of the spin chain, it seems a reasonable assumption to have non-degenerate energy gaps which could, at least up to a given $n \in \mb{N}$, satisfy this condition. A more detailed discussion of this property is given in the main text, and in the following we clarify the limitations of the swapping collision model studied in this section.
\begin{remark}[Role of trivial Hamiltonians]\label{rem:trivH}
For the case of trivial Hamiltonians, which clearly violate the ESC, the target thermal state is simply
the maximally mixed state. Since we allow for the use of random unitary channels, thermalization in this case can already
be achieved without further bath copies. Another way to put it is that the usage of bath copies and randomness coincide
fully. The results in Ref.~\cite{boes_catalytic_2018} imply that only $ \log d $ bits of randomness are needed
to perform this task, where $d = \dim \mc{H}$. \end{remark}
\subsection{Limitation of the stochastic swapping collision model} We have seen that the channel $\bar{\mc{E}}_n$ can be proven to be optimal only under certain conditions on the Hamiltonian. When such conditions are dropped, we in fact know of cases where this channel is non-optimal (see Remark~\ref{rem:trivH} for example), in the sense that there exists a much more efficient protocol using a smaller number of bath copies. The case in Remark~\ref{rem:trivH} is somewhat less interesting since considering trivial Hamiltonians reduces the bath to a pure randomness resource, without any heat considerations whatsoever. In this section, we provide a counter-example using non-trivial Hamiltonians. \par One implication of the restriction given by the ESC of Def.~\ref{def:ESC} is that the energy gaps of a single copy of the system cannot be degenerate. The counter-example is then constructed as follows: let $ H^{(1)} = \sum_{i=1}^4 E_i \Pi_{E_i}$ be a 4-level Hamiltonian such that $ E_2 - E_1 = E_4 - E_3 $. Furthermore, let $ \tau $ be a particular thermal state of $ H^{(1)} $ with eigenvalues $ \left( p_1, \cdots, p_4 \right)$ denoting the thermal occupations. To show that $\bar{\mc{E}}_n$ can be non-optimal in general, let us simply consider the use of a single bath copy. Due to the assumed degeneracy of energy gaps, the energy subspace for global energy \begin{equation} \label{eq:deg_subspace} E_t = E_1 + E_4 = E_2+E_3, \end{equation}
contains two different types (as characterized by the tuple in Eq.~\eqref{eq:def_tuple_type}). In particular, if we denote $ \Pi_{i,j} = |i,j\rangle\langle i,j| + |j,i\rangle\langle j,i|$, then the projector on energy subspace $ E_t $ can be written as $ \Pi_{E_t} = \Pi_{1,4} + \Pi_{2,3} $. \par Let $ \tau^{(2)} := \tau^{\otimes 2} $ denote the target state. We know that the total weight within this subspace is $ p_1p_4+p_2p_3+p_3p_2+p_4p_1 = 4p_1p_4 $, where we know that since $ \tau $ is a thermal state, Eq.~\eqref{eq:deg_subspace} implies that $ p_2p_3 = p_1p_4 $. Therefore, within the $ E_t $ energy subspace, we have \begin{equation} \Pi_{E_t} \, \tau^{(2)} \, \Pi_{E_t} = p_1 p_4 \, \Pi_{E_t}, \end{equation}
which is a uniform distribution. What about the output state of the channel $\bar{\mc{E}}_2$? Assuming an initial state $ \rho = \sum_i q_i |i\rangle\langle i| $, the stochastic swapping process produces an output state $ \rho^{(2)} = \bar{\mc{E}}_2(\rho \otimes \tau) = \frac{1}{2} \left( \rho\otimes\tau + \tau\otimes\rho \right)$. Within the same energy subspace, the total weight is $ q_1p_4+q_2p_3+q_3p_2+q_4p_1 $, and the distribution is \begin{equation} \Pi_{E_t} \, \rho^{(2)} \, \Pi_{E_t} = \frac{q_1 p_4+q_4p_1}{2} \cdot \Pi_{1,4} + \frac{q_2p_3+q_3p_2}{2}\Pi_{2,3}, \end{equation} which is uniform across the subspace for each type.
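These two block structures are easy to reproduce numerically. In the short Python sketch below (our own illustration; the energies, the inverse temperature and the initial state are arbitrary choices compatible with the assumptions), the diagonal of $\tau^{(2)}$ is uniform on the four basis states of the degenerate subspace, while the diagonal of $\rho^{(2)}$ is uniform only within each type.
\begin{verbatim}
# Diagonal weights of tau^(2) and rho^(2) on the degenerate two-copy subspace
# E_t = E_1 + E_4 = E_2 + E_3 for a 4-level Hamiltonian with equal gaps.
import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.0])              # E2 - E1 = E4 - E3 = 1
beta = 0.5
p = np.exp(-beta * E); p /= p.sum()             # thermal occupations of tau
q = np.array([1.0, 0.0, 0.0, 0.0])              # initial state rho = |1><1|

tau2 = np.outer(p, p)                           # diagonal of tau^(2), as a 4x4 table
rho2 = 0.5 * (np.outer(q, p) + np.outer(p, q))  # diagonal of (rho x tau + tau x rho)/2

subspace = [(0, 3), (3, 0), (1, 2), (2, 1)]     # basis states with total energy E_t
print("tau^(2):", [round(float(tau2[i, j]), 5) for i, j in subspace])   # all equal
print("rho^(2):", [round(float(rho2[i, j]), 5) for i, j in subspace])   # equal per type only
\end{verbatim}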
\begin{figure}
\caption{Comparison of the eigenvalues of $ \tau^{(2)} $ and $ \rho^{(2)} $ within the energy $ E_t $ subspace. If each individual eigenvalue of $ \rho^{(2)} $ is larger (or smaller) than the corresponding one of $ \tau^{(2)} $, then smoothing out the subspace of $ \rho^{(2)} $ does not affect the trace distance. However, if we have the situation in the diagram where some eigenvalues of $ \rho^{(2)}$ are larger than those of $ \tau^{(2)} $, while some are smaller, then one can further reduce the trace distance by smoothing out the subspace of $ \rho^{(2)} $.}
\label{fig:counter_ex}
\end{figure}
\par Supplementary Figure~\ref{fig:counter_ex} shows the eigenvalues in this 4-dimensional subspace, compared to the distribution given by $ \tau^{(2)} $. Note that as long as \begin{equation} \frac{q_1p_4+q_4p_1}{2} > p_1p_4 > \frac{q_2p_3+q_3p_2}{2} \quad \text{or} \quad \frac{q_1p_4+q_4p_1}{2} < p_1p_4 < \frac{q_2p_3+q_3p_2}{2}, \end{equation} holds, one can always further decrease the trace distance by using another state $ \rho^* $ that makes the entire subspace uniform. An example of this is when $ q_1 =1,q_2=q_3=q_4=0 $, while assuming that $ p_1\leq \frac{1}{2} $, so that $ p_1 p_4 \leq \frac{p_4}{2} $. Note that since $ p_1 $ is the largest eigenvalue of the single-copy thermal state, $ p_1 \geq \frac{1}{4} $ as well. Concretely, take the state $ \rho^* = \rho^{(2)} - \Pi_{E_t} \, \rho^{(2)} \, \Pi_{E_t} + \frac{p_4}{4}\Pi_{E_t} $. Then \begin{align*}
d(\rho^*,\tau^{(2)}) &= d(\rho^{(2)},\tau^{(2)}) - \frac{1}{2}\mathrm{Tr}(|\Pi_{E_t} (\rho^{(2)}-\tau^{(2)}) \Pi_{E_t}|)
+ \frac{1}{2}\mathrm{Tr}(|\frac{p_4}{4}\Pi_{E_t} -\Pi_{E_t}\tau^{(2)}\Pi_{E_t}|) \\ &=d(\rho^{(2)},\tau^{(2)})-\frac{p_4}{2}+ p_4(2p_1 - \frac{1}{2}) =d(\rho^{(2)},\tau^{(2)})+ p_4 (2p_1 -1) \leq d(\rho^{(2)},\tau^{(2)}), \end{align*} which implies that the output of the stochastic swapping channel $ \rho^{(2)} $ cannot be optimal in terms of minimizing the distance.
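The chain of (in)equalities above can be double-checked numerically. The Python sketch below (our own verification; the energies and the inverse temperature are arbitrary choices satisfying $\frac14 \leq p_1 \leq \frac12$) compares $d(\rho^{(2)},\tau^{(2)})$ with $d(\rho^{*},\tau^{(2)})$ for diagonal states and confirms that smoothing out the degenerate block strictly decreases the trace distance.
\begin{verbatim}
# Numerical check: replacing the block of rho^(2) on the degenerate subspace by
# the uniform block (p4/4) Pi_{E_t} decreases the trace distance to tau^(2).
import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.0])              # degenerate gaps: E2 - E1 = E4 - E3
beta = 0.5                                      # chosen so that 1/4 <= p1 <= 1/2
p = np.exp(-beta * E); p /= p.sum()
q = np.array([1.0, 0.0, 0.0, 0.0])

tau2 = np.outer(p, p)                           # diagonal of tau^(2)
rho2 = 0.5 * (np.outer(q, p) + np.outer(p, q))  # diagonal of rho^(2)

rho_star = rho2.copy()
for i, j in [(0, 3), (3, 0), (1, 2), (2, 1)]:   # the E_t subspace
    rho_star[i, j] = p[3] / 4                   # uniform block with the same weight

dist = lambda a, b: 0.5 * np.abs(a - b).sum()   # trace distance of diagonal states
print("p1 =", round(float(p[0]), 4))
print("d(rho^(2), tau^(2)) =", round(float(dist(rho2, tau2)), 5))
print("d(rho^*,   tau^(2)) =", round(float(dist(rho_star, tau2)), 5))   # strictly smaller
\end{verbatim}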
\end{document}
\begin{document}
\title[On the Factorization of Two Adjacent Numbers]{On the Factorization of Two Adjacent Numbers in Multiplicatively Closed Sets Generated by Two Elements} \author[C.P. Anil Kumar]{C.P. Anil Kumar} \address{No. 104, Bldg. 23, Lakshmi Paradise, 5th Main, 11th Cross, LN Puram, Bengaluru-560021} \email{[email protected]} \subjclass[2010]{Primary: 11A55, Secondary: 11K60} \keywords{Multiplicatively Closed Sets, Continued Fractions, Primary and Secondary Convergents} \date{\sc \today} \begin{abstract} For two natural numbers $1<p_1<p_2$, with $\alpha=\frac{\log(p_1)}{\log(p_2)}$ irrational, we describe, in main Theorem~\ref{theorem:FactRectangles} and in Note~\ref{note:AdjFact}, the factorization of two adjacent numbers in the multiplicatively closed subset $S=\{p_1^ip_2^j\mid i,j\in \mathbb{N}\cup\{0\}\}$ using primary and secondary convergents of $\alpha$. This suggests the general Question~\ref{ques:AdjFact} for more than two generators, which is still open. \end{abstract} \maketitle \section{\bf{Introduction}} Continued fractions have been studied extensively in the theory of Diophantine approximation. More so as a tool to prove results in this theory, for example, Hurwitz's theorem. A basic introduction to the theory of continued fractions is given in~\cite{MR0001185},~\cite{MR1083765}. A proof of Hurwitz's theorem is also mentioned in~\cite{MR1083765} (Chapter $7$). The following is the question that this article concerns, and we answer it using the theory of continued fractions as a tool.
\begin{ques} \label{ques:AdjacentNumberFactorization} Let $S=\{1=s_0<s_1<s_2<\cdots\}\subset \mathbb{N}$ be a multiplicatively closed set generated by two natural numbers $p_1<p_2$ such that $\frac{\log(p_1)}{\log(p_2)}$ is irrational. Let $s_k=p_1^ip_2^j\in S$ for some $k\geq 0$. What are the factorizations of the adjacent numbers $s_{k-1},s_{k+1}$ in terms of $p_1,p_2,i,j$? \end{ques} We answer this Question~\ref{ques:AdjacentNumberFactorization} using the simple continued fraction expansion of $\frac{\log(p_1)}{\log(p_2)}$ and its primary and secondary convergents. In C.~P.~Anil~Kumar~\cite{MR3943663} (Theorem $3.2$), a question on the existence of arbitrarily large gaps in $S$ has been answered affirmatively by another constructive technique. Also in Article~\cite{MR3943663} (Lemma $5.4$ and Note $5.5$) a formula for the next number of $p_2^j$ in the multiplicatively closed set $S$ has been found. Here we prove this result as Corollary~\ref{cor:ArbLargeInt} of main Theorem~\ref{theorem:FactRectangles}.
The following question for more than two generators is still open. \begin{ques} \label{ques:AdjFact} Let $T=\{1=t_0<t_1<t_2<\cdots\}\subset \mathbb{N}$ be a finitely generated multiplicatively closed infinite set generated by positive integers $d_1,d_2,\cdots,d_n$ for $n>2$. Let $t_k=d_1^{i_1}d_2^{i_2}\cdots d_n^{i_n}$. How do we construct an explicit factorization of the elements $t_{k-1},t_{k+1}\in T$ in terms of the positive integers $d_j,i_j, 1\leq j\leq n$? \end{ques} Now we proceed to mention some notation, a required definition and state main Theorem~\ref{theorem:FactRectangles}. \begin{notation} Throughout this article, let $0<p_1<p_2$ be two positive integers such that $\frac {\log p_1}{\log p_2}$ is irrational. Let $S=\{p_1^ip_2^j\mid i,j\in \mathbb{N}\cup \{0\}\}=\{s_0=1<s_1<s_2<\cdots\}$ be the multiplicatively closed set generated by $p_1,p_2$. \end{notation} \begin{defn}[Non-negative Integer Co-ordinates of an Element]
Any integer $n\in S$ can be uniquely expressed as $n=p_1^ip_2^j$. We associate to $n$ the non-negative integer pair $(i,j)$, which gives the integer co-ordinates of the element $n$.
So in particular there is a bijection (co-ordinatisation map) of $S$ with the grid $(\mathbb{N}\cup \{0\})\times (\mathbb{N}\cup \{0\})$. \end{defn} Main Theorem~\ref{theorem:FactRectangles} gives the decomposition of the factorization grids \equa{&(\mathbb{N}\cup \{0\})\times (\mathbb{N}\cup \{0\})\text{ and }\\
&\big((\mathbb{N}\cup \{0\})\times (\mathbb{N}\cup \{0\})\big)^{*}=\big(\mathbb{N}\cup\{0\}\big)\times \big(\mathbb{N}\cup\{0\}\big)\backslash\{(0,0)\}} into rectangles which are related by local translations to describe the factorization of the number and its next number in an elegant manner. Now we state the main theorem. \begin{thmOmega}
\namedlabel{theorem:FactRectangles}{$\Omega$}
Let $\{a_0=0,a_1,a_2,\cdots,\}$ be the continued fraction of $\frac {\log p_1}{\log p_2}$. Let $h_0=0,k_0=1$ and let $\{\frac{h_i}{k_i}\mid i\in \mathbb{N},gcd(h_i,k_i)=1\}$
be the sequence of primary convergents of $\frac {\log p_1}{\log p_2}$.
Consider the integer grid rectangles $\rectangle A^t_iB^t_iC^t_iD^t_i$ for $i\geq 1$ of dimensions $h_{2i}\times k_{2i}$ with co-ordinates given by
\equa{ A^t_i&=\big(k_{2i-1}+tk_{2i},0\big),\\
B^t_i&=\big(k_{2i-1}+(t+1)k_{2i}-1,0\big),\\
C^t_i&=\big(k_{2i-1}+(t+1)k_{2i}-1,h_{2i}-1\big),\\
D^t_i&=\big(k_{2i-1}+tk_{2i}, h_{2i}-1\big), 0\leq t< a_{2i+1}.}
The corresponding translated rectangles (translation applied to each point) denoted by $\rectangle \tilde{A^t_i}\tilde{B^t_i}\tilde{C^t_i}\tilde{D^t_i}$ of next numbers in the multiplicatively closed set are given by
\equa{ \tilde{A^t_i}&=\big(0,h_{2i-1}+th_{2i}\big),\\
\tilde{B^t_i}&=\big(k_{2i}-1,h_{2i-1}+th_{2i}\big),\\
\tilde{C^t_i}&=\big(k_{2i}-1,h_{2i-1}+(t+1)h_{2i}-1\big),\\
\tilde{D^t_i}&=\big(0,h_{2i-1}+(t+1)h_{2i}-1\big),0\leq t< a_{2i+1}}
again of the same dimensions $h_{2i}\times k_{2i}$ with translation given by \equ{\rectangle\tilde{A^t_i}\tilde{B^t_i}\tilde{C^t_i}\tilde{D^t_i}=\rectangle A^t_iB^t_iC^t_iD^t_i+\big(-k_{2i-1}-tk_{2i},h_{2i-1}+th_{2i}\big).}
Now consider the integer grid rectangles $\rectangle P^t_iQ^t_iR^t_iS^t_i$ for $i\geq 0$ of dimensions $h_{2i+1} \times k_{2i+1}$ with co-ordinates given by
\equa{ P^t_i&=\big(0,h_{2i}+th_{2i+1}\big),\\
Q^t_i&=\big(k_{2i+1}-1,h_{2i}+th_{2i+1}\big),\\
R^t_i&=\big(k_{2i+1}-1,h_{2i}+(t+1)h_{2i+1}-1\big),\\
S^t_i&=\big(0,h_{2i}+(t+1)h_{2i+1}-1\big), 0\leq t< a_{2i+2}.}
The corresponding translated rectangles (translation applied to each point) denoted by $\rectangle \tilde{P^t_i}\tilde{Q^t_i}\tilde{R^t_i}\tilde{S^t_i}$ of next numbers in the multiplicatively closed set are given by
\equa{ \tilde{P^t_i}&=\big(k_{2i}+tk_{2i+1},0\big),\\
\tilde{Q^t_i}&=\big(k_{2i}+(t+1)k_{2i+1}-1,0\big)\\
\tilde{R^t_i}&=\big(k_{2i}+(t+1)k_{2i+1}-1,h_{2i+1}-1\big),\\
\tilde{S^t_i}&=\big(k_{2i}+tk_{2i+1},h_{2i+1}-1\big), 0\leq t< a_{2i+2}}
again of the same dimensions $h_{2i+1}\times k_{2i+1}$
with translation given by \equ{\rectangle\tilde{P^t_i}\tilde{Q^t_i}\tilde{R^t_i}\tilde{S^t_i}=\rectangle P^t_iQ^t_iR^t_iS^t_i+\big(k_{2i}+tk_{2i+1},-h_{2i}-th_{2i+1}\big).}
Also we have the grid
\equa{\big(\mathbb{N}\cup\{0\}\big)\times \big(\mathbb{N}\cup\{0\}\big) &=\\ &\underset{i\geq 1}{\bigsqcup}\bigg( \underset{0\leq t< a_{2i+1}}{\bigsqcup} \rectangle A^t_iB^t_iC^t_iD^t_i \bigg)
\underset{i\geq 0}{\bigsqcup}\bigg( \underset{0\leq t< a_{2i+2}}{\bigsqcup} \rectangle P^t_iQ^t_iR^t_iS^t_i \bigg)}
and the grid of next numbers (hence origin deleted as next number cannot be origin)
\equa{\big(\mathbb{N}\cup\{0\}\big)&\times \big(\mathbb{N}\cup\{0\}\big)\backslash\{(0,0)\} =\\
&\underset{i\geq 1}{\bigsqcup }\bigg( \underset{0\leq t< a_{2i+1}}{\bigsqcup} \rectangle \tilde{A^t_i}\tilde{B^t_i}\tilde{C^t_i}\tilde{D^t_i} \bigg)
\underset{i\geq 0}{\bigsqcup }\bigg( \underset{0\leq t< a_{2i+2}}{\bigsqcup} \rectangle \tilde{P^t_i}\tilde{Q^t_i}\tilde{R^t_i}\tilde{S^t_i} \bigg).} \end{thmOmega} Here in the following note we mention briefly how to get the factorization of the previous number of $s_k\in S$. \begin{note}
\label{note:AdjFact}
Using Theorem~\ref{theorem:FactRectangles} the factorization of the previous number can also be obtained, because the previous number of the next number of a given number is the number itself. The answer to Question~\ref{ques:AdjacentNumberFactorization} can be obtained by suitably expressing $(i,j)$.
For the next number express
$s_k=(i,j)\in (\mathbb{N}\cup \{0\})\times (\mathbb{N}\cup \{0\})$ as $(k_{2l-1}+tk_{2l}+r,s)$ with $0\leq t<a_{2l+1},0\leq r<k_{2l},0\leq s<h_{2l}$ or as
$(r,h_{2l}+th_{2l+1}+s)$ with $0\leq t<a_{2l+2},0\leq r<k_{2l+1},0\leq s<h_{2l+1}$. We get the next number $s_{k+1}$ using Theorem~\ref{theorem:FactRectangles}.
For the previous number express $s_{k}=(i,j)\in (\mathbb{N}\cup \{0\})\times (\mathbb{N}\cup \{0\})\backslash\{(0,0)\}$ as $(r,h_{2l-1}+th_{2l}+s)$ with $0\leq t<a_{2l+1},0\leq r<k_{2l},0\leq s<h_{2l}$
or as $(r+k_{2l}+tk_{2l+1},s)$ with $0\leq t<a_{2l+2},0\leq r<k_{2l+1},0\leq s<h_{2l+1}$. We get the previous number $s_{k-1}$ again using Theorem~\ref{theorem:FactRectangles} in the reverse manner. \end{note} \section{\bf{Types of Fractions and Convergents associated to an Irrational in $[0,1]$}} In this section we define various types of fractions and convergents associated to an irrational $\alpha \in [0,1]$.
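Before doing so, we record a small brute-force illustration (our own, not needed for any of the proofs) of the adjacent factorizations described in Note~\ref{note:AdjFact}: for the generators $p_1=2$, $p_2=3$, the following Python sketch sorts the elements of $S$ up to a bound and prints the exponent pairs of a sample element together with those of its previous and next elements; the bound and the sample element are arbitrary choices.
\begin{verbatim}
# Brute-force listing of S = { p1^i p2^j } in increasing order, and the
# factorizations of an element together with its two neighbours.
from math import log

p1, p2, bound = 2, 3, 10**7
S = sorted(p1**i * p2**j
           for i in range(int(log(bound, p1)) + 1)
           for j in range(int(log(bound, p2)) + 1)
           if p1**i * p2**j <= bound)

def exponents(m):
    i = 0
    while m % p1 == 0:
        m //= p1; i += 1
    j = 0
    while m % p2 == 0:
        m //= p2; j += 1
    return (i, j)

k = S.index(p1**4 * p2**3)                  # s_k = 2^4 * 3^3 = 432
for s in (S[k - 1], S[k], S[k + 1]):
    print(s, exponents(s))                  # previous, current and next factorizations
\end{verbatim}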
\subsection{Primary and Secondary Convergents} \begin{defn}[Primary and Secondary Convergents] Let $\alpha \in [0,1]$ be an irrational. Let $\{a_0=0,a_1,a_2,a_3,\cdots\}$ be the sequence denoting the simple continued fraction expansion of $\alpha$, i.e., \equ{\alpha=a_0+\frac 1{a_1+\frac 1{a_2+\frac 1{a_3+ \frac 1{\cdots}}}}.} Let $h_0=0,k_0=1$. Define \equ{\frac{h_i}{k_i}=a_0+\frac 1{a_1+\frac 1{\ddots+\frac 1{a_i}}}, gcd(h_i,k_i)=1 \text{ for } i\in \mathbb{N}.} Then an element in the sequence $\{\frac{h_i}{k_i}: i\in \mathbb{N}\cup\{0\}\}$ is called a primary convergent. The first few primary convergents with relatively prime numerators and denominators are given by \equ{\frac 01,\frac 1{a_1},\frac{a_2}{1+a_1a_2},\frac{1+a_2a_3}{a_3+a_1+a_1a_2a_3},\frac{a_2+a_4+a_2a_3a_4}{1+a_3a_4+a_1a_2+a_1a_4+a_1a_2a_3a_4},\cdots.} By induction, we can show, with these expressions for $\frac{h_i}{k_i}$, that $h_ik_{i+1}-k_ih_{i+1}=\pm 1,i\in \mathbb{N}\cup\{0\}$ as polynomials. Also we have as polynomials, \equ{h_{i+2}=a_{i+2}h_{i+1}+h_i, k_{i+2}=a_{i+2}k_{i+1}+k_i.} So we actually have polynomial expressions for $h_i,k_i$ in terms of variables $a_i:i\in \mathbb{N}\cup\{0\}$ arising from the continued fraction of the irrational $\alpha$. For any irrational $\alpha$ the convergents satisfy \equ{\frac{h_{2j}}{k_{2j}}<\frac{h_{2j+2}}{k_{2j+2}}<\alpha < \frac{h_{2l+1}}{k_{2l+1}}<\frac{h_{2l-1}}{k_{2l-1}}, j\in \mathbb{N}\cup \{0\}, l\in \mathbb{N}.}
Now we define the finite monotonic sequences of new intermediate fractions with relatively prime numerators and denominators given by \equ{\frac{h_{2j}}{k_{2j}}<\frac{h_{2j}+th_{2j+1}}{k_{2j}+tk_{2j+1}}<\frac{h_{2j+2}}{k_{2j+2}}, 0<t<a_{2j+2}, t,j\in \mathbb{N}\cup\{0\}} and \equ{\frac{h_{2l+1}}{k_{2l+1}}<\frac{h_{2l-1}+th_{2l}}{k_{2l-1}+tk_{2l}}<\frac{h_{2l-1}}{k_{2l-1}}, 0<t<a_{2l+1}, t,l\in \mathbb{N}.} These new intermediate fractions are called secondary convergents. \end{defn}
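For concreteness, the recursions above are easy to implement. The following Python sketch (our own illustration, relying on the mpmath library merely for extra working precision) computes the continued fraction of $\alpha=\frac{\log 2}{\log 3}$, its primary convergents $\frac{h_i}{k_i}$, and the secondary convergents lying between $\frac{h_2}{k_2}$ and $\frac{h_4}{k_4}$.
\begin{verbatim}
# Continued fraction, primary convergents and secondary convergents of
# alpha = log(p1)/log(p2) for p1 = 2, p2 = 3.
from fractions import Fraction
from mpmath import mp, log, floor

mp.dps = 60                                   # working precision

def continued_fraction(alpha, terms):
    a, x = [], alpha
    for _ in range(terms):
        ai = int(floor(x)); a.append(ai)
        x = 1 / (x - ai)
    return a

def primary_convergents(a):                   # assumes a[0] == 0
    hs, ks = [0, 1], [1, a[1]]                # h_0/k_0 = 0/1, h_1/k_1 = 1/a_1
    for i in range(2, len(a)):
        hs.append(a[i] * hs[-1] + hs[-2])
        ks.append(a[i] * ks[-1] + ks[-2])
    return hs, ks

p1, p2 = 2, 3
alpha = log(p1) / log(p2)
a = continued_fraction(alpha, 10)             # [0, 1, 1, 1, 2, 2, 3, 1, 5, 2]
hs, ks = primary_convergents(a)
print("a :", a)
print("primary convergents :", [Fraction(h, k) for h, k in zip(hs, ks)])
# Secondary convergents between h_2/k_2 and h_4/k_4 (that is, 0 < t < a_4):
print("secondary :", [Fraction(hs[2] + t * hs[3], ks[2] + t * ks[3])
                      for t in range(1, a[4])])
\end{verbatim}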
\subsection{Upper and Lower Fractions} Here we define two sequences of fractions called upper and lower fractions associated to an irrational $\alpha\in [0,1]$.
\begin{defn}[Upper and Lower Fractions] Let $0<\alpha<1$. For $n\in \mathbb{N}$, let \equ{f(n)=\bigg\lceil \frac n{\alpha} \bigg\rceil, g(n)=\bigg\lfloor \frac n{\alpha} \bigg\rfloor.} Then the sequences $\{f(n):n\in \mathbb{N}\},\{g(n):n\in \mathbb{N}\}$ are called upper and lower sequences of $\alpha$ respectively. We have \equ{g(n)\alpha < n < f(n)\alpha, f(n)-g(n)=1, n\in \mathbb{N}.} Since $0<\alpha<1$, for any $n\in \mathbb{N}$ we also observe that \equ{\lfloor f(n)\alpha \rfloor = n, \lceil g(n)\alpha \rceil = n.} A fraction in the sequence $\{\frac n{f(n)}:n\in \mathbb{N}\}$ is called a lower fraction associated to $\alpha$ and a fraction in the sequence $\{\frac n{g(n)}:n\in \mathbb{N}\}$ is called an upper fraction associated to $\alpha$. We need not in general have $gcd(n,f(n))=1$ or $gcd(n,g(n))=1$. However we definitely have \equ{\frac {n}{f(n)}<\alpha < \frac{n}{g(n)}.} \end{defn} \begin{note} An element in the upper sequence gives a lower fraction and an element in the lower sequence gives an upper fraction associated to $\alpha$. \end{note} \subsection{The Upper and Lower Sequences $f$ and $g$} Here in this section we prove Theorems~\ref{theorem:UpLowSequence} and~\ref{theorem:UpLowSeqDifferences} concerning the values of the upper and lower sequences. \begin{theorem} \label{theorem:UpLowSequence} Let $\alpha \in [0,1]$ be an irrational with continued fraction expansion $\{a_0=0,a_1,a_2,\cdots\}$. Let $\{f(n):n\in \mathbb{N}\},\{g(n):n\in \mathbb{N}\}$ be the upper and lower sequences of $\alpha$. Let $h_0=0,k_0=1$. For $i\in \mathbb{N}$, let $h_i,k_i$ be the numerator and denominator of the $i^{th}$ primary convergent, which are relatively prime. Then we have \equa{f(h_{2j}+th_{2j+1})&=k_{2j}+tk_{2j+1},\\ g(h_{2j}+th_{2j+1})&=k_{2j}+tk_{2j+1}-1, 0<t\leq a_{2j+2},j\in \mathbb{N}\cup\{0\},t\in \mathbb{N}\\ g(h_{2l-1}+th_{2l})&=k_{2l-1}+tk_{2l},\\ f(h_{2l-1}+th_{2l})&=k_{2l-1}+tk_{2l}+1,0 \leq t \leq a_{2l+1},t\in \mathbb{N}\cup\{0\},l\in \mathbb{N}.} \end{theorem} \begin{proof} To prove the theorem it suffices to prove the following inequalities. \equa{(k_{2j}+tk_{2j+1}-1)\alpha<h_{2j}+th_{2j+1}&<(k_{2j}+tk_{2j+1})\alpha,\\
&0<t\leq a_{2j+2},j\in \mathbb{N}\cup\{0\},t\in \mathbb{N}\\ (k_{2l-1}+tk_{2l})\alpha<h_{2l-1}+th_{2l}&<(k_{2l-1}+tk_{2l}+1)\alpha,\\ &0\leq t \leq a_{2l+1},t,l\in \mathbb{N}.} This we prove by induction on $j,l$ simultaneously as follows.
We observe that \equ{h_0=k_0-1=0,k_1\alpha < h_1 \Rightarrow (k_0+tk_1-1)\alpha<h_0+th_1\text{ for all }t > 0.} We also have for \equ{0 \leq t \leq a_2, \frac{h_0+th_1}{k_0+tk_1}\leq \frac{h_2}{k_2}<\alpha \Rightarrow h_0+th_1 < (k_0+tk_1)\alpha.} So \equ{(k_0+tk_1-1)\alpha<h_0+th_1<(k_0+tk_1)\alpha, 0<t\leq a_2.} We have \equ{h_2<k_2\alpha\text{ and }\frac 1{a_1+1}<\alpha \Rightarrow h_1 < (k_1+1)\alpha.} Hence \equ{\text{for all }t \geq 0, h_1+th_2<(k_1+tk_2+1)\alpha.} We also have \equ{\text{for }0\leq t \leq a_3, \alpha<\frac{h_3}{k_3}\leq \frac{h_1+th_2}{k_1+tk_2} \Rightarrow (k_1+tk_2)\alpha < h_1+th_2.} So \equ{(k_1+tk_2)\alpha < h_1+th_2 < (k_1+tk_2+1)\alpha, 0 \leq t \leq a_3.} This proves the initial step of the induction for $j=0,l=1$.
Now assume that the inequalities follow for $j=r,l=r+1$ for some $r\in \mathbb{N}$. We prove for $j=r+1,l=r+2$. We have \equ{k_{2r+3}\alpha < h_{2r+3}\text{ and for }j=r,t=a_{2r+2},(k_{2r+2}-1)\alpha < h_{2r+2}} which together imply \equ{(k_{2r+2}+tk_{2r+3}-1)\alpha < h_{2r+2}+th_{2r+3}\text{ for }t\geq 0.} We also have for \equa{0\leq t\leq a_{2r+4}, \frac{h_{2r+2}+th_{2r+3}}{k_{2r+2}+tk_{2r+3}} &\leq \frac{h_{2r+4}}{k_{2r+4}}<\alpha \Rightarrow\\ h_{2r+2}+th_{2r+3}&<(k_{2r+2}+tk_{2r+3})\alpha.} So \equ{(k_{2r+2}+tk_{2r+3}-1)\alpha < h_{2r+2}+th_{2r+3}<(k_{2r+2}+tk_{2r+3})\alpha.} We have \equ{h_{2r+4}<k_{2r+4}\alpha\text{ and for }l=r+1,t=a_{2r+3}, h_{2r+3}<(k_{2r+3}+1)\alpha} which together imply \equ{h_{2r+3}+th_{2r+4}<(k_{2r+3}+tk_{2r+4}+1)\alpha\text{ for all }t\geq 0.} We also have for \equa{0\leq t\leq a_{2r+5}, \alpha < \frac{h_{2r+5}}{k_{2r+5}} &\leq \frac{h_{2r+3}+th_{2r+4}}{k_{2r+3}+tk_{2r+4}} \Rightarrow \\
(k_{2r+3}+tk_{2r+4})\alpha &< h_{2r+3}+th_{2r+4}.} So \equ{(k_{2r+3}+tk_{2r+4})\alpha<h_{2r+3}+th_{2r+4}<(k_{2r+3}+tk_{2r+4}+1)\alpha, 0\leq t\leq a_{2r+5}.} This proves the induction step for $j=r+1,l=r+2$.
Hence the theorem follows and the values of the sequences $f,g$ at the values of $n$ being the numerator of any primary or secondary convergent are known. \end{proof} \begin{note} \label{note:FracPart} Now we make an important observation about the monotonic nature of the ceil or rounding up fractional parts $h_{*}-k_{*}\alpha$. The numerators of the secondary and primary convergents associated to the lower sequence $g$ satisfy the following monotonicity. \equ{h_1<h_1+h_2<\cdots<h_1+a_3h_2=h_3<h_3+h_4<\cdots<h_3+a_5h_4=h_5<\cdots.} The sequence of differences is given by \equ{h_2,h_2,\cdots,h_2,h_4,h_4,\cdots,h_4,\cdots} where $h_{2i}$ appears $a_{2i+1}$ times for $i\geq 1$. This sequence is non-decreasing and diverges to infinity. Now we apply $g$ to the above sequence to obtain the denominators of the secondary and primary convergents associated to the lower sequence $g$ which also satisfy the following monotonicity. \equ{k_1<k_1+k_2<\cdots<k_1+a_3k_2=k_3<k_3+k_4<\cdots<k_3+a_5k_4=k_5<\cdots.} The sequence of differences is given by \equ{k_2,k_2,\cdots,k_2,k_4,k_4,\cdots,k_4,\cdots} where $k_{2i}$ appears $a_{2i+1}$ times for $i\geq 1$. This sequence is non-decreasing and diverges to infinity. The ceil fractional parts satisfy \equa{h_1-k_1\alpha>(h_1+h_2)&-(k_1+k_2)\alpha>\cdots>h_3-k_3\alpha>\\
&(h_3+h_4)-(k_3+k_4)\alpha>\cdots>h_5-k_5\alpha>\cdots.}
Similarly we make an observation on the monotonic nature of the floor or usual fractional parts $k_{*}\alpha-h_{*}$. The numerators of the secondary and primary convergents associated to the upper sequence $f$ satisfy the following monotonicity. \equ{h_0<h_0+h_1<\cdots<h_0+a_2h_1=h_2<h_2+h_3<\cdots<h_2+a_4h_3=h_4<\cdots.} The sequence of differences is given by \equ{h_1,h_1,\cdots,h_1,h_3,h_3,\cdots,h_3,\cdots} where $h_{2i-1}$ appears $a_{2i}$ times for $i\geq 1$. This sequence is non-decreasing and diverges to infinity. Now we apply $f$ to the above sequence to obtain the denominators of the secondary and primary convergents associated to the upper sequence $f$ which also satisfy the following monotonicity. \equ{k_0+k_1<\cdots<k_0+a_2k_1=k_2<k_2+k_3<\cdots<k_2+a_4k_3=k_4<\cdots.} The sequence of differences after including $k_0$ in the beginning is given by \equ{k_1,k_1,\cdots,k_1,k_3,k_3,\cdots,k_3,\cdots} where $k_{2i-1}$ appears $a_{2i}$ times for $i\geq 1$. This sequence is non-decreasing and diverges to infinity. The floor fractional parts satisfy \equa{k_0\alpha>(k_0+k_1)\alpha&-(h_0+h_1)>\cdots>k_2\alpha-h_2>\\
&(k_2+k_3)\alpha-(h_2+h_3)>\cdots>k_4\alpha-h_4>\cdots.} \end{note} Now we prove a useful lemma regarding fractions. \begin{lemma} \label{lemma:fracDetOne} Let $\frac ab>\frac pq>\frac cd \geq 0$ be three fractions such that $ad-bc=1$. Then we have \equ{q>max(b,d).} \end{lemma} \begin{proof} We have $\frac 1{bd}>\frac ab-\frac pq=\frac {aq-bp}{bq}$. If $aq-bp=1$ then $q>d$. If $aq-bp>1$ and $q\leq d$ then $\frac {aq-bp}{bq}\geq \frac {aq-bp}{bd}>\frac 1{bd}$ a contradiction. Hence $q>d$. We also have $\frac 1{bd}>\frac pq-\frac cd=\frac {pd-cq}{dq}$. If $pd-cq=1$ then $q>b$. If $pd-cq>1$ and $q\leq b$ then $\frac {pd-cq}{dq}\geq \frac {pd-cq}{bd}>\frac 1{bd}$ a contradiction. Hence $q>b$. So the lemma follows. \end{proof} We prove the second theorem. \begin{theorem} \label{theorem:UpLowSeqDifferences} Let $\alpha \in [0,1]$ be an irrational. Let $\{f(n):n\in \mathbb{N}\},\{g(n):n\in \mathbb{N}\}$ be the upper and lower sequences of $\alpha$. Consider the two sequences of lower and upper fractions of primary and secondary convergents respectively. \equa{\bigg\{\frac {p_1}{q_1}&<\cdots<\frac {p_i}{q_i}<\cdots\mid i\in \mathbb{N}\bigg\}=\\ &\bigg\{\frac {h_0}{k_0}<\frac {h_0+h_1}{k_0+k_1}<\cdots <\frac{h_2}{k_2}<\frac {h_2+h_3}{k_2+k_3}<\cdots <\frac{h_4}{k_4}<\frac {h_4+h_5}{k_4+k_5}<\cdots\bigg\}.} \equa{\bigg\{\frac {r_1}{s_1}&>\cdots>\frac {r_i}{s_i}>\cdots\mid i\in \mathbb{N}\bigg\}=\\ &\bigg\{\frac {h_1}{k_1}>\frac {h_1+h_2}{k_1+k_2}>\cdots >\frac{h_3}{k_3}>\frac {h_3+h_4}{k_3+k_4}>\cdots >\frac{h_5}{k_5}>\frac {h_5+h_6}{k_5+k_6}>\cdots\bigg\}.} Given a lower fraction $\frac{n}{f(n)}\notin \{\frac {p_j}{q_j}\mid j\in \mathbb{N}\}$ there exists a lower fraction $\frac {p_i}{q_i}$ such that \equ{n>p_i, f(n)>f(p_i)=q_i,f(n)\alpha-n>q_i\alpha-p_i.} Given an upper fraction $\frac{n}{g(n)}\notin \{\frac {r_j}{s_j}\mid j\in \mathbb{N}\}$ there exists an upper fraction $\frac {r_i}{s_i}$ such that \equ{n>r_i,g(n)>g(r_i)=s_i,n-g(n)\alpha>r_i-s_i\alpha.} \end{theorem} \begin{proof} Both the sequences $f,g$ are monotonically increasing and $\frac{h_i}{k_i}-\frac{h_{i+1}}{k_{i+1}}=\frac{(-1)^{i+1}}{k_ik_{i+1}}\longrightarrow 0$ as $i\longrightarrow \infty$. Hence \equ{\underset{i\longrightarrow \infty}{\lim} \frac{p_i}{q_i}=\underset{i\longrightarrow \infty}{\lim} \frac{h_i}{k_i}= \underset{i\longrightarrow \infty}{\lim} \frac{r_i}{s_i}=\alpha.} Now $0<\frac n{f(n)} <\alpha$. So there exist two consecutive lower fractions $\frac{p_{i-1}}{q_{i-1}},\frac{p_i}{q_i}$ such that \equ{\frac{p_{i-1}}{q_{i-1}}<\frac n{f(n)} <\frac {p_i}{q_i}<\alpha.} Using Lemma~\ref{lemma:fracDetOne} we conclude that $f(n)>max(q_i,q_{i-1})=q_i>q_{i-1}$ since $p_iq_{i-1}-q_ip_{i-1}$ $=1$. By monotonicity of $f$ we conclude that $n> max(p_i,p_{i-1})=p_i>p_{i-1}$. Now we have \equ{f(n)\alpha-n=f(n)(\alpha-\frac n{f(n)})>f(n)(\alpha-\frac {p_i}{q_i})>q_i(\alpha-\frac {p_i}{q_i})=q_i\alpha-p_i.} Now we prove the other case. Suppose $\frac n{g(n)}>\frac {h_1}{k_1}=\frac 1{k_1}>\alpha$. Since $n>1,g(n)>k_1$. Choose $\frac{r_i}{s_i}=\frac {h_1}{k_1}$ and we have \equ{n-g(n)\alpha=g(n)(\frac n{g(n)}-\alpha)>g(n)(\frac {h_1}{k_1}-\alpha)>k_1(\frac {h_1}{k_1}-\alpha)=h_1-k_1\alpha.} Suppose $\frac{h_1}{k_1}> \frac n{g(n)} > \alpha$. Then there exist two consecutive upper fractions $\frac {r_{i-1}}{s_{i-1}},\frac {r_i}{s_i}$ such that \equ{\frac {r_{i-1}}{s_{i-1}}>\frac n{g(n)}>\frac {r_i}{s_i}>\alpha.} Again using Lemma~\ref{lemma:fracDetOne} we conclude that $g(n)>max(s_i,s_{i-1})=s_i>s_{i-1}$ since $s_ir_{i-1}-r_is_{i-1}=1$. 
By monotonicity of $g$ we conclude that $n>max(r_i,r_{i-1})=r_i>r_{i-1}$. Now we have \equ{n-g(n)\alpha=g(n)(\frac n{g(n)}-\alpha)>g(n)(\frac {r_i}{s_i}-\alpha)>s_i(\frac {r_i}{s_i}-\alpha)=r_i-s_i\alpha.} This proves the theorem. \end{proof} \begin{note} \label{note:MinFracPart} In Theorem~\ref{theorem:UpLowSeqDifferences} if $\frac{n}{f(n)}=\frac{p_i}{q_i}$ then there exists a positive integer $k$ such that $n=kp_i,f(n)=kq_i$. So if $k>1$ then we have $n>p_i,f(n)>q_i,f(n)\alpha-n>q_i\alpha-p_i$. The other case is similar. So we conclude that minimal fractional parts occur exactly at the numerators and denominators of primary and secondary convergents using Note~\ref{note:FracPart} in the sequences of lower and upper fractions of $\alpha$, which is the content of the statement of Theorem~\ref{theorem:FracPart}. \end{note} \section{\bf{The proof of the main theorem}} \label{sec:FactRectangles} We begin this section with a theorem which is an observation on the rounding up or ceil fractional parts of the lower sequence and the usual fractional parts of the upper sequence.
\begin{theorem} \label{theorem:FracPart} Let $\alpha\in [0,1]$ be an irrational. Let $f,g$ be the upper and lower sequences associated to $\alpha$. Let $z_0=\alpha$ and for $n\in \mathbb{N}$ let $z_n=-n+f(n)\alpha,y_n=n-g(n)\alpha$. Let $n_0=0,z_{n_0}=z_0=\alpha,m_1=1,y_{m_1}=y_1=1-g(1)\alpha$. Define two subsequences $z_{n_j},y_{m_j}$ with the property that \equa{z_{n_j}<z_{n_{j-1}}&=\min \{z_0,\cdots, z_{n_j-1}\} \text{ for }j\in \mathbb{N},\\ y_{m_j}<y_{m_{j-1}}&=\min \{y_1,\cdots, y_{m_j-1}\} \text{ for }1<j\in \mathbb{N}.} Then the sequence \equa{\{n_0<n_1&<\cdots<n_j<\cdots\mid j\in \mathbb{N}\cup\{0\}\}=\\ &\{h_0<h_0+h_1<\cdots<h_0+a_2h_1=h_2<h_2+h_3<\cdots<\\ &h_2+a_4h_3=h_4<h_4+h_5<\cdots\}} and the sequence \equa{\{m_1<m_2&<\cdots<m_j<\cdots\mid j\in \mathbb{N}\}=\\ &\{h_1<h_1+h_2<\cdots<h_1+a_3h_2=h_3<h_3+h_4<\cdots<\\ &h_3+a_5h_4=h_5<h_5+h_6<\cdots\}.} \end{theorem} \begin{proof} This theorem follows by applying Theorem~\ref{theorem:UpLowSeqDifferences} and Notes~\ref{note:FracPart},~\ref{note:MinFracPart} which together imply that the lesser fractional parts in the sequence occur exactly at numerators of primary and secondary convergents of the lower and upper fractions for the upper and lower sequences respectively. \end{proof} Now we prove main Theorem~\ref{theorem:FactRectangles}. \begin{proof} Let $\alpha=\frac {\log p_1}{\log p_2}\in \mathbb{R}\backslash \mathbb{Q}$. Consider a point \equ{(k_{2i-1}+tk_{2i}+r,s)\in \rectangle A^t_iB^t_iC^t_iD^t_i} and its next number \equ{(r,h_{2i-1}+th_{2i}+s)\in \rectangle \tilde{A^t_i}\tilde{B^t_i}\tilde{C^t_i}\tilde{D^t_i}} for any \equ{0\leq t<a_{2i+1},0\leq r <k_{2i},0\leq s< h_{2i}.} Now suppose there exists an integer $p_1^bp_2^a$ such that \equ{p_1^{k_{2i-1}+tk_{2i}+r}p_2^{s}<p_1^bp_2^a<p_1^{r}p_2^{h_{2i-1}+th_{2i}+s}.} Then we arrive at a contradiction as follows.
The following inequalities hold \equ{\big(k_{2i-1}+tk_{2i}+r\big)\alpha+s<b\alpha+a<r\alpha+h_{2i-1}+th_{2i}+s<\big(k_{2i-1}+tk_{2i}+r+1\big)\alpha+s} because $\alpha=\frac{\log p_1}{\log p_2}$. If $b=r$ then the two integers $a-s<h_{2i-1}+th_{2i}$ both lie in the open interval \equ{\big(\big(k_{2i-1}+tk_{2i}\big)\alpha,\big(k_{2i-1}+tk_{2i}+1\big)\alpha\big),} which has length $\alpha<1$ and hence contains at most one integer, a contradiction.
If $b<r$ then we must have $a>s$ so that $a-s>0$. Hence \equ{\big(k_{2i-1}+tk_{2i}+r-b\big)\alpha<a-s<h_{2i-1}+th_{2i}+(r-b)\alpha<\big(k_{2i-1}+tk_{2i}+r-b+1\big)\alpha.} So let $0<z<\alpha$ be the positive rounding up ceil fractional part such that \equ{\big(k_{2i-1}+tk_{2i}+r-b\big)\alpha+z=a-s.} Now suppose $k_{2i-1}+tk_{2i}+r-b<k_{2i-1}+(t+1)k_{2i}$. Then the fractional part $z$ has to satisfy \equ{z\geq h_{2i-1}+th_{2i}-\big(k_{2i-1}+tk_{2i}\big)\alpha} because minimal fractional parts occur exactly at the numerators and denominators of primary and secondary convergents. On the other hand \equa{z=(a-s)&-\big(k_{2i-1}+tk_{2i}+r-b\big)\alpha<\\
&h_{2i-1}+th_{2i}+(r-b)\alpha-\big(k_{2i-1}+tk_{2i}+r-b\big)\alpha=\\
&h_{2i-1}+th_{2i}-\big(k_{2i-1}+tk_{2i}\big)\alpha} which is a contradiction. So we have \equ{k_{2i-1}+tk_{2i}+r-b\geq k_{2i-1}+(t+1)k_{2i}\Rightarrow r-b \geq k_{2i} \Rightarrow r\geq k_{2i}+b\Rightarrow r\geq k_{2i}} which is again a contradiction to $0\leq r\leq k_{2i}-1$.
Now consider the case $b>r$ so that $b-r>0$. We have \equ{\big(k_{2i-1}+tk_{2i}\big)\alpha+s-a<(b-r)\alpha<h_{2i-1}+th_{2i}+s-a<\big(k_{2i-1}+tk_{2i}+1\big)\alpha+s-a.}
Let $0<y<\alpha$ be the rounding up ceil fractional part such that \equ{(b-r)\alpha+y=h_{2i-1}+th_{2i}+s-a.} Now suppose $h_{2i-1}+th_{2i}+s-a<h_{2i-1}+(t+1)h_{2i}$. Then the fractional part $y$ has to satisfy \equ{y\geq h_{2i-1}+th_{2i}-\big(k_{2i-1}+tk_{2i}\big)\alpha} because minimal fractional parts occur exactly at the numerators and denominators of primary and secondary convergents. On the other hand \equa{y&=h_{2i-1}+th_{2i}+s-a-(b-r)\alpha<\\
&h_{2i-1}+th_{2i}+s-a-\big((k_{2i-1}+tk_{2i})\alpha+s-a\big)=\\ &h_{2i-1}+th_{2i}-\big(k_{2i-1}+tk_{2i}\big)\alpha} which is a contradiction. So we have \equ{h_{2i-1}+th_{2i}+s-a \geq h_{2i-1}+(t+1)h_{2i} \Rightarrow s-a\geq h_{2i} \Rightarrow s\geq h_{2i}+a \Rightarrow s\geq h_{2i}} which is again a contradiction to $0\leq s\leq h_{2i}-1$.
This proves that the next number of $p_1^{k_{2i-1}+tk_{2i}+r}p_2^{s}$ is $p_1^{r}p_2^{h_{2i-1}+th_{2i}+s}$ for \equ{0\leq t<a_{2i+1},0\leq r <k_{2i},0\leq s<h_{2i}.}
Now we consider the second set of rectangles. Consider a point \equ{(r,h_{2i}+th_{2i+1}+s)\in \rectangle P^t_iQ^t_iR^t_iS^t_i} and its next number \equ{(r+k_{2i}+tk_{2i+1},s)\in \rectangle \tilde{P^t_i}\tilde{Q^t_i}\tilde{R^t_i}\tilde{S^t_i}} for any \equ{0\leq t<a_{2i+2},0\leq r<k_{2i+1},0\leq s< h_{2i+1}.} Now suppose there exists an integer $p_1^bp_2^a$ such that \equ{p_1^{r}p_2^{h_{2i}+th_{2i+1}+s}<p_1^bp_2^a<p_1^{r+k_{2i}+tk_{2i+1}}p_2^{s}.} Then we arrive at a contradiction as follows in a similar manner.
The following inequalities hold \equ{\big(k_{2i}+tk_{2i+1}+r-1\big)\alpha+s<r\alpha+h_{2i}+th_{2i+1}+s<b\alpha+a<\big(k_{2i}+tk_{2i+1}+r\big)\alpha+s} because $\alpha=\frac{\log p_1}{\log p_2}$. This implies \equ{\big(k_{2i}+tk_{2i+1}+r-b-1\big)\alpha<h_{2i}+th_{2i+1}+(r-b)\alpha<a-s<\big(k_{2i}+tk_{2i+1}+r-b\big)\alpha.} If $a=s$ then we observe that $\big(k_{2i}+tk_{2i+1}+r-b\big)\alpha\geq \alpha$ and $\big(k_{2i}+tk_{2i+1}+r-b-1\big)\alpha\leq -\alpha$, so that $k_{2i}+tk_{2i+1}+r-b\geq 1$ and $k_{2i}+tk_{2i+1}+r-b\leq 0$, which is a contradiction. If $a>s$ so that $a-s>0$ then let $0<x<\alpha$ be the positive fractional part such that \equ{(a-s)+x=\big(k_{2i}+tk_{2i+1}+r-b\big)\alpha.} Now suppose $\big(k_{2i}+tk_{2i+1}+r-b\big)<k_{2i}+(t+1)k_{2i+1}$. Then the fractional part $x$ has to satisfy \equ{x\geq (k_{2i}+tk_{2i+1})\alpha-(h_{2i}+th_{2i+1})} because minimal fractional parts occur exactly at the numerators and denominators of primary and secondary convergents. On the other hand \equa{x&=\big(k_{2i}+tk_{2i+1}+r-b\big)\alpha-(a-s)<\\
&\big(k_{2i}+tk_{2i+1}+r-b\big)\alpha-\big(h_{2i}+th_{2i+1}+(r-b)\alpha\big)=\\ &(k_{2i}+tk_{2i+1})\alpha-(h_{2i}+th_{2i+1})} which is a contradiction. So we have \equa{\big(k_{2i}+tk_{2i+1}+r-b\big)&\geq k_{2i}+(t+1)k_{2i+1} \Rightarrow\\
r-b&\geq k_{2i+1}\Rightarrow r\geq k_{2i+1}+b\Rightarrow r\geq k_{2i+1}} which is a contradiction to $0\leq r\leq k_{2i+1}-1$.
Now suppose $a<s$ so that $s-a>0$. Then we have \equ{\big(k_{2i}+tk_{2i+1}-1\big)\alpha+s-a<h_{2i}+th_{2i+1}+s-a<(b-r)\alpha<\big(k_{2i}+tk_{2i+1}\big)\alpha+s-a.} So $b-r>0$ and let $0<u<\alpha$ be the positive fractional part such that \equ{h_{2i}+th_{2i+1}+s-a+u=(b-r)\alpha.} Now suppose $h_{2i}+th_{2i+1}+s-a<h_{2i}+(t+1)h_{2i+1}$. Then the fractional part $u$ has to satisfy \equ{u\geq (k_{2i}+tk_{2i+1})\alpha-(h_{2i}+th_{2i+1})} because minimal fractional parts occur exactly at the numerators and denominators of primary and secondary convergents. On the other hand \equa{ u&=(b-r)\alpha-(h_{2i}+th_{2i+1}+s-a)<\\
&\big(k_{2i}+tk_{2i+1}\big)\alpha+s-a-(h_{2i}+th_{2i+1}+s-a)=\\
&(k_{2i}+tk_{2i+1})\alpha-(h_{2i}+th_{2i+1})} which is a contradiction. So we have \equa{h_{2i}+th_{2i+1}+s-a&\geq h_{2i}+(t+1)h_{2i+1}\Rightarrow\\
s-a&\geq h_{2i+1}\Rightarrow s\geq h_{2i+1}+a \Rightarrow s\geq h_{2i+1}} which is again a contradiction to $0\leq s\leq h_{2i+1}-1$.
This proves that the next number of $p_1^{r}p_2^{h_{2i}+th_{2i+1}+s}$ is $p_1^{r+k_{2i}+tk_{2i+1}}p_2^{s}$ for \equ{0\leq t<a_{2i+2},0\leq r <k_{2i+1},0\leq s<h_{2i+1}.}
Now we prove that the rectangles cover the grid $(\mathbb{N}\cup\{0\})\times(\mathbb{N}\cup\{0\})$. For this we need to prove that, given $(x,y)\in (\mathbb{N}\cup\{0\})\times(\mathbb{N}\cup\{0\})$, either there exists $j \geq 0$ such that \equ{h_{2j}+th_{2j+1} \leq y < h_{2j}+(t+1)h_{2j+1}, 0 \leq x < k_{2j+1}\text{ for some }0\leq t<a_{2j+2}} or there exists $i\geq 1$ such that \equ{k_{2i-1}+\tilde{t}k_{2i}\leq x <k_{2i-1}+(\tilde{t}+1)k_{2i},0\leq y<h_{2i} \text{ for some }0\leq \tilde{t}<a_{2i+1}.}
Now there always exist $j\geq 0,0\leq t<a_{2j+2}$ such that \equ{h_{2j}+th_{2j+1} \leq y < h_{2j}+(t+1)h_{2j+1}.} If $0 \leq x < k_{2j+1}$ then we are done. Otherwise $x\geq k_{2j+1}$. Hence there exist $i\geq j+1, 0\leq \tilde{t}<a_{2i+1}$ such that \equ{k_{2i-1}+\tilde{t}k_{2i}\leq x <k_{2i-1}+(\tilde{t}+1)k_{2i}.} Now we have \equ{0\leq h_{2j}+th_{2j+1} \leq y < h_{2j}+(t+1)h_{2j+1}\leq h_{2j+2}\leq h_{2i} \Rightarrow 0\leq y<h_{2i}.} So the rectangles cover the grid. The rest of the proof for the grid with origin deleted is similar.
This completes the proof of Theorem~\ref{theorem:FactRectangles}. \end{proof} We mention the following corollary of Theorem~\ref{theorem:FactRectangles}; its proof is very brief, as it is straightforward. \begin{cor} \label{cor:ArbLargeInt} Let $S$ be a multiplicatively closed set with two generators $1<p_1<p_2$ such that $\frac{\log(p_1)}{\log(p_2)}$ is irrational. Then there exist arbitrarily large gap integer intervals of $S$ with endpoints in $S$. \end{cor} \begin{proof} To obtain arbitrarily large gap intervals, we apply Theorem~\ref{theorem:FactRectangles} with the largest values $r\in \{k_{2i}-1,k_{2i+1}-1\}$, $s\in\{h_{2i}-1,h_{2i+1}-1\}$, which tend to infinity as $i$ tends to infinity. \end{proof}
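Purely as an illustration of Theorem~\ref{theorem:FactRectangles} and Corollary~\ref{cor:ArbLargeInt} (and not as part of any proof), the structure above can be examined numerically for the generators $p_1=2$, $p_2=3$, where $\alpha=\frac{\log 2}{\log 3}$ has convergents $\frac{0}{1},\frac{1}{1},\frac{1}{2},\frac{2}{3},\frac{5}{8},\frac{12}{19},\frac{41}{65},\ldots$ (for instance $2^{19}=524288$ is close to $3^{12}=531441$). The short Python script below is only such a sketch; the bound \texttt{LIMIT} and all variable names in it are arbitrary choices. It lists the elements of $S=\{2^b3^a\}$ in increasing order, prints the exponent pattern of a few consecutive pairs, reports the largest gap found below the bound, and computes the convergents of $\alpha$.

\begin{verbatim}
# Numerical illustration only (not used in the proofs): the elements of
# S = { 2^b 3^a } in increasing order, the exponent pattern of a few
# consecutive pairs, the largest gap below an arbitrary bound LIMIT, and the
# convergents of alpha = log 2 / log 3.
from math import log

LIMIT = 10**12
elems = []                      # (value, b, a) with value = 2^b * 3^a <= LIMIT
pow2, b = 1, 0
while pow2 <= LIMIT:
    val, a = pow2, 0
    while val <= LIMIT:
        elems.append((val, b, a))
        val, a = val * 3, a + 1
    pow2, b = pow2 * 2, b + 1
elems.sort()

for (u, b1, a1), (v, b2, a2) in zip(elems[100:105], elems[101:106]):
    print("next number of 2^%d 3^%d is 2^%d 3^%d" % (b1, a1, b2, a2))

gap, lo, hi = max((y[0] - x[0], x[0], y[0]) for x, y in zip(elems, elems[1:]))
print("largest gap below LIMIT:", gap, "between", lo, "and", hi)

alpha = log(2) / log(3)         # convergents h/k of alpha, standard recurrence
h2, k2, h1, k1, x = 0, 1, 1, 0, alpha
for _ in range(8):
    q = int(x)
    h2, k2, h1, k1 = h1, k1, q * h1 + h2, q * k1 + k2
    print("convergent %d/%d" % (h1, k1))
    x = 1.0 / (x - q)
\end{verbatim}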
\end{document}
\begin{document}
\def \Lip{{\rm Lip}} \def \ds{\displaystyle}
\newcommand{\X} {{\cal X}}
\newcommand{\ep} {{\epsilon}}
\newcommand{\g} {{\bf g}}
\newcommand{\F} {{\cal F}}
\newcommand{\J} {{\cal J}}
\newcommand{\B} {{\cal B}}
\newcommand{\Lq} {{\cal L}}
\newcommand{\cend} {\end{center}}
\newcommand{\elp}[2]{L^{#1}{(#2)}}
\newcommand{\sumk} {\sum_{k=1}^{m_j}}
\newcommand{\sumj} {\sum_{j=1}^\infty}
\newcommand{\beginc} {\begin{center}}
\newcommand{\ltwo} {L^2(\Omega)}
\newcommand{\ltwoj} {{L^2 (\Omega_j)}}
\newcommand{\cl}[2] {{\cal L}^{#1}_\mu(#2,{Q})}
\newcommand{\wa}[2] {W^{1,{#1}}_{\nu,\mu}(#2,{Q})}
\newcommand{\wb}[2] {W^{1,{#1}}_{\nu,\mu,0}(#2,{Q})}
\newcommand{\wc}[2] {{\cal W}^{1,{#1}}_{\nu,\mu}(#2,{Q})}
\newcommand{\we}[2] {{\cal W}^{1,{#1}}_{\nu,\mu,0}(#2,{Q})}
\newcommand{\normal}[1]{\vec #1}
\newcommand{\hf} {[f]_{\alpha \; ; \; x_0}}
\newcommand{\di} {\partial}
\newcommand{\scq} {\mathcal{Q}}
\newcommand{\bg} {{\bf g}}
\newcommand{\ball}[3] {B_{#1}({#2}_{#3})}
\newcommand{\sumjn} {\sum_{j=1}^N}
\newcommand{\sumjm} {\sum_{j=1}^M}
\newcommand{\scl} {\mathcal{L}}
\newcommand{\rhu} {\rightharpoonup}
\newcommand{\lhu} {\leftharpoonup}
\newcommand{\limj} {\displaystyle\lim_{j\rightarrow \infty}}
\newtheorem{remark}[theorem]{{\bf Remark}}
\newtheorem{defn}[theorem]{{\bf Definition}}
\newtheorem{thm}[theorem]{\bf{Theorem}}
\newtheorem{propn}[theorem]{\bf{Proposition}}
\newtheorem{lemma}[theorem]{\bf{Lemma}}
\newtheorem{fact}[theorem]{{\bf Fact}}
\newtheorem{corollary}[theorem]{{\bf Corollary}}
\begin{center}{\bf\Large A Compact Embedding Theorem \\ for Generalized Sobolev Spaces} \footnote{\noindent{\footnotesize 2010 Mathematics Subject Classification: 46B50, 46E35, 35H20
{\noindent\footnotesize Key words and phrases: compact embedding, Sobolev spaces, degenerate quadratic forms}}} \end{center} \begin{center}
by Seng-Kee Chua, Scott Rodney and Richard L. Wheeden \end{center}
{\footnotesize{{\bf Abstract:} We give an elementary proof
of a compact embedding theorem in abstract Sobolev spaces. The
result is first presented in a general context and later
specialized to the case of degenerate Sobolev spaces defined
with respect to nonnegative quadratic forms on
$\mathbb{R}^n$. Although our primary interest concerns degenerate
quadratic forms, our result also applies to nondegenerate cases,
and we consider several such applications, including the classical
Rellich-Kondrachov compact embedding theorem and results for the
class of $s$-John domains in $\mathbb{R}^n$, the latter for
weights equal to powers of the distance to the boundary. We also
derive a compactness result for Lebesgue spaces on quasimetric
spaces unrelated to $\mathbb{R}^n$ and possibly without any notion
of gradient.}}
\section{The General Theorem}
The main goal of this paper is to generalize the classical Rellich-Kondrachov theorem concerning compact embedding of Sobolev spaces into Lebesgue spaces. Our principal result applies not only to the classical Sobolev spaces on open sets $\Omega\subset \mathbb{R}^n$ but also allows us to treat the degenerate Sobolev spaces defined in \cite{SW2}, and to obtain compact embedding of them into various $L^q(\Omega)$ spaces. These degenerate Sobolev spaces are associated with quadratic forms $Q(x,\xi) = \xi' Q(x)\xi$, $x\in \Omega, \xi \in \mathbb{R}^n$, which are nonnegative but may vanish identically in $\xi$ for some values of $x$. Such quadratic forms and Sobolev spaces arise naturally in the study of existence and regularity of weak solutions of some second order subelliptic linear/quasilinear partial differential equations; see, e.g., \cite[2]{SW1}, \cite{R1}, \cite{MRW}, \cite{RSW}.
The Rellich-Kondrachov theorem is frequently used to study the existence of solutions to elliptic equations, a famous example being subcritical and critical Yamabe equations, resulting in the solution of Yamabe's problem; see \cite{Y}, \cite{T}, \cite{A}, \cite{S}. Further applications lie in proving the existence of weak solutions to Dirichlet problems for elliptic equations with rough boundary data and coefficients; see \cite{GT}. In a sequel to this paper, we will apply our compact embedding results to study the existence of solutions for some classes of degenerate equations.
In this section, we will state and prove our most general compact embedding results. In Sections 2 and 3, we study some applications to classical and degenerate Sobolev spaces, respectively. In Section 4, more general results in quasimetric spaces are studied.
We begin by listing some useful notation. Let $w$ be a measure on a $\sigma$-algebra $\Sigma$ of subsets of a set $\Omega$, with $\Omega \in \Sigma$. For $0< p\le \infty$, let $L^p_w(\Omega)$ denote the class of real-valued measurable functions
$f$ satisfying $||f||_{L^p_w(\Omega)} <\infty$, where
$||f||_{L^p_w(\Omega)} = \Big(\int_\Omega |f|^pdw
\Big)^{1/p}$ if $p <\infty$ and $||f||_{L^\infty_w(\Omega)} =
\text{ess sup}_\Omega\, |f|$, the essential supremum being taken with respect to $w$-measure. When dealing with generic functions in $L^p_w(\Omega)$, we will not distinguish between functions which are equal a.e.-$w$. For $E\in \Sigma$, $w(E)$ denotes the $w$-measure of $E$, and if $0<w(E)<\infty$ then $f_{E,w}$ denotes the $w$-average of $f$ over $E$: $f_{E,w} = \frac{1}{w(E)}\int_E fdw$. Throughout the paper, positive constants will be denoted by $C$ or $c$ and their dependence on important parameters will be indicated.
For $k\in\mathbb{N}$, let $\mathscr{X}(\Omega)$ be a normed linear space of measurable $\mathbb{R}^k$-valued functions $\g$ defined on
$\Omega$ with norm $||\g||_{\mathscr{X}(\Omega)}$. We assume that there is a subset $\Sigma_0\subset \Sigma$ so that $(\mathscr{X}(\Omega), \Sigma_0)$ satisfies the following properties:\\
\noindent{(A)} For any $\g\in\mathscr{X}(\Omega)$ and $F\in \Sigma_0$, the function $\g\chi_F\in \mathscr{X}(\Omega)$, where $\chi_F$ denotes the characteristic function of $F$.\\
\noindent{($B_p$)} There are constants $C_1, C_2, p$ satisfying $1 \le C_1, C_2, p < \infty$ so that if $\{F_\ell\}$ is a finite collection of sets in $\Sigma_0$ with $\displaystyle\sum_{\ell} \chi_{F_\ell}(x) \leq C_1$ for all $x\in\Omega$, then \begin{eqnarray}
\displaystyle\sum_{\ell} ||\g\chi_{F_\ell}||^{p}_{\mathscr{X}(\Omega)} \leq C_2 ||\g||^{p}_{\mathscr{X}(\Omega)}\quad \text{for all $\g\in \mathscr{X}(\Omega)$.} \nonumber \end{eqnarray}
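For instance, if $k=1$, $\mathscr{X}(\Omega)=L^p_w(\Omega)$ with its usual norm and $\Sigma_0=\Sigma$, then (A) is clear, and for any finite collection $\{F_\ell\}\subset\Sigma_0$ with $\displaystyle\sum_{\ell} \chi_{F_\ell}\le C_1$ on $\Omega$ we have \begin{eqnarray} \displaystyle\sum_{\ell} ||\g\chi_{F_\ell}||^{p}_{\mathscr{X}(\Omega)} = \int_\Omega |\g|^p \sum_{\ell}\chi_{F_\ell}\,dw \le C_1\, ||\g||^{p}_{\mathscr{X}(\Omega)}, \nonumber \end{eqnarray} so ($B_p$) holds with $C_2=C_1$. The same computation applies to $\mathbb{R}^k$-valued $\g$ whenever $||\g||_{\mathscr{X}(\Omega)}$ is the $L^p$ norm of $|\g|$ with respect to some measure; this is the situation in the applications of \S 2.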
For $1\le N \le \infty$, we will often consider the product space $L^N_w(\Omega)\times \mathscr{X}(\Omega)$. This is a normed linear space with norm
\begin{eqnarray}\label{bspace} ||(f,\g)||_{L^N_w(\Omega)\times \mathscr{X}(\Omega)} =
||f||_{L^N_w(\Omega)} + ||\g||_{\mathscr{X}(\Omega)}. \end{eqnarray} A set ${\cal S}\subset L^N_w(\Omega)\times \mathscr{X}(\Omega)$ will be called a {\it bounded set in} $L^N_w(\Omega)\times \mathscr{X}(\Omega)$ if $$
\sup_{(f,\g)\in {\cal S}} ||(f,\g)||_{L^N_w(\Omega)\times \mathscr{X}(\Omega)} < \infty. $$ Projection maps such as the one defined by \begin{eqnarray}\label{projection} \pi:(f,\g) \rightarrow f, \quad (f,\g) \in L^N_w(\Omega)\times \mathscr{X}(\Omega), \end{eqnarray} will play a role in our results. If $w(\Omega) < \infty$, then $\pi(L^N_w(\Omega)\times \mathscr{X}(\Omega))\subset L^q_w(\Omega)$ if $1\le q \le N.$
\begin{thm}\label{general} Let $w$ be a finite measure on a $\sigma$-algebra $\Sigma$ of subsets of a set $\Omega$, with $\Omega \in \Sigma$. Let $1 \le p < \infty$, $1<N \le \infty$, $\mathscr{X}(\Omega)$ be a normed linear space satisfying properties (A) and ($B_p$) relative to a collection $\Sigma_0 \subset \Sigma$, and let ${\cal S}$ be a bounded set in $L^N_w(\Omega)\times \mathscr{X}(\Omega)$.
Suppose that ${\cal S}$ satisfies the following: given $\epsilon>0$, there are a finite number of pairs $\{E_\ell,F_\ell\}_{\ell=1}^J$ with $E_\ell \in \Sigma$ and $F_\ell\in \Sigma_0$ (the pairs and $J$ may depend on $\epsilon$) such that
\noindent (i) $w\big(\Omega\setminus\cup_{\ell} E_\ell\big) <\epsilon$ and $w(E_\ell) >0$;
\noindent (ii) $\{F_\ell\}$ has bounded overlaps independent of $\epsilon$ with the same overlap constant as in ($B_p$), i.e., \begin{eqnarray}\label{bddov} \displaystyle\sum_{\ell=1}^J \chi_{F_\ell}(x) \leq C_1, \quad x\in \Omega, \end{eqnarray} for $C_1$ as in ($B_p$);
\noindent (iii) for every $(f,\g)\in {\cal S}$, the local Poincar\'e-type inequality
\begin{eqnarray}\label{pc} ||f-f_{E_\ell,w}||_{L^{p}_w(E_\ell)} \leq \epsilon\,
||\g\chi_{F_\ell}||_{\mathscr{X}(\Omega)} \end{eqnarray} holds for each $(E_\ell, F_\ell)$.
Let $\hat{\cal S}$ be the set defined by \begin{eqnarray}\label{hatS} \hat{\cal S}= \left\{f\in L^N_w(\Omega):\, \text{there exists $\{(f^j,\g^j)\}_{j=1}^\infty \subset {\cal S}$ with $f^j \to f$ a.e.-$w$}\right\}. \end{eqnarray} Then $\hat{\cal S}$ is compactly embedded in $L^q_w(\Omega)$ if $1\le q <N$ in the sense that for every sequence $\{f_k\} \subset \hat{\cal S}$, there is a single subsequence $\{f_{k_i}\}$ and a function $f\in L^N_w(\Omega)$ such that $f_{k_i}\to f$ pointwise a.e.-$w$ in $\Omega$ and in $L^q_w(\Omega)$ norm for $1\leq q<N$. \end{thm}
Before proceeding with the proof of Theorem \ref{general}, we make several simple observations. First, in the definition of $\hat{\cal S}$, the property that $f\in L^N_w(\Omega)$ follows by Fatou's lemma since the associated functions $f^j$ are bounded in $L^N_w(\Omega)$, as ${\cal S}$ is bounded in $L^N_w(\Omega) \times\mathscr{X}(\Omega)$ by hypothesis. Fatou's lemma also shows that $\hat{\cal S}$ is a bounded set in $L^N_w(\Omega)$. Moreover, since $N>1$, if $\{f^j\}$ is bounded in $L^N_w(\Omega)$ and $f^j\to f$ a.e.-$w$, then $(f^j)_{E,w} \to f_{E,w}$ for all $E\in \Sigma$; in fact, in this situation, by using Egorov's theorem, we have $\int_\Omega f^j \varphi dw \to \int_\Omega f\varphi dw$ for all $\varphi \in L^{N'}_w(\Omega), 1/N +1/N'=1$.
Next, while the hypothesis $w(E_\ell)>0$ in assumption (i) ensures that the averages $f_{E_\ell,w}$ in (\ref{pc}) are well-defined, it is not needed since we can discard any pair $E_\ell, F_\ell$ with $w(E_\ell) =0$ without affecting the inequality $w(\Omega\setminus \cup E_\ell)<\epsilon$ or (\ref{bddov}) and (\ref{pc}).
Finally, since $\hat{\cal S}$ contains the first component $f$ of any pair $(f,\g) \in {\cal S}$, a simple corollary of Theorem \ref{general} is that the projection $\pi$ defined in (\ref{projection}) is a compact mapping of ${\cal S}$ into $L^q_w(\Omega)$, $1\le q <N$, in the sense that for every sequence $\{(f_k,\g_k)\}\subset {\cal S}$, there is a subsequence $\{f_{k_i}\}$ and a function $f\in L^N_w(\Omega)$ such that $f_{k_i}\to f$ pointwise a.e.-$w$ in $\Omega$ and in $L^q_w(\Omega)$ norm for $1\leq q<N$. \\
\noindent {\bf Proof:} Let ${\cal S}$ satisfy the hypotheses and suppose $\{f_k\}_{k\in\mathbb{N}} \subset \hat{\cal S}$. For each $f_k$, use the definition of $\hat{\cal S}$ to choose a sequence $\{(f_k^j, \g_k^j)\}_j \subset {\cal S}$ with $f_k^j \to f_k$ a.e.-$w$ as $j\to \infty$. Since ${\cal S}$ is bounded in $L^N_w(\Omega)\times \mathscr{X}(\Omega)$, there is $M\in (0,\infty)$
so that $||(f_k^j,\g_k^j)||_{L^N_w(\Omega)\times \mathscr{X}(\Omega)}
\leq M$ for all $k$ and $j$. Also, as noted above, $\{f_k\}$ is bounded in $L^N_w(\Omega)$ norm; in fact $||f_k||_{L^N_w(\Omega)} \le M$ for the same constant $M$ and all $k$.
Since $\{f_k\}$ is bounded in $L^N_w(\Omega)$, then if $1<N<\infty$, it has a weakly convergent subsequence, while if $N=\infty$, it has a subsequence which converges in the weak-star topology. In either case, we relabel the subsequence as $\{f_k\}$ to preserve the index. Fix $\epsilon >0$ and let $\{E_\ell, F_\ell\}_{\ell =1}^J$ satisfy the hypotheses of the theorem relative to $\epsilon$. Setting $\Omega^\epsilon = \cup E_\ell$, we have by assumption (i) that \begin{eqnarray} \label{Oj} w(\Omega\setminus\Omega^\epsilon) < \epsilon. \end{eqnarray}
Let us show that there is a positive constant $C$ independent of $\epsilon$ so that \begin{eqnarray}\label{diff}
\displaystyle\sum_\ell ||f_k - (f_k)_{E_\ell,w}||_{L^p_w(E_\ell)}^p \le C\epsilon^p \quad\mbox{for all $k$.} \end{eqnarray} Fix $k$ and let $\Delta$ denote the expression on the left side of (\ref{diff}). Since $f_k^j - (f_k^j)_{E_\ell,w} \to f_k - (f_k)_{E_\ell,w}$ a.e.-$w$ as $j \to \infty$, Fatou's lemma gives \begin{eqnarray}
\Delta \le \displaystyle\sum_\ell \liminf_{j\to\infty} ||f_k^j -
(f_k^j)_{E_\ell,w}||_{L^p_w(E_\ell)}^p.\nonumber \end{eqnarray} Consequently, by using the Poincar\'e inequality (\ref{pc}) for ${\cal S}$ and superadditivity of $\liminf$, we obtain \begin{eqnarray}
\Delta \le \liminf_{j\to \infty} \displaystyle\sum_\ell \epsilon^p ||\g_k^j
\chi_{F_\ell} ||^p_{\mathscr{X}(\Omega)}.\nonumber \end{eqnarray} By (\ref{bddov}), the sets $F_\ell$ have finite overlaps uniformly in $\epsilon$, with the same overlap constant $C_1$ as in property ($B_p$) of $\mathscr{X}(\Omega)$. Hence, by property ($B_p$) applied to the last expression together with boundedness of ${\cal S}$, \begin{eqnarray} \Delta \le C_2 \epsilon^p \liminf_{j\to \infty}
||\g_k^j||_{\mathscr{X}(\Omega)}^p \le C_2M^p\epsilon^p.\nonumber \end{eqnarray} This proves (\ref{diff}) with $C= C_2M^p$.
Next note that
\[ \int_{\Omega^\epsilon}|f_m-f_k|^{p}dw \le\,
\sum_{\ell}\int_{E_\ell}|f_m-f_k|^{p}dw \] \[
\le 2^{p-1}\Big(\sum_{\ell} \int_{E_\ell}|f_m-f_k
-(f_m-f_k)_{E_\ell,w}|^{p}dw +
\displaystyle\sum_{\ell}|(f_m-f_k)_{E_\ell,w}|^{p}w(E_\ell)\Big) \] \begin{eqnarray} \label{IandII} = 2^{p-1}(I + II). \end{eqnarray} We will estimate I and II separately. We have
\begin{eqnarray} I \le 2^{p-1}\left(\displaystyle\sum_{\ell} ||f_m -
(f_m)_{E_\ell,w}||_{L^p_w(E_\ell)}^p + \displaystyle\sum_\ell
||f_k - (f_k)_{E_\ell,w}||_{L^p_w(E_\ell)}^p\right)\nonumber \end{eqnarray} \begin{eqnarray} \label{I} \leq 2^{p-1}\left(C\epsilon^{p} + C\epsilon^p \right) = 2^{p}C \epsilon^{p} \end{eqnarray} by (\ref{diff}). To estimate $II$, first note that \[
II = \displaystyle\sum_{\ell=1}^J |(f_m-f_k)_{E_\ell,w}|^{p}w(E_\ell) =
\displaystyle\sum_{\ell=1}^J \frac{1}{w(E_\ell)^{p-1}}\Big|\int_\Omega
(f_m-f_k)\chi_{E_\ell}dw\Big|^{p}. \] Since $w(\Omega)<\infty$, each characteristic function $\chi_{E_\ell}\in L^{N'}_w(\Omega)$, $1/N + 1/N' =1$ (with $N'=1$ if $N=\infty$). As $\{f_k\}$ converges weakly in $L^N_w(\Omega)$ when $1<N<\infty$, or converges in the weak-star sense when $N=\infty$, then for $m,k$ sufficiently large depending on $\epsilon$, and for all $1\leq \ell \leq J$, \[
\frac{1}{w(E_\ell)^{p-1}}\Big| \int_{\Omega} (f_m-f_k)
\chi_{E_\ell}dw\Big|^{p} \leq \displaystyle\frac{\epsilon^{p}}{J}. \] Thus $II\leq \epsilon^{p}$ for $m,k$ sufficiently large depending on $\epsilon$. Combining this estimate with (\ref{IandII}) and (\ref{I}) shows that
\begin{eqnarray} \label{COj} ||f_m-f_k||_{L^{p}_w(\Omega^\epsilon)}< C\epsilon \end{eqnarray} for $m,k$ sufficiently large and $C =C(M, C_2)$.
Let us now show that $\{f_k\}$ is a Cauchy sequence in
$L^{1}_w(\Omega)$. For $m, k$ as in (\ref{COj}), H\"older's inequality and the fact that $||f_k||_{L^N_w(\Omega)} \leq M$ for all $k$ yield
\begin{eqnarray} ||f_m-f_k||_{L^{1}_w(\Omega)} &\le & ||f_m-f_k
||_{L^{1}_w(\Omega^\epsilon)} + ||f_m-f_k||_{L^{1}_w(\Omega
\setminus \Omega^\epsilon)}\nonumber\\ &\leq& ||f_m-
f_k||_{L^{p}_w(\Omega^\epsilon)} w(\Omega^\epsilon)^{\frac{1}{p'}} +
||f_m-f_k||_{L^{N}_w(\Omega \setminus\Omega^\epsilon)} w(\Omega\setminus \Omega^\epsilon)^{\frac{1}{N'}}\nonumber\\ &<& C\epsilon w(\Omega^\epsilon)^{\frac{1}{p'}} + 2Mw(\Omega\setminus \Omega^\epsilon)^{\frac{1}{N'}}\nonumber\\ &<& C\epsilon w(\Omega)^\frac{1}{p'} +2M\epsilon^{\frac{1}{N'}} \quad \mbox{by (\ref{Oj})}.\nonumber \end{eqnarray} Since $N'<\infty$, it follows that $\{f_k\}$ is Cauchy in $L^{1}_w(\Omega)$. Hence it has a subsequence (again denoted by $\{f_k\}$) that converges in $L^1_w(\Omega)$ and pointwise a.e.-$w$ in $\Omega$ to a function $f\in L^1_w(\Omega)$. If $N=\infty$, $\{f_k\}$ is bounded in $L^\infty_w(\Omega)$ by hypothesis, so its pointwise limit $f \in L^\infty_w(\Omega)$. If $N<\infty$, since $\{f_k\}$ is bounded in $L^N_w(\Omega)$, Fatou's Lemma implies that $f\in L^N_w(\Omega)$. This completes the proof in case $q=1$.
For general $q$, we will use the same subsequence $\{f_k\}$ as above. Thus we only need to show that $\{f_k\}$ converges in $L^q_w(\Omega)$ for $1<q<N$. We will use H\"older's inequality. Given $q\in (1,N)$, choose $\lambda\in(0,1)$, namely $\lambda = \big(\frac{1}{q}-\frac{1}{N}\big)/\big(1 -\frac{1}{N}\big)$, hence $\lambda = 1/q$ if $N=\infty$, so that
\begin{eqnarray} \label{interp} ||f_m-f_k||_{L^q_w(\Omega)} \leq
||f_m-f_k||_{L^{1}_w(\Omega)}^\lambda
||f_m-f_k||_{L^N_w(\Omega)}^{1-\lambda}.
\end{eqnarray} As before, $||f_k||_{L^N_w(\Omega)} \le M$, and therefore
$||f_m-f_k||_{L^N_w(\Omega)}^{1-\lambda}\leq (2M)^{1-\lambda}$, giving by (\ref{interp}) that $\{f_k\}$ is Cauchy in $L^q_w(\Omega)$ as it is Cauchy in $L^{1}_w(\Omega)$. This completes the proof of Theorem \ref{general}. $\Box$
A compact embedding result is also proved in \cite[Theorem 3.4]{FSSC} by using Poincar\'e type estimates. However, Theorem \ref{general} applies to situations not considered in \cite{FSSC} since it is not restricted to the context of Lipschitz vector fields in $\mathbb{R}^n$. Other abstract compact embedding results can be found in \cite[Theorem 4]{HK1} and \cite[Theorem 8.1]{HK2}, including a version (see \cite[Theorem 5]{HK1}) for weighted Sobolev spaces with nonzero continuous weights, and a version in \cite{HK2} for metric spaces with a single doubling measure. The proof in \cite{HK1} assumes prior knowledge of the classical Rellich-Kondrachov compactness theorem (see e.g. \cite[Theorem 7.22(i)]{GT} and below).\\
By making minor changes in the proof of Theorem \ref{general}, we can obtain a sufficient condition for a bounded set in $L^N_w(\Omega)$ to be precompact in $L^q_w(\Omega)$, $1\le q< N$, without mentioning the sets $\{F_\ell\}$, the space $\mathscr{X}(\Omega)$, properties (A) and ($B_p$), or conditions (\ref{bddov}) and (\ref{pc}). We state this result in the next theorem. An application is given in \S 4.
\begin{thm}\label{absgeneral} Let $w$ be a finite measure on a $\sigma$-algebra $\Sigma$ of subsets of a set $\Omega$, with $\Omega \in \Sigma$. Let $1 \le p <\infty$, $1< N \le \infty$ and ${\mathcal P}$ be a bounded subset of $L^{N}_w(\Omega)$. Suppose there is a positive constant $C$ so that for every $\epsilon>0$, there are a finite number of sets $E_\ell \in \Sigma$ with
\noindent (i) $w\big(\Omega\setminus\cup_{\ell} E_\ell\big) <\epsilon$ and $w(E_\ell) >0$;
\noindent (ii) for every $f \in {\mathcal P}$, \begin{equation}\label{abspoincare}
\sum_{\ell} ||f-f_{E_\ell,w}||^p_{L^{p}_w(E_\ell)}\le C\epsilon^p. \end{equation}
\noindent Let $$\hat{{\mathcal P}}=\{ f\in L^N_w(\Omega): \ \mbox{there exists } \{f^j\}\subset {\mathcal P} \ \mbox{with $ f^j \to f \ a.e.$-$w$ } \}. $$ Then for every sequence $\{f_k\}\subset \hat{{\mathcal P}}$, there is a single subsequence $\{f_{k_i}\}$ and a function $f\in L^N_w(\Omega)$ such that $f_{k_i}\to f$ pointwise a.e.-$w$ in $\Omega$ and in $L^q_w(\Omega)$ norm for $1\leq q<N$. \end{thm}
\begin{remark}\label{closureS} \begin{enumerate}
\item Given $\epsilon >0,$ let $\{E_\ell\}$ satisfy hypothesis (i) of Theorem \ref{absgeneral}. Hypothesis (ii) of Theorem \ref{absgeneral} is clearly true for $\{E_\ell\}$ if for every $f\in {\mathcal P}$, there are nonnegative constants $\{a_\ell\}$ such that
\begin{eqnarray}\label{pc2} ||f-f_{E_\ell,w}||_{L^{p}_w(E_\ell)} \leq \epsilon\ a_\ell \end{eqnarray} and \begin{equation}\label{sumcond} \sum a_\ell^p \le C \end{equation} with $C$ independent of $f, \epsilon$. The constants $\{a_\ell\}$ may vary with $f$ and $\epsilon$.
\item Theorem \ref{general} is a corollary of Theorem \ref{absgeneral}. To see why, suppose that the hypothesis of Theorem \ref{general} holds. Define ${\mathcal P}$ by ${\mathcal P} = \pi({\cal S}) = \{f: (f,\g)\in {\cal S}\}$. Let $\epsilon>0$ and choose $\{(E_\ell, F_\ell)\}$ as in Theorem \ref{general}. Given $f\in {\mathcal P}$, choose any $\g$ such that $(f,\g) \in {\cal S}$ and set $a_\ell =
||\g\chi_{F_\ell}||_{\mathscr{X}(\Omega)}$ for all $\ell$. Then (\ref{pc}), (\ref{bddov}) and property ($B_p$) of $\mathscr{X}(\Omega)$ imply (\ref{pc2}) and (\ref{sumcond}). The preceding remark shows that the hypothesis of Theorem \ref{absgeneral} holds. The conclusion of Theorem \ref{general} now follows from Theorem \ref{absgeneral}. \end{enumerate} \end{remark}
{\bf Proof of Theorem \ref{absgeneral}: } Theorem \ref{absgeneral} can be proved by checking through the proof of Theorem \ref{general}. In fact, the nature of hypothesis (\ref{abspoincare}) allows simplification of the proof. First recall that if $f^j\to f$ a.e.-$w$ and $\{f^j\}$ is bounded in $L^N_w(\Omega)$, then $(f^j)_{E,w}\to f_{E, w}$ for every $E\in \Sigma$. Therefore, by the definition of $\hat{{\mathcal P}}$ and Fatou's lemma, the truth of (\ref{abspoincare}) for all $f\in {\mathcal P}$ implies its truth for all $f \in \hat{{\mathcal P}}$. Given a sequence $\{f_k\}$ in $\hat{{\mathcal P}}$, we follow the proof of Theorem \ref{general} but no longer need to introduce the $\{f_k^j\}$ or prove (\ref{diff}) since (\ref{diff}) now follows from the fact that (\ref{abspoincare}) holds for $\hat{{\mathcal P}}$. Further details are left to the reader. \qed
We close this section by listing an alternate version of Theorem \ref{general} that we will use in \S 3.4 when we consider local results.
\begin{thm}\label{generallocal} Let $w$ be a measure (not necessarily finite) on a $\sigma$-algebra $\Sigma$ of subsets of a set $\Omega$, with $\Omega \in \Sigma$. Let $1 \le p < \infty$, $1<N \le \infty$, $\mathscr{X}(\Omega)$ be a normed linear space satisfying properties (A) and ($B_p$) relative to a set $\Sigma_0 \subset \Sigma$, and let ${\cal S}$ be a collection of pairs $(f, \bf{g})$ such that $f$ is $\Sigma$-measurable and ${\bf g} \in \mathscr{X}(\Omega)$.
Suppose that ${\cal S}$ satisfies the following conditions relative to a fixed set $\Omega'\in \Sigma$ (in particular $\Omega'\subset \Omega$): for each $\epsilon =\epsilon_j= 1/j$ with $j\in \mathbb{N}$, there are a finite number of pairs $\{E_\ell^\epsilon, F_\ell^\epsilon\}_{\ell}$ with $E_\ell^\epsilon \in \Sigma$ and $F_\ell^\epsilon \in \Sigma_0$ such that
\noindent (i) $w(\Omega' \setminus \cup_\ell E_\ell^\epsilon)=0$ and $0<w(E_\ell^\epsilon) <\infty$;
\noindent (ii) $\{F_\ell^\epsilon\}_\ell$ has bounded overlaps independent of $\epsilon$ with the same overlap constant as in ($B_p$), i.e., \begin{eqnarray}\nonumber \displaystyle\sum_\ell \chi_{F_\ell^\epsilon}(x) \leq C_1, \quad x\in \Omega, \end{eqnarray} for $C_1$ as in ($B_p$);
\noindent (iii) for every $(f,\g)\in {\cal S}$, the local Poincar\'e-type inequality
\begin{eqnarray}\nonumber ||f-f_{E_\ell^\epsilon,w}||_{L^{p}_w(E_\ell^\epsilon)}
\leq \epsilon\, ||\g\chi_{F_\ell^\epsilon}||_{\mathscr{X}(\Omega)} \end{eqnarray} holds for each $(E_\ell^\epsilon, F_\ell^\epsilon)$.
Then for every sequence $\{(f_k, \g_k)\}$ in ${\cal S}$ with \begin{eqnarray}\label{bddlocal}
\sup_k\left[||f_k||_{L^N_w(\cup_{\ell, j}E_\ell^{1/j})} +
||\g_k||_{\mathscr{X}(\Omega)} \right] < \infty, \end{eqnarray} there is a subsequence $\{f_{k_i}\}$ of $\{f_k\}$ and a function $f\in L^N_w(\Omega')$ such that $f_{k_i}\to f$ pointwise a.e.-$w$ in $\Omega'$ and in $L^q_w(\Omega')$ norm for $1\leq q\le p$. If $p<N$, then also $f_{k_i}\to f$ in $L^q_w(\Omega')$ norm for $1\leq q <N$. \end{thm}
The principal difference between the assumptions in Theorems \ref{general} and \ref{generallocal} occurs in hypothesis (i). When we apply Theorem \ref{generallocal} in \S 3.4, the sets $\{E_\ell^\epsilon\}$ will satisfy $\Omega' \subset \cup_\ell E_\ell^\epsilon$ for each $\epsilon$, and consequently the condition in hypothesis (i) that $w(\Omega'\setminus \cup_\ell E^\epsilon_\ell)=0$ for each $\epsilon$ will be automatically true. Unlike Theorem \ref{general}, the value of $q$ in Theorem \ref{generallocal} is always allowed to equal $p$. Although $w(\Omega)$ is not assumed to be finite in Theorem \ref{generallocal}, $w(\Omega')<\infty$ is true due to hypothesis (i) and the fact that the number of $E_\ell^\epsilon$ is finite for each $\epsilon$. As in Theorem \ref{general}, the hypothesis $w(E_\ell^\epsilon)>0$ is dispensable.
{\bf Proof of Theorem \ref{generallocal}: } The proof is like that of Theorem \ref{general}, with minor changes and some simplifications. We work directly with the pairs $(f_k,\g_k)$ without considering approximations $(f_k^j, \g_k^j)$. Due to the form of assumption (i) in Theorem \ref{generallocal}, neither the set $\Omega^\epsilon$ nor estimate (\ref{Oj}) is now needed. Since $w(\Omega' \setminus \cup_\ell E_\ell^\epsilon)=0$ for each $\epsilon =1/j$, we can replace $\Omega^\epsilon$ by $\Omega'$ in the proof, obtaining the estimate \begin{eqnarray}\label{newCOj}
||f_m-f_k||_{L^{p}_w(\Omega')}< C\epsilon \end{eqnarray} as an analogue of (\ref{COj}). In deriving (\ref{newCOj}), the weak and weak-star arguments are guaranteed since by (\ref{bddlocal}), \[
\sup_k ||f_k||_{L^N_w(\cup_{\ell, j}E_\ell^{1/j})} <\infty. \] The main change in the proof comes by observing that the entire argument formerly used to show that $\{f_k\}$ is Cauchy in $L^1_w(\Omega)$ is no longer needed. In fact, (\ref{newCOj}) proves that $\{f_k\}$ is Cauchy in $L^p_w(\Omega')$, and therefore it is also Cauchy in $L^q_w(\Omega')$ if $1\le q \le p$ since $w(\Omega') <\infty$. The first conclusion in Theorem \ref{generallocal} then follows. To prove the second one, assuming that $p,q <N$, we use an analogue of (\ref{interp}) with $\Omega'$ in place of $\Omega$ and the same choice of $\lambda$, namely, \[
||f_m-f_k||_{L^q_w(\Omega')} \leq
||f_m-f_k||_{L^1_w(\Omega')}^\lambda
||f_m-f_k||_{L^N_w(\Omega')}^{1-\lambda}. \] The desired conclusion then follows as before since we have already shown that the first factor on the right side tends to $0$.
\section{Applications in the Nondegenerate Case} \setcounter{equation}{0}
\setcounter{theorem}{0}
Roughly speaking, a consequence of Theorem \ref{general} is that a set of functions which is bounded in $L^N_w(\Omega)$ is precompact in $L^q_w(\Omega)$ for $1\le q<N$ if the gradients of the functions are bounded in an appropriate norm, and a {\it local} Poincar\'e inequality holds for them. The requirement of boundedness in $L^N_w(\Omega)$ will be fulfilled if, for example, the functions satisfy a {\it global} Poincar\'e or Sobolev estimate with exponent $N$ on the left-hand side. In order to illustrate this principle more precisely, we first consider the classical gradient operator and functions on $\mathbb{R}^n$ with the standard Euclidean metric. We include a simple way to see that the Rellich-Kondrachov compactness theorem follows from our results. Our derivation of this fact is different from those in \cite{AF} and \cite{GT}; in particular, it avoids using the Arzel\'a-Ascoli theorem and regularization of functions by convolution. We also list compactness results for the special class of $s$-John domains in $\mathbb{R}^n$. In \cite{HK1}, the authors mention that such results follow from their development without giving specific statements. See also \cite[Theorem 8.1]{HK2}. We list results for degenerate quadratic forms and vector fields in Section 3.
We begin by proving a compact embedding result for some Sobolev spaces involving two measures. Let $w$ be a measure on the Borel subsets of a fixed open set $\Omega\subset \mathbb{R}^n$, and let $\mu$ be a measure on the $\sigma$-algebra of Lebesgue measurable subsets of $\Omega$. We also assume that $\mu$ is absolutely continuous with respect to Lebesgue measure. If $1\le p< \infty$, let $E^p_\mu(\Omega)$ denote the class of locally Lebesgue integrable functions on $\Omega$ with distributional derivatives in $L^p_\mu(\Omega)$. If $1\le N \le \infty$, we say that a set $Y \subset L^N_w(\Omega)\cap E^p_\mu(\Omega)$ (intersection of function spaces instead of normed spaces of equivalence classes) is {\it bounded in} $L^N_w(\Omega)\cap E^p_\mu(\Omega)$ if \[
\sup_{f\in Y}\left\{||f||_{L^N_w(\Omega)} + ||\nabla f||_{L^p_\mu(\Omega)}\right\} < \infty. \]
We use $D$ to denote a generic open Euclidean ball. The radius and center of $D$ will be denoted $r(D)$ and $x_D$, and if $C$ is a positive constant, $CD$ will denote the ball concentric with $D$ whose radius is $Cr(D)$.
\begin{thm}\label{main2} Let $\tilde{\Omega}\subset \Omega$ be open sets in $\mathbb{R}^n$. Let $w$ be a Borel measure on $\Omega$ with $w(\tilde{\Omega})= w(\Omega) <\infty$ and $\mu$ be a measure on the Lebesgue measurable sets in $\Omega$ which is absolutely continuous with respect to Lebesgue measure. Let $1\le p <\infty$, $1<N\le \infty$ and $\mathscr{S}\subset L^N_w(\Omega)\cap E^p_\mu(\Omega)$, and suppose that for all $\epsilon>0$, there exists $\delta_\epsilon>0$ such that \begin{equation}\label{poincare2}
\|f-f_{D,w}\|_{L^p_w(D)} \le \epsilon \|\nabla f\|_{L^p_w(D)} \ \mbox{ for all } f\in \mathscr{S} \end{equation} and all Euclidean balls $D$ with $r(D)<\delta_\epsilon$ and $2D\subset \tilde{\Omega}$. Then for any sequence $\{f_k\} \subset \mathscr{S}$ that is bounded in $L^N_w(\Omega) \cap E_\mu^p(\Omega)$, there is a subsequence $\{f_{k_i}\}$ and a function $f \in L^N_w(\Omega)$ such that $\{f_{k_i}\} \to f$ pointwise a.e.-$w$ in $\Omega$ and in $L^q_w(\Omega)$ norm for $1\le q<N$. \end{thm}
Before proving Theorem \ref{main2}, we give typical examples of $\tilde{\Omega}$ and $w$ with $w(\tilde{\Omega})= w(\Omega) <\infty$. For any two nonempty sets $E_1, E_2 \subset \mathbb{R}^n$, let \begin{equation}\label{rho}
\rho(E_1,E_2) = \inf \{|x-y|: x\in E_1, y\in E_2\} \end{equation} denote the Euclidean distance between $E_1$ and $E_2$. If $x\in \mathbb{R}^n$ and $E$ is a nonempty set, we will write $\rho(x,E)$ instead of $\rho(\{x\},E)$. Let $\tilde{\Omega}$ be an open subset of $\Omega$. If $\Omega$ is bounded and $\Omega\setminus\tilde{\Omega}$ has Lebesgue measure $0$, the measure $w$ on $\Omega$ defined by $dw =\rho(x, \mathbb{R}^n \setminus \tilde{\Omega})^\alpha dx$ clearly has the desired properties if $\alpha \ge 0$. The range of $\alpha$ can be increased to $\alpha >-1$ if $\Omega$ is a Lipschitz domain and $\Omega \setminus \tilde{\Omega}$ is a finite set. Indeed, if $\partial\Omega$ is described in local coordinates $x= (x_1,\dots,x_n)$ by $x_n= F(x_1,\dots,x_{n-1})$ with $F$ Lipschitz, then the distance from $x$
to $\partial\Omega$ is equivalent to $|x_n -F(x_1,\dots, x_{n-1})|$, and consequently the restriction $\alpha >-1$ guarantees that $w$ is finite near $\partial\Omega$ by using Fubini's theorem; see also \cite[Remark 3.4(b)]{C1}. If $\Omega$ is bounded and $\Omega\setminus \tilde{\Omega}$ is finite, but with no restriction on $\partial\Omega$, the range can clearly be further increased to $\alpha >-n$ for the measure $\rho(x, \Omega\setminus {\tilde{\Omega}})^\alpha dx$. Also note that any $w$ without point masses satisfies $w(\tilde{\Omega}) = w(\Omega)$ if $\tilde{\Omega}$ is obtained by deleting a countable subset of $\Omega$.
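As a simple one-dimensional illustration of the restriction $\alpha>-1$, take $\Omega=\tilde{\Omega}=(0,1)\subset \mathbb{R}$, so that $\rho(x,\mathbb{R}\setminus \tilde{\Omega})=\min\{x,1-x\}$; then \[ w(\Omega)=\int_0^1 \min\{x,1-x\}^\alpha\, dx = 2\int_0^{1/2}x^\alpha\, dx = \frac{2}{\alpha+1}\Big(\frac{1}{2}\Big)^{\alpha+1}, \] which is finite precisely when $\alpha>-1$.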
\noindent{\bf Proof of Theorem \ref{main2}:} We will verify the hypotheses of Theorem \ref{general}. Let \[
\mathscr{X}(\Omega) = \big\{\g = (g_1,\dots,g_n): |\g|= \big(\sum_{i=1}^n g_i^2\big)^{1/2} \in L^p_\mu(\Omega)\big\} \]
and $||\g||_{\mathscr{X}(\Omega)} = ||\g||_{L^p_\mu(\Omega)}$. Then $$
\|\nabla f\|\down{\mathscr{X}(\Omega)}=\|\nabla f\|\down{L^p_\mu(\Omega)}\quad\mbox{if $f\in E^p_\mu(\Omega)$.} $$ If $f \in E^p_\mu(\Omega)$, we may identify $f$ with the pair $(f,\nabla f)$ since the distributional gradient $\nabla f$ is uniquely determined by $f$ up to a set of Lebesgue measure zero. Then $L^N_w(\Omega)\cap E^p_\mu(\Omega)$ can be viewed as a subset of $L^N_w(\Omega)\times \mathscr{X}(\Omega)$. In Theorem \ref{general}, choose ${\cal S}$ to be the particular sequence $\{f_k\} \subset \mathscr{S}$ in the hypothesis of Theorem \ref{main2}, and choose $\Sigma$ to be the Lebesgue measurable subsets of $\Omega$ and $\Sigma_0$ to be the collection of balls $D \subset \Omega$. Then hypotheses (A) and ($B_p$) are valid with $C_2=C_1$ for any $C_1$. Given $\epsilon >0$, since $w(\tilde{\Omega})=w(\Omega)<\infty$, there is a compact set $K\subset \tilde{\Omega}$ with $w(\Omega\setminus K)<\epsilon$. Let $0<\delta'_\epsilon <\rho(K,\R^n\setminus \tilde{\Omega})$ (where $\rho(K, \R^n\setminus\tilde{\Omega})$ is interpreted as $\infty$ if $\tilde{\Omega} =\mathbb{R}^n$), let $\delta_\epsilon$ be as in (\ref{poincare2}), and fix $r_\epsilon$ with $0<r_\epsilon < \min\{\delta_\epsilon, \delta'_\epsilon\}$. By considering the triples of balls in a maximal collection of pairwise disjoint balls of radius $r_\epsilon/6$ centered in $K$, we obtain a collection $\{E_{\ell}^\epsilon\}_{\ell}$ of balls of radius $r_\epsilon/2$ which satisfy $2E_\ell^\epsilon \subset \tilde{\Omega}$, have bounded overlaps with overlap constant independent of $\epsilon$, and whose union covers $K$. Since $K$ is compact, we may assume the collection is finite. Also, \[ w\big(\Omega \setminus \cup_\ell E_\ell^\epsilon\big) \le w\big(\Omega \setminus K\big) <\epsilon, \] and (\ref{pc}) holds with $F_\ell =E_\ell= E_\ell^\epsilon$ by (\ref{poincare2}). Theorem \ref{main2} now follows from Theorem \ref{general} applied to $\Omega$. \qed
In particular, we obtain the following result when $w=\mu$ is a Muckenhoupt $A_p(\mathbb{R}^n)$ weight, i.e., when $d\mu = dw = \eta \,dx$ where $\eta(x)$ satisfies $$
\left(\frac{1}{|D|}\int_D\eta\,dx\right)\left(\frac{1}{|D|} \int_D \eta^{-1/(p-1)} dx\right)^{p-1} \le C $$
if $1<p<\infty$ and $|D|^{-1}\int_D \eta\,dx \le C\, \text{essinf}_D \eta$ if $p=1$ for all Euclidean balls $D$, with $C$ independent of $D$. As is well known, such a weight also satisfies the classical doubling condition \begin{eqnarray}\label{classicaldoubling} w(D_r(x)) \le C \left(\frac{r}{r'}\right)^\theta w(D_{r'}(x)), \quad 0<r'<r<\infty, \end{eqnarray} with $\theta \ge np-\epsilon$ for some $\epsilon>0$ if $p>1$, and with $\theta =n$ if $p=1$, where $C$ and $\theta$ are independent of $r, r', x$.
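As a concrete example, the power weight $\eta(x)=|x|^{1/2}$ is a standard member of $A_2(\mathbb{R})$, and the $A_2$ quantity displayed above is easy to examine numerically. The short Python sketch below is an illustration only (the sample of intervals and all names in it are arbitrary choices); it evaluates the two averages in closed form and reports the largest product observed, which remains bounded, in agreement with the definition.

\begin{verbatim}
# Illustration only: the A_2 product for the weight eta(x) = |x|^{1/2} over a
# sample of intervals on the real line.  The averages are computed from the
# exact antiderivative of |x|^c, namely sign(x)|x|^{c+1}/(c+1), valid for c > -1.
import numpy as np

a, p = 0.5, 2.0        # eta(x) = |x|**a; the range -1 < a < 1 gives eta in A_2(R)

def integral_abs_pow(A, B, c):
    F = lambda t: np.sign(t) * abs(t) ** (c + 1) / (c + 1)
    return F(B) - F(A)

def Ap_product(A, B):
    L = B - A
    avg1 = integral_abs_pow(A, B, a) / L               # average of eta over [A,B]
    avg2 = integral_abs_pow(A, B, -a / (p - 1)) / L    # average of eta^{-1/(p-1)}
    return avg1 * avg2 ** (p - 1)

worst = max(Ap_product(c - s, c + s)
            for c in np.linspace(-20.0, 20.0, 81)
            for s in np.logspace(-3, 2, 51))
print("largest A_2 product over the sample:", round(float(worst), 3))
\end{verbatim}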
We denote by $W^{1,p,w}(\Omega)$ the weighted Sobolev space defined as all functions in $L^p_w(\Omega)$ whose distributional gradient is in $L^p_w(\Omega)$. Thus $W^{1,p,w}(\Omega) = L^p_w(\Omega) \cap E^p_w(\Omega)$. If $w(\Omega)<\infty$, it follows that $L^N_w(\Omega)\cap E^p_w(\Omega) \subset W^{1,p,w}(\Omega)$ when $N\ge p$, and that the opposite containment holds when $N\le p$.
\begin{thm}\label{nondegen} Let $1\le p <\infty$, $w\in A_p(\mathbb{R}^n)$ and $\Omega$ be an open set in $\mathbb{R}^n$ with $w(\Omega)<\infty$. If $1< N \le \infty$, then any bounded subset of $L^N_w(\Omega)\cap E^p_w(\Omega)$ is precompact in $L^q_w(\Omega)$ if $1\le q <N$. Consequently, if $N>p$ and $\mathscr{S}$ is a subset of $W^{1,p,w}(\Omega)$ with \begin{equation}\label{wsob}
\|f\|_{L^N_w(\Omega)}\le C(\|f\|_{L^p_w(\Omega)}+\|\nabla f\|_{L^p_w(\Omega)}) \ \mbox{ for all } f\in \mathscr{S}, \end{equation} then any set in $\mathscr{S}$ that is bounded in $W^{1,p,w}(\Omega)$ is precompact in $L^q_w(\Omega)$ for $1\le q<N$.
If $\Omega$ is a John domain, there exists $N>p$ ($N$ can be $\theta p/(\theta -p)$ for some $\theta>p$ as described after (\ref{classicaldoubling})) such that $W^{1,p,w}(\Omega)$ is compactly embedded in $L^q_w(\Omega)$ for $1\le q <N$. In particular, the embedding of $W^{1,p,w}(\Omega)$ into $L^p_w(\Omega)$ is compact when $w\in A_p(\mathbb{R}^n)$ and $\Omega$ is a John domain.
\end{thm}
\begin{remark}
When $w=1$ and $p<n$, the choices $N= np/(n-p)$ and $\mathscr{S} = W_0^{1,p}(\Omega)$ guarantee (\ref{wsob}) by the classical Sobolev inequality for functions in $W_0^{1,p}(\Omega)$ (see e.g. \cite[Theorem 7.10]{GT}); here $W_0^{1,p}(\Omega)$ denotes the closure in $W^{1,p}(\Omega)$ of the class of Lipschitz functions with compact support in $\Omega$. Consequently, the classical Rellich-Kondrachov theorem giving the compact embedding of $W_0^{1,p}(\Omega)$ in $L^q(\Omega)$ for $1\le q <np/(n-p)$ follows as a special case of the first part of Theorem \ref{nondegen}. \end{remark}
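The role of the gradient term in such compactness statements is already visible in one variable. In the toy computation below (an illustration only, on $\Omega=(0,2\pi)$ with $w=1$), the sequence $f_k(x)=\sin kx$ is bounded in $L^2(\Omega)$ but satisfies $\|f_k-f_m\|_{L^2(\Omega)}=\sqrt{2\pi}$ for $k\ne m$, so no subsequence converges in $L^2(\Omega)$, while $g_k(x)=k^{-1}\sin kx$ has derivatives bounded in $L^2(\Omega)$ and converges to $0$ there.

\begin{verbatim}
# Toy illustration: sin(kx) is bounded in L^2(0,2*pi) but has no L^2-convergent
# subsequence, while sin(kx)/k has bounded derivatives and converges to 0.
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 20001)
dx = x[1] - x[0]
l2 = lambda v: float(np.sqrt(np.sum(v ** 2) * dx))   # Riemann-sum L^2 norm

ks = range(1, 7)
print("||f_k||      :", [round(l2(np.sin(k * x)), 3) for k in ks])       # ~ sqrt(pi)
print("||f_2 - f_5||:", round(l2(np.sin(2 * x) - np.sin(5 * x)), 3))     # ~ sqrt(2*pi)
print("||g_k||      :", [round(l2(np.sin(k * x) / k), 3) for k in ks])   # -> 0
print("||g_k'||     :", [round(l2(np.cos(k * x)), 3) for k in ks])       # bounded
\end{verbatim}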
{\bf Proof. } We will apply Theorem \ref{main2} with $w=\mu$. Fix $p$ and $w$ with $1\le p <\infty$ and $w\in A_p(\mathbb{R}^n)$. By \cite{FKS}, there is a constant $C$ such that the weighted Poincar\'e inequality \[
||f -f_{D,w}||_{L^p_w(D)} \le C r(D) ||\nabla f||_{L^p_w(D)}, \quad f\in
C^\infty(\Omega), \] holds for all Euclidean balls $D\subset \Omega$. Then since $C^\infty(\Omega)$ is dense in $L^N_w(\Omega)\cap E^p_w(\Omega)$ if $1\le N <\infty$ (see e.g. \cite{Tur}), by fixing any $\epsilon>0$ we obtain from Fatou's lemma that for all balls $D \subset \Omega$ with $C r(D) \le \epsilon$, \[
||f -f_{D,w}||_{L^p_w(D)} \le \epsilon\, ||\nabla f||_{L^p_w(D)}
\quad\text{if $f\in L^N_w(\Omega)\cap E^p_w(\Omega)$}. \] The same holds when $N=\infty$ since $L^\infty_w(\Omega) = L^\infty(\Omega) \subset L^p_w(\Omega)$ due to the assumptions $w\in A_p(\mathbb{R}^n)$ and $w(\Omega)<\infty$. With $1<N\le \infty$, the first statement of the theorem now follows from Theorem \ref{main2}, and the second statement is a corollary of the first one.
Next, let $\Omega$ be a John domain. Choose $\theta>p$ so that $w$ satisfies (\ref{classicaldoubling}) and define $N = \theta p/(\theta-p)$. Then $N>p$ and by \cite[Theorem 1.8 (b) or Theorem 4.1]{CW1}, \[
||f -f_{\Omega,w}||_{L^N_w(\Omega)} \le C ||\nabla
f||_{L^p_w(\Omega)}, \quad \forall f\in C^\infty(\Omega). \] Again, the inequality remains true for functions in $W^{1,p,w}(\Omega)$ by density and Fatou's lemma. It is now clear that (\ref{wsob}) holds, and the last part of the theorem follows.\qed \\
Our next example involves domains in $\mathbb{R}^n$ which are more restricted. For special $\Omega$, there are values $N>1$ such that \begin{equation}\label{embedding}
\|f\|_{L^N(\Omega)}\le C\big(\|f\|_{L^1(\Omega)}+\|\nabla f\|_{L^p(\Omega)}\big) \end{equation} for all $f\in L^1(\Omega)\cap E^p(\Omega)$. Note that if $\Omega$ has finite Lebesgue measure, then $W^{1,p}(\Omega) \subset L^1(\Omega)\cap E^p(\Omega)$. As we will explain, (\ref{embedding}) is true for some $N>1$ if $\Omega$ is an $s$-John domain in $\mathbb{R}^n$ and $1 \le s <1+\frac{p}{n-1}$. Recall that for $1\le s<\infty$, a bounded domain $\Omega \subset \mathbb{R}^n$ is called an $s$-John domain with central point $x'\in \Omega$ if for some constant $c>0$ and all $x \in \Omega$ with $x \neq x'$, there is a curve $\Gamma: [0,l] \rightarrow \Omega$ so that $\Gamma(0)=x, \Gamma(l) =x'$, \[
|\Gamma(t_1)-\Gamma(t_2)| \le t_2-t_1\quad\mbox{for all $[t_1,t_2]
\subset [0,l]$, and} \] \[ \rho(\Gamma(t), \Omega^c) \ge c\, t^s\quad\mbox{for all $t\in [0,l]$.} \] The terms $1$-John domain and John domain are the same. When $\Omega$ is an $s$-John domain for some $s\in [1, 1+p/(n-1))$, it is shown in \cite{KM}, \cite{CW1}, \cite{CW2} that (\ref{embedding}) holds for all finite $N$ with \begin{equation}\label{scondition} \frac{1}{N} \ge \frac{s(n-1)-p+1}{np} \end{equation} and for all $f\in W^{1,p}(\Omega)$ without any support restrictions. Note that the right side of (\ref{scondition}) is strictly less than $1/p$ for such $s$, and consequently there are values $N>p$ which satisfy (\ref{scondition}). For $N$ as in (\ref{scondition}), the global estimate \begin{equation}\label{KMpoincare}
||f-f_\Omega||_{L^{N}(\Omega)} \le C ||\nabla f||_{L^p(\Omega)},
\quad f_\Omega = \int_\Omega f(x) dx/|\Omega|, \end{equation} is shown to hold if $f\in Lip_{loc}(\Omega)$ in \cite{CW2}, and then follows for all $f\in L^1(\Omega)\cap E^{p}(\Omega)$; see the proof of Theorem \ref{weightedsjohn} for related comments. Inequality (\ref{embedding}) is clearly a consequence of (\ref{KMpoincare}).
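For instance, when $n=3$, $p=2$ and $s=6/5$ (so that $s<1+\frac{p}{n-1}=2$), condition (\ref{scondition}) reads $\frac{1}{N}\ge \frac{(6/5)\cdot 2-2+1}{6}=\frac{7}{30}$, so (\ref{embedding}) and (\ref{KMpoincare}) hold for every finite $N\le 30/7$, and in particular for some $N>p=2$; when $s=1$ the condition becomes $\frac{1}{N}\ge\frac{n-p}{np}$, recovering the classical exponent $N=np/(n-p)=6$.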
More generally, weighted versions of (\ref{KMpoincare}) hold for $s$-John domains and lead to weighted compactness results, as we now show. Let $1\le p< \infty$, and for real $\alpha$ and $\rho(x,\Omega^c)$ as in (\ref{rho}), let $L^p_{\rho^\alpha dx}(\Omega)$ be the class of Lebesgue measurable $f$ on $\Omega$ with \[
||f||_{L^p_{\rho^\alpha dx}(\Omega)} = \left(\int_\Omega |f(x)|^p
\rho(x,\Omega^c)^\alpha dx\right)^{1/p} <\infty. \]
\begin{thm}\label{weightedsjohn} Suppose that $1\le s <\infty$ and $\Omega$ is an $s$-John domain in $\mathbb{R}^n$. Let $p, a, b$ satisfy $1\le p<\infty$, $a\ge 0$, $b\in \mathbb{R}$ and $b-a<p$.
(i) If \begin{equation}\label{scondition1} n+a>s(n-1+b)-p+1, \end{equation} then for any $1\le q <\infty$ such that \begin{equation}\label{scondition2} \frac{1}{q} > \max\left\{\frac{1}{p}-\frac{1}{n}, \frac{s(n-1+b)-p+1}{(n+a)p}\right\}, \end{equation} $L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^b dx}(\Omega)$ is compactly embedded in $L^q_{\rho^a dx}(\Omega)$.
(ii) If $p>1$ and \begin{equation}\label{s1condition} n+ap>s(n-1+b)-p+1\ge n+a, \end{equation} then for any $1\le q<\infty$ such that \begin{equation}\label{s2condition} \frac{a}{q} > \max\left\{\frac{b}{p}-1, \frac{s(n-1+b)-p-n+1}{p}\right\}, \end{equation} $L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^b dx}(\Omega)$ is compactly embedded in $L^q_{\rho^a dx}(\Omega)$. \end{thm}
\begin{remark}\label{re-sjohn} \begin{enumerate} \item If $a=b=0$, (\ref{scondition1}) is the same as $s< 1+\frac{p}{n-1}$. If $a=0$, (\ref{s1condition}) never holds. \item The requirement that $b-a<p$ follows from (\ref{scondition1}) and (\ref{scondition2}) by considering the cases $n-1+b \ge 0$ and $n-1+b <0$ separately. Hence $b-a<p$ automatically holds in part (i), but it is an assumption in part (ii). Also, (\ref{s1condition}) and (\ref{s2condition}) imply that $q<p$, and consequently that $p>1$. \item Conditions (\ref{scondition1}) and (\ref{scondition2}) imply there exists $N \in (p, \infty)$ with \begin{equation}\label{scondition3} \frac{1}{q}>\frac{1}{N} > \max\left\{\frac{1}{p}-\frac{1}{n}, \frac{s(n-1+b)-p+1}{(n+a)p}\right\}. \end{equation} Conversely, (\ref{scondition1}) holds if there exists $N\in (p, \infty)$ so that (\ref{scondition3}) holds. \item Assumption (\ref{s2condition}) ensures that there exists $N\in (q, \infty)$ such that (\ref{s2condition}) holds with $q$ replaced by $N$. \end{enumerate}
\end{remark}
\noindent{\bf Proof:} This result is also a consequence of Theorem \ref{main2}, but we will deduce it from Theorem \ref{general} by using arguments like those in the proofs of Theorems \ref{main2} and \ref{nondegen}. Fix $a,b,p,q$ as in the hypothesis and denote $\rho(x) = \rho(x,\Omega^c)$. Choose $w= \rho^a dx$ and note that $w(\Omega) <\infty$ since $a\ge 0$ and $\Omega$ is now bounded. Define \[
\mathscr{X}(\Omega) = \big\{\g = (g_1,\dots,g_n): |\g|\in L^p_{\rho^b dx}(\Omega)\big\} \]
and $||\g||_{\mathscr{X}(\Omega)} = ||\g||_{L^p_{\rho^b dx}(\Omega)}$. Fix $\epsilon>0$ and choose a compact set $K \subset
\Omega$ with $|\Omega\setminus K|_{\rho^a dx} := \int_{\Omega\setminus K} \rho^a dx < \epsilon$. Also choose $\delta_\epsilon'$ with $0<\delta_\epsilon' < \rho(K, \Omega^c)$, where $\rho(K, \Omega^c)$ is the Euclidean distance between $K$ and $\Omega^c$.
If $D$ is a Euclidean ball with center $x_D\in K$ and $r(D) < \frac{1}{2}\delta_\epsilon'$, then $2D\subset \Omega$ and $\rho(x)$ is essentially constant on $D$; in fact, for such $D$, \[ \frac{1}{2}\rho(x_D) \le \rho(x) \le \frac{3}{2}\rho(x_D),\quad x \in D. \]
We claim that for such $D$, the simple unweighted Poincar\'e estimate \[
||f -f_D||_{L^p(D)} \le C r(D) ||\nabla f||_{L^p(D)}, \quad f\in
Lip_{loc}(\Omega), \] where $f_D= f_{D,dx}$, implies that for $f\in Lip_{loc}(\Omega)$, \begin{equation}\label{weightedpc}
||f -f_{D,\rho^a dx}||_{L^p_{\rho^a dx}(D)} \le {\tilde C}
\big(r(D)^{\frac{a-b}{p}} + {\rm diam}(\Omega)^{\frac{a-b}{p}}\big) r(D)
||\nabla f||_{L^p_{\rho^b dx}(D)}, \end{equation} where $f_{D,\rho^a dx}= \int_D f\rho^a dx/\int_D \rho^a dx$ and ${\tilde C}$ depends on $C,a,b$ but is independent of $D, f$. To show this, first note that for such $D$, since $\rho\sim \rho(x_D)$ on $D$, the simple Poincar\'e estimate immediately gives \[
||f -f_D||_{L^p_{\rho^a dx}(D)} \le {\tilde C} \rho(x_D)^{\frac{a
-b}{p}} r(D) ||\nabla f||_{L^p_{\rho^b dx}(D)}, \quad f\in Lip_{loc}(\Omega), \] and then a similar estimate with $f_D$ replaced by $f_{D,\rho^a dx}$ follows by standard arguments. Clearly (\ref{weightedpc}) will now follow if we show that \[ \rho(x_D)^{\frac{a-b}{p}}\le r(D)^{\frac{a-b}{p}}
+ {\rm diam}(\Omega)^{\frac{a-b}{p}}\quad \text{for such $D$.} \] However, this is clear since $r(D)\le \rho(x_D) \le {\rm diam}(\Omega)$ for $D$ as above, and (\ref{weightedpc}) is proved.
We can now apply the weighted density result of \cite{H}, \cite{HK1} to conclude that (\ref{weightedpc}) holds for all $f \in L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^bdx}(\Omega)$ and all balls $D$ with $x_D\in K$ and $r(D) <\frac{1}{2}\delta_\epsilon'$.
Recall that $\frac{a-b}{p} +1>0$. Thus there exists $r_\epsilon$ with $0 <r_\epsilon <\frac{1}{2}\delta_\epsilon'$ and \[ {\tilde C} \big(r_\epsilon^{\frac{a-b}{p}}
+ {\rm diam}(\Omega)^{\frac{a-b}{p}}\big) r_\epsilon < \epsilon. \] Let $\Sigma$ and $\Sigma_0$ be as in the proof of Theorem \ref{main2}, and let $\{E_\ell\}_\ell=\{F_\ell\}_\ell$ be the triples of balls in a maximal collection of pairwise disjoint balls centered in $K$ with radius $\frac{1}{3}r_\epsilon$. Then (\ref{weightedpc}) and the choice of $r_\epsilon$ give the desired version of (\ref{pc}), namely \[
||f -f_{D,\rho^a dx}||_{L^p_{\rho^a dx}(D)} \le \epsilon ||\nabla
f||_{L^p_{\rho^b dx}(D)} \] for $D=E_\ell$ and $f\in L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^bdx}(\Omega)$. Next, use the last two parts of Remark \ref{re-sjohn} to choose $N\in (q,\infty)$ so that either (\ref{scondition2}) or (\ref{s2condition}) holds with $q$ there replaced by $N$. Every $f\in L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^b dx}(\Omega)$ then satisfies the global Poincar\'e estimate \begin{equation} \label{globalsjohn}
||f- f_{\Omega, \rho^a dx}||_{L^N_{\rho^a dx}(\Omega)} \le C ||\nabla
f||_{L^p_{\rho^b dx}(\Omega)}, \quad f\in L^1_{\rho^a
dx}(\Omega)\cap E^p_{\rho^b dx}(\Omega), \end{equation} where $f_{\Omega,\rho^a dx} = \int_\Omega f \,\rho^a dx/\int_\Omega \rho^a dx$. In fact, under the hypothesis of Theorem \ref{weightedsjohn}, this is proved for $f\in Lip_{loc}(\Omega) \cap L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^bdx}(\Omega)$ in \cite {CW2} for example, and then follows for all $f\in L^1_{\rho^a dx}(\Omega)\cap E^p_{\rho^bdx}(\Omega)$ by the density result of \cite{H}, \cite{HK1} and Fatou's lemma. By (\ref{globalsjohn}), \[
||f ||_{L^N_{\rho^a dx}(\Omega)} \le C ||f||_{L^1_{\rho^a
dx}(\Omega)} + C ||\nabla f||_{L^p_{\rho^b dx}(\Omega)} \] for the same class of $f$. The remaining details of the proof are left to the reader. \qed
In passing, we mention that the role played by the distance function $\rho(x,\Omega^c)$ in Theorem \ref{weightedsjohn} can instead be played by $$
\rho_0(x) = \inf\{|x-y|: y\in \Omega_0\},\quad x\in \Omega, $$ for certain $\Omega_0 \subset \Omega^c$; see \cite[Theorem 1.6]{CW2} for a description of such $\Omega_0$ and the required Poincar\'e estimate, and note that the density result in \cite{HK1} holds for positive continuous weights.
\section{Applications in the Degenerate Case} \setcounter{equation}{0}
\setcounter{theorem}{0}
In this section, $\Omega$ denotes a fixed open set in $\mathbb{R}^n$, possibly unbounded. For $(x,\xi) \in \Omega\times \mathbb{R}^n$, we consider a nonnegative quadratic form $\xi' Q(x)\xi$ which may degenerate, i.e., which may vanish for some $\xi \neq 0$. Such quadratic forms occur naturally in the context of subelliptic equations and give rise to degenerate Sobolev spaces as discussed below. Our goal is to apply Theorem \ref{general} to obtain compact embeddings of these degenerate spaces into Lebesgue spaces whose exponents reflect the gain in integrability provided by Poincar\'e-Sobolev inequalities. The framework that we will use contains the subelliptic one developed in \cite[2]{SW1}, where regularity theory for weak solutions of linear subelliptic equations of second order in divergence form is studied.
\subsection{Standing Assumptions}
We now list some notation and assumptions that will be in force everywhere in \S 3 even when not explicitly mentioned.
\begin{defn} A function $d$ is called a finite symmetric quasimetric (or simply a quasimetric) on $\Omega$ if $d:\Omega\times\Omega\rightarrow [0,\infty)$ and there is a constant $\kappa\geq 1$ such that for all $x,y,z\in\Omega$, \begin{eqnarray} d(x,y) &=& d(y,x),\nonumber\\ d(x,y) &=& 0 \iff x=y, \textrm{ and} \nonumber\\ \label{tri} d(x,y)&\leq&\kappa [d(x,z)+d(z,y)]. \end{eqnarray} \end{defn} If $d$ is a quasimetric on $\Omega$, we refer to the pair $(\Omega,d)$ as a quasimetric space. In some applications, $d$ is closely related to $Q(x)$. For example, $d$ is sometimes chosen to be the Carnot-Carath\'eodory control metric related to $Q$; cf. \cite{SW1}.
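A simple example to keep in mind, included only as an illustration: for any $\alpha\ge 1$, the function $d(x,y)=|x-y|^\alpha$ is a quasimetric on any $\Omega\subset\mathbb{R}^n$ with $\kappa=2^{\alpha-1}$, since convexity of $t\mapsto t^\alpha$ on $[0,\infty)$ gives
\[
|x-y|^\alpha \le \big(|x-z|+|z-y|\big)^\alpha \le 2^{\alpha-1}\big(|x-z|^\alpha+|z-y|^\alpha\big).
\]
Its $d$-balls are the sets $D_{r^{1/\alpha}}(x)\cap\Omega$, where $D_s(x)$ denotes the Euclidean ball of radius $s$ centered at $x$.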
Given $x\in\Omega$, $r>0$, and a quasimetric $d$, the subset of $\Omega$ defined by \begin{eqnarray} B_r(x) = \{y\in\Omega\;:\; d(x,y) <r\}\nonumber \end{eqnarray} will be called the quasimetric $d$-ball centered at $x$ of radius $r$. Note that every $d$-ball $B = B_r(x)$ satisfies $B \subset \Omega$ by definition.
It is sometimes possible, and desirable in case the boundary of $\Omega$ is rough, to be able to work only with $d$-balls that are deep inside $\Omega$ in the sense that their Euclidean closures $\overline{B}$ lie in $\Omega$. See part (ii) of Remark \ref{various} for comments about being able to use such balls.
Recall that $D_s(x)$ denotes the ordinary Euclidean ball of radius $s$ centered at $x$. We always assume that $d$ is related as follows to the standard Euclidean metric: \begin{eqnarray}\label{new2} \mbox{$\forall\, x\in\Omega$ and $r>0$, $\exists \, s=s(x,r)>0$ so that } D_{s}(x) \subset B_r(x). \end{eqnarray}
\begin{remark} Condition (\ref{new2}) is clearly true if $d$-balls are open, and it is weaker than the well-known condition of C. Fefferman and Phong stating that for each compact $K\subset\Omega$, there are constants $\beta, r_0>0$ such that $D_{r^\beta}(x)\subset B_r(x)$ for all $x \in K$ and $0<r<r_0$. \end{remark}
Throughout \S 3, $Q(x)$ denotes a fixed Lebesgue measurable $n\times n$ nonnegative symmetric matrix on $\Omega$ and we assume that every $d$-ball $B$ centered in $\Omega$ is Lebesgue measurable. We will deal with three locally finite measures $w,\nu,\mu$ on the Lebesgue measurable subsets of $\Omega$, each with a particular role. In \S 3.3, where only global results are developed, we will assume $w(\Omega)<\infty$ but this assumption is not required for the local results of \S 3.4. The measure $\mu$ is assumed to be absolutely continuous with respect to Lebesgue measure; the comment following (\ref{sobnorm}) explains why this assumption is natural. In \S 3, we sometimes assume that $w$ is absolutely continuous with respect to $\nu$, but we drop this assumption completely in the Appendix.
We do not require the existence of a doubling measure for the collection of $d$-balls, but we always assume that $(\Omega,d)$ satisfies the weaker local geometric doubling property given in the next definition; see \cite{HyM} for a global version.
\begin{defn}\label{doublingdef} A quasimetric space $(\Omega,d)$ satisfies the {\it local geometric doubling condition} if for every compact $K\subset\Omega$, there exists $\delta'=\delta'(K)>0$ such that for all $x\in K$ and all $0<r'<r < \delta'$, the number of disjoint $d$-balls of radius $r'$ contained in $B_{r}(x)$ is at most a constant ${\cal C}_{r/r'}$ depending on $r/r'$ but not on $K$. \end{defn}
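As an illustration of Definition \ref{doublingdef}, consider the Euclidean metric $d(x,y)=|x-y|$ on an open set $\Omega$ with $\Omega^c\neq\emptyset$. Given a compact $K\subset\Omega$, take $\delta'(K)=\frac{1}{2}\rho(K,\Omega^c)$. If $x\in K$ and $0<r'<r<\delta'$, then each $d$-ball of radius $r'$ contained in $B_r(x)$ is an ordinary Euclidean ball $D_{r'}(y)$ with $D_{r'}(y)\subset D_r(x)$, and comparing Lebesgue measures shows that at most $(r/r')^n$ of them can be pairwise disjoint; thus ${\cal C}_{r/r'}=(r/r')^n$ works, independently of $K$.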
\subsection{Degenerate Sobolev Spaces $\wa{p}{\Omega},\;\wb{p}{\Omega}$}
We will define weighted degenerate Sobolev spaces by using an approach like the one in \cite{SW2} for the unweighted case. We first define an appropriate space of vectors, including vectors which will eventually play the role of gradients, where size is measured relative to the nonnegative quadratic form
\begin{eqnarray} Q(x,\xi) = \xi'Q(x)\xi,\quad (x,\xi)\in \Omega\times\mathbb{R}^n. \nonumber \end{eqnarray} For $1\leq p<\infty$, consider the collection of measurable $\mathbb{R}^n$-valued functions $\vec{g}(x) = (g_1(x),...,g_n(x))$ satisfying
\begin{eqnarray} \label{ellpnorm}||\vec{g}||_{{\cal L}^p_\mu(\Omega,Q)} = \Big\{\int_\Omega Q(x,\vec{g}(x))^\frac{p}{2}d\mu\Big\}^\frac{1}{p} =
\Big\{\int_\Omega |\sqrt{Q(x)}\vec{g}(x)|^pd\mu\Big\}^\frac{1}{p}
<\infty.
\end{eqnarray} We identify any two functions $\vec{g}, \vec{h}$ in the collection for which $||\vec{g} - \vec{h}||_{{\cal L}^p_\mu(\Omega,Q)} = 0$. Then (\ref{ellpnorm}) defines a norm on the resulting space of equivalence classes. The form-weighted space ${\cal L}^p_\mu(\Omega,Q)$ is defined to be the collection of these equivalence classes, with norm (\ref{ellpnorm}). By using methods similar to those in \cite{SW2}, it follows that ${\cal L}^2_\mu(\Omega,Q)$ is a Hilbert space and ${\cal L}^p_\mu(\Omega,Q)$ is a Banach space for $1\le p<\infty$.
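A concrete degenerate example: in dimension $n=2$, take $Q(x)={\rm diag}(1,x_1^2)$ (the form associated with the Grushin operator) and $d\mu=dx$. Then
\[
||\vec{g}||_{{\cal L}^p_\mu(\Omega,Q)} = \Big\{\int_\Omega \big(g_1(x)^2 + x_1^2\, g_2(x)^2\big)^{\frac{p}{2}}\,dx\Big\}^{\frac{1}{p}},
\]
so the second component of $\vec{g}$ is not controlled at all on the degeneracy set $\{x_1=0\}$.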
Now consider the (possibly infinite) norm on $Lip_{loc}(\Omega)$ defined by
\begin{eqnarray} \label{sobnorm}||f||_{\wa{p}{\Omega}} =
||f||_{L^p_\nu(\Omega)} + ||\nabla f||_{{\cal L}^p_\mu(\Omega,Q)}. \end{eqnarray} We comment here that our standing assumption that $\mu(Z)=0$ when $Z$
has Lebesgue measure $0$ assures that $||\nabla f||_{{\cal L}^p_\mu(\Omega,Q)}$ is well-defined if $f\in Lip_{loc}(\Omega)$; in fact, for such $f$, the Rademacher-Stepanov theorem implies that $\nabla f$ exists a.e. in $\Omega$ with respect to Lebesgue measure.
\begin{defn}\label{spacedef} Let $1\leq p <\infty$. \begin{enumerate} \item The degenerate Sobolev space $\wa{p}{\Omega}$ is the completion under the norm (\ref{sobnorm}) of the set $$
Lip_{Q,p}(\Omega) = Lip_{Q,p,\nu,\mu}(\Omega) = \{ f\in Lip_{loc}(\Omega)\;:\; ||f||_{\wa{p}{\Omega}}<\infty\}. $$ \item The degenerate Sobolev space $\wb{p}{\Omega}$ is the completion under the norm (\ref{sobnorm}) of the set $Lip_{Q,p,0}(\Omega) = Lip_0(\Omega)\cap Lip_{Q,p}(\Omega)$, where $Lip_0(\Omega)$ denotes the collection of Lipschitz functions with compact support in $\Omega$. If $Q \in L^{p/2}_{loc}(\Omega)$, then $Lip_{Q,p,0}(\Omega) = Lip_0(\Omega)$ since $\nu$ and $\mu$ are locally finite. \end{enumerate} \end{defn}
We now make some comments about $\wa{p}{\Omega}$, most of which have analogues for $\wb{p}{\Omega}$. By definition, $\wa{p}{\Omega}$ is the Banach space of equivalence classes of Cauchy sequences of $Lip_{Q,p}(\Omega)$ functions with respect to the norm (\ref{sobnorm}). Given a Cauchy sequence $\{f_j\}$ of $Lip_{Q,p}(\Omega)$ functions, we denote its equivalence class by $[\{f_j\}]$. If $\{v_j\}\in [\{f_j\}]$, then $\{v_j\}$ is a Cauchy sequence in $L^p_\nu(\Omega)$ and $\{\nabla v_j\}$ is a Cauchy sequence in ${\cal L}^p_\mu(\Omega,Q)$. Hence, there is a pair $(f,\vec{g}) \in L^p_\nu(\Omega) \times {\cal L}^p_\mu(\Omega,Q)$ so that \begin{eqnarray}
||v_j-f||_{L^p_\nu(\Omega)} \rightarrow 0\quad \textrm{and }||\nabla v_j -
\vec{g}||_{{\cal L}^p_\mu(\Omega,Q)}\rightarrow 0\nonumber \end{eqnarray} as $j\rightarrow \infty$. The pair $(f,\vec{g})$ is uniquely determined by the equivalence class $[\{f_j\}]$, i.e., is independent of a particular $\{v_j\}\in [\{f_j\}]$. We will say that $(f,\vec{g})$ is {\it represented by} $\{v_j\}$. We obtain a Banach space isomorphism ${\cal J}$ from $\wa{p}{\Omega}$ onto a closed subspace $\wc{p}{\Omega}$ of $L^p_\nu(\Omega)\times {\cal L}^p_\mu(\Omega,Q)$ by setting \begin{eqnarray} {\cal J}([\{f_j\}]) = (f,\vec{g}). \end{eqnarray} We will often not distinguish between $\wa{p}{\Omega}$ and $\wc{p}{\Omega}$. Similarly, $\we{p}{\Omega}$ will denote the image of $\wb{p}{\Omega}$ under ${\cal J}$, but we often consider these spaces to be the same.
It is important to think of a typical element of $\wc{p}{\Omega}$, or $\wa{p}{\Omega}$, as a pair $(f,\vec{g})$ as above, and not simply as the first component $f$. In fact, if $(f,\vec{g}) \in \wc{p}{\Omega}$, the vector $\vec{g}$ may not be uniquely determined by $f$; see \cite[Section 2.1]{FKS} for a well known example.
If $f\in Lip_{Q,p}(\Omega)$, then the pair $(f,\nabla f)$ may be viewed as an element of $W_{\nu,\mu}^{1,p}(\Omega,Q)$ by identifying it with the equivalence class $[\{f\}]$ corresponding to the sequence each of whose entries is $f$. When viewed as a class, $(f, \nabla f)$ generally contains pairs whose first components are not Lipschitz functions; for example, if $f \in Lip_{Q,p}(\Omega)$ and $F$ is any function with $F=f$ a.e.-$\nu$, then $(f, \nabla f) = (F, \nabla f)$ in $W^{1,p}_{\nu,\mu}(\Omega,Q)$. However, in what follows, when we consider a pair $(f, \nabla f)$ with $f \in Lip_{Q,p}(\Omega)$, we will {\it not} adopt this point of view. Instead we will identify an $f\in Lip_{Q,p}(\Omega)$ with the single pair $(f, \nabla f)$ whose first component is $f$ (defined everywhere in $\Omega$) and whose second component is $\nabla f$, which exists a.e. with respect to Lebesgue measure by the Rademacher-Stepanov theorem. This convention lets us avoid assuming that $w$ is absolutely continuous with respect to $\nu$, written $w<<\nu$, in Poincar\'e-Sobolev estimates for $Lip_{Q,p}(\Omega)$ functions. We will reserve the notation ${\cal H}$ for subsets of $Lip_{Q,p}(\Omega)$ viewed in this way.
On the other hand, ${\cal W}$ will denote various subsets of $W^{1,p}_{\nu,\mu}(\Omega,Q)$ with elements viewed as equivalence classes. When our hypotheses are phrased in terms of such ${\cal W}$, we will assume that $w<<\nu$ in order to avoid technical difficulty associated with sets of measure $0$; see the comment after (\ref{poincare*}). In the Appendix, we drop the assumption $w<<\nu$ altogether.
We will abuse the notation (\ref{sobnorm}) by writing \begin{eqnarray}\label{pairnormlip}
||(f,\nabla f)||_{\wa{p}{\Omega}} = ||f||_{L^p_\nu(\Omega)} + ||\nabla
f||_{{\cal L}^p_\mu(\Omega,Q)},\quad f \in Lip_{Q,p}(\Omega), \end{eqnarray} and we extend this to generic $(f,\vec{g})\in W^{1,p}_{\nu, \mu}(\Omega, Q)$ by writing \begin{eqnarray}\label{pairnorm}
||(f,\vec g)||_{\wa{p}{\Omega}} = ||f||_{L^p_\nu(\Omega)} +
||\vec{g}||_{{\cal L}^p_\mu(\Omega,Q)}. \end{eqnarray}
\subsection{Global Compactness Results for Degenerate Spaces}
In this section, we state and prove compactness results which apply to the entire set $\Omega$. Results which are more local are given in \S 3.4.
In order to apply Theorem \ref{general} in this setting, we will use the following version of Poincar\'e's inequality for $d$-balls.
\begin{defn}\label{poincaredef} Let $1\leq p<\infty$, $Lip_{Q,p}(\Omega)$ be as in Definition \ref{spacedef}, and ${\cal H} \subset Lip_{Q,p}(\Omega)$. We say that the \emph{Poincar\'e property of order $p$ holds for} ${\cal H}$ if there is a constant $c_0\ge 1$ so that for every $\epsilon>0$ and every compact set $K\subset\Omega$, there exists $\delta = \delta(\epsilon,K)>0$ such that for all $f \in {\cal H}$ and every $d$-ball $B_r(y)$ with $y\in K$ and $0<r< \delta$,
\begin{eqnarray} \label{poincare} \left(\int_{B_r(y)} |f-f_{B_r(y),w}|^{p} dw \right)^\frac{1}{p} \leq
\epsilon ||(f,\nabla f)||_{W_{\nu,\mu}^{1,p}(B_{c_0r}(y),Q)}. \end{eqnarray} \end{defn}
\begin{remark} \label{various}
(i) Inequality (\ref{poincare}) is not of standard Poincar\'e form. A more typical form is \begin{eqnarray} \label{poincare3}
\left(\displaystyle\frac{1}{w(B_r(y))}\int_{B_r(y)} |f-f_{B_r(y),w}|^{p} dw\right)^\frac{1}{p}\hspace{2in} \nonumber\\ \leq Cr \left(\displaystyle\frac{1}{\mu(B_{c_0r}(y))}\int_{B_{c_0r}(y)}
|\sqrt{Q}\nabla f|^{p}d\mu\right)^\frac{1}{{p}}. \end{eqnarray} In \cite[2]{SW1} and \cite{R1}, the unweighted version of \eqref{poincare3} with $p=2$ is used. Let $\rho(x,\partial\Omega)$ and $\rho(E,\partial\Omega)$ be as in (\ref{rho}). In \cite{SW2}, the unweighted form of (\ref{poincare3}) with $p=2$ is assumed for all $f\in Lip_{Q,2}(\Omega)$ and all $B_r(y)$ with $y\in \Omega$ and $0<r< \delta_0 \rho(y,\partial\Omega)$ for some $\delta_0\in (0,1)$ independent of $y,r$. If $K$ is a compact set in $\Omega$, this version would then hold for all $B_r(y)$ with $y\in K$ and $0<r< \delta_0 \rho(K,\partial\Omega)$. For general $p, w$ and $\mu$, if for every compact $K\subset\Omega$, (\ref{poincare3}) is valid for all $B_r(y)$ with $y\in K$ and $0<r< \delta_0 \rho(K, \partial\Omega)$, then (\ref{poincare}) follows easily provided \begin{eqnarray} \label{balancing} \lim_{r\rightarrow 0} \left\{\sup_{y\in K}r^{p} \displaystyle\frac{w(B_r(y))}{\mu(B_{c_0r}(y))}\right\} = 0 \end{eqnarray} for every compact $K\subset\Omega$. Note that (\ref{balancing}) automatically holds if $w=\mu$.
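To make the passage from (\ref{poincare3}) to (\ref{poincare}) explicit, multiply both sides of (\ref{poincare3}) by $w(B_r(y))^{\frac{1}{p}}$ to obtain
\[
\left(\int_{B_r(y)} |f-f_{B_r(y),w}|^{p}\, dw \right)^\frac{1}{p} \le C \left( r^{p}\,\frac{w(B_r(y))}{\mu(B_{c_0r}(y))}\right)^\frac{1}{p}
\left(\int_{B_{c_0r}(y)} |\sqrt{Q}\nabla f|^{p}\,d\mu\right)^\frac{1}{p}.
\]
Given $\epsilon>0$ and a compact $K$, (\ref{balancing}) allows us to choose $\delta>0$ (small enough that (\ref{poincare3}) applies) so that the middle factor is at most $\epsilon/C$ whenever $y\in K$ and $0<r<\delta$; the right side is then at most $\epsilon\, ||(f,\nabla f)||_{W_{\nu,\mu}^{1,p}(B_{c_0r}(y),Q)}$.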
If both (\ref{poincare3}) and (\ref{balancing}) hold, then (\ref{poincare}) is true for any choice of $\nu$. In this situation, one can pick $\nu =w$ in order to avoid technicalities encountered below when $w$ is not absolutely continuous with respect to $\nu$.
(ii) Especially when $\partial\Omega$ is rough, it is simplest to deal only with $d$-balls $B$ which stay away from $\partial\Omega$, i.e., which satisfy \begin{eqnarray} \label{closure} \overline{B} \subset \Omega. \end{eqnarray} We can always assume this for the balls in (\ref{poincare}) if the converse of (\ref{new2}) is also true, namely if \begin{eqnarray} \label{eucled2} \forall \, x\in \Omega \ \mbox{ and } r>0, \ \exists \,s= s(r,x)>0 \ \mbox{ such that } B_s(x)\subset D_r(x). \end{eqnarray} To see why, let us first show that given a compact set $K$ and an open set $G$ with $K\subset G\subset\Omega$, there exists $t>0$ so that $\overline{B_t(y)} \subset G$ for all $y\in K$. Indeed, for such $K$ and $G$, let $t'=\frac{1}{2}\rho(K,G^c)$. By (\ref{eucled2}), for each $x\in K$ there exists $r(x)>0$ so that $B_{r(x)}(x)\subset D_{t'}(x)$. Further, by (\ref{new2}), there exists $s(x)>0$ so that $D_{s(x)}(x)\subset B_{r(x)/(2\k)}(x)$, where $\kappa$ is as in (\ref{tri}). Since $K$ is compact, we may choose finite collections $\{B_{r_i/(2\k)}(x_i)\}$ and $\{D_{s_i}(x_i)\}$ with $x_i \in K$, $r_i = r(x_i)$, $s_i = s(x_i)$, and $K\subset \bigcup D_{s_i}(x_i) \subset \bigcup B_{r_i/(2\k)}(x_i)$. Now set $t=\min\{ r_i/(2\k)\}$. Let $y\in K$ and choose $i$ such that $y\in B_{r_i/(2\k)}(x_i)$. By (\ref{tri}), $B_t(y)\subset B_{r_i}(x_i)$ and consequently $B_t(y)\subset D_{t'}(x_i)$. Since $\overline{D_{t'}(x_i)} \subset G$, we obtain $\overline{B_{t}(y)} \subset G$ for every $y\in K$, as desired. In particular, $\overline{B_t(y)} \subset \Omega$ for all $y\in K$. Since the validity of (\ref{poincare}) for some $\delta=\delta(\epsilon, K)$ implies its validity for min\,$\{\delta,t\}$, it follows that we may assume (\ref{closure}) for every $B_r(y)$ in (\ref{poincare}) when (\ref{eucled2}) holds. Similarly, since the constant $c_0$ in (\ref{poincare}) is independent of $K$, we may assume as well that every $B_{c_0r}(y)$ in (\ref{poincare}) has closure in $\Omega$.
(iii) We can often slightly weaken the assumption in Definition \ref{poincaredef} that $K$ is an arbitrary compact set in $\Omega$. For example, in our results where $w(\Omega) < \infty$, it is generally enough to assume that for each $\epsilon >0$, there is a particular compact $K$ with $w(\Omega\setminus K)< \epsilon$ such that (\ref{poincare}) holds. However, in \S 3.4, where we do not assume $w(\Omega) < \infty$, it is convenient to keep the hypothesis that $K$ is arbitrary.
\end{remark}
Given a set ${\cal H} \subset Lip_{Q,p}(\Omega)$, define \begin{eqnarray}\label{newset} \hat{\cal H}= \{ f : \mbox{there exists } \{f^j\} \subset {\cal H} \mbox{ with $f^j \to f$ a.e.-$w$} \}. \end{eqnarray} It will be useful later to note that if ${\cal H}$ is bounded in $L^N_w(\Omega)$ for some $N$, then $\hat{{\cal H}}$ is also bounded in $L^N_w(\Omega)$ by Fatou's lemma; in particular, every $f \in \hat{\cal H}$ then belongs to $L^N_w(\Omega)$. See (\ref{hat-closure}) for a relationship between $\hat{{\cal H}}$ and the closure of ${\cal H}$ in $\wa{p}{\Omega}$ in case $w<<\nu$.
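The Fatou argument just mentioned is simply the following: if $1\le N<\infty$, $\sup_j ||f^j||_{L^N_w(\Omega)}\le M$ and $f^j\to f$ a.e.-$w$, then
\[
\int_\Omega |f|^N\,dw \;\le\; \liminf_{j\to\infty}\int_\Omega |f^j|^N\,dw \;\le\; M^N,
\]
while for $N=\infty$ the bound $|f|\le M$ a.e.-$w$ follows directly from pointwise convergence.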
We now state our simplest global result. Its proof is given after Corollary \ref{simplecor*}.
\begin{thm}\label{simpleversion} Let the assumptions of \S 3.1 hold, $w(\Omega)<\infty$, $1\le p <\infty$, $1<N \le \infty$ and ${\cal H}\subset Lip_{Q,p}(\Omega)$. Suppose that the Poincar\'e property of order $p$ in Definition \ref{poincaredef} holds for ${\cal H}$ and that \begin{eqnarray}\label{sup}
\sup_{f \in {\cal H}} \left\{||f||_{L^N_w(\Omega)} +
||f||_{L^p_\nu(\Omega)} + ||\nabla f||_{{\cal L}^p_\mu(\Omega,Q)}
\right\} < \infty. \end{eqnarray} Then any sequence $\{f_k\} \subset\hat{{\cal H}}$ has a subsequence that converges in $L^q_w(\Omega)$ norm for every $1\leq q<N$ to a function belonging to $L^N_w(\Omega)$. \end{thm}
Let ${\cal H} \subset Lip_{Q,p}(\Omega)$ and $\hat{\cal H}$ be as in (\ref{newset}). We reserve the notation $\overline{\cal H}$ for the closure of ${\cal H}$ in $W^{1,p}_{\nu,\mu}(\Omega, Q)$, i.e., for the closure of the collection $\{(f, \nabla f): f\in {\cal H}\}$ with respect to the norm (\ref{pairnormlip}). Elements of $\overline{\cal H}$ are viewed as equivalence classes. If $w<<\nu$, then \begin{eqnarray}\label{hat-closure} \{ f : \mbox{there exists $\vec{g}$ such that } (f,\vec{g})\in \overline{\cal H}\}\subset \hat{\cal H}. \end{eqnarray} Indeed, if $(f,\vec{g})\in \overline{\cal H}$, there is a sequence $\{f^j\} \subset {\cal H}$ such that $(f^j,\nabla f^j) \rightarrow (f,\vec{g})$ in $\wa{p}{\Omega}$ norm, and consequently $f^j \rightarrow f$ in $L^p_\nu(\Omega)$. By using a subsequence, we may assume that $f^j \rightarrow f$ pointwise a.e.-$\nu$, and hence by absolute continuity that $f^j\to f$ pointwise a.e.-$w$. This proves (\ref{hat-closure}). In fact, it can be verified by using Egorov's theorem that \begin{eqnarray}\label{betterhat-closure} \{f: \text{there exists $\{(f^j,\vec{g^j})\}\subset \overline{\cal H}$ with $f^j \rightarrow f$ a.e.-$w$}\} \subset \hat{\cal H}. \end{eqnarray}
Theorem \ref{simpleversion} and (\ref{hat-closure}) immediately imply the following corollary.
\begin{corollary}\label{simplecor} Let the assumptions of \S 3.1 hold, $w(\Omega)< \infty$ and $w<<\nu$. Let $1\le p <\infty$, $1<N \le \infty$, ${\cal H} \subset Lip_{Q,p}(\Omega)$ and $\overline{\cal H}$ be the closure of ${\cal H}$ in $W_{\nu,\mu}^{1,p}(\Omega, Q)$. Suppose that the Poincar\'e property of order $p$ in Definition \ref{poincaredef} holds for ${\cal H}$ and that \begin{eqnarray}\label{corboundN}
\sup_{f \in {\cal H}} \left\{||f||_{L^N_w(\Omega)} +
||(f,\nabla f)||_{W^{1,p}_{\nu,\mu}(\Omega,Q)}
\right\} < \infty. \end{eqnarray} Then any sequence $\{f_k\}$ in \begin{eqnarray} \{ f : \mbox{there exists $\vec{g}$ such that } (f,\vec{g})\in \overline{\cal H}\} \nonumber \end{eqnarray} has a subsequence that converges in $L^q_w(\Omega)$ norm for $1\le q<N$ to a function that belongs to $L^N_w(\Omega)$.
\end{corollary}
\begin{remark} Corollary \ref{simplecor} may be thought of as an analogue in the degenerate setting of the Rellich-Kondrachov theorem since it contains this classical result as a special case. To see why, set $Q(x) = \mbox{Id}$ and $w=\nu=\mu$ to be Lebesgue measure. Then, given a bounded sequence $\{(f_k,\vec{g}_k)\}\subset W^{1,p}_0(\Omega) = W^{1,p}_{dx,dx,0}(\Omega,Q)$ we may choose $\{f_k^j\}\subset Lip_0(\Omega)$ with $(f_k^j,\nabla f_k^j)\rightarrow (f_k,\vec{g}_k)$ in $W^{1,p}(\Omega)$ norm. Thus, setting ${\cal H} = \{f_k^j\}_{k\in\mathbb{N},j>J_k}$ where each $J_k$ is chosen sufficiently large to preserve boundedness, the classical Sobolev inequality gives (\ref{corboundN}) with $N=np/(n-p)$ for $1\leq p<n$. The Rellich-Kondrachov theorem now follows from Corollary \ref{simplecor}. \end{remark}
We next mention analogues of these results when ${\cal H}$ is replaced by a set ${\cal W}\subset W^{1,p}_{\nu,\mu}(\Omega,Q)$ with elements viewed as equivalence classes, assuming that $w <<\nu$. We then modify Definition \ref{poincaredef} by replacing (\ref{poincare}) with the analogous estimate
\begin{eqnarray} \label{poincare*} \left(\int_{B_r(y)} |f-f_{B_r(y),w}|^{p} dw \right)^\frac{1}{p} \leq
\epsilon ||(f,\vec{g})||_{W_{\nu,\mu}^{1,p}(B_{c_0r}(y),Q)}\quad \text{if }(f,\vec{g}) \in {\cal W}. \end{eqnarray} The assumption $w<< \nu$ guarantees that the left side of (\ref{poincare*}) does not change when the first component of a pair is arbitrarily altered in a set of $\nu$-measure zero.
If Poincar\'e's inequality is known to hold for subsets of Lipschitz functions in the form (\ref{poincare}), it can often be extended by approximation to the similar form (\ref{poincare*}) for subsets of $W^{1,p}_{\nu,\mu}(\Omega,Q)$. Indeed, let us show without using weak convergence that if $w<<\nu$ and the Radon-Nikodym derivative $dw/d\nu \in L^{p'}_\nu(\Omega), 1/p + 1/p' =1$, then (\ref{poincare*}) holds with ${\cal W} = W^{1,p}_{\nu,\mu}(\Omega,Q)$ if (\ref{poincare}) holds with ${\cal H} = Lip_{Q,p}(\Omega)$. This follows easily from Fatou's lemma since if $(f,\vec{g}) \in W^{1,p}_{\nu,\mu}(\Omega,Q)$ and we choose $\{f_j\} \subset Lip_{Q,p}(\Omega)$ with $(f_j, \nabla f_j) \rightarrow (f,\vec{g})$ in $W^{1,p}_{\nu,\mu}(\Omega,Q)$, then for any ball $B$, since $f_j \rightarrow f$ in $L^p_\nu(\Omega)$, we have \[ (f_j)_{B,w} = \frac{1}{w(B)}\int_B f_j\, \frac{dw}{d\nu}\, d\nu \rightarrow \frac{1}{w(B)}\int_B f\, \frac{dw}{d\nu}\, d\nu = f_{B,w}. \] Of course we may also assume that $f_j \rightarrow f$ a.e.-$w$ by selecting a subsequence of $\{f_j\}$ which converges to $f$ a.e.-$\nu$. The same argument shows that if (\ref{poincare*}) holds for all pairs in any set ${\cal W}\subset W^{1,p}_{\nu,\mu}(\Omega, Q)$, then it also holds for pairs in the closure $\overline{\cal W}$ of ${\cal W}$ in $W^{1,p}_{\nu,\mu}(\Omega,Q)$. Moreover, if all balls $B$ in question satisfy $\overline{B} \subset \Omega$ (cf. (\ref{closure})), then the assumption can clearly be weakened to $dw/d\nu \in L^{p'}_{\nu, loc}(\Omega)$. As we observed in Remark \ref{various}(ii), the balls in (\ref{poincare}) can be assumed to satisfy (\ref{closure}) provided (\ref{eucled2}) is true.
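The convergence $(f_j)_{B,w}\to f_{B,w}$ used above is nothing more than H\"older's inequality: for any ball $B$ with $w(B)>0$,
\[
\big| (f_j)_{B,w} - f_{B,w} \big| = \frac{1}{w(B)}\Big|\int_B (f_j-f)\,\frac{dw}{d\nu}\,d\nu\Big|
\le \frac{1}{w(B)}\, ||f_j-f||_{L^p_\nu(\Omega)}\, \Big|\Big|\frac{dw}{d\nu}\Big|\Big|_{L^{p'}_\nu(\Omega)} \rightarrow 0.
\]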
Analogues of Theorem \ref{simpleversion} and Corollary \ref{simplecor} for a set ${\cal W}\subset W^{1,p}_{\nu,\mu}(\Omega,Q)$ are given in the next result, which also includes the Rellich-Kondrachov theorem as a special case.
\begin{thm}\label{simpleversion*} Let the assumptions of \S 3.1 hold, $w(\Omega)<\infty$ and $w<<\nu$. Let $1\le p <\infty$, $1<N \le \infty$ and ${\cal W} \subset W^{1,p}_{\nu,\mu}(\Omega,Q)$. Suppose that the Poincar\'e property in Definition \ref{poincaredef} holds, but in the modified form given in (\ref{poincare*}), and that \begin{eqnarray}\label{sup*}
\sup_{(f,\vec{g}) \in {\cal W}} \left\{||f||_{L^N_w(\Omega)} +
||(f,\vec{g})||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} \right\} < \infty. \end{eqnarray} Let \[ \hat{\cal W} = \{f: \text{there exists $\{(f^j,\vec{g^j})\} \subset {\cal W}$ with $f^j \rightarrow f$ a.e.$-w$}\}. \] Then any sequence in $\hat{{\cal W}}$ has a subsequence that converges in $L^q_w(\Omega)$ norm for every $1\leq q<N$ to a function belonging to $L^N_w(\Omega)$. In particular, if $\overline{\cal W}$ denotes the closure of ${\cal W}$ in $W_{\nu,\mu}^{1,p}(\Omega, Q)$, then the same is true for any sequence in \begin{eqnarray} \{ f : \mbox{there exists $\vec{g}$ such that } (f,\vec{g})\in \overline{\cal W}\}. \nonumber \end{eqnarray} \end{thm}
As a corollary, we obtain a result for arbitrary sequences $\{(f_k, \vec{g_k})\}$ which are bounded in $W^{1,p}_{\nu,\mu}(\Omega,Q)$ and whose first components $\{f_k\}$ are bounded in $L^N_w(\Omega)$.
\begin{corollary}\label{simplecor*} Let the assumptions of \S 3.1 hold, $w(\Omega)<\infty$, $w<<\nu$, $1\le p <\infty$ and $1<N \le \infty$. Suppose that the Poincar\'e property in Definition \ref{poincaredef} holds for all of $W^{1,p}_{\nu,\mu}(\Omega,Q)$, i.e., Definition \ref{poincaredef} holds with (\ref{poincare}) replaced by (\ref{poincare*}) for ${\cal W} = W^{1,p}_{\nu,\mu}(\Omega,Q)$. Then if $\{(f_k, \vec{g_k})\}$ is any sequence in $W_{\nu,\mu}^{1,p}(\Omega, Q)$ such that \begin{eqnarray}
\sup_k \left[ ||f_k||_{L^N_w(\Omega)} + ||(f_k,\vec{g_k})||_{W^{1, p}_{\nu,\mu}(\Omega,Q)}\right] < \infty, \nonumber \end{eqnarray} there is a subsequence of $\{f_k\}$ that converges in $L^q_w(\Omega)$ norm for $1\le q<N$ to a function belonging to $L^N_w(\Omega)$. If in addition $dw/d\nu \in L^{p'}_\nu(\Omega), 1/p + 1/p' =1$, the conclusion remains valid if the Poincar\'e property holds just for $Lip_{Q,p}(\Omega)$. \end{corollary}
In fact, the first conclusion in Corollary \ref{simplecor*} follows by applying Theorem \ref{simpleversion*} with ${\cal W}$ chosen to be the specific sequence $\{(f_k, \vec{g_k})\}_k$ in question, and the second statement follows from the first one and our observation above that (\ref{poincare*}) holds with ${\cal W} = W^{1,p}_{\nu,\mu}(\Omega,Q)$ if $dw/d\nu \in L^{p'}_\nu(\Omega), 1/p + 1/p' =1$, and if (\ref{poincare}) holds with ${\cal H} = Lip_{Q,p}(\Omega)$. \\
\noindent {\bf Proofs of Theorems \ref{simpleversion} and \ref{simpleversion*}.} We will concentrate on the proof of Theorem \ref{simpleversion}. The proof of Theorem \ref{simpleversion*} is similar and omitted. We begin with a useful covering lemma.
\begin{lemma}\label{help} Let the assumptions of \S 3.1 hold and $w(\Omega)< \infty$. Fix $p\in [1,\infty)$ and a set ${\cal H} \subset Lip_{Q,p}(\Omega)$. Suppose the Poincar\'e property of order $p$ in Definition \ref{poincaredef} holds for ${\cal H}$, and let $\kappa$ be as in (\ref{tri}) and $c_0$ be as in (\ref{poincare}). Then for every $\epsilon>0$, there are positive constants $r=r(\epsilon,\kappa, c_0), M= M(\kappa, c_0)$ and a finite collection $\{B_{r}(y_k)\}_k$ of $d$-balls, so that \begin{eqnarray} \label{lemma_i}&& (i) \quad w\big(\Omega\setminus \displaystyle\bigcup_{k} B_{r}(y_k)\big)<\epsilon ,\\ \label{lemma_ii}&& (ii) \quad\sum_{k} \chi_{B_{c_0r}(y_k)}(x)\leq M \quad \text{for all $x\in \Omega$}, \\
\label{lemma_iii}&& (iii) \quad ||f-f_{B_{r}(y_k),w}||_{L^{p}_w(B_{r}(y_k))}
\leq \epsilon ||(f,\nabla f)||_{W^{1,p}_{\nu,\mu}(B_{c_0r}(y_k),Q)} \end{eqnarray} for all $f \in {\cal H}$ and all $k$. Note that $M$ is independent of $\epsilon$.
\end{lemma}
\noindent {\bf Proof:} We first recall the ``swallowing'' property of $d$-balls: There is a constant $\gamma \ge 1$ depending only on $\kappa$ so that if $x, y \in \Omega$, $0< r_1\leq r_2< \infty$ and $B_{r_1}(x)\cap B_{r_2}(y)\neq\emptyset$, then \begin{eqnarray} \label{swallowing}B_{r_1}(x)\subset B_{{\gamma}r_2}(y). \end{eqnarray} Indeed, by \cite[Observation 2.1]{CW1}, $\gamma$ can be chosen to be $\kappa+2\kappa^2$.
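For the reader's convenience, here is the short argument giving $\gamma = \kappa+2\kappa^2$: pick $u\in B_{r_1}(x)\cap B_{r_2}(y)$; if $z\in B_{r_1}(x)$, then two applications of (\ref{tri}) yield
\[
d(z,y)\le \kappa\, d(z,x)+\kappa\, d(x,y) \le \kappa\, d(z,x) + \kappa^2\big(d(x,u)+d(u,y)\big)
< \kappa r_1 + \kappa^2(r_1+r_2) \le (\kappa+2\kappa^2)\,r_2,
\]
since $r_1\le r_2$.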
Fix $\epsilon>0$. Since $w(\Omega)< \infty$, there is a compact set $K\subset \Omega$ with $w(\Omega\setminus K) <\epsilon$. Let $\delta' = \delta'(\epsilon)$ be as in Definition \ref{doublingdef} for $K$, and let $\delta = \delta(\epsilon)$ be as in (\ref{poincare}). Fix $r$ with $0<r<\textrm{min}\{\delta, \delta'/(c_0\gamma)\}$ where $c_0$ is as in (\ref{poincare}). For each $x\in K$, use (\ref{new2}) to pick $s(x,r)>0$ so that $D_{s(x,r)}(x)\subset B_{r/\gamma}(x)$. Since $K$ is compact, there are finitely many points $\{x_j\}$ in $K$ so that $K\subset \cup_j B_{r/\gamma}(x_j)$. Choose a maximal pairwise disjoint subcollection $\{B_{r/\gamma}(y_k)\}$ of $\{B_{r/\gamma}({x_j})\}$. We will show that the collection $\{B_r(y_k)\}$ satisfies (\ref{lemma_i})--(\ref{lemma_iii}).
To verify (\ref{lemma_i}), it is enough to show that $K\subset \cup_k B_r(y_k)$. Let $y\in K$. Then $y\in B_{r/\gamma}(x_j)$ for some $x_j$. If $x_j = y_k$ for some $y_k$ then $y\in B_r(y_k)$. If $x_j\neq y_k$ for all $y_k,$ there exists $y_\ell$ so that $B_{r/\gamma}(y_\ell) \cap B_{r/\gamma}(x_j)\neq\emptyset$. Then $B_{r/\gamma}(x_j)\subset B_{r}(y_\ell)$ by (\ref{swallowing}), and so $y\in B_r(y_\ell)$. In either case, we obtain $y\in \displaystyle\cup_k B_r(y_k)$ as desired.
To verify (\ref{lemma_ii}), suppose that $\{k_i\}_{i=1}^L$ satisfies $\cap_{i=1}^L B_{c_0r}(y_{k_i}) \neq \emptyset$.
Then by (\ref{swallowing}), $B_{c_0r}(y_{k_i}) \subset B_{c_0{\gamma}r}(y_{k_1})$ for $1\leq i\leq L$. Since $\gamma, c_0 \ge 1$, we have $B_{r/\gamma}(y_k) \subset B_{c_0r}(y_k)$ for all $k$, and consequently $$ \cup B_{r/\gamma}(y_{k_i}) \subset \cup B_{c_0r}(y_{k_i}) \subset B_{c_0\gamma r}(y_{k_1}). $$ By construction, $\{B_{r/\gamma}(y_k)\}$ is pairwise disjoint in $k$. Since $0<r/\gamma< c_0\gamma r<\delta'$, the corresponding constant ${\cal C}$ in the definition of geometric doubling depends only on $(c_0\gamma r)/(r/\gamma) = c_0\gamma^2$, i.e., ${\cal C}$ depends only on $\kappa$ and $c_0$. Choosing $M$ to be this constant, we obtain that $L\leq M$ as desired. The same argument shows that the collection $\{B_{c_0r}(y_k)\}$ has the stronger bounded intercept property with the same bound $M$, i.e., any ball in the collection intersects at most $M-1$ others.
Finally, let us verify (\ref{lemma_iii}). Recall that $0<r<\delta$ by construction. Hence (\ref{poincare}) implies that for each $k$ and all $f \in {\cal H }$,
\begin{equation} ||f-f_{B_{r}(y_k),w}||_{L^{p}_w(B_{r}(y_k))}
\leq \epsilon ||(f,\nabla f)||_{W^{1,p}_{\nu, \mu}(B_{c_0r}(y_k),Q)}, \end{equation} as required. This completes the proof of Lemma \ref{help}. $\Box$
The proof of Theorem \ref{simpleversion} will be deduced from Theorem \ref{general} by choosing $\mathscr{X}(\Omega) = L^p_\nu(\Omega) \times {\cal L}^{p}_\mu(\Omega,Q)$ and considering the product space \[ {\cal B}_{N,{\mathscr{X}}(\Omega)} = L^N_w(\Omega)\times \left(L^p_\nu(\Omega) \times {\cal L}^{p}_\mu(\Omega,Q)\right). \] We always choose $\Sigma$ to be the Lebesgue measurable subsets of $\Omega$ and $\Sigma_0 = \{B_r(x): r>0,x\in\Omega\}$. Note that $\mathscr{X}(\Omega)$ and ${\cal B}_{N,{\mathscr{X}}(\Omega)}$ are normed linear spaces (even Banach spaces), and the norm in ${\cal B}_{N,{\mathscr{X}}(\Omega)}$ is
\begin{eqnarray} ||(h, (f,\vec{g}))||_{{\cal B}_{N,{\mathscr{X}}(\Omega)}} =
||h||_{L^N_w(\Omega)} + ||f||_{L^p_\nu(\Omega)} +
||\vec{g}||_{{\cal L}^{p}_\mu(\Omega,Q)}. \end{eqnarray} The roles played in \S 1 by $\bf{g}$ and $(f,\bf{g})$ are now played by $(f,\vec{g})$ and $(h, (f,\vec{g}))$ respectively.
Let us verify properties (A) and ($B_p$) in \S 1 with ${\mathscr{X}}(\Omega)$ and $\Sigma_0$ chosen as above. To verify (A), fix $B\in \Sigma_0$ and $(f,\vec{g})\in {\mathscr{X}}(\Omega)$. Clearly $f\chi_B \in L^p_\nu(\Omega)$ since $f \in L^p_\nu(\Omega)$. Also, \begin{eqnarray} \int_\Omega \Big((\vec{g}\chi_{B})'Q (\vec{g} \chi_{B}) \Big)^\frac{p}{2}d\mu &=& \int_{B} \Big(\vec{g}\,'Q(x)\vec{g} \Big)^\frac{p}{2}d\mu \nonumber\\ &\leq& \int_\Omega \Big(\vec{g}\,'Q(x)\vec{g} \Big)^\frac{p}{2}d\mu <\infty.\nonumber \end{eqnarray} Thus $(f,\vec{g})\chi_{B}\in {\mathscr{X}}(\Omega)$ and property (A) is proved.
To verify ($B_p$), let $\{B_l\}$ be a finite collection of $d$-balls satisfying $\sum_{l} \chi_{B_l}(x)\leq C_1$ for all $x \in\Omega$. Then if $(f,\vec{g})\in {\mathscr{X}}(\Omega)$, $$
\displaystyle\sum_{l}||(f,\vec{g}) \chi_{B_l}||^p_{\mathscr{X}(\Omega)}
= \displaystyle\sum_l \left(||f\chi_{B_l}||_{L^p_\nu(\Omega)} +
||\vec{g} \chi_{B_l}||_{{\cal L}^{p}_\mu(\Omega,Q)}\right)^p $$ $$
\le 2^{p-1} \displaystyle\sum_l \left(||f\chi_{B_l}||^p_{L^p_\nu(\Omega)} +
||\vec{g} \chi_{B_l}||^p_{{\cal L}^{p}_\mu(\Omega,Q)}\right) $$ $$
= 2^{p-1} \left[ \int_\Omega |f|^p\left(\displaystyle\sum_{l}\chi_{B_l}\right) d\nu + \int_\Omega \left(\vec{g}\,'Q\vec{g} \right)^{\frac{p}{2}} \left(\displaystyle \sum_{l}\chi_{B_l}\right) d\mu \right] $$ $$
\le 2^{p-1}C_1 \left(||f||^p_{L^p_\nu(\Omega)} +
||\vec{g}||^p_{{\cal L}^{p}_\mu(\Omega,Q)}\right) \le 2^pC_1
||(f, \vec{g})||^p_{{\mathscr{X}}(\Omega)}. $$ This verifies ($B_p$) with $C_2$ chosen to be $2^pC_1$.
The proof of Theorem \ref{simpleversion} is now very simple. Let ${\cal H}$ satisfy its hypotheses and choose ${\cal S}$ in Theorem \ref{general} to be the set \[ {\cal S} = \left\{(f, (f,\nabla f)) : f \in {\cal H}\right\}. \] Note that ${\cal S}$ is a bounded subset of ${\cal B}_{N,{\mathscr{X}}(\Omega)}$ by hypothesis (\ref{sup}). Next, in order to choose the pairs $\{E_\ell, F_\ell\}_\ell$ and verify conditions (i)--(iii) of Theorem \ref{general} (see (\ref{bddov}) and (\ref{pc})), we appeal to Lemma \ref{help}. Given $\epsilon>0$, let $\{E_\ell, F_\ell\}_\ell = \{B_r(y_k),B_{c_0r}(y_k)\}_{k}$ where $\{y_k\}$ and $r$ are as in Lemma \ref{help}. Then $E_\ell, F_\ell\in \Sigma_0$, and conditions (i)--(iii) of Theorem \ref{general} are guaranteed by Lemma \ref{help}. Finally, by noting that the set $\hat{\cal H}$ defined in (\ref{newset}) is the same as the set $\hat{\cal S}$ defined in (\ref{hatS}), the conclusion of Theorem \ref{simpleversion} follows from Theorem \ref{general}. $\Box$\\
For special domains $\Omega$ and special choices of $N$, the boundedness assumption (\ref{sup}) (or (\ref{corboundN})) can be weakened to \begin{eqnarray}\label{weaksup}
\sup_{f \in {\cal H}} \left\{||f||_{L^p_\nu(\Omega)} +
||\nabla f||_{{\cal L}^p_\mu(\Omega,Q)} \right\} =
\sup_{f \in {\cal H}} ||(f,\nabla f)||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} < \infty. \end{eqnarray} This is clearly the case for any $\Omega$ and $N$ for which there exists a global Sobolev-Poincar\'e estimate that bounds
$||f||_{L^N_w(\Omega)}$ by $||(f,\nabla f)||_{W^{1,p}_{\nu,\mu}(\Omega, Q)}$ for all $f \in {\cal H}$. We now formalize this situation assuming that $w <<\nu$. In the appendix, we consider a case when $w<<\nu$ fails.
The form of the global Sobolev-Poincar\'e estimate we will use is given in the next definition. It guarantees that (\ref{sup}) and (\ref{weaksup}) are the same when $N=p\sigma$.
\begin{defn}\label{strongsobdef} Let $1\leq p<\infty$ and ${\cal H} \subset Lip_{Q,p}(\Omega)$. Then the \emph{global Sobolev property of order $p$ holds for} ${\cal H}$ if there are constants $C>0$ and $\sigma>1$ so that \begin{eqnarray}\label{strongsob}
||f||_{L^{p\sigma}_w(\Omega)} \leq C ||(f,\nabla f)||_{\wa{p}{\Omega}}
\quad\text{for all $f \in {\cal H}$}. \end{eqnarray} \end{defn}
If $w<<\nu$, then (\ref{strongsob}) extends to $(f,\vec{g}) \in \overline{\cal H}$. In fact, let $(f,\vec{g}) \in \overline{{\cal H}}$ and choose $\{f_j\}\subset {\cal H}$ with $(f_j, \nabla f_j)\to (f,\vec{g})$ in $W^{1,p}_{\nu,\mu}(\Omega,Q)$. Then $f_j \rightarrow f$ in $L^p_{\nu}(\Omega)$ norm, and by choosing a subsequence we may assume that $f_j \rightarrow f$ a.e.-$\nu$. Hence $f_j \rightarrow f$ a.e.-$w$ because $w<<\nu$. Since each $f_j$ satisfies (\ref{strongsob}), it follows that \begin{eqnarray}\label{strongsob**}
||f||_{L^{p\sigma}_w(\Omega)} \le C ||(f,\vec{g})||_{\wa{p}{\Omega}} \quad\text{if $(f,\vec{g})\in \overline{\cal H}$}. \end{eqnarray} Under the same assumptions, namely that Definition \ref{strongsobdef} holds for a set ${\cal H} \subset Lip_{Q,p}(\Omega)$ and that $w<<\nu$, the same sequence $\{f_j\}$ as above is also bounded in $L_w^{p\sigma}(\Omega)$ norm and so satisfies $(f_j)_{E,w} \rightarrow f_{E,w}$ for measurable $E$ by the same weak convergence argument given after the statement of Theorem \ref{general}. Hence the Poincar\'e estimate in Definition \ref{poincaredef} also extends to $\overline{\cal H}$ in the same form as (\ref{poincare*}), with ${\cal W}$ there replaced by $\overline{\cal H}$, i.e.,
\begin{eqnarray} \label{poincare**} \left(\int_{B_r(y)} |f-f_{B_r(y),w}|^{p} dw \right)^\frac{1}{p} \leq \epsilon
||(f,\vec{g})||_{W_{\nu,\mu}^{1,p}(B_{c_0r}(y),Q)}\quad \text{if }(f,\vec{g}) \in \overline{\cal H}. \end{eqnarray} Hence, we immediately obtain the next result by choosing ${\cal W} =\overline{\cal H}$ and $N=p\sigma$ in Theorem \ref{simpleversion*}.
\begin{thm} \label{appth1} Let the assumptions of \S 3.1 hold, $w(\Omega)< \infty$ and $w<<\nu$. Fix $p\in [1,\infty)$ and a set ${\cal H} \subset Lip_{Q,p}(\Omega)$. Suppose the Poincar\'e and global Sobolev properties of order $p$ in Definitions \ref{poincaredef} and \ref{strongsobdef} hold for ${\cal H}$, and let $\sigma$ be as in (\ref{strongsob}). If $\{(f_k,\vec{g_k})\}$ is a sequence in $\overline{{\cal H}}$ with \begin{eqnarray} \label{w1pbound}\sup_k
||(f_k,\vec{g_k})||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} < \infty, \end{eqnarray} then $\{ f_k\}$ has a subsequence which converges in $L^q_w(\Omega)$ for $1 \le q< p\sigma$, and the limit of the subsequence belongs to $L^{p\sigma}_w(\Omega)$. \end{thm}
A result for the entire space $W^{1,p}_{\nu,\mu}(\Omega, Q)$ follows by choosing ${\cal H}= Lip_{Q,p}(\Omega)$ in Theorem \ref{appth1} or Corollary \ref{simplecor}:
\begin{corollary}\label{appcor1} Suppose that the hypotheses of Theorem \ref{appth1} hold for ${\cal H} = Lip_{Q,p}(\Omega)$. If $\{(f_k,\vec{g_k})\} \subset \wa{p}{\Omega}$ and (\ref{w1pbound}) is true then $\{ f_k\}$ has a subsequence which converges in $L^q_w(\Omega)$ for $1 \le q< p\sigma$, and the limit of the subsequence belongs to $L^{p\sigma}_w(\Omega)$. \end{corollary}
See the Appendix for analogues of Theorem \ref{appth1} and Corollary \ref{appcor1} without the assumption $w<<\nu$.
\subsection{Local Compactness Results for Degenerate Spaces}
In this section, for general bounded measurable sets $\Omega'$ with $\overline{\Omega'}\subset \Omega$, we study compact embedding of subsets of $W^{1,p}_{\nu,\mu}(\Omega,Q)$ into $L^q_w(\Omega')$ without assuming a global Sobolev estimate for $\Omega$ or $\Omega'$ and without assuming $w(\Omega) <\infty$. For some applications, see the comment at the end of the section.
The theorems below will assume a much weaker condition than the global Sobolev estimate (\ref{strongsob}), namely the following local estimate.
\begin{defn}\label{localsobdef} Let $1\le p<\infty$. We say that the \emph{local Sobolev property of order $p$} holds if for some fixed constant $\sigma >1$ and every compact set $K\subset\Omega$, there is a constant $r_1>0$ so that for all $d$-balls $B= B_r(y)$ with $y\in K$ and $0<r<r_1$, \begin{eqnarray} \label{sob}
||f||_{L^{p\sigma}_w(B)} \le C(B)\,
||(f,\nabla f)||_{W^{1,p}_{\nu,\mu}(\Omega, Q)} \quad \text{if } f\in Lip_0(B), \end{eqnarray} where $C(B)$ is a positive constant independent of $f$. We will view any $f\in Lip_0(B)$ as extended by $0$ to all of $\Omega$. \end{defn}
\begin{remark}\label{lots} (i) A more standard assumption than (\ref{sob}) is a normalized inequality that includes a factor $r$ in the gradient term on the right side: \begin{eqnarray}\label{sob2} \left(\frac{1}{w(B_r(y))}\int_{B_r(y)}
|f|^{{p}\sigma}dw\right)^\frac{1}{{p}\sigma} \leq C\left(\frac{1}{\nu(B_r(y))} \int_{B_r(y)} |f|^{p}d\nu\right)^\frac{1}{p}
\nonumber\\ + Cr \left(\frac{1}{\mu(B_r(y))}\int_{B_r(y)} |\sqrt{Q}\nabla f|^{p}d\mu\right)^\frac{1}{{p}}, \end{eqnarray} with $C$ independent of $r, y$; see e.g. \cite{SW1} and \cite{R1} in the unweighted case with $p=2$. Clearly (\ref{sob2}) is a stronger requirement than (\ref{sob}).
(ii) In the classical $n$-dimensional elliptic case for linear second order equations in divergence form, $Q$ satisfies $c|\xi|^2 \le Q(x,\xi) \le C|\xi|^2$ for some fixed constants $c, C>0$ and $d$ is the standard Euclidean metric $d(x,y) = |x-y|$. For $1\le p<n$ and $\sigma = n/(n-p)$, (\ref{sob}) then holds with $dw= d\nu =d\mu = dx$ since the corresponding version of (\ref{sob2}) is true with
$|\sqrt{Q}\nabla f|$ replaced by $|\nabla f|$. \end{remark}
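To connect this with (\ref{sob}) concretely in the classical case of Remark \ref{lots}(ii): for a Euclidean ball $B$, $1\le p<n$ and $f\in Lip_0(B)$ (extended by $0$), the Gagliardo--Nirenberg--Sobolev inequality together with $c|\xi|^2\le Q(x,\xi)$ gives
\[
||f||_{L^{\frac{np}{n-p}}(B)} \le C(n,p)\, ||\nabla f||_{L^p(B)} \le \frac{C(n,p)}{\sqrt{c}}\, ||(f,\nabla f)||_{W^{1,p}_{dx,dx}(\Omega,Q)},
\]
so that (\ref{sob}) holds with $\sigma = n/(n-p)$ and a constant $C(B)$ which can even be taken independent of $B$.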
We will also use a notion of Lipschitz cutoff functions on $d$-balls:
\begin{defn}\label{cutoff} For $s\geq 1$, we say that the \emph{cutoff property of order $s$} holds for $\mu$ if for each compact $K\subset\Omega$, there exists $\delta =\delta(K)>0$ so that for every d-ball $B_r(y)$ with $y\in K$ and $0<r<\delta,$ there is a function $\phi\in Lip_0(\Omega)$ and a constant $\gamma=\gamma(y,r) \in (0,r)$ satisfying
\noindent (i) $\;$ $0\leq \phi \leq 1$ in $\Omega$, \\ \noindent (ii) $\;$ supp $\phi \subset B_r(y)$ and $\phi =1$ in $B_\gamma(y)$, \\ \noindent (iii) $\;$ $\nabla \phi \in {\cal L}^s_\mu(\Omega,Q)$. \end{defn}
Since $\mu$ is always assumed to be locally finite, the strongest form of Definition \ref{cutoff}, namely the version with $s=\infty$, automatically holds if $Q$ is locally bounded in $\Omega$ and (\ref{eucled2}) is true; recall that we always assume (\ref{new2}). To see why, fix a compact set $K\subset \Omega$ and consider $B_r(y)$ with $y\in K$ and $r<1$. Use (\ref{new2}) to choose open Euclidean balls $D', D$ with common center $y$ such that $\overline{D'} \subset D \subset B_r(y) (\subset \Omega \text{ by definition})$. Construct a smooth function $\phi$ in $\Omega$ with support in $D$ such that $0\le \phi \le 1$ and $\phi = 1$ on $D'$. By (\ref{eucled2}), there is $\gamma >0$ such that $B_\gamma(y)\subset D'$. Then $\phi$ satisfies parts (i)-(iii) of Definition \ref{cutoff} with $s=\infty$; for (iii), we use the fact that $\nabla\phi$ has compact support in $\Omega$ together with local boundedness of $Q$ and local finiteness of $\mu$.
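For instance, if $D=D_\tau(y)$ in the construction just described, one admissible choice (Lipschitz rather than smooth, which suffices for Definition \ref{cutoff}) is
\[
\phi(x) = \min\Big\{1,\ \max\Big\{0,\ 2-\frac{3|x-y|}{\tau}\Big\}\Big\},
\]
which equals $1$ on $D_{\tau/3}(y)$, vanishes outside $D_{2\tau/3}(y)$, satisfies $0\le\phi\le1$ and $|\nabla \phi|\le 3/\tau$ a.e., and hence $|\sqrt{Q}\nabla\phi|^2 \le (3/\tau)^2\, ||Q||_{L^\infty(D_\tau(y))}$ a.e. in its support, where $||Q||_{L^\infty(D_\tau(y))}$ denotes the essential supremum of the matrix norm of $Q$ over $D_\tau(y)$.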
To compensate for the lack of a global Sobolev estimate, given ${\cal H} \subset Lip_{Q,p}(\Omega)$, we will assume in conjunction with the cutoff property of some order $s\ge p\sigma'$ that for every compact set $K\subset \Omega$, there exists $\delta = \delta(K) >0$ such that for every $d$-ball $B$ with center in $K$ and radius less than $\delta$, there is a constant $C_1(B)$ so that \begin{eqnarray}\label{globpc}
||f||_{L^{pt'}_\mu(B)} \leq C_1(B)\, ||(f, \nabla
f)||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} \quad \text{if } f\in {\cal H}, \end{eqnarray} where $t=s/p$ and $1/t+ 1/t'=1$. Note that $1\leq t'\leq \sigma$ since $s\ge p\sigma'$.
\begin{remark} \label{extracondition} Inequality (\ref{globpc}) is different in nature from (\ref{sob}) even if $t'=\sigma$ and $w=\mu$ since there is a restriction on supports in (\ref{sob}) but not in (\ref{globpc}). However, (\ref{globpc}) implies (\ref{sob}) when
$s=p\sigma'$, $w=\mu$ and ${\cal H}$ contains all Lipschitz functions with support in any ball. On the other hand, (\ref{globpc}) is often automatic if $\mu=\nu$. For example, as mentioned earlier, if $Q$ is locally bounded and (\ref{eucled2}) is true, then the cutoff property holds with $s=\infty$, giving $t= \infty$ and $t'=1$. In this case, when $\mu = \nu$, the left side of (\ref{globpc}) is clearly smaller than the right side (in fact smaller than $||f||_{L^p_\nu(\Omega)}$).
\end{remark}
We can now state our main local result.
\begin{thm}\label{newappth2} Let the assumptions of \S 3.1 and condition (\ref{eucled2}) hold, and let $w<<\nu$. Fix $p\in [1,\infty)$ and suppose the Poincar\'e property of order $p$ in Definition \ref{poincaredef} holds for a fixed set ${\cal H}\subset Lip_{Q,p}(\Omega)$ and the local Sobolev property of order $p$ in Definition \ref{localsobdef} holds. Assume the cutoff property of some order $s\geq p\sigma'$ is true for $\mu$, with $\sigma$ as in (\ref{sob}), and that (\ref{globpc}) holds for ${\cal H}$ with $t=s/p$. Then for every $\{(f_k,\vec{g_k})\} \subset \overline{\cal H}$ that is bounded in $W^{1,p}_{\nu,\mu}(\Omega,Q)$ norm, there is a subsequence $\{f_{k_i}\}$ of $\{f_k\}$ and an $f \in L^{p\sigma}_{w,loc}(\Omega)$ such that $f_{k_i} \rightarrow f$ pointwise a.e.-$w$ in $\Omega$ and in $L^q_w(\Omega')$ norm for all $1\le q <p\sigma$ and every bounded measurable $\Omega'$ with $\overline{\Omega'} \subset \Omega$. \end{thm}
See the Appendix for a version of Theorem \ref{newappth2} without assuming $w<<\nu$.
Recall that $\overline{\cal H} = W^{1,p}_{\nu,\mu}(\Omega,Q)$ if ${\cal H} = Lip_{Q,p}(\Omega)$. In the important case when $Q \in L^\infty_{loc}(\Omega)$, Theorem \ref{newappth2} and Remark \ref{extracondition} immediately imply the next result.
\begin{corollary}\label{newappcor2} Let $Q$ be locally bounded in $\Omega$ and suppose that (\ref{eucled2}) holds. Fix $p\in [1,\infty)$, and with $w=\nu=\mu$, assume the Poincar\'e property of order $p$ holds for $Lip_{Q,p}(\Omega)$ and the local Sobolev property of order $p$ holds. Then for every bounded sequence $\{(f_k,\vec{g_k})\} \subset W^{1,p}_{w,w}(\Omega,Q)$, there is a subsequence $\{f_{k_i}\}$ of $\{f_k\}$ and a function $f \in L^{p\sigma}_{w,loc}(\Omega)$ such that $f_{k_i} \rightarrow f$ pointwise a.e.-$w$ in $\Omega$ and in $L^q_w(\Omega')$ norm, $1\le q <p\sigma$, for every bounded measurable $\Omega'$ with $\overline{\Omega'} \subset \Omega$.
\end{corollary}
\noindent{\bf Proof of Theorem \ref{newappth2}:} We begin by using the cutoff property in Definition \ref{cutoff} to construct a partition of unity relative to $d$-balls and compact subsets of $\Omega$.
\begin{lemma}\label{partitionofunity} Fix $\Omega$ and $s\geq 1$, and suppose the cutoff property of order $s$ holds for $\mu$. If $K$ is a compact subset of $\Omega$ and $r>0$, there is a finite collection of $d$-balls $\{B_r(y_j)\}$ with $y_j \in K$ together with Lipschitz functions $\{\psi_j\}$ on $\Omega$ such that $supp \,\psi_j \subset B_r(y_j)$ and
\noindent (a) $K\subset \displaystyle\bigcup_j B_r(y_j)$,\\ \noindent (b) $0\leq \psi_j\leq 1$ in $\Omega$ for each $j$, and $\displaystyle\sum_j \psi_j(x) =1$ for all $x\in K$,\\ \noindent (c) $\nabla \psi_j \in {\cal L}^s_\mu(\Omega,Q)$ for each $j$. \end{lemma}
\noindent {\bf Proof:} The argument is an adaptation of one in \cite{Ru} for the usual Euclidean case. The authors thank D. D. Monticelli for related discussions. Fix $r>0$ and a compact set $K\subset\Omega$, and set $\beta = \min\{\delta/2,r\}$ for $\delta= \delta(K)$ as in Definition \ref{cutoff}. Since $\beta< \delta$, Definition \ref{cutoff} implies that for each $y\in K$, there exist $\gamma(y) \in (0,\beta)$ and $\phi_y(x)\in Lip_0(\Omega)$ so that $0\le \phi_y\le 1$ in $\Omega$, $supp \,\phi_y \subset B_{\beta}(y)$, $\phi_y = 1$ in $B_{\gamma(y)}(y)$ and $\nabla\phi_y \in {\cal L}^s_\mu(\Omega,Q)$. The collection $\{B_{\gamma(y)}(y)\}_{y\in K}$ covers $K$, so by (\ref{new2}) and the compactness of $K$, there is a finite subcollection $\{B_{\gamma(y_j)}(y_j)\}_{j=1}^m$ whose union covers $K$. Part (a) follows since $\gamma(y_j)<r$. Next let $\phi_j(x)=\phi_{y_j}(x)$ and define $\{\psi_j\}_{j=1}^m$ as follows: set $\psi_1 = \phi_1$ and $\psi_j=(1-\phi_1)\cdots (1-\phi_{j-1})\phi_j$ for $j=2,\ldots,m$. Then each $\psi_j$ is a Lipschitz function in $\Omega$, and $supp \,\psi_j \subset supp \,\phi_j \subset B_r(y_j)$ since $\beta \le r$. Also, $0\leq \psi_j\leq 1$ in $\Omega$ and \begin{eqnarray} \displaystyle\sum_{j=1}^m \psi_j(x)= 1-\prod_{j=1}^m (1-\phi_j(x)), \quad x\in \Omega.\nonumber \end{eqnarray} If $x\in K$ then $x\in B_{\gamma(y_j)}(y_j)$ for some $j$. Hence some $\phi_j(x)=1$ and consequently $\sum_j \psi_j(x)= 1$. This proves part (b). Lastly, we use Leibniz's product rule to compute $\nabla \psi_j$ and then apply Minkowski's inequality $j$ times to obtain part (c) from the fact that $\nabla\phi_j\in {\cal L}^s_\mu(\Omega,Q)$. $\Box$\\
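For completeness, the identity for the $\psi_j$ used in part (b) of the preceding proof follows by induction on $m$: assuming it for $m-1$ factors,
\[
\sum_{j=1}^m \psi_j = \Big(1-\prod_{j=1}^{m-1}(1-\phi_j)\Big) + \Big(\prod_{j=1}^{m-1}(1-\phi_j)\Big)\phi_m
= 1-\prod_{j=1}^{m-1}(1-\phi_j)\,(1-\phi_m) = 1-\prod_{j=1}^{m}(1-\phi_j).
\]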
The next lemma shows how the local Sobolev estimate (\ref{sob}) and Lemma \ref{partitionofunity} lead to a local analogue of the global Sobolev estimate (\ref{strongsob}).
\begin{lemma}\label{localglobal} Let $\Omega'$ be a bounded measurable set with $\overline{\Omega'} \subset \Omega$. Suppose that both Definition \ref{localsobdef} and the cutoff property for $\mu$ of some order $s\ge p\sigma'$ hold, and also that (\ref{globpc}) holds with $t=s/p$ for a fixed set ${\cal H} \subset Lip_{loc}(\Omega)$. Then there is a finite constant $C(\Omega')$ such that \begin{eqnarray}\label{localglobalsob}
||f||_{L^{p\sigma}_w(\Omega')} \le C(\Omega')\, ||(f,\nabla
f)||_{W^{1,p}_{\nu,\mu}(\Omega,Q)}\quad \text{if } f \in {\cal H}. \end{eqnarray} \end{lemma}
\noindent{\bf Proof:} Let $r_1$ be as in Definition \ref{localsobdef} relative to the compact set $\overline{\Omega'}\subset \Omega$, and let $\delta$ be as in (\ref{globpc}). Use Lemma \ref{partitionofunity} to cover $\overline{\Omega'}$ by the union of a finite number of $d$-balls $\{B_j\}$ each of radius smaller than $\min\{r_1,\delta\}$. Associated with this cover is a collection $\{\psi_j\}\subset Lip(\Omega)$ with $supp \,\psi_j\subset B_j$, $\sum_j\psi_j=1$ in $\Omega'$, and $\nabla\psi_j\in {\cal L}^s_\mu(\Omega,Q)$. If $f \in {\cal H}$, then
\begin{eqnarray}\label{3.6-1} ||f||_{L^{p\sigma}_w(\Omega')} =
||f\sum_j \psi_j ||_{L^{p\sigma}_w(\Omega')} \leq \displaystyle\sum_j ||\psi_j f||_{L^{p\sigma}_w(B_j)}. \end{eqnarray} Since $\psi_j f\in Lip_0(B_j)$, (\ref{sob}) and the product rule give \begin{eqnarray}\label{3.6-2}
&& ||\psi_j f||_{L^{p\sigma}_w(B_j)}\le C(B_j)\,||(\psi_j f,
\nabla(\psi_jf))||_{W^{1,p}_{\nu,\mu}(B_j,Q)} \nonumber\\
&=& C(B_j)\left(||\psi_j f||_{L^{p}_\nu(B_j)} + ||\sqrt{Q}\nabla(\psi_j f)||_{L^p_\mu(B_j)}\right) \nonumber\\
&\leq& C(B_j)\left(||\psi_j f||_{L^p_\nu(B_j)} + ||\psi_j\sqrt{Q}\nabla f||_{L^p_\mu(B_j)} + ||f\sqrt{Q}\nabla \psi_j||_{L^p_\mu(B_j)} \right)
\nonumber\\ &\leq& C(B_j)\left(||(f,\nabla f)||_{\wa{p}{\Omega}} +
||f\sqrt{Q}\nabla \psi_j||_{L^p_\mu(B_j)}\right),
\end{eqnarray} where we have used $|\psi_j| \le 1$. We will estimate the second term on the right of (\ref{3.6-2}) by using (\ref{globpc}). Recall that $t=s/p \ge \sigma'$ and $1/t + 1/t'=1$. Let \[
\overline{C} = \max_j ||\sqrt{Q}\nabla\psi_j||_{L_\mu^s(B_j)}. \] By H\"older's inequality and (\ref{globpc}), \begin{eqnarray} \label{3.6-3}
||f\sqrt{Q}\nabla \psi_j||_{L^p_\mu(B_j)} &\leq&
||f||_{L^{pt'}_\mu(B_j)} ||\sqrt{Q}\nabla\psi_j||_{L_\mu^s(B_j)}
\nonumber\\
&\leq& \overline{C} C_1(B_j)||(f,\nabla f)||_{\wa{p}{\Omega}}. \end{eqnarray} Combining this with (\ref{3.6-2}) gives
\begin{eqnarray} ||\psi_j f||_{L^{p\sigma}_w(B_j)} &\leq& C(B_j)\big(1+\overline{C}
C_1(B_j)\big)||(f,\nabla f)||_{\wa{p}{\Omega}}.\nonumber \end{eqnarray} By (\ref{3.6-1}), for any $f\in {\cal H}$,
\begin{eqnarray} \nonumber||f||_{L^{p\sigma}_w(\Omega')}&\leq& ||
(f,\nabla f)||_{\wa{p}{\Omega}}\displaystyle\sum_j C(B_j)\big(1+ \overline{C}
C_1(B_j)\big) \nonumber\\ \nonumber &=& C(\Omega') ||(f,\nabla f)||_{\wa{p}{\Omega}}, \end{eqnarray} which completes the proof of Lemma \ref{localglobal}. \qed \\
Theorem \ref{newappth2} follows from Lemma \ref{localglobal} and Theorem \ref{generallocal}. We will sketch the proof, omitting some familiar details. By choosing a sequence of compact sets increasing to $\Omega$ and using a diagonalization argument, it is enough to prove the conclusion for a fixed measurable $\Omega'$ with compact closure $\overline{\Omega'}$ in $\Omega$. Fix such an $\Omega'$ and select a bounded open $\Omega''$ with $\overline{\Omega'} \subset \Omega'' \subset \overline{\Omega''} \subset \Omega.$ For ${\cal H}$ as in Theorem \ref{newappth2}, apply Lemma \ref{localglobal} to the set $\Omega''$ to obtain \begin{eqnarray}\label{Omegaprimeprime}
||f||_{L^{p\sigma}_w(\Omega'')} \le C(\Omega'')\, ||(f,\nabla
f)||_{W^{1,p}_{\nu,\mu}(\Omega,Q)},\quad f \in {\cal H}. \end{eqnarray} By assumption, $w<<\nu$, so (\ref{Omegaprimeprime}) extends to $\overline{\cal H}$ in the form \begin{eqnarray}\label{Omegaprimeprimeext}
||f||_{L^{p\sigma}_w(\Omega'')} \le C(\Omega'')\,
||(f,\vec{g})||_{W^{1,p}_{\nu,\mu}(\Omega,Q)},\quad (f,\vec{g})\in
\overline{\cal H}. \end{eqnarray}
Let $\epsilon >0$. By hypothesis, ${\cal H}$ satisfies the Poincar\'e estimate (\ref{poincare}) for balls $B_r(y)$ with $y \in \overline{\Omega'}$ and $r<\delta(\epsilon, \Omega')$. Since the Euclidean distance between $\overline{\Omega'}$ and $\partial \Omega''$ is positive and we have assumed (\ref{eucled2}), we may also assume by Remark \ref{various}(ii) that all such balls lie in the larger set $\Omega''$. Next we claim that (\ref{poincare}) extends to $\overline{\cal H}$, i.e., \begin{eqnarray}\label{poincareext}
\left(\int_{B_r(y)} |f-f_{B_r(y),w}|^p dw\right)^{\frac{1}{p}} \le
\epsilon ||(f,\vec{g})||_{W^{1,p}_{\nu,\mu}(B_{c_0r}(y),Q)} \quad \text{if } (f,\vec{g}) \in \overline{\cal H}, \end{eqnarray} for the same class of balls $B_r(y)$. In fact, if $(f,\vec{g}) \in \overline{\cal H}$ and $\{f^j\} \subset {\cal H}$ satisfies $(f^j, \nabla f^j) \rightarrow (f,\vec{g})$ in $W^{1,p}_{\nu,\mu}(\Omega,Q)$ norm, then there is a subsequence, still denoted $\{f^j\}$, with $f^j \rightarrow f$ a.e.-$\nu$ in $\Omega$, and so with $f^j \rightarrow f$ a.e.-$w$ in $\Omega$ since $w<<\nu$. By (\ref{Omegaprimeprime}), $\{f^j\}$ is bounded in $L^{p\sigma}_w(\Omega'')$. Hence, since the balls in (\ref{poincareext}) satisfy $B_r(y) \subset \Omega''$, we obtain $f^j_{B_r(y),w} \rightarrow f_{B_r(y),w}$ by our usual weak convergence argument, and (\ref{poincareext}) follows by Fatou's lemma from its analogue (\ref{poincare}) for the $(f^j, \nabla f^j)$.
Now let $\{(f_k,\vec{g_k})\} \subset \overline{\cal H}$ be bounded in $W^{1,p}_{\nu,\mu}(\Omega,Q)$ norm and apply Theorem \ref{generallocal} with ${\mathscr{X}}(\Omega) = L^p_\nu(\Omega) \times {\cal L}^p_\mu(\Omega,Q)$ to the set ${\cal S}$ defined by \begin{eqnarray} \nonumber {\cal S} = \left\{\big(f_k, (f_k,\vec{g_k})\big)\right\}_k, \end{eqnarray} and with $\{(E_\ell^\epsilon, F_\ell^\epsilon)\}_\ell$ chosen to be a finite number of pairs $\{(B_r(y_\ell), B_{c_0r}(y_\ell))\}_\ell$ as in (\ref{poincareext}), but now with $r$ fixed depending on $\epsilon$, and with $\Omega' \subset \cup_\ell B_r(y_\ell)$. Such a finite choice exists by (\ref{new2}) and the Heine-Borel theorem since $\overline{\Omega'}$ is compact; cf. the proof of Lemma \ref{help}. Since $\Omega'$ is completely covered by $\cup_\ell E^\epsilon_\ell$, assumption (i) of Theorem \ref{generallocal} is fulfilled. Moreover, the collection $\{F_\ell^\epsilon\}$ has bounded overlaps uniformly in $\epsilon$ by the geometric doubling argument used to prove Lemma \ref{help}.
Finally, (\ref{bddlocal}) follows from (\ref{Omegaprimeprimeext}) applied to the bounded sequence $\{(f_k,\vec{g_k})\}$ since $\cup_{\ell, \epsilon} E_\ell^\epsilon \subset \Omega''$. Thus Theorem \ref{generallocal} implies that there is a subsequence $\{f_{k_i}\}$ of $\{f_k\}$ and a function $f\in L^{p\sigma}_w(\Omega')$ such that $f_{k_i} \rightarrow f$ a.e.-$w$ in $\Omega'$ and in $L^q_w(\Omega')$ norm, $1\le q < p\sigma$. This completes the proof of Theorem \ref{newappth2}. \qed
For functions which are compactly supported in a fixed bounded measurable $\Omega'$ with $\overline{\Omega'}\subset\Omega$, the proof of Theorem \ref{newappth2} can be modified to yield compact embedding into $L^q_w(\Omega')$ for the same $\Omega'$ without assuming (\ref{eucled2}). Of course we always require (\ref{new2}). Given such $\Omega'$ and a set ${\cal H} \subset Lip_{Q,p,0}(\Omega')$, we may view ${\cal H}$ as a subset of $Lip_{Q,p,0}(\Omega)$ simply by extending functions in ${\cal H}$ to all of $\Omega$ as $0$ in $\Omega\setminus\Omega'$. In this way, the proof of Theorem \ref{newappth2} works without (\ref{eucled2}). For example, choosing ${\cal H} = Lip_{Q,p,0}(\Omega')$, we obtain
\begin{thm}\label{last} Let the assumptions of \S 3.1 hold and $w<<\nu$. Let $\Omega'$ be a bounded measurable set with $\overline{\Omega'} \subset \Omega$. Fix $p\in [1,\infty)$ and suppose the Poincar\'e property of order $p$ in Definition \ref{poincaredef} holds for $Lip_{Q,p,0}(\Omega')$, with $Lip_{Q,p,0}(\Omega')$ viewed as a subset of $Lip_{Q,p,0}(\Omega)$ using extension by $0$, and suppose the local Sobolev property of order $p$ in Definition \ref{localsobdef} holds. Assume the cutoff property of some order $s\geq p\sigma'$ is true for $\mu$, with $\sigma$ as in (\ref{sob}), and that (\ref{globpc}) holds for $Lip_{Q,p,0}(\Omega')$ with $t=s/p$. Then for every sequence $\{(f_k,\vec{g_k})\} \subset W^{1,p}_{\nu,\mu,0}(\Omega',Q)$ which is bounded in $W^{1,p}_{\nu,\mu}(\Omega',Q)$ norm, there is a subsequence $\{f_{k_i}\}$ of $\{f_k\}$ and a function $f \in L^{p\sigma}_w(\Omega')$ such that $f_{k_i} \rightarrow f$ pointwise a.e.-$w$ in $\Omega'$ and in $L^q_w(\Omega')$ norm, $1\le q <p\sigma$. \end{thm}
The full force of the local Sobolev estimate in Definition \ref{localsobdef} is not needed to prove Theorem \ref{last}. In fact, it is enough to assume that (\ref{sob}) holds only for balls centered in the fixed compact set $\overline{\Omega'}$.
The proof of Theorem \ref{last} is like that of Theorem \ref{newappth2}, working with the set $\Omega'$ that occurs in the hypotheses of Theorem \ref{last}. However, now (\ref{localglobalsob}) in the conclusion of Lemma \ref{localglobal} (with ${\cal H} = Lip_{Q,p,0}(\Omega')$) remains valid if $\Omega'$ is replaced on the left side by $\Omega$ since every $f \in Lip_{Q,p,0}(\Omega')$ vanishes on $\Omega\setminus\Omega'$. The resulting estimate serves as a replacement for (\ref{Omegaprimeprime}), so it is not necessary to demand that the $E_\ell^\epsilon$ are subsets of a compact set $\overline{\Omega''} \subset \Omega$. Hence (\ref{eucled2}) is no longer required. Finally, the Poincar\'e estimate extends as usual to $W^{1,p}_{\nu,\mu,0}(\Omega', Q)$ (the closure of $Lip_{Q,p,0}(\Omega'))$, and due to support considerations, the $E_\ell^\epsilon$ can be restricted to subsets of $\Omega'$ by replacing $E_\ell^\epsilon$ by $E_\ell^\epsilon \cap \Omega'$; this guarantees $w(E_\ell^\epsilon)< \infty$ since $w$ is locally finite by hypothesis.
Recalling the comments made immediately after Definition \ref{cutoff} and in Remark \ref{extracondition}, we obtain a useful special case of Theorem \ref{last}:
\begin{corollary}\label{lastcor} Let the assumptions of \S 3.1 hold, $\Omega$ and $Q$ be bounded, $w=\nu=\mu$ and (\ref{eucled2}) be true. Let $\Omega'$ be a measurable set with $\overline{\Omega'} \subset \Omega$. Fix $p\in [1,\infty)$ and suppose the Poincar\'e property of order $p$ in Definition \ref{poincaredef} holds for $Lip_{Q,p,0}(\Omega')$ and the local Sobolev property of order $p$ in Definition \ref{localsobdef} holds. Then for every $\{(f_k,\vec{g_k})\} \subset W^{1,p}_{\nu,\mu,0}(\Omega',Q)$ which is bounded in $W^{1,p}_{\nu,\mu}(\Omega,Q)$ norm, there is a subsequence $\{f_{k_i}\}$ of $\{f_k\}$ and a function $f \in L^{p\sigma}_w(\Omega')$ such that $f_{k_i} \rightarrow f$ pointwise a.e.-$w$ in $\Omega'$ and in $L^q_w(\Omega')$ norm, $1\le q <p\sigma$. \end{corollary}
In case $p=2$ and all measures are Lebesgue measure, Corollary \ref{lastcor} is used in \cite{R1} to show existence of weak solutions to Dirichlet problems for some linear subelliptic equations. It is also used in \cite{R2} to derive the global Sobolev inequality
\begin{eqnarray} ||f||_{L^{2\sigma}(\Omega')} \leq C
\Big(\int_{\Omega'}|\sqrt{Q}\nabla f|^2dx\Big)^{1/2} \end{eqnarray} for open $\Omega'$ with $\overline{\Omega'}\subset\Omega$ from the local estimate (\ref{sob2}).
\section{Precompact subsets of $L^N$ in a quasimetric space} \setcounter{equation}{0}
\setcounter{theorem}{0}
In this section, we will consider the situation of an open set $\Omega$ in a topological space $X$ when $X$ is also endowed with a quasimetric $d$. As there is no easy way to define Sobolev spaces on general quasimetric spaces, this section concentrates on establishing a simple criterion not directly related to Sobolev spaces ensuring that bounded subsets of $L^N_w(\Omega)$ are precompact in $L^q_w(\Omega)$ when $1\le q<N\le \infty$.
We begin by further describing the setting for our result. The topology on $X$ is expressed in terms of a fixed collection ${\cal T}$ of subsets of $X$ which may not be related to the quasimetric $d$. Thus when we say that a set ${\cal O}\subset X$ is \emph{open}, we mean that ${\cal O}\in {\cal T}$. Given an open $\Omega$, we will assume each of the following: \begin{eqnarray} \nonumber &&(i)\;\;\; \mbox{$\forall x\in X$ and $r>0$, the $d$-ball
$B_r(x) = \{y\in X\;:\; d(x,y)<r\}$ is a Borel set;}\\ \nonumber &&(ii)\;\;\mbox{$\forall x\in X$ and $r>0$, there is an open
set ${\cal O}$ so that $x\in{\cal O}\subset B_r(x)$;}\\ \nonumber &&(iii)\;\;\mbox{if $X\not = \Omega$, then $\forall x\in
\Omega$, $d(x,\Omega^c)=\inf\{ d(x,y): y\in \Omega^c\} >0$.} \end{eqnarray}
Property $(ii)$ serves as a substitute for (\ref{new2}).
Unlike the situation in \S 3, $d$-balls centered in $\Omega$ may not be subsets of $\Omega$ unless $X= \Omega$. However, we note the following fact. \begin{remark}\label{remark1} Properties $(ii)$ and $(iii)$ guarantee that for any compact set $K\subset \Omega$, there exists ${\varepsilon}(K)>0$ such that $B_r(x)\subset \Omega$ if $x\in K$ and $r < {\varepsilon}(K)$. In fact, first note that for any $x\in \Omega$, $(iii)$ implies that the $d$-ball $B(x)$ with center $x$ and radius $r_x = d(x,\Omega^c)/(2\kappa)$ lies in $\Omega$. If $K$ is a compact set in $\Omega$, $(ii)$ shows that $K$ can be covered by a finite number of such balls $\{B(x_i)\}$. With ${\varepsilon}(K)$ chosen to be a suitably small multiple (depending on $\kappa$) of $\min\{r_{x_i}\}$, the remark then follows easily from the swallowing property of $d$-balls. \end{remark}
Further, we assume that $(\Omega,d)$ satisfies the local geometric doubling condition in Definition \ref{doublingdef}, i.e., for each compact set $K\subset \Omega$, there exists $\delta'(K)>0$ such that for all $x\in K$ and all $0<r'<r < \delta'(K)$, the number of disjoint $d$-balls of common radius $r'$ contained in $B_{r}(x)$ is at most a constant ${\cal C}_{r/r'}$ depending on $r/r'$ but not on $K$. We will choose $\delta'(K)\le {\varepsilon}(K)$ in the above.
With this framework in force, we now state the main result of the section.
\begin{thm}\label{mainmetric} Let $\Omega\subset X$ be as above, and let $w$ be a finite Borel measure on $\Omega$ such that given any $\epsilon >0$, there is a compact set $K \subset \Omega$ with $w(\Omega\setminus K)< \epsilon$. Let $1\leq p<\infty$ and $1<N\leq \infty$, and suppose ${\cal{S}}\subset L^N_w(\Omega)$ has the property that for any compact set $K\subset \Omega$, there exists $\delta_K>0$ such that \begin{equation}\label{pointype}
\|f-f_{B,w}\|_{L^p_w(B)}\le b(f,B) \ \mbox{ if $f\in {\cal{S}}$ and $B=B_r(x)$, $x\in K$, $0<r<\delta_K$}, \end{equation} where $b(f,B)$ is a nonnegative ball set function. Further, suppose there is a constant $c_0\ge 1$ so that for every $\ep>0$ and every compact set $K\subset \Omega$, there exists $\tilde{\delta}_{\ep,K}>0$ such that \begin{equation}\label{overlap} \sum_{B\in \F} b(f,B)^p \le \ep^p \ \mbox{ for all $f\in {\cal{S}}$ } \end{equation} for every finite family $\F=\{B\}$ of $d$-balls centered in $K$ with common radius less than $\tilde{\delta}_{\ep,K}$ for which $\{c_0B\}$ is a pairwise disjoint family of subsets of $\Omega$. Then any sequence in ${\cal{S}}$ that is bounded in $L^N_w(\Omega)$ has a subsequence that converges in $L^q_w(\Omega)$ for $1\le q<N$ to a function in $L^N_w(\Omega)$. \end{thm}
{\bf Proof. } Let $\ep>0$ and choose a compact set $K\subset \Omega$ with $w(\Omega\setminus K)<\ep$. Next, for $c_0\ge 1$, as in the proof of Lemma \ref{help} there is a positive constant $r=r(\ep,K,c_0)< \min\{\delta_{K},\tilde{\delta}_{\ep,K},\delta'(K),{\varepsilon}(K)/(\gamma c_0)\}$ (see (\ref{pointype}),(\ref{overlap}), Definition \ref{doublingdef} and Remark \ref{remark1}), where $\gamma=\k+2\k^2$ with $\k$ as in (\ref{tri}), and a finite family $\{B_r(y_k)\}_k$ of $d$-balls centered in $K$ satisfying $K\subset \cup_k B_r(y_k)$ and whose dilates $\{B_{c_0r}(y_k)\}_k$ lie in $\Omega$ and have the bounded intercept property (with intercept constant $M$ independent of $\ep$). Since $\{B_{c_0r}(y_k)\}_k$ has bounded intercepts with bound $M$, it can be written as the union of at most $M$ families of disjoint $d$-balls; see e.g. the proof of \cite[Lemma 2.5]{CW1}. By (\ref{overlap}), we conclude that $$ \sum_{k} b(f,B_r(y_k))^p\le M\ep^p. $$ Theorem \ref{mainmetric} then follows immediately from Theorem \ref{absgeneral}; see also Remark 1.3(1). \qed
As an application of Theorem \ref{mainmetric} we present a version of \cite[Theorem 8.1]{HK2} in the case $p\geq 1$. Our version improves the one in \cite{HK2} by allowing two different measures and by relaxing the assumptions made about embedding and doubling. Furthermore, while the analogue in \cite{HK2} of our (\ref{pwitha}) uses only the $L^1_w(B)$ norm on the left side, it automatically self-improves to the $L^p_w(B)$ norm due to the doubling assumption, with a further fixed enlargement of the ball $c_0B$ on the right side; see e.g. \cite[Theorem 5.1]{HK2}.
\begin{corollary} \label{betterthanHK} Let $X,d,\Omega,w$ be as above,
and let $\mu$ be a Borel measure on $\Omega$. Fix $1\leq
p<\infty$, $1<N\leq \infty$ and $c_0\geq 1$. Consider a
sequence of pairs $\{(f_i,g_i)\}\subset L^N_w(\Omega)\times
L^p_\mu(\Omega)$ such that for any compact set $K\subset \Omega$,
there exists $\bar{\delta}_K>0$ with
\begin{eqnarray}\label{pwitha} ||f_i-(f_i)_{B,w}||_{L^p_w(B)} \leq
a_*(B)||g_i||_{L^p_\mu(c_0B)} \end{eqnarray} for all $i$ and all $d$-balls $B$ centered in $K$ with $c_0B
\subset\Omega$ and $r(B)<\bar{\delta}_K$, where $a_*(B)$ is a
non-negative ball set function satisfying \begin{eqnarray}\label{a_*} \displaystyle\lim_{r\rightarrow 0}\Big\{ \sup_{y\in K}a_*(B_r(y))\Big\}=0. \end{eqnarray} Then if $\{f_i\}$ and $\{g_i\}$ are bounded in $L^N_w(\Omega)$ and $L^p_\mu(\Omega)$ respectively, $\{f_i\}$ has a subsequence converging in $L^q_w(\Omega)$ for $1\leq q<N$ to a function belonging to $L^N_w(\Omega)$. \end{corollary}
{\bf Proof.} Given $\ep>0$ and compact set $K\subset \Omega$, use (\ref{a_*}) to choose $r_0>0$ so that $a_*(B_r) < \epsilon/\beta$ for any $d$-ball $B_r$ centered in $K$ with $r<r_0$, where $\beta =
\sup_i||g_i||_{L^p_\mu(\Omega)}<\infty$. In Theorem \ref{mainmetric}, choose ${\cal{S}} = \{f_i\}$, $\delta_K = \overline{\delta}_K$,
$b(f_i,B) = a_*(B)||g_i||_{L^p_\mu(c_0B)}$ and $$ \tilde{\delta}_{\ep,K} = \min\{\overline{\delta}_K, \delta'(K) ,r_0,{\varepsilon}(K)/c_0\}. $$ If $B$ is a $d$-ball with center in $K$ and $r(B) <\tilde{\delta}_{\ep,K}$, then $c_0B\subset \Omega$. Hence,
\begin{eqnarray}\label{4.10} \sum_{B\in{\cal F}}\big(a_*(B)||g_i||_{L^p_\mu(c_0B)}
\big)^p\leq \ep^p||g_i||_{L^p_\mu(\Omega)}^p/\beta^p \le \ep^p\nonumber \end{eqnarray} for every ${\cal F}$ as in Theorem \ref{mainmetric}. The conclusion now follows from Theorem \ref{mainmetric}.\qed
\begin{remark} \begin{enumerate}
\item The $g_i$ in (\ref{pwitha}) are usually the modulus of a fixed derivative of the corresponding $f_i$, such as $|\nabla f_i|$ when $X$ is a Riemannian manifold. More generally, $g_i$ may be the upper gradient of $f_i$ (see \cite{Hei} for the definition). \item Theorem \ref{mainmetric} can also be used to obtain an extension of Theorem 2.3 to $s$-John domains in quasimetric spaces; see \cite[Theorem 1.6]{CW2}. \end{enumerate} \end{remark}
\section{Appendix} \setcounter{equation}{0} \setcounter{theorem}{0}
Here we briefly consider analogues of Theorem \ref{appth1}, Corollary \ref{appcor1} and Theorem \ref{newappth2} without assuming $w<<\nu$, but adding the assumption that ${\cal H}$ is linear. In this case, (\ref{strongsob}) can be extended by continuity to obtain a bounded linear map from $\overline{\cal H}$ into $L^{p\sigma}_w(\Omega)$. Here, as always, $\overline{\cal H}$ denotes the closure of $\{(f,\nabla f): f\in {\cal H}\}$ in $W^{1,p}_{\nu,\mu}(\Omega, Q)$. However, when $w<<\nu$ fails, there is no natural way to obtain the extension for every $(f,\vec{g}) \in \overline{\cal H}$ keeping the same $f$ on the left side. In fact, let $(f,\vec{g}) \in \overline{{\cal H}}$ and choose $\{f_j\}\subset {\cal H}$ with $(f_j, \nabla f_j)\to (f,\vec{g})$ in $W^{1,p}_{\nu,\mu}(\Omega,Q)$. Linearity of ${\cal H}$ allows us to apply (\ref{strongsob}) to differences of the $f_j$ and conclude that $\{f_j\}$ is a Cauchy sequence in $L^{p\sigma}_w(\Omega)$. Therefore $f_j \to f^*$ in $L^{p\sigma}_w(\Omega)$ for some $f^* \in L^{p\sigma}_w(\Omega)$, and \[
||f^*||_{L^{p\sigma}_w(\Omega)} \le C ||(f,\vec{g})||_{\wa{p}{\Omega}} \quad\text{if $(f,\vec{g})\in \overline{\cal H}$}. \] The function $f^*$ is determined by $(f, \vec{g})$, i.e., $f^*$ is independent of the particular sequence $\{f_j\}\subset {\cal H}$ above. Indeed, if $\{\tilde{f_j}\}$ is another sequence in ${\cal H}$ with $(\tilde{f_j},\nabla\tilde{f_j}) \rightarrow (f, \vec{g})$ in $W^{1,p}_{\nu,\mu}(\Omega, Q)$, and if $\tilde{f_j} \rightarrow \tilde{f^*}$ in $L^{p\sigma}_w(\Omega)$, then by (\ref{strongsob}) and linearity of ${\cal H}$, \[
||\tilde{f_j}- f_j||_{L^{p\sigma}_w(\Omega)} \le C
||(\tilde{f_j}-f_j,\nabla \tilde{f_j} - \nabla f_j)||_{W^{1,p}_{\nu, \mu}(\Omega, Q)} \rightarrow 0. \]
Consequently $||\tilde{f^*}- f^*||_{L^{p\sigma}_w(\Omega)} =0$. Thus $(f, \vec{g})$ determines $f^*$ uniquely as an element of $L^{p\sigma}_w(\Omega)$. Define a mapping \begin{eqnarray}\label{map} T: \overline{\cal H} \rightarrow L^{p\sigma}_w(\Omega)\quad\text{by setting $T(f, \vec{g}) = f^*$}. \end{eqnarray} Note that $\overline{\cal H}$ is a linear set in $W^{1,p}_{\nu,\mu}(\Omega, Q)$ since ${\cal H}$ is linear, and that $T$ is a bounded linear map from $\overline{\cal H}$ into $L^{p\sigma}_w(\Omega)$. Also note that $T$ satisfies $T(f,\nabla f) = f$ when restricted to those $(f, \nabla f)$ with $f\in {\cal H}$. Furthermore, if $w <<\nu$ then $T(f,\vec{g}) =f$ for all $(f,\vec{g}) \in \overline{\cal H}$, i.e., $f^* = f$ a.e.-$w$ for all $(f,\vec{g}) \in \overline{\cal H}$. This follows since $f_j \rightarrow f$ in $L^p_{\nu}(\Omega)$ norm and $f_j \rightarrow f^*$ in $L^{p\sigma}_w(\Omega)$ norm. In this appendix, where it is not assumed that $w<<\nu$, $f^*$ plays a main role. One can find a function $h$ such that $h=f^*$ a.e.-$w$ and $h=f$ a.e.-$\nu$, but as this fact is not needed, we omit its proof.
An analogue of Theorem \ref{appth1} is given in the next result.
\begin{thm} \label{appth1*} Let all the assumptions of Theorem \ref{appth1} hold except that now the set ${\cal H}$ is linear and we do not assume $w<<\nu$. Then the map $T: \overline{{\cal H}} \rightarrow L^q_w(\Omega)$ defined in (\ref{map}) is compact if $1\leq q < p\sigma$. Equivalently, if $\{(f_k,\vec{g_k})\}$ is a sequence in $\overline{{\cal H}}$ with $\sup_k
||(f_k,\vec{g_k})||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} < \infty$, then $\{f^*_k\}$ has a subsequence which converges in $L^q_w(\Omega)$ for $1 \le q< p\sigma$, where $f^*_k= T(f_k,\vec{g_k})$. Moreover, the limit of the subsequence belongs to $L^{p\sigma}_w(\Omega)$. \end{thm} {\bf Proof:} Let ${\cal H}$ satisfy the hypothesis of the theorem and let $\{(f_k,\vec{g_k})\}\subset \overline{\cal H}$ be bounded in $W^{1,p}_{\nu,\mu}(\Omega, Q)$. For each $k$, choose $h_k\in {\cal H}$ so that
\begin{eqnarray}\label{nearby} ||(f_k,\vec{g_k})-(h_k,\nabla h_k)||_{\wa{p}{\Omega}} \leq 2^{-k}. \end{eqnarray} Set ${\cal H}_1= \{h_k\}_k \subset {\cal H}$. Then $\{(h_k, \nabla h_k): h_k \in {\cal H}_1\}$ is bounded in $\wa{p}{\Omega}$. Further, (\ref{strongsob}) implies a version of (\ref{sup}), namely \[ \sup_{f \in {\cal H}_1}
\left\{||f||_{L^{p\sigma}_w(\Omega)} + ||(f,\nabla f)||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} \right\} < \infty. \] Theorem \ref{simpleversion} now applies to ${\cal H}_1$ with $N=p\sigma$ and gives that any sequence in $\hat{{\cal H}_1}$ has a subsequence which converges in $L^q_w(\Omega)$ norm for $1\leq q< p\sigma$ to a function belonging to $L^{p\sigma}_w(\Omega)$. The sequence $\{h_k\}$ lies in $\hat{{\cal H}_1}$, as is easily seen by considering, for each fixed $k$, the constant sequence $\{f^j\}$ defined by $f^j = h_k$ for all $j$. We conclude that $\{h_k\}$ has a subsequence $\{h_{k_l}\}$ converging in $L^q_w(\Omega)$ norm for $1\le q <p\sigma$ to a function $h \in L^{p\sigma}_w(\Omega)$. By linearity and boundedness of $T$ from $\overline{\cal H}$ to $L^{p\sigma}_w(\Omega)$ together with (\ref{nearby}), we have (writing $f_k^* = T(f_k, \vec{g_k})$) \[
||f_k^* -h_k||_{L^{p\sigma}_w(\Omega)} = ||T(f_k,\vec{g_k}) - T(h_k,
\nabla h_k)||_{L^{p\sigma}_w(\Omega)} \le C 2^{-k} \rightarrow 0. \] Restricting $k$ to $\{k_l\}$ and using $w(\Omega)<\infty$, we conclude that $\{f^*_{k_l}\}$ also converges to $h$ in $L^q_w(\Omega)$ for $1\le q<p\sigma$, which completes the proof. \qed
Setting ${\cal H}= Lip_{Q,p}(\Omega)$ in Theorem \ref{appth1*} gives an analogue of Corollary \ref{appcor1}:
\begin{corollary}\label{appcor1*} Let the hypotheses of Theorem \ref{appth1*} hold for ${\cal H} = Lip_{Q,p}(\Omega)$. Then the map $T$ defined by (\ref{map}) is a compact map of $\wa{p}{\Omega}$ into $L^q_w(\Omega)$ for $1\leq q < p\sigma$, i.e., if $\{(f_k,\vec{g_k})\} \subset \wa{p}{\Omega}$ and
$\sup_k ||(f_k,\vec{g_k})||_{W^{1,p}_{\nu,\mu}(\Omega,Q)} < \infty$, then $\{ f^*_k\}$ has a subsequence which converges in $L^q_w(\Omega)$ for $1 \le q< p\sigma$, where $f^*_k = T(f_k,\vec{g_k})$. Moreover, the limit of the subsequence belongs to $L^{p\sigma}_w(\Omega)$. \end{corollary}
Theorem \ref{newappth2} also has an analogue without assuming $w<<\nu$ provided ${\cal H}$ is linear, and in this instance (\ref{strongsob}) is not required: the subsequence $\{f_{k_i}\}$ of $\{f_k\}$ in the conclusion is then replaced by a subsequence of $\{f_k^*\}$, where $f_k^*$ is constructed as above but now using bounded measurable sets $\Omega'$ whose closures increase to $\Omega$. Now $f^*$ arises when (\ref{Omegaprimeprime}) is extended to $\overline{\cal H}$, namely, instead of (\ref{Omegaprimeprimeext}), we obtain \begin{eqnarray} \nonumber
||f^*||_{L^{p\sigma}_w(\Omega'')} \le C(\Omega'')\,
||(f,\vec{g})||_{W^{1,p}_{\nu,\mu}(\Omega,Q)}\quad \text{if }(f,\vec{g})\in \overline{\cal H} \end{eqnarray} where $f^*$ is constructed for a pair $(f,\vec{g})\in \overline{\cal H}$ by using linearity of ${\cal H}$ and (\ref{Omegaprimeprime}) for a particular $(\Omega', \Omega'')$. It is easy to see that $f^* \in L^{p\sigma}_{w, loc}(\Omega)$ by letting $\Omega' \nearrow \Omega$. The Poincar\'e inequality analogous to (\ref{poincareext}) is \begin{eqnarray}
\nonumber \left(\int_{B_r(y)} |f^*-f^*_{B_r(y),w}|^p dw \right)^\frac{1}{p} \le
\epsilon ||(f,\vec{g})||_{W^{1,p}_{\nu,\mu}(B_{c_0r}(y),Q)} \quad \text{if } (f,\vec{g}) \in \overline{\cal H}, \end{eqnarray} obtained by extending (\ref{poincare}) from ${\cal H}$ to $\overline{\cal H}$. Further details are omitted.\\
\singlespace {\footnotesize{
\noindent Department of Mathematics\\ National University of Singapore\\ 10, Lower Kent Ridge Road\\ Singapore 119076\\ e-mail: [email protected]\\
\noindent Department of Mathematics, Physics and Geology\\ Cape Breton University\\ Sydney, NS B1P6L2\\ e-mail: [email protected]\\
\noindent Department of Mathematics\\ Rutgers University\\ Piscataway, NJ 08854\\ e-mail: [email protected]
}}
\end{document}
\begin{document}
\title{Arbitrary Conditional Distributions with Energy}
\begin{abstract} Modeling distributions of covariates, or \textit{density estimation}, is a core challenge in unsupervised learning. However, the majority of work only considers the \textit{joint} distribution, which has limited utility in practical situations. A more general and useful problem is \textit{arbitrary conditional density estimation}, which aims to model \textit{any} possible conditional distribution over a set of covariates, reflecting the more realistic setting of inference based on prior knowledge. We propose a novel method, Arbitrary Conditioning with Energy (ACE), that can simultaneously estimate the distribution $p(\mathbf{x}_u \mid \mathbf{x}_o)$ for all possible subsets of unobserved features $\mathbf{x}_u$ and observed features $\mathbf{x}_o$. ACE is designed to avoid unnecessary bias and complexity --- we specify densities with a highly expressive energy function and reduce the problem to only learning one-dimensional conditionals (from which more complex distributions can be recovered during inference). This results in an approach that is both simpler and higher-performing than prior methods. We show that ACE achieves state-of-the-art for arbitrary conditional likelihood estimation and data imputation on standard benchmarks. \end{abstract}
\section{Introduction}
Density estimation, a core challenge in machine learning, attempts to learn the probability density of some random variables given samples from their true distribution. The vast majority of work on density estimation focuses on the \textit{joint} distribution $p(\mathbf{x})$ \cite{goodfellow2014generative,dinh2016density,papamakarios2017masked,grathwohl2018ffjord,oliva2018transformation,nash2019autoregressive,fakoor2020trade}, i.e.,~the distribution of all variables taken together. While the joint distribution can be useful (e.g.,~to learn the distribution of pixel configurations that represent human faces), it is limited in the types of predictions it can make. We are often more interested in \textit{conditional} probabilities, which communicate the likelihood of an event \textit{given} some prior information. For example, given a patient's medical history and symptoms, a doctor determines the likelihoods of different illnesses and other patient attributes. Conditional distributions are often more practical since real-world decisions are nearly always informed by prior information.
However, we often do not know ahead of time \textit{which} conditional distribution (or distributions) will be of interest. That is, we may not know which features will be known (observed) or which will be inferred (unobserved). For example, not every patient that a doctor sees will have had the same tests performed (i.e.,~have the same observed features). A na\"ive approach requires building an exponential number of models to cover all possible cases (one for every conditional distribution), which quickly becomes intractable. Thus, an intelligent system needs to understand the intricate conditional dependencies between all arbitrary subsets of covariates, and it must do so with a single model to be practical.
In this work, we consider the problem of learning the conditional distribution $p(\mathbf{x}_u \mid \mathbf{x}_o)$ for any arbitrary subsets of unobserved variables $\mathbf{x}_u \in \mathbb{R}^{|u|}$ and observed variables $\mathbf{x}_o \in \mathbb{R}^{|o|}$, where $u,o \subseteq \{1, \dots , d\}$ and $o \cap u = \emptyset$. We propose a method, Arbitrary Conditioning with Energy (ACE), that can assess any conditional distribution over any subset of random variables, using a single model. ACE is developed with an eye for simplicity --- we reduce the arbitrary conditioning problem to the estimation of one-dimensional conditional densities (with arbitrary observations), and we represent densities with an energy function, which fully specifies unnormalized distributions and has the freedom to be instantiated as any arbitrary neural network.
We evaluate ACE on benchmark datasets and show that it outperforms current methods for arbitrary conditional/marginal density estimation. ACE remains effective when trained on data with missing values, making it applicable to real-world datasets that are often incomplete, and we find that ACE scales well to high-dimensional data. Also, unlike some prior methods (e.g.,~\cite{li2020acflow}), ACE can naturally model data with both continuous and discrete values.
Our contributions are as follows: 1) We develop the first energy-based approach to arbitrary conditional density estimation, which eliminates restrictive biases (e.g.~normalizing flows, Gaussian mixtures) imposed by common alternatives. 2) We empirically demonstrate that ACE is state-of-the-art for arbitrary conditional density estimation and data imputation. 3) We find that complicated prior approaches can be easily outperformed with a simple scheme that uses mixtures of Gaussians and fully-connected networks.
\section{Previous Work}
Several methods have been previously proposed for arbitrary conditioning. Sum-Product Networks are specially designed to only contain sum and product operations and can produce arbitrary conditional or marginal likelihoods \cite{poon2011sum,butz2019deep}. The Universal Marginalizer trains a neural network with a cross-entropy loss to approximate the marginal posterior distributions of all unobserved features conditioned on the observed ones \cite{douglas2017universal}. VAEAC is an approach that extends a conditional variational autoencoder by only considering the latent codes of unobserved dimensions \cite{ivanov2018variational}, and NeuralConditioner uses adversarial training to learn each conditional distribution \cite{belghazi2019learning}. DMFA uses factor analysis to have a neural network output the parameters of a conditional Gaussian density for the missing features given the observed ones \cite{przewikezlikowski2020estimating}. The current state-of-the-art is ACFlow, which extends normalizing flow models to handle any subset of observed features \cite{li2020acflow}.
Unlike VAEAC, ACE does not suffer from mode collapse or blurry samples. ACE is also able to provide likelihood estimates, unlike NeuralConditioner which only produces samples. DMFA is limited to learning multivariate Gaussians, which impose bias and are harder to model than one-dimensional conditionals. While ACFlow can analytically produce normalized likelihoods and samples, it is restricted by a requirement that its network consist of bijective transformations with tractable Jacobian determinants. Similarly, Sum-Product Networks have limited expressivity due to their constraints. ACE, on the other hand, exemplifies the appeal of energy-based methods as it has no constraints on the parameterization of the energy function.
Energy-based methods have a wide range of applications within machine learning \cite{lecun2006tutorial}, and recent work has studied their utility for density estimation. Deep energy estimator networks \cite{saremi2018deep} and Autoregressive Energy Machines \cite{nash2019autoregressive} are both energy-based models that perform density estimation. However, both of these methods are only able to estimate the joint distribution.
Much of the previous work on density estimation relies on an autoregressive decomposition of the joint density according to the chain rule of probability. Often, the model only considers a single (arbitrary) ordering of the features \cite{nash2019autoregressive,oord2016conditional}. \citet{uria2014deep} proposed a method for assessing joint likelihoods based on any arbitrary ordering, where they use masked network inputs to effectively share weights between a combinatorial number of models. \citet{germain2015made} also consider a shared network for joint density estimation with a constrained architecture that enforces the autoregressive constraint in joint likelihoods. In this work, we construct an order-agnostic weight-sharing technique not for joint likelihoods, but for arbitrary conditioning. Moreover, we make use of our weight-sharing scheme to estimate likelihoods with an energy based approach, which avoids the limitations of the parametric families used previously (e.g.,~mixtures of Gaussians \cite{uria2014deep}, or Bernoullis \cite{germain2015made}). Query Training \cite{lazaro2020query} is a method for answering probabilistic queries. It also takes the approach of computing one-dimensional conditional likelihoods but does not directly pursue an autoregressive extension of that ability.
The problem of imputing missing data has been well studied, and there are several approaches based on classic machine learning techniques such as $k$-nearest neighbors \cite{troyanskaya2001missing}, random forests \cite{stekhoven2012missforest}, and autoencoders \cite{gondara2018mida}. More recent work has turned to deep generative models. GAIN is a generative adversarial network (GAN) that produces imputations with the generator and uses the discriminator to discern the imputed features \cite{yoon18gain}. Another GAN-based approach is MisGAN, which learns two generators to model the data and masks separately \cite{li2019misgan}. MIWAE adapts variational autoencoders by modifying the lower bound for missing data and produces imputations with importance sampling \cite{mattei2019miwae}. ACFlow can also perform imputation and is state-of-the-art for imputing data that are missing completely at random (MCAR) \cite{li2020acflow}.
While it is not always the case that data are missing at random, the opposite case (i.e.,~missingness that depends on unobserved features' values) can be much more challenging to deal with \cite{fielding2008simple}. Like many data imputation methods, we focus on the scenario where data are MCAR, that is, where the likelihood of being missing is independent of the covariates' values.
\section{Background}
\subsection{Arbitrary Conditional Density Estimation}
A probability density function (PDF), $p(\mathbf{x})$, outputs a nonnegative scalar for a given vector input $\mathbf{x} \in \mathbb{R}^d$ and satisfies $\int p(\mathbf{x}) \dif \mathbf{x} = 1$. Given a dataset $\mathcal{D} = \{ \mathbf{x}^{(i)} \}_{i=1}^N$ of \textit{i.i.d.}~samples drawn from an unknown distribution $p^*(\mathbf{x})$, the object of density estimation is to find a model that best approximates the function $p^*$. Modern approaches generally rely on neural networks to directly parameterize the approximated PDF.
Arbitrary conditional density estimation is a more general task where we estimate the conditional density $p^*(\mathbf{x}_u \mid \mathbf{x}_o)$ for all possible subsets of observed features \mbox{$o \subset \{1, \dots, d\}$} (i.e.,~features whose values are known) and corresponding subset of unobserved features \mbox{$u \subset \{1, \dots , d\}$} such that $o$ and $u$ do not intersect. Here $\mathbf{x}_o \in \mathbb{R}^{|o|}$ and $\mathbf{x}_u \in \mathbb{R}^{|u|}$. The estimation of joint or marginal likelihoods is recovered when $o$ is the empty set. Note that marginalization to obtain $p(\mathbf{x}_o) = \int p(\mathbf{x}_u, \mathbf{x}_o) \dif \mathbf{x}_u$ (and hence $p(\mathbf{x}_u \mid \mathbf{x}_o) = \frac{p(\mathbf{x}_u, \mathbf{x}_o)}{p(\mathbf{x}_o)}$) is intractable in performant generative models like normalizing flow based models \cite{dinh2016density,li2020acflow}; thus, we propose a weight-sharing scheme below.
\subsection{Energy-Based Models}
Energy-based models capture dependencies between variables by assigning a scalar \textit{energy} to a given arrangement of those variables, where lower energies indicate more desirable configurations \cite{lecun2006tutorial}. Learning consists of finding an energy function that outputs low energies for correct values. We can frame density estimation as an energy-based problem by writing likelihoods as a Boltzmann distribution \begin{equation} \label{eq:energy-dist}
p(\mathbf{x}) = \frac{e^{- \mathcal{E}(\mathbf{x})}}{Z}, \qquad Z = \int e^{- \mathcal{E}(\mathbf{x})} \dif \mathbf{x} \end{equation} where $\mathcal{E}$ is the energy function, $e^{- \mathcal{E}(\mathbf{x})}$ is the unnormalized likelihood, and $Z$ is the normalizer.
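As a toy illustration of \autoref{eq:energy-dist} (a sketch of ours, not tied to the model developed below; the double-well energy is made up purely for illustration), a one-dimensional energy can be normalized numerically on a grid:
\begin{verbatim}
import numpy as np

def energy(x):
    # A made-up double-well energy, used only to illustrate the Boltzmann form above.
    return (x ** 2 - 1.0) ** 2

xs = np.linspace(-4.0, 4.0, 4001)      # grid covering essentially all of the mass
dx = xs[1] - xs[0]
unnormalized = np.exp(-energy(xs))     # e^{-E(x)}
Z = unnormalized.sum() * dx            # numerical estimate of the normalizer
density = unnormalized / Z             # p(x) = e^{-E(x)} / Z
print(density.sum() * dx)              # integrates to ~1
\end{verbatim}
Such direct numerical normalization is only feasible in one dimension, which is precisely why our decomposition into one-dimensional conditionals (with importance-sampled normalizers) is convenient.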
Energy-based models are appealing due to their relative simplicity and high flexibility in the choice of representation for the energy function. This is in contrast to other common approaches to density estimation such as normalizing flows \cite{dinh2016density,li2020acflow}, which require invertible transformations with Jacobian determinants that can be computed efficiently. Energy functions are also naturally capable of representing non-smooth distributions with low-density regions or discontinuities.
\section{Arbitrary Conditioning with Energy}
We are interested in approximating the probability density $p(\mathbf{x}_u \mid \mathbf{x}_o)$ for any arbitrary sets of unobserved features $\mathbf{x}_u$ and observed features $\mathbf{x}_o$. We approach this by decomposing likelihoods into products of one-dimensional conditionals, which makes the learned distributions much simpler. This concept is a basic application of the chain rule of probability, but has yet to be thoroughly exploited for arbitrary conditioning. During training, ACE estimates distributions of the form $p(x_{u^\prime_i} \mid \mathbf{x}_{o^\prime})$, where $x_{u^\prime_i}$ is a scalar. During inference, more complex distributions can then be recovered with an autoregressive decomposition: \mbox{$p(\mathbf{x}_u \mid \mathbf{x}_o) = \prod_{i = 1}^{|u|} p(x_{u^\prime_i} \mid \mathbf{x}_{o\, \cup\, u^\prime_{<i}})$}, where $u^\prime$ is an arbitrary permutation of $u$ and $ u^\prime_{<i} = \{u^\prime_1, \dots, u^\prime_{i-1}\}$. This approach is appealing because the estimated distributions are over a one-dimensional domain, allowing one to use a myriad of flexible estimators.
We adopt an energy-based approach (similar to AEMs \cite{nash2019autoregressive}), which affords a large degree of flexibility in modeling the exponentially many conditional distributions at hand --- we are free to represent the energy function with an arbitrary, and highly expressive, neural network that directly outputs unnormalized likelihoods. This contrasts with the current state-of-the-art for arbitrary conditional density estimation, which is limited by normalizing flow transformations \cite{li2020acflow}. Energy functions are highly expressive and naturally model complex (i.e.,~multimodal, non-smooth) densities, as they avoid a parametric-family on the shape of the distribution. Our main contribution is a method, Arbitrary Conditioning with Energy (ACE), for computing arbitrary conditional likelihoods with energies, one dimension at a time.\footnote{An implementation of ACE is available at \url{https://github.com/lupalab/ace}.}
\subsection{Decomposing Densities}
\begin{figure}
\caption{Proposal Network}
\label{fig:proposal-network}
\caption{Energy Network}
\label{fig:energy-network}
\caption{Overview of the networks used in ACE. The plus symbol refers to concatenation.}
\label{fig:networks}
\end{figure}
We decompose the arbitrary conditioning task into $1d$-domain arbitrary conditional estimation problems. This follows from the chain rule of probability, which allows us to write \begin{equation} \label{eq:chain-rule}
p(\mathbf{x}_u \mid \mathbf{x}_o) = \prod_{i = 1}^{|u|} p\left(x_{u^\prime_i} \mid \mathbf{x}_{o\, \cup\, u^\prime_{<i}}\right), \end{equation} where $u^\prime$ is an arbitrary permutation of $u$, $x_{u^\prime_i}$ is the $i^{\text{th}}$ unobserved feature given by $u^\prime$, and $u^\prime_{<i} = \{u^\prime_1, \dots, u^\prime_{i-1}\}$. The one-dimensional conditionals in \autoref{eq:chain-rule} are themselves arbitrary conditionals due to the choice of permutation. Thus, the arbitrary conditioning problem may be reduced to one-dimensional arbitrary conditional estimation. If we can compute any likelihood of the form $p(x_{i} \mid \mathbf{x}_{o^\prime})$ for $o^\prime \subset \{1, \ldots, d\}$, we can compute any distribution over the features. This includes all possible conditional distributions, marginal distributions, and the joint distribution. Although the reduction of arbitrary conditioning to 1$d$ conditional estimation is seemingly straightforward, this is, to the best of our knowledge, the first work to leverage this formulation.
We train our model non-autoregressively, to output the likelihood $p(x_{i} \mid \mathbf{x}_{o^\prime})$ for any $x_{i}$ and $\mathbf{x}_{o^\prime}$, where $i\in \{1, \ldots, d\} \setminus o^\prime$. During inference, an autoregressive procedure that repeatedly applies the chain rule can be used to compute likelihoods of the form $p(\mathbf{x}_u \mid \mathbf{x}_o)$. In practice, this consists of simply adding the previously considered unobserved dimension to the observed set before moving to the next step in the chain.\footnote{ One could also implement this with a masked network architecture (e.g., MADE \cite{germain2015made}) that can more efficiently compute autoregressive likelihoods. We choose not to do this for simplicity and to enable the use of arbitrary, and more expressive, architectures.}
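To make this reduction concrete, the following sketch evaluates $p(\mathbf{x}_u \mid \mathbf{x}_o)$ by chaining one-dimensional conditionals; the callable \texttt{log\_p\_1d} is a hypothetical stand-in for a model that returns $\log p(x_i \mid \mathbf{x}_{o^\prime})$ given a mask flagging the observed coordinates.
\begin{verbatim}
import numpy as np

def log_p_arbitrary(x, u, o, log_p_1d, rng=None):
    """log p(x_u | x_o) via the chain rule over one-dimensional conditionals.
    x        : full data vector (values of x_u are known, as in likelihood evaluation)
    u, o     : disjoint lists of unobserved / observed indices
    log_p_1d : hypothetical model interface, log_p_1d(x, i, obs_mask) -> float"""
    rng = np.random.default_rng() if rng is None else rng
    obs_mask = np.zeros(len(x), dtype=bool)
    obs_mask[list(o)] = True
    total = 0.0
    for i in rng.permutation(list(u)):     # any ordering of u is valid
        total += log_p_1d(x, int(i), obs_mask.copy())
        obs_mask[int(i)] = True            # condition on it in subsequent steps
    return total
\end{verbatim}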
\subsection{Likelihoods from Energies}
We express likelihoods in terms of energies by modifying \autoref{eq:energy-dist} to write \begin{equation} \label{eq:likelihood-energy}
p(x_{u_i} \mid \mathbf{x}_o) = \frac{e^{- \mathcal{E}(x_{u_i}; \mathbf{x}_o)}}{Z_{u_i; \mathbf{x}_o}}~, \end{equation} and we choose to represent the energy function as a neural network. In order to avoid learning a different network for each conditional, we adopt a weight-sharing scheme (e.g.~\cite{uria2014deep,ivanov2018variational,li2020acflow}). That is, we use a bitmask \mbox{$\mathbf{b} \in \{0, 1\}^d$} that indicates which features are observed and define a zero-imputing function $\boldsymbol{\phi}(\mathbf{x}_o;\mathbf{b})$ that returns a $d$-dimensional vector where unobserved features are replaced with zeros (see \autoref{fig:zero-imputation} in Appendix). These are then used as inputs to the energy network (see \autoref{fig:energy-network}).
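The following is a minimal sketch of this masking step; the concatenation convention and function name are choices made here for illustration and need not match the released code.
\begin{verbatim}
import numpy as np

def masked_input(x, observed_idx, d):
    """Build the bitmask b and the zero-imputed vector phi(x_o; b) for one example."""
    b = np.zeros(d)
    b[list(observed_idx)] = 1.0          # b_j = 1 iff feature j is observed
    phi = np.where(b > 0, x, 0.0)        # unobserved entries replaced by zeros
    return np.concatenate([phi, b])      # one possible way to feed both to a network
\end{verbatim}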
\paragraph{Approximating normalized likelihoods.} We can use \autoref{eq:likelihood-energy} to compute normalized likelihoods, but only if the normalizing constant $Z_{u_i; \mathbf{x}_o}$ is known. Directly computing the normalizer is intractable in general. However, importance sampling can be used to obtain an estimate \cite{bishop2006pattern}. \citet{nash2019autoregressive} show that this technique is sufficiently accurate when considering one-dimensional conditionals for joint density estimation, and we adapt the approach for arbitrary conditioning. Assuming access to a proposal distribution $q(x_{u_i} \mid \mathbf{x}_o)$ which is similar to the target distribution, we approximate $Z_{u_i; \mathbf{x}_o}$ as \begin{align}
Z_{u_i; \mathbf{x}_o} &= \int e^{- \mathcal{E}(x_{u_i}; \mathbf{x}_o)} \dif x_{u_i} = \int \dfrac{e^{- \mathcal{E}(x_{u_i}; \mathbf{x}_o)}}{q(x_{u_i} \mid \mathbf{x}_o)} q(x_{u_i} \mid \mathbf{x}_o) \dif x_{u_i} \\
&\approx \dfrac{1}{S} \sum_{s=1}^S \dfrac{e^{- \mathcal{E}(x_{u_i}^{(s)}; \mathbf{x}_o)}}{q(x_{u_i}^{(s)} \mid \mathbf{x}_o)}, \quad x_{u_i}^{(s)} \sim q(x_{u_i} \mid \mathbf{x}_o)~. \label{eq:importance-weight} \end{align}
For some problems, we may have access to a good proposal distribution ahead of time, but in general, we can learn one in parallel with the energy network. We note that importance sampling is also more computationally efficient than the simpler alternative of using points from a grid. Since distributions may be non-smooth and span a large domain, a modest number of samples from a good proposal distribution can produce an estimate as accurate as a grid with many more points, and hence requires fewer evaluations of the energy network. Furthermore, generating samples from the proposal is very efficient, as it only requires one neural network evaluation to obtain the parameters of a tractable parametric distribution, which can then be easily sampled.
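The estimator in \autoref{eq:importance-weight} amounts to a few lines of code. In the sketch below, the energy and proposal of a single unobserved dimension are represented by hypothetical callables, and the energy is assumed to act elementwise on an array of samples.
\begin{verbatim}
import numpy as np

def log_p_importance(x_val, energy, q_sample, q_logpdf, S=1000):
    """Estimate log p(x_ui | x_o) = -E(x_ui) - log Z, with Z estimated by
    importance sampling under the proposal q(. | x_o).
    `energy`, `q_sample`, `q_logpdf` are hypothetical callables for one dimension."""
    s = q_sample(S)                                      # S draws from the proposal
    log_w = -energy(s) - q_logpdf(s)                     # log of e^{-E}/q at each draw
    log_Z_hat = np.logaddexp.reduce(log_w) - np.log(S)   # log of the sample mean
    return -energy(x_val) - log_Z_hat
\end{verbatim}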
\subsection{Training ACE} \label{sec:training}
We learn the proposal distribution alongside the energy function by having a neural network output the parameters of a tractable parametric distribution. For the proposal network, we again share weights between all conditionals as is done with the energy network. The proposal network accepts a concatenation of $\mathbf{b}$ and $\boldsymbol{\phi}(\mathbf{x}_o;\mathbf{b})$ as input, and it outputs the parameters $\boldsymbol{\omega}(u_i;\mathbf{x}_o)$ for a mixture of Gaussians for each unobserved dimension $u_i$. The proposal network also outputs a latent vector, $\boldsymbol{\gamma}(u_i;\mathbf{x}_o)$, for each unobserved dimension, which is used as input to the energy network in order to enable weight sharing between the proposal and energy networks \cite{nash2019autoregressive}. Using a different latent vector for each feature allows the latent vectors to represent the fact that different unobserved features may depend on $\mathbf{x}_o$ in different ways.
We then estimate the normalizing constants in \autoref{eq:likelihood-energy} using importance sampling: \begin{equation} \label{eq:normalizer-estimate}
\hat{Z}_{u_i; \mathbf{x}_o} = \dfrac{1}{S} \sum_{s=1}^S \dfrac{e^{- \mathcal{E}(x_{u_i}^{(s)}; \mathbf{x}_o;\boldsymbol{\gamma}(u_i;\mathbf{x}_o))}}{q(x_{u_i}^{(s)} \mid \boldsymbol{\omega}(u_i;\mathbf{x}_o))} \end{equation} where $x_{u_i}^{(s)}$ is sampled from $q(x_{u_i} \mid \boldsymbol{\omega}(u_i;\mathbf{x}_o))$. This in turn leads to the following approximation of the log-likelihood of $x_{u_i}$ given $\mathbf{x}_o$: \begin{equation} \label{eq:model-estimate}
\log p(x_{u_i} \mid \mathbf{x}_o) \approx -\mathcal{E}(x_{u_i}; \mathbf{x}_o;\boldsymbol{\gamma}(u_i;\mathbf{x}_o)) - \log \hat{Z}_{u_i; \mathbf{x}_o} , \end{equation} where we use abbreviated notation in the previous two equations and omit the bitmask $\mathbf{b}$ for greater readability. Refer to \autoref{fig:networks} for the precise inputs of each network.\footnote{It is not strictly necessary for the energy network to include $\mathbf{x}_o$ as input, since the latent vectors can encode that information. However, doing so acts like a sort of skip-connection, which often helps with deeper networks.}
Since \autoref{eq:chain-rule} gives us a way to autoregressively compute $p(\mathbf{x}_u \mid \mathbf{x}_o)$ as a chain of one-dimensional conditionals, we only concern ourselves with learning \mbox{$p(x_{u_i} \mid \mathbf{x}_o)$} for arbitrary $u_i$ and $\mathbf{x}_o$. Thus, for a given data point $\mathbf{x}$, we randomly partition it into $\mathbf{x}_o$ and $\mathbf{x}_u$ and jointly optimize the proposal and energy networks with the maximum-likelihood objective \begin{equation} \label{eq:objective}
J(\mathbf{x}_o; \mathbf{x}_u; \boldsymbol{\theta}) = \sum_{i=1}^{|u|} \log p(x_{u_i} \mid \mathbf{x}_o) + \sum_{i=1}^{|u|} \log q(x_{u_i} \mid \mathbf{x}_o), \end{equation} where $\boldsymbol{\theta}$ holds the parameters of both the energy and proposal networks. Because we want to optimize the proposal and energy distributions independently, gradients are stopped on proposal samples and proposal likelihood before they are used in \autoref{eq:normalizer-estimate}. We note that optimizing \autoref{eq:objective} can be interpreted as a stochastic approximation to optimizing the full autoregressive likelihoods $p(\mathbf{x}_u \mid \mathbf{x}_o)$ (i.e., \autoref{eq:chain-rule}). For a given data point, we are choosing to optimize individual $1d$-conditionals from numerous hypothetical autoregressive chains, rather than all of the $1d$-conditionals from a single autoregressive chain. That is, each random set of observed values that is encountered during training (in \autoref{eq:objective}) can be seen as a single factor in the product in \autoref{eq:chain-rule} for some $p(\mathbf{x}_{u} \mid \mathbf{x}_o)$. Over the course of training though, the same conditionals are ultimately being evaluated in either case.
The negative of \autoref{eq:objective} is minimized with Adam \cite{kingma2014adam} over a set of training data, where observed and unobserved sets are selected at random for each minibatch (see \autoref{sec:experiments}). In some cases, we found it useful to include a regularization term in the loss that penalizes the energy distribution for large deviations from the proposal distribution. We use the mean-square error (MSE) between the proposal likelihoods and energy likelihoods as a penalty, with gradients stopped on the proposal likelihoods in the error calculation. The coefficient of this term in the loss is a hyperparameter.
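The sketch below illustrates one evaluation of the resulting loss in PyTorch, including the gradient stopping and the optional penalty; the proposal and energy network interfaces are hypothetical stand-ins for the networks in \autoref{fig:networks} rather than the exact signatures of our implementation.
\begin{verbatim}
import math
import torch

def ace_loss(energy_net, proposal_net, x, obs_mask, S=20, penalty_coef=0.0):
    """One evaluation of the (negative) training objective.
    x        : [batch, d] data
    obs_mask : [batch, d] {0,1} float tensor, 1 where a feature is observed
    proposal_net(x_obs, obs_mask) -> (q, gamma): q is a per-dimension mixture
        (a torch.distributions object with batch shape [batch, d]); gamma are latents
    energy_net(values, x_obs, obs_mask, gamma) -> per-dimension energies,
        broadcasting over a leading sample axis
    (Both interfaces are hypothetical stand-ins.)"""
    x_obs = x * obs_mask
    q, gamma = proposal_net(x_obs, obs_mask)

    # Proposal term of the objective: log q(x_ui | x_o) for each unobserved dimension.
    log_q_data = q.log_prob(x)                                   # [batch, d]

    # Importance-sampled normalizer; gradients stopped on proposal samples/likelihoods.
    samples = q.sample((S,)).detach()                            # [S, batch, d]
    log_q_samples = q.log_prob(samples).detach()
    log_w = -energy_net(samples, x_obs, obs_mask, gamma) - log_q_samples
    log_Z_hat = torch.logsumexp(log_w, dim=0) - math.log(S)      # [batch, d]

    # Energy term: log p(x_ui | x_o) ~ -E(x_ui) - log Z_hat.
    log_p_data = -energy_net(x, x_obs, obs_mask, gamma) - log_Z_hat

    unobs = 1.0 - obs_mask
    nll = -((log_p_data + log_q_data) * unobs).sum(dim=-1).mean()

    # Optional penalty keeping the energy model close to the (gradient-stopped)
    # proposal; computed here on log-densities as one simple choice.
    penalty = (((log_p_data - log_q_data.detach()) ** 2) * unobs).mean()
    return nll + penalty_coef * penalty
\end{verbatim}
In practice, the negative objective returned above would be passed to an Adam optimizer step over minibatches with freshly sampled masks.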
\subsection{Inference}
\subsubsection{Likelihoods} \label{sec:likelihoods}
Recall that our model learns one-dimensional conditionals: $p(x_{u_i} \mid \mathbf{x}_o)$. Thus, to obtain a complete likelihood for $\mathbf{x}_u$, we employ an autoregressive application of the chain rule (see \autoref{eq:chain-rule}). The pseudocode for this procedure is presented in the Appendix (see \autoref{alg:likelihood}). Since the values of $\mathbf{x}_u$ are known ahead of time, each one-dimensional conditional can be evaluated in parallel as a batch, allowing the likelihood $p(\mathbf{x}_u \mid \mathbf{x}_o)$ to be computed efficiently.
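Since the values of $\mathbf{x}_u$ are known, all chain steps can be prepared up front and scored in one batched forward pass. The sketch below builds one (target index, observed mask) query per step; the query format is illustrative.
\begin{verbatim}
import numpy as np

def chain_queries(x, u_perm, o):
    """One (target index, observed mask) query per chain step, so that all
    one-dimensional conditionals in the chain-rule factorization can be scored
    as a single batch.  Step t asks for p(x_{u_t} | x_{o union u_{<t}})."""
    d = len(x)
    obs = np.zeros(d, dtype=bool)
    obs[list(o)] = True
    targets, masks = [], []
    for i in u_perm:
        targets.append(i)
        masks.append(obs.copy())
        obs[i] = True
    return np.array(targets), np.stack(masks)
\end{verbatim}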
Importantly, the order in which each unobserved dimension is evaluated does not matter. As argued in previous work \cite{uria2014deep,germain2015made}, this can be considered advantageous, since we can effectively leverage multiple orderings at test time to obtain an ensemble of models. However, this does incur extra computational cost during inference. We note that a model which perfectly captures the true densities would give consistent likelihoods for all possible orderings (thus evaluating only one ordering would suffice). However, current order-agnostic methods (such as ACE or \cite{uria2014deep,germain2015made}) do not inherently satisfy this constraint. In the Appendix, we study how these inconsistencies can be mitigated in ACE, but we leave a true solution to this challenge for future work.
\subsubsection{Imputing} \label{sec:means}
Sampling allows us to obtain multiple possible values for the unobserved features that are diverse and realistic.\footnote{We describe how to generate samples from ACE in the Appendix.} However, these are not always the primary goals. For example, in the case of data imputation, we may only want a single imputation that aims to minimize some measure of error (see \autoref{sec:imputation}). Thus, rather than imputing true samples, we might prefer to impute the mean of the learned distribution. In this case, we forego autoregression and directly obtain the mean of each distribution $p(x_{u_i} \mid \mathbf{x}_o)$ with a single forward pass. Analytically computing the mean of the proposal distribution is straightforward since we are working with a mixture of Gaussians. We estimate the mean of the energy distribution via importance sampling: \begin{equation}
\mathbb{E} \left[ x_{u_i} \right] \approx \sum_{s=1}^S \dfrac{r_s}{\sum_j r_j} x_{u_i}^{(s)}, \qquad r_s = \dfrac{p(x_{u_i} \mid \mathbf{x}_o)}{q(x_{u_i} \mid \mathbf{x}_o)} \end{equation} where $x_{u_i}^{(s)}$ is sampled from $q(x_{u_i} \mid \mathbf{x}_o)$. It is worth noting that imputing the mean ignores dependencies between features in the unobserved set, so for some applications, other methods of imputing (such as multiple imputation by drawing samples) may make more sense.
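A sketch of this self-normalized estimate for a single unobserved dimension is given below; the callables are hypothetical stand-ins, and the unknown normalizer cancels in the weights, so the unnormalized likelihoods $e^{-\mathcal{E}}$ suffice.
\begin{verbatim}
import numpy as np

def energy_mean(energy, q_sample, q_logpdf, S=1000):
    """Mean of the energy distribution for one unobserved dimension via
    self-normalized importance sampling; the normalizer cancels in the weights."""
    s = q_sample(S)
    log_w = -energy(s) - q_logpdf(s)          # unnormalized log-weights
    w = np.exp(log_w - log_w.max())           # stabilize before normalizing
    return float(np.sum(w * s) / np.sum(w))
\end{verbatim}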
\subsubsection{Heterogeneous Data}
Prior approaches to arbitrary conditioning have to make restrictive assumptions when modeling arbitrary dependencies between continuous and discrete covariates. VAEAC \cite{ivanov2018variational}, for example, makes an assumption of conditional independence given a latent code. ACFlow \cite{li2020acflow} is \emph{not} directly applicable to discrete data given its use of the change of variable theorem. On the other hand, ACE can naturally model arbitrary dependencies between continuous and discrete covariates without any assumptions. In this setting, the proposal network outputs categorical logits for discrete features, as opposed to Gaussian mixture parameters. These logits can themselves be interpreted as energies, and we don't need to learn an additional energy function for the discrete features. Similarly, ACE could be applied to multimodal data in a straightforward fashion by conditioning the proposal distributions and energy function on a fused multimodal latent representation (which could be freely learned by arbitrary neural networks).
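For a single discrete unobserved feature, the conditional follows directly from the proposal's logits, as the following sketch (names chosen here for illustration) indicates; no separate energy head is needed.
\begin{verbatim}
import numpy as np

def discrete_log_prob(logits, value):
    """Log-probability of a discrete unobserved feature taking category `value`,
    computed directly from the proposal's categorical logits via a log-softmax.
    The logits play the role of (negative) energies, so no extra energy head is used."""
    log_probs = logits - np.logaddexp.reduce(logits)
    return log_probs[int(value)]
\end{verbatim}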
\section{Experiments} \label{sec:experiments}
\subsection{Real-valued UCI Data}
We first evaluate ACE on real-valued tabular data. Specifically, we consider the benchmark UCI repository datasets described by \citet{papamakarios2017masked} (see \autoref{tab:datasets} in the Appendix).
Unlike other approaches to density estimation that require particular network architectures \cite{germain2015made,dinh2016density,nash2019autoregressive,li2020acflow}, ACE has no such restrictions. Thus, we use a simple fully-connected network with residual connections \cite{he2016deep,he2016identity} for both the energy network and proposal network. This architecture is highly expressive, yet simple, and helps avoid adding unnecessary complexity to ACE. The bitmask $\mathbf{b}$, which indicates observed features, is sampled for each training example by first drawing $k \sim \mathcal{U}\{0, d - 1\}$, then choosing $k$ distinct features with uniform probability to be observed. Full experimental details and hyperparameters can be found in the Appendix.
We also consider the scenario in which data features are completely missing, i.e.,~some features are deemed unavailable for particular instances during training and are never part of the observed or unobserved set.\footnote{Features are missing at the per-instance level. For example, this does not mean that the $i^{\text{th}}$ feature is never observed for all training instances.} This allows us to examine the effectiveness of ACE on incomplete datasets, which are common when working with real-world data. When training models with missing data, we simply modify the sets of observed and unobserved indices to remove any indices which have been declared missing. This is a trivial modification and requires no other change to the design or training procedure of ACE. We consider two scenarios where data are missing completely at random at a 10\% and 50\% rate.
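The sketch below illustrates the partitioning used during training, including the adjustment for per-instance missing features; the function and argument names are ours.
\begin{verbatim}
import numpy as np

def sample_partition(d, missing_idx=(), rng=None):
    """Sample observed/unobserved index sets for one training instance.
    Draw k ~ U{0, ..., d-1}, observe k uniformly chosen features, then drop any
    per-instance missing indices from both sets so they enter neither loss term."""
    rng = np.random.default_rng() if rng is None else rng
    k = int(rng.integers(0, d))                          # k in {0, ..., d-1}
    observed = set(rng.choice(d, size=k, replace=False).tolist()) - set(missing_idx)
    unobserved = set(range(d)) - observed - set(missing_idx)
    return sorted(observed), sorted(unobserved)
\end{verbatim}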
\subsubsection{Likelihood Evaluation}
\begin{table*}[t] \centering \caption{Test arbitrary conditional log-likelihoods (in nats) for UCI datasets. Higher is better. Likelihood estimates are computed with 20,000 importance samples for \textsc{Power}, \textsc{Gas}, and \textsc{Hepmass}, 10,000 importance samples for \textsc{Miniboone}, and 3,000 importance samples for \textsc{BSDS}. Results for ACFlow and VAEAC are taken from \citet{li2020acflow}. The best model for each dataset and missing rate is shown in bold. Results are averaged over 5 observed masks.} \label{tab:ac-likelihood-results}
\resizebox{\textwidth}{!}{ \begin{tabular}{@{}lrrrrrrrrrrrrrrr@{}} \toprule
& \multicolumn{3}{c}{\textsc{Power}} & \multicolumn{3}{c}{\textsc{Gas}} & \multicolumn{3}{c}{\textsc{Hepmass}} & \multicolumn{3}{c}{\textsc{Miniboone}} & \multicolumn{3}{c}{BSDS} \\ \cmidrule(l){2-16}
Missing Rate &
\multicolumn{1}{c}{0.0} &
\multicolumn{1}{c}{0.1} &
\multicolumn{1}{c}{0.5} &
\multicolumn{1}{c}{0.0} &
\multicolumn{1}{c}{0.1} &
\multicolumn{1}{c}{0.5} &
\multicolumn{1}{c}{0.0} &
\multicolumn{1}{c}{0.1} &
\multicolumn{1}{c}{0.5} &
\multicolumn{1}{c}{0.0} &
\multicolumn{1}{c}{0.1} &
\multicolumn{1}{c}{0.5} &
\multicolumn{1}{c}{0.0} &
\multicolumn{1}{c}{0.1} &
\multicolumn{1}{c}{0.5} \\ \midrule ACE &
\textbf{0.631} &
\textbf{0.633} &
\textbf{0.600} &
\textbf{9.643} &
\textbf{9.526} &
\textbf{8.530} &
\textbf{-3.859} &
\textbf{-4.255} &
\textbf{-8.133} &
\textbf{0.310} &
\textbf{-0.688} &
\textbf{-5.701} &
\textbf{86.701} &
\textbf{86.130} &
\textbf{80.613} \\ ACE Proposal & 0.583 & 0.573 & 0.542 & 9.484 & 9.348 & 8.183 & -4.417 & -4.796 & -8.497 & -0.241 & -1.328 & -9.169 & 85.228 & 84.204 & 75.767 \\ \midrule ACFlow & 0.561 & 0.557 & 0.458 & 8.086 & 7.568 & 5.405 & -8.197 & -7.784 & -10.538 & -0.972 & -5.150 & -9.892 & 81.827 & 80.783 & 75.050 \\ ACFlow+BG & 0.528 & 0.510 & 0.417 & 7.593 & 7.212 & 4.818 & -6.833 & -9.670 & -10.975 & -1.098 & -3.577 & -10.849 & 81.399 & 79.745 & 73.061 \\ VAEAC & -0.042 & -0.103 & -0.343 & 2.418 & 2.823 & 1.952 & -10.082 & -10.389 & -11.415 & -3.452 & -4.242 & -9.051 & 74.850 & 74.313 & 66.628 \\ \bottomrule \end{tabular} }
\end{table*} \begin{table*}[t] \centering \caption{Test marginal log-likelihoods (in nats) for UCI datasets. Higher is better. We evaluate the marginal distributions of the first 3, 5, and 10 dimensions of each dataset (\textsc{Power} and \textsc{Gas} don't have 10 features, so the joint likelihood over all features is reported instead). The same number of importance samples are used as in \autoref{tab:ac-likelihood-results}. Results for ACFlow and TAN are taken from \citet{li2020acflow}. Bold indicates where ACE outperformed ACFlow. Results are averaged over 5 observed masks.} \label{tab:marginal-likelihood-results}
\resizebox{\textwidth}{!}{ \begin{tabular}{@{}lrrrrrrrrrrrrrrr@{}} \toprule
& \multicolumn{3}{c}{\textsc{Power}} & \multicolumn{3}{c}{\textsc{Gas}} & \multicolumn{3}{c}{\textsc{Hepmass}} & \multicolumn{3}{c}{\textsc{Miniboone}} & \multicolumn{3}{c}{BSDS} \\ \cmidrule(l){2-16}
Dimensions &
\multicolumn{1}{c}{3} &
\multicolumn{1}{c}{5} &
\multicolumn{1}{c}{6} &
\multicolumn{1}{c}{3} &
\multicolumn{1}{c}{5} &
\multicolumn{1}{c}{8} &
\multicolumn{1}{c}{3} &
\multicolumn{1}{c}{5} &
\multicolumn{1}{c}{10} &
\multicolumn{1}{c}{3} &
\multicolumn{1}{c}{5} &
\multicolumn{1}{c}{10} &
\multicolumn{1}{c}{3} &
\multicolumn{1}{c}{5} &
\multicolumn{1}{c}{10} \\ \midrule ACE &
\textbf{-0.56} &
\textbf{1.42} &
\textbf{0.58} &
\textbf{1.31} &
\textbf{4.31} &
\textbf{12.20} &
\textbf{-4.00} &
\textbf{-5.91} &
\textbf{-10.72} &
\textbf{-2.13} &
\textbf{-3.80} &
\textbf{-7.94} &
\textbf{5.10} &
\textbf{9.37} &
\textbf{20.31} \\ ACE Proposal & -0.58 & 1.35 & 0.49 & 1.11 & 3.98 & 11.84 & -4.01 & -5.94 & -10.82 & -2.14 & -3.79 & -7.93 & 5.06 & 9.30 & 20.17 \\ \midrule ACFlow & -0.57 & 1.34 & 0.42 & 0.78 & 3.01 & 10.13 & -4.03 & -6.19 & -11.58 & -2.76 & -5.31 & -10.36 & 5.06 & 9.26 & 19.60 \\ \midrule TAN & -0.54 & 1.40 & 0.57 & 1.22 & 4.47 & 12.09 & -4.00 & -5.92 & -10.87 & -2.13 & -3.73 & -8.13 & 5.11 & 9.43 & 20.44 \\ \bottomrule \end{tabular} }
\end{table*}
\autoref{tab:ac-likelihood-results} presents the average arbitrary conditional log-likelihoods on held-out test data from models trained with different levels of missing data. During inference, no data is missing and $\mathbf{b}$ is drawn from a Bernoulli distribution with $p=0.5$. Likelihoods are calculated autoregressively as described in \autoref{sec:likelihoods}. The order in which the unobserved one-dimensional conditionals are computed is randomly selected for each instance.
We can draw two key findings from \autoref{tab:ac-likelihood-results}. First, we see that our proposal distribution outperforms ACFlow in all cases. Even this exceptionally simple approach (just a fully-connected network that produces a mixture of Gaussians) can give rise to extremely competitive performance, and we see there are advantages to using decomposed densities and unrestricted network architectures. Second, we find that in every case, the likelihood estimates produced by the energy function are higher than those from the proposal, illustrating the benefits of an energy-based approach which imposes no biases on the shape of the learned distributions.
We also examine the arbitrary marginal distributions learned by ACE, i.e.,~the unconditional distribution over a subset of features. We again test our model against ACFlow, and we additionally compare to Transformation Autoregressive Networks (TAN) \cite{oliva2018transformation}, which are designed only for joint likelihood estimation. A separate TAN model has to be trained for each marginal distribution. While a single ACFlow model can estimate all marginal distributions, \citet{li2020acflow} retrained models specifically for arbitrary marginal estimation. Contrarily, we used the same ACE models when evaluating arbitrary marginals as were used for arbitrary conditionals. Results are provided in \autoref{tab:marginal-likelihood-results}. We find that ACE outperforms ACFlow in all cases and even surpasses TAN most of the time, even though ACFlow and TAN both received special training for marginal likelihood estimation and ACE did not.
\subsubsection{Imputation} \label{sec:imputation}
\begin{figure*}
\caption{Normalized root-mean-square error (NRMSE) of imputations generated by ACE. Lower is better. NRMSE is computed as the root-mean-square error normalized by the standard deviation of each feature and then averaged across all features. Estimates of energy distribution means are computed with 20,000 importance samples for \textsc{Power} and \textsc{Gas}, 10,000 importance samples for \textsc{Hepmass} and \textsc{Miniboone}, and 3,000 importance samples for \textsc{BSDS}. Results for ACFlow and VAEAC are taken from \citet{li2020acflow}.}
\label{fig:ac-imputation}
\end{figure*}
We also evaluate ACE for data imputation, where some elements are missing from a dataset completely at random and we seek to infer their values. ACE is naturally applied to this task --- we consider $p(\mathbf{x}_u \mid \mathbf{x}_o)$, where $\mathbf{x}_u$ contains the missing features.
\autoref{fig:ac-imputation} shows the normalized root-mean-square error (NRMSE) on held-out test data. Again, we consider models trained with three different levels of missing data. During inference, $\mathbf{b}$ is drawn from a Bernoulli distribution with $p = 0.5$ for the 0\% and 50\% missing rates and $p = 0.9$ for the 10\% missing rate.\footnote{This means that for the 10\% missing rate, we are imputing fewer values (only 10\% as opposed to 50\%) during inference than with the other two missing rates. This explains the lower errors for the 10\% missing rate that we see in \autoref{fig:ac-imputation}. Even though there is missing data during training, we are estimating fewer values based on a larger amount of observed data, which intuitively should result in more accurate imputations.} The means of the unobserved distributions are used as the imputed values (see \autoref{sec:means}).
As seen in \autoref{fig:ac-imputation}, ACE achieves a lower NRMSE score than ACFlow in all cases (exact numbers are available in the Appendix). These results further validate ACE's ability to accurately model arbitrary conditionals, leading us to again advocate for simple methods with few biases. It is also worth noting that ACE and ACE Proposal do comparably in this imputation task, which estimates the first-order moment of conditional distributions. However, as evidenced in \autoref{tab:ac-likelihood-results}, the energy-based likelihood better captures higher-order moments.
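As an illustrative sketch (not the released implementation), the mean of a single unobserved dimension under the energy distribution can be estimated by self-normalized importance sampling from the proposal. The callables \texttt{proposal\_sample}, \texttt{proposal\_logpdf}, and \texttt{neg\_energy} below are assumed stand-ins for the trained networks, where \texttt{neg\_energy} returns the log of the unnormalized likelihood.
\begin{verbatim}
import numpy as np

def energy_mean(proposal_sample, proposal_logpdf, neg_energy, n_samples=20000):
    """Estimate the conditional mean of one unobserved feature under the
    energy distribution via self-normalized importance sampling."""
    xs = proposal_sample(n_samples)               # draws from q(x_ui | x_o)
    log_w = neg_energy(xs) - proposal_logpdf(xs)  # log of unnormalized weights
    log_w -= log_w.max()                          # numerical stability
    w = np.exp(log_w)
    return float(np.sum(w * xs) / np.sum(w))      # importance-weighted mean
\end{verbatim}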
\subsection{High-dimensional Data}
\begin{figure}
\label{tab:mnist}
\label{tab:adult}
\end{figure}
We examine ACE's ability to scale to general high-dimensional data by considering the flattened MNIST dataset \cite{lecun2010mnist} (i.e.,~each example is a 784-dimensional vector). In order to make a fair comparison, we trained a non-convolutional ACFlow model (using the authors' code) on the flattened data as well. The only change to ACE's training procedure is in the distribution of bitmasks that are sampled during training, which we detail in the Appendix.
Table \ref{tab:mnist} compares ACE and ACFlow in terms of arbitrary conditional log-likelihoods, joint bits-per-dimension (BPD), and peak signal-to-noise ratio (PSNR) for inpaintings, and we see that ACE outperforms ACFlow on all three metrics. We also note that ACE's BPD of 1.42 is comparable to prior methods for joint density estimation such as TAN, MADE, and MAF, which have reported BPD of 1.19, 1.41, and 1.52 respectively (lower is better), despite the fact that these other methods can \textit{only} model the joint distribution \cite{oliva2018transformation}. Qualitatively, we see that ACE produces more realistic inpaintings (see \autoref{fig:mnist-samples-ac}) and samples from the joint (see \autoref{fig:mnist-samples-joint}) than ACFlow. These findings indicate that ACE's performance scales well to high-dimensional data.
\begin{figure}
\caption{Inpaintings from ACE and ACFlow. Left: Groundtruth and observed pixels. Middle: ACE samples. Right: ACFlow samples.}
\label{fig:mnist-samples-ac}
\caption{Samples from the joint distribution (i.e.,~all pixels are unobserved). Left: ACE samples. Right: ACFlow samples.}
\label{fig:mnist-samples-joint}
\caption{MNIST samples generated by ACE and ACFlow.}
\label{fig:mnist-samples}
\end{figure}
\subsection{Mixed Continuous-Discrete Data}
In order to demonstrate ACE's ability to model data with both continuous and discrete features, we conduct experiments on the UCI Adult dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/adult}}, which offers a relatively even balance between the number of continuous and discrete features (see below). We preprocess the data by standardizing continuous features and dropping instances with missing values. We also only keep instances for which the \texttt{native-country} feature is \texttt{United-States}.\footnote{Such instances account for about 91\% of the data, and the other 9\% take on any of 39 possible values. Thus we are able to discard a highly class-imbalanced feature while retaining the vast majority of the data.} The processed data has 6 continuous features and 8 discrete features and is split into train, validation, and test partitions of size 22003, 5501, and 13788 respectively.
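For concreteness, a hedged sketch of this preprocessing in pandas is given below. The column names follow the standard UCI Adult header and are assumptions of the sketch rather than a description of our exact pipeline (in particular, standardization statistics would normally be computed on the training split only).
\begin{verbatim}
import pandas as pd

# Column names from the standard UCI Adult description (assumed here).
columns = ["age", "workclass", "fnlwgt", "education", "education-num",
           "marital-status", "occupation", "relationship", "race", "sex",
           "capital-gain", "capital-loss", "hours-per-week",
           "native-country", "income"]
continuous = ["age", "fnlwgt", "education-num",
              "capital-gain", "capital-loss", "hours-per-week"]

df = pd.read_csv("adult.data", header=None, names=columns,
                 na_values="?", skipinitialspace=True)
df = df.dropna()                                   # drop instances with missing values
df = df[df["native-country"] == "United-States"]   # keep the dominant class only
df = df.drop(columns=["native-country"])           # the feature is now constant
df[continuous] = (df[continuous] - df[continuous].mean()) / df[continuous].std()
\end{verbatim}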
We trained an ACE model and a VAEAC model (using the authors' publicly available code) on this dataset and evaluated them in terms of likelihoods and imputation. For discrete features, the mode of the learned distribution is imputed (as opposed to the mean). The results are presented in Table \ref{tab:adult}, where we see that ACE outperforms VAEAC on all metrics. We also reiterate that, because it relies on the change-of-variables formula and therefore requires continuous data, ACFlow cannot be trained on this dataset.
\subsection{Importance Sampling Accuracy}
\begin{figure}
\caption{Percent error of normalizing constant estimates obtained with importance sampling when compared to ``true'' values obtained with numerical integration. Estimates grow more accurate as the number of importance samples increases. Whiskers indicate 1.5 times the IQR.}
\label{fig:is-box}
\end{figure}
In order to better understand the impact of the number of importance samples used when estimating normalizers, we compare the importance sampling estimates from our model to ``true'' estimates obtained via numerical integration (similar to \cite{nash2019autoregressive}). For each UCI dataset, we randomly select 10,000 one-dimensional conditionals (from a validation set that was not seen during training) and integrate the unnormalized energy likelihoods using a trapezoidal rule \cite{pitkin2017lintegrate}. For each conditional, we then also estimate the normalizer with an increasing number of importance samples. \autoref{fig:is-box} shows the percentage errors of the importance sampling estimates. In most cases, a small number of samples (e.g., 20) can already produce relatively accurate estimates, and we see that the estimates grow more accurate as the number of samples increases.
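The comparison reduces to the following kind of check for each sampled one-dimensional conditional (an illustrative sketch; \texttt{neg\_energy} is an assumed stand-in for the unnormalized log-likelihood and \texttt{proposal} for the mixture-of-Gaussians proposal with \texttt{sample}/\texttt{logpdf} methods).
\begin{verbatim}
import numpy as np

def is_normalizer(neg_energy, proposal, n_samples):
    """Importance-sampling estimate of the normalizing constant."""
    x = proposal.sample(n_samples)
    return np.mean(np.exp(neg_energy(x) - proposal.logpdf(x)))

def trapezoid_normalizer(neg_energy, lo, hi, n_grid=10000):
    """'True' normalizer from trapezoidal integration on a dense grid."""
    grid = np.linspace(lo, hi, n_grid)
    return np.trapz(np.exp(neg_energy(grid)), grid)

def percent_error(neg_energy, proposal, lo, hi, n_samples):
    z_true = trapezoid_normalizer(neg_energy, lo, hi)
    z_est = is_normalizer(neg_energy, proposal, n_samples)
    return 100.0 * abs(z_est - z_true) / z_true
\end{verbatim}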
\section{Conclusion}
In this work, we present a simple approach to modeling all arbitrary conditional distributions $p(\mathbf{x}_u \mid \mathbf{x}_o)$ over a set of covariates. Our method, Arbitrary Conditioning with Energy (ACE), is the first to wholly reduce arbitrary conditioning to one-dimensional conditionals with arbitrary observations and to estimate these with energy functions. By using an energy function to specify densities, ACE can more easily model highly complex distributions, and it can freely use high-capacity networks to model the exponentially many distributions at hand.
Empirically, we find that ACE achieves state-of-the-art performance for arbitrary conditional/marginal density estimation and for data imputation. For a given dataset, all of these results are produced with the same trained model. This high performance does not come at the cost of complexity --- ACE is much simpler than other common approaches, which often rely on restrictive components such as normalizing flows or networks with specially masked connections \cite{li2020acflow,germain2015made}.
Furthermore, ACE's proposal distribution, a basic mixture of Gaussians, outperforms prior methods, demonstrating that the principle of learning one-dimensional distributions is still powerful when decoupled from energy-based learning. These results emphasize that seemingly complex problems do not necessitate highly complex solutions, and we believe future work on arbitrary density estimation will benefit from similar ideas.
\acksection
This work was supported in part by grants NIH 1R01AA02687901A1 and NSF IIS2133595.
\appendix
{\LARGE \textbf{Appendix}}
\section{Sampling from ACE}
Sampling the proposal distribution can be performed in an autoregressive fashion where $x_{u_i}$ is sampled from \mbox{$q(x_{u_i} \mid \mathbf{x}_o)$} then added to the observed set, at which point $x_{u_{i + 1}}$ can be sampled. We do this until all unobserved features have been sampled. The pseudocode for this procedure is presented in \autoref{alg:proposal-sample}.
We also want to produce samples that come from the energy function. One drawback of energy-based models is that we are unable to analytically sample the learned distribution. However, there are several methods for obtaining approximate samples. We employ a modification of the proposal sampling procedure such that many proposal samples are drawn at each step, and a single sample is then chosen from that collection based on importance weights. As the number of samples goes to infinity, this is consistent with drawing samples from the energy distribution. The pseudocode for this procedure is presented in \autoref{alg:energy-sample}. We note that this method of sampling is closely related to sampling importance resampling \cite{rubin1988using}.
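A single resampling step of this procedure can be written as follows (an illustrative sketch in which \texttt{proposal} and \texttt{neg\_energy} stand in for the trained proposal and the unnormalized log-likelihood at the current autoregressive step).
\begin{verbatim}
import numpy as np

def sir_step(proposal, neg_energy, n_samples, rng=np.random.default_rng()):
    """Draw one approximate sample from the energy distribution by
    sampling importance resampling over proposal draws."""
    xs = proposal.sample(n_samples)               # candidates from the proposal
    log_w = neg_energy(xs) - proposal.logpdf(xs)  # log importance weights
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return xs[rng.choice(n_samples, p=w)]         # resample one candidate
\end{verbatim}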
\section{Algorithms}
For convenience, we provide the procedure for ACE's test-time likelihood evaluation in \autoref{alg:likelihood} and sampling in \autoref{alg:proposal-sample} and \autoref{alg:energy-sample}.
\begin{algorithm}[p]
\caption{ACE Likelihood Evaluation}
\label{alg:likelihood}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} $\mathbf{x}_o$, $\mathbf{x}_u$, $\mathbf{b}$
\STATE Set $\mathbf{x}_{cur} = \boldsymbol{\phi}(\mathbf{x}_o;\mathbf{b})$ and $\mathbf{b}_{cur} = \mathbf{b}$
\STATE Initialize $r = 0$
\STATE Choose an arbitrary permutation $u^\prime$ of $u$
\FOR{$u^\prime_i$ {\bfseries in} $u^\prime$}
\STATE Compute $\log p(x_{u^\prime_i} \mid \mathbf{x}_{cur})$ using Equation 8
\STATE Set $r = r + \log p(x_{u^\prime_i} \mid \mathbf{x}_{cur})$
\STATE Set $\mathbf{x}_{cur}[u^\prime_i] = x_{u^\prime_i}$
\STATE Set $\mathbf{b}_{cur}[u^\prime_i] = 1$
\ENDFOR
\STATE {\bfseries Output:} $r$, which contains $\log p(\mathbf{x}_u \mid \mathbf{x}_{o})$
\end{algorithmic} \end{algorithm}
\begin{algorithm}[p]
\caption{ACE Proposal Sampling}
\label{alg:proposal-sample}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} $\mathbf{x}_o$, $\mathbf{b}$, $u$
\STATE Set $\mathbf{x}_{cur} = \boldsymbol{\phi}(\mathbf{x}_o;\mathbf{b})$ and $\mathbf{b}_{cur} = \mathbf{b}$
\STATE Choose an arbitrary permutation $u^\prime$ of $u$
\FOR{$u^\prime_i$ {\bfseries in} $u^\prime$}
\STATE Sample $x_{u^\prime_i} \sim q(x_{u^\prime_i} \mid \mathbf{x}_{cur}; \mathbf{b}_{cur})$
\STATE Set $\mathbf{x}_{cur}[u^\prime_i] = x_{u^\prime_i}$
\STATE Set $\mathbf{b}_{cur}[u^\prime_i] = 1$
\ENDFOR
\STATE {\bfseries Output:} $\mathbf{x}_{cur}$, which contains the observed and imputed values
\end{algorithmic} \end{algorithm}
\begin{algorithm}[p]
\caption{ACE Energy Sampling}
\label{alg:energy-sample}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} $\mathbf{x}_o$, $\mathbf{b}$, $u$, $N$
\STATE Set $\mathbf{x}_{cur} = \boldsymbol{\phi}(\mathbf{x}_o;\mathbf{b})$ and $\mathbf{b}_{cur} = \mathbf{b}$
\STATE Choose an arbitrary permutation $u^\prime$ of $u$
\FOR{$u^\prime_i$ {\bfseries in} $u^\prime$}
\STATE Draw samples $\{x_{u^\prime_i}^{(s)}\}_{s=1}^N$ from $q(x_{u^\prime_i} \mid \mathbf{x}_{cur}; \mathbf{b}_{cur})$
\STATE Compute importance weights for the $N$ samples, as in Equation 6
\STATE Draw $x_{u^\prime_i}$ from the $N$ samples according to the importance weights
\STATE Set $\mathbf{x}_{cur}[u^\prime_i] = x_{u^\prime_i}$
\STATE Set $\mathbf{b}_{cur}[u^\prime_i] = 1$
\ENDFOR
\STATE {\bfseries Output:} $\mathbf{x}_{cur}$, which contains the observed and imputed values
\end{algorithmic} \end{algorithm}
\section{Order Consistency}
Because ACE can compute likelihoods by using any permutation of $u$, there are numerous ways to compute $p(\mathbf{x}_u \mid \mathbf{x}_o)$ for a given $\mathbf{x}_u$ and $\mathbf{x}_o$. However, due to inaccuracies in the learned model, we may obtain different results depending on which ordering is used. This phenomenon has surfaced in prior work as well \cite{uria2014deep,germain2015made}, where it has been argued that different orderings can be treated as an advantageous ensemble of models. This perspective is certainly useful, but ideally, our model should give equivalent likelihoods for all orderings. In order to better understand ACE's susceptibility to this problem, as well as how it may be addressed, we do a straightforward experiment in which we fine-tune trained ACE models with an additional loss term that minimizes the variance of $p(\mathbf{x}_u \mid \mathbf{x}_o)$ computed (autoregressively) over 10 permutations of $u$. Intuitively, as the expected variance over all $\mathbf{x}$ and $\mathbf{b}$ goes to zero, the distributions induced by different orderings of $u$ become the same and the model gives the same likelihood regardless of the chosen ordering.
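A sketch of the fine-tuning objective is given below; \texttt{log\_likelihood(x, b, perm)} is an assumed function that evaluates $\log p(\mathbf{x}_u \mid \mathbf{x}_o)$ autoregressively in the order \texttt{perm}, and, for numerical simplicity, this sketch penalizes the variance of the log-likelihood across orderings rather than of the likelihood itself.
\begin{verbatim}
import torch

def consistency_loss(log_likelihood, x, b, u, n_perms=10, coef=1.0):
    """Average negative log-likelihood plus a penalty on the variance of
    the conditional log-likelihood over random orderings of u.
    u is assumed to be a 1-D tensor of unobserved indices, and
    log_likelihood is assumed to return a scalar tensor."""
    lls = torch.stack([log_likelihood(x, b, u[torch.randperm(len(u))])
                       for _ in range(n_perms)])
    return -lls.mean() + coef * lls.var()
\end{verbatim}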
\autoref{tab:consistency} shows the results of these experiments. We find that ACE models can be effectively fine-tuned to produce more consistent likelihoods over different orderings, at almost no cost in performance in terms of the average likelihood. However, we see that if desired, even stronger consistency can be obtained for a slight tradeoff in the average likelihood.
\begin{table}[b] \centering \caption{Log-likelihoods after different amounts of consistency fine-tuning. The number after the $\pm$ is the average standard deviation of $p(\mathbf{x}_u \mid \mathbf{x}_o)$ when computed over 1000 randomly chosen orderings. The coefficient refers to the weight of the variance term in the loss during fine-tuning. The 0.0 coefficient refers to the model with which the fine-tuning was initialized.} \label{tab:consistency} \vskip 0.2cm \begin{tabular}{@{}lrrrr@{}} \toprule Coefficient & \multicolumn{1}{c}{0.0} & \multicolumn{1}{c}{0.5} & \multicolumn{1}{c}{1.0} & \multicolumn{1}{c}{2.0} \\ \midrule \textsc{Power} & $0.622 \pm 0.072$ & $0.622 \pm 0.063$ & $0.620 \pm 0.058$ & $0.619 \pm 0.051$ \\ \textsc{Gas} & $9.583 \pm 0.513$ & $9.587 \pm 0.457$ & $9.556 \pm 0.420$ & $9.497 \pm 0.376$ \\ \textsc{Hepmass} & $-3.555 \pm 0.878$ & $-3.669 \pm 0.718$ & $-3.823 \pm 0.621$ & $-4.090 \pm 0.505$ \\ \bottomrule \end{tabular} \end{table}
\section{Experimental and Implementation Details} \label{sec:exp-details}
\begin{table}[p] \centering \caption{UCI datasets used in our experiments.} \label{tab:datasets} \vskip 0.15in \begin{tabular}{@{}lrr@{}} \toprule Dataset & \multicolumn{1}{l}{Instances} & Dimensions \\ \midrule \textsc{Power} & 1.66M & 6 \\ \textsc{Gas} & 852K & 8 \\ \textsc{Hepmass} & 315K & 21 \\ \textsc{Miniboone} & 29.6K & 43 \\ BSDS & 1M & 63 \\ \bottomrule \end{tabular} \vskip -0.1in \end{table}
\begin{figure}
\caption{We use a bitmask $\mathbf{b}$ and zero-imputing function $\boldsymbol{\phi}(\cdot ; \mathbf{b})$ to ensure network inputs always have the same shape, regardless of how many features are observed or unobserved. In the figure, shaded cells correspond to observed features.}
\label{fig:zero-imputation}
\end{figure}
We used a fully-connected residual architecture for both the proposal and energy networks. Each network uses pre-activation residual blocks \cite{he2016identity} and ReLU activations.
While the energy network only outputs one energy at a time, we can compute energies for every unobserved dimension in parallel by processing them as a batch. A softplus activation is applied to the network's output to ensure energies are nonnegative. We also enforce an upper bound on the energies by manually clipping the network outputs. This is equivalent to setting a lower bound on the unnormalized likelihoods, and we found it improved stability during training. A bound of 30 worked well in our experiments.
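In code, this bounding amounts to something like the following (an illustrative sketch; the bound of 30 is the value mentioned above).
\begin{verbatim}
import torch
import torch.nn.functional as F

def bounded_energies(raw_outputs, max_energy=30.0):
    """Map raw network outputs to nonnegative energies via softplus and
    clip them, which bounds the unnormalized likelihoods from below."""
    return torch.clamp(F.softplus(raw_outputs), max=max_energy)
\end{verbatim}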
During training, we approximate normalizing constants with 20 importance samples from the proposal distribution. Proposal distributions used 10 mixture components, and the minimum allowed scale of each component was 0.001. A small amount of Gaussian noise was added to continuous values in each batch of data during training, as we found it improved stability. The learning rate was linearly annealed over the course of training. We used a warm-up period at the beginning of training where only the proposal network is optimized so that importance sampling does not occur until the proposal is sufficiently similar to the target distribution. \autoref{tab:hyperparameters} gives the hyperparameters that varied between datasets. Evaluations were performed using the weights that produced the highest likelihoods on a set of validation data during training.
All models except for MNIST were trained on two Tesla V100 GPUs, and training time varied from roughly a few hours to a day depending on the dataset. The MNIST model trained on four Tesla V100 GPUs for between one and two days. However, we note that multiple GPUs are not necessary for good results --- we found that state-of-the-art performance can still be achieved by training ACE models on a single GPU with smaller batch sizes.
\subsection{MNIST}
When training on MNIST, images were scaled to the range $[0, 1]$, and the reported likelihoods are evaluated in that space.
For MNIST, we use a different masking scheme during training so that the model learns to inpaint specific types of regions, such as square cutouts. The mask for each example is sampled from a mixture of the following distributions: \begin{itemize}
\item \textbf{Bernoulli:} Each pixel is randomly selected to be observed with probability $p=0.5$.
\item \textbf{Half:} The upper, lower, left, or right half of the image is randomly selected to be observed.
\item \textbf{Rectangular:} A random rectangle within the image is selected to be unobserved, with the constraint that the area of the rectangle is at least 30\% of the image.
\item \textbf{Square:} A square with a fourth of the area of the image is randomly selected to be unobserved. \end{itemize}
During training, each sampled mask was also overlaid with an additional Bernoulli mask with $p \sim \mathcal{U}(0.02, 0.98)$ in order to help simulate the distribution of masks that the model encounters during the autoregressive procedures it uses at inference time. At test time, this extra Bernoulli noise was not used when sampling masks.
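An illustrative sketch of this mask-sampling scheme for $28\times 28$ images follows (1 indicates an observed pixel; the exact way the Bernoulli overlay is composed with the base mask is an assumption of the sketch).
\begin{verbatim}
import numpy as np

def sample_mask(rng, h=28, w=28, train=True):
    """Sample an observation mask from the mixture of mask types above."""
    kind = rng.choice(["bernoulli", "half", "rectangle", "square"])
    mask = np.ones((h, w), dtype=np.int64)

    if kind == "bernoulli":
        mask = rng.binomial(1, 0.5, size=(h, w))      # each pixel observed w.p. 0.5
    elif kind == "half":
        mask[:] = 0
        side = rng.integers(4)                        # one half is observed
        if side == 0:
            mask[: h // 2, :] = 1
        elif side == 1:
            mask[h // 2:, :] = 1
        elif side == 2:
            mask[:, : w // 2] = 1
        else:
            mask[:, w // 2:] = 1
    elif kind == "rectangle":
        while True:                                   # unobserved rectangle, area >= 30%
            r0, r1 = sorted(rng.integers(0, h + 1, size=2))
            c0, c1 = sorted(rng.integers(0, w + 1, size=2))
            if (r1 - r0) * (c1 - c0) >= 0.3 * h * w:
                break
        mask[r0:r1, c0:c1] = 0
    else:                                             # unobserved square, 1/4 of the area
        side = h // 2
        r0 = rng.integers(0, h - side + 1)
        c0 = rng.integers(0, w - side + 1)
        mask[r0:r0 + side, c0:c0 + side] = 0

    if train:                                         # extra Bernoulli overlay (training only)
        p = rng.uniform(0.02, 0.98)
        mask = mask * rng.binomial(1, p, size=(h, w))
    return mask.reshape(-1)                           # flattened, matching the inputs
\end{verbatim}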
ACFlow was trained analogously to ACE, using the authors' code.
\begin{table}[b] \centering \caption{Dataset-specific hyperparameters.} \label{tab:hyperparameters} \vskip 0.1cm \resizebox{\textwidth}{!}{ \begin{tabular}{@{}lrrrrrrr@{}} \toprule \textsc{Hyperparameter} & \multicolumn{1}{l}{\textsc{Power}} & \multicolumn{1}{l}{\textsc{Gas}} & \multicolumn{1}{l}{\textsc{Hepmass}} & \multicolumn{1}{l}{\textsc{Miniboone}} & \multicolumn{1}{l}{BSDS} & \multicolumn{1}{l}{\textsc{Adult}} & \multicolumn{1}{l}{MNIST} \\ \midrule Dropout & 0.2 & 0.0 & 0.2 & 0.5 & 0.2 & 0.5 & 0.2 \\ MSE Penalty Coef. & 1.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.0 & 0.0 \\ Training Steps & 1600000 & 1000000 & 1000000 & 15000 & 1000000 & 40000 & 800000 \\ Warm-up Steps & 5000 & 5000 & 5000 & 100 & 5000 & 2500 & 100000 \\ Training Noise Scale & 0.003 & 0.001 & 0.001 & 0.005 & 0.001 & 0.005 & 0.01 \\ Learning Rate & 0.0001 & 0.001 & 0.0005 & 0.001 & 0.001 & 0.0005 & 0.0002 \\ Batch Size & 512 & 2048 & 2048 & 2048 & 2048 & 1024 & 64 \\ Proposal Hidden Dim. & 512 & 512 & 512 & 512 & 1024 & 512 & 1024 \\ Proposal Res. Blocks & 4 & 4 & 4 & 4 & 4 & 4 & 5 \\ Proposal Latent Output Dim. & 64 & 64 & 64 & 64 & 64 & 64 & 128 \\ \bottomrule \end{tabular} } \end{table}
\section{Results}
Table \ref{tab:ac-ll-appendix} presents the full UCI likelihood results with standard deviations. In the main text, the imputation results are presented as a graph. We give the values that generated the graph, along with standard deviations, in Table \ref{tab:imputation-appendix}.
\end{document}
\begin{document}
\begin{center} \begin{large} \textbf{Bifurcation and Secondary Bifurcation of\\ Heavy Periodic Hydroelastic Travelling Waves} \end{large}
\textsc{Pietro Baldi}\footnote{ \emph{E-mail}: \texttt{[email protected]}}
\emph{Dipartimento di Matematica e Applicazioni ``R. Caccioppoli'',\\ Universit\`a di Napoli ``Federico II'',\\ Via Cintia, 80126 Napoli, Italy}
\textsc{and}
\textsc{John F. Toland}\footnote{ \emph{E-mail}: \texttt{[email protected]}}
\emph{Department of Mathematical Sciences,\\ University of Bath,\\ Bath BA2 7AY, UK} \end{center}
\begin{abstract} The existence question for two-dimensional symmetric steady waves travelling on the surface of a deep ocean beneath a heavy elastic membrane is analyzed as a problem in bifurcation theory. The behaviour of the two-dimensional cross-section of the membrane is modelled as a thin (unshearable), heavy, hyperelastic Cosserat rod, and the fluid beneath is supposed to be in steady two-dimensional irrotational motion under gravity. When the wavelength has been normalized to be $2\pi$, and assuming that gravity and the density of the undeformed membrane are prescribed, there are two free parameters in the problem: the speed of the wave and drift velocity of the membrane.
It is observed that the problem, when linearized about uniform horizontal flow, has at most two independent solutions for any values of the parameters.
When the linearized problem has only one normalized solution, it is shown that the full nonlinear problem has a sheet of solutions comprised of a family of curves bifurcating from simple eigenvalues. Here one of the problem's parameters is used to index a family of bifurcation problems in which the other is the bifurcation parameter.
When the linearized problem has two solutions, with wave numbers $k$ and $l$ such that $\max\{k,l\} / \min\{k,l\} \notin {\mathbb Z}$,
it is shown that there are three two-dimensional sheets of bifurcating solutions. One consists of ``special'' solutions with minimal period $2\pi/k$; another consists of ``special'' solutions with minimal period $2\pi/l$; and the third, apart from those on the curves where it intersects the ``special'' sheets, consists of ``general'' solutions with minimal period $2\pi$.
The two sheets of ``special'' solutions are rather similar to those that occur when the linearized problem has only one solution.
However, points where the first sheet or the second sheet intersects the third sheet are period-multiplying (or symmetry-breaking) secondary bifurcation points on primary branches of ``special'' solutions. This phenomenon is analogous to that of Wilton ripples, which arises in the classical water-wave problem when the surface tension has special values. In the case of Wilton ripples, the coefficient of surface tension and the wave speed are the problem's two parameters. In the present context, there are two speed parameters, meaning that the membrane elasticity does not need to be highly specified for this symmetry-breaking phenomenon to occur.
\end{abstract}
\begin{small} \emph{Keywords:} hydrodynamic waves, hydroelastic waves, nonlinear elasticity, free boundary problems, travelling waves, bifurcation theory, secondary bifurcations, Wilton ripples, Lyapunov-Schmidt reduction, symmetry-breaking.
\emph{2000 Mathematics Subject Classification:} 35R35, 74B20, 74F10, 76B07, 37G40.
\end{small}
\section{Introduction} The existence question for symmetric, $2\pi$-periodic steady waves travelling with speed $c_0$ on the surface of a heavy, inviscid fluid which is at rest at infinite depth beneath a heavy, thin (unshearable) elastic membrane was considered in \cite{Toland-heavy} as a global problem in the calculus of variations. Here we use local methods to study the bifurcation of such waves. We will show that there are two free parameters, $c_0$ (the wave speed) and $d$ (the membrane drift velocity) and that, when the problem is linearized about a uniform horizontal stream with the membrane unstretched, there are no more than two linearly independent solutions. (See equation \eqref{pinotto}, in which $\lambda_1 = \rho(c_0-d)^2$, $\lambda_2= c_0^2$, $\rho$ is the density of the membrane and $\mathcal{C}$ is the Hilbert transform of a $2\pi$-periodic function.) When there is no non-zero solution, nonlinear waves do not bifurcate from uniform horizontal streams; when there is only one solution, a sheet of solutions representing a parameterized family of bifurcations from simple eigenvalues occurs; when there are two independent solutions, there bifurcate three sheets of small-amplitude periodic waves. The latter corresponds to the presence of secondary bifurcations from curves of ``special'' solutions, the hydroelastic analogue of what are known as Wilton ripples \cite{wilton}, as described in the Abstract and in Section \ref{picture}. To quote from \cite{cd}, ``Waves characterized by two dominant modes are often called Wilton's ripples in the literature in reference to Wilton's paper (1915). It turns out that the phenomenon described as Wilton's ripples was accounted for at least twice prior to Wilton's paper : in an unpublished addendum to an essay that Bohr (1906) wrote in order to win the Royal Danish Academy prize on the theme `The surface tension of water' and in a paper by Harrison (1909).'' (Bohr's essay is \cite{bohr}.)
A key feature of the present analysis is the reduction of the physical problem (\ref{classical problem} a-g) to an equation \eqref{eq:F=0} for one $2\pi$-periodic function of one real variable and two parameters.
\subsection*{Physical Problem} This problem is described in detail in \cite{Toland-heavy}. To summarise, we consider waves on the surface of an infinitely deep ocean beneath an elastic membrane under the assumption that there is no friction between the membrane and the fluid. Since the water depth is infinite, there is no loss (after normalizing length scales appropriately) in restricting attention to waves that have period $2\pi$ in the horizontal direction. The fluid's Eulerian velocity field is supposed to be two-dimensional and stationary at the same time as the material of the membrane is in motion, driven by gravity, by forces and couples due to its elasticity and by pressure from the fluid. The resulting mechanical behaviour of the surface membrane is modelled by regarding its cross-section as a heavy, unshearable, hyperelastic Cosserat rod, using the treatment in Antman \cite[Ch.~4]{Antman}. We deal with this first.
\textbf{Membrane Elasticity.} Let $(x,0) \in {\mathbb R}^2$ be the rest position of a material point in the membrane cross-section and let $\boldsymbol{r}(x) \in {\mathbb R}^2$ be its position after deformation. In the notation of Antman, $\vartheta(x)$ is the angle between the horizontal positive semi-axis and the vector $\boldsymbol{r}'(x)$ (where $'$ means $d/dx$). Let \begin{equation} \label{def nu mu}
\nu(x)=|\boldsymbol{r}'(x)| \, \text{ and } \, \mu(x)= \vartheta'(x). \end{equation} Thus $\nu(x)$ is the stretch of the membrane at the point $\boldsymbol{r}(x)$ and \[ \widehat\sigma(\boldsymbol{r}(x)) = \frac{\mu(x)}{\nu(x)} \] is its curvature. We suppose that the elastic properties of the membrane are described as follows. \begin{hyp} \label{hyp:exists E} \emph{(Hyperelasticity)} There exists a $C^\infty$-function $E(\nu,\mu) \geq 0$,~ $\nu > 0$, $\mu \in {\mathbb R}$, such that, after the deformation $(x,0) \mapsto \boldsymbol{r} (x)$, the elastic energy in the reference segment $\{ (x,0) : x \in [x_1,x_2]\}$ is \[ \mathcal{E} (\boldsymbol{r}) = \int_{x_1}^{x_2} E(\nu(x), \mu(x)) \, dx, \] where $\nu(x), \mu(x)$ are defined in \eqref{def nu mu}. $E$ is called the stored energy function. \end{hyp} We also assume that the reference configuration, unstretched and unbent, is a local minimum of the elastic energy, which is locally convex. \begin{hyp} \label{hyp:E=0 and convex} \emph{(Rest state and local convexity)} \[ E(1,0) = E_1(1,0) = E_2(1,0) = E_{12}(1,0)=0,\quad E_{11}(1,0) > 0,~E_{22}(1,0)>0. \] Subscripts 1, 2 denote partial derivatives with respect to $\nu$, $\mu$, respectively. \end{hyp}
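By way of illustration (this particular choice plays no role in what follows), a simple stored energy function satisfying H\ref{hyp:exists E} and H\ref{hyp:E=0 and convex} is
\[
E(\nu,\mu)=\frac{\alpha}{2}(\nu-1)^2+\frac{\beta}{2}\,\mu^2,\qquad \alpha,\beta>0,
\]
for which $E(1,0)=E_1(1,0)=E_2(1,0)=E_{12}(1,0)=0$, $E_{11}(1,0)=\alpha>0$ and $E_{22}(1,0)=\beta>0$.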
\begin{remark*} Since this is a study of bifurcating waves, regularity questions that are significant for large-amplitude waves are unimportant here. It is therefore for convenience only that we suppose $E\in C^\infty$. With a little more technical effort the theory can be developed for $E$ with much less regularity. \qed\end{remark*}
\textbf{Travelling Waves.} For a periodic travelling wave, the position at time $t$ of the material point with Lagrangian coordinates $(x,0)$ in the undeformed membrane is assumed to be given by \begin{equation*} \mathbf R(x ,t):\,=\big(x +dt +u(x -ct),\, v(x -ct)\big), \end{equation*} where $u$ and $v$ are $2\pi$-periodic and $c,\,d \in {\mathbb R} $. Let $c_0 = c+d$. Then the surface profile at time $t$ is the curve \begin{align*} \mathcal{S}_t &= \{(x +dt +u(x -ct),\, v(x -ct)) : x \in {\mathbb R} \} \\ &= \{(s+u(s),v(s)):s \in {\mathbb R}\}+(c_0t,0) \\ &=: \mathcal{S}+(c_0t,0). \end{align*} Thus $\mathcal{S}_t$ is represented by a profile $\mathcal{S}$ of fixed shape propagating from left to right, say, without changing shape at a constant velocity $c_0$ while at the same time the material point with Lagrangian coordinates $(x ,0)$ has temporal period $2\pi/c$ relative to a frame moving with speed $d$. We refer to $c_0$ as the \emph{wave speed} and to $d$ as the \emph{drift velocity of the membrane}, both calculated relative to the fluid at rest at infinite depth.
Since the membrane is in motion relative to the moving frame, there are inertial effects due to its mass, but it is supposed throughout that there is no friction between the fluid and the membrane. Under this assumption, it was shown in \cite{Toland-heavy} that the inertial effects lead to an equivalent steady-wave problem in which the wave speed and the drift velocity coincide, and the stored energy function is perturbed by a quadratic term.
\begin{remark*} To motivate this observation and what follows, it is worth observing the corresponding situation that arises when travelling-wave solutions $R$ to an analogous nonlinear wave equation \[ (E'(R_x))_x = \rho R_{tt}, \quad \rho>0, \] are sought. This equation describes longitudinal motion in a one-dimensional elastic rod for which $E$ is the stored energy function, $\rho$ is the undeformed density and $R(x,t)\in {\mathbb R}$ is the position at time $t$ of the point with Lagrangian coordinate $x\in {\mathbb R}$. It is significantly simpler than the hydroelastic wave problem because here there are no body forces and because the shape of the rod does not change, only the relative positions of its material points in a straight line change with time.
First note that a stationary (time-independent) solution satisfies $(E'(R_x))_x =0$ and is given by a critical point of the potential energy functional \[ \int_0^{2\pi} E (R_x)\,dx. \] On the other hand, $R$ is a periodic travelling wave with drift velocity $d$ if \[ R(x,t) =c_0t+r(x-ct), \] for some $c$ and $c_0$, where $r(s+2\pi) = 2\pi+r(s)$ and $d=c_0-c$. Here we regard the variable $s$ $(\,= x-ct)$ as a steady Lagrangian coordinate for travelling waves. The equation for $r(s)$ is then \begin{equation} \label{ogue} \{ E'(r_s) -c^2\rho r_s \}_s =0, \end{equation} which corresponds to a critical point of \[ \int_0^{2\pi} \Big(E (r_s)-\frac{\rho}2 c^2r_s^2 \Big)\,ds. \] Therefore periodic travelling waves correspond to a boundary-value problem for stationary solutions of a nonlinear wave equation with a \emph{different} stored energy function \[ \underline E (p) := E(p) - \frac \rho 2 c^2p^2, \] instead of $E(p)$. In \cite{Toland-heavy}, $\underline E$ is called the pseudo-potential energy of travelling waves.
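For the reader's convenience, the computation behind \eqref{ogue} is elementary: if $R(x,t)=c_0t+r(x-ct)$, then $R_x=r_s$ and $R_{tt}=c^2r_{ss}$ (with $s=x-ct$), so the wave equation $(E'(R_x))_x=\rho R_{tt}$ becomes
\[
\big(E'(r_s)\big)_s = \rho c^2 r_{ss} = \big(c^2\rho\, r_s\big)_s,
\]
which is \eqref{ogue}.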
If $c$ is large, $\underline E$ is not convex, even when $E$ is convex \cite{strain}.
Note that if $r$ and $c$ correspond to a travelling wave, then a family, parametrized by $c_0\in {\mathbb R}$, of travelling waves with drift velocity $d$, is given by \[ R(x,t)=c_0t+r(x-ct), \qquad d = c_0-c. \] However, in the hydroelastic wave problem, the interaction of the membrane with the fluid means that the dependence of waves on both parameters is not so trivial. In fact, we will see that the solutions $r(s)=s$, $c_0$ and $c$ arbitrary, to this one-dimensional problem, when combined with a wave profile with zero elevation, is a family of trivial solutions of the hydroelastic wave problem which we describe next. They correspond to an undeformed membrane drifting with velocity $d=c_0-c$ on the surface of a uniform flow with horizontal velocity $c_0$. \qed \end{remark*}
To summarise the analogous treatment in \cite{Toland-heavy} of the equivalent hydroelastic travelling-wave problem, let $(s,0)$ be the steady Lagrangian coordinate, $\boldsymbol{r}(s)$ its deformed position and let $\mu$ and $\nu$ be defined as in \eqref{def nu mu}, with $s$ in place of $x$.
Let $\rho$ be the density of the \emph{undeformed} membrane section and let $\eta (s)= \textbf{j} \cdot \boldsymbol{r}(s)$, where $\textbf{j}$ is the unit vector in the upward vertical direction ($\eta$ is the wave elevation). \begin{subequations} \label{classical problem} In \cite[eqn.~(1.8)]{Toland-heavy} it is shown that $\nu$, $\mu$ and $\eta$ satisfy \begin{equation} \frac{d~}{ds} \Big\{ \nu(s) \, E_1(\nu(s),\mu(s)) -\frac{\rho}{2}\, c^2 \nu(s)^2 + \mu(s)\, E_2(\nu(s),\mu(s)) - E(\nu,\mu) -g \rho \eta(s)\Big\} = 0, \end{equation} which is the analogue of \eqref{ogue} in the present situation. The pressure $P$ in the fluid, internal forces and gravity combine to deform the membrane. Thus, from \cite[(1.7e)]{Toland-heavy}, \begin{equation} \label{def P} P(\boldsymbol{r}) = \frac{1}{\nu} \, \Big( \frac{ E_2(\nu, \mu)_s}{\nu} \Big)_{\!s} - \frac{\mu}{\nu} \big( E_1(\nu, \mu) -c^2\rho \nu \big) + \frac{g\rho \cos \vartheta}{\nu}, \end{equation} where, as in \cite{Toland-heavy}, we assume that the material in one period of the membrane surface is a deformation of an interval of length $2\pi$ of the reference membrane\footnote{The periodic wave does not require additional membrane mass as it passes underneath.}: \begin{equation} \label{r constrain} \mathcal{S} \cap \big([ 0, 2\pi]\times {\mathbb R}\big) = \{ \boldsymbol{r}(s) : s \in [s_0, s_0 + 2\pi] \} \ \text{ for some} \ s_0 \in {\mathbb R}. \end{equation}
\textbf{Fluid Motion.} In the moving frame the fluid velocity field is two dimensional, stationary and irrotational, the membrane cross-section coincides with a streamline and the flow at infinite depth is horizontal with velocity $-c_0$. The surface $\mathcal{S}$ is a zero level line for the stream function $\psi(X,Y)$, which is harmonic and represents horizontal laminar flow at infinite depth. So \begin{align} \label{D psi=0} \Delta \psi = 0 & \quad \text{below } \mathcal{S}, \\ \label{psi=0} \psi = 0 & \quad \text{on } \mathcal{S} \ \text{(the kinematic boundary condition),} \\ \label{psi bottom} \nabla \psi (X,Y) \to (0,c_0) & \quad \text{as } Y \to - \infty. \end{align}
\textbf{Membrane-fluid interaction.} The dynamic boundary condition takes the form \begin{equation} \label{psi dynamics}
-\frac12\, |\nabla \psi(X,Y)|^2 - g Y +\frac{c_0^2}{2} = P(\boldsymbol{r}(s)) \quad \text{when } (X,Y) = \boldsymbol{r}(s) \in \mathcal{S}. \end{equation} Here $P$ is given by \eqref{def P} and the left side of \eqref{psi dynamics} is the pressure in the fluid. \end{subequations}
\subsection{The Free-Boundary Problem}\label{fbp}
A steady hydroelastic wave with speed $c_0$ and drift velocity $d$ is a non-self-intersecting smooth $2\pi$-periodic curve $\mathcal{S}$ in the plane for which there exists a solution of \eqref{classical problem} with $c = c_0-d$. Since we are interested in symmetric waves, we require that $\mathcal{S}$ is symmetric about a vertical line. To examine the solution set of this elaborate system we will use standard bifurcation-theory methods based on the implicit function theorem.
Before going into the details, we give a schematic outline of what we have achieved (much remains to be done).
Throughout we treat $g$ (gravity) and $\rho$ (density of the undeformed membrane) as constants and regard \[ \lambda_1 := c^2 \rho, \quad \lambda_2 := c_0^2, \quad \lambda=(\lambda_1,\lambda_2)\in {\mathbb R}^2, \] as the physical parameters. (Because bifurcation theory studies the existence of solutions with small amplitudes and slopes, the question of self-intersection of wave surfaces does not arise in the present study. By contrast, in the theory of large-amplitude waves \cite{Baldi-Toland-var}, considerable effort is required to ensure that there is no self-intersection.)
\subsection{Bifurcation Picture} \label{picture} In this section we explain schematically the geometric picture of small amplitude hydroelastic waves close to a bifurcation point. See Theorems \ref{thm:simple bif} and \ref{thm:secondary bif} for a detailed statement.
The first observation is that, for all choices of the two independent parameters, the problem, when linearized at the trivial solution of uniform horizontal flow under an undeformed membrane, has at most two linearly independent solutions. If there is only one linearized solution when $(\lambda_1,\lambda_2) = (\lambda_1^*,\lambda_2^*)$ say, there is at most only one linearized solution for all nearby $(\lambda_1,\lambda_2)$. Therefore, with either one of the parameters held fixed, there are bifurcations from simple eigenvalues with respect to the other parameter. Their union is a two-dimensional sheet of solutions which bifurcates from $(\lambda_1^*,\lambda_2^*)$. The details are in Section \ref{simple}.
On the other hand, suppose that at $\lambda^*= (\lambda_1^*,\lambda_2^*)$ the linearized problem has two solutions $\cos (k\tau)$ and $\cos (l\tau)$, where $k$ and $l$ are positive integers with $\max\{k,l\} / \min\{k,l\} \notin {\mathbb Z}$. By restricting attention to solutions in $Z_k=\mathrm{span}\, \{\cos (jk\tau):j \in {\mathbb N}\}$, or in $Z_l=\mathrm{span}\, \{\cos (jl\tau): j \in {\mathbb N}\}$, the problem may be reduced to one of ``bifurcation from a simple eigenvalue'' for particular solutions that have \emph{minimal period} $2\pi/k$ or $2\pi/l$, respectively. This is straightforward and similar to what is done in Section \ref{simple}. Locally we obtain a sheet of solutions of minimal period $2\pi/k$ and a sheet of solutions of minimal period $2\pi/l$. Each of these sheets is locally the graph of a function which gives $\lambda_2$ in terms of the wave amplitude and $\lambda_1$ (the roles of $\lambda_2$ and $\lambda_1$ can be reversed). We will refer to these solutions with minimal periods less than $2\pi$ as ``special'' solutions.
In Section \ref{double} we show that in addition to these two sheets of ``special'' solutions there is a two-dimensional sheet of ``general'' solutions and that the sheet of ``general'' solutions intersects each of the sheets of ``special'' solutions in a curve. The solutions on the sheet of ``general'' solutions, except where it intersects the sheet of ``special'' solutions, have minimal period $2\pi$. Therefore the general solutions on this sheet represent a symmetry-breaking (or period-multiplying) secondary bifurcation on the curves of special solutions.
This is the hydroelastic analogue of Wilton ripples, a type of water wave which arises in the presence of surface tension \cite{JT,TJ,wilton}. In Wilton-ripple theory there are also two parameters, the wave speed and the surface tension coefficient (which measures surface elasticity). Wilton ripples bifurcate from uniform streams at certain values of the wave speed, when the surface tension has particular values.
Here the two parameters are independent of the elasticity of the membrane. Therefore the wave and drift speeds can conspire to produce ripples, for \emph{any} prescribed elastic membrane.
After Lyapunov-Schmidt reduction, when the linearized problem has two independent solutions, $\cos(k\tau)$ and $\cos(l\tau)$, there are two bifurcation equations in four unknowns \begin{equation}\label{qual} \Phi_k (t_1,t_2,\lambda_1,\lambda_2) = 0, \quad \Phi_l (t_1,t_2,\lambda_1,\lambda_2) = 0, \end{equation} where $t_1$ and $t_2$ near 0 are the coefficients of $\cos (k\tau)$ and $\cos (l\tau)$, respectively. Solutions with $t_1=0$ or $t_2=0$ correspond to ``special'' solutions. The hypothesis that $\max\{k,l\} / \min\{k,l\} \notin {\mathbb Z}$ leads to the key observation that, for all $t_1,t_2$ near 0, \[ \Phi_k (0,t_2,\lambda_1,\lambda_2) = 0, \quad \Phi_l (t_1,0,\lambda_1,\lambda_2) = 0. \] Then each of the sheets of ``special'' solutions is found by solving one equation in three unknowns, as is done in Section \ref{simple}.
To find the ``general'' solutions, we seek solutions of \eqref{qual} with neither $t_1$ nor $t_2$ equal to 0. This problem is reduced in Section \ref{double} to a desingularized one of the form \begin{equation} \label{help} \Psi_k (t_1,t_2,\lambda_1,\lambda_2) = 0, \quad \Psi_l (t_1,t_2,\lambda_1,\lambda_2) = 0, \end{equation} for which $(0,0,\lambda^*)$ is a solution and $\partial (\Psi_k,\,\Psi_l)/\partial (\lambda_1,\,\lambda_2)$ at $(0,0,\lambda^*)$ is invertible. Then the implicit function theorem gives $\lambda$ in a neighbourhood of $\lambda^*$ in ${\mathbb R}^2$ as a function of $(t_1,t_2)$ in a neighbourhood of the origin in ${\mathbb R}^2$. This is the sheet of general solutions we are seeking. It intersects the ``special'' sheets when $t_1=0$ or $t_2=0$. The present analysis, which yields a \emph{qualitative local description} of the set of all $2\pi$-periodic hydroelastic waves near a bifurcation point, is sufficient to show that secondary bifurcations occur.
\begin{figure}
\caption{A possible bifurcation diagram in the space $(t_1,t_2,\lambda_1)$, when $\lambda_2$ is fixed. The dashed curves correspond to the two branches of ``special'' solutions on the planes $t_1=0$ and $t_2=0$, and the solid curve gives the secondary branch of ``general'', symmetry-breaking solutions.}
\label{fig 1}
\end{figure}
\begin{remark*} A more detailed geometrical description of the bifurcating sheets depends on the coefficients in Taylor series arising in the bifurcation equations, and to calculate their values can be very complicated. For example, suppose that we want to draw a picture of the solution set when one of the parameters, $\lambda_2$ say, is fixed, and assume that \eqref{help} has the form \begin{align*} f(\lambda_1) + (At_1 ^2+Bt_1t_2 +Ct_2^2) -\lambda_2 &=0,\\ g(\lambda_1) + (\alpha t_1 ^2+\beta t_1t_2 +\gamma t_2^2) -\lambda_2 &=0, \end{align*} where $f$ and $g$ are smooth functions and $A,\,\alpha,\,B,\,\beta,\,C,\,\gamma$ are constants. Suppose also that \[ f'(\lambda_1^*) \neq g'(\lambda_1^*). \] Then it is clear from the implicit function theorem that locally \[ \lambda_1 = \Lambda \big( (A-\alpha)t_1^2 + (B-\beta)t_1t_2 + (C-\gamma)t_2^2 \big), \] for some function $\Lambda$ with $\Lambda'(0)\neq 0$. Therefore, for fixed $\lambda_2$ close to $\lambda_2^*$ the solution set is given locally by the level set of the function \[ f \big( \Lambda \big( (A-\alpha)t_1^2 + (B-\beta)t_1t_2 + (C-\gamma)t_2^2 \big) \big) + At_1^2 + Bt_1t_2 + Ct_2^2. \] The complexity of the dependence of this set on $A,\,\alpha,\,B,\,\beta,C,\,\gamma,\,f$ and $g$ is evident. These quantities in turn depend, in an explicit but highly non-trivial way, on the elastic properties of the membrane. \qed\end{remark*}
\begin{remark*} In the case when $\max\{k,l\} / \min\{k,l\} \in {\mathbb Z}$, analysis is still possible, but the details are yet more complicated. \qed\end{remark*}
\begin{remark*} In the present work solutions will be found for values of parameters that are not covered by the maximization argument in \cite{Toland-heavy}. Indeed, a convexity hypothesis of the form $E_{11}(\nu,\mu)>c^2\rho=\lambda_1$ was crucial in the existence proof of \cite{Toland-heavy}, whereas both primary and secondary bifurcations occur here provided only that $\lambda_1 \neq E_{11}$, see Lemmas \ref{lemma:exists one} and \ref{lemma:kl}. \qed\end{remark*}
\section{Mathematical formulation} Suppose that in the free-boundary problem \eqref{classical problem}, the \emph{shape of $\mathcal{S}$} is known. Then $\psi$ is given by the unique solution to (\ref{classical problem}\,d,e,f). Thus the kinetic and potential energies of the fluid are determined solely by the shape of $\mathcal{S}$. On the other hand, the elastic and gravitational potential energy of the membrane are determined by the positions of the material points $(x,0)$ in the deformed membrane. To deal with this distinction, and ultimately to prove in Section \ref{further} that only the shape matters, suppose that the shape of $\mathcal{S}$ is given by a parametrization \[ \mathcal{S}=\{\varrho(\tau): \tau \in {\mathbb R}\}, \ \ \text{where} \ \varrho (\tau+2\pi) = (2\pi,0) + \varrho(\tau). \] Then, as in \cite{Toland-heavy}, we seek $\mathbf R$ for travelling waves in the form \begin{equation}\label{chii} \mathbf R(x,t):\,= (c_0 t,0) + \varrho(\chi(x -ct)), \end{equation} where $\chi:{\mathbb R} \to {\mathbb R}$ is a diffeomorphism with $\chi (s+2\pi) = 2\pi + \chi(s)$. As before, $s$ is the steady travelling-wave Lagrangian coordinate, the unknowns are $\varrho(\tau)$ and $\chi(s)$, and $\boldsymbol{r}(s) = \varrho(\chi(s))$, so that $\mathcal{S} = \{\boldsymbol{r}(s):s \in {\mathbb R}\}$. To develop this approach, we recall the class of parametrizations in \cite{Toland-heavy}, specially tailored for this problem.
Let $w$ be a $2\pi$-periodic real-valued function with second derivative locally square-integrable on ${\mathbb R}$, and let $\mathcal{C}$ denote its Hilbert transform. Then \begin{equation}\label{rho} \varrho(w)(\tau) := (-\tau-\mathcal{C} w(\tau), \, w(\tau)), \quad \tau \in {\mathbb R}, \end{equation} is a $2\pi$-periodic curve in the plane. Thus $w$ is the unknown that describes the wave shape. The other unknown is the stretch of the reference membrane. To describe it we follow \cite{Baldi-Toland-var} by introducing diffeomorphisms $\kappa(\tau)$ ($\kappa = \chi^{-1}$ in \eqref{chii}) of the interval $[0,2\pi]$ such that $\chi(0) = 0$ and $\chi(2\pi)=2\pi$. Then if the material point $s$ of the membrane in the reference configuration is \begin{equation} \label{x=chi(tau)} s = \kappa(\tau) \quad \forall\, \tau \in {\mathbb R}, \end{equation} its position $\boldsymbol{r}(s)$ after deformation is \[ \boldsymbol{r}(s) = \varrho(w)(\tau), \] and the stretch of the membrane is given in terms of $\kappa$ and $w$ by \[
\nu(s) = \frac{|\varrho(w)'(\tau)|}{\kappa'(\tau)} = \frac{\Omega(w)(\tau)}{\kappa'(\tau)} \,, \] where \[ \Omega (w)(\tau) = \sqrt{w'(\tau)^2+ (1+\mathcal{C} w'(\tau))^2}. \] Also, since $\boldsymbol{r}(s) = \varrho(w)(\tau)$, there is a useful formula for the curvature, \[ \widehat\sigma(\boldsymbol{r}(s)) = \sigma (w)(\tau) = - \frac{1}{\Omega(w)(\tau)}\,\, \mathcal{C}\Big(\frac{ \Omega (w)'(\tau)}{\Omega(w)(\tau)} \Big). \] Roughly speaking, given a profile parametrized by \eqref{rho}, the diffeomorphisms \eqref{x=chi(tau)} describe the family of all \emph{physical deformations $(s,0) \mapsto \boldsymbol{r}(s)$ of the reference state} which produce the same profile $\varrho(w)$. Note that the parametrization of a curve and the stretch of the material in it are independent. It will be convenient later to replace $\kappa$ with the new unknown \[ \xi(\tau) := \kappa'(\tau) - 1. \] Then $\xi$ has zero mean, $$ \frac{1}{2\pi}\int_{0}^{2\p} \xi(\tau) \,d\tau=0, $$ because $\kappa(0) = 0$ and $\kappa(2\pi)=2\pi$. In \cite{Toland-heavy} it is shown how solutions to this hydroelastic wave problem with a heavy membrane corresponds to critical points of a Lagrangian which, in terms of the unknown $(w,\xi)$, is written \begin{align} \label{J} J(w,\xi) := {} &
\frac{c_0^2}{2} \int_{0}^{2\p} w \mathcal{C} w' \, d\tau \, - \frac{g}{2} \int_{0}^{2\p} w^2 (1 + \mathcal{C} w') \, d\tau \\ & - \int_{0}^{2\p} (1 + \xi) \, E \Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w)\, \sigma(w)}{1+\xi} \Big) \, d\tau \, + \frac{c^2 \rho}{2} \int_{0}^{2\p} \frac{\Omega(w)^2}{1+\xi}\,d\tau \notag \\ & - g \rho \int_{0}^{2\p} w (1+\xi) \, d\tau. \notag \end{align} The first term on the right in \eqref{J} is the fluid's kinetic energy in one period, relative to the moving frame; the second term, with its minus sign, is the change in gravitational potential energy of the same body of fluid relative to a uniform flow; the third is minus the elastic energy of one period of the deformed membrane; the fourth is plus the kinetic energy of the membrane; the fifth term is minus the gravitational potential energy of one period of the membrane. (For the derivation, see the discussion leading to \cite[(2.18)]{Toland-heavy}.)
This will be the starting point for this analysis. Local bifurcation theory will be used to give a complete description of all small-amplitude $2\pi$-periodic waves represented by critical points of $J$ close to a bifurcation point.
\section{The equations} Define the notation $(E - \nabla E \cdot)(\nu,\mu)$ by \[ (E - \nabla E \cdot)(\nu,\mu) := E (\nu,\mu) - \nu\, E_1 (\nu,\mu) - \mu\, E_2 (\nu,\mu). \] Suppose that $w$ and $\xi$ are small in an appropriate norm and that $J$ is differentiable at $(w,\xi)$. Then the partial derivative with respect to $\xi$ in a direction $\eta$, where $[\eta]=0$, is \[ d_\xi J(w,\xi)\, \eta = - \int_{0}^{2\p} \eta \, \Big\{ (E - \nabla E \cdot)\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \, \, + \frac{c^2 \rho}{2}\, \Big( \frac{\Omega(w)}{1+\xi}\Big)^2 + \, g \rho w \Big\} \, d\tau. \] Let $\mathcal{L}[w]$, depending on $w$, be the linear operator defined in \cite[Section 4.5]{Baldi-Toland-var} by \[ \mathcal{L}[w](u) := \frac{ w' u + (1+\mathcal{C} w')\, \mathcal{C} u}{\Omega(w)^2}\,, \] with the property that \begin{equation} \label{dirs} d_w \Omega(w) h = \Omega(w) \, \mathcal{L}[w](h'), \quad d_w \big(\Omega(w) \sigma(w) \big) h = - \mathcal{C} \big(\mathcal{L}[w](h')\big)'. \end{equation} Then the partial derivative of $J(w,\xi)$ with respect to $w$ in the direction $h$ is \begin{align*} d_w J(w,\xi) h \, &= \int_{0}^{2\p} h \, \Big\{ c_0^2 \, \mathcal{C} w' - g w (1+\mathcal{C} w') - g \, \mathcal{C}(w w') - g \rho (1+\xi) \Big\}\, d\tau \\
&\quad + \int_{0}^{2\p} \Big\{ \frac{c^2 \rho \,\Omega(w)}{1+\xi}\, - E_1\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \Big\} \, \Omega(w) \, \mathcal{L}[w](h') \, d\tau \, \notag\\&\quad + \int_{0}^{2\p} E_2\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \, \big( \mathcal{C} \mathcal{L}[w](h') \big)' \, d\tau. \end{align*} Now, suppose that $(w,\xi)$ is a critical point of $J$. Define the projection \[ \mathcal{P} u := u - \frac{1}{2\pi}\int_{0}^{2\p} u(\tau) \,d\tau, \] for all $2\pi$-periodic, locally integrable functions $u$. Then $\mathcal{P} u$ has zero mean on $[0,2\pi]$. Note that the operator $\mathcal{L}[w]$ is independent of the mean of $w$. A simple calculation shows that $d_w J(w,\xi)=0$ if and only if \[ \frac{1}{2\pi} \int_{0}^{2\p} (w + w\,\mathcal{C} w')\,d\tau + \rho = 0 \quad \text{and} \quad d_w J(w,\xi) \mathcal{P} h = 0 \quad \forall\, h. \] Hence it suffices to study the equation $dJ_0(w,\xi) =0$, where $J_0$ is defined by \begin{gather*} J_0 (w,\xi) := J(w,\xi) + g\pi\,\Big( \frac{1}{2\pi} \int_{0}^{2\p} w \mathcal{C} w'\,d\tau + \rho \Big)^2, \\ \intertext{over a class of functions satisfying} \int_{0}^{2\p} w(\tau) \,d\tau=\int_{0}^{2\p} \xi(\tau) \,d\tau=0, \end{gather*} because then $dJ_0(w,\xi)=0$ implies that $dJ(w^*,\xi)=0$, where \[ w^* := w - \frac{1}{2\pi}\int_{0}^{2\p} w\,\mathcal{C} w'\,d\tau -\rho. \] From a calculation similar to that for $J$, partial derivatives of $J_0$ are given by \begin{equation} \label{dxi J0} d_\xi J_0(w,\xi) \,\eta =
- \int_{0}^{2\p} \eta \, \Big\{ (E - \nabla E \cdot)\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \, \, + \frac{c^2 \rho}{2}\, \Big( \frac{\Omega(w)}{1+\xi}\Big)^2 + \, g \rho w \Big\} \, d\tau \end{equation} and \begin{align} \notag d_w J_0 (w,\xi) h\, = {} & \int_{0}^{2\p} h \, \Big\{ c_0^2 \, \mathcal{C} w' - g w (1+\mathcal{C} w') - g \, \mathcal{C}(w w') - g \rho (1+\xi) \Big\}\, d\tau \notag \\\notag & + \int_{0}^{2\p} \Big( c^2 \rho \, \frac{\Omega(w)}{1+\xi}\, - E_1\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \Big) \, \Omega(w) \, \mathcal{L}[w](h') \, d\tau \, \\&+ \int_{0}^{2\p} E_2\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \, \mathcal{C} (\mathcal{L}[w](h'))' \, d\tau \notag \\ & + \, 2g \, \Big( \frac{1}{2\pi} \int_{0}^{2\p} w\mathcal{C} w'\,d\tau + \rho \Big) \, \int_{0}^{2\p} h\, \mathcal{C} w' \, d\tau. \label{dw J0} \end{align} In \cite[Sections 4.1 \& 4.2]{Baldi-Toland-var} the membrane density $\rho$ is zero. However, the calculations there are easily extended to take account of the extra terms here which involve $\rho > 0$. To proceed, we adapt the notation from \cite{Baldi-Toland-var} to the case of positive $\rho$.
For $u$ with zero mean, let \[ \nabla I_0 := \Big( c_0^2 + \frac g\pi\int_{0}^{2\p} w \mathcal{C} w'\,d\tau + 2g\rho \Big) \, \mathcal{C} w' - g w (1+\mathcal{C} w') -g \, \mathcal{C} (w w') - g \rho (1+\xi). \] With the $L^2$-adjoint of the inverse operator $\mathcal{L}[w]^{-1}$ given by \[ (\mathcal{L}[w]^{-1})^* (u) = w' u + \mathcal{C} ((1+\mathcal{C} w')u), \] let \begin{equation} \label{m0} m_0 := (\mathcal{L}[w]^{-1})^* \Big( \int_0^\tau \mathcal{P}(\nabla I_0) \, dt \Big). \end{equation} Then, as in \cite{Baldi-Toland-var}, the equations for critical points of $J_0$ can be written as follows: \begin{subequations} \label{P Euler both} \begin{equation} \label{P Euler xi} \mathcal{P} \Big\{ (E - \nabla E \cdot)\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \, + \frac{c^2 \rho}{2}\, \Big( \frac{\Omega(w)}{1+\xi}\Big)^2 \Big\} + g \rho w = 0, \end{equation} \begin{multline} \label{P Euler w} \mathcal{P} E_2\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) + \mathcal{C} \Big\{ \int_0^\tau \mathcal{P} \Big( m_0 + \Omega(w) E_1\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \\ - c^2 \rho\, \frac{\Omega(w)^2}{1+\xi}\Big) \, dt \Big\} = 0. \end{multline} \end{subequations} Note that $(w,\xi)=(0,0)$ solves \eqref{P Euler both} for all values of $c,c_0,g,\rho$.
The free-boundary problem in Section \ref{fbp} is for symmetric waves, so we can simplify matters by studying the bifurcation problem in spaces of even functions. This is what we do subsequently. Note that, if $w$ and $\xi$ are even functions, then $\mathcal{C} w'$, $\Omega(w)$, $\sigma(w)$ and $1+\xi$ are also even. \begin{lemma} \label{lemma:even} Suppose that $(w,\xi)$ are even functions such that \begin{equation} \label{critical pt} d J_0 (w,\xi)\,(h,\eta) = 0 \end{equation} for all even $(h,\eta)$ with sufficient regularity. Then \eqref{critical pt} holds for all $(h,\eta)$. \end{lemma}
\begin{proof} Since every $2\pi$-periodic function is the sum of even and odd functions, by the hypothesis it suffices to observe that \eqref{critical pt} holds for $(h,\eta)$ odd when $w,\,\xi$ are even. This follows by \eqref{dxi J0} and \eqref{dw J0}, since $\mathcal{L}[w](h')$ is odd for odd $h$. \end{proof}
\section{A further simplification} \label{further} In this section we use the implicit function theorem to show that, for solutions of \eqref{P Euler xi}, $\xi$ is a function of $(w,\lambda_1)$ near $w=\xi=0$. This means that the stretch variable $\xi$ can be eliminated and the problem reduced to one for the unknown shape which is given by $w$. For $k \in {\mathbb N}$, let $H^k_0$ denote the space of real-valued, $2\pi$-periodic, even, zero-mean functions, with $k$th weak derivative locally square-integrable. For $r>0$, let $B_r(X)$ denote the open ball of radius $r$ centred at the origin in a Banach space $X$. Fix $r>0$ such that \[ \frac12\, \leq \Omega(w) \leq 2, \quad
|\sigma(w)| \leq 1, \quad
|\xi| \leq \frac12 \] for all $(w,\xi)\in B_r(H^3_0) \times B_r(H^1_0)$. Then a map $M \colon B_r(H^3_0) \times B_r(H^1_0) \times (0,+\infty) \to H^1_0$ may be defined by \[ M(w,\xi,\lambda_1) := \mathcal{P} \Big\{ (E - \nabla E \cdot)\Big( \frac{\Omega(w)}{1+\xi}\,, \frac{\Omega(w) \sigma(w)}{1+\xi} \Big) \, + \frac{\lambda_1}{2}\, \Big( \frac{\Omega(w)}{1+\xi}\Big)^2 \Big\} + g \rho w, \] because $E$ is smooth, and $\Omega(w)/(1+\xi)$ and $\Omega(w) \sigma(w) / (1+\xi)$ are bounded functions. Thus, the Euler equation \eqref{P Euler xi} may be written as \[ M(w,\xi,\lambda_1) = 0. \] Moreover, $M(0,0,\lambda_1)=0$ for all $\lambda_1$. We note that \[ M \in C^\infty \big( B_r(H^3_0) \times B_r(H^1_0) \times (0,+\infty), \, H^1_0\big), \] and, when $(w,\xi) = (0,0)$, \begin{equation} \label{facts at 0} \Omega(0) \equiv 1, \quad \sigma(0) \equiv 0, \quad d\Omega(0)h = \mathcal{C} h', \quad d(\Omega \sigma)(0) h = h''. \end{equation} Since $ \mathcal{L}[0] = (\mathcal{L}[0]^{-1})^* = \mathcal{C}$, it follows from H\ref{hyp:E=0 and convex} and \eqref{dirs} that \begin{align} & d_w M(0,0,\lambda_1)\,h = - (E_{11} - \lambda_1) \, \mathcal{C} h' + g \rho h, \label{dw M} \\ & d_\xi M(0,0,\lambda_1)\, \eta = (E_{11} - \lambda_1) \eta, \label{dxi M} \\ & d_{\lambda_1} M(0,0,\lambda_1) = 0, \label{dc M} \end{align} where for convenience we have written $E_{11}$ instead of $E_{11}(1,0)$.
\begin{lemma} \label{lemma:IFT xi(w,c)} When $\widehat \lambda_1 \neq E_{11}$, there exist a neighbourhood $\mathcal{W}$ of $(0,\widehat\lambda_1)$ in $H^3_0 \times (0,+\infty)$ and a map $\overline \xi \in C^\infty(\mathcal{W}, B_r(H^1_0))$ such that \[ M(w,\overline \xi(w,\lambda_1),\lambda_1) = 0 \] for all $(w,\lambda_1) \in \mathcal{W}$, and, if $M(w,\xi,\lambda_1) = 0$ with $(w,\lambda_1) \in \mathcal{W}$, then $\xi=\overline \xi(w,\lambda_1)$. Moreover, for all $\lambda_1 \neq E_{11}$, \begin{align} & \overline \xi(0,\lambda_1) = 0, \label{xi(0,c)=0} \\ & d_w \overline \xi(0,\lambda_1)\,h = \, \mathcal{C} h' + \frac{g\rho}{\lambda_1 - E_{11}}\, h. \label{dw xi} \end{align} \end{lemma}
\begin{proof} To obtain the existence of $\overline \xi$, apply the implicit function theorem using \eqref{dxi M}. The required formulae then follow by the chain rule and \eqref{dw M} and \eqref{dc M}. \end{proof}
Because of this, all solutions $(w,\xi,\lambda)$ of \eqref{P Euler both} with $(w,\lambda_1) \in \mathcal{W}$ are solutions of \eqref{P Euler w} of the form $(w,\overline \xi(w,\lambda_1),\lambda)$. Let $\overline m (w,\lambda)$ denote $m_0$ (see \eqref{m0}) when $\xi = \overline \xi(w,\lambda_1)$, and let \[ e_i(w,\lambda_1) := E_i \Big( \frac{\Omega(w)}{1+\overline \xi(w,\lambda_1)}\,,\, \frac{\Omega(w)\sigma(w)}{1+\overline \xi(w,\lambda_1)} \Big), \quad i = 1,2. \] Then on the set $\mathcal{D}:= \{(w,\lambda): (w,\lambda_1) \in \mathcal{W},~ \lambda_2 > 0 \}$, define the function \[ F(w,\lambda) := \,\mathcal{P} e_2(w,\lambda_1) + \mathcal{C} \Big\{ \int_0^\tau \mathcal{P} \Big( \overline m(w,\lambda) + \Omega(w)e_1(w,\lambda_1)- \frac{\lambda_1\, \Omega(w)^2}{1+\overline \xi(w,\lambda_1)}\Big)\, dt \Big\}, \] so that the system \eqref{P Euler both}, for $(w,\lambda_1) \in \mathcal{W}$ and $\lambda_1 \neq E_{11}$, becomes \begin{equation} \label{eq:F=0} F(w,\lambda) = 0,\quad (w,\lambda) \in \mathcal{D}. \end{equation}
\section{The linearized equation} \label{sec:linearized} Recall that $F(0,\lambda)=0$ for all $\lambda$ with $\lambda_1 \neq E_{11}$ and that $F \in C^\infty(\mathcal{D},\,H^1_0)$. We now calculate its partial derivatives. When $w=0$, \eqref{facts at 0} holds, $\overline \xi=0$ by \eqref{xi(0,c)=0} and, as a consequence, $\mathcal{P}(\nabla I_0)=0$. Hence, by \eqref{dw xi}, \[ d_w \overline m(0,\lambda)\,h = -(\lambda_2 + g\rho)\,h + \Big( \frac{(g\rho)^2}{E_{11}-\lambda_1}\, - g \Big) \, \mathcal{C} \Big( \int_0^\tau h(t)\,dt \Big), \] and, by H2, \begin{align} \label{dF} d_w F(0,\lambda)\,h = & {} \ E_{22} h'' + \lambda_1\, h - \lambda_2 \, \mathcal{C} \Big( \int_0^\tau h(t)\,dt \Big) \\ & + \Big( g + \frac{(g\rho)^2}{\lambda_1-E_{11}} \Big)\, \mathcal{P} \Big( \int_0^\tau \mathcal{P} \Big(\int_0^t h(s)\,ds \Big) \, dt \Big), \notag \end{align} where, as above, $E_{ii} = E_{ii}(1,0)$, $i=1,\,2$.
Now suppose that $h \in H^3_0\setminus \{0\}$ and that $d_w F(0,\lambda)h=0$. Then $h \in H^5_0$, and differentiating the equality $d_w F(0,\lambda)\,h=0$ twice with respect to $\tau$ yields \begin{equation} \label{pinotto} E_{22} h'''' + \lambda_1 h'' - \lambda_2 \,\mathcal{C} h' + \Big( g + \frac{(g\rho)^2}{\lambda_1-E_{11}} \Big) \, h = 0. \end{equation} It is easy to see that \eqref{pinotto} has a non-constant even solution $h$ if and only if \begin{equation} \label{integer eq} E_{22} k^4 - \lambda_1 k^2 - \lambda_2 k + g + \frac{(g\rho)^2}{\lambda_1-E_{11}}\, = 0, \end{equation} for some positive integer $k$. Since $E_{22}(1,0)$ is assumed to be positive, for every fixed $\lambda_1,\lambda_2 > 0$, \eqref{integer eq} possesses at most two positive integer solutions. (This follows by noting that the graph of the quartic curve $x \mapsto E_{22} x^4 - \lambda_1 x^2$ on the half-plane $\{ x \geq 1\}$ intersects any straight line with slope $\lambda_2$ at most twice.)
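As an aside (not part of the original argument), the claim above is easy to test numerically: for fixed parameters one may count the real roots $x\geq 1$ of the quartic $E_{22}x^4-\lambda_1 x^2-\lambda_2 x+g+(g\rho)^2/(\lambda_1-E_{11})$, of which the positive integer solutions of \eqref{integer eq} are a subset. The following short Python sketch does this for illustrative parameter values (partly borrowed from the caption of figure \ref{fig 2}); the specific numbers are arbitrary.
\begin{verbatim}
import numpy as np

# Count the roots x >= 1 of  E22*x^4 - lam1*x^2 - lam2*x + C = 0,
# where C = g + (g*rho)^2/(lam1 - E11), for sample parameter values.
g, grho, E11, E22 = 9.81, 1.0, 4.0, 1.0   # as in the caption of figure 2
lam1, lam2 = 5.0, 20.0                    # arbitrary point with lam1 != E11
C = g + grho**2 / (lam1 - E11)
roots = np.roots([E22, 0.0, -lam1, -lam2, C])
admissible = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 1.0]
print(len(admissible), sorted(admissible))  # at most two such roots, as claimed
\end{verbatim}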
\subsection{Non-trivial kernel} \label{ntk} Let $g,\rho, E_{11}, E_{22}>0$ be fixed. Then $\lambda_1,\lambda_2>0$, $\lambda_1 - E_{11}\neq 0$ and \eqref{integer eq} holds for some integer $k \geq 1$ if and only if \begin{equation} \label{pol geq 0} 0 < \lambda_2 k\, = E_{22} k^4 - \lambda_1 k^2 + g + \frac{(g\rho)^2}{\lambda_1 - E_{11}}\, = \frac{p_k(\lambda_1)}{E_{11} - \lambda_1} \quad \text{and} \quad \lambda_1 > 0, \end{equation} where \[ p_k(X) := k^2 X^2 - \big(E_{22}k^4 + E_{11} k^2 + g\big) \, X + E_{11} \big(E_{22} k^4 + g\big) - (g\rho)^2. \] The discriminant of $p_k$ is \[ \Delta(p_k) = \big(E_{11} k^2 - E_{22} k^4 - g\big)^2 + \big(2g\rho k\big)^2 \,> 0, \] and, since $p_k(E_{11})= -(g\rho)^2$, its roots \[ X_k^{\pm} := \frac{E_{11} k^2 + E_{22} k^4 + g \pm \sqrt{\Delta(p_k)}}{2k^2} \] satisfy \[ X_k^- < E_{11} < X_k^+ . \] Moreover $X_k^- > 0$ if and only if $E_{11} (E_{22} k^4 + g) > g^2 \rho^2$. It follows that \eqref{pol geq 0} holds if and only if \[ \lambda_1 \in (0,X_k^-) \cup (E_{11},X_k^+), \quad \lambda_2 = f_k(\lambda_1), \] where $(0,X_k^-)$ is meant to be empty if $X_k^- \leq 0$, and \[ f_k(\lambda_1) := E_{22} \,k^3 - \lambda_1 k + \frac{1}{k} \Big( g + \frac{(g\rho)^2}{\lambda_1-E_{11}}\Big). \] $X_k^- \to E_{11}$ and $X_k^+ \to +\infty$ as $k \to +\infty$. Thus, for every $\lambda_1 \neq E_{11}$, there exists an integer $\bar k = \bar k(\lambda_1)$ such that $\lambda_1 \in (0,X_k^-) \cup (E_{11},X_k^+)$ for all $k \geq \bar k$. Therefore, for every $k \geq \bar k$, \eqref{integer eq} holds with $\lambda_2 = f_k(\lambda_1)$, and we have proved the following lemma.
\begin{lemma} \label{lemma:exists one} Let $g,\rho,E_{11}$ and $E_{22}$ be fixed, positive constants. For every fixed $\lambda_1 \neq E_{11}$, the parameters $\lambda_2$ for which the linearized operator $d_w F(0,\lambda_1,\lambda_2)$ has a non-trivial kernel form a sequence $\{\lambda_2^{(k)} = f_k(\lambda_1): k \geq \bar k(\lambda_1)\}$, with \[ \lambda_2^{(k)} = f_k(\lambda_1) \to +\infty \quad \text{as} \ k \to \infty. \] \end{lemma}
Thus, for every $g, \rho, E_{11}, E_{22}>0$ there exists a set $\mathcal{A}$ formed by infinitely many curves $\mathcal{A}_k$ in the parameter quadrant $\{ (\lambda_1, \lambda_2) : \lambda_1 > 0$, $\lambda_2>0\}$, \[ \mathcal{A} = \bigcup_{k \in {\mathbb N}} \mathcal{A}_k, \quad \mathcal{A}_k = \big\{ (\lambda_1, \lambda_2) : \lambda_2 = f_k(\lambda_1) \big\} \cap \big\{\lambda_1 >0, \ \lambda_2 >0\big\} \] (see figure \ref{fig 2}), such that the kernel of the linearized operator $d_w F(0,\lambda)$ is nontrivial if and only if $\lambda \in \mathcal{A}$. Note that there is no restriction on $\lambda_1$ except that $\lambda_1\neq E_{11}$.
\begin{figure}
\caption{Plots of the curves $\mathcal{A}_k$, $k=1,\ldots,7$, when $g=9.81$, $g\rho=1$, $E_{11}=4$ and $E_{22}=1$, first in the region $3.96 < \lambda_1 < 4.10$ and $0 < \lambda_2 < 330$, with two different scales for the two axes, and then in the region $0 < \lambda_1, \lambda_2 < 30$, with the same scale for $\lambda_1$ and $\lambda_2$.}
\label{fig 2}
\end{figure}
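The data behind figure \ref{fig 2} can be reproduced directly from the formula for $f_k$; the following sketch (purely illustrative, using the parameter values quoted in the caption) samples the curves $\mathcal{A}_k$ for $k=1,\ldots,7$ on the branch $\lambda_1>E_{11}$.
\begin{verbatim}
import numpy as np

# Sample the curves A_k : lam2 = f_k(lam1) for k = 1,...,7, with
# g = 9.81, g*rho = 1, E11 = 4, E22 = 1 as in the caption of figure 2.
g, grho, E11, E22 = 9.81, 1.0, 4.0, 1.0

def f(k, lam1):
    return E22 * k**3 - lam1 * k + (g + grho**2 / (lam1 - E11)) / k

lam1_grid = np.linspace(4.005, 4.10, 20)   # branch to the right of E11
for k in range(1, 8):
    print(k, np.round(f(k, lam1_grid), 1))
# Plotting f(k, lam1_grid) against lam1_grid reproduces (qualitatively)
# the first panel of the figure.
\end{verbatim}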
\subsection{Double eigenvalues} The kernel of $d_w F(0,\lambda)$ is two-dimensional if and only if \eqref{integer eq} has two positive integer solutions $k \neq l$, namely the curves $\mathcal{A}_k$ and $\mathcal{A}_l$ cross at $\lambda$. Now, $k \neq l$ solve \eqref{integer eq} if and only if \begin{equation} \label{kl inter} \lambda_2 = h_{k,l}(\lambda_1) := (k+l) \big(E_{22}(k^2+l^2) - \lambda_1\big) \quad \text{and} \quad q_{k,l}(\lambda_1)=0, \end{equation} where \begin{align*} q_{k,l}(X) := {} & kl X^2 - \big( E_{11} kl + E_{22} kl (k^2 + kl + l^2) - g \big) X \\ & + E_{11} E_{22} kl (k^2 + kl + l^2) - E_{11} g + (g\rho)^2 . \notag \end{align*} For all $kl$ sufficiently large, the discriminant of $q_{k,l}$, \[ \Delta(q_{k,l}) = \big( E_{11} kl - E_{22} kl (k^2 + kl + l^2) + g \big)^2 - 4 kl (g\rho)^2, \] is positive, and the roots $X_{k,l}^-$ and $X_{k,l}^+$ of $q_{k,l}$ are both greater than $E_{11}$.
Since $h_{k,l}(X_{k,l}^+) < 0$ for all $kl$ sufficiently large, there are at most finitely many solutions $\lambda$ of \eqref{kl inter} with $\lambda_1 = X_{k,l}^+$ and $\lambda_2>0$. On the other hand, $h_{k,l}(X_{k,l}^-) > 0$ for all $kl$ sufficiently large, and \[ X_{k,l}^- \to E_{11}, \quad h_{k,l}(X_{k,l}^-) \to +\infty \] as $kl \to +\infty$. Thus, we have proved the following lemma.
\begin{lemma} \label{lemma:kl} Let $g,\rho,E_{11}$ and $E_{22}$ be fixed, positive constants. The parameters $\lambda$ for which the linearized operator $d_w F(0,\lambda)$ has a two-dimensional kernel form a sequence $\lambda^{(n)} = (\lambda^{(n)}_1, \lambda^{(n)}_2)$, with \[ \lambda^{(n)}_1 \searrow E_{11}, \quad \lambda^{(n)}_2 \to + \infty \quad \text{as}\ n \to \infty. \] \end{lemma}
\begin{remark*} Double eigenvalues with $\lambda_1 < E_{11}$ are possible, provided we assume some additional hypotheses on $E$, namely \[ E_{22} (kl)^2 < g < E_{22} kl (k^2 + kl + l^2) \] and $E_{11}$ sufficiently large. In any case, they are at most finitely many. \qed \end{remark*}
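To make the preceding discussion concrete, a crossing of two curves $\mathcal{A}_k$, $\mathcal{A}_l$ can be located numerically from \eqref{kl inter}. The sketch below is only an illustration: the pair $(k,l)=(2,3)$ and the parameter values are arbitrary choices. It computes the roots of $q_{k,l}$, evaluates $h_{k,l}$ there, and verifies that \eqref{integer eq} then holds for both $k$ and $l$.
\begin{verbatim}
import numpy as np

# Double eigenvalues via (kl inter): roots of q_{k,l} and lam2 = h_{k,l}(lam1).
g, grho, E11, E22 = 9.81, 1.0, 4.0, 1.0
k, l = 2, 3
S = k**2 + k*l + l**2
q = [k*l, -(E11*k*l + E22*k*l*S - g), E11*E22*k*l*S - E11*g + grho**2]
for lam1 in sorted(np.roots(q).real):
    lam2 = (k + l) * (E22 * (k**2 + l**2) - lam1)   # h_{k,l}(lam1)
    C = g + grho**2 / (lam1 - E11)
    res = max(abs(E22*j**4 - lam1*j**2 - lam2*j + C) for j in (k, l))
    print(round(lam1, 4), round(lam2, 4), res)
# Both residuals vanish up to rounding; the crossing is admissible only at
# the root with lam2 > 0 (here the smaller root of q_{k,l}).
\end{verbatim}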
\section{Lyapunov-Schmidt reduction} We turn now to study the bifurcation of solutions of \eqref{eq:F=0}.
Recall that throughout we are dealing with $2\pi$-periodic functions $w$ of zero mean. Suppose that $\lambda^* = (\lambda_1^*, \lambda_2^*) \in \mathcal{A}$, $\lambda_1^* \neq E_{11}$. Then the kernel, \[ V := \mathrm{Ker}\, d_wF(0,\lambda^*) \subset H^3_0, \] of the linearized operator, is a subspace of dimension 1 or 2, depending on the number of integer solutions of \eqref{integer eq}, and the range \[ R := \mathrm{Range}\, d_wF(0,\lambda^*) \subset H^1_0 \] is orthogonal to $V$ with respect to the $L^2(0,2\pi)$ scalar product, namely \[ H^1_0 = V \oplus R, \quad H^3_0 = V \oplus (R \cap H^3_0). \] In fact, it is evident from \eqref{dF} that $d_w F(0,\lambda)$ is a diagonal operator with respect to the basis of even $2\pi$-periodic functions $\{ \cos (j\tau) \colon j=1,2,\ldots \}$, for all $\lambda$.
Following the classical Lyapunov-Schmidt decomposition, we write \[ w = v + y, \quad v \in V, \quad y \in R \cap H^3_0, \] and denote by $\Pi_V$ and $\Pi_R$ the projections onto $V$ and $R$, respectively. The equation $F(w,\lambda)=0$ is then equivalent to the system \begin{align} \label{Lyap-Schmidt} \begin{cases} \Pi_V \,F(v+y, \lambda) = 0 & \text{(bifurcation equation)},\\ \Pi_R \,F(v+y, \lambda) = 0 & \text{(auxiliary equation)}. \end{cases} \end{align}
\begin{lemma}[Auxiliary equation] \label{lemma:aux} There is a neighbourhood $\mathcal{U}$ of $(0,\lambda^*)$ in $V \times {\mathbb R}^2$, a neighbourhood $U$ of $0$ in $R \cap H^3_0$ and a function $\overline y \in C^\infty(\mathcal{U}, U)$ such that \[\Pi_R \,F(v + \overline y(v,\lambda), \lambda) = 0 \] for all $(v,\lambda) \in \mathcal{U}$, and, if $\Pi_R \,F(v + y,\lambda) = 0$ with $y \in U$ and $(v,\lambda) \in \mathcal{U}$, then $y = \overline y(v,\lambda)$. Moreover, for all $(0,\lambda) \in \mathcal{U}$, \begin{equation} \label{oy} \overline y(0,\lambda)=0, \quad d_v \overline y(0,\lambda)=0, \quad d_{\lambda_i} \overline y(0,\lambda) = 0, \quad i=1,2, \end{equation} and there exists a constant $C>0$ such that \[
\| \overline y(v,\lambda) \|_{H^3} \leq C \| v \|_{H^3}^2 \] for all $(v,\lambda) \in \mathcal{U}$. \end{lemma}
\begin{proof} Apply the implicit function theorem, and note that, for all $\lambda$, $d_w F(0,\lambda)$ is diagonal in the basis $\{ \cos j\tau \}$, $j=1,2,\ldots$, therefore $\Pi_R \, d_w F(0,\lambda)v = 0$ for all $v \in V$, for all $\lambda$. \end{proof}
In this way, the bifurcation problem for the equation \eqref{eq:F=0} has been reduced to \begin{equation} \label{bif eq} \Pi_V \,F(v + \overline y(v,\lambda), \lambda) = 0, \end{equation} with $(v,\lambda) \in \mathcal{U} \subset V \times {\mathbb R}^2$.
\section{Sheets bifurcating from a simple eigenvalue} \label{simple}
Here we study the elementary case in which equation \eqref{integer eq} has a unique positive integer solution, so that the linearized operator \eqref{dF} has a 1-dimensional kernel. By the discussion in Section \ref{sec:linearized}, this is the case for parameter values $(\lambda_1^*,\lambda_2^*)$ on the union of countably many curves with countably many points removed.
Hence suppose that there exists a unique integer $k \geq 1$ that satisfies \eqref{integer eq} for $\lambda^* = (\lambda^*_1, \lambda^*_2)$. Then \[ V := \mathrm{Ker}\, d_wF(0,\lambda^*) = \big\{ t \cos (k \tau) \colon t \in {\mathbb R} \big\} \] and, by \eqref{bif eq}, the system \eqref{Lyap-Schmidt} is equivalent to the problem \begin{equation} \label{biff} \Phi(t,\lambda) := \Pi_V \,F \big( t \cos(k\tau) + \overline y(t \cos(k\tau),\lambda),\, \lambda \big) = 0. \end{equation} $\Phi$ is a smooth real valued map of three real variables $(t,\lambda_1,\lambda_2)$, defined on a neighbourhood of $(0,\lambda_1^*,\lambda_2^*)$. Since $F(0,\lambda)=0$ for all $\lambda$, from Lemma \ref{lemma:aux} it follows that \[ \Phi(0,\lambda)=0 \quad \text{for all } (0,\lambda) \in \mathcal{U}. \] Also, using \eqref{oy} and the orthogonality of $V$ and $R$, \begin{equation} \label{da Phi=0} \partial_t \Phi(0,\lambda^*) = 0. \end{equation} To find solutions of \eqref{biff} with $t \neq 0$ we invoke the implicit function theorem in the usual analytic approach to bifurcation problems.
\begin{theorem} \label{thm:simple bif} Suppose that there exists a unique integer $k \geq 1$ that satisfies \eqref{integer eq} for $\lambda^* = (\lambda^*_1, \lambda^*_2)$, with $\lambda_1^* \neq E_{11}$. Then there exist neighbourhoods $\mathcal{U}_1$ of $(0,\lambda_1^*)$ in ${\mathbb R}^2$ and $U_1$ of $\lambda_2^*$ in ${\mathbb R}$, and a map $\overline \lm_2 \in C^\infty(\mathcal{U}_1, U_1)$, with $\overline \lm_2(0,\lambda_1^*)=\lambda_2^*$, such that \[ \Phi(t,\lambda_1,\overline \lm_2(t,\lambda_1)) = 0 \quad \text{for all } (t,\lambda_1) \in \mathcal{U}_1, \] and, if $\Phi(t,\lambda_1,\lambda_2)=0$, with $(t,\lambda_1) \in \mathcal{U}_1$, $t \neq 0$ and $\lambda_2 \in U_1$, then $\lambda_2 = \overline \lm_2(t,\lambda_1)$. As a consequence, \[ F \big( w(t,\lambda_1), \lambda_1, \overline \lm_2(t,\lambda_1) \big)=0, \] where \[ w(t,\lambda_1) := t \cos(k\tau) + \overline y \big( t\cos(k\tau), \lambda_1, \overline \lm_2(t,\lambda_1)\big) = t \cos(k\tau) + O(t^2). \] \end{theorem}
\begin{proof} First, we prove that \begin{equation} \label{transversality} \partial^2_{t,\lambda_2} \Phi (0,\lambda^*) \neq 0. \end{equation} By \eqref{oy}, \[ \partial_t \Phi(0,\lambda) = \Pi_V \, d_w F(0,\lambda)\, \big( 1+ d_v \overline y(0,\lambda) \big) \cos(k \tau) = \Pi_V \, d_w F(0,\lambda)\,
\cos(k \tau), \] and \[ \partial_{t,\lambda_2}^2 \Phi(0,\lambda^*) = \Pi_V \, d^2_{w,\lambda_2} F(0,\lambda^*)\,\cos(k\tau). \] By \eqref{dF}, \[ d_w F(0,\lambda) \cos(k\tau) = \Big( - k^2 E_{22} + \lambda_1 + \frac{\lambda_2}{k}
- \Big( g + \frac{(g\rho)^2}{\lambda_1 - E_{11}} \Big)\,\frac{1}{k^2}\, \Big) \cos(k\tau), \] whence \[ \Pi_V \, d^2_{w,\lambda_2} F(0,\lambda^*) \cos(k\tau) = \frac{1}{k}\, > 0, \] and \eqref{transversality} is proved. Since \[ \Phi(t,\lambda) = \int_0^t (\partial_t \Phi) (x,\lambda)\,dx \, = \,t \int_0^1 (\partial_t \Phi) (xt,\lambda)\,dx, \] it follows that $\Phi(t,\lambda) =0$, with $t \neq 0$, if and only if $\varphi(t,\lambda)=0$, where \[ \varphi(t,\lambda) := \int_0^1 (\partial_t \Phi)(xt,\lambda)\,dx. \] From the smoothness of $\Phi$ it follows that $\varphi$ is also smooth. By \eqref{da Phi=0}, $\varphi(0,\lambda^*)=0$. Moreover, $\partial_{\lambda_2}\varphi(0,\lambda^*) \neq 0$ by \eqref{transversality}. The result now follows from the implicit function theorem. \end{proof}
\begin{remark*} Since \[ \Pi_V \, d^2_{w,\lambda_1} F(0,\lambda^*) \cos(k\tau) = 1 + \Big( \frac{g\rho}{k(\lambda_1^* - E_{11})} \Big)^2 > 0, \] the role of $\lambda_1$ and $\lambda_2$ can be swapped. \qed \end{remark*}
\section{Bifurcation from a double eigenvalue} \label{double}
We have observed that for any $(\lambda_1,\lambda_2)$ there are at most two positive integer solutions, $k,\,l,$ of \eqref{integer eq}, and this happens only if $f_k(\lambda_1) = f_l(\lambda_1) = \lambda_2$. Suppose that there are indeed two such solutions, $k$ and $l$, with \begin{equation} \label{non-res} \frac{\max\{k,l\} }{\min\{k,l\} }\, \notin {\mathbb Z}. \end{equation} Let $Z_k$ be the closure of $\mathrm{span}\, \{ \cos (jk\tau): j \in {\mathbb N}\}$ in $L^2(0,2\pi)$, and similarly for $Z_l$. Now note that if one seeks waves with minimal period $2\pi/k$ or $2\pi/l$, the original bifurcation problem \eqref{eq:F=0} may be specialized to a problem on $Z_k$ or $Z_l$ and the reduced problem \eqref{bif eq} is similarly restricted to $Z_k$ or $Z_l$. In each of these restricted settings separately, only one solution, $k$ or $l$, of \eqref{integer eq} is relevant, and
there is a simple eigenvalue from which a curve of solutions in $Z_k\cap H_0^3$ or $Z_l\cap H_0^3$ bifurcates, exactly as in the preceding section. However, we will now show that other solutions that are neither in $Z_k$ nor $Z_l$ bifurcate at $\lambda^*$ when $k$ and $l$ are solutions of \eqref{integer eq} with $\lambda= \lambda^*$ and \eqref{non-res} holds.
In this case the kernel of the linearized problem is two-dimensional, \[ V := \mathrm{Ker}\, d_wF(0,\lambda^*) = \big\{ t_1\cos (k \tau) + t_2 \cos (l\tau) :\, (t_1,t_2) \in {\mathbb R}^2 \big\}, \] and the bifurcation problem \eqref{bif eq} is \begin{equation} \label{Phi=0} \Phi (t_1,t_2,\lambda) = 0, \quad \lambda = (\lambda_1,\lambda_2), \end{equation} where \[ \Phi (t_1,t_2,\lambda) := \Pi_V F( v + \overline y(v,\lambda), \lambda), \quad v = t_1 \cos(k\tau) + t_2 \cos (l\tau). \] Let $\Phi_k \cos (k\tau) := \Pi_k \Phi$ and $\Phi_l \cos (l\tau):= \Pi_l \Phi $, where $\Pi_k$ and $\Pi_l$ denote the projections onto $\mathrm{span}\, \{ \cos(k\tau) \}$ and $\mathrm{span}\, \{ \cos(l\tau) \}$, respectively. Thus \eqref{Phi=0} becomes \begin{align*} \Phi_k (t_1,t_2,\lambda_1,\lambda_2) & = 0, \\ \Phi_l (t_1,t_2,\lambda_1,\lambda_2) & = 0, \end{align*} a system of two equations in four unknowns which is satisfied by $(0,0,\lambda_1,\lambda_2)$ for all $\lambda$. The key to our result is the following observation.
Suppose that $t_1=0$, and $v=t_2 \cos(l\tau)$, $t_2 \in {\mathbb R}$. Then an application of Lemma \ref{lemma:aux} in the subspace $Z_l$ of $2\pi/l$-periodic functions yields that $\overline y(v,\lambda) \in Z_l \cap R$, because of the local uniqueness in the implicit function theorem. Hence $v+\overline y(v,\lambda)$ is $2\pi/l$-periodic, therefore $F(v+\overline y(v,\lambda), \lambda)$ is also $2\pi/l$-periodic. As a consequence, \begin{equation} \label{inv observ k} \Phi_k(0,t_2,\lambda) = 0 \ \text{ for all } t_2,\lambda. \end{equation} For the same reason, \begin{equation} \label{inv observ l} \Phi_l(t_1,0,\lambda) = 0 \ \text{ for all } t_1,\lambda. \end{equation} We now require the non-degeneracy condition \begin{equation} \label{nondeg} \Big( \frac{g\rho}{\lambda_1^* - E_{11}} \Big)^2 \neq kl. \end{equation}
\begin{remark*} It is easily checked that condition \eqref{nondeg} is equivalent to the geometrical assumption that $f_k'(\lambda_1^*) \neq f_l'(\lambda_1^*)$. In other words, the curves $\mathcal{A}_k$ and $\mathcal{A}_l$ are not tangential at their intersection point $\lambda^*$. \qed \end{remark*}
\begin{theorem} \label{thm:secondary bif} Suppose that there exist two integers $k,l$ that satisfy \eqref{integer eq} for $\lambda^* = (\lambda^*_1, \lambda^*_2)$ where $\lambda_1^* \neq E_{11}$, and \eqref{non-res} and \eqref{nondeg} hold. Then there exist neighbourhoods $\mathcal{U}_2$ of the origin and $U_2$ of $\lambda^*$ in ${\mathbb R}^2$, and functions $\overline \lm(t_1,t_2) = (\overline \lm_1(t_1,t_2), \overline \lm_2(t_1,t_2))$, $\overline \lm \in C^\infty(\mathcal{U}_2, U_2)$, with $\overline \lm(0,0) = \lambda^*$, such that \[ \Phi(t_1,t_2,\overline \lm(t_1,t_2)) = 0 \quad \text{for all } (t_1,t_2) \in \mathcal{U}_2, \] and, if $\Phi(t_1,t_2,\lambda)=0$ with $(t_1,t_2) \in \mathcal{U}_2\setminus \{(0,0)\}$ and $\lambda \in U_2$, then $\lambda = \overline \lm(t_1,t_2)$. As a consequence, \[ F \big( w(t_1,t_2), \overline \lm(t_1,t_2) \big)=0, \] where \begin{align*} w(t_1,t_2) := {} & \, t_1 \cos(k\tau) + t_2 \cos(l\tau) + \overline y \big( t_1\cos(k\tau) + t_2 \cos(l\tau),\, \overline \lm(t_1,t_2) \big) \\ = {} & \, t_1 \cos(k\tau) + t_2 \cos(l\tau) + O(t_1^2 + t_2^2). \end{align*} \end{theorem}
\begin{proof} Let $\Psi := (\Psi_k, \Psi_l)$, \[ \Psi_k(t_1,t_2,\lambda) := \int_0^1 (\partial_{t_1} \Phi_k) (xt_1,t_2,\lambda)\,dx, \quad \ \Psi_l(t_1,t_2,\lambda) := \int_0^1 (\partial_{t_2} \Phi_l) (t_1,xt_2,\lambda)\,dx. \] $\Psi_k$ and $\Psi_l$ are smooth, and, by \eqref{inv observ k} and \eqref{inv observ l}, $\Phi_k = t_1 \Psi_k$ and $\Phi_l = t_2 \Psi_l$, so that every zero of $\Psi$ is a zero of $\Phi$. Moreover, since \begin{align} \label{good k} \Psi_k(0,0,\lambda) & = \Pi_k \, d_w F(0,\lambda) \cos (k\tau) \\ & = - E_{22} k^2 + \lambda_1 + \frac{\lambda_2}{k}\, - \Big( g + \frac{(g\rho)^2}{\lambda_1 - E_{11}} \Big) \frac{1}{k^2}, \notag \end{align} and, analogously, \begin{equation} \label{good l} \Psi_l(0,0,\lambda) = - E_{22} l^2 + \lambda_1 + \frac{\lambda_2}{l}\, - \Big( g + \frac{(g\rho)^2}{\lambda_1 - E_{11}} \Big) \frac{1}{l^2}, \end{equation} it follows that \[ \Psi(0,0,\lambda^*) = 0. \] To apply the implicit function theorem to $\Psi$ at the point $(0,0,\lambda^*)$, it is sufficient to prove that the 2$\times$2 matrix representing the linear map $\partial_\lambda \Psi(0,0,\lambda^*)$ is invertible. Now, differentiating \eqref{good k} and \eqref{good l} with respect to $\lambda_1$ and $\lambda_2$, \[ \det \big(\partial_\lambda \Psi(0,0,\lambda^*) \big) = \Big\{ \Big( \frac{g\rho}{\lambda_1^* - E_{11}} \Big)^2 - kl \Big\} \Big( \frac1k - \frac1l \Big) \frac1{kl}, \] which is nonzero by \eqref{nondeg}. \end{proof}
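The determinant computation at the end of the proof can be checked symbolically. The following sketch (an independent verification, not part of the proof) uses the computer algebra package sympy; all symbols are the ones used in the text.
\begin{verbatim}
import sympy as sp

# Symbolic check of det(d_lambda Psi(0,0,lambda*)) against the closed form.
l1, l2, k, l, g, rho, E11, E22 = sp.symbols('l1 l2 k l g rho E11 E22')

def Psi0(j):   # Psi_j(0,0,lambda) for j = k or l, as in (good k), (good l)
    return -E22*j**2 + l1 + l2/j - (g + (g*rho)**2/(l1 - E11))/j**2

J = sp.Matrix([[sp.diff(Psi0(k), l1), sp.diff(Psi0(k), l2)],
               [sp.diff(Psi0(l), l1), sp.diff(Psi0(l), l2)]])
claimed = ((g*rho/(l1 - E11))**2 - k*l) * (1/k - 1/l) / (k*l)
print(sp.simplify(J.det() - claimed))   # prints 0
\end{verbatim}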
\begin{remark*} By the definition of $\Psi$, for $(t_1,t_2) \in \mathcal{U}_2$, with $t_1\neq0$ and $t_2\neq0$, Theorem \ref{thm:secondary bif} gives solutions of problem \eqref{Phi=0} which belong neither to $Z_k$ nor to $Z_l$, as stated above. \qed \end{remark*}
\textbf{Acknowledgements.} JFT acknowledges the support of a Royal Society/Wolfson Research Merit Award. PB is supported by the European Research Council under FP7 and the Italian PRIN \emph{Variational methods and nonlinear differential equations}. The main part of this paper was written while PB was supported by the UK EPSRC at the University of Bath. The authors are grateful to M.C.W. Jones for the reference to the work of Bohr mentioned in the Introduction.
\end{document}
\begin{document}
\maketitle
\begin{abstract} A \emph{framework} is a graph and a map from its vertices to $\mathbb{R}^d$. A framework is called \emph{universally rigid} if there is no other framework with the same graph and edge lengths in $\mathbb{R}^{d'}$ for \emph{any} ${d'}$. A \emph{framework attachment} is a framework constructed by joining two frameworks on a subset of vertices. We consider an attachment of two universally rigid frameworks that are in general position in $\mathbb{R}^d$. We show that the number of vertices in the overlap between the two frameworks must be sufficiently large in order for the attachment to remain universally rigid.
Furthermore, it is shown that universal rigidity of such frameworks is preserved even after removing certain edges. Given positive semidefinite stress matrices for each of the two initial frameworks, we analytically derive the PSD stress matrices for the combined and edge-reduced frameworks. One of the benefits of the results is that they provide a general method for generating new universally rigid frameworks. \end{abstract}
\begin{section}{Introduction}\label{sec:intro}
A \emph{framework} in $\mathbb{R}^d$ is a graph embedded in Euclidean $d$-dimensional space.
The graph vertices are assigned point coordinates, the edges are represented as line segments connecting corresponding points, and the edge lengths are determined as the distances between the edge-connected points in $\mathbb{R}^d$. One of the important questions in the geometry of frameworks is whether there are other frameworks with the same edge lengths. When searching for such frameworks we want to exclude the trivial, shape-preserving Euclidean transformations, such as translations, rotations and reflections, applied to frameworks already found. Thus different frameworks will differ in the pairwise distances between some pairs of vertices that are not connected by an edge.
In case there are no different frameworks in $\mathbb{R}^d$, the original framework is called \emph{globally rigid}. Furthermore, if there are no different frameworks in any space of higher dimension, the framework is called \emph{universally rigid}.
Connelly first studied universal rigidity in the context of Cauchy polygons \cite{cn}. Later, in \cite{cnch2}, he stated and proved a sufficient condition for an arbitrary framework to be universally rigid. The condition is formulated in terms of \emph{stresses} acting on the edges, and it states that a framework in $\mathbb{R}^d$ is universally rigid if there exists a positive semidefinite stress matrix of nullity $d+1$ (see below for definitions). Recently, Gortler and Thurston \cite{gt} proved that this condition is also necessary for \emph{generic} frameworks, in which the vertices' coordinates are algebraically independent over the rationals. Alfakih \cite{ak} then established sufficiency of the condition for the wider class of frameworks that are in \emph{general position}, such that no subset of $d+1$ vertices is affinely dependent. In \cite{gt}, Gortler and Thurston also suggested an algorithm for searching for such PSD stress matrices based on semidefinite programming. (See \cite{pat}, \cite{LLR} for related work, and \cite{SBoyd} for a general overview of semidefinite programming.)
Although universally rigid frameworks are now characterized by the above conditions, not many results are known about operations on frameworks that preserve universal rigidity, nor are there many known examples of systematically generated universally rigid frameworks. In \cite{cn}, Connelly generated larger Cauchy polygons inductively from smaller ones. Another known framework extension method that preserves universal rigidity is $(d+1)$-lateration \cite{trilat}\cite{aklat}, which is also too specific and not efficient in terms of the number of edges added to the original framework.
In this paper we consider a new problem of characterizing the universal rigidity of attachments of two universally rigid frameworks in general position in $\mathbb{R}^d$. By attachment we mean a combined framework in which the two frameworks share some vertices and in which all edges are preserved (see Figure~\ref{fig:attach3D}). We prove that an attachment is universally rigid if and only if the number of shared vertices (at which the two frameworks attach) is $d+1$ or greater (see Theorem \ref{thm:OmegaMain}). We also show that removing those edges connecting shared vertices that are inherited from only one of the attached frameworks preserves universal rigidity of the attachment. Finally, we derive PSD stress matrices of nullity $d+1$ for both original and edge-reduced attachments explicitly, from the PSD stress matrices that are given for the two joined frameworks.
Thus, similar to frameworks, we can characterize the framework attachments by the stress matrices. Moreover, the stress matrices for universally rigid framework attachments have the same properties as the stress matrices for universally rigid frameworks, and computing such matrices does not involve computational search. These results provide a more general method than $(d+1)$-lateration \cite{aklat} for generating new universally rigid frameworks from two or more arbitrary universally rigid frameworks. It can also be used to analyze complex frameworks by decomposing them into smaller attached frameworks for which universal rigidity is known (or can be more easily established). The results may have a potential application to structural engineering and molecular biology.
The rest of the paper is organized as follows. Definitions and preliminaries are given in Section \ref{sec:bkgnd}. In Section \ref{sec:Main} we formulate and prove the main result, which is a necessary and sufficient condition for an attachment to be universally rigid. We also prove universal rigidity of the edge-reduced framework attachment. Section \ref{sec:EdgRedStressMat} contains the derivation of PSD stress matrices for each of the two attachments. In Section \ref{sec:Apps} we show that the edge-reduced framework attachment is a generalization of $(d+1)$-lateration as a method for creating new universally rigid frameworks possessing PSD stress matrices of nullity $d+1$. We also show that an edge-reduced graph attachment of two graphs whose frameworks are universally rigid in any general position always has a PSD stress matrix of nullity $d+1$. \begin{figure}
\caption{Attachment of two universally rigid frameworks in $\mathbb{R}^3$. Vertices $1,2,3$ and $4$ are shared by the two frameworks. Compared to $G_{A}$, graph $G_{B}$ has an additional edge $\{1,3\}$ between the shared vertices. By Theorem \ref{thm:EdgeReducedMain}, removing this edge from the attachment $G$ preserves its universal rigidity.}
\label{fig:attach3D}
\end{figure}
\end{section}
\begin{section}{Background}\label{sec:bkgnd}
A \emph{graph} $G=(V,E)$ is a finite set of vertices $V=(1,2,\ldots,v)$ together with a set of edges $E$. The set $E$ can be represented as a set of two-element subsets $\{i,j\}$ of $V$. We denote the number of graph vertices and edges by $v=|V|$ and $e=|E|$, respectively.
A graph is called \emph{$k$-connected} if at least $k$ vertices must be removed in order to disconnect it; equivalently, it remains connected after the removal of any set of fewer than $k$ vertices. Given two graphs $G_{A}=(V_{A},E_{A})$ and $G_{B}=(V_{B},E_{B})$ such that, after appropriate labeling of the vertices by integers, $V_{A}\cap V_{B}\neq\emptyset$, we can construct a \emph{graph attachment} $G=(V,E)$ where $V=V_{A}\cup V_{B}$ and $E=E_{A}\cup E_{B}$ (Figure~\ref{fig:attach3D}). Throughout the paper, we assume that the set $V_{A}\cap V_{B}$ of shared vertices is a proper subset of both $V_{A}$ and $V_{B}$.
A \emph{configuration} of graph $G=(V,E)$ in $\mathbb{R}^d$ is a mapping of $V$ to $\mathbb{R}^d$. A configuration can also be represented as a single point $\mathbf{p}=(\mathbf{p}_1, \ldots ,\mathbf{p}_v) \in \mathbb{R}^d \times \cdots \times \mathbb{R}^d = \mathbb{R}^{vd}$, where $\mathbf{p}_i$ is the coordinate of vertex $i$ in $\mathbb{R}^d$. A configuration is called \emph{generic} if its coordinates (in $\mathbb{R}^{vd}$) are algebraically independent over $\mathbb{Q}$. A configuration is said to be in \emph{general position} if every subset $\{\mathbf{p}_{i_{1}}, \ldots ,\mathbf{p}_{i_{d+1}}\}$ of $d+1$ vertices is affinely independent. Two configurations $\mathbf{p}$ and $\mathbf{q}$ are \emph{congruent}, or $\mathbf{p} \cong \mathbf{q}$, if $\mathbf{q}=R\mathbf{p}$, where $R$ is an element of the group Euc($d$) of rigid motions in $\mathbb{R}^d$ including translations and rotations. A graph $G$ together with its configuration $\mathbf{p}$ is called a \emph{framework} in $\mathbb{R}^d$, denoted by $G(\mathbf{p})$. Two frameworks $G(\mathbf{p})$ and $G(\mathbf{q})$ are said to be \emph{equivalent} if whenever $\{i,j\}$ is an edge of $G$, then $\|\mathbf{p}_{i}-\mathbf{p}_{j}\|=\|\mathbf{q}_{i}-\mathbf{q}_{j}\|$. A \emph{framework attachment} is a framework of the graph attachment defined above. Alternatively, if we are given two frameworks both having subsets of vertices with the same pairwise distances, the two subsets of vertices can be made to coincide in space by applying a rigid motion to one of the frameworks. A framework attachment is created by merging some or all pairs of the coinciding vertices, such that all edges from both frameworks are preserved. There may be more than one attachment constructed from the two frameworks. While the remainder of this section covers single graphs and related notions, we will treat framework attachments in detail in the next sections.
\begin{comment} Alternatively, if we are given two frameworks both having subsets of vertices with the same pairwise distances, then by applying appropriate rigid motion to one of the frameworks, a framework attachment is created such that the above vertices are coinciding in space (there may be more than one attachment constructed), while all edges are preserved. While the remainder of this section covers single graphs and related notions, we will treat framework attachments in detail in the next sections. \end{comment} A framework $G(\mathbf{p})$ in $\mathbb{R}^d$ is called \emph{globally rigid} if whenever $G(\mathbf{q})$ is equivalent to $G(\mathbf{p})$ then $\mathbf{p} \cong \mathbf{q}$. A weaker notion than global rigidity is local rigidity. A framework $G(\mathbf{p})$ in $\mathbb{R}^d$ is called \emph{locally rigid} if there is $\epsilon>0$ such that whenever $G(\mathbf{q})$ is equivalent to $G(\mathbf{p})$ \emph{and} $\mathbf{q}\in B_{\epsilon}(\mathbf{p})$, then $\mathbf{p} \cong \mathbf{q}$. In other words, a locally rigid framework is globally rigid in some $\epsilon$-neighborhood. By definition, global rigidity implies local rigidity.
A framework $G(\mathbf{p})$ in $\mathbb{R}^d$ is called \emph{universally rigid} if whenever $G(\mathbf{q})$ in $\mathbb{R}^{d'}$ is equivalent to $G(\mathbf{p})$ and $d'\geq d$, then $\mathbf{p} \cong \mathbf{q}$. A graph is called \emph{universally rigid} in $\mathbb{R}^d$ if any of its frameworks in general position in $\mathbb{R}^d$ is universally rigid. Universal rigidity implies global rigidity.
A necessary condition for global rigidity of a framework is given by the following variation of the Hendrickson's theorem \cite{hn}, which holds for frameworks in general position rather than generic frameworks. \begin{theorem} \label{thm:globconn} A globally rigid framework in general position in $\mathbb{R}^{d}$ is $(d+1)$-connected. \end{theorem} \begin{proof} See \cite{hn}, where the same argument for generic frameworks applies to frameworks in general position as well. \begin{comment} The proof is almost identical to the one given in \cite{hn}. By definition, frameworks in general position have no more that $d$ vertices that lie in a subspace of dimension $d-1$. Therefore, only a separating set of $k\leq d$ vertices can act as a mirror for partial reflection. Any set of $k$ vertices span $k-1$-dimensional subspace that does not include any other vertex. The subspace can be extended to a $d-1$ dimensional subspace that also contains only these $k$ vertices. If the $k$ vertices form a separating set, the framework allows non-trivial partial reflection, contradicting global rigidity of the framework. Therefore, no $d$ or fewer vertices form a separating set in globally rigid framework in $\mathbb{R}^{d}$, which implies the framework must be $(d+1)$-connected. \end{comment} \end{proof}
For a given graph $G=(V,E)$ with $v$ vertices, $e$ edges and dimension $d$, we define the \emph{edge function} $f: \mathbb{R}^{vd} \to \mathbb{R}^{e}$ as a function mapping graph configuration in $\mathbb{R}^{vd}$ to its edge-square-length in $G(\mathbf{p})$ (after some ordering of $e$ edges):
$f(\mathbf{p})=f(\mathbf{p}_1, \ldots ,\mathbf{p}_v)=(\ldots ,\frac{1}{2}\|\mathbf{p}_i-\mathbf{p}_j\|^2,\ldots)$. The $e \times vd$ Jacobian matrix corresponding to the edge function is called \emph{rigidity matrix}, denoted by $df$:
\begin{equation}\label{eqn:dfdef}
df= \left( \begin{array}{ccccccccccc} \cdot & \cdots & \cdot & \cdots & \cdot & \cdots & \cdot & \cdots & \cdot & \cdots & \cdot \\ 0 & \cdots & 0 & \mathbf{p}_i-\mathbf{p}_j & 0 & \cdots & 0 & \mathbf{p}_j-\mathbf{p}_i & 0 & \cdots & 0 \\ \cdot & \cdots & \cdot & \cdots & \cdot & \cdots & \cdot & \cdots & \cdot & \cdots & \cdot \\
\end{array} \right).
\end{equation}
Given the rigidity matrix $df$, a framework is \emph{infinitesimally rigid} if $\rank (df)=vd-\binom{d+1}{2}$. Equivalently, infinitesimal rigidity of a framework can be characterized by its infinitesimal flexes. A vector $\mathbf{q}\in\mathbb{R}^{vd}$ is called an \emph{infinitesimal flex} of the framework $G(\mathbf{p})$ if for any edge $\{i,j\}$, the scalar product $(\mathbf{p}_i-\mathbf{p}_j,\mathbf{q}_i-\mathbf{q}_j)=0$. A vector $\mathbf{q}$ is called a \emph{trivial infinitesimal flex} of the framework $G(\mathbf{p})$ if there exists a differentiable path $\mathbf{h}(t)$ of rigid motions in $\mathbb{R}^{d}$ such that $\mathbf{h}(0)=\mathbf{p}$ and $\mathbf{h}'(0)=\mathbf{q}$. Along the path all pairwise distances (and edge lengths in particular) remain constant, therefore evaluating the derivative of $(\mathbf{h}_{i}(t)-\mathbf{h}_{j}(t))^{2}$ at $t=0$ shows that such $\mathbf{q}$ satisfies the scalar product equation above. A framework $G(\mathbf{p})$ is then called infinitesimally rigid if every infinitesimal flex of $G(\mathbf{p})$ is trivial.
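As an illustration of these definitions (not taken from the paper), the rigidity matrix \eqref{eqn:dfdef} and the rank test for infinitesimal rigidity are easy to set up numerically. The following Python sketch does so for a triangle in the plane; the configuration is an arbitrary choice.
\begin{verbatim}
import numpy as np

# Rigidity matrix df for a small framework, and the rank test
# rank(df) = v*d - binom(d+1,2) for infinitesimal rigidity.
d = 2
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.3, 0.8]])   # a triangle in R^2
E = [(0, 1), (1, 2), (0, 2)]
v = len(p)
df = np.zeros((len(E), v * d))
for row, (i, j) in enumerate(E):
    df[row, i*d:(i+1)*d] = p[i] - p[j]
    df[row, j*d:(j+1)*d] = p[j] - p[i]
print(np.linalg.matrix_rank(df), v*d - (d+1)*d//2)   # equal: infinitesimally rigid
\end{verbatim}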
\begin{comment} \begin{theorem} [Asimow-Roth \cite{ar}]\label{thm:infloc} A generic framework in $\mathbb{R}^{d}$ with at least $d+1$ vertices is locally rigid if and only if it is infinitesimally rigid. \end{theorem} \begin{corollary} []\label{cor:globrank} The rigidity matrix of generic globally rigid framework in $\mathbb{R}^{d}$ has rank $vd-\binom{d+1}{2}$. \end{corollary} \end{comment}
The next notion is key for the main results of the paper. For each edge $\{i,j\}$ of graph $G$ we define a scalar $w_{ij}=w_{ji}$, and have them arranged into a vector $w=(\ldots ,w_{ij},\ldots) \in \mathbb{R}^{e}$. We say that vector $w$ is an \emph{equilibrium stress} (or simply \emph{stress} in this paper), if the following vector equation holds for each vertex $i$:
\begin{equation}\label{eqn:stresseq} \sum_{j:\{i,j\}\in E}w_{ij}(\mathbf{p}_i-\mathbf{p}_j)=0. \end{equation} By writing the above as a system of $vd$ scalar equations and then forming into a matrix form, we arrive at the following relation between stresses and the rigidity matrix:
\begin{proposition}\label{prop:stressdf} The space of stresses equals $\ker(df^T)$. \end{proposition}
For each element $w=(\ldots ,w_{ij},\ldots)$ of the space of stresses, there corresponds a $v\times v$ symmetric \emph{stress matrix} $\Omega$ defined such that for $i,j \in V$ and $i\neq j$, $\Omega_{ij}=w_{ij}$ for $\{i,j\}\in E$ and $\Omega_{ij}=0$ for $\{i,j\}\notin E$. The diagonal entries are defined such that the rows and columns sum to zero: $\Omega_{ii}=-\sum_{j\neq i}\Omega_{ij}$. An equivalent definition of the stress matrix, which will be used here, is the following. \begin{definition}\label{dfn:defStressMat} A stress matrix of a framework is a $v\times v$ matrix $\Omega$ such that: \begin{enumerate} \item for $i,j\in V$, $\Omega_{i,j}=\Omega_{j,i}$; \item for $i,j\in V$, $i\neq j$ and $\{i,j\}\notin E$, $\Omega_{i,j}=0$; \item $\sum_{j\in V}\Omega_{i,j}=0$ for all $i\in V$; \item $\sum_{j\in V}\Omega_{i,j}\cdot\mathbf{p}_j=0$ for all $i\in V$;
\end{enumerate} \end{definition} Properties (1)--(3) follow directly from the construction of stress matrix described above. In addition, $\sum_{j}\Omega_{i,j}\cdot\mathbf{p}_j=\sum_{j\neq i}\Omega_{i,j}\cdot\mathbf{p}_j+ \Omega_{i,i}\cdot\mathbf{p}_i=\sum_{j\neq i}\Omega_{i,j}\cdot\mathbf{p}_j-\sum_{j\neq i}\Omega_{i,j}\cdot\mathbf{p}_i= \sum_{}w_{ij}(\mathbf{p}_j-\mathbf{p}_i)=0$, which verifies (4).
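To illustrate Definition \ref{dfn:defStressMat} and Proposition \ref{prop:stressdf} (this is only an illustration; the configuration below is an arbitrary general-position example), a stress can be computed numerically as a null vector of $df^T$ and assembled into the corresponding stress matrix:
\begin{verbatim}
import numpy as np

# A stress for K4 in the plane from ker(df^T), assembled into a stress
# matrix Omega, followed by a check of properties (3) and (4).
d = 2
p = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [0.6, 0.7]])
E = [(i, j) for i in range(4) for j in range(i + 1, 4)]   # all edges of K4
v = len(p)
df = np.zeros((len(E), v * d))
for row, (i, j) in enumerate(E):
    df[row, i*d:(i+1)*d] = p[i] - p[j]
    df[row, j*d:(j+1)*d] = p[j] - p[i]
w = np.linalg.svd(df.T)[2][-1]        # null vector of df^T: a stress
Omega = np.zeros((v, v))
for row, (i, j) in enumerate(E):
    Omega[i, j] = Omega[j, i] = w[row]
np.fill_diagonal(Omega, -Omega.sum(axis=1))
print(np.allclose(Omega.sum(axis=1), 0), np.allclose(Omega @ p, 0))
\end{verbatim}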
The following two theorems establish characterization of universal rigidity by the stress matrix. \begin{theorem} [Connelly \cite{cnch2}, Gortler-Thurston \cite{gt}]\label{thm:UR} A generic framework $G(\mathbf{p})$ in $\mathbb{R}^{d}$, having at least $d+2$ vertices, is universally rigid if and only if it has a positive semidefinite stress matrix with nullity $d+1$. \end{theorem} \begin{theorem} [Alfakih \cite{ak}]\label{thm:AK} Let $G$ be a framework in general position in $\mathbb{R}^{d}$, having at least $d+2$ vertices. If there exists a positive semidefinite stress matrix of $G$ of nullity $d+1$, then $G$ is universally rigid. \end{theorem} \begin{comment} The following are the \textbf{Properties} of stress matrices.
\begin{enumerate} \item for $i,j\in V$, $\Omega_{i,j}=\Omega_{j,i}$; \item for $i,j\in V$, $i\neq j$ and $\{i,j\}\notin E$, $\Omega_{i,j}=0$; \item $\sum_{j\in V}\Omega_{i,j}=0$ for all $i\in V$; \item $\sum_{j\in V}\Omega_{i,j}\cdot\mathbf{p}_j=0$ for all $i\in V$;
\end{enumerate} \end{comment} We will conclude this section by developing some properties of the stress matrices that will be needed later. \begin{lemma}\label{lma:p1matrix} Let $A$ be a $d\times v$ matrix having $\mathbf{p}_1, \ldots ,\mathbf{p}_v$ as its columns. Assume a matrix $B$ is obtained from $A$ by appending a row $(1,\ldots ,1)$. Then $\mathbf{p}$ is in general position if and only if every $(d+1)\times (d+1)$ submatrix of $B$ has full rank. \end{lemma} \begin{proof} If such a submatrix, having columns $\mathbf{p}_{i_{k}}$, $k=1,\ldots ,d+1$, is not full rank, then $\sum{\alpha_{k}}\left(\begin{array}{c}\mathbf{p}_{i_{k}}\\1\end{array}\right)=0$ for some non-trivial $\{\alpha_{k}\}$. But this holds if and only if $\sum{\alpha_{k}\mathbf{p}_{i_{k}}}=0$ and $\sum{\alpha_{k}}=0$, i.e. when $\mathbf{p}_{i_{k}}$ are affinely dependent. \begin{comment} and hence $\sum{\alpha_{i}}=0$ for some non-trivial $\{\alpha_{i}\}$. But then for some $\alpha_{j}\neq 0$, we would have $\sum_{i}{\alpha_{i}}\left(\begin{array}{c}\mathbf{p}_i\\1\end{array}\right)=\sum_{i}{\alpha_{i}}\left(\begin{array}{c}\mathbf{p}_i\\1\end{array}\right) - \sum_{i}{\alpha_{i}}\left(\begin{array}{c}\mathbf{p}_j\\1\end{array}\right)=\sum_{i}{\alpha_{i}}\left(\begin{array}{c}\mathbf{p}_i-\mathbf{p}_j\\0\end{array}\right)=0$, in contradiction to the assumption that $\mathbf{p}_i$ are affinely independent.
\end{comment} \end{proof} \begin{corollary}\label{cor:genposopen} The set of configurations in general position is an open set. \end{corollary}
\begin{lemma}\label{lma:projker} If $\Omega$ is a PSD stress matrix of nullity $d+1$, corresponding to a framework $G(\mathbf{p})$ in general position, then the coordinate projections of $\mathbf{p}$ and the vector $(1,\ldots ,1)^T$ are a basis for $\ker(\Omega)$. \end{lemma} \begin{proof} We note that the equation in (4) of Definition \ref{dfn:defStressMat} is a vector equation, and so it is satisfied for each of the coordinate projections. Vector $(1,\ldots ,1)^T$ satisfies the same equation by (3). Taking $d$ coordinate projections' column vectors, vector $(1,\ldots ,1)^T$ and forming into a matrix, we get a matrix $B$ (as in Lemma \ref{lma:p1matrix}) transposed, which is of full rank. Therefore the coordinate projections and vector $(1,\ldots ,1)^T$ are $d+1$ linearly independent vectors in $\ker(\Omega)$, and hence they are the basis. \end{proof} \begin{comment} The next straightforward observation will be of use in the next section. \begin{proposition}\label{prop:Omega2stress} Any matrix satisfying Properties (1)--(4) above is a stress matrix, corresponding to a stress vector satisfying equilibrium equation (\ref{eqn:stresseq}). \end{proposition} \end{comment} In the following lemma we assume a framework in general position has a PSD stress matrix of nullity $d+1$. \begin{lemma}\label{lma:OmegaDiag} For a framework in general position, its PSD stress matrix of nullity $d+1$ can be diagonalized by an invertible matrix whose first $d+1$ columns are coordinate projections of $\mathbf{p}$ and a vector $(1,\ldots ,1)^T$. \end{lemma} \begin{proof} Let $\Omega$ be the matrix. Since $\Omega$ is symmetric, $\Omega=U\Lambda U^T$ or, $\Omega U=U\Lambda$, where $U$ is a unitary matrix of eigenvectors corresponding to eigenvalues of $\Omega$ arranged on the diagonal of $\Lambda$ \cite{hj}. Because $\Omega$ is a PSD matrix of nullity $d+1$, $\Lambda$ has $d+1$ zeros and $v-d-1$ positive values on its diagonal. By re-ordering columns of $U$ and corresponding diagonal entries of $\Lambda$ we make the first $d+1$ columns of $U$ to be the eigenvectors corresponding to 0-eigenvalues which are the first $d+1$ entries of the diagonal of $\Lambda$. We then replace these $d+1$ columns of $U$ with (non-orthonormal) set of $d$ coordinate projection vectors and a vector $(1,\ldots ,1)^T$, which are also the eigenvectors corresponding to 0-eigenvalues. This results in a new matrix $S$ such that $\Omega S=S\Lambda$. Since the first $d+1$ columns of $U$ and $S$ are a basis for the same subspace (in particular, each of the first $d+1$ columns of $S$ can be expressed as a linear combination of the first $d+1$ columns of $U$), it is easy to show that the set of all columns of $S$ is linearly independent, which implies that $S$ is invertible. Therefore $\Omega=S\Lambda S^{-1}$. \end{proof}
For general $v\times v$ positive semidefinite matrices the following is also true: \begin{lemma}\label{lma:kerSumMat} Let $A$, $B$ be PSD matrices. Then $\ker(A+B)=\ker(A)\cap\ker(B)$. \end{lemma} \begin{proof} If $x\in\ker(A)\cap\ker(B)$, then $(A+B)x=Ax+Bx=0+0=0$, so $\ker(A)\cap\ker(B)\subseteq\ker(A+B)$. Conversely, if $x\in\ker(A+B)$, $0=x^{T}(A+B)x=x^{T}Ax+x^{T}Bx$, therefore $x^{T}Ax=x^{T}Bx=0$. Since $x^{T}Ax=\langle A^{1/2}x,A^{1/2}x\rangle$, where $A^{1/2}=U\Lambda^{1/2} U^{T}$ and hence $\ker(A^{1/2})=\ker(A)$, we have $0=A^{1/2}x=Ax$, and so $x\in\ker(A)$. Similarly, $x\in\ker(B)$, hence $\ker(A+B)\subseteq\ker(A)\cap\ker(B)$. \end{proof}
\begin{lemma}\label{lma:omegaepsilon} For any PSD matrix $\Omega_{1}$ and a matrix $\Omega_{2}$ satisfying $\ker(\Omega_{1})\subseteq\ker(\Omega_{2})$, there is $c > 0$ such that $c\Omega_{1}+\Omega_{2}$ is a PSD matrix with the same nullity as $\Omega_{1}$. \end{lemma} \begin{proof} For a sufficiently small $\epsilon$, we can view $\Omega_{1}+\epsilon\Omega_{2}$ as perturbed matrix $\Omega_{1}$. Assume that $\Omega_{1}$ has rank $r$. Then, the matrix has $r$ positive eigenvalues $\lambda_{l}$ \cite{hj}, while the rest of the eigenvalues are 0. We can assume the eigenvalues are ordered such that $\lambda_{l}\geq\lambda_{l+1}>0$ for $l=1\ldots r-1$. By Weyl theorem \cite{weyl}, the perturbation of the eigenvalues of $\Omega_{1}$ is bounded:
$|\lambda_{l}-\lambda'_{l}|\leq \epsilon\|\Omega_{2} \|$ for all $l$. A sufficient condition for preserving positivity of the first $r$ perturbed eigenvalues is $|\lambda_{l}-\lambda'_{l}|<\lambda_{[r]}$, where $\lambda_{[r]}$ is minimal eigenvalue among the first $r$. Therefore by setting $\epsilon=\frac{\lambda_{[r]}(\Omega_{1})}{2\|\Omega_{2} \|}$, the first $r$ eigenvalues of $\Omega_{1}+\epsilon\Omega_{2}$ are positive.
The rest of the eigenvalues of $\Omega_{1}+\epsilon\Omega_{2}$ are zero. If this were not the case, then $\rank (\Omega_{1}+\epsilon\Omega_{2})>\rank (\Omega_{1})$, or, equivalently, $\dim(\ker(\Omega_{1}+\epsilon\Omega_{2}))<\dim(\ker(\Omega_{1}))$. On the other hand, since $\ker(\Omega_{1})\subseteq\ker(\Omega_{2})=\ker(\epsilon\Omega_{2})$ for $\epsilon>0$, we must have $\ker(\Omega_{1})\subseteq \ker(\Omega_{1}+\epsilon\Omega_{2})$, which implies $\dim(\ker(\Omega_{1}+\epsilon\Omega_{2}))\geq\dim(\ker(\Omega_{1}))$, a contradiction. Therefore $\Omega_{1}+\epsilon\Omega_{2}$ has $r$ positive eigenvalues while the rest are zero, and so the matrix is positive semidefinite with the same nullity as $\Omega_{1}$. By scaling the matrix by $c=\epsilon^{-1}$, the conclusion of the lemma follows.
\end{proof} \end{section}
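Before turning to attachments, here is a small numerical illustration of Lemma \ref{lma:omegaepsilon}; the matrices are randomly generated and serve only as an example. With $\epsilon=\lambda_{[r]}(\Omega_1)/(2\|\Omega_2\|)$ as in the proof, the perturbed matrix stays positive semidefinite with unchanged nullity.
\begin{verbatim}
import numpy as np

# Lemma (omegaepsilon): eps = lambda_[r](O1) / (2*||O2||) preserves
# positive semidefiniteness and the nullity of O1.
rng = np.random.default_rng(0)
v, r = 8, 5
A = rng.standard_normal((v, r))
O1 = A @ A.T                                  # PSD, rank r, nullity v - r
B = rng.standard_normal((r, r)); B = B + B.T  # symmetric, possibly indefinite
O2 = A @ B @ A.T                              # ker(O1) is contained in ker(O2)
eigs1 = np.sort(np.linalg.eigvalsh(O1))
eps = eigs1[v - r] / (2 * np.linalg.norm(O2, 2))
eigs = np.sort(np.linalg.eigvalsh(O1 + eps * O2))
print(np.sum(np.abs(eigs) < 1e-9), v - r)     # nullity is preserved
print(eigs[0] > -1e-9)                        # no negative eigenvalues
\end{verbatim}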
\begin{section}{Framework attachments: main results}\label{sec:Main} In this section we will address the main question of this paper: given two universally rigid frameworks, under which conditions their attachment is also universally rigid?
\begin{theorem}\label{thm:OmegaMain} A framework attachment of two universally rigid frameworks in general position in $\mathbb{R}^{d}$, not joined on all vertices, is universally rigid if and only if the number of shared vertices is greater than or equal to $d+1$. \end{theorem} \begin{proof}
Assume that we are given the attachment $G$ in $\mathbb{R}^d$ of two universally rigid frameworks, $G_{A}$ and $G_{B}$, that are in general position. Denote the set of vertices in $G_{A}$ by $V_{A}$, the set of vertices of $G_{B}$ by $V_{B}$, and the set of shared vertices by $V_{C}=V_{A}\cap V_{B}$. If $|V_{C}|< d+1$, removing the shared vertices from $G$ leaves a disconnected graph, which means $G$ is not $(d+1)$-connected and, by Theorem \ref{thm:globconn}, not universally rigid.
To prove the converse, we can express the configuration $\mathbf{p}$ of $G$ as $\mathbf{p}=(\mathbf{p}_{A},\mathbf{p}_{B\backslash C})$, where $\mathbf{p}_{A}$ is the configuration of $G_{A}$. Since $G_{A}$ is universally rigid, any configuration $\mathbf{p'}$ of $G$ equivalent to $\mathbf{p}$ satisfies $\mathbf{p}'_{A}\cong \mathbf{p}_{A}$. Therefore to any such $\mathbf{p'}$ there corresponds (by a rigid motion) a configuration $\mathbf{p}''=(\mathbf{p}_{A},\mathbf{p}''_{B\backslash C})$ such that $\mathbf{p''}\cong \mathbf{p'}$. We will now show that any such $\mathbf{p''}$ is congruent to $\mathbf{p}$, thus proving $\mathbf{p'}\cong \mathbf{p}$.
Without loss of generality, assume that the two frameworks are joined at $n\geq d+1$ vertices $v_1,\ldots ,v_n$. Since each framework is in general position, the coordinate vectors $\mathbf{p}_1,\ldots ,\mathbf{p}_{d+1}$ are affinely independent, i.e. $\mathbf{y}_{1}=\mathbf{p}_{2} - \mathbf{p}_{1},\ldots ,\mathbf{y}_{d}=\mathbf{p}_{d+1} - \mathbf{p}_{1}$ are linearly independent. Since $G_{B}$ is universally rigid, $\mathbf{p}''_{B}\cong \mathbf{p}_{B}$. That means there is a congruency map $H: \mathbf{p}_{B}\to \mathbf{p}''_{B}$. The map fixes $\mathbf{p}_1,\ldots ,\mathbf{p}_{d+1}$, therefore it must be some linear transformation $A$ from the orthogonal group $O_d$ (we can assume, without loss of generality, that vertex $v_1$ is at the origin). The transformation $A$ also fixes the basis vectors $\mathbf{y}_{1},\ldots, \mathbf{y}_{d}$. Therefore, for a coordinate vector $\mathbf{p}_j = \sum_{i}{\alpha_{i} \mathbf{y}_{i}}$ of a vertex $v_j\in V_{B\backslash C}$, we have: \begin{displaymath} \mathbf{p}''_j=A \mathbf{p}_j = A\sum_{i}{\alpha_{i} \mathbf{y}_{i}} = \sum_{i}{\alpha_{i} (A\mathbf{y}_{i})} = \sum_{i}{\alpha_{i} \mathbf{y}_{i}} = \mathbf{p}_j, \end{displaymath} which shows that $\mathbf{p}''_{B\backslash C}=\mathbf{p}_{B\backslash C}$. Therefore, $\mathbf{p}''=\mathbf{p}$ and hence $\mathbf{p}'\cong\mathbf{p}$. Since $\mathbf{p}'$ is an arbitrary configuration of the attachment, and the above argument did not depend on the dimension $d$, this shows that the attachment is universally rigid. \begin{comment} The configuration $\mathbf{p}$ of $G$ can be expressed as $\mathbf{p}=(\mathbf{p}_{A},\mathbf{p}_{B\backslash C})$, where $\mathbf{p}_{A}=\pi(\mathbf{p})$ is the projection of the attachment configuration onto coordinates of the vertices of $G_{A}$. \end{comment} \end{proof}
With this result, one may naturally ask: \begin{question}\label{qtn:URGen} What are the generating graphs for universally rigid graphs under graph attachment? \end{question} In other words, we are interested in identifying a smallest set of universally rigid graphs which would span, under finite attachment, the space of universally rigid graph attachments. At first glance the space may resemble a semigroup with a set of generators. However, since there can be more than one way to attach a pair of graphs, the binary operation is not well defined. Leaving the answer to this question for possible future research, \begin{comment} We previously mentioned that (non-compete) universally rigid frameworks in $\mathbb{R}^{d}$ must be ($d+1$)-connected, which implies they must have at least $d+1$ vertices. Therefore, by the above theorem \ref{thm:OmegaMain}, attaching two such frameworks creates another universally rigid framework, which can be re-phrased as following. \begin{corollary}\label{col:colMain} Non-complete universally rigid frameworks in $\mathbb{R}^{d}$ form a semigroup. \end{corollary} Since there can be more than one way to attach two frameworks, a binary operation needs to be defined appropriately. \end{comment} we will now establish the second main result, which states that removing certain edges between shared vertices in the attachment preserves universal rigidity (see last diagram in Figure~\ref{fig:attach3D}). \begin{definition}\label{dfn:EdgeReducedDef} An \emph{edge-reduced framework attachment} is a framework obtained from a framework attachment by removing those edges between the shared vertices that are inherited from only one of the two joined frameworks. \end{definition} \begin{theorem}\label{thm:EdgeReducedMain} An edge-reduced framework attachment of two universally rigid frameworks is universally rigid. \end{theorem} \begin{proof} Given the non-edge-reduced attachment $G$ in $\mathbb{R}^d$ and the corresponding edge function $f$, we again denote the set of vertices in $G_{A}$ by $V_{A}$, the set of vertices of $G_{B}$ by $V_{B}$, and the set of shared vertices by $V_{C}$. As in the proof of Theorem \ref{thm:OmegaMain}, here it will also be sufficient to consider only the configurations of $G$ in which the vertices of $G_{A}$ are ``pinned''. Let $\pi$, defined by $\mathbf{p}_{A}=\pi(\mathbf{p})$, be the projection of the attachment configuration $\mathbf{p}=(\mathbf{p}_{A},\mathbf{p}_{B\backslash C})$ onto coordinates of the vertices of $G_{A}$. \begin{comment} The configuration $\mathbf{p}$ of $G$ can be expressed as $\mathbf{p}=(\mathbf{p}_{A},\mathbf{p}_{B\backslash C})$, where $\mathbf{p}_{A}=\pi(\mathbf{p})$ is the projection of the attachment configuration onto coordinates of the vertices of $G_{A}$. \end{comment} Since $G$ is universally rigid (by Theorem \ref{thm:OmegaMain}), the set $f^{-1}(f(\mathbf{p}))$ of all edge-length preserving configurations consists of configurations congruent to $\mathbf{p}$.
Assume now we remove the edge $\{i,j\}$ of length $l_{1}$ between shared vertices that was inherited from $G_{B}$ only, resulting in an edge-reduced framework attachment $G_{1}$ with a corresponding edge function $f_{1}$. Since $G_{A}$ is universally rigid and the removed edge did not belong to $G_{A}$, any configuration $\mathbf{p'}=(\mathbf{p}'_{A},\mathbf{p}'_{B\backslash C})\in f_{1}^{-1}(f_{1}(\mathbf{p}))$ satisfies $\mathbf{p}'_{A}\cong \mathbf{p}_{A}$. Therefore for any such $\mathbf{p'}$ there corresponds (by a rigid motion) a configuration $\mathbf{p}''=(\mathbf{p}_{A},\mathbf{p}''_{B\backslash C})\in f_{1}^{-1}(f_{1}(\mathbf{p}))\cap\pi^{-1}(\mathbf{p}_{A})$ such that $\mathbf{p''}\cong \mathbf{p'}$. We will now show that any such $\mathbf{p''}$ is congruent to $\mathbf{p}$, thus proving $\mathbf{p'}\cong \mathbf{p}$.
We make the following observation regarding the configuration sets: \begin{align*} f^{-1}(f(\mathbf{p})) &\subseteq f_{1}^{-1}(f_{1}(\mathbf{p})),\\ f^{-1}(f(\mathbf{p})) &= f_{1}^{-1}(f_{1}(\mathbf{p})) \cap D_{ij}, \end{align*} where $D_{ij}$ is the set of all edge-reduced configurations with the distance between vertices $i$ and $j$ equal to $l_{1}$. By restricting the attachment configurations to those with vertices of $G_{A}$ mapped to $\mathbf{p}_{A}$, we get: \begin{displaymath} \mathbf{p}\in f^{-1}(f(\mathbf{p})) \cap \pi^{-1}(\mathbf{p}_{A}) = f_{1}^{-1}(f_{1}(\mathbf{p})) \cap D_{ij} \cap \pi^{-1}(\mathbf{p}_{A}) = f_{1}^{-1}(f_{1}(\mathbf{p})) \cap \pi^{-1}(\mathbf{p}_{A})\ni\mathbf{p''},
\end{displaymath} where the last equality stems from the fact that if $\mathbf{p}_{A}$ is fixed then $dist(i,j)=l_{1}$, or $\pi^{-1}(\mathbf{p}_{A})\subset D_{ij}$, and so the edge length restriction can be omitted. Therefore $\mathbf{p}\cong \mathbf{p''}\cong \mathbf{p'}$, and since $\mathbf{p'}$ is an arbitrary configuration of $G_{1}$, this shows that $G_{1}$ is universally rigid. Proceeding by induction, we conclude that removing those edges between shared vertices that are inherited only from $G_{B}$ preserves universal rigidity. \end{proof} The main idea here is that universal rigidity of $G_{A}$ maintains the same pairwise distances between the shared vertices for all edge-reduced attachment configurations. Since $G$ is also universally rigid, we conclude that all edge-reduced attachments agree on pairwise distances between all vertices, and so, in particular, all configurations are congruent to $\mathbf{p}$.
\end{section}
\begin{section}{Stress matrices for framework attachments}\label{sec:EdgRedStressMat}
Assume we are given an attachment of two universally rigid frameworks in general position with PSD stress matrices $\Omega_{A}$ of size $v_{A}\times v_{A}$, and $\Omega_{B}$ of size $v_{B}\times v_{B}$, each of nullity $d+1$. Here $v_{A}$ and $v_{B}$ are the numbers of vertices of the corresponding frameworks, and so the framework attachment will have $v=v_{A}+v_{B}-n$ vertices, where $n$ is the number of shared vertices.
\begin{figure}
\caption{A framework in $\mathbb{R}^2$ of a Tutte graph \cite{cnbook}. The framework is universally rigid and has a PSD stress matrix of nullity $3$, but it is not infinitesimally rigid. The arrows indicate a non-trivial infinitesimal flex. The numbers by the edges are the stresses corresponding to a PSD stress matrix of nullity 3.}
\label{fig:Tutte}
\end{figure}
We re-order, if needed, the columns and rows of $\Omega_{A}$ such that the \emph{last} $n$ columns (and rows) correspond to shared vertices. We also rearrange $\Omega_{B}$ to have the \emph{first} $n$ columns (and rows) correspond to shared vertices. Next, we extend the framework $G_{A}$ by the $v_{B}-n$ non-shared vertices of $G_{B}$ (leaving out new edges), thus getting an extended framework $\widetilde{G}_{A}$. Adding the new disconnected vertices is equivalent to augmenting $\Omega_{A}$ by $v_{B}-n$ rows and columns of zero stresses, which results in an extended $v\times v$ stress matrix $\widetilde{\Omega}_{A}$. Similarly, we extend the framework $G_{B}$ to $\widetilde{G}_{B}$ by adding the $v_{A}-n$ non-shared vertices (and no edges) of the framework $G_{A}$. The stress matrix $\Omega_{B}$ is extended to a $v\times v$ matrix $\widetilde{\Omega}_{B}$ in a similar fashion, except for the way we augment the matrix by zeros, as shown below. (The difference comes from maintaining a consistent labeling of vertices.)
\begin{displaymath} \widetilde{\Omega}_{A}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,35pt,35pt){c}{c} \Omega_{A} \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,25pt,25pt){c}{c} 0 \end{BMAT} \end{BMAT} \right)
,\quad \widetilde{\Omega}_{B}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,20pt,20pt){c}{c} 0 \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,40pt,40pt){c}{c} \Omega_{B} \end{BMAT} \end{BMAT}\\ \right). \end{displaymath} Clearly, the attachment of the two extended frameworks (joined at all vertices) is identical to the original attachment. Also, the two extended stress matrices are of the same size. We can now state the two main results of this section. \begin{theorem}\label{thm:OmegaSum} Given PSD stress matrices $\Omega_{A}$ and $\Omega_{B}$ of nullity $d+1$ for two frameworks in general position sharing $n\geq d+1$ vertices, a PSD stress matrix of nullity $d+1$ for the framework attachment $G$ can be obtained by summing the two matrices after extending each by appropriate number of zero columns and rows: \begin{equation}\label{eqn:omegaPlus} \widetilde{\Omega}=\widetilde{\Omega}_{A}+\widetilde{\Omega}_{B}= \left( \begin{BMAT}(@,2pt,2pt){ccc}{ccc} \begin{BMAT}(@,20pt,20pt){c}{c} \Omega_{A} \end{BMAT} & \phantom{0} & 0\\ \phantom{0} & \begin{BMAT}(@,5pt,5pt){c}{c} + \end{BMAT} & \phantom{0}\\ 0 & \phantom{0} & \begin{BMAT}(@,30pt,30pt){c}{c} \Omega_{B} \end{BMAT} \addpath{(0,1,.)rruu} \addpath{(1,0,.)uurr} \end{BMAT}\\ \right). \end{equation} \end{theorem}
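As a numerical aside (not part of the construction above), Theorem \ref{thm:OmegaSum} is easy to check with a short script. The Python sketch below is purely illustrative; helper names such as \texttt{extend\_A} are ours, and the stress matrices are assumed to be ordered with the shared vertices last in $\Omega_{A}$ and first in $\Omega_{B}$.
\begin{verbatim}
# Illustrative sketch: embed Omega_A and Omega_B into v x v matrices by
# zero padding (shared vertices last in Omega_A, first in Omega_B),
# sum them, and inspect the nullity of the result.
import numpy as np

def extend_A(Omega_A, v):
    """Place Omega_A in the top-left corner of a v x v zero matrix."""
    vA = Omega_A.shape[0]
    M = np.zeros((v, v))
    M[:vA, :vA] = Omega_A
    return M

def extend_B(Omega_B, v):
    """Place Omega_B in the bottom-right corner of a v x v zero matrix."""
    vB = Omega_B.shape[0]
    M = np.zeros((v, v))
    M[v - vB:, v - vB:] = Omega_B
    return M

def nullity(M, tol=1e-8):
    """Number of numerically zero eigenvalues of a symmetric matrix."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(M)) < tol))

# Usage with hypothetical PSD stress matrices Omega_A (vA x vA) and
# Omega_B (vB x vB) sharing n vertices:
#   v = vA + vB - n
#   Omega = extend_A(Omega_A, v) + extend_B(Omega_B, v)
#   assert np.linalg.eigvalsh(Omega).min() > -1e-8   # PSD
#   print(nullity(Omega))                            # expect d + 1
\end{verbatim}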
\begin{theorem}\label{thm:stressReduced} If $G_{A}$ is also infinitesimally rigid, a positive semidefinite stress matrix of nullity $d+1$ for the edge-reduced framework attachment, in which $K$ edges inherited from $G_{B}$ were removed, is of the form:
\begin{displaymath}
\widetilde\Omega_{re}=c\widetilde\Omega_{A}+\widetilde{\Omega}_{AK}+\widetilde\Omega_{B}, \end{displaymath} where $c$ is a large enough constant, and the entries of matrix $\widetilde{\Omega}_{AK}$ are derived from the rigidity matrix of $G_{A}$ and stress matrix $\Omega_{B}$ by solving $K$ systems of linear equations. \end{theorem}
In the rest of this section we prove these two results. \begin{remark}\label{rmk:noUR} Although complementary to the two main theorems of the previous section, these results do not actually require universal rigidity of the attached frameworks. On the other hand, infinitesimal rigidity of the framework whose edges are preserved in the edge-reduced attachment is required for the construction of the stress matrix in the proof of Theorem \ref{thm:stressReduced}. The example in Figure~\ref{fig:Tutte} shows that infinitesimal rigidity is not automatically satisfied even when the framework is universally rigid with a PSD stress matrix of nullity $d+1$. Infinitesimal rigidity does, however, hold for all generic universally rigid frameworks. \end{remark}
\begin{proof}[Proof of Theorem \ref{thm:OmegaSum}]
The matrix $\widetilde{\Omega}$ is a PSD matrix because both $\widetilde{\Omega}_{A}$ and $\widetilde{\Omega}_{B}$ are. Also, since both $\widetilde{\Omega}_{A}$ and $\widetilde{\Omega}_{B}$ satisfy properties (1), (3) and (4) of Definition \ref{dfn:defStressMat}, so does $\widetilde{\Omega}$. If $i,j\in V_{A}\cup V_{B}$, $i\neq j$ and $\{i,j\}\notin E_{A}\cup E_{B}$, then the $(i,j)$ entry of both $\widetilde{\Omega}_{A}$ and $\widetilde{\Omega}_{B}$ is zero, and so is the $(i,j)$ entry of $\widetilde{\Omega}$. This verifies property (2) as well, and hence the matrix $\widetilde{\Omega}$ is a PSD stress matrix for the framework attachment.
To prove that $\widetilde{\Omega}$ is of nullity $d+1$,
we first consider the case where the number of shared vertices is exactly $d+1$.
Assume, as above, that the last $d+1$ columns (rows) of $\Omega_{A}$ and the first $d+1$ columns (rows) of $\Omega_{B}$ correspond to the shared vertices, in the same order.
We begin by diagonalizing the PSD matrices as $\Omega_{A}=U_{A}\Lambda_{A} U_{A}^T$ and $\Omega_{B}=S_{B}\Lambda_{B} S_{B}^{-1}$, where $\Lambda_{A}$ and $\Lambda_{B}$ are diagonal, $U_{A}$ is unitary, and $S_{B}$ is the invertible matrix constructed as in Lemma \ref{lma:OmegaDiag}. The corresponding extended stress matrices, obtained as in the same lemma, are then readily diagonalizable as $\widetilde{\Omega}_{A}=\widetilde{U}_{A}\widetilde{\Lambda}_{A} \widetilde{U}_{A}^T$ and $\widetilde{\Omega}_{B}=\widetilde{S}_{B}\widetilde{\Lambda}_{B} \widetilde{S}_{B}^{-1}$, where the $v\times v$ matrices are of the form:
\begin{displaymath} \widetilde{U}_{A}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,35pt,35pt){c}{c} U_{A} \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,25pt,25pt){c}{c} I \end{BMAT} \end{BMAT} \right)
,\quad \widetilde{S}_{B}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,20pt,20pt){c}{c} I \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,40pt,40pt){c}{c} S_{B} \end{BMAT} \end{BMAT}\\ \right), \end{displaymath}
\begin{displaymath} \widetilde{\Lambda}_{A}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,35pt,35pt){c}{c} \Lambda_{A} \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,25pt,25pt){c}{c} 0 \end{BMAT} \end{BMAT} \right)
,\quad \widetilde{\Lambda}_{B}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,20pt,20pt){c}{c} 0 \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,40pt,40pt){c}{c} \Lambda_{B} \end{BMAT} \end{BMAT}\\ \right). \end{displaymath}
Using the above diagonalization, we can write: \begin{displaymath} \widetilde{\Omega}=\widetilde{\Omega}_{A}+\widetilde{\Omega}_{B}= \widetilde{U}_{A}(\widetilde{\Lambda}_{A}+\widetilde{U}_{A}^{T}\widetilde{\Omega}_{B} \widetilde{U}_{A})\widetilde{U}_{A}^{T}= \widetilde{U}_{A}(\widetilde{\Lambda}_{A}+\widetilde{U}_{A}^{T}\widetilde{S}_{B}\widetilde{\Lambda}_{B} \widetilde{S}_{B}^{-1}\widetilde{U}_{A})\widetilde{U}_{A}^{T}, \end{displaymath} or, \begin{displaymath} \widetilde{\Omega}=\widetilde{U}_{A}(\widetilde{\Lambda}_{A}+V^{-1}\widetilde{\Lambda}_{B} V)\widetilde{U}_{A}^{T}, \end{displaymath} where $V=\widetilde{S}_{B}^{-1}\widetilde{U}_{A}$. Since $\widetilde{U}_{A}$ is invertible, the matrices $(\widetilde{\Lambda}_{A}+V^{-1}\widetilde{\Lambda}_{B} V)$ and $\widetilde{\Omega}$ are of the same nullity. Also, both $\widetilde{\Lambda}_{A}$ and $V^{-1}\widetilde{\Lambda}_{B} V$ are PSD, and so, by Lemma \ref{lma:kerSumMat}, $\nullity(\widetilde{\Omega})=\dim(\ker(\widetilde{\Lambda}_{A}+V^{-1}\widetilde{\Lambda}_{B} V))= \dim(\ker(\widetilde{\Lambda}_{A})\cap\ker(V^{-1}\widetilde{\Lambda}_{B} V))$. Using the formula for the dimension of the sum of two vector spaces \cite{artin}, we expand the last expression for the intersection, which gives: \begin{equation}\label{eqn:dimeq} \nullity(\widetilde{\Omega})=\dim(\ker(\widetilde{\Lambda}_{A}))+\dim(\ker(V^{-1}\widetilde{\Lambda}_{B} V))-\dim(\ker(\widetilde{\Lambda}_{A})+\ker(V^{-1}\widetilde{\Lambda}_{B} V)). \end{equation}
\begin{comment} \begin{align} \nullity(\widetilde{\Omega}) &=\dim(\ker(\widetilde{\Lambda}_{A})\cap\ker(V^{-1}\widetilde{\Lambda}_{B} V))\nonumber\\
\label{eqn:dimeq} &=\dim(\ker(\widetilde{\Lambda}_{A}))+\dim(\ker(V^{-1}\widetilde{\Lambda}_{B} V))-\dim(\ker(\widetilde{\Lambda}_{A})+\ker(V^{-1}\widetilde{\Lambda}_{B} V)).
\end{align} \end{comment}
We will now determine each of the three terms appearing in the above equation. Since only the last $v_{B}$ diagonal entries of $\widetilde{\Lambda}_{A}$ are zero, the basis for $\ker(\widetilde{\Lambda}_{A})$ consists of the $v\times 1$ vectors $e_{i}$, where $i\in\{v_{A}-d,\ldots ,v\}$ and $e_{i}=(0,\ldots ,0,1,0,\ldots ,0)^{T}$, with 1 at the $i^{th}$ coordinate. In a compact form,
\begin{equation}\label{eqn:kerLa} \ker(\widetilde{\Lambda}_{A})=\im \left( \begin{BMAT}(@,2pt,2pt){c}{c.cc} \begin{BMAT}(@,10pt,20pt){c}{c} 0 \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} I_{v_{B}} \end{BMAT} \end{BMAT}\\ \right)_{v\times v_{B}}. \end{equation} Similarly, \begin{equation}\label{eqn:kerLb} \ker(\widetilde{\Lambda}_{B})=\im \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,10pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} 0 \end{BMAT} \end{BMAT}\\ \right)_{v\times v_{A}}, \end{equation} and since $\dim(\ker(V^{-1}\widetilde{\Lambda}_{B} V))=\dim(\ker(\widetilde{\Lambda}_{B}))$, we get:
\begin{comment} \begin{equation}\label{eqn:dimkerLa} \dim(\ker(\widetilde{\Lambda}_{A}))=v_{B}, \end{equation} \begin{equation}\label{eqn:dimkerVLbV} \dim(\ker(V^{-1}\widetilde{\Lambda}_{B} V))=v_{A}. \end{equation} \end{comment}
\begin{align} \label{eqn:dimkerLa} \dim(\ker(\widetilde{\Lambda}_{A})) &= v_{B},\\
\label{eqn:dimkerVLbV}
\dim(\ker(V^{-1}\widetilde{\Lambda}_{B} V)) &= v_{A}. \end{align}
For the third term in (\ref{eqn:dimeq}), we will find a basis of $\ker(\widetilde{\Lambda}_{A})+\ker(V^{-1}\widetilde{\Lambda}_{B} V)$. We first note that $x\in\ker(\widetilde{\Lambda}_{B})\iff V^{-1}x\in\ker(V^{-1}\widetilde{\Lambda}_{B} V)$, and since $V$ is non-singular, a basis $\mathbf{B}$ of $\ker(V^{-1}\widetilde{\Lambda}_{B} V)$ consists of the columns of the matrix product of $V^{-1}$ and the basis matrix of $\ker(\widetilde{\Lambda}_{B})$ given in (\ref{eqn:kerLb}):
\begin{displaymath} \mathbf{B}=V^{-1} \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,10pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} 0 \end{BMAT} \end{BMAT}\\ \right)=\widetilde{U}_{A}^{T}\widetilde{S}_{B} \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,10pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} 0 \end{BMAT} \end{BMAT}\\ \right)=\widetilde{U}_{A}^{T} \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,20pt,20pt){c}{c} I \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,40pt,40pt){c}{c} S_{B} \end{BMAT} \end{BMAT}\\ \right) \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,10pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} 0 \end{BMAT} \end{BMAT}\\ \right). \end{displaymath} By splitting $S_{B}$ and then $\widetilde{U}_{A}^{T}$ into blocks of appropriate sizes to perform block matrix multiplication, we get: \begin{equation}\label{eqn:Bv} \mathbf{B} = \widetilde{U}_{A}^{T} \left( \begin{BMAT}(@,2pt,2pt){ccc}{ccc} \begin{BMAT}(@,20pt,20pt){c}{c} I \end{BMAT} & 0 & 0\\ 0 & \begin{BMAT}(@,5pt,5pt){c}{c} D \end{BMAT} & Y\\ 0 & X & \begin{BMAT}(@,20pt,30pt){c}{c} Z \end{BMAT} \addpath{(0,1,.)rrr} \addpath{(0,2,.)rrr} \addpath{(1,0,.)uuu} \addpath{(2,0,.)uuu} \end{BMAT}\\ \right) \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,2pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,2pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,2pt,30pt){c}{c} 0 \end{BMAT} \end{BMAT}\\ \right) \begin{comment} =\widetilde{U}_{A}^{T} \left( \begin{BMAT}(@,2pt,2pt){cc}{ccc} \begin{BMAT}(@,20pt,20pt){c}{c} I \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,5pt,5pt){c}{c} D \end{BMAT}\\ 0 & \begin{BMAT}(@,5pt,30pt){c}{c} X \end{BMAT} \addpath{(0,1,.)rr} \addpath{(1,0,.)uuu} \addpath{(0,2,.)rr} \end{BMAT}\\ \right)
\end{comment} = \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,30pt,35pt){c}{c} U_{A}^{T} \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,20pt,30pt){c}{c} I \end{BMAT} \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){cc}{ccc} \begin{BMAT}(@,20pt,20pt){c}{c} I \end{BMAT} & 0\\ 0 & \begin{BMAT}(@,5pt,5pt){c}{c} D \end{BMAT}\\ 0 & \begin{BMAT}(@,5pt,30pt){c}{c} X \end{BMAT}
\addpath{(0,1,.)rr} \addpath{(1,0,.)uuu} \addpath{(0,2,.)rr} \end{BMAT}\\ \right)= \left( \begin{BMAT}(@,2pt,2pt){c}{c.c} \begin{BMAT}(@,20pt,30pt){c}{c} U_{A}^{T}\left( \begin{BMAT}(@,15pt,15pt){cc}{cc} I & 0\\ 0 & D
\end{BMAT}\right)
\end{BMAT}\\
\begin{BMAT}(@,30pt,30pt){cc}{c} 0 & X \end{BMAT} \end{BMAT}\right), \end{equation}
where the matrix $D$ is the $(d+1)\times (d+1)$ upper-left block of the matrix $S_{B}$. By construction, the first $d+1$ columns of $S_{B}$ are the $d$ coordinate projections of $G_{B}$ and a vector $(1,\ldots ,1)^T$, and so the first $d$ columns of $D$ consist of coordinates (of shared vertices) while the last column is the $(d+1)\times 1$ vector $(1,\ldots ,1)^T$. Since $G_{B}$ is in general position, by Lemma \ref{lma:p1matrix} the matrix $D$ is invertible. This in turn implies that the above $v_{A}\times v_{A}$ submatrix $U_{A}^T\left( \begin{array}{cc} I&0\\ 0&D\end{array} \right)$ is invertible. Therefore, right-multiplication of $\mathbf{B}$, given in (\ref{eqn:Bv}), by the inverse of this submatrix produces a new basis matrix \cite{artin} for $\ker(V^{-1}\widetilde{\Lambda}_{B} V)$:
\begin{equation}\label{eqn:kerVLbV} \ker(V^{-1}\widetilde{\Lambda}_{B} V)=\im \left( \left( \begin{BMAT}(@,2pt,2pt){c}{c.c} \begin{BMAT}(@,30pt,30pt){c}{c} U_{A}^{T}\left( \begin{BMAT}(@,15pt,15pt){cc}{cc} I & 0\\ 0 & D
\end{BMAT}\right)
\end{BMAT}\\
\begin{BMAT}(@,30pt,30pt){cc}{c} 0 & X \end{BMAT} \end{BMAT}\right) \left( \begin{BMAT}(@,15pt,15pt){cc}{cc} I & 0\\ 0 & D
\end{BMAT}\right)^{-1}U_{A} \right)=\im \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,10pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} Q \end{BMAT} \end{BMAT}\\ \right)_{v\times v_{A}} \end{equation} for some matrix $Q$. Combining (\ref{eqn:kerLa}) and (\ref{eqn:kerVLbV}) now gives: \begin{displaymath} \ker(\widetilde{\Lambda}_{A})+\ker(V^{-1}\widetilde{\Lambda}_{B} V)=\im \left( \begin{BMAT}(@,2pt,2pt){c}{c.cc} \begin{BMAT}(@,10pt,20pt){c}{c} 0 \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} I_{v_{B}} \end{BMAT} \end{BMAT}\\ \right) +\im \left( \begin{BMAT}(@,2pt,2pt){c}{cc.c} \begin{BMAT}(@,10pt,20pt){c}{c} I_{v_{A}} \end{BMAT} \\ \begin{BMAT}(@,10pt,5pt){c}{c} \phantom{0} \end{BMAT} \\ \begin{BMAT}(@,10pt,30pt){c}{c} Q \end{BMAT} \end{BMAT}\\ \right)=\im(I_{v})=\mathbb{R}^{v}, \end{displaymath} and hence \begin{equation}\label{eqn:dimsumker} \dim(\ker(\widetilde{\Lambda}_{A})+\ker(V^{-1}\widetilde{\Lambda}_{B} V))=v=v_{A}+v_{B}-d-1. \end{equation} By substituting (\ref{eqn:dimkerLa}), (\ref{eqn:dimkerVLbV}) and (\ref{eqn:dimsumker}) into the dimension formula (\ref{eqn:dimeq}), we finally get: \begin{displaymath} \nullity(\widetilde{\Omega})=v_{B}+v_{A}-(v_{A}+v_{B}-d-1)=d+1, \end{displaymath} which completes the proof for the case where the number of shared vertices is $d+1$.
In case the number of shared vertices is $n>d+1$, we consider an ``intermediate'' framework attachment which has $d+1$ shared vertices and $n-d-1$ pairs of vertices that are coinciding in space. The attachment with $n$ shared vertices is then constructed by merging each pair of coinciding vertices into one (shared) vertex. Such merging in turn corresponds to certain operations on the stress matrix of the intermediate attachment, which we will now describe in detail.
The PSD stress matrix for the intermediate attachment is similar to (\ref{eqn:omegaPlus}): \begin{comment} \begin{displaymath} \Omega= \left( \begin{BMAT}(@,2pt,2pt){ccc}{ccc} \begin{BMAT}(@,20pt,20pt){c}{c} \Omega_{A} \end{BMAT} & \phantom{0} & 0\\ \phantom{0} & \begin{BMAT}(@,5pt,5pt){c}{c} + \end{BMAT} & \phantom{0}\\ 0 & \phantom{0} & \begin{BMAT}(@,30pt,30pt){c}{c} \Omega_{B} \end{BMAT}
\addpath{(0,1,|)rruu}
\addpath{(1,0,|)uurr} \end{BMAT}\\ \right). \end{displaymath}
\begin{displaymath} \Omega=
\begin{BMAT}(@,2pt,2pt){cccccc}{cccccc} \phantom{0} & c & \phantom{0} & c & \phantom{0} & \phantom{0}\\ \begin{BMAT}(@,20pt,20pt){c}{c} \Omega_{A} \end{BMAT}
& | & \phantom{0} & | & 0 & \phantom{0}\\ - & + & - & + & - & r\\
\phantom{0} & | & \begin{BMAT}(@,5pt,5pt){c}{c} + \end{BMAT}
&| & \phantom{0} & \phantom{0}\\ - & + & - & + & - & r\\
0 & | & \phantom{0} & | & \begin{BMAT}(@,30pt,30pt){c}{c} \Omega_{B} \end{BMAT} & \phantom{0}\\ \addpath{(0,2,.)rrruuu} s\addpath{(2,0,.)uuurrr} \end{BMAT}\\
\end{displaymath}
\begin{displaymath} \Omega=
\begin{BMAT}(@,2pt,2pt){cccccc}{cccccc} \phantom{0} & c2 & \phantom{0} & c1 & \phantom{0} & \phantom{0}\\
\Omega_{A} & | & \phantom{0} & | & 0 & \phantom{0}\\
- & + & - & + & - & r2\\
\phantom{0} & | & + & | & \phantom{0} & \phantom{0}\\
- & + & - & + & - & r1\\
0 & | & \phantom{0} & | & \Omega_{B} & \phantom{0}\\
\end{BMAT}\\ \end{displaymath} \end{comment}
\begin{displaymath}
\begin{BMAT}(@,2pt,2pt){cc}{cc} \phantom{0} \phantom{0} \phantom{0} \phantom{0} \begin{BMAT}(@,2pt,2pt){ccccc}{c} \phantom{0} & c_{1} & \phantom{0} & c_{2} & \phantom{0} \end{BMAT} & \phantom{0} \\ \widetilde{\Omega}=\left( \begin{BMAT}(@,2pt,2pt){ccccc}{ccccc}
\Omega_{A} & | & \phantom{0} & | & 0\\ - & + & - & + & -\\
\phantom{0} & | & + & | & \phantom{0}\\ - & + & - & + & -\\
0 & | & \phantom{0} & | & \Omega_{B} \addpath{(2,0,.)uuurrr} \addpath{(0,2,.)rrruuu} \end{BMAT}\right)
& \begin{BMAT}(@,2pt,2pt){c}{ccccc} \phantom{0} \\ r_{1} \\ \phantom{0} \\ r_{2} \\ \phantom{0} \\ \end{BMAT}\\ \end{BMAT}, \end{displaymath} where a pair of columns $c_{1}$, $c_{2}$ (and rows $r_{1}$, $r_{2}$) correspond to one of the $n-d-1$ pairs of coinciding vertices that are to be merged. The matrix $\widetilde{\Omega}$ is of nullity $d+1$ (which, as in the proof above, comes from $G_{B}$ being in general position). We will now obtain a stress matrix for the attachment with the two coinciding vertices merged, and show that this matrix is positive semidefinite of nullity $d+1$.
We first add column $c_{2}$ to column $c_{1}$, and row $r_{2}$ to row $r_{1}$. Such an operation results in a matrix $\widetilde{\Omega}'=R\widetilde{\Omega} R^T$, where $R$ is an elementary matrix, and so $\widetilde{\Omega}'$ is a PSD matrix of nullity $d+1$. Adding the columns and rows corresponds to merging the two vertices. We now exchange the column $c_{2}$ (row $r_{2}$) with the last column (row), resulting in another positive semidefinite matrix $\widetilde{\Omega}''$ of nullity $d+1$. We can view $\widetilde{\Omega}''$ as a matrix $\widetilde{\Omega}_{m}$ bordered with a vector $y$ and a real number $a$:
\begin{displaymath} \widetilde{\Omega}''= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c.c} \begin{BMAT}(@,50pt,50pt){c}{c} \widetilde{\Omega}_{m} \end{BMAT} & y\\ y^T & \begin{BMAT}(@,10pt,10pt){c}{c} a \end{BMAT} \end{BMAT} \right). \end{displaymath} By Theorem 4.3.8 in \cite{hj}, the eigenvalues $\mu_{i}$ of $\widetilde{\Omega}_{m}$ are related to the eigenvalues $\lambda_{i}$ of $\widetilde{\Omega}''$ by the following interlacing inequalities: \begin{displaymath} \lambda_{1}\leq\mu_{1}\leq\lambda_{2}\leq\mu_{2}\leq\cdots\leq\lambda_{v-1}\leq\mu_{v-1}\leq\lambda_{v}. \end{displaymath} Since $\widetilde{\Omega}''$ is of nullity $d+1$, we have $\lambda_{1}=\cdots =\lambda_{d+1}=0$, and hence $\mu_{1}=\cdots =\mu_{d}=0$, $0\leq\mu_{d+1}\leq\lambda_{d+2}$, and $\mu_{d+2},\ldots ,\mu_{v-1}>0$. This implies that $\widetilde{\Omega}_{m}$ is of nullity $d$ or $d+1$. However, $\widetilde{\Omega}_{m}$ is a stress matrix for a framework attachment (with the pair of coinciding vertices merged), and as such its nullity is at least $d+1$. This comes from the fact that the configuration corresponding to the attachment of two frameworks in general position with at least $d+1$ vertices each does not lie in any proper affine subspace of $\mathbb{R}^d$ \cite{gt}.
Repeating the above operations for the remaining $n-d-2$ pairs of coinciding vertices (i.e., on the corresponding rows/columns of $\widetilde{\Omega}_{m}$), we then obtain a PSD stress matrix of nullity $d+1$ for the attachment with $n$ shared vertices. This completes the proof of Theorem \ref{thm:OmegaSum}. \begin{comment} Repeating the above procedure of merging coinciding vertices $n-d-2$ times will result in an attachment with $n$ shared vertices. The corresponding stress matrix is constructed by repeating the above row/column operations $n-d-2$ times (starting with the matrix $\widetilde{\Omega}_{m}$). Since the resulting stress matrix is a positive semidefinite matrix of nullity $d+1$, the framework attachment with $n\geq d+1$ shared vertices is universally rigid. This completes the proof of Theorem \ref{thm:OmegaMain}. \end{comment}
\end{proof}
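As a small numerical aside (not part of the argument), the eigenvalue interlacing step used in the preceding proof can be checked directly: deleting one row and the corresponding column of a symmetric matrix yields eigenvalues that interlace those of the full matrix. The Python sketch below, with illustrative names only, verifies this for a random symmetric matrix.
\begin{verbatim}
# Illustrative check of eigenvalue interlacing: deleting the last row and
# column of a symmetric matrix yields eigenvalues mu_i with
# lambda_i <= mu_i <= lambda_{i+1}.  All names are ours.
import numpy as np

rng = np.random.default_rng(0)
v = 6
A = rng.standard_normal((v, v))
Omega_pp = A + A.T                           # symmetric v x v "bordered" matrix
Omega_m = Omega_pp[:-1, :-1]                 # principal (v-1) x (v-1) submatrix

lam = np.sort(np.linalg.eigvalsh(Omega_pp))  # lambda_1 <= ... <= lambda_v
mu = np.sort(np.linalg.eigvalsh(Omega_m))    # mu_1 <= ... <= mu_{v-1}

assert np.all(lam[:-1] <= mu + 1e-10) and np.all(mu <= lam[1:] + 1e-10)
\end{verbatim}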
\begin{comment} As was shown, the PSD stress matrix of nullity $d+1$ for the framework attachment (with $n\geq d+1$ shared vertices) can be obtained by summing two appropriately extended PSD stress matrices of original frameworks. Here, as in the proof of theorem \ref{thm:EdgeReducedMain}, we assume that some edges connecting vertices in $C$ (and inherited from $G_{B}$ only) have been removed, and we will derive the PSD stress matrix of nullity $d+1$ for the edge-reduced framework. \end{comment}
Before formally proving Theorem \ref{thm:stressReduced}, we first outline the proof for the case where only \emph{one} edge (say, $\{i,j\}$) of $G_{B}$ is removed from the attachment. We start by having two frameworks and the edge $\{i,j\}$ still connected in $G_{B}$. From the PSD stress matrix $\Omega_{B}$, we obtain a stress $w_{1}$ on the edge $\{i,j\}$. We then \emph{add} the edge $\{i,j\}$ to the same pair of vertices in $G_{A}$, thus obtaining edge-extended framework $\overline{G}_{A}$, and use Proposition \ref{prop:stressw1} below to derive a stress vector $\overline{w}$ such that the stress on the added edge is $-w_{1}$ (as in Figure~\ref{fig:figRedEdge}). With the help of Lemma \ref{lma:omegaepsilon}, we then obtain a PSD stress matrix of nullity $d+1$ bearing the same stress on the added edge. Attaching the two frameworks, by Theorem \ref{thm:OmegaSum}, will correspond to summing the corresponding extended PSD stress matrices, resulting in $w_{1}+(-w_{1})=0$ stress on the edge $\{i,j\}$ of the framework attachment. By Theorem \ref{thm:EdgeReducedMain}, we can remove this edge from the attachment. Since the entry in the stress matrix corresponding to $\{i,j\}$ is 0, after removing this edge the same stress matrix will satisfy equilibrium equation \ref{eqn:stresseq}, and hence it is the stress matrix for the edge-reduced attachment as well. In the proof of Theorem \ref{thm:stressReduced} below, we will follow this outline for a more general case where $K$ edges are removed from the attachment. But first we establish the following preliminary result guaranteeing a non-zero stress on the added edge. \begin{comment} To prove Theorem \ref{thm:stressReduced}, we first derive the stress matrix for the framework attachment in which only \emph{one} edge (say, $\{i,j\}$) from $G_{B}$ is removed. The edge removal is performed by the following construction. We start by having two frameworks and the edge $\{i,j\}$ still connected in $G_{B}$. From the PSD stress matrix $\Omega_{B}$, we obtain a stress $w_{1}$ on the edge $\{i,j\}$. We will then \emph{add} the edge $\{i,j\}$ to the same pair of vertices in $G_{A}$, thus obtaining edge-extended $\overline{G}_{A}$, and prove that there is a stress vector $\overline{w}$ such that the stress on the added edge is $-w_{1}$ (as in Figure~\ref{fig:figRedEdge}). We will also show how to obtain a PSD stress matrix bearing the same stress on the added edge. Attaching the two frameworks, by Theorem \ref{thm:OmegaSum}, will correspond to summing the corresponding extended PSD stress matrices, resulting in $w_{1}+(-w_{1})=0$ stress on the edge $\overline{w}$ of the framework attachment. By theorem \ref{thm:EdgeReducedMain}, we can now remove this edge from the attachment. Since the entry in the stress matrix corresponding to $\overline{w}$ is 0, after removing this edge the same stress matrix will satisfy equilibrium equation \ref{eqn:stresseq}, and hence it is the stress matrix for the edge-reduced graph as well. In the rest of the section we will prove the above claims and give the expression for the stress matrix. \end{comment} \begin{figure}
\caption{Edges between the shared vertices in $G_{A}$ (\ref{fig:figGac}), $G_{B}$ (\ref{fig:figGbc}) and $\overline{G}_{A}$ (\ref{fig:figGabc}). The stresses on the added edges in $\overline{G}_{A}$ counter-balance the stresses on the same edges in $G_{B}$.}
\label{fig:figGac}
\label{fig:figGbc}
\label{fig:figGabc}
\label{fig:figRedEdge}
\end{figure}
\begin{proposition}\label{prop:stressw1} For a framework $\overline{G}_{A}$, obtained from the infinitesimally rigid framework $G_{A}$ by adding the edge $\{i,j\}$, there exists a stress vector $\overline{w}$ such that the stress on the added edge is equal to $-w_{1}$, where $w_{1}$ is the stress on $\{i,j\}$ obtained from $\Omega_{B}$ as above.
\end{proposition} \begin{proof} For the unmodified $G_{A}$ we have the rigidity matrix $df$, defined by (\ref{eqn:dfdef}). By proposition \ref{prop:stressdf}, a stress vector is an element of a kernel of rigidity matrix transposed. Since $G_{A}$ is infinitesimally rigid, $\rank (df)=v_{A}d-\binom{d+1}{2}$. With $e_{A}$ being the number of edges (and the number of rows in $df$), $\nullity(df^{T})=e_{A}-\rank(df)$. Adding edge $\{i,j\}$ to $G_{A}$ corresponds to adding a row to $df$ (or, a column to $df^{T}$), resulting in a new rigidity matrix $\overline{df}$. Since adding an edge preserves infinitesimal rigidity, $\rank (\overline{df})=v_{A}d-\binom{d+1}{2}=\rank (df)$, and so $\nullity(\overline{df}^{T})=e_{A}+1-\rank(df)=\nullity(df^{T})+1$. In other words, adding an edge to an infinitesimally rigid framework increases the dimension of the space of stresses by 1. We will now show that in the new space of stresses, there exists a vector with non-zero stress component corresponding to the added edge $\{i,j\}$. We first note that columns of $df^{T}$ and components of a stress vector $w\in\ker(df^{T}))$ are indexed by existing edges of $G_{A}$. \begin{comment} We can express $df^{T}$ and $w$ in the form: \begin{displaymath} df^{T}= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c} P_{x} & P_{y} \end{BMAT} \right), w= \left( \begin{BMAT}(@,2pt,2pt){c}{c.c} w_{x}\\ w_{y} \end{BMAT}\\ \right), \end{displaymath} where we split matrix $df^{T}$ and vector $w$ into two blocks between which the new edge related column (in $df^{T}$) and component (in $w$) will be inserted. \end{comment} We append a new edge related column (in $df^{T}$) and a component (in $w$), and from the following equalities: \begin{comment} \begin{displaymath} df^{T}w= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c} P_{x} & P_{y} \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){c}{c.c} w_{x}\\ w_{y} \end{BMAT}\\ \right)= \left( \begin{BMAT}(@,2pt,2pt){c.c.c}{c} P_{x} & \rho & P_{y} \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){c}{c.c.c} w_{x}\\ 0\\ w_{y} \end{BMAT}\\ \right) =\overline{df}^{T}\cdot \left( \begin{BMAT}(@,2pt,2pt){c}{ccc} w_{x}\\ 0\\ w_{y} \end{BMAT}\\ \right), \end{displaymath} \end{comment} \begin{displaymath} df^{T}w= \left( \begin{BMAT}(@,2pt,2pt){c.c}{c} df^{T} & \rho \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){c}{c.c} w\\ 0 \end{BMAT}\\ \right) =\overline{df}^{T}\cdot \left( \begin{BMAT}(@,2pt,2pt){c}{cc} w\\ 0 \end{BMAT}\\ \right), \end{displaymath} where $\rho = (0,\ldots \mathbf{p}_{i}-\mathbf{p}_{j},\ldots ,\mathbf{p}_{j}-\mathbf{p}_{i},\ldots ,0)^{T}$ corresponds to the added edge $\{i,j\}$ (see Figure \ref{fig:figGabc}), we have: \begin{comment} \begin{displaymath} w= \left( \begin{BMAT}(@,2pt,2pt){c}{c.c} w_{x}\\ w_{y} \end{BMAT}\\ \right) \in\ker(df^{T}) \iff \left( \begin{BMAT}(@,2pt,2pt){c}{c.c.c} w_{x}\\ 0\\ w_{y} \end{BMAT}\\ \right) \in\ker(\overline{df}^{T}). \end{displaymath} \end{comment} \begin{displaymath} w\in\ker(df^{T}) \iff \left( \begin{BMAT}(@,2pt,2pt){c}{cc} w\\ 0 \end{BMAT}\\ \right) \in\ker(\overline{df}^{T}). \end{displaymath} Since $\dim(\ker(\overline{df}^{T}))>\dim(\ker(df^{T}))$, vectors $(w,0)^{T}$ do not span all of $\ker(\overline{df}^{T})$, and so there exists a stress vector $\overline{w}=(w,\psi)^{T}\in \ker(\overline{df}^{T})$ such that $\psi\neq 0$. 
By scaling, we can set $\psi=-w_{1}$, while the rest of the components (namely, $w$) of vector $\overline{w}$ can be found by solving the equation: \begin{comment} \begin{displaymath} \overline{df}^{T}\overline{w}= \left( \begin{BMAT}(@,2pt,2pt){ccc}{c} P_{x} & \rho & P_{y} \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){c}{ccc} w_{x}\\ -w_{1}\\ w_{y} \end{BMAT}\\ \right)= \left( \begin{BMAT}(@,2pt,2pt){cc}{c} P_{x} & P_{y} \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){c}{cc} w_{x}\\ w_{y} \end{BMAT}\\ \right)-\rho w_{1}=0, \end{displaymath} or, in a compact form, \begin{equation}\label{eqn:dfw} df^{T}w=\rho w_{1}. \end{equation} \end{comment} \begin{displaymath} \overline{df}^{T}\overline{w}= \left( \begin{BMAT}(@,2pt,2pt){cc}{c} df^{T} & \rho \end{BMAT} \right) \left( \begin{BMAT}(@,2pt,2pt){c}{cc} w\\ -w_{1} \end{BMAT}\\ \right)=0, \end{displaymath} or, in a compact form, \begin{equation}\label{eqn:dfw} df^{T}w=\rho w_{1}. \end{equation}
\end{proof}
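For readers who wish to carry out this step numerically, a stress vector as in Proposition \ref{prop:stressw1} can be obtained by solving the linear system (\ref{eqn:dfw}) with a least-squares routine. The Python sketch below is only an illustration under assumed inputs: \texttt{dfT} stands for the transposed rigidity matrix of $G_{A}$, \texttt{rho} for the column associated with the added edge, and \texttt{w1} for the stress on that edge taken from $\Omega_{B}$; these names are ours.
\begin{verbatim}
# Illustrative solve of df^T w = rho * w1 (eqn:dfw); names are ours.
import numpy as np

def added_edge_stress(dfT, rho, w1):
    """Return w with df^T w = rho * w1 (least-squares solve); the stress
    vector of the edge-extended framework is then (w, -w1)."""
    w, *_ = np.linalg.lstsq(dfT, rho * w1, rcond=None)
    return w

# Usage with hypothetical inputs: dfT of shape (d*vA, eA), rho of length d*vA.
#   w = added_edge_stress(dfT, rho, w1)
#   print(np.linalg.norm(dfT @ w - rho * w1))  # ~0 when the system is consistent
\end{verbatim}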
\begin{comment} \begin{proposition}\label{prop:omegaedge} There exists a PSD stress matrix of nullity $d+1$, corresponding to the edge-extended framework $\overline{G}_{A}$, in which the $(i,j)$ entry equals $-w_{1}$. \end{proposition}
\begin{proof}[Proof of Proposition \ref{prop:omegaedge}] Since $\Omega_{A}$ and $\overline{\Omega}_{A}$ of framework $\overline{G}_{A}$ both satisfy properties (3) and (4) of Definition \ref{dfn:defStressMat}, while $\Omega_{A}$ also satisfies Lemma \ref{lma:projker}, we have $\ker(\Omega_{A})\subseteq\ker(\overline{\Omega}_{A})$. The matrix $\Omega_{A}$ is positive semidefinite of nullity $d+1$, and therefore by Proposition \ref{lma:omegaepsilon}, with suitable choice of $\epsilon>0$, the matrix $\Omega_{A}+\epsilon\overline{\Omega}_{A}$ is positive semidefinite with nullity $d+1$. It is a stress matrix for $\overline{G}_{A}$, because a sum of stresses also satisfies equilibrium equation. The $(i,j)$ entry of $\Omega_{A}+\epsilon\overline{\Omega}_{A}$, corresponding to the added edge, equals $-\epsilon w_{1}$ by the construction. Since scaling a stress matrix by a positive factor preserves equilibrium equation, rank, and keeps the matrix PSD, we can make $(i,j)$ entry equal $-w_{1}$ by scaling the matrix by $\epsilon^{-1}$, thus getting $\epsilon^{-1}\Omega_{A}+\overline{\Omega}_{A}$ as the desired matrix. \end{proof} \end{comment}
\begin{proof}[Proof of Theorem \ref{thm:stressReduced}]
Following the same construction as in the proof outline above, we first add one of the $K$ edges to $G_{A}$. From (\ref{eqn:dfw}), we then compute a stress vector for the extended framework whose last entry, corresponding to the stress on the added edge, equals the stress on the same edge in $G_{B}$ with the opposite sign. Repeating this for all $K$ edges, we obtain $K$ stress vectors $\overline{w}_{k}$ by appropriately choosing $w_{1}$ from $\Omega_{B}$ and solving (\ref{eqn:dfw}) for each edge. From each $\overline{w}_{k}$, in turn, we form a stress matrix such that the $(i_{k},j_{k})$ entry, corresponding to the added edge, is equal to the last (non-zero) entry of $\overline{w}_{k}$. By summing all $K$ stress matrices, we get a stress matrix $\overline{\Omega}_{AK}$ for the framework $\overline{G}_{A}$ constructed by adding \emph{all} $K$ edges to $G_{A}$. The entries of $\overline{\Omega}_{AK}$ corresponding to the added edges carry the same stresses as the same edges in $G_{B}$, but with the opposite sign. However, this matrix in general may neither be PSD nor have nullity $d+1$. On the other hand, the given matrix $\Omega_{A}$, which is a stress matrix for both $G_{A}$ and $\overline{G}_{A}$, is a PSD matrix with nullity $d+1$, with the entries corresponding to the $K$ added edges equal to 0. Moreover, since $\Omega_{A}$ and $\overline{\Omega}_{AK}$ both satisfy properties (3) and (4) of Definition \ref{dfn:defStressMat}, while $\Omega_{A}$ also satisfies Lemma \ref{lma:projker}, we have $\ker(\Omega_{A})\subseteq\ker(\overline{\Omega}_{AK})$. Therefore, by Lemma \ref{lma:omegaepsilon}, with a suitable choice of $c>0$ (such as $c=\frac{2\|\overline{\Omega}_{AK} \|}{\lambda_{[v_{A}-d-1]}(\Omega_{A})}$), the matrix $c\Omega_{A}+\overline{\Omega}_{AK}$ is positive semidefinite with nullity $d+1$, and has the same stresses on the added edges as $\overline{\Omega}_{AK}$. \begin{comment}
Let's assume there are $K$ edges that are in $G_{B}$ but not in $G_{A}$ that are to be removed from the framework attachment. As described in the proof's outline given above, we add each of $K$ edges to $G_{A}$ separately, and from \ref{eqn:dfw} (with appropriately chosen $w_{1}$ from $\Omega_{B}$ for each of the $K$ edges), we can obtain stress vectors $\overline{w}_{k}$, where $k=1\ldots K$. From each $\overline{w}_{k}$ we form a stress matrix, and by taking a sum of all $K$ matrices we obtain a stress matrix $\overline{\Omega}_{AK}$ for the framework $\overline{G}_{A}$ extended by $K$ edges. By this construction, the elements of $\overline{\Omega}_{AK}$ corresponding to the added edges will have the same stresses, but with the opposite sign, as the stresses on the same edges in $G_{B}$. However, in general this matrix may neither be PSD nor have nullity $d+1$. From other side, the given matrix $\Omega_{A}$, which is a stress matrix for both $G_{A}$ and $\overline{G}_{A}$, is a PSD matrix with nullity $d+1$, with the entries corresponding to the $K$ added edges equal to 0. Moreover, since $\Omega_{A}$ and $\overline{\Omega}_{AK}$ of framework $\overline{G}_{A}$ both satisfy properties (3) and (4) of Definition \ref{dfn:defStressMat}, while $\Omega_{A}$ also satisfies Lemma \ref{lma:projker}, we have $\ker(\Omega_{A})\subseteq\ker(\overline{\Omega}_{AK})$. Therefore by Lemma \ref{lma:omegaepsilon}, with suitable choice of $c>0$ (such as $c=\frac{2\|\overline{\Omega}_{AK} \|}{\lambda_{[v_{A}-d-1]}(\Omega_{A})}$), the matrix $c\Omega_{A}+\overline{\Omega}_{AK}$ is positive semidefinite with nullity $d+1$, and has the same stresses on the added edges as $\Omega_{A}$. \end{comment}
Finally, we attach the frameworks $\overline{G}_{A}$ and $G_{B}$. The $K$ added edges in $\overline{G}_{A}$ were already in $G_{B}$, and so this attachment is identical to the regular framework attachment of $G_{A}$ and $G_{B}$. In terms of matrices, however, this attachment corresponds to adding the extended $\Omega_{B}$ to the extension of $c\Omega_{A}+\overline{\Omega}_{AK}$, resulting, as in the proof of Theorem \ref{thm:OmegaSum}, in a PSD stress matrix of nullity $d+1$: \begin{displaymath} \widetilde\Omega_{re}=c\widetilde\Omega_{A}+\widetilde{\Omega}_{AK}+\widetilde\Omega_{B}. \end{displaymath} Its entries corresponding to the $K$ edges are equal to $0$; therefore, $\widetilde\Omega_{re}$ is the desired stress matrix for the attachment with the $K$ edges removed.
\begin{comment}
If there are $K$ edges that are in $G_{B}$ but not in $G_{A}$ that are to be removed from the framework attachment, we repeat the above construction for each such edge, thus getting $K$ matrices $\epsilon_{k}^{-1}\Omega_{A}+\overline{\Omega}_{A_{k}}$, where the entries of each of the $\overline{\Omega}_{A_{k}}$ are obtained from stress vector $\overline{w}=(w_{x},-w_{k},w_{y})^{T}$ by solving $df^{T}w=\rho_{k}w_{k}$ for $w=(w_{x},w_{y})^{T}$ (as in \ref{eqn:dfw}). Here $w_{k}$ are the stress values (in $\Omega_{B}$) for the edges in $G_{B}$ that are to be removed in the attachment. Note that by construction, for each $k$, the entry of $\overline{\Omega}_{A_{k}}$ corresponding to the added $k^{th}$ edge is $-w_{k}$, while the entries corresponding to the other $K-1$ edges are zero. Therefore, by adding these $K$ PSD matrices we get a PSD stress matrix in which the $K$ entries corresponding to the added edges will be equal $-w_{k}$ for all $k=1\ldots K$. This stress matrix corresponds to the framework $\overline{G}_{A}$ having $K$ added edges from $G_{B}$. Since all $\epsilon_{k}^{-1}\Omega_{A}+\overline{\Omega}_{A_{k}}$ have the same kernel, by Lemma \ref{lma:kerSumMat} their sum-matrix has the same kernel and hence the matrix is of nullity $d+1$. We can also choose $\epsilon = \min(\epsilon_{k})=\frac{\lambda_{[v_{A}-d-1]}(\Omega_{A})}{2\cdot \max\|\overline{\Omega}_{A_{k}} \|}$, as in the proof of Proposition \ref{lma:omegaepsilon}, and so the resulting PSD stress matrix is $\sum_{k=1}^{K}(\epsilon^{-1}\Omega_{A}+\overline{\Omega}_{A_{k}})=K\epsilon^{-1}\Omega_{A}+\sum\overline{\Omega}_{A_{k}}$.
Finally, we attach the frameworks $\overline{G}_{A}$ and $G_{B}$. The $K$ added edges in $\overline{G}_{A}$ were already in $G_{B}$, and so this attachment is identical to the regular framework attachment of $G_{A}$ and $G_{B}$. In terms of matrices, however, this attachment corresponds to adding extended $\Omega_{B}$ to the extension of $K\epsilon^{-1}\Omega_{A}+\sum\overline{\Omega}_{A_{k}}$, resulting, as in the proof of Theorem \ref{thm:OmegaMain}, in a PSD stress matrix of nullity $d+1$:
\begin{displaymath} \widetilde\Omega_{re}=\widetilde\Omega_{B}+ K\epsilon^{-1}\widetilde\Omega_{A}+\sum_{k=1}^{K}\widetilde{\Omega}_{A_{k}}. \end{displaymath} Its entries, corresponding to the $K$ edges, equal $w_{k}+(-w_{k})=0$, therefore $\widetilde\Omega_{re}$ is the desired stress matrix for the attachment with the $K$ edges removed. \end{comment} \end{proof} \end{section}
\begin{section}{Generation of new universally rigid frameworks}\label{sec:Apps}
In \cite{aklat}, Alfakih \emph{et al.} proved that a $(d+1)$-lateration framework in general position in $\mathbb{R}^d$ is not only universally rigid but also has a PSD stress matrix of nullity $d+1$. In this section we will show that the edge-reduced framework attachment is a generalization of $(d+1)$-lateration, which implies that a corresponding PSD stress matrix can be constructed as in Theorem \ref{thm:stressReduced}. We will also show that, in general, the edge-reduced attachment can be used to generate universally rigid graphs. \begin{definition}\label{dfn:d1lateration} A graph $G=(V,E)$ is a $(d+1)$-lateration graph if: \begin{enumerate} \item vertices $\{1,\ldots ,d+1\}$ form a complete graph; \item each vertex $i>d+1$ connects to $d+1$ vertices from the set $\{1,\ldots ,i-1\}$. \end{enumerate} \end{definition} The definition above suggests that $(d+1)$-lateration graphs, after appropriately labeling the vertices, can be generated inductively. Starting with a complete graph $G$, at each induction step we expand $G$ by adding a new vertex and connecting it to some $d+1$ existing vertices. The following proposition shows that the same can be accomplished using edge-reduced graph attachment. \begin{proposition}\label{prop:attachforlat} Any $(d+1)$-lateration graph can be generated by a sequence of edge-reduced attachments. \end{proposition} \begin{proof} Assume that at the $k^{th}$ step we introduce a new vertex $v_{k}$ into the graph $G$, and want to connect it to $d+1$ vertices $v_{i_{1}},\ldots ,v_{i_{d+1}}$ that are already in $G$. This can be done by first attaching the graph $G$ to a complete graph consisting of the vertices $v_{i_{1}},\ldots ,v_{i_{d+1}},v_{k}$ such that $v_{i_{1}},\ldots ,v_{i_{d+1}}$ are shared, and then removing the edges between the shared vertices that were not in $G$. \end{proof}
To obtain a PSD stress matrix of nullity $d+1$ for a $(d+1)$-lateration framework in general position we can then use Theorem \ref{thm:stressReduced}, after checking the condition that any such framework is infinitesimally rigid. For this we use the following re-phrased result by Connelly (\cite{cnbook}, Proposition 2.21). \begin{proposition}[Connelly \cite{cnbook}]\label{prop:inf221} Suppose $G(\mathbf{p})$ is an infinitesimally rigid framework in general position in $\mathbb{R}^d$. Extend the framework by adding a new vertex $v_{k}$ and connecting it to $d$ existing vertices $v_{i_{1}},\ldots ,v_{i_{d}}$ such that $\mathbf{p}_{k}$ is not in the affine span of $\mathbf{p}_{i_{1}},\ldots ,\mathbf{p}_{i_{d}}$. Then the extended framework is infinitesimally rigid. \end{proposition} From here it immediately follows: \begin{corollary}\label{cor:d1latinf} Any $(d+1)$-lateration framework in general position is infinitesimally rigid. \end{corollary} \begin{comment} \begin{proof} Following the inductive construction of $(d+1)$-lateration framework, we start with a complete framework $G$ which is always infinitesimally rigid. When adding a vertex $v_{k}$ to $G$ at $k^{th}$ step, we first connect $v_{k}$ to $d$ existing vertices $v_{i_{1}},\ldots ,v_{i_{d}}$ of $G$. By Proposition \ref{prop:inf221}, the extended framework is infinitesimally rigid. To complete $k^{th}$ step of $(d+1)$-lateration, we then connect $v_{k}$ to one more vertex $v_{i_{d+1}}$, which preserves infinitesimal rigidity of the extended framework. Continuing by the induction, the conclusion of the proposition follows. \end{proof} \end{comment} With this result we can apply Theorem \ref{thm:stressReduced} and compute a PSD stress matrix of nullity $d+1$ for a $(d+1)$-lateration framework in general position. Since the framework generation involves attachment of a complete graph (containing a new vertex) and then removal of some $K$ edges between the shared vertices, each step would require computing a PSD stress matrix of nullity $d+1$ for the complete framework, and then solving $K$ systems of linear equations as described in the theorem.
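As an illustration of the inductive construction behind Definition \ref{dfn:d1lateration} and Proposition \ref{prop:attachforlat}, the edge set of a $(d+1)$-lateration graph can be generated with a few lines of code. The Python sketch below is ours (the function name and the random choice of anchor vertices are illustrative assumptions): it starts from a complete graph on $d+1$ vertices and attaches every further vertex to $d+1$ earlier ones, mirroring the edge-reduced attachment of a complete graph on $d+2$ vertices.
\begin{verbatim}
# Illustrative generation of a (d+1)-lateration graph's edge set.
from itertools import combinations
import random

def lateration_graph(num_vertices, d, seed=0):
    rng = random.Random(seed)
    edges = set(combinations(range(d + 1), 2))   # complete graph on d+1 vertices
    for k in range(d + 1, num_vertices):
        anchors = rng.sample(range(k), d + 1)    # d+1 previously added vertices
        edges.update((a, k) for a in anchors)
    return sorted(edges)

# Example: a 3-lateration graph in R^2 on 6 vertices.
#   print(lateration_graph(6, d=2))
\end{verbatim}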
The edge-reduced attachment can also be applied to generate new universally rigid graphs from smaller and simpler graphs whose frameworks in general position are always universally rigid. An example of such a universally rigid graph, which cannot be generated by $(d+1)$-lateration, is shown in Figure \ref{fig:example3lat}. It turns out that general-position frameworks of the graphs generated by the edge-reduced attachment have PSD stress matrices of nullity $d+1$, as the following theorem shows. \begin{figure}
\caption{Edge-reduced graph attachment of two universally rigid graphs. Each of the two graphs can be generated by $3$-lateration, which is not the case for their edge-reduced attachment.}
\label{fig:example3lat}
\end{figure} \begin{theorem}\label{thm:URgraph} Assume we have two frameworks in general position corresponding to two universally rigid graphs. If both frameworks have PSD stress matrices of nullity $d+1$, their edge-reduced attachment also has one. \end{theorem} \begin{proof} To apply Theorem \ref{thm:stressReduced}, we only need to prove that each of the two frameworks is infinitesimally rigid. Assume that a framework $G(\mathbf{p})$ in general position of such a universally rigid graph $G$ is not infinitesimally rigid. Then there exists a non-trivial infinitesimal flex $\mathbf{q}$. By Corollary \ref{cor:genposopen}, there is $t>0$ such that the two frameworks $G(\mathbf{p}+t\mathbf{q})$ and $G(\mathbf{p}-t\mathbf{q})$ are in general position. From $(\mathbf{p}_i-\mathbf{p}_j,\mathbf{q}_i-\mathbf{q}_j)=0$ we have \begin{displaymath}
\|(\mathbf{p}_{i}+t\mathbf{q}_{i})-(\mathbf{p}_{j}+t\mathbf{q}_{j})\|^{2}=
\|\mathbf{p}_i-\mathbf{p}_j\|^{2}+\|t\mathbf{q}_i-t\mathbf{q}_j\|^{2}=
\|(\mathbf{p}_{i}-t\mathbf{q}_{i})-(\mathbf{p}_{j}-t\mathbf{q}_{j})\|^{2}, \end{displaymath} which holds for any edge $\{i,j\}$. Therefore, since the flex $t\mathbf{q}$ is non-trivial, $G(\mathbf{p}+t\mathbf{q})$ and $G(\mathbf{p}-t\mathbf{q})$
are equivalent but not congruent, which contradicts universal rigidity of the graph $G$. \end{proof} \end{section}
\end{document}
\begin{document}
\thispagestyle{empty} \baselineskip=28pt \vskip 5mm \begin{center} {\Huge{\bf Non-Gaussian Autoregressive Processes with Tukey $g$-and-$h$ Transformations}} \end{center}
\baselineskip=12pt \vskip 10mm
\begin{center}\large Yuan Yan$^1$ and Marc G.~Genton\footnote[1]{ \baselineskip=10pt Statistics Program, King Abdullah University of Science and Technology, Thuwal 23955-6900, Saudi Arabia.\\ E-mail: [email protected], [email protected]\\ This publication is based upon work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No: OSR-2015-CRG4-2640.} \end{center}
\baselineskip=17pt \vskip 10mm \centerline{\today} \vskip 15mm
\begin{center} {\large{\bf Abstract}} \end{center} When performing a time series analysis of continuous data, for example from climate or environmental problems, the assumption that the process is Gaussian is often violated. Therefore, we introduce two non-Gaussian autoregressive time series models that are able to fit skewed and heavy-tailed time series data. Our two models are based on the Tukey $g$-and-$h$ transformation. We discuss parameter estimation, order selection, and forecasting procedures for our models and examine their performances in a simulation study. We demonstrate the usefulness of our models by applying them to two sets of wind speed data. \baselineskip=14pt
\par
\noindent {\bf Some key words:} Autoregressive; Heavy tails; Non-Gaussian; Skewness; Time series; Tukey $g$-and-$h$ transformation. \par
\noindent {\bf Short title}: Tukey $g$-and-$h$ Autoregressive Processes
\pagebreak
\pagenumbering{arabic} \baselineskip=26pt
\section{Introduction}\label{sec:intro} To study climate change, it is essential to understand the temporal properties of the data of interest. Climate and environmental data, such as precipitation, temperature, thickness of glacial varves, wind speed, concentration of air pollutants, are continuous in nature. In a time series analysis of continuous random variables, the autoregressive moving-average (ARMA) model, especially the Gaussian one, is popularly adopted because of its simplicity and interpretability. However, data from climate or environmental science are often asymmetric with heavy tails. Therefore, a non-Gaussian time series analysis is needed. Since most climate or environmental data we come across are continuous, in this paper we do not consider time series analysis of categorical or binary data, which are intrinsically non-Gaussian.
In linear models, three approaches are mainly used to deal with non-Gaussian data: transformation of the dependent variable, regression with non-Gaussian error, and generalized linear models (GLM). Similarly, we can also classify many of the existing non-Gaussian time series methods and models into three categories: 1) transformations to `Gaussianize' the data, 2) ARMA models with non-Gaussian noise, and 3) extension of the GLM to the time series context.
The transformation approach is widely used among practitioners to, first, Gaussianize data and then examine the latent Gaussian process. Equivalently, it assumes that the observed process $Y_t$ is obtained after a transformation $\eta$ of the latent Gaussian process $Z_t$: \begin{equation} Y_t=\eta(Z_t).\label{model1} \end{equation} For example, precipitation data can be approximated by the gamma distribution, then a square or cube root transformation is usually applied to normalize the data; similarly wind speed data can be assumed to follow a Weibull distribution, and taking a logarithm alleviates the departure from normality \citep{Has89}. The logarithm, square and cube root transformations all belong to the Box-Cox power transformation \citep{BoxCox64}, which is a family of transformations with one parameter ($\lambda$) and can be applied only to positive values. A parametric form of transformations allows users to find the optimal transformation $\eta$ from among the members of the Box-Cox transformation family by estimating $\lambda$ from the data, instead of attempting many different transformations. A large literature exists on the Box-Cox power transformation, as well as its modifications and its application in linear models and time series analysis. \citet{Nel79} extensively explored the use of Box-Cox transformations in 21 time series from economics. Non-parametric transformations are an appealing alternative to assuming a parametric form of transformations due to their flexibility. \citet{Nychka03} explored non-parametric transformations on both the dependent variable and the predictor variables in regression. \citet{Block90} used the empirical distribution and probability integral transform (PIT) to form ARMA models with arbitrary continuous marginal distributions. However, there is a trade-off between flexibility and parsimony.
Instead of using transformations to Gaussianize the data, an ARMA model with non-Gaussian noise may be specified, either through the marginal distribution of the process or through the distribution of the error term. Given the marginal distribution, the distribution of the error term can be found by linking their moment generating functions, namely, the Laplace transforms of the probability density functions. Many models were previously proposed following this direction, for example, ARMA models with exponential marginals \citep{EARMA} and AR models with a gamma marginal distribution \citep{GARMA}. On the other hand, specifying the distribution of the non-Gaussian error term is easier for data simulation and maximum likelihood inference since the conditional likelihood can then be written down in a closed form. ARMA models driven by non-Gaussian innovations and their properties were discussed in \citet{Da80} and \citet{Li88}.
For the third class of existing non-Gaussian time series methods and models, \citet{Ben03} introduced the generalized ARMA model, extending the GLM to a time series setting. Earlier exploration of GLM in time series can be found in \citet{Ben03} and references therein. \citet{Cor09} proposed a model combining the GLM idea with transformations, and considered its use in time series.
Apart from the three categories summarized above, there exist many other specialized models for non-Gaussian, non-stationary time series. The most famous ones are the autoregressive conditional heteroskedasticity (ARCH) \citep{ARCH} and the generalized autoregressive conditional heteroskedasticity (GARCH) \citep{GARCH} models. These models have been used canonically for analyzing financial time series that exhibit changes in volatility. Other models include the zeroth-order self-exciting threshold autoregressive (SETAR) model \citep{tong90}, the multipredictor autoregressive time series (MATS) models \citep{martin92}, the Gaussian mixture transition distribution (GMTD) models \citep{Le96}, the mixture autoregressive models \citep{Wo00}, and their extension to the Student $t$-mixture autoregressive model \citep{Wo09}.
Considering the trade-offs between model flexibility and parsimony, in this paper we use an alternative parametric family of transformations, the Tukey $g$-and-$h$ (TGH) transformation, which is more flexible and interpretable than the Box-Cox family and at the same time keeps the model fairly simple compared to the non-parametric approaches. We build two autoregressive models for non-Gaussian time series via the TGH transformation in both the data transformation and non-Gaussian error approaches. Our first model uses the TGH transformation for $\eta$ in (\ref{model1}). Our second model assumes the non-Gaussian error term in the AR process to be a TGH transformation of Gaussian white noise.
The remainder of this paper is organized as follows. In Section~\ref{sec:model}, we review the TGH transformation, discuss properties of the univariate TGH distribution and its extensions, and introduce our two AR models based on the TGH transformation. In Section~\ref{sec:est_pred}, we describe the parameter estimation and forecasting algorithm for our models. In Section~\ref{sec:sim}, we demonstrate the estimation and prediction performances of our models through a simulation study. In Section~\ref{sec:app}, we illustrate the usefulness of our models by applying them to two wind speed datasets. We summarize our findings and prospects for future research in Section~\ref{sec:sum}.
\section{Two Tukey $g$-and-$h$ Autoregressive Models}\label{sec:model} \subsection{TGH Transformations and Properties}\label{sec:prop} The TGH transformation \citep{tukey77} is defined as a strictly monotonic increasing function of $z\in\mathbb{R}$ for $h\geq 0$ and $g \in \Bbb{R}$: \begin{equation} \tau_{g,h}(z)=\begin{cases}g^{-1}\{\exp(gz)-1\}\exp(hz^2/2),&g\neq 0,\\z\exp(hz^2/2),&g=0.\end{cases} \label{taugh} \end{equation}
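For concreteness, the transformation in (\ref{taugh}) can be coded in a few lines. The Python sketch below is purely illustrative; the function name \texttt{tau\_gh} is ours.
\begin{verbatim}
# Minimal implementation of the Tukey g-and-h transformation tau_{g,h}(z),
# vectorized over z; the g = 0 branch is treated separately as in the
# definition.  Purely illustrative.
import numpy as np

def tau_gh(z, g, h):
    z = np.asarray(z, dtype=float)
    if g == 0.0:
        return z * np.exp(h * z**2 / 2.0)
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

# Example: draws from the TGH(g=0.3, h=0.1) distribution.
#   t = tau_gh(np.random.default_rng(1).standard_normal(10_000), 0.3, 0.1)
\end{verbatim}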
When applying the TGH transformation to a univariate standard normal random variable $Z \sim \mathcal{N}(0,1)$, the transformed random variable $T=\tau_{g,h}(Z)$ is said to follow the TGH distribution. The support of $T$ is the real line when $h\neq 0 \text{ or } h=g=0$; for $h=0$ and $g \neq 0$, it has a lower (upper) bound of $-1/g$ when $g>0$ ($g<0$). The univariate TGH distribution is a flexible family of distributional models for non-Gaussian data. It has two interpretable parameters, $g$ and $h$, which respectively control the skewness and tail heaviness of $T$. The tails become heavier as $h$ increases, and the $q$-th moment of $T$ exists only for $h<1/q$. The level of skewness increases as $|g|$ increases (becoming right-skewed when $g>0$). \citet{Martinez84} included the case of $h<0$, for which the TGH transformation is no longer monotonic and the resulting TGH distribution has lighter tails than the Gaussian one. However, their method is unconventional, so we only consider the case of $h\geq0$. Also, from a practical point of view, real data are more often heavy-tailed than light-tailed. Special cases of the TGH distribution include the shifted log-normal random variable for $h=0$ and $g>0$, and a Pareto-like distribution for $g=0$ and $h>0$, which was discussed in detail by \citet{Georg15}. The $q$-th moments of the TGH distribution were derived by \citet{Martinez84}, for $h<1/q$: \begin{equation} \mathbb{E}(T^q)=\begin{cases}\frac{1}{g^q\sqrt{1-qh}}\sum\limits_{i=0}^{q}(-1)^i{q \choose i}\exp\left\{\frac{(q-i)^2g^2}{2(1-qh)}\right\},&g\neq 0,\\ \frac{q!}{2^{q/2} (q/2)!}(1-qh)^{-\frac{q+1}{2}}, &g=0, q \text{ even},\\ 0, &g=0, q \text{ odd}. \end{cases} \label{momen} \end{equation} We illustrate various aspects of the Tukey $g$-and-$h$ distributions by a visuanimation \citep{Visu} in the supplementary material.
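The moment formula (\ref{momen}) is easy to check by Monte Carlo. The Python sketch below is illustrative only (the function name \texttt{tgh\_moment} is ours, and the commented usage line reuses the \texttt{tau\_gh} helper from the previous sketch).
\begin{verbatim}
# Illustrative evaluation of the q-th TGH moment (valid for h < 1/q).
import numpy as np
from math import comb, factorial

def tgh_moment(q, g, h):
    """q-th moment of T = tau_{g,h}(Z), Z ~ N(0,1), following (momen)."""
    if g == 0.0:
        if q % 2 == 1:
            return 0.0
        return factorial(q) / (2**(q // 2) * factorial(q // 2)) \
               * (1 - q * h)**(-(q + 1) / 2)
    s = sum((-1)**i * comb(q, i)
            * np.exp((q - i)**2 * g**2 / (2 * (1 - q * h)))
            for i in range(q + 1))
    return s / (g**q * np.sqrt(1 - q * h))

# Monte Carlo check (uses tau_gh from the previous sketch):
#   z = np.random.default_rng(2).standard_normal(2_000_000)
#   print(tgh_moment(2, 0.3, 0.1), np.mean(tau_gh(z, 0.3, 0.1)**2))
\end{verbatim}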
Extensions of the univariate TGH distribution to the multivariate case can be found in \citet{FG06} and \citet{HR12}. \citet{XG16} used the TGH transformation to build a new max-stable process for the modeling of spatial extremes. Recently, \citet{XG17} further extended the TGH distribution to the spatial case and defined TGH random fields. Those models were found to be useful in practice and have been applied to air pollution, wind speed, rainfall and economic data. In this paper, we apply the TGH transformation in the time series context and build two models that take advantage of the AR structure.
\subsection{TGH Transformation of a Latent AR Process}\label{sec:model1}
Our first model transforms a latent Gaussian process similarly to \citet{XG17}. Let $Z_t \sim \text{AR}(p),\ t=1,2,\ldots$, be a stationary Gaussian AR process of order $p$ with zero mean and unit variance, and let $T_t=\tau_{g,h}(Z_t)$ by applying the standard TGH transformation to $Z_t$. Our model 1, the Tukey $g$-and-$h$ transformed autoregressive (TGH-AR($p$)-t) process $Y_t$, has the same form as (\ref{model1}), with $\eta$ being a generalized version of the TGH transformation: \begin{equation} Y_t = \bm{X}_t^{\rm T}\bm{\beta}+\xi+\omega\tau_{g,h}(Z_t), \label{model11}\end{equation} where $\bm{X}_t$ and $\bm{\beta}$ are the covariates and the corresponding coefficients respectively, and $\xi$ and $\omega$ are the location and scale parameters, respectively. In a time series analysis, the covariates are usually functions of $t$ to model the trend or the seasonality, which may also be incorporated through a seasonal integrated AR model. In our first model, unlike the commonly adopted Box-Cox approach, the deterministic part that consists of the location parameter, covariates and their coefficients, is outside the transformation. In this way, the trend (also seasonality or periodicity) is attributed directly to the process $Y_t$ rather than the underlying Gaussian process $Z_t$. We control the skewness and tail behavior of the stochastic part of the process $Y_t$ by applying the TGH transformation to $Z_t$ with different values of $g$ and $h$. The moments of $Y_t$ can be computed as the moments for $T_t$ in (\ref{momen}) with modification for the additional location and scale parameters. The autocovariance function $C_T(\tau)$ of $T_t$ can be found in terms of the autocorrelation function (ACF) $\rho_Z(\tau)$ of the underlying Gaussian process $Z_t$, which was derived in \citet{XG17}: \begin{align*} C_T(\tau)&=\frac{\exp\left[\frac{1+\rho_Z(\tau)}{1-h\{1+\rho_Z(\tau)\}}g^2\right]-2\exp\left[\frac{1-h\{1-\rho_Z^2(\tau)\}}{(1-h)^2-h^2\rho_Z^2(\tau)}\frac{g^2}{2}\right]+1}{g^2\sqrt{(1-h)^2-\rho_Z^2(\tau)h^2}}-\left\{\mathbb{E}({T_t})\right\}^2. \end{align*} The autocorrelation of $Y_t$ is always weaker than the autocorrelation of $Z_t$ and in the supplementary material we illustrate this fact via Visuanimation~S1.
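To make the formula for $C_T(\tau)$ concrete, the sketch below (illustrative helper names of our own; $g\neq 0$ and $0\leq h<1/2$ assumed so that the variance exists) evaluates the autocovariance of $T_t$ directly from the ACF of the latent Gaussian process.
\begin{verbatim}
# Illustrative evaluation of C_T(tau) from rho_Z(tau), for g != 0, 0 <= h < 1/2.
import numpy as np

def mean_T(g, h):
    """E(T) for T = tau_{g,h}(Z), Z ~ N(0,1), g != 0, h < 1."""
    return (np.exp(g**2 / (2 * (1 - h))) - 1) / (g * np.sqrt(1 - h))

def autocov_T(rho, g, h):
    rho = np.asarray(rho, dtype=float)
    num = (np.exp((1 + rho) / (1 - h * (1 + rho)) * g**2)
           - 2 * np.exp((1 - h * (1 - rho**2))
                        / ((1 - h)**2 - h**2 * rho**2) * g**2 / 2)
           + 1)
    den = g**2 * np.sqrt((1 - h)**2 - rho**2 * h**2)
    return num / den - mean_T(g, h)**2

# For a latent AR(1) process, rho_Z(tau) = phi**|tau|:
#   taus = np.arange(0, 11)
#   print(autocov_T(0.8**taus, g=0.3, h=0.1))
\end{verbatim}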
When $p=1$, the TGH-AR(1)-t process is of the form (\ref{model11}), with $Z_t=\phi Z_{t-1}+\epsilon_t$, where $\epsilon_t$ is a Gaussian white-noise process with zero mean and constant variance $\sigma^2_\epsilon=1-\phi^2$. There is a constraint that $\sigma^2_\epsilon$ is a function of $\phi$ so that $Z_t$ has unit variance. This dependency may seem strange at first, but it is effectively equivalent to standardizing $Z_t$ to a process with unit variance by multiplication by a scaling constant. For example, the usual AR(1) process with a Gaussian error term $\epsilon_t$ of variance $\sigma_{\omega}^2$ will result in a process $Z_t^\prime$ of variance $\frac{\sigma_{\omega}^2}{1-\phi^2}$, and $Z_t^\prime$ can be standardized to have a unit variance by multiplying by the rescaling factor $\frac{\sqrt{1-\phi^2}}{\sigma_{\omega}}$. This is equivalent to an AR(1) process with an error term of variance $1-\phi^2$.
\begin{figure}
\caption{Functional boxplots of 1000 realizations of sample size $n=100$ from model 1, i.e., TGH-AR(1)-t, without covariates, with $\xi$=0, $\omega$=1, an AR coefficient of $\phi$=0.8 and six pairs of values for $g$ and $h$, labeled with values of the mean ($\mu$, green line), standard deviation ($\sigma$), skewness ($\gamma_1$) and excess kurtosis ($\gamma_2$) of the process.}
\label{fb1}
\end{figure}
To simulate samples from the TGH-AR($p$)-t model, we first simulate data from the underlying Gaussian AR($p$) process $Z_t$ for $t=1,2,\ldots,n$. Then we transform the process by applying the generalized version of the TGH transformation to $Z_t$ at each time point. Figure~\ref{fb1} shows the functional boxplots \citep{SG11} of 1000 realizations from the TGH-AR(1)-t model of sample size $n=100$ without covariates, with $\xi=0$, $\omega=1, \phi=0.8$, $\sigma^2_\epsilon=0.36$ and six pairs of values for $g$ and $h$. The six pairs of values for $g$ and $h$ include the Gaussian AR case when $g=h=0$. The functional boxplots are labeled with values of the marginal mean ($\mu$, green line), standard deviation ($\sigma$), skewness ($\gamma_1$) and excess kurtosis ($\gamma_2$) for the transformed process $Y_t$. From the functional boxplots of the sample paths of $Y_t$, we see clearly that, when $h=0$, as $g$ increases from 0.3 to 0.5, the process becomes more right-skewed; when $g=0$, as $h$ increases from 0.1 to 0.2, extreme values occur more frequently in the time series; and with $g=0.3$, $h=0.1$, we see the combined effect of $g$ and $h$ on the process.
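A minimal simulation sketch of this two-step procedure (in Python, without covariates, assuming the standard form of $\tau_{g,h}$ with $g\neq 0$) is given below; the stationary start makes a burn-in period unnecessary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def tau_gh(z, g, h):
    # standard TGH transformation (assumed form), vectorized, g != 0
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def simulate_tgh_ar1_t(n, phi, g, h, xi=0.0, omega=1.0):
    # latent Gaussian AR(1) with innovation variance 1 - phi^2, so Var(Z_t) = 1;
    # the stationary start Z_0 ~ N(0, 1) keeps the whole path stationary
    eps = rng.normal(scale=np.sqrt(1.0 - phi**2), size=n)
    z = np.empty(n)
    z[0] = rng.normal()
    for t in range(1, n):
        z[t] = phi * z[t - 1] + eps[t]
    return xi + omega * tau_gh(z, g, h)

y = simulate_tgh_ar1_t(n=100, phi=0.8, g=0.3, h=0.1)
print(y.mean(), y.std())
\end{verbatim}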
\subsection{AR Process with TGH Error}\label{sec:model2} Instead of applying the transformation to the time series itself, in our second model we transform the error term $\epsilon_t\overset{i.i.d.}{\sim} \mathcal{N} (0, 1)$ in an AR process to a non-Gaussian error term $T_t=\tau_{g,h}(\epsilon_t)$ that follows a standard TGH distribution. Thus, model 2 (the TGH-AR($p$)-e model) is defined as: \begin{equation}
Y_t=\bm{X}_t^{\rm T}\bm{\beta}+\xi+\phi_1 \tilde{Y}_{t-1}+\cdots+\phi_p \tilde{Y}_{t-p}+\omega\tau_{g,h}(\epsilon_t), \label{model2} \end{equation} where $\tilde{Y}_t=Y_t-\bm{X}_t^{\rm T}\bm{\beta}-\xi$ is a process of median 0 (however, not a zero-mean process when $g \neq 0$); $\bm{X}_t$ and $\bm{\beta}$ are the covariates and the corresponding coefficients, respectively; $\xi$ and $\omega$ are the location and scale parameters, respectively; and $\bm{\phi}=(\phi_1,\ldots, \phi_p)^{\rm T}$ are the AR coefficients under constraints such that the AR process is weakly stationary. We express (\ref{model2}) in a moving average form: $\tilde{Y}_t=\psi(B)(\omega T_t)$, where $\psi(B)=\sum_{j=0}^{\infty}\psi_j B^j$ is a power series in the backshift operator $B$. Although the marginal distribution of $Y_t$ in the TGH-AR($p$)-e model cannot be written down in a closed form, the moments of $\tilde{Y}_t$ can be derived in terms of the moments of the standardized error term $T_t$:
$\mathbb{E}(\tilde{Y}_t) = \omega\sum_{j=0}^{\infty}\psi_j \mathbb{E}(T_t) = \frac{\omega}{1-\sum_{j=1}^{p}\phi_j}\mathbb{E}(T_t), \mathbb{V}(\tilde{Y}_t) = \omega^2\sum_{j=0}^{\infty}\psi_j^2\mathbb{V}(T_t)$. Furthermore, $\gamma_1(\tilde{Y}_t) = \frac{\sum_{j=0}^{\infty}\psi_j^3}{\left(\sum_{j=0}^{\infty}\psi_j^2\right)^{3/2}}\gamma_1(T_t), \gamma_2(\tilde{Y}_t) = \frac{\sum_{j=0}^{\infty}\psi_j^4}{\left(\sum_{j=0}^{\infty}\psi_j^2\right)^{2}}\gamma_2(T_t)$, as long as the summation converges. These relationships conform with the algebraic relations between the skewness and kurtosis of the process and error term in ARMA processes derived by \citet{Da80}. Figure~\ref{fb2} shows the functional boxplots of 1000 realizations from the TGH-AR(1)-e model with a sample size of 100, and the same parameters as in Figure~\ref{fb1}. Again, we see that skewness and tail behavior of the process can be controlled by $g$ and $h$.
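The scaling factors in these expressions depend on $\bm{\phi}$ only through the $\psi_j$ weights, which for a causal AR($p$) process can be computed recursively from $\psi_0=1$ and $\psi_j=\sum_{k=1}^{\min(j,p)}\phi_k\psi_{j-k}$. The short Python sketch below (with an arbitrary truncation length) illustrates the computation for $\phi=0.8$.
\begin{verbatim}
import numpy as np

def psi_weights(phi, n_terms=500):
    # MA(infinity) weights psi_j of a causal AR(p) process, truncated at n_terms
    phi = np.asarray(phi, dtype=float)
    p = len(phi)
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for j in range(1, n_terms):
        k = min(j, p)
        psi[j] = np.dot(phi[:k], psi[j - 1::-1][:k])
    return psi

psi = psi_weights([0.8])
s2, s3, s4 = (psi**2).sum(), (psi**3).sum(), (psi**4).sum()
skew_factor = s3 / s2**1.5      # gamma_1(Y~) = skew_factor * gamma_1(T)
kurt_factor = s4 / s2**2        # gamma_2(Y~) = kurt_factor * gamma_2(T)
print(skew_factor, kurt_factor)  # both < 1: the AR filter dampens skewness and kurtosis
\end{verbatim}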
\begin{figure}
\caption{Functional boxplots of 1000 realizations of sample size $n=100$ from model 2, i.e., TGH-AR(1)-e, without covariates, with $\xi$=0, $\omega$=1, an AR coefficient of $\phi$=0.8, and six pairs of values for $g$ and $h$, labeled with values of the mean ($\mu$, green line), standard deviation ($\sigma$), skewness ($\gamma_1$) and excess kurtosis ($\gamma_2$) of the process.}
\label{fb2}
\end{figure}
\subsection{Comparison of the Two Models} Comparing Figures~\ref{fb1} and \ref{fb2}, we notice that, given the same values for $g$ and $h$, the skewness and kurtosis of the TGH-AR(1)-t process are larger than those of the TGH-AR(1)-e process, while the variance is smaller. This agrees with the findings in \citet{Da80}, where the ARMA process has skewness and kurtosis smaller than those of the error process.
We interpret the difference between the two models from the viewpoint of the data-generating mechanism. In model 1, TGH-AR-t, we assume that there is an underlying latent Gaussian process $Z_t$ and that the observed process $Y_t$ is a realization of a non-linear transformation of $Z_t$. \citet{Georg11} justified this idea using financial data. Overreactions to changes in the stock market skew the underlying symmetric process of financial news and result in skewed log-returns of the stock market; see also \citet{LucaG}. For model 2, the TGH-AR-e model, we do not assume such a latent process, but $Y_t$ itself is an AR model with non-Gaussian noise that follows the TGH distribution. The difference between the two models can be seen most obviously by plotting the relationship between $Y_t-\bm{X}_t^{\rm T}\bm{\beta}$ and $Y_{t-1}-\bm{X}_{t-1}^{\rm T}\bm{\beta}$ (Figure~\ref{rela12}). The relationship is linear for model 2 and non-linear for model 1. The exploratory plots shown in Figure~\ref{rela12} for a sample size of $n = 500$ and different values of $g$, $h$ and $\phi$ should be consulted as a reference for deciding whether model 1 or 2 is more suitable for the time series data to be analyzed. In practice, one should take into account both a reasonable data-generating mechanism for the process and the empirical behavior of $Y_t-\bm{X}_t^{\rm T}\bm{\beta}$ versus $Y_{t-1}-\bm{X}_{t-1}^{\rm T}\bm{\beta}$ to decide which model to use for a specific dataset.
\begin{figure}
\caption{Illustration of the non-linear relationship between $Y_{t}$ and $Y_{t-1}$ for the TGH-AR(1)-t model, overlapped with theoretical contours of the joint density (left two columns); illustration of the linear relationship between $Y_{t}$ and $Y_{t-1}$ for the TGH-AR(1)-e model with skewed and/or heavy-tailed noise (right two columns). Each panel is plotted with one realization of sample size $n=500$ from our models without covariates, with $\xi$=0, $\omega$=1 and different values for $g, h$ and $\phi$.}
\label{rela12}
\end{figure}
We want to emphasize that our time series models are not simply variations of the spatial case \citep{XG17}. We utilize the unique properties of the AR structure for discrete time processes, which the random field approach does not possess. One main difference between the AR time series and the random fields is that the correlation structure of the random process is induced by the AR coefficients, whereas in the random fields, it is modeled directly. Moreover, the property of conditional independence of an AR process allows us to write down the likelihood explicitly instead of a matrix form as for the random fields. The AR structure also makes it possible for us to build the TGH-AR-e model, which is not possible in the random fields setting.
\section{Inference}\label{sec:est_pred} \subsection{Parameter Estimation}\label{sec:est} A drawback of the TGH transformation is that its inverse transformation does not have an explicit form (except when either $g$ or $h$ is equal to 0) and, thus, the likelihood inference is difficult. Earlier estimation procedures rely on matching sample quantiles or sample moments with their population counterparts to avoid this difficulty. For example, \citet{Hoa85} developed an estimation procedure by matching a sequence of sample quantiles and theoretical quantiles, which we refer to as the letter-value-based method. Recently, \citet{XG15} proposed an efficient parameter estimation algorithm for the independent TGH distribution by using an approximated likelihood. Their algorithm greatly improved the parameter estimation performance compared to the moment or quantile-based methods without compromising the computational speed.
The parameters involved in our two TGH-AR models are $\xi,\omega,g,h,\bm{\beta},\bm{\phi}$. Since the maximum likelihood estimator (MLE) is well-known to possess many good asymptotic properties, we prefer the likelihood-based estimator over the moment or quantile-based estimators. The vector of the parameters related to the TGH transformation is denoted by $\bm{\theta}_1=(\xi,\omega,g,h,\bm{\beta}^{\rm T})^{\rm T}$, the autoregressive parameters by $\bm{\phi}$, and set $\bm{\theta}=(\bm{\theta}_1^{\rm T},\bm{\phi}^{\rm T})^{\rm T}$.
For model 1, the log-likelihood function given $n$ observations $(y_1,\bm{x}_1),\ldots,(y_n,\bm{x}_n)$ can be written as: \begin{eqnarray} \label{mle1} \begin{split}
l_n(\boldsymbol{\theta})
=&f_{\bm{\phi}}(z_{1,\boldsymbol{\theta}_1})+ f_{\bm{\phi}}(z_{2,\boldsymbol{\theta}_1}|z_{1,\boldsymbol{\theta}_1})+\cdots+ f_{\bm{\phi}}(z_{n,\boldsymbol{\theta}_1}|z_{n-1,\boldsymbol{\theta}_1},\ldots,z_{n-p,\boldsymbol{\theta}_1})-n\log \omega\\ &-\frac{h}{2}\sum_{t=1}^n z_{t,{\boldsymbol{\theta}_1}}^2-\sum_{t=1}^n\log\left[\exp(g z_{t,\boldsymbol{\theta}_1})+\frac{h}{g}\{\exp(g z_{t,\boldsymbol{\theta}_1})-1\} z_{t,\boldsymbol{\theta}_1}\right], \end{split}
\end{eqnarray} where $z_{t,\boldsymbol{\theta}_1}=\tau_{g,h}^{-1}\left\{\frac{y_t-\xi-\bm{x}_t^{\rm T}\bm{\beta}}{\omega}\right\}$ and $f_{\bm{\phi}}(\cdot|\cdot)$ is the conditional log-likelihood for the underlying Gaussian AR$(p)$ process $Z_t$, whose conditional distributions are also Gaussian.
Since an explicit form of the likelihood of $Y_t$ is intractable for model 2, we consider the conditional likelihood given the first $k\geq p$ observations. The conditional log-likelihood of model 2 can be written as: \begin{eqnarray} \label{mle2} \begin{split}
l_n(\boldsymbol{\theta}|\,y_1,\ldots,y_k)
=&-\frac{h+1}{2}\sum_{t=k+1}^n \epsilon^2_{t,\boldsymbol{\theta}}-\sum_{t=k+1}^n\log\left[\exp(g \epsilon_{t,\boldsymbol{\theta}})+\frac{h}{g}\{\exp(g \epsilon_{t,\boldsymbol{\theta}})-1\} \epsilon_{t,\boldsymbol{\theta}}\right]\\
&-(n-k)\log \omega-\frac{n-k}{2}\log(2\pi), \end{split} \end{eqnarray} where $\epsilon_{t,\boldsymbol{\theta}}=\tau_{g,h}^{-1}\left\{\frac{\tilde{y}_t-\phi_1 \tilde{y}_{t-1}-\cdots-\phi_p \tilde{y}_{t-p}}{\omega}\right\}$ and $\tilde{y}_t=y_t-\xi-\bm{x}_t^{\rm T}\bm{\beta}$.
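As a reference point before introducing the approximation discussed next, the conditional log-likelihood (\ref{mle2}) can be coded directly by inverting $\tau_{g,h}$ numerically for each residual; this is the exact-but-slow route that motivates the approximated likelihood. The Python sketch below assumes no covariates, $g\neq 0$, $h\geq 0$, and a fixed root-finding bracket that is adequate only for moderate standardized residuals; names and defaults are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def tau_gh(z, g, h):
    # standard TGH transformation (assumed form), g != 0
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def tau_gh_inv(y, g, h, lo=-15.0, hi=15.0):
    # exact numerical inverse; tau_{g,h} is monotone increasing for h >= 0,
    # and the fixed bracket [lo, hi] only covers moderate residuals
    return brentq(lambda z: tau_gh(z, g, h) - y, lo, hi)

def loglik_model2(theta, y, p, k=None):
    # conditional log-likelihood of the TGH-AR(p)-e model, no covariates;
    # theta = (xi, omega, g, h, phi_1, ..., phi_p), conditioning on the first k >= p values
    xi, omega, g, h = theta[:4]
    phi = np.asarray(theta[4:4 + p], dtype=float)
    k = p if k is None else k
    y_tilde = np.asarray(y, dtype=float) - xi
    n = len(y_tilde)
    ll = -(n - k) * (np.log(omega) + 0.5 * np.log(2.0 * np.pi))
    for t in range(k, n):
        resid = (y_tilde[t] - phi @ y_tilde[t - p:t][::-1]) / omega
        eps = tau_gh_inv(resid, g, h)
        ll -= (h + 1.0) / 2.0 * eps**2
        ll -= np.log(np.exp(g * eps) + h / g * (np.exp(g * eps) - 1.0) * eps)
    return ll
\end{verbatim}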
Although, in principle, we can find the MLE for $\bm{\theta}$ in our two models by maximizing (\ref{mle1}) or (\ref{mle2}) with respect to $\bm{\theta}$, it is computationally expensive to find the inverse $\tau_{g,h}^{-1}(\cdot)$ numerically for each data point and iteration of the optimization since $\tau_{g,h}^{-1}(\cdot)$ does not have a closed form. To bypass this computational challenge, we borrow the idea of approximated likelihood from \citet{XG15} for the univariate TGH distribution. \citet{XG17} extended this idea to the random field case and proposed an algorithm for iteratively estimating both the parameters related to the TGH transformation and the parameters in the model of the covariance function. In essence, they linearized the inverse transformation function piecewise and maximized the approximated likelihood with this piecewise-linear inverse in place of the exact one.
Equipped with this linearization procedure, we now numerically maximize the approximated log-likelihood by using the approximated inverse transformation function $\tilde{\tau}_{g,h}^{-1}$ instead of $\tau_{g,h}^{-1}$ in either (\ref{mle1}) or (\ref{mle2}). Thus, we obtain the maximum approximated likelihood estimator (MALE) of the parameters in our models much faster. We also found that iteratively optimizing $\boldsymbol{\theta}_1$ and $\bm{\phi}$ yields better estimates than optimizing $\bm{\theta}$ directly; however, it is about 20 times slower on a Lenovo computer with 16 2.50GHz Intel Xeon(R) CPUs. Hence, we optimize all of the parameters directly in our algorithm.
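The spirit of this piecewise linearization can be illustrated with a few lines of Python: tabulate the monotone forward transformation on a grid once and invert it by linear interpolation. This is only a sketch of the idea, not the exact scheme of \citet{XG15} and \citet{XG17}; the knot range and number below are arbitrary choices.
\begin{verbatim}
import numpy as np

def make_approx_tau_inv(g, h, z_lo=-8.0, z_hi=8.0, n_knots=2001):
    # piecewise-linear approximation of the inverse TGH transformation:
    # tabulate the forward map once, then invert via np.interp
    z_grid = np.linspace(z_lo, z_hi, n_knots)
    t_grid = (np.exp(g * z_grid) - 1.0) / g * np.exp(h * z_grid**2 / 2.0)
    # np.interp requires increasing x-values; tau_{g,h} is increasing for h >= 0
    return lambda y: np.interp(y, t_grid, z_grid)

tau_inv_approx = make_approx_tau_inv(g=0.3, h=0.1)
print(tau_inv_approx([-0.5, 0.0, 2.0]))   # approximate z values
\end{verbatim}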
\subsection{Order Selection}\label{sec:ord}
The order selection for an AR model is usually performed via the Akaike information criterion (AIC) or the Bayesian information criterion (BIC). In a simulation study (all simulations are done on the same computer as mentioned above), we found that the algorithm based on BIC performs much better than the one based on AIC for our models. Therefore, we use the order selection procedure based on BIC with our estimation algorithm. For model 1, BIC=$-2 l_n(\boldsymbol{\theta})+(p+4)\log n$; for model 2, BIC=$-2 l_n(\boldsymbol{\theta}|\,y_1,\ldots,y_k)+(p+4)\log(n-k)$ and again we use the approximated likelihood instead of the exact one.
We evaluate the empirical correct order selection rate by BIC with the approximated likelihood for our two models through a simulation study. We generate data from both models with $\xi=0$, $\omega=1$, $g=0.3, h=0.1$ without covariates, with sample sizes $n=100$ and $n=500$ and a true AR order of $p=0,1,2$. For $p=1$ and $p=2$, we use multiple values for the AR coefficient(s), which exhibit different behaviors of the ACF and different intensities for the spectral density. We compute the correct order selection rate from 1000 simulations. Table~\ref{bic} summarizes the correct order selection rate by BIC for our two models with different sample sizes $n$, orders $p$ and values of $\phi$ (or $\bm{\phi}$).
\begin{table}[ht]
\centering
\caption{Correct order selection rate for both the TGH-AR($p$)-t and the TGH-AR($p$)-e models without covariates and with $\xi=0$, $\omega=1$, $g=0.3, h=0.1$ and a true AR order of 0, 1 and 2 for $n=100$ and $n=500$.}
\label{bic}
\begin{tabular}{c|c|c|c|c|c}
\hline
&&\multicolumn{2}{c|}{TGH-AR($p$)-t}&\multicolumn{2}{|c}{TGH-AR($p$)-e}\\
\cline{3-6}
&& $n=100$ & $n=500$ & $n=100$ & $n=500$ \\
\hline\hline
$p$=0 && 0.959 & 0.988 & 0.938 & 0.990 \\ \hline
\multirow{4}{*}{$p$=1} & $\phi=-0.5$ & 0.951 & 0.987 & 0.934 & 0.987\\
&$\phi=0.5$& 0.950 & 0.988 & 0.932 & 0.981 \\
&$\phi=-0.8$& 0.952 & 0.982 & 0.940 & 0.985 \\
&$\phi=0.8$& 0.961 & 0.987 & 0.942 & 0.992 \\ \hline
\multirow{4}{*}{$p$=2} & $\bm{\phi}=(-0.5,-0.3)^{\rm T}$ & 0.825 & 0.991 & 0.858 & 0.987\\
&$\bm{\phi}=(0.2,0.4)^{\rm T}$& 0.532 & 0.986 & 0.592 & 0.986 \\
&$\bm{\phi}=(0.8,-0.25)^{\rm T}$& 0.706 & 0.987 & 0.777 & 0.983 \\ \hline
&$\bm{\phi}=(1.5,-0.75)^{\rm T}$& 0.948 & 0.982 & 0.937 & 0.983 \\ \hline
\hline
\end{tabular} \end{table}
The correct order selection rate improved and reached over 98\% by increasing the sample size from 100 to 500 in all cases. The order selection performance is similar for data generated with all six pairs of values for $g$ and $h$ used in the functional boxplots. We conclude that the overall correct order selection rate is satisfactory for our order selection procedure based on BIC with the approximated likelihood.
\subsection{Forecasting}\label{sec:pred} In this section, we derive one-step-ahead point and probabilistic forecasts for our two models.
For model 1, $Y_t=\bm{X}_t^{\rm T}\bm{\beta}+\xi+\omega T_t$, given $p$ observations $y_{t},\ldots,y_{t-p+1}$, we derive the conditional distribution:
$$T_{t+1} |y_{t},\ldots,y_{t-p+1} \sim \tau_{g,h}(\tilde{\mu}+\tilde{\sigma}Z),\, Z\sim \mathcal{N}(0,1),$$
where $\tilde{\mu}$ and $\tilde{\sigma}^2$ are the conditional mean and variance, respectively, for $Z_{t+1}|Z_{t},\ldots,Z_{t-p+1}$ of the underlying Gaussian AR($p$) process $Z_t$. Here, $\tilde{\mu}$ and $\tilde{\sigma}^2$ are determined by $\bm{\phi}$ and can be found efficiently by the Durbin-Levinson algorithm. With this conditional distribution, the point predictors for minimizing the absolute prediction error (conditional median) and the squared prediction error (conditional mean) are: $ \hat{Y}_{t+1}=\xi+\bm{X}_{t+1}^{\rm T}\bm{\beta}+\omega\tau_{g,h}(\tilde{\mu})$ and $ \hat{Y}_{t+1}=\xi+\bm{X}_{t+1}^{\rm T}\bm{\beta}+\frac{\omega}{g\sqrt{1-h\tilde{\sigma}^2}}\exp\left\{\frac{h\tilde{\mu}^2}{2(1-h\tilde{\sigma}^2)}\right\}\left[\exp\left\{\frac{g^2\tilde{\sigma}^2+2g\tilde{\mu}}{2(1-h\tilde{\sigma}^2)}\right\}-1\right]$, respectively.
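These two point predictors are simple to evaluate once $\tilde{\mu}$ and $\tilde{\sigma}^2$ are available. The Python sketch below transcribes the two formulas (for $g\neq 0$ and $h\tilde{\sigma}^2<1$, assuming the standard form of $\tau_{g,h}$); for an AR(1) latent process, for example, $\tilde{\mu}=\phi z_t$ and $\tilde{\sigma}^2=1-\phi^2$.
\begin{verbatim}
import numpy as np

def forecast_model1_point(mu, sig2, xi, omega, g, h, xb_next=0.0):
    # one-step-ahead point forecasts for the TGH-AR(p)-t model (g != 0, h*sig2 < 1);
    # mu, sig2 are the conditional mean/variance of Z_{t+1} given the last p latent
    # values (obtainable from phi, e.g. via Durbin-Levinson); xb_next is X_{t+1}'beta
    tau = lambda z: (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)
    median_pred = xi + xb_next + omega * tau(mu)          # minimizes absolute error
    c = 1.0 - h * sig2
    mean_pred = (xi + xb_next                              # minimizes squared error
                 + omega / (g * np.sqrt(c)) * np.exp(h * mu**2 / (2.0 * c))
                 * (np.exp((g**2 * sig2 + 2.0 * g * mu) / (2.0 * c)) - 1.0))
    return median_pred, mean_pred

# AR(1) latent process with phi = 0.8: mu = 0.8 * z_t and sig2 = 1 - 0.8**2
print(forecast_model1_point(mu=0.8 * 1.2, sig2=0.36, xi=0.0, omega=1.0, g=0.3, h=0.1))
\end{verbatim}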
For model 2, the conditional distribution is:
$$Y_{t+1} |y_{t},\ldots,y_{t-p+1} \sim \bm{X}_{t+1}^{\rm T}\bm{\beta}+\xi+\phi_1 \tilde{y}_{t}+\cdots+\phi_p \tilde{y}_{t-p+1}+\omega\tau_{g,h}(Z),\, Z\sim \mathcal{N}(0,1).$$ The point predictors for minimizing the absolute loss (conditional median) and squared loss (conditional mean) are $\hat{Y}_{t+1}=\xi+\bm{X}_{t+1}^{\rm T}\bm{\beta}+\phi_1 \tilde{y}_{t}+\cdots+\phi_p \tilde{y}_{t-p+1}$ and $ \hat{Y}_{t+1}=\xi+\bm{X}_{t+1}^{\rm T}\bm{\beta}+\phi_1 \tilde{y}_{t}+\cdots+\phi_p \tilde{y}_{t-p+1}+\frac{\omega}{g\sqrt{1-h}}\left[\exp\left\{\frac{g^2}{2(1-h)}\right\}-1\right]$, respectively. We note that the conditional median has the same form as that of an AR model with Gaussian error. In practice, we need to estimate the parameters first to make forecasts for future values of the time series based on those estimated parameters. Thus, the difference between the point predictions based on the conditional medians of our model and the Gaussian AR model comes from the difference in their respective estimations of $\bm{\beta}, \xi$ and $\bm{\phi}$.
Prediction confidence intervals (CI) can be found easily from the conditional distributions of our two models. There are different ways to form the two-sided CI for a given distribution. One popular choice for the $(1-\alpha)$100\% CI is to exclude $\alpha/2$ from both tails and then use the central $1-\alpha$ probability interval of the distribution, which we refer to as the symmetric weight CI. Another choice is to find the shortest $1-\alpha$ probability interval, which we refer to as the minimum-length CI. The minimum-length CI coincides with the symmetric weight CI for symmetric distributions. For model 1, the lower and upper bounds of the 95\% prediction CI can be found by transforming the lower and upper bounds of a 95\% probability interval of the underlying normal distribution of mean $\tilde{\mu}$ and variance $\tilde{\sigma}^2$. The symmetric weight prediction interval is $[\bm{X}_{t+1}^{\rm T}\bm{\beta}+\xi+\omega \tau_{g,h}(\tilde{\mu}-z_{1-\alpha/2}\tilde{\sigma}),\bm{X}_{t+1}^{\rm T}\bm{\beta}+\xi+\omega \tau_{g,h}(\tilde{\mu}+z_{1-\alpha/2}\tilde{\sigma})]$, and the minimum-length prediction interval is $[\bm{X}_{t+1}^{\rm T}\bm{\beta}+\xi+\omega \tau_{g,h}(\tilde{\mu}-z_{1-\gamma^{\text{opt}}}\tilde{\sigma}),\bm{X}_{t+1}^{\rm T}\bm{\beta}+\xi+\omega \tau_{g,h}(\tilde{\mu}+z_{1-\alpha+\gamma^{\text{opt}}}\tilde{\sigma})]$, where $\gamma^{\text{opt}}$ needs to be optimized numerically for different values of $g$ and $h$ for given $\tilde{\mu}$ and $\tilde{\sigma}$.
For model 2, the symmetric weight prediction interval is $[\hat{Y}_{t+1}+\omega \tau_{g,h}(-z_{1-\alpha/2}),\hat{Y}_{t+1}+\omega \tau_{g,h}(z_{1-\alpha/2})]$ and the minimum-length prediction interval is $[\hat{Y}_{t+1}+\omega \tau_{g,h}(-z_{1-\gamma^{\text{opt}}}),\hat{Y}_{t+1}+\omega \tau_{g,h}(z_{1-\alpha+\gamma^{\text{opt}}})]$, where $\hat{Y}_{t+1}$ is the conditional median of $Y_{t+1} |y_{t},\ldots,y_{t-p+1}$.
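Both intervals for model 2 are easy to compute numerically; the minimum-length interval only requires a one-dimensional search over $\gamma\in(0,\alpha)$. The Python sketch below (for $g\neq 0$, with illustrative function names, and a bounded scalar search that assumes the interval width is well behaved over $(0,\alpha)$) implements both intervals given the conditional median $\hat{Y}_{t+1}$.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def tau_gh(z, g, h):
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def model2_pred_intervals(y_hat_median, omega, g, h, alpha=0.05):
    # symmetric-weight and minimum-length (1 - alpha) prediction intervals for
    # the TGH-AR(p)-e model, given the conditional median y_hat_median (g != 0)
    z = norm.ppf(1.0 - alpha / 2.0)
    symmetric = (y_hat_median + omega * tau_gh(-z, g, h),
                 y_hat_median + omega * tau_gh(z, g, h))
    # minimum length: choose gamma in (0, alpha) to minimize the interval width
    width = lambda gam: (tau_gh(norm.ppf(1.0 - alpha + gam), g, h)
                         - tau_gh(-norm.ppf(1.0 - gam), g, h))
    gam = minimize_scalar(width, bounds=(1e-6, alpha - 1e-6), method="bounded").x
    min_length = (y_hat_median + omega * tau_gh(-norm.ppf(1.0 - gam), g, h),
                  y_hat_median + omega * tau_gh(norm.ppf(1.0 - alpha + gam), g, h))
    return symmetric, min_length

print(model2_pred_intervals(y_hat_median=0.0, omega=1.0, g=0.3, h=0.1))
\end{verbatim}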
The prediction intervals given above are derived with respect to the true parameters of the two models. However, in practice, those parameters need to be estimated, and the prediction interval based on the estimated parameters should be inflated to take the uncertainty in parameter estimation into account. There is a vast literature on the mean squared prediction error (MSPE) with estimated parameters in the context of linear models, time series models and spatial models \citep{MSPESpZC92}. In particular, for a Gaussian AR($p$) model, the MSPE of a one-step-ahead forecast with estimated autoregressive coefficients is inflated by a factor of $1 + p/n$, where $n$ is the sample size \citep{MSPEAR76}. Many authors \citep[e.g.][]{MSPELMToy82} have concluded that the bias of the estimated MSPE is asymptotically negligible, and we see later in the simulation study that the empirical coverage of the prediction CIs that ignore the uncertainty in parameter estimation is close to the nominal level.
\section{Simulation Study}\label{sec:sim} We perform a Monte Carlo simulation study to assess the performance of our models from different aspects. We consider six pairs of values for $g$ and $h$ (the same values as in the functional boxplots) and sample sizes $n=100, 250, 500$. For each of our TGH-AR(1)-t and TGH-AR(1)-e models, each sample size $n$ and each pair of values for $g$ and $h$, we first generate one realization from the model with $\xi=-3$, $\omega=1.5, \bm{\beta}=(3,-2)^{\rm T}, \bm{X}_t=\left\{\cos(2\pi t/24),\sin(2\pi t/24)\right\}^{\rm T}$ and $\phi=0.8$. Next, we carry out parameter estimation by three methods: the MALE for both of our models without order selection as well as the MLE for a Gaussian AR model without order selection. Finally, we again use the three methods, based on the TGH-AR-t, TGH-AR-e and Gaussian AR models, to produce point and probabilistic forecasts with the estimated parameters. For each scenario, we run the above procedure 1000 times.
In Section~\ref{sec:sim1}, we compare the MALE for the TGH-AR(1)-t model with two other estimators for the independent TGH distribution when data are generated from the TGH-AR(1)-t model. In Section~\ref{sec:sim2} we evaluate the finite sample behavior of the MALE for both models and in Section~\ref{sec:sim3} we compare the forecasting performances of our models with the performance of a Gaussian AR model.
\subsection{Comparison of Estimation Methods}\label{sec:sim1} Using the TGH-AR-t model, we first check the improvement of estimation performance by using the MALE to simultaneously estimate $\bm{\theta}_1$ and $\bm{\phi}$ rather than sequentially estimating $\bm{\theta}_1$ from $Y_t$, treating the observations as independent realizations, and then estimating $\bm{\phi}$ from the transformed process $\tilde{Z}_t=\tau^{-1}_{\hat{g},\hat{h}}(\frac{Y_t -\hat{\xi}-\bm{X}_{t}^{\rm T}\hat{\bm{\beta}}}{\hat{\omega}})$. The latter approach is often adopted by practitioners to first find the optimal transformation and then estimate the temporal structure from the `Gaussianized' process. For this comparison, we use two additional methods for estimation in the simulation procedure as described before: the letter-value-based method \citep{Hoa85} and the MALE for independent TGH distributions \citep{XG15}, which estimate $\bm{\theta}_1$ first and then estimate $\phi$ from the transformed process $\tilde{Z}_t$.
\begin{figure}
\caption{Comparison of three estimators: letter-value-based method (LVII), MALE based on independent TGH distribution (Ind) and MALE for the TGH-AR-t model (TGH-AR-t), with boxplots of each estimator for each parameter from 1000 replications with data generated from the TGH-AR($1$)-t with true parameters indicated by green line.}
\label{compare}
\end{figure}
Figure~\ref{compare} presents boxplots of the parameters estimated by the three different estimators for 1000 replications with $g=0.3, h=0.1$ for three sample sizes. We see that the bias and variance of the estimators improve dramatically as the sample size grows from 100 to 500 for all three methods. Our joint estimation procedure outperforms the other two sequential methods, especially for $h$ and $\phi$, which indicates that estimating the optimal transformation while ignoring the dependence structure of the underlying process deteriorates the estimation. Hence, the comparison justifies the need for such a tailored estimation procedure for the TGH distribution in the time series context.
\subsection{Estimation Performance of MALE}\label{sec:sim2}
\begin{figure}
\caption{Boxplots of the MALE in 1000 simulations for the two TGH-AR(1) models with $n=100$ for six pairs of different values of $g$ and $h$ for each parameter with true value indicated by green line. For $g=h=0$, boxplots of the MLE based on the Gaussian AR model (indicated by `Gau') for $\xi,\omega,\beta_1,\beta_2,\phi$ are also included.}
\label{est}
\end{figure}
Next, we check the finite-sample behavior of the MALE for both models. Figure~\ref{est} shows the estimation results from our two TGH-AR(1) models with $\phi=0.8$, $n=100$ for six pairs of different values of $g$ and $h$. For $g=h=0$, which corresponds to a Gaussian AR process, we also include boxplots of the MLE based on the Gaussian AR model for $\xi,\omega,\beta_1,\beta_2$ and $\phi$. Visuanimations of boxplots in the same manner as the sample size grows ($n=100, 250, 500$) for both models can be found in the supplementary material (Visuanimations~S2 and S3). Results on the estimation performance for the two TGH-AR(2) models with $\phi_1=0.8, \phi_2=-0.25$ are also included in the supplementary material (Visuanimations~S4 and S5).
From Figure~\ref{est}, we can see that with sample size $n=100$, the MALE for $\omega, h$ and $\phi$ is a bit biased. Nevertheless, as shown by Visuanimations~S2 and S3 in the supplementary material, the estimation improves greatly as the sample size increases to $n=250$. In addition, when $g=h=0$ and the true distribution is Gaussian, the estimators based on the approximated TGH-AR likelihood, which involve two extra parameters $g$ and $h$, do not deteriorate much relative to the Gaussian MLE. The visuanimations also give us an empirical guideline about how the sample size affects the estimation performance. To obtain an overall satisfactory estimation performance with the MALE, we suggest a sample size of no less than $n=250$.
\subsection{Forecasting Performance}\label{sec:sim3} Finally, we evaluate the forecasting performance for our two models with parameters estimated by the corresponding MALE without order selection.
Table~\ref{pred_t} summarizes the performance of the point forecasts from each of the three methods based on the TGH-AR-t, TGH-AR-e and Gaussian AR models, for data generated from each of our two TGH-AR(1) models with $\xi=-3$, $\omega=1.5, \bm{\beta}=(3,-2)^{\rm T}, \bm{X}_t=\left\{\cos(2\pi t/24),\sin(2\pi t/24)\right\}^{\rm T}$ and $\phi=0.8$ for $g=0.3, h=0.1$ and $n=500$. The mean absolute error (MAE) is calculated using the conditional median as the point predictor, and the root mean squared error (RMSE) using the conditional mean. The table also shows the empirical coverage and average length of the 95\% minimum-length CI from each method.
\begin{table}[ht]
\caption{Summary of point forecast performance of the three methods based on the TGH-AR-t, TGH-AR-e and Gaussian AR models, and for data generated from each of our two TGH-AR(1) models with $\xi=-3$, $\omega=1.5, \bm{\beta}=(3,-2)^{\rm T}, \bm{X}_t=\left\{\cos(2\pi t/24),\sin(2\pi t/24)\right\}^{\rm T}$, $\phi=0.8$ for $g=0.3, h=0.1$, $n=500$.}
\centering
\begin{tabular}{c||c|c|c||c|c|c}
\hline
Data generated from &\multicolumn{3}{c|}{Model 1: TGH-AR(1)-t}&\multicolumn{3}{|c}{Model 2: TGH-AR(1)-e}\\
\cline{1-7}
Modeled by & TGH-AR-t & TGH-AR-e & Gau-AR & TGH-AR-t & TGH-AR-e & Gau-AR\\
\hline\hline
MAE & \textbf{0.860} & 0.869 & 0.870 & 1.374 & \textbf{1.365} & 1.379\\ \hline
RMSE & \textbf{1.124} & 1.142 & 1.139 & 1.798 & \textbf{1.795} & 1.801 \\ \hline
95\% CI coverage & \textbf{95.6\%} & 95.2\% & 94.6\% & 95.0\% & \textbf{95.6\%} & 95.6\% \\ \hline
95\% CI width & \textbf{4.30} & 4.60 & 4.68 & 7.40 & \textbf{7.24} &7.55
\end{tabular}
\label{pred_t} \end{table}
\begin{figure}
\caption{Comparison of probabilistic forecast performances via PIT and reliability plots when data are generated from TGH-AR(1)-t model with $\xi=-3$, $\omega=1.5, \bm{\beta}=(3,-2)^{\rm T}, \bm{X}_t=\left\{\cos(2\pi t/24),\sin(2\pi t/24)\right\}^{\rm T}$, $\phi=0.8$ for $g=0.3, h=0.1$, $n=500$ and fitted for the three methods: TGH-AR(1)-t, TGH-AR(1)-e and Gau-AR(1). Mean CRPS value is also labeled.}
\caption{Comparison of probabilistic forecast performances via PIT and reliability plots when data are generated from TGH-AR(1)-e model with $\xi=-3$, $\omega=1.5, \bm{\beta}=(3,-2)^{\rm T}, \bm{X}_t=\left\{\cos(2\pi t/24),\sin(2\pi t/24)\right\}^{\rm T}$, $\phi=0.8$ for $g=0.3, h=0.1$, $n=500$ and fitted for the three methods: TGH-AR(1)-t, TGH-AR(1)-e and Gau-AR(1). Mean CRPS value is also labeled.}
\label{pred1}
\label{pred2}
\end{figure}
The probabilistic forecasts of the three methods can be evaluated through a histogram of the probability integral transform (PIT) values, which are obtained by applying the conditional cumulative distribution function to the observed value of the process at the forecast time point. If the conditional distribution assumed by a model conforms with the true conditional distribution, the histogram should be flat (uniform). Another metric for evaluating probabilistic forecasts is the continuous ranked probability score (CRPS); more on probabilistic forecasts and the CRPS for a Gaussian distribution can be found in \citet{GnKa} and \citet{GM06}. \citet{XG17} derived the CRPS for a TGH distribution. The lower the CRPS, the better the probabilistic forecast. A plot of the empirical versus nominal quantile, i.e., a reliability plot, can be used to check the quality of the quantile prediction. Figures~\ref{pred1} and \ref{pred2} show histograms of the PIT values (labeled with the mean CRPS value) and reliability plots for all three forecasting methods applied to the data generated from each of our TGH-AR(1) models. Figures S1 and S2 in the supplementary material show probabilistic forecast results for the two TGH-AR(2) models with $\phi_1=0.8, \phi_2=-0.25$.
Even though the MAEs and RMSEs in Table~\ref{pred_t} do not differ much between the models, the superiority of a model is evident from a comparison of the PIT histograms. Not surprisingly, the best forecast is achieved by using the same model that the data are generated from: the forecast has the smallest MAE and RMSE, a flatter PIT histogram, and a smaller mean CRPS; the reliability plot is close to a $45^{\circ}$ straight line; the empirical coverage of the 95\% CI is close to the nominal level while the width is shorter than that of the Gaussian AR predictor. Note that, although we used the prediction CIs derived in Section~\ref{sec:pred} by treating the estimated parameters as the true parameters, Table~\ref{pred_t} shows that the empirical coverage of the 95\% CIs is close to the nominal level. We also notice that the TGH-AR-t and TGH-AR-e models produce quite different forecasts, as shown by their distinctive histograms of the PIT values, which means that the two models are not interchangeable.
\section{Application to Wind Speed Data}\label{sec:app} The analysis of wind data is a crucial step in simulations for climate science. Accurately forecasting wind speeds and quantifying the forecast uncertainty are also important for exploiting wind as clean energy. In this section, we illustrate the usefulness of our two non-Gaussian time series models under both wind speed simulation and wind speed forecasting scenarios. In Section~\ref{sec:app1} we fit daily wind speed data with the TGH-AR-t model to obtain parameter estimates for the purpose of fast wind speed simulation. In Section~\ref{sec:app2}, we apply the TGH-AR-e model in order to make better wind speed forecasts. For the two datasets in the two subsections, the choice of model is based on plots of $Y_t-\bm{X}_t^{\rm T}\bm{\beta}$ versus $Y_{t-1}-\bm{X}_{t-1}^{\rm T}\bm{\beta}$, with reference to Figure~\ref{rela12}, to see whether the relationship is linear.
\subsection{Wind Speed Simulation}\label{sec:app1}
Climate models can produce multiple outputs of spatio-temporal wind speed data over the globe. Statistical models have been developed to reproduce the output from climate models for the sake of fast simulation instead of using the computationally expensive physical models. In order to fit a space-time statistical model for wind field over the entire world, a 4-step multi-resolution method has been proposed by \citet{CG16}, in which the first step is to model the time series of wind speed at each location individually; see \citet{CG18} for the general principles for analyzing big spatio-temporal data from climate models. For the 4-step multi-resolution method, our TGH-AR time series models can be used as a modification of the first step when time series data show non-Gaussian features.
\begin{figure}
\caption{Maps of the estimated parameters from fitting the TGH-AR-t model with order selection to daily wind speed residuals.}
\label{par1}
\end{figure}
To illustrate the usefulness of our TGH-AR-t model, we consider a publicly available Large Ensemble Project (LENS) dataset that consists of 30 ensembles of daily wind speed over the globe with a spatial resolution of around $1^\circ$ longitude and $1^\circ$ latitude from the year 1920 to 2100 \citep{Kay15}. We select one ensemble from the complete dataset and use the historical 86 years (1920-2005, $n=31390$) at 22 $\times$ 22 gridded locations over Saudi Arabia (bounded roughly by $13-34^{\circ}$N and $32-59^{\circ}$E). For each day in a year for each location, we estimate the seasonality by taking the average of the wind speed across the 86 years at that location. For each location, we remove the seasonal effect from each time point and analyze the residual wind speed data, which show a clear AR pattern in the ACF and the partial autocorrelation function (PACF). We use the TGH-AR-t model because the residual wind speed time series clearly show features of skewness and heavy tails, and a plot of $Y_t$ versus $Y_{t-1}$ for the residual wind speed shows a nonlinear relationship. We obtain parameter estimates with order selection by fitting the TGH-AR-t model at each location. With the TGH-AR-t model and the estimated parameters at each location, wind speed data can be generated rapidly without using the physical model. The estimated parameters also give insights into the pattern of the distributional properties for the wind speed residuals.
Figure~\ref{par1} shows maps of the estimated values of the four parameters related to the Tukey $g$-and-$h$ transformation as well as two autoregressive coefficients. Since the selected autoregressive order is 2 for the majority of the locations, maps of the estimated higher-order autoregressive coefficients are omitted. We notice from these plots that $\hat{\xi}$, $\hat{\omega}$, $\hat{\phi}_1$ and $\hat{\phi}_2$ (white areas indicate that the selected order is only 1) are distinctively different between land and ocean. We also observe that the $\hat{g}$ and $\hat{h}$ estimates show interesting patterns that are closely related to the elevation and geographical features of a location. For the full 4-step multi-resolution analysis of daily wind speed over the globe, where the TGH-AR-t model is used in the first step, see \citet{JY18}.
\subsection{Hourly Wind Speed Data at a Meteorological Station}\label{sec:app2}
We consider hourly wind speed data observed at a meteorological tower in Sunnyside, Oregon, from 1 December 2013 to 31 December 2014 \citep{KH15}. First, we use the hourly wind speed observed in June 2014 ($n=720$) from this dataset to demonstrate the suitability of using the TGH-AR-e model. We notice that there exists a diurnal pattern in the hourly wind speed, so we include harmonics with periods of 12 and 24 hours as the covariates in the model. We find the MALE with order selection by fitting the wind speed time series from that month with the TGH-AR-e model. The estimated parameters are $\hat{\xi}=7.84,\, \hat{\omega}=1.65,\, \hat{g}=0.11,\, \hat{h}=0.06,\, \hat{\phi}_1=1.01,\, \hat{\phi}_2= -0.15$, and $\hat{\bm{\beta}}=(-0.48, 2.31, -0.14, 0.19)^{\rm T}$ with $\bm{X}_t=\left\{\cos(2\pi t/12),\sin(2\pi t/12),\cos(2\pi t/24),\sin(2\pi t/24)\right\}^{\rm T}$.
\begin{figure}
\caption{Various diagnostic plots of applying the TGH-AR-e model to the hourly wind speed data of the month June 2014.}
\label{wind1}
\end{figure}
Figure~\ref{wind1} shows the original hourly wind speed time series overlapped with a diurnal pattern (green line) estimated using the TGH-AR-e model. The histogram and normal Q-Q plot of the residual wind speeds after removing periodicity obviously deviate from Gaussianity. The skewness and kurtosis values presented with the histogram further support using our TGH-AR-e model to adapt to right-skewed and heavy-tailed data. The residual wind speed at time $t$ is plotted against time $t - 1$ in the first panel of the middle row in Figure~\ref{wind1}, showing a strong linear relationship, which is the reason why we choose to transform the error term using the TGH-AR-e model rather than the process itself. The ACF and the PACF indicate an absence of seasonality after removing the diurnal pattern with the harmonics by the TGH-AR-e model. The ACF and the PACF also validate the appropriateness of using an AR(2) model, selected by BIC, for the wind speed residuals process. The bottom row of Figure~\ref{wind1} shows a histogram, a normal Q-Q plot and the PACF for the estimated back-transformed Gaussian error term $\tilde{\epsilon}_{t,\boldsymbol{\theta}}=\tau_{\hat{g},\hat{h}}^{-1}\left\{\frac{\hat{\tilde{y}}_t-\hat{\phi_1} \hat{\tilde{y}}_{t-1}-\hat{\phi_2}\hat{\tilde{y}}_{t-2}}{\hat{\omega}}\right\}$, where $ \hat{\tilde{y}}_t=y_t-\hat{\xi}-\bm{x}_t^{\rm T}\hat{\bm{\beta}}$. These plots confirm the validity of using a TGH-AR(2)-e model in which $\epsilon_t \overset{i.i.d.}{\sim} \mathcal{N} (0, 1)$.
Next, we compare the forecast performance from our TGH-AR-e model to those of the Gaussian AR model for this dataset. We make one-step-ahead forecasts for the whole year of 2014, with a rolling window length of 30 days ($n=720$) for parameter estimation to account for the non-stationarity caused by seasonal effect. To estimate the parameters for the Gaussian AR model, we first remove the diurnal pattern using linear regression with the same harmonics as in the TGH-AR-e model. Then, we find the MLE of the autoregressive parameters from the residuals.
The MAE of the conditional median from the TGH-AR-e model is 4.05; the Gaussian AR forecast has an MAE of 4.28. The RMSEs for the conditional means of the TGH-AR-e model and the Gaussian AR model are 1.47 and 1.50, respectively. The empirical coverage of the 95\% minimum-length CI is 94.2\% for the TGH-AR-e model and 93.3\% for the Gaussian model. We conclude that the point forecasts and CIs based on the TGH-AR-e model are better than those based on the Gaussian AR model for these hourly wind speed data. However, it is difficult to see the differences between the forecast results from these numbers alone. Figure~\ref{wind2} shows histograms of the PIT values from forecasts based on the TGH-AR-e and Gaussian AR models. Comparing these histograms clearly shows the superiority of fitting the wind speed with a non-Gaussian TGH error in the AR model rather than with a Gaussian error.
\begin{figure}
\caption{Histograms of the PIT values of probabilistic forecasts by the TGH-AR-e and Gaussian AR model for one-hour-ahead forecasts over the whole year of 2014.}
\label{wind2}
\end{figure}
\section{Discussion}\label{sec:sum} In this paper, we applied the TGH transformation in a time series context and built two flexible non-Gaussian autoregressive models. The TGH-AR-t model assumes a latent Gaussian process, which is transformed as a whole, while the TGH-AR-e model transforms the white-noise error term in an autoregressive model. The intrinsic difference between the two models can be explained by different data-generating mechanisms. For a given time series, a plot like Figure~\ref{rela12} can best help in deciding which model is more suitable. We described an efficient parameter estimation procedure for our two models that approximates the maximum likelihood estimator and an order selection procedure based on the approximated likelihood. We derived formulas for point and probabilistic forecasts using the two models. We illustrated the empirical performances of estimation, order selection and forecasting of our models through a simulation study. We found that estimating all parameters at once with our estimation procedure drastically improves the performance compared to sequential estimation that ignores the temporal dependence. Another finding was that the two models yielded different forecasts when calibrated on the same sample, showing that the two models are not interchangeable. Simulations also suggested a sample size of no less than 250 for a satisfactory performance of our models. Finally, we demonstrated the usefulness of our models by applying them to two wind speed datasets at different temporal resolutions. Our TGH-AR models provided a fast simulation method that can emulate the wind speed output of a gridded climate model and produced competitive forecast results for an observational hourly wind speed dataset from a meteorological station.
The AR models considered in this paper cannot incorporate measurement errors. Extensions of the TGH-AR models to ARMA or state-space models need further research. Also, we feel that an exhaustive comparison of the many existing transformations, including the Box-Cox, TGH and Sinh-arcsinh transformations \citep{JoPew09}, would be welcome. In a future study, the parameters $g$ and $h$ should be allowed to change smoothly across time, either by imposing a parametric function of $t$ for $g$ and $h$ or by a smoothness penalty, instead of the moving-window scheme we used here. Extension of the TGH framework to a spatio-temporal setting is also promising.
Additional information and supporting material for this article is available online at the journal's website.
\baselineskip =16pt
\end{document}
\begin{document}
\title{Utilizing Dynamic Properties of Sharing Bits and Registers to Estimate User Cardinalities over Time \thanks{\textsuperscript{*}Peng Jia, Jing Tao and Xiaohong Guan are corresponding authors.} } \author{
\fontsize{11}{11}\selectfont
Pinghui Wang$^{1,2}$, Peng Jia$^{1}$, Xiangliang Zhang$^{3}$, Jing Tao$^{1,2,4}$, Xiaohong Guan$^{2,1,5}$, Don Towsley$^{6}$\\
$^{1}$MOE Key Laboratory for Intelligent Networks and Network Security, Xi'an Jiaotong University, China\\
$^{2}$Shenzhen Research Institute of Xi'an Jiaotong University, Shenzhen, China\\
$^{3}$King Abdullah University of Science and Technology, Thuwal, SA \\
$^{4}$Zhejiang Research Institute of Xi'an Jiaotong University, Hangzhou, China\\
$^{5}$Department of Automation and NLIST Lab, Tsinghua University, Beijing, China\\
$^{6}$School of Computer Science, University of Massachusetts Amherst, MA, USA\\
Email: \{phwang, jtao, xhguan, pengjia\}@sei.xjtu.edu.cn, [email protected],\\
[email protected]\\ }
\maketitle
\begin{abstract} Online monitoring user cardinalities (or degrees) in graph streams is fundamental for many applications. For example in a bipartite graph representing user-website visiting activities, user cardinalities (the number of distinct visited websites) are monitored to report network anomalies. These real-world graph streams may contain user-item duplicates and have a huge number of distinct user-item pairs, therefore, it is infeasible to exactly compute user cardinalities when memory and computation resources are limited. Existing methods are designed to approximately estimate user cardinalities, whose accuracy highly depends on parameters that are not easy to set. Moreover, these methods cannot provide anytime-available estimation, as the user cardinalities are computed at the end of the data stream. Real-time applications such as anomaly detection require that user cardinalities are estimated on the fly. To address these problems, we develop novel bit and register sharing algorithms, which use a bit array and a register array to build a compact sketch of all users' connected items respectively. Compared with previous bit and register sharing methods, our algorithms exploit the dynamic properties of the bit and register arrays (e.g., the fraction of zero bits in the bit array at each time) to significantly improve the estimation accuracy, and have low time complexity ($O(1)$) to update the estimations each time they observe a new user-item pair. In addition, our algorithms are simple and easy to use, without requirements to tune any parameter. We evaluate the performance of our methods on real-world datasets. The experimental results demonstrate that our methods are several times more accurate and faster than state-of-the-art methods using the same amount of memory. \end{abstract}
\section{Introduction} \label{sec:introduction} Many real-world networks are given in the form of graph streams. A calling network is such an example, with nodes representing users and an edge representing a call from one user to another. When web surfing activities are modeled as a bipartite graph stream where users and items refer to network hosts and websites respectively, an edge represents a visit by a user to a website. Monitoring the cardinalities (or degrees) of users in these networks is fundamental for many applications such as network anomaly detection~\cite{Estan2003,Zhao2005,Nychis2008,YuNSDI2013}, where a user's cardinality is defined to be the number of \emph{\textbf{distinct}} users/items that the user connects to in the regular/bipartite graph stream of interest. Due to the large-size and high-speed nature of these graph streams, it is infeasible to collect the entire graph, especially when the computation and memory resources are limited. For example, network routers have fast but very small memories, which renders their traffic monitoring modules incapable of exactly computing the cardinalities of network users. Therefore, it is important to develop fast and memory-efficient algorithms to approximately compute user cardinalities over time.
Compared with using a counter to record a user's frequency (i.e., the number of times the user occurred) over time, one needs to build a hash table of distinct observed edges to handle edge duplicates in graph streams when computing user cardinalities. Therefore, computing user cardinalities is more complex and difficult than computing user frequencies for large data streams, and frequency estimation methods such as the Count-Min sketch~\cite{Cormode2005improved} fail to approximate user cardinalities. To address this challenge, a variety of cardinality estimation methods such as Linear-Time Probabilistic Counting (LPC)~\cite{Whang1990} and HyperLogLog (HLL) \cite{FlajoletAOFA07} have been developed to approximately compute cardinalities. An LPC/HLL sketch consists of $m$ bits/registers, where $m$ is a parameter affecting the estimation accuracy. Since user cardinalities are not known in advance and change over time, one needs to set $m$ large (e.g., in the thousands) to achieve reasonable accuracy for each user, whose cardinality may vary over a large range. However, this method wastes considerable memory because a large value of $m$ is not necessary for most users, which have small cardinalities. To solve this problem, \cite{Zhao2005, Yoon2009, WangTIFS2012, XiaoSIGMETRICS2015} develop different virtual sketch methods to compress each user's LPC/HLL sketch into a large bit/register array shared by all users. These virtual sketch methods build each user's virtual LPC/HLL sketch using $m$ bits/registers randomly selected from the large bit/register array. This significantly reduces memory usage because each bit/register may be used by more than one user. However, bits/registers in a user's virtual LPC/HLL sketch may be contaminated by other users. We refer to these bits/registers as ``\textbf{noisy}" bits/registers. In practice, most users have small cardinalities and their virtual LPC/HLL sketches tend to contain many ``noisy" bits/registers, which results in large estimation errors. Another limitation of existing methods is that they are unable to report user cardinalities on the fly, because they are customized to estimate user cardinalities after all the data has been observed. For real-time applications like on-line anomaly detection, it is important to track user cardinalities in real time. For example, network monitoring systems are required to detect abnormal IP addresses such as super spreaders (i.e., IP addresses with cardinalities larger than a specified threshold) on the fly. Moreover, online monitoring of IP address cardinalities over time also facilitates online detection of stealthy attacks launched from a subclass of IP addresses.
To address the above challenges, we develop two novel streaming algorithms FreeBS and FreeRS to accurately estimate user cardinalities over time. We summarize our main contributions as:\\ \noindent$\bullet$ Compared with previous bit and register sharing methods, our algorithms FreeBS and FreeRS exploit the dynamic properties of the bit and register arrays over time (e.g., the fraction of zero bits in the bit array at each time) to significantly improve the estimation accuracy. To be more specific, FreeBS/FreeRS allows the number of bits/registers used by a user to dynamically increase as its cardinality increases over time and each user can use all shared bits/registers, which results in more accurate user cardinality estimations.
\noindent$\bullet$ Our algorithms report user cardinality estimations on the fly and allow to track user cardinalities in real-time. The time complexity is reduced from $O(m)$ in state-of-the-art methods CSE~\cite{Yoon2009} and vHLL~\cite{XiaoSIGMETRICS2015} to $O(1)$ for updating user cardinality estimations each time they observe a new user-item pair.
\noindent$\bullet$ We evaluate the performance of our methods on real-world datasets. Experimental results demonstrate that our methods are orders of magnitude faster and up to 10,000 times more accurate than state-of-the-art methods using the same amount of memory.
The rest of this paper is organized as follows. The problem formulation is presented in Section~\ref{sec:problem}. Section~\ref{sec:preliminaries} introduces preliminaries. Section~\ref{sec:methods} presents our algorithms FreeBS and FreeRS. The performance evaluation and testing results are presented in Section~\ref{sec:results}. Section~\ref{sec:related} summarizes related work. Concluding remarks then follow.
\section{Problem Formulation} \label{sec:problem}
To formally define our problem, we first introduce some notation. Let $\Gamma=e^{(1)} e^{(2)} \cdots$ be the graph stream of interest consisting of a sequence of edges. Note that an edge in $\Gamma$ may appear more than once. In this paper, we focus on bipartite graph streams consisting of edges between users and items. Our methods however easily extend to regular graphs. Let $S$ and $D$ denote the user and item sets, respectively. For $t= 1, 2, \ldots$, let $e^{(t)}=(s^{(t)}, d^{(t)})$ denote the $t^\text{th}$ edge of $\Gamma$, where $s^{(t)}\in S$ and $d^{(t)}\in D$ are the user and the item of $e^{(t)}$ respectively. Let $N_s^{(t)}$ denote the set of distinct items that user $s$ connects to before and including time $t$. Define $n_s^{(t)}=|N_s^{(t)}|$ to be the cardinality of user $s$ at time $t$. Then, $n^{(t)}=\sum_{s\in S}{|N_s^{(t)}|}$ is the sum of all user cardinalities at time $t$. In this paper, we develop fast and accurate methods for estimating user cardinalities at times $t= 1, 2, \ldots$ using a limited amount of memory.
When no confusion arises, we omit the superscript $(t)$ to ease exposition.
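Before turning to sketches, note that exact computation of $n_s^{(t)}$ simply maintains the set $N_s^{(t)}$ for every user. The Python sketch below makes the memory cost explicit: it stores every distinct user-item pair, which is precisely what the methods in this paper are designed to avoid.
\begin{verbatim}
from collections import defaultdict

def exact_cardinalities(edge_stream):
    # exact baseline: keep the set N_s of distinct items per user;
    # len(neighbors[s]) gives n_s^(t) on the fly after each arrival,
    # but memory grows with the number of distinct user-item pairs
    neighbors = defaultdict(set)
    for s, d in edge_stream:
        neighbors[s].add(d)          # duplicate (s, d) pairs have no effect
    return {s: len(items) for s, items in neighbors.items()}

print(exact_cardinalities([("u1", "a"), ("u1", "b"), ("u1", "a"), ("u2", "a")]))
# {'u1': 2, 'u2': 1}
\end{verbatim}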
\section{Preliminaries}\label{sec:preliminaries} \begin{figure*}
\caption{Overview of bit sharing method CSE and register sharing method vHLL. Virtual CSE/vHLL sketches of users may contain ``noisy" bits/registers (e.g., the bit and register in red and bold in the figure).}
\label{fig:CSEandCHLL}
\end{figure*} \subsection{Estimating a Single User's Cardinality} \subsubsection{Linear-Time Probabilistic Counting} For a user $s\in S$, Linear-Time Probabilistic Counting (LPC)~\cite{Whang1990} builds a sketch $B_s$ to store the set of items that $s$ connects to, i.e., $N_s^{(t)}$. Formally, $B_s$ is defined as a bit array consisting of $m$ bits, which are initialized to zero. Let $h(d)$ be a uniform hash function with range $\{1, \ldots, m\}$. When user-item pair $(s,d)$ arrives, the $h(d)^\text{th}$ bit in $B_s$ is set to one, i.e., $B_s[h(d)] = 1$. For any bit $B_s[i]$, $1\le i\le m$, the probability that it remains zero at time $t$ is $P(B_s[i] = 0) = (1 - \frac{1}{m})^{n_s^{(t)}}$. Denote by $U_s^{(t)}$ the number of zero bits in $B_s$ at time $t$. Then, the expectation of $U_s^{(t)}$ is computed as $\mathbb{E}(U_s^{(t)}) = \sum_{i=1}^m P(B_s[i] = 0)\approx m e^\frac{-n_s^{(t)}}{m}$. Based on the above equation, when $U_s^{(t)}>0$, Whang et al.~\cite{Whang1990} estimate $n_s^{(t)}$ as \begin{equation*} \hat n_s^{(t, \text{LPC})} = -m \ln \frac{U_s^{(t)}}{m}. \end{equation*} The range of $\hat n_s^{(t, \text{LPC})}$ is $[0, m\ln m]$, and its expectation and variance are computed as \[ \mathbb{E}(\hat n_s^{(t, \text{LPC})}) \approx n_s^{(t)} + \frac{1}{2}\left(e^{\frac{n_s^{(t)}}{m}} - \frac{n_s^{(t)}}{m} - 1\right), \] \[ {\rm Var}(\hat n_s^{(t, \text{LPC})}) \approx m \left(e^{\frac{n_s^{(t)}}{m}} - \frac{n_s^{(t)}}{m} - 1\right). \]
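The following Python sketch illustrates the LPC update and estimator for a single user; the hash function is a stand-in for the uniform hash $h(\cdot)$ assumed above, and the handling of a saturated sketch (no zero bits left) is an illustrative choice.
\begin{verbatim}
import math
import numpy as np

class LPC:
    # Linear-Time Probabilistic Counting sketch for a single user
    def __init__(self, m=1024, seed=0):
        self.m = m
        self.seed = seed
        self.bits = np.zeros(m, dtype=bool)

    def _h(self, item):
        # stand-in for the uniform hash h(d); not stable across Python runs
        return hash((self.seed, item)) % self.m

    def add(self, item):
        self.bits[self._h(item)] = True

    def estimate(self):
        u = self.m - int(self.bits.sum())          # U: number of zero bits
        if u == 0:
            return self.m * math.log(self.m)       # saturated; upper end of the range
        return -self.m * math.log(u / self.m)

sketch = LPC()
for d in range(500):
    sketch.add(("item", d))
print(sketch.estimate())  # close to 500
\end{verbatim}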
\subsubsection{HyperLogLog} To estimate the cardinality of user $s$, HyperLogLog (HLL)~\cite{FlajoletAOFA07} is developed based on the Flajolet-Martin (FM) sketch~\cite{Flajolet1985} $R_s$ consisting of $m$ registers $R_s[1], \ldots, R_s[m]$. All $m$ registers are initialized to $0$. For $1\le i\le m$, let $R_s^{(t)}[i]$ be the value of $R_s[i]$ at time $t$. When a user-item pair $(s,d)$ arrives, HLL maps the item into a pair of random variables $h(d)$ and $\rho(d)$, where $h(d)$ is an integer uniformly selected from \{1, \ldots, $m$\} at random and $\rho(d)$ is drawn from a $Geometric(1/2)$ distribution, $P(\rho(d)=k) = \frac{1}{2^k}$ for $k=1, 2, \ldots$. \footnote{Functions $h(d)$ and $\rho(d)$ are usually implemented as: Let $\Phi(d)=\langle x_1x_2\cdots \rangle$ be the binary format of the output of a hash function $\Phi(d)$, and $b=\lceil \log_2 m\rceil$. Then, $h(d)$ is defined as $h(d)=(x_1x_2\cdots x_b\mod m) + 1$ and $\rho(d)$ is defined as the number of leading zeros in $\langle x_{b+1} x_{b+2}\cdots \rangle$ plus one.} Then, register $R_s[{h(d)}]$ is updated as \[ R_s^{(t)}[h(d)]\gets \max\{R_s^{(t-1)}[h(d)], \rho(d)\}. \] At time $t$, Flajolet et al.~\cite{FlajoletAOFA07} estimate $n_s^{(t)}$ as \[ \hat n_s^{(t, \text{HLL})}=\frac{\alpha_m m^2}{\sum_{i=1}^m 2^{-R^{(t)}[i]}}, \] where $\alpha_m$ is the following constant to correct for bias \[ \alpha_m =\left(m\int_0^{\infty} \left(\log_2 \frac{2+x}{1+x}\right)^m dx \right)^{-1}. \] The above formula for $\alpha_m$ is complicated. In practice, $\alpha_m$ is computed numerically, e.g., $\alpha_{16} \approx 0.673$, $\alpha_{32} \approx 0.697$, $\alpha_{64} \approx 0.709$, and $\alpha_m \approx 0.7213/(1 + 1.079/m)$ for $m \ge 128$. The error of $\hat n_s^{(t, \text{HLL})}$ is analyzed as \[ \lim_{n_s^{(t)}\rightarrow \infty}\frac{\mathbb{E}(\hat n_s^{(t, \text{HLL})})}{n_s^{(t)}} = 1 + \delta_1(n_s^{(t)}) + o(1), \] \[ \lim_{n_s^{(t)}\rightarrow \infty}\frac{\sqrt{{\rm Var}(\hat n_s^{(t, \text{HLL})})}}{n_s^{(t)}} = \frac{\beta_m}{\sqrt{m}} + \delta_2(n_s^{(t)}) + o(1). \]
where $\delta_1$ and $\delta_2$ represent oscillating functions of a tiny amplitude (e.g., $|\delta_1(n_s^{(t)})|<5\times 10^{-5}$ and $|\delta_2(n_s^{(t)})|<5\times 10^{-4}$ as soon as $m\ge 16$) which can be safely neglected for all practical purposes, and $\beta_m$ is a constant for a specific $m$, e.g., $\beta_{16} \approx 1.106$, $\beta_{32} \approx 1.070$, $\beta_{64} \approx 1.054$, $\beta_{128} \approx 1.046$, and $\beta_\infty \approx 1.039$.
Since $\hat n_s^{(t, \text{HLL})}$ is severely biased for small cardinalities, HLL treats the HLL sketch $R_s$ as an LPC sketch (i.e., a bitmap of $m$ bits) when $\frac{\alpha_m m^2}{\sum_{i=1}^m 2^{-R_s^{(t)}[i]}} < 2.5m$. In this case, $n_s^{(t)}$ is estimated as $-m \ln \frac{\tilde U_s^{(t)}}{m}$, where $\tilde U_s^{(t)}$ is the number of registers among $R_s[1]$, $\ldots$, $R_s[m]$ that equal 0 at time $t$. Therefore, we easily find that LPC outperforms HLL for small cardinalities under the same memory usage.
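The following Python sketch summarizes the HLL update and estimator, including the small-range correction discussed above. It is a minimal illustration: the SHA-1-based hash, the power-of-two restriction on $m$, and the cap on register values are our own simplifying assumptions.
\begin{verbatim}
import math
import hashlib

class HLL:
    """HyperLogLog sketch for one user (illustrative; 64-bit hash assumed)."""

    def __init__(self, m, seed=0):
        assert m >= 16 and (m & (m - 1)) == 0   # m is a power of two for simplicity
        self.m = m
        self.b = int(math.log2(m))
        self.R = [0] * m
        self.seed = seed
        # bias-correction constant alpha_m (numerical values quoted above)
        self.alpha = {16: 0.673, 32: 0.697, 64: 0.709}.get(m, 0.7213 / (1 + 1.079 / m))

    def _hash(self, item):
        digest = hashlib.sha1(f"{self.seed}:{item}".encode()).digest()
        return int.from_bytes(digest[:8], "big")

    def add(self, item):
        x = self._hash(item)
        j = x & (self.m - 1)                      # register index h(d) - 1
        w = x >> self.b                           # remaining (64 - b) bits
        rho = (64 - self.b) - w.bit_length() + 1  # leading zeros + 1, geometric(1/2)
        self.R[j] = max(self.R[j], rho)

    def estimate(self):
        raw = self.alpha * self.m ** 2 / sum(2.0 ** (-r) for r in self.R)
        if raw < 2.5 * self.m:                    # small-range correction: fall back to LPC
            zeros = self.R.count(0)
            if zeros > 0:
                return -self.m * math.log(zeros / self.m)
        return raw
\end{verbatim}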
\subsubsection{Discussions} To compute all user cardinalities, one can use an LPC or HLL sketch to estimate each user cardinality. Clearly, using a small $m$, LPC and HLL will exhibit large errors for users with large cardinalities. However, most user cardinalities are small, so assigning an LPC/HLL sketch with a large $m$ to every user merely to estimate the few large cardinalities accurately is wasteful: LPC and HLL do not need a large $m$ to achieve reasonable estimation accuracy for users with small cardinalities. In practice, the user cardinalities are not known in advance and vary over time. Therefore, it is difficult to set an optimal value of $m$ when using LPC and HLL to estimate all user cardinalities. In the next subsection, we introduce state-of-the-art methods to address this problem, and also discuss their shortcomings. \subsection{Estimating All User Cardinalities} \subsubsection{CSE: Compressing LPC Sketches of All Users into a Shared Bit Array} As shown in Figure~\ref{fig:CSEandCHLL} (a), CSE~\cite{Yoon2009} consists of a large bit array $A$ and $m$ independent hash functions $f_1(s),...,f_m(s)$, each mapping users to $\{1, \ldots, M\}$, where $M$ is the length of the one-dimensional bit array $A$. Similar to LPC, CSE builds a virtual LPC sketch for each user and embeds the LPC sketches of all users in $A$. For user $s$, its virtual LPC sketch $\hat B_s$ consists of $m$ bits selected randomly from $A$ by the group of hash functions $f_1(s),...,f_m(s)$, that is \[ \hat B_s = (A[f_1(s)], \ldots, A[f_m(s)]). \] Each bit in $A$ is initially set to zero. When a user-item pair $(s,d)$ arrives, CSE sets the $h(d)^\text{th}$ bit in $\hat B_s$ to one. Similar to LPC, $h(d)$ is a uniform hash function with range $\{1, \ldots, m\}$. Since the $h(d)^\text{th}$ element in $\hat B_s$ is bit $A[f_{h(d)}(s)]$, CSE only needs to set bit $A[f_{h(d)}(s)]$, i.e., $A[f_{h(d)}(s)]\gets 1$.
Let $\hat U_s^{(t)}$ be the number of zero bits in $\hat B_s$ and $U^{(t)}$ be the number of zero bits in $A$ at time $t$. A user's virtual LPC sketch can be viewed as a regular LPC sketch containing ``noisy" bits (e.g., the bit in red and bold in Figure~\ref{fig:CSEandCHLL} (a)), that are wrongly set from zero to one by items of other users. To remove the estimation error introduced by ``noisy" bits, Yoon et al.~\cite{Yoon2009} estimate $n_s^{(t)}$ as \begin{equation*} \hat n_s^{(t, \text{CSE})} = -m \ln \frac{\hat U_s^{(t)}}{m} + m\ln \frac{U^{(t)}}{M}. \end{equation*} On the right-hand side of the above equation, the first term is the same as the regular LPC, and the second term corrects the error introduced by ``noisy" bits. The bias and variance of $\hat n_s^{(t, \text{CSE})}$ are given by eqs. (23) and (24) in the original paper~\cite{Yoon2009}.
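A minimal Python sketch of CSE is given below; it mirrors the virtual-sketch update and the estimator with the noise-correction term. The hash constructions and the class layout are illustrative assumptions.
\begin{verbatim}
import math
import hashlib

class CSE:
    """Bit sharing (CSE): virtual LPC sketches embedded in one shared bit array
    (illustrative sketch)."""

    def __init__(self, M, m, seed=0):
        self.M, self.m, self.seed = M, m, seed
        self.A = [0] * M                        # shared bit array

    def _f(self, user, i):
        # i-th hash function f_i(s): selects the i-th bit of user s's virtual sketch
        d = hashlib.md5(f"f:{self.seed}:{i}:{user}".encode()).hexdigest()
        return int(d, 16) % self.M

    def _h(self, item):
        d = hashlib.md5(f"h:{self.seed}:{item}".encode()).hexdigest()
        return int(d, 16) % self.m

    def add(self, user, item):
        self.A[self._f(user, self._h(item))] = 1   # A[f_{h(d)}(s)] <- 1

    def estimate(self, user):
        u_hat = sum(1 for i in range(self.m) if self.A[self._f(user, i)] == 0)
        u_all = self.A.count(0)
        if u_hat == 0 or u_all == 0:
            return self.m * math.log(self.m)       # saturated: maximum of the range
        # first term: regular LPC estimate; second term: correction for "noisy" bits
        return -self.m * math.log(u_hat / self.m) + self.m * math.log(u_all / self.M)
\end{verbatim}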
\subsubsection{vHLL: Compressing HLL Sketches of All Users into a Shared Bit Array} Xiao et al.~\cite{XiaoSIGMETRICS2015} develop a register sharing method, vHLL, which extends the HLL method to estimate all user cardinalities. vHLL consists of a list of $M$ registers $R[1], \ldots, R[M]$, which are initialized to zero. To maintain the virtual HLL sketch $\hat R_s$ of a user $s$, vHLL uses $m$ independent hash functions $f_1(s),...,f_m(s)$ to randomly select $m$ registers from all registers $R[1], \ldots, R[M]$, where each function $f_1(s),...,f_m(s)$ maps users to $\{1, \ldots, M\}$. Formally, $\hat R_s$ is defined as \[ \hat R_s = (R[f_1(s)], \ldots, R[f_m(s)]). \]
For $1\le i\le M$, let $R^{(t)}[i]$ be the value of $R[i]$ at time $t$. When a user-item pair $(s,d)$ arrives, vHLL maps the item to a pair of random variables $h(d)$ and $\rho(d)$, where $h(d)$ is an integer uniformly selected from $\{1, \ldots, m\}$ at random, and $\rho(d)$ is a random integer drawn from a $Geometric(1/2)$ distribution, which is similar to HLL. We can easily find that the $h(d)^\text{th}$ element in the virtual HLL sketch of user $s$ is $R[f_{h(d)}(s)]$; therefore, vHLL only needs to update register $R[f_{h(d)}(s)]$ as \[ R^{(t)}[f_{h(d)}(s)]\gets \max\{R^{(t-1)}[f_{h(d)}(s)], \rho(d)\}. \] A user's virtual HLL sketch can be viewed as a regular HLL sketch containing ``noisy'' registers (e.g., the register in red and bold in Figure~\ref{fig:CSEandCHLL} (b)), which are wrongly set by items of other users. To remove the estimation error introduced by ``noisy'' registers, Xiao et al.~\cite{XiaoSIGMETRICS2015} estimate $n_s^{(t)}$ as \begin{equation*} \hat n_s^{(t, \text{vHLL})} = \frac{M}{M-m}\left(\frac{\alpha_m m^2}{\sum_{i=1}^m 2^{-R^{(t)}[f_i(s)]}} - \frac{m\alpha_M M}{\sum_{i=1}^M 2^{-R^{(t)}[i]}}\right), \end{equation*} where $\alpha_m$ is the same as that of HLL. For the two terms between the parentheses on the right-hand side of the above equation, the first term is the same as the regular HLL, and the second term corrects the error introduced by ``noisy'' registers. Similar to the regular HLL, the first term is replaced by $-m \ln \frac{\hat U_s^{(t)}}{m}$ when $\frac{\alpha_m m^2}{\sum_{i=1}^m 2^{-R^{(t)}[f_i(s)]}} < 2.5m$, where $\hat U_s^{(t)}$ is the number of registers among $R[f_1(s)]$, $\ldots$, $R[f_m(s)]$ that equal 0 at time $t$. The expectation of $\hat n_s^{(t, \text{vHLL})}$ approximately equals $n_s^{(t)}$, that is, $\mathbb{E}(\hat n_s^{(t, \text{vHLL})}) \approx n_s^{(t)}$. The variance of $\hat n_s^{(t, \text{vHLL})}$ is approximately computed as ${\rm Var}(\hat n_s^{(t, \text{vHLL})}) \approx \frac{M^2}{(M-m)^2} (\frac{1.04^2}{m}(n_s^{(t)} + (n^{(t)} - n_s^{(t)})\frac{m}{M})^2 +(n^{(t)} - n_s^{(t)}) \frac{m}{M}\left(1-\frac{m}{M}\right) + \frac{(1.04n^{(t)}m)^2}{M^3})$, where $n^{(t)} = \sum_{s\in S} n_s^{(t)}$ counts the total number of distinct user-item pairs that occurred before and including time $t$.
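The following Python sketch illustrates the vHLL update and estimator described above, including the small-range correction inherited from HLL. The hash constructions and the approximate bias-correction constants are our own assumptions.
\begin{verbatim}
import math
import hashlib

class VHLL:
    """Virtual HyperLogLog (vHLL): virtual HLL sketches embedded in one shared
    register array (illustrative sketch)."""

    def __init__(self, M, m, seed=0):
        self.M, self.m, self.seed = M, m, seed
        self.R = [0] * M
        # large-m approximation of the bias-correction constants
        self.alpha_m = 0.7213 / (1 + 1.079 / m)
        self.alpha_M = 0.7213 / (1 + 1.079 / M)

    def _f(self, user, i):
        d = hashlib.md5(f"f:{self.seed}:{i}:{user}".encode()).hexdigest()
        return int(d, 16) % self.M

    def _h_rho(self, item):
        x = int.from_bytes(hashlib.sha1(f"{self.seed}:{item}".encode()).digest()[:8], "big")
        j = x % self.m                          # h(d) in {0, ..., m-1}
        w = x // self.m
        rho = 1
        while w & 1 == 0 and rho < 60:          # geometric(1/2) via trailing zeros
            rho += 1
            w >>= 1
        return j, rho

    def add(self, user, item):
        j, rho = self._h_rho(item)
        k = self._f(user, j)                    # register R[f_{h(d)}(s)]
        self.R[k] = max(self.R[k], rho)

    def estimate(self, user):
        regs = [self.R[self._f(user, i)] for i in range(self.m)]
        own = self.alpha_m * self.m ** 2 / sum(2.0 ** (-r) for r in regs)
        if own < 2.5 * self.m:                  # small-range correction
            zeros = regs.count(0)
            if zeros > 0:
                own = -self.m * math.log(zeros / self.m)
        noise = self.m * self.alpha_M * self.M / sum(2.0 ** (-r) for r in self.R)
        return self.M / (self.M - self.m) * (own - noise)
\end{verbatim}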
\subsection{Unsolved Challenges} \noindent\textbf{Challenge 1: It is difficult to set parameter $m$ for both CSE and vHLL.} The estimation accuracy of CSE and vHLL highly depends on the value of $m$, as we can see from the above discussion. Increasing $m$ introduces more ``\emph{unused}'' bits in the virtual LPC sketches of the users that have occurred, and these bits can become contaminated with noise. Here, ``unused'' bits refer to the bits in a user's virtual LPC sketch into which none of the user's user-item pairs are hashed. However, decreasing $m$ introduces large estimation errors for users with large cardinalities. vHLL confronts the same challenge in determining an optimal $m$. Later, our experimental results will also verify that errors increase with $m$ for users with small cardinalities under CSE and vHLL.
\noindent\textbf{Challenge 2: It is computationally intensive to estimate user cardinalities for all values of $t$.} At each time, both CSE and vHLL require time complexity $O(m)$ to compute the cardinality of a single user. When applied to compute cardinalities for all users in $S$ at all times, CSE and vHLL have to be repeatedly called and will incur high computational cost, which prohibits their application to high speed streams in an on-line manner.
\section{Our Methods}\label{sec:methods} In this section, we present our streaming algorithms FreeBS and FreeRS for estimating user cardinalities over time. FreeBS and FreeRS are designed based on two novel bit sharing and register sharing techniques, respectively. The basic idea behind our methods can be summarized as follows: unlike vHLL/CSE, which map each user's items into $m\ll M$ bits/registers, FreeBS/FreeRS randomly maps user-item pairs into all $M$ bits/registers in the bit/register array. Thus, users with larger cardinalities (i.e., connecting to a larger number of items) tend to use more bits/registers. For each user-item pair $e^{(t)}=(s^{(t)}, d^{(t)})$ arriving at time $t$, we discard it when updating $e^{(t)}$ does not change any bit/register in the bit/register array shared by all users. Otherwise, $e^{(t)}$ is a new user-item pair that did not occur before time $t$, and we increase the cardinality estimate of user $s^{(t)}$ by $\frac{1}{q^{(t)}}$, where $q^{(t)}$ is defined as the probability that a new user-item pair changes any bit/register in the bit/register array at time $t$. To some extent, the above procedure can be viewed as a sampling method such that each new user-item pair arriving at time $t$ is sampled with probability $q^{(t)}$ and each user's cardinality is estimated using the Horvitz-Thompson estimator~\cite{horvitz1952gsw}.
\subsection{FreeBS: Parameter-Free Bit Sharing} \noindent\textbf{Data Structure.}
The pseudo-code for FreeBS is shown as Algorithm~\ref{alg:FreeBS}. FreeBS consists of a one-dimensional bit array $B$ of length $M$, where each bit $B[j]$, $1\le j\le M$, is initialized to zero. In addition, FreeBS uses a hash function $h^*(e)$ to uniformly and independently map each user-item pair $e=(s,d)$ to an integer in $\{1, \ldots, M\}$ at random, i.e., $P(h^*(e)=i)=\frac{1}{M}$, and $P(h^*(e)=i\wedge h^*(e')=i'|e\ne e')=\frac{1}{M^2}$, $i, i'\in \{1, \ldots, M\}$. Note that $h^*(e)$ is applied to the whole user-item pair $e$, whereas the hash functions $f_1(s), \ldots, f_m(s)$ used by CSE map \textbf{user $s$} to integers in $\{1, \ldots, M\}$ at random.
\begin{algorithm} \SetKwFunction{insert}{insert} \SetKwFunction{delete}{delete} \SetKwFunction{continue}{continue} \SetKwFunction{MaxRankEdge}{MaxRankEdge} \SetKwFunction{ComputeEdgeLabel}{ComputeEdgeLabel} \SetKwFunction{UpdateEdgeLabel}{UpdateEdgeLabel} \SetKwFunction{UpdateTriCounts}{UpdateTriCounts} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output}
\BlankLine $B[1, \ldots, M]\gets [0, \ldots, 0]$\; $\hat n_s^\text{(FreeBS)} \gets 0$, $s\in S$\; $m_0\gets M$\; \ForEach {$e=(s, d)$ in $\Gamma$}{
\If {$B[h^*(e)]==0$}{
$B[h^*(e)]\gets 1$\;
$\hat n_s^\text{(FreeBS)}\gets \hat n_s^\text{(FreeBS)} + \frac{M}{m_0}$\;
$m_0\gets m_0 - 1$\;
} } \caption{The pseudo-code for FreeBS.}\label{alg:FreeBS} \end{algorithm}
\noindent\textbf{Update Procedure.} When a user-item pair $e=(s,d)$ arrives at time $t$, FreeBS first computes a random variable $h^*(e)$, and then sets $B[h^*(e)]$ to one, i.e., $B[h^*(e)]\gets 1.$
Let $B_0^{(t)}$ denote the set of the indices corresponding to zero bits in $B$ at time $t$. Formally, $B_0^{(t)}$ is defined as $B_0^{(t)}=\{i: B^{(t)}[i]=0, 1\le i\le M\}.$
Let $\hat n_s^{(t, \text{FreeBS})}$ denote the cardinality estimate for user $s$ at time $t$. We initialize $\hat n_s^{(0, \text{FreeBS})}$ to 0. Next, we describe how $\hat n_s^{(t, \text{FreeBS})}$ is computed on-line. Let $m_0^{(t)} = |B_0^{(t)}|$ denote the number of zero bits in $B$ at time $t$. Let $q_\text{B}^{(t)}$ denote the probability of $e$ changing a bit in $B$ from 0 to 1 at time $t$. Formally, $q_\text{B}^{(t)}$ is defined as \begin{equation*}
q_\text{B}^{(t)} = \sum_{i\in B_0^{(t-1)}} P(h^*(e)=i)= \frac{|B_0^{(t-1)}|}{M} = \frac{m_0^{(t-1)}}{M}. \end{equation*} Let $\mathbf{1}(\mathbb{P})$ denote the indicator function that equals 1 when predicate $\mathbb{P}$ is true and 0 otherwise. Besides setting $B[h^*(e)]$ to 1 at time $t$ with the arrival of user-item pair $e=(s,d)$, we also update the cardinality estimate of user $s$ as \[ \hat n_s^{(t, \text{FreeBS})}\gets \hat n_s^{(t-1, \text{FreeBS})} + \frac{\mathbf{1}(B^{(t-1)}[h^*(e)] = 0)}{q_\text{B}^{(t)}}. \] For any other user $s'\in S\setminus\{s\}$, we keep its cardinality estimate unchanged, i.e., $\hat n_{s'}^{(t, \text{FreeBS})}\gets \hat n_{s'}^{(t-1, \text{FreeBS})}$. We easily find that $q_\text{B}^{(t)}$ can be computed incrementally. That is, we initialize $q_\text{B}^{(1)}$ to 1, and incrementally compute $q_\text{B}^{(t+1)}$ as \begin{equation*} q_\text{B}^{(t+1)} \gets q_\text{B}^{(t)}-\frac{\mathbf{1}(B^{(t-1)}[h^*(e)] = 0)}{M}, \quad t\ge 1. \end{equation*} Hence, the time complexity of FreeBS for processing each user-item pair is $O(1)$.
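Complementing Algorithm~\ref{alg:FreeBS}, the following Python sketch shows how FreeBS maintains the shared bit array, the counter $m_0$ (and hence $q_\text{B}$), and the per-user estimates with $O(1)$ work per user-item pair. The hash construction is an illustrative assumption.
\begin{verbatim}
import hashlib
from collections import defaultdict

class FreeBS:
    """Parameter-free bit sharing (illustrative sketch of Algorithm 1)."""

    def __init__(self, M, seed=0):
        self.M, self.seed = M, seed
        self.B = [0] * M
        self.m0 = M                              # number of zero bits; q_B = m0 / M
        self.est = defaultdict(float)            # estimate for every observed user

    def _h_star(self, user, item):
        # h*(e): hashes the whole user-item pair into {0, ..., M-1}
        d = hashlib.md5(f"{self.seed}:{user}:{item}".encode()).hexdigest()
        return int(d, 16) % self.M

    def process(self, user, item):
        k = self._h_star(user, item)
        if self.B[k] == 0:                       # the pair changes a bit: it must be new
            self.B[k] = 1
            self.est[user] += self.M / self.m0   # Horvitz-Thompson increment 1 / q_B
            self.m0 -= 1                         # q_B is updated incrementally in O(1)

    def estimate(self, user):
        return self.est[user]
\end{verbatim}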
\noindent\textbf{Error Analysis.} Let $T_s^{(t)}$ denote the set of the first occurrence times of user-item pairs associated with user $s$ in stream $\Gamma$. Formally, we define $T_s^{(t)}$ as \[ T_s^{(t)} = \{i: s^{(i)}=s \wedge e^{(j)}\ne e^{(i)}, 0<j< i\le t\}. \]
\begin{theorem}\label{theorem:FreeBS} The expectation and variance of $\hat n_s^{(t, \text{FreeBS})}$ are \[ \mathbb{E}(\hat n_s^{(t, \text{FreeBS})}) = n_s^{(t)}, \] \[ \text{Var}(\hat n_s^{(t, \text{FreeBS})}) = \sum_{i\in T_s^{(t)}} \mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) - n_s^{(t)}\le n_s^{(t)}\left(\mathbb{E}(\frac{1}{q_\text{B}^{(t)}}) - 1\right), \] where \begin{equation*} \begin{split} \mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) &= \sum_{j = 1}^M \frac{M\binom{M}{j} \sum_{k=0}^{j-1} (-1)^k \binom{j}{k} (\frac{j-k}{M})^{n^{(i)}}}{M-j}\\ &\approx e^{\frac{n^{(i)}}{M}} \left(1+\frac{1}{M}\left(e^{\frac{n^{(i)}}{M}} - \frac{n^{(i)}}{M} - 1\right)\right). \end{split} \end{equation*} \end{theorem} \begin{pf} Let $\delta_{e}$ denote an indicator that equals 1 when updating a user-item pair $e$ incurs a value change of $B[h^*(e)]$ (i.e., $B[h^*(e)]$ changes from 0 to 1), and 0 otherwise. We easily have \[ \hat n_s^{(t, \text{FreeBS})} = \sum_{i\in T_s^{(t)}} \frac{\delta_{e^{(i)}}}{q_\text{B}^{(i)}}. \] For each $\delta_{e^{(i)}}$, we have \[
\mathbb{E}(\delta_{e^{(i)}}|q_\text{B}^{(i)}) = q_\text{B}^{(i)}, \quad 1\le i\le t, \] \[
\text{Var}(\delta_{e^{(i)}}|q_\text{B}^{(i)}) = \mathbb{E}(\delta_{e^{(i)}}^2|q_\text{B}^{(i)}) - (\mathbb{E}(\delta_{e^{(i)}}|q_\text{B}^{(i)}))^2 = q_\text{B}^{(i)} - (q_\text{B}^{(i)})^2. \] Define $Q_\text{B}^{(t)}=\{q_\text{B}^{(1)}, \ldots, q_\text{B}^{(t)}\}.$
Then, we have \begin{equation*} \begin{split}
\mathbb{E}(\hat n_s^{(t, \text{FreeBS})}| Q_\text{B}^{(t)}) &= \mathbb{E}\left(\sum_{i\in T_s^{(t)}} \frac{\delta_{e^{(i)}}}{q_\text{B}^{(i)}}\middle| Q_\text{B}^{(t)}\right)\\
&= \sum_{i\in T_s^{(t)}} \frac{\mathbb{E}(\delta_{e^{(i)}}|Q_\text{B}^{(t)})}{q_\text{B}^{(i)}}\\
&= \sum_{i\in T_s^{(t)}} \frac{P(\delta_{e^{(i)}}=1|Q_\text{B}^{(t)})}{q_\text{B}^{(i)}}\\ &= n_s^{(t)}. \end{split} \end{equation*} Given $Q_\text{B}^{(t)}$, random variables $\delta_{e^{(i)}}$, $i\in T_s^{(t)}$, are independent of each other. Then, we have \[
\mathbb{E}(\hat n_s^{(t, \text{FreeBS})}) = \mathbb{E}(\mathbb{E}(\hat n_s^{(t, \text{FreeBS})}|Q_\text{B}^{(t)})) = \mathbb{E}(n_s^{(t)}) = n_s^{(t)}. \] The variance of $\hat n_s^{(t, \text{FreeBS})}$ is computed as \begin{equation*} \begin{split}
\text{Var}(\hat n_s^{(t, \text{FreeBS})}|Q_\text{B}^{(t)}) &= \text{Var}\left(\sum_{i\in T_s^{(t)}} \frac{\delta_{e^{(i)}}}{q_\text{B}^{(i)}}\middle|Q_\text{B}^{(t)}\right)\\
&= \sum_{i\in T_s^{(t)}} \frac{\text{Var}(\delta_{e^{(i)}}|Q_\text{B}^{(t)})}{(q_\text{B}^{(i)})^2}\\ &= \sum_{i\in T_s^{(t)}} \frac{q_\text{B}^{(i)} - (q_\text{B}^{(i)})^2}{(q_\text{B}^{(i)})^2}\\ &= \sum_{i\in T_s^{(t)}} \frac{1}{q_\text{B}^{(i)}} - n_s^{(t)}. \end{split} \end{equation*}
Since $\text{Var}(\mathbb{E}(\hat n_s^{(t, \text{FreeBS})}|Q_\text{B}^{(t)}))=0$, using the equation $\text{Var}(X) = \text{Var}(\mathbb{E}(X|Y)) + \mathbb{E}(\text{Var}(X|Y))$,
we have \begin{equation*} \begin{split}
\text{Var}(\hat n_s^{(t, \text{FreeBS})}) &= \mathbb{E}(\text{Var}(\hat n_s^{(t, \text{FreeBS})}|Q_\text{B}^{(t)}))\\ &=\mathbb{E}(\sum_{i\in T_s^{(t)}} \frac{1}{q_\text{B}^{(i)}}) - n_s^{(t)}\\ &=\sum_{i\in T_s^{(t)}} \mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) - n_s^{(t)}. \end{split} \end{equation*}
In what follows we derive the formula for $\mathbb{E}(\frac{1}{q_\text{B}^{(i)}})$. For $j$ specific distinct bits in $B$, there exist $j!\tau(n^{(i)}, j)$ ways to map the $n^{(i)}$ distinct user-item pairs that occurred in stream $\Gamma$ before and including time $i$ into these bits such that each bit receives at least one user-item pair, where $\tau(n^{(i)}, j)$, the Stirling number of the second kind~\cite{Abramowitz1964}, is computed as \[ \tau(n^{(i)}, j)=\frac{1}{j!}\sum_{k=0}^{j-1} (-1)^k \binom{j}{k} (j-k)^{n^{(i)}},~0< j \le n^{(i)}. \] In addition, there exist $\binom{M}{j}$ ways to select $j$ distinct bits from $B$, therefore we have \[
P(m_0^{(i)} = M - j|n^{(i)}) = \frac{\binom{M}{j} j!\tau(n^{(i)}, j)}{M^{n^{(i)}}}. \] Then, we have \begin{equation*} \mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) = \sum_{j = 1}^M \frac{M\binom{M}{j} \sum_{k=0}^{j-1} (-1)^k \binom{j}{k} (\frac{j-k}{M})^{n^{(i)}}}{M-j}. \end{equation*} Next, we introduce a method to approximately compute $\mathbb{E}(\frac{1}{q_\text{B}^{(i)}})$. We expand the function $\frac{1}{q_\text{B}^{(i)}}$ by its Taylor series around $\mathbb{E}(q_\text{B}^{(i)})$ as \begin{equation*} \begin{split} \mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) &\approx \mathbb{E}\left(\frac{1}{\mathbb{E}(q_\text{B}^{(i)})} - \frac{ q_\text{B}^{(i)}-\mathbb{E}(q_\text{B}^{(i)})}{(\mathbb{E}(q_\text{B}^{(i)}))^2} + \frac{ (q_\text{B}^{(i)}-\mathbb{E}(q_\text{B}^{(i)}))^2}{(\mathbb{E}(q_\text{B}^{(i)}))^3}\right)\\ &=\frac{1}{\mathbb{E}(q_\text{B}^{(i)})} + \frac{\text{Var}(q_\text{B}^{(i)})}{(\mathbb{E}(q_\text{B}^{(i)}))^3}. \end{split} \end{equation*} From~\cite{Whang1990} (eqs.~(5) and (6) in~\cite{Whang1990}), we easily have $\mathbb{E}(q_\text{B}^{(i)}) \approx e^{-\frac{n^{(i)}}{M}}$ and $\text{Var}(q_\text{B}^{(i)}) \approx \frac{1}{M} e^{-\frac{n^{(i)}}{M}}(1-(1+\frac{n^{(i)}}{M})e^{-\frac{n^{(i)}}{M}})$. Then, we obtain $\mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) \approx e^{\frac{n^{(i)}}{M}} (1+\frac{1}{M}(e^{\frac{n^{(i)}}{M}} - \frac{n^{(i)}}{M} - 1))$. \end{pf}
\subsection{FreeRS: Parameter-Free Register Sharing} \noindent\textbf{Data Structure.} The pseudo-code for FreeRS is shown as Algorithm~\ref{alg:FreeRS}. FreeRS consists of $M$ registers $R[1]$, $\ldots$, $R[M]$, which are initialized to zero. In addition, FreeRS also uses a hash function $h^*(e)$ to randomly map each user-item pair $e=(s,d)$ to an integer in $\{1, \ldots, M\}$ and another function $\rho^*(e)$ that maps $e$ to a random integer in $\{1, 2, \ldots\}$ according to a $Geometric(1/2)$ distribution. Note that $h^*(e)$ and $\rho^*(e)$ are applied to the whole user-item pair $e$, whereas vHLL applies the hash functions $f_1(s), \ldots, f_m(s)$ to \textbf{user $s$} and the functions $h(d)$ and $\rho(d)$ to item $d$.
\noindent\textbf{Update Procedure.} When user-item pair $e=(s,d)$ arrives at time $t$, FreeRS first computes two random variables $h^*(e)$ and $\rho^*(e)$, and then updates $R^{(t)}[h^*(e)]$ as \[ R^{(t)}[h^*(e)]\gets \max\{R^{(t-1)}[h^*(e)], \rho^*(e)\}. \] Let $q_\text{R}^{(t)}$ denote the probability of $e$ changing the value of a register among $R[1], \ldots, R[M]$ at time $t$. Formally, $q_\text{R}^{(t)}$ is defined as \begin{equation*} \begin{split} q_\text{R}^{(t)} &= \sum_{j=1}^M P(h^*(e)=j\wedge R^{(t)}[j] > R^{(t-1)}[j])\\ &= \frac{\sum_{j=1}^M 2^{-R^{(t-1)}[j]}}{M}. \end{split} \end{equation*}
\begin{algorithm} \SetKwFunction{insert}{insert} \SetKwFunction{delete}{delete} \SetKwFunction{continue}{continue} \SetKwFunction{MaxRankEdge}{MaxRankEdge} \SetKwFunction{ComputeEdgeLabel}{ComputeEdgeLabel} \SetKwFunction{UpdateEdgeLabel}{UpdateEdgeLabel} \SetKwFunction{UpdateTriCounts}{UpdateTriCounts} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output}
\BlankLine $R[1, \ldots, M]\gets [0, \ldots, 0]$; $q_\text{R}\gets 1$\; $\hat n_s^\text{(FreeRS)} \gets 0$, $s\in S$\;
\ForEach {$e=(s, d)\in \Gamma$}{
\If {$\rho^*(e) > R[h^*(e)]$}{
$\hat n_s^\text{(FreeRS)}\gets \hat n_s^\text{(FreeRS)} + \frac{1}{q_\text{R}}$\;

$q_\text{R}\gets q_\text{R} + \frac{2^{-\rho^*(e)} - 2^{-R[h^*(e)]}}{M}$\;

$R[h^*(e)]\gets \rho^*(e)$\;
} } \caption{The pseudo-code for FreeRS.}\label{alg:FreeRS} \end{algorithm}
Let $\hat n_s^{(t, \text{FreeRS})}$ denote the cardinality estimate of user $s$ at time $t$. When user-item pair $e=(s,d)$ arrives at time $t$, we update the cardinality estimate of user $s$ as \[ \hat n_s^{(t, \text{FreeRS})}\gets \hat n_s^{(t-1, \text{FreeRS})} + \frac{\mathbf{1}(R^{(t)}[h^*(e)]\ne R^{(t-1)}[h^*(e)])}{q_\text{R}^{(t)}}. \] For any other user $s'\in S\setminus\{s\}$, we keep its cardinality estimate unchanged, i.e., $\hat n_{s'}^{(t, \text{FreeRS})}\gets \hat n_{s'}^{(t-1, \text{FreeRS})}$. Similar to $q_\text{B}^{(t)}$, we compute $q_\text{R}^{(t)}$ incrementally. In detail, we initialize $q_\text{R}^{(1)} = 1$ and incrementally compute $q_\text{R}^{(t+1)}$ as \begin{equation*} \begin{split} q_\text{R}^{(t+1)} &\gets q_\text{R}^{(t)}+\\ &\frac{2^{-\rho^*(e)} - 2^{-R^{(t-1)}[h^*(e)]}}{M} \mathbf{1}(R^{(t)}[h^*(e)]\ne R^{(t-1)}[h^*(e)]). \end{split} \end{equation*} Hence, the time complexity of FreeRS for processing each user-item pair is also $O(1)$.
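Analogously to the FreeBS sketch, the following Python sketch re-implements Algorithm~\ref{alg:FreeRS} with the incremental update of $q_\text{R}$; the estimate is incremented with the value of $q_\text{R}$ before it is updated, matching the definition above. The hash constructions are illustrative assumptions.
\begin{verbatim}
import hashlib
from collections import defaultdict

class FreeRS:
    """Parameter-free register sharing (illustrative sketch of Algorithm 2)."""

    def __init__(self, M, seed=0):
        self.M, self.seed = M, seed
        self.R = [0] * M
        self.qR = 1.0                           # q_R = (1/M) * sum_j 2^{-R[j]}
        self.est = defaultdict(float)

    def _h_rho(self, user, item):
        x = int.from_bytes(
            hashlib.sha1(f"{self.seed}:{user}:{item}".encode()).digest()[:8], "big")
        j = x % self.M
        w = x // self.M
        rho = 1
        while w & 1 == 0 and rho < 60:          # geometric(1/2)
            rho += 1
            w >>= 1
        return j, rho

    def process(self, user, item):
        j, rho = self._h_rho(user, item)
        if rho > self.R[j]:                     # the pair changes a register: it must be new
            self.est[user] += 1.0 / self.qR     # use q_R before it is updated
            self.qR += (2.0 ** (-rho) - 2.0 ** (-self.R[j])) / self.M
            self.R[j] = rho

    def estimate(self, user):
        return self.est[user]
\end{verbatim}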
\noindent\textbf{Error Analysis.} We derive the error of $\hat n_s^{(t, \text{FreeRS})}$ as follows: \begin{theorem}\label{theorem:FreeRS} The expectation and variance of $\hat n_s^{(t, \text{FreeRS})}$ are \[ \mathbb{E}(\hat n_s^{(t, \text{FreeRS})}) = n_s^{(t)}, \] \[ \text{Var}(\hat n_s^{(t, \text{FreeRS})}) = \sum_{i\in T_s^{(t)}}\mathbb{E}(\frac{1}{q_\text{R}^{(i)}}) - n_s^{(t)}\le n_s^{(t)}\left(\mathbb{E}(\frac{1}{q_\text{R}^{(t)}}) - 1\right), \] where \begin{equation*} \begin{split} \mathbb{E}(\frac{1}{q_\text{R}^{(i)}}) &= \sum_{k_1,\ldots,k_M\ge 0} \frac{\sum_{n_1+\ldots+n_M=n^{(i)}}\binom{n^{(i)}}{n_1,\ldots,n_M}\Pi_{j=1}^M \gamma_{n_j, k_j}}{M^{n^{(i)}-1}\sum_{j=1}^M 2^{-k_j}},\\ \end{split} \end{equation*} with \[ \gamma_{n_j, k_j}=\left\{
\begin{array}{ll}
(1-2^{-k_j})^{n_j} - (1-2^{-k_j+1})^{n_j}, & n_j>0, k_j>0 \\
0, & n_j>0, k_j = 0 \text{ or } n_j = 0, k_j > 0 \\
1, & n_j = 0, k_j = 0.
\end{array}
\right. \] $\mathbb{E}(\frac{1}{q_\text{R}^{(i)}})$ is approximately $\frac{1.386 n^{(i)}}{M}$ when $n^{(i)}>2.5 M$. \end{theorem} \begin{pf} Let $\xi_{e}$ denote an indicator equal to 1 when processing a user-item pair $e$ incurs a value change of any $R[1], \ldots, R[M]$, and 0 otherwise. We have \[ \hat n_s^{(t, \text{FreeRS})} = \sum_{i\in T_s^{(t)}} \frac{\xi_{e^{(i)}}}{q_\text{R}^{(i)}}. \] For each $\xi_{e^{(i)}}$, we have \[
\mathbb{E}(\xi_{e^{(i)}}|q_\text{R}^{(i)}) = q_\text{R}^{(i)}, \quad 1\le i\le t, \] \[
\text{Var}(\xi_{e^{(i)}}|q_\text{R}^{(i)}) = \mathbb{E}(\xi_{e^{(i)}}^2|q_\text{R}^{(i)}) - (\mathbb{E}(\xi_{e^{(i)}}|q_\text{R}^{(i)}))^2 = q_\text{R}^{(i)} - (q_\text{R}^{(i)})^2. \] Similar to FreeBS, define \[ Q_\text{R}^{(t)}=\{q_\text{R}^{(1)}, \ldots, q_\text{R}^{(t)}\}. \] Then, we have \begin{equation*}
\mathbb{E}(\hat n_s^{(t, \text{FreeRS})}|Q_\text{R}^{(t)}) = \mathbb{E}(\sum_{i\in T_s^{(t)}} \frac{\xi_{e^{(i)}}}{q_\text{R}^{(i)}} \Bigm| Q_\text{R}^{(t)})= n_s^{(t)}. \end{equation*} Therefore, we obtain \[
\mathbb{E}(\hat n_s^{(t, \text{FreeRS})}) = \mathbb{E}(\mathbb{E}(\hat n_s^{(t, \text{FreeRS})}|Q_\text{R}^{(t)})) = \mathbb{E}(n_s^{(t)}) = n_s^{(t)}. \] Given $Q_\text{R}^{(t)}$, all $\xi_{e^{(i)}}$ ($i\in T_s^{(t)}$) are independent of each other. Similar to FreeBS, the variance of $\hat n_s^{(t, \text{FreeRS})}$ given $Q_\text{R}^{(t)}$ is \begin{equation*}
\text{Var}(\hat n_s^{(t, \text{FreeRS})}|Q_\text{R}^{(t)}) = \sum_{i\in T_s^{(t)}} \frac{1}{q_\text{R}^{(i)}} - n_s^{(t)}. \end{equation*}
It is easily shown that $\text{Var}(\mathbb{E}(\hat n_s^{(t, \text{FreeRS})}|Q_\text{R}^{(t)}))=0$. Using equation $\text{Var}(X) = \text{Var}(\mathbb{E}(X|Y)) + \mathbb{E}(\text{Var}(X|Y))$, we have \begin{equation*} \begin{split}
\text{Var}(\hat n_s^{(t, \text{FreeRS})}) &= \mathbb{E}(\text{Var}(\hat n_s^{(t, \text{FreeRS})}|Q_\text{R}^{(t)}))\\ =&\mathbb{E}(\sum_{i\in T_s^{(t)}} \frac{1}{q_\text{R}^{(i)}}) - n_s^{(t)}=\sum_{i\in T_s^{(t)}} \mathbb{E}(\frac{1}{q_\text{R}^{(i)}}) - n_s^{(t)}. \end{split} \end{equation*}
In what follows we derive the formula for $\mathbb{E}(\frac{1}{q_\text{R}^{(i)}})$. Using $h^*(\cdot)$, FreeRS randomly splits stream $\Gamma$ into $M$ sub-streams $\Gamma_j$, $1\le j\le M$. Each $R[j]$ tracks the maximum value of function $\rho^*(\cdot)$ for user-item pairs in sub-stream $\Gamma_j$. At time $i$, assume that $n_j$ distinct user-item pairs have occurred in $\Gamma_j$. Then, $P(R^{(i)}[j] = k_j|n_j) = \gamma_{n_j, k_j}$. Therefore, \begin{equation*} \begin{split}
&P(R^{(i)}[1] = k_1, \ldots, R^{(i)}[M] = k_M|n^{(i)})=\\ &\frac{\sum_{n_1+\ldots+n_M=n^{(i)}}\binom{n^{(i)}}{n_1,\ldots,n_M}\Pi_{j=1}^M \gamma_{n_j, k_j}}{M^{n^{(i)}}}.\\ \end{split} \end{equation*} An exact expression for $\mathbb{E}(\frac{1}{q_\text{R}^{(i)}})$ is easily derived. However, it is too complex to analyze. Hence, we introduce a method to approximate $\mathbb{E}(\frac{1}{q_\text{R}^{(i)}})$. From~\cite{FlajoletAOFA07}, we have $\mathbb{E}(\frac{\alpha_M M}{q_\text{R}^{(i)}}) = \alpha_M M \mathbb{E}(\frac{1}{q_\text{R}^{(i)}}) \approx n^{(i)}$ for $n^{(i)}>2.5 M$. Therefore, $\mathbb{E}(\frac{1}{q_\text{R}^{(i)}})\approx \frac{n^{(i)}}{\alpha_M M}\approx \frac{1.386 n^{(i)}}{M}$. \end{pf} \subsection{Discussions} \label{sec:discussion} \noindent\textbf{FreeBS vs CSE.} FreeBS outperforms CSE in three aspects: (1) FreeBS can estimate cardinalities up to $\sum_{i=1}^M \frac{M}{i} \approx M\ln M$, which is larger than the maximum cardinality $m\ln m$ allowed by CSE; (2) FreeBS exhibits a smaller estimation error than CSE. From~\cite{TaoWGH17}, we find that the expectation and the variance of the estimate $\hat n_s^{(t, \text{CSE})}$ given by CSE are \[ \mathbb{E}(\hat n_s^{(t, \text{CSE})}) \approx n_s^{(t)} + \frac{1}{2}\left(\mathbb{E}(\frac{1}{q^{(t)}}) e^{\frac{n_s^{(t)}}{m}} - \frac{n_s^{(t)}}{m} - 1\right), \] \[ {\rm Var}(\hat n_s^{(t, \text{CSE})}) \approx m \left(\mathbb{E}(\frac{1}{q^{(t)}}) e^{\frac{n_s^{(t)}}{m}} - \frac{n_s^{(t)}}{m} - 1\right), \] where $q^{(t)} =\frac{U^{(t)}}{M}$ is the fraction of zero bits in the shared bit array at time $t$. We find that CSE exhibits a large bias when $n_s^{(t)}\gg m$, while FreeBS is unbiased. When $m$ is large (e.g., $m$ approaches $M$), FreeBS and CSE perform nearly the same bit setting operations but use their different cardinality estimators. Then, we have $\mathbb{E}(\frac{1}{q^{(t)}}) \approx \mathbb{E}(\frac{1}{q_\text{B}^{(t)}})$. From Theorem~\ref{theorem:FreeBS}, we then have \begin{equation*} \begin{split} {\rm Var}(\hat n_s^{(t, \text{CSE})})&\gtrapprox m \left(\mathbb{E}(\frac{1}{q^{(t)}}) (1+\frac{n_s^{(t)}}{m}) - \frac{n_s^{(t)}}{m} - 1\right)\\
&>n_s^{(t)} \mathbb{E}(\frac{1}{q^{(t)}}) - n_s^{(t)}\gtrapprox{\rm Var}(\hat n_s^{(t, \text{FreeBS})});\ \end{split} \end{equation*} (3) FreeBS has complexity $O(1)$ to update user cardinality estimates each time it observes a new user-item pair, while CSE has complexity $O(m)$.
\noindent\textbf{FreeRS vs vHLL.} FreeRS and vHLL both can estimate cardinalities up to $2^{2^w}$, where $w$ is the number of bits in a register. However, FreeRS outperforms vHLL in two aspects: 1) FreeRS has complexity $O(1)$ to update user cardinality estimates each time it observes a new user-item pair, while vHLL has complexity $O(m)$; 2) FreeRS exhibits a smaller estimation error in comparison with vHLL. From Theorem~\ref{theorem:FreeRS}, then we have $\text{Var}(\hat n_s^{(t, \text{FreeRS})})\le n_s^{(t)}(\mathbb{E}(\frac{1}{q_\text{R}^{(t)}}) - 1)\approx n_s^{(t)}(\frac{n^{(t)}}{M\alpha_M} - 1)<\frac{1.386 n^{(t)} n_s^{(t)}}{M}$, while the variance of vHLL is \begin{equation*} \begin{split} {\rm Var}(\hat n_s^{(t, \text{vHLL})}) &\gtrapprox(\frac{M}{M-m})^2 \times \frac{1.04^2}{m} \times 2n^{(t)} n_s^{(t)} \frac{m}{M}(1-\frac{m}{M})\\ &=\frac{2.163 n^{(t)} n_s^{(t)}}{M-m}>\frac{2.163 n^{(t)} n_s^{(t)}}{M}. \end{split} \end{equation*}
\noindent\textbf{FreeBS vs FreeRS.} We observe that 1) FreeBS is faster than FreeRS. For each coming user-item pair $e$, FreeBS only computes $h^*(e)$ to select and set a bit, but FreeRS needs to compute both $h^*(e)$ and $\rho^*(e)$ to select and update a register; 2) Under the same memory usage, we compare the accuracy of FreeBS using $M$ bits and FreeRS using $M/w$ registers, where $w$ is the number of bits in a register. From Theorems~\ref{theorem:FreeBS} and ~\ref{theorem:FreeRS}, we have \[ \text{Var}(\hat n_s^{(t, \text{FreeBS})}) = \sum_{i\in T_s^{(t)}}\mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) - n_s^{(t)}, \] \[ \text{Var}(\hat n_s^{(t, \text{FreeRS})}) = \sum_{i\in T_s^{(t)}}\mathbb{E}(\frac{1}{q_\text{R}^{(i)}}) - n_s^{(t)}, \] where $\mathbb{E}(\frac{1}{q_\text{B}^{(i)}})\approx e^{\frac{n^{(i)}}{M}}$ and $\mathbb{E}(\frac{1}{q_\text{R}^{(i)}})\approx \frac{1.386 w n^{(i)}}{M} < e^{\frac{n^{(i)}}{M}}$ when $\frac{n^{(i)}}{M}\ge 0.772 w$. Therefore, FreeRS is more accurate than FreeBS for users not appearing among the first $0.772 w M$ distinct user-item pairs presented on stream $\Gamma$. Flajolet et al.~\cite{FlajoletAOFA07} observe that HLL exhibits a large error for estimating small cardinalities. To solve this problem, they view a register of HLL as a bit of LPC and estimate the cardinality based on the fraction of registers that equal 0. When $n^{(i)} \ll M/w$, we easily find that $q_\text{R}^{(i)}$ is approximately computed as the fraction of registers that equal 0 at time $i$, and we have $\mathbb{E}(\frac{1}{q_\text{B}^{(i)}}) < \mathbb{E}(\frac{1}{q_\text{R}^{(i)}})$ because the number of bits in FreeBS is $w$ times larger than the number of registers in FreeRS under the same memory usage. It indicates that FreeBS is more accurate than FreeRS for users whose user-item pairs appear early in stream $\Gamma$.
\section{Evaluation} \label{sec:results} \subsection{Datasets} In our experiments, we used a variety of publicly available real-world datasets, summarized in Table~\ref{tab:datasets}, to evaluate the performance of our methods in comparison with state-of-the-art methods. Dataset sanjose (resp. chicago) consists of one-hour passive traffic traces collected from the equinix-sanjose (resp. equinix-chicago) data collection monitors on March 20, 2014. Twitter, Flickr, Orkut and LiveJournal are graph datasets in which each edge represents a social relationship between two users and may occur more than once. Figure~\ref{fig:CCDF} shows the CCDFs of user cardinalities for all datasets used in our experiments.
\begin{table}[htb] \centering \caption{Summary of datasets used in our experiments. \label{tab:datasets}}
\begin{tabular}{|c|c|c|c|} \hline {\bf dataset}&{\bf $\#$users}&{\bf max-cardinality}&{\bf total cardinality}\\ \hline sanjose~\cite{CAIDAdata} & 8,387,347 & 313,772 & 23,073,907 \\ \hline chicago~\cite{CAIDAdata} & 1,966,677 & 106,026 & 9,910,287 \\
\hline Twitter~\cite{Kwak2010} & 40,103,281 & 2,997,496 & 1,468,365,182 \\ \hline Flickr~\cite{MisloveIMC2007} & 1,441,431 & 26,185 & 22,613,980 \\ \hline Orkut~\cite{MisloveIMC2007} & 2,997,376 & 31,949 & 223,534,301 \\ \hline LiveJournal~\cite{MisloveIMC2007} & 4,590,650 & 9,186 & 76,937,805 \\ \hline \end{tabular} \end{table}
\subsection{Baselines}
FreeBS and CSE~\cite{Yoon2009} are bit-sharing algorithms, while FreeRS and vHLL~\cite{XiaoSIGMETRICS2015} are register-sharing algorithms, where each register consists of $5$ bits, i.e., $w=5$. To compare our methods with these two methods under the same memory size, we let FreeBS and CSE have $M$ bits, whereas FreeRS and vHLL have $M/5$ 5-bit registers. Moreover, both CSE and vHLL use virtual sketches to record the cardinality of each user, and we set $m$ (i.e., the number of bits/registers in the virtual sketch) to be the same for both methods. LPC~\cite{Whang1990} and HyperLogLog++~\cite{HeuleEDBTICDT2013} (short for HLL++) build a sketch for each user $s \in S$ to record the items that $s$ connects to. Specifically, HLL++ is an optimized variant of HyperLogLog, which uses $6$ bits for each register, and implements bias correction and sparse representation strategies to improve cardinality estimation performance. In our experiments, under the same memory size $M$, we let LPC have $\frac{M}{|S|}$ bits and HLL++ have $\frac{M}{6|S|}$ 6-bit registers for each user, respectively.
In this paper, we aim to compute the cardinalities of all users at each time $t=1,2,\ldots$. At each time $t$, enumerating every user that has occurred and computing its cardinality requires time $O(|S^{(t)}|m)$ for the methods CSE, vHLL, LPC, and HLL++, where $S^{(t)}$ is the set of users that occurred before and including time $t$. This is computationally intensive and thus prohibitive for estimating all users' cardinalities over time $t$. To solve this problem, for CSE, vHLL, LPC, and HLL++ we allocate each user $s$ that has occurred a counter $\hat n_s$ to keep track of $s$'s cardinality. For each edge $e^{(t)}=(s^{(t)}, d^{(t)})$ arriving at time $t$, we only estimate the cardinality of user $s^{(t)}$ for CSE, vHLL, LPC, and HLL++, and then update its counter $\hat n_{s^{(t)}}$ and keep the counters of all other users unchanged, which reduces the time complexity from $O(|S^{(t)}|m)$ to $O(m)$. To track all users' cardinalities over time, therefore, all methods FreeRS, FreeBS, CSE, vHLL, LPC, and HLL++ require a counter for each user, and we do not consider this memory usage in our comparisons.
\subsection{Metrics} In our experiments, we use a fine-grained metric \emph{relative standard error} (RSE) to evaluate the performance of estimating the cardinality for any user with a particular cardinality $n$ at time $t$. Smaller RSE indicates better performance of cardinality estimation. Formally, we define \[ \text{RSE}^{(t)} (n) = \frac{1}{n} \sqrt{\frac{\sum_{s \in S}(\hat n_s^{(t)} - n)^2\mathbf{1}(n_s^{(t)}=n)}{\sum_{s \in S}\mathbf{1}(n_s^{(t)}=n)}}. \]
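For reproducibility, a small Python helper computing this metric from per-user estimates and ground-truth cardinalities could look as follows; the function name and the input format are our own choices.
\begin{verbatim}
import math
from collections import defaultdict

def rse_per_cardinality(estimates, truths):
    """Relative standard error grouped by the true cardinality n.

    estimates, truths: dicts mapping user -> estimated / true cardinality at time t.
    Returns a dict mapping n -> RSE^{(t)}(n).
    """
    sq_err, count = defaultdict(float), defaultdict(int)
    for s, n in truths.items():
        sq_err[n] += (estimates.get(s, 0.0) - n) ** 2
        count[n] += 1
    return {n: math.sqrt(sq_err[n] / count[n]) / n for n in count if n > 0}
\end{verbatim}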
\begin{figure}
\caption{CCDFs of user cardinalities.}
\label{fig:CCDF}
\end{figure}
\subsection{Runtime} For CSE, vHLL, LPC and HLL++, the runtime is mainly determined by the number of bits/registers in each user's (virtual) sketch,
because estimating a user's cardinality requires enumerating the $m$ bits/registers in the user's sketch, which takes $O(m)$ time. In contrast, our methods FreeBS and FreeRS have low time complexity $O(1)$ to process each coming element and update the counter of its user. In our experiments, we vary the number of bits/registers in the sketch $m$ to compare the runtime of all six methods. We record the runtime required for processing each element and updating the cardinality of its user, and then average the update time over all users in the datasets. The experimental results are shown in Figure~\ref{fig:runningtime}. As $m$ increases, the runtime of the four methods CSE, vHLL, LPC and HLL++ increases. Our methods FreeBS and FreeRS are substantially faster than the other four methods for all values of $m$. We also notice that CSE is faster than vHLL and FreeBS is faster than FreeRS, because register sharing methods perform more operations than bit sharing methods when processing each element. \begin{figure}
\caption{Runtime of our methods FreeBS and FreeRS in comparison with CSE, vHLL, LPC and HLL++ for different $m$ (i.e., the number of bits/registers in the (virtual) sketch) of a user.}
\label{fig:runningtime}
\end{figure}
\subsection{Accuracy} Whenever an element $e=(s,d)$ arrives, we ran all six methods to estimate the cardinality of the user $s$. Figure~\ref{fig:scatter} shows the experimental results when all elements in the dataset Orkut arrive. In our experiments, we fixed the memory size $M=5 \times 10^8$ bits, and the number of bits/registers in the virtual sketch for CSE/vHLL was set to $m=1,024$. Under the fixed memory size, each user in the dataset Orkut used $167$ bits for LPC and $28$ 6-bit registers for HLL++, respectively. Points close to the diagonal line $\hat n_s^{(t)} = n_s^{(t)}$ indicate better cardinality estimation accuracy. From Figure~\ref{fig:scatter}, we observe that our methods FreeBS and FreeRS are more accurate than the other four methods across different actual cardinalities. CSE and LPC have limited estimation ranges, and LPC performs extremely poorly for larger cardinalities; therefore, we omit the experimental results of LPC in the later experiments. Furthermore, we used the metric RSE to compare the estimation errors of our methods with those of CSE, vHLL and HLL++ at a fine-grained level. Figure~\ref{fig:RSE} shows the RSEs of all methods for all datasets in Table~\ref{tab:datasets}.
We can see that our methods FreeBS and FreeRS are up to 10,000 times more accurate than CSE, vHLL, and HLL++. Bit sharing method FreeBS (resp. CSE) is more accurate than register sharing method FreeRS (resp. vHLL) for characterizing users with small cardinalities. For users with large cardinalities, register sharing method FreeRS (resp. vHLL) outperforms bit sharing method FreeBS (resp. CSE). Specifically, the RSE of CSE first decreases and then increases as the actual cardinality increases. This is because CSE has a small estimation range, i.e., $m \ln m$, which is consistent with the original paper~\cite{Yoon2009}. Meanwhile, HLL++ is more accurate than CSE and vHLL for small cardinalities due to its bias correction strategy. Because each user has fewer registers under HLL++ than under vHLL, HLL++ exhibits larger estimation errors than vHLL for large cardinalities.
\begin{figure}
\caption{(Orkut) Estimated cardinalities vs actual cardinalities for FreeBS, FreeRS, CSE, vHLL, LPC, and HLL++ under the same memory size $M=5 \times 10^{8}$ (bits), where the number of bits/registers in the virtual sketch for CSE/vHLL $m=1,024$.}
\label{fig:scatter}
\end{figure}
\begin{figure*}
\caption{(All datasets) Cardinality estimation accuracy, where memory size $M=5 \times 10^{8}$ (bits), and $m=1,024$ for CSE/vHLL.}
\label{fig:RSE}
\end{figure*}
\begin{figure*}
\caption{(sanjose) Accuracy of detecting super spreaders over time $t$, where $\Delta=5 \times 10^{-5}$, $M=5 \times 10^8$ (bits), and $m=1,024$ for CSE/vHLL.}
\label{fig:Superspreader}
\end{figure*}
\begin{table*}[tb!]
\caption{(All datasets) Performance of detecting super spreaders with $\Delta=5 \times 10^{-5}$, $M=5 \times 10^{8}$, $m=1,024$. CSE reported an empty set of users for Twitter and Orkut due to the limited estimation range, therefore we report their results on Twitter and Orkut as ``N/A".}
\label{tab:Tabsuperspreader}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\textbf{dataset}}&\multicolumn{5}{c|}{\textbf{FNR}} & \multicolumn{5}{c|}{\textbf{FPR}} \\
\cline{2-11}
& \textbf{FreeBS} & \textbf{FreeRS} & \textbf{CSE} & \textbf{vHLL} & \textbf{HLL++} & \textbf{FreeBS} & \textbf{FreeRS} & \textbf{CSE} & \textbf{vHLL} & \textbf{HLL++} \\
\hline
sanjose & 2.54e-3 & \textbf{2.27e-3} & 1.02e-2 & 1.12e-2 & 2.88e-2 & \textbf{1.61e-7} & 1.76e-7 & 1.22e-6 & 1.08e-6 & 3.38e-6 \\
\hline
chicago & 2.61e-3 & \textbf{2.55e-3} & 8.00e-3 & 8.23e-3 & 9.06e-3 & \textbf{2.32e-6} & 2.54e-6 & 8.46e-6 & 8.47e-6 & 1.08e-5 \\
\hline
Twitter & 2.45e-2 & \textbf{2.27e-2} & N/A & 6.38e-2 & 1.04e-1 & 2.99e-7 & \textbf{2.75e-7} & N/A & 6.84e-7 & 1.25e-6 \\
\hline
Flickr & \textbf{5.13e-3} & 5.55e-3 & 1.09e-2 & 1.13e-2 & 1.38e-2 & 1.18e-5 & \textbf{1.04e-5} & 3.57e-5 & 3.87e-5 & 4.13e-5 \\
\hline
Orkut & \textbf{1.47e-2} & 1.53e-2 & N/A & 7.86e-2 & 9.71e-2 & \textbf{3.22e-7} & 3.34e-7 & N/A & 1.97e-6 & 2.52e-6 \\
\hline
LiveJournal & \textbf{4.37e-3} & 4.62e-3 & 1.04e-2 & 9.92e-3 & 1.36e-2 & 4.33e-7 & \textbf{4.15e-7} & 1.22e-6 & 1.13e-6 & 1.78e-6 \\
\hline
\end{tabular} \end{table*}
\subsection{Case Study: Detecting Super Spreaders over Time} We implemented methods FreeBS, FreeRS, CSE, vHLL, and HLL++ to detect super spreaders over time. Here, a super spreader refers to a user connecting to at least $\Delta n^{(t)}$ items at time $t$, where $n^{(t)}$ is the sum of all user cardinalities at time $t$ and $0<\Delta<1$ is a relative threshold. In our experiments, we set memory size $M = 5 \times 10^8$ bits and the number of bits/registers in the virtual sketch $m = 1,024$. We used the two metrics \emph{false negative ratio} (FNR) and \emph{false positive ratio} (FPR) to evaluate performance, where FNR is the ratio of the number of super spreaders not detected to the number of super spreaders, and FPR is the ratio of the number of users that are wrongly detected as super spreaders to the number of all users. Figure~\ref{fig:Superspreader} shows experimental results of all five methods for detecting super spreaders over time in dataset sanjose. We observe that both FreeBS and FreeRS are more accurate than the other three methods for detecting super spreaders over time. For example, FNR and FPR for FreeBS and FreeRS are about $4$ to $20$ times smaller than those of the other three methods. Table~\ref{tab:Tabsuperspreader} shows the results for all datasets when all elements arrive. We notice that our methods FreeBS and FreeRS outperform the other three methods on all datasets.
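A minimal Python sketch of this detection step and the two metrics is given below; applying the same relative threshold directly to the estimated cardinalities is our own simplifying assumption.
\begin{verbatim}
def super_spreaders(cardinalities, delta):
    """Users whose cardinality is at least delta * n^(t), where n^(t) is the sum
    of all user cardinalities in the given dict."""
    threshold = delta * sum(cardinalities.values())
    return {s for s, n in cardinalities.items() if n >= threshold}

def fnr_fpr(estimated, truth, delta):
    """False negative / false positive ratios of super-spreader detection."""
    true_set = super_spreaders(truth, delta)
    detected = super_spreaders(estimated, delta)
    fnr = len(true_set - detected) / max(len(true_set), 1)   # missed / actual
    fpr = len(detected - true_set) / max(len(truth), 1)      # wrongly flagged / all users
    return fnr, fpr
\end{verbatim}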
\section{Related Work} \label{sec:related} Sketch methods use a small amount of memory to quickly build compact summaries of large data streams, and have been successfully used for applications such as heavy hitter detection~\cite{Estan2002,EstanSigcomm2003,CormodeVLDB2003,CormodeSigmod2004,Zhang2004,YuNSDI2013}, heavy change detection~\cite{Krishnamurthy2003,Cormode2003,Schweller2007}, super host detection~\cite{Zhao2005,Cao2009,Yoon2009,Wang2011,WangTIFS2012,WangCN2012}, cardinality distribution estimation~\cite{ChenINFOCOM2009,WangTIFS2014}, network flow size estimation~\cite{Duffield2003,Hohn2003,Yang2007,KumarSIGMETRICS2004,Ribeiro2008,Kumar2005,Kumar2006,LievenS10}, and network traffic entropy estimation~\cite{Lall2006,Zhao2007}. Next, we discuss existing cardinality estimation methods in detail.\\ \noindent\textbf{Estimating data streams' cardinalities.} To estimate the cardinality of a large data stream (i.e., the number of distinct elements in the data stream), Whang et al.~\cite{Whang1990} develop the first sketch method LPC.
LPC can only estimate cardinalities less than $m\ln m$, where $m$ is the size of the LPC sketch. Therefore, it needs to set a large $m$ to handle data streams with large cardinalities. \cite{Estan2003,ChenJASA2011} combine LPC and different sampling methods to enlarge the estimation range. Flajolet and Martin~\cite{Flajolet1985} develop a sketch method FM, which uses a register to estimate the data stream's cardinality and provides a cardinality estimation bounded by $2^w$, where $w$ is the number of bits in the register.
To further improve the accuracy and decrease the memory usage, sketch methods MinCount~\cite{Bar-YossefRANDOM2002}, LogLog~\cite{Durand2003}, HyperLogLog~\cite{FlajoletAOFA07}, HyperLogLog++~\cite{HeuleEDBTICDT2013}, RoughEstimator~\cite{KanePODS2010}, and HLL-TailCut+~\cite{XiaoZC17} are developed to use a list of $m$ registers and compress the size of each register from 32 bits to 5 or 4 bits under the same estimation range of $2^{32}$. \cite{GiroireDAM2009,lumbrosoAOFA2010} develop several cardinality estimators based on order statistics of observed samples. Ting~\cite{TingKDD2014} introduces the concept of an area cutting process to generally model the above sketch methods and provides a martingale-based estimator to further improve the accuracy of these methods. Chen et al.~\cite{Chen2013} extend HyperLogLog to estimate the cardinality over sliding windows. Besides these sketch methods, the sampling methods Wegman's adaptive sampling~\cite{FlajoletComputing1990} and distinct sampling~\cite{GibbonsPVLDB2001} have also been developed for estimating a large stream's cardinality, while Flajolet et al.~\cite{FlajoletAOFA07} reveal that the memory efficiency of these two sampling methods is even worse than that of the original FM method. Recently, \cite{CohenKDD2017,TingKDD2016} developed methods that use the above sketches to estimate the cardinalities of set unions and intersections.
\noindent\textbf{Estimating all user cardinalities.} Significant attention has been paid to developing sketch methods to estimate the cardinalities of network hosts (or users) over high speed links. To achieve a desired accuracy, the above methods require a large number of bits/registers for each host because host cardinalities are unknown in advance and vary over a large range. This is not memory efficient because most hosts have small cardinalities. To solve this problem, \cite{Zhao2005, Yoon2009, WangTIFS2012, XiaoSIGMETRICS2015} develop different virtual sketch methods to compress the LPC/HLL sketches of all hosts into a large bit/register array shared by all hosts. Zhao et al.~\cite{Zhao2005} propose a virtual sketch method, which consists of a list of LPC sketches (i.e., a two-dimensional bit array). For each host, they randomly select $k$ ($k$ is usually set to 2 or 3) LPC sketches from the list shared by all hosts. Note that each LPC in the list may be used by more than one host, which introduces ``noisy'' bits in a host's LPCs. To remove the error introduced by ``noisy'' bits, Zhao et al.~\cite{Zhao2005} develop a method to estimate a host's cardinality based on its $k$ LPC sketches. To further reduce the memory usage, \cite{Yoon2009, WangTIFS2012} generate each host's virtual LPC sketch by selecting $m$ bits from a large one-dimensional bit array at random. All these virtual sketch methods~\cite{Yoon2009, WangTIFS2012} have to set a large value of $m$ (e.g., thousands) to achieve reasonable accuracy for host cardinalities over a large range. This results in large estimation errors, because most hosts have small cardinalities and many bits in their virtual LPC sketches tend to be contaminated by ``noisy'' bits. To reduce ``noisy'' bits in virtual LPC sketches, \cite{TaoWGH17} builds a small regular LPC sketch for each host, which also provides information for estimating the host cardinality. As we mentioned, these LPC-based virtual sketch methods have small estimation ranges bounded by $m\ln m$. To enlarge the estimation range, \cite{XiaoSIGMETRICS2015} develop a sketch method vHLL, which generates a virtual HLL sketch by randomly selecting $m$ registers from a large register array shared by all hosts. To achieve desired accuracy for host cardinalities over a large range, \cite{XiaoSIGMETRICS2015} also needs to set a large value of $m$. However, this results in a high computational cost and large estimation errors for network hosts with small cardinalities, whose virtual HLL sketches include many ``noisy'' registers. In addition, the above virtual sketch methods are customized to non-streaming settings, i.e., they estimate host cardinalities at the end of an interval, and are computationally expensive to extend to streaming settings.
\section{Conclusions and Future Work} \label{sec:conclusions} In this paper, we develop two novel streaming algorithms FreeBS and FreeRS to accurately estimate user cardinalities over time. Compared to existing bit/register sharing methods, which use (i.e., select) only $m$ bits/registers for each user, FreeBS/FreeRS allows the number of bits/registers used by a user to increase dynamically as its cardinality grows over time, and each user can use all shared bits/registers. Our methods are therefore capable of estimating user cardinalities over a large range. For example, existing bit sharing methods can only be used to estimate user cardinalities over the range $[0, m\ln m]$. Our method FreeBS enlarges the range to $[0, M\ln M]$, where $M$ is the total number of bits/registers used by all users. In addition, our algorithms FreeBS and FreeRS exploit dynamic properties of shared bits/registers to significantly improve estimation accuracy. They are simple yet effective, and sharply reduce the time complexity of computing all user cardinalities to $O(1)$ each time they observe a new user-item pair. We conduct experiments on real-world datasets, and the experimental results demonstrate that our methods FreeBS and FreeRS significantly outperform state-of-the-art methods in terms of accuracy and computational time. In the future, we plan to extend our methods to applications such as anomaly monitoring in SDN routers.
\section*{Acknowledgment}
The research presented in this paper is supported in part by National Key R\&D Program of China (2018YFC0830500), National Natural Science Foundation of China (U1301254, 61603290, 61602371), the Ministry of Education\&China Mobile Research Fund (MCM20160311), the Natural Science Foundation of Jiangsu Province (SBK2014021758), 111 International Collaboration Program of China, the Prospective Joint Research of Industry-Academia-Research Joint Innovation Funding of Jiangsu Province (BY2014074), Shenzhen Basic Research Grant (JCYJ20160229195940462, JCYJ20170816100819428), China Postdoctoral Science Foundation (2015M582663), Natural Science Basic Research Plan in Shaanxi Province of China (2016JQ6034).
\balance
\end{document}
\begin{document}
\title{\TheTitle}
\begin{abstract} Isogeometric analysis (IGA) has become one of the most popular methods for the discretization of partial differential equations motivated by the use of NURBS for geometric representations in industry and science. A crucial challenge lies in the solution of the discretized equations, which we discuss in this paper with a particular focus on PDE-constrained optimization discretized using IGA. The discretization results in a system of large mass and stiffness matrices, which are typically very costly to assemble. To reduce the computation time and storage requirements, low-rank tensor methods have become a promising tool. We present a framework for the assembly of these matrices in low-rank form as the sum of a small number of Kronecker products. For assembly of the smaller matrices only univariate integration is required. The resulting low rank Kronecker product structure of the mass and stiffness matrices can be used to solve a PDE-constrained optimization problem without assembling the actual system matrices. We present a framework which preserves and exploits the low-rank Kronecker product format for both the matrices and the solution. We use the block AMEn method to efficiently solve the corresponding KKT system of the optimization problem. We show several numerical experiments with 3D geometries to demonstrate that the low-rank assembly and solution drastically reduces the memory demands and computing times, depending on the approximation ranks of the domain. \end{abstract}
\begin{keywords} Isogeometric Analysis, optimal control, low rank decompositions, tensor train format \end{keywords}
\begin{AMS} 65F10, 65F50, 15A69, 93C20 \end{AMS} \section{Motivation}
Isogeometric Analysis (IgA) is a relatively new discretization technique to give an approximate solution to a problem posed by a partial differential equation (PDE) on a given domain $\Omega$. It was introduced by Hughes, Cottrell and Bazilevs in 2005 \cite{CAD}.
In Isogeometric Analysis the physical domain $\Omega$ and the solution space for solving a PDE via the Galerkin method \cite{strang} are parameterized by the same spline functions, typically B-splines or NURBS (non-uniform rational B-splines). These basis functions are globally defined and have large overlapping supports depending on their degrees. This leads to a global representation of the physical domain, and the discretization of the PDEs has a high computational complexity that increases exponentially with the problem's dimension \cite{Mantzaflaris_space_time}.
Recently, a lot of effort has been made to find strategies to overcome this drawback and efficiently assemble the arising system matrices. Here, a major focus lies on exploiting the tensor product structure of the basis functions and lowering the overall computational cost of the basis function quadrature, e.g.\ via finding new quadrature rules \cite{Hiemstra, Hughes} or sum factorization \cite{Antolin}.
To reduce the complexity of the integration and ultimately reduce the overall computation time and storage requirements, Mantzaflaris et al. \cite{angelos1,angelos2} developed a low rank tensor method, which exploits the tensor structure of the basis functions and separates the variables of the integrals. The arising system matrices can then be represented in a compact manner as a sum of Kronecker products of smaller matrices which are assembled via univariate integration, lifting the curse of dimensionality from the integration. This is accomplished via an interpolation step and a low rank representation of the resulting coefficient tensor.
For two-dimensional settings the low rank approximation can be easily realized by a singular value decomposition of the coefficient matrix. However, in higher dimensions we need to decompose a higher-order tensor which is more challenging, both computationally and conceptually, as there exist many different decompositions and definitions of ranks in the dimensions greater than two.
In this paper we combine the low rank method of Mantzaflaris et al. with low rank Tensor Train (TT) calculations \cite{osel-tt-2011, DoOs-dmrg-solve-2011}. Exploiting the tensor product nature of the arising interpolation we can calculate a low rank TT approximation without prior assembly of the full coefficient tensor by means of the alternating minimal energy (AMEn) method \cite{amen}. We further utilize this method to ultimately solve a large scale optimal control problem in a compact low rank block format, exploiting the Kronecker product structure of the system.
We consider an optimal control problem with a parabolic PDE constraint of the form \begin{align}
\min_u&& \frac{1}{2}\int_0^T\int_\Omega (y - \hat{y})^2 \,\mathrm{d} x\,\mathrm{d} t &+ \frac{\beta}{2}\int_0^T\int_\Omega u^2 \,\mathrm{d} x\,\mathrm{d} t&& \label{equation:optimization1} \\ \mbox{s.t.}&& y_t - \Delta y &=u && \mbox{ in } [0,T]\times \Omega, \label{equation:optimization2}\\ &&y &= 0 && \mbox{ on } [0,T]\times \partial \Omega, \label{equation:optimization3} \end{align} with a desired state $\hat{y}$ and control $u$ on a given domain $\Omega$ parameterized by B-splines or NURBS, as described later on. The discretization of (\ref{equation:optimization1}) - (\ref{equation:optimization3}) in this paper will be performed by isogeometric analysis and the workhorse is the representation of two bilinear forms, the mass term $a_m$ and the stiffness term $a_s$, in a discretized low rank format.
We will briefly review the basic ingredients for isogeometric analysis to clarify the terminology and notations used throughout the paper in Section \ref{section:basics}. In Section \ref{section:LowRank} we review the previously mentioned low rank approach of Mantzaflaris et al. and present a way to exploit the tensor product structure to quickly find a low rank approximation using the TT format. We then show how an optimal control problem of the format (\ref{equation:optimization1}) - (\ref{equation:optimization3}) is discretized using IgA and state the resulting discrete saddle point problem in Section \ref{section:optimization}. The discretization results in a very large linear system and in Section \ref{section:LRoptimization} we exploit the derived low rank representation to solve this system in a compact format making use of the iterative Block AMEn method \cite{bdos-sb-2016,ds-navier-2017}.
\textcolor{black}{The performance of the low rank scheme is illustrated by various examples in Section \ref{section:examples}. First, we show the performance for approximating both mass and stiffness matrices in the low rank form for domains with different ranks. We then use these approximations to solve computationally challenging PDE-constrained optimization problems in a low rank tensor train format.}
\section{Basics for IgA} \label{section:basics}
In isogeometric analysis, a geometry is represented exactly using a set of B-splines or NURBS functions. The same basis functions are then used to build the solution space to solve a PDE on the geometric domain \cite{CAD}. The term \textit{B-spline} is short for basis spline and denotes a special type of recursively defined spline. Every spline function with a chosen degree, smoothness, and domain partition can be uniquely represented as a linear combination of B-splines with the same degree, smoothness, and domain partition \cite{boor}.
A set of B-splines is uniquely defined by its degree and knot vector. Choosing a degree $p$, we define a vector $\xi$, called the open knot vector, as $\xi= \{\hat{x}_1, \hdots, \hat{x}_{n+p+1}\}$ with \begin{equation} 0= \hat{x}_1 = \hdots = \hat{x}_{p+1} < \hat{x}_{p+2} \leq \hdots \leq \hat{x}_n < \hat{x}_{n+1} = \hdots = \hat{x}_{n+p+1} = 1, \label{equation:knotVector} \end{equation} where the end knots appear $p+1$ times and all other knots may be repeated up to multiplicity $p$. The parameter $n$ determines the number of resulting B-splines $\beta_{i,p}$ with $i=1,\hdots,n$.
For each knot vector $\xi$ as in (\ref{equation:knotVector}), the according B-splines $\beta_{i,p}$ of degree $p$, with $i = 1,\hdots, n$, are uniquely defined by the recursion \begin{align} \beta_{i,0}(\hat{x}) &= \begin{cases} 1 & \mbox{if } \hat{x}_i \leq \hat{x} < \hat{x}_{i+1}, \\ 0 & \mbox{otherwise}, \end{cases} \\ \beta_{i,j}(\hat{x}) &= \frac{\hat{x}-\hat{x}_i}{\hat{x}_{i+j} - \hat{x}_{i}} \beta_{i, j-1}(\hat{x}) + \frac{ \hat{x}_{i+j+1} - \hat{x}}{\hat{x}_{i+j+1} - \hat{x}_{i+1}} \beta_{i+1, j-1}(\hat{x}), \end{align} where $j = 1,2,\hdots, p$ and $i = 1, \hdots, n$. Each resulting B-spline $\beta_{i,p}$ has the local support $[\hat{x}_i,\hat{x}_{i+p+1}]$.
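The recursion above translates directly into code. The following NumPy sketch (an illustration of the Cox--de Boor recursion only, not the implementation used in our experiments; all function and variable names are ours) evaluates all B-splines of a given degree and open knot vector at a single point, with the usual convention that terms with a zero denominator are dropped.
\begin{verbatim}
import numpy as np

def bspline_basis(knots, p, xhat):
    """All B-splines of degree p for the open knot vector `knots`,
    evaluated at a single point xhat via the Cox-de Boor recursion.
    Terms with a zero denominator (repeated knots) are treated as zero."""
    knots = np.asarray(knots, dtype=float)
    n = len(knots) - p - 1                       # number of B-splines
    # degree 0: indicator functions of the knot spans
    b = np.array([1.0 if knots[i] <= xhat < knots[i + 1] else 0.0
                  for i in range(len(knots) - 1)])
    if xhat >= knots[-1]:                        # close the last nonempty span
        b[np.nonzero(knots[:-1] < knots[1:])[0][-1]] = 1.0
    for j in range(1, p + 1):                    # raise the degree step by step
        b_new = np.zeros(len(knots) - j - 1)
        for i in range(len(b_new)):
            val = 0.0
            if knots[i + j] > knots[i]:
                val += (xhat - knots[i]) / (knots[i + j] - knots[i]) * b[i]
            if knots[i + j + 1] > knots[i + 1]:
                val += (knots[i + j + 1] - xhat) \
                       / (knots[i + j + 1] - knots[i + 1]) * b[i + 1]
            b_new[i] = val
        b = b_new
    return b                                     # length n, sums to 1 on [0,1]
\end{verbatim}
For instance, for $\xi = \{0,0,0,0.5,1,1,1\}$ and $p=2$ the routine returns four values that sum to one at every $\hat{x} \in [0,1]$, reflecting the partition-of-unity property of B-splines.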
We use $\mathbb{S}_\xi^p$ to denote the spline space spanned by the B-splines with degree $p$ and knot vector $\xi$. To construct a B-spline curve in a $D$-dimensional space, the B-splines of $\mathbb{S}_\xi^p$ are combined with given values, called the \textit{control points}, $C_1, \hdots, C_n \in \mathbb{R}^D$ .
Given a B-spline space $\mathbb{S}_\xi^p$ and $n$ control points $C_i \in \mathbb{R}^D$, the curve $F:\mathbb{R} \rightarrow \mathbb{R}^D$ defined by \begin{equation} F(\hat{x}) = \sum^n_{i=1} C_i \beta_{i,p}(\hat{x}) \label{equation:B-spline} \end{equation} is called a B-spline curve of degree $p$.
By using a B-spline curve as defined in Equation (\ref{equation:B-spline}), conic geometries cannot be represented exactly \cite{NURBS}. Conic shapes can only be represented by rational functions. Therefore, a generalization of the B-splines was developed, the so-called NURBS (Non-uniform rational B-splines) \cite{piegl}. The term \emph{non-uniform} refers to the fact that NURBS are usually defined by a knot vector with non-uniformly sized knot spans.
NURBS are used in a wide spectrum of computational applications, especially in CAD or CGI environments, where they have become the standard tool to model any kind of required shape \cite{NURBS2}. By adding weights $w_i$ to the B-spline functions and rationalizing the curve, a NURBS curve is defined as \begin{equation} N(\hat{x}) = \sum^n_{i=1} C_i \frac{\beta_i(\hat{x}) w_i}{\sum^n_{j=1} \beta_j(\hat{x}) w_j } . \end{equation}
To represent arbitrary $D$-dimensional geometries with B-splines or NURBS as necessary in isogeometric analysis, univariate spline spaces are combined to multivariate spaces via tensor product.
Consider $D$ different univariate spline spaces $\mathbb{S}_{\xi_d}^{p_d}$, each having one-dimensional variables $\hat{x}^{(d)} \in \mathbb{R}$, with $d = 1, \hdots ,D$. The knot vector and the spline degree of the according univariate space are denoted by $\xi_d$ and $p_d$.
We obtain a $D$-variate tensor product spline space $\mathbb{S}^D = \mathbb{S}_{\xi_1}^ {p_1} \otimes \hdots \otimes \mathbb{S}_{\xi_D}^{p_D}$ with variables $\hat{x} = (\hat{x}^{(1)}, \hdots, \hat{x}^{(D)})^T$ as a space of piecewise polynomial functions with degree $p = (p_1, \hdots, p_D)$. Its elements are denoted by \begin{equation} \beta_\mathbf{i} (\hat{x}) = \prod_{d=1}^D \beta_{i_d}^{(d)} (\hat{x}^{(d)}). \end{equation}
Given such a basis $\mathbb{S}^D$, we define a B-spline (or NURBS) geometry mapping $G:\hat{\Omega} \rightarrow \Omega$ from the $D$-dimensional unit cube $\hat{\Omega}:=[0,1]^D$ onto an arbitrary geometric shape $\Omega \subset \mathbb{R}^D$ as \begin{equation} \label{equation:geometryMapping} G(\hat{x}) = \sum_{\mathbf{i} \in I} C_\mathbf{i} \beta_{\mathbf{i}}(\hat{x}) = C:B(\hat{x}), \end{equation}
with control points $C_{\mathbf{i}} \in \mathbb{R}^D$ and multi-index $\mathbf{i} \in I = \{ (i_1, \hdots, i_D) \, | \, i_d = 1,\hdots, n_d ,\, d=1,\hdots, D\}$. Here $C:B(\hat{x})$ denotes the Frobenius product of a tensor $C \in \mathbb{R}^{D \times n_1 \times \hdots \times n_D}$, holding all the control points, and a tensor $B(\hat{x})\in \mathbb{R}^{n_1\times \hdots \times n_D}$ holding all the basis functions of $\mathbb{S}^D$ evaluated in $\hat{x}$ in a suitable order.
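To make the notation concrete, the evaluation of the geometry mapping (\ref{equation:geometryMapping}) amounts to forming the rank-one basis tensor $B(\hat{x})$ from the univariate basis values and contracting it with the control point tensor $C$. A minimal NumPy sketch (names are ours; it is only meant to illustrate the Frobenius product $C:B(\hat{x})$):
\begin{verbatim}
import numpy as np
from functools import reduce

def basis_tensor(univariate_values):
    """Rank-one tensor B(xhat) with entries prod_d beta_{i_d}^(d)(xhat^(d)),
    given the list of univariate basis value vectors beta^(d)(xhat^(d))."""
    return reduce(np.multiply.outer, univariate_values)

def geometry_map(C, univariate_values):
    """G(xhat) = C : B(xhat) for a control point tensor C of shape
    (D, n_1, ..., n_D); returns a point in R^D."""
    B = basis_tensor(univariate_values)
    return np.array([np.tensordot(C[k], B, axes=B.ndim)
                     for k in range(C.shape[0])])
\end{verbatim}
Combined with a univariate evaluation routine such as the sketch above, \texttt{univariate\_values} would be the list $[\beta^{(1)}(\hat{x}^{(1)}),\hdots,\beta^{(D)}(\hat{x}^{(D)})]$.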
Now that we have a spline representation of the geometry $\Omega$, we can use the same spline functions to parameterize the solution space of a PDE on the geometry. For the discretization of the optimal control problem (\ref{equation:optimization1}) - (\ref{equation:optimization3}) we need to look at the bilinear forms of two stationary problems.
The first important bilinear form, given by \begin{equation} a_m(u,v) = \langle u, v \rangle_2 = \int_\Omega uv \,\mathrm{d} x, \end{equation} is called the mass term and results as the weak formulation of the boundary value problem $u(x) = f(x)$ in $\Omega$.
The second bilinear form we will consider is called the stiffness term, \begin{align} a_s(u,v) &= -\int_\Omega (\Delta u) v \,\mathrm{d} x \\ &= \int_\Omega \nabla u \cdot \nabla v \,\mathrm{d} x, \label{equation:stiffness} \end{align} and results from the Poisson equation, $- \Delta u(x) = f(x)$ in $\Omega$.
We want to produce approximations to the solutions $u \in H^1(\Omega)$ with discrete functions $u_h \in V_h \subset H^1(\Omega)$ constructed with B-splines. In isogeometric analysis we use the same splines from the geometry mapping (\ref{equation:geometryMapping}) to parameterize the solution space $V_h$, \begin{equation}
V_h = \mbox{span}\{\beta_\mathbf{i} \circ G^{-1} \, \, : \, \, \mathbf{i} \in I \}, \end{equation} with an index set $I$ such that $\beta_{\mathbf{i}}$ are the elements of $\mathbb{S}^D$. The functions in $V_h$ are linear combinations of the basis functions with coefficients $u_\mathbf{i}$, \begin{equation}
u_h = \sum_{\mathbf{i} \in I} u_{\mathbf{i}} ( \beta_{\mathbf{i}} \circ G^{-1}). \label{equation:coefficientSet} \end{equation} The space $V_h$ is now used for the Galerkin discretization of the mass and stiffness terms, resulting in the discrete mass term, \begin{equation} a_{m,h}(u_h, v_h) = \int_\Omega u_h(x) v_h(x) \,\mathrm{d} x = \int_{\hat{\Omega}} \sum_{\mathbf{i} \in I} u_\mathbf{i} \beta_\mathbf{i}(\hat{x}) \sum_{\mathbf{j} \in I} v_\mathbf{j} \beta_\mathbf{j}(\hat{x}) \omega(\hat{x}) \,\mathrm{d} \hat{x}, \end{equation}
with $\omega(\hat{x}) = | \det \nabla G(\hat{x})| $ and the discrete stiffness term \begin{equation} a_{s,h}(u_h,v_h) = \int_{\Omega} (\nabla u_h(x) ) \cdot \nabla v_h(x) \,\mathrm{d} x = \int_{\hat{\Omega}}(Q(\hat{x}) \sum_{\mathbf{i} \in I} u_\mathbf{i} \nabla \beta_\mathbf{i}(\hat{x})) \cdot\sum_{\mathbf{j} \in I} v_\mathbf{j} \nabla \beta_\mathbf{j}(\hat{x}) \,\mathrm{d} \hat{x}, \end{equation}
with $Q(\hat{x}) = \big (\nabla G(\hat{x})^{T} \nabla G(\hat{x})\big )^{-1} | \det \nabla G(\hat{x})|$ \cite{angelos1}.
This has to hold for all $v_h$ from the test space $V_h$, hence for all combinations of $v_\mathbf{i}\beta_\mathbf{i}$. However, as the basis functions $\beta_\mathbf{i}$ are linearly independent, it is sufficient if the equation holds for each $\beta_\mathbf{j}$ separately. Thus, we can rewrite the discrete bilinear forms as matrix-vector products $Au$ with the vectorization $u$ of the coefficient set $u_{\mathbf{i}}$, $\mathbf{i}\in \mathcal{I}$, where $A$ is realized as a mass matrix $M$ with elements \begin{equation} \label{equation:massMatrix} M_{\mathbf{i},\mathbf{j}} = \int_{\hat{\Omega}} \beta_\mathbf{i} \beta_\mathbf{j} \omega \,\mathrm{d} \hat{x} \end{equation} or a stiffness matrix $K$ with elements \begin{equation} \label{equation:stiffnessMatrix}
K_{\mathbf{i},\mathbf{j}} = \int_{\hat{\Omega}}( Q \nabla \beta_\mathbf{i} )\cdot \nabla \beta_\mathbf{j} \,\mathrm{d} \hat{x} = \sum_{k,l=1}^D \int_{\hat{\Omega}} q_{k,l} \frac{\partial}{\partial \hat{x}_l} \beta_\mathbf{i} \frac{\partial}{\partial \hat{x}_k} \beta_\mathbf{j} \,\mathrm{d} \hat{x}. \end{equation}
During the derivation of the mass and stiffness matrices, we did not pay attention to the tensor product structure of $\mathbb{S}^D$. We can either arrange $M$ and $K$ as matrices or as tensors of size $(\mathbf{n},\mathbf{n}) = (n_1,\hdots,n_D,n_1,\hdots,n_D)$. With this tensor notation the mass and stiffness matrices in a multi-dimensional setting are represented in a compact way.
Let $B$ be the tensor of order \textcolor{black}{$D$} and size $\mathbf{n} = (n_1,\hdots,n_D)$ holding every $\beta_{\mathbf{i}}\in \mathbb{S}^D$ of equation (\ref{equation:geometryMapping}). All combinations of elements $\beta_\mathbf{i} \beta_\mathbf{j}$ which make up the integrands of $M$ \textcolor{black}{are included in} the tensor product $B \otimes B$.
With this consideration we write the mass term as a tensor $M$ \begin{equation} \label{equation:massTensor1}
M = \int_{\hat{\Omega}} \omega B \otimes B \,\mathrm{d} \hat{x} \, \, \, \in \mathbb{R}^{\mathbf n \times \mathbf n}, \end{equation} with elements coming from equation (\ref{equation:massMatrix}). The stiffness term can be treated similarly. With the tensor gradient we can write it as a tensor $K$ \begin{align} \label{equation:stiffnessTensor1}
K &= \int_{\hat{\Omega}} [Q \cdot( \nabla \otimes B)] \cdot (\nabla \otimes B) \,\mathrm{d} \hat{x} \, \, \, \in \mathbb{R}^{\mathbf n \times \mathbf n}\\
&= \sum_{k,l=1}^D \int_{\hat{\Omega}} q_{k,l} \frac{\partial}{\partial \hat{x}_l} B \otimes \frac{\partial}{\partial \hat{x}_k} B \,\mathrm{d} \hat{x} \end{align} whose elements are of the form (\ref{equation:stiffnessMatrix}). The associated mass and stiffness matrices are obtained by reordering the indices since the elements of the mass and stiffness tensors match the elements of the matrices. So far this tensor structure was not exploited but we will need it to reduce the complexity of the assembly procedure.
\section{Low-rank IGA} \label{section:LowRank}
Looking at the mass and stiffness matrices (\ref{equation:massMatrix}) and (\ref{equation:stiffnessMatrix}), we see that their entries are the product of univariate B-splines and a $D$-variate weight function, $\omega(\hat{x})$ or $Q(\hat{x})$. The scalar $\omega(\hat{x}) = |\mbox{det } \nabla G(\hat{x})|$ and the matrix $Q(\hat{x}) = (\nabla G(\hat{x}))^{-1} (\nabla G(\hat{x}))^{-T} \omega(\hat{x}) \in \mathbb{R}^{D\times D}$ are determined by the geometry mapping. As Mantzaflaris et al. suggest in \cite{angelos1}, we can approximate these weight functions by some combination of univariate functions via interpolation, \begin{equation}
\omega(\hat{x}) \approx \omega_1(\hat{x}^{(1)}) \cdots \omega_D(\hat{x}^{(D)}). \end{equation} For the low rank approximation of the mass and stiffness matrix, we then approximate the arising multidimensional integrals as products of univariate integrals. The integrands are separable after interpolating the weight functions. To further reduce the computation time and storage requirements of the mass and stiffness matrix calculation, the resulting interpolating function is approximated with low rank methods giving low rank approximations of the system matrices \cite{angelos2,angelos1}.
To do so, we interpolate the weight functions by a combination of univariate B-splines of higher order, denoted by the spline space $\tilde{\mathbb{S}}^D$ with suitable knot vectors $\tilde{\xi}_d$ and degrees $\tilde{p}_d$ with $d=1,\hdots,D$. The weight function $\omega(\hat{x})$ of the mass matrix is interpolated as \begin{equation}
\omega(\hat{x}) \approx \sum_{\mathbf{j}\in \mathcal{J}} W_{\mathbf{j}}\tilde{\beta}_{\mathbf{j}}(\hat{x})=W:\tilde{B}(\hat{x}), \end{equation} where $\tilde{\beta}_\mathbf{j}(\hat{x})$ are the elements of the spline space $\tilde{\mathbb{S}}^D$ and $\tilde{B}(\hat{x})$ is the tensor holding all $\tilde{\beta}_\mathbf{j}$ ordered according to the index set $\mathcal{J}$. The weight tensor $W$ has the same dimension as the spline space $\tilde{\mathbb{S}}^D$, being $(\tilde{n}_1,\hdots,\tilde{n}_D)$, and we get its entries by interpolating the weight function in a sufficient number of points, namely $\tilde{n} = \tilde{n}_1\cdots\tilde{n}_D$.
As the derivatives of a B-spline or NURBS are again B-splines or NURBS, the weight function $\omega(\hat{x}) = |\det \nabla G(\hat{x})|$ is again a B-Spline (NURBS) of degree $Dp+1$ \cite{angelos1}. Thus we can get an exact interpolation if we choose basis functions of degree $Dp+1$.
Now we can construct canonical low rank representations of the weight tensor, \begin{equation} \label{equation:canonicalLowRank}
W \approx \sum_{r=1}^R \bigotimes_{d=1}^D w_r^{(d)} =: W_R, \end{equation} with $w_r^{(d)} \in \mathbb{R}^{\tilde{n}_d}$. With this we get a low rank representation of the weight function, \begin{equation} \label{equation:weightLowRank}
\omega(\hat{x}) \approx W_R : \tilde{B}(\hat{x}) = \sum_{r=1}^R \prod_{d=1}^D w_r^{(d)} \cdot \tilde{\beta}^{(d)}(\hat{x}^{(d)}). \end{equation} Here $\tilde{\beta}^{(d)}(\hat{x}^{(d)}) \in \mathbb{R}^{\tilde{n}_d}$ denotes the vector holding all univariate basis functions evaluated in $\hat{x}^{(d)}$, and ``$\cdot$'' is the scalar product. The entries of the mass matrix can be approximated using this low rank representation and we can calculate each entry as the sum of products of univariate integrals, \begin{align}
M_{\mathbf{i},\mathbf{j}} &= \int_{\hat{\Omega}} \prod_{d=1}^D \beta_{i_d}^{(d)} \beta_{j_d}^{(d)} \sum_{r=1}^R\prod_{d=1}^D w_r^{(d)} \cdot \tilde{\beta}^{(d)} \,\mathrm{d} \hat{x} \\
&= \sum_{r=1}^R \prod_{d=1}^D \int_0^1\beta_{i_d}^{(d)}\beta_{j_d}^{(d)} w_r^{(d)} \cdot \tilde{\beta}^{(d)} \,\mathrm{d} \hat{x}^{(d)}. \end{align} With these univariate integrals we define a univariate mass matrix, which depends on some weight function $\omega$, as \begin{equation} \label{equation:massMatrix1D}
M^{(d)}(\omega) = \int_0^1 B^{(d)} \otimes B^{(d)} \omega \, \,\mathrm{d} \hat{x}^{(d)}, \end{equation} where $B^{(d)}\in \mathbb{R}^{n_d}$ is the vector holding all $n_d$ univariate B-splines of $\mathbb{S}^{p_d}_{\xi_d}$. According to the tensor representation in equation (\ref{equation:massTensor1}), we can finally write the mass matrix as a sum of Kronecker products of small univariate mass matrices (\ref{equation:massMatrix1D}) with $\omega = w_r^{(d)} \cdot \tilde{\beta}^{(d)}$, \begin{equation} \label{equation:massFinal}
M = \sum_{r=1}^R \bigotimes_{d=1}^D M^{(d)}(w_r^{(d)} \cdot \tilde{\beta}^{(d)}). \end{equation}
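As an illustration of (\ref{equation:massMatrix1D}) and (\ref{equation:massFinal}), the following NumPy sketch assembles the univariate weighted mass matrices by Gauss--Legendre quadrature and combines them into the global matrix. It is a simplification in two respects: the quadrature rule is applied globally on $[0,1]$ rather than per knot span, and the interpolation weights $w_r^{(d)}$ are assumed to be given; the computations reported later rely on the TT-Toolbox in \textsc{Matlab} instead.
\begin{verbatim}
import numpy as np
from functools import reduce

def univariate_mass(basis_eval, tilde_basis_eval, w_rd, n_quad=5):
    """M^(d)(omega) from (massMatrix1D) with omega = w_r^(d) . tilde_beta^(d);
    basis_eval(x) returns the n_d B-spline values, tilde_basis_eval(x) the
    values of the interpolation splines.  Global Gauss-Legendre rule on [0,1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n_quad)
    nodes, weights = 0.5 * (nodes + 1.0), 0.5 * weights
    M = np.zeros((len(basis_eval(nodes[0])),) * 2)
    for x, w in zip(nodes, weights):
        b = basis_eval(x)
        M += w * np.dot(w_rd, tilde_basis_eval(x)) * np.outer(b, b)
    return M

def mass_matrix(univariate_factors):
    """Sum of Kronecker products (massFinal); univariate_factors[r] is the list
    [M^(1), ..., M^(D)] of univariate mass matrices for the r-th rank term."""
    return sum(reduce(np.kron, factors) for factors in univariate_factors)
\end{verbatim}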
The same procedure can be applied to the weight function of the stiffness matrix $Q(\hat{x})$. Note that $Q(\hat{x}) \in \mathbb{R}^{D\times D}$, thus we have to apply the interpolation to each entry of $Q$. Similarly to (\ref{equation:weightLowRank}), for each entry of $Q$ we get the canonical low rank representation \begin{equation}
q_{k,l}(\hat{x}) \approx V_{k,l,R} : \tilde{B}(\hat{x}) = \sum_{r=1}^R \prod_{d=1}^D v_{k,l,r}^{(d)} \cdot \tilde{\beta}^{(d)} (\hat{x}^{(d)}), \quad \mbox{ for all } k,l=1,\hdots,D, \end{equation} with $v_{k,l,r}^{(d)} \in \mathbb{R}^{\tilde{n}_d}$.
Using this low rank method, we approximate the entries of the stiffness matrix as \begin{align}
K_{\mathbf{i},\mathbf{j}} &= \sum_{k,l=1}^D \int_{\hat{\Omega}} \Big ( \prod_{d=1}^D \delta(l,d) \beta_{i_d}^{(d)} \delta(k,d) \beta_{j_d}^{(d)} \Big )\sum_{r=1}^R \prod_{d=1}^D v_{k,l,r}^{(d)} \cdot \tilde{\beta}^{(d)} \,\mathrm{d} \hat{x},\\
& = \sum_{k,l=1}^D \sum_{r=1}^R \prod_{d=1}^D \int_0^1 \delta(l,d) \beta_{i_d}^{(d)} \delta(k,d)\beta_{j_d}^{(d)} v_{k,l,r}^{(d)}\cdot \tilde{\beta}^{(d)} \,\mathrm{d} \hat{x}^{(d)}, \end{align} where $\mathbf{j}=(j_1,\ldots,j_D)$, and $\delta(k,d)$ denotes the operator \textcolor{black}{acting on $f$ as} \begin{equation}
\delta(k,d) f = \begin{cases} \frac{\partial f}{\partial \hat{x}_d} &\mbox{ if } k = d, \\ f & \mbox{ otherwise}. \end{cases} \end{equation} To get a representation for the stiffness matrix corresponding to the mass matrix representation in (\ref{equation:massFinal}), we define the $D^2$ univariate stiffness matrices dependent on some weight function \textcolor{black}{$q^{(d)}(\hat{x}^{(d)})$} as \begin{equation}
K_{k,l}^{(d)}(\textcolor{black}{q^{(d)}}) = \int_0^1 \left (\delta(l,d) B \right ) \otimes \left (\delta(k,d) B \right ) \textcolor{black}{q^{(d)}} \,\mathrm{d} \hat{x}^{(d)}, \quad \mbox{ for } k,l=1,\hdots,D. \end{equation} With this and \textcolor{black}{$q^{(d)} = v_{k,l,r}^{(d)}\cdot \tilde{\beta}^{(d)}$} the final low rank tensor representation of the stiffness matrix is \begin{equation} \label{equation:stiffnessFinal}
K = \sum_{k,l=1}^D \sum_{r=1}^R \bigotimes_{d=1}^D K_{k,l}^{(d)}(v_{k,l,r}^{(d)} \cdot \tilde{\beta}^{(d)}). \end{equation}
Both (\ref{equation:massFinal}) and (\ref{equation:stiffnessFinal}) rely on an efficient low rank representation of $W$ and $V_{k,l}$ and we need suitable strategies to perform this task. For a two dimensional setting $W$ and $V_{k,l}$ are matrices and a singular value decomposition (SVD) can be applied easily to find a low rank representation \cite{angelos2}. We approximate the matrix $W \in \mathbb{R}^{n_1\times n_2}$ as \begin{equation}
W = U \Sigma V^T \approx \sum_{r=1}^R u_r \sigma_r v_r^T = \sum_{r=1}^R (u_r \sqrt{\sigma_r}) \otimes (v_r \sqrt{\sigma_r}), \end{equation} with $U\in \mathbb{R}^{n_1\times n_1}$, $V \in \mathbb{R}^{n_2\times n_2}$ and $\Sigma \in \mathbb{R}^{n_1 \times n_2}$ the rectangular matrix holding the sorted singular values $\sigma_i$, $i=1,\hdots,\min(n_1,n_2)$, on its diagonal. The low rank approximation is obtained by truncating the $\min(n_1,n_2)-R$ smallest singular values and the corresponding columns of $U$ and $V$.
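In code, the two dimensional truncation is a direct application of the SVD (a NumPy sketch; the rank $R$ is either prescribed or chosen from a tolerance on the singular values):
\begin{verbatim}
import numpy as np

def lowrank_2d(W, R):
    """Rank-R approximation of the weight matrix W; returns factor matrices
    A and B with columns u_r*sqrt(sigma_r) and v_r*sqrt(sigma_r), W ~ A @ B.T."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    scale = np.sqrt(s[:R])
    return U[:, :R] * scale, Vt[:R, :].T * scale
\end{verbatim}
The Frobenius norm of the resulting error is the root of the sum of squares of the discarded singular values, which is how a prescribed truncation tolerance can be enforced.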
In higher dimensional settings the decomposition becomes more challenging, and different types of low rank tensor approximations can be applied, as well as partial decompositions, e.g.\ into a univariate and a bivariate integration in 3D settings \cite{angelos3}.
\textcolor{black}{For the low rank tensor approximation of a $D$-dimensional tensor as in Equation \eqref{equation:canonicalLowRank} there exists a multitude of possible approximations, e.g. the higher order singular value decomposition (HOSVD) or the canonical polyadic decomposition (CPD). For our purpose, however, the tensor train (TT) decomposition is best suited with respect to simplicity and robustness, and we will proceed with the TT format in the rest of the paper. }
A tensor $W$ is said to be in TT format, if it can be written as \begin{align} W(i_1,\hdots,i_D) = W_1(i_1)\cdots W_D(i_D), \label{eq:tt} \end{align} where $W_d(i_d)$ is an $R_{d-1}\times R_d$ matrix for each fixed $i_d$, $1 \leq i_d \leq n_d$ and $R_0 = R_D = 1$ \cite{osel-tt-2011}.
By rearranging the matrices $W_d(i_d)$ for $i_d=1,\hdots, n_d$ into $D$ tensors of sizes $R_{d-1}\times n_d \times R_d$ we can rewrite the TT-format into a canonical low rank representation as desired in (\ref{equation:canonicalLowRank}), \begin{equation} \label{equation:TTranks}
W = \sum_{r_1=1}^{R_1}\cdots \sum_{r_D=1}^{R_D} \bigotimes_{d=1}^D \mbox{vec}(W_d(r_{d-1},:,r_d)). \end{equation}
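For reference, a compact TT-SVD in the spirit of \cite{osel-tt-2011} reads as follows (a NumPy sketch with a relative truncation tolerance; names are ours, and the computations reported in Section \ref{section:examples} rely on the TT-Toolbox instead):
\begin{verbatim}
import numpy as np

def tt_svd(W, tol=1e-10):
    """Decompose a full D-dimensional array W into TT cores of shape
    (R_{d-1}, n_d, R_d) with R_0 = R_D = 1, by successive truncated SVDs."""
    dims = W.shape
    cores, rank = [], 1
    C = W.reshape(rank * dims[0], -1)
    for d in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r_new = max(1, int(np.sum(s > tol * s[0])))
        cores.append(U[:, :r_new].reshape(rank, dims[d], r_new))
        C = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[d + 1], -1)
        rank = r_new
    cores.append(C.reshape(rank, dims[-1], 1))
    return cores

def tt_entry(cores, index):
    """Evaluate W(i_1, ..., i_D) = W_1(i_1) ... W_D(i_D) from the TT cores."""
    v = np.ones((1, 1))
    for core, i in zip(cores, index):
        v = v @ core[:, i, :]
    return v[0, 0]
\end{verbatim}
The middle dimensions of the returned cores are the TT ranks $R_1,\hdots,R_{D-1}$, and \texttt{tt\_entry} reproduces $W(i_1,\hdots,i_D)$ up to the truncation tolerance.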
To interpolate the weight function, we \textcolor{black}{inherently} need to solve a large system of equations $\omega(\hat{X}) = W:\tilde{B}(\hat{X})$, where $\hat{X}$ denotes the set of $n = n_1\cdots n_D$ interpolation points. This equation can be rewritten into \begin{equation} \label{equation:weightEquationSystem}
\mbox{vec}(\omega(\hat{X})) = \underbrace{\left(B^{(1)}(\hat{X}^{(1)}) \otimes \hdots \otimes B^{(D)}(\hat{X}^{(D)}) \right )}_{A} \mbox{vec}(W). \end{equation} \textcolor{black}{The matrix $A$ in \eqref{equation:weightEquationSystem} can be very large for a direct solution. However, we can first approximate $\omega(\hat{X})$ in a tensor decomposition, and then use the Kronecker structure of $A$ for an efficient computation of a tensor decomposition of $W$. Indeed, assuming that we have a TT format $$ \omega(\hat{X}) = \sum_{r_1=1}^{R_1}\cdots \sum_{r_D=1}^{R_D} \bigotimes_{d=1}^D \mbox{vec}(\omega_d(r_{d-1},:,r_d)), $$ we can write the TT format \eqref{equation:TTranks} for $W$ in the form \begin{equation} \label{equation:weightEquationSolve} W = \sum_{r_1=1}^{R_1}\cdots \sum_{r_D=1}^{R_D} \bigotimes_{d=1}^D \left[B^{(d)}(\hat{X}^{(d)})\right]^{-1} \mbox{vec}(\omega_d(r_{d-1},:,r_d)), \end{equation} which requires solving $D$ linear systems of sizes $n_1,\ldots,n_D$, respectively. The TT approximation for $\omega(\hat{X})$ could be precomputed by the TT-SVD \cite{osel-tt-2011} or the TT-Cross \cite{ot-ttcross-2010} methods. We refrain from doing so as the computational time is rather small for the full $\omega(\hat{X})$. }
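Concretely, once a TT decomposition of the sampled values $\omega(\hat{X})$ is available (here obtained from the full sample tensor, e.g.\ with the TT-SVD sketch above), equation \eqref{equation:weightEquationSolve} is realized by solving one small univariate collocation system per dimension. A NumPy sketch (names are ours; \texttt{B\_list[d]} stands for the square matrix $B^{(d)}(\hat{X}^{(d)})$ of univariate basis functions evaluated at the interpolation points):
\begin{verbatim}
import numpy as np

def interpolate_weights_tt(omega_cores, B_list):
    """Turn TT cores of omega(X_hat) into TT cores of the weight tensor W by
    applying [B^(d)(X_hat^(d))]^{-1} along the mode dimension of core d."""
    W_cores = []
    for core, B_d in zip(omega_cores, B_list):
        r_left, n_d, r_right = core.shape
        # solve B_d X = core for all (r_left, r_right) rank pairs at once
        rhs = core.transpose(1, 0, 2).reshape(n_d, r_left * r_right)
        sol = np.linalg.solve(B_d, rhs)
        W_cores.append(sol.reshape(n_d, r_left, r_right).transpose(1, 0, 2))
    return W_cores
\end{verbatim}
Each call to \texttt{np.linalg.solve} handles all rank combinations of the $d$-th core simultaneously, so the overall cost is dominated by $D$ small dense solves.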
This strategy and the efficient representation of $M$ and $K$ allow us to tackle PDE-constrained optimal control problems next.
\section{A PDE-constrained optimization model problem} \label{section:optimization} We recall the optimal control problem, \begin{align}
\min_{y,u}&& \frac{1}{2}\int_0^T\int_\Omega (y - \hat{y})^2 \,\mathrm{d} x\,\mathrm{d} t &+ \frac{\beta}{2}\int_0^T\int_\Omega u^2 \,\mathrm{d} x \,\mathrm{d} t && \\ \mbox{s.t.}&& y_t - \Delta y &=u && \mbox{ in } [0,T]\times \Omega, \\ &&y &= 0 && \mbox{ on } [0,T]\times \partial \Omega, \label{equation:zeroBoundary} \\ && y(t=0,\cdot) &= 0 && \mbox{ on } \Omega \end{align} to a desired state $\hat{y}$ with control $u$ on a given geometry $\Omega$ and time frame $[0,T]$.
We want to solve this by discretizing in both time and space, resulting in a large saddle point problem \cite{saddlePoint, FEM}. Using an implicit Euler scheme for the time discretization of the PDE and the rectangle rule for the time integrals in the objective leads to the time-discrete problem
\begin{align}\label{equation:timeDiscrete}
\min_{y,u} && \sum_{k=1}^{N_t} \frac{\tau}{2} \Big ( \int_{\Omega} (y_k-\hat{y}_k)^2 \,\mathrm{d} x &+ \beta \int_{\Omega} u_k^2 \,\mathrm{d} x \Big ) && \\
\mbox{s.t.} && \frac{y_{k} - y_{k-1}}{\tau} - \Delta y_{k} &= u_{k} && \mbox{ in } \Omega, \mbox{ for } k=1,\hdots,N_t, \mbox{ with } y_0 = 0, \\
&& y_k &= 0 && \mbox{ on } \partial \Omega, \mbox{ for } k=1,\hdots,N_t, \end{align} with the number of time steps $N_t$ corresponding to the time step size $\tau = T/{N_t}$ and continuous solution $y_k$ in each time step $k$.
Using the Galerkin-based spatial discretization as described in Section \ref{section:basics} leads to the discrete quadratic problem \begin{align}
\min_{y,u}&& \sum_{k=1}^{N_t}\frac{\tau}{2} \big ( (y_k-\hat{y}_k)^TM(y_k-\hat{y}_k) &+ \beta u_k^TMu_k \big) &&\\ \mbox{s.t.}&& \frac{M y_{k}-My_{k-1}}{\tau} + Ky_{k} &= Mu_{k} &&\mbox{ for } k = 1,\hdots,N_t, \end{align} where $M$ and $K$ are the mass and stiffness matrix of the chosen discretization. Here the zero boundary conditions (\ref{equation:zeroBoundary}) are incorporated into $M$ and $K$ by omitting the boundary nodes. Slightly abusing the notation from the time-continuous problem (\ref{equation:timeDiscrete}), the states are collected in the vector $y = [y_1,\hdots,y_{N_t}]^T$ with each state $y_k$ being the vector of the corresponding coefficients (\ref{equation:coefficientSet}) and accordingly for the control parameters $u_k$. Note that $y_k$ and $u_k$ are vectors of appropriate dimensionality.
A local minimum of the discretized problem satisfies the \textit{Karush-Kuhn-Tucker} (KKT) conditions \cite{opti2}. The KKT conditions state that if $(y^*,u^*)$ is a local minimum which satisfies a certain constraint qualification, then there exists a multiplier vector $\lambda^*$ such that the \textit{Lagrangian} of the problem, here \begin{multline}
\mathcal{L}(y,u,\lambda ) = \sum_{k=1}^{N_t}\biggl ( \frac{\tau}{2} \big ( (y_k-\hat{y}_k)^TM(y_k-\hat{y}_k) + \beta u_k^TMu_k \big) \\
+ \lambda_k^T \big (M y_{k}-My_{k-1} + \tau Ky_{k}-\tau Mu_{k} \big)\biggr ), \end{multline} has a saddle point in the local minimum $(y^*, u^*, \lambda^*)$, \begin{equation}
\nabla \mathcal{L}(y^*,u^*,\lambda^*) = 0. \end{equation} For our discrete optimization problem this results in the conditions \begin{align}
0 &= \nabla_{y_k}\mathcal{L}(y,u,\lambda) = \tau M (y_k-\hat{y}_k) - M\lambda_{k+1} + (M +\tau K)\lambda_k, \\
0 &= \nabla_{u_k} \mathcal{L}(y,u,\lambda) = \tau \beta M u_k - \tau M \lambda_k, \\
0 &= \nabla_{\lambda_k} \mathcal{L}(y,u,\lambda) = M (y_k-y_{k-1}) + \tau K y_k-\tau Mu_k, \end{align} for $k = 1,\hdots, N_t$.
We can rewrite these equations into an equation system \begin{equation} \label{equation:KKTSystem}
\begin{bmatrix} \tau \mathcal{M} & 0 & \mathcal{K}^T \\ 0 &\tau \beta \mathcal{M} & -\tau \mathcal{M} \\ \mathcal{K} & -\tau \mathcal{M} & 0 \end{bmatrix} \begin{bmatrix} y \\ u \\ \lambda \end{bmatrix} = \begin{bmatrix} \tau \mathcal{M}\hat{y} \\ 0 \\ 0 \end{bmatrix}, \end{equation} where $\mathcal{M} = \mathcal{I}_{N_t} \otimes M$ and $\mathcal{K} = \mathcal{I}_{N_t} \otimes \tau K + C \otimes M$, with $\mathcal{I}_{N_t}$ the $N_t\times N_t$ identity matrix and $C$ given by \begin{equation}
C = \begin{bmatrix} 1 & 0 & 0 & \hdots & 0 \\ -1 & 1 & 0 & \hdots & 0 \\ 0 & -1 & 1 & \hdots & 0 \\ \vdots & & \ddots & \ddots&\\ 0 & \hdots & 0 & -1 & 1 \end{bmatrix}. \end{equation} The resulting equation system is a saddle point problem as described in \cite{saddlePoint,stoll1,stoll2}.
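For small problem sizes the structure of (\ref{equation:KKTSystem}) can be verified directly with sparse Kronecker products. The following SciPy sketch (an illustration only; the low rank solver of the next section never forms these matrices, and all function names are ours) builds $C$, the blocks $\mathcal{M}$ and $\mathcal{K}$, and the full saddle point matrix:
\begin{verbatim}
import scipy.sparse as sp

def time_coupling(N_t):
    """Lower bidiagonal matrix C of the implicit Euler scheme."""
    return sp.eye(N_t, format="csr") - sp.eye(N_t, k=-1, format="csr")

def kkt_blocks(M, K, N_t, tau):
    """Blocks of (KKTSystem): M_cal = I x M, K_cal = I x (tau K) + C x M."""
    I = sp.eye(N_t, format="csr")
    C = time_coupling(N_t)
    M_cal = sp.kron(I, M, format="csr")
    K_cal = sp.kron(I, tau * K, format="csr") + sp.kron(C, M, format="csr")
    return M_cal, K_cal

def kkt_matrix(M_cal, K_cal, tau, beta):
    """Full saddle point matrix of (KKTSystem)."""
    Z = sp.csr_matrix(M_cal.shape)
    return sp.bmat([[tau * M_cal, Z,                  K_cal.T],
                    [Z,           tau * beta * M_cal, -tau * M_cal],
                    [K_cal,       -tau * M_cal,       Z]], format="csr")
\end{verbatim}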
\section{Low-rank solvers for the PDE-constrained optimization problem} \label{section:LRoptimization} The saddle point problem (\ref{equation:KKTSystem}) typically becomes very large, depending on the number of time steps and refinement in the spatial discretization. By exploiting the tensor product structure for both the solution and the coefficients from Section \ref{section:LowRank}, we can reduce the problem to smaller linear systems on the elements of individual TT blocks.
As we can represent the low rank mass and stiffness matrices as sums of Kronecker products, we can rewrite \begin{align}
\label{eq:M_kron}
\mathcal{M} &= \mathcal{I}_{N_t} \otimes \left ( \sum_{r=1}^R \bigotimes_{d=1}^D M_r^{(d)} \right ),\\
\label{eq:K_kron}
\mathcal{K} &= \mathcal{I}_{N_t} \otimes \left ( \sum_{k,l=1}^D \sum_{r=1}^R \bigotimes_{d=1}^D K_{k,l,r}^{(d)} \right ) + C \otimes \left ( \sum_{r=1}^R \bigotimes_{d=1}^D M_r^{(d)} \right ). \end{align} With this, each block of (\ref{equation:KKTSystem}) becomes a sum of Kronecker products of small matrices. This structure can be preserved and exploited in appropriate linear solvers, such as the Alternating Linear Scheme (ALS) \cite{holtz-ALS-DMRG-2012}, the Density Matrix Renormalization Group \cite{schollwock-2011,jeckelmann-dmrgsolve-2002} and the Alternating Minimal Energy (AMEn) method \cite{amen}. However, the indefinite saddle point structure of the matrix in \eqref{equation:KKTSystem} might yield instabilities in the vanilla versions of these algorithms. We use an extended Block AMEn method (implemented in \textit{amen\_block\_solve.m} in the TT-Toolbox \cite{tt-toolbox}), which preserves the block structure in \eqref{equation:KKTSystem}, and hence the numerical stability.
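The operation these alternating solvers rely on is the product of such sums of Kronecker products with a vector, which can be evaluated factor by factor. A generic dense NumPy sketch (names are ours; it assumes the row-major ordering of the unknowns that matches \texttt{reshape} in NumPy):
\begin{verbatim}
import numpy as np

def kron_matvec(factors, x):
    """y = (A_1 kron ... kron A_D) x without forming the Kronecker product.
    factors[d] is n_d x n_d; x has length n_1*...*n_D (row-major ordering)."""
    dims = [A.shape[1] for A in factors]
    X = x.reshape(dims)
    for d, A in enumerate(factors):
        X = np.tensordot(A, X, axes=([1], [d]))     # contract mode d
        X = np.moveaxis(X, 0, d)                    # put the result axis back
    return X.reshape(-1)

def kron_sum_matvec(terms, x):
    """y = sum_r (A_r^(1) kron ... kron A_r^(D)) x."""
    return sum(kron_matvec(factors, x) for factors in terms)
\end{verbatim}
With this, applying $\mathcal{M}$ or $\mathcal{K}$ reduces to a sequence of small matrix products instead of one product with a huge matrix; the Block AMEn method builds on exactly this structure.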
This algorithm aims to approximate all solution components $y,u,\lambda$ in the same representation, called Block TT decomposition \cite{dkos-eigb-2014}. Let us collect $y,u,\lambda$ into a matrix \begin{equation} f = \begin{bmatrix}f_1 & f_2 & f_3\end{bmatrix} = \begin{bmatrix}y & u & \lambda\end{bmatrix}, \label{eq:w_yup} \end{equation} where the components are referred to as $f_{\ell}$, $\ell=1,2,3$. The Block TT decomposition incorporates the $\ell$-index into one of the factors: instead of \eqref{eq:tt}, we write \begin{equation}
f_{\ell}(i_1,\ldots,i_D) = F_{1}(i_1) \cdots F_{d-1}(i_{d-1}) \cdot \hat F_{d}(i_d,\ell) \cdot F_{d+1}(i_{d+1}) \cdots F_{D}(i_D)
\label{eq:btt} \end{equation} for some $1\le d \le D$. We can put the component enumerator $\ell$ into an arbitrary TT factor using the singular value decomposition. Suppose we want to move $\ell$ from the factor $d$ to $d+1$. Consider $\hat F_{d}$ as an $R_{d-1} n_d \times 3 R_d$ matrix, $ \hat F_{d}(r_{d-1},i_d;~\ell,r_d) $ and compute the truncated SVD $$ \hat F_{d} \approx U \Sigma V^\top. $$ Now we call $U$ the $d$-th TT factor instead of $\hat F_{d}$, and multiply $\Sigma V^\top$ with the $(d+1)$-th factor, \begin{align} \label{eq:svd1} F_d(r_{d-1},i_d,r_d')={}&U(r_{d-1},i_d;~r_d'), \\ \label{eq:svd2} \hat F_{d+1}(r_d',i_{d+1},\ell,r_{d+1})={}&\sum_{r_d=1}^{R_d}\Sigma V^\top (r_d';~\ell,r_d) F_{d+1}(r_d,i_{d+1},r_{d+1}). \end{align} We have obtained the same representation as \eqref{eq:btt} with $\ell$ sitting in the $(d+1)$-th factor. This process can be continued further or reversed in order to place $\ell$ in an arbitrary factor.
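In code, \eqref{eq:svd1}--\eqref{eq:svd2} become one reshape, one truncated SVD and one contraction. A NumPy sketch (the storage order $(R_{d-1}, n_d, \ell, R_d)$ of the block core and all names are our conventions):
\begin{verbatim}
import numpy as np

def move_block_index(F_hat_d, F_dp1, tol=1e-10):
    """F_hat_d has shape (R_{d-1}, n_d, L, R_d) with L components (here L = 3),
    F_dp1 has shape (R_d, n_{d+1}, R_{d+1}).
    Returns the new orthogonal core F_d and the block core F_hat_{d+1}."""
    R_lm1, n_d, L, R_d = F_hat_d.shape
    A = F_hat_d.reshape(R_lm1 * n_d, L * R_d)              # matricization
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    F_d = U[:, :r].reshape(R_lm1, n_d, r)                  # new orthogonal core
    SV = (s[:r, None] * Vt[:r]).reshape(r, L, R_d)         # carry Sigma V^T on
    F_hat_dp1 = np.einsum('rlq,qms->rmls', SV, F_dp1)      # fold into core d+1
    return F_d, F_hat_dp1
\end{verbatim}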
A state-of-the-art technique for directly computing the factors of a TT decomposition is the Alternating Linear Scheme \cite{holtz-ALS-DMRG-2012,DoOs-dmrg-solve-2011}. We can observe that the TT representation is linear with respect to the elements of each factor. Indeed, introduce the following $(n_1 \cdots n_D) \times R_{d-1} n_d R_d$ \emph{frame} matrix: \begin{equation}
\begin{split}
F_{\neq d}(i_1,\ldots,i_D;~r_{d-1},j_d,r_d)
& = F_{1}(i_1) \cdots F_{d-1}(i_{d-1},r_{d-1}) \\
& \cdot \delta_{i_d,j_d} \\
& \cdot F_{d+1}(r_d,i_{d+1}) \cdots F_{D}(i_D),
\end{split}
\label{eq:frame} \end{equation} where $\delta_{i_d,j_d}$ is the identity matrix with respect to the indices $i_d,j_d$. In case of the Block TT decomposition \eqref{eq:btt}, we assume that we choose the same $d$ for both the position of $\ell$ in \eqref{eq:btt} and the position of the identity matrix in \eqref{eq:frame}. We can then observe that \begin{equation} f_{\ell} = F_{\neq d} \cdot \mbox{vec} (\hat F_{d}(\ell)). \label{eq:ttlin} \end{equation} This linearity allows us to project the original problem into a subspace spanned by the columns of $F_{\neq d}$. Iterating over all $d=1,\ldots,D$, we obtain the ALS algorithm. This method starts from some initial guess in the low-rank TT representation, and hence it never encounters the original (prohibitively large) tensors.
The Block AMEn method \cite{bdos-sb-2016,ds-navier-2017} projects each of the \emph{submatrices} of \eqref{equation:KKTSystem} onto the frame matrix individually. For each selected $d=1,\ldots,D$, we compute the elements of $\hat F_{d}$ from the following reduced KKT system: \begin{equation} \label{eq:KKT_red}
\begin{bmatrix} \tau F_{\neq d}^T \mathcal{M}F_{\neq d} & 0 & F_{\neq d}^T \mathcal{K}^T F_{\neq d} \\ 0 &\tau \beta F_{\neq d}^T \mathcal{M}F_{\neq d} & -\tau F_{\neq d}^T\mathcal{M}F_{\neq d} \\ F_{\neq d}^T\mathcal{K}F_{\neq d} & -\tau F_{\neq d}^T\mathcal{M}F_{\neq d} & 0 \end{bmatrix}
\begin{bmatrix} \mbox{vec}~\hat F_{d}(1) \\ \mbox{vec}~\hat F_{d}(2) \\ \mbox{vec}~\hat F_{d}(3) \end{bmatrix} =
\begin{bmatrix} \tau F_{\neq d}^T \mathcal{M}\hat{y} \\ 0 \\ 0 \end{bmatrix}. \end{equation} This system is small (each submatrix is now of size $R_{d-1} n_d R_d$), and can be solved efficiently by e.g. MINRES. Moreover, since $F_{\neq d}$ inherits the TT decomposition of $f$, and the system matrices $\mathcal{M}$ and $\mathcal{K}$ have the tensor product structure \eqref{eq:M_kron}, \eqref{eq:K_kron}, the reduced matrices $F_{\neq d}^T \mathcal{M}F_{\neq d}$ and $F_{\neq d}^T \mathcal{K}F_{\neq d}$ can be assembled efficiently using the multiplication of tensor trains factor by factor \cite{osel-tt-2011,holtz-ALS-DMRG-2012}. Having solved \eqref{eq:KKT_red}, we plug the new factor $\hat F_{d}$ back into the block TT decomposition \eqref{eq:btt}, which provides an updated approximation to $y,u$ and $\lambda$ through \eqref{eq:w_yup}. In order to prepare the next ALS step we move the enumerator $\ell$ to the next factor using SVD (e.g. \eqref{eq:svd1}--\eqref{eq:svd2} in the forward sweep $d\rightarrow d+1$), and recompute the corresponding frame matrix using the new $F_{d}$ factor. The singular value decomposition makes the appropriate matricization of $F_{d}$ orthogonal, such that the whole frame matrix $F_{\neq d}$ is orthogonal in each step. This ensures invertibility of the projected matrix in \eqref{eq:KKT_red}.
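To illustrate the local solve only, the following sketch sets up the reduced system \eqref{eq:KKT_red} from already projected matrices and calls MINRES; the efficient assembly of $F_{\neq d}^T\mathcal{M}F_{\neq d}$ and $F_{\neq d}^T\mathcal{K}F_{\neq d}$ from the TT factors is omitted here, and the function names are ours.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import minres

def solve_reduced_kkt(M_red, K_red, rhs_y, tau, beta):
    """Solve the projected saddle point system (eq:KKT_red) for the three
    component blocks vec(F_d(1)), vec(F_d(2)), vec(F_d(3)).
    M_red = F^T M F and K_red = F^T K F are small dense matrices,
    rhs_y = tau * F^T (M y_hat)."""
    m = M_red.shape[0]
    Z = np.zeros((m, m))
    A = np.block([[tau * M_red, Z,                  K_red.T],
                  [Z,           tau * beta * M_red, -tau * M_red],
                  [K_red,       -tau * M_red,       Z]])
    b = np.concatenate([rhs_y, np.zeros(m), np.zeros(m)])
    x, info = minres(A, b)          # symmetric indefinite system
    return np.split(x, 3)           # y-, u- and lambda-block of the core
\end{verbatim}
Since the reduced matrix is symmetric indefinite whenever $M$ and $K$ are symmetric, MINRES is a natural choice; a preconditioner for this small system is an open question we return to in the conclusion.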
\section{Numerical experiments} \label{section:examples}
The performance of the low rank tensor train method highly depends on the geometry, as the interpolation becomes more challenging and the ranks grow with increasing complexity of the geometry. We conduct numerical experiments of different complexity to show the advantages of our method compared to the full assembly of the stiffness matrix, before combining the assembly with an optimal control problem to show the performance for large scale saddle point problems.
For our numerical experiments we used \textsc{Matlab} R2018b with the TT-Toolbox \cite{tt-toolbox} on a desktop computer with an \textsc{Intel} Core i7-4770 Quad-Core processor running at $4\times3400$ MHz with 32 GB of RAM.
\subsection{Assembly with TT method}
First, we will assemble stiffness matrices for different geometries in the low rank format and compare the assembly with a full standard assembly performed by the isogeometric analysis toolbox GeoPDEs 3.0 \cite{geoPDEs} in \textsc{Matlab}. The assembly is compared for different levels of refinement and for each refinement we insert 4 additional knots per knot section in each spatial dimension. We use the same Gauss-Legendre quadrature rule with five quadrature nodes in each spatial direction for both assemblies. For the mass matrix we can get an exact interpolation using a spline space of degree $2p+1$ with $p$ being the degree of the original splines \cite{angelos2}. We use the same degree for the stiffness matrix assembly in our experiments. However, note that we can always increase the degree of the interpolating splines to get a higher accuracy if desired.
\textcolor{black}{Solving \eqref{equation:weightEquationSolve} results in a low rank solution for the given domain and the required truncation tolerance. Hence, the scheme exhibits low-ranks for simpler domains, such as the domain considered next.} The first domain is a three dimensional quarter annulus as shown in Figure \ref{figure:geometry1}. The weight function $Q$ of this geometry is very simple and can be approximated by only one combination of basis functions, thus giving us a rank of 1 for each entry of $Q$. \textcolor{black}{The TT-SVD detects this low rank without prior knowledge of the low rank nature of the geometry, so we can assemble} the stiffness matrix from only one combination of univariate stiffness matrices (\ref{equation:stiffnessFinal}). We compare the assembly with the full assembly as performed by the geoPDEs toolbox \cite{geoPDEs} and additionally compare the method with an assembly using the CPD method implemented in \cite{tensorlab}. Note that we have to specify the desired rank of the system beforehand to use CPD. Here we chose rank 1, due to the prior knowledge of the low rank nature of the geometry. We use the same quadrature rule for all three assemblies.
\begin{figure}
\caption{Rank 1 domain}
\label{figure:geometry1}
\caption{Time comparison for stiffness matrix assembly}
\label{figure:assembly1_Time}
\caption{Difference to full assembly stiffness matrix}
\label{figure:assembly1_Diff}
\caption{Storage requirements}
\label{figure:assembly1_Storage}
\caption{Comparison between full assembly and low rank assembly of the stiffness matrix for the quarter annulus domain}
\label{figure:assembly1}
\end{figure}
Figure \ref{figure:assembly1_Time} shows that the TT method is a lot faster than the full assembly and the CPD method, especially for a high number of h-refinements which corresponds to a high number of basis elements. Here the advantage of the TT method over the CPD method lies in the solution of equation (\ref{equation:weightEquationSystem}), which can be efficiently done avoiding full assembly \textcolor{black}{by exploiting the Kronecker product structure} yielding a result already in the low rank Tensor Train format. For the CPD method on the other hand we solve (\ref{equation:weightEquationSystem}) to get a full tensor and \textcolor{black}{compute} a CPD of this fully assembled tensor \textcolor{black}{using the ALS method. A cross approximation method for CPD could reduce the timings in Figure \ref{figure:assembly1_Time}, but it is significantly less developed and understood than the TT cross scheme, and we are not testing it here.}
Figure \ref{figure:assembly1} further shows that both the CPD and the Tensor Train method approximate the fully assembled stiffness matrix very well. The graph in Figure \ref{figure:assembly1_Diff} depicts the relative difference of the matrices in the Frobenius norm, \begin{equation}
\mbox{diff} = \frac{\| S - \tilde{S} \|_F}{\|S\|_F}. \end{equation} Note that the CPD method and the TT method find the same decompositions here. Figure \ref{figure:assembly1_Storage} shows the advantages with regard to storage requirements. For the low rank method we only need to store a small number of small sparse matrices instead of storing the large stiffness matrix, reducing the required memory drastically; if a high refinement is desired, this effect is amplified.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
tolerance & TT-ranks & $q_{1,1}$ & $q_{1,2}$ & $q_{1,3}$ & $q_{2,2}$ & $q_{2,3}$ & $q_{3,3}$ & $\omega$\\
\hline
$10^{-10}$ & $R_1$ & 2 & 2 & 1 & 2 & 1 & 11 & 2\\
& $R_2$ & 2 & 1 & 1 & 1 & 1 & 3 & 1\\
\hline
$10^{-7}$ & $R_1$ & 3 & 3 & 1 & 2 & 1 & 7 & 2\\
& $R_2$ & 1 & 1 & 1 & 1 & 1 & 3 & 1\\
\hline $10^{-4}$ & $R_1$ & 2 & 2 & 1 & 2 & 1 & 5 & 2\\
& $R_2$ & 1 & 1 & 1 & 1 & 1 & 2 & 1\\ \hline
\end{tabular} \caption{Ranks for the weight function tensor approximation of figure \ref{figure:geometry3}} \label{table:ranks2} \end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
tolerance & TT-ranks & $q_{1,1}$ & $q_{1,2}$ & $q_{1,3}$ & $q_{2,2}$ & $q_{2,3}$ & $q_{3,3}$ & $\omega$\\
\hline
$10^{-7}$ & $R_1$ & 16 & 23 & 24 & 21 & 29 & 21 & 17\\
& $R_2$ & 9 & 13 & 12 & 11 & 14 & 9 & 9\\
\hline $10^{-4}$ & $R_1$ & 8 & 10 & 12 & 6 & 15 & 8 & 8\\ & $R_2$ & 5 & 7 & 8 & 8 & 9 & 5 & 5\\ \hline
\end{tabular} \caption{Ranks for the weight function tensor approximation of figure \ref{figure:geometry2}} \label{table:ranks1} \end{table}
\begin{figure}
\caption{deformed cuboid domain}
\label{figure:geometry3}
\caption{Time comparison for stiffness matrix assembly}
\label{figure:assembly3_Time}
\caption{Difference to full assembly stiffness matrix}
\label{figure:assembly3_Diff}
\caption{Storage requirements}
\label{figure:assembly3_Storage}
\caption{Comparison between full assembly and low rank assembly of the stiffness matrix for the domain in Figure \ref{figure:geometry3}}
\label{figure:assembly3}
\end{figure}
\begin{figure}
\caption{High rank domain}
\label{figure:geometry2}
\caption{Time comparison for stiffness matrix assembly}
\label{figure:assembly2_Time}
\caption{Difference to full assembly stiffness matrix}
\label{figure:assembly2_Diff}
\caption{Storage requirements}
\label{figure:assembly2_Storage}
\caption{Comparison between full assembly and low rank assembly of the stiffness matrix for \textcolor{black}{a high-rank twisted beam domain}}
\label{figure:assembly2}
\end{figure}
Another example for a low rank domain is the deformed cuboid in Figure \ref{figure:geometry3}. This geometry still possesses a low rank structure and the ranks for different desired accuracies are displayed in Table \ref{table:ranks2}. These ranks stay constant throughout various levels of refinements. We see the comparison of the full assembly and the TT method with different accuracies being $10^{-10}$, $10^{-7}$ and $10^{-4}$ in Figure \ref{figure:assembly3}. We reach the desired accuracies quickly after some refinement steps as depicted in Figure \ref{figure:assembly3_Diff}. Again, the TT method is faster than the full assembly and has an advantage with respect to the storage requirements especially for high refinements, as seen in Figures \ref{figure:assembly3_Time} and \ref{figure:assembly3_Storage}. Note that for the next refinement the fully assembled stiffness matrix would not fit into the memory of our desktop PC anymore.
The performance of the method is remarkable not only for low rank structures as the geometries in Figure \ref{figure:geometry1} or Figure \ref{figure:geometry3} but also for more complex geometries like the high rank domain in Figure \ref{figure:geometry2}. This geometry does not possess a low rank structure but we can still apply the TT method with a high rank or truncate with a desired accuracy to get a low rank approximation.
In Figure \ref{figure:assembly2} we see the comparison of the full assembly and the TT method with different desired accuracies being $10^{-7}$ and $10^{-4}$. The corresponding ranks for the entries of $Q$ are displayed in Table \ref{table:ranks1}. The ranks stay stable and only occasionally vary by $\pm 1$ from the values in table \ref{table:ranks1} due to numerical inaccuracies throughout the different refinement steps.
Recall that the given ranks correspond to the tensor ranks in the TT format (\ref{equation:TTranks}), thus the number of smaller stiffness matrices in each dimension corresponds to the product $R = R_1R_2$. Even though this results in a large number of small matrices, the TT low rank method is still much faster than the full assembly for high refinements.
\subsection{Optimal control examples}
\begin{figure}
\caption{Iterations for Block AMEn solver for different refinements}
\label{figure:rank1opti2iterations}
\caption{Computation time for the optimization}
\label{figure:rank1opti2time}
\caption{Ranks of the solution for different refinements}
\label{figure:rank1opti2ranks}
\caption{Memory compression of the low-rank solution in $\%$ of the full solution}
\label{figure:rank1opti2memory}
\caption{Performance for different refinements and control parameters on quarter annulus geometry from Figure \ref{figure:geometry1}}
\label{figure:rank1opti2}
\end{figure}
\begin{figure}
\caption{Development of the objective function value for different control parameters}
\label{figure:rank1opti1_value}
\caption{Norm of the discretized control $\|u_h\|$}
\label{figure:rank1opti_control}
\caption{Iterations for the optimization with AMEn}
\label{figure:rank1opti_iterations}
\caption{Performance of the low rank method on the quarter annulus domain from Figure \ref{figure:geometry1} for different control parameters}
\label{figure:rank1opti1}
\end{figure}
\begin{figure}
\caption{Constant in time desired state $\hat{y}$}
\label{figure:rank1_solution_yhat}
\caption{$\beta=10^{-4}$}
\label{figure:rank1_solution_1e-4}
\caption{$\beta = 10^{-3}$}
\label{figure:rank1_solution_1e-3}
\caption{$\beta=10^{-2}$}
\label{figure:rank1_solution_1e-2}
\caption{$\beta = 10^{-1}$}
\label{figure:rank1_solution_1e-1}
\caption{$\beta=1$}
\label{figure:rank1_solution_1e0}
\caption{Comparison of given state and solution for different control parameters at one time step on domain from Figure \ref{figure:geometry1}}
\label{figure:rank1_solution}
\end{figure}
We now illustrate the performance of the Block AMEn method from Sec. \ref{section:LRoptimization} on two optimal control examples. We first regard the rank 1 domain from Figure \ref{figure:geometry1} before showing experimental results \textcolor{black}{on the geometric model depicted in Figure \ref{figure:geometryRotor}}.
For the rank 1 geometry we used equidistant spatial knot insertion to refine the geometric representation and thus the solution space. \textcolor{black}{We used discretizations with up to 66 degrees of freedom per spatial direction and divided the time frame into 10 time steps. This discretization translates into a total of roughly 8 million degrees of freedom. Note that we do not study the variation of the time discretization in this work. However, our experiments showed that increasing the number of time steps does not affect the number of iterative steps for most setups.} The desired accuracy for \textcolor{black}{both the weight interpolation and the optimization was set to $10^{-5}$.}
In Figure \ref{figure:rank1opti2} we see the performance throughout different refinements for different control parameters $\beta$. Even for a high number of degrees of freedom the method converges after a small number of iterative steps as seen in Figure \ref{figure:rank1opti2iterations}.
\textcolor{black}{Using the Block AMEn method to solve the optimal control problem in a low rank format returns the solution in a low rank tensor train format too. Figure \ref{figure:rank1opti2ranks} illustrates the maximum TT-rank of the solution and even though they are quite large, the memory consumption of the solution is reduced drastically. Figure \ref{figure:rank1opti2memory} displays the storage requirements of the solution in relation to the full solution vector.}
Figure \ref{figure:rank1_solution} \textcolor{black}{shows an exemplary result for the arbitrary desired state we used for our experiments.} The desired state in Figure \ref{figure:rank1_solution_yhat} was set as constant in time and Figures \ref{figure:rank1_solution_1e-4} - \ref{figure:rank1_solution_1e0} show snapshots of the same time step for different control parameters varying from $\beta=10^{-4}$ to $\beta=1$. As expected we see that the controlled state matches the desired state well for small control parameters and its magnitude decreases with higher control parameter.
The numerical values for the example from Figure \ref{figure:rank1_solution} are displayed in Figure \ref{figure:rank1opti1}. The method is robust with respect to the control parameter $\beta$. The objective function and the control behave as expected when the control parameter $\beta$ is changed and the method delivers a result within the desired accuracy after a small number of iterative steps.
\begin{figure}
\caption{Freeform CAD model domain}
\label{figure:geometryRotor}
\end{figure}
\begin{figure}
\caption{Constant in time desired state $\hat{y}$}
\label{figure:rotor_solution_yhat}
\caption{$\beta = 10^{-4}$}
\label{figure:rotor_solution_1e-4}
\caption{$\beta = 10^{-3}$}
\label{figure:rotor_solution_1e-3}
\caption{$\beta=10^{-2}$}
\label{figure:rotor_solution_1e-2}
\caption{$\beta = 10^{-1}$}
\label{figure:rotor_solution_1e-1}
\caption{$\beta=1$}
\label{figure:rotor_solution_1e0}
\caption{Comparison of given state and solution for different control parameters at one time step on domain in Figure \ref{figure:geometryRotor}}
\label{figure:rotor_solution}
\end{figure}
\textcolor{black}{Additionally we tested the low rank optimization scheme on a large scale geometry inspired by the shape of a wind turbine rotor blade as depicted in Figure \ref{figure:geometryRotor}. The 3D solid NURBS model was designed by freeform surface modeling in the commercial CAD software Rhino 6.0. The low rank assembly step detected a rank profile as displayed in Table \ref{table:ranks3} for a truncation tolerance of $10^{-5}$.}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
tolerance & TT-ranks & $q_{1,1}$ & $q_{1,2}$ & $q_{1,3}$ & $q_{2,2}$ & $q_{2,3}$ & $q_{3,3}$ & $\omega$\\
\hline
$10^{-5}$ & $R_1$ & 11 & 12 & 5 & 11 & 3 & 4 & 4\\
& $R_2$ & 2 & 3 & 1 & 2 & 1 & 2 & 1\\ \hline
\end{tabular} \caption{Ranks for the weight function tensor approximation of the CAD model in Figure \ref{figure:rotor_solution}} \label{table:ranks3} \end{table}
\textcolor{black}{Again, we set a fixed number of 10 time steps and an arbitrary desired state constant in time. The desired state used for the experiments is illustrated in Figure \ref{figure:rotor_solution_yhat}. Figures \ref{figure:rotor_solution_1e-4} - \ref{figure:rotor_solution_1e0} show a time snapshot of the experiment for different control parameters $\beta$. }
\textcolor{black}{Even though this geometric model has a high rank profile, our scheme performs very well, as seen in Figure \ref{figure:rotor_opti}.}
\textcolor{black}{For larger control parameters $\beta$ the rank of the solution and the number of iterative steps are robust and stay almost constant for different levels of discretization as illustrated in Figures \ref{figure:rotor_opti_iterations} and \ref{figure:rotor_opti_ranks}. The number of iterations and the ranks increase only for very small control parameters. But even for the smallest control parameter with a high solution rank we reach a significant reduction in memory consumption comparing the low rank solution with the full solution vector as displayed in Figure \ref{figure:rotor_opti_memory}. }
\begin{figure}
\caption{Iterations for the Block AMEn solver for different \\ refinements}
\label{figure:rotor_opti_iterations}
\caption{Computation time for the optimization}
\label{figure:rotor_opti_time}
\caption{Ranks of the solution for different \\ refinements}
\label{figure:rotor_opti_ranks}
\caption{Memory compression of the low-rank \\ solution in $\%$ of the full solution}
\label{figure:rotor_opti_memory}
\caption{Performance for different refinements and control parameters on freeform CAD rotor blade domain in Figure \ref{figure:geometryRotor}}
\label{figure:rotor_opti}
\end{figure}
\section{Conclusion} In this paper, we combined the low rank method presented by Mantzaflaris et al. with Tensor Train calculations to obtain a powerful method for solving large equation systems arising from IGA-discretized PDEs \textcolor{black}{and successfully applied the developed scheme to efficiently solve large PDE-constrained optimal control problems.}
We can reduce the storage requirements and calculation time for the \textcolor{black}{mass and} stiffness matrix assembly drastically by finding low rank approximations and splitting the matrices into sums of Kronecker products of smaller matrices. Our scheme finds low rank approximations for given desired accuracies without any prior knowledge about the geometry. \textcolor{black}{We can exploit the} resulting low rank structures, keeping the memory consumption low throughout further computations. The iterative Block AMEn method allows us to solve large systems like a PDE-constrained optimal control problem without assembling the whole equation system. In combination with this iterative method the low rank format gives a great advantage and we can solve very large systems within a reasonably short time.
\textcolor{black}{Various numerical experiments showed the high potential of the method. However, there might be even further efficiency gains if we} \textcolor{black}{find a suitable preconditioner for the reduced linear systems \eqref{eq:KKT_red} in the block AMEn method.}
\section*{Acknowledgment} The authors would like to thank Angelos Mantzaflaris for his helpful insights.
\end{document}
\begin{document}
\title[Transitivity of endomorphisms]{Approximate unitary equivalence of finite index endomorphisms of the AFD factors} \author{Koichi Shimada} \email{[email protected]} \address{Department of Mathematical Sciences, University of Tokyo, Komaba, Tokyo, 153-8914, Japan} \date{} \begin{abstract} We consider two finite index endomorphisms $\rho$, $\sigma $ of any AFD factor $M$. We characterize the condition for there being a sequence $\{ u_n\}$ of unitaries of the factor $M$ with $\mathrm{Ad}u_n \circ \rho \to \sigma $. The characterization is given by using the canonical extension of endomorphisms, which was introduced by Izumi. Our result is a generalization of the characterization of approximate innerness of endomorphisms of the AFD factors, obtained by Kawahigashi--Sutherland--Takesaki and Masuda--Tomatsu. Our proof, which does not depend on the types of factors, is based on recent developments on the Rohlin property of flows on von Neumann algebras. \end{abstract}
\maketitle
\section{Introduction} In this paper, we characterize the approximate innerness of the difference of two finite index endomorphisms of the AFD factors of type III (Theorem \ref{main}). More precisely, for two finite index endomorphisms $\rho$ ,$ \sigma $ of any AFD factor $M$, we give a good necessary and sufficient condition for there being a sequence $\{ u_n\}$ of unitaries of $M$ with $\mathrm{Ad}u_n \circ \rho \to \sigma $ as $n\to \infty $ in the sense of Masuda--Tomatsu \cite{MT1}. First of all, we explain the reason why we are interested in this topic. The reason is that this result should be useful for classifying group actions. It has been important to classify group actions on von Neumann algebras up to cocycle conjugacy. Since a remarkable work of Connes \cite{C}, classification of group actions on the AFD factors has greatly been developed by many researchers. In particular, the actions of discrete amenable groups on the AFD factors are completely classified (See Jones \cite{J}, Ocneanu \cite{O}, Sutherland--Takesaki \cite{ST}, Kawahigashi--Takesaki--Sutherland \cite{KwST} and Katayama--Sutherland--Takesaki \cite{KtST}). It is interesting to note that although there are many different actions up to conjugacy, they are clearly classified when we ignore the difference of cocycle conjugacy. One of the next problems is to classify actions of continuous groups. Among them, classification of actions of compact groups is considered to be relatively easy because the dual of a compact group is discrete. In fact, actions of compact abelian groups on the AFD factors have completely been classified by using this observation (See Jones--Takesaki \cite{JT} and Kawahigashi--Takesaki \cite{KwT}). However, when it comes to classifying actions of non-abelian compact groups, the problem is much more difficult. One of the reasons is that the dual action of an action of a non-abelian compact group is a collection of endomorphisms, not of automorphisms. Hence in order to proceed with classifying actions, it is important to understand the properties of endomorphisms. In the proof of classification theorems of group actions, whether the difference of two actions is approximated by inner automorphisms or not is very important. Hence we should characterize the approximate innerness of the difference of two endomorphisms of the AFD factors.
In this paper, we characterize the approximate innerness of the difference of two endomorphisms in the sense of Masuda--Tomatsu \cite{MT1} (Theorem \ref{main}). In Masuda--Tomatsu \cite{MT3}, they propose a conjecture of the complete invariant for actions of discrete Kac algebras on the AFD factors (Conjecture 8.2 of \cite{MT2}). The duals of minimal actions of compact groups are among them. Our main theorem implies that if two actions of discrete Kac algebras on the AFD factors of type III have the same invariants, the difference of these two actions is approximately inner (See Problem 8.3 and the preceding argument to that problem of \cite{MT3}). Our main theorem characterizes when one endomorphism transits to another endomorphism. Hence the theorem may also be seen as a kind of endomorphism counterpart of the main theorem of Haagerup--St\o rmer \cite{HS}, which characterizes when one normal state of a von Neumann algebra transits to another normal state. It is important to note that our main theorem is a generalization of Theorem 1 (1) of Kawahigashi--Sutherland--Takesaki \cite{KwST} and Theorem 3.15 of Masuda--Tomatsu \cite{MT1}. The proof of our theorem is based on recent developments on the Rohlin property of flows on von Neumann algebras, which does not depend on the types of the AFD factors. Our method is also applicable to the characterization of the central triviality of automorphisms (Theorem 1. (2) of Kawahigashi--Sutherland--Takesaki \cite{KwST}). In the appendix, we give another proof of the characterization of the central triviality, which does not depend on the types of the AFD factors.
\textbf{Acknowledgment} The author thanks Professor Reiji Tomatsu for introducing him to this topic and for giving him useful comments and Professor Toshihiko Masuda for pointing out a mistake in the first version of this paper. The author is also thankful to Professor Yasuyuki Kawahigashi, who is his adviser, for his useful comments on the presentation of this work. The author is supported by Research Fellowships of the Japanese Society for the Promotion of Science for Young Scientists No.26-6590. This work is also supported by the Program for Leading Graduate Schools, MEXT, Japan.
\section{Preliminaries} \subsection{Notations} Let $M$ be a von Neumann algebra. For a normal positive linear functional $\psi $ of $M$ and $x\in M$, set
\[ \| x\| _{\psi}:=\sqrt{\psi (x^*x)},\]
\[ \| x\| _{\psi}^\sharp :=\sqrt{\frac{\psi (x^*x)+\psi (xx^*)}{2}}.\] \begin{lemm} \label{inequality} Let $\lambda $ be a $\sigma$-weakly continuous linear functional of a von Neumann algebra $M$ and $\lambda =\psi v$ be its polar decomposition. Then we have
\[ \| \lambda a\| \leq \psi (vaa^*v^*)^{1/2}\| \lambda \|^{1/2} , \]
\[ \| a\lambda \| \leq \psi (a^*a)^{1/2}\| \lambda \|^{1/2} \] for any $a\in M$. \end{lemm} \begin{proof} By Cauchy--Schwarz's inequality, for $x\in M$, we have \begin{align*}
|\lambda a(x)|&=|\psi (vax)| \\
&\leq \psi (vaa^*v^*)^{1/2}\psi (x^*x)^{1/2} \\
&\leq \psi (vaa^*v^*)^{1/2} \| \lambda \| ^{1/2} \| x\| . \end{align*} The latter inequality is shown in a similar way. \end{proof}
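\begin{rema} For the reader's convenience, we record the computation behind the latter inequality (with the convention $(a\lambda )(x)=\lambda (xa)$ for $a,x\in M$). The Cauchy--Schwarz inequality gives
\begin{align*}
|a\lambda (x)|&=|\psi (vxa)| \\
&\leq \psi (vxx^*v^*)^{1/2}\psi (a^*a)^{1/2} \\
&\leq \psi (a^*a)^{1/2}\| \lambda \| ^{1/2}\| x\|
\end{align*}
for any $x\in M$, since $\psi (vxx^*v^*)\leq \| x\| ^2\psi (vv^*)\leq \| x\| ^2\| \lambda \|$.
\end{rema}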
\subsection{A topology of semigroups of endomorphisms} Let $M$ be a factor of type $\mathrm{III}$. Let $\mathrm{End}(M)_0$ be the set of all finite index endomorphisms $\rho$ of $M$. Let $d(\rho ) $ be the square root of the minimal index of $M\supset \rho (M)$ and $E_\rho $ be the minimal expectation from $M$ to $\rho (M)$. Set $\phi _\rho :=\rho ^{-1} \circ E_\rho$. In Masuda--Tomatsu \cite{MT1}, a topology of $\mathrm{End} (M)_0$ is introduced in the following way. We have \[ \rho _i \to \rho \]
if, by definition, $\| \psi \circ \phi _{\rho _i} -\psi \circ \phi _\rho \| \to 0$ for any $\psi \in M_*$. \subsection{Canonical extension of endomorphisms} Let $\varphi $ be a normal faithful semifinite weight of $M$ and $\sigma ^\varphi$ be the group of modular automorphisms of $\varphi$. In Izumi \cite{I}, an extension $\tilde{\rho}$ of $\rho \in \mathrm{End}(M)_0$ to the continuous core $\tilde{M}:=M\rtimes _{\sigma ^\varphi} \mathbf{R}$ is introduced in the following way. We have \[ \tilde {\rho} (x\lambda _t^{\sigma ^\varphi})=d(\rho )^{it}\rho (x) [D\varphi \circ \phi _{\rho} :D\varphi ]_t \lambda _t^{\sigma^\varphi}\] for $t\in \mathbf{R}$, $x\in M$, where $[D\varphi \circ \phi _{\rho}:D\varphi ]_t$ is the Connes cocycle between $\varphi \circ \phi _{\rho}$ and $\varphi$. This extension does not depend on the choice of $\varphi$ under a specific identification (See Theorem 2.4 of Izumi \cite{I}). The extension $\tilde{\rho}$ is called the canonical extension of $\rho$.
In Lemma 3.5 of Masuda--Tomatsu \cite{MT1}, it is shown that there exists a left inverse $\phi _{\tilde{\rho}}$ of $\tilde{\rho}$ satisfying \[ \phi _{\tilde{\rho}} (x\lambda ^\varphi _t )=d(\rho )^{-it} \phi _\rho (x[D\varphi :D\varphi \circ \phi _\rho ]_t )\lambda _t^\varphi \] for $x\in M$, $t\in \mathbf{R}$.
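\begin{rema} As a quick consistency check (a routine computation, writing $\lambda _t^{\varphi}=\lambda _t^{\sigma ^\varphi}$ for the implementing unitaries and using only the chain rule for Connes cocycles together with the identity $\phi _\rho \circ \rho =\mathrm{id}_M$), the two formulas above are compatible: for $x\in M$ and $t\in \mathbf{R}$,
\begin{align*}
\phi _{\tilde{\rho}}(\tilde{\rho}(x\lambda _t^{\varphi}))
&=\phi _{\tilde{\rho}}(d(\rho )^{it}\rho (x)[D\varphi \circ \phi _\rho :D\varphi ]_t \lambda _t^{\varphi}) \\
&=d(\rho )^{it}d(\rho )^{-it}\phi _\rho (\rho (x)[D\varphi \circ \phi _\rho :D\varphi ]_t [D\varphi :D\varphi \circ \phi _\rho ]_t )\lambda _t^{\varphi} \\
&=\phi _\rho (\rho (x))\lambda _t^{\varphi} \\
&=x\lambda _t^{\varphi},
\end{align*}
so $\phi _{\tilde{\rho}}$ is indeed a left inverse of $\tilde{\rho}$ on such elements.
\end{rema}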
\section{The main theorem} The main theorem of this paper is the following. \begin{theo} \label{main} Let $\rho$ , $\sigma $ be endomorphisms of an AFD factor $M$ of type $\mathrm{III}$ with $d(\rho ), d(\sigma )<\infty$. Then the following two conditions are equivalent.
\textup{(1)} We have $\phi _{\tilde{\rho}}\circ \theta _{-\log (d(\rho) /d(\sigma ))}|_{\mathcal{Z}(\tilde{M})}=\phi _{\tilde{\sigma }}|_{\mathcal{Z}(\tilde{M})}$.
\textup{(2)} There exists a sequence $\{ u_n\}$ of unitaries of $M$ with $\mathrm{Ad}u_n \circ \rho \to \sigma $ as $n\to \infty$. \end{theo} As a corollary, we have the following result. \begin{cor} Let $M$ be an AFD factor and $R_0$ be the AFD factor of type $\mathrm{II}_1$. Take endomorphisms $\rho _1$, $\rho _2\in \mathrm{End}(M)_0$ and $\sigma _1$, $\sigma _2\in \mathrm{End}(R_0)_0$. Then the following two conditions are equivalent.
\textup{(1)} There exists a sequence of unitaries $\{u_n\}$ of $M \otimes R_0$ with $\mathrm{Ad}u_n \circ (\rho _1\otimes \sigma _1) \to \rho _2\otimes \sigma _2$ as $n\to \infty$.
\textup{(2)} There exists a sequence of unitaries $\{v_n\}$ of $M$ with $\mathrm{Ad}v_n \circ \rho _1 \to \rho _2$ as $n \to \infty$. \end{cor} \begin{proof} Since $\sigma _1$ and $\sigma _2 $ are approximately inner, we may assume that $\sigma _1=\sigma _2=\mathrm{id}_{R_0}$. Under the identification $\mathcal{Z}((M\otimes R_0)\rtimes _{\sigma ^\varphi \otimes \mathrm{id}_{R_0}}\mathbf{R})\cong \mathcal{Z}((M\rtimes _{\sigma ^\varphi}\mathbf{R})\otimes R_0) \cong \mathcal{Z}(M\rtimes _{\sigma ^\varphi }\mathbf{R})$ given by \[ (x\otimes y)\lambda _t^{\sigma ^\varphi \otimes \mathrm{id}_{R_0}}\mapsto (x\lambda _t^{\sigma ^\varphi})\otimes y,\] we have $\phi _{\rho _i \otimes \mathrm{id}_{R_0}}=\phi _{\rho_i}$ for $i=1,2$. We also have $d(\rho _i \otimes \mathrm{id}_{R_0})=d(\rho _i)$. Hence by Theorem \ref{main}, conditions (1) and (2) are equivalent. \end{proof} Note that this corollary would be quite difficult to show without Theorem \ref{main} (See also Section 3 of Connes \cite{C2}).
Theorem \ref{main} should also be useful for classifying actions of compact groups on the AFD factors of type $\mathrm{III}$. Popa--Wassermann \cite{PW} and Masuda--Tomatsu \cite{MT2} showed that any compact group has only one minimal action on the AFD factor of type $\mathrm{II}_1$, up to conjugacy. One of the next problems is to classify actions of compact groups on the AFD factors of type III. In Masuda--Tomatsu \cite{MT3} and \cite{MT4}, they address this problem, and some partial answers have been obtained (Theorems A and B of \cite{MT3} and Theorem 2.4 of \cite{MT4}). However, the problem has not yet been solved completely. In Masuda--Tomatsu \cite{MT3}, a conjecture about this classification problem is proposed (Conjecture 8.2). Our main theorem implies that if two actions of discrete Kac algebras on the AFD factors of type III have the same invariants, then the difference of these two actions is approximately inner (See Problem 8.3 and the argument preceding that problem in Masuda--Tomatsu \cite{MT3}). In order to classify group actions, it is very important to know whether the difference of two actions is approximately inner. Kawahigashi--Sutherland--Takesaki \cite{KwST} and Masuda--Tomatsu \cite{MT1} characterized approximate innerness under such a motivation. Theorem \ref{main} is a generalization of their results.
In the following, we will show Theorem \ref{main}. Implication \textup{(2)}$\Rightarrow $ \textup{(1)} is shown easily by using known results.
\textit{Proof of implication \textup{(2)} $\Rightarrow$ \textup{(1)} of Theorem \ref{main}.} This is shown by the same argument as that of the proof of implication (1) $\Rightarrow $ (2) of Theorem 3.15 of \cite{MT1}. Assume that we have $\mathrm{Ad}u_n \circ \rho \to \sigma $ as $n \to \infty$. Then by the continuity of normalized canonical extension (Theorem 3.8 of Masuda--Tomatsu \cite{MT1}), we have \[ \phi _{\tilde{\rho}}\circ \theta _{-\log d(\rho)} \circ \mathrm{Ad}u_n^*(x) \to \phi _{\tilde{\sigma }} \circ \theta _{-\log d(\sigma )} (x)\] in the strong* topology for any $x\in \tilde{M}$. Hence we have
\[ \phi _{\tilde{\rho}}\circ \theta _{-\log (d(\rho )/d(\sigma ))}|_{\mathcal{Z}(\tilde{M})} =\phi _{\tilde{\sigma }}|_{\mathcal{Z}(\tilde{M})}.\] \qed
In the following, we will show the opposite implication. Our strategy is to reduce the problem to a problem about endomorphisms of semifinite von Neumann algebras. In order to achieve this, Kawahigashi--Sutherland--Takesaki \cite{KwST} and Masuda--Tomatsu \cite{MT1} used discrete decomposition theorems (See Connes \cite{C}). However, in our situation, the centers of the images of the canonical extensions may not coincide with that of $\tilde{M}$. This makes the problem difficult. It seems that Corollary 4.4 of Izumi \cite{I} indicates that it is difficult to show Theorem \ref{main} by the same strategy as theirs. Instead, we will use the continuous decomposition. We also note that our method gives a proof of Theorem 1 (1) of Kawahigashi--Sutherland--Takesaki \cite{KwST} which does not depend on the types of the AFD factors.
\section{Approximation on the continuous core} In order to prove implication \textup{(1)} $\Rightarrow$ \textup{(2)} of Theorem \ref{main}, we need to prepare some lemmas. We first show the implication when $\phi _{\tilde{\rho}}|_{\mathcal{Z}(\tilde{M})}=\phi _{\tilde{\sigma}}|_{\mathcal{Z}(\tilde{M})}$. Until the end of the proof of Lemma \ref{22.5}, we always assume that $d(\rho)=d(\sigma)$ and $\phi _{\tilde{\rho}}|_{\mathcal{Z}(\tilde{M})}=\phi _{\tilde{\sigma}}|_{\mathcal{Z}(\tilde{M})}$. Choose a dominant weight $\varphi $ of $M$ (For the definition of dominant weights, see Definition II.1.2. and Theorem II.1.3. of Connes--Takesaki \cite{CT}). Then by Lemma 2.3 (3) of Izumi \cite{I}, it is possible to choose unitaries $u$ and $v$ of $M$ so that $ (\varphi , \mathrm{Ad} u\circ \rho )$ and $(\varphi , \mathrm{Ad}v \circ \sigma )$ are invariant pairs (See Definition 2.2 of Izumi \cite{I}). More precisely, we have \[ \varphi \circ \mathrm{Ad}u \circ \rho =d(\rho ) \varphi , \ \varphi \circ E_{\mathrm{Ad}u\circ \rho} =\varphi ,\] \[ \varphi \circ \mathrm{Ad} v \circ \sigma =d(\sigma ) \varphi , \ \varphi \circ E_{\mathrm{Ad}v\circ \sigma} =\varphi .\] By replacing $\rho$ by $\mathrm{Ad} u \circ \rho $ and $\sigma $ by $\mathrm{Ad}v\circ \sigma $ respectively, we may assume that $(\varphi , \rho)$ and $(\varphi , \sigma )$ are invariant pairs. In the rest of this paper, we identify $\tilde{M}$ with $M\rtimes _{\sigma ^\varphi}\mathbf{R}$. Let $h$ be a positive self-adjoint operator affiliated to $\tilde{M}$ satisfying $h^{-it}=\lambda _t^\varphi$. Let $\tau$ be the trace of $\tilde{M}$ defined by $\hat{\varphi }(h\cdot)$. \begin{lemm} \label{expectation} For $\rho \in \mathrm{End}(M)_0$, we have $\phi _{\tilde{\rho}}=\tilde{\rho}^{-1} \circ E_{\tilde{\rho}}$, where $E_{\tilde{\rho}}$ is the conditional expectation from $\tilde{M}$ onto $\tilde{\rho}(\tilde{M})$ with respect to $\tau$. \end{lemm} \begin{proof} For $x\in M$ and $t\in \mathbf{R}$, we have \begin{align*} \tilde{\rho} \circ \phi _{\tilde{\rho}} (x\lambda _t^\varphi ) &= \tilde{\rho } (d(\rho )^{-it} \phi _{\rho} (x[D\varphi : D\varphi \circ \phi _\rho ]_t ) \lambda _t^\varphi ) \\
&=d(\rho )^{it} d(\rho )^{-it} \rho (\phi _\rho (x[D\varphi :D\varphi \circ \phi _\rho ]_t ))[D\varphi \circ \phi _\rho : D\varphi ]_t \lambda _t^\varphi \\
&=E_\rho (x[D\varphi : D\varphi \circ \phi _\rho ]_t ) [D\varphi \circ \phi _\rho :D\varphi ]_t \lambda _t^\varphi \end{align*} Since $(\varphi , \rho )$ is an invariant pair, we have \[ [D\varphi \circ \phi _\rho :D\varphi ]_t =d(\rho )^{-it}.\] Hence we have \[E_{\rho } (x[D\varphi :D\varphi \circ \phi _\rho ]_t )[D\varphi \circ \phi _\rho :D\varphi ]_t \lambda _t^\varphi = E_{\rho }(x) d(\rho ) ^{it} d(\rho ) ^{-it} \lambda _t^\varphi =E_\rho (x) \lambda _t^\varphi .\] Hence by the argument on p.~226 of Longo \cite{L}, it is shown that $\tilde{\rho}\circ \phi _{\tilde{\rho}}$ is the expectation with respect to $\tau$. \end{proof} \begin{lemm} \label{trace} For $\rho \in \mathrm{End}(M)_0$, we have $\tau \circ \phi _{\tilde{\rho}} =d(\rho ) ^{-1} \tau$. \end{lemm} \begin{proof} By Lemma \ref{expectation}, we have $\phi _{\tilde{\rho}}=\tilde{\rho}^{-1} \circ E_{\tilde{\rho}}$. On the other hand, by Proposition 2.5 (4) of Izumi \cite{I}, we have $\tau \circ \tilde{\rho } =d(\rho ) \tau$. Hence we have \begin{align*} \tau \circ \phi _{\tilde{\rho}} &=d(\rho) ^{-1} \tau \circ \tilde{\rho }\circ \phi _{\tilde{\rho}} \\
&=d(\rho ) ^{-1} \tau \circ \tilde{\rho} \circ \tilde{\rho} ^{-1} \circ E_{\tilde{\rho }} \\
&= d(\rho ) ^{-1} \tau \circ E_{\tilde{\rho}} \\
&= d(\rho ) ^{-1} \tau . \end{align*} \end{proof} In the following, we identify $\mathcal{Z}(\tilde{M})$ with $L^\infty (X, \mu )$. Let \[ \tau =\int _X^\oplus \tau _x \ d\mu (x)\] be the direct integral decomposition of $\tau$. \begin{lemm} \label{kkey}
Let $\rho , \sigma $ be elements of $\mathrm{End}(M)_0$. Assume that $\phi _{\tilde{\rho}}|_{\mathcal{Z}(\tilde{M})}=\phi _{\tilde{\sigma }}|_{\mathcal{Z}(\tilde{M})}$ and $d(\rho )=d(\sigma )$. For $a\in \tilde{M}_+$ with $\tau (a) <\infty$, set \[ b:=\tilde{\rho} (a)=\int _X^\oplus b_x \ d\mu (x),\] \[ c:=\tilde{\sigma } (a) =\int _X^\oplus c_x \ d\mu (x).\] Then we have \[\tau _x (b_x)=\tau _x(c_x)\] for almost every $x\in X$. \end{lemm} \begin{proof} Take an arbitrary positive element $z$ of $\mathcal{Z}(\tilde{M})_+$. Then we have \begin{align*} \tau (bz) &= \int _X \tau _x(b_xz_x) \ d\mu (x) \\
&=\int _X \tau _x(b_x)z_x \ d\mu (x). \end{align*} Similarly, we have \[ \tau (cz) =\int _X\tau _x(c_x)z_x \ d\mu (x).\] On the other hand, by Lemma \ref{trace}, we have \begin{align*} \tau (bz)&= d(\rho ) \tau \circ \phi _{\tilde{\rho}}(bz) \\
&=d(\rho ) \tau \circ \phi _{\tilde{\rho}}(\tilde{\rho}(a)z) \\
&=d(\rho ) \tau \circ \tilde{\rho }^{-1} \circ E_{\tilde{\rho}} (\tilde{\rho }(a)z) \\
&=d(\rho ) \tau \circ \tilde{\rho } ^{-1} (\tilde{\rho} (a) E_{\tilde{\rho}}(z)) \\
&=d(\rho ) \tau (a \phi _{\tilde{\rho}}(z)). \end{align*}
Since we assume $d(\rho )=d(\sigma ) $ and $\phi _{\tilde{\rho}}|_{\mathcal{Z}(\tilde{M})}=\phi _{\tilde{\sigma}}|_{\mathcal{Z}(\tilde{M})}$, the last number of the above equality is $d(\sigma ) \tau (a\phi _{\tilde{\sigma }}(z))$, which is shown to be $\tau (cz)$ in a similar way. Hence we have \[ \int _X \tau _x(b_x)z_x \ d\mu (x)=\int _X \tau _x(c_x) z_x \ d\mu (x).\] Since the maps $x\mapsto \tau _x(b_x)$ and $x\mapsto \tau _x(c_x)$ are integrable functions and $z\in L^\infty (X, \mu )=L^1(X, \mu )^*$ is arbitrary, we have $\tau _x(b_x)=\tau _x(c_x)$ for almost every $x\in X$. \end{proof} Note that we have never used the assumption that $M$ is approximately finite up to this point. However, in order to show the following lemma, we need to assume that $M$ is approximately finite. Let \[ \tilde{M}=\int ^\oplus _X\tilde{M}_x \ d\mu (x)\] be the direct integral decomposition. \begin{lemm} \label{convergence in strong} Let $M$ be an AFD factor of type $\mathrm{III}$ and $\rho$, $\sigma $ be as in Lemma \ref{kkey}. Then for almost every $x\in X$, there exist a factor $B_x$ of type $I_\infty$, a unitary $u$ of $\tilde{M}_x$ and a sequence $\{ u_n\}$ of unitaries of $\tilde{M}_x$ with the following properties.
\textup{(1)} The relative commutant $B_x'\cap \tilde{M}_x$ is finite.
\textup{(2)} There exists a sequence of unitaries $\{ v_n\}$ of $B_x'\cap \tilde{M}_x$ with $u_n =(v_n \otimes 1)u$, where we identify $\tilde{M}_x$ with $(B_x'\cap \tilde{M}_x)\otimes B_x$.
\textup{(3)} For almost every $x\in X$ and for any $a\in \tilde{M}$, we have $\mathrm{Ad}u_n ((\tilde{\rho} (a))_x) \to (\tilde{\sigma }(a))_x $ in the strong * topology.
\textup{(4)} We have $B_x\subset u(\tilde{\rho}(\tilde{M}))_xu^*\cap (\tilde{\sigma }(\tilde{M}))_x$. \end{lemm} \begin{proof} Let $B_0 \subset \tilde{\rho} (\tilde{M})$ be a factor of type $\mathrm{I}_\infty$ with $Q:=\tilde{\rho} (\tilde{M}) \cap B_0'$ finite. Let $\{ f_{ij}^0 \}$ be a matrix unit generating $B_0$. We may assume that $\tau (f_{ii}^0)<\infty$. Then since $(\tau \circ E_{\tilde{\rho}})_x((f_{11}^0)_x)<\infty$ for almost every $x\in X$, $P:=\tilde{M} \cap B_0'$ is also finite. Then by Lemma \ref{kkey}, there exists a partial isometry $v$ of $\tilde{M}$ with $v^*v=\tilde{\rho }(f_{11}^0)$, $vv^*=\tilde{\sigma}(f_{11}^0)$. Set \[ u:=\sum _{j=1}^\infty \tilde{\sigma}(f_{j1}^0)v\tilde{\rho}(f_{1j}^0).\] Then $u$ is a unitary of $\tilde{M}$ with $u\tilde{\rho}(f_{ij}^0)u^*=\tilde{\sigma }(f_{ij}^0)$. Set \[ B:=\tilde{\sigma }(B_0)(=u\tilde{\rho}(B_0)u^*), \] \[ f_{ij}:=\tilde{\sigma }(f_{ij}^0)(=u\tilde{\rho}(f_{ij}^0)u^*).\] By replacing $\tilde{\rho} $ by $\mathrm{Ad}u \circ \tilde {\rho}$, we may assume that $\tilde{\rho}(f_{ij})=\tilde{\sigma }(f_{ij})$. In the following, we identify $\tilde{M}$ with $P \otimes B$ and $P $ with $R\otimes \mathcal{Z}(\tilde{M})$, where $R$ is the AFD factor of type $\mathrm{II}_1$. By the approximate finiteness of $R$ and $\mathcal{Z}(\tilde{M})$, there exists a sequence $\{ \{ e_{ij}^n\otimes a_k^n \}_{i,j,k}\}_{n=1}^\infty$ of systems of partial isometries of $P$ with the following properties.
(1) For each $n$, the system $\{ e_{ij}^n\}_{i,j}$ is a matrix unit of $R$.
(2) For each $n$, the system $\{ a_k^n\}_k$ is a partition of unity in $\mathcal{Z}(\tilde{M})$.
(3) For each $n$, $\{ e_{ij}^{n+1}\}_{i,j} $ is a refinement of $\{ e_{ij}^n\}_{i,j}$.
(4) For each $n$, $\{ a_k^{n+1}\}_k$ is a refinement of $\{ a_k^n\}_k$.
(5) We have $\bigvee _{n=1}^\infty \{ e^n_{ij} \otimes a^n_k\}_{i,j,k}'' =P$.
Fix a natural number $n$. Then by Lemma \ref{kkey}, we have \[ \tau _x ((\tilde{\rho}(e^n_{11} \otimes a_k^n\otimes f_{11}))_x)=\tau _x((\tilde{\sigma }(e^n_{11} \otimes a_k^n \otimes f_{11}))_x)\] for almost every $x\in X$. Hence for almost every $x\in X$, there exists a partial isometry $v_k^n$ of $P_x=(\tilde{\rho}(f_{11})\tilde{M}\tilde{\rho}(f_{11}))_x$ with \[ {v_k^n}^*v_k^n =\tilde{\rho} (e^n_{11} \otimes a_k^n \otimes f_{11})_x, \ v_k^n {v_k^n }^* =\tilde{\sigma } (e^n_{11} \otimes a_k^n \otimes f_{11})_x.\] Set \[ v_n :=\sum _{k,j} \tilde{\sigma }(e_{j1}\otimes a_k^n \otimes f_{11})_xv_k^n \tilde{\rho }(e_{1j} \otimes a_k^n \otimes f_{11})_x.\] Then $v_n$ is a unitary of $ \tilde{\rho} (f_{11})_x \tilde{M}_x \tilde{\rho} (f_{11})_x$ with \[ v_n \tilde{\rho}(e_{ij}^n\otimes a_k^n \otimes f_{11} )_xv_n^* =\tilde{\sigma }(e_{ij}^n \otimes a_k^n \otimes f_{11})_x.\] Hence for almost every $x\in X$, there exists a sequence $\{ v_n\} $ of unitaries of $P_x$ with \[ \mathrm{Ad}(v_n \otimes 1) (\tilde{\rho}(a)_x )\to \tilde{\sigma }(a)_x\] for any $a\in \tilde{M}$. \end{proof} \begin{lemm} \label{convergence in strong globaly} Let $M$, $\rho$ and $\sigma $ be as in Lemma \ref{convergence in strong}. Then there exist a unital subfactor $B$ of $\tilde{M}$, a unitary $u$ of $\tilde{M}$ and a sequence $\{ u_n\}$ of unitaries of $\tilde{M}$ with the following properties.
\textup{(1)} The factor $B$ is of type $I_\infty$.
\textup{(2)} The relative commutant $B'\cap \tilde{M}$ is finite.
\textup{(3)} There exists a sequence of unitaries $\{ v_n\}$ of $B'\cap \tilde{M}$ with $u_n =(v_n \otimes 1)u$, where we identify $\tilde{M}$ with $(B'\cap \tilde{M})\otimes B$.
\textup{(4)} For any $a\in \tilde{M}$, we have $\mathrm{Ad}u_n \circ \tilde{\rho} (a) \to \tilde{\sigma }(a) $ in the strong * topology.
\textup{(5)} We have $B\subset u\tilde{\rho}(\tilde{M})u^*\cap \tilde{\sigma }(\tilde{M})$. \end{lemm} \begin{proof} This is shown by ``directly integrating'' the above lemma. \end{proof}
The conclusion of Lemma \ref{convergence in strong globaly} means that $\mathrm{Ad}u_n \circ \tilde{\rho}$ converges to $\tilde{\sigma }$ in the point-strong* topology. However, this convergence is slightly weaker than the convergence in the topology we consider. We need to fill this gap. In order to achieve this, the following criterion is very useful. \begin{lemm} \textup{(Lemma 3.8 of Masuda--Tomatsu \cite{MT2}).} \label{criterion} Let $\rho$ and $\rho _n$, $n\in \mathbf{N}$, be endomorphisms of a von Neumann algebra $N$ with left inverses $\Phi$ and $\Phi _n$, $n\in \mathbf{N}$, respectively. Fix a normal faithful state $\phi $ of $N$. Then the following two conditions are equivalent.
\textup{(1)} We have $\lim _{n \to \infty } \| \psi \circ \Phi _n -\psi \circ \Phi \| =0 $ for all $\psi \in N_*$.
\textup{(2)} We have $\lim _{n\to \infty } \| \phi \circ \Phi _n -\phi \circ \Phi \| =0$ and $\lim _{n\to \infty} \rho _n (a) =\rho (a) $ for all $a \in N$. \end{lemm}
Hence what we need to do is to find a normal faithful state of $\tilde{M}$ satisfying condition (2) of Lemma \ref{criterion}.
\begin{lemm} \label{key} Let $M$, $\rho$, $\sigma $ be as in Lemma \ref{convergence in strong}. Then there exists a sequence of unitaries $u_n$ of $\tilde{M}$ with $\mathrm{Ad}u_n \circ \tilde{\rho} \to \tilde{\sigma} $. \end{lemm} \begin{proof} Take a subfactor $B$ of $\tilde{M}$, a unitary $u$ of $\tilde{M}$ and a sequence $\{ v_n\}$ of unitaries of $\tilde{M}$ as in Lemma \ref{convergence in strong globaly}. By condition (5) in Lemma \ref {convergence in strong globaly}, we have $u^*Bu \subset \tilde{\rho}(\tilde{M})$. Set \[ F:=\tilde{\rho}^{-1}(u^*Bu).\] Then we have \[ \tilde{\rho}^{-1}\circ \mathrm{Ad}u^*(B)=F,\] \[ \tilde{\rho}^{-1}\circ \mathrm{Ad}u^*(B'\cap \mathrm{Ad}u\circ \tilde{\rho}(\tilde{M}))=F'\cap \tilde{M}.\] We also have
\[ \mathrm{Ad}u\circ E_{\tilde{\rho}}\circ \mathrm{Ad}u^*|_B=\mathrm{id}_B,\] \[\mathrm{Ad}u \circ E_{\tilde{\rho}}\circ \mathrm{Ad}u^*(B'\cap \tilde{M})=B'\cap \mathrm{Ad}u \circ \tilde{\rho}(\tilde{M}).\] Let $\{ f_{ij}\}$ be a matrix unit generating $B$. Set \[ \overline{\tau} (a):=\tau (a\tilde{\rho}^{-1}(u^*f_{11}u))\] for $a\in F'\cap \tilde{M}$, which is a faithful normal finite trace of $F'\cap \tilde{M}$. Let $\varphi$ be a normal faithful state of $F$. Let $\Psi _F: \tilde{M}\to (F'\cap \tilde{M})\otimes F$ be the natural identification map. Then by the above observation, for $a\in B'\cap \tilde{M}$ and $i$, $j$, we have \begin{align*} \ &(\overline{\tau} \otimes \varphi )\circ \Psi _F\circ \phi _{\tilde{\rho}} \circ \mathrm{Ad}u^*(af_{ij}) \\ &=(\overline{\tau} \otimes \varphi )\circ \Psi _F \circ (\tilde{\rho}^{-1}\circ \mathrm{Ad}u^*)\circ (\mathrm{Ad}u \circ E_{\tilde{\rho}}\circ \mathrm{Ad}u^*)(af_{ij}) \\
&=(\overline{\tau}\otimes \varphi ) \circ \Psi _F\circ (\tilde{\rho}^{-1}\circ \mathrm{Ad}u^*)((\mathrm{Ad}u \circ E_{\tilde{\rho}}\circ \mathrm{Ad}u^*|_{B'\cap \tilde{M}})(a)f_{ij}) \\
&=(\overline{\tau} \circ \phi _{\tilde{\rho}} \circ \mathrm{Ad}u^*) (a) (\varphi \circ \phi _{\tilde{\rho}} \circ \mathrm{Ad}u^*) (f_{ij}). \end{align*} Since $B \subset \tilde{\sigma }(\tilde{M})\cap \mathrm{Ad}u \circ \tilde{\rho}(\tilde{M})$, we have \[ E_{\tilde{\sigma}}(af_{ij})=E_{\tilde{\sigma}}(a)f_{ij},\] \[ \mathrm{Ad}u \circ E_{\tilde{\rho}}\circ \mathrm{Ad}u^*(af_{ij})=\mathrm{Ad}u \circ E_{\tilde{\rho}} \circ \mathrm{Ad}u^*(a)f_{ij}\] for $a\in B'\cap \tilde{M}$. Notice that $\tilde{\sigma }^{-1}(f_{ij})=\tilde{\rho}^{-1}(u^*f_{ij}u)$ by condition (3) of Lemma \ref{convergence in strong globaly}. Then for any $a\in B'\cap \tilde{M}$, we have \begin{align*} \ &( \overline{\tau}\otimes \varphi )\circ \Psi _F \circ \phi _{\tilde{\rho}}\circ \mathrm{Ad}u^* (v_n^*\otimes 1)(af_{ij}) \\ &= (\overline{\tau}\otimes \varphi ) \circ \Psi _F \circ \phi _{\tilde{\rho}}((u^*(v_n^*av_n)u)(u^*f_{ij}u)) \\ &= \overline{\tau} \circ \phi _{\tilde{\rho}}(u^*(v_n^*av_n)u)\varphi (\tilde{\rho}^{-1}(u^*f_{ij}u)) \\ &=\tau (\phi _{\tilde{\rho}} (u^*(v_n^*av_n )u)\tilde{\rho}^{-1}(u^*f_{11}u))\varphi (\tilde{\rho}^{-1}(u^*f_{ij}u)) \\ &=\tau \circ \phi _{\tilde{\rho}}(u^*(v_n^*av_n)f_{11}u)\varphi (\tilde{\rho}^{-1}(u^*f_{ij}u)) \\ &=d(\rho )^{-1} \tau (u^*(v_n^*av_n)f_{11}u)\varphi (\tilde{\rho}^{-1}(u^*f_{ij}u)) \\ &=d(\sigma )^{-1} \tau (af_{11}) \varphi (\tilde{\sigma }^{-1}(f_{ij})) \\ &=\tau (\phi _{\tilde{\sigma }}(a)\tilde{\sigma }^{-1}(f_{11})) \varphi (\tilde{\sigma }^{-1}(f_{ij})) \\ &=\tau (\phi _{\tilde{\sigma }}(a)\tilde{\rho}^{-1}(u^*f_{11}u)) \varphi (\tilde{\sigma }^{-1}(f_{ij})) \\ &= (\overline{\tau}\otimes \varphi ) \circ \Psi _F \circ \phi _{\tilde{\sigma}}(af_{ij}). \end{align*} Hence we have $( \overline{\tau}\otimes \varphi )\circ \Psi _F \circ \phi _{\tilde{\rho}} \circ \mathrm{Ad}(u^*(v_n \otimes 1)^*)=(\overline{\tau}\otimes \varphi )\circ \Psi _F\circ \phi _{\tilde{\sigma }}$ for any $n$. Hence by Lemma \ref{convergence in strong globaly} and Lemma \ref{criterion}, we have $\mathrm{Ad}((v_n\otimes 1)u)\circ \tilde{\rho} \to \tilde{\sigma}$. \end{proof}
\section{Averaging by the trace-scaling action} In this section, we always assume that $M$ is an AFD factor of type III. Let $\varphi $ be a dominant weight of $M$ and $\rho, \sigma \in \mathrm{End}(M)_0$ be finite index endomorphisms with $(\varphi , \rho)$ and $(\varphi , \sigma )$ invariant pairs. Set \[ \tilde{M}:=M\rtimes _{\sigma ^{\varphi}} \mathbf{R}.\]
Let $\psi _0$ be a normal faithful state of $\tilde{M}$ and $\{ \psi _i\}_{i=1}^\infty $ be a norm dense sequence of the unit ball of $\tilde{M}_*$. Let $\theta $ be the dual action on $\tilde{M} $ of $\sigma ^\varphi$. We will replace the sequence $\{ u_n\}$ chosen in the previous section by a sequence which is almost invariant under $\theta $. In order to achieve this, we use a property of $\theta $ which is called the Rohlin property. In order to explain this property, we first need to introduce some related notions. Let $\omega $ be a free ultrafilter on $\mathbf{N}$. A sequence $\{ [-1,1] \ni t \mapsto x_{n,t} \in \tilde{M}\}_{n=1}^\infty $ of maps from $[-1,1]$ to $\tilde{M}$ is said to be $\omega $-equicontinuous if for any $\epsilon >0$, there exist an element $U\subset \mathbf{N}$ of $\omega $ and $\delta >0$ with $\| x_{n,t} -x_{n,s}\| <\epsilon $ for any $s,t\in [-1,1]$ with $|s-t|<\delta $ and $n\in U$. Set
\[ \mathcal{C}_\omega :=\{ (x_n)\in l^\infty (\tilde{M}) \mid \| x_n \psi -\psi x_n \| \to 0 \ \mathrm{as} \ n\to \omega \ \mathrm{for} \ \mathrm{any} \ \psi \in \tilde{M}_*\} ,\] \[ \mathcal{C}_{\theta , \omega } :=\{ (x_n) \in \mathcal{C}_\omega \mid \mathrm{the} \ \mathrm{maps} \ \{ t\mapsto \theta _t(x_n)\}_{n=1}^\infty \ \mathrm{are} \ \omega \text{-equicontinuous}\} ,\] \[ \mathcal{I}_\omega :=\{ (x_n)\in l^\infty (\tilde{M}) \mid x_n \to 0 \ \mathrm{in} \ \mathrm{the} \ \mathrm{strong*} \ \mathrm{topology} \ \mathrm{as} \ n\to \omega \} .\] Then $\mathcal{I}_\omega $ is a (norm) closed ideal of $\mathcal{C}_{\theta , \omega }$, and the quotient $\tilde{M}_{\theta , \omega }:=\mathcal{C}_{\theta , \omega }/\mathcal{I}_\omega $ is a von Neumann algebra. As mentioned in Masuda--Tomatsu \cite{MT5}, the action $\theta $ has the Rohlin property, that is, for any $R>0$, there exists a unitary $v$ of $\tilde{M}_{\theta ,\omega} $ with \[ \theta _t(v)=e^{-iRt}v\]
for any $t\in \mathbf{R}$ (See Section 4 of Masuda--Tomatsu \cite{MT5}). Choose arbitrary numbers $r>0$ and $0<\epsilon <1$, and fix a natural number $n$. Then since $M$ is of type III, there exists a real number $R$ which does not belong to the discrete spectrum of $\theta |_{\mathcal{Z}(\tilde{M})}$ and which satisfies $r/R<\epsilon ^2$. Then as shown in Theorem 5.2 of Masuda--Tomatsu \cite{MT5}, there exists a normal injective *-homomorphism $\Theta$ from $\tilde{M}\otimes L^\infty ([-R,R])$ to $\tilde{M}^\omega $ satisfying $\Theta (x\otimes f)=xf(v)$ for any $x\in \tilde{M}$, $f \in L^\infty([-R,R])$. For each $t\in \mathbf{R}$, set \[ \gamma _t:L^\infty ([-R,R]) \ni f \mapsto f(\cdot -t)\in L^\infty ([-R,R]),\] where we identify $[-R,R]$ with $\mathbf{R}/2R\mathbf{Z}$ as measure spaces. Then the *-homomorphisms $\Theta $ and $\gamma _t$ satisfy \[ \Theta \circ (\theta _t\otimes \gamma _t ) =\theta _t\circ \Theta \] (See Theorem 5.2 of Masuda--Tomatsu \cite{MT5}). \begin{lemm} \label{10} For $\psi \in \tilde{M}_*$, we have \[ \psi ^\omega \circ \Theta =\psi \otimes \tau _{L^\infty },\] where $\tau _{L^\infty}$ is the trace on $L^\infty ([-R,R])$ coming from the normalized Haar measure. \end{lemm} \begin{proof} Let $\{ v_n\}$ be a representing sequence of $v$. Then we have \begin{align*} \psi ^\omega \circ \Theta (x\otimes f) &= \psi ^\omega (xf(v)) \\
&=\lim _{n\to \omega } \psi (xf(v_n)) \\
&=\psi (x) \lim _{n \to \omega }f(v_n) \\
&=\psi (x)\tau _{L^\infty }(f) \\
&=(\psi \otimes \tau _{L^\infty } )(x\otimes f). \end{align*} \end{proof} Since the maps \[ [-R,R]\ni t\mapsto \psi _i \circ \phi _{\tilde{\rho}}\circ \theta _t \in (\tilde{M})_*,\] \[ [-R,R]\ni t\mapsto \psi _i \circ \phi _{\tilde{\sigma }}\circ \theta _t \in (\tilde{M})_*\] are norm continuous, the union of their images \[ \{ \psi _i \circ \phi _{\tilde{\rho}} \circ \theta _t \mid t\in [-R,R]\} \cup \{ \psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _t\mid t\in [-R,R]\}\] is compact. Hence there exists a finite set $-R=t_0 <\cdots <t_J =R$ of $[-R,R]$ such that
\[ \| \psi _i \circ \phi _{\tilde{\rho}} \circ \theta _{t_j}-\psi _i \circ \phi _{\tilde{\rho}}\circ \theta _t \| <\epsilon ,\]
\[ \| \psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _{t_j} -\psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _t\| <\epsilon \] for any $i=1, \cdots , n$, $j=0, \cdots , J-1$ and $t\in [t_j, t_{j+1}]$. We may assume that $t_j=0$ for some $j$. Then by Lemma \ref{key}, there exists a unitary $u$ of $\tilde{M}$ with
\[ \| \psi _i \circ \phi _{\tilde{\rho}}\circ \theta _{t_j}\circ \mathrm{Ad}u -\psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _{t_j}\| <\epsilon\] for any $j=0, \cdots , J-1$, $i=1, \cdots ,n$ (Notice that we used the fact that we have $\phi _{\tilde{\rho}}\circ \theta _{t_j}=\theta _{t_j} \circ \phi _{\tilde{\rho}}$ and that we have $\phi _{\tilde{\sigma }}\circ \theta _{t_j}=\theta _{t_j}\circ \phi _{\tilde{\sigma }}$ for any $j=0, \cdots , J-1$). Set \[ U:[-R,R]\ni t\mapsto \theta _t(u) \in \tilde{M},\] which is a unitary of $\tilde{M}\otimes L^\infty ([-R,R])$. \begin{lemm} \label{11} We have
\[ \| (\psi _i \circ \phi _{\tilde{\rho}})^\omega \circ \mathrm{Ad} \Theta (U)|_{\mathrm{Im}\Theta } -(\psi _i \circ \phi _{\tilde{\sigma}})^\omega | _{\mathrm{Im}\Theta } \| <3\epsilon .\] \end{lemm} \begin{proof} Let $m$ be the normalized Haar measure of $[-R,R]$. By Lemma \ref{10}, we have \begin{align*}
\ &\| (\psi _i \circ \phi _{\tilde{\rho}})^\omega \circ \mathrm{Ad}\Theta (U) |_{\mathrm{Im}\Theta } - (\psi _i \circ \phi _{\tilde{\sigma }})^\omega |_{\mathrm{Im}\Theta }\| \\
&=\| ((\psi _i \circ \phi _{\tilde{\rho}})\otimes \tau _{L^\infty}) \circ \mathrm{Ad}U -(\psi _i \circ \phi _{\tilde{\sigma }} ) \otimes \tau _{L^\infty } \| \\
&= \int _{[-R,R]} \| (\psi _i \circ \phi _{\tilde{\rho}} ) \circ \mathrm{Ad}\theta _t(u) -\psi _i \circ \phi _{\tilde{\sigma }} \| \ dm(t) \\
&=\int _{[-R,R]} \| (\psi _i \circ \phi _{\tilde{\rho}}) \circ \theta _t\circ \mathrm{Ad}u -\psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _t \| \ dm(t) \\
&\leq \sum _{j=0}^{J-1} \int _{[t_j, t_{j+1}]} ( \| (\psi _i \circ \phi _{\tilde{\rho}} )\circ \theta _t -(\psi _i \circ \phi _{\tilde{\rho}}) \circ \theta _{t_j} \| \\
&+ \| \psi _i \circ \phi _{\tilde{\rho}}\circ \theta _{t_j} \circ \mathrm{Ad}u-\psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _{t_j}\| \\
&+ \| \psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _{t_j}-\psi _i \circ \phi _{\tilde{\sigma }} \circ \theta _{t} \| ) \ dm(t) \\ &\leq \sum _{j=0}^{J-1} \int _{[t_j, t_{j+1}]} (\epsilon + \epsilon +\epsilon ) \ dm(t) \\ &=3\epsilon . \end{align*} \end{proof} \begin{lemm} \label{12} We have
\[ \| \theta _s(\Theta (U))-\Theta (U) \| _{\psi _0^\omega }^\sharp <2\epsilon \]
for $|s|\leq r$. \end{lemm} \begin{proof} Notice that we have \[ (\theta _s\otimes \gamma _s) (U):t\mapsto \theta _s(U_{t-s}),\] where $U_t$ denotes the evaluation of the function $U$ at the point $t$. Hence by the definition of $U$, we have \[ (\theta _s\otimes \gamma _s)(U) _t=\theta _t(u)\] for any $t\in [-R+r, R-r]$, where the left hand side is the evaluation of the function $(\theta _s\otimes \gamma _s) (U)$ at the point $t$. Hence by Lemma \ref{10}, we have \begin{align*}
\ &\| \theta _s(\Theta (U))-\Theta (U) \| _{\psi _0^\omega }^\sharp \\
&= \| (\theta _s\otimes \gamma _s )(U) -U \| _{\psi _0 \otimes \tau _{L^\infty}}^\sharp \\
&=(\int _{[-R,R]} (\| ((\theta _s \otimes \gamma _s )(U))_t -U_t \| _{\psi _0}^\sharp )^2 \ dm(t) )^{1/2} \\ &\leq (\int _{[-R,-R+r]\cup [R-r,R]} 4 \ dm(t) ) ^{1/2} \\ &\leq (4\epsilon ^2 )^{1/2} \\ &=2\epsilon . \end{align*} \end{proof} Let
\[ \psi _i \circ \phi _{\tilde{\sigma }} =| \psi _i \circ \phi _{\tilde{\sigma }}|v_i \] be the polar decompositions of $\psi _i \circ \phi _{\tilde{\sigma }}$ for $i=1, \cdots , n$. \begin{lemm} \label{13} There exists a finite subset $-R=s_0< \cdots < s_K=R$ of $[-R,R]$ with the following properties.
\textup{(1)} We have
\[ \| (U-\sum _{k=0}^{K-1} \theta _{s_k}(u)e_k )v_i^* \| _{|\psi _i \circ \phi _{\tilde{\sigma }}|\otimes \tau _{L^\infty}} ^\sharp <\epsilon \] for any $i=1, \cdots , n$, where $e_k:=\chi _{[s_k,s_{k+1}]}\in L^\infty ([-R,R])$.
\textup{(2)} We have
\[ \| U-\sum _{k=0}^{K-1} \theta _{s_k}(u) e_k \| _{|\psi _i \circ \phi _{\tilde{\rho}}|\otimes \tau _{L^\infty}}^\sharp <\epsilon \] for any $i=1, \cdots , n$.
\textup{(3)} We have
\[ \| U-\sum _{k=0}^{K-1} \theta _{s_k} (u)e_k\| _{(\psi _0 \circ \theta _{t_j}) \otimes \tau _{L^\infty}}^\sharp <\epsilon\] for any $i=1, \cdots , n$ and $j=0, \cdots , J-1$. \end{lemm} \begin{proof} Since the map $t\mapsto \theta _t(u)$ is continuous in the strong * topology, there exists a finite set $-R=s_0< \cdots < s_K=R$ of $[-R,R]$ with
\[ \| (\theta _t(u)-\theta _{s_k}(u) )v_i^* \| _{|\psi _i \circ \phi _{\tilde{\sigma }}| }^\sharp <\epsilon \] for $i=1, \cdots , n$, $k=0, \cdots , K-1$ and $t\in [s_k, s_{k+1}]$,
\[ \| \theta _t(u)-\theta _{s_k}(u) \| _{|\psi _i \circ \phi _{\tilde{\rho}}|} ^\sharp <\epsilon \] for $i=1, \cdots ,n$, $k=0, \cdots , K-1$ and $t\in [s_k, s_{k+1}]$,
\[ \| \theta _t(u)-\theta _{s_k}(u)\| _{\psi _0\circ \theta _{t_j}}^\sharp <\epsilon \] for $j=0, \cdots , J-1$, $k=0, \cdots ,K-1$ and $t\in [s_k, s_{k+1}]$. Then we have \begin{align*}
\ &\| (U-\sum _{k=0}^{K-1}\theta _{s_k}(u)e_k )v_i^* \| _{|\psi _i \circ \phi _{\tilde{\sigma }}|\otimes \tau _{L^\infty} }^\sharp \\
&=(\sum _{k=0}^{K-1} \int _{[s_k, s_{k+1})} (\| (\theta _t(u) -\theta _{s_k}(u))v_i^* \| _{|\psi _i \circ \phi _{\tilde{\sigma }}|}^\sharp )^2 \ dm(t) )^{1/2} \\ &< (\sum _{k=0}^{K-1} \int _{[s_k,s_{k+1})} \epsilon ^2 \ dm(t ))^{1/2} \\ &=\epsilon . \end{align*} The other inequalities are shown in a similar way. \end{proof} Set \[ V:=\sum _{k=0}^{K-1} \theta _{s_k}(u)e_k.\] Take a representing sequence $\{ e_k^n\}_{n=1}^\infty$ of $\Theta (e_k)$ so that $\{ e_k^n\}_{k=0}^{K-1}$ is a partition of unity in $\tilde{M}$ by projections for each $n$. Set \[ v_n:=\sum _{k=0}^{K-1} \theta _{s_k}(u)e_k^n,\] which is a unitary. The sequence $\{ v_n\}_{n=1}^\infty$ represents the unitary $\Theta (V)$. Let $\{ u_n\}_{n=1}^\infty $ be a representing sequence of $\Theta (U)$. \begin{lemm} \label{P} We have
\[ \lim _{n\to \omega } \| \theta _t (v_n)-v_n \| _{\psi _0}^\sharp <6\sqrt{\epsilon} .\] for $t\in [-r,r]$. \end{lemm} \begin{proof} Note that we have \begin{align*}
\ &(\| \theta _t(a)\| _{\psi _0}^\sharp )^2 \\ &=\frac{1}{2}\psi _0 \circ \theta _t(a^*a+aa^*) \\ &=\frac{1}{2}(\psi _0 \circ \theta _{t_j}(a^*a+aa^*)) -\frac{1}{2}((\psi _0\circ \theta _{t_j}-\psi _0\circ \theta _t)(a^*a+aa^*)) \\
&\leq (\| a\| _{\psi _0\circ \theta _{t_j}}^\sharp )^2 +\| a\| ^2 \| \psi _0\circ \theta _{t_j}-\psi _0\circ \theta _t\| \end{align*} for any $a\in \tilde{M}$. Hence for $t\in [t_j, t_{j+1}]\cap [-r,r]$, we have \begin{align*}
\ &\| \theta _t(v_n) -v_n \| _{\psi _0}^\sharp \\
&\leq \| \theta _t(v_n-u_n) \| _{\psi _0}^\sharp +\| \theta _t(u_n)-u_n\| _{\psi _0}^\sharp +\| u_n-v_n \| _{\psi _0}^\sharp \\
&\leq (4\| \psi _0 \circ \theta _{t_j}-\psi _0 \circ \theta _t\| +(\| v_n -u_n\| _{\psi _0\circ \theta _{t_j}}^\sharp )^2)^{1/2} \\
&+\| \theta _t(u_n)-u_n \| _{\psi _0}^\sharp +\| u_n -v_n \| _{\psi _0}^\sharp \\
&< (4\epsilon +(\| v_n -u_n\| _{\psi _0\circ \theta _{t_j}}^\sharp )^2 )^{1/2} \\
&+\| \theta _t(u_n)-u_n \| _{\psi _0}^\sharp + \| u_n -v_n \| _{\psi _0}^\sharp . \end{align*} Hence by Lemmas \ref{12} and \ref{13} (3), we have \begin{align*}
\ &\lim _{n\to \omega} \| \theta _t(v_n) -v_n \| _{\psi _0}^\sharp \\
&\leq (4\epsilon +(\| V-U\| _{(\psi _0 \circ \theta _{t_j})\otimes \tau _{L^\infty}}^\sharp )^2)^{1/2} \\
&+\| \theta _t(U)-U\| _{\psi _0\otimes \tau _{L^\infty}}^\sharp +\| U-V\|_{\psi _0\otimes \tau _{L^\infty}}^\sharp \\ &<(4\epsilon +\epsilon ^2)^{1/2} +2\epsilon +\epsilon \\ &<6\sqrt{\epsilon} . \end{align*} \end{proof} \begin{lemm} \label{Q}
We have
\[\lim _{n\to \omega } \| v_n ^*\psi _i \circ \phi _{\tilde{\rho}}-\psi _i \circ \phi _{\tilde{\sigma }}v_n^* \| \leq 7\epsilon \] for any $i=1, \cdots , n$. \end{lemm} \begin{proof} Notice that we have \begin{align*}
\ & \lim _{n\to \omega }\| u_n^* (\psi _i \circ \phi _{\tilde{\rho}})-(\psi _i \circ\phi _{\tilde{\sigma }})u_n ^*\| \\
&=\| \Theta (U)^* (\psi _i \circ \phi_{\tilde{\rho}})^\omega |_{\tilde{M}} -(\psi _i \circ \phi _{\tilde{\sigma }})^\omega \Theta (U)^*|_{\tilde{M}} \| \\
&\leq \| (\psi _i \circ \phi _{\tilde{\rho}} )^\omega \circ \mathrm{Ad}\Theta (U)|_{\mathrm{Im}\Theta}-(\psi _i \circ \phi _{\tilde{\sigma }})^\omega |_{\mathrm{Im}\Theta}\| .
\end{align*} Hence by Lemmas \ref{11} and \ref{13} (1) (2), we have \begin{align*}
\ &\lim _{n\to \omega } \| v_n^*\psi _i \circ \phi _{\tilde{\rho}}-\psi _i \circ \phi _{\tilde{\sigma }}v_n^* \| \\
&\leq \lim _{n\to \omega }(\| (v_n^*-u_n^*)\psi _i \circ \phi _{\tilde{\rho}} \| \\
&+\| u_n^*\psi _i \circ \phi _{\tilde{\rho}}-\psi _i \circ \phi _{\tilde{\sigma}}u_n^*\| +\| \psi _i \circ \phi _{\tilde{\sigma }}(u_n^*-v_n^*)\| ) \\
&\leq \lim _{n\to \omega }(\| (v_n-u_n)^*\| _{|\psi _i \circ \phi _{\tilde{\rho}}|} \\
&+\| (\psi _i \circ \phi _{\tilde{\rho}})^\omega \circ \mathrm{Ad}\Theta (U) |_{\mathrm{Im}\Theta} -(\psi _i \circ \phi _{\tilde{\sigma }})^\omega |_{\mathrm{Im}\Theta}\| \\
&+\| (v_n-u_n)v_i^*\| _{|\psi _i \circ \phi _{\tilde{\sigma }}|} )\\
&=\| V-U\| _{|\psi _i \circ \phi _{\tilde{\rho}}|\otimes \tau _{L^\infty}} \\
&+\| (\psi _i \circ \phi _{\tilde{\rho}})^\omega \circ \mathrm{Ad}\Theta (U) |_{\mathrm{Im}\Theta} -(\psi _i \circ \phi _{\tilde{\sigma }})^\omega |_{\mathrm{Im}\Theta}\| +\| (V-U)v_i^*\|_{|\psi _i \circ \phi _{\tilde{\sigma}}|\otimes \tau _{L^\infty}} \\ &\leq \epsilon +3\epsilon +\epsilon \\ &=5\epsilon . \end{align*} Note that in order to show the second inequality, we used Lemma \ref{inequality}. \end{proof} By Lemmas \ref{P} and \ref{Q}, we have the following proposition. \begin{prop} \label{R} There exists a sequence $\{ v_n\}_{n=1}^\infty $ of unitaries of $\tilde{M}$ with
\[ \lim _{n\to \infty }\| \theta _t(v_n)-v_n\| _{\psi _0}^\sharp =0,\]
\[ \lim _{n\to \infty}\| v_n^*\psi _i \circ \phi _{\tilde{\rho}}-\psi _i \circ \phi _{\tilde{\sigma }}v_n^* \| =0\] for any $i=1, 2, \cdots $, where the convergence in the first equality is uniform for $t$ in any compact subset of $\mathbf{R}$. \end{prop}
\section{Approximation on $\tilde{M}\rtimes _\theta \mathbf{R}$.} \label{lifting} Set \[ n_{\tau}:=\{ x\in \tilde{M}\mid \tau (x^*x)<\infty \}.\] \begin{lemm} \label{33} Let $L^2(\tilde{M})$ be the standard Hilbert space of $\tilde{M}$ and $\Lambda :n_\tau \to L^2(\tilde{M})$ be the canonical injection. For each $x\in n_\tau$, set $V _{\tilde{\rho}}(\Lambda (x)):=\sqrt{d(\rho )}^{-1}\Lambda (\tilde{\rho}(x))$. Then $V_{\tilde{\rho}}$ defines an isometry of $L^2(\tilde{M})$ satisfying \[ V_{\tilde{\rho}}^* xV_{\tilde{\rho}}=\phi _{\tilde{\rho}} (x)\] for any $x\in \tilde{M}$. \end{lemm} \begin{proof} Take $x\in n_\tau$. Then by Lemma 2.5 (4) of Izumi \cite{I}, we have \begin{align*}
\| V_{\tilde{\rho}} \Lambda (x)\| ^2 &=d(\rho )^{-1} \tau (\tilde{\rho }(x^*x) ) \\
&=\tau (x^*x) =\| \Lambda (x)\| ^2. \end{align*} Hence $V_{\tilde{\rho}}$ defines an isometry of $L^2(\tilde{M})$. Next, we show the latter statement. We have $V_{\tilde{\rho}}^* (\Lambda (x))=\sqrt{d(\rho )}\Lambda (\phi _{\tilde{\rho}}(x))$ because \begin{align*} \langle V_{\tilde{\rho}}^* \Lambda (x), \Lambda (y)\rangle &=\langle \Lambda (x), \sqrt{d(\rho )}^{-1} \Lambda (\tilde{\rho}(y))\rangle \\
&=\sqrt{d(\rho )}^{-1} \tau (\tilde{\rho}(y)^*x) \\
&=\sqrt{d(\rho )}\tau (y^*\phi _{\tilde{\rho}}(x)) \\
&=\langle \sqrt{d(\rho )} \Lambda (\phi _{\tilde{\rho}}(x)), \Lambda (y)\rangle \end{align*} for any $x,y \in n_\tau$. In order to show the third equality of the above, we used Lemma \ref{trace}. Hence for any $x\in \tilde{M}$ and $y\in n_\tau$, we have \begin{align*} V_{\tilde{\rho}}^*xV_{\tilde{\rho}}\Lambda (y)&= \sqrt{d(\rho )}^{-1}V_{\tilde{\rho}}^*\Lambda (x\tilde{\rho}(y)) \\
&= \Lambda (\phi _{\tilde{\rho}}(x\tilde{\rho}(y))) \\
&=\phi _{\tilde{\rho}}(x) \Lambda (y). \end{align*} \end{proof}
Let $\rho $ be a finite index endomorphism of a von Neumann algebra $M$. Then since its canonical extension $\tilde{\rho}$ satisfies $\tau \circ \tilde{\rho}=d(\rho )\tau$, the endomorphism $\tilde{\rho}$ extends to $\tilde{\tilde{M}}:=\tilde{M}\rtimes _\theta \mathbf{R}$ by $\lambda _t^\theta \mapsto \lambda _t^\theta $ for any $t\in \mathbf{R}$. We denote this extension by $\tilde{\tilde{\rho}}$.
\begin{lemm} \label{18} Let $\rho $ and $\sigma $ be finite index endomorphisms of a separable infinite factor $M$ and $\varphi $ be a dominant weight of $M$. Assume that there exists a sequence $\{ u_n\}$ of unitaries of $\tilde{M} \rtimes _\theta \mathbf{R}$ with $\mathrm{Ad}u_n \circ \tilde{\tilde{\rho}} \to \tilde{\tilde{\sigma}}$ as $n \to \infty$. Then there exists a sequence $\{ v_n \}$ of unitaries of $M$ with $\mathrm{Ad}v_n \circ \rho \to \sigma $. \end{lemm} \begin{proof} Since $(\varphi , \rho )$ and $(\varphi , \sigma )$ are invariant pairs, it is possible to identify $\tilde{\tilde{\rho}}$ with $\rho \otimes \mathrm{id}_{B(L^2\mathbf{R})}$ and $\tilde{\tilde{\sigma }}$ with $\sigma \otimes \mathrm{id}_{B(L^2\mathbf{R})}$ through Takesaki duality, respectively (It is possible to choose the same identification between $M\otimes B(L^2\mathbf{R}) $ and $\tilde{M}\rtimes _\theta \mathbf{R}$ for $\tilde{\tilde{\rho}}$ and $\tilde{\tilde{\sigma }}$. See the argument preceding Lemma 3.10 of Masuda--Tomatsu \cite{MT1}). Then by (the proof of) Lemma 3.11 of Masuda--Tomatsu \cite{MT1}, there exist an isomorphism $\pi $ from $M\otimes B(L^2\mathbf{R})$ to $M$ and unitaries $u_\rho$, $u_\sigma $ of $M$ satisfying \[ \pi \circ (\rho \otimes \mathrm{id}) \circ \pi ^{-1} =\mathrm{Ad}u_\rho \circ \rho , \] \[ \pi \circ (\sigma \otimes \mathrm{id} ) \circ \pi ^{-1} =\mathrm{Ad}u_\sigma \circ \sigma \] (Although in the statement of Lemma 3.11 of Masuda--Tomatsu \cite{MT1}, the isomorphism $\pi $ depends on the choice of $\rho$, $\pi $ turns out to be independent of $\rho $ by its proof). Then we have \begin{align*} \ & \mathrm{Ad}(u_\sigma ^*\pi (u_n) u_\rho ) \circ \rho \\ &= \mathrm{Ad}(u_\sigma ^*\pi (u_n ) )\circ \pi \circ (\rho\otimes \mathrm{id}_{B(L^2\mathbf{R})}) \circ \pi ^{-1} \\ &=\mathrm{Ad}u_\sigma ^* \circ \pi \circ (\mathrm{Ad}u_n \circ (\rho \otimes \mathrm{id}_{B(L^2\mathbf{R})}) ) \circ \pi ^{-1} \\ &\to \mathrm{Ad}u_\sigma ^* \circ \pi \circ (\sigma \otimes \mathrm{id}_{B(L^2\mathbf{R})} )\circ \pi ^{-1} \\ &=\mathrm{Ad}u_\sigma ^* \circ (\mathrm{Ad}u_\sigma \circ \sigma ) \\ &=\sigma . \end{align*} \end{proof}
\begin{lemm} \label{19} Let $\rho $ be an endomorphism with finite index and with $(\varphi , \rho )$ an invariant pair. Let $E_{\tilde{\tilde{\rho}}}$ be the minimal expectation from $\tilde{\tilde{M}}$ to $\tilde{\tilde{\rho}}(\tilde{\tilde{M}})$. Then we have the following.
\textup{(1)} For each $x\in \tilde{M}$, we have $E_{\tilde{\tilde{\rho}}}(x)=E_{\tilde{\rho}}(x)$.
\textup{(2)} For any $t\in \mathbf{R}$, we have $E_{\tilde{\tilde{\rho}}}(\lambda _t^\theta )=\lambda _t^\theta$. \end{lemm} \begin{proof} This is shown in the proof of Theorem 4.1 of Longo \cite{L}. \end{proof} \begin{lemm} \label{36} For $\xi \in L^2(\mathbf{R}, \tilde{M})$, set \[ V_{\tilde{\tilde{\rho}}}(\xi )(s):=V_{\tilde{\rho}}(\xi (s)).\] Then $V_{\tilde{\tilde{\rho}}}$ is an isometry of $L^2(\mathbf{R}, \tilde{M})$ satisfying \[ V_{\tilde{\tilde{\rho}}}^*xV_{\tilde{\tilde{\rho}}}=\phi _{\tilde{\tilde{\rho}}}(x)\] for any $x\in \tilde{M}\rtimes _\theta \mathbf{R}$, where $\phi _{\tilde{\tilde{\rho}}}=\tilde{\tilde{\rho}}^{-1}\circ E_{\tilde{\tilde{\rho}}}$. \end{lemm} \begin{proof} The first statement is shown by the following computation. \begin{align*}
\| V_{\tilde{\tilde{\rho}}}(\xi )\| ^2&=\int _{\mathbf{R}}\| V_{\tilde{\rho}}(\xi (s))\| ^2 \ d\mu (s) \\
&=\int _{\mathbf{R}}\| \xi (s)\| ^2\ d\mu (s) \\
&=\| \xi \| ^2 \end{align*} for $\xi \in L^2(\mathbf{R}, \tilde{M})$. Next, we show the latter statement. Choose $x\in \tilde{M}$ and $\xi \in L^2(\mathbf{R}, \tilde{M})$. Then we have \begin{align*} V_{\tilde{\tilde{\rho}}}^* \circ \pi _\theta (x) \circ V_{\tilde{\tilde{\rho}}}(\xi ) &= V_{\tilde{\tilde{\rho}} }^* \pi _\theta (x)(s\mapsto V_{\tilde{\rho}}(\xi (s))) \\
&=V_{\tilde{\tilde{\rho}}} ^* (s\mapsto \theta _{-s}(x)\circ V_{\tilde{\rho}}(\xi (s))) \\
&=(s\mapsto V_{\tilde{\rho}}^* \circ \theta _{-s}(x) \circ V_{\tilde{\rho}}(\xi (s))) \\
&=(s\mapsto \phi _{\tilde{\rho}}(\theta _{-s}(x))(\xi (s))) \\
&=(s\mapsto \theta _{-s}( \phi _{\tilde{\rho}}(x))(\xi (s))) \\
&=\pi _\theta (\phi _{\tilde{\rho}}(x))(\xi ) \\
&=\phi _{\tilde{\tilde{\rho}}} (\pi _\theta (x))(\xi ). \end{align*} In order to show the fourth equality of the above, we used Lemma \ref{33}. The last equality of the above follows from Lemma \ref{19}. For $t\in \mathbf{R}$ and $\xi \in L^2(\mathbf{R},\tilde{M})$, we have \begin{align*} V_{\tilde{\tilde{\rho}}}^*\lambda _t^\theta V_{\tilde{\tilde{\rho}}}\xi &= V_{\tilde{\tilde{\rho}}} ^*(s\mapsto V _{\tilde{\rho}}(\xi (s-t)) \\
&= s\mapsto V_{\tilde{\rho}}^*V_{\tilde{\rho}}(\xi (s-t)) \\
&= \lambda _t^\theta (\xi ). \end{align*}
Thus we are done. \end{proof} \begin{lemm} \label{20} Let $N$ be a von Neumann algebra and $\{ V_n\}_{n=0}^\infty $ be a sequence of isometries on the standard Hilbert space $L^2(N)$ such that for each $n$, the map $\Phi _n:N\ni x\mapsto V_n^*xV_n$ is a left inverse of an endomorphism of $N$. Then the following two conditions are equivalent.
\textup{(1)} The sequence of operators $\{ V_n\}_{n=1}^\infty $ converges to $V_0$ strongly.
\textup{(2)} We have $\|\psi \circ \Phi _n -\psi \circ \Phi _0\| \to 0$ for any $\psi \in N_*$. \end{lemm} \begin{proof} This is shown by the same argument as that of the proof of Lemma 3.3 of Masuda--Tomatsu \cite{MT1}. \end{proof}
\begin{lemm} \label{22.5} Let $\{ u_n\} $ be a sequence of unitaries of $\tilde{M}$ satisfying the following conditions.
\textup{(1)} We have $\mathrm{Ad}u_n \circ \tilde{\rho} \to \tilde{\sigma }$ as $n \to \infty$.
\textup{(2)} For any compact subset $F$ of $\mathbf{R}$, we have $\theta _t(u_n )-u_n \to 0$ in the strong* topology, uniformly for $t\in F$.
Then there exists a sequence $\{ w_n\}$ of unitaries of $M$ with $\mathrm{Ad} w_n \circ \rho \to \sigma$. \end{lemm} \begin{proof} By Lemmas \ref{18}, \ref{36} and \ref{20}, it is enough to show that $V_{\tilde{\tilde{\rho}}}u_n^*\to V_{\tilde{\tilde{\sigma }}}$. Notice that we have \begin{align*} \ &V_{\tilde{\tilde{\rho}} }u_n^*(\xi \otimes f) \\ &=(s\mapsto V_{\tilde{\rho}}(\theta _{-s}(u_n^*)(\xi ) )f(s)) \end{align*} for any $\xi \in L^2(\tilde{M})$ and $f\in L^2(\mathbf{R})$. Hence we have
\ &\| V_{\tilde{\tilde{\rho}}}u_n^*(\xi \otimes f) -V_{\tilde{\tilde{ \sigma}} } (\xi \otimes f)\| ^2 \\
&=\int _{\mathbf{R}}\| V_{\tilde{\rho}}(\theta _{-s}(u_n^*)(\xi ))-V_{\tilde{\sigma}}(\xi ) \| ^2|f(s)|^2\ ds \\
&\leq 2\int _{\mathbf{R}} \| V_{\tilde{\rho}}((\theta _{-s}(u_n^*) -u_n^*) (\xi ))\| ^2 |f(s)|^2 \ ds +2\int _{\mathbf{R}} \| V_{\tilde{\rho}} (u_n^*(\xi )) -V_{\tilde{\sigma }}(\xi ) \| ^2 |f(s)|^2 \ ds \\
&\to 0 \end{align*} by the Lebesgue dominated convergence theorem. Note that in order to show the last convergence, we use Proposition \ref{R} and Lemmas \ref{33} and \ref{20}. \end{proof}
\section{The proof of the main theorem}
\begin{lemm} \label{40} Let $M$ be an AFD factor and $\sigma $ be a finite index endomorphism of $M$ with $d(\sigma )=d$. Then there exists an endomorphism $\lambda$ with the following properties.
\textup{(1)} The endomorphism $\lambda $ is approximately inner.
\textup{(2)} We have $d(\lambda )=d$.
\textup{(3)} The endomorphism $\lambda$ has a Connes--Takesaki module, and it is $\theta _{-\log d}|_{\mathcal{Z}(\tilde{M})}$. \end{lemm} \begin{proof} By the proof of Theorem 3 of Kosaki--Longo \cite{KL}, there exists an endomorphism $\lambda _0$ of the AFD factor of type $\mathrm{II}_1 $ with $d(\lambda _0)=d$. Then $\mathrm{id}_M \otimes \lambda _0$ is an endomorphism of $M$ with $d(\mathrm{id}_M\otimes \lambda _0)=d$ and with $\mathrm{mod}(\mathrm{id}_M \otimes \lambda _0)$ trivial. Hence by the existence of a right inverse of the Connes--Takesaki module map for automorphisms (See Sutherland--Takesaki \cite{ST3}), there exists an automorphism $\alpha $ of $M$ with $\mathrm{mod}(\alpha \circ (\mathrm{id}_M\otimes \lambda _0))=\theta _{-\log (d)}$. By Theorem 3.15 of Masuda--Tomatsu \cite{MT1} (or by the same argument as in this paper), it is shown that $\lambda :=\alpha \circ (\mathrm{id}_M\otimes \lambda _0)$ is approximately inner. \end{proof}
Now, we return to the proof of the main theorem.
\textit{Proof of implication \textup{(1)} $\Rightarrow $ \textup{(2)} of Theorem \ref{main}.} Let $\rho , \sigma \in \mathrm{End}(M)_0$ satisfy condition (1) of Theorem \ref{main}. Then by Lemma \ref{40}, there exist endomorphisms $\lambda , \mu \in \mathrm{End}(M)_0$ with the following properties.
(1) We have $d(\lambda )=d(\sigma)$, $d(\mu ) =d(\rho )$.
(2) We have $\tilde{\lambda}|_{\mathcal{Z}(\tilde{M})}=\theta _{-\log (d(\sigma ))}|_{\mathcal{Z}(\tilde{M})}$ and $\tilde{\mu}|_{\mathcal{Z}(\tilde{M})}=\theta _{-\log (d(\rho ))}|_{\mathcal{Z}(\tilde{M})}$.
(3) The endomorphisms $\lambda $ and $\mu $ are approximately inner.
By condition (2) above and condition (1) of Theorem \ref{main}, we have \begin{align*}
\phi _{\tilde{\rho}}\circ \phi _{\tilde{\lambda}}|_{\mathcal{Z}(\tilde{M})} &=\phi _{\tilde{\rho}} \circ \theta _{\log d(\sigma)} |_{\mathcal{Z}(\tilde{M})} \\
&=\phi _{\tilde{\sigma }}\circ \theta _{-\log (d(\sigma )/d(\rho ))} \circ \theta _{\log d(\sigma)}|_{\mathcal{Z}(\tilde{M})} \\
&=\phi _{\tilde{\sigma}}\circ \theta _{\log (d(\rho))}|_{\mathcal{Z}(\tilde{M})} \\
&=\phi _{\tilde{\sigma }}\circ \phi _{\tilde{\mu }}|_{\mathcal{Z}(\tilde{M})}. \end{align*}
Hence by replacing $\rho $ by $\lambda \circ \rho$ and $\sigma $ by $\mu \circ \sigma $ respectively, we may assume that $d(\rho)=d(\sigma )$ and $\phi _{\tilde{\rho}} |_{\mathcal{Z}(\tilde{M})}=\phi _{\tilde{\sigma}} |_{\mathcal{Z}(\tilde{M})}$. By Proposition \ref{R}, there exists a sequence $\{ u_n\}$ of unitaries of $\tilde{M}$ satisfying the assumptions of Lemma \ref{22.5}. Hence by Lemma \ref{22.5}, there exists a sequence $\{ w_n\}$ of unitaries of $M$ with $\mathrm{Ad}w_n \circ \rho \to \sigma $. \qed
\section{Appendix (A proof of the characterization of central triviality of automorphisms of the AFD factors)} In this section, we will see that it is possible to give a proof of a characterization theorem of central triviality of automorphisms of the AFD factors by a similar strategy to the proof of Theorem \ref{main}, which is independent of the types of the AFD factors.
Let $M$ be an AFD factor of type III. Let $\alpha $ be an automorphism of $M$ and $\tilde{\alpha}$ be its canonical extension. Set \[ p:=\mathrm{min}\{ q\in \mathbf{N}\mid \tilde{\alpha }^q \ \mathrm{is} \ \mathrm{centrally} \ \mathrm{trivial}\} ,\] \[ G:=\mathbf{Z}/p\mathbf{Z}.\] \begin{lemm} The action $\{ \tilde{\alpha }_n \circ \theta _t\}_{(n,t)\in G\times \mathbf{R}}$ of $G\times \mathbf{R}$ on $\tilde{M}_{\omega , \theta }$ is faithful. \end{lemm} \begin{proof}
We will show this lemma by contradiction. Let $\varphi $ be a normal faithful state of $\tilde{M}$ and $\{ \psi _j\}_{j=1}^\infty$ be a norm dense sequence of the unit ball of $\tilde{M}_*$. Assume that there exists a pair $(n ,t)\in (G\times \mathbf{R})\setminus \{(0,0)\} $ satisfying $\tilde{\alpha }_n \circ \theta _{-t}(a)=a$ for any $a\in \tilde{M}_{\omega , \theta }$. The automorphism $\tilde{\alpha }_n \circ \theta _{-t}$ is not centrally trivial, because it is trace-scaling if $t\not =0$, and because $\tilde{\alpha }_n$ is not centrally trivial by the definition of $p$ if $t=0$ (in which case $n\not =0$). Hence there would exist an element $x$ of $\tilde{M}_\omega $, necessarily not belonging to $\tilde{M}_{\omega, \theta }$, with $\tilde{\alpha} _n (x)\not =\theta _t(x)$ and with $\| x\|\leq 1$. Take a representing sequence $\{ x_k\}$ of $x$ with $\| x_k\| \leq 1 $ for any $k$. Then we would have \begin{align*}
\ & \lim _{k\to \omega }\| \tilde{\alpha} _n (x_k) -\theta _t(x_k)\| _{\varphi \circ \theta _s}^\sharp \\
&=\lim _{k \to \omega }\Bigl( \frac{1}{2} (\varphi \circ \theta _s)\bigl( |\tilde{\alpha } _n (x_k) -\theta _t(x_k )|^2 +|(\tilde{\alpha }_n (x_k)-\theta _t(x_k))^*|^2 \bigr) \Bigr) ^{1/2} \\ & =2\delta >0 \end{align*} for some $\delta >0$ (the constant $\delta $ does not depend on the choice of $s\in \mathbf{R}$ because $\{ x_k\}$ is an $\omega $-centralizing sequence). Then for each natural number $L$, there would exist $k \in \mathbf{N}$ satisfying the following three conditions.
(1) We have $\| x_k\| \leq 1$.
(2) We have
\[ \| \theta _t(x_k)\psi _j -\psi _j \theta _t(x_k)\| (=\| x_k (\psi _j\circ \theta _{t})-(\psi _j \circ \theta _{t})x_k\| )<\frac{1}{L}\]
for $j=1, \cdots , L$, $| t| \leq L$ (Use the compactness of $\{ \psi _j \circ \theta _t\mid |t|\leq L\}$. See also the argument just after Lemma \ref{10}).
(3) We have
\[ \| \tilde{\alpha }_n (x_k)-\theta _t(x_k) \| _\varphi ^\sharp >\delta .\]
Let $\Theta :L^\infty ([-L,L], dm(t))\otimes (\tilde{M}, \varphi) \to (\tilde{M}_{\omega , \theta }, \varphi ^\omega )$ be the inclusion mentioned in Section 5 (an inclusion coming from the Rohlin property of $\theta$), where $dm(t)$ is the normalized Haar measure of $[-L,L]$. Set \[ \tilde{y}:=([-L,L]\ni s\mapsto \theta _s(x_k)) \in L^\infty ([-L,L], dm(s))\otimes \tilde{M},\] \[ y:=\Theta (\tilde{y}).\] Since $\tilde{\alpha }_n\circ \theta _{-t}$ would be trivial on $\tilde{M}_{\omega, \theta }$, we would have \[ (\tilde{\alpha }_n (\Theta (f)))_s=\tilde{\alpha }_n (f_{s-t})\] for $f\in L^\infty ([-L,L])\otimes \tilde{M}$ and $s\in [-L+t, L-t]$, where $f_s$ is the evaluation of the function $f$ at $s\in [-L,L]$. Hence we would have \begin{align*}
\| \tilde{\alpha }_n (y) -y \| _{\varphi ^\omega }^\sharp &\geq (\int _{[-L+t,L-t]}(\| \tilde{\alpha }_n (\theta _{s-t}(x_k)) -\theta _{s}(x_k) \| _\varphi ^\sharp )^2 \ ds \\
&-\int _{[-L,-L+t]\cup [L-t,L]} 2^2 \ ds)^{1/2} \\
&\geq (\int _{[-L,L]}\delta ^2 \ ds -\frac{4t}{L})^{1/2} \\
&=(\delta ^2-\frac{4t}{L})^{1/2} . \end{align*} Since we have \[ (\theta _r(y))_s=y_s\] for any $0<r<1$, $s\in [-L+r,L-r]$, we have \begin{align*}
\| \theta _r(y)-y\| _{\varphi ^\omega}^\sharp &=(\int _{[-L,L]}(\| (\theta _r(y))_s-y_s \| _{\varphi }^\sharp )^2\ ds )^{1/2} \\
&\leq (\int _{[-L, -L+1]\cup [L-1,L]}2^2\ ds )^{1/2} \\
&=\frac{2}{\sqrt{L}}
\end{align*}
for $| r|\leq 1$.
We also have
\begin{align*}
\| [y,\psi _j ]\| &=\| [y, \psi _j]| _{\Theta (\mathbf{C}\otimes \tilde{M})} \| \\
&\leq \| [y, \psi _j ]|_{\Theta ( L^\infty ([-L,L])\otimes \tilde{M})} \| \\
&=\int _{[-L,L]} \| [ \tilde{y}_s , \psi _j ]\| \ ds \\
&= \int _{[-L,L]} \| [\theta _s(x_k), \psi _j]\| \ ds \\
&<\int _{[-L,L]} \frac{1}{L} \ ds \\
&=\frac{1}{L}
\end{align*}
for $j=1, \cdots , L$.
Hence there would exist a sequence $\{ y_l\}$ of $\tilde{M}$ with the following properties.
(1) We have $\| y_l\| \leq 1$.
(2) We have $\| [y_l, \psi _j]\| \to 0$ for any $j=1, 2, \cdots $.
(3) We have $\| \theta _r(y_l)-y_l\| _{\varphi }^\sharp \to 0$ uniformly for $r\in [-1,1]$.
(4) We have $\| \tilde{\alpha }_n(y_l)-\theta _t(y_l)\| _{\varphi }^\sharp \geq \delta /2$ for any $l$.
This contradicts the assumption that $\tilde{\alpha }_n \circ \theta _{-t}$ is trivial on $\tilde{M}_{\omega , \theta }$. \end{proof}
\begin{lemm} For each character $\chi \in \widehat{G\times \mathbf{R}}=\hat{G}\times \mathbf{R}$, there exists a unitary $u$ of $\tilde{M}_{\omega , \theta }$ with $\tilde{\alpha }_n \circ \theta _t(u)=\langle (n,t),\chi \rangle u$ for any $(n,t)\in G\times \mathbf{R}$. \end{lemm} \begin{proof} The proofs of Theorems 4.10 and 7.7 of Masuda--Tomatsu \cite{MT5} work in our case. \end{proof} \begin{lemm} There exists a non-zero projection $e$ of $(\tilde{M}_{\omega , \theta})^\theta $ with $\tilde{\alpha}(e)$ orthogonal to $e$. \end{lemm} \begin{proof}
By the previous lemma, for each natural number $l$, there exists a unitary $u$ of $\tilde{M}_{\omega , \theta }$ with $\tilde{\alpha }(u)=e^{2\pi i/p}u$ and with $\theta _t(u)=e^{-it/l}u$ for any $t$. Then there exists a spectral projection $e$ of $u$ with $\tilde{\alpha }(e)\leq 1-e$, $\tau ^\omega (e)=1/p$ and with $\tau ^\omega (e-\theta _t(e))\leq 1/(2l)$ for $|t|\leq 1$. By the usual diagonal argument, it is possible to choose the desired projection. \end{proof} \begin{theo} \textup{(See Theorem 1 (2) of Kawahigashi--Sutherland--Takesaki \cite{KwST}.)} For an automorphism $\alpha $ of $M$, $\alpha $ is centrally trivial if and only if its canonical extension is inner. \end{theo} \begin{proof} First, assume that $\tilde{\alpha }$ is not centrally trivial. Then by the previous lemma, $\tilde{\tilde{\alpha }}$ is not centrally trivial either. Hence $\alpha $ is not centrally trivial (See, for example, Lemmas 5.11 and 5.12 of Sutherland--Takesaki \cite{ST}). The above argument means that if $\alpha $ is centrally trivial, then $\tilde{\alpha}$ is centrally trivial. Since $\tilde{M}$ is of type II, any centrally trivial automorphism of $\tilde{M}$ is inner. The opposite direction is trivial by the central triviality of modular endomorphisms. \end{proof}
\begin{rema} Finally, we remark that by our results and the result of Masuda \cite{M}, if we admit that the AFD factors are completely classified by their flows of weights, it is possible to classify the actions of discrete amenable groups on the AFD factors without separating cases by the types of the factors. \end{rema}
\end{document}
\begin{document}
\title{Cardinal invariants concerning functions whose sum is
almost continuous.}
{\small\noindent Krzysztof Ciesielski\footnotemark[1], Department of Mathematics, West Virginia University, Morgantown, WV 26506-6310 ([email protected])}
{\small\noindent Arnold W. Miller\footnote[1]{ The results presented in this paper were initiated, and partially obtained, during the Joint US--Polish Workshop in Real Analysis, {\L}{\'o}d{\'z}, Poland, July 1994. The Workshop was partially supported by the NSF grant INT--9401673. \par We want to thank Juris Steprans for many helpful conversations. \par AMS Subject Classification. Primary: 26A15; Secondary: 03E35, 03E50. },
York University, Department of Mathematics,
North York, Ontario M3J 1P3, Canada, Permanent address:
University of Wisconsin-Madison,
Department of Mathematics,
Van Vleck Hall,
480 Lincoln Drive,
Madison, Wisconsin 53706-1388, USA ([email protected])}
\begin{abstract} Let ${\cal A}$ stand for the class of all almost continuous functions from ${\Bbb R}$ to ${\Bbb R}$ and let ${{\rm A}(\almost)}$ be the smallest cardinality of a family $F\subseteq{\reals^\reals}$ for which there is no $g\colon{\Bbb R}\to{\Bbb R}$ with the property that $f+g\in{\cal A}$ for all $f\in F$. We define the cardinal number ${{\rm A}(\darboux)}$ for the class ${\cal D}$ of all real functions with the Darboux property similarly. It is known that ${\goth c} < {{\rm A}(\almost)} \leq 2^{{\goth c}}$ \cite{Nat:AC1}. We will generalize this result by showing that the cofinality of ${{\rm A}(\almost)}$ is greater than ${\goth c}$. Moreover, we will show that this is pretty much all that can be said about ${{\rm A}(\almost)}$ in ZFC, by showing that ${{\rm A}(\almost)}$ can be equal to any regular cardinal between ${\goth c}^+$ and $2^{{\goth c}}$ and that it can be equal to $2^{{\goth c}}$ independently of the cofinality of $2^{{\goth c}}$. This solves a problem of T.~Natkaniec \cite[Problem 6.1, p. 495]{Nat:AC1}.
We will also show that ${{\rm A}(\darboux)}={{\rm A}(\almost)}$ and give a combinatorial characterization of this number. This solves another problem of Natkaniec. (Private communication.) \end{abstract}
\section{$\!\!\!\!\!\!\!${\bf.} Preliminaries.}
We will use the following terminology and notation. Functions will be identified with their graphs. The family of all functions from a set $X$ into $Y$
will be denoted by $Y^X$. The symbol $|X|$ will stand for the cardinality of a set $X$. The cardinality of the set ${\Bbb R}$ of real numbers is denoted by ${\goth c}$. For a cardinal number $\kappa$ we will write ${\rm cf}(\kappa)$ for the cofinality of $\kappa$. A cardinal number $\kappa$ is regular if $\kappa={\rm cf}(\kappa)$. Recall also that the Continuum Hypothesis (abbreviated as CH) stands for the statement ${\goth c}=\aleph_1$.
A function $f\colon{\Bbb R}\to{\Bbb R}$ is {\em almost continuous} (in the sense of Stallings \cite{Stall}) if and only if for every open set $U\subseteq{\Bbb R}^2$ containing $f$ there exists a continuous function $g\subseteq U$. So, every neighborhood of $f$ in the graph topology contains a continuous function. This concept was introduced by Stallings \cite{Stall} in connection with fixed points. We will use symbol ${\cal A}$ to denote the family of almost continuous functions from ${\Bbb R}$ to ${\Bbb R}$.
For ${\cal F}\subseteq{\Bbb R}^{\Bbb R}$ define the cardinal ${\rm A}({\cal F})$ as follows: \begin{eqnarray*} {\rm A}({\cal F})
& = & \min\{|F|\colon F\subseteq{\Bbb R}^{\Bbb R} \&\ \neg\exists
g\in{\Bbb R}^{\Bbb R}\ \forall f\in F\ f+g\in{\cal F}\}\\
& = & \min\{|F|\colon F\subseteq{\Bbb R}^{\Bbb R} \&\ \forall
g\in{\Bbb R}^{\Bbb R}\ \exists f\in F\ f+g\not\in{\cal F}\} \end{eqnarray*}
For a generalization of the next theorem see Natkaniec \cite{Nat:AC1}. Fast \cite{fast} proved the same result for the family of Darboux functions.
\thm{nat} {${\goth c} < {{\rm A}(\almost)} \leq 2^{{\goth c}}$.}\nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
At the Joint US--Polish Workshop in Real Analysis in {\L}{\'o}d{\'z}, Poland, in July 1994 A.~Maliszewski gave a talk mentioning several problems of his and T.~Natkaniec. Natkaniec asked whether or not anything more could be said about the cardinal ${{\rm A}(\almost)}$. (See also Natkaniec \cite[Problem 6.1, p. 495]{Nat:AC1} or \cite[Problem 1.7.1, p. 55]{Nat:AC2}.) In what follows we will show that pretty much nothing more can be said (in ZFC), except that the ${\rm cf}({{\rm A}(\almost)})>{\goth c}$.
We will also study the family ${\cal D}\subseteq{\reals^\reals}$ of Darboux functions. Recall that a function is {\em Darboux} if and only if it takes every connected set to a connected set, or (in the case of a real function) satisfies the intermediate value property. Note that ${\cal A}\subseteq{\cal D}$. This is because if for example $f(a)<c<f(b)$ and $c$ is omitted by $f$ on $(a,b)$, then take the $h$-shape set $H$ (see Figure \ref{fig1}). \unitlength=1.00mm
\begin{figure}
\caption{$h$-shape set $H$}
\label{fig1}
\end{figure} The complement of $H$ is an open neighborhood of the graph of $f$ which does not contain a graph of a continuous function. It is known (Stallings \cite{Stall}) that the inclusion ${\cal A}\subseteq{\cal D}$ is proper.
It is obvious from the definition that if ${\cal F}\subseteq{\cal G}\subseteq{\Bbb R}^{\Bbb R}$ then ${\rm A}({\cal F})\leq{\rm A}({\cal G})$. In particular, ${{\rm A}(\almost)}\leq{{\rm A}(\darboux)}$. At the Joint US--Polish Workshop in Real Analysis in {\L}{\'o}d{\'z}, Poland, in July 1994, T.~Natkaniec asked the authors whether it is possible that ${\rm A}({\cal A})<{\rm A}({\cal D})$. We will give a negative answer for this question by showing (in ZFC) that ${\rm A}({\cal A})={\rm A}({\cal D})$.
We will finish this section with the following technical fact, see Natkaniec \cite[Thm. 1.2, p. 464]{Nat:AC1}.
\thm{ThBlock}{(Kellum)
There exists a family ${\cal B}$ of closed sets (called {\rm a blocking
family}) with the properties that:
\begin{itemize}
\item for every $f\in{\reals^\reals}$ we have
$$f\in{\cal A}\ \mbox{if and only if }\ \forall B\in{\cal B}\
f\cap B\not=\emptyset;$$
\item for every $B\in{\cal B}$ the projection ${{\rm pr}_x}(B)$ of
$B$ onto the $x$-axis (equivalently, the domain of $B$)
is a non-degenerate interval.
\end{itemize}
}\nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
The paper is organized as follows. We will show that ${{\rm A}(\darboux)}={{\rm A}(\almost)}$, give some other characterizations of this cardinal, and prove that ${\rm cf}({{\rm A}(\almost)})>{\goth c}$ in Section 2. In Section 3 we will prove that some forcing axioms imply that ${{\rm A}(\almost)}$ can be any regular cardinal between ${\goth c}^+$ and $2^{\goth c}$ and that ${{\rm A}(\almost)}$ can be equal to $2^{\goth c}$ for any value of $2^{\goth c}$. The proof of the consistency of the forcing axioms used in Section 3 will be left for Section 4.
\section{$\!\!\!\!\!\!\!${\bf.} ${{\rm A}(\darboux)}={{\rm A}(\almost)}$ and its cofinality.}
We will need the following definitions.
For a cardinal number $\kappa\leq{\goth c}$ we define the family $${\cal D}(\kappa)\subseteq{\Bbb R}^{\Bbb R}$$ of {\em $\kappa$ strongly Darboux functions} as the family of all functions $f\colon {\Bbb R}\to{\Bbb R}$ such that for all $a,b\in {\Bbb R}$, $a<b$, and $y\in{\Bbb R}$ the set $(a,b)\cap f^{-1}(y)$ has cardinality at least $\kappa$.
It is obvious from the definition that \begin{equation}\label{eq1} {\cal D}(\lambda)\subseteq {\cal D}(\kappa)\ \mbox{ for all cardinals }\ \kappa\leq\lambda\leq{\goth c}. \end{equation}
We will need the following lemmas.
\lem{LowerBound}{ ${\rm A}({\cal D}({\goth c}))>{\goth c}$.}
{\sc Proof. } Pick a family $F\subseteq {\Bbb R}^{\Bbb R}$ of cardinality continuum. We will find a function $g\in{\Bbb R}^{\Bbb R}$ such that $f+g\in{\cal D}({\goth c})$ for all $f\in F$. Let $$\langle\langle a_\xi,b_\xi,y_\xi,f_\xi\rangle\colon \xi<{\goth c}\rangle$$ be an enumeration of the set of all $$\langle a,b,y,f\rangle\in{\Bbb R}\times{\Bbb R}\times{\Bbb R}\times F\mbox{ with $a<b$},$$ such that each four-tuple appears in the sequence continuum many times. Define by induction a sequence $\langle x_\xi\in{\Bbb R}\colon\;\; \xi<{\goth c}\rangle$ such that $$x_\xi\in(a_\xi,b_\xi)\setminus\{x_\zeta\colon\zeta<\xi\}.$$ Then, any function $g\in{\Bbb R}^{\Bbb R}$ such that $g(x_\xi)=y_\xi-f_\xi(x_\xi)$ for all $\xi<{\goth c}$ has the property that $f+g\in{\cal D}({\goth c})$ for all $f\in F$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
\lem{lemStrDar1}{ ${{\rm A}(\darboux)}={\rm A}({\cal D}(\omega_1))$. }
{\sc Proof. } Since ${\cal D}(\omega_1)\subseteq{\cal D}$ we have ${\rm A}({\cal D}(\omega_1))\leq{{\rm A}(\darboux)}$. To prove the other inequality let $\kappa={\rm A}({\cal D}(\omega_1))$. Then, by (\ref{eq1}) and Lemma \ref{LowerBound}, $$\kappa={\rm A}({\cal D}(\omega_1))\geq {\rm A}({\cal D}({\goth c}))>{\goth c}.$$ We will show that $\kappa\geq{{\rm A}(\darboux)}$.
Let $F\subseteq{\Bbb R}^{\Bbb R}$ be a family of cardinality $\kappa$ witnessing $\kappa={\rm A}({\cal D}(\omega_1))$: \begin{equation}\label{eqA} \forall g\in{\Bbb R}^{\Bbb R}\ \exists f\in F\ f+g\not\in{\cal D}(\omega_1). \end{equation} It is enough to find family $F^*\subseteq{\Bbb R}^{\Bbb R}$ of cardinality $\kappa$ such that \begin{equation}\label{eq2} \forall g\in{\Bbb R}^{\Bbb R}\ \exists f^*\in F^*\ f^*+g\not\in{\cal D}. \end{equation} Define $F^*=\{h\in{\Bbb R}^{\Bbb R}\colon \;\exists f\in F\; h=^*f\}$, where $h=^*f$ if and only if the set $\{x\colon h(x)\neq f(x)\}$ is at most countable. Since $\kappa>{\goth c}$ and for every $f\in{\Bbb R}^{\Bbb R}$ the set $\{h\in{\Bbb R}^{\Bbb R}\colon h=^*f\}$ has cardinality ${\goth c}$, we have
$|F^*|=\kappa$. It is enough to show that $F^*$ satisfies (\ref{eq2}). So, choose $g\in{\Bbb R}^{\Bbb R}$. Then, by (\ref{eqA}), there exists $f\in F$ such that $f+g\not\in{\cal D}(\omega_1)$. This means, that there are $a<b$ and $y\in{\Bbb R}$ such that the set $(a,b)\cap (f+g)^{-1}(y)$ is at most countable. Then we can find $f^*=^*f$ such that \begin{itemize}
\item $(f^*+g)(a)<y$,
\item $(f^*+g)(b)>y$, and
\item $(f^*+g)(x)\neq y$ for every $x\in (a,b)$. \end{itemize} Thus, $f^*+g\not\in{\cal D}$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
Now, we are ready for one of our main theorems.
\thm{equqtion}{ ${{\rm A}(\darboux)}={{\rm A}(\almost)}$.}
{\sc Proof. } We already know that ${{\rm A}(\almost)}\leq{{\rm A}(\darboux)}$. So, by Lemma \ref{lemStrDar1}, it is enough to prove that ${\rm A}({\cal D}(\omega_1))\leq{{\rm A}(\almost)}$.
So, let $\kappa={{\rm A}(\almost)}$. Then, by Theorem \ref{nat}, $\kappa>{\goth c}$ and, by the definition of ${{\rm A}(\almost)}$, there exists a family $F\subseteq{\Bbb R}^{\Bbb R}$ of cardinality $\kappa$ witnessing it, i.e., such that \[ \forall g\in{\Bbb R}^{\Bbb R}\ \exists f\in F\ f+g\not\in{\cal A}. \] In particular, by the definition of the family ${\cal B}$ of blocking sets (from Theorem \ref{ThBlock}), \begin{equation}\label{eq3} \forall g\in{\Bbb R}^{\Bbb R}\ \exists f\in F\ \exists B\in{\cal B}\ (f+g)\cap B=\emptyset. \end{equation} It is enough to find a family $F^*\subseteq{\Bbb R}^{\Bbb R}$ of cardinality $\kappa$ such that \begin{equation}\label{eq4} \forall g\in{\Bbb R}^{\Bbb R}\ \exists f^*\in F^*\ f^*+g\not\in{\cal D}(\omega_1). \end{equation} In order to do this, choose a function $h_B\in{\Bbb R}^{\Bbb R}$ for every $B\in{\cal B}$ such that \[ (x,h_B(x))\in B\ \mbox{ for every }\ x\in{{\rm pr}_x}(B). \] Let \[ F^*=\{f-h_B\colon f\in F\ \&\ B\in{\cal B}\}. \] Clearly $F^*$ has cardinality $\kappa$, since
$|{\cal B}|\leq{\goth c}<\kappa$. We will show that $F^*$ satisfies (\ref{eq4}). Let $g\in{\Bbb R}^{\Bbb R}$. Then, by (\ref{eq3}), there exist $f\in F$ and $B\in{\cal B}$ such that $(f+g)\cap B=\emptyset$. In particular, \[ [(f-h_B)+g]\cap(B-h_B)=[(f+g)\cap B]-h_B=\emptyset, \] where we define $Z-h_B=\{(x,y-h_B(x))\colon (x,y)\in Z\}$ for any $Z\subseteq{\Bbb R}^2$. But $(B-h_B)\supset{{\rm pr}_x}(B)\times\{0\}$. Hence, $[(f-h_B)+g]\cap[{{\rm pr}_x}(B)\times\{0\}]=\emptyset$. In particular, $[(f-h_B)+g]^{-1}(0)\cap{{\rm pr}_x}(B)=\emptyset$. So, $f^*=f-h_B\in F^*$, while $(f-h_B)+g\not\in{\cal D}(\omega_1)$ since, by Theorem \ref{ThBlock}, ${{\rm pr}_x}(B)$ contains a non-degenerate interval. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
To prove the next theorem we need a few more definitions. For a set $X\subseteq{\Bbb R}$ and a cardinal number $\kappa\leq{\goth c}$ we define the family $${\cal D}(X,\kappa)\subseteq{\Bbb R}^X$$ as the family of all functions $f\colon X\to{\Bbb R}$ such that for all $a,b\in X$, $a<b$, and $y\in{\Bbb R}$ the set $(a,b)\cap f^{-1}(y)$ has cardinality at least $\kappa$. Similarly, define the cardinal ${\rm A}({\cal F})$ as before: \begin{eqnarray*} {\rm A}({\cal F})
& = & \min\{|F|\colon F\subseteq{\Bbb R}^X \&\ \forall
g\in{\Bbb R}^X\ \exists f\in F\ f+g\not\in{\cal F}\} \end{eqnarray*} (Thus ${\cal D}({\Bbb R},\kappa)={\cal D}(\kappa)$.) It is obvious from the definitions that for $\kappa$ with $\omega_1\leq\kappa\leq{\goth c}$ \begin{equation}\label{eqBB} {\rm A}({\cal D}({\Bbb R}\setminus{\Bbb Q},\kappa))= {\rm A}({\cal D}({\Bbb R},\kappa)) \end{equation} and also \begin{equation}\label{eqCC} {\rm A}({\cal D}(X,\kappa))={\rm A}({\cal D}(Y,\kappa))\ \mbox{ for all order isomorphic $X,Y\subseteq{\Bbb R}$.} \end{equation}
\thm{thCof}{ ${{\rm A}(\almost)}={{\rm A}(\darboux)}={\rm A}({\cal D}({\goth c}))$.}
{\sc Proof. } By (\ref{eq1}) it is obvious that ${{\rm A}(\darboux)}={\rm A}({\cal D}(\omega_1))\geq {\rm A}({\cal D}({\goth c}))$.
To prove the other inequality let $F\subseteq{\Bbb R}^{\Bbb R}$ be a family of cardinality $\kappa$ with $\kappa<{{\rm A}(\darboux)}.$ It is enough to find $g\in{\Bbb R}^{\Bbb R}$ such that \begin{equation}\label{eqDD} f+g\in{\cal D}({\goth c})\ \mbox{ for every }\ f\in F. \end{equation} So, let $\langle S_\alpha\colon \alpha<{\goth c}\rangle$ be a sequence of pairwise disjoint dense subsets of ${\Bbb R}$ each of which is order isomorphic to the set ${\Bbb R}\setminus{\Bbb Q}$ of all irrational numbers. By (\ref{eqBB}) and (\ref{eqCC}) for every $\alpha<{\goth c}$ we have \[ \kappa<{{\rm A}(\darboux)}={\rm A}({\cal D}(\omega_1))= {\rm A}({\cal D}({\Bbb R},\omega_1))= {\rm A}({\cal D}({\Bbb R}\setminus{\Bbb Q},\omega_1))= {\rm A}({\cal D}(S_\alpha,\omega_1)). \] We can apply the definition of ${\rm A}({\cal D}(S_\alpha,\omega_1))$ to the family
$$F|_{S_\alpha}=\{f|_{S_\alpha}\in{\Bbb R}^{S_\alpha}\colon f\in F\}$$ to find a function $g_\alpha\colon S_\alpha\to{\Bbb R}$ such that \[
(f|_{S_\alpha})+g_\alpha\in{\cal D}(S_\alpha,\omega_1)\ \mbox{ for every }\ f\in F. \] It is easy to see that any $g\in{\Bbb R}^{\Bbb R}$ extending $\bigcup_{\alpha<{\goth c}}g_\alpha$ satisfies (\ref{eqDD}).
\nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
We will finish this section with one more cardinal equal to ${{\rm A}(\almost)}$. For any infinite cardinal $\kappa$ let \[
{\goth e}_\kappa=\min\{|F|\colon F\subseteq \kappa^\kappa \&\ \forall
g\in \kappa^\kappa\ \exists f\in F\ |f\cap g|<\kappa\}. \] This cardinal was extensively studied in Landver \cite{land}.
\thm{Comb}{ ${{\rm A}(\almost)}={{\rm A}(\darboux)}={\rm A}({\cal D}({\goth c}))={\goth e}_{\goth c}$.}
{\sc Proof. } It is enough to prove that ${\rm A}({\cal D}({\goth c}))={\goth e}_{\goth c}$. It is also clear that \[
{\goth e}_{\goth c}=\min\{|F|\colon F\subseteq {\Bbb R}^{\Bbb R} \&\ \forall
g\in {\Bbb R}^{\Bbb R}\ \exists f\in F\ |f\cap g|<{\goth c}\}. \]
To prove the inequality ${\rm A}({\cal D}({\goth c}))\leq{\goth e}_{\goth c}$ let $F\subseteq{\Bbb R}^{\Bbb R}$ have cardinality $\kappa<{\rm A}({\cal D}({\goth c}))$. Then, there exists $g\colon{\Bbb R}\to{\Bbb R}$ such that $g-f\in{\cal D}({\goth c})$ for every $f\in F$. In particular,
$|(g-f)^{-1}(0)|={\goth c}$, i.e., $f(x)=g(x)$ for continuum many $x\in{\Bbb R}$. So, $|f\cap g|={\goth c}$ for all $f\in F$, i.e., $\kappa<{\goth e}_{\goth c}$. This proves ${\rm A}({\cal D}({\goth c}))\leq{\goth e}_{\goth c}$.
To prove ${\goth e}_{\goth c}\leq{\rm A}({\cal D}({\goth c}))$ take a family $F\subseteq{\Bbb R}^{\Bbb R}$ of cardinality $\kappa<{\goth e}_{\goth c}$. We will show that $\kappa<{\rm A}({\cal D}({\goth c}))$.
Choose a sequence $\langle S_{a,b}^y\subseteq(a,b)\colon a,b,y\in{\Bbb R}, \ a<b\rangle$ of pairwise disjoint sets of cardinality continuum. Applying $\kappa<{\goth e}_{\goth c}$ to the family
$$F_{a,b}^y=\{(y-f)|_{S_{a,b}^y}\colon f\in F\}$$ we can find $g_{a,b}^y\colon S_{a,b}^y\to{\Bbb R}$
such that $|(y-f)|_{S_{a,b}^y}\cap g_{a,b}^y|={\goth c}$ for every $f\in F$. In particular, $(y-f)(x)= g_{a,b}^y(x)$, i.e., $(f+g_{a,b}^y)(x)=y$ for continuum many $x\in S_{a,b}^y\subseteq(a,b)$. Now, if we take any $g\in{\Bbb R}^{\Bbb R}$ extending $\bigcup\{g_{a,b}^y\colon a,b,y\in{\Bbb R},\ a<b\}$ then $(f+g)^{-1}(y)\cap(a,b)$ has cardinality continuum for every $f\in F$ and $a,b,y\in{\Bbb R}$, $a<b$. So, $\kappa<{\rm A}({\cal D}({\goth c}))$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
\cor{}{${\rm cf}({{\rm A}(\almost)})>{\goth c}$.} {\sc Proof. } It is obvious that ${\rm cf}({\goth e}_\kappa)>\kappa$ since $\kappa$ can be split into $\kappa$ many sets of size $\kappa$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
\section{$\!\!\!\!\!\!\!${\bf.} Forcing axioms and the value of ${{\rm A}(\almost)}$.}
In this section we will prove the following two theorems.
\thm{thForMain}{
Let $\lambda\geq\kappa\geq\omega_2$ be cardinals such that
${\rm cf}(\lambda)>\omega_1$ and
$\kappa$ is regular. Then it is relatively consistent with
ZFC that the Continuum Hypothesis (${\goth c}=\aleph_1$) is true,
$2^{{\goth c}}=\lambda$, and ${{\rm A}(\almost)}=\kappa$. }
So for example if $2\leq n\leq 17$, then it is consistent that $${\goth c}=\aleph_1< {{\rm A}(\almost)}=\aleph_n\leq\aleph_{17}=2^{\goth c}.$$
\thm{thForMain2}{
Let $\lambda$ be a cardinal such that
${\rm cf}(\lambda)>\omega_1$.
Then it is relatively consistent with
ZFC that the Continuum Hypothesis (${\goth c}=\aleph_1$) holds and
${{\rm A}(\almost)}=\lambda=2^{{\goth c}}$. }
It follows from Theorem \ref{thForMain2} that ${{\rm A}(\almost)}$ can be a singular cardinal, e.g. ${{\rm A}(\almost)}=\aleph_{\omega_2}$ where ${\goth c}^+=\omega_2$. We do not know how to get ${{\rm A}(\almost)}$ strictly smaller than $2^{\goth c}$ and singular.
The technique of proof is a variation on the idea of a Generalized Martin's Axiom (GMA). In this section we will formulate the forcing axioms and show that they imply the results. The proof of the consistency of these axioms will be left for Section 4.
For a partially ordered set $({\Bbb P},\leq)$ we say that $G\subseteq{\Bbb P}$ is a ${\Bbb P}$-filter if and only if \begin{itemize}
\item for all $p,q\in G$ there exists $r\in G$ with
$r\leq p$ and $r\leq q$, and
\item for all $p,q\in{\Bbb P}$ if $p\in G$ and $q\geq p$, then
$q\in G$. \end{itemize} Define $D\subseteq {\Bbb P}$ to be dense if and only if for every $p\in{\Bbb P}$ there exists $q\in D$ with $q\leq p$.
For any cardinal $\kappa$ and poset ${\Bbb P}$ define MA$_\kappa({\Bbb P})$ (Martin's Axiom for ${\Bbb P}$) to be the statement that for any family ${\cal D}$ of dense subsets of ${\Bbb P}$ with $|{\cal D}|<\kappa$ there exists a ${\Bbb P}$-filter $G$ such that $D\cap G\not=\emptyset$ for every $D\in {\cal D}$.
From now on, let ${\Bbb P}$ be the following partial order $${\Bbb P}=
\{p\;| \; p:X\to{\Bbb R}, X\subseteq{\Bbb R}, \mbox{ and } |X|<{\goth c}\}$$ i.e., the partial functions from ${\Bbb R}$ to ${\Bbb R}$ of cardinality less than ${\goth c}$. Define $p\leq q$ if and only if $q\subseteq p$, i.e., $p$ extends $q$ as a partial function.
\lem{geq}{MA$_\kappa({\Bbb P})$ implies ${{\rm A}(\almost)}\geq\kappa$.}
{\sc Proof. } We know by Theorem \ref{Comb} that ${{\rm A}(\almost)}={\goth e}_{\goth c}>{\goth c}$. Thus, it is enough to prove that MA$_\kappa({\Bbb P})$ implies ${\goth e}_{\goth c}\geq\kappa$ for $\kappa>{\goth c}$. Note that for any ${\Bbb P}$-filter $G$ since any two conditions in $G$ must have a common extension, $\bigcup G$ is a partial function from ${\Bbb R}$ to ${\Bbb R}$. Moreover, it is easy to see that for any $x\in{\Bbb R}$ the set $$D_x=\{p\in{\Bbb P}: x\in{\rm dom}(p)\}$$ is dense in ${\Bbb P}$ and that $\bigcup G\colon{\Bbb R}\to{\Bbb R}$ for any ${\Bbb P}$-filter $G$ intersecting all sets $D_x$.
Let $\langle S_\alpha:\alpha<{\goth c} \rangle$ be a partition of ${\Bbb R}$ into pairwise disjoint sets of size ${\goth c}$. Also for any $f\in{{\Bbb R}^{\Bbb R}}$ and $\alpha<{\goth c}$ the set $$D_{f,\alpha}=\{p\in{\Bbb P}: \exists x\in({\rm dom}(p)\cap S_\alpha)\;\; p(x)=f(x)\}$$ is dense in ${\Bbb P}$. Given any $F\subseteq {{\Bbb R}^{\Bbb R}}$
with $|F|<\kappa$ let $${\cal D}=\{D_x:x\in{\Bbb R}\}\cup\{D_{f,\alpha}: f\in F,\alpha<{\goth c}\}.$$
Notice that $|{\cal D}|={\goth c}<\kappa$. Applying MA$_\kappa({\Bbb P})$ we can find a ${\Bbb P}$-filter $G$ such that $G$ meets every $D\in{\cal D}$. Letting
$g=\bigcup G\colon{\Bbb R}\to{\Bbb R}$ we see that $|f\cap g|={\goth c}$ for every $f\in F$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
The proof of Lemma \ref{geq} is a kind of forcing extension of the inductive argument used in the proof of Theorem \ref{nat}.
Notice also, that Theorem \ref{thForMain2} follows immediately from Lemma \ref{geq}, Theorem \ref{nat} and the following theorem.
\thm{thForMain2Proof}{
Let $\lambda$ be a cardinal such that
${\rm cf}(\lambda)>\omega_1$.
Then it is relatively consistent with
ZFC+CH that $2^{{\goth c}}=\lambda$ and that
MA$_\lambda({\Bbb P})$ holds. }
Thus, we have proved Theorem \ref{thForMain2} modulo Theorem \ref{thForMain2Proof}. Theorem \ref{thForMain2Proof} will be proved in Section 4.
Lemma \ref{geq} shows also one inequality of Theorem \ref{thForMain}. To prove the reverse inequality we will use a different partial order $({\Bbb P}^*,\leq)$. It is similar to ${\Bbb P}$ but in addition has some side conditions. \[ {\Bbb P}^*=\{(p,E):p\in{\Bbb P} \mbox{ and } E\subseteq{\reals^\reals}
\mbox{ with } |E|<{\goth c}\}. \] Define the ordering on ${\Bbb P}^*$ by \begin{eqnarray*} (p,E)\leq (q,F) & \mbox{ iff } & p\leq q\ \mbox{ and } \ E\supseteq F\\ & \mbox{ and } & \forall x\in{\rm dom}(p)\setminus{\rm dom}(q)\;\; \forall f\in F\; p(x)\not=f(x). \end{eqnarray*} The idea of the last condition is that we wish to create a generic function $g\in {\reals^\reals}$ with the property that for many $f$ we have $g(x)\not=f(x)$ for almost all $x$. Thus, the condition $(q,F)$ `promises' that for all new $x$ and old $f\in F$ it should be that $g(x)\not=f(x)$.
For a cardinal number $\kappa$ define Lus$_\kappa({\Bbb P}^*)$ to be the statement: \begin{quote} There exists a sequence $\langle G_\alpha:\alpha<\kappa\rangle$ of ${\Bbb P}^*$-filters, called a $\kappa$-Lusin sequence, such that for every dense set $D\subseteq{\Bbb P}^*$ \[
|\{\alpha<\kappa\colon \;G_\alpha\cap D=\emptyset\}|<\kappa. \] \end{quote} Thus we have a Lusin sequence of ${\Bbb P}^*$-filters. This is also known as a kind of Anti-Martin's Axiom. See vanDouwen and Fleissner \cite{df}, Miller and Prikry \cite{mp}, Todorcevic \cite{tod}, and Miller \cite{sur} for a similar axiom.
\lem{leq}{
Suppose ${\goth c}<\kappa$, $\kappa$ is regular, and
Lus$_\kappa({\Bbb P}^*)$.
Then ${{\rm A}(\almost)}\leq\kappa$. }
{\sc Proof. } Let $\langle G_\alpha:\alpha<\kappa\rangle$ be a $\kappa$-Lusin sequence of ${\Bbb P}^*$-filters and let \[ g_\alpha=\bigcup \{p:\exists F\; (p,F)\in G_\alpha\}. \] Then $g_\alpha$ is a partial function from ${\Bbb R}$ into ${\Bbb R}$. Similarly to the last proof, let $$D_x=\{(p,F)\in{\Bbb P}^*\colon \; x\in{\rm dom}(p)\}.$$ To see that $D_x$ is dense let $(q,F)$ be an arbitrary element of ${\Bbb P}^*$ and suppose it is not already an element of $D_x$. The set $Q=\{f(x):f\in F\}$ has cardinality less than ${\goth c}$ so there exists $y\in{\Bbb R}\setminus Q$. Let $p=q\cup\{(x,y)\}$. Then $(p,F)\leq (q,F)$ and $(p,F)\in D_x$. Thus, each $D_x$ is dense in ${\Bbb P}^*$. Hence, since ${\goth c}<\kappa$ and $\kappa$ is regular, we may assume that each $g_\alpha$ is a total function.
For each $f\in{\reals^\reals}$ define $$D(f)=\{(p,E)\in{\Bbb P}^*: \; f\in E\}.$$
Note that for any $(p,F)$, if we let $E=F\cup\{f\}$, then $(p,E)\leq (p,F)$. Hence $D(f)$ is dense.
Next, note that by the nature of definition of $\leq$ in ${\Bbb P}^*$, if $(p,F)\in G$, where $G$ is a ${\Bbb P}^*$-filter, and $g=\bigcup \{p\colon\exists F\;(p,F)\in G\}$, then for any $f\in F$ we have $g(x)\not=f(x)$ except possibly for the $x$ in the domain of $p$. Therefore for any $f\in {\reals^\reals}$ there exists $\alpha<\kappa$ such that
$|g_\alpha\cap f|<{\goth c}$. Thus, the family $\{g_\alpha\colon\alpha<\kappa\}$ shows that ${{\rm A}(\almost)}={\goth e}_{\goth c}\leq\kappa$ as was to be shown. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par \lem{equiv}{ For any regular $\kappa$ we have Lus$_\kappa({\Bbb P}^*)\longrightarrow $MA$_\kappa({\Bbb P}^*)\longrightarrow$MA$_\kappa({\Bbb P})$. } {\sc Proof. } The first implication needs that $\kappa$ is regular but is true for any partial order. Given a family ${\cal D}$ of dense subsets of ${\Bbb P}^*$ of cardinality less than $\kappa$ and $\langle G_\alpha:\alpha<\kappa\rangle$ a Lusin sequence for ${\Bbb P}^*$, it must be that for some $\alpha<\kappa$, $G_\alpha$ meets every element of ${\cal D}$.
The second implication follows from the fact that in some sense ${\Bbb P}$ is `living inside' of ${\Bbb P}^*$. Let $r:{\Bbb R}\to{\Bbb R}$ be a map with $|r^{-1}(y)|={\goth c}$ for every $y\in{\Bbb R}$. Define $$\pi:{\Bbb P}^*\to{\Bbb P} \mbox{ by } \pi(p,F)=r\circ p.$$ Notice that if $(p,E)\leq(q,F)$ then $\pi(p,E)\leq\pi(q,F)$. This implies that $\pi(G)$ is a ${\Bbb P}$-filter for any ${\Bbb P}^*$-filter $G$. Furthermore, we claim that if $D\subseteq{\Bbb P}$ is dense, then $\pi^{-1}(D)$ is dense in ${\Bbb P}^*$. To see this, let $(p,F)\in{\Bbb P}^*$ be arbitrary. Since $D$ is dense, there exists $q\leq \pi(p,F)$ with $q\in D$. Now, find $s\in{\Bbb P}$ extending $p$ such that $r\circ s= q\supseteq r\circ p$ and $s(x)\neq f(x)$ for every $x\in{\rm dom}(s)\setminus{\rm dom}(p)$ and $f\in F$. This can be done by choosing \[ s(x)\in r^{-1}(q(x))\setminus\{f(x)\colon f\in F\} \] for every $x\in{\rm dom}(q)\setminus{\rm dom}(p)$. Then, $(s,F)\leq (p,F)$ and $(s,F)\in\pi^{-1}(q)\subseteq\pi^{-1}(D)$.
This gives us the second implication, since if ${\cal D}$ is a family of dense subsets of ${\Bbb P}$ with $|{\cal D}|<\kappa$ and $G$ is a ${\Bbb P}^*$-filter meeting each element of $\{\pi^{-1}(D):D\in{\cal D}\}$, then $\pi(G)$ is a ${\Bbb P}$-filter meeting each element of ${\cal D}$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
It follows from Lemmas \ref{geq}, \ref{leq}, and \ref{equiv} that Lus$_\kappa({\Bbb P}^*)$ implies ${{\rm A}(\almost)}=\kappa$. In particular, Theorem \ref{thForMain} follows from the following theorem.
\thm{thForMainProof}{
Let $\lambda\geq\kappa\geq\omega_2$ be cardinals such that
${\rm cf}(\lambda)>\omega_1$ and $\kappa$ is regular.
Then it is relatively consistent with
ZFC+CH that $2^{{\goth c}}=\lambda$ and
Lus$_\kappa({\Bbb P}^*)$ holds. }
Theorem \ref{thForMainProof} will be proved in Section 4.
\section{$\!\!\!\!\!\!\!${\bf.} Consistency of our forcing axioms.}
In this section we will prove Theorems \ref{thForMain2Proof} and \ref{thForMainProof}. For Theorem \ref{thForMain2Proof}, start with a model of GCH and extend it by forcing with the countable partial functions from $\lambda$ to $\omega_1$. For Theorem \ref{thForMainProof} start with a model of $$2^\omega=\omega_1+2^{\omega_1}=\lambda$$ and do a countable support iteration of ${\Bbb P}^*$ of length $\kappa$. ${\Bbb P}^*$ is isomorphic to the eventual dominating partial order. For the expert this should suffice. The rest of this section is included for our readers who are not set theorists. For similar proofs see for example Kamo \cite{kam} and Uchida \cite{uc}.
We begin with some basic forcing terminology and facts. (See Kunen \cite{Kun}.) For a model $M$ of set theory ZFC and a partially ordered set $({\Bbb S},\leq)$, a filter $G\subseteq{\Bbb S}$ is {\em ${\Bbb S}$-generic over $M$} if $G$ intersects every dense $D\subseteq{\Bbb S}$ belonging to $M$. The fundamental theorem of forcing states that for every model $M$ of ZFC and every partial order
${\Bbb S}$ from $M$ there exists model $M[G]$ of ZFC (called an {\em ${\Bbb S}$-generic extension of $M$}) such that $G$ is ${\Bbb S}$-generic over $M$ and $M[G]$ is the smallest model of ZFC such that $M\subseteq M[G]$ and $G\in M[G]$. Thus, the simplistic idea for getting MA$_\kappa({\Bbb P})$ is to start with model $M$ of $ZFC$, take ${\Bbb P}$ from $M$ and look at the model $M[G]$, where $G$ is ${\Bbb P}$-generic over $M$. Then, $G$ intersects ``all'' dense subsets of ${\Bbb P}$ and we are done. There are, however, two problems with this simple approach. First, ``all'' dense subsets of ${\Bbb P}$ means ``all dense subsets from $M$'' and we like to be able to talk about all dense subsets from our universe, i.e., from $M[G]$. Second, our partial order is a set described by some formula as the set having some properties. There is no reason, in general, that the same description will give us the same objects in $M$ and in $M[G]$.
The second problem will not give us much trouble. For the generic extensions we will consider, the definition of ${\Bbb P}$ will give us the same objects in all models we will consider. In the case of the partial order ${\Bbb P}^*$ this will not be the case, but the new orders ${\Bbb P}^*$ will be close enough to the old so that it will not bother us.
To take care of the first of the mentioned problems, we will be constructing a Lusin sequence $\langle G_\alpha\colon\alpha<\kappa\rangle$ by some kind of induction on $\alpha<\kappa$: our final model can be imagined as $N=M[G_0][G_1]\ldots[G_\alpha]\ldots$ and we will make sure that every dense subset $D\in N$ of ${\Bbb P}^*$ is taken care of from some stage $\alpha<\kappa$.
We need some more definitions and facts. Given a partial order we say that $p,q$ are {\em compatible} if there exists $r$ such that $r\leq p$ and $r\leq q$. A partial order is {\em well-met} provided for any two elements $p,q$ if $p$ and $q$ are compatible, then they have a greatest lower bound, i.e., there exists $r$ such that $r\leq p$ and $r\leq q$ and for any $s$ if $s\leq p$ and $s\leq q$, then $s\leq r$. Notice that both partial orders ${\Bbb P}$ and ${\Bbb P}^*$ used in Lemmas \ref{geq} and \ref{leq} are well-met. For the case of ${\Bbb P}^*$ if $(p,E)$ and $(q,F)$ are compatible, then $(p\cup q,E\cup F)$ is their greatest lower bound. A subset $L$ of a partial order is {\em linked} if any two elements of $L$ are compatible. A partial order is {\em $\omega_1$-linked} provided it is a union of $\omega_1$ linked subsets. Assuming the Continuum Hypothesis note that the poset ${\Bbb P}$ used in the proof of Lemma \ref{geq} has cardinality $\omega_1$, hence it is $\omega_1$-linked. Note that for any $p\in{\Bbb P}$ if we define $$L_p=\{(q,F)\in{\Bbb P}^*: q=p\},$$ then $L_p$ is a linked subset of ${\Bbb P}^*$, hence ${\Bbb P}^*$ is also $\omega_1$-linked. A subset $A$ of a partial order is an {\em antichain} if any two elements of $A$ are incompatible. We say that a partial order has the {\em $\omega_2$-chain condition} ({\em $\omega_2$-cc}) if every antichain in it has cardinality less than $\omega_2$. Clearly $\omega_1$-linked implies the $\omega_2$-chain condition. Finally we say a partial order is countably closed if any descending $\omega$-sequence $\langle p_n:n\in\omega\rangle$ (i.e., $p_{n+1}\leq p_n$ for all $n$) has a lower bound. Notice that both of our partial orders are countably closed.
All partial orders we are going to consider here will be countably closed and will satisfy $\omega_2$-chain condition. In particular, it is known that if the generic extension $M[G]$ of $M$ is obtained with such partial order, then $M[G]$ and $M$ have the same cardinal numbers, the same real numbers, the same countable subsets of real numbers and the same sets ${\Bbb R}^X$ for any countable set $X\in M$. In particular, ${\Bbb P}$ will be the same in $M[G]$ as in $M$.
Let us also notice that every dense set contains a maximal antichain and if $A$ is a maximal antichain, then $D=\{p:\exists q\in A\;\; p\leq q\}$ is a dense set. Thus a filter $G$ is ${\Bbb S}$-generic over a model $M$ if and only if it meets every maximal antichain in $M$.
{\sc Proof of Theorem \ref{thForMain2Proof}.} Take a model $M$ of ZFC+GCH. For a set $X$ in $M$ let \[ {\Bbb S}_X= \{p\in{\Bbb P}^X\colon p(x)=\emptyset \mbox{ for all but countably many $x\in X$}\}. \] Define an ordering on ${\Bbb S}_X$ by $p\leq q$ if and only if $p(x)\leq q(x)$ for every $x\in X$.
Now, let $\lambda$ be as in Theorem \ref{thForMain2Proof} and let $G$ be a ${\Bbb S}_\lambda$ generic over $M$. We will show that MA$_\lambda({\Bbb P})$ holds in $M[G]$.
It is easy to see that ${\Bbb S}_\lambda$ is countably closed. It is also known that ${\Bbb S}_\lambda$ satisfies $\omega_2$-cc and that $2^{\omega_1}=\lambda$ in $M[G]$. (See Kunen \cite[Ch. VII, Lemma 6.10 and Thm. 6.17]{Kun}.)
Now, for $\alpha<\lambda$ let $G_\alpha=\{p(\alpha)\colon p\in G\}$. Then, each $G_\alpha$ is a filter in ${\Bbb P}$. We will show that for every family
${\cal D}$ of dense subsets of ${\Bbb P}$ with $|{\cal D}|<\lambda$ there exists $\alpha<\lambda$ such that $G_\alpha$ intersects every $D$ from ${\cal D}$.
In order to argue for it we need two more facts about forcing ${\Bbb S}_X$. (See Kunen \cite[Ch. VII]{Kun}: Thm. 1.4 and 2.1 for (A) and Lemma 5.6 for (B).) \begin{description} \item{(A)} If $X,Y\in M$ are disjoint and $G$ is ${\Bbb S}_{X\cup Y}$-generic
over $M$, then $G_X=G\cap{\Bbb S}_X$ is ${\Bbb S}_X$-generic
over $M$, $G_Y$ is ${\Bbb S}_Y$-generic over $M[G_X]$, and
$$M[G_X][G_Y]=M[G].$$ \item{(B)} If $A\subseteq M$ then there exists $X\in M$ with
$|X|\leq |A|+\omega_1$
such that $A\in M[G_X]$. \end{description}
Now, let $G_\lambda$ be ${\Bbb S}_\lambda$ generic over $M$ and let ${\cal D}\in M[G_\lambda]$ be a family of dense subsets of ${\Bbb P}$
with $|{\cal D}|<\lambda$. Let ${\cal H}$ be a family of maximal antichains, one contained in each element of ${\cal D}$.
Then, $|A|\leq\omega_1$ for each $A\in {\cal H}$, since ${\Bbb P}$ satisfies $\omega_2$-cc. So, by (B), there is
$X\subseteq\lambda$ from $M$ of cardinality $|{\cal H}| \cdot \omega_1<\lambda$ such that ${\cal H}\in M[G_X]$. Choose $\alpha\in\lambda\setminus X$. Then since $G_\alpha$ is ${\Bbb P}$-generic over $M[G_X]$ it follows that $G_\alpha$ meets each element of ${\cal H}$, and hence each element of ${\cal D}$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
Next we prepare to prove Theorem \ref{thForMainProof}. As mentioned in the beginning of the section, we will try to prove it by defining some sequence $\langle {\Bbb S}_\alpha\colon\alpha\leq\kappa\rangle$ of partial orders and try to obtain our final model as $N_\kappa=M[G_\kappa]$ where every $G_\alpha$ is an ${\Bbb S}_\alpha$-generic over an appropriate initial model. This technique is called iterated forcing and needs a few words of introduction.
We can define in $M$ an iterated forcing $\langle {\Bbb S}_\alpha:\alpha<\kappa\rangle$ by induction on $\alpha$. At successor stages we define $${\Bbb S}_{\alpha+1}={\Bbb S}_\alpha\times{{\Bbb P}^*}^{M[G_\alpha]}.$$ where ${{\Bbb P}^*}^{M[G_\alpha]}$ is ${\Bbb P}^*$ in the sense of $M[G_\alpha]$. (Since we add new elements of ${\Bbb R}^{\Bbb R}$ the partial order ${\Bbb P}^*$ changes as our models increase.) We can't really do it precisely this way, because ${\Bbb P}_{\alpha+1}$ must be in $M$. However, it is possible to find its approximation, $\hat{{\Bbb P}}_\alpha$, in $M$, called a name for ${\Bbb P}_\alpha$, and use this instead. (See Kunen \cite[Ch. VII sec. 5]{Kun}).
For limit ordinals $\lambda<\kappa$, define ${\Bbb S}_\lambda$
to be the set of functions $f$ with domain $\lambda$ such that $f|_{\alpha}\in {\Bbb S}_\alpha$ for each $\alpha<\lambda$ and $f(\alpha)={\Bbb I}$ for all but countably many $\alpha$. Here we use ${\Bbb I}$ to denote the largest element of any partial order. Countable support iterations originated with Laver \cite{lav}. For details see Baumgartner \cite{Baum} or Kunen \cite[Ch. VII sec. 7]{Kun}.
The proof that follows will involve a basic lemma used to show various generalizations of Martin's Axiom hold for one cardinal up. (See Baumgartner \cite{Baum} and Shelah \cite{sh}). In particular, we will need the following theorem.
\thm{thBaum}{ (Baumgartner)
Assume the Continuum Hypothesis. Suppose
$\langle {\Bbb S}_\alpha:\;\alpha<\kappa\rangle$
is a countable support iteration of countably closed
well-met $\omega_1$-linked partial orders.
Then for every
$\alpha\leq\kappa$ we have that ${\Bbb S}_\alpha$ is
countably closed and satisfies the $\omega_2$-chain condition. }
Actually we need only a very weak version of this theorem, for example, something analogous to \cite[Theorem VII, 7.3]{Kun} of Kunen.
Now, we are ready for the proof of Theorem \ref{thForMainProof}.
{\sc Proof of Theorem \ref{thForMainProof}.} Take a model $M$ of ZFC+CH in which $2^{{\goth c}}=\lambda$, and $\kappa$ is a regular cardinal with $\omega_2\leq\kappa\leq\lambda$. Let ${\Bbb S}_\alpha$ be a countable support iteration $\{{\Bbb P}_\alpha\colon\alpha<\kappa\}$, where
${\Bbb P}_\alpha={{\Bbb P}^*}^{M[G^\alpha]}$ for all $\alpha<\kappa$. Let $G^\kappa$ be an ${\Bbb S}_\kappa$-generic filter over $M$ and, for $\alpha<\kappa$, let $G^\alpha=G^\kappa|_\alpha$. Then each $G^\alpha$ is an ${\Bbb S}_\alpha$-generic filter over $M$.
We will show that Lus$_\kappa({\Bbb P}^*)$ holds in $M[G^\kappa]$.
In the model $M[G^\alpha]$ the partial order ${{\Bbb P}^*}^{M[G^\alpha]}$ can be decoded from ${\Bbb S}_\alpha$ and we can also decode a filter $G_\alpha$ which is ${{\Bbb P}^*}^{M[G^\alpha]}$-generic over $M[G^\alpha]$. We claim that the sequence $\langle G_\alpha\colon\alpha\in \kappa\rangle$ is a Lusin sequence for ${\Bbb P}^*$ in $M[G^\kappa]$.
So, let $D\in M[G^\kappa]$ be a dense subset of ${\Bbb P}^*$ and let $A\in M[G^\kappa]$ be a maximal antichain contained in
$D\subseteq{\Bbb P}^*$. Then, $|A|\leq\omega_1$, since ${\Bbb P}^*$ satisfies $\omega_2$-cc. So, by a fact similar to (B) above, there is $\beta<\kappa$ such that $A\in M[G^\beta]$. Then, for every $\alpha\geq\beta$, the filter $G_\alpha$ is generic over $M[G^\alpha]\supseteq M[G^\beta]$ and so, $G_\alpha$ intersects both $A$ and $D$. Therefore, the set \[ \{\alpha<\kappa\colon G_\alpha\cap D=\emptyset\} \subseteq \{\alpha<\kappa\colon G_\alpha\cap A=\emptyset\}\subseteq\beta \] has cardinality less than $\kappa$. \nopagebreak\par\noindent\nopagebreak$\blacksquare$\par
It is worth mentioning that some generalizations of these theorems are possible where the Continuum Hypothesis fails.
\end{document}
\begin{document}
\title{On Scaling Invariance and Type-\MyRoman{1} Singularities for the Compressible Navier-Stokes Equations} \author{ Zhen Lei \footnote{School of Mathematical Sciences; LMNS and Shanghai
Key Laboratory for Contemporary Applied Mathematics, Fudan University, Shanghai 200433, P. R.China. {\it Email:
[email protected]}}\and Zhouping Xin\footnote{The Institute of Mathematical Sciences and Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. {\it Email: [email protected]} }
} \date{\today} \maketitle
\begin{abstract} We find a new scaling invariance of the barotropic compressible Navier-Stokes equations. Then it is shown that type \MyRoman{1} singularities of solutions with
$$\limsup_{t \nearrow T}|{\rm div} u(t, x)|(T - t) \leq \kappa,$$ can never happen at time $T$ for any adiabatic number $\gamma \geq 1$. Here $\kappa > 0$ does not depend on the initial data. This is achieved by proving the regularity of solutions under $$\rho(t, x) \leq \frac{M}{(T - t)^\kappa},\quad M < \infty.$$ This new scaling invariance also motivates us to construct an explicit type \MyRoman{2} blowup solution for $\gamma > 1$. \end{abstract}
\maketitle
\section{Introduction}
We study the Cauchy problem for the compressible Navier-Stokes equations \begin{equation}\label{CNS} \begin{cases} \partial_t\rho + \nabla\cdot (\rho u) = 0,\\[-4mm]\\ \partial_t(\rho u) + \nabla\cdot(\rho u\otimes u) + \nabla P(\rho)
= \mu\Delta u + (\lambda + \mu)\nabla\nabla\cdot u,\\[-4mm]\\ P(\rho) = A\rho^\gamma, \end{cases} \end{equation}
which govern the motion of a compressible viscous and polytropic Newtonian fluid. As usual, $\rho: \mathbb{R}_+ \times \Omega \rightarrow \mathbb{R}_+$ denotes the density of the fluid flows, $u: \mathbb{R}_+ \times \Omega \rightarrow \mathbb{R}^n$ is the velocity field and $P(\rho) = A\rho^\gamma$ the scalar pressure. Moreover, $\Omega \subseteq \mathbb{R}^n$ is a smooth bounded domain or the whole space with $n$ ($= 2\ {\rm or}\ 3$) being the space dimension, the constant $A > 0$, the adiabatic number $\gamma \geq 1$ and the viscosity constants $\lambda$ and $\mu$ satisfy the physical constraint \begin{equation}\label{phycons} \mu > 0,\ \ n\lambda + 2\mu > 0. \end{equation} If $\lambda = \mu = 0$ in \eqref{CNS}, one recovers the compressible Euler equations.
The main results of this paper are that there is a new scaling law for \eqref{CNS} and certain type \MyRoman{1} singularities can be excluded. The new scaling invariance of the compressible Navier-Stokes equations is stated as a theorem below which is our key observation. Here we do not pursue the exact meaning of ``solution''. \begin{thm}\label{ScalingI} Let $(\rho, u)$ be a solution to the compressible Navier-Stokes equations \eqref{CNS} and $\kappa > 0$. Define \begin{equation}\label{Scaling} \begin{cases} \rho^\kappa(t, x) = \kappa^{\frac{1}{\gamma}}\rho(\kappa t, \kappa^{\frac{\gamma + 1}{2\gamma}}x),\\[-4mm]\\ u^\kappa(t, x) = \kappa^{\frac{\gamma - 1}{2\gamma}}u(\kappa t, \kappa^{\frac{\gamma + 1}{2\gamma}}x). \end{cases} \end{equation} Then $(\rho^\kappa, u^\kappa)$ is also a solution to \eqref{CNS} for every $\kappa > 0$. \end{thm} While such a scaling invariance can be verified directly, we believe that it is interesting in itself. As is well-known, the natural scaling invariance has played one of the most essential roles in the study of incompressible Navier-Stokes equations at least from a heuristic point of view (for instance, the famous Caffarelli-Kohn-Nirenberg theory \cite{CKN:1}, the small data global existence type results in critical spaces \cite{Chemin, FK, Kato, Planchon, KTa, LL}, etc.). Certain types of asymptotic scalings are also among the backbones in the study of compressible Navier-Stokes equations (for instance, the work of Danchin \cite{Danchin} and so on). The formula \eqref{Scaling} gives the first precise scaling laws for the barotropic compressible Navier-Stokes equations. Based on the scaling, one may formally assign the dimensions of space-time variables and unknowns as follows: Each space variable $x_j$ has dimension $+ 1$, the time variable $t$ has dimension $+ \frac{2\gamma}{\gamma + 1}$, the density function $\rho$ has dimension $- \frac{2}{\gamma + 1}$ and the velocity vector $u$ has dimension $- \frac{\gamma - 1}{\gamma + 1}$.
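The exponent bookkeeping behind Theorem \ref{ScalingI} can also be checked mechanically. The following minimal sketch (ours, not part of the original argument; it assumes Python with the sympy library) records the power of $\kappa$ acquired by each term of \eqref{CNS} under \eqref{Scaling} and verifies that the powers agree within each equation.
\begin{verbatim}
# Power of kappa acquired by each term of the system under the scaling:
# a time derivative contributes +1, each spatial derivative +(gamma+1)/(2 gamma).
import sympy as sp

g = sp.symbols('gamma', positive=True)
e_rho = 1/g                   # exponent of kappa in rho^kappa
e_u   = (g - 1)/(2*g)         # exponent of kappa in u^kappa
e_t   = sp.Integer(1)         # contribution of d/dt
e_x   = (g + 1)/(2*g)         # contribution of each d/dx_j

continuity = [e_rho + e_t,            # d_t rho
              e_rho + e_u + e_x]      # div(rho u)
momentum   = [e_rho + e_u + e_t,      # d_t(rho u)
              e_rho + 2*e_u + e_x,    # div(rho u (x) u)
              g*e_rho + e_x,          # grad(A rho^gamma)
              e_u + 2*e_x]            # Delta u and grad(div u)

print([sp.simplify(e - continuity[0]) for e in continuity])  # [0, 0]
print([sp.simplify(e - momentum[0]) for e in momentum])      # [0, 0, 0, 0]
\end{verbatim}
Since every term of each equation picks up the same power of $\kappa$, the scaled pair $(\rho^\kappa, u^\kappa)$ satisfies \eqref{CNS} with the same constants $A$, $\mu$ and $\lambda$.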
At a heuristic level, the scaling invariance given in \eqref{Scaling} suggests that local strong solutions with a certain blowup rate cannot develop true singularities. It also suggests a way to construct finite time singular solutions with special forms. We will confirm these intuitive ideas and we hope they could be useful elsewhere. We would like to mention that for type \MyRoman{1} solutions of the Navier-Stokes equations in the incompressible case, significant progress has been achieved recently for the axi-symmetric case \cite{CSTY, KNSS, CSYT, LZ}.
For the compressible Navier-Stokes equations \eqref{CNS}, there is an extensive literature on the existence and finite time singularity of solutions. In particular, in the absence of vacuum, a uniqueness result was obtained by Serrin \cite{Serrin} and the local existence results were given by Nash \cite{Nash} and Itaya \cite{Itaya}. Later on, non-vacuum small perturbations of a uniform non-vacuum constant state have been shown to exist globally in time and remain smooth in any space dimension \cite{MN, MN2, Hoff1, Hoff2, Hoff3, Danchin, CMZ}. Those works are based on the dissipative nature of the system. In the presence of vacuum, the system is strongly degenerate and the problem becomes extremely complicated. A breakthrough was made by Lions \cite{Lions} where global existence of weak solutions with finite energy was established for adiabatic number $\gamma > \frac{9}{5}$ (see also Feireisl, Novotn${\rm \acute{y}}$ and Petzeltov${\rm \acute{a}}$ \cite{Fei2} for the case of $\gamma > \frac{3}{2}$, and Jiang and Zhang \cite{JZ1, JZ2} for the case of $\gamma > 1$ under symmetry). In recent years, there have been numerous works on the existence and uniqueness of classical solutions in the presence of vacuum, see for instance, \cite{Fei1, Des, Solo, CCK-1, CCK-2, HLX, Luo} and the references therein. In particular, when the initial total energy is sufficiently small, the well-posedness holds globally in time \cite{HLX}. For the formation of singularities, Xin \cite{Xin} showed that in the case that the initial density has a compact support, any smooth solution with $u \in C^1([0, T] : H^s(\mathbb{R}^n))(s > [n/2]+2)$ to the Cauchy problem of the full compressible Navier-Stokes system without heat conduction blows up in finite time for any space dimension $n \geq 1$ (see also \cite{XY, HXin} for further results). Li, Wang and Xin \cite{LWX} even proved that the classical solution with finite energy does not exist in the inhomogeneous Sobolev space for any short time under some natural assumptions on initial data near the vacuum. For the compressible Euler equations, finite time singularities of solutions have been shown in \cite{Sideris} in the absence of vacuum and \cite{LDZ} in the presence of vacuum. There are also many works on blowup criteria for the compressible Navier-Stokes equations, for instance, see \cite{FJO, JW, HLX2, HZ1, HZ2, SWZ, WZ} and the references therein. Of all those works, \cite{Hoff2, HLX2, SWZ, CCK-2, WZ} are the most relevant for our work and will be discussed in detail below.
We first impose the boundary condition when $\Omega$ is a bounded domain with a smooth boundary \begin{equation}\nonumber u = 0\quad {\rm on}\ \partial\Omega. \end{equation} We assume that the initial data satisfy \begin{equation}\label{data} \begin{cases} \rho(0, \cdot) = \rho_0(\cdot) \in L^1 \cap H^s,\\[-4mm]\\ u_0 \in H_0^1\cap H^s, \end{cases} s \geq 3. \end{equation}
It is clear that \eqref{data} implies $\rho_0(\cdot) \in L^\infty$ and $\|\sqrt{\rho_0}u_0\|_{L^2} < \infty$. Here and in what follows we use $H^s$ and $W^{s, q}$ to denote the usual Sobolev spaces. For $f \in H_0^1$, if $\Omega$ is a smooth bounded domain, we mean that $f = 0$ on the boundary in the sense of trace. Due to \cite{CCK-2}, there exists a unique local strong solution to the compressible Navier-Stokes equations with initial data \eqref{data} under certain compatibility conditions on the initial data. Moreover, the local strong solution $(\rho, u)$ satisfies
\rho \in L^1\cap C([0, T), W^{1, q}),\quad u \in C([0, T), L^{6}),\quad u(t, \cdot)|_{\partial\Omega} = 0,\\[-4mm]\\
\nabla u \in C([0, T), H^1),\quad \int_0^t\|\nabla^2u\|_{L^q}^2ds < \infty\ (\forall\ t < T), \end{cases} 3 < q < 6, \end{equation} and the solution is regular at time $T$ if \begin{equation}\label{criterion}
\sup_{0 \leq t < T}(\|\nabla \rho\|_{L^q} + \|\nabla u\|_{L^2}) < \infty.
\end{equation}
We remark that throughout this paper, we say the solution $(\rho, u)$ is regular at time $T$ if the continuity in time holds at $t = T$ in \eqref{solu-1}.
For the local strong solution in \eqref{solu-1}, Huang and Xin \cite{HZ2} and Huang, Li and Xin \cite{HZ1} proved the regularity at time $T$ if
$$\lim_{t \nearrow T}\int_0^t\|\nabla u + (\nabla u)^T\|_{L^\infty}ds < \infty.$$ Under extra constraint $\lambda < 7\mu$, Sun, Wang and Zhang \cite{SWZ} (under $\rho_0 > 0$) and independently, Huang, Li and Xin \cite{HLX2} proved the regularity at time $T$ if
$$\sup_{0 \leq t < T}\|\rho\|_{L^\infty} < \infty.$$ Wen and Zhu \cite{WZ} weakened the constraint to be $\lambda < \frac{29\mu}{7}$ and improved the result in terms of $\|\rho\|_{L^q}$ for a sufficiently large $q$. We will use some ideas and calculations in \cite{Hoff2, HLX2, HZ2, SWZ, WZ} and follow the framework of \cite{HLX2, HZ2, SWZ}.
The second result of this paper is the following theorem which asserts that the solution in \eqref{solu-1} is smooth at time $T$ if it is type \MyRoman{1}. Due to the strong degeneracy of the parabolic nature of the system, the index $\kappa$ is not \textit{optimal} in our theorems. Based on the above scaling invariance \eqref{Scaling}, we conjecture that the optimal index might be $\frac{1}{\gamma}$, which seems a fantastic challenge to us.
\begin{thm}[No type \MyRoman{1} Singularities]\label{TypeI} Let $\gamma \geq 1$, $n = 3$, $A > 0$, $T > 0$, $K \leq 7$, $\lambda < K\mu$ be constants and satisfy the physical constraint \eqref{phycons}. Let $p \in (3, 6)$ be determined in Lemma \ref{elementarylem} and $\kappa$ satisfy \begin{equation}\label{Constraint} \kappa < \min\big\{\frac{1}{\gamma + 3},\quad \frac{1}{3\gamma},\quad \frac{p - 3}{p + 1}\big\}. \end{equation} Then $(\rho, u)$ in \eqref{solu-1} is regular at time $T$ provided that \begin{equation}\label{UPB-1}
|\nabla\cdot u(t, x)| \leq \frac{\kappa}{T - t},\quad {\rm as}\ t \nearrow T. \end{equation} If $K = 2$, $p$ can be taken to be 4. \end{thm}
The proof of Theorem \ref{TypeI} will be presented in Section 2 by assuming the validity of Theorem \ref{main}. \begin{rem} The constant $p$ determined in Lemma \ref{elementarylem} depends only on the physical viscosity constants $\lambda$ and $\mu$, but not on solutions and other constants. With extra complicated calculations, one may improve $K \leq 7$ as $K \leq \frac{29}{7}$ (see Wen and Zhu \cite{WZ} for that purpose). \end{rem}
\begin{rem} For $\gamma > 1$, it is clear that the following energy law holds for the solutions in Theorem \ref{TypeI} \begin{eqnarray}\label{basicen}
&&\frac{1}{2}\int \big(\rho|u|^2 + \frac{A}{\gamma - 1}\rho^\gamma\big)dx + \int_0^t\int\big(\mu|\nabla u|^2 + (\lambda + \mu)|\nabla\cdot u|^2\big)dxds\\\nonumber
&&= \frac{1}{2}\int \big(\rho_0|u_0|^2 + \frac{A}{\gamma - 1}\rho_0^\gamma\big)dx, \end{eqnarray} which will be frequently used throughout this paper. For instance, we often use the following $$\sqrt{\rho}u \in L^\infty(L^2_x),\quad \rho \in L^\infty(L^\gamma_x),\quad \nabla u \in L^2(L^2_x).$$ In the whole space case, vacuum might be very common since $\rho \in L^\gamma$. In the bounded domain case, vacuum is also allowed to appear. \end{rem}
\begin{rem} When $\gamma = 1$, which corresponds to the isothermal process, instead of using the basic energy law \eqref{basicen}, we will use the conservation of mass $$\int \rho(t, x)dx = \int \rho_0(x)dx$$ and the following alternative \begin{eqnarray}\label{basicen-1}
&&\frac{1}{2}\int\rho|u|^2 dx + \mu\int_0^t\|\nabla u\|_{L^2}^2ds + \frac{\lambda + \mu}{2}\int_0^t\|\nabla\cdot u\|_{L^2}^2ds\\\nonumber
&&= \frac{1}{2}\int\rho_0|u_0|^2 dx + A\int_0^t\int \rho\nabla\cdot u dxds - \frac{\lambda + \mu}{2}\int_0^t\|\nabla\cdot u\|_{L^2}^2ds\\\nonumber
&&\leq \frac{A^2\|\rho\|_{L^1}}{2(\lambda + \mu)}\int_0^t\|\rho\|_{L^\infty}ds. \end{eqnarray} Under the constraint on $\rho$ in Theorem \ref{main}, the right hand side of \eqref{basicen-1} is uniformly bounded on $t \in [0, T)$. Hence the case of $\gamma = 1$ can be treated exactly in the same way as $\gamma > 1$, with even a simpler calculation. So in what follows, we will only focus on the case when $\gamma > 1$. \end{rem}
To obtain Theorem \ref{TypeI}, we will prove the following stronger result, which implies that certain possible concentration of density is removable and will not lead to the formation of finite-time singularities.
\begin{thm}\label{main} Let $M > 0$ and all other constants be given in Theorem \ref{TypeI} and satisfy the same constraints in Theorem \ref{TypeI}. Then $(\rho, u)$ in \eqref{solu-1} is regular at time $T$ provided that \begin{equation}\label{UPB} \rho(t, x) \leq\frac{M}{(T - t)^{\kappa}},\quad 0 \leq t < T. \end{equation} If $K = 2$, $p$ can be taken to be 4. \end{thm}
The proof of Theorem \ref{main} will be presented in Section 3, Section 4 and Section 5.
The third result of this article is to construct the following explicit solution to the compressible Euler and Navier-Stokes equations which blows up at any given finite time $T > 0$.
\begin{thm}\label{Exam} For any $T > 0$, the following solution pair
$$\rho(t, x) = C_n^{\frac{1}{\gamma - 1}}\Big(\frac{|x|}{T - t}\Big)^{\frac{2}{\gamma - 1}},\quad u(t, x) = - \frac{2x}{[n(\gamma - 1) + 2](T - t)}$$ solves the compressible Navier-Stokes equations \eqref{CNS} for all $\gamma > 1$, where constants $C_n$ are given by \begin{equation}\label{Const} C_2 = \frac{(\gamma - 1)^2}{2A\gamma^3},\quad C_3 = \frac{3(\gamma - 1)^2}{A\gamma(3\gamma - 1)^2}. \end{equation} \end{thm}
\begin{rem}\nonumber For the physical adiabatic number $1 < \gamma < 3$, it is easy to check that there holds \begin{equation}\nonumber \rho \in \begin{cases} C^\infty([0, T), C^{1 + [\alpha], \alpha - [\alpha]}(\mathbb{R}^n)),\quad {\rm if}\ \alpha > 0\ {\rm is\ not\ an\ integer},\\[-4mm]\\ C^\infty([0, T), C^{\alpha + 1}(\mathbb{R}^n)), \quad {\rm if}\ \alpha > 0\ {\rm is\ an\ odd\ integer},\\[-4mm]\\ C^\infty([0, T), C^{\alpha, 1}(\mathbb{R}^n)), \quad {\rm if}\ \alpha > 0\ {\rm is\ an\ even\ integer}. \end{cases} \end{equation} Here $\alpha = \frac{3 - \gamma}{\gamma - 1}$. \end{rem}
The construction of the explicit blowup example is inspired by the above dimension analysis and the scaling invariance \eqref{Scaling}. Indeed, we consider self-similar solutions of the following form \begin{equation}\label{SSS} \begin{cases} \rho(t, x) = \frac{1}{(T - t)^{\frac{1}{\gamma}}}\Theta(\frac{x}{(T - t)^{\frac{\gamma + 1}{2\gamma}}}),\\[-4mm]\\ u(t, x) = \frac{1}{(T - t)^{\frac{\gamma - 1}{2\gamma}}}V(\frac{x}{(T - t)^{\frac{\gamma + 1}{2\gamma}}}). \end{cases} \end{equation} One can derive that $(\Theta, V)$ are governed by \eqref{SCNS}. Note that \eqref{SCNS} is still a complicated system of nonlinear differential equations and in general hard to solve. Fortunately, the special structure of \eqref{SCNS} allows us to construct explicit solutions to it, which gives the explicit solution in Theorem \ref{Exam}. Details are presented in Section 6. We remark that the solution in Theorem \ref{Exam} is constructed in H${\rm \ddot{o}}$lder spaces. It is an interesting question to construct singular solutions to \eqref{SCNS} which live in Sobolev spaces before the blowup time.
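The explicit solution of Theorem \ref{Exam} can also be confirmed by a direct symbolic computation. The sketch below is ours and is only meant as a sanity check; it assumes Python with the sympy library and, to keep all powers polynomial, takes the sample values $n = 3$ and $\gamma = 2$. Since $u$ is linear in $x$, the viscous terms $\mu\Delta u$ and $(\lambda + \mu)\nabla\nabla\cdot u$ vanish identically, so only the Euler part of the momentum equation needs to be checked.
\begin{verbatim}
# Symbolic check of the explicit blowup solution for n = 3, gamma = 2.
import sympy as sp

t, T, A = sp.symbols('t T A', positive=True)
x = sp.symbols('x1 x2 x3', real=True)
gamma = sp.Integer(2)

C3 = 3*(gamma - 1)**2/(A*gamma*(3*gamma - 1)**2)       # the constant C_3
r = sp.sqrt(sum(xi**2 for xi in x))
rho = C3**(1/(gamma - 1))*(r/(T - t))**(2/(gamma - 1))
u = [-2*xi/((3*(gamma - 1) + 2)*(T - t)) for xi in x]
P = A*rho**gamma

continuity = sp.diff(rho, t) + sum(sp.diff(rho*u[i], x[i]) for i in range(3))

# u is linear in x, so Delta(u) = 0 and grad(div u) = 0; it remains to check
# the Euler part of the momentum equation.
momentum = [sp.diff(rho*u[i], t)
            + sum(sp.diff(rho*u[i]*u[j], x[j]) for j in range(3))
            + sp.diff(P, x[i]) for i in range(3)]

print(sp.simplify(continuity))               # 0
print([sp.simplify(m) for m in momentum])    # [0, 0, 0]
\end{verbatim}
The same cancellation can be carried out by hand for a general adiabatic exponent $\gamma > 1$ and for $n = 2$ with the constant $C_2$, at the price of fractional powers of $|x|$.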
\section{Preliminaries and Proof of Theorem \ref{TypeI}}
Let us first give the proof of Theorem \ref{TypeI}, by assuming the validity of Theorem \ref{main}. The proof of Theorem \ref{main} will be postulated to Section 3, Section 4 and Section 5.
\begin{proof}[Proof of Theorem \ref{TypeI}]
Without loss of generality, by using \eqref{UPB-1}, one may assume that
$$|\nabla\cdot u(t, \cdot)| \leq \frac{\kappa}{T - t},\quad 0 \leq t < T.$$ By the continuity equation in \eqref{CNS}, one has \begin{eqnarray}\nonumber
\rho(t, x) &\leq& \|\rho_0\|_{L^\infty}e^{\int_{0}^t\|\nabla\cdot u(s, \cdot)\|_{L^\infty}ds}\\\nonumber
&\leq& \frac{\|\rho_0\|_{L^\infty}T^\kappa}{(T - t)^\kappa},\quad \forall\ 0 \leq t < T. \end{eqnarray} Then the proof of Theorem \ref{TypeI} is a straightforward consequence of Theorem \ref{main}. \end{proof}
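In the last step of the above proof, the exponential factor is computed from $\int_0^t\frac{\kappa}{T-s}\,ds=\kappa\ln\frac{T}{T-t}$, so that it equals $\big(\frac{T}{T-t}\big)^\kappa=\frac{T^\kappa}{(T-t)^\kappa}$. A one-line symbolic confirmation of the underlying antiderivative (ours, assuming Python with the sympy library) is:
\begin{verbatim}
# d/ds [kappa*log(T/(T-s))] = kappa/(T-s), and the bracket vanishes at s = 0,
# hence exp( int_0^t kappa/(T-s) ds ) = (T/(T-t))**kappa.
import sympy as sp

s, t, T, kappa = sp.symbols('s t T kappa', positive=True)
F = kappa*sp.log(T/(T - s))
print(sp.simplify(sp.diff(F, s) - kappa/(T - s)))   # 0
print(F.subs(s, 0), F.subs(s, t))                   # 0  kappa*log(T/(T - t))
\end{verbatim}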
Let $v(t, x)$ be the solution to \begin{equation}\nonumber Lv = A\nabla\rho^\gamma, \end{equation} where the elliptic operator $L$ is defined by $$Lu = \mu\Delta u + (\lambda + \mu)\nabla\nabla\cdot u.$$ In the case of a smooth bounded domain, we also impose the boundary condition $$v = 0\ \ {\rm on}\ \partial\Omega.$$ By standard elliptic estimates (see, for instance, \cite{SWZ} and the references therein), the assumption on $\rho$ in Theorem \ref{main} and the interpolation inequality, one has \begin{lem}\label{lem3} Under the assumption of Theorem \ref{main}, there hold \begin{equation}\label{4} \begin{cases}
\|\nabla v(t)\|_{L^q} \lesssim \|\rho^\gamma\|_{L^q} \lesssim \|\rho^\gamma\|_{L^1}^{\frac{1}{q}}(T - t)^{- \kappa\gamma(1 - \frac{1}{q})},\quad 1 < q < \infty,\\[-4mm]\\
\|\nabla v(t)\|_{{\rm BMO}} \lesssim \|\rho^\gamma\|_{L^\infty} \lesssim (T - t)^{- \kappa\gamma}, \end{cases} \end{equation} and \begin{eqnarray}\label{5}
\|\nabla^2 v(t)\|_{L^q} \lesssim \|\rho\|_{L^\infty}^{\gamma - 1}\|\nabla\rho\|_{L^q} \lesssim (T - t)^{- \kappa(\gamma - 1)}\|\nabla\rho\|_{L^q}. \end{eqnarray} \end{lem}
The next lemma is well-known and can be found in, for instance, \cite{BW, KT, SWZ}. \begin{lem}\label{lem1} Let $v$ be defined above and $q > 3$. Then \begin{eqnarray}\nonumber
\|\nabla v\|_{L^\infty} \lesssim 1 + \|\rho\|_{L^\infty}^\gamma\ln\big(e + \|\rho\|_{L^\infty}^{\gamma - 1}\|\nabla\rho\|_{L^q}\big). \end{eqnarray} \end{lem} \begin{proof} Indeed, by \cite{BW, KT, SWZ}, one has \begin{eqnarray}\nonumber
\|\nabla v\|_{L^\infty} \lesssim 1 + \|\nabla v\|_{L^2} + \|\nabla v\|_{{\rm BMO}}\ln\big(e + \|\nabla^2 v\|_{L^q}\big). \end{eqnarray} Then \eqref{4}, \eqref{5} and the basic energy law \eqref{basicen} imply that \begin{eqnarray}\nonumber
&&\|\nabla v\|_{L^2} + \|\nabla v\|_{{\rm BMO}}\ln\big(e + \|\nabla^2 v\|_{L^q}\big)\\\nonumber
&&\lesssim \|\rho\|_{L^\infty}^{\frac{\gamma}{2}} + \|\rho\|_{L^\infty}^\gamma\ln\big(e + \|\nabla\rho^\gamma\|_{L^q}\big)\\\nonumber
&&\lesssim \|\rho\|_{L^\infty}^\gamma\ln\big(e + \|\rho\|_{L^\infty}^{\gamma - 1}\|\nabla\rho\|_{L^q}\big). \end{eqnarray} \end{proof}
At last, let us give an elementary lemma. \begin{lem}\label{elementarylem} Let $\lambda < K\mu$, $K \leq 7$ and the physical constraint \eqref{phycons} be satisfied. Then there exist $p \in (3, 6)$ and a positive constant $c_0$ such that, with $R = p - 2$, the quadratic form $$f(X, Y) = 4\mu(X^2 + Y^2) - (\lambda + \mu)R^2Y^2 + 4\mu R Y^2$$ satisfies $f(X, Y) \geq c_0(X^2 + Y^2)$. Moreover, if $K = 2$, then $p$ can be taken to be 4. \end{lem} \begin{proof} With a slight abuse of the notation $f$, we consider the following quadratic polynomial for $1 < R < 4$: \begin{eqnarray}\nonumber f(R) = 4\mu - (\lambda + \mu)R^2 + 4\mu R. \end{eqnarray} Note that $\lambda + \mu > 0$ under \eqref{phycons}. It is clear that $f(2) = 4(2\mu - \lambda)$ and $f(1) = 7\mu - \lambda$. By an elementary analysis, one can conclude that \begin{eqnarray}\nonumber f(1+) > 0, &&{\rm if}\ \lambda < K\mu,\quad K\leq 7,\\\nonumber f(2) > 0, &&{\rm if}\ \lambda < K \mu,\quad K \leq 2. \end{eqnarray} Hence, for all $\lambda < K\mu$, $K \leq 7$ satisfying \eqref{phycons}, there exists $p \in (3, 6)$, which might be very close to $3$, such that \begin{eqnarray}\nonumber f(p - 2) > 0. \end{eqnarray} Moreover, if $K \leq 2$ and \eqref{phycons} holds, then $p$ can be taken to be 4. Then the lemma is proved, with $c_0 = \min\{4\mu, f(p - 2)\}$, by rewriting $f(X, Y)$ as $$f(X, Y) = 4\mu X^2 + Y^2f(R).$$ \end{proof}
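\begin{rem} For instance, if $K = 2$ and \eqref{phycons} holds, then taking $p = 4$, so that $R = p - 2 = 2$, the decomposition in the above proof gives the explicit bound $$f(X, Y) = 4\mu X^2 + 4(2\mu - \lambda)Y^2 \geq 4\min\{\mu,\, 2\mu - \lambda\}(X^2 + Y^2),$$ so that one may take $c_0 = 4\min\{\mu, 2\mu - \lambda\}$ in this case. \end{rem}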
\section{Energy Estimates}
In this section we follow the framework in \cite{SWZ, HZ2} and prove two energy estimates; computations that are identical to those there are omitted. The first estimate gives a bound, uniform in time, of the $L^1$ norm of $\rho |u|^p$ under the assumptions in Theorem \ref{main}. The second one is an $L^2$-type energy estimate for $\nabla(u - v)$, together with a space-time estimate of $\nabla^2(u - v)$. We then give a corollary which asserts that the $L^2$ norm of $\nabla u$ may grow in time at the rate $(T - t)^{- \frac{\kappa\gamma}{2}}$, while $\int_0^T\|\nabla u\|_{L^6}^{1 + \delta}dt$ remains bounded for some $\delta \in (0, 1)$. From now on we focus on the whole space case. The bounded domain case with smooth boundary can be treated similarly without any essential difficulty, in view of the homogeneous Dirichlet boundary conditions for $u$ and $v$.
Let us first prove the following lemma. \begin{lem}\label{energyes} Under the assumptions in Theorem \ref{main}, there holds
$$\big\|\rho|u|^{p}\big\|_{L^1} + \int_0^T\big\||u|^{\frac{p}{2} - 1}|\nabla u|\big\|_{L^2}^2dt \lesssim 1.$$ \end{lem} \begin{proof} Standard estimates as in \cite{Hoff2, HLX, SWZ} yield \begin{eqnarray}\nonumber
&&\frac{1}{p}\frac{d}{dt}\int \rho |u|^p dx \leq A\int \rho^\gamma\nabla\cdot (|u|^{p - 2}u) dx\\\nonumber
&&\quad -\ \int |u|^{p - 2}\big(\mu|\nabla u|^2 + (p - 2)[\mu - (\lambda + \mu)\frac{p - 2}{4}]|\nabla |u||^2\big) dx. \end{eqnarray}
Set $X^2 = |\nabla u|^2 - |\nabla |u||^2$ and $Y^2 = |\nabla |u||^2$. Then by Lemma \ref{elementarylem}, one has
$$\int |u|^{p - 2}\big(\mu|\nabla u|^2 + (p - 2)[\mu - (\lambda + \mu)\frac{p - 2}{4}]|\nabla |u||^2\big) dx \geq c_0\int |u|^{p - 2}|\nabla u|^2 dx$$ and thus \begin{eqnarray}\label{1}
\frac{d}{dt}\int \rho |u|^p dx + pc_0\int |u|^{p - 2}|\nabla u|^2 dx \leq Ap\int \rho^\gamma\nabla\cdot (|u|^{p - 2}u) dx. \end{eqnarray}
Let us treat the right hand side in \eqref{1} as follows: \begin{eqnarray}\nonumber
&&\int \rho^\gamma\nabla\cdot (|u|^{p - 2}u) dx\\\nonumber
&&\lesssim \int \rho^{\gamma + \frac{1}{p} - \frac{1}{2}}(\rho|u|^{p})^{\frac{1}{2} - \frac{1}{p}}(|u|^{\frac{p}{2} - 1}|\nabla u|)dx\\\nonumber
&&\lesssim \|\rho\|_{L^{p\gamma + 1 - \frac{p}{2}}}^{\gamma + \frac{1}{p} - \frac{1}{2}}\big\|\rho|u|^{p}\big\|_{L^1}^{\frac{1}{2} - \frac{1}{p}}\big\||u|^{\frac{p}{2} - 1}|\nabla u|\big\|_{L^2}. \end{eqnarray} Inserting the above into \eqref{1} and using Young's inequality, one has \begin{eqnarray}\label{9}
\frac{d}{dt}\int \rho |u|^p dx + pc_0\int |u|^{p - 2}|\nabla u|^2 dx \lesssim \|\rho\|_{L^{p\gamma + 1 - \frac{p}{2}}}^{2\gamma + \frac{2}{p} - 1}\big\|\rho|u|^{p}\big\|_{L^1}^{1 - \frac{2}{p}}. \end{eqnarray} It follows from \eqref{UPB} that for $\frac{T}{2} \leq t < T$ \begin{eqnarray}\nonumber
\|\rho\|_{L^{p\gamma + 1 - \frac{p}{2}}}^{2\gamma + \frac{2}{p} - 1} &\lesssim& \|\rho^\gamma\|_{L^1}^{\frac{2}{p}}\|\rho\|_{L^\infty}^{\frac{2}{p}[(p - 1)\gamma + 1 - \frac{p}{2}]}\\\nonumber &\lesssim& (T - t)^{- \frac{2}{p}[(p - 1)\gamma + 1 - \frac{p}{2}]\kappa}. \end{eqnarray} Due to \eqref{Constraint}, one has \begin{equation}\nonumber \frac{2}{p}[(p - 1)\gamma + 1 - \frac{p}{2}]\kappa < \begin{cases}\frac{\gamma + (1 - \frac{2}{p})(\gamma - 1)}{\gamma + 3} < \frac{\gamma + \frac{1}{2}}{\gamma + 3},\quad 1 \leq \gamma \leq \frac{3}{2}, \\[-4mm]\\ \frac{\gamma + (1 - \frac{2}{p})(\gamma - 1)}{3\gamma} < \frac{2\gamma - 1}{3\gamma},\quad \gamma > \frac{3}{2}. \end{cases} \end{equation} Hence, $(T - t)^{- \frac{2}{p}[(p - 1)\gamma + 1 - \frac{p}{2}]\kappa}$ is integrable on $[0, T]$ and \begin{eqnarray}\label{2}
\big\|\rho|u|^{p}\big\|_{L^1} \lesssim 1. \end{eqnarray} Using \eqref{2} and integrating \eqref{9} with respect to time, one further obtains
$$\int_0^T\big\||u|^{\frac{p}{2} - 1}|\nabla u|\big\|_{L^2}^2dt \lesssim 1.$$ The proof of the lemma is completed. \end{proof}
Recall the important quantity $$\omega = u - v,$$ whose divergence is referred to as the effective viscous flux in the literature (see \cite{Hoff2} for instance). Using \eqref{4} and Lemma \ref{energyes}, we have
\begin{lem}\label{fluxlem} Let \begin{equation}\nonumber \delta = \begin{cases} \frac{\gamma + 2}{\gamma + 4} ,\quad 1 \leq \gamma \leq \frac{3}{2},\\[-4mm]\\ \frac{3\gamma - 1}{3\gamma + 1},\quad \gamma > \frac{3}{2}. \end{cases} \end{equation} Under the assumptions of Theorem \ref{main}, it holds that
$$\sup_{0 \leq t < T}\|\nabla \omega\|_{L^2} \lesssim 1,\quad \int_0^T\|\sqrt{\rho}\omega_t\|_{L^2}^2dt \lesssim 1,\quad \int_0^T\|\nabla^2\omega\|_{L^2}^{1 + \delta}dt \lesssim 1. $$ \end{lem} \begin{proof} Note that \begin{equation}\nonumber \begin{cases} \rho\partial_t\omega - L\omega = \rho F,\\[-4mm]\\
\omega(t, \cdot)|_{\partial\Omega} = 0, \end{cases} \end{equation} where $$F = - u\cdot\nabla u - A\partial_tL^{-1}\nabla\rho^\gamma.$$ A straightforward energy estimate gives that \begin{eqnarray}\label{6}
&&\frac{1}{2}\frac{d}{dt}\int \big(\mu|\nabla \omega|^2 + (\lambda + \mu)|\nabla\cdot \omega|^2\big) dx + \int \rho|\partial_t\omega|^2\\\nonumber
&&= \int \rho F\omega_tdx \leq \frac{1}{2}\int \rho|\partial_t\omega|^2dx + \frac{1}{2}\int \rho|F|^2dx. \end{eqnarray} Clearly, one has \begin{eqnarray}\nonumber
\frac{1}{2}\int \rho|F|^2dx \leq \int \rho|u\cdot\nabla u|^2dx + A^2\int \rho|\partial_tL^{-1}\nabla\rho^\gamma|^2dx. \end{eqnarray}
For $1 \leq \gamma \leq \frac{3}{2}$, using interpolation inequalities, \eqref{4} and \eqref{UPB}, we can first estimate that \begin{eqnarray}\label{7}
&&\int \rho|\partial_tL^{-1}\nabla\rho^\gamma|^2dx\\\nonumber
&&= \int \rho |L^{-1}\nabla\nabla\cdot(\rho^\gamma u) + (\gamma - 1)L^{-1}\nabla(\rho^\gamma\nabla\cdot u)|^2 dx\\\nonumber
&&\lesssim \|\rho\|_{L^\infty} \|\rho^\gamma u\|_{L^2}^2 + \|\rho\|_{L^{\frac{3}{2}}}\|\rho^\gamma\nabla\cdot u\|_{L^2}^2 \\\nonumber
&&\lesssim \|\sqrt{\rho}u\|_{L^2}^{2}\|\rho\|_{L^\infty}^{2\gamma} +
\|\rho\|_{L^\gamma}^{\frac{2\gamma}{3}}\|\rho\|_{L^\infty}^{\frac{4\gamma}{3} + 1}\|\nabla u\|_{L^2}^2\\\nonumber &&\lesssim (T - t)^{- 2\kappa \gamma} + (T - t)^{- \kappa(\frac{7\gamma}{3} + 1)}\\\nonumber &&\quad +\
(T - t)^{- \kappa(\frac{4\gamma}{3} + 1)}\|\nabla \omega\|_{L^2}^2. \end{eqnarray} For $\gamma > \frac{3}{2}$, one simply has \begin{eqnarray}\label{7-1}
&&\int \rho|\partial_tL^{-1}\nabla\rho^\gamma|^2dx\\\nonumber
&&\lesssim \|\sqrt{\rho}u\|_{L^2}^{2}\|\rho\|_{L^\infty}^{2\gamma} +
\|\rho\|_{L^{\frac{3}{2}}}\|\rho\|_{L^\infty}^{2\gamma}\|\nabla u\|_{L^2}^2\\\nonumber &&\lesssim (T - t)^{- 2\kappa \gamma} + (T - t)^{- 3\kappa \gamma} +
(T - t)^{- 2\kappa \gamma}\|\nabla \omega\|_{L^2}^2. \end{eqnarray}
Next, the term $\int \rho|u\cdot\nabla u|^2dx$ in the bound above can be estimated as follows: \begin{eqnarray}\nonumber
\int \rho|u\cdot\nabla u|^2dx &\lesssim& \|\rho\|_{L^\infty}^{1 - \frac{2}{p}}\|\rho|u|^p\|_{L^1}^{\frac{2}{p}}\|\nabla u\|_{L^{\frac{2p}{p - 2}}}^2\\\nonumber
&\lesssim& (T - t)^{- \kappa(1 - \frac{2}{p})}\|\nabla v\|_{L^{\frac{2p}{p - 2}}}^2 + (T - t)^{- \kappa(1 - \frac{2}{p})}\|\nabla\omega\|_{L^2}^{2 - \frac{6}{p}}\|\nabla\omega\|_{L^6}^{ \frac{6}{p}}\\\nonumber
&\lesssim& (T - t)^{- \kappa(1 - \frac{2}{p})- \kappa\gamma(1 + \frac{2}{p})} + (T - t)^{- \kappa(1 - \frac{2}{p})}\|\nabla\omega\|_{L^2}^{2 - \frac{6}{p}}\|\nabla^2\omega\|_{L^2}^{ \frac{6}{p}}. \end{eqnarray} Here one has used interpolation inequality, \eqref{4} and Lemma \ref{energyes}. On the other hand, one also has \begin{eqnarray}\nonumber
\|\nabla^2\omega\|_{L^2} &\lesssim& \|\rho\omega_t\|_{L^2} + \|\rho F\|_{L^2}\\\nonumber
&\lesssim& (T - t)^{-\frac{\kappa}{2}}\|\sqrt{\rho}\omega_t\|_{L^2} + (T - t)^{-\frac{\kappa}{2}}\|\sqrt{\rho} F\|_{L^2}. \end{eqnarray} Hence, \begin{eqnarray}\label{8}
&&\int \rho|u\cdot\nabla u|^2dx\lesssim (T - t)^{- 2\kappa - \kappa(\gamma - 1)(1 + \frac{2}{p})}\\\nonumber
&&\quad +\ (T - t)^{- \kappa(1 + \frac{1}{p})}\|\nabla\omega\|_{L^2}^{2 - \frac{6}{p}}(\|\sqrt{\rho}\omega_t\|_{L^2} + \|\sqrt{\rho} F\|_{L^2})^{ \frac{6}{p}}. \end{eqnarray}
By \eqref{7}, \eqref{7-1}, \eqref{8} and using Young's inequality, we have \begin{eqnarray}\label{8-1}
\int \rho|F|^2dx &\leq& \frac{1}{2}\|\sqrt{\rho}\omega_t\|_{L^2}^2 + (T - t)^{- \kappa(1 + \frac{1}{p})\frac{p}{p - 3}}\|\nabla \omega\|_{L^2}^2\\\nonumber
&&+\ \Big\{(T - t)^{- \kappa(\frac{7\gamma}{3} + 1)} + (T - t)^{- \kappa(\frac{4\gamma}{3} + 1)}\|\nabla \omega\|_{L^2}^2\Big\}1_{1 \leq \gamma \leq \frac{3}{2}}\\\nonumber
&&+\ \Big\{(T - t)^{- 3\gamma\kappa} + (T - t)^{- 2\gamma\kappa}\|\nabla \omega\|_{L^2}^2\Big\}1_{\gamma > \frac{3}{2}}. \end{eqnarray} Inserting the above into \eqref{6}, and using Gronwall's inequality and the integrability condition \begin{equation}\nonumber \begin{cases} \kappa(1 + \frac{1}{p})\frac{p}{p - 3} < 1,\\[-4mm]\\ \kappa\big(\frac{7\gamma}{3} + 1\big) < \frac{7\gamma + 3}{3(\gamma + 3)} < 1,\quad {\rm if}\ 1 \leq \gamma \leq \frac{3}{2},\\[-4mm]\\ 3\gamma\kappa < 1,\quad {\rm if}\ \gamma > \frac{3}{2}, \end{cases} \end{equation} one has
$$\sup_{0 \leq t < T}\|\nabla \omega\|_{L^2} \lesssim 1,\quad \int_0^T\|\sqrt{\rho}\omega_t\|_{L^2}^2dt \lesssim 1. $$ Consequently, using the above estimates and by revisiting \eqref{8-1}, one has \begin{eqnarray}\nonumber
\int_0^T\int \rho|F|^2dxdt \lesssim 1, \end{eqnarray} which in turn gives that \begin{eqnarray}\nonumber
\int_0^T\|\nabla^2\omega\|_{L^2}^{1 + \delta}dt &\lesssim& \int_0^T\|\rho\|_{L^\infty}^{\frac{1 + \delta}{2}}\big(\|\sqrt{\rho}\omega_t\|_{L^2} + \|\sqrt{\rho} F\|_{L^2}\big)^{{1 + \delta}}dt\\\nonumber &\lesssim& 1, \end{eqnarray} where one has used the Cauchy inequality and the integrability condition \begin{equation}\nonumber \frac{1 + \delta}{1 - \delta}\kappa < \begin{cases} \frac{1 + \frac{\gamma + 2}{\gamma + 4}}{1 - \frac{\gamma + 2}{\gamma + 4}}\frac{1}{\gamma + 3} = 1,\quad {\rm if}\ 1 \leq \gamma \leq \frac{3}{2},\\[-4mm]\\ \frac{1 + \frac{3\gamma - 1}{3\gamma + 1}}{1 - \frac{3\gamma - 1}{3\gamma + 1}}\frac{1}{3\gamma} = 1,\quad {\rm if}\ \gamma > \frac{3}{2}. \end{cases} \end{equation} This finishes the proof of the lemma. \end{proof}
\begin{rem} It should be emphasized here that the conclusion on the space-time estimate may fail for $\delta = 1$. However, the scaling heuristics indicate that an estimate of this kind for some $\delta > 0$ is crucial (although $\kappa$ may have to be smaller if $\delta$ is smaller). \end{rem}
Combining Lemma \ref{fluxlem} with Lemma \ref{lem3}, we obtain the following straightforward consequence. \begin{cor}\label{cor} Let $\delta$ be given in Lemma \ref{fluxlem} and all other assumptions be the same as in Theorem \ref{main}. Then it holds that
$$\|\nabla u(t, \cdot)\|_{L^2}^2 \lesssim 1 + (T - t)^{-\kappa\gamma},\quad \int_0^T\|\nabla u\|_{L^6}^{1 + \delta}dt \lesssim 1. $$ \end{cor} \begin{proof} Indeed, one has \begin{eqnarray}\nonumber
&&\|\nabla u(t, \cdot)\|_{L^2}^2 \lesssim \|\nabla v(t, \cdot)\|_{L^2}^2 + \|\nabla \omega(t, \cdot)\|_{L^2}^2\\\nonumber
&&\lesssim \|\rho^\gamma\|_{L^2}^2 + \|\nabla\omega\|_{L^2}^2 \lesssim (T - t)^{-\gamma\kappa} + \|\nabla\omega\|_{L^2}^2\\\nonumber &&\quad\quad\quad \lesssim 1 + (T - t)^{-\gamma\kappa} \end{eqnarray} and \begin{eqnarray}\nonumber
&&\int_0^T\|\nabla u\|_{L^6}^{1 + \delta}dt \lesssim \int_0^T\|\nabla v\|_{L^6}^{1 + \delta}dt + \int_0^T\|\nabla \omega\|_{L^6}^{1 + \delta}dt\\\nonumber
&&\lesssim \int_0^T\|\rho^\gamma\|_{L^6}^{1 + \delta}dt + \int_0^T\|\nabla \omega\|_{L^6}^{1 + \delta}dt \lesssim 1. \end{eqnarray} Here one has used the fact that $\frac{5\kappa\gamma(1 + \delta)}{6} < 1$. \end{proof}
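\begin{rem} For the reader's convenience, we record why $\frac{5\kappa\gamma(1 + \delta)}{6} < 1$. By the choice of $\delta$ in Lemma \ref{fluxlem} and the conditions on $\kappa$ used repeatedly above (namely $\kappa < \frac{1}{\gamma + 3}$ for $1 \leq \gamma \leq \frac{3}{2}$ and $\kappa < \frac{1}{3\gamma}$ for $\gamma > \frac{3}{2}$; cf. \eqref{Constraint}), one has \begin{equation}\nonumber \frac{5\kappa\gamma(1 + \delta)}{6} < \begin{cases} \frac{5\gamma}{6(\gamma + 3)}\cdot\frac{2(\gamma + 3)}{\gamma + 4} = \frac{5\gamma}{3(\gamma + 4)} < 1,\quad {\rm if}\ 1 \leq \gamma \leq \frac{3}{2},\\[-4mm]\\ \frac{5}{18}\cdot\frac{6\gamma}{3\gamma + 1} = \frac{5\gamma}{3(3\gamma + 1)} < 1,\quad {\rm if}\ \gamma > \frac{3}{2}. \end{cases} \end{equation} \end{rem}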
\section{Further Estimate for the Effective Viscous Flux}
In this section we estimate the higher order regularity of the quantity $\omega$. We follow the calculations in \cite{Hoff1}, so some similar calculations are omitted below. Denote the material derivative of $f$ by $$\dot{f} = \partial_t f + u\cdot\nabla f.$$ We have
\begin{lem}\label{lem4} Let all assumptions in Theorem \ref{main} be true and $\delta$ be given in Lemma \ref{fluxlem}. Then one has \begin{eqnarray}\label{53}
\int \rho|\dot{u}|^2dx + \int_0^t\|\nabla \dot{u}\|_{L^2}^2ds \lesssim 1. \end{eqnarray} \end{lem}
\begin{proof} Starting from $$\rho\dot{u} + A\nabla \rho^\gamma = Lu,$$ one can derive that (see \cite{Hoff1}) \begin{eqnarray}\label{51}
&&\frac{d}{dt}\int \rho|\dot{u}|^2dx + \|\nabla \dot{u}\|_{L^2}^2 + \|\nabla \cdot \dot{u}\|_{L^2}^2\\\nonumber
&&\leq C\|\nabla u\|_{L^4}^4 + 2A \int\big[\partial_t\rho^\gamma \nabla\cdot\dot{u} + (u \cdot\nabla \dot{u})\cdot \nabla\rho^\gamma\big] dx. \end{eqnarray} First, note that \begin{eqnarray}\nonumber &&\int\partial_t\rho^\gamma \nabla\cdot\dot{u} + (u \cdot\nabla \dot{u})\cdot \nabla\rho^\gamma dx\\\nonumber &&= - \int\big[\gamma\rho^\gamma(\nabla\cdot u)\nabla\cdot\dot{u} + (u\cdot\nabla\rho^\gamma) \nabla\cdot\dot{u} - (u \cdot\nabla \dot{u})\cdot \nabla\rho^\gamma\big] dx\\\nonumber &&= - \int\big[(\gamma - 1)\rho^\gamma(\nabla\cdot u)\nabla\cdot\dot{u} + \rho^\gamma {\rm tr}(\nabla\dot{u}\nabla u)\big] dx\\\nonumber
&&\leq \|\rho^\gamma\|_{L^1}\|\rho\|_{L^\infty}^{3\gamma} + A^2\|\nabla u\|_{L^4}^4 + \frac{1}{4A}\|\nabla\dot{u}\|_{L^2}^2. \end{eqnarray} Inserting the above into \eqref{51} leads to \begin{eqnarray}\label{52}
&&\frac{d}{dt}\int \rho|\dot{u}|^2dx + \|\nabla \dot{u}\|_{L^2}^2 + \|\nabla \cdot \dot{u}\|_{L^2}^2\\\nonumber
&&\leq C\|\nabla u\|_{L^4}^4 + C\|\rho^\gamma\|_{L^1}\|\rho\|_{L^\infty}^{3\gamma}\\\nonumber
&&\lesssim \|\nabla u\|_{L^4}^4 + (T - t)^{- 3\gamma\kappa}, \end{eqnarray} which, by integration with respect to time, gives that \begin{eqnarray}\label{52-1}
\int \rho|\dot{u}|^2dx + \int_0^t\|\nabla \dot{u}\|_{L^2}^2ds \lesssim \int_0^t\|\nabla u\|_{L^4}^4ds + 1. \end{eqnarray}
Now by the interpolation inequality and Corollary \ref{cor}, one has \begin{eqnarray}\nonumber
\|\nabla u\|_{L^4}^4 &\leq& \|\nabla u\|_{L^2}\|\nabla u\|_{L^6}^3\\\nonumber
&\lesssim& \|\nabla u\|_{L^2}\|\nabla u\|_{L^6}\big(\|\nabla v\|_{L^6} + \|\nabla \omega\|_{L^6}\big)^2\\\nonumber
&\lesssim& \|\nabla u\|_{L^2}\|\nabla u\|_{L^6}\big(\|\rho^\gamma\|_{L^1}^{\frac{1}{6}}\|\rho\|_{L^\infty}^{\frac{5\gamma}{6}} + \|\nabla^2 \omega\|_{L^2}\big)^2\\\nonumber
&\lesssim& (T - t)^{- \frac{\gamma\kappa}{2} - \frac{5\gamma\kappa}{3}}\|\nabla u\|_{L^6} + (T - t)^{- \frac{\gamma\kappa}{2} - \kappa}\|\nabla u\|_{L^6}\|\sqrt{\rho}\dot{u}\|_{L^2}^2. \end{eqnarray} Note that \begin{eqnarray}\nonumber
&&(T - t)^{- \frac{\gamma\kappa}{2} - \frac{5\gamma\kappa}{3}}\|\nabla u\|_{L^6}\\\nonumber
&&\lesssim (T - t)^{- \frac{\gamma\kappa}{2} - \frac{5\gamma\kappa}{3}}\|\nabla v\|_{L^6} + (T - t)^{- \frac{\gamma\kappa}{2} - \frac{5\gamma\kappa}{3}}\|\nabla \omega\|_{L^6}\\\nonumber
&&\lesssim (T - t)^{- \frac{\gamma\kappa}{2} - \frac{5\gamma\kappa}{2}} + (T - t)^{- \frac{8\gamma\kappa}{3}}\|\sqrt{\rho}\dot{u}\|_{L^2}. \end{eqnarray} It is clear that $(T - t)^{- \frac{\gamma\kappa}{2} - \frac{5\gamma\kappa}{2}}$ is integrable in time. Moreover, by the definition of $\delta$ and the constraint on $\kappa$, one also has
$$\frac{\gamma\kappa}{2} + \kappa < \frac{\delta}{1 + \delta},\quad \frac{8\gamma\kappa}{3} < 1.$$ Hence, we have
$$\int_0^t(T - s)^{ - \frac{(\gamma + 2)\kappa}{2}}\|\nabla u\|_{L^6}ds \lesssim \Big(\int_0^T(T - t)^{ - \frac{\kappa(\gamma + 2)}{2}\frac{1 + \delta}{\delta}}dt\Big)^{\frac{\delta}{1 + \delta}} \lesssim 1.$$ Using \eqref{52-1} and Gronwall's inequality, one can finish the proof of the lemma. \end{proof}
\section{Blowup Criterions and Proof of Theorem \ref{main}}
Consider the continuity equation. Applying $\nabla$ and then performing the energy estimate, one can derive easily that \begin{eqnarray}\nonumber
\frac{d}{dt}\|\nabla \rho\|_{L^q}^q \lesssim \|\nabla u\|_{L^\infty}\|\nabla \rho\|_{L^q}^q + \int \rho|\nabla^2u||\nabla \rho|^{q - 1}dx. \end{eqnarray} For $3 < q < 6$, one can estimate the last term in the above inequality by \begin{eqnarray}\nonumber
&&\int \rho|\nabla^2u||\nabla \rho|^{q - 1}dx\\\nonumber
&&\lesssim \|\rho\|_{L^\infty}\big(\|\nabla^2v\|_{L^q} + \|\nabla^2\omega\|_{L^q}\big)\|\nabla \rho\|_{L^q}^{q - 1}\\\nonumber
&&\lesssim \|\rho\|_{L^\infty}^\gamma\|\nabla \rho\|_{L^q}^{q} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\|\nabla \rho\|_{L^q}^{q - 1}, \end{eqnarray} where \eqref{5} has been used. Moreover, by Lemma \ref{lem1}, there holds \begin{eqnarray}\nonumber
\|\nabla u\|_{L^\infty} &\lesssim& \|\nabla\omega\|_{L^\infty} + \|\nabla v\|_{L^\infty}\\\nonumber
&\lesssim& 1 + \|\nabla\omega\|_{L^\infty} + \|\rho\|_{L^\infty}^\gamma\ln\big(e + \|\rho\|_{L^\infty}^{\gamma - 1}\|\nabla\rho\|_{L^q}\big). \end{eqnarray} Combining the above three estimates together, we arrive at \begin{eqnarray}\nonumber
\frac{d}{dt}\|\nabla \rho\|_{L^q} &\lesssim& \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q} + \big(1 + \|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}^\gamma\ln(e + \|\rho\|_{L^\infty}^{\gamma - 1})\big)\\\nonumber
&& \times\ \|\nabla \rho\|_{L^q}\ln\big(e + \|\nabla\rho\|_{L^q}\big)\\\nonumber
&\lesssim& \big(1 + \|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q} + \|\rho\|_{L^\infty}^\gamma\ln(e + \|\rho\|_{L^\infty})\big)\\\nonumber
&&\times\ \|\nabla \rho\|_{L^q}\ln\big(e + \|\nabla\rho\|_{L^q}\big). \end{eqnarray} Then the Gronwall's inequality shows that
$$\|\nabla \rho\|_{L^q} \lesssim 1,\quad 0 \leq t < T,$$ provided that \begin{equation}\label{21} \begin{cases}
\int_0^T\|\rho\|_{L^\infty}^\gamma \ln(e + \|\rho\|_{L^\infty}) dt < \infty,\\[-4mm]\\
\int_0^T\big(\|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\big) dt < \infty. \end{cases} \end{equation}
Clearly, the first one in \eqref{21} holds under \eqref{UPB}. It remains to check the second inequality in \eqref{21}. By interpolation and using Lemma \ref{fluxlem}, one has \begin{eqnarray}\nonumber
&&\|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\\\nonumber
&& \lesssim \|\nabla\omega\|_{L^2}^{\frac{2q - 6}{5q - 6}}\|\nabla^2\omega\|_{L^q}^{\frac{3q}{5q - 6}} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\\\nonumber
&&\lesssim (T - t)^{- \kappa}\|\nabla^2\omega\|_{L^q}. \end{eqnarray} Hence, using $$\rho\dot{u} = L\omega,$$ one can get \begin{eqnarray}\nonumber
&&\int_0^T\big(\|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\big) dt\\\nonumber
&&\lesssim \int_0^T (T - t)^{- \kappa}\|\rho\dot{u}\|_{L^q}ds\\\nonumber
&&\lesssim \int_0^T (T - t)^{- \kappa}\|\rho\|_{L^{\frac{6q}{6 - q}}}\|\dot{u}\|_{L^6}ds. \end{eqnarray} For $\gamma \leq \frac{6q}{6 - q}$, then one can use the interpolation inequality to derive that \begin{eqnarray}\nonumber
&&\int_0^T\big(\|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\big) dt\\\nonumber
&&\lesssim \int_0^T (T - t)^{- \kappa(2 - \frac{6 - q}{6q}\gamma)}\|\rho^\gamma\|_{L^1}^{\frac{6 - q}{6q}}\|\nabla\dot{u}\|_{L^2}ds\\\nonumber
&&\lesssim \int_0^T\|\nabla\dot{u}\|_{L^2}^2dt + \int_0^T(T - t)^{- \kappa(4 - \frac{6 - q}{3q}\gamma)}dt. \end{eqnarray} For $\gamma > \frac{6q}{6 - q}$, it holds that \begin{eqnarray}\nonumber
&&\int_0^T\big(\|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\big) dt\\\nonumber
&&\lesssim \int_0^T\|\nabla\dot{u}\|_{L^2}^2dt + \int_0^T(T - t)^{- 2\kappa}dt. \end{eqnarray} Using \eqref{53} in Lemma \ref{lem4} and noting that $\kappa(4 - \frac{6 - q}{3q}\gamma) < 1$, one has \begin{eqnarray}\nonumber
\int_0^T\big(\|\nabla \omega\|_{L^\infty} + \|\rho\|_{L^\infty}\|\nabla^2\omega\|_{L^q}\big) dt \lesssim 1. \end{eqnarray}
Now we have proved that
$$\|\nabla \rho\|_{L^q} \lesssim 1,\quad 0 \leq t < T.$$ By Sobolev embedding, one has $$\rho \lesssim 1, \quad 0 \leq t < T.$$ Now the proof of Theorem \ref{main} follows from the known blowup criteria, see \cite{HLX2, SWZ}. Alternatively, one can also estimate that \begin{eqnarray}\nonumber
\|\nabla u(t, \cdot)\|_{L^2}^2 &\lesssim& \|\nabla v(t, \cdot)\|_{L^2}^2 + \|\nabla \omega(t, \cdot)\|_{L^2}^2\\\nonumber
&\lesssim& \|\rho^\gamma\|_{L^2}^2 + \|\nabla\omega\|_{L^2}^2 \lesssim 1 \end{eqnarray} for all $0 \leq t < T$. Then, using the non-blowup criterion \eqref{criterion}, one finishes the proof of Theorem \ref{main}.
\section{Construction of the Explicit Blowup Solution}
Now we construct the explicit blowup solution and prove Theorem \ref{Exam}. The key here is to find an explicit solution to an over-determined nonlinear system of equations which serves as the profile of a self-similar solution to the compressible Euler and Navier-Stokes equations.
Let us derive the system governing the profile $(\Theta, V)$. Denote $$y = \frac{x}{(T - t)^{\frac{\gamma + 1}{2\gamma}}}.$$ Due to \eqref{SSS}, it is easy to compute that $$\partial_t\rho(t, x) = \frac{1}{(T - t)^{\frac{1}{\gamma} + 1}}\big[\frac{1}{\gamma}\Theta(y) + \frac{\gamma + 1}{2\gamma}y\cdot\nabla_y\Theta(y)\big]$$ and $$\rho\partial_tu (t, x) = \frac{1}{(T - t)^{\frac{\gamma + 1}{2\gamma} + 1}}\big[\frac{\gamma - 1}{2\gamma}V(y) + \frac{\gamma + 1}{2\gamma}y\cdot\nabla_y V(y)\big]\Theta(y).$$ Moreover, one can also compute that $$\nabla_x\cdot(\rho(t, x) u(t, x)) = \frac{1}{(T - t)^{\frac{1}{\gamma} + 1}}\nabla_y \cdot (\Theta(y) V(y))$$ and $$\rho u\cdot\nabla_x u + A\nabla_x\rho^\gamma = \frac{1}{(T - t)^{\frac{\gamma + 1}{2\gamma} + 1}}\big[\Theta V\cdot\nabla_y V(y) + A\nabla_y\Theta^\gamma(y)\big],$$ $$\mu\Delta_xu + (\lambda + \mu)\nabla_x\nabla_x\cdot u = \frac{1}{(T - t)^{\frac{\gamma + 1}{2\gamma} + 1}}\big(\mu\Delta_yV(y) + (\lambda + \mu)\nabla_y\nabla_y\cdot V(y)\big).$$ Consequently, we obtain the systems governing the profile $(\Theta, V)$ \begin{equation}\label{SCNS} \begin{cases} \frac{1}{\gamma}\Theta + \frac{\gamma + 1}{2\gamma}y\cdot\nabla \Theta + \nabla\cdot (\Theta V) = 0,\\[-4mm]\\ \big(\frac{\gamma - 1}{2\gamma}V + \frac{\gamma + 1}{2\gamma}y\cdot\nabla V + V\cdot\nabla V\big)\Theta + A\nabla \Theta^\gamma
= \mu\Delta V + (\lambda + \mu)\nabla\nabla\cdot V. \end{cases} \end{equation}
Next, let us construct a special solution to \eqref{SCNS}. The key observation is that the viscosity terms in \eqref{SCNS} vanish identically for a linear velocity field. Hence, we search for solutions to \eqref{SCNS} with $$V(y) = \beta y$$ for some constant $\beta$ which will be determined later. Since $\Delta V = 0$ and $\nabla\nabla\cdot V = 0$ for this linear velocity field $V$, we reduce \eqref{SCNS} to \begin{equation}\label{SCE} \begin{cases} \big(\frac{1}{\gamma} + n\beta\big)\Theta + \big(\frac{\gamma + 1}{2\gamma} + \beta\big)y\cdot\nabla \Theta = 0,\\[-4mm]\\ A\nabla \Theta^\gamma = - (1 + \beta)\beta y\Theta,\\[-4mm]\\
V(y) = \beta y. \end{cases} \end{equation} Clearly, system \eqref{SCE} is over-determined. Fortunately, by choosing $\beta$ so that $$\beta = - \frac{2}{n(\gamma - 1) + 2},$$ one has a solution to \eqref{SCE}, which reads \begin{equation}\label{Solu} \begin{cases} \Theta(y) = C_n^{\frac{1}{\gamma - 1}}r^{\frac{2}{\gamma - 1}},\\[-4mm]\\
V(y) = - \frac{2y}{n(\gamma - 1) + 2}, \end{cases} \end{equation} where $r = |y|$ and the constants $C_n$ are given in \eqref{Const}. It is clear that the example in Theorem \ref{Exam} can be obtained by inserting \eqref{Solu} into \eqref{SSS}.
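For the reader's convenience, we briefly indicate the computation behind \eqref{Solu}. Inserting the power ansatz $\Theta(y) = c|y|^{\frac{2}{\gamma - 1}}$ into the second equation of \eqref{SCE} and using $\nabla |y|^{s} = s|y|^{s - 2}y$, both sides are parallel to $y$ with the same power of $|y|$, since $\frac{2\gamma}{\gamma - 1} - 2 = \frac{2}{\gamma - 1}$; comparing the coefficients and using $\beta = - \frac{2}{n(\gamma - 1) + 2}$ gives $$A c^{\gamma - 1}\frac{2\gamma}{\gamma - 1} = -(1 + \beta)\beta = \frac{2n(\gamma - 1)}{[n(\gamma - 1) + 2]^2}, \quad {\rm i.e.}\quad c^{\gamma - 1} = \frac{n(\gamma - 1)^2}{A\gamma[n(\gamma - 1) + 2]^2},$$ which equals $C_n$ in \eqref{Const} for $n = 2, 3$. The first equation of \eqref{SCE} then holds as well, since $y\cdot\nabla\Theta = \frac{2}{\gamma - 1}\Theta$ and $$\frac{1}{\gamma} + n\beta + \Big(\frac{\gamma + 1}{2\gamma} + \beta\Big)\frac{2}{\gamma - 1} = 0$$ for this choice of $\beta$.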
\begin{rem} It is interesting to construct regular solutions $(\Theta, V)$ to system \eqref{SCNS} with \begin{equation}\nonumber \nabla V \in H^s\ (s \geq 2),\quad \Theta \in L^p \cap C^\infty. \end{equation} A solution in this class would yield a family of physically more reasonable solutions which blow up in any given finite time. Unfortunately, we are not able to settle this problem at present. However, using the first equation in \eqref{SCNS}, it is easy to derive the following \textit{a priori} estimate \begin{eqnarray}\nonumber
\Big|\frac{2p - 3(\gamma + 1)}{2p\gamma}\Big|\|\Theta\|_{L^p}^p \leq \frac{p - 1}{p}\|\Theta\|_{L^p}^p\|\nabla\cdot V\|_{L^\infty}. \end{eqnarray}
This simply implies that nontrivial profiles $(\Theta, V)$ with sufficiently small $\|\Theta\|_{L^p}$ and $\|\nabla V\|_{H^2}$ do not exist for any $p \neq \frac{3(\gamma + 1)}{2}$. Hence, there are no self-similar blowup solutions to the compressible Navier-Stokes equations \eqref{CNS} with sufficiently small initial data $\int\rho_0|u_0|^2$ ($\lesssim \|\rho_0\|_{L^{\frac{3}{2}}}\|\nabla u_0\|_{L^2}^2$), which agrees with the result in \cite{HLX}. \end{rem}
\end{document}
\begin{document}
\begin{abstract} Let us say that an element of a given family $\mathcal{A}$ of subsets of $\mathbb{R}^d$ can be reconstructed using $n$ test sets if there exist $T_1,\ldots,T_n \subset \mathbb{R}^d$ such that whenever $A,B\in \mathcal{A}$ and the Lebesgue measures of $A \cap T_i$ and $B \cap T_i$ agree for each $i=1,\ldots,n$ then $A=B$. Our goal will be to find the least such $n$.
We prove that if $\mathcal{A}$ consists of the translates of a fixed reasonably nice subset of $\mathbb{R}^d$ then this minimum is $n=d$. In order to obtain this result, on the one hand we reconstruct a translate of a fixed absolutely continuous function of one variable using $1$ test set. On the other hand, we prove that under rather mild conditions the Radon transform of the characteristic function of a set $K\subset\mathbb{R}^d$ (that is, the measure function of the sections of $K$), $(R_\theta \chi_K) (r) = \lambda^{d-1} (K \cap \{x \in \mathbb{R}^d : \langle x,\theta \rangle = r\})$ is absolutely continuous
for almost every direction $\theta$. These proofs are based on techniques of harmonic analysis.
We also show that if $\mathcal{A}$ consists of the magnified copies $rE+t$ $(r\ge 1, t\in\mathbb{R}^d)$ of a fixed reasonably nice set $E\subset \mathbb{R}^d$, where $d\ge 2$, then $d+1$ test sets reconstruct an element of $\mathcal{A}$, and this is optimal. This fails in $\mathbb{R}$: we prove that a closed interval, and even a closed interval of length at least $1$ cannot be reconstructed using $2$ test sets.
Finally, using randomly constructed test sets, we prove that an element of a reasonably nice $k$-dimensional family of geometric objects can be
reconstructed using $2k+1$ test sets. An example from algebraic topology shows that $2k+1$ is sharp in general. \end{abstract}
\keywords{Reconstruction, intersection, Lebesgue measure, Fourier transform, Radon transform, convex set, random construction}
\subjclass[2010]{28A99, 42A61, 26A46, 42A38, 51M05}
\maketitle
\section{Introduction}
There is a vast literature devoted to various kinds of geometric reconstruction problems. Part of the reasons why these are so popular is their connection with geometric tomography.
The set of reconstruction problems we will study is the following. Given a family of subsets of $\mathbb{R}^d$ we would like to find ``test sets'' such that whenever someone picks a set from the family and hands us the Lebesgue measure of the chunk of this set in the test sets, then we can tell which the chosen set is. In other words, the measures of the intersection of the set with our test sets uniquely determine the member of the family. Our aim is to use as few test sets as possible. If it is enough to use $n$ test sets then we say that \emph{an element of the given family can be reconstructed using $n$ test sets}. The formal definition is the following. We denote the Lebesgue measure on $\mathbb{R}^d$ by $\lambda^d$.
\begin{defi} Let $\mathcal{A}$ be a family of Lebesgue measurable subsets of $\mathbb{R}^d$ of finite measure. We say that \emph{an element of $\mathcal{A}$ can be reconstructed using $n$ test sets} if there exist measurable sets $T_1,\ldots,T_n$ such that whenever $A,B\in \mathcal{A}$ and $\lambda^d(A \cap T_i)=\lambda^d(B \cap T_i)$ for every $i=1,\ldots,n$ then $A=B$. \end{defi}
The first question of this form we are aware of is the following folklore problem, which asks, using the above terminology, whether an axis parallel unit subsquare of $[0,10]\times[0,10]$ can be reconstructed using two test sets. We leave this question to the reader as an exercise. There are numerous natural modifications of the problem: Can a unit segment of $[0,10]$ (or of $\mathbb{R}$) be reconstructed using $1$ test set? Can a unit disc be reconstructed using $2$ test sets? What happens in higher dimensions? And so on.
In each of the above problems the given family $\mathcal{A}$ is the set of translates of a fixed set. Since in $\mathbb{R}^d$ this means $d$ parameters, we might hope that we can reconstruct a translate of a fixed set using $d$ test sets. One of our main goals is to show that this is indeed true, at least under some mild assumptions on the set. For $d\ge 3$ we prove (Corollary~\ref{c:posmeasure}) this for any bounded measurable set of positive measure, in the plane (Theorem~\ref{t:rectbound}) for any bounded measurable set of positive measure with rectifiable boundary of finite length, and in the real line for any finite union of intervals.
The first idea behind all of the above results for $d\ge 2$ is the following. Suppose we want to reconstruct a translate of $E\subset\mathbb{R}^d$. Let $T$ be a test set of the form $T=S\times\mathbb{R}^{d-1}$, where $S\subset\mathbb{R}$. Then clearly $\lambda^d((E+x)\cap T)$ depends only on the first coordinate of $x$: in fact, one can easily check that \begin{equation}\label{e:integral} \lambda^d((E+x)\cap T)=\int_S (R_{(1, 0, \ldots, 0)} \chi_E) (t-x_1) dt, \end{equation} where $x_1$ denotes the first coordinate of $x$ and $(R_{(1, 0, \ldots, 0)} \chi_E) (t) = \lambda^{d-1} (E \cap \{x \in \mathbb{R}^d : x_1 = t\})$ is the Radon transform of $\chi_E$ in direction $(1, 0, \ldots, 0)$. (In general, the Radon transform of a function $f$ is $(R_\theta f) (t) = \int_{\{x \in \mathbb{R}^d : \langle x,\theta \rangle = t\}} f \,d\lambda^{d-1}$, but we will only apply this for characteristic functions.) Thus if $\int_S (R_{(1, 0, \ldots, 0)} \chi_E) (t-x_1) dt$ uniquely determines $x_1$, in other words if we can reconstruct a translate of the $\mathbb{R}\to[0,\infty]$ function $R_{(1, 0, \ldots, 0)} \chi_E$ using the test set $S\subset\mathbb{R}$ then $\lambda^d((E+x)\cap T)$ determines $x_1$. Therefore, if we can do this in $d$ linearly independent directions then we are done.
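For completeness, we note that \eqref{e:integral} is a direct consequence of Fubini's theorem and the translation invariance of $\lambda^{d-1}$: writing $x=(x_1,x')$, $$\lambda^d\big((E+x)\cap (S\times\mathbb{R}^{d-1})\big) = \int_S \lambda^{d-1}\big(\{y'\in\mathbb{R}^{d-1} : (t,y')\in E+x\}\big) \,dt = \int_S \lambda^{d-1}\big(\{z'\in\mathbb{R}^{d-1} : (t-x_1,z')\in E\}\big) \,dt = \int_S (R_{(1, 0, \ldots, 0)} \chi_E) (t-x_1) \,dt,$$ since $\{y' : (t,y')\in E+x\}$ is the translate by $x'$ of $\{z' : (t-x_1,z')\in E\}$.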
In order to carry out the above program, first we show (Theorem~\ref{t1}) that a translate of a fixed non-negative not identically zero compactly supported absolutely continuous function can be reconstructed using one test set in the above sense. Then we get the above results by finding many directions in which the Radon transform of the characteristic function is absolutely continuous.
For concrete sets (for example if we want to reconstruct a translate of the unit ball) this is immediate, for more general sets we use Fourier transforms.
We also consider the problem of reconstruction of a magnified copy of a fixed set $E\subset\mathbb{R}^d$ ($d\ge 2$), where by a magnified copy of $E$ we mean a set of the form $rE+x$, where $r\ge 1$ and $x\in\mathbb{R}^d$. Since we have $d+1$ parameters, one can hope to reconstruct using $d+1$ test sets. Using $\mathbb{R}^d$ as a test set we can reconstruct $r$ since $\lambda^d(rE+x)$ depends only on $r$. The reconstruction of $x$ is done similarly as in the above case of translations but now we need to consider not only translations of the Radon transforms but also the translations of their rescaled copies, since instead of \eqref{e:integral} we need here the more general $\lambda^d((rE+x)\cap T)=r^{d-1}\int_S (R_{(1, 0, \ldots, 0)} \chi_E) (\frac{t-x_1}{r}) dt$. Therefore we need to choose the set $S\subset\mathbb{R}$ so that for \emph{every} $r\ge 1$ the integral $\int_S (R_{(1, 0, \ldots, 0)} \chi_E) (\frac{t-x_1}{r}) dt$ determines $x_1$. So this is a harder task than in the case of translations (where we needed this only for $r=1$) and we can only prove a positive result (Theorem~\ref{t2}) under some additional assumption on the derivative of the Radon transform: we also require that $(R_\theta \chi_E)'$ can be approximated well in $L^1$ norm by a $g\in C^1$ function with small $\norm{g'}_1$. For functions obtained from concrete nice sets of $\mathbb{R}^d$ ($d\ge 2$) (for example, if we want to reconstruct a ball of radius at least $1$) this condition can be checked. For the more general case we again have to find many directions in which this condition is satisfied. We have positive general result if we assume that $d\ge 4$, or that $d\ge 2$ and $E$ is convex. In the first case we use Fourier transforms here as well, while in the second case we take advantage of the Brunn--Minkowski inequality among others. This way we get (Corollaries~\ref{c:genmagn}~and~\ref{c:convexmagn}) that if $E\subset\mathbb{R}^d$ is a fixed bounded measurable set of positive measure and $d\ge 4$, or if $E\subset\mathbb{R}^d$ is a convex set with nonempty interior and $d\ge 2$ then a set of the form $rE+x$, where $r\ge 1$ and $x\in\mathbb{R}^d$ can be reconstructed using $d+1$ sets.
In all of the above mentioned results the reconstruction is impossible using fewer test sets, since that would mean a continuous injective map from the parameter space into a smaller dimensional Euclidean space: if we attempt to reconstruct an element of $\{A_{\alpha}:\alpha\in\Lambda\}$, where the parametrization is chosen so that $\alpha\mapsto\lambda^d(A_\alpha\cap T)$ is continuous for any measurable set $T\subset\mathbb{R}^d$ then reconstruction using $T_1,\ldots,T_n$ would yield that $\alpha\mapsto(\lambda^d(A_\alpha\cap T_1),\ldots,\lambda^d(A_\alpha\cap T_n))$ is a continuous injective $\Lambda\to\mathbb{R}^n$ map, which is impossible if $\Lambda$ contains an open subset of $\mathbb{R}^{n+1}$, or more generally an $(n+1)$-dimensional manifold.
The above argument also shows that sometimes we need more functions than the number of parameters: if the above $\Lambda$ cannot be embedded continuously into $\mathbb{R}^n$ then one cannot reconstruct an element of $\{A_{\alpha}:\alpha\in\Lambda\}$ using $n$ test sets.
\begin{example}\label{e:2k} Let $\Lambda$ be the $k$-skeleton of a $2k+2$ simplex (that is, $\Lambda$ is the union of the $k$-dimensional faces of a simplex in $\mathbb{R}^{2k+2}$). By the van~Kampen--Flores Theorem (see e.g.~in \cite{Matousek}) $\Lambda$ cannot be embedded continuously into $\mathbb{R}^{2k}$, so the above argument shows that a unit ball in $\mathbb{R}^{2k+2}$ centered at a point of $\Lambda$ cannot be reconstructed using $2k$ sets, although the parameter space is $k$-dimensional, which means that we only have $k$ parameters. \end{example}
In Section~\ref{freedom} we prove (Theorem~\ref{special case}) that reasonably nice geometric objects parametrized reasonably nicely by $k$ parameters can be reconstructed using $2k+1$ test sets, which is sharp according to the above example. The test sets are given using a random construction. As applications we get for example that an $n$-gon in the plane can be reconstructed using $4n+1$ test sets, an ellipsoid in $\mathbb{R}^3$ can be reconstructed using $19$ test sets, and a ball in $\mathbb{R}^d$ can be reconstructed using $2d+3$ test sets, in particular an interval in $\mathbb{R}$ can be reconstructed using $5$ test sets. (Here and in the sequel $n$-gons, ellipsoids, balls and intervals are understood to be closed.)
One might be tempted to say that Example~\ref{e:2k} is quite artificial and that in natural situations the number of parameters should suffice, so that probably an interval can be reconstructed using $2$ test sets. But this is false: we prove (Theorem~\ref{t:interval}) that an interval cannot be reconstructed using $2$ test sets, not even if we consider only intervals of length more than $1$. As above, the obstacle is again of a topological nature, although it is more complicated since the parameter space can be embedded into $\mathbb{R}^2$.
\begin{remark} It is natural to ask what happens if we try to reconstruct a set using test \textit{functions} by considering the integrals of the functions over the set, or even test measures by considering the measures of the set.
First we describe why a nice geometric object can be reconstructed using the following single test measure. Let $$ \mu = \sum_{i=1}^\infty \frac{\delta_{q_i}}{3^i}, $$ where $\{q_1, q_2, \ldots\} = \mathbb{Q}^d$ and $\delta_x$ denotes the Dirac measure at $x$. Then it is easy to see that $\mu(A) = \mu(B) \iff A \cap \mathbb{Q}^d = B \cap \mathbb{Q}^d$. Suppose that the symmetric difference of any two distinct sets of $\mathcal{A}$ contains a ball, which is always the case for families of geometric objects. Then $\mu$ reconstructs a member of $\mathcal{A}$, that is, whenever $A,B\in\mathcal{A}$ are distinct sets then $\mu(A)\neq\mu(B)$.
Similarly to the case of test sets, it is not possible to reconstruct a member of $\mathcal{A}$ with fewer bounded test functions than the dimension of the parameter space of $\mathcal{A}$. However, as opposed to the case of test sets, it is almost obvious to reconstruct a translate of a fixed bounded measurable set $E\subset\mathbb{R}^d$ using $d$ bounded test functions:
the functions $\arctan x_1,\ldots,\arctan x_d$ reconstruct a translate of $E$.
It is also very easy to reconstruct a magnified copy $rE+t$ of a fixed bounded measurable set $E\subset\mathbb{R}^d$ using $d+1$ bounded test functions: the constant $1$ function determines $r$, and then $\arctan x_1,\ldots,\arctan x_d$ determine $t$. As it was mentioned earlier, finding $d+1$ test sets that reconstruct a magnified copy of a fixed set is much harder in higher dimensions, and it is even impossible in $\mathbb{R}$: an interval cannot be reconstructed using two test sets.
Therefore the reconstruction problem for intervals shows nicely the difference between test measures, test functions and test sets: an interval of $\mathbb{R}$ can be reconstructed using $1$ test measure, it can be reconstructed using $2$ test functions (but $1$ does not suffice) and it cannot be reconstructed using less than $3$ test sets. \end{remark}
\section{Reconstruction of an interval}\label{s:int}
By interval we always mean a bounded nondegenerate closed interval. The following simple lemma is the key tool to reconstruct a translate of a fixed interval or a finite union of intervals.
\begin{lemma}\label{l:semigroup} Suppose that $G\subset (0,\infty)$, $h\in G$ and $G+h\subset G$. Let $A\subset \mathbb{R}$ be a measurable set that has positive Lebesgue measure in every nonempty interval. Suppose that $A\cap (A+G)=\emptyset$. Then an interval of length $h$ can be reconstructed using the test set $T=A\cup (A+G)$; in fact, $x\mapsto\lambda([x,x+h]\cap T)$ is strictly increasing. \end{lemma}
\begin{proof} Clearly it is enough to prove that $\lambda([u,u+h]\cap T) < \lambda([v,v+h]\cap T)$ for any $u<v<u+h$. So let $u<v<u+h$. Then $$ \lambda([v,v+h]\cap T) - \lambda([u,u+h]\cap T) = \lambda([u+h,v+h]\cap T) - \lambda([u,v]\cap T) $$ $$ =\lambda([u+h,v+h]\cap A) + \lambda([u+h,v+h]\cap (A+G)) - \lambda([u,v]\cap (A \cup (A+G))). $$ The first term is positive since $A$ has positive measure in every nonempty interval. By the translation invariance of $\lambda$, the second term can be written as $\lambda([u,v]\cap (A+G-h))$. Hence it is enough to prove that $A \cup (A+G) \subset A+G-h$. We have $A\subset A+G-h$ since $h\in G$, and we have $A+G\subset A+G-h$ since $G+h\subset G$. \end{proof}
\begin{theorem}\label{t:unitint} A translate of a fixed interval can be reconstructed using $1$ test set; that is, for any $h$ there exists a set $T\subset\mathbb{R}$ such that $\lambda([x,x+h]\cap T)\neq \lambda([y,y+h]\cap T)$ if $x\neq y$. \end{theorem}
\begin{proof} We can clearly suppose that $h=1$. It is not hard to see that one can choose countably many pairwise disjoint measurable subsets of $[0,1]$ such that all of them have positive measure in every nonempty subinterval of $[0,1]$. Using $\mathbb{Z}$ as the index set we denote them by $A_k$ ($k\in \mathbb{Z}$). Then Lemma~\ref{l:semigroup} applied to $A=\cup_{k\in\mathbb{Z}} (A_k +k)$, $G=\{1,2,\ldots\}$, $h=1$ completes the proof. \end{proof}
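One way to obtain the countably many sets used in the above proof is a greedy construction similar to the proof of Lemma~\ref{l:discrete} below: enumerate as $(J_1,k_1), (J_2,k_2),\ldots$ all pairs consisting of a subinterval of $[0,1]$ with rational endpoints and an integer $k\in\mathbb{Z}$, choose inductively pairwise disjoint nowhere dense closed sets $C_m\subset J_m$ of positive measure (this is possible, since the union of the finitely many previously chosen sets is closed and nowhere dense, hence not of full measure in $J_m$), and let $A_k=\bigcup\{C_m : k_m=k\}$ for each $k\in\mathbb{Z}$.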
To reconstruct a translate of a fixed finite union of intervals, Lemma~\ref{l:semigroup} has to be applied for a more complicated $G$, for which it is a bit harder to construct a suitable $A$. This is done in the following lemma.
We call a set $E\subset \mathbb{R}$ \emph{locally finite} if it has finitely many elements in every bounded interval.
\begin{lemma}\label{l:discrete} For any locally finite set $G\subset (0,\infty)$ there exists a measurable set $A\subset \mathbb{R}$ such that $A$ has positive Lebesgue measure in every nonempty interval and $A\cap (A+G)=\emptyset$. \end{lemma} \begin{proof} Let $I_1,I_2,\ldots$ be an enumeration of the intervals with rational endpoints of length less than the minimal element of $G$.
By induction we define nowhere dense closed sets $A_1,A_2,\ldots$ with positive measure such that for every $n$, $A_n\subset I_n$ and \begin{equation}\label{e:union} \left(\bigcup_{j=1}^n A_j\right) \cap \left(\bigcup_{j=1}^n A_j + G \right) = \emptyset. \end{equation} This will complete the proof since then we can choose $A=\cup_{j=1}^\infty A_j$.
We can take $A_1$ as an arbitrary nowhere dense closed subset of $I_1$ with positive measure since then \eqref{e:union} is guaranteed by $\diam(I_1)< \min G$.
Suppose that we already chose $A_1,\ldots,A_{n-1}$ with all the requirements up to $n-1$. For any $A_n\subset I_n$ we have $A_n \cap (A_n+G)=\emptyset$. To complete the proof we need to choose a nowhere dense closed set $A_n\subset I_n$ of positive measure disjoint to $(\cup_{j=1}^{n-1} A_j) + G$ and $ (\cup_{j=1}^{n-1} A_j) - G$. Since $G$ is locally finite, we only need to avoid the union of finitely many translates of the nowhere dense closed set $\cup_{j=1}^{n-1} A_j$. As this union is closed and nowhere dense, it is not of full measure in $I_n$. \end{proof}
\begin{theorem}\label{t:finiteint} Let $E$ be a finite union of intervals in $\mathbb{R}$. Then a translate of $E$ can be reconstructed using $1$ set; that is, there exists a measurable set $T$ such that $\lambda((E+t)\cap T)\neq \lambda((E+t')\cap T)$ if $t\neq t'$. \end{theorem} \begin{proof} Let $E=\cup_{j=1}^n I_j$, where $I_j$ is an interval of length $a_j$ and the intervals are pairwise disjoint. Let $G$ be the additive semigroup generated by $a_1,\ldots,a_n$; that is, $G=\{\sum_{i=1}^n k_i a_i \ : \ k_1,\ldots,k_n\in\{0,1,2,\ldots\}\} \setminus \{0\}$. Then $G\subset (0,\infty)$ is a locally finite set and it contains every $a_i$. Let $A$ be the set obtained by Lemma~\ref{l:discrete} from $G$ and let $T=A\cup(A+G)$. Then by Lemma~\ref{l:semigroup} each function $x\mapsto \lambda((I_j+x)\cap T)$ is strictly increasing, so their sum $x\mapsto \lambda((E+x)\cap T)$ is also strictly increasing, which completes the proof. \end{proof}
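For instance, if $E=[0,1]\cup[2,2+\sqrt{2}]$ (so $a_1=1$ and $a_2=\sqrt{2}$), then $G=\{k+l\sqrt{2} : k,l\in\{0,1,2,\ldots\}\}\setminus\{0\}$; this set is locally finite, even though $1$ and $\sqrt{2}$ generate a dense subgroup of $\mathbb{R}$, illustrating that it is only local finiteness that Lemma~\ref{l:discrete} needs.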
An interval has two parameters, so one cannot reconstruct an interval using $1$ test set, but one might expect that $2$ test sets should be enough. We show that this is false. The following lemma concerns the topological obstacle to reconstruction using two sets. The lemma is surely well known to topologists, but for completeness we present a short proof.
\begin{lemma}\label{findreference} Let $U\subset\mathbb{R}^2$ be a path connected open set and let $f:U\to\mathbb{R}^2$ be continuous and injective. Suppose that $f$ is differentiable at two points $a$ and $b$, and that the determinant of the Jacobi matrix of $f$ at $a$ satisfies $\det f'(a)>0$. Then $\det f'(b)\ge 0$. \end{lemma} \begin{proof}
Suppose that $\det f'(b)<0$. Let $C:[0,2\pi]\to \mathbb{R}^2$ be the curve $C(t)=e^{it}$. If $r$ is small enough, the winding number of the curve $f(a+rC)$ around $f(a)$ is $1$, while the winding number of $f(b+rC)$ around $f(b)$ is $-1$. However, $U$ is path-connected and the winding number of $f(x+rC)$ is continuous in $x$, which yields a contradiction. \end{proof}
\begin{theorem}\label{t:interval} An interval in $\mathbb{R}$ cannot be reconstructed using two measurable sets. Moreover, even an interval of length bigger than $1$ cannot be reconstructed using two sets; that is, for any pair of measurable sets $A, B \subset \mathbb{R}$ there exist two distinct intervals $I$ and $I'$ of length bigger than $1$ such that $\lambda( I \cap A ) =\lambda( I' \cap A )$ and $\lambda( I \cap B ) =\lambda( I' \cap B )$. \end{theorem} \begin{proof} Suppose to the contrary that $A$ and $B$ reconstruct an interval of length bigger than $1$. Let $U=\{(x,y)\in\mathbb{R}^2: y-x>1\}$. Let $f:U\to [0,\infty)^2$ be defined by $$f((x,y))=(\lambda(A\cap [x,y]), \lambda(B\cap[x,y])).$$ The map $f$ is Lipschitz, and since $A$ and $B$ reconstruct, it is also injective.
Let $d_H(x)=\lim_{r\to 0+}\lambda(H\cap [x-r,x+r])/2r$ denote the density of a set $H$ at a point $x$ if the limit exists. Suppose that $y-x>1$ and $d_A(x), d_B(x), d_A(y), d_B(y)$ all exist. Using the $o$ notation, \begin{align*} f(x+t_x, y+t_y)= (&\lambda(A\cap[x,y])- d_A(x)t_x + d_A(y)t_y + o(t_x) + o(t_y), \\
&\lambda(B\cap[x,y])- d_B(x)t_x + d_B(y)t_y + o(t_x) + o(t_y) ). \end{align*} This implies that $f$ is differentiable at $(x,y)$ and its derivative (Jacobian) is $$ \left( \begin{array}{cc} -d_A(x) & d_A(y) \\ -d_B(x) & d_B(y) \\ \end{array} \right). $$
Let $I_1$ and $I_2$ be two non-empty intervals such that their distance is bigger than $1$ and $I_1$ is on the left-hand side of $I_2$. Then none of $A, B, A^c, B^c$ and $A\triangle B$ can have zero measure intersection with both of $I_1$ and $I_2$, since otherwise $f$ maps $I_1 \times I_2 \subset U$ injectively and continuously into a (vertical, horizontal or diagonal) line, which is impossible.
This implies that all of $A, B, A^c, B^c$ and $A\triangle B$ must have positive measure in any interval of length bigger than $1$.
In particular, $\lambda(A \triangle B)>0$ and both $A$ and $B$ have positive measure in every halfline.
Since $\lambda(A \triangle B)>0$, we have $\lambda(A\setminus B)>0$ or $\lambda(B\setminus A)>0$. We may suppose that the first one holds. Recall that Lebesgue's density theorem states that the density of a measurable set is $1$ at almost all of its points and $0$ at almost all of the points of its complement. Since $\lambda(A\setminus B)>0$, this implies that there exists a point $z$ for which $d_{A\setminus B}(z)=1$. Then $d_A(z)=1$ and $d_B(z)=0$. Since $B\cap (-\infty,z-1)$ and $B\cap (z+1,\infty)$ have positive measure, we can pick $u<z-1$ and $v>z+1$ such that $d_B(u)=d_B(v)=1$ and both of $d_A(u)$ and $d_A(v)$ exist.
Then $$ f'(u,z) = \left( \begin{array}{cc} -d_A(u) & 1 \\ -1 & 0 \\ \end{array} \right) \quad \text{and} \quad
f'(z,v) = \left( \begin{array}{cc} -1 & d_A(v) \\ 0 & 1 \\ \end{array} \right), $$ thus $\det f'(u,z)=1$, $\det f'(z,v)=-1$. Since $(u,z), (z,v) \in U$, this contradicts Lemma~\ref{findreference}. \end{proof}
In Corollary~\ref{c:concrete} we will see that $5$ test sets are enough. We do not know whether $3$ or $4$ test sets suffice.
\section{Reconstruction of a translate of a fixed function}
As explained in the Introduction, in order to obtain positive results about the reconstruction of a translate of a fixed set in $\mathbb{R}^d$ ($d\ge 2$), it will be useful to obtain results about the reconstruction of a translate of a given $\mathbb{R}\to\mathbb{R}$ function using $1$ test set.
To reconstruct a translate of a fixed function, the following definition will be crucial.
\begin{defi} Let $f:\mathbb{R}\to\mathbb{R}$ be an $L^1$ function and $\varepsilon > 0$. Define \[ K(\varepsilon,f) =
\inf \left\{ \mathop{Var}(g) : g \textrm{ is compactly supported,} \, \norm{f-g}_1 <\varepsilon \right\}, \] where $\mathop{Var}(g)$ denotes the total variation of $g$.
\end{defi}
Clearly, $K(\varepsilon, f)$ is non-increasing in $\varepsilon$. Also, $K(\varepsilon,f)$ is always finite, as the piecewise constant functions of compact support are dense in $L^1$.
The following lemma shows that we can replace functions of bounded variation by $C^1$ functions.
\begin{lemma} \lab{l:C^1} Let $f:\mathbb{R}\to\mathbb{R}$ be an $L^1$ function with $\supp(f) \subset [-1,1]$ and $\varepsilon > 0$. Then \[ K(\varepsilon,f)= \inf \left\{ \mathop{Var}(g) : g \in C^1,
\supp(g) \subset [-1,1], \,\norm{f-g}_1 <\varepsilon \right\}. \] \end{lemma}
Note that if $g \in C^1$ then $\mathop{Var}(g) = \norm{g'}_1$.
\begin{proof} It suffices to prove that if $g$ is of bounded variation with $\supp(g) \subset [-1,1]$ and $\varepsilon > 0$ then there exists a $g_1 \in C^1$ with $\supp(g_1) \subset [-1,1], \norm{g - g_1}_1 < \varepsilon$ and $\mathop{Var}(g_1) \le \mathop{Var}(g)$.
Let us first assume, instead of $\supp(g) \subset [-1,1]$, that $g$ is monotone and constant on $(- \infty, -1)$ and on $(1, \infty)$. It is not hard to find a piecewise constant monotone function $g_0$ such that $\norm{g - g_0}_1 < \varepsilon$. Then clearly $\mathop{Var}(g_0) = \mathop{Var}(g)$. Finally, we can easily approximate $g_0$ by a monotone $g_1 \in C^1$ such that $\norm{g_0 - g_1}_1 < \varepsilon$ and $\mathop{Var}(g_1) = \mathop{Var}(g)$.
Let now $g$ be a general function of bounded variation with $\supp(g) \subset [-1,1]$. It is well-known that it can be decomposed as $g = g_+ - g_-$, where $g_+$ and $g_-$ are non-decreasing and $\mathop{Var}(g) = |g_+(1) - g_+(-1)| + |g_-(1) - g_-(-1)|$ (indeed, let $g_+(x)$ be the positive variation of $g$ on $[-1, x]$). Applying the above approximation gives the result. \end{proof}
Recall that $f * g$ stands for the convolution of the two functions, and also that a function $f$ is locally absolutely continuous iff there exists a locally $L^1$ function $f^*$ such that $f(y) - f(x)= \int_x^y f^*(t) \,dt$ for every $x, y \in \mathbb{R}$. Moreover, in that case $f^* = f'$ almost everywhere. The following lemma is rather well-known, but we were unable to find a suitable reference so we include a proof.
\begin{lemma} \lab{l:conv} Let $f : \mathbb{R} \to \mathbb{R}$ be locally absolutely continuous, $g : \mathbb{R} \to \mathbb{R}$ be locally $L^1$ and assume that one of them is compactly supported. Then $f * g$ is also locally absolutely continuous and $(f * g)' = f' * g$ almost everywhere. Moreover, if $g$ is locally $L^\infty$ then $f*g$ is $C^1$ and $(f * g)' = f' * g$ everywhere. \end{lemma}
\begin{proof} Since we are only interested in the local behaviour of $f*g$, and one of them is compactly supported, we may actually assume (using the formula defining $f*g$) that both of them are compactly supported. This justifies the use of Fubini's Theorem in the following computation. \[ (f*g)(y) - (f*g)(x) = \int_\mathbb{R} [f(y-u) - f(x-u)] g(u) \,du = \int_\mathbb{R} \int_{x-u}^{y-u} f'(t) \,dt\ g(u) \,du = \] \[ \int_\mathbb{R} \int_x^y f'(t-u) \,dt \ g(u) \,du = \int_x^y \int_\mathbb{R} f'(t-u) g(u) \,du \,dt = \int_x^y (f' * g)(t) \,dt, \] hence we are done with the proof of the first statement. If $g$ is also in $L^\infty$, then $f'*g$ is the convolution of an $L^1$ and an $L^{\infty}$ function, so it is continuous. The above equation shows that $f*g$ is the integral of this continuous function, which yields the remaining statements. \end{proof}
\begin{lemma}\label{l:77} Let $f : \mathbb{R} \to \mathbb{R}$ be a non-negative absolutely continuous function with $\supp(f)\subset[-1,1]$ and $\int_{\mathbb{R}}f=1$. Let $a>0$ and $f_a(x)=f(x/a)/a$. Let $\Phi:\mathbb{R}\to (0,1)$ be a $C^1$ function with $\Phi'>0$, and let $\Psi:\mathbb{R}\to[0,1]$ be a measurable function. Then for any $\varepsilon>0$ and $x\in\mathbb{R}$ we have $$ (f_a * \Psi)'(x) \ge \min_{[x-a,x+a]} \Phi' -\frac{2\varepsilon}{a} - \frac{K(\varepsilon, f')}{a^2} \intnorm{\Psi-\Phi}{a}.
$$
\end{lemma}
\begin{proof} We denote by $f_a'$ the $L^1$ function for which $f_a(x)=\int_{-\infty}^x f_a'(t) \,dt$. Clearly $f_a'=f'(x/a)/a^2$. Since $\supp(f)\subset[-1,1]$, the function $f_a$ is supported in $[-a,a]$.
Fix $\delta>0$. By Lemma~\ref{l:C^1}, we can choose a $C^1$ function $g_0$ supported in $[-1,1]$ such that $\norm{f'-g_0}_1<\varepsilon$ and $\norm{g_0'}_1\le K(\varepsilon, f') +\delta$. Let $g(x)=g_0(x/a)/a^2$ (thus $g'(x)=g_0'(x/a)/a^3$). Then we have \begin{equation} \label{two} \norm{f_a'-g}_1<\varepsilon/a \quad \textrm{and} \quad \norm{g'}_1 \le \frac{K(\varepsilon, f')+\delta}{a^2}. \end{equation}
Using Lemma \ref{l:conv} several times, we obtain \begin{align*} (f_a * \Psi)'(x) & = (f_a * \Phi)'(x) + (f_a * (\Psi-\Phi))'(x) = \\ & = (f_a * \Phi')(x) + (f_a' * (\Psi-\Phi))(x) = \\ &= (f_a * \Phi')(x) + ((f_a'-g) * (\Psi-\Phi))(x) + (g * (\Psi-\Phi))(x) = \\ &= (f_a * \Phi')(x) + ((f_a'-g) * (\Psi-\Phi))(x) + \left (g' * \int (\Psi-\Phi)\right)(x), \end{align*} where $\left(\int (\Psi-\Phi)\right)(t)=\int_{t_0}^t (\Psi(s)-\Phi(s)) ds$ for an arbitrary fixed $t_0$.
Using $\int_{\mathbb{R}}f_a=1$,
$|\Psi-\Phi|\le 2$ and that $f_a$ and $g'$ are supported in $[-a,a]$, and then (\ref{two}), this implies that \begin{align*} (f_a * \Psi)'(x) & \ge \min_{[x-a,x+a]} \Phi' - 2 \norm{f_a'-g}_1 - \norm{g'}_1 \intnorm{\Psi-\Phi}{a} \\ & \ge \min_{[x-a,x+a]} \Phi' -\frac{2\varepsilon}{a} - \frac{K(\varepsilon, f')+\delta}{a^2} \intnorm{\Psi-\Phi}{a}. \end{align*} Letting $\delta\to 0$ we get the claimed inequality. \end{proof}
In this section our goal is to reconstruct a translate of a given non-negative not identically zero compactly supported absolutely continuous function $f$ on $\mathbb{R}$ using a measurable test set $T$; we do this by choosing $T$ so that $\int_T f(x-b)dx$ is strictly increasing in $b$. Note that $\int_{\mathbb{R}} \Phi(x)f(x-b)dx$ is strictly increasing in $b$ for any strictly increasing $\Phi(x)$, and by denoting the characteristic function of $T$ by $\chi_T$, we have $\int_T f(x-b)dx=\int_{\mathbb{R}} \chi_T(x) f(x-b)dx$. Therefore our task is to approximate a given $\Phi$ by a characteristic function, so that their integrals are close. This will be done in the following two lemmas.
\begin{lemma}\label{l:Marci} Let $\Phi:[0,1]\to (0,1)$ be a $C^1$ function with $\Phi'>0$, and let $\delta > 0$. Then we can choose $T\subset[0,1]$ as a finite union of intervals so that \begin{equation}\label{delta} \abs{\int_{a}^b (\chi_T - \Phi)} \le \delta \text{ for any } a,b\in[0,1] \end{equation} and \begin{equation}\label{zero} \int_{0}^{1} (\chi_T - \Phi)=0. \end{equation} \end{lemma}
\begin{proof} Choose a positive integer $n$ so that $4/n<\delta$.
Let $T_0=\emptyset, T_1=[0,1/n]$ and by induction construct $T_m\subset[0,m/n]$ for $m=1,\ldots,n$ so that $T_m=T_{m-1}$ or $T_m=T_{m-1}\cup[(m-1)/n,m/n]$ and $0 \le \int_0^{m/n} (\chi_{T_m} - \Phi) \le 1/n$. Then letting $h=\int_0^1 (\chi_{T_n} - \Phi)$ we have $0\le h \le 1/n$. Since $T_n\supset T_1=[0,1/n]$, by letting $T=T_n\setminus [0,h]$ we have (\ref{zero}) and $-1/n \le \int_0^{m/n} (\chi_{T} - \Phi) \le 1/n$ for any $m=1,\ldots,n-1$. Then $-2/n \le \int_0^{b} (\chi_{T} - \Phi) \le 2/n$ for any $b\in[0,1]$, which implies (\ref{delta}) since $4/n<\delta$. \end{proof}
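The proof is effective; the following Python sketch (purely illustrative, with ad hoc names and a crude numerical quadrature for the integrals of $\Phi$) carries out the greedy selection for a concrete $\Phi$ and $\delta$ and returns $T$ as a list of intervals.
\begin{verbatim}
import numpy as np

def greedy_test_set(Phi, delta):
    # Choose n with 4/n < delta and scan the cells [(m-1)/n, m/n] of [0,1].
    n = int(np.ceil(4.0 / delta)) + 1
    intervals = []       # cells kept so far; their union is T_m
    running = 0.0        # current value of int_0^{m/n} (chi_T - Phi)
    for m in range(1, n + 1):
        lo, hi = (m - 1) / n, m / n
        xs = np.linspace(lo, hi, 101)
        cell = Phi(xs).mean() * (hi - lo)   # approximate integral of Phi over the cell
        if running <= cell:                 # keeping the cell leaves running in [0, 1/n]
            intervals.append([lo, hi])
            running += 1.0 / n - cell
        else:                               # skipping the cell leaves running in [0, 1/n]
            running -= cell
    intervals[0][0] = running               # remove [0, h]: total integral becomes ~0
    return intervals

# example: Phi(x) = 0.1 + 0.8*x on [0,1], delta = 0.05
T = greedy_test_set(lambda x: 0.1 + 0.8 * x, 0.05)
\end{verbatim}
The selection rule keeps a cell exactly when doing so keeps the running integral inside $[0,1/n]$, which is the invariant used in the proof.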
Recall that $\lfloor x \rfloor$ denotes the integer part of $x$.
\begin{lemma}\label{l:88} Let $\Phi:\mathbb{R}\to (0,1)$ be a $C^1$ function with $\Phi'>0$ and $\delta:\{0,1,2,\ldots\} \to (0,1)$. Then we can choose $T$ as a locally finite union of intervals so that \begin{equation}\label{defA}
\abs{\int_{a}^b (\chi_T - \Phi)} \le \delta(\lfloor |a| \rfloor) + \delta(\lfloor
|b| \rfloor) \end{equation} for any $a,b\in\mathbb{R}$. \end{lemma}
\begin{proof} Apply Lemma~\ref{l:Marci} on $[k,k+1]$ and on $[-k-1,-k]$ (instead of $[0,1]$) with $\delta=\delta(k)$ for each $k=0,1,\ldots$ and let $T$ be the union of the sets we obtain. By (\ref{zero}) the integral of $\chi_T-\Phi$ over each of these unit intervals is zero, so for any $a<b$ only the two partial intervals at the ends contribute to $\int_a^b (\chi_T-\Phi)$, and by (\ref{delta}) their contributions are at most $\delta(\lfloor |a| \rfloor)$ and $\delta(\lfloor |b| \rfloor)$ in absolute value. \end{proof}
Now we can prove the main result of the section.
\begin{theorem}\label{t1} Let $f : \mathbb{R} \to \mathbb{R}$ be a non-negative not identically zero compactly supported absolutely continuous function. Then a translate of $f$ can be reconstructed using one test set; that is, there exists a measurable set $T$ such that if $b \neq b'$ then $\int_T f(x-b) \,dx \neq \int_T f(x-b') \,dx$.
In fact, $\int_T f(x-b) \,dx$ is strictly increasing in $b$, and we can choose $T$ to be a locally finite union of intervals. \end{theorem}
\begin{proof}
Since $f$ is absolutely continuous, $f'$ exists almost everywhere, $f' \in L^1$ and $f(x)=\int_{-\infty}^x f'(t) \,dt$ for every $x \in \mathbb{R}$. We may suppose that $f$ (and $f'$) is supported in $[-1,1]$ and that $\int_\mathbb{R} f=1$.
Let $\Phi:\mathbb{R} \to (0,1)$ be a $C^1$ function with $\Phi'>0$, and let $h:[0,\infty)\to (0,1)$ be a decreasing continuous function, to be specified later.
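For instance, $\Phi$ may be taken to be the logistic function $\Phi(x)=1/(1+e^{-x})$; this particular choice plays no role, any $C^1$ function with positive derivative and values in $(0,1)$ works.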
By applying Lemma~\ref{l:88} to a sufficiently small function $\delta$ we obtain a set $T$ such that \begin{equation}\label{intaf}
\intnorm{\chi_T-\Phi}{1} \le h(|x|) \textrm{ for every } x \in \mathbb{R}. \end{equation} We will complete the proof by proving that $f * {\chi_T}$ is strictly increasing. As this function is $C^1$ by Lemma \ref{l:conv}, it suffices to show that $(f * {\chi_T})'>0$ everywhere.
Applying Lemma~\ref{l:77} to $\Psi=\chi_T$ and $a=1$, and using (\ref{intaf})
we obtain \begin{align*} (f * {\chi_T})'(x) & \ge \min_{[x-1,x+1]} \Phi' - 2\varepsilon - K(\varepsilon, f') \intnorm{\chi_T-\Phi}{1} \\
& \ge \min_{[x-1,x+1]} \Phi' - 2\varepsilon - K(\varepsilon, f') h(|x|). \end{align*}
Therefore, choosing $\varepsilon=\varepsilon(x)=1/4 \min_{[x-1,x+1]} \Phi'$, we see that if we fix $h$ such that
$$h(|x|)\le \frac{\min_{[x-1,x+1]} \Phi'}{4K(\varepsilon(x), f')}$$ for every $x\in\mathbb{R}$ then $$(f*{\chi_T})'(x) \ge 1/4 \min_{[x-1,x+1]} \Phi'>0.$$ \end{proof}
\section{Reconstruction of a function of the form $f(\frac{x}{a}+b)$}
The reconstruction of a magnified copy of a fixed set in $\mathbb{R}^d$ ($d\ge 2$) will be based on the following result.
\begin{theorem}\label{t2} Let $f : \mathbb{R} \to \mathbb{R}$ be a non-negative not identically zero compactly supported absolutely continuous function. Suppose that \begin{equation}\label{1/3} \text{there exist } C_1, C_2 \text{ such that } \quad K(\varepsilon ,f')\le C_1\exp(C_2\varepsilon^{-1/3}) \quad \text{for every } \varepsilon>0.
\end{equation} Then there exists a measurable set $T$ (in fact, a locally finite union of intervals) such that $\int_T f(\frac{x}{a}+b)\,dx$ is strictly monotone in $b$ ($b\in\mathbb{R}$) for every $a\ge 1$.
\end{theorem}
\begin{remark} The theorem does not remain true if we replace $a\ge 1$ by $a>0$. Indeed, if $b \mapsto \int_{T} f( \frac{x}{a} + b ) \,dx$ is strictly monotone for every $a > 0$ then $T$ cannot be of full or zero measure on any interval, so both $T$ and its complement have density points in any interval, and choosing a small enough $a$ easily shows that monotonicity fails. \end{remark}
Since the test set $\mathbb{R}$ clearly determines $a$ (indeed, $\int_{\mathbb{R}} f(\frac{x}{a}+b)\,dx=a\int_{\mathbb{R}}f$), we obtain the following.
\begin{cor} If $f$ satisfies the conditions of Theorem \ref{t2} then a function of the form $f(\frac{x}{a}+b)$ ($a\ge 1$, $b\in\mathbb{R}$) can be reconstructed using two test sets, one of which is $\mathbb{R}$. \end{cor}
\begin{proof}[Proof of Theorem~\ref{t2}] Since $K(\varepsilon, (cf(x/r+b))')=K(\varepsilon c^{-1}, f') c/r$, property \eqref{1/3} is invariant under linear transformations of $f$. Therefore we may suppose that $\int_{\mathbb{R}}f=1$,
and that $f$ is supported in $[-1,1]$.
Let $f_a(x)=f(x/a)/a$ ($a\ge 1$).
Let $\Phi:\mathbb{R}\to (0,1)$ be a $C^1$ function such that $$
\Phi'(x)=\frac{c_1}{|x| \log^2 |x|} $$
when $|x|\ge 2$ (for some positive constant $c_1$) and let $\Phi'>0$ everywhere. Let $h:[0,\infty)\to(0,1)$ be a decreasing continuous function, which we will specify later.
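Such a $\Phi$ exists provided $c_1$ is small enough: a sketch of one possible choice is to prescribe a continuous $\Phi'>0$ agreeing with $c_1/(|x|\log^2|x|)$ for $|x|\ge 2$ and to use that
$$\int_{|x|\ge 2} \frac{dx}{|x|\log^2|x|}<\infty,$$
so for small $c_1$ the total integral of $\Phi'$ is less than $1$, and a suitable antiderivative of $\Phi'$ takes values in $(0,1)$.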
By applying Lemma~\ref{l:88} to a sufficiently small function $\delta$ we obtain a set $T$ such that \begin{equation}\label{defB}
\intnorm{\chi_T-\Phi}{a} \le h(|x-a|)+h(|x+a|) \end{equation} for every $x\in\mathbb{R}$ and every $a\ge 1$. Again, we will complete the proof by proving that $(f_a * {\chi_{T}})'>0$ everywhere for every $a\ge 1$.
Applying Lemma~\ref{l:77} to $\Psi=\chi_T$ we get that \begin{align}\label{e:88} (f_a * {\chi_{T}})'(x) \ge \min_{[x-a,x+a]} \Phi' -2\varepsilon/a - K(\varepsilon, f')/a^2 \intnorm{\chi_T - \Phi}{a} \end{align} for every $\varepsilon>0$.
We may suppose that $x\ge 0$ as one can deal with the other case similarly.
First let us suppose that $a\ge x/2$, $a\ge 1$. Then we have \begin{equation}\label{e:89} \min_{[x-a,x+a]} \Phi' \ge
\min\left(\frac{c_1}{|x+a|\log^2|x+a|}, \min_{[-2,2]}\Phi'\right) \ge \frac{c_2}{3a \log^2(3a)} \end{equation} for some $c_2>0$. Using (\ref{defB})
and that $h$ is decreasing we get \begin{equation}\label{e:90}
\intnorm{\chi_T - \Phi}{a} \le h(|x-a|)+h(x+a)\le 2h(0). \end{equation} Choosing $\varepsilon=\frac{c_2}{12\log^2(3a)}$ and combining (\ref{e:88}), (\ref{e:89}) and (\ref{e:90}) we obtain \begin{align*} (f_a * {\chi_{T}})'(x) \ge \frac{c_2}{6a \log^2(3a)} - \frac{2h(0)}{a^2} K\left(\frac{c_2}{12\log^2(3a)}, f'\right). \end{align*} Using condition (\ref{1/3}) on the magnitude of $K$ we obtain $$ K\left(\frac{c_2}{12\log^2(3a)}, f'\right) \le C_1\exp(C_2(12\log^2(3a)/c_2)^{1/3}) \le \frac{C_3 a}{\log^2(3a)}$$ for some $C_3$, where the last inequality follows from the fact that the ratio $$\frac{\exp(C_2(12\log^2(3a)/c_2)^{1/3})}{\frac{a}{\log^2(3a)}}$$ tends to $0$ as $a\to\infty$ and is continuous on $[1,\infty)$, so it is bounded on $[1,\infty)$.
Therefore, if we choose $h(0)$ small enough (for example, $h(0)=c_2/(24C_3)$), we have \begin{align*} (f_a * {\chi_{T}})'(x) \ge \frac{c_2}{12a \log^2(3a)} >0 \end{align*} for every $a\ge 1$ and $x\le 2a$.
Now let us suppose that $1\le a<x/2$. For some $c_3>0$ we have \begin{equation}\label{e:91} \min_{[x-a,x+a]} \Phi' \ge \frac{c_3}{2x \log^2(2x)}. \end{equation} Using (\ref{defB})
and that $h$ is decreasing we get \begin{equation}\label{e:92} \intnorm{\chi_T - \Phi}{a} \le h(x-a)+h(x+a)\le 2h(x/2). \end{equation} Choosing $\varepsilon=\frac{a c_3}{8x \log^2(2x)}$ and combining (\ref{e:88}), (\ref{e:91}) and (\ref{e:92}) we obtain \begin{align*} (f_a * {\chi_{T}})'(x) \ge \frac{c_3}{4x \log^2(2x)} - K\left(\frac{a c_3}{8x \log^2(2x)}, f'\right) \frac{2h(x/2)}{a^2}. \end{align*} Since $K(\varepsilon,f)$ is non-increasing in $\varepsilon$ and $a\ge 1$, we get $$K\left(\frac{a c_3}{8x \log^2(2x)}, f'\right) \frac{2h(x/2)}{a^2} \le K\left(\frac{c_3}{8x \log^2(2x)}, f'\right)2h(x/2).$$ Therefore, choosing $h$ such that for every $x\ge 2$ we have $$ h(x/2) \le \left. \frac{c_3}{16x \log^2(2x)}\middle/ K\left(\frac{c_3}{8x \log^2(2x)}, f'\right)\right.,$$ we get that $$(f_a * {\chi_{T}})'(x) \ge \frac{c_3}{8x \log^2(2x)} >0$$ for every $a\ge 1$ and $x>2a$. \end{proof}
\begin{remark} It is not hard to check that one can replace the exponent $-1/3$ by $-(1-\delta)$ for any $\delta>0$ in the condition (\ref{1/3}). To obtain this, the function $\Phi$ in the proof has to be chosen so that
$\Phi'(x)=c_1/(|x|\log^{1+\delta}|x|)$ for $|x|\ge 2$. We omit the details since we will not need this fact. \end{remark}
\begin{defi} We say that $x_0 \in \mathbb{R}$ is a \emph{controlled singularity} of a function $g : \mathbb{R} \to \mathbb{R}$ if $ | g(x )| \le \frac{1}{|x - x_0|^{1-\delta}}$ in a neighbourhood of $x_0$ for some $\delta>0$, and $g$ is monotone on $(x_0 - \varepsilon, x_0)$ and $(x_0, x_0 + \varepsilon)$ for some $\varepsilon > 0$. \end{defi}
\begin{lemma}\label{singularity} If $g$ is measurable, compactly supported, and locally is in $C^1$ except for a finite number of controlled singularities then $$ \text{there exist } C_1, C_2 \text{ such that } \quad K(\varepsilon ,g)\le C_1\exp(C_2\varepsilon^{-1/3}) \quad \text{for every } \varepsilon>0. $$ \end{lemma}
\begin{proof} Let us approximate $g$ by $g_n = \min(n, \max(-n, g))$ for a large enough $n$. An easy computation shows that we need $n = C \varepsilon^{- \frac{1 - \delta}{\delta}}$ to achieve $\norm{g - g_n}_1 < \varepsilon$, and then $\mathop{Var}(g_n) \le C' \varepsilon^{- \frac{1 - \delta}{\delta}}$. Therefore $K(\varepsilon, g ) \le C' \varepsilon^{- \frac{1 - \delta}{\delta}}$ is subexponential and we are done. \end{proof}
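For the reader's convenience, here is a sketch of the computation behind this choice of $n$, under the simplifying assumption of a single controlled singularity at $x_0$ with exponent $\delta$ (the general case is handled singularity by singularity). For large $n$ the set $\{|g|>n\}$ is contained in $\{|x-x_0|<n^{-1/(1-\delta)}\}$, hence
$$ \norm{g-g_n}_1 \le \int_{|x-x_0|\le n^{-1/(1-\delta)}} \frac{dx}{|x-x_0|^{1-\delta}} = \frac{2}{\delta}\, n^{-\delta/(1-\delta)}, $$
which is smaller than $\varepsilon$ once $n\ge C\varepsilon^{-(1-\delta)/\delta}$; moreover, $g_n$ is bounded by $n$, monotone on both sides of $x_0$ near $x_0$, and locally $C^1$ elsewhere on the compact support, so $\mathop{Var}(g_n)\le C'n$.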
\begin{cor} In Theorem~\ref{t2} one can replace (\ref{1/3}) by the condition that $f'$ is locally in $C^1$ except for a finite number of controlled singularities. \end{cor}
\section{Absolute continuity of the Radon transform and reconstruction of a translate of a fixed set} \label{s:transl}
\begin{notation} For a measurable set $E\subset\mathbb{R}^d$ $(d\ge 2)$ of finite Lebesgue measure and a unit vector $\theta\in S^{d-1}$ the \emph{Radon transform in direction $\theta \in S^{d-1}$} is defined as the measure function of the sections of $E$ in direction $\theta \in S^{d-1}$; that is, $$ (R_\theta \chi_E) (r) = \lambda^{d-1} (E \cap \{x \in \mathbb{R}^d : \langle x,\theta \rangle = r\}), $$ where $\langle\cdot, \cdot \rangle$ denotes scalar product. Note that $R_\theta \chi_E$ is almost everywhere well defined. \end{notation}
\begin{theorem}\label{t:abscont} Suppose that $E\subset\mathbb{R}^d$ $(d\ge 2)$ is a bounded measurable set with positive Lebesgue measure, $\theta_1,\ldots,\theta_d\in S^{d-1}$ are linearly independent and the Radon transforms $R_{\theta_1} \chi_E, \ldots, R_{\theta_d} \chi_E$ are absolutely continuous modulo nullsets. Then a translate of $E$ can be reconstructed using $d$ sets. \end{theorem}
\begin{proof} We may assume that the Radon transforms are absolutely continuous, that is, there are no exceptional nullsets, since modifying the functions on nullsets will have no effect on the following argument. By applying Theorem~\ref{t1} to the functions $R_{\theta_1} \chi_E, \ldots, R_{\theta_d} \chi_E$ we get measurable test sets $T_1,\ldots,T_d\subset\mathbb{R}$ such that \begin{equation}\label{intneq} \int_{T_i} (R_{\theta_i} \chi_E) (x-b) \,dx \neq \int_{T_i} (R_{\theta_{i}} \chi_E) (x-b') \,dx \qquad (b\neq b',\ i\in\{1,\ldots,d\}). \end{equation} For each $i$ let \begin{equation}\label{Vi} V_i= \{a\in\mathbb{R}^d\ :\ \langle a,\theta_i \rangle \in T_i \}. \end{equation} One can easily check that $$ \lambda^d((E+v)\cap V_i)= \int_{T_i} (R_{\theta_i} \chi_E) (x-\langle v, \theta_i \rangle) \,dx $$ for any $v\in\mathbb{R}^d$. Combining this with (\ref{intneq}) we get that $\lambda^d((E+v)\cap V_i)$ determines $\langle v, \theta_i \rangle$. Since $\theta_1,\ldots,\theta_d$ are linearly independent, this implies that the numbers $\lambda^d((E+v)\cap V_1),\ldots,\lambda^d((E+v)\cap V_d)$ determine $v$, which completes the proof. \end{proof}
\begin{remark} Since in Theorem~\ref{t1} every test set can be chosen to be a locally finite union of intervals and the test sets of the above proof are defined by (\ref{Vi}), each test set of the above theorem (and of all of its corollaries) can be chosen as a locally finite union of parallel layers, where by layer we mean a rotated image of a set of the form $[a,b]\times \mathbb{R}^{d-1}$. \end{remark}
The above theorem can clearly be applied to many
geometric objects.
\begin{cor} \begin{enumerate} \item A ball of fixed radius in $\mathbb{R}^d$ ($d\ge 1$) can be reconstructed using $d$ sets; that is, for any $r$ there exist measurable sets $T_1, \dots, T_d \subset \mathbb{R}^d$ such that if $x \neq x'$ then $\lambda^d (B(x,r) \cap T_i) \neq \lambda^d (B(x',r) \cap T_i)$ for some $i \in \{ 1, \dots, d \}$. \item Let $E$ be a (not necessarily convex) polytope in $\mathbb{R}^d$ ($d\ge 2$). Then a translate of $E$ can be reconstructed using $d$ test sets. \end{enumerate} \end{cor}
\begin{proof} In Theorem~\ref{t:unitint} we already proved the case $d=1$ of (1).
Now let $d \ge 2$. If $B$ is a fixed ball then $R_\theta \chi_B$ is clearly absolutely continuous for every $\theta$. If $E$ is a polytope in $\mathbb{R}^d$ then $R_\theta \chi_E$ is absolutely continuous for any $\theta$ which is not orthogonal to any face of $E$; since $E$ has only finitely many faces, $d$ linearly independent such directions can be chosen. Therefore in both cases Theorem~\ref{t:abscont} can be applied.
\end{proof}
In the remaining part of this section, in order to apply Theorem~\ref{t:abscont} to a more general set $E\subset\mathbb{R}^d$, we try to find many directions $\theta\in S^{d-1}$ for which $R_\theta \chi_E$ is absolutely continuous modulo a nullset.
To get a general positive result for $d\ge 3$ we use Fourier transforms. Denote the Fourier transform of a function $f$ by $\hat{f}$.
\begin{lemma} \lab{l:weak} Let $f : \mathbb{R} \to \mathbb{R}$ be a compactly supported $L^2$ function. If $\ r\hat{f}(r) \in L^2$ then $f$ is absolutely continuous modulo a nullset and $f' \in L^2$. \end{lemma}
\begin{proof} Recall that an $L^1$ function agrees with an absolutely continuous function almost everywhere if and only if its weak derivative is an $L^1$ function. (Indeed, this is the well-known fact that the Sobolev space $W^{1,1}$ is the class of absolutely continuous functions modulo nullsets, see \cite[Corollary 7.14.]{Le}.)
Therefore it suffices to prove that the function \[ f^* (r) = \widehat{ - 2 \pi i r \hat{f} (-r)} \quad (r \in \mathbb{R}) \] is in $L^1$, it is the weak derivative of $f$, and that $f^* \in L^2$. Clearly, $f^* \in L^2$ follows from the assumption $\ r\hat{f}(r) \in L^2$. Let $\varphi$ be an arbitrary $C^\infty$ function of compact support. Using the Parseval Formula twice as well as $\widehat{\psi'} (r)= 2 \pi i r \hat{\psi} (r)$ and $\hat{\hat{g}} (r) = g(-r)$ we obtain \[ \int_\mathbb{R} f^* \varphi = \int_\mathbb{R} \widehat{-2 \pi i r \hat{f} (-r)} \overline{\overline{\varphi(r)}} \,dr = \int_\mathbb{R} 2 \pi i r \hat{f} (r) \overline{\widehat{\overline{\varphi(r)}}} \,dr = \] \[ - \int_\mathbb{R} \hat{f}(r) \overline{2 \pi i r \widehat{\overline{\varphi(r)}}} \,dr = - \int_\mathbb{R} \hat{f}(r) \overline{\widehat{\overline{\varphi}'(r)}} \,dr = - \int_\mathbb{R} f(r) \overline{\overline{\varphi}'(r)} \,dr = - \int_\mathbb{R} f \varphi', \] which yields that $f^*$ is the weak derivative of $f$. But it is easy to see that the support of the weak derivative of $f$ is contained in $\supp(f)$, hence $f^*$ is a compactly supported $L^2$ function, therefore it is in $L^1$, which concludes the proof. \end{proof}
\begin{lemma}\label{l:fourier} Let $K\subset\mathbb{R}^d$ ($d\ge 2$) be a bounded measurable set of positive Lebesgue measure. Then for almost every $\theta\in S^{d-1}$ we have $$
\int_\mathbb{R} |\widehat{R_\theta \chi_K} (r) |^2 |r|^p \,dr<\infty \textrm{ for any } 0 \le p \le d-1. $$
\end{lemma}
\begin{proof}
By the Plancherel Theorem, $\int_{\mathbb{R}^d} |\hat{{\chi_K}}|^2 = \int_{\mathbb{R}^d} {\chi_K}^2 <\infty$. Therefore, using polar coordinates, for almost every direction $\theta$, \begin{equation} \lab{e:Lp}
\int_\mathbb{R} |\widehat{{\chi_K}} (r\theta) |^2 |r|^{d-1} \,dr<\infty. \end{equation} Fix such a $\theta$. Since $\chi_K\in L^1$, the function $\widehat{{\chi_K}}$ is bounded, so (\ref{e:Lp}) implies that \begin{equation}\label{e:L2}
\int_\mathbb{R} |\widehat{{\chi_K}} (r\theta) |^2 |r|^p \,dr<\infty \end{equation} for any $0 \le p \le d-1$. An easy computation shows the well-known fact that $\widehat{R_\theta \chi_K}(r)=\widehat{{\chi_K}}(r\theta)$, so we are done. \end{proof}
\begin{theorem}\label{t:abscontsections} Let $K\subset\mathbb{R}^d$ ($d\ge 3$) be a bounded measurable set. Then the Radon transform of $\chi_K$ in direction $\theta$, that is, $$ (R_\theta \chi_K) (r) = \lambda^{d-1} (K \cap \{x \in \mathbb{R}^d : \langle x,\theta \rangle = r\}) $$ is absolutely continuous for almost every $\theta\in S^{d-1}$. \end{theorem}
\begin{proof} Since $d\ge 3$ we can apply Lemma~\ref{l:fourier} for $p=2$ to get that $r\widehat{R_\theta \chi_K} (r) \in L^2$ for almost every $\theta\in S^{d-1}$. Hence Lemma \ref{l:weak} applied to $R_\theta \chi_K$
gives that $R_\theta \chi_K$ is absolutely continuous modulo a nullset for almost every $\theta \in S^{d-1}$. By \cite[Corollary 2]{OS}, $R_\theta \chi_K$ is continuous for almost every $\theta$, which completes the proof. \end{proof}
\begin{remark} In fact, we do not need that the functions $R_\theta \chi_K$ are actually continuous for almost every $\theta$. We will only use that they are absolutely continuous modulo nullsets for our applications. \end{remark}
Combining Theorems~\ref{t:abscont}~and~\ref{t:abscontsections} we get the following.
\begin{cor}\label{c:posmeasure} Let $d\ge 3$ and let $E\subset\mathbb{R}^d$ be a bounded set of positive Lebesgue measure. Then a translate of $E$ can be reconstructed using $d$ sets; that is, there are measurable sets $T_1, \dots, T_d \subset \mathbb{R}^d$ such that if $x \neq x'$ then $\lambda^d ((E+x) \cap T_i) \neq \lambda^d ((E+x') \cap T_i)$ for some $i \in \{ 1, \dots, d \}$. \end{cor}
We do not know whether Corollary~\ref{c:posmeasure} holds for $d=1$ and $d=2$. Our method clearly cannot work for $d=1$. The next theorem shows that we cannot obtain Corollary~\ref{c:posmeasure} for $d=2$ the same way, since Theorem~\ref{t:abscontsections} fails in $\mathbb{R}^2$ in the following strong sense.
\begin{theorem} There exists a bounded measurable set $K$ in $\mathbb{R}^2$ such that for every direction $\theta$ the Radon transform in direction $\theta$ does not agree almost everywhere with a continuous function. \end{theorem} \begin{proof} We call a planar set a Besicovitch set if it is measurable and it contains a unit line segment in every direction. It is well known that there exists a compact Besicovitch set of measure zero, let $A$ be such a set. For each $n\ge 1$, let $A_n$ be an open neighbourhood of $A$ of Lebesgue measure at most $1/2^n$. Let $p_i$ be a sequence of points dense in the unit disc. Take $K=\bigcup_{n=1}^\infty A_n+p_n$. Then the measure of $K$ is at most $1$. Since $A$ contains a unit line segment in every direction, for every $\theta$ the Radon transform (the measure function of the sections) $(R_\theta \chi_K)(x)\ge 1$ for $x\in U_\theta$ where $U_\theta$ is a dense open subset of an interval of length $2$. Suppose that $R_\theta \chi_K$ agrees with a continuous function almost everywhere. Then $(R_\theta \chi_K)(x) \ge 1$ on an interval of length $2$ (almost everywhere), thus $\int R_\theta \chi_K\ge 2$, which contradicts the fact that the measure of $K$ is at most $1$. \end{proof}
If we require only the continuity of $R_\theta \chi_K$ then it is enough to assume that the boundary of $K$ has Hausdorff dimension less than $2$. Since we do not need this result, we only sketch the proof.
\begin{theorem}\label{Besicovitch} Let $K$ be a bounded Borel set in $\mathbb{R}^2$ such that $\partial K$ has Hausdorff dimension less than $2$. Then the Radon transform of $\chi_K$ in direction $\theta$ (that is, the measure function of the sections of $K$ in direction $\theta$) is continuous for almost every $\theta$. \end{theorem} \begin{proof} (Sketch) Let ${\overline{K}}$ denote the closure of $K$. If $R_\theta \chi_{\overline{K}} \neq R_\theta \chi_K$, then there exists a line perpendicular to $\theta$ which intersects $\partial K$ in a set of positive (one-dimensional Lebesgue) measure.
Since $\overline{K}$ is compact, $R_\theta \chi_{\overline{K}}$ is easily seen to be upper semi-continuous for every $\theta$.
If $\partial K$ has zero one-dimensional Lebesgue measure on the line $\{x\in\mathbb{R}^2 : \langle x,\theta \rangle=a\}$, then $R_\theta \chi_K$ is lower semi-continuous at $a$.
From these observations it follows that if $R_\theta \chi_K$ is not continuous, then there exists a line perpendicular to $\theta$ which intersects $\partial K$ in a set of positive (one-dimensional Lebesgue) measure.
Now suppose to the contrary that $R_\theta \chi_K$ is not continuous in positively many directions. Then there are positively many directions in which there are lines which intersect $\partial K$ in a set of positive (one-dimensional Lebesgue) measure. It is well known that this implies that $\partial K$ has Hausdorff dimension $2$. (This is a slight generalization of the fact that every planar Besicovitch set must have Hausdorff dimension $2$, cf. \cite[Proposition 1.5 and Lemma 1.6]{Wo}.) \end{proof}
For absolute continuity of $R_\theta \chi_K$ we need to assume more. Recall the definition of Hausdorff measure (and dimension) from \cite{Fa} or \cite{Ma} and also that by the length of a set $A$ we mean $\mathcal{H}^1(A)$, that is, the 1-dimensional Hausdorff measure of $A$. A set is rectifiable if it can be covered by countably many Lipschitz curves and an $\mathcal{H}^1$-null set. A set is purely unrectifiable if it intersects every rectifiable set (equivalently, every Lipschitz curve) in an $\mathcal{H}^1$-null set.
\begin{theorem}\label{t:rectbound} Let $K$ be a compact set in $\mathbb{R}^2$ with rectifiable boundary of finite length. Then $R_\theta \chi_K$ is absolutely continuous for all but countably many $\theta$. \end{theorem}
\begin{proof} Let $\mu$ denote the $1$-dimensional Hausdorff measure restricted to $\partial K$, and let $\mu_\theta$ denote the projection of $\mu$ to the line in direction $\theta$.
\begin{lemma}\label{abscontlength_new} For any interval $[x,y]$ and direction $\theta$ we have $$ \abs{(R_\theta \chi_K)(y) - (R_\theta \chi_K)(x)} \le \mu_\theta([x,y]). $$ \end{lemma}
\begin{proof} We can clearly assume that $\theta$ is horizontal, so that the sections $\{a\in\mathbb{R}^2 : \langle a,\theta \rangle = r\}$ are vertical lines. Then $\abs{(R_\theta \chi_K)(y) - (R_\theta \chi_K)(x)}$ is the difference of the measures of $(\{x\}\times \mathbb{R}) \cap K$ and $(\{y\}\times \mathbb{R}) \cap K$ (two vertical lines intersected with $K$). Clearly, $\partial K$ must intersect those horizontal segments $[(x, t), (y, t)]$ for which $(x, t) \in (\{x\}\times \mathbb{R}) \cap K$ but $(y, t) \notin (\{y\}\times \mathbb{R}) \cap K$ or vice versa. The measure of these $t$ is at least $\abs{(R_\theta \chi_K)(y) - (R_\theta \chi_K)(x)}$, thus the projection of $([x, y]\times \mathbb{R}) \cap \partial K$ on the vertical axis has Lebesgue measure at least $\abs{(R_\theta \chi_K)(y) -
(R_\theta \chi_K)(x)}$. Since projections do not increase Hausdorff measure and Lebesgue measure on a line agrees with the $1$-dimensional Hausdorff measure, this implies that $$ \abs{(R_\theta \chi_K)(y) - (R_\theta \chi_K)(x)} \le \mathcal{H}^1(([x,y]\times \mathbb{R})\cap \partial K) = \mu ([x,y]\times\mathbb{R})= \mu_\theta([x,y]). $$ \end{proof}
\begin{lemma}\label{pu222} For every direction $\theta$, if $\mu_\theta$ is absolutely continuous with respect to the Lebesgue measure then the function $R_\theta \chi_K$ is absolutely continuous. \end{lemma} \begin{proof} Suppose $\theta$ is such that $\mu_\theta$ is absolutely continuous with respect to the Lebesgue measure. Since $\partial K$ has finite length, $\mu$ and $\mu_\theta$ are finite measures. Therefore the Radon--Nikodym derivative of $\mu_\theta$ is in $L^1$, and thus the function $x\mapsto \mu_\theta((-\infty, x])$ is absolutely continuous. Recall that a real function $f$ is absolutely continuous if and only if for every $\varepsilon>0$ there exists $\delta>0$ such that for every finite system of disjoint intervals $[x_j, y_j]$ satisfying $\sum_j \abs{y_j -x_j} < \delta$ we have $\sum_j \abs{f(y_j) - f(x_j)} <\varepsilon$. Thus, by Lemma~\ref{abscontlength_new}, the absolute continuity of the function $x\mapsto \mu_\theta((-\infty, x])$
implies the absolute continuity of the function $R_\theta \chi_K$. \end{proof} To finish the proof of Theorem~\ref{t:rectbound} we have to show that $\mu_\theta$ is absolutely continuous for all but countably many $\theta$. Suppose to the contrary that for uncountably many $\theta$, there are Borel sets $A_\theta \subset\partial K$ with $\mathcal{H}^1(A_\theta)>0$, such that the projection of $A_\theta$ to the line in direction $\theta$ has Lebesgue measure zero. Then there is an $\varepsilon>0$ such that $\mathcal{H}^1(A_\theta)>\varepsilon$ for uncountably many $\theta$. As $\mathcal{H}^1(\partial K)<\infty$, there are $\theta$ and $\theta'$ such that $\mathcal{H}^1(A_\theta \cap A_{\theta'})>0$. Therefore $A_\theta \cap A_{\theta'}$ is a rectifiable set of positive length, and there are two directions in which the projection has Lebesgue measure zero. It is well-known (see e.g. in \cite[18.10 (4)]{Ma}) that this is impossible. \end{proof}
\begin{remark} The condition that the boundary of $K$ has finite length cannot be omitted in Theorem~\ref{t:rectbound}. There exists a compact set with rectifiable boundary of $\sigma$-finite length so that for \emph{every} direction the Radon transform of its characteristic function is not of bounded variation, hence not absolutely continuous.
We sketch the random construction.
Let $A_N$ be the random compact set we obtain by decomposing the unit square into $N\times N$ many squares of side-length $1/N$ and keeping each of them independently with probability $1/2$. It can be shown that the total variation of $R_\theta \chi_{A_N}$ is at least $N^{1/2-\varepsilon}$ for every $\theta$, with probability tending to $1$ as $N\to \infty$.
Now let $$A=\{(0,0)\} \cup \bigcup_{k=0}^\infty \left(\alpha_k A_{N_k} + \frac{1}{2^k}\right), $$ where $\alpha_k\to 0$ and $N_k\to\infty$ sufficiently rapidly. Then $A$ is compact and $\partial A$ is rectifiable of $\sigma$-finite length. It can be shown that, with probability $1$, $R_\theta \chi_{A}$ is not of bounded variation for any $\theta$.
\end{remark}
\begin{cor}\label{c:rectbound} Let $E$ be a bounded measurable set in $\mathbb{R}^2$ with positive Lebesgue measure and rectifiable boundary of finite length. Then a translate of $E$ can be reconstructed using $2$ test sets. \end{cor}
\begin{proof} We apply Theorem~\ref{t:rectbound} to $K=\overline E$. Since $K\setminus E \subset \partial E$ has Lebesgue measure zero, $R_\theta \chi_E$ agrees almost everywhere with $R_\theta \chi_K$ for every $\theta$, so Theorem~\ref{t:abscont} can be applied. \end{proof}
From Theorem~\ref{t:finiteint} and Corollaries~\ref{c:posmeasure}~and~\ref{c:rectbound} we get the following in any dimension.
\begin{cor} A translate of a fixed finite union of bounded convex sets in $\mathbb{R}^d$ ($d=1,2,\ldots$) can be reconstructed using $d$ test sets. \end{cor}
\section{Reconstruction of a magnified copy of a fixed set} \label{s:magn}
The first part of this section is analogous to the first part of the previous section but here the results follow from Theorem~\ref{t2} instead of Theorem~\ref{t1}.
\begin{theorem}\label{t:magn} Let $E\subset\mathbb{R}^d$ $(d\ge 2)$ be a bounded measurable set with positive Lebesgue measure. Suppose that $\theta_1,\ldots,\theta_d\in S^{d-1}$ are linearly independent such that for each $i=1,\ldots,d$ the Radon transform of $\chi_E$ in direction $\theta_i$ is absolutely continuous modulo a nullset and
there exist $C_1$, $C_2$ such that $K(\varepsilon ,(R_{\theta_i} \chi_E)')\le C_1\exp(C_2\varepsilon^{-1/3})$ for every $\varepsilon>0$.
Then a set of the form $rE+x$, where $r\ge 1$ and $x\in\mathbb{R}^d$, can be reconstructed using $d+1$ test sets.
\end{theorem}
\begin{proof} As in the proof of Theorem~\ref{t:abscont}, we may assume that the Radon transforms are actually absolutely continuous. By applying Theorem~\ref{t2} to the functions $R_{\theta_1} \chi_E,\ldots,R_{\theta_d} \chi_E$ we get measurable sets $T_1,\ldots,T_d\subset\mathbb{R}$ such that for each $i$ and $r\ge 1$ the integral $\int_{T_i} (R_{\theta_i} \chi_E)(\frac{x}{r}-b) \,dx$ determines $b$. For each $i$ let $V_i= \{a\in\mathbb{R}^d\ :\ \langle a,\theta_i \rangle \in T_i \}$. One can easily check that $$ \lambda^d((rE+v)\cap V_i)= r^{d-1} \int_{T_i} (R_{\theta_i} \chi_E)\left(\frac{x-\langle v, \theta_i \rangle}{r}\right) \,dx $$ for any $v\in\mathbb{R}^d$.
Therefore, for any fixed $r\ge 1$, the numbers $\lambda^d((rE+v)\cap V_1),\ldots,\lambda^d((rE+v)\cap V_d)$ determine $v$. Let $V_{d+1}=\mathbb{R}^d$. Then $\lambda^d((rE+v)\cap V_{d+1})=r^d\lambda^d(E)$ clearly determines $r$, which completes the proof. \end{proof}
\begin{remark} Since in Theorem~\ref{t2} the test set that determines $b$ can be chosen to be a locally finite union of intervals, we obtain that each of the first $d$ test sets of the above theorem (and of all of its corollaries) can be chosen as a locally finite union of parallel layers (where by layer we mean a rotated image of a set of the form $[a,b]\times \mathbb{R}^{d-1}$) and one test set as $\mathbb{R}^d$. \end{remark}
The above theorem can clearly be applied to many geometric objects.
\begin{cor}\label{c:magnified} \begin{enumerate} \item\label{largeball} A ball of radius at least $1$ in $\mathbb{R}^d$ ($d\ge 2$) can be reconstructed using $d+1$ sets; that is, there are measurable sets $T_1, \dots, T_{d+1} \subset \mathbb{R}^d$ such that if $(x,r) \neq (x', r')$, $x, x' \in \mathbb{R}^d$, $r, r' \ge 1$ then $\lambda^d (B(x,r) \cap T_i) \neq \lambda^d (B(x',r') \cap T_i)$ for some $i \in \{ 1, \dots, d+1 \}$. \item\label{polytope} Let $E$ be a (not necessarily convex) polytope in $\mathbb{R}^d$ ($d\ge 2$). Then a magnified copy $rE+x$, where $r\ge 1$ and $x\in \mathbb{R}^d$ can be reconstructed using $d+1$ test sets. \end{enumerate} \end{cor}
\begin{proof} It is easy to check that the assumptions of Lemma~\ref{singularity} hold for $(R_\theta \chi_E)'$ if $E$ is the unit ball or if $E$ is a polytope and $\theta$ is not orthogonal to any of its faces. Thus we can apply Theorem~\ref{t:magn} to $E$ in both cases. \end{proof}
\begin{remark} By Theorem~\ref{t:interval} the above corollary does not hold for $d=1$. \end{remark}
In the remaining part of this section we check the condition of Theorem~\ref{t:magn} for more general sets. For the most general theorem we need the following result concerning the Radon transforms, which we can only prove for $d\ge 4$.
\begin{theorem}\label{t:sectionsformagn} If $d\ge 4$ then for any bounded measurable set $E\subset\mathbb{R}^d$ the Radon transform $R_\theta \chi_E$ (that is, the measure function of the sections of $E$ in direction $\theta$) is absolutely continuous for almost every $\theta\in S^{d-1}$, and $K(\varepsilon, (R_\theta \chi_E)')\le C_\theta\varepsilon^{-2}$, where $C_\theta$ depends only on $E$ and $\theta$. \end{theorem}
\begin{proof} By Theorem~\ref{t:abscontsections}, $R_\theta \chi_E$ is absolutely continuous for almost every $\theta$. Applying Lemma~\ref{l:fourier} for $p=3$ and $p=2$ we get that for almost every $\theta$, \begin{equation}\label{e:cubic}
\int |x|^3 |\widehat{R_\theta \chi_E}|^2(x) <\infty \qquad \text{and} \qquad
\int |x|^2 |\widehat{R_\theta \chi_E}|^2(x) <\infty. \end{equation} Fix such a $\theta$ and put $f=R_\theta \chi_E$. Thus, denoting the weak derivative of $f$ by $f'$, \begin{equation}\label{e:deriv}
\int |x| |\widehat{f'}|^2(x) <\infty \qquad \text{and} \qquad
\int |\widehat{f'}|^2 <\infty. \end{equation} We may assume that $f$ (and thus $f'$) is supported in $[0,1]$. Note that $f'$ is in $L^1 \cap L^2$. Let $g_R(x)=(\chi_{[-R,R]} \widehat{f'})\widehat{\;}\,(-x)$. Thus $g_R$ is $C^\infty$ and $\widehat{g_R}=\chi_{[-R,R]} \widehat{f'}$. We will approximate $f'$ by $\chi_{[0,1]}g_R$ to get a bound on $K(\varepsilon, f')$.
We have \begin{align}\label{L1benk} \norm{\chi_{[0,1]}g_R - f'}_1 & = \norm{\chi_{[0,1]}(g_R - f')}_1 \le \norm{\chi_{[0,1]}(g_R - f')}_2 \le \norm{g_R - f'}_2 \nonumber \\ & = \norm{\widehat{g_R} - \widehat{f'}}_2 = \norm{(1-\chi_{[-R,R]}) \widehat{f'}}_2 \\
& \le \norm{|x|^{1/2} R^{-1/2} \widehat{f'} (x) }_2 = R^{-1/2} \left(\int |x| |\widehat{f'}|^2(x) \,dx \right)^{1/2}. \nonumber \end{align} We have to bound the total variation of $\chi_{[0,1]}g_R$. We will do this in two steps. First, \begin{align}\label{boundvariation1} \norm{\chi_{[0,1]} g_R'}_1 & \le \norm{\chi_{[0,1]} g_R'}_2 \le \norm{g_R'}_2 = \norm{\widehat{g_R'}}_2 = 2\pi \norm{x\widehat{g_R} (x) }_2 \le 2\pi \norm{x\chi_{[-R,R]}
\widehat{f'}(x) }_2 \\
& \le 2\pi \norm{|x|^{1/2} R^{1/2} \widehat{f'} (x) }_2 = 2\pi R^{1/2} \left(\int |x| |\widehat{f'}|^2(x) \,dx \right)^{1/2}. \nonumber \end{align} Second, \begin{align}\label{boundvariation2} \norm{g_R}_\infty & \le \norm{\widehat{g_R}}_1 = \norm{\chi_{[-R,R]} \widehat{f'}}_1 \le 2R \norm{\widehat{f'}}_\infty \le 2R \norm{f'}_1 \\ & \le 2R \norm{f'}_2 = 2R \norm{\widehat{f'}}_2 = 4\pi R
\norm{x\widehat{f} (x) }_2 \le 4\pi R \left( \norm{|x|^{3/2} \widehat{f} (x) }_2^2 + \norm{\hat{f}}^2_2 \right)^{1/2}, \nonumber \end{align} where in the last inequality we used that $x^2\le 1$ on $[-1,1]$ and
$x^2\le |x|^3$ outside $[-1,1]$.
Combining \eqref{e:cubic}, \eqref{e:deriv}, \eqref{boundvariation1} and \eqref{boundvariation2} gives that the total variation of $\chi_{[0,1]}g_R$ is at most $$2 \norm{g_R}_\infty + \norm{\chi_{[0,1]} g_R'}_1 \le c_1 R$$ for some finite positive constant $c_1$ (depending on $f$). Comparing this to \eqref{L1benk} gives that $$K(c_2 R^{-1/2}, f')\le c_1R$$ for some $c_2>0$, and thus $K(\varepsilon, f') \le C \varepsilon^{-2}$. \end{proof}
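Spelled out, the last step is the following: given $\varepsilon>0$, applying the bound $K(c_2R^{-1/2},f')\le c_1R$ with $R=(c_2/\varepsilon)^2$ gives
$$ K(\varepsilon,f') \le c_1c_2^{2}\,\varepsilon^{-2}, $$
so the bound $K(\varepsilon, (R_\theta \chi_E)')\le C_\theta\varepsilon^{-2}$ follows with a constant depending only on $E$ and $\theta$.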
\begin{remark} The proof of the previous theorem is simpler for $d\ge 5$. In these dimensions we obtain from Lemma~\ref{l:fourier} that $r^2 \widehat{R_\theta \chi_E}(r) \in L^2$ and $r \widehat{R_\theta \chi_E}(r) \in L^2$ for almost every $\theta$. Then by Lemma \ref{l:weak}, $R_\theta \chi_E$ is absolutely continuous (we may ignore the nullset) and $(R_\theta \chi_E)' \in L^2$. It is easy to see that the usual proof of the formula $\widehat{f'} (r) = 2 \pi i r \hat{f}$ works if we only assume that $f$ is absolutely continuous modulo a nullset. Hence $r((R_\theta \chi_E)')\widehat{\;} (r) = r(2 \pi i r \widehat{R_\theta \chi_E}) \in L^2$, so a second application of Lemma \ref{l:weak} yields that $(R_\theta \chi_E)'$ is absolutely continuous (ignoring the nullset again). Therefore $K(\varepsilon, (R_\theta \chi_E)') \le \mathop{Var}((R_\theta \chi_E)')$ is bounded in this case. \end{remark}
Theorems~\ref{t:magn}~and~\ref{t:sectionsformagn} immediately imply the following.
\begin{cor}\label{c:genmagn} Let $d\ge 4$ and let $E\subset\mathbb{R}^d$ be a bounded set of positive Lebesgue measure. Then a set of the form $rE+x$, where $r\ge 1$ and $x\in\mathbb{R}^d$, can be reconstructed using $d+1$ sets; that is, there are measurable sets $T_1, \dots, T_{d+1} \subset \mathbb{R}^d$ such that if $(x,r) \neq (x', r')$, $x,x' \in \mathbb{R}^d$, $r, r' \ge 1$ then $\lambda^d ((rE+x) \cap T_i) \neq \lambda^d ((r'E+x') \cap T_i)$ for some $i \in \{ 1, \dots, d+1 \}$. \end{cor}
In the remaining part of this section we show that for convex $E$ the above result also holds for $d\ge 2$.
\begin{lemma}\label{brunn} Let $d\ge 2$ and let $E\subset \mathbb{R}^d$ be a bounded convex set of non-empty interior. Then for every direction $\theta$ the function $(R_\theta \chi_E)^{1/(d-1)}$ is concave on its support, where $R_\theta \chi_E$ denotes the Radon transform of $\chi_E$ in direction $\theta$. \end{lemma}
\begin{proof} Let $\theta$ be an arbitrary direction. Let $E_x = \{a\in E : \sk{a,\theta}=x\}$ for $x\in\mathbb{R}$. By convexity of $E$ we have \begin{equation}\label{convexity} (1-t) E_x + t E_y \subset E_{(1-t)x+ty}, \end{equation} where $0\le t\le 1$. Applying the Brunn--Minkowski inequality for $(d-1)$-dimensional convex bodies gives $$\lambda^{d-1}((1-t) E_x + t E_y)^{1/(d-1)} \ge (1-t)\lambda^{d-1}(E_x)^{1/(d-1)} + t\lambda^{d-1}(E_y)^{1/(d-1)},$$ supposing that both $E_x$ and $E_y$ are non-empty. Combining this with \eqref{convexity} gives $$\lambda^{d-1}(E_{(1-t)x+ty})^{1/(d-1)} \ge (1-t)\lambda^{d-1}(E_x)^{1/(d-1)} + t\lambda^{d-1}(E_y)^{1/(d-1)}.$$ That is, $(R_\theta \chi_E)((1-t)x+ty )^{1/(d-1)} \ge (1-t) (R_\theta \chi_E)(x )^{1/(d-1)} + t (R_\theta \chi_E)(y)^{1/(d-1)}$ whenever $x$ and $y$ are in the support of $R_\theta \chi_E$. \end{proof}
\begin{lemma}\label{konvex1irany} Let $d\ge 2$ and let $E\subset \mathbb{R}^d$ be a bounded convex set of non-empty interior. Let $x,y\in \overline{E}$ have maximal distance among all pairs of points of the closure $\overline{E}$ of $E$, and let $\theta$ be the direction of $xy$. Then $R_\theta \chi_E$ is absolutely continuous and satisfies $K(\varepsilon, (R_\theta \chi_E)') = O(1/\varepsilon^{1/(d-1)})$. \end{lemma} \begin{proof} We may suppose without loss of generality that the distance of $x$ and $y$ is $1$, and the support of the Radon transform $f = R_\theta \chi_E$ is $[0,1]$. The set $E$ is contained in the balls of unit radius centered at $x$ and $y$. This implies that \begin{equation}\label{i1} f(t)\le C t^{(d-1)/2} \text{ and } f(1-t) \le C t^{(d-1)/2} \qquad (0\le t\le 1). \end{equation}
Lemma~\ref{brunn} implies that $g=f^{1/(d-1)}$ is concave on its support. Let $g'$ denote the everywhere existing right-hand derivative of $g$, which is also a weak derivative of $g$. Concavity and $g(0)=g(1)=0$ imply that \begin{equation}\label{i2} g'(t)\le \frac{g(t)}{t} \text{ and }g'(1-t)\ge -g(1-t)/t \qquad (0<t\le 1). \end{equation} If we combine this with \eqref{i1} we obtain \begin{equation}\label{i2.5}
|g'(t)| \le \frac{C'}{\sqrt{t}} \text{ \ and \ } |g'(1-t)| \le \frac{C'}{\sqrt{t}} \qquad (0<t<1). \end{equation}
The formula $$f'(t) = (g^{d-1})'(t) = (d-1) g^{d-2}(t) g'(t) \qquad (0\le t\le 1)$$ implies that $f$ has an everywhere existing right-hand derivative, which we denote by $f'$. Clearly $f'$ is also a weak derivative of $f$. Thus by \eqref{i2}, \begin{equation}\label{i4} f'(t) \le (d-1) g^{d-2}(t) \frac{g(t)}{t} \le (d-1) \frac{f(t)}{t} \qquad (0<t\le 1). \end{equation}
Let us fix a small $\varepsilon>0$. Let $h:\mathbb{R} \to \mathbb{R}$ be defined as $h(x)=f'(x)$ if $\varepsilon\le x \le 1-\varepsilon$, and $h(x)=0$ otherwise. The function $g$ is concave, nonnegative, $g(0)=g(1)=0$, so $g$ is monotone in $[0,\varepsilon]$ and in $[1-\varepsilon,1]$ if $\varepsilon$ is small enough. Hence $f=g^{d-1}$ is also monotone in the same intervals, thus
$$\int_0^\varepsilon |f'(t)| + \int_{1-\varepsilon}^1 |f'(t)| = f(\varepsilon) + f(1-\varepsilon).$$ Using this and \eqref{i1} we obtain \begin{equation}\label{i5} \norm{f'-h}_1 = f(\varepsilon) + f(1-\varepsilon) \le 2C \varepsilon^{(d-1)/2}. \end{equation}
We have to give an upper bound for $\mathop{Var}(h)$. We will use the inequality
$$\mathop{Var}(f_1 f_2)\le \mathop{Var}(f_1)\sup |f_2| + \mathop{Var}(f_2)\sup |f_1|.$$
Writing $m$ for $\max_{[0,1]} g$, \begin{align} Var_{[\varepsilon,1-\varepsilon]}(f') & = Var_{[\varepsilon,1-\varepsilon]}((d-1) g^{d-2} g') \nonumber \\
& \le (d-1) \left( Var_{[\varepsilon,1-\varepsilon]}(g') m^{d-2}
+ Var_{[\varepsilon,1-\varepsilon]}(g^{d-2}) \max_{[\varepsilon,1-\varepsilon]}|g'| \right) \nonumber \\ & \le (d-1) \left( Var_{[\varepsilon,1-\varepsilon]}(g') m^{d-2}
+ 2m^{d-2} \max_{[\varepsilon,1-\varepsilon]}|g'| \right) \label{i6} \end{align} where $Var_{[0,1]}(g^{d-2}) = 2m^{d-2}$ (if $d\ge 3$) follows from $g$ being concave. Note that \eqref{i6} holds for $d=2$ as well. As $g'$ is nonincreasing,
$Var_{[\varepsilon,1-\varepsilon]}(g') = |g'(1-\varepsilon)-g'(\varepsilon)|=|g'(\varepsilon)| + |g'(1-\varepsilon)|$
and $\max_{[\varepsilon,1-\varepsilon]}|g'|=\max(|g'(\varepsilon)|,|g'(1-\varepsilon)|)$. Therefore \eqref{i6} and \eqref{i2.5} imply \begin{align*} Var_{[\varepsilon,1-\varepsilon]}(f') & \le (d-1) 4C' m^{d-2}/\sqrt{\varepsilon}. \end{align*}
Thus \begin{align} \mathop{Var}(h) & = \abs{h(\varepsilon)}+ \abs{h(1-\varepsilon)} + Var_{[\varepsilon,1-\varepsilon]}(h) \nonumber\\
& = |f'(\varepsilon)| + |f'(1-\varepsilon)| + Var_{[\varepsilon, 1-\varepsilon]}(f') \nonumber \\
& \le |f'(\varepsilon)| + |f'(1-\varepsilon)| + (d-1) 4C' m^{d-2}/\sqrt{\varepsilon}. \label{i6.6} \end{align}
Using \eqref{i4}, \eqref{i1} and $d\ge 2$ we get that both $|f'(\varepsilon)|$ and $|f'(1-\varepsilon)|$ are at most $(d-1)C\varepsilon^{(d-3)/2} \le (d-1)C/\sqrt{\varepsilon}$.
Using this, \eqref{i6.6} and \eqref{i1} we obtain \begin{align}\label{i7}
\mathop{Var}(h) & \le |f'(\varepsilon)| + |f'(1-\varepsilon)| + (d-1) 4C' m^{d-2}/\sqrt{\varepsilon} \le C'' /\sqrt{\varepsilon}, \end{align} where $C''$ depends on $m$, but not on $\varepsilon$. Combining \eqref{i5} and \eqref{i7} and setting $\delta = 2C \varepsilon^{(d-1)/2}$ give that $K(\delta, f') \le C''' \delta^{-1/(d-1)}$ if $\delta>0$ is small enough. \end{proof}
\begin{theorem}\label{konvexsuru} Let $d\ge 2$ and let $E\subset \mathbb{R}^d$ be a bounded convex set of non-empty interior. There exists a dense set of directions $\theta$ for which the Radon transforms $R_\theta \chi_E$ are absolutely continuous and satisfy $K(\varepsilon, (R_\theta \chi_E)')=O(1/\varepsilon^{1/(d-1)})$. \end{theorem}
\begin{proof} We will find an appropriate $\theta$ arbitrarily close to the vertical direction. Since the coordinate system, and hence the vertical direction, can be chosen arbitrarily, this will prove the theorem. Let $\delta>0$ be small. Let $\Phi$ be the linear transformation which maps $(x_1, \ldots, x_d)$ to $(\delta x_1, \ldots, \delta x_{d-1}, x_d)$. (We call the $x_d$ coordinate direction vertical.) Let $E_\delta = \Phi(E)$.
Suppose that the projection of $E$ to the vertical axis has diameter $1$. Let $x,y\in \overline{E_\delta}$ be points which have maximal distance among all pairs of points of $\overline{E_\delta}$. Their distance is at least $1$. For some constant $C$ (depending on $E$ only), $E_\delta$ is contained in a right circular cylinder in vertical position of radius $C\delta$ and height $1$. This implies that the distance of the direction of $xy$ to the vertical direction is at most $C'\delta$.
Let us apply Lemma~\ref{konvex1irany} to $E_\delta$. We obtain a direction $\theta$ which is $C'\delta$ close to vertical such that the Radon transform $R_\theta \chi_{E_\delta}$ has the right properties. Consider $E_\delta$ and the hyperplanes which are orthogonal to $\theta$. If we apply $\Phi^{-1}$ to them, we get $E$ and the new hyperplanes will be orthogonal to a direction which is $C''\delta^2$ close to vertical---in fact, they are orthogonal to $\Phi^*(\theta)=\Phi(\theta)$ as $\Phi$ is self-adjoint. Since $\Phi$ is a linear map, the Radon transform $R_{\Phi(\theta)} \chi_E$ can be obtained from $R_\theta \chi_{E_\delta}$ by an affine transformation, that is, $$ (R_{\Phi(\theta)} \chi_E) (x) =c (R_\theta \chi_{E_\delta}) (ax+b)$$ for some $a\neq 0$, $c>0$, $b\in \mathbb{R}$. Therefore $R_{\Phi(\theta)} \chi_E$ is also absolutely continuous, and satisfies $K(\varepsilon, (R_{\Phi(\theta)} \chi_E)') = O(1/\varepsilon^{1/(d-1)})$. \end{proof}
By combining Theorems~\ref{t:magn}~and~\ref{konvexsuru} we get the following.
\begin{cor}\label{c:convexmagn} Let $d\ge 2$ and let $E\subset\mathbb{R}^d$ be a bounded convex set with nonempty interior. Then a set of the form $rE+x$, where $r\ge 1$ and $x\in\mathbb{R}^d$, can be reconstructed using $d+1$ sets. \end{cor}
Note that by Theorem~\ref{t:interval} the above result fails for $d=1$.
\section{A general positive result for families with $k$ degrees of freedom}\label{freedom} In this section we prove that nice geometric objects with $k$ degrees of freedom can be reconstructed using $2k+1$ measurable sets. We also show that this result is sharp.
\begin{notation} We denote the complete metric space of non-empty compact sets of $\mathbb{R}^d$ with the Hausdorff metric by $(\mathcal{K}(\mathbb{R}^d),d_H)$.
In any metric space, let $B(A,\delta)$ denote the open $\delta$-neighborhood of the set $A$.
We recall the definition of the upper box dimension (upper Minkowski dimension) and the packing dimension in a metric space $X$. The upper box dimension of a bounded set $A\subset X$ is $$ \overline{\dim}_B(A) = \inf\{ s: \limsup_{\varepsilon\to 0} N(A,\varepsilon) \varepsilon^s = 0\}, $$ where $N(A,\varepsilon)$ is the smallest number of $\varepsilon$-balls in $X$ needed to cover $A$. Recall that in $\mathbb{R}^d$ this is the same as the upper Minkowski dimension (see e.g. in \cite{Ma}); that is, $$ \overline{\dim}_B(A) = \overline{\dim}_M(A) = \inf\{ s: \limsup_{\varepsilon\to 0} \lambda_d(B(A,\varepsilon))\varepsilon^{s-d} = 0\}. $$ The packing dimension (or modified upper box dimension in \cite{Fa}) of $A\subset X$ is given by $$ \dim_P(A) = \inf\left\{\sup_i \overline{\dim}_B(A_i)\ :\ A_i \textrm{ is bounded and } A\subset \cup_{i=1}^\infty A_i \right\}. $$ (Alternatively, the packing dimension may be defined in terms of the radius based packing measures, see \cite{Cu}.) \end{notation}
\begin{theorem}\label{special case} Let $\mathcal{C}$ be a collection of compact subsets in $\mathbb{R}^d$. Suppose that ${\dim}_P\, \mathcal{C} \le k$, $k\in\{1,2,\ldots\}$ and for every $K\in\mathcal{C}$,
$K=\overline{\text{int}\,K}$ and $\overline{\dim}_B \partial K = d-1$. Then an element of \ $ \mathcal{C}$ can be reconstructed using $2k+1$ test sets. \end{theorem}
\begin{remark} Example~\ref{e:2k} shows that this theorem is sharp in the sense that $2k+1$ cannot be replaced by $2k$. \end{remark}
Before proving the theorem we state some corollaries. In applications, the condition ${\dim}_P\, \mathcal{C} \le k$ is guaranteed by obtaining $\mathcal{C}$ as a $k$-parameter family of compact subsets of $\mathbb{R}^d$. More precisely, $\mathcal{C}$ will always be covered by finitely many sets of the form $f(G)$, where $G\subset\mathbb{R}^k$ is open and $f:G\to\mathcal{K}(\mathbb{R}^d)$ is Lipschitz. This clearly implies ${\dim}_P\, \mathcal{C} \le k$. Using this observation one can immediately apply Theorem~\ref{special case} for any natural collection of geometric objects with finitely many parameters by counting the number of parameters. We illustrate this by the following list of applications. The reader can easily extend this list.
\begin{cor}\label{c:concrete} \begin{enumerate} \item \label{5enough}
An interval in $\mathbb{R}$ can be reconstructed using $5$ test sets. \item \label{ball}
A ball in $\mathbb{R}^d$ can be reconstructed using $2d+3$ test sets. \item An $n$-gon in $\mathbb{R}^2$ can be reconstructed using $4n+1$ test sets. \item An axis-parallel rectangle in $\mathbb{R}^2$ can be reconstructed using $9$ test sets. \item An ellipsoid in $\mathbb{R}^3$ can be reconstructed using $19$ test sets. \item A simplex in $\mathbb{R}^d$ can be reconstructed using $2d^2+2d+1$ test sets. \end{enumerate} \end{cor}
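In each case the bound is $2k+1$, where $k$ is the number of real parameters in the natural parametrization of the family: an interval in $\mathbb{R}$ is given by its two endpoints ($k=2$, bound $5$); a ball in $\mathbb{R}^d$ by its centre and radius ($k=d+1$, bound $2d+3$); an $n$-gon in $\mathbb{R}^2$ by its $n$ vertices ($k=2n$, bound $4n+1$); and a simplex in $\mathbb{R}^d$ by its $d+1$ vertices ($k=d(d+1)$, bound $2d^2+2d+1$).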
Instead of Theorem~\ref{special case} we prove the following even more general statement.
\begin{theorem}\label{randomkonstr} Let $\mathcal{C}\subset \mathcal{K}(\mathbb{R}^d)$ be such that ${\dim}_P\, \mathcal{C} <\infty $. Suppose that $K=\overline{\text{int}\,K}$ and that $\overline{\dim}_B \partial K \le b <d$ for every $K\in\mathcal{C}$. Then an element of $\mathcal{C}$ can be reconstructed using $r=\left\lfloor \frac{2 \,{\dim}_P \mathcal{C}}{d-b}\right\rfloor + 1$ test sets. \end{theorem}
\begin{proof} We define a random set $A$ and we show that a set $K\in\mathcal{C}$ can be reconstructed using $r$ independent copies of $A$.
Let $1 > p_1>p_2>\cdots$ be a fast decreasing sequence of reals such that $\sum p_i<\infty$. Let $(n_i)$ be an increasing sequence of powers of $2$ tending to $\infty$ sufficiently fast. Let us also assume that $n_{i-1}$ divides $\log_2 n_i$ and that $\log_2 n_i$ divides $n_i$ for each $i$; these conditions automatically hold if $n_i=2^{2^{l_i}}$ and $(l_i)$ is a sufficiently fast increasing sequence of integers.
For each $i$ we take the grids of cubes $\mathcal{J}_i=\{(v+[0,1)^d)/n_i\ : v\in\mathbb{Z}^d\}$ and $\mathcal{D}_i=\{(v+[0,1)^d)/\log_2 n_i\ : v\in\mathbb{Z}^d\}$. Since $n_{i-1}$ divides $\log_2 n_i$ and $\log_2 n_i$ divides $n_i$, the partition $\mathcal{J}_i$ is finer than $\mathcal{D}_i$, which is finer than $\mathcal{J}_{i-1}$.
Now we define a random set $A_i\subset \mathbb{R}^d$ as the union of certain cubes of $\mathcal{J}_i$ in the following way. Independently for each cube $D$ of $\mathcal{D}_i$ we do the following. Choose a random integer $m_D$ between $0$ and $p_i (n_i/\log_2 n_i)^d$ uniformly. Then choose randomly $m_D$ cubes of $\mathcal{J}_i$ in the cube $D$ (selecting each cube with equal probability) and let $H_D$ be their union. Finally, let $A_i=\cup_{D\in \mathcal{D}_i} H_D$.
This way each cube of $\mathcal{J}_i$ is contained in $A_i$ with probability approximately $p_i/2$, and points of distance more than $\sqrt{d} / \log_2 n_i$ are independent. (Note the major difference between this random set $A_i$ and the random set which independently chooses each cube of $\mathcal{J}_i$ with probability $p_i/2$: The number of $\mathcal{J}_i$-cubes of our $A_i$ inside each cube of $\mathcal{D}_i$ has standard deviation $\approx n_i^d$, while in the other construction it would have standard deviation $\approx \sqrt{n_i^d}$. We ignored $p_i$ here as we will choose $n_i\gg 1/p_i$.)
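To illustrate the construction (this is only a rough sketch: the names are ad hoc, the layer is restricted to the unit cube, and no attempt is made to choose the parameters as in the proof), one layer $A_i$ could be generated as follows.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def random_layer(n_i, p_i, d=2):
    # One layer A_i inside [0,1)^d: the grid J_i has mesh 1/n_i, the grid D_i
    # has mesh 1/L with L = log_2(n_i); we assume n_i is a power of 2 and L divides n_i.
    L = int(np.log2(n_i))
    per_side = n_i // L            # J_i-cubes per side inside one D_i-cube
    cells = per_side ** d          # J_i-cubes inside one D_i-cube
    kept = []                      # kept J_i-cubes, encoded by integer index vectors
    for D in np.ndindex(*([L] * d)):
        m_D = rng.integers(0, int(p_i * cells) + 1)        # uniform choice of m_D
        chosen = rng.choice(cells, size=m_D, replace=False)
        for c in chosen:
            local = np.unravel_index(c, [per_side] * d)
            kept.append(tuple(D[k] * per_side + local[k] for k in range(d)))
    return kept    # v in kept encodes the cube (v + [0,1)^d)/n_i

layer = random_layer(n_i=2**8, p_i=0.1)
\end{verbatim}
This only illustrates the shape of one layer; the actual proof takes the symmetric difference of infinitely many such layers with carefully chosen $n_i$ and $p_i$.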
Since $\sum p_i <\infty$, almost every point of $\mathbb{R}^d$ is contained in only finitely many sets $A_i$. Hence the following infinite symmetric difference makes sense (up to measure zero): let $A=A_1 \triangle A_2 \triangle \cdots$.
The key property of this random set is the following.
\begin{lemma}\label{l:prob} If $K, K'\in \mathcal{C}$, $K, K'\subset [-i,i]^d$ and $K\setminus K'$ contains a cube $D\in \mathcal{D}_i$ then the probability that $$
|\lambda(A\cap K) - \lambda(A\cap K')| < \frac{1}{4n_i^d} $$ is at most $(\log_2 n_i)^{d}/(p_i n_i^{d})$. \end{lemma}
\begin{proof}
Let $B_i=A_1\triangle \cdots \triangle A_{i}$ ($i=1,2,\ldots$).
First we prove that the probability that \begin{equation}\label{lambdadiff}
|\lambda(B_i \cap K) -
\lambda(B_i \cap K')| < 1/(2n_i^d) \end{equation} is at most $(\log_2 n_i)^d/(p_i n_i^d)$.
Since $D\subset K$, we have \begin{equation}\label{eq:difference} \lambda(B_i \cap K) - \lambda(B_i \cap K') = \lambda(B_i \cap D) + \lambda(B_i \cap K \cap D^c) - \lambda(B_i \cap K'). \end{equation} Note that the last two terms of the right-hand side depend only on $A_1,\ldots,A_{i-1}$ and $A_i\setminus D$. Let us fix these random variables. Then the last two terms are constants, and we know the (conditional) distribution of $\lambda(B_i \cap D)$: this is $m_D/n_i^d$ if $D$ is disjoint from $B_{i-1}$, and it is $\lambda(D)-m_D/n_i^d$ if $D$ is contained in $B_{i-1}$. Hence the absolute value of the expression of \eqref{eq:difference} can be less than $1/(2n_i^d)$ only for at most one value of $m_D$. Since each value of $m_D$ was chosen with probability at most $(\log_2 n_i)^d/(p_i n_i^d)$, this implies that the conditional probability of \eqref{lambdadiff} is at most $(\log_2 n_i)^d/(p_i n_i^d)$. Since this holds for each fixed choice of $A_1,\ldots,A_{i-1}$ and $A_i\setminus D$, we get that \eqref{lambdadiff} holds indeed with probability at most $(\log_2 n_i)^d/(p_i n_i^d)$.
We can choose $p_{i+1}, p_{i+2}, \ldots$ such that $$\sum_{j=i+1}^\infty p_j < \frac{1}{100n_i^d (2i)^d}.$$ Then $$\sum_{j=i+1}^\infty \lambda(A_j \cap K) \le \sum_{j=i+1}^\infty \lambda(A_j \cap [-i,i]^d) < \frac{1}{100n_i^d}$$ since $K\subset [-i,i]^d$ and by construction the density of each $A_j$ is at most $p_j$ in each cube of the form $a+[0,1]^d, a\in\mathbb{Z}^d$. Clearly the same inequality holds for $K'$.
Combining these inequalities with the estimate on the probability of (\ref{lambdadiff}) we get that $$
|\lambda(A\cap K) - \lambda(A\cap K')| \ge \frac{1}{2n_i^d} - \frac{2}{100n_i^d} \ge \frac{1}{4n_i^d} $$ with probability at least $1-(\log_2 n_i)^d/(p_i n_i^d)$. \end{proof}
Let $s>\dim_P\mathcal{C}$ be such that $\left\lfloor \frac{2 \,{\dim}_P \mathcal{C}}{d-b}\right\rfloor = \left\lfloor \frac{2s}{d-b}\right\rfloor$. We may suppose without loss of generality that $\overline{\dim}_B \partial K < b$ for every $K\in\mathcal{C}$ by increasing $b$ such that $\left\lfloor \frac{2s}{d-b}\right\rfloor$ does not increase. Write $\mathcal{C}$ as $\bigcup_{j=1}^\infty \mathcal{C}'_j$ such that each $\mathcal{C}'_j$ has upper box dimension less than $s$.
For every $K\in\mathcal{C}$ there exists a positive integer $m_0(K)$ such that for every $m\ge m_0(K)$, $$ \lambda(B(\partial K, 1/m)) \le m^{b-d} $$ since the upper box dimension of $\partial K$ is less than $b$. For $K,L\in\mathcal{C}$, using that $K\triangle L \subset B(\partial K \cup \partial L, d_H(K,L))$, this implies that \begin{equation}\label{close} \lambda(K \triangle L) \le 2 m^{b-d} \quad \textrm{ if } d_H(K,L)\le 1/m \textrm{ and } m_0(K),m_0(L)\le m. \end{equation}
For $i\ge 1$ let $$ \mathcal{C}_i=\{ K\in \bigcup_{j=1}^i \mathcal{C}'_j \,:\, m_0(K)\le i, \,K\subset [-i,i]^d \}. $$ Thus $\mathcal{C}_1\subset \mathcal{C}_2 \subset \cdots$, $\bigcup_i \mathcal{C}_i = \mathcal{C}$, and the upper box dimension of each $\mathcal{C}_i$ is less than $s$.
For each $i$ let $$ \widetilde{\mathcal{C}}_{i}=\{(K,K')\in{\mathcal{C}_i}^2: K\setminus K' \text{ or } K'\setminus K \text{ contains a cube } D\in\mathcal{D}_i \}. $$ Then for every integer $N$, using the assumption that $K=\overline{\text{int}\,K}$ for every $K\in\mathcal{C}$ and thus $\text{int}\, K \triangle \text{int}\, K' \neq\emptyset$ for every $K\neq K'$, $K,K' \in \mathcal{C}$, we get that \begin{equation}\label{terfelbontas} \bigcup_{i=N}^\infty \widetilde{\mathcal{C}}_i = \mathcal{C}^2 \setminus \{(K,K): K\in \mathcal{C}\}. \end{equation}
Let us fix $i$. Since $\mathcal{C}_i$ has upper box dimension less than $s$, for every sufficiently large positive integer $k_i$ (say, for $k_i \ge \kappa_i$) there exist (at most) $k_i^s$ sets $\mathcal{C}_i^j\subset \mathcal{C}_i$ ($1\le j \le k_i^s$) with diameter at most $1/k_i$ that cover $\mathcal{C}_i$.
For each pair $(j,j')\in\{1,\ldots,k_i^s\}^2$ pick a pair $$ (K_{i,(j,j')},K'_{i,(j,j')}) \in \widetilde{\mathcal{C}}_i \cap (\mathcal{C}_i^j \times \mathcal{C}_i^{j'}) $$ whenever such pair exists. Then \begin{equation}\label{quantors} \forall (K,K')\in \widetilde{\mathcal{C}}_i\ \exists (j,j')\in\{1,\ldots,k_i^s\}^2\ : \ d_H(K,K_{i,(j,j')}),d_H(K',K'_{i,(j,j')}) \le 1/k_i. \end{equation}
Repeating the construction of $A$ independently $r$ times we obtain sets $A^1,\ldots,A^r$. We claim that an element $K\in \mathcal{C}$ can be reconstructed using these sets, provided we choose the sequences $(n_i)$ and $(p_i)$ appropriately.
For each picked pair $(K_{i,(j,j')},K'_{i,(j,j')})$ we apply Lemma~\ref{l:prob} to get that the probability that there exists $1\le t\le r$ such that \begin{equation}\label{probbecsles2}
|\lambda(A^t\cap K_{i,(j,j')}) - \lambda(A^t\cap K'_{i,(j,j')})| \ge \frac{1}{4n_i^d} \end{equation} is at least $1-(\log_2 n_i)^{rd}/(p_i^r n_i^{rd})$.
Since there are at most $k_i^{2s}$ possible pairs $(j, j')$, this implies that with probability at least $1-k_i^{2s}(\log_2 n_i)^{rd}/(p_i^r n_i^{rd})$, for every picked pair $(K_{i,(j,j')},K'_{i,(j,j')})$ there exists $1\le t\le r$ such that (\ref{probbecsles2}) holds. If \begin{equation}\label{bigki} k_i\ge \max(i,\kappa_i), \end{equation} then using (\ref{close}) and (\ref{quantors}), this implies that with probability at least $$ 1-k_i^{2s}(\log_2 n_i)^{rd}/(p_i^r n_i^{rd}), $$ for any $(K,K')\in \widetilde{\mathcal{C}}_i$ there exists $1\le t\le r$ such that \begin{equation}\label{probbecsles4}
|\lambda(A^t\cap K) - \lambda(A^t\cap K')| \ge \frac{1}{4n_i^d}-4k_i^{b-d}. \end{equation}
Therefore if we choose the sequences $(n_i)$ and $(k_i)$ so that (\ref{bigki}), \begin{equation}\label{posprob} \sum_{i=1}^\infty k_i^{2s}(\log_2 n_i)^{rd}/(p_i^r n_i^{rd}) < \infty \end{equation} and \begin{equation}\label{posdiff} \frac{1}{4n_i^d}-4k_i^{b-d} > 0 \qquad (i=1,2,\ldots) \end{equation} hold then by (\ref{terfelbontas}) and the Borel--Cantelli lemma we get that almost surely for any two distinct $K,K'\in\mathcal{C}$ we have $\lambda(A^t\cap K) \neq \lambda(A^t\cap K')$ for at least one $t\in\{1,\ldots,r\}$, which is exactly what we need to prove.
Choose $k_i$ such that $k_i^{b-d}=n_i^{-d}/64$; that is, $k_i=(64\, n_i^{d})^{1/(d-b)}$. Then (\ref{posdiff}) clearly holds and \eqref{bigki} also holds if $n_i$ is large enough. Then, using that $r= \left\lfloor 2s/(d-b) \right \rfloor +1$, we have $$k_i^{2s}(\log_2 n_i)^{rd}/(p_i^r n_i^{rd}) = 64^{2s/(d-b)} p_i^{-r} (\log_2 n_i)^{rd} n_i^{d(2s/(d-b) -r)} \le n_i^{-\delta d}$$ for $\delta=(r-2s/(d-b))/2>0$, provided that we choose $n_i$ large enough compared to $1/p_i$. Since $\delta>0$, this implies that (\ref{posprob}) also holds if $(n_i)$ is increasing fast enough. This completes the proof of the theorem. \end{proof}
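To get a feeling for the orders of magnitude involved, consider the illustrative values $d=2$, $b=1$ and $s=1$ (these particular numbers play no role in the argument above). Then $r=\left\lfloor 2s/(d-b)\right\rfloor+1=3$ and $\delta=1/2$, the condition $k_i^{b-d}=n_i^{-d}/64$ becomes $k_i=64\,n_i^{2}$, and
$$ k_i^{2s}(\log_2 n_i)^{rd}/(p_i^r n_i^{rd}) = 64^{2}\, p_i^{-3}\,(\log_2 n_i)^{6}\, n_i^{-2} \le n_i^{-1} $$
once $n_i$ is large enough compared to $1/p_i$; taking, for instance, $n_i\ge 2^i$ then makes the series in (\ref{posprob}) convergent.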
\section{Open questions}
In this final section we collect some of the numerous remaining open questions.
\begin{question} How many test sets are needed to reconstruct an interval in $\mathbb{R}$? \end{question}
The answer is $3$, $4$ or $5$ by Theorem \ref{t:interval} and Corollary \ref{c:concrete} \eqref{5enough}.
\begin{question} Let $d\ge 2$. How many test sets are needed to reconstruct a ball in $\mathbb{R}^d$? For example, does $d+1$ suffice? \end{question}
We know by Corollary~\ref{c:concrete}~(\ref{ball}) that $2d+3$ test sets are enough. By Corollary~\ref{c:magnified}~(\ref{largeball}), if we consider only balls of radius at least $1$ then the answer is $d+1$ (for $d\ge 2$). In fact, we also do not know whether the restriction $r\ge 1$ on the magnification rate is necessary for the other two corollaries (\ref{c:magnified}~(\ref{polytope}) and \ref{c:genmagn}) of Section~\ref{s:magn}.
\begin{question} Let $d=1$ or $d=2$. How many test sets are needed to reconstruct a translate of an arbitrary fixed bounded measurable subset of $\mathbb{R}^d$ of positive measure? For example, does $d$ suffice? Do finitely many suffice? \end{question}
For $d \ge 3$ we know by Corollary \ref{c:posmeasure} that $d$ sets suffice.
\begin{question} Let $d = 2$ or $d = 3$ and let $E\subset\mathbb{R}^d$ be a bounded set of positive Lebesgue measure. Can a set of the form $rE+x$ $(r\ge 1, x\in\mathbb{R}^d)$ be reconstructed using $d+1$ test sets? And finitely many test sets? What if we drop the condition $r \ge 1$? \end{question}
Theorem \ref{t:interval} provides a negative answer for $d = 1$, whereas Corollary \ref{c:genmagn} shows that the answer is affirmative (with $r \ge 1$) for $d\ge 4$.
\end{document}
\begin{document}
\title[ Brill-Noether theory for moduli spaces]
{Brill-Noether theory for moduli spaces of sheaves on algebraic varieties}
\author[L.\ Costa, R.M.\ Mir\'o-Roig]{L.\ Costa$^*$, R.M.\ Mir\'o-Roig$^{**}$}
\address{Facultat de Matem\`atiques, Departament d'Algebra i Geometria, Gran Via de les Corts Catalanes 585, 08007 Barcelona, SPAIN } \email{[email protected], [email protected]}
\date{\today} \thanks{$^*$ Partially supported by MTM2007-61104.} \thanks{$^{**}$ Partially supported by MTM2007-61104.}
\subjclass{14F05}
\keywords{Brill-Noether, moduli spaces, stability, vector bundles}
\begin{abstract} Let $X$ be a smooth projective variety of dimension $n$ and let $H$ be an ample line bundle on $X$. Let $M_{X,H}(r;c_1, \cdots, c_{s})$ be the moduli space of $H$-stable vector bundles $E$ on $X$ of rank $r$ and Chern classes $c_i(E)=c_i$ for $i=1, \cdots, s:=min\{r,n\}$. We define the Brill-Noether filtration on $M_{X,H}(r;c_1, \cdots, c_{s})$ as $W_{H}^{k}(r;c_1,\cdots, c_{s}
)= \{ E \in M_{X,H}(r;c_1, \cdots, c_{s}) | h^0(X,E) \geq k \}$ and we realize $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ as the $k$th determinantal variety of a morphism of vector bundles on $M_{X,H}(r;c_1, \cdots, c_{s})$, provided $H^i(E)=0$ for $i \geq 2$ and every $E \in M_{X,H}(r;c_1, \cdots, c_{s})$. We also compute the expected dimension of $W_{H}^{k}(r;c_1,\cdots, c_{s} )$. Very surprisingly, we will see that the Brill-Noether stratification allows us to compare moduli spaces of vector bundles on Hirzebruch surfaces that are stable with respect to different polarizations. We will also study the Brill-Noether loci of the moduli space of instanton bundles and we will see that they have the expected dimension. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction} \label{intro}
Let $X$ be a smooth projective variety of dimension $n$ over an algebraically closed field $K$ of characteristic 0 and let $M_{X,H}(r;c_1, \cdots, c_{s})$ be the moduli space of rank $r$, vector bundles $E$ on $X$ stable with respect to an ample line bundle $H$ and with fixed Chern classes $c_i(E)=c_i$ for $i=1, \cdots, s:=min\{r,n\}$. Moduli spaces of stable vector bundles were constructed in the 1970's by Maruyama and since then they have been extensively studied from the point of view of algebraic geometry, of topology and of differential geometry; giving very pleasant connections between these areas. Unfortunately, except in the classical case of vector bundles on curves, relatively little is known about their geometry in terms of the existence and structure of their subvarieties.
In the case of line bundles on smooth projective curves $C$ of genus $g$, where the moduli spaces $Pic^d(C)$ of degree $d$ line bundles are all isomorphic to the Jacobian, Brill-Noether theory has long provided a basic source of geometrical information. The classical theory of Brill-Noether has a long history and it is concerned with the subvarieties $W^k$ of $Pic^d(C)$ whose line bundles have at least $k+1$ independent global sections. Basic questions concerning non-emptiness, connectedness, irreducibility, dimension, singularities, cohomology classes, etc., have been answered when the curve $C$ is generic in the moduli space ${ M}_g$ of curves of genus $g$. There are several natural ways to generalize the classical theory of Brill-Noether. First, instead of studying line bundles on curves, we can consider vector bundles of any rank, giving rise to the Brill-Noether loci in the moduli space of stable rank $r$ vector bundles on curves studied by Newstead, Teixidor and others. Indeed, during the last two decades, a great deal of work has been done on the Brill-Noether stratification of the moduli space of degree $d$, stable, rank $r$ vector bundles on algebraic curves, giving rise to nice and interesting descriptions of these subvarieties. Nevertheless, it should be mentioned that in spite of great efforts, many questions concerning their geometry still remain open. Second, instead of studying line bundles on curves, we can consider line bundles on varieties of arbitrary dimension and, finally, we can go in both directions simultaneously. We can consider a smooth projective variety $X$ of dimension $n$, an ample line bundle $H$ on $X$, the moduli space $M_{X,H}(r;c_1, \cdots, c_{s})$ of rank $r$, $H$-stable vector bundles $E$ on $X$ with fixed Chern classes $c_i(E)=c_i$; $1\le i \le min\{r,n\}$, and we can study the subschemes in $M_{X,H}(r;c_1, \cdots, c_{s})$ defined by conditions $\{ \dim H^{j}(X,E)\ge n_{j} \}$. In \cite{GoHi}, G\"ottsche and Hirschowitz studied the Brill-Noether loci in the moduli space of stable vector bundles on $\PP^2$ and in \cite{L}-\cite{L1}, Leyenson studied the Brill-Noether loci in the moduli space of stable vector bundles on K3 surfaces. The goal of this paper is to introduce a Brill-Noether theory for moduli spaces of rank $r$, $H$-stable vector bundles on algebraic varieties of arbitrary dimension, extending, in particular, all the above results to higher dimensional varieties. Once we have constructed the Brill-Noether stratification in this much more general context, we will address the main problems and we will analyze them for several concrete moduli problems.
\vskip 2mm Next we outline the structure of this paper. In section 2, we will define the Brill-Noether locus $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ in $M_{X,H}(r;c_1, \cdots, c_{s})$ as the set of vector bundles in $M_{X,H}(r;c_1, \cdots, c_{s})$ having at least $k$ independent sections and, associated to this locus, we consider the generalized Brill-Noether number $\rho^{k}_H(r;c_1,\cdots, c_{s} )$. We prove that $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ has a natural structure of an algebraic variety and that any of its non-empty components has dimension $\ge \rho^{k}_H(r;c_1,\cdots, c_{s} )$. Therefore, it is natural to ask whether the numerical condition $\rho^{k}_H(r;c_1,\cdots, c_{s} )<0$ implies that $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ is empty, and whether $\rho^{k}_H(r;c_1,\cdots, c_{s} )\ge 0$ implies that $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ is non-empty of dimension $\rho^{k}_H(r;c_1,\cdots, c_{s} )$. We end the section by giving examples of situations where the expected dimension $\rho^{k}_H(r;c_1,\cdots, c_{s} )< 0$ and $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ is non-empty; and examples where $\rho^{k}_H(r;c_1,\cdots, c_{s} )>0$ and $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ is non-empty of dimension strictly greater than $\rho^{k}_H(r;c_1,\cdots, c_{s} )$, in contrast with the classical case. In section 3, we will analyze how the Brill-Noether stratification will allow us to compare moduli spaces of vector bundles on a smooth projective surface stable with respect to different polarizations. It turns out that the ample cone of a smooth projective surface $X$ has a chamber structure such that the moduli space $M_{X,H}(r;c_1,c_2)$ only depends on the chamber of $H$, and the problem consists in determining how the moduli space changes when the polarization crosses a wall between two chambers (see Definition \ref{parets}). Very surprisingly, we will realize that in many situations the Brill-Noether locus controls these changes. As a by-product we will obtain a huge family of examples where the dimension of the Brill-Noether loci coincides with the expected one. In the last section, we describe the Brill-Noether loci in the moduli space of mathematical instanton bundles.
\vskip 2mm \noindent {\bf Notation:} We will work over an algebraically closed field $K$ of characteristic zero. Let $X$ be a smooth projective variety of dimension $n$ and let $E$ be a rank $r$ vector bundle on $X$ with Chern classes $c_i(E)=c_i$, $1 \leq i \leq s:= min\{r,n\}$. Set $\chi(r;c_1, \cdots, c_{s}):=\chi(E)$. We will write $h^i(E)$ (resp. $ext^i(E,F)$) to denote the dimension of the $i$-th cohomology group $H^i(X,E)=H^i(E)$ (resp. $i$-th Ext group $Ext^i(E,F)$) as a $K$-vector space. The sheaf $K_X$ will denote the canonical sheaf on $X$.
\section{Construction of the Brill-Noether Loci}
The goal of this section is to prove the existence of a Brill-Noether type stratification on the moduli space of stable vector bundles on smooth projective varieties analogous to the classical stratification of the Picard variety $Pic^d(C)$ of degree $d$ line bundles on a smooth projective curve $C$.
Let us start fixing the notation and some basic definitions. We consider $X$ an $n$-dimensional smooth projective variety, $H$ an ample divisor on $X$, $r \geq 2$ an integer and $c_i \in H^{2i}(X,\ZZ)$ for $i=1, \cdots, s=min\{r,n\}$. We denote by $M_{H}=M_{X,H}(r;c_1, \cdots, c_{s})$ the moduli space of rank $r$, vector bundles $E$ on $X$, with fixed Chern classes $c_i(E)=c_i$ for $i=1, \cdots, s$, and $H$-stable according to the following definition due to Mumford and Takemoto.
\begin{definition} Let $H$ be an ample line bundle on a smooth projective $n$ dimensional variety $X$. For a torsion free sheaf $F$ on $X$ we set $$ \mu(F)=\mu_H (F):=\frac{c_1(F)H^{n-1}}{rk(F)}.$$ The sheaf $F$ is said to be {\em $H$-semistable} if $$ \mu_H (E)\le \mu_H (F) $$ \noindent for all non-zero subsheaves $E\subset F$ with $rk(E)<rk(F)$; if strict inequality holds then $F$ is {\em $H$-stable}. Notice that for rank $r$ vector bundles $F$ on $X$ with $(c_1(F)H^{n-1},r)=1$, the concepts of $H$-stability and $H$-semistability coincide. \end{definition}
\begin{remark} \label{estabilitatdepen} The definition of stability depends on the choice of the ample line bundle $H$. The changes of the moduli space that occur when the line bundle $H$ varies have been studied by several people in great detail and reveal interesting phenomena (see for instance \cite{Qin93}; \cite{JPAA}; \cite{crelle}; \cite{Michigan}; \cite{nagoya} and references therein). In section 3, we will see how the Brill-Noether loci allow us to study these changes. \end{remark}
The main goal of this section is to construct a subvariety $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ of $M_H$ whose support is the set of rank $r$, $H$-stable vector bundles $E$ on $X$ with Chern classes $c_i$ such that $h^0(E) \geq k$. In other words, we are going to construct a variety $$W_{H}^{k}(r;c_1,\cdots, c_{s} )$$ such that
\[Supp(W_{H}^{k}(r;c_1,\cdots, c_{s} ))= \{ E \in M_H | h^0(E) \geq k \}. \]
To achieve our purpose, we first need to recall the definition of the $k$-th determinantal variety. Let $\phi:E \rightarrow F$ be a morphism between locally free sheaves of ranks $e$, $f$ over an algebraic variety $X$. Upon choosing local trivializations of $E$ and $F$ over an open set $U \subset X$, the morphism $\phi$ is represented by an $e \times f$ matrix $A$ of holomorphic functions. We denote by $U_k$ the subvariety of $U$ defined by the ideal generated by the $(k+1)\times (k+1)$ minors of $A$. It is easy to see that $U_k$ does not depend on the choice of the trivialization, and therefore there is a well-defined subvariety $X_k(\phi)$ of $X$ such that \[ X_k(\phi) \cap U=U_k \] for every such open set $U \subset X$.
\[ \{p \in X| rk(\phi_p) \leq k \} \] and it is clear from the definition that $X_k(\phi)$ has codimension at most $(e-k)(f-k)$ when it is non-empty and $$X_k (\phi)\subset Sing(X_{k+1}(\phi))$$ whenever $X_{k+1}(\phi) \neq X$.
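The simplest instance may help to fix ideas (it is not needed in what follows): if $\phi:{\mathcal O}_X^{2} \rightarrow {\mathcal O}_X^{2}$ is given locally by a $2\times 2$ matrix $A=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)$ of holomorphic functions, then
\[ X_1(\phi)=\{ad-bc=0\}, \qquad X_0(\phi)=\{a=b=c=d=0\}, \]
so $X_1(\phi)$ has codimension at most $(2-1)(2-1)=1$ and $X_0(\phi)$ has codimension at most $(2-0)(2-0)=4$; moreover, whenever $X_1(\phi)\neq X$, every point of $X_0(\phi)$ is a singular point of $X_1(\phi)$, since the differential of $ad-bc$ vanishes there.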
We are now ready to define the Brill-Noether filtration of $M_{X,H}(r;c_1, \cdots, c_{s})$ and to give a formula for the expected dimension of the Brill-Noether locus.
\begin{theorem} \label{construccio} Let $X$ be a smooth projective variety of dimension $n$ and consider a moduli space $M_{H}=M_{X,H}(r;c_1, \cdots, c_{s})$ of rank $r$, $H$-stable vector bundles $E$ on $X$ with fixed Chern classes $c_i(E)=c_i$. Assume that for any $E \in M_H$, $H^i(E)=0$ for $i \geq 2$. Then, for any $k \geq 0$, there exists a determinantal variety $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ such that
\[Supp(W_{H}^{k}(r;c_1,\cdots, c_{s} ))= \{ E \in M_H | h^0(E) \geq k \}. \] Moreover, each non-empty irreducible component of $W_{H}^{k}(r;c_1,\cdots, c_{s} )$ has dimension at least $$\dim(M_H) -k(k-\chi(r;c_1, \cdots, c_s)),$$ and $$W_{H}^{k+1}(r;c_1,\cdots, c_{s} )\subset Sing(W_{H}^{k}(r;c_1,\cdots, c_{s} ))$$ whenever $ W_{H}^{k}(r;c_1,\cdots, c_{s} ) \neq M_{X,H}(r;c_1, \cdots, c_{s})$. \end{theorem}
\begin{proof} First of all assume that $M_H$ is a fine moduli space. Let ${\mathcal U} \rightarrow X \times M_{H}$ be a universal family such that for any $t \in M_H$, ${\mathcal U}|_{X\times \{ t\}} =E_t$ is an $H$-stable rank $r$ vector bundle on $X$ with Chern classes $c_i(E_t)=c_i$. Let $D$ be an effective divisor on $X$ such that for any $t \in M_H$, \begin{equation} \label{D} h^0(E_t(D))= \chi(E_t(D)), \quad H^i(E_t(D))=0, \quad i \geq 1. \end{equation} Consider ${\mathcal D}=D \times M_H$ the corresponding product divisor on $X \times M_H$ and denote by
\[\nu: X \times M_H \rightarrow M_H \]
the natural projection. It follows from (\ref{D}) and the base
change theorem that $\nu_*{\mathcal U}({\mathcal D})$ is a locally free sheaf of
rank $\chi(E_t(D))$ on $M_H$ and
\[ R^i\nu_*{\mathcal U}({\mathcal D})=0, \quad i>0.\]
Therefore, applying the functor $\nu_*$ to the short exact
sequence
\[ 0 \rightarrow {\mathcal U} \rightarrow {\mathcal U}({\mathcal D}) \rightarrow {\mathcal U}({\mathcal D})/{\mathcal U} \rightarrow 0\]
we get the following exact sequence
\[ 0 \rightarrow \nu_*{\mathcal U} \rightarrow \nu_*{\mathcal U}({\mathcal D}) \mapright{\gamma} \nu_*({\mathcal U}({\mathcal D})/{\mathcal U}) \rightarrow R^1\nu_*{\mathcal U} \rightarrow 0.\] The map $\gamma$ is a morphism between locally free sheaves on $M_H$ of rank $\chi(E_t(D))$ and $\chi(E_t(D))-\chi(E)$ respectively and the $(\chi(E_t(D))-k)$-th determinantal variety $$W^k_{H}(r;c_1,\cdots, c_{s} ) \subset M_H$$ associated to it has support
\[ \{E_t \in M_H| \operatorname{rank} \gamma_{E_t} \leq \chi(E_t(D))-k \} \] i.e. $W^k_{H}(r;c_1,\cdots, c_{s} )$ is the locus where the fiber of $R^1\nu_*{\mathcal U}$ has dimension at least $(\chi(E_t(D))-\chi(E_t))-(\chi(E_t(D))-k)=k-\chi(E_t)$. For any $E_t \in M_H$ the assumption $h^i(E_t)=0$, $i \geq 2$, implies \[ h^1(E_t)= h^0(E_t)-\chi(E_t).\] Thus,
\[\begin{array}{ll}Supp(W_{H}^{k}(r;c_1,\cdots, c_{s} )) & = \{ E \in M_H | h^1(E) \geq k-\chi(E) \}
\\ & = \{ E \in M_H | h^0(E) \geq k \}. \end{array} \] Using the language of Fitting ideals, we have that $W_{H}^k(r;c_1,\cdots, c_{s} )$ is the subvariety of $M_{H}$ defined by the $(k-\chi(E_t))$th Fitting ideal of $R^1\nu_* {\mathcal U}$. Moreover, it can also be seen that $W_{H}^k(r;c_1,\cdots, c_{s} )$ represents the functor \[S \rightarrow \bigg \{ \begin{array}{l} \mbox{equivalence classes of families ${\mathcal F}$ on $S \times M_H \mapright{\nu} M_H$} \\ \mbox{of $H$-stable rank $r$ vector bundles $E$ on $S$ with $c_i(E)=c_i$} \\ \mbox{such that the Fitting rank of $R^1\nu_*{\mathcal F}$ is at least $(k-\chi(E))$} \end{array} \bigg \}. \] Finally, since $W^k_{H}(r;c_1,\cdots, c_{s} )$ is a $(\chi(E_t(D))-k)$-determinantal variety associated to a morphism between locally free sheaves of rank $\chi(E_t(D))$ and $\chi(E_t(D))-\chi(E)$ respectively, any of its non-empty irreducible components has dimension greater or equal to $\dim(M_H) -k(k-\chi(E))$ and $$W_{H}^{k+1}(r;c_1,\cdots, c_{s} )\subset Sing(W_{H}^{k}(r;c_1,\cdots, c_{s} ))$$ whenever $ W_{H}^{k}(r;c_1,\cdots, c_{s} ) \neq M_{X,H}(r;c_1, \cdots, c_{s})$.
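Let us also make the last dimension count explicit: writing $e=\chi(E_t(D))$ and $f=\chi(E_t(D))-\chi(E_t)$ for the ranks of the two locally free sheaves and $k'=\chi(E_t(D))-k$, the general codimension bound for the $k'$-th determinantal variety gives codimension at most
\[ (e-k')(f-k')=k\,\big(k-\chi(E_t)\big), \]
which is exactly the quantity $k(k-\chi(r;c_1,\cdots,c_s))$ appearing in the statement of the theorem.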
If $M_H$ is not a fine moduli space, it is also possible to carry out the construction of the Brill-Noether locus using only the local existence of a universal sheaf on $M_H$. Indeed, we carry out the construction locally, we show that it is independent of the choice of the local universal sheaf, and we conclude that the local constructions glue together to give a global algebraic object. \end{proof}
\begin{remark} (1) The cohomological assumptions in Theorem \ref{construccio} are natural if we want to have a filtration of the moduli space $M_H$ by the subvarieties $W_{H}^k(r;c_1,\cdots, c_{s} )$. Indeed, if
$X$ is an $n$-dimensional projective variety, then any vector bundle $E$ on $X$ has $n+1$ cohomological groups whose dimensions are related by Riemann-Roch theorem and one is forced to look for a multigraded filtration of the moduli space $M_H$ by means of the sets $\{ E \in M_H| h^i(E) \geq k_i\}$. Under the assumptions of Theorem \ref{construccio}, $h^i(E)=0$ for $i \geq 2 $ and for any $E \in M_H$, the only non-vanishing cohomology groups are $H^0(E)$ and $H^1(E)$ and their dimensions are subject to one relation given by Riemann-Roch theorem: $\dim H^0(E)-\dim H^1(E)=\chi(E)=\chi(r;c_1, \cdots, c_s)$. Hence, it makes sense to consider only the filtration of the moduli space $M_H$ by the dimension of the space of global sections.
(2) We want to point out that there exists plenty of vector bundles satisfying the cohomological conditions of Theorem \ref{construccio}. For instance, instanton bundles on $\PP^{2n+1}$, Schwarzenberger bundles on $\PP^n$, Steiner bundles on $\PP^n$, Steiner and Spinor bundles on a hyperquadric $Q_n \subset \PP^{n+1}$, etc. \end{remark}
\begin{definition} The variety $W_{H}^k(r;c_1,\cdots, c_{s} )$ is called the {\bf $k$-Brill-Noether locus} of the moduli space $M_{H}$ (or simply Brill-Noether locus if there is no confusion) and $$\rho_{H}^k(r;c_1,\cdots, c_{s} ):= \dim M_H-k(k-\chi(r;c_1,\cdots, c_{s} ))$$ is called the {\bf generalized Brill-Noether number}.
By Theorem \ref{construccio}, the Brill-Noether locus $W_{H}^k(r;c_1,\cdots, c_{s} )$ has dimension greater or equal to $\rho_{H}^k(r;c_1,\cdots, c_{s} )$ and the number $\rho_{H}^k(r;c_1,\cdots, c_{s} )$ is also called the expected dimension of the corresponding Brill-Noether locus. Hence, we are led to pose the question whether the dimension of the Brill-Noether locus $W_{H}^k(r;c_1,\cdots, c_{s} )$ and its expected dimension coincide provided the Brill-Noether locus $W_{H}^k(r;c_1,\cdots, c_{s} )$ is non-empty. \end{definition}
\begin{notation} If there is no confusion then, we will simply write $W^k$ and $\rho^k$ instead of $W_{H}^k(r;c_1,\cdots, c_{s} )$ and $\rho_{H}^k(r;c_1,\cdots, c_{s} )$.
We will say that the Brill-Noether locus is defined in the moduli space $M_{H}$ whenever the assumptions of Theorem \ref{construccio} are satisfied, i.e. for any $E \in M_H$, $H^i(E)=0$ for $i \geq 2$. \end{notation}
\begin{remark} Notice that when $X$ is a smooth projective curve and we consider the moduli space $Pic^d(X)$ of degree $d$ line bundles on $X$, then we recover the classical Brill-Noether loci which have been well known since the last century, and the generalized Brill-Noether number is the classical Brill-Noether number $\rho=\rho(g,r,d)=g-(r+1)(g-d+r)$. \end{remark}
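As a quick consistency check, in this classical situation $\dim Pic^d(C)=g$, every degree $d$ line bundle on $C$ has Euler characteristic $\chi=d-g+1$ by the Riemann-Roch theorem, and having at least $r+1$ independent sections corresponds to $k=r+1$; hence
\[ \rho^{r+1}=g-(r+1)\big((r+1)-(d-g+1)\big)=g-(r+1)(g-d+r). \]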
\begin{corollary} \label{superficie} Let $X$ be a smooth projective surface and let $M_{H}=M_{X,H}(r;c_1,c_2)$ be a moduli space of rank $r$, $H$-stable vector bundles $E$ on $X$ with fixed Chern classes $c_i(E)=c_i$. Assume that $c_1H \geq r K_X H$. Then, for any $k \geq 0$, there exists a determinantal variety $W_{H}^{k}(r;c_1,c_2)$ such that
\[Supp(W_{H}^{k}(r;c_1,c_2))= \{ E \in M_H | h^0(E) \geq k \}. \] Moreover, each non-empty irreducible component of $W_{H}^{k}(r;c_1,c_2)$ has dimension greater or equal to $$\rho_H^k(r;c_1,c_2)=\dim(M_H) -k(k-r(1+P_a(X))+\frac{c_1K_X}{2}-\frac{c_1^2}{2}+c_2)$$ and $W_{H}^{k+1}(r;c_1,c_2) \subset Sing(W_{H}^{k}(r;c_1,c_2))$ whenever $W_{H}^{k}(r;c_1,c_2) \neq M_{X,H}(r;c_1,c_2)$. \end{corollary} \begin{proof} First of all notice that for any $E \in M_H$, the numerical condition $c_1(E)H > rK_X H$ implies that $H^2(E)=0$. Indeed, by Serre duality we have: \[H^2(E) \cong H^0(E^*(K_X)) \] and since $E$ is an $H$-stable vector bundle on $X$, $E^*$ is also $H$-stable. Thus, if $H^2(E) \neq 0$, ${\mathcal O}_{X}(-K_X) \hookrightarrow E^*$ and since $E^*$ is $H$-stable we get \[(-K_X H)< \frac{c_1(E^*)H}{r}=-\frac{c_1(E)H}{r},\] which contradicts the assumption $c_1(E)H \geq rK_X H$. Hence the result follows from Theorem \ref{construccio} and the fact that by the Riemann-Roch theorem \[ \chi(r;c_1,c_2)=r(1+P_a(X))-\frac{c_1K_X}{2}+\frac{c_1^2}{2}-c_2.\] \end{proof}
For any sheaf $E$ on $\PP^2$, denote by $\chi^+=\chi^+(E):= \max \{\chi(E), 0 \}$. In \cite{GoHi}, G\"{o}ttsche and Hirschowitz gave, under the assumption $c_1>-3r$, a lower bound for the codimension of the Brill-Noether loci $W^{\chi^++1}(r;c_1,c_2)$ of rank $r$, stable vector bundles $E$ on $\PP^2$ with fixed Chern classes $c_i(E)=c_i$ such that
$h^0(E) \geq \chi^++1$. From the previous result we get the following upper bound:
\begin{corollary} Let $W^{\chi^++1}(r;c_1,c_2)$ be the Brill-Noether locus of rank $r$, stable vector bundles $E$ on $\PP^2$ with fixed Chern classes $c_i(E)=c_i$ such that
$h^0(E) \geq \chi^++1$. Assume that $c_1>-3r$. Then, the following holds:
\noindent(a) If $\chi^+= \chi(r;c_1,c_2)>0$ \[2 \leq \operatorname{codim}(W^{\chi^++1}(r;c_1,c_2)) \leq \chi(r;c_1,c_2)+1. \] (b) If $\chi^+= \chi(r;c_1,c_2)=0$ \[ \operatorname{codim}(W^{\chi^++1}(r;c_1,c_2))=1. \] (c) If $\chi^+=0> \chi(r;c_1,c_2)$ \[ \operatorname{codim}(W^{\chi^++1}(r;c_1,c_2)) \leq (\chi^++1)(\chi^++1-\chi(r;c_1,c_2)). \] \end{corollary} \begin{proof} The lower bounds follow from \cite{GoHi}; Theorem 1. Since $K_{\PP^2}={\mathcal O}_{\PP^2}(-3)$, the hypothesis $c_1>-3r$ is equivalent to $c_1(E)H > rK_X H$ and the upper bounds follow from Corollary \ref{superficie}. \end{proof}
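One can make the upper bounds explicit: they are simply the value of $k(k-\chi(r;c_1,c_2))$ from Corollary \ref{superficie} at $k=\chi^++1$. In case (a) this gives
\[ \big(\chi(r;c_1,c_2)+1\big)\big(\chi(r;c_1,c_2)+1-\chi(r;c_1,c_2)\big)=\chi(r;c_1,c_2)+1, \]
while in cases (b) and (c) it gives $1$ and $(\chi^++1)(\chi^++1-\chi(r;c_1,c_2))$ respectively.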
In subsequent sections we will see that there are plenty of situations where the assumptions of Theorem \ref{construccio} are satisfied and we will prove that in several of them the Brill-Noether loci have exactly the expected dimension, showing
that the bound given in Theorem \ref{construccio} is sharp.
Once we have proved the existence of these varieties, it is natural to ask whether the condition $\rho_H^k(r;c_1,\cdots, c_{s} )<0$ implies that the variety $W_{H}^k(r;c_1,\cdots, c_{s} )$ is empty and whether the condition $\rho_H^k(r;c_1,\cdots, c_{s} ) \geq 0$ implies that the variety $W_{H}^k(r;c_1,\cdots, c_{s} )$ is non-empty. Indeed we are led to pose the following three questions.
\begin{question} \label{mainquestion} Let $X$ be a smooth projective variety of dimension $n$. We consider a moduli space $M_{H}(r;c_1,\cdots, c_{s} )$ of rank $r$, $H$-stable vector bundles on $X$ where the Brill-Noether locus is defined.
\begin{itemize} \item[(1)] Whether $ \rho_H^k(r;c_1,\cdots, c_{s} )<0$ implies $W_{H}^k(r;c_1,\cdots, c_{s} ) =\emptyset$ ? \item[(2)]Whether $ \rho_H^k(r;c_1,\cdots, c_{s} ) \geq 0$ implies $W_{H}^k(r;c_1,\cdots, c_{s} ) \neq \emptyset$ ? \item[(3)]Whether $ \rho_H^k(r;c_1,\cdots, c_{s} ) \geq 0$ and $W_{H}^k(r;c_1,\cdots, c_{s} ) \neq \emptyset$ implies
$$ \rho_H^k(r;c_1,\cdots, c_{s} )=\dim W_{H}^k(r;c_1,\cdots, c_{s}
) \quad ?$$ \end{itemize} \end{question}
If $C$ is a smooth algebraic curve and the moduli space is the Picard variety $Pic^d(C)$ of degree $d$ line bundles on $C$, the answer to Question \ref{mainquestion} is well-known. In fact, classical Brill-Noether theory has its roots dating more than a century ago and it is concerned with the subvarieties of the Picard variety $Pic^d(C)$ determined by degree $d$ line bundles on $C$ having at least a specified number of independent sections. Basic questions, concerning non-emptiness, connectedness, irreducibility, dimension, singularities, cohomology classes, etc ... have been completely answered when the underlying curve is a generic curve in the moduli space $M_g$ of curves of genus $g$. Indeed, the Brill-Noether locus is non-empty if $\rho \geq 0$ and connected if $\rho >0$. For a generic curve in $M_g$, the Brill-Noether locus is empty if $\rho <0$ and is irreducible of dimension $\rho$ if $\rho>0$. Modern proofs of these results have been given by Kempf, Kleiman and Laksov, Fulton and Lazarsfeld, Griffiths and Harris, and Gieseker and a full treatment of this is contained in \cite{ACGH}. The Brill-Noether loci in the moduli space of vector bundles of higher rank on a generic curve $C$ has been studied by Teixidor and others in a series of papers in 1994-2007.
The next example shows that if we deal with algebraic varieties of dimension greater than one, the answer to Question \ref{mainquestion} (1) is no longer affirmative; it gives an example where the generalized Brill-Noether number is negative while the corresponding Brill-Noether locus is non-empty, in contrast with the classical case. Even more, we give examples where the expected dimension of the Brill-Noether locus and its actual dimension do not coincide even though the generalized Brill-Noether number is positive. Indeed, we have
\begin{example} Let $X=\PP^1 \times \PP^1$ be a quadric surface in $\PP^3$. We denote by $l_1$, $l_2$ the generators of $Pic(X)$ and for any integer $n \geq 2$ we fix the ample line bundle $L=l_1+nl_2$. We will describe the Brill-Noether stratification of $M_{X,L}(2; (2n-1)l_2, 2n)$. Indeed, since $(2n-1)l_2 L=2n-1> -4n-4=2K_X L$, its existence is guaranteed by Corollary \ref{superficie}. Let us now prove that for any integer $n \geq 2$ and any integer $j$, $0 \leq j \leq n$, the Brill-Noether locus $W_L^{j}(2;(2n-1)l_2,2n)$ is non-empty. To this end, we consider ${\mathcal F}$ the irreducible family parameterizing rank two vector bundles $E$ on $X$ given by an exact sequence
\begin{equation}
\label{rk2se5} \hspace{6mm}
0 \rightarrow {\mathcal O}_{X} \rightarrow E \rightarrow
{\mathcal O}_{X}((2n-1)l_2)\otimes I_{Z} \rightarrow 0 \end{equation} where $Z$ is a locally complete intersection 0-dimensional scheme of length $2n$ such that $H^0I_{Z}((2n-1)l_2)=0$.
Notice that since $|Z|=2n$ and $h^0{\mathcal O}_X((2n-1)l_2)=2n$, the condition \[ H^0I_{Z}((2n-1)l_2)=0 \] is satisfied for all generic $Z \in Hilb^{2n}(X)$ and ${\mathcal F}$ is non-empty. In addition, it can be seen that $\dim {\mathcal F}= 4(2n)-3$ and that $ {\mathcal F} \hookrightarrow M_{X,L} (2;(2n-1)l_2,2n)$ (see \cite{nagoya}; Proposition 4.6).
\vskip 2mm \noindent{\bf Claim 1:} $W^1_L(2;(2n-1)l_2,2n) \cong {\mathcal F}$.
\vskip 2mm \noindent{\bf Proof of Claim 1:} Any $E \in {\mathcal F}$ is $L$-stable and since $H^0I_{Z}((2n-1)l_2)=0$, we have $h^0(E)=1$. Thus ${\mathcal F} \subset W^1_L(2;(2n-1)l_2,2n)$. Let us prove the converse. We take a vector bundle
$E \in W^1_L(2;(2n-1)l_2,2n)$ and a non-zero global section $s$ of $E$. We denote by $Y$ its scheme of zeros and by $D$ the maximal effective divisor contained in $Y$. Then $s$ can be regarded as a section of $E(-D)$ and its scheme of zeros has codimension $\geq 2$. Thus, for some effective divisor $D=al_1+bl_2$ we have a short exact sequence \begin{equation}
\label{} \hspace{6mm}
0 \rightarrow {\mathcal O}_{X}(D) \rightarrow E \rightarrow
{\mathcal O}_{X}((2n-1)l_2-D)\otimes I_{Z} \rightarrow 0 \end{equation} where $Z$ is a locally complete intersection 0-cycle. Since $D$ is effective, $a,b \geq 0$ and by the $L$-stability of $E$ we have \[(al_1+bl_2)L = (an+b) < \frac{2n-1}{2}=\frac{c_1(E)L}{2}. \] Therefore, $a=b=0$ and in fact $E \in {\mathcal F}$.
It follows from Claim 1 that the Brill-Noether locus $W^1_L(2;(2n-1)l_2,2n)$ is an irreducible variety of dimension $8n-3$ and notice that in this case, its dimension coincides with the expected one. Indeed,
\[\begin{array}{ll} \rho_L^1(2;(2n-1)l_2,2n) &= \dim M_{X,L}(2;(2n-1)l_2,2n) -1(1-\chi(2;(2n-1)l_2,2n)) \\ & = 8n-3 \\
\end{array} \]
since by Riemann-Roch theorem,
\begin{equation} \label{RR} \chi(2;(2n-1)l_2,2n)=2+\frac{((2n-1)l_2)(2l_1+2l_2)}{2}+ \frac{((2n-1)l_2)^2}{2}-2n=1.
\end{equation}
For any $i$, $1 \leq i \leq n-1$, we can choose a $0$-dimensional scheme $Z_i$ on $X$ of length $2n$ such that $h^0(I_{Z_i}((2n-1)l_2))=i$ and $h^0(I_{Z_i}((2n-i-1)l_2))=0$. Thus, if we denote by $E_i$ the rank two vector bundle on $X$ given by the exact sequence \begin{equation}
\label{} \hspace{6mm}
0 \rightarrow {\mathcal O}_{X} \rightarrow E_i \rightarrow
{\mathcal O}_{X}((2n-1)l_2)\otimes I_{Z_i} \rightarrow 0 \end{equation} we have $h^0(E_i)=i+1$.
\vskip 2mm \noindent{\bf Claim 2:} $E_i$ is $L$-stable.
\vskip 2mm \noindent{\bf Proof of Claim 2:} Since $E_i$ is given by a non-trivial extension \begin{equation}
\label{} \hspace{6mm}
0 \rightarrow {\mathcal O}_{X} \rightarrow E_i \rightarrow
{\mathcal O}_{X}((2n-1)l_2)\otimes I_{Z_i} \rightarrow 0, \end{equation} given a sub-line bundle ${\mathcal O}_X(al_1+bl_2)$ of $E_i$ we have two possible cases: \[ (1) \quad {\mathcal O}_X(al_1+bl_2) \hookrightarrow {\mathcal O}_X \] \[ (2) \quad {\mathcal O}_X(al_1+bl_2) \hookrightarrow {\mathcal O}_{X}((2n-1)l_2)\otimes I_{Z_i}. \] In case $(1)$, $(al_1+bl_2)L \leq 0 < \frac{2n-1}{2}=\frac{c_1(E_i)L}{2}$. In case $(2)$, $-al_1+(2n-1-b)l_2$ is an effective divisor and hence $a \leq 0$ and $b \leq 2n-1$. On the other hand, since \[H^0({\mathcal O}_X(al_1+(b-i)l_2)) \subset H^0(I_{Z_i}((2n-i-1)l_2))=0 \] we have $a <0$ or $b <i$. If $b<i$, since $a \leq 0$ and $i\le n-1$, we have $$(al_1+bl_2)L =an+b \le n-1< \frac{2n-1}{2} =\frac{c_1(E_i)L}{2}.$$ Assume $a<0$. In that case, since $b \leq 2n-1$ we get $$(al_1+bl_2)L =an+b \leq -n+b \leq n-1 < \frac{2n-1}{2} =\frac{c_1(E_i)L}{2}.$$ Therefore, $E_i$ is $L$-stable which proves Claim 2.
\vskip 2mm By Claim 2, $E_i \in W^{i+1}_L(2;(2n-1)l_2,2n)$ and we get a chain of non-empty Brill-Noether loci \[M_{X,L}(2;(2n-1)l_2,2n) \supset W^1_L(2;(2n-1)l_2,2n) \supset W^2_L(2;(2n-1)l_2,2n)\supset \cdots \hspace{25mm} \]
\[\hspace{60mm} \cdots \supset W^{n}_L(2;(2n-1)l_2,2n) \supsetneq \emptyset. \]
Notice that for any $k$, $1\le k \le n$, such that $8n-3 <k(k-1)$ the Brill-Noether locus $W^{k}_L(2;(2n-1)l_2,2n)$ is non-empty and the generalized Brill-Noether number, $\rho^k_L(2;(2n-1)l_2,2n)$, is negative. In fact, \[ \begin{array}{ll}\rho^k_L(2;(2n-1)l_2,2n) & = \dim M_{X,L}(2;(2n-1)l_2,2n)-k(k-\chi(2;(2n-1)l_2,2n)) \\ & =8n-3-k(k-1) \end{array}\] where the last equality follows from Proposition \ref{moduli} and the equation (\ref{RR}).
Finally, we have that for any $k$ such that $2<k(k-1)<8n-3$, the generalized Brill-Noether number, $\rho^k_L(2;(2n-1)l_2,2n)$, is positive; however, the Brill-Noether locus $W^{k}_L(2;(2n-1)l_2,2n)$ is non-empty and its dimension is greater than the expected one. In fact, it is enough to observe that $$\dim W^{k}_L(2;(2n-1)l_2,2n)=8n-2k-1>\rho^k_L(2;(2n-1)l_2,2n).$$
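Explicitly, under the dimension formula just used, the gap between the actual and the expected dimension is
\[ \dim W^{k}_L(2;(2n-1)l_2,2n)-\rho^k_L(2;(2n-1)l_2,2n)=(8n-2k-1)-\big(8n-3-k(k-1)\big)=(k-1)(k-2), \]
which is strictly positive precisely when $k(k-1)>2$, that is, when $k\ge 3$.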
\end{example}
\section{Brill-Noether loci and change of polarizations}
In this section we will see that the Brill-Noether stratification allows us to compare moduli spaces of vector bundles on smooth projective surfaces, stable with respect to different polarizations.
Let $X$ be a smooth projective variety of dimension $n$. As we pointed out in Remark \ref{estabilitatdepen}, the notion of stability of vector bundles on $X$ strongly depends on the ample divisor. Hence it is natural to consider the following interesting problem:
\begin{problem} What is the difference between the moduli spaces $$M_H=M_{X,H}(r;c_1, \cdots, c_{min\{r,n\}}) \quad \mbox{and} \quad M_L=M_{X,L}(r;c_1, \cdots, c_{min\{r,n\}})$$ where $H$ and $L$ are two different polarizations? \end{problem}
It turns out that the ample cone of $X$ has a chamber structure such that the moduli space $M_{X,H}(r;c_1, \cdots, c_{min\{r,n\}})$ only depends on the chamber of $H$ and the problem consists in determining how the moduli space changes when the polarization crosses a wall between two chambers (see Definition \ref{parets}). These changes have been explicitly described on very few occasions (see for instance \cite{JPAA}, \cite{Michigan}, \cite{nagoya}).
We will focus our attention on the case where $X$ is a Hirzebruch surface and we will deal with stable rank two vector bundles on $X$. Very surprisingly, we will realize that in a huge family of moduli spaces, the difference between two moduli spaces $M_{L}(2;c_1,c_2)$ and $M_{H}(2;c_1,c_2)$ (where $L$ and $H$ are two polarizations sitting in chambers sharing a common wall) is precisely described by suitable Brill-Noether loci. Moreover, we will be able to compute the dimension of these Brill-Noether loci and we will prove that their expected dimension is indeed their actual dimension.
To start with, let us recall the basic results about walls and chambers due to Qin (\cite{Qin93}).
\begin{definition} \label{parets}
(i) Let $\xi \in Num(X) \otimes \RR$. We define
\[ {\mathcal W}^{\xi} := C_X \cap \{ x \in Num(X) \otimes \RR | x \xi =0 \}. \] \noindent (ii) Define ${ {\mathcal W}}(c_1,c_2)$ as the set whose elements consist of ${\mathcal W}^{\xi}$, where $\xi$ is the numerical equivalence class of a divisor $D$ on $X$ such that ${\mathcal O}_X(D+c_1)$ is divisible by 2 in $Pic(X)$, and that \[ D^2 <0 ;\hspace{8mm} c_2 + \frac{D^2-c_1^2}{4}= [Z] \] for some locally complete intersection codimension-two cycle $Z$ in $X$.
\noindent (iii) A wall of type $(c_1,c_2)$ is an element in ${ {\mathcal W}}(c_1,c_2)$. A chamber of type $(c_1,c_2)$ is a connected component of $C_X \setminus{ {\mathcal W}}(c_1,c_2)$. A $\ZZ$-chamber of type $(c_1,c_2)$ is the intersection of $Num(X)$ with some chamber of type $(c_1,c_2)$.
\noindent (iv) A face of type $(c_1,c_2)$ is ${ F}= {\mathcal W}^{\xi} \cap {\overline C}$, where ${\mathcal W}^{\xi}$ is a wall of type $(c_1,c_2)$ and $C$ is a chamber of type $(c_1,c_2)$. \end{definition}
We say that a wall ${\mathcal W}^{\xi}$ of type $(c_1,c_2)$ separates two polarizations $L$ and $L'$ if, and only if, $\xi L <0< \xi L'$.
\begin{remark} \label{nomes:depen} In \cite{Qin93}; Corollary 2.2.2 and Remark 2.2.6, Qin proves that the moduli space $M_{X,L}(2;c_1,c_2)$ only depends on the chamber of $L$ and that the study of moduli spaces of rank two vector bundles stable with respect to a polarization lying on walls may be reduced to the study of moduli spaces of rank two vector bundles stable with respect to a polarization lying on $\ZZ$-chambers. \end{remark}
\begin{definition} \label{E:xi} Let $\xi $ be a numerical equivalence class defining a wall of type $(c_{1},c_{2})$. We define ${\mathcal E}_{\xi }(c_{1},c_{2})$ as the quasi-projective variety parameterizing rank 2 vector bundles $E$ on $X$ given by an extension \[ 0 \rightarrow {\mathcal O}_X(D) \rightarrow E \rightarrow
{\mathcal O}_X(c_{1}-D)\otimes I_{Z} \rightarrow 0 \] where $D$ is a divisor with $2D-c_{1}\equiv \xi $ and $Z$ is a locally complete intersection 0-cycle of length $c_{2}+(\xi ^{2}-c_{1}^{2})/4$. Moreover, we require that $E$ is not given by the trivial extension when $\xi ^{2}=c_{1}^{2}-4c_{2}$. \end{definition}
\begin{remark} \label{Qin} By \cite{Qin93}; Theorem 1.2.5, if $L_{1}$ and $L_{2}$ are two ample divisors on $X$ and $E$ is a rank 2 vector bundle on $X$ which is $L_{1}$-stable but $L_{2}$-unstable, then we have $E \in {\mathcal E}_{\xi }(c_{1},c_{2})$ where $\xi $ defines a non-empty wall of type $(c_{1},c_{2})$ separating $L_{1}$ and $L_{2}$ (i.e. $\xi L_{1}<0<\xi L_{2}$; moreover, we can consider the ample divisor $L:=(\xi L_{2})L_{1}-(\xi L_{1})L_{2}$ on $X$ and we have $L \xi =0$). More can be said, by \cite{Qin93}; Theorem 1.3.3, given $L_1$ and $L_2$ two polarizations lying on chambers $C_1$ and $C_2$, sharing a common wall,
we have
\begin{equation}\label{ratse1}
M_{L_1}(2;c_{1},c_2)=(M_{L_2}(2;c_{1},c_2) \setminus \sqcup_{\xi}
{\mathcal E}_{\xi }(c_{1},c_2)) \sqcup ( \sqcup_{\xi}{\mathcal E}_{-\xi}(c_1,c_2)),
\end{equation}
where $\xi$ satisfies $\xi L_1>0$ and runs over all numerical equivalence classes which define the common wall ${\mathcal W}$. \end{remark}
In the next result, we have summarized well-known properties of some moduli spaces of stable rank two vector bundles on projective surfaces that we will need later on (see for instance \cite{JPAA}; Proposition 3.11).
\begin{proposition} \label{moduli} Let $X$ be a smooth, projective rational surface with effective anticanonical line bundle and let $H$ be an ample line bundle on $X$. Then, the moduli space $M_{X,H}(2;c_1,c_2)$ of rank two, $H$-stable vector bundles $E$ on $X$ with fixed Chern classes $c_i(E)=c_i$ is either empty or a smooth irreducible variety of dimension \[\dim M_{X,H}(2;c_1,c_2)= 4c_2- c_1^2-3. \] \end{proposition}
For any integer $e \geq 0$, let $X=\FF_e \cong \PP({\mathcal O}_{\PP^1} \oplus {\mathcal O}_{\PP^1}(-e))$ be a non-singular Hirzebruch surface. We denote by $C_0$ and $F$ the standard basis of $Pic(X) \cong \ZZ \oplus \ZZ$ such that $C_0^2=-e$, $F^2=0$ and $C_{0}F=1$. The canonical divisor is given by
\[ K_{X}=-2C_0-(e+2)F \] and it is well known that a divisor $L=aC_0+bF$ on $X$ is ample if, and only if, it is very ample, if and only if, $a>0$ and $b>ae$, and that $D=a'C_0+b'F$ is effective if and only if $a' \geq 0$ and $b' \geq 0$ (\cite{hart}; V, Corollary 2.18).
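For the computations below it may help to record the resulting intersection formula on $\FF_e$, an immediate consequence of the relations just listed:
\[ (aC_0+bF)\cdot(a'C_0+b'F)=-aa'e+ab'+a'b, \qquad \mbox{so that in particular} \quad (aC_0+bF)^2=-a^2e+2ab. \]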
Given an integer $c_2>0$ and $\alpha \in \{0,1 \}$, we denote by $M_L(2;C_0+\alpha F,c_2)$ the moduli space of rank two, $L$-stable vector bundles $E$ on $X$ with fixed Chern classes $c_1(E)=C_0+\alpha F$ and $c_2(E)=c_2$. For any integer $n$, $1 \leq n \leq c_2-1$, consider the following ample divisor on $X$ \[ L_n:=C_0+(e+2c_2-\alpha-2n+1)F.\] Notice that the equivalence class \[ \xi_n:= C_0-(2c_2-\alpha-2n)F \] defines a non-empty wall ${\mathcal W}^{\xi_n}$ of type $(C_0+ \alpha F,c_2)$ which separates the ample divisors $L_{n}$ and $L_{n+1}$. Indeed, $\xi_{n}^2=-e-2(2c_2-\alpha-2n)<0$, $\xi_n+c_1=2(C_0-(c_2-\alpha-n)F)$ is divisible by two in $Pic(X)$, $c_2 + \frac{\xi_n^2-c_1^2}{4}=n>0$ and \[ L_{n+1}\xi_n=-1<0< 1= L_{n}\xi_n.\] We are led to pose the following problem
\begin{problem} Determine the difference between the moduli spaces \[ M_{L_n}(2;C_0+\alpha F,c_2) \quad \mbox{and} \quad M_{L_{n+1}}(2;C_0+\alpha F,c_2).\] \end{problem}
\begin{remark} Given an ample line bundle $L=aC_0 + bF$ on $X$, we can represent $L$ as a point of coordinates $(a,b)$ in the plane. The following picture gives us an idea of the situation we are discussing:
\begin{picture}(180,160)(-100,-10) \put(0,0){\line(0,1){160}}
\put(0,0){\line(1,2){70}}
\put(0,0){\line(2,1){170}} \put(0,0){\line(1,1){130}}
\put(0,0){\line(1,0){190}} \put(-8,150){\makebox(0,0){$C_0$}} \put(85,115){\makebox(0,0){$\bullet L_n$}} \put(140,140){\makebox(0,0){${\mathcal W}^{\xi_n}$}}
\put(130,90){\makebox(0,0){$\bullet L_{n+1} $}}
\put(200,-3){\makebox(0,0){$F$}} \end{picture}
\vskip 5mm
\end{remark}
The next result completely solves this problem. We are going to prove that these differences are surprisingly controlled by the following two Brill-Noether loci: \[ W^1_{L_{n+1}}(2;\bar{c_1},n)\subset M_{L_{n+1}}(2;\bar{c_1},n)\quad \mbox{and} \quad W^1_{L_{n}}(2;\tilde{c_1},n) \subset M_{L_{n}}(2;\tilde{c_1},n)\] where $\bar{c_1}=-C_0+(2c_2-\alpha -2n)F$ and $\tilde{c_1}=C_0+(\alpha+2n-2c_2)F$.
\begin{theorem} Let $X=\FF_e$ be a smooth Hirzebruch surface, $c_2 >1$ an integer and $\alpha \in \{0,1 \}$. For any integer $n$, $1 \leq n \leq c_2-1$, consider the ample divisor $L_n:=C_0+(e+2c_2-\alpha-2n+1)F$. Then \[ M_{L_n}(2;C_0+\alpha F,c_2) \cong (M_{L_{n+1}}(2;C_0+\alpha F,c_2) \setminus W^1_{L_{n+1}}(2;\bar{c_1},n)) \sqcup W^1_{L_{n}}(2;\tilde{c_1},n).\] Moreover, the Brill-Noether loci \[ W^1_{L_{n+1}}(2;\bar{c_1},n)\subset M_{L_{n+1}}(2;\bar{c_1},n)\quad \mbox{and} \quad W^1_{L_{n}}(2;\tilde{c_1},n) \subset M_{L_{n}}(2;\tilde{c_1},n)\] do have the expected dimension $\rho_{L_{n+1}}^1(2;\bar{c_1},n)$ and $\rho_{L_n}^1(2;\tilde{c_1},n)$, respectively. \end{theorem} \begin{proof} We have already seen that the numerical class \[ \xi_n:= C_0-(2c_2-\alpha-2n)F \] defines a non-empty wall ${\mathcal W}^{\xi_n}$ of type $(C_0+ \alpha F,c_2)$ which separates the ample divisors $$L_{n}=C_0+(e+2c_2-\alpha-2n+1)F \quad \mbox{and}\quad L_{n+1}=C_0+(e+2c_2-\alpha-2n-1)F.$$ Hence, by Remark \ref{Qin} we have:
\begin{equation}
\label{des1} \hspace{4mm}
M_{L_n}(2;C_0+ \alpha F,c_2)=(M_{L_{n+1}}(2;C_0+ \alpha F,c_2) \setminus \sqcup_{\xi}
{\mathcal E}_{\xi }(c_{1},c_2)) \sqcup ( \sqcup_{\xi}{\mathcal E}_{-\xi}(c_1,c_2)),
\end{equation} where $\xi$ satisfies $\xi L_n > 0$ and runs over all numerical equivalence classes which define the common wall ${\mathcal W}^{\xi_n}$ separating $L_{n}$ and $L_{n+1}$.
\noindent {\bf Claim 1:} $\xi_n$ is the only equivalence class which defines the common wall ${\mathcal W}^{\xi_n}$ and verifies $\xi_n L_n> 0$.
\noindent {\bf Proof of Claim 1:} Let $L=aC_0+bF \in {\mathcal W}^{\xi_n}$ be an ample divisor lying on the common wall defined by $\xi_n$. By definition, $$0=L \xi_n=-ae-a(2c_2-\alpha-2n)+b$$ and thus $L=aC_0+a(2c_2-\alpha-2n+e)F$. Assume there exists a numerical equivalence class $\xi=\sigma C_0+ \gamma F$ defining the common wall ${\mathcal W}^{\xi_n}$ and such that $\xi L_n> 0$. In particular, since $L=C_0+(2c_2-\alpha-2n+e)F \in {\mathcal W}^{\xi_n}$, we have \[ 0=\xi L= (\sigma C_0+ \gamma F)(C_0+(2c_2-\alpha-2n+e)F) =-\sigma e + \sigma (2c_2-\alpha-2n+e)+\gamma \] and hence $\gamma=-\sigma (2c_2-\alpha-2n)$. On the other hand, since $\xi$ defines a non-empty wall, \[ 0 \leq c_2 + \frac{\xi^2-c_1^2}{4}=c_2-\frac{\sigma^2}{2}(2c_2-\alpha-2n) +\frac{e}{4}(1-\sigma^2)-\frac{\alpha}{2}, \] which gives us $\sigma= \pm 1$. Finally, $\xi L_n > 0$ implies $\sigma = 1$ and thus $\xi=C_0-(2c_2-\alpha-2n)F=\xi_n$, which proves Claim 1.
Applying Claim 1, equation (\ref{des1}) turns out to be \begin{equation}
\label{des2} \hspace{4mm}
M_{L_n}(2;C_0+ \alpha F,c_2)=(M_{L_{n+1}}(2;C_0+ \alpha F,c_2) \setminus
{\mathcal E}_{\xi_n }(c_{1},c_2)) \sqcup {\mathcal E}_{-\xi_n}(c_1,c_2).
\end{equation}
Notice that \[\begin{array}{ll}\tilde{c_1}L_n & = (C_0+(\alpha+2n-2c_2)F)(C_0+(e+2c_2-\alpha-2n+1)F ) \\ & = 1\\ & > 2(-4c_2+4n+2 \alpha -e-4) \\ & = 2(-2C_0-(e+2)F)(C_0+(e+2c_2-\alpha-2n+1)F )\\ & = 2 K_X L_n \end{array} \] and \[\begin{array}{ll}\bar{c_1}L_{n+1} & =(-C_0+(2c_2-\alpha-2n)F)(C_0+(e+2c_2-\alpha-2n-1)F )\\ & = 1 \\ & > 2(-4c_2+4n+2 \alpha -e) \\ & = 2(-2C_0-(e+2)F)(C_0+(e+2c_2-\alpha-2n-1)F ) \\ & =2 K_X L_{n+1}. \end{array} \] Thus by Corollary \ref{superficie} the Brill-Noether stratification of the moduli spaces $M_{L_n}(2;\tilde{c_1},n)$ and $M_{L_{n+1}}(2;\bar{c_1},n)$ are defined.
\noindent {\bf Claim 2:} We have: \[ (a) \quad {\mathcal E}_{-\xi_n} \cong W^1_{L_{n}}(2;\tilde{c_1},n), \] \[(b) \quad {\mathcal E}_{\xi_n} \cong W^1_{L_{n+1}}(2;\bar{c_1},n). \]
\noindent {\bf Proof of Claim 2:} First of all notice that Claim 2 is, by definition, equivalent to have \[ (a') \quad
{\mathcal E}_{-\xi_n} \cong \{G \in M_{L_n}(2;C_0+ \alpha F,c_2)| h^0(G(-(c_2-n)F))>0 \},\]
\[ (b') \quad {\mathcal E}_{\xi_n} \cong \{G \in M_{L_{n+1}}(2;C_0+ \alpha F,c_2)| h^0(G(-C_0+(c_2-n-\alpha )F))>0 \}. \] Let us prove $(a')$. If $E \in {\mathcal E}_{-\xi_n}$, then $E$ is given by a non-trivial extension \[ 0 \rightarrow {\mathcal O}_X((c_2-n)F) \rightarrow E \rightarrow
{\mathcal O}_X(C_0-(c_2-n-\alpha)F)\otimes I_{Z} \rightarrow 0 \] where $Z$ is a $0$-dimensional scheme of length
$|Z|=c_2(E(-(c_2-n)F))=n$. Therefore $h^0(E(-(c_2-n)F))>0$. Moreover, it follows from (\ref{des2}) that ${\mathcal E}_{-\xi_n} \subset M_{L_n}(2;C_0+ \alpha F,c_2)$. Now let us see the converse. Let $E
\in \{G \in M_{L_n}(2;C_0+ \alpha F,c_2)| h^0(G(-(c_2-n)F))>0 \}$ and we are going to see that $E \in {\mathcal E}_{-\xi_n}$. Let $s$ be a non-zero section of $E(-(c_2-n)F)$ and let $Y$ be its scheme of zeros. Let $D$ be the maximal effective divisor contained in $Y$. Then $s$ can be regarded as a section of $E(-(c_2-n)F-D)$ and its scheme of zeros has codimension $\geq 2$. Thus, for some effective divisor $D=aC_0+bF$ we have a short exact sequence \[ 0 \rightarrow {\mathcal O}_X((c_2-n)F+D) \rightarrow E \rightarrow
{\mathcal O}_X(C_0-(c_2-n-\alpha)F-D)\otimes I_{Z} \rightarrow 0. \]
By assumption, $E$ is $L_n$-stable. Therefore,
\[ ((c_2-n)F+D)L_n < \frac{c_1(E)L_n}{2}=\frac{(C_0+ \alpha F)L_n}{2},\]
which is equivalent to $a(2c_2-\alpha-2n+1)+b \leq 0$. Since $D$
is an effective divisor $a,b \geq 0$ and hence the only solution
is $a=b=0$. Therefore $D=0$ and $E$ is given by the exact
sequence
\[ 0 \rightarrow {\mathcal O}_X((c_2-n)F) \rightarrow E \rightarrow
{\mathcal O}_X(C_0-(c_2-n-\alpha)F)\otimes I_{Z} \rightarrow 0 \] which proves that $E \in {\mathcal E}_{-\xi_n}$.
Let us now prove $(b')$. By Remark (\ref{des2}), ${\mathcal E}_{\xi_n} \subset M_{L_{n+1}}(2;C_0+ \alpha F,c_2)$ and since any $E \in {\mathcal E}_{\xi_n}$ is given by a non-trivial extension of type \[ 0 \rightarrow {\mathcal O}_X(C_0-(c_2-n-\alpha)F) \rightarrow E \rightarrow
{\mathcal O}_X((c_2-n)F)\otimes I_{Z} \rightarrow 0, \]
for any $E \in {\mathcal E}_{\xi_n}$, we have $h^0(E(-C_0+(c_2-n-\alpha )F))>0$. Conversely, given $$E \in \{G \in M_{L_{n+1}}(2;C_0+ \alpha F,c_2)|
h^0(G(-C_0+(c_2-n-\alpha )F))>0\}$$ we are going to see that $E \in {\mathcal E}_{\xi_n}$. Let $s$ be a non-zero section of $E(-C_0+(c_2-n-\alpha )F)$ and let $Y$ be its scheme of zeros. Let $D$ be the maximal effective divisor contained in $Y$. Then $s$ can be regarded as a section of $E(-C_0+(c_2-n-\alpha )F-D)$ and its scheme of zeros has codimension $\geq 2$. Thus, for some effective divisor $D=aC_0+bF$ we have a short exact sequence \[ 0 \rightarrow {\mathcal O}_X(C_0-(c_2-n-\alpha)F+D) \rightarrow E \rightarrow
{\mathcal O}_X((c_2-n)F-D)\otimes I_{Z} \rightarrow 0. \]
By assumption, $E$ is $L_{n+1}$-stable. Therefore,
\[ (C_0-(c_2-n-\alpha)F+D)L_{n+1} < \frac{c_1(E)L_{n+1}}{2}=\frac{(C_0+ \alpha F)L_{n+1}}{2},\]
which is equivalent to $a(2c_2-\alpha-2n-1)+b \leq 0$. Since $D$
is an effective divisor this implies that $a=b=0$ and thus $E$ is given by the exact
sequence
\[ 0 \rightarrow {\mathcal O}_X(C_0-(c_2-n-\alpha)F) \rightarrow E \rightarrow
{\mathcal O}_X((c_2-n)F)\otimes I_{Z} \rightarrow 0 \] which proves that $E \in {\mathcal E}_{\xi_n}$.
Putting together Claim 2 and (\ref{des2}) we obtain \[ M_{L_n}(2;C_0+\alpha F,c_2) \cong (M_{L_{n+1}}(2;C_0+\alpha F,c_2) \setminus W^1_{L_{n+1}}(2;\bar{c_1},n)) \sqcup W^1_{L_{n}}(2;\tilde{c_1},n).\]
Finally let us see that the dimensions of the Brill-Noether loci $W^1_{L_{n}}(2;\tilde{c_1},n)$ and $W^1_{L_{n+1}}(2;\bar{c_1},n)$ coincide with the expected one. To this end, by Claim 2, it suffices to prove that \[ (i) \quad \dim({\mathcal E}_{-\xi_n})=\rho_{L_{n}}^1(2;\tilde{c_1},n), \] \[(ii) \quad \dim({\mathcal E}_{\xi_n})= \rho_{L_{n+1}}^1(2;\bar{c_1},n). \] By construction,
\[\begin{array}{ll}\dim {\mathcal E}_{-\xi_n} & = ext^1(I_Z(C_0-(c_2-n-\alpha)F),{\mathcal O}_X((c_2-n)F))+2|Z|-
h^0E(-(c_2-n)F) \\ & = h^1(I_Z(-C_0-(2c_2-2n-\alpha+e+2)F))+2|Z|- h^0E(-(c_2-n)F) \end{array} \] being $E \in {\mathcal E}_{-\xi_n}$. Since $h^i({\mathcal O}_X(-C_0-(2c_2-2n-\alpha+e+2)F))=0$, for $i=0,1$, from the cohomological exact sequence associated to \[0 \rightarrow I_Z(-C_0-(2c_2-2n-\alpha+e+2)F) \rightarrow {\mathcal O}_X(-C_0-(2c_2-2n-\alpha+e+2)F)\]\[ \rightarrow {\mathcal O}_Z(-C_0-(2c_2-2n-\alpha+e+2)F)\rightarrow 0 \] we deduce that
$$h^1(I_Z(-C_0-(2c_2-2n-\alpha+e+2)F))=h^0({\mathcal O}_Z(-C_0-(2c_2-2n-\alpha+e+2)F))=|Z|.$$ For any $E \in {\mathcal E}_{-\xi_n}$, we have $h^0E(-(c_2-n)F)=1$ and
$c_2(E(-(c_2-n)F))=n=|Z|$. Putting altogether we get \[ \dim {\mathcal E}_{-\xi_n}=3n-1.\] On the other hand, by Riemann-Roch theorem, \[\begin{array}{ll} \chi(2;\tilde{c_1},n) & =2 +\frac{(\tilde{c_1})(2C_0+(e+2)F)}{2}+\frac{(\tilde{c_1})^2}{2}-n \\ & =2 +\frac{(C_0+(\alpha+2n-2c_2)F)(2C_0+(e+2)F)}{2}+\frac{(C_0+(\alpha+2n-2c_2)F)^2}{2}-n \\ &=3n-4c_2+2 \alpha -e +3, \end{array}\] and by Proposition \ref{moduli} $\dim M_{L_n}(2;\tilde{c_1},n)=4c_2-(2 \alpha-e)-3$. Therefore, \[\rho_{L_{n}}^1(2;\tilde{c_1},n)=\dim M_{L_n}(2;\tilde{c_1},n)-1(1-\chi(2,\tilde{c_1},n))=3n-1 \] which proves $(i)$.
Let us now prove $(ii)$. By construction,
\[\begin{array}{ll}\dim {\mathcal E}_{\xi_n} & = ext^1(I_Z((c_2-n)F),{\mathcal O}_X(C_0-(c_2-n-\alpha)F))+2|Z|- h^0E(-C_0+(c_2-n-\alpha)F) \\ &=
h^1(I_Z(-3C_0+(2c_2-2n-\alpha-e-2)F))+2|Z|-h^0E(-C_0+(c_2-n-\alpha)F) \end{array} \] being $E \in {\mathcal E}_{\xi_n}$. Since $h^i({\mathcal O}_X(-3C_0+(2c_2-2n-\alpha-e-2)F))=0$, for $i=0,2$, $h^1({\mathcal O}_X(-3C_0+(2c_2-2n-\alpha-e-2)F)) = -\chi({\mathcal O}_X(-3C_0+(2c_2-2n-\alpha-e-2)F))=-\chi$ and by Riemann-Roch theorem $$ \begin{array}{ll} \chi &=1+\frac{(-3C_0+(2c_2-2n-\alpha-e-2)F)(2C_0+(e+2)F)}{2}+\frac{(-3C_0+(2c_2-2n-\alpha-e-2)F)^2}{2} \\ &= 4c_2-4n-2 \alpha-2+e. \end{array}$$ From the cohomological exact sequence associated to \[0 \rightarrow I_Z(-3C_0+(2c_2-2n-\alpha-e-2)F) \rightarrow {\mathcal O}_X(-3C_0+(2c_2-2n-\alpha-e-2)F)\] \[ \rightarrow {\mathcal O}_Z(-3C_0+(2c_2-2n-\alpha-e-2)F)\rightarrow 0 \] we deduce that $$\begin{array}{ll}h^1(I_Z(-3C_0+(2c_2-2n-\alpha-e-2)F)) &= h^0({\mathcal O}_Z(-C_0-(2c_2-2n-\alpha+e+2)F)) \\ & +h^1({\mathcal O}_X(-3C_0+(2c_2-2n-\alpha-e-2)F))
\\ &=|Z|+4c_2-4n-2 \alpha-2+e. \end{array}$$ For any $E \in {\mathcal E}_{\xi_n}$, we have $h^0E(-C_0+(c_2-n-\alpha)F)=1$ and
$c_2(E(-C_0+(c_2-n-\alpha)F)=n=|Z|$. Putting altogether we get \[ \dim {\mathcal E}_{\xi_n}=4c_2-n+e-2\alpha-3.\] On the other hand, by Riemann-Roch theorem, \[\chi(2;\bar{c_1},n)= 2+ \frac{(\bar{c_1})(2C_0+(e+2)F)}{2}+\frac{\bar{c_1}^2}{2}-n=1-n, \] and by Proposition \ref{moduli} $\dim M_{L_n}(2;\bar{c_1},n)=4c_2-(2 \alpha-e)-3$. Thus, \[\rho_{L_{n+1}}^1(2;\bar{c_1},n)=\dim M_{L_{n+1}}(2;\bar{c_1},n)-1(1-\chi(2;\bar{c_1},n))= 4c_2-n+e-2\alpha-3\] which proves $(ii)$. \end{proof}
\section{Brill-Noether loci and Instanton bundles}
In this section, we will address the three problems posed in section two for the case of mathematical instanton bundles on $\PP^3$. More precisely, we will prove that the Brill-Noether locus of instanton bundles on $\PP^3$ is non-empty if and only if the corresponding generalized Brill-Noether number is non-negative and in this case the Brill-Noether locus is a smooth irreducible variety of the expected dimension.
Let $MI(n)$ be the moduli space of mathematical instanton bundles $E$ over $\PP^3$ with $c_1(E)=2$ and $c_2(E)=n$ and verifying the instanton condition $H^1(E(-3))=0$. It is known that $MI(n)$ is non-singular and irreducible for $n \leq 6$ (see \cite{B1} for $n=2$, \cite{H2} for $n=3$, \cite{ES} for $n=4$, \cite{B2} and \cite{LP} for $n=5$ and \cite{CTT} for $n=6$) and it has been conjectured that $MI(n)$ is non-singular and irreducible. The only known component $MI_0(n)$ of $MI(n)$ is the one made of vector bundles which are generalizations of (a twist of) the ones associated to $n+1$ skew lines in $\PP^3$ ('t Hooft bundles) and we know that $MI_0(n)$ is a generically smooth variety of dimension $8n-11$.
Since for any instanton bundle $E\in MI_0(n)$, we have $H^{i}(\PP^3, E)=0$ for $i\ge 2$, Theorem \ref{construccio} applies and the Brill-Noether stratification of $MI_0(n)$ is well defined. So, for any $k\ge 1$, we can study the Brill-Noether loci
\[ W^k:= \{E \in MI_0(n) | h^0(E) \geq k \} \subset MI_0(n).\] \noindent We have:
\begin{proposition} \label{instanton} Assume $n>13$. With the above notations we have \begin{itemize} \item[(i)] $W^k \neq \emptyset$ if and only if $\rho^k(2;2,n) \geq 0$ if and only if $k < 3$. \item[(ii)] $W^1$ is a smooth rational irreducible quasi-projective variety of the expected dimension $5n-1$. \item[(iii)] $W^2$ is a smooth rational irreducible quasi-projective variety of the expected dimension $2n+7$. \end{itemize} \end{proposition} \begin{proof} (i) Let us first compute $\rho^k(2;2,n)$. To this end, we take $E$ a mathematical instanton bundle with Chern classes $(c_1(E),c_2(E))=(2,n)$. It is well-known that $E(-1)$ is the cohomology bundle of a monad of the following type $$ 0 \longrightarrow {{\mathcal O}}_{\PP^3}(-1)^{n-1} \mapright{\alpha }{{\mathcal O}}_{\PP^3}^{2n} \mapright{\beta} {{\mathcal O}}_{\PP^3}(1)^{n-1} \longrightarrow 0.$$ Hence we have two exact sequences $$ 0 \longrightarrow {\mathcal K}=\ker (\beta) \rightarrow {{\mathcal O}}_{\PP^3}^{2n} \mapright{\beta} {{\mathcal O}}_{\PP^3}(1)^{n-1} \longrightarrow 0,$$ and $$ 0\longrightarrow {{\mathcal O}}_{\PP^3}(-1)^{n-1} \mapright{\alpha } {\mathcal K} \rightarrow E(-1) \longrightarrow 0$$ which allows us to compute $\chi(E)$. Indeed, \[\chi(E)= \chi({\mathcal K}(1))-(n-1)=2n\chi({\mathcal O}_{\PP^3}(1))-(n-1)\chi({\mathcal O}_{\PP^3}(2))-(n-1)=-3n+11. \] Since $\dim MI_0(n)=8n-11$, we deduce that \[\begin{array}{ll} \rho^k(2;2,n) & =\dim MI_0(n)-k(k- \chi(E)) \\ & =8n-11-k(k+3n-11) \\ &=n(8-3k)+k(11-k)-11. \end{array} \] So, $\rho^1(2;2,n)=5n-1$, $ \rho^2(2;2,n)=2n+7$ and $\rho^k(2;2,n)<0$ for $k \geq 3$ provided $n \geq 14$. By a result in \cite{BG}, for any $E \in MI_0(n)$, $h^0(E) \leq 2$. On the other hand, any instanton bundle $E$ associated to $n+1$ general skew lines in $\PP^3$ satisfies $h^0(E)=1$ ('t Hooft bundles) and any instanton bundle $E$ associated to $n+1$ skew lines on a smooth quadric $Q$ in $\PP^3$ satisfies $h^0(E)=2$ (special 't Hooft bundles). Putting altogether we have $W^k \neq \emptyset$ if and only if $k \leq 2$, if and only if $\rho^k(2;2,n) >0$.
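For instance, at the smallest value $n=14$ allowed by the hypothesis these numbers are
\[ \rho^1(2;2,14)=5\cdot 14-1=69, \qquad \rho^2(2;2,14)=2\cdot 14+7=35, \qquad \rho^3(2;2,14)=-14+13=-1<0. \]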
(ii) and (iii) The expected dimensions are $\rho^1(2;2,n)=5n-1$ and $ \rho^2(2;2,n)=2n+7$. Hence, the results follow from the fact that $W^1$ (resp. $W^2$) is nothing but the variety of 't Hooft bundles (resp. special 't Hooft bundles), and it is well known that the variety of 't Hooft bundles (resp. special 't Hooft bundles) is a smooth, irreducible and rational variety of dimension $5n-1$ (resp. $2n+7$).
\end{proof}
\end{document}
\begin{document}
\title{A Gap Theorem for Self-shrinkers of the Mean Curvature Flow in Arbitrary Codimension\thanks {The first author was partially supported by NSF grant DMS-0909581; the second author was supported by NSFC 10971110.} } \author{ Huai-Dong Cao and Haizhong Li} \date{} \maketitle
\begin{abstract}
In this paper, we prove a classification theorem for self-shrinkers of the mean curvature flow with $|A|^2\le 1$ in arbitrary codimension. In particular, this implies a gap theorem for self-shrinkers in arbitrary codimension. \end{abstract}
\section {Introduction}
Let $x:M^n\to \mathbb R^{n+p}$ be an $n$-dimensional submanifold in the $(n+p)$-dimensional Euclidean space. If we let the position vector $x$ evolve in the direction of the mean curvature vector $\bf{H}$, then it gives rise to a solution to the mean curvature flow: \begin{equation}\label{1-1} x:M\times [0,T)\rightarrow \mathbb R^{n+p}, \qquad \frac{\partial x}{\partial t} = \bf{H} \end{equation}
We call the immersed manifold $M$ a self-shrinker if it satisfies the quasilinear elliptic system: \begin{equation}\label{1-2} \bf{H}=-x^{\perp} \end{equation} where $\perp$ denotes the projection onto the normal bundle of $M$.
Self-shrinkers are an important class of solutions to the mean curvature flow \eqref{1-1}. Not only are they shrinking homothetically under the mean curvature flow (see, e.g., \cite{CM}), but they also describe possible blow-ups at a given singularity of the flow.
In the curve case, U. Abresch and J. Langer \cite{AL} gave a complete classification of all solutions to \eqref{1-2}. These curves are the so-called Abresch-Langer curves.
In the hypersurface case (i.e. codimension 1), K. Ecker and G. Huisken \cite{EH} proved that if an entire graph with polynomial volume growth is a self-shrinker, then it is necessarily a hyperplane. Recently L. Wang \cite{LW} removed the condition of {\it polynomial volume growth} in Ecker-Huisken's Theorem. Let $|A|^2$ denote the norm square of the second fundamental form of $M$. In \cite{H2} and \cite{H3}, G. Huisken proved a classification theorem that $n$-dimensional self-shrinkers satisfying (1.2) in
$\mathbb R^{n+1}$ with non-negative mean curvature, bounded $|A|$, and polynomial volume growth are $\Gamma\times \mathbb R^{n-1}$, or $\mathbb S^m (\sqrt{m})\times\mathbb R^{n-m}$ ($0\leq m\leq n$). Here, $\Gamma$ is an Abresch-Langer curve and $\mathbb S^m(\sqrt{m})$ is an $m$-dimensional sphere of radius $\sqrt{m}$. Recently, T.H. Colding and W.P. Minicozzi \cite{CM} showed that G. Huisken's classification theorem still holds without the assumption that {\it $|A|$ is bounded.} Moreover, they showed that the only embedded entropy stable self-shrinkers with polynomial volume growth in $\mathbb R^{n+1}$ are hyperplanes, $n$-spheres, and cylinders.
In the case of arbitrary codimension, K. Smoczyk \cite{Smo} proved the following two results: (i) For any $n$-dimensional compact self-shrinker $M^n$ in $\mathbb R^{n+p}$ satisfying (1.2), if ${\bf H}\not=0$ and the unit mean curvature vector field $\nu={\bf H}/|{\bf H}|$ is parallel in the normal bundle, then $M^n=\mathbb S^n(\sqrt{n})$ in $\mathbb R^{n+1}$; (ii) For any $n$-dimensional complete self-shrinker $M^n$ in $\mathbb R^{n+p}$ satisfying (1.2) with ${\bf H}\not= 0$, if the unit mean curvature vector field $\nu={\bf H}/|{\bf H}|$ is parallel in the normal bundle and $M^n$ has uniformly bounded geometry, then $M^n$ is either $\Gamma\times \mathbb R^{n-1}$, or $N^m\times \mathbb R^{n-m}$. Here $\Gamma$ is an Abresch-Langer curve and $N^m$ is an $m$-dimensional minimal submanifold in $\mathbb S^{m+p-1}(\sqrt{m})$. On the other hand, Q. Ding and Z. Wang \cite{DW} have recently extended the result of L. Wang \cite{LW} to the higher codimensional case under the condition of {\it flat normal bundle}.
Very recently, based on an identity of Colding and Minicozzi (see (9.42) in \cite{CM}), N. Q. Le and N. Sesum \cite{LS} proved a gap theorem (cf. Theorem 1.7 in \cite{LS})
for self-shrinkers of codimension 1: if a hypersurface $M^n\subset \mathbb R^{n+1}$ is a smooth complete embedded self-shrinker without boundary and with polynomial volume growth, and satisfies $|A|^2<1$, then $M^n$ is a hyperplane. Motivated by this result of Le and Sesum, we prove in this paper the following classification theorem for self-shrinkers in arbitrary codimensions:
\noindent {\bf Theorem 1.1} {\it If $M^n \to \mathbb R^{n+p}$ ($p\ge 1$) is an $n$-dimensional complete self-shrinker without boundary and with polynomial volume growth, and satisfies \begin{equation}
|A|^2\leq 1, \end{equation} then $M$ is one of the following:
(i) a round sphere $\mathbb S^n(\sqrt{n})$ in $\mathbb R^{n+1}$,
(ii) a cylinder $\mathbb S^m(\sqrt{m})\times \mathbb R^{n-m}$, $1\leq m\leq n-1$, in $\mathbb R^{n+1}$,
(iii) a hyperplane in $\mathbb R^{n+1}$.
\noindent Here $|A|^2$ is the norm square of the second fundamental form of $M$.}
As an immediate consequence, we have the following gap theorem valid for arbitrary codimensions:
\noindent {\bf Corollary 1.1} {\it If $M^n\to \mathbb R^{n+p}$ ($p\ge 1$) is a smooth complete embedded self-shrinker without boundary and with polynomial volume growth, and satisfies \begin{equation}
|A|^2<1, \end{equation} then $M$ is a hyperplane in $\mathbb R^{n+1}$.}
\noindent {\bf Remark 1.1} We expect that the condition on volume growth in Theorem 1.1 and Corollary 1.1 can be removed. In fact, it was conjectured by the first author that a complete self-shrinker automatically has polynomial volume growth. Note that D. Zhou and the first author \cite{CZ} proved that a complete Ricci shrinker necessarily has at most Euclidean volume growth.
\noindent {\bf Remark 1.2} Shortly after our work was finished, Q. Ding and Y. L. Xin \cite{DX} proved that any complete non-compact {\it properly immersed} self-shrinker $M^n$ in $\mathbb R^{n+p}$ has at most Euclidean volume growth.
\noindent {\bf Acknowledgements.} Part of the work was carried out while the first author was visiting the Mathematical Sciences Center of Tsinghua University during fall 2010. He would like to thank the Center for its hospitality and support. The authors would also like to thank the referee for helpful comments which make the proofs of Lemma 3.1 and Proposition 5.1 more readable.
\section { Preliminaries}
In this section, we recall some formulas and notations for submanifolds in Euclidean space by using the method of moving frames.
Let $x:M^n\to \mathbb R^{n+p}$ be an $n$-dimensional submanifold of the
$(n+p)$-dimensional Euclidean space $\mathbb R^{n+p}$. Let $\{e_1,\cdots,e_{n}\}$ be a local orthonormal basis of $M$ with respect to the induced metric, and $\{\theta_1,\cdots, \theta_n\}$ be their dual 1-forms. Let $e_{n+1},\cdots, e_{n+p}$ be the local unit orthonormal normal vector fields.
In this paper we make the following convention on the range of indices:
$$ 1\leq i,j,k\leq n; \qquad n+1\leq \alpha,\beta,\gamma\leq n+p. $$
Then we have the following structure equations, \begin{equation}\label{2-1} dx=\sum\limits_i \theta_i e_i, \end{equation}
\begin{equation}\label{2-2} de_i=\sum\limits_j\theta_{ij}e_j+\sum\limits_{\alpha,j} h^\alpha_{ij} \theta_je_\alpha, \end{equation}
\begin{equation}\label{2-3} de_\alpha=-\sum\limits_{i,j}h^\alpha_{ij}\theta_j e_i+\sum \limits_\beta \theta_{\alpha\beta}e_\beta, \end{equation} where $h^\alpha_{ij}$ denote the components of the second fundamental
form of $M$ and $\theta_{ij}$, $\theta_{\alpha\beta}$ denote the connections of the tangent bundle and normal bundle of $M$, respectively.
The Gauss equations are given by \begin{equation}\label{al}R_{ijkl}=\sum_{\alpha} (h_{ik}^{\alpha}h_{jl}^{\alpha}-h_{il}^{\alpha}h_{jk}^{\alpha}) \end{equation} \begin{equation}\label{am}R_{ik}=\sum_{\alpha} H^{\alpha}h_{ik}^{\alpha}-\sum_{\alpha,j}h_{ij}^{\alpha}h_{jk}^{\alpha} \end{equation}
\begin{equation}\label{an}R =H^{2}-|A|^2 \end{equation} where $R$ is the scalar curvature of $M$,
$|A|^2=\sum\limits_{\alpha,i,j}(h^\alpha_{ij})^2$ is the norm square of the second fundamental form, ${\bf H}=\sum\limits_{\alpha}H^\alpha e_\alpha=\sum\limits_{\alpha}
(\sum\limits_i h^\alpha_{ii})e_\alpha$ is the mean curvature vector field, and $H=|\bf{H}|$ is the mean curvature of $M$.
The Codazzi equations are given by (see, e.g., \cite{Li})
\begin{equation}\label{c4}h^\alpha_{ijk}=h^\alpha_{ikj}, \end{equation} where the covariant derivative of $h^\alpha_{ij}$ is defined by \begin{equation}\label{2-8} \sum_{k}h^\alpha_{ijk}\theta_k=dh^\alpha_{ij}+\sum_k h^\alpha_{kj}\theta_{ki} +\sum_k h^\alpha_{ik}\theta_{kj}+\sum_\beta h^\beta_{ij}\theta_{\beta\alpha}. \end{equation}
If we denote by $R_{\alpha\beta i j}$ the curvature tensor of the normal connection $\theta_{\alpha\beta}$ in the normal bundle of $x: M \rightarrow \mathbb R^{n+p}$, then the Ricci equations are \begin{equation}\label{aj}R_{\alpha\beta i j}=\sum_{k} (h_{ik}^{\alpha}h_{kj}^{\beta}-h_{jk}^{\alpha}h_{ki}^{\beta}). \end{equation}
By exterior differentiation of $(2.8)$, we have the following Ricci identities (see, e.g., \cite{Li}) \begin{equation} h^\alpha_{ijkl}-h^\alpha_{ijlk}=\sum\limits_m h^\alpha_{mj}R_{mikl} +\sum\limits_m h^\alpha_{im}R_{mjkl}+\sum\limits_\beta h^\beta_{ij} R_{\beta\alpha kl}. \end{equation}
We define the first and second covariant derivatives, and Laplacian of the mean curvature vector field ${\bf H}=\sum\limits_\alpha H^\alpha e_\alpha$ in the normal bundle $N(M)$ as follows (cf. \cite{CL}, \cite{Li})
\begin{equation} \sum\limits_i H^\alpha_{,i}\theta_i=dH^\alpha+\sum\limits_\beta H^\beta\theta_{\beta\alpha}, \end{equation}
\begin{equation} \sum\limits_j H^\alpha_{,ij}\theta_j=dH^\alpha_{,i}+ \sum\limits_j H^\alpha_{,j}\theta_{ji}+\sum\limits_\beta H^\beta_{,i}\theta_{\beta\alpha}, \end{equation}
\begin{equation} \Delta^\perp H^\alpha=\sum\limits_i H^\alpha_{,ii},\qquad H^\alpha=\sum\limits_k h^\alpha_{kk}. \end{equation}
Let $f$ be a smooth function on $M$. We define the covariant derivatives $f_i$, $f_{ij}$, and the Laplacian of $f$ as follows: \begin{equation}\label{2-14} df=\sum_i f_i\theta_i,\qquad \sum_j f_{ij}\theta_j=df_i+\sum_jf_j\theta_{ji},\qquad \Delta f= \sum_i f_{ii}. \end{equation}
\section {A Key Lemma} As we mentioned in the introduction, the proof of Le-Sesum's gap theorem relies on an important identity of Colding and Minicozzi \cite{CM} for hypersurfaces. The identity, see (9.42) in \cite{CM} or (4.1) in \cite{LS}, is obtained in terms of a certain second-order linear operator for hypersurfaces (which is part of the Jacobi operator for the second variation). In this section, we derive a similar inequality for arbitrary codimensions.
Let $a$ be any fixed vector in $\mathbb R^{n+p}$. We define the following height functions in the $a$ direction on $M$: \begin{equation} f=\langle x,a\rangle, \end{equation} and \begin{equation} g_\alpha =\langle e_\alpha,a\rangle \end{equation} for a fixed normal vector $e_\alpha$.
From \eqref{2-14} for $f_i$ and the structure equation \eqref{2-1} , we have
\begin{equation}\label{3-3}
f_i=\langle e_i,a\rangle. \end{equation} Similarly, from \eqref{2-14} for $f_{ij}$ and the structure equation \eqref{2-2}, we have \begin{equation}\label{3-4}
f_{ij}=\sum_\alpha h^\alpha_{ij}\langle e_\alpha,a\rangle. \end{equation}
Since $a$ can be arbitrary in \eqref{3-3} and \eqref{3-4}, we obtain (see \cite{CL})
\begin{equation}\label{3-5}
x_i=e_i,\qquad x_{ij}=\sum_\alpha h^\alpha_{ij}e_\alpha.
\end{equation} Define the first derivative $g_{\alpha,i}$ of $g_\alpha$ by \begin{equation} \sum_ig_{\alpha,i}\theta_i=dg_\alpha+\sum_\beta g_\beta\theta_{\beta\alpha}. \end{equation} We have, by use of \eqref{2-3}, \begin{equation}\label{3-7} g_{\alpha,i}=-\sum_k h^\alpha_{ik}\langle
e_k,a\rangle. \end{equation} Taking covariant derivatives on both sides of \eqref{3-7} in the $e_j$ direction and using \eqref{3-5}, we have
\begin{equation}\label{3-8}
g_{\alpha,ij}=-\sum_{k} h^\alpha_{ikj}\langle e_k,a\rangle-\sum_{k,\beta}h^\alpha_{ik}h^\beta_{kj}\langle e_\beta,a\rangle, \end{equation} where the second derivative $g_{\alpha,ij}$ of $g_\alpha$ is defined by \begin{equation} \sum_jg_{\alpha,ij}\theta_j=dg_{\alpha,i}
+\sum_{j}g_{\alpha,j}\theta_{ji}+\sum_\beta g_{\beta,i}\theta_{\beta\alpha}. \end{equation}
Again, since $a$ is arbitrary in \eqref{3-7} and \eqref{3-8}, we obtain (see \cite{CL}) \begin{equation} e_{\alpha,i}=-\sum\limits_j h^\alpha_{ij}e_j,\qquad
e_{\alpha,ij}=-\sum\limits_{k} h^\alpha_{ikj}e_k-\sum\limits_{k,\beta}h^\alpha_{ik}h^\beta_{kj} e_\beta, \end{equation} where the covariant derivative $h^\alpha_{ijk}$ of the second fundamental form $h^\alpha_{ij}$ is defined by \eqref{2-8}.
\vskip 3mm
Now the self-shrinker equation \eqref{1-2} is equivalent to \begin{equation}\label{3-11} -H^\alpha=<x,e_\alpha>, \quad n+1\leq \alpha\leq n+p. \end{equation}
Taking covariant derivative of \eqref{3-11} with respect to $e_i$ by use of (3.5) and (3.10), we have \begin{equation}\label{3-12} -H^\alpha_{,i}=-\sum\limits_{j}h^\alpha_{ij}<x,e_j>,\quad 1\leq i\leq n,\quad n+1\leq \alpha\leq n+p. \end{equation}
Taking covariant derivative of (3.12) with respect to $e_k$ by use of (3.5) and (3.11), we have \begin{equation} \begin{array}{lcl} -H^\alpha_{,ik}&=&-\sum\limits_{j}h^\alpha_{ijk}<x,e_j>-h^\alpha_{ik}-\sum\limits_{\beta,j} h^\alpha_{ij}h^\beta_{jk}<x,e_\beta>\\ &=&-\sum\limits_{j}h^\alpha_{ijk}<x,e_j>-h^\alpha_{ik}+\sum\limits_{\beta,j} H^\beta h^\alpha_{ij}h^\beta_{jk}. \end{array} \end{equation}
Writing \begin{equation}\label{3-14} \sigma_{\alpha\beta}=\sum\limits_{i,j}h^\alpha_{ij}h^\beta_{ij}, \end{equation} we have \begin{equation} \sum\limits_{\alpha,\beta}\sigma_{\alpha\beta}H^\alpha H^\beta\leq
|A|^2 |H|^2. \end{equation}
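The last inequality follows from the Cauchy--Schwarz inequality, applied for each fixed pair $(i,j)$ to the vectors $(h^\alpha_{ij})_\alpha$ and $(H^\alpha)_\alpha$:
\[
\sum\limits_{\alpha,\beta}\sigma_{\alpha\beta}H^\alpha H^\beta
=\sum\limits_{i,j}\Big(\sum\limits_\alpha h^\alpha_{ij}H^\alpha\Big)^2
\leq \sum\limits_{i,j}\Big(\sum\limits_\alpha (h^\alpha_{ij})^2\Big)\Big(\sum\limits_\beta (H^\beta)^2\Big)
=|A|^2 |H|^2.
\]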
We are now ready to prove the following key lemma:
\noindent
{\bf Lemma 3.1} {\it Let $M^n$ be an $n$-dimensional complete self-shrinker in $\mathbb R^{n+p}$ without boundary and with polynomial volume growth. If $|A|^2$ is bounded on $M^n$, then \begin{eqnarray*}
\int_M|\nabla^\perp H|^2e^{-\frac{|x|^2}{2}}dv &=&\int_M
[\sum\limits_{\alpha,\beta}\sigma_{\alpha\beta}H^\alpha H^\beta-|H|^2]e^{-\frac{|x|^2}{2}}dv\\
&\leq& \int_M [|A|^2-1]|H|^2e^{-\frac{|x|^2}{2}}dv. \end{eqnarray*} }
\begin{proof} Letting $i=k$ in (3.13) and summing over $i$, we get \begin{equation}\label{3-16} \Delta^\perp H^\alpha=\sum\limits_j H^\alpha_{,j}<x,e_j>+H^\alpha-\sum\limits_\beta\sigma_{\alpha\beta}H^\beta. \end{equation}
Since $M^n$ has polynomial volume growth and $|A|^2$ is bounded on $M^n$, \eqref{3-11}, \eqref{3-12}, \eqref{3-14} and \eqref{3-16} imply that \begin{equation*}
\int_M|\nabla^\perp H|^2e^{-\frac{|x|^2}{2}}dv<+\infty, \qquad \int_M \sum\limits_\alpha H^\alpha \Delta^\perp H^\alpha e^{-\frac{|x|^2}{2}}dv<+\infty, \end{equation*} and \begin{equation*}
\int_M\sum_{\alpha,i}H^{\alpha}H^{\alpha}_{,i}<x,e_i>e^{-\frac{|x|^2}{2}}dv<+\infty. \end{equation*} Let $\varphi_r(x)$ be a smooth cut-off function with compact support in $B_{x_0}(r+1)\subset M$,
\begin{equation*}
\varphi_r(x)=\left\{\begin{array}{ll}
1, & \textrm{in }B_{x_0}(r) \\
0, & \textrm{in }M\setminus B_{x_0}(r+1)
\end{array}\right.\qquad 0\leq \varphi_r(x)\leq 1, \quad |\nabla\varphi_r|\leq 1. \end{equation*} Then, by integration by parts, we get \begin{eqnarray*}
\int_M \sum\limits_\alpha \Delta^\perp H^\alpha (\varphi_r H^\alpha) e^{-\frac{|x|^2}{2}}dv&=&\int_M \varphi_r \sum_{\alpha,i}H^{\alpha}H^{\alpha}_{,i}<x,e_i>e^{-\frac{|x|^2}{2}}dv -\int_M\sum_{\alpha,i}H^{\alpha}_{,i}(\varphi_r H^{\alpha})_{,i}e^{-\frac{|x|^2}{2}}dv\\
&=&\int_M\varphi_r \left(\sum_{\alpha,i} H^{\alpha}H^{\alpha}_{,i}<x,e_i>-|\nabla^{\perp}H|^2\right)e^{-\frac{|x|^2}{2}}dv\\
&&\quad -\int_M \sum_{\alpha,i}H^{\alpha}H^{\alpha}_{,i}(\varphi_r)_i e^{-\frac{|x|^2}{2}}dv. \end{eqnarray*} Letting $r\rightarrow +\infty$, the dominated convergence theorem implies that \begin{eqnarray}\label{3-17}
\int_M \sum\limits_\alpha \Delta^\perp H^\alpha H^\alpha e^{-\frac{|x|^2}{2}}dv&=& \int_M\left(\sum_{\alpha,i}H^{\alpha}H^{\alpha}_{,i}<x,e_i>-|\nabla^{\perp}H|^2\right)e^{-\frac{|x|^2}{2}}dv. \end{eqnarray} Putting \eqref{3-16} into \eqref{3-17}, we obtain: \begin{eqnarray*}
\int_M|\nabla^\perp H|^2e^{-\frac{|x|^2}{2}}dv &=&\int_M
\left(\sum\limits_{\alpha,\beta}\sigma_{\alpha\beta}H^\alpha H^\beta-|H|^2\right)e^{-\frac{|x|^2}{2}}dv\\
&\leq& \int_M \left(|A|^2-1\right)|H|^2e^{-\frac{|x|^2}{2}}dv. \end{eqnarray*} \end{proof}
\noindent
{\bf Remark 3.1} From the proof of Lemma 3.1, one can see that the conclusion of Lemma 3.1 is valid even if $|A|^2$
has certain growth in $|x|^2$.
\section { Proof of Theorem 1.1}
Now we present the proof of Theorem 1.1. \begin{proof}[Proof of Theorem 1.1]
Under the assumptions of Theorem 1.1, from Lemma 3.1, we know that either ${\bf H}\equiv 0$, or ${\bf H}\not=0$ but with $\nabla^\perp{\bf H}\equiv 0$ and $|A|^2\equiv 1$.
If ${\bf H}\equiv 0$, we have $<x,e_\alpha>\equiv 0$, $n+1\leq \alpha\leq n+p$, from which, using (3.12), we easily conclude that $M$ is totally geodesic, that is, a hyperplane in $\mathbb R^{n+1}$.
Next, suppose that ${\bf H}\not=0$, $\nabla^\perp{\bf H}\equiv 0$, and $|A|^2\equiv 1$. In this case, (3.13) becomes \begin{equation} \sum\limits_{\beta,j} H^\beta h^\alpha_{ij}h^\beta_{jk}=h^\alpha_{ik}+ \sum\limits_{j}h^\alpha_{ijk}<x,e_j>,\quad 1\leq i,k\leq n; n+1\leq\alpha\leq n+p. \end{equation}
Multiplying both sides of (4.1) by $h^\alpha_{ik}$ and summing over $\alpha,i, k$, we get \begin{equation}
\sum\limits_{\alpha,\beta,i,j,k}H^\beta h^\alpha_{ij}h^\beta_{jk}h^\alpha_{ik}=|A|^2+\frac{1}{2}\sum\limits_{j}(|A|^2)_{,j}<x,e_j>
=|A|^2=1. \end{equation}
Next we choose a local orthonormal frame $\{e_{\alpha}\}$ for the normal bundle of $x: M \rightarrow \mathbb R^{n+p}$, such that $e_{n+p}$ is parallel to the mean curvature vector $\bf H$; i.e.,
\begin{equation}
e_{n+p}=\frac{\bf H}{|\bf {H}|},\quad H^{n+p}=H, \qquad H^{\alpha}=0, \quad \alpha\not=n+p. \end{equation}
Because now the equality holds in (3.15), we have \begin{equation} h^{\alpha}_{ij}=0, \quad \alpha\not=n+p,\qquad
|A|^2=\sum\limits_{i,j}h^{n+p}_{ij}h^{n+p}_{ij}=1. \end{equation}
Since $\nabla^\perp \bf{H}\equiv 0$ and $|A|^2\equiv 1$, by the definition of $\Delta$ and using (2.7), (2.10), (2.4), (2.5) and (2.9), we have (cf. \cite{Si},\cite{SSY},\cite{Li},\cite{WA}) \begin{eqnarray*}
0&=&\frac{1}{2}\Delta |A|^2\\ &=&\sum_{\alpha,i,j,k}(h^{\alpha}_{ijk})^2+\sum_{\alpha,i,j,k}h^{\alpha}_{ij}h^{\alpha}_{ijkk}\\ &=&\sum_{\alpha,i,j,k}(h^{\alpha}_{ijk})^2+\sum_{\alpha,i,j,k,m} h^{\alpha}_{ij}h^{\alpha}_{mk}R_{mijk}+\sum_{\alpha,i,j,m} h^{\alpha}_{ij}h^{\alpha}_{im}R_{mj}+\sum_{\alpha,\beta,i,j,k} h^{\alpha}_{ij}h^{\beta}_{ik}R_{\beta\alpha jk}\\ &=&\sum_{\alpha,i,j,k}(h^{\alpha}_{ijk})^2+\sum_{\alpha,\beta,i,j,m} H^{\beta}h^{\beta}_{mj}h^{\alpha}_{ij}h^{\alpha}_{im}-\sum_{\alpha,\beta,i,j,k,m}h^{\alpha}_{ij}h^{\beta}_{ij}h^{\alpha}_{mk}h^{\beta}_{mk}+2\sum_{\alpha,\beta,i,j,k} h^{\alpha}_{ij}h^{\beta}_{ik}R_{\beta\alpha jk}. \end{eqnarray*} Plugging (4.2), (4.3) and (4.4) into the above identity, we conclude that \begin{equation} h^{\alpha}_{ijk}=0,\quad n+1\leq \alpha\leq n+p. \end{equation} Because $e_{n+1}\wedge e_{n+2}\wedge\cdots \wedge e_{n+p-1}$ is parallel in the normal bundle of $M$ and $h^{\alpha}_{ij}\equiv 0, \quad \alpha\not=n+p$, by Theorem 1 of Yau \cite{Y}, we see that
$M$ is a hypersurface in $\mathbb R^{n+1}$. So (4.5) implies that $M$ is an isoparametric hypersurface, thus from $|A|^2=1$ we conclude that $M$ is either a round sphere $\mathbb S^n(\sqrt{n})$, or a cylinder $\mathbb S^m(\sqrt{m})\times \mathbb R^{n-m}$, $1\leq m\leq n-1$ in $\mathbb R^{n+1}$. This completes the proof of Theorem 1.1. \end{proof}
\section{Further Remarks}
In this section, we make several simple observations:
\noindent {\bf Proposition 5.1} {\it If a submanifold $M^n\to \mathbb R^{n+p}$ is an $n$-dimensional complete self-shrinker without boundary and with polynomial volume growth, such that \begin{equation}\label{5-1}
|H|^2\geq n, \end{equation}
then $|H|^2\equiv n$ and $M$ is a minimal submanifold in the sphere $\mathbb S^{n+p-1}(\sqrt{n})$.}
\begin{proof}[Proof of Proposition 5.1] From (3.5) and (3.11), we have \begin{equation}\label{5-2}
\frac{1}{2}\Delta |x|^2=n+<x,\Delta x>=n+\sum\limits_\alpha H^\alpha
<x,e_\alpha>=n-|H|^2 \end{equation} Under the polynomial volume growth assumption, (1.2) and \eqref{5-2} guarantee that \begin{equation*}
\int_M(\Delta |x|^2)e^{-\frac{|x|^2}{2}}dv<+\infty \qquad \mbox{and} \qquad \int_M|\nabla |x|^2|^2e^{-\frac{|x|^2}{2}}dv<+\infty. \end{equation*} Then, by integration by parts and the dominated convergence theorem, it follows (similarly to the proof of Lemma 3.1) that \begin{equation}\label{5-3}
\frac{1}{4}\int_M|\nabla |x|^2|^2e^{-\frac{|x|^2}{2}}dv
=\frac{1}{2}\int_M(\Delta |x|^2)e^{-\frac{|x|^2}{2}}dv
=\int_M(n-|H|^2)e^{-\frac{|x|^2}{2}}dv. \end{equation} From \eqref{5-1} and \eqref{5-3}, we get
$|H|^2\equiv n$ and $<x,x>\equiv r^2$ for some constant $r$. Thus by (1.2) we conclude that $r=\sqrt{n}$ and $M$ is a minimal submanifold in the sphere $\mathbb S^{n+p-1}(\sqrt{n})$. \end{proof}
\noindent
{\bf Proposition 5.2} {\it If a submanifold $M\to \mathbb R^{n+p}$ is an $n$-dimensional compact self-shrinker without boundary and satisfies either $|H|^2=constant$, or \begin{equation}\label{5-4}
|H|^2\leq n, \end{equation}
then $|H|^2\equiv n$ and $M$ is a minimal submanifold in the sphere $\mathbb S^{n+p-1}(\sqrt{n})$.}
\begin{proof}[Proof of Proposition 5.2] Integrating \eqref{5-2} over $M$ and using the Stokes theorem, we have \begin{equation}\label{5-5}
\int_M(n-|H|^2)dv=0. \end{equation} Hence Proposition 5.2 follows from \eqref{5-5}, \eqref{5-4}, and (1.2). \end{proof}
\noindent {\bf Remark 5.1} Let $x:M\to \mathbb R^{n+p}$ be an $n$-dimensional submanifold. If $x$ satisfies \begin{equation} \lambda H^\alpha=<x,e_\alpha>, \quad n+1\leq \alpha\leq n+p \end{equation} for some positive constant $\lambda$, then we call $M$ a {\it self-expander} of the mean curvature flow. Observe that for a self-expander, we have \begin{equation}\label{5-7}
\frac{1}{2}\Delta |x|^2=n+<x,\Delta x>=n+\sum\limits_\alpha H^\alpha <x,e_\alpha>=n+\lambda |H|^2. \end{equation} From \eqref{5-7}, we immediately get
\noindent {\bf Proposition 5.3} {\it There exists no n-dimensional compact self-expander without boundary in $\mathbb R^{n+p}$.}
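Indeed, if $M^n$ were a compact self-expander without boundary, integrating \eqref{5-7} over $M$ and using the Stokes theorem would give
\[
0=\frac{1}{2}\int_M\Delta |x|^2\,dv=\int_M\big(n+\lambda |H|^2\big)\,dv\geq n\,{\rm Vol}(M)>0,
\]
a contradiction.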
Finally, we list some simple examples of self-shrinkers of higher codimensions.
\noindent {\bf Example 5.1} {\it For any positive integers $m_1,\cdots,m_p$ such that $m_1+\cdots +m_p=n$, the submanifold \begin{equation} M^n=\mathbb S^{m_1}(\sqrt{m_1})\times\cdots\times\mathbb S^{m_p}(\sqrt{m_p})\subset\mathbb R^{n+p} \end{equation} is an n-dimensional compact self-shrinker in $\mathbb R^{n+p}$ with \begin{equation}
{\bf H}=-X,\qquad |{\bf H}|^2=n,\qquad |A|^2=p \end{equation} Here \begin{equation}
\mathbb S^{m_i}(r_i)=\{X_i\in\mathbb R^{m_i+1}: |X_i|^2=r_i^2\},\qquad i=1,\cdots,p \end{equation} is an $m_i$-dimensional round sphere with radius $r_i$.}
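To check the displayed formulas, recall that a round sphere $\mathbb S^{m}(r)\subset\mathbb R^{m+1}$ satisfies ${\bf H}=-\frac{m}{r^2}X$ and $|A|^2=\frac{m}{r^2}$, so each factor $\mathbb S^{m_i}(\sqrt{m_i})$ contributes ${\bf H}_i=-X_i$ and $|A_i|^2=1$. Since the second fundamental form of the product immersion is block diagonal, the contributions add up:
\[
|{\bf H}|^2=\sum_{i=1}^p m_i=n,\qquad |A|^2=\sum_{i=1}^p 1=p.
\]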
\noindent {\bf Example 5.2} {\it For positive integers $m_1,\cdots,m_p,q\geq 1$, with $m_1+\cdots+m_p+q=n$, the submanifold \begin{equation} M^n=\mathbb S^{m_1}(\sqrt{m_1})\times\cdots\times\mathbb S^{m_p}(\sqrt{m_p})\times\mathbb R^q\subset\mathbb R^{n+p} \end{equation} is an n-dimensional complete non-compact self-shrinker in $\mathbb R^{n+p}$ with polynomial volume growth which satisfies \begin{equation}
{\bf H}=-X^{\perp},\qquad |{\bf H}|^2=\sum_{i=1}^pm_i,\qquad |A|^2=p. \end{equation} }
\noindent
{\bf Remark 5.2} In Example 5.1 and Example 5.2, if we let $p\geq 2$, then we have an n-dimensional self-shrinker of codimension $p$ with $|A|^2=p\geq 2$, thus not one of the three cases in Theorem 1.1.
\noindent
{\bf Remark 5.3} From Example 5.2, we can see that the condition ``$|{\bf H}|^2\geq n$'' in Proposition 5.1 is necessary.
\noindent {\bf Example 5.3} (cf. \cite{CA}) {\it Let \begin{equation} X: \mathbb S^2(\sqrt{m(m+1)})\hookrightarrow \mathbb S^{2m}(\sqrt{2})\subset\mathbb R^{2m+1},\qquad m\geq 2 \end{equation} be a minimal surface in $\mathbb S^{2m}(\sqrt{2})$. Considered as a surface in $\mathbb R^{2m+1}$, it is a self-shrinker with \begin{equation}
{\bf H}=-X,\qquad |{\bf H}|^2=2,\qquad |A|^2=2-\frac {2}{m(m+1)}<2, \end{equation}}
\noindent
{\bf Remark 5.4} Choosing a local orthonormal frame $\{e_{\alpha}\}$ for the normal bundle of $x: M^n \rightarrow \mathbb R^{n+p}$ such that $e_{n+p}$ is parallel to the mean curvature vector $\bf H$, Lemma 3.1 shows that if
$|A|^2$ is bounded and
\begin{equation}
\sum_{i,j}h_{ij}^{n+p}h_{ij}^{n+p}\leq 1,\label{5.15}
\end{equation}
then $\nabla^{\bot}{\bf H}=0$; that is, $|{\bf H}|^2$ is constant and the unit mean curvature vector
field $\nu={\bf H}/|{\bf H}|$ is parallel in the normal bundle. From Proposition 5.2 and Theorem 1.3 of Smoczyk \cite{Smo},
we have
\noindent
{\bf Proposition 5.4} {\it Let $M^n$ be an $n$-dimensional complete self-shrinker in $\mathbb R^{n+p}$ without boundary and with polynomial volume growth. If $|A|^2$ is bounded on $M^n$ and (5.15) holds, then $$ M^n=N^m\times \mathbb R^{n-m}, \qquad 0\leq m\leq n, $$ where $N^m$ is an $m$-dimensional minimal submanifold in $\mathbb S^{m+p-1}(\sqrt{m})$.}
\begin{flushleft}
\noindent \begin{tabbing} XXXXXXXXXXXXXXXXXXXXXXXXXX*\=\kill Huai-Dong Cao\>Haizhong Li\\ Department of Mathematics\>Department of Mathematical Sciences\\ Lehigh University\>Tsinghua University\\ Bethlehem, PA 18015\>100084, Beijing\\ USA\>People's Republic of China\\ E-mail:[email protected]\>E-mail:[email protected] \end{tabbing}
\end{flushleft}
\end{document}
\begin{document}
\title{Parallel Independence in Attributed Graph Rewriting}
\begin{abstract}
In order to define graph transformations by the simultaneous
application of concurrent rules, we have adopted in previous work a
structure of attributed graphs that is stable under unions. We analyze the
consequences on parallel independence, a property that characterizes
the possibility to resort to sequential rewriting. This property
turns out to depend not only on the left-hand side of rules, as in
algebraic approaches to graph rewriting, but also on their
right-hand side. It is then shown that, of three possible
definitions of parallel rewriting, only one is convenient in the
light of parallel independence. \end{abstract}
\section{Introduction}\label{sec-intro}
The notion of parallel independence from \cite{Rosen75,EhrigK76} has been studied mostly in the algebraic approaches to graph rewriting, see \cite{HndBkCorradiniMREHL97}. It basically consists in a condition on concurrent transformations of an object that not only guarantees but characterizes the possibility to apply the transformations sequentially in any order such that all such sequences of transformations yield the same result.
When two transformations are involved, with rules $r_1$ and $r_2$, this takes the form of the diamond property and is known as the \emph{Local Church-Rosser Problem} \cite{HndBkCorradiniMREHL97}; it consists in finding a condition (called parallel independence) on direct transformations $H_1 \xleftarrow{r_1} G \xrightarrow{r_2} H_2$ that is equivalent to the existence of direct transformations $H_1\xrightarrow{r_2} H \xleftarrow{r_1} H_2$ with the same redexes, hence to the existence of two equivalent sequences of transformations $G\xrightarrow{r_1} H_1 \xrightarrow{r_2} H$ and $G\xrightarrow{r_2} H_2 \xrightarrow{r_1} H$. It is obvious that non-overlapping redexes always entail parallel independence; the difficulty of the problem is that the converse does not hold and that, depending on the rules, some amount of overlap may be allowed. The notion of parallel independence is also instrumental in defining Critical Pairs (as pairs of transformations that are not parallel independent) that are central in proving confluence of sets of production rules \cite{LambersEO08,Costaetal16}.
This notion should therefore also be considered in algorithmic approaches to graph rewriting. Indeed, the informal description of parallel independence given above makes perfect sense outside the algebraic approach; it is purely operational. Consider for instance Python's multiple assignment $a,b:= b,a$, an elegant expression that swaps the values of $a$ and $b$. We naturally understand this as a parallel expression $a:=b \parallel b:= a$. If $a$ and $b$ have the same value then the two assignments can be evaluated in sequence in any order, yielding the same result independently of the chosen order; they are parallel independent. If, however, they have distinct values, the two sequential evaluations yield different results (and neither corresponds to the intended meaning); the two assignments are parallel dependent. Parallel dependence also typically occurs in cellular automata when rules are applied to neighbor cells, because of the overlap. Hence sequential applications of rules in an undetermined order would result in nondeterministic automata.
These examples show that there is a legitimate way of computing by applying \emph{simultaneously} concurrent transformations that may not be parallel independent, even though the result may not be reachable by sequential transformations. Swapping the values of $a$ and $b$ cannot be performed by applying $a:= b$ or $b:= a$ sequentially. This calls for a notion of \emph{parallel transformation} for defining the result of such simultaneous applications of rules.
One such transformation has been defined in \cite{BdlTE20c}, in an algorithmic approach that is adopted here. It is based on directed graphs where vertices and arrows are equipped with \emph{sets} of attributes and enables a definition of a \emph{union} of such graphs, given in Section~\ref{sec-graphs}. This is a fundamental difference with terms or termgraphs and leads to a natural definition of parallel transformation in Section~\ref{sec-rules}.
The consequences of these definitions on parallel independence are analyzed in Sections~\ref{sec-seqindep} and \ref{sec-parindep}. The results of these sections are also from \cite{BdlTE20c}; we give them here in a slightly simpler setting and without proofs, focusing on comparisons with the algebraic approach to graph rewriting, where parallel independence was originally formulated.
In Section~\ref{sec-EDP} we analyze the notion of parallel rewriting. We first define a notion of \emph{regularity} that ensures the absence of conflicts between concurrent rules. We then show that this notion is too restricted to encompass parallel independence, and generalize it to the \emph{effective deletion property} (also from \cite{BdlTE20c}).
Section~\ref{sec-parcoh} is devoted to comparisons with the algebraic notion of \emph{parallel coherence} from \cite{BdlTE20b}. It is shown that its translation to the present framework, though more general than regularity, is still too restricted to encompass parallel independence. It is also shown to be the right algebraic translation of the effective deletion property. Concluding remarks and related works are presented in Section~\ref{sec-conclusion}.
\section{Attributed Graphs}\label{sec-graphs}
We assume a many-sorted signature $\Sigma$ and a set $\mathscr{V}$ of \emph{variables}, disjoint from $\Sigma$, such that every variable has a $\Sigma$-sort. For any finite $X\subseteq \mathscr{V}$, $\Termsig{X}$ denotes the algebra of $\Sigma$-terms over $X$. For any $\Sigma$-algebra $\mathcal{A}$, let $\carrier{\mathcal{A}}$ be the disjoint union of the carrier sets of the $\Sigma$-sorts in $\mathcal{A}$.
An \emph{attributed graph} (or \emph{graph} for short) $G$ is a tuple $\tuple{\nodes{G},\arrows{G},\leftf{G},\rightf{G},\algebrf{G},\attrf{G}}$ where $\nodes{G},\arrows{G}$ are sets whose elements are respectively called \emph{vertices} and \emph{arrows}, $\leftf{G},\rightf{G}$ are the \emph{source} and \emph{target} functions from $\arrows{G}$ to $\nodes{G}$, $\algebrf{G}$ is a $\Sigma$-algebra and $\attrf{G}$ is an \emph{attribution of $G$}, i.e., a function from $\nodes{G}\cup \arrows{G}$ to $\Part{\carrier{\algebrf{G}}}$. The elements of $\carrier{\algebrf{G}}$ are called \emph{attributes}, and we assume that $\nodes{G}$, $\arrows{G}$ and $\carrier{\algebrf{G}}$ are pairwise disjoint. $G$ is \emph{unlabeled} if $\attrf{G}(x)=\varnothing$ for all $x\in \nodes{G}\cup \arrows{G}$; it is \emph{finite} if the sets $\nodes{G}$, $\arrows{G}$ and $\attrf{G}(x)$, for all $x\in \nodes{G}\cup \arrows{G}$, are finite. The \emph{carrier} of $G$ is the set $\carrier{G}\defeq \nodes{G}\cup \arrows{G}\cup \carrier{\algebrf{G}}$.
A graph $H$ is a \emph{subgraph} of $G$, written $H\mathrel{\lhd} G$, if the \emph{underlying graph} $\tuple{\nodes{H},\arrows{H},\leftf{H},\rightf{H}}$ of $H$ is a subgraph of $G$'s underlying graph (in the usual sense), $\algebrf{H} = \algebrf{G}$ and $\attrf{H}(x)\subseteq \attrf{G}(x)$ for all $x\in \nodes{H}\cup\arrows{H}$.
Graphs are better specified as pictures. Vertices and arrows will be named and their attributes will be listed after each name, separated from it by $\mathrel{|}$ (which is omitted if the attribute is $\varnothing$). Since graphs may not be connected, they will be surrounded by a rectangle with rounded corners, as in: \[H\ =\
\raisebox{-3.2ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{\begin{tikzpicture}[scale=2.2]
\node (x) at (0,0) {$x$};
\node (y) at (1,0) {$y\mathrel{|} 1$};
\path[->,bend left] (x) edge node[fill=white,font=\footnotesize] {$f$} (y);
\end{tikzpicture}};
\end{tikzpicture}}\ \mathrel{\lhd}\
\raisebox{-4.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{\begin{tikzpicture}[scale=2.2]
\node (x) at (0,0) {$x\mathrel{|} 1$};
\node (y) at (1,0) {$y\mathrel{|} 0,1$};
\node (z) at (1.5,0) {$z$};
\path[->,bend left] (x) edge node[fill=white,font=\footnotesize] {$f$} (y);
\path[->,bend right] (x) edge node[fill=white,font=\footnotesize] {$g\mathrel{|} 0$} (y);
\end{tikzpicture}};
\end{tikzpicture}}\ =\ G\] where $H$ is the graph such that $\nodes{H}=\set{x,y}$, $\arrows{H}=\set{f}$, $\leftf{H}(f)=x$, $\rightf{H}(f)=y$, $\attrf{H}(x)=\attrf{H}(f)=\varnothing$, $\attrf{H}(y)=\set{1}$ and similarly for $G$ (the $\Sigma$-algebra $\algebrf{H}=\algebrf{G}$ must contain at least $0$ and $1$).
A \emph{morphism} $\alpha$ from graph $H$ to graph $G$, written $\alpha:H\rightarrow G$, is a function from $\carrier{H}$ to $\carrier{G}$ such that the restriction of $\alpha$ to $\nodes{H}\cup\arrows{H}$ is a morphism from $H$'s to $G$'s underlying graphs (that is, $\leftf{G}\circ{\alpha} = {\alpha}\circ\leftf{H}$ and $\rightf{G}\circ{\alpha} = {\alpha}\circ\rightf{H}$, this restriction of $\alpha$ is called the \emph{underlying graph morphism of $\alpha$}), the restriction of $\alpha$ to $\carrier{\algebrf{H}}$ is a $\Sigma$-homomorphism from $\algebrf{H}$ to $\algebrf{G}$, denoted $\attrf{\alpha}$, and $\attrf{\alpha}\circ\attrf{H}(x)\subseteq \attrf{G}\circ \alpha(x)$ for all $x\in \nodes{H}\cup\arrows{H}$. Note that $H\mathrel{\lhd} G$ iff $\carrier{H} \subseteq \carrier{G}$ and the canonical injection from $\carrier{H}$ to $\carrier{G}$ is a morphism from $H$ to $G$. For all $F\mathrel{\lhd} H$, the \emph{image} $\alpha(F)$ is the smallest subgraph of $G$ w.r.t. the order $\mathrel{\lhd}$ such that $\restrf{\alpha}{\carrier{F}}$ is a morphism from $F$ to $\alpha(F)$.
An \emph{isomorphism} is a morphism that has an inverse morphism. We write $H\mathrel{\simeq} G$ if there is an isomorphism from $H$ to $G$. A morphism $\mu:H\rightarrow G$ is a \emph{matching} if the underlying graph morphism of $\mu$ is injective. For any $F\mathrel{\lhd} H$ it is then easy to see that \[\mu(F)=\tuple{\,\mu(\nodes{F}),\ \mu(\arrows{F}),\
\mu\circ\leftf{F}\circ\invf{\mu},\
\mu\circ\rightf{F}\circ\invf{\mu},\ \algebrf{G},\
\attrf{\mu}\circ\attrf{F}\circ\invf{\mu}\ }.\]
Given two attributions $l$ and $l'$ of $G$ let $l\setminus l'$ (resp. $l\cap l'$, $l\cup l'$) be the attribution of $G$ that maps any $x$ to $l(x)\setminus l'(x)$ (resp. $l(x)\cap l'(x)$, $l(x)\cup l'(x)$). If $l$ is an attribution of a subgraph $H\mathrel{\lhd} G$, it is implicitly extended to the attribution of $G$ that is identical to $l$ on $\nodes{H}\cup\arrows{H}$ and maps any other entry to $\varnothing$.
Unions of graphs can only be formed between \emph{joinable} graphs, i.e., graphs that have a common part. We start with a simpler notion of joinable functions.
\begin{definition}[joinable functions]\label{def-joinablef}
Two functions $f:D\rightarrow C$ and $g:D'\rightarrow C'$ are
\emph{joinable} if $f(x)=g(x)$ for all $x\in D\cap D'$. Then, the
\emph{meet} of $f$ and $g$ is the function
$f \curlywedge g: D\cap D'\rightarrow C\cap C'$ that is the restriction
of $f$ (or $g$) to $D\cap D'$. The \emph{join} $f \curlyvee g$ is the
unique function from $D\cup D'$ to $C\cup C'$ such that
$f=\restrf{(f\curlyvee g)}{D}$ and $g=\restrf{(f\curlyvee g)}{D'}$.
For any set $I$ and any $I$-indexed family
$(f_i:D_i\rightarrow C_i)_{i\in I}$ of pairwise joinable functions,
let $\curlyvee_{i\in I}f_i$ be the only function from
$\bigcup_{i\in I}D_i$ to $\bigcup_{i\in I}C_i$ such that
$f_i = \restrf{\big(\curlyvee_{i\in I}f_i\big)}{D_i}$ for all
$i\in I$. \end{definition}
We see that any two restrictions $\restrf{f}{A}$ and $\restrf{f}{B}$ of the same function $f$ are joinable, and then $\restrf{f}{A}\curlywedge \restrf{f}{B} = \restrf{f}{A\cap B}$ and $\restrf{f}{A}\curlyvee \restrf{f}{B} = \restrf{f}{A\cup B}$. Conversely, if $f$ and $g$ are joinable then each is a restriction of $f\curlyvee g$.
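As a purely illustrative aside (not part of the formal development, and using ad hoc names), Definition~\ref{def-joinablef} can be mirrored in a small Python sketch in which finite functions are encoded as dictionaries:
\begin{verbatim}
# Finite functions encoded as dictionaries {argument: value}.

def joinable(f, g):
    """f and g are joinable iff they agree on their common domain."""
    return all(f[x] == g[x] for x in f.keys() & g.keys())

def meet(f, g):
    """Restriction of f (or g) to the intersection of the domains."""
    assert joinable(f, g)
    return {x: f[x] for x in f.keys() & g.keys()}

def join(f, g):
    """Unique common extension of f and g to the union of the domains."""
    assert joinable(f, g)
    return {**f, **g}

# Two restrictions of the same function are always joinable:
f = {1: 'a', 2: 'b'}
g = {2: 'b', 3: 'c'}
assert joinable(f, g) and join(f, g) == {1: 'a', 2: 'b', 3: 'c'}
\end{verbatim}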
\begin{definition}[joinable graphs]\label{def-joinableg}
Two graphs $H$ and $G$ are \emph{joinable} if
$\algebrf{H}=\algebrf{G}$,
$\nodes{H}\cap\arrows{G} = \arrows{H}\cap\nodes{G} = \varnothing$, and
the functions $\leftf{H}$ and $\leftf{G}$ (and similarly
$\rightf{H}$ and $\rightf{G}$) are joinable. We can then define the
graphs \begin{eqnarray*}
H\sqcap G &\defeq & \tuple{\ \nodes{H}\cap\nodes{G},\
\arrows{H}\cap\arrows{G},\
\leftf{H}\curlywedge\leftf{G},\
\rightf{H}\curlywedge\rightf{G},\
\algebrf{H},\ \attrf{H}\cap\attrf{G}\ },\\
H\sqcup G &\defeq & \tuple{\ \nodes{H}\cup\nodes{G},\
\arrows{H}\cup\arrows{G},\
\leftf{H}\curlyvee\leftf{G},\
\rightf{H}\curlyvee\rightf{G},\
\algebrf{H},\ \attrf{H}\cup\attrf{G}\ }. \end{eqnarray*} Similarly, if $(G_i)_{i\in I}$ is an $I$-indexed family of graphs that are pairwise joinable, and $\mathcal{A}$ is an algebra such that $\mathcal{A}=\algebrf{G_i}$ for all $i\in I$, then let \[\displaystyle \bigsqcup_{i\in I}G_i \ \defeq\ \tuple{\ \bigcup_{i\in I}\nodes{G_i},\
\bigcup_{i\in I}\arrows{G_i},\ \curlyvee_{i\in
I}\leftf{G_i},\ \curlyvee_{i\in I}\rightf{G_i},\ \mathcal{A},\ \bigcup_{i\in
I}\attrf{G_i}\ }.\] \end{definition}
It is easy to see that these structures are graphs: the sets of vertices and arrows are disjoint and the adjacency functions have the correct domains and codomains. If $I=\varnothing$ the chosen algebra $\mathcal{A}$ is generally obvious from the context. Note that if $H$ and $G$ are joinable then $H\sqcap G \mathrel{\lhd} H \mathrel{\lhd} H\sqcup G$. Similarly, if the $G_i$'s are pairwise joinable then $G_j\mathrel{\lhd} \bigsqcup_{i\in I}G_i$ for all $j\in I$. We see that any two subgraphs of $G$ are joinable, and that $H\mathrel{\lhd} G$ iff $H\sqcap G = H$ iff $H\sqcup G = G$. These operations are commutative and, on triples of pairwise joinable graphs, they are associative and distributive over each other. For any two graphs $H,G$ there exists $G'\mathrel{\simeq} G$ such that $H$ and $G'$ are joinable (one possibility is to take $\nodes{G}'\cap \arrows{H}=\varnothing$ and $\arrows{G}'\cap (\nodes{H}\cup\arrows{H})=\varnothing$).
For any sets $V$, $A$ and attribution $l$, we say that \emph{$G$ is
disjoint from $V,A,l$} if $\nodes{G}\cap V=\varnothing$, $\arrows{G}\cap A = \varnothing$ and $\attrf{G}(x)\cap l(x) = \varnothing$ for all $x\in\nodes{G}\cup\arrows{G}$. We write $\delgraph{G}{V}{A}{l}$ for the largest subgraph of $G$ (w.r.t. $\mathrel{\lhd}$) that is disjoint from $V,A,l$. This provides a natural way of removing objects from an attributed graph. It is easy to see that this subgraph always exists (it is the union of all subgraphs of $G$ disjoint from $V,A,l$), hence rewriting steps will not be restricted by a \emph{gluing condition} as in the Double-Pushout approach (see \cite{EhrigEPT06}).
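Continuing the illustrative Python sketch above (the encoding and all names are assumptions of the sketch, not of the formal development), a finite attributed graph may be represented by a dictionary holding its vertex set, arrow set, source and target maps, and attribution; the union of joinable graphs and the deletion operator $\delgraph{G}{V}{A}{l}$ then read as follows.
\begin{verbatim}
# A finite attributed graph as a dictionary:
#   'V': set of vertices, 'A': set of arrows,
#   's', 't': source and target maps (dicts arrow -> vertex),
#   'attr': attribution (dict vertex-or-arrow -> set of attributes).

def union(G1, G2):
    """Union of two joinable graphs; attributes are united pointwise."""
    elems = G1['V'] | G2['V'] | G1['A'] | G2['A']
    return {'V': G1['V'] | G2['V'], 'A': G1['A'] | G2['A'],
            's': {**G1['s'], **G2['s']}, 't': {**G1['t'], **G2['t']},
            'attr': {x: G1['attr'].get(x, set()) | G2['attr'].get(x, set())
                     for x in elems}}

def delete(G, V, A, l):
    """Largest subgraph of G disjoint from vertices V, arrows A, attribution l."""
    keep_v = G['V'] - V
    # an arrow survives only if it is not in A and both its endpoints survive
    keep_a = {a for a in G['A'] - A
              if G['s'][a] in keep_v and G['t'][a] in keep_v}
    return {'V': keep_v, 'A': keep_a,
            's': {a: G['s'][a] for a in keep_a},
            't': {a: G['t'][a] for a in keep_a},
            'attr': {x: G['attr'].get(x, set()) - l.get(x, set())
                     for x in keep_v | keep_a}}
\end{verbatim}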
\section{Applying Rules in Parallel}\label{sec-rules}
\begin{definition}[rules, matchings]\label{def-rules}
For any finite $X\subseteq \mathscr{V}$, a \emph{$(\Sigma,X)$-graph} is a
finite graph $G$ such that $\algebrf{G} = \Termsig{X}$. Let
$\Var{G}\ \defeq\
\bigcup_{x\in\nodes{G}\cup\arrows{G}}\big(\bigcup_{t\in\attrf{G}(x)}
\Var{t}\big)$, where $\Var{t}$ is the set of variables occurring in
$t$.
A \emph{rule} $r$ is a triple $\tuple{L,K,R}$ of $(\Sigma,X)$-graphs
such that $L$ and $R$ are joinable, $L\sqcap R\mathrel{\lhd} K\mathrel{\lhd} L$ and
$\Var{L} = X$ (see Remark~\ref{rem-rules} below). The rule $r$ is \emph{unlabeled} if $L$, $K$ and $R$ are unlabeled.
A \emph{matching of $r$ in} a graph $G$ is a matching $\mu$ from $L$
to $G$ that is \emph{consistent}, i.e., such that
$\attrf{\mu}(\attrf{L}(x)\setminus \attrf{K}(x))\cap
\attrf{\mu}(\attrf{K}(x)) = \varnothing$ (or equivalently
$\attrf{\mu}(\attrf{L}(x)\setminus \attrf{K}(x)) =
\attrf{\mu}(\attrf{L}(x))\setminus \attrf{\mu}(\attrf{K}(x))$) for
all $x\in \nodes{K}\cup\arrows{K}$. We denote $\Matches{r}{G}$ the
set of all matchings of $r$ in $G$ (they all have domain
$\carrier{L}$).
We consider finite sets $\mathcal{R}$ of rules such that for all $r,r'\in\mathcal{R}$,
if $\tuple{L,K,R} = r \neq r' = \tuple{L',K',R'}$ then
$\carrier{L}\neq\carrier{L'}$, so that
$\Matches{r}{G}\cap \Matches{r'}{G} = \varnothing$ for any graph $G$;
we then write $\Matches{\mathcal{R}}{G}$ for
$\biguplus_{r\in\mathcal{R}}\Matches{r}{G}$. For any $\mu\in \Matches{\mathcal{R}}{G}$
there is a unique rule $\rulem{\mu}\in\mathcal{R}$ such that
$\mu\in \Matches{\rulem{\mu}}{G}$, and its components are denoted
$\rulem{\mu} = \tuple{\Lg{\mu}, \Kg{\mu}, \Rg{\mu}}$. \end{definition}
\begin{remark}\label{rem-rules}
If $X$ were allowed to contain a variable $v$ not occurring in $L$,
then $v$ would freely match any element of $\algebrf{G}$ and the set
$\Matches{r}{G}$ would contain as many matchings with essentially
the same effect. Also note that $\Var{R}\subseteq\Var{L}$, $R$ and
$K$ are joinable and $R\sqcap K = L\sqcap R$. The fact that $K$ is not
required to be a subgraph of $R$ allows the possible deletion by
other rules of data matched by $K$ but not by $R$. \end{remark}
A rewrite step may involve the creation of new vertices in a graph, corresponding to the vertices of a rule that have no match in the input graph, i.e., those in $\nodes{R}\setminus\nodes{L}$ (or similarly may create new arrows). These vertices should really be new, not only different from the vertices of the original graph but also different from the vertices created by other transformations (corresponding to other matchings in the graph). We simply reuse the vertices $x$ from $\nodes{R}\setminus\nodes{L}$ by \emph{indexing} them with any relevant matching $\mu$, each time yielding a new vertex $\tuple{x,\mu}$ which is obviously different from any new vertex $\tuple{x,\nu}$ for any other matching $\nu\neq\mu$, and also from any vertex of $G$. This is similar to a construction of colimits in the category of sets.
\begin{definition}[graph $\RImg{G}{\mu}$ and matching $\liftm{\mu}$]\label{def-RImg}
For any rule $r=\tuple{L,K,R}$, graph $G$ and $\mu\in\Matches{r}{G}$
we define a graph $\RImg{G}{\mu}$ together with a matching
$\liftm{\mu}$ of $R$ in $\RImg{G}{\mu}$. We first define the sets
\[\RImg{\nodes{G}}{\mu} \defeq {\mu}(\nodes{R}\cap
\nodes{K})\cup ((\nodes{R} \setminus \nodes{K}) \times
\set{\mu})\ \text{ and }\ \RImg{\arrows{G}}{\mu} \defeq
{\mu}(\arrows{R}\cap \arrows{K})\cup ((\arrows{R} \setminus
\arrows{K}) \times \set{\mu}).\] Next we define $\liftm{\mu}$ by:
$\liftm{\attrf{\mu}} \defeq \attrf{\mu}$ and for all
$x\in\nodes{R}\cup\arrows{R}$, if $x\in\nodes{K}\cup\arrows{K}$ then
$\liftm{{\mu}}(x) \defeq {\mu}(x)$ else
$\liftm{{\mu}}(x) \defeq \tuple{x,\mu}$. Since the restriction of
$\liftm{{\mu}}$ to $\nodes{R}\cup\arrows{R}$ is bijective, then
$\liftm{\mu}$ is a matching from $R$ to the graph
\[\RImg{G}{\mu}\ \defeq\
\tuple{\ \RImg{\nodes{G}}{\mu},\ \RImg{\arrows{G}}{\mu},\
\liftm{{\mu}}\circ \leftf{R}\circ
\invf{\liftm{{\mu}}},\ \liftm{{\mu}}\circ
\rightf{R}\circ \invf{\liftm{{\mu}}},\ \algebrf{G},\
\liftm{\attrf{\mu}}\circ \attrf{R}\circ
\invf{\liftm{{\mu}}}\ }.\] \end{definition}
By construction $\liftm{\mu}(R)=\RImg{G}{\mu}$, the matchings $\mu$ and $\liftm{\mu}$ are joinable and $\mu\curlywedge\liftm{\mu}$ is a matching from $R\sqcap K$ to $\mu(R\sqcap K)$. It is easy to see that the graph $G$ and the graphs $\RImg{G}{\mu}$ are pairwise joinable.
For any set $M\subseteq\Matches{\mathcal{R}}{G}$ of matchings in a graph $G$ we define below how to transform $G$ by applying simultaneously the rules associated with matches in $M$. This simply consists in first removing simultaneously all the vertices, arrows and attributes that are matched by $\Lg{\mu}$ but not by $\Kg{\mu}$ for any $\mu\in M$, and then in adding simultaneously all the images of the right-hand sides $\Rg{\mu}$.
\begin{definition}[graph $\Ipgr{G}{M}$]\label{def-parallel-rew}
For any graph $G$ and set $M\subseteq \Matches{\mathcal{R}}{G}$ let
\[\Ipgr{G}{M} \defeq \delgraph{G}{\Vd{M}}{\Ad{M}}{\ld{M}} \sqcup \bigsqcup_{\mu\in
M}\RImg{G}{\mu}\text{ where}\]
\[\Vd{M} \defeq \bigcup_{\mu\in M}{\mu}(\nodesLg{\mu}\setminus
\nodesKg{\mu}),\ \Ad{M} \defeq \bigcup_{\mu\in
M}{\mu}(\arrowsLg{\mu}\setminus \arrowsKg{\mu})\text{ and }
\ld{M}\defeq \bigcup_{\mu\in
M}\attrf{\mu}\circ(\attrfLg{\mu}\setminus
\attrfKg{\mu})\circ\invf{\mu}.\]
If $M$ is a singleton $\set{\mu}$ we write $\Ipgr{G}{\mu}$ for
$\Ipgr{G}{M}$, $\Vd{\mu}$ for $\Vd{M}$, etc. \end{definition}
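Again as a purely illustrative sketch (reusing the encoding and the helpers \texttt{union} and \texttt{delete} of the previous sketches, with ad hoc conventions): a match $\mu$ is encoded as a dictionary carrying a name, its rule $(L,K,R)$ and a map sending the carrier of $L$ (graph items and terms alike) into the host graph; the construction of $\RImg{G}{\mu}$ and of $\Ipgr{G}{M}$ then becomes:
\begin{verbatim}
def right_image(mu):
    """The graph RImg(G, mu): items of R matched by K are reused via mu,
    the remaining items are created fresh as pairs (item, name of mu)."""
    L, K, R = mu['rule']
    m = mu['map']
    lift = {x: m[x] if (x in K['V'] or x in K['A']) else (x, mu['name'])
            for x in R['V'] | R['A']}
    return {'V': {lift[v] for v in R['V']},
            'A': {lift[a] for a in R['A']},
            's': {lift[a]: lift[R['s'][a]] for a in R['A']},
            't': {lift[a]: lift[R['t'][a]] for a in R['A']},
            'attr': {lift[x]: {m.get(t, t) for t in R['attr'].get(x, set())}
                     for x in R['V'] | R['A']}}

def parallel_step(G, matches):
    """Parallel transformation: first delete everything matched by some L but
    not by the corresponding K, then add the images of all right-hand sides."""
    Vd, Ad, ld = set(), set(), {}
    for mu in matches:
        L, K, R = mu['rule']
        m = mu['map']
        Vd |= {m[v] for v in L['V'] - K['V']}
        Ad |= {m[a] for a in L['A'] - K['A']}
        for x in L['V'] | L['A']:
            removed = {m.get(t, t)
                       for t in L['attr'].get(x, set()) - K['attr'].get(x, set())}
            ld.setdefault(m[x], set()).update(removed)
    H = delete(G, Vd, Ad, ld)
    for mu in matches:
        H = union(H, right_image(mu))
    return H
\end{verbatim}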
\begin{example}\label{ex-Ipgr}
We represent the simultaneous assignment $a,b := b,a$ by two rules
that correspond to the simple assignments $a:=b$ and $b:=a$. For
this we use a signature $\Sigma$ with two constants $a$ and $b$ of
sort \texttt{identifier}, and a set of variables $\mathscr{V}$ with two
variables $u$ and $v$ of sort \texttt{integer}. The environment is
represented by two nodes $x$ and $y$, each attributed by an
identifier and its value. More precisely, let $\mathcal{A}$ be the
$\Sigma$-algebra where the sort \texttt{integer} is interpreted as
$\mathcal{A}_{\texttt{integer}} = \mathds{Z}$, the sort \texttt{identifier} as
$\mathcal{A}_{\texttt{identifier}} = \set{a,b}$ and each constant as
itself. We consider the environment where $a=1$ and $b=-1$; this is
represented by the graph
\[G\ =\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} a,1\hspace{2em}
y\mathrel{|} b,-1$};
\end{tikzpicture}}\]
We consider the rules
\begin{align*}
r_1& = \tuple{\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a,u\hspace{2em}
y_1\mathrel{|} b,v$};
\end{tikzpicture}}\ ,\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a\hspace{2em}
y_1\mathrel{|} b,v$};
\end{tikzpicture}}\ ,\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a,v$};
\end{tikzpicture}}\ }\\
r_2& = \tuple{\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_2\mathrel{|} a,u\hspace{2em}
y_2\mathrel{|} b,v$};
\end{tikzpicture}}\ ,\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_2\mathrel{|} a,u\hspace{2em}
y_2\mathrel{|} b$};
\end{tikzpicture}}\ ,\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$y_2\mathrel{|} b,u$};
\end{tikzpicture}}\ }
\end{align*}
that correspond to $a:=b$ and $b:=a$ respectively. We see that $r_1$
removes the content $u$ associated to $a$ and replaces it by the
content $v$ associated to $b$. There is exactly one matching $\mu_i$
of rule $r_i$ in $G$ for $i=1,2$, given by
\[\begin{array}{c|cccccc}
&x_1&y_1&a&b&u&v\\ \hline
\mu_1&x&y&a&b&1&-1
\end{array}\hspace{2em}
\begin{array}{c|cccccc}
&x_2&y_2&a&b&u&v\\ \hline
\mu_2&x&y&a&b&1&-1
\end{array}\]
Let $M=\set{\mu_1,\mu_2}$. Since no vertex or arrow is removed we
have $\Vd{M}=\Ad{M}=\varnothing$. We also have
\begin{align*}
\ld{M}(x) &= \big(\attrf{\mu}_1\circ(\attrfLg{\mu_1}\setminus
\attrfKg{\mu_1})\circ\invf{\mu_1}(x)\big) \cup \big(\attrf{\mu}_2\circ(\attrfLg{\mu_2}\setminus
\attrfKg{\mu_2})\circ\invf{\mu_2}(x)\big)\\
&= \attrf{\mu}_1\big(\attrfLg{\mu_1}(x_1)\setminus
\attrfKg{\mu_1}(x_1)\big) \cup \attrf{\mu}_2\big(\attrfLg{\mu_2}(x_2)\setminus
\attrfKg{\mu_2}(x_2)\big)\\
&= \attrf{\mu}_1(\set{u})\cup \attrf{\mu}_2(\varnothing)\\
&= \set{1}
\end{align*}
and similarly $\ld{M}(y)=\set{-1}$, so that
$\delgraph{G}{\Vd{M}}{\Ad{M}}{\ld{M}}\ =\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} a\hspace{2em}
y\mathrel{|} b$};
\end{tikzpicture}}$. Finally, we see that
\begin{align*}
\RImg{G}{\mu_1} &=\liftm{\mu_1}(\Rg{\mu_1}) = \liftm{\mu_1}(\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a,v$};
\end{tikzpicture}}\ ) = \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} a,-1$};
\end{tikzpicture}}\\
\RImg{G}{\mu_2} &=\liftm{\mu_2}(\Rg{\mu_2}) = \liftm{\mu_2}(\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$y_2\mathrel{|} b,u$};
\end{tikzpicture}}\ ) = \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$y\mathrel{|} b,1$};
\end{tikzpicture}}
\end{align*} and hence \[\Ipgr{G}{M}\ =\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} a\hspace{2em}
y\mathrel{|} b$};
\end{tikzpicture}}\ \sqcup\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} a,-1$};
\end{tikzpicture}}\ \sqcup\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$y\mathrel{|} b,1$};
\end{tikzpicture}} \ =\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} a,-1\hspace{2em}
y\mathrel{|} b,1$};
\end{tikzpicture}}\] that represents the environment where
$a=-1$ and $b=1$, i.e., where the initial values of $a$ and $b$ have
been swapped. Note that the same transformation can obviously be
performed by the single rule \[ \tuple{\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a,u\hspace{2em}
y_1\mathrel{|} b,v$};
\end{tikzpicture}}\ ,\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a\hspace{2em}
y_1\mathrel{|} b$};
\end{tikzpicture}}\ ,\ \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x_1\mathrel{|} a,v\hspace{2em}
y_1\mathrel{|} b,u$};
\end{tikzpicture}}\ }\] More importantly, this rule can be
computed from $r_1$ and $r_2$ (see \cite{BdlTE20b}). \end{example}
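For instance, the graph $G$ and the matching $\mu_1$ of this example can be encoded for the sketch above as follows (an illustrative encoding only; integers stand for evaluated attributes, strings for vertices, constants and variables):
\begin{verbatim}
G  = {'V': {'x', 'y'}, 'A': set(), 's': {}, 't': {},
      'attr': {'x': {'a', 1}, 'y': {'b', -1}}}

L1 = {'V': {'x1', 'y1'}, 'A': set(), 's': {}, 't': {},
      'attr': {'x1': {'a', 'u'}, 'y1': {'b', 'v'}}}
K1 = {'V': {'x1', 'y1'}, 'A': set(), 's': {}, 't': {},
      'attr': {'x1': {'a'}, 'y1': {'b', 'v'}}}
R1 = {'V': {'x1'}, 'A': set(), 's': {}, 't': {},
      'attr': {'x1': {'a', 'v'}}}

mu1 = {'name': 'mu1', 'rule': (L1, K1, R1),
       'map': {'x1': 'x', 'y1': 'y', 'a': 'a', 'b': 'b', 'u': 1, 'v': -1}}

# With mu2 encoded symmetrically from r_2, parallel_step(G, [mu1, mu2])
# returns the graph with attributes x |-> {'a', -1} and y |-> {'b', 1},
# i.e. the environment where the values of a and b have been swapped.
\end{verbatim}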
In Definition~\ref{def-parallel-rew} $\Ipgr{G}{M}$ is guaranteed to be a graph since the $\sqcup$ operation is only applied on joinable graphs. Every morphism $\liftm{\mu}$ is a matching from the right-hand side $\Rg{\mu}$ to the result $\Ipgr{G}{M}$ of the transformation. The case where $M$ is a singleton defines the classical semantics of one sequential rewrite step.
\begin{definition}[sequential rewriting]\label{def-seqrew}
For any finite set of rules $\mathcal{R}$, we define the relation $\seqr{\mathcal{R}}$
of \emph{sequential rewriting} by stating that, for all graphs $G$
and $H$, \[G\seqr{\mathcal{R}}H\text{ iff there exists some }\mu\in\Matches{\mathcal{R}}{G}
\text{ such that }H\mathrel{\simeq} \Ipgr{G}{\mu}.\] \end{definition}
\section{Sequential Independence}\label{sec-seqindep}
In the Double-Pushout approach to graph rewriting (see \cite{EhrigEPT06}), production rules are spans $L\leftarrow K\rightarrow R$, with two morphisms from an \emph{interface} $K$ to the left- and right-hand sides $L$, $R$. These objects and morphisms are taken in a category, possibly of some sort of graphs. Direct derivations are diagrams \begin{center}
\begin{tikzpicture}[xscale=1.65, yscale=1.6]
\node (G) at (-0.5,0) {$H$};
\node (L1) at (-0.5,1) {$R$}; \node (K1) at (-1.5,1) {$K$};
\node (R1) at (-2.5,1) {$L$};
\node (D1) at (-1.5,0) {$D$}; \node (H1) at (-2.5,0) {$G$};
\path[->] (K1) edge (L1);
\path[->] (K1) edge (R1);
\path[->] (L1) edge (G);
\path[->] (K1) edge (D1);
\path[->] (D1) edge (G);
\path[->] (D1) edge (H1);
\path[->] (R1) edge node[fill=white, font=\footnotesize] {$\mu_1$} (H1);
\end{tikzpicture} \end{center} where the two squares are \emph{pushouts}, i.e., a form of union. Since objects, say $D$ and $R$, can always be understood modulo isomorphisms, their union cannot be defined without specifying what they have in common; this is the rôle of $K$ and of the morphisms from $K$ to $D$ and $R$. If for instance $K$ is empty then the pushout $H$ is the disjoint union (or direct sum, or co-product) of $D$ and $R$. Hence the right square adds something to $D$, and inversely the left square removes something from $G$. Hence $H$ is obtained from $G$ by removing an image of $L$ and writing an image of $R$, with the possibility that $L$ and $R$ share a common part given by $K$. This very general approach has a drawback: depending on $G$ and $\mu_1$ the object $D$ may not exist, and if it does it may not be unique.
In this approach sequential independence is a property of two consecutive direct transformations, formulated as the existence of two commuting morphisms $j_1$ and $j_2$ as shown below.
\begin{center}
\begin{tikzpicture}[xscale=1.65, yscale=1.8]
\node (L) at (0.5,1) {$L_2$}; \node (K) at (1.5,1) {$K_2$};
\node (R) at (2.5,1) {$R_2$}; \node (G) at (0,0) {$H_1$};
\node (D) at (1.5,0) {$D_2$}; \node (H) at (2.5,0) {$H_2$};
\path[->] (K) edge (L);
\path[->] (K) edge (R);
\path[->] (L) edge node[fill=white, font=\footnotesize, near end] {$\mu_2$} (G);
\path[->] (K) edge (D);
\path[->] (D) edge (G);
\path[->] (D) edge (H);
\path[->] (R) edge (H);
\node (L1) at (-0.5,1) {$R_1$}; \node (K1) at (-1.5,1) {$K_1$};
\node (R1) at (-2.5,1) {$L_1$};
\node (D1) at (-1.5,0) {$D_1$}; \node (H1) at (-2.5,0) {$G$};
\path[->] (K1) edge (L1);
\path[->] (K1) edge (R1);
\path[->] (L1) edge (G);
\path[->] (K1) edge (D1);
\path[->] (D1) edge (G);
\path[->] (D1) edge (H1);
\path[->] (R1) edge node[fill=white, font=\footnotesize] {$\mu_1$} (H1);
\path[-] (L1) edge[draw=white, line width=3pt] (D);
\path[-] (L) edge[draw=white, line width=3pt] (D1);
\path[->,dashed] (L1) edge node[fill=white, font=\footnotesize] {$j_1$} (D);
\path[->,dashed] (L) edge node[fill=white, font=\footnotesize] {$j_2$} (D1);
\end{tikzpicture} \end{center} It is then proven by the Local Church-Rosser Theorem that the two production rules can be applied in reverse order to $G$ and yield the same result $H_2$ (we may call this the \emph{swapping property}). Of course, the matchings $\mu_1:L_1\rightarrow G$ and $\mu_2:L_2\rightarrow H_1$ are then replaced by other matchings $\mu'_1:L_1\rightarrow H'_1$ and $\mu'_2:L_2\rightarrow G$ that are related to $\mu_1$ and $\mu_2$. A drawback of this definition is that it does not account for longer sequences of direct transformations. Indeed, if three consecutive steps are given by $\tuple{\mu_1,\mu_2,\mu_3}$, it is possible to swap $\mu_1$ with $\mu_2$ if they are sequential independent, and similarly for $\mu_2$ and $\mu_3$, but this does not imply that $\mu_1$ and $\mu_3$ can be swapped under these hypotheses (because the matchings, and hence the direct transformations, are modified by the swapping operations). We would need to express sequential independence between $\mu_1$ and $\mu_3$, but the definition does not apply since they are not consecutive steps. More elaborate notions of equivalence between sequences of direct transformations are thus required (see the notion of \emph{shift equivalence} in \cite[chapter 3.5]{HndBkCorradiniMREHL97}).
Because of the specificities of our framework (no pushouts, horizontal morphisms are only canonical injections, and there may be no such morphism from $K$ to $R$) we need a different definition of sequential independence. It is natural to think of the swapping property itself as the definition of sequential independence, since it describes the operational meaning of parallel independence, but we are faced with another problem. We are dealing with possibly infinite sets of matchings of rules in a graph, and we cannot form a notion of infinite sequences of rewrite steps (because each step may both remove and add data). Yet we do not wish to restrict the notion to finite sets, not simply for the sake of generality but also because it is closely related to parallel independence, a notion that can naturally be defined on infinite sets (see Section~\ref{sec-parindep}).
We may however rely on Definition \ref{def-parallel-rew} to handle infinite sets of matchings, using the graph $\Ipgr{G}{M}$ to stand for the result of an (independent) sequence of transformations. We may thus express sequential independence as a generalized swapping property, where the swap is performed between one transformation and all the others (taken in parallel). Yet this definition would not imply that all subsets of a sequential independent set are sequential independent, hence it needs to be stated in a more general way, by swapping any transformation with any others (and not only with all the others).
\begin{definition}[sequential independence]\label{def-seqindep}
For any graph $G$ and set $M \subseteq\Matches{\mathcal{R}}{G}$, we say that
$M$ is \emph{sequential independent} if for all $N\subseteq M$ and
all $\mu\in M\setminus N$,
\begin{itemize}
\item $\mu(\Lg{\mu})\mathrel{\lhd} \Ipgr{G}{N}$, hence there is a canonical
injection $j$ from $\mu(\Lg{\mu})$ to $\Ipgr{G}{N}$,
\item there exists an isomorphism $\alpha$ such that
$\alpha(\Ipgr{G}{N\cup\set{\mu}})=
\Ipgr{\big(\Ipgr{G}{N}\big)}{j\circ\mu}$
and $\alpha$ is the identity on $G$.
\end{itemize} \end{definition}
The isomorphism $\alpha$ in~Definition \ref{def-seqindep} is necessary to account for the difference between the isomorphic graphs $\liftm{\mu}(\Rg{\mu})$ and $\liftm{(j\circ \mu)}(\Rg{\mu})$, i.e., to transform vertices or arrows of the form $\tuple{x,\mu}$ into $\tuple{x,j\circ\mu}$ (but there is no need to be that specific in the definition).
It is then easy to see (by induction on the cardinality of $M$) that \begin{prop}\label{prop-seqindep}
For any graph $G$ and finite set $M \subseteq\Matches{\mathcal{R}}{G}$, if $M$ is
sequential independent then \[G \seqrew{\mathcal{R}} \Ipgr{G}{M}.\] \end{prop}
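The induction can be sketched as follows, assuming (as suggested by Definitions~\ref{def-seqrew} and~\ref{def-parallel-rew}) that for a single matching $\mu'$ of $\mathcal{R}$ in a graph $G'$, the graph $\Ipgr{G'}{\mu'}$ is the result of the corresponding direct transformation of $G'$. The case $M=\varnothing$ is trivial. Otherwise pick any $\mu\in M$ and let $N=M\setminus\set{\mu}$; since Definition~\ref{def-seqindep} quantifies over all subsets, $N$ is itself sequential independent, hence $G \seqrew{\mathcal{R}} \Ipgr{G}{N}$ by the induction hypothesis, and applying the definition to $N$ and $\mu$ yields
\[\Ipgr{G}{M}\ \mathrel{\simeq}\ \Ipgr{\big(\Ipgr{G}{N}\big)}{j\circ\mu},\]
that is, up to isomorphism one further rewrite step, with matching $j\circ\mu$, leads from $\Ipgr{G}{N}$ to $\Ipgr{G}{M}$.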
Of course there is usually more than one sequence of rewriting steps from $G$ to $\Ipgr{G}{M}$, since under the hypothesis they can be swapped; but without it there is generally none (as illustrated in Example~\ref{ex-Ipgr}). And the fact that there is one such sequence does not imply sequential independence, i.e., the converse of Proposition \ref{prop-seqindep} is obviously not true.
\section{Parallel Independence}\label{sec-parindep}
In the Double-Pushout approach, parallel independence is a property of two direct transformations of the same object $G$, formulated as the existence of two commuting morphisms $j_1$ and $j_2$ as shown below.
\begin{center}
\begin{tikzpicture}[xscale=1.65, yscale=1.8]
\node (L) at (0.5,1) {$L_2$}; \node (K) at (1.5,1) {$K_2$};
\node (R) at (2.5,1) {$R_2$}; \node (G) at (0,0) {$G$};
\node (D) at (1.5,0) {$D_2$}; \node (H) at (2.5,0) {$H_2$};
\path[->] (K) edge (L);
\path[->] (K) edge (R);
\path[->] (L) edge node[fill=white, font=\footnotesize, near end] {$\nu$} (G);
\path[->] (K) edge (D);
\path[->] (D) edge (G);
\path[->] (D) edge (H);
\path[->] (R) edge (H);
\node (L1) at (-0.5,1) {$L_1$}; \node (K1) at (-1.5,1) {$K_1$};
\node (R1) at (-2.5,1) {$R_1$};
\node (D1) at (-1.5,0) {$D_1$}; \node (H1) at (-2.5,0) {$H_1$};
\path[->] (K1) edge (L1);
\path[->] (K1) edge (R1);
\path[->] (L1) edge node[fill=white, font=\footnotesize, near end] {$\mu$} (G);
\path[->] (K1) edge (D1);
\path[->] (D1) edge (G);
\path[->] (D1) edge (H1);
\path[->] (R1) edge (H1);
\path[-] (L1) edge[draw=white, line width=3pt] (D);
\path[-] (L) edge[draw=white, line width=3pt] (D1);
\path[->,dashed] (L1) edge node[fill=white, font=\footnotesize] {$j_1$} (D);
\path[->,dashed] (L) edge node[fill=white, font=\footnotesize] {$j_2$} (D1);
\end{tikzpicture} \end{center}
The Local Church-Rosser Theorem mentioned above actually shows that $\mu$ and $\nu$ are parallel independent iff they correspond to a sequential independent pair $\tuple{\mu,\nu'}$ (where $\nu':L_2\rightarrow H_1$ is related to $\nu$). It is the symmetry between $\mu$ and $\nu$ that entails the swapping property. This is remarkable since parallel independence does not refer to the \emph{results} of the transformations involved, while the result of the sequences of transformations is central in the swapping property (as in Definition~\ref{def-seqindep}).
This definition of parallel independence can easily be lifted to sets $M$ of matchings (or direct transformations) by considering all possible pairs $\mu,\nu\in M$, with a slight caveat. In this definition the two direct transformations may be identical, thus stating a property of a single transformation that is not shared by all direct transformations. But Definition~\ref{def-parallel-rew} does not allow us to apply any member $\mu$ of $M$ more than once (because applying $\mu$ any number of times in parallel would jeopardize the determinism of $\fullPRr{\mathcal{R}}$, see Definition~\ref{def-EDP} below). For this reason we will only consider pairs of distinct matchings (so that singletons $M$ will be considered parallel independent, see below).
Our goal is therefore to formulate parallel independence in the present framework, in order to obtain an equivalence similar to the Local Church-Rosser Theorem. Considering that the pushout complement $D_1$ is replaced by the graph $\delgraph{G}{\Vd{\mu}}{\Ad{\mu}}{\ld{\mu}}$, the commuting property of $j_2$ amounts to $\nu(L_2)\mathrel{\lhd} \delgraph{G}{\Vd{\mu}}{\Ad{\mu}}{\ld{\mu}}$, which can be more elegantly expressed as $\nu(L_2) \sqcap \mu(L_1) \mathrel{\lhd} \mu(K_1)$, or $\nu(\Lg{\nu}) \sqcap \mu(\Lg{\mu}) \mathrel{\lhd} \mu(\Kg{\mu})$ using our notations. This simply means that any graph item that is matched by two concurrent rules cannot be removed. The commuting property of $j_1$ is obtained by swapping $\mu$ and $\nu$.
However, our treatment of attributes makes it possible to recover in the right-hand side an attribute that has been deleted in the left-hand side (this is of course not possible for vertices or arrows). This possibility should therefore be accounted for in the notion of parallel independence, i.e., an attribute that is matched twice may be deleted provided it is recovered. This can be expressed as \[\nu(\Lg{\nu}) \sqcap \mu(\Lg{\mu}) \mathrel{\lhd} \mu(\Kg{\mu}) \sqcup
\liftm{\mu}(\Rg{\mu}) \text{ for all } \mu,\nu\in M \text{ such that
} \mu\neq \nu.\] However, this is not a sufficient condition for sequential independence.
\begin{example}\label{ex-notseqindep}
We consider the following graph and rules:
\begin{align*}
G &= \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x\mathrel{|} 0$};
\end{tikzpicture}} \text{ where } \carrier{\algebrf{G}}=\set{0}\\
r_1 &= (\,\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_1\mathrel{|} 0$};
\end{tikzpicture}},\,
\raisebox{-1ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_1$};
\end{tikzpicture}},\,\raisebox{-1ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_1$};
\end{tikzpicture}}\,)\\
r_2 &=(\,\raisebox{-1ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_2$};
\end{tikzpicture}},\, \raisebox{-1ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_2$};
\end{tikzpicture}},\,
\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_2\mathrel{|} 0$};
\end{tikzpicture}}\,)
\end{align*}
There is a unique matching $\mu$ of $r_1$ (resp. $\nu$ of $r_2$) in
$G$, given by $\mu(x_1)=\nu(x_2)=x$ and
$\attrf{\mu}(0)=\attrf{\nu}(0)=0$. We see that $M=\set{\mu,\nu}$ is
not sequential independent. Indeed, let $N=\set{\nu}$, then
$G=\mu(\Lg{\mu})\mathrel{\lhd} \Ipgr{G}{N}=G$ (hence $j$ is the identity
morphism of $G$), but
$\Ipgr{\big(\Ipgr{G}{N}\big)}{\mu} = \Ipgr{G}{\mu} =
\raisebox{-0.7ex}{\begin{tikzpicture} \node[draw,rounded corners] at
(0,0) {$x$};
\end{tikzpicture}}$ is not isomorphic to $\Ipgr{G}{M}=G$.
Yet we see that
\begin{align*}
\nu(\Lg{\nu}) \sqcap \mu(\Lg{\mu}) = \raisebox{-0.7ex}{\begin{tikzpicture} \node[draw,rounded corners] at
(0,0) {$x$};
\end{tikzpicture}} &\mathrel{\lhd} \raisebox{-0.7ex}{\begin{tikzpicture} \node[draw,rounded corners] at
(0,0) {$x$};
\end{tikzpicture}} = \mu(\Kg{\mu}) \sqcup
\liftm{\mu}(\Rg{\mu}) \\
\mu(\Lg{\mu}) \sqcap \nu(\Lg{\nu}) = \raisebox{-0.7ex}{\begin{tikzpicture} \node[draw,rounded corners] at
(0,0) {$x$};
\end{tikzpicture}} &\mathrel{\lhd} \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x\mathrel{|} 0$};
\end{tikzpicture}} = \nu(\Kg{\nu}) \sqcup \liftm{\nu}(\Rg{\nu}),
\end{align*}
which proves that this condition is true
for all pairs of distinct elements of $M$; hence it is not
sufficient to ensure sequential independence. \end{example}
The problem in Example~\ref{ex-notseqindep} is that the attribute 0 of $x$ is considered as being matched only once (by $\Lg{\mu}$), while it is actually also matched by $\Rg{\nu}$. This leads to the following definition.
\begin{definition}[parallel independence]\label{def-parindep}
For any graph $G$ and set $M \subseteq\Matches{\mathcal{R}}{G}$, we say that
$M$ is \emph{parallel independent} if
\[(\nu(\Lg{\nu})\sqcup \liftm{\nu}(\Rg{\nu}))\sqcap \mu(\Lg{\mu})\
\mathrel{\lhd}\ \mu(\Kg{\mu}) \sqcup \liftm{\mu}(\Rg{\mu}) \text{\ \ for all } \mu,\nu\in M
\text{ such that } \mu\neq \nu.\] \end{definition}
This definition may seem strange, but it is easy to see that on unlabeled graphs it amounts to $\nu(\Lg{\nu})\sqcap \mu(\Lg{\mu})\mathrel{\lhd} \mu(\Kg{\mu})$ for all $\mu\neq\nu$, i.e., to the standard algebraic notion of parallel independence (translated to the present framework).
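A sketch of the reduction, assuming that $\Kg{\mu}\mathrel{\lhd}\Lg{\mu}$ for every rule and that, as in Definition~\ref{def-RImg}, the only items of $\liftm{\mu}(\Rg{\mu})$ that already belong to $G$ are those of $\mu(\Rg{\mu}\sqcap\Kg{\mu})$: since the left-hand side of the condition is contained in $\mu(\Lg{\mu})\mathrel{\lhd} G$, both sides may be intersected with $G$, and in the absence of attributes the condition becomes
\[\big(\nu(\Lg{\nu})\sqcup \nu(\Rg{\nu}\sqcap\Kg{\nu})\big)\sqcap \mu(\Lg{\mu})\ \mathrel{\lhd}\ \mu(\Kg{\mu}) \sqcup \mu(\Rg{\mu}\sqcap\Kg{\mu}),\]
which, since $\nu(\Rg{\nu}\sqcap\Kg{\nu})\mathrel{\lhd}\nu(\Lg{\nu})$ and $\mu(\Rg{\mu}\sqcap\Kg{\mu})\mathrel{\lhd}\mu(\Kg{\mu})$, is exactly $\nu(\Lg{\nu})\sqcap \mu(\Lg{\mu})\mathrel{\lhd} \mu(\Kg{\mu})$.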
It turns out that Definition~\ref{def-parindep} provides the expected characterization of sequential independence.
\begin{theorem}\label{thm-para}
For any graph $G$ and set $M\subseteq\Matches{\mathcal{R}}{G}$, $M$ is parallel
independent iff $M$ is sequential independent. \end{theorem}
The (rather long) proof of Theorem~\ref{thm-para} can be found in \cite{BdlTE20c}.
We therefore see that Definition~\ref{def-parindep} arises as a characterization of sequential independence that does not refer to the results of the transformations, and indeed that does not rely on the definition of $\Ipgr{G}{M}$ (Definition~\ref{def-parallel-rew}), though of course it does rely on the definitions of unions of graphs, of rules and of the matchings $\liftm{\mu}$ (Definitions~\ref{def-joinableg}, \ref{def-rules} and \ref{def-RImg}). Note also that Definition~\ref{def-parindep} depends explicitly on the right-hand sides of rules, in contrast with the general algebraic definition of parallel independence given above, or with the Essential Condition of parallel independence in \cite{CorradiniDLRMCA18}.
\section{Parallel Rewriting}\label{sec-EDP}
We have not yet defined a relation of parallel rewriting as we did for sequential rewriting (Definition~\ref{def-seqrew}). The reason is that two matchings may conflict as one retains (in $R\sqcap K$) what another removes.
\begin{example}\label{ex-conflict}
We consider the following unlabeled rule $r$ and graph $G$.
\[r\ =\ (\,\raisebox{-3.9ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{\begin{tikzpicture}[scale=2.2]
\node (x) at (0,0) {$x$};
\node (y) at (1,0) {$x'$};
\path[->,bend left] (x) edge
node[fill=white,font=\footnotesize] {$f$} (y);
\path[->,bend left] (y) edge node[fill=white,font=\footnotesize] {$f'$} (x);
\end{tikzpicture}};
\end{tikzpicture}},\,
\raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x$};
\end{tikzpicture}},\,\raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x$};
\end{tikzpicture}}\,)\hspace{2em}
G\ =\ \raisebox{-3.9ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{\begin{tikzpicture}[scale=2.2]
\node (x) at (0,0) {$y$};
\node (y) at (1,0) {$z$};
\path[->,bend left] (x) edge
node[fill=white,font=\footnotesize] {$g$} (y);
\path[->,bend left] (y) edge node[fill=white,font=\footnotesize] {$h$} (x);
\end{tikzpicture}};
\end{tikzpicture}} \]
There are two matchings $\mu_1,\mu_2$ of $r$ in
$G$, given by
\[\begin{array}{c|cccc}
&x&x'&f&f'\\ \hline
\mu_1&y&z&g&h
\end{array}\hspace{2em}
\begin{array}{c|cccc}
&x&x'&f&f'\\ \hline
\mu_2&z&y&h&g
\end{array}\]
According to rule $r$ with matching $\mu_1$, the node
$\mu_1(x')=z$ and the arrows $\mu_1(f)=g$ and $\mu_1(f')=h$ have
to be removed, and the node $\mu_1(x)=y$ should occur in the
result of the transformation. But with matching $\mu_2$, the node $\mu_2(x')=y$ should
be removed and the node $\mu_2(x)=z$ should be preserved. There is
a conflict between $\mu_1$ and $\mu_2$ on the nodes of $G$ (but
not on its arrows).
Let $M=\set{\mu_1,\mu_2}$, then $\Vd{M}= \mu_1(\set{x'})\cup\mu_2(\set{x'}) =
\set{y,z}=\nodes{G}$ hence $\delgraph{G}{\Vd{M}}{\Ad{M}}{\ld{M}}$
is empty and
\[\Ipgr{G}{M} = \mu_1(\raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x$};
\end{tikzpicture}}) \sqcup \mu_2(\raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x$};
\end{tikzpicture}}) = \raisebox{-0.9ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$y$};
\end{tikzpicture}} \sqcup \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$z$};
\end{tikzpicture}} = \raisebox{-1.8ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{\begin{tikzpicture}[scale=1]
\node (x) at (0,0) {$y$};
\node (y) at (1,0) {$z$};
\end{tikzpicture}};
\end{tikzpicture}}\]
\end{example}
The transformation offered by Definition~\ref{def-parallel-rew} performs deletions before unions, which means that these conflicts are resolved by giving priority to retainers over removers. But if the deletion actions of a rule are not executed in a parallel transformation, how can we claim that this rule has been executed (or applied) in parallel with others? Thus, in order to define parallel rewriting with a clear semantics we need to rule out such conflicts.
A natural restriction is therefore to make sure that the items that should be removed, i.e., those contained in $\Vd{M}$, $\Ad{M}$ or $\ld{M}$, have indeed been removed from the result.
\begin{definition}[regularity]\label{def-regular}
For any graph $G$ and set $M \subseteq\Matches{\mathcal{R}}{G}$, we say that
$M$ is \emph{regular} if $\Ipgr{G}{M}$ is disjoint from
$\Vd{M}, \Ad{M}, \ld{M}$. \end{definition}
As for sequential independence, this property of $M$ can be characterized as a property of pairs of elements of $M$.
\begin{lemma}\label{lm-regular}
For any graph $G$ and set $M \subseteq\Matches{\mathcal{R}}{G}$, \[M \text{ is
regular\ \ iff\ \ }
\liftm{\nu}(\Rg{\nu}) \sqcap \mu(\Lg{\mu})\mathrel{\lhd} \mu(\Kg{\mu})\text{ for
all }\mu,\nu\in M.\] \end{lemma} \begin{proof}
Let $H = \bigsqcup_{\nu\in M}\RImg{G}{\nu}$, then $\Ipgr{G}{M} =
\delgraph{G}{\Vd{M}}{\Ad{M}}{\ld{M}} \sqcup H$ is disjoint from
$\Vd{M}, \Ad{M}, \ld{M}$ iff $H$ is. We have \[\nodes{H}\cap \Vd{M} = \Big(\bigcup_{\nu\in M}\RImg{\nodes{G}}{\nu}\Big) \cap
\Big(\bigcup_{\mu\in M}\mu(\nodesLg{\mu}\setminus
\nodesKg{\mu})\Big) = \bigcup_{\mu,\nu\in M}\liftm{\nu}(\nodesRg{\nu})
\cap \mu(\nodesLg{\mu})\setminus \mu(\nodesKg{\mu})\] hence $\nodes{H}\cap \Vd{M} = \varnothing$ iff $\liftm{\nu}(\nodesRg{\nu}) \cap \mu(\nodesLg{\mu})\setminus \mu(\nodesKg{\mu}) = \varnothing$ for all $\mu,\nu\in M$, but this is equivalent to $\liftm{\nu}(\nodesRg{\nu}) \cap \mu(\nodesLg{\mu})\subseteq \mu(\nodesKg{\mu})$. Similarly we see that $\arrows{H}\cap \Ad{M} = \varnothing$ iff $\liftm{\nu}(\arrowsRg{\nu}) \cap \mu(\arrowsLg{\mu})\subseteq \mu(\arrowsKg{\mu})$ for all $\mu,\nu\in M$.
For every vertex or arrow $x$ of $\Ipgr{G}{M}$ we have \begin{align*}
\labelf{H}(x)\cap \ld{M}(x)
&= \bigcup_{\nu\in M} \labelf{\nu}\circ \labelfRg{\nu} \circ
\invf{\nu}(x) \cap \ld{M}(x)\\
&= \bigcup_{\mu,\nu\in M} \labelf{\nu}\circ \labelfRg{\nu} \circ
\invf{\nu}(x) \cap \labelf{\mu}\circ (\labelfLg{\mu}\setminus \labelfKg{\mu})\circ \invf{\mu}(x)\\
&= \bigcup_{\mu,\nu\in M} \labelf{\nu}\circ \labelfRg{\nu} \circ
\invf{\nu}(x) \cap \labelf{\mu}\circ \labelfLg{\mu}\circ
\invf{\mu}(x) \setminus \labelf{\mu}\circ \labelfKg{\mu}\circ \invf{\mu}(x) \end{align*} by using the fact that $\mu$ is consistent. We therefore see that $\labelf{H}(x)\cap \ld{M}(x)=\varnothing$ holds iff $\labelf{\nu}\circ \labelfRg{\nu} \circ \invf{\nu}(x) \cap \labelf{\mu}\circ \labelfLg{\mu}\circ \invf{\mu}(x) \subseteq \labelf{\mu}\circ \labelfKg{\mu}\circ \invf{\mu}(x)$ holds for all $\mu,\nu\in M$. By definition $M$ is regular iff $\nodes{H}\cap \Vd{M} =\arrows{H}\cap \Ad{M} = \varnothing$ and $\labelf{H}\cap \ld{M}$ is empty everywhere, hence $M$ is regular iff $\liftm{\nu}(\Rg{\nu}) \sqcap \mu(\Lg{\mu})\mathrel{\lhd} \mu(\Kg{\mu})$ for all $\mu,\nu\in M$. \end{proof} \begin{corollary}\label{cr-regular}
$M$ is regular iff all its subsets are regular. \end{corollary} Indeed, by Lemma~\ref{lm-regular} regularity is characterized by a condition on pairs of elements of $M$, hence it is inherited by every subset of $M$; conversely, $M$ is a subset of itself.
These nice properties, and the fact that regularity ensures the absence of conflicts, are however not sufficient in the light of parallel independence. Indeed, we now show that a parallel independent set may not be regular.
\begin{example}\label{ex-EDP}
Let us consider rules $r_1=\tuple{L_1,K_1,R_1}$ and
$r_2=\tuple{L_2,K_2,R_2}$ where the graphs $L_1$, $K_1$ and $R_1$
have only one vertex $x_1$, the graphs $L_2$, $K_2$ and $R_2$ have
only one vertex $x_2$, and the attributes are as pictured below ($u,v$
are variables and $f$ is a unary function symbol). Let $\algebrf{G}$ be the
algebra with carrier set $\set{0}$ where $f$ is interpreted as
the constant function $0$, and let $G$ be the graph that has
a unique vertex $x$ with attribute $0$.
\begin{center}
\begin{tikzpicture}[scale=1.1]
\draw (0,1) circle (.5cm and .25cm);
\draw (0.5,1) circle (1cm and .35cm);
\draw (0.5,-1) circle (1cm and .35cm);
\draw (-0.5,-1) circle (1cm and .35cm);
\draw (0,-1) circle (.5cm and .25cm);
\draw (0,0) circle (0.7cm and .33cm);
\node (G) at (0,0) {$0$}; \node (U) at (0,1) {$u$};
\node (FU) at (1,1) {$f(u)$}; \node (V) at (-1,-1) {$v$};
\node (FV) at (1,-1) {$f(v)$};
\node at (-2.2,1) {$\set{u}=\labelf{L}_1(x_1)=\labelf{K}_1(x_1)$};
\node at (2.9,1) {$\labelf{R}_1(x_1) = \set{u,f(u)}$};
\node at (1.6,0) {$\labelf{G}(x)=\set{0}$};
\node at (-2.5,-1) {$\set{v}=\labelf{L}_2(x_2)$};
\node at (0,-1.6) {$\labelf{K}_2(x_2)=\varnothing$};
\node at (2.8,-1) {$\labelf{R}_2(x_2)=\set{f(v)}$};
\path[->] (U) edge node[left, font=\footnotesize] {$\labelf{\mu}_1$} (G);
\path[->] (FU) edge node[right, font=\footnotesize] {$\labelf{\mu}_1$} (G);
\path[->] (V) edge node[left, font=\footnotesize] {$\labelf{\mu}_2$} (G);
\path[->] (FV) edge node[right, font=\footnotesize] {$\labelf{\mu}_2$} (G);
\end{tikzpicture}
\end{center}
There are exactly two matchings of $\set{r_1,r_2}$ in $G$: $\mu_1$
and $\mu_2$ defined by $\mu_1(x_1)=\mu_2(x_2)=x$ and
$\labelf{\mu}_1(u)=\labelf{\mu}_2(v)=0$. Let $M=\set{\mu_1,\mu_2}$,
we see by Lemma~\ref{lm-regular} that $M$ is not regular since
\[\liftm{\mu_1}(R_1)\sqcap\mu_2(L_2) = \mu_1(\,\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_1\mathrel{|} u,f(u)$};
\end{tikzpicture}}\,) \sqcap \mu_2(\,\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_2\mathrel{|} v$};
\end{tikzpicture}}\,) = \raisebox{-1.2ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x\mathrel{|} 0$};
\end{tikzpicture}} = G\] is not a subgraph of
$\mu_2(K_2) = \mu_2(\raisebox{-0.9ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_2$};
\end{tikzpicture}}) = \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x$};
\end{tikzpicture}}$ (or equivalently because $\Ipgr{G}{M}=G$ is not disjoint from $\ld{M}$).
However, we see that $M$ is sequential independent since the
matchings can be applied sequentially in any order, yielding in both
cases the graph $G$. Equivalently, $M$ is parallel independent since
\begin{align*}
(\mu_2(L_2)\sqcup \liftm{\mu_2}(R_2))\sqcap \mu_1(L_1) =
(G\sqcup G)\sqcap G = G &\mathrel{\lhd} G\sqcup G = \mu_1(K_1)\sqcup \liftm{\mu_1}(R_1)\\
(\mu_1(L_1)\sqcup \liftm{\mu_1}(R_1))\sqcap \mu_2(L_2) =
(G\sqcup G)\sqcap G = G &\mathrel{\lhd} \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x$};
\end{tikzpicture}} \sqcup G = \mu_2(K_2)\sqcup \liftm{\mu_2}(R_2).
\end{align*} \end{example}
Note that conversely a set may be regular and not parallel independent, as is the case of the set $M$ in Example~\ref{ex-Ipgr}.
We obviously need a more comprehensive notion of parallel rewriting, one that applies \emph{at least} to all parallel independent sets of matchings. We see in Example~\ref{ex-EDP} that the two rules do clash on the attribute 0 of $x$, but the clash is settled by their right-hand sides. This leads to the following definition from \cite{BdlTE20c}.
\begin{definition}[effective deletion property, parallel rewriting]\label{def-EDP}
For any graph $G$, a set $M \subseteq\Matches{\mathcal{R}}{G}$ is said to
satisfy the \emph{effective deletion property} if $\Ipgr{G}{M}$
is disjoint from $\Vd{M}, \Ad{M}, \ld{M}\setminus \llift{M}$, where
\[\llift{M}\defeq \bigcup_{\mu\in
M}\attrf{\mu}\circ(\attrfRg{\mu}\setminus
\attrfKg{\mu})\circ\invf{\mu}.\]
For any finite set of rules $\mathcal{R}$, we define the relation $\IPTrew{\mathcal{R}}$ of \emph{parallel rewriting} by stating that, for all graphs $G$ and $H$, $G\IPTrew{\mathcal{R}} H$ iff there exists a set $M\subseteq\Matches{\mathcal{R}}{G}$ that has the effective deletion property and such that $H\mathrel{\simeq} \Ipgr{G}{M}$. We write $G\fullPRr{\mathcal{R}} H$ if $M=\Matches{\mathcal{R}}{G}$. \end{definition}
The effective deletion property is obviously at least as general as regularity; the example below shows that it is strictly more general.
\begin{example}\label{ex-reg-parindep-to-edp}
We consider again Example~\ref{ex-notseqindep} where
$M=\set{\mu,\nu}$ is not sequential independent, hence by
Theorem~\ref{thm-para} $M$ is not parallel independent. We have
$\Vd{M}=\Ad{M}=\emptyset$ and $\ld{M}(x)=\set{0}$. Since
$\Ipgr{G}{M}=\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x\mathrel{|} 0$};
\end{tikzpicture}}$ is not disjoint from $\Vd{M},\Ad{M},\ld{M}$ then
$M$ is not regular. But
\[\llift{M}(x) = \attrf{\mu}\circ(\attrf{R}_1\setminus
\attrf{K}_1)\circ\invf{\mu}(x)\ \cup\
\attrf{\nu}\circ(\attrf{R_2}\setminus
\attrf{K_2})\circ\invf{\nu}(x) = \set{0},\] hence
$\ld{M}(x)\setminus \llift{M}(x)=\varnothing$ and therefore $M$ has the
effective deletion property. \end{example}
It has been shown in \cite{BdlTE20c} that $\fullPRr{\mathcal{R}}$ is deterministic up to isomorphism, that is, if $G\fullPRr{\mathcal{R}} H$, $G'\fullPRr{\mathcal{R}} H'$ and $G\mathrel{\simeq} G'$ then $H\mathrel{\simeq} H'$. In particular, it is possible to represent any cellular automaton by a suitable rule $r$ and a class of graphs that correspond to configurations of the automaton (every vertex corresponds to a cell), such that $\fullPRr{r}$ (restricted to such graphs) is the transition function of the automaton. Furthermore, it is proved in \cite{BdlTE20c} (as a lemma to Theorem~\ref{thm-para}) that
\begin{theorem}\label{thm-indep2edp}
For any graph $G$ and set $M\subseteq\Matches{\mathcal{R}}{G}$ if $M$ is
parallel independent then $M$ has the effective
deletion property. \end{theorem}
Hence effective deletion supports a definition of parallel rewriting that is general enough to handle parallel independence. Besides, Example~\ref{ex-reg-parindep-to-edp} also shows that the effective deletion property is strictly more general than parallel independence.
We also see that \begin{corollary}
If $M\subseteq \Matches{\mathcal{R}}{G}$ is finite and parallel independent then $G
\seqrew{\mathcal{R}} \Ipgr{G}{M}$ and $G\IPTrew{\mathcal{R}} \Ipgr{G}{M}$. \end{corollary} \begin{proof}
By Theorem~\ref{thm-indep2edp} we have $G\IPTrew{\mathcal{R}}
\Ipgr{G}{M}$. By Theorem~\ref{thm-para} $M$ is sequential
independent, hence by Proposition~\ref{prop-seqindep} we have $G
\seqrew{\mathcal{R}} \Ipgr{G}{M}$. \end{proof} Hence in this case parallel and sequential rewriting meet, and parallel rewriting can be said to yield a \emph{correct} result
w.r.t. sequential rewriting.
\section{Parallel Coherence}\label{sec-parcoh}
One drawback of the effective deletion property is that it cannot be characterized as a property of pairs of elements of $M$, as the following example shows.
\begin{example}
We consider the following graph and rule
\begin{align*}
G &= \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x\mathrel{|} 0$};
\end{tikzpicture}} \text{ where } \carrier{\algebrf{G}}=\set{0,1}\\
r_3 &= (\,\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_3\mathrel{|} 0$};
\end{tikzpicture}},\, \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_3\mathrel{|} 0$};
\end{tikzpicture}},\,
\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_3\mathrel{|} 0,1$};
\end{tikzpicture}}\,)
\end{align*}
and also the rules $r_1$, $r_2$ of Example~\ref{ex-notseqindep}. For
$i=1,2,3$ let $\mu_i$ be the unique matching of $r_i$ in $G$ such
that $\attrf{\mu}_i$ is the identity function of $\set{0,1}$. Let
$M=\set{\mu_1,\mu_2,\mu_3}$ and $N=\set{\mu_1,\mu_3}$. We obviously
have $\Vd{M}=\Vd{N}=\Ad{M}=\Ad{N}=\varnothing$. We see that
$\ld{\mu_1}(x) = \set{0}$ and $\ld{\mu_2}(x) = \ld{\mu_3}(x) =
\varnothing$, so that $\ld{N}(x)=\ld{M}(x)=\set{0}$,
\[\Ipgr{G}{N} = \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x$};
\end{tikzpicture}} \sqcup \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x$};
\end{tikzpicture}} \sqcup \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} 0,1$};
\end{tikzpicture}} = \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} 0,1$};
\end{tikzpicture}}\ \text{ and }\ \Ipgr{G}{M} = \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x$};
\end{tikzpicture}} \sqcup \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x$};
\end{tikzpicture}} \sqcup \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} 0$};
\end{tikzpicture}} \sqcup \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} 0,1$};
\end{tikzpicture}} = \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x\mathrel{|} 0,1$};
\end{tikzpicture}}\] We then see that
$\llift{\mu_1}(x)=\varnothing$, $\llift{\mu_2}(x)=\set{0}$ and
$\llift{\mu_3}(x)=\set{1}$, so that $\llift{N}(x)=\set{1}$ and
$\llift{M}(x)=\set{0,1}$. Hence
$\ld{M}(x)\setminus\llift{M}(x)=\varnothing$ and
$\ld{N}(x)\setminus\llift{N}(x)=\set{0}$, and $M$ but not $N$ has
the effective deletion property. \end{example}
The reader may find it strange that the conflict between $r_1$ and $r_3$ could be settled by some other rule, here $r_2$. This means that we need the whole of $M$ to decide whether all conflicts are settled. For this reason the effective deletion property may appear too general.
Another possibility for defining parallel rewriting is to translate to the present framework the notion of parallel coherence that has been devised in order to define algebraic parallel graph transformation (see \cite{BdlTE20b}). In that paper we used production rules of the form $L\leftarrow K\leftarrow I \rightarrow R$ that do not require a morphism from $K$ to $R$. Direct derivations are commuting diagrams \begin{center}
\begin{tikzpicture}[xscale=1.8, yscale=1.4]
\node (L) at (0,1) {$L$}; \node (K) at (1,1) {$K$}; \node (I) at
(2,1) {$I$}; \node (R) at (3,1) {$R$}; \node (G) at (0.5,0) {$G$};
\node (D) at (1.5,0) {$D$}; \node (H) at (2.5,0) {$H$};
\path[->] (K) edge (L);
\path[->] (I) edge (K);
\path[->] (I) edge (R);
\path[->] (L) edge (G);
\path[->] (K) edge (D);
\path[->] (D) edge (G);
\path[->] (I) edge (D);
\path[->] (D) edge (H);
\path[->] (R) edge (H);
\end{tikzpicture} \end{center} where the squares are pushouts. Note that a standard Double-Pushout can be obtained with $K=I$. \emph{Parallel coherence}, as a property of two direct transformations of the same object $G$, is defined as the existence of two commuting morphisms $j_1$ and $j_2$ as shown below.
\begin{center}
\begin{tikzpicture}[xscale=1.65, yscale=2]
\node (L) at (0.5,1) {$L_2$}; \node (K) at (1.5,1) {$K_2$}; \node (I) at
(2.5,1) {$I_2$}; \node (R) at (3.5,1) {$R_2$}; \node (G) at (0,0) {$G$};
\node (D) at (2,0) {$D_2$}; \node (H) at (3,0) {$H_2$};
\path[->] (K) edge (L);
\path[->] (I) edge (K);
\path[->] (I) edge (R);
\path[->] (L) edge node[fill=white, font=\footnotesize, near start] {$\nu$} (G);
\path[->] (K) edge (D);
\path[->] (D) edge (G);
\path[->] (I) edge (D);
\path[->] (D) edge (H);
\path[->] (R) edge (H);
\node (L1) at (-0.5,1) {$L_1$}; \node (K1) at (-1.5,1) {$K_1$}; \node (I1) at
(-2.5,1) {$I_1$}; \node (R1) at (-3.5,1) {$R_1$};
\node (D1) at (-2,0) {$D_1$}; \node (H1) at (-3,0) {$H_1$};
\path[->] (K1) edge (L1);
\path[->] (I1) edge (K1);
\path[->] (I1) edge (R1);
\path[->] (L1) edge node[fill=white, font=\footnotesize, near start] {$\mu$} (G);
\path[->] (K1) edge (D1);
\path[->] (D1) edge (G);
\path[->] (I1) edge (D1);
\path[->] (D1) edge (H1);
\path[->] (R1) edge (H1);
\path[-] (I1) edge[draw=white, line width=3pt] (D);
\path[-] (I) edge[draw=white, line width=3pt] (D1);
\path[->,dashed] (I1) edge node[fill=white, font=\footnotesize,
near start] {$j_1$} (D);
\path[->,dashed] (I) edge node[fill=white, font=\footnotesize,
near start] {$j_2$} (D1);
\end{tikzpicture} \end{center}
This notion clearly generalizes algebraic parallel independence and is therefore a good candidate. In the present framework the object $I_2$ is replaced by the graph $R_2\sqcap K_2$, hence the commuting property of $j_2$ amounts to $\nu(R_2\sqcap K_2)\mathrel{\lhd} \delgraph{G}{\Vd{\mu}}{\Ad{\mu}}{\ld{\mu}}$, which can be expressed as $\mu(\Lg{\mu}) \sqcap \nu(\Rg{\nu}\sqcap \Kg{\nu}) \mathrel{\lhd} \mu(\Kg{\mu})$. This simply means that any graph item that is matched by some $R\sqcap K$ cannot be removed by any rule.
\begin{definition}[parallel coherence]\label{def-parcoh}
For any graph $G$ and set $M \subseteq\Matches{\mathcal{R}}{G}$, we say that
$M$ is \emph{parallel coherent} if
\[\nu(\Rg{\nu}\sqcap \Kg{\nu})\sqcap \mu(\Lg{\mu}) \mathrel{\lhd}
\mu(\Kg{\mu}) \text{ for all } \mu,\nu\in M.\] \end{definition}
We easily show that this notion is more general than regularity.
\begin{lemma}\label{lm-reg2parcoh}
For any graph $G$ and set $M \subseteq\Matches{\mathcal{R}}{G}$, if $M$ is
regular then $M$ is parallel coherent. \end{lemma} \begin{proof}
By Lemma~\ref{lm-regular} we have
$\liftm{\nu}(\Rg{\nu}) \sqcap \mu(\Lg{\mu})\mathrel{\lhd} \mu(\Kg{\mu})$ for
all $\mu,\nu\in M$. Since $\Rg{\nu}\sqcap \Kg{\nu} \mathrel{\lhd} \Rg{\nu}$,
we have $\nu(\Rg{\nu}\sqcap \Kg{\nu}) = \liftm{\nu}(\Rg{\nu}\sqcap
\Kg{\nu}) \mathrel{\lhd} \liftm{\nu}(\Rg{\nu})$, hence $\nu(\Rg{\nu}\sqcap
\Kg{\nu})\sqcap \mu(\Lg{\mu}) \mathrel{\lhd} \liftm{\nu}(\Rg{\nu}) \sqcap
\mu(\Lg{\mu}) \mathrel{\lhd} \mu(\Kg{\mu})$, hence $M$ is parallel coherent. \end{proof}
It is easy to see that the converse does not hold (use for instance Example~\ref{ex-notseqindep}). We now show that parallel coherence is a restriction of the (possibly too general) effective deletion property.
\begin{theorem}\label{thm-parcoh2edp}
For any graph $G$ and set $M\subseteq\Matches{\mathcal{R}}{G}$, if $M$ is
parallel coherent then $M$ has the effective
deletion property. \end{theorem} \begin{proof}
Let $H=\Ipgr{G}{M}$ then as in the proof of Lemma~\ref{lm-regular} we
have
\[\nodes{H}\cap \Vd{M} = \bigcup_{\mu,\nu\in M}\liftm{\nu}(\nodesRg{\nu})
\cap \mu(\nodesLg{\mu})\setminus \mu(\nodesKg{\mu}).\]
But $\mu(\nodesLg{\mu})\subseteq \nodes{G}$ and by
Definition~\ref{def-RImg} we have
\begin{align*}
\nodes{G}\cap \liftm{\nu}(\nodesRg{\nu})
&= \nodes{G}\cap \RImg{\nodes{G}}{\nu} \\
&= \nodes{G} \cap \big({\nu}(\nodesRg{\nu}\cap \nodesKg{\nu})\cup
((\nodesRg{\nu} \setminus \nodesKg{\nu}) \times \set{\nu})\big)\\
&= {\nu}(\nodesRg{\nu}\cap \nodesKg{\nu}),
\end{align*}
hence
\[\nodes{H}\cap \Vd{M}
= \bigcup_{\mu,\nu\in M}
\nu(\nodesRg{\nu}\cap\nodesKg{\nu})\cap
\mu(\nodesLg{\mu})\setminus \mu(\nodesKg{\mu})
= \varnothing\]
since by parallel coherence
$\nu(\nodesRg{\nu}\cap\nodesKg{\nu})\cap \mu(\nodesLg{\mu})\subseteq
\mu(\nodesKg{\mu})$ for all $\mu,\nu\in M$. Similarly
$\arrows{H}\cap \Ad{M} = \varnothing$.
For all $x\in \nodes{H}\cup \arrows{H}$,
if $x\not\in \nodes{G}\cup\arrows{G}$ then $\ld{M}(x)=\varnothing$ and
obviously $\labelf{H}(x)\cap\ld{M}(x)\setminus
\llift{M}(x)=\varnothing$. Otherwise $x\in \nodes{G}\cup\arrows{G}$ hence
$\invf{\liftm{\nu}}(x)=\invf{\nu}(x)$ so that
\[\labelf{H}(x) = \big(\labelf{G}(x)\setminus
\ld{M}(x)\big)\cup\bigcup_{\nu\in
M}\labelf{\nu}\circ \labelfRg{\nu}\circ \invf{\nu}(x).\]
Using the identity $A=(A\setminus B)\cup (A\cap B)$ for all sets $A$
and $B$ we have
\begin{align*}
\labelf{\nu}\circ\labelfRg{\nu}\circ \invf{\nu}(x)
&= \big(\labelf{\nu}\circ\labelfRg{\nu}\circ \invf{\nu}(x) \setminus \labelf{\nu}\circ
(\labelfRg{\nu}\cap \labelfKg{\nu})\circ \invf{\nu}(x)\big)\\
& \quad \cup
\big(\labelf{\nu}\circ\labelfRg{\nu}\circ \invf{\nu}(x) \cap \labelf{\nu}\circ
(\labelfRg{\nu}\cap \labelfKg{\nu})\circ \invf{\nu}(x)\big)
\end{align*}
for all $\nu\in M$. By parallel coherence we have
$\labelf{\nu}\circ (\labelfRg{\nu}\cap \labelfKg{\nu})\circ
\invf{\nu}(x)\cap \labelf{\mu}\circ
\labelfLg{\mu}\circ \invf{\mu}(x) \subseteq \labelf{\mu}\circ
\labelfKg{\mu}\circ \invf{\mu}(x)$ for all $\mu,\nu\in M$, and
since $\mu$ is consistent we get
\begin{align*}
\labelf{\nu}\circ (\labelfRg{\nu}\cap \labelfKg{\nu})\circ \invf{\nu}(x)\cap \ld{M}(x)
& = \bigcup_{\mu\in M}\labelf{\nu}\circ (\labelfRg{\nu}\cap \labelfKg{\nu})\circ
\invf{\nu}(x)\cap \labelf{\mu}\circ (\labelfLg{\mu}\setminus \labelfKg{\mu})\circ \invf{\mu}(x)\\
& = \bigcup_{\mu\in M}\labelf{\nu}\circ (\labelfRg{\nu}\cap \labelfKg{\nu})\circ
\invf{\nu}(x)\cap \labelf{\mu}\circ \labelfLg{\mu}\circ
\invf{\mu}(x) \setminus \labelf{\mu}\circ\labelfKg{\mu}\circ \invf{\mu}(x) \\
& = \varnothing,
\end{align*}
hence
\begin{align*}
\labelf{H}(x)\cap \ld{M}(x)
& = \bigcup_{\nu\in M}\labelf{\nu}\circ \labelfRg{\nu}\circ
\invf{\nu}(x)\cap \ld{M}(x)\\
&= \bigcup_{\nu\in M}\bigl(\labelf{\nu}\circ \labelfRg{\nu}\circ
\invf{\nu}(x)\setminus \labelf{\nu}\circ (\labelfRg{\nu}\cap \labelfKg{\nu})\circ
\invf{\nu}(x)\bigr)\cap \ld{M}(x).
\end{align*}
Finally, by using the obvious fact that $f(A)\setminus f(A\cap
B)\subseteq f(A\setminus B)$ for any function $f$, we get
\[\labelf{H}(x)\cap \ld{M}(x) \subseteq \bigcup_{\nu\in
M} \labelf{\nu}\circ (\labelfRg{\nu}\setminus \labelfKg{\nu})
\circ\invf{\nu}(x)\cap \ld{M}(x) \subseteq \llift{M}(x)\]
hence $\labelf{H}(x)\cap \ld{M}(x)\setminus
\llift{M}(x)=\varnothing$. This proves that $H$ is disjoint from
$\Vd{M}$, $\Ad{M}$, $\ld{M}\setminus \llift{M}$ and therefore
that $M$ has the effective deletion property. \end{proof}
Yet parallel coherence is not sufficient in the light of parallel independence, as we now show.
\begin{prop}
Parallel coherence does not generalize parallel independence. \end{prop} \begin{proof}
Example~\ref{ex-EDP} exhibits a set $M=\set{\mu_1,\mu_2}$
that is shown to be parallel independent. But we see that
\[\mu_1(R_1\sqcap K_1)\sqcap\mu_2(L_2) = \mu_1(\,\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_1\mathrel{|} u$};
\end{tikzpicture}}\,) \sqcap \mu_2(\,\raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x_2\mathrel{|} v$};
\end{tikzpicture}}\,) = \raisebox{-1.3ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0)
{$x\mathrel{|} 0$};
\end{tikzpicture}} = G\] is not a subgraph of
$\mu_2(K_2) = \raisebox{-0.7ex}{\begin{tikzpicture}
\node[draw,rounded corners] at (0,0) {$x$};
\end{tikzpicture}}$, hence $M$ is not parallel coherent. \end{proof}
Parallel coherence is therefore too restricted to support a definition of parallel rewriting in the present framework. The problem here, as above, is that deleted attributes can be recovered by the right-hand sides of rules, and that this possibility is not accounted for in the algebraic definitions, since these do not distinguish between graph items and attributes.
To summarize, we have established that the following implications hold, and no other: \[\text{regularity }\Rightarrow \text{ parallel coherence } \Rightarrow
\text{ effective deletion property } \Leftarrow \text{ parallel independence.}\]
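More precisely, regularity implies parallel coherence by Lemma~\ref{lm-reg2parcoh}, parallel coherence implies the effective deletion property by Theorem~\ref{thm-parcoh2edp}, and parallel independence implies the effective deletion property by Theorem~\ref{thm-indep2edp}. Conversely, the set $M$ of Example~\ref{ex-notseqindep} is parallel coherent but not regular, the set $M$ of Example~\ref{ex-EDP} is parallel independent but neither regular nor parallel coherent, the set $M$ of Example~\ref{ex-reg-parindep-to-edp} has the effective deletion property but is not parallel independent, and the set $M$ of Example~\ref{ex-Ipgr} is regular but not parallel independent.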
We see this as an endorsement of parallel rewriting based on the effective deletion property (Definition~\ref{def-EDP}), even if it is the only property that cannot be characterized simply on pairs of matchings. This suggests that the effective deletion property would be worth transposing to an algebraic framework. But there is no straightforward way of doing this, as can now be shown.
\begin{corollary}
For any set of unlabeled rules $\mathcal{R}$, any unlabeled graph $G$ and
any subset $M\subseteq \Matches{\mathcal{R}}{G}$,
\[M\text{ is regular\ \ iff\ \ }M\text{ is parallel
coherent\ \ iff\ \ }M\text{ has the effective deletion property.}\] \end{corollary} \begin{proof}
Assume that $M$ has the effective deletion property, then
$\Ipgr{G}{M}$ is disjoint from
$\Vd{M},\Ad{M},\ld{M}\setminus\llift{M}$ hence so is
$\bigsqcup_{\nu\in M}\liftm{\nu}(\Rg{\nu})$. For all $\mu\in M$ we have
$\Vd{\mu}\subseteq \Vd{M}$ and $\Ad{\mu}\subseteq \Ad{M}$, hence
$\bigsqcup_{\nu\in M}\liftm{\nu}(\Rg{\nu})$ is disjoint from $\Vd{\mu}$,
$\Ad{\mu}$, $\varnothing$ and therefore so is $\liftm{\nu}(\Rg{\nu})$
for every $\nu\in M$. Thus
$\liftm{\nu}(\nodesRg{\nu})\cap \mu(\nodesLg{\mu})\setminus
\mu(\nodesKg{\mu}) = \varnothing$ and
$\liftm{\nu}(\arrowsRg{\nu})\cap \mu(\arrowsLg{\mu})\setminus
\mu(\arrowsKg{\mu}) = \varnothing$, which is equivalent to
$\liftm{\nu}(\nodesRg{\nu})\cap \mu(\nodesLg{\mu})\subseteq
\mu(\nodesKg{\mu})$ and
$\liftm{\nu}(\arrowsRg{\nu})\cap \mu(\arrowsLg{\mu})\subseteq
\mu(\arrowsKg{\mu})$. Since these graphs are unlabeled, this
entails that
$\liftm{\nu}(\Rg{\nu})\sqcap \mu(\Lg{\mu})\mathrel{\lhd} \mu(\Kg{\mu})$ for
all $\mu,\nu\in M$, hence that $M$ is regular by
Lemma~\ref{lm-regular}. The equivalences follow by
Lemma~\ref{lm-reg2parcoh} and Theorem~\ref{thm-parcoh2edp}. \end{proof} Hence an algebraic approach to parallel graph transformation that would apply to the category of (unlabeled) graphs could not distinguish these notions. In this sense parallel coherence is already the right algebraic translation of the effective deletion property (and of regularity), even if it is too weak to account for the special treatment of attributes in the present non algebraic framework.
\section{Related Work and Conclusion}\label{sec-conclusion}
Many notions of attributed graphs exist in the literature. For instance, in \cite{Plump04,DuvalEPR14} graph items can hold at most one attribute. This means that concurrent rules could possibly conflict because of their right-hand sides, if two rules required to attribute distinct values to the same graph item. Our choice of attaching sets of attributes to vertices and arrows means that new attributes are freely included in those sets, and thus avoids conflicting right-hand sides. Indeed, we see from Definition~\ref{def-EDP} that a conflict must involve an element of $\Vd{M}$, $\Ad{M}$ or $\ld{M}$. Hence the right-hand sides of rules never \emph{create} conflicts, though they may \emph{settle} the conflicts created in the left-hand sides and are therefore relevant to parallel independence.
Other notions of attributed graphs that allow unbounded attributes are possible, for instance the E-graphs from \cite{EhrigEPT06}. But the fact that in E-graphs a single value can be referenced several times as an attribute of a vertex or arrow means that the number of matchings of rules may needlessly inflate.
Another approach to parallelism is to accept overlapping, non independent matchings and ask the user to decide what to do in particular situations \cite{KniemeyerBHK07}. The present approach shows that the user can be spared this work not just on parallel independent matchings, but on the larger class of sets that satisfy the effective deletion property (or parallel coherence in an algebraic framework).
It is also possible to restrict by design all overlaps to vertices, as is the case in Hyperedge Replacement Systems \cite{DrewesKH97}, and still be able to specify powerful parallel transformations \cite{LaneseM05}, though in a non deterministic way. Note that these are asynchronous models of parallelism, where determinism amounts to confluence. This property has been widely studied in term rewriting; it becomes more subtle when acyclic term graphs are considered \cite{Plump99}, and more elusive when cycles are allowed \cite{AriolaK96,AriolaB97}. Our model of parallelism is a synchronous one where deterministic transformations can be designed without reference to confluence \cite{BdlTE20}, as in cellular automata.
The use of parallel transformations to define sequential independence in an algebraic approach to graph rewriting (as in Definition~\ref{def-seqindep}) could be worth investigating.
\end{document}
\begin{document}
\title{On automorphisms behind the Gitik -- Koepke model for violation of the Singular Cardinals Hypothesis w/o large cardinals}
\author{Vladimir Kanovei}
\date{\today}
\maketitle
\begin{abstract} It is known that the assumption that ``GCH first fails at $\aleph_\om$'' leads to large cardinals in {\bf ZFC}. Gitik and Koepke~\cite{gk}
demonstrated that this is not so in {\bf ZF}: namely there is a generic cardinal-preserving extension of $\bL$ (or any universe of {\bf ZFC} + GCH) in which all {\bf ZF} axioms hold, the axiom of choice fails, $\card{2^{\aleph_n}}=\aleph_{n+1}$ for all natural $n$, but there is a surjection from $2^{\alom}$ onto $\la$, where $\la>\aleph_{\om+1}$ is any previously chosen cardinal in $\bL$, for instance, $\aleph_{\om+17}$. In other words, in such an extension GCH holds in proper sense for all cardinals $\aleph_n$ but fails at $\alom$ in Hartogs' sense.
The goal of this note is to analyse the system of automorphisms involved in the Gitik -- Koepke construction. \end{abstract}
It is known (see \cite{cite1}) that the consistency of the statement ``GCH first fails at $\alom$'' with {\bf ZFC} definitely requires a large cardinal. Gitik and Koepke~\cite{gk}
demonstrated that the picture changes in the absence of the axiom of choice, if one agrees to treat the violation of GCH in Hartogs' sense. Namely, there is a generic cardinal-preserving extension of $\bL$ (or any universe of {\bf ZFC} + GCH) in which all {\bf ZF} axioms hold, the axiom of choice fails, $\card{2^{\aleph_n}}=\aleph_{n+1}$ for all natural $n$, but there is a surjection from $2^{\alom}$ onto $\la$, where $\la>\aleph_{\om+1}$ is any previously chosen cardinal in $\bL$, for instance, $\aleph_{\om+17}$. Thus in such an extension GCH holds in the proper sense for all cardinals $\aleph_n$ but fails at $\alom$ in Hartogs' sense.
For the sake of convenience we formulate the main result as follows.
\bte [Gitik -- Koepke \cite{gk}] \lam M Let\/ $\la>\aleph_{\om+1}$ be a cardinal in\/ $\bL$, the constructible universe. There is a set-generic extension\/ $\bL[G]$ of\/ $\bL$ and a symmetric cardinal-preserving subextension\/ $\Ls[G]\sq\bL[G]$, such that the following is true in\/ $\Ls[G]$$:$ \ben \renu \itla{M1} all axioms of\/ {\bf ZF}$;$
\itla{M2}\msur $\card{2^{\aleph_n}}=\aleph_{n+1}$ for all natural\/ $n$$;$
\itla{M3} there is a surjection from\/ $2^{\alom}$ onto\/ $\la$. \een \ete
The goal of this note is to analyse the system of automorphisms (which turns out to consist of three different subsystems) involved in the Gitik -- Koepke proof of this theorem in \cite{gk}.\snos {The author learned the description of the Gitik -- Koepke model in the course of his visit to Bonn in the Winter of 2009/2010.} On the basis of our analysis, we present the proof in a somewhat more pedestrian way than in \cite{gk}.
\parf{Basic definitions and the forcing} \las{bd} \las{bdf}
After an array of auxiliary definitions, we will introduce the forcing.
$\la$ is a fixed cardinal everywhere; $\la>\aleph_\om$.
\punk{Basic definitions} \las{bd1}
We define: \bde \item[$\ogd n$ =] all sets $d\sq\aal n$ such that $\card d\le\aleph_n$ \imar{ogd n}
\item[$\opp n$ =] all functions $p:\dom p\to 2$, such that $\pu\ne\dom p\sq\aal n$, \imar{opp n}
\item[$\ogp n$ =] all functions $p\in\opp n$, such that $\dom p\in \ogd n$, \imar{ogp n}
\item[$\dD$ =] all sets $d\sq\omal$ such that $d\cap\aal n\in\dD\og n$ for all $n$,
\item[$\dda$ =] all sets $d\sq\omal$ such that there is $n_0\in\om$ such that $d\cap\aal n\in\dD\og n$ for all $n\ge n_0$,
\item[$\pP$ =] all functions $p:\dom p\to 2$ such that $\dom p\sq\omal$, \imar{pP}
\item[$\dP$ =] all functions $p\in\pP$ such that $\dom p\in\dD$. \imar{dP} \ede
If $n\in\om$ then we let $d\og n=d\cap\aal n$ and $p\og n=p\res{\aal n}$ for all $d\in\dD$ and $p\in\pP$. Thus $d\in\dD$ iff $d\og n\in\ogd n$ for all $n$, and $p\in\dP$ iff $p\og n\in\ogp n$ for all $n$.
We order $\dP$ so that $p\le q$ iff $\dom q\sq\dom p$ and $q=p\res{\dom q}$.
Note that if $m\ne n$ then $\ogp n\cap\ogp m=\pu$.
\punk{Assignments} \las{bd2}
An \rit{assignment} will be any function $a$ such that \ben \tenu{{\rm(a\arabic{enumi})}}
\itla{a1}\msur $\dom a=\bas a\ti\abs a$, where $\bas a\sq\om$ and $\abs a\sq\la$ are finite sets, and
\itla{a2} if $\ang{n,\ga}\in\dom a$ then $a(n,\ga)\in\aal n$. \een
In particular, $\pu$ (the empty assignment) belongs to $\dA$.\snos {We suppose that $\bas\pu=\abs\pu=\pu$, but it can be consistently assumed that either $\bas\pu=\pu$ and $\abs \pu=\Ga\sq\la$ is any finite set, or $\abs\pu=\pu$ and $\bas\pu=N\sq\om$ is any finite set, depending on the context. Any assignment $a\ne\pu$ has definite values of $\abs a$ and $\bas a$.}
If $n\in\bas a$ then define a map $a\og n$ on the set $\abs a$ by $a\og n(\ga)=a(n,\ga)$.
The set $\dA$ of all assignments is \rit{ordered} so that $a\le b$ ($a$ is stronger) iff \ben \tenu{{\rm(a\arabic{enumi})}}
\addtocounter{enumi}2 \itlm{a3}\msur $\bas b\sq\bas a$ and $\abs b\sq\abs a$, and
\itlm{a4}
if $n\in\bas a\bez\bas b$ and $\ga\ne\da$ belong to $\abs b$ then $a(n,\ga)\ne a(n,\da)$. \een
Clearly $\pu$ is the \dd\le largest element in $\dA$.
Assignments $a,b$ are \rit{coherent} iff $\dom a=\dom b$, and for any $n\in\bas a=\bas b$ and $\ga,\da\in\abs a=\abs b$
we have: $a(n,\ga)=a(n,\da)$ iff $b(n,\ga)=b(n,\da)$.
If $a\in\dA$ and $\Da\sq \abs a$ then let $a\pes\Da$ be the restriction $a\res{(\bas a\ti\Da)}$.
\punk{Narrow subconditions} \las{bd3}
Let $\pH$ consist of all indexed sets \imar{pH} $h=\sis{h_\xi}{\xi\in\abs h}$, where $\abs h\sq\omal$ and $h_\xi\in\opp n$ for all $n$ and $\xi\in\abs h\cap\aal n$.
We put $h\og n=h\res{\aal n}$ (restriction) for $h\in\pH$ and any $n$. Thus still $h\og n\in\pH$ and $\abs{h\og n}=\abs h\cap\aal n$.
Let $\dH$ consist of all \imar{dH} $h\in\pH$ such that \ben
\tenu{{\rm(h\arabic{enumi})}} \itlm{h1}\msur $\card\abs{h\og n}\le \aal n$ for all $n$,
\itlm{h2} the set $\bas h=\ens{n}{h\og n\ne \pu}$ is finite,
\itlm{h3}\msur $h_\xi\in\ogp n$ for all $n$ and $\xi\in\abs h\cap\aal n$. \een
We say that a condition $h\in\dH$ is
\bde \itsep \item[\rit{regular}] at some $n\in\bas h$, iff for every $\xi\in\abs{h}\cap\aal n$ the set $\ens{\eta\in\abs{h}\cap\aal n}{h_\eta=h_\xi}$ has cardinality exactly $\aleph_n$,
\item[\rit{stronger}] than another condition $g\in\dH$, symbolically $h\le g$, iff $\abs g\sq\abs h$, and $h_\xi\le g_\xi$ for all $\xi\in\abs g$. \ede The empty condition $\pu\in\dH$ ($\abs \pu=\pu$) is \dd\le largest in $\dH$.
We further define $\ogh n=\ens{h\in\dH}{\abs h\sq\aal n}$; \imar{ogh n} thus $\ogh n$ consists of all indexed sets $h=\sis{h_\xi}{\xi\in\abs h}$, where $\abs h\in\dD\og n$ (that is, $\abs h\sq\aal n$ and $\card{\abs h}\le\aleph_n$), and $h_\xi\in\ogp n$ for all $\xi\in\abs h$.
It is clear that $h\in\dH$ iff $h\og n\in\ogh n$ for all $n$ and the set $\bas h$ is finite.
\punk{Wide subconditions} \las{bd4}
Let $\pQ$ consist of all indexed sets $q=\sis{q_\ga}{\ga\in\abs q}$, where $\abs q\sq\la$ and $q_\ga\in\pP$ for all $\ga\in\abs q$. We define \bde \item[$\pqa$ =] all $q\in\pQ$ such that $\abs q$ is finite, \imar{pqa}
\item[$\dQ$ =] all $q\in\pQ$ such that $\abs q$ is finite and $q_\ga\in\dP$ for all $\ga\in\abs q$. \imar{dQ} \ede
We say that a condition $q\in\pQ$ is:
\bde \itsep \item[\rit{uniform}\rm,] if $\dom q_\ga\og n=\dom q_\da\og n$ for all $\ga,\da\in\abs q$ and $n\in\om$,
\item[\rit{compatible}] with an assignment $a\in \dA$, iff
we have $q_\ga\og n=q_\da\og n$ whenever $\ga,\da\in\abs q\cap\abs a$, $n\in\bas a$, and $a(n,\ga)=a(n,\da)$.
\item[\rit{equally shaped}] with another condition $p\in \pQ$, iff $\abs p=\abs q$, and $\dom{p_\ga\og n}=\dom{q_\ga\og n}$ holds for all $\ga\in \abs p$ and $n\in\om$.
\item[\rit{stronger}] than another condition $p\in \pQ$, symbolically $q\le p$, iff $\abs p\sq\abs q$, and $p_\ga\le q_\ga$ in $\dP$ for all $\ga\in\abs p$. \ede
Once again, the empty condition $\pu\in\dQ$ ($\abs \pu=\pu$) is \dd\le largest in $\dQ$.
\punk{Conditions} \las{bd5}
Let $\dT$, {\ubf the forcing}, consist of all triples of the form $t=\ang{q^t,a^t,h^t}$, where $q^t\in\dQ$, $a^t\in\dA$, $h^t\in\dH$, and \ben \tenu{{\rm(t\arabic{enumi})}}
\itlm{t1}\msur $\abs{a^t}=\abs{q^t}$ and $\bas{a^t}=\bas{h^t}$ --- we put $\abs{t}:=\abs{a^t}$
and $\bas{t}:=\bas{a^t}$,
\itlm{t3}\msur $\ran a^t\sq\abs{h^t}$ and we have $h^t_{a^t(n,\ga)}=q^t_\ga\ogr n$
for all $n\in\bas t$ and $\ga\in \abs t$.
\itlm{t2} therefore $q^t$ is compatible with $a^t$ in the sense above, that is, if $\ga,\da\in\abs t$, $n\in\bas t$, and $a^t(n,\ga)=a^t(n,\da)$, then $q^t_\ga\og n=q^t_\da\og n$. \een
The set $\dT$ is ordered componentwise: a condition $t\in\dT$ is \rit{stronger than} $s\in\dT$, symbolically $t\le s$, iff
$q^t\le q^s$ in $\dQ$, $a^t\le a^s$ in $\dA$, $h^t\le h^s$ in $\dH$. Clearly $t=\ang{\pu,\pu,\pu}$ is the largest condition in $\dT$.
A condition $t\in\dT$ is \rit{uniform}, symbolically $t\in\dtr$, iff $q^t$ is uniform.
\vyk{ \bdf \lam{esha} Conditions $p,q\in \dQ$ are \rit{equally shaped} iff $\bas p=\bas q$ and $\abs p=\abs q$, and \rit{strongly equally shaped} iff in addition $a^p=a^q$. \edf
\bdf [restrictions] \lam{rest} Let $\Da\sq\la$ be a finite set.
If $a\in\dA$ and $\Da\sq \abs a$ then let $a\pes\Da$ be the restriction $a\res{(\bas a\ti\Da)}$.
If $q\in\dQ$ and $\Da\sq\abs q$ then let $q\pes\Da=\sis{q_\ga}{\ga\in\Da}$.\snos {This is not a condition in $\dQ$ since it has no $a^q$.} \edf
{\ubf Comment: the forcing as a product.} For any $n$ let $\dQ\og n$ consist of all indexed sets $q=\ang{a^q,\sis{q_\ga}{\ga\in\abs q}}$, where \ben \tenu{{\rm(p\arabic{enumi})}} \itlm{31}\msur $\abs q\sq\la$ is finite and $q_\ga\in\ogp n$ for all $\ga\in\abs q$,
\itlm{33}\msur $a^q$ is a function from $\abs q$ to $\aal n$,
\itlm{36} if $\ga,\da\in\abs q$ and $a^q(\ga)=a^q(\da)$ then $q_\ga=q_\da$. \een
Then $\dQ$ as a whole can be identified with the product $\prod_{n=0}^\om\dQ\og n$ defined so that it is a finite support product \vrt\ the components $a^q$, and unrestricted (countable) support product \vrt\ the components $q_\ga$. }
\parf{Permutations} \las{sym1}
In this section and the following two sections we consider three groups of full or partial order-preserving transformations of conditions.
Let $\pif$ be the group of all permutations of the set $\omal$ such that \ben \tenu{(\Alph{enumi})} \itlm{p1} for any $n$, the restriction $\pi\og n=\pi\res\aal n$ is a permutation of the set $\aal n$,
\itlm{p2} the set $\bas\pi=\ens{n}{\pi\og n\ne\text{ the identity}}$ is finite. \een
Let $\pif\og n$ consist of all $\pi\in\pif$ equal to the identity outside of $\aal n$. Any $\pi\in\pif\og n$ is naturally identified with $\pi\og n$.
There are two types of induced action of transformations $\pi\in \pif$, namely:
\ben \Renu \itlm{api1} if $f$ is a function such that $\ran f\sq\omal$ then $f'=\pi\ap f$ is a function with the same domain and $f'(x)=\pi(f(x))$ for all $x\in\dom f=\dom f'$;
\itlm{api2} if $f$ is a function such that $\dom f\sq\omal$ then $f'=\pi\ap f$ is a function, $\dom f'=\ens{\pi(\xi)}{\xi\in\dom f}$, and $f'(\pi(x))=f(x)$ for all $x\in\dom f$.\snos {We ignore the conflicting case when both $\ran f\sq\omal$ and $\dom f\sq\omal$ as it will never happen in the domains of action of transformations $\pi\in \pif$ considered below.} \een
Accordingly, we define that any $\pi\in \pif$: \ben \aenu \itlm{pe1} acts on $\dA$ by \ref{api1}, so that if $a\in\dA$ then $a'=\pi\ap a\in\dA$, $\dom a'=\dom a$, and $a'(n,\ga)=\pi(a(n,\ga))$ for all $\ang{n,\ga}\in\dom a$;
\itlm{pe2} acts on $\pH$ (and on $\dH\sq\pH$) by \ref{api2}, so that if $h\in\pH$ then $h'=\pi\ap h\in\pH$, $\abs{h'}=\ens{\pi(\xi)}{\xi\in\abs h}$, and $h'_{\pi(\xi)}=h_\xi$ for all $\xi\in\abs h$.
\vyk{ \itlm{pe3} acts on $\dQ$ as the identity, $\pi\ap q=q$ for any $q\in\dQ$, since neither \ref{api1} nor \ref{api2} is, generally speaking, applicable to an arbitrary element $q\in\dQ$. } \een
Finally if $t=\ang{q^t,a^t,h^t}\in \dT$ then put $\pi\ap t=\ang{q^t,\pi\ap a^t,\pi\ap h^t}$.
The following lemma is rather obvious.
\ble \lam{sk1} Any\/ $\pi\in\pif$ is an order-preserving automorphism of the ordered sets\/ $\dA$, $\dH$, and\/ $\dT$.
Moreover if\/ $a\in\dA$ and\/ $n\in\bas a\bez\bas \pi$ then\/ $(\pi\ap a)\og n=a\og n$, and accordingly if\/ $h\in\dH$ and\/ $n\nin\bas \pi$ then\/ $(\pi\ap h)\og n=h\og n$.\qed \ele
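Spelled out, the only clause of Section~\ref{bd5} that requires a computation when checking that $\pi\ap t\in\dT$ is \ref{t3}: writing $a'=\pi\ap a^t$ and $h'=\pi\ap h^t$, clauses \ref{pe1} and \ref{pe2} give $\ran a'=\ens{\pi(\xi)}{\xi\in\ran a^t}\sq\abs{h'}$ and
\[h'_{a'(n,\ga)} \,=\, h'_{\pi(a^t(n,\ga))} \,=\, h^t_{a^t(n,\ga)} \,=\, q^t_\ga\ogr n\]
for all $n\in\bas t$ and $\ga\in\abs t$.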
\parf{Swaps} \las{sym2}
Suppose that $a,b\in\dA$, $\dom a=\dom b=D$, and $\ran a=\ran b$. Such a pair of assignments induces a \rit{swap transformation} $\sw ab$, acting: $$ \bay{clccc} \text{from}& \dA_{a}=\ens{c\in\dA}{c\le a}& \text{to}&\dA_b\,,& \\[1ex]
\text{from}& \pQ_{a}=\ens{q\in\pQ} {\abs a\sq\abs q\land q\text{ is compatible with } a}& \text{to}&\pQ_b\,,\\[1ex]
\text{from}& \dQ_{a}=\ens{q\in\dQ} {\abs a\sq\abs q\land q\text{ is compatible with } a}& \text{to}&\dQ_b\,. \eay $$ Recall that $q\in\pQ$ is compatible with $a\in \dA$ iff
$q_\ga\og n=q_\da\og n$ holds whenever $\ga,\da\in\abs a\cap\abs q$, $n\in\bas a$, and $a(n,\ga)=a(n,\da)$. Obviously $\dQ_{a}=\pQ_a\cap\dQ$.
The action of $\sw ab$ on $\dA_{a}$ is defined as follows: \ben \aenu \itlm{sabA} if $c\in\dA_a$ then $c'=\sw ab\ap c\in \dA$, $\dom c'=\dom c$, ${c'\res D}=b$ (where $D=\dom a=\dom b$), and $c'\res{(\dom c\bez D)}= c\res{(\dom c\bez D)}$. \een
The action of $\sw ab$ on $\pQ_{a}$ is defined as follows. First of all, if $n\in\bas a$ and $\ga\in\abs a$ then let $\swt abn(\ga)$ be the least $\vt\in\abs a$ satisfying $a(n,\vt)=b(n,\ga)$; such ordinals $\vt$ exist because $\ran a=\ran b$. Thus $\swt abn:\abs a\to\abs a$. Then: \ben \aenu \atc1 \itlm{sQ} if $q\in\pQ_a$ then $q'=\sw ab\ap q\in \pQ$, $\abs{q'}=\abs q$, and for all $n\in\om$ and $\ga\in\abs q$:
\ben \itlm{sQ1} if $\ga\in\abs{a}$ and $n\in\bas{a}$ then $q'_\ga\og n=q_\vt\og n$, where $\vt=\swt abn(\ga)$,
\itlm{sQ2} if either $\ga\nin\abs{a}$ or $n\nin\bas{a}$ then $q'_\ga\og n=q_\ga\og n$. \een \een
Finally if $t\in \dT_a=\ens{t\in\dT}{a^t\le a}$ (then $a^t\in\dA_a$ and $q^t\in\dQ_a$) then put $$ \sw{a}{b}\ap t=\ang{\sw{a}{b}\ap q^t,\sw{a}{b}\ap a^t,h^t}. $$
\ble \lam{sk2} Assume that\/ $a,b\in\dA$, $\bas a=\bas b=B$, $\abs a=\abs b=\Da$, and\/ $\ran a=\ran b$. Then\/ $\sw ab$ is an order-preserving bijection\/ $\dA_{a}\onto\dA_{b}$, $\dQ_{a}\onto\dQ_{b}$, $\dT_{a}\onto\dT_{b}$ and\/ $\sw ba$ is the inverse in each of the three cases.
Let\/ $t\in\dT_a$. Then\/ $t'=\sw ab\ap t\in\dT_b$, $\abs t=\abs{t'}$, $\bas t=\bas{t'}$, and$:$ \ben \renu \itlm{sk2i} if\/ $t$ is uniform, then so is\/ $t'$ and\/ $q^t,q^{t'}$ are equally shaped$;$
\itlm{sk2ii} if\/ $n\in B$, $\ga\in\Da$, and\/ $a(n,\ga)=b(n,\ga)$ then\/ $a^t(n,\ga)=a^{t'}(n,\ga)=a(n,\ga)=b(n,\ga)$ and\/ $q^{t'}_\ga\og n=q^t_\ga\og n\,;$
\itlm{sk2iii} if\/ $n\in\bas t$ then\/ $\ens{q^{t'}_\ga\og n}{\ga\in\abs{t'}}= \ens{q^{t}_\ga\og n}{\ga\in\abs{t}}\,.$ \een \ele \bpf The first essential part of the lemma is to show that if $t\in \dT_a$ then $t'=\sw ab\ap t\in\dT_b$. In fact it suffices to show that $t'\in\dT$, and here the only notable task is to prove \ref{t3} of Section \ref{bd5}, that is, $q^{t'}_\ga\ogr n=h^{t'}_{a^{t'}(n,\ga)}$ for all $n\in\bas{t'}$ and $\ga\in\abs{t'}$.
We can assume that $n\in\bas{a}$ and $\ga\in\abs{a}$, simply because $\sw ab$ is the identity outside of $\dom a=\bas{a}\ti\abs{a}$. We have $a^{t'}(n,\ga)=b(n,\ga)$ within this narrower domain, hence the result to prove is $q^{t'}_\ga\ogr n=h^{t}_{b(n,\ga)}$ for all $n\in\bas{a}$ and $\ga\in\abs{a}$. (Recall that $\sw ab$ does not change $h^t$, so that $h^{t'}=h^t$.)
However $q^{t'}_\ga\ogr n=q^{t}_\vt\ogr n$ by \ref{sQ1}, where $\vt=\swt abn(\ga)$, so that, in particular, $a(n,\vt)=b(n,\ga)$. Thus the equality required turns out to be $q^{t}_\vt\ogr n=h^{t}_{a(n,\vt)}$, which is true since $t$ is a condition.
The other essential claim is that the action of $\sw ba$ is the inverse of the action of $\sw ab$. Suppose that $t\in\dT_a$ and let $t'=\sw ab\ap t$; $t'\in\dT_b$. Put $s=\sw ba\ap t'$; $s\in\dT_a$ once again. We have to show that $s=t$. The key fact is $q^s_\ga\og n=q^t_\ga\og n$ for all $n\in\bas a$ and $\ga\in\abs a$. By definition $q^s_\ga\og n=q^{t'}_\za\og n$, where $\za=\swt ban(\ga)$, in particular, $b(n,\za)=a(n,\ga)$. Still by definition, $q^{t'}_\za\ogr n=q^{t}_\vt\ogr n$, where $\vt=\swt abn(\za)$, so that $a(n,\vt)=b(n,\za)$. To conclude, $q^s_\ga\og n=q^{t}_\vt\ogr n$, where $a(n,\ga)=a(n,\vt)$. But then $q^t_\ga\og n=q^{t}_\vt\ogr n$ by \ref{t2} of Section \ref{bd5}, and hence we have $q^s_\ga\og n=q^{t}_\ga\ogr n$, as required.
Claims \ref{sk2i}, \ref{sk2ii} are rather obvious.
It follows from \ref{sQ2} that claim \ref{sk2iii} is trivial for $n\in\bas t\bez B$, while in the case $n\in B$ it suffices to prove $\ens{q^{t'}_\ga\og n}{\ga\in \Da}=\ens{q^{t}_\ga\og n}{\ga\in \Da}$. The inclusion $\sq$ holds because $q^{t'}_\ga\og n=q^t_\vt\og n$ by \ref{sQ1}, where $\vt=\swt abn(\ga)$. The inclusion $\qs$ holds for the same reason with respect to the inverse swap $\sw ba$. \epf
\parf{Rotations} \las{sym3}
This is a more complicated type of transformation, which we define by extension, beginning with the most elementary conditions.
\punk{Simple rotations} \las{sym3a}
If $d\in\dD$ and $p\in\dP$, or generally even $d\in\dda$ and $p\in\pP$, then define $d\ap p=p':\dom{p'}\to2$ so that $\dom p=\dom{p'}$ and $$ p'(\al)=\left\{ \bay{rcl} p(\al) &\text{whenever}& \al\in(\dom p)\bez d\,,\\[1ex]
1-p(\al)&\text{whenever}& \al\in d\cap\dom p\,. \eay \right. $$
Clearly $p\mapsto d\ap p$ is an order-preserving automorphism of $\dP$ and of $\pP$.
Transformations of this type, as well as those based on them and defined below, will be called \rit{rotations}.
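To illustrate the definition on a toy example (where $\al_1,\al_2,\al_3$ are arbitrary ordinals in the relevant domain; the example is not used later): if $\dom p=\ans{\al_1,\al_2,\al_3}$, $p(\al_1)=0$, $p(\al_2)=1$, $p(\al_3)=0$, and $d\cap\dom p=\ans{\al_1,\al_2}$, then $p'=d\ap p$ satisfies $p'(\al_1)=1$, $p'(\al_2)=0$, $p'(\al_3)=0$. Note also that $d\ap(d\ap p)=p$ for all $d$ and $p$, so that every transformation of this form is an involution.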
\punk{Rotations for narrow subconditions} \las{sym3b}
We define product rotations which fit to conditions in $\pH$ and $\dH\sq\pH$. Let $\Psd$ consist of all indexed sets \imar{Psd} $\psi=\sis{\psi_\xi}{\xi\in\abs\psi}$, where $\abs\psi\sq\omal$ is a finite set, and $\psi_\xi\in\ogd n$ for all $n\in\om$ and $\xi\in\abs\psi\cap\aal n$. If $\psi\in\Psd$ and $h\in\pH$ then define $h'=\psi\ap h\in\pH$ so that $\abs{h'}=\abs h$ and for all $\xi$: $$ h'_\xi=\left\{ \bay{rcl} h_\xi &\text{whenever}& \xi\in\abs h\bez \abs\psi\,,\\[1ex]
\psi_\xi\ap h_\xi &\text{whenever}& \xi\in \abs h\cap\abs\psi\,. \eay \right. $$ Let $\Psd\og n=\ens{\psi\in\Psd}{\abs\psi\sq\aal n}$; and accordingly if $\psi\in\Psd$ then let $\psi\og n=\psi\res\aal n$; then $\psi\og n\in\Psd\og n$. The next lemma is obvious.
\ble \lam{rl1} If\/ $\psi\in\Psd$ then the map\/ $h\mapsto\psi\ap h$ is an order-preserving action\/ $\pH\onto\pH$ and\/ $\dH\onto\dH$.\qed \ele
\punk{Rotations for wide subconditions} \las{sym3c}
Now define product rotations which fit to conditions in $\pQ$ and $\dQ\sq\pQ$. Let $\Phd$ consist of all indexed sets \imar{Phd} $\vpi=\sis{\vpi_\xi}{\xi\in\abs\vpi}$, where $\abs\vpi\sq\la$ is a finite set and $\vpi_\ga\in\dD$ for all $\ga\in\abs\vpi$. If $\vpi\in\Phd$ and $q\in\pQ$ then define $q'=\vpi\ap q\in\pQ$ so that $\abs{q'}=\abs q$ and for all $\ga$: $$ q'_\ga=\left\{ \bay{rcl} q_\ga &\text{whenever}& \ga\in\abs q\bez\abs\vpi\,,\\[1ex]
\vpi_\ga\ap q_\ga &\text{whenever}& \ga\in \abs\vpi\cap\abs q\,. \eay \right. $$ The next elementary lemma is left to the reader.
\ble \lam{rl2} If\/ $\vpi\in\Phd$ then the map\/ $q\mapsto\vpi\ap q$ is an order-preserving action\/ $\pQ\onto\pQ$ and\/ $\dQ\onto\dQ$. If\/ $q\in\pQ$ then\/ $q$ and\/ $\vpi\ap q$ are equally shaped.\qed \ele
As above, say that $\vpi\in\Phd$ is \rit{compatible} with an assignment $a\in \dA$, in symbol $\vpi\in\Phd_a$, iff
$\vpi_\ga\og n=\vpi_\da\og n$ holds whenever $\ga,\da\in\abs\vpi\cap\abs a$, $n\in\bas a$, and $a(n,\ga)=a(n,\da)$. In this case, if in addition $\abs \vpi\sq\abs a$ then we define: \ben \aenu \itla{ro1} a rotation $\psi=\ftp\vpi a\in\Psd$ (\dd a\rit{projection}) \imar{ftp vpi a} so that $$ \abs\psi=\ens{a(n,\ga)}{n\in\bas a\land\ga\in\abs\vpi} $$ and if $n\in\bas a$, $\ga\in\abs\vpi$, and $\xi=a(n,\ga)$ then $\psi_\xi=\vpi_\ga\og n$;
\itla{ro2} a rotation $\ve=\ftw\vpi a\in\Phd$ \imar{ftw vpi a} (\dd a\rit{extension}) so that $\abs{\ve}=\abs a$, $\ve_\da=\vpi_\da$ for all $\da\in\abs\vpi$, and the following holds for all $\ga\in\abs a\bez\abs \vpi$ and $n\in\om$: \een
$$ \ve_\ga\og n= \left\{ \bay{rcl} \vpi_\da\og n &\text{iff}& n\in\bas a\,\land\,\da\in\abs\vpi\,\land\,a(n,\ga)=a(n,\da)\,,\\[1ex]
\pu &\text{iff}& n\nin\bas a\,\lor\,\neg\: \sus\da\in\abs\vpi\,(a(n,\ga)=a(n,\da))\,. \eay \right. $$ The consistency of both \ref{ro1} and \ref{ro2} follows from the compatibility assumption.
\punk{Rotations for conditions} \las{sym3d}
Finally we define how any $\vpi\in\Phd$ acts on the set $$ \dT_\vpi=\ens{t\in\dT}{\abs\vpi\sq\abs t\,\land\, \vpi\text{ is compatible with }a^t}\,. $$ If $t\in\dT_\vpi$ then let $\vpi\ap t=t'$, where $q^{t'}=(\ftw\vpi{a^t})\ap q^t$, $a^{t'}=a^t$, $h^{t'}=(\ftp\vpi{a^t})\ap h^t$.
\ble \lam{sk3} Suppose that\/ $\vpi\in\Phd$. Then the map\/ $t\mapsto\vpi\ap t$ is an order-preserving action\/ $\dT_\vpi\onto\dT_\vpi$, with\/ $t\mapsto\vpi^{-1}\ap t$ being the inverse.
If\/ $t\in\dT_\vpi$ is uniform then so is\/ $t'=\vpi\ap t$, and\/ $q^t,q^{t'}$ are equally shaped. \ele \bpf Assume that $t\in\dT_\vpi$ and prove that $t'=\vpi\ap t$ belongs to $\dT_\vpi$ as well; this is the only part of the lemma not entirely trivial. We have to check \ref{t3} of Section~\ref{bd5}, that is, $h^{t'}_{a^{t'}(n,\ga)}=q^{t'}_\ga\ogr n$ for all $n\in\bas{t'}$ and $\ga\in \abs{t'}$. By definition $a^{t'}=a^t$, $\bas{t'}=\bas{t}$, and $\abs{t'}=\abs{t}$, hence we have to prove $q^{t'}_\ga\ogr n=h^{t'}_{a^t(n,\ga)}$, for all $n\in\bas{t}=\bas{t'}$, $\ga\in\abs{t}=\abs{t'}$.
Note that $q^{t'}=(\ftw\vpi{a^t})\ap q^t$ and $h^{t'}=\psi\ap h^t$, where $\psi=\ftp\vpi{a^t}\in\Psd$.\vom
{\it Case 1\/}: $\ga\in\abs\vpi$. Then $q^{t'}_\ga\og n=\vpi_\ga\og n\ap q^t_\ga\og n$. Let $\xi=a^t(n,\ga)$. By definition $h^{t'}_\xi=\psi_\xi\ap h^t_\xi$. On the other hand, $\psi_\xi=\vpi_\ga\og n$ and $h^t_\xi=q^t_\ga\og n$. Therefore $h^{t'}_\xi=\vpi_\ga\og n\ap q^t_\ga\og n=q^{t'}_\ga\og n$, as required.\vom
{\it Case 2\/}: $\ga\nin\abs\vpi$, and there is an ordinal $\da\in\abs \vpi$ such that $a^t(n,\ga)=a^t(n,\da)$. Then the extended rotation $\ve=\ftw\vpi{a^t}$ satisfies $\ve_\ga\og n=\vpi_\da\og n$, and hence $q^{t'}_\ga\og n=\ve_\ga\og n\ap q^t_\ga\og n= \vpi_\da\og n\ap q^t_\da\og n=q^{t'}_\da\og n=h^{t'}_\xi$, where $\xi=a^t(n,\ga)=a^t(n,\da)$ (we refer to Case 1), as required.\vom
{\it Case 3\/}: $\ga\nin\abs\vpi$, but there is no ordinal $\da\in\abs \vpi$ such that $a^t(n,\ga)=a^t(n,\da)$. The extended rotation $\ve=\ftw\vpi{a^t}$ satisfies $\ve_\ga\og n=\pu$ in this case, and hence $q^{t'}_\ga\og n=q^t_\ga\og n$. Moreover, the Case~3 assumption means that $\xi=a^t(n,\ga)\nin \abs\psi$, and hence $h^{t'}_\xi=h^t_\xi$, and we are done. \epf
\parf{The symmetry lemma} \las{mst}
We begin with auxiliary definitions. If $u\in\dT$ then let $$ \tle{u}=\ens{u'\in\dT}{u'\le u}\,. $$
\bdf \lam{coh2} Suppose that $N\sq\om$ and\/ $\Ga\sq\la$ are finite sets. Conditions $s,t\in \dT$ are \rit{similar on\/ $N\ti\Ga$} iff \ben \xaenu \itlm{m1.}\msur $\Ga\sq\abs s=\abs t$, $N\sq\bas s=\bas t$,
\itlm{m2.}\msur ${q^s\res\Ga}={q^t\res\Ga}$ and the restricted assignments\/ $a^s\pes\Ga$ and\/ $a^t\pes\Ga$ are coherent (see Section~\ref{bd2}),
\itlm{m3.} if\/ $n\in N$ then\/ $h^s\og n=h^t\og n$, and\/ $a^s(n,\ga)=a^t(n,\ga)$ for all\/ $\ga\in\Ga$, \een
and \rit{strongly similar on\/ $N\ti\Ga$} if in addition
\ben \xaenu \atc3 \itlm{m1,}\msur $s,t$ are uniform conditions, and $q^s,q^t$ are equally shaped (see Section~\ref{bd4}),
\itlm{m4}\msur $\ran {a^s}=\ran {a^t}$ and $\abs{h^s}=\abs{h^t}$,
\itlm{m5} conditions $h^s$ and $h^t$ are regular at every $n\in\bas s\bez N$ (Section~\ref{bd3}),
\itlm{m6}\msur $\ens{h^s_\xi}{\xi\in\abs{h^s}}=\ens{h^t_\xi}{\xi\in\abs{h^t}}$ --- then easily\/\\ $\ens{h^s_\xi}{\xi\in\abs{h^s}\cap\aal n}= \ens{h^t_\xi}{\xi\in\abs{h^t}\cap\aal n}$ for all\/ $n$.\qed \een \eDf
\bte [the symmetry lemma] \lam{main} Suppose that\/ $N\sq\om$, $\Ga\sq\la$ are finite sets, conditions\/ $s,t\in \dT$ are strongly similar on\/ $N\ti\Ga$, $B=\bas s=\bas t$, $\Da=\abs s=\abs t$.
Then$:$ \ben \renu \itlm{mai1} there exists a transformation\/ $\pi\in\pif$ such that\/ $\pi\og n$ is the identity for all\/ $n\in N$, condition\/ $u=\pi\ap s$ is strongly similar to\/ $t$ on\/ $N\ti\Ga$, and moreover\/ $\pi\ap h^s=h^u=h^t$, and\/ $a^u\pes\Ga=a^t\pes\Ga\,;$
\itlm{mai2} condition\/ $v=\sw{a^u}{a^t}\ap u$ is strongly similar to\/ $t$ on\/ $N\ti\Ga$, and moreover\/ $h^v=h^u$ and\/ $a^v=a^t\,;$
\itlm{mai3} there is a rotation\/ $\vpi\in\Phd_{a^v}$ {\rm(\ie, compatible with $a^v$)} such that\/ $\abs\vpi=\Da$, $\vpi_\ga\og n=\pu$ for all\/ $n\in B$ and\/ $\ga\in\Da$,\snos {Then obviously $\vpi$ is compatible with each of the assignments $a^s,a^t,a^u,a^v$.} and moreover\/ $t=\vpi\ap v\,;$
\itlm{mai4}\msur
$\tau= \vpi\circ \sw{a^u}{a^t}\circ \pi$ is an order-preserving bijection from\/ $\tle{s}$ onto\/ $\tle{t}\,;$
\itlm{mai5} any condition\/ $s'\in\tle{s}$ is similar to\/ $t'=\tau\ap s'$ on\/ $N\ti \Ga$. \een \ete
\bpf \ref{mai1} Let
$\Xi=\abs{h^s}=\abs{h^t}$.
Under our assumptions, obviously there is a transformation $\pi\in\pif$ such that
\ben
\aenu \itlm{pi1}\msur $\bas\pi=B$ and if $n\in N$ then $\pi\og n$ is the identity;
\itlm{pi2}\msur $\pi(a^s(n,\ga))=a^t(n,\ga)$\snos {As $s,t$ are similar on $\Ga$, here we avoid a contradiction related to the possibility of equalities $a^t(n,\ga)=a^t(n,\ga')$ for $\ga\ne\ga'$ in $\Ga$.}
for all $n\in B$ and $\ga\in\Ga$; \vyk{ \snos {To have this property, it is important that \ref{m2} of Definition~\ref{coh2} holds for $p,q$.} }
\itlm{pi3}\msur $\pi$ maps the set $\Xi$ onto itself, and $\pi$ is the identity outside of $\Xi$,
\itlm{pi4} if $\xi\in \Xi=\abs{h^s}$ then $h^s_\xi=h^t_{\pi(\xi)}$. \een The only point that needs checking is whether \ref{pi2} is consistent with \ref{pi4}. That is, we have to check that $h^s_{a^s(n,\ga)}=h^t_{a^t(n,\ga)}$. Note that $h^s_{a^s(n,\ga)}=q^s_\ga\og n$ and $h^t_{a^t(n,\ga)}=q^t_\ga\og n$ by \ref{t3} of Section~\ref{bdf}. On the other hand $q^s_\ga=q^t_\ga$ by \ref{m2.} of Definition~\ref{coh2}, as required.
\ble \lam{ml1} The transformation\/ $\pi$ satisfies\/ \ref{mai1} of the theorem, and in addition if\/ $s'\in\tle s$ then\/ $s'$ is similar to\/ $u'=\pi\ap s'$ on\/ $N\ti\Ga$. \ele \bpf[Lemma] Prove that $h^u=\pi\ap h^s$ is equal to $h^t$. (This is a fragment of \ref{mai1}.) We have $\abs{h^u}=\ens{\pi(\xi)}{\xi\in\abs{h^s}}=\Xi$ by \ref{pi3}, and $\abs{h^t}=\Xi$ as well. Thus it remains to prove that $h^u_\eta=h^t_\eta$ for any $\eta=\pi(\xi)\in \Xi$, where $\xi\in \Xi$. Yet by definition (Section~\ref{sym1}) $h^u_\eta=h^s_\xi$, and $h^t_\eta=h^s_\xi$ by \ref{pi4}.
The equality $a^u\pes\Ga=a^t\pes\Ga$ follows from \ref{pi2} since $a^u(n,\ga)=\pi(a^s(n,\ga))$.
Prove that any $s'\in T\zd s'\le s$, is similar to $u'=\pi\ap s'$ on $N\ti\Ga$.
Item \ref{m1.} of Definition~\ref{coh2} holds for the pair of conditions $s',u'$ simply because the action of any $\pi\in\pif$ preserves $\abs\cdot$ and $\bas$.
Prove \ref{m2.}. We have $q^{s'}=q^{u'}$ because the action of $\pi$ does not change $q^{s'}$ at all. To show the coherence of $a^{s'}\pes\Ga$ and $a^{u'}\pes\Ga$ suppose that $\ga,\da\in\Ga$, $n\in\om$, and $a^{s'}(n,\ga)=a^{s'}(n,\da)$, and prove that $a^{u'}(n,\ga)=a^{u'}(n,\da)$. (The converse implication can be checked in much the same way.)
Suppose first that $n\in B$. Then $a^{s'}(n,\ga)=a^{s}(n,\ga)$ and $a^{s'}(n,\da)=a^{s}(n,\da)$, therefore $a^{s}(n,\ga)=a^{s}(n,\da)$. It follows that $a^{t}(n,\ga)=a^{t}(n,\da)$ by the coherence in \ref{m2.} for $s,t$, therefore $a^{u}(n,\ga)=a^{u}(n,\da)$ since $a^u\pes\Ga=a^t\pes\Ga$, and finally $a^{u'}(n,\ga)=a^{u'}(n,\da)$, as required.
Now suppose that $n\nin B$. Then the equality $a^{s'}(n,\ga)=a^{s'}(n,\da)$ implies $\ga=\da$ by \ref{a4} of Section~\ref{bdf}, so obviously $a^{u'}(n,\ga)=a^{u'}(n,\da)$.
To check \ref{m3.}, that is, $h^{u'}\og n=h^{s'}\og n$ and $a^{u'}(n,\ga)=a^{s'}(n,\ga)$ for all $\ga\in\Ga$ and $n\in N$, use the fact that $\pi\og n$ is the identity for any $n\in N$ by \ref{pi1}.
Prove that $s$ is strongly similar to $u=\pi\ap s$ on $N\ti\Ga$. We have \ref{m1,} of Definition~\ref{coh2} (for the pair of conditions $s,u$) for rather obvious reasons.
The equalities $\ran a^{u}=\ran a^{s}$ and $\abs{h^{u}}=\abs{h^{s}}$ in \ref{m4} hold by \ref{pi3} since $\ran a^{u}$ is equal to the \dd\pi image of $\ran a^{s}$. Finally the equality $\ens{h^{u}_\xi}{\xi\in\abs{h^{u}}}= \ens{h^{s}_\xi}{\xi\in\abs{h^{s}}}$ in \ref{m6} holds whenever one condition is the image of the other under some $\pi\in\pif$.
We conclude that conditions $u$ and $t$ are strongly similar on $N\ti\Ga$. \epF{Lemma}
\ref{mai2} Let $a=a^u$ and $b=a^t$. Thus $a,b\in\dA$, $\dom a=\dom b=B\ti \Da$, $\ran a=\ran b$, and $a\pes\Ga=b\pes\Ga$ by the above. Thus, as obviously $u\in \dT_a$, we define $v=\sw ab\ap u\in \dT_b$.
\ble \lam{ml2} Condition\/ \ref{mai2} of the theorem holds, and in addition if\/ $u'\in\tle u$ then\/ $u'$ is similar to\/ $v'=\sw ab\ap u'$ on\/ $N\ti\Ga$. \ele \bpf[Lemma] That the equalities $h^v=h^u$ and $a^v=a^t$ in \ref{mai2} hold is clear by definition: for instance, swaps do not change $h^u$ at all.
Prove that any $u'\in T\zd u'\le u$, is similar to $v'=\sw ab\ap u'$ on $N\ti\Ga$.
By definition (see Section~\ref{sym2}) $v'$ and $u'$ are equal outside of the domain $N\ti\Da$, and $h^{v'}=h^{u'}$. Therefore we can \noo\ assume that $\abs{v'}=\abs{u'}=\Da$ and $\bas{v'}=\bas{u'}=B$. Then $a^{v'}=b=a^t$ and $a^{u'}=a=a^u$, thus the restricted assignments $a^{v'}\pes\Ga=b\pes\Ga$ and $a^{u'}\pes\Ga=a\pes\Ga$ are not merely coherent (as required by \ref{m2.} of Definition~\ref{coh2}) but just equal by the above. The equality $q^{v'}\res\Ga=q^{u'}\res\Ga$ in \ref{m2.} follows from $a\pes\Ga=b\pes\Ga$ as well. And finally we have $h^{v'}=h^{u'}$ ($\sw ab$ does not change this component), proving \ref{m3.}.
Now prove that $u$ is strongly similar to $v=\sw ab\ap u$ on $N\ti\Ga$. We skip \ref{m1,} of Definition~\ref{coh2} as clear and rather boring. Further, as $h^{v}=h^{u}$, we have $\abs{h^{v}}=\abs{h^{u}}$ in \ref{m4} and the whole of \ref{m6}. It remains to show $\ran a^v=\ran a^u$ in \ref{m4}. Recall that $a^v=a^t$ while conditions $s,t,u$ are strongly similar, therefore $\ran a^v=\ran a^t=\ran a^s=\ran a^u$.
We conclude that conditions $v$ and $t$ are strongly similar on $N\ti\Ga$. \epF{Lemma}
\ref{mai3} Thus $v,t$ are uniform conditions, strongly similar on $N\ti\Ga$, and $a^v=a^t$. In particular $q^v$ and $q^t$ are equally shaped, that is, in this case, $\abs{q^v}=\abs{q^t}=\Da$ and $\dom{q^v_\ga\og n}=\dom{q^t_\ga\og n}$ holds for all $\ga\in \Da$ and $n\in\om$. Define a rotation $\vpi\in\Phd$ so that still $\abs\vpi=\Da$, and $$ \vpi_\ga\og n=\ens{\al\in\dom{q^v_\ga\og n}=\dom{q^t_\ga\og n}} {q^v_\ga(\al)\ne q^t_\ga(\al)} $$ for all $\ga\in \Da$ and $n\in\om$. Then clearly $\vpi\ap q^v=q^t$. Moreover $\vpi$ is compatible with $a^v=a^t$, because so are $q^t$ and $q^v$ in the sense of \ref{t2} of Section~\ref{bdf}. Thus conditions $v$ and $t$ belong to $\dT_\vpi$, so $\vpi\ap v$ makes sense.
\ble \lam{ml3} Condition\/ \ref{mai3} of the theorem holds, and in addition if\/ $v'\in\tle v$ then\/ $v'$ is similar to\/ $t'=\vpi\ap v'$ on\/ $N\ti\Ga$. \ele \bpf[Lemma]
Recall that $a^v=a^t$ and $h^v=h^u=h^t$ by \ref{mai1}, \ref{mai2}. It follows by \ref{t3} of Section~\ref{bdf} that $q^v_\ga\og n=q^t_\ga\og n$, and hence $\vpi_\ga\og n=\pu$, whenever $\ga\in\Da$ and $n\in B$. To complete the proof of \ref{mai3}, check that $\vpi\ap v=t$. Indeed $a^v=a^t$ since $\vpi$ does not change this component. Further, $q^t=\vpi\ap q^v$ simply by the choice of $\vpi$. Let us show that $h^t=h^v$ as well. Indeed, since by definition $\bas{h^v}=B=\bas t$, any change in $h^v$ by the action of $\vpi$ can only be due to a component $\vpi_\ga\og n$ for some $\ga\in\Da$ and $n\in B$ --- but this is the identity since $\vpi_\ga\og n=\pu$ in this case.
Now prove that any $v'\in T\zd v'\le v$, is similar to $t'=\vpi\ap v'$ on $N\ti\Ga$. By definition $a^{v'}=a^{t'}$, covering the coherence in \ref{m2.} of Definition~\ref{coh2}. Further the extended rotation $\vpi'=\ftw\vpi{a^{v'}}$ obviously satisfies the same property $\vpi'_\ga\og n=\pu$ for all $n\in B$ and $\ga\in\Da'=\abs{v'}$. This implies $h^{t'}\og n=h^{v'}\og n$ even for all $n\in B$, so that \ref{m3.} holds for $v',t'$ for all $n\in B$. It only remains to prove that $q^{t'}\res \Ga=q^{v'}\res\Ga$ in \ref{m2.} of Definition~\ref{coh2}, that is, $q^{t'}_\ga=q^{v'}_\ga$ for all $\ga\in\Ga$.
By definition it suffices to show that $\vpi_\ga\og n=\pu$ for all $\ga\in\Ga$ and $n\in\om$, or equivalently, $q^v\res\Ga=q^t\res\Ga$ --- yet this is the case since $v$ and $t$ are similar on $N\ti\Ga$ by the above. \epF{Lemma}
Finally, \ref{mai4} of the theorem is a consequence of lemmas \ref{sk1}, \ref{sk2}, \ref{sk3}, while \ref{mai5} is a corollary of lemmas \ref{ml1}, \ref{ml2}, \ref{ml3}.\vtm
\epF{Theorem}
\parf{The extension} \las{E}
Let a set $G\sq\dT$ be \dd\dT generic over $\bL$. It naturally produces: \bit \itsep \item[--\ ] for any $n$ and $\xi\in\aal n$, $\fg_\xi=\bigcup_{t\in G}h^t_\xi\in2^{\aal n}$,
\item[--\ ] for every $n$, $\fg\og n=\sis{\fg_\xi}{\xi\in\aal n}:\aal n\to 2^{\aal n}$,
\item[--\ ] for any $\ga<\la$, $\yg_\ga=\bigcup_{t\in G}q^t_\ga\in2^{\omal}$,
\item[--\ ]\msur for any $\ga<\la$ and $n$, $\yg_\ga\og n=\yg_\ga\res\aal n\in2^{\aal n}$,
\item[--\ ]\msur $\vy[G]=\sis{\yg_\ga}{\ga<\la}$, a map $\la\to 2^{\omal}$,
\item[--\ ] a map $\ag=\bigcup_{t\in G}a^t:\om\ti\la\to\omal$ such that $\ag(n,\ga)\in\aal n$ for all $n$ and $\ga$. \eit
\ble \lam{str} If a set\/ $G\sq\dT$ is\/ \dd\dT generic over\/ $\bL$ then\/ \ben \renu \itsep \itla{str1} if\/ $n<\om$, $\ga<\la$, and\/ $\ag(n,\ga)=\xi$ then\/ $\yg_\ga\og n=\fg_\xi\;;$
\itla{str2} if\/ $n<\om$, $\ga,\da<\la$, and\/ $\ag(n,\ga)\ne\ag(n,\da)$ then\/ $\yg_\ga\og n\ne\yg_\da\og n\;;$
\itla{str3} if\/ $\ga\ne\da<\la$ then there is a number\/ $n_0=n_0(\ga,\da)$ such that\/ $\ag(n,\ga)\ne \ag(n,\da)$ for all\/ $n\ge n_0$. \een \ele \bpf \ref{str1} is obvious.
\ref{str2} Suppose that a condition $t\in G$ forces otherwise, and $\ga,\da\in\abs t$, $n\in\bas t$. Then $\xi=a^t(n,\ga)\ne a^t(n,\da)=\eta$; $\xi,\eta$ are ordinals in $\aal n$. Note that $h^t_\xi$ and $h^t_\eta$ are conditions in $\dP\og n$. Let $w_\xi\le h^t_\xi$ and $w_\eta\le h^t_\eta$ be any pair of {\ubf incompatible} conditions in $\dP\og n$. Let $t'\in T$ be a condition which differs from $t$ only in the following: $q^{t'}_\ga\og n=h^{t'}_\xi=w_\xi$ and $q^{t'}_\da\og n=h^{t'}_\eta=w_\eta$. Obviously $t'\le t$, and $t'$ forces that $\yg_\ga\og n\ne\yg_\da\og n$.
\ref{str3} Definitely there is a condition $t\in G$ such that $\abs t$ contains both $\ga$ and $\da$. Let $B=\bas t$ (a finite subset of $\om$) and let $n_0$ be bigger than $\tmax B$. Now if $s\in G$, $s\le t$, and $n\in\bas s$, $n\ge n_0$, then $a^s\le a^t$, and hence $a^s(n,\ga)\ne a^s(n,\da)$. This implies $\ag(n,\ga)\ne \ag(n,\da)$. \epf
Now let us define a {\ubf symmetric subextension} of $\bL[G]$, based on certain symmetric hulls of the sets $\fg\og n$ and $\yg_\ga$.
\bbl \lam{bla1} Below, $\pif\zd\Phd\zd\Psd\zd\dD\zd\dD\og n\zd$ mean objects defined in $\bL$ as in Sections \ref{bd} --- \ref{sym3}. Thus in particular $\pif\in\bL$ and all elements of $\pif$ belong to $\bL$ as well.\qed \ebl
In $\bL[G]$, put \bit \item[--\ ] for every $n$, $\Fg\og n=$ the \dd{(\pif,\Psd)}hull of $\fg\og n$. Thus the set $\Fg\og n$ consists of elements of the form $\pi\ap(\psi\ap \fg\og n)$, where $\pi\in\pif$
and $\psi\in\Psd$.
\item[--\ ] $\vfg=\sis{\Fg\og n}{n<\om}$. \eit
The actions of $\pi\in\pif$ and $\psi\in\Psd$ are defined as in sections \ref{sym1} and \ref{sym3} above. In particular $\psi\ap \fg\og n$ and $\pi\ap(\psi\ap \fg\og n)$ are maps $\aal n\to2^{\aal n}$ in $\bL[\fg\og n]$. It is clear that $\Fg\og n$ is closed under further application of transformations in $\pif$ and $\Psd$, so there is no need to consider iterated actions.
It takes more time to define suitable hulls of elements $\yg_\ga$. First of all, put \bit \itsep \item[--\ ] for any $n$ and $\ga<\la$, $\Yg_\ga\og n=\ens{d\ap \yg_\ga\og n}{d\in\dD\og n}\sq 2^{\aal n}$;
\item[--\ ] for any $n$, $\Yg\og n=\bigcup_{\ga<\la}\Yg_\ga\og n$ --- still $\Yg\og n\sq 2^{\aal n}$, and obviously $\Yg\og n$ is the \dd{\dD\og n}hull of $\ens{\yg_\ga\og n}{\ga<\la}$. \eit Finally, if $\ga<\la$ then we let $\Yg_\ga$ be the set of all $z\in 2^{\omal}$ in $\bL[G]$ such that there exist a set $d\in\dD$ and a number $n_0$ satisfying:\vtm
1) $z\og n=d\og n\ap \yg_\ga\og n$ for all $n\ge n_0$;\snos {Regarding the action of $d\in\dD$ see Section~\ref{sym3a}.}\vtm
2) $z\og n\in \Yg\og n$ for all $n<n_0$.\vtm
\noi In other words, to obtain $\Yg_\ga$ we first define the \dd\dD hull $\dD\ap{\yg_\ga}=\ens{d\ap\yg_\ga}{d\in \dD}$ of $\yg_\ga$, and then allow sets in $\Yg\og n$ to be substituted for $y\og n$, for any $y\in \dD\ap{\yg_\ga}$ and finitely many $n$, so that \bit \item[$(\star)$]\msur $\Yg_\ga$ is the set of all $z\in 2^{\omal}$ (in $\bL[G]$) such that there exist an element $y\in\dD\ap{\yg_\ga}$ and a number $n_0$ satisfying: $z\og n=y\og n$ for all $n\ge n_0$, and $z\og n\in \Yg\og n$ for all $n<n_0$. \eit
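Note, as an immediate consequence of $(\star)$ with $n_0=0$, that the \dd\dD hull $\dD\ap{\yg_\ga}$ is itself a subset of $\Yg_\ga$; the additional freedom allowed by $(\star)$ consists only in the finitely many slices taken from the sets $\Yg\og n$.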
\ble \lam{cons} If\/ $\ga\ne\da$ then\/ $\Yg_\ga\cap\Yg_\da=\pu$. \ele \bpf Suppose towards the contrary that $z\in\Yg_\ga\cap\Yg_\da$. Then by $(\star)$ there exist rotations $d',d''\in\dD$ and a number $n_0$ such that the elements $y'=d'\ap\yg_\ga$ and $y''=d''\ap\yg_\da$ satisfy $y'\og n=y''\og n$ for all $n\ge n_0$. In other words, $\yg_\ga\og n=(d\ap\yg_\da)\og n$ for all $n\ge n_0$, where $d=d'\sd d''\in\dD$ (symmetric difference). Now use Lemma~\ref{str}\ref{str3} to obtain a number $n\ge n_0$ such that $\ag(n,\ga)\ne\ag(n,\da)$; still we have $\yg_\ga\og n=d\og n\ap\yg_\da\og n$. But this yields a contradiction similarly to the proof of Lemma~\ref{str}\ref{str2}. \epf
Now we let, in $\bL[G]$, $\vyg=\sis{\Yg_\ga}{\ga<\la}$, a function defined on $\la$.
We finally define $$ \textstyle \wg=\bigcup_{n}\Fg\og n\,\cup\, \bigcup_{\ga<\la}\Yg_\ga \,\cup\,\ans{\vfg,\vyg}\,. $$
\bdf \lam{se} $\Ls[G]=\bL(\wg)=$ HOD over $\wg$ in $\bL[G]$. \edf
Thus by definition every set in $\Ls G$ is definable in $\bL[G]$ by a formula with parameters in $\bL$, two special parameters $\vfg$ and $\vyg$, and finally parameters which belong to the sets $\Fg\og n$ and $\Yg_\ga$ for various $n<\om$ and $\ga<\la$. The next lemma allows us to reduce the last category of parameters, basically, to those in $\ens{\fg\og n}{n<\om}\cup\ens{\yg_\ga}{\ga<\la}$.
\ble \lam{redp} If\/ $n<\om$ then every\/ $x\in\Fg\og n$ belongs to\/ $\bL[\fg\og n]$. If\/ $\ga<\la$ and\/ $z\in\Yg_\ga$ then there is a finite set\/ $\Da\sq\la$ such that\/ $z\in\bL[\ens{\yg_\da}{\da\in\Da}]$. \ele \bpf By definition $x$ belongs to the \dd{(\pif,\Psd)}hull of $\fg\og n$. But $\pif$ and $\Psd$ belong to $\bL$ (see Blanket Agreement~\ref{bla1}). Regarding the claim for $z\in\Yg_\ga$, come back to $(\star)$. Note that $y$ as in $(\star)$ belongs to $\bL[\yg_\ga]$ (since $\dD\in\bL$). Then to obtain $z$ from $y$ we replace a finite number of intervals $y\og n$ in $y$ by elements of sets $\Yg\og n$. Thus suppose that $n<\om$ and $w\in\Yg\og n$, that is, $w\in\Yg_\da\og n$, where $\da<\la$. But then $w\in\bL[\yg_\da]$ (since $\dD\og n\in\bL$), so that it suffices to define $\Da$ as the (finite) set of all ordinals $\da$ which appear in this argument for all intervals $y\og n$ to be replaced. \epf
\parf{Definability lemma} \las{pdl}
The next theorem plays a key role in the analysis of the symmetric subextension defined above.
\bte [the definability lemma] \lam{dl1} Suppose that a set\/ $G\sq\dT$ is\/ \dd\dT generic over\/ $\bL$, and\/ $N\sq\om\zt \Ga\sq\la$ are finite sets. Let\/ $Z\in\bL[G]\zt Z\sq\bL$, be a set definable in\/ $\bL[G]$ by a formula with parameters in\/ $\bL$ and those in the list\/ $$ \ans{\vfg,\vyg}\,\cup\,\ens{\fg\og n}{n\in N} \,\cup\,\ens{\yg_\ga}{\ga\in\Ga}. $$ Then\/ $Z\in\bL[\ens{\fg\og n}{n\in N},\ens{\yg_\ga}{\ga\in\Ga}]$. \ete
Beginning the {\ubf proof of Theorem~\ref{dl1}}, we put $\vxng=\sis{\fg\og n}{n\in N}$ and $\vyng=\sis{\yg_\ga}{\ga\in\Ga}$, and let $$ \vt(z):= \vt(z,\vfg,\vyg,\vxng,\vyng) $$ be a formula such that $Z=\ens{z}{\vt(z)}$ in $\bL[G]$.
By Lemma~\ref{str}\ref{str3} there is $n_0$ such that $\ag(n,\ga)\ne \ag(n,\da)$ whenever $n>n_0$ and $\ga\ne\da$ belong to $\Ga$.
Let $M=N\cup\ens{n}{n\le n_0}$.
Say that a condition $t\in\dT$ \rit{complies with\/ $\vxng\zi\vyng$} if $M\sq\bas t$, $\Ga\sq\abs t$, and
\ben \itsep \Renu \itlm{pa1} if $n\in N$ and $\xi\in\abs{h^t}\cap\aal n$ then $h^t_\xi\subset \fg_\xi$,
\itlm{pa2} if $\ga\in\Ga$ then $q^t_\ga\subset \yg_\ga$,
\itlm{pa3} if $n\in\bas t$ and $\ga\in\Ga$ then $a^t(n,\ga)=\ag(n,\ga)$. \een For instance any condition $t\in G$ with $M\sq\bas t$, $\Ga\sq\abs t$ complies with $\vxng\zi\vyng$ for obvious reasons.
It is quite clear that the set $\txy$ of all conditions $t\in\dT$ which comply with $\vxng\zi\vyng$ belongs to $\bL[\vxng\zi\vyng]$. Therefore to prove the theorem it suffices to verify the following assertion:\vtm
\rit{if\/ $z\in\bL$, $s,t\in\txy$, and\/ $s$ forces\/ $\vt(z)$, then\/ $t$ does not force\/ $\neg\:\vt(z)$.}\vtm
\noi Suppose towards the contrary that this fails, so that
\ben \fenu \itlm*\msur $z\in\bL$, $s,t\in\txy$, condition $s$ forces\/ $\vt(z)$, while $t$ forces $\neg\:\vt(z)$. \een
The proof of Theorem~\ref{dl1} continues in Sections \ref{pdl1} and \ref{pdl2}.
\parf{Proof of the definability lemma, part 1} \las{pdl1}
We now work towards the symmetry lemma: our goal is to strengthen $s,t$ so that they meet the requirements of Theorem~\ref{main}.
\ble \lam{dl2} There exists a condition\/ $s'\in\txy$ such that\/ $\abs{s'}=\abs{s}\cup\abs t$, $\bas{s'}=\bas{s}\cup\bas t$, and\/ $s'\le s$.
Accordingly there is a condition\/ $t'\in\txy$ such that\/ $\abs{t'}=\abs{s}\cup\abs t$, $\bas{t'}=\bas{s}\cup\bas t$, and\/ $t'\le t$. \ele \bpf[Lemma] We define $a^{s'}$ and $q^{s'}$ by cases on the domain; this takes some time.\vom
\rit{Domain\/ $\bas s\ti \abs s$.} If $n\in\bas s$ and $\ga\in\abs s$ then put $a^{s'}(n,\ga)=a^s(n,\ga)$ and $q^{s'}(n,\ga)=q^s(n,\ga)$.\vom
\rit{Domain\/ $(\bas t\bez\bas s)\ti\Ga$.} If $n\in\bas t\bez\bas s$ and $\ga\in\Ga$ then put $a^{s'}(n,\ga)=\ag(n,\ga)$, and $q^{s'}(n,\ga)=q^s(n,\ga)$, as above.\vom
\rit{Domain\/ $(\bas t\bez\bas s)\ti(\abs s\bez\Ga)$.} For any $n\in\bas t\bez\bas s$ fix an injection $\da\mto\xi^n_\da$ from $\abs s\bez\Ga$ into $\aal n\bez \ens{a^{s'}(n,\ga)}{\ga\in\Ga}$. If now $\da\in\abs s\bez\Ga$ then put $a^{s'}(n,\da)=\xi^n_\da$ and $q^{s'}(n,\da)=\pu$.\vom
\rit{Domain\/ $(\bas t\cup\bas s)\ti(\abs t\bez\abs s)$.}
Fix an ordinal $\da^\ast\in\abs s$. If $n\in\bas t\cup\bas s$ and $\da\in\abs t\bez\abs s$ then put $a^{s'}(n,\da)=a^{s'}(n,\da^\ast)$ and $q^{s'}(n,\da)=q^{s'}(n,\da^\ast)$.\vom
\rit{Domain\/ $\big(\om\bez{(\bas t\cup\bas s)}\big)\ti(\abs t\bez\abs s)$.}
If $n\nin\bas t\cup\bas s$ and $\da\in\abs t\bez\abs s$ then put $q^{s'}(n,\da)=\pu$ and keep $a^{s'}(n,\da)$ undefined.\vom
On top of the above definition, define $h^{s'}$ so that $$ \abs{h^{s'}}=\abs{h^{s}} \cup\ens{\xi^n_\da}{n\in\bas t\bez\bas s\land \da\in\abs s\bez\Ga}, $$ $h^{s'}_\xi= h^{s}_\xi$ for all $\xi\in\abs{h^{s}}$, and $h^{s'}_{\xi^n_\da}=\pu$ for all $n\in\bas t\bez\bas s$ and $\da\in\abs s\bez\Ga$.
We claim that $s'$ is as required.
The key issue is to prove $a^{s'}\le a^s$, in particular, \ref{a4} of Section~\ref{bdf} for $a=a^{s'}$, $b=a^s$. Note that if $\ga\ne\da$ belong to $\Ga$ and $n\nin\bas s$ then $\ag(n,\ga)\ne \ag(n,\da)$ by the choice of $M$ and because $M\sq\bas s$. Therefore if $n\in\bas t\bez \bas s$ and $\ga,\da$ as indicated then by definition $a^{s'}(n,\ga)\ne a^{s'}(n,\da)$, as required.
We have \ref{pa1}, \ref{pa2}, \ref{pa3} for obvious reasons: in particular, $q^{s'}_\ga=q^s_\ga$ for all $\ga\in\Ga$, and if $n\in N$ then $n\in\bas s$ and hence by construction $\abs{h^{s'}}\cap\aal n= \abs{h^{s}}\cap\aal n$ and $h^{s'}_\xi= h^{s}_\xi$ for all $\xi\in\abs{h^{s'}}\cap\aal n$. \epF{Lemma}
It follows from the lemma that we can \noo\ assume in \ref* that
\ben \aenu \itlm{st1} conditions $s,t$ satisfy $\abs s=\abs t$ and $\bas s=\bas t$. \een
Moreover we can \noo\ assume that in addition to \ref* and \ref{st1}:
\ben \atc1 \aenu \itlm{st2}\msur $\abs{h^s}=\abs{h^t}$, and if $n\in\bas s=\bas t$ then the set $\abs{h^s}\cap\aal n=\abs{h^t}\cap\aal n$ is infinite. \een This is rather elementary. If, say, $\xi\in\abs{h^s}\bez\abs{h^t}$, then simply add $\xi$ to $\abs{h^t}$ and define $h^t_\xi=\pu$.
Further, we can \noo\ assume that, in addition to \ref*, \ref{st1}, \ref{st2}:
\ben \atc2 \aenu \itlm{st4} conditions $s,t$ satisfy $\ran a^s=\ran a^t$. \een Suppose that $n\in\bas s$ and, say, $\xi\in (\ran a^t\bez \ran a^s)\cap\aal n$. Put $\xi_n=\xi$ and for any $m\in\bas s\zt m\ne n$ pick an ordinal $\xi_m\in\abs{h^s}\cap\aal m$, $\xi_m\nin\ran{a^s}\cup\ran{a^t}$ (this is possible by \ref{st2}). Add an ordinal $\ga\nin\abs s=\abs t$ to $\abs s$ and to $\abs t$. If $m\in\bas s=\bas t$ then put $a^s(m,\ga)=a^t(m,\ga)=\xi_m$ and
$q^s_\ga\og m=q^t_\ga\og m=h^t_{\xi_m}$,
and in addition define $q^s_\ga\og m=q^t_\ga\og m=\pu$ for all $m\nin\bas s=\bas t$. Conditions $s,t$ extended this way still satisfy \ref*, \ref{st1}, \ref{st2}, but now $\xi\in\ran{a^s}$.
One has to carry out such an extension for all indices $\xi$ in $\ran a^t\bez \ran a^s$ and $\ran a^s\bez \ran a^t$, one by one; the details are left to the reader.
\bre After this step, the sets $\Da=\abs s=\abs t$ and $B=\bas s=\bas t$ (finite subsets of resp.\ $\la$ and $\om$) will not be changed, and neither will the assignments $a=a^s$ and $b=a^t$ ($\dom a=\dom b=B\ti\Da$). Put $\Xi=\abs{h^s}=\abs{h^t}$. \ere
Further we can \noo\ assume that in addition to \ref*, \ref{st1}, \ref{st2}, \ref{st4}:
\ben \atc3 \aenu \itlm{st5} subconditions $q^s,q^t$ are uniform and equally shaped. \een
It suffices to define a pair of stronger conditions $s',t'\in\txy$ such that $$ \abs{s'}=\abs{t'}=\Da\,,\; \abs{h^{s'}}=\abs{h^{t'}}=\Xi\,,\; \bas{s'}=\bas{t'}=B\,,\; a^{s'}=a\,,\; a^{t'}=b\,, $$ and in addition $q^{s'},q^{t'}$ are uniform and equally shaped.
Consider any $n\in\om$. Put $d\og n=\bigcup_{\da\in\Da}(\dom q^s_\da\og n\cup\dom q^t_\da\og n)$, a set in $\dD\og n$. If $\da\in\Da$ then define extensions $q^{s'}_\da\og n\zi q^{t'}_\da\og n\in\dP\og n$ of resp.\ $q^{s}_\da\og n\zi q^{t}_\da\og n$ so that
\ben \renu \itlm{zz1}\msur $\dom{q^{s'}_\da\og n}=\dom{q^{t'}_\da\og n}=d\og n$,
\itlm{zz2} if $n\in N$ and $\da\in\Ga$ then simply $q^{s'}_\da\og n= q^{t'}_\da\og n=\yg_\da\res{d\og n}$,
\itlm{zz3} if $n\in B$ and $\ga,\da\in\Da$ then: if $a(n,\da)=a(n,\ga)$ then $q^{s'}_\da\og n= q^{s'}_\ga\og n$,\\ and if $b(n,\da)=b(n,\ga)$ then $q^{t'}_\da\og n= q^{t'}_\ga\og n$. \een On top of this, define $h^{s'}_{a(n,\da)}=q^{s'}_\da\og n$ and $h^{t'}_{b(n,\da)}=q^{t'}_\da\og n$ for all $n\in B$ and $\da\in \Da$. For the rest, put $\abs{h^{s'}}=\abs{h^{t'}}=\Xi$ (recall that $\Xi=\abs{h^s}=\abs{h^t}$), and $h^{s'}_\xi=h^s_\xi$, $h^{t'}_\xi=h^t_\xi$ for all $\xi\in\Xi$ {\ubf not} in $\ran a=\ran b$.
Further, we can \noo\ assume that, in addition to \ref* and \ref{st1} --- \ref{st5}: \ben \atc4 \aenu \itlm{st3} conditions $s,t$ coincide on the domain $N\ti\Ga$, so that \ben \itla{st3a} if $\ga\in\Ga$ then $q^s_\ga=q^t_\ga$,
\itla{st3b} if $n\in N$ then $h^s\og n=h^t\og n$, that is, $h^s_\xi=h^t_\xi$ for all $\xi\in\abs{h^s}\cap\aal n=\abs{h^t}\cap\aal n$, and
\itla{st3c} if $n\in N$ and $\ga\in\Ga$ then $a^s(n,\ga)=a^t(n,\ga)=\ag(n,\ga)$ --- but this already follows from the compliance assumption. \een \een
Regarding \ref{st3a}, note that this is already done. Indeed, $q^s,q^t$ are equally shaped by \ref{st5}, and satisfy $q^s_\ga\su \yg_\ga$ and $q^t_\ga\su \yg_\ga$ by \ref*, therefore $q^s_\ga=q^t_\ga$.
Now consider \ref{st3b}; suppose that $n\in N$. Let $\xi\in\abs{h^s}\cap\aal n$.
If $\xi\in\ran{a^s}=\ran{a^t}$ then $\xi=a^s(n,\ga)=a^t(n,\da)$ for some $\ga,\da\in\Da$, and then $h^s_\xi=q^s_\ga\og n$ and $h^t_\xi=q^t_\da\og n$. It follows that $\dom{h^s_\xi}=\dom{h^t_\xi}$, by \ref{st5}. Therefore ${h^s_\xi}={h^t_\xi}$, because we have $h^s_\xi\su \fg_\xi$ and $h^t_\xi\su \fg_\xi$.
If $\xi\nin\ran{a^s}=\ran{a^t}$ then still $h^s_\xi\su \fg_\xi$ and $h^t_\xi\su \fg_\xi$, thus $h^s_\xi$ and $h^t_\xi$ are compatible as conditions in $\dP$, and we simply replace either of them by $h^s_\xi\cup h^t_\xi$.
And finally, we can \noo\ assume that, in addition to \ref* and \ref{st1} --- \ref{st3}:
\ben \atc5 \aenu \itlm{st6} we have $\ens{h^s_\xi}{\xi\in\abs{h^s}}=\ens{h^t_\xi}{\xi\in\abs{h^t}}$ as in \ref{m6} of Definition~\ref{coh2}, and subconditions $h^s,h^t$ are regular at every $n\in B\bez N$ (Section~\ref{bd3}). \een
The equality $\ens{h^s_\xi}{\xi\in\abs{h^s}\cap\aal n}= \ens{h^t_\xi}{\xi\in\abs{h^t}\cap\aal n}$ holds already for all $n\in N$ by \ref{st3}.
Now suppose that $n\in B\bez N$. The requirement of compliance with $\vxng\zi\vyng$ is void for $n\nin N$, therefore we can simply extend $h^s\og n$ and $h^t\og n$ to a bigger domain and appropriately define $h^s_\xi$ and $h^t_\xi$ for all ``new'' elements $\xi$ in these extended domains so that \ref{st6} holds, without changing $q^s\zi q^t$ and $a^s\zi a^t$.
To conclude, we can \noo\ assume in \ref* that \ref{st1} --- \ref{st6} hold, that is, in other words, conditions $s\zi t\in\txy$ are strongly similar on $N\ti\Ga$ in the sense of Definition~\ref{coh2}.
\parf{Proof of the definability lemma, part 2} \las{pdl2}
We continue the proof of Theorem~\ref{dl1}. Our intermediate result, and the starting point of the final part of the proof, is the contrary assumption \ref*, together with the additional assumption that the conditions $s\zi t\in\txy$ in \ref* are strongly similar on $N\ti\Ga$. To complete the proof of the theorem it suffices to derive a contradiction. This will be obtained by means of Theorem~\ref{main}.
In accordance with Theorem~\ref{main}, let $B=\bas s=\bas t$, $\Da=\abs s=\abs t$, and let transformations $\pi\zi \sw{a^u}{a^t}\zi\vpi$ and $\tau=\vpi\circ \sw{a^u}{a^t} \circ \pi$, and conditions $v,u\in\dT$ satisfy $\bas u=\bas v=B$, $\abs u=\abs v=\Da$, and \ben \renu \itlm{nai1}\msur $\pi\in\pif$, $\pi\og n$ is the identity for all\/ $n\in N$, $u=\pi\ap s$, $u$ is strongly similar to\/ $t$ on\/ $N\ti\Ga$, and moreover\/ $\pi\ap h^s=h^u=h^t$, and\/ $a^u\pes\Ga=a^t\pes\Ga\,;$
\itlm{nai2}\msur $v=\sw{a^u}{a^t}\ap u$, $v$ is strongly similar to\/ $t$ on\/ $N\ti\Ga$, $h^v=h^u$, $a^v=a^t\,;$
\itlm{nai3}\msur $\vpi\in\Phd_{a^v}$, $\abs\vpi=\Da$, $\vpi_\ga\og n=\pu$ for all\/ $n\in B$ and\/ $\ga\in\Da$, and\/ $t=\vpi\ap v\,;$
\itlm{nai4}\msur
$\tau= \vpi\circ \sw{a^u}{a^t}\circ \pi$ is an order-preserving bijection from\/ $\tle{s}$ onto\/ $\tle{t}\,;$
\itlm{nai5} any condition\/ $s'\in\tle{s}$ is similar to\/ $t'=\tau\ap s'$ on\/ $N\ti \Ga$. \een (= items \ref{mai1} --- \ref{mai5} of Theorem~\ref{main}).
Consider a set $G\sq\dT$ generic over $\bL$ and containing $s$. We assume that $s$ is the largest (= weakest) condition in $G$. Then, by \ref{nai5}, $H=\ens{\tau\ap s'}{s'\in G}\sq\dT$ is generic over $\bL$ as well, and $\bL[H]=\bL[G]$. Moreover $t=\tau\ap s\in H$. Therefore it follows from \ref* that \ben \fenu \atc1 \itlm+\msur $\vt(z,\vfg,\vyg,\vxng,\vyng)$ is true in $\bL[G]$, but\\[1ex] $\vt(z,\vfh,\vyh,\vxnh,\vynh)$ is false in $\bL[H]=\bL[G]$. \een Our strategy to derive a contradiction will be to show that the parameters in the formulas are pairwise equal, and hence one and the same formula is simultaneously true and false in one and the same class. This is the content of the following lemma.
\ble \label{peq} \ben \renu \itlm{peq1}\msur $\vyng=\vynh$, that is, if\/ $\ga\in\Ga$ then\/ $\yg_\ga=\yh_\ga\;;$
\itlm{peq2}\msur $\vxng=\vxnh$, that is, if\/ $n\in N$ and\/ $\xi\in\aal n$ then\/ $\fg_\xi=\fh_\xi\;;$
\itlm{peq3}\msur $\vfg=\vfh$, that is, $\Fg\og n=\Fh\og n$ for all\/ $n\in\om\;;$
\itlm{peq4}\msur $\vyg=\vyh$, that is, $\Yg_\ga=\Yh_\ga$ for all\/ $\ga<\la$. \een \ele \bpf \ref{peq1} If $\ga\in\Ga$ then by definition $\yg_\ga=\bigcup_{s'\in G} q^{s'}_\ga$ and $\yh_\ga=\bigcup_{t'\in H} q^{t'}_\ga= \bigcup_{s'\in G} q^{(\tau\ap s')}_\ga$. Yet if $s'\in G$ then condition $t'=\tau\ap s'$ satisfies $q^{t'}_\ga=q^{s'}_\ga$ by \ref{nai5}.
\ref{peq2} A similar argument. Suppose that $n\in N$ and $\xi\in\aal n$. By definition, $\fg_\xi=\bigcup_{s'\in G}h^{s'}_\xi$ and $\fh_\xi=\bigcup_{t'\in H}h^{t'}_\xi= \bigcup_{s'\in G}h^{(\tau\ap s')}_\xi$. However if $s'\in G$ then condition $t'=\tau\ap s'$ satisfies $h^{t'}_\xi=h^{s'}_\xi$ still by \ref{nai5}.
\ref{peq3} By definition, $\Fg\og n$ and $\Fh\og n$ are the \dd{(\pif,\Psd)}hulls of resp.\ $$ \fg\og n=\sis{\fg_\xi}{\xi\in\aal n} \quad\text{and}\quad \fh\og n=\sis{\fh_\xi}{\xi\in\aal n}\,. $$ Thus it remains to prove that $\fh\og n$ belongs to the \dd{(\pif,\Psd)}hull of $\fg\og n$, and vice versa.
Let $\psi=\ftp\vpi{a^v}$ (a rotation in $\Psd$, see Section~\ref{sym3}). By definition, if $s'\in G$ and $t'=\tau\ap s'$, then the subconditions $h^{s'}$ and $h^{t'}$ satisfy $h^{t'}=\psi\ap(\pi\ap h^{s'})$. (The middle transformation $\sw{a^u}{a^t}$ does not act on the \dd hcomponents). It easily follows that $\fh\og n=\psi\ap(\pi\ap\fg\og n)$, as required.
\ref{peq4} Note that the full sequences $\vy[G]=\sis{\yg_\ga}{\ga<\la}$ and $\vy[H]=\sis{\yh_\ga}{\ga<\la}$ satisfy $\vy[H]=\vpi\ap(\sw{a^u}{a^t}\ap\vy[G])$. (Permutation $\pi$ does not act on wide subconditions.) That is, the construction of $\vy[H]$ from $\vy[G]$ goes in two steps.\vom
{\it Step 1\/}: we define $\vz=\sis{\zzz_\ga}{\ga<\la}$ by $\vz=\sw{a^u}{a^t}\ap\vy[G]$. Thus by definition\vtm
1) $\zzz_\ga\in2^{\omal}$ for all $\ga$,\vtm
2) if $n\nin B$ or $\ga\nin\Da$ then $\zzz_\ga\og n=\yg_\ga\og n$, and\vtm
3) \psur if $n\in B$ and $\ga\in\Da$ then $\zzz_\ga\og n=\yg_{\vt}\og n$, where $\vt=\swt{a^u}{a^t}n(\ga)$.\vtm
\noi Thus the difference between $\vz$ and $\vy[G]$ is located within the finite domain $B\ti\Da$. Moreover, as in Lemma~\ref{sk2}\ref{sk2iii}, we have \ben \fenu \atc2 \itla{ddag} $\ens{\zzz_\ga\og n}{\ga<\la}=\ens{\yg_\ga\og n}{\ga<\la}$ for every $n$. \een
{\it Step 2\/}: we define $\vy[H]=\vpi\ap\vz$. Thus by definition\vtm
4) if $\ga\in\Da$ then directly $\yh_\ga=\vpi\ap\zzz_\ga$, that is, $\yh_\ga\og n=\vpi\og n\ap\zzz_\ga\og n$, $\kaz n$;\vtm
5) if $\ga\nin\Da$ and $\sus\da\in \Da\,(\ag(n,\ga)=\ag(n,\da))$, then $\yh_\ga\og n=\vpi\og n\ap\zzz_\ga\og n$;\vtm
6) if $\ga\nin\Da$ but $\neg\:\sus\da\in \Da\,(\ag(n,\ga)=\ag(n,\da))$, then $\yh_\ga\og n=\zzz_\ga\og n$.\vtm
\noi Now it immediately follows from \ref{ddag} that $\Yg\og n=\Yh\og n$ for every $n$: both sets are equal to the \dd{\dD\og n}hull of one and the same set mentioned in \ref{ddag}.
We are ready to prove that $\Yg_\ga=\Yh_\ga$ for every $\ga<\la$.
We start with a couple of definitions. If $y,y'\in 2^{\aal n}$ and there exists a set $d\in\Yg\og n=\Yh\og n$ such that $y'=d\ap y$ then write $y\eqn n y'$. If $y,y'\in 2^{\omal}$ and there exists a number $n_0$ such that $y'\og n\eqn n y\og n$ for all $n<n_0$ and $y'\og n=y\og n$ for all $n\ge n_0$ then write $y\eqa y'$.
Then by $(\star)$ in Section~\ref{E} we have:
$$ \left. \bay{rcl} \Yg_\ga &=& \ens{z\in2^{\omal}}{\sus y\in\dD\ap\yg_\ga\,(z\eqa y)};\\[1ex]
\Yh_\ga &=& \ens{z\in2^{\omal}}{\sus y\in\dD\ap\yh_\ga\,(z\eqa y)}; \eay \right\} \eqno(\ast\ast) $$ and hence to prove $\Yg_\ga=\Yh_\ga$ it suffices to check that $\yh_\ga\in\Yg_\ga$ and $\yg_\ga\in\Yh_\ga$.\vom
{\it Case 1\/}: $\ga\in\Da$.
It follows from 2) and 3) that $\zzz_\ga\eqa \yg_\ga$ and hence $\yh_\ga\eqa y$ by 4), where $y=\vpi\ap\yg_\ga$. Thus $\yh_\ga\in\Yg_\ga$ by $(\ast\ast)$, the first line. On the other hand, $\zzz_\ga=\vpi^{-1}\ap \yh_\ga$ still by 4), so that $\yg_\ga\in\Yh_\ga$ by $(\ast\ast)$, the second line.
{\it Case 2\/}: $\ga\nin\Da$. Note that for a given $\ga$ 5) holds only for finitely many numbers $n$ by Lemma~\ref{str}\ref{str3}, so 6) holds for almost all $n$. Therefore $\yh_\ga\eqa\zzz_\ga$. But $\zzz_\ga=\yg_\ga$ in this case by 2). Thus $\yh_\ga\in\Yg_\ga$ by $(\ast\ast)$, the first line (with $y=\yg_\ga$). And $\yg_\ga\in\Yh_\ga$ holds by a similar argument.
\epF{Lemma}
\qeD{Theorem~\ref{dl1}}
\parf{The structure of the extension} \las{F}
Here we accomplish the proof of Theorem~\ref{M}.
\bbl \lam{bla2} We fix a set $G\sq \dT$, \dd\dT generic over $\bL$, during the course of this section. \ebl
It will be shown that the symmetric subextension $\Ls[G]=\bL(\wg)$ (see Section~\ref{E}) satisfies Theorem~\ref{M}. The following is a key technical claim.
\bte \lam{alen} Suppose that\/
$\nu<\om$, and\/ $Z\in\Ls[G]\zt Z\sq[0,\aleph_{\nu+1})$. Then\/ $Z\in\bL[\ens{\fg\og n}{n\le \nu}]$. \ete \bpf It follows from Lemma~\ref{redp} and Theorem~\ref{dl1} that there exist finite sets\/ $N\sq\om$ and\/ $\Ga\sq\la$ such that\/ $Z\in\bL[\ens{\fg\og n}{n\in N},\ens{\yg_\ga}{\ga\in\Ga}]$. We can assume that \ben \aenu \itlm{alen1}\msur $N=\ans{0,1,2,\dots,\ka}$ for some $\ka<\om$, $\ka\ge\nu$;
\itlm{alen2} if $\ga\ne \da$ belong to $\Ga$ and $n<\om$ satisfies $\ag(n,\ga)=\ag(n,\da)$ then $n\le\ka$. \een (Lemma~\ref{str}\ref{str3} is used to justify \ref{alen2}.) Define, in $\bL$, $$ \dT[N,\Ga]=\ens{s\in\dT} {\bas s=N\land \abs s=\Ga\land a^s=\ag\res{(N\ti\Ga)}}. $$
\ble \lam{gg} The set\/ $G[N,\Ga]=G\cap\dT[N,\Ga]$ is\/ \dd{\dT[N,\Ga]}generic over\/ $\bL$ and\/ $Z\in\bL[G[N,\Ga]]$. \ele \bpf[Lemma] Suppose that $t\in\dT$, $N\sq\bas t$, $\Ga\sq\abs t$. Define the \rit{projection} $s=t[N,\Ga]\in\dT[N,\Ga]$ so that $q^s=q^t\res\Ga$, $a^s=a^t\res{(N\ti\Ga)}$, and $h^s$ is the restriction of $h^t$ to the set $\abs{h^t}\cap\bigcup_{n\le\ka}\aal n$. (It is not asserted that $t\le s$.) Given a condition $s'\in \dT[N,\Ga]$, $s'\le s$, we have to accordingly find a condition $t'\le t$ such that $t'[N,\Ga]=s'$.
Define $t'$ as follows. First of all, $\bas t'=\bas t$, $\abs{t'}=\abs t$, $a^{t'}=a^t$.
Put $h^{t'}\og n=h^{s'}\og n$ for $n\in N$ but $h^{t'}\og n=h^{t}\og n$ for $n\in \bas t\bez N$.
If $n\nin N$ then put $q^{t'}_\ga\og n= q^{t}_\ga\og n$ for all $\ga\in\abs{t'}=\abs t$. If $n\in N$ and $\ga\in\abs{t'}$ then put $q^{t'}_\ga\og n= h^{t'}_\xi= h^{s'}_\xi$, where $\xi=a^{t'}(n,\ga)$. \epF{Lemma}
In continuation of the proof of the theorem, let us analyse $\dT[N,\Ga]$ as a forcing notion. It looks like the product $\prod_{n=0}^\ka\dH\og n\,\ti\,\dP^\Ga$: indeed, if $s\in\dT[N,\Ga]$ then $h^s$ can be seen as an element of $\prod_{n=0}^\ka\dH\og n$, $q^s$ can be seen as an element of $\dP^\Ga$ (the product of $\card\Ga$ copies of $\dP$; $\card\Ga<\om$), while $a^s=\ag\res{(N\ti\Ga)}$ is a constant. However if $n\in N$ and $\ga\in\Ga$ then $q^s_\ga\og n=h^s_{a^s(n,\ga)}$, hence in fact $\dT[N,\Ga]$ can be identified with $$ \textstyle \prod_{n=0}^\ka\dH\og n\ti(\prod_{n=\ka+1}^\iy\dP\og n)^\Ga= \prod_{n=0}^\ka\dH\og n\ti\prod_{n=\ka+1}^\iy(\dP\og n^\Ga)\,. \eqno(3) $$ Now the sets $\dP\og n$ and $\dH\og n$ as forcing notions are \dd{\aleph_n^+}closed, meaning that any decreasing sequence of length $\le\aleph_n$ has a lower bound in the same set. Therefore if we present $\dT[N,\Ga]$ as $$ \textstyle \prod_{n=0}^\nu\dH\og n\ti \prod_{n=\nu+1}^\ka\dH\og n\ti\prod_{n=\ka+1}^\iy(\dP\og n^\Ga)\,, \eqno(4) $$ then it becomes clear that the second and third subproducts are \dd{\aleph_{\nu+1}^+}closed forcing notions. Hence, by basic results of forcing theory, the set $Z\sq[0,\aleph_{\nu+1})$ belongs to the subextension corresponding to the first subproduct $\prod_{n=0}^\nu\dH\og n$. That is, $Z\in\bL[\ens{\fg\og n}{n\le \nu}]$, as required. \epf
\bco \lam{Fco1} If\/ $n<\om$ then it is true in\/ $\Ls[G]$ that\/ $\aleph_n$ remains a cardinal, the power set\/ $\cP(\aleph_n)$ is wellorderable, and\/ $\card(\cP(\aleph_n))=\aleph_{n+1}$.\qed \eco
Yet cardinal preservation holds for all cardinals!
\bco \lam{Fle1} Any cardinal in\/ $\bL$ remains a cardinal in\/ $\Ls[G]$. \eco \bpf Indeed we have established (see the proof of Theorem~\ref{alen}) that any set $Z\in\Ls G$, $Z\sq\bL$, belongs to a generic extension of $\bL$ via a forcing as in (3) in the proof of Theorem~\ref{alen}. However any such forcing is cardinal-preserving by a simple cardinality argument. \epf
To complete the proof of Theorem~\ref{M}, it remains to check that the symmetric subextension $\Ls[G]$ contains a surjection $\sg:2^{\omal}\onto\la$. We define $\sg$ in $\Ls[G]$ as follows. If $\ga<\la$ and $z\in\Yg_\ga$ then put $\sg(z)=\ga$. (The definition is consistent by Lemma~\ref{cons}.) If $z\in2^{\omal}$ does not belong to $\bigcup_{\ga<\la}\Yg_\ga$ then $\sg(z)=0$. As any set $\Yg_\ga$ definitely contains $\yg_\ga$, $\sg$ is a surjection onto $\la$, as required.\vtm
\qeD{Theorem~\ref{M}}
\end{document}
\begin{document}
\title[Group $\GreenH$-classes of Special Inverse Monoids]{Group $\GreenH$-classes of Finitely Presented Special Inverse Monoids}
\keywords{special inverse monoid, units, group $\GreenH$-classes, maximal subgroup} \subjclass[2010]{20F05; 20M18, 20M05} \maketitle
\begin{center}
ROBERT D. GRAY\footnote{School of Mathematics, University of East Anglia, Norwich NR4 7TJ, England. Email \texttt{[email protected]}.} \ and \ MARK KAMBITES\footnote{Department of Mathematics, University of Manchester, Manchester M13 9PL, England. Email \texttt{[email protected]}.
This research was supported by the EPSRC-funded projects EP/N033353/1 `Special inverse monoids: subgroups, structure, geometry, rewriting systems and the word problem' and EP/V032003/1 `Algorithmic, topological and geometric aspects of infinite groups, monoids and inverse semigroups', and by a London Mathematical Society Research Reboot Grant. } \end{center}
\begin{abstract} We study the group $\GreenH$-classes (also known as maximal subgroups, in the sense of semigroup theory) of finitely presented special inverse monoids. We show that the group $\GreenH$-classes which can arise in such monoids are exactly the recursively presented groups, and moreover every such group $\GreenH$-class can also arise in the $E$-unitary case. We also prove that the possible groups of units are exactly the finitely generated recursively presented groups; this improves upon a result of, and answers a question of, the first author and Ru\v{s}kuc. These results give the first significant insight into the group $\GreenH$-classes of such monoids beyond the group of units, and the results together demonstrate that (perhaps surprisingly) it is possible for the subgroup structure to have a complexity which significantly exceeds that of the group of units. We also observe that a finitely presented special inverse monoid (even an $E$-unitary one) may have infinitely many pairwise non-isomorphic group $\GreenH$-classes; this contrasts sharply with the case of (non-inverse) special monoids, where Malheiro showed that all idempotents lie in the $\GreenD$-class of $1$, from which it follows that all group $\GreenH$-classes are isomorphic. \end{abstract}
\section{Introduction}\label{sec_intro} This paper is concerned with inverse monoids that admit presentations in which each defining relation has the form $w = 1$. The study of these monoids, which are termed \emph{special inverse monoids} in the literature, is motivated both intrinsically by a beautiful geometric theory, and extrinsically by connections to other areas of semigroup theory and geometric group theory, including most notably possible applications to the \textit{one-relator word problem} (the question of whether the word problem for one-relator monoids is decidable). The one-relator word problem has been a natural open problem since the resolution of the corresponding problem for groups by Magnus in 1932; it has resisted extensive study (most notably by Adjan and co-authors) and is widely regarded as one of the hardest and most important open problems in semigroup theory.\footnote{ For the avoidance of confusion, we note that neither one-relator groups nor one-relator (special) inverse monoids are typically examples of one-relator monoids, since interpreting a group or inverse monoid presentation as a monoid presentation will usually give a different monoid. Indeed, non-trivial free groups require relations to define them as monoids (in fact it can always be done with just a single relation \cite{Perrin1984}), while non-trivial free inverse monoids are not even finitely presented as monoids \cite{Schein1975}. A notable exception is the \textit{bicyclic monoid}, which is given by the presentation $\langle p, q \mid pq = 1 \rangle$ as both a monoid and an inverse monoid. In contrast, it transpires that one-relator groups \textit{are} examples of one-relator special inverse monoids. (This can be deduced from, for example,
Lemma~\ref{lemma_removeidempotents} below).}
The study of special inverse monoids was initiated by Margolis, Meakin and Stephen \cite{MMS87}, motivated by successes in understanding special (non-inverse) monoids, and the prospect of applying exciting geometric methods (such as the Scheiblich/Munn description of \textit{free inverse monoids} and \textit{Stephen's folding procedure} - see below and the survey \cite{Meakin:2007zt} for details) which had recently been developed for the study of inverse monoids. They showed that Stephen's procedure specialises to give a particularly beautiful geometric theory of Sch\"utzenberger graphs in the case of special inverse monoids, and could be used to extend some of the results about the special (non-inverse) case. The case of special \textit{one-relator} inverse monoids $\mathrm{Inv}\langle A \mid w=1 \rangle$ received particular attention, culminating in a celebrated proof of Ivanov, Margolis and Meakin \cite{Ivanov:2001kl} that the one-relator word problem (for general monoids) reduces to the word problem for certain one-relator \textit{special inverse} monoids. Since special inverse monoids have much more evident geometric structure than general monoids, this opened up the possibility that the kind of methods employed in geometric group theory could be used to understand the word problem for one-relator special inverse monoids, and hence the original one-relator word problem for monoids. This motivated extensive further study of this area, with initial results including positive solutions to the word problem in certain important cases \cite{Hermiller:2010bs, Margolis:2005il, Meakin:2007zt}, but the first author \cite{Gray2020} eventually showing that, surprisingly, the word problem for special one-relator inverse monoids in general is undecidable. However, the case shown undecidable is not one arising from the reduction of the one-relator word problem for monoids, so the latter remains open and a better understanding of special inverse monoids may yet produce a solution.
In order to progress further, it seems that a deeper and more systematic understanding of special inverse monoids is required. In particular, it is necessary to understand the subgroup structure of such monoids. Associated with every idempotent $e$ of a monoid is a group $H_e$ called the \textit{group $\GreenH$-class} of $e$, which is the largest subgroup of the monoid (with respect to containment) containing $e$. For this reason, these are also called the maximal subgroups of the monoid, and every subgroup of the monoid is contained in a group $\GreenH$-class. The group $\GreenH$-class of the identity element is just the group of units of the monoid and the group $\GreenH$-classes of distinct idempotents are disjoint. Hence the main task in investigating the subgroup structure of a monoid is to understand its group $\GreenH$-classes.
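To illustrate the notion with a familiar (non-inverse) example: in the full transformation monoid $T_n$ of all maps from an $n$-element set to itself, the group of units is the symmetric group $S_n$, while the group $\GreenH$-class of an idempotent of rank $r$ is isomorphic to $S_r$; so even in this classical setting the group $\GreenH$-classes of a monoid need not all be isomorphic to its group of units.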
An obvious question is to what extent the subgroup structure resembles that found in the well-established theory of special (non-inverse) monoids, where it is known that all group $\GreenH$-classes are isomorphic to the group of units \cite{Malheiro2005} and that one-relator examples have group of units (and therefore group $\GreenH$-classes) which are one-relator groups. (This fact has in fact played a key role in Adjan's resolution of some cases of the one-relator word problem \cite{Adjan:1966bh}.) The first author and Ru\v{s}kuc \cite{GrayRuskucUnits} recently investigated the (left, right and two-sided) units, showing that in general the behaviour is very different from the non-inverse case: unlike in the non-inverse case, there is a one-relator special inverse monoid whose group of units is not a one-relator group, and there is a finitely presented special inverse monoid whose group of units is not finitely presented. One aim of this paper, realised in Section~\ref{sec_GroupsOfUnits} below, is to answer a question they posed \cite[Question 8.6]{GrayRuskucUnits} by giving a complete characterisation of the possible groups of units in finitely presented special inverse monoids: it transpires that they are exactly the finitely generated, recursively presented groups (Theorem~\ref{thm_possibleunits}).
The other main aim of this paper, in Section~\ref{sec_GroupHClasses}, is to initiate the study of group $\GreenH$-classes more generally. In sharp contrast to the case of special (non-inverse) monoids, and contrary to our own prior expectations and we believe to those of most experts in the field, we show that these can differ wildly from the group of units. We introduce a powerful construction (Theorem~\ref{thm_stabiliser}) which allows us to exactly characterise the possible group $\GreenH$-classes in finitely presented special inverse monoids: these are exactly the (not necessarily finitely generated) recursively presented groups (Corollary~\ref{cor_possiblemaxsubgroups}). The same construction, combined with an old result of Higman \cite[Theorem~7.3]{Lyndon:2001lh}, also allows us to produce an example of a single finitely presented special inverse monoid in which every finite group arises as an $\GreenH$-class (Corollary~\ref{cor_everyfinitegroup}).
The main theme of this paper is that the subgroups of special inverse monoids are potentially far wilder, and the structure of these monoids therefore more complex, than expected, but this does not mean there is no hope of understanding them. Indeed, in order to establish these wild examples we develop a new geometric approach to maximal subgroups, exploiting the fact \cite[Theorem~3.5]{Stephen:1990ss} that they are isomorphic to the automorphism groups of the \textit{Sch\"utzenberger graphs} of the monoid. This approach, which contrasts with the more algebraic/combinatorial approach of \cite{Gray2020} and \cite{GrayRuskucUnits}, also offers new ways to obtain a positive understanding of group $\GreenH$-classes in particular cases and important sub-classes, which we will develop in future work.
\section{Preliminaries}\label{sec_preliminaries} In this section we fix notation and briefly recall some relevant background material from the geometric and combinatorial study of inverse monoids. For additional background we refer the reader to \cite{Meakin:2007zt} for combinatorial inverse semigroup theory and \cite{Lyndon:2001lh} for combinatorial group theory.
\subsection*{Graphs} Throughout this paper we use the word \emph{graph} to mean a (possibly infinite) directed graph, possibly with loops and multiple edges, in which each edge is labelled by a symbol from some alphabet. The graph is called \emph{deterministic} if no two edges with the same label share a start vertex or an end vertex.
A \emph{path} in a graph is a sequence of edges $e_1, \ldots, e_n$ such that the end vertex of $e_{i}$ coincides with the start vertex of $e_{i+1}$ for $1 \leq i \leq n-1$. The path is called \emph{closed} if its start vertex (the start vertex of $e_1$) coincides with its end vertex (the end vertex of $e_n$). The \emph{label} of a path is the word which is the concatenation in order of the labels of the edges. A path is called \emph{simple} if no two of its edges start at the same vertex or end at the same vertex. A graph $\Gamma$ is a \emph{subgraph} of a graph $\Omega$ if the vertex and edge sets of $\Gamma$ are subsets of the vertex and edge sets respectively of $\Omega$; it is said to be an \emph{induced subgraph} if in addition it contains every edge of $\Omega$ whose start and end vertices are vertices of $\Gamma$. A \emph{morphism} $\phi: \Gamma \rightarrow \Delta$ of graphs consists of a map from the vertex set of $\Gamma$ to the vertex set of $\Delta$ and a map from the edge set of $\Gamma$ to the edge set of $\Delta$ which for each edge preserves the label and respects the start and end vertices.
\subsection*{Inverse monoids and Sch\"utzenberger graphs} An \emph{inverse monoid} $M$ is a monoid such that for every $m \in M$ there is a unique element $m^{-1} \in M$, called the \emph{inverse} of $m$, that satisfies $mm^{-1}m=m$ and $m^{-1} m m^{-1} = m^{-1}$. The map $m \mapsto m^{-1}$ satisfies $(m^{-1})^{-1} = m$ and $(mn)^{-1} = n^{-1} m^{-1}$. If $A$ is a subset of $M$ we write $A^{-1}$ for the set of inverses of elements in $A$. For brevity, we shall also sometimes use the notation $m'$ for the inverse of $m$, especially when working with inverse monoid presentations. We say an inverse monoid is \emph{generated} by a subset $A$ if it is generated by $A$ under the multiplication and inversion operations, or equivalently, generated by $A \cup A^{-1}$ under multiplication alone. If $M$ is an inverse monoid generated by a set $A$ then the \emph{Cayley graph} of $M$ with respect to $A$ has vertex set $M$ and a directed edge from $m$ to $mx$ labelled by $x$ for all $m \in M$ and $x \in A \cup A^{-1}$. The \emph{Sch\"utzenberger graphs} of $M$ with respect to the generating set $A$ are the strongly connected components of the Cayley graph, where two vertices $u, v$ belong to the same strongly connected component if there is a path from $u$ to $v$, and a path from $v$ back to $u$. So the Sch\"utzenberger graph $S\Gamma(m)$ of $m \in M$ is the subgraph of the Cayley graph induced on the set of vertices in the strongly connected component containing $m$. Note that if $M$ is a group then it has just one Sch\"utzenberger graph, which is the entire Cayley graph.
Recall that in an inverse monoid $M$ two elements $m$ and $n$ are $\GreenR$-related if and only if they generate the same principal right ideal which is equivalent to saying that $mm^{-1} = nn^{-1}$. Dually $m$ and $n$ are $\GreenL$-related if $m^{-1}m = n^{-1}n$ and are $\GreenH$-related if they are both $\GreenR$- and $\GreenL$-related. If $m \in M$ we write $S \Gamma(m)$ for the Sch\"utzenberger graph of $m$, and $H_m$, $R_m$ etc for the equivalence classes of $M$ under Green's relations. Note that the vertex set of $S \Gamma(m)$ is exactly $R_m$. We define the \textit{root} of the graph $S \Gamma(m)$ to be the vertex $m m^{-1}$, that is, the unique vertex corresponding to an idempotent element of $M$.
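For example, in the bicyclic monoid $\mathrm{Inv}\langle b \mid bb' = 1 \rangle$ the $\GreenR$-class of $1$ consists of the right invertible elements $1, b, b^2, \ldots$, and the Sch\"utzenberger graph $S\Gamma(1)$ is a one-way infinite line with a $b$-labelled edge from $b^n$ to $b^{n+1}$ for each $n \geq 0$, rooted at the vertex $1$.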
If $w \in (A^{\pm 1})^*$ is a word over the generating set and its inverses viewed as a formal alphabet then we may use $w$ in place of $m$ in the above notation, to be interpreted as the element of $M$ represented by $w$. It may be shown that if $s \GreenR t$ and $su=t$ then $tu^{-1}=s$. It follows that if $mx \GreenR m$ where $m \in M$ and $x \in A^{\pm 1}$ then $mxx^{-1} =m$ and hence within the Sch\"utzenberger graphs edges come in inverse pairs. For simplicity we will often draw and speak of Sch\"utzenberger graphs of an $A$-generated inverse monoid as only having edges labelled by letters from $A$, leaving it implicit that there are always edges labelled by their inverses in the reverse direction; we shall sometimes speak of traversing edges ``backwards'', by which we formally mean traversing the corresponding inverse edge.
\subsection*{Inverse monoid presentations and the maximal group image} Let $A$ be a (not necessarily finite) alphabet. The \textit{free inverse monoid} $\mathrm{Inv} \langle A \rangle$ is the unique (up to isomorphism) inverse monoid generated by $A$ with the property that every map from $A$ to an inverse monoid extends to a morphism from $\mathrm{Inv} \langle A \rangle$ to that inverse monoid. More concretely, if we let $A^{-1}$ be a set of formal inverses for the generators in $A$ and write $A^{\pm 1}$ for $A \cup A^{-1}$, it is the quotient of the free monoid $(A^{\pm 1})^*$ by the \textit{Wagner congruence}, which is the congruence generated by the relations $ww^{-1}w = w$ and $uu^{-1}ww^{-1} = ww^{-1}uu^{-1}$ where $u, w \in (A^{\pm 1})^*$ and where by definition $(a_1^{\epsilon_1} \ldots a_k^{\epsilon_k})^{-1} = a_k^{-\epsilon_k} \ldots a_1^{-\epsilon_1}$.
An elegant geometric description for the free inverse monoid was given by Munn \cite{Munn74}, based on earlier work of Scheiblich \cite{Scheiblich73}; we shall not make explicit use of this, but the ideas they developed are central to geometric inverse semigroup theory and implicitly used in much of what we do. A word $w$ over $A^{\pm 1}$ is called a \textit{fundamental idempotent} if it represents an idempotent element of the free inverse monoid on $A$. This is equivalent to saying that $w$ represents an idempotent element in every $A$-generated inverse monoid, and also to saying that $w$ represents the identity in the free group on $A$.
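For example, with $A = \{a, b\}$ the word $a a^{-1} b b^{-1}$ freely reduces to the empty word, so it is a fundamental idempotent, whereas $a b a^{-1} b^{-1}$ is not, since it represents a non-trivial element of the free group on $\{a, b\}$.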
The inverse monoid defined by the presentation $\mathrm{Inv}\langle A \mid R \rangle$, where $R \subseteq (A^{\pm 1})^* \times (A^{\pm 1})^*$,
is the quotient of the free inverse monoid $\mathrm{Inv} \langle A \rangle$ by the congruence generated by $R$. An inverse monoid presentation is called \textit{special} if all relations have the form $w=1$, and an inverse monoid is called special if it admits a special inverse monoid presentation. A well-known fact about special inverse monoid presentations is the following result, which essentially says that fundamental idempotent relators can be incorporated into other relators. \begin{lemma}[see for example \cite{Gray2020}, Lemma 3.3]\label{lemma_removeidempotents} Let $A$ be an alphabet and let $e, r_1, \dots, r_m \in (A^{\pm 1})^*$ with $e$ a fundamental idempotent. Then $$\mathrm{Inv}\langle A \mid e = r_1 = r_2 = \dots = r_m = 1 \rangle \ = \ \mathrm{Inv} \langle A \mid e r_1 = r_2 = \dots = r_m = 1 \rangle.$$ \end{lemma}
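For instance, with $A = \{a, b\}$ the word $aa'$ is a fundamental idempotent, and the lemma gives $\mathrm{Inv}\langle a, b \mid aa' = bab' = 1 \rangle = \mathrm{Inv}\langle a, b \mid aa'bab' = 1 \rangle$.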
We use $\mathrm{Gp}\langle A \mid R \rangle$ to denote the group defined by the presentation with generators $A$ and defining relators $R$, and we use $\mathrm{Gp}\langle X \rangle$ to denote the free group on the set $X$. Let $M = \mathrm{Inv}\langle A \mid R \rangle$, let $G = \mathrm{Gp}\langle A \mid R \rangle$ be its \emph{maximal group image}, and let $\sigma:M \rightarrow G$ denote the canonical surjective homomorphism from $M$ to $G$. Let $\Gamma$ be the Cayley graph of $G$ with respect to $A$. This defines a map $\overline{\sigma}$ from the disjoint union of the Sch\"utzenberger graphs of $M$ to the Cayley graph $\Gamma$ of $G$, which maps vertices using $\sigma$ and maps each edge $m \xrightarrow{a} ma$ in a Sch\"utzenberger graph to the unique edge in $\Gamma$ from $\sigma(m)$ to $\sigma(ma)$ labelled by $a$. This map $\overline{\sigma}$ is clearly a morphism of labelled graphs. The inverse monoid $M$ is called $E$-unitary if $\sigma^{-1}(1_G)$ is equal to the set of idempotents of $M$. It is known (see \cite[Lemma~1.8]{Stephen93}) that this is equivalent to the map $\overline{\sigma}$ being injective on every Sch\"utzenberger graph of $M$.
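For example, the bicyclic monoid $\mathrm{Inv}\langle b \mid bb' = 1 \rangle$ from the example above is $E$-unitary: its maximal group image is $\mathbb{Z}$, every element has the form $(b')^m b^n$, and such an element maps to $0$ precisely when $m = n$, that is, precisely when it is idempotent.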
\subsection*{Stephen's procedure}
We now describe a procedure due to Stephen \cite{Stephen:1990ss} for iteratively approximating Sch\"utzenberger graphs of inverse monoids. Since we will be working only with special inverse monoids in this paper we will only describe the procedure in this case, even though it applies more generally.
Let $M = \mathrm{Inv}\langle A \mid R \rangle$ be a special inverse monoid and let $\Gamma$ be a graph labelled over $A \cup A^{-1}$ such that the edges in $\Gamma$ occur in inverse pairs. We define two operations: \begin{enumerate} \item $P$-expansion: For every vertex $v \in V(\Gamma)$ and every $r \in R$, if there is no closed path at $v$ labelled by $r$ then we attach a simple closed path at $v$ labelled by $r$ such that all the internal vertices of this closed path are disjoint from $\Gamma$. (For every new edge we add we also add the corresponding inverse edge so that all the edges still occur in inverse pairs.) \item Edge folding: if there are edges $e$ and $f$ with the same label and the same start or end vertex then we identify these edges (which also identifies their start vertices and identifies their end vertices).
\end{enumerate} It follows from \cite{Stephen:1990ss} that starting with any such graph $\Gamma$ and special inverse monoid presentation, the set of all graphs obtained by applying successive $P$-expansions and edge foldings forms a directed system in the category of $(A \cup A^{-1})$-labelled graphs. We denote the limit of this system by $\mathrm{Exp}(\Gamma)$. Given any word $w \in (A^{\pm 1})^*$ we use $L_w$ to denote the straight line graph labelled by the word $w$.
So $L_w$ has vertex set $\{0, 1, \ldots, |w|\}$, one pair of inverse edges between $i$ and $i+1$ for all $0 \leq i \leq |w|-1$, and the label of the unique path of length $|w|$ from the vertex $0$ to the vertex $|w|$ is equal to the word $w$. The following is then a theorem of Stephen \cite{Stephen:1990ss} specialised to the case of special inverse monoids.
\begin{theorem}\label{thm_StephenThm} Let $M = \mathrm{Inv}\langle A \mid R \rangle$ be a special inverse monoid and let $w \in (A \cup A^{-1})^*$. Then the Sch\"utzenberger graph $S\Gamma(w)$ is isomorphic to the graph $\mathrm{Exp}(L_w)$ obtained by Stephen's procedure starting with the straight line graph $L_w$ labelled by $w$.
\end{theorem}
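To make the two operations concrete, we include a small computational sketch in Python (the data structures, function names and the bound on the number of rounds are our own illustrative choices here and are not part of the formal development). Since $S\Gamma(w)$ may be infinite, the sketch computes only a finite approximation, namely one of the graphs occurring in the directed system described above; attaching a redundant closed path during an expansion step is harmless, since it simply folds onto the existing one.
\begin{verbatim}
# Letters are strings; the formal inverse of "a" is written "a'".
def inv(letter):
    return letter[:-1] if letter.endswith("'") else letter + "'"

def line_graph(w):
    """The straight line graph L_w, as a set of triples (start, letter, end),
    stored together with the inverse edges."""
    edges = set()
    for i, letter in enumerate(w):
        edges.add((i, letter, i + 1))
        edges.add((i + 1, inv(letter), i))
    return edges

def read(edges, v, word):
    """Follow word from vertex v; return the end vertex, or None if stuck."""
    for letter in word:
        nxt = [t for (u, a, t) in edges if u == v and a == letter]
        if not nxt:
            return None
        v = nxt[0]  # unique once the graph has been folded
    return v

def expand(edges, v, r, fresh):
    """P-expansion: if r does not label a closed path at v, attach a new
    closed path at v labelled by r, with fresh interior vertices."""
    if read(edges, v, r) == v:
        return fresh
    path = [v] + [fresh + i for i in range(len(r) - 1)] + [v]
    for letter, a, b in zip(r, path, path[1:]):
        edges.add((a, letter, b))
        edges.add((b, inv(letter), a))
    return fresh + len(r) - 1

def fold(edges):
    """Edge folding: identify end vertices of equally labelled edges sharing
    a start vertex, until the graph is deterministic."""
    while True:
        pair = next(((x, y) for (u1, a1, x) in edges for (u2, a2, y) in edges
                     if u1 == u2 and a1 == a2 and x != y), None)
        if pair is None:
            return edges
        keep, drop = min(pair), max(pair)
        edges = {(keep if p == drop else p, a, keep if q == drop else q)
                 for (p, a, q) in edges}

def stephen_approximation(w, relators, rounds=3):
    """Finitely many rounds of expansion and folding applied to L_w."""
    edges, fresh = line_graph(w), len(w) + 1
    for _ in range(rounds):
        edges = fold(edges)
        for v in {u for (u, a, t) in edges} | {0}:
            for r in relators:
                fresh = expand(edges, v, r, fresh)
    return fold(edges)
\end{verbatim}
For instance, \texttt{stephen\_approximation(["a"], [["a", "a'"]])} approximates the Sch\"utzenberger graph of $a$ in $\mathrm{Inv}\langle a \mid aa' = 1 \rangle$ (the bicyclic monoid), returning an initial segment of the one-way infinite line described earlier in this section.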
\section{Geometry of Sch\"utzenberger Graphs}\label{sec_geometry}
In this section we establish some foundational results about the geometry of Sch\"utzenberger graphs in special inverse monoids, which will be used to establish our main theorems below, and are also likely to be of wider use in the future development of the subject.
\subsection*{Extending automorphisms}
We shall need the following technical result, which gives a sufficient condition for an automorphism of a subgraph of a Sch\"utzenberger graph to extend to an automorphism of the containing graph.
\begin{lemma}\label{lem_StephenAutInvariant} Let $M = \mathrm{Inv}\langle A \mid R \rangle$ be a special inverse monoid, let $e$ be a fundamental idempotent in $(A \cup A^{-1})^*$, and let $\Omega$ be any connected subgraph of $S\Gamma(e)$ such that $\Omega$ contains the vertex $e$ of $S\Gamma(e)$ and the word $e$ can be read from the vertex $e$ entirely within $\Omega$. Then every automorphism of $\Omega$ extends uniquely to an automorphism of $S\Gamma(e)$. \end{lemma} \begin{proof} Consider an automorphism $\theta$ of $\Omega$. Suppose that it maps the vertex $e$ to $ew$ where $w$ is a word labelling a path in $\Omega$ from $e$ to its image vertex under $\theta$. Such a path exists since $\Omega$ is connected. Now since the word $e$ can be read from the vertex $e$ of $\Omega$ and there is an automorphism of $\Omega$ sending $e$ to $ew$ it follows that $e$ can also be read from $ew$. Hence, it follows that $ewe$ is $\GreenR$-related to $e$, and the path labelled by the word $ewe$ can be read within $\Omega$ starting at the vertex $e$. The inverse of the automorphism $\theta$ maps $ew$ to $e$ and must map $e$ to $ew^{-1}$. Indeed, since there is a path in $\Omega$ labelled by $w$ from $e$ to $ew$ it follows that there is a path labelled by $w$ from $\theta^{-1}(e)$ to $\theta^{-1}(ew)=e$ and hence $\theta^{-1}(e) = ew^{-1}$. Since there is an automorphism of $\Omega$ sending $e$ to $ew^{-1}$, and since the word $e$ can be read within $\Omega$ at $e$, it follows that the word $e$ can be read within $\Omega$ starting at $ew^{-1}$. In particular $ew^{-1}e$ is $\GreenR$-related to $e$. But if $ew^{-1}e$ is $\GreenR$-related to $e$ then its inverse $ewe$ is $\GreenL$-related to $e$, and we conclude that $ewe$ is $\GreenH$-related to $e$. Also, since $e$ labels a closed path in the graph $\Omega$ starting and ending at the vertex $e$, and $\theta$ is an automorphism of $\Omega$ sending $e$ to $ew$ it follows that $e$ also labels a closed path in $\Omega$ starting and ending at the vertex $ew$. Hence $ew = ewe$ is $\GreenH$-related to $e$, in other words it belongs to the group $\GreenH$-class $H_e$. Now for any vertex in $\Omega$ since $\Omega$ is connected we can choose a word $u$ over $A \cup A^{-1}$ labelling a path from the vertex $e$ of $\Omega$ to that vertex. Since $\theta$ maps $e$ to $ew$ it follows that the vertex at the end of the path from $e$ labelled by $u$ must map to $ewu$. So the automorphism $\theta$ is given by the map $eu \mapsto ewu$. But $ew=ewe$ so this is the map $eu \mapsto (ew)eu$ where $ew \in H_e$. It is then a consequence of Green's Lemma \cite[Lemma~2.2.2]{Howie95} that left multiplication by $ew$ defines a bijection from $R_e$ to itself which induces an automorphism of the Sch\"utzenberger graph $S\Gamma(e)$. Hence the automorphism $\theta$ of $\Omega$ extends uniquely to an automorphism of $S\Gamma(e)$. Since $\theta$ was an arbitrary automorphism of $\Omega$ this completes the proof. \end{proof} An alternative way of seeing why the previous lemma is true is via Stephen's procedure: since the word $e$ can be read inside $\Omega$ from the vertex $e$ of $S\Gamma(e)$, and it is a connected subgraph of $S\Gamma(e)$, it follows that the Sch\"utzenberger graph $S\Gamma(e)$ can be constructed by starting from $\Omega$ and using Stephen's procedure.
Since Stephen's procedure is automorphism-invariant, it follows that every automorphism of $\Omega$ will extend (uniquely, since an automorphism of an inverse automaton is uniquely determined by what it does to any single vertex) to an automorphism of $S\Gamma(e)$.
\subsection*{$E$-unitary special inverse monoids}
The following result, part of which is due directly to Stephen \cite{Stephen93} and part of which we deduce from his work, shows that the Sch\"utzenberger graphs of an $E$-unitary special inverse monoid are isomorphic to certain induced subgraphs of the Cayley graph of the maximal group image.
\begin{lemma}\label{lem_fullSubgraph} Let $M = \mathrm{Inv}\langle A \mid R \rangle$ be an $E$-unitary special inverse monoid and let $w \in (A^{\pm 1})^*$, and let $\Gamma$ be the Cayley graph of the maximal group image
$G = \mathrm{Gp}\langle A \mid R \rangle$ with respect to $A$. Then the Sch\"utzenberger graph $S\Gamma(w)$ is embedded into the Cayley graph $\Gamma$ by $\overline{\sigma}$ as an induced subgraph. Moreover, the embedded copy of $S\Gamma(w)$ is the smallest subgraph of $\Gamma$ such that the word $w$ can be read from $1$ and every relator word $r \in R$ can be read at every vertex of the subgraph. \end{lemma} \begin{proof} The fact that the graph embeds is \cite[Lemma~3.5]{Stephen93}. For the final part, let $\Delta$ be the smallest subgraph of $\Gamma$ in which the word $w$ can be read from $1$ and every relator word $r \in R$ can be read at every vertex of $\Delta$. It follows from Stephen's procedure (Theorem~\ref{thm_StephenThm}) that, as a subgraph of $\Gamma$, the graph $S\Gamma(w)$ contains all the vertices and edges of $\Delta$.
Conversely, it follows from the argument in \cite{Stephen93} preceding the statement of \cite[Lemma~3.1]{Stephen93} that for every edge $x$ in the Sch\"utzenberger graph $S\Gamma(w)$ there is a path in $S\Gamma(w)$ from the vertex $1$ labelled by a word of the form $v p_1 p_2 \ldots p_k$ where $v$ is a prefix of $w$ and each $p_i$ is a prefix of some defining relator $r \in R$, such that this path traverses the edge $x$ in some direction. Since $w$ can be read from $1$ in $\Delta$ and every relator can be read at every vertex of $\Delta$, every such path can also be read inside $\Delta$, and so the edge $x$ lies in $\Delta$. It follows that $S\Gamma(w)$ is contained in $\Delta$ and hence $S\Gamma(w) = \Delta$. \end{proof}
\subsection*{Morphisms between labelled digraphs and Sch\"utzenberger graphs} In several arguments we will construct a labelled digraph and then assert the existence of a morphism to this graph from a Sch\"utzenberger graph. The key lemma we need for this is the following.
\begin{lemma}\label{lem_graphmorphisms} Let $M = \mathrm{Inv}\langle A \mid R \rangle$ be a special inverse monoid, and let $\Omega$ be a deterministic $A$-labelled graph in which for every vertex $v$ in $\Omega$ and every $r \in R$ there is a closed path in $\Omega$ at $v$ labelled by $r$. Then for every vertex $w$ of $\Omega$ there is a morphism from $S\Gamma(1)$ to $\Omega$ that sends the root of $S\Gamma(1)$ to $w$.
\end{lemma} \begin{proof} We need to define a map and then show that it is a well-defined morphism.
Let $T$ be the infinite graph constructed iteratively from a single vertex by adding a loop labelled by $r$ at every vertex for every $r \in R$, but not performing any edge folding. We view each of these loops as oriented in such a way that the word $r$ is the label of the path given by reading the loop clockwise. By a \emph{proper subpath of a loop of $T$} we mean a path $\pi$ with initial vertex being the vertex at which the loop was attached in the construction of $T$, and $\pi$ is a simple path which traverses the loop clockwise but does not visit every vertex of the loop, i.e., the end vertex of $\pi$ is not equal to the start vertex of $\pi$. Note that if $r \in R$ is the label of a loop in $T$ then any proper subpath of this loop is labelled by a proper prefix of the word $r$. From the construction it follows that for every vertex $u$ of $T$ there is a unique sequence $(\pi_1, \pi_2, \ldots, \pi_k)$ where each $\pi_i$ is a proper subpath of a loop and $\pi_1 \pi_2 \ldots \pi_k$ is a path from the root of $T$ to $u$. We define a map from the vertex set of $T$ to vertices in $\Omega$ where the vertex $u$ with corresponding sequence $(\pi_1, \pi_2, \ldots, \pi_k)$ of proper subpaths of loops maps to the vertex in $\Omega$ obtained by following the path labelled by $p_1 \ldots p_k$ starting at the vertex $w$ of $\Omega$, where $p_i$ is the label of the path $\pi_i$ for $1 \leq i \leq k$. This gives a well-defined (by uniqueness of the sequences of proper subpaths of loops) map from the vertices of $T$ to the vertices of $\Omega$. As a consequence of the assumptions that $\Omega$ is deterministic and that every relator from $R$ can be read from every vertex of $\Omega$, this map extends uniquely to a morphism of graphs which maps edges of $T$ to the edges of $\Omega$. Let us use $\phi$ to denote this graph morphism from $T$ to $\Omega$.
It follows from Stephen's procedure that $S\Gamma(1)$ is obtained by determinising $T$. We claim that $\phi$ induces a well-defined graph morphism from $S\Gamma(1)$ to $\Omega$. To see this note that two vertices $v$ and $u$ of $T$ are identified in $S\Gamma(1)$ if and only if there is a path in $T$ between these vertices labelled by a word that freely reduces to the empty word in the free group. Since $\phi$ is a morphism it follows that there is a path in $\Omega$ between $\phi(v)$ and $\phi(u)$ labelled by the same word that freely reduces to the empty word in the free group. Since the graph $\Omega$ is deterministic it follows that $\phi(v) = \phi(u)$. Hence $\phi$ induces a well-defined map from the vertices of $S\Gamma(1)$ to the vertices of $\Omega$. Two edges $e$ and $f$ of $T$ are identified in $S\Gamma(1)$ if and only if they have the same label, say $a \in A$, and their start vertices $v$ and $u$ are identified in $S\Gamma(1)$. But we have already seen that this means that $\phi(v) = \phi(u)$ which, since $\Omega$ is deterministic means that both $e$ and $f$ must be mapped to the unique edge in $\Omega$ with start vertex $\phi(v) = \phi(u)$ and labelled by $a$. This shows that $\phi$ induces a well defined map from the edges of $S\Gamma(1)$ to the edges of $\Omega$.
It remains to verify that $\phi$ induces a morphism of graphs from $S\Gamma(1)$ to $\Omega$. Let $e$ be an edge in $S\Gamma(1)$. Choose an edge $f$ in $T$ such that $f$ is equal to $e$ when $T$ is determinised, that is, $f$ is a member of the equivalence class of edges that represents $e$. Since $\phi$ is a morphism from $T$ to $\Omega$ it follows that the start vertex of $f$ in $T$ maps to the start vertex of $\phi(f)$ in $\Omega$, and the end vertex of $f$ in $T$ maps to the end vertex of $\phi(f)$ in $\Omega$. But by definition $\phi(e) = \phi(f)$ and $\phi$ maps the start vertex of $e$ to the same place as the start vertex of $f$, and similarly for the end vertices. It follows that $\phi$ induces a morphism of graphs from $S\Gamma(1)$ to $\Omega$. \end{proof}
Note that the previous lemma is not true if one drops the condition that $\Omega$ is deterministic. The following two results will be useful.
\begin{lemma}\cite[Corollary~3.2]{Gray2020}\label{lem_reducerightinvert} Let $M = \mathrm{Inv}\langle A \mid R \rangle$. If $xaa^{-1}y$ is right invertible where $a \in A \cup A^{-1}$ and $x, y \in (A \cup A^{-1})^*$ then $xaa^{-1}y = xy$ in $M$.
\end{lemma}
\begin{lemma}\label{lemma_eunitarytransfer} Suppose $M = \mathrm{Inv}\langle A \mid R \rangle$ is an $E$-unitary inverse monoid, and $T$ is a set of relations which hold in the maximal group image $\mathrm{Gp}\langle A \mid R \rangle$. Then the inverse monoid $N = \mathrm{Inv}\langle A \mid R \cup T \rangle$ is $E$-unitary. \end{lemma} \begin{proof} Suppose $s \in N$ is an element which maps to $1$ in the maximal group image. Choose a word $w$ over $A^{\pm 1}$ representing $s$. Since the relations in $T$ already hold in the maximal group image of $M$, the maximal group images of $M$ and $N$ coincide, so $w$ also represents $1$ in the maximal group image of $M$. Since $M$ is $E$-unitary this means $w$ represents an idempotent in $M$, and hence also in $N$, which is a quotient of $M$. \end{proof}
\section{Exact characterisation of groups of units}\label{sec_GroupsOfUnits}
It is known that the group of units of a finitely presented special inverse monoid is always finitely generated: this is implicit in the work of Ivanov, Margolis and Meakin \cite[Proposition 4.2]{Ivanov:2001kl}, and an explicit proof can be found as \cite[Theorem 1.3]{GrayRuskucUnits}. The first author and Ru\v{s}kuc \cite{GrayRuskucUnits} have recently shown that for every finitely generated subgroup $H$ of a finitely presented group $G$, there is a finitely presented special inverse monoid with group of units the free product $G * H$. Thus, every finitely generated subgroup of a finitely presented group (which by Higman's embedding theorem \cite{Higman61} means every finitely generated recursively presented group) is a free factor of the group of units of a finitely presented special inverse monoid. They ask \cite[Question 8.6]{GrayRuskucUnits} the natural question of whether the ``free factor'' can be eliminated, in other words, whether every finitely generated recursively presented group arises as the group of units of some finitely presented special inverse monoid. In this section we answer this question in the affirmative, giving an exact characterisation of the possible groups of units of finitely presented special inverse monoids.
\begin{theorem}\label{thm_possibleunits} The groups of units of finitely presented special inverse monoids are exactly the finitely generated, recursively presented groups (or equivalently, the finitely generated subgroups of finitely presented groups). \end{theorem} \begin{proof} The fact that groups of units in finitely presented special inverse monoids are finitely generated is \cite[Theorem 1.3]{GrayRuskucUnits} (and also implicit in \cite[Proposition 4.2]{Ivanov:2001kl}). The fact that they are recursively presented can easily be shown by using the special inverse monoid presentation to recursively enumerate all relations $w=1$ which hold in the special inverse monoid, and then using those where $w$ factorises as a product of chosen representatives of a generating set for the group of units to enumerate relations which hold in the group. (Alternatively, it follows from Corollary~\ref{cor_possiblemaxsubgroups} below, which is proved independently of Theorem~\ref{thm_possibleunits}.)
For the converse, let $H$ be a finitely generated, recursively presented group. By the Higman embedding theorem \cite{Higman61} we may assume that $H$ is a (finitely generated, and hence recursively enumerable) subgroup of a finitely presented group $G$. Choose a finite special monoid presentation for $G$, say $G = \mathrm{Mon}\langle A \mid r_1, \dots, r_k \rangle$, supposing without loss of generality that $H$ is generated as a monoid by some (possibly empty if $H$ is the trivial group) subset $B$ of $A$, and that for each $a \in A$ there is a unique formal inverse $\overline{a} \in A$ (with $\overline{\overline{a}} = a$) and $a \overline{a} = 1$ is among the defining relators for $G$. Let $p_0$, \dots, $p_k$, $z$ and $d$ be symbols not in $A$, and consider the special inverse monoid \begin{align*} M = \mathrm{Inv}\langle \ A, \ p_0, \dots, p_k, \ z, \ d \ \mid \ &p_i a p_i' p_i a' p_i' = 1 \ \ \ &(a \in A, i = 0, \dots, k) \\ &p_i r_i d' p_i' = 1 &(i = 1, \dots, k) \\ &p_0 d p_0' = 1 \\ &zbz'zb'z' = 1 &(b \in B) \\ &z \left( \prod_{i=0}^k p_i' p_i \right) z' = 1 \ \rangle. \end{align*} noting that the order of the product in the final relator is unimportant because the factors are fundamental idempotents.
We note that the presentation does \textbf{not} automatically identify the formal inverse of a generator in $A$ with its inverse in the inverse monoid, so $a$, $\overline{a}$, $a'$ and $\overline{a}'$ may be four distinct elements of $M$. However, recalling that the defining relators for $G$ include relators of the form $a \overline{a}$ and $\overline{a} a$, notice that at any vertex in a Sch\"utzenberger graph of $M$ with edges labelled by all the $p_i$s coming in, and for any generator $a \in A$, Stephen's procedure will as a consequence of the second and third families of relators in the presentation for $M$ attach closed paths labelled $a\overline{a}d'$, $\overline{a}ad'$ and $d$ which determinise to give closed paths labelled $a\overline{a}$ and $\overline{a}a$. Thus, for every vertex $u$ with edges labelled by all the $p_i$s coming in, and for any generator $a \in A$, there is an edge leaving $u$ labelled by $a$ to a vertex $v$ with a parallel reverse edge from $v$ back to $u$ labelled by $\overline{a}$.
Note also that the group defined by this presentation, which is the maximal group image of $M$, is easily seen to be a free product of $G$ (generated by $A$) with a free group freely generated by $\lbrace p_0, \dots, p_k, z \rbrace$, with $d$ mapping to $1$.
Let $\Gamma$ be the Cayley graph of $G$ with respect to $A$. Note that from the assumptions on the presentation chosen for $G$ every edge in $\Gamma$ labelled by $a$ has a unique reverse edge labelled by $\overline{a}$.
We call a vertex of $S\Gamma(1)$ a \textit{$z$-vertex} if it has a $z$-edge coming in, and a \textit{$p$-vertex} if it has edges coming in labelled by $p_i$ for all $i = 0, \dots, k$. Notice that it follows from the final relation in the presentation that every $z$-vertex is a $p$-vertex.
We claim that every $p$-vertex of $S\Gamma(1)$ has an embedded copy of $\Gamma$ with its root at this vertex. Indeed, let $v$ be a $p$-vertex. It follows by the first type of relation in the presentation that $v$ has $a$-edges going out for all $a \in A$, and that these edges all lead to $p$-vertices. Moreover, by the above argument, $v$ has $a$-edges coming in for each $a \in A$, and each comes from the vertex to which the corresponding $\overline{a}$-edge going out leads. A simple inductive argument shows that all words over $A^{\pm 1}$ can be read from $v$ staying always at $p$-vertices. Now for each $p$-vertex $v$ and each defining relator $r_i$ of $G$, there must be closed paths at $v$ labelled $d$ and $r_id'$, which since $S\Gamma(1)$ is a deterministic graph means there must be a closed path at every $p$-vertex labelled by $r_i$.
It then follows from Lemma~\ref{lem_graphmorphisms} (applied with $G$ playing the role of the monoid, $\Gamma$ as its Sch\"utzenberger graph, and a suitable subgraph of $S\Gamma(1)$ as the $\Omega$ in the statement of the lemma) that for each $p$-vertex $v$, there is a morphism from $\Gamma$ to $S\Gamma(1)$ taking $1$ to $v$. Indeed, since the monoid presentation defining $G$ contains the relations $a \overline{a}=1$ for all $a \in A$, it follows that $G = \mathrm{Mon}\langle A \mid r_1, \dots, r_k \rangle =
\mathrm{Inv}\langle A \mid r_1, \dots, r_k \rangle = T$. Hence, working with this inverse monoid presentation for $G$, the Cayley graph $\Gamma$ of $G$ is isomorphic to $S\Gamma_T(1)$, the Sch\"utzenberger graph of $1$ of $G$ viewed as the inverse monoid $T$. Now from the observations in the previous paragraph it follows that every vertex of $S\Gamma_M(1)$ that can be reached from a fixed $p$-vertex by a word over $A \cup A^{-1}$ is also a $p$-vertex, and at every vertex in that set every relator labels a closed path. Also the graph on this collection of $p$-vertices is clearly deterministic since it is a subgraph of $S\Gamma_M(1)$, which is deterministic. Hence the conditions of Lemma~\ref{lem_graphmorphisms} are satisfied, which completes the proof of the claim that for each $p$-vertex $v$, there is a morphism from $\Gamma$ to $S\Gamma_M(1)$ taking $1$ to $v$. Moreover, by our observations above about the maximal group image being the free product of $G$ (generated by $A$) with a free group freely generated by $\lbrace p_0, \dots, p_k, z \rbrace$, with $d$ mapping to $1$, it follows that if we compose this morphism from $\Gamma$ to $S\Gamma(1)$ with the map from $S\Gamma(1)$ to the maximal group image we see that distinct vertices of $\Gamma$ will map to elements which differ in the maximal group image, and therefore must be different in $M$, so that the map from $\Gamma$ to $S\Gamma(1)$ is injective. Note also that it follows from this argument that in addition to the edges labelled by generators from $A$ the embedded copy of $\Gamma$ at a $p$-vertex $v$ also has a loop labelled by $d$ at every vertex. Moreover, by mapping to the maximal group image it is readily seen that these account for all the edges between vertices in this embedded copy of $\Gamma$, that is, the induced subgraph of $S\Gamma(1)$ on this set of vertices is isomorphic to $\Gamma$ with a loop labelled by $d$ added to every vertex.
In particular, the end of the $z$-edge starting at the vertex $1$ is a $p$-vertex, and therefore is the root of an embedded copy of $\Gamma$. Moreover, an easy inductive argument using the penultimate relation in the presentation shows that every vertex in this copy of $\Gamma$ corresponding to an element of the subgroup $H$ is a $z$-vertex. Since in the embedded copy of $\Gamma$ every edge labelled by $b \in B$ has a corresponding reverse edge labelled by $\overline{b} \in B$ it then follows that for every word $w$ over $B$ (including the empty word), $zwz'$ can be read both into and out of the start vertex of $S\Gamma(1)$, which means that these words represent units of the inverse monoid $M$. Furthermore, it is easy to see that for any two such words $zuz'$ and $zvz'$ we have that $(zuz')(zvz') = z(uv)z'$ in $M$ (by Lemma~\ref{lem_reducerightinvert} since $zuz'$ and $zvz'$ are both right invertible in $M$), and $zuz' = zvz'$ in $M$ if and only if $u = v$ in $H$. For the non-trivial direction of the latter claim, if $zuz' = zvz'$ in $M$ then $zuz' = zvz'$ in the maximal group image which implies $u = v$ in the maximal group image of $M$ and thus $u=v$ in $H$. It follows that the set of elements of $M$ represented by the set of words $\{ zwz' : w \in B^* \}$ forms a subgroup of the group of units of $M$ that is isomorphic to the group $H$. Call this subgroup $C$.
To complete the proof that the group of units of $M$ is isomorphic to $H$ it suffices to check that $M$ has no units other than those in $C$. By \cite[Theorem 1.3]{GrayRuskucUnits} and \cite[Proposition 4.2]{Ivanov:2001kl} the relators in a special inverse monoid can be factorised into subwords representing units, in such a way that the factors generate the entire group of units. So it suffices to check that if a relator can be written $uvw$ where $u,v$ and $w$ represent units then $v$ is in $C$. First notice that if we add relations to the presentation identifying $d$ and the generators in $A$ with $1$, we obtain a natural morphism from $M$ onto the monoid: \begin{align*} N = \mathrm{Inv}\langle \ p_0, \dots, p_k, \ z \ \mid \ &p_i p_i' = 1 &(i = 0, \dots, k) \\ &zz' = 1 \\ &z \left( \prod_{i=0}^k p_i' p_i \right) z' = 1 \ \rangle \end{align*} Notice that $N$ itself has trivial group of units: indeed, it has a homomorphism onto the bicyclic monoid $\mathrm{Inv}\langle b \mid bb' = 1 \rangle$ taking $z$ to $b^2$ and all the $p_i$s to $b$. Any unit in $N$ must map to $1$ under this map. If there are non-trivial units there must be a non-empty proper prefix of a relator which maps to $1$ under this map, but it is easy to see that all non-empty proper prefixes of relators map to $b$ or $b^2$. Note that this argument in particular shows that none of the generators $\{p_0, p_1, \ldots, p_k, z\}$ is invertible in $N$.
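To illustrate the computation, the final relator of $N$ maps under this homomorphism to $b^2 (b'b)^{k+1} (b')^2$, which equals $1$ in the bicyclic monoid since $bb'=1$ gives $(b'b)b' = b'$, while its non-empty proper prefixes map alternately to $b^2$ and $b$; the non-empty proper prefixes of the remaining relators $p_ip_i'$ and $zz'$ map to $b$ and $b^2$ respectively.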
Clearly units in $M$ must map to units in $N$; so units in $M$ must map to $1$ in $N$. Thus, it suffices to consider factorisations $uvw$ of relators (with $u$ or $w$ possibly empty) such that $u$, $v$ and $w$ map to $1$ in $N$, and show that in such cases $v$ either is not a unit or is necessarily in $C$. The only such factorisations of relators in the presentation of $M$ are (i) the factorisations $(p_i a p_i') (p_i a' p_i')$ of the first type of relation, and (ii) the factorisations $(zbz')(zb'z')$ of the penultimate type. That these are the only such factorisations can be proved using the fact that none of the generators $\{p_0, p_1, \ldots, p_k, z\}$ is invertible in $N$, and hence certainly none of them or their inverses are equal to $1$ in $N$. In case (ii) we can see that the factors are already in $C$, so to complete the proof of the theorem it suffices to show that $p_i a p_i'$ is not a unit for each $a \in A$ and $i \in \{0, 1, \dots, k\}$.
To this end, we define an infinite graph $\Omega$ recursively as follows. The graph $\Omega$ has a root vertex at which for each $x \in \lbrace z, p_0, \dots, p_k \rbrace$ there is an $x$-edge going out of the vertex with a copy of $\Omega_x$ at the far end, where $\Omega_x$ is defined as follows. For each $i \in \lbrace 0, 1, \dots, k \rbrace$, a \textit{$p_i$-zone} $\Omega_{p_i}$ consists of a copy of the free monoid Cayley graph on $A$ (rooted at $1$) with \begin{itemize} \item at each vertex and for each $x \in \lbrace z, p_0, \dots, p_k \rbrace$, an $x$-edge going out with a copy of $\Omega_x$ at the far end; \item at each vertex (except the root) an edge coming in labelled $p_i$, with a copy of $\Omega_{p_i'}$ at the far end; \item at each vertex $v$ and for each relator $r_i$ an edge labelled $d$ from $v$ to the end of the path starting at $v$ and labelled $r_i$; and \item at each vertex $v$ a loop labelled $d$. \end{itemize} A \textit{$z$-zone} $\Omega_{z}$ consists of a copy of the Cayley graph $\Gamma$ of the group $G$ with respect to the generating set $A$ with \begin{itemize} \item at each vertex and for each $x \in \lbrace z, p_0, \dots, p_k \rbrace$ an $x$-edge going out with a copy of $\Omega_x$ at the far end; \item at each vertex and for each $x \in \lbrace p_0, \dots, p_k \rbrace$ an $x$-edge coming in with a copy of $\Omega_{x'}$ at the far end; \item at each vertex of the copy of $\Gamma$ corresponding to an element of $H$, a $z$-edge coming in with a copy of $\Omega_{z'}$ at the far end; and \item at each vertex a loop labelled $d$. \end{itemize}
For $x \in \lbrace z, p_0, \dots, p_k \rbrace$, an \textit{$x'$-zone} $\Omega_{x'}$ consists of a root vertex with \begin{itemize} \item for each $y \in \lbrace z, p_0, \dots, p_k \rbrace$ with $y \neq x$ a $y$-edge going out with a copy of $\Omega_y$ at the far end. \end{itemize} See Figure~\ref{fig_pictorial2} for an illustration of the graph $\Omega$.
\begin{figure}
\caption{ An illustration of part of the graph $\Omega$ constructed in the proof of Theorem~\ref{thm_possibleunits}. The black vertex is the root, the square in the centre represents a $z$-zone $\Omega_z$ with the copy of $\Gamma$ partitioned into vertices $\Gamma_H$ representing elements of $H$ and its complement, while the triangle on the left represents a $p_i$-zone $\Omega_{p_i}$ where the $d$-labelled edges in the $p_i$-zone have been omitted from the diagram. }
\label{fig_pictorial2}
\end{figure}
It is immediate from the definition of the zones that $\Omega$ is deterministic, since we were careful to ensure that there are never two edges with the same label leaving the same vertex, and that where we attach a copy of $\Omega_x$ at the end of a new edge, it is of a type whose root has no existing $x$-edge coming in.
Next we claim that every relator can be read around a closed path at every vertex. First notice that every vertex is the start of a $z$-edge leading into a $z$-zone, and a $p_i$ edge leading into either a $p_i$-zone or a $z$-zone (not necessarily at the root). With this observation it is easy to check that the defining relators which are fundamental idempotents can all be read (necessarily along a closed path, since the graph is deterministic) at every vertex.
For the relators of the form $p_i r_i d' p_i'$, notice that reading the initial $p_i$ from any vertex will take us into either a $p_i$-zone or a $z$-zone. If it is a $p_i$-zone then $r_i$ can be read in the free monoid Cayley graph, and we have inserted a $d$-edge from there back to the point at which we came in. If it is a $z$-zone then since $r_i$ represents $1$ in $G$, it can be read around a closed path, and $d$ can then be read around a loop. In both cases, the original $p_i'$ can then be read back along the original edge to the start point. Similarly, for the relators of the form $p_0 d p_0'$, reading $p_0$ from any vertex takes us to a $p_0$-zone or a $z$-zone, whereupon we can read $d$ around a loop and then $p_0'$ back to the start.
Since $\Omega$ is deterministic and every relator can be read around a closed path at every vertex of $\Omega$, it follows from Lemma~\ref{lem_graphmorphisms} that there is a morphism from the Sch\"utzenberger graph $S \Gamma(1)$ to $\Omega$ sending the root of $S\Gamma(1)$ to the root of $\Omega$. Notice that for each $i$, the vertex to which $p_i$ maps under this morphism is the root of a $p_i$-zone, and therefore has no $a$-edge coming in for any $a \in A$. It follows that $a p_i'$ cannot be read into the root of $\Omega$, and hence $a p_i'$ cannot be read into the root of $S\Gamma(1)$. Therefore, the word $a p_i'$ is not left invertible, which means that $p_i a p_i'$ cannot be a unit. This completes the proof. \end{proof}
We remark that in fact the graph $\Omega$ constructed in the proof of Theorem~\ref{thm_possibleunits} is isomorphic (via the given morphism) to $S\Gamma(1)$. To show this requires more work and is not needed for the above proof of the theorem, but an alternative proof of the theorem could be obtained by establishing this fact, observing that the automorphism group of $\Omega$ is easily seen to be $H$, and using the fact that the group of units in an inverse monoid is always the automorphism group of $S \Gamma(1)$. In fact this viewpoint provided the intuition from which we arrived at the presentation used in the proof.
We also remark on the high proportion of the proof which is devoted to showing that we do not have extra ``undesired'' group elements. Proving that unwanted things do \textit{not} live in particular subgroups seems to be by some distance the hardest problem when working in this area, in the same way that proving certain words are \textit{not} equal is often the hardest part of working with a group or monoid presentation.
It remains an open question which groups arise as groups of units of \textit{one-relator} special inverse monoids. Theorem~\ref{thm_possibleunits} does not really provide any help with this particular question, since even after eliminating fundamental idempotent relators using Lemma~\ref{lemma_removeidempotents}, the number of relations in the presentation constructed will be one more than the number of relations in a \textit{special monoid} presentation for the underlying group, which may already be more than the number of relations in a group presentation. We shall see below (Corollary~\ref{cor_onerelator}) that every finitely generated subgroup of a one-relator group arises as a group $\GreenH$-class of a one-relator special inverse monoid, but we do not know if all such groups arise as groups of units of one-relator special inverse monoids. We also do not know whether groups of units in one-relator special inverse monoids are necessarily finitely presented. It was shown in \cite{GrayRuskucUnits} that a positive answer to the latter question would imply that all one-relator groups are coherent, resolving a long-standing open problem first posed by G.~Baumslag \cite[page 76]{Baumslag1974}. We refer to the recent survey article of Wise \cite{WiseCoherence2020} for more background on this and other aspects of the theory of coherent groups.
\section{Group $\GreenH$-classes}\label{sec_GroupHClasses}
In the previous section we exactly characterised the possible groups of units of finitely presented special inverse monoids; these are (as was already known) all finitely generated, and it is natural to ask whether the same is true for the other group $\GreenH$-classes in such a monoid. Indeed, in the case of special (non-inverse) monoids it is even known that the other group $\GreenH$-classes are all isomorphic to the group of units, so one might even ask if this is the case for special inverse monoids. In this section we shall show that the answer to both questions is negative: it transpires that every recursively presented group (not just the finitely generated ones) can arise as a group $\GreenH$-class in a finitely presented special inverse monoid! The same construction will also allow us to show that a finitely presented special inverse monoid can have infinitely many pairwise non-isomorphic group $\GreenH$-classes.
Our construction will need some preliminary results, starting with the following straightforward group-theoretic fact: \begin{lemma}\label{lemma_stabiliser} Let $K$ be a subgroup of a group $G$, and $t \in G$. Then the setwise stabiliser of $K \cup tK$ under the left translation action of $G$, $$\lbrace g \in G \ \mid \ g(K \cup tK) = K \cup tK \rbrace,$$ is either equal to, or an index $2$ overgroup of, $K \cap tKt^{-1}$, with equality provided $K \cap tKt = \emptyset$. \end{lemma} \begin{proof} Let $$H = \lbrace g \in G \mid gK = K \textrm{ and } gtK = tK \rbrace$$ be the intersection of the setwise stabilisers of $K$ and $tK$. We claim that $H = K \cap tKt^{-1}$. Indeed, if $g \in K \cap tKt^{-1}$ then for every $h \in K$ we have $gh \in K$, while writing $g = tkt^{-1}$ with $k \in K$ we have $gth = tkt^{-1}th = t(kh) \in tK$, so that $g \in H$. Conversely, if $g \in H$ then $gK = K$ and $gtK = tK$; the first equation gives $g \in K$ and the second gives $t^{-1}gtK = K$ so that $t^{-1}gt \in K$ and $g \in K \cap tKt^{-1}$.
Now if $g(K \cup tK) = K \cup tK$ then because left cosets form a partition either (i) $gK = K$ and $gtK = tK$ or (ii) $gK = tK$ and $gtK = K$. It follows that there is a morphism from the stabiliser of $K \cup tK$ to the cyclic group $\mathbb{Z}_2$, taking $g$ to $0$ in case (i) and $1$ in case (ii). By definition the kernel of this morphism is $H = K \cap tKt^{-1}$, so the latter has index $1$ or $2$ in the stabiliser, depending on whether the morphism is trivial or surjective.
Moreover, if $K \cap tKt = \emptyset$ then case (ii) cannot arise. Indeed, if $gK = tK$ and $gtK = K$ then the first equation gives $g = th$ for some $h \in K$, whereupon the second gives $tht \in K$; but now $tht$ lies in $K \cap tKt$, contradicting our hypothesis that this intersection is empty. Thus, in this case the morphism is trivial and we have equality. \end{proof}
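For instance, taking $G = \mathbb{Z}$ (written additively), $K = 2\mathbb{Z}$ and $t = 1$, the set $K \cup tK$ is all of $\mathbb{Z}$ and its setwise stabiliser is $\mathbb{Z}$ itself, which contains $K \cap tKt^{-1} = 2\mathbb{Z}$ with index $2$; here the final hypothesis fails, since $K \cap tKt = 2\mathbb{Z} \cap (2 + 2\mathbb{Z}) = 2\mathbb{Z} \neq \emptyset$.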
\begin{lemma}\label{lemma_usefulgroup} Let $K$ be a finitely presented group, and $H$ a recursively enumerable subgroup of $K$. Then there exists a finitely presented group containing two conjugate subgroups $K_1$ and $K_2$ such that $K_1 \cong K_2 \cong K$ and $K_1 \cap K_2 \cong H$. Moreover, this group can be chosen in such a way that $t K_1 t^{-1} = K_2$ for some $t$ such that $K_1 \cap t K_1 t = \emptyset$. \end{lemma} We note that a subgroup of a finitely generated group is recursively enumerable if and only if it is generated by a recursively enumerable set. \begin{proof} Let $G_1$ be an HNN extension of $K$, with stable letter $t$ conjugating each element of $H$ to itself. Since $K$ is finitely presented and $H$ is recursively enumerable, $G_1$ is finitely generated and recursively presented by a presentation consisting of the presentation for $K$ with an extra generator $t$ and extra relations $twt^{-1} = w$ for each word $w$ representing an element of $H$. So by the Higman embedding theorem, $G_1$ can be embedded in a finitely presented group $G_2$. But now $G_2$ contains two conjugate subgroups $K_1 = K$ and $K_2 = tKt^{-1}$, which intersect exactly in a copy of $H$, as required. Finally, it follows easily from Britton's Lemma \cite[page 181]{Lyndon:2001lh} that $K_1 \cap t K_1 t$ is empty in $G_1$ and hence also in $G_2$. \end{proof}
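To illustrate the construction, if $K = \langle a \rangle \cong \mathbb{Z}$ and $H = \langle a^2 \rangle$, then the relations $twt^{-1} = w$ for $w \in H$ all follow from the single relation $ta^2t^{-1} = a^2$, so $G_1 = \mathrm{Gp}\langle a, t \mid ta^2t^{-1} = a^2 \rangle$ is already finitely presented and one may take $G_2 = G_1$; here $K_1 \cap tK_1t^{-1} = \langle a^2 \rangle \cong H$, and Britton's Lemma gives $K_1 \cap tK_1t = \emptyset$.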
\begin{theorem}\label{thm_stabiliser} Let $G$ be a group, and $H$ a subgroup of $G$. Then there exists an $E$-unitary special inverse monoid $M$ such that for every \textbf{finite} subset $X \subseteq G$, the setwise stabiliser in $G$ of the union of the left $H$-cosets determined by $X$, $$K = \lbrace g \in G \mid gXH= XH \rbrace,$$ is isomorphic to a group $\GreenH$-class in $M$. If $G$ is finitely presented and $H$ is finitely generated then $M$ can be chosen to be finitely presented with the same number of defining relations as $G$ (unless $G$ is free, in which case $M$ will have $1$ defining relation). If $H$ is trivial then $M$ can be chosen to be the free product of $G$ with a free inverse monoid. \end{theorem} \begin{proof}
Let $\mathrm{Gp}\langle A \mid R \rangle$ be a group
presentation for $G$, choosing $A$ to be finite if $G$ is finitely generated and $R$ to be finite if $G$ is finitely presented. If $G$ is free choose $A$ to be a free
generating set and $R$ to be empty. Let $B$ be a set of words over $A^{\pm 1}$ representing a monoid generating set for $H$, choosing $B$ to be empty if $H$ is trivial and finite if $H$ is finitely generated. Let $\Gamma$ be the Cayley graph of $G$ with respect to $A$.
Let $y$ and $z$ be new symbols not in $A$, and define \begin{align*} M \ = \ \mathrm{Inv}\langle A, y, z \ \mid \ & R,\ aa' = a'a=1 \ (a \in A), \\ &zbz'zb'z'=yby'yb'y'=1 \ (b \in B) \rangle. \end{align*} Notice that the presentation is finite provided $G$ is finitely presented and $H$ is finitely generated,
and clearly presents the free product $G * \mathrm{Inv} \langle y, z \rangle$ if $H$ is trivial (since we chose $B$ to be empty in that case). In the case that the presentation is finite, since all the relators except those in $R$ are fundamental idempotents, repeated application of Lemma~\ref{lemma_removeidempotents} will give a special inverse monoid presentation for $M$ with $|A|+2$ generators and $|R|$ relations (or $1$ relation if $R$ is empty, so that $G$ is free).
Because the relators in $R$ do not feature the letters $y$ and $z$, and the relators not in $R$ are all fundamental idempotents, it follows easily that this presentation, when interpreted as a group presentation, yields a free product $G * \mathrm{Gp}\langle y, z \rangle$ of $G$ with a free group of rank $2$, so this free product is the maximal group image of $M$. In particular, it follows that $A$ generates a copy of $G$ inside the group of units of $M$.
To see that $M$ is $E$-unitary first note that $$N \ = \ \mathrm{Inv}\langle A, y, z \ \mid R, \ aa'=a'a=1 \ (a \in A) \rangle \ = \ G * \mathrm{Inv} \langle y,z \rangle$$ is the inverse monoid free product of the group $G$ and a free inverse monoid, both of which are $E$-unitary, and hence $G * \mathrm{Inv} \langle y,z \rangle$ is $E$-unitary. The fact that the free product of two $E$-unitary inverse monoids is again $E$-unitary can be proved, for example, by applying a result of Stephen \cite[Theorem 6.5]{Stephen98} that gives sufficient conditions for the amalgamated free product of two $E$-unitary inverse semigroups to be $E$-unitary; see \cite[Proof of Theorem 3.8]{Gray2020} for details. Hence, $N$ is $E$-unitary, and since the relators of the form $zbz'zb'z'=yby'yb'y'=1$ hold in every group, it follows from Lemma~\ref{lemma_eunitarytransfer} that $M$ is $E$-unitary.
Now let $X$ be a finite set of words over the generators, representing a finite subset of $G$, and consider the element $$e = \prod_{x \in X} xz'zy'yx',$$ noting that the order of the product is unimportant, and the element $e$ is idempotent, because the factors are fundamental idempotents and idempotents commute in inverse semigroups. We claim that the $\GreenH$-class of $e$ is isomorphic to $K$.
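For instance, when $X$ consists of words representing $1$ and a single element $t$ (which is the case needed in Theorem~\ref{thm_maxsubgroups} below), the idempotent is $e = (z'zy'y)(tz'zy'yt')$.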
To this end, consider the Sch\"utzenberger graph $S \Gamma(e)$. First, we note that it contains a copy of the Cayley graph $\Gamma$, rooted at $e$. Indeed, any word over $A^{\pm 1}$ is right invertible in $M$ (because of the relations of the form $aa'=a'a=1$) and hence readable from $e$, and two words representing the same element in $G$ are equal in $M$ (because of the relators in $R$ and those of the form $aa'=a'a=1$), so that the corresponding paths from $e$ must end in the same place. Conversely, if two words $u$ and $v$ over $A^{\pm 1}$ label paths from $e$ ending in the same place then $eu = ev$ in $M$, and hence also in the maximal group image, which means that $u=v$ in the maximal group image $G \ast \mathrm{Gp}\langle y,z \rangle$ and hence $u=v$ in $G$.
We claim next that the vertices of $S \Gamma(e)$ corresponding to elements of $XH \subseteq G$ are exactly those which have both a $y$-edge and a $z$-edge coming in. Indeed, suppose $x \in X$ and $h \in H$ so that $xh \in XH$. Since the word defining $e$ can be read from the root of $S\Gamma(e)$, the word $xz'zy'yx'$ (which occurs as a factor of $e$ after a fundamental idempotent prefix) can be read from $e$, so the vertex corresponding to $x$ has both a $y$-edge (call it $f$) and a $z$-edge coming in. In particular in the case $h=1$ this shows that $xh=x$ does have both a $y$-edge and a $z$-edge coming in. If $h \neq 1$ then $h$ is a product of elements of our chosen generating set for $H$, so is represented by a word of the form $b_1 \dots b_k$ where $b_1, \dots, b_k \in B$. Now from the start of the $y$-edge $f$ we can read the (right invertible) word $(y b_1 y') (y b_2 y') \dots (y b_k y')$, and it is easy to see that the final letter of this word must be read along a $y$-edge coming into the vertex corresponding to $xh$. An identical argument shows that there is a $z$-edge coming into this vertex.
\begin{figure}
\caption{ An illustration of part of the graph $S\Gamma(e)$ within the Cayley graph $\Delta$ of the maximal group image $G \ast \mathrm{Gp}\langle z, y \rangle$. Each square in the figure is a copy of the Cayley graph $\Gamma$ of $G$, each partitioned into the vertices $\Gamma_{XH}$ corresponding to the elements of $XH$, depicted as shaded regions in the figure, and the complement of this set. The graph $S\Gamma(e)$ contains all the red edges and has vertex set equal to the union of all the copies of $\Gamma$ to which these red edges are incident. The graph $\Delta'$ is the non-deterministic quotient of $\Delta$ with vertices the copies of $\Gamma$ (i.e. the squares) and all $y$- and $z$-edges between them. The graph $S\Gamma(e)'$ is the corresponding quotient of $S\Gamma(e)$. }
\label{fig_pictorial}
\end{figure}
For the converse, since $M$ is $E$-unitary, by Lemma~\ref{lem_fullSubgraph} we may consider $S \Gamma(e)$ as a subgraph of the Cayley graph $\Delta$ of the maximal group image, which we have already observed is a free product of $G$ (generated by $A$) with a free group on $y$ and $z$. This Cayley graph $\Delta$ partitions into a ``tree'' of copies of the Cayley graph of $G$, in other words, a collection of copies of $\Gamma$ (which we shall call \textit{components}), joined by cut edges labelled $y$ and $z$. See Figure~\ref{fig_pictorial} for an illustration of the graph $\Delta$. Let $\Delta'$ be the quotient graph whose vertices are the components, and with an edge labelled $x$ between two components exactly if $\Delta$ has an $x$-labelled edge between two vertices in the respective components. (Note that $\Delta'$ is not deterministic; indeed it will typically have many edges with the same label leaving a given vertex.) Let $S\Gamma(e)'$ be the corresponding quotient of $S\Gamma(e)$.\footnote{ In general the precise structure of $S\Gamma(e)'$ will depend on the cardinalities of $G$ and $XH$ (which if infinite has the same cardinality as $H$ since $X$ is finite). If $\kappa$ is the cardinality of $G$ and $\mu$ is that of $XH$ then $\Delta'$ will be a tree where every vertex has $\kappa$ many in- and out-edges labelled by $y$ and also $\kappa$ many in- and out-edges labelled by $z$. The subgraph $S\Gamma(e)'$ will have a root vertex with $\kappa$ many out-edges labelled by each of $y$ and $z$, and $\mu$ many in-edges labelled by $y$ and $z$. Every vertex in $S\Gamma(e)'$ will have $\kappa$ many out-edges labelled by each of $y$ and $z$. Any vertex with an in-edge labelled by $y$ will have $\mu$ many in-edges labelled by $y$, and any vertex with an in-edge labelled by $z$ will have $\mu$ many in-edges labelled by $z$. Furthermore $S\Gamma(e)'$ is a tree since it is a subgraph of the tree $\Delta'$. In particular the only vertex in $S\Gamma(e)'$ with both a $y$-edge and a $z$-edge coming in is the root vertex. It follows that all vertices of $S\Gamma(e)$ with both a $y$-edge and a $z$-edge coming in are within the component of $e$. }
We claim that the only vertex of $S\Gamma(e)'$ with both a $y$-edge and a $z$-edge coming in is that corresponding to the component of $e$.
By Lemma~\ref{lem_fullSubgraph}, $S\Gamma(e)$ is equal to the smallest subgraph of the Cayley graph $\Delta$ such that $e$ can be read from $1$ and every relator from the presentation for $M$ can be read at every vertex of $S\Gamma(e)$. It follows from this that $S \Gamma(e)'$ can be formed from the image in $\Delta'$ of the path traced out by $e$ from $1$ by iteratively tracing paths labelled by $yy'$ and $zz'$ in $\Delta'$, in other words, either adding a $y$-edge out of an existing vertex to a new vertex, or adding a $y$-edge out of an existing vertex to a new vertex and another $y$-edge coming into the new vertex, or adding a $y$-edge into an existing vertex which already has a $y$-edge into it, or corresponding operations with $z$-edges. Therefore, the only way a vertex can acquire a $y$-edge coming in is if either (i) it is created as a new vertex at the end of a $y$-edge or (ii) it already had a $y$-edge coming in. Thus, any vertex with a $y$-edge coming in (except for the component of $e$) must have been created with a $y$-edge coming in. The dual statement applies to $z$-edges. But a vertex clearly cannot be created with both a $y$-edge and a $z$-edge coming in.
Finally, suppose a vertex $v$ of $S \Gamma(e)$ within the component of $e$ has a $y$-edge coming in. Then by Lemma~\ref{lem_fullSubgraph} (or \cite[Lemma~3.1]{Stephen93}) the start $vy'$ of that edge must be reachable from the vertices in the path traced out by $e$ in $\Delta$ (which means either a vertex in the component of $e$, or a vertex of the form $exy'$ or $exz'$ for some $x \in X$) by following a path labelled by a product of proper prefixes of the defining relators. Consider such a path where the number of relator-prefixes is minimal. Consider the projection of $M$ onto $\mathbb{Z}$ taking all generators in $A$ to $0$ and the generators $y$ and $z$ to $1$. The entire component of $e$ clearly maps to $0$, and $exy'$, $exz'$ (for all $x \in X$) and $vy'$ map to $-1$. All relator-prefixes map to non-negative values, so the only way this can happen is if the path starts at $exy'$ or $exz'$ for some $x \in X$, and the relator-prefixes are of the form $yby'$ or $zbz'$ for $b \in B$. If we try to mix relator-prefixes of the form $yby'$ with those of the form $zbz'$ then the path clearly leaves the component of $e$, and can only return by eventually following $zb'z$ back to the same point, which contradicts the minimality of the number of relator-prefixes in the product. This can be seen e.g. by considering the structure of $S\Gamma(e)$ as a subgraph of the Cayley graph $\Delta$ of the maximal group image $G \ast \mathrm{Gp}\langle y, z \rangle$, as illustrated in Figure~\ref{fig_pictorial}, and observing that this fact is clearly true within the Cayley graph $\Delta$ of this free product and hence must also be true within the subgraph $S\Gamma(e)$. Moreover, if we use only relator-prefixes of the form $zbz'$ we will clearly not end at $vy'$; so we must use relator-prefixes only of the form $yby'$. By a similar argument, this means our path must start at $exy'$. So in summary, we have a path from $exy'$ to $vy'$ labelled by a product of words from $yBy'$. But this means there is a path from $ex$ to $v$ labelled by a product of words from $B$, which means that $v$ corresponds to an element of $XH$ as required.
Now let $\Omega$ be the graph which is the Cayley graph $\Gamma$ with additional $y$- and $z$-edges coming into the vertices corresponding to elements of $XH$. Clearly, the automorphisms of $\Omega$ are exactly the automorphisms of $\Gamma$ which map elements of $XH$ to elements of $XH$, in other words, those which preserve the set $XH$ setwise. The automorphisms of $\Gamma$ all come from elements of $G$ acting by left translation, so this means the automorphism group of $\Omega$ is isomorphic to the setwise stabiliser of $XH$ under the left translation action of $G$, in other words, to $K$.
Since $\Omega$ is a connected subgraph of $S\Gamma(e)$ rooted at $e$ (in fact $\Omega$ is an induced subgraph of $S\Gamma(e)$), and since $e$ itself can be read in $\Omega$, it follows from Lemma~\ref{lem_StephenAutInvariant} that every automorphism of $\Omega$ will extend uniquely to an automorphism of $S\Gamma(e)$. On the other hand, any automorphism of $S\Gamma(e)$ must clearly preserve the set of vertices which have both a $y$-edge and a $z$-edge coming in, that is, the set of vertices corresponding to $XH$. It follows easily that it must preserve the embedded copy of $\Omega$, and hence is an extension of an automorphism of $\Omega$. Thus, $K$ is isomorphic to the automorphism group of $S \Gamma(e)$, which by \cite[Theorem 3.5]{Stephen:1990ss} is isomorphic to the group $\GreenH$-class $H_e$ of $e$.\end{proof}
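For the reader's convenience, the chain of isomorphisms established in the proof above (nothing new is claimed here, and $\operatorname{Aut}(-)$ simply denotes the automorphism group of the labelled graph in question) can be summarised as
\[
H_e \;\cong\; \operatorname{Aut}(S\Gamma(e)) \;\cong\; \operatorname{Aut}(\Omega) \;\cong\; K,
\]
where the first isomorphism is \cite[Theorem 3.5]{Stephen:1990ss}, the second comes from the unique extension and restriction of automorphisms described above, and the last identifies $\operatorname{Aut}(\Omega)$ with the setwise stabiliser of $XH$ under left translation, that is, with $K$.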
\begin{theorem}\label{thm_maxsubgroups} Let $H$ be a recursively enumerable subgroup of a finitely presented group. Then there is an $E$-unitary finitely presented special inverse monoid with a group $\GreenH$-class isomorphic to $H$. \end{theorem} \begin{proof} By Lemma~\ref{lemma_usefulgroup} there is a finitely presented group $G$ with a finitely generated subgroup $K$ and an element $t$ such that $K \cap tKt^{-1} \cong H$ and $K \cap tKt = \emptyset$. By Lemma~\ref{lemma_stabiliser} this means $H$ is isomorphic to the stabiliser of $K \cup tK$. Now by Theorem~\ref{thm_stabiliser} there is an $E$-unitary finitely presented special inverse monoid with the latter as a group $\GreenH$-class. \end{proof} \begin{corollary} There exist $E$-unitary finitely presented special inverse monoids with group $\GreenH$-classes which are not finitely generated. \end{corollary}
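Returning briefly to the proof of Theorem~\ref{thm_maxsubgroups}: since Lemma~\ref{lemma_stabiliser} is invoked there without being restated, we record a short direct verification (a reconstruction for the reader's convenience, not the lemma itself) that under the hypotheses above the setwise stabiliser of $K \cup tK$ under left translation is $K \cap tKt^{-1} \cong H$. Note first that $K \neq tK$, since $t \in K$ would force $1 \in K \cap tKt$. If $g(K \cup tK) = K \cup tK$ then $gK$ and $gtK$ are distinct left cosets of $K$ contained in $K \cup tK$, so either
\[
gK = K \ \text{ and } \ gtK = tK, \qquad\text{that is,}\qquad g \in K \cap tKt^{-1},
\]
or
\[
gK = tK \ \text{ and } \ gtK = K, \qquad\text{that is,}\qquad g \in tK \cap Kt^{-1}.
\]
The second case cannot occur: if $tk_1 = k_2t^{-1}$ with $k_1, k_2 \in K$ then $k_2 = tk_1t \in K \cap tKt$, contradicting $K \cap tKt = \emptyset$. Hence the stabiliser is $K \cap tKt^{-1} \cong H$.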
\begin{corollary}\label{cor_possiblemaxsubgroups} The possible group $\GreenH$-classes of finitely presented special inverse monoids (and of $E$-unitary finitely presented, recursively presented, and $E$-unitary recursively presented special inverse monoids) are exactly the (not necessarily finitely generated) recursively presented groups. \end{corollary} \begin{proof} Higman \cite[Corollary to Theorem 1]{Higman61} showed that every recursively presented group embeds ``effectively'' in a finitely presented group; although an exact definition of ``effectively'' is not given, it is clear from the argument that this implies that the image will be a recursively enumerable subgroup of the finitely presented group. Hence, by Theorem~\ref{thm_maxsubgroups}, every such group is a group $\GreenH$-class of some $E$-unitary finitely presented special inverse monoid.
Conversely, suppose $M = \mathrm{Inv}\langle A \mid R \rangle$ is a finitely (or recursively) presented special inverse monoid, and let $e \in M$ be an idempotent represented by some word $w$ over $A^{\pm 1}$. Since one can recursively enumerate relations which hold in $M$, one can enumerate all words in the $\GreenH$-class of $e$ (by listing a word $v$ as soon as one discovers that the relations $vv'=w$ and $v'v = w$ hold in $M$), and all relations which hold between such words. Thus, the $\GreenH$-class of $e$ is a recursively presented group. \end{proof}
The fact that the construction in Theorem~\ref{thm_stabiliser} preserves the number of relators of the underlying group (except when that group is free) means that it can, in particular, be applied to produce one-relator special inverse monoids.
\begin{corollary}\label{cor_onerelator} Every finitely generated subgroup of a one-relator group arises as a group $\GreenH$-class in a one-relator special inverse monoid. \end{corollary}
We do not know any examples of one-relator special inverse monoids ($E$-unitary or otherwise) containing group $\GreenH$-classes which are not embeddable in a one-relator group, or which are not finitely generated. \begin{question} Is every group $\GreenH$-class in a one-relator special inverse monoid necessarily embeddable in a one-relator group? \end{question} \begin{question}\label{question_onerelatorfp} Is every group $\GreenH$-class in a one-relator special inverse monoid necessarily finitely generated? \end{question} Recall that a group is said to have the \textit{Howson property} (or \textit{finitely generated intersection property}) if the intersection of two finitely generated subgroups is always finitely generated. Free groups have this property \cite[Proposition~3.13]{Lyndon:2001lh}, but in contrast there are one-relator groups (even hyperbolic ones) which do not \cite{KarrassSolitar1969,Kapovich1999}. However, we do not know of any example of a one-relator group containing two \textit{conjugate} finitely generated subgroups whose intersection is not finitely generated; nor are we aware of any proof that this cannot happen. If it can happen then this would imply a negative answer to Question~\ref{question_onerelatorfp}. Indeed, by applying Lemma~\ref{lemma_stabiliser} and Theorem~\ref{thm_stabiliser} we could obtain an $E$-unitary one-relator special inverse monoid with a group $\GreenH$-class that is a finite index overgroup of a group that is not finitely generated, and hence by \cite[Proposition~4.2]{Lyndon:2001lh} not itself finitely generated. \begin{question} Is the intersection of two conjugate finitely generated subgroups in a one-relator group necessarily finitely generated? \end{question}
Another application of Theorem~\ref{thm_stabiliser} is to construct examples of finitely presented special inverse monoids with infinitely many pairwise non-isomorphic group $\GreenH$-classes. This contrasts sharply with the case of special (non-inverse) monoids, where by a result of Malheiro \cite{Malheiro2005} all idempotents lie in the $\GreenD$-class of $1$, and therefore all group $\GreenH$-classes are necessarily isomorphic to each other. \begin{corollary}\label{cor_everyfinitegroup} There exists an $E$-unitary finitely presented special inverse monoid (in fact, a free product of a finitely presented group with a finite rank free inverse monoid) in which every finite group arises as a group $\GreenH$-class. \end{corollary} \begin{proof} Take $G$ to be any finitely presented group with every finite group as a subgroup (for example, Higman's universal group, into which every finitely presented group embeds \cite[Theorem~7.3]{Lyndon:2001lh}), and $H$ to be the trivial group. Then every finite group arises as a finite union of left $H$-cosets and is its own stabiliser under left translation, so Theorem~\ref{thm_stabiliser} gives a finitely presented $E$-unitary special inverse monoid (in fact, a free product of $G$ with a finite rank free inverse monoid) in which they all arise as group $\GreenH$-classes. \end{proof}
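To spell out the assertion in the proof above that each finite subgroup is a finite union of left $H$-cosets and is its own stabiliser (a routine check, recorded only for convenience): for a finite subgroup $F \leq G$ and $H = \{1\}$ we have $F = \bigcup_{f \in F} fH$, a finite union of left $H$-cosets, and its setwise stabiliser under left translation is
\[
\{\, g \in G : gF = F \,\} \;=\; F,
\]
since $gF = F$ gives $g = g \cdot 1 \in F$, while conversely $fF = F$ for every $f \in F$.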
We conclude by remarking on a natural and related, if slightly tangential, open question. Belyaev \cite{Belyaev} showed that an analogue of Higman's embedding theorem holds for inverse semigroups: every recursively presented inverse semigroup embeds in a finitely presented one. We do not know if a corresponding result holds for special inverse monoids. \begin{question} Does every recursively presented special inverse monoid embed as a subsemigroup (or even as a submonoid) in a finitely presented special inverse monoid? \end{question}
\end{document}